CVE ID (string, 13–43 chars, nullable) | CVE Page (string, 45–48 chars, nullable) | CWE ID (string, 90 classes) | codeLink (string, 46–139 chars) | commit_id (string, 6–81 chars) | commit_message (string, 3–13.3k chars, nullable) | func_after (string, 14–241k chars) | func_before (string, 14–241k chars) | lang (string, 3 classes) | project (string, 309 classes) | vul (int8: 0 or 1)
---|---|---|---|---|---|---|---|---|---|---|
CVE-2014-7842
|
https://www.cvedetails.com/cve/CVE-2014-7842/
|
CWE-362
|
https://github.com/torvalds/linux/commit/a2b9e6c1a35afcc0973acb72e591c714e78885ff
|
a2b9e6c1a35afcc0973acb72e591c714e78885ff
|
KVM: x86: Don't report guest userspace emulation error to userspace
Commit fc3a9157d314 ("KVM: X86: Don't report L2 emulation failures to
user-space") disabled the reporting of L2 (nested guest) emulation failures to
userspace due to a race condition between a vmexit and the instruction emulator.
The same rationale also applies to userspace applications that the guest OS
permits to access MMIO areas or perform PIO.
This patch extends the current behavior of injecting a #UD instead of
reporting the failure to userspace so that it also covers guest userspace code.
Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
gfn_t ngfn, void *data, int offset, int len,
u32 access)
{
struct x86_exception exception;
gfn_t real_gfn;
gpa_t ngpa;
ngpa = gfn_to_gpa(ngfn);
real_gfn = mmu->translate_gpa(vcpu, ngpa, access, &exception);
if (real_gfn == UNMAPPED_GVA)
return -EFAULT;
real_gfn = gpa_to_gfn(real_gfn);
return kvm_read_guest_page(vcpu->kvm, real_gfn, data, offset, len);
}
|
int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
gfn_t ngfn, void *data, int offset, int len,
u32 access)
{
struct x86_exception exception;
gfn_t real_gfn;
gpa_t ngpa;
ngpa = gfn_to_gpa(ngfn);
real_gfn = mmu->translate_gpa(vcpu, ngpa, access, &exception);
if (real_gfn == UNMAPPED_GVA)
return -EFAULT;
real_gfn = gpa_to_gfn(real_gfn);
return kvm_read_guest_page(vcpu->kvm, real_gfn, data, offset, len);
}
|
C
|
linux
| 0 |
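The CVE-2014-7842 record above describes a policy change rather than a single bounds fix: emulation failures should only be surfaced to userspace when they come from L1 kernel mode, and otherwise a #UD is injected into the guest. The fragment below is a sketch only, not the upstream patch; it assumes KVM helpers of that era (is_guest_mode(), kvm_x86_ops->get_cpl(), kvm_queue_exception()) and uses an invented function name.

```c
/* Sketch only (not the upstream patch): the decision the commit message
 * describes. Report the failure to userspace only when it happened in L1
 * kernel mode (CPL 0); nested guests and guest user-mode code instead get
 * an undefined-opcode exception injected. */
static int handle_emulation_failure_sketch(struct kvm_vcpu *vcpu)
{
	int r = EMULATE_DONE;

	if (!is_guest_mode(vcpu) && kvm_x86_ops->get_cpl(vcpu) == 0) {
		/* failure came from L1 kernel mode: surface it to userspace */
		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
		vcpu->run->internal.ndata = 0;
		r = EMULATE_FAIL;
	}

	/* L2 guests and guest userspace just see an undefined opcode */
	kvm_queue_exception(vcpu, UD_VECTOR);

	return r;
}
```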
CVE-2017-6903
|
https://www.cvedetails.com/cve/CVE-2017-6903/
|
CWE-269
|
https://github.com/iortcw/iortcw/commit/b6ff2bcb1e4e6976d61e316175c6d7c99860fe20
|
b6ff2bcb1e4e6976d61e316175c6d7c99860fe20
|
All: Don't load .pk3s as .dlls, and don't load user config files from .pk3s
|
int FS_Delete( char *filename ) {
#if 0 // Stub...not used in MP
char *ospath;
if ( !fs_searchpaths ) {
Com_Error( ERR_FATAL, "Filesystem call made without initialization\n" );
}
if ( !filename || filename[0] == 0 ) {
return 0;
}
if ( Q_strncmp( filename, "save/", 5 ) != 0 ) {
return 0;
}
ospath = FS_BuildOSPath( fs_homepath->string, fs_gamedir, filename );
if ( remove( ospath ) != -1 ) { // success
return 1;
}
#endif
return 0;
}
|
int FS_Delete( char *filename ) {
#if 0 // Stub...not used in MP
char *ospath;
if ( !fs_searchpaths ) {
Com_Error( ERR_FATAL, "Filesystem call made without initialization\n" );
}
if ( !filename || filename[0] == 0 ) {
return 0;
}
if ( Q_strncmp( filename, "save/", 5 ) != 0 ) {
return 0;
}
ospath = FS_BuildOSPath( fs_homepath->string, fs_gamedir, filename );
if ( remove( ospath ) != -1 ) { // success
return 1;
}
#endif
return 0;
}
|
C
|
OpenJK
| 0 |
CVE-2011-2906
|
https://www.cvedetails.com/cve/CVE-2011-2906/
|
CWE-189
|
https://github.com/torvalds/linux/commit/b5b515445f4f5a905c5dd27e6e682868ccd6c09d
|
b5b515445f4f5a905c5dd27e6e682868ccd6c09d
|
[SCSI] pmcraid: reject negative request size
There's a code path in pmcraid that can be reached via device ioctl that
causes all sorts of ugliness, including heap corruption or triggering the
OOM killer due to consecutive allocation of large numbers of pages.
First, the user can call pmcraid_chr_ioctl(), with a type
PMCRAID_PASSTHROUGH_IOCTL. This calls through to
pmcraid_ioctl_passthrough(). Next, a pmcraid_passthrough_ioctl_buffer
is copied in, and the request_size variable is set to
buffer->ioarcb.data_transfer_length, which is an arbitrary 32-bit
signed value provided by the user. If a negative value is provided
here, bad things can happen. For example,
pmcraid_build_passthrough_ioadls() is called with this request_size,
which immediately calls pmcraid_alloc_sglist() with a negative size.
The resulting math on allocating a scatter list can result in an
overflow in the kzalloc() call (if num_elem is 0, the sglist will be
smaller than expected), or if num_elem is unexpectedly large the
subsequent loop will call alloc_pages() repeatedly, a high number of
pages will be allocated and the OOM killer might be invoked.
It looks like preventing this value from being negative in
pmcraid_ioctl_passthrough() would be sufficient.
Signed-off-by: Dan Rosenberg <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
|
static ssize_t pmcraid_show_log_level(
struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(dev);
struct pmcraid_instance *pinstance =
(struct pmcraid_instance *)shost->hostdata;
return snprintf(buf, PAGE_SIZE, "%d\n", pinstance->current_log_level);
}
|
static ssize_t pmcraid_show_log_level(
struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct Scsi_Host *shost = class_to_shost(dev);
struct pmcraid_instance *pinstance =
(struct pmcraid_instance *)shost->hostdata;
return snprintf(buf, PAGE_SIZE, "%d\n", pinstance->current_log_level);
}
|
C
|
linux
| 0 |
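The CVE-2011-2906 record above traces how a user-controlled, signed data_transfer_length reaches the scatter-list allocator. The standalone sketch below is illustrative only (the names and limits are made up, and calloc() stands in for kzalloc()); it shows the class of bug and the kind of sign/upper-bound check the commit message says is sufficient.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: a signed, user-supplied length is turned into an
 * element count. Without the up-front check, a negative value cast to
 * size_t becomes enormous, so the element count and the allocation no
 * longer match what the rest of the code expects. */
static int build_sglist(int request_size)
{
    /* the kind of validation the fix adds: reject non-positive or
     * absurdly large transfer lengths before doing any size math */
    if (request_size <= 0 || request_size > (16 * 1024 * 1024))
        return -1;

    size_t num_elem = ((size_t)request_size + 4095) / 4096;
    void *sglist = calloc(num_elem, 16);   /* stand-in for kzalloc() */
    if (!sglist)
        return -1;
    printf("allocated %zu elements for %d bytes\n", num_elem, request_size);
    free(sglist);
    return 0;
}

int main(void)
{
    build_sglist(8192);   /* fine */
    build_sglist(-1);     /* rejected; without the check this would wrap */
    return 0;
}
```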
CVE-2014-7841
|
https://www.cvedetails.com/cve/CVE-2014-7841/
|
CWE-399
|
https://github.com/torvalds/linux/commit/e40607cbe270a9e8360907cb1e62ddf0736e4864
|
e40607cbe270a9e8360907cb1e62ddf0736e4864
|
net: sctp: fix NULL pointer dereference in af->from_addr_param on malformed packet
An SCTP server doing ASCONF will panic on malformed INIT ping-of-death
in the form of:
------------ INIT[PARAM: SET_PRIMARY_IP] ------------>
While the INIT chunk parameter verification dissects through many things
in order to detect malformed input, it fails to actually check parameters
nested inside other parameters. E.g. RFC5061, section 4.2.4 proposes a 'set primary
IP address' parameter in ASCONF, which has as a subparameter an address
parameter.
So an attacker may send a parameter type other than SCTP_PARAM_IPV4_ADDRESS
or SCTP_PARAM_IPV6_ADDRESS, param_type2af() will subsequently return 0
and thus sctp_get_af_specific() returns NULL, too, which we then happily
dereference unconditionally through af->from_addr_param().
The trace for the log:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000078
IP: [<ffffffffa01e9c62>] sctp_process_init+0x492/0x990 [sctp]
PGD 0
Oops: 0000 [#1] SMP
[...]
Pid: 0, comm: swapper Not tainted 2.6.32-504.el6.x86_64 #1 Bochs Bochs
RIP: 0010:[<ffffffffa01e9c62>] [<ffffffffa01e9c62>] sctp_process_init+0x492/0x990 [sctp]
[...]
Call Trace:
<IRQ>
[<ffffffffa01f2add>] ? sctp_bind_addr_copy+0x5d/0xe0 [sctp]
[<ffffffffa01e1fcb>] sctp_sf_do_5_1B_init+0x21b/0x340 [sctp]
[<ffffffffa01e3751>] sctp_do_sm+0x71/0x1210 [sctp]
[<ffffffffa01e5c09>] ? sctp_endpoint_lookup_assoc+0xc9/0xf0 [sctp]
[<ffffffffa01e61f6>] sctp_endpoint_bh_rcv+0x116/0x230 [sctp]
[<ffffffffa01ee986>] sctp_inq_push+0x56/0x80 [sctp]
[<ffffffffa01fcc42>] sctp_rcv+0x982/0xa10 [sctp]
[<ffffffffa01d5123>] ? ipt_local_in_hook+0x23/0x28 [iptable_filter]
[<ffffffff8148bdc9>] ? nf_iterate+0x69/0xb0
[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
[<ffffffff8148bf86>] ? nf_hook_slow+0x76/0x120
[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
[...]
A minimal way to address this is to check for NULL as we do on all
other such occasions where we know sctp_get_af_specific() could
possibly return with NULL.
Fixes: d6de3097592b ("[SCTP]: Add the handling of "Set Primary IP Address" parameter to INIT")
Signed-off-by: Daniel Borkmann <[email protected]>
Cc: Vlad Yasevich <[email protected]>
Acked-by: Neil Horman <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
struct sctp_chunk *sctp_make_abort_user(const struct sctp_association *asoc,
const struct msghdr *msg,
size_t paylen)
{
struct sctp_chunk *retval;
void *payload = NULL;
int err;
retval = sctp_make_abort(asoc, NULL, sizeof(sctp_errhdr_t) + paylen);
if (!retval)
goto err_chunk;
if (paylen) {
/* Put the msg_iov together into payload. */
payload = kmalloc(paylen, GFP_KERNEL);
if (!payload)
goto err_payload;
err = memcpy_fromiovec(payload, msg->msg_iov, paylen);
if (err < 0)
goto err_copy;
}
sctp_init_cause(retval, SCTP_ERROR_USER_ABORT, paylen);
sctp_addto_chunk(retval, paylen, payload);
if (paylen)
kfree(payload);
return retval;
err_copy:
kfree(payload);
err_payload:
sctp_chunk_free(retval);
retval = NULL;
err_chunk:
return retval;
}
|
struct sctp_chunk *sctp_make_abort_user(const struct sctp_association *asoc,
const struct msghdr *msg,
size_t paylen)
{
struct sctp_chunk *retval;
void *payload = NULL;
int err;
retval = sctp_make_abort(asoc, NULL, sizeof(sctp_errhdr_t) + paylen);
if (!retval)
goto err_chunk;
if (paylen) {
/* Put the msg_iov together into payload. */
payload = kmalloc(paylen, GFP_KERNEL);
if (!payload)
goto err_payload;
err = memcpy_fromiovec(payload, msg->msg_iov, paylen);
if (err < 0)
goto err_copy;
}
sctp_init_cause(retval, SCTP_ERROR_USER_ABORT, paylen);
sctp_addto_chunk(retval, paylen, payload);
if (paylen)
kfree(payload);
return retval;
err_copy:
kfree(payload);
err_payload:
sctp_chunk_free(retval);
retval = NULL;
err_chunk:
return retval;
}
|
C
|
linux
| 0 |
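The CVE-2014-7841 record above boils down to a lookup that can return NULL for an attacker-chosen parameter type, followed by an unconditional dereference. The standalone sketch below mirrors that shape with invented types and stand-in constants; it is not the SCTP implementation, only an illustration of the minimal "check for NULL and reject" fix the commit message describes.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative only: a lookup that returns NULL for unknown address
 * families, and a caller that must check the result before using it. */
struct af_ops { const char *name; };

static struct af_ops af_inet  = { "ipv4" };
static struct af_ops af_inet6 = { "ipv6" };

static struct af_ops *get_af_specific(int param_type)
{
    switch (param_type) {
    case 5:  return &af_inet;   /* stand-in for SCTP_PARAM_IPV4_ADDRESS */
    case 6:  return &af_inet6;  /* stand-in for SCTP_PARAM_IPV6_ADDRESS */
    default: return NULL;       /* attacker-chosen nested type lands here */
    }
}

static int process_set_primary(int nested_param_type)
{
    struct af_ops *af = get_af_specific(nested_param_type);
    if (!af)                    /* the minimal fix: reject, don't dereference */
        return 0;
    printf("handling %s address parameter\n", af->name);
    return 1;
}

int main(void)
{
    process_set_primary(5);     /* valid nested address parameter */
    process_set_primary(42);    /* malformed nested parameter, safely rejected */
    return 0;
}
```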
CVE-2012-0957
|
https://www.cvedetails.com/cve/CVE-2012-0957/
|
CWE-16
|
https://github.com/torvalds/linux/commit/2702b1526c7278c4d65d78de209a465d4de2885e
|
2702b1526c7278c4d65d78de209a465d4de2885e
|
kernel/sys.c: fix stack memory content leak via UNAME26
Calling uname() with the UNAME26 personality set allows a leak of kernel
stack contents. This fixes it by defensively calculating the length of the
copy_to_user() call, making the len argument unsigned, and initializing
the stack buffer to zero (now technically unneeded, but hey, overkill).
CVE-2012-0957
Reported-by: PaX Team <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: PaX Team <[email protected]>
Cc: Brad Spengler <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
{
int errno = 0;
down_read(&uts_sem);
if (copy_to_user(name, utsname(), sizeof *name))
errno = -EFAULT;
up_read(&uts_sem);
if (!errno && override_release(name->release, sizeof(name->release)))
errno = -EFAULT;
if (!errno && override_architecture(name))
errno = -EFAULT;
return errno;
}
|
SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
{
int errno = 0;
down_read(&uts_sem);
if (copy_to_user(name, utsname(), sizeof *name))
errno = -EFAULT;
up_read(&uts_sem);
if (!errno && override_release(name->release, sizeof(name->release)))
errno = -EFAULT;
if (!errno && override_architecture(name))
errno = -EFAULT;
return errno;
}
|
C
|
linux
| 0 |
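The CVE-2012-0957 record above fixes a leak where a fixed number of bytes is copied out of a stack buffer that was only partially written. The userspace sketch below (buffer size, strings, and function names are made up for the demo; memcpy() stands in for copy_to_user()) contrasts that pattern with the defensive version the commit describes: zero the buffer and derive the copy length from the string actually produced.

```c
#include <stdio.h>
#include <string.h>

#define RELEASE_LEN 65

/* Illustrative only: copying the full buffer also copies whatever stale
 * bytes follow the short string that was written into it. */
static void leaky_copy(char *out, size_t out_len)
{
    char buf[RELEASE_LEN];               /* NOT zeroed */
    snprintf(buf, sizeof(buf), "2.6.40");
    memcpy(out, buf, out_len);           /* copies bytes past the string */
}

/* The defensive variant: zero the buffer and bound the copy length. */
static void defensive_copy(char *out, size_t out_len)
{
    char buf[RELEASE_LEN] = {0};
    size_t copy_len;

    snprintf(buf, sizeof(buf), "2.6.40");
    copy_len = strlen(buf) + 1;
    if (copy_len > out_len)
        copy_len = out_len;
    memcpy(out, buf, copy_len);          /* only the string is copied */
}

int main(void)
{
    char dst[RELEASE_LEN];
    leaky_copy(dst, sizeof(dst));        /* bytes after "2.6.40" are stale */
    defensive_copy(dst, sizeof(dst));
    printf("%s\n", dst);
    return 0;
}
```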
CVE-2013-0828
|
https://www.cvedetails.com/cve/CVE-2013-0828/
|
CWE-399
|
https://github.com/chromium/chromium/commit/4d17163f4b66be517dc49019a029e5ddbd45078c
|
4d17163f4b66be517dc49019a029e5ddbd45078c
|
Remove the Simple Default Stylesheet, it's just a foot-gun.
We've been bitten by the Simple Default Stylesheet being out
of sync with the real html.css twice this week.
The Simple Default Stylesheet was invented years ago for Mac:
http://trac.webkit.org/changeset/36135
It nicely handles the case where you just want to create
a single WebView and parse some simple HTML either without
styling said HTML, or only to display a small string, etc.
Note that this optimization/complexity *only* helps for the
very first document, since the default stylesheets are
all static (process-global) variables. Since any real page
on the internet uses a tag not covered by the simple default
stylesheet, no real page load benefits from this optimization.
Only uses of WebView which were just rendering small bits
of text might have benefited from this. about:blank would
also have used this sheet.
This was a common application for some uses of WebView back
in those days. These days, even with WebView on Android,
there are likely much larger overheads than parsing the
html.css stylesheet, so making it required seems like the
right tradeoff of code-simplicity for this case.
BUG=319556
Review URL: https://codereview.chromium.org/73723005
git-svn-id: svn://svn.chromium.org/blink/trunk@162153 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
StyleResolver::~StyleResolver()
{
m_fontSelector->unregisterForInvalidationCallbacks(this);
m_fontSelector->clearDocument();
m_viewportStyleResolver->clearDocument();
}
|
StyleResolver::~StyleResolver()
{
m_fontSelector->unregisterForInvalidationCallbacks(this);
m_fontSelector->clearDocument();
m_viewportStyleResolver->clearDocument();
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/8f883f2b12f68fed993671dce7fb5fb91f2229aa
|
8f883f2b12f68fed993671dce7fb5fb91f2229aa
|
Add more non client Windows messages to the list of messages not being sent to the renderer.
Turns out we get WM_NCLBUTTONDOWN/UP messages at times which go to the renderer and are not acked, causing the
unresponsive renderer dialog to show up in Desktop Chrome Aura.
BUG=335248
[email protected]
TBR=jam
Review URL: https://codereview.chromium.org/141103004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@245949 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderWidgetHostViewAura::SetTooltipText(
const base::string16& tooltip_text) {
tooltip_ = tooltip_text;
aura::Window* root_window = window_->GetRootWindow();
aura::client::TooltipClient* tooltip_client =
aura::client::GetTooltipClient(root_window);
if (tooltip_client) {
tooltip_client->UpdateTooltip(window_);
tooltip_client->SetTooltipShownTimeout(window_, 0);
}
}
|
void RenderWidgetHostViewAura::SetTooltipText(
const base::string16& tooltip_text) {
tooltip_ = tooltip_text;
aura::Window* root_window = window_->GetRootWindow();
aura::client::TooltipClient* tooltip_client =
aura::client::GetTooltipClient(root_window);
if (tooltip_client) {
tooltip_client->UpdateTooltip(window_);
tooltip_client->SetTooltipShownTimeout(window_, 0);
}
}
|
C
|
Chrome
| 0 |
CVE-2017-12187
|
https://www.cvedetails.com/cve/CVE-2017-12187/
|
CWE-20
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=cad5a1050b7184d828aef9c1dd151c3ab649d37e
|
cad5a1050b7184d828aef9c1dd151c3ab649d37e
| null |
XineramaXvPutImage(ClientPtr client)
{
REQUEST(xvPutImageReq);
PanoramiXRes *draw, *gc, *port;
Bool isRoot;
int result, i, x, y;
REQUEST_AT_LEAST_SIZE(xvPutImageReq);
result = dixLookupResourceByClass((void **) &draw, stuff->drawable,
XRC_DRAWABLE, client, DixWriteAccess);
if (result != Success)
return (result == BadValue) ? BadDrawable : result;
result = dixLookupResourceByType((void **) &gc, stuff->gc,
XRT_GC, client, DixReadAccess);
if (result != Success)
return result;
result = dixLookupResourceByType((void **) &port, stuff->port,
XvXRTPort, client, DixReadAccess);
if (result != Success)
return result;
isRoot = (draw->type == XRT_WINDOW) && draw->u.win.root;
x = stuff->drw_x;
y = stuff->drw_y;
FOR_NSCREENS_BACKWARD(i) {
if (port->info[i].id) {
stuff->drawable = draw->info[i].id;
stuff->port = port->info[i].id;
stuff->gc = gc->info[i].id;
stuff->drw_x = x;
stuff->drw_y = y;
if (isRoot) {
stuff->drw_x -= screenInfo.screens[i]->x;
stuff->drw_y -= screenInfo.screens[i]->y;
}
result = ProcXvPutImage(client);
}
}
return result;
}
|
XineramaXvPutImage(ClientPtr client)
{
REQUEST(xvPutImageReq);
PanoramiXRes *draw, *gc, *port;
Bool isRoot;
int result, i, x, y;
REQUEST_AT_LEAST_SIZE(xvPutImageReq);
result = dixLookupResourceByClass((void **) &draw, stuff->drawable,
XRC_DRAWABLE, client, DixWriteAccess);
if (result != Success)
return (result == BadValue) ? BadDrawable : result;
result = dixLookupResourceByType((void **) &gc, stuff->gc,
XRT_GC, client, DixReadAccess);
if (result != Success)
return result;
result = dixLookupResourceByType((void **) &port, stuff->port,
XvXRTPort, client, DixReadAccess);
if (result != Success)
return result;
isRoot = (draw->type == XRT_WINDOW) && draw->u.win.root;
x = stuff->drw_x;
y = stuff->drw_y;
FOR_NSCREENS_BACKWARD(i) {
if (port->info[i].id) {
stuff->drawable = draw->info[i].id;
stuff->port = port->info[i].id;
stuff->gc = gc->info[i].id;
stuff->drw_x = x;
stuff->drw_y = y;
if (isRoot) {
stuff->drw_x -= screenInfo.screens[i]->x;
stuff->drw_y -= screenInfo.screens[i]->y;
}
result = ProcXvPutImage(client);
}
}
return result;
}
|
C
|
xserver
| 0 |
CVE-2014-5045
|
https://www.cvedetails.com/cve/CVE-2014-5045/
|
CWE-59
|
https://github.com/torvalds/linux/commit/295dc39d941dc2ae53d5c170365af4c9d5c16212
|
295dc39d941dc2ae53d5c170365af4c9d5c16212
|
fs: umount on symlink leaks mnt count
Currently, umount on a symlink blocks a following umount:
/vz is separate mount
# ls /vz/ -al | grep test
drwxr-xr-x. 2 root root 4096 Jul 19 01:14 testdir
lrwxrwxrwx. 1 root root 11 Jul 19 01:16 testlink -> /vz/testdir
# umount -l /vz/testlink
umount: /vz/testlink: not mounted (expected)
# lsof /vz
# umount /vz
umount: /vz: device is busy. (unexpected)
In this case mountpoint_last() gets an extra refcount on path->mnt
Signed-off-by: Vasily Averin <[email protected]>
Acked-by: Ian Kent <[email protected]>
Acked-by: Jeff Layton <[email protected]>
Cc: [email protected]
Signed-off-by: Christoph Hellwig <[email protected]>
|
static int atomic_open(struct nameidata *nd, struct dentry *dentry,
struct path *path, struct file *file,
const struct open_flags *op,
bool got_write, bool need_lookup,
int *opened)
{
struct inode *dir = nd->path.dentry->d_inode;
unsigned open_flag = open_to_namei_flags(op->open_flag);
umode_t mode;
int error;
int acc_mode;
int create_error = 0;
struct dentry *const DENTRY_NOT_SET = (void *) -1UL;
bool excl;
BUG_ON(dentry->d_inode);
/* Don't create child dentry for a dead directory. */
if (unlikely(IS_DEADDIR(dir))) {
error = -ENOENT;
goto out;
}
mode = op->mode;
if ((open_flag & O_CREAT) && !IS_POSIXACL(dir))
mode &= ~current_umask();
excl = (open_flag & (O_EXCL | O_CREAT)) == (O_EXCL | O_CREAT);
if (excl)
open_flag &= ~O_TRUNC;
/*
* Checking write permission is tricky, bacuse we don't know if we are
* going to actually need it: O_CREAT opens should work as long as the
* file exists. But checking existence breaks atomicity. The trick is
* to check access and if not granted clear O_CREAT from the flags.
*
* Another problem is returing the "right" error value (e.g. for an
* O_EXCL open we want to return EEXIST not EROFS).
*/
if (((open_flag & (O_CREAT | O_TRUNC)) ||
(open_flag & O_ACCMODE) != O_RDONLY) && unlikely(!got_write)) {
if (!(open_flag & O_CREAT)) {
/*
* No O_CREATE -> atomicity not a requirement -> fall
* back to lookup + open
*/
goto no_open;
} else if (open_flag & (O_EXCL | O_TRUNC)) {
/* Fall back and fail with the right error */
create_error = -EROFS;
goto no_open;
} else {
/* No side effects, safe to clear O_CREAT */
create_error = -EROFS;
open_flag &= ~O_CREAT;
}
}
if (open_flag & O_CREAT) {
error = may_o_create(&nd->path, dentry, mode);
if (error) {
create_error = error;
if (open_flag & O_EXCL)
goto no_open;
open_flag &= ~O_CREAT;
}
}
if (nd->flags & LOOKUP_DIRECTORY)
open_flag |= O_DIRECTORY;
file->f_path.dentry = DENTRY_NOT_SET;
file->f_path.mnt = nd->path.mnt;
error = dir->i_op->atomic_open(dir, dentry, file, open_flag, mode,
opened);
if (error < 0) {
if (create_error && error == -ENOENT)
error = create_error;
goto out;
}
if (error) { /* returned 1, that is */
if (WARN_ON(file->f_path.dentry == DENTRY_NOT_SET)) {
error = -EIO;
goto out;
}
if (file->f_path.dentry) {
dput(dentry);
dentry = file->f_path.dentry;
}
if (*opened & FILE_CREATED)
fsnotify_create(dir, dentry);
if (!dentry->d_inode) {
WARN_ON(*opened & FILE_CREATED);
if (create_error) {
error = create_error;
goto out;
}
} else {
if (excl && !(*opened & FILE_CREATED)) {
error = -EEXIST;
goto out;
}
}
goto looked_up;
}
/*
* We didn't have the inode before the open, so check open permission
* here.
*/
acc_mode = op->acc_mode;
if (*opened & FILE_CREATED) {
WARN_ON(!(open_flag & O_CREAT));
fsnotify_create(dir, dentry);
acc_mode = MAY_OPEN;
}
error = may_open(&file->f_path, acc_mode, open_flag);
if (error)
fput(file);
out:
dput(dentry);
return error;
no_open:
if (need_lookup) {
dentry = lookup_real(dir, dentry, nd->flags);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
if (create_error) {
int open_flag = op->open_flag;
error = create_error;
if ((open_flag & O_EXCL)) {
if (!dentry->d_inode)
goto out;
} else if (!dentry->d_inode) {
goto out;
} else if ((open_flag & O_TRUNC) &&
S_ISREG(dentry->d_inode->i_mode)) {
goto out;
}
/* will fail later, go on to get the right error */
}
}
looked_up:
path->dentry = dentry;
path->mnt = nd->path.mnt;
return 1;
}
|
static int atomic_open(struct nameidata *nd, struct dentry *dentry,
struct path *path, struct file *file,
const struct open_flags *op,
bool got_write, bool need_lookup,
int *opened)
{
struct inode *dir = nd->path.dentry->d_inode;
unsigned open_flag = open_to_namei_flags(op->open_flag);
umode_t mode;
int error;
int acc_mode;
int create_error = 0;
struct dentry *const DENTRY_NOT_SET = (void *) -1UL;
bool excl;
BUG_ON(dentry->d_inode);
/* Don't create child dentry for a dead directory. */
if (unlikely(IS_DEADDIR(dir))) {
error = -ENOENT;
goto out;
}
mode = op->mode;
if ((open_flag & O_CREAT) && !IS_POSIXACL(dir))
mode &= ~current_umask();
excl = (open_flag & (O_EXCL | O_CREAT)) == (O_EXCL | O_CREAT);
if (excl)
open_flag &= ~O_TRUNC;
/*
* Checking write permission is tricky, bacuse we don't know if we are
* going to actually need it: O_CREAT opens should work as long as the
* file exists. But checking existence breaks atomicity. The trick is
* to check access and if not granted clear O_CREAT from the flags.
*
* Another problem is returing the "right" error value (e.g. for an
* O_EXCL open we want to return EEXIST not EROFS).
*/
if (((open_flag & (O_CREAT | O_TRUNC)) ||
(open_flag & O_ACCMODE) != O_RDONLY) && unlikely(!got_write)) {
if (!(open_flag & O_CREAT)) {
/*
* No O_CREATE -> atomicity not a requirement -> fall
* back to lookup + open
*/
goto no_open;
} else if (open_flag & (O_EXCL | O_TRUNC)) {
/* Fall back and fail with the right error */
create_error = -EROFS;
goto no_open;
} else {
/* No side effects, safe to clear O_CREAT */
create_error = -EROFS;
open_flag &= ~O_CREAT;
}
}
if (open_flag & O_CREAT) {
error = may_o_create(&nd->path, dentry, mode);
if (error) {
create_error = error;
if (open_flag & O_EXCL)
goto no_open;
open_flag &= ~O_CREAT;
}
}
if (nd->flags & LOOKUP_DIRECTORY)
open_flag |= O_DIRECTORY;
file->f_path.dentry = DENTRY_NOT_SET;
file->f_path.mnt = nd->path.mnt;
error = dir->i_op->atomic_open(dir, dentry, file, open_flag, mode,
opened);
if (error < 0) {
if (create_error && error == -ENOENT)
error = create_error;
goto out;
}
if (error) { /* returned 1, that is */
if (WARN_ON(file->f_path.dentry == DENTRY_NOT_SET)) {
error = -EIO;
goto out;
}
if (file->f_path.dentry) {
dput(dentry);
dentry = file->f_path.dentry;
}
if (*opened & FILE_CREATED)
fsnotify_create(dir, dentry);
if (!dentry->d_inode) {
WARN_ON(*opened & FILE_CREATED);
if (create_error) {
error = create_error;
goto out;
}
} else {
if (excl && !(*opened & FILE_CREATED)) {
error = -EEXIST;
goto out;
}
}
goto looked_up;
}
/*
* We didn't have the inode before the open, so check open permission
* here.
*/
acc_mode = op->acc_mode;
if (*opened & FILE_CREATED) {
WARN_ON(!(open_flag & O_CREAT));
fsnotify_create(dir, dentry);
acc_mode = MAY_OPEN;
}
error = may_open(&file->f_path, acc_mode, open_flag);
if (error)
fput(file);
out:
dput(dentry);
return error;
no_open:
if (need_lookup) {
dentry = lookup_real(dir, dentry, nd->flags);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
if (create_error) {
int open_flag = op->open_flag;
error = create_error;
if ((open_flag & O_EXCL)) {
if (!dentry->d_inode)
goto out;
} else if (!dentry->d_inode) {
goto out;
} else if ((open_flag & O_TRUNC) &&
S_ISREG(dentry->d_inode->i_mode)) {
goto out;
}
/* will fail later, go on to get the right error */
}
}
looked_up:
path->dentry = dentry;
path->mnt = nd->path.mnt;
return 1;
}
|
C
|
linux
| 0 |
CVE-2011-3055
|
https://www.cvedetails.com/cve/CVE-2011-3055/
| null |
https://github.com/chromium/chromium/commit/e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
[V8] Pass Isolate to throwNotEnoughArgumentsError()
https://bugs.webkit.org/show_bug.cgi?id=86983
Reviewed by Adam Barth.
The objective is to pass Isolate around in V8 bindings.
This patch passes Isolate to throwNotEnoughArgumentsError().
No tests. No change in behavior.
* bindings/scripts/CodeGeneratorV8.pm:
(GenerateArgumentsCountCheck):
(GenerateEventConstructorCallback):
* bindings/scripts/test/V8/V8Float64Array.cpp:
(WebCore::Float64ArrayV8Internal::fooCallback):
* bindings/scripts/test/V8/V8TestActiveDOMObject.cpp:
(WebCore::TestActiveDOMObjectV8Internal::excitingFunctionCallback):
(WebCore::TestActiveDOMObjectV8Internal::postMessageCallback):
* bindings/scripts/test/V8/V8TestCustomNamedGetter.cpp:
(WebCore::TestCustomNamedGetterV8Internal::anotherFunctionCallback):
* bindings/scripts/test/V8/V8TestEventConstructor.cpp:
(WebCore::V8TestEventConstructor::constructorCallback):
* bindings/scripts/test/V8/V8TestEventTarget.cpp:
(WebCore::TestEventTargetV8Internal::itemCallback):
(WebCore::TestEventTargetV8Internal::dispatchEventCallback):
* bindings/scripts/test/V8/V8TestInterface.cpp:
(WebCore::TestInterfaceV8Internal::supplementalMethod2Callback):
(WebCore::V8TestInterface::constructorCallback):
* bindings/scripts/test/V8/V8TestMediaQueryListListener.cpp:
(WebCore::TestMediaQueryListListenerV8Internal::methodCallback):
* bindings/scripts/test/V8/V8TestNamedConstructor.cpp:
(WebCore::V8TestNamedConstructorConstructorCallback):
* bindings/scripts/test/V8/V8TestObj.cpp:
(WebCore::TestObjV8Internal::voidMethodWithArgsCallback):
(WebCore::TestObjV8Internal::intMethodWithArgsCallback):
(WebCore::TestObjV8Internal::objMethodWithArgsCallback):
(WebCore::TestObjV8Internal::methodWithSequenceArgCallback):
(WebCore::TestObjV8Internal::methodReturningSequenceCallback):
(WebCore::TestObjV8Internal::methodThatRequiresAllArgsAndThrowsCallback):
(WebCore::TestObjV8Internal::serializedValueCallback):
(WebCore::TestObjV8Internal::idbKeyCallback):
(WebCore::TestObjV8Internal::optionsObjectCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndOptionalArgCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndTwoOptionalArgsCallback):
(WebCore::TestObjV8Internal::methodWithCallbackArgCallback):
(WebCore::TestObjV8Internal::methodWithNonCallbackArgAndCallbackArgCallback):
(WebCore::TestObjV8Internal::overloadedMethod1Callback):
(WebCore::TestObjV8Internal::overloadedMethod2Callback):
(WebCore::TestObjV8Internal::overloadedMethod3Callback):
(WebCore::TestObjV8Internal::overloadedMethod4Callback):
(WebCore::TestObjV8Internal::overloadedMethod5Callback):
(WebCore::TestObjV8Internal::overloadedMethod6Callback):
(WebCore::TestObjV8Internal::overloadedMethod7Callback):
(WebCore::TestObjV8Internal::overloadedMethod11Callback):
(WebCore::TestObjV8Internal::overloadedMethod12Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod1Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod2Callback):
(WebCore::TestObjV8Internal::convert1Callback):
(WebCore::TestObjV8Internal::convert2Callback):
(WebCore::TestObjV8Internal::convert3Callback):
(WebCore::TestObjV8Internal::convert4Callback):
(WebCore::TestObjV8Internal::convert5Callback):
(WebCore::TestObjV8Internal::strictFunctionCallback):
(WebCore::V8TestObj::constructorCallback):
* bindings/scripts/test/V8/V8TestSerializedScriptValueInterface.cpp:
(WebCore::TestSerializedScriptValueInterfaceV8Internal::acceptTransferListCallback):
(WebCore::V8TestSerializedScriptValueInterface::constructorCallback):
* bindings/v8/ScriptController.cpp:
(WebCore::setValueAndClosePopupCallback):
* bindings/v8/V8Proxy.cpp:
(WebCore::V8Proxy::throwNotEnoughArgumentsError):
* bindings/v8/V8Proxy.h:
(V8Proxy):
* bindings/v8/custom/V8AudioContextCustom.cpp:
(WebCore::V8AudioContext::constructorCallback):
* bindings/v8/custom/V8DataViewCustom.cpp:
(WebCore::V8DataView::getInt8Callback):
(WebCore::V8DataView::getUint8Callback):
(WebCore::V8DataView::setInt8Callback):
(WebCore::V8DataView::setUint8Callback):
* bindings/v8/custom/V8DirectoryEntryCustom.cpp:
(WebCore::V8DirectoryEntry::getDirectoryCallback):
(WebCore::V8DirectoryEntry::getFileCallback):
* bindings/v8/custom/V8IntentConstructor.cpp:
(WebCore::V8Intent::constructorCallback):
* bindings/v8/custom/V8SVGLengthCustom.cpp:
(WebCore::V8SVGLength::convertToSpecifiedUnitsCallback):
* bindings/v8/custom/V8WebGLRenderingContextCustom.cpp:
(WebCore::getObjectParameter):
(WebCore::V8WebGLRenderingContext::getAttachedShadersCallback):
(WebCore::V8WebGLRenderingContext::getExtensionCallback):
(WebCore::V8WebGLRenderingContext::getFramebufferAttachmentParameterCallback):
(WebCore::V8WebGLRenderingContext::getParameterCallback):
(WebCore::V8WebGLRenderingContext::getProgramParameterCallback):
(WebCore::V8WebGLRenderingContext::getShaderParameterCallback):
(WebCore::V8WebGLRenderingContext::getUniformCallback):
(WebCore::vertexAttribAndUniformHelperf):
(WebCore::uniformHelperi):
(WebCore::uniformMatrixHelper):
* bindings/v8/custom/V8WebKitMutationObserverCustom.cpp:
(WebCore::V8WebKitMutationObserver::constructorCallback):
(WebCore::V8WebKitMutationObserver::observeCallback):
* bindings/v8/custom/V8WebSocketCustom.cpp:
(WebCore::V8WebSocket::constructorCallback):
(WebCore::V8WebSocket::sendCallback):
* bindings/v8/custom/V8XMLHttpRequestCustom.cpp:
(WebCore::V8XMLHttpRequest::openCallback):
git-svn-id: svn://svn.chromium.org/blink/trunk@117736 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
static v8::Handle<v8::Value> unsignedShortAttrAttrGetter(v8::Local<v8::String> name, const v8::AccessorInfo& info)
{
INC_STATS("DOM.TestObj.unsignedShortAttr._get");
TestObj* imp = V8TestObj::toNative(info.Holder());
return v8::Integer::New(imp->unsignedShortAttr());
}
|
static v8::Handle<v8::Value> unsignedShortAttrAttrGetter(v8::Local<v8::String> name, const v8::AccessorInfo& info)
{
INC_STATS("DOM.TestObj.unsignedShortAttr._get");
TestObj* imp = V8TestObj::toNative(info.Holder());
return v8::Integer::New(imp->unsignedShortAttr());
}
|
C
|
Chrome
| 0 |
CVE-2013-3232
|
https://www.cvedetails.com/cve/CVE-2013-3232/
|
CWE-200
|
https://github.com/torvalds/linux/commit/c802d759623acbd6e1ee9fbdabae89159a513913
|
c802d759623acbd6e1ee9fbdabae89159a513913
|
netrom: fix invalid use of sizeof in nr_recvmsg()
sizeof() when applied to a pointer typed expression gives the size of the
pointer, not that of the pointed data.
Introduced by commit 3ce5ef (netrom: fix info leak via msg_name in nr_recvmsg)
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static int nr_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size, int flags)
{
struct sock *sk = sock->sk;
struct sockaddr_ax25 *sax = (struct sockaddr_ax25 *)msg->msg_name;
size_t copied;
struct sk_buff *skb;
int er;
/*
* This works for seqpacket too. The receiver has ordered the queue for
* us! We do one quick check first though
*/
lock_sock(sk);
if (sk->sk_state != TCP_ESTABLISHED) {
release_sock(sk);
return -ENOTCONN;
}
/* Now we can treat all alike */
if ((skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, flags & MSG_DONTWAIT, &er)) == NULL) {
release_sock(sk);
return er;
}
skb_reset_transport_header(skb);
copied = skb->len;
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
er = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (er < 0) {
skb_free_datagram(sk, skb);
release_sock(sk);
return er;
}
if (sax != NULL) {
memset(sax, 0, sizeof(*sax));
sax->sax25_family = AF_NETROM;
skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call,
AX25_ADDR_LEN);
}
msg->msg_namelen = sizeof(*sax);
skb_free_datagram(sk, skb);
release_sock(sk);
return copied;
}
|
static int nr_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size, int flags)
{
struct sock *sk = sock->sk;
struct sockaddr_ax25 *sax = (struct sockaddr_ax25 *)msg->msg_name;
size_t copied;
struct sk_buff *skb;
int er;
/*
* This works for seqpacket too. The receiver has ordered the queue for
* us! We do one quick check first though
*/
lock_sock(sk);
if (sk->sk_state != TCP_ESTABLISHED) {
release_sock(sk);
return -ENOTCONN;
}
/* Now we can treat all alike */
if ((skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, flags & MSG_DONTWAIT, &er)) == NULL) {
release_sock(sk);
return er;
}
skb_reset_transport_header(skb);
copied = skb->len;
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
er = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (er < 0) {
skb_free_datagram(sk, skb);
release_sock(sk);
return er;
}
if (sax != NULL) {
memset(sax, 0, sizeof(sax));
sax->sax25_family = AF_NETROM;
skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call,
AX25_ADDR_LEN);
}
msg->msg_namelen = sizeof(*sax);
skb_free_datagram(sk, skb);
release_sock(sk);
return copied;
}
|
C
|
linux
| 1 |
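The CVE-2013-3232 record above is the classic sizeof-of-a-pointer mistake: in func_before, memset(sax, 0, sizeof(sax)) clears only pointer-sized bytes, so the rest of the sockaddr returned to userspace keeps stale kernel data. The standalone demo below uses a made-up stand-in struct (not the real sockaddr_ax25 layout) purely to show the difference in the two sizeof expressions.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: a stand-in structure, not the real sockaddr_ax25. */
struct sockaddr_ax25_demo {
    unsigned short family;
    char call[7];
    char pad[20];
};

int main(void)
{
    struct sockaddr_ax25_demo sax;

    printf("sizeof(pointer) = %zu, sizeof(struct) = %zu\n",
           sizeof(&sax), sizeof(sax));

    memset(&sax, 0xAA, sizeof(sax));     /* pretend this is stale stack data */
    struct sockaddr_ax25_demo *p = &sax;

    memset(p, 0, sizeof(p));    /* bug: clears only sizeof(void *) bytes */
    memset(p, 0, sizeof(*p));   /* fix: clears the whole structure */
    return 0;
}
```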
null | null | null |
https://github.com/chromium/chromium/commit/59f5e0204cbc0e524b2687fb1beddda82047d16d
|
59f5e0204cbc0e524b2687fb1beddda82047d16d
|
AutoFill: Record whether the user initiated the form submission and don't save form data if the form was not user-submitted.
BUG=48225
TEST=none
Review URL: http://codereview.chromium.org/2842062
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@53350 0039d316-1c4b-4281-b951-d872f2087c98
|
void AutoFillManager::FillSelectOneField(const AutoFillProfile* profile,
AutoFillType type,
webkit_glue::FormField* field) {
DCHECK(profile);
DCHECK(field);
DCHECK(field->form_control_type() == ASCIIToUTF16("select-one"));
string16 selected_string = profile->GetFieldText(type);
std::string ascii_value = UTF16ToASCII(selected_string);
for (size_t i = 0; i < field->option_strings().size(); ++i) {
if (profile->GetFieldText(type) == field->option_strings()[i]) {
selected_string = profile->GetFieldText(type);
break;
}
if (!base::strcasecmp(UTF16ToASCII(field->option_strings()[i]).c_str(),
ascii_value.c_str())) {
selected_string = field->option_strings()[i];
}
}
field->set_value(selected_string);
}
|
void AutoFillManager::FillSelectOneField(const AutoFillProfile* profile,
AutoFillType type,
webkit_glue::FormField* field) {
DCHECK(profile);
DCHECK(field);
DCHECK(field->form_control_type() == ASCIIToUTF16("select-one"));
string16 selected_string = profile->GetFieldText(type);
std::string ascii_value = UTF16ToASCII(selected_string);
for (size_t i = 0; i < field->option_strings().size(); ++i) {
if (profile->GetFieldText(type) == field->option_strings()[i]) {
selected_string = profile->GetFieldText(type);
break;
}
if (!base::strcasecmp(UTF16ToASCII(field->option_strings()[i]).c_str(),
ascii_value.c_str())) {
selected_string = field->option_strings()[i];
}
}
field->set_value(selected_string);
}
|
C
|
Chrome
| 0 |
CVE-2016-2476
|
https://www.cvedetails.com/cve/CVE-2016-2476/
|
CWE-119
|
https://android.googlesource.com/platform/frameworks/av/+/295c883fe3105b19bcd0f9e07d54c6b589fc5bff
|
295c883fe3105b19bcd0f9e07d54c6b589fc5bff
|
DO NOT MERGE Verify OMX buffer sizes prior to access
Bug: 27207275
Change-Id: I4412825d1ee233d993af0a67708bea54304ff62d
|
static void FreeWrapper(void * /* userData */, void* ptr) {
free(ptr);
}
|
static void FreeWrapper(void * /* userData */, void* ptr) {
free(ptr);
}
|
C
|
Android
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/ee8d6fd30b022ac2c87b7a190c954e7bb3c9b21e
|
ee8d6fd30b022ac2c87b7a190c954e7bb3c9b21e
|
Clean up calls like "gfx::Rect(0, 0, size().width(), size().height()".
The caller can use the much shorter "gfx::Rect(size())", since gfx::Rect
has a constructor that just takes a Size.
BUG=none
TEST=none
Review URL: http://codereview.chromium.org/2204001
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@48283 0039d316-1c4b-4281-b951-d872f2087c98
|
void WebPluginDelegateProxy::OnGetWindowScriptNPObject(
int route_id, bool* success) {
*success = false;
NPObject* npobject = NULL;
if (plugin_)
npobject = plugin_->GetWindowScriptNPObject();
if (!npobject)
return;
window_script_object_ = (new NPObjectStub(
npobject, channel_host_.get(), route_id, 0, page_url_))->AsWeakPtr();
*success = true;
}
|
void WebPluginDelegateProxy::OnGetWindowScriptNPObject(
int route_id, bool* success) {
*success = false;
NPObject* npobject = NULL;
if (plugin_)
npobject = plugin_->GetWindowScriptNPObject();
if (!npobject)
return;
window_script_object_ = (new NPObjectStub(
npobject, channel_host_.get(), route_id, 0, page_url_))->AsWeakPtr();
*success = true;
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/6c5d779aaf0dec9628da8a20751e95fd09554b14
|
6c5d779aaf0dec9628da8a20751e95fd09554b14
|
Move the cancellation of blocked requests code from ResourceDispatcherHost::~ResourceDispatcherHost() to ResourceDispatcherHost::OnShutdown().
This causes the requests to be cancelled on the IO thread rather than the UI thread, which is important since cancellation may delete the URLRequest (and URLRequests should not outlive the IO thread).
BUG=39243
Review URL: http://codereview.chromium.org/1213004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@42575 0039d316-1c4b-4281-b951-d872f2087c98
|
void ResourceDispatcherHost::OnShutdown() {
DCHECK(ChromeThread::CurrentlyOn(ChromeThread::IO));
is_shutdown_ = true;
resource_queue_.Shutdown();
STLDeleteValues(&pending_requests_);
update_load_states_timer_.Stop();
// Clear blocked requests if any left.
// Note that we have to do this in 2 passes as we cannot call
// CancelBlockedRequestsForRoute while iterating over
// blocked_requests_map_, as it modifies it.
std::set<ProcessRouteIDs> ids;
for (BlockedRequestMap::const_iterator iter = blocked_requests_map_.begin();
iter != blocked_requests_map_.end(); ++iter) {
std::pair<std::set<ProcessRouteIDs>::iterator, bool> result =
ids.insert(iter->first);
// We should not have duplicates.
DCHECK(result.second);
}
for (std::set<ProcessRouteIDs>::const_iterator iter = ids.begin();
iter != ids.end(); ++iter) {
CancelBlockedRequestsForRoute(iter->first, iter->second);
}
}
|
void ResourceDispatcherHost::OnShutdown() {
DCHECK(ChromeThread::CurrentlyOn(ChromeThread::IO));
is_shutdown_ = true;
resource_queue_.Shutdown();
STLDeleteValues(&pending_requests_);
update_load_states_timer_.Stop();
}
|
C
|
Chrome
| 1 |
CVE-2019-12068
|
https://www.cvedetails.com/cve/CVE-2019-12068/
|
CWE-835
|
https://git.qemu.org/?p=qemu.git;a=commit;h=de594e47659029316bbf9391efb79da0a1a08e08
|
de594e47659029316bbf9391efb79da0a1a08e08
| null |
static void lsi_reselect(LSIState *s, lsi_request *p)
{
int id;
assert(s->current == NULL);
QTAILQ_REMOVE(&s->queue, p, next);
s->current = p;
id = (p->tag >> 8) & 0xf;
s->ssid = id | 0x80;
/* LSI53C700 Family Compatibility, see LSI53C895A 4-73 */
if (!(s->dcntl & LSI_DCNTL_COM)) {
s->sfbr = 1 << (id & 0x7);
}
trace_lsi_reselect(id);
s->scntl1 |= LSI_SCNTL1_CON;
lsi_set_phase(s, PHASE_MI);
s->msg_action = p->out ? LSI_MSG_ACTION_DOUT : LSI_MSG_ACTION_DIN;
s->current->dma_len = p->pending;
lsi_add_msg_byte(s, 0x80);
if (s->current->tag & LSI_TAG_VALID) {
lsi_add_msg_byte(s, 0x20);
lsi_add_msg_byte(s, p->tag & 0xff);
}
if (lsi_irq_on_rsl(s)) {
lsi_script_scsi_interrupt(s, LSI_SIST0_RSL, 0);
}
}
|
static void lsi_reselect(LSIState *s, lsi_request *p)
{
int id;
assert(s->current == NULL);
QTAILQ_REMOVE(&s->queue, p, next);
s->current = p;
id = (p->tag >> 8) & 0xf;
s->ssid = id | 0x80;
/* LSI53C700 Family Compatibility, see LSI53C895A 4-73 */
if (!(s->dcntl & LSI_DCNTL_COM)) {
s->sfbr = 1 << (id & 0x7);
}
trace_lsi_reselect(id);
s->scntl1 |= LSI_SCNTL1_CON;
lsi_set_phase(s, PHASE_MI);
s->msg_action = p->out ? LSI_MSG_ACTION_DOUT : LSI_MSG_ACTION_DIN;
s->current->dma_len = p->pending;
lsi_add_msg_byte(s, 0x80);
if (s->current->tag & LSI_TAG_VALID) {
lsi_add_msg_byte(s, 0x20);
lsi_add_msg_byte(s, p->tag & 0xff);
}
if (lsi_irq_on_rsl(s)) {
lsi_script_scsi_interrupt(s, LSI_SIST0_RSL, 0);
}
}
|
C
|
qemu
| 0 |
CVE-2011-2350
|
https://www.cvedetails.com/cve/CVE-2011-2350/
|
CWE-20
|
https://github.com/chromium/chromium/commit/b944f670bb7a8a919daac497a4ea0536c954c201
|
b944f670bb7a8a919daac497a4ea0536c954c201
|
[JSC] Implement a helper method createNotEnoughArgumentsError()
https://bugs.webkit.org/show_bug.cgi?id=85102
Reviewed by Geoffrey Garen.
In bug 84787, kbr@ requested to avoid hard-coding
createTypeError(exec, "Not enough arguments") here and there.
This patch implements createNotEnoughArgumentsError(exec)
and uses it in JSC bindings.
c.f. a corresponding bug for V8 bindings is bug 85097.
Source/JavaScriptCore:
* runtime/Error.cpp:
(JSC::createNotEnoughArgumentsError):
(JSC):
* runtime/Error.h:
(JSC):
Source/WebCore:
Test: bindings/scripts/test/TestObj.idl
* bindings/scripts/CodeGeneratorJS.pm: Modified as described above.
(GenerateArgumentsCountCheck):
* bindings/js/JSDataViewCustom.cpp: Ditto.
(WebCore::getDataViewMember):
(WebCore::setDataViewMember):
* bindings/js/JSDeprecatedPeerConnectionCustom.cpp:
(WebCore::JSDeprecatedPeerConnectionConstructor::constructJSDeprecatedPeerConnection):
* bindings/js/JSDirectoryEntryCustom.cpp:
(WebCore::JSDirectoryEntry::getFile):
(WebCore::JSDirectoryEntry::getDirectory):
* bindings/js/JSSharedWorkerCustom.cpp:
(WebCore::JSSharedWorkerConstructor::constructJSSharedWorker):
* bindings/js/JSWebKitMutationObserverCustom.cpp:
(WebCore::JSWebKitMutationObserverConstructor::constructJSWebKitMutationObserver):
(WebCore::JSWebKitMutationObserver::observe):
* bindings/js/JSWorkerCustom.cpp:
(WebCore::JSWorkerConstructor::constructJSWorker):
* bindings/scripts/test/JS/JSFloat64Array.cpp: Updated run-bindings-tests.
(WebCore::jsFloat64ArrayPrototypeFunctionFoo):
* bindings/scripts/test/JS/JSTestActiveDOMObject.cpp:
(WebCore::jsTestActiveDOMObjectPrototypeFunctionExcitingFunction):
(WebCore::jsTestActiveDOMObjectPrototypeFunctionPostMessage):
* bindings/scripts/test/JS/JSTestCustomNamedGetter.cpp:
(WebCore::jsTestCustomNamedGetterPrototypeFunctionAnotherFunction):
* bindings/scripts/test/JS/JSTestEventTarget.cpp:
(WebCore::jsTestEventTargetPrototypeFunctionItem):
(WebCore::jsTestEventTargetPrototypeFunctionAddEventListener):
(WebCore::jsTestEventTargetPrototypeFunctionRemoveEventListener):
(WebCore::jsTestEventTargetPrototypeFunctionDispatchEvent):
* bindings/scripts/test/JS/JSTestInterface.cpp:
(WebCore::JSTestInterfaceConstructor::constructJSTestInterface):
(WebCore::jsTestInterfacePrototypeFunctionSupplementalMethod2):
* bindings/scripts/test/JS/JSTestMediaQueryListListener.cpp:
(WebCore::jsTestMediaQueryListListenerPrototypeFunctionMethod):
* bindings/scripts/test/JS/JSTestNamedConstructor.cpp:
(WebCore::JSTestNamedConstructorNamedConstructor::constructJSTestNamedConstructor):
* bindings/scripts/test/JS/JSTestObj.cpp:
(WebCore::JSTestObjConstructor::constructJSTestObj):
(WebCore::jsTestObjPrototypeFunctionVoidMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionIntMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionObjMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionMethodWithSequenceArg):
(WebCore::jsTestObjPrototypeFunctionMethodReturningSequence):
(WebCore::jsTestObjPrototypeFunctionMethodThatRequiresAllArgsAndThrows):
(WebCore::jsTestObjPrototypeFunctionSerializedValue):
(WebCore::jsTestObjPrototypeFunctionIdbKey):
(WebCore::jsTestObjPrototypeFunctionOptionsObject):
(WebCore::jsTestObjPrototypeFunctionAddEventListener):
(WebCore::jsTestObjPrototypeFunctionRemoveEventListener):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonOptionalArgAndOptionalArg):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonOptionalArgAndTwoOptionalArgs):
(WebCore::jsTestObjPrototypeFunctionMethodWithCallbackArg):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonCallbackArgAndCallbackArg):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod1):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod2):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod3):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod4):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod5):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod6):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod7):
(WebCore::jsTestObjConstructorFunctionClassMethod2):
(WebCore::jsTestObjConstructorFunctionOverloadedMethod11):
(WebCore::jsTestObjConstructorFunctionOverloadedMethod12):
(WebCore::jsTestObjPrototypeFunctionMethodWithUnsignedLongArray):
(WebCore::jsTestObjPrototypeFunctionConvert1):
(WebCore::jsTestObjPrototypeFunctionConvert2):
(WebCore::jsTestObjPrototypeFunctionConvert3):
(WebCore::jsTestObjPrototypeFunctionConvert4):
(WebCore::jsTestObjPrototypeFunctionConvert5):
(WebCore::jsTestObjPrototypeFunctionStrictFunction):
* bindings/scripts/test/JS/JSTestSerializedScriptValueInterface.cpp:
(WebCore::JSTestSerializedScriptValueInterfaceConstructor::constructJSTestSerializedScriptValueInterface):
(WebCore::jsTestSerializedScriptValueInterfacePrototypeFunctionAcceptTransferList):
git-svn-id: svn://svn.chromium.org/blink/trunk@115536 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void setJSTestObjIntSequenceAttr(ExecState* exec, JSObject* thisObject, JSValue value)
{
JSTestObj* castedThis = jsCast<JSTestObj*>(thisObject);
TestObj* impl = static_cast<TestObj*>(castedThis->impl());
impl->setIntSequenceAttr(toNativeArray<int>(exec, value));
}
|
void setJSTestObjIntSequenceAttr(ExecState* exec, JSObject* thisObject, JSValue value)
{
JSTestObj* castedThis = jsCast<JSTestObj*>(thisObject);
TestObj* impl = static_cast<TestObj*>(castedThis->impl());
impl->setIntSequenceAttr(toNativeArray<int>(exec, value));
}
|
C
|
Chrome
| 0 |
CVE-2011-4080
|
https://www.cvedetails.com/cve/CVE-2011-4080/
|
CWE-264
|
https://github.com/torvalds/linux/commit/bfdc0b497faa82a0ba2f9dddcf109231dd519fcc
|
bfdc0b497faa82a0ba2f9dddcf109231dd519fcc
|
sysctl: restrict write access to dmesg_restrict
When dmesg_restrict is set to 1 CAP_SYS_ADMIN is needed to read the kernel
ring buffer. But a root user without CAP_SYS_ADMIN is able to reset
dmesg_restrict to 0.
This is an issue when e.g. LXC (Linux Containers) is used and the complete
user space is running without CAP_SYS_ADMIN. An unprivileged and jailed
root user can bypass the dmesg_restrict protection.
With this patch writing to dmesg_restrict is only allowed when root has
CAP_SYS_ADMIN.
Signed-off-by: Richard Weinberger <[email protected]>
Acked-by: Dan Rosenberg <[email protected]>
Acked-by: Serge E. Hallyn <[email protected]>
Cc: Eric Paris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: James Morris <[email protected]>
Cc: Eugene Teo <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
int proc_dointvec_ms_jiffies(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
return -ENOSYS;
}
|
int proc_dointvec_ms_jiffies(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
return -ENOSYS;
}
|
C
|
linux
| 0 |
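The CVE-2011-4080 record above describes gating writes to dmesg_restrict behind CAP_SYS_ADMIN. The fragment below is a sketch in the style of the kernel's sysctl handlers, assuming the standard capable() check and the proc_dointvec_minmax() helper; the function name is invented, and the sketch illustrates the gate the commit message describes rather than reproducing the patch.

```c
/* Sketch only: refuse the write unless the caller has CAP_SYS_ADMIN,
 * then fall through to the normal min/max integer handler for reads
 * and permitted writes. */
static int proc_dointvec_sysadmin_gate(struct ctl_table *table, int write,
			void __user *buffer, size_t *lenp, loff_t *ppos)
{
	if (write && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
}
```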
CVE-2016-2324
|
https://www.cvedetails.com/cve/CVE-2016-2324/
|
CWE-119
|
https://github.com/git/git/commit/de1e67d0703894cb6ea782e36abb63976ab07e60
|
de1e67d0703894cb6ea782e36abb63976ab07e60
|
list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
|
static int want_object_in_pack(const unsigned char *sha1,
int exclude,
struct packed_git **found_pack,
off_t *found_offset)
{
struct packed_git *p;
if (!exclude && local && has_loose_object_nonlocal(sha1))
return 0;
*found_pack = NULL;
*found_offset = 0;
for (p = packed_git; p; p = p->next) {
off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
if (!*found_pack) {
if (!is_pack_valid(p))
continue;
*found_offset = offset;
*found_pack = p;
}
if (exclude)
return 1;
if (incremental)
return 0;
if (local && !p->pack_local)
return 0;
if (ignore_packed_keep && p->pack_local && p->pack_keep)
return 0;
}
}
return 1;
}
|
static int want_object_in_pack(const unsigned char *sha1,
int exclude,
struct packed_git **found_pack,
off_t *found_offset)
{
struct packed_git *p;
if (!exclude && local && has_loose_object_nonlocal(sha1))
return 0;
*found_pack = NULL;
*found_offset = 0;
for (p = packed_git; p; p = p->next) {
off_t offset = find_pack_entry_one(sha1, p);
if (offset) {
if (!*found_pack) {
if (!is_pack_valid(p))
continue;
*found_offset = offset;
*found_pack = p;
}
if (exclude)
return 1;
if (incremental)
return 0;
if (local && !p->pack_local)
return 0;
if (ignore_packed_keep && p->pack_local && p->pack_keep)
return 0;
}
}
return 1;
}
|
C
|
git
| 0 |
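The CVE-2016-2324 record's commit message explains the "append the entry name to the path buffer, invoke the callback, then roll the length back" trick. The standalone sketch below uses a tiny fixed-size buffer instead of git's strbuf (all names here are invented) to show why no per-entry allocation or copy is needed.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: a minimal path buffer standing in for git's strbuf. */
struct pathbuf {
    char buf[256];
    size_t len;
};

static void pathbuf_add(struct pathbuf *p, const char *s)
{
    size_t n = strlen(s);
    if (p->len + n < sizeof(p->buf) - 1) {
        memcpy(p->buf + p->len, s, n);
        p->len += n;
        p->buf[p->len] = '\0';
    }
}

static void show_object(const char *full_path)
{
    printf("object at %s\n", full_path);
}

int main(void)
{
    struct pathbuf path = { "a/b/", 4 };
    const char *entries[] = { "c", "d.txt" };

    for (size_t i = 0; i < 2; i++) {
        size_t base_len = path.len;        /* remember the directory part */
        pathbuf_add(&path, entries[i]);    /* temporarily form "a/b/<entry>" */
        show_object(path.buf);             /* callback sees the full path */
        path.len = base_len;               /* roll back: no copy, no free */
        path.buf[path.len] = '\0';
    }
    return 0;
}
```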
CVE-2016-5688
|
https://www.cvedetails.com/cve/CVE-2016-5688/
|
CWE-119
|
https://github.com/ImageMagick/ImageMagick/commit/aecd0ada163a4d6c769cec178955d5f3e9316f2f
|
aecd0ada163a4d6c769cec178955d5f3e9316f2f
|
Set pixel cache to undefined if any resource limit is exceeded
|
MagickExport void *GetAuthenticMetacontent(const Image *image)
{
CacheInfo
*magick_restrict cache_info;
const int
id = GetOpenMPThreadId();
assert(image != (const Image *) NULL);
assert(image->signature == MagickCoreSignature);
assert(image->cache != (Cache) NULL);
cache_info=(CacheInfo *) image->cache;
assert(cache_info->signature == MagickCoreSignature);
if (cache_info->methods.get_authentic_metacontent_from_handler !=
(GetAuthenticMetacontentFromHandler) NULL)
{
void
*metacontent;
metacontent=cache_info->methods.
get_authentic_metacontent_from_handler(image);
return(metacontent);
}
assert(id < (int) cache_info->number_threads);
return(cache_info->nexus_info[id]->metacontent);
}
|
MagickExport void *GetAuthenticMetacontent(const Image *image)
{
CacheInfo
*magick_restrict cache_info;
const int
id = GetOpenMPThreadId();
assert(image != (const Image *) NULL);
assert(image->signature == MagickCoreSignature);
assert(image->cache != (Cache) NULL);
cache_info=(CacheInfo *) image->cache;
assert(cache_info->signature == MagickCoreSignature);
if (cache_info->methods.get_authentic_metacontent_from_handler !=
(GetAuthenticMetacontentFromHandler) NULL)
{
void
*metacontent;
metacontent=cache_info->methods.
get_authentic_metacontent_from_handler(image);
return(metacontent);
}
assert(id < (int) cache_info->number_threads);
return(cache_info->nexus_info[id]->metacontent);
}
|
C
|
ImageMagick
| 0 |
CVE-2016-10133
|
https://www.cvedetails.com/cve/CVE-2016-10133/
|
CWE-119
|
http://git.ghostscript.com/?p=mujs.git;a=commit;h=77ab465f1c394bb77f00966cd950650f3f53cb24
|
77ab465f1c394bb77f00966cd950650f3f53cb24
| null |
void js_pushiterator(js_State *J, int idx, int own)
{
js_pushobject(J, jsV_newiterator(J, js_toobject(J, idx), own));
}
|
void js_pushiterator(js_State *J, int idx, int own)
{
js_pushobject(J, jsV_newiterator(J, js_toobject(J, idx), own));
}
|
C
|
ghostscript
| 0 |
CVE-2017-5847
|
https://www.cvedetails.com/cve/CVE-2017-5847/
|
CWE-125
|
https://github.com/GStreamer/gst-plugins-ugly/commit/d21017b52a585f145e8d62781bcc1c5fefc7ee37
|
d21017b52a585f145e8d62781bcc1c5fefc7ee37
|
asfdemux: Check that we have enough data available before parsing bool/uint extended content descriptors
https://bugzilla.gnome.org/show_bug.cgi?id=777955
|
gst_asf_demux_get_stream_video_format (asf_stream_video_format * fmt,
guint8 ** p_data, guint64 * p_size)
{
if (*p_size < (4 + 4 + 4 + 2 + 2 + 4 + 4 + 4 + 4 + 4 + 4))
return FALSE;
fmt->size = gst_asf_demux_get_uint32 (p_data, p_size);
/* Sanity checks */
if (fmt->size < 40) {
GST_WARNING ("Corrupted asf_stream_video_format (size < 40)");
return FALSE;
}
if ((guint64) fmt->size - 4 > *p_size) {
GST_WARNING ("Corrupted asf_stream_video_format (codec_data is too small)");
return FALSE;
}
fmt->width = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->height = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->planes = gst_asf_demux_get_uint16 (p_data, p_size);
fmt->depth = gst_asf_demux_get_uint16 (p_data, p_size);
fmt->tag = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->image_size = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->xpels_meter = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->ypels_meter = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->num_colors = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->imp_colors = gst_asf_demux_get_uint32 (p_data, p_size);
return TRUE;
}
|
gst_asf_demux_get_stream_video_format (asf_stream_video_format * fmt,
guint8 ** p_data, guint64 * p_size)
{
if (*p_size < (4 + 4 + 4 + 2 + 2 + 4 + 4 + 4 + 4 + 4 + 4))
return FALSE;
fmt->size = gst_asf_demux_get_uint32 (p_data, p_size);
/* Sanity checks */
if (fmt->size < 40) {
GST_WARNING ("Corrupted asf_stream_video_format (size < 40)");
return FALSE;
}
if ((guint64) fmt->size - 4 > *p_size) {
GST_WARNING ("Corrupted asf_stream_video_format (codec_data is too small)");
return FALSE;
}
fmt->width = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->height = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->planes = gst_asf_demux_get_uint16 (p_data, p_size);
fmt->depth = gst_asf_demux_get_uint16 (p_data, p_size);
fmt->tag = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->image_size = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->xpels_meter = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->ypels_meter = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->num_colors = gst_asf_demux_get_uint32 (p_data, p_size);
fmt->imp_colors = gst_asf_demux_get_uint32 (p_data, p_size);
return TRUE;
}
|
C
|
gst-plugins-ugly
| 0 |
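The CVE-2017-5847 record is about checking the remaining byte count before parsing descriptor fields. The standalone sketch below (field layout and names invented, endianness ignored) shows the general guard pattern: verify both fixed-size reads and caller-declared payload lengths against the bytes actually available.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: refuse to read a field unless the remaining size
 * covers it, and refuse declared lengths larger than what is left. */
static int read_u16(const uint8_t **data, uint64_t *remaining, uint16_t *out)
{
    if (*remaining < 2)
        return 0;                 /* not enough data for this field */
    memcpy(out, *data, 2);        /* demo ignores endianness */
    *data += 2;
    *remaining -= 2;
    return 1;
}

static int parse_extended_descriptor(const uint8_t *data, uint64_t size)
{
    uint16_t name_len, value_len;

    if (!read_u16(&data, &size, &name_len) || size < name_len)
        return 0;                 /* declared name longer than the data */
    data += name_len;
    size -= name_len;

    if (!read_u16(&data, &size, &value_len) || size < value_len)
        return 0;                 /* declared value longer than the data */
    return 1;
}

int main(void)
{
    uint8_t truncated[] = { 0x10, 0x00 };  /* declares a name, provides no bytes for it */
    printf("parse ok: %d\n", parse_extended_descriptor(truncated, sizeof(truncated)));
    return 0;
}
```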
CVE-2017-16932
|
https://www.cvedetails.com/cve/CVE-2017-16932/
|
CWE-835
|
https://github.com/GNOME/libxml2/commit/899a5d9f0ed13b8e32449a08a361e0de127dd961
|
899a5d9f0ed13b8e32449a08a361e0de127dd961
|
Detect infinite recursion in parameter entities
When expanding a parameter entity in a DTD, infinite recursion could
lead to an infinite loop or memory exhaustion.
Thanks to Wei Lei for the first of many reports.
Fixes bug 759579.
|
xmlSAXParseDTD(xmlSAXHandlerPtr sax, const xmlChar *ExternalID,
const xmlChar *SystemID) {
xmlDtdPtr ret = NULL;
xmlParserCtxtPtr ctxt;
xmlParserInputPtr input = NULL;
xmlCharEncoding enc;
xmlChar* systemIdCanonic;
if ((ExternalID == NULL) && (SystemID == NULL)) return(NULL);
ctxt = xmlNewParserCtxt();
if (ctxt == NULL) {
return(NULL);
}
/* We are loading a DTD */
ctxt->options |= XML_PARSE_DTDLOAD;
/*
* Set-up the SAX context
*/
if (sax != NULL) {
if (ctxt->sax != NULL)
xmlFree(ctxt->sax);
ctxt->sax = sax;
ctxt->userData = ctxt;
}
/*
* Canonicalise the system ID
*/
systemIdCanonic = xmlCanonicPath(SystemID);
if ((SystemID != NULL) && (systemIdCanonic == NULL)) {
xmlFreeParserCtxt(ctxt);
return(NULL);
}
/*
* Ask the Entity resolver to load the damn thing
*/
if ((ctxt->sax != NULL) && (ctxt->sax->resolveEntity != NULL))
input = ctxt->sax->resolveEntity(ctxt->userData, ExternalID,
systemIdCanonic);
if (input == NULL) {
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
if (systemIdCanonic != NULL)
xmlFree(systemIdCanonic);
return(NULL);
}
/*
* plug some encoding conversion routines here.
*/
if (xmlPushInput(ctxt, input) < 0) {
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
if (systemIdCanonic != NULL)
xmlFree(systemIdCanonic);
return(NULL);
}
if ((ctxt->input->end - ctxt->input->cur) >= 4) {
enc = xmlDetectCharEncoding(ctxt->input->cur, 4);
xmlSwitchEncoding(ctxt, enc);
}
if (input->filename == NULL)
input->filename = (char *) systemIdCanonic;
else
xmlFree(systemIdCanonic);
input->line = 1;
input->col = 1;
input->base = ctxt->input->cur;
input->cur = ctxt->input->cur;
input->free = NULL;
/*
* let's parse that entity knowing it's an external subset.
*/
ctxt->inSubset = 2;
ctxt->myDoc = xmlNewDoc(BAD_CAST "1.0");
if (ctxt->myDoc == NULL) {
xmlErrMemory(ctxt, "New Doc failed");
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
return(NULL);
}
ctxt->myDoc->properties = XML_DOC_INTERNAL;
ctxt->myDoc->extSubset = xmlNewDtd(ctxt->myDoc, BAD_CAST "none",
ExternalID, SystemID);
xmlParseExternalSubset(ctxt, ExternalID, SystemID);
if (ctxt->myDoc != NULL) {
if (ctxt->wellFormed) {
ret = ctxt->myDoc->extSubset;
ctxt->myDoc->extSubset = NULL;
if (ret != NULL) {
xmlNodePtr tmp;
ret->doc = NULL;
tmp = ret->children;
while (tmp != NULL) {
tmp->doc = NULL;
tmp = tmp->next;
}
}
} else {
ret = NULL;
}
xmlFreeDoc(ctxt->myDoc);
ctxt->myDoc = NULL;
}
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
return(ret);
}
|
xmlSAXParseDTD(xmlSAXHandlerPtr sax, const xmlChar *ExternalID,
const xmlChar *SystemID) {
xmlDtdPtr ret = NULL;
xmlParserCtxtPtr ctxt;
xmlParserInputPtr input = NULL;
xmlCharEncoding enc;
xmlChar* systemIdCanonic;
if ((ExternalID == NULL) && (SystemID == NULL)) return(NULL);
ctxt = xmlNewParserCtxt();
if (ctxt == NULL) {
return(NULL);
}
/* We are loading a DTD */
ctxt->options |= XML_PARSE_DTDLOAD;
/*
* Set-up the SAX context
*/
if (sax != NULL) {
if (ctxt->sax != NULL)
xmlFree(ctxt->sax);
ctxt->sax = sax;
ctxt->userData = ctxt;
}
/*
* Canonicalise the system ID
*/
systemIdCanonic = xmlCanonicPath(SystemID);
if ((SystemID != NULL) && (systemIdCanonic == NULL)) {
xmlFreeParserCtxt(ctxt);
return(NULL);
}
/*
* Ask the Entity resolver to load the damn thing
*/
if ((ctxt->sax != NULL) && (ctxt->sax->resolveEntity != NULL))
input = ctxt->sax->resolveEntity(ctxt->userData, ExternalID,
systemIdCanonic);
if (input == NULL) {
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
if (systemIdCanonic != NULL)
xmlFree(systemIdCanonic);
return(NULL);
}
/*
* plug some encoding conversion routines here.
*/
if (xmlPushInput(ctxt, input) < 0) {
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
if (systemIdCanonic != NULL)
xmlFree(systemIdCanonic);
return(NULL);
}
if ((ctxt->input->end - ctxt->input->cur) >= 4) {
enc = xmlDetectCharEncoding(ctxt->input->cur, 4);
xmlSwitchEncoding(ctxt, enc);
}
if (input->filename == NULL)
input->filename = (char *) systemIdCanonic;
else
xmlFree(systemIdCanonic);
input->line = 1;
input->col = 1;
input->base = ctxt->input->cur;
input->cur = ctxt->input->cur;
input->free = NULL;
/*
* let's parse that entity knowing it's an external subset.
*/
ctxt->inSubset = 2;
ctxt->myDoc = xmlNewDoc(BAD_CAST "1.0");
if (ctxt->myDoc == NULL) {
xmlErrMemory(ctxt, "New Doc failed");
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
return(NULL);
}
ctxt->myDoc->properties = XML_DOC_INTERNAL;
ctxt->myDoc->extSubset = xmlNewDtd(ctxt->myDoc, BAD_CAST "none",
ExternalID, SystemID);
xmlParseExternalSubset(ctxt, ExternalID, SystemID);
if (ctxt->myDoc != NULL) {
if (ctxt->wellFormed) {
ret = ctxt->myDoc->extSubset;
ctxt->myDoc->extSubset = NULL;
if (ret != NULL) {
xmlNodePtr tmp;
ret->doc = NULL;
tmp = ret->children;
while (tmp != NULL) {
tmp->doc = NULL;
tmp = tmp->next;
}
}
} else {
ret = NULL;
}
xmlFreeDoc(ctxt->myDoc);
ctxt->myDoc = NULL;
}
if (sax != NULL) ctxt->sax = NULL;
xmlFreeParserCtxt(ctxt);
return(ret);
}
|
C
|
libxml2
| 0 |
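The commit message in the record above is about stopping runaway parameter-entity expansion, but the fix itself is not visible in xmlSAXParseDTD(), so what follows is only a generic C sketch of the underlying idea: bound the expansion depth so that self-referencing entities fail instead of looping or exhausting memory. struct entity, expand_entity and MAX_ENTITY_DEPTH are invented for illustration and are not libxml2 API.

#include <stdio.h>
#include <stddef.h>

#define MAX_ENTITY_DEPTH 32   /* arbitrary cap for the sketch; libxml2 does its own accounting */

struct entity {
    const char *name;
    struct entity *references;   /* entity this one expands to, NULL if plain text */
};

/* Expand an entity, refusing to recurse deeper than MAX_ENTITY_DEPTH.
 * Returns 0 on success, -1 once the limit is hit, which also catches
 * A -> B -> A style cycles. */
static int expand_entity(const struct entity *ent, int depth)
{
    if (ent == NULL)
        return 0;
    if (depth > MAX_ENTITY_DEPTH) {
        fprintf(stderr, "entity '%s': recursion limit reached\n", ent->name);
        return -1;
    }
    /* ... emit ent's replacement text here ... */
    return expand_entity(ent->references, depth + 1);
}

int main(void)
{
    /* Two entities that reference each other: expansion must be rejected. */
    struct entity a = { "a", NULL };
    struct entity b = { "b", &a };
    a.references = &b;
    return expand_entity(&a, 0) == -1 ? 0 : 1;
}

A depth cap is the bluntest way to break such a cycle; the actual libxml2 change is more targeted, but the terminating property it buys is the same.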
null | null | null |
https://github.com/chromium/chromium/commit/70cff275010b33dbab628f837d76364359065b79
|
70cff275010b33dbab628f837d76364359065b79
|
Disable compositor thread input handling on windows
BUG=160122
Review URL: https://chromiumcodereview.appspot.com/11946019
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@177369 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderViewImpl::showContextMenu(
WebFrame* frame, const WebContextMenuData& data) {
ContextMenuParams params(data);
string16 selection_text;
if (!selection_text_.empty() && !selection_range_.is_empty()) {
const int start = selection_range_.GetMin() - selection_text_offset_;
const size_t length = selection_range_.length();
if (start >= 0 && start + length <= selection_text_.length())
selection_text = selection_text_.substr(start, length);
}
if (params.selection_text != selection_text) {
selection_text_ = params.selection_text;
selection_text_offset_ = 0;
selection_range_ = ui::Range(0, selection_text_.length());
Send(new ViewHostMsg_SelectionChanged(routing_id_,
selection_text_,
selection_text_offset_,
selection_range_));
}
if (frame)
params.frame_id = frame->identifier();
if (params.src_url.spec().size() > kMaxURLChars)
params.src_url = GURL();
context_menu_node_ = data.node;
#if defined(OS_ANDROID)
gfx::Rect start_rect;
gfx::Rect end_rect;
GetSelectionBounds(&start_rect, &end_rect);
params.selection_start =
gfx::Point(start_rect.x(), start_rect.bottom()) + GetScrollOffset();
params.selection_end =
gfx::Point(end_rect.right(), end_rect.bottom()) + GetScrollOffset();
#endif
Send(new ViewHostMsg_ContextMenu(routing_id_, params));
FOR_EACH_OBSERVER(
RenderViewObserver, observers_, DidRequestShowContextMenu(frame, data));
}
|
void RenderViewImpl::showContextMenu(
WebFrame* frame, const WebContextMenuData& data) {
ContextMenuParams params(data);
string16 selection_text;
if (!selection_text_.empty() && !selection_range_.is_empty()) {
const int start = selection_range_.GetMin() - selection_text_offset_;
const size_t length = selection_range_.length();
if (start >= 0 && start + length <= selection_text_.length())
selection_text = selection_text_.substr(start, length);
}
if (params.selection_text != selection_text) {
selection_text_ = params.selection_text;
selection_text_offset_ = 0;
selection_range_ = ui::Range(0, selection_text_.length());
Send(new ViewHostMsg_SelectionChanged(routing_id_,
selection_text_,
selection_text_offset_,
selection_range_));
}
if (frame)
params.frame_id = frame->identifier();
if (params.src_url.spec().size() > kMaxURLChars)
params.src_url = GURL();
context_menu_node_ = data.node;
#if defined(OS_ANDROID)
gfx::Rect start_rect;
gfx::Rect end_rect;
GetSelectionBounds(&start_rect, &end_rect);
params.selection_start =
gfx::Point(start_rect.x(), start_rect.bottom()) + GetScrollOffset();
params.selection_end =
gfx::Point(end_rect.right(), end_rect.bottom()) + GetScrollOffset();
#endif
Send(new ViewHostMsg_ContextMenu(routing_id_, params));
FOR_EACH_OBSERVER(
RenderViewObserver, observers_, DidRequestShowContextMenu(frame, data));
}
|
C
|
Chrome
| 0 |
CVE-2019-16910
|
https://www.cvedetails.com/cve/CVE-2019-16910/
|
CWE-200
|
https://github.com/ARMmbed/mbedtls/commit/298a43a77ec0ed2c19a8c924ddd8571ef3e65dfd
|
298a43a77ec0ed2c19a8c924ddd8571ef3e65dfd
|
Merge remote-tracking branch 'upstream-restricted/pr/549' into mbedtls-2.7-restricted
|
static inline ecp_curve_type ecp_get_type( const mbedtls_ecp_group *grp )
{
if( grp->G.X.p == NULL )
return( ECP_TYPE_NONE );
if( grp->G.Y.p == NULL )
return( ECP_TYPE_MONTGOMERY );
else
return( ECP_TYPE_SHORT_WEIERSTRASS );
}
|
static inline ecp_curve_type ecp_get_type( const mbedtls_ecp_group *grp )
{
if( grp->G.X.p == NULL )
return( ECP_TYPE_NONE );
if( grp->G.Y.p == NULL )
return( ECP_TYPE_MONTGOMERY );
else
return( ECP_TYPE_SHORT_WEIERSTRASS );
}
|
C
|
mbedtls
| 0 |
CVE-2011-3055
|
https://www.cvedetails.com/cve/CVE-2011-3055/
| null |
https://github.com/chromium/chromium/commit/e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
[V8] Pass Isolate to throwNotEnoughArgumentsError()
https://bugs.webkit.org/show_bug.cgi?id=86983
Reviewed by Adam Barth.
The objective is to pass Isolate around in V8 bindings.
This patch passes Isolate to throwNotEnoughArgumentsError().
No tests. No change in behavior.
* bindings/scripts/CodeGeneratorV8.pm:
(GenerateArgumentsCountCheck):
(GenerateEventConstructorCallback):
* bindings/scripts/test/V8/V8Float64Array.cpp:
(WebCore::Float64ArrayV8Internal::fooCallback):
* bindings/scripts/test/V8/V8TestActiveDOMObject.cpp:
(WebCore::TestActiveDOMObjectV8Internal::excitingFunctionCallback):
(WebCore::TestActiveDOMObjectV8Internal::postMessageCallback):
* bindings/scripts/test/V8/V8TestCustomNamedGetter.cpp:
(WebCore::TestCustomNamedGetterV8Internal::anotherFunctionCallback):
* bindings/scripts/test/V8/V8TestEventConstructor.cpp:
(WebCore::V8TestEventConstructor::constructorCallback):
* bindings/scripts/test/V8/V8TestEventTarget.cpp:
(WebCore::TestEventTargetV8Internal::itemCallback):
(WebCore::TestEventTargetV8Internal::dispatchEventCallback):
* bindings/scripts/test/V8/V8TestInterface.cpp:
(WebCore::TestInterfaceV8Internal::supplementalMethod2Callback):
(WebCore::V8TestInterface::constructorCallback):
* bindings/scripts/test/V8/V8TestMediaQueryListListener.cpp:
(WebCore::TestMediaQueryListListenerV8Internal::methodCallback):
* bindings/scripts/test/V8/V8TestNamedConstructor.cpp:
(WebCore::V8TestNamedConstructorConstructorCallback):
* bindings/scripts/test/V8/V8TestObj.cpp:
(WebCore::TestObjV8Internal::voidMethodWithArgsCallback):
(WebCore::TestObjV8Internal::intMethodWithArgsCallback):
(WebCore::TestObjV8Internal::objMethodWithArgsCallback):
(WebCore::TestObjV8Internal::methodWithSequenceArgCallback):
(WebCore::TestObjV8Internal::methodReturningSequenceCallback):
(WebCore::TestObjV8Internal::methodThatRequiresAllArgsAndThrowsCallback):
(WebCore::TestObjV8Internal::serializedValueCallback):
(WebCore::TestObjV8Internal::idbKeyCallback):
(WebCore::TestObjV8Internal::optionsObjectCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndOptionalArgCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndTwoOptionalArgsCallback):
(WebCore::TestObjV8Internal::methodWithCallbackArgCallback):
(WebCore::TestObjV8Internal::methodWithNonCallbackArgAndCallbackArgCallback):
(WebCore::TestObjV8Internal::overloadedMethod1Callback):
(WebCore::TestObjV8Internal::overloadedMethod2Callback):
(WebCore::TestObjV8Internal::overloadedMethod3Callback):
(WebCore::TestObjV8Internal::overloadedMethod4Callback):
(WebCore::TestObjV8Internal::overloadedMethod5Callback):
(WebCore::TestObjV8Internal::overloadedMethod6Callback):
(WebCore::TestObjV8Internal::overloadedMethod7Callback):
(WebCore::TestObjV8Internal::overloadedMethod11Callback):
(WebCore::TestObjV8Internal::overloadedMethod12Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod1Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod2Callback):
(WebCore::TestObjV8Internal::convert1Callback):
(WebCore::TestObjV8Internal::convert2Callback):
(WebCore::TestObjV8Internal::convert3Callback):
(WebCore::TestObjV8Internal::convert4Callback):
(WebCore::TestObjV8Internal::convert5Callback):
(WebCore::TestObjV8Internal::strictFunctionCallback):
(WebCore::V8TestObj::constructorCallback):
* bindings/scripts/test/V8/V8TestSerializedScriptValueInterface.cpp:
(WebCore::TestSerializedScriptValueInterfaceV8Internal::acceptTransferListCallback):
(WebCore::V8TestSerializedScriptValueInterface::constructorCallback):
* bindings/v8/ScriptController.cpp:
(WebCore::setValueAndClosePopupCallback):
* bindings/v8/V8Proxy.cpp:
(WebCore::V8Proxy::throwNotEnoughArgumentsError):
* bindings/v8/V8Proxy.h:
(V8Proxy):
* bindings/v8/custom/V8AudioContextCustom.cpp:
(WebCore::V8AudioContext::constructorCallback):
* bindings/v8/custom/V8DataViewCustom.cpp:
(WebCore::V8DataView::getInt8Callback):
(WebCore::V8DataView::getUint8Callback):
(WebCore::V8DataView::setInt8Callback):
(WebCore::V8DataView::setUint8Callback):
* bindings/v8/custom/V8DirectoryEntryCustom.cpp:
(WebCore::V8DirectoryEntry::getDirectoryCallback):
(WebCore::V8DirectoryEntry::getFileCallback):
* bindings/v8/custom/V8IntentConstructor.cpp:
(WebCore::V8Intent::constructorCallback):
* bindings/v8/custom/V8SVGLengthCustom.cpp:
(WebCore::V8SVGLength::convertToSpecifiedUnitsCallback):
* bindings/v8/custom/V8WebGLRenderingContextCustom.cpp:
(WebCore::getObjectParameter):
(WebCore::V8WebGLRenderingContext::getAttachedShadersCallback):
(WebCore::V8WebGLRenderingContext::getExtensionCallback):
(WebCore::V8WebGLRenderingContext::getFramebufferAttachmentParameterCallback):
(WebCore::V8WebGLRenderingContext::getParameterCallback):
(WebCore::V8WebGLRenderingContext::getProgramParameterCallback):
(WebCore::V8WebGLRenderingContext::getShaderParameterCallback):
(WebCore::V8WebGLRenderingContext::getUniformCallback):
(WebCore::vertexAttribAndUniformHelperf):
(WebCore::uniformHelperi):
(WebCore::uniformMatrixHelper):
* bindings/v8/custom/V8WebKitMutationObserverCustom.cpp:
(WebCore::V8WebKitMutationObserver::constructorCallback):
(WebCore::V8WebKitMutationObserver::observeCallback):
* bindings/v8/custom/V8WebSocketCustom.cpp:
(WebCore::V8WebSocket::constructorCallback):
(WebCore::V8WebSocket::sendCallback):
* bindings/v8/custom/V8XMLHttpRequestCustom.cpp:
(WebCore::V8XMLHttpRequest::openCallback):
git-svn-id: svn://svn.chromium.org/blink/trunk@117736 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
ScriptValue ScriptController::callFunctionEvenIfScriptDisabled(v8::Handle<v8::Function> function, v8::Handle<v8::Object> receiver, int argc, v8::Handle<v8::Value> argv[])
{
return ScriptValue(m_proxy->callFunction(function, receiver, argc, argv));
}
|
ScriptValue ScriptController::callFunctionEvenIfScriptDisabled(v8::Handle<v8::Function> function, v8::Handle<v8::Object> receiver, int argc, v8::Handle<v8::Value> argv[])
{
return ScriptValue(m_proxy->callFunction(function, receiver, argc, argv));
}
|
C
|
Chrome
| 0 |
CVE-2015-5232
|
https://www.cvedetails.com/cve/CVE-2015-5232/
|
CWE-362
|
https://github.com/01org/opa-fm/commit/c5759e7b76f5bf844be6c6641cc1b356bbc83869
|
c5759e7b76f5bf844be6c6641cc1b356bbc83869
|
Fix scripts and code that use well-known tmp files.
|
int sm_adaptive_routing(p_fm_config_conx_hdlt hdl, fm_mgr_type_t mgr, int argc, char *argv[]) {
fm_mgr_config_errno_t res;
fm_msg_ret_code_t ret_code;
uint32_t enable=0;
if (argc == 1) {
enable = atol(argv[0]);
if((res = fm_mgr_simple_query(hdl, FM_ACT_GET, FM_DT_SM_SET_ADAPTIVE_ROUTING, mgr, sizeof(enable), (void*)&enable, &ret_code)) != FM_CONF_OK)
{
fprintf(stderr, "sm_adaptive_routing: Failed to retrieve data: \n"
"\tError:(%d) %s \n\tRet code:(%d) %s\n",
res, fm_mgr_get_error_str(res),ret_code,
fm_mgr_get_resp_error_str(ret_code));
} else {
printf("Successfully sent SM Adaptive Routing control to local SM instance\n");
}
} else if (argc == 0) {
if((res = fm_mgr_simple_query(hdl, FM_ACT_GET, FM_DT_SM_GET_ADAPTIVE_ROUTING, mgr, sizeof(enable), (void*)&enable, &ret_code)) != FM_CONF_OK)
{
fprintf(stderr, "sm_adaptive_routing: Failed to retrieve data: \n"
"\tError:(%d) %s \n\tRet code:(%d) %s\n",
res, fm_mgr_get_error_str(res),ret_code,
fm_mgr_get_resp_error_str(ret_code));
} else {
printf("SM Adaptive Routing is %s\n", enable ? "enabled" : "disabled");
}
}
return 0;
}
|
int sm_adaptive_routing(p_fm_config_conx_hdlt hdl, fm_mgr_type_t mgr, int argc, char *argv[]) {
fm_mgr_config_errno_t res;
fm_msg_ret_code_t ret_code;
uint32_t enable=0;
if (argc == 1) {
enable = atol(argv[0]);
if((res = fm_mgr_simple_query(hdl, FM_ACT_GET, FM_DT_SM_SET_ADAPTIVE_ROUTING, mgr, sizeof(enable), (void*)&enable, &ret_code)) != FM_CONF_OK)
{
fprintf(stderr, "sm_adaptive_routing: Failed to retrieve data: \n"
"\tError:(%d) %s \n\tRet code:(%d) %s\n",
res, fm_mgr_get_error_str(res),ret_code,
fm_mgr_get_resp_error_str(ret_code));
} else {
printf("Successfully sent SM Adaptive Routing control to local SM instance\n");
}
} else if (argc == 0) {
if((res = fm_mgr_simple_query(hdl, FM_ACT_GET, FM_DT_SM_GET_ADAPTIVE_ROUTING, mgr, sizeof(enable), (void*)&enable, &ret_code)) != FM_CONF_OK)
{
fprintf(stderr, "sm_adaptive_routing: Failed to retrieve data: \n"
"\tError:(%d) %s \n\tRet code:(%d) %s\n",
res, fm_mgr_get_error_str(res),ret_code,
fm_mgr_get_resp_error_str(ret_code));
} else {
printf("SM Adaptive Routing is %s\n", enable ? "enabled" : "disabled");
}
}
return 0;
}
|
C
|
opa-ff
| 0 |
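The commit message above ("Fix scripts and code that use well-known tmp files") refers to the predictable-/tmp-name race behind this CWE-362 entry; the function stored in the record does not show that part of the change. A minimal POSIX C sketch of the usual remedy, assuming nothing about opa-fm's actual paths — open_private_tmpfile is an invented helper, and mkstemp() is what performs the atomic create:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Create a private temporary file.  mkstemp() replaces the XXXXXX suffix,
 * creates the file with O_CREAT|O_EXCL and mode 0600, and returns an open
 * descriptor, so no other process can pre-create or symlink the name. */
static int open_private_tmpfile(char *path_out, size_t path_len)
{
    char tmpl[] = "/tmp/fm-scratch-XXXXXX";   /* a template, not a fixed well-known name */
    int fd = mkstemp(tmpl);
    if (fd < 0) {
        perror("mkstemp");
        return -1;
    }
    snprintf(path_out, path_len, "%s", tmpl); /* caller unlinks it when done */
    return fd;
}

int main(void)
{
    char path[64];
    int fd = open_private_tmpfile(path, sizeof path);
    if (fd < 0)
        return 1;
    dprintf(fd, "scratch data\n");
    close(fd);
    unlink(path);            /* remove the file once it is no longer needed */
    return 0;
}

Because the name is generated and the file is opened exclusively in one call, there is no window in which an attacker can substitute a symlink for the expected path.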
CVE-2019-11922
|
https://www.cvedetails.com/cve/CVE-2019-11922/
|
CWE-362
|
https://github.com/facebook/zstd/pull/1404/commits/3e5cdf1b6a85843e991d7d10f6a2567c15580da0
|
3e5cdf1b6a85843e991d7d10f6a2567c15580da0
|
fixed T36302429
|
size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pss)
{
U64 const pledgedSrcSize = (pss==0) ? ZSTD_CONTENTSIZE_UNKNOWN : pss; /* temporary : 0 interpreted as "unknown" during transition period. Users willing to specify "unknown" **must** use ZSTD_CONTENTSIZE_UNKNOWN. `0` will be interpreted as "empty" in the future */
ZSTD_CCtxParams_init(&zcs->requestedParams, compressionLevel);
return ZSTD_initCStream_internal(zcs, NULL, 0, NULL, zcs->requestedParams, pledgedSrcSize);
}
|
size_t ZSTD_initCStream_srcSize(ZSTD_CStream* zcs, int compressionLevel, unsigned long long pss)
{
U64 const pledgedSrcSize = (pss==0) ? ZSTD_CONTENTSIZE_UNKNOWN : pss; /* temporary : 0 interpreted as "unknown" during transition period. Users willing to specify "unknown" **must** use ZSTD_CONTENTSIZE_UNKNOWN. `0` will be interpreted as "empty" in the future */
ZSTD_CCtxParams_init(&zcs->requestedParams, compressionLevel);
return ZSTD_initCStream_internal(zcs, NULL, 0, NULL, zcs->requestedParams, pledgedSrcSize);
}
|
C
|
zstd
| 0 |
CVE-2015-8839
|
https://www.cvedetails.com/cve/CVE-2015-8839/
|
CWE-362
|
https://github.com/torvalds/linux/commit/ea3d7209ca01da209cda6f0dea8be9cc4b7a933b
|
ea3d7209ca01da209cda6f0dea8be9cc4b7a933b
|
ext4: fix races between page faults and hole punching
Currently, page faults and hole punching are completely unsynchronized.
This can result in page fault faulting in a page into a range that we
are punching after truncate_pagecache_range() has been called and thus
we can end up with a page mapped to disk blocks that will be shortly
freed. Filesystem corruption will shortly follow. Note that the same
race is avoided for truncate by checking page fault offset against
i_size but there isn't similar mechanism available for punching holes.
Fix the problem by creating new rw semaphore i_mmap_sem in inode and
grab it for writing over truncate, hole punching, and other functions
removing blocks from extent tree and for read over page faults. We
cannot easily use i_data_sem for this since that ranks below transaction
start and we need something ranking above it so that it can be held over
the whole truncate / hole punching operation. Also remove various
workarounds we had in the code to reduce race window when page fault
could have created pages with stale mapping information.
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
|
static void destroy_inodecache(void)
{
/*
* Make sure all delayed rcu free inodes are flushed before we
* destroy cache.
*/
rcu_barrier();
kmem_cache_destroy(ext4_inode_cachep);
}
|
static void destroy_inodecache(void)
{
/*
* Make sure all delayed rcu free inodes are flushed before we
* destroy cache.
*/
rcu_barrier();
kmem_cache_destroy(ext4_inode_cachep);
}
|
C
|
linux
| 0 |
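The commit message in this record spells the fix out as an ordering rule: page faults take the new i_mmap_sem for read, while truncate and hole punching take it for write, so a fault can never map a page over blocks that are about to be freed. destroy_inodecache() above is unrelated to that change, so here is only a userspace pthread analogue of the same reader/writer discipline — fault_in_page, punch_hole and page_mapped are invented stand-ins, not kernel code:

#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define NPAGES 16

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;
static bool page_mapped[NPAGES];   /* stand-in for "this page points at disk blocks" */

/* Fault path: many readers may run concurrently, but never while a hole is
 * being punched, so a fault cannot map a page over blocks being freed. */
static void fault_in_page(int idx)
{
    pthread_rwlock_rdlock(&mmap_sem);
    page_mapped[idx] = true;       /* ... set up the mapping ... */
    pthread_rwlock_unlock(&mmap_sem);
}

/* Hole punch: holding the lock for write excludes every fault for the whole
 * unmap-then-free sequence, which is the property the ext4 fix needs. */
static void punch_hole(int first, int last)
{
    pthread_rwlock_wrlock(&mmap_sem);
    for (int i = first; i <= last && i < NPAGES; i++)
        page_mapped[i] = false;    /* ... unmap the pages ... */
    /* ... free the underlying blocks while still holding the lock ... */
    pthread_rwlock_unlock(&mmap_sem);
}

int main(void)
{
    memset(page_mapped, 0, sizeof page_mapped);
    fault_in_page(3);
    punch_hole(0, 7);
    return page_mapped[3] ? 1 : 0;
}

Keeping the write lock across both the unmap and the block free is the point: no reader can observe the intermediate state, which is exactly the window the original race exploited.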
CVE-2012-2880
|
https://www.cvedetails.com/cve/CVE-2012-2880/
|
CWE-362
|
https://github.com/chromium/chromium/commit/fcd3a7a671ecf2d5f46ea34787d27507a914d2f5
|
fcd3a7a671ecf2d5f46ea34787d27507a914d2f5
|
[Sync] Cleanup all tab sync enabling logic now that its on by default.
BUG=none
TEST=
Review URL: https://chromiumcodereview.appspot.com/10443046
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@139462 0039d316-1c4b-4281-b951-d872f2087c98
|
void SyncTest::DisableNotifications() {
DisableNotificationsImpl();
notifications_enabled_ = false;
}
|
void SyncTest::DisableNotifications() {
DisableNotificationsImpl();
notifications_enabled_ = false;
}
|
C
|
Chrome
| 0 |
CVE-2015-1265
|
https://www.cvedetails.com/cve/CVE-2015-1265/
| null |
https://github.com/chromium/chromium/commit/8ea5693d5cf304e56174bb6b65412f04209904db
|
8ea5693d5cf304e56174bb6b65412f04209904db
|
Move Editor::Transpose() out of Editor class
This patch moves |Editor::Transpose()| out of |Editor| class as preparation of
expanding it into |ExecutTranspose()| in "EditorCommand.cpp" to make |Editor|
class simpler for improving code health.
Following patch will expand |Transpose()| into |ExecutTranspose()|.
Bug: 672405
Change-Id: Icde253623f31813d2b4517c4da7d4798bd5fadf6
Reviewed-on: https://chromium-review.googlesource.com/583880
Reviewed-by: Xiaocheng Hu <[email protected]>
Commit-Queue: Yoshifumi Inoue <[email protected]>
Cr-Commit-Position: refs/heads/master@{#489518}
|
void Editor::ApplyStyle(StylePropertySet* style,
InputEvent::InputType input_type) {
switch (GetFrame()
.Selection()
.ComputeVisibleSelectionInDOMTreeDeprecated()
.GetSelectionType()) {
case kNoSelection:
break;
case kCaretSelection:
ComputeAndSetTypingStyle(style, input_type);
break;
case kRangeSelection:
if (style) {
DCHECK(GetFrame().GetDocument());
ApplyStyleCommand::Create(*GetFrame().GetDocument(),
EditingStyle::Create(style), input_type)
->Apply();
}
break;
}
}
|
void Editor::ApplyStyle(StylePropertySet* style,
InputEvent::InputType input_type) {
switch (GetFrame()
.Selection()
.ComputeVisibleSelectionInDOMTreeDeprecated()
.GetSelectionType()) {
case kNoSelection:
break;
case kCaretSelection:
ComputeAndSetTypingStyle(style, input_type);
break;
case kRangeSelection:
if (style) {
DCHECK(GetFrame().GetDocument());
ApplyStyleCommand::Create(*GetFrame().GetDocument(),
EditingStyle::Create(style), input_type)
->Apply();
}
break;
}
}
|
C
|
Chrome
| 0 |
CVE-2016-5224
|
https://www.cvedetails.com/cve/CVE-2016-5224/
|
CWE-189
|
https://github.com/chromium/chromium/commit/a4acc2991a60408f2044b2a3b19817074c04b751
|
a4acc2991a60408f2044b2a3b19817074c04b751
|
Add Android SDK version to crash reports.
Bug: 911669
Change-Id: I62a97d76a0b88099a5a42b93463307f03be9b3e2
Reviewed-on: https://chromium-review.googlesource.com/c/1361104
Reviewed-by: Jochen Eisinger <[email protected]>
Reviewed-by: Peter Conn <[email protected]>
Reviewed-by: Ilya Sherman <[email protected]>
Commit-Queue: Michael van Ouwerkerk <[email protected]>
Cr-Commit-Position: refs/heads/master@{#615851}
|
void DumpWithoutCrashing() {
if (g_is_browser) {
CRASHPAD_SIMULATE_CRASH();
} else {
siginfo_t siginfo;
siginfo.si_signo = crashpad::Signals::kSimulatedSigno;
siginfo.si_errno = 0;
siginfo.si_code = 0;
ucontext_t context;
crashpad::CaptureContext(&context);
crashpad::SandboxedHandler::Get()->HandleCrashNonFatal(siginfo.si_signo,
&siginfo, &context);
}
}
|
void DumpWithoutCrashing() {
if (g_is_browser) {
CRASHPAD_SIMULATE_CRASH();
} else {
siginfo_t siginfo;
siginfo.si_signo = crashpad::Signals::kSimulatedSigno;
siginfo.si_errno = 0;
siginfo.si_code = 0;
ucontext_t context;
crashpad::CaptureContext(&context);
crashpad::SandboxedHandler::Get()->HandleCrashNonFatal(siginfo.si_signo,
&siginfo, &context);
}
}
|
C
|
Chrome
| 0 |
CVE-2017-10971
|
https://www.cvedetails.com/cve/CVE-2017-10971/
|
CWE-119
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=215f894965df5fb0bb45b107d84524e700d2073c
|
215f894965df5fb0bb45b107d84524e700d2073c
| null |
DevHasCursor(DeviceIntPtr pDev)
{
return pDev->spriteInfo->spriteOwner;
}
|
DevHasCursor(DeviceIntPtr pDev)
{
return pDev->spriteInfo->spriteOwner;
}
|
C
|
xserver
| 0 |
CVE-2017-9374
|
https://www.cvedetails.com/cve/CVE-2017-9374/
|
CWE-772
|
https://git.qemu.org/?p=qemu.git;a=commit;h=d710e1e7bd3d5bfc26b631f02ae87901ebe646b0
|
d710e1e7bd3d5bfc26b631f02ae87901ebe646b0
| null |
static void ehci_trace_sitd(EHCIState *s, hwaddr addr,
EHCIsitd *sitd)
{
trace_usb_ehci_sitd(addr, sitd->next,
(bool)(sitd->results & SITD_RESULTS_ACTIVE));
}
|
static void ehci_trace_sitd(EHCIState *s, hwaddr addr,
EHCIsitd *sitd)
{
trace_usb_ehci_sitd(addr, sitd->next,
(bool)(sitd->results & SITD_RESULTS_ACTIVE));
}
|
C
|
qemu
| 0 |
CVE-2015-3215
|
https://www.cvedetails.com/cve/CVE-2015-3215/
|
CWE-20
|
https://github.com/YanVugenfirer/kvm-guest-drivers-windows/commit/fbfa4d1083ea84c5429992ca3e996d7d4fbc8238
|
fbfa4d1083ea84c5429992ca3e996d7d4fbc8238
|
NetKVM: BZ#1169718: More rigoruous testing of incoming packet
Signed-off-by: Joseph Hindin <[email protected]>
|
static USHORT DetermineQueueNumber(PARANDIS_ADAPTER *)
{
return 1;
}
|
static USHORT DetermineQueueNumber(PARANDIS_ADAPTER *)
{
return 1;
}
|
C
|
kvm-guest-drivers-windows
| 0 |
CVE-2016-5170
|
https://www.cvedetails.com/cve/CVE-2016-5170/
|
CWE-416
|
https://github.com/chromium/chromium/commit/c3957448cfc6e299165196a33cd954b790875fdb
|
c3957448cfc6e299165196a33cd954b790875fdb
|
Cleanup and remove dead code in SetFocusedElement
This early-out was added in:
https://crrev.com/ce8ea3446283965c7eabab592cbffe223b1cf2bc
Back then, we applied fragment focus in LayoutUpdated() which could
cause this issue. This got cleaned up in:
https://crrev.com/45236fd563e9df53dc45579be1f3d0b4784885a2
so that focus is no longer applied after layout.
+Cleanup: Goto considered harmful
Bug: 795381
Change-Id: Ifeb4d2e03e872fd48cca6720b1d4de36ad1ecbb7
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1524417
Commit-Queue: David Bokan <[email protected]>
Reviewed-by: Stefan Zager <[email protected]>
Cr-Commit-Position: refs/heads/master@{#641101}
|
void Document::LoadPluginsSoon() {
if (!plugin_loading_timer_.IsActive())
plugin_loading_timer_.StartOneShot(TimeDelta(), FROM_HERE);
}
|
void Document::LoadPluginsSoon() {
if (!plugin_loading_timer_.IsActive())
plugin_loading_timer_.StartOneShot(TimeDelta(), FROM_HERE);
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/e93dc535728da259ec16d1c3cc393f80b25f64ae
|
e93dc535728da259ec16d1c3cc393f80b25f64ae
|
Add a unit test that filenames aren't unintentionally converted to URLs.
Also fixes two issues in OSExchangeDataProviderWin:
- It used a disjoint set of clipboard formats when handling
GetUrl(..., true /* filename conversion */) vs GetFilenames(...), so the
actual returned results would vary depending on which one was called.
- It incorrectly used ::DragFinish() instead of ::ReleaseStgMedium().
::DragFinish() is only meant to be used in conjunction with WM_DROPFILES.
BUG=346135
Review URL: https://codereview.chromium.org/380553002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@283226 0039d316-1c4b-4281-b951-d872f2087c98
|
OSExchangeData::Provider* OSExchangeData::CreateProvider() {
return new OSExchangeDataProviderWin();
}
|
OSExchangeData::Provider* OSExchangeData::CreateProvider() {
return new OSExchangeDataProviderWin();
}
|
C
|
Chrome
| 0 |
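The commit message above corrects a Windows API mix-up: ::DragFinish() belongs to the WM_DROPFILES message path, while a STGMEDIUM obtained through IDataObject::GetData() has to be released with ::ReleaseStgMedium(). As a rough Windows-only C sketch of the paired calls (COBJMACROS style, error handling trimmed) — count_dropped_files is an invented helper, not Chromium code:

#define COBJMACROS
#include <windows.h>
#include <ole2.h>
#include <shellapi.h>

/* Ask a drop source for its CF_HDROP data and count the files in it.  The
 * STGMEDIUM filled in by GetData() is owned by the caller and is released
 * with ReleaseStgMedium(); DragFinish() is only for HDROPs delivered via
 * the WM_DROPFILES message. */
static UINT count_dropped_files(IDataObject *data_object)
{
    FORMATETC fmt = { CF_HDROP, NULL, DVASPECT_CONTENT, -1, TYMED_HGLOBAL };
    STGMEDIUM medium;
    UINT count = 0;

    if (SUCCEEDED(IDataObject_GetData(data_object, &fmt, &medium))) {
        HDROP drop = (HDROP)GlobalLock(medium.hGlobal);
        if (drop != NULL) {
            count = DragQueryFileW(drop, 0xFFFFFFFF, NULL, 0); /* 0xFFFFFFFF asks for the count */
            GlobalUnlock(medium.hGlobal);
        }
        ReleaseStgMedium(&medium);   /* release the medium; do not call DragFinish() here */
    }
    return count;
}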
CVE-2018-16427
|
https://www.cvedetails.com/cve/CVE-2018-16427/
|
CWE-125
|
https://github.com/OpenSC/OpenSC/pull/1447/commits/8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
|
8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
|
fixed out of bounds reads
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting and suggesting security fixes.
|
struct sc_card_driver * sc_get_coolkey_driver(void)
{
return sc_get_driver();
}
|
struct sc_card_driver * sc_get_coolkey_driver(void)
{
return sc_get_driver();
}
|
C
|
OpenSC
| 0 |
CVE-2017-14174
|
https://www.cvedetails.com/cve/CVE-2017-14174/
|
CWE-834
|
https://github.com/ImageMagick/ImageMagick/commit/f68a98a9d385838a1c73ec960a14102949940a64
|
f68a98a9d385838a1c73ec960a14102949940a64
|
https://github.com/ImageMagick/ImageMagick/issues/714
|
static const StringInfo *GetAdditionalInformation(const ImageInfo *image_info,
Image *image)
{
#define PSDKeySize 5
#define PSDAllowedLength 36
char
key[PSDKeySize];
/* Whitelist of keys from: https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/ */
const char
allowed[PSDAllowedLength][PSDKeySize] = {
"blnc", "blwh", "brit", "brst", "clbl", "clrL", "curv", "expA", "FMsk",
"GdFl", "grdm", "hue ", "hue2", "infx", "knko", "lclr", "levl", "lnsr",
"lfx2", "luni", "lrFX", "lspf", "lyid", "lyvr", "mixr", "nvrt", "phfl",
"post", "PtFl", "selc", "shpa", "sn2P", "SoCo", "thrs", "tsly", "vibA"
},
*option;
const StringInfo
*info;
MagickBooleanType
found;
register size_t
i;
size_t
remaining_length,
length;
StringInfo
*profile;
unsigned char
*p;
unsigned int
size;
info=GetImageProfile(image,"psd:additional-info");
if (info == (const StringInfo *) NULL)
return((const StringInfo *) NULL);
option=GetImageOption(image_info,"psd:additional-info");
if (LocaleCompare(option,"all") == 0)
return(info);
if (LocaleCompare(option,"selective") != 0)
{
profile=RemoveImageProfile(image,"psd:additional-info");
return(DestroyStringInfo(profile));
}
length=GetStringInfoLength(info);
p=GetStringInfoDatum(info);
remaining_length=length;
length=0;
while (remaining_length >= 12)
{
/* skip over signature */
p+=4;
key[0]=(*p++);
key[1]=(*p++);
key[2]=(*p++);
key[3]=(*p++);
key[4]='\0';
size=(unsigned int) (*p++) << 24;
size|=(unsigned int) (*p++) << 16;
size|=(unsigned int) (*p++) << 8;
size|=(unsigned int) (*p++);
size=size & 0xffffffff;
remaining_length-=12;
if ((size_t) size > remaining_length)
return((const StringInfo *) NULL);
found=MagickFalse;
for (i=0; i < PSDAllowedLength; i++)
{
if (LocaleNCompare(key,allowed[i],PSDKeySize) != 0)
continue;
found=MagickTrue;
break;
}
remaining_length-=(size_t) size;
if (found == MagickFalse)
{
if (remaining_length > 0)
p=(unsigned char *) CopyMagickMemory(p-12,p+size,remaining_length);
continue;
}
length+=(size_t) size+12;
p+=size;
}
profile=RemoveImageProfile(image,"psd:additional-info");
if (length == 0)
return(DestroyStringInfo(profile));
SetStringInfoLength(profile,(const size_t) length);
SetImageProfile(image,"psd:additional-info",info);
return(profile);
}
|
static const StringInfo *GetAdditionalInformation(const ImageInfo *image_info,
Image *image)
{
#define PSDKeySize 5
#define PSDAllowedLength 36
char
key[PSDKeySize];
/* Whitelist of keys from: https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/ */
const char
allowed[PSDAllowedLength][PSDKeySize] = {
"blnc", "blwh", "brit", "brst", "clbl", "clrL", "curv", "expA", "FMsk",
"GdFl", "grdm", "hue ", "hue2", "infx", "knko", "lclr", "levl", "lnsr",
"lfx2", "luni", "lrFX", "lspf", "lyid", "lyvr", "mixr", "nvrt", "phfl",
"post", "PtFl", "selc", "shpa", "sn2P", "SoCo", "thrs", "tsly", "vibA"
},
*option;
const StringInfo
*info;
MagickBooleanType
found;
register size_t
i;
size_t
remaining_length,
length;
StringInfo
*profile;
unsigned char
*p;
unsigned int
size;
info=GetImageProfile(image,"psd:additional-info");
if (info == (const StringInfo *) NULL)
return((const StringInfo *) NULL);
option=GetImageOption(image_info,"psd:additional-info");
if (LocaleCompare(option,"all") == 0)
return(info);
if (LocaleCompare(option,"selective") != 0)
{
profile=RemoveImageProfile(image,"psd:additional-info");
return(DestroyStringInfo(profile));
}
length=GetStringInfoLength(info);
p=GetStringInfoDatum(info);
remaining_length=length;
length=0;
while (remaining_length >= 12)
{
/* skip over signature */
p+=4;
key[0]=(*p++);
key[1]=(*p++);
key[2]=(*p++);
key[3]=(*p++);
key[4]='\0';
size=(unsigned int) (*p++) << 24;
size|=(unsigned int) (*p++) << 16;
size|=(unsigned int) (*p++) << 8;
size|=(unsigned int) (*p++);
size=size & 0xffffffff;
remaining_length-=12;
if ((size_t) size > remaining_length)
return((const StringInfo *) NULL);
found=MagickFalse;
for (i=0; i < PSDAllowedLength; i++)
{
if (LocaleNCompare(key,allowed[i],PSDKeySize) != 0)
continue;
found=MagickTrue;
break;
}
remaining_length-=(size_t) size;
if (found == MagickFalse)
{
if (remaining_length > 0)
p=(unsigned char *) CopyMagickMemory(p-12,p+size,remaining_length);
continue;
}
length+=(size_t) size+12;
p+=size;
}
profile=RemoveImageProfile(image,"psd:additional-info");
if (length == 0)
return(DestroyStringInfo(profile));
SetStringInfoLength(profile,(const size_t) length);
SetImageProfile(image,"psd:additional-info",info);
return(profile);
}
|
C
|
ImageMagick
| 0 |
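GetAdditionalInformation() above walks signature/key/length records and has to both cap each declared length at the bytes that remain and consume a fixed-size header on every pass; without those two properties a crafted PSD can push the loop into the excessive-iteration territory this CWE-834 record is filed under. Below is a stripped-down C sketch of that loop shape only — walk_records, read_u32be and RECORD_HEADER are simplifications, not ImageMagick's actual layout:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define RECORD_HEADER 12u   /* 4-byte signature + 4-byte key + 4-byte length */

static uint32_t read_u32be(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Walk every record, returning false on a malformed stream.  Each pass
 * consumes at least RECORD_HEADER bytes, so the loop terminates after at
 * most len / RECORD_HEADER iterations no matter what the payloads claim. */
static bool walk_records(const uint8_t *data, size_t len)
{
    const uint8_t *p = data;
    size_t remaining = len;

    while (remaining >= RECORD_HEADER) {
        uint32_t size = read_u32be(p + 8);   /* declared payload length */
        p += RECORD_HEADER;
        remaining -= RECORD_HEADER;

        if ((size_t)size > remaining)
            return false;                    /* payload would overrun the buffer */

        /* ... inspect the payload at p[0..size-1] here ... */

        p += size;
        remaining -= size;
    }
    return true;
}

int main(void)
{
    /* One record: 8 bytes of signature+key, length 4, then 4 payload bytes. */
    uint8_t buf[16] = { '8','B','I','M', 'l','y','i','d', 0,0,0,4, 1,2,3,4 };
    /* Accepted whole; rejected (not spun on) when the payload is truncated. */
    return (walk_records(buf, sizeof buf) && !walk_records(buf, 14)) ? 0 : 1;
}

The termination argument is purely arithmetic: every iteration subtracts at least RECORD_HEADER from remaining, so payload sizes chosen by the attacker cannot extend the loop.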
CVE-2011-2799
|
https://www.cvedetails.com/cve/CVE-2011-2799/
|
CWE-399
|
https://github.com/chromium/chromium/commit/5a2de6455f565783c73e53eae2c8b953e7d48520
|
5a2de6455f565783c73e53eae2c8b953e7d48520
|
2011-06-02 Joone Hur <[email protected]>
Reviewed by Martin Robinson.
[GTK] Only load dictionaries if spell check is enabled
https://bugs.webkit.org/show_bug.cgi?id=32879
We don't need to call enchant if enable-spell-checking is false.
* webkit/webkitwebview.cpp:
(webkit_web_view_update_settings): Skip loading dictionaries when enable-spell-checking is false.
(webkit_web_view_settings_notify): Ditto.
git-svn-id: svn://svn.chromium.org/blink/trunk@87925 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void webkit_web_view_set_highlight_text_matches(WebKitWebView* webView, gboolean shouldHighlight)
{
g_return_if_fail(WEBKIT_IS_WEB_VIEW(webView));
Frame *frame = core(webView)->mainFrame();
do {
frame->editor()->setMarkedTextMatchesAreHighlighted(shouldHighlight);
frame = frame->tree()->traverseNextWithWrap(false);
} while (frame);
}
|
void webkit_web_view_set_highlight_text_matches(WebKitWebView* webView, gboolean shouldHighlight)
{
g_return_if_fail(WEBKIT_IS_WEB_VIEW(webView));
Frame *frame = core(webView)->mainFrame();
do {
frame->editor()->setMarkedTextMatchesAreHighlighted(shouldHighlight);
frame = frame->tree()->traverseNextWithWrap(false);
} while (frame);
}
|
C
|
Chrome
| 0 |
CVE-2015-8324
|
https://www.cvedetails.com/cve/CVE-2015-8324/
| null |
https://github.com/torvalds/linux/commit/744692dc059845b2a3022119871846e74d4f6e11
|
744692dc059845b2a3022119871846e74d4f6e11
|
ext4: use ext4_get_block_write in buffer write
Allocate uninitialized extent before ext4 buffer write and
convert the extent to initialized after io completes.
The purpose is to make sure an extent can only be marked
initialized after it has been written with new data so
we can safely drop the i_mutex lock in ext4 DIO read without
exposing stale data. This helps to improve multi-thread DIO
read performance on high-speed disks.
Skip the nobh and data=journal mount cases to make things simple for now.
Signed-off-by: Jiaying Zhang <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
|
static int ext4_da_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
pgoff_t index;
int range_whole = 0;
handle_t *handle = NULL;
struct mpage_da_data mpd;
struct inode *inode = mapping->host;
int no_nrwrite_index_update;
int pages_written = 0;
long pages_skipped;
unsigned int max_pages;
int range_cyclic, cycled = 1, io_done = 0;
int needed_blocks, ret = 0;
long desired_nr_to_write, nr_to_writebump = 0;
loff_t range_start = wbc->range_start;
struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
trace_ext4_da_writepages(inode, wbc);
/*
* No pages to write? This is mainly a kludge to avoid starting
* a transaction for special inodes like journal inode on last iput()
* because that could violate lock ordering on umount
*/
if (!mapping->nrpages || !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
return 0;
/*
* If the filesystem has aborted, it is read-only, so return
* right away instead of dumping stack traces later on that
* will obscure the real source of the problem. We test
* EXT4_MF_FS_ABORTED instead of sb->s_flag's MS_RDONLY because
* the latter could be true if the filesystem is mounted
* read-only, and in that case, ext4_da_writepages should
* *never* be called, so if that ever happens, we would want
* the stack trace.
*/
if (unlikely(sbi->s_mount_flags & EXT4_MF_FS_ABORTED))
return -EROFS;
if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
range_whole = 1;
range_cyclic = wbc->range_cyclic;
if (wbc->range_cyclic) {
index = mapping->writeback_index;
if (index)
cycled = 0;
wbc->range_start = index << PAGE_CACHE_SHIFT;
wbc->range_end = LLONG_MAX;
wbc->range_cyclic = 0;
} else
index = wbc->range_start >> PAGE_CACHE_SHIFT;
/*
* This works around two forms of stupidity. The first is in
* the writeback code, which caps the maximum number of pages
* written to be 1024 pages. This is wrong on multiple
* levels; different architectues have a different page size,
* which changes the maximum amount of data which gets
* written. Secondly, 4 megabytes is way too small. XFS
* forces this value to be 16 megabytes by multiplying
* nr_to_write parameter by four, and then relies on its
* allocator to allocate larger extents to make them
* contiguous. Unfortunately this brings us to the second
* stupidity, which is that ext4's mballoc code only allocates
* at most 2048 blocks. So we force contiguous writes up to
* the number of dirty blocks in the inode, or
* sbi->max_writeback_mb_bump whichever is smaller.
*/
max_pages = sbi->s_max_writeback_mb_bump << (20 - PAGE_CACHE_SHIFT);
if (!range_cyclic && range_whole)
desired_nr_to_write = wbc->nr_to_write * 8;
else
desired_nr_to_write = ext4_num_dirty_pages(inode, index,
max_pages);
if (desired_nr_to_write > max_pages)
desired_nr_to_write = max_pages;
if (wbc->nr_to_write < desired_nr_to_write) {
nr_to_writebump = desired_nr_to_write - wbc->nr_to_write;
wbc->nr_to_write = desired_nr_to_write;
}
mpd.wbc = wbc;
mpd.inode = mapping->host;
/*
* we don't want write_cache_pages to update
* nr_to_write and writeback_index
*/
no_nrwrite_index_update = wbc->no_nrwrite_index_update;
wbc->no_nrwrite_index_update = 1;
pages_skipped = wbc->pages_skipped;
retry:
while (!ret && wbc->nr_to_write > 0) {
/*
* we insert one extent at a time. So we need
* credit needed for single extent allocation.
* journalled mode is currently not supported
* by delalloc
*/
BUG_ON(ext4_should_journal_data(inode));
needed_blocks = ext4_da_writepages_trans_blocks(inode);
/* start a new transaction*/
handle = ext4_journal_start(inode, needed_blocks);
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: "
"%ld pages, ino %lu; err %d\n", __func__,
wbc->nr_to_write, inode->i_ino, ret);
goto out_writepages;
}
/*
* Now call __mpage_da_writepage to find the next
* contiguous region of logical blocks that need
* blocks to be allocated by ext4. We don't actually
* submit the blocks for I/O here, even though
* write_cache_pages thinks it will, and will set the
* pages as clean for write before calling
* __mpage_da_writepage().
*/
mpd.b_size = 0;
mpd.b_state = 0;
mpd.b_blocknr = 0;
mpd.first_page = 0;
mpd.next_page = 0;
mpd.io_done = 0;
mpd.pages_written = 0;
mpd.retval = 0;
ret = write_cache_pages(mapping, wbc, __mpage_da_writepage,
&mpd);
/*
* If we have a contiguous extent of pages and we
* haven't done the I/O yet, map the blocks and submit
* them for I/O.
*/
if (!mpd.io_done && mpd.next_page != mpd.first_page) {
if (mpage_da_map_blocks(&mpd) == 0)
mpage_da_submit_io(&mpd);
mpd.io_done = 1;
ret = MPAGE_DA_EXTENT_TAIL;
}
trace_ext4_da_write_pages(inode, &mpd);
wbc->nr_to_write -= mpd.pages_written;
ext4_journal_stop(handle);
if ((mpd.retval == -ENOSPC) && sbi->s_journal) {
/* commit the transaction which would
* free blocks released in the transaction
* and try again
*/
jbd2_journal_force_commit_nested(sbi->s_journal);
wbc->pages_skipped = pages_skipped;
ret = 0;
} else if (ret == MPAGE_DA_EXTENT_TAIL) {
/*
* got one extent now try with
* rest of the pages
*/
pages_written += mpd.pages_written;
wbc->pages_skipped = pages_skipped;
ret = 0;
io_done = 1;
} else if (wbc->nr_to_write)
/*
* There is no more writeout needed
* or we requested for a noblocking writeout
* and we found the device congested
*/
break;
}
if (!io_done && !cycled) {
cycled = 1;
index = 0;
wbc->range_start = index << PAGE_CACHE_SHIFT;
wbc->range_end = mapping->writeback_index - 1;
goto retry;
}
if (pages_skipped != wbc->pages_skipped)
ext4_msg(inode->i_sb, KERN_CRIT,
"This should not happen leaving %s "
"with nr_to_write = %ld ret = %d\n",
__func__, wbc->nr_to_write, ret);
/* Update index */
index += pages_written;
wbc->range_cyclic = range_cyclic;
if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
/*
* set the writeback_index so that range_cyclic
* mode will write it back later
*/
mapping->writeback_index = index;
out_writepages:
if (!no_nrwrite_index_update)
wbc->no_nrwrite_index_update = 0;
wbc->nr_to_write -= nr_to_writebump;
wbc->range_start = range_start;
trace_ext4_da_writepages_result(inode, wbc, ret, pages_written);
return ret;
}
|
static int ext4_da_writepages(struct address_space *mapping,
struct writeback_control *wbc)
{
pgoff_t index;
int range_whole = 0;
handle_t *handle = NULL;
struct mpage_da_data mpd;
struct inode *inode = mapping->host;
int no_nrwrite_index_update;
int pages_written = 0;
long pages_skipped;
unsigned int max_pages;
int range_cyclic, cycled = 1, io_done = 0;
int needed_blocks, ret = 0;
long desired_nr_to_write, nr_to_writebump = 0;
loff_t range_start = wbc->range_start;
struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
trace_ext4_da_writepages(inode, wbc);
/*
* No pages to write? This is mainly a kludge to avoid starting
* a transaction for special inodes like journal inode on last iput()
* because that could violate lock ordering on umount
*/
if (!mapping->nrpages || !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
return 0;
/*
* If the filesystem has aborted, it is read-only, so return
* right away instead of dumping stack traces later on that
* will obscure the real source of the problem. We test
* EXT4_MF_FS_ABORTED instead of sb->s_flag's MS_RDONLY because
* the latter could be true if the filesystem is mounted
* read-only, and in that case, ext4_da_writepages should
* *never* be called, so if that ever happens, we would want
* the stack trace.
*/
if (unlikely(sbi->s_mount_flags & EXT4_MF_FS_ABORTED))
return -EROFS;
if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
range_whole = 1;
range_cyclic = wbc->range_cyclic;
if (wbc->range_cyclic) {
index = mapping->writeback_index;
if (index)
cycled = 0;
wbc->range_start = index << PAGE_CACHE_SHIFT;
wbc->range_end = LLONG_MAX;
wbc->range_cyclic = 0;
} else
index = wbc->range_start >> PAGE_CACHE_SHIFT;
/*
* This works around two forms of stupidity. The first is in
* the writeback code, which caps the maximum number of pages
* written to be 1024 pages. This is wrong on multiple
* levels; different architectues have a different page size,
* which changes the maximum amount of data which gets
* written. Secondly, 4 megabytes is way too small. XFS
* forces this value to be 16 megabytes by multiplying
* nr_to_write parameter by four, and then relies on its
* allocator to allocate larger extents to make them
* contiguous. Unfortunately this brings us to the second
* stupidity, which is that ext4's mballoc code only allocates
* at most 2048 blocks. So we force contiguous writes up to
* the number of dirty blocks in the inode, or
* sbi->max_writeback_mb_bump whichever is smaller.
*/
max_pages = sbi->s_max_writeback_mb_bump << (20 - PAGE_CACHE_SHIFT);
if (!range_cyclic && range_whole)
desired_nr_to_write = wbc->nr_to_write * 8;
else
desired_nr_to_write = ext4_num_dirty_pages(inode, index,
max_pages);
if (desired_nr_to_write > max_pages)
desired_nr_to_write = max_pages;
if (wbc->nr_to_write < desired_nr_to_write) {
nr_to_writebump = desired_nr_to_write - wbc->nr_to_write;
wbc->nr_to_write = desired_nr_to_write;
}
mpd.wbc = wbc;
mpd.inode = mapping->host;
/*
* we don't want write_cache_pages to update
* nr_to_write and writeback_index
*/
no_nrwrite_index_update = wbc->no_nrwrite_index_update;
wbc->no_nrwrite_index_update = 1;
pages_skipped = wbc->pages_skipped;
retry:
while (!ret && wbc->nr_to_write > 0) {
/*
* we insert one extent at a time. So we need
* credit needed for single extent allocation.
* journalled mode is currently not supported
* by delalloc
*/
BUG_ON(ext4_should_journal_data(inode));
needed_blocks = ext4_da_writepages_trans_blocks(inode);
/* start a new transaction*/
handle = ext4_journal_start(inode, needed_blocks);
if (IS_ERR(handle)) {
ret = PTR_ERR(handle);
ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: "
"%ld pages, ino %lu; err %d\n", __func__,
wbc->nr_to_write, inode->i_ino, ret);
goto out_writepages;
}
/*
* Now call __mpage_da_writepage to find the next
* contiguous region of logical blocks that need
* blocks to be allocated by ext4. We don't actually
* submit the blocks for I/O here, even though
* write_cache_pages thinks it will, and will set the
* pages as clean for write before calling
* __mpage_da_writepage().
*/
mpd.b_size = 0;
mpd.b_state = 0;
mpd.b_blocknr = 0;
mpd.first_page = 0;
mpd.next_page = 0;
mpd.io_done = 0;
mpd.pages_written = 0;
mpd.retval = 0;
ret = write_cache_pages(mapping, wbc, __mpage_da_writepage,
&mpd);
/*
* If we have a contiguous extent of pages and we
* haven't done the I/O yet, map the blocks and submit
* them for I/O.
*/
if (!mpd.io_done && mpd.next_page != mpd.first_page) {
if (mpage_da_map_blocks(&mpd) == 0)
mpage_da_submit_io(&mpd);
mpd.io_done = 1;
ret = MPAGE_DA_EXTENT_TAIL;
}
trace_ext4_da_write_pages(inode, &mpd);
wbc->nr_to_write -= mpd.pages_written;
ext4_journal_stop(handle);
if ((mpd.retval == -ENOSPC) && sbi->s_journal) {
/* commit the transaction which would
* free blocks released in the transaction
* and try again
*/
jbd2_journal_force_commit_nested(sbi->s_journal);
wbc->pages_skipped = pages_skipped;
ret = 0;
} else if (ret == MPAGE_DA_EXTENT_TAIL) {
/*
* got one extent now try with
* rest of the pages
*/
pages_written += mpd.pages_written;
wbc->pages_skipped = pages_skipped;
ret = 0;
io_done = 1;
} else if (wbc->nr_to_write)
/*
* There is no more writeout needed
* or we requested for a noblocking writeout
* and we found the device congested
*/
break;
}
if (!io_done && !cycled) {
cycled = 1;
index = 0;
wbc->range_start = index << PAGE_CACHE_SHIFT;
wbc->range_end = mapping->writeback_index - 1;
goto retry;
}
if (pages_skipped != wbc->pages_skipped)
ext4_msg(inode->i_sb, KERN_CRIT,
"This should not happen leaving %s "
"with nr_to_write = %ld ret = %d\n",
__func__, wbc->nr_to_write, ret);
/* Update index */
index += pages_written;
wbc->range_cyclic = range_cyclic;
if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
/*
* set the writeback_index so that range_cyclic
* mode will write it back later
*/
mapping->writeback_index = index;
out_writepages:
if (!no_nrwrite_index_update)
wbc->no_nrwrite_index_update = 0;
wbc->nr_to_write -= nr_to_writebump;
wbc->range_start = range_start;
trace_ext4_da_writepages_result(inode, wbc, ret, pages_written);
return ret;
}
|
C
|
linux
| 0 |
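The commit message above rests on one ordering rule: the extent backing a buffered write is allocated as uninitialized first, the data is written, and only the I/O-completion path converts it to initialized, so a concurrent direct-I/O read can never see stale disk blocks behind an extent that claims to be initialized. The ext4_da_writepages() body stored in this record does not itself show that conversion, so the following is a deliberately tiny, non-kernel C sketch of the state ordering — struct extent, allocate_unwritten, submit_write and io_complete are all made up for illustration:

#include <stdbool.h>
#include <assert.h>

/* Readers must treat UNWRITTEN extents as zeroes; only INITIALIZED extents
 * may expose on-disk bytes.  The writer therefore flips the state strictly
 * after the data has landed. */
enum extent_state { EXTENT_UNWRITTEN, EXTENT_INITIALIZED };

struct extent {
    enum extent_state state;
    bool data_on_disk;          /* stand-in for "the write I/O has completed" */
};

/* Step 1: allocate the extent, but leave it marked unwritten. */
static void allocate_unwritten(struct extent *ex)
{
    ex->state = EXTENT_UNWRITTEN;
    ex->data_on_disk = false;
}

/* Step 2: issue the write; the extent stays unwritten until completion. */
static void submit_write(struct extent *ex)
{
    ex->data_on_disk = true;    /* in the real code this happens asynchronously */
}

/* Step 3: only the completion handler converts the extent, never earlier. */
static void io_complete(struct extent *ex)
{
    assert(ex->data_on_disk);   /* the invariant the ordering protects */
    ex->state = EXTENT_INITIALIZED;
}

int main(void)
{
    struct extent ex;
    allocate_unwritten(&ex);
    submit_write(&ex);
    io_complete(&ex);
    return ex.state == EXTENT_INITIALIZED ? 0 : 1;
}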
CVE-2010-1166
|
https://www.cvedetails.com/cve/CVE-2010-1166/
|
CWE-189
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=d2f813f7db
|
d2f813f7db157fc83abc4b3726821c36ee7e40b1
| null |
static void fbFetchTransformed(PicturePtr pict, int x, int y, int width, CARD32 *buffer, CARD32 *mask, CARD32 maskBits)
{
FbBits *bits;
FbStride stride;
int bpp;
int xoff, yoff, dx, dy;
fetchPixelProc fetch;
PictVector v;
PictVector unit;
int i;
BoxRec box;
miIndexedPtr indexed = (miIndexedPtr) pict->pFormat->index.devPrivate;
Bool affine = TRUE;
fetch = fetchPixelProcForPicture(pict);
fbGetDrawable(pict->pDrawable, bits, stride, bpp, xoff, yoff);
x += xoff;
y += yoff;
dx = pict->pDrawable->x;
dy = pict->pDrawable->y;
/* reference point is the center of the pixel */
v.vector[0] = IntToxFixed(x - dx) + xFixed1 / 2;
v.vector[1] = IntToxFixed(y - dy) + xFixed1 / 2;
v.vector[2] = xFixed1;
/* when using convolution filters one might get here without a transform */
if (pict->transform) {
if (!PictureTransformPoint3d (pict->transform, &v)) {
fbFinishAccess (pict->pDrawable);
return;
}
unit.vector[0] = pict->transform->matrix[0][0];
unit.vector[1] = pict->transform->matrix[1][0];
unit.vector[2] = pict->transform->matrix[2][0];
affine = v.vector[2] == xFixed1 && unit.vector[2] == 0;
} else {
unit.vector[0] = xFixed1;
unit.vector[1] = 0;
unit.vector[2] = 0;
}
if (pict->filter == PictFilterNearest)
{
if (pict->repeatType == RepeatNormal) {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = MOD(DIV(v.vector[1],v.vector[2]), pict->pDrawable->height);
x = MOD(DIV(v.vector[0],v.vector[2]), pict->pDrawable->width);
} else {
y = MOD(v.vector[1]>>16, pict->pDrawable->height);
x = MOD(v.vector[0]>>16, pict->pDrawable->width);
}
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = MOD(DIV(v.vector[1],v.vector[2]), pict->pDrawable->height);
x = MOD(DIV(v.vector[0],v.vector[2]), pict->pDrawable->width);
} else {
y = MOD(v.vector[1]>>16, pict->pDrawable->height);
x = MOD(v.vector[0]>>16, pict->pDrawable->width);
}
if (POINT_IN_REGION (0, pict->pCompositeClip, x, y, &box))
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
else
WRITE(buffer + i, 0);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
} else {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
box = pict->pCompositeClip->extents;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = DIV(v.vector[1],v.vector[2]);
x = DIV(v.vector[0],v.vector[2]);
} else {
y = v.vector[1]>>16;
x = v.vector[0]>>16;
}
WRITE(buffer + i, ((x < box.x1-dx) | (x >= box.x2-dx) | (y < box.y1-dy) | (y >= box.y2-dy)) ?
0 : fetch(bits + (y + dy)*stride, x + dx, indexed));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = DIV(v.vector[1],v.vector[2]);
x = DIV(v.vector[0],v.vector[2]);
} else {
y = v.vector[1]>>16;
x = v.vector[0]>>16;
}
if (POINT_IN_REGION (0, pict->pCompositeClip, x + dx, y + dy, &box))
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
else
WRITE(buffer + i, 0);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
}
} else if (pict->filter == PictFilterBilinear) {
/* adjust vector for maximum contribution at 0.5, 0.5 of each texel. */
v.vector[0] -= v.vector[2] / 2;
v.vector[1] -= v.vector[2] / 2;
unit.vector[0] -= unit.vector[2] / 2;
unit.vector[1] -= unit.vector[2] / 2;
if (pict->repeatType == RepeatNormal) {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
x1 = MOD (x1, pict->pDrawable->width);
x2 = MOD (x2, pict->pDrawable->width);
y1 = MOD (y1, pict->pDrawable->height);
y2 = MOD (y2, pict->pDrawable->height);
b = bits + (y1 + dy)*stride;
tl = fetch(b, x1 + dx, indexed);
tr = fetch(b, x2 + dx, indexed);
b = bits + (y2 + dy)*stride;
bl = fetch(b, x1 + dx, indexed);
br = fetch(b, x2 + dx, indexed);
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
x1 = MOD (x1, pict->pDrawable->width);
x2 = MOD (x2, pict->pDrawable->width);
y1 = MOD (y1, pict->pDrawable->height);
y2 = MOD (y2, pict->pDrawable->height);
b = bits + (y1 + dy)*stride;
tl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y1 + dy, &box)
? fetch(b, x1 + dx, indexed) : 0;
tr = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y1 + dy, &box)
? fetch(b, x2 + dx, indexed) : 0;
b = bits + (y2 + dy)*stride;
bl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y2 + dy, &box)
? fetch(b, x1 + dx, indexed) : 0;
br = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y2 + dy, &box)
? fetch(b, x2 + dx, indexed) : 0;
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
} else {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
box = pict->pCompositeClip->extents;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty, x_off;
FbBits *b;
CARD32 tl, tr, bl, br, r;
Bool x1_out, x2_out, y1_out, y2_out;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
b = bits + (y1 + dy)*stride;
x_off = x1 + dx;
x1_out = (x1 < box.x1-dx) | (x1 >= box.x2-dx);
x2_out = (x2 < box.x1-dx) | (x2 >= box.x2-dx);
y1_out = (y1 < box.y1-dy) | (y1 >= box.y2-dy);
y2_out = (y2 < box.y1-dy) | (y2 >= box.y2-dy);
tl = x1_out|y1_out ? 0 : fetch(b, x_off, indexed);
tr = x2_out|y1_out ? 0 : fetch(b, x_off + 1, indexed);
b += stride;
bl = x1_out|y2_out ? 0 : fetch(b, x_off, indexed);
br = x2_out|y2_out ? 0 : fetch(b, x_off + 1, indexed);
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty, x_off;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
b = bits + (y1 + dy)*stride;
x_off = x1 + dx;
tl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y1 + dy, &box)
? fetch(b, x_off, indexed) : 0;
tr = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y1 + dy, &box)
? fetch(b, x_off + 1, indexed) : 0;
b += stride;
bl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y2 + dy, &box)
? fetch(b, x_off, indexed) : 0;
br = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y2 + dy, &box)
? fetch(b, x_off + 1, indexed) : 0;
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
}
} else if (pict->filter == PictFilterConvolution) {
xFixed *params = pict->filter_params;
INT32 cwidth = xFixedToInt(params[0]);
INT32 cheight = xFixedToInt(params[1]);
int xoff = (params[0] - xFixed1) >> 1;
int yoff = (params[1] - xFixed1) >> 1;
params += 2;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, x, y;
INT32 srtot, sgtot, sbtot, satot;
xFixed *p = params;
if (!affine) {
xFixed_48_16 tmp;
tmp = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2] - xoff;
x1 = xFixedToInt(tmp);
tmp = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2] - yoff;
y1 = xFixedToInt(tmp);
} else {
x1 = xFixedToInt(v.vector[0] - xoff);
y1 = xFixedToInt(v.vector[1] - yoff);
}
x2 = x1 + cwidth;
y2 = y1 + cheight;
srtot = sgtot = sbtot = satot = 0;
for (y = y1; y < y2; y++) {
int ty = (pict->repeatType == RepeatNormal) ? MOD (y, pict->pDrawable->height) : y;
for (x = x1; x < x2; x++) {
if (*p) {
int tx = (pict->repeatType == RepeatNormal) ? MOD (x, pict->pDrawable->width) : x;
if (POINT_IN_REGION (0, pict->pCompositeClip, tx + dx, ty + dy, &box)) {
FbBits *b = bits + (ty + dy)*stride;
CARD32 c = fetch(b, tx + dx, indexed);
srtot += Red(c) * *p;
sgtot += Green(c) * *p;
sbtot += Blue(c) * *p;
satot += Alpha(c) * *p;
}
}
p++;
}
}
satot >>= 16;
srtot >>= 16;
sgtot >>= 16;
sbtot >>= 16;
if (satot < 0) satot = 0; else if (satot > 0xff) satot = 0xff;
if (srtot < 0) srtot = 0; else if (srtot > 0xff) srtot = 0xff;
if (sgtot < 0) sgtot = 0; else if (sgtot > 0xff) sgtot = 0xff;
if (sbtot < 0) sbtot = 0; else if (sbtot > 0xff) sbtot = 0xff;
WRITE(buffer + i, ((satot << 24) |
(srtot << 16) |
(sgtot << 8) |
(sbtot )));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
fbFinishAccess (pict->pDrawable);
}
|
static void fbFetchTransformed(PicturePtr pict, int x, int y, int width, CARD32 *buffer, CARD32 *mask, CARD32 maskBits)
{
FbBits *bits;
FbStride stride;
int bpp;
int xoff, yoff, dx, dy;
fetchPixelProc fetch;
PictVector v;
PictVector unit;
int i;
BoxRec box;
miIndexedPtr indexed = (miIndexedPtr) pict->pFormat->index.devPrivate;
Bool affine = TRUE;
fetch = fetchPixelProcForPicture(pict);
fbGetDrawable(pict->pDrawable, bits, stride, bpp, xoff, yoff);
x += xoff;
y += yoff;
dx = pict->pDrawable->x;
dy = pict->pDrawable->y;
/* reference point is the center of the pixel */
v.vector[0] = IntToxFixed(x - dx) + xFixed1 / 2;
v.vector[1] = IntToxFixed(y - dy) + xFixed1 / 2;
v.vector[2] = xFixed1;
/* when using convolution filters one might get here without a transform */
if (pict->transform) {
if (!PictureTransformPoint3d (pict->transform, &v)) {
fbFinishAccess (pict->pDrawable);
return;
}
unit.vector[0] = pict->transform->matrix[0][0];
unit.vector[1] = pict->transform->matrix[1][0];
unit.vector[2] = pict->transform->matrix[2][0];
affine = v.vector[2] == xFixed1 && unit.vector[2] == 0;
} else {
unit.vector[0] = xFixed1;
unit.vector[1] = 0;
unit.vector[2] = 0;
}
if (pict->filter == PictFilterNearest)
{
if (pict->repeatType == RepeatNormal) {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = MOD(DIV(v.vector[1],v.vector[2]), pict->pDrawable->height);
x = MOD(DIV(v.vector[0],v.vector[2]), pict->pDrawable->width);
} else {
y = MOD(v.vector[1]>>16, pict->pDrawable->height);
x = MOD(v.vector[0]>>16, pict->pDrawable->width);
}
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = MOD(DIV(v.vector[1],v.vector[2]), pict->pDrawable->height);
x = MOD(DIV(v.vector[0],v.vector[2]), pict->pDrawable->width);
} else {
y = MOD(v.vector[1]>>16, pict->pDrawable->height);
x = MOD(v.vector[0]>>16, pict->pDrawable->width);
}
if (POINT_IN_REGION (0, pict->pCompositeClip, x, y, &box))
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
else
WRITE(buffer + i, 0);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
} else {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
box = pict->pCompositeClip->extents;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = DIV(v.vector[1],v.vector[2]);
x = DIV(v.vector[0],v.vector[2]);
} else {
y = v.vector[1]>>16;
x = v.vector[0]>>16;
}
WRITE(buffer + i, ((x < box.x1-dx) | (x >= box.x2-dx) | (y < box.y1-dy) | (y >= box.y2-dy)) ?
0 : fetch(bits + (y + dy)*stride, x + dx, indexed));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
if (!affine) {
y = DIV(v.vector[1],v.vector[2]);
x = DIV(v.vector[0],v.vector[2]);
} else {
y = v.vector[1]>>16;
x = v.vector[0]>>16;
}
if (POINT_IN_REGION (0, pict->pCompositeClip, x + dx, y + dy, &box))
WRITE(buffer + i, fetch(bits + (y + dy)*stride, x + dx, indexed));
else
WRITE(buffer + i, 0);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
}
} else if (pict->filter == PictFilterBilinear) {
/* adjust vector for maximum contribution at 0.5, 0.5 of each texel. */
v.vector[0] -= v.vector[2] / 2;
v.vector[1] -= v.vector[2] / 2;
unit.vector[0] -= unit.vector[2] / 2;
unit.vector[1] -= unit.vector[2] / 2;
if (pict->repeatType == RepeatNormal) {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
x1 = MOD (x1, pict->pDrawable->width);
x2 = MOD (x2, pict->pDrawable->width);
y1 = MOD (y1, pict->pDrawable->height);
y2 = MOD (y2, pict->pDrawable->height);
b = bits + (y1 + dy)*stride;
tl = fetch(b, x1 + dx, indexed);
tr = fetch(b, x2 + dx, indexed);
b = bits + (y2 + dy)*stride;
bl = fetch(b, x1 + dx, indexed);
br = fetch(b, x2 + dx, indexed);
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
x1 = MOD (x1, pict->pDrawable->width);
x2 = MOD (x2, pict->pDrawable->width);
y1 = MOD (y1, pict->pDrawable->height);
y2 = MOD (y2, pict->pDrawable->height);
b = bits + (y1 + dy)*stride;
tl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y1 + dy, &box)
? fetch(b, x1 + dx, indexed) : 0;
tr = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y1 + dy, &box)
? fetch(b, x2 + dx, indexed) : 0;
b = bits + (y2 + dy)*stride;
bl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y2 + dy, &box)
? fetch(b, x1 + dx, indexed) : 0;
br = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y2 + dy, &box)
? fetch(b, x2 + dx, indexed) : 0;
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
} else {
if (REGION_NUM_RECTS(pict->pCompositeClip) == 1) {
box = pict->pCompositeClip->extents;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty, x_off;
FbBits *b;
CARD32 tl, tr, bl, br, r;
Bool x1_out, x2_out, y1_out, y2_out;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
b = bits + (y1 + dy)*stride;
x_off = x1 + dx;
x1_out = (x1 < box.x1-dx) | (x1 >= box.x2-dx);
x2_out = (x2 < box.x1-dx) | (x2 >= box.x2-dx);
y1_out = (y1 < box.y1-dy) | (y1 >= box.y2-dy);
y2_out = (y2 < box.y1-dy) | (y2 >= box.y2-dy);
tl = x1_out|y1_out ? 0 : fetch(b, x_off, indexed);
tr = x2_out|y1_out ? 0 : fetch(b, x_off + 1, indexed);
b += stride;
bl = x1_out|y2_out ? 0 : fetch(b, x_off, indexed);
br = x2_out|y2_out ? 0 : fetch(b, x_off + 1, indexed);
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
} else {
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, distx, idistx, disty, idisty, x_off;
FbBits *b;
CARD32 tl, tr, bl, br, r;
CARD32 ft, fb;
if (!affine) {
xFixed_48_16 div;
div = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2];
x1 = div >> 16;
distx = ((xFixed)div >> 8) & 0xff;
div = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2];
y1 = div >> 16;
disty = ((xFixed)div >> 8) & 0xff;
} else {
x1 = v.vector[0] >> 16;
distx = (v.vector[0] >> 8) & 0xff;
y1 = v.vector[1] >> 16;
disty = (v.vector[1] >> 8) & 0xff;
}
x2 = x1 + 1;
y2 = y1 + 1;
idistx = 256 - distx;
idisty = 256 - disty;
b = bits + (y1 + dy)*stride;
x_off = x1 + dx;
tl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y1 + dy, &box)
? fetch(b, x_off, indexed) : 0;
tr = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y1 + dy, &box)
? fetch(b, x_off + 1, indexed) : 0;
b += stride;
bl = POINT_IN_REGION(0, pict->pCompositeClip, x1 + dx, y2 + dy, &box)
? fetch(b, x_off, indexed) : 0;
br = POINT_IN_REGION(0, pict->pCompositeClip, x2 + dx, y2 + dy, &box)
? fetch(b, x_off + 1, indexed) : 0;
ft = FbGet8(tl,0) * idistx + FbGet8(tr,0) * distx;
fb = FbGet8(bl,0) * idistx + FbGet8(br,0) * distx;
r = (((ft * idisty + fb * disty) >> 16) & 0xff);
ft = FbGet8(tl,8) * idistx + FbGet8(tr,8) * distx;
fb = FbGet8(bl,8) * idistx + FbGet8(br,8) * distx;
r |= (((ft * idisty + fb * disty) >> 8) & 0xff00);
ft = FbGet8(tl,16) * idistx + FbGet8(tr,16) * distx;
fb = FbGet8(bl,16) * idistx + FbGet8(br,16) * distx;
r |= (((ft * idisty + fb * disty)) & 0xff0000);
ft = FbGet8(tl,24) * idistx + FbGet8(tr,24) * distx;
fb = FbGet8(bl,24) * idistx + FbGet8(br,24) * distx;
r |= (((ft * idisty + fb * disty) << 8) & 0xff000000);
WRITE(buffer + i, r);
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
}
} else if (pict->filter == PictFilterConvolution) {
xFixed *params = pict->filter_params;
INT32 cwidth = xFixedToInt(params[0]);
INT32 cheight = xFixedToInt(params[1]);
int xoff = (params[0] - xFixed1) >> 1;
int yoff = (params[1] - xFixed1) >> 1;
params += 2;
for (i = 0; i < width; ++i) {
if (!mask || mask[i] & maskBits)
{
if (!v.vector[2]) {
WRITE(buffer + i, 0);
} else {
int x1, x2, y1, y2, x, y;
INT32 srtot, sgtot, sbtot, satot;
xFixed *p = params;
if (!affine) {
xFixed_48_16 tmp;
tmp = ((xFixed_48_16)v.vector[0] << 16)/v.vector[2] - xoff;
x1 = xFixedToInt(tmp);
tmp = ((xFixed_48_16)v.vector[1] << 16)/v.vector[2] - yoff;
y1 = xFixedToInt(tmp);
} else {
x1 = xFixedToInt(v.vector[0] - xoff);
y1 = xFixedToInt(v.vector[1] - yoff);
}
x2 = x1 + cwidth;
y2 = y1 + cheight;
srtot = sgtot = sbtot = satot = 0;
for (y = y1; y < y2; y++) {
int ty = (pict->repeatType == RepeatNormal) ? MOD (y, pict->pDrawable->height) : y;
for (x = x1; x < x2; x++) {
if (*p) {
int tx = (pict->repeatType == RepeatNormal) ? MOD (x, pict->pDrawable->width) : x;
if (POINT_IN_REGION (0, pict->pCompositeClip, tx + dx, ty + dy, &box)) {
FbBits *b = bits + (ty + dy)*stride;
CARD32 c = fetch(b, tx + dx, indexed);
srtot += Red(c) * *p;
sgtot += Green(c) * *p;
sbtot += Blue(c) * *p;
satot += Alpha(c) * *p;
}
}
p++;
}
}
satot >>= 16;
srtot >>= 16;
sgtot >>= 16;
sbtot >>= 16;
if (satot < 0) satot = 0; else if (satot > 0xff) satot = 0xff;
if (srtot < 0) srtot = 0; else if (srtot > 0xff) srtot = 0xff;
if (sgtot < 0) sgtot = 0; else if (sgtot > 0xff) sgtot = 0xff;
if (sbtot < 0) sbtot = 0; else if (sbtot > 0xff) sbtot = 0xff;
WRITE(buffer + i, ((satot << 24) |
(srtot << 16) |
(sgtot << 8) |
(sbtot )));
}
}
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
fbFinishAccess (pict->pDrawable);
}
|
C
|
xserver
| 0 |
CVE-2016-6213
|
https://www.cvedetails.com/cve/CVE-2016-6213/
|
CWE-400
|
https://github.com/torvalds/linux/commit/d29216842a85c7970c536108e093963f02714498
|
d29216842a85c7970c536108e093963f02714498
|
mnt: Add a per mount namespace limit on the number of mounts
CAI Qian <[email protected]> pointed out that the semantics
of shared subtrees make it possible to create an exponentially
increasing number of mounts in a mount namespace.
mkdir /tmp/1 /tmp/2
mount --make-rshared /
for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done
Will create 2^20 or 1048576 mounts, which is a practical problem
as some people have managed to hit this by accident.
As such CVE-2016-6213 was assigned.
Ian Kent <[email protected]> described the situation for autofs users
as follows:
> The number of mounts for direct mount maps is usually not very large because of
> the way they are implemented, large direct mount maps can have performance
> problems. There can be anywhere from a few (likely case a few hundred) to less
> than 10000, plus mounts that have been triggered and not yet expired.
>
> Indirect mounts have one autofs mount at the root plus the number of mounts that
> have been triggered and not yet expired.
>
> The number of autofs indirect map entries can range from a few to the common
> case of several thousand and in rare cases up to between 30000 and 50000. I've
> not heard of people with maps larger than 50000 entries.
>
> The larger the number of map entries the greater the possibility for a large
> number of active mounts so it's not hard to expect cases of a 1000 or somewhat
> more active mounts.
So I am setting the default number of mounts allowed per mount
namespace at 100,000. This is more than enough for any use case I
know of, but small enough to quickly stop an exponential increase
in mounts. Which should be perfect to catch misconfigurations and
malfunctioning programs.
For anyone who needs a higher limit this can be changed by writing
to the new /proc/sys/fs/mount-max sysctl.
Tested-by: CAI Qian <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
|
static void shrink_submounts(struct mount *mnt)
{
LIST_HEAD(graveyard);
struct mount *m;
/* extract submounts of 'mountpoint' from the expiration list */
while (select_submounts(mnt, &graveyard)) {
while (!list_empty(&graveyard)) {
m = list_first_entry(&graveyard, struct mount,
mnt_expire);
touch_mnt_namespace(m->mnt_ns);
umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
}
}
}
|
static void shrink_submounts(struct mount *mnt)
{
LIST_HEAD(graveyard);
struct mount *m;
/* extract submounts of 'mountpoint' from the expiration list */
while (select_submounts(mnt, &graveyard)) {
while (!list_empty(&graveyard)) {
m = list_first_entry(&graveyard, struct mount,
mnt_expire);
touch_mnt_namespace(m->mnt_ns);
umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
}
}
}
|
C
|
linux
| 0 |
CVE-2016-6290
|
https://www.cvedetails.com/cve/CVE-2016-6290/
|
CWE-416
|
https://git.php.net/?p=php-src.git;a=commit;h=3798eb6fd5dddb211b01d41495072fd9858d4e32
|
3798eb6fd5dddb211b01d41495072fd9858d4e32
| null |
PHPAPI void php_session_start(TSRMLS_D) /* {{{ */
{
zval **ppid;
zval **data;
char *p, *value;
int nrand;
int lensess;
if (PS(use_only_cookies)) {
PS(apply_trans_sid) = 0;
} else {
PS(apply_trans_sid) = PS(use_trans_sid);
}
switch (PS(session_status)) {
case php_session_active:
php_error(E_NOTICE, "A session had already been started - ignoring session_start()");
return;
break;
case php_session_disabled:
value = zend_ini_string("session.save_handler", sizeof("session.save_handler"), 0);
if (!PS(mod) && value) {
PS(mod) = _php_find_ps_module(value TSRMLS_CC);
if (!PS(mod)) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "Cannot find save handler '%s' - session startup failed", value);
return;
}
}
value = zend_ini_string("session.serialize_handler", sizeof("session.serialize_handler"), 0);
if (!PS(serializer) && value) {
PS(serializer) = _php_find_ps_serializer(value TSRMLS_CC);
if (!PS(serializer)) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "Cannot find serialization handler '%s' - session startup failed", value);
return;
}
}
PS(session_status) = php_session_none;
/* fallthrough */
default:
case php_session_none:
PS(define_sid) = 1;
PS(send_cookie) = 1;
}
lensess = strlen(PS(session_name));
/* Cookies are preferred, because initially
* cookie and get variables will be available. */
if (!PS(id)) {
if (PS(use_cookies) && zend_hash_find(&EG(symbol_table), "_COOKIE", sizeof("_COOKIE"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
PS(apply_trans_sid) = 0;
PS(define_sid) = 0;
}
if (!PS(use_only_cookies) && !PS(id) &&
zend_hash_find(&EG(symbol_table), "_GET", sizeof("_GET"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
}
if (!PS(use_only_cookies) && !PS(id) &&
zend_hash_find(&EG(symbol_table), "_POST", sizeof("_POST"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
}
}
/* Check the REQUEST_URI symbol for a string of the form
* '<session-name>=<session-id>' to allow URLs of the form
* http://yoursite/<session-name>=<session-id>/script.php */
if (!PS(use_only_cookies) && !PS(id) && PG(http_globals)[TRACK_VARS_SERVER] &&
zend_hash_find(Z_ARRVAL_P(PG(http_globals)[TRACK_VARS_SERVER]), "REQUEST_URI", sizeof("REQUEST_URI"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_STRING &&
(p = strstr(Z_STRVAL_PP(data), PS(session_name))) &&
p[lensess] == '='
) {
char *q;
p += lensess + 1;
if ((q = strpbrk(p, "/?\\"))) {
PS(id) = estrndup(p, q - p);
PS(send_cookie) = 0;
}
}
/* Check whether the current request was referred to by
* an external site which invalidates the previously found id. */
if (PS(id) &&
PS(extern_referer_chk)[0] != '\0' &&
PG(http_globals)[TRACK_VARS_SERVER] &&
zend_hash_find(Z_ARRVAL_P(PG(http_globals)[TRACK_VARS_SERVER]), "HTTP_REFERER", sizeof("HTTP_REFERER"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_STRING &&
Z_STRLEN_PP(data) != 0 &&
strstr(Z_STRVAL_PP(data), PS(extern_referer_chk)) == NULL
) {
efree(PS(id));
PS(id) = NULL;
PS(send_cookie) = 1;
if (PS(use_trans_sid) && !PS(use_only_cookies)) {
PS(apply_trans_sid) = 1;
}
}
/* Finally check session id for dangarous characters
* Security note: session id may be embedded in HTML pages.*/
if (PS(id) && strpbrk(PS(id), "\r\n\t <>'\"\\")) {
efree(PS(id));
PS(id) = NULL;
}
php_session_initialize(TSRMLS_C);
php_session_cache_limiter(TSRMLS_C);
if ((PS(mod_data) || PS(mod_user_implemented)) && PS(gc_probability) > 0) {
int nrdels = -1;
nrand = (int) ((float) PS(gc_divisor) * php_combined_lcg(TSRMLS_C));
if (nrand < PS(gc_probability)) {
PS(mod)->s_gc(&PS(mod_data), PS(gc_maxlifetime), &nrdels TSRMLS_CC);
#ifdef SESSION_DEBUG
if (nrdels != -1) {
php_error_docref(NULL TSRMLS_CC, E_NOTICE, "purged %d expired session objects", nrdels);
}
#endif
}
}
}
/* }}} */
|
PHPAPI void php_session_start(TSRMLS_D) /* {{{ */
{
zval **ppid;
zval **data;
char *p, *value;
int nrand;
int lensess;
if (PS(use_only_cookies)) {
PS(apply_trans_sid) = 0;
} else {
PS(apply_trans_sid) = PS(use_trans_sid);
}
switch (PS(session_status)) {
case php_session_active:
php_error(E_NOTICE, "A session had already been started - ignoring session_start()");
return;
break;
case php_session_disabled:
value = zend_ini_string("session.save_handler", sizeof("session.save_handler"), 0);
if (!PS(mod) && value) {
PS(mod) = _php_find_ps_module(value TSRMLS_CC);
if (!PS(mod)) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "Cannot find save handler '%s' - session startup failed", value);
return;
}
}
value = zend_ini_string("session.serialize_handler", sizeof("session.serialize_handler"), 0);
if (!PS(serializer) && value) {
PS(serializer) = _php_find_ps_serializer(value TSRMLS_CC);
if (!PS(serializer)) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "Cannot find serialization handler '%s' - session startup failed", value);
return;
}
}
PS(session_status) = php_session_none;
/* fallthrough */
default:
case php_session_none:
PS(define_sid) = 1;
PS(send_cookie) = 1;
}
lensess = strlen(PS(session_name));
/* Cookies are preferred, because initially
* cookie and get variables will be available. */
if (!PS(id)) {
if (PS(use_cookies) && zend_hash_find(&EG(symbol_table), "_COOKIE", sizeof("_COOKIE"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
PS(apply_trans_sid) = 0;
PS(define_sid) = 0;
}
if (!PS(use_only_cookies) && !PS(id) &&
zend_hash_find(&EG(symbol_table), "_GET", sizeof("_GET"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
}
if (!PS(use_only_cookies) && !PS(id) &&
zend_hash_find(&EG(symbol_table), "_POST", sizeof("_POST"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_ARRAY &&
zend_hash_find(Z_ARRVAL_PP(data), PS(session_name), lensess + 1, (void **) &ppid) == SUCCESS
) {
ppid2sid(ppid TSRMLS_CC);
}
}
/* Check the REQUEST_URI symbol for a string of the form
* '<session-name>=<session-id>' to allow URLs of the form
* http://yoursite/<session-name>=<session-id>/script.php */
if (!PS(use_only_cookies) && !PS(id) && PG(http_globals)[TRACK_VARS_SERVER] &&
zend_hash_find(Z_ARRVAL_P(PG(http_globals)[TRACK_VARS_SERVER]), "REQUEST_URI", sizeof("REQUEST_URI"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_STRING &&
(p = strstr(Z_STRVAL_PP(data), PS(session_name))) &&
p[lensess] == '='
) {
char *q;
p += lensess + 1;
if ((q = strpbrk(p, "/?\\"))) {
PS(id) = estrndup(p, q - p);
PS(send_cookie) = 0;
}
}
/* Check whether the current request was referred to by
* an external site which invalidates the previously found id. */
if (PS(id) &&
PS(extern_referer_chk)[0] != '\0' &&
PG(http_globals)[TRACK_VARS_SERVER] &&
zend_hash_find(Z_ARRVAL_P(PG(http_globals)[TRACK_VARS_SERVER]), "HTTP_REFERER", sizeof("HTTP_REFERER"), (void **) &data) == SUCCESS &&
Z_TYPE_PP(data) == IS_STRING &&
Z_STRLEN_PP(data) != 0 &&
strstr(Z_STRVAL_PP(data), PS(extern_referer_chk)) == NULL
) {
efree(PS(id));
PS(id) = NULL;
PS(send_cookie) = 1;
if (PS(use_trans_sid) && !PS(use_only_cookies)) {
PS(apply_trans_sid) = 1;
}
}
/* Finally check session id for dangarous characters
* Security note: session id may be embedded in HTML pages.*/
if (PS(id) && strpbrk(PS(id), "\r\n\t <>'\"\\")) {
efree(PS(id));
PS(id) = NULL;
}
php_session_initialize(TSRMLS_C);
php_session_cache_limiter(TSRMLS_C);
if ((PS(mod_data) || PS(mod_user_implemented)) && PS(gc_probability) > 0) {
int nrdels = -1;
nrand = (int) ((float) PS(gc_divisor) * php_combined_lcg(TSRMLS_C));
if (nrand < PS(gc_probability)) {
PS(mod)->s_gc(&PS(mod_data), PS(gc_maxlifetime), &nrdels TSRMLS_CC);
#ifdef SESSION_DEBUG
if (nrdels != -1) {
php_error_docref(NULL TSRMLS_CC, E_NOTICE, "purged %d expired session objects", nrdels);
}
#endif
}
}
}
/* }}} */
|
C
|
php
| 0 |
CVE-2019-7395
|
https://www.cvedetails.com/cve/CVE-2019-7395/
|
CWE-399
|
https://github.com/ImageMagick/ImageMagick/commit/8a43abefb38c5e29138e1c9c515b313363541c06
|
8a43abefb38c5e29138e1c9c515b313363541c06
|
https://github.com/ImageMagick/ImageMagick/issues/1451
|
static inline ssize_t WritePSDOffset(const PSDInfo *psd_info,Image *image,
const MagickSizeType size,const MagickOffsetType offset)
{
MagickOffsetType
current_offset;
ssize_t
result;
current_offset=TellBlob(image);
(void) SeekBlob(image,offset,SEEK_SET);
if (psd_info->version == 1)
result=WriteBlobMSBShort(image,(unsigned short) size);
else
result=WriteBlobMSBLong(image,(unsigned int) size);
(void) SeekBlob(image,current_offset,SEEK_SET);
return(result);
}
|
static inline ssize_t WritePSDOffset(const PSDInfo *psd_info,Image *image,
const MagickSizeType size,const MagickOffsetType offset)
{
MagickOffsetType
current_offset;
ssize_t
result;
current_offset=TellBlob(image);
(void) SeekBlob(image,offset,SEEK_SET);
if (psd_info->version == 1)
result=WriteBlobMSBShort(image,(unsigned short) size);
else
result=WriteBlobMSBLong(image,(unsigned int) size);
(void) SeekBlob(image,current_offset,SEEK_SET);
return(result);
}
|
C
|
ImageMagick
| 0 |
CVE-2015-1271
|
https://www.cvedetails.com/cve/CVE-2015-1271/
|
CWE-119
|
https://github.com/chromium/chromium/commit/74fce5949bdf05a92c2bc0bd98e6e3e977c55376
|
74fce5949bdf05a92c2bc0bd98e6e3e977c55376
|
Fixed volume slider element event handling
MediaControlVolumeSliderElement::defaultEventHandler has been making
redundant calls to setVolume() & setMuted() on mouse activity. E.g. if
a mouse click changed the slider position, the above calls were made 4
times, once for each of these events: mousedown, input, mouseup,
DOMActive, click. This crack got exposed when PointerEvents are enabled
by default on M55, adding pointermove, pointerdown & pointerup to the
list.
This CL fixes the code to trigger the calls to setVolume() & setMuted()
only when the slider position is changed. Also added pointer events to
certain lists of mouse events in the code.
BUG=677900
Review-Url: https://codereview.chromium.org/2622273003
Cr-Commit-Position: refs/heads/master@{#446032}
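A minimal sketch of the fix's core idea, with the event and media types reduced to plain C stand-ins (not Blink classes): remember the last applied slider position and only forward a volume change when the position actually moved, so the repeated mouse/pointer events of a single gesture do not repeat the work.

/* Illustrative sketch only: volume_slider and set_media_volume are
 * assumptions, not the Blink media-controls implementation. */
#include <stdio.h>

struct volume_slider {
    double position;       /* 0.0 .. 1.0 as currently rendered   */
    double last_applied;   /* last value forwarded to the element */
};

static void set_media_volume(double v)
{
    printf("setVolume(%.2f)\n", v);
}

static void on_slider_event(struct volume_slider *s, double new_position)
{
    if (new_position == s->last_applied)
        return;                     /* repeated events at the same spot: no-op */
    s->last_applied = new_position; /* only a real position change is applied  */
    s->position = new_position;
    set_media_volume(new_position);
}

int main(void)
{
    struct volume_slider s = { 0.5, 0.5 };
    on_slider_event(&s, 0.8);  /* mousedown moved the thumb: applied once      */
    on_slider_event(&s, 0.8);  /* input/mouseup/click for the gesture: ignored */
    on_slider_event(&s, 0.8);
    return 0;
}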
|
MediaControlToggleClosedCaptionsButtonElement::getOverflowStringName() {
return WebLocalizedString::OverflowMenuCaptions;
}
|
MediaControlToggleClosedCaptionsButtonElement::getOverflowStringName() {
return WebLocalizedString::OverflowMenuCaptions;
}
|
C
|
Chrome
| 0 |
CVE-2018-18352
|
https://www.cvedetails.com/cve/CVE-2018-18352/
|
CWE-732
|
https://github.com/chromium/chromium/commit/a9cbaa7a40e2b2723cfc2f266c42f4980038a949
|
a9cbaa7a40e2b2723cfc2f266c42f4980038a949
|
Simplify "WouldTaintOrigin" concept in media/blink
Currently WebMediaPlayer has three predicates:
- DidGetOpaqueResponseFromServiceWorker
- HasSingleSecurityOrigin
- DidPassCORSAccessCheck
. These are used to determine whether the response body is available
for scripts. They are known to be confusing, and actually
MediaElementAudioSourceHandler::WouldTaintOrigin misuses them.
This CL merges the three predicates to one, WouldTaintOrigin, to remove
the confusion. Now the "response type" concept is available and we
don't need a custom CORS check, so this CL removes
BaseAudioContext::WouldTaintOrigin. This CL also renames
URLData::has_opaque_data_ and its (direct and indirect) data accessors
to match the spec.
Bug: 849942, 875153
Change-Id: I6acf50169d7445c4ff614e80ac606f79ee577d2a
Reviewed-on: https://chromium-review.googlesource.com/c/1238098
Reviewed-by: Fredrik Hubinette <[email protected]>
Reviewed-by: Kinuko Yasuda <[email protected]>
Reviewed-by: Raymond Toy <[email protected]>
Commit-Queue: Yutaka Hirano <[email protected]>
Cr-Commit-Position: refs/heads/master@{#598258}
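A minimal sketch of what a single merged predicate could look like, assuming a simplified response-type enum rather than Blink's actual types: an opaque response (a cross-origin load made without CORS approval) taints the origin, while basic and CORS-approved responses stay readable by script.

/* Illustrative sketch only: the enum and struct below are assumptions,
 * not the WebMediaPlayer/Blink API that the commit actually changes. */
#include <stdio.h>
#include <stdbool.h>

enum response_type { RESPONSE_BASIC, RESPONSE_CORS, RESPONSE_OPAQUE };

struct media_response {
    enum response_type type;
};

static bool would_taint_origin(const struct media_response *r)
{
    /* an opaque response is a cross-origin load without CORS approval,
     * so its body must stay hidden from script */
    if (r->type == RESPONSE_OPAQUE)
        return true;
    /* basic (same-origin) and CORS-approved responses are readable */
    return false;
}

int main(void)
{
    struct media_response opaque = { RESPONSE_OPAQUE };
    struct media_response cors   = { RESPONSE_CORS };
    printf("opaque taints: %d\n", would_taint_origin(&opaque));
    printf("cors taints:   %d\n", would_taint_origin(&cors));
    return 0;
}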
|
void WebMediaPlayerImpl::RequestRemotePlaybackControl() {
cast_impl_.requestRemotePlaybackControl();
}
|
void WebMediaPlayerImpl::RequestRemotePlaybackControl() {
cast_impl_.requestRemotePlaybackControl();
}
|
C
|
Chrome
| 0 |
CVE-2019-5810
|
https://www.cvedetails.com/cve/CVE-2019-5810/
|
CWE-200
|
https://github.com/chromium/chromium/commit/0bd10e13a008389ec14bbe7cc95f17d82ea151c1
|
0bd10e13a008389ec14bbe7cc95f17d82ea151c1
|
[autofill] Pin preview font-family to a system font
Bug: 916838
Change-Id: I4e874105262f2e15a11a7a18a7afd204e5827400
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1423109
Reviewed-by: Fabio Tirelo <[email protected]>
Reviewed-by: Koji Ishii <[email protected]>
Commit-Queue: Roger McFarlane <[email protected]>
Cr-Commit-Position: refs/heads/master@{#640884}
|
void TestClearSectionWithNodeContainingSelectOne(const char* html,
bool unowned) {
LoadHTML(html);
WebLocalFrame* web_frame = GetMainFrame();
ASSERT_NE(nullptr, web_frame);
FormCache form_cache(web_frame);
std::vector<FormData> forms = form_cache.ExtractNewForms();
ASSERT_EQ(1U, forms.size());
WebInputElement firstname = GetInputElementById("firstname");
firstname.SetAutofillState(WebAutofillState::kAutofilled);
WebInputElement lastname = GetInputElementById("lastname");
lastname.SetAutofillState(WebAutofillState::kAutofilled);
WebSelectElement state =
web_frame->GetDocument().GetElementById("state").To<WebSelectElement>();
state.SetValue(WebString::FromUTF8("AK"));
state.SetAutofillState(WebAutofillState::kAutofilled);
EXPECT_TRUE(form_cache.ClearSectionWithElement(firstname));
EXPECT_FALSE(firstname.IsAutofilled());
FormData form;
FormFieldData field;
EXPECT_TRUE(
FindFormAndFieldForFormControlElement(firstname, &form, &field));
EXPECT_EQ(GetCanonicalOriginForDocument(web_frame->GetDocument()),
form.origin);
EXPECT_FALSE(form.origin.is_empty());
if (!unowned) {
EXPECT_EQ(ASCIIToUTF16("TestForm"), form.name);
EXPECT_EQ(GURL("http://abc.com"), form.action);
}
const std::vector<FormFieldData>& fields = form.fields;
ASSERT_EQ(3U, fields.size());
FormFieldData expected;
expected.id_attribute = ASCIIToUTF16("firstname");
expected.name = expected.id_attribute;
expected.value.clear();
expected.form_control_type = "text";
expected.max_length = WebInputElement::DefaultMaxLength();
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[0]);
expected.id_attribute = ASCIIToUTF16("lastname");
expected.name = expected.id_attribute;
expected.value.clear();
expected.form_control_type = "text";
expected.max_length = WebInputElement::DefaultMaxLength();
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[1]);
expected.id_attribute = ASCIIToUTF16("state");
expected.name_attribute = ASCIIToUTF16("state");
expected.name = expected.name_attribute;
expected.value = ASCIIToUTF16("?");
expected.form_control_type = "select-one";
expected.max_length = 0;
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[2]);
EXPECT_EQ(0, firstname.SelectionStart());
EXPECT_EQ(0, firstname.SelectionEnd());
}
|
void TestClearSectionWithNodeContainingSelectOne(const char* html,
bool unowned) {
LoadHTML(html);
WebLocalFrame* web_frame = GetMainFrame();
ASSERT_NE(nullptr, web_frame);
FormCache form_cache(web_frame);
std::vector<FormData> forms = form_cache.ExtractNewForms();
ASSERT_EQ(1U, forms.size());
WebInputElement firstname = GetInputElementById("firstname");
firstname.SetAutofillState(WebAutofillState::kAutofilled);
WebInputElement lastname = GetInputElementById("lastname");
lastname.SetAutofillState(WebAutofillState::kAutofilled);
WebSelectElement state =
web_frame->GetDocument().GetElementById("state").To<WebSelectElement>();
state.SetValue(WebString::FromUTF8("AK"));
state.SetAutofillState(WebAutofillState::kAutofilled);
EXPECT_TRUE(form_cache.ClearSectionWithElement(firstname));
EXPECT_FALSE(firstname.IsAutofilled());
FormData form;
FormFieldData field;
EXPECT_TRUE(
FindFormAndFieldForFormControlElement(firstname, &form, &field));
EXPECT_EQ(GetCanonicalOriginForDocument(web_frame->GetDocument()),
form.origin);
EXPECT_FALSE(form.origin.is_empty());
if (!unowned) {
EXPECT_EQ(ASCIIToUTF16("TestForm"), form.name);
EXPECT_EQ(GURL("http://abc.com"), form.action);
}
const std::vector<FormFieldData>& fields = form.fields;
ASSERT_EQ(3U, fields.size());
FormFieldData expected;
expected.id_attribute = ASCIIToUTF16("firstname");
expected.name = expected.id_attribute;
expected.value.clear();
expected.form_control_type = "text";
expected.max_length = WebInputElement::DefaultMaxLength();
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[0]);
expected.id_attribute = ASCIIToUTF16("lastname");
expected.name = expected.id_attribute;
expected.value.clear();
expected.form_control_type = "text";
expected.max_length = WebInputElement::DefaultMaxLength();
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[1]);
expected.id_attribute = ASCIIToUTF16("state");
expected.name_attribute = ASCIIToUTF16("state");
expected.name = expected.name_attribute;
expected.value = ASCIIToUTF16("?");
expected.form_control_type = "select-one";
expected.max_length = 0;
EXPECT_FORM_FIELD_DATA_EQUALS(expected, fields[2]);
EXPECT_EQ(0, firstname.SelectionStart());
EXPECT_EQ(0, firstname.SelectionEnd());
}
|
C
|
Chrome
| 0 |
CVE-2016-2324
|
https://www.cvedetails.com/cve/CVE-2016-2324/
|
CWE-119
|
https://github.com/git/git/commit/de1e67d0703894cb6ea782e36abb63976ab07e60
|
de1e67d0703894cb6ea782e36abb63976ab07e60
|
list-objects: pass full pathname to callbacks
When we find a blob at "a/b/c", we currently pass this to
our show_object_fn callbacks as two components: "a/b/" and
"c". Callbacks which want the full value then call
path_name(), which concatenates the two. But this is an
inefficient interface; the path is a strbuf, and we could
simply append "c" to it temporarily, then roll back the
length, without creating a new copy.
So we could improve this by teaching the callsites of
path_name() this trick (and there are only 3). But we can
also notice that no callback actually cares about the
broken-down representation, and simply pass each callback
the full path "a/b/c" as a string. The callback code becomes
even simpler, then, as we do not have to worry about freeing
an allocated buffer, nor rolling back our modification to
the strbuf.
This is theoretically less efficient, as some callbacks
would not bother to format the final path component. But in
practice this is not measurable. Since we use the same
strbuf over and over, our work to grow it is amortized, and
we really only pay to memcpy a few bytes.
Signed-off-by: Jeff King <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
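The append-then-roll-back trick described above can be sketched with a tiny stand-in for git's strbuf; strbuf_addstr() and strbuf_setlen() are the real calls, but the structure below is only an illustration under those assumptions, not git source.

/* Illustrative sketch only: sbuf mimics git's strbuf so the example is
 * self-contained; it is not the actual git data structure. */
#include <stdio.h>
#include <string.h>

struct sbuf {
    char   buf[256];
    size_t len;
};

static void sbuf_add(struct sbuf *sb, const char *s)
{
    size_t n = strlen(s);
    memcpy(sb->buf + sb->len, s, n);
    sb->len += n;
    sb->buf[sb->len] = '\0';
}

/* the callback receives the full path as one string */
static void show_object(const char *full_path)
{
    printf("object at %s\n", full_path);
}

int main(void)
{
    struct sbuf path = { "a/b/", 4 };
    size_t saved = path.len;     /* remember the directory prefix length */

    sbuf_add(&path, "c");        /* temporarily append the entry name    */
    show_object(path.buf);       /* callback sees "a/b/c" with no copy   */

    path.len = saved;            /* roll back for the next entry         */
    path.buf[path.len] = '\0';
    sbuf_add(&path, "d");
    show_object(path.buf);       /* "a/b/d" */
    return 0;
}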
|
void mark_reachable_objects(struct rev_info *revs, int mark_reflog,
unsigned long mark_recent,
struct progress *progress)
{
struct connectivity_progress cp;
/*
* Set up revision parsing, and mark us as being interested
* in all object types, not just commits.
*/
revs->tag_objects = 1;
revs->blob_objects = 1;
revs->tree_objects = 1;
/* Add all refs from the index file */
add_index_objects_to_pending(revs, 0);
/* Add all external refs */
for_each_ref(add_one_ref, revs);
/* detached HEAD is not included in the list above */
head_ref(add_one_ref, revs);
/* Add all reflog info */
if (mark_reflog)
add_reflogs_to_pending(revs, 0);
cp.progress = progress;
cp.count = 0;
/*
* Set up the revision walk - this will move all commits
* from the pending list to the commit walking list.
*/
if (prepare_revision_walk(revs))
die("revision walk setup failed");
traverse_commit_list(revs, mark_commit, mark_object, &cp);
if (mark_recent) {
revs->ignore_missing_links = 1;
if (add_unseen_recent_objects_to_traversal(revs, mark_recent))
die("unable to mark recent objects");
if (prepare_revision_walk(revs))
die("revision walk setup failed");
traverse_commit_list(revs, mark_commit, mark_object, &cp);
}
display_progress(cp.progress, cp.count);
}
|
void mark_reachable_objects(struct rev_info *revs, int mark_reflog,
unsigned long mark_recent,
struct progress *progress)
{
struct connectivity_progress cp;
/*
* Set up revision parsing, and mark us as being interested
* in all object types, not just commits.
*/
revs->tag_objects = 1;
revs->blob_objects = 1;
revs->tree_objects = 1;
/* Add all refs from the index file */
add_index_objects_to_pending(revs, 0);
/* Add all external refs */
for_each_ref(add_one_ref, revs);
/* detached HEAD is not included in the list above */
head_ref(add_one_ref, revs);
/* Add all reflog info */
if (mark_reflog)
add_reflogs_to_pending(revs, 0);
cp.progress = progress;
cp.count = 0;
/*
* Set up the revision walk - this will move all commits
* from the pending list to the commit walking list.
*/
if (prepare_revision_walk(revs))
die("revision walk setup failed");
traverse_commit_list(revs, mark_commit, mark_object, &cp);
if (mark_recent) {
revs->ignore_missing_links = 1;
if (add_unseen_recent_objects_to_traversal(revs, mark_recent))
die("unable to mark recent objects");
if (prepare_revision_walk(revs))
die("revision walk setup failed");
traverse_commit_list(revs, mark_commit, mark_object, &cp);
}
display_progress(cp.progress, cp.count);
}
|
C
|
git
| 0 |
CVE-2017-8287
|
https://www.cvedetails.com/cve/CVE-2017-8287/
|
CWE-119
|
https://git.savannah.gnu.org/cgit/freetype/freetype2.git/commit/?id=3774fc08b502c3e685afca098b6e8a195aded6a0
|
3774fc08b502c3e685afca098b6e8a195aded6a0
| null |
ps_table_add( PS_Table table,
FT_Int idx,
void* object,
FT_UInt length )
{
if ( idx < 0 || idx >= table->max_elems )
{
FT_ERROR(( "ps_table_add: invalid index\n" ));
return FT_THROW( Invalid_Argument );
}
/* grow the base block if needed */
if ( table->cursor + length > table->capacity )
{
FT_Error error;
FT_Offset new_size = table->capacity;
FT_PtrDist in_offset;
in_offset = (FT_Byte*)object - table->block;
if ( in_offset < 0 || (FT_Offset)in_offset >= table->capacity )
in_offset = -1;
while ( new_size < table->cursor + length )
{
/* increase size by 25% and round up to the nearest multiple
of 1024 */
new_size += ( new_size >> 2 ) + 1;
new_size = FT_PAD_CEIL( new_size, 1024 );
}
error = reallocate_t1_table( table, new_size );
if ( error )
return error;
if ( in_offset >= 0 )
object = table->block + in_offset;
}
/* add the object to the base block and adjust offset */
table->elements[idx] = table->block + table->cursor;
table->lengths [idx] = length;
FT_MEM_COPY( table->block + table->cursor, object, length );
table->cursor += length;
return FT_Err_Ok;
}
|
ps_table_add( PS_Table table,
FT_Int idx,
void* object,
FT_UInt length )
{
if ( idx < 0 || idx >= table->max_elems )
{
FT_ERROR(( "ps_table_add: invalid index\n" ));
return FT_THROW( Invalid_Argument );
}
/* grow the base block if needed */
if ( table->cursor + length > table->capacity )
{
FT_Error error;
FT_Offset new_size = table->capacity;
FT_PtrDist in_offset;
in_offset = (FT_Byte*)object - table->block;
if ( in_offset < 0 || (FT_Offset)in_offset >= table->capacity )
in_offset = -1;
while ( new_size < table->cursor + length )
{
/* increase size by 25% and round up to the nearest multiple
of 1024 */
new_size += ( new_size >> 2 ) + 1;
new_size = FT_PAD_CEIL( new_size, 1024 );
}
error = reallocate_t1_table( table, new_size );
if ( error )
return error;
if ( in_offset >= 0 )
object = table->block + in_offset;
}
/* add the object to the base block and adjust offset */
table->elements[idx] = table->block + table->cursor;
table->lengths [idx] = length;
FT_MEM_COPY( table->block + table->cursor, object, length );
table->cursor += length;
return FT_Err_Ok;
}
|
C
|
savannah
| 0 |
CVE-2017-5112
|
https://www.cvedetails.com/cve/CVE-2017-5112/
|
CWE-119
|
https://github.com/chromium/chromium/commit/f6ac1dba5e36f338a490752a2cbef3339096d9fe
|
f6ac1dba5e36f338a490752a2cbef3339096d9fe
|
Reset ES3 pixel pack parameters and PIXEL_PACK_BUFFER binding in DrawingBuffer before ReadPixels() and recover them later.
BUG=740603
TEST=new conformance test
[email protected],[email protected]
Change-Id: I3ea54c6cc34f34e249f7c8b9f792d93c5e1958f4
Reviewed-on: https://chromium-review.googlesource.com/570840
Reviewed-by: Antoine Labour <[email protected]>
Reviewed-by: Kenneth Russell <[email protected]>
Commit-Queue: Zhenyao Mo <[email protected]>
Cr-Commit-Position: refs/heads/master@{#486518}
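A minimal sketch of the save/reset/restore pattern the message describes, with the GL pixel-pack state reduced to a plain struct so the example is self-contained; the real code manipulates GL bindings inside DrawingBuffer, so everything below is an assumption-laden illustration, not Chromium code.

/* Illustrative sketch only: pack_state and read_pixels stand in for the
 * ES3 pixel-pack parameters and glReadPixels(). */
#include <stdio.h>

struct pack_state {
    int pack_row_length;
    int pack_skip_rows;
    int pixel_pack_buffer; /* 0 means "no buffer bound" */
};

static void read_pixels(const struct pack_state *s)
{
    printf("ReadPixels with row_length=%d skip_rows=%d pack_buffer=%d\n",
           s->pack_row_length, s->pack_skip_rows, s->pixel_pack_buffer);
}

int main(void)
{
    struct pack_state user     = { 256, 4, 7 }; /* state set by the page      */
    struct pack_state saved    = user;          /* 1. remember user state     */
    struct pack_state defaults = { 0, 0, 0 };

    user = defaults;                            /* 2. reset to known defaults */
    read_pixels(&user);                         /* 3. read into own memory    */
    user = saved;                               /* 4. restore the user state  */
    read_pixels(&user);
    return 0;
}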
|
void WebGLRenderingContextBase::bufferSubData(GLenum target,
long long offset,
DOMArrayBuffer* data) {
if (isContextLost())
return;
DCHECK(data);
BufferSubDataImpl(target, offset, data->ByteLength(), data->Data());
}
|
void WebGLRenderingContextBase::bufferSubData(GLenum target,
long long offset,
DOMArrayBuffer* data) {
if (isContextLost())
return;
DCHECK(data);
BufferSubDataImpl(target, offset, data->ByteLength(), data->Data());
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/dc3857aac17be72c96f28d860d875235b3be349a
|
dc3857aac17be72c96f28d860d875235b3be349a
|
Unreviewed, rolling out r142736.
http://trac.webkit.org/changeset/142736
https://bugs.webkit.org/show_bug.cgi?id=109716
Broke ABI, nightly builds crash on launch (Requested by ap on
#webkit).
Patch by Sheriff Bot <[email protected]> on 2013-02-13
Source/WebKit2:
* Shared/APIClientTraits.cpp:
(WebKit):
* Shared/APIClientTraits.h:
* UIProcess/API/C/WKPage.h:
* UIProcess/API/gtk/WebKitLoaderClient.cpp:
(attachLoaderClientToView):
* WebProcess/InjectedBundle/API/c/WKBundlePage.h:
* WebProcess/qt/QtBuiltinBundlePage.cpp:
(WebKit::QtBuiltinBundlePage::QtBuiltinBundlePage):
Tools:
* MiniBrowser/mac/WK2BrowserWindowController.m:
(-[WK2BrowserWindowController awakeFromNib]):
* WebKitTestRunner/InjectedBundle/InjectedBundlePage.cpp:
(WTR::InjectedBundlePage::InjectedBundlePage):
* WebKitTestRunner/TestController.cpp:
(WTR::TestController::createWebViewWithOptions):
git-svn-id: svn://svn.chromium.org/blink/trunk@142762 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void InjectedBundlePage::unableToImplementPolicy(WKBundlePageRef page, WKBundleFrameRef frame, WKErrorRef error, WKTypeRef* userData, const void* clientInfo)
{
static_cast<InjectedBundlePage*>(const_cast<void*>(clientInfo))->unableToImplementPolicy(page, frame, error, userData);
}
|
void InjectedBundlePage::unableToImplementPolicy(WKBundlePageRef page, WKBundleFrameRef frame, WKErrorRef error, WKTypeRef* userData, const void* clientInfo)
{
static_cast<InjectedBundlePage*>(const_cast<void*>(clientInfo))->unableToImplementPolicy(page, frame, error, userData);
}
|
C
|
Chrome
| 0 |
CVE-2011-2795
|
https://www.cvedetails.com/cve/CVE-2011-2795/
|
CWE-264
|
https://github.com/chromium/chromium/commit/73edae623529f04c668268de49d00324b96166a2
|
73edae623529f04c668268de49d00324b96166a2
|
There are too many poorly named functions to create a fragment from markup
https://bugs.webkit.org/show_bug.cgi?id=87339
Reviewed by Eric Seidel.
Source/WebCore:
Moved all functions that create a fragment from markup to markup.h/cpp.
There should be no behavioral change.
* dom/Range.cpp:
(WebCore::Range::createContextualFragment):
* dom/Range.h: Removed createDocumentFragmentForElement.
* dom/ShadowRoot.cpp:
(WebCore::ShadowRoot::setInnerHTML):
* editing/markup.cpp:
(WebCore::createFragmentFromMarkup):
(WebCore::createFragmentForInnerOuterHTML): Renamed from createFragmentFromSource.
(WebCore::createFragmentForTransformToFragment): Moved from XSLTProcessor.
(WebCore::removeElementPreservingChildren): Moved from Range.
(WebCore::createContextualFragment): Ditto.
* editing/markup.h:
* html/HTMLElement.cpp:
(WebCore::HTMLElement::setInnerHTML):
(WebCore::HTMLElement::setOuterHTML):
(WebCore::HTMLElement::insertAdjacentHTML):
* inspector/DOMPatchSupport.cpp:
(WebCore::DOMPatchSupport::patchNode): Added a FIXME since this code should be using
one of the functions listed in markup.h
* xml/XSLTProcessor.cpp:
(WebCore::XSLTProcessor::transformToFragment):
Source/WebKit/qt:
Replace calls to Range::createDocumentFragmentForElement by calls to
createContextualDocumentFragment.
* Api/qwebelement.cpp:
(QWebElement::appendInside):
(QWebElement::prependInside):
(QWebElement::prependOutside):
(QWebElement::appendOutside):
(QWebElement::encloseContentsWith):
(QWebElement::encloseWith):
git-svn-id: svn://svn.chromium.org/blink/trunk@118414 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
short Range::comparePoint(Node* refNode, int offset, ExceptionCode& ec) const
{
if (!m_start.container()) {
ec = INVALID_STATE_ERR;
return 0;
}
if (!refNode) {
ec = HIERARCHY_REQUEST_ERR;
return 0;
}
if (!refNode->attached() || refNode->document() != m_ownerDocument) {
ec = WRONG_DOCUMENT_ERR;
return 0;
}
ec = 0;
checkNodeWOffset(refNode, offset, ec);
if (ec)
return 0;
if (compareBoundaryPoints(refNode, offset, m_start.container(), m_start.offset(), ec) < 0)
return -1;
if (ec)
return 0;
if (compareBoundaryPoints(refNode, offset, m_end.container(), m_end.offset(), ec) > 0 && !ec)
return 1;
return 0;
}
|
short Range::comparePoint(Node* refNode, int offset, ExceptionCode& ec) const
{
if (!m_start.container()) {
ec = INVALID_STATE_ERR;
return 0;
}
if (!refNode) {
ec = HIERARCHY_REQUEST_ERR;
return 0;
}
if (!refNode->attached() || refNode->document() != m_ownerDocument) {
ec = WRONG_DOCUMENT_ERR;
return 0;
}
ec = 0;
checkNodeWOffset(refNode, offset, ec);
if (ec)
return 0;
if (compareBoundaryPoints(refNode, offset, m_start.container(), m_start.offset(), ec) < 0)
return -1;
if (ec)
return 0;
if (compareBoundaryPoints(refNode, offset, m_end.container(), m_end.offset(), ec) > 0 && !ec)
return 1;
return 0;
}
|
C
|
Chrome
| 0 |
CVE-2016-5170
|
https://www.cvedetails.com/cve/CVE-2016-5170/
|
CWE-416
|
https://github.com/chromium/chromium/commit/c3957448cfc6e299165196a33cd954b790875fdb
|
c3957448cfc6e299165196a33cd954b790875fdb
|
Cleanup and remove dead code in SetFocusedElement
This early-out was added in:
https://crrev.com/ce8ea3446283965c7eabab592cbffe223b1cf2bc
Back then, we applied fragment focus in LayoutUpdated() which could
cause this issue. This got cleaned up in:
https://crrev.com/45236fd563e9df53dc45579be1f3d0b4784885a2
so that focus is no longer applied after layout.
+Cleanup: Goto considered harmful
Bug: 795381
Change-Id: Ifeb4d2e03e872fd48cca6720b1d4de36ad1ecbb7
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1524417
Commit-Queue: David Bokan <[email protected]>
Reviewed-by: Stefan Zager <[email protected]>
Cr-Commit-Position: refs/heads/master@{#641101}
|
const AtomicString& Document::vlinkColor() const {
return BodyAttributeValue(kVlinkAttr);
}
|
const AtomicString& Document::vlinkColor() const {
return BodyAttributeValue(kVlinkAttr);
}
|
C
|
Chrome
| 0 |
CVE-2015-3842
|
https://www.cvedetails.com/cve/CVE-2015-3842/
|
CWE-119
|
https://android.googlesource.com/platform/frameworks/av/+/aeea52da00d210587fb3ed895de3d5f2e0264c88
|
aeea52da00d210587fb3ed895de3d5f2e0264c88
|
audio effects: fix heap overflow
Check consistency of effect command reply sizes before
copying to reply address.
Also add null pointer check on reply size.
Also remove unused parameter warning.
Bug: 21953516.
Change-Id: I4cf00c12eaed696af28f3b7613f7e36f47a160c4
(cherry picked from commit 0f714a464d2425afe00d6450535e763131b40844)
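A minimal sketch of the size check the message describes: validate the caller-supplied reply size (and reject a null size pointer) before copying command output into the reply buffer. The parameter names mirror the effect-command convention (pReplySize, pReplyData), but the surrounding code is an assumption, not the Android implementation.

/* Illustrative sketch only: effect_command_reply is a made-up helper
 * showing the bounds check, not the real effect command handler. */
#include <stdio.h>
#include <string.h>
#include <errno.h>

static int effect_command_reply(const void *result, unsigned int resultSize,
                                unsigned int *pReplySize, void *pReplyData)
{
    /* reject null or too-small reply buffers instead of overflowing them */
    if (pReplyData == NULL || pReplySize == NULL || *pReplySize < resultSize)
        return -EINVAL;

    memcpy(pReplyData, result, resultSize);
    *pReplySize = resultSize;
    return 0;
}

int main(void)
{
    int value = 42;
    int reply;
    unsigned int replySize = sizeof(reply);

    printf("ok: %d\n",
           effect_command_reply(&value, sizeof(value), &replySize, &reply));

    replySize = 2;  /* deliberately too small: must be rejected, not overflowed */
    printf("short: %d\n",
           effect_command_reply(&value, sizeof(value), &replySize, &reply));
    return 0;
}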
|
void EqualizerSetBandLevel(EffectContext *pContext, int band, short Gain){
int gainRounded;
if(Gain > 0){
gainRounded = (int)((Gain+50)/100);
}else{
gainRounded = (int)((Gain-50)/100);
}
pContext->pBundledContext->bandGaindB[band] = gainRounded;
pContext->pBundledContext->CurPreset = PRESET_CUSTOM;
EqualizerUpdateActiveParams(pContext);
LvmEffect_limitLevel(pContext);
}
|
void EqualizerSetBandLevel(EffectContext *pContext, int band, short Gain){
int gainRounded;
if(Gain > 0){
gainRounded = (int)((Gain+50)/100);
}else{
gainRounded = (int)((Gain-50)/100);
}
pContext->pBundledContext->bandGaindB[band] = gainRounded;
pContext->pBundledContext->CurPreset = PRESET_CUSTOM;
EqualizerUpdateActiveParams(pContext);
LvmEffect_limitLevel(pContext);
}
|
C
|
Android
| 0 |
CVE-2015-5302
|
https://www.cvedetails.com/cve/CVE-2015-5302/
|
CWE-200
|
https://github.com/abrt/libreport/commit/257578a23d1537a2d235aaa2b1488ee4f818e360
|
257578a23d1537a2d235aaa2b1488ee4f818e360
|
wizard: fix save users changes after reviewing dump dir files
If the user reviewed the dump dir's files during reporting the crash, the
changes was thrown away and original data was passed to the bugzilla bug
report.
report-gtk saves the first text view buffer and then reloads data from the
reported problem directory, which causes that the changes made to those text
views are thrown away.
Function save_text_if_changed(), except of saving text, also reload the files
from dump dir and update gui state from the dump dir. The commit moves the
reloading and updating gui functions away from this function.
Related to rhbz#1270235
Signed-off-by: Matej Habrnal <[email protected]>
|
static char *missing_items_in_comma_list(const char *input_item_list)
{
if (!input_item_list)
return NULL;
char *item_list = xstrdup(input_item_list);
char *result = item_list;
char *dst = item_list;
while (item_list[0])
{
char *end = strchr(item_list, ',');
if (end) *end = '\0';
if (!problem_data_get_item_or_NULL(g_cd, item_list))
{
if (dst != result)
*dst++ = ',';
dst = stpcpy(dst, item_list);
}
if (!end)
break;
*end = ',';
item_list = end + 1;
}
if (result == dst)
{
free(result);
result = NULL;
}
return result;
}
|
static char *missing_items_in_comma_list(const char *input_item_list)
{
if (!input_item_list)
return NULL;
char *item_list = xstrdup(input_item_list);
char *result = item_list;
char *dst = item_list;
while (item_list[0])
{
char *end = strchr(item_list, ',');
if (end) *end = '\0';
if (!problem_data_get_item_or_NULL(g_cd, item_list))
{
if (dst != result)
*dst++ = ',';
dst = stpcpy(dst, item_list);
}
if (!end)
break;
*end = ',';
item_list = end + 1;
}
if (result == dst)
{
free(result);
result = NULL;
}
return result;
}
|
C
|
libreport
| 0 |
CVE-2011-1799
|
https://www.cvedetails.com/cve/CVE-2011-1799/
|
CWE-20
|
https://github.com/chromium/chromium/commit/5fd35e5359c6345b8709695cd71fba307318e6aa
|
5fd35e5359c6345b8709695cd71fba307318e6aa
|
Source/WebCore: Fix for bug 64046 - Wrong image height in absolutely positioned div in
relatively positioned parent with bottom padding.
https://bugs.webkit.org/show_bug.cgi?id=64046
Patch by Kulanthaivel Palanichamy <[email protected]> on 2011-07-21
Reviewed by David Hyatt.
Test: fast/css/absolute-child-with-percent-height-inside-relative-parent.html
* rendering/RenderBox.cpp:
(WebCore::RenderBox::availableLogicalHeightUsing):
LayoutTests: Test to cover absolutely positioned child with percentage height
in relatively positioned parent with bottom padding.
https://bugs.webkit.org/show_bug.cgi?id=64046
Patch by Kulanthaivel Palanichamy <[email protected]> on 2011-07-21
Reviewed by David Hyatt.
* fast/css/absolute-child-with-percent-height-inside-relative-parent-expected.txt: Added.
* fast/css/absolute-child-with-percent-height-inside-relative-parent.html: Added.
git-svn-id: svn://svn.chromium.org/blink/trunk@91533 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void RenderBox::paint(PaintInfo& paintInfo, const LayoutPoint& paintOffset)
{
LayoutPoint adjustedPaintOffset = paintOffset + location();
PaintInfo childInfo(paintInfo);
childInfo.updatePaintingRootForChildren(this);
for (RenderObject* child = firstChild(); child; child = child->nextSibling())
child->paint(childInfo, adjustedPaintOffset);
}
|
void RenderBox::paint(PaintInfo& paintInfo, const LayoutPoint& paintOffset)
{
LayoutPoint adjustedPaintOffset = paintOffset + location();
PaintInfo childInfo(paintInfo);
childInfo.updatePaintingRootForChildren(this);
for (RenderObject* child = firstChild(); child; child = child->nextSibling())
child->paint(childInfo, adjustedPaintOffset);
}
|
C
|
Chrome
| 0 |
CVE-2017-17862
|
https://www.cvedetails.com/cve/CVE-2017-17862/
|
CWE-20
|
https://github.com/torvalds/linux/commit/c131187db2d3fa2f8bf32fdf4e9a4ef805168467
|
c131187db2d3fa2f8bf32fdf4e9a4ef805168467
|
bpf: fix branch pruning logic
when the verifier detects that register contains a runtime constant
and it's compared with another constant it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code, since it doesn't do as much data flow
analysis as the verifier does.
Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
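A minimal sketch of the sanitizing idea, assuming a simplified instruction encoding rather than the real struct bpf_insn: record which instructions the verifier actually visited, then overwrite the unvisited ones with a no-op before they reach the JIT.

/* Illustrative sketch only: insn and NOP_CODE are simplified assumptions,
 * not the kernel's BPF instruction layout or ja+0 encoding. */
#include <stdio.h>
#include <stdbool.h>

#define PROG_LEN 6

struct insn { int code; };           /* stand-in for struct bpf_insn */
#define NOP_CODE 0                   /* stand-in for a ja +0 no-op   */

static void sanitize_dead_code(struct insn *prog, const bool *seen, int len)
{
    for (int i = 0; i < len; i++)
        if (!seen[i])
            prog[i].code = NOP_CODE; /* never executed, but keeps the JIT happy */
}

int main(void)
{
    struct insn prog[PROG_LEN] = { {1}, {2}, {3}, {4}, {5}, {6} };
    /* suppose the verifier proved instructions 2 and 3 are unreachable */
    bool seen[PROG_LEN] = { true, true, false, false, true, true };

    sanitize_dead_code(prog, seen, PROG_LEN);
    for (int i = 0; i < PROG_LEN; i++)
        printf("insn %d -> code %d\n", i, prog[i].code);
    return 0;
}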
|
static bool try_match_pkt_pointers(const struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state *src_reg,
struct bpf_verifier_state *this_branch,
struct bpf_verifier_state *other_branch)
{
if (BPF_SRC(insn->code) != BPF_X)
return false;
switch (BPF_OP(insn->code)) {
case BPF_JGT:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' > pkt_end, pkt_meta' > pkt_data */
find_good_pkt_pointers(this_branch, dst_reg,
dst_reg->type, false);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end > pkt_data', pkt_data > pkt_meta' */
find_good_pkt_pointers(other_branch, src_reg,
src_reg->type, true);
} else {
return false;
}
break;
case BPF_JLT:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' < pkt_end, pkt_meta' < pkt_data */
find_good_pkt_pointers(other_branch, dst_reg,
dst_reg->type, true);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end < pkt_data', pkt_data > pkt_meta' */
find_good_pkt_pointers(this_branch, src_reg,
src_reg->type, false);
} else {
return false;
}
break;
case BPF_JGE:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' >= pkt_end, pkt_meta' >= pkt_data */
find_good_pkt_pointers(this_branch, dst_reg,
dst_reg->type, true);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end >= pkt_data', pkt_data >= pkt_meta' */
find_good_pkt_pointers(other_branch, src_reg,
src_reg->type, false);
} else {
return false;
}
break;
case BPF_JLE:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' <= pkt_end, pkt_meta' <= pkt_data */
find_good_pkt_pointers(other_branch, dst_reg,
dst_reg->type, false);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end <= pkt_data', pkt_data <= pkt_meta' */
find_good_pkt_pointers(this_branch, src_reg,
src_reg->type, true);
} else {
return false;
}
break;
default:
return false;
}
return true;
}
|
static bool try_match_pkt_pointers(const struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state *src_reg,
struct bpf_verifier_state *this_branch,
struct bpf_verifier_state *other_branch)
{
if (BPF_SRC(insn->code) != BPF_X)
return false;
switch (BPF_OP(insn->code)) {
case BPF_JGT:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' > pkt_end, pkt_meta' > pkt_data */
find_good_pkt_pointers(this_branch, dst_reg,
dst_reg->type, false);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end > pkt_data', pkt_data > pkt_meta' */
find_good_pkt_pointers(other_branch, src_reg,
src_reg->type, true);
} else {
return false;
}
break;
case BPF_JLT:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' < pkt_end, pkt_meta' < pkt_data */
find_good_pkt_pointers(other_branch, dst_reg,
dst_reg->type, true);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end < pkt_data', pkt_data > pkt_meta' */
find_good_pkt_pointers(this_branch, src_reg,
src_reg->type, false);
} else {
return false;
}
break;
case BPF_JGE:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' >= pkt_end, pkt_meta' >= pkt_data */
find_good_pkt_pointers(this_branch, dst_reg,
dst_reg->type, true);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end >= pkt_data', pkt_data >= pkt_meta' */
find_good_pkt_pointers(other_branch, src_reg,
src_reg->type, false);
} else {
return false;
}
break;
case BPF_JLE:
if ((dst_reg->type == PTR_TO_PACKET &&
src_reg->type == PTR_TO_PACKET_END) ||
(dst_reg->type == PTR_TO_PACKET_META &&
reg_is_init_pkt_pointer(src_reg, PTR_TO_PACKET))) {
/* pkt_data' <= pkt_end, pkt_meta' <= pkt_data */
find_good_pkt_pointers(other_branch, dst_reg,
dst_reg->type, false);
} else if ((dst_reg->type == PTR_TO_PACKET_END &&
src_reg->type == PTR_TO_PACKET) ||
(reg_is_init_pkt_pointer(dst_reg, PTR_TO_PACKET) &&
src_reg->type == PTR_TO_PACKET_META)) {
/* pkt_end <= pkt_data', pkt_data <= pkt_meta' */
find_good_pkt_pointers(this_branch, src_reg,
src_reg->type, true);
} else {
return false;
}
break;
default:
return false;
}
return true;
}
|
C
|
linux
| 0 |
CVE-2017-7501
|
https://www.cvedetails.com/cve/CVE-2017-7501/
|
CWE-59
|
https://github.com/rpm-software-management/rpm/commit/404ef011c300207cdb1e531670384564aae04bdc
|
404ef011c300207cdb1e531670384564aae04bdc
|
Don't follow symlinks on file creation (CVE-2017-7501)
Open newly created files with O_EXCL to prevent symlink tricks.
When reopening hardlinks for writing the actual content, use append
mode instead. This is compatible with the write-only permissions but
is not destructive in case we got redirected to somebody else's file,
verify the target before actually writing anything.
As these are files with the temporary suffix, errors mean a local
user with sufficient privileges to break the installation of the package
anyway is trying to goof us on purpose, don't bother trying to mend it
(we couldn't fix the hardlink case anyhow) but just bail out.
Based on a patch by Florian Festi.
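As a rough illustration of the O_EXCL part of this change (not the actual rpm fsm code; path handling and mode are made up), creating the temporary file can be sketched like this:
#include <fcntl.h>
#include <stdio.h>
/* Sketch: refuse to write through a pre-existing path such as a planted symlink. */
static int create_new_file(const char *path)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (fd < 0) {
		/* EEXIST here means something already sits at the path; bail out. */
		perror("open");
		return -1;
	}
	return fd;
}
Reopening an already-created hardlink for its content would then use O_WRONLY | O_APPEND instead of truncating, as described above.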
|
static int fsmRmdir(const char *path)
{
int rc = rmdir(path);
if (_fsm_debug)
rpmlog(RPMLOG_DEBUG, " %8s (%s) %s\n", __func__,
path, (rc < 0 ? strerror(errno) : ""));
if (rc < 0)
switch (errno) {
case ENOENT: rc = RPMERR_ENOENT; break;
case ENOTEMPTY: rc = RPMERR_ENOTEMPTY; break;
default: rc = RPMERR_RMDIR_FAILED; break;
}
return rc;
}
|
static int fsmRmdir(const char *path)
{
int rc = rmdir(path);
if (_fsm_debug)
rpmlog(RPMLOG_DEBUG, " %8s (%s) %s\n", __func__,
path, (rc < 0 ? strerror(errno) : ""));
if (rc < 0)
switch (errno) {
case ENOENT: rc = RPMERR_ENOENT; break;
case ENOTEMPTY: rc = RPMERR_ENOTEMPTY; break;
default: rc = RPMERR_RMDIR_FAILED; break;
}
return rc;
}
|
C
|
rpm
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/a3e2afaedd8190398ae45ccef34fcdee00fb19aa
|
a3e2afaedd8190398ae45ccef34fcdee00fb19aa
|
Fixed crash related to cellular network payment plan retrieval.
BUG=chromium-os:8864
TEST=none
Review URL: http://codereview.chromium.org/4690002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@65405 0039d316-1c4b-4281-b951-d872f2087c98
|
virtual void EnableOfflineMode(bool enable) {}
|
virtual void EnableOfflineMode(bool enable) {}
|
C
|
Chrome
| 0 |
CVE-2012-6545
|
https://www.cvedetails.com/cve/CVE-2012-6545/
|
CWE-200
|
https://github.com/torvalds/linux/commit/f9432c5ec8b1e9a09b9b0e5569e3c73db8de432a
|
f9432c5ec8b1e9a09b9b0e5569e3c73db8de432a
|
Bluetooth: RFCOMM - Fix info leak in ioctl(RFCOMMGETDEVLIST)
The RFCOMM code fails to initialize the two padding bytes of struct
rfcomm_dev_list_req inserted for alignment before copying it to
userland. Additionally there are two padding bytes in each instance of
struct rfcomm_dev_info. The ioctl() therefore discloses two bytes plus
dev_num times two bytes of uninitialized kernel heap memory.
Allocate the memory using kzalloc() to fix this issue.
Signed-off-by: Mathias Krause <[email protected]>
Cc: Marcel Holtmann <[email protected]>
Cc: Gustavo Padovan <[email protected]>
Cc: Johan Hedberg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
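In sketch form, the leak pattern is a heap buffer whose padding bytes reach userspace through copy_to_user(); zeroing the allocation closes it. The struct layouts and function below are simplified, hypothetical stand-ins, not the actual rfcomm_get_dev_list() code.
/* Sketch only: hypothetical layouts with implicit compiler-inserted padding. */
struct dev_info_sketch {
	u16 id;
	u8  flags;            /* + 1 byte of padding */
};
struct dev_list_sketch {
	u16 dev_num;          /* 2 bytes used + 2 bytes of padding before the array */
	struct dev_info_sketch dev_info[];
};
static int get_dev_list_sketch(void __user *arg, u16 dev_num)
{
	struct dev_list_sketch *dl;
	size_t size = sizeof(*dl) + dev_num * sizeof(struct dev_info_sketch);
	int err = 0;
	dl = kzalloc(size, GFP_KERNEL);   /* kzalloc(), not kmalloc(): padding is zeroed */
	if (!dl)
		return -ENOMEM;
	dl->dev_num = dev_num;
	/* ... fill in the real fields for each entry ... */
	if (copy_to_user(arg, dl, size))
		err = -EFAULT;
	kfree(dl);
	return err;
}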
|
int rfcomm_dev_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
{
BT_DBG("cmd %d arg %p", cmd, arg);
switch (cmd) {
case RFCOMMCREATEDEV:
return rfcomm_create_dev(sk, arg);
case RFCOMMRELEASEDEV:
return rfcomm_release_dev(arg);
case RFCOMMGETDEVLIST:
return rfcomm_get_dev_list(arg);
case RFCOMMGETDEVINFO:
return rfcomm_get_dev_info(arg);
}
return -EINVAL;
}
|
int rfcomm_dev_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
{
BT_DBG("cmd %d arg %p", cmd, arg);
switch (cmd) {
case RFCOMMCREATEDEV:
return rfcomm_create_dev(sk, arg);
case RFCOMMRELEASEDEV:
return rfcomm_release_dev(arg);
case RFCOMMGETDEVLIST:
return rfcomm_get_dev_list(arg);
case RFCOMMGETDEVINFO:
return rfcomm_get_dev_info(arg);
}
return -EINVAL;
}
|
C
|
linux
| 0 |
CVE-2018-9506
|
https://www.cvedetails.com/cve/CVE-2018-9506/
|
CWE-125
|
https://android.googlesource.com/platform/system/bt/+/830cb39cb2a0f1bf6704d264e2a5c5029c175dd7
|
830cb39cb2a0f1bf6704d264e2a5c5029c175dd7
|
Add missing AVRCP message length checks inside avrc_msg_cback
Explicitly check the length of the received message before
accessing the data.
Bug: 111803925
Bug: 79883824
Test: POC scripts
Change-Id: I00b1c6bd6dd7e18ac2c469ef2032c7ff10dcaecb
Merged-In: I00b1c6bd6dd7e18ac2c469ef2032c7ff10dcaecb
(cherry picked from commit 282deb3e27407aaa88b8ddbdbd7bb7d56ddc635f)
(cherry picked from commit 007868d05f4b761842c7345161aeda6fd40dd245)
|
uint16_t AVRC_Open(uint8_t* p_handle, tAVRC_CONN_CB* p_ccb,
const RawAddress& peer_addr) {
uint16_t status;
tAVCT_CC cc;
cc.p_ctrl_cback = avrc_ctrl_cback; /* Control callback */
cc.p_msg_cback = avrc_msg_cback; /* Message callback */
cc.pid = UUID_SERVCLASS_AV_REMOTE_CONTROL; /* Profile ID */
cc.role = p_ccb->conn; /* Initiator/acceptor role */
cc.control = p_ccb->control; /* Control role (Control/Target) */
status = AVCT_CreateConn(p_handle, &cc, peer_addr);
if (status == AVCT_SUCCESS) {
avrc_cb.ccb[*p_handle] = *p_ccb;
memset(&avrc_cb.ccb_int[*p_handle], 0, sizeof(tAVRC_CONN_INT_CB));
memset(&avrc_cb.fcb[*p_handle], 0, sizeof(tAVRC_FRAG_CB));
memset(&avrc_cb.rcb[*p_handle], 0, sizeof(tAVRC_RASM_CB));
avrc_cb.ccb_int[*p_handle].tle = alarm_new("avrcp.commandTimer");
avrc_cb.ccb_int[*p_handle].cmd_q = fixed_queue_new(SIZE_MAX);
}
AVRC_TRACE_DEBUG("%s role: %d, control:%d status:%d, handle:%d", __func__,
cc.role, cc.control, status, *p_handle);
return status;
}
|
uint16_t AVRC_Open(uint8_t* p_handle, tAVRC_CONN_CB* p_ccb,
const RawAddress& peer_addr) {
uint16_t status;
tAVCT_CC cc;
cc.p_ctrl_cback = avrc_ctrl_cback; /* Control callback */
cc.p_msg_cback = avrc_msg_cback; /* Message callback */
cc.pid = UUID_SERVCLASS_AV_REMOTE_CONTROL; /* Profile ID */
cc.role = p_ccb->conn; /* Initiator/acceptor role */
cc.control = p_ccb->control; /* Control role (Control/Target) */
status = AVCT_CreateConn(p_handle, &cc, peer_addr);
if (status == AVCT_SUCCESS) {
avrc_cb.ccb[*p_handle] = *p_ccb;
memset(&avrc_cb.ccb_int[*p_handle], 0, sizeof(tAVRC_CONN_INT_CB));
memset(&avrc_cb.fcb[*p_handle], 0, sizeof(tAVRC_FRAG_CB));
memset(&avrc_cb.rcb[*p_handle], 0, sizeof(tAVRC_RASM_CB));
avrc_cb.ccb_int[*p_handle].tle = alarm_new("avrcp.commandTimer");
avrc_cb.ccb_int[*p_handle].cmd_q = fixed_queue_new(SIZE_MAX);
}
AVRC_TRACE_DEBUG("%s role: %d, control:%d status:%d, handle:%d", __func__,
cc.role, cc.control, status, *p_handle);
return status;
}
|
C
|
Android
| 0 |
CVE-2014-9644
|
https://www.cvedetails.com/cve/CVE-2014-9644/
|
CWE-264
|
https://github.com/torvalds/linux/commit/4943ba16bbc2db05115707b3ff7b4874e9e3c560
|
4943ba16bbc2db05115707b3ff7b4874e9e3c560
|
crypto: include crypto- module prefix in template
This adds the module loading prefix "crypto-" to the template lookup
as well.
For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
includes the "crypto-" prefix at every level, correctly rejecting "vfat":
net-pf-38
algif-hash
crypto-vfat(blowfish)
crypto-vfat(blowfish)-all
crypto-vfat
Reported-by: Mathias Krause <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Mathias Krause <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
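A minimal sketch of the prefixed lookup (only the request_module() format strings matter here; the helper name and the fallback alias are assumptions based on the chain listed above):
/* Sketch: autoload only modules that deliberately expose a "crypto-" alias. */
static void crypto_request_template_sketch(const char *name)
{
	request_module("crypto-%s", name);
	request_module("crypto-%s-all", name);
}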
|
static void crypto_ctr_free(struct crypto_instance *inst)
{
crypto_drop_spawn(crypto_instance_ctx(inst));
kfree(inst);
}
|
static void crypto_ctr_free(struct crypto_instance *inst)
{
crypto_drop_spawn(crypto_instance_ctx(inst));
kfree(inst);
}
|
C
|
linux
| 0 |
CVE-2012-2875
|
https://www.cvedetails.com/cve/CVE-2012-2875/
| null |
https://github.com/chromium/chromium/commit/d345af9ed62ee5f431be327967f41c3cc3fe936a
|
d345af9ed62ee5f431be327967f41c3cc3fe936a
|
[BlackBerry] Adapt to new BlackBerry::Platform::TouchPoint API
https://bugs.webkit.org/show_bug.cgi?id=105143
RIM PR 171941
Reviewed by Rob Buis.
Internally reviewed by George Staikos.
Source/WebCore:
TouchPoint instances now provide document coordinates for the viewport
and content position of the touch event. The pixel coordinates stored
in the TouchPoint should no longer be needed in WebKit.
Also adapt to new method names and encapsulation of TouchPoint data
members.
No change in behavior, no new tests.
* platform/blackberry/PlatformTouchPointBlackBerry.cpp:
(WebCore::PlatformTouchPoint::PlatformTouchPoint):
Source/WebKit/blackberry:
TouchPoint instances now provide document coordinates for the viewport
and content position of the touch event. The pixel coordinates stored
in the TouchPoint should no longer be needed in WebKit. One exception
is when passing events to a full screen plugin.
Also adapt to new method names and encapsulation of TouchPoint data
members.
* Api/WebPage.cpp:
(BlackBerry::WebKit::WebPage::touchEvent):
(BlackBerry::WebKit::WebPage::touchPointAsMouseEvent):
(BlackBerry::WebKit::WebPagePrivate::dispatchTouchEventToFullScreenPlugin):
(BlackBerry::WebKit::WebPagePrivate::dispatchTouchPointAsMouseEventToFullScreenPlugin):
* WebKitSupport/InputHandler.cpp:
(BlackBerry::WebKit::InputHandler::shouldRequestSpellCheckingOptionsForPoint):
* WebKitSupport/InputHandler.h:
(InputHandler):
* WebKitSupport/TouchEventHandler.cpp:
(BlackBerry::WebKit::TouchEventHandler::doFatFingers):
(BlackBerry::WebKit::TouchEventHandler::handleTouchPoint):
* WebKitSupport/TouchEventHandler.h:
(TouchEventHandler):
Tools:
Adapt to new method names and encapsulation of TouchPoint data members.
* DumpRenderTree/blackberry/EventSender.cpp:
(addTouchPointCallback):
(updateTouchPointCallback):
(touchEndCallback):
(releaseTouchPointCallback):
(sendTouchEvent):
git-svn-id: svn://svn.chromium.org/blink/trunk@137880 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void WebPage::onCertificateStoreLocationSet(const BlackBerry::Platform::String& caPath)
{
#if ENABLE(VIDEO)
MediaPlayerPrivate::setCertificatePath(caPath);
#endif
}
|
void WebPage::onCertificateStoreLocationSet(const BlackBerry::Platform::String& caPath)
{
#if ENABLE(VIDEO)
MediaPlayerPrivate::setCertificatePath(caPath);
#endif
}
|
C
|
Chrome
| 0 |
CVE-2016-8858
|
https://www.cvedetails.com/cve/CVE-2016-8858/
|
CWE-399
|
https://github.com/openssh/openssh-portable/commit/ec165c392ca54317dbe3064a8c200de6531e89ad
|
ec165c392ca54317dbe3064a8c200de6531e89ad
|
upstream commit
Unregister the KEXINIT handler after message has been
received. Otherwise an unauthenticated peer can repeat the KEXINIT and cause
allocation of up to 128MB -- until the connection is closed. Reported by
shilei-c at 360.cn
Upstream-ID: 43649ae12a27ef94290db16d1a98294588b75c05
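The NEWKEYS handler below shows the same pattern used by this fix: once the message is consumed, the dispatch slot is repointed so a repeat no longer reaches the allocating path. A hedged sketch for KEXINIT (handler body abbreviated):
/* Sketch: unregister ourselves as soon as the peer's KEXINIT is accepted,
 * so an unauthenticated peer cannot re-send it and re-allocate the buffers. */
static int
kex_input_kexinit_sketch(int type, u_int32_t seq, void *ctxt)
{
	struct ssh *ssh = ctxt;
	debug("SSH2_MSG_KEXINIT received");
	ssh_dispatch_set(ssh, SSH2_MSG_KEXINIT, NULL); /* or &kex_protocol_error */
	/* ... copy the packet into kex->peer and continue the exchange ... */
	return 0;
}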
|
kex_input_newkeys(int type, u_int32_t seq, void *ctxt)
{
struct ssh *ssh = ctxt;
struct kex *kex = ssh->kex;
int r;
debug("SSH2_MSG_NEWKEYS received");
ssh_dispatch_set(ssh, SSH2_MSG_NEWKEYS, &kex_protocol_error);
if ((r = sshpkt_get_end(ssh)) != 0)
return r;
if ((r = ssh_set_newkeys(ssh, MODE_IN)) != 0)
return r;
kex->done = 1;
sshbuf_reset(kex->peer);
/* sshbuf_reset(kex->my); */
kex->flags &= ~KEX_INIT_SENT;
free(kex->name);
kex->name = NULL;
return 0;
}
|
kex_input_newkeys(int type, u_int32_t seq, void *ctxt)
{
struct ssh *ssh = ctxt;
struct kex *kex = ssh->kex;
int r;
debug("SSH2_MSG_NEWKEYS received");
ssh_dispatch_set(ssh, SSH2_MSG_NEWKEYS, &kex_protocol_error);
if ((r = sshpkt_get_end(ssh)) != 0)
return r;
if ((r = ssh_set_newkeys(ssh, MODE_IN)) != 0)
return r;
kex->done = 1;
sshbuf_reset(kex->peer);
/* sshbuf_reset(kex->my); */
kex->flags &= ~KEX_INIT_SENT;
free(kex->name);
kex->name = NULL;
return 0;
}
|
C
|
openssh-portable
| 0 |
CVE-2016-9540
|
https://www.cvedetails.com/cve/CVE-2016-9540/
|
CWE-787
|
https://github.com/vadz/libtiff/commit/5ad9d8016fbb60109302d558f7edb2cb2a3bb8e3
|
5ad9d8016fbb60109302d558f7edb2cb2a3bb8e3
|
* tools/tiffcp.c: fix out-of-bounds write on tiled images with odd
tile width vs image width. Reported as MSVR 35103
by Axel Souchet and Vishal Chauhan from the MSRC Vulnerabilities &
Mitigations team.
|
usage(void)
{
char buf[BUFSIZ];
int i;
setbuf(stderr, buf);
fprintf(stderr, "%s\n\n", TIFFGetVersion());
for (i = 0; stuff[i] != NULL; i++)
fprintf(stderr, "%s\n", stuff[i]);
exit(-1);
}
|
usage(void)
{
char buf[BUFSIZ];
int i;
setbuf(stderr, buf);
fprintf(stderr, "%s\n\n", TIFFGetVersion());
for (i = 0; stuff[i] != NULL; i++)
fprintf(stderr, "%s\n", stuff[i]);
exit(-1);
}
|
C
|
libtiff
| 0 |
CVE-2013-6636
|
https://www.cvedetails.com/cve/CVE-2013-6636/
|
CWE-20
|
https://github.com/chromium/chromium/commit/5cfe3023574666663d970ce48cdbc8ed15ce61d9
|
5cfe3023574666663d970ce48cdbc8ed15ce61d9
|
Clear out some minor TODOs.
BUG=none
Review URL: https://codereview.chromium.org/1047063002
Cr-Commit-Position: refs/heads/master@{#322959}
|
bool AutofillDialogViews::SectionContainer::OnMousePressed(
const ui::MouseEvent& event) {
if (!ShouldForwardEvent(event))
return false;
return proxy_button_->OnMousePressed(ProxyEvent(event));
}
|
bool AutofillDialogViews::SectionContainer::OnMousePressed(
const ui::MouseEvent& event) {
if (!ShouldForwardEvent(event))
return false;
return proxy_button_->OnMousePressed(ProxyEvent(event));
}
|
C
|
Chrome
| 0 |
CVE-2017-0635
|
https://www.cvedetails.com/cve/CVE-2017-0635/
|
CWE-476
|
https://android.googlesource.com/platform/frameworks/av/+/523f6b49c1a2289161f40cf9fe80b92e592e9441
|
523f6b49c1a2289161f40cf9fe80b92e592e9441
|
Validate lengths in HEVC metadata parsing
Add code to validate the size parameter passed to
HecvParameterSets::addNalUnit(). Previously vulnerable
to decrementing an unsigned past 0, yielding a huge result value.
Bug: 35467107
Test: ran POC, no crash, emitted new "bad length" log entry
Change-Id: Ia169b9edc1e0f7c5302e3c68aa90a54e8863d79e
(cherry picked from commit e0dcf097cc029d056926029a29419e1650cbdf1b)
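The defensive pattern, checking a length before subtracting it from an unsigned size, is easy to show in isolation; this standalone sketch is not the actual HevcParameterSets::addNalUnit() signature.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
/* Sketch: reject short buffers instead of letting "size -= header_len"
 * wrap around to a huge unsigned value. */
static bool consume_header(const uint8_t **data, size_t *size, size_t header_len)
{
	if (*size < header_len)
		return false;        /* "bad length": bail out early */
	*data += header_len;
	*size -= header_len;         /* guaranteed not to underflow now */
	return true;
}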
|
size_t HevcParameterSets::getNumNalUnitsOfType(uint8_t type) {
size_t num = 0;
for (size_t i = 0; i < mNalUnits.size(); ++i) {
if (getType(i) == type) {
++num;
}
}
return num;
}
|
size_t HevcParameterSets::getNumNalUnitsOfType(uint8_t type) {
size_t num = 0;
for (size_t i = 0; i < mNalUnits.size(); ++i) {
if (getType(i) == type) {
++num;
}
}
return num;
}
|
C
|
Android
| 0 |
CVE-2019-11487
|
https://www.cvedetails.com/cve/CVE-2019-11487/
|
CWE-416
|
https://github.com/torvalds/linux/commit/6b3a707736301c2128ca85ce85fb13f60b5e350a
|
6b3a707736301c2128ca85ce85fb13f60b5e350a
|
Merge branch 'page-refs' (page ref overflow)
Merge page ref overflow branch.
Jann Horn reported that he can overflow the page ref count with
sufficient memory (and a filesystem that is intentionally extremely
slow).
Admittedly it's not exactly easy. To have more than four billion
references to a page requires a minimum of 32GB of kernel memory just
for the pointers to the pages, much less any metadata to keep track of
those pointers. Jann needed a total of 140GB of memory and a specially
crafted filesystem that leaves all reads pending (in order to not ever
free the page references and just keep adding more).
Still, we have a fairly straightforward way to limit the two obvious
user-controllable sources of page references: direct-IO like page
references gotten through get_user_pages(), and the splice pipe page
duplication. So let's just do that.
* branch page-refs:
fs: prevent page refcount overflow in pipe_buf_get
mm: prevent get_user_pages() from overflowing page refcount
mm: add 'try_get_page()' helper function
mm: make page ref count overflow check tighter and more explicit
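The converted call site is visible in func_after below (try_get_compound_head instead of an unconditional speculative get). The helper's idea, in simplified sketch form with an assumed guard threshold, is:
/* Sketch: refuse to take another reference once the count looks saturated,
 * so user-controlled get_user_pages()/splice paths cannot wrap it past zero. */
static inline bool try_get_page_sketch(struct page *page)
{
	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
		return false;
	if (page_ref_count(page) > INT_MAX / 2)   /* assumed guard value */
		return false;                     /* caller treats this as failure */
	page_ref_inc(page);
	return true;
}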
|
static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
int write, struct page **pages, int *nr)
{
struct dev_pagemap *pgmap = NULL;
int nr_start = *nr, ret = 0;
pte_t *ptep, *ptem;
ptem = ptep = pte_offset_map(&pmd, addr);
do {
pte_t pte = gup_get_pte(ptep);
struct page *head, *page;
/*
* Similar to the PMD case below, NUMA hinting must take slow
* path using the pte_protnone check.
*/
if (pte_protnone(pte))
goto pte_unmap;
if (!pte_access_permitted(pte, write))
goto pte_unmap;
if (pte_devmap(pte)) {
pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
if (unlikely(!pgmap)) {
undo_dev_pagemap(nr, nr_start, pages);
goto pte_unmap;
}
} else if (pte_special(pte))
goto pte_unmap;
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
head = try_get_compound_head(page, 1);
if (!head)
goto pte_unmap;
if (unlikely(pte_val(pte) != pte_val(*ptep))) {
put_page(head);
goto pte_unmap;
}
VM_BUG_ON_PAGE(compound_head(page) != head, page);
SetPageReferenced(page);
pages[*nr] = page;
(*nr)++;
} while (ptep++, addr += PAGE_SIZE, addr != end);
ret = 1;
pte_unmap:
if (pgmap)
put_dev_pagemap(pgmap);
pte_unmap(ptem);
return ret;
}
|
static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
int write, struct page **pages, int *nr)
{
struct dev_pagemap *pgmap = NULL;
int nr_start = *nr, ret = 0;
pte_t *ptep, *ptem;
ptem = ptep = pte_offset_map(&pmd, addr);
do {
pte_t pte = gup_get_pte(ptep);
struct page *head, *page;
/*
* Similar to the PMD case below, NUMA hinting must take slow
* path using the pte_protnone check.
*/
if (pte_protnone(pte))
goto pte_unmap;
if (!pte_access_permitted(pte, write))
goto pte_unmap;
if (pte_devmap(pte)) {
pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
if (unlikely(!pgmap)) {
undo_dev_pagemap(nr, nr_start, pages);
goto pte_unmap;
}
} else if (pte_special(pte))
goto pte_unmap;
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
head = compound_head(page);
if (!page_cache_get_speculative(head))
goto pte_unmap;
if (unlikely(pte_val(pte) != pte_val(*ptep))) {
put_page(head);
goto pte_unmap;
}
VM_BUG_ON_PAGE(compound_head(page) != head, page);
SetPageReferenced(page);
pages[*nr] = page;
(*nr)++;
} while (ptep++, addr += PAGE_SIZE, addr != end);
ret = 1;
pte_unmap:
if (pgmap)
put_dev_pagemap(pgmap);
pte_unmap(ptem);
return ret;
}
|
C
|
linux
| 1 |
null | null | null |
https://github.com/chromium/chromium/commit/20106b615c3d11637864fcd4dd4de3356c858f2c
|
20106b615c3d11637864fcd4dd4de3356c858f2c
|
[gtk] un-break status bubble.
broken in r71858.
BUG=70244
TEST=manual (in and out of --kiosk)
Review URL: http://codereview.chromium.org/6343003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@71988 0039d316-1c4b-4281-b951-d872f2087c98
|
void StatusBubbleGtk::SetURL(const GURL& url, const string16& languages) {
url_ = url;
languages_ = languages;
if (url.is_empty() && !status_text_.empty()) {
url_text_ = std::string();
SetStatusTextTo(status_text_);
return;
}
SetStatusTextToURL();
}
|
void StatusBubbleGtk::SetURL(const GURL& url, const string16& languages) {
url_ = url;
languages_ = languages;
if (url.is_empty() && !status_text_.empty()) {
url_text_ = std::string();
SetStatusTextTo(status_text_);
return;
}
SetStatusTextToURL();
}
|
C
|
Chrome
| 0 |
CVE-2018-18445
|
https://www.cvedetails.com/cve/CVE-2018-18445/
|
CWE-125
|
https://github.com/torvalds/linux/commit/b799207e1e1816b09e7a5920fbb2d5fcf6edd681
|
b799207e1e1816b09e7a5920fbb2d5fcf6edd681
|
bpf: 32-bit RSH verification must truncate input before the ALU op
When I wrote commit 468f6eafa6c4 ("bpf: fix 32-bit ALU op verification"), I
assumed that, in order to emulate 64-bit arithmetic with 32-bit logic, it
is sufficient to just truncate the output to 32 bits; and so I just moved
the register size coercion that used to be at the start of the function to
the end of the function.
That assumption is true for almost every op, but not for 32-bit right
shifts, because those can propagate information towards the least
significant bit. Fix it by always truncating inputs for 32-bit ops to 32
bits.
Also get rid of the coerce_reg_to_size() after the ALU op, since that has
no effect.
Fixes: 468f6eafa6c4 ("bpf: fix 32-bit ALU op verification")
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
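The arithmetic point is easy to demonstrate outside the verifier: a 32-bit right shift must see 32-bit inputs, and truncating only the output is not equivalent. A standalone C example:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint64_t val = 0x1ffffffffULL;          /* a bit set above bit 31 */
	/* Wrong: shift the 64-bit value, truncate afterwards; bit 32 leaks
	 * down into the 32-bit result. */
	uint32_t wrong = (uint32_t)(val >> 1);  /* 0xffffffff */
	/* Right: truncate the input first, then shift, matching 32-bit ALU semantics. */
	uint32_t right = (uint32_t)val >> 1;    /* 0x7fffffff */
	printf("wrong=%#x right=%#x\n", wrong, right);
	return 0;
}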
|
static int check_cond_jmp_op(struct bpf_verifier_env *env,
struct bpf_insn *insn, int *insn_idx)
{
struct bpf_verifier_state *this_branch = env->cur_state;
struct bpf_verifier_state *other_branch;
struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
struct bpf_reg_state *dst_reg, *other_branch_regs;
u8 opcode = BPF_OP(insn->code);
int err;
if (opcode > BPF_JSLE) {
verbose(env, "invalid BPF_JMP opcode %x\n", opcode);
return -EINVAL;
}
if (BPF_SRC(insn->code) == BPF_X) {
if (insn->imm != 0) {
verbose(env, "BPF_JMP uses reserved fields\n");
return -EINVAL;
}
/* check src1 operand */
err = check_reg_arg(env, insn->src_reg, SRC_OP);
if (err)
return err;
if (is_pointer_value(env, insn->src_reg)) {
verbose(env, "R%d pointer comparison prohibited\n",
insn->src_reg);
return -EACCES;
}
} else {
if (insn->src_reg != BPF_REG_0) {
verbose(env, "BPF_JMP uses reserved fields\n");
return -EINVAL;
}
}
/* check src2 operand */
err = check_reg_arg(env, insn->dst_reg, SRC_OP);
if (err)
return err;
	dst_reg = &regs[insn->dst_reg];
/* detect if R == 0 where R was initialized to zero earlier */
if (BPF_SRC(insn->code) == BPF_K &&
(opcode == BPF_JEQ || opcode == BPF_JNE) &&
dst_reg->type == SCALAR_VALUE &&
tnum_is_const(dst_reg->var_off)) {
if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) ||
(opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) {
/* if (imm == imm) goto pc+off;
* only follow the goto, ignore fall-through
*/
*insn_idx += insn->off;
return 0;
} else {
/* if (imm != imm) goto pc+off;
* only follow fall-through branch, since
* that's where the program will go
*/
return 0;
}
}
other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx);
if (!other_branch)
return -EFAULT;
other_branch_regs = other_branch->frame[other_branch->curframe]->regs;
/* detect if we are comparing against a constant value so we can adjust
* our min/max values for our dst register.
* this is only legit if both are scalars (or pointers to the same
* object, I suppose, but we don't support that right now), because
* otherwise the different base pointers mean the offsets aren't
* comparable.
*/
if (BPF_SRC(insn->code) == BPF_X) {
if (dst_reg->type == SCALAR_VALUE &&
regs[insn->src_reg].type == SCALAR_VALUE) {
if (tnum_is_const(regs[insn->src_reg].var_off))
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg, regs[insn->src_reg].var_off.value,
opcode);
else if (tnum_is_const(dst_reg->var_off))
reg_set_min_max_inv(&other_branch_regs[insn->src_reg],
						    &regs[insn->src_reg],
dst_reg->var_off.value, opcode);
else if (opcode == BPF_JEQ || opcode == BPF_JNE)
/* Comparing for equality, we can combine knowledge */
reg_combine_min_max(&other_branch_regs[insn->src_reg],
&other_branch_regs[insn->dst_reg],
					    &regs[insn->src_reg],
					    &regs[insn->dst_reg], opcode);
}
} else if (dst_reg->type == SCALAR_VALUE) {
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg, insn->imm, opcode);
}
/* detect if R == 0 where R is returned from bpf_map_lookup_elem() */
if (BPF_SRC(insn->code) == BPF_K &&
insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
dst_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
/* Mark all identical map registers in each branch as either
* safe or unknown depending R == 0 or R != 0 conditional.
*/
mark_map_regs(this_branch, insn->dst_reg, opcode == BPF_JNE);
mark_map_regs(other_branch, insn->dst_reg, opcode == BPF_JEQ);
	} else if (!try_match_pkt_pointers(insn, dst_reg, &regs[insn->src_reg],
this_branch, other_branch) &&
is_pointer_value(env, insn->dst_reg)) {
verbose(env, "R%d pointer comparison prohibited\n",
insn->dst_reg);
return -EACCES;
}
if (env->log.level)
print_verifier_state(env, this_branch->frame[this_branch->curframe]);
return 0;
}
|
static int check_cond_jmp_op(struct bpf_verifier_env *env,
struct bpf_insn *insn, int *insn_idx)
{
struct bpf_verifier_state *this_branch = env->cur_state;
struct bpf_verifier_state *other_branch;
struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
struct bpf_reg_state *dst_reg, *other_branch_regs;
u8 opcode = BPF_OP(insn->code);
int err;
if (opcode > BPF_JSLE) {
verbose(env, "invalid BPF_JMP opcode %x\n", opcode);
return -EINVAL;
}
if (BPF_SRC(insn->code) == BPF_X) {
if (insn->imm != 0) {
verbose(env, "BPF_JMP uses reserved fields\n");
return -EINVAL;
}
/* check src1 operand */
err = check_reg_arg(env, insn->src_reg, SRC_OP);
if (err)
return err;
if (is_pointer_value(env, insn->src_reg)) {
verbose(env, "R%d pointer comparison prohibited\n",
insn->src_reg);
return -EACCES;
}
} else {
if (insn->src_reg != BPF_REG_0) {
verbose(env, "BPF_JMP uses reserved fields\n");
return -EINVAL;
}
}
/* check src2 operand */
err = check_reg_arg(env, insn->dst_reg, SRC_OP);
if (err)
return err;
	dst_reg = &regs[insn->dst_reg];
/* detect if R == 0 where R was initialized to zero earlier */
if (BPF_SRC(insn->code) == BPF_K &&
(opcode == BPF_JEQ || opcode == BPF_JNE) &&
dst_reg->type == SCALAR_VALUE &&
tnum_is_const(dst_reg->var_off)) {
if ((opcode == BPF_JEQ && dst_reg->var_off.value == insn->imm) ||
(opcode == BPF_JNE && dst_reg->var_off.value != insn->imm)) {
/* if (imm == imm) goto pc+off;
* only follow the goto, ignore fall-through
*/
*insn_idx += insn->off;
return 0;
} else {
/* if (imm != imm) goto pc+off;
* only follow fall-through branch, since
* that's where the program will go
*/
return 0;
}
}
other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx);
if (!other_branch)
return -EFAULT;
other_branch_regs = other_branch->frame[other_branch->curframe]->regs;
/* detect if we are comparing against a constant value so we can adjust
* our min/max values for our dst register.
* this is only legit if both are scalars (or pointers to the same
* object, I suppose, but we don't support that right now), because
* otherwise the different base pointers mean the offsets aren't
* comparable.
*/
if (BPF_SRC(insn->code) == BPF_X) {
if (dst_reg->type == SCALAR_VALUE &&
regs[insn->src_reg].type == SCALAR_VALUE) {
if (tnum_is_const(regs[insn->src_reg].var_off))
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg, regs[insn->src_reg].var_off.value,
opcode);
else if (tnum_is_const(dst_reg->var_off))
reg_set_min_max_inv(&other_branch_regs[insn->src_reg],
						    &regs[insn->src_reg],
dst_reg->var_off.value, opcode);
else if (opcode == BPF_JEQ || opcode == BPF_JNE)
/* Comparing for equality, we can combine knowledge */
reg_combine_min_max(&other_branch_regs[insn->src_reg],
&other_branch_regs[insn->dst_reg],
					    &regs[insn->src_reg],
					    &regs[insn->dst_reg], opcode);
}
} else if (dst_reg->type == SCALAR_VALUE) {
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg, insn->imm, opcode);
}
/* detect if R == 0 where R is returned from bpf_map_lookup_elem() */
if (BPF_SRC(insn->code) == BPF_K &&
insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
dst_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
/* Mark all identical map registers in each branch as either
* safe or unknown depending R == 0 or R != 0 conditional.
*/
mark_map_regs(this_branch, insn->dst_reg, opcode == BPF_JNE);
mark_map_regs(other_branch, insn->dst_reg, opcode == BPF_JEQ);
	} else if (!try_match_pkt_pointers(insn, dst_reg, &regs[insn->src_reg],
this_branch, other_branch) &&
is_pointer_value(env, insn->dst_reg)) {
verbose(env, "R%d pointer comparison prohibited\n",
insn->dst_reg);
return -EACCES;
}
if (env->log.level)
print_verifier_state(env, this_branch->frame[this_branch->curframe]);
return 0;
}
|
C
|
linux
| 0 |
CVE-2013-2061
|
https://www.cvedetails.com/cve/CVE-2013-2061/
|
CWE-200
|
https://github.com/OpenVPN/openvpn/commit/11d21349a4e7e38a025849479b36ace7c2eec2ee
|
11d21349a4e7e38a025849479b36ace7c2eec2ee
|
Use constant time memcmp when comparing HMACs in openvpn_decrypt.
Signed-off-by: Steffan Karger <[email protected]>
Acked-by: Gert Doering <[email protected]>
Signed-off-by: Gert Doering <[email protected]>
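A minimal constant-time comparison in portable C looks like the sketch below; OpenVPN's actual helper may differ in name and signature.
#include <stddef.h>
/* Sketch: compare two equal-length buffers without a data-dependent early exit,
 * so HMAC verification time does not reveal how many leading bytes matched. */
static int memcmp_constant_time_sketch(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;
	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];   /* accumulate differences, never break early */
	return diff;                     /* 0 when equal, non-zero otherwise */
}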
|
keydirection2ascii (int kd, bool remote)
{
if (kd == KEY_DIRECTION_BIDIRECTIONAL)
return NULL;
else if (kd == KEY_DIRECTION_NORMAL)
return remote ? "1" : "0";
else if (kd == KEY_DIRECTION_INVERSE)
return remote ? "0" : "1";
else
{
ASSERT (0);
}
return NULL; /* NOTREACHED */
}
|
keydirection2ascii (int kd, bool remote)
{
if (kd == KEY_DIRECTION_BIDIRECTIONAL)
return NULL;
else if (kd == KEY_DIRECTION_NORMAL)
return remote ? "1" : "0";
else if (kd == KEY_DIRECTION_INVERSE)
return remote ? "0" : "1";
else
{
ASSERT (0);
}
return NULL; /* NOTREACHED */
}
|
C
|
openvpn
| 0 |
CVE-2017-0380
|
https://www.cvedetails.com/cve/CVE-2017-0380/
|
CWE-532
|
https://github.com/torproject/tor/commit/09ea89764a4d3a907808ed7d4fe42abfe64bd486
|
09ea89764a4d3a907808ed7d4fe42abfe64bd486
|
Fix log-uninitialized-stack bug in rend_service_intro_established.
Fixes bug 23490; bugfix on 0.2.7.2-alpha.
TROVE-2017-008
CVE-2017-0380
|
rend_service_free(rend_service_t *service)
{
if (!service)
return;
tor_free(service->directory);
if (service->ports) {
SMARTLIST_FOREACH(service->ports, rend_service_port_config_t*, p,
rend_service_port_config_free(p));
smartlist_free(service->ports);
}
if (service->private_key)
crypto_pk_free(service->private_key);
if (service->intro_nodes) {
SMARTLIST_FOREACH(service->intro_nodes, rend_intro_point_t *, intro,
rend_intro_point_free(intro););
smartlist_free(service->intro_nodes);
}
if (service->expiring_nodes) {
SMARTLIST_FOREACH(service->expiring_nodes, rend_intro_point_t *, intro,
rend_intro_point_free(intro););
smartlist_free(service->expiring_nodes);
}
rend_service_descriptor_free(service->desc);
if (service->clients) {
SMARTLIST_FOREACH(service->clients, rend_authorized_client_t *, c,
rend_authorized_client_free(c););
smartlist_free(service->clients);
}
if (service->accepted_intro_dh_parts) {
replaycache_free(service->accepted_intro_dh_parts);
}
tor_free(service);
}
|
rend_service_free(rend_service_t *service)
{
if (!service)
return;
tor_free(service->directory);
if (service->ports) {
SMARTLIST_FOREACH(service->ports, rend_service_port_config_t*, p,
rend_service_port_config_free(p));
smartlist_free(service->ports);
}
if (service->private_key)
crypto_pk_free(service->private_key);
if (service->intro_nodes) {
SMARTLIST_FOREACH(service->intro_nodes, rend_intro_point_t *, intro,
rend_intro_point_free(intro););
smartlist_free(service->intro_nodes);
}
if (service->expiring_nodes) {
SMARTLIST_FOREACH(service->expiring_nodes, rend_intro_point_t *, intro,
rend_intro_point_free(intro););
smartlist_free(service->expiring_nodes);
}
rend_service_descriptor_free(service->desc);
if (service->clients) {
SMARTLIST_FOREACH(service->clients, rend_authorized_client_t *, c,
rend_authorized_client_free(c););
smartlist_free(service->clients);
}
if (service->accepted_intro_dh_parts) {
replaycache_free(service->accepted_intro_dh_parts);
}
tor_free(service);
}
|
C
|
tor
| 0 |
CVE-2012-2867
|
https://www.cvedetails.com/cve/CVE-2012-2867/
| null |
https://github.com/chromium/chromium/commit/b7a161633fd7ecb59093c2c56ed908416292d778
|
b7a161633fd7ecb59093c2c56ed908416292d778
|
[GTK][WTR] Implement AccessibilityUIElement::stringValue
https://bugs.webkit.org/show_bug.cgi?id=102951
Reviewed by Martin Robinson.
Implement AccessibilityUIElement::stringValue in the ATK backend
in the same manner it is implemented in DumpRenderTree.
* WebKitTestRunner/InjectedBundle/atk/AccessibilityUIElementAtk.cpp:
(WTR::replaceCharactersForResults):
(WTR):
(WTR::AccessibilityUIElement::stringValue):
git-svn-id: svn://svn.chromium.org/blink/trunk@135485 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
PassRefPtr<AccessibilityUIElement> AccessibilityUIElement::disclosedRowAtIndex(unsigned index)
{
return 0;
}
|
PassRefPtr<AccessibilityUIElement> AccessibilityUIElement::disclosedRowAtIndex(unsigned index)
{
return 0;
}
|
C
|
Chrome
| 0 |
CVE-2017-5009
|
https://www.cvedetails.com/cve/CVE-2017-5009/
|
CWE-119
|
https://github.com/chromium/chromium/commit/1c40f9042ae2d6ee7483d72998aabb5e73b2ff60
|
1c40f9042ae2d6ee7483d72998aabb5e73b2ff60
|
DevTools: send proper resource type in Network.RequestWillBeSent
This patch plumbs resoure type into the DispatchWillSendRequest
instrumenation. This allows us to report accurate type in
Network.RequestWillBeSent event, instead of "Other", that we report
today.
BUG=765501
R=dgozman
Change-Id: I0134c08b841e8dd247fdc8ff208bfd51e462709c
Reviewed-on: https://chromium-review.googlesource.com/667504
Reviewed-by: Pavel Feldman <[email protected]>
Reviewed-by: Dmitry Gozman <[email protected]>
Commit-Queue: Andrey Lushnikov <[email protected]>
Cr-Commit-Position: refs/heads/master@{#507936}
|
InspectorFileReaderLoaderClient(
RefPtr<BlobDataHandle> blob,
const String& mime_type,
const String& text_encoding_name,
std::unique_ptr<GetResponseBodyCallback> callback)
: blob_(std::move(blob)),
mime_type_(mime_type),
text_encoding_name_(text_encoding_name),
callback_(std::move(callback)) {
loader_ = FileReaderLoader::Create(FileReaderLoader::kReadByClient, this);
}
|
InspectorFileReaderLoaderClient(
RefPtr<BlobDataHandle> blob,
const String& mime_type,
const String& text_encoding_name,
std::unique_ptr<GetResponseBodyCallback> callback)
: blob_(std::move(blob)),
mime_type_(mime_type),
text_encoding_name_(text_encoding_name),
callback_(std::move(callback)) {
loader_ = FileReaderLoader::Create(FileReaderLoader::kReadByClient, this);
}
|
C
|
Chrome
| 0 |
CVE-2016-5170
|
https://www.cvedetails.com/cve/CVE-2016-5170/
|
CWE-416
|
https://github.com/chromium/chromium/commit/c3957448cfc6e299165196a33cd954b790875fdb
|
c3957448cfc6e299165196a33cd954b790875fdb
|
Cleanup and remove dead code in SetFocusedElement
This early-out was added in:
https://crrev.com/ce8ea3446283965c7eabab592cbffe223b1cf2bc
Back then, we applied fragment focus in LayoutUpdated() which could
cause this issue. This got cleaned up in:
https://crrev.com/45236fd563e9df53dc45579be1f3d0b4784885a2
so that focus is no longer applied after layout.
+Cleanup: Goto considered harmful
Bug: 795381
Change-Id: Ifeb4d2e03e872fd48cca6720b1d4de36ad1ecbb7
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1524417
Commit-Queue: David Bokan <[email protected]>
Reviewed-by: Stefan Zager <[email protected]>
Cr-Commit-Position: refs/heads/master@{#641101}
|
V0CustomElementMicrotaskRunQueue* Document::CustomElementMicrotaskRunQueue() {
if (!custom_element_microtask_run_queue_)
custom_element_microtask_run_queue_ =
V0CustomElementMicrotaskRunQueue::Create();
return custom_element_microtask_run_queue_.Get();
}
|
V0CustomElementMicrotaskRunQueue* Document::CustomElementMicrotaskRunQueue() {
if (!custom_element_microtask_run_queue_)
custom_element_microtask_run_queue_ =
V0CustomElementMicrotaskRunQueue::Create();
return custom_element_microtask_run_queue_.Get();
}
|
C
|
Chrome
| 0 |
CVE-2018-16075
|
https://www.cvedetails.com/cve/CVE-2018-16075/
|
CWE-254
|
https://github.com/chromium/chromium/commit/d913f72b4875cf0814fc3f03ad7c00642097c4a4
|
d913f72b4875cf0814fc3f03ad7c00642097c4a4
|
Remove RequireCSSExtensionForFile runtime enabled flag.
The feature has long since been stable (since M64) and there doesn't
seem to be a need for this flag.
BUG=788936
Change-Id: I666390b869289c328acb4a2daa5bf4154e1702c0
Reviewed-on: https://chromium-review.googlesource.com/c/1324143
Reviewed-by: Mike West <[email protected]>
Reviewed-by: Camille Lamy <[email protected]>
Commit-Queue: Dave Tapuska <[email protected]>
Cr-Commit-Position: refs/heads/master@{#607329}
|
void WebRuntimeFeatures::EnableTouchEventFeatureDetection(bool enable) {
RuntimeEnabledFeatures::SetTouchEventFeatureDetectionEnabled(enable);
}
|
void WebRuntimeFeatures::EnableTouchEventFeatureDetection(bool enable) {
RuntimeEnabledFeatures::SetTouchEventFeatureDetectionEnabled(enable);
}
|
C
|
Chrome
| 0 |
CVE-2016-3951
|
https://www.cvedetails.com/cve/CVE-2016-3951/
| null |
https://github.com/torvalds/linux/commit/4d06dd537f95683aba3651098ae288b7cbff8274
|
4d06dd537f95683aba3651098ae288b7cbff8274
|
cdc_ncm: do not call usbnet_link_change from cdc_ncm_bind
usbnet_link_change will call schedule_work and should be
avoided if bind is failing. Otherwise we will end up with
scheduled work referring to a netdev which has gone away.
Instead of making the call conditional, we can just defer
it to usbnet_probe, using the driver_info flag made for
this purpose.
Fixes: 8a34b0ae8778 ("usbnet: cdc_ncm: apply usbnet_link_change")
Reported-by: Andrey Konovalov <[email protected]>
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Bjørn Mork <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static u32 cdc_ncm_max_dgram_size(struct usbnet *dev)
{
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
if (cdc_ncm_comm_intf_is_mbim(dev->intf->cur_altsetting) && ctx->mbim_desc)
return le16_to_cpu(ctx->mbim_desc->wMaxSegmentSize);
if (ctx->ether_desc)
return le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
return CDC_NCM_MAX_DATAGRAM_SIZE;
}
|
static u32 cdc_ncm_max_dgram_size(struct usbnet *dev)
{
struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
if (cdc_ncm_comm_intf_is_mbim(dev->intf->cur_altsetting) && ctx->mbim_desc)
return le16_to_cpu(ctx->mbim_desc->wMaxSegmentSize);
if (ctx->ether_desc)
return le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
return CDC_NCM_MAX_DATAGRAM_SIZE;
}
|
C
|
linux
| 0 |
CVE-2014-3690
|
https://www.cvedetails.com/cve/CVE-2014-3690/
|
CWE-399
|
https://github.com/torvalds/linux/commit/d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
d974baa398f34393db76be45f7d4d04fbdbb4a0a
|
x86,kvm,vmx: Preserve CR4 across VM entry
CR4 isn't constant; at least the TSD and PCE bits can vary.
TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks
like it's correct.
This adds a branch and a read from cr4 to each vm entry. Because it is
extremely likely that consecutive entries into the same vcpu will have
the same host cr4 value, this fixes up the vmcs instead of restoring cr4
after the fact. A subsequent patch will add a kernel-wide cr4 shadow,
reducing the overhead in the common case to just two memory reads and a
branch.
Signed-off-by: Andy Lutomirski <[email protected]>
Acked-by: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: Petr Matousek <[email protected]>
Cc: Gleb Natapov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
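In sketch form the per-entry fix-up reduces to comparing the live CR4 against the value cached for the VMCS host-state area; the helper and field names here are written from memory and should be treated as assumptions, not the exact patch.
/* Sketch: only rewrite the VMCS host CR4 field when the host value changed. */
unsigned long cr4 = read_cr4();                     /* assumed helper name */
if (unlikely(cr4 != vmx->host_state.vmcs_host_cr4)) {
	vmcs_writel(HOST_CR4, cr4);                 /* keep the VMCS in sync */
	vmx->host_state.vmcs_host_cr4 = cr4;        /* cache for the common case */
}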
|
static inline bool vmx_control_verify(u32 control, u32 low, u32 high)
{
/*
* Bits 0 in high must be 0, and bits 1 in low must be 1.
*/
return ((control & high) | low) == control;
}
|
static inline bool vmx_control_verify(u32 control, u32 low, u32 high)
{
/*
* Bits 0 in high must be 0, and bits 1 in low must be 1.
*/
return ((control & high) | low) == control;
}
|
C
|
linux
| 0 |
CVE-2018-16427
|
https://www.cvedetails.com/cve/CVE-2018-16427/
|
CWE-125
|
https://github.com/OpenSC/OpenSC/pull/1447/commits/8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
|
8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
|
fixed out of bounds reads
Thanks to Eric Sesterhenn from X41 D-SEC GmbH
for reporting and suggesting security fixes.
|
static int decode_bit_string(const u8 * inbuf, size_t inlen, void *outbuf,
size_t outlen, int invert)
{
const u8 *in = inbuf;
u8 *out = (u8 *) outbuf;
int zero_bits = *in & 0x07;
size_t octets_left = inlen - 1;
int i, count = 0;
memset(outbuf, 0, outlen);
in++;
if (outlen < octets_left)
return SC_ERROR_BUFFER_TOO_SMALL;
if (inlen < 1)
return SC_ERROR_INVALID_ASN1_OBJECT;
while (octets_left) {
/* 1st octet of input: ABCDEFGH, where A is the MSB */
/* 1st octet of output: HGFEDCBA, where A is the LSB */
/* first bit in bit string is the LSB in first resulting octet */
int bits_to_go;
*out = 0;
if (octets_left == 1)
bits_to_go = 8 - zero_bits;
else
bits_to_go = 8;
if (invert)
for (i = 0; i < bits_to_go; i++) {
*out |= ((*in >> (7 - i)) & 1) << i;
}
else {
*out = *in;
}
out++;
in++;
octets_left--;
count++;
}
return (count * 8) - zero_bits;
}
|
static int decode_bit_string(const u8 * inbuf, size_t inlen, void *outbuf,
size_t outlen, int invert)
{
const u8 *in = inbuf;
u8 *out = (u8 *) outbuf;
int zero_bits = *in & 0x07;
size_t octets_left = inlen - 1;
int i, count = 0;
memset(outbuf, 0, outlen);
in++;
if (outlen < octets_left)
return SC_ERROR_BUFFER_TOO_SMALL;
if (inlen < 1)
return SC_ERROR_INVALID_ASN1_OBJECT;
while (octets_left) {
/* 1st octet of input: ABCDEFGH, where A is the MSB */
/* 1st octet of output: HGFEDCBA, where A is the LSB */
/* first bit in bit string is the LSB in first resulting octet */
int bits_to_go;
*out = 0;
if (octets_left == 1)
bits_to_go = 8 - zero_bits;
else
bits_to_go = 8;
if (invert)
for (i = 0; i < bits_to_go; i++) {
*out |= ((*in >> (7 - i)) & 1) << i;
}
else {
*out = *in;
}
out++;
in++;
octets_left--;
count++;
}
return (count * 8) - zero_bits;
}
|
C
|
OpenSC
| 0 |
CVE-2012-2816
|
https://www.cvedetails.com/cve/CVE-2012-2816/
| null |
https://github.com/chromium/chromium/commit/cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
cd0bd79d6ebdb72183e6f0833673464cc10b3600
|
Convert plugin and GPU process to brokered handle duplication.
BUG=119250
Review URL: https://chromiumcodereview.appspot.com/9958034
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@132303 0039d316-1c4b-4281-b951-d872f2087c98
|
bool GpuCommandBufferStub::Send(IPC::Message* message) {
return channel_->Send(message);
}
|
bool GpuCommandBufferStub::Send(IPC::Message* message) {
return channel_->Send(message);
}
|
C
|
Chrome
| 0 |
CVE-2017-5120
|
https://www.cvedetails.com/cve/CVE-2017-5120/
| null |
https://github.com/chromium/chromium/commit/b7277af490d28ac7f802c015bb0ff31395768556
|
b7277af490d28ac7f802c015bb0ff31395768556
|
bindings: Support "attribute FrozenArray<T>?"
Adds a quick hack to support a case of "attribute FrozenArray<T>?".
Bug: 1028047
Change-Id: Ib3cecc4beb6bcc0fb0dbc667aca595454cc90c86
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1933866
Reviewed-by: Hitoshi Yoshida <[email protected]>
Commit-Queue: Yuki Shiino <[email protected]>
Cr-Commit-Position: refs/heads/master@{#718676}
|
static void DeprecateAsMeasureAsSameValueOverloadedMethod2Method(const v8::FunctionCallbackInfo<v8::Value>& info) {
ExceptionState exception_state(info.GetIsolate(), ExceptionState::kExecutionContext, "TestObject", "deprecateAsMeasureAsSameValueOverloadedMethod");
TestObject* impl = V8TestObject::ToImpl(info.Holder());
int32_t arg;
arg = NativeValueTraits<IDLLong>::NativeValue(info.GetIsolate(), info[0], exception_state);
if (exception_state.HadException())
return;
impl->deprecateAsMeasureAsSameValueOverloadedMethod(arg);
}
|
static void DeprecateAsMeasureAsSameValueOverloadedMethod2Method(const v8::FunctionCallbackInfo<v8::Value>& info) {
ExceptionState exception_state(info.GetIsolate(), ExceptionState::kExecutionContext, "TestObject", "deprecateAsMeasureAsSameValueOverloadedMethod");
TestObject* impl = V8TestObject::ToImpl(info.Holder());
int32_t arg;
arg = NativeValueTraits<IDLLong>::NativeValue(info.GetIsolate(), info[0], exception_state);
if (exception_state.HadException())
return;
impl->deprecateAsMeasureAsSameValueOverloadedMethod(arg);
}
|
C
|
Chrome
| 0 |
CVE-2017-5077
|
https://www.cvedetails.com/cve/CVE-2017-5077/
|
CWE-125
|
https://github.com/chromium/chromium/commit/fec26ff33bf372476a70326f3669a35f34a9d474
|
fec26ff33bf372476a70326f3669a35f34a9d474
|
Origins should be represented as url::Origin (not as GURL).
As pointed out in //docs/security/origin-vs-url.md, origins should be
represented as url::Origin (not as GURL). This CL applies this
guideline to predictor-related code and changes the type of the
following fields from GURL to url::Origin:
- OriginRequestSummary::origin
- PreconnectedRequestStats::origin
- PreconnectRequest::origin
The old code did not depend on any non-origin parts of GURL
(like path and/or query). Therefore, this CL has no intended
behavior change.
Bug: 973885
Change-Id: Idd14590b4834cb9d50c74ed747b595fe1a4ba357
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1895167
Commit-Queue: Łukasz Anforowicz <[email protected]>
Reviewed-by: Alex Ilin <[email protected]>
Cr-Commit-Position: refs/heads/master@{#716311}
|
void LoadingPredictor::MaybeAddPreconnect(
const GURL& url,
std::vector<PreconnectRequest> requests,
HintOrigin origin) {
DCHECK(!shutdown_);
preconnect_manager()->Start(url, std::move(requests));
}
|
void LoadingPredictor::MaybeAddPreconnect(
const GURL& url,
std::vector<PreconnectRequest> requests,
HintOrigin origin) {
DCHECK(!shutdown_);
preconnect_manager()->Start(url, std::move(requests));
}
|
C
|
Chrome
| 0 |
CVE-2012-2100
|
https://www.cvedetails.com/cve/CVE-2012-2100/
|
CWE-189
|
https://github.com/torvalds/linux/commit/d50f2ab6f050311dbf7b8f5501b25f0bf64a439b
|
d50f2ab6f050311dbf7b8f5501b25f0bf64a439b
|
ext4: fix undefined behavior in ext4_fill_flex_info()
Commit 503358ae01b70ce6909d19dd01287093f6b6271c ("ext4: avoid divide by
zero when trying to mount a corrupted file system") fixes CVE-2009-4307
by performing a sanity check on s_log_groups_per_flex, since it can be
set to a bogus value by an attacker.
sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex;
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex < 2) { ... }
This patch fixes two potential issues in the previous commit.
1) The sanity check might only work on architectures like PowerPC.
On x86, 5 bits are used for the shifting amount. That means, given a
large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36
is essentially 1 << 4 = 16, rather than 0. This will bypass the check,
leaving s_log_groups_per_flex and groups_per_flex inconsistent.
2) The sanity check relies on undefined behavior, i.e., oversized shift.
A standard-confirming C compiler could rewrite the check in unexpected
ways. Consider the following equivalent form, assuming groups_per_flex
is unsigned for simplicity.
groups_per_flex = 1 << sbi->s_log_groups_per_flex;
if (groups_per_flex == 0 || groups_per_flex == 1) {
We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will
completely optimize away the check groups_per_flex == 0, leaving the
patched code as vulnerable as the original. GCC keeps the check, but
there is no guarantee that future versions will do the same.
Signed-off-by: Xi Wang <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
|
static ssize_t inode_readahead_blks_store(struct ext4_attr *a,
struct ext4_sb_info *sbi,
const char *buf, size_t count)
{
unsigned long t;
if (parse_strtoul(buf, 0x40000000, &t))
return -EINVAL;
if (t && !is_power_of_2(t))
return -EINVAL;
sbi->s_inode_readahead_blks = t;
return count;
}
|
static ssize_t inode_readahead_blks_store(struct ext4_attr *a,
struct ext4_sb_info *sbi,
const char *buf, size_t count)
{
unsigned long t;
if (parse_strtoul(buf, 0x40000000, &t))
return -EINVAL;
if (t && !is_power_of_2(t))
return -EINVAL;
sbi->s_inode_readahead_blks = t;
return count;
}
|
C
|
linux
| 0 |
CVE-2016-2464
|
https://www.cvedetails.com/cve/CVE-2016-2464/
|
CWE-20
|
https://android.googlesource.com/platform/external/libvpx/+/cc274e2abe8b2a6698a5c47d8aa4bb45f1f9538d
|
cc274e2abe8b2a6698a5c47d8aa4bb45f1f9538d
|
external/libvpx/libwebm: Update snapshot
Update libwebm snapshot. This update contains security fixes from upstream.
Upstream git hash: 229f49347d19b0ca0941e072b199a242ef6c5f2b
BUG=23167726
Change-Id: Id3e140e7b31ae11294724b1ecfe2e9c83b4d4207
(cherry picked from commit d0281a15b3c6bd91756e453cc9398c5ef412d99a)
|
const CuePoint::TrackPosition* CuePoint::Find(const Track* pTrack) const {
assert(pTrack);
const long long n = pTrack->GetNumber();
const TrackPosition* i = m_track_positions;
const TrackPosition* const j = i + m_track_positions_count;
while (i != j) {
const TrackPosition& p = *i++;
if (p.m_track == n)
return &p;
}
return NULL; // no matching track number found
}
|
const CuePoint::TrackPosition* CuePoint::Find(const Track* pTrack) const {
assert(pTrack);
const long long n = pTrack->GetNumber();
const TrackPosition* i = m_track_positions;
const TrackPosition* const j = i + m_track_positions_count;
while (i != j) {
const TrackPosition& p = *i++;
if (p.m_track == n)
return &p;
}
return NULL; // no matching track number found
}
|
C
|
Android
| 0 |
CVE-2019-5803
|
https://www.cvedetails.com/cve/CVE-2019-5803/
|
CWE-20
|
https://github.com/chromium/chromium/commit/0e3b0c22a5c596bdc24a391b3f02952c1c3e4f1b
|
0e3b0c22a5c596bdc24a391b3f02952c1c3e4f1b
|
Check the source browsing context's CSP in Location::SetLocation prior to dispatching a navigation to a `javascript:` URL.
Makes `javascript:` navigations via window.location.href compliant with
https://html.spec.whatwg.org/#navigate, which states that the source
browsing context must be checked (rather than the current browsing
context).
Bug: 909865
Change-Id: Id6aef6eef56865e164816c67eb9fe07ea1cb1b4e
Reviewed-on: https://chromium-review.googlesource.com/c/1359823
Reviewed-by: Andy Paicu <[email protected]>
Reviewed-by: Mike West <[email protected]>
Commit-Queue: Andrew Comminos <[email protected]>
Cr-Commit-Position: refs/heads/master@{#614451}
|
String Location::hash() const {
return DOMURLUtilsReadOnly::hash(Url());
}
|
String Location::hash() const {
return DOMURLUtilsReadOnly::hash(Url());
}
|
C
|
Chrome
| 0 |
CVE-2018-12714
|
https://www.cvedetails.com/cve/CVE-2018-12714/
|
CWE-787
|
https://github.com/torvalds/linux/commit/81f9c4e4177d31ced6f52a89bb70e93bfb77ca03
|
81f9c4e4177d31ced6f52a89bb70e93bfb77ca03
|
Merge tag 'trace-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"This contains a few fixes and a clean up.
- a bad merge caused an "endif" to go in the wrong place in
scripts/Makefile.build
- softirq tracing fix for tracing that corrupts lockdep and causes a
false splat
- histogram documentation typo fixes
- fix a bad memory reference when passing in no filter to the filter
code
- simplify code by using the swap macro instead of open coding the
swap"
* tag 'trace-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Fix SKIP_STACK_VALIDATION=1 build due to bad merge with -mrecord-mcount
tracing: Fix some errors in histogram documentation
tracing: Use swap macro in update_max_tr
softirq: Reorder trace_softirqs_on to prevent lockdep splat
tracing: Check for no filter when processing event filters
|
static int t_show(struct seq_file *m, void *v)
{
struct tracer *t = v;
if (!t)
return 0;
seq_puts(m, t->name);
if (t->next)
seq_putc(m, ' ');
else
seq_putc(m, '\n');
return 0;
}
|
static int t_show(struct seq_file *m, void *v)
{
struct tracer *t = v;
if (!t)
return 0;
seq_puts(m, t->name);
if (t->next)
seq_putc(m, ' ');
else
seq_putc(m, '\n');
return 0;
}
|
C
|
linux
| 0 |
CVE-2013-2853
|
https://www.cvedetails.com/cve/CVE-2013-2853/
| null |
https://github.com/chromium/chromium/commit/9c18dbcb79e5f700c453d1ac01fb6d8768e4844a
|
9c18dbcb79e5f700c453d1ac01fb6d8768e4844a
|
net: don't process truncated headers on HTTPS connections.
This change causes us to not process any headers unless they are correctly
terminated with a \r\n\r\n sequence.
BUG=244260
Review URL: https://chromiumcodereview.appspot.com/15688012
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@202927 0039d316-1c4b-4281-b951-d872f2087c98
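The guard amounts to refusing to hand anything to the header parser until the terminating blank line has arrived; a standalone sketch of that check (not the HttpStreamParser code itself):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
/* Sketch: headers are complete only once "\r\n\r\n" appears in the buffer. */
static bool headers_complete(const char *buf, size_t len)
{
	size_t i;
	if (len < 4)
		return false;
	for (i = 0; i + 4 <= len; i++)
		if (memcmp(buf + i, "\r\n\r\n", 4) == 0)
			return true;
	return false;        /* truncated: keep reading, do not parse yet */
}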
|
int HttpStreamParser::ReadResponseHeaders(const CompletionCallback& callback) {
DCHECK(io_state_ == STATE_REQUEST_SENT || io_state_ == STATE_DONE);
DCHECK(callback_.is_null());
DCHECK(!callback.is_null());
if (io_state_ == STATE_DONE)
return ERR_CONNECTION_CLOSED;
int result = OK;
io_state_ = STATE_READ_HEADERS;
if (read_buf_->offset() > 0) {
result = read_buf_->offset() - read_buf_unused_offset_;
read_buf_->set_offset(read_buf_unused_offset_);
}
if (result > 0)
io_state_ = STATE_READ_HEADERS_COMPLETE;
result = DoLoop(result);
if (result == ERR_IO_PENDING)
callback_ = callback;
return result > 0 ? OK : result;
}
|
int HttpStreamParser::ReadResponseHeaders(const CompletionCallback& callback) {
DCHECK(io_state_ == STATE_REQUEST_SENT || io_state_ == STATE_DONE);
DCHECK(callback_.is_null());
DCHECK(!callback.is_null());
if (io_state_ == STATE_DONE)
return ERR_CONNECTION_CLOSED;
int result = OK;
io_state_ = STATE_READ_HEADERS;
if (read_buf_->offset() > 0) {
result = read_buf_->offset() - read_buf_unused_offset_;
read_buf_->set_offset(read_buf_unused_offset_);
}
if (result > 0)
io_state_ = STATE_READ_HEADERS_COMPLETE;
result = DoLoop(result);
if (result == ERR_IO_PENDING)
callback_ = callback;
return result > 0 ? OK : result;
}
|
C
|
Chrome
| 0 |
CVE-2016-0723
|
https://www.cvedetails.com/cve/CVE-2016-0723/
|
CWE-362
|
https://github.com/torvalds/linux/commit/5c17c861a357e9458001f021a7afa7aab9937439
|
5c17c861a357e9458001f021a7afa7aab9937439
|
tty: Fix unsafe ldisc reference via ioctl(TIOCGETD)
ioctl(TIOCGETD) retrieves the line discipline id directly from the
ldisc because the line discipline id (c_line) in termios is untrustworthy;
userspace may have set termios via ioctl(TCSETS*) without actually
changing the line discipline via ioctl(TIOCSETD).
However, directly accessing the current ldisc via tty->ldisc is
unsafe; the ldisc ptr dereferenced may be stale if the line discipline
is changing via ioctl(TIOCSETD) or hangup.
Wait for the line discipline reference (just like read() or write())
to retrieve the "current" line discipline id.
Cc: <[email protected]>
Signed-off-by: Peter Hurley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
static ssize_t hung_up_tty_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
return -EIO;
}
|
static ssize_t hung_up_tty_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
return -EIO;
}
|
C
|
linux
| 0 |
CVE-2017-9994
|
https://www.cvedetails.com/cve/CVE-2017-9994/
|
CWE-119
|
https://github.com/FFmpeg/FFmpeg/commit/6b5d3fb26fb4be48e4966e4b1d97c2165538d4ef
|
6b5d3fb26fb4be48e4966e4b1d97c2165538d4ef
|
avcodec/webp: Always set pix_fmt
Fixes: out of array access
Fixes: 1434/clusterfuzz-testcase-minimized-6314998085189632
Fixes: 1435/clusterfuzz-testcase-minimized-6483783723253760
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/targets/ffmpeg
Reviewed-by: "Ronald S. Bultje" <[email protected]>
Signed-off-by: Michael Niedermayer <[email protected]>
|
static int read_huffman_code_normal(WebPContext *s, HuffReader *hc,
int alphabet_size)
{
HuffReader code_len_hc = { { 0 }, 0, 0, { 0 } };
int *code_lengths = NULL;
int code_length_code_lengths[NUM_CODE_LENGTH_CODES] = { 0 };
int i, symbol, max_symbol, prev_code_len, ret;
int num_codes = 4 + get_bits(&s->gb, 4);
if (num_codes > NUM_CODE_LENGTH_CODES)
return AVERROR_INVALIDDATA;
for (i = 0; i < num_codes; i++)
code_length_code_lengths[code_length_code_order[i]] = get_bits(&s->gb, 3);
ret = huff_reader_build_canonical(&code_len_hc, code_length_code_lengths,
NUM_CODE_LENGTH_CODES);
if (ret < 0)
goto finish;
code_lengths = av_mallocz_array(alphabet_size, sizeof(*code_lengths));
if (!code_lengths) {
ret = AVERROR(ENOMEM);
goto finish;
}
if (get_bits1(&s->gb)) {
int bits = 2 + 2 * get_bits(&s->gb, 3);
max_symbol = 2 + get_bits(&s->gb, bits);
if (max_symbol > alphabet_size) {
av_log(s->avctx, AV_LOG_ERROR, "max symbol %d > alphabet size %d\n",
max_symbol, alphabet_size);
ret = AVERROR_INVALIDDATA;
goto finish;
}
} else {
max_symbol = alphabet_size;
}
prev_code_len = 8;
symbol = 0;
while (symbol < alphabet_size) {
int code_len;
if (!max_symbol--)
break;
code_len = huff_reader_get_symbol(&code_len_hc, &s->gb);
if (code_len < 16) {
/* Code length code [0..15] indicates literal code lengths. */
code_lengths[symbol++] = code_len;
if (code_len)
prev_code_len = code_len;
} else {
int repeat = 0, length = 0;
switch (code_len) {
case 16:
/* Code 16 repeats the previous non-zero value [3..6] times,
* i.e., 3 + ReadBits(2) times. If code 16 is used before a
* non-zero value has been emitted, a value of 8 is repeated. */
repeat = 3 + get_bits(&s->gb, 2);
length = prev_code_len;
break;
case 17:
/* Code 17 emits a streak of zeros [3..10], i.e.,
* 3 + ReadBits(3) times. */
repeat = 3 + get_bits(&s->gb, 3);
break;
case 18:
/* Code 18 emits a streak of zeros of length [11..138], i.e.,
* 11 + ReadBits(7) times. */
repeat = 11 + get_bits(&s->gb, 7);
break;
}
if (symbol + repeat > alphabet_size) {
av_log(s->avctx, AV_LOG_ERROR,
"invalid symbol %d + repeat %d > alphabet size %d\n",
symbol, repeat, alphabet_size);
ret = AVERROR_INVALIDDATA;
goto finish;
}
while (repeat-- > 0)
code_lengths[symbol++] = length;
}
}
ret = huff_reader_build_canonical(hc, code_lengths, alphabet_size);
finish:
ff_free_vlc(&code_len_hc.vlc);
av_free(code_lengths);
return ret;
}
|
static int read_huffman_code_normal(WebPContext *s, HuffReader *hc,
int alphabet_size)
{
HuffReader code_len_hc = { { 0 }, 0, 0, { 0 } };
int *code_lengths = NULL;
int code_length_code_lengths[NUM_CODE_LENGTH_CODES] = { 0 };
int i, symbol, max_symbol, prev_code_len, ret;
int num_codes = 4 + get_bits(&s->gb, 4);
if (num_codes > NUM_CODE_LENGTH_CODES)
return AVERROR_INVALIDDATA;
for (i = 0; i < num_codes; i++)
code_length_code_lengths[code_length_code_order[i]] = get_bits(&s->gb, 3);
ret = huff_reader_build_canonical(&code_len_hc, code_length_code_lengths,
NUM_CODE_LENGTH_CODES);
if (ret < 0)
goto finish;
code_lengths = av_mallocz_array(alphabet_size, sizeof(*code_lengths));
if (!code_lengths) {
ret = AVERROR(ENOMEM);
goto finish;
}
if (get_bits1(&s->gb)) {
int bits = 2 + 2 * get_bits(&s->gb, 3);
max_symbol = 2 + get_bits(&s->gb, bits);
if (max_symbol > alphabet_size) {
av_log(s->avctx, AV_LOG_ERROR, "max symbol %d > alphabet size %d\n",
max_symbol, alphabet_size);
ret = AVERROR_INVALIDDATA;
goto finish;
}
} else {
max_symbol = alphabet_size;
}
prev_code_len = 8;
symbol = 0;
while (symbol < alphabet_size) {
int code_len;
if (!max_symbol--)
break;
code_len = huff_reader_get_symbol(&code_len_hc, &s->gb);
if (code_len < 16) {
/* Code length code [0..15] indicates literal code lengths. */
code_lengths[symbol++] = code_len;
if (code_len)
prev_code_len = code_len;
} else {
int repeat = 0, length = 0;
switch (code_len) {
case 16:
/* Code 16 repeats the previous non-zero value [3..6] times,
* i.e., 3 + ReadBits(2) times. If code 16 is used before a
* non-zero value has been emitted, a value of 8 is repeated. */
repeat = 3 + get_bits(&s->gb, 2);
length = prev_code_len;
break;
case 17:
/* Code 17 emits a streak of zeros [3..10], i.e.,
* 3 + ReadBits(3) times. */
repeat = 3 + get_bits(&s->gb, 3);
break;
case 18:
/* Code 18 emits a streak of zeros of length [11..138], i.e.,
* 11 + ReadBits(7) times. */
repeat = 11 + get_bits(&s->gb, 7);
break;
}
if (symbol + repeat > alphabet_size) {
av_log(s->avctx, AV_LOG_ERROR,
"invalid symbol %d + repeat %d > alphabet size %d\n",
symbol, repeat, alphabet_size);
ret = AVERROR_INVALIDDATA;
goto finish;
}
while (repeat-- > 0)
code_lengths[symbol++] = length;
}
}
ret = huff_reader_build_canonical(hc, code_lengths, alphabet_size);
finish:
ff_free_vlc(&code_len_hc.vlc);
av_free(code_lengths);
return ret;
}
|
C
|
FFmpeg
| 0 |
CVE-2016-2538
|
https://www.cvedetails.com/cve/CVE-2016-2538/
|
CWE-189
|
https://git.qemu.org/?p=qemu.git;a=commit;h=fe3c546c5ff2a6210f9a4d8561cc64051ca8603e
|
fe3c546c5ff2a6210f9a4d8561cc64051ca8603e
| null |
static void usb_net_handle_datain(USBNetState *s, USBPacket *p)
{
int len;
if (s->in_ptr > s->in_len) {
usb_net_reset_in_buf(s);
p->status = USB_RET_NAK;
return;
}
if (!s->in_len) {
p->status = USB_RET_NAK;
return;
}
len = s->in_len - s->in_ptr;
if (len > p->iov.size) {
len = p->iov.size;
}
usb_packet_copy(p, &s->in_buf[s->in_ptr], len);
s->in_ptr += len;
if (s->in_ptr >= s->in_len &&
(is_rndis(s) || (s->in_len & (64 - 1)) || !len)) {
/* no short packet necessary */
usb_net_reset_in_buf(s);
}
#ifdef TRAFFIC_DEBUG
fprintf(stderr, "usbnet: data in len %zu return %d", p->iov.size, len);
iov_hexdump(p->iov.iov, p->iov.niov, stderr, "usbnet", len);
#endif
}
|
static void usb_net_handle_datain(USBNetState *s, USBPacket *p)
{
int len;
if (s->in_ptr > s->in_len) {
usb_net_reset_in_buf(s);
p->status = USB_RET_NAK;
return;
}
if (!s->in_len) {
p->status = USB_RET_NAK;
return;
}
len = s->in_len - s->in_ptr;
if (len > p->iov.size) {
len = p->iov.size;
}
usb_packet_copy(p, &s->in_buf[s->in_ptr], len);
s->in_ptr += len;
if (s->in_ptr >= s->in_len &&
(is_rndis(s) || (s->in_len & (64 - 1)) || !len)) {
/* no short packet necessary */
usb_net_reset_in_buf(s);
}
#ifdef TRAFFIC_DEBUG
fprintf(stderr, "usbnet: data in len %zu return %d", p->iov.size, len);
iov_hexdump(p->iov.iov, p->iov.niov, stderr, "usbnet", len);
#endif
}
|
C
|
qemu
| 0 |
CVE-2014-3645
|
https://www.cvedetails.com/cve/CVE-2014-3645/
|
CWE-20
|
https://github.com/torvalds/linux/commit/bfd0a56b90005f8c8a004baf407ad90045c2b11e
|
bfd0a56b90005f8c8a004baf407ad90045c2b11e
|
nEPT: Nested INVEPT
If we let L1 use EPT, we should probably also support the INVEPT instruction.
In our current nested EPT implementation, when L1 changes its EPT table
for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in
the course of this modification already calls INVEPT. But if last level
of shadow page is unsync not all L1's changes to EPT12 are intercepted,
which means roots need to be synced when L1 calls INVEPT. Global INVEPT
should not be different since roots are synced by kvm_mmu_load() each
time EPTP02 changes.
Reviewed-by: Xiao Guangrong <[email protected]>
Signed-off-by: Nadav Har'El <[email protected]>
Signed-off-by: Jun Nakajima <[email protected]>
Signed-off-by: Xinhao Xu <[email protected]>
Signed-off-by: Yang Zhang <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
{
int ret;
struct kvm_userspace_memory_region tss_mem = {
.slot = TSS_PRIVATE_MEMSLOT,
.guest_phys_addr = addr,
.memory_size = PAGE_SIZE * 3,
.flags = 0,
};
ret = kvm_set_memory_region(kvm, &tss_mem);
if (ret)
return ret;
kvm->arch.tss_addr = addr;
if (!init_rmode_tss(kvm))
return -ENOMEM;
return 0;
}
|
static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
{
int ret;
struct kvm_userspace_memory_region tss_mem = {
.slot = TSS_PRIVATE_MEMSLOT,
.guest_phys_addr = addr,
.memory_size = PAGE_SIZE * 3,
.flags = 0,
};
ret = kvm_set_memory_region(kvm, &tss_mem);
if (ret)
return ret;
kvm->arch.tss_addr = addr;
if (!init_rmode_tss(kvm))
return -ENOMEM;
return 0;
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/d4cd2b2c0953ad7e9fa988c234eb9361be80fe81
|
d4cd2b2c0953ad7e9fa988c234eb9361be80fe81
|
DevTools: 'Overrides' UI overlay obstructs page and element inspector
BUG=302862
[email protected]
Review URL: https://codereview.chromium.org/40233006
git-svn-id: svn://svn.chromium.org/blink/trunk@160559 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void InspectorPageAgent::webViewResized(const IntSize& size)
{
int currentWidth = static_cast<int>(m_state->getLong(PageAgentState::pageAgentScreenWidthOverride));
m_overlay->resize(currentWidth ? size : IntSize());
}
|
void InspectorPageAgent::webViewResized(const IntSize& size)
{
int currentWidth = static_cast<int>(m_state->getLong(PageAgentState::pageAgentScreenWidthOverride));
m_overlay->resize(currentWidth ? size : IntSize());
}
|
C
|
Chrome
| 0 |
CVE-2012-3552
|
https://www.cvedetails.com/cve/CVE-2012-3552/
|
CWE-362
|
https://github.com/torvalds/linux/commit/f6d8bd051c391c1c0458a30b2a7abcd939329259
|
f6d8bd051c391c1c0458a30b2a7abcd939329259
|
inet: add RCU protection to inet->opt
We lack proper synchronization to manipulate inet->opt ip_options
Problem is ip_make_skb() calls ip_setup_cork() and
ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options),
without any protection against another thread manipulating inet->opt.
Another thread can change inet->opt pointer and free old one under us.
Use RCU to protect inet->opt (changed to inet->inet_opt).
Instead of handling atomic refcounts, just copy ip_options when
necessary, to avoid cache line dirtying.
We cant insert an rcu_head in struct ip_options since its included in
skb->cb[], so this patch is large because I had to introduce a new
ip_options_rcu structure.
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static inline int compute_score(struct sock *sk, struct net *net, __be32 saddr,
unsigned short hnum,
__be16 sport, __be32 daddr, __be16 dport, int dif)
{
int score = -1;
if (net_eq(sock_net(sk), net) && udp_sk(sk)->udp_port_hash == hnum &&
!ipv6_only_sock(sk)) {
struct inet_sock *inet = inet_sk(sk);
score = (sk->sk_family == PF_INET ? 1 : 0);
if (inet->inet_rcv_saddr) {
if (inet->inet_rcv_saddr != daddr)
return -1;
score += 2;
}
if (inet->inet_daddr) {
if (inet->inet_daddr != saddr)
return -1;
score += 2;
}
if (inet->inet_dport) {
if (inet->inet_dport != sport)
return -1;
score += 2;
}
if (sk->sk_bound_dev_if) {
if (sk->sk_bound_dev_if != dif)
return -1;
score += 2;
}
}
return score;
}
|
static inline int compute_score(struct sock *sk, struct net *net, __be32 saddr,
unsigned short hnum,
__be16 sport, __be32 daddr, __be16 dport, int dif)
{
int score = -1;
if (net_eq(sock_net(sk), net) && udp_sk(sk)->udp_port_hash == hnum &&
!ipv6_only_sock(sk)) {
struct inet_sock *inet = inet_sk(sk);
score = (sk->sk_family == PF_INET ? 1 : 0);
if (inet->inet_rcv_saddr) {
if (inet->inet_rcv_saddr != daddr)
return -1;
score += 2;
}
if (inet->inet_daddr) {
if (inet->inet_daddr != saddr)
return -1;
score += 2;
}
if (inet->inet_dport) {
if (inet->inet_dport != sport)
return -1;
score += 2;
}
if (sk->sk_bound_dev_if) {
if (sk->sk_bound_dev_if != dif)
return -1;
score += 2;
}
}
return score;
}
|
C
|
linux
| 0 |
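The CVE-2012-3552 record above describes protecting a shared options pointer with RCU, so that a writer publishes a fresh copy and frees the old one only after a grace period while readers dereference it inside a read-side critical section. A rough userspace sketch of that publish/read pattern, assuming the liburcu userspace RCU library is available (the kernel uses its own RCU API); struct my_opts and the helper names are illustrative only.

/* build (assumption): cc rcu_sketch.c -lurcu */
#include <urcu.h>
#include <stdlib.h>
#include <string.h>

struct my_opts { size_t len; unsigned char data[40]; };

static struct my_opts *shared_opts;   /* read only via rcu_dereference() */

/* Reader: dereference the shared pointer inside a read-side critical section. */
static size_t reader_peek_len(void)
{
    size_t len = 0;
    rcu_read_lock();
    struct my_opts *o = rcu_dereference(shared_opts);
    if (o)
        len = o->len;
    rcu_read_unlock();
    return len;
}

/* Writer: publish a fresh copy, free the old one only after a grace period. */
static void writer_replace(const unsigned char *data, size_t len)
{
    struct my_opts *n = calloc(1, sizeof(*n));
    if (!n || len > sizeof(n->data)) { free(n); return; }
    n->len = len;
    memcpy(n->data, data, len);
    struct my_opts *old = shared_opts;
    rcu_assign_pointer(shared_opts, n);
    synchronize_rcu();            /* wait until no reader can still see 'old' */
    free(old);
}

int main(void)
{
    rcu_register_thread();
    writer_replace((const unsigned char *)"\x01\x02", 2);
    size_t l = reader_peek_len();
    rcu_unregister_thread();
    return (int)l - 2;            /* exits 0 if the published copy was read back */
}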
CVE-2011-2918
|
https://www.cvedetails.com/cve/CVE-2011-2918/
|
CWE-399
|
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Michael Cree <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jason Wessel <[email protected]>
Cc: Don Zickus <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
prepare_task_switch(struct rq *rq, struct task_struct *prev,
struct task_struct *next)
{
sched_info_switch(prev, next);
perf_event_task_sched_out(prev, next);
fire_sched_out_preempt_notifiers(prev, next);
prepare_lock_switch(rq, next);
prepare_arch_switch(next);
trace_sched_switch(prev, next);
}
|
prepare_task_switch(struct rq *rq, struct task_struct *prev,
struct task_struct *next)
{
sched_info_switch(prev, next);
perf_event_task_sched_out(prev, next);
fire_sched_out_preempt_notifiers(prev, next);
prepare_lock_switch(rq, next);
prepare_arch_switch(next);
trace_sched_switch(prev, next);
}
|
C
|
linux
| 0 |
CVE-2017-12187
|
https://www.cvedetails.com/cve/CVE-2017-12187/
|
CWE-20
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=cad5a1050b7184d828aef9c1dd151c3ab649d37e
|
cad5a1050b7184d828aef9c1dd151c3ab649d37e
| null |
ProcRenderAddGlyphs(ClientPtr client)
{
GlyphSetPtr glyphSet;
REQUEST(xRenderAddGlyphsReq);
GlyphNewRec glyphsLocal[NLOCALGLYPH];
GlyphNewPtr glyphsBase, glyphs, glyph_new;
int remain, nglyphs;
CARD32 *gids;
xGlyphInfo *gi;
CARD8 *bits;
unsigned int size;
int err;
int i, screen;
PicturePtr pSrc = NULL, pDst = NULL;
PixmapPtr pSrcPix = NULL, pDstPix = NULL;
CARD32 component_alpha;
REQUEST_AT_LEAST_SIZE(xRenderAddGlyphsReq);
err =
dixLookupResourceByType((void **) &glyphSet, stuff->glyphset,
GlyphSetType, client, DixAddAccess);
if (err != Success) {
client->errorValue = stuff->glyphset;
return err;
}
err = BadAlloc;
nglyphs = stuff->nglyphs;
if (nglyphs > UINT32_MAX / sizeof(GlyphNewRec))
return BadAlloc;
component_alpha = NeedsComponent(glyphSet->format->format);
if (nglyphs <= NLOCALGLYPH) {
memset(glyphsLocal, 0, sizeof(glyphsLocal));
glyphsBase = glyphsLocal;
}
else {
glyphsBase = (GlyphNewPtr) calloc(nglyphs, sizeof(GlyphNewRec));
if (!glyphsBase)
return BadAlloc;
}
remain = (client->req_len << 2) - sizeof(xRenderAddGlyphsReq);
glyphs = glyphsBase;
gids = (CARD32 *) (stuff + 1);
gi = (xGlyphInfo *) (gids + nglyphs);
bits = (CARD8 *) (gi + nglyphs);
remain -= (sizeof(CARD32) + sizeof(xGlyphInfo)) * nglyphs;
/* protect against bad nglyphs */
if (gi < ((xGlyphInfo *) stuff) ||
gi > ((xGlyphInfo *) ((CARD32 *) stuff + client->req_len)) ||
bits < ((CARD8 *) stuff) ||
bits > ((CARD8 *) ((CARD32 *) stuff + client->req_len))) {
err = BadLength;
goto bail;
}
for (i = 0; i < nglyphs; i++) {
size_t padded_width;
glyph_new = &glyphs[i];
padded_width = PixmapBytePad(gi[i].width, glyphSet->format->depth);
if (gi[i].height &&
padded_width > (UINT32_MAX - sizeof(GlyphRec)) / gi[i].height)
break;
size = gi[i].height * padded_width;
if (remain < size)
break;
err = HashGlyph(&gi[i], bits, size, glyph_new->sha1);
if (err)
goto bail;
glyph_new->glyph = FindGlyphByHash(glyph_new->sha1, glyphSet->fdepth);
if (glyph_new->glyph && glyph_new->glyph != DeletedGlyph) {
glyph_new->found = TRUE;
}
else {
GlyphPtr glyph;
glyph_new->found = FALSE;
glyph_new->glyph = glyph = AllocateGlyph(&gi[i], glyphSet->fdepth);
if (!glyph) {
err = BadAlloc;
goto bail;
}
for (screen = 0; screen < screenInfo.numScreens; screen++) {
int width = gi[i].width;
int height = gi[i].height;
int depth = glyphSet->format->depth;
ScreenPtr pScreen;
int error;
/* Skip work if it's invisibly small anyway */
if (!width || !height)
break;
pScreen = screenInfo.screens[screen];
pSrcPix = GetScratchPixmapHeader(pScreen,
width, height,
depth, depth, -1, bits);
if (!pSrcPix) {
err = BadAlloc;
goto bail;
}
pSrc = CreatePicture(0, &pSrcPix->drawable,
glyphSet->format, 0, NULL,
serverClient, &error);
if (!pSrc) {
err = BadAlloc;
goto bail;
}
pDstPix = (pScreen->CreatePixmap) (pScreen,
width, height, depth,
CREATE_PIXMAP_USAGE_GLYPH_PICTURE);
if (!pDstPix) {
err = BadAlloc;
goto bail;
}
pDst = CreatePicture(0, &pDstPix->drawable,
glyphSet->format,
CPComponentAlpha, &component_alpha,
serverClient, &error);
SetGlyphPicture(glyph, pScreen, pDst);
/* The picture takes a reference to the pixmap, so we
drop ours. */
(pScreen->DestroyPixmap) (pDstPix);
pDstPix = NULL;
if (!pDst) {
err = BadAlloc;
goto bail;
}
CompositePicture(PictOpSrc,
pSrc,
None, pDst, 0, 0, 0, 0, 0, 0, width, height);
FreePicture((void *) pSrc, 0);
pSrc = NULL;
FreeScratchPixmapHeader(pSrcPix);
pSrcPix = NULL;
}
memcpy(glyph_new->glyph->sha1, glyph_new->sha1, 20);
}
glyph_new->id = gids[i];
if (size & 3)
size += 4 - (size & 3);
bits += size;
remain -= size;
}
if (remain || i < nglyphs) {
err = BadLength;
goto bail;
}
if (!ResizeGlyphSet(glyphSet, nglyphs)) {
err = BadAlloc;
goto bail;
}
for (i = 0; i < nglyphs; i++)
AddGlyph(glyphSet, glyphs[i].glyph, glyphs[i].id);
if (glyphsBase != glyphsLocal)
free(glyphsBase);
return Success;
bail:
if (pSrc)
FreePicture((void *) pSrc, 0);
if (pSrcPix)
FreeScratchPixmapHeader(pSrcPix);
for (i = 0; i < nglyphs; i++)
if (glyphs[i].glyph && !glyphs[i].found)
free(glyphs[i].glyph);
if (glyphsBase != glyphsLocal)
free(glyphsBase);
return err;
}
|
ProcRenderAddGlyphs(ClientPtr client)
{
GlyphSetPtr glyphSet;
REQUEST(xRenderAddGlyphsReq);
GlyphNewRec glyphsLocal[NLOCALGLYPH];
GlyphNewPtr glyphsBase, glyphs, glyph_new;
int remain, nglyphs;
CARD32 *gids;
xGlyphInfo *gi;
CARD8 *bits;
unsigned int size;
int err;
int i, screen;
PicturePtr pSrc = NULL, pDst = NULL;
PixmapPtr pSrcPix = NULL, pDstPix = NULL;
CARD32 component_alpha;
REQUEST_AT_LEAST_SIZE(xRenderAddGlyphsReq);
err =
dixLookupResourceByType((void **) &glyphSet, stuff->glyphset,
GlyphSetType, client, DixAddAccess);
if (err != Success) {
client->errorValue = stuff->glyphset;
return err;
}
err = BadAlloc;
nglyphs = stuff->nglyphs;
if (nglyphs > UINT32_MAX / sizeof(GlyphNewRec))
return BadAlloc;
component_alpha = NeedsComponent(glyphSet->format->format);
if (nglyphs <= NLOCALGLYPH) {
memset(glyphsLocal, 0, sizeof(glyphsLocal));
glyphsBase = glyphsLocal;
}
else {
glyphsBase = (GlyphNewPtr) calloc(nglyphs, sizeof(GlyphNewRec));
if (!glyphsBase)
return BadAlloc;
}
remain = (client->req_len << 2) - sizeof(xRenderAddGlyphsReq);
glyphs = glyphsBase;
gids = (CARD32 *) (stuff + 1);
gi = (xGlyphInfo *) (gids + nglyphs);
bits = (CARD8 *) (gi + nglyphs);
remain -= (sizeof(CARD32) + sizeof(xGlyphInfo)) * nglyphs;
/* protect against bad nglyphs */
if (gi < ((xGlyphInfo *) stuff) ||
gi > ((xGlyphInfo *) ((CARD32 *) stuff + client->req_len)) ||
bits < ((CARD8 *) stuff) ||
bits > ((CARD8 *) ((CARD32 *) stuff + client->req_len))) {
err = BadLength;
goto bail;
}
for (i = 0; i < nglyphs; i++) {
size_t padded_width;
glyph_new = &glyphs[i];
padded_width = PixmapBytePad(gi[i].width, glyphSet->format->depth);
if (gi[i].height &&
padded_width > (UINT32_MAX - sizeof(GlyphRec)) / gi[i].height)
break;
size = gi[i].height * padded_width;
if (remain < size)
break;
err = HashGlyph(&gi[i], bits, size, glyph_new->sha1);
if (err)
goto bail;
glyph_new->glyph = FindGlyphByHash(glyph_new->sha1, glyphSet->fdepth);
if (glyph_new->glyph && glyph_new->glyph != DeletedGlyph) {
glyph_new->found = TRUE;
}
else {
GlyphPtr glyph;
glyph_new->found = FALSE;
glyph_new->glyph = glyph = AllocateGlyph(&gi[i], glyphSet->fdepth);
if (!glyph) {
err = BadAlloc;
goto bail;
}
for (screen = 0; screen < screenInfo.numScreens; screen++) {
int width = gi[i].width;
int height = gi[i].height;
int depth = glyphSet->format->depth;
ScreenPtr pScreen;
int error;
/* Skip work if it's invisibly small anyway */
if (!width || !height)
break;
pScreen = screenInfo.screens[screen];
pSrcPix = GetScratchPixmapHeader(pScreen,
width, height,
depth, depth, -1, bits);
if (!pSrcPix) {
err = BadAlloc;
goto bail;
}
pSrc = CreatePicture(0, &pSrcPix->drawable,
glyphSet->format, 0, NULL,
serverClient, &error);
if (!pSrc) {
err = BadAlloc;
goto bail;
}
pDstPix = (pScreen->CreatePixmap) (pScreen,
width, height, depth,
CREATE_PIXMAP_USAGE_GLYPH_PICTURE);
if (!pDstPix) {
err = BadAlloc;
goto bail;
}
pDst = CreatePicture(0, &pDstPix->drawable,
glyphSet->format,
CPComponentAlpha, &component_alpha,
serverClient, &error);
SetGlyphPicture(glyph, pScreen, pDst);
/* The picture takes a reference to the pixmap, so we
drop ours. */
(pScreen->DestroyPixmap) (pDstPix);
pDstPix = NULL;
if (!pDst) {
err = BadAlloc;
goto bail;
}
CompositePicture(PictOpSrc,
pSrc,
None, pDst, 0, 0, 0, 0, 0, 0, width, height);
FreePicture((void *) pSrc, 0);
pSrc = NULL;
FreeScratchPixmapHeader(pSrcPix);
pSrcPix = NULL;
}
memcpy(glyph_new->glyph->sha1, glyph_new->sha1, 20);
}
glyph_new->id = gids[i];
if (size & 3)
size += 4 - (size & 3);
bits += size;
remain -= size;
}
if (remain || i < nglyphs) {
err = BadLength;
goto bail;
}
if (!ResizeGlyphSet(glyphSet, nglyphs)) {
err = BadAlloc;
goto bail;
}
for (i = 0; i < nglyphs; i++)
AddGlyph(glyphSet, glyphs[i].glyph, glyphs[i].id);
if (glyphsBase != glyphsLocal)
free(glyphsBase);
return Success;
bail:
if (pSrc)
FreePicture((void *) pSrc, 0);
if (pSrcPix)
FreeScratchPixmapHeader(pSrcPix);
for (i = 0; i < nglyphs; i++)
if (glyphs[i].glyph && !glyphs[i].found)
free(glyphs[i].glyph);
if (glyphsBase != glyphsLocal)
free(glyphsBase);
return err;
}
|
C
|
xserver
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/181c7400b2bf50ba02ac77149749fb419b4d4797
|
181c7400b2bf50ba02ac77149749fb419b4d4797
|
gpu: Use GetUniformSetup computed result size.
[email protected]
BUG=468936
Review URL: https://codereview.chromium.org/1016193003
Cr-Commit-Position: refs/heads/master@{#321489}
|
error::Error GLES2DecoderImpl::HandleAsyncTexSubImage2DCHROMIUM(
uint32 immediate_data_size,
const void* cmd_data) {
const gles2::cmds::AsyncTexSubImage2DCHROMIUM& c =
*static_cast<const gles2::cmds::AsyncTexSubImage2DCHROMIUM*>(cmd_data);
TRACE_EVENT0("gpu", "GLES2DecoderImpl::HandleAsyncTexSubImage2DCHROMIUM");
GLenum target = static_cast<GLenum>(c.target);
GLint level = static_cast<GLint>(c.level);
GLint xoffset = static_cast<GLint>(c.xoffset);
GLint yoffset = static_cast<GLint>(c.yoffset);
GLsizei width = static_cast<GLsizei>(c.width);
GLsizei height = static_cast<GLsizei>(c.height);
GLenum format = static_cast<GLenum>(c.format);
GLenum type = static_cast<GLenum>(c.type);
uint32 async_upload_token = static_cast<uint32>(c.async_upload_token);
uint32 sync_data_shm_id = static_cast<uint32>(c.sync_data_shm_id);
uint32 sync_data_shm_offset = static_cast<uint32>(c.sync_data_shm_offset);
base::ScopedClosureRunner scoped_completion_callback;
if (async_upload_token) {
base::Closure completion_closure =
AsyncUploadTokenCompletionClosure(async_upload_token,
sync_data_shm_id,
sync_data_shm_offset);
if (completion_closure.is_null())
return error::kInvalidArguments;
scoped_completion_callback.Reset(completion_closure);
}
uint32 data_size;
if (!GLES2Util::ComputeImageDataSizes(
width, height, 1, format, type, state_.unpack_alignment, &data_size,
NULL, NULL)) {
return error::kOutOfBounds;
}
const void* pixels = GetSharedMemoryAs<const void*>(
c.data_shm_id, c.data_shm_offset, data_size);
error::Error error = error::kNoError;
if (!ValidateTexSubImage2D(&error, "glAsyncTexSubImage2DCHROMIUM",
target, level, xoffset, yoffset, width, height, format, type, pixels)) {
return error;
}
TextureRef* texture_ref = texture_manager()->GetTextureInfoForTarget(
&state_, target);
Texture* texture = texture_ref->texture();
if (!ValidateAsyncTransfer(
"glAsyncTexSubImage2DCHROMIUM", texture_ref, target, level, pixels))
return error::kNoError;
if (!texture->SafeToRenderFrom()) {
if (!texture_manager()->ClearTextureLevel(this, texture_ref,
target, level)) {
LOCAL_SET_GL_ERROR(
GL_OUT_OF_MEMORY,
"glAsyncTexSubImage2DCHROMIUM", "dimensions too big");
return error::kNoError;
}
}
AsyncTexSubImage2DParams tex_params = {target, level, xoffset, yoffset,
width, height, format, type};
AsyncMemoryParams mem_params(
GetSharedMemoryBuffer(c.data_shm_id), c.data_shm_offset, data_size);
AsyncPixelTransferDelegate* delegate =
async_pixel_transfer_manager_->GetPixelTransferDelegate(texture_ref);
if (!delegate) {
AsyncTexImage2DParams define_params = {target, level,
0, 0, 0, 0, 0, 0};
texture->GetLevelSize(target, level, &define_params.width,
&define_params.height);
texture->GetLevelType(target, level, &define_params.type,
&define_params.internal_format);
delegate = async_pixel_transfer_manager_->CreatePixelTransferDelegate(
texture_ref, define_params);
texture->SetImmutable(true);
}
delegate->AsyncTexSubImage2D(tex_params, mem_params);
return error::kNoError;
}
|
error::Error GLES2DecoderImpl::HandleAsyncTexSubImage2DCHROMIUM(
uint32 immediate_data_size,
const void* cmd_data) {
const gles2::cmds::AsyncTexSubImage2DCHROMIUM& c =
*static_cast<const gles2::cmds::AsyncTexSubImage2DCHROMIUM*>(cmd_data);
TRACE_EVENT0("gpu", "GLES2DecoderImpl::HandleAsyncTexSubImage2DCHROMIUM");
GLenum target = static_cast<GLenum>(c.target);
GLint level = static_cast<GLint>(c.level);
GLint xoffset = static_cast<GLint>(c.xoffset);
GLint yoffset = static_cast<GLint>(c.yoffset);
GLsizei width = static_cast<GLsizei>(c.width);
GLsizei height = static_cast<GLsizei>(c.height);
GLenum format = static_cast<GLenum>(c.format);
GLenum type = static_cast<GLenum>(c.type);
uint32 async_upload_token = static_cast<uint32>(c.async_upload_token);
uint32 sync_data_shm_id = static_cast<uint32>(c.sync_data_shm_id);
uint32 sync_data_shm_offset = static_cast<uint32>(c.sync_data_shm_offset);
base::ScopedClosureRunner scoped_completion_callback;
if (async_upload_token) {
base::Closure completion_closure =
AsyncUploadTokenCompletionClosure(async_upload_token,
sync_data_shm_id,
sync_data_shm_offset);
if (completion_closure.is_null())
return error::kInvalidArguments;
scoped_completion_callback.Reset(completion_closure);
}
uint32 data_size;
if (!GLES2Util::ComputeImageDataSizes(
width, height, 1, format, type, state_.unpack_alignment, &data_size,
NULL, NULL)) {
return error::kOutOfBounds;
}
const void* pixels = GetSharedMemoryAs<const void*>(
c.data_shm_id, c.data_shm_offset, data_size);
error::Error error = error::kNoError;
if (!ValidateTexSubImage2D(&error, "glAsyncTexSubImage2DCHROMIUM",
target, level, xoffset, yoffset, width, height, format, type, pixels)) {
return error;
}
TextureRef* texture_ref = texture_manager()->GetTextureInfoForTarget(
&state_, target);
Texture* texture = texture_ref->texture();
if (!ValidateAsyncTransfer(
"glAsyncTexSubImage2DCHROMIUM", texture_ref, target, level, pixels))
return error::kNoError;
if (!texture->SafeToRenderFrom()) {
if (!texture_manager()->ClearTextureLevel(this, texture_ref,
target, level)) {
LOCAL_SET_GL_ERROR(
GL_OUT_OF_MEMORY,
"glAsyncTexSubImage2DCHROMIUM", "dimensions too big");
return error::kNoError;
}
}
AsyncTexSubImage2DParams tex_params = {target, level, xoffset, yoffset,
width, height, format, type};
AsyncMemoryParams mem_params(
GetSharedMemoryBuffer(c.data_shm_id), c.data_shm_offset, data_size);
AsyncPixelTransferDelegate* delegate =
async_pixel_transfer_manager_->GetPixelTransferDelegate(texture_ref);
if (!delegate) {
AsyncTexImage2DParams define_params = {target, level,
0, 0, 0, 0, 0, 0};
texture->GetLevelSize(target, level, &define_params.width,
&define_params.height);
texture->GetLevelType(target, level, &define_params.type,
&define_params.internal_format);
delegate = async_pixel_transfer_manager_->CreatePixelTransferDelegate(
texture_ref, define_params);
texture->SetImmutable(true);
}
delegate->AsyncTexSubImage2D(tex_params, mem_params);
return error::kNoError;
}
|
C
|
Chrome
| 0 |
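The Chromium GPU record above shows HandleAsyncTexSubImage2DCHROMIUM computing data_size with ComputeImageDataSizes before mapping the client-supplied shared memory, and the commit applies the same idea of trusting a computed result size rather than a caller-provided one. A small sketch of the underlying pattern — an overflow-checked size computation performed before touching the buffer; the names are illustrative, not the Chromium API.

#include <stdint.h>
#include <stdio.h>

/* Compute width * height * bytes_per_pixel, refusing on overflow. Returns 0 on success. */
static int compute_image_size(uint32_t width, uint32_t height,
                              uint32_t bytes_per_pixel, uint64_t *out_size)
{
    uint64_t row = (uint64_t)width * bytes_per_pixel;   /* fits in uint64_t */
    if (height != 0 && row > UINT64_MAX / height)
        return -1;                                      /* would overflow */
    *out_size = row * height;
    return 0;
}

int main(void)
{
    uint64_t size;
    if (compute_image_size(4096, 4096, 4, &size) == 0)
        printf("need %llu bytes\n", (unsigned long long)size);
    /* The caller would then verify the shared-memory region is at least 'size'
     * bytes before reading pixel data from it, as the handler above does. */
    return 0;
}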
CVE-2011-2795
|
https://www.cvedetails.com/cve/CVE-2011-2795/
|
CWE-264
|
https://github.com/chromium/chromium/commit/73edae623529f04c668268de49d00324b96166a2
|
73edae623529f04c668268de49d00324b96166a2
|
There are too many poorly named functions to create a fragment from markup
https://bugs.webkit.org/show_bug.cgi?id=87339
Reviewed by Eric Seidel.
Source/WebCore:
Moved all functions that create a fragment from markup to markup.h/cpp.
There should be no behavioral change.
* dom/Range.cpp:
(WebCore::Range::createContextualFragment):
* dom/Range.h: Removed createDocumentFragmentForElement.
* dom/ShadowRoot.cpp:
(WebCore::ShadowRoot::setInnerHTML):
* editing/markup.cpp:
(WebCore::createFragmentFromMarkup):
(WebCore::createFragmentForInnerOuterHTML): Renamed from createFragmentFromSource.
(WebCore::createFragmentForTransformToFragment): Moved from XSLTProcessor.
(WebCore::removeElementPreservingChildren): Moved from Range.
(WebCore::createContextualFragment): Ditto.
* editing/markup.h:
* html/HTMLElement.cpp:
(WebCore::HTMLElement::setInnerHTML):
(WebCore::HTMLElement::setOuterHTML):
(WebCore::HTMLElement::insertAdjacentHTML):
* inspector/DOMPatchSupport.cpp:
(WebCore::DOMPatchSupport::patchNode): Added a FIXME since this code should be using
one of the functions listed in markup.h
* xml/XSLTProcessor.cpp:
(WebCore::XSLTProcessor::transformToFragment):
Source/WebKit/qt:
Replace calls to Range::createDocumentFragmentForElement by calls to
createContextualDocumentFragment.
* Api/qwebelement.cpp:
(QWebElement::appendInside):
(QWebElement::prependInside):
(QWebElement::prependOutside):
(QWebElement::appendOutside):
(QWebElement::encloseContentsWith):
(QWebElement::encloseWith):
git-svn-id: svn://svn.chromium.org/blink/trunk@118414 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
void HTMLElement::accessKeyAction(bool sendMouseEvents)
{
dispatchSimulatedClick(0, sendMouseEvents);
}
|
void HTMLElement::accessKeyAction(bool sendMouseEvents)
{
dispatchSimulatedClick(0, sendMouseEvents);
}
|
C
|
Chrome
| 0 |
CVE-2016-10066
|
https://www.cvedetails.com/cve/CVE-2016-10066/
|
CWE-119
|
https://github.com/ImageMagick/ImageMagick/commit/f6e9d0d9955e85bdd7540b251cd50d598dacc5e6
|
f6e9d0d9955e85bdd7540b251cd50d598dacc5e6
| null |
static inline ssize_t MagickMin(const ssize_t x,const ssize_t y)
{
if (x < y)
return(x);
return(y);
}
|
static inline ssize_t MagickMin(const ssize_t x,const ssize_t y)
{
if (x < y)
return(x);
return(y);
}
|
C
|
ImageMagick
| 0 |
CVE-2019-11487
|
https://www.cvedetails.com/cve/CVE-2019-11487/
|
CWE-416
|
https://github.com/torvalds/linux/commit/6b3a707736301c2128ca85ce85fb13f60b5e350a
|
6b3a707736301c2128ca85ce85fb13f60b5e350a
|
Merge branch 'page-refs' (page ref overflow)
Merge page ref overflow branch.
Jann Horn reported that he can overflow the page ref count with
sufficient memory (and a filesystem that is intentionally extremely
slow).
Admittedly it's not exactly easy. To have more than four billion
references to a page requires a minimum of 32GB of kernel memory just
for the pointers to the pages, much less any metadata to keep track of
those pointers. Jann needed a total of 140GB of memory and a specially
crafted filesystem that leaves all reads pending (in order to not ever
free the page references and just keep adding more).
Still, we have a fairly straightforward way to limit the two obvious
user-controllable sources of page references: direct-IO like page
references gotten through get_user_pages(), and the splice pipe page
duplication. So let's just do that.
* branch page-refs:
fs: prevent page refcount overflow in pipe_buf_get
mm: prevent get_user_pages() from overflowing page refcount
mm: add 'try_get_page()' helper function
mm: make page ref count overflow check tighter and more explicit
|
ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
splice_direct_actor *actor)
{
struct pipe_inode_info *pipe;
long ret, bytes;
umode_t i_mode;
size_t len;
int i, flags, more;
/*
* We require the input being a regular file, as we don't want to
* randomly drop data for eg socket -> socket splicing. Use the
* piped splicing for that!
*/
i_mode = file_inode(in)->i_mode;
if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
return -EINVAL;
/*
* neither in nor out is a pipe, setup an internal pipe attached to
* 'out' and transfer the wanted data from 'in' to 'out' through that
*/
pipe = current->splice_pipe;
if (unlikely(!pipe)) {
pipe = alloc_pipe_info();
if (!pipe)
return -ENOMEM;
/*
* We don't have an immediate reader, but we'll read the stuff
* out of the pipe right after the splice_to_pipe(). So set
* PIPE_READERS appropriately.
*/
pipe->readers = 1;
current->splice_pipe = pipe;
}
/*
* Do the splice.
*/
ret = 0;
bytes = 0;
len = sd->total_len;
flags = sd->flags;
/*
* Don't block on output, we have to drain the direct pipe.
*/
sd->flags &= ~SPLICE_F_NONBLOCK;
more = sd->flags & SPLICE_F_MORE;
WARN_ON_ONCE(pipe->nrbufs != 0);
while (len) {
size_t read_len;
loff_t pos = sd->pos, prev_pos = pos;
/* Don't try to read more the pipe has space for. */
read_len = min_t(size_t, len,
(pipe->buffers - pipe->nrbufs) << PAGE_SHIFT);
ret = do_splice_to(in, &pos, pipe, read_len, flags);
if (unlikely(ret <= 0))
goto out_release;
read_len = ret;
sd->total_len = read_len;
/*
* If more data is pending, set SPLICE_F_MORE
* If this is the last data and SPLICE_F_MORE was not set
* initially, clears it.
*/
if (read_len < len)
sd->flags |= SPLICE_F_MORE;
else if (!more)
sd->flags &= ~SPLICE_F_MORE;
/*
* NOTE: nonblocking mode only applies to the input. We
* must not do the output in nonblocking mode as then we
* could get stuck data in the internal pipe:
*/
ret = actor(pipe, sd);
if (unlikely(ret <= 0)) {
sd->pos = prev_pos;
goto out_release;
}
bytes += ret;
len -= ret;
sd->pos = pos;
if (ret < read_len) {
sd->pos = prev_pos + ret;
goto out_release;
}
}
done:
pipe->nrbufs = pipe->curbuf = 0;
file_accessed(in);
return bytes;
out_release:
/*
* If we did an incomplete transfer we must release
* the pipe buffers in question:
*/
for (i = 0; i < pipe->buffers; i++) {
struct pipe_buffer *buf = pipe->bufs + i;
if (buf->ops)
pipe_buf_release(pipe, buf);
}
if (!bytes)
bytes = ret;
goto done;
}
|
ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
splice_direct_actor *actor)
{
struct pipe_inode_info *pipe;
long ret, bytes;
umode_t i_mode;
size_t len;
int i, flags, more;
/*
* We require the input being a regular file, as we don't want to
* randomly drop data for eg socket -> socket splicing. Use the
* piped splicing for that!
*/
i_mode = file_inode(in)->i_mode;
if (unlikely(!S_ISREG(i_mode) && !S_ISBLK(i_mode)))
return -EINVAL;
/*
* neither in nor out is a pipe, setup an internal pipe attached to
* 'out' and transfer the wanted data from 'in' to 'out' through that
*/
pipe = current->splice_pipe;
if (unlikely(!pipe)) {
pipe = alloc_pipe_info();
if (!pipe)
return -ENOMEM;
/*
* We don't have an immediate reader, but we'll read the stuff
* out of the pipe right after the splice_to_pipe(). So set
* PIPE_READERS appropriately.
*/
pipe->readers = 1;
current->splice_pipe = pipe;
}
/*
* Do the splice.
*/
ret = 0;
bytes = 0;
len = sd->total_len;
flags = sd->flags;
/*
* Don't block on output, we have to drain the direct pipe.
*/
sd->flags &= ~SPLICE_F_NONBLOCK;
more = sd->flags & SPLICE_F_MORE;
WARN_ON_ONCE(pipe->nrbufs != 0);
while (len) {
size_t read_len;
loff_t pos = sd->pos, prev_pos = pos;
/* Don't try to read more the pipe has space for. */
read_len = min_t(size_t, len,
(pipe->buffers - pipe->nrbufs) << PAGE_SHIFT);
ret = do_splice_to(in, &pos, pipe, read_len, flags);
if (unlikely(ret <= 0))
goto out_release;
read_len = ret;
sd->total_len = read_len;
/*
* If more data is pending, set SPLICE_F_MORE
* If this is the last data and SPLICE_F_MORE was not set
* initially, clears it.
*/
if (read_len < len)
sd->flags |= SPLICE_F_MORE;
else if (!more)
sd->flags &= ~SPLICE_F_MORE;
/*
* NOTE: nonblocking mode only applies to the input. We
* must not do the output in nonblocking mode as then we
* could get stuck data in the internal pipe:
*/
ret = actor(pipe, sd);
if (unlikely(ret <= 0)) {
sd->pos = prev_pos;
goto out_release;
}
bytes += ret;
len -= ret;
sd->pos = pos;
if (ret < read_len) {
sd->pos = prev_pos + ret;
goto out_release;
}
}
done:
pipe->nrbufs = pipe->curbuf = 0;
file_accessed(in);
return bytes;
out_release:
/*
* If we did an incomplete transfer we must release
* the pipe buffers in question:
*/
for (i = 0; i < pipe->buffers; i++) {
struct pipe_buffer *buf = pipe->bufs + i;
if (buf->ops)
pipe_buf_release(pipe, buf);
}
if (!bytes)
bytes = ret;
goto done;
}
|
C
|
linux
| 0 |
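The CVE-2019-11487 merge message above describes capping user-controllable page reference counts so they cannot be pushed past overflow, adding a try_get_page() helper that fails instead of wrapping. A minimal userspace sketch of such a saturating "try get" using C11 atomics; struct obj and the threshold are illustrative, not the kernel's struct page API.

#include <stdatomic.h>
#include <limits.h>
#include <stdio.h>

struct obj { atomic_int refcount; };

/* Take a reference only while the count is comfortably below overflow. */
static int try_get(struct obj *o)
{
    int cur = atomic_load(&o->refcount);
    while (cur > 0 && cur < INT_MAX / 2) {          /* refuse once dangerously high */
        if (atomic_compare_exchange_weak(&o->refcount, &cur, cur + 1))
            return 1;                               /* got a reference */
    }
    return 0;                                       /* caller must back off */
}

static void put(struct obj *o)
{
    atomic_fetch_sub(&o->refcount, 1);
}

int main(void)
{
    struct obj o = { .refcount = 1 };
    printf("%d\n", try_get(&o));        /* 1: normal case succeeds */
    put(&o);
    atomic_store(&o.refcount, INT_MAX / 2);
    printf("%d\n", try_get(&o));        /* 0: refuses rather than risk wrapping */
    return 0;
}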