CVE ID (string, 13-43 chars, nullable) | CVE Page (string, 45-48 chars, nullable) | CWE ID (string, 90 classes) | codeLink (string, 46-139 chars) | commit_id (string, 6-81 chars) | commit_message (string, 3-13.3k chars, nullable) | func_after (string, 14-241k chars) | func_before (string, 14-241k chars) | lang (string, 3 classes) | project (string, 309 classes) | vul (int8, 0 or 1)
---|---|---|---|---|---|---|---|---|---|---
CVE-2011-2350
|
https://www.cvedetails.com/cve/CVE-2011-2350/
|
CWE-20
|
https://github.com/chromium/chromium/commit/b944f670bb7a8a919daac497a4ea0536c954c201
|
b944f670bb7a8a919daac497a4ea0536c954c201
|
[JSC] Implement a helper method createNotEnoughArgumentsError()
https://bugs.webkit.org/show_bug.cgi?id=85102
Reviewed by Geoffrey Garen.
In bug 84787, kbr@ requested to avoid hard-coding
createTypeError(exec, "Not enough arguments") here and there.
This patch implements createNotEnoughArgumentsError(exec)
and uses it in JSC bindings.
c.f. a corresponding bug for V8 bindings is bug 85097.
Source/JavaScriptCore:
* runtime/Error.cpp:
(JSC::createNotEnoughArgumentsError):
(JSC):
* runtime/Error.h:
(JSC):
Source/WebCore:
Test: bindings/scripts/test/TestObj.idl
* bindings/scripts/CodeGeneratorJS.pm: Modified as described above.
(GenerateArgumentsCountCheck):
* bindings/js/JSDataViewCustom.cpp: Ditto.
(WebCore::getDataViewMember):
(WebCore::setDataViewMember):
* bindings/js/JSDeprecatedPeerConnectionCustom.cpp:
(WebCore::JSDeprecatedPeerConnectionConstructor::constructJSDeprecatedPeerConnection):
* bindings/js/JSDirectoryEntryCustom.cpp:
(WebCore::JSDirectoryEntry::getFile):
(WebCore::JSDirectoryEntry::getDirectory):
* bindings/js/JSSharedWorkerCustom.cpp:
(WebCore::JSSharedWorkerConstructor::constructJSSharedWorker):
* bindings/js/JSWebKitMutationObserverCustom.cpp:
(WebCore::JSWebKitMutationObserverConstructor::constructJSWebKitMutationObserver):
(WebCore::JSWebKitMutationObserver::observe):
* bindings/js/JSWorkerCustom.cpp:
(WebCore::JSWorkerConstructor::constructJSWorker):
* bindings/scripts/test/JS/JSFloat64Array.cpp: Updated run-bindings-tests.
(WebCore::jsFloat64ArrayPrototypeFunctionFoo):
* bindings/scripts/test/JS/JSTestActiveDOMObject.cpp:
(WebCore::jsTestActiveDOMObjectPrototypeFunctionExcitingFunction):
(WebCore::jsTestActiveDOMObjectPrototypeFunctionPostMessage):
* bindings/scripts/test/JS/JSTestCustomNamedGetter.cpp:
(WebCore::jsTestCustomNamedGetterPrototypeFunctionAnotherFunction):
* bindings/scripts/test/JS/JSTestEventTarget.cpp:
(WebCore::jsTestEventTargetPrototypeFunctionItem):
(WebCore::jsTestEventTargetPrototypeFunctionAddEventListener):
(WebCore::jsTestEventTargetPrototypeFunctionRemoveEventListener):
(WebCore::jsTestEventTargetPrototypeFunctionDispatchEvent):
* bindings/scripts/test/JS/JSTestInterface.cpp:
(WebCore::JSTestInterfaceConstructor::constructJSTestInterface):
(WebCore::jsTestInterfacePrototypeFunctionSupplementalMethod2):
* bindings/scripts/test/JS/JSTestMediaQueryListListener.cpp:
(WebCore::jsTestMediaQueryListListenerPrototypeFunctionMethod):
* bindings/scripts/test/JS/JSTestNamedConstructor.cpp:
(WebCore::JSTestNamedConstructorNamedConstructor::constructJSTestNamedConstructor):
* bindings/scripts/test/JS/JSTestObj.cpp:
(WebCore::JSTestObjConstructor::constructJSTestObj):
(WebCore::jsTestObjPrototypeFunctionVoidMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionIntMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionObjMethodWithArgs):
(WebCore::jsTestObjPrototypeFunctionMethodWithSequenceArg):
(WebCore::jsTestObjPrototypeFunctionMethodReturningSequence):
(WebCore::jsTestObjPrototypeFunctionMethodThatRequiresAllArgsAndThrows):
(WebCore::jsTestObjPrototypeFunctionSerializedValue):
(WebCore::jsTestObjPrototypeFunctionIdbKey):
(WebCore::jsTestObjPrototypeFunctionOptionsObject):
(WebCore::jsTestObjPrototypeFunctionAddEventListener):
(WebCore::jsTestObjPrototypeFunctionRemoveEventListener):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonOptionalArgAndOptionalArg):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonOptionalArgAndTwoOptionalArgs):
(WebCore::jsTestObjPrototypeFunctionMethodWithCallbackArg):
(WebCore::jsTestObjPrototypeFunctionMethodWithNonCallbackArgAndCallbackArg):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod1):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod2):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod3):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod4):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod5):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod6):
(WebCore::jsTestObjPrototypeFunctionOverloadedMethod7):
(WebCore::jsTestObjConstructorFunctionClassMethod2):
(WebCore::jsTestObjConstructorFunctionOverloadedMethod11):
(WebCore::jsTestObjConstructorFunctionOverloadedMethod12):
(WebCore::jsTestObjPrototypeFunctionMethodWithUnsignedLongArray):
(WebCore::jsTestObjPrototypeFunctionConvert1):
(WebCore::jsTestObjPrototypeFunctionConvert2):
(WebCore::jsTestObjPrototypeFunctionConvert3):
(WebCore::jsTestObjPrototypeFunctionConvert4):
(WebCore::jsTestObjPrototypeFunctionConvert5):
(WebCore::jsTestObjPrototypeFunctionStrictFunction):
* bindings/scripts/test/JS/JSTestSerializedScriptValueInterface.cpp:
(WebCore::JSTestSerializedScriptValueInterfaceConstructor::constructJSTestSerializedScriptValueInterface):
(WebCore::jsTestSerializedScriptValueInterfacePrototypeFunctionAcceptTransferList):
git-svn-id: svn://svn.chromium.org/blink/trunk@115536 bbb929c8-8fbe-4397-9dbb-9b2b20218538
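The helper the message describes is small. Below is a minimal editorial sketch of its shape, assuming a createTypeError(ExecState*, const char*) factory as quoted in the message; the real declarations live in JavaScriptCore/runtime/Error.h and may use different string types.
```cpp
// Sketch only: assumes an existing createTypeError(ExecState*, const char*).
namespace JSC {

class ExecState;
class JSObject;

JSObject* createTypeError(ExecState*, const char*); // assumed existing factory

JSObject* createNotEnoughArgumentsError(ExecState* exec)
{
    // Centralizes the message that was previously hard-coded at call sites.
    return createTypeError(exec, "Not enough arguments");
}

} // namespace JSC
```
Call sites emitted by CodeGeneratorJS.pm (see GenerateArgumentsCountCheck above) then throw this error from their argument-count checks instead of repeating the string.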
|
void setJSTestObjUnsignedShortSequenceAttr(ExecState* exec, JSObject* thisObject, JSValue value)
{
JSTestObj* castedThis = jsCast<JSTestObj*>(thisObject);
TestObj* impl = static_cast<TestObj*>(castedThis->impl());
impl->setUnsignedShortSequenceAttr(toNativeArray<unsigned short>(exec, value));
}
|
void setJSTestObjUnsignedShortSequenceAttr(ExecState* exec, JSObject* thisObject, JSValue value)
{
JSTestObj* castedThis = jsCast<JSTestObj*>(thisObject);
TestObj* impl = static_cast<TestObj*>(castedThis->impl());
impl->setUnsignedShortSequenceAttr(toNativeArray<unsigned short>(exec, value));
}
|
C
|
Chrome
| 0 |
CVE-2018-17470
|
https://www.cvedetails.com/cve/CVE-2018-17470/
|
CWE-119
|
https://github.com/chromium/chromium/commit/385508dc888ef15d272cdd2705b17996abc519d6
|
385508dc888ef15d272cdd2705b17996abc519d6
|
Implement immutable texture base/max level clamping
It seems some drivers fail to handle that gracefully, so let's always clamp
to be on the safe side.
BUG=877874
TEST=test case in the bug, gpu_unittests
[email protected]
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;luci.chromium.try:linux_optional_gpu_tests_rel;luci.chromium.try:mac_optional_gpu_tests_rel;luci.chromium.try:win_optional_gpu_tests_rel
Change-Id: I6d93cb9389ea70525df4604112223604577582a2
Reviewed-on: https://chromium-review.googlesource.com/1194994
Reviewed-by: Kenneth Russell <[email protected]>
Commit-Queue: Zhenyao Mo <[email protected]>
Cr-Commit-Position: refs/heads/master@{#587264}
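As a rough illustration of the clamping the message describes (the function and field names below are assumptions, not the decoder's actual code), base and max level for an immutable texture are forced into the range of the mip levels allocated via TexStorage:
```cpp
// Illustrative clamp only; the real change lives in the GLES2 decoder and
// texture manager, and the names here are hypothetical.
static void ClampImmutableLevels(int immutable_levels,  // levels from TexStorage
                                 int* base_level,
                                 int* max_level) {
  int max_allowed = immutable_levels - 1;
  if (*base_level < 0)
    *base_level = 0;
  if (*base_level > max_allowed)
    *base_level = max_allowed;
  if (*max_level < *base_level)
    *max_level = *base_level;
  if (*max_level > max_allowed)
    *max_level = max_allowed;
}
```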
|
bool GLES2DecoderImpl::ValidateCompressedCopyTextureCHROMIUM(
const char* function_name,
TextureRef* source_texture_ref,
TextureRef* dest_texture_ref) {
if (!source_texture_ref || !dest_texture_ref) {
LOCAL_SET_GL_ERROR(GL_INVALID_VALUE, function_name, "unknown texture id");
return false;
}
Texture* source_texture = source_texture_ref->texture();
Texture* dest_texture = dest_texture_ref->texture();
if (source_texture == dest_texture) {
LOCAL_SET_GL_ERROR(GL_INVALID_OPERATION, function_name,
"source and destination textures are the same");
return false;
}
if (dest_texture->target() != GL_TEXTURE_2D ||
(source_texture->target() != GL_TEXTURE_2D &&
source_texture->target() != GL_TEXTURE_RECTANGLE_ARB &&
source_texture->target() != GL_TEXTURE_EXTERNAL_OES)) {
LOCAL_SET_GL_ERROR(GL_INVALID_VALUE, function_name,
"invalid texture target binding");
return false;
}
GLenum source_type = 0;
GLenum source_internal_format = 0;
source_texture->GetLevelType(source_texture->target(), 0, &source_type,
&source_internal_format);
bool valid_format =
source_internal_format == GL_ATC_RGB_AMD ||
source_internal_format == GL_ATC_RGBA_INTERPOLATED_ALPHA_AMD ||
source_internal_format == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
source_internal_format == GL_COMPRESSED_RGBA_S3TC_DXT5_EXT ||
source_internal_format == GL_ETC1_RGB8_OES;
if (!valid_format) {
LOCAL_SET_GL_ERROR(GL_INVALID_OPERATION, function_name,
"invalid internal format");
return false;
}
return true;
}
|
bool GLES2DecoderImpl::ValidateCompressedCopyTextureCHROMIUM(
const char* function_name,
TextureRef* source_texture_ref,
TextureRef* dest_texture_ref) {
if (!source_texture_ref || !dest_texture_ref) {
LOCAL_SET_GL_ERROR(GL_INVALID_VALUE, function_name, "unknown texture id");
return false;
}
Texture* source_texture = source_texture_ref->texture();
Texture* dest_texture = dest_texture_ref->texture();
if (source_texture == dest_texture) {
LOCAL_SET_GL_ERROR(GL_INVALID_OPERATION, function_name,
"source and destination textures are the same");
return false;
}
if (dest_texture->target() != GL_TEXTURE_2D ||
(source_texture->target() != GL_TEXTURE_2D &&
source_texture->target() != GL_TEXTURE_RECTANGLE_ARB &&
source_texture->target() != GL_TEXTURE_EXTERNAL_OES)) {
LOCAL_SET_GL_ERROR(GL_INVALID_VALUE, function_name,
"invalid texture target binding");
return false;
}
GLenum source_type = 0;
GLenum source_internal_format = 0;
source_texture->GetLevelType(source_texture->target(), 0, &source_type,
&source_internal_format);
bool valid_format =
source_internal_format == GL_ATC_RGB_AMD ||
source_internal_format == GL_ATC_RGBA_INTERPOLATED_ALPHA_AMD ||
source_internal_format == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
source_internal_format == GL_COMPRESSED_RGBA_S3TC_DXT5_EXT ||
source_internal_format == GL_ETC1_RGB8_OES;
if (!valid_format) {
LOCAL_SET_GL_ERROR(GL_INVALID_OPERATION, function_name,
"invalid internal format");
return false;
}
return true;
}
|
C
|
Chrome
| 0 |
CVE-2018-6140
|
https://www.cvedetails.com/cve/CVE-2018-6140/
|
CWE-20
|
https://github.com/chromium/chromium/commit/2aec794f26098c7a361c27d7c8f57119631cca8a
|
2aec794f26098c7a361c27d7c8f57119631cca8a
|
[DevTools] Do not allow chrome.debugger to attach to web ui pages
If the page navigates to web ui, we force detach the debugger extension.
[email protected]
Bug: 798222
Change-Id: Idb46c2f59e839388397a8dfa6ce2e2a897698df3
Reviewed-on: https://chromium-review.googlesource.com/935961
Commit-Queue: Dmitry Gozman <[email protected]>
Reviewed-by: Devlin <[email protected]>
Reviewed-by: Pavel Feldman <[email protected]>
Reviewed-by: Nasko Oskov <[email protected]>
Cr-Commit-Position: refs/heads/master@{#540916}
|
bool BrowserDevToolsAgentHost::Activate() {
return false;
}
|
bool BrowserDevToolsAgentHost::Activate() {
return false;
}
|
C
|
Chrome
| 0 |
CVE-2014-0131
|
https://www.cvedetails.com/cve/CVE-2014-0131/
|
CWE-416
|
https://github.com/torvalds/linux/commit/1fd819ecb90cc9b822cd84d3056ddba315d3340f
|
1fd819ecb90cc9b822cd84d3056ddba315d3340f
|
skbuff: skb_segment: orphan frags before copying
skb_segment copies frags around, so we need
to copy them carefully to avoid accessing
user memory after reporting completion to userspace
through a callback.
skb_segment doesn't normally happen on datapath:
TSO needs to be disabled - so disabling zero copy
in this case does not look like a big deal.
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
__wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
u8 *to, int len, __wsum csum)
{
int start = skb_headlen(skb);
int i, copy = start - offset;
struct sk_buff *frag_iter;
int pos = 0;
/* Copy header. */
if (copy > 0) {
if (copy > len)
copy = len;
csum = csum_partial_copy_nocheck(skb->data + offset, to,
copy, csum);
if ((len -= copy) == 0)
return csum;
offset += copy;
to += copy;
pos = copy;
}
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
WARN_ON(start > offset + len);
end = start + skb_frag_size(&skb_shinfo(skb)->frags[i]);
if ((copy = end - offset) > 0) {
__wsum csum2;
u8 *vaddr;
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
if (copy > len)
copy = len;
vaddr = kmap_atomic(skb_frag_page(frag));
csum2 = csum_partial_copy_nocheck(vaddr +
frag->page_offset +
offset - start, to,
copy, 0);
kunmap_atomic(vaddr);
csum = csum_block_add(csum, csum2, pos);
if (!(len -= copy))
return csum;
offset += copy;
to += copy;
pos += copy;
}
start = end;
}
skb_walk_frags(skb, frag_iter) {
__wsum csum2;
int end;
WARN_ON(start > offset + len);
end = start + frag_iter->len;
if ((copy = end - offset) > 0) {
if (copy > len)
copy = len;
csum2 = skb_copy_and_csum_bits(frag_iter,
offset - start,
to, copy, 0);
csum = csum_block_add(csum, csum2, pos);
if ((len -= copy) == 0)
return csum;
offset += copy;
to += copy;
pos += copy;
}
start = end;
}
BUG_ON(len);
return csum;
}
|
__wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
u8 *to, int len, __wsum csum)
{
int start = skb_headlen(skb);
int i, copy = start - offset;
struct sk_buff *frag_iter;
int pos = 0;
/* Copy header. */
if (copy > 0) {
if (copy > len)
copy = len;
csum = csum_partial_copy_nocheck(skb->data + offset, to,
copy, csum);
if ((len -= copy) == 0)
return csum;
offset += copy;
to += copy;
pos = copy;
}
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
WARN_ON(start > offset + len);
end = start + skb_frag_size(&skb_shinfo(skb)->frags[i]);
if ((copy = end - offset) > 0) {
__wsum csum2;
u8 *vaddr;
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
if (copy > len)
copy = len;
vaddr = kmap_atomic(skb_frag_page(frag));
csum2 = csum_partial_copy_nocheck(vaddr +
frag->page_offset +
offset - start, to,
copy, 0);
kunmap_atomic(vaddr);
csum = csum_block_add(csum, csum2, pos);
if (!(len -= copy))
return csum;
offset += copy;
to += copy;
pos += copy;
}
start = end;
}
skb_walk_frags(skb, frag_iter) {
__wsum csum2;
int end;
WARN_ON(start > offset + len);
end = start + frag_iter->len;
if ((copy = end - offset) > 0) {
if (copy > len)
copy = len;
csum2 = skb_copy_and_csum_bits(frag_iter,
offset - start,
to, copy, 0);
csum = csum_block_add(csum, csum2, pos);
if ((len -= copy) == 0)
return csum;
offset += copy;
to += copy;
pos += copy;
}
start = end;
}
BUG_ON(len);
return csum;
}
|
C
|
linux
| 0 |
CVE-2012-5148
|
https://www.cvedetails.com/cve/CVE-2012-5148/
|
CWE-20
|
https://github.com/chromium/chromium/commit/e89cfcb9090e8c98129ae9160c513f504db74599
|
e89cfcb9090e8c98129ae9160c513f504db74599
|
Remove TabContents from TabStripModelObserver::TabDetachedAt.
BUG=107201
TEST=no visible change
Review URL: https://chromiumcodereview.appspot.com/11293205
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@167122 0039d316-1c4b-4281-b951-d872f2087c98
|
void Browser::FileSelectedWithExtraInfo(
const ui::SelectedFileInfo& file_info,
int index,
void* params) {
profile_->set_last_selected_directory(file_info.file_path.DirName());
const FilePath& path = file_info.local_path;
GURL file_url = net::FilePathToFileURL(path);
#if defined(OS_CHROMEOS)
drive::util::ModifyDriveFileResourceUrl(profile_, path, &file_url);
#endif
if (file_url.is_empty())
return;
OpenURL(OpenURLParams(
file_url, Referrer(), CURRENT_TAB, content::PAGE_TRANSITION_TYPED,
false));
}
|
void Browser::FileSelectedWithExtraInfo(
const ui::SelectedFileInfo& file_info,
int index,
void* params) {
profile_->set_last_selected_directory(file_info.file_path.DirName());
const FilePath& path = file_info.local_path;
GURL file_url = net::FilePathToFileURL(path);
#if defined(OS_CHROMEOS)
drive::util::ModifyDriveFileResourceUrl(profile_, path, &file_url);
#endif
if (file_url.is_empty())
return;
OpenURL(OpenURLParams(
file_url, Referrer(), CURRENT_TAB, content::PAGE_TRANSITION_TYPED,
false));
}
|
C
|
Chrome
| 0 |
CVE-2015-1213
|
https://www.cvedetails.com/cve/CVE-2015-1213/
|
CWE-119
|
https://github.com/chromium/chromium/commit/faaa2fd0a05f1622d9a8806da118d4f3b602e707
|
faaa2fd0a05f1622d9a8806da118d4f3b602e707
|
[Blink>Media] Allow autoplay muted on Android by default
There was a mistake: autoplay muted is shipped on Android, but it gets
disabled if the chromium embedder doesn't specify a content setting for
the "AllowAutoplay" preference. This CL makes the
AllowAutoplay preference true by default so that it is allowed by
embedders (including AndroidWebView) unless they explicitly
disable it.
Intent to ship:
https://groups.google.com/a/chromium.org/d/msg/blink-dev/Q1cnzNI2GpI/AL_eyUNABgAJ
BUG=689018
Review-Url: https://codereview.chromium.org/2677173002
Cr-Commit-Position: refs/heads/master@{#448423}
|
DEFINE_TRACE_WRAPPERS(HTMLMediaElement) {
visitor->traceWrappers(m_videoTracks);
visitor->traceWrappers(m_audioTracks);
visitor->traceWrappers(m_textTracks);
HTMLElement::traceWrappers(visitor);
}
|
DEFINE_TRACE_WRAPPERS(HTMLMediaElement) {
visitor->traceWrappers(m_videoTracks);
visitor->traceWrappers(m_audioTracks);
visitor->traceWrappers(m_textTracks);
HTMLElement::traceWrappers(visitor);
}
|
C
|
Chrome
| 0 |
CVE-2016-9754
|
https://www.cvedetails.com/cve/CVE-2016-9754/
|
CWE-190
|
https://github.com/torvalds/linux/commit/59643d1535eb220668692a5359de22545af579f6
|
59643d1535eb220668692a5359de22545af579f6
|
ring-buffer: Prevent overflow of size in ring_buffer_resize()
If the size passed to ring_buffer_resize() is greater than MAX_LONG - BUF_PAGE_SIZE
then the DIV_ROUND_UP() will return zero.
Here's the details:
# echo 18014398509481980 > /sys/kernel/debug/tracing/buffer_size_kb
tracing_entries_write() processes this and converts kb to bytes.
18014398509481980 << 10 = 18446744073709547520
and this is passed to ring_buffer_resize() as unsigned long size.
size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
Where DIV_ROUND_UP(a, b) is (a + b - 1)/b
BUF_PAGE_SIZE is 4080 and here
18446744073709547520 + 4080 - 1 = 18446744073709551599
where 18446744073709551599 is still smaller than 2^64
2^64 - 18446744073709551599 = 17
But now 18446744073709551599 / 4080 = 4521260802379792
and size = size * 4080 = 18446744073709551360
This is checked to make sure its still greater than 2 * 4080,
which it is.
Then we convert to the number of buffer pages needed.
nr_page = DIV_ROUND_UP(size, BUF_PAGE_SIZE)
but this time size is 18446744073709551360 and
2^64 - (18446744073709551360 + 4080 - 1) = -3823
Thus it overflows and the resulting number is less than 4080, which makes
3823 / 4080 = 0
and nr_pages is set to this. As we already checked against the minimum that
nr_pages may be, this causes the logic to fail as well, and we crash the
kernel.
There's no reason to have the two DIV_ROUND_UP() (that's just result of
historical code changes), clean up the code and fix this bug.
Cc: [email protected] # 3.5+
Fixes: 83f40318dab00 ("ring-buffer: Make removal of ring buffer pages atomic")
Signed-off-by: Steven Rostedt <[email protected]>
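The arithmetic above can be reproduced in a few lines of standalone userspace code (an editorial demo, not the kernel source; it assumes a 64-bit unsigned long):
```cpp
#include <cstdio>

#define BUF_PAGE_SIZE 4080UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main() {
    // 18014398509481980 KB converted to bytes, as in the message.
    unsigned long size = 18014398509481980UL << 10;   // 18446744073709547520

    size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);         // 4521260802379792, no overflow yet
    size *= BUF_PAGE_SIZE;                            // 18446744073709551360, passes the >= 2*4080 check
    unsigned long nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);  // size + 4079 wraps: 3823 / 4080 = 0

    std::printf("nr_pages = %lu\n", nr_pages);        // prints 0 on an LP64 system
    return 0;
}
```
The fix removes the double round-up so the page count is computed once from the requested size, avoiding the wraparound in the second DIV_ROUND_UP().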
|
static inline bool sched_clock_stable(void)
{
return true;
}
|
static inline bool sched_clock_stable(void)
{
return true;
}
|
C
|
linux
| 0 |
CVE-2017-6903
|
https://www.cvedetails.com/cve/CVE-2017-6903/
|
CWE-269
|
https://github.com/ioquake/ioq3/commit/b173ac05993f634a42be3d3535e1b158de0c3372
|
b173ac05993f634a42be3d3535e1b158de0c3372
|
Merge some file writing extension checks from OpenJK.
Thanks Ensiform.
https://github.com/JACoders/OpenJK/commit/05928a57f9e4aae15a3bd0
https://github.com/JACoders/OpenJK/commit/ef124fd0fc48af164581176
|
void Con_DrawInput (void) {
int y;
if ( clc.state != CA_DISCONNECTED && !(Key_GetCatcher( ) & KEYCATCH_CONSOLE ) ) {
return;
}
y = con.vislines - ( SMALLCHAR_HEIGHT * 2 );
re.SetColor( con.color );
SCR_DrawSmallChar( con.xadjust + 1 * SMALLCHAR_WIDTH, y, ']' );
Field_Draw( &g_consoleField, con.xadjust + 2 * SMALLCHAR_WIDTH, y,
SCREEN_WIDTH - 3 * SMALLCHAR_WIDTH, qtrue, qtrue );
}
|
void Con_DrawInput (void) {
int y;
if ( clc.state != CA_DISCONNECTED && !(Key_GetCatcher( ) & KEYCATCH_CONSOLE ) ) {
return;
}
y = con.vislines - ( SMALLCHAR_HEIGHT * 2 );
re.SetColor( con.color );
SCR_DrawSmallChar( con.xadjust + 1 * SMALLCHAR_WIDTH, y, ']' );
Field_Draw( &g_consoleField, con.xadjust + 2 * SMALLCHAR_WIDTH, y,
SCREEN_WIDTH - 3 * SMALLCHAR_WIDTH, qtrue, qtrue );
}
|
C
|
OpenJK
| 0 |
CVE-2017-12993
|
https://www.cvedetails.com/cve/CVE-2017-12993/
|
CWE-125
|
https://github.com/the-tcpdump-group/tcpdump/commit/b534e304568585707c4a92422aeca25cf908ff02
|
b534e304568585707c4a92422aeca25cf908ff02
|
CVE-2017-12993/Juniper: Add more bounds checks.
This fixes a buffer over-read discovered by Kamil Frankowicz.
Add tests using the capture files supplied by the reporter(s).
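The added checks follow the usual dissector pattern: verify the bytes are inside the captured data before dereferencing them. A generic sketch of that shape (illustrative names, not tcpdump's ND_TCHECK macros):
```cpp
#include <cstdint>

// Returns false instead of reading past the end of the captured packet;
// a caller would then print a truncation marker, like the trunc: label below.
static bool read_be16_checked(const uint8_t* p, const uint8_t* end, uint16_t* out) {
    if (end - p < 2)                                   // the over-read the fix prevents
        return false;
    *out = static_cast<uint16_t>((p[0] << 8) | p[1]);  // big-endian, like EXTRACT_16BITS
    return true;
}
```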
|
juniper_parse_header(netdissect_options *ndo,
const u_char *p, const struct pcap_pkthdr *h, struct juniper_l2info_t *l2info)
{
const struct juniper_cookie_table_t *lp = juniper_cookie_table;
u_int idx, jnx_ext_len, jnx_header_len = 0;
uint8_t tlv_type,tlv_len;
uint32_t control_word;
int tlv_value;
const u_char *tptr;
l2info->header_len = 0;
l2info->cookie_len = 0;
l2info->proto = 0;
l2info->length = h->len;
l2info->caplen = h->caplen;
ND_TCHECK2(p[0], 4);
l2info->flags = p[3];
l2info->direction = p[3]&JUNIPER_BPF_PKT_IN;
if (EXTRACT_24BITS(p) != JUNIPER_MGC_NUMBER) { /* magic number found ? */
ND_PRINT((ndo, "no magic-number found!"));
return 0;
}
if (ndo->ndo_eflag) /* print direction */
ND_PRINT((ndo, "%3s ", tok2str(juniper_direction_values, "---", l2info->direction)));
/* magic number + flags */
jnx_header_len = 4;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\tJuniper PCAP Flags [%s]",
bittok2str(jnx_flag_values, "none", l2info->flags)));
/* extensions present ? - calculate how much bytes to skip */
if ((l2info->flags & JUNIPER_BPF_EXT ) == JUNIPER_BPF_EXT ) {
tptr = p+jnx_header_len;
/* ok to read extension length ? */
ND_TCHECK2(tptr[0], 2);
jnx_ext_len = EXTRACT_16BITS(tptr);
jnx_header_len += 2;
tptr +=2;
/* nail up the total length -
* just in case something goes wrong
* with TLV parsing */
jnx_header_len += jnx_ext_len;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, ", PCAP Extension(s) total length %u", jnx_ext_len));
ND_TCHECK2(tptr[0], jnx_ext_len);
while (jnx_ext_len > JUNIPER_EXT_TLV_OVERHEAD) {
tlv_type = *(tptr++);
tlv_len = *(tptr++);
tlv_value = 0;
/* sanity checks */
if (tlv_type == 0 || tlv_len == 0)
break;
if (tlv_len+JUNIPER_EXT_TLV_OVERHEAD > jnx_ext_len)
goto trunc;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\t %s Extension TLV #%u, length %u, value ",
tok2str(jnx_ext_tlv_values,"Unknown",tlv_type),
tlv_type,
tlv_len));
tlv_value = juniper_read_tlv_value(tptr, tlv_type, tlv_len);
switch (tlv_type) {
case JUNIPER_EXT_TLV_IFD_NAME:
/* FIXME */
break;
case JUNIPER_EXT_TLV_IFD_MEDIATYPE:
case JUNIPER_EXT_TLV_TTP_IFD_MEDIATYPE:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%s (%u)",
tok2str(juniper_ifmt_values, "Unknown", tlv_value),
tlv_value));
}
break;
case JUNIPER_EXT_TLV_IFL_ENCAPS:
case JUNIPER_EXT_TLV_TTP_IFL_ENCAPS:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%s (%u)",
tok2str(juniper_ifle_values, "Unknown", tlv_value),
tlv_value));
}
break;
case JUNIPER_EXT_TLV_IFL_IDX: /* fall through */
case JUNIPER_EXT_TLV_IFL_UNIT:
case JUNIPER_EXT_TLV_IFD_IDX:
default:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%u", tlv_value));
}
break;
}
tptr+=tlv_len;
jnx_ext_len -= tlv_len+JUNIPER_EXT_TLV_OVERHEAD;
}
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\t-----original packet-----\n\t"));
}
if ((l2info->flags & JUNIPER_BPF_NO_L2 ) == JUNIPER_BPF_NO_L2 ) {
if (ndo->ndo_eflag)
ND_PRINT((ndo, "no-L2-hdr, "));
/* there is no link-layer present -
* perform the v4/v6 heuristics
* to figure out what it is
*/
ND_TCHECK2(p[jnx_header_len + 4], 1);
if (ip_heuristic_guess(ndo, p + jnx_header_len + 4,
l2info->length - (jnx_header_len + 4)) == 0)
ND_PRINT((ndo, "no IP-hdr found!"));
l2info->header_len=jnx_header_len+4;
return 0; /* stop parsing the output further */
}
l2info->header_len = jnx_header_len;
p+=l2info->header_len;
l2info->length -= l2info->header_len;
l2info->caplen -= l2info->header_len;
/* search through the cookie table and copy values matching for our PIC type */
ND_TCHECK(p[0]);
while (lp->s != NULL) {
if (lp->pictype == l2info->pictype) {
l2info->cookie_len += lp->cookie_len;
switch (p[0]) {
case LS_COOKIE_ID:
l2info->cookie_type = LS_COOKIE_ID;
l2info->cookie_len += 2;
break;
case AS_COOKIE_ID:
l2info->cookie_type = AS_COOKIE_ID;
l2info->cookie_len = 8;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
#ifdef DLT_JUNIPER_MFR
/* MFR child links don't carry cookies */
if (l2info->pictype == DLT_JUNIPER_MFR &&
(p[0] & MFR_BE_MASK) == MFR_BE_MASK) {
l2info->cookie_len = 0;
}
#endif
l2info->header_len += l2info->cookie_len;
l2info->length -= l2info->cookie_len;
l2info->caplen -= l2info->cookie_len;
if (ndo->ndo_eflag)
ND_PRINT((ndo, "%s-PIC, cookie-len %u",
lp->s,
l2info->cookie_len));
if (l2info->cookie_len > 0) {
ND_TCHECK2(p[0], l2info->cookie_len);
if (ndo->ndo_eflag)
ND_PRINT((ndo, ", cookie 0x"));
for (idx = 0; idx < l2info->cookie_len; idx++) {
l2info->cookie[idx] = p[idx]; /* copy cookie data */
if (ndo->ndo_eflag) ND_PRINT((ndo, "%02x", p[idx]));
}
}
if (ndo->ndo_eflag) ND_PRINT((ndo, ": ")); /* print demarc b/w L2/L3*/
l2info->proto = EXTRACT_16BITS(p+l2info->cookie_len);
break;
}
++lp;
}
p+=l2info->cookie_len;
/* DLT_ specific parsing */
switch(l2info->pictype) {
#ifdef DLT_JUNIPER_MLPPP
case DLT_JUNIPER_MLPPP:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
l2info->bundle = l2info->cookie[1];
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
break;
#endif
#ifdef DLT_JUNIPER_MLFR
case DLT_JUNIPER_MLFR:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
ND_TCHECK2(p[0], 2);
l2info->bundle = l2info->cookie[1];
l2info->proto = EXTRACT_16BITS(p);
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
}
break;
#endif
#ifdef DLT_JUNIPER_MFR
case DLT_JUNIPER_MFR:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
ND_TCHECK2(p[0], 2);
l2info->bundle = l2info->cookie[1];
l2info->proto = EXTRACT_16BITS(p);
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
break;
#endif
#ifdef DLT_JUNIPER_ATM2
case DLT_JUNIPER_ATM2:
ND_TCHECK2(p[0], 4);
/* ATM cell relay control word present ? */
if (l2info->cookie[7] & ATM2_PKT_TYPE_MASK) {
control_word = EXTRACT_32BITS(p);
/* some control word heuristics */
switch(control_word) {
case 0: /* zero control word */
case 0x08000000: /* < JUNOS 7.4 control-word */
case 0x08380000: /* cntl word plus cell length (56) >= JUNOS 7.4*/
l2info->header_len += 4;
break;
default:
break;
}
if (ndo->ndo_eflag)
ND_PRINT((ndo, "control-word 0x%08x ", control_word));
}
break;
#endif
#ifdef DLT_JUNIPER_GGSN
case DLT_JUNIPER_GGSN:
break;
#endif
#ifdef DLT_JUNIPER_ATM1
case DLT_JUNIPER_ATM1:
break;
#endif
#ifdef DLT_JUNIPER_PPP
case DLT_JUNIPER_PPP:
break;
#endif
#ifdef DLT_JUNIPER_CHDLC
case DLT_JUNIPER_CHDLC:
break;
#endif
#ifdef DLT_JUNIPER_ETHER
case DLT_JUNIPER_ETHER:
break;
#endif
#ifdef DLT_JUNIPER_FRELAY
case DLT_JUNIPER_FRELAY:
break;
#endif
default:
ND_PRINT((ndo, "Unknown Juniper DLT_ type %u: ", l2info->pictype));
break;
}
if (ndo->ndo_eflag > 1)
ND_PRINT((ndo, "hlen %u, proto 0x%04x, ", l2info->header_len, l2info->proto));
return 1; /* everything went ok so far. continue parsing */
trunc:
ND_PRINT((ndo, "[|juniper_hdr], length %u", h->len));
return 0;
}
|
juniper_parse_header(netdissect_options *ndo,
const u_char *p, const struct pcap_pkthdr *h, struct juniper_l2info_t *l2info)
{
const struct juniper_cookie_table_t *lp = juniper_cookie_table;
u_int idx, jnx_ext_len, jnx_header_len = 0;
uint8_t tlv_type,tlv_len;
uint32_t control_word;
int tlv_value;
const u_char *tptr;
l2info->header_len = 0;
l2info->cookie_len = 0;
l2info->proto = 0;
l2info->length = h->len;
l2info->caplen = h->caplen;
ND_TCHECK2(p[0], 4);
l2info->flags = p[3];
l2info->direction = p[3]&JUNIPER_BPF_PKT_IN;
if (EXTRACT_24BITS(p) != JUNIPER_MGC_NUMBER) { /* magic number found ? */
ND_PRINT((ndo, "no magic-number found!"));
return 0;
}
if (ndo->ndo_eflag) /* print direction */
ND_PRINT((ndo, "%3s ", tok2str(juniper_direction_values, "---", l2info->direction)));
/* magic number + flags */
jnx_header_len = 4;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\tJuniper PCAP Flags [%s]",
bittok2str(jnx_flag_values, "none", l2info->flags)));
/* extensions present ? - calculate how much bytes to skip */
if ((l2info->flags & JUNIPER_BPF_EXT ) == JUNIPER_BPF_EXT ) {
tptr = p+jnx_header_len;
/* ok to read extension length ? */
ND_TCHECK2(tptr[0], 2);
jnx_ext_len = EXTRACT_16BITS(tptr);
jnx_header_len += 2;
tptr +=2;
/* nail up the total length -
* just in case something goes wrong
* with TLV parsing */
jnx_header_len += jnx_ext_len;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, ", PCAP Extension(s) total length %u", jnx_ext_len));
ND_TCHECK2(tptr[0], jnx_ext_len);
while (jnx_ext_len > JUNIPER_EXT_TLV_OVERHEAD) {
tlv_type = *(tptr++);
tlv_len = *(tptr++);
tlv_value = 0;
/* sanity checks */
if (tlv_type == 0 || tlv_len == 0)
break;
if (tlv_len+JUNIPER_EXT_TLV_OVERHEAD > jnx_ext_len)
goto trunc;
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\t %s Extension TLV #%u, length %u, value ",
tok2str(jnx_ext_tlv_values,"Unknown",tlv_type),
tlv_type,
tlv_len));
tlv_value = juniper_read_tlv_value(tptr, tlv_type, tlv_len);
switch (tlv_type) {
case JUNIPER_EXT_TLV_IFD_NAME:
/* FIXME */
break;
case JUNIPER_EXT_TLV_IFD_MEDIATYPE:
case JUNIPER_EXT_TLV_TTP_IFD_MEDIATYPE:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%s (%u)",
tok2str(juniper_ifmt_values, "Unknown", tlv_value),
tlv_value));
}
break;
case JUNIPER_EXT_TLV_IFL_ENCAPS:
case JUNIPER_EXT_TLV_TTP_IFL_ENCAPS:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%s (%u)",
tok2str(juniper_ifle_values, "Unknown", tlv_value),
tlv_value));
}
break;
case JUNIPER_EXT_TLV_IFL_IDX: /* fall through */
case JUNIPER_EXT_TLV_IFL_UNIT:
case JUNIPER_EXT_TLV_IFD_IDX:
default:
if (tlv_value != -1) {
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "%u", tlv_value));
}
break;
}
tptr+=tlv_len;
jnx_ext_len -= tlv_len+JUNIPER_EXT_TLV_OVERHEAD;
}
if (ndo->ndo_vflag > 1)
ND_PRINT((ndo, "\n\t-----original packet-----\n\t"));
}
if ((l2info->flags & JUNIPER_BPF_NO_L2 ) == JUNIPER_BPF_NO_L2 ) {
if (ndo->ndo_eflag)
ND_PRINT((ndo, "no-L2-hdr, "));
/* there is no link-layer present -
* perform the v4/v6 heuristics
* to figure out what it is
*/
ND_TCHECK2(p[jnx_header_len + 4], 1);
if (ip_heuristic_guess(ndo, p + jnx_header_len + 4,
l2info->length - (jnx_header_len + 4)) == 0)
ND_PRINT((ndo, "no IP-hdr found!"));
l2info->header_len=jnx_header_len+4;
return 0; /* stop parsing the output further */
}
l2info->header_len = jnx_header_len;
p+=l2info->header_len;
l2info->length -= l2info->header_len;
l2info->caplen -= l2info->header_len;
/* search through the cookie table and copy values matching for our PIC type */
while (lp->s != NULL) {
if (lp->pictype == l2info->pictype) {
l2info->cookie_len += lp->cookie_len;
switch (p[0]) {
case LS_COOKIE_ID:
l2info->cookie_type = LS_COOKIE_ID;
l2info->cookie_len += 2;
break;
case AS_COOKIE_ID:
l2info->cookie_type = AS_COOKIE_ID;
l2info->cookie_len = 8;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
#ifdef DLT_JUNIPER_MFR
/* MFR child links don't carry cookies */
if (l2info->pictype == DLT_JUNIPER_MFR &&
(p[0] & MFR_BE_MASK) == MFR_BE_MASK) {
l2info->cookie_len = 0;
}
#endif
l2info->header_len += l2info->cookie_len;
l2info->length -= l2info->cookie_len;
l2info->caplen -= l2info->cookie_len;
if (ndo->ndo_eflag)
ND_PRINT((ndo, "%s-PIC, cookie-len %u",
lp->s,
l2info->cookie_len));
if (l2info->cookie_len > 0) {
ND_TCHECK2(p[0], l2info->cookie_len);
if (ndo->ndo_eflag)
ND_PRINT((ndo, ", cookie 0x"));
for (idx = 0; idx < l2info->cookie_len; idx++) {
l2info->cookie[idx] = p[idx]; /* copy cookie data */
if (ndo->ndo_eflag) ND_PRINT((ndo, "%02x", p[idx]));
}
}
if (ndo->ndo_eflag) ND_PRINT((ndo, ": ")); /* print demarc b/w L2/L3*/
l2info->proto = EXTRACT_16BITS(p+l2info->cookie_len);
break;
}
++lp;
}
p+=l2info->cookie_len;
/* DLT_ specific parsing */
switch(l2info->pictype) {
#ifdef DLT_JUNIPER_MLPPP
case DLT_JUNIPER_MLPPP:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
l2info->bundle = l2info->cookie[1];
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
break;
#endif
#ifdef DLT_JUNIPER_MLFR
case DLT_JUNIPER_MLFR:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
l2info->bundle = l2info->cookie[1];
l2info->proto = EXTRACT_16BITS(p);
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
}
break;
#endif
#ifdef DLT_JUNIPER_MFR
case DLT_JUNIPER_MFR:
switch (l2info->cookie_type) {
case LS_COOKIE_ID:
l2info->bundle = l2info->cookie[1];
l2info->proto = EXTRACT_16BITS(p);
l2info->header_len += 2;
l2info->length -= 2;
l2info->caplen -= 2;
break;
case AS_COOKIE_ID:
l2info->bundle = (EXTRACT_16BITS(&l2info->cookie[6])>>3)&0xfff;
l2info->proto = (l2info->cookie[5])&JUNIPER_LSQ_L3_PROTO_MASK;
break;
default:
l2info->bundle = l2info->cookie[0];
break;
}
break;
#endif
#ifdef DLT_JUNIPER_ATM2
case DLT_JUNIPER_ATM2:
ND_TCHECK2(p[0], 4);
/* ATM cell relay control word present ? */
if (l2info->cookie[7] & ATM2_PKT_TYPE_MASK) {
control_word = EXTRACT_32BITS(p);
/* some control word heuristics */
switch(control_word) {
case 0: /* zero control word */
case 0x08000000: /* < JUNOS 7.4 control-word */
case 0x08380000: /* cntl word plus cell length (56) >= JUNOS 7.4*/
l2info->header_len += 4;
break;
default:
break;
}
if (ndo->ndo_eflag)
ND_PRINT((ndo, "control-word 0x%08x ", control_word));
}
break;
#endif
#ifdef DLT_JUNIPER_GGSN
case DLT_JUNIPER_GGSN:
break;
#endif
#ifdef DLT_JUNIPER_ATM1
case DLT_JUNIPER_ATM1:
break;
#endif
#ifdef DLT_JUNIPER_PPP
case DLT_JUNIPER_PPP:
break;
#endif
#ifdef DLT_JUNIPER_CHDLC
case DLT_JUNIPER_CHDLC:
break;
#endif
#ifdef DLT_JUNIPER_ETHER
case DLT_JUNIPER_ETHER:
break;
#endif
#ifdef DLT_JUNIPER_FRELAY
case DLT_JUNIPER_FRELAY:
break;
#endif
default:
ND_PRINT((ndo, "Unknown Juniper DLT_ type %u: ", l2info->pictype));
break;
}
if (ndo->ndo_eflag > 1)
ND_PRINT((ndo, "hlen %u, proto 0x%04x, ", l2info->header_len, l2info->proto));
return 1; /* everything went ok so far. continue parsing */
trunc:
ND_PRINT((ndo, "[|juniper_hdr], length %u", h->len));
return 0;
}
|
C
|
tcpdump
| 1 |
CVE-2015-5330
|
https://www.cvedetails.com/cve/CVE-2015-5330/
|
CWE-200
|
https://git.samba.org/?p=samba.git;a=commit;h=a118d4220ed85749c07fb43c1229d9e2fecbea6b
|
a118d4220ed85749c07fb43c1229d9e2fecbea6b
| null |
_PUBLIC_ char *strupper_talloc(TALLOC_CTX *ctx, const char *src)
{
return strupper_talloc_n(ctx, src, src?strlen(src):0);
}
|
_PUBLIC_ char *strupper_talloc(TALLOC_CTX *ctx, const char *src)
{
return strupper_talloc_n(ctx, src, src?strlen(src):0);
}
|
C
|
samba
| 0 |
CVE-2011-3896
|
https://www.cvedetails.com/cve/CVE-2011-3896/
|
CWE-119
|
https://github.com/chromium/chromium/commit/5925dff83699508b5e2735afb0297dfb310e159d
|
5925dff83699508b5e2735afb0297dfb310e159d
|
Implement a bubble that appears at the top of the screen when a tab enters
fullscreen mode via webkitRequestFullScreen(), telling the user how to exit
fullscreen.
This is implemented as an NSView rather than an NSWindow because the floating
chrome that appears in presentation mode should overlap the bubble.
Content-initiated fullscreen mode makes use of 'presentation mode' on the Mac:
the mode in which the UI is hidden, accessible by moving the cursor to the top
of the screen. On Snow Leopard, this mode is synonymous with fullscreen mode.
On Lion, however, fullscreen mode does not imply presentation mode: in
non-presentation fullscreen mode, the chrome is permanently shown. It is
possible to switch between presentation mode and fullscreen mode using the
presentation mode UI control.
When a tab initiates fullscreen mode on Lion, we enter presentation mode if not
in presentation mode already. When the user exits fullscreen mode using Chrome
UI (i.e. keyboard shortcuts, menu items, buttons, switching tabs, etc.) we
return the user to the mode they were in before the tab entered fullscreen.
BUG=14471
TEST=Enter fullscreen mode using webkitRequestFullScreen. You should see a bubble pop down from the top of the screen.
Need to test the Lion logic somehow, with no Lion trybots.
BUG=96883
Original review http://codereview.chromium.org/7890056/
TBR=thakis
Review URL: http://codereview.chromium.org/7920024
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@101624 0039d316-1c4b-4281-b951-d872f2087c98
|
void Browser::ReloadInternal(WindowOpenDisposition disposition,
bool ignore_cache) {
TabContents* current_tab = GetSelectedTabContents();
if (current_tab && current_tab->showing_interstitial_page()) {
NavigationEntry* entry = current_tab->controller().GetActiveEntry();
DCHECK(entry); // Should exist if interstitial is showing.
OpenURL(entry->url(), GURL(), disposition, PageTransition::RELOAD);
return;
}
TabContents* tab = GetOrCloneTabForDisposition(disposition);
if (!tab->FocusLocationBarByDefault())
tab->Focus();
if (ignore_cache)
tab->controller().ReloadIgnoringCache(true);
else
tab->controller().Reload(true);
}
|
void Browser::ReloadInternal(WindowOpenDisposition disposition,
bool ignore_cache) {
TabContents* current_tab = GetSelectedTabContents();
if (current_tab && current_tab->showing_interstitial_page()) {
NavigationEntry* entry = current_tab->controller().GetActiveEntry();
DCHECK(entry); // Should exist if interstitial is showing.
OpenURL(entry->url(), GURL(), disposition, PageTransition::RELOAD);
return;
}
TabContents* tab = GetOrCloneTabForDisposition(disposition);
if (!tab->FocusLocationBarByDefault())
tab->Focus();
if (ignore_cache)
tab->controller().ReloadIgnoringCache(true);
else
tab->controller().Reload(true);
}
|
C
|
Chrome
| 0 |
CVE-2013-7271
|
https://www.cvedetails.com/cve/CVE-2013-7271/
|
CWE-20
|
https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
|
f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
|
net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <[email protected]>
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
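A compact sketch of the calling convention the message establishes (a standalone illustration, not the kernel code): the core zeroes msg_namelen and nulls msg_name when the caller passed no address, and a handler that does return an address fills both fields itself.
```cpp
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>

// Simplified stand-in for struct msghdr, for illustration only.
struct msghdr_sketch {
    void* msg_name;
    int   msg_namelen;
};

// Core side: run before invoking the protocol's recvmsg handler.
static void prepare_recvmsg(msghdr_sketch* msg, void* uaddr) {
    msg->msg_name    = uaddr;   // NULL when recvfrom() was given no address
    msg->msg_namelen = 0;       // handlers must set the real length themselves
}

// Handler side: only fills the name when one was requested.
static void handler_store_name(msghdr_sketch* msg, const sockaddr_in* src) {
    if (!msg->msg_name)
        return;                           // plain read(): nothing to report
    std::memcpy(msg->msg_name, src, sizeof(*src));
    msg->msg_namelen = sizeof(*src);      // <= sizeof(struct sockaddr_storage)
}
```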
|
static void sco_conn_defer_accept(struct hci_conn *conn, u16 setting)
{
struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p", conn);
conn->state = BT_CONFIG;
if (!lmp_esco_capable(hdev)) {
struct hci_cp_accept_conn_req cp;
bacpy(&cp.bdaddr, &conn->dst);
cp.role = 0x00; /* Ignored */
hci_send_cmd(hdev, HCI_OP_ACCEPT_CONN_REQ, sizeof(cp), &cp);
} else {
struct hci_cp_accept_sync_conn_req cp;
bacpy(&cp.bdaddr, &conn->dst);
cp.pkt_type = cpu_to_le16(conn->pkt_type);
cp.tx_bandwidth = __constant_cpu_to_le32(0x00001f40);
cp.rx_bandwidth = __constant_cpu_to_le32(0x00001f40);
cp.content_format = cpu_to_le16(setting);
switch (setting & SCO_AIRMODE_MASK) {
case SCO_AIRMODE_TRANSP:
if (conn->pkt_type & ESCO_2EV3)
cp.max_latency = __constant_cpu_to_le16(0x0008);
else
cp.max_latency = __constant_cpu_to_le16(0x000D);
cp.retrans_effort = 0x02;
break;
case SCO_AIRMODE_CVSD:
cp.max_latency = __constant_cpu_to_le16(0xffff);
cp.retrans_effort = 0xff;
break;
}
hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ,
sizeof(cp), &cp);
}
}
|
static void sco_conn_defer_accept(struct hci_conn *conn, u16 setting)
{
struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p", conn);
conn->state = BT_CONFIG;
if (!lmp_esco_capable(hdev)) {
struct hci_cp_accept_conn_req cp;
bacpy(&cp.bdaddr, &conn->dst);
cp.role = 0x00; /* Ignored */
hci_send_cmd(hdev, HCI_OP_ACCEPT_CONN_REQ, sizeof(cp), &cp);
} else {
struct hci_cp_accept_sync_conn_req cp;
bacpy(&cp.bdaddr, &conn->dst);
cp.pkt_type = cpu_to_le16(conn->pkt_type);
cp.tx_bandwidth = __constant_cpu_to_le32(0x00001f40);
cp.rx_bandwidth = __constant_cpu_to_le32(0x00001f40);
cp.content_format = cpu_to_le16(setting);
switch (setting & SCO_AIRMODE_MASK) {
case SCO_AIRMODE_TRANSP:
if (conn->pkt_type & ESCO_2EV3)
cp.max_latency = __constant_cpu_to_le16(0x0008);
else
cp.max_latency = __constant_cpu_to_le16(0x000D);
cp.retrans_effort = 0x02;
break;
case SCO_AIRMODE_CVSD:
cp.max_latency = __constant_cpu_to_le16(0xffff);
cp.retrans_effort = 0xff;
break;
}
hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ,
sizeof(cp), &cp);
}
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/2bfb2b8299e2fb6a432390a93a99a85fed1d29c9
|
2bfb2b8299e2fb6a432390a93a99a85fed1d29c9
|
Fix erroneous semicolon causing build failure: if statement has empty body [-Werror,-Wempty-body]
https://bugs.webkit.org/show_bug.cgi?id=108241
Patch by Kiran Muppala <[email protected]> on 2013-01-29
Reviewed by Anders Carlsson.
* UIProcess/WebProcessProxy.cpp:
(WebKit::WebProcessProxy::addExistingWebPage): Remove erroneous
semicolon following the if condition.
git-svn-id: svn://svn.chromium.org/blink/trunk@141180 bbb929c8-8fbe-4397-9dbb-9b2b20218538
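The bug class is easy to reproduce: a stray semicolon terminates the if, so the braced block below it runs unconditionally, which is exactly what -Wempty-body flags. A minimal standalone example:
```cpp
#include <cstdio>

int main() {
    bool attached = false;

    if (attached);                 // BUG: the ';' is the entire if body
    {                              // this block is unconditional
        std::printf("runs even though attached is false\n");
    }

    if (attached) {                // fixed form: the block is the if body
        std::printf("only runs when attached is true\n");
    }
    return 0;
}
```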
|
void WebProcessProxy::didReceiveMessage(CoreIPC::Connection* connection, CoreIPC::MessageID messageID, CoreIPC::MessageDecoder& decoder)
{
if (m_messageReceiverMap.dispatchMessage(connection, messageID, decoder))
return;
if (m_context->dispatchMessage(connection, messageID, decoder))
return;
if (decoder.messageReceiverName() == Messages::WebProcessProxy::messageReceiverName()) {
didReceiveWebProcessProxyMessage(connection, messageID, decoder);
return;
}
#if ENABLE(CUSTOM_PROTOCOLS)
if (decoder.messageReceiverName() == Messages::CustomProtocolManagerProxy::messageReceiverName()) {
#if ENABLE(NETWORK_PROCESS)
ASSERT(!context()->usesNetworkProcess());
#endif
m_customProtocolManagerProxy.didReceiveMessage(connection, messageID, decoder);
return;
}
#endif
uint64_t pageID = decoder.destinationID();
if (!pageID)
return;
WebPageProxy* pageProxy = webPage(pageID);
if (!pageProxy)
return;
pageProxy->didReceiveMessage(connection, messageID, decoder);
}
|
void WebProcessProxy::didReceiveMessage(CoreIPC::Connection* connection, CoreIPC::MessageID messageID, CoreIPC::MessageDecoder& decoder)
{
if (m_messageReceiverMap.dispatchMessage(connection, messageID, decoder))
return;
if (m_context->dispatchMessage(connection, messageID, decoder))
return;
if (decoder.messageReceiverName() == Messages::WebProcessProxy::messageReceiverName()) {
didReceiveWebProcessProxyMessage(connection, messageID, decoder);
return;
}
#if ENABLE(CUSTOM_PROTOCOLS)
if (decoder.messageReceiverName() == Messages::CustomProtocolManagerProxy::messageReceiverName()) {
#if ENABLE(NETWORK_PROCESS)
ASSERT(!context()->usesNetworkProcess());
#endif
m_customProtocolManagerProxy.didReceiveMessage(connection, messageID, decoder);
return;
}
#endif
uint64_t pageID = decoder.destinationID();
if (!pageID)
return;
WebPageProxy* pageProxy = webPage(pageID);
if (!pageProxy)
return;
pageProxy->didReceiveMessage(connection, messageID, decoder);
}
|
C
|
Chrome
| 0 |
CVE-2016-5400
|
https://www.cvedetails.com/cve/CVE-2016-5400/
|
CWE-119
|
https://github.com/torvalds/linux/commit/aa93d1fee85c890a34f2510a310e55ee76a27848
|
aa93d1fee85c890a34f2510a310e55ee76a27848
|
media: fix airspy usb probe error path
Fix a memory leak on probe error of the airspy usb device driver.
The problem is triggered when more than 64 usb devices register with
v4l2 of type VFL_TYPE_SDR or VFL_TYPE_SUBDEV.
The memory leak is caused by the probe function of the airspy driver
mishandling errors and not freeing the corresponding control structures
when an error occurs registering the device to the v4l2 core.
A badusb device can emulate 64 of these devices, and then through
continual emulated connect/disconnect of the 65th device, cause the
kernel to run out of RAM and crash the kernel, thus causing a local DOS
vulnerability.
Fixes CVE-2016-5400
Signed-off-by: James Patrick-Evans <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: [email protected] # 3.17+
Signed-off-by: Linus Torvalds <[email protected]>
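The shape of the fix is the standard probe error-path cleanup: anything allocated before the failing registration must be released on that path. A generic illustration with hypothetical names, not the driver's actual code:
```cpp
#include <cstdlib>

struct control_block { int placeholder; };

static int register_with_core() { return -1; }   // stand-in for the failing v4l2 call

static int probe_sketch() {
    control_block* ctrl =
        static_cast<control_block*>(std::calloc(1, sizeof(control_block)));
    if (!ctrl)
        return -1;

    if (register_with_core() != 0) {   // e.g. more than 64 devices already registered
        std::free(ctrl);               // the fix: release instead of leaking on error
        return -1;
    }
    return 0;                          // success: ctrl is released later on disconnect
}
```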
|
static int airspy_ctrl_msg(struct airspy *s, u8 request, u16 value, u16 index,
u8 *data, u16 size)
{
int ret;
unsigned int pipe;
u8 requesttype;
switch (request) {
case CMD_RECEIVER_MODE:
case CMD_SET_FREQ:
pipe = usb_sndctrlpipe(s->udev, 0);
requesttype = (USB_TYPE_VENDOR | USB_DIR_OUT);
break;
case CMD_BOARD_ID_READ:
case CMD_VERSION_STRING_READ:
case CMD_BOARD_PARTID_SERIALNO_READ:
case CMD_SET_LNA_GAIN:
case CMD_SET_MIXER_GAIN:
case CMD_SET_VGA_GAIN:
case CMD_SET_LNA_AGC:
case CMD_SET_MIXER_AGC:
pipe = usb_rcvctrlpipe(s->udev, 0);
requesttype = (USB_TYPE_VENDOR | USB_DIR_IN);
break;
default:
dev_err(s->dev, "Unknown command %02x\n", request);
ret = -EINVAL;
goto err;
}
/* write request */
if (!(requesttype & USB_DIR_IN))
memcpy(s->buf, data, size);
ret = usb_control_msg(s->udev, pipe, request, requesttype, value,
index, s->buf, size, 1000);
airspy_dbg_usb_control_msg(s->dev, request, requesttype, value,
index, s->buf, size);
if (ret < 0) {
dev_err(s->dev, "usb_control_msg() failed %d request %02x\n",
ret, request);
goto err;
}
/* read request */
if (requesttype & USB_DIR_IN)
memcpy(data, s->buf, size);
return 0;
err:
return ret;
}
|
static int airspy_ctrl_msg(struct airspy *s, u8 request, u16 value, u16 index,
u8 *data, u16 size)
{
int ret;
unsigned int pipe;
u8 requesttype;
switch (request) {
case CMD_RECEIVER_MODE:
case CMD_SET_FREQ:
pipe = usb_sndctrlpipe(s->udev, 0);
requesttype = (USB_TYPE_VENDOR | USB_DIR_OUT);
break;
case CMD_BOARD_ID_READ:
case CMD_VERSION_STRING_READ:
case CMD_BOARD_PARTID_SERIALNO_READ:
case CMD_SET_LNA_GAIN:
case CMD_SET_MIXER_GAIN:
case CMD_SET_VGA_GAIN:
case CMD_SET_LNA_AGC:
case CMD_SET_MIXER_AGC:
pipe = usb_rcvctrlpipe(s->udev, 0);
requesttype = (USB_TYPE_VENDOR | USB_DIR_IN);
break;
default:
dev_err(s->dev, "Unknown command %02x\n", request);
ret = -EINVAL;
goto err;
}
/* write request */
if (!(requesttype & USB_DIR_IN))
memcpy(s->buf, data, size);
ret = usb_control_msg(s->udev, pipe, request, requesttype, value,
index, s->buf, size, 1000);
airspy_dbg_usb_control_msg(s->dev, request, requesttype, value,
index, s->buf, size);
if (ret < 0) {
dev_err(s->dev, "usb_control_msg() failed %d request %02x\n",
ret, request);
goto err;
}
/* read request */
if (requesttype & USB_DIR_IN)
memcpy(data, s->buf, size);
return 0;
err:
return ret;
}
|
C
|
linux
| 0 |
CVE-2019-5787
|
https://www.cvedetails.com/cve/CVE-2019-5787/
|
CWE-416
|
https://github.com/chromium/chromium/commit/6a7063ae61cf031630b48bdcdb09863ffc199962
|
6a7063ae61cf031630b48bdcdb09863ffc199962
|
Clean up CanvasResourceDispatcher on finalizer
We may have pending mojo messages after GC, so we want to drop the
dispatcher as soon as possible.
Bug: 929757,913964
Change-Id: I5789bcbb55aada4a74c67a28758f07686f8911c0
Reviewed-on: https://chromium-review.googlesource.com/c/1489175
Reviewed-by: Ken Rockot <[email protected]>
Commit-Queue: Ken Rockot <[email protected]>
Commit-Queue: Fernando Serboncini <[email protected]>
Auto-Submit: Fernando Serboncini <[email protected]>
Cr-Commit-Position: refs/heads/master@{#635833}
|
CanvasResourceProvider* OffscreenCanvas::GetOrCreateResourceProvider() {
if (!ResourceProvider()) {
bool can_use_gpu = false;
CanvasResourceProvider::PresentationMode presentation_mode =
CanvasResourceProvider::kDefaultPresentationMode;
if (Is2d()) {
if (RuntimeEnabledFeatures::Canvas2dImageChromiumEnabled()) {
presentation_mode =
CanvasResourceProvider::kAllowImageChromiumPresentationMode;
}
if (SharedGpuContext::IsGpuCompositingEnabled() &&
RuntimeEnabledFeatures::Accelerated2dCanvasEnabled()) {
can_use_gpu = true;
}
} else if (Is3d()) {
if (RuntimeEnabledFeatures::WebGLImageChromiumEnabled()) {
presentation_mode =
CanvasResourceProvider::kAllowImageChromiumPresentationMode;
}
can_use_gpu = SharedGpuContext::IsGpuCompositingEnabled();
}
IntSize surface_size(width(), height());
CanvasResourceProvider::ResourceUsage usage;
if (can_use_gpu) {
if (HasPlaceholderCanvas())
usage = CanvasResourceProvider::kAcceleratedCompositedResourceUsage;
else
usage = CanvasResourceProvider::kAcceleratedResourceUsage;
} else {
if (HasPlaceholderCanvas())
usage = CanvasResourceProvider::kSoftwareCompositedResourceUsage;
else
usage = CanvasResourceProvider::kSoftwareResourceUsage;
}
base::WeakPtr<CanvasResourceDispatcher> dispatcher_weakptr =
HasPlaceholderCanvas() ? GetOrCreateResourceDispatcher()->GetWeakPtr()
: nullptr;
ReplaceResourceProvider(CanvasResourceProvider::Create(
surface_size, usage, SharedGpuContext::ContextProviderWrapper(), 0,
context_->ColorParams(), presentation_mode,
std::move(dispatcher_weakptr), false /* is_origin_top_left */));
CHECK(!ResourceProvider() || !HasPlaceholderCanvas() ||
ResourceProvider()->SupportsDirectCompositing());
if (ResourceProvider() && ResourceProvider()->IsValid()) {
ResourceProvider()->Clear();
ResourceProvider()->Canvas()->save();
if (needs_matrix_clip_restore_) {
needs_matrix_clip_restore_ = false;
context_->RestoreCanvasMatrixClipStack(ResourceProvider()->Canvas());
}
}
}
return ResourceProvider();
}
|
CanvasResourceProvider* OffscreenCanvas::GetOrCreateResourceProvider() {
if (!ResourceProvider()) {
bool can_use_gpu = false;
CanvasResourceProvider::PresentationMode presentation_mode =
CanvasResourceProvider::kDefaultPresentationMode;
if (Is2d()) {
if (RuntimeEnabledFeatures::Canvas2dImageChromiumEnabled()) {
presentation_mode =
CanvasResourceProvider::kAllowImageChromiumPresentationMode;
}
if (SharedGpuContext::IsGpuCompositingEnabled() &&
RuntimeEnabledFeatures::Accelerated2dCanvasEnabled()) {
can_use_gpu = true;
}
} else if (Is3d()) {
if (RuntimeEnabledFeatures::WebGLImageChromiumEnabled()) {
presentation_mode =
CanvasResourceProvider::kAllowImageChromiumPresentationMode;
}
can_use_gpu = SharedGpuContext::IsGpuCompositingEnabled();
}
IntSize surface_size(width(), height());
CanvasResourceProvider::ResourceUsage usage;
if (can_use_gpu) {
if (HasPlaceholderCanvas())
usage = CanvasResourceProvider::kAcceleratedCompositedResourceUsage;
else
usage = CanvasResourceProvider::kAcceleratedResourceUsage;
} else {
if (HasPlaceholderCanvas())
usage = CanvasResourceProvider::kSoftwareCompositedResourceUsage;
else
usage = CanvasResourceProvider::kSoftwareResourceUsage;
}
base::WeakPtr<CanvasResourceDispatcher> dispatcher_weakptr =
HasPlaceholderCanvas() ? GetOrCreateResourceDispatcher()->GetWeakPtr()
: nullptr;
ReplaceResourceProvider(CanvasResourceProvider::Create(
surface_size, usage, SharedGpuContext::ContextProviderWrapper(), 0,
context_->ColorParams(), presentation_mode,
std::move(dispatcher_weakptr), false /* is_origin_top_left */));
CHECK(!ResourceProvider() || !HasPlaceholderCanvas() ||
ResourceProvider()->SupportsDirectCompositing());
if (ResourceProvider() && ResourceProvider()->IsValid()) {
ResourceProvider()->Clear();
ResourceProvider()->Canvas()->save();
if (needs_matrix_clip_restore_) {
needs_matrix_clip_restore_ = false;
context_->RestoreCanvasMatrixClipStack(ResourceProvider()->Canvas());
}
}
}
return ResourceProvider();
}
|
C
|
Chrome
| 0 |
CVE-2013-2867
|
https://www.cvedetails.com/cve/CVE-2013-2867/
| null |
https://github.com/chromium/chromium/commit/d358f57009b85fb7440208afa5ba87636b491889
|
d358f57009b85fb7440208afa5ba87636b491889
|
Refactor to support default Bluetooth pairing delegate
In order to support a default pairing delegate we need to move the agent
service provider delegate implementation from BluetoothDevice to
BluetoothAdapter while retaining the existing API.
BUG=338492
TEST=device_unittests, unit_tests, browser_tests
Review URL: https://codereview.chromium.org/148293003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@252216 0039d316-1c4b-4281-b951-d872f2087c98
|
void BluetoothAdapterChromeOS::SetDefaultAdapterName() {
std::string board = base::SysInfo::GetLsbReleaseBoard();
std::string alias;
if (board.substr(0, 6) == "stumpy") {
alias = "Chromebox";
} else if (board.substr(0, 4) == "link") {
alias = "Chromebook Pixel";
} else {
alias = "Chromebook";
}
SetName(alias, base::Bind(&base::DoNothing), base::Bind(&base::DoNothing));
}
|
void BluetoothAdapterChromeOS::SetDefaultAdapterName() {
std::string board = base::SysInfo::GetLsbReleaseBoard();
std::string alias;
if (board.substr(0, 6) == "stumpy") {
alias = "Chromebox";
} else if (board.substr(0, 4) == "link") {
alias = "Chromebook Pixel";
} else {
alias = "Chromebook";
}
SetName(alias, base::Bind(&base::DoNothing), base::Bind(&base::DoNothing));
}
|
C
|
Chrome
| 0 |
CVE-2018-6040
|
https://www.cvedetails.com/cve/CVE-2018-6040/
|
CWE-732
|
https://github.com/chromium/chromium/commit/209f225b2d51334eaf69ffdf002e25eaa1e0d448
|
209f225b2d51334eaf69ffdf002e25eaa1e0d448
|
Fixed bug where PlzNavigate CSP in a iframe did not get the inherited CSP
When inheriting the CSP from a parent document to a local-scheme CSP,
it does not always get propagated to the PlzNavigate CSP. This means
that PlzNavigate CSP checks (like `frame-src`) would be run against
a blank policy instead of the proper inherited policy.
Bug: 778658
Change-Id: I61bb0d432e1cea52f199e855624cb7b3078f56a9
Reviewed-on: https://chromium-review.googlesource.com/765969
Commit-Queue: Andy Paicu <[email protected]>
Reviewed-by: Mike West <[email protected]>
Cr-Commit-Position: refs/heads/master@{#518245}
|
scoped_refptr<WebTaskRunner> Document::GetTaskRunner(TaskType type) {
DCHECK(IsMainThread());
if (ContextDocument() && ContextDocument()->GetFrame())
return ContextDocument()->GetFrame()->GetTaskRunner(type);
return Platform::Current()->CurrentThread()->GetWebTaskRunner();
}
|
scoped_refptr<WebTaskRunner> Document::GetTaskRunner(TaskType type) {
DCHECK(IsMainThread());
if (ContextDocument() && ContextDocument()->GetFrame())
return ContextDocument()->GetFrame()->GetTaskRunner(type);
return Platform::Current()->CurrentThread()->GetWebTaskRunner();
}
|
C
|
Chrome
| 0 |
CVE-2016-10066
|
https://www.cvedetails.com/cve/CVE-2016-10066/
|
CWE-119
|
https://github.com/ImageMagick/ImageMagick/commit/f6e9d0d9955e85bdd7540b251cd50d598dacc5e6
|
f6e9d0d9955e85bdd7540b251cd50d598dacc5e6
| null |
static MagickBooleanType WriteTIFFImage(const ImageInfo *image_info,
Image *image)
{
#if !defined(TIFFDefaultStripSize)
#define TIFFDefaultStripSize(tiff,request) (8192UL/TIFFScanlineSize(tiff))
#endif
const char
*mode,
*option;
CompressionType
compression;
EndianType
endian_type;
MagickBooleanType
debug,
status;
MagickOffsetType
scene;
QuantumInfo
*quantum_info;
QuantumType
quantum_type;
register ssize_t
i;
size_t
length;
ssize_t
y;
TIFF
*tiff;
TIFFErrorHandler
error_handler,
warning_handler;
TIFFInfo
tiff_info;
uint16
bits_per_sample,
compress_tag,
endian,
photometric;
uint32
rows_per_strip;
unsigned char
*pixels;
/*
Open TIFF file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickSignature);
assert(image != (Image *) NULL);
assert(image->signature == MagickSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename);
status=OpenBlob(image_info,image,WriteBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
(void) MagickSetThreadValue(tiff_exception,&image->exception);
error_handler=TIFFSetErrorHandler((TIFFErrorHandler) TIFFErrors);
warning_handler=TIFFSetWarningHandler((TIFFErrorHandler) TIFFWarnings);
endian_type=UndefinedEndian;
option=GetImageOption(image_info,"tiff:endian");
if (option != (const char *) NULL)
{
if (LocaleNCompare(option,"msb",3) == 0)
endian_type=MSBEndian;
if (LocaleNCompare(option,"lsb",3) == 0)
endian_type=LSBEndian;;
}
switch (endian_type)
{
case LSBEndian: mode="wl"; break;
case MSBEndian: mode="wb"; break;
default: mode="w"; break;
}
#if defined(TIFF_VERSION_BIG)
if (LocaleCompare(image_info->magick,"TIFF64") == 0)
switch (endian_type)
{
case LSBEndian: mode="wl8"; break;
case MSBEndian: mode="wb8"; break;
default: mode="w8"; break;
}
#endif
tiff=TIFFClientOpen(image->filename,mode,(thandle_t) image,TIFFReadBlob,
TIFFWriteBlob,TIFFSeekBlob,TIFFCloseBlob,TIFFGetBlobSize,TIFFMapBlob,
TIFFUnmapBlob);
if (tiff == (TIFF *) NULL)
{
(void) TIFFSetWarningHandler(warning_handler);
(void) TIFFSetErrorHandler(error_handler);
return(MagickFalse);
}
scene=0;
debug=IsEventLogging();
(void) debug;
do
{
/*
Initialize TIFF fields.
*/
if ((image_info->type != UndefinedType) &&
(image_info->type != OptimizeType))
(void) SetImageType(image,image_info->type);
compression=UndefinedCompression;
if (image->compression != JPEGCompression)
compression=image->compression;
if (image_info->compression != UndefinedCompression)
compression=image_info->compression;
switch (compression)
{
case FaxCompression:
case Group4Compression:
{
(void) SetImageType(image,BilevelType);
break;
}
case JPEGCompression:
{
(void) SetImageStorageClass(image,DirectClass);
(void) SetImageDepth(image,8);
break;
}
default:
break;
}
quantum_info=AcquireQuantumInfo(image_info,image);
if (quantum_info == (QuantumInfo *) NULL)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
if ((image->storage_class != PseudoClass) && (image->depth >= 32) &&
(quantum_info->format == UndefinedQuantumFormat) &&
(IsHighDynamicRangeImage(image,&image->exception) != MagickFalse))
{
status=SetQuantumFormat(image,quantum_info,FloatingPointQuantumFormat);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
}
if ((LocaleCompare(image_info->magick,"PTIF") == 0) &&
(GetPreviousImageInList(image) != (Image *) NULL))
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_REDUCEDIMAGE);
if ((image->columns != (uint32) image->columns) ||
(image->rows != (uint32) image->rows))
ThrowWriterException(ImageError,"WidthOrHeightExceedsLimit");
(void) TIFFSetField(tiff,TIFFTAG_IMAGELENGTH,(uint32) image->rows);
(void) TIFFSetField(tiff,TIFFTAG_IMAGEWIDTH,(uint32) image->columns);
switch (compression)
{
case FaxCompression:
{
compress_tag=COMPRESSION_CCITTFAX3;
SetQuantumMinIsWhite(quantum_info,MagickTrue);
break;
}
case Group4Compression:
{
compress_tag=COMPRESSION_CCITTFAX4;
SetQuantumMinIsWhite(quantum_info,MagickTrue);
break;
}
#if defined(COMPRESSION_JBIG)
case JBIG1Compression:
{
compress_tag=COMPRESSION_JBIG;
break;
}
#endif
case JPEGCompression:
{
compress_tag=COMPRESSION_JPEG;
break;
}
#if defined(COMPRESSION_LZMA)
case LZMACompression:
{
compress_tag=COMPRESSION_LZMA;
break;
}
#endif
case LZWCompression:
{
compress_tag=COMPRESSION_LZW;
break;
}
case RLECompression:
{
compress_tag=COMPRESSION_PACKBITS;
break;
}
case ZipCompression:
{
compress_tag=COMPRESSION_ADOBE_DEFLATE;
break;
}
case NoCompression:
default:
{
compress_tag=COMPRESSION_NONE;
break;
}
}
#if defined(MAGICKCORE_HAVE_TIFFISCODECCONFIGURED) || (TIFFLIB_VERSION > 20040919)
if ((compress_tag != COMPRESSION_NONE) &&
(TIFFIsCODECConfigured(compress_tag) == 0))
{
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"CompressionNotSupported","`%s'",CommandOptionToMnemonic(
MagickCompressOptions,(ssize_t) compression));
compress_tag=COMPRESSION_NONE;
compression=NoCompression;
}
#else
switch (compress_tag)
{
#if defined(CCITT_SUPPORT)
case COMPRESSION_CCITTFAX3:
case COMPRESSION_CCITTFAX4:
#endif
#if defined(YCBCR_SUPPORT) && defined(JPEG_SUPPORT)
case COMPRESSION_JPEG:
#endif
#if defined(LZMA_SUPPORT) && defined(COMPRESSION_LZMA)
case COMPRESSION_LZMA:
#endif
#if defined(LZW_SUPPORT)
case COMPRESSION_LZW:
#endif
#if defined(PACKBITS_SUPPORT)
case COMPRESSION_PACKBITS:
#endif
#if defined(ZIP_SUPPORT)
case COMPRESSION_ADOBE_DEFLATE:
#endif
case COMPRESSION_NONE:
break;
default:
{
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"CompressionNotSupported","`%s'",CommandOptionToMnemonic(
MagickCompressOptions,(ssize_t) compression));
compress_tag=COMPRESSION_NONE;
compression=NoCompression;
break;
}
}
#endif
if (image->colorspace == CMYKColorspace)
{
photometric=PHOTOMETRIC_SEPARATED;
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,4);
(void) TIFFSetField(tiff,TIFFTAG_INKSET,INKSET_CMYK);
}
else
{
/*
Full color TIFF raster.
*/
if (image->colorspace == LabColorspace)
{
photometric=PHOTOMETRIC_CIELAB;
EncodeLabImage(image,&image->exception);
}
else
if (image->colorspace == YCbCrColorspace)
{
photometric=PHOTOMETRIC_YCBCR;
(void) TIFFSetField(tiff,TIFFTAG_YCBCRSUBSAMPLING,1,1);
(void) SetImageStorageClass(image,DirectClass);
(void) SetImageDepth(image,8);
}
else
photometric=PHOTOMETRIC_RGB;
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,3);
if ((image_info->type != TrueColorType) &&
(image_info->type != TrueColorMatteType))
{
if ((image_info->type != PaletteType) &&
(IsGrayImage(image,&image->exception) != MagickFalse))
{
photometric=(uint16) (quantum_info->min_is_white !=
MagickFalse ? PHOTOMETRIC_MINISWHITE :
PHOTOMETRIC_MINISBLACK);
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,1);
if ((image_info->depth == 0) && (image->matte == MagickFalse) &&
(IsMonochromeImage(image,&image->exception) != MagickFalse))
{
status=SetQuantumDepth(image,quantum_info,1);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,
"MemoryAllocationFailed");
}
}
else
if (image->storage_class == PseudoClass)
{
size_t
depth;
/*
Colormapped TIFF raster.
*/
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,1);
photometric=PHOTOMETRIC_PALETTE;
depth=1;
while ((GetQuantumRange(depth)+1) < image->colors)
depth<<=1;
status=SetQuantumDepth(image,quantum_info,depth);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,
"MemoryAllocationFailed");
}
}
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_FILLORDER,&endian);
if ((compress_tag == COMPRESSION_CCITTFAX3) &&
(photometric != PHOTOMETRIC_MINISWHITE))
{
compress_tag=COMPRESSION_NONE;
endian=FILLORDER_MSB2LSB;
}
else
if ((compress_tag == COMPRESSION_CCITTFAX4) &&
(photometric != PHOTOMETRIC_MINISWHITE))
{
compress_tag=COMPRESSION_NONE;
endian=FILLORDER_MSB2LSB;
}
option=GetImageOption(image_info,"tiff:fill-order");
if (option != (const char *) NULL)
{
if (LocaleNCompare(option,"msb",3) == 0)
endian=FILLORDER_MSB2LSB;
if (LocaleNCompare(option,"lsb",3) == 0)
endian=FILLORDER_LSB2MSB;
}
(void) TIFFSetField(tiff,TIFFTAG_COMPRESSION,compress_tag);
(void) TIFFSetField(tiff,TIFFTAG_FILLORDER,endian);
(void) TIFFSetField(tiff,TIFFTAG_BITSPERSAMPLE,quantum_info->depth);
if (image->matte != MagickFalse)
{
uint16
extra_samples,
sample_info[1],
samples_per_pixel;
/*
TIFF has a matte channel.
*/
extra_samples=1;
sample_info[0]=EXTRASAMPLE_UNASSALPHA;
option=GetImageOption(image_info,"tiff:alpha");
if (option != (const char *) NULL)
{
if (LocaleCompare(option,"associated") == 0)
sample_info[0]=EXTRASAMPLE_ASSOCALPHA;
else
if (LocaleCompare(option,"unspecified") == 0)
sample_info[0]=EXTRASAMPLE_UNSPECIFIED;
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_SAMPLESPERPIXEL,
&samples_per_pixel);
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,samples_per_pixel+1);
(void) TIFFSetField(tiff,TIFFTAG_EXTRASAMPLES,extra_samples,
&sample_info);
if (sample_info[0] == EXTRASAMPLE_ASSOCALPHA)
SetQuantumAlphaType(quantum_info,AssociatedQuantumAlpha);
}
(void) TIFFSetField(tiff,TIFFTAG_PHOTOMETRIC,photometric);
switch (quantum_info->format)
{
case FloatingPointQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_IEEEFP);
(void) TIFFSetField(tiff,TIFFTAG_SMINSAMPLEVALUE,quantum_info->minimum);
(void) TIFFSetField(tiff,TIFFTAG_SMAXSAMPLEVALUE,quantum_info->maximum);
break;
}
case SignedQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_INT);
break;
}
case UnsignedQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_UINT);
break;
}
default:
break;
}
(void) TIFFSetField(tiff,TIFFTAG_ORIENTATION,ORIENTATION_TOPLEFT);
(void) TIFFSetField(tiff,TIFFTAG_PLANARCONFIG,PLANARCONFIG_CONTIG);
if (photometric == PHOTOMETRIC_RGB)
if ((image_info->interlace == PlaneInterlace) ||
(image_info->interlace == PartitionInterlace))
(void) TIFFSetField(tiff,TIFFTAG_PLANARCONFIG,PLANARCONFIG_SEPARATE);
rows_per_strip=TIFFDefaultStripSize(tiff,0);
option=GetImageOption(image_info,"tiff:rows-per-strip");
if (option != (const char *) NULL)
rows_per_strip=(size_t) strtol(option,(char **) NULL,10);
switch (compress_tag)
{
case COMPRESSION_JPEG:
{
#if defined(JPEG_SUPPORT)
const char
*sampling_factor;
GeometryInfo
geometry_info;
MagickStatusType
flags;
rows_per_strip+=(16-(rows_per_strip % 16));
if (image_info->quality != UndefinedCompressionQuality)
(void) TIFFSetField(tiff,TIFFTAG_JPEGQUALITY,image_info->quality);
(void) TIFFSetField(tiff,TIFFTAG_JPEGCOLORMODE,JPEGCOLORMODE_RAW);
if (IssRGBCompatibleColorspace(image->colorspace) != MagickFalse)
{
const char
*value;
(void) TIFFSetField(tiff,TIFFTAG_JPEGCOLORMODE,JPEGCOLORMODE_RGB);
sampling_factor=(const char *) NULL;
value=GetImageProperty(image,"jpeg:sampling-factor");
if (value != (char *) NULL)
{
sampling_factor=value;
if (image->debug != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
" Input sampling-factors=%s",sampling_factor);
}
if (image_info->sampling_factor != (char *) NULL)
sampling_factor=image_info->sampling_factor;
if (sampling_factor != (const char *) NULL)
{
flags=ParseGeometry(sampling_factor,&geometry_info);
if ((flags & SigmaValue) == 0)
geometry_info.sigma=geometry_info.rho;
if (image->colorspace == YCbCrColorspace)
(void) TIFFSetField(tiff,TIFFTAG_YCBCRSUBSAMPLING,(uint16)
geometry_info.rho,(uint16) geometry_info.sigma);
}
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (bits_per_sample == 12)
(void) TIFFSetField(tiff,TIFFTAG_JPEGTABLESMODE,JPEGTABLESMODE_QUANT);
#endif
break;
}
case COMPRESSION_ADOBE_DEFLATE:
{
rows_per_strip=(uint32) image->rows;
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
(void) TIFFSetField(tiff,TIFFTAG_ZIPQUALITY,(long) (
image_info->quality == UndefinedCompressionQuality ? 7 :
MagickMin((ssize_t) image_info->quality/10,9)));
break;
}
case COMPRESSION_CCITTFAX3:
{
/*
Byte-aligned EOL.
*/
rows_per_strip=(uint32) image->rows;
(void) TIFFSetField(tiff,TIFFTAG_GROUP3OPTIONS,4);
break;
}
case COMPRESSION_CCITTFAX4:
{
rows_per_strip=(uint32) image->rows;
break;
}
#if defined(LZMA_SUPPORT) && defined(COMPRESSION_LZMA)
case COMPRESSION_LZMA:
{
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
(void) TIFFSetField(tiff,TIFFTAG_LZMAPRESET,(long) (
image_info->quality == UndefinedCompressionQuality ? 7 :
MagickMin((ssize_t) image_info->quality/10,9)));
break;
}
#endif
case COMPRESSION_LZW:
{
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
break;
}
default:
break;
}
if (rows_per_strip < 1)
rows_per_strip=1;
if ((image->rows/rows_per_strip) >= (1UL << 15))
rows_per_strip=(uint32) (image->rows >> 15);
(void) TIFFSetField(tiff,TIFFTAG_ROWSPERSTRIP,rows_per_strip);
if ((image->x_resolution != 0.0) && (image->y_resolution != 0.0))
{
unsigned short
units;
/*
Set image resolution.
*/
units=RESUNIT_NONE;
if (image->units == PixelsPerInchResolution)
units=RESUNIT_INCH;
if (image->units == PixelsPerCentimeterResolution)
units=RESUNIT_CENTIMETER;
(void) TIFFSetField(tiff,TIFFTAG_RESOLUTIONUNIT,(uint16) units);
(void) TIFFSetField(tiff,TIFFTAG_XRESOLUTION,image->x_resolution);
(void) TIFFSetField(tiff,TIFFTAG_YRESOLUTION,image->y_resolution);
if ((image->page.x < 0) || (image->page.y < 0))
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"TIFF: negative image positions unsupported","%s",
image->filename);
if ((image->page.x > 0) && (image->x_resolution > 0.0))
{
/*
Set horizontal image position.
*/
(void) TIFFSetField(tiff,TIFFTAG_XPOSITION,(float) image->page.x/
image->x_resolution);
}
if ((image->page.y > 0) && (image->y_resolution > 0.0))
{
/*
Set vertical image position.
*/
(void) TIFFSetField(tiff,TIFFTAG_YPOSITION,(float) image->page.y/
image->y_resolution);
}
}
if (image->chromaticity.white_point.x != 0.0)
{
float
chromaticity[6];
/*
Set image chromaticity.
*/
chromaticity[0]=(float) image->chromaticity.red_primary.x;
chromaticity[1]=(float) image->chromaticity.red_primary.y;
chromaticity[2]=(float) image->chromaticity.green_primary.x;
chromaticity[3]=(float) image->chromaticity.green_primary.y;
chromaticity[4]=(float) image->chromaticity.blue_primary.x;
chromaticity[5]=(float) image->chromaticity.blue_primary.y;
(void) TIFFSetField(tiff,TIFFTAG_PRIMARYCHROMATICITIES,chromaticity);
chromaticity[0]=(float) image->chromaticity.white_point.x;
chromaticity[1]=(float) image->chromaticity.white_point.y;
(void) TIFFSetField(tiff,TIFFTAG_WHITEPOINT,chromaticity);
}
if ((LocaleCompare(image_info->magick,"PTIF") != 0) &&
(image_info->adjoin != MagickFalse) && (GetImageListLength(image) > 1))
{
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_PAGE);
if (image->scene != 0)
(void) TIFFSetField(tiff,TIFFTAG_PAGENUMBER,(uint16) image->scene,
GetImageListLength(image));
}
if (image->orientation != UndefinedOrientation)
(void) TIFFSetField(tiff,TIFFTAG_ORIENTATION,(uint16) image->orientation);
(void) TIFFSetProfiles(tiff,image);
{
uint16
page,
pages;
page=(uint16) scene;
pages=(uint16) GetImageListLength(image);
if ((LocaleCompare(image_info->magick,"PTIF") != 0) &&
(image_info->adjoin != MagickFalse) && (pages > 1))
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_PAGE);
(void) TIFFSetField(tiff,TIFFTAG_PAGENUMBER,page,pages);
}
(void) TIFFSetProperties(tiff,image);
DisableMSCWarning(4127)
if (0)
RestoreMSCWarning
(void) TIFFSetEXIFProperties(tiff,image);
/*
Write image scanlines.
*/
if (GetTIFFInfo(image_info,tiff,&tiff_info) == MagickFalse)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
quantum_info->endian=LSBEndian;
pixels=GetQuantumPixels(quantum_info);
tiff_info.scanline=GetQuantumPixels(quantum_info);
switch (photometric)
{
case PHOTOMETRIC_CIELAB:
case PHOTOMETRIC_YCBCR:
case PHOTOMETRIC_RGB:
{
/*
RGB TIFF image.
*/
switch (image_info->interlace)
{
case NoInterlace:
default:
{
quantum_type=RGBQuantum;
if (image->matte != MagickFalse)
quantum_type=RGBAQuantum;
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
(void) length;
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case PlaneInterlace:
case PartitionInterlace:
{
/*
Plane interlacing: RRRRRR...GGGGGG...BBBBBB...
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,RedQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,100,400);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,GreenQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,1,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,200,400);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,BlueQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,2,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,300,400);
if (status == MagickFalse)
break;
}
if (image->matte != MagickFalse)
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,
&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,AlphaQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,3,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,400,400);
if (status == MagickFalse)
break;
}
break;
}
}
break;
}
case PHOTOMETRIC_SEPARATED:
{
/*
CMYK TIFF image.
*/
quantum_type=CMYKQuantum;
if (image->matte != MagickFalse)
quantum_type=CMYKAQuantum;
if (image->colorspace != CMYKColorspace)
(void) TransformImageColorspace(image,CMYKColorspace);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case PHOTOMETRIC_PALETTE:
{
uint16
*blue,
*green,
*red;
/*
Colormapped TIFF image.
*/
red=(uint16 *) AcquireQuantumMemory(65536,sizeof(*red));
green=(uint16 *) AcquireQuantumMemory(65536,sizeof(*green));
blue=(uint16 *) AcquireQuantumMemory(65536,sizeof(*blue));
if ((red == (uint16 *) NULL) || (green == (uint16 *) NULL) ||
(blue == (uint16 *) NULL))
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
/*
Initialize TIFF colormap.
*/
(void) ResetMagickMemory(red,0,65536*sizeof(*red));
(void) ResetMagickMemory(green,0,65536*sizeof(*green));
(void) ResetMagickMemory(blue,0,65536*sizeof(*blue));
for (i=0; i < (ssize_t) image->colors; i++)
{
red[i]=ScaleQuantumToShort(image->colormap[i].red);
green[i]=ScaleQuantumToShort(image->colormap[i].green);
blue[i]=ScaleQuantumToShort(image->colormap[i].blue);
}
(void) TIFFSetField(tiff,TIFFTAG_COLORMAP,red,green,blue);
red=(uint16 *) RelinquishMagickMemory(red);
green=(uint16 *) RelinquishMagickMemory(green);
blue=(uint16 *) RelinquishMagickMemory(blue);
}
default:
{
/*
Convert PseudoClass packets to contiguous grayscale scanlines.
*/
quantum_type=IndexQuantum;
if (image->matte != MagickFalse)
{
if (photometric != PHOTOMETRIC_PALETTE)
quantum_type=GrayAlphaQuantum;
else
quantum_type=IndexAlphaQuantum;
}
else
if (photometric != PHOTOMETRIC_PALETTE)
quantum_type=GrayQuantum;
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
}
quantum_info=DestroyQuantumInfo(quantum_info);
if (image->colorspace == LabColorspace)
DecodeLabImage(image,&image->exception);
DestroyTIFFInfo(&tiff_info);
DisableMSCWarning(4127)
if (0 && (image_info->verbose != MagickFalse))
RestoreMSCWarning
TIFFPrintDirectory(tiff,stdout,MagickFalse);
(void) TIFFWriteDirectory(tiff);
image=SyncNextImageInList(image);
if (image == (Image *) NULL)
break;
status=SetImageProgress(image,SaveImagesTag,scene++,
GetImageListLength(image));
if (status == MagickFalse)
break;
} while (image_info->adjoin != MagickFalse);
(void) TIFFSetWarningHandler(warning_handler);
(void) TIFFSetErrorHandler(error_handler);
TIFFClose(tiff);
return(MagickTrue);
}
|
static MagickBooleanType WriteTIFFImage(const ImageInfo *image_info,
Image *image)
{
#if !defined(TIFFDefaultStripSize)
#define TIFFDefaultStripSize(tiff,request) (8192UL/TIFFScanlineSize(tiff))
#endif
const char
*mode,
*option;
CompressionType
compression;
EndianType
endian_type;
MagickBooleanType
debug,
status;
MagickOffsetType
scene;
QuantumInfo
*quantum_info;
QuantumType
quantum_type;
register ssize_t
i;
size_t
length;
ssize_t
y;
TIFF
*tiff;
TIFFErrorHandler
error_handler,
warning_handler;
TIFFInfo
tiff_info;
uint16
bits_per_sample,
compress_tag,
endian,
photometric;
uint32
rows_per_strip;
unsigned char
*pixels;
/*
Open TIFF file.
*/
assert(image_info != (const ImageInfo *) NULL);
assert(image_info->signature == MagickSignature);
assert(image != (Image *) NULL);
assert(image->signature == MagickSignature);
if (image->debug != MagickFalse)
(void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename);
status=OpenBlob(image_info,image,WriteBinaryBlobMode,&image->exception);
if (status == MagickFalse)
return(status);
(void) MagickSetThreadValue(tiff_exception,&image->exception);
error_handler=TIFFSetErrorHandler((TIFFErrorHandler) TIFFErrors);
warning_handler=TIFFSetWarningHandler((TIFFErrorHandler) TIFFWarnings);
endian_type=UndefinedEndian;
option=GetImageOption(image_info,"tiff:endian");
if (option != (const char *) NULL)
{
if (LocaleNCompare(option,"msb",3) == 0)
endian_type=MSBEndian;
if (LocaleNCompare(option,"lsb",3) == 0)
endian_type=LSBEndian;;
}
switch (endian_type)
{
case LSBEndian: mode="wl"; break;
case MSBEndian: mode="wb"; break;
default: mode="w"; break;
}
#if defined(TIFF_VERSION_BIG)
if (LocaleCompare(image_info->magick,"TIFF64") == 0)
switch (endian_type)
{
case LSBEndian: mode="wl8"; break;
case MSBEndian: mode="wb8"; break;
default: mode="w8"; break;
}
#endif
tiff=TIFFClientOpen(image->filename,mode,(thandle_t) image,TIFFReadBlob,
TIFFWriteBlob,TIFFSeekBlob,TIFFCloseBlob,TIFFGetBlobSize,TIFFMapBlob,
TIFFUnmapBlob);
if (tiff == (TIFF *) NULL)
{
(void) TIFFSetWarningHandler(warning_handler);
(void) TIFFSetErrorHandler(error_handler);
return(MagickFalse);
}
scene=0;
debug=IsEventLogging();
(void) debug;
do
{
/*
Initialize TIFF fields.
*/
if ((image_info->type != UndefinedType) &&
(image_info->type != OptimizeType))
(void) SetImageType(image,image_info->type);
compression=UndefinedCompression;
if (image->compression != JPEGCompression)
compression=image->compression;
if (image_info->compression != UndefinedCompression)
compression=image_info->compression;
switch (compression)
{
case FaxCompression:
case Group4Compression:
{
(void) SetImageType(image,BilevelType);
break;
}
case JPEGCompression:
{
(void) SetImageStorageClass(image,DirectClass);
(void) SetImageDepth(image,8);
break;
}
default:
break;
}
quantum_info=AcquireQuantumInfo(image_info,image);
if (quantum_info == (QuantumInfo *) NULL)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
if ((image->storage_class != PseudoClass) && (image->depth >= 32) &&
(quantum_info->format == UndefinedQuantumFormat) &&
(IsHighDynamicRangeImage(image,&image->exception) != MagickFalse))
{
status=SetQuantumFormat(image,quantum_info,FloatingPointQuantumFormat);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
}
if ((LocaleCompare(image_info->magick,"PTIF") == 0) &&
(GetPreviousImageInList(image) != (Image *) NULL))
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_REDUCEDIMAGE);
if ((image->columns != (uint32) image->columns) ||
(image->rows != (uint32) image->rows))
ThrowWriterException(ImageError,"WidthOrHeightExceedsLimit");
(void) TIFFSetField(tiff,TIFFTAG_IMAGELENGTH,(uint32) image->rows);
(void) TIFFSetField(tiff,TIFFTAG_IMAGEWIDTH,(uint32) image->columns);
switch (compression)
{
case FaxCompression:
{
compress_tag=COMPRESSION_CCITTFAX3;
SetQuantumMinIsWhite(quantum_info,MagickTrue);
break;
}
case Group4Compression:
{
compress_tag=COMPRESSION_CCITTFAX4;
SetQuantumMinIsWhite(quantum_info,MagickTrue);
break;
}
#if defined(COMPRESSION_JBIG)
case JBIG1Compression:
{
compress_tag=COMPRESSION_JBIG;
break;
}
#endif
case JPEGCompression:
{
compress_tag=COMPRESSION_JPEG;
break;
}
#if defined(COMPRESSION_LZMA)
case LZMACompression:
{
compress_tag=COMPRESSION_LZMA;
break;
}
#endif
case LZWCompression:
{
compress_tag=COMPRESSION_LZW;
break;
}
case RLECompression:
{
compress_tag=COMPRESSION_PACKBITS;
break;
}
case ZipCompression:
{
compress_tag=COMPRESSION_ADOBE_DEFLATE;
break;
}
case NoCompression:
default:
{
compress_tag=COMPRESSION_NONE;
break;
}
}
#if defined(MAGICKCORE_HAVE_TIFFISCODECCONFIGURED) || (TIFFLIB_VERSION > 20040919)
if ((compress_tag != COMPRESSION_NONE) &&
(TIFFIsCODECConfigured(compress_tag) == 0))
{
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"CompressionNotSupported","`%s'",CommandOptionToMnemonic(
MagickCompressOptions,(ssize_t) compression));
compress_tag=COMPRESSION_NONE;
compression=NoCompression;
}
#else
switch (compress_tag)
{
#if defined(CCITT_SUPPORT)
case COMPRESSION_CCITTFAX3:
case COMPRESSION_CCITTFAX4:
#endif
#if defined(YCBCR_SUPPORT) && defined(JPEG_SUPPORT)
case COMPRESSION_JPEG:
#endif
#if defined(LZMA_SUPPORT) && defined(COMPRESSION_LZMA)
case COMPRESSION_LZMA:
#endif
#if defined(LZW_SUPPORT)
case COMPRESSION_LZW:
#endif
#if defined(PACKBITS_SUPPORT)
case COMPRESSION_PACKBITS:
#endif
#if defined(ZIP_SUPPORT)
case COMPRESSION_ADOBE_DEFLATE:
#endif
case COMPRESSION_NONE:
break;
default:
{
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"CompressionNotSupported","`%s'",CommandOptionToMnemonic(
MagickCompressOptions,(ssize_t) compression));
compress_tag=COMPRESSION_NONE;
compression=NoCompression;
break;
}
}
#endif
if (image->colorspace == CMYKColorspace)
{
photometric=PHOTOMETRIC_SEPARATED;
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,4);
(void) TIFFSetField(tiff,TIFFTAG_INKSET,INKSET_CMYK);
}
else
{
/*
Full color TIFF raster.
*/
if (image->colorspace == LabColorspace)
{
photometric=PHOTOMETRIC_CIELAB;
EncodeLabImage(image,&image->exception);
}
else
if (image->colorspace == YCbCrColorspace)
{
photometric=PHOTOMETRIC_YCBCR;
(void) TIFFSetField(tiff,TIFFTAG_YCBCRSUBSAMPLING,1,1);
(void) SetImageStorageClass(image,DirectClass);
(void) SetImageDepth(image,8);
}
else
photometric=PHOTOMETRIC_RGB;
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,3);
if ((image_info->type != TrueColorType) &&
(image_info->type != TrueColorMatteType))
{
if ((image_info->type != PaletteType) &&
(IsGrayImage(image,&image->exception) != MagickFalse))
{
photometric=(uint16) (quantum_info->min_is_white !=
MagickFalse ? PHOTOMETRIC_MINISWHITE :
PHOTOMETRIC_MINISBLACK);
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,1);
if ((image_info->depth == 0) && (image->matte == MagickFalse) &&
(IsMonochromeImage(image,&image->exception) != MagickFalse))
{
status=SetQuantumDepth(image,quantum_info,1);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,
"MemoryAllocationFailed");
}
}
else
if (image->storage_class == PseudoClass)
{
size_t
depth;
/*
Colormapped TIFF raster.
*/
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,1);
photometric=PHOTOMETRIC_PALETTE;
depth=1;
while ((GetQuantumRange(depth)+1) < image->colors)
depth<<=1;
status=SetQuantumDepth(image,quantum_info,depth);
if (status == MagickFalse)
ThrowWriterException(ResourceLimitError,
"MemoryAllocationFailed");
}
}
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_FILLORDER,&endian);
if ((compress_tag == COMPRESSION_CCITTFAX3) &&
(photometric != PHOTOMETRIC_MINISWHITE))
{
compress_tag=COMPRESSION_NONE;
endian=FILLORDER_MSB2LSB;
}
else
if ((compress_tag == COMPRESSION_CCITTFAX4) &&
(photometric != PHOTOMETRIC_MINISWHITE))
{
compress_tag=COMPRESSION_NONE;
endian=FILLORDER_MSB2LSB;
}
option=GetImageOption(image_info,"tiff:fill-order");
if (option != (const char *) NULL)
{
if (LocaleNCompare(option,"msb",3) == 0)
endian=FILLORDER_MSB2LSB;
if (LocaleNCompare(option,"lsb",3) == 0)
endian=FILLORDER_LSB2MSB;
}
(void) TIFFSetField(tiff,TIFFTAG_COMPRESSION,compress_tag);
(void) TIFFSetField(tiff,TIFFTAG_FILLORDER,endian);
(void) TIFFSetField(tiff,TIFFTAG_BITSPERSAMPLE,quantum_info->depth);
if (image->matte != MagickFalse)
{
uint16
extra_samples,
sample_info[1],
samples_per_pixel;
/*
TIFF has a matte channel.
*/
extra_samples=1;
sample_info[0]=EXTRASAMPLE_UNASSALPHA;
option=GetImageOption(image_info,"tiff:alpha");
if (option != (const char *) NULL)
{
if (LocaleCompare(option,"associated") == 0)
sample_info[0]=EXTRASAMPLE_ASSOCALPHA;
else
if (LocaleCompare(option,"unspecified") == 0)
sample_info[0]=EXTRASAMPLE_UNSPECIFIED;
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_SAMPLESPERPIXEL,
&samples_per_pixel);
(void) TIFFSetField(tiff,TIFFTAG_SAMPLESPERPIXEL,samples_per_pixel+1);
(void) TIFFSetField(tiff,TIFFTAG_EXTRASAMPLES,extra_samples,
&sample_info);
if (sample_info[0] == EXTRASAMPLE_ASSOCALPHA)
SetQuantumAlphaType(quantum_info,AssociatedQuantumAlpha);
}
(void) TIFFSetField(tiff,TIFFTAG_PHOTOMETRIC,photometric);
switch (quantum_info->format)
{
case FloatingPointQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_IEEEFP);
(void) TIFFSetField(tiff,TIFFTAG_SMINSAMPLEVALUE,quantum_info->minimum);
(void) TIFFSetField(tiff,TIFFTAG_SMAXSAMPLEVALUE,quantum_info->maximum);
break;
}
case SignedQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_INT);
break;
}
case UnsignedQuantumFormat:
{
(void) TIFFSetField(tiff,TIFFTAG_SAMPLEFORMAT,SAMPLEFORMAT_UINT);
break;
}
default:
break;
}
(void) TIFFSetField(tiff,TIFFTAG_ORIENTATION,ORIENTATION_TOPLEFT);
(void) TIFFSetField(tiff,TIFFTAG_PLANARCONFIG,PLANARCONFIG_CONTIG);
if (photometric == PHOTOMETRIC_RGB)
if ((image_info->interlace == PlaneInterlace) ||
(image_info->interlace == PartitionInterlace))
(void) TIFFSetField(tiff,TIFFTAG_PLANARCONFIG,PLANARCONFIG_SEPARATE);
rows_per_strip=TIFFDefaultStripSize(tiff,0);
option=GetImageOption(image_info,"tiff:rows-per-strip");
if (option != (const char *) NULL)
rows_per_strip=(size_t) strtol(option,(char **) NULL,10);
switch (compress_tag)
{
case COMPRESSION_JPEG:
{
#if defined(JPEG_SUPPORT)
const char
*sampling_factor;
GeometryInfo
geometry_info;
MagickStatusType
flags;
rows_per_strip+=(16-(rows_per_strip % 16));
if (image_info->quality != UndefinedCompressionQuality)
(void) TIFFSetField(tiff,TIFFTAG_JPEGQUALITY,image_info->quality);
(void) TIFFSetField(tiff,TIFFTAG_JPEGCOLORMODE,JPEGCOLORMODE_RAW);
if (IssRGBCompatibleColorspace(image->colorspace) != MagickFalse)
{
const char
*value;
(void) TIFFSetField(tiff,TIFFTAG_JPEGCOLORMODE,JPEGCOLORMODE_RGB);
sampling_factor=(const char *) NULL;
value=GetImageProperty(image,"jpeg:sampling-factor");
if (value != (char *) NULL)
{
sampling_factor=value;
if (image->debug != MagickFalse)
(void) LogMagickEvent(CoderEvent,GetMagickModule(),
" Input sampling-factors=%s",sampling_factor);
}
if (image_info->sampling_factor != (char *) NULL)
sampling_factor=image_info->sampling_factor;
if (sampling_factor != (const char *) NULL)
{
flags=ParseGeometry(sampling_factor,&geometry_info);
if ((flags & SigmaValue) == 0)
geometry_info.sigma=geometry_info.rho;
if (image->colorspace == YCbCrColorspace)
(void) TIFFSetField(tiff,TIFFTAG_YCBCRSUBSAMPLING,(uint16)
geometry_info.rho,(uint16) geometry_info.sigma);
}
}
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (bits_per_sample == 12)
(void) TIFFSetField(tiff,TIFFTAG_JPEGTABLESMODE,JPEGTABLESMODE_QUANT);
#endif
break;
}
case COMPRESSION_ADOBE_DEFLATE:
{
rows_per_strip=(uint32) image->rows;
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
(void) TIFFSetField(tiff,TIFFTAG_ZIPQUALITY,(long) (
image_info->quality == UndefinedCompressionQuality ? 7 :
MagickMin((ssize_t) image_info->quality/10,9)));
break;
}
case COMPRESSION_CCITTFAX3:
{
/*
Byte-aligned EOL.
*/
rows_per_strip=(uint32) image->rows;
(void) TIFFSetField(tiff,TIFFTAG_GROUP3OPTIONS,4);
break;
}
case COMPRESSION_CCITTFAX4:
{
rows_per_strip=(uint32) image->rows;
break;
}
#if defined(LZMA_SUPPORT) && defined(COMPRESSION_LZMA)
case COMPRESSION_LZMA:
{
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
(void) TIFFSetField(tiff,TIFFTAG_LZMAPRESET,(long) (
image_info->quality == UndefinedCompressionQuality ? 7 :
MagickMin((ssize_t) image_info->quality/10,9)));
break;
}
#endif
case COMPRESSION_LZW:
{
(void) TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,
&bits_per_sample);
if (((photometric == PHOTOMETRIC_RGB) ||
(photometric == PHOTOMETRIC_MINISBLACK)) &&
((bits_per_sample == 8) || (bits_per_sample == 16)))
(void) TIFFSetField(tiff,TIFFTAG_PREDICTOR,PREDICTOR_HORIZONTAL);
break;
}
default:
break;
}
if (rows_per_strip < 1)
rows_per_strip=1;
if ((image->rows/rows_per_strip) >= (1UL << 15))
rows_per_strip=(uint32) (image->rows >> 15);
(void) TIFFSetField(tiff,TIFFTAG_ROWSPERSTRIP,rows_per_strip);
if ((image->x_resolution != 0.0) && (image->y_resolution != 0.0))
{
unsigned short
units;
/*
Set image resolution.
*/
units=RESUNIT_NONE;
if (image->units == PixelsPerInchResolution)
units=RESUNIT_INCH;
if (image->units == PixelsPerCentimeterResolution)
units=RESUNIT_CENTIMETER;
(void) TIFFSetField(tiff,TIFFTAG_RESOLUTIONUNIT,(uint16) units);
(void) TIFFSetField(tiff,TIFFTAG_XRESOLUTION,image->x_resolution);
(void) TIFFSetField(tiff,TIFFTAG_YRESOLUTION,image->y_resolution);
if ((image->page.x < 0) || (image->page.y < 0))
(void) ThrowMagickException(&image->exception,GetMagickModule(),
CoderError,"TIFF: negative image positions unsupported","%s",
image->filename);
if ((image->page.x > 0) && (image->x_resolution > 0.0))
{
/*
Set horizontal image position.
*/
(void) TIFFSetField(tiff,TIFFTAG_XPOSITION,(float) image->page.x/
image->x_resolution);
}
if ((image->page.y > 0) && (image->y_resolution > 0.0))
{
/*
Set vertical image position.
*/
(void) TIFFSetField(tiff,TIFFTAG_YPOSITION,(float) image->page.y/
image->y_resolution);
}
}
if (image->chromaticity.white_point.x != 0.0)
{
float
chromaticity[6];
/*
Set image chromaticity.
*/
chromaticity[0]=(float) image->chromaticity.red_primary.x;
chromaticity[1]=(float) image->chromaticity.red_primary.y;
chromaticity[2]=(float) image->chromaticity.green_primary.x;
chromaticity[3]=(float) image->chromaticity.green_primary.y;
chromaticity[4]=(float) image->chromaticity.blue_primary.x;
chromaticity[5]=(float) image->chromaticity.blue_primary.y;
(void) TIFFSetField(tiff,TIFFTAG_PRIMARYCHROMATICITIES,chromaticity);
chromaticity[0]=(float) image->chromaticity.white_point.x;
chromaticity[1]=(float) image->chromaticity.white_point.y;
(void) TIFFSetField(tiff,TIFFTAG_WHITEPOINT,chromaticity);
}
if ((LocaleCompare(image_info->magick,"PTIF") != 0) &&
(image_info->adjoin != MagickFalse) && (GetImageListLength(image) > 1))
{
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_PAGE);
if (image->scene != 0)
(void) TIFFSetField(tiff,TIFFTAG_PAGENUMBER,(uint16) image->scene,
GetImageListLength(image));
}
if (image->orientation != UndefinedOrientation)
(void) TIFFSetField(tiff,TIFFTAG_ORIENTATION,(uint16) image->orientation);
(void) TIFFSetProfiles(tiff,image);
{
uint16
page,
pages;
page=(uint16) scene;
pages=(uint16) GetImageListLength(image);
if ((LocaleCompare(image_info->magick,"PTIF") != 0) &&
(image_info->adjoin != MagickFalse) && (pages > 1))
(void) TIFFSetField(tiff,TIFFTAG_SUBFILETYPE,FILETYPE_PAGE);
(void) TIFFSetField(tiff,TIFFTAG_PAGENUMBER,page,pages);
}
(void) TIFFSetProperties(tiff,image);
DisableMSCWarning(4127)
if (0)
RestoreMSCWarning
(void) TIFFSetEXIFProperties(tiff,image);
/*
Write image scanlines.
*/
if (GetTIFFInfo(image_info,tiff,&tiff_info) == MagickFalse)
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
quantum_info->endian=LSBEndian;
pixels=GetQuantumPixels(quantum_info);
tiff_info.scanline=GetQuantumPixels(quantum_info);
switch (photometric)
{
case PHOTOMETRIC_CIELAB:
case PHOTOMETRIC_YCBCR:
case PHOTOMETRIC_RGB:
{
/*
RGB TIFF image.
*/
switch (image_info->interlace)
{
case NoInterlace:
default:
{
quantum_type=RGBQuantum;
if (image->matte != MagickFalse)
quantum_type=RGBAQuantum;
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
(void) length;
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType)
y,image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case PlaneInterlace:
case PartitionInterlace:
{
/*
Plane interlacing: RRRRRR...GGGGGG...BBBBBB...
*/
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,RedQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,100,400);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,GreenQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,1,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,200,400);
if (status == MagickFalse)
break;
}
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,BlueQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,2,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,300,400);
if (status == MagickFalse)
break;
}
if (image->matte != MagickFalse)
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,
&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,AlphaQuantum,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,3,image) == -1)
break;
}
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,400,400);
if (status == MagickFalse)
break;
}
break;
}
}
break;
}
case PHOTOMETRIC_SEPARATED:
{
/*
CMYK TIFF image.
*/
quantum_type=CMYKQuantum;
if (image->matte != MagickFalse)
quantum_type=CMYKAQuantum;
if (image->colorspace != CMYKColorspace)
(void) TransformImageColorspace(image,CMYKColorspace);
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
case PHOTOMETRIC_PALETTE:
{
uint16
*blue,
*green,
*red;
/*
Colormapped TIFF image.
*/
red=(uint16 *) AcquireQuantumMemory(65536,sizeof(*red));
green=(uint16 *) AcquireQuantumMemory(65536,sizeof(*green));
blue=(uint16 *) AcquireQuantumMemory(65536,sizeof(*blue));
if ((red == (uint16 *) NULL) || (green == (uint16 *) NULL) ||
(blue == (uint16 *) NULL))
ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed");
/*
Initialize TIFF colormap.
*/
(void) ResetMagickMemory(red,0,65536*sizeof(*red));
(void) ResetMagickMemory(green,0,65536*sizeof(*green));
(void) ResetMagickMemory(blue,0,65536*sizeof(*blue));
for (i=0; i < (ssize_t) image->colors; i++)
{
red[i]=ScaleQuantumToShort(image->colormap[i].red);
green[i]=ScaleQuantumToShort(image->colormap[i].green);
blue[i]=ScaleQuantumToShort(image->colormap[i].blue);
}
(void) TIFFSetField(tiff,TIFFTAG_COLORMAP,red,green,blue);
red=(uint16 *) RelinquishMagickMemory(red);
green=(uint16 *) RelinquishMagickMemory(green);
blue=(uint16 *) RelinquishMagickMemory(blue);
}
default:
{
/*
Convert PseudoClass packets to contiguous grayscale scanlines.
*/
quantum_type=IndexQuantum;
if (image->matte != MagickFalse)
{
if (photometric != PHOTOMETRIC_PALETTE)
quantum_type=GrayAlphaQuantum;
else
quantum_type=IndexAlphaQuantum;
}
else
if (photometric != PHOTOMETRIC_PALETTE)
quantum_type=GrayQuantum;
for (y=0; y < (ssize_t) image->rows; y++)
{
register const PixelPacket
*restrict p;
p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception);
if (p == (const PixelPacket *) NULL)
break;
length=ExportQuantumPixels(image,(const CacheView *) NULL,
quantum_info,quantum_type,pixels,&image->exception);
if (TIFFWritePixels(tiff,&tiff_info,y,0,image) == -1)
break;
if (image->previous == (Image *) NULL)
{
status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y,
image->rows);
if (status == MagickFalse)
break;
}
}
break;
}
}
quantum_info=DestroyQuantumInfo(quantum_info);
if (image->colorspace == LabColorspace)
DecodeLabImage(image,&image->exception);
DestroyTIFFInfo(&tiff_info);
DisableMSCWarning(4127)
if (0 && (image_info->verbose != MagickFalse))
RestoreMSCWarning
TIFFPrintDirectory(tiff,stdout,MagickFalse);
(void) TIFFWriteDirectory(tiff);
image=SyncNextImageInList(image);
if (image == (Image *) NULL)
break;
status=SetImageProgress(image,SaveImagesTag,scene++,
GetImageListLength(image));
if (status == MagickFalse)
break;
} while (image_info->adjoin != MagickFalse);
(void) TIFFSetWarningHandler(warning_handler);
(void) TIFFSetErrorHandler(error_handler);
TIFFClose(tiff);
return(MagickTrue);
}
|
C
|
ImageMagick
| 0 |
CVE-2017-6439
|
https://www.cvedetails.com/cve/CVE-2017-6439/
|
CWE-787
|
https://github.com/libimobiledevice/libplist/commit/32ee5213fe64f1e10ec76c1ee861ee6f233120dd
|
32ee5213fe64f1e10ec76c1ee861ee6f233120dd
|
bplist: Fix data range check for string/data/dict/array nodes
Passing a size of 0xFFFFFFFFFFFFFFFF to parse_string_node() might result
in a memcpy with a size of -1, leading to undefined behavior.
This commit makes sure that the actual node data (which depends on the size)
is in the range start_of_object..start_of_object+size.
Credit to OSS-Fuzz
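A minimal sketch of the range check described above (a hypothetical helper written for this note, not the actual libplist patch). The point is that the test must be expressed as "end - object >= size" rather than "object + size <= end", because with a size close to UINT64_MAX the addition wraps around and the broken check passes:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: copy 'size' bytes of node payload only if the whole
   range [object, object + size) lies inside the plist buffer.  Comparing
   "end - object >= size" avoids the wrap-around that "object + size <= end"
   would allow when size is 0xFFFFFFFFFFFFFFFF, as in the report above. */
static char *copy_node_data(const char *data, uint64_t data_size,
                            const char *object, uint64_t size)
{
    const char *end = data + data_size;

    if (object < data || object >= end)
        return NULL;                      /* start of object outside buffer */
    if ((uint64_t)(end - object) < size)
        return NULL;                      /* payload would run past the end */

    char *out = malloc(size + 1);
    if (out == NULL)
        return NULL;
    memcpy(out, object, size);            /* size is now known to be in range */
    out[size] = '\0';
    return out;
}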
|
static plist_t parse_bin_node_at_index(struct bplist_data *bplist, uint32_t node_index)
{
int i = 0;
const char* ptr = NULL;
plist_t plist = NULL;
const char* idx_ptr = NULL;
if (node_index >= bplist->num_objects) {
PLIST_BIN_ERR("node index (%u) must be smaller than the number of objects (%" PRIu64 ")\n", node_index, bplist->num_objects);
return NULL;
}
idx_ptr = bplist->offset_table + node_index * bplist->offset_size;
if (idx_ptr < bplist->offset_table ||
idx_ptr >= bplist->offset_table + bplist->num_objects * bplist->offset_size) {
PLIST_BIN_ERR("node index %u points outside of valid range\n", node_index);
return NULL;
}
ptr = bplist->data + UINT_TO_HOST(idx_ptr, bplist->offset_size);
/* make sure the node offset is in a sane range */
if ((ptr < bplist->data) || (ptr >= bplist->offset_table)) {
PLIST_BIN_ERR("offset for node index %u points outside of valid range\n", node_index);
return NULL;
}
/* store node_index for current recursion level */
if (plist_array_get_size(bplist->used_indexes) < bplist->level+1) {
while (plist_array_get_size(bplist->used_indexes) < bplist->level+1) {
plist_array_append_item(bplist->used_indexes, plist_new_uint(node_index));
}
} else {
plist_array_set_item(bplist->used_indexes, plist_new_uint(node_index), bplist->level);
}
/* recursion check */
if (bplist->level > 0) {
for (i = bplist->level-1; i >= 0; i--) {
plist_t node_i = plist_array_get_item(bplist->used_indexes, i);
plist_t node_level = plist_array_get_item(bplist->used_indexes, bplist->level);
if (plist_compare_node_value(node_i, node_level)) {
PLIST_BIN_ERR("recursion detected in binary plist\n");
return NULL;
}
}
}
/* finally parse node */
bplist->level++;
plist = parse_bin_node(bplist, &ptr);
bplist->level--;
return plist;
}
|
static plist_t parse_bin_node_at_index(struct bplist_data *bplist, uint32_t node_index)
{
int i = 0;
const char* ptr = NULL;
plist_t plist = NULL;
const char* idx_ptr = NULL;
if (node_index >= bplist->num_objects) {
PLIST_BIN_ERR("node index (%u) must be smaller than the number of objects (%" PRIu64 ")\n", node_index, bplist->num_objects);
return NULL;
}
idx_ptr = bplist->offset_table + node_index * bplist->offset_size;
if (idx_ptr < bplist->offset_table ||
idx_ptr >= bplist->offset_table + bplist->num_objects * bplist->offset_size) {
PLIST_BIN_ERR("node index %u points outside of valid range\n", node_index);
return NULL;
}
ptr = bplist->data + UINT_TO_HOST(idx_ptr, bplist->offset_size);
/* make sure the node offset is in a sane range */
if ((ptr < bplist->data) || (ptr >= bplist->offset_table)) {
PLIST_BIN_ERR("offset for node index %u points outside of valid range\n", node_index);
return NULL;
}
/* store node_index for current recursion level */
if (plist_array_get_size(bplist->used_indexes) < bplist->level+1) {
while (plist_array_get_size(bplist->used_indexes) < bplist->level+1) {
plist_array_append_item(bplist->used_indexes, plist_new_uint(node_index));
}
} else {
plist_array_set_item(bplist->used_indexes, plist_new_uint(node_index), bplist->level);
}
/* recursion check */
if (bplist->level > 0) {
for (i = bplist->level-1; i >= 0; i--) {
plist_t node_i = plist_array_get_item(bplist->used_indexes, i);
plist_t node_level = plist_array_get_item(bplist->used_indexes, bplist->level);
if (plist_compare_node_value(node_i, node_level)) {
PLIST_BIN_ERR("recursion detected in binary plist\n");
return NULL;
}
}
}
/* finally parse node */
bplist->level++;
plist = parse_bin_node(bplist, &ptr);
bplist->level--;
return plist;
}
|
C
|
libplist
| 0 |
CVE-2016-1665
|
https://www.cvedetails.com/cve/CVE-2016-1665/
|
CWE-20
|
https://github.com/chromium/chromium/commit/282f53ffdc3b1902da86f6a0791af736837efbf8
|
282f53ffdc3b1902da86f6a0791af736837efbf8
|
[signin] Add metrics to track the source for refresh token updated events
This CL adds a source for update and revoke credentials operations. It then
surfaces the source in the chrome://signin-internals page.
This CL also records the following histograms that track refresh token events:
* Signin.RefreshTokenUpdated.ToValidToken.Source
* Signin.RefreshTokenUpdated.ToInvalidToken.Source
* Signin.RefreshTokenRevoked.Source
These histograms are needed to validate the assumptions of how often tokens
are revoked by the browser and the sources for the token revocations.
Bug: 896182
Change-Id: I2fcab80ee8e5699708e695bc3289fa6d34859a90
Reviewed-on: https://chromium-review.googlesource.com/c/1286464
Reviewed-by: Jochen Eisinger <[email protected]>
Reviewed-by: David Roger <[email protected]>
Reviewed-by: Ilya Sherman <[email protected]>
Commit-Queue: Mihai Sardarescu <[email protected]>
Cr-Commit-Position: refs/heads/master@{#606181}
|
MutableProfileOAuth2TokenServiceDelegateTest()
: signin_error_controller_(
SigninErrorController::AccountMode::ANY_ACCOUNT),
access_token_success_count_(0),
access_token_failure_count_(0),
access_token_failure_(GoogleServiceAuthError::NONE),
token_available_count_(0),
token_revoked_count_(0),
tokens_loaded_count_(0),
start_batch_changes_(0),
end_batch_changes_(0),
auth_error_changed_count_(0),
revoke_all_tokens_on_load_(false) {}
|
MutableProfileOAuth2TokenServiceDelegateTest()
: signin_error_controller_(
SigninErrorController::AccountMode::ANY_ACCOUNT),
access_token_success_count_(0),
access_token_failure_count_(0),
access_token_failure_(GoogleServiceAuthError::NONE),
token_available_count_(0),
token_revoked_count_(0),
tokens_loaded_count_(0),
start_batch_changes_(0),
end_batch_changes_(0),
auth_error_changed_count_(0),
revoke_all_tokens_on_load_(false) {}
|
C
|
Chrome
| 0 |
CVE-2014-9903
|
https://www.cvedetails.com/cve/CVE-2014-9903/
|
CWE-200
|
https://github.com/torvalds/linux/commit/4efbc454ba68def5ef285b26ebfcfdb605b52755
|
4efbc454ba68def5ef285b26ebfcfdb605b52755
|
sched: Fix information leak in sys_sched_getattr()
We're copying the on-stack structure to userspace, but forgot to give
the right number of bytes to copy. This allows the calling process to
obtain up to PAGE_SIZE bytes from the stack (and possibly adjacent
kernel memory).
This fix copies only as much as we actually have on the stack
(attr->size defaults to the size of the struct) and leaves the rest of
the userspace-provided buffer untouched.
Found using kmemcheck + trinity.
Fixes: d50dde5a10f30 ("sched: Add new scheduler syscalls to support an extended scheduling parameters ABI")
Cc: Dario Faggioli <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Vegard Nossum <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
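An illustrative, user-space sketch of the copy described above (names and types are stand-ins, not the kernel's): the leak comes from copying the caller-supplied length instead of the number of bytes actually initialized on the stack.

#include <stddef.h>
#include <string.h>

/* Stand-in for struct sched_attr; 'size' is set by the producer to the
   number of bytes it actually filled in. */
struct attr_like {
    unsigned int size;
    unsigned int policy;
    unsigned long long flags;
};

/* 'kattr' lives on the stack and 'ubuf' is the caller's buffer of 'usize'
   bytes.  Copying 'usize' bytes would read past 'kattr' and leak adjacent
   stack memory; copying at most kattr->size bytes is the behaviour the fix
   above describes, leaving the rest of 'ubuf' untouched. */
static void read_attr(void *ubuf, size_t usize, const struct attr_like *kattr)
{
    size_t n = kattr->size;

    if (n > usize)            /* never write more than the caller provided */
        n = usize;
    memcpy(ubuf, kattr, n);   /* and never more than we actually have */
}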
|
static void claim_allocations(int cpu, struct sched_domain *sd)
{
struct sd_data *sdd = sd->private;
WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
*per_cpu_ptr(sdd->sd, cpu) = NULL;
if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
*per_cpu_ptr(sdd->sg, cpu) = NULL;
if (atomic_read(&(*per_cpu_ptr(sdd->sgp, cpu))->ref))
*per_cpu_ptr(sdd->sgp, cpu) = NULL;
}
|
static void claim_allocations(int cpu, struct sched_domain *sd)
{
struct sd_data *sdd = sd->private;
WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
*per_cpu_ptr(sdd->sd, cpu) = NULL;
if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
*per_cpu_ptr(sdd->sg, cpu) = NULL;
if (atomic_read(&(*per_cpu_ptr(sdd->sgp, cpu))->ref))
*per_cpu_ptr(sdd->sgp, cpu) = NULL;
}
|
C
|
linux
| 0 |
CVE-2013-6621
|
https://www.cvedetails.com/cve/CVE-2013-6621/
|
CWE-399
|
https://github.com/chromium/chromium/commit/4039d2fcaab746b6c20017ba9bb51c3a2403a76c
|
4039d2fcaab746b6c20017ba9bb51c3a2403a76c
|
Add logging to figure out which IPC we're failing to deserialize in RenderFrame.
BUG=369553
[email protected]
Review URL: https://codereview.chromium.org/263833020
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@268565 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderFrameImpl::PlayerGone(blink::WebMediaPlayer* player) {
DidPause(player);
}
|
void RenderFrameImpl::PlayerGone(blink::WebMediaPlayer* player) {
DidPause(player);
}
|
C
|
Chrome
| 0 |
CVE-2019-11599
|
https://www.cvedetails.com/cve/CVE-2019-11599/
|
CWE-362
|
https://github.com/torvalds/linux/commit/04f5866e41fb70690e28397487d8bd8eea7d712a
|
04f5866e41fb70690e28397487d8bd8eea7d712a
|
coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping
The core dumping code has always run without holding the mmap_sem for
writing, even though that is the only way to ensure that the entire vma
layout will not change from under it. Only using some signal
serialization on the processes belonging to the mm is not nearly enough.
This was pointed out earlier. For example in Hugh's post from Jul 2017:
https://lkml.kernel.org/r/[email protected]
"Not strictly relevant here, but a related note: I was very surprised
to discover, only quite recently, how handle_mm_fault() may be called
without down_read(mmap_sem) - when core dumping. That seems a
misguided optimization to me, which would also be nice to correct"
In particular because the growsdown and growsup can move the
vm_start/vm_end, the various loops the core dump does around the vma will
not be consistent if page faults can happen concurrently.
Pretty much all users calling mmget_not_zero()/get_task_mm() and then
taking the mmap_sem had the potential to introduce unexpected side
effects in the core dumping code.
Adding mmap_sem for writing around the ->core_dump invocation is a
viable long term fix, but it requires removing all copy user and page
faults and to replace them with get_dump_page() for all binary formats
which is not suitable as a short term fix.
For the time being this solution manually covers the places that can
confuse the core dump either by altering the vma layout or the vma flags
while it runs. Once ->core_dump runs under mmap_sem for writing the
function mmget_still_valid() can be dropped.
Allowing mmap_sem protected sections to run in parallel with the
coredump provides some minor parallelism advantage to the swapoff code
(which seems to be safe enough by never mangling any vma field and can
keep doing swapins in parallel to the core dumping) and to some other
corner case.
In order to facilitate the backporting I added "Fixes: 86039bd3b4e6"
however the side effect of this same race condition in /proc/pid/mem
should be reproducible since before 2.6.12-rc2 so I couldn't add any
other "Fixes:" because there's no hash beyond the git genesis commit.
Because find_extend_vma() is the only location outside of the process
context that could modify the "mm" structures under mmap_sem for
reading, by adding the mmget_still_valid() check to it, all other cases
that take the mmap_sem for reading don't need the new check after
mmget_not_zero()/get_task_mm(). The expand_stack() in page fault
context also doesn't need the new check, because all tasks under core
dumping are frozen.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 86039bd3b4e6 ("userfaultfd: add new syscall to provide memory externalization")
Signed-off-by: Andrea Arcangeli <[email protected]>
Reported-by: Jann Horn <[email protected]>
Suggested-by: Oleg Nesterov <[email protected]>
Acked-by: Peter Xu <[email protected]>
Reviewed-by: Mike Rapoport <[email protected]>
Reviewed-by: Oleg Nesterov <[email protected]>
Reviewed-by: Jann Horn <[email protected]>
Acked-by: Jason Gunthorpe <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
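A simplified, user-space sketch of the pattern described above (the names and types below are stand-ins for the kernel's, not the real code): a code path that obtained the mm via mmget_not_zero()/get_task_mm() and then took mmap_sem for reading bails out if a core dump has started, instead of changing the vma layout or flags under the dumper.

#include <stddef.h>

/* Stand-in for struct mm_struct: core_state is non-NULL while a core dump
   is in progress on this mm. */
struct mm_like {
    void *core_state;
    /* ... vma list, mmap_sem, ... */
};

/* Analogue of the mmget_still_valid() check added by the commit above. */
static int mm_still_valid(const struct mm_like *mm)
{
    return mm->core_state == NULL;
}

static void vma_changing_work(struct mm_like *mm)
{
    /* down_read(&mm->mmap_sem);  -- lock handling omitted in this sketch */
    if (!mm_still_valid(mm))
        return;   /* a dump is running: do not touch vma layout or flags */
    /* ... safe to adjust vma flags / layout here ... */
    /* up_read(&mm->mmap_sem); */
}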
|
static inline pagemap_entry_t make_pme(u64 frame, u64 flags)
{
return (pagemap_entry_t) { .pme = (frame & PM_PFRAME_MASK) | flags };
}
|
static inline pagemap_entry_t make_pme(u64 frame, u64 flags)
{
return (pagemap_entry_t) { .pme = (frame & PM_PFRAME_MASK) | flags };
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/d1a59e4e845a01d7d7b80ef184b672752a9eae4d
|
d1a59e4e845a01d7d7b80ef184b672752a9eae4d
|
Fixing cross-process postMessage replies on more than two iterations.
When two frames are replying to each other using event.source across processes,
after the first two replies, things break down. The root cause is that in
RenderViewImpl::GetFrameByMappedID, the lookup was incorrect. It is now
properly searching for the remote frame id and returning the local one.
BUG=153445
Review URL: https://chromiumcodereview.appspot.com/11040015
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@159924 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderViewImpl::UpdateSessionHistory(WebFrame* frame) {
if (page_id_ == -1)
return;
const WebHistoryItem& item =
webview()->mainFrame()->previousHistoryItem();
SendUpdateState(item);
}
|
void RenderViewImpl::UpdateSessionHistory(WebFrame* frame) {
if (page_id_ == -1)
return;
const WebHistoryItem& item =
webview()->mainFrame()->previousHistoryItem();
SendUpdateState(item);
}
|
C
|
Chrome
| 0 |
CVE-2015-3885
|
https://www.cvedetails.com/cve/CVE-2015-3885/
|
CWE-189
|
https://github.com/rawstudio/rawstudio/commit/983bda1f0fa5fa86884381208274198a620f006e
|
983bda1f0fa5fa86884381208274198a620f006e
|
Avoid overflow in ljpeg_start().
|
unsigned CLASS getbithuff (int nbits, ushort *huff)
{
int c;
if (nbits == -1)
return bitbuf = vbits = reset = 0;
if (nbits == 0 || vbits < 0) return 0;
while (!reset && vbits < nbits && (c = fgetc(ifp)) != EOF &&
!(reset = zero_after_ff && c == 0xff && fgetc(ifp))) {
bitbuf = (bitbuf << 8) + (uchar) c;
vbits += 8;
}
c = bitbuf << (32-vbits) >> (32-nbits);
if (huff) {
vbits -= huff[c] >> 8;
c = (uchar) huff[c];
} else
vbits -= nbits;
if (vbits < 0) derror();
return c;
}
|
unsigned CLASS getbithuff (int nbits, ushort *huff)
{
int c;
if (nbits == -1)
return bitbuf = vbits = reset = 0;
if (nbits == 0 || vbits < 0) return 0;
while (!reset && vbits < nbits && (c = fgetc(ifp)) != EOF &&
!(reset = zero_after_ff && c == 0xff && fgetc(ifp))) {
bitbuf = (bitbuf << 8) + (uchar) c;
vbits += 8;
}
c = bitbuf << (32-vbits) >> (32-nbits);
if (huff) {
vbits -= huff[c] >> 8;
c = (uchar) huff[c];
} else
vbits -= nbits;
if (vbits < 0) derror();
return c;
}
|
C
|
rawstudio
| 0 |
CVE-2018-6063
|
https://www.cvedetails.com/cve/CVE-2018-6063/
|
CWE-787
|
https://github.com/chromium/chromium/commit/673ce95d481ea9368c4d4d43ac756ba1d6d9e608
|
673ce95d481ea9368c4d4d43ac756ba1d6d9e608
|
Correct mojo::WrapSharedMemoryHandle usage
Fixes some incorrect uses of mojo::WrapSharedMemoryHandle which
were assuming that the call actually has any control over the memory
protection applied to a handle when mapped.
Where fixing usage is infeasible for this CL, TODOs are added to
annotate follow-up work.
Also updates the API and documentation to (hopefully) improve clarity
and avoid similar mistakes from being made in the future.
BUG=792900
Cq-Include-Trybots: master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Change-Id: I0578aaa9ca3bfcb01aaf2451315d1ede95458477
Reviewed-on: https://chromium-review.googlesource.com/818282
Reviewed-by: Wei Li <[email protected]>
Reviewed-by: Lei Zhang <[email protected]>
Reviewed-by: John Abd-El-Malek <[email protected]>
Reviewed-by: Daniel Cheng <[email protected]>
Reviewed-by: Sadrul Chowdhury <[email protected]>
Reviewed-by: Yuzhu Shen <[email protected]>
Reviewed-by: Robert Sesek <[email protected]>
Commit-Queue: Ken Rockot <[email protected]>
Cr-Commit-Position: refs/heads/master@{#530268}
|
void MojoJpegDecodeAccelerator::OnLostConnectionToJpegDecoder() {
DCHECK(io_task_runner_->BelongsToCurrentThread());
OnDecodeAck(kInvalidBitstreamBufferId,
::media::JpegDecodeAccelerator::Error::PLATFORM_FAILURE);
}
|
void MojoJpegDecodeAccelerator::OnLostConnectionToJpegDecoder() {
DCHECK(io_task_runner_->BelongsToCurrentThread());
OnDecodeAck(kInvalidBitstreamBufferId,
::media::JpegDecodeAccelerator::Error::PLATFORM_FAILURE);
}
|
C
|
Chrome
| 0 |
CVE-2013-1792
|
https://www.cvedetails.com/cve/CVE-2013-1792/
|
CWE-362
|
https://github.com/torvalds/linux/commit/0da9dfdd2cd9889201bc6f6f43580c99165cd087
|
0da9dfdd2cd9889201bc6f6f43580c99165cd087
|
keys: fix race with concurrent install_user_keyrings()
This fixes CVE-2013-1792.
There is a race in install_user_keyrings() that can cause a NULL pointer
dereference when called concurrently for the same user if the uid and
uid-session keyrings are not yet created. It might be possible for an
unprivileged user to trigger this by calling keyctl() from userspace in
parallel immediately after logging in.
Assume that we have two threads both executing lookup_user_key(), both
looking for KEY_SPEC_USER_SESSION_KEYRING.
THREAD A THREAD B
=============================== ===============================
==>call install_user_keyrings();
if (!cred->user->session_keyring)
==>call install_user_keyrings()
...
user->uid_keyring = uid_keyring;
if (user->uid_keyring)
return 0;
<==
key = cred->user->session_keyring [== NULL]
user->session_keyring = session_keyring;
atomic_inc(&key->usage); [oops]
At the point thread A dereferences cred->user->session_keyring, thread B
hasn't updated user->session_keyring yet, but thread A assumes it is
populated because install_user_keyrings() returned ok.
The race window is really small but can be exploited if, for example,
thread B is interrupted or preempted after initializing uid_keyring, but
before doing setting session_keyring.
This couldn't be reproduced on a stock kernel. However, after placing a
systemtap probe on 'user->session_keyring = session_keyring;' to
introduce some delay, the kernel could be crashed reliably.
Fix this by checking both pointers before deciding whether to return.
Alternatively, the test could be done away with entirely as it is checked
inside the mutex - but since the mutex is global, that may not be the best
way.
Signed-off-by: David Howells <[email protected]>
Reported-by: Mateusz Guzik <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: James Morris <[email protected]>
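A minimal sketch of the fix shape described above, assuming the fast path sits at the top of install_user_keyrings(): it only returns early once both keyring pointers are visible, and the real work happens under key_user_keyring_mutex where the same condition is effectively re-checked. Field and mutex names follow the upstream code but are assumptions in this sketch.
int install_user_keyrings(void)
{
	struct user_struct *user = current_cred()->user;
	int ret = 0;

	/* only short-circuit when *both* keyrings are already populated */
	if (user->uid_keyring && user->session_keyring)
		return 0;

	mutex_lock(&key_user_keyring_mutex);
	if (!user->uid_keyring) {
		/* ... allocate the uid keyring first, then the uid-session
		 * keyring, publishing user->session_keyring last ... */
	}
	mutex_unlock(&key_user_keyring_mutex);
	return ret;
}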
|
long join_session_keyring(const char *name)
{
const struct cred *old;
struct cred *new;
struct key *keyring;
long ret, serial;
new = prepare_creds();
if (!new)
return -ENOMEM;
old = current_cred();
/* if no name is provided, install an anonymous keyring */
if (!name) {
ret = install_session_keyring_to_cred(new, NULL);
if (ret < 0)
goto error;
serial = new->session_keyring->serial;
ret = commit_creds(new);
if (ret == 0)
ret = serial;
goto okay;
}
/* allow the user to join or create a named keyring */
mutex_lock(&key_session_mutex);
/* look for an existing keyring of this name */
keyring = find_keyring_by_name(name, false);
if (PTR_ERR(keyring) == -ENOKEY) {
/* not found - try and create a new one */
keyring = keyring_alloc(
name, old->uid, old->gid, old,
KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ | KEY_USR_LINK,
KEY_ALLOC_IN_QUOTA, NULL);
if (IS_ERR(keyring)) {
ret = PTR_ERR(keyring);
goto error2;
}
} else if (IS_ERR(keyring)) {
ret = PTR_ERR(keyring);
goto error2;
} else if (keyring == new->session_keyring) {
ret = 0;
goto error2;
}
/* we've got a keyring - now to install it */
ret = install_session_keyring_to_cred(new, keyring);
if (ret < 0)
goto error2;
commit_creds(new);
mutex_unlock(&key_session_mutex);
ret = keyring->serial;
key_put(keyring);
okay:
return ret;
error2:
mutex_unlock(&key_session_mutex);
error:
abort_creds(new);
return ret;
}
|
long join_session_keyring(const char *name)
{
const struct cred *old;
struct cred *new;
struct key *keyring;
long ret, serial;
new = prepare_creds();
if (!new)
return -ENOMEM;
old = current_cred();
/* if no name is provided, install an anonymous keyring */
if (!name) {
ret = install_session_keyring_to_cred(new, NULL);
if (ret < 0)
goto error;
serial = new->session_keyring->serial;
ret = commit_creds(new);
if (ret == 0)
ret = serial;
goto okay;
}
/* allow the user to join or create a named keyring */
mutex_lock(&key_session_mutex);
/* look for an existing keyring of this name */
keyring = find_keyring_by_name(name, false);
if (PTR_ERR(keyring) == -ENOKEY) {
/* not found - try and create a new one */
keyring = keyring_alloc(
name, old->uid, old->gid, old,
KEY_POS_ALL | KEY_USR_VIEW | KEY_USR_READ | KEY_USR_LINK,
KEY_ALLOC_IN_QUOTA, NULL);
if (IS_ERR(keyring)) {
ret = PTR_ERR(keyring);
goto error2;
}
} else if (IS_ERR(keyring)) {
ret = PTR_ERR(keyring);
goto error2;
} else if (keyring == new->session_keyring) {
ret = 0;
goto error2;
}
/* we've got a keyring - now to install it */
ret = install_session_keyring_to_cred(new, keyring);
if (ret < 0)
goto error2;
commit_creds(new);
mutex_unlock(&key_session_mutex);
ret = keyring->serial;
key_put(keyring);
okay:
return ret;
error2:
mutex_unlock(&key_session_mutex);
error:
abort_creds(new);
return ret;
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/8353baf8d1504dbdd4ad7584ff2466de657521cd
|
8353baf8d1504dbdd4ad7584ff2466de657521cd
|
Remove WebFrame::canHaveSecureChild
To simplify the public API, ServiceWorkerNetworkProvider can do the
parent walk itself.
Follow-up to https://crrev.com/ad1850962644e19.
BUG=607543
Review-Url: https://codereview.chromium.org/2082493002
Cr-Commit-Position: refs/heads/master@{#400896}
|
bool Document::importContainerNodeChildren(ContainerNode* oldContainerNode, ContainerNode* newContainerNode, ExceptionState& exceptionState)
{
for (Node& oldChild : NodeTraversal::childrenOf(*oldContainerNode)) {
Node* newChild = importNode(&oldChild, true, exceptionState);
if (exceptionState.hadException())
return false;
newContainerNode->appendChild(newChild, exceptionState);
if (exceptionState.hadException())
return false;
}
return true;
}
|
bool Document::importContainerNodeChildren(ContainerNode* oldContainerNode, ContainerNode* newContainerNode, ExceptionState& exceptionState)
{
for (Node& oldChild : NodeTraversal::childrenOf(*oldContainerNode)) {
Node* newChild = importNode(&oldChild, true, exceptionState);
if (exceptionState.hadException())
return false;
newContainerNode->appendChild(newChild, exceptionState);
if (exceptionState.hadException())
return false;
}
return true;
}
|
C
|
Chrome
| 0 |
CVE-2013-7421
|
https://www.cvedetails.com/cve/CVE-2013-7421/
|
CWE-264
|
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
crypto: prefix module autoloading with "crypto-"
This prefixes all crypto module loading with "crypto-" so we never run
the risk of exposing module auto-loading to userspace via a crypto API,
as demonstrated by Mathias Krause:
https://lkml.org/lkml/2013/3/4/70
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
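For context, the mechanism behind the series is a MODULE_ALIAS_CRYPTO() macro that registers both the plain alias and a "crypto-"-prefixed one, so the crypto API can request modules only through the prefixed namespace. The macro body below mirrors the upstream definition but should be read as a sketch, not a quotation of this commit.
/* declare both the legacy alias and the namespaced one */
#define MODULE_ALIAS_CRYPTO(name)				\
		__MODULE_INFO(alias, alias_userspace, name);	\
		__MODULE_INFO(alias, alias_crypto, "crypto-" name)

/* the crypto API then loads "crypto-<alg>" rather than a raw "<alg>" alias */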
|
static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha256_state *sctx = shash_desc_ctx(desc);
unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
int res;
/* Handle the fast case right here */
if (partial + len < SHA256_BLOCK_SIZE) {
sctx->count += len;
memcpy(sctx->buf + partial, data, len);
return 0;
}
if (!irq_fpu_usable()) {
res = crypto_sha256_update(desc, data, len);
} else {
kernel_fpu_begin();
res = __sha256_ssse3_update(desc, data, len, partial);
kernel_fpu_end();
}
return res;
}
|
static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha256_state *sctx = shash_desc_ctx(desc);
unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
int res;
/* Handle the fast case right here */
if (partial + len < SHA256_BLOCK_SIZE) {
sctx->count += len;
memcpy(sctx->buf + partial, data, len);
return 0;
}
if (!irq_fpu_usable()) {
res = crypto_sha256_update(desc, data, len);
} else {
kernel_fpu_begin();
res = __sha256_ssse3_update(desc, data, len, partial);
kernel_fpu_end();
}
return res;
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/1161a49d663dd395bd639549c2dfe7324f847938
|
1161a49d663dd395bd639549c2dfe7324f847938
|
Don't populate URL data in WebDropData when dragging files.
This is considered a potential security issue as well, since it leaks
filesystem paths.
BUG=332579
Review URL: https://codereview.chromium.org/135633002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@244538 0039d316-1c4b-4281-b951-d872f2087c98
|
explicit OverscrollNavigationOverlay(WebContentsImpl* web_contents)
: web_contents_(web_contents),
image_delegate_(NULL),
view_(NULL),
loading_complete_(false),
received_paint_update_(false),
compositor_updated_(false),
slide_direction_(SLIDE_UNKNOWN),
need_paint_update_(true) {
}
|
explicit OverscrollNavigationOverlay(WebContentsImpl* web_contents)
: web_contents_(web_contents),
image_delegate_(NULL),
view_(NULL),
loading_complete_(false),
received_paint_update_(false),
compositor_updated_(false),
slide_direction_(SLIDE_UNKNOWN),
need_paint_update_(true) {
}
|
C
|
Chrome
| 0 |
CVE-2013-7421
|
https://www.cvedetails.com/cve/CVE-2013-7421/
|
CWE-264
|
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
|
crypto: prefix module autoloading with "crypto-"
This prefixes all crypto module loading with "crypto-" so we never run
the risk of exposing module auto-loading to userspace via a crypto API,
as demonstrated by Mathias Krause:
https://lkml.org/lkml/2013/3/4/70
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
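On the driver side the conversion is mechanical; a hedged example of what a cipher module's alias declarations look like after the series (the camellia names here are illustrative):
MODULE_ALIAS_CRYPTO("camellia");	/* replaces MODULE_ALIAS("camellia") */
MODULE_ALIAS_CRYPTO("camellia-asm");	/* secondary names get the same prefix */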
|
static void camellia_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
camellia_dec_blk(crypto_tfm_ctx(tfm), dst, src);
}
|
static void camellia_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
camellia_dec_blk(crypto_tfm_ctx(tfm), dst, src);
}
|
C
|
linux
| 0 |
CVE-2016-3699
|
https://www.cvedetails.com/cve/CVE-2016-3699/
|
CWE-264
|
https://github.com/mjg59/linux/commit/a4a5ed2835e8ea042868b7401dced3f517cafa76
|
a4a5ed2835e8ea042868b7401dced3f517cafa76
|
acpi: Disable ACPI table override if securelevel is set
From the kernel documentation (initrd_table_override.txt):
If the ACPI_INITRD_TABLE_OVERRIDE compile option is true, it is possible
to override nearly any ACPI table provided by the BIOS with an
instrumented, modified one.
When securelevel is set, the kernel should disallow any unauthenticated
changes to kernel space. ACPI tables contain code invoked by the kernel, so
do not allow ACPI tables to be overridden if securelevel is set.
Signed-off-by: Linn Crosetto <[email protected]>
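A minimal sketch of the guard this describes, assuming a get_securelevel() helper and placing the check at the entry of the initrd table-override path; both the helper name and the exact placement are assumptions here.
void __init acpi_initrd_override(void *data, size_t size)
{
	/* refuse unauthenticated ACPI table replacement once locked down */
	if (get_securelevel() > 0)
		return;

	/* ... existing initrd table override logic ... */
}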
|
static void __init reserve_brk(void)
{
if (_brk_end > _brk_start)
memblock_reserve(__pa_symbol(_brk_start),
_brk_end - _brk_start);
/* Mark brk area as locked down and no longer taking any
new allocations */
_brk_start = 0;
}
|
static void __init reserve_brk(void)
{
if (_brk_end > _brk_start)
memblock_reserve(__pa_symbol(_brk_start),
_brk_end - _brk_start);
/* Mark brk area as locked down and no longer taking any
new allocations */
_brk_start = 0;
}
|
C
|
linux
| 0 |
CVE-2012-5375
|
https://www.cvedetails.com/cve/CVE-2012-5375/
|
CWE-310
|
https://github.com/torvalds/linux/commit/9c52057c698fb96f8f07e7a4bcf4801a092bda89
|
9c52057c698fb96f8f07e7a4bcf4801a092bda89
|
Btrfs: fix hash overflow handling
The handling for directory crc hash overflows was fairly obscure,
split_leaf returns EOVERFLOW when we try to extend the item and that is
supposed to bubble up to userland. For a while it did so, but along the
way we added better handling of errors and forced the FS readonly if we
hit IO errors during the directory insertion.
Along the way, we started testing only for EEXIST and the EOVERFLOW case
was dropped. The end result is that we may force the FS readonly if we
catch a directory hash bucket overflow.
This fixes a few problem spots. First I add tests for EOVERFLOW in the
places where we can safely just return the error up the chain.
btrfs_rename is harder though, because it tries to insert the new
directory item only after it has already unlinked anything the rename
was going to overwrite. Rather than adding very complex logic, I added
a helper to test for the hash overflow case early while it is still safe
to bail out.
Snapshot and subvolume creation had a similar problem, so they are using
the new helper now too.
Signed-off-by: Chris Mason <[email protected]>
Reported-by: Pascal Junod <[email protected]>
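A hedged sketch of the two pieces described above: a non-destructive collision probe run before btrfs_rename() (or snapshot/subvolume creation) unlinks anything, and treating -EOVERFLOW like -EEXIST rather than like an IO error. The helper name follows the upstream fix, but its signature is an assumption here.
static int rename_hash_precheck(struct btrfs_root *dest_root,
				struct inode *new_dir,
				struct dentry *new_dentry)
{
	/* probe the directory hash bucket before anything is unlinked;
	 * -EOVERFLOW is returned to the caller instead of forcing the
	 * filesystem read-only later in the insertion path */
	return btrfs_check_dir_item_collision(dest_root, new_dir->i_ino,
					      new_dentry->d_name.name,
					      new_dentry->d_name.len);
}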
|
static void fill_inode_item(struct btrfs_trans_handle *trans,
struct extent_buffer *leaf,
struct btrfs_inode_item *item,
struct inode *inode)
{
btrfs_set_inode_uid(leaf, item, i_uid_read(inode));
btrfs_set_inode_gid(leaf, item, i_gid_read(inode));
btrfs_set_inode_size(leaf, item, BTRFS_I(inode)->disk_i_size);
btrfs_set_inode_mode(leaf, item, inode->i_mode);
btrfs_set_inode_nlink(leaf, item, inode->i_nlink);
btrfs_set_timespec_sec(leaf, btrfs_inode_atime(item),
inode->i_atime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_atime(item),
inode->i_atime.tv_nsec);
btrfs_set_timespec_sec(leaf, btrfs_inode_mtime(item),
inode->i_mtime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_mtime(item),
inode->i_mtime.tv_nsec);
btrfs_set_timespec_sec(leaf, btrfs_inode_ctime(item),
inode->i_ctime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_ctime(item),
inode->i_ctime.tv_nsec);
btrfs_set_inode_nbytes(leaf, item, inode_get_bytes(inode));
btrfs_set_inode_generation(leaf, item, BTRFS_I(inode)->generation);
btrfs_set_inode_sequence(leaf, item, inode->i_version);
btrfs_set_inode_transid(leaf, item, trans->transid);
btrfs_set_inode_rdev(leaf, item, inode->i_rdev);
btrfs_set_inode_flags(leaf, item, BTRFS_I(inode)->flags);
btrfs_set_inode_block_group(leaf, item, 0);
}
|
static void fill_inode_item(struct btrfs_trans_handle *trans,
struct extent_buffer *leaf,
struct btrfs_inode_item *item,
struct inode *inode)
{
btrfs_set_inode_uid(leaf, item, i_uid_read(inode));
btrfs_set_inode_gid(leaf, item, i_gid_read(inode));
btrfs_set_inode_size(leaf, item, BTRFS_I(inode)->disk_i_size);
btrfs_set_inode_mode(leaf, item, inode->i_mode);
btrfs_set_inode_nlink(leaf, item, inode->i_nlink);
btrfs_set_timespec_sec(leaf, btrfs_inode_atime(item),
inode->i_atime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_atime(item),
inode->i_atime.tv_nsec);
btrfs_set_timespec_sec(leaf, btrfs_inode_mtime(item),
inode->i_mtime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_mtime(item),
inode->i_mtime.tv_nsec);
btrfs_set_timespec_sec(leaf, btrfs_inode_ctime(item),
inode->i_ctime.tv_sec);
btrfs_set_timespec_nsec(leaf, btrfs_inode_ctime(item),
inode->i_ctime.tv_nsec);
btrfs_set_inode_nbytes(leaf, item, inode_get_bytes(inode));
btrfs_set_inode_generation(leaf, item, BTRFS_I(inode)->generation);
btrfs_set_inode_sequence(leaf, item, inode->i_version);
btrfs_set_inode_transid(leaf, item, trans->transid);
btrfs_set_inode_rdev(leaf, item, inode->i_rdev);
btrfs_set_inode_flags(leaf, item, BTRFS_I(inode)->flags);
btrfs_set_inode_block_group(leaf, item, 0);
}
|
C
|
linux
| 0 |
CVE-2017-12179
|
https://www.cvedetails.com/cve/CVE-2017-12179/
|
CWE-190
|
https://cgit.freedesktop.org/xorg/xserver/commit/?id=d088e3c1286b548a58e62afdc70bb40981cdb9e8
|
d088e3c1286b548a58e62afdc70bb40981cdb9e8
| null |
CreatePointerBarrierClient(ClientPtr client,
xXFixesCreatePointerBarrierReq * stuff,
PointerBarrierClientPtr *client_out)
{
WindowPtr pWin;
ScreenPtr screen;
BarrierScreenPtr cs;
int err;
int size;
int i;
struct PointerBarrierClient *ret;
CARD16 *in_devices;
DeviceIntPtr dev;
size = sizeof(*ret) + sizeof(DeviceIntPtr) * stuff->num_devices;
ret = malloc(size);
if (!ret) {
return BadAlloc;
}
xorg_list_init(&ret->per_device);
err = dixLookupWindow(&pWin, stuff->window, client, DixReadAccess);
if (err != Success) {
client->errorValue = stuff->window;
goto error;
}
screen = pWin->drawable.pScreen;
cs = GetBarrierScreen(screen);
ret->screen = screen;
ret->window = stuff->window;
ret->num_devices = stuff->num_devices;
if (ret->num_devices > 0)
ret->device_ids = (int*)&ret[1];
else
ret->device_ids = NULL;
in_devices = (CARD16 *) &stuff[1];
for (i = 0; i < stuff->num_devices; i++) {
int device_id = in_devices[i];
DeviceIntPtr device;
if ((err = dixLookupDevice (&device, device_id,
client, DixReadAccess))) {
client->errorValue = device_id;
goto error;
}
if (!IsMaster (device)) {
client->errorValue = device_id;
err = BadDevice;
goto error;
}
ret->device_ids[i] = device_id;
}
/* Alloc one per master pointer, they're the ones that can be blocked */
xorg_list_init(&ret->per_device);
nt_list_for_each_entry(dev, inputInfo.devices, next) {
struct PointerBarrierDevice *pbd;
if (dev->type != MASTER_POINTER)
continue;
pbd = AllocBarrierDevice();
if (!pbd) {
err = BadAlloc;
goto error;
}
pbd->deviceid = dev->id;
xorg_list_add(&pbd->entry, &ret->per_device);
}
ret->id = stuff->barrier;
ret->barrier.x1 = stuff->x1;
ret->barrier.x2 = stuff->x2;
ret->barrier.y1 = stuff->y1;
ret->barrier.y2 = stuff->y2;
sort_min_max(&ret->barrier.x1, &ret->barrier.x2);
sort_min_max(&ret->barrier.y1, &ret->barrier.y2);
ret->barrier.directions = stuff->directions & 0x0f;
if (barrier_is_horizontal(&ret->barrier))
ret->barrier.directions &= ~(BarrierPositiveX | BarrierNegativeX);
if (barrier_is_vertical(&ret->barrier))
ret->barrier.directions &= ~(BarrierPositiveY | BarrierNegativeY);
xorg_list_add(&ret->entry, &cs->barriers);
*client_out = ret;
return Success;
error:
*client_out = NULL;
FreePointerBarrierClient(ret);
return err;
}
|
CreatePointerBarrierClient(ClientPtr client,
xXFixesCreatePointerBarrierReq * stuff,
PointerBarrierClientPtr *client_out)
{
WindowPtr pWin;
ScreenPtr screen;
BarrierScreenPtr cs;
int err;
int size;
int i;
struct PointerBarrierClient *ret;
CARD16 *in_devices;
DeviceIntPtr dev;
size = sizeof(*ret) + sizeof(DeviceIntPtr) * stuff->num_devices;
ret = malloc(size);
if (!ret) {
return BadAlloc;
}
xorg_list_init(&ret->per_device);
err = dixLookupWindow(&pWin, stuff->window, client, DixReadAccess);
if (err != Success) {
client->errorValue = stuff->window;
goto error;
}
screen = pWin->drawable.pScreen;
cs = GetBarrierScreen(screen);
ret->screen = screen;
ret->window = stuff->window;
ret->num_devices = stuff->num_devices;
if (ret->num_devices > 0)
ret->device_ids = (int*)&ret[1];
else
ret->device_ids = NULL;
in_devices = (CARD16 *) &stuff[1];
for (i = 0; i < stuff->num_devices; i++) {
int device_id = in_devices[i];
DeviceIntPtr device;
if ((err = dixLookupDevice (&device, device_id,
client, DixReadAccess))) {
client->errorValue = device_id;
goto error;
}
if (!IsMaster (device)) {
client->errorValue = device_id;
err = BadDevice;
goto error;
}
ret->device_ids[i] = device_id;
}
/* Alloc one per master pointer, they're the ones that can be blocked */
xorg_list_init(&ret->per_device);
nt_list_for_each_entry(dev, inputInfo.devices, next) {
struct PointerBarrierDevice *pbd;
if (dev->type != MASTER_POINTER)
continue;
pbd = AllocBarrierDevice();
if (!pbd) {
err = BadAlloc;
goto error;
}
pbd->deviceid = dev->id;
xorg_list_add(&pbd->entry, &ret->per_device);
}
ret->id = stuff->barrier;
ret->barrier.x1 = stuff->x1;
ret->barrier.x2 = stuff->x2;
ret->barrier.y1 = stuff->y1;
ret->barrier.y2 = stuff->y2;
sort_min_max(&ret->barrier.x1, &ret->barrier.x2);
sort_min_max(&ret->barrier.y1, &ret->barrier.y2);
ret->barrier.directions = stuff->directions & 0x0f;
if (barrier_is_horizontal(&ret->barrier))
ret->barrier.directions &= ~(BarrierPositiveX | BarrierNegativeX);
if (barrier_is_vertical(&ret->barrier))
ret->barrier.directions &= ~(BarrierPositiveY | BarrierNegativeY);
xorg_list_add(&ret->entry, &cs->barriers);
*client_out = ret;
return Success;
error:
*client_out = NULL;
FreePointerBarrierClient(ret);
return err;
}
|
C
|
xserver
| 0 |
CVE-2014-3478
|
https://www.cvedetails.com/cve/CVE-2014-3478/
|
CWE-119
|
https://github.com/file/file/commit/27a14bc7ba285a0a5ebfdb55e54001aa11932b08
|
27a14bc7ba285a0a5ebfdb55e54001aa11932b08
|
Correctly compute the truncated pascal string size (Francisco Alonso and
Jan Kaluza at RedHat)
|
magiccheck(struct magic_set *ms, struct magic *m)
{
uint64_t l = m->value.q;
uint64_t v;
float fl, fv;
double dl, dv;
int matched;
union VALUETYPE *p = &ms->ms_value;
switch (m->type) {
case FILE_BYTE:
v = p->b;
break;
case FILE_SHORT:
case FILE_BESHORT:
case FILE_LESHORT:
v = p->h;
break;
case FILE_LONG:
case FILE_BELONG:
case FILE_LELONG:
case FILE_MELONG:
case FILE_DATE:
case FILE_BEDATE:
case FILE_LEDATE:
case FILE_MEDATE:
case FILE_LDATE:
case FILE_BELDATE:
case FILE_LELDATE:
case FILE_MELDATE:
v = p->l;
break;
case FILE_QUAD:
case FILE_LEQUAD:
case FILE_BEQUAD:
case FILE_QDATE:
case FILE_BEQDATE:
case FILE_LEQDATE:
case FILE_QLDATE:
case FILE_BEQLDATE:
case FILE_LEQLDATE:
case FILE_QWDATE:
case FILE_BEQWDATE:
case FILE_LEQWDATE:
v = p->q;
break;
case FILE_FLOAT:
case FILE_BEFLOAT:
case FILE_LEFLOAT:
fl = m->value.f;
fv = p->f;
switch (m->reln) {
case 'x':
matched = 1;
break;
case '!':
matched = fv != fl;
break;
case '=':
matched = fv == fl;
break;
case '>':
matched = fv > fl;
break;
case '<':
matched = fv < fl;
break;
default:
file_magerror(ms, "cannot happen with float: invalid relation `%c'",
m->reln);
return -1;
}
return matched;
case FILE_DOUBLE:
case FILE_BEDOUBLE:
case FILE_LEDOUBLE:
dl = m->value.d;
dv = p->d;
switch (m->reln) {
case 'x':
matched = 1;
break;
case '!':
matched = dv != dl;
break;
case '=':
matched = dv == dl;
break;
case '>':
matched = dv > dl;
break;
case '<':
matched = dv < dl;
break;
default:
file_magerror(ms, "cannot happen with double: invalid relation `%c'", m->reln);
return -1;
}
return matched;
case FILE_DEFAULT:
case FILE_CLEAR:
l = 0;
v = 0;
break;
case FILE_STRING:
case FILE_PSTRING:
l = 0;
v = file_strncmp(m->value.s, p->s, (size_t)m->vallen, m->str_flags);
break;
case FILE_BESTRING16:
case FILE_LESTRING16:
l = 0;
v = file_strncmp16(m->value.s, p->s, (size_t)m->vallen, m->str_flags);
break;
case FILE_SEARCH: { /* search ms->search.s for the string m->value.s */
size_t slen;
size_t idx;
if (ms->search.s == NULL)
return 0;
slen = MIN(m->vallen, sizeof(m->value.s));
l = 0;
v = 0;
for (idx = 0; m->str_range == 0 || idx < m->str_range; idx++) {
if (slen + idx > ms->search.s_len)
break;
v = file_strncmp(m->value.s, ms->search.s + idx, slen,
m->str_flags);
if (v == 0) { /* found match */
ms->search.offset += idx;
break;
}
}
break;
}
case FILE_REGEX: {
int rc;
file_regex_t rx;
if (ms->search.s == NULL)
return 0;
l = 0;
rc = file_regcomp(&rx, m->value.s,
REG_EXTENDED|REG_NEWLINE|
((m->str_flags & STRING_IGNORE_CASE) ? REG_ICASE : 0));
if (rc) {
file_regerror(&rx, rc, ms);
v = (uint64_t)-1;
} else {
regmatch_t pmatch[1];
size_t slen = ms->search.s_len;
#ifndef REG_STARTEND
#define REG_STARTEND 0
char c;
if (slen != 0)
slen--;
c = ms->search.s[slen];
((char *)(intptr_t)ms->search.s)[slen] = '\0';
#else
pmatch[0].rm_so = 0;
pmatch[0].rm_eo = slen;
#endif
rc = file_regexec(&rx, (const char *)ms->search.s,
1, pmatch, REG_STARTEND);
#if REG_STARTEND == 0
((char *)(intptr_t)ms->search.s)[l] = c;
#endif
switch (rc) {
case 0:
ms->search.s += (int)pmatch[0].rm_so;
ms->search.offset += (size_t)pmatch[0].rm_so;
ms->search.rm_len =
(size_t)(pmatch[0].rm_eo - pmatch[0].rm_so);
v = 0;
break;
case REG_NOMATCH:
v = 1;
break;
default:
file_regerror(&rx, rc, ms);
v = (uint64_t)-1;
break;
}
}
file_regfree(&rx);
if (v == (uint64_t)-1)
return -1;
break;
}
case FILE_INDIRECT:
case FILE_USE:
case FILE_NAME:
return 1;
default:
file_magerror(ms, "invalid type %d in magiccheck()", m->type);
return -1;
}
v = file_signextend(ms, m, v);
switch (m->reln) {
case 'x':
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u == *any* = 1\n", (unsigned long long)v);
matched = 1;
break;
case '!':
matched = v != l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT "u != %"
INT64_T_FORMAT "u = %d\n", (unsigned long long)v,
(unsigned long long)l, matched);
break;
case '=':
matched = v == l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT "u == %"
INT64_T_FORMAT "u = %d\n", (unsigned long long)v,
(unsigned long long)l, matched);
break;
case '>':
if (m->flag & UNSIGNED) {
matched = v > l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u > %" INT64_T_FORMAT "u = %d\n",
(unsigned long long)v,
(unsigned long long)l, matched);
}
else {
matched = (int64_t) v > (int64_t) l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"d > %" INT64_T_FORMAT "d = %d\n",
(long long)v, (long long)l, matched);
}
break;
case '<':
if (m->flag & UNSIGNED) {
matched = v < l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u < %" INT64_T_FORMAT "u = %d\n",
(unsigned long long)v,
(unsigned long long)l, matched);
}
else {
matched = (int64_t) v < (int64_t) l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"d < %" INT64_T_FORMAT "d = %d\n",
(long long)v, (long long)l, matched);
}
break;
case '&':
matched = (v & l) == l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %"
INT64_T_FORMAT "x) == %" INT64_T_FORMAT
"x) = %d\n", (unsigned long long)v,
(unsigned long long)l, (unsigned long long)l,
matched);
break;
case '^':
matched = (v & l) != l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %"
INT64_T_FORMAT "x) != %" INT64_T_FORMAT
"x) = %d\n", (unsigned long long)v,
(unsigned long long)l, (unsigned long long)l,
matched);
break;
default:
file_magerror(ms, "cannot happen: invalid relation `%c'",
m->reln);
return -1;
}
return matched;
}
|
magiccheck(struct magic_set *ms, struct magic *m)
{
uint64_t l = m->value.q;
uint64_t v;
float fl, fv;
double dl, dv;
int matched;
union VALUETYPE *p = &ms->ms_value;
switch (m->type) {
case FILE_BYTE:
v = p->b;
break;
case FILE_SHORT:
case FILE_BESHORT:
case FILE_LESHORT:
v = p->h;
break;
case FILE_LONG:
case FILE_BELONG:
case FILE_LELONG:
case FILE_MELONG:
case FILE_DATE:
case FILE_BEDATE:
case FILE_LEDATE:
case FILE_MEDATE:
case FILE_LDATE:
case FILE_BELDATE:
case FILE_LELDATE:
case FILE_MELDATE:
v = p->l;
break;
case FILE_QUAD:
case FILE_LEQUAD:
case FILE_BEQUAD:
case FILE_QDATE:
case FILE_BEQDATE:
case FILE_LEQDATE:
case FILE_QLDATE:
case FILE_BEQLDATE:
case FILE_LEQLDATE:
case FILE_QWDATE:
case FILE_BEQWDATE:
case FILE_LEQWDATE:
v = p->q;
break;
case FILE_FLOAT:
case FILE_BEFLOAT:
case FILE_LEFLOAT:
fl = m->value.f;
fv = p->f;
switch (m->reln) {
case 'x':
matched = 1;
break;
case '!':
matched = fv != fl;
break;
case '=':
matched = fv == fl;
break;
case '>':
matched = fv > fl;
break;
case '<':
matched = fv < fl;
break;
default:
file_magerror(ms, "cannot happen with float: invalid relation `%c'",
m->reln);
return -1;
}
return matched;
case FILE_DOUBLE:
case FILE_BEDOUBLE:
case FILE_LEDOUBLE:
dl = m->value.d;
dv = p->d;
switch (m->reln) {
case 'x':
matched = 1;
break;
case '!':
matched = dv != dl;
break;
case '=':
matched = dv == dl;
break;
case '>':
matched = dv > dl;
break;
case '<':
matched = dv < dl;
break;
default:
file_magerror(ms, "cannot happen with double: invalid relation `%c'", m->reln);
return -1;
}
return matched;
case FILE_DEFAULT:
case FILE_CLEAR:
l = 0;
v = 0;
break;
case FILE_STRING:
case FILE_PSTRING:
l = 0;
v = file_strncmp(m->value.s, p->s, (size_t)m->vallen, m->str_flags);
break;
case FILE_BESTRING16:
case FILE_LESTRING16:
l = 0;
v = file_strncmp16(m->value.s, p->s, (size_t)m->vallen, m->str_flags);
break;
case FILE_SEARCH: { /* search ms->search.s for the string m->value.s */
size_t slen;
size_t idx;
if (ms->search.s == NULL)
return 0;
slen = MIN(m->vallen, sizeof(m->value.s));
l = 0;
v = 0;
for (idx = 0; m->str_range == 0 || idx < m->str_range; idx++) {
if (slen + idx > ms->search.s_len)
break;
v = file_strncmp(m->value.s, ms->search.s + idx, slen,
m->str_flags);
if (v == 0) { /* found match */
ms->search.offset += idx;
break;
}
}
break;
}
case FILE_REGEX: {
int rc;
file_regex_t rx;
if (ms->search.s == NULL)
return 0;
l = 0;
rc = file_regcomp(&rx, m->value.s,
REG_EXTENDED|REG_NEWLINE|
((m->str_flags & STRING_IGNORE_CASE) ? REG_ICASE : 0));
if (rc) {
file_regerror(&rx, rc, ms);
v = (uint64_t)-1;
} else {
regmatch_t pmatch[1];
size_t slen = ms->search.s_len;
#ifndef REG_STARTEND
#define REG_STARTEND 0
char c;
if (slen != 0)
slen--;
c = ms->search.s[slen];
((char *)(intptr_t)ms->search.s)[slen] = '\0';
#else
pmatch[0].rm_so = 0;
pmatch[0].rm_eo = slen;
#endif
rc = file_regexec(&rx, (const char *)ms->search.s,
1, pmatch, REG_STARTEND);
#if REG_STARTEND == 0
((char *)(intptr_t)ms->search.s)[l] = c;
#endif
switch (rc) {
case 0:
ms->search.s += (int)pmatch[0].rm_so;
ms->search.offset += (size_t)pmatch[0].rm_so;
ms->search.rm_len =
(size_t)(pmatch[0].rm_eo - pmatch[0].rm_so);
v = 0;
break;
case REG_NOMATCH:
v = 1;
break;
default:
file_regerror(&rx, rc, ms);
v = (uint64_t)-1;
break;
}
}
file_regfree(&rx);
if (v == (uint64_t)-1)
return -1;
break;
}
case FILE_INDIRECT:
case FILE_USE:
case FILE_NAME:
return 1;
default:
file_magerror(ms, "invalid type %d in magiccheck()", m->type);
return -1;
}
v = file_signextend(ms, m, v);
switch (m->reln) {
case 'x':
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u == *any* = 1\n", (unsigned long long)v);
matched = 1;
break;
case '!':
matched = v != l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT "u != %"
INT64_T_FORMAT "u = %d\n", (unsigned long long)v,
(unsigned long long)l, matched);
break;
case '=':
matched = v == l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT "u == %"
INT64_T_FORMAT "u = %d\n", (unsigned long long)v,
(unsigned long long)l, matched);
break;
case '>':
if (m->flag & UNSIGNED) {
matched = v > l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u > %" INT64_T_FORMAT "u = %d\n",
(unsigned long long)v,
(unsigned long long)l, matched);
}
else {
matched = (int64_t) v > (int64_t) l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"d > %" INT64_T_FORMAT "d = %d\n",
(long long)v, (long long)l, matched);
}
break;
case '<':
if (m->flag & UNSIGNED) {
matched = v < l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"u < %" INT64_T_FORMAT "u = %d\n",
(unsigned long long)v,
(unsigned long long)l, matched);
}
else {
matched = (int64_t) v < (int64_t) l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "%" INT64_T_FORMAT
"d < %" INT64_T_FORMAT "d = %d\n",
(long long)v, (long long)l, matched);
}
break;
case '&':
matched = (v & l) == l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %"
INT64_T_FORMAT "x) == %" INT64_T_FORMAT
"x) = %d\n", (unsigned long long)v,
(unsigned long long)l, (unsigned long long)l,
matched);
break;
case '^':
matched = (v & l) != l;
if ((ms->flags & MAGIC_DEBUG) != 0)
(void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %"
INT64_T_FORMAT "x) != %" INT64_T_FORMAT
"x) = %d\n", (unsigned long long)v,
(unsigned long long)l, (unsigned long long)l,
matched);
break;
default:
file_magerror(ms, "cannot happen: invalid relation `%c'",
m->reln);
return -1;
}
return matched;
}
|
C
|
file
| 0 |
CVE-2013-2237
|
https://www.cvedetails.com/cve/CVE-2013-2237/
|
CWE-119
|
https://github.com/torvalds/linux/commit/85dfb745ee40232876663ae206cba35f24ab2a40
|
85dfb745ee40232876663ae206cba35f24ab2a40
|
af_key: initialize satype in key_notify_policy_flush()
This field was left uninitialized. Some user daemons perform check against this
field.
Signed-off-by: Nicolas Dichtel <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
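The fix itself is essentially one assignment in key_notify_policy_flush(); a hedged sketch of that header setup follows (field values mirror the neighbouring unicast_flush_resp() below, constant name per the PF_KEY spec, both assumptions here).
hdr = (struct sadb_msg *) skb_put(skb_out, sizeof(struct sadb_msg));
hdr->sadb_msg_type = SADB_X_SPDFLUSH;
hdr->sadb_msg_version = PF_KEY_V2;
hdr->sadb_msg_errno = (uint8_t) 0;
hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC;	/* the previously missing initialization */
hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));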
|
static int unicast_flush_resp(struct sock *sk, const struct sadb_msg *ihdr)
{
struct sk_buff *skb;
struct sadb_msg *hdr;
skb = alloc_skb(sizeof(struct sadb_msg) + 16, GFP_ATOMIC);
if (!skb)
return -ENOBUFS;
hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
memcpy(hdr, ihdr, sizeof(struct sadb_msg));
hdr->sadb_msg_errno = (uint8_t) 0;
hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ONE, sk, sock_net(sk));
}
|
static int unicast_flush_resp(struct sock *sk, const struct sadb_msg *ihdr)
{
struct sk_buff *skb;
struct sadb_msg *hdr;
skb = alloc_skb(sizeof(struct sadb_msg) + 16, GFP_ATOMIC);
if (!skb)
return -ENOBUFS;
hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
memcpy(hdr, ihdr, sizeof(struct sadb_msg));
hdr->sadb_msg_errno = (uint8_t) 0;
hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ONE, sk, sock_net(sk));
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/5041f984669fe3a989a84c348eb838c8f7233f6b
|
5041f984669fe3a989a84c348eb838c8f7233f6b
|
AutoFill: Release the cached frame when we receive the frameDestroyed() message
from WebKit.
BUG=48857
TEST=none
Review URL: http://codereview.chromium.org/3173005
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@55789 0039d316-1c4b-4281-b951-d872f2087c98
|
void RenderView::OnAutoFillFormDataFilled(int query_id,
const webkit_glue::FormData& form) {
autofill_helper_.FormDataFilled(query_id, form);
}
|
void RenderView::OnAutoFillFormDataFilled(int query_id,
const webkit_glue::FormData& form) {
autofill_helper_.FormDataFilled(query_id, form);
}
|
C
|
Chrome
| 0 |
CVE-2016-1696
|
https://www.cvedetails.com/cve/CVE-2016-1696/
|
CWE-284
|
https://github.com/chromium/chromium/commit/c0569cc04741cccf6548c2169fcc1609d958523f
|
c0569cc04741cccf6548c2169fcc1609d958523f
|
[Extensions] Expand bindings access checks
BUG=601149
BUG=601073
Review URL: https://codereview.chromium.org/1866103002
Cr-Commit-Position: refs/heads/master@{#387710}
|
void RenderFrameObserverNatives::InvokeCallback(
v8::Global<v8::Function> callback,
bool succeeded) {
v8::Isolate* isolate = context()->isolate();
v8::HandleScope handle_scope(isolate);
v8::Local<v8::Value> args[] = {v8::Boolean::New(isolate, succeeded)};
context()->CallFunction(v8::Local<v8::Function>::New(isolate, callback),
arraysize(args), args);
}
|
void RenderFrameObserverNatives::InvokeCallback(
v8::Global<v8::Function> callback,
bool succeeded) {
v8::Isolate* isolate = context()->isolate();
v8::HandleScope handle_scope(isolate);
v8::Local<v8::Value> args[] = {v8::Boolean::New(isolate, succeeded)};
context()->CallFunction(v8::Local<v8::Function>::New(isolate, callback),
arraysize(args), args);
}
|
C
|
Chrome
| 0 |
CVE-2017-8284
|
https://www.cvedetails.com/cve/CVE-2017-8284/
|
CWE-94
|
https://github.com/qemu/qemu/commit/30663fd26c0307e414622c7a8607fbc04f92ec14
|
30663fd26c0307e414622c7a8607fbc04f92ec14
|
tcg/i386: Check the size of instruction being translated
This fixes the bug: 'user-to-root privesc inside VM via bad translation
caching' reported by Jann Horn here:
https://bugs.chromium.org/p/project-zero/issues/detail?id=1122
Reviewed-by: Richard Henderson <[email protected]>
CC: Peter Maydell <[email protected]>
CC: Paolo Bonzini <[email protected]>
Reported-by: Jann Horn <[email protected]>
Signed-off-by: Pranith Kumar <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
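A hedged sketch of the guard: x86 instructions can never exceed 15 bytes, so the translator can abort to its illegal-opcode path once the bytes decoded for one instruction pass that bound, instead of caching a bogus translation. The DisasContext field names are assumptions here.
/* in the x86 decoder, after each prefix/opcode/operand byte is fetched */
if (s->pc - pc_start > 15) {	/* no valid x86 instruction is longer */
    goto illegal_op;		/* stop runaway translation caching */
}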
|
static CCPrepare gen_prepare_eflags_p(DisasContext *s, TCGv reg)
{
gen_compute_eflags(s);
return (CCPrepare) { .cond = TCG_COND_NE, .reg = cpu_cc_src,
.mask = CC_P };
}
|
static CCPrepare gen_prepare_eflags_p(DisasContext *s, TCGv reg)
{
gen_compute_eflags(s);
return (CCPrepare) { .cond = TCG_COND_NE, .reg = cpu_cc_src,
.mask = CC_P };
}
|
C
|
qemu
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/1161a49d663dd395bd639549c2dfe7324f847938
|
1161a49d663dd395bd639549c2dfe7324f847938
|
Don't populate URL data in WebDropData when dragging files.
This is considered a potential security issue as well, since it leaks
filesystem paths.
BUG=332579
Review URL: https://codereview.chromium.org/135633002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@244538 0039d316-1c4b-4281-b951-d872f2087c98
|
bool IsOmniboxAutoCompletionForImeEnabled() {
return !CommandLine::ForCurrentProcess()->HasSwitch(
switches::kDisableOmniboxAutoCompletionForIme);
}
|
bool IsOmniboxAutoCompletionForImeEnabled() {
return !CommandLine::ForCurrentProcess()->HasSwitch(
switches::kDisableOmniboxAutoCompletionForIme);
}
|
C
|
Chrome
| 0 |
CVE-2015-1274
|
https://www.cvedetails.com/cve/CVE-2015-1274/
|
CWE-254
|
https://github.com/chromium/chromium/commit/d27468a832d5316884bd02f459cbf493697fd7e1
|
d27468a832d5316884bd02f459cbf493697fd7e1
|
Switch to equalIgnoringASCIICase throughout modules/accessibility
BUG=627682
Review-Url: https://codereview.chromium.org/2793913007
Cr-Commit-Position: refs/heads/master@{#461858}
|
bool AXLayoutObject::ariaHasPopup() const {
return elementAttributeValue(aria_haspopupAttr);
}
|
bool AXLayoutObject::ariaHasPopup() const {
return elementAttributeValue(aria_haspopupAttr);
}
|
C
|
Chrome
| 0 |
CVE-2018-12714
|
https://www.cvedetails.com/cve/CVE-2018-12714/
|
CWE-787
|
https://github.com/torvalds/linux/commit/81f9c4e4177d31ced6f52a89bb70e93bfb77ca03
|
81f9c4e4177d31ced6f52a89bb70e93bfb77ca03
|
Merge tag 'trace-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"This contains a few fixes and a clean up.
- a bad merge caused an "endif" to go in the wrong place in
scripts/Makefile.build
- softirq tracing fix for tracing that corrupts lockdep and causes a
false splat
- histogram documentation typo fixes
- fix a bad memory reference when passing in no filter to the filter
code
- simplify code by using the swap macro instead of open coding the
swap"
* tag 'trace-v4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Fix SKIP_STACK_VALIDATION=1 build due to bad merge with -mrecord-mcount
tracing: Fix some errors in histogram documentation
tracing: Use swap macro in update_max_tr
softirq: Reorder trace_softirqs_on to prevent lockdep splat
tracing: Check for no filter when processing event filters
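The swap() cleanup mentioned in the list is a one-liner in update_max_tr(); a hedged sketch (buffer field names are assumptions here):
/* before: tmp = tr->trace_buffer.buffer; plus two more hand-written moves */
swap(tr->trace_buffer.buffer, tr->max_buffer.buffer);	/* after: one line */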
|
int unregister_ftrace_export(struct trace_export *export)
{
int ret;
mutex_lock(&ftrace_export_lock);
ret = rm_ftrace_export(&ftrace_exports_list, export);
mutex_unlock(&ftrace_export_lock);
return ret;
}
|
int unregister_ftrace_export(struct trace_export *export)
{
int ret;
mutex_lock(&ftrace_export_lock);
ret = rm_ftrace_export(&ftrace_exports_list, export);
mutex_unlock(&ftrace_export_lock);
return ret;
}
|
C
|
linux
| 0 |
CVE-2017-18241
|
https://www.cvedetails.com/cve/CVE-2017-18241/
|
CWE-476
|
https://github.com/torvalds/linux/commit/d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
|
d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
|
f2fs: fix a panic caused by NULL flush_cmd_control
When the fs is mounted with the noflush_merge option, boot fails due to an
illegal fcc address in f2fs_issue_flush():
if (!test_opt(sbi, FLUSH_MERGE)) {
ret = submit_flush_wait(sbi);
atomic_inc(&fcc->issued_flush); -> Here, fcc illegal
return ret;
}
Signed-off-by: Yunlei He <[email protected]>
Signed-off-by: Jaegeuk Kim <[email protected]>
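A minimal sketch of one shape the fix can take, per the message: either always allocate the flush_cmd_control, or guard the counter update when flush merging is off. The sketch below shows the guard variant; field and helper names are assumptions here.
int f2fs_issue_flush(struct f2fs_sb_info *sbi)
{
	struct flush_cmd_control *fcc = SM_I(sbi)->fcc_info;

	if (!test_opt(sbi, FLUSH_MERGE)) {
		int ret = submit_flush_wait(sbi);

		if (fcc)		/* fcc may be NULL with noflush_merge */
			atomic_inc(&fcc->issued_flush);
		return ret;
	}

	/* ... merged-flush path, which always has a valid fcc ... */
	return 0;
}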
|
static unsigned long __find_rev_next_bit(const unsigned long *addr,
unsigned long size, unsigned long offset)
{
const unsigned long *p = addr + BIT_WORD(offset);
unsigned long result = size;
unsigned long tmp;
if (offset >= size)
return size;
size -= (offset & ~(BITS_PER_LONG - 1));
offset %= BITS_PER_LONG;
while (1) {
if (*p == 0)
goto pass;
tmp = __reverse_ulong((unsigned char *)p);
tmp &= ~0UL >> offset;
if (size < BITS_PER_LONG)
tmp &= (~0UL << (BITS_PER_LONG - size));
if (tmp)
goto found;
pass:
if (size <= BITS_PER_LONG)
break;
size -= BITS_PER_LONG;
offset = 0;
p++;
}
return result;
found:
return result - size + __reverse_ffs(tmp);
}
|
static unsigned long __find_rev_next_bit(const unsigned long *addr,
unsigned long size, unsigned long offset)
{
const unsigned long *p = addr + BIT_WORD(offset);
unsigned long result = size;
unsigned long tmp;
if (offset >= size)
return size;
size -= (offset & ~(BITS_PER_LONG - 1));
offset %= BITS_PER_LONG;
while (1) {
if (*p == 0)
goto pass;
tmp = __reverse_ulong((unsigned char *)p);
tmp &= ~0UL >> offset;
if (size < BITS_PER_LONG)
tmp &= (~0UL << (BITS_PER_LONG - size));
if (tmp)
goto found;
pass:
if (size <= BITS_PER_LONG)
break;
size -= BITS_PER_LONG;
offset = 0;
p++;
}
return result;
found:
return result - size + __reverse_ffs(tmp);
}
|
C
|
linux
| 0 |
CVE-2016-4478
|
https://www.cvedetails.com/cve/CVE-2016-4478/
|
CWE-119
|
https://github.com/atheme/atheme/commit/87580d767868360d2fed503980129504da84b63e
|
87580d767868360d2fed503980129504da84b63e
|
Do not copy more bytes than were allocated
|
char *xmlrpc_string(char *buf, const char *value)
{
char encoded[XMLRPC_BUFSIZE];
*buf = '\0';
xmlrpc_char_encode(encoded, value);
snprintf(buf, XMLRPC_BUFSIZE, "<string>%s</string>", encoded);
return buf;
}
|
char *xmlrpc_string(char *buf, const char *value)
{
char encoded[XMLRPC_BUFSIZE];
*buf = '\0';
xmlrpc_char_encode(encoded, value);
snprintf(buf, XMLRPC_BUFSIZE, "<string>%s</string>", encoded);
return buf;
}
|
C
|
atheme
| 0 |
CVE-2015-6768
|
https://www.cvedetails.com/cve/CVE-2015-6768/
|
CWE-264
|
https://github.com/chromium/chromium/commit/4c8b008f055f79e622344627fed7f820375a4f01
|
4c8b008f055f79e622344627fed7f820375a4f01
|
Change Document::detach() to RELEASE_ASSERT all subframes are gone.
BUG=556724,577105
Review URL: https://codereview.chromium.org/1667573002
Cr-Commit-Position: refs/heads/master@{#373642}
|
void Document::scheduleUseShadowTreeUpdate(SVGUseElement& element)
{
m_useElementsNeedingUpdate.add(&element);
scheduleLayoutTreeUpdateIfNeeded();
}
|
void Document::scheduleUseShadowTreeUpdate(SVGUseElement& element)
{
m_useElementsNeedingUpdate.add(&element);
scheduleLayoutTreeUpdateIfNeeded();
}
|
C
|
Chrome
| 0 |
CVE-2018-6111
|
https://www.cvedetails.com/cve/CVE-2018-6111/
|
CWE-20
|
https://github.com/chromium/chromium/commit/3c8e4852477d5b1e2da877808c998dc57db9460f
|
3c8e4852477d5b1e2da877808c998dc57db9460f
|
DevTools: speculative fix for crash in NetworkHandler::Disable
This keeps BrowserContext* and StoragePartition* instead of
RenderProcessHost* in an attemp to resolve UAF of RenderProcessHost
upon closure of DevTools front-end.
Bug: 801117, 783067, 780694
Change-Id: I6c2cca60cc0c29f0949d189cf918769059f80c1b
Reviewed-on: https://chromium-review.googlesource.com/876657
Commit-Queue: Andrey Kosyakov <[email protected]>
Reviewed-by: Dmitry Gozman <[email protected]>
Cr-Commit-Position: refs/heads/master@{#531157}
|
void RenderFrameDevToolsAgentHost::GrantPolicy() {
if (!frame_host_)
return;
uint32_t process_id = frame_host_->GetProcess()->GetID();
if (base::FeatureList::IsEnabled(features::kNetworkService))
GetNetworkService()->SetRawHeadersAccess(process_id, true);
ChildProcessSecurityPolicyImpl::GetInstance()->GrantReadRawCookies(
process_id);
}
|
void RenderFrameDevToolsAgentHost::GrantPolicy() {
if (!frame_host_)
return;
uint32_t process_id = frame_host_->GetProcess()->GetID();
if (base::FeatureList::IsEnabled(features::kNetworkService))
GetNetworkService()->SetRawHeadersAccess(process_id, true);
ChildProcessSecurityPolicyImpl::GetInstance()->GrantReadRawCookies(
process_id);
}
|
C
|
Chrome
| 0 |
CVE-2018-1000880
|
https://www.cvedetails.com/cve/CVE-2018-1000880/
|
CWE-415
|
https://github.com/libarchive/libarchive/pull/1105/commits/9c84b7426660c09c18cc349f6d70b5f8168b5680
|
9c84b7426660c09c18cc349f6d70b5f8168b5680
|
warc: consume data once read
The warc decoder only used read ahead; it wouldn't actually consume
data that had previously been printed. This means that if you specify
an invalid content length, it will just reprint the same data over
and over and over again until it hits the desired length.
This means that a WARC resource with e.g.
Content-Length: 666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666665
but only a few hundred bytes of data, causes a quasi-infinite loop.
Consume data in subsequent calls to _warc_read.
Found with an AFL + afl-rb + qsym setup.
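A hedged sketch of the read loop after the fix: whatever the previous call handed to the caller is consumed before reading ahead again, so an oversized Content-Length can no longer replay the same window indefinitely. __archive_read_consume() is a real libarchive internal helper; the surrounding fields are assumptions.
static int _warc_read(struct archive_read *a, const void **buf,
		      size_t *bsz, int64_t *off)
{
	struct warc_s *w = a->format->data;

	/* consume what the previous call reported before reading ahead again */
	if (w->unconsumed) {
		__archive_read_consume(a, w->unconsumed);
		w->unconsumed = 0U;
	}

	/* ... read ahead, clamp to the remaining content length, and record
	 * how much of the returned window to consume on the next call ... */
	return (ARCHIVE_OK);
}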
|
_warc_find_eoh(const char *buf, size_t bsz)
{
static const char _marker[] = "\r\n\r\n";
const char *hit = xmemmem(buf, bsz, _marker, sizeof(_marker) - 1U);
if (hit != NULL) {
hit += sizeof(_marker) - 1U;
}
return hit;
}
|
_warc_find_eoh(const char *buf, size_t bsz)
{
static const char _marker[] = "\r\n\r\n";
const char *hit = xmemmem(buf, bsz, _marker, sizeof(_marker) - 1U);
if (hit != NULL) {
hit += sizeof(_marker) - 1U;
}
return hit;
}
|
C
|
libarchive
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/3a353ebdb7753a3fbeb401c4c0e0f3358ccbb90b
|
3a353ebdb7753a3fbeb401c4c0e0f3358ccbb90b
|
Support pausing media when a context is frozen.
Media is resumed when the context is unpaused. This feature will be used
for bfcache and pausing iframes feature policy.
BUG=907125
Change-Id: Ic3925ea1a4544242b7bf0b9ad8c9cb9f63976bbd
Reviewed-on: https://chromium-review.googlesource.com/c/1410126
Commit-Queue: Dave Tapuska <[email protected]>
Reviewed-by: Kentaro Hara <[email protected]>
Reviewed-by: Mounir Lamouri <[email protected]>
Cr-Commit-Position: refs/heads/master@{#623319}
|
DefaultAudioDestinationNode::DefaultAudioDestinationNode(
BaseAudioContext& context,
const WebAudioLatencyHint& latency_hint)
: AudioDestinationNode(context) {
SetHandler(DefaultAudioDestinationHandler::Create(*this, latency_hint));
}
|
DefaultAudioDestinationNode::DefaultAudioDestinationNode(
BaseAudioContext& context,
const WebAudioLatencyHint& latency_hint)
: AudioDestinationNode(context) {
SetHandler(DefaultAudioDestinationHandler::Create(*this, latency_hint));
}
|
C
|
Chrome
| 0 |
CVE-2016-5220
|
https://www.cvedetails.com/cve/CVE-2016-5220/
|
CWE-200
|
https://github.com/chromium/chromium/commit/c6f0d22d508a551a40fc8bd7418941b77435aac3
|
c6f0d22d508a551a40fc8bd7418941b77435aac3
|
omnibox: experiment with restoring placeholder when caret shows
Shows the "Search Google or type a URL" omnibox placeholder even when
the caret (text edit cursor) is showing / when focused. views::Textfield
works this way, as does <input placeholder="">. Omnibox and the NTP's
"fakebox" are exceptions in this regard and this experiment makes this
more consistent.
[email protected]
BUG=955585
Change-Id: I23c299c0973f2feb43f7a2be3bd3425a80b06c2d
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1582315
Commit-Queue: Dan Beam <[email protected]>
Reviewed-by: Tommy Li <[email protected]>
Cr-Commit-Position: refs/heads/master@{#654279}
|
void OmniboxViewViews::UpdateContextMenu(ui::SimpleMenuModel* menu_contents) {
if (send_tab_to_self::ShouldOfferFeature(
location_bar_view_->GetWebContents())) {
send_tab_to_self::RecordSendTabToSelfClickResult(
send_tab_to_self::kOmniboxMenu, SendTabToSelfClickResult::kShowItem);
int index = menu_contents->GetIndexOfCommandId(IDS_APP_UNDO);
if (index)
menu_contents->InsertSeparatorAt(index++, ui::NORMAL_SEPARATOR);
menu_contents->InsertItemWithStringIdAt(index, IDC_SEND_TAB_TO_SELF,
IDS_CONTEXT_MENU_SEND_TAB_TO_SELF);
menu_contents->SetIcon(index,
gfx::Image(*send_tab_to_self::GetImageSkia()));
menu_contents->InsertSeparatorAt(++index, ui::NORMAL_SEPARATOR);
}
int paste_position = menu_contents->GetIndexOfCommandId(IDS_APP_PASTE);
DCHECK_GE(paste_position, 0);
menu_contents->InsertItemWithStringIdAt(paste_position + 1, IDC_PASTE_AND_GO,
IDS_PASTE_AND_GO);
if (base::FeatureList::IsEnabled(omnibox::kQueryInOmnibox)) {
if (!read_only() && !model()->user_input_in_progress() &&
text() != controller()->GetLocationBarModel()->GetFormattedFullURL()) {
menu_contents->AddItemWithStringId(IDS_SHOW_URL, IDS_SHOW_URL);
}
}
menu_contents->AddSeparator(ui::NORMAL_SEPARATOR);
menu_contents->AddItemWithStringId(IDC_EDIT_SEARCH_ENGINES,
IDS_EDIT_SEARCH_ENGINES);
}
|
void OmniboxViewViews::UpdateContextMenu(ui::SimpleMenuModel* menu_contents) {
if (send_tab_to_self::ShouldOfferFeature(
location_bar_view_->GetWebContents())) {
send_tab_to_self::RecordSendTabToSelfClickResult(
send_tab_to_self::kOmniboxMenu, SendTabToSelfClickResult::kShowItem);
int index = menu_contents->GetIndexOfCommandId(IDS_APP_UNDO);
if (index)
menu_contents->InsertSeparatorAt(index++, ui::NORMAL_SEPARATOR);
menu_contents->InsertItemWithStringIdAt(index, IDC_SEND_TAB_TO_SELF,
IDS_CONTEXT_MENU_SEND_TAB_TO_SELF);
menu_contents->SetIcon(index,
gfx::Image(*send_tab_to_self::GetImageSkia()));
menu_contents->InsertSeparatorAt(++index, ui::NORMAL_SEPARATOR);
}
int paste_position = menu_contents->GetIndexOfCommandId(IDS_APP_PASTE);
DCHECK_GE(paste_position, 0);
menu_contents->InsertItemWithStringIdAt(paste_position + 1, IDC_PASTE_AND_GO,
IDS_PASTE_AND_GO);
if (base::FeatureList::IsEnabled(omnibox::kQueryInOmnibox)) {
if (!read_only() && !model()->user_input_in_progress() &&
text() != controller()->GetLocationBarModel()->GetFormattedFullURL()) {
menu_contents->AddItemWithStringId(IDS_SHOW_URL, IDS_SHOW_URL);
}
}
menu_contents->AddSeparator(ui::NORMAL_SEPARATOR);
menu_contents->AddItemWithStringId(IDC_EDIT_SEARCH_ENGINES,
IDS_EDIT_SEARCH_ENGINES);
}
|
C
|
Chrome
| 0 |
CVE-2016-10266
|
https://www.cvedetails.com/cve/CVE-2016-10266/
|
CWE-369
|
https://github.com/vadz/libtiff/commit/438274f938e046d33cb0e1230b41da32ffe223e1
|
438274f938e046d33cb0e1230b41da32ffe223e1
|
* libtiff/tif_read.c, libtiff/tiffiop.h: fix uint32 overflow in
TIFFReadEncodedStrip() that caused an integer division by zero.
Reported by Agostino Sarubbo.
Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2596
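A hedged sketch of the kind of guard this entry refers to: before rowsperstrip is used as a divisor when sizing strips, a zero or overflowed value has to be clamped. TIFFhowmany_32() is a real libtiff macro; the clamp placement is an assumption here.
/* make sure the divisor cannot be zero before computing strip counts */
uint32 rowsperstrip = td->td_rowsperstrip;
if (rowsperstrip == 0)
	rowsperstrip = td->td_imagelength ? td->td_imagelength : 1;
uint32 stripsperplane = TIFFhowmany_32(td->td_imagelength, rowsperstrip);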
|
TIFFReadRawStrip1(TIFF* tif, uint32 strip, void* buf, tmsize_t size,
const char* module)
{
TIFFDirectory *td = &tif->tif_dir;
if (!_TIFFFillStriles( tif ))
return ((tmsize_t)(-1));
assert((tif->tif_flags&TIFF_NOREADRAW)==0);
if (!isMapped(tif)) {
tmsize_t cc;
if (!SeekOK(tif, td->td_stripoffset[strip])) {
TIFFErrorExt(tif->tif_clientdata, module,
"Seek error at scanline %lu, strip %lu",
(unsigned long) tif->tif_row, (unsigned long) strip);
return ((tmsize_t)(-1));
}
cc = TIFFReadFile(tif, buf, size);
if (cc != size) {
#if defined(__WIN32__) && (defined(_MSC_VER) || defined(__MINGW32__))
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu; got %I64u bytes, expected %I64u",
(unsigned long) tif->tif_row,
(unsigned __int64) cc,
(unsigned __int64) size);
#else
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu; got %llu bytes, expected %llu",
(unsigned long) tif->tif_row,
(unsigned long long) cc,
(unsigned long long) size);
#endif
return ((tmsize_t)(-1));
}
} else {
tmsize_t ma,mb;
tmsize_t n;
ma=(tmsize_t)td->td_stripoffset[strip];
mb=ma+size;
if ((td->td_stripoffset[strip] > (uint64)TIFF_TMSIZE_T_MAX)||(ma>tif->tif_size))
n=0;
else if ((mb<ma)||(mb<size)||(mb>tif->tif_size))
n=tif->tif_size-ma;
else
n=size;
if (n!=size) {
#if defined(__WIN32__) && (defined(_MSC_VER) || defined(__MINGW32__))
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu, strip %lu; got %I64u bytes, expected %I64u",
(unsigned long) tif->tif_row,
(unsigned long) strip,
(unsigned __int64) n,
(unsigned __int64) size);
#else
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu, strip %lu; got %llu bytes, expected %llu",
(unsigned long) tif->tif_row,
(unsigned long) strip,
(unsigned long long) n,
(unsigned long long) size);
#endif
return ((tmsize_t)(-1));
}
_TIFFmemcpy(buf, tif->tif_base + ma,
size);
}
return (size);
}
|
TIFFReadRawStrip1(TIFF* tif, uint32 strip, void* buf, tmsize_t size,
const char* module)
{
TIFFDirectory *td = &tif->tif_dir;
if (!_TIFFFillStriles( tif ))
return ((tmsize_t)(-1));
assert((tif->tif_flags&TIFF_NOREADRAW)==0);
if (!isMapped(tif)) {
tmsize_t cc;
if (!SeekOK(tif, td->td_stripoffset[strip])) {
TIFFErrorExt(tif->tif_clientdata, module,
"Seek error at scanline %lu, strip %lu",
(unsigned long) tif->tif_row, (unsigned long) strip);
return ((tmsize_t)(-1));
}
cc = TIFFReadFile(tif, buf, size);
if (cc != size) {
#if defined(__WIN32__) && (defined(_MSC_VER) || defined(__MINGW32__))
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu; got %I64u bytes, expected %I64u",
(unsigned long) tif->tif_row,
(unsigned __int64) cc,
(unsigned __int64) size);
#else
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu; got %llu bytes, expected %llu",
(unsigned long) tif->tif_row,
(unsigned long long) cc,
(unsigned long long) size);
#endif
return ((tmsize_t)(-1));
}
} else {
tmsize_t ma,mb;
tmsize_t n;
ma=(tmsize_t)td->td_stripoffset[strip];
mb=ma+size;
if ((td->td_stripoffset[strip] > (uint64)TIFF_TMSIZE_T_MAX)||(ma>tif->tif_size))
n=0;
else if ((mb<ma)||(mb<size)||(mb>tif->tif_size))
n=tif->tif_size-ma;
else
n=size;
if (n!=size) {
#if defined(__WIN32__) && (defined(_MSC_VER) || defined(__MINGW32__))
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu, strip %lu; got %I64u bytes, expected %I64u",
(unsigned long) tif->tif_row,
(unsigned long) strip,
(unsigned __int64) n,
(unsigned __int64) size);
#else
TIFFErrorExt(tif->tif_clientdata, module,
"Read error at scanline %lu, strip %lu; got %llu bytes, expected %llu",
(unsigned long) tif->tif_row,
(unsigned long) strip,
(unsigned long long) n,
(unsigned long long) size);
#endif
return ((tmsize_t)(-1));
}
_TIFFmemcpy(buf, tif->tif_base + ma,
size);
}
return (size);
}
|
C
|
libtiff
| 0 |
CVE-2019-15903
|
https://www.cvedetails.com/cve/CVE-2019-15903/
|
CWE-611
|
https://github.com/libexpat/libexpat/commit/c20b758c332d9a13afbbb276d30db1d183a85d43
|
c20b758c332d9a13afbbb276d30db1d183a85d43
|
xmlparse.c: Deny internal entities closing the doctype
|
XML_GetCurrentLineNumber(XML_Parser parser) {
if (parser == NULL)
return 0;
if (parser->m_eventPtr && parser->m_eventPtr >= parser->m_positionPtr) {
XmlUpdatePosition(parser->m_encoding, parser->m_positionPtr,
parser->m_eventPtr, &parser->m_position);
parser->m_positionPtr = parser->m_eventPtr;
}
return parser->m_position.lineNumber + 1;
}
|
XML_GetCurrentLineNumber(XML_Parser parser) {
if (parser == NULL)
return 0;
if (parser->m_eventPtr && parser->m_eventPtr >= parser->m_positionPtr) {
XmlUpdatePosition(parser->m_encoding, parser->m_positionPtr,
parser->m_eventPtr, &parser->m_position);
parser->m_positionPtr = parser->m_eventPtr;
}
return parser->m_position.lineNumber + 1;
}
|
C
|
libexpat
| 0 |
CVE-2018-6057
|
https://www.cvedetails.com/cve/CVE-2018-6057/
|
CWE-732
|
https://github.com/chromium/chromium/commit/c0c8978849ac57e4ecd613ddc8ff7852a2054734
|
c0c8978849ac57e4ecd613ddc8ff7852a2054734
|
android: Fix sensors in device service.
This patch fixes a bug that prevented more than one sensor data
to be available at once when using the device motion/orientation
API.
The issue was introduced by this other patch [1] which fixed
some security-related issues in the way shared memory region
handles are managed throughout Chromium (more details at
https://crbug.com/789959).
The device service's sensor implementation doesn't work
correctly because it assumes it is possible to create a
writable mapping of a given shared memory region at any
time. This assumption is not correct on Android: once an
Ashmem region has been turned read-only, such mappings
are no longer possible.
To fix the implementation, this CL changes the following:
- PlatformSensor used to require moving a
mojo::ScopedSharedBufferMapping into the newly-created
instance. Said mapping being owned by and destroyed
with the PlatformSensor instance.
With this patch, the constructor instead takes a single
pointer to the corresponding SensorReadingSharedBuffer,
i.e. the area in memory where the sensor-specific
reading data is located, and can be either updated
or read-from.
Note that the PlatformSensor does not own the mapping
anymore.
- PlatformSensorProviderBase holds the *single* writable
mapping that is used to store all SensorReadingSharedBuffer
buffers. It is created just after the region itself,
and thus can be used even after the region's access
mode has been changed to read-only.
Addresses within the mapping will be passed to
PlatformSensor constructors, computed from the
mapping's base address plus a sensor-specific
offset.
The mapping is now owned by the
PlatformSensorProviderBase instance.
Note that, security-wise, nothing changes, because all
mojo::ScopedSharedBufferMapping before the patch actually
pointed to the same writable-page in memory anyway.
Since unit or integration tests didn't catch the regression
when [1] was submitted, this patch was tested manually by
running a newly-built Chrome apk in the Android emulator
and on a real device running Android O.
[1] https://chromium-review.googlesource.com/c/chromium/src/+/805238
BUG=805146
[email protected],[email protected],[email protected],[email protected]
Change-Id: I7d60a1cad278f48c361d2ece5a90de10eb082b44
Reviewed-on: https://chromium-review.googlesource.com/891180
Commit-Queue: David Turner <[email protected]>
Reviewed-by: Reilly Grant <[email protected]>
Reviewed-by: Matthew Cary <[email protected]>
Reviewed-by: Alexandr Ilin <[email protected]>
Cr-Commit-Position: refs/heads/master@{#532607}
|
void PlatformSensorFusion::StopSensor() {
for (const auto& pair : source_sensors_)
pair.second->StopListening(this);
fusion_algorithm_->Reset();
}
|
void PlatformSensorFusion::StopSensor() {
for (const auto& pair : source_sensors_)
pair.second->StopListening(this);
fusion_algorithm_->Reset();
}
|
C
|
Chrome
| 0 |
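The ownership change described in the sensor commit above — a single writable mapping owned by the provider, with each sensor holding only a raw pointer at a per-sensor offset — can be illustrated with a small C sketch. All names and types below are hypothetical stand-ins, not Chromium classes.
```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-sensor reading slot inside the shared buffer. */
typedef struct {
    double values[4];
} sensor_reading_t;

/* The provider owns the one writable mapping of the shared region. */
typedef struct {
    uint8_t *mapping;       /* base address of the writable mapping */
    size_t   mapping_size;  /* total size of the shared region */
} sensor_provider_t;

/* A sensor no longer owns a mapping; it only borrows a pointer into it. */
typedef struct {
    sensor_reading_t *reading;  /* provider->mapping + per-sensor offset */
} platform_sensor_t;

/* Hand out the slot for sensor `index`; destroying the sensor unmaps nothing. */
static platform_sensor_t make_sensor(const sensor_provider_t *provider,
                                     size_t index)
{
    platform_sensor_t s;
    s.reading = (sensor_reading_t *)(provider->mapping +
                                     index * sizeof(sensor_reading_t));
    return s;
}
```
The point mirrored from the description: the mapping's lifetime is tied to the provider, so turning the underlying region read-only later does not invalidate the sensors' pointers into the already-created writable mapping.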
CVE-2019-1549
|
https://www.cvedetails.com/cve/CVE-2019-1549/
|
CWE-330
|
https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=1b0fe00e2704b5e20334a16d3c9099d1ba2ef1be
|
1b0fe00e2704b5e20334a16d3c9099d1ba2ef1be
| null |
size_t rand_drbg_get_entropy(RAND_DRBG *drbg,
unsigned char **pout,
int entropy, size_t min_len, size_t max_len,
int prediction_resistance)
{
size_t ret = 0;
size_t entropy_available = 0;
RAND_POOL *pool;
if (drbg->parent != NULL && drbg->strength > drbg->parent->strength) {
/*
* We currently don't support the algorithm from NIST SP 800-90C
* 10.1.2 to use a weaker DRBG as source
*/
RANDerr(RAND_F_RAND_DRBG_GET_ENTROPY, RAND_R_PARENT_STRENGTH_TOO_WEAK);
return 0;
}
if (drbg->seed_pool != NULL) {
pool = drbg->seed_pool;
pool->entropy_requested = entropy;
} else {
pool = rand_pool_new(entropy, drbg->secure, min_len, max_len);
if (pool == NULL)
return 0;
}
if (drbg->parent != NULL) {
size_t bytes_needed = rand_pool_bytes_needed(pool, 1 /*entropy_factor*/);
unsigned char *buffer = rand_pool_add_begin(pool, bytes_needed);
if (buffer != NULL) {
size_t bytes = 0;
/*
* Get random from parent, include our state as additional input.
* Our lock is already held, but we need to lock our parent before
* generating bits from it. (Note: taking the lock will be a no-op
* if locking if drbg->parent->lock == NULL.)
*/
rand_drbg_lock(drbg->parent);
if (RAND_DRBG_generate(drbg->parent,
buffer, bytes_needed,
prediction_resistance,
NULL, 0) != 0)
bytes = bytes_needed;
drbg->reseed_next_counter
= tsan_load(&drbg->parent->reseed_prop_counter);
rand_drbg_unlock(drbg->parent);
rand_pool_add_end(pool, bytes, 8 * bytes);
entropy_available = rand_pool_entropy_available(pool);
}
} else {
if (prediction_resistance) {
/*
* We don't have any entropy sources that comply with the NIST
* standard to provide prediction resistance (see NIST SP 800-90C,
* Section 5.4).
*/
RANDerr(RAND_F_RAND_DRBG_GET_ENTROPY,
RAND_R_PREDICTION_RESISTANCE_NOT_SUPPORTED);
goto err;
}
/* Get entropy by polling system entropy sources. */
entropy_available = rand_pool_acquire_entropy(pool);
}
if (entropy_available > 0) {
ret = rand_pool_length(pool);
*pout = rand_pool_detach(pool);
}
err:
if (drbg->seed_pool == NULL)
rand_pool_free(pool);
return ret;
}
|
size_t rand_drbg_get_entropy(RAND_DRBG *drbg,
unsigned char **pout,
int entropy, size_t min_len, size_t max_len,
int prediction_resistance)
{
size_t ret = 0;
size_t entropy_available = 0;
RAND_POOL *pool;
if (drbg->parent != NULL && drbg->strength > drbg->parent->strength) {
/*
* We currently don't support the algorithm from NIST SP 800-90C
* 10.1.2 to use a weaker DRBG as source
*/
RANDerr(RAND_F_RAND_DRBG_GET_ENTROPY, RAND_R_PARENT_STRENGTH_TOO_WEAK);
return 0;
}
if (drbg->seed_pool != NULL) {
pool = drbg->seed_pool;
pool->entropy_requested = entropy;
} else {
pool = rand_pool_new(entropy, drbg->secure, min_len, max_len);
if (pool == NULL)
return 0;
}
if (drbg->parent != NULL) {
size_t bytes_needed = rand_pool_bytes_needed(pool, 1 /*entropy_factor*/);
unsigned char *buffer = rand_pool_add_begin(pool, bytes_needed);
if (buffer != NULL) {
size_t bytes = 0;
/*
* Get random from parent, include our state as additional input.
* Our lock is already held, but we need to lock our parent before
* generating bits from it. (Note: taking the lock will be a no-op
* if locking if drbg->parent->lock == NULL.)
*/
rand_drbg_lock(drbg->parent);
if (RAND_DRBG_generate(drbg->parent,
buffer, bytes_needed,
prediction_resistance,
NULL, 0) != 0)
bytes = bytes_needed;
drbg->reseed_next_counter
= tsan_load(&drbg->parent->reseed_prop_counter);
rand_drbg_unlock(drbg->parent);
rand_pool_add_end(pool, bytes, 8 * bytes);
entropy_available = rand_pool_entropy_available(pool);
}
} else {
if (prediction_resistance) {
/*
* We don't have any entropy sources that comply with the NIST
* standard to provide prediction resistance (see NIST SP 800-90C,
* Section 5.4).
*/
RANDerr(RAND_F_RAND_DRBG_GET_ENTROPY,
RAND_R_PREDICTION_RESISTANCE_NOT_SUPPORTED);
goto err;
}
/* Get entropy by polling system entropy sources. */
entropy_available = rand_pool_acquire_entropy(pool);
}
if (entropy_available > 0) {
ret = rand_pool_length(pool);
*pout = rand_pool_detach(pool);
}
err:
if (drbg->seed_pool == NULL)
rand_pool_free(pool);
return ret;
}
|
C
|
openssl
| 0 |
CVE-2012-6638
|
https://www.cvedetails.com/cve/CVE-2012-6638/
|
CWE-399
|
https://github.com/torvalds/linux/commit/fdf5af0daf8019cec2396cdef8fb042d80fe71fa
|
fdf5af0daf8019cec2396cdef8fb042d80fe71fa
|
tcp: drop SYN+FIN messages
Denys Fedoryshchenko reported that SYN+FIN attacks were bringing his
linux machines to their limits.
Don't call conn_request() if the TCP flags include both the SYN and FIN flags
Reported-by: Denys Fedoryshchenko <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th)
{
struct tcp_sock *tp = tcp_sk(sk);
/* Check if we get a new urgent pointer - normally not. */
if (th->urg)
tcp_check_urg(sk, th);
/* Do we wait for any urgent data? - normally not... */
if (tp->urg_data == TCP_URG_NOTYET) {
u32 ptr = tp->urg_seq - ntohl(th->seq) + (th->doff * 4) -
th->syn;
/* Is the urgent pointer pointing into this packet? */
if (ptr < skb->len) {
u8 tmp;
if (skb_copy_bits(skb, ptr, &tmp, 1))
BUG();
tp->urg_data = TCP_URG_VALID | tmp;
if (!sock_flag(sk, SOCK_DEAD))
sk->sk_data_ready(sk, 0);
}
}
}
|
static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th)
{
struct tcp_sock *tp = tcp_sk(sk);
/* Check if we get a new urgent pointer - normally not. */
if (th->urg)
tcp_check_urg(sk, th);
/* Do we wait for any urgent data? - normally not... */
if (tp->urg_data == TCP_URG_NOTYET) {
u32 ptr = tp->urg_seq - ntohl(th->seq) + (th->doff * 4) -
th->syn;
/* Is the urgent pointer pointing into this packet? */
if (ptr < skb->len) {
u8 tmp;
if (skb_copy_bits(skb, ptr, &tmp, 1))
BUG();
tp->urg_data = TCP_URG_VALID | tmp;
if (!sock_flag(sk, SOCK_DEAD))
sk->sk_data_ready(sk, 0);
}
}
}
|
C
|
linux
| 0 |
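A minimal sketch of the guard described by the SYN+FIN commit above: a listener refuses to treat a segment that carries both SYN and FIN as a connection request. The flag struct and helper below are simplified stand-ins, not the kernel's tcp_rcv_state_process() code.
```c
#include <stdbool.h>

/* Simplified view of the TCP header flags; the kernel uses struct tcphdr. */
struct tcp_flags_stub {
    unsigned syn : 1;
    unsigned fin : 1;
};

/* Returns true only for a plain SYN; SYN+FIN is dropped instead of being
 * handed to the (hypothetical) conn_request() path. */
static bool accept_as_connection_request(const struct tcp_flags_stub *th)
{
    if (th->fin)
        return false;   /* SYN+FIN: drop, never call conn_request() */
    return th->syn != 0;
}
```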
CVE-2017-5044
|
https://www.cvedetails.com/cve/CVE-2017-5044/
|
CWE-119
|
https://github.com/chromium/chromium/commit/62154472bd2c43e1790dd1bd8a527c1db9118d88
|
62154472bd2c43e1790dd1bd8a527c1db9118d88
|
bluetooth: Implement getAvailability()
This change implements the getAvailability() method for
navigator.bluetooth as defined in the specification.
Bug: 707640
Change-Id: I9e9b3e7f8ea7f259e975f71cb6d9570e5f04b479
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1651516
Reviewed-by: Chris Harrelson <[email protected]>
Reviewed-by: Giovanni Ortuño Urquidi <[email protected]>
Reviewed-by: Kinuko Yasuda <[email protected]>
Commit-Queue: Ovidio de Jesús Ruiz-Henríquez <[email protected]>
Auto-Submit: Ovidio de Jesús Ruiz-Henríquez <[email protected]>
Cr-Commit-Position: refs/heads/master@{#688987}
|
BluetoothAdapter::GetPendingAdvertisementsForTesting() const {
return {};
}
|
BluetoothAdapter::GetPendingAdvertisementsForTesting() const {
return {};
}
|
C
|
Chrome
| 0 |
CVE-2016-3899
|
https://www.cvedetails.com/cve/CVE-2016-3899/
|
CWE-284
|
https://android.googlesource.com/platform/frameworks/av/+/97837bb6cbac21ea679843a0037779d3834bed64
|
97837bb6cbac21ea679843a0037779d3834bed64
|
OMXCodec: check IMemory::pointer() before using allocation
Bug: 29421811
Change-Id: I0a73ba12bae4122f1d89fc92e5ea4f6a96cd1ed1
|
void OMXCodec::findMatchingCodecs(
const char *mime,
bool createEncoder, const char *matchComponentName,
uint32_t flags,
Vector<CodecNameAndQuirks> *matchingCodecs) {
matchingCodecs->clear();
const sp<IMediaCodecList> list = MediaCodecList::getInstance();
if (list == NULL) {
return;
}
size_t index = 0;
for (;;) {
ssize_t matchIndex =
list->findCodecByType(mime, createEncoder, index);
if (matchIndex < 0) {
break;
}
index = matchIndex + 1;
const sp<MediaCodecInfo> info = list->getCodecInfo(matchIndex);
CHECK(info != NULL);
const char *componentName = info->getCodecName();
if (matchComponentName && strcmp(componentName, matchComponentName)) {
continue;
}
if (((flags & kSoftwareCodecsOnly) && IsSoftwareCodec(componentName)) ||
((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
(!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {
ssize_t index = matchingCodecs->add();
CodecNameAndQuirks *entry = &matchingCodecs->editItemAt(index);
entry->mName = String8(componentName);
entry->mQuirks = getComponentQuirks(info);
ALOGV("matching '%s' quirks 0x%08x",
entry->mName.string(), entry->mQuirks);
}
}
if (flags & kPreferSoftwareCodecs) {
matchingCodecs->sort(CompareSoftwareCodecsFirst);
}
}
|
void OMXCodec::findMatchingCodecs(
const char *mime,
bool createEncoder, const char *matchComponentName,
uint32_t flags,
Vector<CodecNameAndQuirks> *matchingCodecs) {
matchingCodecs->clear();
const sp<IMediaCodecList> list = MediaCodecList::getInstance();
if (list == NULL) {
return;
}
size_t index = 0;
for (;;) {
ssize_t matchIndex =
list->findCodecByType(mime, createEncoder, index);
if (matchIndex < 0) {
break;
}
index = matchIndex + 1;
const sp<MediaCodecInfo> info = list->getCodecInfo(matchIndex);
CHECK(info != NULL);
const char *componentName = info->getCodecName();
if (matchComponentName && strcmp(componentName, matchComponentName)) {
continue;
}
if (((flags & kSoftwareCodecsOnly) && IsSoftwareCodec(componentName)) ||
((flags & kHardwareCodecsOnly) && !IsSoftwareCodec(componentName)) ||
(!(flags & (kSoftwareCodecsOnly | kHardwareCodecsOnly)))) {
ssize_t index = matchingCodecs->add();
CodecNameAndQuirks *entry = &matchingCodecs->editItemAt(index);
entry->mName = String8(componentName);
entry->mQuirks = getComponentQuirks(info);
ALOGV("matching '%s' quirks 0x%08x",
entry->mName.string(), entry->mQuirks);
}
}
if (flags & kPreferSoftwareCodecs) {
matchingCodecs->sort(CompareSoftwareCodecsFirst);
}
}
|
C
|
Android
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/a151041807a7e3c702c5f935a742368333aa69d4
|
a151041807a7e3c702c5f935a742368333aa69d4
|
Don't move the window when resizing
When the GPU code resizes the window to make it a non-0x0 size, it should not move the window.
BUG=137523
Review URL: https://chromiumcodereview.appspot.com/10829206
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@150176 0039d316-1c4b-4281-b951-d872f2087c98
|
void SendOnIOThreadTask(int host_id, IPC::Message* msg) {
GpuProcessHost* host = GpuProcessHost::FromID(host_id);
if (host)
host->Send(msg);
else
delete msg;
}
|
void SendOnIOThreadTask(int host_id, IPC::Message* msg) {
GpuProcessHost* host = GpuProcessHost::FromID(host_id);
if (host)
host->Send(msg);
else
delete msg;
}
|
C
|
Chrome
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/30f5bc981921d9c0221c82f38d80bd2d5c86a022
|
30f5bc981921d9c0221c82f38d80bd2d5c86a022
|
browser: Extract AutocompleteProvider from autocomplete.*
This is the first part of splitting autocomplete.* into small pieces, one for
each class.
BUG=94842
[email protected]
[email protected]
Review URL: https://chromiumcodereview.appspot.com/10663015
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@144107 0039d316-1c4b-4281-b951-d872f2087c98
|
void AutocompleteProvider::UpdateStarredStateOfMatches() {
|
void AutocompleteProvider::UpdateStarredStateOfMatches() {
if (matches_.empty())
return;
if (!profile_)
return;
BookmarkModel* bookmark_model = profile_->GetBookmarkModel();
if (!bookmark_model || !bookmark_model->IsLoaded())
return;
for (ACMatches::iterator i = matches_.begin(); i != matches_.end(); ++i)
i->starred = bookmark_model->IsBookmarked(GURL(i->destination_url));
}
|
C
|
Chrome
| 1 |
CVE-2018-6085
|
https://www.cvedetails.com/cve/CVE-2018-6085/
|
CWE-20
|
https://github.com/chromium/chromium/commit/df5b1e1f88e013bc96107cc52c4a4f33a8238444
|
df5b1e1f88e013bc96107cc52c4a4f33a8238444
|
Blockfile cache: fix long-standing sparse + evict reentrancy problem
Thanks to nedwilliamson@ (on gmail) for an alternative perspective
plus a reduction to make fixing this much easier.
Bug: 826626, 518908, 537063, 802886
Change-Id: Ibfa01416f9a8e7f7b361e4f93b4b6b134728b85f
Reviewed-on: https://chromium-review.googlesource.com/985052
Reviewed-by: Matt Menke <[email protected]>
Commit-Queue: Maks Orlovich <[email protected]>
Cr-Commit-Position: refs/heads/master@{#547103}
|
void BackendIO::RunTask(const base::Closure& task) {
operation_ = OP_RUN_TASK;
task_ = task;
}
|
void BackendIO::RunTask(const base::Closure& task) {
operation_ = OP_RUN_TASK;
task_ = task;
}
|
C
|
Chrome
| 0 |
CVE-2019-16921
|
https://www.cvedetails.com/cve/CVE-2019-16921/
|
CWE-665
|
https://github.com/torvalds/linux/commit/df7e40425813c50cd252e6f5e348a81ef1acae56
|
df7e40425813c50cd252e6f5e348a81ef1acae56
|
RDMA/hns: Fix init resp when alloc ucontext
The data in resp will be copied from kernel to userspace, thus it needs to
be initialized to zeros to avoid copying uninited stack memory.
Reported-by: Dan Carpenter <[email protected]>
Fixes: e088a685eae9 ("RDMA/hns: Support rq record doorbell for the user space")
Signed-off-by: Yixian Liu <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
static struct ib_ucontext *hns_roce_alloc_ucontext(struct ib_device *ib_dev,
struct ib_udata *udata)
{
int ret = 0;
struct hns_roce_ucontext *context;
struct hns_roce_ib_alloc_ucontext_resp resp = {};
struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
resp.qp_tab_size = hr_dev->caps.num_qps;
context = kmalloc(sizeof(*context), GFP_KERNEL);
if (!context)
return ERR_PTR(-ENOMEM);
ret = hns_roce_uar_alloc(hr_dev, &context->uar);
if (ret)
goto error_fail_uar_alloc;
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB) {
INIT_LIST_HEAD(&context->page_list);
mutex_init(&context->page_mutex);
}
ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
if (ret)
goto error_fail_copy_to_udata;
return &context->ibucontext;
error_fail_copy_to_udata:
hns_roce_uar_free(hr_dev, &context->uar);
error_fail_uar_alloc:
kfree(context);
return ERR_PTR(ret);
}
|
static struct ib_ucontext *hns_roce_alloc_ucontext(struct ib_device *ib_dev,
struct ib_udata *udata)
{
int ret = 0;
struct hns_roce_ucontext *context;
struct hns_roce_ib_alloc_ucontext_resp resp;
struct hns_roce_dev *hr_dev = to_hr_dev(ib_dev);
resp.qp_tab_size = hr_dev->caps.num_qps;
context = kmalloc(sizeof(*context), GFP_KERNEL);
if (!context)
return ERR_PTR(-ENOMEM);
ret = hns_roce_uar_alloc(hr_dev, &context->uar);
if (ret)
goto error_fail_uar_alloc;
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_RECORD_DB) {
INIT_LIST_HEAD(&context->page_list);
mutex_init(&context->page_mutex);
}
ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
if (ret)
goto error_fail_copy_to_udata;
return &context->ibucontext;
error_fail_copy_to_udata:
hns_roce_uar_free(hr_dev, &context->uar);
error_fail_uar_alloc:
kfree(context);
return ERR_PTR(ret);
}
|
C
|
linux
| 1 |
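The hns_roce record above already shows the fix inline (`resp = {}`); as a standalone illustration, the C sketch below shows why a response struct must be fully zeroed before being copied to userspace: any field or padding byte the code never assigns would otherwise leak stack contents. The copy helper is a hypothetical stand-in for ib_copy_to_udata().
```c
#include <stddef.h>
#include <string.h>

/* Hypothetical response layout; only qp_tab_size is ever assigned below. */
struct ucontext_resp_stub {
    unsigned int qp_tab_size;
    unsigned int reserved[3];
};

/* Stand-in for ib_copy_to_udata(): copies kernel memory out to the user. */
int copy_to_user_stub(void *dst, const void *src, size_t len);

static int fill_resp(void *user_dst, unsigned int qp_tab_size)
{
    struct ucontext_resp_stub resp;

    /* Zero everything first so unassigned fields and padding never leak. */
    memset(&resp, 0, sizeof(resp));
    resp.qp_tab_size = qp_tab_size;

    return copy_to_user_stub(user_dst, &resp, sizeof(resp));
}
```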
null | null | null |
https://github.com/chromium/chromium/commit/62b8b6e168a12263aab6b88dbef0b900cc37309f
|
62b8b6e168a12263aab6b88dbef0b900cc37309f
|
Add partial magnifier to ash palette.
The partial magnifier will magnify a small portion of the screen, similar to a spyglass.
TEST=./out/Release/ash_unittests --gtest_filter=PartialMagnificationControllerTest.*
[email protected]
BUG=616112
Review-Url: https://codereview.chromium.org/2239553002
Cr-Commit-Position: refs/heads/master@{#414124}
|
void PaletteTray::OnMouseEnteredView() {}
|
void PaletteTray::OnMouseEnteredView() {}
|
C
|
Chrome
| 0 |
CVE-2011-4112
|
https://www.cvedetails.com/cve/CVE-2011-4112/
|
CWE-264
|
https://github.com/torvalds/linux/commit/550fd08c2cebad61c548def135f67aba284c6162
|
550fd08c2cebad61c548def135f67aba284c6162
|
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption, of
course, and need to be fixed up. This patch identifies those drivers and marks
them as not being able to support the safe transmission of skbs by clearing the
IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <[email protected]>
CC: Karsten Keil <[email protected]>
CC: "David S. Miller" <[email protected]>
CC: Jay Vosburgh <[email protected]>
CC: Andy Gospodarek <[email protected]>
CC: Patrick McHardy <[email protected]>
CC: Krzysztof Halasa <[email protected]>
CC: "John W. Linville" <[email protected]>
CC: Greg Kroah-Hartman <[email protected]>
CC: Marcel Holtmann <[email protected]>
CC: Johannes Berg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static void bond_arp_send(struct net_device *slave_dev, int arp_op, __be32 dest_ip, __be32 src_ip, unsigned short vlan_id)
{
struct sk_buff *skb;
pr_debug("arp %d on slave %s: dst %x src %x vid %d\n", arp_op,
slave_dev->name, dest_ip, src_ip, vlan_id);
skb = arp_create(arp_op, ETH_P_ARP, dest_ip, slave_dev, src_ip,
NULL, slave_dev->dev_addr, NULL);
if (!skb) {
pr_err("ARP packet allocation failed\n");
return;
}
if (vlan_id) {
skb = vlan_put_tag(skb, vlan_id);
if (!skb) {
pr_err("failed to insert VLAN tag\n");
return;
}
}
arp_xmit(skb);
}
|
static void bond_arp_send(struct net_device *slave_dev, int arp_op, __be32 dest_ip, __be32 src_ip, unsigned short vlan_id)
{
struct sk_buff *skb;
pr_debug("arp %d on slave %s: dst %x src %x vid %d\n", arp_op,
slave_dev->name, dest_ip, src_ip, vlan_id);
skb = arp_create(arp_op, ETH_P_ARP, dest_ip, slave_dev, src_ip,
NULL, slave_dev->dev_addr, NULL);
if (!skb) {
pr_err("ARP packet allocation failed\n");
return;
}
if (vlan_id) {
skb = vlan_put_tag(skb, vlan_id);
if (!skb) {
pr_err("failed to insert VLAN tag\n");
return;
}
}
arp_xmit(skb);
}
|
C
|
linux
| 0 |
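A hedged sketch of the per-driver change the IFF_TX_SKB_SHARING commit describes: after ether_setup() has set the flag, a driver that keeps state in its skbs clears it again in priv_flags. The flag value and types below are illustrative stand-ins, not the kernel's definitions.
```c
#include <stdint.h>

#define IFF_TX_SKB_SHARING_STUB 0x10000u  /* illustrative value only */

struct net_device_stub {
    uint32_t priv_flags;
};

/* Stand-in for ether_setup(): among other things, it sets the sharing flag. */
static void ether_setup_stub(struct net_device_stub *dev)
{
    dev->priv_flags |= IFF_TX_SKB_SHARING_STUB;
}

/* A driver that holds per-skb state must undo that flag after setup. */
static void my_driver_setup(struct net_device_stub *dev)
{
    ether_setup_stub(dev);
    dev->priv_flags &= ~IFF_TX_SKB_SHARING_STUB;
}
```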
CVE-2012-2888
|
https://www.cvedetails.com/cve/CVE-2012-2888/
|
CWE-399
|
https://github.com/chromium/chromium/commit/3b0d77670a0613f409110817455d2137576b485a
|
3b0d77670a0613f409110817455d2137576b485a
|
Revert 143656 - Add an IPC channel between the NaCl loader process and the renderer.
BUG=116317
TEST=ppapi, nacl tests, manual testing for experimental IPC proxy.
Review URL: https://chromiumcodereview.appspot.com/10641016
[email protected]
Review URL: https://chromiumcodereview.appspot.com/10625007
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@143665 0039d316-1c4b-4281-b951-d872f2087c98
|
BufferSizeStatus GetBufferStatus(const char* data, size_t len) {
if (len < sizeof(NaClIPCAdapter::NaClMessageHeader))
return MESSAGE_IS_TRUNCATED;
const NaClIPCAdapter::NaClMessageHeader* header =
reinterpret_cast<const NaClIPCAdapter::NaClMessageHeader*>(data);
uint32 message_size =
sizeof(NaClIPCAdapter::NaClMessageHeader) + header->payload_size;
if (len == message_size)
return MESSAGE_IS_COMPLETE;
if (len > message_size)
return MESSAGE_HAS_EXTRA_DATA;
return MESSAGE_IS_TRUNCATED;
}
|
BufferSizeStatus GetBufferStatus(const char* data, size_t len) {
if (len < sizeof(NaClIPCAdapter::NaClMessageHeader))
return MESSAGE_IS_TRUNCATED;
const NaClIPCAdapter::NaClMessageHeader* header =
reinterpret_cast<const NaClIPCAdapter::NaClMessageHeader*>(data);
uint32 message_size =
sizeof(NaClIPCAdapter::NaClMessageHeader) + header->payload_size;
if (len == message_size)
return MESSAGE_IS_COMPLETE;
if (len > message_size)
return MESSAGE_HAS_EXTRA_DATA;
return MESSAGE_IS_TRUNCATED;
}
|
C
|
Chrome
| 0 |
CVE-2014-3645
|
https://www.cvedetails.com/cve/CVE-2014-3645/
|
CWE-20
|
https://github.com/torvalds/linux/commit/bfd0a56b90005f8c8a004baf407ad90045c2b11e
|
bfd0a56b90005f8c8a004baf407ad90045c2b11e
|
nEPT: Nested INVEPT
If we let L1 use EPT, we should probably also support the INVEPT instruction.
In our current nested EPT implementation, when L1 changes its EPT table
for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in
the course of this modification already calls INVEPT. But if last level
of shadow page is unsync not all L1's changes to EPT12 are intercepted,
which means roots need to be synced when L1 calls INVEPT. Global INVEPT
should not be different since roots are synced by kvm_mmu_load() each
time EPTP02 changes.
Reviewed-by: Xiao Guangrong <[email protected]>
Signed-off-by: Nadav Har'El <[email protected]>
Signed-off-by: Jun Nakajima <[email protected]>
Signed-off-by: Xinhao Xu <[email protected]>
Signed-off-by: Yang Zhang <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
static void kvm_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
{
struct kvm_mmu_page *s;
for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn) {
if (s->unsync)
continue;
WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
__kvm_unsync_page(vcpu, s);
}
}
|
static void kvm_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
{
struct kvm_mmu_page *s;
for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn) {
if (s->unsync)
continue;
WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
__kvm_unsync_page(vcpu, s);
}
}
|
C
|
linux
| 0 |
CVE-2015-7665
|
https://www.cvedetails.com/cve/CVE-2015-7665/
|
CWE-200
|
https://git.savannah.gnu.org/cgit/wget.git/commit/?id=075d7556964f5a871a73c22ac4b69f5361295099
|
075d7556964f5a871a73c22ac4b69f5361295099
| null |
ftp_retrieve_dirs (struct url *u, struct fileinfo *f, ccon *con)
{
char *container = NULL;
int container_size = 0;
for (; f; f = f->next)
{
int size;
char *odir, *newdir;
if (opt.quota && total_downloaded_bytes > opt.quota)
break;
if (f->type != FT_DIRECTORY)
continue;
/* Allocate u->dir off stack, but reallocate only if a larger
string is needed. It's a pity there's no "realloca" for an
item on the bottom of the stack. */
size = strlen (u->dir) + 1 + strlen (f->name) + 1;
if (size > container_size)
container = (char *)alloca (size);
newdir = container;
odir = u->dir;
if (*odir == '\0'
|| (*odir == '/' && *(odir + 1) == '\0'))
/* If ODIR is empty or just "/", simply append f->name to
ODIR. (In the former case, to preserve u->dir being
relative; in the latter case, to avoid double slash.) */
sprintf (newdir, "%s%s", odir, f->name);
else
/* Else, use a separator. */
sprintf (newdir, "%s/%s", odir, f->name);
DEBUGP (("Composing new CWD relative to the initial directory.\n"));
DEBUGP ((" odir = '%s'\n f->name = '%s'\n newdir = '%s'\n\n",
odir, f->name, newdir));
if (!accdir (newdir))
{
logprintf (LOG_VERBOSE, _("\
Not descending to %s as it is excluded/not-included.\n"),
quote (newdir));
continue;
}
con->st &= ~DONE_CWD;
odir = xstrdup (u->dir); /* because url_set_dir will free
u->dir. */
url_set_dir (u, newdir);
ftp_retrieve_glob (u, con, GLOB_GETALL);
url_set_dir (u, odir);
xfree (odir);
/* Set the time-stamp? */
}
if (opt.quota && total_downloaded_bytes > opt.quota)
return QUOTEXC;
else
return RETROK;
}
|
ftp_retrieve_dirs (struct url *u, struct fileinfo *f, ccon *con)
{
char *container = NULL;
int container_size = 0;
for (; f; f = f->next)
{
int size;
char *odir, *newdir;
if (opt.quota && total_downloaded_bytes > opt.quota)
break;
if (f->type != FT_DIRECTORY)
continue;
/* Allocate u->dir off stack, but reallocate only if a larger
string is needed. It's a pity there's no "realloca" for an
item on the bottom of the stack. */
size = strlen (u->dir) + 1 + strlen (f->name) + 1;
if (size > container_size)
container = (char *)alloca (size);
newdir = container;
odir = u->dir;
if (*odir == '\0'
|| (*odir == '/' && *(odir + 1) == '\0'))
/* If ODIR is empty or just "/", simply append f->name to
ODIR. (In the former case, to preserve u->dir being
relative; in the latter case, to avoid double slash.) */
sprintf (newdir, "%s%s", odir, f->name);
else
/* Else, use a separator. */
sprintf (newdir, "%s/%s", odir, f->name);
DEBUGP (("Composing new CWD relative to the initial directory.\n"));
DEBUGP ((" odir = '%s'\n f->name = '%s'\n newdir = '%s'\n\n",
odir, f->name, newdir));
if (!accdir (newdir))
{
logprintf (LOG_VERBOSE, _("\
Not descending to %s as it is excluded/not-included.\n"),
quote (newdir));
continue;
}
con->st &= ~DONE_CWD;
odir = xstrdup (u->dir); /* because url_set_dir will free
u->dir. */
url_set_dir (u, newdir);
ftp_retrieve_glob (u, con, GLOB_GETALL);
url_set_dir (u, odir);
xfree (odir);
/* Set the time-stamp? */
}
if (opt.quota && total_downloaded_bytes > opt.quota)
return QUOTEXC;
else
return RETROK;
}
|
C
|
savannah
| 0 |
CVE-2019-15922
|
https://www.cvedetails.com/cve/CVE-2019-15922/
|
CWE-476
|
https://github.com/torvalds/linux/commit/58ccd2d31e502c37e108b285bf3d343eb00c235b
|
58ccd2d31e502c37e108b285bf3d343eb00c235b
|
paride/pf: Fix potential NULL pointer dereference
Syzkaller report this:
pf: pf version 1.04, major 47, cluster 64, nice 0
pf: No ATAPI disk detected
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN PTI
CPU: 0 PID: 9887 Comm: syz-executor.0 Tainted: G C 5.1.0-rc3+ #8
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:pf_init+0x7af/0x1000 [pf]
Code: 46 77 d2 48 89 d8 48 c1 e8 03 80 3c 28 00 74 08 48 89 df e8 03 25 a6 d2 4c 8b 23 49 8d bc 24 80 05 00 00 48 89 f8 48 c1 e8 03 <80> 3c 28 00 74 05 e8 e6 24 a6 d2 49 8b bc 24 80 05 00 00 e8 79 34
RSP: 0018:ffff8881abcbf998 EFLAGS: 00010202
RAX: 00000000000000b0 RBX: ffffffffc1e4a8a8 RCX: ffffffffaec50788
RDX: 0000000000039b10 RSI: ffffc9000153c000 RDI: 0000000000000580
RBP: dffffc0000000000 R08: ffffed103ee44e59 R09: ffffed103ee44e59
R10: 0000000000000001 R11: ffffed103ee44e58 R12: 0000000000000000
R13: ffffffffc1e4b028 R14: 0000000000000000 R15: 0000000000000020
FS: 00007f1b78a91700(0000) GS:ffff8881f7200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6d72b207f8 CR3: 00000001d5790004 CR4: 00000000007606f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
? 0xffffffffc1e50000
do_one_initcall+0xbc/0x47d init/main.c:901
do_init_module+0x1b5/0x547 kernel/module.c:3456
load_module+0x6405/0x8c10 kernel/module.c:3804
__do_sys_finit_module+0x162/0x190 kernel/module.c:3898
do_syscall_64+0x9f/0x450 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462e99
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1b78a90c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f1b78a90c70 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f1b78a916bc
R13: 00000000004bcefa R14: 00000000006f6fb0 R15: 0000000000000004
Modules linked in: pf(+) paride gpio_tps65218 tps65218 i2c_cht_wc ati_remote dc395x act_meta_skbtcindex act_ife ife ecdh_generic rc_xbox_dvd sky81452_regulator v4l2_fwnode leds_blinkm snd_usb_hiface comedi(C) aes_ti slhc cfi_cmdset_0020 mtd cfi_util sx8654 mdio_gpio of_mdio fixed_phy mdio_bitbang libphy alcor_pci matrix_keymap hid_uclogic usbhid scsi_transport_fc videobuf2_v4l2 videobuf2_dma_sg snd_soc_pcm179x_spi snd_soc_pcm179x_codec i2c_demux_pinctrl mdev snd_indigodj isl6405 mii enc28j60 cmac adt7316_i2c(C) adt7316(C) fmc_trivial fmc nf_reject_ipv4 authenc rc_dtt200u rtc_ds1672 dvb_usb_dibusb_mc dvb_usb_dibusb_mc_common dib3000mc dibx000_common dvb_usb_dibusb_common dvb_usb dvb_core videobuf2_common videobuf2_vmalloc videobuf2_memops regulator_haptic adf7242 mac802154 ieee802154 s5h1409 da9034_ts snd_intel8x0m wmi cx24120 usbcore sdhci_cadence sdhci_pltfm sdhci mmc_core joydev i2c_algo_bit scsi_transport_iscsi iscsi_boot_sysfs ves1820 lockd grace nfs_acl auth_rpcgss sunrp
c
ip_vs snd_soc_adau7002 snd_cs4281 snd_rawmidi gameport snd_opl3_lib snd_seq_device snd_hwdep snd_ac97_codec ad7418 hid_primax hid snd_soc_cs4265 snd_soc_core snd_pcm_dmaengine snd_pcm snd_timer ac97_bus snd_compress snd soundcore ti_adc108s102 eeprom_93cx6 i2c_algo_pca mlxreg_hotplug st_pressure st_sensors industrialio_triggered_buffer kfifo_buf industrialio v4l2_common videodev media snd_soc_adau_utils rc_pinnacle_grey rc_core pps_gpio leds_lm3692x nandcore ledtrig_pattern iptable_security iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun mousedev ppdev tpm kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel ide_pci_generic aes_x86_64 piix crypto_simd input_leds psmouse cryp
td
glue_helper ide_core intel_agp serio_raw intel_gtt agpgart ata_generic i2c_piix4 pata_acpi parport_pc parport rtc_cmos floppy sch_fq_codel ip_tables x_tables sha1_ssse3 sha1_generic ipv6 [last unloaded: paride]
Dumping ftrace buffer:
(ftrace buffer empty)
---[ end trace 7a818cf5f210d79e ]---
If alloc_disk fails in pf_init_units, pf->disk will be
NULL; however, pf_detect and pf_exit do not check for
this before freeing it, which may result in a NULL pointer dereference.
Also, when register_blkdev fails, blk_cleanup_queue() and
blk_mq_free_tag_set() should be called to free resources.
Reported-by: Hulk Robot <[email protected]>
Fixes: 6ce59025f118 ("paride/pf: cleanup queues when detection fails")
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
static void do_pf_write(void)
{
ps_set_intr(do_pf_write_start, NULL, 0, nice);
}
|
static void do_pf_write(void)
{
ps_set_intr(do_pf_write_start, NULL, 0, nice);
}
|
C
|
linux
| 0 |
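The paride/pf commit above boils down to checking pf->disk before tearing a unit down. A minimal C sketch of that guard, using opaque stand-ins rather than the real block-layer API:
```c
#include <stddef.h>

struct gendisk_stub;                        /* opaque stand-in for struct gendisk */
void put_disk_stub(struct gendisk_stub *d); /* stand-in for put_disk()            */

struct pf_unit_stub {
    struct gendisk_stub *disk;  /* NULL if alloc_disk() failed during init */
};

static void pf_cleanup_unit(struct pf_unit_stub *pf)
{
    /* Guard: a unit whose disk allocation failed has nothing to release. */
    if (!pf->disk)
        return;

    put_disk_stub(pf->disk);
    pf->disk = NULL;
}
```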
CVE-2016-10130
|
https://www.cvedetails.com/cve/CVE-2016-10130/
|
CWE-284
|
https://github.com/libgit2/libgit2/commit/9a64e62f0f20c9cf9b2e1609f037060eb2d8eb22
|
9a64e62f0f20c9cf9b2e1609f037060eb2d8eb22
|
http: check certificate validity before clobbering the error variable
|
static int http_stream_read(
git_smart_subtransport_stream *stream,
char *buffer,
size_t buf_size,
size_t *bytes_read)
{
http_stream *s = (http_stream *)stream;
http_subtransport *t = OWNING_SUBTRANSPORT(s);
parser_context ctx;
size_t bytes_parsed;
replay:
*bytes_read = 0;
assert(t->connected);
if (!s->sent_request) {
git_buf request = GIT_BUF_INIT;
clear_parser_state(t);
if (gen_request(&request, s, 0) < 0)
return -1;
if (git_stream_write(t->io, request.ptr, request.size, 0) < 0) {
git_buf_free(&request);
return -1;
}
git_buf_free(&request);
s->sent_request = 1;
}
if (!s->received_response) {
if (s->chunked) {
assert(s->verb == post_verb);
/* Flush, if necessary */
if (s->chunk_buffer_len > 0 &&
write_chunk(t->io, s->chunk_buffer, s->chunk_buffer_len) < 0)
return -1;
s->chunk_buffer_len = 0;
/* Write the final chunk. */
if (git_stream_write(t->io, "0\r\n\r\n", 5, 0) < 0)
return -1;
}
s->received_response = 1;
}
while (!*bytes_read && !t->parse_finished) {
size_t data_offset;
int error;
/*
* Make the parse_buffer think it's as full of data as
* the buffer, so it won't try to recv more data than
* we can put into it.
*
* data_offset is the actual data offset from which we
* should tell the parser to start reading.
*/
if (buf_size >= t->parse_buffer.len) {
t->parse_buffer.offset = 0;
} else {
t->parse_buffer.offset = t->parse_buffer.len - buf_size;
}
data_offset = t->parse_buffer.offset;
if (gitno_recv(&t->parse_buffer) < 0)
return -1;
/* This call to http_parser_execute will result in invocations of the
* on_* family of callbacks. The most interesting of these is
* on_body_fill_buffer, which is called when data is ready to be copied
* into the target buffer. We need to marshal the buffer, buf_size, and
* bytes_read parameters to this callback. */
ctx.t = t;
ctx.s = s;
ctx.buffer = buffer;
ctx.buf_size = buf_size;
ctx.bytes_read = bytes_read;
/* Set the context, call the parser, then unset the context. */
t->parser.data = &ctx;
bytes_parsed = http_parser_execute(&t->parser,
&t->settings,
t->parse_buffer.data + data_offset,
t->parse_buffer.offset - data_offset);
t->parser.data = NULL;
/* If there was a handled authentication failure, then parse_error
* will have signaled us that we should replay the request. */
if (PARSE_ERROR_REPLAY == t->parse_error) {
s->sent_request = 0;
if ((error = http_connect(t)) < 0)
return error;
goto replay;
}
if (t->parse_error == PARSE_ERROR_EXT) {
return t->error;
}
if (t->parse_error < 0)
return -1;
if (bytes_parsed != t->parse_buffer.offset - data_offset) {
giterr_set(GITERR_NET,
"HTTP parser error: %s",
http_errno_description((enum http_errno)t->parser.http_errno));
return -1;
}
}
return 0;
}
|
static int http_stream_read(
git_smart_subtransport_stream *stream,
char *buffer,
size_t buf_size,
size_t *bytes_read)
{
http_stream *s = (http_stream *)stream;
http_subtransport *t = OWNING_SUBTRANSPORT(s);
parser_context ctx;
size_t bytes_parsed;
replay:
*bytes_read = 0;
assert(t->connected);
if (!s->sent_request) {
git_buf request = GIT_BUF_INIT;
clear_parser_state(t);
if (gen_request(&request, s, 0) < 0)
return -1;
if (git_stream_write(t->io, request.ptr, request.size, 0) < 0) {
git_buf_free(&request);
return -1;
}
git_buf_free(&request);
s->sent_request = 1;
}
if (!s->received_response) {
if (s->chunked) {
assert(s->verb == post_verb);
/* Flush, if necessary */
if (s->chunk_buffer_len > 0 &&
write_chunk(t->io, s->chunk_buffer, s->chunk_buffer_len) < 0)
return -1;
s->chunk_buffer_len = 0;
/* Write the final chunk. */
if (git_stream_write(t->io, "0\r\n\r\n", 5, 0) < 0)
return -1;
}
s->received_response = 1;
}
while (!*bytes_read && !t->parse_finished) {
size_t data_offset;
int error;
/*
* Make the parse_buffer think it's as full of data as
* the buffer, so it won't try to recv more data than
* we can put into it.
*
* data_offset is the actual data offset from which we
* should tell the parser to start reading.
*/
if (buf_size >= t->parse_buffer.len) {
t->parse_buffer.offset = 0;
} else {
t->parse_buffer.offset = t->parse_buffer.len - buf_size;
}
data_offset = t->parse_buffer.offset;
if (gitno_recv(&t->parse_buffer) < 0)
return -1;
/* This call to http_parser_execute will result in invocations of the
* on_* family of callbacks. The most interesting of these is
* on_body_fill_buffer, which is called when data is ready to be copied
* into the target buffer. We need to marshal the buffer, buf_size, and
* bytes_read parameters to this callback. */
ctx.t = t;
ctx.s = s;
ctx.buffer = buffer;
ctx.buf_size = buf_size;
ctx.bytes_read = bytes_read;
/* Set the context, call the parser, then unset the context. */
t->parser.data = &ctx;
bytes_parsed = http_parser_execute(&t->parser,
&t->settings,
t->parse_buffer.data + data_offset,
t->parse_buffer.offset - data_offset);
t->parser.data = NULL;
/* If there was a handled authentication failure, then parse_error
* will have signaled us that we should replay the request. */
if (PARSE_ERROR_REPLAY == t->parse_error) {
s->sent_request = 0;
if ((error = http_connect(t)) < 0)
return error;
goto replay;
}
if (t->parse_error == PARSE_ERROR_EXT) {
return t->error;
}
if (t->parse_error < 0)
return -1;
if (bytes_parsed != t->parse_buffer.offset - data_offset) {
giterr_set(GITERR_NET,
"HTTP parser error: %s",
http_errno_description((enum http_errno)t->parser.http_errno));
return -1;
}
}
return 0;
}
|
C
|
libgit2
| 0 |
CVE-2011-3055
|
https://www.cvedetails.com/cve/CVE-2011-3055/
| null |
https://github.com/chromium/chromium/commit/e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
e9372a1bfd3588a80fcf49aa07321f0971dd6091
|
[V8] Pass Isolate to throwNotEnoughArgumentsError()
https://bugs.webkit.org/show_bug.cgi?id=86983
Reviewed by Adam Barth.
The objective is to pass Isolate around in V8 bindings.
This patch passes Isolate to throwNotEnoughArgumentsError().
No tests. No change in behavior.
* bindings/scripts/CodeGeneratorV8.pm:
(GenerateArgumentsCountCheck):
(GenerateEventConstructorCallback):
* bindings/scripts/test/V8/V8Float64Array.cpp:
(WebCore::Float64ArrayV8Internal::fooCallback):
* bindings/scripts/test/V8/V8TestActiveDOMObject.cpp:
(WebCore::TestActiveDOMObjectV8Internal::excitingFunctionCallback):
(WebCore::TestActiveDOMObjectV8Internal::postMessageCallback):
* bindings/scripts/test/V8/V8TestCustomNamedGetter.cpp:
(WebCore::TestCustomNamedGetterV8Internal::anotherFunctionCallback):
* bindings/scripts/test/V8/V8TestEventConstructor.cpp:
(WebCore::V8TestEventConstructor::constructorCallback):
* bindings/scripts/test/V8/V8TestEventTarget.cpp:
(WebCore::TestEventTargetV8Internal::itemCallback):
(WebCore::TestEventTargetV8Internal::dispatchEventCallback):
* bindings/scripts/test/V8/V8TestInterface.cpp:
(WebCore::TestInterfaceV8Internal::supplementalMethod2Callback):
(WebCore::V8TestInterface::constructorCallback):
* bindings/scripts/test/V8/V8TestMediaQueryListListener.cpp:
(WebCore::TestMediaQueryListListenerV8Internal::methodCallback):
* bindings/scripts/test/V8/V8TestNamedConstructor.cpp:
(WebCore::V8TestNamedConstructorConstructorCallback):
* bindings/scripts/test/V8/V8TestObj.cpp:
(WebCore::TestObjV8Internal::voidMethodWithArgsCallback):
(WebCore::TestObjV8Internal::intMethodWithArgsCallback):
(WebCore::TestObjV8Internal::objMethodWithArgsCallback):
(WebCore::TestObjV8Internal::methodWithSequenceArgCallback):
(WebCore::TestObjV8Internal::methodReturningSequenceCallback):
(WebCore::TestObjV8Internal::methodThatRequiresAllArgsAndThrowsCallback):
(WebCore::TestObjV8Internal::serializedValueCallback):
(WebCore::TestObjV8Internal::idbKeyCallback):
(WebCore::TestObjV8Internal::optionsObjectCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndOptionalArgCallback):
(WebCore::TestObjV8Internal::methodWithNonOptionalArgAndTwoOptionalArgsCallback):
(WebCore::TestObjV8Internal::methodWithCallbackArgCallback):
(WebCore::TestObjV8Internal::methodWithNonCallbackArgAndCallbackArgCallback):
(WebCore::TestObjV8Internal::overloadedMethod1Callback):
(WebCore::TestObjV8Internal::overloadedMethod2Callback):
(WebCore::TestObjV8Internal::overloadedMethod3Callback):
(WebCore::TestObjV8Internal::overloadedMethod4Callback):
(WebCore::TestObjV8Internal::overloadedMethod5Callback):
(WebCore::TestObjV8Internal::overloadedMethod6Callback):
(WebCore::TestObjV8Internal::overloadedMethod7Callback):
(WebCore::TestObjV8Internal::overloadedMethod11Callback):
(WebCore::TestObjV8Internal::overloadedMethod12Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod1Callback):
(WebCore::TestObjV8Internal::enabledAtRuntimeMethod2Callback):
(WebCore::TestObjV8Internal::convert1Callback):
(WebCore::TestObjV8Internal::convert2Callback):
(WebCore::TestObjV8Internal::convert3Callback):
(WebCore::TestObjV8Internal::convert4Callback):
(WebCore::TestObjV8Internal::convert5Callback):
(WebCore::TestObjV8Internal::strictFunctionCallback):
(WebCore::V8TestObj::constructorCallback):
* bindings/scripts/test/V8/V8TestSerializedScriptValueInterface.cpp:
(WebCore::TestSerializedScriptValueInterfaceV8Internal::acceptTransferListCallback):
(WebCore::V8TestSerializedScriptValueInterface::constructorCallback):
* bindings/v8/ScriptController.cpp:
(WebCore::setValueAndClosePopupCallback):
* bindings/v8/V8Proxy.cpp:
(WebCore::V8Proxy::throwNotEnoughArgumentsError):
* bindings/v8/V8Proxy.h:
(V8Proxy):
* bindings/v8/custom/V8AudioContextCustom.cpp:
(WebCore::V8AudioContext::constructorCallback):
* bindings/v8/custom/V8DataViewCustom.cpp:
(WebCore::V8DataView::getInt8Callback):
(WebCore::V8DataView::getUint8Callback):
(WebCore::V8DataView::setInt8Callback):
(WebCore::V8DataView::setUint8Callback):
* bindings/v8/custom/V8DirectoryEntryCustom.cpp:
(WebCore::V8DirectoryEntry::getDirectoryCallback):
(WebCore::V8DirectoryEntry::getFileCallback):
* bindings/v8/custom/V8IntentConstructor.cpp:
(WebCore::V8Intent::constructorCallback):
* bindings/v8/custom/V8SVGLengthCustom.cpp:
(WebCore::V8SVGLength::convertToSpecifiedUnitsCallback):
* bindings/v8/custom/V8WebGLRenderingContextCustom.cpp:
(WebCore::getObjectParameter):
(WebCore::V8WebGLRenderingContext::getAttachedShadersCallback):
(WebCore::V8WebGLRenderingContext::getExtensionCallback):
(WebCore::V8WebGLRenderingContext::getFramebufferAttachmentParameterCallback):
(WebCore::V8WebGLRenderingContext::getParameterCallback):
(WebCore::V8WebGLRenderingContext::getProgramParameterCallback):
(WebCore::V8WebGLRenderingContext::getShaderParameterCallback):
(WebCore::V8WebGLRenderingContext::getUniformCallback):
(WebCore::vertexAttribAndUniformHelperf):
(WebCore::uniformHelperi):
(WebCore::uniformMatrixHelper):
* bindings/v8/custom/V8WebKitMutationObserverCustom.cpp:
(WebCore::V8WebKitMutationObserver::constructorCallback):
(WebCore::V8WebKitMutationObserver::observeCallback):
* bindings/v8/custom/V8WebSocketCustom.cpp:
(WebCore::V8WebSocket::constructorCallback):
(WebCore::V8WebSocket::sendCallback):
* bindings/v8/custom/V8XMLHttpRequestCustom.cpp:
(WebCore::V8XMLHttpRequest::openCallback):
git-svn-id: svn://svn.chromium.org/blink/trunk@117736 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
v8::Handle<v8::Value> V8WebGLRenderingContext::uniform1fvCallback(const v8::Arguments& args)
{
INC_STATS("DOM.WebGLRenderingContext.uniform1fv()");
return vertexAttribAndUniformHelperf(args, kUniform1v);
}
|
v8::Handle<v8::Value> V8WebGLRenderingContext::uniform1fvCallback(const v8::Arguments& args)
{
INC_STATS("DOM.WebGLRenderingContext.uniform1fv()");
return vertexAttribAndUniformHelperf(args, kUniform1v);
}
|
C
|
Chrome
| 0 |
CVE-2016-3835
|
https://www.cvedetails.com/cve/CVE-2016-3835/
|
CWE-200
|
https://android.googlesource.com/platform/hardware/qcom/media/+/7558d03e6498e970b761aa44fff6b2c659202d95
|
7558d03e6498e970b761aa44fff6b2c659202d95
|
DO NOT MERGE mm-video-v4l2: venc: add checks before accessing heap pointers
Heap pointers do not point to user virtual addresses in case
of secure session.
Set them to NULL and add checks to avoid accesing them
Bug: 28815329
Bug: 28920116
Change-Id: I94fd5808e753b58654d65e175d3857ef46ffba26
|
int venc_dev::venc_set_format(int format)
{
int rc = true;
if (format)
color_format = format;
else {
color_format = 0;
rc = false;
}
return rc;
}
|
int venc_dev::venc_set_format(int format)
{
int rc = true;
if (format)
color_format = format;
else {
color_format = 0;
rc = false;
}
return rc;
}
|
C
|
Android
| 0 |
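The venc commit above reduces to one rule: in a secure session the heap pointer is deliberately NULL, so it must be checked before every access. A generic C sketch of that pattern (all names are hypothetical):
```c
#include <stddef.h>
#include <string.h>

struct venc_buffer_stub {
    void  *heap_ptr;  /* NULL in secure sessions: not a user virtual address */
    size_t length;
};

/* Copy out of the buffer only when a usable mapping actually exists. */
static int read_buffer(const struct venc_buffer_stub *b, void *out, size_t len)
{
    if (b == NULL || b->heap_ptr == NULL || len > b->length)
        return -1;  /* secure session or bad arguments: refuse the access */

    memcpy(out, b->heap_ptr, len);
    return 0;
}
```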
CVE-2014-1700
|
https://www.cvedetails.com/cve/CVE-2014-1700/
|
CWE-399
|
https://github.com/chromium/chromium/commit/d926098e2e2be270c80a5ba25ab8a611b80b8556
|
d926098e2e2be270c80a5ba25ab8a611b80b8556
|
Connect WebUSB client interface to the devices app
This provides a basic WebUSB client interface in
content/renderer. Most of the interface is unimplemented,
but this CL hooks up navigator.usb.getDevices() to the
browser's Mojo devices app to enumerate available USB
devices.
BUG=492204
Review URL: https://codereview.chromium.org/1293253002
Cr-Commit-Position: refs/heads/master@{#344881}
|
int64_t RenderFrameImpl::serviceWorkerID(WebDataSource& data_source) {
ServiceWorkerNetworkProvider* provider =
ServiceWorkerNetworkProvider::FromDocumentState(
DocumentState::FromDataSource(&data_source));
if (provider->context() && provider->context()->controller())
return provider->context()->controller()->version_id();
return kInvalidServiceWorkerVersionId;
}
|
int64_t RenderFrameImpl::serviceWorkerID(WebDataSource& data_source) {
ServiceWorkerNetworkProvider* provider =
ServiceWorkerNetworkProvider::FromDocumentState(
DocumentState::FromDataSource(&data_source));
if (provider->context() && provider->context()->controller())
return provider->context()->controller()->version_id();
return kInvalidServiceWorkerVersionId;
}
|
C
|
Chrome
| 0 |
CVE-2011-2918
|
https://www.cvedetails.com/cve/CVE-2011-2918/
|
CWE-399
|
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
|
perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Michael Cree <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: Anton Blanchard <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jason Wessel <[email protected]>
Cc: Don Zickus <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
void show_task(unsigned long *sp)
{
show_stack(NULL, sp);
}
|
void show_task(unsigned long *sp)
{
show_stack(NULL, sp);
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/dfd28b1909358445e838fb0fdf3995c77a420aa8
|
dfd28b1909358445e838fb0fdf3995c77a420aa8
|
Refactor ScrollableShelf on |space_for_icons_|
|space_for_icons_| indicates the available space in scrollable shelf
to accommodate shelf icons. Now it is an integer type. Replace it with
gfx::Rect.
Bug: 997807
Change-Id: I4f9ba3206bd69dfdaf50894de46239e676db6454
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1801326
Commit-Queue: Andrew Xu <[email protected]>
Reviewed-by: Manu Cornet <[email protected]>
Reviewed-by: Xiyuan Xia <[email protected]>
Cr-Commit-Position: refs/heads/master@{#696446}
|
const char* ScrollableShelfView::GetClassName() const {
return "ScrollableShelfView";
}
|
const char* ScrollableShelfView::GetClassName() const {
return "ScrollableShelfView";
}
|
C
|
Chrome
| 0 |
CVE-2016-10197
|
https://www.cvedetails.com/cve/CVE-2016-10197/
|
CWE-125
|
https://github.com/libevent/libevent/commit/ec65c42052d95d2c23d1d837136d1cf1d9ecef9e
|
ec65c42052d95d2c23d1d837136d1cf1d9ecef9e
|
evdns: fix searching empty hostnames
From #332:
Here follows a bug report by **Guido Vranken** via the _Tor bug bounty program_. Please credit Guido accordingly.
## Bug report
The DNS code of Libevent contains this rather obvious OOB read:
```c
static char *
search_make_new(const struct search_state *const state, int n, const char *const base_name) {
const size_t base_len = strlen(base_name);
const char need_to_append_dot = base_name[base_len - 1] == '.' ? 0 : 1;
```
If the length of ```base_name``` is 0, then line 3125 reads 1 byte before the buffer. This will trigger a crash on ASAN-protected builds.
To reproduce:
Build libevent with ASAN:
```
$ CFLAGS='-fomit-frame-pointer -fsanitize=address' ./configure && make -j4
```
Put the attached ```resolv.conf``` and ```poc.c``` in the source directory and then do:
```
$ gcc -fsanitize=address -fomit-frame-pointer poc.c .libs/libevent.a
$ ./a.out
=================================================================
==22201== ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60060000efdf at pc 0x4429da bp 0x7ffe1ed47300 sp 0x7ffe1ed472f8
READ of size 1 at 0x60060000efdf thread T0
```
P.S. we can add a check earlier, but since this is very uncommon, I didn't add it.
Fixes: #332
|
nameserver_prod_callback(evutil_socket_t fd, short events, void *arg) {
struct nameserver *const ns = (struct nameserver *) arg;
(void)fd;
(void)events;
EVDNS_LOCK(ns->base);
nameserver_send_probe(ns);
EVDNS_UNLOCK(ns->base);
}
|
nameserver_prod_callback(evutil_socket_t fd, short events, void *arg) {
struct nameserver *const ns = (struct nameserver *) arg;
(void)fd;
(void)events;
EVDNS_LOCK(ns->base);
nameserver_send_probe(ns);
EVDNS_UNLOCK(ns->base);
}
|
C
|
libevent
| 0 |
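For the evdns report quoted above, the missing guard is a simple length check before indexing base_name[base_len - 1]. A hedged sketch of just that check, lifted out of search_make_new()'s surrounding logic:
```c
#include <stddef.h>
#include <string.h>

/* Returns 1 if a trailing dot must be appended to base_name, 0 otherwise. */
static int needs_trailing_dot(const char *base_name)
{
    const size_t base_len = strlen(base_name);

    /* Guard: an empty hostname has no last character to inspect. */
    if (base_len == 0)
        return 0;

    return base_name[base_len - 1] == '.' ? 0 : 1;
}
```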
CVE-2016-5200
|
https://www.cvedetails.com/cve/CVE-2016-5200/
|
CWE-119
|
https://github.com/chromium/chromium/commit/2f19869af13bbfdcfd682a55c0d2c61c6e102475
|
2f19869af13bbfdcfd682a55c0d2c61c6e102475
|
chrome/browser/ui/webauthn: long domains may cause a line break.
As requested by UX in [1], allow long host names to split a title into
two lines. This allows us to show more of the name before eliding,
although sufficiently long names will still trigger elision.
Screenshot at
https://drive.google.com/open?id=1_V6t2CeZDAVazy3Px-OET2LnB__aEW1r.
[1] https://docs.google.com/presentation/d/1TtxkPUchyVZulqgdMcfui-68B0W-DWaFFVJEffGIbLA/edit#slide=id.g5913c4105f_1_12
Change-Id: I70f6541e0db3e9942239304de43b487a7561ca34
Bug: 870892
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1601812
Auto-Submit: Adam Langley <[email protected]>
Commit-Queue: Nina Satragno <[email protected]>
Reviewed-by: Nina Satragno <[email protected]>
Cr-Commit-Position: refs/heads/master@{#658114}
|
bool AuthenticatorPaaskSheetModel::IsActivityIndicatorVisible() const {
return true;
}
|
bool AuthenticatorPaaskSheetModel::IsActivityIndicatorVisible() const {
return true;
}
|
C
|
Chrome
| 0 |
CVE-2013-6051
|
https://www.cvedetails.com/cve/CVE-2013-6051/
| null |
https://git.savannah.gnu.org/gitweb/?p=quagga.git;a=commitdiff;h=8794e8d229dc9fe29ea31424883433d4880ef408
|
8794e8d229dc9fe29ea31424883433d4880ef408
| null |
bgp_dump_routes_attr (struct stream *s, struct attr *attr,
struct prefix *prefix)
{
unsigned long cp;
unsigned long len;
size_t aspath_lenp;
struct aspath *aspath;
/* Remember current pointer. */
cp = stream_get_endp (s);
/* Place holder of length. */
stream_putw (s, 0);
/* Origin attribute. */
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_ORIGIN);
stream_putc (s, 1);
stream_putc (s, attr->origin);
aspath = attr->aspath;
stream_putc (s, BGP_ATTR_FLAG_TRANS|BGP_ATTR_FLAG_EXTLEN);
stream_putc (s, BGP_ATTR_AS_PATH);
aspath_lenp = stream_get_endp (s);
stream_putw (s, 0);
stream_putw_at (s, aspath_lenp, aspath_put (s, aspath, 1));
/* Nexthop attribute. */
/* If it's an IPv6 prefix, don't dump the IPv4 nexthop to save space */
if(prefix != NULL
#ifdef HAVE_IPV6
&& prefix->family != AF_INET6
#endif /* HAVE_IPV6 */
)
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_NEXT_HOP);
stream_putc (s, 4);
stream_put_ipv4 (s, attr->nexthop.s_addr);
}
/* MED attribute. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_MULTI_EXIT_DISC))
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL);
stream_putc (s, BGP_ATTR_MULTI_EXIT_DISC);
stream_putc (s, 4);
stream_putl (s, attr->med);
}
/* Local preference. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_LOCAL_PREF))
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_LOCAL_PREF);
stream_putc (s, 4);
stream_putl (s, attr->local_pref);
}
/* Atomic aggregate. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_ATOMIC_AGGREGATE))
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_ATOMIC_AGGREGATE);
stream_putc (s, 0);
}
/* Aggregator. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_AGGREGATOR))
{
assert (attr->extra);
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_AGGREGATOR);
stream_putc (s, 8);
stream_putl (s, attr->extra->aggregator_as);
stream_put_ipv4 (s, attr->extra->aggregator_addr.s_addr);
}
/* Community attribute. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_COMMUNITIES))
{
if (attr->community->size * 4 > 255)
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS|BGP_ATTR_FLAG_EXTLEN);
stream_putc (s, BGP_ATTR_COMMUNITIES);
stream_putw (s, attr->community->size * 4);
}
else
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_COMMUNITIES);
stream_putc (s, attr->community->size * 4);
}
stream_put (s, attr->community->val, attr->community->size * 4);
}
#ifdef HAVE_IPV6
/* Add a MP_NLRI attribute to dump the IPv6 next hop */
if (prefix != NULL && prefix->family == AF_INET6 && attr->extra &&
(attr->extra->mp_nexthop_len == 16 || attr->extra->mp_nexthop_len == 32) )
{
int sizep;
struct attr_extra *attre = attr->extra;
stream_putc(s, BGP_ATTR_FLAG_OPTIONAL);
stream_putc(s, BGP_ATTR_MP_REACH_NLRI);
sizep = stream_get_endp (s);
/* MP header */
stream_putc (s, 0); /* Marker: Attribute length. */
stream_putw(s, AFI_IP6); /* AFI */
stream_putc(s, SAFI_UNICAST); /* SAFI */
/* Next hop */
stream_putc(s, attre->mp_nexthop_len);
stream_put(s, &attre->mp_nexthop_global, 16);
if (attre->mp_nexthop_len == 32)
stream_put(s, &attre->mp_nexthop_local, 16);
/* SNPA */
stream_putc(s, 0);
/* Prefix */
stream_put_prefix(s, prefix);
/* Set MP attribute length. */
stream_putc_at (s, sizep, (stream_get_endp (s) - sizep) - 1);
}
#endif /* HAVE_IPV6 */
/* Return total size of attribute. */
len = stream_get_endp (s) - cp - 2;
stream_putw_at (s, cp, len);
}
|
bgp_dump_routes_attr (struct stream *s, struct attr *attr,
struct prefix *prefix)
{
unsigned long cp;
unsigned long len;
size_t aspath_lenp;
struct aspath *aspath;
/* Remember current pointer. */
cp = stream_get_endp (s);
/* Place holder of length. */
stream_putw (s, 0);
/* Origin attribute. */
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_ORIGIN);
stream_putc (s, 1);
stream_putc (s, attr->origin);
aspath = attr->aspath;
stream_putc (s, BGP_ATTR_FLAG_TRANS|BGP_ATTR_FLAG_EXTLEN);
stream_putc (s, BGP_ATTR_AS_PATH);
aspath_lenp = stream_get_endp (s);
stream_putw (s, 0);
stream_putw_at (s, aspath_lenp, aspath_put (s, aspath, 1));
/* Nexthop attribute. */
/* If it's an IPv6 prefix, don't dump the IPv4 nexthop to save space */
if(prefix != NULL
#ifdef HAVE_IPV6
&& prefix->family != AF_INET6
#endif /* HAVE_IPV6 */
)
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_NEXT_HOP);
stream_putc (s, 4);
stream_put_ipv4 (s, attr->nexthop.s_addr);
}
/* MED attribute. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_MULTI_EXIT_DISC))
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL);
stream_putc (s, BGP_ATTR_MULTI_EXIT_DISC);
stream_putc (s, 4);
stream_putl (s, attr->med);
}
/* Local preference. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_LOCAL_PREF))
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_LOCAL_PREF);
stream_putc (s, 4);
stream_putl (s, attr->local_pref);
}
/* Atomic aggregate. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_ATOMIC_AGGREGATE))
{
stream_putc (s, BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_ATOMIC_AGGREGATE);
stream_putc (s, 0);
}
/* Aggregator. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_AGGREGATOR))
{
assert (attr->extra);
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_AGGREGATOR);
stream_putc (s, 8);
stream_putl (s, attr->extra->aggregator_as);
stream_put_ipv4 (s, attr->extra->aggregator_addr.s_addr);
}
/* Community attribute. */
if (attr->flag & ATTR_FLAG_BIT (BGP_ATTR_COMMUNITIES))
{
if (attr->community->size * 4 > 255)
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS|BGP_ATTR_FLAG_EXTLEN);
stream_putc (s, BGP_ATTR_COMMUNITIES);
stream_putw (s, attr->community->size * 4);
}
else
{
stream_putc (s, BGP_ATTR_FLAG_OPTIONAL|BGP_ATTR_FLAG_TRANS);
stream_putc (s, BGP_ATTR_COMMUNITIES);
stream_putc (s, attr->community->size * 4);
}
stream_put (s, attr->community->val, attr->community->size * 4);
}
#ifdef HAVE_IPV6
/* Add a MP_NLRI attribute to dump the IPv6 next hop */
if (prefix != NULL && prefix->family == AF_INET6 && attr->extra &&
(attr->extra->mp_nexthop_len == 16 || attr->extra->mp_nexthop_len == 32) )
{
int sizep;
struct attr_extra *attre = attr->extra;
stream_putc(s, BGP_ATTR_FLAG_OPTIONAL);
stream_putc(s, BGP_ATTR_MP_REACH_NLRI);
sizep = stream_get_endp (s);
/* MP header */
stream_putc (s, 0); /* Marker: Attribute length. */
stream_putw(s, AFI_IP6); /* AFI */
stream_putc(s, SAFI_UNICAST); /* SAFI */
/* Next hop */
stream_putc(s, attre->mp_nexthop_len);
stream_put(s, &attre->mp_nexthop_global, 16);
if (attre->mp_nexthop_len == 32)
stream_put(s, &attre->mp_nexthop_local, 16);
/* SNPA */
stream_putc(s, 0);
/* Prefix */
stream_put_prefix(s, prefix);
/* Set MP attribute length. */
stream_putc_at (s, sizep, (stream_get_endp (s) - sizep) - 1);
}
#endif /* HAVE_IPV6 */
/* Return total size of attribute. */
len = stream_get_endp (s) - cp - 2;
stream_putw_at (s, cp, len);
}
|
C
|
savannah
| 0 |
CVE-2019-5774
|
https://www.cvedetails.com/cve/CVE-2019-5774/
|
CWE-20
|
https://github.com/chromium/chromium/commit/b32471d5abb3b3a4fe56e1dd79871400b51a0cca
|
b32471d5abb3b3a4fe56e1dd79871400b51a0cca
|
Add .desktop file to download_file_types.asciipb
.desktop files act as shortcuts on Linux, allowing arbitrary code
execution. We should send pings for these files.
Bug: 904182
Change-Id: Ibc26141fb180e843e1ffaf3f78717a9109d2fa9a
Reviewed-on: https://chromium-review.googlesource.com/c/1344552
Reviewed-by: Varun Khaneja <[email protected]>
Commit-Queue: Daniel Rubery <[email protected]>
Cr-Commit-Position: refs/heads/master@{#611272}
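As a loose illustration of the file-type matching the entry above relies on (this is not Chromium's download_file_types.asciipb format, and the function name is made up), a suffix check in C might look like:
#include <stdbool.h>
#include <string.h>
/* Hypothetical helper: true if |path| ends with ".desktop", the kind of
 * extension test a download-type policy would key on. */
static bool has_desktop_extension(const char *path) {
    static const char ext[] = ".desktop";
    size_t path_len = strlen(path);
    size_t ext_len = sizeof(ext) - 1;
    if (path_len < ext_len)
        return false;
    return strcmp(path + path_len - ext_len, ext) == 0;
}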
|
void RecordParallelizableDownloadAverageStats(
int64_t bytes_downloaded,
const base::TimeDelta& time_span) {
if (time_span.is_zero() || bytes_downloaded <= 0)
return;
int64_t average_bandwidth =
CalculateBandwidthBytesPerSecond(bytes_downloaded, time_span);
int64_t file_size_kb = bytes_downloaded / 1024;
RecordBandwidthMetric("Download.ParallelizableDownloadBandwidth",
average_bandwidth);
UMA_HISTOGRAM_LONG_TIMES("Download.Parallelizable.DownloadTime", time_span);
UMA_HISTOGRAM_CUSTOM_COUNTS("Download.Parallelizable.FileSize", file_size_kb,
1, kMaxFileSizeKb, 50);
if (average_bandwidth > kHighBandwidthBytesPerSecond) {
UMA_HISTOGRAM_LONG_TIMES(
"Download.Parallelizable.DownloadTime.HighDownloadBandwidth",
time_span);
UMA_HISTOGRAM_CUSTOM_COUNTS(
"Download.Parallelizable.FileSize.HighDownloadBandwidth", file_size_kb,
1, kMaxFileSizeKb, 50);
}
}
|
void RecordParallelizableDownloadAverageStats(
int64_t bytes_downloaded,
const base::TimeDelta& time_span) {
if (time_span.is_zero() || bytes_downloaded <= 0)
return;
int64_t average_bandwidth =
CalculateBandwidthBytesPerSecond(bytes_downloaded, time_span);
int64_t file_size_kb = bytes_downloaded / 1024;
RecordBandwidthMetric("Download.ParallelizableDownloadBandwidth",
average_bandwidth);
UMA_HISTOGRAM_LONG_TIMES("Download.Parallelizable.DownloadTime", time_span);
UMA_HISTOGRAM_CUSTOM_COUNTS("Download.Parallelizable.FileSize", file_size_kb,
1, kMaxFileSizeKb, 50);
if (average_bandwidth > kHighBandwidthBytesPerSecond) {
UMA_HISTOGRAM_LONG_TIMES(
"Download.Parallelizable.DownloadTime.HighDownloadBandwidth",
time_span);
UMA_HISTOGRAM_CUSTOM_COUNTS(
"Download.Parallelizable.FileSize.HighDownloadBandwidth", file_size_kb,
1, kMaxFileSizeKb, 50);
}
}
|
C
|
Chrome
| 0 |
CVE-2019-5826
| null | null |
https://github.com/chromium/chromium/commit/eaf2e8bce3855d362e53034bd83f0e3aff8714e4
|
eaf2e8bce3855d362e53034bd83f0e3aff8714e4
|
[IndexedDB] Fixed force close during pending connection open
During a force close of the database, the connections to that database
are iterated and force closed. The iteration method was not safe to
modification, and if there was a pending connection waiting to open,
that request would execute once all the other connections were
destroyed and create a new connection.
This change changes the iteration method to account for new connections
that are added during the iteration.
[email protected]
Bug: 941746
Change-Id: If1b3137237dc2920ad369d6ac99c963ed9c57d0c
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1522330
Commit-Queue: Daniel Murphy <[email protected]>
Reviewed-by: Chase Phillips <[email protected]>
Cr-Commit-Position: refs/heads/master@{#640604}
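The unsafe-iteration problem the message describes is generic; here is a minimal C sketch of the fix's idea, iterating over a snapshot so a callback that registers a new connection cannot disturb the walk (all types and names are illustrative, not Chromium's):
#include <stdlib.h>
#include <string.h>
struct conn;                               /* opaque connection handle */
struct conn_list {
    struct conn **items;
    size_t count;
};
/* Force-close every connection known at call time. close_fn may append new
 * entries to |list|; they are deliberately left alone by this pass. */
static void force_close_all(struct conn_list *list,
                            void (*close_fn)(struct conn *)) {
    size_t n = list->count;
    struct conn **snapshot = malloc(n * sizeof(*snapshot));
    if (snapshot == NULL)
        return;
    memcpy(snapshot, list->items, n * sizeof(*snapshot));
    for (size_t i = 0; i < n; i++)
        close_fn(snapshot[i]);
    free(snapshot);
}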
|
Status IndexedDBDatabase::DeleteIndexOperation(
int64_t object_store_id,
int64_t index_id,
IndexedDBTransaction* transaction) {
IDB_TRACE1(
"IndexedDBDatabase::DeleteIndexOperation", "txn.id", transaction->id());
IndexedDBIndexMetadata index_metadata =
RemoveIndex(object_store_id, index_id);
Status s = metadata_coding_->DeleteIndex(
transaction->BackingStoreTransaction()->transaction(),
transaction->database()->id(), object_store_id, index_metadata);
if (!s.ok())
return s;
s = backing_store_->ClearIndex(transaction->BackingStoreTransaction(),
transaction->database()->id(), object_store_id,
index_id);
if (!s.ok()) {
AddIndex(object_store_id, std::move(index_metadata),
IndexedDBIndexMetadata::kInvalidId);
return s;
}
transaction->ScheduleAbortTask(
base::BindOnce(&IndexedDBDatabase::DeleteIndexAbortOperation, this,
object_store_id, std::move(index_metadata)));
return s;
}
|
Status IndexedDBDatabase::DeleteIndexOperation(
int64_t object_store_id,
int64_t index_id,
IndexedDBTransaction* transaction) {
IDB_TRACE1(
"IndexedDBDatabase::DeleteIndexOperation", "txn.id", transaction->id());
IndexedDBIndexMetadata index_metadata =
RemoveIndex(object_store_id, index_id);
Status s = metadata_coding_->DeleteIndex(
transaction->BackingStoreTransaction()->transaction(),
transaction->database()->id(), object_store_id, index_metadata);
if (!s.ok())
return s;
s = backing_store_->ClearIndex(transaction->BackingStoreTransaction(),
transaction->database()->id(), object_store_id,
index_id);
if (!s.ok()) {
AddIndex(object_store_id, std::move(index_metadata),
IndexedDBIndexMetadata::kInvalidId);
return s;
}
transaction->ScheduleAbortTask(
base::BindOnce(&IndexedDBDatabase::DeleteIndexAbortOperation, this,
object_store_id, std::move(index_metadata)));
return s;
}
|
C
|
Chrome
| 0 |
CVE-2016-5219
|
https://www.cvedetails.com/cve/CVE-2016-5219/
|
CWE-416
|
https://github.com/chromium/chromium/commit/a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
a4150b688a754d3d10d2ca385155b1c95d77d6ae
|
Add GL_PROGRAM_COMPLETION_QUERY_CHROMIUM
This makes the query of GL_COMPLETION_STATUS_KHR to programs much
cheaper by minimizing the round-trip to the GPU thread.
Bug: 881152, 957001
Change-Id: Iadfa798af29225e752c710ca5c25f50b3dd3101a
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1586630
Commit-Queue: Kenneth Russell <[email protected]>
Reviewed-by: Kentaro Hara <[email protected]>
Reviewed-by: Geoff Lang <[email protected]>
Reviewed-by: Kenneth Russell <[email protected]>
Cr-Commit-Position: refs/heads/master@{#657568}
|
bool GLES2DecoderImpl::CheckFramebufferValid(
Framebuffer* framebuffer,
GLenum target,
GLenum gl_error,
const char* func_name) {
if (!framebuffer) {
if (surfaceless_)
return false;
if (backbuffer_needs_clear_bits_) {
api()->glClearColorFn(0, 0, 0, BackBufferAlphaClearColor());
state_.SetDeviceColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
api()->glClearStencilFn(0);
state_.SetDeviceStencilMaskSeparate(GL_FRONT, kDefaultStencilMask);
state_.SetDeviceStencilMaskSeparate(GL_BACK, kDefaultStencilMask);
api()->glClearDepthFn(1.0f);
state_.SetDeviceDepthMask(GL_TRUE);
state_.SetDeviceCapabilityState(GL_SCISSOR_TEST, false);
ClearDeviceWindowRectangles();
bool reset_draw_buffer = false;
if ((backbuffer_needs_clear_bits_ & GL_COLOR_BUFFER_BIT) != 0 &&
back_buffer_draw_buffer_ == GL_NONE) {
reset_draw_buffer = true;
GLenum buf = GL_BACK;
if (GetBackbufferServiceId() != 0) // emulated backbuffer
buf = GL_COLOR_ATTACHMENT0;
api()->glDrawBuffersARBFn(1, &buf);
}
if (workarounds().gl_clear_broken) {
ClearFramebufferForWorkaround(backbuffer_needs_clear_bits_);
} else {
api()->glClearFn(backbuffer_needs_clear_bits_);
}
if (reset_draw_buffer) {
GLenum buf = GL_NONE;
api()->glDrawBuffersARBFn(1, &buf);
}
backbuffer_needs_clear_bits_ = 0;
RestoreClearState();
}
return true;
}
if (!framebuffer_manager()->IsComplete(framebuffer)) {
GLenum completeness = framebuffer->IsPossiblyComplete(feature_info_.get());
if (completeness != GL_FRAMEBUFFER_COMPLETE) {
LOCAL_SET_GL_ERROR(gl_error, func_name, "framebuffer incomplete");
return false;
}
if (framebuffer->GetStatus(texture_manager(), target) !=
GL_FRAMEBUFFER_COMPLETE) {
LOCAL_SET_GL_ERROR(
gl_error, func_name, "framebuffer incomplete (check)");
return false;
}
framebuffer_manager()->MarkAsComplete(framebuffer);
}
if (renderbuffer_manager()->HaveUnclearedRenderbuffers() ||
texture_manager()->HaveUnclearedMips()) {
if (!framebuffer->IsCleared()) {
ClearUnclearedAttachments(target, framebuffer);
}
}
return true;
}
|
bool GLES2DecoderImpl::CheckFramebufferValid(
Framebuffer* framebuffer,
GLenum target,
GLenum gl_error,
const char* func_name) {
if (!framebuffer) {
if (surfaceless_)
return false;
if (backbuffer_needs_clear_bits_) {
api()->glClearColorFn(0, 0, 0, BackBufferAlphaClearColor());
state_.SetDeviceColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
api()->glClearStencilFn(0);
state_.SetDeviceStencilMaskSeparate(GL_FRONT, kDefaultStencilMask);
state_.SetDeviceStencilMaskSeparate(GL_BACK, kDefaultStencilMask);
api()->glClearDepthFn(1.0f);
state_.SetDeviceDepthMask(GL_TRUE);
state_.SetDeviceCapabilityState(GL_SCISSOR_TEST, false);
ClearDeviceWindowRectangles();
bool reset_draw_buffer = false;
if ((backbuffer_needs_clear_bits_ & GL_COLOR_BUFFER_BIT) != 0 &&
back_buffer_draw_buffer_ == GL_NONE) {
reset_draw_buffer = true;
GLenum buf = GL_BACK;
if (GetBackbufferServiceId() != 0) // emulated backbuffer
buf = GL_COLOR_ATTACHMENT0;
api()->glDrawBuffersARBFn(1, &buf);
}
if (workarounds().gl_clear_broken) {
ClearFramebufferForWorkaround(backbuffer_needs_clear_bits_);
} else {
api()->glClearFn(backbuffer_needs_clear_bits_);
}
if (reset_draw_buffer) {
GLenum buf = GL_NONE;
api()->glDrawBuffersARBFn(1, &buf);
}
backbuffer_needs_clear_bits_ = 0;
RestoreClearState();
}
return true;
}
if (!framebuffer_manager()->IsComplete(framebuffer)) {
GLenum completeness = framebuffer->IsPossiblyComplete(feature_info_.get());
if (completeness != GL_FRAMEBUFFER_COMPLETE) {
LOCAL_SET_GL_ERROR(gl_error, func_name, "framebuffer incomplete");
return false;
}
if (framebuffer->GetStatus(texture_manager(), target) !=
GL_FRAMEBUFFER_COMPLETE) {
LOCAL_SET_GL_ERROR(
gl_error, func_name, "framebuffer incomplete (check)");
return false;
}
framebuffer_manager()->MarkAsComplete(framebuffer);
}
if (renderbuffer_manager()->HaveUnclearedRenderbuffers() ||
texture_manager()->HaveUnclearedMips()) {
if (!framebuffer->IsCleared()) {
ClearUnclearedAttachments(target, framebuffer);
}
}
return true;
}
|
C
|
Chrome
| 0 |
CVE-2010-1152
|
https://www.cvedetails.com/cve/CVE-2010-1152/
|
CWE-20
|
https://github.com/memcached/memcached/commit/75cc83685e103bc8ba380a57468c8f04413033f9
|
75cc83685e103bc8ba380a57468c8f04413033f9
|
Issue 102: Piping null to the server will crash it
|
static int try_read_command(conn *c) {
assert(c != NULL);
assert(c->rcurr <= (c->rbuf + c->rsize));
assert(c->rbytes > 0);
if (c->protocol == negotiating_prot || c->transport == udp_transport) {
if ((unsigned char)c->rbuf[0] == (unsigned char)PROTOCOL_BINARY_REQ) {
c->protocol = binary_prot;
} else {
c->protocol = ascii_prot;
}
if (settings.verbose > 1) {
fprintf(stderr, "%d: Client using the %s protocol\n", c->sfd,
prot_text(c->protocol));
}
}
if (c->protocol == binary_prot) {
/* Do we have the complete packet header? */
if (c->rbytes < sizeof(c->binary_header)) {
/* need more data! */
return 0;
} else {
#ifdef NEED_ALIGN
if (((long)(c->rcurr)) % 8 != 0) {
/* must realign input buffer */
memmove(c->rbuf, c->rcurr, c->rbytes);
c->rcurr = c->rbuf;
if (settings.verbose > 1) {
fprintf(stderr, "%d: Realign input buffer\n", c->sfd);
}
}
#endif
protocol_binary_request_header* req;
req = (protocol_binary_request_header*)c->rcurr;
if (settings.verbose > 1) {
/* Dump the packet before we convert it to host order */
int ii;
fprintf(stderr, "<%d Read binary protocol data:", c->sfd);
for (ii = 0; ii < sizeof(req->bytes); ++ii) {
if (ii % 4 == 0) {
fprintf(stderr, "\n<%d ", c->sfd);
}
fprintf(stderr, " 0x%02x", req->bytes[ii]);
}
fprintf(stderr, "\n");
}
c->binary_header = *req;
c->binary_header.request.keylen = ntohs(req->request.keylen);
c->binary_header.request.bodylen = ntohl(req->request.bodylen);
c->binary_header.request.cas = ntohll(req->request.cas);
if (c->binary_header.request.magic != PROTOCOL_BINARY_REQ) {
if (settings.verbose) {
fprintf(stderr, "Invalid magic: %x\n",
c->binary_header.request.magic);
}
conn_set_state(c, conn_closing);
return -1;
}
c->msgcurr = 0;
c->msgused = 0;
c->iovused = 0;
if (add_msghdr(c) != 0) {
out_string(c, "SERVER_ERROR out of memory");
return 0;
}
c->cmd = c->binary_header.request.opcode;
c->keylen = c->binary_header.request.keylen;
c->opaque = c->binary_header.request.opaque;
/* clear the returned cas value */
c->cas = 0;
dispatch_bin_command(c);
c->rbytes -= sizeof(c->binary_header);
c->rcurr += sizeof(c->binary_header);
}
} else {
char *el, *cont;
if (c->rbytes == 0)
return 0;
el = memchr(c->rcurr, '\n', c->rbytes);
if (!el) {
if (c->rbytes > 1024) {
/*
* We didn't have a '\n' in the first k. This _has_ to be a
* large multiget, if not we should just nuke the connection.
*/
char *ptr = c->rcurr;
while (*ptr == ' ') { /* ignore leading whitespaces */
++ptr;
}
if (strcmp(ptr, "get ") && strcmp(ptr, "gets ")) {
conn_set_state(c, conn_closing);
return 1;
}
}
return 0;
}
cont = el + 1;
if ((el - c->rcurr) > 1 && *(el - 1) == '\r') {
el--;
}
*el = '\0';
assert(cont <= (c->rcurr + c->rbytes));
process_command(c, c->rcurr);
c->rbytes -= (cont - c->rcurr);
c->rcurr = cont;
assert(c->rcurr <= (c->rbuf + c->rsize));
}
return 1;
}
|
static int try_read_command(conn *c) {
assert(c != NULL);
assert(c->rcurr <= (c->rbuf + c->rsize));
assert(c->rbytes > 0);
if (c->protocol == negotiating_prot || c->transport == udp_transport) {
if ((unsigned char)c->rbuf[0] == (unsigned char)PROTOCOL_BINARY_REQ) {
c->protocol = binary_prot;
} else {
c->protocol = ascii_prot;
}
if (settings.verbose > 1) {
fprintf(stderr, "%d: Client using the %s protocol\n", c->sfd,
prot_text(c->protocol));
}
}
if (c->protocol == binary_prot) {
/* Do we have the complete packet header? */
if (c->rbytes < sizeof(c->binary_header)) {
/* need more data! */
return 0;
} else {
#ifdef NEED_ALIGN
if (((long)(c->rcurr)) % 8 != 0) {
/* must realign input buffer */
memmove(c->rbuf, c->rcurr, c->rbytes);
c->rcurr = c->rbuf;
if (settings.verbose > 1) {
fprintf(stderr, "%d: Realign input buffer\n", c->sfd);
}
}
#endif
protocol_binary_request_header* req;
req = (protocol_binary_request_header*)c->rcurr;
if (settings.verbose > 1) {
/* Dump the packet before we convert it to host order */
int ii;
fprintf(stderr, "<%d Read binary protocol data:", c->sfd);
for (ii = 0; ii < sizeof(req->bytes); ++ii) {
if (ii % 4 == 0) {
fprintf(stderr, "\n<%d ", c->sfd);
}
fprintf(stderr, " 0x%02x", req->bytes[ii]);
}
fprintf(stderr, "\n");
}
c->binary_header = *req;
c->binary_header.request.keylen = ntohs(req->request.keylen);
c->binary_header.request.bodylen = ntohl(req->request.bodylen);
c->binary_header.request.cas = ntohll(req->request.cas);
if (c->binary_header.request.magic != PROTOCOL_BINARY_REQ) {
if (settings.verbose) {
fprintf(stderr, "Invalid magic: %x\n",
c->binary_header.request.magic);
}
conn_set_state(c, conn_closing);
return -1;
}
c->msgcurr = 0;
c->msgused = 0;
c->iovused = 0;
if (add_msghdr(c) != 0) {
out_string(c, "SERVER_ERROR out of memory");
return 0;
}
c->cmd = c->binary_header.request.opcode;
c->keylen = c->binary_header.request.keylen;
c->opaque = c->binary_header.request.opaque;
/* clear the returned cas value */
c->cas = 0;
dispatch_bin_command(c);
c->rbytes -= sizeof(c->binary_header);
c->rcurr += sizeof(c->binary_header);
}
} else {
char *el, *cont;
if (c->rbytes == 0)
return 0;
el = memchr(c->rcurr, '\n', c->rbytes);
if (!el)
return 0;
cont = el + 1;
if ((el - c->rcurr) > 1 && *(el - 1) == '\r') {
el--;
}
*el = '\0';
assert(cont <= (c->rcurr + c->rbytes));
process_command(c, c->rcurr);
c->rbytes -= (cont - c->rcurr);
c->rcurr = cont;
assert(c->rcurr <= (c->rbuf + c->rsize));
}
return 1;
}
|
C
|
memcached
| 1 |
CVE-2015-6761
|
https://www.cvedetails.com/cve/CVE-2015-6761/
|
CWE-362
|
https://github.com/chromium/chromium/commit/fd506b0ac6c7846ae45b5034044fe85c28ee68ac
|
fd506b0ac6c7846ae45b5034044fe85c28ee68ac
|
Fix detach with open()ed document leaving parent loading indefinitely
Change-Id: I26c2a054b9f1e5eb076acd677e1223058825f6d6
Bug: 803416
Test: fast/loader/document-open-iframe-then-detach.html
Change-Id: I26c2a054b9f1e5eb076acd677e1223058825f6d6
Reviewed-on: https://chromium-review.googlesource.com/887298
Reviewed-by: Mike West <[email protected]>
Commit-Queue: Nate Chapin <[email protected]>
Cr-Commit-Position: refs/heads/master@{#532967}
|
void FrameLoader::DispatchDidClearWindowObjectInMainWorld() {
DCHECK(frame_->GetDocument());
if (!frame_->GetDocument()->CanExecuteScripts(kNotAboutToExecuteScript))
return;
if (dispatching_did_clear_window_object_in_main_world_)
return;
AutoReset<bool> in_did_clear_window_object(
&dispatching_did_clear_window_object_in_main_world_, true);
Client()->DispatchDidClearWindowObjectInMainWorld();
}
|
void FrameLoader::DispatchDidClearWindowObjectInMainWorld() {
DCHECK(frame_->GetDocument());
if (!frame_->GetDocument()->CanExecuteScripts(kNotAboutToExecuteScript))
return;
if (dispatching_did_clear_window_object_in_main_world_)
return;
AutoReset<bool> in_did_clear_window_object(
&dispatching_did_clear_window_object_in_main_world_, true);
Client()->DispatchDidClearWindowObjectInMainWorld();
}
|
C
|
Chrome
| 0 |
CVE-2014-3570
|
https://www.cvedetails.com/cve/CVE-2014-3570/
|
CWE-310
|
https://github.com/openssl/openssl/commit/a7a44ba55cb4f884c6bc9ceac90072dea38e66d0
|
a7a44ba55cb4f884c6bc9ceac90072dea38e66d0
|
Fix for CVE-2014-3570 (with minor bn_asm.c revamp).
Reviewed-by: Emilia Kasper <[email protected]>
|
int test_gf2m_mod_div(BIO *bp,BN_CTX *ctx)
{
BIGNUM *a,*b[2],*c,*d,*e,*f;
int i, j, ret = 0;
int p0[] = {163,7,6,3,0,-1};
int p1[] = {193,15,0,-1};
a=BN_new();
b[0]=BN_new();
b[1]=BN_new();
c=BN_new();
d=BN_new();
e=BN_new();
f=BN_new();
BN_GF2m_arr2poly(p0, b[0]);
BN_GF2m_arr2poly(p1, b[1]);
for (i=0; i<num0; i++)
{
BN_bntest_rand(a, 512, 0, 0);
BN_bntest_rand(c, 512, 0, 0);
for (j=0; j < 2; j++)
{
BN_GF2m_mod_div(d, a, c, b[j], ctx);
BN_GF2m_mod_mul(e, d, c, b[j], ctx);
BN_GF2m_mod_div(f, a, e, b[j], ctx);
#if 0 /* make test uses ouput in bc but bc can't handle GF(2^m) arithmetic */
if (bp != NULL)
{
if (!results)
{
BN_print(bp,a);
BIO_puts(bp, " = ");
BN_print(bp,c);
BIO_puts(bp," * ");
BN_print(bp,d);
BIO_puts(bp, " % ");
BN_print(bp,b[j]);
BIO_puts(bp,"\n");
}
}
#endif
/* Test that ((a/c)*c)/a = 1. */
if(!BN_is_one(f))
{
fprintf(stderr,"GF(2^m) modular division test failed!\n");
goto err;
}
}
}
ret = 1;
err:
BN_free(a);
BN_free(b[0]);
BN_free(b[1]);
BN_free(c);
BN_free(d);
BN_free(e);
BN_free(f);
return ret;
}
|
int test_gf2m_mod_div(BIO *bp,BN_CTX *ctx)
{
BIGNUM *a,*b[2],*c,*d,*e,*f;
int i, j, ret = 0;
int p0[] = {163,7,6,3,0,-1};
int p1[] = {193,15,0,-1};
a=BN_new();
b[0]=BN_new();
b[1]=BN_new();
c=BN_new();
d=BN_new();
e=BN_new();
f=BN_new();
BN_GF2m_arr2poly(p0, b[0]);
BN_GF2m_arr2poly(p1, b[1]);
for (i=0; i<num0; i++)
{
BN_bntest_rand(a, 512, 0, 0);
BN_bntest_rand(c, 512, 0, 0);
for (j=0; j < 2; j++)
{
BN_GF2m_mod_div(d, a, c, b[j], ctx);
BN_GF2m_mod_mul(e, d, c, b[j], ctx);
BN_GF2m_mod_div(f, a, e, b[j], ctx);
#if 0 /* make test uses ouput in bc but bc can't handle GF(2^m) arithmetic */
if (bp != NULL)
{
if (!results)
{
BN_print(bp,a);
BIO_puts(bp, " = ");
BN_print(bp,c);
BIO_puts(bp," * ");
BN_print(bp,d);
BIO_puts(bp, " % ");
BN_print(bp,b[j]);
BIO_puts(bp,"\n");
}
}
#endif
/* Test that ((a/c)*c)/a = 1. */
if(!BN_is_one(f))
{
fprintf(stderr,"GF(2^m) modular division test failed!\n");
goto err;
}
}
}
ret = 1;
err:
BN_free(a);
BN_free(b[0]);
BN_free(b[1]);
BN_free(c);
BN_free(d);
BN_free(e);
BN_free(f);
return ret;
}
|
C
|
openssl
| 0 |
CVE-2018-6154
|
https://www.cvedetails.com/cve/CVE-2018-6154/
|
CWE-119
|
https://github.com/chromium/chromium/commit/98095c718d7580b5d6715e5bfd8698234ecb4470
|
98095c718d7580b5d6715e5bfd8698234ecb4470
|
Validate all incoming WebGLObjects.
A few entry points were missing the correct validation.
Tested with improved conformance tests in
https://github.com/KhronosGroup/WebGL/pull/2654 .
Bug: 848914
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;luci.chromium.try:linux_optional_gpu_tests_rel;luci.chromium.try:mac_optional_gpu_tests_rel;luci.chromium.try:win_optional_gpu_tests_rel
Change-Id: Ib98a61cc5bf378d1b3338b04acd7e1bc4c2fe008
Reviewed-on: https://chromium-review.googlesource.com/1086718
Reviewed-by: Kai Ninomiya <[email protected]>
Reviewed-by: Antoine Labour <[email protected]>
Commit-Queue: Kenneth Russell <[email protected]>
Cr-Commit-Position: refs/heads/master@{#565016}
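The missing checks were of the validate-before-use kind; a small C sketch of that pattern, with placeholder types rather than Blink's real WebGLObject machinery:
#include <stdbool.h>
#include <stddef.h>
struct gl_object_like {
    const void *owner_context;   /* context that created the object */
    bool deleted;                /* true once the object was deleted */
};
/* Reject null, deleted, or foreign-context objects before any use. */
static bool validate_object(const void *current_context,
                            const struct gl_object_like *obj) {
    if (obj == NULL || obj->deleted)
        return false;
    return obj->owner_context == current_context;
}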
|
WebGLBuffer* WebGL2RenderingContextBase::ValidateBufferDataTarget(
const char* function_name,
GLenum target) {
WebGLBuffer* buffer = nullptr;
switch (target) {
case GL_ELEMENT_ARRAY_BUFFER:
buffer = bound_vertex_array_object_->BoundElementArrayBuffer();
break;
case GL_ARRAY_BUFFER:
buffer = bound_array_buffer_.Get();
break;
case GL_COPY_READ_BUFFER:
buffer = bound_copy_read_buffer_.Get();
break;
case GL_COPY_WRITE_BUFFER:
buffer = bound_copy_write_buffer_.Get();
break;
case GL_PIXEL_PACK_BUFFER:
buffer = bound_pixel_pack_buffer_.Get();
break;
case GL_PIXEL_UNPACK_BUFFER:
buffer = bound_pixel_unpack_buffer_.Get();
break;
case GL_TRANSFORM_FEEDBACK_BUFFER:
buffer = bound_transform_feedback_buffer_.Get();
break;
case GL_UNIFORM_BUFFER:
buffer = bound_uniform_buffer_.Get();
break;
default:
SynthesizeGLError(GL_INVALID_ENUM, function_name, "invalid target");
return nullptr;
}
if (!buffer) {
SynthesizeGLError(GL_INVALID_OPERATION, function_name, "no buffer");
return nullptr;
}
return buffer;
}
|
WebGLBuffer* WebGL2RenderingContextBase::ValidateBufferDataTarget(
const char* function_name,
GLenum target) {
WebGLBuffer* buffer = nullptr;
switch (target) {
case GL_ELEMENT_ARRAY_BUFFER:
buffer = bound_vertex_array_object_->BoundElementArrayBuffer();
break;
case GL_ARRAY_BUFFER:
buffer = bound_array_buffer_.Get();
break;
case GL_COPY_READ_BUFFER:
buffer = bound_copy_read_buffer_.Get();
break;
case GL_COPY_WRITE_BUFFER:
buffer = bound_copy_write_buffer_.Get();
break;
case GL_PIXEL_PACK_BUFFER:
buffer = bound_pixel_pack_buffer_.Get();
break;
case GL_PIXEL_UNPACK_BUFFER:
buffer = bound_pixel_unpack_buffer_.Get();
break;
case GL_TRANSFORM_FEEDBACK_BUFFER:
buffer = bound_transform_feedback_buffer_.Get();
break;
case GL_UNIFORM_BUFFER:
buffer = bound_uniform_buffer_.Get();
break;
default:
SynthesizeGLError(GL_INVALID_ENUM, function_name, "invalid target");
return nullptr;
}
if (!buffer) {
SynthesizeGLError(GL_INVALID_OPERATION, function_name, "no buffer");
return nullptr;
}
return buffer;
}
|
C
|
Chrome
| 0 |
CVE-2019-11487
|
https://www.cvedetails.com/cve/CVE-2019-11487/
|
CWE-416
|
https://github.com/torvalds/linux/commit/6b3a707736301c2128ca85ce85fb13f60b5e350a
|
6b3a707736301c2128ca85ce85fb13f60b5e350a
|
Merge branch 'page-refs' (page ref overflow)
Merge page ref overflow branch.
Jann Horn reported that he can overflow the page ref count with
sufficient memory (and a filesystem that is intentionally extremely
slow).
Admittedly it's not exactly easy. To have more than four billion
references to a page requires a minimum of 32GB of kernel memory just
for the pointers to the pages, much less any metadata to keep track of
those pointers. Jann needed a total of 140GB of memory and a specially
crafted filesystem that leaves all reads pending (in order to not ever
free the page references and just keep adding more).
Still, we have a fairly straightforward way to limit the two obvious
user-controllable sources of page references: direct-IO like page
references gotten through get_user_pages(), and the splice pipe page
duplication. So let's just do that.
* branch page-refs:
fs: prevent page refcount overflow in pipe_buf_get
mm: prevent get_user_pages() from overflowing page refcount
mm: add 'try_get_page()' helper function
mm: make page ref count overflow check tighter and more explicit
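The 'try_get_page()' helper mentioned above boils down to refusing a new reference instead of letting the counter wrap. A self-contained C11 sketch of that idea (the type, threshold handling, and names are illustrative, not the kernel's):
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
struct ref_counter { atomic_int count; };
/* Take a reference only if the object is alive and the increment cannot
 * overflow; callers must handle the false case instead of assuming success. */
static bool try_get_ref(struct ref_counter *ref) {
    int old = atomic_load(&ref->count);
    do {
        if (old <= 0 || old == INT_MAX)
            return false;            /* dead object or would overflow */
    } while (!atomic_compare_exchange_weak(&ref->count, &old, old + 1));
    return true;
}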
|
static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
int nid, nodemask_t *nodemask) { return NULL; }
|
static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
int nid, nodemask_t *nodemask) { return NULL; }
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/a03d4448faf2c40f4ef444a88cb9aace5b98e8c4
|
a03d4448faf2c40f4ef444a88cb9aace5b98e8c4
|
Introduce background.scripts feature for extension manifests.
This optimizes for the common use case where background pages
just include a reference to one or more script files and no
additional HTML.
BUG=107791
Review URL: http://codereview.chromium.org/9150008
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@117110 0039d316-1c4b-4281-b951-d872f2087c98
|
void TestingAutomationProvider::SetBooleanPreference(int handle,
const std::string& name,
bool value,
bool* success) {
*success = false;
if (browser_tracker_->ContainsHandle(handle)) {
Browser* browser = browser_tracker_->GetResource(handle);
browser->profile()->GetPrefs()->SetBoolean(name.c_str(), value);
*success = true;
}
}
|
void TestingAutomationProvider::SetBooleanPreference(int handle,
const std::string& name,
bool value,
bool* success) {
*success = false;
if (browser_tracker_->ContainsHandle(handle)) {
Browser* browser = browser_tracker_->GetResource(handle);
browser->profile()->GetPrefs()->SetBoolean(name.c_str(), value);
*success = true;
}
}
|
C
|
Chrome
| 0 |
CVE-2017-12154
|
https://www.cvedetails.com/cve/CVE-2017-12154/
| null |
https://github.com/torvalds/linux/commit/51aa68e7d57e3217192d88ce90fd5b8ef29ec94f
|
51aa68e7d57e3217192d88ce90fd5b8ef29ec94f
|
kvm: nVMX: Don't allow L2 to access the hardware CR8
If L1 does not specify the "use TPR shadow" VM-execution control in
vmcs12, then L0 must specify the "CR8-load exiting" and "CR8-store
exiting" VM-execution controls in vmcs02. Failure to do so will give
the L2 VM unrestricted read/write access to the hardware CR8.
This fixes CVE-2017-12154.
Signed-off-by: Jim Mattson <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
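The rule in the message translates to a few lines of control-bit logic. A hedged sketch (the constants mirror the VMX execution-control bit positions, but the function and surrounding flow are stand-ins, not the kernel's code):
#define CPU_BASED_CR8_LOAD_EXITING   (1u << 19)
#define CPU_BASED_CR8_STORE_EXITING  (1u << 20)
#define CPU_BASED_TPR_SHADOW         (1u << 21)
/* If L1 does not use a TPR shadow, force CR8 load/store exiting in the
 * controls actually programmed for L2, so L2 never touches hardware CR8. */
static unsigned int build_l2_exec_controls(unsigned int vmcs12_controls) {
    unsigned int exec_control = vmcs12_controls;
    if (!(vmcs12_controls & CPU_BASED_TPR_SHADOW))
        exec_control |= CPU_BASED_CR8_LOAD_EXITING |
                        CPU_BASED_CR8_STORE_EXITING;
    return exec_control;
}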
|
static int check_vmentry_postreqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
u32 *exit_qual)
{
bool ia32e;
*exit_qual = ENTRY_FAIL_DEFAULT;
if (!nested_guest_cr0_valid(vcpu, vmcs12->guest_cr0) ||
!nested_guest_cr4_valid(vcpu, vmcs12->guest_cr4))
return 1;
if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_SHADOW_VMCS) &&
vmcs12->vmcs_link_pointer != -1ull) {
*exit_qual = ENTRY_FAIL_VMCS_LINK_PTR;
return 1;
}
/*
* If the load IA32_EFER VM-entry control is 1, the following checks
* are performed on the field for the IA32_EFER MSR:
* - Bits reserved in the IA32_EFER MSR must be 0.
* - Bit 10 (corresponding to IA32_EFER.LMA) must equal the value of
* the IA-32e mode guest VM-exit control. It must also be identical
* to bit 8 (LME) if bit 31 in the CR0 field (corresponding to
* CR0.PG) is 1.
*/
if (to_vmx(vcpu)->nested.nested_run_pending &&
(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
ia32e = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) != 0;
if (!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer) ||
ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA) ||
((vmcs12->guest_cr0 & X86_CR0_PG) &&
ia32e != !!(vmcs12->guest_ia32_efer & EFER_LME)))
return 1;
}
/*
* If the load IA32_EFER VM-exit control is 1, bits reserved in the
* IA32_EFER MSR must be 0 in the field for that register. In addition,
* the values of the LMA and LME bits in the field must each be that of
* the host address-space size VM-exit control.
*/
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER) {
ia32e = (vmcs12->vm_exit_controls &
VM_EXIT_HOST_ADDR_SPACE_SIZE) != 0;
if (!kvm_valid_efer(vcpu, vmcs12->host_ia32_efer) ||
ia32e != !!(vmcs12->host_ia32_efer & EFER_LMA) ||
ia32e != !!(vmcs12->host_ia32_efer & EFER_LME))
return 1;
}
return 0;
}
|
static int check_vmentry_postreqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
u32 *exit_qual)
{
bool ia32e;
*exit_qual = ENTRY_FAIL_DEFAULT;
if (!nested_guest_cr0_valid(vcpu, vmcs12->guest_cr0) ||
!nested_guest_cr4_valid(vcpu, vmcs12->guest_cr4))
return 1;
if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_SHADOW_VMCS) &&
vmcs12->vmcs_link_pointer != -1ull) {
*exit_qual = ENTRY_FAIL_VMCS_LINK_PTR;
return 1;
}
/*
* If the load IA32_EFER VM-entry control is 1, the following checks
* are performed on the field for the IA32_EFER MSR:
* - Bits reserved in the IA32_EFER MSR must be 0.
* - Bit 10 (corresponding to IA32_EFER.LMA) must equal the value of
* the IA-32e mode guest VM-exit control. It must also be identical
* to bit 8 (LME) if bit 31 in the CR0 field (corresponding to
* CR0.PG) is 1.
*/
if (to_vmx(vcpu)->nested.nested_run_pending &&
(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
ia32e = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) != 0;
if (!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer) ||
ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA) ||
((vmcs12->guest_cr0 & X86_CR0_PG) &&
ia32e != !!(vmcs12->guest_ia32_efer & EFER_LME)))
return 1;
}
/*
* If the load IA32_EFER VM-exit control is 1, bits reserved in the
* IA32_EFER MSR must be 0 in the field for that register. In addition,
* the values of the LMA and LME bits in the field must each be that of
* the host address-space size VM-exit control.
*/
if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER) {
ia32e = (vmcs12->vm_exit_controls &
VM_EXIT_HOST_ADDR_SPACE_SIZE) != 0;
if (!kvm_valid_efer(vcpu, vmcs12->host_ia32_efer) ||
ia32e != !!(vmcs12->host_ia32_efer & EFER_LMA) ||
ia32e != !!(vmcs12->host_ia32_efer & EFER_LME))
return 1;
}
return 0;
}
|
C
|
linux
| 0 |
CVE-2016-6290
|
https://www.cvedetails.com/cve/CVE-2016-6290/
|
CWE-416
|
https://git.php.net/?p=php-src.git;a=commit;h=3798eb6fd5dddb211b01d41495072fd9858d4e32
|
3798eb6fd5dddb211b01d41495072fd9858d4e32
| null |
static PHP_FUNCTION(session_save_path)
{
char *name = NULL;
int name_len;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "|s", &name, &name_len) == FAILURE) {
return;
}
RETVAL_STRING(PS(save_path), 1);
if (name) {
if (memchr(name, '\0', name_len) != NULL) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "The save_path cannot contain NULL characters");
zval_dtor(return_value);
RETURN_FALSE;
}
zend_alter_ini_entry("session.save_path", sizeof("session.save_path"), name, name_len, PHP_INI_USER, PHP_INI_STAGE_RUNTIME);
}
}
|
static PHP_FUNCTION(session_save_path)
{
char *name = NULL;
int name_len;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "|s", &name, &name_len) == FAILURE) {
return;
}
RETVAL_STRING(PS(save_path), 1);
if (name) {
if (memchr(name, '\0', name_len) != NULL) {
php_error_docref(NULL TSRMLS_CC, E_WARNING, "The save_path cannot contain NULL characters");
zval_dtor(return_value);
RETURN_FALSE;
}
zend_alter_ini_entry("session.save_path", sizeof("session.save_path"), name, name_len, PHP_INI_USER, PHP_INI_STAGE_RUNTIME);
}
}
|
C
|
php
| 0 |
CVE-2016-10165
|
https://www.cvedetails.com/cve/CVE-2016-10165/
|
CWE-125
|
https://github.com/mm2/Little-CMS/commit/5ca71a7bc18b6897ab21d815d15e218e204581e2
|
5ca71a7bc18b6897ab21d815d15e218e204581e2
|
Added an extra check to MLU bounds
Thanks to Ibrahim el-sayed for spotting the bug
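The commit message does not spell the check out, but "an extra check to MLU bounds" is the classic offset/size validation for data read from an untrusted file. A generic, overflow-safe version (illustrative only, not lcms code):
#include <stdbool.h>
#include <stdint.h>
/* True if the [offset, offset+size) range lies inside a buffer of
 * total_size bytes, written so offset + size cannot wrap around. */
static bool range_in_bounds(uint32_t offset, uint32_t size,
                            uint32_t total_size) {
    if (size > total_size)
        return false;
    return offset <= total_size - size;
}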
|
void* Type_Signature_Dup(struct _cms_typehandler_struct* self, const void *Ptr, cmsUInt32Number n)
{
return _cmsDupMem(self ->ContextID, Ptr, n * sizeof(cmsSignature));
}
|
void* Type_Signature_Dup(struct _cms_typehandler_struct* self, const void *Ptr, cmsUInt32Number n)
{
return _cmsDupMem(self ->ContextID, Ptr, n * sizeof(cmsSignature));
}
|
C
|
Little-CMS
| 0 |
CVE-2011-4127
|
https://www.cvedetails.com/cve/CVE-2011-4127/
|
CWE-264
|
https://github.com/torvalds/linux/commit/ec8013beddd717d1740cfefb1a9b900deef85462
|
ec8013beddd717d1740cfefb1a9b900deef85462
|
dm: do not forward ioctls from logical volumes to the underlying device
A logical volume can map to just part of underlying physical volume.
In this case, it must be treated like a partition.
Based on a patch from Alasdair G Kergon.
Cc: Alasdair G Kergon <[email protected]>
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
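The "treated like a partition" rule reduces to a size comparison: only a target that spans the whole underlying device may pass ioctls straight through. A minimal sketch with made-up field names (the real check lives in the device-mapper ioctl path):
#include <stdbool.h>
#include <stdint.h>
struct mapped_target_like {
    uint64_t len_sectors;        /* sectors covered by this mapping */
    uint64_t device_sectors;     /* size of the underlying device */
};
/* Forward an ioctl only when the mapping covers the entire device;
 * a partial mapping must be restricted the way a partition would be. */
static bool may_forward_ioctl(const struct mapped_target_like *ti) {
    return ti->len_sectors == ti->device_sectors;
}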
|
static int multipath_end_io(struct dm_target *ti, struct request *clone,
int error, union map_info *map_context)
{
struct multipath *m = ti->private;
struct dm_mpath_io *mpio = map_context->ptr;
struct pgpath *pgpath = mpio->pgpath;
struct path_selector *ps;
int r;
r = do_end_io(m, clone, error, mpio);
if (pgpath) {
ps = &pgpath->pg->ps;
if (ps->type->end_io)
ps->type->end_io(ps, &pgpath->path, mpio->nr_bytes);
}
mempool_free(mpio, m->mpio_pool);
return r;
}
|
static int multipath_end_io(struct dm_target *ti, struct request *clone,
int error, union map_info *map_context)
{
struct multipath *m = ti->private;
struct dm_mpath_io *mpio = map_context->ptr;
struct pgpath *pgpath = mpio->pgpath;
struct path_selector *ps;
int r;
r = do_end_io(m, clone, error, mpio);
if (pgpath) {
ps = &pgpath->pg->ps;
if (ps->type->end_io)
ps->type->end_io(ps, &pgpath->path, mpio->nr_bytes);
}
mempool_free(mpio, m->mpio_pool);
return r;
}
|
C
|
linux
| 0 |
CVE-2013-3229
|
https://www.cvedetails.com/cve/CVE-2013-3229/
|
CWE-200
|
https://github.com/torvalds/linux/commit/a5598bd9c087dc0efc250a5221e5d0e6f584ee88
|
a5598bd9c087dc0efc250a5221e5d0e6f584ee88
|
iucv: Fix missing msg_namelen update in iucv_sock_recvmsg()
The current code does not fill the msg_name member in case it is set.
It also does not set the msg_namelen member to 0 and therefore makes
net/socket.c leak the local, uninitialized sockaddr_storage variable
to userland -- 128 bytes of kernel stack memory.
Fix that by simply setting msg_namelen to 0 as obviously nobody cared
about iucv_sock_recvmsg() not filling the msg_name in case it was set.
Cc: Ursula Braun <[email protected]>
Signed-off-by: Mathias Krause <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
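The leak pattern described here is easy to reproduce outside the kernel: if a receive path never fills the caller's address buffer, it must still report a zero length. A small illustrative C helper (not the af_iucv code):
#include <stddef.h>
#include <string.h>
struct msghdr_like {
    void  *msg_name;      /* caller-supplied address buffer, may be NULL */
    size_t msg_namelen;   /* in: buffer size, out: bytes actually written */
};
/* Copy a peer address if one is available and fits; otherwise report zero
 * length so no uninitialized bytes are ever handed back to the caller. */
static void report_peer_addr(struct msghdr_like *msg,
                             const void *addr, size_t addr_len) {
    if (msg->msg_name == NULL || addr == NULL || addr_len > msg->msg_namelen) {
        msg->msg_namelen = 0;
        return;
    }
    memcpy(msg->msg_name, addr, addr_len);
    msg->msg_namelen = addr_len;
}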
|
static int afiucv_hs_callback_syn(struct sock *sk, struct sk_buff *skb)
{
struct sock *nsk;
struct iucv_sock *iucv, *niucv;
struct af_iucv_trans_hdr *trans_hdr;
int err;
iucv = iucv_sk(sk);
trans_hdr = (struct af_iucv_trans_hdr *)skb->data;
if (!iucv) {
/* no sock - connection refused */
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_FIN;
err = dev_queue_xmit(skb);
goto out;
}
nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC);
bh_lock_sock(sk);
if ((sk->sk_state != IUCV_LISTEN) ||
sk_acceptq_is_full(sk) ||
!nsk) {
/* error on server socket - connection refused */
if (nsk)
sk_free(nsk);
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_FIN;
err = dev_queue_xmit(skb);
bh_unlock_sock(sk);
goto out;
}
niucv = iucv_sk(nsk);
iucv_sock_init(nsk, sk);
niucv->transport = AF_IUCV_TRANS_HIPER;
niucv->msglimit = iucv->msglimit;
if (!trans_hdr->window)
niucv->msglimit_peer = IUCV_HIPER_MSGLIM_DEFAULT;
else
niucv->msglimit_peer = trans_hdr->window;
memcpy(niucv->dst_name, trans_hdr->srcAppName, 8);
memcpy(niucv->dst_user_id, trans_hdr->srcUserID, 8);
memcpy(niucv->src_name, iucv->src_name, 8);
memcpy(niucv->src_user_id, iucv->src_user_id, 8);
nsk->sk_bound_dev_if = sk->sk_bound_dev_if;
niucv->hs_dev = iucv->hs_dev;
dev_hold(niucv->hs_dev);
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_ACK;
trans_hdr->window = niucv->msglimit;
/* if receiver acks the xmit connection is established */
err = dev_queue_xmit(skb);
if (!err) {
iucv_accept_enqueue(sk, nsk);
nsk->sk_state = IUCV_CONNECTED;
sk->sk_data_ready(sk, 1);
} else
iucv_sock_kill(nsk);
bh_unlock_sock(sk);
out:
return NET_RX_SUCCESS;
}
|
static int afiucv_hs_callback_syn(struct sock *sk, struct sk_buff *skb)
{
struct sock *nsk;
struct iucv_sock *iucv, *niucv;
struct af_iucv_trans_hdr *trans_hdr;
int err;
iucv = iucv_sk(sk);
trans_hdr = (struct af_iucv_trans_hdr *)skb->data;
if (!iucv) {
/* no sock - connection refused */
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_FIN;
err = dev_queue_xmit(skb);
goto out;
}
nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC);
bh_lock_sock(sk);
if ((sk->sk_state != IUCV_LISTEN) ||
sk_acceptq_is_full(sk) ||
!nsk) {
/* error on server socket - connection refused */
if (nsk)
sk_free(nsk);
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_FIN;
err = dev_queue_xmit(skb);
bh_unlock_sock(sk);
goto out;
}
niucv = iucv_sk(nsk);
iucv_sock_init(nsk, sk);
niucv->transport = AF_IUCV_TRANS_HIPER;
niucv->msglimit = iucv->msglimit;
if (!trans_hdr->window)
niucv->msglimit_peer = IUCV_HIPER_MSGLIM_DEFAULT;
else
niucv->msglimit_peer = trans_hdr->window;
memcpy(niucv->dst_name, trans_hdr->srcAppName, 8);
memcpy(niucv->dst_user_id, trans_hdr->srcUserID, 8);
memcpy(niucv->src_name, iucv->src_name, 8);
memcpy(niucv->src_user_id, iucv->src_user_id, 8);
nsk->sk_bound_dev_if = sk->sk_bound_dev_if;
niucv->hs_dev = iucv->hs_dev;
dev_hold(niucv->hs_dev);
afiucv_swap_src_dest(skb);
trans_hdr->flags = AF_IUCV_FLAG_SYN | AF_IUCV_FLAG_ACK;
trans_hdr->window = niucv->msglimit;
/* if receiver acks the xmit connection is established */
err = dev_queue_xmit(skb);
if (!err) {
iucv_accept_enqueue(sk, nsk);
nsk->sk_state = IUCV_CONNECTED;
sk->sk_data_ready(sk, 1);
} else
iucv_sock_kill(nsk);
bh_unlock_sock(sk);
out:
return NET_RX_SUCCESS;
}
|
C
|
linux
| 0 |
null | null | null |
https://github.com/chromium/chromium/commit/de485eb849be99305925de2257da3b85325df2fd
|
de485eb849be99305925de2257da3b85325df2fd
|
Disable ash configuring display when running mash.
When running ash inside of mash_shell on a device, DisplayConfigurator
will request a NativeDisplayDelegate from Ozone. Ozone is initialized
only in the mus process, not in the ash process, so ash crashes at this
point.
Add accessor for configure_display_ to avoid crashing. The default
display size can be set via command line with --ash-host-window-bounds
flag until a better solution exists.
BUG=590096
Review URL: https://codereview.chromium.org/1782093003
Cr-Commit-Position: refs/heads/master@{#380663}
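A loose sketch of the guard the message describes, in plain C with invented names: skip the hardware display-configuration path when the platform layer was not initialized in this process.
#include <stdbool.h>
struct display_configurator_like {
    bool configure_display;      /* false: never touch the hardware path */
};
/* When running inside mash, the Ozone platform lives in another process,
 * so hardware configuration must be disabled before initialization runs. */
static void prepare_configurator(struct display_configurator_like *c,
                                 bool running_in_mash) {
    c->configure_display = !running_in_mash;
}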
|
void Shell::Init(const ShellInitParams& init_params) {
in_mus_ = init_params.in_mus;
if (!in_mus_) {
native_cursor_manager_ = new AshNativeCursorManager;
#if defined(OS_CHROMEOS)
cursor_manager_.reset(
new CursorManager(make_scoped_ptr(native_cursor_manager_)));
#else
cursor_manager_.reset(
new ::wm::CursorManager(make_scoped_ptr(native_cursor_manager_)));
#endif
}
delegate_->PreInit();
bool display_initialized = display_manager_->InitFromCommandLine();
display_configuration_controller_.reset(new DisplayConfigurationController(
display_manager_.get(), window_tree_host_manager_.get()));
#if defined(OS_CHROMEOS)
// When running as part of mash, OzonePlatform is not initialized in the
// ash_sysui process. DisplayConfigurator will try to use OzonePlatform and
// crash. Instead, mash can manually set default display size using
// --ash-host-window-bounds flag.
if (in_mus_)
display_configurator_->set_configure_display(false);
display_configurator_->Init(!gpu_support_->IsPanelFittingDisabled());
chromeos::DBusThreadManager* dbus_thread_manager =
chromeos::DBusThreadManager::Get();
projecting_observer_.reset(
new ProjectingObserver(dbus_thread_manager->GetPowerManagerClient()));
display_configurator_->AddObserver(projecting_observer_.get());
AddShellObserver(projecting_observer_.get());
if (!display_initialized && base::SysInfo::IsRunningOnChromeOS()) {
display_change_observer_.reset(new DisplayChangeObserver);
display_configurator_->AddObserver(display_change_observer_.get());
display_error_observer_.reset(new DisplayErrorObserver());
display_configurator_->AddObserver(display_error_observer_.get());
display_configurator_->set_state_controller(display_change_observer_.get());
display_configurator_->set_mirroring_controller(display_manager_.get());
display_configurator_->ForceInitialConfigure(
delegate_->IsFirstRunAfterBoot() ? kChromeOsBootColor : 0);
display_initialized = true;
}
display_color_manager_.reset(
new DisplayColorManager(display_configurator_.get(), blocking_pool_));
#endif // defined(OS_CHROMEOS)
if (!display_initialized)
display_manager_->InitDefaultDisplay();
display_manager_->RefreshFontParams();
views::FocusManagerFactory::Install(new AshFocusManagerFactory);
aura::Env::GetInstance()->set_context_factory(init_params.context_factory);
window_modality_controller_.reset(
new ::wm::WindowModalityController(this));
env_filter_.reset(new ::wm::CompoundEventFilter);
AddPreTargetHandler(env_filter_.get());
wm::AshFocusRules* focus_rules = new wm::AshFocusRules();
::wm::FocusController* focus_controller =
new ::wm::FocusController(focus_rules);
focus_client_.reset(focus_controller);
activation_client_ = focus_controller;
activation_client_->AddObserver(this);
focus_cycler_.reset(new FocusCycler());
screen_position_controller_.reset(new ScreenPositionController);
window_tree_host_manager_->Start();
window_tree_host_manager_->CreatePrimaryHost(
ShellInitParamsToAshWindowTreeHostInitParams(init_params));
aura::Window* root_window = window_tree_host_manager_->GetPrimaryRootWindow();
target_root_window_ = root_window;
#if defined(OS_CHROMEOS)
resolution_notification_controller_.reset(
new ResolutionNotificationController);
#endif
if (cursor_manager_)
cursor_manager_->SetDisplay(gfx::Screen::GetScreen()->GetPrimaryDisplay());
accelerator_controller_.reset(new AcceleratorController);
maximize_mode_controller_.reset(new MaximizeModeController());
AddPreTargetHandler(window_tree_host_manager_->input_method_event_handler());
#if defined(OS_CHROMEOS)
magnifier_key_scroll_handler_ = MagnifierKeyScroller::CreateHandler();
AddPreTargetHandler(magnifier_key_scroll_handler_.get());
speech_feedback_handler_ = SpokenFeedbackToggler::CreateHandler();
AddPreTargetHandler(speech_feedback_handler_.get());
#endif
user_activity_detector_.reset(new ui::UserActivityDetector);
overlay_filter_.reset(new OverlayEventFilter);
AddPreTargetHandler(overlay_filter_.get());
AddShellObserver(overlay_filter_.get());
accelerator_filter_.reset(new ::wm::AcceleratorFilter(
scoped_ptr<::wm::AcceleratorDelegate>(new AcceleratorDelegate),
accelerator_controller_->accelerator_history()));
AddPreTargetHandler(accelerator_filter_.get());
event_transformation_handler_.reset(new EventTransformationHandler);
AddPreTargetHandler(event_transformation_handler_.get());
toplevel_window_event_handler_.reset(new ToplevelWindowEventHandler);
system_gesture_filter_.reset(new SystemGestureEventFilter);
AddPreTargetHandler(system_gesture_filter_.get());
keyboard_metrics_filter_.reset(new KeyboardUMAEventFilter);
AddPreTargetHandler(keyboard_metrics_filter_.get());
#if defined(OS_CHROMEOS)
sticky_keys_controller_.reset(new StickyKeysController);
#endif
lock_state_controller_.reset(new LockStateController);
power_button_controller_.reset(new PowerButtonController(
lock_state_controller_.get()));
#if defined(OS_CHROMEOS)
power_button_controller_->OnDisplayModeChanged(
display_configurator_->cached_displays());
#endif
AddShellObserver(lock_state_controller_.get());
drag_drop_controller_.reset(new DragDropController);
partial_screenshot_controller_.reset(new PartialScreenshotController());
mouse_cursor_filter_.reset(new MouseCursorEventFilter());
PrependPreTargetHandler(mouse_cursor_filter_.get());
visibility_controller_.reset(new AshVisibilityController);
magnification_controller_.reset(
MagnificationController::CreateInstance());
mru_window_tracker_.reset(new MruWindowTracker(activation_client_,
focus_rules));
partial_magnification_controller_.reset(
new PartialMagnificationController());
autoclick_controller_.reset(AutoclickController::CreateInstance());
high_contrast_controller_.reset(new HighContrastController);
video_detector_.reset(new VideoDetector);
window_selector_controller_.reset(new WindowSelectorController());
window_cycle_controller_.reset(new WindowCycleController());
tooltip_controller_.reset(new views::corewm::TooltipController(
scoped_ptr<views::corewm::Tooltip>(new views::corewm::TooltipAura)));
AddPreTargetHandler(tooltip_controller_.get());
event_client_.reset(new EventClientImpl);
desktop_background_controller_.reset(
new DesktopBackgroundController(blocking_pool_));
user_wallpaper_delegate_.reset(delegate_->CreateUserWallpaperDelegate());
session_state_delegate_.reset(delegate_->CreateSessionStateDelegate());
accessibility_delegate_.reset(delegate_->CreateAccessibilityDelegate());
new_window_delegate_.reset(delegate_->CreateNewWindowDelegate());
media_delegate_.reset(delegate_->CreateMediaDelegate());
resize_shadow_controller_.reset(new ResizeShadowController());
shadow_controller_.reset(
new ::wm::ShadowController(activation_client_));
system_tray_notifier_.reset(new SystemTrayNotifier());
system_tray_delegate_.reset(delegate()->CreateSystemTrayDelegate());
DCHECK(system_tray_delegate_.get());
locale_notification_controller_.reset(new LocaleNotificationController);
system_tray_delegate_->Initialize();
#if defined(OS_CHROMEOS)
logout_confirmation_controller_.reset(new LogoutConfirmationController(
base::Bind(&SystemTrayDelegate::SignOut,
base::Unretained(system_tray_delegate_.get()))));
touch_transformer_controller_.reset(new TouchTransformerController());
#endif // defined(OS_CHROMEOS)
keyboard_ui_ = init_params.keyboard_factory.is_null()
? KeyboardUI::Create()
: init_params.keyboard_factory.Run();
window_tree_host_manager_->InitHosts();
#if defined(OS_CHROMEOS)
virtual_keyboard_controller_.reset(new VirtualKeyboardController);
#endif // defined(OS_CHROMEOS)
user_wallpaper_delegate_->InitializeWallpaper();
if (cursor_manager_) {
if (initially_hide_cursor_)
cursor_manager_->HideCursor();
cursor_manager_->SetCursor(ui::kCursorPointer);
}
#if defined(OS_CHROMEOS)
accelerator_controller_->SetBrightnessControlDelegate(
scoped_ptr<BrightnessControlDelegate>(
new system::BrightnessControllerChromeos));
power_event_observer_.reset(new PowerEventObserver());
user_activity_notifier_.reset(
new ui::UserActivityPowerManagerNotifier(user_activity_detector_.get()));
video_activity_notifier_.reset(
new VideoActivityNotifier(video_detector_.get()));
bluetooth_notification_controller_.reset(new BluetoothNotificationController);
last_window_closed_logout_reminder_.reset(new LastWindowClosedLogoutReminder);
screen_orientation_controller_.reset(new ScreenOrientationController());
#endif
display_manager_->CreateMirrorWindowAsyncIfAny();
FOR_EACH_OBSERVER(ShellObserver, observers_, OnShellInitialized());
user_metrics_recorder_->OnShellInitialized();
}
|
void Shell::Init(const ShellInitParams& init_params) {
in_mus_ = init_params.in_mus;
if (!in_mus_) {
native_cursor_manager_ = new AshNativeCursorManager;
#if defined(OS_CHROMEOS)
cursor_manager_.reset(
new CursorManager(make_scoped_ptr(native_cursor_manager_)));
#else
cursor_manager_.reset(
new ::wm::CursorManager(make_scoped_ptr(native_cursor_manager_)));
#endif
}
delegate_->PreInit();
bool display_initialized = display_manager_->InitFromCommandLine();
display_configuration_controller_.reset(new DisplayConfigurationController(
display_manager_.get(), window_tree_host_manager_.get()));
#if defined(OS_CHROMEOS)
display_configurator_->Init(!gpu_support_->IsPanelFittingDisabled());
chromeos::DBusThreadManager* dbus_thread_manager =
chromeos::DBusThreadManager::Get();
projecting_observer_.reset(
new ProjectingObserver(dbus_thread_manager->GetPowerManagerClient()));
display_configurator_->AddObserver(projecting_observer_.get());
AddShellObserver(projecting_observer_.get());
if (!display_initialized && base::SysInfo::IsRunningOnChromeOS()) {
display_change_observer_.reset(new DisplayChangeObserver);
display_configurator_->AddObserver(display_change_observer_.get());
display_error_observer_.reset(new DisplayErrorObserver());
display_configurator_->AddObserver(display_error_observer_.get());
display_configurator_->set_state_controller(display_change_observer_.get());
display_configurator_->set_mirroring_controller(display_manager_.get());
display_configurator_->ForceInitialConfigure(
delegate_->IsFirstRunAfterBoot() ? kChromeOsBootColor : 0);
display_initialized = true;
}
display_color_manager_.reset(
new DisplayColorManager(display_configurator_.get(), blocking_pool_));
#endif // defined(OS_CHROMEOS)
if (!display_initialized)
display_manager_->InitDefaultDisplay();
display_manager_->RefreshFontParams();
views::FocusManagerFactory::Install(new AshFocusManagerFactory);
aura::Env::GetInstance()->set_context_factory(init_params.context_factory);
window_modality_controller_.reset(
new ::wm::WindowModalityController(this));
env_filter_.reset(new ::wm::CompoundEventFilter);
AddPreTargetHandler(env_filter_.get());
wm::AshFocusRules* focus_rules = new wm::AshFocusRules();
::wm::FocusController* focus_controller =
new ::wm::FocusController(focus_rules);
focus_client_.reset(focus_controller);
activation_client_ = focus_controller;
activation_client_->AddObserver(this);
focus_cycler_.reset(new FocusCycler());
screen_position_controller_.reset(new ScreenPositionController);
window_tree_host_manager_->Start();
window_tree_host_manager_->CreatePrimaryHost(
ShellInitParamsToAshWindowTreeHostInitParams(init_params));
aura::Window* root_window = window_tree_host_manager_->GetPrimaryRootWindow();
target_root_window_ = root_window;
#if defined(OS_CHROMEOS)
resolution_notification_controller_.reset(
new ResolutionNotificationController);
#endif
if (cursor_manager_)
cursor_manager_->SetDisplay(gfx::Screen::GetScreen()->GetPrimaryDisplay());
accelerator_controller_.reset(new AcceleratorController);
maximize_mode_controller_.reset(new MaximizeModeController());
AddPreTargetHandler(window_tree_host_manager_->input_method_event_handler());
#if defined(OS_CHROMEOS)
magnifier_key_scroll_handler_ = MagnifierKeyScroller::CreateHandler();
AddPreTargetHandler(magnifier_key_scroll_handler_.get());
speech_feedback_handler_ = SpokenFeedbackToggler::CreateHandler();
AddPreTargetHandler(speech_feedback_handler_.get());
#endif
user_activity_detector_.reset(new ui::UserActivityDetector);
overlay_filter_.reset(new OverlayEventFilter);
AddPreTargetHandler(overlay_filter_.get());
AddShellObserver(overlay_filter_.get());
accelerator_filter_.reset(new ::wm::AcceleratorFilter(
scoped_ptr<::wm::AcceleratorDelegate>(new AcceleratorDelegate),
accelerator_controller_->accelerator_history()));
AddPreTargetHandler(accelerator_filter_.get());
event_transformation_handler_.reset(new EventTransformationHandler);
AddPreTargetHandler(event_transformation_handler_.get());
toplevel_window_event_handler_.reset(new ToplevelWindowEventHandler);
system_gesture_filter_.reset(new SystemGestureEventFilter);
AddPreTargetHandler(system_gesture_filter_.get());
keyboard_metrics_filter_.reset(new KeyboardUMAEventFilter);
AddPreTargetHandler(keyboard_metrics_filter_.get());
#if defined(OS_CHROMEOS)
sticky_keys_controller_.reset(new StickyKeysController);
#endif
lock_state_controller_.reset(new LockStateController);
power_button_controller_.reset(new PowerButtonController(
lock_state_controller_.get()));
#if defined(OS_CHROMEOS)
power_button_controller_->OnDisplayModeChanged(
display_configurator_->cached_displays());
#endif
AddShellObserver(lock_state_controller_.get());
drag_drop_controller_.reset(new DragDropController);
partial_screenshot_controller_.reset(new PartialScreenshotController());
mouse_cursor_filter_.reset(new MouseCursorEventFilter());
PrependPreTargetHandler(mouse_cursor_filter_.get());
visibility_controller_.reset(new AshVisibilityController);
magnification_controller_.reset(
MagnificationController::CreateInstance());
mru_window_tracker_.reset(new MruWindowTracker(activation_client_,
focus_rules));
partial_magnification_controller_.reset(
new PartialMagnificationController());
autoclick_controller_.reset(AutoclickController::CreateInstance());
high_contrast_controller_.reset(new HighContrastController);
video_detector_.reset(new VideoDetector);
window_selector_controller_.reset(new WindowSelectorController());
window_cycle_controller_.reset(new WindowCycleController());
tooltip_controller_.reset(new views::corewm::TooltipController(
scoped_ptr<views::corewm::Tooltip>(new views::corewm::TooltipAura)));
AddPreTargetHandler(tooltip_controller_.get());
event_client_.reset(new EventClientImpl);
desktop_background_controller_.reset(
new DesktopBackgroundController(blocking_pool_));
user_wallpaper_delegate_.reset(delegate_->CreateUserWallpaperDelegate());
session_state_delegate_.reset(delegate_->CreateSessionStateDelegate());
accessibility_delegate_.reset(delegate_->CreateAccessibilityDelegate());
new_window_delegate_.reset(delegate_->CreateNewWindowDelegate());
media_delegate_.reset(delegate_->CreateMediaDelegate());
resize_shadow_controller_.reset(new ResizeShadowController());
shadow_controller_.reset(
new ::wm::ShadowController(activation_client_));
system_tray_notifier_.reset(new SystemTrayNotifier());
system_tray_delegate_.reset(delegate()->CreateSystemTrayDelegate());
DCHECK(system_tray_delegate_.get());
locale_notification_controller_.reset(new LocaleNotificationController);
system_tray_delegate_->Initialize();
#if defined(OS_CHROMEOS)
logout_confirmation_controller_.reset(new LogoutConfirmationController(
base::Bind(&SystemTrayDelegate::SignOut,
base::Unretained(system_tray_delegate_.get()))));
touch_transformer_controller_.reset(new TouchTransformerController());
#endif // defined(OS_CHROMEOS)
keyboard_ui_ = init_params.keyboard_factory.is_null()
? KeyboardUI::Create()
: init_params.keyboard_factory.Run();
window_tree_host_manager_->InitHosts();
#if defined(OS_CHROMEOS)
virtual_keyboard_controller_.reset(new VirtualKeyboardController);
#endif // defined(OS_CHROMEOS)
user_wallpaper_delegate_->InitializeWallpaper();
if (cursor_manager_) {
if (initially_hide_cursor_)
cursor_manager_->HideCursor();
cursor_manager_->SetCursor(ui::kCursorPointer);
}
#if defined(OS_CHROMEOS)
accelerator_controller_->SetBrightnessControlDelegate(
scoped_ptr<BrightnessControlDelegate>(
new system::BrightnessControllerChromeos));
power_event_observer_.reset(new PowerEventObserver());
user_activity_notifier_.reset(
new ui::UserActivityPowerManagerNotifier(user_activity_detector_.get()));
video_activity_notifier_.reset(
new VideoActivityNotifier(video_detector_.get()));
bluetooth_notification_controller_.reset(new BluetoothNotificationController);
last_window_closed_logout_reminder_.reset(new LastWindowClosedLogoutReminder);
screen_orientation_controller_.reset(new ScreenOrientationController());
#endif
display_manager_->CreateMirrorWindowAsyncIfAny();
FOR_EACH_OBSERVER(ShellObserver, observers_, OnShellInitialized());
user_metrics_recorder_->OnShellInitialized();
}
|
C
|
Chrome
| 1 |
CVE-2017-5120
|
https://www.cvedetails.com/cve/CVE-2017-5120/
| null |
https://github.com/chromium/chromium/commit/b7277af490d28ac7f802c015bb0ff31395768556
|
b7277af490d28ac7f802c015bb0ff31395768556
|
bindings: Support "attribute FrozenArray<T>?"
Adds a quick hack to support a case of "attribute FrozenArray<T>?".
Bug: 1028047
Change-Id: Ib3cecc4beb6bcc0fb0dbc667aca595454cc90c86
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1933866
Reviewed-by: Hitoshi Yoshida <[email protected]>
Commit-Queue: Yuki Shiino <[email protected]>
Cr-Commit-Position: refs/heads/master@{#718676}
|
void V8TestObject::CustomGetterLongAttributeAttributeSetterCallback(
const v8::FunctionCallbackInfo<v8::Value>& info) {
RUNTIME_CALL_TIMER_SCOPE_DISABLED_BY_DEFAULT(info.GetIsolate(), "Blink_TestObject_customGetterLongAttribute_Setter");
v8::Local<v8::Value> v8_value = info[0];
test_object_v8_internal::CustomGetterLongAttributeAttributeSetter(v8_value, info);
}
|
void V8TestObject::CustomGetterLongAttributeAttributeSetterCallback(
const v8::FunctionCallbackInfo<v8::Value>& info) {
RUNTIME_CALL_TIMER_SCOPE_DISABLED_BY_DEFAULT(info.GetIsolate(), "Blink_TestObject_customGetterLongAttribute_Setter");
v8::Local<v8::Value> v8_value = info[0];
test_object_v8_internal::CustomGetterLongAttributeAttributeSetter(v8_value, info);
}
|
C
|
Chrome
| 0 |
CVE-2017-0391
|
https://www.cvedetails.com/cve/CVE-2017-0391/
| null |
https://android.googlesource.com/platform/external/libhevc/+/a33f6725d7e9f92330f995ce2dcf4faa33f6433f
|
a33f6725d7e9f92330f995ce2dcf4faa33f6433f
|
Handle invalid slice_address in slice header
If an invalid slice_address was parsed, it was resulting in an incomplete
slice header during decode stage. Fix this by not incrementing slice_idx
for ignore slice error
Bug: 32322258
Change-Id: I8638d7094d65f4409faa9b9e337ef7e7b64505de
(cherry picked from commit f4f3556e04a9776bcc776523ae0763e7d0d5c668)
|
static void ihevcd_fill_outargs(codec_t *ps_codec,
ivd_video_decode_ip_t *ps_dec_ip,
ivd_video_decode_op_t *ps_dec_op)
{
ps_dec_op->u4_error_code = ihevcd_map_error((IHEVCD_ERROR_T)ps_codec->i4_error_code);
ps_dec_op->u4_num_bytes_consumed = ps_dec_ip->u4_num_Bytes
- ps_codec->i4_bytes_remaining;
if(ps_codec->i4_sps_done)
{
ps_dec_op->u4_pic_wd = ps_codec->i4_disp_wd;
ps_dec_op->u4_pic_ht = ps_codec->i4_disp_ht;
}
else
{
ps_dec_op->u4_pic_wd = 0;
ps_dec_op->u4_pic_ht = 0;
}
ps_dec_op->e_pic_type = ps_codec->e_dec_pic_type;
ps_dec_op->u4_frame_decoded_flag = ps_codec->i4_pic_present;
ps_dec_op->u4_new_seq = 0;
ps_dec_op->u4_output_present = 0;
ps_dec_op->u4_progressive_frame_flag = 1;
ps_dec_op->u4_is_ref_flag = 1;
ps_dec_op->e_output_format = ps_codec->e_chroma_fmt;
ps_dec_op->u4_is_ref_flag = 1;
ps_dec_op->e4_fld_type = IV_FLD_TYPE_DEFAULT;
ps_dec_op->u4_ts = (UWORD32)(-1);
ps_dec_op->u4_disp_buf_id = ps_codec->i4_disp_buf_id;
if(ps_codec->i4_flush_mode)
{
ps_dec_op->u4_num_bytes_consumed = 0;
/*In the case of flush ,since no frame is decoded set pic type as invalid*/
ps_dec_op->u4_is_ref_flag = 0;
ps_dec_op->e_pic_type = IV_NA_FRAME;
ps_dec_op->u4_frame_decoded_flag = 0;
}
/* If there is a display buffer */
if(ps_codec->ps_disp_buf)
{
pic_buf_t *ps_disp_buf = ps_codec->ps_disp_buf;
ps_dec_op->u4_output_present = 1;
ps_dec_op->u4_ts = ps_disp_buf->u4_ts;
if((ps_codec->i4_flush_mode == 0) && (ps_codec->s_parse.i4_end_of_frame == 0))
ps_dec_op->u4_output_present = 0;
ps_dec_op->s_disp_frm_buf.u4_y_wd = ps_codec->i4_disp_wd;
ps_dec_op->s_disp_frm_buf.u4_y_ht = ps_codec->i4_disp_ht;
if(ps_codec->i4_share_disp_buf)
{
ps_dec_op->s_disp_frm_buf.pv_y_buf = ps_disp_buf->pu1_luma;
if(ps_codec->e_chroma_fmt != IV_YUV_420P)
{
ps_dec_op->s_disp_frm_buf.pv_u_buf = ps_disp_buf->pu1_chroma;
ps_dec_op->s_disp_frm_buf.pv_v_buf = NULL;
}
else
{
WORD32 i;
UWORD8 *pu1_u_dst = NULL, *pu1_v_dst = NULL;
for(i = 0; i < ps_codec->i4_share_disp_buf_cnt; i++)
{
WORD32 diff = ps_disp_buf->pu1_luma - ps_codec->s_disp_buffer[i].pu1_bufs[0];
if(diff == (ps_codec->i4_strd * PAD_TOP + PAD_LEFT))
{
pu1_u_dst = ps_codec->s_disp_buffer[i].pu1_bufs[1];
pu1_u_dst += (ps_codec->i4_strd * PAD_TOP) / 4 + (PAD_LEFT / 2);
pu1_v_dst = ps_codec->s_disp_buffer[i].pu1_bufs[2];
pu1_v_dst += (ps_codec->i4_strd * PAD_TOP) / 4 + (PAD_LEFT / 2);
break;
}
}
ps_dec_op->s_disp_frm_buf.pv_u_buf = pu1_u_dst;
ps_dec_op->s_disp_frm_buf.pv_v_buf = pu1_v_dst;
}
ps_dec_op->s_disp_frm_buf.u4_y_strd = ps_codec->i4_strd;
}
else
{
ps_dec_op->s_disp_frm_buf.pv_y_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[0];
ps_dec_op->s_disp_frm_buf.pv_u_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[1];
ps_dec_op->s_disp_frm_buf.pv_v_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[2];
ps_dec_op->s_disp_frm_buf.u4_y_strd = ps_codec->i4_disp_strd;
}
if((IV_YUV_420SP_VU == ps_codec->e_chroma_fmt)
|| (IV_YUV_420SP_UV == ps_codec->e_chroma_fmt))
{
ps_dec_op->s_disp_frm_buf.u4_u_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd;
ps_dec_op->s_disp_frm_buf.u4_v_strd = 0;
ps_dec_op->s_disp_frm_buf.u4_u_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd;
ps_dec_op->s_disp_frm_buf.u4_v_wd = 0;
ps_dec_op->s_disp_frm_buf.u4_u_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
ps_dec_op->s_disp_frm_buf.u4_v_ht = 0;
}
else if(IV_YUV_420P == ps_codec->e_chroma_fmt)
{
ps_dec_op->s_disp_frm_buf.u4_u_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd / 2;
ps_dec_op->s_disp_frm_buf.u4_v_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd / 2;
ps_dec_op->s_disp_frm_buf.u4_u_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd / 2;
ps_dec_op->s_disp_frm_buf.u4_v_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd / 2;
ps_dec_op->s_disp_frm_buf.u4_u_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
ps_dec_op->s_disp_frm_buf.u4_v_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
}
}
else if(ps_codec->i4_flush_mode)
{
ps_dec_op->u4_error_code = IHEVCD_END_OF_SEQUENCE;
/* Come out of flush mode */
ps_codec->i4_flush_mode = 0;
}
}
|
static void ihevcd_fill_outargs(codec_t *ps_codec,
ivd_video_decode_ip_t *ps_dec_ip,
ivd_video_decode_op_t *ps_dec_op)
{
ps_dec_op->u4_error_code = ihevcd_map_error((IHEVCD_ERROR_T)ps_codec->i4_error_code);
ps_dec_op->u4_num_bytes_consumed = ps_dec_ip->u4_num_Bytes
- ps_codec->i4_bytes_remaining;
if(ps_codec->i4_sps_done)
{
ps_dec_op->u4_pic_wd = ps_codec->i4_disp_wd;
ps_dec_op->u4_pic_ht = ps_codec->i4_disp_ht;
}
else
{
ps_dec_op->u4_pic_wd = 0;
ps_dec_op->u4_pic_ht = 0;
}
ps_dec_op->e_pic_type = ps_codec->e_dec_pic_type;
ps_dec_op->u4_frame_decoded_flag = ps_codec->i4_pic_present;
ps_dec_op->u4_new_seq = 0;
ps_dec_op->u4_output_present = 0;
ps_dec_op->u4_progressive_frame_flag = 1;
ps_dec_op->u4_is_ref_flag = 1;
ps_dec_op->e_output_format = ps_codec->e_chroma_fmt;
ps_dec_op->u4_is_ref_flag = 1;
ps_dec_op->e4_fld_type = IV_FLD_TYPE_DEFAULT;
ps_dec_op->u4_ts = (UWORD32)(-1);
ps_dec_op->u4_disp_buf_id = ps_codec->i4_disp_buf_id;
if(ps_codec->i4_flush_mode)
{
ps_dec_op->u4_num_bytes_consumed = 0;
/*In the case of flush ,since no frame is decoded set pic type as invalid*/
ps_dec_op->u4_is_ref_flag = 0;
ps_dec_op->e_pic_type = IV_NA_FRAME;
ps_dec_op->u4_frame_decoded_flag = 0;
}
/* If there is a display buffer */
if(ps_codec->ps_disp_buf)
{
pic_buf_t *ps_disp_buf = ps_codec->ps_disp_buf;
ps_dec_op->u4_output_present = 1;
ps_dec_op->u4_ts = ps_disp_buf->u4_ts;
if((ps_codec->i4_flush_mode == 0) && (ps_codec->s_parse.i4_end_of_frame == 0))
ps_dec_op->u4_output_present = 0;
ps_dec_op->s_disp_frm_buf.u4_y_wd = ps_codec->i4_disp_wd;
ps_dec_op->s_disp_frm_buf.u4_y_ht = ps_codec->i4_disp_ht;
if(ps_codec->i4_share_disp_buf)
{
ps_dec_op->s_disp_frm_buf.pv_y_buf = ps_disp_buf->pu1_luma;
if(ps_codec->e_chroma_fmt != IV_YUV_420P)
{
ps_dec_op->s_disp_frm_buf.pv_u_buf = ps_disp_buf->pu1_chroma;
ps_dec_op->s_disp_frm_buf.pv_v_buf = NULL;
}
else
{
WORD32 i;
UWORD8 *pu1_u_dst = NULL, *pu1_v_dst = NULL;
for(i = 0; i < ps_codec->i4_share_disp_buf_cnt; i++)
{
WORD32 diff = ps_disp_buf->pu1_luma - ps_codec->s_disp_buffer[i].pu1_bufs[0];
if(diff == (ps_codec->i4_strd * PAD_TOP + PAD_LEFT))
{
pu1_u_dst = ps_codec->s_disp_buffer[i].pu1_bufs[1];
pu1_u_dst += (ps_codec->i4_strd * PAD_TOP) / 4 + (PAD_LEFT / 2);
pu1_v_dst = ps_codec->s_disp_buffer[i].pu1_bufs[2];
pu1_v_dst += (ps_codec->i4_strd * PAD_TOP) / 4 + (PAD_LEFT / 2);
break;
}
}
ps_dec_op->s_disp_frm_buf.pv_u_buf = pu1_u_dst;
ps_dec_op->s_disp_frm_buf.pv_v_buf = pu1_v_dst;
}
ps_dec_op->s_disp_frm_buf.u4_y_strd = ps_codec->i4_strd;
}
else
{
ps_dec_op->s_disp_frm_buf.pv_y_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[0];
ps_dec_op->s_disp_frm_buf.pv_u_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[1];
ps_dec_op->s_disp_frm_buf.pv_v_buf =
ps_dec_ip->s_out_buffer.pu1_bufs[2];
ps_dec_op->s_disp_frm_buf.u4_y_strd = ps_codec->i4_disp_strd;
}
if((IV_YUV_420SP_VU == ps_codec->e_chroma_fmt)
|| (IV_YUV_420SP_UV == ps_codec->e_chroma_fmt))
{
ps_dec_op->s_disp_frm_buf.u4_u_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd;
ps_dec_op->s_disp_frm_buf.u4_v_strd = 0;
ps_dec_op->s_disp_frm_buf.u4_u_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd;
ps_dec_op->s_disp_frm_buf.u4_v_wd = 0;
ps_dec_op->s_disp_frm_buf.u4_u_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
ps_dec_op->s_disp_frm_buf.u4_v_ht = 0;
}
else if(IV_YUV_420P == ps_codec->e_chroma_fmt)
{
ps_dec_op->s_disp_frm_buf.u4_u_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd / 2;
ps_dec_op->s_disp_frm_buf.u4_v_strd =
ps_dec_op->s_disp_frm_buf.u4_y_strd / 2;
ps_dec_op->s_disp_frm_buf.u4_u_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd / 2;
ps_dec_op->s_disp_frm_buf.u4_v_wd =
ps_dec_op->s_disp_frm_buf.u4_y_wd / 2;
ps_dec_op->s_disp_frm_buf.u4_u_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
ps_dec_op->s_disp_frm_buf.u4_v_ht =
ps_dec_op->s_disp_frm_buf.u4_y_ht / 2;
}
}
else if(ps_codec->i4_flush_mode)
{
ps_dec_op->u4_error_code = IHEVCD_END_OF_SEQUENCE;
/* Come out of flush mode */
ps_codec->i4_flush_mode = 0;
}
}
|
C
|
Android
| 0 |
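Both function bodies in the CVE-2017-0391 row above are identical because ihevcd_fill_outargs() is not the patched function; the fix described in the commit message lives in the slice-header parser, where an out-of-range slice_address must not advance the slice index and leave a half-filled slice header behind. The following is only a minimal sketch of that guard pattern; handle_slice_address, decoder_ctx and MAX_CTB_IN_PIC are invented names for illustration, not libhevc code.

#include <cstdint>
#include <cstdio>

// Hypothetical constants/types; not the real libhevc definitions.
constexpr int32_t MAX_CTB_IN_PIC = 8160;   // assumed upper bound for this sketch

struct decoder_ctx {
    int32_t slice_idx;        // index of the slice currently being filled in
    bool    ignore_slice;     // set when the current slice header is unusable
};

// Sketch of the guard the commit message describes: an out-of-range
// slice_address marks the slice as ignored and, crucially, does NOT
// advance slice_idx, so later stages never see an incomplete header.
bool handle_slice_address(decoder_ctx* ctx, int32_t slice_address) {
    if (slice_address < 0 || slice_address >= MAX_CTB_IN_PIC) {
        ctx->ignore_slice = true;          // drop this slice
        return false;                      // slice_idx left untouched
    }
    ctx->ignore_slice = false;
    ctx->slice_idx++;                      // only valid slices advance the index
    return true;
}

int main() {
    decoder_ctx ctx{0, false};
    std::printf("valid:   %d idx=%d\n", handle_slice_address(&ctx, 10), ctx.slice_idx);
    std::printf("invalid: %d idx=%d\n", handle_slice_address(&ctx, -1), ctx.slice_idx);
}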
CVE-2016-0828
|
https://www.cvedetails.com/cve/CVE-2016-0828/
|
CWE-254
|
https://android.googlesource.com/platform/frameworks/native/+/dded8fdbb700d6cc498debc69a780915bc34d755
|
dded8fdbb700d6cc498debc69a780915bc34d755
|
IGraphicBufferConsumer: fix ATTACH_BUFFER info leak
Bug: 26338113
Change-Id: I019c4df2c6adbc944122df96968ddd11a02ebe33
|
status_t BnGraphicBufferConsumer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case ACQUIRE_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
BufferItem item;
int64_t presentWhen = data.readInt64();
status_t result = acquireBuffer(&item, presentWhen);
status_t err = reply->write(item);
if (err) return err;
reply->writeInt32(result);
return NO_ERROR;
} break;
case DETACH_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
int slot = data.readInt32();
int result = detachBuffer(slot);
reply->writeInt32(result);
return NO_ERROR;
} break;
case ATTACH_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
sp<GraphicBuffer> buffer = new GraphicBuffer();
data.read(*buffer.get());
int slot = -1;
int result = attachBuffer(&slot, buffer);
reply->writeInt32(slot);
reply->writeInt32(result);
return NO_ERROR;
} break;
case RELEASE_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
int buf = data.readInt32();
uint64_t frameNumber = data.readInt64();
sp<Fence> releaseFence = new Fence();
status_t err = data.read(*releaseFence);
if (err) return err;
status_t result = releaseBuffer(buf, frameNumber,
EGL_NO_DISPLAY, EGL_NO_SYNC_KHR, releaseFence);
reply->writeInt32(result);
return NO_ERROR;
} break;
case CONSUMER_CONNECT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
sp<IConsumerListener> consumer = IConsumerListener::asInterface( data.readStrongBinder() );
bool controlledByApp = data.readInt32();
status_t result = consumerConnect(consumer, controlledByApp);
reply->writeInt32(result);
return NO_ERROR;
} break;
case CONSUMER_DISCONNECT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
status_t result = consumerDisconnect();
reply->writeInt32(result);
return NO_ERROR;
} break;
case GET_RELEASED_BUFFERS: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint64_t slotMask;
status_t result = getReleasedBuffers(&slotMask);
reply->writeInt64(slotMask);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_DEFAULT_BUFFER_SIZE: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t w = data.readInt32();
uint32_t h = data.readInt32();
status_t result = setDefaultBufferSize(w, h);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_DEFAULT_MAX_BUFFER_COUNT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t bufferCount = data.readInt32();
status_t result = setDefaultMaxBufferCount(bufferCount);
reply->writeInt32(result);
return NO_ERROR;
} break;
case DISABLE_ASYNC_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
status_t result = disableAsyncBuffer();
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_MAX_ACQUIRED_BUFFER_COUNT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t maxAcquiredBuffers = data.readInt32();
status_t result = setMaxAcquiredBufferCount(maxAcquiredBuffers);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_CONSUMER_NAME: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
setConsumerName( data.readString8() );
return NO_ERROR;
} break;
case SET_DEFAULT_BUFFER_FORMAT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t defaultFormat = data.readInt32();
status_t result = setDefaultBufferFormat(defaultFormat);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_CONSUMER_USAGE_BITS: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t usage = data.readInt32();
status_t result = setConsumerUsageBits(usage);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_TRANSFORM_HINT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t hint = data.readInt32();
status_t result = setTransformHint(hint);
reply->writeInt32(result);
return NO_ERROR;
} break;
case DUMP: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
String8 result = data.readString8();
String8 prefix = data.readString8();
static_cast<IGraphicBufferConsumer*>(this)->dump(result, prefix);
reply->writeString8(result);
return NO_ERROR;
}
}
return BBinder::onTransact(code, data, reply, flags);
}
|
status_t BnGraphicBufferConsumer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case ACQUIRE_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
BufferItem item;
int64_t presentWhen = data.readInt64();
status_t result = acquireBuffer(&item, presentWhen);
status_t err = reply->write(item);
if (err) return err;
reply->writeInt32(result);
return NO_ERROR;
} break;
case DETACH_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
int slot = data.readInt32();
int result = detachBuffer(slot);
reply->writeInt32(result);
return NO_ERROR;
} break;
case ATTACH_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
sp<GraphicBuffer> buffer = new GraphicBuffer();
data.read(*buffer.get());
int slot;
int result = attachBuffer(&slot, buffer);
reply->writeInt32(slot);
reply->writeInt32(result);
return NO_ERROR;
} break;
case RELEASE_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
int buf = data.readInt32();
uint64_t frameNumber = data.readInt64();
sp<Fence> releaseFence = new Fence();
status_t err = data.read(*releaseFence);
if (err) return err;
status_t result = releaseBuffer(buf, frameNumber,
EGL_NO_DISPLAY, EGL_NO_SYNC_KHR, releaseFence);
reply->writeInt32(result);
return NO_ERROR;
} break;
case CONSUMER_CONNECT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
sp<IConsumerListener> consumer = IConsumerListener::asInterface( data.readStrongBinder() );
bool controlledByApp = data.readInt32();
status_t result = consumerConnect(consumer, controlledByApp);
reply->writeInt32(result);
return NO_ERROR;
} break;
case CONSUMER_DISCONNECT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
status_t result = consumerDisconnect();
reply->writeInt32(result);
return NO_ERROR;
} break;
case GET_RELEASED_BUFFERS: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint64_t slotMask;
status_t result = getReleasedBuffers(&slotMask);
reply->writeInt64(slotMask);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_DEFAULT_BUFFER_SIZE: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t w = data.readInt32();
uint32_t h = data.readInt32();
status_t result = setDefaultBufferSize(w, h);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_DEFAULT_MAX_BUFFER_COUNT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t bufferCount = data.readInt32();
status_t result = setDefaultMaxBufferCount(bufferCount);
reply->writeInt32(result);
return NO_ERROR;
} break;
case DISABLE_ASYNC_BUFFER: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
status_t result = disableAsyncBuffer();
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_MAX_ACQUIRED_BUFFER_COUNT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t maxAcquiredBuffers = data.readInt32();
status_t result = setMaxAcquiredBufferCount(maxAcquiredBuffers);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_CONSUMER_NAME: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
setConsumerName( data.readString8() );
return NO_ERROR;
} break;
case SET_DEFAULT_BUFFER_FORMAT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t defaultFormat = data.readInt32();
status_t result = setDefaultBufferFormat(defaultFormat);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_CONSUMER_USAGE_BITS: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t usage = data.readInt32();
status_t result = setConsumerUsageBits(usage);
reply->writeInt32(result);
return NO_ERROR;
} break;
case SET_TRANSFORM_HINT: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
uint32_t hint = data.readInt32();
status_t result = setTransformHint(hint);
reply->writeInt32(result);
return NO_ERROR;
} break;
case DUMP: {
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
String8 result = data.readString8();
String8 prefix = data.readString8();
static_cast<IGraphicBufferConsumer*>(this)->dump(result, prefix);
reply->writeString8(result);
return NO_ERROR;
}
}
return BBinder::onTransact(code, data, reply, flags);
}
|
C
|
Android
| 1 |
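The diff in the CVE-2016-0828 row is a single line: the patched ATTACH_BUFFER handler declares "int slot = -1;" where the vulnerable one declared "int slot;", so a failing attachBuffer() no longer lets uninitialized stack bytes reach the reply Parcel. A minimal standalone sketch of the same pattern follows; Reply, attach_buffer and handle_attach are toy stand-ins invented for illustration, not the Android binder API.

#include <cstdio>
#include <vector>

// Toy stand-ins for Parcel/attachBuffer; not the real Android binder API.
struct Reply {
    std::vector<int> ints;
    void writeInt32(int v) { ints.push_back(v); }
};

// Pretend service call that fails without touching *slot.
int attach_buffer(int* /*slot*/) { return -22; /* e.g. -EINVAL */ }

void handle_attach(Reply* reply) {
    int slot = -1;                 // the fix: always initialized
    int result = attach_buffer(&slot);
    reply->writeInt32(slot);       // with plain "int slot;" this would leak stack data
    reply->writeInt32(result);
}

int main() {
    Reply r;
    handle_attach(&r);
    std::printf("slot=%d result=%d\n", r.ints[0], r.ints[1]);
}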
CVE-2019-15296
|
https://www.cvedetails.com/cve/CVE-2019-15296/
|
CWE-119
|
https://github.com/knik0/faad2/commit/942c3e0aee748ea6fe97cb2c1aa5893225316174
|
942c3e0aee748ea6fe97cb2c1aa5893225316174
|
Fix a couple buffer overflows
https://hackerone.com/reports/502816
https://hackerone.com/reports/507858
https://github.com/videolan/vlc/blob/master/contrib/src/faad2/faad2-fix-overflows.patch
|
static uint8_t spectral_data(NeAACDecStruct *hDecoder, ic_stream *ics, bitfile *ld,
int16_t *spectral_data)
{
int8_t i;
uint8_t g;
uint16_t inc, k, p = 0;
uint8_t groups = 0;
uint8_t sect_cb;
uint8_t result;
uint16_t nshort = hDecoder->frameLength/8;
#ifdef PROFILE
int64_t count = faad_get_ts();
#endif
for(g = 0; g < ics->num_window_groups; g++)
{
p = groups*nshort;
for (i = 0; i < ics->num_sec[g]; i++)
{
sect_cb = ics->sect_cb[g][i];
inc = (sect_cb >= FIRST_PAIR_HCB) ? 2 : 4;
switch (sect_cb)
{
case ZERO_HCB:
case NOISE_HCB:
case INTENSITY_HCB:
case INTENSITY_HCB2:
#ifdef SD_PRINT
{
int j;
for (j = ics->sect_sfb_offset[g][ics->sect_start[g][i]]; j < ics->sect_sfb_offset[g][ics->sect_end[g][i]]; j++)
{
printf("%d\n", 0);
}
}
#endif
#ifdef SFBO_PRINT
printf("%d\n", ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
#endif
p += (ics->sect_sfb_offset[g][ics->sect_end[g][i]] -
ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
break;
default:
#ifdef SFBO_PRINT
printf("%d\n", ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
#endif
for (k = ics->sect_sfb_offset[g][ics->sect_start[g][i]];
k < ics->sect_sfb_offset[g][ics->sect_end[g][i]]; k += inc)
{
if ((result = huffman_spectral_data(sect_cb, ld, &spectral_data[p])) > 0)
return result;
#ifdef SD_PRINT
{
int j;
for (j = p; j < p+inc; j++)
{
printf("%d\n", spectral_data[j]);
}
}
#endif
p += inc;
}
break;
}
}
groups += ics->window_group_length[g];
}
#ifdef PROFILE
count = faad_get_ts() - count;
hDecoder->spectral_cycles += count;
#endif
return 0;
}
|
static uint8_t spectral_data(NeAACDecStruct *hDecoder, ic_stream *ics, bitfile *ld,
int16_t *spectral_data)
{
int8_t i;
uint8_t g;
uint16_t inc, k, p = 0;
uint8_t groups = 0;
uint8_t sect_cb;
uint8_t result;
uint16_t nshort = hDecoder->frameLength/8;
#ifdef PROFILE
int64_t count = faad_get_ts();
#endif
for(g = 0; g < ics->num_window_groups; g++)
{
p = groups*nshort;
for (i = 0; i < ics->num_sec[g]; i++)
{
sect_cb = ics->sect_cb[g][i];
inc = (sect_cb >= FIRST_PAIR_HCB) ? 2 : 4;
switch (sect_cb)
{
case ZERO_HCB:
case NOISE_HCB:
case INTENSITY_HCB:
case INTENSITY_HCB2:
#ifdef SD_PRINT
{
int j;
for (j = ics->sect_sfb_offset[g][ics->sect_start[g][i]]; j < ics->sect_sfb_offset[g][ics->sect_end[g][i]]; j++)
{
printf("%d\n", 0);
}
}
#endif
#ifdef SFBO_PRINT
printf("%d\n", ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
#endif
p += (ics->sect_sfb_offset[g][ics->sect_end[g][i]] -
ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
break;
default:
#ifdef SFBO_PRINT
printf("%d\n", ics->sect_sfb_offset[g][ics->sect_start[g][i]]);
#endif
for (k = ics->sect_sfb_offset[g][ics->sect_start[g][i]];
k < ics->sect_sfb_offset[g][ics->sect_end[g][i]]; k += inc)
{
if ((result = huffman_spectral_data(sect_cb, ld, &spectral_data[p])) > 0)
return result;
#ifdef SD_PRINT
{
int j;
for (j = p; j < p+inc; j++)
{
printf("%d\n", spectral_data[j]);
}
}
#endif
p += inc;
}
break;
}
}
groups += ics->window_group_length[g];
}
#ifdef PROFILE
count = faad_get_ts() - count;
hDecoder->spectral_cycles += count;
#endif
return 0;
}
|
C
|
faad2
| 0 |
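Both copies of spectral_data() in the CVE-2019-15296 row are identical, since this row is a non-vulnerable sample; the referenced commit hardens this area of faad2 against out-of-bounds writes. As a rough sketch of that kind of guard, and not the actual faad2 patch, an inner loop that writes inc coefficients at offset p can simply refuse to run past the output buffer. decode_section and its error code are invented for this sketch.

#include <cstdint>
#include <cstdio>

// Hypothetical guard around a Huffman-decoded coefficient write; the real
// faad2 fix differs in detail, this only illustrates the bounds check.
int decode_section(int16_t* out, uint16_t out_len, uint16_t p,
                   uint16_t k_start, uint16_t k_end, uint16_t inc) {
    for (uint16_t k = k_start; k < k_end; k += inc) {
        if (static_cast<uint32_t>(p) + inc > out_len)
            return 23;                     // bail out instead of overflowing
        for (uint16_t j = 0; j < inc; ++j)
            out[p + j] = 0;                // placeholder for huffman_spectral_data()
        p += inc;
    }
    return 0;
}

int main() {
    int16_t buf[16] = {};
    std::printf("ok=%d\n",  decode_section(buf, 16, 0, 0, 8, 2));   // fits
    std::printf("err=%d\n", decode_section(buf, 16, 12, 0, 8, 2));  // would overflow
}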
CVE-2018-20784
|
https://www.cvedetails.com/cve/CVE-2018-20784/
|
CWE-400
|
https://github.com/torvalds/linux/commit/c40f7d74c741a907cfaeb73a7697081881c497d0
|
c40f7d74c741a907cfaeb73a7697081881c497d0
|
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the
scheduler under high loads, starting at around the v4.18 time frame,
and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list
manipulation.
Do a (manual) revert of:
a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
It turns out that the list_del_leaf_cfs_rq() introduced by this commit
is a surprising property that was not considered in followup commits
such as:
9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list")
As Vincent Guittot explains:
"I think that there is a bigger problem with commit a9e7f6544b9c and
cfs_rq throttling:
Let take the example of the following topology TG2 --> TG1 --> root:
1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1
cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in
one path because it has never been used and can't be throttled so
tmp_alone_branch will point to leaf_cfs_rq_list at the end.
2) Then TG1 is throttled
3) and we add TG3 as a new child of TG1.
4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1
cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list.
With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list.
So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1
cfs_rq is removed from the list.
Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list
but tmp_alone_branch still points to TG3 cfs_rq because its throttled
parent can't be enqueued when the lock is released.
tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should.
So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch
points on another TG cfs_rq, the next TG cfs_rq that will be added,
will be linked outside rq->leaf_cfs_rq_list - which is bad.
In addition, we can break the ordering of the cfs_rq in
rq->leaf_cfs_rq_list but this ordering is used to update and
propagate the update from leaf down to root."
Instead of trying to work through all these cases and trying to reproduce
the very high loads that produced the lockup to begin with, simplify
the code temporarily by reverting a9e7f6544b9c - which change was clearly
not thought through completely.
This (hopefully) gives us a kernel that doesn't lock up so people
can continue to enjoy their holidays without worrying about regressions. ;-)
[ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ]
Analyzed-by: Xie XiuQi <[email protected]>
Analyzed-by: Vincent Guittot <[email protected]>
Reported-by: Zhipeng Xie <[email protected]>
Reported-by: Sargun Dhillon <[email protected]>
Reported-by: Xie XiuQi <[email protected]>
Tested-by: Zhipeng Xie <[email protected]>
Tested-by: Sargun Dhillon <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
Cc: <[email protected]> # v4.13+
Cc: Bin Li <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
void unregister_fair_sched_group(struct task_group *tg) { }
|
void unregister_fair_sched_group(struct task_group *tg) { }
|
C
|
linux
| 0 |
CVE-2017-9059
|
https://www.cvedetails.com/cve/CVE-2017-9059/
|
CWE-404
|
https://github.com/torvalds/linux/commit/c70422f760c120480fee4de6c38804c72aa26bc1
|
c70422f760c120480fee4de6c38804c72aa26bc1
|
Merge tag 'nfsd-4.12' of git://linux-nfs.org/~bfields/linux
Pull nfsd updates from Bruce Fields:
"Another RDMA update from Chuck Lever, and a bunch of miscellaneous
bugfixes"
* tag 'nfsd-4.12' of git://linux-nfs.org/~bfields/linux: (26 commits)
nfsd: Fix up the "supattr_exclcreat" attributes
nfsd: encoders mustn't use unitialized values in error cases
nfsd: fix undefined behavior in nfsd4_layout_verify
lockd: fix lockd shutdown race
NFSv4: Fix callback server shutdown
SUNRPC: Refactor svc_set_num_threads()
NFSv4.x/callback: Create the callback service through svc_create_pooled
lockd: remove redundant check on block
svcrdma: Clean out old XDR encoders
svcrdma: Remove the req_map cache
svcrdma: Remove unused RDMA Write completion handler
svcrdma: Reduce size of sge array in struct svc_rdma_op_ctxt
svcrdma: Clean up RPC-over-RDMA backchannel reply processing
svcrdma: Report Write/Reply chunk overruns
svcrdma: Clean up RDMA_ERROR path
svcrdma: Use rdma_rw API in RPC reply path
svcrdma: Introduce local rdma_rw API helpers
svcrdma: Clean up svc_rdma_get_inv_rkey()
svcrdma: Add helper to save pages under I/O
svcrdma: Eliminate RPCRDMA_SQ_DEPTH_MULT
...
|
nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
struct nfsd4_clid_slot *slot, __be32 nfserr)
{
slot->sl_status = nfserr;
memcpy(&slot->sl_cr_ses, cr_ses, sizeof(*cr_ses));
}
|
nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
struct nfsd4_clid_slot *slot, __be32 nfserr)
{
slot->sl_status = nfserr;
memcpy(&slot->sl_cr_ses, cr_ses, sizeof(*cr_ses));
}
|
C
|
linux
| 0 |
CVE-2013-0890
|
https://www.cvedetails.com/cve/CVE-2013-0890/
|
CWE-119
|
https://github.com/chromium/chromium/commit/d443be6fdfe17ca4f3ff1843ded362ff0cd01096
|
d443be6fdfe17ca4f3ff1843ded362ff0cd01096
|
Check for a negative integer properly.
BUG=169966
Review URL: https://chromiumcodereview.appspot.com/11892002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@176879 0039d316-1c4b-4281-b951-d872f2087c98
|
void SafeBrowsingBlockingPage::OnDontProceed() {
RecordUserReactionTime(kNavigatedAwayMetaCommand);
if (proceeded_)
return;
RecordUserAction(DONT_PROCEED);
FinishMalwareDetails(0); // No delay
NotifySafeBrowsingUIManager(ui_manager_, unsafe_resources_, false);
UnsafeResourceMap* unsafe_resource_map = GetUnsafeResourcesMap();
UnsafeResourceMap::iterator iter = unsafe_resource_map->find(web_contents_);
if (iter != unsafe_resource_map->end() && !iter->second.empty()) {
NotifySafeBrowsingUIManager(ui_manager_, iter->second, false);
unsafe_resource_map->erase(iter);
}
int last_committed_index =
web_contents_->GetController().GetLastCommittedEntryIndex();
if (navigation_entry_index_to_remove_ != -1 &&
navigation_entry_index_to_remove_ != last_committed_index &&
!web_contents_->IsBeingDestroyed()) {
web_contents_->GetController().RemoveEntryAtIndex(
navigation_entry_index_to_remove_);
navigation_entry_index_to_remove_ = -1;
}
}
|
void SafeBrowsingBlockingPage::OnDontProceed() {
RecordUserReactionTime(kNavigatedAwayMetaCommand);
if (proceeded_)
return;
RecordUserAction(DONT_PROCEED);
FinishMalwareDetails(0); // No delay
NotifySafeBrowsingUIManager(ui_manager_, unsafe_resources_, false);
UnsafeResourceMap* unsafe_resource_map = GetUnsafeResourcesMap();
UnsafeResourceMap::iterator iter = unsafe_resource_map->find(web_contents_);
if (iter != unsafe_resource_map->end() && !iter->second.empty()) {
NotifySafeBrowsingUIManager(ui_manager_, iter->second, false);
unsafe_resource_map->erase(iter);
}
int last_committed_index =
web_contents_->GetController().GetLastCommittedEntryIndex();
if (navigation_entry_index_to_remove_ != -1 &&
navigation_entry_index_to_remove_ != last_committed_index &&
!web_contents_->IsBeingDestroyed()) {
web_contents_->GetController().RemoveEntryAtIndex(
navigation_entry_index_to_remove_);
navigation_entry_index_to_remove_ = -1;
}
}
|
C
|
Chrome
| 0 |
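The commit message in the CVE-2013-0890 row is terse ("Check for a negative integer properly") and OnDontProceed() itself is unchanged, but the defect class is simple: an index that legitimately takes the sentinel value -1 must be rejected before it is used to address an entry. A minimal hedged sketch follows, using an invented Controller type rather than the Chromium NavigationController.

#include <cstdio>
#include <vector>

// Toy navigation controller; not the Chromium NavigationController API.
struct Controller {
    std::vector<int> entries;
    bool RemoveEntryAtIndex(int i) {
        if (i < 0 || static_cast<size_t>(i) >= entries.size())
            return false;                      // the "check properly" part
        entries.erase(entries.begin() + i);
        return true;
    }
};

int main() {
    Controller c{{10, 11, 12}};
    std::printf("remove -1: %d\n", c.RemoveEntryAtIndex(-1));  // rejected
    std::printf("remove  1: %d\n", c.RemoveEntryAtIndex(1));   // ok
}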
CVE-2017-15391
|
https://www.cvedetails.com/cve/CVE-2017-15391/
| null |
https://github.com/chromium/chromium/commit/f1afce25b3f94d8bddec69b08ffbc29b989ad844
|
f1afce25b3f94d8bddec69b08ffbc29b989ad844
|
[Extensions] Update navigations across hypothetical extension extents
Update code to treat navigations across hypothetical extension extents
(e.g. for nonexistent extensions) the same as we do for navigations
crossing installed extension extents.
Bug: 598265
Change-Id: Ibdf2f563ce1fd108ead279077901020a24de732b
Reviewed-on: https://chromium-review.googlesource.com/617180
Commit-Queue: Devlin <[email protected]>
Reviewed-by: Alex Moshchuk <[email protected]>
Reviewed-by: Nasko Oskov <[email protected]>
Cr-Commit-Position: refs/heads/master@{#495779}
|
extensions::ExtensionHost* ExtensionBrowserTest::FindHostWithPath(
extensions::ProcessManager* manager,
const std::string& path,
int expected_hosts) {
extensions::ExtensionHost* result_host = nullptr;
int num_hosts = 0;
for (extensions::ExtensionHost* host : manager->background_hosts()) {
if (host->GetURL().path() == path) {
EXPECT_FALSE(result_host);
result_host = host;
}
num_hosts++;
}
EXPECT_EQ(expected_hosts, num_hosts);
return result_host;
}
|
extensions::ExtensionHost* ExtensionBrowserTest::FindHostWithPath(
extensions::ProcessManager* manager,
const std::string& path,
int expected_hosts) {
extensions::ExtensionHost* result_host = nullptr;
int num_hosts = 0;
for (extensions::ExtensionHost* host : manager->background_hosts()) {
if (host->GetURL().path() == path) {
EXPECT_FALSE(result_host);
result_host = host;
}
num_hosts++;
}
EXPECT_EQ(expected_hosts, num_hosts);
return result_host;
}
|
C
|
Chrome
| 0 |
CVE-2017-0380
|
https://www.cvedetails.com/cve/CVE-2017-0380/
|
CWE-532
|
https://github.com/torproject/tor/commit/09ea89764a4d3a907808ed7d4fe42abfe64bd486
|
09ea89764a4d3a907808ed7d4fe42abfe64bd486
|
Fix log-uninitialized-stack bug in rend_service_intro_established.
Fixes bug 23490; bugfix on 0.2.7.2-alpha.
TROVE-2017-008
CVE-2017-0380
|
count_established_intro_points(const rend_service_t *service)
{
unsigned int num = 0;
SMARTLIST_FOREACH(service->intro_nodes, rend_intro_point_t *, intro,
num += intro->circuit_established
);
return num;
}
|
count_established_intro_points(const rend_service_t *service)
{
unsigned int num = 0;
SMARTLIST_FOREACH(service->intro_nodes, rend_intro_point_t *, intro,
num += intro->circuit_established
);
return num;
}
|
C
|
tor
| 0 |
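The helper shown in the CVE-2017-0380 row is unchanged; the fix referenced by the commit message addressed rend_service_intro_established() logging a stack buffer before it had been filled in (CWE-532). The pattern is easy to sketch in isolation: zero or populate the buffer before any path can hand it to the logger. The names below are invented, not Tor's.

#include <cstdio>
#include <cstring>

// Hedged illustration of the log-uninitialized-stack class of bug:
// make sure the buffer is populated (or at least zeroed) before any
// code path can hand it to the logger.
void log_service_id(bool have_id, const char* id_source) {
    char serviceid[17];
    memset(serviceid, 0, sizeof(serviceid));        // never log garbage bytes
    if (have_id)
        snprintf(serviceid, sizeof(serviceid), "%s", id_source);
    std::printf("intro established for service %s\n",
                serviceid[0] ? serviceid : "(unknown)");
}

int main() {
    log_service_id(true, "abcdef0123456789");
    log_service_id(false, nullptr);
}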
CVE-2013-6624
|
https://www.cvedetails.com/cve/CVE-2013-6624/
|
CWE-399
|
https://github.com/chromium/chromium/commit/36773850210becda3d76f27285ecd899fafdfc72
|
36773850210becda3d76f27285ecd899fafdfc72
|
Fix tracking of the id attribute string if it is shared across elements.
The patch to remove AtomicStringImpl:
http://src.chromium.org/viewvc/blink?view=rev&rev=154790
Exposed a lifetime issue with strings for id attributes. We simply need to use
AtomicString.
BUG=290566
Review URL: https://codereview.chromium.org/33793004
git-svn-id: svn://svn.chromium.org/blink/trunk@160250 bbb929c8-8fbe-4397-9dbb-9b2b20218538
|
HTMLDocument::~HTMLDocument()
{
}
|
HTMLDocument::~HTMLDocument()
{
}
|
C
|
Chrome
| 0 |
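The empty destructor shown in the CVE-2013-6624 row carries no detail, but the commit message does: an id-attribute string tracked through a raw pointer could outlive its owner, and holding it as an AtomicString (a ref-counted handle) fixes the lifetime. The following is only a loose analogy in standard C++, with shared_ptr standing in for AtomicString; IdRegistry is an invented type, not Blink code.

#include <cstdio>
#include <memory>
#include <string>

// Hedged sketch of the lifetime issue the commit message describes:
// tracking an id by raw pointer dangles once the last owner goes away,
// while holding a shared (ref-counted) handle keeps the string alive.
struct IdRegistry {
    std::shared_ptr<const std::string> tracked_id;   // analogous to AtomicString
    void Track(std::shared_ptr<const std::string> id) { tracked_id = std::move(id); }
};

int main() {
    IdRegistry registry;
    {
        auto id = std::make_shared<const std::string>("header");
        registry.Track(id);
    }   // original owner gone; the registry's copy keeps the data alive
    std::printf("still tracking: %s\n", registry.tracked_id->c_str());
}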