Dataset schema (eleven fields per record, one value per line in the records below):

    CVE ID          string, length 13-43
    CVE Page        string, length 45-48
    CWE ID          string, categorical (90 distinct values)
    codeLink        string, length 46-139
    commit_id       string, length 6-81
    commit_message  string, length 3-13.3k
    func_after      string, length 14-241k
    func_before     string, length 14-241k
    lang            string, categorical (3 distinct values)
    project         string, categorical (309 distinct values)
    vul             int8, values 0 and 1
CVE-2016-5218
https://www.cvedetails.com/cve/CVE-2016-5218/
CWE-20
https://github.com/chromium/chromium/commit/45d901b56f578a74b19ba0d10fa5c4c467f19303
45d901b56f578a74b19ba0d10fa5c4c467f19303
Paint tab groups with the group color. * The background of TabGroupHeader now uses the group color. * The backgrounds of tabs in the group are tinted with the group color. This treatment, along with the colors chosen, is intended as a placeholder. Bug: 905491 Change-Id: Ic808548f8eba23064606e7fb8c9bba281d0d117f Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1610504 Commit-Queue: Bret Sepulveda <[email protected]> Reviewed-by: Taylor Bergquist <[email protected]> Cr-Commit-Position: refs/heads/master@{#660498}
void TabStrip::MaybeStartDrag( Tab* tab, const ui::LocatedEvent& event, const ui::ListSelectionModel& original_selection) { if (IsAnimating() || tab->closing() || controller_->HasAvailableDragActions() == 0) { return; } int model_index = GetModelIndexOfTab(tab); if (!IsValidModelIndex(model_index)) { CHECK(false); return; } drag_context_->MaybeStartDrag(tab, model_index, event, original_selection); }
void TabStrip::MaybeStartDrag( Tab* tab, const ui::LocatedEvent& event, const ui::ListSelectionModel& original_selection) { if (IsAnimating() || tab->closing() || controller_->HasAvailableDragActions() == 0) { return; } int model_index = GetModelIndexOfTab(tab); if (!IsValidModelIndex(model_index)) { CHECK(false); return; } drag_context_->MaybeStartDrag(tab, model_index, event, original_selection); }
C
Chrome
0
CVE-2014-3610
https://www.cvedetails.com/cve/CVE-2014-3610/
CWE-264
https://github.com/torvalds/linux/commit/854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
KVM: x86: Check non-canonical addresses upon WRMSR Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is written to certain MSRs. The behavior is "almost" identical for AMD and Intel (ignoring MSRs that are not implemented in either architecture since they would anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if a non-canonical address is written on Intel but not on AMD (which ignores the top 32 bits). Accordingly, this patch injects a #GP on the MSRs which behave identically on Intel and AMD. To eliminate the differences between the architectures, the value which is written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned into a canonical value before writing instead of injecting a #GP. Some references from the Intel and AMD manuals: According to the Intel SDM description of the WRMSR instruction, #GP is expected on WRMSR "If the source register contains a non-canonical address and ECX specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE, IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP." According to the AMD instruction manual: LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the LSTAR and CSTAR registers. If an RIP written by WRMSR is not in canonical form, a general-protection exception (#GP) occurs." IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the base field must be in canonical form or a #GP fault will occur." IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must be in canonical form." This patch fixes CVE-2014-3610. Cc: [email protected] Signed-off-by: Nadav Amit <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
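As a rough illustration of the canonical-address rule this commit enforces, here is a minimal userspace sketch assuming 48 virtual-address bits; is_canonical() and canonicalize() are invented helper names, not KVM's API:

    /* An x86-64 address is canonical iff bits 63..47 are sign-extended
     * copies of bit 46; equivalently, sign-extending from bit 47
     * reproduces the value. */
    #include <stdint.h>
    #include <stdio.h>

    #define VA_BITS 48

    static int is_canonical(uint64_t addr)
    {
        return ((int64_t)(addr << (64 - VA_BITS)) >> (64 - VA_BITS)) == (int64_t)addr;
    }

    /* Sign-extend upward, as the commit does for the SYSENTER MSRs to
     * hide the Intel/AMD behavioral difference. */
    static uint64_t canonicalize(uint64_t addr)
    {
        return (uint64_t)((int64_t)(addr << (64 - VA_BITS)) >> (64 - VA_BITS));
    }

    int main(void)
    {
        uint64_t good = 0xffff800000000000ULL;  /* canonical kernel-half address */
        uint64_t bad  = 0x0000800000000000ULL;  /* bit 47 set, upper bits clear */

        printf("%#llx canonical? %d\n", (unsigned long long)good, is_canonical(good));
        printf("%#llx canonical? %d -> canonicalized %#llx\n",
               (unsigned long long)bad, is_canonical(bad),
               (unsigned long long)canonicalize(bad));
        /* A WRMSR emulator following the commit would inject #GP for `bad`
         * on e.g. MSR_LSTAR, or canonicalize it for IA32_SYSENTER_ESP/EIP. */
        return 0;
    }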
static void svm_inject_nmi(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI; vcpu->arch.hflags |= HF_NMI_MASK; set_intercept(svm, INTERCEPT_IRET); ++vcpu->stat.nmi_injections; }
static void svm_inject_nmi(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI; vcpu->arch.hflags |= HF_NMI_MASK; set_intercept(svm, INTERCEPT_IRET); ++vcpu->stat.nmi_injections; }
C
linux
0
CVE-2013-2128
https://www.cvedetails.com/cve/CVE-2013-2128/
CWE-119
https://github.com/torvalds/linux/commit/baff42ab1494528907bf4d5870359e31711746ae
baff42ab1494528907bf4d5870359e31711746ae
net: Fix oops from tcp_collapse() when using splice() tcp_read_sock() can eat skbs without immediately advancing copied_seq. This can cause a panic in tcp_collapse() if it is called as a result of the recv_actor dropping the socket lock. A userspace program that splices data from a socket to either another socket or to a file can trigger this bug. Signed-off-by: Steven J. Magnani <[email protected]> Signed-off-by: David S. Miller <[email protected]>
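A toy, single-threaded model of the invariant this fix restores -- publish the consumed sequence each time an skb is eaten, not only after the loop, because the recv_actor may drop the lock mid-loop and let other code observe the counter. All names below are invented stand-ins, not kernel code:

    #include <stdio.h>

    struct toy_sock {
        unsigned int copied_seq;   /* what the socket claims to have consumed */
    };

    /* Simulates anything that can run while the recv_actor has dropped the
     * lock: it must never observe a stale copied_seq. */
    static void observer(struct toy_sock *sk, unsigned int actually_consumed)
    {
        if (sk->copied_seq != actually_consumed)
            printf("BUG: observer sees copied_seq=%u, actually consumed %u\n",
                   sk->copied_seq, actually_consumed);
    }

    int main(void)
    {
        struct toy_sock sk = { .copied_seq = 0 };
        unsigned int seq = sk.copied_seq;
        unsigned int skb_len[3] = { 100, 200, 300 };

        for (int i = 0; i < 3; i++) {
            seq += skb_len[i];     /* "eat" the skb */
            sk.copied_seq = seq;   /* the fix's shape: publish before yielding */
            observer(&sk, seq);    /* models the recv_actor releasing the lock */
        }
        printf("final copied_seq=%u\n", sk.copied_seq);
        return 0;
    }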
void tcp_done(struct sock *sk) { if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV) TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS); tcp_set_state(sk, TCP_CLOSE); tcp_clear_xmit_timers(sk); sk->sk_shutdown = SHUTDOWN_MASK; if (!sock_flag(sk, SOCK_DEAD)) sk->sk_state_change(sk); else inet_csk_destroy_sock(sk); }
void tcp_done(struct sock *sk) { if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV) TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS); tcp_set_state(sk, TCP_CLOSE); tcp_clear_xmit_timers(sk); sk->sk_shutdown = SHUTDOWN_MASK; if (!sock_flag(sk, SOCK_DEAD)) sk->sk_state_change(sk); else inet_csk_destroy_sock(sk); }
C
linux
0
CVE-2014-5336
https://www.cvedetails.com/cve/CVE-2014-5336/
CWE-20
https://github.com/monkey/monkey/commit/b2d0e6f92310bb14a15aa2f8e96e1fb5379776dd
b2d0e6f92310bb14a15aa2f8e96e1fb5379776dd
Request: new request session flag to mark those files opened by FDT This patch aims to fix a potential DDoS problem that can be caused by repeatedly querying non-existent resources on the server. When serving a static file, the core uses the Vhost FDT mechanism, but if it sends a static error page it does a direct open(2). When closing the resources for the same request it was just calling mk_vhost_close(), which did not properly clear the file descriptor. This patch adds a new field on the struct session_request called 'fd_is_fdt', which contains MK_TRUE or MK_FALSE depending on how fd_file was opened. Thanks to Matthew Daley <[email protected]> for reporting and troubleshooting this problem. Signed-off-by: Eduardo Silva <[email protected]>
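A minimal sketch of the pattern the patch introduces: record how a descriptor was obtained so teardown releases it the same way. Only the fd_is_fdt flag mirrors the commit; the toy_request struct and fdt_close() stub are assumptions, not Monkey's real API:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define MK_TRUE  1
    #define MK_FALSE 0

    struct toy_request {
        int fd_file;
        int fd_is_fdt;   /* MK_TRUE if opened through the vhost FDT cache */
    };

    /* Stub: a real FDT layer would return the fd to its cache. */
    static void fdt_close(int fd) { close(fd); }

    static void request_close(struct toy_request *sr)
    {
        if (sr->fd_file < 0)
            return;
        if (sr->fd_is_fdt == MK_TRUE)
            fdt_close(sr->fd_file);   /* cached path */
        else
            close(sr->fd_file);       /* direct open(2), e.g. an error page */
        sr->fd_file = -1;
    }

    int main(void)
    {
        struct toy_request sr = { .fd_file = open("/dev/null", O_RDONLY),
                                  .fd_is_fdt = MK_FALSE };
        request_close(&sr);   /* without the flag, this fd could leak */
        printf("closed, fd_file=%d\n", sr.fd_file);
        return 0;
    }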
void mk_request_free_list(struct client_session *cs) { struct session_request *sr_node; struct mk_list *sr_head, *temp; /* sr = last node */ MK_TRACE("[FD %i] Free struct client_session", cs->socket); mk_list_foreach_safe(sr_head, temp, &cs->request_list) { sr_node = mk_list_entry(sr_head, struct session_request, _head); mk_list_del(sr_head); mk_request_free(sr_node); if (sr_node != &cs->sr_fixed) { mk_mem_free(sr_node); } } }
void mk_request_free_list(struct client_session *cs) { struct session_request *sr_node; struct mk_list *sr_head, *temp; /* sr = last node */ MK_TRACE("[FD %i] Free struct client_session", cs->socket); mk_list_foreach_safe(sr_head, temp, &cs->request_list) { sr_node = mk_list_entry(sr_head, struct session_request, _head); mk_list_del(sr_head); mk_request_free(sr_node); if (sr_node != &cs->sr_fixed) { mk_mem_free(sr_node); } } }
C
monkey
0
CVE-2013-2017
https://www.cvedetails.com/cve/CVE-2013-2017/
CWE-399
https://github.com/torvalds/linux/commit/6ec82562ffc6f297d0de36d65776cff8e5704867
6ec82562ffc6f297d0de36d65776cff8e5704867
veth: Don't kfree_skb() after dev_forward_skb() In case of congestion, netif_rx() frees the skb, so we must assume dev_forward_skb() also consumes the skb. Bug introduced by commit 445409602c092 (veth: move loopback logic to common location) We must change dev_forward_skb() to always consume the skb, and veth to not double-free it. Bug report: http://marc.info/?l=linux-netdev&m=127310770900442&w=3 Reported-by: Martín Ferrari <[email protected]> Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
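A toy illustration of the ownership convention the fix settles on: the callee always consumes the buffer, success or failure, so the caller must never free it afterwards. All names are invented; this is not the kernel code:

    #include <stdio.h>
    #include <stdlib.h>

    struct buf { int len; };

    /* Like the fixed dev_forward_skb(): always consumes `b`. */
    static int forward(struct buf *b, int congested)
    {
        if (congested) {
            free(b);              /* the drop path consumes the buffer too */
            return -1;
        }
        printf("delivered %d bytes\n", b->len);
        free(b);                  /* the delivery path consumes it as well */
        return 0;
    }

    int main(void)
    {
        struct buf *b = malloc(sizeof(*b));
        if (!b)
            return 1;
        b->len = 42;
        if (forward(b, 1) < 0) {
            /* The old veth bug: calling free(b) (kfree_skb) here would be a
             * double free, because forward() already consumed the buffer. */
        }
        return 0;
    }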
static void dev_seq_printf_stats(struct seq_file *seq, struct net_device *dev) { const struct net_device_stats *stats = dev_get_stats(dev); seq_printf(seq, "%6s: %7lu %7lu %4lu %4lu %4lu %5lu %10lu %9lu " "%8lu %7lu %4lu %4lu %4lu %5lu %7lu %10lu\n", dev->name, stats->rx_bytes, stats->rx_packets, stats->rx_errors, stats->rx_dropped + stats->rx_missed_errors, stats->rx_fifo_errors, stats->rx_length_errors + stats->rx_over_errors + stats->rx_crc_errors + stats->rx_frame_errors, stats->rx_compressed, stats->multicast, stats->tx_bytes, stats->tx_packets, stats->tx_errors, stats->tx_dropped, stats->tx_fifo_errors, stats->collisions, stats->tx_carrier_errors + stats->tx_aborted_errors + stats->tx_window_errors + stats->tx_heartbeat_errors, stats->tx_compressed); }
static void dev_seq_printf_stats(struct seq_file *seq, struct net_device *dev) { const struct net_device_stats *stats = dev_get_stats(dev); seq_printf(seq, "%6s: %7lu %7lu %4lu %4lu %4lu %5lu %10lu %9lu " "%8lu %7lu %4lu %4lu %4lu %5lu %7lu %10lu\n", dev->name, stats->rx_bytes, stats->rx_packets, stats->rx_errors, stats->rx_dropped + stats->rx_missed_errors, stats->rx_fifo_errors, stats->rx_length_errors + stats->rx_over_errors + stats->rx_crc_errors + stats->rx_frame_errors, stats->rx_compressed, stats->multicast, stats->tx_bytes, stats->tx_packets, stats->tx_errors, stats->tx_dropped, stats->tx_fifo_errors, stats->collisions, stats->tx_carrier_errors + stats->tx_aborted_errors + stats->tx_window_errors + stats->tx_heartbeat_errors, stats->tx_compressed); }
C
linux
0
CVE-2017-15923
https://www.cvedetails.com/cve/CVE-2017-15923/
null
https://cgit.kde.org/konversation.git/commit/?h=1.7&id=6a7f59ee1b9dbc6e5cf9e5f3b306504d02b73ef0
6a7f59ee1b9dbc6e5cf9e5f3b306504d02b73ef0
null
QString IRCView::openTags(TextHtmlData* data, int from) { QString ret, tag; int i = from > -1 ? from : 0; for ( ; i < data->openHtmlTags.count(); ++i) { tag = data->openHtmlTags.at(i); if (tag == QLatin1String("font")) { if (data->reverse) { ret += fontColorOpenTag(Preferences::self()->color(Preferences::TextViewBackground).name()); } else { ret += fontColorOpenTag(data->lastFgColor); } } else if (tag == QLatin1String("span")) { if (data->reverse) { ret += spanColorOpenTag(data->defaultColor); } else { ret += spanColorOpenTag(data->lastBgColor); } } else { ret += QLatin1Char('<') + tag + QLatin1Char('>'); } } return ret; }
QString IRCView::openTags(TextHtmlData* data, int from) { QString ret, tag; int i = from; for ( ; i < data->openHtmlTags.count(); ++i) { tag = data->openHtmlTags.at(i); if (tag == QLatin1String("font")) { if (data->reverse) { ret += fontColorOpenTag(Preferences::self()->color(Preferences::TextViewBackground).name()); } else { ret += fontColorOpenTag(data->lastFgColor); } } else if (tag == QLatin1String("span")) { if (data->reverse) { ret += spanColorOpenTag(data->defaultColor); } else { ret += spanColorOpenTag(data->lastBgColor); } } else { ret += QLatin1Char('<') + tag + QLatin1Char('>'); } } return ret; }
CPP
kde
1
CVE-2013-2882
https://www.cvedetails.com/cve/CVE-2013-2882/
null
https://github.com/chromium/chromium/commit/dbbcd55a666ab8389d5d223994a95a59ad20dd13
dbbcd55a666ab8389d5d223994a95a59ad20dd13
Disable AutofillInteractiveTest.OnChangeAfterAutofill test. Failing due to http://src.chromium.org/viewvc/blink?view=revision&revision=170278. BUG=353691 [email protected], [email protected] Review URL: https://codereview.chromium.org/216853002 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@260106 0039d316-1c4b-4281-b951-d872f2087c98
void SetProfile(const AutofillProfile& profile) { std::vector<AutofillProfile> profiles; profiles.push_back(profile); SetProfiles(&profiles); }
void SetProfile(const AutofillProfile& profile) { std::vector<AutofillProfile> profiles; profiles.push_back(profile); SetProfiles(&profiles); }
C
Chrome
0
CVE-2015-6773
https://www.cvedetails.com/cve/CVE-2015-6773/
CWE-119
https://github.com/chromium/chromium/commit/33827275411b33371e7bb750cce20f11de85002d
33827275411b33371e7bb750cce20f11de85002d
Move SelectionTemplate::is_handle_visible_ to FrameSelection This patch moves |is_handle_visible_| to |FrameSelection| from |SelectionTemplate| since handle visibility is used only for setting |FrameSelection|, hence it is a redundant member variable of |SelectionTemplate|. Bug: 742093 Change-Id: I3add4da3844fb40be34dcb4d4b46b5fa6fed1d7e Reviewed-on: https://chromium-review.googlesource.com/595389 Commit-Queue: Yoshifumi Inoue <[email protected]> Reviewed-by: Xiaocheng Hu <[email protected]> Reviewed-by: Kent Tamura <[email protected]> Cr-Commit-Position: refs/heads/master@{#491660}
String InputMethodController::ComposingText() const { DocumentLifecycle::DisallowTransitionScope disallow_transition( GetDocument().Lifecycle()); return PlainText( CompositionEphemeralRange(), TextIteratorBehavior::Builder().SetEmitsOriginalText(true).Build()); }
String InputMethodController::ComposingText() const { DocumentLifecycle::DisallowTransitionScope disallow_transition( GetDocument().Lifecycle()); return PlainText( CompositionEphemeralRange(), TextIteratorBehavior::Builder().SetEmitsOriginalText(true).Build()); }
C
Chrome
0
CVE-2013-6663
https://www.cvedetails.com/cve/CVE-2013-6663/
CWE-399
https://github.com/chromium/chromium/commit/fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
One polymer_config.js to rule them all. [email protected],[email protected],[email protected] BUG=425626 Review URL: https://codereview.chromium.org/1224783005 Cr-Commit-Position: refs/heads/master@{#337882}
content::WebUIDataSource* CreateOobeUIDataSource( const base::DictionaryValue& localized_strings, const std::string& display_type) { content::WebUIDataSource* source = content::WebUIDataSource::Create(chrome::kChromeUIOobeHost); source->AddLocalizedStrings(localized_strings); source->SetJsonPath(kStringsJSPath); if (display_type == OobeUI::kOobeDisplay) { source->SetDefaultResource(IDR_OOBE_HTML); source->AddResourcePath(kOobeJSPath, IDR_OOBE_JS); source->AddResourcePath(kCustomElementsHTMLPath, IDR_CUSTOM_ELEMENTS_OOBE_HTML); source->AddResourcePath(kCustomElementsJSPath, IDR_CUSTOM_ELEMENTS_OOBE_JS); } else { source->SetDefaultResource(IDR_LOGIN_HTML); source->AddResourcePath(kLoginJSPath, IDR_LOGIN_JS); source->AddResourcePath(kCustomElementsHTMLPath, IDR_CUSTOM_ELEMENTS_LOGIN_HTML); source->AddResourcePath(kCustomElementsJSPath, IDR_CUSTOM_ELEMENTS_LOGIN_JS); } source->AddResourcePath(kKeyboardUtilsJSPath, IDR_KEYBOARD_UTILS_JS); source->OverrideContentSecurityPolicyFrameSrc( base::StringPrintf( "frame-src chrome://terms/ %s/;", extensions::kGaiaAuthExtensionOrigin)); source->OverrideContentSecurityPolicyObjectSrc("object-src *;"); bool is_webview_signin_enabled = StartupUtils::IsWebviewSigninEnabled(); source->AddResourcePath("gaia_auth_host.js", is_webview_signin_enabled ? IDR_GAIA_AUTH_AUTHENTICATOR_JS : IDR_GAIA_AUTH_HOST_JS); source->AddResourcePath(kEnrollmentHTMLPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_HTML : IDR_OOBE_ENROLLMENT_HTML); source->AddResourcePath(kEnrollmentCSSPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_CSS : IDR_OOBE_ENROLLMENT_CSS); source->AddResourcePath(kEnrollmentJSPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_JS : IDR_OOBE_ENROLLMENT_JS); if (display_type == OobeUI::kOobeDisplay) { source->AddResourcePath("Roboto-Thin.ttf", IDR_FONT_ROBOTO_THIN); source->AddResourcePath("Roboto-Light.ttf", IDR_FONT_ROBOTO_LIGHT); source->AddResourcePath("Roboto-Regular.ttf", IDR_FONT_ROBOTO_REGULAR); source->AddResourcePath("Roboto-Medium.ttf", IDR_FONT_ROBOTO_MEDIUM); source->AddResourcePath("Roboto-Bold.ttf", IDR_FONT_ROBOTO_BOLD); } return source; }
content::WebUIDataSource* CreateOobeUIDataSource( const base::DictionaryValue& localized_strings, const std::string& display_type) { content::WebUIDataSource* source = content::WebUIDataSource::Create(chrome::kChromeUIOobeHost); source->AddLocalizedStrings(localized_strings); source->SetJsonPath(kStringsJSPath); if (display_type == OobeUI::kOobeDisplay) { source->SetDefaultResource(IDR_OOBE_HTML); source->AddResourcePath(kOobeJSPath, IDR_OOBE_JS); source->AddResourcePath(kCustomElementsHTMLPath, IDR_CUSTOM_ELEMENTS_OOBE_HTML); source->AddResourcePath(kCustomElementsJSPath, IDR_CUSTOM_ELEMENTS_OOBE_JS); } else { source->SetDefaultResource(IDR_LOGIN_HTML); source->AddResourcePath(kLoginJSPath, IDR_LOGIN_JS); source->AddResourcePath(kCustomElementsHTMLPath, IDR_CUSTOM_ELEMENTS_LOGIN_HTML); source->AddResourcePath(kCustomElementsJSPath, IDR_CUSTOM_ELEMENTS_LOGIN_JS); } source->AddResourcePath(kPolymerConfigJSPath, IDR_POLYMER_CONFIG_JS); source->AddResourcePath(kKeyboardUtilsJSPath, IDR_KEYBOARD_UTILS_JS); source->OverrideContentSecurityPolicyFrameSrc( base::StringPrintf( "frame-src chrome://terms/ %s/;", extensions::kGaiaAuthExtensionOrigin)); source->OverrideContentSecurityPolicyObjectSrc("object-src *;"); bool is_webview_signin_enabled = StartupUtils::IsWebviewSigninEnabled(); source->AddResourcePath("gaia_auth_host.js", is_webview_signin_enabled ? IDR_GAIA_AUTH_AUTHENTICATOR_JS : IDR_GAIA_AUTH_HOST_JS); source->AddResourcePath(kEnrollmentHTMLPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_HTML : IDR_OOBE_ENROLLMENT_HTML); source->AddResourcePath(kEnrollmentCSSPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_CSS : IDR_OOBE_ENROLLMENT_CSS); source->AddResourcePath(kEnrollmentJSPath, is_webview_signin_enabled ? IDR_OOBE_ENROLLMENT_WEBVIEW_JS : IDR_OOBE_ENROLLMENT_JS); if (display_type == OobeUI::kOobeDisplay) { source->AddResourcePath("Roboto-Thin.ttf", IDR_FONT_ROBOTO_THIN); source->AddResourcePath("Roboto-Light.ttf", IDR_FONT_ROBOTO_LIGHT); source->AddResourcePath("Roboto-Regular.ttf", IDR_FONT_ROBOTO_REGULAR); source->AddResourcePath("Roboto-Medium.ttf", IDR_FONT_ROBOTO_MEDIUM); source->AddResourcePath("Roboto-Bold.ttf", IDR_FONT_ROBOTO_BOLD); } return source; }
C
Chrome
1
CVE-2016-3826
https://www.cvedetails.com/cve/CVE-2016-3826/
CWE-20
https://android.googlesource.com/platform/frameworks/av/+/9cd8c3289c91254b3955bd7347cf605d6fa032c6
9cd8c3289c91254b3955bd7347cf605d6fa032c6
Check effect command reply size in AudioFlinger Bug: 29251553 Change-Id: I1bcc1281f1f0542bb645f6358ce31631f2a8ffbf
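A sketch of the hardening the commit title describes: validate the reply size a callee reports before trusting the reply buffer. The effect_command() stand-in below is invented for illustration, not the real AudioFlinger effect interface:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    typedef int32_t status_t;

    /* A (possibly buggy or malicious) effect: reports a short reply and
     * only partially fills the reply buffer. */
    static int effect_command(uint32_t *reply_size, void *reply)
    {
        *reply_size = 1;
        memset(reply, 0, 1);
        return 0;
    }

    int main(void)
    {
        status_t cmd_status = -1;
        uint32_t size = sizeof(cmd_status);

        int status = effect_command(&size, &cmd_status);
        /* The fix's shape: only read cmd_status if the callee actually
         * filled in a reply of the expected size. */
        if (status == 0 && size == sizeof(cmd_status))
            status = cmd_status;
        else if (status == 0)
            status = -22;   /* treat a short reply as an error (EINVAL-like) */
        printf("status=%d (reply size reported: %u)\n", status, size);
        return 0;
    }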
status_t AudioFlinger::EffectModule::configure() { status_t status; sp<ThreadBase> thread; uint32_t size; audio_channel_mask_t channelMask; if (mEffectInterface == NULL) { status = NO_INIT; goto exit; } thread = mThread.promote(); if (thread == 0) { status = DEAD_OBJECT; goto exit; } channelMask = thread->channelMask(); mConfig.outputCfg.channels = channelMask; if ((mDescriptor.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) { mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_MONO; } else { mConfig.inputCfg.channels = channelMask; if (channelMask == AUDIO_CHANNEL_OUT_MONO) { mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_STEREO; mConfig.outputCfg.channels = AUDIO_CHANNEL_OUT_STEREO; ALOGV("Overriding effect input and output as STEREO"); } } mConfig.inputCfg.format = AUDIO_FORMAT_PCM_16_BIT; mConfig.outputCfg.format = AUDIO_FORMAT_PCM_16_BIT; mConfig.inputCfg.samplingRate = thread->sampleRate(); mConfig.outputCfg.samplingRate = mConfig.inputCfg.samplingRate; mConfig.inputCfg.bufferProvider.cookie = NULL; mConfig.inputCfg.bufferProvider.getBuffer = NULL; mConfig.inputCfg.bufferProvider.releaseBuffer = NULL; mConfig.outputCfg.bufferProvider.cookie = NULL; mConfig.outputCfg.bufferProvider.getBuffer = NULL; mConfig.outputCfg.bufferProvider.releaseBuffer = NULL; mConfig.inputCfg.accessMode = EFFECT_BUFFER_ACCESS_READ; if (mConfig.inputCfg.buffer.raw != mConfig.outputCfg.buffer.raw) { mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_ACCUMULATE; } else { mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_WRITE; } mConfig.inputCfg.mask = EFFECT_CONFIG_ALL; mConfig.outputCfg.mask = EFFECT_CONFIG_ALL; mConfig.inputCfg.buffer.frameCount = thread->frameCount(); mConfig.outputCfg.buffer.frameCount = mConfig.inputCfg.buffer.frameCount; ALOGV("configure() %p thread %p buffer %p framecount %d", this, thread.get(), mConfig.inputCfg.buffer.raw, mConfig.inputCfg.buffer.frameCount); status_t cmdStatus; size = sizeof(int); status = (*mEffectInterface)->command(mEffectInterface, EFFECT_CMD_SET_CONFIG, sizeof(effect_config_t), &mConfig, &size, &cmdStatus); if (status == 0) { status = cmdStatus; } if (status == 0 && (memcmp(&mDescriptor.type, SL_IID_VISUALIZATION, sizeof(effect_uuid_t)) == 0)) { uint32_t buf32[sizeof(effect_param_t) / sizeof(uint32_t) + 2]; effect_param_t *p = (effect_param_t *)buf32; p->psize = sizeof(uint32_t); p->vsize = sizeof(uint32_t); size = sizeof(int); *(int32_t *)p->data = VISUALIZER_PARAM_LATENCY; uint32_t latency = 0; PlaybackThread *pbt = thread->mAudioFlinger->checkPlaybackThread_l(thread->mId); if (pbt != NULL) { latency = pbt->latency_l(); } *((int32_t *)p->data + 1)= latency; (*mEffectInterface)->command(mEffectInterface, EFFECT_CMD_SET_PARAM, sizeof(effect_param_t) + 8, &buf32, &size, &cmdStatus); } mMaxDisableWaitCnt = (MAX_DISABLE_TIME_MS * mConfig.outputCfg.samplingRate) / (1000 * mConfig.outputCfg.buffer.frameCount); exit: mStatus = status; return status; }
status_t AudioFlinger::EffectModule::configure() { status_t status; sp<ThreadBase> thread; uint32_t size; audio_channel_mask_t channelMask; if (mEffectInterface == NULL) { status = NO_INIT; goto exit; } thread = mThread.promote(); if (thread == 0) { status = DEAD_OBJECT; goto exit; } channelMask = thread->channelMask(); mConfig.outputCfg.channels = channelMask; if ((mDescriptor.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) { mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_MONO; } else { mConfig.inputCfg.channels = channelMask; if (channelMask == AUDIO_CHANNEL_OUT_MONO) { mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_STEREO; mConfig.outputCfg.channels = AUDIO_CHANNEL_OUT_STEREO; ALOGV("Overriding effect input and output as STEREO"); } } mConfig.inputCfg.format = AUDIO_FORMAT_PCM_16_BIT; mConfig.outputCfg.format = AUDIO_FORMAT_PCM_16_BIT; mConfig.inputCfg.samplingRate = thread->sampleRate(); mConfig.outputCfg.samplingRate = mConfig.inputCfg.samplingRate; mConfig.inputCfg.bufferProvider.cookie = NULL; mConfig.inputCfg.bufferProvider.getBuffer = NULL; mConfig.inputCfg.bufferProvider.releaseBuffer = NULL; mConfig.outputCfg.bufferProvider.cookie = NULL; mConfig.outputCfg.bufferProvider.getBuffer = NULL; mConfig.outputCfg.bufferProvider.releaseBuffer = NULL; mConfig.inputCfg.accessMode = EFFECT_BUFFER_ACCESS_READ; if (mConfig.inputCfg.buffer.raw != mConfig.outputCfg.buffer.raw) { mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_ACCUMULATE; } else { mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_WRITE; } mConfig.inputCfg.mask = EFFECT_CONFIG_ALL; mConfig.outputCfg.mask = EFFECT_CONFIG_ALL; mConfig.inputCfg.buffer.frameCount = thread->frameCount(); mConfig.outputCfg.buffer.frameCount = mConfig.inputCfg.buffer.frameCount; ALOGV("configure() %p thread %p buffer %p framecount %d", this, thread.get(), mConfig.inputCfg.buffer.raw, mConfig.inputCfg.buffer.frameCount); status_t cmdStatus; size = sizeof(int); status = (*mEffectInterface)->command(mEffectInterface, EFFECT_CMD_SET_CONFIG, sizeof(effect_config_t), &mConfig, &size, &cmdStatus); if (status == 0) { status = cmdStatus; } if (status == 0 && (memcmp(&mDescriptor.type, SL_IID_VISUALIZATION, sizeof(effect_uuid_t)) == 0)) { uint32_t buf32[sizeof(effect_param_t) / sizeof(uint32_t) + 2]; effect_param_t *p = (effect_param_t *)buf32; p->psize = sizeof(uint32_t); p->vsize = sizeof(uint32_t); size = sizeof(int); *(int32_t *)p->data = VISUALIZER_PARAM_LATENCY; uint32_t latency = 0; PlaybackThread *pbt = thread->mAudioFlinger->checkPlaybackThread_l(thread->mId); if (pbt != NULL) { latency = pbt->latency_l(); } *((int32_t *)p->data + 1)= latency; (*mEffectInterface)->command(mEffectInterface, EFFECT_CMD_SET_PARAM, sizeof(effect_param_t) + 8, &buf32, &size, &cmdStatus); } mMaxDisableWaitCnt = (MAX_DISABLE_TIME_MS * mConfig.outputCfg.samplingRate) / (1000 * mConfig.outputCfg.buffer.frameCount); exit: mStatus = status; return status; }
C
Android
0
CVE-2019-11922
https://www.cvedetails.com/cve/CVE-2019-11922/
CWE-362
https://github.com/facebook/zstd/pull/1404/commits/3e5cdf1b6a85843e991d7d10f6a2567c15580da0
3e5cdf1b6a85843e991d7d10f6a2567c15580da0
fixed T36302429
size_t ZSTD_compress_advanced_internal( ZSTD_CCtx* cctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict,size_t dictSize, ZSTD_CCtx_params params) { DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (U32)srcSize); CHECK_F( ZSTD_compressBegin_internal(cctx, dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL, params, srcSize, ZSTDb_not_buffered) ); return ZSTD_compressEnd(cctx, dst, dstCapacity, src, srcSize); }
size_t ZSTD_compress_advanced_internal( ZSTD_CCtx* cctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict,size_t dictSize, ZSTD_CCtx_params params) { DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (U32)srcSize); CHECK_F( ZSTD_compressBegin_internal(cctx, dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL, params, srcSize, ZSTDb_not_buffered) ); return ZSTD_compressEnd(cctx, dst, dstCapacity, src, srcSize); }
C
zstd
0
CVE-2010-1149
https://www.cvedetails.com/cve/CVE-2010-1149/
CWE-200
https://cgit.freedesktop.org/udisks/commit/?id=0fcc7cb3b66f23fac53ae08647aa0007a2bd56c4
0fcc7cb3b66f23fac53ae08647aa0007a2bd56c4
null
_dupv8 (const char *s) { const char *end_valid; if (!g_utf8_validate (s, -1, &end_valid)) { g_print ("**** NOTE: The string '%s' is not valid UTF-8. Invalid characters begins at '%s'\n", s, end_valid); return g_strndup (s, end_valid - s); } else { return g_strdup (s); } }
_dupv8 (const char *s) { const char *end_valid; if (!g_utf8_validate (s, -1, &end_valid)) { g_print ("**** NOTE: The string '%s' is not valid UTF-8. Invalid characters begins at '%s'\n", s, end_valid); return g_strndup (s, end_valid - s); } else { return g_strdup (s); } }
C
udisks
0
CVE-2013-3236
https://www.cvedetails.com/cve/CVE-2013-3236/
CWE-200
https://github.com/torvalds/linux/commit/680d04e0ba7e926233e3b9cee59125ce181f66ba
680d04e0ba7e926233e3b9cee59125ce181f66ba
VSOCK: vmci - fix possible info leak in vmci_transport_dgram_dequeue() In case we received no data on the call to skb_recv_datagram(), i.e. skb->data is NULL, vmci_transport_dgram_dequeue() will return 0 without updating msg_namelen, leading to net/socket.c leaking the local, uninitialized sockaddr_storage variable to userland -- 128 bytes of kernel stack memory. Fix this by moving the already existing msg_namelen assignment a few lines above. Cc: Andy King <[email protected]> Cc: Dmitry Torokhov <[email protected]> Cc: George Zhang <[email protected]> Signed-off-by: Mathias Krause <[email protected]> Signed-off-by: David S. Miller <[email protected]>
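A toy model of the infoleak shape this commit fixes: an out-parameter length must be assigned before every early return, otherwise the caller copies uninitialized stack bytes to userspace based on a stale length. Names are illustrative, not the VSOCK API:

    #include <stdio.h>
    #include <string.h>

    struct sockaddr_toy { char data[16]; };

    static int dequeue(struct sockaddr_toy *name, int *namelen, int have_data)
    {
        /* The fix's shape: report the length up front... */
        *namelen = sizeof(*name);
        memset(name, 0, sizeof(*name));   /* ...and initialize what we report */

        if (!have_data)
            return 0;   /* early return is now safe: name/namelen are defined */

        memcpy(name->data, "peer", 5);
        return 5;
    }

    int main(void)
    {
        struct sockaddr_toy name;
        int namelen = 0;

        dequeue(&name, &namelen, 0);
        /* Before the fix, a caller trusting namelen here could copy
         * uninitialized stack memory out to userspace. */
        printf("namelen=%d first byte=%d\n", namelen, name.data[0]);
        return 0;
    }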
static int vmci_transport_notify_poll_in( struct vsock_sock *vsk, size_t target, bool *data_ready_now) { return vmci_trans(vsk)->notify_ops->poll_in( &vsk->sk, target, data_ready_now); }
static int vmci_transport_notify_poll_in( struct vsock_sock *vsk, size_t target, bool *data_ready_now) { return vmci_trans(vsk)->notify_ops->poll_in( &vsk->sk, target, data_ready_now); }
C
linux
0
CVE-2011-2793
https://www.cvedetails.com/cve/CVE-2011-2793/
CWE-399
https://github.com/chromium/chromium/commit/a6e146b4a369b31afa4c4323cc813dcbe0ef0c2b
a6e146b4a369b31afa4c4323cc813dcbe0ef0c2b
Use URLFetcher::Create instead of new in http_bridge.cc. This change modified http_bridge so that it uses a factory to construct the URLFetcher. Moreover, it modified sync_backend_host_unittest.cc to use an URLFetcher factory which will prevent access to www.example.com during the test. BUG=none TEST=sync_backend_host_unittest.cc Review URL: http://codereview.chromium.org/7053011 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@87227 0039d316-1c4b-4281-b951-d872f2087c98
bool LiveSyncTest::SetUpLocalTestServer() { CommandLine* cl = CommandLine::ForCurrentProcess(); CommandLine::StringType server_cmdline_string = cl->GetSwitchValueNative( switches::kSyncServerCommandLine); CommandLine::StringVector server_cmdline_vector; CommandLine::StringType delimiters(FILE_PATH_LITERAL(" ")); Tokenize(server_cmdline_string, delimiters, &server_cmdline_vector); CommandLine server_cmdline(server_cmdline_vector); if (!base::LaunchApp(server_cmdline, false, true, &test_server_handle_)) LOG(ERROR) << "Could not launch local test server."; const int kMaxWaitTime = TestTimeouts::live_operation_timeout_ms(); const int kNumIntervals = 15; if (WaitForTestServerToStart(kMaxWaitTime, kNumIntervals)) { VLOG(1) << "Started local test server at " << cl->GetSwitchValueASCII(switches::kSyncServiceURL) << "."; return true; } else { LOG(ERROR) << "Could not start local test server at " << cl->GetSwitchValueASCII(switches::kSyncServiceURL) << "."; return false; } }
bool LiveSyncTest::SetUpLocalTestServer() { CommandLine* cl = CommandLine::ForCurrentProcess(); CommandLine::StringType server_cmdline_string = cl->GetSwitchValueNative( switches::kSyncServerCommandLine); CommandLine::StringVector server_cmdline_vector; CommandLine::StringType delimiters(FILE_PATH_LITERAL(" ")); Tokenize(server_cmdline_string, delimiters, &server_cmdline_vector); CommandLine server_cmdline(server_cmdline_vector); if (!base::LaunchApp(server_cmdline, false, true, &test_server_handle_)) LOG(ERROR) << "Could not launch local test server."; const int kMaxWaitTime = TestTimeouts::live_operation_timeout_ms(); const int kNumIntervals = 15; if (WaitForTestServerToStart(kMaxWaitTime, kNumIntervals)) { VLOG(1) << "Started local test server at " << cl->GetSwitchValueASCII(switches::kSyncServiceURL) << "."; return true; } else { LOG(ERROR) << "Could not start local test server at " << cl->GetSwitchValueASCII(switches::kSyncServiceURL) << "."; return false; } }
C
Chrome
0
CVE-2014-1713
https://www.cvedetails.com/cve/CVE-2014-1713/
CWE-399
https://github.com/chromium/chromium/commit/f85a87ec670ad0fce9d98d90c9a705b72a288154
f85a87ec670ad0fce9d98d90c9a705b72a288154
document.location bindings fix BUG=352374 [email protected] Review URL: https://codereview.chromium.org/196343011 git-svn-id: svn://svn.chromium.org/blink/trunk@169176 bbb929c8-8fbe-4397-9dbb-9b2b20218538
static void conditionalConditionStaticVoidMethodMethod(const v8::FunctionCallbackInfo<v8::Value>& info) { TestObjectPython::conditionalConditionStaticVoidMethod(); }
static void conditionalConditionStaticVoidMethodMethod(const v8::FunctionCallbackInfo<v8::Value>& info) { TestObjectPython::conditionalConditionStaticVoidMethod(); }
C
Chrome
0
CVE-2012-2100
https://www.cvedetails.com/cve/CVE-2012-2100/
CWE-189
https://github.com/torvalds/linux/commit/d50f2ab6f050311dbf7b8f5501b25f0bf64a439b
d50f2ab6f050311dbf7b8f5501b25f0bf64a439b
ext4: fix undefined behavior in ext4_fill_flex_info() Commit 503358ae01b70ce6909d19dd01287093f6b6271c ("ext4: avoid divide by zero when trying to mount a corrupted file system") fixes CVE-2009-4307 by performing a sanity check on s_log_groups_per_flex, since it can be set to a bogus value by an attacker. sbi->s_log_groups_per_flex = sbi->s_es->s_log_groups_per_flex; groups_per_flex = 1 << sbi->s_log_groups_per_flex; if (groups_per_flex < 2) { ... } This patch fixes two potential issues in the previous commit. 1) The sanity check might only work on architectures like PowerPC. On x86, 5 bits are used for the shifting amount. That means, given a large s_log_groups_per_flex value like 36, groups_per_flex = 1 << 36 is essentially 1 << 4 = 16, rather than 0. This will bypass the check, leaving s_log_groups_per_flex and groups_per_flex inconsistent. 2) The sanity check relies on undefined behavior, i.e., oversized shift. A standard-conforming C compiler could rewrite the check in unexpected ways. Consider the following equivalent form, assuming groups_per_flex is unsigned for simplicity. groups_per_flex = 1 << sbi->s_log_groups_per_flex; if (groups_per_flex == 0 || groups_per_flex == 1) { We compile the code snippet using Clang 3.0 and GCC 4.6. Clang will completely optimize away the check groups_per_flex == 0, leaving the patched code as vulnerable as the original. GCC keeps the check, but there is no guarantee that future versions will do the same. Signed-off-by: Xi Wang <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]> Cc: [email protected]
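A small, self-contained demonstration of the shift problem described above. The x86 5-bit masking is modeled explicitly rather than by executing real undefined behavior, and the robust form bounds the exponent before ever shifting:

    #include <stdio.h>

    int main(void)
    {
        unsigned int log_groups = 36;   /* attacker-controlled on-disk field */

        /* What x86 hardware does for an oversized 32-bit shift (amount
         * masked to 5 bits), modeled explicitly to avoid executing UB: */
        unsigned int groups = 1U << (log_groups & 31);
        printf("1 << %u behaves like 1 << %u = %u, so 'groups < 2' passes\n",
               log_groups, log_groups & 31, groups);

        /* Robust check: validate the exponent itself before shifting.
         * Short-circuiting guarantees the shift is never oversized. */
        if (log_groups >= sizeof(groups) * 8 || (1U << log_groups) < 2)
            printf("rejecting bogus s_log_groups_per_flex = %u\n", log_groups);
        return 0;
    }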
static void ext4_orphan_cleanup(struct super_block *sb, struct ext4_super_block *es) { unsigned int s_flags = sb->s_flags; int nr_orphans = 0, nr_truncates = 0; #ifdef CONFIG_QUOTA int i; #endif if (!es->s_last_orphan) { jbd_debug(4, "no orphan inodes to clean up\n"); return; } if (bdev_read_only(sb->s_bdev)) { ext4_msg(sb, KERN_ERR, "write access " "unavailable, skipping orphan cleanup"); return; } /* Check if feature set would not allow a r/w mount */ if (!ext4_feature_set_ok(sb, 0)) { ext4_msg(sb, KERN_INFO, "Skipping orphan cleanup due to " "unknown ROCOMPAT features"); return; } if (EXT4_SB(sb)->s_mount_state & EXT4_ERROR_FS) { if (es->s_last_orphan) jbd_debug(1, "Errors on filesystem, " "clearing orphan list.\n"); es->s_last_orphan = 0; jbd_debug(1, "Skipping orphan recovery on fs with errors.\n"); return; } if (s_flags & MS_RDONLY) { ext4_msg(sb, KERN_INFO, "orphan cleanup on readonly fs"); sb->s_flags &= ~MS_RDONLY; } #ifdef CONFIG_QUOTA /* Needed for iput() to work correctly and not trash data */ sb->s_flags |= MS_ACTIVE; /* Turn on quotas so that they are updated correctly */ for (i = 0; i < MAXQUOTAS; i++) { if (EXT4_SB(sb)->s_qf_names[i]) { int ret = ext4_quota_on_mount(sb, i); if (ret < 0) ext4_msg(sb, KERN_ERR, "Cannot turn on journaled " "quota: error %d", ret); } } #endif while (es->s_last_orphan) { struct inode *inode; inode = ext4_orphan_get(sb, le32_to_cpu(es->s_last_orphan)); if (IS_ERR(inode)) { es->s_last_orphan = 0; break; } list_add(&EXT4_I(inode)->i_orphan, &EXT4_SB(sb)->s_orphan); dquot_initialize(inode); if (inode->i_nlink) { ext4_msg(sb, KERN_DEBUG, "%s: truncating inode %lu to %lld bytes", __func__, inode->i_ino, inode->i_size); jbd_debug(2, "truncating inode %lu to %lld bytes\n", inode->i_ino, inode->i_size); ext4_truncate(inode); nr_truncates++; } else { ext4_msg(sb, KERN_DEBUG, "%s: deleting unreferenced inode %lu", __func__, inode->i_ino); jbd_debug(2, "deleting unreferenced inode %lu\n", inode->i_ino); nr_orphans++; } iput(inode); /* The delete magic happens here! */ } #define PLURAL(x) (x), ((x) == 1) ? "" : "s" if (nr_orphans) ext4_msg(sb, KERN_INFO, "%d orphan inode%s deleted", PLURAL(nr_orphans)); if (nr_truncates) ext4_msg(sb, KERN_INFO, "%d truncate%s cleaned up", PLURAL(nr_truncates)); #ifdef CONFIG_QUOTA /* Turn quotas off */ for (i = 0; i < MAXQUOTAS; i++) { if (sb_dqopt(sb)->files[i]) dquot_quota_off(sb, i); } #endif sb->s_flags = s_flags; /* Restore MS_RDONLY status */ }
static void ext4_orphan_cleanup(struct super_block *sb, struct ext4_super_block *es) { unsigned int s_flags = sb->s_flags; int nr_orphans = 0, nr_truncates = 0; #ifdef CONFIG_QUOTA int i; #endif if (!es->s_last_orphan) { jbd_debug(4, "no orphan inodes to clean up\n"); return; } if (bdev_read_only(sb->s_bdev)) { ext4_msg(sb, KERN_ERR, "write access " "unavailable, skipping orphan cleanup"); return; } /* Check if feature set would not allow a r/w mount */ if (!ext4_feature_set_ok(sb, 0)) { ext4_msg(sb, KERN_INFO, "Skipping orphan cleanup due to " "unknown ROCOMPAT features"); return; } if (EXT4_SB(sb)->s_mount_state & EXT4_ERROR_FS) { if (es->s_last_orphan) jbd_debug(1, "Errors on filesystem, " "clearing orphan list.\n"); es->s_last_orphan = 0; jbd_debug(1, "Skipping orphan recovery on fs with errors.\n"); return; } if (s_flags & MS_RDONLY) { ext4_msg(sb, KERN_INFO, "orphan cleanup on readonly fs"); sb->s_flags &= ~MS_RDONLY; } #ifdef CONFIG_QUOTA /* Needed for iput() to work correctly and not trash data */ sb->s_flags |= MS_ACTIVE; /* Turn on quotas so that they are updated correctly */ for (i = 0; i < MAXQUOTAS; i++) { if (EXT4_SB(sb)->s_qf_names[i]) { int ret = ext4_quota_on_mount(sb, i); if (ret < 0) ext4_msg(sb, KERN_ERR, "Cannot turn on journaled " "quota: error %d", ret); } } #endif while (es->s_last_orphan) { struct inode *inode; inode = ext4_orphan_get(sb, le32_to_cpu(es->s_last_orphan)); if (IS_ERR(inode)) { es->s_last_orphan = 0; break; } list_add(&EXT4_I(inode)->i_orphan, &EXT4_SB(sb)->s_orphan); dquot_initialize(inode); if (inode->i_nlink) { ext4_msg(sb, KERN_DEBUG, "%s: truncating inode %lu to %lld bytes", __func__, inode->i_ino, inode->i_size); jbd_debug(2, "truncating inode %lu to %lld bytes\n", inode->i_ino, inode->i_size); ext4_truncate(inode); nr_truncates++; } else { ext4_msg(sb, KERN_DEBUG, "%s: deleting unreferenced inode %lu", __func__, inode->i_ino); jbd_debug(2, "deleting unreferenced inode %lu\n", inode->i_ino); nr_orphans++; } iput(inode); /* The delete magic happens here! */ } #define PLURAL(x) (x), ((x) == 1) ? "" : "s" if (nr_orphans) ext4_msg(sb, KERN_INFO, "%d orphan inode%s deleted", PLURAL(nr_orphans)); if (nr_truncates) ext4_msg(sb, KERN_INFO, "%d truncate%s cleaned up", PLURAL(nr_truncates)); #ifdef CONFIG_QUOTA /* Turn quotas off */ for (i = 0; i < MAXQUOTAS; i++) { if (sb_dqopt(sb)->files[i]) dquot_quota_off(sb, i); } #endif sb->s_flags = s_flags; /* Restore MS_RDONLY status */ }
C
linux
0
CVE-2017-0823
https://www.cvedetails.com/cve/CVE-2017-0823/
CWE-200
https://android.googlesource.com/platform/hardware/ril/+/cd5f15f588a5d27e99ba12f057245bfe507f8c42
cd5f15f588a5d27e99ba12f057245bfe507f8c42
DO NOT MERGE Fix security vulnerability in pre-O rild code. Remove wrong code for setup_data_call. Add check for max address for RIL_DIAL. Bug: 37896655 Test: Manual. Change-Id: I05c027140ae828a2653794fcdd94e1b1a130941b (cherry picked from commit dda24c6557911aa1f4708abbd6b2f20f0e205b9e)
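A sketch of the kind of bound check the commit message mentions for RIL_DIAL: reject an over-long dial string before copying it into a fixed-size buffer. MAX_DIAL_LEN and dispatch_dial() are assumptions for illustration, not the rild code:

    #include <stdio.h>
    #include <string.h>

    #define MAX_DIAL_LEN 64   /* hypothetical maximum dial-string length */

    static int dispatch_dial(const char *addr_from_parcel)
    {
        char address[MAX_DIAL_LEN];
        size_t len = strlen(addr_from_parcel);

        if (len >= sizeof(address)) {
            fprintf(stderr, "dial string too long (%zu), rejecting\n", len);
            return -1;   /* the added maximum-address check */
        }
        memcpy(address, addr_from_parcel, len + 1);
        printf("dialing %s\n", address);
        return 0;
    }

    int main(void)
    {
        char huge[256];
        memset(huge, '9', sizeof(huge) - 1);
        huge[sizeof(huge) - 1] = '\0';

        dispatch_dial("5551234");   /* accepted */
        dispatch_dial(huge);        /* rejected instead of overflowing */
        return 0;
    }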
dispatchImsSms(Parcel &p, RequestInfo *pRI) { int32_t t; status_t status = p.readInt32(&t); RIL_RadioTechnologyFamily format; uint8_t retry; int32_t messageRef; RLOGD("dispatchImsSms"); if (status != NO_ERROR) { goto invalid; } format = (RIL_RadioTechnologyFamily) t; status = p.read(&retry,sizeof(retry)); if (status != NO_ERROR) { goto invalid; } status = p.read(&messageRef,sizeof(messageRef)); if (status != NO_ERROR) { goto invalid; } if (RADIO_TECH_3GPP == format) { dispatchImsGsmSms(p, pRI, retry, messageRef); } else if (RADIO_TECH_3GPP2 == format) { dispatchImsCdmaSms(p, pRI, retry, messageRef); } else { ALOGE("requestImsSendSMS invalid format value =%d", format); } return; invalid: invalidCommandBlock(pRI); return; }
dispatchImsSms(Parcel &p, RequestInfo *pRI) { int32_t t; status_t status = p.readInt32(&t); RIL_RadioTechnologyFamily format; uint8_t retry; int32_t messageRef; RLOGD("dispatchImsSms"); if (status != NO_ERROR) { goto invalid; } format = (RIL_RadioTechnologyFamily) t; status = p.read(&retry,sizeof(retry)); if (status != NO_ERROR) { goto invalid; } status = p.read(&messageRef,sizeof(messageRef)); if (status != NO_ERROR) { goto invalid; } if (RADIO_TECH_3GPP == format) { dispatchImsGsmSms(p, pRI, retry, messageRef); } else if (RADIO_TECH_3GPP2 == format) { dispatchImsCdmaSms(p, pRI, retry, messageRef); } else { ALOGE("requestImsSendSMS invalid format value =%d", format); } return; invalid: invalidCommandBlock(pRI); return; }
C
Android
0
CVE-2012-3552
https://www.cvedetails.com/cve/CVE-2012-3552/
CWE-362
https://github.com/torvalds/linux/commit/f6d8bd051c391c1c0458a30b2a7abcd939329259
f6d8bd051c391c1c0458a30b2a7abcd939329259
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options. The problem is that ip_make_skb() calls ip_setup_cork(), and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change the inet->opt pointer and free the old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We can't insert an rcu_head in struct ip_options since it's included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
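A userspace model of the publish/snapshot pattern this commit applies to inet->opt: readers take a snapshot of the options pointer and copy what they need; writers publish a fresh structure and defer freeing the old one. C11 atomics stand in for rcu_dereference()/rcu_assign_pointer(), and kfree_rcu() is only noted in a comment since real RCU needs the kernel:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct ip_options_toy {
        int optlen;
        unsigned char data[40];
    };

    static _Atomic(struct ip_options_toy *) inet_opt;

    static void reader(void)
    {
        /* kernel: rcu_read_lock() */
        struct ip_options_toy *opt = atomic_load(&inet_opt); /* ~rcu_dereference */
        if (opt) {
            struct ip_options_toy copy = *opt;   /* copy; don't keep the pointer */
            printf("reader saw optlen=%d\n", copy.optlen);
        }
        /* kernel: rcu_read_unlock() */
    }

    static void writer(int optlen)
    {
        struct ip_options_toy *fresh = calloc(1, sizeof(*fresh));
        fresh->optlen = optlen;
        /* ~rcu_assign_pointer: publish the new options atomically */
        struct ip_options_toy *old = atomic_exchange(&inet_opt, fresh);
        /* kernel: kfree_rcu(old, rcu) -- free only after readers drain.
         * Immediate free is safe here only because this demo is
         * single-threaded. */
        free(old);
    }

    int main(void)
    {
        writer(12);
        reader();
        writer(20);
        reader();
        free(atomic_load(&inet_opt));
        return 0;
    }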
int udp_rcv(struct sk_buff *skb) { return __udp4_lib_rcv(skb, &udp_table, IPPROTO_UDP); }
int udp_rcv(struct sk_buff *skb) { return __udp4_lib_rcv(skb, &udp_table, IPPROTO_UDP); }
C
linux
0
CVE-2018-16080
https://www.cvedetails.com/cve/CVE-2018-16080/
CWE-20
https://github.com/chromium/chromium/commit/c552cd7b8a0862f6b3c8c6a07f98bda3721101eb
c552cd7b8a0862f6b3c8c6a07f98bda3721101eb
Mac: turn popups into new tabs while in fullscreen. It's platform convention to show popups as new tabs while in non-HTML5 fullscreen. (Popups cause tabs to lose HTML5 fullscreen.) This was implemented for Cocoa in a BrowserWindow override, but it makes sense to just stick it into Browser and remove a ton of override code put in just to support this. BUG=858929, 868416 TEST=as in bugs Change-Id: I43471f242813ec1159d9c690bab73dab3e610b7d Reviewed-on: https://chromium-review.googlesource.com/1153455 Reviewed-by: Sidney San Martín <[email protected]> Commit-Queue: Avi Drissman <[email protected]> Cr-Commit-Position: refs/heads/master@{#578755}
void BrowserView::InitViews() { GetWidget()->AddObserver(this); GetWidget()->SetNativeWindowProperty(kBrowserViewKey, this); GetWidget()->SetNativeWindowProperty(Profile::kProfileKey, browser_->profile()); #if defined(OS_WIN) || (defined(OS_LINUX) && !defined(OS_CHROMEOS)) GetWidget()->GetNativeView()->AddPreTargetHandler( ConfirmQuitBubbleController::GetInstance()); #endif #if defined(USE_AURA) SetThemeProfileForWindow(GetNativeWindow(), browser_->profile()); #endif LoadAccelerators(); contents_web_view_ = new ContentsWebView(browser_->profile()); contents_web_view_->set_id(VIEW_ID_TAB_CONTAINER); contents_web_view_->SetEmbedFullscreenWidgetMode(true); web_contents_close_handler_.reset( new WebContentsCloseHandler(contents_web_view_)); devtools_web_view_ = new views::WebView(browser_->profile()); devtools_web_view_->set_id(VIEW_ID_DEV_TOOLS_DOCKED); devtools_web_view_->SetVisible(false); contents_container_ = new views::View(); contents_container_->SetBackground(views::CreateSolidBackground( GetThemeProvider()->GetColor(ThemeProperties::COLOR_CONTROL_BACKGROUND))); contents_container_->AddChildView(devtools_web_view_); contents_container_->AddChildView(contents_web_view_); contents_container_->SetLayoutManager(std::make_unique<ContentsLayoutManager>( devtools_web_view_, contents_web_view_)); AddChildView(contents_container_); set_contents_view(contents_container_); top_container_ = new TopContainerView(this); AddChildView(top_container_); BrowserTabStripController* tabstrip_controller = new BrowserTabStripController(browser_->tab_strip_model(), this); tabstrip_ = new TabStrip(std::unique_ptr<TabStripController>(tabstrip_controller)); top_container_->AddChildView(tabstrip_); // Takes ownership. tabstrip_controller->InitFromModel(tabstrip_); toolbar_ = new ToolbarView(browser_.get(), this); top_container_->AddChildView(toolbar_); toolbar_->Init(); if (!toolbar_button_provider_) SetToolbarButtonProvider(toolbar_); infobar_container_ = new InfoBarContainerView(this); AddChildView(infobar_container_); InitStatusBubble(); find_bar_host_view_ = new View(); AddChildView(find_bar_host_view_); immersive_mode_controller_->Init(this); auto browser_view_layout = std::make_unique<BrowserViewLayout>(); browser_view_layout->Init(new BrowserViewLayoutDelegateImpl(this), browser(), this, top_container_, tabstrip_, toolbar_, infobar_container_, contents_container_, GetContentsLayoutManager(), immersive_mode_controller_.get()); SetLayoutManager(std::move(browser_view_layout)); #if defined(OS_WIN) if (JumpList::Enabled()) { load_complete_listener_.reset(new LoadCompleteListener(this)); } #endif frame_->OnBrowserViewInitViewsComplete(); frame_->GetFrameView()->UpdateMinimumSize(); }
void BrowserView::InitViews() { GetWidget()->AddObserver(this); GetWidget()->SetNativeWindowProperty(kBrowserViewKey, this); GetWidget()->SetNativeWindowProperty(Profile::kProfileKey, browser_->profile()); #if defined(OS_WIN) || (defined(OS_LINUX) && !defined(OS_CHROMEOS)) GetWidget()->GetNativeView()->AddPreTargetHandler( ConfirmQuitBubbleController::GetInstance()); #endif #if defined(USE_AURA) SetThemeProfileForWindow(GetNativeWindow(), browser_->profile()); #endif LoadAccelerators(); contents_web_view_ = new ContentsWebView(browser_->profile()); contents_web_view_->set_id(VIEW_ID_TAB_CONTAINER); contents_web_view_->SetEmbedFullscreenWidgetMode(true); web_contents_close_handler_.reset( new WebContentsCloseHandler(contents_web_view_)); devtools_web_view_ = new views::WebView(browser_->profile()); devtools_web_view_->set_id(VIEW_ID_DEV_TOOLS_DOCKED); devtools_web_view_->SetVisible(false); contents_container_ = new views::View(); contents_container_->SetBackground(views::CreateSolidBackground( GetThemeProvider()->GetColor(ThemeProperties::COLOR_CONTROL_BACKGROUND))); contents_container_->AddChildView(devtools_web_view_); contents_container_->AddChildView(contents_web_view_); contents_container_->SetLayoutManager(std::make_unique<ContentsLayoutManager>( devtools_web_view_, contents_web_view_)); AddChildView(contents_container_); set_contents_view(contents_container_); top_container_ = new TopContainerView(this); AddChildView(top_container_); BrowserTabStripController* tabstrip_controller = new BrowserTabStripController(browser_->tab_strip_model(), this); tabstrip_ = new TabStrip(std::unique_ptr<TabStripController>(tabstrip_controller)); top_container_->AddChildView(tabstrip_); // Takes ownership. tabstrip_controller->InitFromModel(tabstrip_); toolbar_ = new ToolbarView(browser_.get(), this); top_container_->AddChildView(toolbar_); toolbar_->Init(); if (!toolbar_button_provider_) SetToolbarButtonProvider(toolbar_); infobar_container_ = new InfoBarContainerView(this); AddChildView(infobar_container_); InitStatusBubble(); find_bar_host_view_ = new View(); AddChildView(find_bar_host_view_); immersive_mode_controller_->Init(this); auto browser_view_layout = std::make_unique<BrowserViewLayout>(); browser_view_layout->Init(new BrowserViewLayoutDelegateImpl(this), browser(), this, top_container_, tabstrip_, toolbar_, infobar_container_, contents_container_, GetContentsLayoutManager(), immersive_mode_controller_.get()); SetLayoutManager(std::move(browser_view_layout)); #if defined(OS_WIN) if (JumpList::Enabled()) { load_complete_listener_.reset(new LoadCompleteListener(this)); } #endif frame_->OnBrowserViewInitViewsComplete(); frame_->GetFrameView()->UpdateMinimumSize(); }
C
Chrome
0
null
null
null
https://github.com/chromium/chromium/commit/3a353ebdb7753a3fbeb401c4c0e0f3358ccbb90b
3a353ebdb7753a3fbeb401c4c0e0f3358ccbb90b
Support pausing media when a context is frozen. Media is resumed when the context is unpaused. This feature will be used for bfcache and pausing iframes feature policy. BUG=907125 Change-Id: Ic3925ea1a4544242b7bf0b9ad8c9cb9f63976bbd Reviewed-on: https://chromium-review.googlesource.com/c/1410126 Commit-Queue: Dave Tapuska <[email protected]> Reviewed-by: Kentaro Hara <[email protected]> Reviewed-by: Mounir Lamouri <[email protected]> Cr-Commit-Position: refs/heads/master@{#623319}
void HTMLMediaElement::ScheduleNotifyPlaying() { ScheduleEvent(event_type_names::kPlaying); ScheduleResolvePlayPromises(); }
void HTMLMediaElement::ScheduleNotifyPlaying() { ScheduleEvent(event_type_names::kPlaying); ScheduleResolvePlayPromises(); }
C
Chrome
0
CVE-2018-6177
https://www.cvedetails.com/cve/CVE-2018-6177/
CWE-200
https://github.com/chromium/chromium/commit/4504a474c069d07104237d0c03bfce7b29a42de6
4504a474c069d07104237d0c03bfce7b29a42de6
defeat cors attacks on audio/video tags Neutralize error messages and fire no progress events until media metadata has been loaded for media loaded from cross-origin locations. Bug: 828265, 826187 Change-Id: Iaf15ef38676403687d6a913cbdc84f2d70a6f5c6 Reviewed-on: https://chromium-review.googlesource.com/1015794 Reviewed-by: Mounir Lamouri <[email protected]> Reviewed-by: Dale Curtis <[email protected]> Commit-Queue: Fredrik Hubinette <[email protected]> Cr-Commit-Position: refs/heads/master@{#557312}
WebLayer* HTMLMediaElement::PlatformLayer() const { return web_layer_; }
WebLayer* HTMLMediaElement::PlatformLayer() const { return web_layer_; }
C
Chrome
0
CVE-2011-3104
https://www.cvedetails.com/cve/CVE-2011-3104/
CWE-119
https://github.com/chromium/chromium/commit/6b5f83842b5edb5d4bd6684b196b3630c6769731
6b5f83842b5edb5d4bd6684b196b3630c6769731
[i18n-fixlet] Make strings branding specific in extension code. IDS_EXTENSIONS_UNINSTALL IDS_EXTENSIONS_INCOGNITO_WARNING IDS_EXTENSION_INSTALLED_HEADING IDS_EXTENSION_ALERT_ITEM_EXTERNAL And fix a $1 $1 bug. IDS_EXTENSION_INLINE_INSTALL_PROMPT_TITLE BUG=NONE TEST=NONE Review URL: http://codereview.chromium.org/9107061 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@118018 0039d316-1c4b-4281-b951-d872f2087c98
RefCountedMemory* NTPResourceCache::GetNewTabCSS(bool is_incognito) { DCHECK(BrowserThread::CurrentlyOn(BrowserThread::UI)); if (is_incognito) { if (!new_tab_incognito_css_.get()) CreateNewTabIncognitoCSS(); } else { if (!new_tab_css_.get()) CreateNewTabCSS(); } return is_incognito ? new_tab_incognito_css_.get() : new_tab_css_.get(); }
RefCountedMemory* NTPResourceCache::GetNewTabCSS(bool is_incognito) { DCHECK(BrowserThread::CurrentlyOn(BrowserThread::UI)); if (is_incognito) { if (!new_tab_incognito_css_.get()) CreateNewTabIncognitoCSS(); } else { if (!new_tab_css_.get()) CreateNewTabCSS(); } return is_incognito ? new_tab_incognito_css_.get() : new_tab_css_.get(); }
C
Chrome
0
CVE-2018-10184
https://www.cvedetails.com/cve/CVE-2018-10184/
CWE-119
https://git.haproxy.org/?p=haproxy.git;a=commit;h=3f0e1ec70173593f4c2b3681b26c04a4ed5fc588
3f0e1ec70173593f4c2b3681b26c04a4ed5fc588
null
static inline __maybe_unused uint64_t h2_get_n64(const struct buffer *b, int o) { return readv_n64(b_ptr(b, o), b_end(b) - b_ptr(b, o), b->data); }
static inline __maybe_unused uint64_t h2_get_n64(const struct buffer *b, int o) { return readv_n64(b_ptr(b, o), b_end(b) - b_ptr(b, o), b->data); }
C
haproxy
0
CVE-2011-3097
https://www.cvedetails.com/cve/CVE-2011-3097/
CWE-20
https://github.com/chromium/chromium/commit/027429ee5abe6e2fb5e3b2b4542f0a6fe0dbc12d
027429ee5abe6e2fb5e3b2b4542f0a6fe0dbc12d
Metrics for measuring how much overhead reading compressed content states adds. BUG=104293 TEST=NONE Review URL: https://chromiumcodereview.appspot.com/9426039 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@123733 0039d316-1c4b-4281-b951-d872f2087c98
SessionService::FindClosestNavigationWithIndex( std::vector<TabNavigation>* navigations, int index) { DCHECK(navigations); for (std::vector<TabNavigation>::iterator i = navigations->begin(); i != navigations->end(); ++i) { if (i->index() >= index) return i; } return navigations->end(); }
SessionService::FindClosestNavigationWithIndex( std::vector<TabNavigation>* navigations, int index) { DCHECK(navigations); for (std::vector<TabNavigation>::iterator i = navigations->begin(); i != navigations->end(); ++i) { if (i->index() >= index) return i; } return navigations->end(); }
C
Chrome
0
null
null
null
https://github.com/chromium/chromium/commit/ee8d6fd30b022ac2c87b7a190c954e7bb3c9b21e
ee8d6fd30b022ac2c87b7a190c954e7bb3c9b21e
Clean up calls like "gfx::Rect(0, 0, size().width(), size().height()". The caller can use the much shorter "gfx::Rect(size())", since gfx::Rect has a constructor that just takes a Size. BUG=none TEST=none Review URL: http://codereview.chromium.org/2204001 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@48283 0039d316-1c4b-4281-b951-d872f2087c98
bool CreateTabFunction::RunImpl() { DictionaryValue* args; EXTENSION_FUNCTION_VALIDATE(args_->GetDictionary(0, &args)); Browser *browser; int window_id = -1; if (args->HasKey(keys::kWindowIdKey)) { EXTENSION_FUNCTION_VALIDATE(args->GetInteger( keys::kWindowIdKey, &window_id)); browser = GetBrowserInProfileWithId(profile(), window_id, include_incognito(), &error_); } else { browser = GetCurrentBrowser(); if (!browser) error_ = keys::kNoCurrentWindowError; } if (!browser) return false; std::string url_string; GURL url; if (args->HasKey(keys::kUrlKey)) { EXTENSION_FUNCTION_VALIDATE(args->GetString(keys::kUrlKey, &url_string)); url = ResolvePossiblyRelativeURL(url_string, GetExtension()); if (!url.is_valid()) { error_ = ExtensionErrorUtils::FormatErrorMessage(keys::kInvalidUrlError, url_string); return false; } } bool selected = true; if (args->HasKey(keys::kSelectedKey)) EXTENSION_FUNCTION_VALIDATE(args->GetBoolean(keys::kSelectedKey, &selected)); int index = -1; if (args->HasKey(keys::kIndexKey)) EXTENSION_FUNCTION_VALIDATE(args->GetInteger(keys::kIndexKey, &index)); if (url.SchemeIs(chrome::kExtensionScheme) && browser->profile()->IsOffTheRecord()) { browser = Browser::GetOrCreateTabbedBrowser( browser->profile()->GetOriginalProfile()); DCHECK(browser); } TabStripModel* tab_strip = browser->tabstrip_model(); if (index < 0) { index = -1; } if (index > tab_strip->count()) { index = tab_strip->count(); } int add_types = selected ? Browser::ADD_SELECTED : Browser::ADD_NONE; add_types |= Browser::ADD_FORCE_INDEX; TabContents* contents = browser->AddTabWithURL(url, GURL(), PageTransition::LINK, index, add_types, NULL, std::string()); index = tab_strip->GetIndexOfTabContents(contents); browser->window()->Show(); if (selected) contents->Focus(); if (has_callback()) result_.reset(ExtensionTabUtil::CreateTabValue(contents, tab_strip, index)); return true; }
bool CreateTabFunction::RunImpl() { DictionaryValue* args; EXTENSION_FUNCTION_VALIDATE(args_->GetDictionary(0, &args)); Browser *browser; int window_id = -1; if (args->HasKey(keys::kWindowIdKey)) { EXTENSION_FUNCTION_VALIDATE(args->GetInteger( keys::kWindowIdKey, &window_id)); browser = GetBrowserInProfileWithId(profile(), window_id, include_incognito(), &error_); } else { browser = GetCurrentBrowser(); if (!browser) error_ = keys::kNoCurrentWindowError; } if (!browser) return false; std::string url_string; GURL url; if (args->HasKey(keys::kUrlKey)) { EXTENSION_FUNCTION_VALIDATE(args->GetString(keys::kUrlKey, &url_string)); url = ResolvePossiblyRelativeURL(url_string, GetExtension()); if (!url.is_valid()) { error_ = ExtensionErrorUtils::FormatErrorMessage(keys::kInvalidUrlError, url_string); return false; } } bool selected = true; if (args->HasKey(keys::kSelectedKey)) EXTENSION_FUNCTION_VALIDATE(args->GetBoolean(keys::kSelectedKey, &selected)); int index = -1; if (args->HasKey(keys::kIndexKey)) EXTENSION_FUNCTION_VALIDATE(args->GetInteger(keys::kIndexKey, &index)); if (url.SchemeIs(chrome::kExtensionScheme) && browser->profile()->IsOffTheRecord()) { browser = Browser::GetOrCreateTabbedBrowser( browser->profile()->GetOriginalProfile()); DCHECK(browser); } TabStripModel* tab_strip = browser->tabstrip_model(); if (index < 0) { index = -1; } if (index > tab_strip->count()) { index = tab_strip->count(); } int add_types = selected ? Browser::ADD_SELECTED : Browser::ADD_NONE; add_types |= Browser::ADD_FORCE_INDEX; TabContents* contents = browser->AddTabWithURL(url, GURL(), PageTransition::LINK, index, add_types, NULL, std::string()); index = tab_strip->GetIndexOfTabContents(contents); browser->window()->Show(); if (selected) contents->Focus(); if (has_callback()) result_.reset(ExtensionTabUtil::CreateTabValue(contents, tab_strip, index)); return true; }
C
Chrome
0
CVE-2012-1179
https://www.cvedetails.com/cve/CVE-2012-1179/
CWE-264
https://github.com/torvalds/linux/commit/4a1d704194a441bf83c636004a479e01360ec850
4a1d704194a441bf83c636004a479e01360ec850
mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode

commit 1a5a9906d4e8d1976b701f889d8f35d54b928f25 upstream.

In some cases it may happen that pmd_none_or_clear_bad() is called with the mmap_sem held in read mode. In those cases the huge page faults can allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a false positive from pmd_bad() that will not like to see a pmd materializing as trans huge.

It's not khugepaged causing the problem, khugepaged holds the mmap_sem in write mode (and all those sites must hold the mmap_sem in read mode to prevent pagetables from going away from under them; during code review it seems vm86 mode on 32bit kernels requires that too unless it's restricted to 1 thread per process or UP builds). The race is only with the huge pagefaults that can convert a pmd_none() into a pmd_trans_huge().

Effectively all these pmd_none_or_clear_bad() sites running with mmap_sem in read mode are somewhat speculative with the page faults, and the result is always undefined when they run simultaneously. This is probably why it wasn't common to run into this. For example if the madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page fault, the hugepage will not be zapped; if the page fault runs first it will be zapped.

Altering pmd_bad() not to error out if it finds hugepmds won't be enough to fix this, because zap_pmd_range would then proceed to call zap_pte_range (which would be incorrect if the pmd became a pmd_trans_huge()).

The simplest way to fix this is to read the pmd in the local stack (regardless of what we read; no need of actual CPU barriers, only a compiler barrier is needed), and be sure it is not changing under the code that computes its value. Even if the real pmd is changing under the value we hold on the stack, we don't care. If we actually end up in zap_pte_range it means the pmd was not none already and it was not huge, and it can't become huge from under us (khugepaged locking explained above). All we need is to enforce that there is no way anymore that in a code path like below, pmd_trans_huge can be false, but pmd_none_or_clear_bad can run into a hugepmd. The overhead of a barrier() is just a compiler tweak and should not be measurable (I only added it for THP builds). I don't exclude different compiler versions may have prevented the race too by caching the value of *pmd on the stack (that hasn't been verified, but it wouldn't be impossible considering pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, pmd_none are all inlines and there's no external function called in between pmd_trans_huge and pmd_none_or_clear_bad).

        if (pmd_trans_huge(*pmd)) {
                if (next-addr != HPAGE_PMD_SIZE) {
                        VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
                        split_huge_page_pmd(vma->vm_mm, pmd);
                } else if (zap_huge_pmd(tlb, vma, pmd, addr))
                        continue;
                /* fall through */
        }
        if (pmd_none_or_clear_bad(pmd))

Because this race condition could be exercised without special privileges this was reported in CVE-2012-1179.

The race was identified and fully explained by Ulrich who debugged it. I'm quoting his accurate explanation below, for reference.

====== start quote =======

mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!

At some point prior to the panic, a "bad pmd ..." message similar to the following is logged on the console:

mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).

The "bad pmd ..." message is logged by pmd_clear_bad() before it clears the page's PMD table entry.

    143 void pmd_clear_bad(pmd_t *pmd)
    144 {
->  145         pmd_ERROR(*pmd);
    146         pmd_clear(pmd);
    147 }

After the PMD table entry has been cleared, there is an inconsistency between the actual number of PMD table entries that are mapping the page and the page's map count (_mapcount field in struct page). When the page is subsequently reclaimed, __split_huge_page() detects this inconsistency.

   1381         if (mapcount != page_mapcount(page))
   1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
   1383                        mapcount, page_mapcount(page));
-> 1384         BUG_ON(mapcount != page_mapcount(page));

The root cause of the problem is a race of two threads in a multithreaded process. Thread B incurs a page fault on a virtual address that has never been accessed (PMD entry is zero) while Thread A is executing an madvise() system call on a virtual address within the same 2 MB (huge page) range.

           virtual address space
          .---------------------.
          |                     |
          |                     |
        .-|---------------------|
        | |                     |
        | |                     |<-- B(fault)
        | |                     |
  2 MB  | |/////////////////////|-.
  huge <  |/////////////////////|  > A(range)
  page  | |/////////////////////|-'
        | |                     |
        | |                     |
        '-|---------------------|
          |                     |
          |                     |
          '---------------------'

- Thread A is executing an madvise(..., MADV_DONTNEED) system call on the virtual address range "A(range)" shown in the picture.

sys_madvise
  // Acquire the semaphore in shared mode.
  down_read(&current->mm->mmap_sem)
  ...
  madvise_vma
    switch (behavior)
    case MADV_DONTNEED:
         madvise_dontneed
           zap_page_range
             unmap_vmas
               unmap_page_range
                 zap_pud_range
                   zap_pmd_range
                     //
                     // Assume that this huge page has never been accessed.
                     // I.e. content of the PMD entry is zero (not mapped).
                     //
                     if (pmd_trans_huge(*pmd)) {
                         // We don't get here due to the above assumption.
                     }
                     //
                     // Assume that Thread B incurred a page fault and
         .---------> // sneaks in here as shown below.
         |           //
         |           if (pmd_none_or_clear_bad(pmd))
         |               {
         |                 if (unlikely(pmd_bad(*pmd)))
         |                     pmd_clear_bad
         |                     {
         |                       pmd_ERROR
         |                         // Log "bad pmd ..." message here.
         |                       pmd_clear
         |                         // Clear the page's PMD entry.
         |                         // Thread B incremented the map count
         |                         // in page_add_new_anon_rmap(), but
         |                         // now the page is no longer mapped
         |                         // by a PMD entry (-> inconsistency).
         |                     }
         |               }
         |
         v

- Thread B is handling a page fault on virtual address "B(fault)" shown in the picture.

...
do_page_fault
  __do_page_fault
    // Acquire the semaphore in shared mode.
    down_read_trylock(&mm->mmap_sem)
    ...
    handle_mm_fault
      if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
          // We get here due to the above assumption (PMD entry is zero).
          do_huge_pmd_anonymous_page
            alloc_hugepage_vma
              // Allocate a new transparent huge page here.
            ...
            __do_huge_pmd_anonymous_page
              ...
              spin_lock(&mm->page_table_lock)
              ...
              page_add_new_anon_rmap
                // Here we increment the page's map count (starts at -1).
                atomic_set(&page->_mapcount, 0)
              set_pmd_at
                // Here we set the page's PMD entry which will be cleared
                // when Thread A calls pmd_clear_bad().
              ...
              spin_unlock(&mm->page_table_lock)

The mmap_sem does not prevent the race because both threads are acquiring it in shared mode (down_read). Thread B holds the page_table_lock while the page's map count and PMD table entry are updated. However, Thread A does not synchronize on that lock.

====== end quote =======

[[email protected]: checkpatch fixes]
Reported-by: Ulrich Obergfell <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Dave Jones <[email protected]>
Acked-by: Larry Woodman <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Mark Salter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
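The fix the message describes, reading the pmd once into a local and inserting a compiler barrier before classifying it, can be pictured with a short sketch. This is an illustrative reconstruction of that idiom based on the commit's own discussion, not a quoted diff:

/* Illustrative sketch: snapshot *pmd once, stop the compiler from
 * re-reading it (barrier()), then classify the snapshot. A hugepmd
 * materializing afterwards can no longer trip pmd_bad(). */
static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
        /* depend on the compiler for an atomic pmd read */
        pmd_t pmdval = *pmd;

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
        barrier();      /* stabilize pmdval in a register or on the stack */
#endif
        if (pmd_none(pmdval))
                return 1;
        if (unlikely(pmd_bad(pmdval))) {
                if (!pmd_trans_huge(pmdval))
                        pmd_clear_bad(pmd);
                return 1;
        }
        return 0;
}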
static ssize_t pagemap_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode); struct mm_struct *mm; struct pagemapread pm; int ret = -ESRCH; struct mm_walk pagemap_walk = {}; unsigned long src; unsigned long svpfn; unsigned long start_vaddr; unsigned long end_vaddr; int copied = 0; if (!task) goto out; ret = -EINVAL; /* file position must be aligned */ if ((*ppos % PM_ENTRY_BYTES) || (count % PM_ENTRY_BYTES)) goto out_task; ret = 0; if (!count) goto out_task; pm.len = PM_ENTRY_BYTES * (PAGEMAP_WALK_SIZE >> PAGE_SHIFT); pm.buffer = kmalloc(pm.len, GFP_TEMPORARY); ret = -ENOMEM; if (!pm.buffer) goto out_task; mm = mm_for_maps(task); ret = PTR_ERR(mm); if (!mm || IS_ERR(mm)) goto out_free; pagemap_walk.pmd_entry = pagemap_pte_range; pagemap_walk.pte_hole = pagemap_pte_hole; #ifdef CONFIG_HUGETLB_PAGE pagemap_walk.hugetlb_entry = pagemap_hugetlb_range; #endif pagemap_walk.mm = mm; pagemap_walk.private = &pm; src = *ppos; svpfn = src / PM_ENTRY_BYTES; start_vaddr = svpfn << PAGE_SHIFT; end_vaddr = TASK_SIZE_OF(task); /* watch out for wraparound */ if (svpfn > TASK_SIZE_OF(task) >> PAGE_SHIFT) start_vaddr = end_vaddr; /* * The odds are that this will stop walking way * before end_vaddr, because the length of the * user buffer is tracked in "pm", and the walk * will stop when we hit the end of the buffer. */ ret = 0; while (count && (start_vaddr < end_vaddr)) { int len; unsigned long end; pm.pos = 0; end = (start_vaddr + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK; /* overflow ? */ if (end < start_vaddr || end > end_vaddr) end = end_vaddr; down_read(&mm->mmap_sem); ret = walk_page_range(start_vaddr, end, &pagemap_walk); up_read(&mm->mmap_sem); start_vaddr = end; len = min(count, PM_ENTRY_BYTES * pm.pos); if (copy_to_user(buf, pm.buffer, len)) { ret = -EFAULT; goto out_mm; } copied += len; buf += len; count -= len; } *ppos += copied; if (!ret || ret == PM_END_OF_BUFFER) ret = copied; out_mm: mmput(mm); out_free: kfree(pm.buffer); out_task: put_task_struct(task); out: return ret; }
static ssize_t pagemap_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode); struct mm_struct *mm; struct pagemapread pm; int ret = -ESRCH; struct mm_walk pagemap_walk = {}; unsigned long src; unsigned long svpfn; unsigned long start_vaddr; unsigned long end_vaddr; int copied = 0; if (!task) goto out; ret = -EINVAL; /* file position must be aligned */ if ((*ppos % PM_ENTRY_BYTES) || (count % PM_ENTRY_BYTES)) goto out_task; ret = 0; if (!count) goto out_task; pm.len = PM_ENTRY_BYTES * (PAGEMAP_WALK_SIZE >> PAGE_SHIFT); pm.buffer = kmalloc(pm.len, GFP_TEMPORARY); ret = -ENOMEM; if (!pm.buffer) goto out_task; mm = mm_for_maps(task); ret = PTR_ERR(mm); if (!mm || IS_ERR(mm)) goto out_free; pagemap_walk.pmd_entry = pagemap_pte_range; pagemap_walk.pte_hole = pagemap_pte_hole; #ifdef CONFIG_HUGETLB_PAGE pagemap_walk.hugetlb_entry = pagemap_hugetlb_range; #endif pagemap_walk.mm = mm; pagemap_walk.private = &pm; src = *ppos; svpfn = src / PM_ENTRY_BYTES; start_vaddr = svpfn << PAGE_SHIFT; end_vaddr = TASK_SIZE_OF(task); /* watch out for wraparound */ if (svpfn > TASK_SIZE_OF(task) >> PAGE_SHIFT) start_vaddr = end_vaddr; /* * The odds are that this will stop walking way * before end_vaddr, because the length of the * user buffer is tracked in "pm", and the walk * will stop when we hit the end of the buffer. */ ret = 0; while (count && (start_vaddr < end_vaddr)) { int len; unsigned long end; pm.pos = 0; end = (start_vaddr + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK; /* overflow ? */ if (end < start_vaddr || end > end_vaddr) end = end_vaddr; down_read(&mm->mmap_sem); ret = walk_page_range(start_vaddr, end, &pagemap_walk); up_read(&mm->mmap_sem); start_vaddr = end; len = min(count, PM_ENTRY_BYTES * pm.pos); if (copy_to_user(buf, pm.buffer, len)) { ret = -EFAULT; goto out_mm; } copied += len; buf += len; count -= len; } *ppos += copied; if (!ret || ret == PM_END_OF_BUFFER) ret = copied; out_mm: mmput(mm); out_free: kfree(pm.buffer); out_task: put_task_struct(task); out: return ret; }
C
linux
0
CVE-2016-3713
https://www.cvedetails.com/cve/CVE-2016-3713/
CWE-284
https://github.com/torvalds/linux/commit/9842df62004f366b9fed2423e24df10542ee0dc5
9842df62004f366b9fed2423e24df10542ee0dc5
KVM: MTRR: remove MSR 0x2f8 MSR 0x2f8 accessed the 124th Variable Range MTRR ever since MTRR support was introduced by 9ba075a664df ("KVM: MTRR support"). 0x2f8 became harmful when 910a6aae4e2e ("KVM: MTRR: exactly define the size of variable MTRRs") shrank the array of VR MTRRs from 256 to 8, which made access to index 124 out of bounds. The surrounding code only WARNs in this situation, thus the guest gained a limited read/write access to struct kvm_arch_vcpu. 0x2f8 is not a valid VR MTRR MSR, because KVM has/advertises only 16 VR MTRR MSRs, 0x200-0x20f. Every VR MTRR is set up using two MSRs, 0x2f8 was treated as a PHYSBASE and 0x2f9 would be its PHYSMASK, but 0x2f9 was not implemented in KVM, therefore 0x2f8 could never do anything useful and getting rid of it is safe. This fixes CVE-2016-3713. Fixes: 910a6aae4e2e ("KVM: MTRR: exactly define the size of variable MTRRs") Cc: [email protected] Reported-by: David Matlack <[email protected]> Signed-off-by: Andy Honig <[email protected]> Signed-off-by: Radim Krčmář <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
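The out-of-bounds index in the message follows from simple MSR arithmetic: each variable range consumes a PHYSBASE/PHYSMASK MSR pair starting at 0x200, so (0x2f8 - 0x200) / 2 = 124. A stand-alone sketch of the bounds check (the names are illustrative, not KVM's):

#include <stdbool.h>
#include <stdint.h>

#define VR_MTRR_BASE  0x200u   /* first PHYSBASE MSR */
#define VR_MTRR_COUNT 8u       /* variable ranges actually implemented */

/* Map a variable-range MTRR MSR to its array slot, rejecting anything
 * outside the 16 advertised MSRs 0x200..0x20f. MSR 0x2f8 would
 * otherwise decode to index (0x2f8 - 0x200) / 2 == 124. */
static bool var_mtrr_msr_to_index(uint32_t msr, unsigned *index)
{
        if (msr < VR_MTRR_BASE || msr >= VR_MTRR_BASE + 2 * VR_MTRR_COUNT)
                return false;
        *index = (msr - VR_MTRR_BASE) / 2;
        return true;
}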
static void mtrr_lookup_init(struct mtrr_iter *iter,
                             struct kvm_mtrr *mtrr_state,
                             u64 start, u64 end)
{
        iter->mtrr_state = mtrr_state;
        iter->start = start;
        iter->end = end;
        iter->mtrr_disabled = false;
        iter->partial_map = false;
        iter->fixed = false;
        iter->range = NULL;

        mtrr_lookup_start(iter);
}
static void mtrr_lookup_init(struct mtrr_iter *iter,
                             struct kvm_mtrr *mtrr_state,
                             u64 start, u64 end)
{
        iter->mtrr_state = mtrr_state;
        iter->start = start;
        iter->end = end;
        iter->mtrr_disabled = false;
        iter->partial_map = false;
        iter->fixed = false;
        iter->range = NULL;

        mtrr_lookup_start(iter);
}
C
linux
0
CVE-2019-13106
https://www.cvedetails.com/cve/CVE-2019-13106/
CWE-787
https://github.com/u-boot/u-boot/commits/master
master
Merge branch '2020-01-22-master-imports' - Re-add U8500 platform support - Add bcm968360bg support - Assorted Keymile fixes - Other assorted bugfixes
void qrio_enable_app_buffer(void)
{
        u8 ctrll;
        void __iomem *qrio_base = (void *)CONFIG_SYS_QRIO_BASE;

        /* enable application buffer */
        ctrll = in_8(qrio_base + CTRLL_OFF);
        ctrll |= (CTRLL_WRB_BUFENA);
        out_8(qrio_base + CTRLL_OFF, ctrll);
}
void qrio_enable_app_buffer(void)
{
        u8 ctrll;
        void __iomem *qrio_base = (void *)CONFIG_SYS_QRIO_BASE;

        /* enable application buffer */
        ctrll = in_8(qrio_base + CTRLL_OFF);
        ctrll |= (CTRLL_WRB_BUFENA);
        out_8(qrio_base + CTRLL_OFF, ctrll);
}
C
u-boot
0
CVE-2013-2909
https://www.cvedetails.com/cve/CVE-2013-2909/
CWE-399
https://github.com/chromium/chromium/commit/248a92c21c20c14b5983680c50e1d8b73fc79a2f
248a92c21c20c14b5983680c50e1d8b73fc79a2f
Update containingIsolate to go back all the way to the top isolate from the current root, rather than stopping at the first isolate it finds. This works because the current root is always updated with each isolate run. BUG=279277 Review URL: https://chromiumcodereview.appspot.com/23972003 git-svn-id: svn://svn.chromium.org/blink/trunk@157268 bbb929c8-8fbe-4397-9dbb-9b2b20218538
static bool alwaysRequiresLineBox(RenderObject* flow)
{
    return isEmptyInline(flow) && toRenderInline(flow)->hasInlineDirectionBordersPaddingOrMargin();
}
static bool alwaysRequiresLineBox(RenderObject* flow)
{
    return isEmptyInline(flow) && toRenderInline(flow)->hasInlineDirectionBordersPaddingOrMargin();
}
C
Chrome
0
CVE-2013-7421
https://www.cvedetails.com/cve/CVE-2013-7421/
CWE-264
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
crypto: prefix module autoloading with "crypto-" This prefixes all crypto module loading with "crypto-" so we never run the risk of exposing module auto-loading to userspace via a crypto API, as demonstrated by Mathias Krause: https://lkml.org/lkml/2013/3/4/70 Signed-off-by: Kees Cook <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
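The mechanism the message describes is an alias macro that registers every algorithm under a "crypto-" namespace, so the crypto API requests modules only through that prefix. A sketch of the scheme modeled on the commit (presented as an assumption, not a quoted hunk):

/* Register both the historical bare alias and a "crypto-" prefixed one;
 * the crypto API then loads modules only as "crypto-<alg>", so arbitrary
 * userspace strings can no longer autoload unrelated modules. */
#define MODULE_ALIAS_CRYPTO(name)                               \
                __MODULE_INFO(alias, alias_userspace, name);    \
                __MODULE_INFO(alias, alias_crypto, "crypto-" name)

MODULE_ALIAS_CRYPTO("cast6");   /* the cast6 module now resolves via "crypto-cast6" */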
static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                       struct scatterlist *src, unsigned int nbytes)
{
        struct cast6_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);

        return glue_xts_crypt_128bit(&cast6_enc_xts, desc, dst, src, nbytes,
                                     XTS_TWEAK_CAST(__cast6_encrypt),
                                     &ctx->tweak_ctx, &ctx->crypt_ctx);
}
static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                       struct scatterlist *src, unsigned int nbytes)
{
        struct cast6_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);

        return glue_xts_crypt_128bit(&cast6_enc_xts, desc, dst, src, nbytes,
                                     XTS_TWEAK_CAST(__cast6_encrypt),
                                     &ctx->tweak_ctx, &ctx->crypt_ctx);
}
C
linux
0
CVE-2012-3552
https://www.cvedetails.com/cve/CVE-2012-3552/
CWE-362
https://github.com/torvalds/linux/commit/f6d8bd051c391c1c0458a30b2a7abcd939329259
f6d8bd051c391c1c0458a30b2a7abcd939329259
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options. The problem is that ip_make_skb() calls ip_setup_cork() and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change the inet->opt pointer and free the old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We can't insert an rcu_head in struct ip_options since it's included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
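A minimal sketch of the reader-side pattern the message introduces, assuming the renamed inet->inet_opt field and the new ip_options_rcu wrapper it describes; the surrounding locals are illustrative:

/* Readers take a private copy under rcu_read_lock() instead of holding
 * a pointer that another thread may free after swapping inet->inet_opt. */
struct ip_options_rcu *inet_opt;

rcu_read_lock();
inet_opt = rcu_dereference(inet->inet_opt);
if (inet_opt) {
        /* copy the option bytes while the RCU read side keeps them alive */
        memcpy(&opt_copy, inet_opt,
               sizeof(*inet_opt) + inet_opt->opt.optlen);
}
rcu_read_unlock();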
static int cipso_v4_map_cat_rbm_ntoh(const struct cipso_v4_doi *doi_def, const unsigned char *net_cat, u32 net_cat_len, struct netlbl_lsm_secattr *secattr) { int ret_val; int net_spot = -1; u32 host_spot = CIPSO_V4_INV_CAT; u32 net_clen_bits = net_cat_len * 8; u32 net_cat_size = 0; u32 *net_cat_array = NULL; if (doi_def->type == CIPSO_V4_MAP_TRANS) { net_cat_size = doi_def->map.std->cat.cipso_size; net_cat_array = doi_def->map.std->cat.cipso; } for (;;) { net_spot = cipso_v4_bitmap_walk(net_cat, net_clen_bits, net_spot + 1, 1); if (net_spot < 0) { if (net_spot == -2) return -EFAULT; return 0; } switch (doi_def->type) { case CIPSO_V4_MAP_PASS: host_spot = net_spot; break; case CIPSO_V4_MAP_TRANS: if (net_spot >= net_cat_size) return -EPERM; host_spot = net_cat_array[net_spot]; if (host_spot >= CIPSO_V4_INV_CAT) return -EPERM; break; } ret_val = netlbl_secattr_catmap_setbit(secattr->attr.mls.cat, host_spot, GFP_ATOMIC); if (ret_val != 0) return ret_val; } return -EINVAL; }
static int cipso_v4_map_cat_rbm_ntoh(const struct cipso_v4_doi *doi_def, const unsigned char *net_cat, u32 net_cat_len, struct netlbl_lsm_secattr *secattr) { int ret_val; int net_spot = -1; u32 host_spot = CIPSO_V4_INV_CAT; u32 net_clen_bits = net_cat_len * 8; u32 net_cat_size = 0; u32 *net_cat_array = NULL; if (doi_def->type == CIPSO_V4_MAP_TRANS) { net_cat_size = doi_def->map.std->cat.cipso_size; net_cat_array = doi_def->map.std->cat.cipso; } for (;;) { net_spot = cipso_v4_bitmap_walk(net_cat, net_clen_bits, net_spot + 1, 1); if (net_spot < 0) { if (net_spot == -2) return -EFAULT; return 0; } switch (doi_def->type) { case CIPSO_V4_MAP_PASS: host_spot = net_spot; break; case CIPSO_V4_MAP_TRANS: if (net_spot >= net_cat_size) return -EPERM; host_spot = net_cat_array[net_spot]; if (host_spot >= CIPSO_V4_INV_CAT) return -EPERM; break; } ret_val = netlbl_secattr_catmap_setbit(secattr->attr.mls.cat, host_spot, GFP_ATOMIC); if (ret_val != 0) return ret_val; } return -EINVAL; }
C
linux
0
CVE-2015-3885
https://www.cvedetails.com/cve/CVE-2015-3885/
CWE-189
https://github.com/rawstudio/rawstudio/commit/983bda1f0fa5fa86884381208274198a620f006e
983bda1f0fa5fa86884381208274198a620f006e
Avoid overflow in ljpeg_start().
void CLASS panasonic_load_raw() { int row, col, i, j, sh=0, pred[2], nonz[2]; pana_bits(0); for (row=0; row < height; row++) for (col=0; col < raw_width; col++) { if ((i = col % 14) == 0) pred[0] = pred[1] = nonz[0] = nonz[1] = 0; if (i % 3 == 2) sh = 4 >> (3 - pana_bits(2)); if (nonz[i & 1]) { if ((j = pana_bits(8))) { if ((pred[i & 1] -= 0x80 << sh) < 0 || sh == 4) pred[i & 1] &= ~(-1 << sh); pred[i & 1] += j << sh; } } else if ((nonz[i & 1] = pana_bits(8)) || i > 11) pred[i & 1] = nonz[i & 1] << 4 | pana_bits(4); if (col < width) if ((BAYER(row,col) = pred[col & 1]) > 4098) derror(); } }
void CLASS panasonic_load_raw() { int row, col, i, j, sh=0, pred[2], nonz[2]; pana_bits(0); for (row=0; row < height; row++) for (col=0; col < raw_width; col++) { if ((i = col % 14) == 0) pred[0] = pred[1] = nonz[0] = nonz[1] = 0; if (i % 3 == 2) sh = 4 >> (3 - pana_bits(2)); if (nonz[i & 1]) { if ((j = pana_bits(8))) { if ((pred[i & 1] -= 0x80 << sh) < 0 || sh == 4) pred[i & 1] &= ~(-1 << sh); pred[i & 1] += j << sh; } } else if ((nonz[i & 1] = pana_bits(8)) || i > 11) pred[i & 1] = nonz[i & 1] << 4 | pana_bits(4); if (col < width) if ((BAYER(row,col) = pred[col & 1]) > 4098) derror(); } }
C
rawstudio
0
CVE-2017-18208
https://www.cvedetails.com/cve/CVE-2017-18208/
CWE-835
https://github.com/torvalds/linux/commit/6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91
6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91
mm/madvise.c: fix madvise() infinite loop under special circumstances MADVISE_WILLNEED has always been a noop for DAX (formerly XIP) mappings. Unfortunately madvise_willneed() doesn't communicate this information properly to the generic madvise syscall implementation. The calling convention is quite subtle there. madvise_vma() is supposed to either return an error or update &prev otherwise the main loop will never advance to the next vma and it will keep looping for ever without a way to get out of the kernel. It seems this has been broken since introduction. Nobody has noticed because nobody seems to be using MADVISE_WILLNEED on these DAX mappings. [[email protected]: rewrite changelog] Link: http://lkml.kernel.org/r/[email protected] Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place") Signed-off-by: chenjie <[email protected]> Signed-off-by: guoxuenan <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Minchan Kim <[email protected]> Cc: zhangyi (F) <[email protected]> Cc: Miao Xie <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: David Rientjes <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Carsten Otte <[email protected]> Cc: Dan Williams <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
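The calling-convention repair the message describes boils down to one rule: every madvise_vma() helper must advance *prev even when it degrades to a no-op, or sys_madvise() loops forever on the same VMA. A sketch of the shape of the fix (the function body here is illustrative, not the upstream hunk):

static long madvise_willneed(struct vm_area_struct *vma,
                             struct vm_area_struct **prev,
                             unsigned long start, unsigned long end)
{
        *prev = vma;            /* always advance the loop cursor first */

        if (!vma->vm_file)
                return -EBADF;
        /* ... kick off readahead for the file-backed range; for DAX
         * mappings this remains a no-op, but *prev is already set ... */
        return 0;
}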
static int madvise_free_single_vma(struct vm_area_struct *vma, unsigned long start_addr, unsigned long end_addr) { unsigned long start, end; struct mm_struct *mm = vma->vm_mm; struct mmu_gather tlb; /* MADV_FREE works for only anon vma at the moment */ if (!vma_is_anonymous(vma)) return -EINVAL; start = max(vma->vm_start, start_addr); if (start >= vma->vm_end) return -EINVAL; end = min(vma->vm_end, end_addr); if (end <= vma->vm_start) return -EINVAL; lru_add_drain(); tlb_gather_mmu(&tlb, mm, start, end); update_hiwater_rss(mm); mmu_notifier_invalidate_range_start(mm, start, end); madvise_free_page_range(&tlb, vma, start, end); mmu_notifier_invalidate_range_end(mm, start, end); tlb_finish_mmu(&tlb, start, end); return 0; }
static int madvise_free_single_vma(struct vm_area_struct *vma, unsigned long start_addr, unsigned long end_addr) { unsigned long start, end; struct mm_struct *mm = vma->vm_mm; struct mmu_gather tlb; /* MADV_FREE works for only anon vma at the moment */ if (!vma_is_anonymous(vma)) return -EINVAL; start = max(vma->vm_start, start_addr); if (start >= vma->vm_end) return -EINVAL; end = min(vma->vm_end, end_addr); if (end <= vma->vm_start) return -EINVAL; lru_add_drain(); tlb_gather_mmu(&tlb, mm, start, end); update_hiwater_rss(mm); mmu_notifier_invalidate_range_start(mm, start, end); madvise_free_page_range(&tlb, vma, start, end); mmu_notifier_invalidate_range_end(mm, start, end); tlb_finish_mmu(&tlb, start, end); return 0; }
C
linux
0
CVE-2016-10267
https://www.cvedetails.com/cve/CVE-2016-10267/
CWE-369
https://github.com/vadz/libtiff/commit/43bc256d8ae44b92d2734a3c5bc73957a4d7c1ec
43bc256d8ae44b92d2734a3c5bc73957a4d7c1ec
* libtiff/tif_ojpeg.c: make OJPEGDecode() early exit in case of failure in OJPEGPreDecode(). This will avoid a divide by zero, and potential other issues. Reported by Agostino Sarubbo. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2611
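A hedged sketch of the early-exit guard: OJPEGPreDecode() records whether setup succeeded, and OJPEGDecode() refuses to run otherwise, so later arithmetic on the row byte count never divides by zero. The decoder_ok flag name and the error text are assumptions for illustration:

static int OJPEGDecode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s)
{
        OJPEGState* sp = (OJPEGState*)tif->tif_data;

        if (!sp->decoder_ok) {
                /* OJPEGPreDecode() failed; bail out before any division */
                TIFFErrorExt(tif->tif_clientdata, "OJPEGDecode",
                             "Cannot decode: decoder not correctly initialized");
                return 0;
        }
        /* ... normal decode path ... */
        return 1;
}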
OJPEGWriteStreamSoi(TIFF* tif, void** mem, uint32* len)
{
        OJPEGState* sp=(OJPEGState*)tif->tif_data;
        assert(OJPEG_BUFFER>=2);
        sp->out_buffer[0]=255;
        sp->out_buffer[1]=JPEG_MARKER_SOI;
        *len=2;
        *mem=(void*)sp->out_buffer;
        sp->out_state++;
}
OJPEGWriteStreamSoi(TIFF* tif, void** mem, uint32* len)
{
        OJPEGState* sp=(OJPEGState*)tif->tif_data;
        assert(OJPEG_BUFFER>=2);
        sp->out_buffer[0]=255;
        sp->out_buffer[1]=JPEG_MARKER_SOI;
        *len=2;
        *mem=(void*)sp->out_buffer;
        sp->out_state++;
}
C
libtiff
0
CVE-2016-10218
https://www.cvedetails.com/cve/CVE-2016-10218/
CWE-476
http://git.ghostscript.com/?p=ghostpdl.git;a=commit;h=d621292fb2c8157d9899dcd83fd04dd250e30fe4
d621292fb2c8157d9899dcd83fd04dd250e30fe4
null
pdf14_buf_new(gs_int_rect *rect, bool has_tags, bool has_alpha_g, bool has_shape, bool idle, int n_chan, int num_spots, gs_memory_t *memory) { /* Note that alpha_g is the alpha for the GROUP */ /* This is distinct from the alpha that may also exist */ /* for the objects within the group. Hence it can introduce */ /* yet another plane */ pdf14_buf *result; pdf14_parent_color_t *new_parent_color; int rowstride = (rect->q.x - rect->p.x + 3) & -4; int height = (rect->q.y - rect->p.y); int n_planes = n_chan + (has_shape ? 1 : 0) + (has_alpha_g ? 1 : 0) + (has_tags ? 1 : 0); int planestride; double dsize = (((double) rowstride) * height) * n_planes; if (dsize > (double)max_uint) return NULL; result = gs_alloc_struct(memory, pdf14_buf, &st_pdf14_buf, "pdf14_buf_new"); if (result == NULL) return result; result->backdrop = NULL; result->saved = NULL; result->isolated = false; result->knockout = false; result->has_alpha_g = has_alpha_g; result->has_shape = has_shape; result->has_tags = has_tags; result->rect = *rect; result->n_chan = n_chan; result->n_planes = n_planes; result->rowstride = rowstride; result->transfer_fn = NULL; result->matte_num_comps = 0; result->matte = NULL; result->mask_stack = NULL; result->idle = idle; result->mask_id = 0; result->num_spots = num_spots; new_parent_color = gs_alloc_struct(memory, pdf14_parent_color_t, &st_pdf14_clr, "pdf14_buf_new"); if (new_parent_color == NULL) { gs_free_object(memory, result, "pdf14_buf_new"); return NULL; } result->parent_color_info_procs = new_parent_color; result->parent_color_info_procs->get_cmap_procs = NULL; result->parent_color_info_procs->parent_color_mapping_procs = NULL; result->parent_color_info_procs->parent_color_comp_index = NULL; result->parent_color_info_procs->icc_profile = NULL; result->parent_color_info_procs->previous = NULL; result->parent_color_info_procs->encode = NULL; result->parent_color_info_procs->decode = NULL; if (height <= 0) { /* Empty clipping - will skip all drawings. */ result->planestride = 0; result->data = 0; } else { planestride = rowstride * height; result->planestride = planestride; result->data = gs_alloc_bytes(memory, planestride * n_planes, "pdf14_buf_new"); if (result->data == NULL) { gs_free_object(memory, result, "pdf14_buf_new"); return NULL; } if (has_alpha_g) { int alpha_g_plane = n_chan + (has_shape ? 1 : 0); memset (result->data + alpha_g_plane * planestride, 0, planestride); } if (has_tags) { int tags_plane = n_chan + (has_shape ? 1 : 0) + (has_alpha_g ? 1 : 0); memset (result->data + tags_plane * planestride, GS_UNTOUCHED_TAG, planestride); } } /* Initialize dirty box with an invalid rectangle (the reversed rectangle). * Any future drawing will make it valid again, so we won't blend back * more than we need. */ result->dirty.p.x = rect->q.x; result->dirty.p.y = rect->q.y; result->dirty.q.x = rect->p.x; result->dirty.q.y = rect->p.y; return result; }
pdf14_buf_new(gs_int_rect *rect, bool has_tags, bool has_alpha_g, bool has_shape, bool idle, int n_chan, int num_spots, gs_memory_t *memory) { /* Note that alpha_g is the alpha for the GROUP */ /* This is distinct from the alpha that may also exist */ /* for the objects within the group. Hence it can introduce */ /* yet another plane */ pdf14_buf *result; pdf14_parent_color_t *new_parent_color; int rowstride = (rect->q.x - rect->p.x + 3) & -4; int height = (rect->q.y - rect->p.y); int n_planes = n_chan + (has_shape ? 1 : 0) + (has_alpha_g ? 1 : 0) + (has_tags ? 1 : 0); int planestride; double dsize = (((double) rowstride) * height) * n_planes; if (dsize > (double)max_uint) return NULL; result = gs_alloc_struct(memory, pdf14_buf, &st_pdf14_buf, "pdf14_buf_new"); if (result == NULL) return result; result->backdrop = NULL; result->saved = NULL; result->isolated = false; result->knockout = false; result->has_alpha_g = has_alpha_g; result->has_shape = has_shape; result->has_tags = has_tags; result->rect = *rect; result->n_chan = n_chan; result->n_planes = n_planes; result->rowstride = rowstride; result->transfer_fn = NULL; result->matte_num_comps = 0; result->matte = NULL; result->mask_stack = NULL; result->idle = idle; result->mask_id = 0; result->num_spots = num_spots; new_parent_color = gs_alloc_struct(memory, pdf14_parent_color_t, &st_pdf14_clr, "pdf14_buf_new"); if (new_parent_color == NULL) { gs_free_object(memory, result, "pdf14_buf_new"); return NULL; } result->parent_color_info_procs = new_parent_color; result->parent_color_info_procs->get_cmap_procs = NULL; result->parent_color_info_procs->parent_color_mapping_procs = NULL; result->parent_color_info_procs->parent_color_comp_index = NULL; result->parent_color_info_procs->icc_profile = NULL; result->parent_color_info_procs->previous = NULL; result->parent_color_info_procs->encode = NULL; result->parent_color_info_procs->decode = NULL; if (height <= 0) { /* Empty clipping - will skip all drawings. */ result->planestride = 0; result->data = 0; } else { planestride = rowstride * height; result->planestride = planestride; result->data = gs_alloc_bytes(memory, planestride * n_planes, "pdf14_buf_new"); if (result->data == NULL) { gs_free_object(memory, result, "pdf14_buf_new"); return NULL; } if (has_alpha_g) { int alpha_g_plane = n_chan + (has_shape ? 1 : 0); memset (result->data + alpha_g_plane * planestride, 0, planestride); } if (has_tags) { int tags_plane = n_chan + (has_shape ? 1 : 0) + (has_alpha_g ? 1 : 0); memset (result->data + tags_plane * planestride, GS_UNTOUCHED_TAG, planestride); } } /* Initialize dirty box with an invalid rectangle (the reversed rectangle). * Any future drawing will make it valid again, so we won't blend back * more than we need. */ result->dirty.p.x = rect->q.x; result->dirty.p.y = rect->q.y; result->dirty.q.x = rect->p.x; result->dirty.q.y = rect->p.y; return result; }
C
ghostscript
0
CVE-2013-3301
https://www.cvedetails.com/cve/CVE-2013-3301/
null
https://github.com/torvalds/linux/commit/6a76f8c0ab19f215af2a3442870eeb5f0e81998d
6a76f8c0ab19f215af2a3442870eeb5f0e81998d
tracing: Fix possible NULL pointer dereferences Currently set_ftrace_pid and set_graph_function files use seq_lseek for their fops. However seq_open() is called only for FMODE_READ in the fops->open() so that if an user tries to seek one of those file when she open it for writing, it sees NULL seq_file and then panic. It can be easily reproduced with following command: $ cd /sys/kernel/debug/tracing $ echo 1234 | sudo tee -a set_ftrace_pid In this example, GNU coreutils' tee opens the file with fopen(, "a") and then the fopen() internally calls lseek(). Link: http://lkml.kernel.org/r/[email protected] Cc: Frederic Weisbecker <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: [email protected] Signed-off-by: Namhyung Kim <[email protected]> Signed-off-by: Steven Rostedt <[email protected]>
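The fix pattern the message implies is an lseek wrapper that only delegates to seq_lseek() when seq_open() actually ran, that is, when the file was opened readable. A sketch following the commit's description (not quoted from the patch):

static loff_t ftrace_filter_lseek(struct file *file, loff_t offset, int whence)
{
        loff_t ret;

        if (file->f_mode & FMODE_READ)
                ret = seq_lseek(file, offset, whence);  /* seq_file exists */
        else
                file->f_pos = ret = 1;  /* write-only open: no seq_file, no deref */

        return ret;
}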
static void *g_start(struct seq_file *m, loff_t *pos)
{
        mutex_lock(&graph_lock);

        /* Nothing, tell g_show to print all functions are enabled */
        if (!ftrace_graph_filter_enabled && !*pos)
                return (void *)1;

        return __g_next(m, pos);
}
static void *g_start(struct seq_file *m, loff_t *pos)
{
        mutex_lock(&graph_lock);

        /* Nothing, tell g_show to print all functions are enabled */
        if (!ftrace_graph_filter_enabled && !*pos)
                return (void *)1;

        return __g_next(m, pos);
}
C
linux
0
CVE-2019-17113
https://www.cvedetails.com/cve/CVE-2019-17113/
CWE-120
https://github.com/OpenMPT/openmpt/commit/927688ddab43c2b203569de79407a899e734fabe
927688ddab43c2b203569de79407a899e734fabe
[Fix] libmodplug: C API: Limit the length of strings copied to the output buffer of ModPlug_InstrumentName() and ModPlug_SampleName() to 32 bytes (including terminating null) as is done by original libmodplug. This avoids potential buffer overflows in software relying on this limit instead of querying the required buffer size beforehand. libopenmpt can return strings longer than 32 bytes here because the internal limit of 32 bytes applies to strings encoded in arbitrary character encodings but the API returns them converted to UTF-8, which can be longer. (reported by Antonio Morales Maldonado of Semmle Security Research Team) git-svn-id: https://source.openmpt.org/svn/openmpt/trunk/OpenMPT@12127 56274372-70c3-4bfc-bfc3-4c3a0b034d27
LIBOPENMPT_MODPLUG_API unsigned int ModPlug_InstrumentName(ModPlugFile* file, unsigned int qual, char* buff)
{
        const char* str;
        char buf[32];
        if(!file)
                return 0;
        str = openmpt_module_get_instrument_name(file->mod,qual-1);
        memset(buf,0,32);
        if(str){
                strncpy(buf,str,31);
                openmpt_free_string(str);
        }
        if(buff){
                strncpy(buff,buf,32);
        }
        return (unsigned int)strlen(buf);
}
LIBOPENMPT_MODPLUG_API unsigned int ModPlug_InstrumentName(ModPlugFile* file, unsigned int qual, char* buff)
{
        const char* str;
        unsigned int retval;
        size_t tmpretval;
        if(!file)
                return 0;
        str = openmpt_module_get_instrument_name(file->mod,qual-1);
        if(!str){
                if(buff){
                        *buff = '\0';
                }
                return 0;
        }
        tmpretval = strlen(str);
        if(tmpretval>=INT_MAX){
                tmpretval = INT_MAX-1;
        }
        retval = (int)tmpretval;
        if(buff){
                memcpy(buff,str,retval+1);
                buff[retval] = '\0';
        }
        openmpt_free_string(str);
        return retval;
}
C
openmpt
1
CVE-2017-0375
https://www.cvedetails.com/cve/CVE-2017-0375/
CWE-617
https://github.com/torproject/tor/commit/79b59a2dfcb68897ee89d98587d09e55f07e68d7
79b59a2dfcb68897ee89d98587d09e55f07e68d7
TROVE-2017-004: Fix assertion failure in relay_send_end_cell_from_edge_ This fixes an assertion failure in relay_send_end_cell_from_edge_() when an origin circuit was passed with cpath_layer = NULL. A service rendezvous circuit could do such a thing when a malformed BEGIN cell is received, but shouldn't in the first place, because the service needs to send an END cell on the circuit, which it cannot do without a cpath_layer. Fixes #22493 Reported-by: Roger Dingledine <[email protected]> Signed-off-by: David Goulet <[email protected]>
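A hedged sketch of the kind of guard this implies in relay_send_end_cell_from_edge_(): detect the origin-circuit-without-cpath combination up front and fail the call instead of asserting. The exact placement and log message are assumptions:

if (CIRCUIT_IS_ORIGIN(circ) && !cpath_layer) {
        /* A rendezvous circuit fed a malformed BEGIN cell can end up
         * here; refuse to build the END cell rather than tripping an
         * assertion deeper in the relay cell path. */
        log_warn(LD_BUG, "No cpath_layer to send an END cell on an "
                         "origin circuit; dropping it.");
        return -1;
}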
connection_ap_can_use_exit(const entry_connection_t *conn, const node_t *exit_node) { const or_options_t *options = get_options(); tor_assert(conn); tor_assert(conn->socks_request); tor_assert(exit_node); /* If a particular exit node has been requested for the new connection, * make sure the exit node of the existing circuit matches exactly. */ if (conn->chosen_exit_name) { const node_t *chosen_exit = node_get_by_nickname(conn->chosen_exit_name, 1); if (!chosen_exit || tor_memneq(chosen_exit->identity, exit_node->identity, DIGEST_LEN)) { /* doesn't match */ return 0; } } if (conn->use_begindir) { /* Internal directory fetches do not count as exiting. */ return 1; } if (conn->socks_request->command == SOCKS_COMMAND_CONNECT) { tor_addr_t addr, *addrp = NULL; addr_policy_result_t r; if (0 == tor_addr_parse(&addr, conn->socks_request->address)) { addrp = &addr; } else if (!conn->entry_cfg.ipv4_traffic && conn->entry_cfg.ipv6_traffic) { tor_addr_make_null(&addr, AF_INET6); addrp = &addr; } else if (conn->entry_cfg.ipv4_traffic && !conn->entry_cfg.ipv6_traffic) { tor_addr_make_null(&addr, AF_INET); addrp = &addr; } r = compare_tor_addr_to_node_policy(addrp, conn->socks_request->port, exit_node); if (r == ADDR_POLICY_REJECTED) return 0; /* We know the address, and the exit policy rejects it. */ if (r == ADDR_POLICY_PROBABLY_REJECTED && !conn->chosen_exit_name) return 0; /* We don't know the addr, but the exit policy rejects most * addresses with this port. Since the user didn't ask for * this node, err on the side of caution. */ } else if (SOCKS_COMMAND_IS_RESOLVE(conn->socks_request->command)) { /* Don't send DNS requests to non-exit servers by default. */ if (!conn->chosen_exit_name && node_exit_policy_rejects_all(exit_node)) return 0; } if (routerset_contains_node(options->ExcludeExitNodesUnion_, exit_node)) { /* Not a suitable exit. Refuse it. */ return 0; } return 1; }
connection_ap_can_use_exit(const entry_connection_t *conn, const node_t *exit_node) { const or_options_t *options = get_options(); tor_assert(conn); tor_assert(conn->socks_request); tor_assert(exit_node); /* If a particular exit node has been requested for the new connection, * make sure the exit node of the existing circuit matches exactly. */ if (conn->chosen_exit_name) { const node_t *chosen_exit = node_get_by_nickname(conn->chosen_exit_name, 1); if (!chosen_exit || tor_memneq(chosen_exit->identity, exit_node->identity, DIGEST_LEN)) { /* doesn't match */ return 0; } } if (conn->use_begindir) { /* Internal directory fetches do not count as exiting. */ return 1; } if (conn->socks_request->command == SOCKS_COMMAND_CONNECT) { tor_addr_t addr, *addrp = NULL; addr_policy_result_t r; if (0 == tor_addr_parse(&addr, conn->socks_request->address)) { addrp = &addr; } else if (!conn->entry_cfg.ipv4_traffic && conn->entry_cfg.ipv6_traffic) { tor_addr_make_null(&addr, AF_INET6); addrp = &addr; } else if (conn->entry_cfg.ipv4_traffic && !conn->entry_cfg.ipv6_traffic) { tor_addr_make_null(&addr, AF_INET); addrp = &addr; } r = compare_tor_addr_to_node_policy(addrp, conn->socks_request->port, exit_node); if (r == ADDR_POLICY_REJECTED) return 0; /* We know the address, and the exit policy rejects it. */ if (r == ADDR_POLICY_PROBABLY_REJECTED && !conn->chosen_exit_name) return 0; /* We don't know the addr, but the exit policy rejects most * addresses with this port. Since the user didn't ask for * this node, err on the side of caution. */ } else if (SOCKS_COMMAND_IS_RESOLVE(conn->socks_request->command)) { /* Don't send DNS requests to non-exit servers by default. */ if (!conn->chosen_exit_name && node_exit_policy_rejects_all(exit_node)) return 0; } if (routerset_contains_node(options->ExcludeExitNodesUnion_, exit_node)) { /* Not a suitable exit. Refuse it. */ return 0; } return 1; }
C
tor
0
CVE-2012-5148
https://www.cvedetails.com/cve/CVE-2012-5148/
CWE-20
https://github.com/chromium/chromium/commit/e89cfcb9090e8c98129ae9160c513f504db74599
e89cfcb9090e8c98129ae9160c513f504db74599
Remove TabContents from TabStripModelObserver::TabDetachedAt. BUG=107201 TEST=no visible change Review URL: https://chromiumcodereview.appspot.com/11293205 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@167122 0039d316-1c4b-4281-b951-d872f2087c98
void BrowserView::ZoomChangedForActiveTab(bool can_show_bubble) { GetLocationBarView()->ZoomChangedForActiveTab( can_show_bubble && !toolbar_->IsWrenchMenuShowing()); }
void BrowserView::ZoomChangedForActiveTab(bool can_show_bubble) { GetLocationBarView()->ZoomChangedForActiveTab( can_show_bubble && !toolbar_->IsWrenchMenuShowing()); }
C
Chrome
0
CVE-2014-6269
https://www.cvedetails.com/cve/CVE-2014-6269/
CWE-189
https://git.haproxy.org/?p=haproxy-1.5.git;a=commitdiff;h=b4d05093bc89f71377230228007e69a1434c1a0c
b4d05093bc89f71377230228007e69a1434c1a0c
null
int conn_si_send_proxy(struct connection *conn, unsigned int flag) { /* we might have been called just after an asynchronous shutw */ if (conn->flags & CO_FL_SOCK_WR_SH) goto out_error; if (!conn_ctrl_ready(conn)) goto out_error; if (!fd_send_ready(conn->t.sock.fd)) goto out_wait; /* If we have a PROXY line to send, we'll use this to validate the * connection, in which case the connection is validated only once * we've sent the whole proxy line. Otherwise we use connect(). */ while (conn->send_proxy_ofs) { int ret; /* The target server expects a PROXY line to be sent first. * If the send_proxy_ofs is negative, it corresponds to the * offset to start sending from then end of the proxy string * (which is recomputed every time since it's constant). If * it is positive, it means we have to send from the start. * We can only send a "normal" PROXY line when the connection * is attached to a stream interface. Otherwise we can only * send a LOCAL line (eg: for use with health checks). */ if (conn->data == &si_conn_cb) { struct stream_interface *si = conn->owner; struct connection *remote = objt_conn(si->ob->prod->end); ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), remote); } else { /* The target server expects a LOCAL line to be sent first. Retrieving * local or remote addresses may fail until the connection is established. */ conn_get_from_addr(conn); if (!(conn->flags & CO_FL_ADDR_FROM_SET)) goto out_wait; conn_get_to_addr(conn); if (!(conn->flags & CO_FL_ADDR_TO_SET)) goto out_wait; ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), conn); } if (!ret) goto out_error; if (conn->send_proxy_ofs > 0) conn->send_proxy_ofs = -ret; /* first call */ /* we have to send trash from (ret+sp for -sp bytes). If the * data layer has a pending write, we'll also set MSG_MORE. */ ret = send(conn->t.sock.fd, trash.str + ret + conn->send_proxy_ofs, -conn->send_proxy_ofs, (conn->flags & CO_FL_DATA_WR_ENA) ? MSG_MORE : 0); if (ret == 0) goto out_wait; if (ret < 0) { if (errno == EAGAIN || errno == ENOTCONN) goto out_wait; if (errno == EINTR) continue; conn->flags |= CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH; goto out_error; } conn->send_proxy_ofs += ret; /* becomes zero once complete */ if (conn->send_proxy_ofs != 0) goto out_wait; /* OK we've sent the whole line, we're connected */ break; } /* The connection is ready now, simply return and let the connection * handler notify upper layers if needed. */ if (conn->flags & CO_FL_WAIT_L4_CONN) conn->flags &= ~CO_FL_WAIT_L4_CONN; conn->flags &= ~flag; return 1; out_error: /* Write error on the file descriptor */ conn->flags |= CO_FL_ERROR; return 0; out_wait: __conn_sock_stop_recv(conn); fd_cant_send(conn->t.sock.fd); return 0; }
int conn_si_send_proxy(struct connection *conn, unsigned int flag) { /* we might have been called just after an asynchronous shutw */ if (conn->flags & CO_FL_SOCK_WR_SH) goto out_error; if (!conn_ctrl_ready(conn)) goto out_error; if (!fd_send_ready(conn->t.sock.fd)) goto out_wait; /* If we have a PROXY line to send, we'll use this to validate the * connection, in which case the connection is validated only once * we've sent the whole proxy line. Otherwise we use connect(). */ while (conn->send_proxy_ofs) { int ret; /* The target server expects a PROXY line to be sent first. * If the send_proxy_ofs is negative, it corresponds to the * offset to start sending from then end of the proxy string * (which is recomputed every time since it's constant). If * it is positive, it means we have to send from the start. * We can only send a "normal" PROXY line when the connection * is attached to a stream interface. Otherwise we can only * send a LOCAL line (eg: for use with health checks). */ if (conn->data == &si_conn_cb) { struct stream_interface *si = conn->owner; struct connection *remote = objt_conn(si->ob->prod->end); ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), remote); } else { /* The target server expects a LOCAL line to be sent first. Retrieving * local or remote addresses may fail until the connection is established. */ conn_get_from_addr(conn); if (!(conn->flags & CO_FL_ADDR_FROM_SET)) goto out_wait; conn_get_to_addr(conn); if (!(conn->flags & CO_FL_ADDR_TO_SET)) goto out_wait; ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), conn); } if (!ret) goto out_error; if (conn->send_proxy_ofs > 0) conn->send_proxy_ofs = -ret; /* first call */ /* we have to send trash from (ret+sp for -sp bytes). If the * data layer has a pending write, we'll also set MSG_MORE. */ ret = send(conn->t.sock.fd, trash.str + ret + conn->send_proxy_ofs, -conn->send_proxy_ofs, (conn->flags & CO_FL_DATA_WR_ENA) ? MSG_MORE : 0); if (ret == 0) goto out_wait; if (ret < 0) { if (errno == EAGAIN || errno == ENOTCONN) goto out_wait; if (errno == EINTR) continue; conn->flags |= CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH; goto out_error; } conn->send_proxy_ofs += ret; /* becomes zero once complete */ if (conn->send_proxy_ofs != 0) goto out_wait; /* OK we've sent the whole line, we're connected */ break; } /* The connection is ready now, simply return and let the connection * handler notify upper layers if needed. */ if (conn->flags & CO_FL_WAIT_L4_CONN) conn->flags &= ~CO_FL_WAIT_L4_CONN; conn->flags &= ~flag; return 1; out_error: /* Write error on the file descriptor */ conn->flags |= CO_FL_ERROR; return 0; out_wait: __conn_sock_stop_recv(conn); fd_cant_send(conn->t.sock.fd); return 0; }
C
haproxy
0
CVE-2013-2867
https://www.cvedetails.com/cve/CVE-2013-2867/
null
https://github.com/chromium/chromium/commit/d358f57009b85fb7440208afa5ba87636b491889
d358f57009b85fb7440208afa5ba87636b491889
Refactor to support default Bluetooth pairing delegate In order to support a default pairing delegate we need to move the agent service provider delegate implementation from BluetoothDevice to BluetoothAdapter while retaining the existing API. BUG=338492 TEST=device_unittests, unit_tests, browser_tests Review URL: https://codereview.chromium.org/148293003 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@252216 0039d316-1c4b-4281-b951-d872f2087c98
void BluetoothDeviceChromeOS::OnConnectError( bool after_pairing, const ConnectErrorCallback& error_callback, const std::string& error_name, const std::string& error_message) { if (--num_connecting_calls_ == 0) adapter_->NotifyDeviceChanged(this); DCHECK(num_connecting_calls_ >= 0); LOG(WARNING) << object_path_.value() << ": Failed to connect device: " << error_name << ": " << error_message; VLOG(1) << object_path_.value() << ": " << num_connecting_calls_ << " still in progress"; ConnectErrorCode error_code = ERROR_UNKNOWN; if (error_name == bluetooth_device::kErrorFailed) { error_code = ERROR_FAILED; } else if (error_name == bluetooth_device::kErrorInProgress) { error_code = ERROR_INPROGRESS; } else if (error_name == bluetooth_device::kErrorNotSupported) { error_code = ERROR_UNSUPPORTED_DEVICE; } if (after_pairing) RecordPairingResult(error_code); error_callback.Run(error_code); }
void BluetoothDeviceChromeOS::OnConnectError( bool after_pairing, const ConnectErrorCallback& error_callback, const std::string& error_name, const std::string& error_message) { if (--num_connecting_calls_ == 0) adapter_->NotifyDeviceChanged(this); DCHECK(num_connecting_calls_ >= 0); LOG(WARNING) << object_path_.value() << ": Failed to connect device: " << error_name << ": " << error_message; VLOG(1) << object_path_.value() << ": " << num_connecting_calls_ << " still in progress"; ConnectErrorCode error_code = ERROR_UNKNOWN; if (error_name == bluetooth_device::kErrorFailed) { error_code = ERROR_FAILED; } else if (error_name == bluetooth_device::kErrorInProgress) { error_code = ERROR_INPROGRESS; } else if (error_name == bluetooth_device::kErrorNotSupported) { error_code = ERROR_UNSUPPORTED_DEVICE; } if (after_pairing) RecordPairingResult(error_code); error_callback.Run(error_code); }
C
Chrome
0
CVE-2015-8955
https://www.cvedetails.com/cve/CVE-2015-8955/
CWE-264
https://github.com/torvalds/linux/commit/8fff105e13041e49b82f92eef034f363a6b1c071
8fff105e13041e49b82f92eef034f363a6b1c071
arm64: perf: reject groups spanning multiple HW PMUs

The perf core implicitly rejects events spanning multiple HW PMUs, as in these cases the event->ctx will differ. However this validation is performed after pmu::event_init() is called in perf_init_event(), and thus pmu::event_init() may be called with a group leader from a different HW PMU.

The ARM64 PMU driver does not take this fact into account, and when validating groups assumes that it can call to_arm_pmu(event->pmu) for any HW event. When the event in question is from another HW PMU this is wrong, and results in dereferencing garbage.

This patch updates the ARM64 PMU driver to first test for and reject events from other PMUs, moving the to_arm_pmu and related logic after this test.

Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with a CCI PMU present:

Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
PC is at 0x0
LR is at validate_event+0x90/0xa8
pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
sp : ffffffc07b0a3ba0
[<          (null)>]           (null)
[<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
[<ffffffc00015d870>] perf_try_init_event+0x34/0x70
[<ffffffc000164094>] perf_init_event+0xe0/0x10c
[<ffffffc000164348>] perf_event_alloc+0x288/0x358
[<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
Code: bad PC value

Also cleans up the code to use the arm_pmu only when we know that we are dealing with an arm pmu event.

Cc: Will Deacon <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Suzuki K. Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
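The ordering bug is concrete enough to sketch: validate_event() must filter out software events and events owned by other PMUs before ever calling to_arm_pmu(event->pmu). The shape below follows the commit message; the exact helper names are assumptions:

static int validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
                          struct perf_event *event)
{
        struct arm_pmu *armpmu;

        if (is_software_event(event))
                return 1;
        if (event->pmu != pmu)          /* e.g. a CCI PMU event in the group */
                return 0;
        if (event->state < PERF_EVENT_STATE_OFF)
                return 1;

        armpmu = to_arm_pmu(event->pmu);        /* now known to be ours */
        return armpmu->get_event_idx(hw_events, event) >= 0;
}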
static int map_cpu_event(struct perf_event *event,
                         const unsigned (*event_map)[PERF_COUNT_HW_MAX],
                         const unsigned (*cache_map)
                                        [PERF_COUNT_HW_CACHE_MAX]
                                        [PERF_COUNT_HW_CACHE_OP_MAX]
                                        [PERF_COUNT_HW_CACHE_RESULT_MAX],
                         u32 raw_event_mask)
{
        u64 config = event->attr.config;

        switch (event->attr.type) {
        case PERF_TYPE_HARDWARE:
                return armpmu_map_event(event_map, config);
        case PERF_TYPE_HW_CACHE:
                return armpmu_map_cache_event(cache_map, config);
        case PERF_TYPE_RAW:
                return armpmu_map_raw_event(raw_event_mask, config);
        }

        return -ENOENT;
}
static int map_cpu_event(struct perf_event *event,
                         const unsigned (*event_map)[PERF_COUNT_HW_MAX],
                         const unsigned (*cache_map)
                                        [PERF_COUNT_HW_CACHE_MAX]
                                        [PERF_COUNT_HW_CACHE_OP_MAX]
                                        [PERF_COUNT_HW_CACHE_RESULT_MAX],
                         u32 raw_event_mask)
{
        u64 config = event->attr.config;

        switch (event->attr.type) {
        case PERF_TYPE_HARDWARE:
                return armpmu_map_event(event_map, config);
        case PERF_TYPE_HW_CACHE:
                return armpmu_map_cache_event(cache_map, config);
        case PERF_TYPE_RAW:
                return armpmu_map_raw_event(raw_event_mask, config);
        }

        return -ENOENT;
}
C
linux
0
CVE-2014-1713
https://www.cvedetails.com/cve/CVE-2014-1713/
CWE-399
https://github.com/chromium/chromium/commit/f85a87ec670ad0fce9d98d90c9a705b72a288154
f85a87ec670ad0fce9d98d90c9a705b72a288154
document.location bindings fix BUG=352374 [email protected] Review URL: https://codereview.chromium.org/196343011 git-svn-id: svn://svn.chromium.org/blink/trunk@169176 bbb929c8-8fbe-4397-9dbb-9b2b20218538
static void reflectedClassAttributeGetterCallback(v8::Local<v8::String>, const v8::PropertyCallbackInfo<v8::Value>& info)
{
    TRACE_EVENT_SET_SAMPLING_STATE("Blink", "DOMGetter");
    TestObjectPythonV8Internal::reflectedClassAttributeGetter(info);
    TRACE_EVENT_SET_SAMPLING_STATE("V8", "V8Execution");
}
static void reflectedClassAttributeGetterCallback(v8::Local<v8::String>, const v8::PropertyCallbackInfo<v8::Value>& info)
{
    TRACE_EVENT_SET_SAMPLING_STATE("Blink", "DOMGetter");
    TestObjectPythonV8Internal::reflectedClassAttributeGetter(info);
    TRACE_EVENT_SET_SAMPLING_STATE("V8", "V8Execution");
}
C
Chrome
0
CVE-2013-6449
https://www.cvedetails.com/cve/CVE-2013-6449/
CWE-310
https://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=ca989269a2876bae79393bd54c3e72d49975fc75
ca989269a2876bae79393bd54c3e72d49975fc75
null
int ssl3_num_ciphers(void)
{
        return(SSL3_NUM_CIPHERS);
}
int ssl3_num_ciphers(void)
{
        return(SSL3_NUM_CIPHERS);
}
C
openssl
0
CVE-2013-6763
https://www.cvedetails.com/cve/CVE-2013-6763/
CWE-119
https://github.com/torvalds/linux/commit/7314e613d5ff9f0934f7a0f74ed7973b903315d1
7314e613d5ff9f0934f7a0f74ed7973b903315d1
Fix a few incorrectly checked [io_]remap_pfn_range() calls Nico Golde reports a few straggling uses of [io_]remap_pfn_range() that really should use the vm_iomap_memory() helper. This trivially converts two of them to the helper, and comments about why the third one really needs to continue to use remap_pfn_range(), and adds the missing size check. Reported-by: Nico Golde <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
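A sketch of the conversion pattern for a framebuffer mmap handler; vm_iomap_memory() performs the size and offset validation against the physical region that the raw [io_]remap_pfn_range() calls were missing. The handler shape is illustrative:

static int fb_mmap_example(struct fb_info *info, struct vm_area_struct *vma)
{
        /* the helper rejects VMAs larger than the backing region itself,
         * so no hand-rolled length check is needed here */
        return vm_iomap_memory(vma, info->fix.smem_start, info->fix.smem_len);
}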
static int fbinfo2index (struct fb_info *fb_info)
{
        int i;

        for (i = 0; i < device_count; ++i) {
                if (fb_info == _au1200fb_infos[i])
                        return i;
        }
        printk("au1200fb: ERROR: fbinfo2index failed!\n");
        return -1;
}
static int fbinfo2index (struct fb_info *fb_info)
{
        int i;

        for (i = 0; i < device_count; ++i) {
                if (fb_info == _au1200fb_infos[i])
                        return i;
        }
        printk("au1200fb: ERROR: fbinfo2index failed!\n");
        return -1;
}
C
linux
0
CVE-2018-17205
https://www.cvedetails.com/cve/CVE-2018-17205/
CWE-617
https://github.com/openvswitch/ovs/commit/0befd1f3745055c32940f5faf9559be6a14395e6
0befd1f3745055c32940f5faf9559be6a14395e6
ofproto: Fix OVS crash when reverting old flows in bundle commit

During bundle commit, flows which are added in the bundle are applied to ofproto in order. In case a flow cannot be added (e.g. the flow action is a go-to group id which does not exist), OVS tries to revert back all previous flows which were successfully applied from the same bundle. This is possible since OVS maintains a list of old flows which were replaced by flows from the bundle.

While reinserting old flows, OVS asserts due to a check on rule state != RULE_INITIALIZED. This will work only for new flows, but for old flows the rule state will be RULE_REMOVED. This causes an assert and an OVS crash.

The OVS assert check should be modified to != RULE_INSERTED to prevent any existing rule being re-inserted and allow new rules and old rules (in case of revert) to get inserted.

Here is an example to trigger the assert:

$ ovs-vsctl add-br br-test -- set Bridge br-test datapath_type=netdev
$ cat flows.txt
flow add table=1,priority=0,in_port=2,actions=NORMAL
flow add table=1,priority=0,in_port=3,actions=NORMAL
$ ovs-ofctl dump-flows -OOpenflow13 br-test
 cookie=0x0, duration=2.465s, table=1, n_packets=0, n_bytes=0, priority=0,in_port=2 actions=NORMAL
 cookie=0x0, duration=2.465s, table=1, n_packets=0, n_bytes=0, priority=0,in_port=3 actions=NORMAL
$ cat flow-modify.txt
flow modify table=1,priority=0,in_port=2,actions=drop
flow modify table=1,priority=0,in_port=3,actions=group:10
$ ovs-ofctl bundle br-test flow-modify.txt -OOpenflow13

The first flow rule will be modified since it is a valid rule. However, the second rule is invalid since no group with id 10 exists. Bundle commit tries to revert (insert) the first rule to the old flow, which results in ovs_assert at ofproto_rule_insert__() since the old rule->state = RULE_REMOVED.

Signed-off-by: Vishal Deep Ajmera <[email protected]>
Signed-off-by: Ben Pfaff <[email protected]>
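Per the message, the entry assertion in ofproto_rule_insert__() changes from requiring a freshly initialized rule to merely rejecting an already inserted one. A sketch of the described change (not a quoted diff):

/* before: fires for reverted rules, whose state is RULE_REMOVED */
ovs_assert(rule->state == RULE_INITIALIZED);

/* after: new rules (RULE_INITIALIZED) and reverted old rules
 * (RULE_REMOVED) both pass; only a double insert is rejected */
ovs_assert(rule->state != RULE_INSERTED);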
dealloc_ofp_port(struct ofproto *ofproto, ofp_port_t ofp_port)
{
    if (ofp_to_u16(ofp_port) < ofproto->max_ports) {
        ofport_set_usage(ofproto, ofp_port, time_msec());
    }
}
dealloc_ofp_port(struct ofproto *ofproto, ofp_port_t ofp_port)
{
    if (ofp_to_u16(ofp_port) < ofproto->max_ports) {
        ofport_set_usage(ofproto, ofp_port, time_msec());
    }
}
C
ovs
0
CVE-2016-10741
https://www.cvedetails.com/cve/CVE-2016-10741/
CWE-362
https://github.com/torvalds/linux/commit/04197b341f23b908193308b8d63d17ff23232598
04197b341f23b908193308b8d63d17ff23232598
xfs: don't BUG() on mixed direct and mapped I/O We've had reports of generic/095 causing XFS to BUG() in __xfs_get_blocks() due to the existence of delalloc blocks on a direct I/O read. generic/095 issues a mix of various types of I/O, including direct and memory mapped I/O to a single file. This is clearly not supported behavior and is known to lead to such problems. E.g., the lack of exclusion between the direct I/O and write fault paths means that a write fault can allocate delalloc blocks in a region of a file that was previously a hole after the direct read has attempted to flush/inval the file range, but before it actually reads the block mapping. In turn, the direct read discovers a delalloc extent and cannot proceed. While the appropriate solution here is to not mix direct and memory mapped I/O to the same regions of the same file, the current BUG_ON() behavior is probably overkill as it can crash the entire system. Instead, localize the failure to the I/O in question by returning an error for a direct I/O that cannot be handled safely due to delalloc blocks. Be careful to allow the case of a direct write to post-eof delalloc blocks. This can occur due to speculative preallocation and is safe as post-eof blocks are not accompanied by dirty pages in pagecache (conversely, preallocation within eof must have been zeroed, and thus dirtied, before the inode size could have been increased beyond said blocks). Finally, provide an additional warning if a direct I/O write occurs while the file is memory mapped. This may not catch all problematic scenarios, but provides a hint that some known-to-be-problematic I/O methods are in use. Signed-off-by: Brian Foster <[email protected]> Reviewed-by: Dave Chinner <[email protected]> Signed-off-by: Dave Chinner <[email protected]>
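A hedged sketch of the localized failure path in the block-mapping code: when a direct read finds a delalloc extent (outside the allowed post-eof write case), warn once and fail just that I/O rather than BUG()ing the machine. The surrounding locals and labels are assumptions:

if (direct && imap.br_startblock == DELAYSTARTBLOCK) {
        /* mixed direct and mmap I/O left delalloc blocks behind;
         * fail this request instead of crashing the whole system */
        WARN_ON_ONCE(1);
        error = -EIO;
        goto out_unlock;
}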
xfs_end_io_direct_write( struct kiocb *iocb, loff_t offset, ssize_t size, void *private) { struct inode *inode = file_inode(iocb->ki_filp); struct xfs_inode *ip = XFS_I(inode); uintptr_t flags = (uintptr_t)private; int error = 0; trace_xfs_end_io_direct_write(ip, offset, size); if (XFS_FORCED_SHUTDOWN(ip->i_mount)) return -EIO; if (size <= 0) return size; /* * The flags tell us whether we are doing unwritten extent conversions * or an append transaction that updates the on-disk file size. These * cases are the only cases where we should *potentially* be needing * to update the VFS inode size. */ if (flags == 0) { ASSERT(offset + size <= i_size_read(inode)); return 0; } /* * We need to update the in-core inode size here so that we don't end up * with the on-disk inode size being outside the in-core inode size. We * have no other method of updating EOF for AIO, so always do it here * if necessary. * * We need to lock the test/set EOF update as we can be racing with * other IO completions here to update the EOF. Failing to serialise * here can result in EOF moving backwards and Bad Things Happen when * that occurs. */ spin_lock(&ip->i_flags_lock); if (offset + size > i_size_read(inode)) i_size_write(inode, offset + size); spin_unlock(&ip->i_flags_lock); if (flags & XFS_DIO_FLAG_COW) error = xfs_reflink_end_cow(ip, offset, size); if (flags & XFS_DIO_FLAG_UNWRITTEN) { trace_xfs_end_io_direct_write_unwritten(ip, offset, size); error = xfs_iomap_write_unwritten(ip, offset, size); } if (flags & XFS_DIO_FLAG_APPEND) { trace_xfs_end_io_direct_write_append(ip, offset, size); error = xfs_setfilesize(ip, offset, size); } return error; }
xfs_end_io_direct_write( struct kiocb *iocb, loff_t offset, ssize_t size, void *private) { struct inode *inode = file_inode(iocb->ki_filp); struct xfs_inode *ip = XFS_I(inode); uintptr_t flags = (uintptr_t)private; int error = 0; trace_xfs_end_io_direct_write(ip, offset, size); if (XFS_FORCED_SHUTDOWN(ip->i_mount)) return -EIO; if (size <= 0) return size; /* * The flags tell us whether we are doing unwritten extent conversions * or an append transaction that updates the on-disk file size. These * cases are the only cases where we should *potentially* be needing * to update the VFS inode size. */ if (flags == 0) { ASSERT(offset + size <= i_size_read(inode)); return 0; } /* * We need to update the in-core inode size here so that we don't end up * with the on-disk inode size being outside the in-core inode size. We * have no other method of updating EOF for AIO, so always do it here * if necessary. * * We need to lock the test/set EOF update as we can be racing with * other IO completions here to update the EOF. Failing to serialise * here can result in EOF moving backwards and Bad Things Happen when * that occurs. */ spin_lock(&ip->i_flags_lock); if (offset + size > i_size_read(inode)) i_size_write(inode, offset + size); spin_unlock(&ip->i_flags_lock); if (flags & XFS_DIO_FLAG_COW) error = xfs_reflink_end_cow(ip, offset, size); if (flags & XFS_DIO_FLAG_UNWRITTEN) { trace_xfs_end_io_direct_write_unwritten(ip, offset, size); error = xfs_iomap_write_unwritten(ip, offset, size); } if (flags & XFS_DIO_FLAG_APPEND) { trace_xfs_end_io_direct_write_append(ip, offset, size); error = xfs_setfilesize(ip, offset, size); } return error; }
C
linux
0
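The xfs commit above is an instance of a general hardening pattern: degrade an assertion that reachable (if unsupported) userspace behavior can trigger into a per-I/O error, so one bad request fails instead of the whole machine. A minimal userspace sketch of that pattern, assuming hypothetical names (is_delalloc, read_mapping_*) that stand in for the real XFS block-mapping state:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Old pattern: assert on a state that a racing write fault can legally
 * create; abort() stands in for the kernel's BUG_ON(). */
static int read_mapping_buggy(int is_delalloc)
{
    if (is_delalloc)
        abort();
    return 0;
}

/* Fixed pattern: localize the failure to the offending direct read. */
static int read_mapping_fixed(int is_delalloc)
{
    if (is_delalloc)
        return -EIO;            /* the caller just sees a failed I/O */
    return 0;
}

int main(void)
{
    (void) read_mapping_buggy(0);       /* fine until the race hits */
    int ret = read_mapping_fixed(1);    /* a racing fault left delalloc */
    if (ret < 0)
        fprintf(stderr, "direct read rejected: %d\n", ret);
    return 0;
}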
CVE-2015-6575
https://www.cvedetails.com/cve/CVE-2015-6575/
CWE-189
https://android.googlesource.com/platform/frameworks/av/+/cf1581c66c2ad8c5b1aaca2e43e350cf5974f46d
cf1581c66c2ad8c5b1aaca2e43e350cf5974f46d
Fix several ineffective integer overflow checks Commit edd4a76 (which addressed bugs 15328708, 15342615, 15342751) added several integer overflow checks. Unfortunately, those checks fail to take into account integer promotion rules and are thus themselves subject to an integer overflow. Cast the sizeof() operator to a uint64_t to force promotion while multiplying. Bug: 20139950 (cherry picked from commit e2e812e58e8d2716b00d7d82db99b08d3afb4b32) Change-Id: I080eb3fa147601f18cedab86e0360406c3963d7b
status_t SampleTable::getSampleSize_l( uint32_t sampleIndex, size_t *sampleSize) { return mSampleIterator->getSampleSizeDirect( sampleIndex, sampleSize); }
status_t SampleTable::getSampleSize_l( uint32_t sampleIndex, size_t *sampleSize) { return mSampleIterator->getSampleSizeDirect( sampleIndex, sampleSize); }
C
Android
0
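The ineffective checks described above fail because C multiplies at the width of its operands, so a 32-bit product wraps before the guard ever compares it. A self-contained demonstration of the wrap and of the commit's (uint64_t) cast fix; the 32-bit entry_size stands in for sizeof() on a 32-bit target:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t count = 0x40000001u;   /* attacker-controlled entry count */
    uint32_t entry_size = 4;        /* sizeof(entry) on a 32-bit target */

    /* Ineffective: the product wraps to 4, so any bounds check that
     * inspects total32 afterwards is comparing the wrong number. */
    uint32_t total32 = count * entry_size;
    printf("32-bit product: %" PRIu32 " (wrapped)\n", total32);

    /* The fix: force the multiplication into 64 bits first. */
    uint64_t total64 = (uint64_t)count * entry_size;
    if (total64 > UINT32_MAX)
        printf("64-bit product: %" PRIu64 " -> overflow caught\n", total64);
    return 0;
}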
CVE-2018-6079
https://www.cvedetails.com/cve/CVE-2018-6079/
CWE-200
https://github.com/chromium/chromium/commit/d128139d53e9268e87921e82d89b3f2053cb83fd
d128139d53e9268e87921e82d89b3f2053cb83fd
Fix tabs sharing TEXTURE_2D_ARRAY/TEXTURE_3D data. In linux and android, we are seeing an issue where texture data from one tab overwrites the texture data of another tab. This is happening for apps which are using webgl2 texture of type TEXTURE_2D_ARRAY/TEXTURE_3D. Due to a bug in virtual context save/restore code for above texture formats, the texture data is not properly restored while switching tabs. Hence texture data from one tab overwrites other. This CL has fix for that issue, an update for existing test expectations and a new unit test for this bug. Bug: 788448 Cq-Include-Trybots: master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Change-Id: Ie933984cdd2d1381f42eb4638f730c8245207a28 Reviewed-on: https://chromium-review.googlesource.com/930327 Reviewed-by: Zhenyao Mo <[email protected]> Commit-Queue: vikas soni <[email protected]> Cr-Commit-Position: refs/heads/master@{#539111}
void ContextState::InitStateManual(const ContextState*) const { UpdatePackParameters(); UpdateUnpackParameters(); UpdateWindowRectangles(); }
void ContextState::InitStateManual(const ContextState*) const { UpdatePackParameters(); UpdateUnpackParameters(); UpdateWindowRectangles(); }
C
Chrome
0
CVE-2019-1010251
https://www.cvedetails.com/cve/CVE-2019-1010251/
CWE-20
https://github.com/OISF/suricata/pull/3590/commits/8357ef3f8ffc7d99ef6571350724160de356158b
8357ef3f8ffc7d99ef6571350724160de356158b
proto/detect: workaround dns misdetected as dcerpc The DCERPC UDP detection would misfire on DNS with transaction ID 0x0400. This would happen as the protocol detection engine gives preference to pattern based detection over probing parsers for performance reasons. This hack/workaround fixes this specific case by still running the probing parser if DCERPC has been detected on UDP. The probing parser result will take precedence. Bug #2736.
int AppLayerProtoDetectPPParseConfPorts(const char *ipproto_name, uint8_t ipproto, const char *alproto_name, AppProto alproto, uint16_t min_depth, uint16_t max_depth, ProbingParserFPtr ProbingParserTs, ProbingParserFPtr ProbingParserTc) { SCEnter(); char param[100]; int r; ConfNode *node; ConfNode *port_node = NULL; int config = 0; r = snprintf(param, sizeof(param), "%s%s%s", "app-layer.protocols.", alproto_name, ".detection-ports"); if (r < 0) { SCLogError(SC_ERR_FATAL, "snprintf failure."); exit(EXIT_FAILURE); } else if (r > (int)sizeof(param)) { SCLogError(SC_ERR_FATAL, "buffer not big enough to write param."); exit(EXIT_FAILURE); } node = ConfGetNode(param); if (node == NULL) { SCLogDebug("Entry for %s not found.", param); r = snprintf(param, sizeof(param), "%s%s%s%s%s", "app-layer.protocols.", alproto_name, ".", ipproto_name, ".detection-ports"); if (r < 0) { SCLogError(SC_ERR_FATAL, "snprintf failure."); exit(EXIT_FAILURE); } else if (r > (int)sizeof(param)) { SCLogError(SC_ERR_FATAL, "buffer not big enough to write param."); exit(EXIT_FAILURE); } node = ConfGetNode(param); if (node == NULL) goto end; } /* detect by destination port of the flow (e.g. port 53 for DNS) */ port_node = ConfNodeLookupChild(node, "dp"); if (port_node == NULL) port_node = ConfNodeLookupChild(node, "toserver"); if (port_node != NULL && port_node->val != NULL) { AppLayerProtoDetectPPRegister(ipproto, port_node->val, alproto, min_depth, max_depth, STREAM_TOSERVER, /* to indicate dp */ ProbingParserTs, ProbingParserTc); } /* detect by source port of flow */ port_node = ConfNodeLookupChild(node, "sp"); if (port_node == NULL) port_node = ConfNodeLookupChild(node, "toclient"); if (port_node != NULL && port_node->val != NULL) { AppLayerProtoDetectPPRegister(ipproto, port_node->val, alproto, min_depth, max_depth, STREAM_TOCLIENT, /* to indicate sp */ ProbingParserTc, ProbingParserTs); } config = 1; end: SCReturnInt(config); }
int AppLayerProtoDetectPPParseConfPorts(const char *ipproto_name, uint8_t ipproto, const char *alproto_name, AppProto alproto, uint16_t min_depth, uint16_t max_depth, ProbingParserFPtr ProbingParserTs, ProbingParserFPtr ProbingParserTc) { SCEnter(); char param[100]; int r; ConfNode *node; ConfNode *port_node = NULL; int config = 0; r = snprintf(param, sizeof(param), "%s%s%s", "app-layer.protocols.", alproto_name, ".detection-ports"); if (r < 0) { SCLogError(SC_ERR_FATAL, "snprintf failure."); exit(EXIT_FAILURE); } else if (r > (int)sizeof(param)) { SCLogError(SC_ERR_FATAL, "buffer not big enough to write param."); exit(EXIT_FAILURE); } node = ConfGetNode(param); if (node == NULL) { SCLogDebug("Entry for %s not found.", param); r = snprintf(param, sizeof(param), "%s%s%s%s%s", "app-layer.protocols.", alproto_name, ".", ipproto_name, ".detection-ports"); if (r < 0) { SCLogError(SC_ERR_FATAL, "snprintf failure."); exit(EXIT_FAILURE); } else if (r > (int)sizeof(param)) { SCLogError(SC_ERR_FATAL, "buffer not big enough to write param."); exit(EXIT_FAILURE); } node = ConfGetNode(param); if (node == NULL) goto end; } /* detect by destination port of the flow (e.g. port 53 for DNS) */ port_node = ConfNodeLookupChild(node, "dp"); if (port_node == NULL) port_node = ConfNodeLookupChild(node, "toserver"); if (port_node != NULL && port_node->val != NULL) { AppLayerProtoDetectPPRegister(ipproto, port_node->val, alproto, min_depth, max_depth, STREAM_TOSERVER, /* to indicate dp */ ProbingParserTs, ProbingParserTc); } /* detect by source port of flow */ port_node = ConfNodeLookupChild(node, "sp"); if (port_node == NULL) port_node = ConfNodeLookupChild(node, "toclient"); if (port_node != NULL && port_node->val != NULL) { AppLayerProtoDetectPPRegister(ipproto, port_node->val, alproto, min_depth, max_depth, STREAM_TOCLIENT, /* to indicate sp */ ProbingParserTc, ProbingParserTs); } config = 1; end: SCReturnInt(config); }
C
suricata
0
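The workaround described in this record gives the probing parser the last word when the pattern engine claims DCERPC over UDP. A stand-alone sketch of that precedence rule, assuming toy stand-ins (pattern_detect, dns_probe) rather than Suricata's real MPM patterns and probing parsers:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { PROTO_UNKNOWN, PROTO_DCERPC, PROTO_DNS } AppProto;

/* Pattern matcher stand-in: a DNS header whose transaction ID is 0x0400
 * begins with the same bytes the DCERPC/UDP signature matches on. */
static AppProto pattern_detect(const uint8_t *buf, size_t len)
{
    if (len >= 2 && buf[0] == 0x04 && buf[1] == 0x00)
        return PROTO_DCERPC;
    return PROTO_UNKNOWN;
}

/* DNS probing parser stand-in: a crude header plausibility check. */
static bool dns_probe(const uint8_t *buf, size_t len)
{
    return len >= 12 && (buf[2] & 0x78) == 0;   /* standard query opcode */
}

static AppProto detect(const uint8_t *buf, size_t len, bool is_udp)
{
    AppProto proto = pattern_detect(buf, len);
    /* The workaround: for a DCERPC-over-UDP pattern match, still run
     * the probing parser and let its verdict take precedence. */
    if (proto == PROTO_DCERPC && is_udp && dns_probe(buf, len))
        proto = PROTO_DNS;
    return proto;
}

int main(void)
{
    const uint8_t dns_pkt[12] = { 0x04, 0x00, 0x01, 0x00 };
    printf("detected protocol: %d\n", detect(dns_pkt, sizeof(dns_pkt), true));
    return 0;
}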
CVE-2019-1563
https://www.cvedetails.com/cve/CVE-2019-1563/
CWE-311
https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=08229ad838c50f644d7e928e2eef147b4308ad64
08229ad838c50f644d7e928e2eef147b4308ad64
null
CMS_RecipientInfo *CMS_add0_recipient_key(CMS_ContentInfo *cms, int nid, unsigned char *key, size_t keylen, unsigned char *id, size_t idlen, ASN1_GENERALIZEDTIME *date, ASN1_OBJECT *otherTypeId, ASN1_TYPE *otherType) { CMS_RecipientInfo *ri = NULL; CMS_EnvelopedData *env; CMS_KEKRecipientInfo *kekri; env = cms_get0_enveloped(cms); if (!env) goto err; if (nid == NID_undef) { switch (keylen) { case 16: nid = NID_id_aes128_wrap; break; case 24: nid = NID_id_aes192_wrap; break; case 32: nid = NID_id_aes256_wrap; break; default: CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_INVALID_KEY_LENGTH); goto err; } } else { size_t exp_keylen = aes_wrap_keylen(nid); if (!exp_keylen) { CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_UNSUPPORTED_KEK_ALGORITHM); goto err; } if (keylen != exp_keylen) { CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_INVALID_KEY_LENGTH); goto err; } } /* Initialize recipient info */ ri = M_ASN1_new_of(CMS_RecipientInfo); if (!ri) goto merr; ri->d.kekri = M_ASN1_new_of(CMS_KEKRecipientInfo); if (!ri->d.kekri) goto merr; ri->type = CMS_RECIPINFO_KEK; kekri = ri->d.kekri; if (otherTypeId) { kekri->kekid->other = M_ASN1_new_of(CMS_OtherKeyAttribute); if (kekri->kekid->other == NULL) goto merr; } if (!sk_CMS_RecipientInfo_push(env->recipientInfos, ri)) goto merr; /* After this point no calls can fail */ kekri->version = 4; kekri->key = key; kekri->keylen = keylen; ASN1_STRING_set0(kekri->kekid->keyIdentifier, id, idlen); kekri->kekid->date = date; if (kekri->kekid->other) { kekri->kekid->other->keyAttrId = otherTypeId; kekri->kekid->other->keyAttr = otherType; } X509_ALGOR_set0(kekri->keyEncryptionAlgorithm, OBJ_nid2obj(nid), V_ASN1_UNDEF, NULL); return ri; merr: CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, ERR_R_MALLOC_FAILURE); err: M_ASN1_free_of(ri, CMS_RecipientInfo); return NULL; }
CMS_RecipientInfo *CMS_add0_recipient_key(CMS_ContentInfo *cms, int nid, unsigned char *key, size_t keylen, unsigned char *id, size_t idlen, ASN1_GENERALIZEDTIME *date, ASN1_OBJECT *otherTypeId, ASN1_TYPE *otherType) { CMS_RecipientInfo *ri = NULL; CMS_EnvelopedData *env; CMS_KEKRecipientInfo *kekri; env = cms_get0_enveloped(cms); if (!env) goto err; if (nid == NID_undef) { switch (keylen) { case 16: nid = NID_id_aes128_wrap; break; case 24: nid = NID_id_aes192_wrap; break; case 32: nid = NID_id_aes256_wrap; break; default: CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_INVALID_KEY_LENGTH); goto err; } } else { size_t exp_keylen = aes_wrap_keylen(nid); if (!exp_keylen) { CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_UNSUPPORTED_KEK_ALGORITHM); goto err; } if (keylen != exp_keylen) { CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, CMS_R_INVALID_KEY_LENGTH); goto err; } } /* Initialize recipient info */ ri = M_ASN1_new_of(CMS_RecipientInfo); if (!ri) goto merr; ri->d.kekri = M_ASN1_new_of(CMS_KEKRecipientInfo); if (!ri->d.kekri) goto merr; ri->type = CMS_RECIPINFO_KEK; kekri = ri->d.kekri; if (otherTypeId) { kekri->kekid->other = M_ASN1_new_of(CMS_OtherKeyAttribute); if (kekri->kekid->other == NULL) goto merr; } if (!sk_CMS_RecipientInfo_push(env->recipientInfos, ri)) goto merr; /* After this point no calls can fail */ kekri->version = 4; kekri->key = key; kekri->keylen = keylen; ASN1_STRING_set0(kekri->kekid->keyIdentifier, id, idlen); kekri->kekid->date = date; if (kekri->kekid->other) { kekri->kekid->other->keyAttrId = otherTypeId; kekri->kekid->other->keyAttr = otherType; } X509_ALGOR_set0(kekri->keyEncryptionAlgorithm, OBJ_nid2obj(nid), V_ASN1_UNDEF, NULL); return ri; merr: CMSerr(CMS_F_CMS_ADD0_RECIPIENT_KEY, ERR_R_MALLOC_FAILURE); err: M_ASN1_free_of(ri, CMS_RecipientInfo); return NULL; }
C
openssl
0
CVE-2017-15393
https://www.cvedetails.com/cve/CVE-2017-15393/
CWE-668
https://github.com/chromium/chromium/commit/a8ef19900d003ff7078fe4fcec8f63496b18f0dc
a8ef19900d003ff7078fe4fcec8f63496b18f0dc
[DevTools] Use no-referrer for DevTools links Bug: 732751 Change-Id: I77753120e2424203dedcc7bc0847fb67f87fe2b2 Reviewed-on: https://chromium-review.googlesource.com/615021 Reviewed-by: Andrey Kosyakov <[email protected]> Commit-Queue: Dmitry Gozman <[email protected]> Cr-Commit-Position: refs/heads/master@{#494413}
void DevToolsWindow::OpenNodeFrontendWindow(Profile* profile) { for (DevToolsWindow* window : g_instances.Get()) { if (window->frontend_type_ == kFrontendNode) { window->ActivateWindow(); return; } } DevToolsWindow* window = Create(profile, nullptr, kFrontendNode, std::string(), false, std::string(), std::string()); if (!window) return; window->bindings_->AttachTo(DevToolsAgentHost::CreateForDiscovery()); window->ScheduleShow(DevToolsToggleAction::Show()); }
void DevToolsWindow::OpenNodeFrontendWindow(Profile* profile) { for (DevToolsWindow* window : g_instances.Get()) { if (window->frontend_type_ == kFrontendNode) { window->ActivateWindow(); return; } } DevToolsWindow* window = Create(profile, nullptr, kFrontendNode, std::string(), false, std::string(), std::string()); if (!window) return; window->bindings_->AttachTo(DevToolsAgentHost::CreateForDiscovery()); window->ScheduleShow(DevToolsToggleAction::Show()); }
C
Chrome
0
CVE-2014-3515
https://www.cvedetails.com/cve/CVE-2014-3515/
null
https://git.php.net/?p=php-src.git;a=commit;h=88223c5245e9b470e1e6362bfd96829562ffe6ab
88223c5245e9b470e1e6362bfd96829562ffe6ab
null
SPL_METHOD(Array, unserialize) { spl_array_object *intern = (spl_array_object*)zend_object_store_get_object(getThis() TSRMLS_CC); char *buf; int buf_len; const unsigned char *p, *s; php_unserialize_data_t var_hash; zval *pmembers, *pflags = NULL; long flags; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s", &buf, &buf_len) == FAILURE) { return; } if (buf_len == 0) { zend_throw_exception_ex(spl_ce_UnexpectedValueException, 0 TSRMLS_CC, "Empty serialized string cannot be empty"); return; } /* storage */ s = p = (const unsigned char*)buf; PHP_VAR_UNSERIALIZE_INIT(var_hash); if (*p!= 'x' || *++p != ':') { goto outexcept; } ++p; ALLOC_INIT_ZVAL(pflags); if (!php_var_unserialize(&pflags, &p, s + buf_len, &var_hash TSRMLS_CC) || Z_TYPE_P(pflags) != IS_LONG) { zval_ptr_dtor(&pflags); goto outexcept; } --p; /* for ';' */ flags = Z_LVAL_P(pflags); zval_ptr_dtor(&pflags); /* flags needs to be verified and we also need to verify whether the next * thing we get is ';'. After that we require an 'm' or somethign else * where 'm' stands for members and anything else should be an array. If * neither 'a' or 'm' follows we have an error. */ if (*p != ';') { goto outexcept; } ++p; if (*p!='m') { if (*p!='a' && *p!='O' && *p!='C' && *p!='r') { goto outexcept; } intern->ar_flags &= ~SPL_ARRAY_CLONE_MASK; intern->ar_flags |= flags & SPL_ARRAY_CLONE_MASK; zval_ptr_dtor(&intern->array); ALLOC_INIT_ZVAL(intern->array); if (!php_var_unserialize(&intern->array, &p, s + buf_len, &var_hash TSRMLS_CC)) { goto outexcept; } } if (*p != ';') { goto outexcept; } ++p; /* members */ if (*p!= 'm' || *++p != ':') { goto outexcept; } ++p; ALLOC_INIT_ZVAL(pmembers); if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC) || Z_TYPE_P(pmembers) != IS_ARRAY) { zval_ptr_dtor(&pmembers); goto outexcept; } /* copy members */ if (!intern->std.properties) { rebuild_object_properties(&intern->std); } zend_hash_copy(intern->std.properties, Z_ARRVAL_P(pmembers), (copy_ctor_func_t) zval_add_ref, (void *) NULL, sizeof(zval *)); zval_ptr_dtor(&pmembers); /* done reading $serialized */ PHP_VAR_UNSERIALIZE_DESTROY(var_hash); return; outexcept: PHP_VAR_UNSERIALIZE_DESTROY(var_hash); zend_throw_exception_ex(spl_ce_UnexpectedValueException, 0 TSRMLS_CC, "Error at offset %ld of %d bytes", (long)((char*)p - buf), buf_len); return; } /* }}} */ /* {{{ arginfo and function tbale */
SPL_METHOD(Array, unserialize) { spl_array_object *intern = (spl_array_object*)zend_object_store_get_object(getThis() TSRMLS_CC); char *buf; int buf_len; const unsigned char *p, *s; php_unserialize_data_t var_hash; zval *pmembers, *pflags = NULL; long flags; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s", &buf, &buf_len) == FAILURE) { return; } if (buf_len == 0) { zend_throw_exception_ex(spl_ce_UnexpectedValueException, 0 TSRMLS_CC, "Empty serialized string cannot be empty"); return; } /* storage */ s = p = (const unsigned char*)buf; PHP_VAR_UNSERIALIZE_INIT(var_hash); if (*p!= 'x' || *++p != ':') { goto outexcept; } ++p; ALLOC_INIT_ZVAL(pflags); if (!php_var_unserialize(&pflags, &p, s + buf_len, &var_hash TSRMLS_CC) || Z_TYPE_P(pflags) != IS_LONG) { zval_ptr_dtor(&pflags); goto outexcept; } --p; /* for ';' */ flags = Z_LVAL_P(pflags); zval_ptr_dtor(&pflags); /* flags needs to be verified and we also need to verify whether the next * thing we get is ';'. After that we require an 'm' or somethign else * where 'm' stands for members and anything else should be an array. If * neither 'a' or 'm' follows we have an error. */ if (*p != ';') { goto outexcept; } ++p; if (*p!='m') { if (*p!='a' && *p!='O' && *p!='C' && *p!='r') { goto outexcept; } intern->ar_flags &= ~SPL_ARRAY_CLONE_MASK; intern->ar_flags |= flags & SPL_ARRAY_CLONE_MASK; zval_ptr_dtor(&intern->array); ALLOC_INIT_ZVAL(intern->array); if (!php_var_unserialize(&intern->array, &p, s + buf_len, &var_hash TSRMLS_CC)) { goto outexcept; } } if (*p != ';') { goto outexcept; } ++p; /* members */ if (*p!= 'm' || *++p != ':') { goto outexcept; } ++p; ALLOC_INIT_ZVAL(pmembers); if (!php_var_unserialize(&pmembers, &p, s + buf_len, &var_hash TSRMLS_CC)) { zval_ptr_dtor(&pmembers); goto outexcept; } /* copy members */ if (!intern->std.properties) { rebuild_object_properties(&intern->std); } zend_hash_copy(intern->std.properties, Z_ARRVAL_P(pmembers), (copy_ctor_func_t) zval_add_ref, (void *) NULL, sizeof(zval *)); zval_ptr_dtor(&pmembers); /* done reading $serialized */ PHP_VAR_UNSERIALIZE_DESTROY(var_hash); return; outexcept: PHP_VAR_UNSERIALIZE_DESTROY(var_hash); zend_throw_exception_ex(spl_ce_UnexpectedValueException, 0 TSRMLS_CC, "Error at offset %ld of %d bytes", (long)((char*)p - buf), buf_len); return; } /* }}} */ /* {{{ arginfo and function tbale */
C
php
1
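This is one of the few rows flagged vul=1, and the visible difference between the two bodies is a single check: the fixed version also rejects a deserialized pmembers whose type is not IS_ARRAY before copying it into the object's properties. The underlying discipline — verify the type tag before using a deserialized value — in a self-contained C sketch, where the tagged value is a toy stand-in for a PHP zval:

#include <stdio.h>

typedef enum { T_LONG, T_ARRAY } vtype;

typedef struct {
    vtype type;
    union { long l; int array_len; } u;
} value;

/* Deserializer stand-in: the attacker chooses which type comes back. */
static value deserialize(const char *buf)
{
    value v;
    if (buf[0] == 'a') { v.type = T_ARRAY; v.u.array_len = 3; }
    else               { v.type = T_LONG;  v.u.l = 0x41414141; }
    return v;
}

int main(void)
{
    value members = deserialize("i:1094795585");
    /* The fix: check the tag before treating the union as an array;
     * the vulnerable code used it as an array unconditionally. */
    if (members.type != T_ARRAY) {
        fprintf(stderr, "unexpected type, rejecting input\n");
        return 1;
    }
    printf("array with %d entries\n", members.u.array_len);
    return 0;
}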
null
null
null
https://github.com/chromium/chromium/commit/1da0daecc540238cb473f0d6322da51d3a544244
1da0daecc540238cb473f0d6322da51d3a544244
Change VideoDecoder::ReadCB to take const scoped_refptr<VideoFrame>&. BUG=none TEST=media_unittests, media layout tests. Review URL: https://chromiumcodereview.appspot.com/10559074 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@143192 0039d316-1c4b-4281-b951-d872f2087c98
NullVideoFrame() {}
NullVideoFrame() {}
C
Chrome
0
null
null
null
https://github.com/chromium/chromium/commit/d193f6bb5aa5bdc05e07f314abacf7d7bc466d3d
d193f6bb5aa5bdc05e07f314abacf7d7bc466d3d
cc: Make the PictureLayerImpl raster source null until commit. No need to make a raster source that is never used. R=enne, vmpstr BUG=387116 Review URL: https://codereview.chromium.org/809433003 Cr-Commit-Position: refs/heads/master@{#308466}
void AssertAllTilesRequired(PictureLayerTiling* tiling) { std::vector<Tile*> tiles = tiling->AllTilesForTesting(); for (size_t i = 0; i < tiles.size(); ++i) EXPECT_TRUE(tiles[i]->required_for_activation()) << "i: " << i; EXPECT_GT(tiles.size(), 0u); }
void AssertAllTilesRequired(PictureLayerTiling* tiling) { std::vector<Tile*> tiles = tiling->AllTilesForTesting(); for (size_t i = 0; i < tiles.size(); ++i) EXPECT_TRUE(tiles[i]->required_for_activation()) << "i: " << i; EXPECT_GT(tiles.size(), 0u); }
C
Chrome
0
CVE-2013-3235
https://www.cvedetails.com/cve/CVE-2013-3235/
CWE-200
https://github.com/torvalds/linux/commit/60085c3d009b0df252547adb336d1ccca5ce52ec
60085c3d009b0df252547adb336d1ccca5ce52ec
tipc: fix info leaks via msg_name in recv_msg/recv_stream The code in set_orig_addr() does not initialize all of the members of struct sockaddr_tipc when filling the sockaddr info -- namely the union is only partly filled. This will make recv_msg() and recv_stream() -- the only users of this function -- leak kernel stack memory as the msg_name member is a local variable in net/socket.c. In addition, both recv_msg() and recv_stream() fail to update the msg_namelen member to 0 while otherwise returning with 0, i.e. "success". This is the case for, e.g., non-blocking sockets. This will lead to a 128-byte kernel stack leak in net/socket.c. Fix the first issue by initializing the memory of the union with memset(0). Fix the second one by setting msg_namelen to 0 early as it will be updated later if we're going to fill the msg_name member. Cc: Jon Maloy <[email protected]> Cc: Allan Stephens <[email protected]> Signed-off-by: Mathias Krause <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static int setsockopt(struct socket *sock, int lvl, int opt, char __user *ov, unsigned int ol) { struct sock *sk = sock->sk; struct tipc_port *tport = tipc_sk_port(sk); u32 value; int res; if ((lvl == IPPROTO_TCP) && (sock->type == SOCK_STREAM)) return 0; if (lvl != SOL_TIPC) return -ENOPROTOOPT; if (ol < sizeof(value)) return -EINVAL; res = get_user(value, (u32 __user *)ov); if (res) return res; lock_sock(sk); switch (opt) { case TIPC_IMPORTANCE: res = tipc_set_portimportance(tport->ref, value); break; case TIPC_SRC_DROPPABLE: if (sock->type != SOCK_STREAM) res = tipc_set_portunreliable(tport->ref, value); else res = -ENOPROTOOPT; break; case TIPC_DEST_DROPPABLE: res = tipc_set_portunreturnable(tport->ref, value); break; case TIPC_CONN_TIMEOUT: tipc_sk(sk)->conn_timeout = value; /* no need to set "res", since already 0 at this point */ break; default: res = -EINVAL; } release_sock(sk); return res; }
static int setsockopt(struct socket *sock, int lvl, int opt, char __user *ov, unsigned int ol) { struct sock *sk = sock->sk; struct tipc_port *tport = tipc_sk_port(sk); u32 value; int res; if ((lvl == IPPROTO_TCP) && (sock->type == SOCK_STREAM)) return 0; if (lvl != SOL_TIPC) return -ENOPROTOOPT; if (ol < sizeof(value)) return -EINVAL; res = get_user(value, (u32 __user *)ov); if (res) return res; lock_sock(sk); switch (opt) { case TIPC_IMPORTANCE: res = tipc_set_portimportance(tport->ref, value); break; case TIPC_SRC_DROPPABLE: if (sock->type != SOCK_STREAM) res = tipc_set_portunreliable(tport->ref, value); else res = -ENOPROTOOPT; break; case TIPC_DEST_DROPPABLE: res = tipc_set_portunreturnable(tport->ref, value); break; case TIPC_CONN_TIMEOUT: tipc_sk(sk)->conn_timeout = value; /* no need to set "res", since already 0 at this point */ break; default: res = -EINVAL; } release_sock(sk); return res; }
C
linux
0
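The leak described above is the classic partially-initialized structure copied out to userspace: only some members of the sockaddr union are written, so the remaining bytes are whatever was on the kernel stack. A userspace illustration of the pattern and of the memset() fix, using a simplified stand-in for struct sockaddr_tipc:

#include <stdio.h>
#include <string.h>

struct tipc_addr {                 /* simplified sockaddr_tipc stand-in */
    unsigned short family;
    unsigned char  addrtype;
    union {
        unsigned int id[3];
        char         name[16];
    } addr;
};

static void fill_leaky(struct tipc_addr *a)
{
    /* Only two members are written; the rest of the union keeps the
     * stale bytes that happened to be on the stack. */
    a->family = 30;                /* AF_TIPC */
    a->addr.id[0] = 0x1234;
}

static void fill_fixed(struct tipc_addr *a)
{
    memset(a, 0, sizeof(*a));      /* the fix: zero everything first */
    a->family = 30;
    a->addr.id[0] = 0x1234;
}

int main(void)
{
    struct tipc_addr leaky, clean;
    fill_leaky(&leaky);            /* bytes past addr.id[0] stay stale */
    fill_fixed(&clean);
    printf("clean tail byte: %02x\n", (unsigned char)clean.addr.name[8]);
    return 0;
}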
CVE-2011-4131
https://www.cvedetails.com/cve/CVE-2011-4131/
CWE-189
https://github.com/torvalds/linux/commit/bf118a342f10dafe44b14451a1392c3254629a1f
bf118a342f10dafe44b14451a1392c3254629a1f
NFSv4: include bitmap in nfsv4 get acl data The NFSv4 bitmap size is unbounded: a server can return an arbitrary sized bitmap in an FATTR4_WORD0_ACL request. Replace using the nfs4_fattr_bitmap_maxsz as a guess to the maximum bitmask returned by a server with the inclusion of the bitmap (xdr length plus bitmasks) and the acl data xdr length to the (cached) acl page data. This is a general solution to commit e5012d1f "NFSv4.1: update nfs4_fattr_bitmap_maxsz" and fixes hitting a BUG_ON in xdr_shrink_bufhead when getting ACLs. Fix a bug in decode_getacl that returned -EINVAL on ACLs > page when getxattr was called with a NULL buffer, preventing ACL > PAGE_SIZE from being retrieved. Cc: [email protected] Signed-off-by: Andy Adamson <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
_copy_to_pages(struct page **pages, size_t pgbase, const char *p, size_t len) { struct page **pgto; char *vto; size_t copy; pgto = pages + (pgbase >> PAGE_CACHE_SHIFT); pgbase &= ~PAGE_CACHE_MASK; for (;;) { copy = PAGE_CACHE_SIZE - pgbase; if (copy > len) copy = len; vto = kmap_atomic(*pgto, KM_USER0); memcpy(vto + pgbase, p, copy); kunmap_atomic(vto, KM_USER0); len -= copy; if (len == 0) break; pgbase += copy; if (pgbase == PAGE_CACHE_SIZE) { flush_dcache_page(*pgto); pgbase = 0; pgto++; } p += copy; } flush_dcache_page(*pgto); }
_copy_to_pages(struct page **pages, size_t pgbase, const char *p, size_t len) { struct page **pgto; char *vto; size_t copy; pgto = pages + (pgbase >> PAGE_CACHE_SHIFT); pgbase &= ~PAGE_CACHE_MASK; for (;;) { copy = PAGE_CACHE_SIZE - pgbase; if (copy > len) copy = len; vto = kmap_atomic(*pgto, KM_USER0); memcpy(vto + pgbase, p, copy); kunmap_atomic(vto, KM_USER0); len -= copy; if (len == 0) break; pgbase += copy; if (pgbase == PAGE_CACHE_SIZE) { flush_dcache_page(*pgto); pgbase = 0; pgto++; } p += copy; } flush_dcache_page(*pgto); }
C
linux
0
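One part of the fix above is letting decode_getacl succeed when the caller passes a NULL buffer, because that is how getxattr() consumers probe for the required size before allocating. The standard two-call pattern on Linux, runnable as-is against any file carrying a POSIX ACL xattr:

#include <stdio.h>
#include <stdlib.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";

    /* First call with a NULL buffer: the kernel reports the size the
     * attribute currently needs (the case the NFS fix repaired). */
    ssize_t len = getxattr(path, "system.posix_acl_access", NULL, 0);
    if (len < 0) { perror("getxattr(size probe)"); return 1; }

    char *buf = malloc((size_t)len);
    if (buf == NULL) return 1;

    /* Second call fetches the ACL into the correctly sized buffer. */
    len = getxattr(path, "system.posix_acl_access", buf, (size_t)len);
    if (len < 0) { perror("getxattr"); free(buf); return 1; }

    printf("fetched %zd bytes of ACL xattr\n", len);
    free(buf);
    return 0;
}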
CVE-2012-3552
https://www.cvedetails.com/cve/CVE-2012-3552/
CWE-362
https://github.com/torvalds/linux/commit/f6d8bd051c391c1c0458a30b2a7abcd939329259
f6d8bd051c391c1c0458a30b2a7abcd939329259
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options. The problem is that ip_make_skb() calls ip_setup_cork() and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change the inet->opt pointer and free the old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We can't insert an rcu_head in struct ip_options since it's included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static inline u32 inet_synq_hash(const __be32 raddr, const __be16 rport, const u32 rnd, const u32 synq_hsize) { return jhash_2words((__force u32)raddr, (__force u32)rport, rnd) & (synq_hsize - 1); }
static inline u32 inet_synq_hash(const __be32 raddr, const __be16 rport, const u32 rnd, const u32 synq_hsize) { return jhash_2words((__force u32)raddr, (__force u32)rport, rnd) & (synq_hsize - 1); }
C
linux
0
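The core idea of the commit above is to stop sharing the options object with readers and instead hand each reader a private copy made while the pointer is protected. Real RCU needs kernel (or liburcu) support, so this sketch substitutes a pthread mutex for the read-side protection; the names are hypothetical, and the point is only that no reader retains the shared pointer, so the writer can free the old options safely:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ip_opts { size_t optlen; unsigned char data[40]; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct ip_opts *shared_opts;     /* inet->inet_opt stand-in */

/* Reader: copy the options while protected, then use the copy freely. */
static void read_opts(struct ip_opts *copy)
{
    pthread_mutex_lock(&lock);          /* stands in for rcu_read_lock() */
    if (shared_opts)
        memcpy(copy, shared_opts, sizeof(*copy));
    else
        copy->optlen = 0;
    pthread_mutex_unlock(&lock);
    /* ... build the packet from *copy; no shared pointer is held ... */
}

/* Writer: publish new options and free the old ones. */
static void set_opts(struct ip_opts *newopts)
{
    pthread_mutex_lock(&lock);
    struct ip_opts *old = shared_opts;
    shared_opts = newopts;              /* stands in for rcu_assign_pointer() */
    pthread_mutex_unlock(&lock);
    free(old);                          /* safe: no reader kept a reference */
}

int main(void)
{
    struct ip_opts *o = calloc(1, sizeof(*o));
    if (o == NULL) return 1;
    o->optlen = 4;
    set_opts(o);
    struct ip_opts copy;
    read_opts(&copy);
    printf("copied optlen = %zu\n", copy.optlen);
    set_opts(NULL);                     /* drop and free the options */
    return 0;
}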
CVE-2016-9588
https://www.cvedetails.com/cve/CVE-2016-9588/
CWE-388
https://github.com/torvalds/linux/commit/ef85b67385436ddc1998f45f1d6a210f935b3388
ef85b67385436ddc1998f45f1d6a210f935b3388
kvm: nVMX: Allow L1 to intercept software exceptions (#BP and #OF) When L2 exits to L0 due to "exception or NMI", software exceptions (#BP and #OF) for which L1 has requested an intercept should be handled by L1 rather than L0. Previously, only hardware exceptions were forwarded to L1. Signed-off-by: Jim Mattson <[email protected]> Cc: [email protected] Signed-off-by: Paolo Bonzini <[email protected]>
static int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data) { struct vcpu_vmx *vmx = to_vmx(vcpu); switch (msr_index) { case MSR_IA32_VMX_BASIC: return vmx_restore_vmx_basic(vmx, data); case MSR_IA32_VMX_PINBASED_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS: case MSR_IA32_VMX_EXIT_CTLS: case MSR_IA32_VMX_ENTRY_CTLS: /* * The "non-true" VMX capability MSRs are generated from the * "true" MSRs, so we do not support restoring them directly. * * If userspace wants to emulate VMX_BASIC[55]=0, userspace * should restore the "true" MSRs with the must-be-1 bits * set according to the SDM Vol 3. A.2 "RESERVED CONTROLS AND * DEFAULT SETTINGS". */ return -EINVAL; case MSR_IA32_VMX_TRUE_PINBASED_CTLS: case MSR_IA32_VMX_TRUE_PROCBASED_CTLS: case MSR_IA32_VMX_TRUE_EXIT_CTLS: case MSR_IA32_VMX_TRUE_ENTRY_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS2: return vmx_restore_control_msr(vmx, msr_index, data); case MSR_IA32_VMX_MISC: return vmx_restore_vmx_misc(vmx, data); case MSR_IA32_VMX_CR0_FIXED0: case MSR_IA32_VMX_CR4_FIXED0: return vmx_restore_fixed0_msr(vmx, msr_index, data); case MSR_IA32_VMX_CR0_FIXED1: case MSR_IA32_VMX_CR4_FIXED1: /* * These MSRs are generated based on the vCPU's CPUID, so we * do not support restoring them directly. */ return -EINVAL; case MSR_IA32_VMX_EPT_VPID_CAP: return vmx_restore_vmx_ept_vpid_cap(vmx, data); case MSR_IA32_VMX_VMCS_ENUM: vmx->nested.nested_vmx_vmcs_enum = data; return 0; default: /* * The rest of the VMX capability MSRs do not support restore. */ return -EINVAL; } }
static int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data) { struct vcpu_vmx *vmx = to_vmx(vcpu); switch (msr_index) { case MSR_IA32_VMX_BASIC: return vmx_restore_vmx_basic(vmx, data); case MSR_IA32_VMX_PINBASED_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS: case MSR_IA32_VMX_EXIT_CTLS: case MSR_IA32_VMX_ENTRY_CTLS: /* * The "non-true" VMX capability MSRs are generated from the * "true" MSRs, so we do not support restoring them directly. * * If userspace wants to emulate VMX_BASIC[55]=0, userspace * should restore the "true" MSRs with the must-be-1 bits * set according to the SDM Vol 3. A.2 "RESERVED CONTROLS AND * DEFAULT SETTINGS". */ return -EINVAL; case MSR_IA32_VMX_TRUE_PINBASED_CTLS: case MSR_IA32_VMX_TRUE_PROCBASED_CTLS: case MSR_IA32_VMX_TRUE_EXIT_CTLS: case MSR_IA32_VMX_TRUE_ENTRY_CTLS: case MSR_IA32_VMX_PROCBASED_CTLS2: return vmx_restore_control_msr(vmx, msr_index, data); case MSR_IA32_VMX_MISC: return vmx_restore_vmx_misc(vmx, data); case MSR_IA32_VMX_CR0_FIXED0: case MSR_IA32_VMX_CR4_FIXED0: return vmx_restore_fixed0_msr(vmx, msr_index, data); case MSR_IA32_VMX_CR0_FIXED1: case MSR_IA32_VMX_CR4_FIXED1: /* * These MSRs are generated based on the vCPU's CPUID, so we * do not support restoring them directly. */ return -EINVAL; case MSR_IA32_VMX_EPT_VPID_CAP: return vmx_restore_vmx_ept_vpid_cap(vmx, data); case MSR_IA32_VMX_VMCS_ENUM: vmx->nested.nested_vmx_vmcs_enum = data; return 0; default: /* * The rest of the VMX capability MSRs do not support restore. */ return -EINVAL; } }
C
linux
0
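The behavioral change above boils down to which exceptions consult L1's intercept bitmap: previously only hardware exceptions did, while software exceptions (#BP, vector 3, and #OF, vector 4) were always handled by L0. A conceptual sketch of the decision, with the vectors taken from the architecture and everything else a simplified stand-in for the real nested-exit logic:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BP_VECTOR 3     /* #BP, raised by INT3 */
#define OF_VECTOR 4     /* #OF, raised by INTO */

/* Old logic: software exceptions never reached L1. */
static bool forward_to_l1_old(uint32_t bitmap, int vector, bool is_hw)
{
    return is_hw && ((bitmap >> vector) & 1);
}

/* Fixed logic: the intercept bitmap is honoured either way. */
static bool forward_to_l1_new(uint32_t bitmap, int vector, bool is_hw)
{
    (void)is_hw;
    return (bitmap >> vector) & 1;
}

int main(void)
{
    uint32_t l1_bitmap = 1u << BP_VECTOR;   /* L1 intercepts #BP */
    printf("old: %d, new: %d\n",
           forward_to_l1_old(l1_bitmap, BP_VECTOR, false),
           forward_to_l1_new(l1_bitmap, BP_VECTOR, false));
    return 0;
}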
CVE-2016-1618
https://www.cvedetails.com/cve/CVE-2016-1618/
CWE-310
https://github.com/chromium/chromium/commit/0d151e09e13a704e9738ea913d117df7282e6c7d
0d151e09e13a704e9738ea913d117df7282e6c7d
Add assertions that the empty Platform::cryptographicallyRandomValues() overrides are not being used. These implementations are not safe and look scary if not accompanied by an assertion. Also one of the comments was incorrect. BUG=552749 Review URL: https://codereview.chromium.org/1419293005 Cr-Commit-Position: refs/heads/master@{#359229}
~ProxyPlatform() { blink::Platform::initialize(m_platform); }
~ProxyPlatform() { blink::Platform::initialize(m_platform); }
C
Chrome
0
CVE-2016-5350
https://www.cvedetails.com/cve/CVE-2016-5350/
CWE-399
https://github.com/wireshark/wireshark/commit/b4d16b4495b732888e12baf5b8a7e9bf2665e22b
b4d16b4495b732888e12baf5b8a7e9bf2665e22b
SPOOLSS: Try to avoid an infinite loop. Use tvb_reported_length_remaining in dissect_spoolss_uint16uni. Make sure our offset always increments in dissect_spoolss_keybuffer. Change-Id: I7017c9685bb2fa27161d80a03b8fca4ef630e793 Reviewed-on: https://code.wireshark.org/review/14687 Reviewed-by: Gerald Combs <[email protected]> Petri-Dish: Gerald Combs <[email protected]> Tested-by: Petri Dish Buildbot <[email protected]> Reviewed-by: Michael Mann <[email protected]>
SpoolssEnumForms_r(tvbuff_t *tvb, int offset, packet_info *pinfo, proto_tree *tree, dcerpc_info *di, guint8 *drep _U_) { dcerpc_call_value *dcv = (dcerpc_call_value *)di->call_data; BUFFER buffer; guint32 level = GPOINTER_TO_UINT(dcv->se_data), i, count; int buffer_offset; proto_item *hidden_item; hidden_item = proto_tree_add_uint( tree, hf_form, tvb, offset, 0, 1); PROTO_ITEM_SET_HIDDEN(hidden_item); /* Parse packet */ offset = dissect_spoolss_buffer( tvb, offset, pinfo, tree, di, drep, &buffer); offset = dissect_ndr_uint32( tvb, offset, pinfo, tree, di, drep, hf_needed, NULL); col_append_fstr(pinfo->cinfo, COL_INFO, ", level %d", level); offset = dissect_ndr_uint32( tvb, offset, pinfo, tree, di, drep, hf_enumforms_num, &count); /* Unfortunately this array isn't in NDR format so we can't use prs_array(). The other weird thing is the struct_start being inside the loop rather than outside. Very strange. */ buffer_offset = 0; for (i = 0; i < count; i++) { int struct_start = buffer_offset; buffer_offset = dissect_FORM_REL( buffer.tvb, buffer_offset, pinfo, buffer.tree, di, drep, struct_start); } offset = dissect_doserror( tvb, offset, pinfo, tree, di, drep, hf_rc, NULL); return offset; }
SpoolssEnumForms_r(tvbuff_t *tvb, int offset, packet_info *pinfo, proto_tree *tree, dcerpc_info *di, guint8 *drep _U_) { dcerpc_call_value *dcv = (dcerpc_call_value *)di->call_data; BUFFER buffer; guint32 level = GPOINTER_TO_UINT(dcv->se_data), i, count; int buffer_offset; proto_item *hidden_item; hidden_item = proto_tree_add_uint( tree, hf_form, tvb, offset, 0, 1); PROTO_ITEM_SET_HIDDEN(hidden_item); /* Parse packet */ offset = dissect_spoolss_buffer( tvb, offset, pinfo, tree, di, drep, &buffer); offset = dissect_ndr_uint32( tvb, offset, pinfo, tree, di, drep, hf_needed, NULL); col_append_fstr(pinfo->cinfo, COL_INFO, ", level %d", level); offset = dissect_ndr_uint32( tvb, offset, pinfo, tree, di, drep, hf_enumforms_num, &count); /* Unfortunately this array isn't in NDR format so we can't use prs_array(). The other weird thing is the struct_start being inside the loop rather than outside. Very strange. */ buffer_offset = 0; for (i = 0; i < count; i++) { int struct_start = buffer_offset; buffer_offset = dissect_FORM_REL( buffer.tvb, buffer_offset, pinfo, buffer.tree, di, drep, struct_start); } offset = dissect_doserror( tvb, offset, pinfo, tree, di, drep, hf_rc, NULL); return offset; }
C
wireshark
0
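The infinite-loop fix in this record is an instance of the forward-progress rule for dissector loops: if a sub-dissector can return without advancing the offset (for example on a zero-length element), the loop must notice and bail out. A minimal sketch with a hypothetical dissect_item() standing in for the real Wireshark routines:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sub-dissector: returns the new offset; a malformed
 * packet can make it equal to the old one. */
static int dissect_item(const uint8_t *buf, size_t len, int off)
{
    if (off >= (int)len)
        return off;
    int item_len = buf[off];       /* attacker-controlled length byte */
    return off + item_len;         /* 0 means "no progress" */
}

int main(void)
{
    const uint8_t pkt[] = { 2, 0xaa, 0, 3, 0xbb, 0xcc };
    int off = 0;
    while (off < (int)sizeof(pkt)) {
        int next = dissect_item(pkt, sizeof(pkt), off);
        if (next <= off) {          /* the fix: insist on forward progress */
            fprintf(stderr, "stalled at offset %d, aborting\n", off);
            break;
        }
        off = next;
    }
    return 0;
}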
CVE-2018-17407
https://www.cvedetails.com/cve/CVE-2018-17407/
CWE-119
https://github.com/TeX-Live/texlive-source/commit/6ed0077520e2b0da1fd060c7f88db7b2e6068e4c
6ed0077520e2b0da1fd060c7f88db7b2e6068e4c
writet1 protection against buffer overflow git-svn-id: svn://tug.org/texlive/trunk/Build/source@48697 c570f23f-e606-0410-a88d-b1316a301751
static float t1_scan_num(char *p, char **r) { float f; skip(p, ' '); if (sscanf(p, "%g", &f) != 1) { remove_eol(p, t1_line_array); pdftex_fail("a number expected: `%s'", t1_line_array); } if (r != NULL) { for (; isdigit((unsigned char)*p) || *p == '.' || *p == 'e' || *p == 'E' || *p == '+' || *p == '-'; p++); *r = p; } return f; }
static float t1_scan_num(char *p, char **r) { float f; skip(p, ' '); if (sscanf(p, "%g", &f) != 1) { remove_eol(p, t1_line_array); pdftex_fail("a number expected: `%s'", t1_line_array); } if (r != NULL) { for (; isdigit((unsigned char)*p) || *p == '.' || *p == 'e' || *p == 'E' || *p == '+' || *p == '-'; p++); *r = p; } return f; }
C
texlive-source
0
CVE-2017-9374
https://www.cvedetails.com/cve/CVE-2017-9374/
CWE-772
https://git.qemu.org/?p=qemu.git;a=commit;h=d710e1e7bd3d5bfc26b631f02ae87901ebe646b0
d710e1e7bd3d5bfc26b631f02ae87901ebe646b0
null
static inline bool ehci_async_enabled(EHCIState *s) { return ehci_enabled(s) && (s->usbcmd & USBCMD_ASE); }
static inline bool ehci_async_enabled(EHCIState *s) { return ehci_enabled(s) && (s->usbcmd & USBCMD_ASE); }
C
qemu
0
CVE-2019-13302
https://www.cvedetails.com/cve/CVE-2019-13302/
CWE-125
https://github.com/ImageMagick/ImageMagick/commit/d5089971bd792311aaab5cb73460326d7ef7f32d
d5089971bd792311aaab5cb73460326d7ef7f32d
https://github.com/ImageMagick/ImageMagick/issues/1597
static MagickBooleanType InverseFourier(FourierInfo *fourier_info, const Image *magnitude_image,const Image *phase_image, fftw_complex *fourier_pixels,ExceptionInfo *exception) { CacheView *magnitude_view, *phase_view; double *inverse_pixels, *magnitude_pixels, *phase_pixels; MagickBooleanType status; MemoryInfo *inverse_info, *magnitude_info, *phase_info; register const Quantum *p; register ssize_t i, x; ssize_t y; /* Inverse fourier - read image and break down into a double array. */ magnitude_info=AcquireVirtualMemory((size_t) fourier_info->width, fourier_info->height*sizeof(*magnitude_pixels)); phase_info=AcquireVirtualMemory((size_t) fourier_info->width, fourier_info->height*sizeof(*phase_pixels)); inverse_info=AcquireVirtualMemory((size_t) fourier_info->width, (fourier_info->height/2+1)*sizeof(*inverse_pixels)); if ((magnitude_info == (MemoryInfo *) NULL) || (phase_info == (MemoryInfo *) NULL) || (inverse_info == (MemoryInfo *) NULL)) { if (magnitude_info != (MemoryInfo *) NULL) magnitude_info=RelinquishVirtualMemory(magnitude_info); if (phase_info != (MemoryInfo *) NULL) phase_info=RelinquishVirtualMemory(phase_info); if (inverse_info != (MemoryInfo *) NULL) inverse_info=RelinquishVirtualMemory(inverse_info); (void) ThrowMagickException(exception,GetMagickModule(), ResourceLimitError,"MemoryAllocationFailed","`%s'", magnitude_image->filename); return(MagickFalse); } magnitude_pixels=(double *) GetVirtualMemoryBlob(magnitude_info); phase_pixels=(double *) GetVirtualMemoryBlob(phase_info); inverse_pixels=(double *) GetVirtualMemoryBlob(inverse_info); i=0L; magnitude_view=AcquireVirtualCacheView(magnitude_image,exception); for (y=0L; y < (ssize_t) fourier_info->height; y++) { p=GetCacheViewVirtualPixels(magnitude_view,0L,y,fourier_info->width,1UL, exception); if (p == (const Quantum *) NULL) break; for (x=0L; x < (ssize_t) fourier_info->width; x++) { switch (fourier_info->channel) { case RedPixelChannel: default: { magnitude_pixels[i]=QuantumScale*GetPixelRed(magnitude_image,p); break; } case GreenPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelGreen(magnitude_image,p); break; } case BluePixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelBlue(magnitude_image,p); break; } case BlackPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelBlack(magnitude_image,p); break; } case AlphaPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelAlpha(magnitude_image,p); break; } } i++; p+=GetPixelChannels(magnitude_image); } } magnitude_view=DestroyCacheView(magnitude_view); status=InverseQuadrantSwap(fourier_info->width,fourier_info->height, magnitude_pixels,inverse_pixels); (void) memcpy(magnitude_pixels,inverse_pixels,fourier_info->height* fourier_info->center*sizeof(*magnitude_pixels)); i=0L; phase_view=AcquireVirtualCacheView(phase_image,exception); for (y=0L; y < (ssize_t) fourier_info->height; y++) { p=GetCacheViewVirtualPixels(phase_view,0,y,fourier_info->width,1, exception); if (p == (const Quantum *) NULL) break; for (x=0L; x < (ssize_t) fourier_info->width; x++) { switch (fourier_info->channel) { case RedPixelChannel: default: { phase_pixels[i]=QuantumScale*GetPixelRed(phase_image,p); break; } case GreenPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelGreen(phase_image,p); break; } case BluePixelChannel: { phase_pixels[i]=QuantumScale*GetPixelBlue(phase_image,p); break; } case BlackPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelBlack(phase_image,p); break; } case AlphaPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelAlpha(phase_image,p); break; } } i++; p+=GetPixelChannels(phase_image); } } if (fourier_info->modulus != MagickFalse) { i=0L; for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->width; x++) { phase_pixels[i]-=0.5; phase_pixels[i]*=(2.0*MagickPI); i++; } } phase_view=DestroyCacheView(phase_view); CorrectPhaseLHS(fourier_info->width,fourier_info->height,phase_pixels); if (status != MagickFalse) status=InverseQuadrantSwap(fourier_info->width,fourier_info->height, phase_pixels,inverse_pixels); (void) memcpy(phase_pixels,inverse_pixels,fourier_info->height* fourier_info->center*sizeof(*phase_pixels)); inverse_info=RelinquishVirtualMemory(inverse_info); /* Merge two sets. */ i=0L; if (fourier_info->modulus != MagickFalse) for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->center; x++) { #if defined(MAGICKCORE_HAVE_COMPLEX_H) fourier_pixels[i]=magnitude_pixels[i]*cos(phase_pixels[i])+I* magnitude_pixels[i]*sin(phase_pixels[i]); #else fourier_pixels[i][0]=magnitude_pixels[i]*cos(phase_pixels[i]); fourier_pixels[i][1]=magnitude_pixels[i]*sin(phase_pixels[i]); #endif i++; } else for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->center; x++) { #if defined(MAGICKCORE_HAVE_COMPLEX_H) fourier_pixels[i]=magnitude_pixels[i]+I*phase_pixels[i]; #else fourier_pixels[i][0]=magnitude_pixels[i]; fourier_pixels[i][1]=phase_pixels[i]; #endif i++; } magnitude_info=RelinquishVirtualMemory(magnitude_info); phase_info=RelinquishVirtualMemory(phase_info); return(status); }
static MagickBooleanType InverseFourier(FourierInfo *fourier_info, const Image *magnitude_image,const Image *phase_image, fftw_complex *fourier_pixels,ExceptionInfo *exception) { CacheView *magnitude_view, *phase_view; double *inverse_pixels, *magnitude_pixels, *phase_pixels; MagickBooleanType status; MemoryInfo *inverse_info, *magnitude_info, *phase_info; register const Quantum *p; register ssize_t i, x; ssize_t y; /* Inverse fourier - read image and break down into a double array. */ magnitude_info=AcquireVirtualMemory((size_t) fourier_info->width, fourier_info->height*sizeof(*magnitude_pixels)); phase_info=AcquireVirtualMemory((size_t) fourier_info->width, fourier_info->height*sizeof(*phase_pixels)); inverse_info=AcquireVirtualMemory((size_t) fourier_info->width, (fourier_info->height/2+1)*sizeof(*inverse_pixels)); if ((magnitude_info == (MemoryInfo *) NULL) || (phase_info == (MemoryInfo *) NULL) || (inverse_info == (MemoryInfo *) NULL)) { if (magnitude_info != (MemoryInfo *) NULL) magnitude_info=RelinquishVirtualMemory(magnitude_info); if (phase_info != (MemoryInfo *) NULL) phase_info=RelinquishVirtualMemory(phase_info); if (inverse_info != (MemoryInfo *) NULL) inverse_info=RelinquishVirtualMemory(inverse_info); (void) ThrowMagickException(exception,GetMagickModule(), ResourceLimitError,"MemoryAllocationFailed","`%s'", magnitude_image->filename); return(MagickFalse); } magnitude_pixels=(double *) GetVirtualMemoryBlob(magnitude_info); phase_pixels=(double *) GetVirtualMemoryBlob(phase_info); inverse_pixels=(double *) GetVirtualMemoryBlob(inverse_info); i=0L; magnitude_view=AcquireVirtualCacheView(magnitude_image,exception); for (y=0L; y < (ssize_t) fourier_info->height; y++) { p=GetCacheViewVirtualPixels(magnitude_view,0L,y,fourier_info->width,1UL, exception); if (p == (const Quantum *) NULL) break; for (x=0L; x < (ssize_t) fourier_info->width; x++) { switch (fourier_info->channel) { case RedPixelChannel: default: { magnitude_pixels[i]=QuantumScale*GetPixelRed(magnitude_image,p); break; } case GreenPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelGreen(magnitude_image,p); break; } case BluePixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelBlue(magnitude_image,p); break; } case BlackPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelBlack(magnitude_image,p); break; } case AlphaPixelChannel: { magnitude_pixels[i]=QuantumScale*GetPixelAlpha(magnitude_image,p); break; } } i++; p+=GetPixelChannels(magnitude_image); } } magnitude_view=DestroyCacheView(magnitude_view); status=InverseQuadrantSwap(fourier_info->width,fourier_info->height, magnitude_pixels,inverse_pixels); (void) memcpy(magnitude_pixels,inverse_pixels,fourier_info->height* fourier_info->center*sizeof(*magnitude_pixels)); i=0L; phase_view=AcquireVirtualCacheView(phase_image,exception); for (y=0L; y < (ssize_t) fourier_info->height; y++) { p=GetCacheViewVirtualPixels(phase_view,0,y,fourier_info->width,1, exception); if (p == (const Quantum *) NULL) break; for (x=0L; x < (ssize_t) fourier_info->width; x++) { switch (fourier_info->channel) { case RedPixelChannel: default: { phase_pixels[i]=QuantumScale*GetPixelRed(phase_image,p); break; } case GreenPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelGreen(phase_image,p); break; } case BluePixelChannel: { phase_pixels[i]=QuantumScale*GetPixelBlue(phase_image,p); break; } case BlackPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelBlack(phase_image,p); break; } case AlphaPixelChannel: { phase_pixels[i]=QuantumScale*GetPixelAlpha(phase_image,p); break; } } i++; p+=GetPixelChannels(phase_image); } } if (fourier_info->modulus != MagickFalse) { i=0L; for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->width; x++) { phase_pixels[i]-=0.5; phase_pixels[i]*=(2.0*MagickPI); i++; } } phase_view=DestroyCacheView(phase_view); CorrectPhaseLHS(fourier_info->width,fourier_info->height,phase_pixels); if (status != MagickFalse) status=InverseQuadrantSwap(fourier_info->width,fourier_info->height, phase_pixels,inverse_pixels); (void) memcpy(phase_pixels,inverse_pixels,fourier_info->height* fourier_info->center*sizeof(*phase_pixels)); inverse_info=RelinquishVirtualMemory(inverse_info); /* Merge two sets. */ i=0L; if (fourier_info->modulus != MagickFalse) for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->center; x++) { #if defined(MAGICKCORE_HAVE_COMPLEX_H) fourier_pixels[i]=magnitude_pixels[i]*cos(phase_pixels[i])+I* magnitude_pixels[i]*sin(phase_pixels[i]); #else fourier_pixels[i][0]=magnitude_pixels[i]*cos(phase_pixels[i]); fourier_pixels[i][1]=magnitude_pixels[i]*sin(phase_pixels[i]); #endif i++; } else for (y=0L; y < (ssize_t) fourier_info->height; y++) for (x=0L; x < (ssize_t) fourier_info->center; x++) { #if defined(MAGICKCORE_HAVE_COMPLEX_H) fourier_pixels[i]=magnitude_pixels[i]+I*phase_pixels[i]; #else fourier_pixels[i][0]=magnitude_pixels[i]; fourier_pixels[i][1]=phase_pixels[i]; #endif i++; } magnitude_info=RelinquishVirtualMemory(magnitude_info); phase_info=RelinquishVirtualMemory(phase_info); return(status); }
C
ImageMagick
0
CVE-2016-8666
https://www.cvedetails.com/cve/CVE-2016-8666/
CWE-400
https://github.com/torvalds/linux/commit/fac8e0f579695a3ecbc4d3cac369139d7f819971
fac8e0f579695a3ecbc4d3cac369139d7f819971
tunnels: Don't apply GRO to multiple layers of encapsulation. When drivers express support for TSO of encapsulated packets, they only mean that they can do it for one layer of encapsulation. Supporting additional levels would mean updating, at a minimum, more IP length fields and they are unaware of this. No encapsulation device expresses support for handling offloaded encapsulated packets, so we won't generate these types of frames in the transmit path. However, GRO doesn't have a check for multiple levels of encapsulation and will attempt to build them. UDP tunnel GRO actually does prevent this situation but it only handles multiple UDP tunnels stacked on top of each other. This generalizes that solution to prevent any kind of tunnel stacking that would cause problems. Fixes: bf5a755f ("net-gre-gro: Add GRE support to the GRO stack") Signed-off-by: Jesse Gross <[email protected]> Signed-off-by: David S. Miller <[email protected]>
int dev_get_phys_port_name(struct net_device *dev, char *name, size_t len) { const struct net_device_ops *ops = dev->netdev_ops; if (!ops->ndo_get_phys_port_name) return -EOPNOTSUPP; return ops->ndo_get_phys_port_name(dev, name, len); }
int dev_get_phys_port_name(struct net_device *dev, char *name, size_t len) { const struct net_device_ops *ops = dev->netdev_ops; if (!ops->ndo_get_phys_port_name) return -EOPNOTSUPP; return ops->ndo_get_phys_port_name(dev, name, len); }
C
linux
0
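The generalized solution described above amounts to a per-packet "already inside a tunnel" bit that each tunnel's GRO receive path must claim before aggregating. A sketch of the guard, where the struct and flag are simplified stand-ins for the kernel's sk_buff encapsulation marking:

#include <stdbool.h>
#include <stdio.h>

struct pkt {
    bool encap_mark;    /* set once the first tunnel layer is aggregated */
};

/* Called by each tunnel type's GRO receive path. */
static bool tunnel_gro_receive(struct pkt *p)
{
    if (p->encap_mark)          /* a second tunnel layer: refuse to GRO */
        return false;
    p->encap_mark = true;       /* claim the single supported layer */
    /* ... aggregate this layer ... */
    return true;
}

int main(void)
{
    struct pkt p = { false };
    printf("outer tunnel aggregated: %d\n", tunnel_gro_receive(&p));
    printf("inner tunnel aggregated: %d\n", tunnel_gro_receive(&p));
    return 0;
}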
CVE-2013-2884
https://www.cvedetails.com/cve/CVE-2013-2884/
CWE-399
https://github.com/chromium/chromium/commit/4ac8bc08e3306f38a5ab3e551aef6ad43753579c
4ac8bc08e3306f38a5ab3e551aef6ad43753579c
Set Attr.ownerDocument in Element#setAttributeNode() Attr objects can move across documents by setAttributeNode(). So it needs to reset ownerDocument through TreeScopeAdoptr::adoptIfNeeded(). BUG=248950 TEST=set-attribute-node-from-iframe.html Review URL: https://chromiumcodereview.appspot.com/17583003 git-svn-id: svn://svn.chromium.org/blink/trunk@152938 bbb929c8-8fbe-4397-9dbb-9b2b20218538
ShareableElementData::ShareableElementData(const UniqueElementData& other) : ElementData(other, false) { ASSERT(!other.m_presentationAttributeStyle); if (other.m_inlineStyle) { ASSERT(!other.m_inlineStyle->hasCSSOMWrapper()); m_inlineStyle = other.m_inlineStyle->immutableCopyIfNeeded(); } for (unsigned i = 0; i < m_arraySize; ++i) new (&m_attributeArray[i]) Attribute(other.m_attributeVector.at(i)); }
ShareableElementData::ShareableElementData(const UniqueElementData& other) : ElementData(other, false) { ASSERT(!other.m_presentationAttributeStyle); if (other.m_inlineStyle) { ASSERT(!other.m_inlineStyle->hasCSSOMWrapper()); m_inlineStyle = other.m_inlineStyle->immutableCopyIfNeeded(); } for (unsigned i = 0; i < m_arraySize; ++i) new (&m_attributeArray[i]) Attribute(other.m_attributeVector.at(i)); }
C
Chrome
0
CVE-2016-6520
https://www.cvedetails.com/cve/CVE-2016-6520/
CWE-125
https://github.com/ImageMagick/ImageMagick/commit/76401e172ea3a55182be2b8e2aca4d07270f6da6
76401e172ea3a55182be2b8e2aca4d07270f6da6
Evaluate lazy pixel cache morphology to prevent buffer overflow (bug report from Ibrahim M. El-Sayed)
static inline double gamma_pow(const double value,const double gamma) { return(value < 0.0 ? value : pow(value,gamma)); }
static inline double gamma_pow(const double value,const double gamma) { return(value < 0.0 ? value : pow(value,gamma)); }
C
ImageMagick
0
CVE-2018-16435
https://www.cvedetails.com/cve/CVE-2018-16435/
CWE-190
https://github.com/mm2/Little-CMS/commit/768f70ca405cd3159d990e962d54456773bb8cf8
768f70ca405cd3159d990e962d54456773bb8cf8
Upgrade Visual Studio 2017 15.8 - Upgrade to 15.8 - Add check on CGATS memory allocation (thanks to Quang Nguyen for pointing this out)
const char* CMSEXPORT cmsIT8GetDataRowCol(cmsHANDLE hIT8, int row, int col) { cmsIT8* it8 = (cmsIT8*) hIT8; _cmsAssert(hIT8 != NULL); return GetData(it8, row, col); }
const char* CMSEXPORT cmsIT8GetDataRowCol(cmsHANDLE hIT8, int row, int col) { cmsIT8* it8 = (cmsIT8*) hIT8; _cmsAssert(hIT8 != NULL); return GetData(it8, row, col); }
C
Little-CMS
0
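The CGATS change named in the commit message is the plain allocation-check discipline: test the result of every allocation before using it. A minimal sketch of the pattern (the helper name is illustrative, not Little-CMS code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *dup_field(const char *src)
{
    char *p = malloc(strlen(src) + 1);
    if (p == NULL)              /* the added check: fail cleanly on OOM */
        return NULL;
    strcpy(p, src);
    return p;
}

int main(void)
{
    char *field = dup_field("SAMPLE_ID");
    if (field == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    puts(field);
    free(field);
    return 0;
}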
CVE-2014-9666
https://www.cvedetails.com/cve/CVE-2014-9666/
CWE-189
https://git.savannah.gnu.org/cgit/freetype/freetype2.git/commit/?id=257c270bd25e15890190a28a1456e7623bba4439
257c270bd25e15890190a28a1456e7623bba4439
null
tt_sbit_decoder_load_byte_aligned( TT_SBitDecoder decoder, FT_Byte* p, FT_Byte* limit, FT_Int x_pos, FT_Int y_pos ) { FT_Error error = FT_Err_Ok; FT_Byte* line; FT_Int bit_height, bit_width, pitch, width, height, line_bits, h; FT_Bitmap* bitmap; /* check that we can write the glyph into the bitmap */ bitmap = decoder->bitmap; bit_width = bitmap->width; bit_height = bitmap->rows; pitch = bitmap->pitch; line = bitmap->buffer; width = decoder->metrics->width; height = decoder->metrics->height; line_bits = width * decoder->bit_depth; if ( x_pos < 0 || x_pos + width > bit_width || y_pos < 0 || y_pos + height > bit_height ) { FT_TRACE1(( "tt_sbit_decoder_load_byte_aligned:" " invalid bitmap dimensions\n" )); error = FT_THROW( Invalid_File_Format ); goto Exit; } if ( p + ( ( line_bits + 7 ) >> 3 ) * height > limit ) { FT_TRACE1(( "tt_sbit_decoder_load_byte_aligned: broken bitmap\n" )); error = FT_THROW( Invalid_File_Format ); goto Exit; } /* now do the blit */ line += y_pos * pitch + ( x_pos >> 3 ); x_pos &= 7; if ( x_pos == 0 ) /* the easy one */ { for ( h = height; h > 0; h--, line += pitch ) { FT_Byte* pwrite = line; FT_Int w; for ( w = line_bits; w >= 8; w -= 8 ) { pwrite[0] = (FT_Byte)( pwrite[0] | *p++ ); pwrite += 1; } if ( w > 0 ) pwrite[0] = (FT_Byte)( pwrite[0] | ( *p++ & ( 0xFF00U >> w ) ) ); } } else /* x_pos > 0 */ { for ( h = height; h > 0; h--, line += pitch ) { FT_Byte* pwrite = line; FT_Int w; FT_UInt wval = 0; for ( w = line_bits; w >= 8; w -= 8 ) { wval = (FT_UInt)( wval | *p++ ); pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); pwrite += 1; wval <<= 8; } if ( w > 0 ) wval = (FT_UInt)( wval | ( *p++ & ( 0xFF00U >> w ) ) ); /* all bits read and there are `x_pos + w' bits to be written */ pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); if ( x_pos + w > 8 ) { pwrite++; wval <<= 8; pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); } } } Exit: if ( !error ) FT_TRACE3(( "tt_sbit_decoder_load_byte_aligned: loaded\n" )); return error; }
tt_sbit_decoder_load_byte_aligned( TT_SBitDecoder decoder, FT_Byte* p, FT_Byte* limit, FT_Int x_pos, FT_Int y_pos ) { FT_Error error = FT_Err_Ok; FT_Byte* line; FT_Int bit_height, bit_width, pitch, width, height, line_bits, h; FT_Bitmap* bitmap; /* check that we can write the glyph into the bitmap */ bitmap = decoder->bitmap; bit_width = bitmap->width; bit_height = bitmap->rows; pitch = bitmap->pitch; line = bitmap->buffer; width = decoder->metrics->width; height = decoder->metrics->height; line_bits = width * decoder->bit_depth; if ( x_pos < 0 || x_pos + width > bit_width || y_pos < 0 || y_pos + height > bit_height ) { FT_TRACE1(( "tt_sbit_decoder_load_byte_aligned:" " invalid bitmap dimensions\n" )); error = FT_THROW( Invalid_File_Format ); goto Exit; } if ( p + ( ( line_bits + 7 ) >> 3 ) * height > limit ) { FT_TRACE1(( "tt_sbit_decoder_load_byte_aligned: broken bitmap\n" )); error = FT_THROW( Invalid_File_Format ); goto Exit; } /* now do the blit */ line += y_pos * pitch + ( x_pos >> 3 ); x_pos &= 7; if ( x_pos == 0 ) /* the easy one */ { for ( h = height; h > 0; h--, line += pitch ) { FT_Byte* pwrite = line; FT_Int w; for ( w = line_bits; w >= 8; w -= 8 ) { pwrite[0] = (FT_Byte)( pwrite[0] | *p++ ); pwrite += 1; } if ( w > 0 ) pwrite[0] = (FT_Byte)( pwrite[0] | ( *p++ & ( 0xFF00U >> w ) ) ); } } else /* x_pos > 0 */ { for ( h = height; h > 0; h--, line += pitch ) { FT_Byte* pwrite = line; FT_Int w; FT_UInt wval = 0; for ( w = line_bits; w >= 8; w -= 8 ) { wval = (FT_UInt)( wval | *p++ ); pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); pwrite += 1; wval <<= 8; } if ( w > 0 ) wval = (FT_UInt)( wval | ( *p++ & ( 0xFF00U >> w ) ) ); /* all bits read and there are `x_pos + w' bits to be written */ pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); if ( x_pos + w > 8 ) { pwrite++; wval <<= 8; pwrite[0] = (FT_Byte)( pwrite[0] | ( wval >> x_pos ) ); } } } Exit: if ( !error ) FT_TRACE3(( "tt_sbit_decoder_load_byte_aligned: loaded\n" )); return error; }
C
savannah
0
CVE-2015-6791
https://www.cvedetails.com/cve/CVE-2015-6791/
null
https://github.com/chromium/chromium/commit/7e995b26a5a503adefc0ad40435f7e16a45434c2
7e995b26a5a503adefc0ad40435f7e16a45434c2
Add a fake DriveFS launcher client. Using DriveFS requires building and deploying ChromeOS. Add a client for the fake DriveFS launcher to allow a real DriveFS from a ChromeOS chroot to be used with a target_os="chromeos" build of chrome. This connects to the fake DriveFS launcher using mojo over a unix domain socket named by a command-line flag, using the launcher to create DriveFS instances. Bug: 848126 Change-Id: I22dcca154d41bda196dd7c1782bb503f6bcba5b1 Reviewed-on: https://chromium-review.googlesource.com/1098434 Reviewed-by: Xiyuan Xia <[email protected]> Commit-Queue: Sam McNally <[email protected]> Cr-Commit-Position: refs/heads/master@{#567513}
bool WakeOnWifiEnabled() { return !base::CommandLine::ForCurrentProcess()->HasSwitch(kDisableWakeOnWifi); }
bool WakeOnWifiEnabled() { return !base::CommandLine::ForCurrentProcess()->HasSwitch(kDisableWakeOnWifi); }
C
Chrome
0
CVE-2011-4930
https://www.cvedetails.com/cve/CVE-2011-4930/
CWE-134
https://htcondor-git.cs.wisc.edu/?p=condor.git;a=commitdiff;h=5e5571d1a431eb3c61977b6dd6ec90186ef79867
5e5571d1a431eb3c61977b6dd6ec90186ef79867
null
bool SafeSock :: init_MD(CONDOR_MD_MODE /* mode */, KeyInfo * key, const char * keyId) { bool inited = true; if (mdChecker_) { delete mdChecker_; mdChecker_ = 0; } if (key) { mdChecker_ = new Condor_MD_MAC(key); } if (_longMsg) { inited = _longMsg->verifyMD(mdChecker_); } else { inited = _shortMsg.verifyMD(mdChecker_); } if( !_outMsg.init_MD(keyId) ) { inited = false; } return inited; }
bool SafeSock :: init_MD(CONDOR_MD_MODE /* mode */, KeyInfo * key, const char * keyId) { bool inited = true; if (mdChecker_) { delete mdChecker_; mdChecker_ = 0; } if (key) { mdChecker_ = new Condor_MD_MAC(key); } if (_longMsg) { inited = _longMsg->verifyMD(mdChecker_); } else { inited = _shortMsg.verifyMD(mdChecker_); } if( !_outMsg.init_MD(keyId) ) { inited = false; } return inited; }
CPP
htcondor
0
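The record's CWE-134 classification refers to uncontrolled format strings; the function shown does not contain the formatting call itself, so the sketch below is only a generic illustration of the bug class, not htcondor code. The dangerous call is left commented out so the program stays safe to run:

#include <stdio.h>

int main(int argc, char **argv)
{
    const char *user = argc > 1 ? argv[1] : "%x %x %x";

    /* Dangerous: user-controlled data interpreted as a format string;
     * directives like %x and %n read or write through stack garbage. */
    /* printf(user); */

    /* Safe: a constant format string, user data passed as an argument. */
    printf("%s\n", user);
    return 0;
}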
CVE-2016-5688
https://www.cvedetails.com/cve/CVE-2016-5688/
CWE-119
https://github.com/ImageMagick/ImageMagick/commit/aecd0ada163a4d6c769cec178955d5f3e9316f2f
aecd0ada163a4d6c769cec178955d5f3e9316f2f
Set pixel cache to undefined if any resource limit is exceeded
static const Quantum *GetVirtualPixelsCache(const Image *image) { CacheInfo *magick_restrict cache_info; const int id = GetOpenMPThreadId(); assert(image != (const Image *) NULL); assert(image->signature == MagickCoreSignature); assert(image->cache != (Cache) NULL); cache_info=(CacheInfo *) image->cache; assert(cache_info->signature == MagickCoreSignature); assert(id < (int) cache_info->number_threads); return(GetVirtualPixelsNexus(image->cache,cache_info->nexus_info[id])); }
static const Quantum *GetVirtualPixelsCache(const Image *image) { CacheInfo *magick_restrict cache_info; const int id = GetOpenMPThreadId(); assert(image != (const Image *) NULL); assert(image->signature == MagickCoreSignature); assert(image->cache != (Cache) NULL); cache_info=(CacheInfo *) image->cache; assert(cache_info->signature == MagickCoreSignature); assert(id < (int) cache_info->number_threads); return(GetVirtualPixelsNexus(image->cache,cache_info->nexus_info[id])); }
C
ImageMagick
0
CVE-2011-3188
https://www.cvedetails.com/cve/CVE-2011-3188/
null
https://github.com/torvalds/linux/commit/6e5714eaf77d79ae1c8b47e3e040ff5411b717ec
6e5714eaf77d79ae1c8b47e3e040ff5411b717ec
net: Compute protocol sequence numbers and fragment IDs using MD5. Computers have become a lot faster since we compromised on the partial MD4 hash which we use currently for performance reasons. MD5 is a much safer choice, and is in line with both RFC1948 and other ISS generators (OpenBSD, Solaris, etc.) Furthermore, only having 24 bits of the sequence number be truly unpredictable is a very serious limitation. So the periodic regeneration and 8-bit counter have been removed. We compute and use a full 32-bit sequence number. For ipv6, DCCP was found to use a 32-bit truncated initial sequence number (it needs 43 bits) and that is fixed here as well. Reported-by: Dan Kaminsky <[email protected]> Tested-by: Willy Tarreau <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static inline int compare_netns(struct rtable *rt1, struct rtable *rt2) { return net_eq(dev_net(rt1->dst.dev), dev_net(rt2->dst.dev)); }
static inline int compare_netns(struct rtable *rt1, struct rtable *rt2) { return net_eq(dev_net(rt1->dst.dev), dev_net(rt2->dst.dev)); }
C
linux
0
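The construction the commit message above describes (RFC 1948 style: a keyed hash of the connection 4-tuple plus a clock term, with all 32 bits unpredictable) can be sketched in a few lines of C. This is an illustrative model only, not the kernel's code: keyed_mix() is a placeholder mixing function standing in for the MD5 transform, and all names here are invented.

#include <stdint.h>
#include <stddef.h>

/* Placeholder mixer standing in for the kernel's MD5 transform; the point
 * is the structure (keyed hash of the 4-tuple + clock), not the hash itself. */
static uint32_t keyed_mix(const uint32_t *w, size_t n, const uint32_t key[4])
{
    uint32_t h = key[0] ^ 0x9e3779b9u;
    for (size_t i = 0; i < n; i++) {
        h ^= w[i] + key[i % 4];
        h = (h << 13) | (h >> 19);
        h *= 0x85ebca6bu;
    }
    return h;
}

static uint32_t secure_seq_sketch(uint32_t saddr, uint32_t daddr,
                                  uint16_t sport, uint16_t dport,
                                  const uint32_t secret[4], uint64_t ns_now)
{
    uint32_t tuple[3] = { saddr, daddr, ((uint32_t)sport << 16) | dport };
    /* All 32 bits come from the keyed hash; the clock term only makes the
     * sequence space advance over time, as in RFC 1948. */
    return keyed_mix(tuple, 3, secret) + (uint32_t)(ns_now >> 6);
}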
CVE-2011-3055
https://www.cvedetails.com/cve/CVE-2011-3055/
null
https://github.com/chromium/chromium/commit/e9372a1bfd3588a80fcf49aa07321f0971dd6091
e9372a1bfd3588a80fcf49aa07321f0971dd6091
[V8] Pass Isolate to throwNotEnoughArgumentsError() https://bugs.webkit.org/show_bug.cgi?id=86983 Reviewed by Adam Barth. The objective is to pass Isolate around in V8 bindings. This patch passes Isolate to throwNotEnoughArgumentsError(). No tests. No change in behavior. * bindings/scripts/CodeGeneratorV8.pm: (GenerateArgumentsCountCheck): (GenerateEventConstructorCallback): * bindings/scripts/test/V8/V8Float64Array.cpp: (WebCore::Float64ArrayV8Internal::fooCallback): * bindings/scripts/test/V8/V8TestActiveDOMObject.cpp: (WebCore::TestActiveDOMObjectV8Internal::excitingFunctionCallback): (WebCore::TestActiveDOMObjectV8Internal::postMessageCallback): * bindings/scripts/test/V8/V8TestCustomNamedGetter.cpp: (WebCore::TestCustomNamedGetterV8Internal::anotherFunctionCallback): * bindings/scripts/test/V8/V8TestEventConstructor.cpp: (WebCore::V8TestEventConstructor::constructorCallback): * bindings/scripts/test/V8/V8TestEventTarget.cpp: (WebCore::TestEventTargetV8Internal::itemCallback): (WebCore::TestEventTargetV8Internal::dispatchEventCallback): * bindings/scripts/test/V8/V8TestInterface.cpp: (WebCore::TestInterfaceV8Internal::supplementalMethod2Callback): (WebCore::V8TestInterface::constructorCallback): * bindings/scripts/test/V8/V8TestMediaQueryListListener.cpp: (WebCore::TestMediaQueryListListenerV8Internal::methodCallback): * bindings/scripts/test/V8/V8TestNamedConstructor.cpp: (WebCore::V8TestNamedConstructorConstructorCallback): * bindings/scripts/test/V8/V8TestObj.cpp: (WebCore::TestObjV8Internal::voidMethodWithArgsCallback): (WebCore::TestObjV8Internal::intMethodWithArgsCallback): (WebCore::TestObjV8Internal::objMethodWithArgsCallback): (WebCore::TestObjV8Internal::methodWithSequenceArgCallback): (WebCore::TestObjV8Internal::methodReturningSequenceCallback): (WebCore::TestObjV8Internal::methodThatRequiresAllArgsAndThrowsCallback): (WebCore::TestObjV8Internal::serializedValueCallback): (WebCore::TestObjV8Internal::idbKeyCallback): (WebCore::TestObjV8Internal::optionsObjectCallback): (WebCore::TestObjV8Internal::methodWithNonOptionalArgAndOptionalArgCallback): (WebCore::TestObjV8Internal::methodWithNonOptionalArgAndTwoOptionalArgsCallback): (WebCore::TestObjV8Internal::methodWithCallbackArgCallback): (WebCore::TestObjV8Internal::methodWithNonCallbackArgAndCallbackArgCallback): (WebCore::TestObjV8Internal::overloadedMethod1Callback): (WebCore::TestObjV8Internal::overloadedMethod2Callback): (WebCore::TestObjV8Internal::overloadedMethod3Callback): (WebCore::TestObjV8Internal::overloadedMethod4Callback): (WebCore::TestObjV8Internal::overloadedMethod5Callback): (WebCore::TestObjV8Internal::overloadedMethod6Callback): (WebCore::TestObjV8Internal::overloadedMethod7Callback): (WebCore::TestObjV8Internal::overloadedMethod11Callback): (WebCore::TestObjV8Internal::overloadedMethod12Callback): (WebCore::TestObjV8Internal::enabledAtRuntimeMethod1Callback): (WebCore::TestObjV8Internal::enabledAtRuntimeMethod2Callback): (WebCore::TestObjV8Internal::convert1Callback): (WebCore::TestObjV8Internal::convert2Callback): (WebCore::TestObjV8Internal::convert3Callback): (WebCore::TestObjV8Internal::convert4Callback): (WebCore::TestObjV8Internal::convert5Callback): (WebCore::TestObjV8Internal::strictFunctionCallback): (WebCore::V8TestObj::constructorCallback): * bindings/scripts/test/V8/V8TestSerializedScriptValueInterface.cpp: (WebCore::TestSerializedScriptValueInterfaceV8Internal::acceptTransferListCallback): (WebCore::V8TestSerializedScriptValueInterface::constructorCallback): * bindings/v8/ScriptController.cpp: (WebCore::setValueAndClosePopupCallback): * bindings/v8/V8Proxy.cpp: (WebCore::V8Proxy::throwNotEnoughArgumentsError): * bindings/v8/V8Proxy.h: (V8Proxy): * bindings/v8/custom/V8AudioContextCustom.cpp: (WebCore::V8AudioContext::constructorCallback): * bindings/v8/custom/V8DataViewCustom.cpp: (WebCore::V8DataView::getInt8Callback): (WebCore::V8DataView::getUint8Callback): (WebCore::V8DataView::setInt8Callback): (WebCore::V8DataView::setUint8Callback): * bindings/v8/custom/V8DirectoryEntryCustom.cpp: (WebCore::V8DirectoryEntry::getDirectoryCallback): (WebCore::V8DirectoryEntry::getFileCallback): * bindings/v8/custom/V8IntentConstructor.cpp: (WebCore::V8Intent::constructorCallback): * bindings/v8/custom/V8SVGLengthCustom.cpp: (WebCore::V8SVGLength::convertToSpecifiedUnitsCallback): * bindings/v8/custom/V8WebGLRenderingContextCustom.cpp: (WebCore::getObjectParameter): (WebCore::V8WebGLRenderingContext::getAttachedShadersCallback): (WebCore::V8WebGLRenderingContext::getExtensionCallback): (WebCore::V8WebGLRenderingContext::getFramebufferAttachmentParameterCallback): (WebCore::V8WebGLRenderingContext::getParameterCallback): (WebCore::V8WebGLRenderingContext::getProgramParameterCallback): (WebCore::V8WebGLRenderingContext::getShaderParameterCallback): (WebCore::V8WebGLRenderingContext::getUniformCallback): (WebCore::vertexAttribAndUniformHelperf): (WebCore::uniformHelperi): (WebCore::uniformMatrixHelper): * bindings/v8/custom/V8WebKitMutationObserverCustom.cpp: (WebCore::V8WebKitMutationObserver::constructorCallback): (WebCore::V8WebKitMutationObserver::observeCallback): * bindings/v8/custom/V8WebSocketCustom.cpp: (WebCore::V8WebSocket::constructorCallback): (WebCore::V8WebSocket::sendCallback): * bindings/v8/custom/V8XMLHttpRequestCustom.cpp: (WebCore::V8XMLHttpRequest::openCallback): git-svn-id: svn://svn.chromium.org/blink/trunk@117736 bbb929c8-8fbe-4397-9dbb-9b2b20218538
V8Proxy::V8Proxy(Frame* frame) : m_frame(frame) , m_windowShell(V8DOMWindowShell::create(frame)) { }
V8Proxy::V8Proxy(Frame* frame) : m_frame(frame) , m_windowShell(V8DOMWindowShell::create(frame)) { }
C
Chrome
0
CVE-2017-7539
https://www.cvedetails.com/cve/CVE-2017-7539/
CWE-20
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=ff82911cd3f69f028f2537825c9720ff78bc3f19
ff82911cd3f69f028f2537825c9720ff78bc3f19
null
ssize_t nbd_wr_syncv(QIOChannel *ioc, struct iovec *iov, size_t niov, size_t length, bool do_read) { ssize_t done = 0; Error *local_err = NULL; struct iovec *local_iov = g_new(struct iovec, niov); struct iovec *local_iov_head = local_iov; unsigned int nlocal_iov = niov; nlocal_iov = iov_copy(local_iov, nlocal_iov, iov, niov, 0, length); while (nlocal_iov > 0) { ssize_t len; if (do_read) { len = qio_channel_readv(ioc, local_iov, nlocal_iov, &local_err); } else { len = qio_channel_writev(ioc, local_iov, nlocal_iov, &local_err); } if (len == QIO_CHANNEL_ERR_BLOCK) { if (qemu_in_coroutine()) { qio_channel_yield(ioc, do_read ? G_IO_IN : G_IO_OUT); } else { return -EAGAIN; } } else if (done) { /* XXX this is needed by nbd_reply_ready. */ qio_channel_wait(ioc, do_read ? G_IO_IN : G_IO_OUT); } else { return -EAGAIN; } continue; } if (len < 0) { TRACE("I/O error: %s", error_get_pretty(local_err)); error_free(local_err); /* XXX handle Error objects */ done = -EIO; goto cleanup; } if (do_read && len == 0) { break; } iov_discard_front(&local_iov, &nlocal_iov, len); done += len; }
ssize_t nbd_wr_syncv(QIOChannel *ioc, struct iovec *iov, size_t niov, size_t length, bool do_read) { ssize_t done = 0; Error *local_err = NULL; struct iovec *local_iov = g_new(struct iovec, niov); struct iovec *local_iov_head = local_iov; unsigned int nlocal_iov = niov; nlocal_iov = iov_copy(local_iov, nlocal_iov, iov, niov, 0, length); while (nlocal_iov > 0) { ssize_t len; if (do_read) { len = qio_channel_readv(ioc, local_iov, nlocal_iov, &local_err); } else { len = qio_channel_writev(ioc, local_iov, nlocal_iov, &local_err); } if (len == QIO_CHANNEL_ERR_BLOCK) { if (qemu_in_coroutine()) { /* XXX figure out if we can create a variant on * qio_channel_yield() that works with AIO contexts * and consider using that in this branch */ qemu_coroutine_yield(); } else if (done) { /* XXX this is needed by nbd_reply_ready. */ qio_channel_wait(ioc, do_read ? G_IO_IN : G_IO_OUT); } else { return -EAGAIN; } } else if (done) { /* XXX this is needed by nbd_reply_ready. */ qio_channel_wait(ioc, do_read ? G_IO_IN : G_IO_OUT); } else { return -EAGAIN; } continue; } if (len < 0) { TRACE("I/O error: %s", error_get_pretty(local_err)); error_free(local_err); /* XXX handle Error objects */ done = -EIO; goto cleanup; } if (do_read && len == 0) { break; } iov_discard_front(&local_iov, &nlocal_iov, len); done += len; }
C
qemu
1
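The nbd_wr_syncv() record above shows a common vectored-I/O pattern: transfer what the channel accepts, discard the consumed front of the iovec, and retry when the channel would block. A minimal userspace analogue using POSIX writev() follows; iov_skip() and writev_all() are illustrative names, not qemu's API, and the sketch assumes writev() makes progress on non-empty iovecs.

#include <sys/uio.h>
#include <errno.h>
#include <unistd.h>

/* Advance an iovec array past 'bytes' already-written bytes.
 * Note: modifies the caller's iovec entries in place. */
static void iov_skip(struct iovec **iov, int *iovcnt, size_t bytes)
{
    while (*iovcnt > 0) {
        if (bytes < (*iov)->iov_len) {
            (*iov)->iov_base = (char *)(*iov)->iov_base + bytes;
            (*iov)->iov_len -= bytes;
            return;
        }
        bytes -= (*iov)->iov_len;   /* this entry fully consumed */
        (*iov)++;
        (*iovcnt)--;
    }
}

/* Write a full iovec, retrying on short writes and EINTR. */
static ssize_t writev_all(int fd, struct iovec *iov, int iovcnt)
{
    ssize_t done = 0;
    while (iovcnt > 0) {
        ssize_t n = writev(fd, iov, iovcnt);
        if (n < 0) {
            if (errno == EINTR)
                continue;            /* transient: retry */
            return -1;               /* real error */
        }
        done += n;
        iov_skip(&iov, &iovcnt, (size_t)n);
    }
    return done;
}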
CVE-2017-0592
https://www.cvedetails.com/cve/CVE-2017-0592/
CWE-119
https://android.googlesource.com/platform/frameworks/av/+/acc192347665943ca674acf117e4f74a88436922
acc192347665943ca674acf117e4f74a88436922
FLACExtractor: copy protect mWriteBuffer Bug: 30895578 Change-Id: I4cba36bbe3502678210e5925181683df9726b431
status_t FLACSource::read( MediaBuffer **outBuffer, const ReadOptions *options) { MediaBuffer *buffer; int64_t seekTimeUs; ReadOptions::SeekMode mode; if ((NULL != options) && options->getSeekTo(&seekTimeUs, &mode)) { FLAC__uint64 sample; if (seekTimeUs <= 0LL) { sample = 0LL; } else { sample = (seekTimeUs * mParser->getSampleRate()) / 1000000LL; if (sample >= mParser->getTotalSamples()) { sample = mParser->getTotalSamples(); } } buffer = mParser->readBuffer(sample); } else { buffer = mParser->readBuffer(); } *outBuffer = buffer; return buffer != NULL ? (status_t) OK : (status_t) ERROR_END_OF_STREAM; }
status_t FLACSource::read( MediaBuffer **outBuffer, const ReadOptions *options) { MediaBuffer *buffer; int64_t seekTimeUs; ReadOptions::SeekMode mode; if ((NULL != options) && options->getSeekTo(&seekTimeUs, &mode)) { FLAC__uint64 sample; if (seekTimeUs <= 0LL) { sample = 0LL; } else { sample = (seekTimeUs * mParser->getSampleRate()) / 1000000LL; if (sample >= mParser->getTotalSamples()) { sample = mParser->getTotalSamples(); } } buffer = mParser->readBuffer(sample); } else { buffer = mParser->readBuffer(); } *outBuffer = buffer; return buffer != NULL ? (status_t) OK : (status_t) ERROR_END_OF_STREAM; }
C
Android
0
CVE-2013-4623
https://www.cvedetails.com/cve/CVE-2013-4623/
CWE-20
https://github.com/polarssl/polarssl/commit/1922a4e6aade7b1d685af19d4d9339ddb5c02859
1922a4e6aade7b1d685af19d4d9339ddb5c02859
ssl_parse_certificate() now calls x509parse_crt_der() directly
void ssl_set_renegotiation( ssl_context *ssl, int renegotiation ) { ssl->disable_renegotiation = renegotiation; }
void ssl_set_renegotiation( ssl_context *ssl, int renegotiation ) { ssl->disable_renegotiation = renegotiation; }
C
polarssl
0
CVE-2017-18241
https://www.cvedetails.com/cve/CVE-2017-18241/
CWE-476
https://github.com/torvalds/linux/commit/d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
d4fdf8ba0e5808ba9ad6b44337783bd9935e0982
f2fs: fix a panic caused by NULL flush_cmd_control Mounting the fs with the noflush_merge option, boot failed due to an illegal fcc address in f2fs_issue_flush: if (!test_opt(sbi, FLUSH_MERGE)) { ret = submit_flush_wait(sbi); atomic_inc(&fcc->issued_flush); -> here, fcc is invalid return ret; } Signed-off-by: Yunlei He <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
static int __revoke_inmem_pages(struct inode *inode, struct list_head *head, bool drop, bool recover) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct inmem_pages *cur, *tmp; int err = 0; list_for_each_entry_safe(cur, tmp, head, list) { struct page *page = cur->page; if (drop) trace_f2fs_commit_inmem_page(page, INMEM_DROP); lock_page(page); if (recover) { struct dnode_of_data dn; struct node_info ni; trace_f2fs_commit_inmem_page(page, INMEM_REVOKE); set_new_dnode(&dn, inode, NULL, NULL, 0); if (get_dnode_of_data(&dn, page->index, LOOKUP_NODE)) { err = -EAGAIN; goto next; } get_node_info(sbi, dn.nid, &ni); f2fs_replace_block(sbi, &dn, dn.data_blkaddr, cur->old_addr, ni.version, true, true); f2fs_put_dnode(&dn); } next: /* we don't need to invalidate this in the sccessful status */ if (drop || recover) ClearPageUptodate(page); set_page_private(page, 0); ClearPagePrivate(page); f2fs_put_page(page, 1); list_del(&cur->list); kmem_cache_free(inmem_entry_slab, cur); dec_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES); } return err; }
static int __revoke_inmem_pages(struct inode *inode, struct list_head *head, bool drop, bool recover) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct inmem_pages *cur, *tmp; int err = 0; list_for_each_entry_safe(cur, tmp, head, list) { struct page *page = cur->page; if (drop) trace_f2fs_commit_inmem_page(page, INMEM_DROP); lock_page(page); if (recover) { struct dnode_of_data dn; struct node_info ni; trace_f2fs_commit_inmem_page(page, INMEM_REVOKE); set_new_dnode(&dn, inode, NULL, NULL, 0); if (get_dnode_of_data(&dn, page->index, LOOKUP_NODE)) { err = -EAGAIN; goto next; } get_node_info(sbi, dn.nid, &ni); f2fs_replace_block(sbi, &dn, dn.data_blkaddr, cur->old_addr, ni.version, true, true); f2fs_put_dnode(&dn); } next: /* we don't need to invalidate this in the sccessful status */ if (drop || recover) ClearPageUptodate(page); set_page_private(page, 0); ClearPagePrivate(page); f2fs_put_page(page, 1); list_del(&cur->list); kmem_cache_free(inmem_entry_slab, cur); dec_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES); } return err; }
C
linux
0
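The defect shape in the message above is a control structure that is only allocated when a mount option is on, but dereferenced on a path reachable when it is off. A condensed C sketch of the guard pattern; the types and names here are illustrative, not the actual f2fs code.

/* fcc is only allocated when flush merging is enabled, so every path
 * must check the option (or the pointer) before touching it. */
struct flush_ctl { int issued; };

struct sb_info {
    int flush_merge_enabled;
    struct flush_ctl *fcc;       /* NULL unless flush_merge_enabled */
};

static int issue_flush(struct sb_info *sbi)
{
    if (!sbi->flush_merge_enabled || !sbi->fcc) {
        /* submit directly; do NOT dereference sbi->fcc here */
        return 0;
    }
    sbi->fcc->issued++;          /* safe: fcc is known to exist */
    return 0;
}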
CVE-2017-7191
https://www.cvedetails.com/cve/CVE-2017-7191/
CWE-416
https://github.com/irssi/irssi/commit/77b2631c78461965bc9a7414aae206b5c514e1b3
77b2631c78461965bc9a7414aae206b5c514e1b3
Merge branch 'netjoin-timeout' into 'master' fe-netjoin: remove irc servers on "server disconnected" signal Closes #7 See merge request !10
void fe_netjoin_init(void) { settings_add_bool("misc", "hide_netsplit_quits", TRUE); settings_add_int("misc", "netjoin_max_nicks", 10); join_tag = -1; printing_joins = FALSE; read_settings(); signal_add("setup changed", (SIGNAL_FUNC) read_settings); signal_add("server disconnected", (SIGNAL_FUNC) sig_server_disconnected); }
void fe_netjoin_init(void) { settings_add_bool("misc", "hide_netsplit_quits", TRUE); settings_add_int("misc", "netjoin_max_nicks", 10); join_tag = -1; printing_joins = FALSE; read_settings(); signal_add("setup changed", (SIGNAL_FUNC) read_settings); }
C
irssi
1
CVE-2016-4482
https://www.cvedetails.com/cve/CVE-2016-4482/
CWE-200
https://github.com/torvalds/linux/commit/681fef8380eb818c0b845fca5d2ab1dcbab114ee
681fef8380eb818c0b845fca5d2ab1dcbab114ee
USB: usbfs: fix potential infoleak in devio The stack object “ci” has a total size of 8 bytes. Its last 3 bytes are padding bytes which are not initialized and leaked to userland via “copy_to_user”. Signed-off-by: Kangjie Lu <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
static void destroy_async_on_interface(struct usb_dev_state *ps, unsigned int ifnum) { struct list_head *p, *q, hitlist; unsigned long flags; INIT_LIST_HEAD(&hitlist); spin_lock_irqsave(&ps->lock, flags); list_for_each_safe(p, q, &ps->async_pending) if (ifnum == list_entry(p, struct async, asynclist)->ifnum) list_move_tail(p, &hitlist); spin_unlock_irqrestore(&ps->lock, flags); destroy_async(ps, &hitlist); }
static void destroy_async_on_interface(struct usb_dev_state *ps, unsigned int ifnum) { struct list_head *p, *q, hitlist; unsigned long flags; INIT_LIST_HEAD(&hitlist); spin_lock_irqsave(&ps->lock, flags); list_for_each_safe(p, q, &ps->async_pending) if (ifnum == list_entry(p, struct async, asynclist)->ifnum) list_move_tail(p, &hitlist); spin_unlock_irqrestore(&ps->lock, flags); destroy_async(ps, &hitlist); }
C
linux
0
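The infoleak described in the CVE-2016-4482 message above is the classic struct-padding pattern: a stack struct with internal padding is copied to userspace byte-for-byte, so uninitialized padding leaks kernel stack contents. A userspace sketch of the bug shape and the fix; the struct mirrors the usbdevfs connectinfo layout described in the message, and memcpy() stands in for copy_to_user().

#include <string.h>

struct conn_info {
    unsigned int devnum;    /* 4 bytes */
    unsigned char slow;     /* 1 byte + 3 padding bytes on most ABIs */
};

static int get_conn_info(struct conn_info *dst, unsigned int devnum, int slow)
{
    struct conn_info ci;
    memset(&ci, 0, sizeof(ci));    /* the fix: padding bytes are now zeroed */
    ci.devnum = devnum;
    ci.slow = (unsigned char)slow;
    memcpy(dst, &ci, sizeof(ci));  /* stands in for copy_to_user() */
    return 0;
}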
CVE-2013-6663
https://www.cvedetails.com/cve/CVE-2013-6663/
CWE-399
https://github.com/chromium/chromium/commit/fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
fb5dce12f0462056fc9f66967b0f7b2b7bcd88f5
One polymer_config.js to rule them all. [email protected],[email protected],[email protected] BUG=425626 Review URL: https://codereview.chromium.org/1224783005 Cr-Commit-Position: refs/heads/master@{#337882}
UserBoardView* OobeUI::GetUserBoardScreenActor() { return user_board_screen_handler_; }
UserBoardView* OobeUI::GetUserBoardScreenActor() { return user_board_screen_handler_; }
C
Chrome
0
CVE-2016-0723
https://www.cvedetails.com/cve/CVE-2016-0723/
CWE-362
https://github.com/torvalds/linux/commit/5c17c861a357e9458001f021a7afa7aab9937439
5c17c861a357e9458001f021a7afa7aab9937439
tty: Fix unsafe ldisc reference via ioctl(TIOCGETD) ioctl(TIOCGETD) retrieves the line discipline id directly from the ldisc because the line discipline id (c_line) in termios is untrustworthy; userspace may have set termios via ioctl(TCSETS*) without actually changing the line discipline via ioctl(TIOCSETD). However, directly accessing the current ldisc via tty->ldisc is unsafe; the ldisc ptr dereferenced may be stale if the line discipline is changing via ioctl(TIOCSETD) or hangup. Wait for the line discipline reference (just like read() or write()) to retrieve the "current" line discipline id. Cc: <[email protected]> Signed-off-by: Peter Hurley <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
struct device *tty_register_device(struct tty_driver *driver, unsigned index, struct device *device) { return tty_register_device_attr(driver, index, device, NULL, NULL); }
struct device *tty_register_device(struct tty_driver *driver, unsigned index, struct device *device) { return tty_register_device_attr(driver, index, device, NULL, NULL); }
C
linux
0
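The TIOCGETD fix described above replaces a raw read through a pointer that a concurrent ioctl(TIOCSETD) or hangup may be swapping with a counted reference taken first, read through, then dropped. A minimal pthread sketch of that discipline; the types and helpers are illustrative, not the tty layer's API.

#include <pthread.h>

struct ldisc { int num; int refs; };
struct tty { pthread_mutex_t lock; struct ldisc *ld; };

static struct ldisc *ldisc_ref_sketch(struct tty *t)
{
    pthread_mutex_lock(&t->lock);
    struct ldisc *ld = t->ld;      /* stable while the lock is held */
    if (ld)
        ld->refs++;                /* pin it against replacement */
    pthread_mutex_unlock(&t->lock);
    return ld;
}

static int tiocgetd_sketch(struct tty *t)
{
    struct ldisc *ld = ldisc_ref_sketch(t);
    if (!ld)
        return -1;
    int num = ld->num;             /* safe: we hold a reference */
    pthread_mutex_lock(&t->lock);
    ld->refs--;                    /* drop the reference */
    pthread_mutex_unlock(&t->lock);
    return num;
}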
CVE-2013-0892
https://www.cvedetails.com/cve/CVE-2013-0892/
null
https://github.com/chromium/chromium/commit/0ab5fab4939150bd0f30ada8a4bf6eb0f69d66c1
0ab5fab4939150bd0f30ada8a4bf6eb0f69d66c1
Sizes going across an IPC should be uint32. BUG=164946 Review URL: https://chromiumcodereview.appspot.com/11472038 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@171944 0039d316-1c4b-4281-b951-d872f2087c98
GpuCommandBufferMemoryTracker(GpuChannel* channel) { gpu_memory_manager_tracking_group_ = new GpuMemoryTrackingGroup( channel->renderer_pid(), this, channel->gpu_channel_manager()->gpu_memory_manager()); }
GpuCommandBufferMemoryTracker(GpuChannel* channel) { gpu_memory_manager_tracking_group_ = new GpuMemoryTrackingGroup( channel->renderer_pid(), this, channel->gpu_channel_manager()->gpu_memory_manager()); }
C
Chrome
0
CVE-2017-6991
https://www.cvedetails.com/cve/CVE-2017-6991/
CWE-119
https://github.com/chromium/chromium/commit/3bfe67c9c4b45eb713326aae7a67c8f7390dae08
3bfe67c9c4b45eb713326aae7a67c8f7390dae08
sqlite: safely move pointer values through SQL. This lands https://www.sqlite.org/src/timeline?c=d6a44b35 in third_party/sqlite/src/ and third_party/sqlite/patches/0013-Add-new-interfaces-sqlite3_bind_pointer-sqlite3_resu.patch and re-generates third_party/sqlite/amalgamation/* using the script at third_party/sqlite/google_generate_amalgamation.sh. The CL also adds a layout test that verifies the patch works as intended. BUG=742407 Change-Id: I2e1a457459cd2e975e6241b630e7b79c82545981 Reviewed-on: https://chromium-review.googlesource.com/572976 Reviewed-by: Chris Mumford <[email protected]> Commit-Queue: Victor Costan <[email protected]> Cr-Commit-Position: refs/heads/master@{#487275}
static int addToSavepointBitvecs(Pager *pPager, Pgno pgno){ int ii; /* Loop counter */ int rc = SQLITE_OK; /* Result code */ for(ii=0; ii<pPager->nSavepoint; ii++){ PagerSavepoint *p = &pPager->aSavepoint[ii]; if( pgno<=p->nOrig ){ rc |= sqlite3BitvecSet(p->pInSavepoint, pgno); testcase( rc==SQLITE_NOMEM ); assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); } } return rc; }
static int addToSavepointBitvecs(Pager *pPager, Pgno pgno){ int ii; /* Loop counter */ int rc = SQLITE_OK; /* Result code */ for(ii=0; ii<pPager->nSavepoint; ii++){ PagerSavepoint *p = &pPager->aSavepoint[ii]; if( pgno<=p->nOrig ){ rc |= sqlite3BitvecSet(p->pInSavepoint, pgno); testcase( rc==SQLITE_NOMEM ); assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); } } return rc; }
C
Chrome
0
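The interfaces named in the commit message above, sqlite3_bind_pointer() and sqlite3_result_pointer(), let a host pass an object through SQL as an opaque, type-tagged pointer that SQL text cannot forge. A hedged usage sketch, assuming SQLite >= 3.20 where these entry points exist; "myobj" is an arbitrary tag both sides must agree on.

#include <sqlite3.h>

/* NULL destructor: the caller keeps ownership of obj. */
static int bind_object(sqlite3_stmt *stmt, void *obj)
{
    return sqlite3_bind_pointer(stmt, 1, obj, "myobj", NULL);
}

/* Inside an application-defined SQL function: returns NULL unless the
 * value really is a pointer carrying the matching tag. */
static void *get_object(sqlite3_value *arg)
{
    return sqlite3_value_pointer(arg, "myobj");
}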
CVE-2012-2390
https://www.cvedetails.com/cve/CVE-2012-2390/
CWE-399
https://github.com/torvalds/linux/commit/c50ac050811d6485616a193eb0f37bfbd191cc89
c50ac050811d6485616a193eb0f37bfbd191cc89
hugetlb: fix resv_map leak in error path When called for anonymous (non-shared) mappings, hugetlb_reserve_pages() does a resv_map_alloc(). It depends on code in hugetlbfs's vm_ops->close() to release that allocation. However, in the mmap() failure path, we do a plain unmap_region() without the remove_vma() which actually calls vm_ops->close(). This is a decent fix. This leak could get reintroduced if new code (say, after hugetlb_reserve_pages() in hugetlbfs_file_mmap()) decides to return an error. But, I think it would have to unroll the reservation anyway. Christoph's test case: http://marc.info/?l=linux-mm&m=133728900729735 This patch applies to 3.4 and later. A version for earlier kernels is at https://lkml.org/lkml/2012/5/22/418. Signed-off-by: Dave Hansen <[email protected]> Acked-by: Mel Gorman <[email protected]> Acked-by: KOSAKI Motohiro <[email protected]> Reported-by: Christoph Lameter <[email protected]> Tested-by: Christoph Lameter <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: <[email protected]> [2.6.32+] Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
static ssize_t nr_hugepages_mempolicy_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf, size_t len) { return nr_hugepages_store_common(true, kobj, attr, buf, len); }
static ssize_t nr_hugepages_mempolicy_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf, size_t len) { return nr_hugepages_store_common(true, kobj, attr, buf, len); }
C
linux
0
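The leak shape in the hugetlb message above: a helper allocates a resource whose release normally happens in a close() callback, but an error path returns before that callback is ever registered, so the caller must unwind it itself. A small C sketch with illustrative names.

#include <stdlib.h>

struct resv_map { long reserved; };

static int reserve_pages(struct resv_map **out, long n)
{
    struct resv_map *map = calloc(1, sizeof(*map));
    if (!map)
        return -1;
    map->reserved = n;
    *out = map;
    return 0;
}

static int map_region(long n)
{
    struct resv_map *map = NULL;
    if (reserve_pages(&map, n) < 0)
        return -1;
    if (n > 1024) {        /* some later failure before close() exists */
        free(map);         /* the fix: unwind on this error path */
        return -1;
    }
    /* success: from here on, release belongs to the close() callback */
    free(map);             /* stands in for vm_ops->close() */
    return 0;
}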
CVE-2016-9888
https://www.cvedetails.com/cve/CVE-2016-9888/
CWE-476
https://github.com/GNOME/libgsf/commit/95a8351a75758cf10b3bf6abae0b6b461f90d9e5
95a8351a75758cf10b3bf6abae0b6b461f90d9e5
tar: fix crash on broken tar file.
gsf_infile_tar_finalize (GObject *obj) { GsfInfileTar *tar = GSF_INFILE_TAR (obj); g_array_free (tar->children, TRUE); parent_class->finalize (obj); }
gsf_infile_tar_finalize (GObject *obj) { GsfInfileTar *tar = GSF_INFILE_TAR (obj); g_array_free (tar->children, TRUE); parent_class->finalize (obj); }
C
libgsf
0
CVE-2013-0884
https://www.cvedetails.com/cve/CVE-2013-0884/
null
https://github.com/chromium/chromium/commit/4c39b8e5670c4a0f2bb06008502ebb0c4fe322e0
4c39b8e5670c4a0f2bb06008502ebb0c4fe322e0
[4/4] Process clearBrowserCahce/cookies commands in browser. BUG=366585 Review URL: https://codereview.chromium.org/251183005 git-svn-id: svn://svn.chromium.org/blink/trunk@172984 bbb929c8-8fbe-4397-9dbb-9b2b20218538
void WebDevToolsAgentImpl::dispatchMouseEvent(const PlatformMouseEvent& event) { m_generatingEvent = true; WebMouseEvent webEvent = WebMouseEventBuilder(m_webViewImpl->mainFrameImpl()->frameView(), event); m_webViewImpl->handleInputEvent(webEvent); m_generatingEvent = false; }
void WebDevToolsAgentImpl::dispatchMouseEvent(const PlatformMouseEvent& event) { m_generatingEvent = true; WebMouseEvent webEvent = WebMouseEventBuilder(m_webViewImpl->mainFrameImpl()->frameView(), event); m_webViewImpl->handleInputEvent(webEvent); m_generatingEvent = false; }
C
Chrome
0
CVE-2017-6001
https://www.cvedetails.com/cve/CVE-2017-6001/
CWE-362
https://github.com/torvalds/linux/commit/321027c1fe77f892f4ea07846aeae08cefbbb290
321027c1fe77f892f4ea07846aeae08cefbbb290
perf/core: Fix concurrent sys_perf_event_open() vs. 'move_group' race Di Shen reported a race between two concurrent sys_perf_event_open() calls where both try and move the same pre-existing software group into a hardware context. The problem is exactly that described in commit: f63a8daa5812 ("perf: Fix event->ctx locking") ... where, while we wait for a ctx->mutex acquisition, the event->ctx relation can have changed under us. That very same commit failed to recognise sys_perf_event_context() as an external access vector to the events and thereby didn't apply the established locking rules correctly. So while one sys_perf_event_open() call is stuck waiting on mutex_lock_double(), the other (which owns said locks) moves the group about. So by the time the former sys_perf_event_open() acquires the locks, the context we've acquired is stale (and possibly dead). Apply the established locking rules as per perf_event_ctx_lock_nested() to the mutex_lock_double() for the 'move_group' case. This obviously means we need to validate state after we acquire the locks. Reported-by: Di Shen (Keen Lab) Tested-by: John Dias <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Min Chong <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Fixes: f63a8daa5812 ("perf: Fix event->ctx locking") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
static void perf_event_exit_task_context(struct task_struct *child, int ctxn) { struct perf_event_context *child_ctx, *clone_ctx = NULL; struct perf_event *child_event, *next; WARN_ON_ONCE(child != current); child_ctx = perf_pin_task_context(child, ctxn); if (!child_ctx) return; /* * In order to reduce the amount of tricky in ctx tear-down, we hold * ctx::mutex over the entire thing. This serializes against almost * everything that wants to access the ctx. * * The exception is sys_perf_event_open() / * perf_event_create_kernel_count() which does find_get_context() * without ctx::mutex (it cannot because of the move_group double mutex * lock thing). See the comments in perf_install_in_context(). */ mutex_lock(&child_ctx->mutex); /* * In a single ctx::lock section, de-schedule the events and detach the * context from the task such that we cannot ever get it scheduled back * in. */ raw_spin_lock_irq(&child_ctx->lock); task_ctx_sched_out(__get_cpu_context(child_ctx), child_ctx); /* * Now that the context is inactive, destroy the task <-> ctx relation * and mark the context dead. */ RCU_INIT_POINTER(child->perf_event_ctxp[ctxn], NULL); put_ctx(child_ctx); /* cannot be last */ WRITE_ONCE(child_ctx->task, TASK_TOMBSTONE); put_task_struct(current); /* cannot be last */ clone_ctx = unclone_ctx(child_ctx); raw_spin_unlock_irq(&child_ctx->lock); if (clone_ctx) put_ctx(clone_ctx); /* * Report the task dead after unscheduling the events so that we * won't get any samples after PERF_RECORD_EXIT. We can however still * get a few PERF_RECORD_READ events. */ perf_event_task(child, child_ctx, 0); list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) perf_event_exit_event(child_event, child_ctx, child); mutex_unlock(&child_ctx->mutex); put_ctx(child_ctx); }
static void perf_event_exit_task_context(struct task_struct *child, int ctxn) { struct perf_event_context *child_ctx, *clone_ctx = NULL; struct perf_event *child_event, *next; WARN_ON_ONCE(child != current); child_ctx = perf_pin_task_context(child, ctxn); if (!child_ctx) return; /* * In order to reduce the amount of tricky in ctx tear-down, we hold * ctx::mutex over the entire thing. This serializes against almost * everything that wants to access the ctx. * * The exception is sys_perf_event_open() / * perf_event_create_kernel_count() which does find_get_context() * without ctx::mutex (it cannot because of the move_group double mutex * lock thing). See the comments in perf_install_in_context(). */ mutex_lock(&child_ctx->mutex); /* * In a single ctx::lock section, de-schedule the events and detach the * context from the task such that we cannot ever get it scheduled back * in. */ raw_spin_lock_irq(&child_ctx->lock); task_ctx_sched_out(__get_cpu_context(child_ctx), child_ctx); /* * Now that the context is inactive, destroy the task <-> ctx relation * and mark the context dead. */ RCU_INIT_POINTER(child->perf_event_ctxp[ctxn], NULL); put_ctx(child_ctx); /* cannot be last */ WRITE_ONCE(child_ctx->task, TASK_TOMBSTONE); put_task_struct(current); /* cannot be last */ clone_ctx = unclone_ctx(child_ctx); raw_spin_unlock_irq(&child_ctx->lock); if (clone_ctx) put_ctx(clone_ctx); /* * Report the task dead after unscheduling the events so that we * won't get any samples after PERF_RECORD_EXIT. We can however still * get a few PERF_RECORD_READ events. */ perf_event_task(child, child_ctx, 0); list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) perf_event_exit_event(child_event, child_ctx, child); mutex_unlock(&child_ctx->mutex); put_ctx(child_ctx); }
C
linux
0
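The locking rule the message above re-establishes: a pointer snapshotted before sleeping on a mutex may be stale by the time the lock is held, so the relation must be revalidated after acquisition and the whole sequence retried if it changed. A pthread sketch of that lock-and-revalidate loop; the real perf code additionally uses RCU and READ_ONCE() for the unlocked snapshot, which this userspace model glosses over.

#include <pthread.h>

struct ctx { pthread_mutex_t mtx; };
struct event { struct ctx *ctx; };

static struct ctx *lock_event_ctx(struct event *ev)
{
    for (;;) {
        struct ctx *c = ev->ctx;          /* snapshot before sleeping */
        pthread_mutex_lock(&c->mtx);
        if (ev->ctx == c)                 /* relation unchanged? */
            return c;                     /* yes: locked and valid */
        pthread_mutex_unlock(&c->mtx);    /* no: it moved; retry */
    }
}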
CVE-2011-2918
https://www.cvedetails.com/cve/CVE-2011-2918/
CWE-399
https://github.com/torvalds/linux/commit/a8b0ca17b80e92faab46ee7179ba9e99ccb61233
a8b0ca17b80e92faab46ee7179ba9e99ccb61233
perf: Remove the nmi parameter from the swevent and overflow interface The nmi parameter indicated if we could do wakeups from the current context, if not, we would set some state and self-IPI and let the resulting interrupt do the wakeup. For the various event classes: - hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from the PMI-tail (ARM etc.) - tracepoint: nmi=0; since tracepoint could be from NMI context. - software: nmi=[0,1]; some, like the schedule thing cannot perform wakeups, and hence need 0. As one can see, there is very little nmi=1 usage, and the down-side of not using it is that on some platforms some software events can have a jiffy delay in wakeup (when arch_irq_work_raise isn't implemented). The up-side however is that we can remove the nmi parameter and save a bunch of conditionals in fast paths. Signed-off-by: Peter Zijlstra <[email protected]> Cc: Michael Cree <[email protected]> Cc: Will Deacon <[email protected]> Cc: Deng-Cheng Zhu <[email protected]> Cc: Anton Blanchard <[email protected]> Cc: Eric B Munson <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Cc: David S. Miller <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Jason Wessel <[email protected]> Cc: Don Zickus <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
static void mipspmu_start(struct perf_event *event, int flags) { struct hw_perf_event *hwc = &event->hw; if (!mipspmu) return; if (flags & PERF_EF_RELOAD) WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); hwc->state = 0; /* Set the period for the event. */ mipspmu_event_set_period(event, hwc, hwc->idx); /* Enable the event. */ mipspmu->enable_event(hwc, hwc->idx); }
static void mipspmu_start(struct perf_event *event, int flags) { struct hw_perf_event *hwc = &event->hw; if (!mipspmu) return; if (flags & PERF_EF_RELOAD) WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); hwc->state = 0; /* Set the period for the event. */ mipspmu_event_set_period(event, hwc, hwc->idx); /* Enable the event. */ mipspmu->enable_event(hwc, hwc->idx); }
C
linux
0
CVE-2017-15391
https://www.cvedetails.com/cve/CVE-2017-15391/
null
https://github.com/chromium/chromium/commit/f1afce25b3f94d8bddec69b08ffbc29b989ad844
f1afce25b3f94d8bddec69b08ffbc29b989ad844
[Extensions] Update navigations across hypothetical extension extents Update code to treat navigations across hypothetical extension extents (e.g. for nonexistent extensions) the same as we do for navigations crossing installed extension extents. Bug: 598265 Change-Id: Ibdf2f563ce1fd108ead279077901020a24de732b Reviewed-on: https://chromium-review.googlesource.com/617180 Commit-Queue: Devlin <[email protected]> Reviewed-by: Alex Moshchuk <[email protected]> Reviewed-by: Nasko Oskov <[email protected]> Cr-Commit-Position: refs/heads/master@{#495779}
const extensions::Extension* GetNonBookmarkAppExtension( const ExtensionSet& extensions, const GURL& url) { const extensions::Extension* extension = extensions.GetExtensionOrAppByURL(url); if (extension && extension->from_bookmark()) extension = NULL; return extension; }
const extensions::Extension* GetNonBookmarkAppExtension( const ExtensionSet& extensions, const GURL& url) { const extensions::Extension* extension = extensions.GetExtensionOrAppByURL(url); if (extension && extension->from_bookmark()) extension = NULL; return extension; }
C
Chrome
0
CVE-2017-5061
https://www.cvedetails.com/cve/CVE-2017-5061/
CWE-362
https://github.com/chromium/chromium/commit/5d78b84d39bd34bc9fce9d01c0dcd5a22a330d34
5d78b84d39bd34bc9fce9d01c0dcd5a22a330d34
(Reland) Discard compositor frames from unloaded web content This is a reland of https://codereview.chromium.org/2707243005/ with a small change to fix an uninitialized memory error that fails on MSAN bots. BUG=672847 [email protected], [email protected] CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.linux:linux_site_isolation Review-Url: https://codereview.chromium.org/2731283003 Cr-Commit-Position: refs/heads/master@{#454954}
void LayerTreeHostImpl::SetTreeLayerOpacityMutated(ElementId element_id, LayerTreeImpl* tree, float opacity) { if (!tree) return; PropertyTrees* property_trees = tree->property_trees(); DCHECK_EQ(1u, property_trees->element_id_to_effect_node_index.count(element_id)); const int effect_node_index = property_trees->element_id_to_effect_node_index[element_id]; property_trees->effect_tree.OnOpacityAnimated(opacity, effect_node_index, tree); }
void LayerTreeHostImpl::SetTreeLayerOpacityMutated(ElementId element_id, LayerTreeImpl* tree, float opacity) { if (!tree) return; PropertyTrees* property_trees = tree->property_trees(); DCHECK_EQ(1u, property_trees->element_id_to_effect_node_index.count(element_id)); const int effect_node_index = property_trees->element_id_to_effect_node_index[element_id]; property_trees->effect_tree.OnOpacityAnimated(opacity, effect_node_index, tree); }
C
Chrome
0
CVE-2016-3839
https://www.cvedetails.com/cve/CVE-2016-3839/
CWE-284
https://android.googlesource.com/platform/system/bt/+/472271b153c5dc53c28beac55480a8d8434b2d5c
472271b153c5dc53c28beac55480a8d8434b2d5c
DO NOT MERGE Fix potential DoS caused by delivering signal to BT process Bug: 28885210 Change-Id: I63866d894bfca47464d6e42e3fb0357c4f94d360 Conflicts: btif/co/bta_hh_co.c btif/src/btif_core.c Merge conflict resolution of ag/1161415 (referencing ag/1164670) - Directly into mnc-mr2-release
static void btif_hl_proc_reg_request(UINT8 app_idx, UINT8 app_id, tBTA_HL_REG_PARAM *p_reg_param, tBTA_HL_CBACK *p_cback){ UNUSED(p_cback); BTIF_TRACE_DEBUG("%s app_idx=%d app_id=%d", __FUNCTION__, app_idx, app_id); if(reg_counter >1) { BTIF_TRACE_DEBUG("btif_hl_proc_reg_request: calling uPDATE"); BTA_HlUpdate(app_id, p_reg_param,TRUE, btif_hl_cback); } else BTA_HlRegister(app_id, p_reg_param, btif_hl_cback); }
static void btif_hl_proc_reg_request(UINT8 app_idx, UINT8 app_id, tBTA_HL_REG_PARAM *p_reg_param, tBTA_HL_CBACK *p_cback){ UNUSED(p_cback); BTIF_TRACE_DEBUG("%s app_idx=%d app_id=%d", __FUNCTION__, app_idx, app_id); if(reg_counter >1) { BTIF_TRACE_DEBUG("btif_hl_proc_reg_request: calling uPDATE"); BTA_HlUpdate(app_id, p_reg_param,TRUE, btif_hl_cback); } else BTA_HlRegister(app_id, p_reg_param, btif_hl_cback); }
C
Android
0
CVE-2018-6096
https://www.cvedetails.com/cve/CVE-2018-6096/
null
https://github.com/chromium/chromium/commit/36f801fdbec07d116a6f4f07bb363f10897d6a51
36f801fdbec07d116a6f4f07bb363f10897d6a51
If a page calls |window.focus()|, kick it out of fullscreen. BUG=776418, 800056 Change-Id: I1880fe600e4814c073f247c43b1c1ac80c8fc017 Reviewed-on: https://chromium-review.googlesource.com/852378 Reviewed-by: Nasko Oskov <[email protected]> Reviewed-by: Philip Jägenstedt <[email protected]> Commit-Queue: Avi Drissman <[email protected]> Cr-Commit-Position: refs/heads/master@{#533790}
WebWidget* RenderViewImpl::CreatePopup(blink::WebLocalFrame* creator, blink::WebPopupType popup_type) { RenderWidget* widget = RenderWidget::CreateForPopup( this, compositor_deps_, popup_type, screen_info_, creator->GetTaskRunner(blink::TaskType::kUnthrottled)); if (!widget) return nullptr; if (screen_metrics_emulator_) { widget->SetPopupOriginAdjustmentsForEmulation( screen_metrics_emulator_.get()); } return widget->GetWebWidget(); }
WebWidget* RenderViewImpl::CreatePopup(blink::WebLocalFrame* creator, blink::WebPopupType popup_type) { RenderWidget* widget = RenderWidget::CreateForPopup( this, compositor_deps_, popup_type, screen_info_, creator->GetTaskRunner(blink::TaskType::kUnthrottled)); if (!widget) return nullptr; if (screen_metrics_emulator_) { widget->SetPopupOriginAdjustmentsForEmulation( screen_metrics_emulator_.get()); } return widget->GetWebWidget(); }
C
Chrome
0
CVE-2013-7421
https://www.cvedetails.com/cve/CVE-2013-7421/
CWE-264
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
crypto: prefix module autoloading with "crypto-" This prefixes all crypto module loading with "crypto-" so we never run the risk of exposing module auto-loading to userspace via a crypto API, as demonstrated by Mathias Krause: https://lkml.org/lkml/2013/3/4/70 Signed-off-by: Kees Cook <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
static int xts_aes_crypt(struct blkcipher_desc *desc, long func, struct s390_xts_ctx *xts_ctx, struct blkcipher_walk *walk) { unsigned int offset = (xts_ctx->key_len >> 1) & 0x10; int ret = blkcipher_walk_virt(desc, walk); unsigned int nbytes = walk->nbytes; unsigned int n; u8 *in, *out; struct pcc_param pcc_param; struct { u8 key[32]; u8 init[16]; } xts_param; if (!nbytes) goto out; memset(pcc_param.block, 0, sizeof(pcc_param.block)); memset(pcc_param.bit, 0, sizeof(pcc_param.bit)); memset(pcc_param.xts, 0, sizeof(pcc_param.xts)); memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak)); memcpy(pcc_param.key, xts_ctx->pcc_key, 32); ret = crypt_s390_pcc(func, &pcc_param.key[offset]); if (ret < 0) return -EIO; memcpy(xts_param.key, xts_ctx->key, 32); memcpy(xts_param.init, pcc_param.xts, 16); do { /* only use complete blocks */ n = nbytes & ~(AES_BLOCK_SIZE - 1); out = walk->dst.virt.addr; in = walk->src.virt.addr; ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n); if (ret < 0 || ret != n) return -EIO; nbytes &= AES_BLOCK_SIZE - 1; ret = blkcipher_walk_done(desc, walk, nbytes); } while ((nbytes = walk->nbytes)); out: return ret; }
static int xts_aes_crypt(struct blkcipher_desc *desc, long func, struct s390_xts_ctx *xts_ctx, struct blkcipher_walk *walk) { unsigned int offset = (xts_ctx->key_len >> 1) & 0x10; int ret = blkcipher_walk_virt(desc, walk); unsigned int nbytes = walk->nbytes; unsigned int n; u8 *in, *out; struct pcc_param pcc_param; struct { u8 key[32]; u8 init[16]; } xts_param; if (!nbytes) goto out; memset(pcc_param.block, 0, sizeof(pcc_param.block)); memset(pcc_param.bit, 0, sizeof(pcc_param.bit)); memset(pcc_param.xts, 0, sizeof(pcc_param.xts)); memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak)); memcpy(pcc_param.key, xts_ctx->pcc_key, 32); ret = crypt_s390_pcc(func, &pcc_param.key[offset]); if (ret < 0) return -EIO; memcpy(xts_param.key, xts_ctx->key, 32); memcpy(xts_param.init, pcc_param.xts, 16); do { /* only use complete blocks */ n = nbytes & ~(AES_BLOCK_SIZE - 1); out = walk->dst.virt.addr; in = walk->src.virt.addr; ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n); if (ret < 0 || ret != n) return -EIO; nbytes &= AES_BLOCK_SIZE - 1; ret = blkcipher_walk_done(desc, walk, nbytes); } while ((nbytes = walk->nbytes)); out: return ret; }
C
linux
0
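The idea in the "crypto-" prefix commit above: module auto-loading resolves an alias namespace that modules must opt into, so a userspace-supplied algorithm name can no longer load an arbitrary module. A simplified C model of the macro shape; this is not the kernel's MODULE_ALIAS_CRYPTO definition, just the structure of it.

/* Simplified model: the alias a module exports is always prefixed. */
#define MODULE_ALIAS(s)        static const char module_alias[] = s;
#define MODULE_ALIAS_CRYPTO(n) MODULE_ALIAS("crypto-" n)

MODULE_ALIAS_CRYPTO("aes")   /* the module now answers to "crypto-aes" */

/* The crypto API's loading side then requests the prefixed name, e.g.
 * request_module("crypto-%s", alg_name), so a raw userspace string can
 * only ever match modules that explicitly declared a crypto alias. */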
null
null
null
https://github.com/chromium/chromium/commit/a3e2afaedd8190398ae45ccef34fcdee00fb19aa
a3e2afaedd8190398ae45ccef34fcdee00fb19aa
Fixed crash related to cellular network payment plan retreival. BUG=chromium-os:8864 TEST=none Review URL: http://codereview.chromium.org/4690002 git-svn-id: svn://svn.chromium.org/chrome/trunk/src@65405 0039d316-1c4b-4281-b951-d872f2087c98
virtual bool wifi_connecting() const { return false; }
virtual bool wifi_connecting() const { return false; }
C
Chrome
0
CVE-2014-9114
https://www.cvedetails.com/cve/CVE-2014-9114/
CWE-77
https://github.com/karelzak/util-linux/commit/89e90ae7b2826110ea28c1c0eb8e7c56c3907bdc
89e90ae7b2826110ea28c1c0eb8e7c56c3907bdc
libblkid: care about unsafe chars in cache The high-level libblkid API uses the /run/blkid/blkid.tab cache to store probing results. The cache format is <device NAME="value" ...>devname</device> and unfortunately the cache code does not escape quotation marks: # mkfs.ext4 -L 'AAA"BBB' # cat /run/blkid/blkid.tab ... <device ... LABEL="AAA"BBB" ...>/dev/sdb1</device> Such a string is later incorrectly parsed and blkid(8) returns nonsense. And for use-cases like # eval $(blkid -o export /dev/sdb1) it's also insecure. Note that mount, udevd and blkid -p are based on the low-level libblkid API; they bypass the cache and read data directly from the devices. The current udevd upstream does not depend on blkid(8) output at all; it's directly linked with the library and all unsafe chars are encoded in \x<hex> notation. # mkfs.ext4 -L 'X"`/tmp/foo` "' /dev/sdb1 # udevadm info --export-db | grep LABEL ... E: ID_FS_LABEL=X__/tmp/foo___ E: ID_FS_LABEL_ENC=X\x22\x60\x2ftmp\x2ffoo\x60\x20\x22 Signed-off-by: Karel Zak <[email protected]>
static void safe_print(const char *cp, int len, const char *esc) { unsigned char ch; if (len < 0) len = strlen(cp); while (len--) { ch = *cp++; if (!raw_chars) { if (ch >= 128) { fputs("M-", stdout); ch -= 128; } if ((ch < 32) || (ch == 0x7f)) { fputc('^', stdout); ch ^= 0x40; /* ^@, ^A, ^B; ^? for DEL */ } else if (esc && strchr(esc, ch)) fputc('\\', stdout); } fputc(ch, stdout); } }
static void safe_print(const char *cp, int len, const char *esc) { unsigned char ch; if (len < 0) len = strlen(cp); while (len--) { ch = *cp++; if (!raw_chars) { if (ch >= 128) { fputs("M-", stdout); ch -= 128; } if ((ch < 32) || (ch == 0x7f)) { fputc('^', stdout); ch ^= 0x40; /* ^@, ^A, ^B; ^? for DEL */ } else if (esc && strchr(esc, ch)) fputc('\\', stdout); } fputc(ch, stdout); } }
C
util-linux
0
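The \x<hex> encoding the message above credits to udevd is easy to sketch: pass plain identifier characters through and emit everything else (quotes, backticks, spaces, control bytes) as \xHH, so a KEY="value" line can never be broken out of. This is an illustration of the idea, not libblkid's actual encoder.

#include <stdio.h>
#include <ctype.h>

static void print_encoded(const char *s)
{
    for (const unsigned char *p = (const unsigned char *)s; *p; p++) {
        if (isalnum(*p) || *p == '_' || *p == '.' || *p == '-')
            putchar(*p);
        else
            printf("\\x%02x", *p);   /* '"' -> \x22, '`' -> \x60, ... */
    }
    putchar('\n');
}

/* print_encoded("X\"`/tmp/foo` \"") prints
 * X\x22\x60\x2ftmp\x2ffoo\x60\x20\x22, matching the udevadm output
 * quoted in the commit message. */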
CVE-2015-8126
https://www.cvedetails.com/cve/CVE-2015-8126/
CWE-119
https://github.com/chromium/chromium/commit/7f3d85b096f66870a15b37c2f40b219b2e292693
7f3d85b096f66870a15b37c2f40b219b2e292693
third_party/libpng: update to 1.2.54 [email protected] BUG=560291 Review URL: https://codereview.chromium.org/1467263003 Cr-Commit-Position: refs/heads/master@{#362298}
png_write_find_filter(png_structp png_ptr, png_row_infop row_info) { png_bytep best_row; #ifdef PNG_WRITE_FILTER_SUPPORTED png_bytep prev_row, row_buf; png_uint_32 mins, bpp; png_byte filter_to_do = png_ptr->do_filter; png_uint_32 row_bytes = row_info->rowbytes; png_debug(1, "in png_write_find_filter"); /* Find out how many bytes offset each pixel is */ bpp = (row_info->pixel_depth + 7) >> 3; prev_row = png_ptr->prev_row; #endif best_row = png_ptr->row_buf; #ifdef PNG_WRITE_FILTER_SUPPORTED row_buf = best_row; mins = PNG_MAXSUM; /* The prediction method we use is to find which method provides the * smallest value when summing the absolute values of the distances * from zero, using anything >= 128 as negative numbers. This is known * as the "minimum sum of absolute differences" heuristic. Other * heuristics are the "weighted minimum sum of absolute differences" * (experimental and can in theory improve compression), and the "zlib * predictive" method (not implemented yet), which does test compressions * of lines using different filter methods, and then chooses the * (series of) filter(s) that give minimum compressed data size (VERY * computationally expensive). * * GRR 980525: consider also * (1) minimum sum of absolute differences from running average (i.e., * keep running sum of non-absolute differences & count of bytes) * [track dispersion, too? restart average if dispersion too large?] * (1b) minimum sum of absolute differences from sliding average, probably * with window size <= deflate window (usually 32K) * (2) minimum sum of squared differences from zero or running average * (i.e., ~ root-mean-square approach) */ /* We don't need to test the 'no filter' case if this is the only filter * that has been chosen, as it doesn't actually do anything to the data. */ if ((filter_to_do & PNG_FILTER_NONE) && filter_to_do != PNG_FILTER_NONE) { png_bytep rp; png_uint_32 sum = 0; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1; i < row_bytes; i++, rp++) { v = *rp; sum += (v < 128) ? v : 256 - v; } mins = sum; } /* Sub filter */ if (filter_to_do == PNG_FILTER_SUB) /* It's the only filter so no testing is needed */ { png_bytep rp, lp, dp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->sub_row + 1; i < bpp; i++, rp++, dp++) { *dp = *rp; } for (lp = row_buf + 1; i < row_bytes; i++, rp++, lp++, dp++) { *dp = (png_byte)(((int)*rp - (int)*lp) & 0xff); } best_row = png_ptr->sub_row; } else if (filter_to_do & PNG_FILTER_SUB) { png_bytep rp, dp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1, dp = png_ptr->sub_row + 1; i < bpp; i++, rp++, dp++) { v = *dp = *rp; sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1; i < row_bytes; i++, rp++, lp++, dp++) { v = *dp = (png_byte)(((int)*rp - (int)*lp) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } if (sum < mins) { mins = sum; best_row = png_ptr->sub_row; } } /* Up filter */ if (filter_to_do == PNG_FILTER_UP) { png_bytep rp, dp, pp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->up_row + 1, pp = prev_row + 1; i < row_bytes; i++, rp++, pp++, dp++) { *dp = (png_byte)(((int)*rp - (int)*pp) & 0xff); } best_row = png_ptr->up_row; } else if (filter_to_do & PNG_FILTER_UP) { png_bytep rp, dp, pp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1, dp = png_ptr->up_row + 1, pp = prev_row + 1; i < row_bytes; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } if (sum < mins) { mins = sum; best_row = png_ptr->up_row; } } /* Avg filter */ if (filter_to_do == PNG_FILTER_AVG) { png_bytep rp, dp, pp, lp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->avg_row + 1, pp = prev_row + 1; i < bpp; i++) { *dp++ = (png_byte)(((int)*rp++ - ((int)*pp++ / 2)) & 0xff); } for (lp = row_buf + 1; i < row_bytes; i++) { *dp++ = (png_byte)(((int)*rp++ - (((int)*pp++ + (int)*lp++) / 2)) & 0xff); } best_row = png_ptr->avg_row; } else if (filter_to_do & PNG_FILTER_AVG) { png_bytep rp, dp, pp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1, dp = png_ptr->avg_row + 1, pp = prev_row + 1; i < bpp; i++) { v = *dp++ = (png_byte)(((int)*rp++ - ((int)*pp++ / 2)) & 0xff); sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1; i < row_bytes; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (((int)*pp++ + (int)*lp++) / 2)) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } if (sum < mins) { mins = sum; best_row = png_ptr->avg_row; } } /* Paeth filter */ if (filter_to_do == PNG_FILTER_PAETH) { png_bytep rp, dp, pp, cp, lp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->paeth_row + 1, pp = prev_row + 1; i < bpp; i++) { *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); } for (lp = row_buf + 1, cp = prev_row + 1; i < row_bytes; i++) { int a, b, c, pa, pb, pc, p; b = *pp++; c = *cp++; a = *lp++; p = b - c; pc = a - c; #ifdef PNG_USE_ABS pa = abs(p); pb = abs(pc); pc = abs(p + pc); #else pa = p < 0 ? -p : p; pb = pc < 0 ? -pc : pc; pc = (p + pc) < 0 ? -(p + pc) : p + pc; #endif p = (pa <= pb && pa <=pc) ? a : (pb <= pc) ? b : c; *dp++ = (png_byte)(((int)*rp++ - p) & 0xff); } best_row = png_ptr->paeth_row; } else if (filter_to_do & PNG_FILTER_PAETH) { png_bytep rp, dp, pp, cp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1, dp = png_ptr->paeth_row + 1, pp = prev_row + 1; i < bpp; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1, cp = prev_row + 1; i < row_bytes; i++) { int a, b, c, pa, pb, pc, p; b = *pp++; c = *cp++; a = *lp++; #ifndef PNG_SLOW_PAETH p = b - c; pc = a - c; #ifdef PNG_USE_ABS pa = abs(p); pb = abs(pc); pc = abs(p + pc); #else pa = p < 0 ? -p : p; pb = pc < 0 ? -pc : pc; pc = (p + pc) < 0 ? -(p + pc) : p + pc; #endif p = (pa <= pb && pa <=pc) ? a : (pb <= pc) ? b : c; #else /* PNG_SLOW_PAETH */ p = a + b - c; pa = abs(p - a); pb = abs(p - b); pc = abs(p - c); if (pa <= pb && pa <= pc) p = a; else if (pb <= pc) p = b; else p = c; #endif /* PNG_SLOW_PAETH */ v = *dp++ = (png_byte)(((int)*rp++ - p) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } if (sum < mins) { best_row = png_ptr->paeth_row; } } #endif /* PNG_WRITE_FILTER_SUPPORTED */ /* Do the actual writing of the filtered row data from the chosen filter. */ png_write_filtered_row(png_ptr, best_row); }
png_write_find_filter(png_structp png_ptr, png_row_infop row_info) { png_bytep best_row; #ifdef PNG_WRITE_FILTER_SUPPORTED png_bytep prev_row, row_buf; png_uint_32 mins, bpp; png_byte filter_to_do = png_ptr->do_filter; png_uint_32 row_bytes = row_info->rowbytes; #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED int num_p_filters = (int)png_ptr->num_prev_filters; #endif png_debug(1, "in png_write_find_filter"); #ifndef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->row_number == 0 && filter_to_do == PNG_ALL_FILTERS) { /* These will never be selected so we need not test them. */ filter_to_do &= ~(PNG_FILTER_UP | PNG_FILTER_PAETH); } #endif /* Find out how many bytes offset each pixel is */ bpp = (row_info->pixel_depth + 7) >> 3; prev_row = png_ptr->prev_row; #endif best_row = png_ptr->row_buf; #ifdef PNG_WRITE_FILTER_SUPPORTED row_buf = best_row; mins = PNG_MAXSUM; /* The prediction method we use is to find which method provides the * smallest value when summing the absolute values of the distances * from zero, using anything >= 128 as negative numbers. This is known * as the "minimum sum of absolute differences" heuristic. Other * heuristics are the "weighted minimum sum of absolute differences" * (experimental and can in theory improve compression), and the "zlib * predictive" method (not implemented yet), which does test compressions * of lines using different filter methods, and then chooses the * (series of) filter(s) that give minimum compressed data size (VERY * computationally expensive). * * GRR 980525: consider also * (1) minimum sum of absolute differences from running average (i.e., * keep running sum of non-absolute differences & count of bytes) * [track dispersion, too? restart average if dispersion too large?] * (1b) minimum sum of absolute differences from sliding average, probably * with window size <= deflate window (usually 32K) * (2) minimum sum of squared differences from zero or running average * (i.e., ~ root-mean-square approach) */ /* We don't need to test the 'no filter' case if this is the only filter * that has been chosen, as it doesn't actually do anything to the data. */ if ((filter_to_do & PNG_FILTER_NONE) && filter_to_do != PNG_FILTER_NONE) { png_bytep rp; png_uint_32 sum = 0; png_uint_32 i; int v; for (i = 0, rp = row_buf + 1; i < row_bytes; i++, rp++) { v = *rp; sum += (v < 128) ? v : 256 - v; } #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { png_uint_32 sumhi, sumlo; int j; sumlo = sum & PNG_LOMASK; sumhi = (sum >> PNG_HISHIFT) & PNG_HIMASK; /* Gives us some footroom */ /* Reduce the sum if we match any of the previous rows */ for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_NONE) { sumlo = (sumlo * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; sumhi = (sumhi * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } /* Factor in the cost of this filter (this is here for completeness, * but it makes no sense to have a "cost" for the NONE filter, as * it has the minimum possible computational cost - none). */ sumlo = (sumlo * png_ptr->filter_costs[PNG_FILTER_VALUE_NONE]) >> PNG_COST_SHIFT; sumhi = (sumhi * png_ptr->filter_costs[PNG_FILTER_VALUE_NONE]) >> PNG_COST_SHIFT; if (sumhi > PNG_HIMASK) sum = PNG_MAXSUM; else sum = (sumhi << PNG_HISHIFT) + sumlo; } #endif mins = sum; } /* Sub filter */ if (filter_to_do == PNG_FILTER_SUB) /* It's the only filter so no testing is needed */ { png_bytep rp, lp, dp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->sub_row + 1; i < bpp; i++, rp++, dp++) { *dp = *rp; } for (lp = row_buf + 1; i < row_bytes; i++, rp++, lp++, dp++) { *dp = (png_byte)(((int)*rp - (int)*lp) & 0xff); } best_row = png_ptr->sub_row; } else if (filter_to_do & PNG_FILTER_SUB) { png_bytep rp, dp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED /* We temporarily increase the "minimum sum" by the factor we * would reduce the sum of this filter, so that we can do the * early exit comparison without scaling the sum each time. */ if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 lmhi, lmlo; lmlo = lmins & PNG_LOMASK; lmhi = (lmins >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_SUB) { lmlo = (lmlo * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } lmlo = (lmlo * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_SUB]) >> PNG_COST_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_SUB]) >> PNG_COST_SHIFT; if (lmhi > PNG_HIMASK) lmins = PNG_MAXSUM; else lmins = (lmhi << PNG_HISHIFT) + lmlo; } #endif for (i = 0, rp = row_buf + 1, dp = png_ptr->sub_row + 1; i < bpp; i++, rp++, dp++) { v = *dp = *rp; sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1; i < row_bytes; i++, rp++, lp++, dp++) { v = *dp = (png_byte)(((int)*rp - (int)*lp) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 sumhi, sumlo; sumlo = sum & PNG_LOMASK; sumhi = (sum >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_SUB) { sumlo = (sumlo * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; sumhi = (sumhi * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } sumlo = (sumlo * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_SUB]) >> PNG_COST_SHIFT; sumhi = (sumhi * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_SUB]) >> PNG_COST_SHIFT; if (sumhi > PNG_HIMASK) sum = PNG_MAXSUM; else sum = (sumhi << PNG_HISHIFT) + sumlo; } #endif if (sum < mins) { mins = sum; best_row = png_ptr->sub_row; } } /* Up filter */ if (filter_to_do == PNG_FILTER_UP) { png_bytep rp, dp, pp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->up_row + 1, pp = prev_row + 1; i < row_bytes; i++, rp++, pp++, dp++) { *dp = (png_byte)(((int)*rp - (int)*pp) & 0xff); } best_row = png_ptr->up_row; } else if (filter_to_do & PNG_FILTER_UP) { png_bytep rp, dp, pp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 lmhi, lmlo; lmlo = lmins & PNG_LOMASK; lmhi = (lmins >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_UP) { lmlo = (lmlo * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } lmlo = (lmlo * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_UP]) >> PNG_COST_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_UP]) >> PNG_COST_SHIFT; if (lmhi > PNG_HIMASK) lmins = PNG_MAXSUM; else lmins = (lmhi << PNG_HISHIFT) + lmlo; } #endif for (i = 0, rp = row_buf + 1, dp = png_ptr->up_row + 1, pp = prev_row + 1; i < row_bytes; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 sumhi, sumlo; sumlo = sum & PNG_LOMASK; sumhi = (sum >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_UP) { sumlo = (sumlo * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; sumhi = (sumhi * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } sumlo = (sumlo * png_ptr->filter_costs[PNG_FILTER_VALUE_UP]) >> PNG_COST_SHIFT; sumhi = (sumhi * png_ptr->filter_costs[PNG_FILTER_VALUE_UP]) >> PNG_COST_SHIFT; if (sumhi > PNG_HIMASK) sum = PNG_MAXSUM; else sum = (sumhi << PNG_HISHIFT) + sumlo; } #endif if (sum < mins) { mins = sum; best_row = png_ptr->up_row; } } /* Avg filter */ if (filter_to_do == PNG_FILTER_AVG) { png_bytep rp, dp, pp, lp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->avg_row + 1, pp = prev_row + 1; i < bpp; i++) { *dp++ = (png_byte)(((int)*rp++ - ((int)*pp++ / 2)) & 0xff); } for (lp = row_buf + 1; i < row_bytes; i++) { *dp++ = (png_byte)(((int)*rp++ - (((int)*pp++ + (int)*lp++) / 2)) & 0xff); } best_row = png_ptr->avg_row; } else if (filter_to_do & PNG_FILTER_AVG) { png_bytep rp, dp, pp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 lmhi, lmlo; lmlo = lmins & PNG_LOMASK; lmhi = (lmins >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_AVG) { lmlo = (lmlo * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } lmlo = (lmlo * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_AVG]) >> PNG_COST_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_AVG]) >> PNG_COST_SHIFT; if (lmhi > PNG_HIMASK) lmins = PNG_MAXSUM; else lmins = (lmhi << PNG_HISHIFT) + lmlo; } #endif for (i = 0, rp = row_buf + 1, dp = png_ptr->avg_row + 1, pp = prev_row + 1; i < bpp; i++) { v = *dp++ = (png_byte)(((int)*rp++ - ((int)*pp++ / 2)) & 0xff); sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1; i < row_bytes; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (((int)*pp++ + (int)*lp++) / 2)) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 sumhi, sumlo; sumlo = sum & PNG_LOMASK; sumhi = (sum >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_NONE) { sumlo = (sumlo * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; sumhi = (sumhi * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } sumlo = (sumlo * png_ptr->filter_costs[PNG_FILTER_VALUE_AVG]) >> PNG_COST_SHIFT; sumhi = (sumhi * png_ptr->filter_costs[PNG_FILTER_VALUE_AVG]) >> PNG_COST_SHIFT; if (sumhi > PNG_HIMASK) sum = PNG_MAXSUM; else sum = (sumhi << PNG_HISHIFT) + sumlo; } #endif if (sum < mins) { mins = sum; best_row = png_ptr->avg_row; } } /* Paeth filter */ if (filter_to_do == PNG_FILTER_PAETH) { png_bytep rp, dp, pp, cp, lp; png_uint_32 i; for (i = 0, rp = row_buf + 1, dp = png_ptr->paeth_row + 1, pp = prev_row + 1; i < bpp; i++) { *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); } for (lp = row_buf + 1, cp = prev_row + 1; i < row_bytes; i++) { int a, b, c, pa, pb, pc, p; b = *pp++; c = *cp++; a = *lp++; p = b - c; pc = a - c; #ifdef PNG_USE_ABS pa = abs(p); pb = abs(pc); pc = abs(p + pc); #else pa = p < 0 ? -p : p; pb = pc < 0 ? -pc : pc; pc = (p + pc) < 0 ? -(p + pc) : p + pc; #endif p = (pa <= pb && pa <=pc) ? a : (pb <= pc) ? b : c; *dp++ = (png_byte)(((int)*rp++ - p) & 0xff); } best_row = png_ptr->paeth_row; } else if (filter_to_do & PNG_FILTER_PAETH) { png_bytep rp, dp, pp, cp, lp; png_uint_32 sum = 0, lmins = mins; png_uint_32 i; int v; #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 lmhi, lmlo; lmlo = lmins & PNG_LOMASK; lmhi = (lmins >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_PAETH) { lmlo = (lmlo * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } lmlo = (lmlo * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_PAETH]) >> PNG_COST_SHIFT; lmhi = (lmhi * png_ptr->inv_filter_costs[PNG_FILTER_VALUE_PAETH]) >> PNG_COST_SHIFT; if (lmhi > PNG_HIMASK) lmins = PNG_MAXSUM; else lmins = (lmhi << PNG_HISHIFT) + lmlo; } #endif for (i = 0, rp = row_buf + 1, dp = png_ptr->paeth_row + 1, pp = prev_row + 1; i < bpp; i++) { v = *dp++ = (png_byte)(((int)*rp++ - (int)*pp++) & 0xff); sum += (v < 128) ? v : 256 - v; } for (lp = row_buf + 1, cp = prev_row + 1; i < row_bytes; i++) { int a, b, c, pa, pb, pc, p; b = *pp++; c = *cp++; a = *lp++; #ifndef PNG_SLOW_PAETH p = b - c; pc = a - c; #ifdef PNG_USE_ABS pa = abs(p); pb = abs(pc); pc = abs(p + pc); #else pa = p < 0 ? -p : p; pb = pc < 0 ? -pc : pc; pc = (p + pc) < 0 ? -(p + pc) : p + pc; #endif p = (pa <= pb && pa <=pc) ? a : (pb <= pc) ? b : c; #else /* PNG_SLOW_PAETH */ p = a + b - c; pa = abs(p - a); pb = abs(p - b); pc = abs(p - c); if (pa <= pb && pa <= pc) p = a; else if (pb <= pc) p = b; else p = c; #endif /* PNG_SLOW_PAETH */ v = *dp++ = (png_byte)(((int)*rp++ - p) & 0xff); sum += (v < 128) ? v : 256 - v; if (sum > lmins) /* We are already worse, don't continue. */ break; } #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED if (png_ptr->heuristic_method == PNG_FILTER_HEURISTIC_WEIGHTED) { int j; png_uint_32 sumhi, sumlo; sumlo = sum & PNG_LOMASK; sumhi = (sum >> PNG_HISHIFT) & PNG_HIMASK; for (j = 0; j < num_p_filters; j++) { if (png_ptr->prev_filters[j] == PNG_FILTER_VALUE_PAETH) { sumlo = (sumlo * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; sumhi = (sumhi * png_ptr->filter_weights[j]) >> PNG_WEIGHT_SHIFT; } } sumlo = (sumlo * png_ptr->filter_costs[PNG_FILTER_VALUE_PAETH]) >> PNG_COST_SHIFT; sumhi = (sumhi * png_ptr->filter_costs[PNG_FILTER_VALUE_PAETH]) >> PNG_COST_SHIFT; if (sumhi > PNG_HIMASK) sum = PNG_MAXSUM; else sum = (sumhi << PNG_HISHIFT) + sumlo; } #endif if (sum < mins) { best_row = png_ptr->paeth_row; } } #endif /* PNG_WRITE_FILTER_SUPPORTED */ /* Do the actual writing of the filtered row data from the chosen filter. */ png_write_filtered_row(png_ptr, best_row); #ifdef PNG_WRITE_FILTER_SUPPORTED #ifdef PNG_WRITE_WEIGHTED_FILTER_SUPPORTED /* Save the type of filter we picked this time for future calculations */ if (png_ptr->num_prev_filters > 0) { int j; for (j = 1; j < num_p_filters; j++) { png_ptr->prev_filters[j] = png_ptr->prev_filters[j - 1]; } png_ptr->prev_filters[j] = best_row[0]; } #endif #endif /* PNG_WRITE_FILTER_SUPPORTED */ }
C
Chrome
1
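For context, the filter-selection code in the record above implements libpng's "minimum sum of absolute differences" heuristic: each candidate filter produces a transformed row, and the row whose bytes (read as signed magnitudes) sum to the smallest value wins. The following minimal, self-contained C sketch shows just that core idea; the function names and simplifications are ours, not libpng's API.

#include <stddef.h>

/* Sum each byte as a signed magnitude: values >= 128 count as 256 - v,
 * mirroring the "(v < 128) ? v : 256 - v" accumulation in the record above. */
unsigned long abs_sum(const unsigned char *row, size_t n)
{
    unsigned long sum = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        int v = row[i];
        sum += (v < 128) ? v : 256 - v;
    }
    return sum;
}

/* Given one candidate row per filter type, return the index of the row
 * with the smallest sum, i.e. the filter the heuristic would choose. */
int pick_filter(const unsigned char *cand[], int ncand, size_t n)
{
    int best = 0, f;
    unsigned long mins = abs_sum(cand[0], n);

    for (f = 1; f < ncand; f++) {
        unsigned long s = abs_sum(cand[f], n);
        if (s < mins) {
            mins = s;
            best = f;
        }
    }
    return best;
}

The real function additionally applies the weighted-heuristic scaling (filter_weights/filter_costs) seen above and bails out of a candidate early once its running sum exceeds the current minimum; the sketch omits both refinements.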
CVE-2013-7271
https://www.cvedetails.com/cve/CVE-2013-7271/
CWE-20
https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
net: rework recvmsg handler msg_name and msg_namelen logic

This patch now always passes msg->msg_namelen as 0. recvmsg handlers must set msg_namelen to the proper size <= sizeof(struct sockaddr_storage) to return msg_name to the user.

This prevents numerous uninitialized memory leaks we had in the recvmsg handlers and makes it harder for new code to accidentally leak uninitialized memory.

Optimize for the case recvfrom is called with NULL as address. We don't need to copy the address at all, so set it to NULL before invoking the recvmsg handler. We can do so, because all the recvmsg handlers must cope with the case a plain read() is called on them. read() also sets msg_name to NULL.

Also document these changes in include/linux/net.h as suggested by David Miller.

Changes since RFC:

Set msg->msg_name = NULL if user specified a NULL in msg_name but had a non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't affect sendto as it would bail out earlier while trying to copy-in the address. It also more naturally reflects the logic by the callers of verify_iovec.

With this change in place I could remove "if (!uaddr || msg_sys->msg_namelen == 0) msg->msg_name = NULL".

This change does not alter the user visible error logic as we ignore msg_namelen as long as msg_name is NULL.

Also remove two unnecessary curly brackets in ___sys_recvmsg and change comments to netdev style.

Cc: David Miller <[email protected]>
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
static int caif_create(struct net *net, struct socket *sock, int protocol,
                       int kern)
{
        struct sock *sk = NULL;
        struct caifsock *cf_sk = NULL;
        static struct proto prot = {.name = "PF_CAIF",
                .owner = THIS_MODULE,
                .obj_size = sizeof(struct caifsock),
        };

        if (!capable(CAP_SYS_ADMIN) && !capable(CAP_NET_ADMIN))
                return -EPERM;
        /*
         * The sock->type specifies the socket type to use.
         * The CAIF socket is a packet stream in the sense
         * that it is packet based. CAIF trusts the reliability
         * of the link, no resending is implemented.
         */
        if (sock->type == SOCK_SEQPACKET)
                sock->ops = &caif_seqpacket_ops;
        else if (sock->type == SOCK_STREAM)
                sock->ops = &caif_stream_ops;
        else
                return -ESOCKTNOSUPPORT;

        if (protocol < 0 || protocol >= CAIFPROTO_MAX)
                return -EPROTONOSUPPORT;
        /*
         * Set the socket state to unconnected. The socket state
         * is really not used at all in the net/core or socket.c but the
         * initialization makes sure that sock->state is not uninitialized.
         */
        sk = sk_alloc(net, PF_CAIF, GFP_KERNEL, &prot);
        if (!sk)
                return -ENOMEM;

        cf_sk = container_of(sk, struct caifsock, sk);

        /* Store the protocol */
        sk->sk_protocol = (unsigned char) protocol;

        /* Initialize default priority for well-known cases */
        switch (protocol) {
        case CAIFPROTO_AT:
                sk->sk_priority = TC_PRIO_CONTROL;
                break;
        case CAIFPROTO_RFM:
                sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
                break;
        default:
                sk->sk_priority = TC_PRIO_BESTEFFORT;
        }

        /*
         * Lock in order to try to stop someone from opening the socket
         * too early.
         */
        lock_sock(&(cf_sk->sk));

        /* Initialize the nozero default sock structure data. */
        sock_init_data(sock, sk);
        sk->sk_destruct = caif_sock_destructor;

        mutex_init(&cf_sk->readlock); /* single task reading lock */
        cf_sk->layer.ctrlcmd = caif_ctrl_cb;
        cf_sk->sk.sk_socket->state = SS_UNCONNECTED;
        cf_sk->sk.sk_state = CAIF_DISCONNECTED;

        set_tx_flow_off(cf_sk);
        set_rx_flow_on(cf_sk);

        /* Set default options on configuration */
        cf_sk->conn_req.link_selector = CAIF_LINK_LOW_LATENCY;
        cf_sk->conn_req.protocol = protocol;
        release_sock(&cf_sk->sk);
        return 0;
}
static int caif_create(struct net *net, struct socket *sock, int protocol,
                       int kern)
{
        struct sock *sk = NULL;
        struct caifsock *cf_sk = NULL;
        static struct proto prot = {.name = "PF_CAIF",
                .owner = THIS_MODULE,
                .obj_size = sizeof(struct caifsock),
        };

        if (!capable(CAP_SYS_ADMIN) && !capable(CAP_NET_ADMIN))
                return -EPERM;
        /*
         * The sock->type specifies the socket type to use.
         * The CAIF socket is a packet stream in the sense
         * that it is packet based. CAIF trusts the reliability
         * of the link, no resending is implemented.
         */
        if (sock->type == SOCK_SEQPACKET)
                sock->ops = &caif_seqpacket_ops;
        else if (sock->type == SOCK_STREAM)
                sock->ops = &caif_stream_ops;
        else
                return -ESOCKTNOSUPPORT;

        if (protocol < 0 || protocol >= CAIFPROTO_MAX)
                return -EPROTONOSUPPORT;
        /*
         * Set the socket state to unconnected. The socket state
         * is really not used at all in the net/core or socket.c but the
         * initialization makes sure that sock->state is not uninitialized.
         */
        sk = sk_alloc(net, PF_CAIF, GFP_KERNEL, &prot);
        if (!sk)
                return -ENOMEM;

        cf_sk = container_of(sk, struct caifsock, sk);

        /* Store the protocol */
        sk->sk_protocol = (unsigned char) protocol;

        /* Initialize default priority for well-known cases */
        switch (protocol) {
        case CAIFPROTO_AT:
                sk->sk_priority = TC_PRIO_CONTROL;
                break;
        case CAIFPROTO_RFM:
                sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
                break;
        default:
                sk->sk_priority = TC_PRIO_BESTEFFORT;
        }

        /*
         * Lock in order to try to stop someone from opening the socket
         * too early.
         */
        lock_sock(&(cf_sk->sk));

        /* Initialize the nozero default sock structure data. */
        sock_init_data(sock, sk);
        sk->sk_destruct = caif_sock_destructor;

        mutex_init(&cf_sk->readlock); /* single task reading lock */
        cf_sk->layer.ctrlcmd = caif_ctrl_cb;
        cf_sk->sk.sk_socket->state = SS_UNCONNECTED;
        cf_sk->sk.sk_state = CAIF_DISCONNECTED;

        set_tx_flow_off(cf_sk);
        set_rx_flow_on(cf_sk);

        /* Set default options on configuration */
        cf_sk->conn_req.link_selector = CAIF_LINK_LOW_LATENCY;
        cf_sk->conn_req.protocol = protocol;
        release_sock(&cf_sk->sk);
        return 0;
}
C
linux
0
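The commit message in this record describes a contract, not visible in the unchanged caif_create function itself: the socket core always passes msg_namelen down as 0, and a recvmsg handler that wants to return an address must both fill msg_name and set msg_namelen. The toy C model below illustrates that contract with made-up structures; it is a sketch of the pattern, not kernel code.

#include <string.h>

/* Toy stand-ins for sockaddr/msghdr; the field names mimic the kernel's
 * but everything here is simplified for illustration. */
struct toy_sockaddr { unsigned short family; char data[14]; };
struct toy_msghdr   { void *msg_name; int msg_namelen; };

/* A handler that reports no address simply leaves msg_namelen at 0, so no
 * uninitialized bytes can reach user space. One that does report an address
 * zeroes the buffer (no padding leaks) and sets msg_namelen itself. */
void toy_handler_recvmsg(struct toy_msghdr *msg)
{
    if (msg->msg_name) {
        struct toy_sockaddr *sa = msg->msg_name;

        memset(sa, 0, sizeof(*sa));     /* no padding leaks */
        sa->family = 1;                 /* hypothetical address family */
        msg->msg_namelen = sizeof(*sa); /* handler, not caller, sets this */
    }
}

/* The core side of the contract: pass msg_namelen as 0, and only copy an
 * address out if the handler explicitly set a nonzero length. */
void toy_core_recvfrom(struct toy_msghdr *msg, void *ubuf)
{
    msg->msg_namelen = 0;
    toy_handler_recvmsg(msg);
    if (ubuf && msg->msg_namelen > 0)
        memcpy(ubuf, msg->msg_name, (size_t)msg->msg_namelen);
}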
CVE-2011-4914
https://www.cvedetails.com/cve/CVE-2011-4914/
CWE-20
https://github.com/torvalds/linux/commit/e0bccd315db0c2f919e7fcf9cb60db21d9986f52
e0bccd315db0c2f919e7fcf9cb60db21d9986f52
rose: Add length checks to CALL_REQUEST parsing

Define some constant offsets for CALL_REQUEST based on the description at <http://www.techfest.com/networking/wan/x25plp.htm> and the definition of ROSE as using 10-digit (5-byte) addresses. Use them consistently. Validate all implicit and explicit facilities lengths. Validate the address length byte rather than either trusting or assuming its value.

Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
void rose_kill_by_neigh(struct rose_neigh *neigh)
{
        struct sock *s;
        struct hlist_node *node;

        spin_lock_bh(&rose_list_lock);
        sk_for_each(s, node, &rose_list) {
                struct rose_sock *rose = rose_sk(s);

                if (rose->neighbour == neigh) {
                        rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
                        rose->neighbour->use--;
                        rose->neighbour = NULL;
                }
        }
        spin_unlock_bh(&rose_list_lock);
}
void rose_kill_by_neigh(struct rose_neigh *neigh)
{
        struct sock *s;
        struct hlist_node *node;

        spin_lock_bh(&rose_list_lock);
        sk_for_each(s, node, &rose_list) {
                struct rose_sock *rose = rose_sk(s);

                if (rose->neighbour == neigh) {
                        rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
                        rose->neighbour->use--;
                        rose->neighbour = NULL;
                }
        }
        spin_unlock_bh(&rose_list_lock);
}
C
linux
0
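The fix this record points to is about validating lengths before indexing into a CALL_REQUEST packet. The C sketch below shows the general bounds-checking shape the commit message describes; the packet layout, offsets, and names here are invented for the sketch (a real ROSE CALL_REQUEST carries more fields), so only the checking pattern should be taken from it.

#include <stddef.h>

#define TOY_ADDR_LEN 5  /* ROSE addresses are 10 digits = 5 bytes */

/* Parse: [addr-length byte][address][facilities-length byte][facilities].
 * Every read is preceded by a check that the byte actually exists, and the
 * address length byte is validated rather than trusted. Returns 0 on
 * success, -1 on a malformed packet. */
int toy_parse_call_request(const unsigned char *p, size_t len)
{
    size_t fac_len;

    /* Need the addr-length byte, the address, and the facilities length. */
    if (len < 1 + TOY_ADDR_LEN + 1)
        return -1;

    /* Validate the address length byte instead of assuming its value. */
    if (p[0] != TOY_ADDR_LEN)
        return -1;

    /* The declared facilities length must fit inside the packet. */
    fac_len = p[1 + TOY_ADDR_LEN];
    if (fac_len > len - (1 + TOY_ADDR_LEN + 1))
        return -1;

    return 0;
}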
CVE-2015-6784
https://www.cvedetails.com/cve/CVE-2015-6784/
CWE-20
https://github.com/chromium/chromium/commit/a81593e7f162428585832ac8f6e71f75592b53e7
a81593e7f162428585832ac8f6e71f75592b53e7
Escape "--" in the page URL at page serialization This patch makes page serializer to escape the page URL embed into a HTML comment of result HTML[1] to avoid inserting text as HTML from URL by introducing a static member function |PageSerialzier::markOfTheWebDeclaration()| for sharing it between |PageSerialzier| and |WebPageSerialzier| classes. [1] We use following format for serialized HTML: saved from url=(${lengthOfURL})${URL} BUG=503217 TEST=webkit_unit_tests --gtest_filter=PageSerializerTest.markOfTheWebDeclaration TEST=webkit_unit_tests --gtest_filter=WebPageSerializerTest.fromUrlWithMinusMinu Review URL: https://codereview.chromium.org/1371323003 Cr-Commit-Position: refs/heads/master@{#351736}
void PageSerializer::registerRewriteURL(const String& from, const String& to)
{
    m_rewriteURLs.set(from, to);
}
void PageSerializer::registerRewriteURL(const String& from, const String& to)
{
    m_rewriteURLs.set(from, to);
}
C
Chrome
0
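The danger the last record's commit message addresses is that a URL containing "--" can terminate the HTML comment it is embedded in. The small C demo below shows the idea of breaking up "--" pairs before writing the mark-of-the-web comment; the %2D encoding choice is ours for the sketch and not necessarily what Blink emits, and the real declaration also records the URL's length, which we omit here for brevity.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *url = "http://example.com/a--b";
    size_t i, n = strlen(url);

    fputs("<!-- saved from url=", stdout);
    for (i = 0; i < n; i++) {
        /* Whenever a '-' is followed by another '-', encode the first one
         * so the output can never contain the comment-closing "--". */
        if (url[i] == '-' && i + 1 < n && url[i + 1] == '-')
            fputs("%2D", stdout);
        else
            fputc(url[i], stdout);
    }
    fputs(" -->\n", stdout);
    return 0;
}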