Dataset schema (column, dtype, observed range); each record below repeats these eight fields in this order:

    func        string   length 0 to 484k
    target      int64    0 to 1
    cwe         list     0 to 4 entries
    project     string   799 distinct values
    commit_id   string   length fixed at 40
    hash        float64  1,215,700,430,453,689,100,000,000B to 340,281,914,521,452,260,000,000,000,000B
    size        int64    1 to 24k
    message     string   length 0 to 13.3k
static void tcp_skb_fragment_eor(struct sk_buff *skb, struct sk_buff *skb2) { TCP_SKB_CB(skb2)->eor = TCP_SKB_CB(skb)->eor; TCP_SKB_CB(skb)->eor = 0; }
target: 0
cwe: ["CWE-190"]
project: net
commit_id: 3b4929f65b0d8249f19a50245cd88ed1a2f78cff
hash: 197,666,708,546,721,340,000,000,000,000,000,000,000
size: 5
tcp: limit payload size of sacked skbs Jonathan Looney reported that TCP can trigger the following crash in tcp_shifted_skb() : BUG_ON(tcp_skb_pcount(skb) < pcount); This can happen if the remote peer has advertized the smallest MSS that linux TCP accepts : 48 An skb can hold 17 fragments, and each fragment can hold 32KB on x86, or 64KB on PowerPC. This means that the 16bit witdh of TCP_SKB_CB(skb)->tcp_gso_segs can overflow. Note that tcp_sendmsg() builds skbs with less than 64KB of payload, so this problem needs SACK to be enabled. SACK blocks allow TCP to coalesce multiple skbs in the retransmit queue, thus filling the 17 fragments to maximal capacity. CVE-2019-11477 -- u16 overflow of TCP_SKB_CB(skb)->tcp_gso_segs Fixes: 832d11c5cd07 ("tcp: Try to restore large SKBs while SACK processing") Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Jonathan Looney <[email protected]> Acked-by: Neal Cardwell <[email protected]> Reviewed-by: Tyler Hicks <[email protected]> Cc: Yuchung Cheng <[email protected]> Cc: Bruce Curtis <[email protected]> Cc: Jonathan Lemon <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static void TagToCoderModuleName(const char *tag,char *name) { assert(tag != (char *) NULL); (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",tag); assert(name != (char *) NULL); #if defined(MAGICKCORE_LTDL_DELEGATE) (void) FormatLocaleString(name,MagickPathExtent,"%s.la",tag); (void) LocaleLower(name); #else #if defined(MAGICKCORE_WINDOWS_SUPPORT) if (LocaleNCompare("IM_MOD_",tag,7) == 0) (void) CopyMagickString(name,tag,MagickPathExtent); else { #if defined(_DEBUG) (void) FormatLocaleString(name,MagickPathExtent,"IM_MOD_DB_%s_.dll",tag); #else (void) FormatLocaleString(name,MagickPathExtent,"IM_MOD_RL_%s_.dll",tag); #endif } #endif #endif }
target: 0
cwe: ["CWE-200", "CWE-362"]
project: ImageMagick
commit_id: 01faddbe2711a4156180c4a92837e2f23683cc68
hash: 273,084,858,565,211,830,000,000,000,000,000,000,000
size: 23
Use the correct rights.
ex_pclose(exarg_T *eap) { win_T *win; // First close any normal window. FOR_ALL_WINDOWS(win) if (win->w_p_pvw) { ex_win_close(eap->forceit, win, NULL); return; } # ifdef FEAT_PROP_POPUP // Also when 'previewpopup' is empty, it might have been cleared. popup_close_preview(); # endif }
target: 0
cwe: ["CWE-122"]
project: vim
commit_id: 35a319b77f897744eec1155b736e9372c9c5575f
hash: 239,333,767,403,571,960,000,000,000,000,000,000,000
size: 16
patch 8.2.3489: ml_get error after search with range Problem: ml_get error after search with range. Solution: Limit the line number to the buffer line count.
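A minimal sketch of the clamp described in that solution line, assuming Vim's usual internals (lnum here is a hypothetical range line number; curbuf->b_ml.ml_line_count is the real buffer line count):

    /* Limit a line number coming from a search-with-range so that
     * later ml_get() calls never see a line past the buffer end. */
    if (lnum > curbuf->b_ml.ml_line_count)
        lnum = curbuf->b_ml.ml_line_count;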
ipcp_write_summary (void) { ipa_prop_write_jump_functions (); }
target: 0
cwe: ["CWE-20"]
project: gcc
commit_id: a09ccc22459c565814f79f96586fe4ad083fe4eb
hash: 313,312,142,728,003,600,000,000,000,000,000,000,000
size: 4
Avoid segfault when doing IPA-VRP but not IPA-CP (PR 93015) 2019-12-21 Martin Jambor <[email protected]> PR ipa/93015 * ipa-cp.c (ipcp_store_vr_results): Check that info exists testsuite/ * gcc.dg/lto/pr93015_0.c: New test. From-SVN: r279695
set_freq(double freq) /* frequency update */ { char tbuf[80]; drift_comp = freq; #ifdef KERNEL_PLL /* * If the kernel is enabled, update the kernel frequency. */ if (pll_control && kern_enable) { memset(&ntv, 0, sizeof(ntv)); ntv.modes = MOD_FREQUENCY; ntv.freq = DTOFREQ(drift_comp); ntp_adjtime(&ntv); snprintf(tbuf, sizeof(tbuf), "kernel %.3f PPM", drift_comp * 1e6); report_event(EVNT_FSET, NULL, tbuf); } else { snprintf(tbuf, sizeof(tbuf), "ntpd %.3f PPM", drift_comp * 1e6); report_event(EVNT_FSET, NULL, tbuf); } #else /* KERNEL_PLL */ snprintf(tbuf, sizeof(tbuf), "ntpd %.3f PPM", drift_comp * 1e6); report_event(EVNT_FSET, NULL, tbuf); #endif /* KERNEL_PLL */ }
target: 0
cwe: ["CWE-399"]
project: busybox
commit_id: 150dc7a2b483b8338a3e185c478b4b23ee884e71
hash: 268,818,521,746,595,600,000,000,000,000,000,000,000
size: 26
ntpd: respond only to client and symmetric active packets The busybox NTP implementation doesn't check the NTP mode of packets received on the server port and responds to any packet with the right size. This includes responses from another NTP server. An attacker can send a packet with a spoofed source address in order to create an infinite loop of responses between two busybox NTP servers. Adding more packets to the loop increases the traffic between the servers until one of them has a fully loaded CPU and/or network. Signed-off-by: Miroslav Lichvar <[email protected]> Signed-off-by: Denys Vlasenko <[email protected]>
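A minimal sketch of the missing check this message describes, assuming the standard NTP header layout (mode is the low three bits of the first byte; names illustrative):

    #define NTP_MODE_MASK    0x07
    #define NTP_MODE_SYM_ACT 1          /* symmetric active */
    #define NTP_MODE_CLIENT  3          /* client */

    /* Respond only to client and symmetric-active packets, so a
     * reply from another server can never trigger a response loop. */
    static int ntp_should_respond(unsigned char li_vn_mode)
    {
        unsigned mode = li_vn_mode & NTP_MODE_MASK;
        return mode == NTP_MODE_CLIENT || mode == NTP_MODE_SYM_ACT;
    }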
int ipv6_chk_addr(struct net *net, const struct in6_addr *addr, const struct net_device *dev, int strict) { struct inet6_ifaddr *ifp; unsigned int hash = inet6_addr_hash(addr); rcu_read_lock_bh(); hlist_for_each_entry_rcu(ifp, &inet6_addr_lst[hash], addr_lst) { if (!net_eq(dev_net(ifp->idev->dev), net)) continue; if (ipv6_addr_equal(&ifp->addr, addr) && !(ifp->flags&IFA_F_TENTATIVE) && (dev == NULL || ifp->idev->dev == dev || !(ifp->scope&(IFA_LINK|IFA_HOST) || strict))) { rcu_read_unlock_bh(); return 1; } } rcu_read_unlock_bh(); return 0; }
target: 0
cwe: []
project: net
commit_id: 4b08a8f1bd8cb4541c93ec170027b4d0782dab52
hash: 93,153,055,657,411,900,000,000,000,000,000,000,000
size: 22
ipv6: remove max_addresses check from ipv6_create_tempaddr Because of the max_addresses check attackers were able to disable privacy extensions on an interface by creating enough autoconfigured addresses: <http://seclists.org/oss-sec/2012/q4/292> But the check is not actually needed: max_addresses protects the kernel to install too many ipv6 addresses on an interface and guards addrconf_prefix_rcv to install further addresses as soon as this limit is reached. We only generate temporary addresses in direct response of a new address showing up. As soon as we filled up the maximum number of addresses of an interface, we stop installing more addresses and thus also stop generating more temp addresses. Even if the attacker tries to generate a lot of temporary addresses by announcing a prefix and removing it again (lifetime == 0) we won't install more temp addresses, because the temporary addresses do count to the maximum number of addresses, thus we would stop installing new autoconfigured addresses when the limit is reached. This patch fixes CVE-2013-0343 (but other layer-2 attacks are still possible). Thanks to Ding Tianhong to bring this topic up again. Cc: Ding Tianhong <[email protected]> Cc: George Kargiotakis <[email protected]> Cc: P J P <[email protected]> Cc: YOSHIFUJI Hideaki <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Acked-by: Ding Tianhong <[email protected]> Signed-off-by: David S. Miller <[email protected]>
*/ int netif_set_real_num_rx_queues(struct net_device *dev, unsigned int rxq) { int rc; if (rxq < 1 || rxq > dev->num_rx_queues) return -EINVAL; if (dev->reg_state == NETREG_REGISTERED) { ASSERT_RTNL(); rc = net_rx_queue_update_kobjects(dev, dev->real_num_rx_queues, rxq); if (rc) return rc; } dev->real_num_rx_queues = rxq; return 0;
target: 0
cwe: ["CWE-400", "CWE-703"]
project: linux
commit_id: fac8e0f579695a3ecbc4d3cac369139d7f819971
hash: 279,463,337,291,487,400,000,000,000,000,000,000,000
size: 19
tunnels: Don't apply GRO to multiple layers of encapsulation. When drivers express support for TSO of encapsulated packets, they only mean that they can do it for one layer of encapsulation. Supporting additional levels would mean updating, at a minimum, more IP length fields and they are unaware of this. No encapsulation device expresses support for handling offloaded encapsulated packets, so we won't generate these types of frames in the transmit path. However, GRO doesn't have a check for multiple levels of encapsulation and will attempt to build them. UDP tunnel GRO actually does prevent this situation but it only handles multiple UDP tunnels stacked on top of each other. This generalizes that solution to prevent any kind of tunnel stacking that would cause problems. Fixes: bf5a755f ("net-gre-gro: Add GRE support to the GRO stack") Signed-off-by: Jesse Gross <[email protected]> Signed-off-by: David S. Miller <[email protected]>
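A sketch of the generalized check this message describes: tag a packet the first time a tunnel GRO handler sees it, and refuse to aggregate a second encapsulation layer (the real patch adds an encap_mark bit to struct napi_gro_cb for this; error handling elided):

    /* Inside a tunnel's gro_receive handler: */
    if (NAPI_GRO_CB(skb)->encap_mark)
        goto out;                    /* already one tunnel layer: stop */
    NAPI_GRO_CB(skb)->encap_mark = 1;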
static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx) { mutex_lock(&ctx->uring_lock); percpu_ref_kill(&ctx->refs); mutex_unlock(&ctx->uring_lock); /* * Wait for sq thread to idle, if we have one. It won't spin on new * work after we've killed the ctx ref above. This is important to do * before we cancel existing commands, as the thread could otherwise * be queueing new work post that. If that's work we need to cancel, * it could cause shutdown to hang. */ while (ctx->sqo_thread && !wq_has_sleeper(&ctx->sqo_wait)) cpu_relax(); io_kill_timeouts(ctx); io_poll_remove_all(ctx); if (ctx->io_wq) io_wq_cancel_all(ctx->io_wq); io_iopoll_reap_events(ctx); /* if we failed setting up the ctx, we might not have any rings */ if (ctx->rings) io_cqring_overflow_flush(ctx, true); idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx); wait_for_completion(&ctx->completions[0]); io_ring_ctx_free(ctx); }
target: 0
cwe: []
project: linux
commit_id: ff002b30181d30cdfbca316dadd099c3ca0d739c
hash: 282,618,109,793,633,470,000,000,000,000,000,000,000
size: 30
io_uring: grab ->fs as part of async preparation This passes it in to io-wq, so it assumes the right fs_struct when executing async work that may need to do lookups. Cc: [email protected] # 5.3+ Signed-off-by: Jens Axboe <[email protected]>
static int lookup_index_alloc( void **out, unsigned long *out_len, size_t entries, size_t hash_count) { size_t entries_len, hash_len, index_len; GITERR_CHECK_ALLOC_MULTIPLY(&entries_len, entries, sizeof(struct index_entry)); GITERR_CHECK_ALLOC_MULTIPLY(&hash_len, hash_count, sizeof(struct index_entry *)); GITERR_CHECK_ALLOC_ADD(&index_len, sizeof(struct git_delta_index), entries_len); GITERR_CHECK_ALLOC_ADD(&index_len, index_len, hash_len); if (!git__is_ulong(index_len)) { giterr_set(GITERR_NOMEMORY, "overly large delta"); return -1; } *out = git__malloc(index_len); GITERR_CHECK_ALLOC(*out); *out_len = index_len; return 0; }
target: 0
cwe: ["CWE-190", "CWE-125"]
project: libgit2
commit_id: 3f461902dc1072acb8b7607ee65d0a0458ffac2a
hash: 132,887,721,078,250,640,000,000,000,000,000,000,000
size: 22
delta: fix sign-extension of big left-shift Our delta code was originally adapted from JGit, which itself adapted it from git itself. Due to this heritage, we inherited a bug from git.git in how we compute the delta offset, which was fixed upstream in 48fb7deb5 (Fix big left-shifts of unsigned char, 2009-06-17). As explained by Linus: Shifting 'unsigned char' or 'unsigned short' left can result in sign extension errors, since the C integer promotion rules means that the unsigned char/short will get implicitly promoted to a signed 'int' due to the shift (or due to other operations). This normally doesn't matter, but if you shift things up sufficiently, it will now set the sign bit in 'int', and a subsequent cast to a bigger type (eg 'long' or 'unsigned long') will now sign-extend the value despite the original expression being unsigned. One example of this would be something like unsigned long size; unsigned char c; size += c << 24; where despite all the variables being unsigned, 'c << 24' ends up being a signed entity, and will get sign-extended when then doing the addition in an 'unsigned long' type. Since git uses 'unsigned char' pointers extensively, we actually have this bug in a couple of places. In our delta code, we inherited such a bogus shift when computing the offset at which the delta base is to be found. Due to the sign extension we can end up with an offset where all the bits are set. This can allow an arbitrary memory read, as the addition in `base_len < off + len` can now overflow if `off` has all its bits set. Fix the issue by casting the result of `*delta++ << 24UL` to an unsigned integer again. Add a test with a crafted delta that would actually succeed with an out-of-bounds read in case where the cast wouldn't exist. Reported-by: Riccardo Schirone <[email protected]> Test-provided-by: Riccardo Schirone <[email protected]>
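The "size += c << 24" example in the message above is worth seeing end to end. A minimal standalone sketch of the bug and the cast that fixes it (values illustrative; formally the shift into the sign bit is undefined behavior, which is exactly the trap being described):

    #include <stdio.h>

    int main(void)
    {
        unsigned char c = 0x80;
        unsigned long size = 0;

        /* c promotes to signed int, so c << 24 sets the sign bit and
         * the widening to unsigned long sign-extends on 64-bit. */
        size += c << 24;
        printf("buggy: %#lx\n", size);   /* 0xffffffff80000000 on LP64 */

        size = 0;
        size += (unsigned int)c << 24;   /* fix: keep the shift unsigned */
        printf("fixed: %#lx\n", size);   /* 0x80000000 */
        return 0;
    }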
lka_report_smtp_link_identify(const char *direction, struct timeval *tv, uint64_t reqid, const char *method, const char *heloname) { report_smtp_broadcast(reqid, direction, tv, "link-identify", "%s|%s\n", method, heloname); }
target: 0
cwe: ["CWE-476"]
project: src
commit_id: 6c3220444ed06b5796dedfd53a0f4becd903c0d1
hash: 274,339,643,401,177,300,000,000,000,000,000,000,000
size: 6
smtpd's filter state machine can prematurely release resources leading to a crash. From gilles@
static struct inet6_dev *ipv6_add_dev(struct net_device *dev) { struct inet6_dev *ndev; ASSERT_RTNL(); if (dev->mtu < IPV6_MIN_MTU) return NULL; ndev = kzalloc(sizeof(struct inet6_dev), GFP_KERNEL); if (ndev == NULL) return NULL; rwlock_init(&ndev->lock); ndev->dev = dev; INIT_LIST_HEAD(&ndev->addr_list); setup_timer(&ndev->rs_timer, addrconf_rs_timer, (unsigned long)ndev); memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf)); ndev->cnf.mtu6 = dev->mtu; ndev->cnf.sysctl = NULL; ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl); if (ndev->nd_parms == NULL) { kfree(ndev); return NULL; } if (ndev->cnf.forwarding) dev_disable_lro(dev); /* We refer to the device */ dev_hold(dev); if (snmp6_alloc_dev(ndev) < 0) { ADBG((KERN_WARNING "%s: cannot allocate memory for statistics; dev=%s.\n", __func__, dev->name)); neigh_parms_release(&nd_tbl, ndev->nd_parms); dev_put(dev); kfree(ndev); return NULL; } if (snmp6_register_dev(ndev) < 0) { ADBG((KERN_WARNING "%s: cannot create /proc/net/dev_snmp6/%s\n", __func__, dev->name)); neigh_parms_release(&nd_tbl, ndev->nd_parms); ndev->dead = 1; in6_dev_finish_destroy(ndev); return NULL; } /* One reference from device. We must do this before * we invoke __ipv6_regen_rndid(). */ in6_dev_hold(ndev); if (dev->flags & (IFF_NOARP | IFF_LOOPBACK)) ndev->cnf.accept_dad = -1; #if IS_ENABLED(CONFIG_IPV6_SIT) if (dev->type == ARPHRD_SIT && (dev->priv_flags & IFF_ISATAP)) { pr_info("%s: Disabled Multicast RS\n", dev->name); ndev->cnf.rtr_solicits = 0; } #endif #ifdef CONFIG_IPV6_PRIVACY INIT_LIST_HEAD(&ndev->tempaddr_list); setup_timer(&ndev->regen_timer, ipv6_regen_rndid, (unsigned long)ndev); if ((dev->flags&IFF_LOOPBACK) || dev->type == ARPHRD_TUNNEL || dev->type == ARPHRD_TUNNEL6 || dev->type == ARPHRD_SIT || dev->type == ARPHRD_NONE) { ndev->cnf.use_tempaddr = -1; } else { in6_dev_hold(ndev); ipv6_regen_rndid((unsigned long) ndev); } #endif ndev->token = in6addr_any; if (netif_running(dev) && addrconf_qdisc_ok(dev)) ndev->if_flags |= IF_READY; ipv6_mc_init_dev(ndev); ndev->tstamp = jiffies; addrconf_sysctl_register(ndev); /* protected by rtnl_lock */ rcu_assign_pointer(dev->ip6_ptr, ndev); /* Join interface-local all-node multicast group */ ipv6_dev_mc_inc(dev, &in6addr_interfacelocal_allnodes); /* Join all-node multicast group */ ipv6_dev_mc_inc(dev, &in6addr_linklocal_allnodes); /* Join all-router multicast group if forwarding is set */ if (ndev->cnf.forwarding && (dev->flags & IFF_MULTICAST)) ipv6_dev_mc_inc(dev, &in6addr_linklocal_allrouters); return ndev; }
target: 0
cwe: []
project: net
commit_id: 4b08a8f1bd8cb4541c93ec170027b4d0782dab52
hash: 250,412,181,294,406,250,000,000,000,000,000,000,000
size: 104
ipv6: remove max_addresses check from ipv6_create_tempaddr Because of the max_addresses check attackers were able to disable privacy extensions on an interface by creating enough autoconfigured addresses: <http://seclists.org/oss-sec/2012/q4/292> But the check is not actually needed: max_addresses protects the kernel to install too many ipv6 addresses on an interface and guards addrconf_prefix_rcv to install further addresses as soon as this limit is reached. We only generate temporary addresses in direct response of a new address showing up. As soon as we filled up the maximum number of addresses of an interface, we stop installing more addresses and thus also stop generating more temp addresses. Even if the attacker tries to generate a lot of temporary addresses by announcing a prefix and removing it again (lifetime == 0) we won't install more temp addresses, because the temporary addresses do count to the maximum number of addresses, thus we would stop installing new autoconfigured addresses when the limit is reached. This patch fixes CVE-2013-0343 (but other layer-2 attacks are still possible). Thanks to Ding Tianhong to bring this topic up again. Cc: Ding Tianhong <[email protected]> Cc: George Kargiotakis <[email protected]> Cc: P J P <[email protected]> Cc: YOSHIFUJI Hideaki <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Acked-by: Ding Tianhong <[email protected]> Signed-off-by: David S. Miller <[email protected]>
dns_zone_setstatlevel(dns_zone_t *zone, dns_zonestat_level_t level) { REQUIRE(DNS_ZONE_VALID(zone)); zone->statlevel = level; }
target: 0
cwe: ["CWE-327"]
project: bind9
commit_id: f09352d20a9d360e50683cd1d2fc52ccedcd77a0
hash: 65,066,308,956,886,740,000,000,000,000,000,000,000
size: 5
Update keyfetch_done compute_tag check If in keyfetch_done the compute_tag fails (because for example the algorithm is not supported), don't crash, but instead ignore the key.
xsltCopyOf(xsltTransformContextPtr ctxt, xmlNodePtr node, xmlNodePtr inst, xsltStylePreCompPtr castedComp) { #ifdef XSLT_REFACTORED xsltStyleItemCopyOfPtr comp = (xsltStyleItemCopyOfPtr) castedComp; #else xsltStylePreCompPtr comp = castedComp; #endif xmlXPathObjectPtr res = NULL; xmlNodeSetPtr list = NULL; int i; xmlDocPtr oldXPContextDoc; xmlNsPtr *oldXPNamespaces; xmlNodePtr oldXPContextNode; int oldXPProximityPosition, oldXPContextSize, oldXPNsNr; xmlXPathContextPtr xpctxt; if ((ctxt == NULL) || (node == NULL) || (inst == NULL)) return; if ((comp == NULL) || (comp->select == NULL) || (comp->comp == NULL)) { xsltTransformError(ctxt, NULL, inst, "xsl:copy-of : compilation failed\n"); return; } /* * SPEC XSLT 1.0: * "The xsl:copy-of element can be used to insert a result tree * fragment into the result tree, without first converting it to * a string as xsl:value-of does (see [7.6.1 Generating Text with * xsl:value-of]). The required select attribute contains an * expression. When the result of evaluating the expression is a * result tree fragment, the complete fragment is copied into the * result tree. When the result is a node-set, all the nodes in the * set are copied in document order into the result tree; copying * an element node copies the attribute nodes, namespace nodes and * children of the element node as well as the element node itself; * a root node is copied by copying its children. When the result * is neither a node-set nor a result tree fragment, the result is * converted to a string and then inserted into the result tree, * as with xsl:value-of. */ #ifdef WITH_XSLT_DEBUG_PROCESS XSLT_TRACE(ctxt,XSLT_TRACE_COPY_OF,xsltGenericDebug(xsltGenericDebugContext, "xsltCopyOf: select %s\n", comp->select)); #endif /* * Evaluate the "select" expression. */ xpctxt = ctxt->xpathCtxt; oldXPContextDoc = xpctxt->doc; oldXPContextNode = xpctxt->node; oldXPProximityPosition = xpctxt->proximityPosition; oldXPContextSize = xpctxt->contextSize; oldXPNsNr = xpctxt->nsNr; oldXPNamespaces = xpctxt->namespaces; xpctxt->node = node; if (comp != NULL) { #ifdef XSLT_REFACTORED if (comp->inScopeNs != NULL) { xpctxt->namespaces = comp->inScopeNs->list; xpctxt->nsNr = comp->inScopeNs->xpathNumber; } else { xpctxt->namespaces = NULL; xpctxt->nsNr = 0; } #else xpctxt->namespaces = comp->nsList; xpctxt->nsNr = comp->nsNr; #endif } else { xpctxt->namespaces = NULL; xpctxt->nsNr = 0; } res = xmlXPathCompiledEval(comp->comp, xpctxt); xpctxt->doc = oldXPContextDoc; xpctxt->node = oldXPContextNode; xpctxt->contextSize = oldXPContextSize; xpctxt->proximityPosition = oldXPProximityPosition; xpctxt->nsNr = oldXPNsNr; xpctxt->namespaces = oldXPNamespaces; if (res != NULL) { if (res->type == XPATH_NODESET) { /* * Node-set * -------- */ #ifdef WITH_XSLT_DEBUG_PROCESS XSLT_TRACE(ctxt,XSLT_TRACE_COPY_OF,xsltGenericDebug(xsltGenericDebugContext, "xsltCopyOf: result is a node set\n")); #endif list = res->nodesetval; if (list != NULL) { xmlNodePtr cur; /* * The list is already sorted in document order by XPath. * Append everything in this order under ctxt->insert. */ for (i = 0;i < list->nodeNr;i++) { cur = list->nodeTab[i]; if (cur == NULL) continue; if ((cur->type == XML_DOCUMENT_NODE) || (cur->type == XML_HTML_DOCUMENT_NODE)) { xsltCopyTreeList(ctxt, inst, cur->children, ctxt->insert, 0, 0); } else if (cur->type == XML_ATTRIBUTE_NODE) { xsltShallowCopyAttr(ctxt, inst, ctxt->insert, (xmlAttrPtr) cur); } else { xsltCopyTreeInternal(ctxt, inst, cur, ctxt->insert, 0, 0); } } } } else if (res->type == XPATH_XSLT_TREE) { /* * Result tree fragment * -------------------- * E.g. via <xsl:variable ...><foo/></xsl:variable> * Note that the root node of such trees is an xmlDocPtr in Libxslt. */ #ifdef WITH_XSLT_DEBUG_PROCESS XSLT_TRACE(ctxt,XSLT_TRACE_COPY_OF,xsltGenericDebug(xsltGenericDebugContext, "xsltCopyOf: result is a result tree fragment\n")); #endif list = res->nodesetval; if ((list != NULL) && (list->nodeTab != NULL) && (list->nodeTab[0] != NULL) && (IS_XSLT_REAL_NODE(list->nodeTab[0]))) { xsltCopyTreeList(ctxt, inst, list->nodeTab[0]->children, ctxt->insert, 0, 0); } } else { xmlChar *value = NULL; /* * Convert to a string. */ value = xmlXPathCastToString(res); if (value == NULL) { xsltTransformError(ctxt, NULL, inst, "Internal error in xsltCopyOf(): " "failed to cast an XPath object to string.\n"); ctxt->state = XSLT_STATE_STOPPED; } else { if (value[0] != 0) { /* * Append content as text node. */ xsltCopyTextString(ctxt, ctxt->insert, value, 0); } xmlFree(value); #ifdef WITH_XSLT_DEBUG_PROCESS XSLT_TRACE(ctxt,XSLT_TRACE_COPY_OF,xsltGenericDebug(xsltGenericDebugContext, "xsltCopyOf: result %s\n", res->stringval)); #endif } } } else { ctxt->state = XSLT_STATE_STOPPED; } if (res != NULL) xmlXPathFreeObject(res); }
target: 0
cwe: []
project: libxslt
commit_id: 937ba2a3eb42d288f53c8adc211bd1122869f0bf
hash: 278,754,457,704,560,130,000,000,000,000,000,000,000
size: 174
Fix default template processing on namespace nodes
iakerb_gss_delete_sec_context(OM_uint32 *minor_status, gss_ctx_id_t *context_handle, gss_buffer_t output_token) { OM_uint32 major_status = GSS_S_COMPLETE; if (output_token != GSS_C_NO_BUFFER) { output_token->length = 0; output_token->value = NULL; } *minor_status = 0; if (*context_handle != GSS_C_NO_CONTEXT) { iakerb_ctx_id_t iakerb_ctx = (iakerb_ctx_id_t)*context_handle; if (iakerb_ctx->magic == KG_IAKERB_CONTEXT) { iakerb_release_context(iakerb_ctx); *context_handle = GSS_C_NO_CONTEXT; } else { assert(iakerb_ctx->magic == KG_CONTEXT); major_status = krb5_gss_delete_sec_context(minor_status, context_handle, output_token); } } return major_status; }
target: 1
cwe: ["CWE-18"]
project: krb5
commit_id: e04f0283516e80d2f93366e0d479d13c9b5c8c2a
hash: 289,829,108,918,143,800,000,000,000,000,000,000,000
size: 30
Fix IAKERB context aliasing bugs [CVE-2015-2696] The IAKERB mechanism currently replaces its context handle with the krb5 mechanism handle upon establishment, under the assumption that most GSS functions are only called after context establishment. This assumption is incorrect, and can lead to aliasing violations for some programs. Maintain the IAKERB context structure after context establishment and add new IAKERB entry points to refer to it with that type. Add initiate and established flags to the IAKERB context structure for use in gss_inquire_context() prior to context establishment. CVE-2015-2696: In MIT krb5 1.9 and later, applications which call gss_inquire_context() on a partially-established IAKERB context can cause the GSS-API library to read from a pointer using the wrong type, generally causing a process crash. Java server applications using the native JGSS provider are vulnerable to this bug. A carefully crafted IAKERB packet might allow the gss_inquire_context() call to succeed with attacker-determined results, but applications should not make access control decisions based on gss_inquire_context() results prior to context establishment. CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:C/E:POC/RL:OF/RC:C [[email protected]: several bugfixes, style changes, and edge-case behavior changes; commit message and CVE description] ticket: 8244 target_version: 1.14 tags: pullup
static void expectSlices(std::vector<std::vector<int>> buffer_list, OwnedImpl& buffer) { const auto& buffer_slices = buffer.describeSlicesForTest(); for (uint64_t i = 0; i < buffer_slices.size(); i++) { EXPECT_EQ(buffer_slices[i].data, buffer_list[i][0]); EXPECT_EQ(buffer_slices[i].reservable, buffer_list[i][1]); EXPECT_EQ(buffer_slices[i].capacity, buffer_list[i][2]); } }
target: 1
cwe: ["CWE-401"]
project: envoy
commit_id: 5eba69a1f375413fb93fab4173f9c393ac8c2818
hash: 120,145,524,495,179,000,000,000,000,000,000,000,000
size: 8
[buffer] Add on-drain hook to buffer API and use it to avoid fragmentation due to tracking of H2 data and control frames in the output buffer (#144) Signed-off-by: antonio <[email protected]>
SPL_METHOD(Array, append) { zval *value; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "z", &value) == FAILURE) { return; } spl_array_iterator_append(getThis(), value TSRMLS_CC); } /* }}} */
target: 0
cwe: []
project: php-src
commit_id: a374dfab567ff7f0ab0dc150f14cc891b0340b47
hash: 329,239,304,477,753,470,000,000,000,000,000,000,000
size: 9
Fix bug #67492: unserialize() SPL ArrayObject / SPLObjectStorage Type Confusion
append_cap (const char *arg) { exec_options.cap = realloc (exec_options.cap, (exec_options.cap_size + 2) * sizeof (*exec_options.cap)); if (exec_options.cap == NULL) error (EXIT_FAILURE, errno, "cannot allocate memory"); exec_options.cap[exec_options.cap_size + 1] = NULL; exec_options.cap[exec_options.cap_size] = xstrdup (arg); exec_options.cap_size++; }
target: 0
cwe: ["CWE-276"]
project: crun
commit_id: 1aeeed2e4fdeffb4875c0d0b439915894594c8c6
hash: 321,449,376,473,004,640,000,000,000,000,000,000,000
size: 9
exec: --cap do not set inheritable capabilities Closes: CVE-2022-27650 Signed-off-by: Giuseppe Scrivano <[email protected]>
int write_bytes_to_xdr_buf(const struct xdr_buf *buf, unsigned int base, void *obj, unsigned int len) { struct xdr_buf subbuf; int status; status = xdr_buf_subsegment(buf, &subbuf, base, len); if (status != 0) return status; __write_bytes_to_xdr_buf(&subbuf, obj, len); return 0; }
target: 0
cwe: ["CWE-119", "CWE-787"]
project: linux
commit_id: 6d1c0f3d28f98ea2736128ed3e46821496dc3a8c
hash: 163,098,687,223,736,680,000,000,000,000,000,000,000
size: 12
sunrpc: Avoid a KASAN slab-out-of-bounds bug in xdr_set_page_base() This seems to happen fairly easily during READ_PLUS testing on NFS v4.2. I found that we could end up accessing xdr->buf->pages[pgnr] with a pgnr greater than the number of pages in the array. So let's just return early if we're setting base to a point at the end of the page data and let xdr_set_tail_base() handle setting up the buffer pointers instead. Signed-off-by: Anna Schumaker <[email protected]> Fixes: 8d86e373b0ef ("SUNRPC: Clean up helpers xdr_set_iov() and xdr_set_page_base()") Signed-off-by: Trond Myklebust <[email protected]>
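A sketch of the early return described above, simplified from the shape of xdr_set_page_base (field names from struct xdr_buf; treat this as an outline of the idea, not the verbatim fix):

    /* If the caller asks for a base at or past the end of the page
     * data, there is nothing to map; bail out before indexing
     * buf->pages[] out of bounds. */
    if (base >= buf->page_len)
        return 0;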
void EvalFloat(TfLiteContext* context, TfLiteNode* node, TfLiteConvParams* params, OpData* data, const TfLiteTensor* input, const TfLiteTensor* filter, const TfLiteTensor* bias, TfLiteTensor* im2col, TfLiteTensor* hwcn_weights, TfLiteTensor* output) { float output_activation_min, output_activation_max; CalculateActivationRange(params->activation, &output_activation_min, &output_activation_max); KernelType effective_kernel_type = kernel_type; // Fall back to the optimized path if multi-threaded conv is unsupported. if ((kernel_type == kMultithreadOptimized) && !data->supports_multithreaded_kernel) { effective_kernel_type = kGenericOptimized; } ConvParams op_params; op_params.padding_type = RuntimePaddingType(params->padding); op_params.padding_values.width = data->padding.width; op_params.padding_values.height = data->padding.height; op_params.stride_width = params->stride_width; op_params.stride_height = params->stride_height; op_params.dilation_width_factor = params->dilation_width_factor; op_params.dilation_height_factor = params->dilation_height_factor; op_params.float_activation_min = output_activation_min; op_params.float_activation_max = output_activation_max; switch (effective_kernel_type) { case kReference: { reference_ops::Conv(op_params, GetTensorShape(input), GetTensorData<float>(input), GetTensorShape(filter), GetTensorData<float>(filter), GetTensorShape(bias), GetTensorData<float>(bias), GetTensorShape(output), GetTensorData<float>(output), GetTensorShape(im2col), GetTensorData<float>(im2col)); break; } case kCblasOptimized: case kGenericOptimized: { optimized_ops::Conv(op_params, GetTensorShape(input), GetTensorData<float>(input), GetTensorShape(filter), GetTensorData<float>(filter), GetTensorShape(bias), GetTensorData<float>(bias), GetTensorShape(output), GetTensorData<float>(output), GetTensorShape(im2col), GetTensorData<float>(im2col), CpuBackendContext::GetFromContext(context)); break; } case kMultithreadOptimized: { #if defined(TFLITE_WITH_MULTITHREADED_EIGEN) const float* filter_data; if (data->need_hwcn_weights) { filter_data = GetTensorData<float>(hwcn_weights); } else { filter_data = GetTensorData<float>(filter); } multithreaded_ops::Conv( *eigen_support::GetThreadPoolDevice(context), op_params, GetTensorShape(input), GetTensorData<float>(input), GetTensorShape(filter), filter_data, GetTensorShape(bias), GetTensorData<float>(bias), GetTensorShape(output), GetTensorData<float>(output), GetTensorShape(im2col), GetTensorData<float>(im2col)); break; #else // !defined(TFLITE_WITH_MULTITHREADED_EIGEN) // See Register_CONV_2D: we should never be here when TFLITE_WITH_RUY // was enabled. We #if out this code in order to get the corresponding // binary size benefits. TFLITE_DCHECK(false); #endif // defined(TFLITE_WITH_MULTITHREADED_EIGEN) } } }
target: 0
cwe: ["CWE-125", "CWE-787"]
project: tensorflow
commit_id: 1970c2158b1ffa416d159d03c3370b9a462aee35
hash: 9,475,239,664,736,473,000,000,000,000,000,000,000
size: 70
[tflite]: Insert `nullptr` checks when obtaining tensors. As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages. We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`). PiperOrigin-RevId: 332521299 Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56
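The pattern that change applies across the kernels is small enough to show inline; a representative sketch (tensor indices illustrative; GetInput/GetOutput and TF_LITE_ENSURE are real TFLite helpers):

    // Validate each tensor pointer as soon as it is obtained,
    // instead of assuming the lookup always succeeds.
    const TfLiteTensor* input = GetInput(context, node, 0);
    TF_LITE_ENSURE(context, input != nullptr);
    TfLiteTensor* output = GetOutput(context, node, 0);
    TF_LITE_ENSURE(context, output != nullptr);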
static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t flags) { unsigned long locked; unsigned long lock_limit; int error = -ENOMEM; if (!can_do_mlock()) return -EPERM; lru_add_drain_all(); /* flush pagevec */ len = PAGE_ALIGN(len + (offset_in_page(start))); start &= PAGE_MASK; lock_limit = rlimit(RLIMIT_MEMLOCK); lock_limit >>= PAGE_SHIFT; locked = len >> PAGE_SHIFT; if (down_write_killable(&current->mm->mmap_sem)) return -EINTR; locked += current->mm->locked_vm; if ((locked > lock_limit) && (!capable(CAP_IPC_LOCK))) { /* * It is possible that the regions requested intersect with * previously mlocked areas, that part area in "mm->locked_vm" * should not be counted to new mlock increment count. So check * and adjust locked count if necessary. */ locked -= count_mm_mlocked_page_nr(current->mm, start, len); } /* check against resource limits */ if ((locked <= lock_limit) || capable(CAP_IPC_LOCK)) error = apply_vma_lock_flags(start, len, flags); up_write(&current->mm->mmap_sem); if (error) return error; error = __mm_populate(start, len, 0); if (error) return __mlock_posix_error_return(error); return 0; }
target: 0
cwe: ["CWE-20"]
project: linux
commit_id: 70feee0e1ef331b22cc51f383d532a0d043fbdcc
hash: 148,499,658,045,034,480,000,000,000,000,000,000,000
size: 46
mlock: fix mlock count can not decrease in race condition Kefeng reported that when running the follow test, the mlock count in meminfo will increase permanently: [1] testcase linux:~ # cat test_mlockal grep Mlocked /proc/meminfo for j in `seq 0 10` do for i in `seq 4 15` do ./p_mlockall >> log & done sleep 0.2 done # wait some time to let mlock counter decrease and 5s may not enough sleep 5 grep Mlocked /proc/meminfo linux:~ # cat p_mlockall.c #include <sys/mman.h> #include <stdlib.h> #include <stdio.h> #define SPACE_LEN 4096 int main(int argc, char ** argv) { int ret; void *adr = malloc(SPACE_LEN); if (!adr) return -1; ret = mlockall(MCL_CURRENT | MCL_FUTURE); printf("mlcokall ret = %d\n", ret); ret = munlockall(); printf("munlcokall ret = %d\n", ret); free(adr); return 0; } In __munlock_pagevec() we should decrement NR_MLOCK for each page where we clear the PageMlocked flag. Commit 1ebb7cc6a583 ("mm: munlock: batch NR_MLOCK zone state updates") has introduced a bug where we don't decrement NR_MLOCK for pages where we clear the flag, but fail to isolate them from the lru list (e.g. when the pages are on some other cpu's percpu pagevec). Since PageMlocked stays cleared, the NR_MLOCK accounting gets permanently disrupted by this. Fix it by counting the number of page whose PageMlock flag is cleared. Fixes: 1ebb7cc6a583 (" mm: munlock: batch NR_MLOCK zone state updates") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Yisheng Xie <[email protected]> Reported-by: Kefeng Wang <[email protected]> Tested-by: Kefeng Wang <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Joern Engel <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: zhongjiang <[email protected]> Cc: Hanjun Guo <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
void MonCap::generate_test_instances(list<MonCap*>& ls) { ls.push_back(new MonCap); ls.push_back(new MonCap); ls.back()->parse("allow *"); ls.push_back(new MonCap); ls.back()->parse("allow rwx"); ls.push_back(new MonCap); ls.back()->parse("allow service foo x"); ls.push_back(new MonCap); ls.back()->parse("allow command bar x"); ls.push_back(new MonCap); ls.back()->parse("allow service foo r, allow command bar x"); ls.push_back(new MonCap); ls.back()->parse("allow command bar with k1=v1 x"); ls.push_back(new MonCap); ls.back()->parse("allow command bar with k1=v1 k2=v2 x"); }
target: 0
cwe: ["CWE-285"]
project: ceph
commit_id: a2acedd2a7e12d58af6db35edbd8a9d29c557578
hash: 264,648,333,317,765,750,000,000,000,000,000,000,000
size: 18
mon/config-key: limit caps allowed to access the store Henceforth, we'll require explicit `allow` caps for commands, or for the config-key service. Blanket caps are no longer allowed for the config-key service, except for 'allow *'. (for luminous and mimic, we're also ensuring MonCap's parser is able to understand forward slashes '/' when parsing prefixes) Signed-off-by: Joao Eduardo Luis <[email protected]> (cherry picked from commit 5fff611041c5afeaf3c8eb09e4de0cc919d69237)
static void fixed_mtrr_seg_unit_range(int seg, int unit, u64 *start, u64 *end) { struct fixed_mtrr_segment *mtrr_seg = &fixed_seg_table[seg]; u64 unit_size = fixed_mtrr_seg_unit_size(seg); *start = mtrr_seg->start + unit * unit_size; *end = *start + unit_size; WARN_ON(*end > mtrr_seg->end); }
target: 0
cwe: ["CWE-284"]
project: linux
commit_id: 9842df62004f366b9fed2423e24df10542ee0dc5
hash: 249,226,322,832,441,600,000,000,000,000,000,000,000
size: 9
KVM: MTRR: remove MSR 0x2f8 MSR 0x2f8 accessed the 124th Variable Range MTRR ever since MTRR support was introduced by 9ba075a664df ("KVM: MTRR support"). 0x2f8 became harmful when 910a6aae4e2e ("KVM: MTRR: exactly define the size of variable MTRRs") shrinked the array of VR MTRRs from 256 to 8, which made access to index 124 out of bounds. The surrounding code only WARNs in this situation, thus the guest gained a limited read/write access to struct kvm_arch_vcpu. 0x2f8 is not a valid VR MTRR MSR, because KVM has/advertises only 16 VR MTRR MSRs, 0x200-0x20f. Every VR MTRR is set up using two MSRs, 0x2f8 was treated as a PHYSBASE and 0x2f9 would be its PHYSMASK, but 0x2f9 was not implemented in KVM, therefore 0x2f8 could never do anything useful and getting rid of it is safe. This fixes CVE-2016-3713. Fixes: 910a6aae4e2e ("KVM: MTRR: exactly define the size of variable MTRRs") Cc: [email protected] Reported-by: David Matlack <[email protected]> Signed-off-by: Andy Honig <[email protected]> Signed-off-by: Radim Krčmář <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
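For the arithmetic behind that out-of-bounds index: variable-range MTRRs are PHYSBASE/PHYSMASK MSR pairs starting at 0x200, so MSR 0x2f8 decodes to variable range (0x2f8 - 0x200) / 2 = 124, the 124th slot the message refers to, far past the 8 ranges the shrunken array actually holds.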
TEST_F(DocumentSourceMatchTest, MultipleMatchStagesShouldCombineIntoOne) { auto match1 = DocumentSourceMatch::create(BSON("a" << 1), getExpCtx()); auto match2 = DocumentSourceMatch::create(BSON("b" << 1), getExpCtx()); auto match3 = DocumentSourceMatch::create(BSON("c" << 1), getExpCtx()); Pipeline::SourceContainer container; // Check initial state ASSERT_BSONOBJ_EQ(match1->getQuery(), BSON("a" << 1)); ASSERT_BSONOBJ_EQ(match2->getQuery(), BSON("b" << 1)); ASSERT_BSONOBJ_EQ(match3->getQuery(), BSON("c" << 1)); container.push_back(match1); container.push_back(match2); match1->optimizeAt(container.begin(), &container); ASSERT_EQUALS(container.size(), 1U); ASSERT_BSONOBJ_EQ(match1->getQuery(), fromjson("{'$and': [{a:1}, {b:1}]}")); container.push_back(match3); match1->optimizeAt(container.begin(), &container); ASSERT_EQUALS(container.size(), 1U); ASSERT_BSONOBJ_EQ(match1->getQuery(), fromjson("{'$and': [{a:1}, {b:1}, {c:1}]}")); }
target: 0
cwe: []
project: mongo
commit_id: b3107d73a2c58d7e016b834dae0acfd01c0db8d7
hash: 90,451,580,901,238,260,000,000,000,000,000,000,000
size: 24
SERVER-59299: Flatten top-level nested $match stages in doOptimizeAt (cherry picked from commit 4db5eceda2cff697f35c84cd08232bac8c33beec)
void kvm_hv_process_stimers(struct kvm_vcpu *vcpu) { struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu); struct kvm_vcpu_hv_stimer *stimer; u64 time_now, exp_time; int i; if (!hv_vcpu) return; for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++) if (test_and_clear_bit(i, hv_vcpu->stimer_pending_bitmap)) { stimer = &hv_vcpu->stimer[i]; if (stimer->config.enable) { exp_time = stimer->exp_time; if (exp_time) { time_now = get_time_ref_counter(vcpu->kvm); if (time_now >= exp_time) stimer_expiration(stimer); } if ((stimer->config.enable) && stimer->count) { if (!stimer->msg_pending) stimer_start(stimer); } else stimer_cleanup(stimer); } } }
target: 0
cwe: ["CWE-476"]
project: linux
commit_id: 919f4ebc598701670e80e31573a58f1f2d2bf918
hash: 102,133,489,137,441,500,000,000,000,000,000,000,000
size: 32
KVM: x86: hyper-v: Fix Hyper-V context null-ptr-deref Reported by syzkaller: KASAN: null-ptr-deref in range [0x0000000000000140-0x0000000000000147] CPU: 1 PID: 8370 Comm: syz-executor859 Not tainted 5.11.0-syzkaller #0 RIP: 0010:synic_get arch/x86/kvm/hyperv.c:165 [inline] RIP: 0010:kvm_hv_set_sint_gsi arch/x86/kvm/hyperv.c:475 [inline] RIP: 0010:kvm_hv_irq_routing_update+0x230/0x460 arch/x86/kvm/hyperv.c:498 Call Trace: kvm_set_irq_routing+0x69b/0x940 arch/x86/kvm/../../../virt/kvm/irqchip.c:223 kvm_vm_ioctl+0x12d0/0x2800 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3959 vfs_ioctl fs/ioctl.c:48 [inline] __do_sys_ioctl fs/ioctl.c:753 [inline] __se_sys_ioctl fs/ioctl.c:739 [inline] __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae Hyper-V context is lazily allocated until Hyper-V specific MSRs are accessed or SynIC is enabled. However, the syzkaller testcase sets irq routing table directly w/o enabling SynIC. This results in null-ptr-deref when accessing SynIC Hyper-V context. This patch fixes it. syzkaller source: https://syzkaller.appspot.com/x/repro.c?x=163342ccd00000 Reported-by: [email protected] Fixes: 8f014550dfb1 ("KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional") Signed-off-by: Wanpeng Li <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
void writeBytes(const void* data, int length) { const U8* dataPtr = (const U8*)data; const U8* dataEnd = dataPtr + length; while (dataPtr < dataEnd) { int n = check(1, dataEnd - dataPtr); memcpy(ptr, dataPtr, n); ptr += n; dataPtr += n; } }
target: 1
cwe: ["CWE-20", "CWE-787"]
project: tigervnc
commit_id: 0943c006c7d900dfc0281639e992791d6c567438
hash: 585,458,595,203,333,200,000,000,000,000,000,000
size: 10
Use size_t for lengths in stream objects Provides safety against them accidentally becoming negative because of bugs in the calculations. Also does the same to CharArray and friends as they were strongly connection to the stream objects.
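The failure mode this change guards against is compact; a sketch with hypothetical names (not TigerVNC's API), showing why signed lengths are dangerous at copy boundaries:

    #include <string.h>

    /* If a length is computed as a signed int and a calculation bug
     * drives it negative, the implicit conversion to memcpy's size_t
     * parameter turns -1 into SIZE_MAX: a massive overwrite. Holding
     * lengths in size_t removes the negative state entirely. */
    void copy_bytes(char *dst, const char *src, int length)
    {
        memcpy(dst, src, length);   /* length == -1 becomes SIZE_MAX */
    }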
u8 bnx2x_is_pcie_pending(struct pci_dev *dev) { u16 status; pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &status); return status & PCI_EXP_DEVSTA_TRPND; }
target: 0
cwe: ["CWE-20"]
project: linux
commit_id: 8914a595110a6eca69a5e275b323f5d09e18f4f9
hash: 202,579,011,907,704,270,000,000,000,000,000,000,000
size: 7
bnx2x: disable GSO where gso_size is too big for hardware If a bnx2x card is passed a GSO packet with a gso_size larger than ~9700 bytes, it will cause a firmware error that will bring the card down: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f0)]MC assert! bnx2x: [bnx2x_mc_assert:720(enP24p1s0f0)]XSTORM_ASSERT_LIST_INDEX 0x2 bnx2x: [bnx2x_mc_assert:736(enP24p1s0f0)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e43e47 0x00463e01 0x00010052 bnx2x: [bnx2x_mc_assert:750(enP24p1s0f0)]Chip Revision: everest3, FW Version: 7_13_1 ... (dump of values continues) ... Detect when the mac length of a GSO packet is greater than the maximum packet size (9700 bytes) and disable GSO. Signed-off-by: Daniel Axtens <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
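A sketch of the guard this message describes, using the kernel's standard ndo_features_check hook; the real patch computes the full mac-layer segment length, which this simplifies to gso_size, and 9700 is the limit quoted above:

    static netdev_features_t bnx2x_features_check(struct sk_buff *skb,
                                                  struct net_device *dev,
                                                  netdev_features_t features)
    {
        /* Firmware cannot segment payloads beyond ~9700 bytes; strip
         * the GSO feature bits so the stack falls back to software
         * segmentation for oversized packets. */
        if (skb_is_gso(skb) && skb_shinfo(skb)->gso_size > 9700)
            features &= ~NETIF_F_GSO_MASK;
        return features;
    }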
adjust_cursor_col(void) { if (curwin->w_cursor.col > 0 && (!VIsual_active || *p_sel == 'o') && gchar_cursor() == NUL) --curwin->w_cursor.col; }
target: 0
cwe: ["CWE-120"]
project: vim
commit_id: 7ce5b2b590256ce53d6af28c1d203fb3bc1d2d97
hash: 298,312,948,169,644,000,000,000,000,000,000,000,000
size: 7
patch 8.2.4969: changing text in Visual mode may cause invalid memory access Problem: Changing text in Visual mode may cause invalid memory access. Solution: Check the Visual position after making a change.
inline int abs(const int a) { return std::abs(a); }
target: 0
cwe: ["CWE-770"]
project: cimg
commit_id: 619cb58dd90b4e03ac68286c70ed98acbefd1c90
hash: 238,650,068,396,074,030,000,000,000,000,000,000,000
size: 3
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
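A minimal sketch of the validation described, assuming a seekable file handle (function name and signature illustrative, not CImg's actual code):

    #include <stdio.h>

    /* Reject BMP headers whose advertised dimensions imply more
     * pixel data than the file could possibly contain. */
    static int bmp_dims_plausible(FILE *f, unsigned w, unsigned h,
                                  unsigned bytes_per_pixel)
    {
        long fsize;
        unsigned long long needed;

        if (fseek(f, 0, SEEK_END) != 0 || (fsize = ftell(f)) < 0)
            return 0;
        needed = (unsigned long long)w * h * bytes_per_pixel;
        return needed <= (unsigned long long)fsize;
    }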
struct key *request_key_and_link(struct key_type *type, const char *description, const void *callout_info, size_t callout_len, void *aux, struct key *dest_keyring, unsigned long flags) { struct keyring_search_context ctx = { .index_key.type = type, .index_key.description = description, .cred = current_cred(), .match_data.cmp = key_default_cmp, .match_data.raw_data = description, .match_data.lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT, }; struct key *key; key_ref_t key_ref; int ret; kenter("%s,%s,%p,%zu,%p,%p,%lx", ctx.index_key.type->name, ctx.index_key.description, callout_info, callout_len, aux, dest_keyring, flags); if (type->match_preparse) { ret = type->match_preparse(&ctx.match_data); if (ret < 0) { key = ERR_PTR(ret); goto error; } } /* search all the process keyrings for a key */ key_ref = search_process_keyrings(&ctx); if (!IS_ERR(key_ref)) { key = key_ref_to_ptr(key_ref); if (dest_keyring) { construct_get_dest_keyring(&dest_keyring); ret = key_link(dest_keyring, key); key_put(dest_keyring); if (ret < 0) { key_put(key); key = ERR_PTR(ret); goto error_free; } } } else if (PTR_ERR(key_ref) != -EAGAIN) { key = ERR_CAST(key_ref); } else { /* the search failed, but the keyrings were searchable, so we * should consult userspace if we can */ key = ERR_PTR(-ENOKEY); if (!callout_info) goto error_free; key = construct_key_and_link(&ctx, callout_info, callout_len, aux, dest_keyring, flags); } error_free: if (type->match_free) type->match_free(&ctx.match_data); error: kleave(" = %p", key); return key; }
target: 0
cwe: ["CWE-476"]
project: linux
commit_id: c06cfb08b88dfbe13be44a69ae2fdc3a7c902d81
hash: 156,949,626,538,631,300,000,000,000,000,000,000,000
size: 67
KEYS: Remove key_type::match in favour of overriding default by match_preparse A previous patch added a ->match_preparse() method to the key type. This is allowed to override the function called by the iteration algorithm. Therefore, we can just set a default that simply checks for an exact match of the key description with the original criterion data and allow match_preparse to override it as needed. The key_type::match op is then redundant and can be removed, as can the user_match() function. Signed-off-by: David Howells <[email protected]> Acked-by: Vivek Goyal <[email protected]>
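The default this message refers to is visible in the function above as ".match_data.cmp = key_default_cmp"; its behavior is a plain exact comparison, roughly:

    /* Default criterion: the key's description must exactly match
     * the requested description string; match_preparse may override. */
    static bool key_default_cmp(const struct key *key,
                                const struct key_match_data *match_data)
    {
        return strcmp(key->description, match_data->raw_data) == 0;
    }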
static MagickBooleanType ApplyPSDOpacityMask(Image *image,const Image *mask, Quantum background,MagickBooleanType revert,ExceptionInfo *exception) { Image *complete_mask; MagickBooleanType status; PixelInfo color; ssize_t y; if (image->debug != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " applying opacity mask"); complete_mask=CloneImage(image,image->columns,image->rows,MagickTrue, exception); complete_mask->alpha_trait=BlendPixelTrait; GetPixelInfo(complete_mask,&color); color.red=background; SetImageColor(complete_mask,&color,exception); status=CompositeImage(complete_mask,mask,OverCompositeOp,MagickTrue, mask->page.x-image->page.x,mask->page.y-image->page.y,exception); if (status == MagickFalse) { complete_mask=DestroyImage(complete_mask); return(status); } image->alpha_trait=BlendPixelTrait; #if defined(MAGICKCORE_OPENMP_SUPPORT) #pragma omp parallel for schedule(static,4) shared(status) \ magick_threads(image,image,image->rows,1) #endif for (y=0; y < (ssize_t) image->rows; y++) { register Quantum *magick_restrict q; register Quantum *p; register ssize_t x; if (status == MagickFalse) continue; q=GetAuthenticPixels(image,0,y,image->columns,1,exception); p=GetAuthenticPixels(complete_mask,0,y,complete_mask->columns,1,exception); if ((q == (Quantum *) NULL) || (p == (Quantum *) NULL)) { status=MagickFalse; continue; } for (x=0; x < (ssize_t) image->columns; x++) { MagickRealType alpha, intensity; alpha=GetPixelAlpha(image,q); intensity=GetPixelIntensity(complete_mask,p); if (revert == MagickFalse) SetPixelAlpha(image,ClampToQuantum(intensity*(QuantumScale*alpha)),q); else if (intensity > 0) SetPixelAlpha(image,ClampToQuantum((alpha/intensity)*QuantumRange),q); q+=GetPixelChannels(image); p+=GetPixelChannels(complete_mask); } if (SyncAuthenticPixels(image,exception) == MagickFalse) status=MagickFalse; } complete_mask=DestroyImage(complete_mask); return(status); }
target: 0
cwe: ["CWE-787"]
project: ImageMagick
commit_id: d4ec73f866a7c42a2e7f301fcd696e5cb7a7d3ab
hash: 61,239,610,391,727,850,000,000,000,000,000,000,000
size: 77
https://github.com/ImageMagick/ImageMagick/issues/350
static void ri(struct vc_data *vc) { /* don't scroll if below top of scrolling region, or * if above scrolling region */ if (vc->state.y == vc->vc_top) con_scroll(vc, vc->vc_top, vc->vc_bottom, SM_DOWN, 1); else if (vc->state.y > 0) { vc->state.y--; vc->vc_pos -= vc->vc_size_row; } vc->vc_need_wrap = 0; }
target: 0
cwe: ["CWE-125"]
project: linux
commit_id: 3c4e0dff2095c579b142d5a0693257f1c58b4804
hash: 135,897,738,950,094,960,000,000,000,000,000,000,000
size: 13
vt: Disable KD_FONT_OP_COPY It's buggy: On Fri, Nov 06, 2020 at 10:30:08PM +0800, Minh Yuan wrote: > We recently discovered a slab-out-of-bounds read in fbcon in the latest > kernel ( v5.10-rc2 for now ). The root cause of this vulnerability is that > "fbcon_do_set_font" did not handle "vc->vc_font.data" and > "vc->vc_font.height" correctly, and the patch > <https://lkml.org/lkml/2020/9/27/223> for VT_RESIZEX can't handle this > issue. > > Specifically, we use KD_FONT_OP_SET to set a small font.data for tty6, and > use KD_FONT_OP_SET again to set a large font.height for tty1. After that, > we use KD_FONT_OP_COPY to assign tty6's vc_font.data to tty1's vc_font.data > in "fbcon_do_set_font", while tty1 retains the original larger > height. Obviously, this will cause an out-of-bounds read, because we can > access a smaller vc_font.data with a larger vc_font.height. Further there was only one user ever. - Android's loadfont, busybox and console-tools only ever use OP_GET and OP_SET - fbset documentation only mentions the kernel cmdline font: option, not anything else. - systemd used OP_COPY before release 232 published in Nov 2016 Now unfortunately the crucial report seems to have gone down with gmane, and the commit message doesn't say much. But the pull request hints at OP_COPY being broken https://github.com/systemd/systemd/pull/3651 So in other words, this never worked, and the only project which foolishly every tried to use it, realized that rather quickly too. Instead of trying to fix security issues here on dead code by adding missing checks, fix the entire thing by removing the functionality. Note that systemd code using the OP_COPY function ignored the return value, so it doesn't matter what we're doing here really - just in case a lone server somewhere happens to be extremely unlucky and running an affected old version of systemd. The relevant code from font_copy_to_all_vcs() in systemd was: /* copy font from active VT, where the font was uploaded to */ cfo.op = KD_FONT_OP_COPY; cfo.height = vcs.v_active-1; /* tty1 == index 0 */ (void) ioctl(vcfd, KDFONTOP, &cfo); Note this just disables the ioctl, garbage collecting the now unused callbacks is left for -next. v2: Tetsuo found the old mail, which allowed me to find it on another archive. Add the link too. Acked-by: Peilin Ye <[email protected]> Reported-by: Minh Yuan <[email protected]> References: https://lists.freedesktop.org/archives/systemd-devel/2016-June/036935.html References: https://github.com/systemd/systemd/pull/3651 Cc: Greg KH <[email protected]> Cc: Peilin Ye <[email protected]> Cc: Tetsuo Handa <[email protected]> Signed-off-by: Daniel Vetter <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
cifs_check_receive(struct mid_q_entry *mid, struct TCP_Server_Info *server, bool log_error) { unsigned int len = get_rfc1002_length(mid->resp_buf) + 4; dump_smb(mid->resp_buf, min_t(u32, 92, len)); /* convert the length into a more usable form */ if (server->sec_mode & (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) { struct kvec iov; int rc = 0; struct smb_rqst rqst = { .rq_iov = &iov, .rq_nvec = 1 }; iov.iov_base = mid->resp_buf; iov.iov_len = len; /* FIXME: add code to kill session */ rc = cifs_verify_signature(&rqst, server, mid->sequence_number + 1); if (rc) cERROR(1, "SMB signature verification returned error = " "%d", rc); } /* BB special case reconnect tid and uid here? */ return map_smb_to_linux_error(mid->resp_buf, log_error); }
target: 0
cwe: ["CWE-362"]
project: linux
commit_id: ea702b80e0bbb2448e201472127288beb82ca2fe
hash: 35,156,757,483,860,750,000,000,000,000,000,000,000
size: 27
cifs: move check for NULL socket into smb_send_rqst Cai reported this oops: [90701.616664] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028 [90701.625438] IP: [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.632167] PGD fea319067 PUD 103fda4067 PMD 0 [90701.637255] Oops: 0000 [#1] SMP [90701.640878] Modules linked in: des_generic md4 nls_utf8 cifs dns_resolver binfmt_misc tun sg igb iTCO_wdt iTCO_vendor_support lpc_ich pcspkr i2c_i801 i2c_core i7core_edac edac_core ioatdma dca mfd_core coretemp kvm_intel kvm crc32c_intel microcode sr_mod cdrom ata_generic sd_mod pata_acpi crc_t10dif ata_piix libata megaraid_sas dm_mirror dm_region_hash dm_log dm_mod [90701.677655] CPU 10 [90701.679808] Pid: 9627, comm: ls Tainted: G W 3.7.1+ #10 QCI QSSC-S4R/QSSC-S4R [90701.688950] RIP: 0010:[<ffffffff814a343e>] [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.698383] RSP: 0018:ffff88177b431bb8 EFLAGS: 00010206 [90701.704309] RAX: ffff88177b431fd8 RBX: 00007ffffffff000 RCX: ffff88177b431bec [90701.712271] RDX: 0000000000000003 RSI: 0000000000000006 RDI: 0000000000000000 [90701.720223] RBP: ffff88177b431bc8 R08: 0000000000000004 R09: 0000000000000000 [90701.728185] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001 [90701.736147] R13: ffff88184ef92000 R14: 0000000000000023 R15: ffff88177b431c88 [90701.744109] FS: 00007fd56a1a47c0(0000) GS:ffff88105fc40000(0000) knlGS:0000000000000000 [90701.753137] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [90701.759550] CR2: 0000000000000028 CR3: 000000104f15f000 CR4: 00000000000007e0 [90701.767512] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [90701.775465] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [90701.783428] Process ls (pid: 9627, threadinfo ffff88177b430000, task ffff88185ca4cb60) [90701.792261] Stack: [90701.794505] 0000000000000023 ffff88177b431c50 ffff88177b431c38 ffffffffa014fcb1 [90701.802809] ffff88184ef921bc 0000000000000000 00000001ffffffff ffff88184ef921c0 [90701.811123] ffff88177b431c08 ffffffff815ca3d9 ffff88177b431c18 ffff880857758000 [90701.819433] Call Trace: [90701.822183] [<ffffffffa014fcb1>] smb_send_rqst+0x71/0x1f0 [cifs] [90701.828991] [<ffffffff815ca3d9>] ? schedule+0x29/0x70 [90701.834736] [<ffffffffa014fe6d>] smb_sendv+0x3d/0x40 [cifs] [90701.841062] [<ffffffffa014fe96>] smb_send+0x26/0x30 [cifs] [90701.847291] [<ffffffffa015801f>] send_nt_cancel+0x6f/0xd0 [cifs] [90701.854102] [<ffffffffa015075e>] SendReceive+0x18e/0x360 [cifs] [90701.860814] [<ffffffffa0134a78>] CIFSFindFirst+0x1a8/0x3f0 [cifs] [90701.867724] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.875601] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.883477] [<ffffffffa01578e6>] cifs_query_dir_first+0x26/0x30 [cifs] [90701.890869] [<ffffffffa015480d>] initiate_cifs_search+0xed/0x250 [cifs] [90701.898354] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.904486] [<ffffffffa01554cb>] cifs_readdir+0x45b/0x8f0 [cifs] [90701.911288] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.917410] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.923533] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.929657] [<ffffffff81195848>] vfs_readdir+0xb8/0xe0 [90701.935490] [<ffffffff81195b9f>] sys_getdents+0x8f/0x110 [90701.941521] [<ffffffff815d3b99>] system_call_fastpath+0x16/0x1b [90701.948222] Code: 66 90 55 65 48 8b 04 25 f0 c6 00 00 48 89 e5 53 48 83 ec 08 83 fe 01 48 8b 98 48 e0 ff ff 48 c7 80 48 e0 ff ff ff ff ff ff 74 22 <48> 8b 47 28 ff 50 68 65 48 8b 14 25 f0 c6 00 00 48 89 9a 48 e0 [90701.970313] RIP [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.977125] RSP <ffff88177b431bb8> [90701.981018] CR2: 0000000000000028 [90701.984809] ---[ end trace 24bd602971110a43 ]--- This is likely due to a race vs. a reconnection event. The current code checks for a NULL socket in smb_send_kvec, but that's too late. By the time that check is done, the socket will already have been passed to kernel_setsockopt. Move the check into smb_send_rqst, so that it's checked earlier. In truth, this is a bit of a half-assed fix. The -ENOTSOCK error return here looks like it could bubble back up to userspace. The locking rules around the ssocket pointer are really unclear as well. There are cases where the ssocket pointer is changed without holding the srv_mutex, but I'm not clear whether there's a potential race here yet or not. This code seems like it could benefit from some fundamental re-think of how the socket handling should behave. Until then though, this patch should at least fix the above oops in most cases. Cc: <[email protected]> # 3.7+ Reported-and-Tested-by: CAI Qian <[email protected]> Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: Steve French <[email protected]>
static inline int tcp_skb_pcount(const struct sk_buff *skb) { return skb_shinfo(skb)->gso_segs; }
target: 0
cwe: []
project: linux
commit_id: 7bced397510ab569d31de4c70b39e13355046387
hash: 94,840,954,235,174,490,000,000,000,000,000,000,000
size: 4
net_dma: simple removal Per commit "77873803363c net_dma: mark broken" net_dma is no longer used and there is no plan to fix it. This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards. Reverting the remainder of the net_dma induced changes is deferred to subsequent patches. Marked for stable due to Roman's report of a memory leak in dma_pin_iovec_pages(): https://lkml.org/lkml/2014/9/3/177 Cc: Dave Jiang <[email protected]> Cc: Vinod Koul <[email protected]> Cc: David Whipple <[email protected]> Cc: Alexander Duyck <[email protected]> Cc: <[email protected]> Reported-by: Roman Gushchin <[email protected]> Acked-by: David S. Miller <[email protected]> Signed-off-by: Dan Williams <[email protected]>
PJ_DEF(void) pj_scan_skip_whitespace( pj_scanner *scanner ) { register char *s = scanner->curptr; while (PJ_SCAN_IS_SPACE(*s)) { ++s; } if (PJ_SCAN_IS_NEWLINE(*s) && (scanner->skip_ws & PJ_SCAN_AUTOSKIP_NEWLINE)) { for (;;) { if (*s == '\r') { ++s; if (*s == '\n') ++s; ++scanner->line; scanner->curptr = scanner->start_line = s; } else if (*s == '\n') { ++s; ++scanner->line; scanner->curptr = scanner->start_line = s; } else if (PJ_SCAN_IS_SPACE(*s)) { do { ++s; } while (PJ_SCAN_IS_SPACE(*s)); } else { break; } } } if (PJ_SCAN_IS_NEWLINE(*s) && (scanner->skip_ws & PJ_SCAN_AUTOSKIP_WS_HEADER)==PJ_SCAN_AUTOSKIP_WS_HEADER) { /* Check for header continuation. */ scanner->curptr = s; if (*s == '\r') { ++s; } if (*s == '\n') { ++s; } scanner->start_line = s; if (PJ_SCAN_IS_SPACE(*s)) { register char *t = s; do { ++t; } while (PJ_SCAN_IS_SPACE(*t)); ++scanner->line; scanner->curptr = t; } } else { scanner->curptr = s; } }
0
[ "CWE-125" ]
pjproject
077b465c33f0aec05a49cd2ca456f9a1b112e896
269,721,489,769,459,680,000,000,000,000,000,000,000
54
Merge pull request from GHSA-7fw8-54cv-r7pm
void inotify_destroy(struct inotify_handle *ih) { /* * Destroy all of the watches for this handle. Unfortunately, not very * pretty. We cannot do a simple iteration over the list, because we * do not know the inode until we iterate to the watch. But we need to * hold inode->inotify_mutex before ih->mutex. The following works. */ while (1) { struct inotify_watch *watch; struct list_head *watches; struct inode *inode; mutex_lock(&ih->mutex); watches = &ih->watches; if (list_empty(watches)) { mutex_unlock(&ih->mutex); break; } watch = list_first_entry(watches, struct inotify_watch, h_list); get_inotify_watch(watch); mutex_unlock(&ih->mutex); inode = watch->inode; mutex_lock(&inode->inotify_mutex); mutex_lock(&ih->mutex); /* make sure we didn't race with another list removal */ if (likely(idr_find(&ih->idr, watch->wd))) { remove_watch_no_event(watch, ih); put_inotify_watch(watch); } mutex_unlock(&ih->mutex); mutex_unlock(&inode->inotify_mutex); put_inotify_watch(watch); } /* free this handle: the put matching the get in inotify_init() */ put_inotify_handle(ih); }
1
[ "CWE-362" ]
linux-2.6
8f7b0ba1c853919b85b54774775f567f30006107
97,538,541,219,067,190,000,000,000,000,000,000,000
41
Fix inotify watch removal/umount races Inotify watch removals suck violently. To kick the watch out we need (in this order) inode->inotify_mutex and ih->mutex. That's fine if we have a hold on inode; however, for all other cases we need to make damn sure we don't race with umount. We can *NOT* just grab a reference to a watch - inotify_unmount_inodes() will happily sail past it and we'll end with reference to inode potentially outliving its superblock. Ideally we just want to grab an active reference to superblock if we can; that will make sure we won't go into inotify_umount_inodes() until we are done. Cleanup is just deactivate_super(). However, that leaves a messy case - what if we *are* racing with umount() and active references to superblock can't be acquired anymore? We can bump ->s_count, grab ->s_umount, which will almost certainly wait until the superblock is shut down and the watch in question is pining for fjords. That's fine, but there is a problem - we might have hit the window between ->s_active getting to 0 / ->s_count - below S_BIAS (i.e. the moment when superblock is past the point of no return and is heading for shutdown) and the moment when deactivate_super() acquires ->s_umount. We could just do drop_super() yield() and retry, but that's rather antisocial and this stuff is luser-triggerable. OTOH, having grabbed ->s_umount and having found that we'd got there first (i.e. that ->s_root is non-NULL) we know that we won't race with inotify_umount_inodes(). So we could grab a reference to watch and do the rest as above, just with drop_super() instead of deactivate_super(), right? Wrong. We had to drop ih->mutex before we could grab ->s_umount. So the watch could've been gone already. That still can be dealt with - we need to save watch->wd, do idr_find() and compare its result with our pointer. If they match, we either have the damn thing still alive or we'd lost not one but two races at once, the watch had been killed and a new one got created with the same ->wd at the same address. That couldn't have happened in inotify_destroy(), but inotify_rm_wd() could run into that. Still, "new one got created" is not a problem - we have every right to kill it or leave it alone, whatever's more convenient. So we can use idr_find(...) == watch && watch->inode->i_sb == sb as "grab it and kill it" check. If it's been our original watch, we are fine, if it's a newcomer - nevermind, just pretend that we'd won the race and kill the fscker anyway; we are safe since we know that its superblock won't be going away. And yes, this is far beyond mere "not very pretty"; so's the entire concept of inotify to start with. Signed-off-by: Al Viro <[email protected]> Acked-by: Greg KH <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
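The "grab it and kill it" re-validation the message describes is worth seeing in miniature. The sketch below is illustrative only, built from the 2.6-era inotify internals visible in the function above (ih->idr, remove_watch_no_event(), put_inotify_watch()); the helper name and the saved wd/sb parameters are assumptions, not a verbatim excerpt of the fix.

/* Sketch only: re-validate a watch after reacquiring ih->mutex.
 * 'wd' was saved before the locks were dropped; 'sb' is the pinned
 * superblock that is guaranteed not to go away under us. */
static void kill_watch_if_still_there(struct inotify_handle *ih,
                                      struct super_block *sb, s32 wd)
{
        struct inotify_watch *watch;

        mutex_lock(&ih->mutex);
        watch = idr_find(&ih->idr, wd);
        if (watch && watch->inode->i_sb == sb) {
                /* Either our original watch, or a newcomer reusing the
                 * same wd -- either one is safe to remove, since the
                 * superblock cannot be unmounted while we hold it. */
                remove_watch_no_event(watch, ih);
                put_inotify_watch(watch);
        }
        mutex_unlock(&ih->mutex);
}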
static void __init mp_config_acpi_legacy_irqs(void) { int i; struct mpc_intsrc mp_irq; #ifdef CONFIG_EISA /* * Fabricate the legacy ISA bus (bus #31). */ mp_bus_id_to_type[MP_ISA_BUS] = MP_BUS_ISA; #endif set_bit(MP_ISA_BUS, mp_bus_not_pci); pr_debug("Bus #%d is ISA\n", MP_ISA_BUS); /* * Use the default configuration for the IRQs 0-15. Unless * overridden by (MADT) interrupt source override entries. */ for (i = 0; i < nr_legacy_irqs(); i++) { int ioapic, pin; unsigned int dstapic; int idx; u32 gsi; /* Locate the gsi that irq i maps to. */ if (acpi_isa_irq_to_gsi(i, &gsi)) continue; /* * Locate the IOAPIC that manages the ISA IRQ. */ ioapic = mp_find_ioapic(gsi); if (ioapic < 0) continue; pin = mp_find_ioapic_pin(ioapic, gsi); dstapic = mpc_ioapic_id(ioapic); for (idx = 0; idx < mp_irq_entries; idx++) { struct mpc_intsrc *irq = mp_irqs + idx; /* Do we already have a mapping for this ISA IRQ? */ if (irq->srcbus == MP_ISA_BUS && irq->srcbusirq == i) break; /* Do we already have a mapping for this IOAPIC pin */ if (irq->dstapic == dstapic && irq->dstirq == pin) break; } if (idx != mp_irq_entries) { printk(KERN_DEBUG "ACPI: IRQ%d used by override.\n", i); continue; /* IRQ already used */ } mp_irq.type = MP_INTSRC; mp_irq.irqflag = 0; /* Conforming */ mp_irq.srcbus = MP_ISA_BUS; mp_irq.dstapic = dstapic; mp_irq.irqtype = mp_INT; mp_irq.srcbusirq = i; /* Identity mapped */ mp_irq.dstirq = pin; mp_save_irq(&mp_irq); } }
0
[ "CWE-120" ]
linux
dad5ab0db8deac535d03e3fe3d8f2892173fa6a4
44,640,150,677,039,730,000,000,000,000,000,000,000
65
x86/acpi: Prevent out of bound access caused by broken ACPI tables The bus_irq argument of mp_override_legacy_irq() is used as the index into the isa_irq_to_gsi[] array. The bus_irq argument originates from ACPI_MADT_TYPE_IO_APIC and ACPI_MADT_TYPE_INTERRUPT items in the ACPI tables, but is nowhere sanity checked. That allows broken or malicious ACPI tables to overwrite memory, which might cause malfunction, panic or arbitrary code execution. Add a sanity check and emit a warning when that triggers. [ tglx: Added warning and rewrote changelog ] Signed-off-by: Seunghun Han <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: "Rafael J. Wysocki" <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
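A minimal sketch of the sanity check the message calls for, assuming the array and limit names used in the x86 code of that era (isa_irq_to_gsi[], NR_IRQS_LEGACY); the wrapper function is illustrative, not the exact upstream hunk.

/* Sketch: bus_irq comes straight from firmware-provided MADT entries,
 * so bound it before using it as an index into isa_irq_to_gsi[]. */
static int mp_override_legacy_irq_checked(u8 bus_irq, u32 gsi)
{
        if (bus_irq >= NR_IRQS_LEGACY) {
                pr_warn(FW_BUG "invalid bus_irq %u from ACPI, ignored\n",
                        bus_irq);
                return -EINVAL;          /* refuse out-of-range index */
        }
        isa_irq_to_gsi[bus_irq] = gsi;   /* now provably in bounds */
        return 0;
}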
gs_getdefaultlibdevice(gs_memory_t *mem) { const gx_device *const *list; int count = gs_lib_device_list(&list, NULL); const char *name, *end, *fin; int i; /* Search the compiled in device list for a known device name */ /* In the case the lib ctx hasn't been initialised */ if (mem && mem->gs_lib_ctx && mem->gs_lib_ctx->default_device_list) { name = mem->gs_lib_ctx->default_device_list; fin = name + strlen(name); } else { name = gs_dev_defaults; fin = name + strlen(name); } /* iterate through each name in the string */ while (name < fin) { /* split a name from any whitespace */ while ((name < fin) && (*name == ' ' || *name == '\t')) name++; end = name; while ((end < fin) && (*end != ' ') && (*end != '\t')) end++; /* return any matches */ for (i = 0; i < count; i++) if ((end - name) == strlen(list[i]->dname)) if (!memcmp(name, list[i]->dname, end - name)) return gs_getdevice(i); /* otherwise, try the next device name */ name = end; } /* Fall back to the first device in the list. */ return gs_getdevice(0); }
0
[]
ghostpdl
79cccf641486a6595c43f1de1cd7ade696020a31
187,769,702,752,186,660,000,000,000,000,000,000,000
41
Bug 699654(2): preserve LockSafetyParams in the nulldevice The nulldevice does not necessarily use the normal setpagedevice machinery, but can be set using the nulldevice operator, in which case we don't preserve the settings from the original device (in the way setpagedevice does). Since nulldevice does nothing, this is not generally a problem, but in the case of LockSafetyParams it *is* important when we restore back to the original device, when LockSafetyParams not being set is "preserved" into the post-restore configuration. We have to initialise the value to false because the nulldevice is used during initialisation (before any other device exists), and *must* be writable for that.
static int do_stop_slave_sql(MYSQL *mysql_con) { MYSQL_RES *slave; /* We need to check if the slave sql is running in the first place */ if (mysql_query_with_error_report(mysql_con, &slave, "SHOW SLAVE STATUS")) return(1); else { MYSQL_ROW row= mysql_fetch_row(slave); if (row && row[11]) { /* if SLAVE SQL is not running, we don't stop it */ if (!strcmp(row[11],"No")) { mysql_free_result(slave); /* Silently assume that they don't have the slave running */ return(0); } } } mysql_free_result(slave); /* now, stop slave if running */ if (mysql_query_with_error_report(mysql_con, 0, "STOP SLAVE SQL_THREAD")) return(1); return(0); }
0
[ "CWE-284", "CWE-295" ]
mysql-server
3bd5589e1a5a93f9c224badf983cd65c45215390
173,764,242,435,393,040,000,000,000,000,000,000,000
28
WL#6791 : Redefine client --ssl option to imply enforced encryption # Changed the meaning of the --ssl=1 option of all client binaries to mean force ssl, not try ssl and fail over to unencrypted # Added a new MYSQL_OPT_SSL_ENFORCE mysql_options() option to specify that an ssl connection is required. # Added a new macro SSL_SET_OPTIONS() to the client SSL handling headers that sets all the relevant SSL options at once. # Revamped all of the current native clients to use the new macro # Removed some Windows line endings. # Added proper handling of the new option into the ssl helper headers. # If SSL is mandatory assume that the media is secure enough for the sha256 plugin to do unencrypted password exchange even before establishing a connection. # Set the default ssl cipher to DHE-RSA-AES256-SHA if none is specified. # updated test cases that require a non-default cipher to spawn a mysql command line tool binary since mysqltest has no support for specifying ciphers. # updated the replication slave connection code to always enforce SSL if any of the SSL config options is present. # test cases added and updated. # added a mysql_get_option() API to return mysql_options() values. Used the new API inside the sha256 plugin. # Fixed compilation warnings because of unused variables. # Fixed test failures (mysql_ssl and bug13115401) # Fixed whitespace issues. # Fully implemented the mysql_get_option() function. # Added a test case for mysql_get_option() # fixed some trailing whitespace issues # fixed some uint/int warnings in mysql_client_test.c # removed shared memory option from non-windows get_options tests # moved MYSQL_OPT_LOCAL_INFILE to the uint options
static uint32_t pcnet_csr_readw(PCNetState *s, uint32_t rap) { uint32_t val; switch (rap) { case 0: pcnet_update_irq(s); val = s->csr[0]; val |= (val & 0x7800) ? 0x8000 : 0; break; case 16: return pcnet_csr_readw(s,1); case 17: return pcnet_csr_readw(s,2); case 58: return pcnet_bcr_readw(s,BCR_SWS); case 88: val = s->csr[89]; val <<= 16; val |= s->csr[88]; break; default: val = s->csr[rap]; } #ifdef PCNET_DEBUG_CSR printf("pcnet_csr_readw rap=%d val=0x%04x\n", rap, val); #endif return val; }
0
[]
qemu
837f21aacf5a714c23ddaadbbc5212f9b661e3f7
96,279,034,574,157,100,000,000,000,000,000,000,000
28
net: pcnet: add check to validate receive data size (CVE-2015-7504) In loopback mode, the pcnet_receive routine appends CRC code to the receive buffer. If the data size given is the same as the buffer size, the appended CRC code overwrites 4 bytes after s->buffer. Added a check to avoid that. Reported by: Qinghao Tang <[email protected]> Cc: [email protected] Reviewed-by: Michael S. Tsirkin <[email protected]> Signed-off-by: Prasad J Pandit <[email protected]> Signed-off-by: Jason Wang <[email protected]>
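The overflow scenario is easy to picture: the guard has to leave room for the 4-byte FCS that loopback mode appends. The sketch below is illustrative; the buffer field follows the QEMU pcnet model, while compute_fcs() is a hypothetical stand-in for the device's CRC routine.

/* Sketch: refuse to append the FCS when the received frame already
 * fills s->buffer; otherwise the 4 CRC bytes land past the end. */
static void pcnet_loopback_append_fcs(PCNetState *s, size_t size)
{
    uint32_t crc;

    if (size > sizeof(s->buffer) - 4)
        return;                            /* drop oversized frame */
    crc = compute_fcs(s->buffer, size);    /* compute_fcs() is illustrative */
    memcpy(s->buffer + size, &crc, 4);
}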
virtio_dev_rx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts) { bool wrap_counter = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; uint16_t avail_idx = vq->last_avail_idx; uint64_t desc_addrs[PACKED_BATCH_SIZE]; struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; uint32_t buf_offset = dev->vhost_hlen; uint64_t lens[PACKED_BATCH_SIZE]; uint16_t ids[PACKED_BATCH_SIZE]; uint16_t i; if (unlikely(avail_idx & PACKED_BATCH_MASK)) return -1; if (unlikely((avail_idx + PACKED_BATCH_SIZE) > vq->size)) return -1; vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { if (unlikely(pkts[i]->next != NULL)) return -1; if (unlikely(!desc_is_avail(&descs[avail_idx + i], wrap_counter))) return -1; } rte_smp_rmb(); vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) lens[i] = descs[avail_idx + i].len; vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { if (unlikely(pkts[i]->pkt_len > (lens[i] - buf_offset))) return -1; } vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) desc_addrs[i] = vhost_iova_to_vva(dev, vq, descs[avail_idx + i].addr, &lens[i], VHOST_ACCESS_RW); vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { if (unlikely(lens[i] != descs[avail_idx + i].len)) return -1; } vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { rte_prefetch0((void *)(uintptr_t)desc_addrs[i]); hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *) (uintptr_t)desc_addrs[i]; lens[i] = pkts[i]->pkt_len + dev->vhost_hlen; } vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) virtio_enqueue_offload(pkts[i], &hdrs[i]->hdr); vq_inc_last_avail_packed(vq, PACKED_BATCH_SIZE); vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { rte_memcpy((void *)(uintptr_t)(desc_addrs[i] + buf_offset), rte_pktmbuf_mtod_offset(pkts[i], void *, 0), pkts[i]->pkt_len); } vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) vhost_log_cache_write_iova(dev, vq, descs[avail_idx + i].addr, lens[i]); vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) ids[i] = descs[avail_idx + i].id; vhost_flush_enqueue_batch_packed(dev, vq, lens, ids); return 0; }
1
[ "CWE-665" ]
dpdk
97ecc1c85c95c13bc66a87435758e93406c35c48
141,015,319,348,152,110,000,000,000,000,000,000,000
78
vhost: fix translated address not checked A malicious guest can construct a descriptor with an invalid address and zero buffer length. That requires vhost to check both the translated address and the translated data length. This patch adds the missing address check. CVE-2020-10725 Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue") Fixes: ef861692c398 ("vhost: add packed ring batch enqueue") Cc: [email protected] Signed-off-by: Marvin Liu <[email protected]> Reviewed-by: Maxime Coquelin <[email protected]>
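A sketch of the repaired translation loop, using the names already visible in the function above (vhost_iova_to_vva(), desc_addrs[], lens[]); note the function as shown checks only the length, which is exactly the gap the message describes.

/* Sketch: validate both outputs of the IOVA translation before any
 * copy -- a NULL host address or a shortened mapping means the guest
 * handed us a descriptor we must not touch. */
vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
        desc_addrs[i] = vhost_iova_to_vva(dev, vq,
                                          descs[avail_idx + i].addr,
                                          &lens[i], VHOST_ACCESS_RW);
        if (unlikely(!desc_addrs[i] ||
                     lens[i] != descs[avail_idx + i].len))
                return -1;   /* untranslatable or partially mapped */
}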
static void vrend_resource_buffer_copy(UNUSED struct vrend_context *ctx, struct vrend_resource *src_res, struct vrend_resource *dst_res, uint32_t dstx, uint32_t srcx, uint32_t width) { glBindBuffer(GL_COPY_READ_BUFFER, src_res->id); glBindBuffer(GL_COPY_WRITE_BUFFER, dst_res->id); glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, srcx, dstx, width); glBindBuffer(GL_COPY_READ_BUFFER, 0); glBindBuffer(GL_COPY_WRITE_BUFFER, 0); }
0
[ "CWE-787" ]
virglrenderer
cbc8d8b75be360236cada63784046688aeb6d921
92,125,587,738,094,390,000,000,000,000,000,000,000
13
vrend: check transfer bounds for negative values too and report error Closes #138 Signed-off-by: Gert Wollny <[email protected]> Reviewed-by: Emil Velikov <[email protected]>
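A small sketch of the negative-value rejection the message describes. The transfer-info field names and the error reporter are assumptions for illustration; only the idea — signed guest-supplied offsets must be rejected before reaching the GL copy path — comes from the message.

/* Sketch: transfer parameters arrive as signed values from the guest,
 * so reject negatives explicitly before they become copy parameters. */
if (info->box->x < 0 || info->box->y < 0 ||
    info->box->width < 0 || info->box->height < 0) {
        vrend_report_buffer_error(ctx, 0);   /* hypothetical reporter */
        return EINVAL;
}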
ref_param_end_write_collection(gs_param_list * plist, gs_param_name pkey, gs_param_dict * pvalue) { iparam_list *const iplist = (iparam_list *) plist; int code = ref_param_write(iplist, pkey, &((dict_param_list *) pvalue->list)->dict); gs_free_object(plist->memory, pvalue->list, "ref_param_end_write_collection"); pvalue->list = 0; return code; }
0
[]
ghostpdl
bfa6b2ecbe48edc69a7d9d22a12419aed25960b8
260,254,494,060,999,660,000,000,000,000,000,000,000
11
Bug 697548: use the correct param list enumerator When we encountered dictionary in a ref_param_list, we were using the enumerator for the "parent" param_list, rather than the enumerator for the param_list we just created for the dictionary. That parent was usually the stack list enumerator, and caused a segfault. Using the correct enumerator works better.
pum_wanted(void) { // 'completeopt' must contain "menu" or "menuone" if (vim_strchr(p_cot, 'm') == NULL) return FALSE; // The display looks bad on a B&W display. if (t_colors < 8 #ifdef FEAT_GUI && !gui.in_use #endif ) return FALSE; return TRUE; }
0
[ "CWE-125" ]
vim
f12129f1714f7d2301935bb21d896609bdac221c
208,609,454,555,556,500,000,000,000,000,000,000,000
15
patch 9.0.0020: with some completion reading past end of string Problem: With some completion reading past end of string. Solution: Check the length of the string.
void luaC_barrier_ (lua_State *L, GCObject *o, GCObject *v) { global_State *g = G(L); lua_assert(isblack(o) && iswhite(v) && !isdead(g, v) && !isdead(g, o)); if (keepinvariant(g)) { /* must keep invariant? */ reallymarkobject(g, v); /* restore invariant */ if (isold(o)) { lua_assert(!isold(v)); /* white object could not be old */ setage(v, G_OLD0); /* restore generational invariant */ } } else { /* sweep phase */ lua_assert(issweepphase(g)); makewhite(g, o); /* mark main obj. as white to avoid other barriers */ } }
1
[ "CWE-763" ]
lua
a6da1472c0c5e05ff249325f979531ad51533110
214,764,055,066,647,000,000,000,000,000,000,000,000
15
Fixed bug: barriers cannot be active during sweep Barriers cannot be active during sweep, even in generational mode. (Although gen. mode is not incremental, it can hit a barrier when deleting a thread and closing its upvalues.) The colors of objects are being changed during sweep and, therefore, cannot be trusted.
void RGWListBuckets_ObjStore_SWIFT::send_response_data_reversed(RGWUserBuckets& buckets) { if (! sent_data) { return; } /* Take care of the prefix parameter of Swift API. There is no business * in applying the filter earlier as we really need to go through all * entries regardless of it (the headers like X-Account-Container-Count * aren't affected by specifying prefix). */ std::map<std::string, RGWBucketEnt>& m = buckets.get_buckets(); auto iter = m.rbegin(); for (/* initialized above */; iter != m.rend() && !boost::algorithm::starts_with(iter->first, prefix); ++iter) { /* NOP */; } for (/* iter carried */; iter != m.rend() && boost::algorithm::starts_with(iter->first, prefix); ++iter) { dump_bucket_entry(iter->second); } }
0
[ "CWE-617" ]
ceph
f44a8ae8aa27ecef69528db9aec220f12492810e
186,625,326,925,144,000,000,000,000,000,000,000,000
25
rgw: RGWSwiftWebsiteHandler::is_web_dir checks empty subdir_name checking for empty name avoids later assertion in RGWObjectCtx::set_atomic Fixes: CVE-2021-3531 Reviewed-by: Casey Bodley <[email protected]> Signed-off-by: Casey Bodley <[email protected]> (cherry picked from commit 7196a469b4470f3c8628489df9a41ec8b00a5610)
i16 sqlite3StorageColumnToTable(Table *pTab, i16 iCol){ if( pTab->tabFlags & TF_HasVirtual ){ int i; for(i=0; i<=iCol; i++){ if( pTab->aCol[i].colFlags & COLFLAG_VIRTUAL ) iCol++; } } return iCol; }
0
[ "CWE-674", "CWE-787" ]
sqlite
38096961c7cd109110ac21d3ed7dad7e0cb0ae06
11,357,103,150,222,371,000,000,000,000,000,000,000
9
Avoid infinite recursion in the ALTER TABLE code when a view contains an unused CTE that references, directly or indirectly, the view itself. FossilOrigin-Name: 1d2e53a39b87e364685e21de137655b6eee725e4c6d27fc90865072d7c5892b5
void init_revisions(struct rev_info *revs, const char *prefix) { memset(revs, 0, sizeof(*revs)); revs->abbrev = DEFAULT_ABBREV; revs->ignore_merges = 1; revs->simplify_history = 1; DIFF_OPT_SET(&revs->pruning, RECURSIVE); DIFF_OPT_SET(&revs->pruning, QUICK); revs->pruning.add_remove = file_add_remove; revs->pruning.change = file_change; revs->sort_order = REV_SORT_IN_GRAPH_ORDER; revs->dense = 1; revs->prefix = prefix; revs->max_age = -1; revs->min_age = -1; revs->skip_count = -1; revs->max_count = -1; revs->max_parents = -1; revs->expand_tabs_in_log = -1; revs->commit_format = CMIT_FMT_DEFAULT; revs->expand_tabs_in_log_default = 8; init_grep_defaults(); grep_init(&revs->grep_filter, prefix); revs->grep_filter.status_only = 1; revs->grep_filter.regflags = REG_NEWLINE; diff_setup(&revs->diffopt); if (prefix && !revs->diffopt.prefix) { revs->diffopt.prefix = prefix; revs->diffopt.prefix_length = strlen(prefix); } revs->notes_opt.use_default_notes = -1; }
1
[]
git
a937b37e766479c8e780b17cce9c4b252fd97e40
18,747,356,120,528,333,000,000,000,000,000,000,000
37
revision: quit pruning diff more quickly when possible When the revision traversal machinery is given a pathspec, we must compute the parent-diff for each commit to determine which ones are TREESAME. We set the QUICK diff flag to avoid looking at more entries than we need; we really just care whether there are any changes at all. But there is one case where we want to know a bit more: if --remove-empty is set, we care about finding cases where the change consists only of added entries (in which case we may prune the parent in try_to_simplify_commit()). To cover that case, our file_add_remove() callback does not quit the diff upon seeing an added entry; it keeps looking for other types of entries. But this means when --remove-empty is not set (and it is not by default), we compute more of the diff than is necessary. You can see this in a pathological case where a commit adds a very large number of entries, and we limit based on a broad pathspec. E.g.: perl -e ' chomp(my $blob = `git hash-object -w --stdin </dev/null`); for my $a (1..1000) { for my $b (1..1000) { print "100644 $blob\t$a/$b\n"; } } ' | git update-index --index-info git commit -qm add git rev-list HEAD -- . This case takes about 100ms now, but after this patch only needs 6ms. That's not a huge improvement, but it's easy to get and it protects us against even more pathological cases (e.g., going from 1 million to 10 million files would take ten times as long with the current code, but not increase at all after this patch). This is reported to minorly speed-up pathspec limiting in real world repositories (like the 100-million-file Windows repository), but probably won't make a noticeable difference outside of pathological setups. This patch actually covers the case without --remove-empty, and the case where we see only deletions. See the in-code comment for details. Note that we have to add a new member to the diff_options struct so that our callback can see the value of revs->remove_empty_trees. This callback parameter could be passed to the "add_remove" and "change" callbacks, but there's not much point. They already receive the diff_options struct, and doing it this way avoids having to update the function signature of the other callbacks (arguably the format_callback and output_prefix functions could benefit from the same simplification). Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
NCR_ProcessUnknown (NTP_Packet *message, /* the received message */ struct timeval *now, /* timestamp at time of receipt */ double now_err, /* assumed error in the timestamp */ NTP_Remote_Address *remote_addr, NTP_Local_Address *local_addr, int length /* the length of the received packet */ ) { NTP_Mode pkt_mode, my_mode; int has_auth, valid_auth; uint32_t key_id; /* Ignore the packet if it wasn't received by server socket */ if (!NIO_IsServerSocket(local_addr->sock_fd)) { DEBUG_LOG(LOGF_NtpCore, "NTP request packet received by client socket %d", local_addr->sock_fd); return; } if (!check_packet_format(message, length)) return; if (!ADF_IsAllowed(access_auth_table, &remote_addr->ip_addr)) { DEBUG_LOG(LOGF_NtpCore, "NTP packet received from unauthorised host %s port %d", UTI_IPToString(&remote_addr->ip_addr), remote_addr->port); return; } pkt_mode = NTP_LVM_TO_MODE(message->lvm); switch (pkt_mode) { case MODE_ACTIVE: /* We are symmetric passive, even though we don't ever lock to him */ my_mode = MODE_PASSIVE; CLG_LogNTPPeerAccess(&remote_addr->ip_addr, now->tv_sec); break; case MODE_CLIENT: /* Reply with server packet */ my_mode = MODE_SERVER; CLG_LogNTPClientAccess(&remote_addr->ip_addr, now->tv_sec); break; default: /* Discard */ DEBUG_LOG(LOGF_NtpCore, "NTP packet discarded pkt_mode=%d", pkt_mode); return; } /* Check if the packet includes MAC that authenticates properly */ valid_auth = check_packet_auth(message, length, &has_auth, &key_id); /* If authentication failed, reply with crypto-NAK */ if (!valid_auth) key_id = 0; /* Send a reply. - copy the poll value as the client may use it to control its polling interval - authenticate the packet if the request was authenticated - originate timestamp is the client's transmit time - don't save our transmit timestamp as we aren't maintaining state about this client */ transmit_packet(my_mode, message->poll, NTP_LVM_TO_VERSION(message->lvm), has_auth, key_id, &message->transmit_ts, now, NULL, NULL, remote_addr, local_addr); }
0
[]
chrony
a78bf9725a7b481ebff0e0c321294ba767f2c1d8
332,615,298,872,033,750,000,000,000,000,000,000,000
67
ntp: restrict authentication of server/peer to specified key When a server/peer was specified with a key number to enable authentication with a symmetric key, packets received from the server/peer were accepted if they were authenticated with any of the keys contained in the key file and not just the specified key. This allowed an attacker who knew one key of a client/peer to modify packets from its servers/peers that were authenticated with other keys in a man-in-the-middle (MITM) attack. For example, in a network where each NTP association had a separate key and all hosts had only keys they needed, a client of a server could not attack other clients of the server, but it could attack the server and also attack its own clients (i.e. modify packets from other servers). To not allow the server/peer to be authenticated with other keys extend the authentication test to check if the key ID in the received packet is equal to the configured key number. As a consequence, it's no longer possible to authenticate two peers to each other with two different keys, both peers have to be configured to use the same key. This issue was discovered by Matt Street of Cisco ASIG.
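The tightened test is small enough to sketch. This is illustrative only: check_packet_auth() appears in the function above, but the instance type and its auth_key_id field are assumptions about chrony's per-source state, not a quote of the patch.

/* Sketch: a packet authenticates only if its MAC verifies AND the key
 * ID it carries equals the key number configured for this source. */
static int auth_ok(NCR_Instance inst, NTP_Packet *msg, int length)
{
        int has_auth;
        uint32_t key_id;

        if (!check_packet_auth(msg, length, &has_auth, &key_id))
                return 0;                       /* MAC did not verify */
        return has_auth && key_id == inst->auth_key_id;  /* field assumed */
}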
TEST(HeaderMapImplTest, InlineInsert) { HeaderMapImpl headers; EXPECT_TRUE(headers.empty()); EXPECT_EQ(0, headers.size()); EXPECT_EQ(headers.byteSize().value(), 0); EXPECT_EQ(nullptr, headers.Host()); headers.insertHost().value(std::string("hello")); EXPECT_FALSE(headers.empty()); EXPECT_EQ(1, headers.size()); EXPECT_EQ(":authority", headers.Host()->key().getStringView()); EXPECT_EQ("hello", headers.Host()->value().getStringView()); EXPECT_EQ("hello", headers.get(Headers::get().Host)->value().getStringView()); }
0
[ "CWE-400", "CWE-703" ]
envoy
afc39bea36fd436e54262f150c009e8d72db5014
212,143,867,424,961,370,000,000,000,000,000,000,000
13
Track byteSize of HeaderMap internally. Introduces a cached byte size updated internally in HeaderMap. The value is stored as an optional, and is cleared whenever a non-const pointer or reference to a HeaderEntry is accessed. The cached value can be set with refreshByteSize() which performs an iteration over the HeaderMap to sum the size of each key and value in the HeaderMap. Signed-off-by: Asra Ali <[email protected]>
R_API RSysInfo *r_sys_info(void) { #if __UNIX__ struct utsname un = {{0}}; if (uname (&un) != -1) { RSysInfo *si = R_NEW0 (RSysInfo); if (si) { si->sysname = strdup (un.sysname); si->nodename = strdup (un.nodename); si->release = strdup (un.release); si->version = strdup (un.version); si->machine = strdup (un.machine); return si; } } #elif __WINDOWS__ HKEY key; DWORD type; DWORD size; DWORD major; DWORD minor; char tmp[256] = {0}; RSysInfo *si = R_NEW0 (RSysInfo); if (!si) { return NULL; } if (RegOpenKeyExA (HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", 0, KEY_QUERY_VALUE, &key) != ERROR_SUCCESS) { r_sys_perror ("r_sys_info/RegOpenKeyExA"); r_sys_info_free (si); return NULL; } size = sizeof (tmp); if (RegQueryValueExA (key, "ProductName", NULL, &type, (LPBYTE)&tmp, &size) != ERROR_SUCCESS || type != REG_SZ) { goto beach; } si->sysname = strdup (tmp); size = sizeof (major); if (RegQueryValueExA (key, "CurrentMajorVersionNumber", NULL, &type, (LPBYTE)&major, &size) != ERROR_SUCCESS || type != REG_DWORD) { goto beach; } size = sizeof (minor); if (RegQueryValueExA (key, "CurrentMinorVersionNumber", NULL, &type, (LPBYTE)&minor, &size) != ERROR_SUCCESS || type != REG_DWORD) { goto beach; } size = sizeof (tmp); if (RegQueryValueExA (key, "CurrentBuild", NULL, &type, (LPBYTE)&tmp, &size) != ERROR_SUCCESS || type != REG_SZ) { goto beach; } si->version = r_str_newf ("%d.%d.%s", major, minor, tmp); size = sizeof (tmp); if (RegQueryValueExA (key, "ReleaseId", NULL, &type, (LPBYTE)tmp, &size) != ERROR_SUCCESS || type != REG_SZ) { goto beach; } si->release = strdup (tmp); beach: RegCloseKey (key); return si; #endif return NULL; }
0
[ "CWE-78" ]
radare2
04edfa82c1f3fa2bc3621ccdad2f93bdbf00e4f9
83,167,404,419,296,030,000,000,000,000,000,000,000
75
Fix command injection on PDB download (#16966) * Fix r_sys_mkdirp with absolute path on Windows * Fix build with --with-openssl * Use RBuffer in r_socket_http_answer() * r_socket_http_answer: Fix read for big responses * Implement r_str_escape_sh() * Cleanup r_socket_connect() on Windows * Fix socket being created without a protocol * Fix socket connect with SSL ##socket * Use select() in r_socket_ready() * Fix read failing if received only protocol answer * Fix double-free * r_socket_http_get: Fail if req. SSL with no support * Follow redirects in r_socket_http_answer() * Fix r_socket_http_get result length with R2_CURL=1 * Also follow redirects * Avoid using curl for downloading PDBs * Use r_socket_http_get() on UNIXs * Use WinINet API on Windows for r_socket_http_get() * Fix command injection * Fix r_sys_cmd_str_full output for binary data * Validate GUID on PDB download * Pass depth to socket_http_get_recursive() * Remove 'r_' and '__' from static function names * Fix is_valid_guid * Fix for comments
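The message names the two defences — GUID validation and shell escaping — and both are worth sketching together. The command string below is illustrative; r_str_escape_sh() and is_valid_guid() are named in the message, and r_str_newf()/r_sys_cmd() are existing radare2 utility calls.

/* Sketch: never splice untrusted strings into a shell command line.
 * Validate first, escape second; better still, avoid the shell. */
if (!is_valid_guid (guid)) {
        return false;                       /* reject before any exec */
}
char *esc = r_str_escape_sh (guid);         /* neutralise metacharacters */
char *cmd = r_str_newf ("fetch-pdb '%s'", esc);  /* command illustrative */
r_sys_cmd (cmd);
free (cmd);
free (esc);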
bind_variable (name, value, flags) const char *name; char *value; int flags; { SHELL_VAR *v; VAR_CONTEXT *vc; if (shell_variables == 0) create_variable_tables (); /* If we have a temporary environment, look there first for the variable, and, if found, modify the value there before modifying it in the shell_variables table. This allows sourced scripts to modify values given to them in a temporary environment while modifying the variable value that the caller sees. */ if (temporary_env) bind_tempenv_variable (name, value); /* XXX -- handle local variables here. */ for (vc = shell_variables; vc; vc = vc->down) { if (vc_isfuncenv (vc) || vc_isbltnenv (vc)) { v = hash_lookup (name, vc->table); if (v) return (bind_variable_internal (name, value, vc->table, 0, flags)); } } return (bind_variable_internal (name, value, global_variables->table, 0, flags)); }
0
[]
bash
863d31ae775d56b785dc5b0105b6d251515d81d5
10,149,033,631,500,343,000,000,000,000,000,000,000
31
commit bash-20120224 snapshot
static __net_init int ipv4_mib_init_net(struct net *net) { if (snmp_mib_init((void __percpu **)net->mib.tcp_statistics, sizeof(struct tcp_mib), __alignof__(struct tcp_mib)) < 0) goto err_tcp_mib; if (snmp_mib_init((void __percpu **)net->mib.ip_statistics, sizeof(struct ipstats_mib), __alignof__(struct ipstats_mib)) < 0) goto err_ip_mib; if (snmp_mib_init((void __percpu **)net->mib.net_statistics, sizeof(struct linux_mib), __alignof__(struct linux_mib)) < 0) goto err_net_mib; if (snmp_mib_init((void __percpu **)net->mib.udp_statistics, sizeof(struct udp_mib), __alignof__(struct udp_mib)) < 0) goto err_udp_mib; if (snmp_mib_init((void __percpu **)net->mib.udplite_statistics, sizeof(struct udp_mib), __alignof__(struct udp_mib)) < 0) goto err_udplite_mib; if (snmp_mib_init((void __percpu **)net->mib.icmp_statistics, sizeof(struct icmp_mib), __alignof__(struct icmp_mib)) < 0) goto err_icmp_mib; if (snmp_mib_init((void __percpu **)net->mib.icmpmsg_statistics, sizeof(struct icmpmsg_mib), __alignof__(struct icmpmsg_mib)) < 0) goto err_icmpmsg_mib; tcp_mib_init(net); return 0; err_icmpmsg_mib: snmp_mib_free((void __percpu **)net->mib.icmp_statistics); err_icmp_mib: snmp_mib_free((void __percpu **)net->mib.udplite_statistics); err_udplite_mib: snmp_mib_free((void __percpu **)net->mib.udp_statistics); err_udp_mib: snmp_mib_free((void __percpu **)net->mib.net_statistics); err_net_mib: snmp_mib_free((void __percpu **)net->mib.ip_statistics); err_ip_mib: snmp_mib_free((void __percpu **)net->mib.tcp_statistics); err_tcp_mib: return -ENOMEM; }
0
[ "CWE-362" ]
linux-2.6
f6d8bd051c391c1c0458a30b2a7abcd939329259
339,765,467,674,273,150,000,000,000,000,000,000,000
49
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options Problem is ip_make_skb() calls ip_setup_cork() and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change inet->opt pointer and free old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We cant insert an rcu_head in struct ip_options since its included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
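The reader side of the conversion is the instructive half, so here is a hedged sketch of it: dereference inet->inet_opt only inside an RCU read-side section and copy what you need before unlocking, while the writer publishes a new ip_options_rcu with rcu_assign_pointer() and frees the old one only after a grace period. The opt_copy layout is an assumption.

/* Sketch: RCU reader -- copy, never cache the pointer past unlock. */
rcu_read_lock();
inet_opt = rcu_dereference(inet->inet_opt);
if (inet_opt) {
        memcpy(&opt_copy, inet_opt,
               sizeof(*inet_opt) + inet_opt->opt.optlen);  /* layout assumed */
}
rcu_read_unlock();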
any(any &&rhs) : content_(rhs.content_) { rhs.content_ = nullptr; }
0
[ "CWE-125" ]
cpp-peglib
b3b29ce8f3acf3a32733d930105a17d7b0ba347e
253,692,954,630,360,240,000,000,000,000,000,000,000
1
Fix #122
utf8_str (const gchar *utf8, gchar *buf) { char_str (g_utf8_get_char (utf8), buf); return buf; }
1
[ "CWE-125" ]
glib
cec71705406f0b2790422f0c1aa0ff3b4b464b1b
237,522,953,369,241,600,000,000,000,000,000,000,000
6
gmarkup: Fix unvalidated UTF-8 read in markup parsing error paths When formatting the error messages for markup parsing errors, the parser was unconditionally reading a UTF-8 character from the input buffer — but the buffer might end with a partial code sequence, resulting in reading off the end of the buffer by up to three bytes. Fix this and add a test case, courtesy of pdknsk. Signed-off-by: Philip Withnall <[email protected]> https://gitlab.gnome.org/GNOME/glib/issues/1462
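GLib already ships the right tool for this guard: g_utf8_get_char_validated() reports an incomplete sequence as (gunichar)-2 and an invalid one as (gunichar)-1, so the decode can be made safe in a few lines. The fallback to U+FFFD below is an illustrative choice, not necessarily what the patch emits.

/* Sketch: only decode a character at the end of a buffer once the
 * sequence is known to be complete within [p, end). */
gunichar c = g_utf8_get_char_validated (p, end - p);
if (c == (gunichar) -1 || c == (gunichar) -2)
        c = 0xFFFD;   /* invalid or truncated: substitute, don't overread */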
SWFInput_seek(SWFInput input, long offset, int whence) { input->seek(input, offset, whence); }
0
[ "CWE-190", "CWE-703" ]
libming
a009a38dce1d9316cad1ab522b813b1d5ba4c62a
31,910,871,611,829,530,000,000,000,000,000,000,000
4
Fix left shift of a negative value in SWFInput_readSBits. Check for number before left-shifting by (number-1).
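A sketch of a sign-extending bit reader with the guard the message describes: bail out before shifting by (number-1) when number is zero or negative, and do the extension on an unsigned value to avoid undefined behaviour. SWFInput_readBits() is assumed as the underlying raw-bits helper.

/* Sketch: safe signed-bits read for a SWF input stream. */
int SWFInput_readSBits_checked(SWFInput input, int number)
{
        uint32_t v;

        if (number <= 0 || number > 32)
                return 0;                       /* nothing to read */
        v = SWFInput_readBits(input, number);   /* assumed helper */
        if (number < 32 && (v & (1u << (number - 1))))
                v |= ~0u << number;             /* sign-extend safely */
        return (int)v;
}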
static const char *get_csr_string(int cmd) { #define IWL_CMD(x) case x: return #x switch (cmd) { IWL_CMD(CSR_HW_IF_CONFIG_REG); IWL_CMD(CSR_INT_COALESCING); IWL_CMD(CSR_INT); IWL_CMD(CSR_INT_MASK); IWL_CMD(CSR_FH_INT_STATUS); IWL_CMD(CSR_GPIO_IN); IWL_CMD(CSR_RESET); IWL_CMD(CSR_GP_CNTRL); IWL_CMD(CSR_HW_REV); IWL_CMD(CSR_EEPROM_REG); IWL_CMD(CSR_EEPROM_GP); IWL_CMD(CSR_OTP_GP_REG); IWL_CMD(CSR_GIO_REG); IWL_CMD(CSR_GP_UCODE_REG); IWL_CMD(CSR_GP_DRIVER_REG); IWL_CMD(CSR_UCODE_DRV_GP1); IWL_CMD(CSR_UCODE_DRV_GP2); IWL_CMD(CSR_LED_REG); IWL_CMD(CSR_DRAM_INT_TBL_REG); IWL_CMD(CSR_GIO_CHICKEN_BITS); IWL_CMD(CSR_ANA_PLL_CFG); IWL_CMD(CSR_HW_REV_WA_REG); IWL_CMD(CSR_MONITOR_STATUS_REG); IWL_CMD(CSR_DBG_HPET_MEM_REG); default: return "UNKNOWN"; } #undef IWL_CMD }
0
[ "CWE-476" ]
linux
8188a18ee2e48c9a7461139838048363bfce3fef
30,123,787,307,295,370,000,000,000,000,000,000,000
33
iwlwifi: pcie: fix rb_allocator workqueue allocation We don't handle failures in the rb_allocator workqueue allocation correctly. To fix that, move the code earlier so the cleanup is easier and we don't have to undo all the interrupt allocations in this case. Signed-off-by: Johannes Berg <[email protected]> Signed-off-by: Luca Coelho <[email protected]>
static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { /* * VMENTER enables interrupts (host state), but the kernel state is * interrupts disabled when this is invoked. Also tell RCU about * it. This is the same logic as for exit_to_user_mode(). * * This ensures that e.g. latency analysis on the host observes * guest mode as interrupt enabled. * * guest_enter_irqoff() informs context tracking about the * transition to guest mode and if enabled adjusts RCU state * accordingly. */ instrumentation_begin(); trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(CALLER_ADDR0); instrumentation_end(); guest_enter_irqoff(); lockdep_hardirqs_on(CALLER_ADDR0); /* L1D Flush includes CPU buffer clear to mitigate MDS */ if (static_branch_unlikely(&vmx_l1d_should_flush)) vmx_l1d_flush(vcpu); else if (static_branch_unlikely(&mds_user_clear)) mds_clear_cpu_buffers(); if (vcpu->arch.cr2 != native_read_cr2()) native_write_cr2(vcpu->arch.cr2); vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, vmx->loaded_vmcs->launched); vcpu->arch.cr2 = native_read_cr2(); /* * VMEXIT disables interrupts (host state), but tracing and lockdep * have them in state 'on' as recorded before entering guest mode. * Same as enter_from_user_mode(). * * guest_exit_irqoff() restores host context and reinstates RCU if * enabled and required. * * This needs to be done before the below as native_read_msr() * contains a tracepoint and x86_spec_ctrl_restore_host() calls * into world and some more. */ lockdep_hardirqs_off(CALLER_ADDR0); guest_exit_irqoff(); instrumentation_begin(); trace_hardirqs_off_finish(); instrumentation_end(); }
0
[ "CWE-787" ]
linux
04c4f2ee3f68c9a4bf1653d15f1a9a435ae33f7a
126,333,904,672,839,980,000,000,000,000,000,000,000
56
KVM: VMX: Don't use vcpu->run->internal.ndata as an array index __vmx_handle_exit() uses vcpu->run->internal.ndata as an index for an array access. Since vcpu->run is (can be) mapped to a user address space with write permission, 'ndata' could be updated by the user process at any time (the user process can set it to outside the bounds of the array). So it is not safe for __vmx_handle_exit() to use 'ndata' that way. Fixes: 1aa561b1a4c0 ("kvm: x86: Add "last CPU" to some KVM_EXIT information") Signed-off-by: Reiji Watanabe <[email protected]> Reviewed-by: Jim Mattson <[email protected]> Message-Id: <[email protected]> Cc: [email protected] Signed-off-by: Paolo Bonzini <[email protected]>
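The defensive pattern implied here is worth spelling out: never index a kernel-side write with a value read back from a user-mappable structure. A sketch, with illustrative values being recorded:

/* Sketch: build exit info with a local counter and publish it once,
 * instead of trusting run->internal.ndata read back from the
 * user-mappable kvm_run structure. */
u32 ndata = 0;                                    /* local, not readback */

run->internal.data[ndata++] = exit_reason;        /* values illustrative */
run->internal.data[ndata++] = info1;
run->internal.ndata = ndata;                      /* single final store */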
static void tq_freezethaw(struct thread_q *tq, bool frozen) { mutex_lock(&tq->mutex); tq->frozen = frozen; pthread_cond_signal(&tq->cond); mutex_unlock(&tq->mutex); }
0
[ "CWE-119", "CWE-787" ]
bfgminer
c80ad8548251eb0e15329fc240c89070640c9d79
46,629,623,730,367,320,000,000,000,000,000,000,000
9
Stratum: extract_sockaddr: Truncate overlong addresses rather than stack overflow Thanks to Mick Ayzenberg <[email protected]> for finding this!
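The fix pattern is plain C and general: bound the copy by the destination, not by the attacker-controlled source length. A sketch under assumed variable names (url_begin/url_end delimit the parsed address):

/* Sketch: truncate overlong addresses instead of smashing the stack. */
char addr[256];                        /* fixed-size stack destination */
size_t len = url_end - url_begin;      /* length parsed from the URL */

if (len >= sizeof(addr))
        len = sizeof(addr) - 1;        /* truncate, leave room for NUL */
memcpy(addr, url_begin, len);
addr[len] = '\0';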
static int ud_interception(struct vcpu_svm *svm) { return handle_ud(&svm->vcpu); }
0
[ "CWE-401" ]
linux
d80b64ff297e40c2b6f7d7abc1b3eba70d22a068
337,519,073,765,060,700,000,000,000,000,000,000,000
4
KVM: SVM: Fix potential memory leak in svm_cpu_init() When kmalloc memory for sd->sev_vmcbs failed, we forget to free the page held by sd->save_area. Also get rid of the var r as '-ENOMEM' is actually the only possible outcome here. Reviewed-by: Liran Alon <[email protected]> Reviewed-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Miaohe Lin <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
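A sketch of the repaired error path, following the field and helper names given in the message (sd->save_area, sd->sev_vmcbs); the surrounding allocation shape is an assumption about svm_cpu_init() of that era.

/* Sketch: undo the earlier page allocation when the later kmalloc
 * fails, and return -ENOMEM directly instead of via a status var. */
sd->save_area = alloc_page(GFP_KERNEL);
if (!sd->save_area)
        return -ENOMEM;

if (svm_sev_enabled()) {
        sd->sev_vmcbs = kmalloc((max_sev_asid + 1) * sizeof(void *),
                                GFP_KERNEL);
        if (!sd->sev_vmcbs) {
                __free_page(sd->save_area);   /* the previously leaked page */
                return -ENOMEM;
        }
}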
static ssize_t soc_info_get(struct device *dev, struct device_attribute *attr, char *buf) { struct soc_device *soc_dev = container_of(dev, struct soc_device, dev); if (attr == &dev_attr_machine) return sprintf(buf, "%s\n", soc_dev->attr->machine); if (attr == &dev_attr_family) return sprintf(buf, "%s\n", soc_dev->attr->family); if (attr == &dev_attr_revision) return sprintf(buf, "%s\n", soc_dev->attr->revision); if (attr == &dev_attr_serial_number) return sprintf(buf, "%s\n", soc_dev->attr->serial_number); if (attr == &dev_attr_soc_id) return sprintf(buf, "%s\n", soc_dev->attr->soc_id); return -EINVAL; }
1
[ "CWE-787" ]
linux
aa838896d87af561a33ecefea1caa4c15a68bc47
199,731,750,340,119,500,000,000,000,000,000,000,000
20
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions Convert the various sprintf fmaily calls in sysfs device show functions to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety. Done with: $ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 . And cocci script: $ cat sysfs_emit_dev.cocci @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - sprintf(buf, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - snprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - scnprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; expression chr; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - strcpy(buf, chr); + sysfs_emit(buf, chr); ...> } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - sprintf(buf, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - snprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - scnprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... - len += scnprintf(buf + len, PAGE_SIZE - len, + len += sysfs_emit_at(buf, len, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; expression chr; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { ... - strcpy(buf, chr); - return strlen(buf); + return sysfs_emit(buf, chr); } Signed-off-by: Joe Perches <[email protected]> Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com Signed-off-by: Greg Kroah-Hartman <[email protected]>
GF_Err mfra_box_read(GF_Box *s, GF_BitStream *bs) { return gf_isom_box_array_read(s, bs, mfra_on_child_box); }
0
[ "CWE-787" ]
gpac
388ecce75d05e11fc8496aa4857b91245007d26e
215,277,539,528,114,530,000,000,000,000,000,000,000
4
fixed #1587
c_pdf14trans_write(const gs_composite_t * pct, byte * data, uint * psize, gx_device_clist_writer *cdev) { const gs_pdf14trans_params_t * pparams = &((const gs_pdf14trans_t *)pct)->params; int need, avail = *psize; byte buf[MAX_CLIST_TRANSPARENCY_BUFFER_SIZE]; /* Must be large enough to fit the data written below. We don't implement a dynamic check for the buffer overflow, assuming that the consistency is verified in the coding phase. See the definition of MAX_CLIST_TRANSPARENCY_BUFFER_SIZE. */ byte * pbuf = buf; int opcode = pparams->pdf14_op; int mask_size = 0; uint mask_id = 0; int code; bool found_icc; int64_t hashcode = 0; cmm_profile_t *icc_profile; gsicc_rendering_param_t render_cond; cmm_dev_profile_t *dev_profile; /* We maintain and update working copies until we actually write the clist */ int pdf14_needed = cdev->pdf14_needed; int trans_group_level = cdev->pdf14_trans_group_level; int smask_level = cdev->pdf14_smask_level; code = dev_proc((gx_device *) cdev, get_profile)((gx_device *) cdev, &dev_profile); if (code < 0) return code; gsicc_extract_profile(GS_UNKNOWN_TAG, dev_profile, &icc_profile, &render_cond); *pbuf++ = opcode; /* 1 byte */ switch (opcode) { default: /* Should not occur. */ break; case PDF14_PUSH_DEVICE: trans_group_level = 0; cdev->pdf14_smask_level = 0; cdev->page_pdf14_needed = false; put_value(pbuf, pparams->num_spot_colors); put_value(pbuf, pparams->is_pattern); /* If we happen to be going to a color space like CIELAB then we are going to do our blending in default RGB and convert to CIELAB at the end. To do this, we need to store the default RGB profile in the clist so that we can grab it later on during the clist read back and put image command */ if (icc_profile->data_cs == gsCIELAB || icc_profile->islab) { /* Get the default RGB profile. Set the device hash code so that we can extract it during the put_image operation. */ cdev->trans_dev_icc_hash = pparams->iccprofile->hashcode; found_icc = clist_icc_searchtable(cdev, pparams->iccprofile->hashcode); if (!found_icc) { /* Add it to the table */ clist_icc_addentry(cdev, pparams->iccprofile->hashcode, pparams->iccprofile); } } break; case PDF14_POP_DEVICE: pdf14_needed = false; /* reset pdf14_needed */ trans_group_level = 0; smask_level = 0; put_value(pbuf, pparams->is_pattern); break; case PDF14_END_TRANS_GROUP: case PDF14_END_TRANS_TEXT_GROUP: trans_group_level--; /* if now at page level, pdf14_needed will be updated */ if (smask_level == 0 && trans_group_level == 0) pdf14_needed = cdev->page_pdf14_needed; break; /* No data */ case PDF14_BEGIN_TRANS_GROUP: pdf14_needed = true; /* the compositor will be needed while reading */ trans_group_level++; code = c_pdf14trans_write_ctm(&pbuf, pparams); if (code < 0) return code; *pbuf++ = (pparams->Isolated & 1) + ((pparams->Knockout & 1) << 1); *pbuf++ = pparams->blend_mode; *pbuf++ = pparams->group_color; put_value(pbuf, pparams->group_color_numcomps); put_value(pbuf, pparams->opacity.alpha); put_value(pbuf, pparams->shape.alpha); put_value(pbuf, pparams->bbox); put_value(pbuf, pparams->text_group); mask_id = pparams->mask_id; put_value(pbuf, mask_id); /* Color space information may be ICC based in this case we need to store the ICC profile or the ID if it is cached already */ if (pparams->group_color == ICC) { /* Check if it is already in the ICC clist table */ hashcode = pparams->iccprofile->hashcode; found_icc = clist_icc_searchtable(cdev, hashcode); if (!found_icc) { /* Add it to the table */ clist_icc_addentry(cdev, hashcode, pparams->iccprofile); put_value(pbuf, hashcode); } else { /* It will be in the clist. Just write out the hashcode */ put_value(pbuf, hashcode); } } else { put_value(pbuf, hashcode); } break; case PDF14_BEGIN_TRANS_MASK: if (pparams->subtype != TRANSPARENCY_MASK_None) { pdf14_needed = true; /* the compositor will be needed while reading */ smask_level++; } code = c_pdf14trans_write_ctm(&pbuf, pparams); if (code < 0) return code; put_value(pbuf, pparams->subtype); *pbuf++ = pparams->group_color; put_value(pbuf, pparams->group_color_numcomps); *pbuf++ = pparams->replacing; *pbuf++ = pparams->function_is_identity; *pbuf++ = pparams->Background_components; *pbuf++ = pparams->Matte_components; put_value(pbuf, pparams->bbox); mask_id = pparams->mask_id; put_value(pbuf, mask_id); if (pparams->Background_components) { const int l = sizeof(pparams->Background[0]) * pparams->Background_components; memcpy(pbuf, pparams->Background, l); pbuf += l; memcpy(pbuf, &pparams->GrayBackground, sizeof(pparams->GrayBackground)); pbuf += sizeof(pparams->GrayBackground); } if (pparams->Matte_components) { const int m = sizeof(pparams->Matte[0]) * pparams->Matte_components; memcpy(pbuf, pparams->Matte, m); pbuf += m; } if (!pparams->function_is_identity) mask_size = sizeof(pparams->transfer_fn); /* Color space information may be ICC based in this case we need to store the ICC profile or the ID if it is cached already */ if (pparams->group_color == ICC) { /* Check if it is already in the ICC clist table */ hashcode = pparams->iccprofile->hashcode; found_icc = clist_icc_searchtable(cdev, hashcode); if (!found_icc) { /* Add it to the table */ clist_icc_addentry(cdev, hashcode, pparams->iccprofile); put_value(pbuf, hashcode); } else { /* It will be in the clist. Just write out the hashcode */ put_value(pbuf, hashcode); } } else { put_value(pbuf, hashcode); } break; case PDF14_END_TRANS_MASK: smask_level--; if (smask_level == 0 && trans_group_level == 0) pdf14_needed = cdev->page_pdf14_needed; break; case PDF14_SET_BLEND_PARAMS: if (pparams->blend_mode != BLEND_MODE_Normal || pparams->opacity.alpha != 1.0 || pparams->shape.alpha != 1.0) pdf14_needed = true; /* the compositor will be needed while reading */ else if (smask_level == 0 && trans_group_level == 0) pdf14_needed = false; /* At page level, set back to false */ if (smask_level == 0 && trans_group_level == 0) cdev->page_pdf14_needed = pdf14_needed; /* save for after popping to page level */ *pbuf++ = pparams->changed; if (pparams->changed & PDF14_SET_BLEND_MODE) *pbuf++ = pparams->blend_mode; if (pparams->changed & PDF14_SET_TEXT_KNOCKOUT) *pbuf++ = pparams->text_knockout; if (pparams->changed & PDF14_SET_OPACITY_ALPHA) put_value(pbuf, pparams->opacity.alpha); if (pparams->changed & PDF14_SET_SHAPE_ALPHA) put_value(pbuf, pparams->shape.alpha); if (pparams->changed & PDF14_SET_OVERPRINT) put_value(pbuf, pparams->overprint); if (pparams->changed & PDF14_SET_OVERPRINT_MODE) put_value(pbuf, pparams->overprint_mode); break; case PDF14_PUSH_TRANS_STATE: break; case PDF14_POP_TRANS_STATE: break; case PDF14_PUSH_SMASK_COLOR: return 0; /* We really should never be here */ break; case PDF14_POP_SMASK_COLOR: return 0; /* We really should never be here */ break; } /* check for fit */ need = (pbuf - buf) + mask_size; *psize = need; if (need > avail) { if (avail) return_error(gs_error_rangecheck); else return gs_error_rangecheck; } /* If we are writing more than the maximum ever expected, * return a rangecheck error. Second check is for Coverity */ if ((need + 3 > MAX_CLIST_COMPOSITOR_SIZE) || (need + 3 - mask_size > MAX_CLIST_TRANSPARENCY_BUFFER_SIZE) ) return_error(gs_error_rangecheck); /* Copy our serialized data into the output buffer */ memcpy(data, buf, need - mask_size); if (mask_size) /* Include the transfer mask data if present */ memcpy(data + need - mask_size, pparams->transfer_fn, mask_size); if_debug3m('v', cdev->memory, "[v] c_pdf14trans_write: opcode = %s mask_id=%d need = %d\n", pdf14_opcode_names[opcode], mask_id, need); cdev->pdf14_needed = pdf14_needed; /* all OK to update */ cdev->pdf14_trans_group_level = trans_group_level; cdev->pdf14_smask_level = smask_level; return 0; }
0
[]
ghostpdl
c432131c3fdb2143e148e8ba88555f7f7a63b25e
312,432,765,408,951,740,000,000,000,000,000,000,000
226
Bug 699661: Avoid sharing pointers between pdf14 compositors If a copydevice is triggered when the pdf14 compositor is the device, we make a copy of the device, then throw an error because, by default, we're only allowed to copy the device prototype - then freeing it calls the finalize, which frees several pointers shared with the parent. Make a pdf14 specific finish_copydevice() which NULLs the relevant pointers before, possibly, throwing the same error as the default method. This also highlighted a problem with reopening the X11 devices, where a custom error handler could be replaced with itself, meaning it also called itself, and infinite recursion resulted. Keep a note of whether the handler replacement has been done, and don't do it a second time.
void* formal_count_address() { return &thread_local_top_.formal_count_; }
0
[ "CWE-20", "CWE-119" ]
node
530af9cb8e700e7596b3ec812bad123c9fa06356
9,790,363,854,466,520,000,000,000,000,000,000,000
1
v8: Interrupts must not mask stack overflow. Backport of https://codereview.chromium.org/339883002
static BlockAIOCB *iscsi_aio_ioctl(BlockDriverState *bs, unsigned long int req, void *buf, BlockCompletionFunc *cb, void *opaque) { IscsiLun *iscsilun = bs->opaque; struct iscsi_context *iscsi = iscsilun->iscsi; struct iscsi_data data; IscsiAIOCB *acb; acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque); acb->iscsilun = iscsilun; acb->bh = NULL; acb->status = -EINPROGRESS; acb->ioh = buf; acb->cancelled = false; if (req != SG_IO) { iscsi_ioctl_handle_emulated(acb, req, buf); return &acb->common; } if (acb->ioh->cmd_len > SCSI_CDB_MAX_SIZE) { error_report("iSCSI: ioctl error CDB exceeds max size (%d > %d)", acb->ioh->cmd_len, SCSI_CDB_MAX_SIZE); qemu_aio_unref(acb); return NULL; } acb->task = malloc(sizeof(struct scsi_task)); if (acb->task == NULL) { error_report("iSCSI: Failed to allocate task for scsi command. %s", iscsi_get_error(iscsi)); qemu_aio_unref(acb); return NULL; } memset(acb->task, 0, sizeof(struct scsi_task)); switch (acb->ioh->dxfer_direction) { case SG_DXFER_TO_DEV: acb->task->xfer_dir = SCSI_XFER_WRITE; break; case SG_DXFER_FROM_DEV: acb->task->xfer_dir = SCSI_XFER_READ; break; default: acb->task->xfer_dir = SCSI_XFER_NONE; break; } acb->task->cdb_size = acb->ioh->cmd_len; memcpy(&acb->task->cdb[0], acb->ioh->cmdp, acb->ioh->cmd_len); acb->task->expxferlen = acb->ioh->dxfer_len; data.size = 0; qemu_mutex_lock(&iscsilun->mutex); if (acb->task->xfer_dir == SCSI_XFER_WRITE) { if (acb->ioh->iovec_count == 0) { data.data = acb->ioh->dxferp; data.size = acb->ioh->dxfer_len; } else { scsi_task_set_iov_out(acb->task, (struct scsi_iovec *) acb->ioh->dxferp, acb->ioh->iovec_count); } } if (iscsi_scsi_command_async(iscsi, iscsilun->lun, acb->task, iscsi_aio_ioctl_cb, (data.size > 0) ? &data : NULL, acb) != 0) { qemu_mutex_unlock(&iscsilun->mutex); scsi_free_scsi_task(acb->task); qemu_aio_unref(acb); return NULL; } /* tell libiscsi to read straight into the buffer we got from ioctl */ if (acb->task->xfer_dir == SCSI_XFER_READ) { if (acb->ioh->iovec_count == 0) { scsi_task_add_data_in_buffer(acb->task, acb->ioh->dxfer_len, acb->ioh->dxferp); } else { scsi_task_set_iov_in(acb->task, (struct scsi_iovec *) acb->ioh->dxferp, acb->ioh->iovec_count); } } iscsi_set_events(iscsilun); qemu_mutex_unlock(&iscsilun->mutex); return &acb->common; }
0
[ "CWE-125" ]
qemu
ff0507c239a246fd7215b31c5658fc6a3ee1e4c5
151,470,465,877,630,770,000,000,000,000,000,000,000
95
block/iscsi: fix heap-buffer-overflow in iscsi_aio_ioctl_cb There is an overflow: the source 'datain.data[2]' is 100 bytes, but 'ss' is 252 bytes. This may cause a security issue because we can access a lot of unrelated memory data. The len for the sbp copy should take the minimum of mx_sb_len and sb_len_wr, not the maximum. If we use an iscsi device for VM backend storage, ASAN shows this stack: READ of size 252 at 0xfffd149dcfc4 thread T0 #0 0xaaad433d0d34 in __asan_memcpy (aarch64-softmmu/qemu-system-aarch64+0x2cb0d34) #1 0xaaad45f9d6d0 in iscsi_aio_ioctl_cb /qemu/block/iscsi.c:996:9 #2 0xfffd1af0e2dc (/usr/lib64/iscsi/libiscsi.so.8+0xe2dc) #3 0xfffd1af0d174 (/usr/lib64/iscsi/libiscsi.so.8+0xd174) #4 0xfffd1af19fac (/usr/lib64/iscsi/libiscsi.so.8+0x19fac) #5 0xaaad45f9acc8 in iscsi_process_read /qemu/block/iscsi.c:403:5 #6 0xaaad4623733c in aio_dispatch_handler /qemu/util/aio-posix.c:467:9 #7 0xaaad4622f350 in aio_dispatch_handlers /qemu/util/aio-posix.c:510:20 #8 0xaaad4622f350 in aio_dispatch /qemu/util/aio-posix.c:520 #9 0xaaad46215944 in aio_ctx_dispatch /qemu/util/async.c:298:5 #10 0xfffd1bed12f4 in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x512f4) #11 0xaaad46227de0 in glib_pollfds_poll /qemu/util/main-loop.c:219:9 #12 0xaaad46227de0 in os_host_main_loop_wait /qemu/util/main-loop.c:242 #13 0xaaad46227de0 in main_loop_wait /qemu/util/main-loop.c:518 #14 0xaaad43d9d60c in qemu_main_loop /qemu/softmmu/vl.c:1662:9 #15 0xaaad4607a5b0 in main /qemu/softmmu/main.c:49:5 #16 0xfffd1a460b9c in __libc_start_main (/lib64/libc.so.6+0x20b9c) #17 0xaaad43320740 in _start (aarch64-softmmu/qemu-system-aarch64+0x2c00740) 0xfffd149dcfc4 is located 0 bytes to the right of 100-byte region [0xfffd149dcf60,0xfffd149dcfc4) allocated by thread T0 here: #0 0xaaad433d1e70 in __interceptor_malloc (aarch64-softmmu/qemu-system-aarch64+0x2cb1e70) #1 0xfffd1af0e254 (/usr/lib64/iscsi/libiscsi.so.8+0xe254) #2 0xfffd1af0d174 (/usr/lib64/iscsi/libiscsi.so.8+0xd174) #3 0xfffd1af19fac (/usr/lib64/iscsi/libiscsi.so.8+0x19fac) #4 0xaaad45f9acc8 in iscsi_process_read /qemu/block/iscsi.c:403:5 #5 0xaaad4623733c in aio_dispatch_handler /qemu/util/aio-posix.c:467:9 #6 0xaaad4622f350 in aio_dispatch_handlers /qemu/util/aio-posix.c:510:20 #7 0xaaad4622f350 in aio_dispatch /qemu/util/aio-posix.c:520 #8 0xaaad46215944 in aio_ctx_dispatch /qemu/util/async.c:298:5 #9 0xfffd1bed12f4 in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x512f4) #10 0xaaad46227de0 in glib_pollfds_poll /qemu/util/main-loop.c:219:9 #11 0xaaad46227de0 in os_host_main_loop_wait /qemu/util/main-loop.c:242 #12 0xaaad46227de0 in main_loop_wait /qemu/util/main-loop.c:518 #13 0xaaad43d9d60c in qemu_main_loop /qemu/softmmu/vl.c:1662:9 #14 0xaaad4607a5b0 in main /qemu/softmmu/main.c:49:5 #15 0xfffd1a460b9c in __libc_start_main (/lib64/libc.so.6+0x20b9c) #16 0xaaad43320740 in _start (aarch64-softmmu/qemu-system-aarch64+0x2c00740) Reported-by: Euler Robot <[email protected]> Signed-off-by: Chen Qun <[email protected]> Reviewed-by: Stefan Hajnoczi <[email protected]> Message-id: [email protected] Reviewed-by: Daniel P. Berrangé <[email protected]> Signed-off-by: Peter Maydell <[email protected]>
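The corrected bound is a one-liner worth showing. The sg_io_hdr field names (sbp, mx_sb_len, sb_len_wr) come from the message and trace; treat the sketch as illustrative rather than the exact patch hunk.

/* Sketch: copy at most what the caller reserved AND at most what the
 * target actually returned -- the minimum, never the maximum. */
int ss = MIN(acb->ioh->mx_sb_len, acb->ioh->sb_len_wr);
if (ss > 0)
    memcpy(acb->ioh->sbp, &acb->task->datain.data[2], ss);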
static void dwc3_reset_gadget(struct dwc3 *dwc) { if (!dwc->gadget_driver) return; if (dwc->gadget.speed != USB_SPEED_UNKNOWN) { spin_unlock(&dwc->lock); usb_gadget_udc_reset(&dwc->gadget, dwc->gadget_driver); spin_lock(&dwc->lock); } }
0
[ "CWE-703", "CWE-667", "CWE-189" ]
linux
c91815b596245fd7da349ecc43c8def670d2269e
310,934,271,350,631,120,000,000,000,000,000,000,000
11
usb: dwc3: gadget: never call ->complete() from ->ep_queue() This is a requirement which has always existed but, somehow, wasn't reflected in the documentation and problems weren't found until now when Tuba Yavuz found a possible deadlock happening between dwc3 and f_hid. She described the situation as follows: spin_lock_irqsave(&hidg->write_spinlock, flags); // first acquire /* we our function has been disabled by host */ if (!hidg->req) { free_ep_req(hidg->in_ep, hidg->req); goto try_again; } [...] status = usb_ep_queue(hidg->in_ep, hidg->req, GFP_ATOMIC); => [...] => usb_gadget_giveback_request => f_hidg_req_complete => spin_lock_irqsave(&hidg->write_spinlock, flags); // second acquire Note that this happens because dwc3 would call ->complete() on a failed usb_ep_queue() due to failed Start Transfer command. This is, anyway, a theoretical situation because dwc3 currently uses "No Response Update Transfer" command for Bulk and Interrupt endpoints. It's still good to make this case impossible to happen even if the "No Reponse Update Transfer" command is changed. Reported-by: Tuba Yavuz <[email protected]> Signed-off-by: Felipe Balbi <[email protected]> Cc: stable <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
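The contract this message restates is general: a queue-style API must report a synchronous setup failure through its return value and must not invoke the request's completion callback from inside the queue call, because the caller may still hold the lock the callback takes. A toy model of that contract in plain C, not the kernel gadget API:

#include <stdio.h>

struct request {
    void (*complete)(struct request *);
    int status;
};

/* Wrong: invoking req->complete() here re-enters locks the caller may
 * still hold (the f_hid write_spinlock in the report above). Right:
 * report the Start Transfer failure via the return value and let the
 * caller unwind; completion fires later from a lock-free context. */
static int ep_queue(struct request *req, int start_transfer_ok)
{
    if (!start_transfer_ok) {
        req->status = -22;   /* -EINVAL, reported via the return path */
        return -22;          /* do NOT call req->complete() here */
    }
    return 0;
}

static void on_complete(struct request *r) { printf("complete: %d\n", r->status); }

int main(void)
{
    struct request r = { on_complete, 0 };
    if (ep_queue(&r, 0) != 0)
        printf("queue failed: %d (no re-entrant completion)\n", r.status);
    return 0;
}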
static void php_session_track_init(void) /* {{{ */ { zval session_vars; zend_string *var_name = zend_string_init("_SESSION", sizeof("_SESSION") - 1, 0); /* Unconditionally destroy existing array -- possible dirty data */ zend_delete_global_variable(var_name); if (!Z_ISUNDEF(PS(http_session_vars))) { zval_ptr_dtor(&PS(http_session_vars)); } array_init(&session_vars); ZVAL_NEW_REF(&PS(http_session_vars), &session_vars); Z_ADDREF_P(&PS(http_session_vars)); zend_hash_update_ind(&EG(symbol_table), var_name, &PS(http_session_vars)); zend_string_release(var_name); }
0
[ "CWE-476" ]
php-src
d76f7c6c636b8240e06a1fa29eebb98ad005008a
257,835,829,665,622,080,000,000,000,000,000,000,000
17
Fix bug #79221 - Null Pointer Dereference in PHP Session Upload Progress
T cubic_cut_atX(const float fx, const int y, const int z, const int c, const T& out_value) const { return cimg::type<T>::cut(cubic_atX(fx,y,z,c,out_value)); }
0
[ "CWE-125" ]
CImg
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
318,543,818,374,875,900,000,000,000,000,000,000,000
3
Fix other issues in 'CImg<T>::load_bmp()'.
UnhiliteSelection(XtermWidget xw) { TScreen *screen = TScreenOf(xw); if (ScrnHaveSelection(screen)) { CELL first = screen->startH; CELL last = screen->endH; screen->startH = zeroCELL; screen->endH = zeroCELL; ReHiliteText(xw, &first, &last); } }
0
[ "CWE-399" ]
xterm-snapshots
82ba55b8f994ab30ff561a347b82ea340ba7075c
118,965,216,151,274,950,000,000,000,000,000,000,000
13
snapshot of project "xterm", label xterm-365d
static inline int sdsHdrSize(char type) { switch(type&SDS_TYPE_MASK) { case SDS_TYPE_5: return sizeof(struct sdshdr5); case SDS_TYPE_8: return sizeof(struct sdshdr8); case SDS_TYPE_16: return sizeof(struct sdshdr16); case SDS_TYPE_32: return sizeof(struct sdshdr32); case SDS_TYPE_64: return sizeof(struct sdshdr64); } return 0; }
0
[ "CWE-190" ]
redis
d32f2e9999ce003bad0bd2c3bca29f64dcce4433
313,283,130,146,822,730,000,000,000,000,000,000,000
15
Fix integer overflow (CVE-2021-21309). (#8522) On 32-bit systems, setting the proto-max-bulk-len config parameter to a high value may result with integer overflow and a subsequent heap overflow when parsing an input bulk (CVE-2021-21309). This fix has two parts: Set a reasonable limit to the config parameter. Add additional checks to prevent the problem in other potential but unknown code paths.
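The overflow class here is a configured limit that exceeds what a 32-bit size_t can represent, so a later size computation wraps before allocation. A hedged sketch of the two-part guard the message describes, cap the config value and re-check at the use site, with illustrative constants rather than Redis's actual ones:

#include <stdint.h>
#include <stdio.h>

/* Reject a configured bulk length that a 32-bit build cannot handle:
 * values near SIZE_MAX wrap once header/allocator overhead is added. */
static int valid_bulk_len(unsigned long long cfg)
{
    const unsigned long long hard_cap = 2047ULL * 1024 * 1024; /* illustrative cap */
    if (cfg > hard_cap)
        return 0;                       /* "reasonable limit" on the parameter */
    if (cfg > SIZE_MAX - 1024)
        return 0;                       /* headroom check at the use site */
    return 1;
}

int main(void)
{
    printf("%d\n", valid_bulk_len(512ULL * 1024 * 1024)); /* 1: accepted */
    printf("%d\n", valid_bulk_len(~0ULL));                /* 0: rejected */
    return 0;
}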
size_t ArgsLength() { return _args_length; }
0
[ "CWE-191" ]
node
656260b4b65fec3b10f6da3fdc9f11fb941aafb5
69,957,221,759,236,590,000,000,000,000,000,000,000
1
napi: fix memory corruption vulnerability Fixes: https://hackerone.com/reports/784186 CVE-ID: CVE-2020-8174 PR-URL: https://github.com/nodejs-private/node-private/pull/195 Reviewed-By: Anna Henningsen <[email protected]> Reviewed-By: Gabriel Schulhof <[email protected]> Reviewed-By: Michael Dawson <[email protected]> Reviewed-By: Colin Ihrig <[email protected]> Reviewed-By: Rich Trott <[email protected]>
MagickExport MagickBooleanType DrawAffineImage(Image *image, const Image *source,const AffineMatrix *affine,ExceptionInfo *exception) { AffineMatrix inverse_affine; CacheView *image_view, *source_view; MagickBooleanType status; PixelInfo zero; PointInfo extent[4], min, max; register ssize_t i; SegmentInfo edge; ssize_t start, stop, y; /* Determine bounding box. */ assert(image != (Image *) NULL); assert(image->signature == MagickCoreSignature); if (image->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename); assert(source != (const Image *) NULL); assert(source->signature == MagickCoreSignature); assert(affine != (AffineMatrix *) NULL); extent[0].x=0.0; extent[0].y=0.0; extent[1].x=(double) source->columns-1.0; extent[1].y=0.0; extent[2].x=(double) source->columns-1.0; extent[2].y=(double) source->rows-1.0; extent[3].x=0.0; extent[3].y=(double) source->rows-1.0; for (i=0; i < 4; i++) { PointInfo point; point=extent[i]; extent[i].x=point.x*affine->sx+point.y*affine->ry+affine->tx; extent[i].y=point.x*affine->rx+point.y*affine->sy+affine->ty; } min=extent[0]; max=extent[0]; for (i=1; i < 4; i++) { if (min.x > extent[i].x) min.x=extent[i].x; if (min.y > extent[i].y) min.y=extent[i].y; if (max.x < extent[i].x) max.x=extent[i].x; if (max.y < extent[i].y) max.y=extent[i].y; } /* Affine transform image. */ if (SetImageStorageClass(image,DirectClass,exception) == MagickFalse) return(MagickFalse); status=MagickTrue; edge.x1=MagickMax(min.x,0.0); edge.y1=MagickMax(min.y,0.0); edge.x2=MagickMin(max.x,(double) image->columns-1.0); edge.y2=MagickMin(max.y,(double) image->rows-1.0); inverse_affine=InverseAffineMatrix(affine); GetPixelInfo(image,&zero); start=(ssize_t) ceil(edge.y1-0.5); stop=(ssize_t) floor(edge.y2+0.5); source_view=AcquireVirtualCacheView(source,exception); image_view=AcquireAuthenticCacheView(image,exception); #if defined(MAGICKCORE_OPENMP_SUPPORT) #pragma omp parallel for schedule(static) shared(status) \ magick_number_threads(source,image,stop-start,1) #endif for (y=start; y <= stop; y++) { PixelInfo composite, pixel; PointInfo point; register ssize_t x; register Quantum *magick_restrict q; SegmentInfo inverse_edge; ssize_t x_offset; inverse_edge=AffineEdge(source,&inverse_affine,(double) y,&edge); if (inverse_edge.x2 < inverse_edge.x1) continue; q=GetCacheViewAuthenticPixels(image_view,(ssize_t) ceil(inverse_edge.x1- 0.5),y,(size_t) (floor(inverse_edge.x2+0.5)-ceil(inverse_edge.x1-0.5)+1), 1,exception); if (q == (Quantum *) NULL) continue; pixel=zero; composite=zero; x_offset=0; for (x=(ssize_t) ceil(inverse_edge.x1-0.5); x <= (ssize_t) floor(inverse_edge.x2+0.5); x++) { point.x=(double) x*inverse_affine.sx+y*inverse_affine.ry+ inverse_affine.tx; point.y=(double) x*inverse_affine.rx+y*inverse_affine.sy+ inverse_affine.ty; status=InterpolatePixelInfo(source,source_view,UndefinedInterpolatePixel, point.x,point.y,&pixel,exception); if (status == MagickFalse) break; GetPixelInfoPixel(image,q,&composite); CompositePixelInfoOver(&pixel,pixel.alpha,&composite,composite.alpha, &composite); SetPixelViaPixelInfo(image,&composite,q); x_offset++; q+=GetPixelChannels(image); } if (SyncCacheViewAuthenticPixels(image_view,exception) == MagickFalse) status=MagickFalse; } source_view=DestroyCacheView(source_view); image_view=DestroyCacheView(image_view); return(status); }
0
[ "CWE-416" ]
ImageMagick
ecf7c6b288e11e7e7f75387c5e9e93e423b98397
18,643,341,953,036,294,000,000,000,000,000,000,000
148
...
static int seq_timing_event(unsigned char *event_rec) { unsigned char cmd = event_rec[1]; unsigned int parm = *(int *) &event_rec[4]; if (seq_mode == SEQ_2) { int ret; if ((ret = tmr->event(tmr_no, event_rec)) == TIMER_ARMED) if ((SEQ_MAX_QUEUE - qlen) >= output_threshold) wake_up(&seq_sleeper); return ret; } switch (cmd) { case TMR_WAIT_REL: parm += prev_event_time; /* * NOTE! No break here. Execution of TMR_WAIT_REL continues in the * next case (TMR_WAIT_ABS) */ case TMR_WAIT_ABS: if (parm > 0) { long time; time = parm; prev_event_time = time; seq_playing = 1; request_sound_timer(time); if ((SEQ_MAX_QUEUE - qlen) >= output_threshold) wake_up(&seq_sleeper); return TIMER_ARMED; } break; case TMR_START: seq_time = jiffies; prev_input_time = 0; prev_event_time = 0; break; case TMR_STOP: break; case TMR_CONTINUE: break; case TMR_TEMPO: break; case TMR_ECHO: if (seq_mode == SEQ_2) seq_copy_to_input(event_rec, 8); else { parm = (parm << 8 | SEQ_ECHO); seq_copy_to_input((unsigned char *) &parm, 4); } break; default:; } return TIMER_NOT_ARMED; }
0
[ "CWE-703", "CWE-189" ]
linux
b769f49463711205d57286e64cf535ed4daf59e9
209,783,111,993,532,170,000,000,000,000,000,000,000
71
sound/oss: remove offset from load_patch callbacks Was: [PATCH] sound/oss/midi_synth: prevent underflow, use of uninitialized value, and signedness issue The offset passed to midi_synth_load_patch() can be essentially arbitrary. If it's greater than the header length, this will result in a copy_from_user(dst, src, negative_val). While this will just return -EFAULT on x86, on other architectures this may cause memory corruption. Additionally, the length field of the sysex_info structure may not be initialized prior to its use. Finally, a signed comparison may result in an unintentionally large loop. On suggestion by Takashi Iwai, version two removes the offset argument from the load_patch callbacks entirely, which also resolves similar issues in opl3. Compile tested only. v3 adjusts comments and hopefully gets copy offsets right. Signed-off-by: Dan Rosenberg <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
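The core hazard named in this message is arithmetic on an unchecked offset: a value larger than the header length yields a negative remaining size, which becomes enormous when passed to copy_from_user() as an unsigned count. A user-space analogue of the missing validation (the struct and sizes below are stand-ins, not the kernel's sysex_info):

#include <stdio.h>
#include <string.h>

struct sysex_info { int len; unsigned char data[64]; };  /* stand-in only */

/* Copy patch payload, rejecting offsets that would make the remaining
 * length negative; a negative value cast to size_t is huge, which is
 * exactly the copy_from_user(dst, src, negative_val) failure mode. */
static int load_patch(unsigned char *dst, size_t dst_sz,
                      const struct sysex_info *src, int offset)
{
    if (offset < 0 || offset > src->len)
        return -1;                          /* the check the old code lacked */
    size_t remain = (size_t)(src->len - offset);
    if (remain > dst_sz)
        remain = dst_sz;
    memcpy(dst, src->data + offset, remain);
    return 0;
}

int main(void)
{
    struct sysex_info s = { 16, { 0xF0 } };
    unsigned char buf[64];
    printf("%d\n", load_patch(buf, sizeof(buf), &s, 32)); /* -1: rejected */
    printf("%d\n", load_patch(buf, sizeof(buf), &s, 4));  /*  0: 12 bytes copied */
    return 0;
}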
flatpak_dir_remote_fetch_summary_index (FlatpakDir *self, const char *name_or_uri, gboolean only_cached, GBytes **out_index, GBytes **out_index_sig, GCancellable *cancellable, GError **error) { g_autofree char *url = NULL; gboolean is_local; g_autoptr(GError) local_error = NULL; g_autoptr(GError) cache_error = NULL; g_autoptr(GBytes) cached_index = NULL; g_autoptr(GBytes) cached_index_sig = NULL; g_autoptr(GBytes) index = NULL; g_autoptr(GBytes) index_sig = NULL; gboolean gpg_verify_summary; ensure_soup_session (self); if (!ostree_repo_remote_get_url (self->repo, name_or_uri, &url, error)) return FALSE; if (!ostree_repo_remote_get_gpg_verify_summary (self->repo, name_or_uri, &gpg_verify_summary, NULL)) return FALSE; if (!g_str_has_prefix (name_or_uri, "file:") && flatpak_dir_get_remote_disabled (self, name_or_uri)) { g_set_error (error, G_IO_ERROR, G_IO_ERROR_INVALID_DATA, "Can't fetch summary from disabled remote ‘%s’", name_or_uri); return FALSE; } if (flatpak_dir_get_remote_oci (self, name_or_uri)) { g_set_error (error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND, "No index in OCI remote ‘%s’", name_or_uri); return FALSE; } is_local = g_str_has_prefix (url, "file:"); /* Seems ostree asserts if this is NULL */ if (error == NULL) error = &local_error; flatpak_dir_remote_load_cached_summary (self, name_or_uri, ".idx", ".idx.sig", &cached_index, &cached_index_sig, cancellable, &cache_error); if (only_cached) { if (cached_index == NULL) { g_propagate_error (error, g_steal_pointer (&cache_error)); return FALSE; } g_debug ("Loaded summary index from cache for remote ‘%s’", name_or_uri); index = g_steal_pointer (&cached_index); if (gpg_verify_summary) index_sig = g_steal_pointer (&cached_index_sig); } else { g_autofree char *index_url = g_build_filename (url, "summary.idx", NULL); g_autoptr(GBytes) dl_index = NULL; gboolean used_download = FALSE; g_debug ("Fetching summary index file for remote ‘%s’", name_or_uri); dl_index = flatpak_load_uri (self->soup_session, index_url, 0, NULL, NULL, NULL, NULL, cancellable, error); if (dl_index == NULL) return FALSE; /* If the downloaded index is the same as the cached one we need not re-download or * re-verify, just use the cache (which we verified before) */ if (cached_index != NULL && g_bytes_equal (cached_index, dl_index)) { index = g_steal_pointer (&cached_index); if (gpg_verify_summary) index_sig = g_steal_pointer (&cached_index_sig); } else { index = g_steal_pointer (&dl_index); used_download = TRUE; } if (gpg_verify_summary && index_sig == NULL) { g_autofree char *index_digest = g_compute_checksum_for_bytes (G_CHECKSUM_SHA256, index); g_autofree char *index_sig_filename = g_strconcat (index_digest, ".idx.sig", NULL); g_autofree char *index_sig_url = g_build_filename (url, "summaries", index_sig_filename, NULL); g_autofree char *index_sig_url2 = g_build_filename (url, "summary.idx.sig", NULL); g_autoptr(GError) dl_sig_error = NULL; g_autoptr (GBytes) dl_index_sig = NULL; dl_index_sig = load_uri_with_fallback (self->soup_session, index_sig_url, index_sig_url2, 0, NULL, cancellable, &dl_sig_error); if (dl_index_sig == NULL) { if (g_error_matches (dl_sig_error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND)) g_set_error (error, OSTREE_GPG_ERROR, OSTREE_GPG_ERROR_NO_SIGNATURE, "GPG verification enabled, but no summary signatures found (use gpg-verify-summary=false in remote config to disable)"); else g_propagate_error (error, g_steal_pointer (&dl_sig_error)); return FALSE; } if (!remote_verify_signature (self->repo, name_or_uri, index, dl_index_sig, cancellable, error)) return FALSE; index_sig = g_steal_pointer (&dl_index_sig); used_download = TRUE; } g_assert (index != NULL); if (gpg_verify_summary) g_assert (index_sig != NULL); /* Update cache on disk if we downloaded anything, but never cache for file: repos */ if (used_download && !is_local && !flatpak_dir_remote_save_cached_summary (self, name_or_uri, ".idx", ".idx.sig", index, index_sig, cancellable, error)) return FALSE; } /* Cache in memory */ if (!is_local && !only_cached) { g_autofree char *cache_key = g_strconcat ("index-", name_or_uri, NULL); flatpak_dir_cache_summary (self, index, index_sig, cache_key, url); } *out_index = g_steal_pointer (&index); if (out_index_sig) *out_index_sig = g_steal_pointer (&index_sig); return TRUE; }
0
[ "CWE-74" ]
flatpak
fb473cad801c6b61706353256cab32330557374a
185,625,475,002,880,030,000,000,000,000,000,000,000
145
dir: Pass environment via bwrap --setenv when running apply_extra This means we can systematically pass the environment variables through bwrap(1), even if it is setuid and thus is filtering out security-sensitive environment variables. bwrap ends up being run with an empty environment instead. As with the previous commit, this regressed while fixing CVE-2021-21261. Fixes: 6d1773d2 "run: Convert all environment variables into bwrap arguments" Signed-off-by: Simon McVittie <[email protected]>
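The mechanism of the fix: because a setuid bwrap scrubs its inherited environment, flatpak forwards each variable explicitly as a --setenv argument on the bwrap command line. A minimal sketch of building such a command line (the variables and target path are illustrative, not flatpak's actual set):

#include <stdio.h>

/* Carry the environment as explicit "--setenv VAR VALUE" arguments so
 * the child still sees it even when bwrap drops its own environment. */
int main(void)
{
    const char *env[][2] = { { "LANG", "C.UTF-8" }, { "TMPDIR", "/tmp" } };
    printf("bwrap");
    for (size_t i = 0; i < sizeof(env) / sizeof(env[0]); i++)
        printf(" --setenv %s %s", env[i][0], env[i][1]);
    printf(" -- /app/bin/apply_extra\n");   /* hypothetical target */
    return 0;
}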
static inline void *get_ir_table(uint32_t dmar_index) { static struct intr_remap_table ir_tables[CONFIG_MAX_IOMMU_NUM] __aligned(PAGE_SIZE); return (void *)ir_tables[dmar_index].tables[0].contents; }
0
[ "CWE-120", "CWE-787" ]
acrn-hypervisor
25c0e3817eb332660dd63d1d4522e63dcc94e79a
249,476,805,806,670,400,000,000,000,000,000,000,000
5
hv: validate input for dmar_free_irte function Malicious input 'index' may trigger buffer overflow on array 'irte_alloc_bitmap[]'. This patch validates that 'index' is less than 'CONFIG_MAX_IR_ENTRIES' and also removes an unnecessary check on 'index' in the 'ptirq_free_irte()' function as part of this fix. Tracked-On: #6132 Signed-off-by: Yonghua Huang <[email protected]>
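The fix pattern is plain bounds validation before indexing a fixed-size bitmap. A self-contained sketch (the array name and CONFIG_MAX_IR_ENTRIES follow the message; the size and bit layout are illustrative):

#include <stdint.h>
#include <stdio.h>

#define CONFIG_MAX_IR_ENTRIES 256                  /* illustrative size */
static uint8_t irte_alloc_bitmap[CONFIG_MAX_IR_ENTRIES / 8];

/* Free an interrupt-remapping entry only after validating the index;
 * an unchecked index lets a malicious caller flip bits past the array. */
static int dmar_free_irte(uint32_t index)
{
    if (index >= CONFIG_MAX_IR_ENTRIES)
        return -1;                                 /* the added validation */
    irte_alloc_bitmap[index >> 3] &= (uint8_t)~(1u << (index & 7u));
    return 0;
}

int main(void)
{
    printf("%d\n", dmar_free_irte(10));    /*  0: in range */
    printf("%d\n", dmar_free_irte(4096));  /* -1: rejected */
    return 0;
}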
static void fatal_jpeg_error (j_common_ptr cinfo) { jmpbuf_wrapper *jmpbufw; char buffer[JMSG_LENGTH_MAX]; (*cinfo->err->format_message)(cinfo, buffer); gd_error_ex(GD_WARNING, "gd-jpeg: JPEG library reports unrecoverable error: %s", buffer); jmpbufw = (jmpbuf_wrapper *) cinfo->client_data; jpeg_destroy (cinfo); if (jmpbufw != 0) { longjmp (jmpbufw->jmpbuf, 1); gd_error_ex(GD_ERROR, "gd-jpeg: EXTREMELY fatal error: longjmp returned control; terminating"); } else { gd_error_ex(GD_ERROR, "gd-jpeg: EXTREMELY fatal error: jmpbuf unrecoverable; terminating"); } exit (99); }
0
[ "CWE-415" ]
php-src
089f7c0bc28d399b0420aa6ef058e4c1c120b2ae
117,697,483,284,954,300,000,000,000,000,000,000,000
20
Sync with upstream Even though libgd/libgd#492 is not a relevant bug fix for PHP, since the binding doesn't use the `gdImage*Ptr()` functions at all, we're porting the fix to stay in sync here.
void term_gets(GArray *buffer, int *line_count) { int ret, i, char_len; /* fread() doesn't work */ ret = read(fileno(current_term->in), term_inbuf + term_inbuf_pos, sizeof(term_inbuf)-term_inbuf_pos); if (ret == 0) { /* EOF - terminal got lost */ ret = -1; } else if (ret == -1 && (errno == EINTR || errno == EAGAIN)) ret = 0; if (ret == -1) signal_emit("command quit", 1, "Lost terminal"); if (ret > 0) { /* convert input to unichars. */ term_inbuf_pos += ret; for (i = 0; i < term_inbuf_pos; ) { unichar key; char_len = input_func(term_inbuf+i, term_inbuf_pos-i, &key); if (char_len < 0) break; g_array_append_val(buffer, key); if (key == '\r' || key == '\n') (*line_count)++; i += char_len; } if (i >= term_inbuf_pos) term_inbuf_pos = 0; else if (i > 0) { memmove(term_inbuf, term_inbuf+i, term_inbuf_pos-i); term_inbuf_pos -= i; } } }
0
[ "CWE-476" ]
irssi
6c6c42e3d1b49d90aacc0b67f8540471cae02a1d
191,845,723,583,014,030,000,000,000,000,000,000,000
40
Merge branch 'security' into 'master' See merge request !7
Upstream::ClusterManager& clusterManager() const { return cluster_manager_; }
0
[ "CWE-476" ]
envoy
8788a3cf255b647fd14e6b5e2585abaaedb28153
321,909,890,197,203,450,000,000,000,000,000,000,000
1
1.4 - Do not call into the VM unless the VM Context has been created. (#24) * Ensure that the in VM Context is created before onDone is called. Signed-off-by: John Plevyak <[email protected]> * Update as per offline discussion. Signed-off-by: John Plevyak <[email protected]> * Set in_vm_context_created_ in onNetworkNewConnection. Signed-off-by: John Plevyak <[email protected]> * Add guards to other network calls. Signed-off-by: John Plevyak <[email protected]> * Fix common/wasm tests. Signed-off-by: John Plevyak <[email protected]> * Patch tests. Signed-off-by: John Plevyak <[email protected]> * Remove unecessary file from cherry-pick. Signed-off-by: John Plevyak <[email protected]>
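The pattern behind this fix is a created-flag guard: every callback that can fire before the in-VM context exists checks the flag before dispatching into the VM. A C sketch of the guard (the flag name follows the commit; the surrounding machinery is hypothetical):

#include <stdbool.h>
#include <stdio.h>

static bool in_vm_context_created = false;  /* set once the context exists */

static void call_into_vm(const char *event)
{
    if (!in_vm_context_created) {           /* the added guard */
        printf("skip %s: VM context not created yet\n", event);
        return;
    }
    printf("dispatch %s into VM\n", event);
}

int main(void)
{
    call_into_vm("onDone");                 /* skipped safely */
    in_vm_context_created = true;           /* e.g. onNetworkNewConnection */
    call_into_vm("onDone");                 /* dispatched */
    return 0;
}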
int thr_info_create(struct thr_info *thr, pthread_attr_t *attr, void *(*start) (void *), void *arg) { cgsem_init(&thr->sem); return pthread_create(&thr->pth, attr, start, arg); }
0
[ "CWE-20", "CWE-703" ]
sgminer
910c36089940e81fb85c65b8e63dcd2fac71470c
119,939,999,207,559,840,000,000,000,000,000,000,000
6
stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime. Might have introduced a memory leak, don't have time to check. :( Should the other hex2bin()'s be checked? Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this.
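The fix amounts to checking the hex2bin() result for every attacker-controlled notify field instead of assuming success. A sketch of the checked pattern, using a simplified stand-in for sgminer's hex2bin (not its real signature):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in: decode hex into out, false on bad length/digit. */
static bool hex2bin(unsigned char *out, const char *hex, size_t out_len)
{
    if (strlen(hex) != out_len * 2)
        return false;
    for (size_t i = 0; i < out_len; i++) {
        unsigned v;
        if (sscanf(hex + 2 * i, "%2x", &v) != 1)
            return false;
        out[i] = (unsigned char)v;
    }
    return true;
}

int main(void)
{
    unsigned char prev_hash[32];
    const char *field = "zz42";   /* malformed stratum notify field */
    if (!hex2bin(prev_hash, field, sizeof(prev_hash))) {
        fprintf(stderr, "rejecting malformed prev_hash, keeping the miner alive\n");
        return 1;                 /* fail the message, don't die */
    }
    return 0;
}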
int cil_gen_sidcontext(struct cil_db *db, struct cil_tree_node *parse_current, struct cil_tree_node *ast_node) { enum cil_syntax syntax[] = { CIL_SYN_STRING, CIL_SYN_STRING, CIL_SYN_STRING | CIL_SYN_LIST, CIL_SYN_END }; int syntax_len = sizeof(syntax)/sizeof(*syntax); struct cil_sidcontext *sidcon = NULL; int rc = SEPOL_ERR; if (db == NULL || parse_current == NULL || ast_node == NULL) { goto exit; } rc = __cil_verify_syntax(parse_current, syntax, syntax_len); if (rc != SEPOL_OK) { goto exit; } cil_sidcontext_init(&sidcon); sidcon->sid_str = parse_current->next->data; if (parse_current->next->next->cl_head == NULL) { sidcon->context_str = parse_current->next->next->data; } else { cil_context_init(&sidcon->context); rc = cil_fill_context(parse_current->next->next->cl_head, sidcon->context); if (rc != SEPOL_OK) { goto exit; } } ast_node->data = sidcon; ast_node->flavor = CIL_SIDCONTEXT; return SEPOL_OK; exit: cil_tree_log(parse_current, CIL_ERR, "Bad sidcontext declaration"); cil_destroy_sidcontext(sidcon); return rc; }
0
[ "CWE-125" ]
selinux
340f0eb7f3673e8aacaf0a96cbfcd4d12a405521
136,168,249,351,250,120,000,000,000,000,000,000,000
46
libsepol/cil: Check for statements not allowed in optional blocks While there are some checks for invalid statements in an optional block when resolving the AST, there are no checks when building the AST. OSS-Fuzz found the following policy which caused a null dereference in cil_tree_get_next_path(). (blockinherit b3) (sid SID) (sidorder(SID)) (optional o (ibpkeycon :(1 0)s) (block b3 (filecon""block()) (filecon""block()))) The problem is that the blockinherit copies block b3 before the optional block is disabled. When the optional is disabled, block b3 is deleted along with everything else in the optional. Later, when filecon statements with the same path are found an error message is produced and in trying to find out where the block was copied from, the reference to the deleted block is used. The error handling code assumes (rightly) that if something was copied from a block then that block should still exist. It is clear that in-statements, blocks, and macros cannot be in an optional, because that allows nodes to be copied from the optional block to somewhere outside even though the optional could be disabled later. When optionals are disabled the AST is reset and the resolution is restarted at the point of resolving macro calls, so anything resolved before macro calls will never be re-resolved. This includes tunableifs, in-statements, blockinherits, blockabstracts, and macro definitions. Tunable declarations also cannot be in an optional block because they are needed to resolve tunableifs. It should be fine to allow blockinherit statements in an optional, because that is copying nodes from outside the optional to the optional and if the optional is later disabled, everything will be deleted anyway. Check and quit with an error if a tunable declaration, in-statement, block, blockabstract, or macro definition is found within an optional when either building or resolving the AST. Signed-off-by: James Carter <[email protected]>
connection_dirserv_flushed_some(dir_connection_t *conn) { tor_assert(conn->_base.state == DIR_CONN_STATE_SERVER_WRITING); if (buf_datalen(conn->_base.outbuf) >= DIRSERV_BUFFER_MIN) return 0; switch (conn->dir_spool_src) { case DIR_SPOOL_EXTRA_BY_DIGEST: case DIR_SPOOL_EXTRA_BY_FP: case DIR_SPOOL_SERVER_BY_DIGEST: case DIR_SPOOL_SERVER_BY_FP: return connection_dirserv_add_servers_to_outbuf(conn); case DIR_SPOOL_MICRODESC: return connection_dirserv_add_microdescs_to_outbuf(conn); case DIR_SPOOL_CACHED_DIR: return connection_dirserv_add_dir_bytes_to_outbuf(conn); case DIR_SPOOL_NETWORKSTATUS: return connection_dirserv_add_networkstatus_bytes_to_outbuf(conn); case DIR_SPOOL_NONE: default: return 0; } }
0
[ "CWE-264" ]
tor
00fffbc1a15e2696a89c721d0c94dc333ff419ef
97,275,979,994,297,340,000,000,000,000,000,000,000
24
Don't give the Guard flag to relays without the CVE-2011-2768 fix
static int __mincore_unmapped_range(unsigned long addr, unsigned long end, struct vm_area_struct *vma, unsigned char *vec) { unsigned long nr = (end - addr) >> PAGE_SHIFT; int i; if (vma->vm_file) { pgoff_t pgoff; pgoff = linear_page_index(vma, addr); for (i = 0; i < nr; i++, pgoff++) vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff); } else { for (i = 0; i < nr; i++) vec[i] = 0; } return nr; }
1
[ "CWE-200", "CWE-319" ]
linux
574823bfab82d9d8fa47f422778043fbb4b4f50e
147,099,605,113,093,230,000,000,000,000,000,000,000
18
Change mincore() to count "mapped" pages rather than "cached" pages The semantics of what "in core" means for the mincore() system call are somewhat unclear, but Linux has always (since 2.3.52, which is when mincore() was initially done) treated it as "page is available in page cache" rather than "page is mapped in the mapping". The problem with that traditional semantic is that it exposes a lot of system cache state that it really probably shouldn't, and that users shouldn't really even care about. So let's try to avoid that information leak by simply changing the semantics to be that mincore() counts actual mapped pages, not pages that might be cheaply mapped if they were faulted (note the "might be" part of the old semantics: being in the cache doesn't actually guarantee that you can access them without IO anyway, since things like network filesystems may have to revalidate the cache before use). In many ways the old semantics were somewhat insane even aside from the information leak issue. From the very beginning (and that beginning is a long time ago: 2.3.52 was released in March 2000, I think), the code had a comment saying Later we can get more picky about what "in core" means precisely. and this is that "later". Admittedly it is much later than is really comfortable. NOTE! This is a real semantic change, and it is for example known to change the output of "fincore", since that program literally does a mmap without populating it, and then doing "mincore()" on that mapping that doesn't actually have any pages in it. I'm hoping that nobody actually has any workflow that cares, and the info leak is real. We may have to do something different if it turns out that people have valid reasons to want the old semantics, and if we can limit the information leak sanely. Cc: Kevin Easton <[email protected]> Cc: Jiri Kosina <[email protected]> Cc: Masatake YAMATO <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Greg KH <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Michal Hocko <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
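The semantic change is easiest to see as a counting rule: a page with no PTE is reported "not in core" regardless of page-cache state. A user-space illustration of old versus new behaviour for an unmapped range (this models the semantics only, not the kernel implementation):

#include <stdio.h>
#include <string.h>

/* Old rule: an unmapped file range could still report 1 ("in core") if
 * the page cache happened to hold the page, leaking cache state.
 * New rule: pages without a mapping report 0, whatever the cache holds. */
static void mincore_unmapped_range(unsigned char *vec, size_t nr,
                                   int old_behaviour,
                                   const unsigned char *cache_state)
{
    for (size_t i = 0; i < nr; i++)
        vec[i] = old_behaviour ? cache_state[i] : 0;
}

int main(void)
{
    unsigned char cache[4] = { 1, 0, 1, 1 }, vec[4];
    mincore_unmapped_range(vec, 4, 0, cache);               /* new semantics */
    printf("%d%d%d%d\n", vec[0], vec[1], vec[2], vec[3]);   /* 0000 */
    mincore_unmapped_range(vec, 4, 1, cache);               /* old semantics */
    printf("%d%d%d%d\n", vec[0], vec[1], vec[2], vec[3]);   /* 1011: cache leak */
    return 0;
}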
static int ntop_add_user(lua_State* vm) { char *username, *full_name, *password, *host_role, *allowed_networks, *allowed_interface, *host_pool_id = NULL; ntop->getTrace()->traceEvent(TRACE_DEBUG, "%s() called", __FUNCTION__); if(!Utils::isUserAdministrator(vm)) return(CONST_LUA_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 1, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((username = (char*)lua_tostring(vm, 1)) == NULL) return(CONST_LUA_PARAM_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 2, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((full_name = (char*)lua_tostring(vm, 2)) == NULL) return(CONST_LUA_PARAM_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 3, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((password = (char*)lua_tostring(vm, 3)) == NULL) return(CONST_LUA_PARAM_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 4, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((host_role = (char*)lua_tostring(vm, 4)) == NULL) return(CONST_LUA_PARAM_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 5, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((allowed_networks = (char*)lua_tostring(vm, 5)) == NULL) return(CONST_LUA_PARAM_ERROR); if(ntop_lua_check(vm, __FUNCTION__, 6, LUA_TSTRING)) return(CONST_LUA_PARAM_ERROR); if((allowed_interface = (char*)lua_tostring(vm, 6)) == NULL) return(CONST_LUA_PARAM_ERROR); if(lua_type(vm, 7) == LUA_TSTRING) if((host_pool_id = (char*)lua_tostring(vm, 7)) == NULL) return(CONST_LUA_PARAM_ERROR); return ntop->addUser(username, full_name, password, host_role, allowed_networks, allowed_interface, host_pool_id); }
0
[ "CWE-476" ]
ntopng
01f47e04fd7c8d54399c9e465f823f0017069f8f
251,284,977,098,271,580,000,000,000,000,000,000,000
31
Security fix: prevents empty host from being used
set_cmdspos_cursor(void) { int i, m, c; set_cmdspos(); if (KeyTyped) { m = Columns * Rows; if (m < 0) // overflow, Columns or Rows at weird value m = MAXCOL; } else m = MAXCOL; for (i = 0; i < ccline.cmdlen && i < ccline.cmdpos; ++i) { c = cmdline_charsize(i); // Count ">" for double-wide multi-byte char that doesn't fit. if (has_mbyte) correct_cmdspos(i, c); // If the cmdline doesn't fit, show cursor on last visible char. // Don't move the cursor itself, so we can still append. if ((ccline.cmdspos += c) >= m) { ccline.cmdspos -= c; break; } if (has_mbyte) i += (*mb_ptr2len)(ccline.cmdbuff + i) - 1; } }
0
[ "CWE-122", "CWE-787" ]
vim
85b6747abc15a7a81086db31289cf1b8b17e6cb1
150,664,838,563,907,400,000,000,000,000,000,000,000
30
patch 8.2.4214: illegal memory access with large 'tabstop' in Ex mode Problem: Illegal memory access with large 'tabstop' in Ex mode. Solution: Allocate enough memory.
static void test06(char const* infile, char const* password, char const* outfile, char const* outfile2) { char* buf = NULL; unsigned long size = 0; read_file_into_memory(infile, &buf, &size); qpdf_read_memory(qpdf, infile, buf, size, password); qpdf_init_write(qpdf, outfile); qpdf_set_static_ID(qpdf, QPDF_TRUE); qpdf_set_object_stream_mode(qpdf, qpdf_o_generate); qpdf_write(qpdf); report_errors(); free(buf); }
0
[ "CWE-787" ]
qpdf
d71f05ca07eb5c7cfa4d6d23e5c1f2a800f52e8e
192,082,289,071,789,600,000,000,000,000,000,000,000
16
Fix sign and conversion warnings (major) This makes all integer type conversions that have potential data loss explicit with calls that do range checks and raise an exception. After this commit, qpdf builds with no warnings when -Wsign-conversion -Wconversion is used with gcc or clang or when -W3 -Wd4800 is used with MSVC. This significantly reduces the likelihood of potential crashes from bogus integer values. There are some parts of the code that take int when they should take size_t or an offset. Such places would make qpdf not support files with more than 2^31 of something that usually wouldn't be so large. In the event that such a file shows up and is valid, at least qpdf would raise an error in the right spot so the issue could be legitimately addressed rather than failing in some weird way because of a silent overflow condition.
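The technique the message describes is replacing silent narrowing casts with conversions that range-check and fail loudly. QPDF's actual helpers are C++ templates; the sketch below shows the same check-then-convert idea in plain C:

#include <errno.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Convert size_t to int, refusing values that would truncate -- the
 * "silent overflow condition" the message warns about. */
static int size_to_int(size_t v, int *out)
{
    if (v > (size_t)INT_MAX) {
        errno = ERANGE;
        return -1;
    }
    *out = (int)v;
    return 0;
}

int main(void)
{
    int n;
    if (size_to_int(SIZE_MAX, &n) != 0)
        puts("rejected: value exceeds INT_MAX");
    if (size_to_int(42, &n) == 0)
        printf("converted: %d\n", n);
    return 0;
}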
output_buffer& operator<<(output_buffer& output, const Alert& a) { output[AUTO] = a.level_; output[AUTO] = a.description_; return output; }
0
[]
mysql-server
b9768521bdeb1a8069c7b871f4536792b65fd79b
178,645,597,275,197,440,000,000,000,000,000,000,000
6
Updated yassl to yassl-2.3.8 (cherry picked from commit 7f9941eab55ed672bfcccd382dafbdbcfdc75aaa)
xmlXPathCastStringToNumber(const xmlChar * val) { return(xmlXPathStringEvalNumber(val)); }
0
[ "CWE-119" ]
libxml2
91d19754d46acd4a639a8b9e31f50f31c78f8c9c
110,621,317,801,385,480,000,000,000,000,000,000,000
3
Fix the semantic of XPath axis for namespace/attribute context nodes The processing of namespace and attribute nodes was not compliant with the XPath-1.0 specification
static void whilestat (LexState *ls, int line) { /* whilestat -> WHILE cond DO block END */ FuncState *fs = ls->fs; int whileinit; int condexit; BlockCnt bl; luaX_next(ls); /* skip WHILE */ whileinit = luaK_getlabel(fs); condexit = cond(ls); enterblock(fs, &bl, 1); checknext(ls, TK_DO); block(ls); luaK_jumpto(fs, whileinit); check_match(ls, TK_END, TK_WHILE, line); leaveblock(fs); luaK_patchtohere(fs, condexit); /* false conditions finish the loop */ }
0
[ "CWE-125" ]
lua
1f3c6f4534c6411313361697d98d1145a1f030fa
254,363,748,366,819,500,000,000,000,000,000,000,000
17
Bug: Lua can generate wrong code when _ENV is <const>
int cil_gen_genfscon(struct cil_db *db, struct cil_tree_node *parse_current, struct cil_tree_node *ast_node) { enum cil_syntax syntax[] = { CIL_SYN_STRING, CIL_SYN_STRING, CIL_SYN_STRING, CIL_SYN_STRING | CIL_SYN_LIST, CIL_SYN_END }; int syntax_len = sizeof(syntax)/sizeof(*syntax); int rc = SEPOL_ERR; struct cil_genfscon *genfscon = NULL; if (db == NULL || parse_current == NULL || ast_node == NULL) { goto exit; } rc = __cil_verify_syntax(parse_current, syntax, syntax_len); if (rc != SEPOL_OK) { goto exit; } cil_genfscon_init(&genfscon); genfscon->fs_str = parse_current->next->data; genfscon->path_str = parse_current->next->next->data; if (parse_current->next->next->next->cl_head == NULL ) { genfscon->context_str = parse_current->next->next->next->data; } else { cil_context_init(&genfscon->context); rc = cil_fill_context(parse_current->next->next->next->cl_head, genfscon->context); if (rc != SEPOL_OK) { goto exit; } } ast_node->data = genfscon; ast_node->flavor = CIL_GENFSCON; return SEPOL_OK; exit: cil_tree_log(parse_current, CIL_ERR, "Bad genfscon declaration"); cil_destroy_genfscon(genfscon); return SEPOL_ERR; }
0
[ "CWE-125" ]
selinux
340f0eb7f3673e8aacaf0a96cbfcd4d12a405521
84,973,479,684,354,400,000,000,000,000,000,000,000
48
libsepol/cil: Check for statements not allowed in optional blocks While there are some checks for invalid statements in an optional block when resolving the AST, there are no checks when building the AST. OSS-Fuzz found the following policy which caused a null dereference in cil_tree_get_next_path(). (blockinherit b3) (sid SID) (sidorder(SID)) (optional o (ibpkeycon :(1 0)s) (block b3 (filecon""block()) (filecon""block()))) The problem is that the blockinherit copies block b3 before the optional block is disabled. When the optional is disabled, block b3 is deleted along with everything else in the optional. Later, when filecon statements with the same path are found an error message is produced and in trying to find out where the block was copied from, the reference to the deleted block is used. The error handling code assumes (rightly) that if something was copied from a block then that block should still exist. It is clear that in-statements, blocks, and macros cannot be in an optional, because that allows nodes to be copied from the optional block to somewhere outside even though the optional could be disabled later. When optionals are disabled the AST is reset and the resolution is restarted at the point of resolving macro calls, so anything resolved before macro calls will never be re-resolved. This includes tunableifs, in-statements, blockinherits, blockabstracts, and macro definitions. Tunable declarations also cannot be in an optional block because they are needed to resolve tunableifs. It should be fine to allow blockinherit statements in an optional, because that is copying nodes from outside the optional to the optional and if the optional is later disabled, everything will be deleted anyway. Check and quit with an error if a tunable declaration, in-statement, block, blockabstract, or macro definition is found within an optional when either building or resolving the AST. Signed-off-by: James Carter <[email protected]>
void Scanner::lex_string(char delim) { loop: #line 3637 "src/parse/lex.cc" { unsigned char yych; if ((lim - cur) < 2) { if (!fill(2)) { error("unexpected end of input"); exit(1); } } yych = (unsigned char)*cur; if (yych <= '!') { if (yych <= '\n') { if (yych <= 0x00) goto yy540; if (yych <= '\t') goto yy542; goto yy544; } else { if (yych == '\r') goto yy546; goto yy542; } } else { if (yych <= '\'') { if (yych <= '"') goto yy547; if (yych <= '&') goto yy542; goto yy547; } else { if (yych == '\\') goto yy549; goto yy542; } } yy540: ++cur; #line 711 "../src/parse/lex.re" { fail_if_eof(); goto loop; } #line 3665 "src/parse/lex.cc" yy542: ++cur; yy543: #line 712 "../src/parse/lex.re" { goto loop; } #line 3671 "src/parse/lex.cc" yy544: ++cur; #line 710 "../src/parse/lex.re" { next_line(); goto loop; } #line 3676 "src/parse/lex.cc" yy546: yych = (unsigned char)*++cur; if (yych == '\n') goto yy544; goto yy543; yy547: ++cur; #line 708 "../src/parse/lex.re" { if (cur[-1] == delim) return; else goto loop; } #line 3685 "src/parse/lex.cc" yy549: yych = (unsigned char)*++cur; if (yych <= '&') { if (yych != '"') goto yy543; } else { if (yych <= '\'') goto yy550; if (yych != '\\') goto yy543; } yy550: ++cur; #line 709 "../src/parse/lex.re" { goto loop; } #line 3698 "src/parse/lex.cc" } #line 713 "../src/parse/lex.re" }
1
[ "CWE-787" ]
re2c
039c18949190c5de5397eba504d2c75dad2ea9ca
138,976,750,429,976,170,000,000,000,000,000,000,000
70
Emit an error when repetition lower bound exceeds upper bound. Historically this was allowed and re2c swapped the bounds. However, it most likely indicates an error in user code and there is only a single occurrence in the tests (and the test is an artificial one), so although the change is backwards incompatible there is a low chance of breaking real-world code. This fixes the second test case in the bug #394 "Stack overflow due to recursion in src/dfa/dead_rules.cc" (the actual fix is to limit DFA size but the test also has counted repetition with swapped bounds).
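The behavioural change is small and checkable: counted repetition {lo,hi} with lo > hi is now a hard error instead of a silent bound swap. A tiny validator modelling that rule:

#include <stdio.h>

/* Reject swapped repetition bounds instead of silently exchanging
 * them, since they almost always indicate a bug in the user's regexp. */
static int check_rep_bounds(unsigned lo, unsigned hi)
{
    if (lo > hi) {
        fprintf(stderr,
                "error: repetition lower bound %u exceeds upper bound %u\n",
                lo, hi);
        return -1;
    }
    return 0;
}

int main(void)
{
    check_rep_bounds(2, 5);   /* ok, as in a{2,5} */
    check_rep_bounds(5, 2);   /* now an error, as in a{5,2} */
    return 0;
}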
static void alter_partition_lock_handling(ALTER_PARTITION_PARAM_TYPE *lpt) { THD *thd= lpt->thd; if (lpt->old_table) close_all_tables_for_name(thd, lpt->old_table->s, HA_EXTRA_NOT_USED); if (lpt->table) { /* Only remove the intermediate table object and its share object, do not remove the .frm file, since it is the original one. */ close_temporary(lpt->table, 1, 0); } lpt->table= 0; lpt->old_table= 0; lpt->table_list->table= 0; if (thd->locked_tables_list.reopen_tables(thd)) sql_print_warning("We failed to reacquire LOCKs in ALTER TABLE"); }
0
[]
server
f305a7ce4bccbd56520d874e1d81a4f29bc17a96
64,092,592,147,941,315,000,000,000,000,000,000,000
20
bugfix: long partition names
nautilus_file_monitor_add (NautilusFile *file, gconstpointer client, NautilusFileAttributes attributes) { g_return_if_fail (NAUTILUS_IS_FILE (file)); g_return_if_fail (client != NULL); EEL_CALL_METHOD (NAUTILUS_FILE_CLASS, file, monitor_add, (file, client, attributes)); }
0
[]
nautilus
7632a3e13874a2c5e8988428ca913620a25df983
27,305,019,024,569,150,000,000,000,000,000,000,000
11
Check for trusted desktop file launchers. 2009-02-24 Alexander Larsson <[email protected]> * libnautilus-private/nautilus-directory-async.c: Check for trusted desktop file launchers. * libnautilus-private/nautilus-file-private.h: * libnautilus-private/nautilus-file.c: * libnautilus-private/nautilus-file.h: Add nautilus_file_is_trusted_link. Allow unsetting of custom display name. * libnautilus-private/nautilus-mime-actions.c: Display dialog when trying to launch a non-trusted desktop file. svn path=/trunk/; revision=15003
void ide_exec_cmd(IDEBus *bus, uint32_t val) { IDEState *s; bool complete; #if defined(DEBUG_IDE) printf("ide: CMD=%02x\n", val); #endif s = idebus_active_if(bus); /* ignore commands to non existent slave */ if (s != bus->ifs && !s->bs) return; /* Only DEVICE RESET is allowed while BSY or/and DRQ are set */ if ((s->status & (BUSY_STAT|DRQ_STAT)) && val != WIN_DEVICE_RESET) return; if (!ide_cmd_permitted(s, val)) { ide_abort_command(s); ide_set_irq(s->bus); return; } s->status = READY_STAT | BUSY_STAT; s->error = 0; complete = ide_cmd_table[val].handler(s, val); if (complete) { s->status &= ~BUSY_STAT; assert(!!s->error == !!(s->status & ERR_STAT)); if ((ide_cmd_table[val].flags & SET_DSC) && !s->error) { s->status |= SEEK_STAT; } ide_set_irq(s->bus); } }
0
[ "CWE-189" ]
qemu
940973ae0b45c9b6817bab8e4cf4df99a9ef83d7
259,671,828,557,280,450,000,000,000,000,000,000,000
38
ide: Correct improper smart self test counter reset in ide core. The SMART self test counter was incorrectly being reset to zero, not 1. This had the effect that on every 21st SMART EXECUTE OFFLINE: * We would write off the beginning of a dynamically allocated buffer * We forgot the SMART history Fix this. Signed-off-by: Benoit Canet <[email protected]> Message-id: [email protected] Reviewed-by: Markus Armbruster <[email protected]> Cc: [email protected] Acked-by: Kevin Wolf <[email protected]> [PMM: tweaked commit message as per suggestions from Markus] Signed-off-by: Peter Maydell <[email protected]>
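The off-by-one the message describes turns a 1-based circular log index into index 0, so the next slot computation lands just before the buffer. A toy model of the 1..21 wrap (the slot count follows the message; the log layout is hypothetical):

#include <stdio.h>

#define SELF_TEST_SLOTS 21
static unsigned char selftest_log[SELF_TEST_SLOTS][16];  /* hypothetical layout */

/* 'counter' is 1-based: valid slots are 1..21. Wrapping it to 0 would
 * make (counter - 1) underflow and write before selftest_log. */
static unsigned next_counter(unsigned counter)
{
    return (counter >= SELF_TEST_SLOTS) ? 1u : counter + 1u;  /* reset to 1, not 0 */
}

int main(void)
{
    unsigned c = 1;
    for (int i = 0; i < 25; i++) {                  /* more tests than slots */
        selftest_log[c - 1][0] = (unsigned char)i;  /* always in bounds */
        c = next_counter(c);
    }
    printf("counter after 25 self tests: %u\n", c);  /* 5 */
    return 0;
}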
GF_Err gf_isom_remove_chapter(GF_ISOFile *movie, u32 trackNumber, u32 index) { GF_Err e; GF_ChapterListBox *ptr; GF_ChapterEntry *ce; GF_UserDataBox *udta; GF_UserDataMap *map; e = CanAccessMovie(movie, GF_ISOM_OPEN_WRITE); if (e) return e; e = gf_isom_insert_moov(movie); if (e) return e; if (trackNumber) { GF_TrackBox *trak = gf_isom_get_track_from_file(movie, trackNumber); if (!trak) return GF_BAD_PARAM; if (!trak->udta) return GF_OK; udta = trak->udta; } else { if (!movie->moov->udta) return GF_OK; udta = movie->moov->udta; } map = udta_getEntry(udta, GF_ISOM_BOX_TYPE_CHPL, NULL); if (!map) return GF_OK; ptr = (GF_ChapterListBox*)gf_list_get(map->boxes, 0); if (!ptr) return GF_OK; if (index) { ce = (GF_ChapterEntry *)gf_list_get(ptr->list, index-1); if (!ce) return GF_BAD_PARAM; if (ce->name) gf_free(ce->name); gf_free(ce); gf_list_rem(ptr->list, index-1); } else { while (gf_list_count(ptr->list)) { ce = (GF_ChapterEntry *)gf_list_get(ptr->list, 0); if (ce->name) gf_free(ce->name); gf_free(ce); gf_list_rem(ptr->list, 0); } } if (!gf_list_count(ptr->list)) { gf_list_del_item(udta->recordList, map); gf_isom_box_array_del(map->boxes); gf_free(map); } return GF_OK; }
0
[ "CWE-476" ]
gpac
ebfa346eff05049718f7b80041093b4c5581c24e
324,855,361,736,332,730,000,000,000,000,000,000,000
49
fixed #1706
static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { struct io_splice *sp = &req->splice; sp->off_in = READ_ONCE(sqe->splice_off_in); sp->off_out = READ_ONCE(sqe->off); return __io_splice_prep(req, sqe); }
0
[ "CWE-125" ]
linux
89c2b3b74918200e46699338d7bcc19b1ea12110
3,658,181,001,946,560,400,000,000,000,000,000,000
8
io_uring: reexpand under-reexpanded iters [ 74.211232] BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x809/0x900 [ 74.212778] Read of size 8 at addr ffff888025dc78b8 by task syz-executor.0/828 [ 74.214756] CPU: 0 PID: 828 Comm: syz-executor.0 Not tainted 5.14.0-rc3-next-20210730 #1 [ 74.216525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 74.219033] Call Trace: [ 74.219683] dump_stack_lvl+0x8b/0xb3 [ 74.220706] print_address_description.constprop.0+0x1f/0x140 [ 74.224226] kasan_report.cold+0x7f/0x11b [ 74.226085] iov_iter_revert+0x809/0x900 [ 74.227960] io_write+0x57d/0xe40 [ 74.232647] io_issue_sqe+0x4da/0x6a80 [ 74.242578] __io_queue_sqe+0x1ac/0xe60 [ 74.245358] io_submit_sqes+0x3f6e/0x76a0 [ 74.248207] __do_sys_io_uring_enter+0x90c/0x1a20 [ 74.257167] do_syscall_64+0x3b/0x90 [ 74.257984] entry_SYSCALL_64_after_hwframe+0x44/0xae old_size = iov_iter_count(); ... iov_iter_revert(old_size - iov_iter_count()); If iov_iter_revert() is done based on the initial size as above, and the iter is truncated and not reexpanded in the middle, it miscalculates borders causing problems. This trace is due to no one reexpanding after generic_write_checks(). Now iters store how many bytes have been truncated, so reexpand them to the initial state right before reverting. Cc: [email protected] Reported-by: Palash Oswal <[email protected]> Reported-by: Sudip Mukherjee <[email protected]> Reported-and-tested-by: [email protected] Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Al Viro <[email protected]>
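The invariant the fix restores: iov_iter_revert() must see the iterator in the same expanded state it had when the byte count was sampled. A user-space model with a toy iterator (the kernel's iov_iter is far richer; only count/truncated are modelled here):

#include <assert.h>
#include <stdio.h>

/* Toy iterator: 'count' is bytes remaining, 'truncated' is how many
 * bytes a temporary truncation is currently hiding from the caller. */
struct iter { size_t count; size_t truncated; };

static void iter_truncate(struct iter *it, size_t new_count)
{
    if (new_count < it->count) {
        it->truncated += it->count - new_count;
        it->count = new_count;
    }
}

static void iter_reexpand(struct iter *it)
{
    it->count += it->truncated;   /* undo the truncation */
    it->truncated = 0;
}

int main(void)
{
    struct iter it = { .count = 4096, .truncated = 0 };
    size_t old_size = it.count;

    iter_truncate(&it, 1024);   /* e.g. generic_write_checks() shrank it */
    it.count -= 512;            /* 512 bytes consumed by a short write */

    iter_reexpand(&it);         /* the fix: restore state first...        */
    size_t revert = old_size - it.count;   /* ...so this is 512, not 3584 */
    it.count += revert;

    assert(it.count == old_size);
    printf("reverted %zu bytes\n", revert);
    return 0;
}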
__kvmgt_protect_table_find(struct kvmgt_guest_info *info, gfn_t gfn) { struct kvmgt_pgfn *p, *res = NULL; hash_for_each_possible(info->ptable, p, hnode, gfn) { if (gfn == p->gfn) { res = p; break; } } return res; }
0
[ "CWE-20" ]
linux
51b00d8509dc69c98740da2ad07308b630d3eb7d
249,061,204,592,079,850,000,000,000,000,000,000,000
13
drm/i915/gvt: Fix mmap range check This fixes a missed mmap range check on the vGPU bar2 region and only allows mapping of the vGPU-allocated GMADDR range, which means user space should support sparse mmap to get the proper offset for mmap of the vGPU aperture. And this takes care of the actual pgoff in the mmap request, as the original code always mapped from the beginning of the vGPU aperture. Fixes: 659643f7d814 ("drm/i915/gvt/kvmgt: add vfio/mdev support to KVMGT") Cc: "Monroy, Rodrigo Axel" <[email protected]> Cc: "Orrala Contreras, Alfredo" <[email protected]> Cc: [email protected] # v4.10+ Reviewed-by: Hang Yuan <[email protected]> Signed-off-by: Zhenyu Wang <[email protected]>
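The missing validation is an overflow-safe containment test: a user-supplied offset/length pair must lie inside the vGPU's allocated aperture before any mapping happens. A generic sketch of that check (the names and sizes are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Check that [off, off + len) fits inside an aperture of ap_size bytes.
 * Written as len <= ap_size - off to avoid overflow in off + len. */
static bool mmap_range_ok(uint64_t off, uint64_t len, uint64_t ap_size)
{
    return len != 0 && off <= ap_size && len <= ap_size - off;
}

int main(void)
{
    const uint64_t aperture = 1ULL << 20;                           /* 1 MiB */
    printf("%d\n", mmap_range_ok(0x1000, 0x2000, aperture));        /* 1 */
    printf("%d\n", mmap_range_ok(0xfffff000ULL, 0x2000, aperture)); /* 0 */
    return 0;
}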
static unsigned char min() { return 0; }
0
[ "CWE-125" ]
CImg
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
47,349,267,517,338,180,000,000,000,000,000,000,000
1
Fix other issues in 'CImg<T>::load_bmp()'.
kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, bool *writable, hva_t *hva) { unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); if (hva) *hva = addr; if (addr == KVM_HVA_ERR_RO_BAD) { if (writable) *writable = false; return KVM_PFN_ERR_RO_FAULT; } if (kvm_is_error_hva(addr)) { if (writable) *writable = false; return KVM_PFN_NOSLOT; } /* Do not map writable pfn in the readonly memslot. */ if (writable && memslot_is_readonly(slot)) { *writable = false; writable = NULL; } return hva_to_pfn(addr, atomic, async, write_fault, writable); }
0
[ "CWE-459" ]
linux
683412ccf61294d727ead4a73d97397396e69a6b
307,673,534,623,218,560,000,000,000,000,000,000,000
30
KVM: SEV: add cache flush to solve SEV cache incoherency issues Flush the CPU caches when memory is reclaimed from an SEV guest (where reclaim also includes it being unmapped from KVM's memslots). Due to lack of coherency for SEV encrypted memory, failure to flush results in silent data corruption if userspace is malicious/broken and doesn't ensure SEV guest memory is properly pinned and unpinned. Cache coherency is not enforced across the VM boundary in SEV (AMD APM vol.2 Section 15.34.7). Confidential cachelines, generated by confidential VM guests, have to be explicitly flushed on the host side. If a memory page containing dirty confidential cachelines was released by a VM and reallocated to another user, the cachelines may corrupt the new user at a later time. KVM takes a shortcut by assuming all confidential memory remains pinned until the end of VM lifetime. Therefore, KVM does not flush cache at mmu_notifier invalidation events. Because of this incorrect assumption and the lack of cache flushing, malicious userspace can crash the host kernel by creating a malicious VM and continuously allocating/releasing unpinned confidential memory pages while the VM is running. Add cache flush operations to mmu_notifier operations to ensure that any physical memory leaving the guest VM gets flushed. In particular, hook mmu_notifier_invalidate_range_start and mmu_notifier_release events and flush cache accordingly. The hook runs after releasing the mmu lock to avoid contention with other vCPUs. Cc: [email protected] Suggested-by: Sean Christpherson <[email protected]> Reported-by: Mingwei Zhang <[email protected]> Signed-off-by: Mingwei Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
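The shape of the fix: attach a cache flush to the two notifier events where guest memory leaves the VM, with the flush performed outside the mmu lock. A stub-level sketch of the hook placement (all names here are stand-ins, not the kernel API):

#include <stdio.h>

/* Stand-in for a wbinvd/clflush-style primitive on SEV-encrypted pages. */
static void flush_encrypted_range(unsigned long start, unsigned long end)
{
    printf("flush confidential cachelines in [%lx, %lx)\n", start, end);
}

/* Memory is about to be unmapped/reclaimed: flush first so dirty
 * confidential cachelines cannot corrupt the page's next owner. */
static void on_invalidate_range_start(unsigned long s, unsigned long e)
{
    flush_encrypted_range(s, e);
}

/* VM teardown: flush everything the guest ever touched. */
static void on_release(unsigned long s, unsigned long e)
{
    flush_encrypted_range(s, e);
}

int main(void)
{
    on_invalidate_range_start(0x1000, 0x3000);
    on_release(0x0, 0x10000);
    return 0;
}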
PHP_METHOD(PharFileInfo, __construct) { char *fname, *arch, *entry, *error; int fname_len, arch_len, entry_len; phar_entry_object *entry_obj; phar_entry_info *entry_info; phar_archive_data *phar_data; zval *zobj = getThis(), arg1; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s", &fname, &fname_len) == FAILURE) { return; } entry_obj = (phar_entry_object*)zend_object_store_get_object(getThis() TSRMLS_CC); if (entry_obj->ent.entry) { zend_throw_exception_ex(spl_ce_BadMethodCallException, 0 TSRMLS_CC, "Cannot call constructor twice"); return; } if (fname_len < 7 || memcmp(fname, "phar://", 7) || phar_split_fname(fname, fname_len, &arch, &arch_len, &entry, &entry_len, 2, 0 TSRMLS_CC) == FAILURE) { zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "'%s' is not a valid phar archive URL (must have at least phar://filename.phar)", fname); return; } if (phar_open_from_filename(arch, arch_len, NULL, 0, REPORT_ERRORS, &phar_data, &error TSRMLS_CC) == FAILURE) { efree(arch); efree(entry); if (error) { zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "Cannot open phar file '%s': %s", fname, error); efree(error); } else { zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "Cannot open phar file '%s'", fname); } return; } if ((entry_info = phar_get_entry_info_dir(phar_data, entry, entry_len, 1, &error, 1 TSRMLS_CC)) == NULL) { zend_throw_exception_ex(spl_ce_RuntimeException, 0 TSRMLS_CC, "Cannot access phar file entry '%s' in archive '%s'%s%s", entry, arch, error ? ", " : "", error ? error : ""); efree(arch); efree(entry); return; } efree(arch); efree(entry); entry_obj->ent.entry = entry_info; INIT_PZVAL(&arg1); ZVAL_STRINGL(&arg1, fname, fname_len, 0); zend_call_method_with_1_params(&zobj, Z_OBJCE_P(zobj), &spl_ce_SplFileInfo->constructor, "__construct", NULL, &arg1); }
0
[ "CWE-119" ]
php-src
13ad4d3e971807f9a58ab5933182907dc2958539
31,020,279,037,241,440,000,000,000,000,000,000,000
59
Fix bug #71354 - remove UMR when size is 0
static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3) { unsigned long guest_cr3; u64 eptp; guest_cr3 = cr3; if (enable_ept) { eptp = construct_eptp(vcpu, cr3); vmcs_write64(EPT_POINTER, eptp); if (enable_unrestricted_guest || is_paging(vcpu) || is_guest_mode(vcpu)) guest_cr3 = kvm_read_cr3(vcpu); else guest_cr3 = to_kvm_vmx(vcpu->kvm)->ept_identity_map_addr; ept_load_pdptrs(vcpu); } vmx_flush_tlb(vcpu, true); vmcs_writel(GUEST_CR3, guest_cr3); }
0
[ "CWE-284" ]
linux
727ba748e110b4de50d142edca9d6a9b7e6111d8
30,646,429,277,293,105,000,000,000,000,000,000,000
20
kvm: nVMX: Enforce cpl=0 for VMX instructions VMX instructions executed inside a L1 VM will always trigger a VM exit even when executed with cpl 3. This means we must perform the privilege check in software. Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks") Cc: [email protected] Signed-off-by: Felix Wilhelm <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
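Enforcing cpl=0 in software means each VMX-instruction exit handler first reads the guest's current privilege level and injects #GP when it is non-zero, before any emulation. A hedged sketch of that gate (the helpers are stand-ins for KVM internals such as vmx_get_cpl()):

#include <stdio.h>

static int guest_cpl;                      /* stand-in for vmx_get_cpl(vcpu) */

static void inject_gp(void) { puts("#GP(0) injected into guest"); }

/* Handle a VM exit caused by a VMX instruction in L1: privilege check
 * first, emulation only for ring 0. */
static int handle_vmx_instruction(void)
{
    if (guest_cpl != 0) {                  /* the software cpl=0 check */
        inject_gp();
        return 1;                          /* handled, nothing emulated */
    }
    puts("emulating VMX instruction for L1");
    return 1;
}

int main(void)
{
    guest_cpl = 3;                         /* user mode inside the guest */
    handle_vmx_instruction();              /* rejected */
    guest_cpl = 0;
    handle_vmx_instruction();              /* allowed */
    return 0;
}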
router_picked_poor_directory_log(const routerstatus_t *rs) { const networkstatus_t *usable_consensus; usable_consensus = networkstatus_get_reasonably_live_consensus(time(NULL), usable_consensus_flavor()); #if !LOG_FALSE_POSITIVES_DURING_BOOTSTRAP /* Don't log early in the bootstrap process, it's normal to pick from a * small pool of nodes. Of course, this won't help if we're trying to * diagnose bootstrap issues. */ if (!smartlist_len(nodelist_get_list()) || !usable_consensus || !router_have_minimum_dir_info()) { return; } #endif /* We couldn't find a node, or the one we have doesn't fit our preferences. * Sometimes this is normal, sometimes it can be a reachability issue. */ if (!rs) { /* This happens a lot, so it's at debug level */ log_debug(LD_DIR, "Wanted to make an outgoing directory connection, but " "we couldn't find a directory that fit our criteria. " "Perhaps we will succeed next time with less strict criteria."); } else if (!fascist_firewall_allows_rs(rs, FIREWALL_OR_CONNECTION, 1) && !fascist_firewall_allows_rs(rs, FIREWALL_DIR_CONNECTION, 1) ) { /* This is rare, and might be interesting to users trying to diagnose * connection issues on dual-stack machines. */ log_info(LD_DIR, "Selected a directory %s with non-preferred OR and Dir " "addresses for launching an outgoing connection: " "IPv4 %s OR %d Dir %d IPv6 %s OR %d Dir %d", routerstatus_describe(rs), fmt_addr32(rs->addr), rs->or_port, rs->dir_port, fmt_addr(&rs->ipv6_addr), rs->ipv6_orport, rs->dir_port); } }
0
[]
tor
1afc2ed956a35b40dfd1d207652af5b50c295da7
114,269,684,557,236,960,000,000,000,000,000,000,000
37
Fix policies.c instance of the "if (r=(a-b)) return r" pattern I think this one probably can't underflow, since the input ranges are small. But let's not tempt fate. This patch also replaces the "cmp" functions here with just "eq" functions, since nothing actually checked for anything besides 0 and nonzero. Related to 21278.
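The pattern named here, "if (r=(a-b)) return r", is unsafe because the subtraction can overflow for operands far apart and flip the sign of the comparison. The robust comparator uses relational operators instead of subtraction:

#include <limits.h>
#include <stdio.h>

/* Broken idiom: a - b overflows (undefined behaviour for signed int)
 * when a and b are far apart, typically wrapping to the wrong sign. */
static int cmp_subtract(int a, int b) { return a - b; }

/* Safe idiom: compare, never subtract. Returns -1, 0 or 1. */
static int cmp_safe(int a, int b) { return (a > b) - (a < b); }

int main(void)
{
    printf("cmp_subtract(2, 3) = %d (fine for small operands)\n",
           cmp_subtract(2, 3));
    /* cmp_subtract(INT_MIN, 1) would overflow; cmp_safe never does: */
    printf("cmp_safe(INT_MIN, 1) = %d\n", cmp_safe(INT_MIN, 1));
    return 0;
}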