Dataset columns (name, dtype, observed range):

  func        string    lengths 0 to 484k
  target      int64     values 0 to 1
  cwe         list      lengths 0 to 4
  project     string    799 distinct values
  commit_id   string    length 40 (fixed)
  hash        float64   1,215,700,430,453,689,100,000,000B to 340,281,914,521,452,260,000,000,000,000B
  size        int64     1 to 24k
  message     string    lengths 0 to 13.3k

Each record below consists of the function source (func), its per-row metadata (target, cwe, project, commit_id, hash, size), and the associated commit message (message).
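As a minimal sketch of how such records might be consumed, assuming the rows are exported as JSON Lines with exactly these column names (the file name dataset.jsonl and the export format are assumptions, not stated above), the following Python loads the records and tallies CWE identifiers over the functions labelled vulnerable (target == 1):

import json
from collections import Counter

def load_records(path):
    # Yield one dict per dataset row from a JSON Lines file (one JSON object per line).
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

def cwe_histogram(records):
    # Count CWE identifiers across the rows labelled vulnerable (target == 1).
    counts = Counter()
    for rec in records:
        if rec.get("target") == 1:
            counts.update(rec.get("cwe", []))
    return counts

if __name__ == "__main__":
    recs = list(load_records("dataset.jsonl"))  # hypothetical file name
    vulnerable = sum(1 for r in recs if r["target"] == 1)
    print(f"{vulnerable} vulnerable functions out of {len(recs)}")
    print(cwe_histogram(recs).most_common(5))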
sched_feat_write(struct file *filp, const char __user *ubuf, size_t cnt, loff_t *ppos) { char buf[64]; char *cmp; int neg = 0; int i; if (cnt > 63) cnt = 63; if (copy_from_user(&buf, ubuf, cnt)) return -EFAULT; buf[cnt] = 0; cmp = strstrip(buf); if (strncmp(buf, "NO_", 3) == 0) { neg = 1; cmp += 3; } for (i = 0; sched_feat_names[i]; i++) { if (strcmp(cmp, sched_feat_names[i]) == 0) { if (neg) sysctl_sched_features &= ~(1UL << i); else sysctl_sched_features |= (1UL << i); break; } } if (!sched_feat_names[i]) return -EINVAL; *ppos += cnt; return cnt; }
target: 0 | cwe: [ "CWE-703", "CWE-835" ] | project: linux | commit_id: f26f9aff6aaf67e9a430d16c266f91b13a5bff64 | hash: 245,367,001,628,276,130,000,000,000,000,000,000,000 | size: 39
Sched: fix skip_clock_update optimization idle_balance() drops/retakes rq->lock, leaving the previous task vulnerable to set_tsk_need_resched(). Clear it after we return from balancing instead, and in setup_thread_stack() as well, so no successfully descheduled or never scheduled task has it set. Need resched confused the skip_clock_update logic, which assumes that the next call to update_rq_clock() will come nearly immediately after being set. Make the optimization robust against the waking a sleeper before it sucessfully deschedules case by checking that the current task has not been dequeued before setting the flag, since it is that useless clock update we're trying to save, and clear unconditionally in schedule() proper instead of conditionally in put_prev_task(). Signed-off-by: Mike Galbraith <[email protected]> Reported-by: Bjoern B. Brandenburg <[email protected]> Tested-by: Yong Zhang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
gs_make_null_device(gx_device_null *dev_null, gx_device *dev, gs_memory_t * mem) { gx_device_init((gx_device *)dev_null, (const gx_device *)&gs_null_device, mem, true); gx_device_set_target((gx_device_forward *)dev_null, dev); if (dev) { /* The gx_device_copy_color_params() call below should probably copy over these new-style color mapping procs, as well as the old-style (map_rgb_color and friends). However, the change was made here instead, to minimize the potential impact of the patch. */ gx_device *dn = (gx_device *)dev_null; set_dev_proc(dn, get_color_mapping_procs, gx_forward_get_color_mapping_procs); set_dev_proc(dn, get_color_comp_index, gx_forward_get_color_comp_index); set_dev_proc(dn, encode_color, gx_forward_encode_color); set_dev_proc(dn, decode_color, gx_forward_decode_color); set_dev_proc(dn, get_profile, gx_forward_get_profile); set_dev_proc(dn, set_graphics_type_tag, gx_forward_set_graphics_type_tag); set_dev_proc(dn, begin_transparency_group, gx_default_begin_transparency_group); set_dev_proc(dn, end_transparency_group, gx_default_end_transparency_group); set_dev_proc(dn, begin_transparency_mask, gx_default_begin_transparency_mask); set_dev_proc(dn, end_transparency_mask, gx_default_end_transparency_mask); set_dev_proc(dn, discard_transparency_layer, gx_default_discard_transparency_layer); set_dev_proc(dn, pattern_manage, gx_default_pattern_manage); set_dev_proc(dn, push_transparency_state, gx_default_push_transparency_state); set_dev_proc(dn, pop_transparency_state, gx_default_pop_transparency_state); set_dev_proc(dn, put_image, gx_default_put_image); set_dev_proc(dn, copy_planes, gx_default_copy_planes); set_dev_proc(dn, copy_alpha_hl_color, gx_default_no_copy_alpha_hl_color); dn->graphics_type_tag = dev->graphics_type_tag; /* initialize to same as target */ gx_device_copy_color_params(dn, dev); } }
target: 0 | cwe: [] | project: ghostpdl | commit_id: 79cccf641486a6595c43f1de1cd7ade696020a31 | hash: 1,881,982,769,357,326,600,000,000,000,000,000,000 | size: 35
Bug 699654(2): preserve LockSafetyParams in the nulldevice The nulldevice does not necessarily use the normal setpagedevice machinery, but can be set using the nulldevice operator. In which case, we don't preserve the settings from the original device (in the way setpagedevice does). Since nulldevice does nothing, this is not generally a problem, but in the case of LockSafetyParams it *is* important when we restore back to the original device, when LockSafetyParams not being set is "preserved" into the post- restore configuration. We have to initialise the value to false because the nulldevice is used during initialisation (before any other device exists), and *must* be writable for that.
OPJ_BOOL opj_jp2_read_tile_header(opj_jp2_t * p_jp2, OPJ_UINT32 * p_tile_index, OPJ_UINT32 * p_data_size, OPJ_INT32 * p_tile_x0, OPJ_INT32 * p_tile_y0, OPJ_INT32 * p_tile_x1, OPJ_INT32 * p_tile_y1, OPJ_UINT32 * p_nb_comps, OPJ_BOOL * p_go_on, opj_stream_private_t *p_stream, opj_event_mgr_t * p_manager ) { return opj_j2k_read_tile_header(p_jp2->j2k, p_tile_index, p_data_size, p_tile_x0, p_tile_y0, p_tile_x1, p_tile_y1, p_nb_comps, p_go_on, p_stream, p_manager); }
target: 0 | cwe: [ "CWE-20" ] | project: openjpeg | commit_id: 4edb8c83374f52cd6a8f2c7c875e8ffacccb5fa5 | hash: 93,976,259,371,183,720,000,000,000,000,000,000,000 | size: 23
Add support for generation of PLT markers in encoder * -PLT switch added to opj_compress * Add a opj_encoder_set_extra_options() function that accepts a PLT=YES option, and could be expanded later for other uses. ------- Testing with a Sentinel2 10m band, T36JTT_20160914T074612_B02.jp2, coming from S2A_MSIL1C_20160914T074612_N0204_R135_T36JTT_20160914T081456.SAFE Decompress it to TIFF: ``` opj_uncompress -i T36JTT_20160914T074612_B02.jp2 -o T36JTT_20160914T074612_B02.tif ``` Recompress it with similar parameters as original: ``` opj_compress -n 5 -c [256,256],[256,256],[256,256],[256,256],[256,256] -t 1024,1024 -PLT -i T36JTT_20160914T074612_B02.tif -o T36JTT_20160914T074612_B02_PLT.jp2 ``` Dump codestream detail with GDAL dump_jp2.py utility (https://github.com/OSGeo/gdal/blob/master/gdal/swig/python/samples/dump_jp2.py) ``` python dump_jp2.py T36JTT_20160914T074612_B02.jp2 > /tmp/dump_sentinel2_ori.txt python dump_jp2.py T36JTT_20160914T074612_B02_PLT.jp2 > /tmp/dump_sentinel2_openjpeg_plt.txt ``` The diff between both show very similar structure, and identical number of packets in PLT markers Now testing with Kakadu (KDU803_Demo_Apps_for_Linux-x86-64_200210) Full file decompression: ``` kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp.tif Consumed 121 tile-part(s) from a total of 121 tile(s). Consumed 80,318,806 codestream bytes (excluding any file format) = 5.329697 bits/pel. Processed using the multi-threaded environment, with 8 parallel threads of execution ``` Partial decompresson (presumably using PLT markers): ``` kdu_expand -i T36JTT_20160914T074612_B02.jp2 -o tmp.pgm -region "{0.5,0.5},{0.01,0.01}" kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp2.pgm -region "{0.5,0.5},{0.01,0.01}" diff tmp.pgm tmp2.pgm && echo "same !" ``` ------- Funded by ESA for S2-MPC project
PHP_NAMED_FUNCTION(php_inet_pton) { int ret, af = AF_INET; char *address; int address_len; char buffer[17]; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s", &address, &address_len) == FAILURE) { RETURN_FALSE; } memset(buffer, 0, sizeof(buffer)); #ifdef HAVE_IPV6 if (strchr(address, ':')) { af = AF_INET6; } else #endif if (!strchr(address, '.')) { php_error_docref(NULL TSRMLS_CC, E_WARNING, "Unrecognized address %s", address); RETURN_FALSE; } ret = inet_pton(af, address, buffer); if (ret <= 0) { php_error_docref(NULL TSRMLS_CC, E_WARNING, "Unrecognized address %s", address); RETURN_FALSE; } RETURN_STRINGL(buffer, af == AF_INET ? 4 : 16, 1); }
target: 0 | cwe: [ "CWE-19" ] | project: php-src | commit_id: be9b2a95adb504abd5acdc092d770444ad6f6854 | hash: 102,334,728,588,331,490,000,000,000,000,000,000,000 | size: 32
Fixed bug #69418 - more s->p fixes for filenames
TEST(InMatchExpression, MatchesMinKey) { BSONObj operand = BSON_ARRAY(MinKey); InMatchExpression in("a"); std::vector<BSONElement> equalities{operand.firstElement()}; ASSERT_OK(in.setEqualities(std::move(equalities))); ASSERT(in.matchesBSON(BSON("a" << MinKey), NULL)); ASSERT(!in.matchesBSON(BSON("a" << MaxKey), NULL)); ASSERT(!in.matchesBSON(BSON("a" << 4), NULL)); }
target: 0 | cwe: [] | project: mongo | commit_id: 64095239f41e9f3841d8be9088347db56d35c891 | hash: 65,711,962,331,854,150,000,000,000,000,000,000,000 | size: 10
SERVER-51083 Reject invalid UTF-8 from $regex match expressions
static void virtio_gpu_cleanup_mapping(struct virtio_gpu_simple_resource *res) { virtio_gpu_cleanup_mapping_iov(res->iov, res->iov_cnt); res->iov = NULL; res->iov_cnt = 0; g_free(res->addrs); res->addrs = NULL; }
target: 0 | cwe: [ "CWE-772", "CWE-401" ] | project: qemu | commit_id: dd248ed7e204ee8a1873914e02b8b526e8f1b80d | hash: 280,142,761,310,503,200,000,000,000,000,000,000,000 | size: 8
virtio-gpu: fix memory leak in set scanout In virtio_gpu_set_scanout function, when creating the 'rect' its refcount is set to 2, by pixman_image_create_bits and qemu_create_displaysurface_pixman function. This can lead a memory leak issues. This patch avoid this issue. Signed-off-by: Li Qiang <[email protected]> Reviewed-by: Marc-André Lureau <[email protected]> Message-id: [email protected] Signed-off-by: Gerd Hoffmann <[email protected]>
virtual bool is_result_field() { return 0; }
target: 0 | cwe: [] | project: mysql-server | commit_id: f7316aa0c9a3909fc7498e7b95d5d3af044a7e21 | hash: 312,170,997,676,397,450,000,000,000,000,000,000,000 | size: 1
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL)) Backport of Bug#19143243 fix. NAME_CONST item can return NULL_ITEM type in case of incorrect arguments. NULL_ITEM has special processing in Item_func_in function. In Item_func_in::fix_length_and_dec an array of possible comparators is created. Since NAME_CONST function has NULL_ITEM type, corresponding array element is empty. Then NAME_CONST is wrapped to ITEM_CACHE. ITEM_CACHE can not return proper type(NULL_ITEM) in Item_func_in::val_int(), so the NULL_ITEM is attempted compared with an empty comparator. The fix is to disable the caching of Item_name_const item.
LUALIB_API int luaopen_cmsgpack_safe(lua_State *L) { int i; luaopen_cmsgpack(L); /* Wrap all functions in the safe handler */ for (i = 0; i < (sizeof(cmds)/sizeof(*cmds) - 1); i++) { lua_getfield(L, -1, cmds[i].name); lua_pushcclosure(L, mp_safe, 1); lua_setfield(L, -2, cmds[i].name); } #if LUA_VERSION_NUM < 502 /* Register name globally for 5.1 */ lua_pushvalue(L, -1); lua_setglobal(L, LUACMSGPACK_SAFE_NAME); #endif return 1; }
target: 0 | cwe: [ "CWE-119", "CWE-787" ] | project: redis | commit_id: 52a00201fca331217c3b4b8b634f6a0f57d6b7d3 | hash: 20,866,902,983,380,486,000,000,000,000,000,000,000 | size: 20
Security: fix Lua cmsgpack library stack overflow. During an auditing effort, the Apple Vulnerability Research team discovered a critical Redis security issue affecting the Lua scripting part of Redis. -- Description of the problem Several years ago I merged a pull request including many small changes at the Lua MsgPack library (that originally I authored myself). The Pull Request entered Redis in commit 90b6337c1, in 2014. Unfortunately one of the changes included a variadic Lua function that lacked the check for the available Lua C stack. As a result, calling the "pack" MsgPack library function with a large number of arguments, results into pushing into the Lua C stack a number of new values proportional to the number of arguments the function was called with. The pushed values, moreover, are controlled by untrusted user input. This in turn causes stack smashing which we believe to be exploitable, while not very deterministic, but it is likely that an exploit could be created targeting specific versions of Redis executables. However at its minimum the issue results in a DoS, crashing the Redis server. -- Versions affected Versions greater or equal to Redis 2.8.18 are affected. -- Reproducing Reproduce with this (based on the original reproduction script by Apple security team): https://gist.github.com/antirez/82445fcbea6d9b19f97014cc6cc79f8a -- Verification of the fix The fix was tested in the following way: 1) I checked that the problem is no longer observable running the trigger. 2) The Lua code was analyzed to understand the stack semantics, and that actually enough stack is allocated in all the cases of mp_pack() calls. 3) The mp_pack() function was modified in order to show exactly what items in the stack were being set, to make sure that there is no silent overflow even after the fix. -- Credits Thank you to the Apple team and to the other persons that helped me checking the patch and coordinating this communication.
void CLASS wavelet_denoise() { float *fimg=0, *temp, thold, mul[2], avg, diff; int scale=1, size, lev, hpass, lpass, row, col, nc, c, i, wlast, blk[2]; ushort *window[4]; static const float noise[] = { 0.8002,0.2735,0.1202,0.0585,0.0291,0.0152,0.0080,0.0044 }; #ifdef DCRAW_VERBOSE if (verbose) fprintf (stderr,_("Wavelet denoising...\n")); #endif while (maximum << scale < 0x10000) scale++; maximum <<= --scale; black <<= scale; FORC4 cblack[c] <<= scale; if ((size = iheight*iwidth) < 0x15550000) fimg = (float *) malloc ((size*3 + iheight + iwidth) * sizeof *fimg); merror (fimg, "wavelet_denoise()"); temp = fimg + size*3; if ((nc = colors) == 3 && filters) nc++; FORC(nc) { /* denoise R,G1,B,G3 individually */ for (i=0; i < size; i++) fimg[i] = 256 * sqrt((double)(image[i][c] << scale)); for (hpass=lev=0; lev < 5; lev++) { lpass = size*((lev & 1)+1); for (row=0; row < iheight; row++) { hat_transform (temp, fimg+hpass+row*iwidth, 1, iwidth, 1 << lev); for (col=0; col < iwidth; col++) fimg[lpass + row*iwidth + col] = temp[col] * 0.25; } for (col=0; col < iwidth; col++) { hat_transform (temp, fimg+lpass+col, iwidth, iheight, 1 << lev); for (row=0; row < iheight; row++) fimg[lpass + row*iwidth + col] = temp[row] * 0.25; } thold = threshold * noise[lev]; for (i=0; i < size; i++) { fimg[hpass+i] -= fimg[lpass+i]; if (fimg[hpass+i] < -thold) fimg[hpass+i] += thold; else if (fimg[hpass+i] > thold) fimg[hpass+i] -= thold; else fimg[hpass+i] = 0; if (hpass) fimg[i] += fimg[hpass+i]; } hpass = lpass; } for (i=0; i < size; i++) image[i][c] = CLIP(SQR(fimg[i]+fimg[lpass+i])/0x10000); } if (filters && colors == 3) { /* pull G1 and G3 closer together */ for (row=0; row < 2; row++) { mul[row] = 0.125 * pre_mul[FC(row+1,0) | 1] / pre_mul[FC(row,0) | 1]; blk[row] = cblack[FC(row,0) | 1]; } for (i=0; i < 4; i++) window[i] = (ushort *) fimg + width*i; for (wlast=-1, row=1; row < height-1; row++) { while (wlast < row+1) { for (wlast++, i=0; i < 4; i++) window[(i+3) & 3] = window[i]; for (col = FC(wlast,1) & 1; col < width; col+=2) window[2][col] = BAYER(wlast,col); } thold = threshold/512; for (col = (FC(row,0) & 1)+1; col < width-1; col+=2) { avg = ( window[0][col-1] + window[0][col+1] + window[2][col-1] + window[2][col+1] - blk[~row & 1]*4 ) * mul[row & 1] + (window[1][col] + blk[row & 1]) * 0.5; avg = avg < 0 ? 0 : sqrt(avg); diff = sqrt((double)(BAYER(row,col))) - avg; if (diff < -thold) diff += thold; else if (diff > thold) diff -= thold; else diff = 0; BAYER(row,col) = CLIP(SQR(avg+diff) + 0.5); } } } free (fimg); }
target: 0 | cwe: [] | project: LibRaw | commit_id: c4e374ea6c979a7d1d968f5082b7d0ea8cd27202 | hash: 129,728,671,509,164,580,000,000,000,000,000,000,000 | size: 79
additional data checks backported from 0.15.4
static void srpt_drop_tport(struct se_wwn *wwn) { struct srpt_port *sport = container_of(wwn, struct srpt_port, port_wwn); pr_debug("drop_tport(%s\n", config_item_name(&sport->port_wwn.wwn_group.cg_item)); }
target: 0 | cwe: [ "CWE-200", "CWE-476" ] | project: linux | commit_id: 51093254bf879bc9ce96590400a87897c7498463 | hash: 143,555,846,857,044,140,000,000,000,000,000,000,000 | size: 6
IB/srpt: Simplify srpt_handle_tsk_mgmt() Let the target core check task existence instead of the SRP target driver. Additionally, let the target core check the validity of the task management request instead of the ib_srpt driver. This patch fixes the following kernel crash: BUG: unable to handle kernel NULL pointer dereference at 0000000000000001 IP: [<ffffffffa0565f37>] srpt_handle_new_iu+0x6d7/0x790 [ib_srpt] Oops: 0002 [#1] SMP Call Trace: [<ffffffffa05660ce>] srpt_process_completion+0xde/0x570 [ib_srpt] [<ffffffffa056669f>] srpt_compl_thread+0x13f/0x160 [ib_srpt] [<ffffffff8109726f>] kthread+0xcf/0xe0 [<ffffffff81613cfc>] ret_from_fork+0x7c/0xb0 Signed-off-by: Bart Van Assche <[email protected]> Fixes: 3e4f574857ee ("ib_srpt: Convert TMR path to target_submit_tmr") Tested-by: Alex Estrin <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Nicholas Bellinger <[email protected]> Cc: Sagi Grimberg <[email protected]> Cc: stable <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
slapi_pblock_new() { Slapi_PBlock *pb; pb = (Slapi_PBlock *)slapi_ch_calloc(1, sizeof(Slapi_PBlock)); #ifdef PBLOCK_ANALYTICS pblock_analytics_init(pb); #endif return pb; }
target: 0 | cwe: [ "CWE-415" ] | project: 389-ds-base | commit_id: a3c298f8140d3e4fa1bd5a670f1bb965a21a9b7b | hash: 283,974,954,871,691,450,000,000,000,000,000,000,000 | size: 10
Issue 5218 - double-free of the virtual attribute context in persistent search (#5219) description: A search is processed by a worker using a private pblock. If the search is persistent, the worker spawn a thread and kind of duplicate its private pblock so that the spawn thread continue to process the persistent search. Then worker ends the initial search, reinit (free) its private pblock, and returns monitoring the wait_queue. When the persistent search completes, it frees the duplicated pblock. The problem is that private pblock and duplicated pblock are referring to a same structure (pb_vattr_context). That can lead to a double free Fix: When cloning the pblock (slapi_pblock_clone) make sure to transfert the references inside the original (private) pblock to the target (cloned) one That includes pb_vattr_context pointer. Reviewed by: Mark Reynolds, James Chapman, Pierre Rogier (Thanks !) Co-authored-by: Mark Reynolds <[email protected]>
GF_Box *totl_box_new() { ISOM_DECL_BOX_ALLOC(GF_TOTLBox, GF_ISOM_BOX_TYPE_TOTL); return (GF_Box *)tmp; }
target: 0 | cwe: [ "CWE-787" ] | project: gpac | commit_id: 388ecce75d05e11fc8496aa4857b91245007d26e | hash: 156,726,263,276,561,890,000,000,000,000,000,000,000 | size: 5
fixed #1587
PHP_FUNCTION(openssl_verify) { zval *key; EVP_PKEY *pkey; int err = 0; EVP_MD_CTX *md_ctx; const EVP_MD *mdtype; zend_resource *keyresource = NULL; char * data; size_t data_len; char * signature; size_t signature_len; zval *method = NULL; zend_long signature_algo = OPENSSL_ALGO_SHA1; if (zend_parse_parameters(ZEND_NUM_ARGS(), "ssz|z", &data, &data_len, &signature, &signature_len, &key, &method) == FAILURE) { return; } PHP_OPENSSL_CHECK_SIZE_T_TO_UINT(signature_len, signature); if (method == NULL || Z_TYPE_P(method) == IS_LONG) { if (method != NULL) { signature_algo = Z_LVAL_P(method); } mdtype = php_openssl_get_evp_md_from_algo(signature_algo); } else if (Z_TYPE_P(method) == IS_STRING) { mdtype = EVP_get_digestbyname(Z_STRVAL_P(method)); } else { php_error_docref(NULL, E_WARNING, "Unknown signature algorithm."); RETURN_FALSE; } if (!mdtype) { php_error_docref(NULL, E_WARNING, "Unknown signature algorithm."); RETURN_FALSE; } pkey = php_openssl_evp_from_zval(key, 1, NULL, 0, 0, &keyresource); if (pkey == NULL) { php_error_docref(NULL, E_WARNING, "supplied key param cannot be coerced into a public key"); RETURN_FALSE; } md_ctx = EVP_MD_CTX_create(); if (md_ctx == NULL || !EVP_VerifyInit (md_ctx, mdtype) || !EVP_VerifyUpdate (md_ctx, data, data_len) || (err = EVP_VerifyFinal(md_ctx, (unsigned char *)signature, (unsigned int)signature_len, pkey)) < 0) { php_openssl_store_errors(); } EVP_MD_CTX_destroy(md_ctx); if (keyresource == NULL) { EVP_PKEY_free(pkey); } RETURN_LONG(err); }
target: 0 | cwe: [ "CWE-326" ] | project: php-src | commit_id: 0216630ea2815a5789a24279a1211ac398d4de79 | hash: 268,481,574,231,028,500,000,000,000,000,000,000,000 | size: 57
Fix bug #79601 (Wrong ciphertext/tag in AES-CCM encryption for a 12 bytes IV)
int fb_show_logo(struct fb_info *info, int rotate) { return 0; }
target: 0 | cwe: [ "CWE-703", "CWE-189" ] | project: linux | commit_id: fc9bbca8f650e5f738af8806317c0a041a48ae4a | hash: 216,577,319,078,413,520,000,000,000,000,000,000,000 | size: 1
vm: convert fb_mmap to vm_iomap_memory() helper This is my example conversion of a few existing mmap users. The fb_mmap() case is a good example because it is a bit more complicated than some: fb_mmap() mmaps one of two different memory areas depending on the page offset of the mmap (but happily there is never any mixing of the two, so the helper function still works). Signed-off-by: Linus Torvalds <[email protected]>
bool CoreNetwork::cipherUsesCBC(const QString &target) { CoreIrcChannel *c = qobject_cast<CoreIrcChannel*>(ircChannel(target)); if (c) return c->cipher()->usesCBC(); CoreIrcUser *u = qobject_cast<CoreIrcUser*>(ircUser(target)); if (u) return u->cipher()->usesCBC(); return false; }
target: 0 | cwe: [ "CWE-399" ] | project: quassel | commit_id: b5e38970ffd55e2dd9f706ce75af9a8d7730b1b8 | hash: 139,564,916,506,729,900,000,000,000,000,000,000,000 | size: 11
Improve the message-splitting algorithm for PRIVMSG and CTCP This introduces a new message splitting algorithm based on QTextBoundaryFinder. It works by first starting with the entire message to be sent, encoding it, and checking to see if it is over the maximum message length. If it is, it uses QTBF to find the word boundary most immediately preceding the maximum length. If no suitable boundary can be found, it falls back to searching for grapheme boundaries. It repeats this process until the entire message has been sent. Unlike what it replaces, the new splitting code is not recursive and cannot cause stack overflows. Additionally, if it is unable to split a string, it will give up gracefully and not crash the core or cause a thread to run away. This patch fixes two bugs. The first is garbage characters caused by accidentally splitting the string in the middle of a multibyte character. Since the new code splits at a character level instead of a byte level, this will no longer be an issue. The second is the core crash caused by sending an overlength CTCP query ("/me") containing only multibyte characters. This bug was caused by the old CTCP splitter using the byte index from lastParamOverrun() as a character index for a QString.
virtual bool is_bool_type() { return false; }
target: 0 | cwe: [ "CWE-617" ] | project: server | commit_id: 2e7891080667c59ac80f788eef4d59d447595772 | hash: 64,661,891,688,124,270,000,000,000,000,000,000,000 | size: 1
MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view This bug could manifest itself after pushing a where condition over a mergeable derived table / view / CTE DT into a grouping view / derived table / CTE V whose item list contained set functions with constant arguments such as MIN(2), SUM(1) etc. In such cases the field references used in the condition pushed into the view V that correspond set functions are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation of the virtual method const_item() for the class Item_direct_view_ref the wrapped set functions with constant arguments could be erroneously taken for constant items. This could lead to a wrong result set returned by the main select query in 10.2. In 10.4 where a possibility of pushing condition from HAVING into WHERE had been added this could cause a crash. Approved by Sergey Petrunya <[email protected]>
int zuiLongLongFromValue(zsetopval *val) { if (!(val->flags & OPVAL_DIRTY_LL)) { val->flags |= OPVAL_DIRTY_LL; if (val->ele != NULL) { if (string2ll(val->ele,sdslen(val->ele),&val->ell)) val->flags |= OPVAL_VALID_LL; } else if (val->estr != NULL) { if (string2ll((char*)val->estr,val->elen,&val->ell)) val->flags |= OPVAL_VALID_LL; } else { /* The long long was already set, flag as valid. */ val->flags |= OPVAL_VALID_LL; } } return val->flags & OPVAL_VALID_LL; }
target: 0 | cwe: [ "CWE-190" ] | project: redis | commit_id: f6a40570fa63d5afdd596c78083d754081d80ae3 | hash: 140,383,312,615,382,570,000,000,000,000,000,000,000 | size: 17
Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628) - fix possible heap corruption in ziplist and listpack resulting by trying to allocate more than the maximum size of 4GB. - prevent ziplist (hash and zset) from reaching size of above 1GB, will be converted to HT encoding, that's not a useful size. - prevent listpack (stream) from reaching size of above 1GB. - XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB. - XADD will respond with an error if a single stream record is over 1GB - List type (ziplist in quicklist) was truncating strings that were over 4GB, now it'll respond with an error.
testThread(void) { fprintf(stderr, "Specific platform thread support not detected\n"); return (-1); }
target: 0 | cwe: [ "CWE-125" ] | project: libxml2 | commit_id: a820dbeac29d330bae4be05d9ecd939ad6b4aa33 | hash: 206,267,921,897,433,850,000,000,000,000,000,000,000 | size: 6
Bug 758605: Heap-based buffer overread in xmlDictAddString <https://bugzilla.gnome.org/show_bug.cgi?id=758605> Reviewed by David Kilzer. * HTMLparser.c: (htmlParseName): Add bounds check. (htmlParseNameComplex): Ditto. * result/HTML/758605.html: Added. * result/HTML/758605.html.err: Added. * result/HTML/758605.html.sax: Added. * runtest.c: (pushParseTest): The input for the new test case was so small (4 bytes) that htmlParseChunk() was never called after htmlCreatePushParserCtxt(), thereby creating a false positive test failure. Fixed by using a do-while loop so we always call htmlParseChunk() at least once. * test/HTML/758605.html: Added.
cmsPipeline* CMSEXPORT cmsPipelineDup(const cmsPipeline* lut) { cmsPipeline* NewLUT; cmsStage *NewMPE, *Anterior = NULL, *mpe; cmsBool First = TRUE; if (lut == NULL) return NULL; NewLUT = cmsPipelineAlloc(lut ->ContextID, lut ->InputChannels, lut ->OutputChannels); for (mpe = lut ->Elements; mpe != NULL; mpe = mpe ->Next) { NewMPE = cmsStageDup(mpe); if (NewMPE == NULL) { cmsPipelineFree(NewLUT); return NULL; } if (First) { NewLUT ->Elements = NewMPE; First = FALSE; } else { Anterior ->Next = NewMPE; } Anterior = NewMPE; } NewLUT ->Eval16Fn = lut ->Eval16Fn; NewLUT ->EvalFloatFn = lut ->EvalFloatFn; NewLUT ->DupDataFn = lut ->DupDataFn; NewLUT ->FreeDataFn = lut ->FreeDataFn; if (NewLUT ->DupDataFn != NULL) NewLUT ->Data = NewLUT ->DupDataFn(lut ->ContextID, lut->Data); NewLUT ->SaveAs8Bits = lut ->SaveAs8Bits; BlessLUT(NewLUT); return NewLUT; }
target: 1 | cwe: [] | project: Little-CMS | commit_id: 886e2f524268efe8a1c3aa838c28e446fda24486 | hash: 320,532,601,666,313,700,000,000,000,000,000,000,000 | size: 45
Fixes from coverity check
void tpm_get_timeouts(struct tpm_chip *chip) { struct tpm_cmd_t tpm_cmd; struct timeout_t *timeout_cap; struct duration_t *duration_cap; ssize_t rc; u32 timeout; unsigned int scale = 1; tpm_cmd.header.in = tpm_getcap_header; tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP; tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4); tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_TIMEOUT; rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, "attempting to determine the timeouts"); if (rc) goto duration; if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 || be32_to_cpu(tpm_cmd.header.out.length) != sizeof(tpm_cmd.header.out) + sizeof(u32) + 4 * sizeof(u32)) return; timeout_cap = &tpm_cmd.params.getcap_out.cap.timeout; /* Don't overwrite default if value is 0 */ timeout = be32_to_cpu(timeout_cap->a); if (timeout && timeout < 1000) { /* timeouts in msec rather usec */ scale = 1000; chip->vendor.timeout_adjusted = true; } if (timeout) chip->vendor.timeout_a = usecs_to_jiffies(timeout * scale); timeout = be32_to_cpu(timeout_cap->b); if (timeout) chip->vendor.timeout_b = usecs_to_jiffies(timeout * scale); timeout = be32_to_cpu(timeout_cap->c); if (timeout) chip->vendor.timeout_c = usecs_to_jiffies(timeout * scale); timeout = be32_to_cpu(timeout_cap->d); if (timeout) chip->vendor.timeout_d = usecs_to_jiffies(timeout * scale); duration: tpm_cmd.header.in = tpm_getcap_header; tpm_cmd.params.getcap_in.cap = TPM_CAP_PROP; tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4); tpm_cmd.params.getcap_in.subcap = TPM_CAP_PROP_TIS_DURATION; rc = transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, "attempting to determine the durations"); if (rc) return; if (be32_to_cpu(tpm_cmd.header.out.return_code) != 0 || be32_to_cpu(tpm_cmd.header.out.length) != sizeof(tpm_cmd.header.out) + sizeof(u32) + 3 * sizeof(u32)) return; duration_cap = &tpm_cmd.params.getcap_out.cap.duration; chip->vendor.duration[TPM_SHORT] = usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_short)); chip->vendor.duration[TPM_MEDIUM] = usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_medium)); chip->vendor.duration[TPM_LONG] = usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_long)); /* The Broadcom BCM0102 chipset in a Dell Latitude D820 gets the above * value wrong and apparently reports msecs rather than usecs. So we * fix up the resulting too-small TPM_SHORT value to make things work. * We also scale the TPM_MEDIUM and -_LONG values by 1000. */ if (chip->vendor.duration[TPM_SHORT] < (HZ / 100)) { chip->vendor.duration[TPM_SHORT] = HZ; chip->vendor.duration[TPM_MEDIUM] *= 1000; chip->vendor.duration[TPM_LONG] *= 1000; chip->vendor.duration_adjusted = true; dev_info(chip->dev, "Adjusting TPM timeout parameters."); } }
target: 0 | cwe: [ "CWE-200" ] | project: tpm | commit_id: adfea973dfca35407de074ae2052be221e4b8956 | hash: 4,897,984,027,911,187,000,000,000,000,000,000,000 | size: 81
TPM: Call tpm_transmit with correct size This patch changes the call of tpm_transmit by supplying the size of the userspace buffer instead of TPM_BUFSIZE. This got assigned CVE-2011-1161. [The first hunk didn't make much sense given one could expect way less data than TPM_BUFSIZE, so added tpm_transmit boundary check over bufsiz instead] Signed-off-by: Rajiv Andrade <[email protected]>
static unsigned zlib_compress(unsigned char** out, size_t* outsize, const unsigned char* in, size_t insize, const LodePNGCompressSettings* settings) { if (!settings->custom_zlib) return 87; /*no custom zlib function provided */ return settings->custom_zlib(out, outsize, in, insize, settings); }
target: 0 | cwe: [ "CWE-401" ] | project: FreeRDP | commit_id: 9fee4ae076b1ec97b97efb79ece08d1dab4df29a | hash: 220,095,069,074,580,650,000,000,000,000,000,000,000 | size: 6
Fixed #5645: realloc return handling
Statement_Ptr Expand::operator()(Declaration_Ptr d) { Block_Obj ab = d->block(); String_Obj old_p = d->property(); Expression_Obj prop = old_p->perform(&eval); String_Obj new_p = Cast<String>(prop); // we might get a color back if (!new_p) { std::string str(prop->to_string(ctx.c_options)); new_p = SASS_MEMORY_NEW(String_Constant, old_p->pstate(), str); } Expression_Obj value = d->value()->perform(&eval); Block_Obj bb = ab ? operator()(ab) : NULL; if (!bb) { if (!value || (value->is_invisible() && !d->is_important())) return 0; } Declaration_Ptr decl = SASS_MEMORY_NEW(Declaration, d->pstate(), new_p, value, d->is_important(), d->is_custom_property(), bb); decl->tabs(d->tabs()); return decl; }
target: 1 | cwe: [ "CWE-476" ] | project: libsass | commit_id: 0bc35e3d26922229d5a3e3308860cf0fcee5d1cf | hash: 113,533,062,334,791,860,000,000,000,000,000,000,000 | size: 26
Fix segfault on empty custom properties Originally reported in sass/sassc#225 Fixes sass/sassc#225 Spec sass/sass-spec#1249
ofpacts_pull_openflow_actions(struct ofpbuf *openflow, unsigned int actions_len, enum ofp_version version, const struct vl_mff_map *vl_mff_map, uint64_t *ofpacts_tlv_bitmap, struct ofpbuf *ofpacts) { return ofpacts_pull_openflow_actions__( openflow, actions_len, version, (1u << OVSINST_OFPIT11_APPLY_ACTIONS) | (1u << OVSINST_OFPIT13_METER), ofpacts, 0, vl_mff_map, ofpacts_tlv_bitmap); }
target: 0 | cwe: [ "CWE-416" ] | project: ovs | commit_id: 77cccc74deede443e8b9102299efc869a52b65b2 | hash: 77,144,883,073,366,090,000,000,000,000,000,000,000 | size: 12
ofp-actions: Fix use-after-free while decoding RAW_ENCAP. While decoding RAW_ENCAP action, decode_ed_prop() might re-allocate ofpbuf if there is no enough space left. However, function 'decode_NXAST_RAW_ENCAP' continues to use old pointer to 'encap' structure leading to write-after-free and incorrect decoding. ==3549105==ERROR: AddressSanitizer: heap-use-after-free on address 0x60600000011a at pc 0x0000005f6cc6 bp 0x7ffc3a2d4410 sp 0x7ffc3a2d4408 WRITE of size 2 at 0x60600000011a thread T0 #0 0x5f6cc5 in decode_NXAST_RAW_ENCAP lib/ofp-actions.c:4461:20 #1 0x5f0551 in ofpact_decode ./lib/ofp-actions.inc2:4777:16 #2 0x5ed17c in ofpacts_decode lib/ofp-actions.c:7752:21 #3 0x5eba9a in ofpacts_pull_openflow_actions__ lib/ofp-actions.c:7791:13 #4 0x5eb9fc in ofpacts_pull_openflow_actions lib/ofp-actions.c:7835:12 #5 0x64bb8b in ofputil_decode_packet_out lib/ofp-packet.c:1113:17 #6 0x65b6f4 in ofp_print_packet_out lib/ofp-print.c:148:13 #7 0x659e3f in ofp_to_string__ lib/ofp-print.c:1029:16 #8 0x659b24 in ofp_to_string lib/ofp-print.c:1244:21 #9 0x65a28c in ofp_print lib/ofp-print.c:1288:28 #10 0x540d11 in ofctl_ofp_parse utilities/ovs-ofctl.c:2814:9 #11 0x564228 in ovs_cmdl_run_command__ lib/command-line.c:247:17 #12 0x56408a in ovs_cmdl_run_command lib/command-line.c:278:5 #13 0x5391ae in main utilities/ovs-ofctl.c:179:9 #14 0x7f6911ce9081 in __libc_start_main (/lib64/libc.so.6+0x27081) #15 0x461fed in _start (utilities/ovs-ofctl+0x461fed) Fix that by getting a new pointer before using. Credit to OSS-Fuzz. Fuzzer regression test will fail only with AddressSanitizer enabled. Reported-at: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=27851 Fixes: f839892a206a ("OF support and translation of generic encap and decap") Acked-by: William Tu <[email protected]> Signed-off-by: Ilya Maximets <[email protected]>
xfs_attr3_leaf_order( struct xfs_buf *leaf1_bp, struct xfs_attr3_icleaf_hdr *leaf1hdr, struct xfs_buf *leaf2_bp, struct xfs_attr3_icleaf_hdr *leaf2hdr) { struct xfs_attr_leaf_entry *entries1; struct xfs_attr_leaf_entry *entries2; entries1 = xfs_attr3_leaf_entryp(leaf1_bp->b_addr); entries2 = xfs_attr3_leaf_entryp(leaf2_bp->b_addr); if (leaf1hdr->count > 0 && leaf2hdr->count > 0 && ((be32_to_cpu(entries2[0].hashval) < be32_to_cpu(entries1[0].hashval)) || (be32_to_cpu(entries2[leaf2hdr->count - 1].hashval) < be32_to_cpu(entries1[leaf1hdr->count - 1].hashval)))) { return 1; } return 0; }
target: 0 | cwe: [ "CWE-241", "CWE-19" ] | project: linux | commit_id: 8275cdd0e7ac550dcce2b3ef6d2fb3b808c1ae59 | hash: 93,488,332,072,739,380,000,000,000,000,000,000,000 | size: 20
xfs: remote attribute overwrite causes transaction overrun Commit e461fcb ("xfs: remote attribute lookups require the value length") passes the remote attribute length in the xfs_da_args structure on lookup so that CRC calculations and validity checking can be performed correctly by related code. This, unfortunately has the side effect of changing the args->valuelen parameter in cases where it shouldn't. That is, when we replace a remote attribute, the incoming replacement stores the value and length in args->value and args->valuelen, but then the lookup which finds the existing remote attribute overwrites args->valuelen with the length of the remote attribute being replaced. Hence when we go to create the new attribute, we create it of the size of the existing remote attribute, not the size it is supposed to be. When the new attribute is much smaller than the old attribute, this results in a transaction overrun and an ASSERT() failure on a debug kernel: XFS: Assertion failed: tp->t_blk_res_used <= tp->t_blk_res, file: fs/xfs/xfs_trans.c, line: 331 Fix this by keeping the remote attribute value length separate to the attribute value length in the xfs_da_args structure. The enables us to pass the length of the remote attribute to be removed without overwriting the new attribute's length. Also, ensure that when we save remote block contexts for a later rename we zero the original state variables so that we don't confuse the state of the attribute to be removes with the state of the new attribute that we just added. [Spotted by Brain Foster.] Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Brian Foster <[email protected]> Signed-off-by: Dave Chinner <[email protected]>
static int read_param(link_ctx *ctx) { if (skip_ws(ctx) && read_chr(ctx, ';')) { const char *name, *value = ""; if (read_pname(ctx, &name)) { read_pvalue(ctx, &value); /* value is optional */ apr_table_setn(ctx->params, name, value); return 1; } } return 0; }
target: 0 | cwe: [ "CWE-444" ] | project: mod_h2 | commit_id: b8a8c5061eada0ce3339b24ba1d587134552bc0c | hash: 25,918,506,343,822,637,000,000,000,000,000,000,000 | size: 12
* Removing support for abandoned draft of http-wg regarding cache-digests.
static void sk_prot_free(struct proto *prot, struct sock *sk) { struct kmem_cache *slab; struct module *owner; owner = prot->owner; slab = prot->slab; cgroup_sk_free(&sk->sk_cgrp_data); mem_cgroup_sk_free(sk); security_sk_free(sk); if (slab != NULL) kmem_cache_free(slab, sk); else kfree(sk); module_put(owner); }
target: 0 | cwe: [ "CWE-119", "CWE-787" ] | project: linux | commit_id: b98b0bc8c431e3ceb4b26b0dfc8db509518fb290 | hash: 305,923,894,213,229,900,000,000,000,000,000,000,000 | size: 17
net: avoid signed overflows for SO_{SND|RCV}BUFFORCE CAP_NET_ADMIN users should not be allowed to set negative sk_sndbuf or sk_rcvbuf values, as it can lead to various memory corruptions, crashes, OOM... Note that before commit 82981930125a ("net: cleanups in sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF and SO_RCVBUF were vulnerable. This needs to be backported to all known linux kernels. Again, many thanks to syzkaller team for discovering this gem. Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Andrey Konovalov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
esis_print(netdissect_options *ndo, const uint8_t *pptr, u_int length) { const uint8_t *optr; u_int li,esis_pdu_type,source_address_length, source_address_number; const struct esis_header_t *esis_header; if (!ndo->ndo_eflag) ND_PRINT((ndo, "ES-IS")); if (length <= 2) { ND_PRINT((ndo, ndo->ndo_qflag ? "bad pkt!" : "no header at all!")); return; } esis_header = (const struct esis_header_t *) pptr; ND_TCHECK(*esis_header); li = esis_header->length_indicator; optr = pptr; /* * Sanity checking of the header. */ if (esis_header->nlpid != NLPID_ESIS) { ND_PRINT((ndo, " nlpid 0x%02x packet not supported", esis_header->nlpid)); return; } if (esis_header->version != ESIS_VERSION) { ND_PRINT((ndo, " version %d packet not supported", esis_header->version)); return; } if (li > length) { ND_PRINT((ndo, " length indicator(%u) > PDU size (%u)!", li, length)); return; } if (li < sizeof(struct esis_header_t) + 2) { ND_PRINT((ndo, " length indicator %u < min PDU size:", li)); while (pptr < ndo->ndo_snapend) ND_PRINT((ndo, "%02X", *pptr++)); return; } esis_pdu_type = esis_header->type & ESIS_PDU_TYPE_MASK; if (ndo->ndo_vflag < 1) { ND_PRINT((ndo, "%s%s, length %u", ndo->ndo_eflag ? "" : ", ", tok2str(esis_pdu_values,"unknown type (%u)",esis_pdu_type), length)); return; } else ND_PRINT((ndo, "%slength %u\n\t%s (%u)", ndo->ndo_eflag ? "" : ", ", length, tok2str(esis_pdu_values,"unknown type: %u", esis_pdu_type), esis_pdu_type)); ND_PRINT((ndo, ", v: %u%s", esis_header->version, esis_header->version == ESIS_VERSION ? "" : "unsupported" )); ND_PRINT((ndo, ", checksum: 0x%04x", EXTRACT_16BITS(esis_header->cksum))); osi_print_cksum(ndo, pptr, EXTRACT_16BITS(esis_header->cksum), 7, li); ND_PRINT((ndo, ", holding time: %us, length indicator: %u", EXTRACT_16BITS(esis_header->holdtime), li)); if (ndo->ndo_vflag > 1) print_unknown_data(ndo, optr, "\n\t", sizeof(struct esis_header_t)); pptr += sizeof(struct esis_header_t); li -= sizeof(struct esis_header_t); switch (esis_pdu_type) { case ESIS_PDU_REDIRECT: { const uint8_t *dst, *snpa, *neta; u_int dstl, snpal, netal; ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad redirect/li")); return; } dstl = *pptr; pptr++; li--; ND_TCHECK2(*pptr, dstl); if (li < dstl) { ND_PRINT((ndo, ", bad redirect/li")); return; } dst = pptr; pptr += dstl; li -= dstl; ND_PRINT((ndo, "\n\t %s", isonsap_string(ndo, dst, dstl))); ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad redirect/li")); return; } snpal = *pptr; pptr++; li--; ND_TCHECK2(*pptr, snpal); if (li < snpal) { ND_PRINT((ndo, ", bad redirect/li")); return; } snpa = pptr; pptr += snpal; li -= snpal; ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad redirect/li")); return; } netal = *pptr; pptr++; ND_TCHECK2(*pptr, netal); if (li < netal) { ND_PRINT((ndo, ", bad redirect/li")); return; } neta = pptr; pptr += netal; li -= netal; if (netal == 0) ND_PRINT((ndo, "\n\t %s", etheraddr_string(ndo, snpa))); else ND_PRINT((ndo, "\n\t %s", isonsap_string(ndo, neta, netal))); break; } case ESIS_PDU_ESH: ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad esh/li")); return; } source_address_number = *pptr; pptr++; li--; ND_PRINT((ndo, "\n\t Number of Source Addresses: %u", source_address_number)); while (source_address_number > 0) { ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad esh/li")); return; } source_address_length = *pptr; pptr++; li--; ND_TCHECK2(*pptr, source_address_length); if (li < source_address_length) { ND_PRINT((ndo, ", bad esh/li")); return; } ND_PRINT((ndo, "\n\t NET (length: %u): %s", source_address_length, 
isonsap_string(ndo, pptr, source_address_length))); pptr += source_address_length; li -= source_address_length; source_address_number--; } break; case ESIS_PDU_ISH: { ND_TCHECK(*pptr); if (li < 1) { ND_PRINT((ndo, ", bad ish/li")); return; } source_address_length = *pptr; pptr++; li--; ND_TCHECK2(*pptr, source_address_length); if (li < source_address_length) { ND_PRINT((ndo, ", bad ish/li")); return; } ND_PRINT((ndo, "\n\t NET (length: %u): %s", source_address_length, isonsap_string(ndo, pptr, source_address_length))); pptr += source_address_length; li -= source_address_length; break; } default: if (ndo->ndo_vflag <= 1) { if (pptr < ndo->ndo_snapend) print_unknown_data(ndo, pptr, "\n\t ", ndo->ndo_snapend - pptr); } return; } /* now walk the options */ while (li != 0) { u_int op, opli; const uint8_t *tptr; if (li < 2) { ND_PRINT((ndo, ", bad opts/li")); return; } ND_TCHECK2(*pptr, 2); op = *pptr++; opli = *pptr++; li -= 2; if (opli > li) { ND_PRINT((ndo, ", opt (%d) too long", op)); return; } li -= opli; tptr = pptr; ND_PRINT((ndo, "\n\t %s Option #%u, length %u, value: ", tok2str(esis_option_values,"Unknown",op), op, opli)); switch (op) { case ESIS_OPTION_ES_CONF_TIME: if (opli == 2) { ND_TCHECK2(*pptr, 2); ND_PRINT((ndo, "%us", EXTRACT_16BITS(tptr))); } else ND_PRINT((ndo, "(bad length)")); break; case ESIS_OPTION_PROTOCOLS: while (opli>0) { ND_TCHECK(*pptr); ND_PRINT((ndo, "%s (0x%02x)", tok2str(nlpid_values, "unknown", *tptr), *tptr)); if (opli>1) /* further NPLIDs ? - put comma */ ND_PRINT((ndo, ", ")); tptr++; opli--; } break; /* * FIXME those are the defined Options that lack a decoder * you are welcome to contribute code ;-) */ case ESIS_OPTION_QOS_MAINTENANCE: case ESIS_OPTION_SECURITY: case ESIS_OPTION_PRIORITY: case ESIS_OPTION_ADDRESS_MASK: case ESIS_OPTION_SNPA_MASK: default: print_unknown_data(ndo, tptr, "\n\t ", opli); break; } if (ndo->ndo_vflag > 1) print_unknown_data(ndo, pptr, "\n\t ", opli); pptr += opli; } trunc: return; }
target: 1 | cwe: [ "CWE-125", "CWE-787" ] | project: tcpdump | commit_id: c177cb3800a9a68d79b2812f0ffcb9479abd6eb8 | hash: 152,858,727,405,626,930,000,000,000,000,000,000,000 | size: 274
CVE-2017-13016/ES-IS: Fix printing of addresses in RD PDUs. Always print the SNPA, and flag it as such; only print it as a MAC address if it's 6 bytes long. Identify the NET as such. This fixes a buffer over-read discovered by Bhargava Shastry, SecT/TU Berlin. Add tests using the capture files supplied by the reporter(s), modified so the capture files won't be rejected as an invalid capture.
get_time_t_min(void) { #if defined(TIME_T_MIN) return TIME_T_MIN; #else if (((time_t)0) < ((time_t)-1)) { /* Time_t is unsigned */ return (time_t)0; } else { /* Time_t is signed. */ if (sizeof(time_t) == sizeof(int64_t)) { return (time_t)INT64_MIN; } else { return (time_t)INT32_MIN; } } #endif }
target: 0 | cwe: [ "CWE-476", "CWE-119" ] | project: libarchive | commit_id: a550daeecf6bc689ade371349892ea17b5b97c77 | hash: 298,719,989,339,536,070,000,000,000,000,000,000,000 | size: 18
Fix libarchive/archive_read_support_format_mtree.c:1388:11: error: array subscript is above array bounds
int oidc_auth_checker(request_rec *r) { /* check for anonymous access and PASS mode */ if (r->user != NULL && strlen(r->user) == 0) { r->user = NULL; if (oidc_dir_cfg_unauth_action(r) == OIDC_UNAUTH_PASS) return OK; } /* get the set of claims from the request state (they've been set in the authentication part earlier */ json_t *claims = NULL, *id_token = NULL; oidc_authz_get_claims_and_idtoken(r, &claims, &id_token); /* get the Require statements */ const apr_array_header_t * const reqs_arr = ap_requires(r); /* see if we have any */ const require_line * const reqs = reqs_arr ? (require_line *) reqs_arr->elts : NULL; if (!reqs_arr) { oidc_debug(r, "no require statements found, so declining to perform authorization."); return DECLINED; } /* merge id_token claims (e.g. "iss") in to claims json object */ if (claims) oidc_util_json_merge(id_token, claims); /* dispatch to the <2.4 specific authz routine */ int rc = oidc_authz_worker(r, claims ? claims : id_token, reqs, reqs_arr->nelts); /* cleanup */ if (claims) json_decref(claims); if (id_token) json_decref(id_token); if ((rc == HTTP_UNAUTHORIZED) && ap_auth_type(r) && (apr_strnatcasecmp((const char *) ap_auth_type(r), "oauth20") == 0)) oidc_oauth_return_www_authenticate(r, "insufficient_scope", "Different scope(s) or other claims required"); return rc; }
target: 0 | cwe: [ "CWE-20" ] | project: mod_auth_openidc | commit_id: 612e309bfffd6f9b8ad7cdccda3019fc0865f3b4 | hash: 233,246,613,439,240,130,000,000,000,000,000,000,000 | size: 47
don't echo query params on invalid requests to redirect URI; closes #212 thanks @LukasReschke; I'm sure there's some OWASP guideline that warns against this
oui_to_struct_tok(uint32_t orgcode) { const struct tok *tok = null_values; const struct oui_tok *otp; for (otp = &oui_to_tok[0]; otp->tok != NULL; otp++) { if (otp->oui == orgcode) { tok = otp->tok; break; } } return (tok); }
target: 0 | cwe: [ "CWE-125", "CWE-787" ] | project: tcpdump | commit_id: 1dcd10aceabbc03bf571ea32b892c522cbe923de | hash: 106,032,391,206,159,670,000,000,000,000,000,000,000 | size: 13
CVE-2017-12897/ISO CLNS: Use ND_TTEST() for the bounds checks in isoclns_print(). This fixes a buffer over-read discovered by Kamil Frankowicz. Don't pass the remaining caplen - that's too hard to get right, and we were getting it wrong in at least one case; just use ND_TTEST(). Add a test using the capture file supplied by the reporter(s).
void encodeTrailers(const RequestTrailerMap& trailers) override { encodeTrailersBase(trailers); }
target: 0 | cwe: [ "CWE-400" ] | project: envoy | commit_id: 0e49a495826ea9e29134c1bd54fdeb31a034f40c | hash: 69,178,258,720,740,500,000,000,000,000,000,000,000 | size: 1
http/2: add stats and stream flush timeout (#139) This commit adds a new stream flush timeout to guard against a remote server that does not open window once an entire stream has been buffered for flushing. Additional stats have also been added to better understand the codecs view of active streams as well as amount of data buffered. Signed-off-by: Matt Klein <[email protected]>
explicit AdminHook(Monitor *m) : mon(m) {}
target: 0 | cwe: [ "CWE-287", "CWE-284" ] | project: ceph | commit_id: 5ead97120e07054d80623dada90a5cc764c28468 | hash: 339,448,893,241,427,670,000,000,000,000,000,000,000 | size: 1
auth/cephx: add authorizer challenge Allow the accepting side of a connection to reject an initial authorizer with a random challenge. The connecting side then has to respond with an updated authorizer proving they are able to decrypt the service's challenge and that the new authorizer was produced for this specific connection instance. The accepting side requires this challenge and response unconditionally if the client side advertises they have the feature bit. Servers wishing to require this improved level of authentication simply have to require the appropriate feature. Signed-off-by: Sage Weil <[email protected]> (cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b) # Conflicts: # src/auth/Auth.h # src/auth/cephx/CephxProtocol.cc # src/auth/cephx/CephxProtocol.h # src/auth/none/AuthNoneProtocol.h # src/msg/Dispatcher.h # src/msg/async/AsyncConnection.cc - const_iterator - ::decode vs decode - AsyncConnection ctor arg noise - get_random_bytes(), not cct->random()
static void vnc_client_cache_addr(VncState *client) { Error *err = NULL; client->info = g_malloc0(sizeof(*client->info)); vnc_init_basic_info_from_remote_addr(client->sioc, qapi_VncClientInfo_base(client->info), &err); if (err) { qapi_free_VncClientInfo(client->info); client->info = NULL; error_free(err); } }
target: 0 | cwe: [ "CWE-772" ] | project: qemu | commit_id: d3b0db6dfea6b3a9ee0d96aceb796bdcafa84314 | hash: 116,215,966,448,957,900,000,000,000,000,000,000,000 | size: 14
vnc: Set default kbd delay to 10ms The current VNC default keyboard delay is 1ms. With that we're constantly typing faster than the guest receives keyboard events from an XHCI attached USB HID device. The default keyboard delay time in the input layer however is 10ms. I don't know how that number came to be, but empirical tests on some OpenQA driven ARM systems show that 10ms really is a reasonable default number for the delay. This patch moves the VNC delay also to 10ms. That way our default is much safer (good!) and also consistent with the input layer default (also good!). Signed-off-by: Alexander Graf <[email protected]> Reviewed-by: Daniel P. Berrange <[email protected]> Message-id: [email protected] Signed-off-by: Gerd Hoffmann <[email protected]>
const char *Field_iterator_view::name() { return ptr->name; }
target: 0 | cwe: [ "CWE-416" ] | project: server | commit_id: 4681b6f2d8c82b4ec5cf115e83698251963d80d5 | hash: 299,447,893,660,742,900,000,000,000,000,000,000,000 | size: 4
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob the bug was that in_vector array in Item_func_in was allocated in the statement arena, not in the table->expr_arena. revert part of the 5acd391e8b2d. Instead, change the arena correctly in fix_all_session_vcol_exprs(). Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force item tree changes to be rolled back (because they were allocated in the wrong arena and didn't persist. now they do)
static void rtmsg_ifinfo_event(int type, struct net_device *dev, unsigned int change, u32 event, gfp_t flags, int *new_nsid) { struct sk_buff *skb; if (dev->reg_state != NETREG_REGISTERED) return; skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid); if (skb) rtmsg_ifinfo_send(skb, dev, flags); }
target: 0 | cwe: [ "CWE-476" ] | project: linux | commit_id: f428fe4a04cc339166c8bbd489789760de3a0cee | hash: 281,970,684,474,524,070,000,000,000,000,000,000,000 | size: 13
rtnetlink: give a user socket to get_target_net() This function is used from two places: rtnl_dump_ifinfo and rtnl_getlink. In rtnl_getlink(), we give a request skb into get_target_net(), but in rtnl_dump_ifinfo, we give a response skb into get_target_net(). The problem here is that NETLINK_CB() isn't initialized for the response skb. In both cases we can get a user socket and give it instead of skb into get_target_net(). This bug was found by syzkaller with this call-trace: kasan: GPF could be caused by NULL-ptr deref or user memory access general protection fault: 0000 [#1] SMP KASAN Modules linked in: CPU: 1 PID: 3149 Comm: syzkaller140561 Not tainted 4.15.0-rc4-mm1+ #47 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 RIP: 0010:__netlink_ns_capable+0x8b/0x120 net/netlink/af_netlink.c:868 RSP: 0018:ffff8801c880f348 EFLAGS: 00010206 RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff8443f900 RDX: 000000000000007b RSI: ffffffff86510f40 RDI: 00000000000003d8 RBP: ffff8801c880f360 R08: 0000000000000000 R09: 1ffff10039101e4f R10: 0000000000000000 R11: 0000000000000001 R12: ffffffff86510f40 R13: 000000000000000c R14: 0000000000000004 R15: 0000000000000011 FS: 0000000001a1a880(0000) GS:ffff8801db300000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020151000 CR3: 00000001c9511005 CR4: 00000000001606e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: netlink_ns_capable+0x26/0x30 net/netlink/af_netlink.c:886 get_target_net+0x9d/0x120 net/core/rtnetlink.c:1765 rtnl_dump_ifinfo+0x2e5/0xee0 net/core/rtnetlink.c:1806 netlink_dump+0x48c/0xce0 net/netlink/af_netlink.c:2222 __netlink_dump_start+0x4f0/0x6d0 net/netlink/af_netlink.c:2319 netlink_dump_start include/linux/netlink.h:214 [inline] rtnetlink_rcv_msg+0x7f0/0xb10 net/core/rtnetlink.c:4485 netlink_rcv_skb+0x21e/0x460 net/netlink/af_netlink.c:2441 rtnetlink_rcv+0x1c/0x20 net/core/rtnetlink.c:4540 netlink_unicast_kernel net/netlink/af_netlink.c:1308 [inline] netlink_unicast+0x4be/0x6a0 net/netlink/af_netlink.c:1334 netlink_sendmsg+0xa4a/0xe60 net/netlink/af_netlink.c:1897 Cc: Jiri Benc <[email protected]> Fixes: 79e1ad148c84 ("rtnetlink: use netnsid to query interface") Signed-off-by: Andrei Vagin <[email protected]> Signed-off-by: David S. Miller <[email protected]>
SYSCALL_DEFINE4(rt_sigprocmask, int, how, sigset_t __user *, nset, sigset_t __user *, oset, size_t, sigsetsize) { sigset_t old_set, new_set; int error; /* XXX: Don't preclude handling different sized sigset_t's. */ if (sigsetsize != sizeof(sigset_t)) return -EINVAL; old_set = current->blocked; if (nset) { if (copy_from_user(&new_set, nset, sizeof(sigset_t))) return -EFAULT; sigdelsetmask(&new_set, sigmask(SIGKILL)|sigmask(SIGSTOP)); error = sigprocmask(how, &new_set, NULL); if (error) return error; } if (oset) { if (copy_to_user(oset, &old_set, sizeof(sigset_t))) return -EFAULT; } return 0; }
target: 0 | cwe: [ "CWE-400" ] | project: linux-stable-rt | commit_id: bcf6b1d78c0bde228929c388978ed3af9a623463 | hash: 149,095,670,047,596,040,000,000,000,000,000,000,000 | size: 29
signal/x86: Delay calling signals in atomic On x86_64 we must disable preemption before we enable interrupts for stack faults, int3 and debugging, because the current task is using a per CPU debug stack defined by the IST. If we schedule out, another task can come in and use the same stack and cause the stack to be corrupted and crash the kernel on return. When CONFIG_PREEMPT_RT_FULL is enabled, spin_locks become mutexes, and one of these is the spin lock used in signal handling. Some of the debug code (int3) causes do_trap() to send a signal. This function calls a spin lock that has been converted to a mutex and has the possibility to sleep. If this happens, the above issues with the corrupted stack is possible. Instead of calling the signal right away, for PREEMPT_RT and x86_64, the signal information is stored on the stacks task_struct and TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume code will send the signal when preemption is enabled. [ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT_FULL to ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ] Cc: [email protected] Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Steven Rostedt <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
ldns_str2rdf_selector(ldns_rdf **rd, const char *str) { return ldns_str2rdf_mnemonic4int8(ldns_tlsa_selectors, rd, str); }
target: 0 | cwe: [] | project: ldns | commit_id: 3bdeed02505c9bbacb3b64a97ddcb1de967153b7 | hash: 292,534,761,658,643,200,000,000,000,000,000,000,000 | size: 4
bugfix #1257: Free after reallocing to 0 size Thanks Stephan Zeisberg
CompilationInfo* info() { return info_; }
target: 0 | cwe: [] | project: node | commit_id: fd80a31e0697d6317ce8c2d289575399f4e06d21 | hash: 4,664,823,125,998,047,000,000,000,000,000,000,000 | size: 1
deps: backport 5f836c from v8 upstream Original commit message: Fix Hydrogen bounds check elimination When combining bounds checks, they must all be moved before the first load/store that they are guarding. BUG=chromium:344186 LOG=y [email protected] Review URL: https://codereview.chromium.org/172093002 git-svn-id: https://v8.googlecode.com/svn/branches/bleeding_edge@19475 ce2b1a6d-e550-0410-aec6-3dcde31c8c00 fix #8070
get_filter_list( Operation *op, BerElement *ber, Filter **f, const char **text ) { Filter **new; int err; ber_tag_t tag; ber_len_t len; char *last; Debug( LDAP_DEBUG_FILTER, "begin get_filter_list\n", 0, 0, 0 ); new = f; for ( tag = ber_first_element( ber, &len, &last ); tag != LBER_DEFAULT; tag = ber_next_element( ber, &len, last ) ) { err = get_filter( op, ber, new, text ); if ( err != LDAP_SUCCESS ) return( err ); new = &(*new)->f_next; } *new = NULL; Debug( LDAP_DEBUG_FILTER, "end get_filter_list\n", 0, 0, 0 ); return( LDAP_SUCCESS ); }
target: 1 | cwe: [ "CWE-674" ] | project: openldap | commit_id: 98464c11df8247d6a11b52e294ba5dd4f0380440 | hash: 225,019,894,143,010,960,000,000,000,000,000,000,000 | size: 26
ITS#9202 limit depth of nested filters Using a hardcoded limit for now; no reasonable apps should ever run into it.
TYPED_TEST(MultiProtocolTest, FrozenStructA) { FrozenStructA oldObject = makeFrozenStructALike<FrozenStructA>(); tablebased::FrozenStructA newObject = makeFrozenStructALike<tablebased::FrozenStructA>(); EXPECT_COMPATIBLE_PROTOCOL(oldObject, newObject, TypeParam); }
target: 0 | cwe: [ "CWE-763" ] | project: fbthrift | commit_id: bfda1efa547dce11a38592820916db01b05b9339 | hash: 180,983,744,834,842,840,000,000,000,000,000,000,000 | size: 6
Fix handling of invalid union data in table-based serializer Summary: Fix handling of invalid union data in the table-based serializer. Previously if the input contained duplicate union data, previous active member of the union was overwritten without calling the destructor of the old object, potentially causing a memory leak. In addition to that, if the second piece of data was incomplete the wrong destructor would be called during stack unwinding causing a segfault, data corruption or other undesirable effects. Fix the issue by clearing the union if there is an active member. Also fix the type of the data member that holds the active field id (it's `int`, not `FieldID`). Reviewed By: yfeldblum Differential Revision: D26440248 fbshipit-source-id: fae9ab96566cf07e14dabe9663b2beb680a01bb4
static int gfar_resume(struct device *dev) { struct gfar_private *priv = dev_get_drvdata(dev); struct net_device *ndev = priv->ndev; struct gfar __iomem *regs = priv->gfargrp[0].regs; u32 tempval; u16 wol = priv->wol_opts; if (!netif_running(ndev)) return 0; if (wol & GFAR_WOL_MAGIC) { /* Disable Magic Packet mode */ tempval = gfar_read(&regs->maccfg2); tempval &= ~MACCFG2_MPEN; gfar_write(&regs->maccfg2, tempval); } else if (wol & GFAR_WOL_FILER_UCAST) { /* need to stop rx only, tx is already down */ gfar_halt(priv); gfar_filer_restore_table(priv); } else { phy_start(ndev->phydev); } gfar_start(priv); netif_device_attach(ndev); enable_napi(priv); return 0; }
0
[]
linux
d8861bab48b6c1fc3cdbcab8ff9d1eaea43afe7f
7,870,095,431,586,977,000,000,000,000,000,000,000
33
gianfar: fix jumbo packets+napi+rx overrun crash When using jumbo packets and overrunning rx queue with napi enabled, the following sequence is observed in gfar_add_rx_frag: | lstatus | | skb | t | lstatus, size, flags | first | len, data_len, *ptr | ---+--------------------------------------+-------+-----------------------+ 13 | 18002348, 9032, INTERRUPT LAST | 0 | 9600, 8000, f554c12e | 12 | 10000640, 1600, INTERRUPT | 0 | 8000, 6400, f554c12e | 11 | 10000640, 1600, INTERRUPT | 0 | 6400, 4800, f554c12e | 10 | 10000640, 1600, INTERRUPT | 0 | 4800, 3200, f554c12e | 09 | 10000640, 1600, INTERRUPT | 0 | 3200, 1600, f554c12e | 08 | 14000640, 1600, INTERRUPT FIRST | 0 | 1600, 0, f554c12e | 07 | 14000640, 1600, INTERRUPT FIRST | 1 | 0, 0, f554c12e | 06 | 1c000080, 128, INTERRUPT LAST FIRST | 1 | 0, 0, abf3bd6e | 05 | 18002348, 9032, INTERRUPT LAST | 0 | 8000, 6400, c5a57780 | 04 | 10000640, 1600, INTERRUPT | 0 | 6400, 4800, c5a57780 | 03 | 10000640, 1600, INTERRUPT | 0 | 4800, 3200, c5a57780 | 02 | 10000640, 1600, INTERRUPT | 0 | 3200, 1600, c5a57780 | 01 | 10000640, 1600, INTERRUPT | 0 | 1600, 0, c5a57780 | 00 | 14000640, 1600, INTERRUPT FIRST | 1 | 0, 0, c5a57780 | So at t=7 a new packets is started but not finished, probably due to rx overrun - but rx overrun is not indicated in the flags. Instead a new packets starts at t=8. This results in skb->len to exceed size for the LAST fragment at t=13 and thus a negative fragment size added to the skb. This then crashes: kernel BUG at include/linux/skbuff.h:2277! Oops: Exception in kernel mode, sig: 5 [#1] ... NIP [c04689f4] skb_pull+0x2c/0x48 LR [c03f62ac] gfar_clean_rx_ring+0x2e4/0x844 Call Trace: [ec4bfd38] [c06a84c4] _raw_spin_unlock_irqrestore+0x60/0x7c (unreliable) [ec4bfda8] [c03f6a44] gfar_poll_rx_sq+0x48/0xe4 [ec4bfdc8] [c048d504] __napi_poll+0x54/0x26c [ec4bfdf8] [c048d908] net_rx_action+0x138/0x2c0 [ec4bfe68] [c06a8f34] __do_softirq+0x3a4/0x4fc [ec4bfed8] [c0040150] run_ksoftirqd+0x58/0x70 [ec4bfee8] [c0066ecc] smpboot_thread_fn+0x184/0x1cc [ec4bff08] [c0062718] kthread+0x140/0x144 [ec4bff38] [c0012350] ret_from_kernel_thread+0x14/0x1c This patch fixes this by checking for computed LAST fragment size, so a negative sized fragment is never added. In order to prevent the newer rx frame from getting corrupted, the FIRST flag is checked to discard the incomplete older frame. Signed-off-by: Michael Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
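The gianfar message above describes two defensive checks: never append a fragment whose computed size would be negative, and drop an in-progress frame if a new FIRST descriptor arrives before the old frame finished. Here is a hedged sketch of those checks on a made-up descriptor layout; none of the structures or names below belong to the gianfar driver.

/* Sketch: validate fragment sizes and FIRST markers during frame reassembly. */
#include <stdbool.h>
#include <stdio.h>

struct rx_desc  { unsigned int total_len; bool first; bool last; };
struct rx_frame { unsigned int len; bool in_progress; };

/* Returns the number of bytes to append for this descriptor, or -1 if the
 * frame being assembled must be discarded. */
static int frag_bytes(const struct rx_frame *f, const struct rx_desc *d,
                      unsigned int buf_size)
{
    if (d->first && f->in_progress)
        return -1;                           /* overrun: old frame never finished */
    if (d->last) {
        if (d->total_len < f->len)
            return -1;                       /* would be a negative-sized fragment */
        return (int)(d->total_len - f->len);
    }
    return (int)buf_size;                    /* middle fragment: whole buffer */
}

int main(void)
{
    struct rx_frame frame = { .len = 9600, .in_progress = true };
    struct rx_desc  last  = { .total_len = 9032, .first = false, .last = true };
    printf("fragment size: %d\n", frag_bytes(&frame, &last, 1600));
    return 0;
}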
static void kill_ctx(struct kioctx *ctx) { int (*cancel)(struct kiocb *, struct io_event *); struct task_struct *tsk = current; DECLARE_WAITQUEUE(wait, tsk); struct io_event res; spin_lock_irq(&ctx->ctx_lock); ctx->dead = 1; while (!list_empty(&ctx->active_reqs)) { struct list_head *pos = ctx->active_reqs.next; struct kiocb *iocb = list_kiocb(pos); list_del_init(&iocb->ki_list); cancel = iocb->ki_cancel; kiocbSetCancelled(iocb); if (cancel) { iocb->ki_users++; spin_unlock_irq(&ctx->ctx_lock); cancel(iocb, &res); spin_lock_irq(&ctx->ctx_lock); } } if (!ctx->reqs_active) goto out; add_wait_queue(&ctx->wait, &wait); set_task_state(tsk, TASK_UNINTERRUPTIBLE); while (ctx->reqs_active) { spin_unlock_irq(&ctx->ctx_lock); io_schedule(); set_task_state(tsk, TASK_UNINTERRUPTIBLE); spin_lock_irq(&ctx->ctx_lock); } __set_task_state(tsk, TASK_RUNNING); remove_wait_queue(&ctx->wait, &wait); out: spin_unlock_irq(&ctx->ctx_lock); }
0
[ "CWE-200" ]
linux
a70b52ec1aaeaf60f4739edb1b422827cb6f3893
255,855,928,893,026,300,000,000,000,000,000,000,000
40
vfs: make AIO use the proper rw_verify_area() area helpers We had for some reason overlooked the AIO interface, and it didn't use the proper rw_verify_area() helper function that checks (for example) mandatory locking on the file, and that the size of the access doesn't cause us to overflow the provided offset limits etc. Instead, AIO did just the security_file_permission() thing (that rw_verify_area() also does) directly. This fixes it to do all the proper helper functions, which not only means that now mandatory file locking works with AIO too, we can actually remove lines of code. Reported-by: Manish Honap <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
static int __perf_event_enable(void *info) { struct perf_event *event = info; struct perf_event_context *ctx = event->ctx; struct perf_event *leader = event->group_leader; struct perf_cpu_context *cpuctx = __get_cpu_context(ctx); int err; if (WARN_ON_ONCE(!ctx->is_active)) return -EINVAL; raw_spin_lock(&ctx->lock); update_context_time(ctx); if (event->state >= PERF_EVENT_STATE_INACTIVE) goto unlock; /* * set current task's cgroup time reference point */ perf_cgroup_set_timestamp(current, ctx); __perf_event_mark_enabled(event); if (!event_filter_match(event)) { if (is_cgroup_event(event)) perf_cgroup_defer_enabled(event); goto unlock; } /* * If the event is in a group and isn't the group leader, * then don't put it on unless the group is on. */ if (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE) goto unlock; if (!group_can_go_on(event, cpuctx, 1)) { err = -EEXIST; } else { if (event == leader) err = group_sched_in(event, cpuctx, ctx); else err = event_sched_in(event, cpuctx, ctx); } if (err) { /* * If this event can't go on and it's part of a * group, then the whole group has to come off. */ if (leader != event) group_sched_out(leader, cpuctx, ctx); if (leader->attr.pinned) { update_group_times(leader); leader->state = PERF_EVENT_STATE_ERROR; } } unlock: raw_spin_unlock(&ctx->lock); return 0; }
0
[ "CWE-703", "CWE-189" ]
linux
8176cced706b5e5d15887584150764894e94e02f
87,705,890,342,013,020,000,000,000,000,000,000,000
64
perf: Treat attr.config as u64 in perf_swevent_init() Trinity discovered that we fail to check all 64 bits of attr.config passed by user space, resulting to out-of-bounds access of the perf_swevent_enabled array in sw_perf_event_destroy(). Introduced in commit b0a873ebb ("perf: Register PMU implementations"). Signed-off-by: Tommi Rantala <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: Paul Mackerras <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
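The perf message above is about a classic bug shape: a user-controlled 64-bit id is truncated before the range check, so the check passes even though the full value is far out of bounds. The sketch below is a hypothetical userspace illustration of that shape; the array and function names are invented and the "buggy" variant indexes with the truncated copy only to keep the demo memory-safe.

/* Sketch: range-check the untruncated 64-bit value before indexing. */
#include <stdint.h>
#include <stdio.h>

#define NUM_EVENTS 16
static int event_enabled[NUM_EVENTS];

/* Buggy shape: the check runs on a truncated copy, so (1ULL << 32) + 3 slips
 * through; the real code then used the full value as an index, out of bounds. */
static int enable_event_buggy(uint64_t config)
{
    uint32_t id = (uint32_t)config;          /* high 32 bits silently dropped */
    if (id >= NUM_EVENTS)
        return -1;
    event_enabled[id]++;
    return 0;
}

/* Fixed shape: compare the untruncated 64-bit value against the bound. */
static int enable_event_fixed(uint64_t config)
{
    if (config >= NUM_EVENTS)
        return -1;
    event_enabled[config]++;
    return 0;
}

int main(void)
{
    uint64_t evil = (1ULL << 32) + 3;
    printf("buggy accepts out-of-range id: %s\n",
           enable_event_buggy(evil) == 0 ? "yes" : "no");
    printf("fixed accepts out-of-range id: %s\n",
           enable_event_fixed(evil) == 0 ? "yes" : "no");
    return 0;
}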
u32 gf_isom_get_hevc_lhvc_type(GF_ISOFile *the_file, u32 trackNumber, u32 DescriptionIndex) { u32 type; GF_TrackBox *trak; GF_MPEGVisualSampleEntryBox *entry; trak = gf_isom_get_track_from_file(the_file, trackNumber); if (!trak || !trak->Media || !DescriptionIndex) return GF_ISOM_HEVCTYPE_NONE; if (trak->Media->handler->handlerType != GF_ISOM_MEDIA_VISUAL) return GF_ISOM_HEVCTYPE_NONE; entry = (GF_MPEGVisualSampleEntryBox*)gf_list_get(trak->Media->information->sampleTable->SampleDescription->other_boxes, DescriptionIndex-1); if (!entry) return GF_ISOM_AVCTYPE_NONE; type = entry->type; if (type == GF_ISOM_BOX_TYPE_ENCV) { GF_ProtectionSchemeInfoBox *sinf = (GF_ProtectionSchemeInfoBox *) gf_list_get(entry->protections, 0); if (sinf && sinf->original_format) type = sinf->original_format->data_format; } else if (type == GF_ISOM_BOX_TYPE_RESV) { if (entry->rinf && entry->rinf->original_format) type = entry->rinf->original_format->data_format; } switch (type) { case GF_ISOM_BOX_TYPE_HVC1: case GF_ISOM_BOX_TYPE_HEV1: case GF_ISOM_BOX_TYPE_HVC2: case GF_ISOM_BOX_TYPE_HEV2: case GF_ISOM_BOX_TYPE_LHV1: case GF_ISOM_BOX_TYPE_LHE1: case GF_ISOM_BOX_TYPE_HVT1: break; default: return GF_ISOM_HEVCTYPE_NONE; } if (entry->hevc_config && !entry->lhvc_config) return GF_ISOM_HEVCTYPE_HEVC_ONLY; if (entry->hevc_config && entry->lhvc_config) return GF_ISOM_HEVCTYPE_HEVC_LHVC; if (!entry->hevc_config && entry->lhvc_config) return GF_ISOM_HEVCTYPE_LHVC_ONLY; return GF_ISOM_HEVCTYPE_NONE; }
0
[ "CWE-119", "CWE-787" ]
gpac
90dc7f853d31b0a4e9441cba97feccf36d8b69a4
82,012,705,491,791,300,000,000,000,000,000,000,000
37
fix some exploitable overflows (#994, #997)
TEST_P(Security, BuiltinAuthenticationAndAccessAndCryptoPlugin_PermissionsDisableDiscoveryDisableAccessEncrypt_validation_ok_disable_discovery_disable_access_encrypt) // *INDENT-ON* { PubSubReader<HelloWorldType> reader(TEST_TOPIC_NAME); PubSubWriter<HelloWorldType> writer(TEST_TOPIC_NAME); std::string governance_file("governance_disable_discovery_disable_access_encrypt.smime"); BuiltinAuthenticationAndAccessAndCryptoPlugin_Permissions_validation_ok_common(reader, writer, governance_file); }
0
[ "CWE-284" ]
Fast-DDS
d2aeab37eb4fad4376b68ea4dfbbf285a2926384
93,182,213,097,977,850,000,000,000,000,000,000,000
9
check remote permissions (#1387) * Refs 5346. Blackbox test Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. one-way string compare Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Do not add partition separator on last partition Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Access control unit testing It only covers Partition and Topic permissions Signed-off-by: Iker Luengo <[email protected]> * Refs #3680. Fix partition check on Permissions plugin. Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix tests on mac Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix windows tests Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Avoid memory leak on test Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Proxy data mocks should not return temporary objects Signed-off-by: Iker Luengo <[email protected]> * refs 3680. uncrustify Signed-off-by: Iker Luengo <[email protected]> Co-authored-by: Miguel Company <[email protected]>
static RIOMapRef *_mapref_from_map(RIOMap *map) { RIOMapRef *mapref = R_NEW (RIOMapRef); if (mapref) { mapref->id = map->id; mapref->ts = map->ts; } return mapref; }
0
[ "CWE-416" ]
radare2
b5cb90b28ec71fda3504da04e3cc94a362807f5e
256,428,654,007,499,930,000,000,000,000,000,000,000
8
Prefer memleak over usaf in io.bank's rbtree bug ##crash * That's a workaround, proper fix will come later * Reproducer: bins/fuzzed/iobank-crash * Reported by Akyne Choi via huntr.dev
static int send_abort(struct iwch_ep *ep, struct sk_buff *skb, gfp_t gfp) { struct cpl_abort_req *req; PDBG("%s ep %p\n", __func__, ep); skb = get_skb(skb, sizeof(*req), gfp); if (!skb) { printk(KERN_ERR MOD "%s - failed to alloc skb.\n", __func__); return -ENOMEM; } skb->priority = CPL_PRIORITY_DATA; set_arp_failure_handler(skb, abort_arp_failure); req = (struct cpl_abort_req *) skb_put(skb, sizeof(*req)); memset(req, 0, sizeof(*req)); req->wr.wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_HOST_ABORT_CON_REQ)); req->wr.wr_lo = htonl(V_WR_TID(ep->hwtid)); OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_ABORT_REQ, ep->hwtid)); req->cmd = CPL_ABORT_SEND_RST; return iwch_l2t_send(ep->com.tdev, skb, ep->l2t); }
0
[ "CWE-703" ]
linux
67f1aee6f45059fd6b0f5b0ecb2c97ad0451f6b3
239,444,724,767,152,260,000,000,000,000,000,000,000
21
iw_cxgb3: Fix incorrectly returning error on success The cxgb3_*_send() functions return NET_XMIT_ values, which are positive integers values. So don't treat positive return values as an error. Signed-off-by: Steve Wise <[email protected]> Signed-off-by: Hariprasad Shenai <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
static int restore_vsx(struct task_struct *tsk) { if (cpu_has_feature(CPU_FTR_VSX)) { tsk->thread.used_vsr = 1; return 1; } return 0; }
0
[]
linux
5d176f751ee3c6eededd984ad409bff201f436a7
326,942,953,660,167,540,000,000,000,000,000,000,000
9
powerpc: tm: Enable transactional memory (TM) lazily for userspace Currently the MSR TM bit is always set if the hardware is TM capable. This adds extra overhead as it means the TM SPRS (TFHAR, TEXASR and TFAIR) must be swapped for each process regardless of if they use TM. For processes that don't use TM the TM MSR bit can be turned off allowing the kernel to avoid the expensive swap of the TM registers. A TM unavailable exception will occur if a thread does use TM and the kernel will enable MSR_TM and leave it so for some time afterwards. Signed-off-by: Cyril Bur <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
void rds6_inc_info_copy(struct rds_incoming *inc, struct rds_info_iterator *iter, struct in6_addr *saddr, struct in6_addr *daddr, int flip) { struct rds6_info_message minfo6; minfo6.seq = be64_to_cpu(inc->i_hdr.h_sequence); minfo6.len = be32_to_cpu(inc->i_hdr.h_len); if (flip) { minfo6.laddr = *daddr; minfo6.faddr = *saddr; minfo6.lport = inc->i_hdr.h_dport; minfo6.fport = inc->i_hdr.h_sport; } else { minfo6.laddr = *saddr; minfo6.faddr = *daddr; minfo6.lport = inc->i_hdr.h_sport; minfo6.fport = inc->i_hdr.h_dport; } rds_info_copy(iter, &minfo6, sizeof(minfo6)); }
1
[ "CWE-200", "CWE-909" ]
linux
7d0a06586b2686ba80c4a2da5f91cb10ffbea736
159,953,519,244,717,050,000,000,000,000,000,000,000
24
net/rds: Fix info leak in rds6_inc_info_copy() The rds6_inc_info_copy() function has a couple struct members which are leaking stack information. The ->tos field should hold actual information and the ->flags field needs to be zeroed out. Fixes: 3eb450367d08 ("rds: add type of service(tos) infrastructure") Fixes: b7ff8b1036f0 ("rds: Extend RDS API for IPv6 support") Reported-by: 黄ID蝴蝶 <[email protected]> Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: Ka-Cheong Poon <[email protected]> Acked-by: Santosh Shilimkar <[email protected]> Signed-off-by: David S. Miller <[email protected]>
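The rds6 message above is an information-leak fix: fields (and padding) of a struct copied out to the other side must be zeroed, or stale stack bytes escape. Below is a minimal userspace sketch of the memset-before-copy pattern; the struct layout and names are hypothetical, not the RDS code.

/* Sketch: zero a struct (fields and padding) before handing it out. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct conn_info {
    uint64_t seq;
    uint32_t len;
    uint8_t  tos;        /* must be set (or zeroed) explicitly */
    uint8_t  flags;      /* likewise */
    /* implicit padding here would also leak if left untouched */
};

static void fill_info_leaky(struct conn_info *out)
{
    out->seq = 1;        /* tos, flags and padding keep whatever was on the stack */
    out->len = 64;
}

static void fill_info_fixed(struct conn_info *out)
{
    memset(out, 0, sizeof(*out));   /* clears fields *and* padding */
    out->seq = 1;
    out->len = 64;
}

int main(void)
{
    struct conn_info info;
    fill_info_fixed(&info);
    printf("seq=%llu len=%u tos=%u flags=%u\n",
           (unsigned long long)info.seq, info.len, info.tos, info.flags);
    (void)fill_info_leaky;          /* shown only for contrast */
    return 0;
}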
v9fs_mmap_file_write_iter(struct kiocb *iocb, struct iov_iter *from) { /* * TODO: invalidate mmaps on filp's inode between * offset and offset+count */ return v9fs_file_write_iter(iocb, from); }
0
[ "CWE-835" ]
linux
5e3cc1ee1405a7eb3487ed24f786dec01b4cbe1f
227,796,664,909,758,100,000,000,000,000,000,000,000
8
9p: use inode->i_lock to protect i_size_write() under 32-bit Use inode->i_lock to protect i_size_write(), else i_size_read() in generic_fillattr() may loop infinitely in read_seqcount_begin() when multiple processes invoke v9fs_vfs_getattr() or v9fs_vfs_getattr_dotl() simultaneously under 32-bit SMP environment, and a soft lockup will be triggered as show below: watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [stat:2217] Modules linked in: CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system PC is at generic_fillattr+0x104/0x108 LR is at 0xec497f00 pc : [<802b8898>] lr : [<ec497f00>] psr: 200c0013 sp : ec497e20 ip : ed608030 fp : ec497e3c r10: 00000000 r9 : ec497f00 r8 : ed608030 r7 : ec497ebc r6 : ec497f00 r5 : ee5c1550 r4 : ee005780 r3 : 0000052d r2 : 00000000 r1 : ec497f00 r0 : ed608030 Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: ac48006a DAC: 00000051 CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system Backtrace: [<8010d974>] (dump_backtrace) from [<8010dc88>] (show_stack+0x20/0x24) [<8010dc68>] (show_stack) from [<80a1d194>] (dump_stack+0xb0/0xdc) [<80a1d0e4>] (dump_stack) from [<80109f34>] (show_regs+0x1c/0x20) [<80109f18>] (show_regs) from [<801d0a80>] (watchdog_timer_fn+0x280/0x2f8) [<801d0800>] (watchdog_timer_fn) from [<80198658>] (__hrtimer_run_queues+0x18c/0x380) [<801984cc>] (__hrtimer_run_queues) from [<80198e60>] (hrtimer_run_queues+0xb8/0xf0) [<80198da8>] (hrtimer_run_queues) from [<801973e8>] (run_local_timers+0x28/0x64) [<801973c0>] (run_local_timers) from [<80197460>] (update_process_times+0x3c/0x6c) [<80197424>] (update_process_times) from [<801ab2b8>] (tick_nohz_handler+0xe0/0x1bc) [<801ab1d8>] (tick_nohz_handler) from [<80843050>] (arch_timer_handler_virt+0x38/0x48) [<80843018>] (arch_timer_handler_virt) from [<80180a64>] (handle_percpu_devid_irq+0x8c/0x240) [<801809d8>] (handle_percpu_devid_irq) from [<8017ac20>] (generic_handle_irq+0x34/0x44) [<8017abec>] (generic_handle_irq) from [<8017b344>] (__handle_domain_irq+0x6c/0xc4) [<8017b2d8>] (__handle_domain_irq) from [<801022e0>] (gic_handle_irq+0x4c/0x88) [<80102294>] (gic_handle_irq) from [<80101a30>] (__irq_svc+0x70/0x98) [<802b8794>] (generic_fillattr) from [<8056b284>] (v9fs_vfs_getattr_dotl+0x74/0xa4) [<8056b210>] (v9fs_vfs_getattr_dotl) from [<802b8904>] (vfs_getattr_nosec+0x68/0x7c) [<802b889c>] (vfs_getattr_nosec) from [<802b895c>] (vfs_getattr+0x44/0x48) [<802b8918>] (vfs_getattr) from [<802b8a74>] (vfs_statx+0x9c/0xec) [<802b89d8>] (vfs_statx) from [<802b9428>] (sys_lstat64+0x48/0x78) [<802b93e0>] (sys_lstat64) from [<80101000>] (ret_fast_syscall+0x0/0x28) [[email protected]: updated comment to not refer to a function in another subsystem] Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 7549ae3e81cc ("9p: Use the i_size_[read, write]() macros instead of using inode->i_size directly.") Reported-by: Xing Gaopeng <[email protected]> Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Dominique Martinet <[email protected]>
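The 9p message above concerns the seqcount behind i_size_write()/i_size_read() on 32-bit SMP: the 64-bit size is published under a sequence counter, and concurrent writers must be serialized or readers can spin forever. The following is a simplified userspace sketch of that pattern using C11 atomics, with a pthread mutex standing in for inode->i_lock; it is not the kernel's seqlock implementation, and all names are invented.

/* Sketch: seqcount-published 64-bit value with serialized writers. */
#include <stdatomic.h>
#include <stdint.h>
#include <pthread.h>
#include <stdio.h>

static atomic_uint seq;                      /* even = stable, odd = writer active */
static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic uint32_t size_lo, size_hi;    /* 64-bit size stored as two halves */

static void size_write(uint64_t v)
{
    pthread_mutex_lock(&write_lock);         /* serialize writers, as the fix does */
    atomic_fetch_add(&seq, 1);               /* seq becomes odd */
    atomic_store(&size_lo, (uint32_t)v);
    atomic_store(&size_hi, (uint32_t)(v >> 32));
    atomic_fetch_add(&seq, 1);               /* seq becomes even again */
    pthread_mutex_unlock(&write_lock);
}

static uint64_t size_read(void)
{
    unsigned s1, s2;
    uint64_t v;
    do {
        s1 = atomic_load(&seq);
        v  = ((uint64_t)atomic_load(&size_hi) << 32) | atomic_load(&size_lo);
        s2 = atomic_load(&seq);
    } while ((s1 & 1) || s1 != s2);          /* retry while a write was in flight */
    return v;
}

int main(void)
{
    size_write(0x100000000ULL + 42);
    printf("size = %llu\n", (unsigned long long)size_read());
    return 0;
}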
eafnosupport_fib6_table_lookup(struct net *net, struct fib6_table *table, int oif, struct flowi6 *fl6, struct fib6_result *res, int flags) { return -EAFNOSUPPORT; }
0
[]
net
6c8991f41546c3c472503dff1ea9daaddf9331c2
908,234,200,232,273,700,000,000,000,000,000,000
6
net: ipv6_stub: use ip6_dst_lookup_flow instead of ip6_dst_lookup ipv6_stub uses the ip6_dst_lookup function to allow other modules to perform IPv6 lookups. However, this function skips the XFRM layer entirely. All users of ipv6_stub->ip6_dst_lookup use ip_route_output_flow (via the ip_route_output_key and ip_route_output helpers) for their IPv4 lookups, which calls xfrm_lookup_route(). This patch fixes this inconsistent behavior by switching the stub to ip6_dst_lookup_flow, which also calls xfrm_lookup_route(). This requires some changes in all the callers, as these two functions take different arguments and have different return types. Fixes: 5f81bd2e5d80 ("ipv6: export a stub for IPv6 symbols used by vxlan") Reported-by: Xiumei Mu <[email protected]> Signed-off-by: Sabrina Dubroca <[email protected]> Signed-off-by: David S. Miller <[email protected]>
gxps_archive_open (GXPSArchive *archive, const gchar *path) { GXPSArchiveInputStream *stream; gchar *first_piece_path = NULL; if (path == NULL) return NULL; if (path[0] == '/') path++; if (!g_hash_table_contains (archive->entries, path)) { first_piece_path = g_build_path ("/", path, "[0].piece", NULL); if (!g_hash_table_contains (archive->entries, first_piece_path)) { g_free (first_piece_path); return NULL; } path = first_piece_path; } stream = (GXPSArchiveInputStream *)g_object_new (GXPS_TYPE_ARCHIVE_INPUT_STREAM, NULL); stream->zip = gxps_zip_archive_create (archive->filename); stream->is_interleaved = first_piece_path != NULL; while (gxps_zip_archive_iter_next (stream->zip, &stream->entry)) { if (g_ascii_strcasecmp (path, archive_entry_pathname (stream->entry)) == 0) break; archive_read_data_skip (stream->zip->archive); } g_free (first_piece_path); return G_INPUT_STREAM (stream); }
0
[ "CWE-125" ]
libgxps
b458226e162fe1ffe7acb4230c114a52ada5131b
337,695,972,202,389,420,000,000,000,000,000,000,000
36
gxps-archive: Ensure gxps_archive_read_entry() fills the GError in case of failure And fix the callers to not overwrite the GError.
static int mg_b64idx(int c) { if (c < 26) { return c + 'A'; } else if (c < 52) { return c - 26 + 'a'; } else if (c < 62) { return c - 52 + '0'; } else { return c == 62 ? '+' : '/'; } }
0
[ "CWE-552" ]
mongoose
c65c8fdaaa257e0487ab0aaae9e8f6b439335945
314,442,217,996,812,280,000,000,000,000,000,000,000
11
Protect against the directory traversal in mg_upload()
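The mongoose message above is a directory-traversal fix for uploads. A common shape of such a guard is to reject any client-supplied file name containing path separators or ".." before joining it to the upload directory; the sketch below shows that shape with invented names and is not the mongoose API.

/* Sketch: sanitize a client-supplied upload file name. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static bool upload_name_ok(const char *name)
{
    if (name == NULL || name[0] == '\0')
        return false;
    if (strchr(name, '/') || strchr(name, '\\'))
        return false;                        /* no directory components at all */
    if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
        return false;
    return true;
}

int main(void)
{
    const char *tests[] = { "report.txt", "../etc/passwd", "..", "a/b.txt" };
    for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
        printf("%-16s -> %s\n", tests[i],
               upload_name_ok(tests[i]) ? "accept" : "reject");
    return 0;
}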
static int ssd0323_load(QEMUFile *f, void *opaque, int version_id) { SSISlave *ss = SSI_SLAVE(opaque); ssd0323_state *s = (ssd0323_state *)opaque; int i; if (version_id != 1) return -EINVAL; s->cmd_len = qemu_get_be32(f); if (s->cmd_len < 0 || s->cmd_len > ARRAY_SIZE(s->cmd_data)) { return -EINVAL; } s->cmd = qemu_get_be32(f); for (i = 0; i < 8; i++) s->cmd_data[i] = qemu_get_be32(f); s->row = qemu_get_be32(f); if (s->row < 0 || s->row >= 80) { return -EINVAL; } s->row_start = qemu_get_be32(f); if (s->row_start < 0 || s->row_start >= 80) { return -EINVAL; } s->row_end = qemu_get_be32(f); if (s->row_end < 0 || s->row_end >= 80) { return -EINVAL; } s->col = qemu_get_be32(f); if (s->col < 0 || s->col >= 64) { return -EINVAL; } s->col_start = qemu_get_be32(f); if (s->col_start < 0 || s->col_start >= 64) { return -EINVAL; } s->col_end = qemu_get_be32(f); if (s->col_end < 0 || s->col_end >= 64) { return -EINVAL; } s->redraw = qemu_get_be32(f); s->remap = qemu_get_be32(f); s->mode = qemu_get_be32(f); if (s->mode != SSD0323_CMD && s->mode != SSD0323_DATA) { return -EINVAL; } qemu_get_buffer(f, s->framebuffer, sizeof(s->framebuffer)); ss->cs = qemu_get_be32(f); return 0; }
0
[ "CWE-119" ]
qemu
ead7a57df37d2187813a121308213f41591bd811
278,403,764,330,573,670,000,000,000,000,000,000,000
52
ssd0323: fix buffer overun on invalid state load CVE-2013-4538 s->cmd_len used as index in ssd0323_transfer() to store 32-bit field. Possible this field might then be supplied by guest to overwrite a return addr somewhere. Same for row/col fields, which are indicies into framebuffer array. To fix validate after load. Additionally, validate that the row/col_start/end are within bounds; otherwise the guest can provoke an overrun by either setting the _end field so large that the row++ increments just walk off the end of the array, or by setting the _start value to something bogus and then letting the "we hit end of row" logic reset row to row_start. For completeness, validate mode as well. Signed-off-by: Michael S. Tsirkin <[email protected]> Reviewed-by: Peter Maydell <[email protected]> Signed-off-by: Juan Quintela <[email protected]>
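The QEMU message above states the rule directly: state loaded from a snapshot is untrusted and every index must be validated against the array it will address before any of it is used. Here is a hedged sketch of a whole-struct validator; the structure, limits, and names are hypothetical, not the ssd0323 layout.

/* Sketch: validate every field of a deserialized device state before use. */
#include <stdint.h>
#include <stdio.h>

#define ROWS 80
#define COLS 64
#define MAX_CMD 8

struct dev_state {
    int32_t cmd_len;
    int32_t row, row_start, row_end;
    int32_t col, col_start, col_end;
};

static int dev_state_valid(const struct dev_state *s)
{
    if (s->cmd_len < 0 || s->cmd_len > MAX_CMD)
        return 0;
    if (s->row < 0 || s->row >= ROWS ||
        s->row_start < 0 || s->row_start >= ROWS ||
        s->row_end < 0 || s->row_end >= ROWS)
        return 0;
    if (s->col < 0 || s->col >= COLS ||
        s->col_start < 0 || s->col_start >= COLS ||
        s->col_end < 0 || s->col_end >= COLS)
        return 0;
    return 1;
}

int main(void)
{
    struct dev_state bad  = { 9, 200, 0, 79, 0, 0, 63 };  /* cmd_len and row out of range */
    struct dev_state good = { 3, 10, 0, 79, 5, 0, 63 };
    printf("bad:  %s\n", dev_state_valid(&bad)  ? "accept" : "reject");
    printf("good: %s\n", dev_state_valid(&good) ? "accept" : "reject");
    return 0;
}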
CUser* SafeGetUserFromParam(CWebSock& WebSock) { return CZNC::Get().FindUser(SafeGetUserNameParam(WebSock)); }
0
[ "CWE-703" ]
znc
2bd410ee5570cea127233f1133ea22f25174eb28
103,647,753,242,066,210,000,000,000,000,000,000,000
3
Fix NULL pointer dereference in webadmin. Triggerable by any non-admin, if webadmin is loaded. The only affected version is 1.0 Thanks to ChauffeR (Simone Esposito) for reporting this.
xfs_bdstrat_cb( struct xfs_buf *bp) { if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) { trace_xfs_bdstrat_shut(bp, _RET_IP_); /* * Metadata write that didn't get logged but * written delayed anyway. These aren't associated * with a transaction, and can be ignored. */ if (!bp->b_iodone && !XFS_BUF_ISREAD(bp)) return xfs_bioerror_relse(bp); else return xfs_bioerror(bp); } xfs_buf_iorequest(bp); return 0; }
0
[ "CWE-20", "CWE-703" ]
linux
eb178619f930fa2ba2348de332a1ff1c66a31424
213,026,902,947,818,000,000,000,000,000,000,000,000
19
xfs: fix _xfs_buf_find oops on blocks beyond the filesystem end When _xfs_buf_find is passed an out of range address, it will fail to find a relevant struct xfs_perag and oops with a null dereference. This can happen when trying to walk a filesystem with a metadata inode that has a partially corrupted extent map (i.e. the block number returned is corrupt, but is otherwise intact) and we try to read from the corrupted block address. In this case, just fail the lookup. If it is readahead being issued, it will simply not be done, but if it is real read that fails we will get an error being reported. Ideally this case should result in an EFSCORRUPTED error being reported, but we cannot return an error through xfs_buf_read() or xfs_buf_get() so this lookup failure may result in ENOMEM or EIO errors being reported instead. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Brian Foster <[email protected]> Reviewed-by: Ben Myers <[email protected]> Signed-off-by: Ben Myers <[email protected]>
void callbacks_update_scrollbar_limits (void){ gerbv_render_info_t tempRenderInfo = {0, 0, 0, 0, GERBV_RENDER_TYPE_CAIRO_HIGH_QUALITY, screenRenderInfo.displayWidth, screenRenderInfo.displayHeight}; GtkAdjustment *hAdjust = (GtkAdjustment *)screen.win.hAdjustment; GtkAdjustment *vAdjust = (GtkAdjustment *)screen.win.vAdjustment; gerbv_render_zoom_to_fit_display (mainProject, &tempRenderInfo); hAdjust->lower = tempRenderInfo.lowerLeftX; hAdjust->page_increment = hAdjust->page_size; hAdjust->step_increment = hAdjust->page_size / 10.0; vAdjust->lower = tempRenderInfo.lowerLeftY; vAdjust->page_increment = vAdjust->page_size; vAdjust->step_increment = vAdjust->page_size / 10.0; hAdjust->upper = tempRenderInfo.lowerLeftX + (tempRenderInfo.displayWidth / tempRenderInfo.scaleFactorX); hAdjust->page_size = screenRenderInfo.displayWidth / screenRenderInfo.scaleFactorX; vAdjust->upper = tempRenderInfo.lowerLeftY + (tempRenderInfo.displayHeight / tempRenderInfo.scaleFactorY); vAdjust->page_size = screenRenderInfo.displayHeight / screenRenderInfo.scaleFactorY; callbacks_update_scrollbar_positions (); }
0
[ "CWE-200" ]
gerbv
319a8af890e4d0a5c38e6d08f510da8eefc42537
230,871,040,478,583,100,000,000,000,000,000,000,000
21
Remove local alias to parameter array Normalizing access to `gerbv_simplified_amacro_t::parameter` as a step to fix CVE-2021-40402
_asn1_extract_der_octet (asn1_node node, const unsigned char *der, int der_len, unsigned flags) { int len2, len3; int counter, counter_end; int result; len2 = asn1_get_length_der (der, der_len, &len3); if (len2 < -1) return ASN1_DER_ERROR; counter = len3 + 1; DECR_LEN(der_len, len3); if (len2 == -1) counter_end = der_len - 2; else counter_end = der_len; while (counter < counter_end) { DECR_LEN(der_len, 1); len2 = asn1_get_length_der (der + counter, der_len, &len3); if (IS_ERR(len2, flags)) { warn(); return ASN1_DER_ERROR; } if (len2 >= 0) { DECR_LEN(der_len, len2+len3); _asn1_append_value (node, der + counter + len3, len2); } else { /* indefinite */ DECR_LEN(der_len, len3); result = _asn1_extract_der_octet (node, der + counter + len3, der_len, flags); if (result != ASN1_SUCCESS) return result; len2 = 0; } counter += len2 + len3 + 1; } return ASN1_SUCCESS; cleanup: return result; }
1
[ "CWE-399" ]
libtasn1
f435825c0f527a8e52e6ffbc3ad0bc60531d537e
86,862,057,818,840,760,000,000,000,000,000,000,000
54
_asn1_extract_der_octet: catch invalid input cases early That is, check the calculated lengths for validity prior to entering a loop. This avoids an infinite recursion. Reported by Pascal Cuoq.
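The libtasn1 message above is about rejecting invalid decoded lengths before entering a loop or recursion, so malformed input cannot drive the parser into unbounded work. The sketch below applies that idea to a toy TLV format (1-byte tag, 1-byte length, payload); the format, tag values, and names are assumptions for illustration, not DER or libtasn1 code.

/* Sketch: check declared lengths against remaining input before looping. */
#include <stddef.h>
#include <stdio.h>

static int walk_tlv(const unsigned char *buf, size_t len, int depth)
{
    size_t off = 0;

    if (depth > 32)
        return -1;                       /* depth cap against runaway nesting */
    while (off < len) {
        size_t vlen;
        if (len - off < 2)
            return -1;                   /* not even room for tag + length */
        vlen = buf[off + 1];
        if (vlen > len - off - 2)
            return -1;                   /* declared length exceeds remaining input */
        if (buf[off] == 0x30 &&          /* "constructed" tag: recurse into payload */
            walk_tlv(buf + off + 2, vlen, depth + 1) < 0)
            return -1;
        off += 2 + vlen;                 /* guaranteed to make progress */
    }
    return 0;
}

int main(void)
{
    unsigned char ok[]  = { 0x30, 0x03, 0x02, 0x01, 0x05 };
    unsigned char bad[] = { 0x30, 0xff, 0x00 };   /* length runs past the buffer */
    printf("ok:  %d\n", walk_tlv(ok, sizeof(ok), 0));
    printf("bad: %d\n", walk_tlv(bad, sizeof(bad), 0));
    return 0;
}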
static int set_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 data) { u64 mcg_cap = vcpu->arch.mcg_cap; unsigned bank_num = mcg_cap & 0xff; switch (msr) { case MSR_IA32_MCG_STATUS: vcpu->arch.mcg_status = data; break; case MSR_IA32_MCG_CTL: if (!(mcg_cap & MCG_CTL_P)) return 1; if (data != 0 && data != ~(u64)0) return -1; vcpu->arch.mcg_ctl = data; break; default: if (msr >= MSR_IA32_MC0_CTL && msr < MSR_IA32_MC0_CTL + 4 * bank_num) { u32 offset = msr - MSR_IA32_MC0_CTL; /* only 0 or all 1s can be written to IA32_MCi_CTL * some Linux kernels though clear bit 10 in bank 4 to * workaround a BIOS/GART TBL issue on AMD K8s, ignore * this to avoid an uncatched #GP in the guest */ if ((offset & 0x3) == 0 && data != 0 && (data | (1 << 10)) != ~(u64)0) return -1; vcpu->arch.mce_banks[offset] = data; break; } return 1; } return 0; }
0
[ "CWE-200" ]
kvm
831d9d02f9522e739825a51a11e3bc5aa531a905
128,183,703,584,265,000,000,000,000,000,000,000,000
35
KVM: x86: fix information leak to userland Structures kvm_vcpu_events, kvm_debugregs, kvm_pit_state2 and kvm_clock_data are copied to userland with some padding and reserved fields unitialized. It leads to leaking of contents of kernel stack memory. We have to initialize them to zero. In patch v1 Jan Kiszka suggested to fill reserved fields with zeros instead of memset'ting the whole struct. It makes sense as these fields are explicitly marked as padding. No more fields need zeroing. KVM-Stable-Tag. Signed-off-by: Vasiliy Kulikov <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]>
ompl::base::PlannerStatus ompl::geometric::VFRRT::solve(const base::PlannerTerminationCondition &ptc) { checkValidity(); base::Goal *goal = pdef_->getGoal().get(); auto *goal_s = dynamic_cast<base::GoalSampleableRegion *>(goal); if (!sampler_) sampler_ = si_->allocStateSampler(); meanNorm_ = determineMeanNorm(); while (const base::State *st = pis_.nextStart()) { auto *motion = new Motion(si_); si_->copyState(motion->state, st); nn_->add(motion); } if (nn_->size() == 0) { OMPL_ERROR("%s: There are no valid initial states!", getName().c_str()); return base::PlannerStatus::INVALID_START; } OMPL_INFORM("%s: Starting planning with %u states already in datastructure", getName().c_str(), nn_->size()); Motion *solution = nullptr; Motion *approxsol = nullptr; double approxdif = std::numeric_limits<double>::infinity(); auto *rmotion = new Motion(si_); base::State *rstate = rmotion->state; base::State *xstate = si_->allocState(); while (ptc == false) { // Sample random state (with goal biasing) if (goal_s && rng_.uniform01() < goalBias_ && goal_s->canSample()) goal_s->sampleGoal(rstate); else sampler_->sampleUniform(rstate); // Find closest state in the tree Motion *nmotion = nn_->nearest(rmotion); // Modify direction based on vector field before extending Motion *motion = extendTree(nmotion, rstate, getNewDirection(nmotion->state, rstate)); if (!motion) continue; // Check if we can connect to the goal double dist = 0; bool sat = goal->isSatisfied(motion->state, &dist); if (sat) { approxdif = dist; solution = motion; break; } if (dist < approxdif) { approxdif = dist; approxsol = motion; } } bool solved = false; bool approximate = false; if (solution == nullptr) { solution = approxsol; approximate = true; } if (solution != nullptr) { lastGoalMotion_ = solution; // Construct the solution path std::vector<Motion *> mpath; while (solution != nullptr) { mpath.push_back(solution); solution = solution->parent; } // Set the solution path auto path(std::make_shared<PathGeometric>(si_)); for (int i = mpath.size() - 1; i >= 0; --i) path->append(mpath[i]->state); pdef_->addSolutionPath(path, approximate, approxdif, name_); solved = true; } si_->freeState(xstate); if (rmotion->state) si_->freeState(rmotion->state); delete rmotion; OMPL_INFORM("%s: Created %u states", getName().c_str(), nn_->size()); return {solved, approximate}; }
0
[ "CWE-703" ]
ompl
abb4fadcb4e4fe4c9cf41e5e7706143a66948eb7
101,485,675,536,105,240,000,000,000,000,000,000,000
102
fix memory leak in VFRRT. closes #839
process_button(struct parsed_tag *tag) { Str tmp = NULL; char *p, *q, *r, *qq = ""; int qlen, v; if (cur_form_id < 0) { char *s = "<form_int method=internal action=none>"; tmp = process_form(parse_tag(&s, TRUE)); } if (tmp == NULL) tmp = Strnew(); p = "submit"; parsedtag_get_value(tag, ATTR_TYPE, &p); q = NULL; parsedtag_get_value(tag, ATTR_VALUE, &q); r = ""; parsedtag_get_value(tag, ATTR_NAME, &r); v = formtype(p); if (v == FORM_UNKNOWN) return NULL; if (!q) { switch (v) { case FORM_INPUT_SUBMIT: case FORM_INPUT_BUTTON: q = "SUBMIT"; break; case FORM_INPUT_RESET: q = "RESET"; break; } } if (q) { qq = html_quote(q); qlen = strlen(q); } /* Strcat_charp(tmp, "<pre_int>"); */ Strcat(tmp, Sprintf("<input_alt hseq=\"%d\" fid=\"%d\" type=%s " "name=\"%s\" value=\"%s\">", cur_hseq++, cur_form_id, p, html_quote(r), qq)); return tmp; }
1
[ "CWE-476" ]
w3m
59b91cd8e30c86f23476fa81ae005cabff49ebb6
320,051,948,238,344,660,000,000,000,000,000,000,000
46
Prevent segfault with malformed input type Bug-Debian: https://github.com/tats/w3m/issues/7
static void unit_tidy_watch_pids(Unit *u) { pid_t except1, except2; Iterator i; void *e; assert(u); /* Cleans dead PIDs from our list */ except1 = unit_main_pid(u); except2 = unit_control_pid(u); SET_FOREACH(e, u->pids, i) { pid_t pid = PTR_TO_PID(e); if (pid == except1 || pid == except2) continue; if (!pid_is_unwaited(pid)) unit_unwatch_pid(u, pid); } }
0
[ "CWE-269" ]
systemd
bf65b7e0c9fc215897b676ab9a7c9d1c688143ba
80,421,296,649,866,840,000,000,000,000,000,000,000
22
core: imply NNP and SUID/SGID restriction for DynamicUser=yes service Let's be safe, rather than sorry. This way DynamicUser=yes services can neither take benefit of, nor create SUID/SGID binaries. Given that DynamicUser= is a recent addition only we should be able to get away with turning this on, even though this is strictly speaking a binary compatibility breakage.
virDomainDiskExpandGroupIoTune(virDomainDiskDefPtr disk, const virDomainDef *def) { size_t i; if (!disk->blkdeviotune.group_name || virDomainBlockIoTuneInfoHasAny(&disk->blkdeviotune)) return; for (i = 0; i < def->ndisks; i++) { virDomainDiskDefPtr d = def->disks[i]; if (STRNEQ_NULLABLE(disk->blkdeviotune.group_name, d->blkdeviotune.group_name) || !virDomainBlockIoTuneInfoHasAny(&d->blkdeviotune)) continue; VIR_FREE(disk->blkdeviotune.group_name); virDomainBlockIoTuneInfoCopy(&d->blkdeviotune, &disk->blkdeviotune); return; } }
0
[ "CWE-212" ]
libvirt
a5b064bf4b17a9884d7d361733737fb614ad8979
113,461,091,824,223,670,000,000,000,000,000,000,000
23
conf: Don't format http cookies unless VIR_DOMAIN_DEF_FORMAT_SECURE is used Starting with 3b076391befc3fe72deb0c244ac6c2b4c100b410 (v6.1.0-122-g3b076391be) we support http cookies. Since they may contain somewhat sensitive information we should not format them into the XML unless VIR_DOMAIN_DEF_FORMAT_SECURE is asserted. Reported-by: Han Han <[email protected]> Signed-off-by: Peter Krempa <[email protected]> Reviewed-by: Erik Skultety <[email protected]>
static char *next_string(char *string, unsigned long *secsize) { /* Skip non-zero chars */ while (string[0]) { string++; if ((*secsize)-- <= 1) return NULL; } /* Skip any zero padding. */ while (!string[0]) { string++; if ((*secsize)-- <= 1) return NULL; } return string; }
0
[ "CWE-362", "CWE-347" ]
linux
0c18f29aae7ce3dadd26d8ee3505d07cc982df75
40,915,240,867,636,447,000,000,000,000,000,000,000
17
module: limit enabling module.sig_enforce Irrespective as to whether CONFIG_MODULE_SIG is configured, specifying "module.sig_enforce=1" on the boot command line sets "sig_enforce". Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured. This patch makes the presence of /sys/module/module/parameters/sig_enforce dependent on CONFIG_MODULE_SIG=y. Fixes: fda784e50aac ("module: export module signature enforcement status") Reported-by: Nayna Jain <[email protected]> Tested-by: Mimi Zohar <[email protected]> Tested-by: Jessica Yu <[email protected]> Signed-off-by: Mimi Zohar <[email protected]> Signed-off-by: Jessica Yu <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
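The kernel message above gates a runtime toggle on a compile-time option: the knob only takes effect when the corresponding feature is actually built in. The sketch below shows that gating pattern in miniature; the FEATURE_SIG macro and all names are invented, and this is not the kernel's module-parameter code.

/* Sketch: a toggle that is only writable when the feature is compiled in. */
#include <stdbool.h>
#include <stdio.h>

/* #define FEATURE_SIG 1 */               /* build with -DFEATURE_SIG to enable */

#ifdef FEATURE_SIG
static bool sig_enforce = false;
static bool set_sig_enforce(bool v) { sig_enforce = v; return true; }
#else
static const bool sig_enforce = false;    /* pinned: feature not built in */
static bool set_sig_enforce(bool v) { (void)v; return false; }
#endif

int main(void)
{
    bool changed = set_sig_enforce(true); /* e.g. requested via a boot parameter */
    printf("request honoured: %s, sig_enforce=%d\n",
           changed ? "yes" : "no", (int)sig_enforce);
    return 0;
}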
cdf_unpack_summary_info(const cdf_stream_t *sst, const cdf_header_t *h, cdf_summary_info_header_t *ssi, cdf_property_info_t **info, size_t *count) { size_t i, maxcount; const cdf_summary_info_header_t *si = CAST(const cdf_summary_info_header_t *, sst->sst_tab); const cdf_section_declaration_t *sd = CAST(const cdf_section_declaration_t *, (const void *) ((const char *)sst->sst_tab + CDF_SECTION_DECLARATION_OFFSET)); if (cdf_check_stream_offset(sst, h, si, sizeof(*si), __LINE__) == -1 || cdf_check_stream_offset(sst, h, sd, sizeof(*sd), __LINE__) == -1) return -1; ssi->si_byte_order = CDF_TOLE2(si->si_byte_order); ssi->si_os_version = CDF_TOLE2(si->si_os_version); ssi->si_os = CDF_TOLE2(si->si_os); ssi->si_class = si->si_class; cdf_swap_class(&ssi->si_class); ssi->si_count = CDF_TOLE2(si->si_count); *count = 0; maxcount = 0; *info = NULL; for (i = 0; i < CDF_TOLE4(si->si_count); i++) { if (i >= CDF_LOOP_LIMIT) { DPRINTF(("Unpack summary info loop limit")); errno = EFTYPE; return -1; } if (cdf_read_property_info(sst, h, CDF_TOLE4(sd->sd_offset), info, count, &maxcount) == -1) return -1; } return 0; }
1
[ "CWE-119" ]
file
1140872578eedaeecf828f1841d17ff574372dba
153,284,141,002,568,240,000,000,000,000,000,000,000
34
- add float and double types - fix debug printf formats - fix short stream sizes - don't fail if we don't know about a type
GF_Err writegen_configure_pid(GF_Filter *filter, GF_FilterPid *pid, Bool is_remove) { u32 cid, chan, sr, w, h, stype, pf, sfmt, av1mode, nb_bps; const char *name, *mimetype; char szExt[GF_4CC_MSIZE], szCodecExt[30], *sep; const GF_PropertyValue *p; GF_GenDumpCtx *ctx = gf_filter_get_udta(filter); if (is_remove) { ctx->ipid = NULL; if (ctx->opid) { gf_filter_pid_remove(ctx->opid); ctx->opid = NULL; } return GF_OK; } if (! gf_filter_pid_check_caps(pid)) return GF_NOT_SUPPORTED; p = gf_filter_pid_get_property(pid, GF_PROP_PID_CODECID); if (!p) return GF_NOT_SUPPORTED; cid = p->value.uint; ctx->codecid = cid; if (!ctx->opid) { ctx->opid = gf_filter_pid_new(filter); ctx->first = GF_TRUE; } ctx->ipid = pid; //copy properties at init or reconfig gf_filter_pid_copy_properties(ctx->opid, pid); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_DECODER_CONFIG, NULL); p = gf_filter_pid_get_property(pid, GF_PROP_PID_STREAM_TYPE); stype = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_SAMPLE_RATE); sr = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_NUM_CHANNELS); chan = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_AUDIO_FORMAT); sfmt = p ? p->value.uint : GF_AUDIO_FMT_S16; p = gf_filter_pid_get_property(pid, GF_PROP_PID_AUDIO_BPS); nb_bps = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_WIDTH); ctx->w = w = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_HEIGHT); ctx->h = h = p ? p->value.uint : 0; p = gf_filter_pid_get_property(pid, GF_PROP_PID_PIXFMT); pf = p ? p->value.uint : 0; if (!pf) pf = GF_PIXEL_YUV; //get DSI except for unframed streams p = gf_filter_pid_get_property(pid, GF_PROP_PID_UNFRAMED); if (!p ) p = gf_filter_pid_get_property(pid, GF_PROP_PID_UNFRAMED_LATM); if (!p || !p->value.boolean) { p = gf_filter_pid_get_property(pid, GF_PROP_PID_DECODER_CONFIG); if (p) { ctx->dcfg = p->value.data.ptr; ctx->dcfg_size = p->value.data.size; } } p = gf_filter_pid_get_property(pid, GF_PROP_PID_DASH_MODE); ctx->dash_mode = (p && p->value.uint) ? 
GF_TRUE : GF_FALSE; gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_STREAM_TYPE, &PROP_UINT(GF_STREAM_FILE) ); //special case for xml text, override to xml switch (cid) { case GF_CODECID_META_XML: case GF_CODECID_SUBS_XML: strcpy(szCodecExt, "xml"); break; default: strncpy(szCodecExt, gf_codecid_file_ext(cid), 29); szCodecExt[29]=0; sep = strchr(szCodecExt, '|'); if (sep) sep[0] = 0; break; } gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING(szCodecExt) ); mimetype = gf_codecid_mime(cid); switch (cid) { case GF_CODECID_AAC_MPEG4: case GF_CODECID_AAC_MPEG2_MP: case GF_CODECID_AAC_MPEG2_LCP: case GF_CODECID_AAC_MPEG2_SSRP: //override extension to latm for aac if unframed latm data - NOT for usac p = gf_filter_pid_get_property(pid, GF_PROP_PID_UNFRAMED_LATM); if (p && p->value.boolean) { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING("latm") ); } break; case GF_CODECID_PNG: case GF_CODECID_JPEG: ctx->split = GF_TRUE; break; case GF_CODECID_J2K: ctx->split = GF_TRUE; ctx->is_mj2k = GF_TRUE; break; case GF_CODECID_AMR: gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); ctx->dcfg = "#!AMR\n"; ctx->dcfg_size = 6; ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_AMR_WB: gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); ctx->dcfg = "#!AMR-WB\n"; ctx->dcfg_size = 9; ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_SMV: gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); ctx->dcfg = "#!SMV\n"; ctx->dcfg_size = 6; ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_EVRC_PV: case GF_CODECID_EVRC: gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); ctx->dcfg = "#!EVRC\n"; ctx->dcfg_size = 7; ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_FLAC: gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_SIMPLE_TEXT: if (!gf_filter_pid_get_property(pid, GF_PROP_PID_MIME)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_META_TEXT: if (!gf_filter_pid_get_property(pid, GF_PROP_PID_MIME)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_META_XML: if (!gf_filter_pid_get_property(pid, GF_PROP_PID_MIME)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_SUBS_TEXT: if (!gf_filter_pid_get_property(pid, GF_PROP_PID_MIME)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_SUBS_XML: if (!gf_filter_pid_get_property(pid, GF_PROP_PID_MIME)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); if (ctx->dash_mode) { ctx->ttml_agg = GF_TRUE; } else if (!ctx->frame) { ctx->ttml_agg = GF_TRUE; } else { ctx->split = GF_TRUE; } if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_FIRST; break; case GF_CODECID_AV1: av1mode = 0; p = gf_filter_pid_get_property_str(ctx->ipid, "obu:mode"); if (p) av1mode = p->value.uint; if (av1mode==1) { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING("av1b") ); gf_filter_pid_set_property(ctx->opid, 
GF_PROP_PID_MIME, &PROP_STRING("video/x-av1b") ); } else if (av1mode==2) { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING("ivf") ); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING("video/x-ivf") ); } else { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING("obu") ); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING("video/x-av1") ); } break; case GF_CODECID_RAW: ctx->dcfg = NULL; ctx->dcfg_size = 0; if (stype==GF_STREAM_VISUAL) { strcpy(szExt, gf_pixel_fmt_sname(ctx->target_pfmt ? ctx->target_pfmt : pf)); p = gf_filter_pid_caps_query(ctx->opid, GF_PROP_PID_FILE_EXT); if (p) { strncpy(szExt, p->value.string, GF_4CC_MSIZE-1); szExt[GF_4CC_MSIZE-1] = 0; if (!strcmp(szExt, "bmp")) { ctx->is_bmp = GF_TRUE; //request BGR ctx->target_pfmt = GF_PIXEL_BGR; ctx->split = GF_TRUE; } else if (!strcmp(szExt, "y4m")) { ctx->is_y4m = GF_TRUE; ctx->target_pfmt = GF_PIXEL_YUV; } else { ctx->target_pfmt = gf_pixel_fmt_parse(szExt); if (!ctx->target_pfmt) { GF_LOG(GF_LOG_ERROR, GF_LOG_MEDIA, ("Cannot guess pixel format from extension type %s\n", szExt)); return GF_NOT_SUPPORTED; } strcpy(szExt, gf_pixel_fmt_sname(ctx->target_pfmt)); } //forcing pixel format regardless of extension if (ctx->pfmt) { if (pf != ctx->pfmt) { gf_filter_pid_negociate_property(ctx->ipid, GF_PROP_PID_PIXFMT, &PROP_UINT(ctx->pfmt)); //make sure we reconfigure ctx->codecid = 0; pf = ctx->pfmt; } } //use extension to derive pixel format else if (pf != ctx->target_pfmt) { gf_filter_pid_negociate_property(ctx->ipid, GF_PROP_PID_PIXFMT, &PROP_UINT(ctx->target_pfmt)); strcpy(szExt, gf_pixel_fmt_sname(ctx->target_pfmt)); //make sure we reconfigure ctx->codecid = 0; } } p = gf_filter_pid_get_property(pid, GF_PROP_PID_STRIDE); ctx->stride = p ? p->value.uint : 0; if (!ctx->stride) { gf_pixel_get_size_info(ctx->target_pfmt ? ctx->target_pfmt : pf, ctx->w, ctx->h, NULL, &ctx->stride, NULL, NULL, NULL); } } else if (stype==GF_STREAM_AUDIO) { strcpy(szExt, gf_audio_fmt_sname(ctx->target_afmt ? ctx->target_afmt : sfmt)); p = gf_filter_pid_caps_query(ctx->opid, GF_PROP_PID_FILE_EXT); if (p) { strncpy(szExt, p->value.string, GF_4CC_MSIZE-1); szExt[GF_4CC_MSIZE-1] = 0; if (!strcmp(szExt, "wav")) { ctx->is_wav = GF_TRUE; //request PCMs16 ? // ctx->target_afmt = GF_AUDIO_FMT_S16; ctx->target_afmt = sfmt; } else { ctx->target_afmt = gf_audio_fmt_parse(szExt); strcpy(szExt, gf_audio_fmt_sname(ctx->target_afmt)); } //forcing sample format regardless of extension if (ctx->afmt) { if (sfmt != ctx->afmt) { gf_filter_pid_negociate_property(ctx->ipid, GF_PROP_PID_AUDIO_FORMAT, &PROP_UINT(ctx->afmt)); //make sure we reconfigure ctx->codecid = 0; sfmt = ctx->afmt; } } //use extension to derive sample format else if (sfmt != ctx->target_afmt) { gf_filter_pid_negociate_property(ctx->ipid, GF_PROP_PID_AUDIO_FORMAT, &PROP_UINT(ctx->target_afmt)); strcpy(szExt, gf_audio_fmt_sname(ctx->target_afmt)); //make sure we reconfigure ctx->codecid = 0; } if (ctx->dur.num && ctx->dur.den && !gf_audio_fmt_is_planar(ctx->target_afmt ? 
ctx->target_afmt : ctx->afmt)) ctx->trunc_audio = GF_TRUE; } } else { strcpy(szExt, gf_4cc_to_str(cid)); } if (ctx->is_bmp) strcpy(szExt, "bmp"); else if (ctx->is_y4m) strcpy(szExt, "y4m"); else if (ctx->is_wav) { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_DISABLE_PROGRESSIVE, &PROP_UINT(GF_PID_FILE_PATCH_REPLACE) ); strcpy(szExt, "wav"); } else if (!strlen(szExt)) strcpy(szExt, "raw"); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING( szExt ) ); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING("application/octet-string") ); if (!ctx->codecid) return GF_OK; break; default: if (!strcmp(szCodecExt, "raw")) { strcpy(szExt, gf_4cc_to_str(cid)); if (!strlen(szExt)) strcpy(szExt, "raw"); gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING( szExt ) ); } else { gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_FILE_EXT, &PROP_STRING( szCodecExt ) ); } gf_filter_pid_set_property(ctx->opid, GF_PROP_PID_MIME, &PROP_STRING(mimetype) ); break; } if (ctx->decinfo == DECINFO_AUTO) ctx->decinfo = DECINFO_NO; name = gf_codecid_name(cid); if (ctx->exporter) { if (w && h) { if (cid==GF_CODECID_RAW) name = gf_pixel_fmt_name(pf); GF_LOG(GF_LOG_INFO, GF_LOG_MEDIA, ("Exporting %s - Size %dx%d\n", name, w, h)); } else if (sr && chan) { if (cid==GF_CODECID_RAW) { GF_LOG(GF_LOG_INFO, GF_LOG_MEDIA, ("Exporting PCM %s SampleRate %d %d channels %d bits per sample\n", gf_audio_fmt_name(sfmt), sr, chan, gf_audio_fmt_bit_depth(sfmt) )); } else { if (!nb_bps) nb_bps = gf_audio_fmt_bit_depth(sfmt); GF_LOG(GF_LOG_INFO, GF_LOG_MEDIA, ("Exporting %s - SampleRate %d %d channels %d bits per sample\n", name, sr, chan, nb_bps )); } } else { GF_LOG(GF_LOG_INFO, GF_LOG_MEDIA, ("Exporting %s\n", name)); } } //avoid creating a file when dumping individual samples if (ctx->split) { p = gf_filter_pid_get_property(pid, GF_PROP_PID_NB_FRAMES); if (!p || (p->value.uint>1)) gf_filter_pid_set_property(ctx->opid, GF_PROP_PCK_FILENUM, &PROP_UINT(0) ); else ctx->split = GF_FALSE; } else if (ctx->frame) { gf_filter_pid_set_property(ctx->opid, GF_PROP_PCK_FILENUM, &PROP_UINT(0) ); } p = gf_filter_pid_get_property(pid, GF_PROP_PID_DURATION); if (p) ctx->duration = p->value.lfrac; gf_filter_pid_set_framing_mode(pid, GF_TRUE); return GF_OK; }
0
[ "CWE-787" ]
gpac
ea1eca00fd92fa17f0e25ac25652622924a9a6a0
94,278,145,992,839,940,000,000,000,000,000,000,000
347
fixed #2138
TEST_F(RenameCollectionTest, RenameCollectionForApplyOpsDropTargetEvenIfSourceDoesNotExist) { _createCollectionWithUUID(_opCtx.get(), _targetNss); auto missingSourceNss = NamespaceString("test.bar2"); auto uuidDoc = BSON("ui" << UUID::gen()); auto cmd = BSON("renameCollection" << missingSourceNss.ns() << "to" << _targetNss.ns() << "dropTarget" << "true"); ASSERT_OK(renameCollectionForApplyOps( _opCtx.get(), missingSourceNss.db().toString(), uuidDoc["ui"], cmd, {})); ASSERT_FALSE(_collectionExists(_opCtx.get(), _targetNss)); }
0
[ "CWE-20" ]
mongo
35c1b1f588f04926a958ad2fe4d9c59d79f81e8b
145,709,627,002,041,170,000,000,000,000,000,000,000
11
SERVER-35636 renameCollectionForApplyOps checks for complete namespace
void clear_outer_face_cycle_marks() { clear_outer_face_cycle_marks(typename Is_extended_kernel<Extended_kernel>::value_type()); }
0
[ "CWE-269" ]
cgal
618b409b0fbcef7cb536a4134ae3a424ef5aae45
80,954,468,061,828,540,000,000,000,000,000,000,000
4
Fix Nef_2 and Nef_S2 IO
static int ssl_mt_error(int n) { int ret; switch (n) { case SSL2_PE_NO_CIPHER: ret=SSL_R_PEER_ERROR_NO_CIPHER; break; case SSL2_PE_NO_CERTIFICATE: ret=SSL_R_PEER_ERROR_NO_CERTIFICATE; break; case SSL2_PE_BAD_CERTIFICATE: ret=SSL_R_PEER_ERROR_CERTIFICATE; break; case SSL2_PE_UNSUPPORTED_CERTIFICATE_TYPE: ret=SSL_R_PEER_ERROR_UNSUPPORTED_CERTIFICATE_TYPE; break; default: ret=SSL_R_UNKNOWN_REMOTE_ERROR_TYPE; break; } return(ret); }
0
[ "CWE-310" ]
openssl
270881316664396326c461ec7a124aec2c6cc081
107,510,825,424,950,630,000,000,000,000,000,000,000
24
Add and use a constant-time memcmp. This change adds CRYPTO_memcmp, which compares two vectors of bytes in an amount of time that's independent of their contents. It also changes several MAC compares in the code to use this over the standard memcmp, which may leak information about the size of a matching prefix. (cherry picked from commit 2ee798880a246d648ecddadc5b91367bee4a5d98) Conflicts: crypto/crypto.h ssl/t1_lib.c (cherry picked from commit dc406b59f3169fe191e58906df08dce97edb727c) Conflicts: crypto/crypto.h ssl/d1_pkt.c ssl/s3_pkt.c
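The OpenSSL message above describes a comparison whose running time does not depend on where the first differing byte is, so MAC checks do not leak the length of a matching prefix through timing. Below is an independent sketch of that idea; it is not OpenSSL's CRYPTO_memcmp implementation, and the names are invented.

/* Sketch: constant-time equality check over two byte buffers. */
#include <stddef.h>
#include <stdio.h>

/* Returns 0 if the two buffers are equal, nonzero otherwise. */
static int ct_memcmp(const void *a, const void *b, size_t len)
{
    const unsigned char *pa = a, *pb = b;
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < len; i++)
        diff |= pa[i] ^ pb[i];   /* no early exit on the first mismatch */
    return diff;                 /* 0 iff every byte matched */
}

int main(void)
{
    unsigned char mac1[16] = "0123456789abcde";
    unsigned char mac2[16] = "0123456789abcdX";
    printf("equal buffers:   %d\n", ct_memcmp(mac1, mac1, sizeof(mac1)));
    printf("unequal buffers: %d\n", ct_memcmp(mac1, mac2, sizeof(mac2)));
    return 0;
}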
void testMultiPartThreading (const std::string & tempDir) { try { cout << "Testing the multi part APIs for multi-thread use" << endl; random_reseed(1); int numThreads = ThreadPool::globalThreadPool().numThreads(); ThreadPool::globalThreadPool().setNumThreads(32); testWriteRead ( 1, 1, 5, tempDir); testWriteRead ( 2, 2, 10, tempDir); testWriteRead ( 5, 5, 25, tempDir); testWriteRead (50, 2, 250, tempDir); ThreadPool::globalThreadPool().setNumThreads(numThreads); cout << "ok\n" << endl; } catch (const std::exception &e) { cerr << "ERROR -- caught exception: " << e.what() << endl; assert (false); } }
0
[ "CWE-125" ]
openexr
e79d2296496a50826a15c667bf92bdc5a05518b4
18,085,176,002,699,960,000,000,000,000,000,000,000
26
fix memory leaks and invalid memory accesses Signed-off-by: Peter Hillman <[email protected]>
SECURITY_STATUS SEC_ENTRY RevertSecurityContext(PCtxtHandle phContext) { return SEC_E_OK; }
0
[ "CWE-476", "CWE-125" ]
FreeRDP
0773bb9303d24473fe1185d85a424dfe159aff53
117,806,938,001,085,500,000,000,000,000,000,000,000
4
nla: invalidate sec handle after creation If sec pointer isn't invalidated after creation it is not possible to check if the upper and lower pointers are valid. This fixes a segfault in the server part if the client disconnects before the authentication was finished.
grub_ext2_label (grub_device_t device, char **label) { struct grub_ext2_data *data; grub_disk_t disk = device->disk; grub_dl_ref (my_mod); data = grub_ext2_mount (disk); if (data) *label = grub_strndup (data->sblock.volume_name, 14); else *label = NULL; grub_dl_unref (my_mod); grub_free (data); return grub_errno; }
0
[ "CWE-703", "CWE-787" ]
radare2
796dd28aaa6b9fa76d99c42c4d5ff8b257cc2191
191,630,850,893,789,900,000,000,000,000,000,000,000
19
Fix ext2 buffer overflow in r2_sbu_grub_memmove
static inline struct sock *l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident) { struct sock *s; read_lock(&l->lock); s = __l2cap_get_chan_by_ident(l, ident); if (s) bh_lock_sock(s); read_unlock(&l->lock); return s; }
0
[ "CWE-200", "CWE-119", "CWE-787" ]
linux
f2fcfcd670257236ebf2088bbdf26f6a8ef459fe
287,131,284,440,456,200,000,000,000,000,000,000,000
10
Bluetooth: Add configuration support for ERTM and Streaming mode Add support to config_req and config_rsp to configure ERTM and Streaming mode. If the remote device specifies ERTM or Streaming mode, then the same mode is proposed. Otherwise ERTM or Basic mode is used. And in case of a state 2 device, the remote device should propose the same mode. If not, then the channel gets disconnected. Signed-off-by: Gustavo F. Padovan <[email protected]> Signed-off-by: Marcel Holtmann <[email protected]>
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj, unsigned flags) { struct i915_mmu_notifier *mn; struct i915_mmu_object *mo; if (flags & I915_USERPTR_UNSYNCHRONIZED) return capable(CAP_SYS_ADMIN) ? 0 : -EPERM; if (WARN_ON(obj->userptr.mm == NULL)) return -EINVAL; mn = i915_mmu_notifier_find(obj->userptr.mm); if (IS_ERR(mn)) return PTR_ERR(mn); mo = kzalloc(sizeof(*mo), GFP_KERNEL); if (!mo) return -ENOMEM; mo->mn = mn; mo->obj = obj; mo->it.start = obj->userptr.ptr; mo->it.last = obj->userptr.ptr + obj->base.size - 1; RB_CLEAR_NODE(&mo->it.rb); obj->userptr.mmu_object = mo; return 0; }
0
[ "CWE-362" ]
linux
17839856fd588f4ab6b789f482ed3ffd7c403e1f
41,628,665,108,955,744,000,000,000,000,000,000,000
29
gup: document and work around "COW can break either way" issue Doing a "get_user_pages()" on a copy-on-write page for reading can be ambiguous: the page can be COW'ed at any time afterwards, and the direction of a COW event isn't defined. Yes, whoever writes to it will generally do the COW, but if the thread that did the get_user_pages() unmapped the page before the write (and that could happen due to memory pressure in addition to any outright action), the writer could also just take over the old page instead. End result: the get_user_pages() call might result in a page pointer that is no longer associated with the original VM, and is associated with - and controlled by - another VM having taken it over instead. So when doing a get_user_pages() on a COW mapping, the only really safe thing to do would be to break the COW when getting the page, even when only getting it for reading. At the same time, some users simply don't even care. For example, the perf code wants to look up the page not because it cares about the page, but because the code simply wants to look up the physical address of the access for informational purposes, and doesn't really care about races when a page might be unmapped and remapped elsewhere. This adds logic to force a COW event by setting FOLL_WRITE on any copy-on-write mapping when FOLL_GET (or FOLL_PIN) is used to get a page pointer as a result. The current semantics end up being: - __get_user_pages_fast(): no change. If you don't ask for a write, you won't break COW. You'd better know what you're doing. - get_user_pages_fast(): the fast-case "look it up in the page tables without anything getting mmap_sem" now refuses to follow a read-only page, since it might need COW breaking. Which happens in the slow path - the fast path doesn't know if the memory might be COW or not. - get_user_pages() (including the slow-path fallback for gup_fast()): for a COW mapping, turn on FOLL_WRITE for FOLL_GET/FOLL_PIN, with very similar semantics to FOLL_FORCE. If it turns out that we want finer granularity (ie "only break COW when it might actually matter" - things like the zero page are special and don't need to be broken) we might need to push these semantics deeper into the lookup fault path. So if people care enough, it's possible that we might end up adding a new internal FOLL_BREAK_COW flag to go with the internal FOLL_COW flag we already have for tracking "I had a COW". Alternatively, if it turns out that different callers might want to explicitly control the forced COW break behavior, we might even want to make such a flag visible to the users of get_user_pages() instead of using the above default semantics. But for now, this is mostly commentary on the issue (this commit message being a lot bigger than the patch, and that patch in turn is almost all comments), with that minimal "enable COW breaking early" logic using the existing FOLL_WRITE behavior. [ It might be worth noting that we've always had this ambiguity, and it could arguably be seen as a user-space issue. You only get private COW mappings that could break either way in situations where user space is doing cooperative things (ie fork() before an execve() etc), but it _is_ surprising and very subtle, and fork() is supposed to give you independent address spaces. So let's treat this as a kernel issue and make the semantics of get_user_pages() easier to understand. 
Note that obviously a true shared mapping will still get a page that can change under us, so this does _not_ mean that get_user_pages() somehow returns any "stable" page ] Reported-by: Jann Horn <[email protected]> Tested-by: Christoph Hellwig <[email protected]> Acked-by: Oleg Nesterov <[email protected]> Acked-by: Kirill Shutemov <[email protected]> Acked-by: Jan Kara <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Matthew Wilcox <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
static ssize_t vfswrap_listxattr(struct vfs_handle_struct *handle, const char *path, char *list, size_t size) { return listxattr(path, list, size); }
0
[ "CWE-665" ]
samba
30e724cbff1ecd90e5a676831902d1e41ec1b347
22,146,261,642,361,303,000,000,000,000,000,000,000
4
FSCTL_GET_SHADOW_COPY_DATA: Initialize output array to zero Otherwise num_volumes and the end marker can return uninitialized data to the client. Signed-off-by: Christof Schmitt <[email protected]> Reviewed-by: Jeremy Allison <[email protected]> Reviewed-by: Simo Sorce <[email protected]>
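The fix described in this record is the standard remedy for CWE-665: zero the entire reply buffer before selectively filling it, so members that are never written (num_volumes, the end marker) cannot carry stale heap bytes back to the client. A minimal standalone sketch of that pattern, with invented names rather than the actual Samba structures:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct shadow_copy_reply {          /* illustrative layout, not Samba's */
    uint32_t num_volumes;
    uint32_t num_labels;
    char     labels[4][25];
};

static struct shadow_copy_reply *build_reply(int labels_wanted)
{
    struct shadow_copy_reply *r = malloc(sizeof(*r));

    if (r == NULL)
        return NULL;
    /* Zero the whole reply first: every member we never touch below,
     * including unused label slots, stays defined instead of leaking
     * whatever the allocator left behind. */
    memset(r, 0, sizeof(*r));
    if (labels_wanted) {
        r->num_volumes = 1;
        r->num_labels  = 1;
        strncpy(r->labels[0], "@GMT-2014.01.01-00.00.00", sizeof(r->labels[0]) - 1);
    }
    return r;
}

Using calloc() instead of malloc()+memset() gives the same guarantee in one call.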
static inline bool bio_integrity_endio(struct bio *bio) { return true; }
0
[ "CWE-416" ]
linux
c3e2219216c92919a6bd1711f340f5faa98695e6
269,116,501,786,768,040,000,000,000,000,000,000,000
4
block: free sched's request pool in blk_cleanup_queue In theory, IO scheduler belongs to request queue, and the request pool of sched tags belongs to the request queue too. However, the current tags allocation interfaces are re-used for both driver tags and sched tags, and driver tags are definitely host wide, and don't belong to any request queue, same with their request pool. So we need the tagset instance for freeing requests of sched tags. Meantime, blk_mq_free_tag_set() often follows blk_cleanup_queue() in case of non-BLK_MQ_F_TAG_SHARED, this way requires that the request pool of sched tags be freed before calling blk_mq_free_tag_set(). Commit 47cdee29ef9d94e ("block: move blk_exit_queue into __blk_release_queue") moves blk_exit_queue into __blk_release_queue for simplifying the fast path in generic_make_request(), then causes oops during freeing requests of sched tags in __blk_release_queue(). Fix the above issue by moving the freeing of the request pool of sched tags into blk_cleanup_queue(), this way is safe because the queue has been frozen and there are no in-queue requests at that time. Freeing sched tags has to be kept in queue's release handler because there might be un-completed dispatch activity which might refer to sched tags. Cc: Bart Van Assche <[email protected]> Cc: Christoph Hellwig <[email protected]> Fixes: 47cdee29ef9d94e485eb08f962c74943023a5271 ("block: move blk_exit_queue into __blk_release_queue") Tested-by: Yi Zhang <[email protected]> Reported-by: kernel test robot <[email protected]> Signed-off-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
cdio_generic_init (void *user_data, int open_flags) { generic_img_private_t *p_env = user_data; if (p_env->init) { cdio_warn ("init called more than once"); return false; } p_env->fd = open (p_env->source_name, open_flags, 0); if (p_env->fd < 0) { cdio_warn ("open (%s): %s", p_env->source_name, strerror (errno)); return false; } p_env->init = true; p_env->toc_init = false; p_env->cdtext = NULL; p_env->scsi_tuple = NULL; p_env->b_cdtext_error = false; p_env->u_joliet_level = 0; /* Assume no Joliet extensions initially */ return true; }
0
[ "CWE-415" ]
libcdio
dec2f876c2d7162da213429bce1a7140cdbdd734
117,332,565,740,058,050,000,000,000,000,000,000,000
24
Removed wrong line
static void update_field_dependencies(THD *thd, Field *field, TABLE *table) { DBUG_ENTER("update_field_dependencies"); if (thd->mark_used_columns != MARK_COLUMNS_NONE) { MY_BITMAP *bitmap; /* We always want to register the used keys, as the column bitmap may have been set for all fields (for example for view). */ table->covering_keys.intersect(field->part_of_key); if (field->vcol_info) table->mark_virtual_col(field); if (thd->mark_used_columns == MARK_COLUMNS_READ) bitmap= table->read_set; else bitmap= table->write_set; /* The test-and-set mechanism in the bitmap is not reliable during multi-UPDATE statements under MARK_COLUMNS_READ mode (thd->mark_used_columns == MARK_COLUMNS_READ), as this bitmap contains only those columns that are used in the SET clause. I.e they are being set here. See multi_update::prepare() */ if (bitmap_fast_test_and_set(bitmap, field->field_index)) { if (thd->mark_used_columns == MARK_COLUMNS_WRITE) { DBUG_PRINT("warning", ("Found duplicated field")); thd->dup_field= field; } else { DBUG_PRINT("note", ("Field found before")); } DBUG_VOID_RETURN; } if (table->get_fields_in_item_tree) field->flags|= GET_FIXED_FIELDS_FLAG; table->used_fields++; } else if (table->get_fields_in_item_tree) field->flags|= GET_FIXED_FIELDS_FLAG; DBUG_VOID_RETURN; }
0
[]
server
0168d1eda30dad4b517659422e347175eb89e923
12,277,685,338,507,984,000,000,000,000,000,000,000
50
MDEV-25766 Unused CTE lead to a crash in find_field_in_tables/find_order_in_list Do not assume that the subquery Item is always present.
ldns_pkt_tsig_verify_next(ldns_pkt *pkt, const uint8_t *wire, size_t wirelen, const char* key_name, const char *key_data, const ldns_rdf *orig_mac_rdf, int tsig_timers_only) { ldns_rdf *fudge_rdf; ldns_rdf *algorithm_rdf; ldns_rdf *time_signed_rdf; ldns_rdf *orig_id_rdf; ldns_rdf *error_rdf; ldns_rdf *other_data_rdf; ldns_rdf *pkt_mac_rdf; ldns_rdf *my_mac_rdf; ldns_rdf *key_name_rdf = ldns_rdf_new_frm_str(LDNS_RDF_TYPE_DNAME, key_name); uint16_t pkt_id, orig_pkt_id; ldns_status status; uint8_t *prepared_wire = NULL; size_t prepared_wire_size = 0; ldns_rr *orig_tsig = ldns_pkt_tsig(pkt); if (!orig_tsig || ldns_rr_rd_count(orig_tsig) <= 6) { ldns_rdf_deep_free(key_name_rdf); return false; } algorithm_rdf = ldns_rr_rdf(orig_tsig, 0); time_signed_rdf = ldns_rr_rdf(orig_tsig, 1); fudge_rdf = ldns_rr_rdf(orig_tsig, 2); pkt_mac_rdf = ldns_rr_rdf(orig_tsig, 3); orig_id_rdf = ldns_rr_rdf(orig_tsig, 4); error_rdf = ldns_rr_rdf(orig_tsig, 5); other_data_rdf = ldns_rr_rdf(orig_tsig, 6); /* remove temporarily */ ldns_pkt_set_tsig(pkt, NULL); /* temporarily change the id to the original id */ pkt_id = ldns_pkt_id(pkt); orig_pkt_id = ldns_rdf2native_int16(orig_id_rdf); ldns_pkt_set_id(pkt, orig_pkt_id); prepared_wire = ldns_tsig_prepare_pkt_wire(wire, wirelen, &prepared_wire_size); status = ldns_tsig_mac_new(&my_mac_rdf, prepared_wire, prepared_wire_size, key_data, key_name_rdf, fudge_rdf, algorithm_rdf, time_signed_rdf, error_rdf, other_data_rdf, orig_mac_rdf, tsig_timers_only); LDNS_FREE(prepared_wire); if (status != LDNS_STATUS_OK) { ldns_rdf_deep_free(key_name_rdf); return false; } /* Put back the values */ ldns_pkt_set_tsig(pkt, orig_tsig); ldns_pkt_set_id(pkt, pkt_id); ldns_rdf_deep_free(key_name_rdf); if( ldns_rdf_size(pkt_mac_rdf) != ldns_rdf_size(my_mac_rdf)) { ldns_rdf_deep_free(my_mac_rdf); return false; } /* use time insensitive memory compare */ if( #ifdef HAVE_CRYPTO_MEMCMP CRYPTO_memcmp #else memcmp #endif (ldns_rdf_data(pkt_mac_rdf), ldns_rdf_data(my_mac_rdf), ldns_rdf_size(my_mac_rdf)) == 0) { ldns_rdf_deep_free(my_mac_rdf); return true; } else { ldns_rdf_deep_free(my_mac_rdf); return false; } }
0
[]
ldns
f9073d9fc313b19f51a5aa160584f2bdccda637a
290,292,905,840,239,040,000,000,000,000,000,000,000
77
* Detect fixed time memory compare for openssl 0.9.8.
DequeueConsumable(qqueue_t *pThis, wti_t *pWti) { DEFiRet; int iQueueSize = 0; /* keep the compiler happy... */ /* dequeue element batch (still protected from mutex) */ iRet = DequeueConsumableElements(pThis, pWti, &iQueueSize); /* awake some flow-controlled sources if we can do this right now */ /* TODO: this could be done better from a performance point of view -- do it only if * we have someone waiting for the condition (or only when we hit the watermark right * on the nail [exact value]) -- rgerhards, 2008-03-14 * now that we dequeue batches of pointers, this is much less an issue... * rgerhards, 2009-04-22 */ if(iQueueSize < pThis->iFullDlyMrk / 2) { pthread_cond_broadcast(&pThis->belowFullDlyWtrMrk); } if(iQueueSize < pThis->iLightDlyMrk / 2) { pthread_cond_broadcast(&pThis->belowLightDlyWtrMrk); } // TODO: MULTI: check physical queue size? pthread_cond_signal(&pThis->notFull); /* WE ARE NO LONGER PROTECTED BY THE MUTEX */ if(iRet != RS_RET_OK && iRet != RS_RET_DISCARDMSG) { DBGOPRINT((obj_t*) pThis, "error %d dequeueing element - ignoring, but strange things " "may happen\n", iRet); } RETiRet; }
0
[ "CWE-772" ]
rsyslog
dfa88369d4ca4290db56b843f9eabdae1bfe0fd5
120,424,300,584,896,550,000,000,000,000,000,000,000
34
bugfix: memory leak when $RepeatedMsgReduction on was used bug tracker: http://bugzilla.adiscon.com/show_bug.cgi?id=225
rsvg_render_image (RsvgDrawingCtx * ctx, GdkPixbuf * pb, double x, double y, double w, double h) { ctx->render->render_image (ctx, pb, x, y, w, h); }
0
[]
librsvg
34c95743ca692ea0e44778e41a7c0a129363de84
254,091,404,704,940,900,000,000,000,000,000,000,000
4
Store node type separately in RsvgNode The node name (formerly RsvgNode:type) cannot be used to infer the sub-type of RsvgNode that we're dealing with, since for unknown elements we put type = node-name. This led to a (potentially exploitable) crash e.g. when the element name started with "fe" which tricked the old code into considering it as a RsvgFilterPrimitive. CVE-2011-3146 https://bugzilla.gnome.org/show_bug.cgi?id=658014
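The librsvg change replaces "infer the subtype from the element name" with an explicit type tag stored in the node when it is constructed. A small hedged C sketch of the two approaches; the types and names are invented for illustration and are not the real librsvg structures:

#include <string.h>

typedef enum {
    NODE_UNKNOWN,
    NODE_PATH,
    NODE_FILTER_PRIMITIVE
} node_kind_t;

typedef struct {
    const char *name;    /* element name exactly as written in the SVG    */
    node_kind_t kind;    /* set once at creation from the parser's table  */
    void *subtype_data;  /* only valid for the kind the node was built as */
} node_t;

/* Fragile: an unknown element named "feAnything" would be classified as a
 * filter primitive and its subtype_data dereferenced as the wrong type. */
static int is_filter_primitive_by_name(const node_t *n)
{
    return strncmp(n->name, "fe", 2) == 0;
}

/* Robust: the decision reflects how the node was actually constructed. */
static int is_filter_primitive_by_kind(const node_t *n)
{
    return n->kind == NODE_FILTER_PRIMITIVE;
}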
v9fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) { struct p9_fid *fid = iocb->ki_filp->private_data; int ret, err = 0; p9_debug(P9_DEBUG_VFS, "count %zu offset %lld\n", iov_iter_count(to), iocb->ki_pos); ret = p9_client_read(fid, iocb->ki_pos, to, &err); if (!ret) return err; iocb->ki_pos += ret; return ret; }
0
[ "CWE-835" ]
linux
5e3cc1ee1405a7eb3487ed24f786dec01b4cbe1f
304,241,675,490,306,500,000,000,000,000,000,000,000
15
9p: use inode->i_lock to protect i_size_write() under 32-bit Use inode->i_lock to protect i_size_write(), else i_size_read() in generic_fillattr() may loop infinitely in read_seqcount_begin() when multiple processes invoke v9fs_vfs_getattr() or v9fs_vfs_getattr_dotl() simultaneously under 32-bit SMP environment, and a soft lockup will be triggered as show below: watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [stat:2217] Modules linked in: CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system PC is at generic_fillattr+0x104/0x108 LR is at 0xec497f00 pc : [<802b8898>] lr : [<ec497f00>] psr: 200c0013 sp : ec497e20 ip : ed608030 fp : ec497e3c r10: 00000000 r9 : ec497f00 r8 : ed608030 r7 : ec497ebc r6 : ec497f00 r5 : ee5c1550 r4 : ee005780 r3 : 0000052d r2 : 00000000 r1 : ec497f00 r0 : ed608030 Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: ac48006a DAC: 00000051 CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system Backtrace: [<8010d974>] (dump_backtrace) from [<8010dc88>] (show_stack+0x20/0x24) [<8010dc68>] (show_stack) from [<80a1d194>] (dump_stack+0xb0/0xdc) [<80a1d0e4>] (dump_stack) from [<80109f34>] (show_regs+0x1c/0x20) [<80109f18>] (show_regs) from [<801d0a80>] (watchdog_timer_fn+0x280/0x2f8) [<801d0800>] (watchdog_timer_fn) from [<80198658>] (__hrtimer_run_queues+0x18c/0x380) [<801984cc>] (__hrtimer_run_queues) from [<80198e60>] (hrtimer_run_queues+0xb8/0xf0) [<80198da8>] (hrtimer_run_queues) from [<801973e8>] (run_local_timers+0x28/0x64) [<801973c0>] (run_local_timers) from [<80197460>] (update_process_times+0x3c/0x6c) [<80197424>] (update_process_times) from [<801ab2b8>] (tick_nohz_handler+0xe0/0x1bc) [<801ab1d8>] (tick_nohz_handler) from [<80843050>] (arch_timer_handler_virt+0x38/0x48) [<80843018>] (arch_timer_handler_virt) from [<80180a64>] (handle_percpu_devid_irq+0x8c/0x240) [<801809d8>] (handle_percpu_devid_irq) from [<8017ac20>] (generic_handle_irq+0x34/0x44) [<8017abec>] (generic_handle_irq) from [<8017b344>] (__handle_domain_irq+0x6c/0xc4) [<8017b2d8>] (__handle_domain_irq) from [<801022e0>] (gic_handle_irq+0x4c/0x88) [<80102294>] (gic_handle_irq) from [<80101a30>] (__irq_svc+0x70/0x98) [<802b8794>] (generic_fillattr) from [<8056b284>] (v9fs_vfs_getattr_dotl+0x74/0xa4) [<8056b210>] (v9fs_vfs_getattr_dotl) from [<802b8904>] (vfs_getattr_nosec+0x68/0x7c) [<802b889c>] (vfs_getattr_nosec) from [<802b895c>] (vfs_getattr+0x44/0x48) [<802b8918>] (vfs_getattr) from [<802b8a74>] (vfs_statx+0x9c/0xec) [<802b89d8>] (vfs_statx) from [<802b9428>] (sys_lstat64+0x48/0x78) [<802b93e0>] (sys_lstat64) from [<80101000>] (ret_fast_syscall+0x0/0x28) [[email protected]: updated comment to not refer to a function in another subsystem] Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 7549ae3e81cc ("9p: Use the i_size_[read, write]() macros instead of using inode->i_size directly.") Reported-by: Xing Gaopeng <[email protected]> Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Dominique Martinet <[email protected]>
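The rule behind this fix is that i_size_write() on 32-bit SMP is the write side of a sequence counter, and sequence-counter writers must be serialized by the caller; two unserialized writers can leave the counter odd forever, so readers spin in read_seqcount_begin(). A standalone userspace sketch of the same idea with a hand-rolled seqlock; it is purely illustrative and not the kernel's implementation:

#include <pthread.h>
#include <stdint.h>

struct sized_object {
    unsigned int seq;      /* even = stable, odd = write in progress    */
    uint64_t size;         /* cannot be loaded atomically on 32-bit     */
    pthread_mutex_t lock;  /* serializes writers, as inode->i_lock does */
};

static void size_write(struct sized_object *o, uint64_t newsize)
{
    /* Without this lock two writers can interleave their seq updates,
     * leave seq odd, and make every later reader spin forever. */
    pthread_mutex_lock(&o->lock);
    __atomic_fetch_add(&o->seq, 1, __ATOMIC_RELEASE);   /* seq becomes odd */
    o->size = newsize;
    __atomic_fetch_add(&o->seq, 1, __ATOMIC_RELEASE);   /* seq even again  */
    pthread_mutex_unlock(&o->lock);
}

static uint64_t size_read(const struct sized_object *o)
{
    unsigned int start;
    uint64_t v;

    do {
        while ((start = __atomic_load_n(&o->seq, __ATOMIC_ACQUIRE)) & 1)
            ;                               /* writer in progress, retry */
        v = o->size;
    } while (__atomic_load_n(&o->seq, __ATOMIC_ACQUIRE) != start);
    return v;
}

int main(void)
{
    struct sized_object o = { 0, 0, PTHREAD_MUTEX_INITIALIZER };

    size_write(&o, 1ULL << 32);   /* a value that needs two stores on 32-bit */
    return size_read(&o) == (1ULL << 32) ? 0 : 1;
}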
static struct tcf_proto *tcf_chain_tp_insert_unique(struct tcf_chain *chain, struct tcf_proto *tp_new, u32 protocol, u32 prio, bool rtnl_held) { struct tcf_chain_info chain_info; struct tcf_proto *tp; int err = 0; mutex_lock(&chain->filter_chain_lock); if (tcf_proto_exists_destroying(chain, tp_new)) { mutex_unlock(&chain->filter_chain_lock); tcf_proto_destroy(tp_new, rtnl_held, false, NULL); return ERR_PTR(-EAGAIN); } tp = tcf_chain_tp_find(chain, &chain_info, protocol, prio, false); if (!tp) err = tcf_chain_tp_insert(chain, &chain_info, tp_new); mutex_unlock(&chain->filter_chain_lock); if (tp) { tcf_proto_destroy(tp_new, rtnl_held, false, NULL); tp_new = tp; } else if (err) { tcf_proto_destroy(tp_new, rtnl_held, false, NULL); tp_new = ERR_PTR(err); } return tp_new; }
0
[ "CWE-416" ]
linux
04c2a47ffb13c29778e2a14e414ad4cb5a5db4b5
109,894,080,995,589,500,000,000,000,000,000,000,000
33
net: sched: fix use-after-free in tc_new_tfilter() Whenever tc_new_tfilter() jumps back to replay: label, we need to make sure @q and @chain local variables are cleared again, or risk use-after-free as in [1] For consistency, apply the same fix in tc_ctl_chain() BUG: KASAN: use-after-free in mini_qdisc_pair_swap+0x1b9/0x1f0 net/sched/sch_generic.c:1581 Write of size 8 at addr ffff8880985c4b08 by task syz-executor.4/1945 CPU: 0 PID: 1945 Comm: syz-executor.4 Not tainted 5.17.0-rc1-syzkaller-00495-gff58831fa02d #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106 print_address_description.constprop.0.cold+0x8d/0x336 mm/kasan/report.c:255 __kasan_report mm/kasan/report.c:442 [inline] kasan_report.cold+0x83/0xdf mm/kasan/report.c:459 mini_qdisc_pair_swap+0x1b9/0x1f0 net/sched/sch_generic.c:1581 tcf_chain_head_change_item net/sched/cls_api.c:372 [inline] tcf_chain0_head_change.isra.0+0xb9/0x120 net/sched/cls_api.c:386 tcf_chain_tp_insert net/sched/cls_api.c:1657 [inline] tcf_chain_tp_insert_unique net/sched/cls_api.c:1707 [inline] tc_new_tfilter+0x1e67/0x2350 net/sched/cls_api.c:2086 rtnetlink_rcv_msg+0x80d/0xb80 net/core/rtnetlink.c:5583 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline] netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919 sock_sendmsg_nosec net/socket.c:705 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:725 ____sys_sendmsg+0x331/0x810 net/socket.c:2413 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467 __sys_sendmmsg+0x195/0x470 net/socket.c:2553 __do_sys_sendmmsg net/socket.c:2582 [inline] __se_sys_sendmmsg net/socket.c:2579 [inline] __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2579 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7f2647172059 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f2645aa5168 EFLAGS: 00000246 ORIG_RAX: 0000000000000133 RAX: ffffffffffffffda RBX: 00007f2647285100 RCX: 00007f2647172059 RDX: 040000000000009f RSI: 00000000200002c0 RDI: 0000000000000006 RBP: 00007f26471cc08d R08: 0000000000000000 R09: 0000000000000000 R10: 9e00000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007fffb3f7f02f R14: 00007f2645aa5300 R15: 0000000000022000 </TASK> Allocated by task 1944: kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38 kasan_set_track mm/kasan/common.c:45 [inline] set_alloc_info mm/kasan/common.c:436 [inline] ____kasan_kmalloc mm/kasan/common.c:515 [inline] ____kasan_kmalloc mm/kasan/common.c:474 [inline] __kasan_kmalloc+0xa9/0xd0 mm/kasan/common.c:524 kmalloc_node include/linux/slab.h:604 [inline] kzalloc_node include/linux/slab.h:726 [inline] qdisc_alloc+0xac/0xa10 net/sched/sch_generic.c:941 qdisc_create.constprop.0+0xce/0x10f0 net/sched/sch_api.c:1211 tc_modify_qdisc+0x4c5/0x1980 net/sched/sch_api.c:1660 rtnetlink_rcv_msg+0x413/0xb80 net/core/rtnetlink.c:5592 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline] netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919 sock_sendmsg_nosec 
net/socket.c:705 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:725 ____sys_sendmsg+0x331/0x810 net/socket.c:2413 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467 __sys_sendmmsg+0x195/0x470 net/socket.c:2553 __do_sys_sendmmsg net/socket.c:2582 [inline] __se_sys_sendmmsg net/socket.c:2579 [inline] __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2579 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae Freed by task 3609: kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38 kasan_set_track+0x21/0x30 mm/kasan/common.c:45 kasan_set_free_info+0x20/0x30 mm/kasan/generic.c:370 ____kasan_slab_free mm/kasan/common.c:366 [inline] ____kasan_slab_free+0x130/0x160 mm/kasan/common.c:328 kasan_slab_free include/linux/kasan.h:236 [inline] slab_free_hook mm/slub.c:1728 [inline] slab_free_freelist_hook+0x8b/0x1c0 mm/slub.c:1754 slab_free mm/slub.c:3509 [inline] kfree+0xcb/0x280 mm/slub.c:4562 rcu_do_batch kernel/rcu/tree.c:2527 [inline] rcu_core+0x7b8/0x1540 kernel/rcu/tree.c:2778 __do_softirq+0x29b/0x9c2 kernel/softirq.c:558 Last potentially related work creation: kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38 __kasan_record_aux_stack+0xbe/0xd0 mm/kasan/generic.c:348 __call_rcu kernel/rcu/tree.c:3026 [inline] call_rcu+0xb1/0x740 kernel/rcu/tree.c:3106 qdisc_put_unlocked+0x6f/0x90 net/sched/sch_generic.c:1109 tcf_block_release+0x86/0x90 net/sched/cls_api.c:1238 tc_new_tfilter+0xc0d/0x2350 net/sched/cls_api.c:2148 rtnetlink_rcv_msg+0x80d/0xb80 net/core/rtnetlink.c:5583 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline] netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919 sock_sendmsg_nosec net/socket.c:705 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:725 ____sys_sendmsg+0x331/0x810 net/socket.c:2413 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467 __sys_sendmmsg+0x195/0x470 net/socket.c:2553 __do_sys_sendmmsg net/socket.c:2582 [inline] __se_sys_sendmmsg net/socket.c:2579 [inline] __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2579 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae The buggy address belongs to the object at ffff8880985c4800 which belongs to the cache kmalloc-1k of size 1024 The buggy address is located 776 bytes inside of 1024-byte region [ffff8880985c4800, ffff8880985c4c00) The buggy address belongs to the page: page:ffffea0002617000 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x985c0 head:ffffea0002617000 order:3 compound_mapcount:0 compound_pincount:0 flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff) raw: 00fff00000010200 0000000000000000 dead000000000122 ffff888010c41dc0 raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected page_owner tracks the page as allocated page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 1941, ts 1038999441284, free_ts 1033444432829 prep_new_page mm/page_alloc.c:2434 [inline] get_page_from_freelist+0xa72/0x2f50 mm/page_alloc.c:4165 __alloc_pages+0x1b2/0x500 mm/page_alloc.c:5389 alloc_pages+0x1aa/0x310 mm/mempolicy.c:2271 alloc_slab_page mm/slub.c:1799 [inline] allocate_slab mm/slub.c:1944 [inline] new_slab+0x28a/0x3b0 mm/slub.c:2004 ___slab_alloc+0x87c/0xe90 
mm/slub.c:3018 __slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3105 slab_alloc_node mm/slub.c:3196 [inline] slab_alloc mm/slub.c:3238 [inline] __kmalloc+0x2fb/0x340 mm/slub.c:4420 kmalloc include/linux/slab.h:586 [inline] kzalloc include/linux/slab.h:715 [inline] __register_sysctl_table+0x112/0x1090 fs/proc/proc_sysctl.c:1335 neigh_sysctl_register+0x2c8/0x5e0 net/core/neighbour.c:3787 devinet_sysctl_register+0xb1/0x230 net/ipv4/devinet.c:2618 inetdev_init+0x286/0x580 net/ipv4/devinet.c:278 inetdev_event+0xa8a/0x15d0 net/ipv4/devinet.c:1532 notifier_call_chain+0xb5/0x200 kernel/notifier.c:84 call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:1919 call_netdevice_notifiers_extack net/core/dev.c:1931 [inline] call_netdevice_notifiers net/core/dev.c:1945 [inline] register_netdevice+0x1073/0x1500 net/core/dev.c:9698 veth_newlink+0x59c/0xa90 drivers/net/veth.c:1722 page last free stack trace: reset_page_owner include/linux/page_owner.h:24 [inline] free_pages_prepare mm/page_alloc.c:1352 [inline] free_pcp_prepare+0x374/0x870 mm/page_alloc.c:1404 free_unref_page_prepare mm/page_alloc.c:3325 [inline] free_unref_page+0x19/0x690 mm/page_alloc.c:3404 release_pages+0x748/0x1220 mm/swap.c:956 tlb_batch_pages_flush mm/mmu_gather.c:50 [inline] tlb_flush_mmu_free mm/mmu_gather.c:243 [inline] tlb_flush_mmu+0xe9/0x6b0 mm/mmu_gather.c:250 zap_pte_range mm/memory.c:1441 [inline] zap_pmd_range mm/memory.c:1490 [inline] zap_pud_range mm/memory.c:1519 [inline] zap_p4d_range mm/memory.c:1540 [inline] unmap_page_range+0x1d1d/0x2a30 mm/memory.c:1561 unmap_single_vma+0x198/0x310 mm/memory.c:1606 unmap_vmas+0x16b/0x2f0 mm/memory.c:1638 exit_mmap+0x201/0x670 mm/mmap.c:3178 __mmput+0x122/0x4b0 kernel/fork.c:1114 mmput+0x56/0x60 kernel/fork.c:1135 exit_mm kernel/exit.c:507 [inline] do_exit+0xa3c/0x2a30 kernel/exit.c:793 do_group_exit+0xd2/0x2f0 kernel/exit.c:935 __do_sys_exit_group kernel/exit.c:946 [inline] __se_sys_exit_group kernel/exit.c:944 [inline] __x64_sys_exit_group+0x3a/0x50 kernel/exit.c:944 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae Memory state around the buggy address: ffff8880985c4a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8880985c4a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb >ffff8880985c4b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff8880985c4b80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8880985c4c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc Fixes: 470502de5bdb ("net: sched: unlock rules update API") Signed-off-by: Eric Dumazet <[email protected]> Cc: Vlad Buslov <[email protected]> Cc: Jiri Pirko <[email protected]> Cc: Cong Wang <[email protected]> Reported-by: syzbot <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
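Stripped of the netlink details, the bug class in this record is a retry loop that jumps back to a label while local pointers still reference objects released on the failed attempt. A generic hedged sketch of the hazard and the remedy; the names are invented and this is not the tc_new_tfilter() code:

#include <stdio.h>
#include <stdlib.h>

struct resource { int id; };

static struct resource *acquire(int attempt)
{
    struct resource *r = malloc(sizeof(*r));

    if (r)
        r->id = attempt;
    return r;
}

static int try_operation(struct resource *r, int attempt)
{
    (void)r;
    return attempt == 0 ? -1 : 0;   /* pretend the first attempt must be replayed */
}

int main(void)
{
    struct resource *res = NULL;
    int attempt = 0;

replay:
    if (res == NULL)                /* only acquire when we hold nothing       */
        res = acquire(attempt);
    if (res == NULL)
        return 1;

    if (try_operation(res, attempt) < 0) {
        free(res);
        res = NULL;                 /* crucial: forget the freed object before */
        attempt++;                  /* jumping back, or the next pass would    */
        goto replay;                /* dereference it after free               */
    }

    printf("succeeded on attempt %d\n", res->id);
    free(res);
    return 0;
}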
unsigned char *skb_push(struct sk_buff *skb, unsigned int len) { skb->data -= len; skb->len += len; if (unlikely(skb->data<skb->head)) skb_under_panic(skb, len, __builtin_return_address(0)); return skb->data; }
0
[ "CWE-416" ]
net
36d5fe6a000790f56039afe26834265db0a3ad4c
318,382,583,613,764,440,000,000,000,000,000,000,000
8
core, nfqueue, openvswitch: Orphan frags in skb_zerocopy and handle errors skb_zerocopy can copy elements of the frags array between skbs, but it doesn't orphan them. Also, it doesn't handle errors, so this patch takes care of that as well, and modify the callers accordingly. skb_tx_error() is also added to the callers so they will signal the failed delivery towards the creator of the skb. Signed-off-by: Zoltan Kiss <[email protected]> Signed-off-by: David S. Miller <[email protected]>
void sk_free(struct sock *sk) { /* * We subtract one from sk_wmem_alloc and can know if * some packets are still in some tx queue. * If not null, sock_wfree() will call __sk_free(sk) later */ if (atomic_dec_and_test(&sk->sk_wmem_alloc)) __sk_free(sk); }
0
[ "CWE-119", "CWE-787" ]
linux
b98b0bc8c431e3ceb4b26b0dfc8db509518fb290
135,872,164,155,454,400,000,000,000,000,000,000,000
10
net: avoid signed overflows for SO_{SND|RCV}BUFFORCE CAP_NET_ADMIN users should not be allowed to set negative sk_sndbuf or sk_rcvbuf values, as it can lead to various memory corruptions, crashes, OOM... Note that before commit 82981930125a ("net: cleanups in sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF and SO_RCVBUF were vulnerable. This needs to be backported to all known linux kernels. Again, many thanks to syzkaller team for discovering this gem. Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Andrey Konovalov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
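The overflow here comes from doubling an attacker-chosen int and from comparing it in unsigned arithmetic, which lets a negative request become a negative buffer size. A hedged userspace sketch of the unsafe and safe variants; the floor constant is illustrative and none of this is the kernel's sock_setsockopt() code:

#include <limits.h>
#include <stdio.h>

#define MIN_BUF 4608    /* illustrative floor, not the kernel's SOCK_MIN_SNDBUF */

/* Unsafe: the max is taken in unsigned arithmetic, so val = -1 looks
 * enormous, wins the comparison, and is stored back as a negative size. */
static int set_bufsize_unsafe(int val)
{
    unsigned int doubled = (unsigned int)val * 2u;

    return (int)(doubled > MIN_BUF ? doubled : MIN_BUF);
}

/* Safe: reject negatives and cap the request so doubling cannot overflow. */
static int set_bufsize_safe(int val)
{
    if (val < 0)
        val = 0;
    if (val > INT_MAX / 2)
        val = INT_MAX / 2;
    return val * 2 > MIN_BUF ? val * 2 : MIN_BUF;
}

int main(void)
{
    printf("unsafe(-1) = %d\n", set_bufsize_unsafe(-1));   /* negative capacity */
    printf("safe(-1)   = %d\n", set_bufsize_safe(-1));     /* clamped to floor  */
    return 0;
}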
static double mp_ellipse(_cimg_math_parser& mp) { const unsigned int i_end = (unsigned int)mp.opcode[2]; unsigned int ind = (unsigned int)mp.opcode[3]; if (ind!=~0U) ind = (unsigned int)cimg::mod((int)_mp_arg(3),mp.listin.width()); CImg<T> &img = ind==~0U?mp.imgout:mp.listout[ind]; CImg<T> color(img._spectrum,1,1,1,0); bool is_invalid_arguments = false, is_outlined = false; float r1 = 0, r2 = 0, angle = 0, opacity = 1; unsigned int i = 4, pattern = ~0U; int x0 = 0, y0 = 0; if (i>=i_end) is_invalid_arguments = true; else { x0 = (int)cimg::round(_mp_arg(i++)); if (i>=i_end) is_invalid_arguments = true; else { y0 = (int)cimg::round(_mp_arg(i++)); if (i>=i_end) is_invalid_arguments = true; else { r1 = (float)_mp_arg(i++); if (i>=i_end) r2 = r1; else { r2 = (float)_mp_arg(i++); if (i<i_end) { angle = (float)_mp_arg(i++); if (i<i_end) { opacity = (float)_mp_arg(i++); if (r1<0 && r2<0) { pattern = (unsigned int)_mp_arg(i++); is_outlined = true; r1 = -r1; r2 = -r2; } if (i<i_end) { cimg_forX(color,k) if (i<i_end) color[k] = (T)_mp_arg(i++); else { color.resize(k,1,1,1,-1); break; } color.resize(img._spectrum,1,1,1,0,2); } } } } } } } if (!is_invalid_arguments) { if (is_outlined) img.draw_ellipse(x0,y0,r1,r2,angle,color._data,opacity,pattern); else img.draw_ellipse(x0,y0,r1,r2,angle,color._data,opacity); } else { CImg<doubleT> args(i_end - 4); cimg_forX(args,k) args[k] = _mp_arg(4 + k); if (ind==~0U) throw CImgArgumentException("[" cimg_appname "_math_parser] CImg<%s>: Function 'ellipse()': " "Invalid arguments '%s'. ", mp.imgin.pixel_type(),args.value_string()._data); else throw CImgArgumentException("[" cimg_appname "_math_parser] CImg<%s>: Function 'ellipse()': " "Invalid arguments '#%u%s%s'. ", mp.imgin.pixel_type(),ind,args._width?",":"",args.value_string()._data); } return cimg::type<double>::nan();
0
[ "CWE-119", "CWE-787" ]
CImg
ac8003393569aba51048c9d67e1491559877b1d1
114,619,085,217,211,420,000,000,000,000,000,000,000
59
.
void __init init_vdso_image(const struct vdso_image *image) { int i; int npages = (image->size) / PAGE_SIZE; BUG_ON(image->size % PAGE_SIZE != 0); for (i = 0; i < npages; i++) image->text_mapping.pages[i] = virt_to_page(image->data + i*PAGE_SIZE); apply_alternatives((struct alt_instr *)(image->data + image->alt), (struct alt_instr *)(image->data + image->alt + image->alt_len)); }
0
[]
linux
394f56fe480140877304d342dec46d50dc823d46
8,246,631,058,801,701,000,000,000,000,000,000,000
14
x86_64, vdso: Fix the vdso address randomization algorithm The theory behind vdso randomization is that it's mapped at a random offset above the top of the stack. To avoid wasting a page of memory for an extra page table, the vdso isn't supposed to extend past the lowest PMD into which it can fit. Other than that, the address should be a uniformly distributed address that meets all of the alignment requirements. The current algorithm is buggy: the vdso has about a 50% probability of being at the very end of a PMD. The current algorithm also has a decent chance of failing outright due to incorrect handling of the case where the top of the stack is near the top of its PMD. This fixes the implementation. The paxtest estimate of vdso "randomisation" improves from 11 bits to 18 bits. (Disclaimer: I don't know what the paxtest code is actually calculating.) It's worth noting that this algorithm is inherently biased: the vdso is more likely to end up near the end of its PMD than near the beginning. Ideally we would either nix the PMD sharing requirement or jointly randomize the vdso and the stack to reduce the bias. In the mean time, this is a considerable improvement with basically no risk of compatibility issues, since the allowed outputs of the algorithm are unchanged. As an easy test, doing this: for i in `seq 10000` do grep -P vdso /proc/self/maps |cut -d- -f1 done |sort |uniq -d used to produce lots of output (1445 lines on my most recent run). A tiny subset looks like this: 7fffdfffe000 7fffe01fe000 7fffe05fe000 7fffe07fe000 7fffe09fe000 7fffe0bfe000 7fffe0dfe000 Note the suspicious fe000 endings. With the fix, I get a much more palatable 76 repeated addresses. Reviewed-by: Kees Cook <[email protected]> Cc: [email protected] Signed-off-by: Andy Lutomirski <[email protected]>
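The general recipe for the kind of randomization this message describes is to count how many aligned positions fit in the window and pick one of them uniformly, rather than adding a random offset and clamping (which piles results up at the end of the range). A hedged standalone sketch; rand() stands in for a real entropy source and this is not the kernel's vdso code:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Choose a page-aligned load address for an image of image_size bytes,
 * uniformly among every position that fits inside [lo, hi). */
static uintptr_t pick_aligned(uintptr_t lo, uintptr_t hi, size_t image_size)
{
    uintptr_t first = (lo + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    uintptr_t last  = (hi - image_size) & ~(PAGE_SIZE - 1);
    uintptr_t slots;

    if (last < first)
        return first;                         /* window too small: no choice */
    slots = ((last - first) >> PAGE_SHIFT) + 1;
    return first + (((uintptr_t)rand() % slots) << PAGE_SHIFT);
}

int main(void)
{
    uintptr_t a = pick_aligned(0x7f0000001234UL, 0x7f0000200000UL, 2 * PAGE_SIZE);

    printf("%#lx\n", (unsigned long)a);
    return 0;
}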
asmlinkage void sparc_breakpoint(struct pt_regs *regs) { siginfo_t info; if (test_thread_flag(TIF_32BIT)) { regs->tpc &= 0xffffffff; regs->tnpc &= 0xffffffff; } #ifdef DEBUG_SPARC_BREAKPOINT printk ("TRAP: Entering kernel PC=%lx, nPC=%lx\n", regs->tpc, regs->tnpc); #endif info.si_signo = SIGTRAP; info.si_errno = 0; info.si_code = TRAP_BRKPT; info.si_addr = (void __user *)regs->tpc; info.si_trapno = 0; force_sig_info(SIGTRAP, &info, current); #ifdef DEBUG_SPARC_BREAKPOINT printk ("TRAP: Returning to space: PC=%lx nPC=%lx\n", regs->tpc, regs->tnpc); #endif }
0
[]
linux
5a0efea09f42f7c92bd98a38d66b4dff9589266b
214,344,744,796,657,830,000,000,000,000,000,000,000
21
sparc64: Sharpen address space randomization calculations. A recent patch to the x86 randomization code caused me to take a quick look at what we do on sparc64, and in doing so I noticed that we sometimes calculate a non-page-aligned randomization value and stick it into mmap_base. I also noticed that since I copied the logic over from PowerPC, the powerpc code has tweaked the randomization ranges in ways that would benefit us as well. For one thing, we should allow up to at least 8MB of randomization otherwise huge-page regions when HPAGE_SIZE is 4MB never randomize at all. And on the 64-bit side we were using up to 4GB. Tone it down to 1GB as 4GB can result in a lot of address space wastage. Finally, make sure all computations are unsigned. Signed-off-by: David S. Miller <[email protected]>
bool send_result_set_metadata(List<Item> &list, uint flags) { return 0; }
0
[ "CWE-416" ]
server
4681b6f2d8c82b4ec5cf115e83698251963d80d5
24,411,046,369,725,553,000,000,000,000,000,000,000
1
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob the bug was that in_vector array in Item_func_in was allocated in the statement arena, not in the table->expr_arena. revert part of the 5acd391e8b2d. Instead, change the arena correctly in fix_all_session_vcol_exprs(). Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force item tree changes to be rolled back (because they were allocated in the wrong arena and didn't persist. now they do)
PHP_METHOD(PharFileInfo, isCRCChecked) { PHAR_ENTRY_OBJECT(); if (zend_parse_parameters_none() == FAILURE) { return; } RETURN_BOOL(entry_obj->ent.entry->is_crc_checked); }
0
[ "CWE-119" ]
php-src
13ad4d3e971807f9a58ab5933182907dc2958539
156,147,248,557,108,950,000,000,000,000,000,000,000
10
Fix bug #71354 - remove UMR when size is 0
void _xml_unparsedEntityDeclHandler(void *userData, const XML_Char *entityName, const XML_Char *base, const XML_Char *systemId, const XML_Char *publicId, const XML_Char *notationName) { xml_parser *parser = (xml_parser *)userData; if (parser && parser->unparsedEntityDeclHandler) { zval *retval, *args[6]; args[0] = _xml_resource_zval(parser->index); args[1] = _xml_xmlchar_zval(entityName, 0, parser->target_encoding); args[2] = _xml_xmlchar_zval(base, 0, parser->target_encoding); args[3] = _xml_xmlchar_zval(systemId, 0, parser->target_encoding); args[4] = _xml_xmlchar_zval(publicId, 0, parser->target_encoding); args[5] = _xml_xmlchar_zval(notationName, 0, parser->target_encoding); if ((retval = xml_call_handler(parser, parser->unparsedEntityDeclHandler, parser->unparsedEntityDeclPtr, 6, args))) { zval_ptr_dtor(&retval); } } }
0
[ "CWE-119" ]
php-src
1248079be837808da4c97364fb3b4c96c8015fbf
276,586,210,940,896,650,000,000,000,000,000,000,000
23
Fix bug #72099: xml_parse_into_struct segmentation fault
static netdev_tx_t wanxl_xmit(struct sk_buff *skb, struct net_device *dev) { port_t *port = dev_to_port(dev); desc_t *desc; spin_lock(&port->lock); desc = &get_status(port)->tx_descs[port->tx_out]; if (desc->stat != PACKET_EMPTY) { /* should never happen - previous xmit should stop queue */ #ifdef DEBUG_PKT printk(KERN_DEBUG "%s: transmitter buffer full\n", dev->name); #endif netif_stop_queue(dev); spin_unlock(&port->lock); return NETDEV_TX_BUSY; /* request packet to be queued */ } #ifdef DEBUG_PKT printk(KERN_DEBUG "%s TX(%i):", dev->name, skb->len); debug_frame(skb); #endif port->tx_skbs[port->tx_out] = skb; desc->address = pci_map_single(port->card->pdev, skb->data, skb->len, PCI_DMA_TODEVICE); desc->length = skb->len; desc->stat = PACKET_FULL; writel(1 << (DOORBELL_TO_CARD_TX_0 + port->node), port->card->plx + PLX_DOORBELL_TO_CARD); port->tx_out = (port->tx_out + 1) % TX_BUFFERS; if (get_status(port)->tx_descs[port->tx_out].stat != PACKET_EMPTY) { netif_stop_queue(dev); #ifdef DEBUG_PKT printk(KERN_DEBUG "%s: transmitter buffer full\n", dev->name); #endif } spin_unlock(&port->lock); return NETDEV_TX_OK; }
0
[ "CWE-399" ]
linux
2b13d06c9584b4eb773f1e80bbaedab9a1c344e1
242,873,236,589,763,140,000,000,000,000,000,000,000
43
wanxl: fix info leak in ioctl The wanxl_ioctl() code fails to initialize the two padding bytes of struct sync_serial_settings after the ->loopback member. Add an explicit memset(0) before filling the structure to avoid the info leak. Signed-off-by: Salva Peiró <[email protected]> Signed-off-by: David S. Miller <[email protected]>
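This is the same family of leak as the shadow-copy record earlier in this set, only here the uninitialized bytes are the compiler-inserted padding inside a struct that the ioctl handler copies to user space. A hedged sketch with an invented structure, not the actual wanxl ioctl code:

#include <stdio.h>
#include <string.h>

struct serial_settings {
    unsigned int   clock_rate;
    unsigned int   clock_type;
    unsigned short loopback;
    /* padding bytes typically follow 'loopback' here */
};

static void fill_settings(struct serial_settings *s)
{
    /* Without this memset the padding keeps whatever was on the stack
     * and would be copied verbatim to the caller of the ioctl. */
    memset(s, 0, sizeof(*s));
    s->clock_rate = 64000;
    s->clock_type = 1;
    s->loopback   = 0;
}

int main(void)
{
    struct serial_settings line;

    fill_settings(&line);
    /* in a driver this is where copy_to_user(..., &line, sizeof(line)) runs */
    printf("%u %u %u\n", line.clock_rate, line.clock_type, (unsigned)line.loopback);
    return 0;
}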
void _cgsem_post(cgsem_t *cgsem, const char *file, const char *func, const int line) { const char buf = 1; int ret; retry: ret = write(cgsem->pipefd[1], &buf, 1); if (unlikely(ret == 0)) applog(LOG_WARNING, "Failed to write errno=%d" IN_FMT_FFL, errno, file, func, line); else if (unlikely(ret < 0 && interrupted)) goto retry; }
0
[ "CWE-20", "CWE-703" ]
sgminer
910c36089940e81fb85c65b8e63dcd2fac71470c
268,932,362,951,537,600,000,000,000,000,000,000,000
12
stratum: parse_notify(): Don't die on malformed bbversion/prev_hash/nbit/ntime. Might have introduced a memory leak, don't have time to check. :( Should the other hex2bin()'s be checked? Thanks to Mick Ayzenberg <mick.dejavusecurity.com> for finding this.
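The fix the message asks about amounts to validating every hex2bin() conversion of attacker-controlled stratum fields and rejecting the message instead of exiting. A hedged sketch of a checked converter and one call site; these are not sgminer's actual helpers:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Returns false on wrong length or non-hex characters instead of
 * aborting the process on malformed input. */
static bool hex2bin_checked(unsigned char *out, const char *hex, size_t outlen)
{
    size_t i;

    if (strlen(hex) != outlen * 2)
        return false;
    for (i = 0; i < outlen; i++) {
        int hi = hexval(hex[2 * i]);
        int lo = hexval(hex[2 * i + 1]);

        if (hi < 0 || lo < 0)
            return false;
        out[i] = (unsigned char)((hi << 4) | lo);
    }
    return true;
}

int main(void)
{
    unsigned char prev_hash[4];

    if (!hex2bin_checked(prev_hash, "deadbeef", sizeof(prev_hash)))
        return 1;               /* drop the notify message, don't die */
    printf("%02x%02x%02x%02x\n", prev_hash[0], prev_hash[1], prev_hash[2], prev_hash[3]);
    return 0;
}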
static int packet_do_bind(struct sock *sk, const char *name, int ifindex, __be16 proto) { struct packet_sock *po = pkt_sk(sk); struct net_device *dev_curr; __be16 proto_curr; bool need_rehook; struct net_device *dev = NULL; int ret = 0; bool unlisted = false; lock_sock(sk); spin_lock(&po->bind_lock); rcu_read_lock(); if (po->fanout) { ret = -EINVAL; goto out_unlock; } if (name) { dev = dev_get_by_name_rcu(sock_net(sk), name); if (!dev) { ret = -ENODEV; goto out_unlock; } } else if (ifindex) { dev = dev_get_by_index_rcu(sock_net(sk), ifindex); if (!dev) { ret = -ENODEV; goto out_unlock; } } dev_hold(dev); proto_curr = po->prot_hook.type; dev_curr = po->prot_hook.dev; need_rehook = proto_curr != proto || dev_curr != dev; if (need_rehook) { if (po->running) { rcu_read_unlock(); /* prevents packet_notifier() from calling * register_prot_hook() */ WRITE_ONCE(po->num, 0); __unregister_prot_hook(sk, true); rcu_read_lock(); dev_curr = po->prot_hook.dev; if (dev) unlisted = !dev_get_by_index_rcu(sock_net(sk), dev->ifindex); } BUG_ON(po->running); WRITE_ONCE(po->num, proto); po->prot_hook.type = proto; if (unlikely(unlisted)) { dev_put(dev); po->prot_hook.dev = NULL; WRITE_ONCE(po->ifindex, -1); packet_cached_dev_reset(po); } else { po->prot_hook.dev = dev; WRITE_ONCE(po->ifindex, dev ? dev->ifindex : 0); packet_cached_dev_assign(po, dev); } } dev_put(dev_curr); if (proto == 0 || !need_rehook) goto out_unlock; if (!unlisted && (!dev || (dev->flags & IFF_UP))) { register_prot_hook(sk); } else { sk->sk_err = ENETDOWN; if (!sock_flag(sk, SOCK_DEAD)) sk_error_report(sk); } out_unlock: rcu_read_unlock(); spin_unlock(&po->bind_lock); release_sock(sk); return ret; }
0
[ "CWE-415" ]
linux
ec6af094ea28f0f2dda1a6a33b14cd57e36a9755
147,927,606,986,697,980,000,000,000,000,000,000,000
90
net/packet: rx_owner_map depends on pg_vec Packet sockets may switch ring versions. Avoid misinterpreting state between versions, whose fields share a union. rx_owner_map is only allocated with a packet ring (pg_vec) and both are swapped together. If pg_vec is NULL, meaning no packet ring was allocated, then neither was rx_owner_map. And the field may be old state from a tpacket_v3. Fixes: 61fad6816fc1 ("net/packet: tpacket_rcv: avoid a producer race condition") Reported-by: Syzbot <[email protected]> Signed-off-by: Willem de Bruijn <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
void OPENSSL_cpuid_setup(void) {}
0
[ "CWE-310" ]
openssl
270881316664396326c461ec7a124aec2c6cc081
65,714,802,550,548,320,000,000,000,000,000,000,000
1
Add and use a constant-time memcmp. This change adds CRYPTO_memcmp, which compares two vectors of bytes in an amount of time that's independent of their contents. It also changes several MAC compares in the code to use this over the standard memcmp, which may leak information about the size of a matching prefix. (cherry picked from commit 2ee798880a246d648ecddadc5b91367bee4a5d98) Conflicts: crypto/crypto.h ssl/t1_lib.c (cherry picked from commit dc406b59f3169fe191e58906df08dce97edb727c) Conflicts: crypto/crypto.h ssl/d1_pkt.c ssl/s3_pkt.c
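The defining property of the comparison this message introduces is that its running time depends only on the length, never on where the first mismatching byte sits, so a MAC check cannot be probed one byte at a time. A hedged sketch of such a comparison; it is illustrative and not OpenSSL's CRYPTO_memcmp source:

#include <stddef.h>
#include <stdio.h>

/* Returns 0 iff the buffers are equal.  Every byte is always visited, so
 * the timing does not reveal the length of a matching prefix the way an
 * early-exit memcmp() does. */
static int consttime_memcmp(const void *a, const void *b, size_t len)
{
    const unsigned char *pa = a;
    const unsigned char *pb = b;
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < len; i++)
        diff |= pa[i] ^ pb[i];
    return diff;                /* 0 on equality, nonzero otherwise */
}

int main(void)
{
    unsigned char expected_mac[16] = {0};
    unsigned char received_mac[16] = {0};

    received_mac[15] = 1;
    printf("%s\n", consttime_memcmp(expected_mac, received_mac, 16) ? "mismatch" : "match");
    return 0;
}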
PHP_METHOD(PharFileInfo, getMetadata) { PHAR_ENTRY_OBJECT(); if (zend_parse_parameters_none() == FAILURE) { return; } if (entry_obj->ent.entry->metadata) { if (entry_obj->ent.entry->is_persistent) { zval *ret; char *buf = estrndup((char *) entry_obj->ent.entry->metadata, entry_obj->ent.entry->metadata_len); /* assume success, we would have failed before */ phar_parse_metadata(&buf, &ret, entry_obj->ent.entry->metadata_len TSRMLS_CC); efree(buf); RETURN_ZVAL(ret, 0, 1); } RETURN_ZVAL(entry_obj->ent.entry->metadata, 1, 0); } }
0
[ "CWE-416" ]
php-src
b2cf3f064b8f5efef89bb084521b61318c71781b
315,664,415,278,208,120,000,000,000,000,000,000,000
20
Fixed bug #68901 (use after free)
static int propfind_schedtag(const xmlChar *name, xmlNsPtr ns, struct propfind_ctx *fctx, xmlNodePtr resp __attribute__((unused)), struct propstat propstat[], void *rock __attribute__((unused))) { struct caldav_data *cdata = (struct caldav_data *) fctx->data; if (!cdata->sched_tag) return HTTP_NOT_FOUND; /* add DQUOTEs */ buf_reset(&fctx->buf); buf_printf(&fctx->buf, "\"%s\"", cdata->sched_tag); xml_add_prop(HTTP_OK, fctx->ns[NS_DAV], &propstat[PROPSTAT_OK], name, ns, BAD_CAST buf_cstring(&fctx->buf), 0); return 0; }
0
[ "CWE-787" ]
cyrus-imapd
a5779db8163b99463e25e7c476f9cbba438b65f3
20,346,963,153,003,592,000,000,000,000,000,000,000
19
HTTP: don't overrun buffer when parsing strings with sscanf()
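The overrun described here is the classic unbounded %s (or scanset) conversion: sscanf() keeps writing until the input runs out, regardless of the destination size, unless the format carries a field width one byte smaller than the buffer. A hedged minimal sketch, with an invented header string:

#include <stdio.h>

int main(void)
{
    const char *header = "If-Match: \"1234567890abcdef-a-very-long-opaque-tag-that-keeps-going\"";
    char tag[32];

    /* Unsafe: sscanf(header, "If-Match: \"%[^\"]\"", tag) writes past tag[31]
     * for long tags.  Safe: the width 31 leaves room for the terminator. */
    if (sscanf(header, "If-Match: \"%31[^\"]\"", tag) == 1)
        printf("tag = %s\n", tag);
    return 0;
}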
make_ks_from_key_data(krb5_context context, kadm5_key_data *key_data, int n_key_data, krb5_key_salt_tuple **out) { int i; krb5_key_salt_tuple *ks; *out = NULL; ks = calloc(n_key_data, sizeof(*ks)); if (ks == NULL) return ENOMEM; for (i = 0; i < n_key_data; i++) { ks[i].ks_enctype = key_data[i].key.enctype; ks[i].ks_salttype = key_data[i].salt.type; } *out = ks; return 0; }
0
[ "CWE-476", "CWE-90" ]
krb5
e1caf6fb74981da62039846931ebdffed71309d1
2,265,319,747,938,208,500,000,000,000,000,000,000
19
Fix flaws in LDAP DN checking KDB_TL_USER_INFO tl-data is intended to be internal to the LDAP KDB module, and not used in disk or wire principal entries. Prevent kadmin clients from sending KDB_TL_USER_INFO tl-data by giving it a type number less than 256 and filtering out type numbers less than 256 in kadm5_create_principal_3(). (We already filter out low type numbers in kadm5_modify_principal()). In the LDAP KDB module, if containerdn and linkdn are both specified in a put_principal operation, check both linkdn and the computed standalone_principal_dn for container membership. To that end, factor out the checks into helper functions and call them on all applicable client-influenced DNs. CVE-2018-5729: In MIT krb5 1.6 or later, an authenticated kadmin user with permission to add principals to an LDAP Kerberos database can cause a null dereference in kadmind, or circumvent a DN container check, by supplying tagged data intended to be internal to the database module. Thanks to Sharwan Ram and Pooja Anil for discovering the potential null dereference. CVE-2018-5730: In MIT krb5 1.6 or later, an authenticated kadmin user with permission to add principals to an LDAP Kerberos database can circumvent a DN containership check by supplying both a "linkdn" and "containerdn" database argument, or by supplying a DN string which is a left extension of a container DN string but is not hierarchically within the container DN. ticket: 8643 (new) tags: pullup target_version: 1.16-next target_version: 1.15-next
void ConnectionImpl::StreamImpl::submitTrailers(const HeaderMap& trailers) { std::vector<nghttp2_nv> final_headers; buildHeaders(final_headers, trailers); int rc = nghttp2_submit_trailer(parent_.session_, stream_id_, final_headers.data(), final_headers.size()); ASSERT(rc == 0); }
0
[ "CWE-400" ]
envoy
0e49a495826ea9e29134c1bd54fdeb31a034f40c
105,307,371,093,787,550,000,000,000,000,000,000,000
7
http/2: add stats and stream flush timeout (#139) This commit adds a new stream flush timeout to guard against a remote server that does not open window once an entire stream has been buffered for flushing. Additional stats have also been added to better understand the codecs view of active streams as well as amount of data buffered. Signed-off-by: Matt Klein <[email protected]>
void mono_reflection_initialize_generic_parameter (MonoReflectionGenericParam *gparam) { g_assert_not_reached (); }
0
[ "CWE-20" ]
mono
4905ef1130feb26c3150b28b97e4a96752e0d399
299,370,511,235,845,300,000,000,000,000,000,000,000
4
Handle invalid instantiation of generic methods. * verify.c: Add new function to internal verifier API to check method instantiations. * reflection.c (mono_reflection_bind_generic_method_parameters): Check the instantiation before returning it. Fixes #655847