Dataset schema (eight columns per record):

  func        string   lengths 0 to 484k
  target      int64    0 to 1
  cwe         list     lengths 0 to 4
  project     string   799 classes
  commit_id   string   lengths 40 to 40
  hash        float64  1,215,700,430,453,689,100,000,000 to 340,281,914,521,452,260,000,000,000,000
  size        int64    1 to 24k
  message     string   lengths 0 to 13.3k
void free_http_req_rules(struct list *r) { struct http_req_rule *tr, *pr; list_for_each_entry_safe(pr, tr, r, list) { LIST_DEL(&pr->list); if (pr->action == HTTP_REQ_ACT_AUTH) free(pr->arg.auth.realm); free(pr); } }
0
[]
haproxy
aae75e3279c6c9bd136413a72dafdcd4986bb89a
309,125,466,764,980,670,000,000,000,000,000,000,000
11
BUG/CRITICAL: using HTTP information in tcp-request content may crash the process During normal HTTP request processing, request buffers are realigned if there are less than global.maxrewrite bytes available after them, in order to leave enough room for rewriting headers after the request. This is done in http_wait_for_request(). However, if some HTTP inspection happens during a "tcp-request content" rule, this realignment is not performed. In theory this is not a problem because empty buffers are always aligned and TCP inspection happens at the beginning of a connection. But with HTTP keep-alive, it also happens at the beginning of each subsequent request. So if a second request was pipelined by the client before the first one had a chance to be forwarded, the second request will not be realigned. Then, http_wait_for_request() will not perform such a realignment either because the request was already parsed and marked as such. The consequence of this is that the rewrite of a sufficient number of such pipelined, unaligned requests may leave less room past the request being processed than the configured reserve, which can lead to a buffer overflow if request processing appends some data past the end of the buffer. A number of conditions are required for the bug to be triggered: - HTTP keep-alive must be enabled; - HTTP inspection in TCP rules must be used; - some request appending rules are needed (reqadd, x-forwarded-for) - since empty buffers are always realigned, the client must pipeline enough requests so that the buffer always contains something till the point where there is no more room for rewriting. While such a configuration is quite unlikely to be met (which is confirmed by the bug's lifetime), a few people do use these features together for very specific usages. And more importantly, writing such a configuration and the request to attack it is trivial. A quick workaround consists in forcing keep-alive off by adding "option httpclose" or "option forceclose" in the frontend. Alternatively, disabling HTTP-based TCP inspection rules is enough if the application supports it. At first glance, this bug does not look like it could lead to remote code execution, as the overflowing part is controlled by the configuration and not by the user. But some deeper analysis should be performed to confirm this. And anyway, corrupting the process' memory and crashing it is quite trivial. Special thanks go to Yves Lafon from the W3C who reported this bug and deployed significant efforts to collect the relevant data needed to understand it in less than one week. CVE-2013-1912 was assigned to this issue. Note that 1.4 is also affected so the fix must be backported.
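The guard the message describes is easy to state in isolation. A hedged sketch, not haproxy's actual patch; the buffer fields and the buffer_realign() helper are assumptions for illustration:

    /* Before running HTTP inspection from a tcp-request content rule,
     * apply the same realignment http_wait_for_request() performs:
     * if fewer than maxrewrite bytes remain past the pending data,
     * move that data back to the start of the buffer. */
    if (buf->size - buf->len < global.maxrewrite && buf->len != 0)
            buffer_realign(buf);    /* assumed helper; keeps room for header rewrites */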
static int process_all_tables_in_db(char *database) { MYSQL_RES *res; MYSQL_ROW row; uint num_columns; LINT_INIT(res); if (use_db(database)) return 1; if ((mysql_query(sock, "SHOW /*!50002 FULL*/ TABLES") && mysql_query(sock, "SHOW TABLES")) || !(res= mysql_store_result(sock))) { my_printf_error(0, "Error: Couldn't get table list for database %s: %s", MYF(0), database, mysql_error(sock)); return 1; } num_columns= mysql_num_fields(res); if (opt_all_in_1 && what_to_do != DO_UPGRADE) { /* We need table list in form `a`, `b`, `c` that's why we need 2 more chars added to to each table name space is for more readable output in logs and in case of error */ char *tables, *end; uint tot_length = 0; while ((row = mysql_fetch_row(res))) tot_length+= fixed_name_length(row[0]) + 2; mysql_data_seek(res, 0); if (!(tables=(char *) my_malloc(PSI_NOT_INSTRUMENTED, sizeof(char)*tot_length+4, MYF(MY_WME)))) { mysql_free_result(res); return 1; } for (end = tables + 1; (row = mysql_fetch_row(res)) ;) { if ((num_columns == 2) && (strcmp(row[1], "VIEW") == 0)) continue; end= fix_table_name(end, row[0]); *end++= ','; } *--end = 0; if (tot_length) handle_request_for_tables(tables + 1, tot_length - 1); my_free(tables); } else { while ((row = mysql_fetch_row(res))) { /* Skip views if we don't perform renaming. */ if ((what_to_do != DO_UPGRADE) && (num_columns == 2) && (strcmp(row[1], "VIEW") == 0)) continue; handle_request_for_tables(row[0], fixed_name_length(row[0])); } } mysql_free_result(res); return 0; } /* process_all_tables_in_db */
0
[ "CWE-284", "CWE-295" ]
mysql-server
3bd5589e1a5a93f9c224badf983cd65c45215390
296,241,069,683,562,500,000,000,000,000,000,000,000
68
WL#6791 : Redefine client --ssl option to imply enforced encryption # Changed the meaning of the --ssl=1 option of all client binaries to mean force ssl, not try ssl and fail over to unencrypted # Added a new MYSQL_OPT_SSL_ENFORCE mysql_options() option to specify that an ssl connection is required. # Added a new macro SSL_SET_OPTIONS() to the client SSL handling headers that sets all the relevant SSL options at once. # Revamped all of the current native clients to use the new macro # Removed some Windows line endings. # Added proper handling of the new option into the ssl helper headers. # If SSL is mandatory assume that the media is secure enough for the sha256 plugin to do unencrypted password exchange even before establishing a connection. # Set the default ssl cipher to DHE-RSA-AES256-SHA if none is specified. # updated test cases that require a non-default cipher to spawn a mysql command line tool binary since mysqltest has no support for specifying ciphers. # updated the replication slave connection code to always enforce SSL if any of the SSL config options is present. # test cases added and updated. # added a mysql_get_option() API to return mysql_options() values. Used the new API inside the sha256 plugin. # Fixed compilation warnings because of unused variables. # Fixed test failures (mysql_ssl and bug13115401) # Fixed whitespace issues. # Fully implemented the mysql_get_option() function. # Added a test case for mysql_get_option() # fixed some trailing whitespace issues # fixed some uint/int warnings in mysql_client_test.c # removed shared memory option from non-windows get_options tests # moved MYSQL_OPT_LOCAL_INFILE to the uint options
_warc_skip(struct archive_read *a) { struct warc_s *w = a->format->data; __archive_read_consume(a, w->cntlen + 4U/*\r\n\r\n separator*/); w->cntlen = 0U; w->cntoff = 0U; return (ARCHIVE_OK); }
0
[ "CWE-415", "CWE-787" ]
libarchive
9c84b7426660c09c18cc349f6d70b5f8168b5680
205,992,419,703,010,840,000,000,000,000,000,000,000
9
warc: consume data once read The warc decoder only used read ahead, it wouldn't actually consume data that had previously been printed. This means that if you specify an invalid content length, it will just reprint the same data over and over and over again until it hits the desired length. This means that a WARC resource with e.g. Content-Length: 666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666665 but only a few hundred bytes of data, causes a quasi-infinite loop. Consume data in subsequent calls to _warc_read. Found with an AFL + afl-rb + qsym setup.
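A hedged sketch of the consume-once-read pattern the message describes; the `unconsumed` bookkeeping field is an assumption, not libarchive's actual struct layout:

    /* Sketch: before reading ahead again, consume whatever the previous
     * _warc_read() call handed to the caller, so the stream position
     * advances instead of replaying the same window forever. */
    static int
    _warc_read(struct archive_read *a, const void **buf, size_t *bsz, int64_t *off)
    {
            struct warc_s *w = a->format->data;

            if (w->unconsumed) {                    /* assumed field */
                    __archive_read_consume(a, w->unconsumed);
                    w->unconsumed = 0U;
            }
            /* ... read ahead as before, record the amount handed out in
             * w->unconsumed, and return it to the caller ... */
            return (ARCHIVE_OK);
    }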
private int magiccheck(struct magic_set *ms, struct magic *m) { uint64_t l = m->value.q; uint64_t v; float fl, fv; double dl, dv; int matched; union VALUETYPE *p = &ms->ms_value; switch (m->type) { case FILE_BYTE: v = p->b; break; case FILE_SHORT: case FILE_BESHORT: case FILE_LESHORT: v = p->h; break; case FILE_LONG: case FILE_BELONG: case FILE_LELONG: case FILE_MELONG: case FILE_DATE: case FILE_BEDATE: case FILE_LEDATE: case FILE_MEDATE: case FILE_LDATE: case FILE_BELDATE: case FILE_LELDATE: case FILE_MELDATE: v = p->l; break; case FILE_QUAD: case FILE_LEQUAD: case FILE_BEQUAD: case FILE_QDATE: case FILE_BEQDATE: case FILE_LEQDATE: case FILE_QLDATE: case FILE_BEQLDATE: case FILE_LEQLDATE: case FILE_QWDATE: case FILE_BEQWDATE: case FILE_LEQWDATE: v = p->q; break; case FILE_FLOAT: case FILE_BEFLOAT: case FILE_LEFLOAT: fl = m->value.f; fv = p->f; switch (m->reln) { case 'x': matched = 1; break; case '!': matched = fv != fl; break; case '=': matched = fv == fl; break; case '>': matched = fv > fl; break; case '<': matched = fv < fl; break; default: file_magerror(ms, "cannot happen with float: invalid relation `%c'", m->reln); return -1; } return matched; case FILE_DOUBLE: case FILE_BEDOUBLE: case FILE_LEDOUBLE: dl = m->value.d; dv = p->d; switch (m->reln) { case 'x': matched = 1; break; case '!': matched = dv != dl; break; case '=': matched = dv == dl; break; case '>': matched = dv > dl; break; case '<': matched = dv < dl; break; default: file_magerror(ms, "cannot happen with double: invalid relation `%c'", m->reln); return -1; } return matched; case FILE_DEFAULT: case FILE_CLEAR: l = 0; v = 0; break; case FILE_STRING: case FILE_PSTRING: l = 0; v = file_strncmp(m->value.s, p->s, (size_t)m->vallen, m->str_flags); break; case FILE_BESTRING16: case FILE_LESTRING16: l = 0; v = file_strncmp16(m->value.s, p->s, (size_t)m->vallen, m->str_flags); break; case FILE_SEARCH: { /* search ms->search.s for the string m->value.s */ size_t slen; size_t idx; if (ms->search.s == NULL) return 0; slen = MIN(m->vallen, sizeof(m->value.s)); l = 0; v = 0; for (idx = 0; m->str_range == 0 || idx < m->str_range; idx++) { if (slen + idx > ms->search.s_len) return 0; v = file_strncmp(m->value.s, ms->search.s + idx, slen, m->str_flags); if (v == 0) { /* found match */ ms->search.offset += idx; ms->search.rm_len = ms->search.s_len - idx; break; } } break; } case FILE_REGEX: { int rc; file_regex_t rx; const char *search; if (ms->search.s == NULL) return 0; l = 0; rc = file_regcomp(&rx, m->value.s, REG_EXTENDED|REG_NEWLINE| ((m->str_flags & STRING_IGNORE_CASE) ? 
REG_ICASE : 0)); if (rc) { file_regerror(&rx, rc, ms); v = (uint64_t)-1; } else { regmatch_t pmatch; size_t slen = ms->search.s_len; char *copy; if (slen != 0) { copy = CAST(char *, malloc(slen)); if (copy == NULL) { file_regfree(&rx); file_error(ms, errno, "can't allocate %" SIZE_T_FORMAT "u bytes", slen); return -1; } memcpy(copy, ms->search.s, slen); copy[--slen] = '\0'; search = copy; } else { search = CCAST(char *, ""); copy = NULL; } rc = file_regexec(&rx, (const char *)search, 1, &pmatch, 0); free(copy); switch (rc) { case 0: ms->search.s += (int)pmatch.rm_so; ms->search.offset += (size_t)pmatch.rm_so; ms->search.rm_len = (size_t)(pmatch.rm_eo - pmatch.rm_so); v = 0; break; case REG_NOMATCH: v = 1; break; default: file_regerror(&rx, rc, ms); v = (uint64_t)-1; break; } } file_regfree(&rx); if (v == (uint64_t)-1) return -1; break; } case FILE_INDIRECT: case FILE_USE: case FILE_NAME: return 1; case FILE_DER: matched = der_cmp(ms, m); if (matched == -1) { if ((ms->flags & MAGIC_DEBUG) != 0) { (void) fprintf(stderr, "EOF comparing DER entries"); } return 0; } return matched; default: file_magerror(ms, "invalid type %d in magiccheck()", m->type); return -1; } v = file_signextend(ms, m, v); switch (m->reln) { case 'x': if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "u == *any* = 1\n", (unsigned long long)v); matched = 1; break; case '!': matched = v != l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "u != %" INT64_T_FORMAT "u = %d\n", (unsigned long long)v, (unsigned long long)l, matched); break; case '=': matched = v == l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "u == %" INT64_T_FORMAT "u = %d\n", (unsigned long long)v, (unsigned long long)l, matched); break; case '>': if (m->flag & UNSIGNED) { matched = v > l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "u > %" INT64_T_FORMAT "u = %d\n", (unsigned long long)v, (unsigned long long)l, matched); } else { matched = (int64_t) v > (int64_t) l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "d > %" INT64_T_FORMAT "d = %d\n", (long long)v, (long long)l, matched); } break; case '<': if (m->flag & UNSIGNED) { matched = v < l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "u < %" INT64_T_FORMAT "u = %d\n", (unsigned long long)v, (unsigned long long)l, matched); } else { matched = (int64_t) v < (int64_t) l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "%" INT64_T_FORMAT "d < %" INT64_T_FORMAT "d = %d\n", (long long)v, (long long)l, matched); } break; case '&': matched = (v & l) == l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %" INT64_T_FORMAT "x) == %" INT64_T_FORMAT "x) = %d\n", (unsigned long long)v, (unsigned long long)l, (unsigned long long)l, matched); break; case '^': matched = (v & l) != l; if ((ms->flags & MAGIC_DEBUG) != 0) (void) fprintf(stderr, "((%" INT64_T_FORMAT "x & %" INT64_T_FORMAT "x) != %" INT64_T_FORMAT "x) = %d\n", (unsigned long long)v, (unsigned long long)l, (unsigned long long)l, matched); break; default: file_magerror(ms, "cannot happen: invalid relation `%c'", m->reln); return -1; }
0
[ "CWE-787" ]
file
d65781527c8134a1202b2649695d48d5701ac60b
55,371,434,120,083,850,000,000,000,000,000,000,000
329
PR/62: spinpx: limit size of file_printable.
void Statement::Work_BeginEach(Baton* baton) { // Only create the Async object when we're actually going into // the event loop. This prevents dangling events. EachBaton* each_baton = static_cast<EachBaton*>(baton); each_baton->async = new Async(each_baton->stmt, reinterpret_cast<uv_async_cb>(AsyncEach)); each_baton->async->item_cb.Reset(each_baton->callback.Value(), 1); each_baton->async->completed_cb.Reset(each_baton->completed.Value(), 1); STATEMENT_BEGIN(Each); }
0
[]
node-sqlite3
593c9d498be2510d286349134537e3bf89401c4a
181,498,808,454,014,600,000,000,000,000,000,000,000
10
bug: fix segfault of invalid toString() object (#1450) * bug: verify toString() returns valid data * test: faulty toString test
void SSL_set_bio(SSL *s,BIO *rbio,BIO *wbio) { /* If the output buffering BIO is still in place, remove it */ if (s->bbio != NULL) { if (s->wbio == s->bbio) { s->wbio=s->wbio->next_bio; s->bbio->next_bio=NULL; } } if ((s->rbio != NULL) && (s->rbio != rbio)) BIO_free_all(s->rbio); if ((s->wbio != NULL) && (s->wbio != wbio) && (s->rbio != s->wbio)) BIO_free_all(s->wbio); s->rbio=rbio; s->wbio=wbio; }
0
[]
openssl
ee2ffc279417f15fef3b1073c7dc81a908991516
86,461,910,816,766,780,000,000,000,000,000,000,000
19
Add Next Protocol Negotiation.
static __init int sched_init_debug(void) { debugfs_create_file("sched_features", 0644, NULL, NULL, &sched_feat_fops); return 0; }
0
[ "CWE-200" ]
linux
4efbc454ba68def5ef285b26ebfcfdb605b52755
337,150,368,023,898,500,000,000,000,000,000,000,000
7
sched: Fix information leak in sys_sched_getattr() We're copying the on-stack structure to userspace, but forgot to give the right number of bytes to copy. This allows the calling process to obtain up to PAGE_SIZE bytes from the stack (and possibly adjacent kernel memory). This fix copies only as much as we actually have on the stack (attr->size defaults to the size of the struct) and leaves the rest of the userspace-provided buffer untouched. Found using kmemcheck + trinity. Fixes: d50dde5a10f30 ("sched: Add new scheduler syscalls to support an extended scheduling parameters ABI") Cc: Dario Faggioli <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Ingo Molnar <[email protected]> Signed-off-by: Vegard Nossum <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
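A minimal sketch of the fix pattern (names follow the commit text; treat it as illustration, not the exact patch):

    #include <linux/uaccess.h>

    /* 'attr' lives on the kernel stack and attr->size was set by the
     * kernel (it defaults to sizeof(*attr)). Copying the user-supplied
     * 'usize' instead leaked up to PAGE_SIZE of adjacent stack memory. */
    static int sched_read_attr(struct sched_attr __user *uattr,
                               struct sched_attr *attr, unsigned int usize)
    {
            if (copy_to_user(uattr, attr, attr->size))      /* not usize */
                    return -EFAULT;
            return 0;
    }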
static void __init of_unittest_platform_populate(void) { int irq, rc; struct device_node *np, *child, *grandchild; struct platform_device *pdev, *test_bus; const struct of_device_id match[] = { { .compatible = "test-device", }, {} }; np = of_find_node_by_path("/testcase-data"); of_platform_default_populate(np, NULL, NULL); /* Test that a missing irq domain returns -EPROBE_DEFER */ np = of_find_node_by_path("/testcase-data/testcase-device1"); pdev = of_find_device_by_node(np); unittest(pdev, "device 1 creation failed\n"); if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) { irq = platform_get_irq(pdev, 0); unittest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq); /* Test that a parsing failure does not return -EPROBE_DEFER */ np = of_find_node_by_path("/testcase-data/testcase-device2"); pdev = of_find_device_by_node(np); unittest(pdev, "device 2 creation failed\n"); irq = platform_get_irq(pdev, 0); unittest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq); } np = of_find_node_by_path("/testcase-data/platform-tests"); unittest(np, "No testcase data in device tree\n"); if (!np) return; test_bus = platform_device_register_full(&test_bus_info); rc = PTR_ERR_OR_ZERO(test_bus); unittest(!rc, "testbus registration failed; rc=%i\n", rc); if (rc) { of_node_put(np); return; } test_bus->dev.of_node = np; /* * Add a dummy resource to the test bus node after it is * registered to catch problems with un-inserted resources. The * DT code doesn't insert the resources, and it has caused the * kernel to oops in the past. This makes sure the same bug * doesn't crop up again. */ platform_device_add_resources(test_bus, &test_bus_res, 1); of_platform_populate(np, match, NULL, &test_bus->dev); for_each_child_of_node(np, child) { for_each_child_of_node(child, grandchild) unittest(of_find_device_by_node(grandchild), "Could not create device for node '%pOFn'\n", grandchild); } of_platform_depopulate(&test_bus->dev); for_each_child_of_node(np, child) { for_each_child_of_node(child, grandchild) unittest(!of_find_device_by_node(grandchild), "device didn't get destroyed '%pOFn'\n", grandchild); } platform_device_unregister(test_bus); of_node_put(np); }
0
[ "CWE-401" ]
linux
e13de8fe0d6a51341671bbe384826d527afe8d44
195,221,263,369,025,930,000,000,000,000,000,000,000
74
of: unittest: fix memory leak in unittest_data_add In unittest_data_add, a copy buffer is created via kmemdup. This buffer is leaked if of_fdt_unflatten_tree fails. The release for the unittest_data buffer is added. Fixes: b951f9dc7f25 ("Enabling OF selftest to run without machine's devicetree") Signed-off-by: Navid Emamdoost <[email protected]> Reviewed-by: Frank Rowand <[email protected]> Signed-off-by: Rob Herring <[email protected]>
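A sketch of the leak-fix shape (generic names; the elided parts are the rest of unittest_data_add()):

    /* Copy the test FDT blob, then free it on the failure path --
     * the kfree() below is what the commit adds. Names are illustrative. */
    fdt_copy = kmemdup(fdt_blob, fdt_size, GFP_KERNEL);
    if (!fdt_copy)
            return -ENOMEM;

    of_fdt_unflatten_tree(fdt_copy, NULL, &unittest_data_node);
    if (!unittest_data_node) {
            kfree(fdt_copy);        /* previously leaked here */
            return -ENODATA;
    }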
TEST_CASE("Definition duplicates test", "[general]") { parser parser(R"( A <- '' A <- '' )"); REQUIRE(!parser); }
1
[ "CWE-125" ]
cpp-peglib
b3b29ce8f3acf3a32733d930105a17d7b0ba347e
174,567,211,667,921,460,000,000,000,000,000,000,000
9
Fix #122
ZEND_VM_HANDLER(156, ZEND_SEPARATE, VAR, UNUSED) { USE_OPLINE zval *var_ptr; var_ptr = EX_VAR(opline->op1.var); if (UNEXPECTED(Z_ISREF_P(var_ptr))) { if (UNEXPECTED(Z_REFCOUNT_P(var_ptr) == 1)) { ZVAL_UNREF(var_ptr); } } ZEND_VM_NEXT_OPCODE(); }
0
[ "CWE-787" ]
php-src
f1ce8d5f5839cb2069ea37ff424fb96b8cd6932d
236,765,872,969,914,350,000,000,000,000,000,000,000
14
Fix #73122: Integer Overflow when concatenating strings We must avoid integer overflows in memory allocations, so we introduce an additional check in the VM, and bail out in the rare case of an overflow. Since the recent fix for bug #74960 still doesn't catch all possible overflows, we fix that right away.
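The guard is a one-liner once isolated from the VM. A self-contained sketch of the overflow check in generic C (not the actual Zend code):

    #include <stdint.h>
    #include <stdlib.h>

    /* Reject len1 + len2 + 1 when the sum would wrap around SIZE_MAX
     * before it ever reaches the allocator -- the class of check the
     * message says was added to the VM. */
    void *concat_alloc(size_t len1, size_t len2)
    {
            if (len2 >= SIZE_MAX - len1)    /* len1 + len2 + 1 would overflow */
                    return NULL;            /* bail out, as the fix does */
            return malloc(len1 + len2 + 1); /* +1 for the terminator */
    }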
TEST_F(HttpConnectionManagerConfigTest, UserDefinedSettingsNoCollision) { const std::string yaml_string = R"EOF( codec_type: http2 stat_prefix: my_stat_prefix route_config: virtual_hosts: - name: default domains: - "*" routes: - match: prefix: "/" route: cluster: fake_cluster http_filters: - name: envoy.filters.http.router typed_config: {} http2_protocol_options: hpack_table_size: 1024 custom_settings_parameters: { identifier: 3, value: 2048 } )EOF"; // This will throw when Http2ProtocolOptions validation fails. createHttpConnectionManagerConfig(yaml_string); }
0
[ "CWE-22" ]
envoy
5333b928d8bcffa26ab19bf018369a835f697585
330,297,892,659,771,080,000,000,000,000,000,000,000
24
Implement handling of escaped slash characters in URL path Fixes: CVE-2021-29492 Signed-off-by: Yan Avlasov <[email protected]>
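The underlying issue is easier to see in isolation. A self-contained sketch (not Envoy code) of detecting percent-encoded path separators, which can defeat prefix-based route matching such as the "/" prefix in the config above:

    #include <stdbool.h>
    #include <string.h>
    #include <ctype.h>

    /* Returns true if the URL path hides a path separator behind
     * percent-encoding: %2F ('/') or %5C ('\'). */
    bool path_has_escaped_slash(const char *p)
    {
            for (; (p = strchr(p, '%')) != NULL; p++) {
                    if (p[1] && p[2]) {
                            char a = toupper((unsigned char)p[1]);
                            char b = toupper((unsigned char)p[2]);
                            if (a == '2' && b == 'F') return true;  /* %2F = '/' */
                            if (a == '5' && b == 'C') return true;  /* %5C = '\' */
                    }
            }
            return false;
    }

For example, path_has_escaped_slash("/a%2F..%2Fadmin") returns true; a matcher that compares the raw string against "/admin" would miss it.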
R_API bool r_str_isnumber(const char *str) { if (!str || !*str) { return false; } bool isnum = IS_DIGIT (*str) || *str == '-'; while (isnum && *++str) { if (!IS_DIGIT (*str)) { isnum = false; } } return isnum; }
0
[ "CWE-78" ]
radare2
04edfa82c1f3fa2bc3621ccdad2f93bdbf00e4f9
209,502,637,786,527,130,000,000,000,000,000,000,000
12
Fix command injection on PDB download (#16966) * Fix r_sys_mkdirp with absolute path on Windows * Fix build with --with-openssl * Use RBuffer in r_socket_http_answer() * r_socket_http_answer: Fix read for big responses * Implement r_str_escape_sh() * Cleanup r_socket_connect() on Windows * Fix socket being created without a protocol * Fix socket connect with SSL ##socket * Use select() in r_socket_ready() * Fix read failing if received only protocol answer * Fix double-free * r_socket_http_get: Fail if req. SSL with no support * Follow redirects in r_socket_http_answer() * Fix r_socket_http_get result length with R2_CURL=1 * Also follow redirects * Avoid using curl for downloading PDBs * Use r_socket_http_get() on UNIXs * Use WinINet API on Windows for r_socket_http_get() * Fix command injection * Fix r_sys_cmd_str_full output for binary data * Validate GUID on PDB download * Pass depth to socket_http_get_recursive() * Remove 'r_' and '__' from static function names * Fix is_valid_guid * Fix for comments
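The commit's r_str_escape_sh() is the key defense against the injection. A generic, self-contained single-quote variant of the idea (not radare2's implementation):

    #include <stdlib.h>
    #include <string.h>

    /* Wrap the string in '...' and rewrite every embedded ' as '\''
     * so it can never break out of the quoted shell argument. */
    char *escape_sh(const char *s)
    {
            size_t n = strlen(s);
            /* worst case: every char is ', 4 bytes each, plus quotes + NUL */
            char *out = malloc(4 * n + 3), *d = out;
            if (!out)
                    return NULL;
            *d++ = '\'';
            for (; *s; s++) {
                    if (*s == '\'') { memcpy(d, "'\\''", 4); d += 4; }
                    else            *d++ = *s;
            }
            *d++ = '\'';
            *d = '\0';
            return out;
    }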
RAMBlock *qemu_ram_alloc_from_ptr(ram_addr_t size, void *host, MemoryRegion *mr, Error **errp) { return qemu_ram_alloc_internal(size, size, NULL, host, RAM_PREALLOC, mr, errp); }
0
[ "CWE-908" ]
qemu
418ade7849ce7641c0f7333718caf5091a02fd4c
146,352,255,872,202,440,000,000,000,000,000,000,000
6
softmmu: Always initialize xlat in address_space_translate_for_iotlb The bug is an uninitialized memory read, along the translate_fail path, which results in garbage being read from iotlb_to_section, which can lead to a crash in io_readx/io_writex. The bug may be fixed by writing any value with zero in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using the xlat'ed address returns io_mem_unassigned, as desired by the translate_fail path. It is most useful to record the original physical page address, which will eventually be logged by memory_region_access_valid when the access is rejected by unassigned_mem_accepts. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065 Signed-off-by: Richard Henderson <[email protected]> Reviewed-by: Peter Maydell <[email protected]> Message-Id: <[email protected]>
TEST_F(QueryPlannerTest, AndOfAnd) { addIndex(BSON("x" << 1)); runQuery(fromjson("{$and: [ {$and: [ {x: 2.5}]}, {x: {$gt: 1}}, {x: {$lt: 3}} ] }")); ASSERT_EQUALS(getNumSolutions(), 2U); assertSolutionExists("{cscan: {dir: 1}}"); assertSolutionExists( "{fetch: {filter: null, node: {ixscan: " "{filter: null, pattern: {x: 1}}}}}"); }
0
[]
mongo
ee97c0699fd55b498310996ee002328e533681a3
92,957,565,637,392,100,000,000,000,000,000,000,000
10
SERVER-36993 Fix crash due to incorrect $or pushdown for indexed $expr.
repodata_free_dircache(Repodata *data) { data->dircache = solv_free(data->dircache); }
0
[ "CWE-125" ]
libsolv
fdb9c9c03508990e4583046b590c30d958f272da
276,674,186,516,462,670,000,000,000,000,000,000,000
4
repodata_schema2id: fix heap-buffer-overflow in memcmp When the length of last schema in data->schemadata is less than length of input schema, we got a read overflow in asan test. Signed-off-by: Zhipeng Xie <[email protected]>
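A self-contained sketch of the guard the message implies: compare lengths before memcmp, so a shorter stored schema is never read past its end (names are illustrative):

    #include <string.h>

    static int schema_equal(const unsigned char *stored, size_t stored_len,
                            const unsigned char *schema, size_t schema_len)
    {
            if (stored_len != schema_len)
                    return 0;       /* cannot match, and avoids the OOB read */
            return memcmp(stored, schema, schema_len) == 0;
    }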
bool ovs_nla_get_ufid(struct sw_flow_id *sfid, const struct nlattr *attr, bool log) { sfid->ufid_len = get_ufid_len(attr, log); if (sfid->ufid_len) memcpy(sfid->ufid, nla_data(attr), sfid->ufid_len); return sfid->ufid_len; }
0
[ "CWE-362", "CWE-787" ]
linux
cefa91b2332d7009bc0be5d951d6cbbf349f90f8
25,338,996,554,783,130,000,000,000,000,000,000,000
9
openvswitch: fix OOB access in reserve_sfa_size() Given a sufficiently large number of actions, while copying and reserving memory for a new action of a new flow, if next_offset is greater than MAX_ACTIONS_BUFSIZE, the function reserve_sfa_size() does not return -EMSGSIZE as expected, but it allocates MAX_ACTIONS_BUFSIZE bytes increasing actions_len by req_size. This can then lead to an OOB write access, especially when further actions need to be copied. Fix it by rearranging the flow action size check. KASAN splat below: ================================================================== BUG: KASAN: slab-out-of-bounds in reserve_sfa_size+0x1ba/0x380 [openvswitch] Write of size 65360 at addr ffff888147e4001c by task handler15/836 CPU: 1 PID: 836 Comm: handler15 Not tainted 5.18.0-rc1+ #27 ... Call Trace: <TASK> dump_stack_lvl+0x45/0x5a print_report.cold+0x5e/0x5db ? __lock_text_start+0x8/0x8 ? reserve_sfa_size+0x1ba/0x380 [openvswitch] kasan_report+0xb5/0x130 ? reserve_sfa_size+0x1ba/0x380 [openvswitch] kasan_check_range+0xf5/0x1d0 memcpy+0x39/0x60 reserve_sfa_size+0x1ba/0x380 [openvswitch] __add_action+0x24/0x120 [openvswitch] ovs_nla_add_action+0xe/0x20 [openvswitch] ovs_ct_copy_action+0x29d/0x1130 [openvswitch] ? __kernel_text_address+0xe/0x30 ? unwind_get_return_address+0x56/0xa0 ? create_prof_cpu_mask+0x20/0x20 ? ovs_ct_verify+0xf0/0xf0 [openvswitch] ? prep_compound_page+0x198/0x2a0 ? __kasan_check_byte+0x10/0x40 ? kasan_unpoison+0x40/0x70 ? ksize+0x44/0x60 ? reserve_sfa_size+0x75/0x380 [openvswitch] __ovs_nla_copy_actions+0xc26/0x2070 [openvswitch] ? __zone_watermark_ok+0x420/0x420 ? validate_set.constprop.0+0xc90/0xc90 [openvswitch] ? __alloc_pages+0x1a9/0x3e0 ? __alloc_pages_slowpath.constprop.0+0x1da0/0x1da0 ? unwind_next_frame+0x991/0x1e40 ? __mod_node_page_state+0x99/0x120 ? __mod_lruvec_page_state+0x2e3/0x470 ? __kasan_kmalloc_large+0x90/0xe0 ovs_nla_copy_actions+0x1b4/0x2c0 [openvswitch] ovs_flow_cmd_new+0x3cd/0xb10 [openvswitch] ... Cc: [email protected] Fixes: f28cd2af22a0 ("openvswitch: fix flow actions reallocation") Signed-off-by: Paolo Valerio <[email protected]> Acked-by: Eelco Chaudron <[email protected]> Signed-off-by: David S. Miller <[email protected]>
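A hedged reconstruction of the rearranged check the message describes; the names follow the commit text, and the exact kernel code may differ:

    /* Reject before any clamping or growing, so req_size bytes can
     * never be copied past MAX_ACTIONS_BUFSIZE. */
    if (next_offset + req_size > MAX_ACTIONS_BUFSIZE)
            return ERR_PTR(-EMSGSIZE);

    new_acts_size = max(next_offset + req_size, ksize(acts) * 2);
    if (new_acts_size > MAX_ACTIONS_BUFSIZE)
            new_acts_size = MAX_ACTIONS_BUFSIZE;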
static Image *ReadPCXImage(const ImageInfo *image_info,ExceptionInfo *exception) { #define ThrowPCXException(severity,tag) \ { \ if (scanline != (unsigned char *) NULL) \ scanline=(unsigned char *) RelinquishMagickMemory(scanline); \ if (pixel_info != (MemoryInfo *) NULL) \ pixel_info=RelinquishVirtualMemory(pixel_info); \ if (page_table != (MagickOffsetType *) NULL) \ page_table=(MagickOffsetType *) RelinquishMagickMemory(page_table); \ ThrowReaderException(severity,tag); \ } Image *image; int bits, id, mask; MagickBooleanType status; MagickOffsetType offset, *page_table; MemoryInfo *pixel_info; PCXInfo pcx_info; register IndexPacket *indexes; register ssize_t x; register PixelPacket *q; register ssize_t i; register unsigned char *p, *r; size_t one, pcx_packets; ssize_t count, y; unsigned char packet, pcx_colormap[768], *pixels, *scanline; /* Open image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickCoreSignature); if (image_info->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s", image_info->filename); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickCoreSignature); image=AcquireImage(image_info); status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } /* Determine if this a PCX file. */ page_table=(MagickOffsetType *) NULL; scanline=(unsigned char *) NULL; pixel_info=(MemoryInfo *) NULL; if (LocaleCompare(image_info->magick,"DCX") == 0) { size_t magic; /* Read the DCX page table. */ magic=ReadBlobLSBLong(image); if (magic != 987654321) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); page_table=(MagickOffsetType *) AcquireQuantumMemory(1024UL, sizeof(*page_table)); if (page_table == (MagickOffsetType *) NULL) ThrowPCXException(ResourceLimitError,"MemoryAllocationFailed"); for (id=0; id < 1024; id++) { page_table[id]=(MagickOffsetType) ReadBlobLSBLong(image); if (page_table[id] == 0) break; } } if (page_table != (MagickOffsetType *) NULL) { offset=SeekBlob(image,(MagickOffsetType) page_table[0],SEEK_SET); if (offset < 0) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); } count=ReadBlob(image,1,&pcx_info.identifier); for (id=1; id < 1024; id++) { int bits_per_pixel; /* Verify PCX identifier. */ pcx_info.version=(unsigned char) ReadBlobByte(image); if ((count != 1) || (pcx_info.identifier != 0x0a)) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); pcx_info.encoding=(unsigned char) ReadBlobByte(image); bits_per_pixel=ReadBlobByte(image); if (bits_per_pixel == -1) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); pcx_info.bits_per_pixel=(unsigned char) bits_per_pixel; pcx_info.left=ReadBlobLSBShort(image); pcx_info.top=ReadBlobLSBShort(image); pcx_info.right=ReadBlobLSBShort(image); pcx_info.bottom=ReadBlobLSBShort(image); pcx_info.horizontal_resolution=ReadBlobLSBShort(image); pcx_info.vertical_resolution=ReadBlobLSBShort(image); /* Read PCX raster colormap. 
*/ image->columns=(size_t) MagickAbsoluteValue((ssize_t) pcx_info.right- pcx_info.left)+1UL; image->rows=(size_t) MagickAbsoluteValue((ssize_t) pcx_info.bottom- pcx_info.top)+1UL; if ((image->columns == 0) || (image->rows == 0) || ((pcx_info.bits_per_pixel != 1) && (pcx_info.bits_per_pixel != 2) && (pcx_info.bits_per_pixel != 4) && (pcx_info.bits_per_pixel != 8))) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); image->depth=pcx_info.bits_per_pixel; image->units=PixelsPerInchResolution; image->x_resolution=(double) pcx_info.horizontal_resolution; image->y_resolution=(double) pcx_info.vertical_resolution; image->colors=16; if ((image_info->ping != MagickFalse) && (image_info->number_scenes != 0)) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; status=SetImageExtent(image,image->columns,image->rows); if (status == MagickFalse) ThrowPCXException(image->exception.severity,image->exception.reason); (void) SetImageBackgroundColor(image); (void) memset(pcx_colormap,0,sizeof(pcx_colormap)); count=ReadBlob(image,3*image->colors,pcx_colormap); if (count != (ssize_t) (3*image->colors)) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); pcx_info.reserved=(unsigned char) ReadBlobByte(image); pcx_info.planes=(unsigned char) ReadBlobByte(image); if (pcx_info.planes > 6) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); if ((pcx_info.bits_per_pixel*pcx_info.planes) >= 64) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); if (pcx_info.planes == 0) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); one=1; if ((pcx_info.bits_per_pixel != 8) || (pcx_info.planes == 1)) if ((pcx_info.version == 3) || (pcx_info.version == 5) || ((pcx_info.bits_per_pixel*pcx_info.planes) == 1)) image->colors=(size_t) MagickMin(one << (1UL* (pcx_info.bits_per_pixel*pcx_info.planes)),256UL); if (AcquireImageColormap(image,image->colors) == MagickFalse) ThrowPCXException(ResourceLimitError,"MemoryAllocationFailed"); if ((pcx_info.bits_per_pixel >= 8) && (pcx_info.planes != 1)) image->storage_class=DirectClass; p=pcx_colormap; for (i=0; i < (ssize_t) image->colors; i++) { image->colormap[i].red=ScaleCharToQuantum(*p++); image->colormap[i].green=ScaleCharToQuantum(*p++); image->colormap[i].blue=ScaleCharToQuantum(*p++); } pcx_info.bytes_per_line=ReadBlobLSBShort(image); pcx_info.palette_info=ReadBlobLSBShort(image); pcx_info.horizontal_screensize=ReadBlobLSBShort(image); pcx_info.vertical_screensize=ReadBlobLSBShort(image); for (i=0; i < 54; i++) (void) ReadBlobByte(image); /* Read image data. 
*/ if (HeapOverflowSanityCheck(image->rows, (size_t) pcx_info.bytes_per_line) != MagickFalse) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); pcx_packets=(size_t) image->rows*pcx_info.bytes_per_line; if (HeapOverflowSanityCheck(pcx_packets, (size_t) pcx_info.planes) != MagickFalse) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); pcx_packets=(size_t) pcx_packets*pcx_info.planes; if ((size_t) (pcx_info.bits_per_pixel*pcx_info.planes*image->columns) > (pcx_packets*8U)) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); if ((MagickSizeType) (pcx_packets/8) > GetBlobSize(image)) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); scanline=(unsigned char *) AcquireQuantumMemory(MagickMax(image->columns, pcx_info.bytes_per_line),MagickMax(pcx_info.planes,8)*sizeof(*scanline)); pixel_info=AcquireVirtualMemory(pcx_packets,2*sizeof(*pixels)); if ((scanline == (unsigned char *) NULL) || (pixel_info == (MemoryInfo *) NULL)) { if (scanline != (unsigned char *) NULL) scanline=(unsigned char *) RelinquishMagickMemory(scanline); if (pixel_info != (MemoryInfo *) NULL) pixel_info=RelinquishVirtualMemory(pixel_info); ThrowPCXException(ResourceLimitError,"MemoryAllocationFailed"); } (void) memset(scanline,0,(size_t) MagickMax(image->columns, pcx_info.bytes_per_line)*MagickMax(pcx_info.planes,8)*sizeof(*scanline)); pixels=(unsigned char *) GetVirtualMemoryBlob(pixel_info); (void) memset(pixels,0,(size_t) pcx_packets*(2*sizeof(*pixels))); /* Uncompress image data. */ p=pixels; if (pcx_info.encoding == 0) while (pcx_packets != 0) { packet=(unsigned char) ReadBlobByte(image); if (EOFBlob(image) != MagickFalse) ThrowPCXException(CorruptImageError,"UnexpectedEndOfFile"); *p++=packet; pcx_packets--; } else while (pcx_packets != 0) { packet=(unsigned char) ReadBlobByte(image); if (EOFBlob(image) != MagickFalse) ThrowPCXException(CorruptImageError,"UnexpectedEndOfFile"); if ((packet & 0xc0) != 0xc0) { *p++=packet; pcx_packets--; continue; } count=(ssize_t) (packet & 0x3f); packet=(unsigned char) ReadBlobByte(image); if (EOFBlob(image) != MagickFalse) ThrowPCXException(CorruptImageError,"UnexpectedEndOfFile"); for ( ; count != 0; count--) { *p++=packet; pcx_packets--; if (pcx_packets == 0) break; } } if (image->storage_class == DirectClass) image->matte=pcx_info.planes > 3 ? MagickTrue : MagickFalse; else if ((pcx_info.version == 5) || ((pcx_info.bits_per_pixel*pcx_info.planes) == 1)) { /* Initialize image colormap. */ if (image->colors > 256) ThrowPCXException(CorruptImageError,"ColormapExceeds256Colors"); if ((pcx_info.bits_per_pixel*pcx_info.planes) == 1) { /* Monochrome colormap. */ image->colormap[0].red=(Quantum) 0; image->colormap[0].green=(Quantum) 0; image->colormap[0].blue=(Quantum) 0; image->colormap[1].red=QuantumRange; image->colormap[1].green=QuantumRange; image->colormap[1].blue=QuantumRange; } else if (image->colors > 16) { /* 256 color images have their color map at the end of the file. */ pcx_info.colormap_signature=(unsigned char) ReadBlobByte(image); count=ReadBlob(image,3*image->colors,pcx_colormap); p=pcx_colormap; for (i=0; i < (ssize_t) image->colors; i++) { image->colormap[i].red=ScaleCharToQuantum(*p++); image->colormap[i].green=ScaleCharToQuantum(*p++); image->colormap[i].blue=ScaleCharToQuantum(*p++); } } } /* Convert PCX raster image to pixel packets. 
*/ for (y=0; y < (ssize_t) image->rows; y++) { p=pixels+(y*pcx_info.bytes_per_line*pcx_info.planes); q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (PixelPacket *) NULL) break; indexes=GetAuthenticIndexQueue(image); r=scanline; if (image->storage_class == DirectClass) for (i=0; i < pcx_info.planes; i++) { r=scanline+i; for (x=0; x < (ssize_t) pcx_info.bytes_per_line; x++) { switch (i) { case 0: { *r=(*p++); break; } case 1: { *r=(*p++); break; } case 2: { *r=(*p++); break; } case 3: default: { *r=(*p++); break; } } r+=pcx_info.planes; } } else if (pcx_info.planes > 1) { for (x=0; x < (ssize_t) image->columns; x++) *r++=0; for (i=0; i < pcx_info.planes; i++) { r=scanline; for (x=0; x < (ssize_t) pcx_info.bytes_per_line; x++) { bits=(*p++); for (mask=0x80; mask != 0; mask>>=1) { if (bits & mask) *r|=1 << i; r++; } } } } else switch (pcx_info.bits_per_pixel) { case 1: { register ssize_t bit; for (x=0; x < ((ssize_t) image->columns-7); x+=8) { for (bit=7; bit >= 0; bit--) *r++=(unsigned char) ((*p) & (0x01 << bit) ? 0x01 : 0x00); p++; } if ((image->columns % 8) != 0) { for (bit=7; bit >= (ssize_t) (8-(image->columns % 8)); bit--) *r++=(unsigned char) ((*p) & (0x01 << bit) ? 0x01 : 0x00); p++; } break; } case 2: { for (x=0; x < ((ssize_t) image->columns-3); x+=4) { *r++=(*p >> 6) & 0x3; *r++=(*p >> 4) & 0x3; *r++=(*p >> 2) & 0x3; *r++=(*p) & 0x3; p++; } if ((image->columns % 4) != 0) { for (i=3; i >= (ssize_t) (4-(image->columns % 4)); i--) *r++=(unsigned char) ((*p >> (i*2)) & 0x03); p++; } break; } case 4: { for (x=0; x < ((ssize_t) image->columns-1); x+=2) { *r++=(*p >> 4) & 0xf; *r++=(*p) & 0xf; p++; } if ((image->columns % 2) != 0) *r++=(*p++ >> 4) & 0xf; break; } case 8: { (void) memcpy(r,p,image->columns); break; } default: break; } /* Transfer image scanline. */ r=scanline; for (x=0; x < (ssize_t) image->columns; x++) { if (image->storage_class == PseudoClass) SetPixelIndex(indexes+x,*r++); else { SetPixelRed(q,ScaleCharToQuantum(*r++)); SetPixelGreen(q,ScaleCharToQuantum(*r++)); SetPixelBlue(q,ScaleCharToQuantum(*r++)); if (image->matte != MagickFalse) SetPixelAlpha(q,ScaleCharToQuantum(*r++)); } q++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } if (image->storage_class == PseudoClass) (void) SyncImage(image); scanline=(unsigned char *) RelinquishMagickMemory(scanline); pixel_info=RelinquishVirtualMemory(pixel_info); if (EOFBlob(image) != MagickFalse) { ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile", image->filename); break; } /* Proceed to next image. */ if (image_info->number_scenes != 0) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; if (page_table == (MagickOffsetType *) NULL) break; if (page_table[id] == 0) break; offset=SeekBlob(image,(MagickOffsetType) page_table[id],SEEK_SET); if (offset < 0) ThrowPCXException(CorruptImageError,"ImproperImageHeader"); count=ReadBlob(image,1,&pcx_info.identifier); if ((count != 0) && (pcx_info.identifier == 0x0a)) { /* Allocate next image structure. 
*/ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) { status=MagickFalse; break; } image=SyncNextImageInList(image); status=SetImageProgress(image,LoadImagesTag,TellBlob(image), GetBlobSize(image)); if (status == MagickFalse) break; } } if (page_table != (MagickOffsetType *) NULL) page_table=(MagickOffsetType *) RelinquishMagickMemory(page_table); (void) CloseBlob(image); if (status == MagickFalse) return(DestroyImageList(image)); return(GetFirstImageInList(image)); }
0
[ "CWE-401" ]
ImageMagick6
210474b2fac6a661bfa7ed563213920e93e76395
3,538,462,241,710,490,000,000,000,000,000,000,000
504
Fix ultra rare but potential memory-leak
inline StreamListener::~StreamListener() { if (stream_ != nullptr) stream_->RemoveStreamListener(this); }
0
[ "CWE-416" ]
node
7f178663ebffc82c9f8a5a1b6bf2da0c263a30ed
214,539,541,975,339,600,000,000,000,000,000,000,000
4
src: use unique_ptr for WriteWrap This commit attempts to avoid a use-after-free error by using unique_ptr and passing a reference to it. CVE-ID: CVE-2020-8265 Fixes: https://github.com/nodejs-private/node-private/issues/227 PR-URL: https://github.com/nodejs-private/node-private/pull/238 Reviewed-By: Michael Dawson <[email protected]> Reviewed-By: Tobias Nießen <[email protected]> Reviewed-By: Richard Lau <[email protected]>
static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long param) { int drive = (long)bdev->bd_disk->private_data; int type = ITYPE(UDRS->fd_device); int i; int ret; int size; union inparam { struct floppy_struct g; /* geometry */ struct format_descr f; struct floppy_max_errors max_errors; struct floppy_drive_params dp; } inparam; /* parameters coming from user space */ const void *outparam; /* parameters passed back to user space */ /* convert compatibility eject ioctls into floppy eject ioctl. * We do this in order to provide a means to eject floppy disks before * installing the new fdutils package */ if (cmd == CDROMEJECT || /* CD-ROM eject */ cmd == 0x6470) { /* SunOS floppy eject */ DPRINT("obsolete eject ioctl\n"); DPRINT("please use floppycontrol --eject\n"); cmd = FDEJECT; } if (!((cmd & 0xff00) == 0x0200)) return -EINVAL; /* convert the old style command into a new style command */ ret = normalize_ioctl(&cmd, &size); if (ret) return ret; /* permission checks */ if (((cmd & 0x40) && !(mode & (FMODE_WRITE | FMODE_WRITE_IOCTL))) || ((cmd & 0x80) && !capable(CAP_SYS_ADMIN))) return -EPERM; if (WARN_ON(size < 0 || size > sizeof(inparam))) return -EINVAL; /* copyin */ memset(&inparam, 0, sizeof(inparam)); if (_IOC_DIR(cmd) & _IOC_WRITE) { ret = fd_copyin((void __user *)param, &inparam, size); if (ret) return ret; } switch (cmd) { case FDEJECT: if (UDRS->fd_ref != 1) /* somebody else has this drive open */ return -EBUSY; if (lock_fdc(drive, true)) return -EINTR; /* do the actual eject. Fails on * non-Sparc architectures */ ret = fd_eject(UNIT(drive)); set_bit(FD_DISK_CHANGED_BIT, &UDRS->flags); set_bit(FD_VERIFY_BIT, &UDRS->flags); process_fd_request(); return ret; case FDCLRPRM: if (lock_fdc(drive, true)) return -EINTR; current_type[drive] = NULL; floppy_sizes[drive] = MAX_DISK_SIZE << 1; UDRS->keep_data = 0; return invalidate_drive(bdev); case FDSETPRM: case FDDEFPRM: return set_geometry(cmd, &inparam.g, drive, type, bdev); case FDGETPRM: ret = get_floppy_geometry(drive, type, (struct floppy_struct **)&outparam); if (ret) return ret; break; case FDMSGON: UDP->flags |= FTD_MSG; return 0; case FDMSGOFF: UDP->flags &= ~FTD_MSG; return 0; case FDFMTBEG: if (lock_fdc(drive, true)) return -EINTR; if (poll_drive(true, FD_RAW_NEED_DISK) == -EINTR) return -EINTR; ret = UDRS->flags; process_fd_request(); if (ret & FD_VERIFY) return -ENODEV; if (!(ret & FD_DISK_WRITABLE)) return -EROFS; return 0; case FDFMTTRK: if (UDRS->fd_ref != 1) return -EBUSY; return do_format(drive, &inparam.f); case FDFMTEND: case FDFLUSH: if (lock_fdc(drive, true)) return -EINTR; return invalidate_drive(bdev); case FDSETEMSGTRESH: UDP->max_errors.reporting = (unsigned short)(param & 0x0f); return 0; case FDGETMAXERRS: outparam = &UDP->max_errors; break; case FDSETMAXERRS: UDP->max_errors = inparam.max_errors; break; case FDGETDRVTYP: outparam = drive_name(type, drive); SUPBOUND(size, strlen((const char *)outparam) + 1); break; case FDSETDRVPRM: *UDP = inparam.dp; break; case FDGETDRVPRM: outparam = UDP; break; case FDPOLLDRVSTAT: if (lock_fdc(drive, true)) return -EINTR; if (poll_drive(true, FD_RAW_NEED_DISK) == -EINTR) return -EINTR; process_fd_request(); /* fall through */ case FDGETDRVSTAT: outparam = UDRS; break; case FDRESET: return user_reset_fdc(drive, (int)param, true); case FDGETFDCSTAT: outparam = UFDCS; break; case FDWERRORCLR: memset(UDRWE, 0, sizeof(*UDRWE)); return 0; case FDWERRORGET: outparam = UDRWE; break; case FDRAWCMD: if (type) return -EINVAL; if 
(lock_fdc(drive, true)) return -EINTR; set_floppy(drive); i = raw_cmd_ioctl(cmd, (void __user *)param); if (i == -EINTR) return -EINTR; process_fd_request(); return i; case FDTWADDLE: if (lock_fdc(drive, true)) return -EINTR; twaddle(); process_fd_request(); return 0; default: return -EINVAL; } if (_IOC_DIR(cmd) & _IOC_READ) return fd_copyout((void __user *)param, outparam, size); return 0; }
1
[ "CWE-362" ]
linux
a0c80efe5956ccce9fe7ae5c78542578c07bc20a
237,494,062,442,199,300,000,000,000,000,000,000,000
175
floppy: fix lock_fdc() signal handling floppy_revalidate() doesn't perform any error handling on lock_fdc() result. lock_fdc() might actually be interrupted by a signal (it waits for fdc becoming non-busy interruptibly). In such case, floppy_revalidate() proceeds as if it had claimed the lock, but it fact it doesn't. In case of multiple threads trying to open("/dev/fdX"), this leads to serious corruptions all over the place, because all of a sudden there is no critical section protection (that'd otherwise be guaranteed by locked fd) whatsoever. While at this, fix the fact that the 'interruptible' parameter to lock_fdc() doesn't make any sense whatsoever, because we always wait interruptibly anyway. Most of the lock_fdc() callsites do properly handle error (and propagate EINTR), but floppy_revalidate() and floppy_check_events() don't. Fix this. Spotted by 'syzkaller' tool. Reported-by: Dmitry Vyukov <[email protected]> Tested-by: Dmitry Vyukov <[email protected]> Signed-off-by: Jiri Kosina <[email protected]>
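The call-site pattern the message asks for, as a fragment (surrounding context is illustrative):

    /* lock_fdc() waits interruptibly; a signal means we do NOT hold
     * the lock, so the caller must bail out, not proceed. */
    if (lock_fdc(drive, true))
            return -EINTR;
    /* ... critical section ... */
    process_fd_request();   /* releases the fdc */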
rt__valid_civil_p(VALUE y, VALUE m, VALUE d, VALUE sg) { VALUE nth, rjd2; int ry, rm, rd, rjd, ns; if (!valid_civil_p(y, NUM2INT(m), NUM2INT(d), NUM2DBL(sg), &nth, &ry, &rm, &rd, &rjd, &ns)) return Qnil; encode_jd(nth, rjd, &rjd2); return rjd2; }
0
[]
date
3959accef8da5c128f8a8e2fd54e932a4fb253b0
164,003,928,467,003,790,000,000,000,000,000,000,000
13
Add length limit option for methods that parses date strings `Date.parse` now raises an ArgumentError when a given date string is longer than 128. You can configure the limit by giving `limit` keyword arguments like `Date.parse(str, limit: 1000)`. If you pass `limit: nil`, the limit is disabled. Not only `Date.parse` but also the following methods are changed. * Date._parse * Date.parse * DateTime.parse * Date._iso8601 * Date.iso8601 * DateTime.iso8601 * Date._rfc3339 * Date.rfc3339 * DateTime.rfc3339 * Date._xmlschema * Date.xmlschema * DateTime.xmlschema * Date._rfc2822 * Date.rfc2822 * DateTime.rfc2822 * Date._rfc822 * Date.rfc822 * DateTime.rfc822 * Date._jisx0301 * Date.jisx0301 * DateTime.jisx0301
base::ProcessId WebContents::GetOSProcessID() const { base::ProcessHandle process_handle = web_contents()->GetMainFrame()->GetProcess()->GetProcess().Handle(); return base::GetProcId(process_handle); }
0
[]
electron
e9fa834757f41c0b9fe44a4dffe3d7d437f52d34
240,693,997,014,087,330,000,000,000,000,000,000,000
5
fix: ensure ElectronBrowser mojo service is only bound to appropriate render frames (#33344) * fix: ensure ElectronBrowser mojo service is only bound to authorized render frames Notes: no-notes * refactor: extract electron API IPC to its own mojo interface * fix: just check main frame not primary main frame Co-authored-by: Samuel Attard <[email protected]> Co-authored-by: Samuel Attard <[email protected]>
f_prevnonblank(typval_T *argvars, typval_T *rettv) { linenr_T lnum; lnum = tv_get_lnum(argvars); if (lnum < 1 || lnum > curbuf->b_ml.ml_line_count) lnum = 0; else while (lnum >= 1 && *skipwhite(ml_get(lnum)) == NUL) --lnum; rettv->vval.v_number = lnum; }
0
[ "CWE-78" ]
vim
8c62a08faf89663e5633dc5036cd8695c80f1075
297,276,778,991,003,300,000,000,000,000,000,000,000
12
patch 8.1.0881: can execute shell commands in rvim through interfaces Problem: Can execute shell commands in rvim through interfaces. Solution: Disable using interfaces in restricted mode. Allow for writing file with writefile(), histadd() and a few others.
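The guard pattern used at vim interface entry points; check_restricted() and check_secure() exist in vim's sources, while the surrounding context here is illustrative:

    /* In rvim / restricted mode, refuse anything that could reach a
     * shell, including the scripting interfaces. */
    if (check_restricted() || check_secure())
            return;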
ex_equal(exarg_T *eap) { smsg("%ld", (long)eap->line2); ex_may_print(eap); }
0
[ "CWE-78" ]
vim
8c62a08faf89663e5633dc5036cd8695c80f1075
113,930,732,242,390,570,000,000,000,000,000,000,000
5
patch 8.1.0881: can execute shell commands in rvim through interfaces Problem: Can execute shell commands in rvim through interfaces. Solution: Disable using interfaces in restricted mode. Allow for writing file with writefile(), histadd() and a few others.
server_set_mixmode(struct xrdp_mod* mod, int mixmode) { struct xrdp_painter* p; p = (struct xrdp_painter*)(mod->painter); if (p == 0) { return 0; } p->mix_mode = mixmode; return 0; }
0
[]
xrdp
d8f9e8310dac362bb9578763d1024178f94f4ecc
183,013,826,733,766,200,000,000,000,000,000,000,000
12
move temp files from /tmp to /tmp/.xrdp
static int vvalue_tvb_vector(tvbuff_t *tvb, int offset, struct vt_vector *val, struct vtype_data *type) { const guint num = tvb_get_letohl(tvb, offset); return 4 + vvalue_tvb_vector_internal(tvb, offset+4, val, type, num); }
0
[ "CWE-770" ]
wireshark
b7a0650e061b5418ab4a8f72c6e4b00317aff623
63,114,137,248,975,420,000,000,000,000,000,000,000
5
MS-WSP: Don't allocate huge amounts of memory. Add a couple of memory allocation sanity checks, one of which fixes #17331.
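A sketch of the kind of sanity check the message describes for the vector dissector above; MAX_VT_VECTOR_ELEMS is an assumed illustrative bound, not Wireshark's actual constant:

    #define MAX_VT_VECTOR_ELEMS 0x10000     /* assumed cap for the sketch */

    guint num = tvb_get_letohl(tvb, offset);
    if (num > MAX_VT_VECTOR_ELEMS)
            return 0;       /* real code would raise an expert info instead */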
static void __exit usb_pwc_exit(void) { PWC_DEBUG_MODULE("Deregistering driver.\n"); usb_deregister(&pwc_driver); PWC_INFO("Philips webcam module removed.\n"); }
0
[ "CWE-399" ]
linux-2.6
85237f202d46d55c1bffe0c5b1aa3ddc0f1dce4d
228,516,976,162,157,760,000,000,000,000,000,000,000
6
USB: fix DoS in pwc USB video driver the pwc driver has a disconnect method that waits for user space to close the device. This opens up an opportunity for a DoS attack, blocking the USB subsystem and making khubd's task busy wait in kernel space. This patch shifts freeing resources to close if an opened device is disconnected. Signed-off-by: Oliver Neukum <[email protected]> CC: stable <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len) { #if 0 CPUWatchpoint *wp; int ret = 0; QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) { if (watchpoint_address_matches(wp, addr, TARGET_PAGE_SIZE)) { ret |= wp->flags; } } return ret; #endif return 0; }
0
[ "CWE-476" ]
unicorn
3d3deac5e6d38602b689c4fef5dac004f07a2e63
178,298,338,924,716,140,000,000,000,000,000,000,000
15
Fix crash when mapping a big memory and calling uc_close
static bool ad5755_is_voltage_mode(enum ad5755_mode mode) { switch (mode) { case AD5755_MODE_VOLTAGE_0V_5V: case AD5755_MODE_VOLTAGE_0V_10V: case AD5755_MODE_VOLTAGE_PLUSMINUS_5V: case AD5755_MODE_VOLTAGE_PLUSMINUS_10V: return true; default: return false; } }
0
[ "CWE-787" ]
linux
9d47964bfd471f0dd4c89f28556aec68bffa0020
66,804,420,455,801,090,000,000,000,000,000,000,000
12
iio: ad5755: fix off-by-one on devnr limit check The comparison for devnr limits is off-by-one, the current check allows 0 to AD5755_NUM_CHANNELS and the limit should be in fact 0 to AD5755_NUM_CHANNELS - 1. This can lead to an out of bounds write to pdata->dac[devnr]. Fix this by replacing > with >= on the comparison. Signed-off-by: Colin Ian King <[email protected]> Fixes: c947459979c6 ("iio: ad5755: add support for dt bindings") Signed-off-by: Jonathan Cameron <[email protected]>
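The off-by-one in isolation (a sketch; names are taken from the commit text):

    /* was: if (devnr > AD5755_NUM_CHANNELS) -- which admits
     * devnr == AD5755_NUM_CHANNELS and writes one slot past
     * the end of pdata->dac[]. */
    if (devnr >= AD5755_NUM_CHANNELS)
            return ERR_PTR(-EINVAL);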
rsvg_filter_primitive_offset_render (RsvgFilterPrimitive * self, RsvgFilterContext * ctx) { guchar ch; gint x, y; gint rowstride, height, width; RsvgIRect boundarys; guchar *in_pixels; guchar *output_pixels; RsvgFilterPrimitiveOutput out; RsvgFilterPrimitiveOffset *upself; cairo_surface_t *output, *in; double dx, dy; int ox, oy; upself = (RsvgFilterPrimitiveOffset *) self; boundarys = rsvg_filter_primitive_get_bounds (self, ctx); in = rsvg_filter_get_in (self->in, ctx); if (in == NULL) return; cairo_surface_flush (in); in_pixels = cairo_image_surface_get_data (in); height = cairo_image_surface_get_height (in); width = cairo_image_surface_get_width (in); rowstride = cairo_image_surface_get_stride (in); output = _rsvg_image_surface_new (width, height); if (output == NULL) { cairo_surface_destroy (in); return; } output_pixels = cairo_image_surface_get_data (output); dx = _rsvg_css_normalize_length (&upself->dx, ctx->ctx, 'w'); dy = _rsvg_css_normalize_length (&upself->dy, ctx->ctx, 'v'); ox = ctx->paffine.xx * dx + ctx->paffine.xy * dy; oy = ctx->paffine.yx * dx + ctx->paffine.yy * dy; for (y = boundarys.y0; y < boundarys.y1; y++) for (x = boundarys.x0; x < boundarys.x1; x++) { if (x - ox < boundarys.x0 || x - ox >= boundarys.x1) continue; if (y - oy < boundarys.y0 || y - oy >= boundarys.y1) continue; for (ch = 0; ch < 4; ch++) { output_pixels[y * rowstride + x * 4 + ch] = in_pixels[(y - oy) * rowstride + (x - ox) * 4 + ch]; } } cairo_surface_mark_dirty (output); out.surface = output; out.Rused = 1; out.Gused = 1; out.Bused = 1; out.Aused = 1; out.bounds = boundarys; rsvg_filter_store_output (self->result, out, ctx); cairo_surface_destroy (in); cairo_surface_destroy (output); }
0
[]
librsvg
a51919f7e1ca9c535390a746fbf6e28c8402dc61
287,067,383,456,316,930,000,000,000,000,000,000,000
75
rsvg: Add rsvg_acquire_node() This function does proper recursion checks when looking up resources from URLs and thereby helps avoiding infinite loops when cyclic references span multiple types of elements.
long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { struct kvm_vcpu *vcpu = filp->private_data; void __user *argp = (void __user *)arg; int r; union { struct kvm_lapic_state *lapic; struct kvm_xsave *xsave; struct kvm_xcrs *xcrs; void *buffer; } u; u.buffer = NULL; switch (ioctl) { case KVM_GET_LAPIC: { r = -EINVAL; if (!vcpu->arch.apic) goto out; u.lapic = kzalloc(sizeof(struct kvm_lapic_state), GFP_KERNEL); r = -ENOMEM; if (!u.lapic) goto out; r = kvm_vcpu_ioctl_get_lapic(vcpu, u.lapic); if (r) goto out; r = -EFAULT; if (copy_to_user(argp, u.lapic, sizeof(struct kvm_lapic_state))) goto out; r = 0; break; } case KVM_SET_LAPIC: { r = -EINVAL; if (!vcpu->arch.apic) goto out; u.lapic = memdup_user(argp, sizeof(*u.lapic)); if (IS_ERR(u.lapic)) { r = PTR_ERR(u.lapic); goto out; } r = kvm_vcpu_ioctl_set_lapic(vcpu, u.lapic); if (r) goto out; r = 0; break; } case KVM_INTERRUPT: { struct kvm_interrupt irq; r = -EFAULT; if (copy_from_user(&irq, argp, sizeof irq)) goto out; r = kvm_vcpu_ioctl_interrupt(vcpu, &irq); if (r) goto out; r = 0; break; } case KVM_NMI: { r = kvm_vcpu_ioctl_nmi(vcpu); if (r) goto out; r = 0; break; } case KVM_SET_CPUID: { struct kvm_cpuid __user *cpuid_arg = argp; struct kvm_cpuid cpuid; r = -EFAULT; if (copy_from_user(&cpuid, cpuid_arg, sizeof cpuid)) goto out; r = kvm_vcpu_ioctl_set_cpuid(vcpu, &cpuid, cpuid_arg->entries); if (r) goto out; break; } case KVM_SET_CPUID2: { struct kvm_cpuid2 __user *cpuid_arg = argp; struct kvm_cpuid2 cpuid; r = -EFAULT; if (copy_from_user(&cpuid, cpuid_arg, sizeof cpuid)) goto out; r = kvm_vcpu_ioctl_set_cpuid2(vcpu, &cpuid, cpuid_arg->entries); if (r) goto out; break; } case KVM_GET_CPUID2: { struct kvm_cpuid2 __user *cpuid_arg = argp; struct kvm_cpuid2 cpuid; r = -EFAULT; if (copy_from_user(&cpuid, cpuid_arg, sizeof cpuid)) goto out; r = kvm_vcpu_ioctl_get_cpuid2(vcpu, &cpuid, cpuid_arg->entries); if (r) goto out; r = -EFAULT; if (copy_to_user(cpuid_arg, &cpuid, sizeof cpuid)) goto out; r = 0; break; } case KVM_GET_MSRS: r = msr_io(vcpu, argp, kvm_get_msr, 1); break; case KVM_SET_MSRS: r = msr_io(vcpu, argp, do_set_msr, 0); break; case KVM_TPR_ACCESS_REPORTING: { struct kvm_tpr_access_ctl tac; r = -EFAULT; if (copy_from_user(&tac, argp, sizeof tac)) goto out; r = vcpu_ioctl_tpr_access_reporting(vcpu, &tac); if (r) goto out; r = -EFAULT; if (copy_to_user(argp, &tac, sizeof tac)) goto out; r = 0; break; }; case KVM_SET_VAPIC_ADDR: { struct kvm_vapic_addr va; r = -EINVAL; if (!irqchip_in_kernel(vcpu->kvm)) goto out; r = -EFAULT; if (copy_from_user(&va, argp, sizeof va)) goto out; r = 0; kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr); break; } case KVM_X86_SETUP_MCE: { u64 mcg_cap; r = -EFAULT; if (copy_from_user(&mcg_cap, argp, sizeof mcg_cap)) goto out; r = kvm_vcpu_ioctl_x86_setup_mce(vcpu, mcg_cap); break; } case KVM_X86_SET_MCE: { struct kvm_x86_mce mce; r = -EFAULT; if (copy_from_user(&mce, argp, sizeof mce)) goto out; r = kvm_vcpu_ioctl_x86_set_mce(vcpu, &mce); break; } case KVM_GET_VCPU_EVENTS: { struct kvm_vcpu_events events; kvm_vcpu_ioctl_x86_get_vcpu_events(vcpu, &events); r = -EFAULT; if (copy_to_user(argp, &events, sizeof(struct kvm_vcpu_events))) break; r = 0; break; } case KVM_SET_VCPU_EVENTS: { struct kvm_vcpu_events events; r = -EFAULT; if (copy_from_user(&events, argp, sizeof(struct kvm_vcpu_events))) break; r = kvm_vcpu_ioctl_x86_set_vcpu_events(vcpu, &events); break; } case KVM_GET_DEBUGREGS: { struct kvm_debugregs dbgregs; kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs); r = -EFAULT; if 
(copy_to_user(argp, &dbgregs, sizeof(struct kvm_debugregs))) break; r = 0; break; } case KVM_SET_DEBUGREGS: { struct kvm_debugregs dbgregs; r = -EFAULT; if (copy_from_user(&dbgregs, argp, sizeof(struct kvm_debugregs))) break; r = kvm_vcpu_ioctl_x86_set_debugregs(vcpu, &dbgregs); break; } case KVM_GET_XSAVE: { u.xsave = kzalloc(sizeof(struct kvm_xsave), GFP_KERNEL); r = -ENOMEM; if (!u.xsave) break; kvm_vcpu_ioctl_x86_get_xsave(vcpu, u.xsave); r = -EFAULT; if (copy_to_user(argp, u.xsave, sizeof(struct kvm_xsave))) break; r = 0; break; } case KVM_SET_XSAVE: { u.xsave = memdup_user(argp, sizeof(*u.xsave)); if (IS_ERR(u.xsave)) { r = PTR_ERR(u.xsave); goto out; } r = kvm_vcpu_ioctl_x86_set_xsave(vcpu, u.xsave); break; } case KVM_GET_XCRS: { u.xcrs = kzalloc(sizeof(struct kvm_xcrs), GFP_KERNEL); r = -ENOMEM; if (!u.xcrs) break; kvm_vcpu_ioctl_x86_get_xcrs(vcpu, u.xcrs); r = -EFAULT; if (copy_to_user(argp, u.xcrs, sizeof(struct kvm_xcrs))) break; r = 0; break; } case KVM_SET_XCRS: { u.xcrs = memdup_user(argp, sizeof(*u.xcrs)); if (IS_ERR(u.xcrs)) { r = PTR_ERR(u.xcrs); goto out; } r = kvm_vcpu_ioctl_x86_set_xcrs(vcpu, u.xcrs); break; } case KVM_SET_TSC_KHZ: { u32 user_tsc_khz; r = -EINVAL; if (!kvm_has_tsc_control) break; user_tsc_khz = (u32)arg; if (user_tsc_khz >= kvm_max_guest_tsc_khz) goto out; kvm_x86_ops->set_tsc_khz(vcpu, user_tsc_khz); r = 0; goto out; } case KVM_GET_TSC_KHZ: { r = -EIO; if (check_tsc_unstable()) goto out; r = vcpu_tsc_khz(vcpu); goto out; } default: r = -EINVAL; } out: kfree(u.buffer); return r; }
0
[]
kvm
0769c5de24621141c953fbe1f943582d37cb4244
101,214,359,560,341,760,000,000,000,000,000,000,000
288
KVM: x86: extend "struct x86_emulate_ops" with "get_cpuid" In order to be able to proceed checks on CPU-specific properties within the emulator, function "get_cpuid" is introduced. With "get_cpuid" it is possible to virtually call the guests "cpuid"-opcode without changing the VM's context. [mtosatti: cleanup/beautify code] Signed-off-by: Stephan Baerwolf <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]>
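A minimal user-space sketch of the pattern this commit message describes: extending an ops table with a new callback so an emulator can query CPU properties without touching VM state. The names (emu_ops, fake_get_cpuid) are invented for illustration and are not the actual kvm API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical ops table mirroring the idea of x86_emulate_ops. */
struct emu_ops {
    /* New callback: query cpuid leaves without changing VM context. */
    bool (*get_cpuid)(uint32_t *eax, uint32_t *ebx,
                      uint32_t *ecx, uint32_t *edx);
};

/* Illustrative backend that fills in canned values. */
static bool fake_get_cpuid(uint32_t *eax, uint32_t *ebx,
                           uint32_t *ecx, uint32_t *edx)
{
    *eax = 0x1; *ebx = 0; *ecx = 0; *edx = 0;
    return true;
}

int main(void)
{
    struct emu_ops ops = { .get_cpuid = fake_get_cpuid };
    uint32_t a, b, c, d;

    /* The emulator can now branch on CPU properties via the callback. */
    if (ops.get_cpuid && ops.get_cpuid(&a, &b, &c, &d))
        printf("leaf 0 eax=%#x\n", a);
    return 0;
}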
static int __bufsize_v4l2_format(struct v4l2_format32 __user *up, u32 *size)
{
	u32 type;

	if (get_user(type, &up->type))
		return -EFAULT;

	switch (type) {
	case V4L2_BUF_TYPE_VIDEO_OVERLAY:
	case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY: {
		u32 clipcount;

		if (get_user(clipcount, &up->fmt.win.clipcount))
			return -EFAULT;
		if (clipcount > 2048)
			return -EINVAL;
		*size = clipcount * sizeof(struct v4l2_clip);
		return 0;
	}
	default:
		*size = 0;
		return 0;
	}
}
0
[ "CWE-787" ]
linux
a1dfb4c48cc1e64eeb7800a27c66a6f7e88d075a
59,279,112,526,754,020,000,000,000,000,000,000,000
24
media: v4l2-compat-ioctl32.c: refactor compat ioctl32 logic The 32-bit compat v4l2 ioctl handling is implemented based on its 64-bit equivalent. It converts 32-bit data structures into its 64-bit equivalents and needs to provide the data to the 64-bit ioctl in user space memory which is commonly allocated using compat_alloc_user_space(). However, due to how that function is implemented, it can only be called a single time for every syscall invocation. Supposedly to avoid this limitation, the existing code uses a mix of memory from the kernel stack and memory allocated through compat_alloc_user_space(). Under normal circumstances, this would not work, because the 64-bit ioctl expects all pointers to point to user space memory. As a workaround, set_fs(KERNEL_DS) is called to temporarily disable this extra safety check and allow kernel pointers. However, this might introduce a security vulnerability: The result of the 32-bit to 64-bit conversion is writeable by user space because the output buffer has been allocated via compat_alloc_user_space(). A malicious user space process could then manipulate pointers inside this output buffer, and due to the previous set_fs(KERNEL_DS) call, functions like get_user() or put_user() no longer prevent kernel memory access. The new approach is to pre-calculate the total amount of user space memory that is needed, allocate it using compat_alloc_user_space() and then divide up the allocated memory to accommodate all data structures that need to be converted. An alternative approach would have been to retain the union type karg that they allocated on the kernel stack in do_video_ioctl(), copy all data from user space into karg and then back to user space. However, we decided against this approach because it does not align with other compat syscall implementations. Instead, we tried to replicate the get_user/put_user pairs as found in other places in the kernel: if (get_user(clipcount, &up->clipcount) || put_user(clipcount, &kp->clipcount)) return -EFAULT; Notes from [email protected]: This patch was taken from: https://github.com/LineageOS/android_kernel_samsung_apq8084/commit/97b733953c06e4f0398ade18850f0817778255f7 Clearly nobody could be bothered to upstream this patch or at minimum tell us :-( We only heard about this a week ago. This patch was rebased and cleaned up. Compared to the original I also swapped the order of the convert_in_user arguments so that they matched copy_in_user. It was hard to review otherwise. I also replaced the ALLOC_USER_SPACE/ALLOC_AND_GET by a normal function. Fixes: 6b5a9492ca ("v4l: introduce string control support.") Signed-off-by: Daniel Mentz <[email protected]> Co-developed-by: Hans Verkuil <[email protected]> Acked-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]> Cc: <[email protected]> # for v4.15 and up Signed-off-by: Mauro Carvalho Chehab <[email protected]>
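A user-space analogue of the get_user()/put_user() pairs this message quotes: each field is read from the 32-bit layout and immediately written to the 64-bit layout, failing fast on either access. The struct layouts and helper names are invented for illustration; the kernel operates on __user pointers instead of plain reads.

#include <stdint.h>
#include <stdio.h>

struct fmt32 { uint32_t clipcount; uint32_t flags; };
struct fmt64 { uint64_t flags;     uint32_t clipcount; };

static int get32(const uint32_t *p, uint32_t *v) { *v = *p; return 0; }
static int put32(uint32_t *p, uint32_t v)        { *p = v; return 0; }
static int put64(uint64_t *p, uint64_t v)        { *p = v; return 0; }

static int fmt_to_64(const struct fmt32 *up, struct fmt64 *kp)
{
    uint32_t clipcount, flags;

    /* kernel: if (get_user(x, &up->f) || put_user(x, &kp->f)) -EFAULT */
    if (get32(&up->clipcount, &clipcount) || put32(&kp->clipcount, clipcount))
        return -1;
    if (get32(&up->flags, &flags) || put64(&kp->flags, flags))
        return -1;
    return 0;
}

int main(void)
{
    struct fmt32 in = { 4, 2 };
    struct fmt64 out;

    printf("rc=%d clipcount=%u\n", fmt_to_64(&in, &out), out.clipcount);
    return 0;
}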
static inline unsigned long group_faults_shared(struct numa_group *ng)
{
	unsigned long faults = 0;
	int node;

	for_each_online_node(node) {
		faults += ng->faults[task_faults_idx(NUMA_MEM, node, 0)];
	}

	return faults;
}
0
[ "CWE-400", "CWE-703", "CWE-835" ]
linux
c40f7d74c741a907cfaeb73a7697081881c497d0
280,031,936,816,424,220,000,000,000,000,000,000,000
11
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the scheduler under high loads, starting at around the v4.18 time frame, and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list manipulation. Do a (manual) revert of: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path") It turns out that the list_del_leaf_cfs_rq() introduced by this commit is a surprising property that was not considered in followup commits such as: 9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list") As Vincent Guittot explains: "I think that there is a bigger problem with commit a9e7f6544b9c and cfs_rq throttling: Let take the example of the following topology TG2 --> TG1 --> root: 1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1 cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in one path because it has never been used and can't be throttled so tmp_alone_branch will point to leaf_cfs_rq_list at the end. 2) Then TG1 is throttled 3) and we add TG3 as a new child of TG1. 4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1 cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list. With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list. So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1 cfs_rq is removed from the list. Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list but tmp_alone_branch still points to TG3 cfs_rq because its throttled parent can't be enqueued when the lock is released. tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should. So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch points on another TG cfs_rq, the next TG cfs_rq that will be added, will be linked outside rq->leaf_cfs_rq_list - which is bad. In addition, we can break the ordering of the cfs_rq in rq->leaf_cfs_rq_list but this ordering is used to update and propagate the update from leaf down to root." Instead of trying to work through all these cases and trying to reproduce the very high loads that produced the lockup to begin with, simplify the code temporarily by reverting a9e7f6544b9c - which change was clearly not thought through completely. This (hopefully) gives us a kernel that doesn't lock up so people can continue to enjoy their holidays without worrying about regressions. ;-) [ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ] Analyzed-by: Xie XiuQi <[email protected]> Analyzed-by: Vincent Guittot <[email protected]> Reported-by: Zhipeng Xie <[email protected]> Reported-by: Sargun Dhillon <[email protected]> Reported-by: Xie XiuQi <[email protected]> Tested-by: Zhipeng Xie <[email protected]> Tested-by: Sargun Dhillon <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Acked-by: Vincent Guittot <[email protected]> Cc: <[email protected]> # v4.13+ Cc: Bin Li <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
static unsigned char *get_mcs(int bitrate)
{
	struct yam_mcs *p;

	p = yam_data;
	while (p) {
		if (p->bitrate == bitrate)
			return p->bits;
		p = p->next;
	}

	/* Load predefined mcs data */
	switch (bitrate) {
	case 1200:
		/* setting predef as YAM_1200 for loading predef 1200 mcs */
		return add_mcs(NULL, bitrate, YAM_1200);
	default:
		/* setting predef as YAM_9600 for loading predef 9600 mcs */
		return add_mcs(NULL, bitrate, YAM_9600);
	}
}
0
[ "CWE-401" ]
linux
29eb31542787e1019208a2e1047bb7c76c069536
133,793,977,531,995,070,000,000,000,000,000,000,000
21
yam: fix a memory leak in yam_siocdevprivate() ym needs to be free when ym->cmd != SIOCYAMSMCS. Fixes: 0781168e23a2 ("yam: fix a missing-check bug") Signed-off-by: Hangyu Hua <[email protected]> Signed-off-by: David S. Miller <[email protected]>
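A minimal user-space sketch of the leak shape this fix addresses: a request buffer is copied in, only one command path consumes it, and every other path must release it before returning. CMD_SET_MCS and the struct layout are hypothetical stand-ins, not the driver's real names.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CMD_SET_MCS 1   /* hypothetical stand-in for SIOCYAMSMCS */

struct req { int cmd; char payload[16]; };

static int handle_ioctl(const struct req *user_req)
{
    struct req *ym = malloc(sizeof(*ym));

    if (!ym)
        return -1;
    memcpy(ym, user_req, sizeof(*ym));   /* kernel: copy_from_user() */

    if (ym->cmd != CMD_SET_MCS) {
        free(ym);                        /* the fix: no leak on this path */
        return -2;
    }
    /* ... use ym->payload, then release it ... */
    free(ym);
    return 0;
}

int main(void)
{
    struct req r = { .cmd = 0, .payload = "x" };

    printf("rc=%d\n", handle_ioctl(&r));
    return 0;
}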
static int ptrace_setoptions(struct task_struct *child, unsigned long data)
{
	unsigned flags;
	int ret;

	ret = check_ptrace_options(data);
	if (ret)
		return ret;

	/* Avoid intermediate state when all opts are cleared */
	flags = child->ptrace;
	flags &= ~(PTRACE_O_MASK << PT_OPT_FLAG_SHIFT);
	flags |= (data << PT_OPT_FLAG_SHIFT);
	child->ptrace = flags;

	return 0;
}
0
[ "CWE-276", "CWE-703", "CWE-863" ]
linux
ee1fee900537b5d9560e9f937402de5ddc8412f3
315,801,124,649,429,600,000,000,000,000,000,000,000
17
ptrace: Check PTRACE_O_SUSPEND_SECCOMP permission on PTRACE_SEIZE Setting PTRACE_O_SUSPEND_SECCOMP is supposed to be a highly privileged operation because it allows the tracee to completely bypass all seccomp filters on kernels with CONFIG_CHECKPOINT_RESTORE=y. It is only supposed to be settable by a process with global CAP_SYS_ADMIN, and only if that process is not subject to any seccomp filters at all. However, while these permission checks were done on the PTRACE_SETOPTIONS path, they were missing on the PTRACE_SEIZE path, which also sets user-specified ptrace flags. Move the permissions checks out into a helper function and let both ptrace_attach() and ptrace_setoptions() call it. Cc: [email protected] Fixes: 13c4a90119d2 ("seccomp: add ptrace options for suspend/resume") Signed-off-by: Jann Horn <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Eric W. Biederman <[email protected]>
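A small sketch of the fix's structure: one helper holds the privilege check so the two entry points (PTRACE_SETOPTIONS and PTRACE_SEIZE in the real code) cannot drift apart. The flag value and function names are illustrative only.

#include <stdio.h>

#define OPT_SUSPEND_SECCOMP 0x1   /* hypothetical flag value */

/* Shared check: the dangerous option needs global privilege. */
static int check_options(unsigned long data, int has_admin)
{
    if ((data & OPT_SUSPEND_SECCOMP) && !has_admin)
        return -1;                /* kernel: -EPERM */
    return 0;
}

static int do_setoptions(unsigned long data, int has_admin)
{
    return check_options(data, has_admin);
}

static int do_seize(unsigned long data, int has_admin)
{
    return check_options(data, has_admin);  /* previously missing check */
}

int main(void)
{
    printf("setoptions unprivileged: %d\n",
           do_setoptions(OPT_SUSPEND_SECCOMP, 0));
    printf("seize unprivileged:      %d\n",
           do_seize(OPT_SUSPEND_SECCOMP, 0));
    printf("seize privileged:        %d\n",
           do_seize(OPT_SUSPEND_SECCOMP, 1));
    return 0;
}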
gnome_desktop_thumbnail_factory_class_init (GnomeDesktopThumbnailFactoryClass *class)
{
	GObjectClass *gobject_class;

	gobject_class = G_OBJECT_CLASS (class);

	gobject_class->finalize = gnome_desktop_thumbnail_factory_finalize;

	g_type_class_add_private (class, sizeof (GnomeDesktopThumbnailFactoryPrivate));
}
0
[]
nautilus
2ddba428ef2b13d0620bd599c3635b9c11044659
226,786,193,330,057,600,000,000,000,000,000,000,000
10
Update gnome-desktop code Closes https://gitlab.gnome.org/GNOME/nautilus/issues/987
static int op64_get_current_rxslot(struct b43_dmaring *ring)
{
	u32 val;

	val = b43_dma_read(ring, B43_DMA64_RXSTATUS);
	val &= B43_DMA64_RXSTATDPTR;

	return (val / sizeof(struct b43_dmadesc64));
}
0
[ "CWE-119", "CWE-787" ]
linux
c85ce65ecac078ab1a1835c87c4a6319cf74660a
279,657,630,056,352,040,000,000,000,000,000,000,000
9
b43: allocate receive buffers big enough for max frame len + offset Otherwise, skb_put inside of dma_rx can fail... https://bugzilla.kernel.org/show_bug.cgi?id=32042 Signed-off-by: John W. Linville <[email protected]> Acked-by: Larry Finger <[email protected]> Cc: [email protected]
unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable)
{
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

	return gfn_to_hva_memslot_prot(slot, gfn, writable);
}
0
[ "CWE-416" ]
linux
0774a964ef561b7170d8d1b1bfe6f88002b6d219
229,158,219,027,242,020,000,000,000,000,000,000,000
6
KVM: Fix out of range accesses to memslots Reset the LRU slot if it becomes invalid when deleting a memslot to fix an out-of-bounds/use-after-free access when searching through memslots. Explicitly check for there being no used slots in search_memslots(), and in the caller of s390's approximation variant. Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots") Reported-by: Qian Cai <[email protected]> Cc: Peter Xu <[email protected]> Signed-off-by: Sean Christopherson <[email protected]> Message-Id: <[email protected]> Acked-by: Christian Borntraeger <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
static int FUZ_mallocTests(unsigned seed, double compressibility, unsigned part)
{
    (void)seed; (void)compressibility; (void)part;
    return 0;
}
0
[ "CWE-362" ]
zstd
3e5cdf1b6a85843e991d7d10f6a2567c15580da0
331,202,986,874,979,470,000,000,000,000,000,000,000
5
fixed T36302429
bool LibarchivePlugin::copyFiles(const QVector<Archive::Entry*> &files, Archive::Entry *destination, const CompressionOptions &options)
{
    Q_UNUSED(files)
    Q_UNUSED(destination)
    Q_UNUSED(options)
    return false;
}
0
[ "CWE-59", "CWE-61" ]
ark
8bf8c5ef07b0ac5e914d752681e470dea403a5bd
252,587,313,096,233,900,000,000,000,000,000,000,000
7
Pass the ARCHIVE_EXTRACT_SECURE_SYMLINKS flag to libarchive There are archive types which allow to first create a symlink and then later on dereference it. If the symlink points outside of the archive, this results in writing outside of the destination directory. With the ARCHIVE_EXTRACT_SECURE_SYMLINKS option set, libarchive avoids this situation by verifying that none of the target path components are symlinks before writing. Remove the commented out code in the method, which would actually misbehave if enabled again. Signed-off-by: Fabian Vogt <[email protected]>
schemeToProxy(int scheme)
{
    ParsedURL *pu = NULL;	/* for gcc */
    switch (scheme) {
    case SCM_HTTP:
	pu = &HTTP_proxy_parsed;
	break;
#ifdef USE_SSL
    case SCM_HTTPS:
	pu = &HTTPS_proxy_parsed;
	break;
#endif
    case SCM_FTP:
	pu = &FTP_proxy_parsed;
	break;
#ifdef USE_GOPHER
    case SCM_GOPHER:
	pu = &GOPHER_proxy_parsed;
	break;
#endif
#ifdef DEBUG
    default:
	abort();
#endif
    }
    return pu;
}
0
[ "CWE-119" ]
w3m
ba9d78faeba9024c3e8840579c3b0e959ae2cb0f
180,064,575,614,410,070,000,000,000,000,000,000,000
27
Prevent global-buffer-overflow in parseURL() Bug-Debian: https://github.com/tats/w3m/issues/41
static int econet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
{
	struct sockaddr_ec *sec = (struct sockaddr_ec *)uaddr;
	struct sock *sk;
	struct econet_sock *eo;

	/*
	 *	Check legality
	 */

	if (addr_len < sizeof(struct sockaddr_ec) ||
	    sec->sec_family != AF_ECONET)
		return -EINVAL;

	mutex_lock(&econet_mutex);

	sk = sock->sk;
	eo = ec_sk(sk);

	eo->cb	    = sec->cb;
	eo->port    = sec->port;
	eo->station = sec->addr.station;
	eo->net	    = sec->addr.net;

	mutex_unlock(&econet_mutex);

	return 0;
}
0
[ "CWE-200" ]
linux-2.6
80922bbb12a105f858a8f0abb879cb4302d0ecaa
72,473,236,791,753,190,000,000,000,000,000,000,000
28
econet: Fix econet_getname() leak econet_getname() can leak kernel memory to user. Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
int sc_asn1_encode(sc_context_t *ctx, const struct sc_asn1_entry *asn1,
		   u8 **ptr, size_t *size)
{
	return asn1_encode(ctx, asn1, ptr, size, 0);
}
0
[ "CWE-119", "CWE-787" ]
OpenSC
412a6142c27a5973c61ba540e33cdc22d5608e68
26,569,893,839,896,066,000,000,000,000,000,000,000
5
fixed out of bounds access of ASN.1 Bitstring Credit to OSS-Fuzz
static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u, int closing, int tx_ring) { struct pgv *pg_vec = NULL; struct packet_sock *po = pkt_sk(sk); int was_running, order = 0; struct packet_ring_buffer *rb; struct sk_buff_head *rb_queue; __be16 num; int err = -EINVAL; /* Added to avoid minimal code churn */ struct tpacket_req *req = &req_u->req; /* Opening a Tx-ring is NOT supported in TPACKET_V3 */ if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) { net_warn_ratelimited("Tx-ring is not supported.\n"); goto out; } rb = tx_ring ? &po->tx_ring : &po->rx_ring; rb_queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue; err = -EBUSY; if (!closing) { if (atomic_read(&po->mapped)) goto out; if (packet_read_pending(rb)) goto out; } if (req->tp_block_nr) { /* Sanity tests and some calculations */ err = -EBUSY; if (unlikely(rb->pg_vec)) goto out; switch (po->tp_version) { case TPACKET_V1: po->tp_hdrlen = TPACKET_HDRLEN; break; case TPACKET_V2: po->tp_hdrlen = TPACKET2_HDRLEN; break; case TPACKET_V3: po->tp_hdrlen = TPACKET3_HDRLEN; break; } err = -EINVAL; if (unlikely((int)req->tp_block_size <= 0)) goto out; if (unlikely(!PAGE_ALIGNED(req->tp_block_size))) goto out; if (po->tp_version >= TPACKET_V3 && (int)(req->tp_block_size - BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0) goto out; if (unlikely(req->tp_frame_size < po->tp_hdrlen + po->tp_reserve)) goto out; if (unlikely(req->tp_frame_size & (TPACKET_ALIGNMENT - 1))) goto out; rb->frames_per_block = req->tp_block_size / req->tp_frame_size; if (unlikely(rb->frames_per_block == 0)) goto out; if (unlikely((rb->frames_per_block * req->tp_block_nr) != req->tp_frame_nr)) goto out; err = -ENOMEM; order = get_order(req->tp_block_size); pg_vec = alloc_pg_vec(req, order); if (unlikely(!pg_vec)) goto out; switch (po->tp_version) { case TPACKET_V3: /* Transmit path is not supported. We checked * it above but just being paranoid */ if (!tx_ring) init_prb_bdqc(po, rb, pg_vec, req_u); break; default: break; } } /* Done */ else { err = -EINVAL; if (unlikely(req->tp_frame_nr)) goto out; } lock_sock(sk); /* Detach socket from network */ spin_lock(&po->bind_lock); was_running = po->running; num = po->num; if (was_running) { po->num = 0; __unregister_prot_hook(sk, false); } spin_unlock(&po->bind_lock); synchronize_net(); err = -EBUSY; mutex_lock(&po->pg_vec_lock); if (closing || atomic_read(&po->mapped) == 0) { err = 0; spin_lock_bh(&rb_queue->lock); swap(rb->pg_vec, pg_vec); rb->frame_max = (req->tp_frame_nr - 1); rb->head = 0; rb->frame_size = req->tp_frame_size; spin_unlock_bh(&rb_queue->lock); swap(rb->pg_vec_order, order); swap(rb->pg_vec_len, req->tp_block_nr); rb->pg_vec_pages = req->tp_block_size/PAGE_SIZE; po->prot_hook.func = (po->rx_ring.pg_vec) ? tpacket_rcv : packet_rcv; skb_queue_purge(rb_queue); if (atomic_read(&po->mapped)) pr_err("packet_mmap: vma is busy: %d\n", atomic_read(&po->mapped)); } mutex_unlock(&po->pg_vec_lock); spin_lock(&po->bind_lock); if (was_running) { po->num = num; register_prot_hook(sk); } spin_unlock(&po->bind_lock); if (closing && (po->tp_version > TPACKET_V2)) { /* Because we don't support block-based V3 on tx-ring */ if (!tx_ring) prb_shutdown_retire_blk_timer(po, rb_queue); } release_sock(sk); if (pg_vec) free_pg_vec(pg_vec, order, req->tp_block_nr); out: return err; }
1
[ "CWE-416", "CWE-362" ]
linux
84ac7260236a49c79eede91617700174c2c19b0c
106,097,602,093,951,040,000,000,000,000,000,000,000
150
packet: fix race condition in packet_set_ring When packet_set_ring creates a ring buffer it will initialize a struct timer_list if the packet version is TPACKET_V3. This value can then be raced by a different thread calling setsockopt to set the version to TPACKET_V1 before packet_set_ring has finished. This leads to a use-after-free on a function pointer in the struct timer_list when the socket is closed as the previously initialized timer will not be deleted. The bug is fixed by taking lock_sock(sk) in packet_setsockopt when changing the packet version while also taking the lock at the start of packet_set_ring. Fixes: f6fb8f100b80 ("af-packet: TPACKET_V3 flexible buffer implementation.") Signed-off-by: Philip Pettersson <[email protected]> Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
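A minimal sketch of the serialization the fix introduces: the path that changes the ring version and the path that sets up the ring both take the same socket-level lock, so the version can no longer change mid-setup. A plain pthread mutex stands in for the kernel's lock_sock(sk); all names here are illustrative.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;
static int tp_version = 3;

static void set_version(int v)
{
    pthread_mutex_lock(&sk_lock);     /* added by the fix in setsockopt */
    tp_version = v;
    pthread_mutex_unlock(&sk_lock);
}

static void set_ring(void)
{
    pthread_mutex_lock(&sk_lock);     /* held for the whole ring setup */
    if (tp_version == 3) {
        /* ... init the V3 retire timer; the version is stable here,
         * so the timer will be torn down by the matching V3 path ... */
    }
    pthread_mutex_unlock(&sk_lock);
}

int main(void)
{
    set_version(1);
    set_ring();
    puts("ok");
    return 0;
}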
static int query_formats(AVFilterContext *ctx)
{
    AVFilterFormats *formats;
    enum AVPixelFormat pix_fmt;
    int ret;

    /** accept any input pixel format that is not hardware accelerated, not
     *  a bitstream format, and does not have vertically sub-sampled chroma */
    if (ctx->inputs[0]) {
        formats = NULL;
        for (pix_fmt = 0; pix_fmt < AV_PIX_FMT_NB; pix_fmt++) {
            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
            if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL ||
                  desc->flags & AV_PIX_FMT_FLAG_BITSTREAM) &&
                desc->nb_components && !desc->log2_chroma_h &&
                (ret = ff_add_format(&formats, pix_fmt)) < 0) {
                ff_formats_unref(&formats);
                return ret;
            }
        }
        ff_formats_ref(formats, &ctx->inputs[0]->out_formats);
        ff_formats_ref(formats, &ctx->outputs[0]->in_formats);
    }

    return 0;
}
0
[ "CWE-119", "CWE-787" ]
FFmpeg
e43a0a232dbf6d3c161823c2e07c52e76227a1bc
232,829,042,140,428,770,000,000,000,000,000,000,000
26
avfilter: fix plane validity checks Fixes out of array accesses Signed-off-by: Michael Niedermayer <[email protected]>
int EmitCIEBasedABC(cmsIOHANDLER* m, cmsFloat64Number* Matrix, cmsToneCurve** CurveSet, cmsCIEXYZ* BlackPoint)
{
    int i;

    _cmsIOPrintf(m, "[ /CIEBasedABC\n");
    _cmsIOPrintf(m, "<<\n");
    _cmsIOPrintf(m, "/DecodeABC [ ");

    EmitNGamma(m, 3, CurveSet);

    _cmsIOPrintf(m, "]\n");

    _cmsIOPrintf(m, "/MatrixABC [ " );

    for( i=0; i < 3; i++ ) {

        _cmsIOPrintf(m, "%.6f %.6f %.6f ", Matrix[i + 3*0],
                                           Matrix[i + 3*1],
                                           Matrix[i + 3*2]);
    }

    _cmsIOPrintf(m, "]\n");

    _cmsIOPrintf(m, "/RangeLMN [ 0.0 0.9642 0.0 1.0000 0.0 0.8249 ]\n");

    EmitWhiteBlackD50(m, BlackPoint);

    EmitIntent(m, INTENT_PERCEPTUAL);

    _cmsIOPrintf(m, ">>\n");
    _cmsIOPrintf(m, "]\n");

    return 1;
}
0
[]
Little-CMS
d1fcbdc6bcdd50432ae95e9c6f83e79338f87690
213,531,686,868,548,170,000,000,000,000,000,000,000
35
Buffer overflow fix. BuildColorantList() concatenates 32 bit data in each iteration. The max value of iteration can be 16. Thus buffer overflow can occur with storage of 128 bytes. It would need 512 bytes in worst case.
static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
					 enum perf_bpf_event_type type)
{
	bool unregister = type == PERF_BPF_EVENT_PROG_UNLOAD;
	int i;

	if (prog->aux->func_cnt == 0) {
		perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
				   (u64)(unsigned long)prog->bpf_func,
				   prog->jited_len, unregister,
				   prog->aux->ksym.name);
	} else {
		for (i = 0; i < prog->aux->func_cnt; i++) {
			struct bpf_prog *subprog = prog->aux->func[i];

			perf_event_ksymbol(
				PERF_RECORD_KSYMBOL_TYPE_BPF,
				(u64)(unsigned long)subprog->bpf_func,
				subprog->jited_len, unregister,
				prog->aux->ksym.name);
		}
	}
}
0
[ "CWE-401" ]
tip
7bdb157cdebbf95a1cd94ed2e01b338714075d00
272,269,716,485,332,200,000,000,000,000,000,000,000
23
perf/core: Fix a memory leak in perf_event_parse_addr_filter() As shown through runtime testing, the "filename" allocation is not always freed in perf_event_parse_addr_filter(). There are three possible ways that this could happen: - It could be allocated twice on subsequent iterations through the loop, - or leaked on the success path, - or on the failure path. Clean up the code flow to make it obvious that 'filename' is always freed in the reallocation path and in the two return paths as well. We rely on the fact that kfree(NULL) is NOP and filename is initialized with NULL. This fixes the leak. No other side effects expected. [ Dan Carpenter: cleaned up the code flow & added a changelog. ] [ Ingo Molnar: updated the changelog some more. ] Fixes: 375637bc5249 ("perf/core: Introduce address range filtering") Signed-off-by: "kiyin(尹亮)" <[email protected]> Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: "Srivatsa S. Bhat" <[email protected]> Cc: Anthony Liguori <[email protected]> -- kernel/events/core.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-)
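A standalone sketch of the cleaned-up flow the changelog describes: the pointer starts as NULL, is freed before every reallocation inside the loop, and is freed once more on every exit path, relying on free(NULL) being a no-op. Only the shape of the fix is shown; the filter-parsing details are elided.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int parse_filters(const char *const *tokens, int n)
{
    char *filename = NULL;
    int i, ret = 0;

    for (i = 0; i < n; i++) {
        free(filename);               /* safe even on the first pass */
        filename = strdup(tokens[i]);
        if (!filename) {
            ret = -1;
            goto out;                 /* failure path frees below */
        }
        /* ... consume filename for this filter ... */
    }
out:
    free(filename);                   /* success path frees too */
    return ret;
}

int main(void)
{
    const char *t[] = { "a.so", "b.so" };

    printf("rc=%d\n", parse_filters(t, 2));
    return 0;
}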
static void idr(H264Context *h)
{
    int i;
    ff_h264_remove_all_refs(h);
    h->prev_frame_num = h->prev_frame_num_offset = 0;
    h->prev_poc_msb = 1<<16;
    h->prev_poc_lsb = 0;
    for (i = 0; i < MAX_DELAYED_PIC_COUNT; i++)
        h->last_pocs[i] = INT_MIN;
}
0
[ "CWE-703" ]
FFmpeg
e8714f6f93d1a32f4e4655209960afcf4c185214
95,890,470,114,770,750,000,000,000,000,000,000,000
11
avcodec/h264: Clear delayed_pic on deallocation Fixes use of freed memory Fixes: case5_av_frame_copy_props.mp4 Found-by: Michal Zalewski <[email protected]> Signed-off-by: Michael Niedermayer <[email protected]>
EXPORTED int check_precond(struct transaction_t *txn, const void *data, const char *etag, time_t lastmod) { const char *lock_token = NULL; unsigned locked = 0; hdrcache_t hdrcache = txn->req_hdrs; const char **hdr; time_t since; #ifdef WITH_DAV struct dav_data *ddata = (struct dav_data *) data; /* Check for a write-lock on the source */ if (ddata && ddata->lock_expire > time(NULL)) { lock_token = ddata->lock_token; switch (txn->meth) { case METH_DELETE: case METH_LOCK: case METH_MOVE: case METH_POST: case METH_PUT: /* State-changing method: Only the lock owner can execute and MUST provide the correct lock-token in an If header */ if (strcmp(ddata->lock_ownerid, httpd_userid)) return HTTP_LOCKED; locked = 1; break; case METH_UNLOCK: /* State-changing method: Authorized in meth_unlock() */ break; case METH_ACL: case METH_MKCALENDAR: case METH_MKCOL: case METH_PROPPATCH: /* State-changing method: Locks on collections unsupported */ break; default: /* Non-state-changing method: Always allowed */ break; } } #else assert(!data); #endif /* WITH_DAV */ /* Per RFC 4918, If is similar to If-Match, but with lock-token submission. Per Section 5 of HTTPbis, Part 4, LOCK errors supercede preconditions */ if ((hdr = spool_getheader(hdrcache, "If"))) { /* State tokens (sync-token, lock-token) and Etags */ if (!eval_if(hdr[0], etag, lock_token, &locked)) return HTTP_PRECOND_FAILED; } if (locked) { /* Correct lock-token was not provided in If header */ return HTTP_LOCKED; } /* Evaluate other precondition headers per Section 5 of HTTPbis, Part 4 */ /* Step 1 */ if ((hdr = spool_getheader(hdrcache, "If-Match"))) { if (!etag_match(hdr, etag)) return HTTP_PRECOND_FAILED; /* Continue to step 3 */ } /* Step 2 */ else if ((hdr = spool_getheader(hdrcache, "If-Unmodified-Since"))) { if (time_from_rfc822(hdr[0], &since)) return HTTP_BAD_REQUEST; if (since && (lastmod > since)) return HTTP_PRECOND_FAILED; /* Continue to step 3 */ } /* Step 3 */ if ((hdr = spool_getheader(hdrcache, "If-None-Match"))) { if (etag_match(hdr, etag)) { if (txn->meth == METH_GET || txn->meth == METH_HEAD) return HTTP_NOT_MODIFIED; else return HTTP_PRECOND_FAILED; } /* Continue to step 5 */ } /* Step 4 */ else if ((txn->meth == METH_GET || txn->meth == METH_HEAD) && (hdr = spool_getheader(hdrcache, "If-Modified-Since"))) { if (time_from_rfc822(hdr[0], &since)) return HTTP_BAD_REQUEST; if (lastmod <= since) return HTTP_NOT_MODIFIED; /* Continue to step 5 */ } /* Step 5 */ if (txn->flags.ranges && /* Only if we support Range requests */ txn->meth == METH_GET && (hdr = spool_getheader(hdrcache, "Range"))) { if ((hdr = spool_getheader(hdrcache, "If-Range"))) { time_from_rfc822(hdr[0], &since); /* error OK here, could be an etag */ } /* Only process Range if If-Range isn't present or validator matches */ if (!hdr || (since && (lastmod <= since)) || !etagcmp(hdr[0], etag)) return HTTP_PARTIAL; } /* Step 6 */ return HTTP_OK; }
0
[ "CWE-787" ]
cyrus-imapd
a5779db8163b99463e25e7c476f9cbba438b65f3
258,048,892,209,108,480,000,000,000,000,000,000,000
120
HTTP: don't overrun buffer when parsing strings with sscanf()
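A compilable illustration of the overrun class this one-line message refers to: an unbounded %s conversion writes as many bytes as the input supplies, while a field width caps what can reach the destination. This is the generic pattern, not the cyrus-imapd code itself.

#include <stdio.h>

int main(void)
{
    char small[8];

    /* Dangerous: "%s" writes however many bytes the input provides.
     * sscanf(input, "%s", small);                                    */

    /* Bounded: at most 7 characters plus the NUL reach the buffer.   */
    if (sscanf("averylongtokenindeed", "%7s", small) == 1)
        printf("parsed '%s'\n", small);
    return 0;
}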
static void plugin_instance_finalize(PluginInstance *plugin)
{
  if (plugin->browser_toplevel) {
    g_object_unref(plugin->browser_toplevel);
    plugin->browser_toplevel = NULL;
  }
  if (plugin->instance) {
    free(plugin->instance);
    plugin->instance = NULL;
  }
}
0
[ "CWE-264" ]
nspluginwrapper
7e4ab8e1189846041f955e6c83f72bc1624e7a98
327,380,973,403,447,640,000,000,000,000,000,000,000
11
Support all the new variables added
static void ip6_evictor(struct net *net, struct inet6_dev *idev)
{
	int evicted;

	evicted = inet_frag_evictor(&net->ipv6.frags, &ip6_frags);
	if (evicted)
		IP6_ADD_STATS_BH(net, idev, IPSTATS_MIB_REASMFAILS, evicted);
}
0
[]
linux-2.6
70789d7052239992824628db8133de08dc78e593
226,767,978,544,525,720,000,000,000,000,000,000,000
8
ipv6: discard overlapping fragment RFC5722 prohibits reassembling fragments when some data overlaps. Bug spotted by Zhang Zuotao <[email protected]>. Signed-off-by: Nicolas Dichtel <[email protected]> Signed-off-by: David S. Miller <[email protected]>
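The core of an RFC 5722 overlap test is plain interval arithmetic: two fragments overlap iff each starts before the other ends. A minimal sketch, independent of the kernel's fragment queue structures:

#include <stdio.h>

/* Fragment A = [a_off, a_end) overlaps B = [b_off, b_end) iff
 * each begins strictly before the other ends. */
static int frags_overlap(unsigned a_off, unsigned a_end,
                         unsigned b_off, unsigned b_end)
{
    return a_off < b_end && b_off < a_end;
}

int main(void)
{
    /* new fragment [100,200) vs queued fragment [150,300): overlap */
    printf("%d\n", frags_overlap(100, 200, 150, 300)); /* 1: discard */
    /* adjacent fragments [0,100) and [100,200): no overlap */
    printf("%d\n", frags_overlap(0, 100, 100, 200));   /* 0: keep */
    return 0;
}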
stonith_set_debug	(Stonith* s, int debuglevel)
{
	StonithPlugin*	sp = (StonithPlugin*)s;

	if (StonithPIsys == NULL) {
		return;
	}
	PILSetDebugLevel(StonithPIsys, STONITH_TYPE_S, sp->s.stype, debuglevel);
}
0
[ "CWE-287" ]
cluster-glue
3d7b464439ee0271da76e0ee9480f3dc14005879
211,466,857,431,787,040,000,000,000,000,000,000,000
8
Medium: stonith: add -E option to get the configuration from the environment
void TABLE::restore_blob_values(String *blob_storage)
{
  Field **vfield_ptr;
  for (vfield_ptr= vfield; *vfield_ptr; vfield_ptr++)
  {
    if ((*vfield_ptr)->type() == MYSQL_TYPE_BLOB &&
        !(*vfield_ptr)->vcol_info->stored_in_db)
    {
      Field_blob *blob= ((Field_blob*) *vfield_ptr);
      blob->value.free();
      memcpy((void*) &blob->value, (void*) blob_storage, sizeof(blob->value));
      blob_storage++;
    }
  }
}
0
[ "CWE-416" ]
server
c02ebf3510850ba78a106be9974c94c3b97d8585
44,400,565,734,316,850,000,000,000,000,000,000,000
15
MDEV-24176 Preparations 1. moved fix_vcol_exprs() call to open_table() mysql_alter_table() doesn't do lock_tables() so it cannot win from fix_vcol_exprs() from there. Tests affected: main.default_session 2. Vanilla cleanups and comments.
StreamBufferHandle_t xStreamBufferGenericCreate( size_t xBufferSizeBytes, size_t xTriggerLevelBytes, BaseType_t xIsMessageBuffer ) { uint8_t * pucAllocatedMemory; uint8_t ucFlags; /* In case the stream buffer is going to be used as a message buffer * (that is, it will hold discrete messages with a little meta data that * says how big the next message is) check the buffer will be large enough * to hold at least one message. */ if( xIsMessageBuffer == pdTRUE ) { /* Is a message buffer but not statically allocated. */ ucFlags = sbFLAGS_IS_MESSAGE_BUFFER; configASSERT( xBufferSizeBytes > sbBYTES_TO_STORE_MESSAGE_LENGTH ); } else { /* Not a message buffer and not statically allocated. */ ucFlags = 0; configASSERT( xBufferSizeBytes > 0 ); } configASSERT( xTriggerLevelBytes <= xBufferSizeBytes ); /* A trigger level of 0 would cause a waiting task to unblock even when * the buffer was empty. */ if( xTriggerLevelBytes == ( size_t ) 0 ) { xTriggerLevelBytes = ( size_t ) 1; } /* A stream buffer requires a StreamBuffer_t structure and a buffer. * Both are allocated in a single call to pvPortMalloc(). The * StreamBuffer_t structure is placed at the start of the allocated memory * and the buffer follows immediately after. The requested size is * incremented so the free space is returned as the user would expect - * this is a quirk of the implementation that means otherwise the free * space would be reported as one byte smaller than would be logically * expected. */ xBufferSizeBytes++; pucAllocatedMemory = ( uint8_t * ) pvPortMalloc( xBufferSizeBytes + sizeof( StreamBuffer_t ) ); /*lint !e9079 malloc() only returns void*. */ if( pucAllocatedMemory != NULL ) { prvInitialiseNewStreamBuffer( ( StreamBuffer_t * ) pucAllocatedMemory, /* Structure at the start of the allocated memory. */ /*lint !e9087 Safe cast as allocated memory is aligned. */ /*lint !e826 Area is not too small and alignment is guaranteed provided malloc() behaves as expected and returns aligned buffer. */ pucAllocatedMemory + sizeof( StreamBuffer_t ), /* Storage area follows. */ /*lint !e9016 Indexing past structure valid for uint8_t pointer, also storage area has no alignment requirement. */ xBufferSizeBytes, xTriggerLevelBytes, ucFlags ); traceSTREAM_BUFFER_CREATE( ( ( StreamBuffer_t * ) pucAllocatedMemory ), xIsMessageBuffer ); } else { traceSTREAM_BUFFER_CREATE_FAILED( xIsMessageBuffer ); } return ( StreamBufferHandle_t ) pucAllocatedMemory; /*lint !e9087 !e826 Safe cast as allocated memory is aligned. */ }
1
[ "CWE-190" ]
FreeRTOS-Kernel
d05b9c123f2bf9090bce386a244fc934ae44db5b
218,343,301,167,279,600,000,000,000,000,000,000
61
Add addition overflow check for stream buffer (#226)
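A minimal sketch of the added check: the allocation size is the caller's buffer size plus the control structure, and that addition can wrap, yielding a tiny allocation the rest of the code then overruns. Detect the wrap before allocating. malloc() stands in for pvPortMalloc() and the struct is a stand-in for StreamBuffer_t.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct ctrl { size_t head, tail; };    /* stand-in for StreamBuffer_t */

static void *create_buffer(size_t bytes)
{
    size_t total = bytes + sizeof(struct ctrl);

    if (total < bytes)                 /* addition wrapped: refuse */
        return NULL;
    return malloc(total);
}

int main(void)
{
    void *ok  = create_buffer(128);
    void *bad = create_buffer((size_t)-1);   /* forces the wrap */

    printf("ok=%p bad=%p\n", ok, bad);
    free(ok);
    return 0;
}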
static int ipv6_hex(unsigned char *out, const char *in, int inlen)
{
    unsigned char c;
    unsigned int num = 0;
    int x;

    if (inlen > 4)
        return 0;
    while (inlen--) {
        c = *in++;
        num <<= 4;
        x = OPENSSL_hexchar2int(c);
        if (x < 0)
            return 0;
        num |= (char)x;
    }
    out[0] = num >> 8;
    out[1] = num & 0xff;
    return 1;
}
0
[ "CWE-125" ]
openssl
bb4d2ed4091408404e18b3326e3df67848ef63d0
94,842,342,542,294,780,000,000,000,000,000,000,000
20
Fix append_ia5 function to not assume NUL terminated strings ASN.1 strings may not be NUL terminated. Don't assume they are. CVE-2021-3712 Reviewed-by: Viktor Dukhovni <[email protected]> Reviewed-by: Paul Dale <[email protected]>
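A self-contained sketch of the rule this fix enforces: ASN.1 string values carry an explicit length and may contain no terminating NUL, so strlen()/strcpy() on the data can read past the buffer; always propagate the stored length. The struct here is a simplified stand-in for OpenSSL's ASN1_STRING.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct asn1_string { const unsigned char *data; int length; };

static char *string_dup(const struct asn1_string *s)
{
    char *out = malloc((size_t)s->length + 1);

    if (!out)
        return NULL;
    memcpy(out, s->data, (size_t)s->length); /* bounded by ->length */
    out[s->length] = '\0';                   /* add our own terminator */
    return out;
}

int main(void)
{
    unsigned char raw[4] = { 'a', 'b', 'c', 'd' };   /* no NUL inside */
    struct asn1_string s = { raw, 4 };
    char *dup = string_dup(&s);

    printf("%s\n", dup ? dup : "(oom)");
    free(dup);
    return 0;
}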
static int talk_to_netback_xdp(struct netfront_info *np, int xdp)
{
	int err;
	unsigned short headroom;

	headroom = xdp ? XDP_PACKET_HEADROOM : 0;
	err = xenbus_printf(XBT_NIL, np->xbdev->nodename,
			    "xdp-headroom", "%hu",
			    headroom);
	if (err)
		pr_warn("Error writing xdp-headroom\n");

	return err;
}
0
[]
linux
f63c2c2032c2e3caad9add3b82cc6e91c376fd26
213,895,088,067,346,300,000,000,000,000,000,000,000
14
xen-netfront: restore __skb_queue_tail() positioning in xennet_get_responses() The commit referenced below moved the invocation past the "next" label, without any explanation. In fact this allows misbehaving backends undue control over the domain the frontend runs in, as earlier detected errors require the skb to not be freed (it may be retained for later processing via xennet_move_rx_slot(), or it may simply be unsafe to have it freed). This is CVE-2022-33743 / XSA-405. Fixes: 6c5aa6fc4def ("xen networking: add basic XDP support for xen-netfront") Signed-off-by: Jan Beulich <[email protected]> Reviewed-by: Juergen Gross <[email protected]> Signed-off-by: Juergen Gross <[email protected]>
copy_from_lzss_window(struct archive_read *a, const void **buffer, int64_t startpos, int length) { int windowoffs, firstpart; struct rar *rar = (struct rar *)(a->format->data); if (!rar->unp_buffer) { if ((rar->unp_buffer = malloc(rar->unp_buffer_size)) == NULL) { archive_set_error(&a->archive, ENOMEM, "Unable to allocate memory for uncompressed data."); return (ARCHIVE_FATAL); } } windowoffs = lzss_offset_for_position(&rar->lzss, startpos); if(windowoffs + length <= lzss_size(&rar->lzss)) { memcpy(&rar->unp_buffer[rar->unp_offset], &rar->lzss.window[windowoffs], length); } else if (length <= lzss_size(&rar->lzss)) { firstpart = lzss_size(&rar->lzss) - windowoffs; if (firstpart < 0) { archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT, "Bad RAR file data"); return (ARCHIVE_FATAL); } if (firstpart < length) { memcpy(&rar->unp_buffer[rar->unp_offset], &rar->lzss.window[windowoffs], firstpart); memcpy(&rar->unp_buffer[rar->unp_offset + firstpart], &rar->lzss.window[0], length - firstpart); } else { memcpy(&rar->unp_buffer[rar->unp_offset], &rar->lzss.window[windowoffs], length); } } else { archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT, "Bad RAR file data"); return (ARCHIVE_FATAL); } rar->unp_offset += length; if (rar->unp_offset >= rar->unp_buffer_size) *buffer = rar->unp_buffer; else *buffer = NULL; return (ARCHIVE_OK); }
0
[ "CWE-125", "CWE-193" ]
libarchive
5562545b5562f6d12a4ef991fae158bf4ccf92b6
135,730,725,764,736,120,000,000,000,000,000,000,000
48
Avoid a read off-by-one error for UTF16 names in RAR archives. Reported-By: OSS-Fuzz issue 573
RawTile KakaduImage::getRegion( int seq, int ang, unsigned int res, int layers, int x, int y, unsigned int w, unsigned int h )
{
  // Scale up our output bit depth to the nearest factor of 8
  unsigned int obpc = bpc;
  if( bpc <= 16 && bpc > 8 ) obpc = 16;
  else if( bpc <= 8 ) obpc = 8;

#ifdef DEBUG
  Timer timer;
  timer.start();
#endif

  RawTile rawtile( 0, res, seq, ang, w, h, channels, obpc );

  if( obpc == 16 ) rawtile.data = new unsigned short[w*h*channels];
  else if( obpc == 8 ) rawtile.data = new unsigned char[w*h*channels];
  else throw file_error( "Kakadu :: Unsupported number of bits" );

  rawtile.dataLength = w*h*channels*(obpc/8);
  rawtile.filename = getImagePath();
  rawtile.timestamp = timestamp;

  process( res, layers, x, y, w, h, rawtile.data );

#ifdef DEBUG
  logfile << "Kakadu :: getRegion() :: " << timer.getTime() << " microseconds" << endl;
#endif

  return rawtile;
}
1
[ "CWE-190" ]
iipsrv
882925b295a80ec992063deffc2a3b0d803c3195
163,046,629,309,483,980,000,000,000,000,000,000,000
31
- Modified TileManager.cc to verify that malloc() has correctly allocated memory. - Updated numerical types to std::size_t in RawTile.h, TileManager.cc, KakaduImage.cc, OpenJPEG.cc and Transforms.cc when allocating memory via new to avoid integer overflow - fixes remaining problems identified in https://github.com/ruven/iipsrv/issues/223.
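A sketch of the overflow this change guards against: w*h*channels computed in a 32-bit type can wrap before the value ever reaches new/malloc, so the buffer ends up far smaller than the region. Doing the arithmetic in size_t with explicit limit checks avoids that. The function name and parameters are illustrative, not iipsrv's actual API.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static int region_bytes(uint32_t w, uint32_t h, uint32_t channels,
                        uint32_t bytes_per_sample, size_t *out)
{
    size_t n = (size_t)w;

    /* check each multiplication before performing it */
    if (h && n > SIZE_MAX / h) return -1;
    n *= h;
    if (channels && n > SIZE_MAX / channels) return -1;
    n *= channels;
    if (bytes_per_sample && n > SIZE_MAX / bytes_per_sample) return -1;
    *out = n * bytes_per_sample;
    return 0;
}

int main(void)
{
    size_t n;

    if (region_bytes(4096, 4096, 3, 2, &n) == 0)
        printf("%zu bytes\n", n);
    return 0;
}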
static void last_valueFinalizeFunc(sqlite3_context *pCtx){
  struct LastValueCtx *p;
  p = (struct LastValueCtx*)sqlite3_aggregate_context(pCtx, sizeof(*p));
  if( p && p->pVal ){
    sqlite3_result_value(pCtx, p->pVal);
    sqlite3_value_free(p->pVal);
    p->pVal = 0;
  }
}
0
[ "CWE-476" ]
sqlite
75e95e1fcd52d3ec8282edb75ac8cd0814095d54
34,608,929,913,318,580,000,000,000,000,000,000,000
9
When processing constant integer values in ORDER BY clauses of window definitions (see check-in [7e4809eadfe99ebf]) be sure to fully disable the constant value to avoid an invalid pointer dereference if the expression is ever duplicated. This fixes a crash report from Yongheng and Rui. FossilOrigin-Name: 1ca0bd982ab1183bbafce0d260e4dceda5eb766ed2e7793374a88d1ae0bdd2ca
static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
{
    BDRVQcowState *s = bs->opaque;
    int ret;

    qemu_co_mutex_lock(&s->lock);
    ret = qcow2_cache_flush(bs, s->l2_table_cache);
    if (ret < 0) {
        qemu_co_mutex_unlock(&s->lock);
        return ret;
    }

    if (qcow2_need_accurate_refcounts(s)) {
        ret = qcow2_cache_flush(bs, s->refcount_block_cache);
        if (ret < 0) {
            qemu_co_mutex_unlock(&s->lock);
            return ret;
        }
    }

    qemu_co_mutex_unlock(&s->lock);

    return 0;
}
0
[ "CWE-476" ]
qemu
11b128f4062dd7f89b14abc8877ff20d41b28be9
133,889,395,381,997,850,000,000,000,000,000,000,000
23
qcow2: Fix NULL dereference in qcow2_open() error path (CVE-2014-0146) The qcow2 code assumes that s->snapshots is non-NULL if s->nb_snapshots != 0. By having the initialisation of both fields separated in qcow2_open(), any error occuring in between would cause the error path to dereference NULL in qcow2_free_snapshots() if the image had any snapshots. Signed-off-by: Kevin Wolf <[email protected]> Reviewed-by: Max Reitz <[email protected]> Signed-off-by: Stefan Hajnoczi <[email protected]>
wav_read_acid_chunk (SF_PRIVATE *psf, uint32_t chunklen) { char buffer [512] ; uint32_t bytesread = 0 ; int beats, flags ; short rootnote, q1, meter_denom, meter_numer ; float q2, tempo ; chunklen += (chunklen & 1) ; bytesread += psf_binheader_readf (psf, "422f", &flags, &rootnote, &q1, &q2) ; snprintf (buffer, sizeof (buffer), "%f", q2) ; psf_log_printf (psf, " Flags : 0x%04x (%s,%s,%s,%s,%s)\n", flags, (flags & 0x01) ? "OneShot" : "Loop", (flags & 0x02) ? "RootNoteValid" : "RootNoteInvalid", (flags & 0x04) ? "StretchOn" : "StretchOff", (flags & 0x08) ? "DiskBased" : "RAMBased", (flags & 0x10) ? "??On" : "??Off") ; psf_log_printf (psf, " Root note : 0x%x\n ???? : 0x%04x\n ???? : %s\n", rootnote, q1, buffer) ; bytesread += psf_binheader_readf (psf, "422f", &beats, &meter_denom, &meter_numer, &tempo) ; snprintf (buffer, sizeof (buffer), "%f", tempo) ; psf_log_printf (psf, " Beats : %d\n Meter : %d/%d\n Tempo : %s\n", beats, meter_numer, meter_denom, buffer) ; psf_binheader_readf (psf, "j", chunklen - bytesread) ; if ((psf->loop_info = calloc (1, sizeof (SF_LOOP_INFO))) == NULL) return SFE_MALLOC_FAILED ; psf->loop_info->time_sig_num = meter_numer ; psf->loop_info->time_sig_den = meter_denom ; psf->loop_info->loop_mode = (flags & 0x01) ? SF_LOOP_NONE : SF_LOOP_FORWARD ; psf->loop_info->num_beats = beats ; psf->loop_info->bpm = tempo ; psf->loop_info->root_key = (flags & 0x02) ? rootnote : -1 ; return 0 ; } /* wav_read_acid_chunk */
0
[ "CWE-476" ]
libsndfile
6f3266277bed16525f0ac2f0f03ff4626f1923e5
37,353,074,156,331,910,000,000,000,000,000,000,000
42
Fix max channel count bug The code was allowing files to be written with a channel count of exactly `SF_MAX_CHANNELS` but was failing to read some file formats with the same channel count.
static int bitmap_position(const unsigned char *sha1)
{
	int pos = bitmap_position_packfile(sha1);
	return (pos >= 0) ? pos : bitmap_position_extended(sha1);
}
0
[ "CWE-119", "CWE-787" ]
git
de1e67d0703894cb6ea782e36abb63976ab07e60
218,447,670,093,286,360,000,000,000,000,000,000,000
5
list-objects: pass full pathname to callbacks When we find a blob at "a/b/c", we currently pass this to our show_object_fn callbacks as two components: "a/b/" and "c". Callbacks which want the full value then call path_name(), which concatenates the two. But this is an inefficient interface; the path is a strbuf, and we could simply append "c" to it temporarily, then roll back the length, without creating a new copy. So we could improve this by teaching the callsites of path_name() this trick (and there are only 3). But we can also notice that no callback actually cares about the broken-down representation, and simply pass each callback the full path "a/b/c" as a string. The callback code becomes even simpler, then, as we do not have to worry about freeing an allocated buffer, nor rolling back our modification to the strbuf. This is theoretically less efficient, as some callbacks would not bother to format the final path component. But in practice this is not measurable. Since we use the same strbuf over and over, our work to grow it is amortized, and we really only pay to memcpy a few bytes. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
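A compact demonstration of the append/rollback trick this changelog describes: rather than allocating "a/b/" + "c" into a fresh string per object, append the final component to the shared path buffer, use it, then restore the saved length. A fixed char array stands in for git's strbuf; the strbuf calls named in comments are the real API this approximates.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[64] = "a/b/";
    size_t saved = strlen(path);      /* strbuf: path->len */

    /* append the entry name temporarily */
    strcat(path, "c");                /* strbuf: strbuf_addstr() */
    printf("full path: %s\n", path);  /* callback sees "a/b/c" */

    /* roll back: no allocation, no copy, the buffer is reusable */
    path[saved] = '\0';               /* strbuf: strbuf_setlen() */
    printf("restored:  %s\n", path);
    return 0;
}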
camel_pop3_store_class_init (CamelPOP3StoreClass *class) { GObjectClass *object_class; CamelServiceClass *service_class; CamelStoreClass *store_class; object_class = G_OBJECT_CLASS (class); object_class->set_property = pop3_store_set_property; object_class->get_property = pop3_store_get_property; object_class->dispose = pop3_store_dispose; object_class->finalize = pop3_store_finalize; service_class = CAMEL_SERVICE_CLASS (class); service_class->settings_type = CAMEL_TYPE_POP3_SETTINGS; service_class->get_name = pop3_store_get_name; service_class->connect_sync = pop3_store_connect_sync; service_class->disconnect_sync = pop3_store_disconnect_sync; service_class->authenticate_sync = pop3_store_authenticate_sync; service_class->query_auth_types_sync = pop3_store_query_auth_types_sync; store_class = CAMEL_STORE_CLASS (class); store_class->can_refresh_folder = pop3_store_can_refresh_folder; store_class->get_folder_sync = pop3_store_get_folder_sync; store_class->get_folder_info_sync = pop3_store_get_folder_info_sync; store_class->get_trash_folder_sync = pop3_store_get_trash_folder_sync; /* Inherited from CamelNetworkService. */ g_object_class_override_property ( object_class, PROP_CONNECTABLE, "connectable"); /* Inherited from CamelNetworkService. */ g_object_class_override_property ( object_class, PROP_HOST_REACHABLE, "host-reachable"); }
0
[ "CWE-74" ]
evolution-data-server
ba82be72cfd427b5d72ff21f929b3a6d8529c4df
58,431,082,915,674,350,000,000,000,000,000,000,000
38
I#226 - CVE-2020-14928: Response Injection via STARTTLS in SMTP and POP3 Closes https://gitlab.gnome.org/GNOME/evolution-data-server/-/issues/226
static zend_object_value spl_array_object_clone(zval *zobject TSRMLS_DC)
{
	zend_object_value new_obj_val;
	zend_object *old_object;
	zend_object *new_object;
	zend_object_handle handle = Z_OBJ_HANDLE_P(zobject);
	spl_array_object *intern;

	old_object = zend_objects_get_address(zobject TSRMLS_CC);
	new_obj_val = spl_array_object_new_ex(old_object->ce, &intern, zobject, 1 TSRMLS_CC);
	new_object = &intern->std;

	zend_objects_clone_members(new_object, new_obj_val, old_object, handle TSRMLS_CC);

	return new_obj_val;
}
0
[]
php-src
a374dfab567ff7f0ab0dc150f14cc891b0340b47
36,679,772,295,238,340,000,000,000,000,000,000,000
16
Fix bug #67492: unserialize() SPL ArrayObject / SPLObjectStorage Type Confusion
R_API ut64 r_bin_java_signature_attr_calc_size(RBinJavaAttrInfo *attr) {
	ut64 size = 0;
	if (attr == NULL) {
		// TODO eprintf allocation fail
		return size;
	}
	size += 6;
	// attr->info.source_file_attr.sourcefile_idx = R_BIN_JAVA_USHORT (buffer, offset);
	size += 2;
	// attr->info.signature_attr.signature_idx = R_BIN_JAVA_USHORT (buffer, offset);
	size += 2;
	return size;
}
0
[ "CWE-119", "CWE-788" ]
radare2
6c4428f018d385fc80a33ecddcb37becea685dd5
226,817,701,800,274,400,000,000,000,000,000,000,000
13
Improve boundary checks to fix oobread segfaults ##crash * Reported by Cen Zhang via huntr.dev * Reproducer: bins/fuzzed/javaoob-havoc.class
int emulator_get_dr(struct x86_emulate_ctxt *ctxt, int dr, unsigned long *dest)
{
	return _kvm_get_dr(emul_to_vcpu(ctxt), dr, dest);
}
0
[]
kvm
0769c5de24621141c953fbe1f943582d37cb4244
160,938,768,287,705,420,000,000,000,000,000,000,000
4
KVM: x86: extend "struct x86_emulate_ops" with "get_cpuid" In order to be able to proceed checks on CPU-specific properties within the emulator, function "get_cpuid" is introduced. With "get_cpuid" it is possible to virtually call the guests "cpuid"-opcode without changing the VM's context. [mtosatti: cleanup/beautify code] Signed-off-by: Stephan Baerwolf <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]>
void helperTestQueryString(char const * uriString, int pairsExpected) {
	UriParserStateA state;
	UriUriA uri;
	state.uri = &uri;

	int res = uriParseUriA(&state, uriString);
	TEST_ASSERT(res == URI_SUCCESS);

	UriQueryListA * queryList = NULL;
	int itemCount = 0;

	res = uriDissectQueryMallocA(&queryList, &itemCount,
			uri.query.first, uri.query.afterLast);
	TEST_ASSERT(res == URI_SUCCESS);
	TEST_ASSERT(queryList != NULL);
	TEST_ASSERT(itemCount == pairsExpected);
	uriFreeQueryListA(queryList);
	uriFreeUriMembersA(&uri);
}
0
[ "CWE-787" ]
uriparser
864f5d4c127def386dd5cc926ad96934b297f04e
116,328,857,860,196,110,000,000,000,000,000,000,000
18
UriQuery.c: Fix out-of-bounds-write in ComposeQuery and ...Ex Reported by Google Autofuzz team
static void ipa_functions(wmfAPI *API) { wmf_magick_t *ddata = 0; wmfFunctionReference *FR = (wmfFunctionReference *) API->function_reference; /* IPA function reference links */ FR->device_open = ipa_device_open; FR->device_close = ipa_device_close; FR->device_begin = ipa_device_begin; FR->device_end = ipa_device_end; FR->flood_interior = ipa_flood_interior; FR->flood_exterior = ipa_flood_exterior; FR->draw_pixel = ipa_draw_pixel; FR->draw_pie = ipa_draw_pie; FR->draw_chord = ipa_draw_chord; FR->draw_arc = ipa_draw_arc; FR->draw_ellipse = ipa_draw_ellipse; FR->draw_line = ipa_draw_line; FR->poly_line = ipa_poly_line; FR->draw_polygon = ipa_draw_polygon; #if defined(MAGICKCORE_WMF_DELEGATE) FR->draw_polypolygon = ipa_draw_polypolygon; #endif FR->draw_rectangle = ipa_draw_rectangle; FR->rop_draw = ipa_rop_draw; FR->bmp_draw = ipa_bmp_draw; FR->bmp_read = ipa_bmp_read; FR->bmp_free = ipa_bmp_free; FR->draw_text = ipa_draw_text; FR->udata_init = ipa_udata_init; FR->udata_copy = ipa_udata_copy; FR->udata_set = ipa_udata_set; FR->udata_free = ipa_udata_free; FR->region_frame = ipa_region_frame; FR->region_paint = ipa_region_paint; FR->region_clip = ipa_region_clip; /* Allocate device data structure */ ddata = (wmf_magick_t *) wmf_malloc(API, sizeof(wmf_magick_t)); if (ERR(API)) return; (void) ResetMagickMemory((void *) ddata, 0, sizeof(wmf_magick_t)); API->device_data = (void *) ddata; /* Device data defaults */ ddata->image = 0; }
0
[ "CWE-772" ]
ImageMagick
b2b48d50300a9fbcd0aa0d9230fd6d7a08f7671e
336,847,599,779,955,860,000,000,000,000,000,000,000
57
https://github.com/ImageMagick/ImageMagick/issues/544
static void i40e_fdir_filter_exit(struct i40e_pf *pf) { struct i40e_fdir_filter *filter; struct i40e_flex_pit *pit_entry, *tmp; struct hlist_node *node2; hlist_for_each_entry_safe(filter, node2, &pf->fdir_filter_list, fdir_node) { hlist_del(&filter->fdir_node); kfree(filter); } list_for_each_entry_safe(pit_entry, tmp, &pf->l3_flex_pit_list, list) { list_del(&pit_entry->list); kfree(pit_entry); } INIT_LIST_HEAD(&pf->l3_flex_pit_list); list_for_each_entry_safe(pit_entry, tmp, &pf->l4_flex_pit_list, list) { list_del(&pit_entry->list); kfree(pit_entry); } INIT_LIST_HEAD(&pf->l4_flex_pit_list); pf->fdir_pf_active_filters = 0; pf->fd_tcp4_filter_cnt = 0; pf->fd_udp4_filter_cnt = 0; pf->fd_sctp4_filter_cnt = 0; pf->fd_ip4_filter_cnt = 0; /* Reprogram the default input set for TCP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_TCP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); /* Reprogram the default input set for UDP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_UDP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); /* Reprogram the default input set for SCTP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_SCTP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); /* Reprogram the default input set for Other/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_OTHER, I40E_L3_SRC_MASK | I40E_L3_DST_MASK); i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_FRAG_IPV4, I40E_L3_SRC_MASK | I40E_L3_DST_MASK); }
0
[ "CWE-400", "CWE-401" ]
linux
27d461333459d282ffa4a2bdb6b215a59d493a8f
324,133,479,928,069,000,000,000,000,000,000,000,000
52
i40e: prevent memory leak in i40e_setup_macvlans In i40e_setup_macvlans if i40e_setup_channel fails the allocated memory for ch should be released. Signed-off-by: Navid Emamdoost <[email protected]> Tested-by: Andrew Bowers <[email protected]> Signed-off-by: Jeff Kirsher <[email protected]>
static int parse_tlsfeatures(ASN1_TYPE c2, gnutls_x509_tlsfeatures_t f, unsigned flags) { char nptr[ASN1_MAX_NAME_SIZE]; int result; unsigned i, indx, j; unsigned int feature; if (!(flags & GNUTLS_EXT_FLAG_APPEND)) f->size = 0; for (i = 1;; i++) { unsigned skip = 0; snprintf(nptr, sizeof(nptr), "?%u", i); result = _gnutls_x509_read_uint(c2, nptr, &feature); if (result == GNUTLS_E_ASN1_ELEMENT_NOT_FOUND || result == GNUTLS_E_ASN1_VALUE_NOT_FOUND) { break; } else if (result != GNUTLS_E_SUCCESS) { gnutls_assert(); return _gnutls_asn2err(result); } if (feature > UINT16_MAX) { gnutls_assert(); return GNUTLS_E_CERTIFICATE_ERROR; } /* skip duplicates */ for (j=0;j<f->size;j++) { if (f->feature[j] == feature) { skip = 1; break; } } if (!skip) { if (f->size >= sizeof(f->feature)/sizeof(f->feature[0])) { gnutls_assert(); return GNUTLS_E_INTERNAL_ERROR; } indx = f->size; f->feature[indx] = feature; f->size++; } } return 0; }
0
[ "CWE-415" ]
gnutls
c5aaa488a3d6df712dc8dff23a049133cab5ec1b
176,384,897,510,976,860,000,000,000,000,000,000,000
51
gnutls_x509_ext_import_proxy: fix issue reading the policy language If the language was set but the policy wasn't, that could lead to a double free, as the value returned to the user was freed.
static void ql_phy_init_ex(struct ql3_adapter *qdev)
{
	ql_phy_reset_ex(qdev);
	PHY_Setup(qdev);
	ql_phy_start_neg_ex(qdev);
}
0
[ "CWE-401" ]
linux
1acb8f2a7a9f10543868ddd737e37424d5c36cf4
246,447,520,247,425,780,000,000,000,000,000,000,000
6
net: qlogic: Fix memory leak in ql_alloc_large_buffers In ql_alloc_large_buffers, a new skb is allocated via netdev_alloc_skb. This skb should be released if pci_dma_mapping_error fails. Fixes: 0f8ab89e825f ("qla3xxx: Check return code from pci_map_single() in ql_release_to_lrg_buf_free_list(), ql_populate_free_queue(), ql_alloc_large_buffers(), and ql3xxx_send()") Signed-off-by: Navid Emamdoost <[email protected]> Signed-off-by: David S. Miller <[email protected]>
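A minimal sketch of the leak shape named here: a buffer is allocated, a follow-up setup step (DMA mapping in the driver) can fail, and the error path must release the buffer it no longer owns. malloc()/free() stand in for netdev_alloc_skb()/dev_kfree_skb(), and map_for_dma() is a hypothetical stand-in that always fails to show the path.

#include <stdio.h>
#include <stdlib.h>

struct rx_buf { void *skb; };

static int map_for_dma(void *skb) { return skb ? -1 : 0; } /* forced fail */

static int alloc_large_buffers(struct rx_buf *b)
{
    b->skb = malloc(1536);            /* driver: netdev_alloc_skb() */
    if (!b->skb)
        return -1;

    if (map_for_dma(b->skb)) {        /* driver: pci_dma_mapping_error() */
        free(b->skb);                 /* the fix: release on failure */
        b->skb = NULL;
        return -1;
    }
    return 0;
}

int main(void)
{
    struct rx_buf b;

    printf("rc=%d skb=%p\n", alloc_large_buffers(&b), b.skb);
    return 0;
}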
int ext4_mb_init_group(struct super_block *sb, ext4_group_t group, gfp_t gfp) { struct ext4_group_info *this_grp; struct ext4_buddy e4b; struct page *page; int ret = 0; might_sleep(); mb_debug(sb, "init group %u\n", group); this_grp = ext4_get_group_info(sb, group); /* * This ensures that we don't reinit the buddy cache * page which map to the group from which we are already * allocating. If we are looking at the buddy cache we would * have taken a reference using ext4_mb_load_buddy and that * would have pinned buddy page to page cache. * The call to ext4_mb_get_buddy_page_lock will mark the * page accessed. */ ret = ext4_mb_get_buddy_page_lock(sb, group, &e4b, gfp); if (ret || !EXT4_MB_GRP_NEED_INIT(this_grp)) { /* * somebody initialized the group * return without doing anything */ goto err; } page = e4b.bd_bitmap_page; ret = ext4_mb_init_cache(page, NULL, gfp); if (ret) goto err; if (!PageUptodate(page)) { ret = -EIO; goto err; } if (e4b.bd_buddy_page == NULL) { /* * If both the bitmap and buddy are in * the same page we don't need to force * init the buddy */ ret = 0; goto err; } /* init buddy cache */ page = e4b.bd_buddy_page; ret = ext4_mb_init_cache(page, e4b.bd_bitmap, gfp); if (ret) goto err; if (!PageUptodate(page)) { ret = -EIO; goto err; } err: ext4_mb_put_buddy_page_lock(&e4b); return ret; }
0
[ "CWE-703" ]
linux
ce9f24cccdc019229b70a5c15e2b09ad9c0ab5d1
158,017,716,293,910,200,000,000,000,000,000,000,000
60
ext4: check journal inode extents more carefully Currently, system zones just track ranges of block, that are "important" fs metadata (bitmaps, group descriptors, journal blocks, etc.). This however complicates how extent tree (or indirect blocks) can be checked for inodes that actually track such metadata - currently the journal inode but arguably we should be treating quota files or resize inode similarly. We cannot run __ext4_ext_check() on such metadata inodes when loading their extents as that would immediately trigger the validity checks and so we just hack around that and special-case the journal inode. This however leads to a situation that a journal inode which has extent tree of depth at least one can have invalid extent tree that gets unnoticed until ext4_cache_extents() crashes. To overcome this limitation, track inode number each system zone belongs to (0 is used for zones not belonging to any inode). We can then verify inode number matches the expected one when verifying extent tree and thus avoid the false errors. With this there's no need to to special-case journal inode during extent tree checking anymore so remove it. Fixes: 0a944e8a6c66 ("ext4: don't perform block validity checks on the journal inode") Reported-by: Wolfgang Frisch <[email protected]> Reviewed-by: Lukas Czerner <[email protected]> Signed-off-by: Jan Kara <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Theodore Ts'o <[email protected]>
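A sketch of the idea in this changelog: each reserved metadata range records which inode owns it (0 meaning none), and the extent validator only flags a block inside a system zone when the owner does not match. The structures are invented stand-ins for ext4's internal system-zone tree.

#include <stdio.h>

struct system_zone {
    unsigned long start, len;
    unsigned long owner_ino;     /* 0 means no owning inode */
};

static int block_ok(const struct system_zone *z, unsigned long blk,
                    unsigned long ino)
{
    if (blk < z->start || blk >= z->start + z->len)
        return 1;                /* outside the zone: fine */
    return z->owner_ino == ino;  /* inside: only the owner may map it */
}

int main(void)
{
    struct system_zone journal = { 1000, 64, 8 }; /* ino 8 = journal */

    printf("journal maps own block: %d\n", block_ok(&journal, 1010, 8));
    printf("other inode maps it:    %d\n", block_ok(&journal, 1010, 42));
    return 0;
}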
sg_proc_init(void)
{
	struct proc_dir_entry *p;

	p = proc_mkdir("scsi/sg", NULL);
	if (!p)
		return 1;

	proc_create("allow_dio", S_IRUGO | S_IWUSR, p, &adio_proc_ops);
	proc_create_seq("debug", S_IRUGO, p, &debug_seq_ops);
	proc_create("def_reserved_size", S_IRUGO | S_IWUSR, p,
		    &dressz_proc_ops);
	proc_create_single("device_hdr", S_IRUGO, p, sg_proc_seq_show_devhdr);
	proc_create_seq("devices", S_IRUGO, p, &dev_seq_ops);
	proc_create_seq("device_strs", S_IRUGO, p, &devstrs_seq_ops);
	proc_create_single("version", S_IRUGO, p, sg_proc_seq_show_version);
	return 0;
}
0
[]
linux
83c6f2390040f188cc25b270b4befeb5628c1aee
279,020,747,513,494,430,000,000,000,000,000,000,000
17
scsi: sg: add sg_remove_request in sg_write If the __copy_from_user function failed we need to call sg_remove_request in sg_write. Link: https://lore.kernel.org/r/[email protected] Acked-by: Douglas Gilbert <[email protected]> Signed-off-by: Wu Bo <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
ext4_xattr_block_get(struct inode *inode, int name_index, const char *name, void *buffer, size_t buffer_size) { struct buffer_head *bh = NULL; struct ext4_xattr_entry *entry; size_t size; void *end; int error; struct mb_cache *ea_block_cache = EA_BLOCK_CACHE(inode); ea_idebug(inode, "name=%d.%s, buffer=%p, buffer_size=%ld", name_index, name, buffer, (long)buffer_size); error = -ENODATA; if (!EXT4_I(inode)->i_file_acl) goto cleanup; ea_idebug(inode, "reading block %llu", (unsigned long long)EXT4_I(inode)->i_file_acl); bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl); if (!bh) goto cleanup; ea_bdebug(bh, "b_count=%d, refcount=%d", atomic_read(&(bh->b_count)), le32_to_cpu(BHDR(bh)->h_refcount)); error = ext4_xattr_check_block(inode, bh); if (error) goto cleanup; ext4_xattr_block_cache_insert(ea_block_cache, bh); entry = BFIRST(bh); end = bh->b_data + bh->b_size; error = xattr_find_entry(inode, &entry, end, name_index, name, 1); if (error) goto cleanup; size = le32_to_cpu(entry->e_value_size); if (buffer) { error = -ERANGE; if (size > buffer_size) goto cleanup; if (entry->e_value_inum) { error = ext4_xattr_inode_get(inode, entry, buffer, size); if (error) goto cleanup; } else { memcpy(buffer, bh->b_data + le16_to_cpu(entry->e_value_offs), size); } } error = size; cleanup: brelse(bh); return error; }
1
[]
linux
54dd0e0a1b255f115f8647fc6fb93273251b01b9
127,014,153,535,097,250,000,000,000,000,000,000,000
53
ext4: add extra checks to ext4_xattr_block_get() Add explicit checks in ext4_xattr_block_get() just in case the e_value_offs and e_value_size fields in the the xattr block are corrupted in memory after the buffer_verified bit is set on the xattr block. Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected]
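A self-contained sketch of the extra validation described here: before trusting e_value_offs and e_value_size from an on-disk entry, confirm the value region lies entirely inside the block buffer, guarding both against a large offset and against offset+size wrapping. Names mirror the ext4 fields but the helper itself is illustrative.

#include <stdio.h>

static int value_in_block(unsigned offs, unsigned size, unsigned blk_size)
{
    if (offs > blk_size)
        return 0;
    if (size > blk_size - offs)   /* subtraction form avoids overflow */
        return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", value_in_block(100, 50, 4096));     /* 1: ok */
    printf("%d\n", value_in_block(4000, 500, 4096));   /* 0: runs past */
    printf("%d\n", value_in_block(4096, ~0u, 4096));   /* 0: would wrap */
    return 0;
}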
void HtmlOutputDev::beginString(GfxState *state, const GooString *s) {
    pages->beginString(state, s);
}
0
[ "CWE-824" ]
poppler
30c731b487190c02afff3f036736a392eb60cd9a
105,391,709,930,140,870,000,000,000,000,000,000,000
3
Properly initialize HtmlOutputDev::page to avoid SIGSEGV upon error exit. Closes #742
PHP_FUNCTION(imagecolorexactalpha) { zval *IM; long red, green, blue, alpha; gdImagePtr im; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "rllll", &IM, &red, &green, &blue, &alpha) == FAILURE) { return; } ZEND_FETCH_RESOURCE(im, gdImagePtr, &IM, -1, "Image", le_gd); RETURN_LONG(gdImageColorExactAlpha(im, red, green, blue, alpha)); }
0
[ "CWE-703", "CWE-189" ]
php-src
2938329ce19cb8c4197dec146c3ec887c6f61d01
219,468,162,585,784,270,000,000,000,000,000,000,000
14
Fixed bug #66356 (Heap Overflow Vulnerability in imagecrop()) Also fixed a bug where arguments were altered after some calls
inline void Http2Session::SetChunksSinceLastWrite(size_t n) { chunks_sent_since_last_write_ = n; }
0
[]
node
ce22d6f9178507c7a41b04ac4097b9ea902049e3
97,608,318,697,297,670,000,000,000,000,000,000,000
3
http2: add altsvc support Add support for sending and receiving ALTSVC frames. PR-URL: https://github.com/nodejs/node/pull/17917 Reviewed-By: Anna Henningsen <[email protected]> Reviewed-By: Tiancheng "Timothy" Gu <[email protected]> Reviewed-By: Matteo Collina <[email protected]>
virtual bool check_index_dependence(void *arg) { return 0; }
0
[ "CWE-617" ]
server
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
139,748,655,385,892,520,000,000,000,000,000,000,000
1
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order... When doing condition pushdown from HAVING into WHERE, Item_equal::create_pushable_equalities() calls item->set_extraction_flag(IMMUTABLE_FL) for constant items. Then, Item::cleanup_excluding_immutables_processor() checks for this flag to see if it should call item->cleanup() or leave the item as-is. The failure happens when a constant item has a non-constant one inside it, like: (tbl.col=0 AND impossible_cond) item->walk(cleanup_excluding_immutables_processor) works in a bottom-up way so it 1. will call Item_func_eq(tbl.col=0)->cleanup() 2. will not call Item_cond_and->cleanup (as the AND is constant) This creates an item tree where a fixed Item has an un-fixed Item inside it which eventually causes an assertion failure. Fixed by introducing this rule: instead of just calling item->set_extraction_flag(IMMUTABLE_FL); we call Item::walk() to set the flag for all sub-items of the item.
static int TIFFWriteDirectoryTagDoublePerSample(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, uint16 tag, double value) { static const char module[] = "TIFFWriteDirectoryTagDoublePerSample"; double* m; double* na; uint16 nb; int o; if (dir==NULL) { (*ndir)++; return(1); } m=_TIFFmalloc(tif->tif_dir.td_samplesperpixel*sizeof(double)); if (m==NULL) { TIFFErrorExt(tif->tif_clientdata,module,"Out of memory"); return(0); } for (na=m, nb=0; nb<tif->tif_dir.td_samplesperpixel; na++, nb++) *na=value; o=TIFFWriteDirectoryTagCheckedDoubleArray(tif,ndir,dir,tag,tif->tif_dir.td_samplesperpixel,m); _TIFFfree(m); return(o); }
0
[ "CWE-617" ]
libtiff
de144fd228e4be8aa484c3caf3d814b6fa88c6d9
111,898,799,146,576,500,000,000,000,000,000,000,000
24
TIFFWriteDirectorySec: avoid assertion. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2795. CVE-2018-10963
int ssl3_get_client_certificate(SSL *s) { int i,ok,al,ret= -1; X509 *x=NULL; unsigned long l,nc,llen,n; const unsigned char *p,*q; unsigned char *d; STACK_OF(X509) *sk=NULL; n=s->method->ssl_get_message(s, SSL3_ST_SR_CERT_A, SSL3_ST_SR_CERT_B, -1, s->max_cert_list, &ok); if (!ok) return((int)n); if (s->s3->tmp.message_type == SSL3_MT_CLIENT_KEY_EXCHANGE) { if ( (s->verify_mode & SSL_VERIFY_PEER) && (s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE); al=SSL_AD_HANDSHAKE_FAILURE; goto f_err; } /* If tls asked for a client cert, the client must return a 0 list */ if ((s->version > SSL3_VERSION) && s->s3->tmp.cert_request) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_TLS_PEER_DID_NOT_RESPOND_WITH_CERTIFICATE_LIST); al=SSL_AD_UNEXPECTED_MESSAGE; goto f_err; } s->s3->tmp.reuse_message=1; return(1); } if (s->s3->tmp.message_type != SSL3_MT_CERTIFICATE) { al=SSL_AD_UNEXPECTED_MESSAGE; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_WRONG_MESSAGE_TYPE); goto f_err; } p=d=(unsigned char *)s->init_msg; if ((sk=sk_X509_new_null()) == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,ERR_R_MALLOC_FAILURE); goto err; } n2l3(p,llen); if (llen+3 != n) { al=SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_LENGTH_MISMATCH); goto f_err; } for (nc=0; nc<llen; ) { n2l3(p,l); if ((l+nc+3) > llen) { al=SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_CERT_LENGTH_MISMATCH); goto f_err; } q=p; x=d2i_X509(NULL,&p,l); if (x == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,ERR_R_ASN1_LIB); goto err; } if (p != (q+l)) { al=SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_CERT_LENGTH_MISMATCH); goto f_err; } if (!sk_X509_push(sk,x)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,ERR_R_MALLOC_FAILURE); goto err; } x=NULL; nc+=l+3; } if (sk_X509_num(sk) <= 0) { /* TLS does not mind 0 certs returned */ if (s->version == SSL3_VERSION) { al=SSL_AD_HANDSHAKE_FAILURE; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_NO_CERTIFICATES_RETURNED); goto f_err; } /* Fail for TLS only if we required a certificate */ else if ((s->verify_mode & SSL_VERIFY_PEER) && (s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE); al=SSL_AD_HANDSHAKE_FAILURE; goto f_err; } /* No client certificate so digest cached records */ if (s->s3->handshake_buffer && !ssl3_digest_cached_records(s)) { al=SSL_AD_INTERNAL_ERROR; goto f_err; } } else { EVP_PKEY *pkey; i=ssl_verify_cert_chain(s,sk); if (i <= 0) { al=ssl_verify_alarm_type(s->verify_result); SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE,SSL_R_CERTIFICATE_VERIFY_FAILED); goto f_err; } if (i > 1) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, i); al = SSL_AD_HANDSHAKE_FAILURE; goto f_err; } pkey = X509_get_pubkey(sk_X509_value(sk, 0)); if (pkey == NULL) { al=SSL3_AD_HANDSHAKE_FAILURE; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_UNKNOWN_CERTIFICATE_TYPE); goto f_err; } EVP_PKEY_free(pkey); } if (s->session->peer != NULL) /* This should not be needed */ X509_free(s->session->peer); s->session->peer=sk_X509_shift(sk); s->session->verify_result = s->verify_result; /* With the current implementation, sess_cert will always be NULL * when we arrive here. */ if (s->session->sess_cert == NULL) { s->session->sess_cert = ssl_sess_cert_new(); if (s->session->sess_cert == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE); goto err; } } if (s->session->sess_cert->cert_chain != NULL) sk_X509_pop_free(s->session->sess_cert->cert_chain, X509_free); s->session->sess_cert->cert_chain=sk; /* Inconsistency alert: cert_chain does *not* include the * peer's own certificate, while we do include it in s3_clnt.c */ sk=NULL; ret=1; if (0) { f_err: ssl3_send_alert(s,SSL3_AL_FATAL,al); } err: if (x != NULL) X509_free(x); if (sk != NULL) sk_X509_pop_free(sk,X509_free); return(ret); }
0
[ "CWE-310" ]
openssl
ce325c60c74b0fa784f5872404b722e120e5cab0
296,388,984,992,731,020,000,000,000,000,000,000,000
177
Only allow ephemeral RSA keys in export ciphersuites. OpenSSL clients would tolerate temporary RSA keys in non-export ciphersuites. It also had an option SSL_OP_EPHEMERAL_RSA which enabled this server side. Remove both options as they are a protocol violation. Thanks to Karthikeyan Bhargavan for reporting this issue. (CVE-2015-0204) Reviewed-by: Matt Caswell <[email protected]>
static void io_req_task_queue_fail(struct io_kiocb *req, int ret) { req->result = ret; req->io_task_work.func = io_req_task_cancel; io_req_task_work_add(req); }
0
[ "CWE-125" ]
linux
89c2b3b74918200e46699338d7bcc19b1ea12110
22,957,049,579,455,570,000,000,000,000,000,000,000
6
io_uring: reexpand under-reexpanded iters [ 74.211232] BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x809/0x900 [ 74.212778] Read of size 8 at addr ffff888025dc78b8 by task syz-executor.0/828 [ 74.214756] CPU: 0 PID: 828 Comm: syz-executor.0 Not tainted 5.14.0-rc3-next-20210730 #1 [ 74.216525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 74.219033] Call Trace: [ 74.219683] dump_stack_lvl+0x8b/0xb3 [ 74.220706] print_address_description.constprop.0+0x1f/0x140 [ 74.224226] kasan_report.cold+0x7f/0x11b [ 74.226085] iov_iter_revert+0x809/0x900 [ 74.227960] io_write+0x57d/0xe40 [ 74.232647] io_issue_sqe+0x4da/0x6a80 [ 74.242578] __io_queue_sqe+0x1ac/0xe60 [ 74.245358] io_submit_sqes+0x3f6e/0x76a0 [ 74.248207] __do_sys_io_uring_enter+0x90c/0x1a20 [ 74.257167] do_syscall_64+0x3b/0x90 [ 74.257984] entry_SYSCALL_64_after_hwframe+0x44/0xae old_size = iov_iter_count(); ... iov_iter_revert(old_size - iov_iter_count()); If iov_iter_revert() is done based on the initial size as above, and the iter is truncated and not reexpanded in the middle, it miscalculates borders, causing problems. This trace is due to no one reexpanding after generic_write_checks(). Now iters store how many bytes have been truncated, so reexpand them to the initial state right before reverting. Cc: [email protected] Reported-by: Palash Oswal <[email protected]> Reported-by: Sudip Mukherjee <[email protected]> Reported-and-tested-by: [email protected] Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Al Viro <[email protected]>
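A sketch of the pattern the fix introduces around the revert in io_read()/io_write(); the iter->truncated bookkeeping matches the 5.14-era patch as I understand it (later kernels restructured this), so treat the field name as an assumption:

	size_t io_size = iov_iter_count(iter);	/* size noted before issuing I/O */
	/* ... partial I/O happens; generic_write_checks() may have truncated ... */

	/* restore the truncated bytes so the revert math sees the real start */
	iov_iter_reexpand(iter, iter->count + iter->truncated);
	iov_iter_revert(iter, io_size - iov_iter_count(iter));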
void SpatialMaxPool(OpKernelContext* context, Tensor* output, const Tensor& tensor_in, const PoolParameters& params, const Padding& padding) { if (output->NumElements() == 0) { return; } // On GPU, use Eigen's Spatial Max Pooling. On CPU, use an // EigenMatrix version that is currently faster than Eigen's // Spatial MaxPooling implementation. // // TODO(vrv): Remove this once we no longer need it. if (std::is_same<Device, GPUDevice>::value) { Eigen::PaddingType pt = BrainPadding2EigenPadding(padding); functor::SpatialMaxPooling<Device, T>()( context->eigen_device<Device>(), output->tensor<T, 4>(), tensor_in.tensor<T, 4>(), params.window_rows, params.window_cols, params.row_stride, params.col_stride, pt); } else { typedef Eigen::Map<const Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>> ConstEigenMatrixMap; typedef Eigen::Map<Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>> EigenMatrixMap; ConstEigenMatrixMap in_mat(tensor_in.flat<T>().data(), params.depth, params.tensor_in_cols * params.tensor_in_rows * params.tensor_in_batch); EigenMatrixMap out_mat( output->flat<T>().data(), params.depth, params.out_width * params.out_height * params.tensor_in_batch); const DeviceBase::CpuWorkerThreads& worker_threads = *(context->device()->tensorflow_cpu_worker_threads()); // The following code basically does the following: // 1. Flattens the input and output tensors into two dimensional arrays. // tensor_in_as_matrix: // depth by (tensor_in_cols * tensor_in_rows * tensor_in_batch) // output_as_matrix: // depth by (out_width * out_height * tensor_in_batch) // // 2. Walks through the set of columns in the flattened // tensor_in_as_matrix, // and updates the corresponding column(s) in output_as_matrix with the // max value. auto shard = [&params, &in_mat, &out_mat](int64_t start, int64_t limit) { const int32_t in_rows = params.tensor_in_rows; const int32_t in_cols = params.tensor_in_cols; const int32_t pad_top = params.pad_top; const int32_t pad_left = params.pad_left; const int32_t window_rows = params.window_rows; const int32_t window_cols = params.window_cols; const int32_t row_stride = params.row_stride; const int32_t col_stride = params.col_stride; const int32_t out_height = params.out_height; const int32_t out_width = params.out_width; { // Initializes the output tensor with MIN<T>. const int32_t output_image_size = out_height * out_width * params.depth; EigenMatrixMap out_shard(out_mat.data() + start * output_image_size, 1, (limit - start) * output_image_size); out_shard.setConstant(Eigen::NumTraits<T>::lowest()); } for (int32_t b = start; b < limit; ++b) { const int32_t out_offset_batch = b * out_height; for (int32_t h = 0; h < in_rows; ++h) { for (int32_t w = 0; w < in_cols; ++w) { // (h_start, h_end) * (w_start, w_end) is the range that the input // vector projects to. const int32_t hpad = h + pad_top; const int32_t wpad = w + pad_left; const int32_t h_start = (hpad < window_rows) ? 0 : (hpad - window_rows) / row_stride + 1; const int32_t h_end = std::min(hpad / row_stride + 1, out_height); const int32_t w_start = (wpad < window_cols) ? 0 : (wpad - window_cols) / col_stride + 1; const int32_t w_end = std::min(wpad / col_stride + 1, out_width); // compute elementwise max const int32_t in_offset = (b * in_rows + h) * in_cols + w; for (int32_t ph = h_start; ph < h_end; ++ph) { const int32_t out_offset_base = (out_offset_batch + ph) * out_width; for (int32_t pw = w_start; pw < w_end; ++pw) { const int32_t out_offset = out_offset_base + pw; out_mat.col(out_offset) = out_mat.col(out_offset).cwiseMax(in_mat.col(in_offset)); } } } } } }; // TODO(andydavis) Consider sharding across batch x rows x cols. // TODO(andydavis) Consider a higher resolution shard cost model. const int64_t shard_cost = params.tensor_in_rows * params.tensor_in_cols * params.depth; Shard(worker_threads.num_threads, worker_threads.workers, params.tensor_in_batch, shard_cost, shard); } }
0
[ "CWE-354" ]
tensorflow
4dddb2fd0b01cdd196101afbba6518658a2c9e07
125,139,611,671,914,630,000,000,000,000,000,000,000
105
Fix segfault in pools on empty shapes when certain dimension were very large. Pooling ops multiply certain components of the input shape, e.g. by multiplying input.shape[1] * input.shape[2] * input.shape[3]. This multiplication could overflow an int64 value if shape[0] was 0 but shape[1], shape[2], and shape[3] were very large, e.g. by passing an input with shape (0, 2**25, 2**25, 2**25). PiperOrigin-RevId: 404644978 Change-Id: Ic79f89c970357ca2962b1f231449066db9403146
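The guard this fix implies can be sketched as an overflow-checked product over the shape dimensions (standalone C; dimensions are assumed non-negative, as tensor shapes are):

	#include <stdint.h>

	/* returns 0 and writes a*b to *out, or -1 if the product would wrap */
	static int checked_mul_i64(int64_t a, int64_t b, int64_t *out)
	{
		if (a != 0 && b > INT64_MAX / a)
			return -1;
		*out = a * b;
		return 0;
	}

Running shape[1] * shape[2] * shape[3] through such a helper rejects inputs like (0, 2^25, 2^25, 2^25) before any buffer arithmetic can use the wrapped value.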
static inline struct hso_serial *dev2ser(struct hso_device *hso_dev) { return hso_dev->port_data.dev_serial; }
0
[ "CWE-125" ]
linux
5146f95df782b0ac61abde36567e718692725c89
169,103,992,850,905,280,000,000,000,000,000,000,000
4
USB: hso: Fix OOB memory access in hso_probe/hso_get_config_data The function hso_probe reads if_num from the USB device (as an u8) and uses it without a length check to index an array, resulting in an OOB memory read in hso_probe or hso_get_config_data. Add a length check for both locations and updated hso_probe to bail on error. This issue has been assigned CVE-2018-19985. Reported-by: Hui Peng <[email protected]> Reported-by: Mathias Payer <[email protected]> Signed-off-by: Hui Peng <[email protected]> Signed-off-by: Mathias Payer <[email protected]> Reviewed-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: David S. Miller <[email protected]>
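The essence of the fix is a bounds check on the device-controlled index before it is used; a hedged sketch (port_spec_table is illustrative, not the driver's actual array name):

	if_num = interface->cur_altsetting->desc.bInterfaceNumber;

	/* if_num comes from the device: never trust it as an array index */
	if (if_num >= ARRAY_SIZE(port_spec_table))
		return -ENODEV;
	port_spec = port_spec_table[if_num];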
void readDataAvailable(size_t /* len */) noexcept override {}
0
[ "CWE-125" ]
folly
c321eb588909646c15aefde035fd3133ba32cdee
285,647,341,502,221,700,000,000,000,000,000,000,000
1
Handle close_notify as standard writeErr in AsyncSSLSocket. Summary: Fixes CVE-2019-11934 Reviewed By: mingtaoy Differential Revision: D18020613 fbshipit-source-id: db82bb250e53f0d225f1280bd67bc74abd417836
struct xt_table *xt_find_table_lock(int af, const char *name) { struct xt_table *t; if (mutex_lock_interruptible(&xt[af].mutex) != 0) return ERR_PTR(-EINTR); list_for_each_entry(t, &xt[af].tables, list) if (strcmp(t->name, name) == 0 && try_module_get(t->me)) return t; mutex_unlock(&xt[af].mutex); return NULL; }
0
[ "CWE-787" ]
linux
9fa492cdc160cd27ce1046cb36f47d3b2b1efa21
115,996,926,667,649,130,000,000,000,000,000,000,000
13
[NETFILTER]: x_tables: simplify compat API Split the xt_compat_match/xt_compat_target into smaller type-safe functions performing just one operation. Handle all alignment and size-related conversions centrally in these function instead of requiring each module to implement a full-blown conversion function. Replace ->compat callback by ->compat_from_user and ->compat_to_user callbacks, responsible for converting just a single private structure. Signed-off-by: Patrick McHardy <[email protected]> Signed-off-by: David S. Miller <[email protected]>
void InstanceKlass::set_enclosing_method_indices(u2 class_index, u2 method_index) { Array<jushort>* inner_class_list = inner_classes(); assert (inner_class_list != NULL, "_inner_classes list is not set up"); int length = inner_class_list->length(); if (length % inner_class_next_offset == enclosing_method_attribute_size) { int index = length - enclosing_method_attribute_size; inner_class_list->at_put( index + enclosing_method_class_index_offset, class_index); inner_class_list->at_put( index + enclosing_method_method_index_offset, method_index); } }
0
[]
jdk17u
f8eb9abe034f7c6bea4da05a9ea42017b3f80730
7,328,830,923,488,700,000,000,000,000,000,000,000
13
8270386: Better verification of scan methods Reviewed-by: coleenp Backport-of: ac329cef45979bd0159ecd1347e36f7129bb2ce4
ArgParser::readArgsFromFile(char const* filename) { std::list<std::string> lines; if (strcmp(filename, "-") == 0) { QTC::TC("qpdf", "qpdf read args from stdin"); lines = QUtil::read_lines_from_file(std::cin); } else { QTC::TC("qpdf", "qpdf read args from file"); lines = QUtil::read_lines_from_file(filename); } for (std::list<std::string>::iterator iter = lines.begin(); iter != lines.end(); ++iter) { new_argv.push_back( PointerHolder<char>(true, QUtil::copy_string((*iter).c_str()))); } }
0
[ "CWE-787" ]
qpdf
d71f05ca07eb5c7cfa4d6d23e5c1f2a800f52e8e
136,163,864,658,381,300,000,000,000,000,000,000,000
20
Fix sign and conversion warnings (major) This makes all integer type conversions that have potential data loss explicit with calls that do range checks and raise an exception. After this commit, qpdf builds with no warnings when -Wsign-conversion -Wconversion is used with gcc or clang or when -W3 -Wd4800 is used with MSVC. This significantly reduces the likelihood of potential crashes from bogus integer values. There are some parts of the code that take int when they should take size_t or an offset. Such places would make qpdf not support files with more than 2^31 of something that usually wouldn't be so large. In the event that such a file shows up and is valid, at least qpdf would raise an error in the right spot so the issue could be legitimately addressed rather than failing in some weird way because of a silent overflow condition.
static void dump_backtrace_entry(unsigned long where, unsigned long stack) { print_ip_sym(where); if (in_exception_text(where)) dump_mem("", "Exception stack", stack, stack + sizeof(struct pt_regs)); }
0
[ "CWE-703" ]
linux
9955ac47f4ba1c95ecb6092aeaefb40a22e99268
75,292,051,640,975,800,000,000,000,000,000,000,000
7
arm64: don't kill the kernel on a bad esr from el0 Rather than completely killing the kernel if we receive an esr value we can't deal with in the el0 handlers, send the process a SIGILL and log the esr value in the hope that we can debug it. If we receive a bad esr from el1, we'll die() as before. Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Cc: [email protected]
static int selinux_task_movememory(struct task_struct *p) { return avc_has_perm(&selinux_state, current_sid(), task_sid_obj(p), SECCLASS_PROCESS, PROCESS__SETSCHED, NULL); }
0
[ "CWE-416" ]
linux
a3727a8bac0a9e77c70820655fd8715523ba3db7
274,755,636,598,768,200,000,000,000,000,000,000,000
6
selinux,smack: fix subjective/objective credential use mixups Jann Horn reported a problem with commit eb1231f73c4d ("selinux: clarify task subjective and objective credentials") where some LSM hooks were attempting to access the subjective credentials of a task other than the current task. Generally speaking, it is not safe to access another task's subjective credentials and doing so can cause a number of problems. Further, while looking into the problem, I realized that Smack was suffering from a similar problem brought about by a similar commit 1fb057dcde11 ("smack: differentiate between subjective and objective task credentials"). This patch addresses this problem by restoring the use of the task's objective credentials in those cases where the task is other than the current executing task. Not only does this resolve the problem reported by Jann, it is arguably the correct thing to do in these cases. Cc: [email protected] Fixes: eb1231f73c4d ("selinux: clarify task subjective and objective credentials") Fixes: 1fb057dcde11 ("smack: differentiate between subjective and objective task credentials") Reported-by: Jann Horn <[email protected]> Acked-by: Eric W. Biederman <[email protected]> Acked-by: Casey Schaufler <[email protected]> Signed-off-by: Paul Moore <[email protected]>
rsvg_new_text (void) { RsvgNodeText *text; text = g_new (RsvgNodeText, 1); _rsvg_node_init (&text->super); text->super.draw = _rsvg_node_text_draw; text->super.set_atts = _rsvg_node_text_set_atts; text->x = text->y = text->dx = text->dy = _rsvg_css_parse_length ("0"); return &text->super; }
1
[]
librsvg
34c95743ca692ea0e44778e41a7c0a129363de84
257,362,848,512,948,800,000,000,000,000,000,000,000
10
Store node type separately in RsvgNode The node name (formerly RsvgNode:type) cannot be used to infer the sub-type of RsvgNode that we're dealing with, since for unknown elements we put type = node-name. This led to a (potentially exploitable) crash e.g. when the element name started with "fe", which tricked the old code into considering it as a RsvgFilterPrimitive. CVE-2011-3146 https://bugzilla.gnome.org/show_bug.cgi?id=658014
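A sketch of the separation the commit describes; the enum values are illustrative, and the point is that the kind tag is set explicitly at construction instead of being inferred from the element name:

	typedef enum {
		RSVG_NODE_TYPE_TEXT,
		RSVG_NODE_TYPE_FILTER_PRIMITIVE,
		RSVG_NODE_TYPE_UNKNOWN	/* unknown elements get their own kind */
	} RsvgNodeType;

	/* in rsvg_new_text(): */
	text->super.type = RSVG_NODE_TYPE_TEXT;	/* name string kept separately */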
CiffComponent* CiffDirectory::doAdd(CrwDirs& crwDirs, uint16_t crwTagId) { /* add() if stack not empty pop from stack find dir among components if not found, create it add() else find tag among components if not found, create it set value */ const Components::iterator b = components_.begin(); const Components::iterator e = components_.end(); if (!crwDirs.empty()) { CrwSubDir csd = crwDirs.top(); crwDirs.pop(); // Find the directory for (Components::iterator i = b; i != e; ++i) { if ((*i)->tag() == csd.crwDir_) { cc_ = *i; break; } } if (cc_ == 0) { // Directory doesn't exist yet, add it m_ = UniquePtr(new CiffDirectory(csd.crwDir_, csd.parent_)); cc_ = m_.get(); add(std::move(m_)); } // Recursive call to next lower level directory cc_ = cc_->add(crwDirs, crwTagId); } else { // Find the tag for (Components::iterator i = b; i != e; ++i) { if ((*i)->tagId() == crwTagId) { cc_ = *i; break; } } if (cc_ == 0) { // Tag doesn't exist yet, add it m_ = UniquePtr(new CiffEntry(crwTagId, tag())); cc_ = m_.get(); add(std::move(m_)); } } return cc_; }
0
[ "CWE-125" ]
exiv2
9628f82084ed30d494ddd4f7360d233801e22967
237,583,392,090,129,300,000,000,000,000,000,000,000
53
Avoid integer overflow. (cherry picked from commit c0ecc2ae36f34462be98623deb85ba1747ae2175)
static int vlv_gt_eq_to_index(struct vlv_context *ac, struct GUID *guid_array, struct ldb_vlv_req_control *vlv_details, struct ldb_server_sort_control *sort_details, int *status) { /* this has a >= comparison string, which needs to be * converted into indices. */ size_t len = ac->store->num_entries; struct ldb_context *ldb; const struct ldb_schema_attribute *a; struct GUID *result = NULL; struct vlv_sort_context context; struct ldb_val value = { .data = (uint8_t *)vlv_details->match.gtOrEq.value, .length = vlv_details->match.gtOrEq.value_len }; ldb = ldb_module_get_ctx(ac->module); a = ldb_schema_attribute_by_name(ldb, sort_details->attributeName); context = (struct vlv_sort_context){ .ldb = ldb, .comparison_fn = a->syntax->comparison_fn, .attr = sort_details->attributeName, .ac = ac, .status = LDB_SUCCESS, .value = value }; if (sort_details->reverse) { /* when the sort is reversed, "gtOrEq" means "less than or equal" */ BINARY_ARRAY_SEARCH_GTE(guid_array, len, &context, vlv_value_compare_rev, result, result); } else { BINARY_ARRAY_SEARCH_GTE(guid_array, len, &context, vlv_value_compare, result, result); } if (context.status != LDB_SUCCESS) { *status = context.status; return len; } *status = LDB_SUCCESS; if (result == NULL) { /* the target is beyond the end of the array */ return len; } return result - guid_array; }
0
[ "CWE-416" ]
samba
32c333def9ad5a1c67abee320cf5f3c4f2cb1e5c
305,581,514,956,240,400,000,000,000,000,000,000,000
54
CVE-2020-10760 dsdb: Ensure a proper talloc tree for saved controls Otherwise a paged search on the GC port will fail as the ->data was not kept around for the second page of searches. An example command to produce this is bin/ldbsearch --paged -H ldap://$SERVER:3268 -U$USERNAME%$PASSWORD This shows up later in the partition module as: ERROR: AddressSanitizer: heap-use-after-free on address 0x60b00151ef20 at pc 0x7fec3f801aac bp 0x7ffe8472c270 sp 0x7ffe8472c260 READ of size 4 at 0x60b00151ef20 thread T0 (ldap(0)) #0 0x7fec3f801aab in talloc_chunk_from_ptr ../../lib/talloc/talloc.c:526 #1 0x7fec3f801aab in __talloc_get_name ../../lib/talloc/talloc.c:1559 #2 0x7fec3f801aab in talloc_check_name ../../lib/talloc/talloc.c:1582 #3 0x7fec1b86b2e1 in partition_search ../../source4/dsdb/samdb/ldb_modules/partition.c:780 or smb_panic_default: PANIC (pid 13287): Bad talloc magic value - unknown value (from source4/dsdb/samdb/ldb_modules/partition.c:780) BUG: https://bugzilla.samba.org/show_bug.cgi?id=14402 Signed-off-by: Andrew Bartlett <[email protected]>
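The talloc idiom behind the fix, sketched with assumed variable names: reparent the control's payload onto an object that lives across pages, so ->data is still valid when the second page of the search arrives:

	/* keep the saved control's data alive for the next paged search */
	saved->data = talloc_steal(saved, ctrl->data);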
ofputil_decode_role_status(const struct ofp_header *oh, struct ofputil_role_status *rs) { struct ofpbuf b = ofpbuf_const_initializer(oh, ntohs(oh->length)); enum ofpraw raw = ofpraw_pull_assert(&b); ovs_assert(raw == OFPRAW_OFPT14_ROLE_STATUS); const struct ofp14_role_status *r = b.msg; if (r->role != htonl(OFPCR12_ROLE_NOCHANGE) && r->role != htonl(OFPCR12_ROLE_EQUAL) && r->role != htonl(OFPCR12_ROLE_MASTER) && r->role != htonl(OFPCR12_ROLE_SLAVE)) { return OFPERR_OFPRRFC_BAD_ROLE; } rs->role = ntohl(r->role); rs->generation_id = ntohll(r->generation_id); rs->reason = r->reason; return 0; }
0
[ "CWE-772" ]
ovs
77ad4225d125030420d897c873e4734ac708c66b
62,191,787,583,736,860,000,000,000,000,000,000,000
21
ofp-util: Fix memory leaks on error cases in ofputil_decode_group_mod(). Found by libFuzzer. Reported-by: Bhargava Shastry <[email protected]> Signed-off-by: Ben Pfaff <[email protected]> Acked-by: Justin Pettit <[email protected]>
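A sketch of the leak-free error path; ofputil_uninit_group_mod() is OVS's existing teardown for a partially decoded group mod, though the surrounding names here are assumptions:

    enum ofperr error = ofputil_pull_ofp15_group_mod(&msg, ofp_version, gm);
    if (error) {
        ofputil_uninit_group_mod(gm);   /* free buckets decoded so far */
        return error;
    }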
set_unescape_error (GMarkupParseContext *context, GError **error, const gchar *remaining_text, GMarkupError code, const gchar *format, ...) { GError *tmp_error; gchar *s; va_list args; gint remaining_newlines; const gchar *p; remaining_newlines = 0; p = remaining_text; while (*p != '\0') { if (*p == '\n') ++remaining_newlines; ++p; } va_start (args, format); s = g_strdup_vprintf (format, args); va_end (args); tmp_error = g_error_new (G_MARKUP_ERROR, code, _("Error on line %d: %s"), context->line_number - remaining_newlines, s); g_free (s); mark_error (context, tmp_error); g_propagate_error (error, tmp_error); }
0
[ "CWE-476" ]
glib
fccef3cc822af74699cca84cd202719ae61ca3b9
228,883,274,491,718,600,000,000,000,000,000,000,000
38
gmarkup: Fix crash in error handling path for closing elements If something which looks like a closing tag is left unfinished, but isn’t paired to an opening tag in the document, the error handling code would do a null pointer dereference. Avoid that, at the cost of introducing a new translatable error message. Includes a test case, courtesy of pdknsk. Signed-off-by: Philip Withnall <[email protected]> https://gitlab.gnome.org/GNOME/glib/issues/1461
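A sketch of the guard the fix adds in the close-tag error path; set_error here mirrors the set_unescape_error helper shown above, and close_name is an assumed local holding the tag name:

  if (context->tag_stack == NULL)
    {
      set_error (context, error, G_MARKUP_ERROR_PARSE,
                 _("Element '%s' was closed, but no element is currently open"),
                 close_name);
      return;
    }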
adv_error adv_png_read_iend(adv_fz* f, const unsigned char* data, unsigned data_size, unsigned type) { if (type == ADV_PNG_CN_IEND) return 0; /* ancillary bit. bit 5 of first byte. 0 (uppercase) = critical, 1 (lowercase) = ancillary. */ if ((type & 0x20000000) == 0) { char buf[4]; be_uint32_write(buf, type); error_unsupported_set("Unsupported critical chunk '%c%c%c%c'", buf[0], buf[1], buf[2], buf[3]); return -1; } while (1) { unsigned char* ptr; unsigned ptr_size; /* read next */ if (adv_png_read_chunk(f, &ptr, &ptr_size, &type) != 0) { return -1; } free(ptr); if (type == ADV_PNG_CN_IEND) break; /* ancillary bit. bit 5 of first byte. 0 (uppercase) = critical, 1 (lowercase) = ancillary. */ if ((type & 0x20000000) == 0) { char buf[4]; be_uint32_write(buf, type); error_unsupported_set("Unsupported critical chunk '%c%c%c%c'", buf[0], buf[1], buf[2], buf[3]); return -1; } } return 0; }
0
[ "CWE-119" ]
advancecomp
78a56b21340157775be2462a19276b4d31d2bd01
81,999,737,616,066,940,000,000,000,000,000,000,000
38
Fix a buffer overflow caused by invalid images
conntrack_execute(struct conntrack *ct, struct dp_packet_batch *pkt_batch, ovs_be16 dl_type, bool force, bool commit, uint16_t zone, const uint32_t *setmark, const struct ovs_key_ct_labels *setlabel, const char *helper, const struct nat_action_info_t *nat_action_info) { struct dp_packet **pkts = pkt_batch->packets; size_t cnt = pkt_batch->count; struct conn_lookup_ctx ctx; long long now = time_msec(); for (size_t i = 0; i < cnt; i++) { if (!conn_key_extract(ct, pkts[i], dl_type, &ctx, zone)) { pkts[i]->md.ct_state = CS_INVALID; write_ct_md(pkts[i], zone, NULL, NULL, NULL); continue; } process_one(ct, pkts[i], &ctx, zone, force, commit, now, setmark, setlabel, nat_action_info, helper); } return 0; }
0
[ "CWE-400" ]
ovs
35c280072c1c3ed58202745b7d27fbbd0736999b
170,701,786,309,798,570,000,000,000,000,000,000,000
25
flow: Support extra padding length. Although not required, padding can be optionally added until the packet length is MTU bytes. A packet with extra padding currently fails sanity checks. Vulnerability: CVE-2020-35498 Fixes: fa8d9001a624 ("miniflow_extract: Properly handle small IP packets.") Reported-by: Joakim Hindersson <[email protected]> Acked-by: Ilya Maximets <[email protected]> Signed-off-by: Flavio Leitner <[email protected]> Signed-off-by: Ilya Maximets <[email protected]>
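The tolerant-parsing idea, sketched against miniflow_extract(): bytes beyond the IP total length are recorded as L2 pad rather than treated as a malformed packet (dp_packet_set_l2_pad_size() is an existing helper; the control flow here is simplified):

    if (OVS_UNLIKELY(size < ip_len)) {
        goto out;   /* really truncated: still rejected */
    }
    /* extra bytes up to the MTU are padding, not garbage */
    dp_packet_set_l2_pad_size(packet, size - ip_len);
    size = ip_len;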
static void js_setvar(js_State *J, const char *name) { js_Environment *E = J->E; do { js_Property *ref = jsV_getproperty(J, E->variables, name); if (ref) { if (ref->setter) { js_pushobject(J, ref->setter); js_pushobject(J, E->variables); js_copy(J, -3); js_call(J, 1); js_pop(J, 1); return; } if (!(ref->atts & JS_READONLY)) ref->value = *stackidx(J, -1); else if (J->strict) js_typeerror(J, "'%s' is read-only", name); return; } E = E->outer; } while (E); if (J->strict) js_referenceerror(J, "assignment to undeclared variable '%s'", name); jsR_setproperty(J, J->G, name); }
0
[ "CWE-476" ]
mujs
77ab465f1c394bb77f00966cd950650f3f53cb24
149,470,063,434,441,650,000,000,000,000,000,000,000
26
Fix 697401: Error when dropping extra arguments to lightweight functions.
BOOL update_write_patblt_order(wStream* s, ORDER_INFO* orderInfo, PATBLT_ORDER* patblt) { if (!Stream_EnsureRemainingCapacity(s, update_approximate_patblt_order(orderInfo, patblt))) return FALSE; orderInfo->fieldFlags = 0; orderInfo->fieldFlags |= ORDER_FIELD_01; update_write_coord(s, patblt->nLeftRect); orderInfo->fieldFlags |= ORDER_FIELD_02; update_write_coord(s, patblt->nTopRect); orderInfo->fieldFlags |= ORDER_FIELD_03; update_write_coord(s, patblt->nWidth); orderInfo->fieldFlags |= ORDER_FIELD_04; update_write_coord(s, patblt->nHeight); orderInfo->fieldFlags |= ORDER_FIELD_05; Stream_Write_UINT8(s, patblt->bRop); orderInfo->fieldFlags |= ORDER_FIELD_06; update_write_color(s, patblt->backColor); orderInfo->fieldFlags |= ORDER_FIELD_07; update_write_color(s, patblt->foreColor); orderInfo->fieldFlags |= ORDER_FIELD_08; orderInfo->fieldFlags |= ORDER_FIELD_09; orderInfo->fieldFlags |= ORDER_FIELD_10; orderInfo->fieldFlags |= ORDER_FIELD_11; orderInfo->fieldFlags |= ORDER_FIELD_12; update_write_brush(s, &patblt->brush, orderInfo->fieldFlags >> 7); return TRUE; }
0
[ "CWE-415" ]
FreeRDP
67c2aa52b2ae0341d469071d1bc8aab91f8d2ed8
171,292,188,430,026,700,000,000,000,000,000,000,000
28
Fixed #6013: Check new length is > 0
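The check named in the issue title, sketched: validate the computed length before any capacity math (new_len is an assumed local for the length being checked):

	if (new_len <= 0)
		return FALSE;

	if (!Stream_EnsureRemainingCapacity(s, (size_t)new_len))
		return FALSE;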
static void iowarrior_disconnect(struct usb_interface *interface) { struct iowarrior *dev; int minor; dev = usb_get_intfdata(interface); mutex_lock(&iowarrior_open_disc_lock); usb_set_intfdata(interface, NULL); /* prevent device read, write and ioctl */ dev->present = 0; minor = dev->minor; mutex_unlock(&iowarrior_open_disc_lock); /* give back our minor - this will call close() locks need to be dropped at this point*/ usb_deregister_dev(interface, &iowarrior_class); mutex_lock(&dev->mutex); /* prevent device read, write and ioctl */ mutex_unlock(&dev->mutex); if (dev->opened) { /* There is a process that holds a filedescriptor to the device , so we only shutdown read-/write-ops going on. Deleting the device is postponed until close() was called. */ usb_kill_urb(dev->int_in_urb); wake_up_interruptible(&dev->read_wait); wake_up_interruptible(&dev->write_wait); } else { /* no process is using the device, cleanup now */ iowarrior_delete(dev); } dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n", minor - IOWARRIOR_MINOR_BASE); }
1
[ "CWE-416" ]
linux
edc4746f253d907d048de680a621e121517f484b
89,920,402,995,718,520,000,000,000,000,000,000,000
39
USB: iowarrior: fix use-after-free on disconnect A recent fix addressing a deadlock on disconnect introduced a new bug by moving the present flag out of the critical section protected by the driver-data mutex. This could lead to a racing release() freeing the driver data before disconnect() is done with it. Due to insufficient locking a related use-after-free could be triggered also before the above mentioned commit. Specifically, the driver needs to hold the driver-data mutex also while checking the opened flag at disconnect(). Fixes: c468a8aa790e ("usb: iowarrior: fix deadlock on disconnect") Fixes: 946b960d13c1 ("USB: add driver for iowarrior devices.") Cc: stable <[email protected]> # 2.6.21 Reported-by: [email protected] Signed-off-by: Johan Hovold <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
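The locking shape of the fix, sketched: both flags are accessed under the driver-data mutex, so release() cannot free the structure while disconnect() is still looking at it:

	int opened;

	mutex_lock(&dev->mutex);
	dev->present = 0;		/* block new read/write/ioctl */
	opened = dev->opened;
	mutex_unlock(&dev->mutex);

	if (opened) {
		/* a file descriptor is still open: only shut down in-flight
		 * I/O here; deletion happens later in release() */
	} else {
		iowarrior_delete(dev);
	}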
GF_DashSegmenterInput *set_dash_input(GF_DashSegmenterInput *dash_inputs, char *name, u32 *nb_dash_inputs) { GF_DashSegmenterInput *di; char *other_opts = NULL; char *sep = gf_url_colon_suffix(name); dash_inputs = gf_realloc(dash_inputs, sizeof(GF_DashSegmenterInput) * (*nb_dash_inputs + 1) ); memset(&dash_inputs[*nb_dash_inputs], 0, sizeof(GF_DashSegmenterInput) ); di = &dash_inputs[*nb_dash_inputs]; (*nb_dash_inputs)++; if (sep) { char *opts, *first_opt; opts = first_opt = sep; while (opts) { sep = gf_url_colon_suffix(opts); while (sep) { /* this is a real separator if it is followed by a keyword we are looking for */ if (!strnicmp(sep, ":id=", 4) || !strnicmp(sep, ":dur=", 5) || !strnicmp(sep, ":period=", 8) || !strnicmp(sep, ":BaseURL=", 9) || !strnicmp(sep, ":bandwidth=", 11) || !strnicmp(sep, ":role=", 6) || !strnicmp(sep, ":desc", 5) || !strnicmp(sep, ":sscale", 7) || !strnicmp(sep, ":duration=", 10) || !strnicmp(sep, ":period_duration=", 10) || !strnicmp(sep, ":pdur=", 6) || !strnicmp(sep, ":xlink=", 7) || !strnicmp(sep, ":asID=", 6) || !strnicmp(sep, ":sn=", 4) || !strnicmp(sep, ":tpl=", 5) || !strnicmp(sep, ":hls=", 5) || !strnicmp(sep, ":trackID=", 9) || !strnicmp(sep, ":@@", 3) ) { break; } else { char *nsep = gf_url_colon_suffix(sep+1); if (nsep) nsep[0] = 0; gf_dynstrcat(&other_opts, sep, ":"); if (nsep) nsep[0] = ':'; sep = strchr(sep+1, ':'); } } if (sep && !strncmp(sep, "://", 3) && strnicmp(sep, ":@@", 3)) sep = gf_url_colon_suffix(sep+3); if (sep) sep[0] = 0; if (!strnicmp(opts, "id=", 3)) { di->representationID = gf_strdup(opts+3); /*we allow the same repID to be set to force muxed representations*/ } else if (!strnicmp(opts, "dur=", 4)) di->media_duration = (Double)atof(opts+4); else if (!strnicmp(opts, "period=", 7)) di->periodID = gf_strdup(opts+7); else if (!strnicmp(opts, "BaseURL=", 8)) { di->baseURL = (char **)gf_realloc(di->baseURL, (di->nb_baseURL+1)*sizeof(char *)); di->baseURL[di->nb_baseURL] = gf_strdup(opts+8); di->nb_baseURL++; } else if (!strnicmp(opts, "bandwidth=", 10)) di->bandwidth = atoi(opts+10); else if (!strnicmp(opts, "role=", 5)) { di->roles = gf_realloc(di->roles, sizeof (char *) * (di->nb_roles+1)); di->roles[di->nb_roles] = gf_strdup(opts+5); di->nb_roles++; } else if (!strnicmp(opts, "desc", 4)) { u32 *nb_descs=NULL; char ***descs=NULL; u32 opt_offset=0; u32 len; if (!strnicmp(opts, "desc_p=", 7)) { nb_descs = &di->nb_p_descs; descs = &di->p_descs; opt_offset = 7; } else if (!strnicmp(opts, "desc_as=", 8)) { nb_descs = &di->nb_as_descs; descs = &di->as_descs; opt_offset = 8; } else if (!strnicmp(opts, "desc_as_c=", 8)) { nb_descs = &di->nb_as_c_descs; descs = &di->as_c_descs; opt_offset = 10; } else if (!strnicmp(opts, "desc_rep=", 8)) { nb_descs = &di->nb_rep_descs; descs = &di->rep_descs; opt_offset = 9; } if (opt_offset) { (*nb_descs)++; opts += opt_offset; len = (u32) strlen(opts); (*descs) = (char **)gf_realloc((*descs), (*nb_descs)*sizeof(char *)); (*descs)[(*nb_descs)-1] = (char *)gf_malloc((len+1)*sizeof(char)); strncpy((*descs)[(*nb_descs)-1], opts, len); (*descs)[(*nb_descs)-1][len] = 0; } } else if (!strnicmp(opts, "xlink=", 6)) di->xlink = gf_strdup(opts+6); else if (!strnicmp(opts, "sscale", 6)) di->sscale = GF_TRUE; else if (!strnicmp(opts, "pdur=", 5)) di->period_duration = (Double) atof(opts+5); else if (!strnicmp(opts, "period_duration=", 16)) di->period_duration = (Double) atof(opts+16); else if (!strnicmp(opts, "duration=", 9)) di->dash_duration = (Double) atof(opts+9); else if (!strnicmp(opts, "asID=", 5)) di->asID = atoi(opts+5); else if (!strnicmp(opts, "sn=", 3)) di->startNumber = atoi(opts+3); else if (!strnicmp(opts, "tpl=", 4)) di->seg_template = gf_strdup(opts+4); else if (!strnicmp(opts, "hls=", 4)) di->hls_pl = gf_strdup(opts+4); else if (!strnicmp(opts, "trackID=", 8)) di->track_id = atoi(opts+8); else if (!strnicmp(opts, "@@", 2)) { di->filter_chain = gf_strdup(opts+2); if (sep) sep[0] = ':'; sep = NULL; } if (!sep) break; sep[0] = ':'; opts = sep+1; } first_opt[0] = '\0'; } di->file_name = name; di->source_opts = other_opts; if (!di->representationID) { char szRep[100]; sprintf(szRep, "%d", *nb_dash_inputs); di->representationID = gf_strdup(szRep); } return dash_inputs; }
0
[ "CWE-476" ]
gpac
9eeac00b38348c664dfeae2525bba0cf1bc32349
243,973,782,433,467,350,000,000,000,000,000,000,000
133
fixed #1565
static int __elevator_change(struct request_queue *q, const char *name) { char elevator_name[ELV_NAME_MAX]; struct elevator_type *e; /* Make sure queue is not in the middle of being removed */ if (!test_bit(QUEUE_FLAG_REGISTERED, &q->queue_flags)) return -ENOENT; /* * Special case for mq, turn off scheduling */ if (!strncmp(name, "none", 4)) { if (!q->elevator) return 0; return elevator_switch(q, NULL); } strlcpy(elevator_name, name, sizeof(elevator_name)); e = elevator_get(q, strstrip(elevator_name), true); if (!e) return -EINVAL; if (q->elevator && elevator_match(q->elevator->type, elevator_name)) { elevator_put(e); return 0; } return elevator_switch(q, e); }
0
[ "CWE-416" ]
linux
c3e2219216c92919a6bd1711f340f5faa98695e6
173,260,733,871,195,530,000,000,000,000,000,000,000
30
block: free sched's request pool in blk_cleanup_queue In theory, IO scheduler belongs to request queue, and the request pool of sched tags belongs to the request queue too. However, the current tags allocation interfaces are re-used for both driver tags and sched tags, and driver tags is definitely host wide, and doesn't belong to any request queue, same with its request pool. So we need tagset instance for freeing request of sched tags. Meantime, blk_mq_free_tag_set() often follows blk_cleanup_queue() in case of non-BLK_MQ_F_TAG_SHARED, this way requires that request pool of sched tags to be freed before calling blk_mq_free_tag_set(). Commit 47cdee29ef9d94e ("block: move blk_exit_queue into __blk_release_queue") moves blk_exit_queue into __blk_release_queue for simplifying the fast path in generic_make_request(), then causes oops during freeing requests of sched tags in __blk_release_queue(). Fix the above issue by moving the freeing of the sched tags request pool into blk_cleanup_queue(); this is safe because the queue has been frozen and there are no in-queue requests at that time. Freeing sched tags has to be kept in queue's release handler because there might be un-completed dispatch activity which might refer to sched tags. Cc: Bart Van Assche <[email protected]> Cc: Christoph Hellwig <[email protected]> Fixes: 47cdee29ef9d94e485eb08f962c74943023a5271 ("block: move blk_exit_queue into __blk_release_queue") Tested-by: Yi Zhang <[email protected]> Reported-by: kernel test robot <[email protected]> Signed-off-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
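The placement the commit argues for, sketched: free the scheduler's request pool in blk_cleanup_queue() while the queue is frozen (blk_mq_sched_free_requests() is, to my understanding, the helper this fix introduces; treat the exact name as an assumption):

	blk_freeze_queue(q);

	mutex_lock(&q->sysfs_lock);
	if (q->elevator)
		blk_mq_sched_free_requests(q);	/* safe: no in-queue requests now */
	mutex_unlock(&q->sysfs_lock);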