Dataset schema (one record per function):

  func       string   lengths 0 – 484k chars   function source code
  target     int64    values 0 – 1             binary label
  cwe        list     lengths 0 – 4            associated CWE identifiers
  project    string   799 distinct values      source project
  commit_id  string   length 40                commit hash
  hash       float64  range 1,215,700,430,453,689,100,000,000 – 340,281,914,521,452,260,000,000,000,000
  size       int64    values 1 – 24k           function size
  message    string   lengths 0 – 13.3k chars  commit message
g_vfs_daemon_init (GVfsDaemon *daemon)
{
  GError *error;
  gint max_threads = 1; /* TODO: handle max threads */

  daemon->thread_pool = g_thread_pool_new (job_handler_callback,
                                           daemon,
                                           max_threads,
                                           FALSE, NULL);
  /* TODO: verify thread_pool != NULL in a nicer way */
  g_assert (daemon->thread_pool != NULL);

  g_mutex_init (&daemon->lock);

  daemon->mount_counter = 0;
  daemon->jobs = NULL;
  daemon->registered_paths =
    g_hash_table_new_full (g_str_hash, g_str_equal,
                           g_free, (GDestroyNotify)registered_path_free);
  /* This is where we store active client connections so when a new filter is registered,
   * we re-register them on all active connections */
  daemon->client_connections =
    g_hash_table_new_full (g_direct_hash, g_direct_equal, g_object_unref, NULL);

  daemon->conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  g_assert (daemon->conn != NULL);

  daemon->daemon_skeleton = gvfs_dbus_daemon_skeleton_new ();
  g_signal_connect (daemon->daemon_skeleton, "handle-get-connection",
                    G_CALLBACK (handle_get_connection), daemon);
  g_signal_connect (daemon->daemon_skeleton, "handle-cancel",
                    G_CALLBACK (handle_cancel), daemon);
  g_signal_connect (daemon->daemon_skeleton, "handle-list-monitor-implementations",
                    G_CALLBACK (handle_list_monitor_implementations), daemon);

  error = NULL;
  if (!g_dbus_interface_skeleton_export (G_DBUS_INTERFACE_SKELETON (daemon->daemon_skeleton),
                                         daemon->conn,
                                         G_VFS_DBUS_DAEMON_PATH,
                                         &error))
    {
      g_warning ("Error exporting daemon interface: %s (%s, %d)\n",
                 error->message, g_quark_to_string (error->domain), error->code);
      g_error_free (error);
    }

  daemon->mountable_skeleton = gvfs_dbus_mountable_skeleton_new ();
  g_signal_connect (daemon->mountable_skeleton, "handle-mount",
                    G_CALLBACK (daemon_handle_mount), daemon);

  error = NULL;
  if (!g_dbus_interface_skeleton_export (G_DBUS_INTERFACE_SKELETON (daemon->mountable_skeleton),
                                         daemon->conn,
                                         G_VFS_DBUS_MOUNTABLE_PATH,
                                         &error))
    {
      g_warning ("Error exporting mountable interface: %s (%s, %d)\n",
                 error->message, g_quark_to_string (error->domain), error->code);
      g_error_free (error);
    }
}
1
[ "CWE-276" ]
gvfs
e3808a1b4042761055b1d975333a8243d67b8bfe
17,821,737,996,076,360,000,000,000,000,000,000,000
59
gvfsdaemon: Check that the connecting client is the same user

Otherwise, an attacker who learns the abstract socket address from netstat(8) or similar could connect to it and issue D-Bus method calls.

Signed-off-by: Simon McVittie <[email protected]>
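The hardening described in this commit boils down to comparing the connecting peer's UID against the daemon's own before accepting D-Bus method calls. A minimal sketch of that check (hypothetical helper name; the real fix works with credentials obtained from the GDBusConnection):

```c
#include <assert.h>
#include <sys/types.h>

/* Hypothetical helper: reject a D-Bus peer whose UID differs from the
 * daemon's. An abstract socket is reachable by any local user, so the
 * daemon must verify the connecting client's credentials itself. */
static int peer_allowed(uid_t peer_uid, uid_t daemon_uid)
{
    return peer_uid == daemon_uid; /* same user: allowed */
}
```

The sketch only captures the policy; in the actual daemon the peer UID would come from the connection's credentials, not from a parameter.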
ResetReindexPending(void)
{
	pendingReindexedIndexes = NIL;
}
0
[ "CWE-362" ]
postgres
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
305,335,500,435,057,000,000,000,000,000,000,000,000
4
Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
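The fix pattern the message describes can be sketched in miniature: resolve the name to a stable identifier (an OID) once, lock it, and hand that identifier to every later phase of the DDL, so a concurrent rename can no longer split the operation across two tables. All names below are illustrative stand-ins, not PostgreSQL source:

```c
#include <assert.h>

typedef unsigned int Oid;

static int lookup_count; /* counts name lookups, for demonstration */

/* Resolve a relation name to an OID exactly once (and, in the real
 * system, take a lock that pins that resolution). */
static Oid resolve_and_lock(const char *name)
{
    (void)name;
    lookup_count++;
    return 16384; /* pretend OID */
}

static int check_ownership(Oid relid) { return relid != 0; }
static int create_index(Oid relid)    { return relid != 0; }

/* Both phases see the same relation because the lookup happened once;
 * the vulnerable pattern resolved the name again per phase. */
static int define_index(const char *name)
{
    Oid relid = resolve_and_lock(name);
    if (!check_ownership(relid))
        return -1;
    return create_index(relid) ? 0 : -1;
}
```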
int manager_open_serialization(Manager *m, FILE **_f) {
        const char *path;
        int fd = -1;
        FILE *f;

        assert(_f);

        path = MANAGER_IS_SYSTEM(m) ? "/run/systemd" : "/tmp";
        fd = open_tmpfile_unlinkable(path, O_RDWR|O_CLOEXEC);
        if (fd < 0)
                return -errno;

        log_debug("Serializing state to %s", path);

        f = fdopen(fd, "w+");
        if (!f) {
                safe_close(fd);
                return -errno;
        }

        *_f = f;
        return 0;
}
0
[ "CWE-20" ]
systemd
531ac2b2349da02acc9c382849758e07eb92b020
68,108,211,312,650,960,000,000,000,000,000,000,000
24
If the notification message length is 0, ignore the message (#4237) Fixes #4234. Signed-off-by: Jorge Niedbalski <[email protected]>
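The one-line guard the commit describes can be sketched as follows (an illustrative stand-in, not the systemd function itself): a zero-length notification datagram carries nothing to parse, so it is dropped before any parsing is attempted.

```c
#include <assert.h>

/* Sketch of the check: ignore empty notification messages early. */
static int handle_notify_message(const char *buf, long n)
{
    if (n == 0)
        return 0;          /* zero-length message: ignore */
    if (n < 0)
        return -1;         /* read error */
    /* ... parse "READY=1"-style assignments from buf[0..n) ... */
    (void)buf;
    return 1;              /* processed */
}
```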
void Item_singlerow_subselect::reset()
{
  Item_subselect::reset();
  if (value)
  {
    for (uint i= 0; i < engine->cols(); i++)
      row[i]->set_null();
  }
}
0
[ "CWE-89" ]
server
3c209bfc040ddfc41ece8357d772547432353fd2
270,599,540,168,321,970,000,000,000,000,000,000,000
9
MDEV-25994: Crash with union of my_decimal type in ORDER BY clause

When a single-row subquery fails with the "Subquery returns more than 1 row" error, it raises an error and returns NULL. On the other hand, Item_singlerow_subselect sets item->maybe_null=0 for table-less subqueries like "(SELECT not_null_value)" (*). This discrepancy (an item with maybe_null=0 returning NULL) causes the code in Type_handler_decimal_result::make_sort_key_part() to crash. Fixed this by allowing inference (*) only when the subquery is NOT a UNION.
NTSTATUS mitkdc_task_init(struct task_server *task) { struct tevent_req *subreq; const char * const *kdc_cmd; struct interface *ifaces; char *kdc_config = NULL; struct kdc_server *kdc; krb5_error_code code; NTSTATUS status; kadm5_ret_t ret; kadm5_config_params config; void *server_handle; int dbglvl = 0; task_server_set_title(task, "task[mitkdc_parent]"); switch (lpcfg_server_role(task->lp_ctx)) { case ROLE_STANDALONE: task_server_terminate(task, "The KDC is not required in standalone " "server configuration, terminate!", false); return NT_STATUS_INVALID_DOMAIN_ROLE; case ROLE_DOMAIN_MEMBER: task_server_terminate(task, "The KDC is not required in member " "server configuration", false); return NT_STATUS_INVALID_DOMAIN_ROLE; case ROLE_ACTIVE_DIRECTORY_DC: /* Yes, we want to start the KDC */ break; } /* Load interfaces for kpasswd */ load_interface_list(task, task->lp_ctx, &ifaces); if (iface_list_count(ifaces) == 0) { task_server_terminate(task, "KDC: no network interfaces configured", false); return NT_STATUS_UNSUCCESSFUL; } kdc_config = talloc_asprintf(task, "%s/kdc.conf", lpcfg_private_dir(task->lp_ctx)); if (kdc_config == NULL) { task_server_terminate(task, "KDC: no memory", false); return NT_STATUS_NO_MEMORY; } setenv("KRB5_KDC_PROFILE", kdc_config, 0); TALLOC_FREE(kdc_config); dbglvl = debuglevel_get_class(DBGC_KERBEROS); if (dbglvl >= 10) { char *kdc_trace_file = talloc_asprintf(task, "%s/mit_kdc_trace.log", get_dyn_LOGFILEBASE()); if (kdc_trace_file == NULL) { task_server_terminate(task, "KDC: no memory", false); return NT_STATUS_NO_MEMORY; } setenv("KRB5_TRACE", kdc_trace_file, 1); } /* start it as a child process */ kdc_cmd = lpcfg_mit_kdc_command(task->lp_ctx); subreq = samba_runcmd_send(task, task->event_ctx, timeval_zero(), 1, /* stdout log level */ 0, /* stderr log level */ kdc_cmd, "-n", /* Don't go into background */ #if 0 "-w 2", /* Start two workers */ #endif NULL); if (subreq == NULL) { DEBUG(0, ("Failed to start MIT KDC as child daemon\n")); 
task_server_terminate(task, "Failed to startup mitkdc task", true); return NT_STATUS_INTERNAL_ERROR; } tevent_req_set_callback(subreq, mitkdc_server_done, task); DEBUG(5,("Started krb5kdc process\n")); status = samba_setup_mit_kdc_irpc(task); if (!NT_STATUS_IS_OK(status)) { task_server_terminate(task, "Failed to setup kdc irpc service", true); } DEBUG(5,("Started irpc service for kdc_server\n")); kdc = talloc_zero(task, struct kdc_server); if (kdc == NULL) { task_server_terminate(task, "KDC: Out of memory", true); return NT_STATUS_NO_MEMORY; } talloc_set_destructor(kdc, kdc_server_destroy); kdc->task = task; kdc->base_ctx = talloc_zero(kdc, struct samba_kdc_base_context); if (kdc->base_ctx == NULL) { task_server_terminate(task, "KDC: Out of memory", true); return NT_STATUS_NO_MEMORY; } kdc->base_ctx->ev_ctx = task->event_ctx; kdc->base_ctx->lp_ctx = task->lp_ctx; initialize_krb5_error_table(); code = smb_krb5_init_context(kdc, kdc->task->lp_ctx, &kdc->smb_krb5_context); if (code != 0) { task_server_terminate(task, "KDC: Unable to initialize krb5 context", true); return NT_STATUS_INTERNAL_ERROR; } code = kadm5_init_krb5_context(&kdc->smb_krb5_context->krb5_context); if (code != 0) { task_server_terminate(task, "KDC: Unable to init kadm5 krb5_context", true); return NT_STATUS_INTERNAL_ERROR; } ZERO_STRUCT(config); config.mask = KADM5_CONFIG_REALM; config.realm = discard_const_p(char, lpcfg_realm(kdc->task->lp_ctx)); ret = kadm5_init(kdc->smb_krb5_context->krb5_context, discard_const_p(char, "kpasswd"), NULL, /* pass */ discard_const_p(char, "kpasswd"), &config, KADM5_STRUCT_VERSION, KADM5_API_VERSION_4, NULL, &server_handle); if (ret != 0) { task_server_terminate(task, "KDC: Initialize kadm5", true); return NT_STATUS_INTERNAL_ERROR; } kdc->private_data = server_handle; code = krb5_db_register_keytab(kdc->smb_krb5_context->krb5_context); if (code != 0) { task_server_terminate(task, "KDC: Unable to KDB", true); return NT_STATUS_INTERNAL_ERROR; } kdc->keytab_name = 
talloc_asprintf(kdc, "KDB:"); if (kdc->keytab_name == NULL) { task_server_terminate(task, "KDC: Out of memory", true); return NT_STATUS_NO_MEMORY; } kdc->samdb = samdb_connect(kdc, kdc->task->event_ctx, kdc->task->lp_ctx, system_session(kdc->task->lp_ctx), NULL, 0); if (kdc->samdb == NULL) { task_server_terminate(task, "KDC: Unable to connect to samdb", true); return NT_STATUS_CONNECTION_INVALID; } status = startup_kpasswd_server(kdc, kdc, task->lp_ctx, ifaces); if (!NT_STATUS_IS_OK(status)) { task_server_terminate(task, "KDC: Unable to start kpasswd server", true); return status; } DEBUG(5,("Started kpasswd service for kdc_server\n")); return NT_STATUS_OK; }
1
[ "CWE-288" ]
samba
827dc6a61e6bd01531da0cc8e10f1e54ad400359
99,836,057,865,585,400,000,000,000,000,000,000,000
209
CVE-2022-32744 s4:kdc: Rename keytab_name -> kpasswd_keytab_name This makes explicitly clear the purpose of this keytab. BUG: https://bugzilla.samba.org/show_bug.cgi?id=15074 Signed-off-by: Joseph Sutton <[email protected]> Reviewed-by: Andreas Schneider <[email protected]>
static int handle_triple_fault(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
	kvm_run->exit_reason = KVM_EXIT_SHUTDOWN;
	return 0;
}
0
[ "CWE-20" ]
linux-2.6
16175a796d061833aacfbd9672235f2d2725df65
113,636,860,370,233,430,000,000,000,000,000,000,000
5
KVM: VMX: Don't allow uninhibited access to EFER on i386 vmx_set_msr() does not allow i386 guests to touch EFER, but they can still do so through the default: label in the switch. If they set EFER_LME, they can oops the host. Fix by having EFER access through the normal channel (which will check for EFER_LME) even on i386. Reported-and-tested-by: Benjamin Gilbert <[email protected]> Cc: [email protected] Signed-off-by: Avi Kivity <[email protected]>
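The essence of the fix is routing every EFER write through the checked path instead of letting it fall into the unchecked default: label of the MSR switch. A simplified, hypothetical sketch (not the actual KVM code; the demo assumes a 32-bit host):

```c
#include <stdint.h>

#define MSR_EFER 0xc0000080u
#define EFER_LME (1u << 8)

static int host_is_64bit; /* 0 for this demo: pretend an i386 host */

/* Single checked entry point for EFER writes. */
static int set_efer_checked(uint64_t data)
{
    if (!host_is_64bit && (data & EFER_LME))
        return -1;              /* reject long-mode enable on i386 */
    return 0;
}

static int vmx_set_msr_sketch(uint32_t msr, uint64_t data)
{
    switch (msr) {
    case MSR_EFER:
        return set_efer_checked(data);
    default:
        /* the bug was an unchecked write reachable here; EFER must be
         * matched above, never handled by the default branch */
        return 0;
    }
}
```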
int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
			     unsigned long ctrl)
{
	switch (which) {
	case PR_SPEC_STORE_BYPASS:
		return ssb_prctl_set(task, ctrl);
	case PR_SPEC_INDIRECT_BRANCH:
		return ib_prctl_set(task, ctrl);
	default:
		return -ENODEV;
	}
}
0
[]
linux
a2059825986a1c8143fd6698774fa9d83733bb11
28,698,905,945,959,380,000,000,000,000,000,000,000
12
x86/speculation: Enable Spectre v1 swapgs mitigations

The previous commit added macro calls in the entry code which mitigate the Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are enabled. Enable those features where applicable.

The mitigations may be disabled with "nospectre_v1" or "mitigations=off".

There are different features which can affect the risk of attack:

- When FSGSBASE is enabled, unprivileged users are able to place any value in GS, using the wrgsbase instruction. This means they can write a GS value which points to any value in kernel space, which can be useful with the following gadget in an interrupt/exception/NMI handler:

	if (coming from user space)
		swapgs
	mov %gs:<percpu_offset>, %reg1
	// dependent load or store based on the value of %reg
	// for example: mov %(reg1), %reg2

  If an interrupt is coming from user space, and the entry code speculatively skips the swapgs (due to user branch mistraining), it may speculatively execute the GS-based load and a subsequent dependent load or store, exposing the kernel data to an L1 side channel leak.

  Note that, on Intel, a similar attack exists in the above gadget when coming from kernel space, if the swapgs gets speculatively executed to switch back to the user GS. On AMD, this variant isn't possible because swapgs is serializing with respect to future GS-based accesses.

  NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case doesn't exist quite yet.

- When FSGSBASE is disabled, the issue is mitigated somewhat because unprivileged users must use prctl(ARCH_SET_GS) to set GS, which restricts GS values to user space addresses only. That means the gadget would need an additional step, since the target kernel address needs to be read from user space first. Something like:

	if (coming from user space)
		swapgs
	mov %gs:<percpu_offset>, %reg1
	mov (%reg1), %reg2
	// dependent load or store based on the value of %reg2
	// for example: mov %(reg2), %reg3

  It's difficult to audit for this gadget in all the handlers, so while there are no known instances of it, it's entirely possible that it exists somewhere (or could be introduced in the future). Without tooling to analyze all such code paths, consider it vulnerable.

  Effects of SMAP on the !FSGSBASE case:

  - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not susceptible to Meltdown), the kernel is prevented from speculatively reading user space memory, even L1 cached values. This effectively disables the !FSGSBASE attack vector.

  - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP still prevents the kernel from speculatively reading user space memory. But it does *not* prevent the kernel from reading the user value from L1, if it has already been cached. This is probably only a small hurdle for an attacker to overcome.

Thanks to Dave Hansen for contributing the speculative_smap() function.

Thanks to Andrew Cooper for providing the inside scoop on whether swapgs is serializing on AMD.

[ tglx: Fixed the USER fence decision and polished the comment as suggested by Dave Hansen ]

Signed-off-by: Josh Poimboeuf <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
static int scan_prefix(compiler_common *common, PCRE2_SPTR cc, fast_forward_char_data *chars, int max_chars, sljit_u32 *rec_count) { /* Recursive function, which scans prefix literals. */ BOOL last, any, class, caseless; int len, repeat, len_save, consumed = 0; sljit_u32 chr; /* Any unicode character. */ sljit_u8 *bytes, *bytes_end, byte; PCRE2_SPTR alternative, cc_save, oc; #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8 PCRE2_UCHAR othercase[4]; #elif defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 16 PCRE2_UCHAR othercase[2]; #else PCRE2_UCHAR othercase[1]; #endif repeat = 1; while (TRUE) { if (*rec_count == 0) return 0; (*rec_count)--; last = TRUE; any = FALSE; class = FALSE; caseless = FALSE; switch (*cc) { case OP_CHARI: caseless = TRUE; /* Fall through */ case OP_CHAR: last = FALSE; cc++; break; case OP_SOD: case OP_SOM: case OP_SET_SOM: case OP_NOT_WORD_BOUNDARY: case OP_WORD_BOUNDARY: case OP_EODN: case OP_EOD: case OP_CIRC: case OP_CIRCM: case OP_DOLL: case OP_DOLLM: /* Zero width assertions. 
*/ cc++; continue; case OP_ASSERT: case OP_ASSERT_NOT: case OP_ASSERTBACK: case OP_ASSERTBACK_NOT: cc = bracketend(cc); continue; case OP_PLUSI: case OP_MINPLUSI: case OP_POSPLUSI: caseless = TRUE; /* Fall through */ case OP_PLUS: case OP_MINPLUS: case OP_POSPLUS: cc++; break; case OP_EXACTI: caseless = TRUE; /* Fall through */ case OP_EXACT: repeat = GET2(cc, 1); last = FALSE; cc += 1 + IMM2_SIZE; break; case OP_QUERYI: case OP_MINQUERYI: case OP_POSQUERYI: caseless = TRUE; /* Fall through */ case OP_QUERY: case OP_MINQUERY: case OP_POSQUERY: len = 1; cc++; #ifdef SUPPORT_UNICODE if (common->utf && HAS_EXTRALEN(*cc)) len += GET_EXTRALEN(*cc); #endif max_chars = scan_prefix(common, cc + len, chars, max_chars, rec_count); if (max_chars == 0) return consumed; last = FALSE; break; case OP_KET: cc += 1 + LINK_SIZE; continue; case OP_ALT: cc += GET(cc, 1); continue; case OP_ONCE: case OP_BRA: case OP_BRAPOS: case OP_CBRA: case OP_CBRAPOS: alternative = cc + GET(cc, 1); while (*alternative == OP_ALT) { max_chars = scan_prefix(common, alternative + 1 + LINK_SIZE, chars, max_chars, rec_count); if (max_chars == 0) return consumed; alternative += GET(alternative, 1); } if (*cc == OP_CBRA || *cc == OP_CBRAPOS) cc += IMM2_SIZE; cc += 1 + LINK_SIZE; continue; case OP_CLASS: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8 if (common->utf && !is_char7_bitset((const sljit_u8 *)(cc + 1), FALSE)) return consumed; #endif class = TRUE; break; case OP_NCLASS: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32 if (common->utf) return consumed; #endif class = TRUE; break; #if defined SUPPORT_UNICODE || PCRE2_CODE_UNIT_WIDTH != 8 case OP_XCLASS: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32 if (common->utf) return consumed; #endif any = TRUE; cc += GET(cc, 1); break; #endif case OP_DIGIT: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8 if (common->utf && !is_char7_bitset((const sljit_u8 *)common->ctypes - cbit_length + cbit_digit, FALSE)) return 
consumed; #endif any = TRUE; cc++; break; case OP_WHITESPACE: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8 if (common->utf && !is_char7_bitset((const sljit_u8 *)common->ctypes - cbit_length + cbit_space, FALSE)) return consumed; #endif any = TRUE; cc++; break; case OP_WORDCHAR: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8 if (common->utf && !is_char7_bitset((const sljit_u8 *)common->ctypes - cbit_length + cbit_word, FALSE)) return consumed; #endif any = TRUE; cc++; break; case OP_NOT: case OP_NOTI: cc++; /* Fall through. */ case OP_NOT_DIGIT: case OP_NOT_WHITESPACE: case OP_NOT_WORDCHAR: case OP_ANY: case OP_ALLANY: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32 if (common->utf) return consumed; #endif any = TRUE; cc++; break; #ifdef SUPPORT_UNICODE case OP_NOTPROP: case OP_PROP: #if PCRE2_CODE_UNIT_WIDTH != 32 if (common->utf) return consumed; #endif any = TRUE; cc += 1 + 2; break; #endif case OP_TYPEEXACT: repeat = GET2(cc, 1); cc += 1 + IMM2_SIZE; continue; case OP_NOTEXACT: case OP_NOTEXACTI: #if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32 if (common->utf) return consumed; #endif any = TRUE; repeat = GET2(cc, 1); cc += 1 + IMM2_SIZE + 1; break; default: return consumed; } if (any) { do { chars->count = 255; consumed++; if (--max_chars == 0) return consumed; chars++; } while (--repeat > 0); repeat = 1; continue; } if (class) { bytes = (sljit_u8*) (cc + 1); cc += 1 + 32 / sizeof(PCRE2_UCHAR); switch (*cc) { case OP_CRSTAR: case OP_CRMINSTAR: case OP_CRPOSSTAR: case OP_CRQUERY: case OP_CRMINQUERY: case OP_CRPOSQUERY: max_chars = scan_prefix(common, cc + 1, chars, max_chars, rec_count); if (max_chars == 0) return consumed; break; default: case OP_CRPLUS: case OP_CRMINPLUS: case OP_CRPOSPLUS: break; case OP_CRRANGE: case OP_CRMINRANGE: case OP_CRPOSRANGE: repeat = GET2(cc, 1); if (repeat <= 0) return consumed; break; } do { if (bytes[31] & 0x80) chars->count = 255; else if (chars->count != 255) { bytes_end = bytes + 
32; chr = 0; do { byte = *bytes++; SLJIT_ASSERT((chr & 0x7) == 0); if (byte == 0) chr += 8; else { do { if ((byte & 0x1) != 0) add_prefix_char(chr, chars, TRUE); byte >>= 1; chr++; } while (byte != 0); chr = (chr + 7) & ~7; } } while (chars->count != 255 && bytes < bytes_end); bytes = bytes_end - 32; } consumed++; if (--max_chars == 0) return consumed; chars++; } while (--repeat > 0); switch (*cc) { case OP_CRSTAR: case OP_CRMINSTAR: case OP_CRPOSSTAR: return consumed; case OP_CRQUERY: case OP_CRMINQUERY: case OP_CRPOSQUERY: cc++; break; case OP_CRRANGE: case OP_CRMINRANGE: case OP_CRPOSRANGE: if (GET2(cc, 1) != GET2(cc, 1 + IMM2_SIZE)) return consumed; cc += 1 + 2 * IMM2_SIZE; break; } repeat = 1; continue; } len = 1; #ifdef SUPPORT_UNICODE if (common->utf && HAS_EXTRALEN(*cc)) len += GET_EXTRALEN(*cc); #endif if (caseless && char_has_othercase(common, cc)) { #ifdef SUPPORT_UNICODE if (common->utf) { GETCHAR(chr, cc); if ((int)PRIV(ord2utf)(char_othercase(common, chr), othercase) != len) return consumed; } else #endif { chr = *cc; othercase[0] = TABLE_GET(chr, common->fcc, chr); } } else { caseless = FALSE; othercase[0] = 0; /* Stops compiler warning - PH */ } len_save = len; cc_save = cc; while (TRUE) { oc = othercase; do { len--; consumed++; chr = *cc; add_prefix_char(*cc, chars, len == 0); if (caseless) add_prefix_char(*oc, chars, len == 0); if (--max_chars == 0) return consumed; chars++; cc++; oc++; } while (len > 0); if (--repeat == 0) break; len = len_save; cc = cc_save; } repeat = 1; if (last) return consumed; } }
0
[ "CWE-125" ]
php-src
8947fd9e9fdce87cd6c59817b1db58e789538fe9
282,745,801,254,307,700,000,000,000,000,000,000,000
401
Fix #78338: Array cross-border reading in PCRE

We backport r1092 from pcre2.
s64 __ktime_divns(const ktime_t kt, s64 div)
{
	int sft = 0;
	s64 dclc;
	u64 tmp;

	dclc = ktime_to_ns(kt);
	tmp = dclc < 0 ? -dclc : dclc;

	/* Make sure the divisor is less than 2^32: */
	while (div >> 32) {
		sft++;
		div >>= 1;
	}
	tmp >>= sft;
	do_div(tmp, (unsigned long) div);
	return dclc < 0 ? -tmp : tmp;
}
0
[ "CWE-200" ]
tip
dfb4357da6ddbdf57d583ba64361c9d792b0e0b1
108,102,182,122,729,370,000,000,000,000,000,000,000
18
time: Remove CONFIG_TIMER_STATS Currently CONFIG_TIMER_STATS exposes process information across namespaces: kernel/time/timer_list.c print_timer(): SEQ_printf(m, ", %s/%d", tmp, timer->start_pid); /proc/timer_list: #11: <0000000000000000>, hrtimer_wakeup, S:01, do_nanosleep, cron/2570 Given that the tracer can give the same information, this patch entirely removes CONFIG_TIMER_STATS. Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Kees Cook <[email protected]> Acked-by: John Stultz <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: [email protected] Cc: Lai Jiangshan <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Xing Gao <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Jessica Frazelle <[email protected]> Cc: [email protected] Cc: Nicolas Iooss <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Cochran <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Michal Marek <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Olof Johansson <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: Arjan van de Ven <[email protected]> Link: http://lkml.kernel.org/r/20170208192659.GA32582@beast Signed-off-by: Thomas Gleixner <[email protected]>
inline int length() const { return cur_len_; }
0
[]
envoy
3b5acb2f43548862dadb243de7cf3994986a8e04
104,873,615,213,707,430,000,000,000,000,000,000,000
1
http, url: Bring back chromium_url and http_parser_parse_url (#198)

* Revert GURL as HTTP URL parser utility

This reverts:
1. commit c9c4709c844b90b9bb2935d784a428d667c9df7d
2. commit d828958b591a6d79f4b5fa608ece9962b7afbe32
3. commit 2d69e30c51f2418faf267aaa6c1126fce9948c62

Signed-off-by: Dhi Aurrahman <[email protected]>
unsigned int gg_login_hash(const unsigned char *password, unsigned int seed)
{
	unsigned int x, y, z;

	y = seed;

	for (x = 0; *password; password++) {
		x = (x & 0xffffff00) | *password;
		y ^= x;
		y += x;
		x <<= 8;
		y ^= x;
		x <<= 8;
		y -= x;
		x <<= 8;
		y ^= x;

		z = y & 0x1F;
		y = (y << z) | (y >> (32 - z));
	}

	return y;
}
0
[ "CWE-310" ]
libgadu
23644f1fb8219031b3cac93289a588b05f90226b
111,678,130,739,634,380,000,000,000,000,000,000,000
23
Fix for limiting the description length.
LoadCommand* Binary::add(const SegmentCommand& segment) { /* * To add a new segment in a Mach-O file, we need to: * * 1. Allocate space for a new Load command: LC_SEGMENT_64 / LC_SEGMENT * which must include the sections * 2. Allocate space for the content of the provided segment * * For #1, the logic is to shift all the content after the end of the load command table. * This modification is described in doc/sphinx/tutorials/11_macho_modification.rst. * * For #2, the easiest way is to place the content at the end of the Mach-O file and * to make the LC_SEGMENT point to this area. It works as expected as long as * the binary does not need to be signed. * * If the binary has to be signed, codesign and the underlying Apple libraries * enforce that there is not data after the __LINKEDIT segment, otherwise we get * this kind of error: "main executable failed strict validation". * To comply with this check, we can shift the __LINKEDIT segment (c.f. ``shift_linkedit(...)``) * such as the data of the new segment are located before __LINKEDIT. * Nevertheless, we can't shift __LINKEDIT by an arbitrary value. For ARM and ARM64, * ld/dyld enforces a segment alignment of "4 * 4096" as coded in ``Options::reconfigureDefaults`` * of ``ld64-609/src/ld/Option.cpp``: * * ```cpp * ... 
* <rdar://problem/13070042> Only third party apps should have 16KB page segments by default * if (fEncryptable) { * if (fSegmentAlignment == 4096) * fSegmentAlignment = 4096*4; * } * * // <rdar://problem/12258065> ARM64 needs 16KB page size for user land code * // <rdar://problem/15974532> make armv7[s] use 16KB pages in user land code for iOS 8 or later * if (fArchitecture == CPU_TYPE_ARM64 || (fArchitecture == CPU_TYPE_ARM) ) { * fSegmentAlignment = 4096*4; * } * ``` * Therefore, we must shift __LINKEDIT by at least 4 * 0x1000 for Mach-O files targeting ARM */ LIEF_DEBUG("Adding the new segment '{}' ({} bytes)", segment.name(), segment.content().size()); const uint32_t alignment = page_size(); const uint64_t new_fsize = align(segment.content().size(), alignment); SegmentCommand new_segment = segment; if (new_segment.file_size() == 0) { new_segment.file_size(new_fsize); new_segment.content_resize(new_fsize); } if (new_segment.virtual_size() == 0) { const uint64_t new_size = align(new_segment.file_size(), alignment); new_segment.virtual_size(new_size); } if (segment.sections().size() > 0) { new_segment.nb_sections_ = segment.sections().size(); } if (is64_) { new_segment.command(LOAD_COMMAND_TYPES::LC_SEGMENT_64); size_t needed_size = sizeof(details::segment_command_64); needed_size += new_segment.numberof_sections() * sizeof(details::section_64); new_segment.size(needed_size); } else { new_segment.command(LOAD_COMMAND_TYPES::LC_SEGMENT); size_t needed_size = sizeof(details::segment_command_32); needed_size += new_segment.numberof_sections() * sizeof(details::section_32); new_segment.size(needed_size); } LIEF_DEBUG(" -> sizeof(LC_SEGMENT): {}", new_segment.size()); // Insert the segment before __LINKEDIT const auto it_linkedit = std::find_if(std::begin(commands_), std::end(commands_), [] (const std::unique_ptr<LoadCommand>& cmd) { if (!SegmentCommand::classof(cmd.get())) { return false; } return cmd->as<SegmentCommand>()->name() == "__LINKEDIT"; }); const bool 
has_linkedit = it_linkedit != std::end(commands_); size_t pos = std::distance(std::begin(commands_), it_linkedit); LIEF_DEBUG(" -> index: {}", pos); auto* new_cmd = add(new_segment, pos); if (new_cmd == nullptr) { LIEF_WARN("Fail to insert new '{}' segment", segment.name()); return nullptr; } auto* segment_added = new_cmd->as<SegmentCommand>(); if (!has_linkedit) { /* If there are not __LINKEDIT segment we can point the Segment's content to the EOF * NOTE(romain): I don't know if a binary without a __LINKEDIT segment exists */ range_t new_va_ranges = this->va_ranges(); range_t new_off_ranges = off_ranges(); if (segment.virtual_address() == 0 && segment_added->virtual_size() != 0) { const uint64_t new_va = align(new_va_ranges.end, alignment); segment_added->virtual_address(new_va); size_t current_va = segment_added->virtual_address(); for (Section& section : segment_added->sections()) { section.virtual_address(current_va); current_va += section.size(); } } if (segment.file_offset() == 0 && segment_added->virtual_size() != 0) { const uint64_t new_offset = align(new_off_ranges.end, alignment); segment_added->file_offset(new_offset); size_t current_offset = new_offset; for (Section& section : segment_added->sections()) { section.offset(current_offset); current_offset += section.size(); } } refresh_seg_offset(); return segment_added; } uint64_t lnk_offset = 0; uint64_t lnk_va = 0; if (const SegmentCommand* lnk = get_segment("__LINKEDIT")) { lnk_offset = lnk->file_offset(); lnk_va = lnk->virtual_address(); } // Make space for the content of the new segment shift_linkedit(new_fsize); LIEF_DEBUG(" -> offset : 0x{:06x}", lnk_offset); LIEF_DEBUG(" -> virtual address: 0x{:06x}", lnk_va); segment_added->virtual_address(lnk_va); segment_added->virtual_size(segment_added->virtual_size()); size_t current_va = segment_added->virtual_address(); for (Section& section : segment_added->sections()) { section.virtual_address(current_va); current_va += section.size(); } 
segment_added->file_offset(lnk_offset); size_t current_offset = lnk_offset; for (Section& section : segment_added->sections()) { section.offset(current_offset); current_offset += section.size(); } refresh_seg_offset(); return segment_added; }
0
[ "CWE-703" ]
LIEF
7acf0bc4224081d4f425fcc8b2e361b95291d878
11,438,396,368,192,998,000,000,000,000,000,000,000
157
Resolve #764
TPM2B_Marshal(TPM2B *source, BYTE **buffer, INT32 *size)
{
    UINT16 written = 0;
    written += UINT16_Marshal(&(source->size), buffer, size);
    written += Array_Marshal(source->buffer, source->size, buffer, size);
    return written;
}
1
[ "CWE-787" ]
libtpms
3ef9b26cb9f28bd64d738bff9505a20d4eb56acd
273,534,893,471,155,440,000,000,000,000,000,000,000
7
tpm2: Add maxSize parameter to TPM2B_Marshal for sanity checks Add maxSize parameter to TPM2B_Marshal and assert on it checking the size of the data intended to be marshaled versus the maximum buffer size. Signed-off-by: Stefan Berger <[email protected]>
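The added sanity check can be sketched as follows, with simplified types and a hypothetical helper name (the real TPM2B_Marshal works with BYTE **buffer / INT32 *size cursors rather than a flat destination):

```c
#include <stdint.h>

/* Illustrative maxSize-style check: marshaling a TPM2B must never
 * write more than the destination buffer can hold. The size field is
 * a big-endian UINT16 followed by size bytes of payload. */
static int tpm2b_marshal_checked(const uint8_t *src, uint16_t size,
                                 uint8_t *dst, uint32_t max_size)
{
    if ((uint32_t)size + 2u > max_size)   /* 2 bytes for the size field */
        return -1;                        /* would overflow destination */
    dst[0] = (uint8_t)(size >> 8);
    dst[1] = (uint8_t)(size & 0xff);
    for (uint16_t i = 0; i < size; i++)
        dst[2 + i] = src[i];
    return (int)(2 + size);               /* bytes written */
}

static uint8_t tpm_out[8];                /* tiny destination for demo */
```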
struct soc_device *soc_device_register(struct soc_device_attribute *soc_dev_attr)
{
	struct soc_device *soc_dev;
	const struct attribute_group **soc_attr_groups;
	int ret;

	if (!soc_bus_type.p) {
		if (early_soc_dev_attr)
			return ERR_PTR(-EBUSY);
		early_soc_dev_attr = soc_dev_attr;
		return NULL;
	}

	soc_dev = kzalloc(sizeof(*soc_dev), GFP_KERNEL);
	if (!soc_dev) {
		ret = -ENOMEM;
		goto out1;
	}

	soc_attr_groups = kcalloc(3, sizeof(*soc_attr_groups), GFP_KERNEL);
	if (!soc_attr_groups) {
		ret = -ENOMEM;
		goto out2;
	}
	soc_attr_groups[0] = &soc_attr_group;
	soc_attr_groups[1] = soc_dev_attr->custom_attr_group;

	/* Fetch a unique (reclaimable) SOC ID. */
	ret = ida_simple_get(&soc_ida, 0, 0, GFP_KERNEL);
	if (ret < 0)
		goto out3;
	soc_dev->soc_dev_num = ret;

	soc_dev->attr = soc_dev_attr;
	soc_dev->dev.bus = &soc_bus_type;
	soc_dev->dev.groups = soc_attr_groups;
	soc_dev->dev.release = soc_release;

	dev_set_name(&soc_dev->dev, "soc%d", soc_dev->soc_dev_num);

	ret = device_register(&soc_dev->dev);
	if (ret) {
		put_device(&soc_dev->dev);
		return ERR_PTR(ret);
	}

	return soc_dev;

out3:
	kfree(soc_attr_groups);
out2:
	kfree(soc_dev);
out1:
	return ERR_PTR(ret);
}
0
[ "CWE-787" ]
linux
aa838896d87af561a33ecefea1caa4c15a68bc47
95,416,730,785,952,890,000,000,000,000,000,000,000
55
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions

Convert the various sprintf family calls in sysfs device show functions to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.

Done with:

$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .

And cocci script:

$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	return
-	sprintf(buf,
+	sysfs_emit(buf,
	...);
	...>
}

@@
identifier d_show;
identifier dev, attr, buf;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	return
-	snprintf(buf, PAGE_SIZE,
+	sysfs_emit(buf,
	...);
	...>
}

@@
identifier d_show;
identifier dev, attr, buf;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	return
-	scnprintf(buf, PAGE_SIZE,
+	sysfs_emit(buf,
	...);
	...>
}

@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	return
-	strcpy(buf, chr);
+	sysfs_emit(buf, chr);
	...>
}

@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	len =
-	sprintf(buf,
+	sysfs_emit(buf,
	...);
	...>
	return len;
}

@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	len =
-	snprintf(buf, PAGE_SIZE,
+	sysfs_emit(buf,
	...);
	...>
	return len;
}

@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	len =
-	scnprintf(buf, PAGE_SIZE,
+	sysfs_emit(buf,
	...);
	...>
	return len;
}

@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
-	len += scnprintf(buf + len, PAGE_SIZE - len,
+	len += sysfs_emit_at(buf, len,
	...);
	...>
	return len;
}

@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	...
-	strcpy(buf, chr);
-	return strlen(buf);
+	return sysfs_emit(buf, chr);
}

Signed-off-by: Joe Perches <[email protected]>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>
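The sysfs_emit/sysfs_emit_at helpers named in the message above are, at heart, PAGE_SIZE-bounded formatters. Below is a minimal user-space sketch of that contract; the `_sketch` names are invented for this demo, and the real kernel helpers also WARN when the buffer is not page-aligned, which is omitted here.

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Sketch of sysfs_emit(): format into buf, never past PAGE_SIZE,
 * and return the number of characters actually written. */
static int sysfs_emit_sketch(char *buf, const char *fmt, ...)
{
    va_list args;
    int len;

    va_start(args, fmt);
    len = vsnprintf(buf, PAGE_SIZE, fmt, args);
    va_end(args);

    /* vsnprintf reports the would-be length; clamp like scnprintf does */
    return len >= PAGE_SIZE ? PAGE_SIZE - 1 : len;
}

/* Sketch of sysfs_emit_at(): append at an offset, still page-bounded. */
static int sysfs_emit_at_sketch(char *buf, int at, const char *fmt, ...)
{
    va_list args;
    int len;

    if (at < 0 || at >= PAGE_SIZE)
        return 0;

    va_start(args, fmt);
    len = vsnprintf(buf + at, PAGE_SIZE - at, fmt, args);
    va_end(args);

    return len >= PAGE_SIZE - at ? PAGE_SIZE - at - 1 : len;
}
```

The point of the conversion is that callers can no longer pass the wrong bound: the PAGE_SIZE limit lives inside the helper.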
static EVP_MD * php_openssl_get_evp_md_from_algo(zend_long algo) { /* {{{ */ EVP_MD *mdtype; switch (algo) { case OPENSSL_ALGO_SHA1: mdtype = (EVP_MD *) EVP_sha1(); break; case OPENSSL_ALGO_MD5: mdtype = (EVP_MD *) EVP_md5(); break; case OPENSSL_ALGO_MD4: mdtype = (EVP_MD *) EVP_md4(); break; #ifdef HAVE_OPENSSL_MD2_H case OPENSSL_ALGO_MD2: mdtype = (EVP_MD *) EVP_md2(); break; #endif #if PHP_OPENSSL_API_VERSION < 0x10100 case OPENSSL_ALGO_DSS1: mdtype = (EVP_MD *) EVP_dss1(); break; #endif case OPENSSL_ALGO_SHA224: mdtype = (EVP_MD *) EVP_sha224(); break; case OPENSSL_ALGO_SHA256: mdtype = (EVP_MD *) EVP_sha256(); break; case OPENSSL_ALGO_SHA384: mdtype = (EVP_MD *) EVP_sha384(); break; case OPENSSL_ALGO_SHA512: mdtype = (EVP_MD *) EVP_sha512(); break; case OPENSSL_ALGO_RMD160: mdtype = (EVP_MD *) EVP_ripemd160(); break; default: return NULL; break; } return mdtype; }
0
[ "CWE-326" ]
php-src
0216630ea2815a5789a24279a1211ac398d4de79
121,270,264,812,658,000,000,000,000,000,000,000,000
44
Fix bug #79601 (Wrong ciphertext/tag in AES-CCM encryption for a 12 bytes IV)
static int kvp_key_add_or_modify(int pool, __u8 *key, int key_size, __u8 *value, int value_size) { int i; int num_records; struct kvp_record *record; int num_blocks; if ((key_size > HV_KVP_EXCHANGE_MAX_KEY_SIZE) || (value_size > HV_KVP_EXCHANGE_MAX_VALUE_SIZE)) return 1; /* * First update the in-memory state. */ kvp_update_mem_state(pool); num_records = kvp_file_info[pool].num_records; record = kvp_file_info[pool].records; num_blocks = kvp_file_info[pool].num_blocks; for (i = 0; i < num_records; i++) { if (memcmp(key, record[i].key, key_size)) continue; /* * Found a match; just update the value - * this is the modify case. */ memcpy(record[i].value, value, value_size); kvp_update_file(pool); return 0; } /* * Need to add a new entry; */ if (num_records == (ENTRIES_PER_BLOCK * num_blocks)) { /* Need to allocate a larger array for reg entries. */ record = realloc(record, sizeof(struct kvp_record) * ENTRIES_PER_BLOCK * (num_blocks + 1)); if (record == NULL) return 1; kvp_file_info[pool].num_blocks++; } memcpy(record[i].value, value, value_size); memcpy(record[i].key, key, key_size); kvp_file_info[pool].records = record; kvp_file_info[pool].num_records++; kvp_update_file(pool); return 0; }
0
[]
char-misc
95a69adab9acfc3981c504737a2b6578e4d846ef
273,665,752,814,809,250,000,000,000,000,000,000,000
53
tools: hv: Netlink source address validation allows DoS The source code without this patch caused hypervkvpd to exit when it processed a spoofed Netlink packet which has been sent from an untrusted local user. Now Netlink messages with a non-zero nl_pid source address are ignored and a warning is printed into the syslog. Signed-off-by: Tomas Hozza <[email protected]> Acked-by: K. Y. Srinivasan <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
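The fix described above reduces to one check: a Netlink datagram whose source address carries a non-zero nl_pid did not come from the kernel and must be ignored. A hedged, stand-alone sketch of that check follows; the struct and function names are invented (the real daemon uses struct sockaddr_nl from <linux/netlink.h> and logs via syslog).

```c
#include <stdio.h>

/* Mirrors the relevant part of struct sockaddr_nl; the real header
 * also carries nl_family and nl_pad. */
struct nl_addr_sketch {
    unsigned int nl_pid;    /* 0 == message originated in the kernel */
    unsigned int nl_groups;
};

/* Return 1 if the datagram should be processed, 0 if it must be
 * dropped (spoofed: a local user process was the sender). */
static int netlink_source_ok(const struct nl_addr_sketch *src)
{
    if (src->nl_pid != 0) {
        /* the daemon writes a syslog warning; stderr stands in here */
        fprintf(stderr, "warning: ignoring packet from pid %u\n",
                src->nl_pid);
        return 0;
    }
    return 1;
}
```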
Stats::Scope& listenerScope() override { return *scope_; }
0
[ "CWE-400" ]
envoy
dfddb529e914d794ac552e906b13d71233609bf7
211,039,481,440,939,620,000,000,000,000,000,000,000
1
listener: Add configurable accepted connection limits (#153) Add support for per-listener limits on accepted connections. Signed-off-by: Tony Allen <[email protected]>
ListenerFactoryContextBaseImpl& parentFactoryContext() { return *listener_factory_context_base_; }
0
[ "CWE-400" ]
envoy
dfddb529e914d794ac552e906b13d71233609bf7
185,905,105,289,410,300,000,000,000,000,000,000,000
1
listener: Add configurable accepted connection limits (#153) Add support for per-listener limits on accepted connections. Signed-off-by: Tony Allen <[email protected]>
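The per-listener accepted-connection limit described above can be sketched as a simple counter gate. All names below are invented for illustration; Envoy's actual implementation wires this through resource limits and overflow stats.

```c
/* Toy per-listener accept limiter: admit a connection only while the
 * open count is below the configured cap. */
struct listener_sketch {
    unsigned int open_connections;
    unsigned int max_connections;
};

static int listener_try_accept(struct listener_sketch *l)
{
    if (l->open_connections >= l->max_connections)
        return 0;          /* over limit: reject/close immediately */
    l->open_connections++;
    return 1;
}

static void listener_on_close(struct listener_sketch *l)
{
    if (l->open_connections > 0)
        l->open_connections--;
}
```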
static void *AcquireBlock(size_t size) { size_t i; void *block; /* Find free block. */ size=(size_t) (size+sizeof(size_t)+6*sizeof(size_t)-1) & -(4U*sizeof(size_t)); i=AllocationPolicy(size); block=memory_pool.blocks[i]; while ((block != (void *) NULL) && (SizeOfBlock(block) < size)) block=NextBlockInList(block); if (block == (void *) NULL) { i++; while (memory_pool.blocks[i] == (void *) NULL) i++; block=memory_pool.blocks[i]; if (i >= MaxBlocks) return((void *) NULL); } assert((*BlockHeader(NextBlock(block)) & PreviousBlockBit) == 0); assert(SizeOfBlock(block) >= size); RemoveFreeBlock(block,AllocationPolicy(SizeOfBlock(block))); if (SizeOfBlock(block) > size) { size_t blocksize; void *next; /* Split block. */ next=(char *) block+size; blocksize=SizeOfBlock(block)-size; *BlockHeader(next)=blocksize; *BlockFooter(next,blocksize)=blocksize; InsertFreeBlock(next,AllocationPolicy(blocksize)); *BlockHeader(block)=size | (*BlockHeader(block) & ~SizeMask); } assert(size == SizeOfBlock(block)); *BlockHeader(NextBlock(block))|=PreviousBlockBit; memory_pool.allocation+=size; return(block); }
0
[ "CWE-369" ]
ImageMagick
70aa86f5d5d8aa605a918ed51f7574f433a18482
249,819,838,563,059,980,000,000,000,000,000,000,000
51
possible divide by zero + clear buffers
static void route4_bind_class(void *fh, u32 classid, unsigned long cl, void *q,
			      unsigned long base)
{
	struct route4_filter *f = fh;

	if (f && f->res.classid == classid) {
		if (cl)
			__tcf_bind_filter(q, &f->res, base);
		else
			__tcf_unbind_filter(q, &f->res);
	}
}
0
[ "CWE-416", "CWE-200" ]
linux
ef299cc3fa1a9e1288665a9fdc8bff55629fd359
204,378,942,938,555,720,000,000,000,000,000,000,000
12
net_sched: cls_route: remove the right filter from hashtable route4_change() allocates a new filter and copies values from the old one. After the new filter is inserted into the hash table, the old filter should be removed and freed, as the final step of the update. However, the current code mistakenly removes the new one. This looks apparently wrong to me, and it causes double "free" and use-after-free too, as reported by syzbot. Reported-and-tested-by: [email protected] Reported-and-tested-by: [email protected] Reported-and-tested-by: [email protected] Fixes: 1109c00547fc ("net: sched: RCU cls_route") Cc: Jamal Hadi Salim <[email protected]> Cc: Jiri Pirko <[email protected]> Cc: John Fastabend <[email protected]> Signed-off-by: Cong Wang <[email protected]> Signed-off-by: David S. Miller <[email protected]>
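The use-after-free fixed above came from unlinking the newly inserted filter instead of the old one. The correct shape of such a replace-then-free update can be sketched on a plain singly linked list; the names are invented (cls_route actually uses RCU-protected hash buckets).

```c
#include <stdlib.h>

/* Minimal stand-in for a hashtable bucket: a singly linked list. */
struct filt {
    int handle;
    struct filt *next;
};

/* Replace the node with a matching handle by @nf and return the
 * detached old node, so the caller frees *that* one -- freeing the
 * node just inserted is exactly the double-free/use-after-free the
 * commit above fixes. */
static struct filt *replace_filter(struct filt **head, struct filt *nf)
{
    struct filt **pp;

    for (pp = head; *pp; pp = &(*pp)->next) {
        if ((*pp)->handle == nf->handle) {
            struct filt *old = *pp;

            nf->next = old->next;
            *pp = nf;           /* unlink old, link nf in its place */
            old->next = NULL;
            return old;         /* caller frees the *old* filter */
        }
    }
    nf->next = *head;           /* no match: plain insert at head */
    *head = nf;
    return NULL;
}
```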
void sysfs_slab_release(struct kmem_cache *s)
{
	if (slab_state >= FULL)
		kobject_put(&s->kobj);
}
0
[]
linux
fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8
320,799,860,075,855,900,000,000,000,000,000,000,000
5
mm: slub: add missing TID bump in kmem_cache_alloc_bulk() When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu freelist of length M, and N > M > 0, it will first remove the M elements from the percpu freelist, then call ___slab_alloc() to allocate the next element and repopulate the percpu freelist. ___slab_alloc() can re-enable IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc() to properly commit the freelist head change. Fix it by unconditionally bumping c->tid when entering the slowpath. Cc: [email protected] Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy") Signed-off-by: Jann Horn <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
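The TID protocol the fix above relies on can be modeled with a plain counter: an allocation snapshots the tid and may only commit if it has not moved, so any path that can let other contexts touch the freelist must bump it first. A toy sketch follows; the real code uses per-CPU data and this_cpu_cmpxchg_double.

```c
/* Toy model of the SLUB percpu tid protocol. */
struct cpu_slab_sketch {
    unsigned long tid;
};

static void enter_slowpath(struct cpu_slab_sketch *c)
{
    /* the fix: bump tid *before* anything that may re-enable IRQs */
    c->tid++;
}

static int try_commit(struct cpu_slab_sketch *c, unsigned long snap_tid)
{
    /* models this_cpu_cmpxchg_double(): fails if tid moved on */
    return c->tid == snap_tid;
}
```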
void Http2Session::Goaway(uint32_t code, int32_t lastStreamID,
                          const uint8_t* data, size_t len) {
  if (is_destroyed())
    return;

  Http2Scope h2scope(this);
  // the last proc stream id is the most recently created Http2Stream.
  if (lastStreamID <= 0)
    lastStreamID = nghttp2_session_get_last_proc_stream_id(session_.get());
  Debug(this, "submitting goaway");
  nghttp2_submit_goaway(session_.get(), NGHTTP2_FLAG_NONE,
                        lastStreamID, code, data, len);
}
0
[ "CWE-416" ]
node
a3c33d4ce78f74d1cf1765704af5b427aa3840a6
148,322,487,850,715,660,000,000,000,000,000,000,000
15
http2: update handling of rst_stream with error code NGHTTP2_CANCEL

The PR updates the handling of rst_stream frames and adds all streams to the pending list on receiving rst frames with the error code NGHTTP2_CANCEL. The changes remove the dependency on the stream state that may allow bypassing the checks in certain cases. I think a better solution is to delay streams in all cases if rst_stream is received for the cancel events. The rst_stream frames can be received for protocol/connection errors as well, and those should be handled immediately; adding streams to the pending list in such cases may cause errors.

CVE-ID: CVE-2021-22930
Refs: https://nvd.nist.gov/vuln/detail/CVE-2021-22930
PR-URL: https://github.com/nodejs/node/pull/39622
Refs: https://github.com/nodejs/node/pull/39423
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Beth Griggs <[email protected]>
DEFUN(movLW, PREV_WORD, "Move to the previous word") { char *lb; Line *pline, *l; int ppos; int i, n = searchKeyNum(); if (Currentbuf->firstLine == NULL) return; for (i = 0; i < n; i++) { pline = Currentbuf->currentLine; ppos = Currentbuf->pos; if (prev_nonnull_line(Currentbuf->currentLine) < 0) goto end; while (1) { l = Currentbuf->currentLine; lb = l->lineBuf; while (Currentbuf->pos > 0) { int tmp = Currentbuf->pos; prevChar(tmp, l); if (is_wordchar(getChar(&lb[tmp]))) break; Currentbuf->pos = tmp; } if (Currentbuf->pos > 0) break; if (prev_nonnull_line(Currentbuf->currentLine->prev) < 0) { Currentbuf->currentLine = pline; Currentbuf->pos = ppos; goto end; } Currentbuf->pos = Currentbuf->currentLine->len; } l = Currentbuf->currentLine; lb = l->lineBuf; while (Currentbuf->pos > 0) { int tmp = Currentbuf->pos; prevChar(tmp, l); if (!is_wordchar(getChar(&lb[tmp]))) break; Currentbuf->pos = tmp; } } end: arrangeCursor(Currentbuf); displayBuffer(Currentbuf, B_NORMAL); }
0
[ "CWE-59", "CWE-241" ]
w3m
18dcbadf2771cdb0c18509b14e4e73505b242753
199,074,212,185,973,600,000,000,000,000,000,000,000
51
Make temporary directory safely when ~/.w3m is unwritable
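In the spirit of the fix above, the safe way to create a private scratch directory is mkdtemp(): a fresh unpredictable name created with mode 0700, so no pre-existing path or symlink an attacker planted can be reused. A sketch (glibc's default feature macros are assumed so mkdtemp is visible):

```c
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a private scratch directory from a "...XXXXXX" template.
 * mkdtemp() creates it atomically with mode 0700; the lstat() pass
 * below is belt-and-braces verification. */
static int make_private_tmpdir(char *templ /* modified in place */)
{
    struct stat st;

    if (mkdtemp(templ) == NULL)
        return -1;
    if (lstat(templ, &st) != 0)
        return -1;
    /* must be a real directory, owned by us, owner-only access */
    if (!S_ISDIR(st.st_mode) || st.st_uid != getuid() ||
        (st.st_mode & 0777) != 0700)
        return -1;
    return 0;
}
```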
static int rfcomm_sock_getname(struct sockaddr *addr, struct socket *sock,
			       int *len, int peer)
{
	struct sockaddr_rc *sa = (struct sockaddr_rc *) addr;
	struct sock *sk = sock->sk;

	BT_DBG("sock %p, sk %p", sock, sk);

	memset(sa, 0, sizeof(*sa));
	sa->rc_family  = AF_BLUETOOTH;
	sa->rc_channel = rfcomm_pi(sk)->channel;
	if (peer)
		bacpy(&sa->rc_bdaddr, &rfcomm_pi(sk)->dst);
	else
		bacpy(&sa->rc_bdaddr, &rfcomm_pi(sk)->src);

	*len = sizeof(struct sockaddr_rc);
	return 0;
}
0
[ "CWE-20", "CWE-269" ]
linux
f3d3342602f8bcbf37d7c46641cb9bca7618eb1c
5,989,027,315,880,496,000,000,000,000,000,000,000
18
net: rework recvmsg handler msg_name and msg_namelen logic This patch now always passes msg->msg_namelen as 0. recvmsg handlers must set msg_namelen to the proper size <= sizeof(struct sockaddr_storage) to return msg_name to the user. This prevents numerous uninitialized memory leaks we had in the recvmsg handlers and makes it harder for new code to accidentally leak uninitialized memory. Optimize for the case recvfrom is called with NULL as address. We don't need to copy the address at all, so set it to NULL before invoking the recvmsg handler. We can do so, because all the recvmsg handlers must cope with the case a plain read() is called on them. read() also sets msg_name to NULL. Also document these changes in include/linux/net.h as suggested by David Miller. Changes since RFC: Set msg->msg_name = NULL if user specified a NULL in msg_name but had a non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't affect sendto as it would bail out earlier while trying to copy-in the address. It also more naturally reflects the logic by the callers of verify_iovec. With this change in place I could remove " if (!uaddr || msg_sys->msg_namelen == 0) msg->msg_name = NULL ". This change does not alter the user visible error logic as we ignore msg_namelen as long as msg_name is NULL. Also remove two unnecessary curly brackets in ___sys_recvmsg and change comments to netdev style. Cc: David Miller <[email protected]> Suggested-by: Eric Dumazet <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Signed-off-by: David S. Miller <[email protected]>
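The contract established above is: the caller zeroes msg_namelen (and may pass msg_name as NULL for a plain read()), and a recvmsg handler sets msg_namelen only when it actually filled msg_name, so no stale stack bytes can leak to user space. A stand-alone sketch of a conforming handler (struct names invented):

```c
#include <string.h>
#include <stddef.h>

/* Toy msghdr: caller passes msg_namelen == 0. */
struct msg_sketch {
    void *msg_name;     /* NULL when the caller used plain read() */
    int   msg_namelen;
};

struct addr_sketch {
    unsigned short family;
    unsigned char  addr[6];
};

static void handler_fill_name(struct msg_sketch *msg,
                              const struct addr_sketch *peer)
{
    struct addr_sketch *sa = msg->msg_name;

    if (sa == NULL)
        return;                 /* read(): no address to report */
    memset(sa, 0, sizeof(*sa)); /* no uninitialized padding leaks */
    *sa = *peer;
    msg->msg_namelen = sizeof(*sa);  /* set only when filled */
}
```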
void WatchdogException(struct pt_regs *regs)
{
	printk(KERN_EMERG "PowerPC Book-E Watchdog Exception\n");
	WatchdogHandler(regs);
}
0
[]
linux
5d176f751ee3c6eededd984ad409bff201f436a7
187,390,577,420,074,900,000,000,000,000,000,000,000
5
powerpc: tm: Enable transactional memory (TM) lazily for userspace Currently the MSR TM bit is always set if the hardware is TM capable. This adds extra overhead as it means the TM SPRS (TFHAR, TEXASR and TFAIR) must be swapped for each process regardless of if they use TM. For processes that don't use TM the TM MSR bit can be turned off allowing the kernel to avoid the expensive swap of the TM registers. A TM unavailable exception will occur if a thread does use TM and the kernel will enable MSR_TM and leave it so for some time afterwards. Signed-off-by: Cyril Bur <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
static void fixup_low_keys(struct btrfs_path *path,
			   struct btrfs_disk_key *key, int level)
{
	int i;
	struct extent_buffer *t;
	int ret;

	for (i = level; i < BTRFS_MAX_LEVEL; i++) {
		int tslot = path->slots[i];

		if (!path->nodes[i])
			break;
		t = path->nodes[i];
		ret = tree_mod_log_insert_key(t, tslot, MOD_LOG_KEY_REPLACE,
					      GFP_ATOMIC);
		BUG_ON(ret < 0);
		btrfs_set_node_key(t, key, tslot);
		btrfs_mark_buffer_dirty(path->nodes[i]);
		if (tslot != 0)
			break;
	}
}
0
[ "CWE-362" ]
linux
dbcc7d57bffc0c8cac9dac11bec548597d59a6a5
111,793,823,149,640,720,000,000,000,000,000,000,000
22
btrfs: fix race when cloning extent buffer during rewind of an old root While resolving backreferences, as part of a logical ino ioctl call or fiemap, we can end up hitting a BUG_ON() when replaying tree mod log operations of a root, triggering a stack trace like the following: ------------[ cut here ]------------ kernel BUG at fs/btrfs/ctree.c:1210! invalid opcode: 0000 [#1] SMP KASAN PTI CPU: 1 PID: 19054 Comm: crawl_335 Tainted: G W 5.11.0-2d11c0084b02-misc-next+ #89 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 RIP: 0010:__tree_mod_log_rewind+0x3b1/0x3c0 Code: 05 48 8d 74 10 (...) RSP: 0018:ffffc90001eb70b8 EFLAGS: 00010297 RAX: 0000000000000000 RBX: ffff88812344e400 RCX: ffffffffb28933b6 RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffff88812344e42c RBP: ffffc90001eb7108 R08: 1ffff11020b60a20 R09: ffffed1020b60a20 R10: ffff888105b050f9 R11: ffffed1020b60a1f R12: 00000000000000ee R13: ffff8880195520c0 R14: ffff8881bc958500 R15: ffff88812344e42c FS: 00007fd1955e8700(0000) GS:ffff8881f5600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007efdb7928718 CR3: 000000010103a006 CR4: 0000000000170ee0 Call Trace: btrfs_search_old_slot+0x265/0x10d0 ? lock_acquired+0xbb/0x600 ? btrfs_search_slot+0x1090/0x1090 ? free_extent_buffer.part.61+0xd7/0x140 ? free_extent_buffer+0x13/0x20 resolve_indirect_refs+0x3e9/0xfc0 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? add_prelim_ref.part.11+0x150/0x150 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? lock_acquired+0xbb/0x600 ? __kasan_check_write+0x14/0x20 ? do_raw_spin_unlock+0xa8/0x140 ? rb_insert_color+0x30/0x360 ? prelim_ref_insert+0x12d/0x430 find_parent_nodes+0x5c3/0x1830 ? resolve_indirect_refs+0xfc0/0xfc0 ? lock_release+0xc8/0x620 ? fs_reclaim_acquire+0x67/0xf0 ? lock_acquire+0xc7/0x510 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x160/0x210 ? lock_release+0xc8/0x620 ? fs_reclaim_acquire+0x67/0xf0 ? 
lock_acquire+0xc7/0x510 ? poison_range+0x38/0x40 ? unpoison_range+0x14/0x40 ? trace_hardirqs_on+0x55/0x120 btrfs_find_all_roots_safe+0x142/0x1e0 ? find_parent_nodes+0x1830/0x1830 ? btrfs_inode_flags_to_xflags+0x50/0x50 iterate_extent_inodes+0x20e/0x580 ? tree_backref_for_extent+0x230/0x230 ? lock_downgrade+0x3d0/0x3d0 ? read_extent_buffer+0xdd/0x110 ? lock_downgrade+0x3d0/0x3d0 ? __kasan_check_read+0x11/0x20 ? lock_acquired+0xbb/0x600 ? __kasan_check_write+0x14/0x20 ? _raw_spin_unlock+0x22/0x30 ? __kasan_check_write+0x14/0x20 iterate_inodes_from_logical+0x129/0x170 ? iterate_inodes_from_logical+0x129/0x170 ? btrfs_inode_flags_to_xflags+0x50/0x50 ? iterate_extent_inodes+0x580/0x580 ? __vmalloc_node+0x92/0xb0 ? init_data_container+0x34/0xb0 ? init_data_container+0x34/0xb0 ? kvmalloc_node+0x60/0x80 btrfs_ioctl_logical_to_ino+0x158/0x230 btrfs_ioctl+0x205e/0x4040 ? __might_sleep+0x71/0xe0 ? btrfs_ioctl_get_supported_features+0x30/0x30 ? getrusage+0x4b6/0x9c0 ? __kasan_check_read+0x11/0x20 ? lock_release+0xc8/0x620 ? __might_fault+0x64/0xd0 ? lock_acquire+0xc7/0x510 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? __kasan_check_read+0x11/0x20 ? do_vfs_ioctl+0xfc/0x9d0 ? ioctl_file_clone+0xe0/0xe0 ? lock_downgrade+0x3d0/0x3d0 ? lockdep_hardirqs_on_prepare+0x210/0x210 ? __kasan_check_read+0x11/0x20 ? lock_release+0xc8/0x620 ? __task_pid_nr_ns+0xd3/0x250 ? lock_acquire+0xc7/0x510 ? __fget_files+0x160/0x230 ? __fget_light+0xf2/0x110 __x64_sys_ioctl+0xc3/0x100 do_syscall_64+0x37/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7fd1976e2427 Code: 00 00 90 48 8b 05 (...) 
RSP: 002b:00007fd1955e5cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007fd1955e5f40 RCX: 00007fd1976e2427 RDX: 00007fd1955e5f48 RSI: 00000000c038943b RDI: 0000000000000004 RBP: 0000000001000000 R08: 0000000000000000 R09: 00007fd1955e6120 R10: 0000557835366b00 R11: 0000000000000246 R12: 0000000000000004 R13: 00007fd1955e5f48 R14: 00007fd1955e5f40 R15: 00007fd1955e5ef8 Modules linked in: ---[ end trace ec8931a1c36e57be ]--- (gdb) l *(__tree_mod_log_rewind+0x3b1) 0xffffffff81893521 is in __tree_mod_log_rewind (fs/btrfs/ctree.c:1210). 1205 * the modification. as we're going backwards, we do the 1206 * opposite of each operation here. 1207 */ 1208 switch (tm->op) { 1209 case MOD_LOG_KEY_REMOVE_WHILE_FREEING: 1210 BUG_ON(tm->slot < n); 1211 fallthrough; 1212 case MOD_LOG_KEY_REMOVE_WHILE_MOVING: 1213 case MOD_LOG_KEY_REMOVE: 1214 btrfs_set_node_key(eb, &tm->key, tm->slot); Here's what happens to hit that BUG_ON(): 1) We have one tree mod log user (through fiemap or the logical ino ioctl), with a sequence number of 1, so we have fs_info->tree_mod_seq == 1; 2) Another task is at ctree.c:balance_level() and we have eb X currently as the root of the tree, and we promote its single child, eb Y, as the new root. Then, at ctree.c:balance_level(), we call: tree_mod_log_insert_root(eb X, eb Y, 1); 3) At tree_mod_log_insert_root() we create tree mod log elements for each slot of eb X, of operation type MOD_LOG_KEY_REMOVE_WHILE_FREEING each with a ->logical pointing to ebX->start. These are placed in an array named tm_list. Lets assume there are N elements (N pointers in eb X); 4) Then, still at tree_mod_log_insert_root(), we create a tree mod log element of operation type MOD_LOG_ROOT_REPLACE, ->logical set to ebY->start, ->old_root.logical set to ebX->start, ->old_root.level set to the level of eb X and ->generation set to the generation of eb X; 5) Then tree_mod_log_insert_root() calls tree_mod_log_free_eb() with tm_list as argument. 
After that, tree_mod_log_free_eb() calls __tree_mod_log_insert() for each member of tm_list in reverse order, from highest slot in eb X, slot N - 1, to slot 0 of eb X; 6) __tree_mod_log_insert() sets the sequence number of each given tree mod log operation - it increments fs_info->tree_mod_seq and sets fs_info->tree_mod_seq as the sequence number of the given tree mod log operation. This means that for the tm_list created at tree_mod_log_insert_root(), the element corresponding to slot 0 of eb X has the highest sequence number (1 + N), and the element corresponding to the last slot has the lowest sequence number (2); 7) Then, after inserting tm_list's elements into the tree mod log rbtree, the MOD_LOG_ROOT_REPLACE element is inserted, which gets the highest sequence number, which is N + 2; 8) Back to ctree.c:balance_level(), we free eb X by calling btrfs_free_tree_block() on it. Because eb X was created in the current transaction, has no other references and writeback did not happen for it, we add it back to the free space cache/tree; 9) Later some other task T allocates the metadata extent from eb X, since it is marked as free space in the space cache/tree, and uses it as a node for some other btree; 10) The tree mod log user task calls btrfs_search_old_slot(), which calls get_old_root(), and finally that calls __tree_mod_log_oldest_root() with time_seq == 1 and eb_root == eb Y; 11) First iteration of the while loop finds the tree mod log element with sequence number N + 2, for the logical address of eb Y and of type MOD_LOG_ROOT_REPLACE; 12) Because the operation type is MOD_LOG_ROOT_REPLACE, we don't break out of the loop, and set root_logical to point to tm->old_root.logical which corresponds to the logical address of eb X; 13) On the next iteration of the while loop, the call to tree_mod_log_search_oldest() returns the smallest tree mod log element for the logical address of eb X, which has a sequence number of 2, an operation type of 
MOD_LOG_KEY_REMOVE_WHILE_FREEING and corresponds to the old slot N - 1 of eb X (eb X had N items in it before being freed); 14) We then break out of the while loop and return the tree mod log operation of type MOD_LOG_ROOT_REPLACE (eb Y), and not the one for slot N - 1 of eb X, to get_old_root(); 15) At get_old_root(), we process the MOD_LOG_ROOT_REPLACE operation and set "logical" to the logical address of eb X, which was the old root. We then call tree_mod_log_search() passing it the logical address of eb X and time_seq == 1; 16) Then before calling tree_mod_log_search(), task T adds a key to eb X, which results in adding a tree mod log operation of type MOD_LOG_KEY_ADD to the tree mod log - this is done at ctree.c:insert_ptr() - but after adding the tree mod log operation and before updating the number of items in eb X from 0 to 1... 17) The task at get_old_root() calls tree_mod_log_search() and gets the tree mod log operation of type MOD_LOG_KEY_ADD just added by task T. Then it enters the following if branch: if (old_root && tm && tm->op != MOD_LOG_KEY_REMOVE_WHILE_FREEING) { (...) } (...) Calls read_tree_block() for eb X, which gets a reference on eb X but does not lock it - task T has it locked. Then it clones eb X while it has nritems set to 0 in its header, before task T sets nritems to 1 in eb X's header. From hereupon we use the clone of eb X which no other task has access to; 18) Then we call __tree_mod_log_rewind(), passing it the MOD_LOG_KEY_ADD mod log operation we just got from tree_mod_log_search() in the previous step and the cloned version of eb X; 19) At __tree_mod_log_rewind(), we set the local variable "n" to the number of items set in eb X's clone, which is 0. Then we enter the while loop, and in its first iteration we process the MOD_LOG_KEY_ADD operation, which just decrements "n" from 0 to (u32)-1, since "n" is declared with a type of u32. 
At the end of this iteration we call rb_next() to find the next tree mod log operation for eb X, that gives us the mod log operation of type MOD_LOG_KEY_REMOVE_WHILE_FREEING, for slot 0, with a sequence number of N + 1 (steps 3 to 6); 20) Then we go back to the top of the while loop and trigger the following BUG_ON(): (...) switch (tm->op) { case MOD_LOG_KEY_REMOVE_WHILE_FREEING: BUG_ON(tm->slot < n); fallthrough; (...) Because "n" has a value of (u32)-1 (4294967295) and tm->slot is 0. Fix this by taking a read lock on the extent buffer before cloning it at ctree.c:get_old_root(). This should be done regardless of the extent buffer having been freed and reused, as a concurrent task might be modifying it (while holding a write lock on it). Reported-by: Zygo Blaxell <[email protected]> Link: https://lore.kernel.org/linux-btrfs/[email protected]/ Fixes: 834328a8493079 ("Btrfs: tree mod log's old roots could still be part of the tree") CC: [email protected] # 4.4+ Signed-off-by: Filipe Manana <[email protected]> Signed-off-by: David Sterba <[email protected]>
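Step 19 of the analysis above hinges on unsigned wraparound: with n declared as u32 and equal to 0, one decrement yields 4294967295, after which tm->slot < n holds for every slot and the BUG_ON fires. In miniature:

```c
#include <stdint.h>

/* Decrementing an unsigned 32-bit zero wraps modulo 2^32 -- defined
 * behavior in C, but disastrous when the result feeds a "slot < n"
 * sanity check as in __tree_mod_log_rewind(). */
static uint32_t dec_u32(uint32_t n)
{
    return n - 1;
}
```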
static const char* tunnel_authorization_response_fields_present_to_string(UINT16 fieldsPresent)
{
	return fields_present_to_string(fieldsPresent,
	                                tunnel_authorization_response_fields_present,
	                                ARRAYSIZE(tunnel_authorization_response_fields_present));
}
0
[ "CWE-125" ]
FreeRDP
6b485b146a1b9d6ce72dfd7b5f36456c166e7a16
257,509,036,205,991,550,000,000,000,000,000,000,000
5
Fixed oob read in irp_write and similar
gdm_display_real_prepare (GdmDisplay *self)
{
        g_return_val_if_fail (GDM_IS_DISPLAY (self), FALSE);

        g_debug ("GdmDisplay: prepare display");

        _gdm_display_set_status (self, GDM_DISPLAY_PREPARED);

        return TRUE;
}
0
[ "CWE-754" ]
gdm
4e6e5335d29c039bed820c43bfd1c19cb62539ff
180,998,420,471,556,080,000,000,000,000,000,000,000
10
display: Use autoptr to handle errors in look for existing users It will make things just cleaner
void dns_server_reset_features_all(DnsServer *s) {
        DnsServer *i;

        LIST_FOREACH(servers, i, s)
                dns_server_reset_features(i);
}
0
[ "CWE-416" ]
systemd
904dcaf9d4933499f8334859f52ea8497f2d24ff
11,557,303,546,008,690,000,000,000,000,000,000,000
6
resolved: take particular care when detaching DnsServer from its default stream DnsStream and DnsServer have a symbiotic relationship: one DnsStream is the current "default" stream of the server (and thus reffed by it), but each stream also refs the server it is connected to. This cyclic dependency can result in weird situations: when one is destroyed/unlinked/stopped it needs to unregister itself from the other, but doing this will trigger unregistration of the other. Hence, let's make sure we unregister the stream from the server before destroying it, to break this cycle. Most likely fixes: #10725
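The teardown ordering the commit above describes (detach the stream from the server first, then drop references) can be modeled with two toy refcounted structs forming the same cycle; all names are invented.

```c
#include <stddef.h>

/* Toy model of the DnsServer <-> DnsStream cycle: the server refs
 * its default stream, and the stream refs its server. */
struct stream_sketch;

struct server_sketch {
    struct stream_sketch *default_stream;
    int refs;
};

struct stream_sketch {
    struct server_sketch *server;
    int refs;
};

/* Break the cycle explicitly before either side is destroyed, so
 * neither teardown path re-enters the other. */
static void stream_detach(struct stream_sketch *st)
{
    if (st->server && st->server->default_stream == st) {
        st->server->default_stream = NULL;
        st->refs--;            /* drop the server's ref on the stream */
    }
    if (st->server) {
        st->server->refs--;    /* drop the stream's ref on the server */
        st->server = NULL;
    }
}
```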
static int __compat_put_timespec(const struct timespec *ts,
				 struct old_timespec32 __user *cts)
{
	return (!access_ok(cts, sizeof(*cts)) ||
		__put_user(ts->tv_sec, &cts->tv_sec) ||
		__put_user(ts->tv_nsec, &cts->tv_nsec)) ? -EFAULT : 0;
}
0
[ "CWE-20" ]
linux
594cc251fdd0d231d342d88b2fdff4bc42fb0690
123,117,836,072,004,940,000,000,000,000,000,000,000
6
make 'user_access_begin()' do 'access_ok()'

Originally, the rule used to be that you'd have to do access_ok() separately, and then user_access_begin() before actually doing the direct (optimized) user access.

But experience has shown that people then decide not to do access_ok() at all, and instead rely on it being implied by other operations or similar. Which makes it very hard to verify that the access has actually been range-checked.

If you use the unsafe direct user accesses, hardware features (either SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged Access Never - on ARM) do force you to use user_access_begin(). But nothing really forces the range check.

By putting the range check into user_access_begin(), we actually force people to do the right thing (tm), and the range check will be visible near the actual accesses. We have way too long a history of people trying to avoid them.

Signed-off-by: Linus Torvalds <[email protected]>
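The rule this commit enforces, the range check living inside user_access_begin() itself, can be sketched as a single guarded entry point. The limit constant and names below are illustrative only; the real kernel also toggles SMAP/PAN here.

```c
#include <stdint.h>

#define USER_SPACE_LIMIT 0x7fffffffffffULL  /* stand-in for TASK_SIZE */

/* Sketch: callers cannot forget the range check because it is part
 * of the begin operation. Returns nonzero when the unsafe accessors
 * may proceed. */
static int user_access_begin_sketch(uint64_t uaddr, uint64_t len)
{
    if (len > USER_SPACE_LIMIT || uaddr > USER_SPACE_LIMIT - len)
        return 0;   /* out of range: the access window never opens */
    /* the real kernel would execute stac() / disable PAN here */
    return 1;
}
```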
e1000e_set_ics(E1000ECore *core, int index, uint32_t val)
{
    trace_e1000e_irq_write_ics(val);
    e1000e_set_interrupt_cause(core, val);
}
0
[ "CWE-835" ]
qemu
4154c7e03fa55b4cf52509a83d50d6c09d743b77
5,928,812,701,810,742,000,000,000,000,000,000,000
5
net: e1000e: fix an infinite loop issue This issue is like the issue in e1000 network card addressed in this commit: e1000: eliminate infinite loops on out-of-bounds transfer start. Signed-off-by: Li Qiang <[email protected]> Reviewed-by: Dmitry Fleytman <[email protected]> Signed-off-by: Jason Wang <[email protected]>
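The e1000-style fix referenced above bounds descriptor-ring walks so that guest-controlled head/tail registers can no longer produce an endless loop. A stand-alone sketch of that guard (ring size and names invented):

```c
#define RING_SIZE 64

/* Walk a descriptor ring from head to tail, but cap iterations at
 * the ring size: termination no longer depends on values the guest
 * can write into the device registers. */
static int process_ring(unsigned int head, unsigned int tail)
{
    unsigned int cur = head % RING_SIZE;
    int processed = 0;

    while (cur != (tail % RING_SIZE) && processed < RING_SIZE) {
        /* ... process descriptor cur ... */
        cur = (cur + 1) % RING_SIZE;
        processed++;
    }
    return processed;   /* bounded no matter what the guest wrote */
}
```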
GF_Err stbl_AddChunkOffset(GF_MediaBox *mdia, u32 sampleNumber, u32 StreamDescIndex, u64 offset, u32 nb_pack) { GF_SampleTableBox *stbl; GF_ChunkOffsetBox *stco; GF_SampleToChunkBox *stsc; GF_ChunkLargeOffsetBox *co64; GF_StscEntry *ent; u32 i, k, *newOff, new_chunk_idx=0; u64 *newLarge; s32 insert_idx = -1; stbl = mdia->information->sampleTable; stsc = stbl->SampleToChunk; // if (stsc->w_lastSampleNumber + 1 < sampleNumber ) return GF_BAD_PARAM; CHECK_PACK(GF_BAD_PARAM) if (!stsc->nb_entries || (stsc->nb_entries + 2 >= stsc->alloc_size)) { if (!stsc->alloc_size) stsc->alloc_size = 1; ALLOC_INC(stsc->alloc_size); stsc->entries = gf_realloc(stsc->entries, sizeof(GF_StscEntry)*stsc->alloc_size); if (!stsc->entries) return GF_OUT_OF_MEM; memset(&stsc->entries[stsc->nb_entries], 0, sizeof(GF_StscEntry)*(stsc->alloc_size-stsc->nb_entries) ); } if (sampleNumber == stsc->w_lastSampleNumber + 1) { ent = &stsc->entries[stsc->nb_entries]; stsc->w_lastChunkNumber ++; ent->firstChunk = stsc->w_lastChunkNumber; if (stsc->nb_entries) stsc->entries[stsc->nb_entries-1].nextChunk = stsc->w_lastChunkNumber; new_chunk_idx = stsc->w_lastChunkNumber; stsc->w_lastSampleNumber = sampleNumber + nb_pack-1; stsc->nb_entries += 1; } else { u32 cur_samp = 1; u32 samples_in_next_entry = 0; u32 next_entry_first_chunk = 1; for (i=0; i<stsc->nb_entries; i++) { u32 nb_chunks = 1; ent = &stsc->entries[i]; if (i+1<stsc->nb_entries) nb_chunks = stsc->entries[i+1].firstChunk - ent->firstChunk; for (k=0; k<nb_chunks; k++) { if ((cur_samp <= sampleNumber) && (ent->samplesPerChunk + cur_samp > sampleNumber)) { insert_idx = i; //stsc entry has samples before inserted sample, split if (sampleNumber>cur_samp) { samples_in_next_entry = ent->samplesPerChunk - (sampleNumber-cur_samp); ent->samplesPerChunk = sampleNumber-cur_samp; } break; } cur_samp += ent->samplesPerChunk; next_entry_first_chunk++; } if (insert_idx>=0) break; } //we need to split the entry if (samples_in_next_entry) { 
memmove(&stsc->entries[insert_idx+3], &stsc->entries[insert_idx+1], sizeof(GF_StscEntry)*(stsc->nb_entries - insert_idx - 1)); //copy over original entry ent = &stsc->entries[insert_idx]; stsc->entries[insert_idx+2] = *ent; stsc->entries[insert_idx+2].samplesPerChunk = samples_in_next_entry; stsc->entries[insert_idx+2].firstChunk = next_entry_first_chunk + 1; //setup new entry ent = &stsc->entries[insert_idx+1]; ent->firstChunk = next_entry_first_chunk; stsc->nb_entries += 2; } else { if (insert_idx<0) { ent = &stsc->entries[stsc->nb_entries]; insert_idx = stsc->nb_entries; } else { memmove(&stsc->entries[insert_idx+1], &stsc->entries[insert_idx], sizeof(GF_StscEntry)*(stsc->nb_entries+1-insert_idx)); ent = &stsc->entries[insert_idx+1]; } ent->firstChunk = next_entry_first_chunk; stsc->nb_entries += 1; } new_chunk_idx = next_entry_first_chunk; } ent->isEdited = (Media_IsSelfContained(mdia, StreamDescIndex)) ? 1 : 0; ent->sampleDescriptionIndex = StreamDescIndex; ent->samplesPerChunk = nb_pack; ent->nextChunk = ent->firstChunk+1; //OK, now if we've inserted a chunk, update the sample to chunk info... if (sampleNumber + nb_pack - 1 == stsc->w_lastSampleNumber) { if (stsc->nb_entries) stsc->entries[stsc->nb_entries-1].nextChunk = ent->firstChunk; stbl->SampleToChunk->currentIndex = stsc->nb_entries-1; stbl->SampleToChunk->firstSampleInCurrentChunk = sampleNumber; //write - edit mode: sample number = chunk number stbl->SampleToChunk->currentChunk = stsc->w_lastChunkNumber; stbl->SampleToChunk->ghostNumber = 1; } else { /*offset remaining entries*/ for (i = insert_idx+1; i<stsc->nb_entries+1; i++) { stsc->entries[i].firstChunk++; if (i+1<stsc->nb_entries) stsc->entries[i-1].nextChunk = stsc->entries[i].firstChunk; } } //add the offset to the chunk... 
//and we change our offset if (stbl->ChunkOffset->type == GF_ISOM_BOX_TYPE_STCO) { stco = (GF_ChunkOffsetBox *)stbl->ChunkOffset; //if the new offset is a large one, we have to rewrite our table entry by entry (32->64 bit conv)... if (offset > 0xFFFFFFFF) { co64 = (GF_ChunkLargeOffsetBox *) gf_isom_box_new_parent(&stbl->child_boxes, GF_ISOM_BOX_TYPE_CO64); if (!co64) return GF_OUT_OF_MEM; co64->nb_entries = stco->nb_entries + 1; co64->alloc_size = co64->nb_entries; co64->offsets = (u64*)gf_malloc(sizeof(u64) * co64->nb_entries); if (!co64->offsets) return GF_OUT_OF_MEM; k = 0; for (i=0; i<stco->nb_entries; i++) { if (i + 1 == new_chunk_idx) { co64->offsets[i] = offset; k = 1; } co64->offsets[i+k] = (u64) stco->offsets[i]; } if (!k) co64->offsets[co64->nb_entries - 1] = offset; gf_isom_box_del_parent(&stbl->child_boxes, stbl->ChunkOffset); stbl->ChunkOffset = (GF_Box *) co64; } else { //no, we can use this one. if (new_chunk_idx > stco->nb_entries) { if (!stco->alloc_size) stco->alloc_size = stco->nb_entries; if (stco->nb_entries == stco->alloc_size) { ALLOC_INC(stco->alloc_size); stco->offsets = (u32*)gf_realloc(stco->offsets, sizeof(u32) * stco->alloc_size); if (!stco->offsets) return GF_OUT_OF_MEM; memset(&stco->offsets[stco->nb_entries], 0, sizeof(u32) * (stco->alloc_size-stco->nb_entries) ); } stco->offsets[stco->nb_entries] = (u32) offset; stco->nb_entries += 1; } else { //nope. we're inserting newOff = (u32*)gf_malloc(sizeof(u32) * (stco->nb_entries + 1)); if (!newOff) return GF_OUT_OF_MEM; k=0; for (i=0; i<stco->nb_entries; i++) { if (i+1 == new_chunk_idx) { newOff[i] = (u32) offset; k=1; } newOff[i+k] = stco->offsets[i]; } gf_free(stco->offsets); stco->offsets = newOff; stco->nb_entries ++; stco->alloc_size = stco->nb_entries; } } } else { //use large offset... 
co64 = (GF_ChunkLargeOffsetBox *)stbl->ChunkOffset; if (sampleNumber > co64->nb_entries) { if (!co64->alloc_size) co64->alloc_size = co64->nb_entries; if (co64->nb_entries == co64->alloc_size) { ALLOC_INC(co64->alloc_size); co64->offsets = (u64*)gf_realloc(co64->offsets, sizeof(u64) * co64->alloc_size); if (!co64->offsets) return GF_OUT_OF_MEM; memset(&co64->offsets[co64->nb_entries], 0, sizeof(u64) * (co64->alloc_size - co64->nb_entries) ); } co64->offsets[co64->nb_entries] = offset; co64->nb_entries += 1; } else { //nope. we're inserting newLarge = (u64*)gf_malloc(sizeof(u64) * (co64->nb_entries + 1)); if (!newLarge) return GF_OUT_OF_MEM; k=0; for (i=0; i<co64->nb_entries; i++) { if (i+1 == new_chunk_idx) { newLarge[i] = offset; k=1; } newLarge[i+k] = co64->offsets[i]; } gf_free(co64->offsets); co64->offsets = newLarge; co64->nb_entries++; co64->alloc_size++; } } return GF_OK; }
0
[ "CWE-120", "CWE-787" ]
gpac
77ed81c069e10b3861d88f72e1c6be1277ee7eae
18,239,218,717,088,660,000,000,000,000,000,000,000
195
fixed #1774 (fuzz)
void preloadBuffer(const UChar8* preloadText) {
    size_t ucharCount;
    int errorCode;
    copyString8to32(buf32, preloadText, buflen + 1, ucharCount, errorCode);
    recomputeCharacterWidths(buf32, charWidths, ucharCount);
    len = ucharCount;
    pos = ucharCount;
}
0
[ "CWE-200" ]
mongo
035cf2afc04988b22cb67f4ebfd77e9b344cb6e0
175,024,438,534,204,140,000,000,000,000,000,000,000
8
SERVER-25335 avoid group and other permissions when creating .dbshell history file
gdk_pixbuf__xbm_image_load_increment (gpointer data,
                                      const guchar *buf,
                                      guint size,
                                      GError **error)
{
        XBMData *context = (XBMData *) data;

        g_return_val_if_fail (data != NULL, FALSE);

        if (fwrite (buf, sizeof (guchar), size, context->file) != size) {
                gint save_errno = errno;
                context->all_okay = FALSE;
                g_set_error_literal (error,
                                     G_FILE_ERROR,
                                     g_file_error_from_errno (save_errno),
                                     _("Failed to write to temporary file when loading XBM image"));
                return FALSE;
        }

        return TRUE;
}
0
[ "CWE-189" ]
gdk-pixbuf
4f0f465f991cd454d03189497f923eb40c170c22
142,133,394,336,956,980,000,000,000,000,000,000,000
21
Avoid an integer overflow in the xbm loader At the same time, reject some silly input, such as negative width or height. https://bugzilla.gnome.org/show_bug.cgi?id=672811
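The fix described in this commit message rejects negative dimensions and guards the width-by-height multiplication before allocating. A minimal sketch of that pattern follows; the helper name is illustrative, not gdk-pixbuf's actual API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative helper (not gdk-pixbuf API): allocate a w*h byte buffer
 * only when both dimensions are positive and the product cannot
 * overflow size_t. */
static unsigned char *alloc_bitmap(int w, int h)
{
    if (w <= 0 || h <= 0)
        return NULL;                         /* reject silly input */
    if ((size_t)h > SIZE_MAX / (size_t)w)
        return NULL;                         /* w * h would overflow */
    return malloc((size_t)w * (size_t)h);
}
```

The division-based check avoids performing the multiplication that could wrap, which is the usual portable idiom for this guard.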
void ImportEPUB::ProcessFontFiles(const QList<Resource *> &resources,
                                  const QHash<QString, QString> &updates,
                                  const QHash<QString, QString> &encrypted_files)
{
    if (encrypted_files.empty()) {
        return;
    }

    QList<FontResource *> font_resources = m_Book->GetFolderKeeper()->GetResourceTypeList<FontResource>();

    if (font_resources.empty()) {
        return;
    }

    QHash<QString, QString> new_font_paths_to_algorithms;

    foreach(QString old_update_path, updates.keys()) {
        if (!FONT_EXTENSIONS.contains(QFileInfo(old_update_path).suffix().toLower())) {
            continue;
        }

        QString new_update_path = updates[ old_update_path ];
        foreach(QString old_encrypted_path, encrypted_files.keys()) {
            if (old_update_path == old_encrypted_path) {
                new_font_paths_to_algorithms[ new_update_path ] = encrypted_files[ old_encrypted_path ];
            }
        }
    }

    foreach(FontResource * font_resource, font_resources) {
        QString match_path = "../" + font_resource->GetRelativePathToOEBPS();
        QString algorithm = new_font_paths_to_algorithms.value(match_path);

        if (algorithm.isEmpty()) {
            continue;
        }

        font_resource->SetObfuscationAlgorithm(algorithm);

        // Actually we are de-obfuscating, but the inverse operations of the obfuscation methods
        // are the obfuscation methods themselves. For the math oriented, the obfuscation methods
        // are involutary [ f( f( x ) ) = x ].
        if (algorithm == ADOBE_FONT_ALGO_ID) {
            FontObfuscation::ObfuscateFile(font_resource->GetFullPath(), algorithm, m_UuidIdentifierValue);
        } else {
            FontObfuscation::ObfuscateFile(font_resource->GetFullPath(), algorithm, m_UniqueIdentifierValue);
        }
    }
}
0
[ "CWE-22" ]
Sigil
04e2f280cc4a0766bedcc7b9eb56449ceecc2ad4
76,825,915,197,055,520,000,000,000,000,000,000,000
47
further harden against malicious epubs and produce error message
static int snd_pcm_action_single(const struct action_ops *ops,
                                 struct snd_pcm_substream *substream,
                                 snd_pcm_state_t state)
{
        int res;

        res = ops->pre_action(substream, state);
        if (res < 0)
                return res;
        res = ops->do_action(substream, state);
        if (res == 0)
                ops->post_action(substream, state);
        else if (ops->undo_action)
                ops->undo_action(substream, state);
        return res;
}
0
[ "CWE-125" ]
linux
92ee3c60ec9fe64404dc035e7c41277d74aa26cb
281,968,562,073,650,160,000,000,000,000,000,000,000
16
ALSA: pcm: Fix races among concurrent hw_params and hw_free calls Currently we have neither proper check nor protection against the concurrent calls of PCM hw_params and hw_free ioctls, which may result in a UAF. Since the existing PCM stream lock can't be used for protecting the whole ioctl operations, we need a new mutex to protect those racy calls. This patch introduced a new mutex, runtime->buffer_mutex, and applies it to both hw_params and hw_free ioctl code paths. Along with it, the both functions are slightly modified (the mmap_count check is moved into the state-check block) for code simplicity. Reported-by: Hu Jiahui <[email protected]> Cc: <[email protected]> Reviewed-by: Jaroslav Kysela <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
static bool vrend_hw_switch_query_context(struct vrend_context *ctx) { if (vrend_state.use_async_fence_cb) { if (!ctx) return false; if (ctx == vrend_state.current_sync_thread_ctx) return true; if (ctx->ctx_id != 0 && ctx->in_error) return false; vrend_clicbs->make_current(ctx->sub->gl_context); vrend_state.current_sync_thread_ctx = ctx; return true; } else { return vrend_hw_switch_context(ctx, true); } }
0
[ "CWE-787" ]
virglrenderer
95e581fd181b213c2ed7cdc63f2abc03eaaa77ec
272,368,505,316,270,140,000,000,000,000,000,000,000
19
vrend: Add test to resource OOB write and fix it v2: Also check that no depth != 1 has been send when none is due Closes: #250 Signed-off-by: Gert Wollny <[email protected]> Reviewed-by: Chia-I Wu <[email protected]>
std::string queueloader::get_filename(const std::string& str) {
    std::string fn = ctrl->get_dlpath();
    if (fn[fn.length()-1] != NEWSBEUTER_PATH_SEP[0])
        fn.append(NEWSBEUTER_PATH_SEP);
    char buf[1024];
    snprintf(buf, sizeof(buf), "%s", str.c_str());
    char * base = basename(buf);
    if (!base || strlen(base) == 0) {
        char lbuf[128];
        time_t t = time(NULL);
        strftime(lbuf, sizeof(lbuf), "%Y-%b-%d-%H%M%S.unknown", localtime(&t));
        fn.append(lbuf);
    } else {
        fn.append(base);
    }
    return fn;
}
1
[ "CWE-78" ]
newsbeuter
26f5a4350f3ab5507bb8727051c87bb04660f333
107,680,217,014,226,120,000,000,000,000,000,000,000
18
Work around shell code in podcast names (#598)
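The underlying issue above (CWE-78) is that the derived filename is later handed to a shell. One common mitigation, sketched here with a hypothetical helper rather than newsbeuter's actual fix, is to whitelist filename characters before any shell ever sees the string:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Hypothetical sanitizer: keep alphanumerics plus '.', '-' and '_';
 * replace everything else (spaces, ';', '`', '$', ...) with '_' so the
 * name can never be interpreted as shell syntax. */
static void sanitize_filename(char *fn)
{
    for (char *p = fn; *p; ++p) {
        unsigned char c = (unsigned char)*p;
        if (!(isalnum(c) || c == '.' || c == '-' || c == '_'))
            *p = '_';
    }
}
```

A whitelist is preferable to blacklisting known metacharacters, since shells differ in which characters are special.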
static void convert_32s2u_C1R(const OPJ_INT32* pSrc, OPJ_BYTE* pDst,
                              OPJ_SIZE_T length)
{
    OPJ_SIZE_T i;

    for (i = 0; i < (length & ~(OPJ_SIZE_T)3U); i += 4U) {
        OPJ_UINT32 src0 = (OPJ_UINT32)pSrc[i + 0];
        OPJ_UINT32 src1 = (OPJ_UINT32)pSrc[i + 1];
        OPJ_UINT32 src2 = (OPJ_UINT32)pSrc[i + 2];
        OPJ_UINT32 src3 = (OPJ_UINT32)pSrc[i + 3];
        *pDst++ = (OPJ_BYTE)((src0 << 6) | (src1 << 4) | (src2 << 2) | src3);
    }

    if (length & 3U) {
        OPJ_UINT32 src0 = (OPJ_UINT32)pSrc[i + 0];
        OPJ_UINT32 src1 = 0U;
        OPJ_UINT32 src2 = 0U;
        length = length & 3U;

        if (length > 1U) {
            src1 = (OPJ_UINT32)pSrc[i + 1];
            if (length > 2U) {
                src2 = (OPJ_UINT32)pSrc[i + 2];
            }
        }
        *pDst++ = (OPJ_BYTE)((src0 << 6) | (src1 << 4) | (src2 << 2));
    }
}
0
[ "CWE-787" ]
openjpeg
e5285319229a5d77bf316bb0d3a6cbd3cb8666d9
338,979,602,305,112,670,000,000,000,000,000,000,000
28
pgxtoimage(): fix write stack buffer overflow (#997)
daemon_msg_auth_req(struct daemon_slpars *pars, uint32 plen) { char errbuf[PCAP_ERRBUF_SIZE]; // buffer for network errors char errmsgbuf[PCAP_ERRBUF_SIZE]; // buffer for errors to send to the client int status; struct rpcap_auth auth; // RPCAP authentication header char sendbuf[RPCAP_NETBUF_SIZE]; // temporary buffer in which data to be sent is buffered int sendbufidx = 0; // index which keeps the number of bytes currently buffered struct rpcap_authreply *authreply; // authentication reply message status = rpcapd_recv(pars->sockctrl, (char *) &auth, sizeof(struct rpcap_auth), &plen, errmsgbuf); if (status == -1) { return -1; } if (status == -2) { goto error; } switch (ntohs(auth.type)) { case RPCAP_RMTAUTH_NULL: { if (!pars->nullAuthAllowed) { // Send the client an error reply. pcap_snprintf(errmsgbuf, PCAP_ERRBUF_SIZE, "Authentication failed; NULL authentication not permitted."); if (rpcap_senderror(pars->sockctrl, 0, PCAP_ERR_AUTH_FAILED, errmsgbuf, errbuf) == -1) { // That failed; log a message and give up. 
rpcapd_log(LOGPRIO_ERROR, "Send to client failed: %s", errbuf); return -1; } goto error_noreply; } break; } case RPCAP_RMTAUTH_PWD: { char *username, *passwd; uint32 usernamelen, passwdlen; usernamelen = ntohs(auth.slen1); username = (char *) malloc (usernamelen + 1); if (username == NULL) { pcap_fmt_errmsg_for_errno(errmsgbuf, PCAP_ERRBUF_SIZE, errno, "malloc() failed"); goto error; } status = rpcapd_recv(pars->sockctrl, username, usernamelen, &plen, errmsgbuf); if (status == -1) { free(username); return -1; } if (status == -2) { free(username); goto error; } username[usernamelen] = '\0'; passwdlen = ntohs(auth.slen2); passwd = (char *) malloc (passwdlen + 1); if (passwd == NULL) { pcap_fmt_errmsg_for_errno(errmsgbuf, PCAP_ERRBUF_SIZE, errno, "malloc() failed"); free(username); goto error; } status = rpcapd_recv(pars->sockctrl, passwd, passwdlen, &plen, errmsgbuf); if (status == -1) { free(username); free(passwd); return -1; } if (status == -2) { free(username); free(passwd); goto error; } passwd[passwdlen] = '\0'; if (daemon_AuthUserPwd(username, passwd, errmsgbuf)) { // // Authentication failed. Let the client // know. // free(username); free(passwd); if (rpcap_senderror(pars->sockctrl, 0, PCAP_ERR_AUTH_FAILED, errmsgbuf, errbuf) == -1) { // That failed; log a message and give up. rpcapd_log(LOGPRIO_ERROR, "Send to client failed: %s", errbuf); return -1; } // // Suspend for 1 second, so that they can't // hammer us with repeated tries with an // attack such as a dictionary attack. // // WARNING: this delay is inserted only // at this point; if the client closes the // connection and reconnects, the suspension // time does not have any effect. 
// sleep_secs(RPCAP_SUSPEND_WRONGAUTH); goto error_noreply; } free(username); free(passwd); break; } default: pcap_snprintf(errmsgbuf, PCAP_ERRBUF_SIZE, "Authentication type not recognized."); if (rpcap_senderror(pars->sockctrl, 0, PCAP_ERR_AUTH_TYPE_NOTSUP, errmsgbuf, errbuf) == -1) { // That failed; log a message and give up. rpcapd_log(LOGPRIO_ERROR, "Send to client failed: %s", errbuf); return -1; } goto error_noreply; } // The authentication succeeded; let the client know. if (sock_bufferize(NULL, sizeof(struct rpcap_header), NULL, &sendbufidx, RPCAP_NETBUF_SIZE, SOCKBUF_CHECKONLY, errmsgbuf, PCAP_ERRBUF_SIZE) == -1) goto error; rpcap_createhdr((struct rpcap_header *) sendbuf, 0, RPCAP_MSG_AUTH_REPLY, 0, sizeof(struct rpcap_authreply)); authreply = (struct rpcap_authreply *) &sendbuf[sendbufidx]; if (sock_bufferize(NULL, sizeof(struct rpcap_authreply), NULL, &sendbufidx, RPCAP_NETBUF_SIZE, SOCKBUF_CHECKONLY, errmsgbuf, PCAP_ERRBUF_SIZE) == -1) goto error; // // Indicate to our peer what versions we support. // memset(authreply, 0, sizeof(struct rpcap_authreply)); authreply->minvers = RPCAP_MIN_VERSION; authreply->maxvers = RPCAP_MAX_VERSION; // Send the reply. if (sock_send(pars->sockctrl, sendbuf, sendbufidx, errbuf, PCAP_ERRBUF_SIZE) == -1) { // That failed; log a messsage and give up. rpcapd_log(LOGPRIO_ERROR, "Send to client failed: %s", errbuf); return -1; } // Check if all the data has been read; if not, discard the data in excess if (rpcapd_discard(pars->sockctrl, plen) == -1) { return -1; } return 0; error: if (rpcap_senderror(pars->sockctrl, 0, PCAP_ERR_AUTH, errmsgbuf, errbuf) == -1) { // That failed; log a message and give up. rpcapd_log(LOGPRIO_ERROR, "Send to client failed: %s", errbuf); return -1; } error_noreply: // Check if all the data has been read; if not, discard the data in excess if (rpcapd_discard(pars->sockctrl, plen) == -1) { return -1; } return -2; }
0
[ "CWE-703", "CWE-918" ]
libpcap
33834cb2a4d035b52aa2a26742f832a112e90a0a
34,326,012,738,690,104,000,000,000,000,000,000,000
194
In the open request, reject capture sources that are URLs. You shouldn't be able to ask a server to open a remote device on some *other* server; just open it yourself. This addresses Include Security issue F13: [libpcap] Remote Packet Capture Daemon Allows Opening Capture URLs.
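The fix summarized above rejects capture sources that are themselves URLs, so a server cannot be asked to open a device on some other server. A crude sketch of such a check follows; treating any "://" occurrence as a URL is an illustrative simplification, not libpcap's exact test:

```c
#include <assert.h>
#include <string.h>

/* Illustrative check: a capture source containing "://" looks like a
 * URL and should be opened by the client itself, not forwarded to
 * another server. */
static int source_is_url(const char *source)
{
    return strstr(source, "://") != NULL;
}
```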
archive_string_conversion_from_charset(struct archive *a, const char *charset,
    int best_effort)
{
        int flag = SCONV_FROM_CHARSET;

        if (best_effort)
                flag |= SCONV_BEST_EFFORT;
        return (get_sconv_object(a, charset, get_current_charset(a), flag));
}
0
[ "CWE-476" ]
libarchive
42a3408ac7df1e69bea9ea12b72e14f59f7400c0
146,617,012,706,925,230,000,000,000,000,000,000,000
9
archive_strncat_l(): allocate and do not convert if length == 0 This ensures e.g. that archive_mstring_copy_mbs_len_l() does not set aes_set = AES_SET_MBS with aes_mbs.s == NULL. Resolves possible null-pointer dereference reported by OSS-Fuzz. Reported-By: OSS-Fuzz issue 286
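The commit makes the zero-length path allocate an empty string instead of returning NULL, so state flags such as `aes_set` never pair with a NULL pointer. A minimal sketch of that invariant, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: copy n bytes into a fresh NUL-terminated buffer.
 * Crucially, n == 0 still yields an allocated empty string, never NULL,
 * which is the invariant the libarchive fix establishes. */
static char *dup_bytes(const char *s, size_t n)
{
    char *out = malloc(n + 1);
    if (out == NULL)
        return NULL;
    if (n > 0)
        memcpy(out, s, n);
    out[n] = '\0';
    return out;
}
```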
int __sys_recvmmsg(int fd, struct mmsghdr __user *mmsg, unsigned int vlen, unsigned int flags, struct timespec *timeout) { int fput_needed, err, datagrams; struct socket *sock; struct mmsghdr __user *entry; struct compat_mmsghdr __user *compat_entry; struct msghdr msg_sys; struct timespec64 end_time; struct timespec64 timeout64; if (timeout && poll_select_set_timeout(&end_time, timeout->tv_sec, timeout->tv_nsec)) return -EINVAL; datagrams = 0; sock = sockfd_lookup_light(fd, &err, &fput_needed); if (!sock) return err; if (likely(!(flags & MSG_ERRQUEUE))) { err = sock_error(sock->sk); if (err) { datagrams = err; goto out_put; } } entry = mmsg; compat_entry = (struct compat_mmsghdr __user *)mmsg; while (datagrams < vlen) { /* * No need to ask LSM for more than the first datagram. */ if (MSG_CMSG_COMPAT & flags) { err = ___sys_recvmsg(sock, (struct user_msghdr __user *)compat_entry, &msg_sys, flags & ~MSG_WAITFORONE, datagrams); if (err < 0) break; err = __put_user(err, &compat_entry->msg_len); ++compat_entry; } else { err = ___sys_recvmsg(sock, (struct user_msghdr __user *)entry, &msg_sys, flags & ~MSG_WAITFORONE, datagrams); if (err < 0) break; err = put_user(err, &entry->msg_len); ++entry; } if (err) break; ++datagrams; /* MSG_WAITFORONE turns on MSG_DONTWAIT after one packet */ if (flags & MSG_WAITFORONE) flags |= MSG_DONTWAIT; if (timeout) { ktime_get_ts64(&timeout64); *timeout = timespec64_to_timespec( timespec64_sub(end_time, timeout64)); if (timeout->tv_sec < 0) { timeout->tv_sec = timeout->tv_nsec = 0; break; } /* Timeout, return less than vlen datagrams */ if (timeout->tv_nsec == 0 && timeout->tv_sec == 0) break; } /* Out of band data, return right away */ if (msg_sys.msg_flags & MSG_OOB) break; cond_resched(); } if (err == 0) goto out_put; if (datagrams == 0) { datagrams = err; goto out_put; } /* * We may return less entries than requested (vlen) if the * sock is non block and there aren't enough datagrams... */ if (err != -EAGAIN) { /* * ... 
or if recvmsg returns an error after we * received some datagrams, where we record the * error to return on the next call or if the * app asks about it using getsockopt(SO_ERROR). */ sock->sk->sk_err = -err; } out_put: fput_light(sock->file, fput_needed); return datagrams; }
0
[ "CWE-362" ]
linux
6d8c50dcb029872b298eea68cc6209c866fd3e14
157,711,095,143,079,860,000,000,000,000,000,000,000
110
socket: close race condition between sock_close() and sockfs_setattr() fchownat() doesn't even hold refcnt of fd until it figures out fd is really needed (otherwise is ignored) and releases it after it resolves the path. This means sock_close() could race with sockfs_setattr(), which leads to a NULL pointer dereference since typically we set sock->sk to NULL in ->release(). As pointed out by Al, this is unique to sockfs. So we can fix this in socket layer by acquiring inode_lock in sock_close() and checking against NULL in sockfs_setattr(). sock_release() is called in many places, only the sock_close() path matters here. And fortunately, this should not affect normal sock_close() as it is only called when the last fd refcnt is gone. It only affects sock_close() with a parallel sockfs_setattr() in progress, which is not common. Fixes: 86741ec25462 ("net: core: Add a UID field to struct sock.") Reported-by: shankarapailoor <[email protected]> Cc: Tetsuo Handa <[email protected]> Cc: Lorenzo Colitti <[email protected]> Cc: Al Viro <[email protected]> Signed-off-by: Cong Wang <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static void Ins_SRP2( INS_ARG )
{
    CUR.GS.rp2 = (Int)(args[0]);
}
0
[ "CWE-125" ]
ghostpdl
c7c55972758a93350882c32147801a3485b010fe
317,617,670,120,891,500,000,000,000,000,000,000,000
4
Bug 698024: bounds check zone pointer in Ins_MIRP()
static void uas_shutdown(struct device *dev)
{
        struct usb_interface *intf = to_usb_interface(dev);
        struct usb_device *udev = interface_to_usbdev(intf);
        struct Scsi_Host *shost = usb_get_intfdata(intf);
        struct uas_dev_info *devinfo = (struct uas_dev_info *)shost->hostdata;

        if (system_state != SYSTEM_RESTART)
                return;

        devinfo->shutdown = 1;
        uas_free_streams(devinfo);
        usb_set_interface(udev, intf->altsetting[0].desc.bInterfaceNumber, 0);
        usb_reset_device(udev);
}
0
[ "CWE-125" ]
linux
786de92b3cb26012d3d0f00ee37adf14527f35c4
41,250,628,005,460,165,000,000,000,000,000,000,000
15
USB: uas: fix bug in handling of alternate settings The uas driver has a subtle bug in the way it handles alternate settings. The uas_find_uas_alt_setting() routine returns an altsetting value (the bAlternateSetting number in the descriptor), but uas_use_uas_driver() then treats that value as an index to the intf->altsetting array, which it isn't. Normally this doesn't cause any problems because the various alternate settings have bAlternateSetting values 0, 1, 2, ..., so the value is equal to the index in the array. But this is not guaranteed, and Andrey Konovalov used the syzkaller fuzzer with KASAN to get a slab-out-of-bounds error by violating this assumption. This patch fixes the bug by making uas_find_uas_alt_setting() return a pointer to the altsetting entry rather than either the value or the index. Pointers are less subject to misinterpretation. Signed-off-by: Alan Stern <[email protected]> Reported-by: Andrey Konovalov <[email protected]> Tested-by: Andrey Konovalov <[email protected]> CC: Oliver Neukum <[email protected]> CC: <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
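As the commit message notes, returning a pointer to the altsetting entry instead of its `bAlternateSetting` value removes the value-versus-index ambiguity. A small sketch of that API shape; the struct and function names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

struct alt_setting {
    int bAlternateSetting;     /* descriptor value, NOT an array index */
};

/* Search by descriptor value and return a pointer; a pointer cannot be
 * misinterpreted as an index into the array, which was the bug. */
static const struct alt_setting *
find_alt_setting(const struct alt_setting *alts, size_t n, int value)
{
    for (size_t i = 0; i < n; i++)
        if (alts[i].bAlternateSetting == value)
            return &alts[i];
    return NULL;
}
```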
string_replace_with_callback (const char *string, char *(*callback)(void *data, const char *text), void *callback_data, int *errors) { int length, length_value, index_string, index_result; char *result, *result2, *key, *value; const char *pos_end_name; *errors = 0; if (!string) return NULL; length = strlen (string) + 1; result = malloc (length); if (result) { index_string = 0; index_result = 0; while (string[index_string]) { if ((string[index_string] == '\\') && (string[index_string + 1] == '$')) { index_string++; result[index_result++] = string[index_string++]; } else if ((string[index_string] == '$') && (string[index_string + 1] == '{')) { pos_end_name = strchr (string + index_string + 2, '}'); if (pos_end_name) { key = string_strndup (string + index_string + 2, pos_end_name - (string + index_string + 2)); if (key) { value = (*callback) (callback_data, key); if (value) { length_value = strlen (value); length += length_value; result2 = realloc (result, length); if (!result2) { if (result) free (result); free (key); free (value); return NULL; } result = result2; strcpy (result + index_result, value); index_result += length_value; index_string += pos_end_name - string - index_string + 1; free (value); } else { result[index_result++] = string[index_string++]; (*errors)++; } free (key); } else result[index_result++] = string[index_string++]; } else result[index_result++] = string[index_string++]; } else result[index_result++] = string[index_string++]; } result[index_result] = '\0'; } return result; }
0
[ "CWE-20" ]
weechat
efb795c74fe954b9544074aafcebb1be4452b03a
195,072,093,278,838,860,000,000,000,000,000,000,000
81
core: do not call shell to execute command in hook_process (fix security problem when a plugin/script gives untrusted command) (bug #37764)
NO_INLINE JsVar *jspePostfixExpression() {
  JsVar *a;
  // TODO: should be in jspeUnaryExpression
  if (lex->tk==LEX_PLUSPLUS || lex->tk==LEX_MINUSMINUS) {
    int op = lex->tk;
    JSP_ASSERT_MATCH(op);
    a = jspePostfixExpression();
    if (JSP_SHOULD_EXECUTE) {
      JsVar *one = jsvNewFromInteger(1);
      JsVar *res = jsvMathsOpSkipNames(a, one, op==LEX_PLUSPLUS ? '+' : '-');
      jsvUnLock(one);
      // in-place add/subtract
      jsvReplaceWith(a, res);
      jsvUnLock(res);
    }
  } else
    a = jspeFactorFunctionCall();
  return __jspePostfixExpression(a);
}
0
[ "CWE-787" ]
Espruino
e069be2ecc5060ef47391716e4de94999595b260
8,073,462,984,330,933,000,000,000,000,000,000,000
19
Fix potential corruption issue caused by `delete [].__proto__` (fix #2142)
int ssl3_get_client_certificate(SSL *s) { int i, ok, al, ret = -1; X509 *x = NULL; unsigned long l, nc, llen, n; const unsigned char *p, *q; unsigned char *d; STACK_OF(X509) *sk = NULL; n = s->method->ssl_get_message(s, SSL3_ST_SR_CERT_A, SSL3_ST_SR_CERT_B, -1, s->max_cert_list, &ok); if (!ok) return ((int)n); if (s->s3->tmp.message_type == SSL3_MT_CLIENT_KEY_EXCHANGE) { if ((s->verify_mode & SSL_VERIFY_PEER) && (s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE); al = SSL_AD_HANDSHAKE_FAILURE; goto f_err; } /* * If tls asked for a client cert, the client must return a 0 list */ if ((s->version > SSL3_VERSION) && s->s3->tmp.cert_request) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_TLS_PEER_DID_NOT_RESPOND_WITH_CERTIFICATE_LIST); al = SSL_AD_UNEXPECTED_MESSAGE; goto f_err; } s->s3->tmp.reuse_message = 1; return (1); } if (s->s3->tmp.message_type != SSL3_MT_CERTIFICATE) { al = SSL_AD_UNEXPECTED_MESSAGE; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_WRONG_MESSAGE_TYPE); goto f_err; } p = d = (unsigned char *)s->init_msg; if ((sk = sk_X509_new_null()) == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE); goto err; } n2l3(p, llen); if (llen + 3 != n) { al = SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_LENGTH_MISMATCH); goto f_err; } for (nc = 0; nc < llen;) { n2l3(p, l); if ((l + nc + 3) > llen) { al = SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_CERT_LENGTH_MISMATCH); goto f_err; } q = p; x = d2i_X509(NULL, &p, l); if (x == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_ASN1_LIB); goto err; } if (p != (q + l)) { al = SSL_AD_DECODE_ERROR; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_CERT_LENGTH_MISMATCH); goto f_err; } if (!sk_X509_push(sk, x)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE); goto err; } x = NULL; nc += l + 3; } if (sk_X509_num(sk) <= 0) { /* TLS does not mind 0 
certs returned */ if (s->version == SSL3_VERSION) { al = SSL_AD_HANDSHAKE_FAILURE; SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_NO_CERTIFICATES_RETURNED); goto f_err; } /* Fail for TLS only if we required a certificate */ else if ((s->verify_mode & SSL_VERIFY_PEER) && (s->verify_mode & SSL_VERIFY_FAIL_IF_NO_PEER_CERT)) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_PEER_DID_NOT_RETURN_A_CERTIFICATE); al = SSL_AD_HANDSHAKE_FAILURE; goto f_err; } /* No client certificate so digest cached records */ if (s->s3->handshake_buffer && !ssl3_digest_cached_records(s)) { al = SSL_AD_INTERNAL_ERROR; goto f_err; } } else { i = ssl_verify_cert_chain(s, sk); if (i <= 0) { al = ssl_verify_alarm_type(s->verify_result); SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, SSL_R_NO_CERTIFICATE_RETURNED); goto f_err; } } if (s->session->peer != NULL) /* This should not be needed */ X509_free(s->session->peer); s->session->peer = sk_X509_shift(sk); s->session->verify_result = s->verify_result; /* * With the current implementation, sess_cert will always be NULL when we * arrive here. */ if (s->session->sess_cert == NULL) { s->session->sess_cert = ssl_sess_cert_new(); if (s->session->sess_cert == NULL) { SSLerr(SSL_F_SSL3_GET_CLIENT_CERTIFICATE, ERR_R_MALLOC_FAILURE); goto err; } } if (s->session->sess_cert->cert_chain != NULL) sk_X509_pop_free(s->session->sess_cert->cert_chain, X509_free); s->session->sess_cert->cert_chain = sk; /* * Inconsistency alert: cert_chain does *not* include the peer's own * certificate, while we do include it in s3_clnt.c */ sk = NULL; ret = 1; if (0) { f_err: ssl3_send_alert(s, SSL3_AL_FATAL, al); err: s->state = SSL_ST_ERR; } if (x != NULL) X509_free(x); if (sk != NULL) sk_X509_pop_free(sk, X509_free); return (ret); }
1
[ "CWE-125" ]
openssl
52e623c4cb06fffa9d5e75c60b34b4bc130b12e9
140,613,349,966,135,560,000,000,000,000,000,000,000
156
Fix small OOB reads. In ssl3_get_client_certificate, ssl3_get_server_certificate and ssl3_get_certificate_request check we have enough room before reading a length. Thanks to Shi Lei (Gear Team, Qihoo 360 Inc.) for reporting these bugs. CVE-2016-6306 Reviewed-by: Richard Levitte <[email protected]> Reviewed-by: Matt Caswell <[email protected]> (cherry picked from commit ff553f837172ecb2b5c8eca257ec3c5619a4b299)
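The pattern behind this fix is simple: before decoding a length field, verify that the field itself fits in the remaining input. Sketched here for a 3-byte big-endian length like the one `n2l3` reads; the helper name is illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative bounds-checked read of a 3-byte big-endian length.
 * Returns -1 instead of reading past the end of the buffer. */
static long read_u24(const unsigned char *p, size_t remaining)
{
    if (remaining < 3)
        return -1;
    return ((long)p[0] << 16) | ((long)p[1] << 8) | (long)p[2];
}
```

The same check-then-read discipline applies to each nested length inside the certificate list loop.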
int64_t qmp_guest_file_open(const char *path, bool has_mode, const char *mode, Error **err) { FILE *fh; int fd; int64_t ret = -1, handle; if (!has_mode) { mode = "r"; } slog("guest-file-open called, filepath: %s, mode: %s", path, mode); fh = fopen(path, mode); if (!fh) { error_setg_errno(err, errno, "failed to open file '%s' (mode: '%s')", path, mode); return -1; } /* set fd non-blocking to avoid common use cases (like reading from a * named pipe) from hanging the agent */ fd = fileno(fh); ret = fcntl(fd, F_GETFL); ret = fcntl(fd, F_SETFL, ret | O_NONBLOCK); if (ret == -1) { error_setg_errno(err, errno, "failed to make file '%s' non-blocking", path); fclose(fh); return -1; } handle = guest_file_handle_add(fh, err); if (error_is_set(err)) { fclose(fh); return -1; } slog("guest-file-open, handle: %d", handle); return handle; }
1
[ "CWE-264" ]
qemu
c689b4f1bac352dcfd6ecb9a1d45337de0f1de67
19,460,503,677,627,909,000,000,000,000,000,000,000
39
qga: set umask 0077 when daemonizing (CVE-2013-2007) The qemu guest agent creates a bunch of files with insecure permissions when started in daemon mode. For example: -rw-rw-rw- 1 root root /var/log/qemu-ga.log -rw-rw-rw- 1 root root /var/run/qga.state -rw-rw-rw- 1 root root /var/log/qga-fsfreeze-hook.log In addition, at least all files created with the "guest-file-open" QMP command, and all files created with shell output redirection (or otherwise) by utilities invoked by the fsfreeze hook script are affected. For now mask all file mode bits for "group" and "others" in become_daemon(). Temporarily, for compatibility reasons, stick with the 0666 file-mode in case of files newly created by the "guest-file-open" QMP call. Do so without changing the umask temporarily. Signed-off-by: Laszlo Ersek <[email protected]> Signed-off-by: Anthony Liguori <[email protected]>
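The effect of the umask is a bitwise AND-NOT against the requested creation mode, so masking 0077 while daemonizing turns a requested 0666 into 0600 on disk, exactly as the commit message describes. A tiny sketch of that arithmetic:

```c
#include <assert.h>
#include <sys/types.h>

/* The kernel applies: effective = requested & ~umask. With umask 0077,
 * all group/other permission bits are cleared no matter what mode the
 * creating code asked for. */
static mode_t effective_mode(mode_t requested, mode_t umask_bits)
{
    return requested & (mode_t)~umask_bits;
}
```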
static int snd_timer_user_gparams(struct file *file, struct snd_timer_gparams __user *_gparams) { struct snd_timer_gparams gparams; struct snd_timer *t; int err; if (copy_from_user(&gparams, _gparams, sizeof(gparams))) return -EFAULT; mutex_lock(&register_mutex); t = snd_timer_find(&gparams.tid); if (!t) { err = -ENODEV; goto _error; } if (!list_empty(&t->open_list_head)) { err = -EBUSY; goto _error; } if (!t->hw.set_period) { err = -ENOSYS; goto _error; } err = t->hw.set_period(t, gparams.period_num, gparams.period_den); _error: mutex_unlock(&register_mutex); return err; }
0
[ "CWE-200", "CWE-362" ]
linux
ee8413b01045c74340aa13ad5bdf905de32be736
152,895,939,008,705,220,000,000,000,000,000,000,000
28
ALSA: timer: Fix double unlink of active_list ALSA timer instance object has a couple of linked lists and they are unlinked unconditionally at snd_timer_stop(). Meanwhile snd_timer_interrupt() unlinks it, but it calls list_del() which leaves the element list itself unchanged. This ends up with unlinking twice, and it was caught by syzkaller fuzzer. The fix is to use list_del_init() variant properly there, too. Reported-by: Dmitry Vyukov <[email protected]> Tested-by: Dmitry Vyukov <[email protected]> Cc: <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
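The fix swaps `list_del()` for `list_del_init()`: the latter re-links the element to itself, so a second, racing unlink becomes a harmless no-op instead of corrupting neighbouring entries. A self-contained sketch of the relevant primitives, modelled on (but not copied from) the kernel's `list.h`:

```c
#include <assert.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *e, struct list_head *h)
{
    e->next = h->next;
    e->prev = h;
    h->next->prev = e;
    h->next = e;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Unlink and re-initialise: afterwards e points at itself, so calling
 * list_del_init(e) a second time only rewrites e's own pointers. */
static void list_del_init(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
    INIT_LIST_HEAD(e);
}
```

With plain `list_del()` the element's stale pointers still reference the list, so a second unlink writes through freed or already-relinked nodes.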
static int empty_dir(struct inode *inode) { unsigned int offset; struct buffer_head *bh; struct ext4_dir_entry_2 *de, *de1; struct super_block *sb; int err = 0; sb = inode->i_sb; if (inode->i_size < EXT4_DIR_REC_LEN(1) + EXT4_DIR_REC_LEN(2) || !(bh = ext4_bread(NULL, inode, 0, 0, &err))) { if (err) EXT4_ERROR_INODE(inode, "error %d reading directory lblock 0", err); else ext4_warning(inode->i_sb, "bad directory (dir #%lu) - no data block", inode->i_ino); return 1; } if (!buffer_verified(bh) && !ext4_dirent_csum_verify(inode, (struct ext4_dir_entry *)bh->b_data)) { EXT4_ERROR_INODE(inode, "checksum error reading directory " "lblock 0"); return -EIO; } set_buffer_verified(bh); de = (struct ext4_dir_entry_2 *) bh->b_data; de1 = ext4_next_entry(de, sb->s_blocksize); if (le32_to_cpu(de->inode) != inode->i_ino || !le32_to_cpu(de1->inode) || strcmp(".", de->name) || strcmp("..", de1->name)) { ext4_warning(inode->i_sb, "bad directory (dir #%lu) - no `.' or `..'", inode->i_ino); brelse(bh); return 1; } offset = ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize) + ext4_rec_len_from_disk(de1->rec_len, sb->s_blocksize); de = ext4_next_entry(de1, sb->s_blocksize); while (offset < inode->i_size) { if (!bh || (void *) de >= (void *) (bh->b_data+sb->s_blocksize)) { unsigned int lblock; err = 0; brelse(bh); lblock = offset >> EXT4_BLOCK_SIZE_BITS(sb); bh = ext4_bread(NULL, inode, lblock, 0, &err); if (!bh) { if (err) EXT4_ERROR_INODE(inode, "error %d reading directory " "lblock %u", err, lblock); offset += sb->s_blocksize; continue; } if (!buffer_verified(bh) && !ext4_dirent_csum_verify(inode, (struct ext4_dir_entry *)bh->b_data)) { EXT4_ERROR_INODE(inode, "checksum error " "reading directory lblock 0"); return -EIO; } set_buffer_verified(bh); de = (struct ext4_dir_entry_2 *) bh->b_data; } if (ext4_check_dir_entry(inode, NULL, de, bh, offset)) { de = (struct ext4_dir_entry_2 *)(bh->b_data + sb->s_blocksize); offset = (offset | (sb->s_blocksize - 1)) + 1; continue; } if 
(le32_to_cpu(de->inode)) { brelse(bh); return 0; } offset += ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize); de = ext4_next_entry(de, sb->s_blocksize); } brelse(bh); return 1; }
0
[ "CWE-20" ]
linux
c9b92530a723ac5ef8e352885a1862b18f31b2f5
238,320,405,731,921,230,000,000,000,000,000,000,000
85
ext4: make orphan functions be no-op in no-journal mode Instead of checking whether the handle is valid, we check if journal is enabled. This avoids taking the s_orphan_lock mutex in all cases when there is no journal in use, including the error paths where ext4_orphan_del() is called with a handle set to NULL. Signed-off-by: Anatol Pomozov <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]>
rf64_write_tailer (SF_PRIVATE *psf)
{	/* Reset the current header buffer length to zero. */
	psf->header.ptr [0] = 0 ;
	psf->header.indx = 0 ;

	if (psf->bytewidth > 0 && psf->sf.seekable == SF_TRUE)
	{	psf->datalength = psf->sf.frames * psf->bytewidth * psf->sf.channels ;
		psf->dataend = psf->dataoffset + psf->datalength ;
		} ;

	if (psf->dataend > 0)
		psf_fseek (psf, psf->dataend, SEEK_SET) ;
	else
		psf->dataend = psf_fseek (psf, 0, SEEK_END) ;

	if (psf->dataend & 1)
		psf_binheader_writef (psf, "z", BHWz (1)) ;

	if (psf->strings.flags & SF_STR_LOCATE_END)
		wavlike_write_strings (psf, SF_STR_LOCATE_END) ;

	/* Write the tailer. */
	if (psf->header.indx > 0)
		psf_fwrite (psf->header.ptr, psf->header.indx, 1, psf) ;

	return 0 ;
} /* rf64_write_tailer */
0
[ "CWE-476" ]
libsndfile
6f3266277bed16525f0ac2f0f03ff4626f1923e5
156,533,627,646,915,930,000,000,000,000,000,000,000
28
Fix max channel count bug The code was allowing files to be written with a channel count of exactly `SF_MAX_CHANNELS` but was failing to read some file formats with the same channel count.
void StreamTcpPseudoPacketCreateStreamEndPacket(ThreadVars *tv, StreamTcpThread *stt, Packet *p, TcpSession *ssn, PacketQueue *pq) { SCEnter(); if (p->flags & PKT_PSEUDO_STREAM_END) { SCReturn; } /* no need for a pseudo packet if there is nothing left to reassemble */ if (ssn->server.seg_list == NULL && ssn->client.seg_list == NULL) { SCReturn; } Packet *np = StreamTcpPseudoSetup(p, GET_PKT_DATA(p), GET_PKT_LEN(p)); if (np == NULL) { SCLogDebug("The packet received from packet allocation is NULL"); StatsIncr(tv, stt->counter_tcp_pseudo_failed); SCReturn; } PKT_SET_SRC(np, PKT_SRC_STREAM_TCP_STREAM_END_PSEUDO); /* Setup the IP and TCP headers */ StreamTcpPseudoPacketSetupHeader(np,p); np->tenant_id = p->flow->tenant_id; np->flowflags = p->flowflags; np->flags |= PKT_STREAM_EST; np->flags |= PKT_STREAM_EOF; np->flags |= PKT_HAS_FLOW; np->flags |= PKT_PSEUDO_STREAM_END; if (p->flags & PKT_NOPACKET_INSPECTION) { DecodeSetNoPacketInspectionFlag(np); } if (p->flags & PKT_NOPAYLOAD_INSPECTION) { DecodeSetNoPayloadInspectionFlag(np); } if (PKT_IS_TOSERVER(p)) { SCLogDebug("original is to_server, so pseudo is to_client"); np->flowflags &= ~FLOW_PKT_TOSERVER; np->flowflags |= FLOW_PKT_TOCLIENT; #ifdef DEBUG BUG_ON(!(PKT_IS_TOCLIENT(np))); BUG_ON((PKT_IS_TOSERVER(np))); #endif } else if (PKT_IS_TOCLIENT(p)) { SCLogDebug("original is to_client, so pseudo is to_server"); np->flowflags &= ~FLOW_PKT_TOCLIENT; np->flowflags |= FLOW_PKT_TOSERVER; #ifdef DEBUG BUG_ON(!(PKT_IS_TOSERVER(np))); BUG_ON((PKT_IS_TOCLIENT(np))); #endif } PacketEnqueue(pq, np); StatsIncr(tv, stt->counter_tcp_pseudo); SCReturn; }
0
[]
suricata
843d0b7a10bb45627f94764a6c5d468a24143345
9,532,805,427,145,807,000,000,000,000,000,000,000
63
stream: support RST getting lost/ignored In case of a valid RST on a SYN, the state is switched to 'TCP_CLOSED'. However, the target of the RST may not have received it, or may not have accepted it. Also, the RST may have been injected, so the supposed sender may not actually be aware of the RST that was sent in its name. In this case the previous behavior was to switch the state to CLOSED and accept no further TCP updates or stream reassembly. This patch changes this. It still switches the state to CLOSED, as this is by far the most likely to be correct. However, it will reconsider the state if the receiver continues to talk. To do this, on each state change the previous state will be recorded in TcpSession::pstate. If a non-RST packet is received after a RST, this TcpSession::pstate is used to try to continue the conversation. If the (supposed) sender of the RST is also continuing the conversation as normal, it's highly likely it didn't send the RST. In this case a stream event is generated. Ticket: #2501 Reported-By: Kirill Shipulin
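The pstate mechanism this commit message describes — remembering the previous state on every transition so a dubious RST can be reconsidered later — can be sketched in a few lines. The struct and names below are illustrative, not Suricata's actual TcpSession:

```c
#include <assert.h>

/* Illustrative sketch of the previous-state tracking described above;
 * not Suricata's actual TcpSession definition. */
enum { TCP_ESTABLISHED = 1, TCP_CLOSED = 2 };

struct ssn_sketch {
    int state;  /* current TCP state */
    int pstate; /* state before the last transition */
};

/* On every state change, remember where we came from so a non-RST packet
 * arriving after a RST can resume the old conversation state. */
static struct ssn_sketch ssn_set_state(struct ssn_sketch s, int new_state)
{
    s.pstate = s.state;
    s.state = new_state;
    return s;
}
```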
static void handle_wl_output_mode(void *data, struct wl_output *output, uint32_t flags, int32_t width, int32_t height, int32_t refresh) { // Who cares }
0
[ "CWE-703" ]
swaylock
1d1c75b6316d21933069a9d201f966d84099f6ca
208,605,364,177,085,270,000,000,000,000,000,000,000
4
Add support for ext-session-lock-v1 This is a new protocol to lock the session [1]. It should be more reliable than layer-shell + input-inhibitor. [1]: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/131
KEY_CheckKeyLength(uint32_t key_id) { Key *key; key = get_key_by_id(key_id); if (!key) return 0; switch (key->class) { case NTP_MAC: return key->data.ntp_mac.length >= MIN_SECURE_KEY_LENGTH; default: return 1; } }
0
[ "CWE-59" ]
chrony
e18903a6b56341481a2e08469c0602010bf7bfe3
22,885,825,468,540,683,000,000,000,000,000,000,000
16
switch to new util file functions Replace all fopen(), rename(), and unlink() calls with the new util functions.
static void cmd_inquiry(IDEState *s, uint8_t *buf) { int max_len = buf[4]; buf[0] = 0x05; /* CD-ROM */ buf[1] = 0x80; /* removable */ buf[2] = 0x00; /* ISO */ buf[3] = 0x21; /* ATAPI-2 (XXX: put ATAPI-4 ?) */ buf[4] = 31; /* additional length */ buf[5] = 0; /* reserved */ buf[6] = 0; /* reserved */ buf[7] = 0; /* reserved */ padstr8(buf + 8, 8, "QEMU"); padstr8(buf + 16, 16, "QEMU DVD-ROM"); padstr8(buf + 32, 4, s->version); ide_atapi_cmd_reply(s, 36, max_len); }
0
[]
qemu
ce560dcf20c14194db5ef3b9fc1ea592d4e68109
303,769,312,568,046,140,000,000,000,000,000,000,000
17
ATAPI: STARTSTOPUNIT only eject/load media if powercondition is 0 The START STOP UNIT command will only eject/load media if power condition is zero. If power condition is !0 then LOEJ and START will be ignored. From MMC (sbc contains similar wording too): The Power Conditions field requests the block device to be placed in the power condition defined in Table 558. If this field has a value other than 0h then the Start and LoEj bits shall be ignored. Signed-off-by: Ronnie Sahlberg <[email protected]> Signed-off-by: Kevin Wolf <[email protected]>
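The MMC rule quoted in this message — LoEj and Start are honoured only when the Power Conditions field is zero — reduces to a small predicate. This is a hedged sketch with illustrative names, not QEMU's actual handler:

```c
#include <assert.h>

/* Sketch of the START STOP UNIT rule: eject/load only when the Power
 * Conditions field is 0; otherwise LoEj and Start shall be ignored. */
static int startstop_may_eject(unsigned power_condition, int loej, int start)
{
    if (power_condition != 0)
        return 0;               /* LoEj and Start ignored */
    return loej && !start;      /* LoEj=1, Start=0 means eject the media */
}
```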
int MS_CALLBACK verify_callback(int ok, X509_STORE_CTX *ctx) { X509 *err_cert; int err,depth; err_cert=X509_STORE_CTX_get_current_cert(ctx); err= X509_STORE_CTX_get_error(ctx); depth= X509_STORE_CTX_get_error_depth(ctx); if (!verify_quiet || !ok) { BIO_printf(bio_err,"depth=%d ",depth); if (err_cert) { X509_NAME_print_ex(bio_err, X509_get_subject_name(err_cert), 0, XN_FLAG_ONELINE); BIO_puts(bio_err, "\n"); } else BIO_puts(bio_err, "<no cert>\n"); } if (!ok) { BIO_printf(bio_err,"verify error:num=%d:%s\n",err, X509_verify_cert_error_string(err)); if (verify_depth >= depth) { if (!verify_return_error) ok=1; verify_error=X509_V_OK; } else { ok=0; verify_error=X509_V_ERR_CERT_CHAIN_TOO_LONG; } } switch (err) { case X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT: BIO_puts(bio_err,"issuer= "); X509_NAME_print_ex(bio_err, X509_get_issuer_name(err_cert), 0, XN_FLAG_ONELINE); BIO_puts(bio_err, "\n"); break; case X509_V_ERR_CERT_NOT_YET_VALID: case X509_V_ERR_ERROR_IN_CERT_NOT_BEFORE_FIELD: BIO_printf(bio_err,"notBefore="); ASN1_TIME_print(bio_err,X509_get_notBefore(err_cert)); BIO_printf(bio_err,"\n"); break; case X509_V_ERR_CERT_HAS_EXPIRED: case X509_V_ERR_ERROR_IN_CERT_NOT_AFTER_FIELD: BIO_printf(bio_err,"notAfter="); ASN1_TIME_print(bio_err,X509_get_notAfter(err_cert)); BIO_printf(bio_err,"\n"); break; case X509_V_ERR_NO_EXPLICIT_POLICY: policies_print(bio_err, ctx); break; } if (err == X509_V_OK && ok == 2 && !verify_quiet) policies_print(bio_err, ctx); if (ok && !verify_quiet) BIO_printf(bio_err,"verify return:%d\n",ok); return(ok); }
0
[]
openssl
a70da5b3ecc3160368529677006801c58cb369db
84,637,427,452,457,550,000,000,000,000,000,000,000
68
New functions to check a hostname email or IP address against a certificate. Add options to s_client, s_server and x509 utilities to print results of checks.
do_resources_recurs (WinLibrary *fi, WinResource *base, WinResource *type_wr, WinResource *name_wr, WinResource *lang_wr, const char *type, const char *name, const char *lang, DoResourceCallback cb) { int c, rescnt; WinResource *wr; /* get a list of all resources at this level */ wr = list_resources (fi, base, &rescnt); if (wr == NULL) return; /* process each resource listed */ for (c = 0 ; c < rescnt ; c++) { /* (over)write the corresponding WinResource holder with the current */ memcpy(WINRESOURCE_BY_LEVEL(wr[c].level), wr+c, sizeof(WinResource)); if ((base && (wr[c].level <= base->level)) || (wr[c].level >= 3)) { warn(_("%s: resource structure malformed"), fi->name); return; } /* go deeper unless there is something that does NOT match */ if (LEVEL_MATCHES(type) && LEVEL_MATCHES(name) && LEVEL_MATCHES(lang)) { if (wr->is_directory) do_resources_recurs (fi, wr+c, type_wr, name_wr, lang_wr, type, name, lang, cb); else cb(fi, wr+c, type_wr, name_wr, lang_wr); } } /* since we're moving back one level after this, unset the * WinResource holder used on this level */ memset(WINRESOURCE_BY_LEVEL(wr[0].level), 0, sizeof(WinResource)); }
0
[]
icoutils
a8c8543731ed255e6f8bb81c662155f98710ac20
138,487,623,016,309,200,000,000,000,000,000,000,000
36
wrestool: read_library() moves memory around, check it doesn't overwrite header data
EXPORTED int mailbox_open_iwl(const char *name, struct mailbox **mailboxptr) { return mailbox_open_advanced(name, LOCK_SHARED, LOCK_EXCLUSIVE, NULL, mailboxptr); }
0
[]
cyrus-imapd
1d6d15ee74e11a9bd745e80be69869e5fb8d64d6
308,568,166,508,064,360,000,000,000,000,000,000,000
5
mailbox.c/reconstruct.c: Add mailbox_mbentry_from_path()
undo_read(bufinfo_T *bi, char_u *buffer, size_t size) { #ifdef FEAT_CRYPT if (bi->bi_buffer != NULL) { int size_todo = (int)size; char_u *p = buffer; while (size_todo > 0) { size_t n; if (bi->bi_used >= bi->bi_avail) { n = fread(bi->bi_buffer, 1, (size_t)CRYPT_BUF_SIZE, bi->bi_fp); if (n == 0) { /* Error may be checked for only later. Fill with zeros, * so that the reader won't use garbage. */ vim_memset(p, 0, size_todo); return FAIL; } bi->bi_avail = n; bi->bi_used = 0; crypt_decode_inplace(bi->bi_state, bi->bi_buffer, bi->bi_avail); } n = size_todo; if (n > bi->bi_avail - bi->bi_used) n = bi->bi_avail - bi->bi_used; mch_memmove(p, bi->bi_buffer + bi->bi_used, n); bi->bi_used += n; size_todo -= (int)n; p += n; } return OK; } #endif if (fread(buffer, (size_t)size, 1, bi->bi_fp) != 1) return FAIL; return OK; }
0
[ "CWE-190" ]
vim
3eb1637b1bba19519885dd6d377bd5596e91d22c
129,500,758,764,281,720,000,000,000,000,000,000,000
41
patch 8.0.0377: possible overflow when reading corrupted undo file Problem: Possible overflow when reading corrupted undo file. Solution: Check if allocated size is not too big. (King)
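The fix summary above ("check if allocated size is not too big") corresponds to a standard multiplication-overflow guard before allocating from an untrusted count. A minimal sketch, with an illustrative limit parameter rather than Vim's actual constants:

```c
#include <assert.h>
#include <stdint.h>

/* Reject allocation sizes derived from untrusted file data when
 * count * elem_size would exceed max_bytes (or wrap).  Dividing instead
 * of multiplying keeps the check itself overflow-free. */
static int alloc_size_ok(uint64_t count, uint64_t elem_size, uint64_t max_bytes)
{
    if (count == 0 || elem_size == 0)
        return 1;                       /* nothing to allocate */
    return count <= max_bytes / elem_size;
}
```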
intrusive_ptr<Expression> ExpressionDateToParts::optimize() { _date = _date->optimize(); if (_timeZone) { _timeZone = _timeZone->optimize(); } if (_iso8601) { _iso8601 = _iso8601->optimize(); } if (ExpressionConstant::allNullOrConstant({_date, _iso8601, _timeZone})) { // Everything is a constant, so we can turn into a constant. return ExpressionConstant::create( getExpressionContext(), evaluate(Document{}, &(getExpressionContext()->variables))); } return this; }
0
[]
mongo
1772b9a0393b55e6a280a35e8f0a1f75c014f301
304,320,830,742,354,780,000,000,000,000,000,000,000
17
SERVER-49404 Enforce additional checks in $arrayToObject
static int mov_read_sbgp(MOVContext *c, AVIOContext *pb, MOVAtom atom) { AVStream *st; MOVStreamContext *sc; unsigned int i, entries; uint8_t version; uint32_t grouping_type; MOVSbgp *table, **tablep; int *table_count; if (c->fc->nb_streams < 1) return 0; st = c->fc->streams[c->fc->nb_streams-1]; sc = st->priv_data; version = avio_r8(pb); /* version */ avio_rb24(pb); /* flags */ grouping_type = avio_rl32(pb); if (grouping_type == MKTAG('r','a','p',' ')) { tablep = &sc->rap_group; table_count = &sc->rap_group_count; } else if (grouping_type == MKTAG('s','y','n','c')) { tablep = &sc->sync_group; table_count = &sc->sync_group_count; } else { return 0; } if (version == 1) avio_rb32(pb); /* grouping_type_parameter */ entries = avio_rb32(pb); if (!entries) return 0; if (*tablep) av_log(c->fc, AV_LOG_WARNING, "Duplicated SBGP %s atom\n", av_fourcc2str(grouping_type)); av_freep(tablep); table = av_malloc_array(entries, sizeof(*table)); if (!table) return AVERROR(ENOMEM); *tablep = table; for (i = 0; i < entries && !pb->eof_reached; i++) { table[i].count = avio_rb32(pb); /* sample_count */ table[i].index = avio_rb32(pb); /* group_description_index */ } *table_count = i; if (pb->eof_reached) { av_log(c->fc, AV_LOG_WARNING, "reached eof, corrupted SBGP atom\n"); return AVERROR_EOF; } return 0; }
0
[ "CWE-703" ]
FFmpeg
c953baa084607dd1d84c3bfcce3cf6a87c3e6e05
299,806,781,581,892,020,000,000,000,000,000,000,000
57
avformat/mov: Check count sums in build_open_gop_key_points() Fixes: ffmpeg.md Fixes: Out of array access Fixes: CVE-2022-2566 Found-by: Andy Nguyen <[email protected]> Found-by: 3pvd <[email protected]> Reviewed-by: Andy Nguyen <[email protected]> Signed-off-by: Michael Niedermayer <[email protected]>
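The CVE-2022-2566 fix referenced above checks that summed sample counts cannot wrap. The core of such a check can be sketched as follows (illustrative names, not FFmpeg's actual code; returns -1 on overflow, else the total):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Accumulate 32-bit counts read from an atom, refusing any step that
 * would wrap the running total; corrupted input yields -1. */
static int64_t sum_counts_checked(const uint32_t *counts, size_t n)
{
    uint32_t total = 0;
    for (size_t i = 0; i < n; i++) {
        if (counts[i] > UINT32_MAX - total)
            return -1;          /* sum would overflow: reject the atom */
        total += counts[i];
    }
    return (int64_t)total;
}
```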
static size_t PCLDeltaCompressImage(const size_t length, const unsigned char *previous_pixels,const unsigned char *pixels, unsigned char *compress_pixels) { int delta, j, replacement; register ssize_t i, x; register unsigned char *q; q=compress_pixels; for (x=0; x < (ssize_t) length; ) { j=0; for (i=0; x < (ssize_t) length; x++) { if (*pixels++ != *previous_pixels++) { i=1; break; } j++; } while (x < (ssize_t) length) { x++; if (*pixels == *previous_pixels) break; i++; previous_pixels++; pixels++; } if (i == 0) break; replacement=j >= 31 ? 31 : j; j-=replacement; delta=i >= 8 ? 8 : i; *q++=(unsigned char) (((delta-1) << 5) | replacement); if (replacement == 31) { for (replacement=255; j != 0; ) { if (replacement > j) replacement=j; *q++=(unsigned char) replacement; j-=replacement; } if (replacement == 255) *q++='\0'; } for (pixels-=i; i != 0; ) { for (i-=delta; delta != 0; delta--) *q++=(*pixels++); if (i == 0) break; delta=i; if (i >= 8) delta=8; *q++=(unsigned char) ((delta-1) << 5); } } return((size_t) (q-compress_pixels)); }
0
[ "CWE-401" ]
ImageMagick
c0fe488e7052f68d4eb7768805a857ef6fef928d
178,281,003,676,363,270,000,000,000,000,000,000,000
70
https://github.com/ImageMagick/ImageMagick/issues/1520
trace_get_syscall_nr(struct task_struct *task, struct pt_regs *regs) { if (unlikely(arch_trace_is_compat_syscall(regs))) return -1; return syscall_get_nr(task, regs); }
0
[ "CWE-125", "CWE-476", "CWE-119", "CWE-264" ]
linux
086ba77a6db00ed858ff07451bedee197df868c9
239,718,463,646,162,030,000,000,000,000,000,000,000
7
tracing/syscalls: Ignore numbers outside NR_syscalls' range ARM has some private syscalls (for example, set_tls(2)) which lie outside the range of NR_syscalls. If any of these are called while syscall tracing is being performed, out-of-bounds array access will occur in the ftrace and perf sys_{enter,exit} handlers. # trace-cmd record -e raw_syscalls:* true && trace-cmd report ... true-653 [000] 384.675777: sys_enter: NR 192 (0, 1000, 3, 4000022, ffffffff, 0) true-653 [000] 384.675812: sys_exit: NR 192 = 1995915264 true-653 [000] 384.675971: sys_enter: NR 983045 (76f74480, 76f74000, 76f74b28, 76f74480, 76f76f74, 1) true-653 [000] 384.675988: sys_exit: NR 983045 = 0 ... # trace-cmd record -e syscalls:* true [ 17.289329] Unable to handle kernel paging request at virtual address aaaaaace [ 17.289590] pgd = 9e71c000 [ 17.289696] [aaaaaace] *pgd=00000000 [ 17.289985] Internal error: Oops: 5 [#1] PREEMPT SMP ARM [ 17.290169] Modules linked in: [ 17.290391] CPU: 0 PID: 704 Comm: true Not tainted 3.18.0-rc2+ #21 [ 17.290585] task: 9f4dab00 ti: 9e710000 task.ti: 9e710000 [ 17.290747] PC is at ftrace_syscall_enter+0x48/0x1f8 [ 17.290866] LR is at syscall_trace_enter+0x124/0x184 Fix this by ignoring out-of-NR_syscalls-bounds syscall numbers. Commit cd0980fc8add "tracing: Check invalid syscall nr while tracing syscalls" added the check for less than zero, but it should have also checked for greater than NR_syscalls. Link: http://lkml.kernel.org/p/[email protected] Fixes: cd0980fc8add "tracing: Check invalid syscall nr while tracing syscalls" Cc: [email protected] # 2.6.33+ Signed-off-by: Rabin Vincent <[email protected]> Signed-off-by: Steven Rostedt <[email protected]>
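The out-of-range check described in this message is a two-sided bounds test applied before the number is used as an array index. A minimal sketch; NR_SYSCALLS_SKETCH is an illustrative stand-in for the kernel's NR_syscalls:

```c
#include <assert.h>

#define NR_SYSCALLS_SKETCH 400  /* illustrative table size */

/* A syscall number is only a valid index into the metadata table when it
 * lies in [0, NR_syscalls); ARM's private syscalls (e.g. 983045) fail this. */
static int syscall_nr_in_range(long nr)
{
    return nr >= 0 && nr < NR_SYSCALLS_SKETCH;
}
```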
static void mmput_async_fn(struct work_struct *work) { struct mm_struct *mm = container_of(work, struct mm_struct, async_put_work); __mmput(mm); }
0
[ "CWE-416", "CWE-703" ]
linux
2b7e8665b4ff51c034c55df3cff76518d1a9ee3a
170,588,262,403,942,580,000,000,000,000,000,000,000
5
fork: fix incorrect fput of ->exe_file causing use-after-free Commit 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable") made it possible to kill a forking task while it is waiting to acquire its ->mmap_sem for write, in dup_mmap(). However, it was overlooked that this introduced an new error path before a reference is taken on the mm_struct's ->exe_file. Since the ->exe_file of the new mm_struct was already set to the old ->exe_file by the memcpy() in dup_mm(), it was possible for the mmput() in the error path of dup_mm() to drop a reference to ->exe_file which was never taken. This caused the struct file to later be freed prematurely. Fix it by updating mm_init() to NULL out the ->exe_file, in the same place it clears other things like the list of mmaps. This bug was found by syzkaller. It can be reproduced using the following C program: #define _GNU_SOURCE #include <pthread.h> #include <stdlib.h> #include <sys/mman.h> #include <sys/syscall.h> #include <sys/wait.h> #include <unistd.h> static void *mmap_thread(void *_arg) { for (;;) { mmap(NULL, 0x1000000, PROT_READ, MAP_POPULATE|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0); } } static void *fork_thread(void *_arg) { usleep(rand() % 10000); fork(); } int main(void) { fork(); fork(); fork(); for (;;) { if (fork() == 0) { pthread_t t; pthread_create(&t, NULL, mmap_thread, NULL); pthread_create(&t, NULL, fork_thread, NULL); usleep(rand() % 10000); syscall(__NR_exit_group, 0); } wait(NULL); } } No special kernel config options are needed. It usually causes a NULL pointer dereference in __remove_shared_vm_struct() during exit, or in dup_mmap() (which is usually inlined into copy_process()) during fork. Both are due to a vm_area_struct's ->vm_file being used after it's already been freed. 
Google Bug Id: 64772007 Link: http://lkml.kernel.org/r/[email protected] Fixes: 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable") Signed-off-by: Eric Biggers <[email protected]> Tested-by: Mark Rutland <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Konstantin Khlebnikov <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: <[email protected]> [v4.7+] Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
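The essence of the fix described above — after a wholesale memcpy() of the parent's mm, the init path must clear the borrowed ->exe_file pointer before any error path can mmput() it — can be sketched as follows. The struct is illustrative, not the kernel's mm_struct:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the duplicated structure. */
struct mm_sketch {
    void *exe_file;  /* pointer copied by memcpy() but not yet ref-counted */
};

/* mm_init()-style step: NULL the inherited pointer so an early error
 * path cannot drop a reference that was never taken. */
static struct mm_sketch mm_init_sketch(struct mm_sketch mm)
{
    mm.exe_file = NULL;
    return mm;
}
```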
Alter_drop *clone(MEM_ROOT *mem_root) const { return new (mem_root) Alter_drop(*this); }
0
[ "CWE-416" ]
server
4681b6f2d8c82b4ec5cf115e83698251963d80d5
105,404,184,828,675,020,000,000,000,000,000,000,000
2
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob the bug was that in_vector array in Item_func_in was allocated in the statement arena, not in the table->expr_arena. revert part of the 5acd391e8b2d. Instead, change the arena correctly in fix_all_session_vcol_exprs(). Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force item tree changes to be rolled back (because they were allocated in the wrong arena and didn't persist. now they do)
static int cqspi_indirect_read_execute(struct spi_nor *nor, u8 *rxbuf, const unsigned n_rx) { struct cqspi_flash_pdata *f_pdata = nor->priv; struct cqspi_st *cqspi = f_pdata->cqspi; void __iomem *reg_base = cqspi->iobase; void __iomem *ahb_base = cqspi->ahb_base; unsigned int remaining = n_rx; unsigned int bytes_to_read = 0; int ret = 0; writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES); /* Clear all interrupts. */ writel(CQSPI_IRQ_STATUS_MASK, reg_base + CQSPI_REG_IRQSTATUS); writel(CQSPI_IRQ_MASK_RD, reg_base + CQSPI_REG_IRQMASK); reinit_completion(&cqspi->transfer_complete); writel(CQSPI_REG_INDIRECTRD_START_MASK, reg_base + CQSPI_REG_INDIRECTRD); while (remaining > 0) { ret = wait_for_completion_timeout(&cqspi->transfer_complete, msecs_to_jiffies (CQSPI_READ_TIMEOUT_MS)); bytes_to_read = cqspi_get_rd_sram_level(cqspi); if (!ret && bytes_to_read == 0) { dev_err(nor->dev, "Indirect read timeout, no bytes\n"); ret = -ETIMEDOUT; goto failrd; } while (bytes_to_read != 0) { bytes_to_read *= cqspi->fifo_width; bytes_to_read = bytes_to_read > remaining ? remaining : bytes_to_read; readsl(ahb_base, rxbuf, DIV_ROUND_UP(bytes_to_read, 4)); rxbuf += bytes_to_read; remaining -= bytes_to_read; bytes_to_read = cqspi_get_rd_sram_level(cqspi); } if (remaining > 0) reinit_completion(&cqspi->transfer_complete); } /* Check indirect done status */ ret = cqspi_wait_for_bit(reg_base + CQSPI_REG_INDIRECTRD, CQSPI_REG_INDIRECTRD_DONE_MASK, 0); if (ret) { dev_err(nor->dev, "Indirect read completion error (%i)\n", ret); goto failrd; } /* Disable interrupt */ writel(0, reg_base + CQSPI_REG_IRQMASK); /* Clear indirect completion status */ writel(CQSPI_REG_INDIRECTRD_DONE_MASK, reg_base + CQSPI_REG_INDIRECTRD); return 0; failrd: /* Disable interrupt */ writel(0, reg_base + CQSPI_REG_IRQMASK); /* Cancel the indirect read */ writel(CQSPI_REG_INDIRECTWR_CANCEL_MASK, reg_base + CQSPI_REG_INDIRECTRD); return ret; }
0
[ "CWE-119", "CWE-787" ]
linux
193e87143c290ec16838f5368adc0e0bc94eb931
288,452,690,763,751,200,000,000,000,000,000,000,000
75
mtd: spi-nor: Off by one in cqspi_setup_flash() There are CQSPI_MAX_CHIPSELECT elements in the ->f_pdata array so the > should be >=. Fixes: 140623410536 ('mtd: spi-nor: Add driver for Cadence Quad SPI Flash Controller') Signed-off-by: Dan Carpenter <[email protected]> Reviewed-by: Marek Vasut <[email protected]> Signed-off-by: Cyrille Pitchen <[email protected]>
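The one-character fix above ("the > should be >=") is the classic array-bounds off-by-one: an index into an N-element array is valid only when it is strictly less than N. A sketch with an illustrative bound:

```c
#include <assert.h>

#define MAX_CHIPSELECT_SKETCH 4  /* illustrative array length */

/* Valid indexes into a 4-element f_pdata-style array are 0..3; rejecting
 * only cs > 4 (the original bug) lets cs == 4 read one element past the end. */
static int chipselect_in_bounds(unsigned int cs)
{
    return cs < MAX_CHIPSELECT_SKETCH;
}
```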
static void io_req_complete_state(struct io_kiocb *req, long res, unsigned int cflags) { io_clean_op(req); req->result = res; req->compl.cflags = cflags; req->flags |= REQ_F_COMPLETE_INLINE; }
0
[ "CWE-667" ]
linux
3ebba796fa251d042be42b929a2d916ee5c34a49
102,641,515,523,356,360,000,000,000,000,000,000,000
8
io_uring: ensure that SQPOLL thread is started for exit If we create it in a disabled state because IORING_SETUP_R_DISABLED is set on ring creation, we need to ensure that we've kicked the thread if we're exiting before it's been explicitly disabled. Otherwise we can run into a deadlock where exit is waiting go park the SQPOLL thread, but the SQPOLL thread itself is waiting to get a signal to start. That results in the below trace of both tasks hung, waiting on each other: INFO: task syz-executor458:8401 blocked for more than 143 seconds. Not tainted 5.11.0-next-20210226-syzkaller #0 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:syz-executor458 state:D stack:27536 pid: 8401 ppid: 8400 flags:0x00004004 Call Trace: context_switch kernel/sched/core.c:4324 [inline] __schedule+0x90c/0x21a0 kernel/sched/core.c:5075 schedule+0xcf/0x270 kernel/sched/core.c:5154 schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868 do_wait_for_common kernel/sched/completion.c:85 [inline] __wait_for_common kernel/sched/completion.c:106 [inline] wait_for_common kernel/sched/completion.c:117 [inline] wait_for_completion+0x168/0x270 kernel/sched/completion.c:138 io_sq_thread_park fs/io_uring.c:7115 [inline] io_sq_thread_park+0xd5/0x130 fs/io_uring.c:7103 io_uring_cancel_task_requests+0x24c/0xd90 fs/io_uring.c:8745 __io_uring_files_cancel+0x110/0x230 fs/io_uring.c:8840 io_uring_files_cancel include/linux/io_uring.h:47 [inline] do_exit+0x299/0x2a60 kernel/exit.c:780 do_group_exit+0x125/0x310 kernel/exit.c:922 __do_sys_exit_group kernel/exit.c:933 [inline] __se_sys_exit_group kernel/exit.c:931 [inline] __x64_sys_exit_group+0x3a/0x50 kernel/exit.c:931 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x43e899 RSP: 002b:00007ffe89376d48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7 RAX: ffffffffffffffda RBX: 00000000004af2f0 RCX: 000000000043e899 RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000 RBP: 
0000000000000000 R08: ffffffffffffffc0 R09: 0000000010000000 R10: 0000000000008011 R11: 0000000000000246 R12: 00000000004af2f0 R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001 INFO: task iou-sqp-8401:8402 can't die for more than 143 seconds. task:iou-sqp-8401 state:D stack:30272 pid: 8402 ppid: 8400 flags:0x00004004 Call Trace: context_switch kernel/sched/core.c:4324 [inline] __schedule+0x90c/0x21a0 kernel/sched/core.c:5075 schedule+0xcf/0x270 kernel/sched/core.c:5154 schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868 do_wait_for_common kernel/sched/completion.c:85 [inline] __wait_for_common kernel/sched/completion.c:106 [inline] wait_for_common kernel/sched/completion.c:117 [inline] wait_for_completion+0x168/0x270 kernel/sched/completion.c:138 io_sq_thread+0x27d/0x1ae0 fs/io_uring.c:6717 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294 INFO: task iou-sqp-8401:8402 blocked for more than 143 seconds. Reported-by: [email protected] Signed-off-by: Jens Axboe <[email protected]>
TEST_F(QuicServerTransportTest, ReceiveCloseAfterLocalError) { auto qLogger = std::make_shared<FileQLogger>(VantagePoint::Server); server->getNonConstConn().qLogger = qLogger; ShortHeader header( ProtectionType::KeyPhaseZero, *server->getConn().serverConnectionId, clientNextAppDataPacketNum++); RegularQuicPacketBuilder builder( server->getConn().udpSendPacketLen, std::move(header), 0 /* largestAcked */); builder.encodePacketHeader(); ASSERT_TRUE(builder.canBuildPacket()); // Deliver a reset to non existent stream to trigger a local conn error StreamId streamId = 0x01; RstStreamFrame rstFrame(streamId, GenericApplicationErrorCode::UNKNOWN, 0); writeFrame(std::move(rstFrame), builder); auto packet = std::move(builder).buildPacket(); deliverDataWithoutErrorCheck(packetToBuf(packet)); EXPECT_TRUE(verifyFramePresent( serverWrites, *makeClientEncryptedCodec(), QuicFrame::Type::ConnectionCloseFrame)); serverWrites.clear(); auto currLargestReceivedPacketNum = server->getConn().ackStates.appDataAckState.largestReceivedPacketNum; EXPECT_TRUE(hasNotReceivedNewPacketsSinceLastCloseSent(server->getConn())); ShortHeader header2( ProtectionType::KeyPhaseZero, *server->getConn().serverConnectionId, clientNextAppDataPacketNum++); RegularQuicPacketBuilder builder2( server->getConn().udpSendPacketLen, std::move(header2), 0 /* largestAcked */); builder2.encodePacketHeader(); std::string errMsg = "Mind the gap"; ConnectionCloseFrame connClose( QuicErrorCode(TransportErrorCode::NO_ERROR), errMsg); writeFrame(std::move(connClose), builder2); auto packet2 = std::move(builder2).buildPacket(); deliverDataWithoutErrorCheck(packetToBuf(packet2)); EXPECT_FALSE(verifyFramePresent( serverWrites, *makeClientEncryptedCodec(), QuicFrame::Type::ConnectionCloseFrame)); EXPECT_GT( server->getConn().ackStates.appDataAckState.largestReceivedPacketNum, currLargestReceivedPacketNum); // Deliver the same bad data again EXPECT_CALL(*transportInfoCb_, onPacketDropped(_)); 
deliverDataWithoutErrorCheck(packetToBuf(packet)); EXPECT_LT( server->getConn() .ackStates.appDataAckState.largestReceivedAtLastCloseSent, server->getConn().ackStates.appDataAckState.largestReceivedPacketNum); EXPECT_FALSE(verifyFramePresent( serverWrites, *makeClientEncryptedCodec(), QuicFrame::Type::ConnectionCloseFrame)); checkTransportStateUpdate( qLogger, "Server closed by peer reason=Mind the gap"); }
0
[ "CWE-617", "CWE-703" ]
mvfst
a67083ff4b8dcbb7ee2839da6338032030d712b0
268,087,804,122,680,670,000,000,000,000,000,000,000
69
Close connection if we derive an extra 1-rtt write cipher Summary: Fixes CVE-2021-24029 Reviewed By: mjoras, lnicco Differential Revision: D26613890 fbshipit-source-id: 19bb2be2c731808144e1a074ece313fba11f1945
static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, u8 type, u8 code, int offset, __be32 info) { const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data; const struct dccp_hdr *dh = (struct dccp_hdr *)(skb->data + offset); struct dccp_sock *dp; struct ipv6_pinfo *np; struct sock *sk; int err; __u64 seq; struct net *net = dev_net(skb->dev); if (skb->len < offset + sizeof(*dh) || skb->len < offset + __dccp_basic_hdr_len(dh)) { ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev), ICMP6_MIB_INERRORS); return; } sk = inet6_lookup(net, &dccp_hashinfo, &hdr->daddr, dh->dccph_dport, &hdr->saddr, dh->dccph_sport, inet6_iif(skb)); if (sk == NULL) { ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev), ICMP6_MIB_INERRORS); return; } if (sk->sk_state == DCCP_TIME_WAIT) { inet_twsk_put(inet_twsk(sk)); return; } bh_lock_sock(sk); if (sock_owned_by_user(sk)) NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS); if (sk->sk_state == DCCP_CLOSED) goto out; dp = dccp_sk(sk); seq = dccp_hdr_seq(dh); if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_LISTEN) && !between48(seq, dp->dccps_awl, dp->dccps_awh)) { NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS); goto out; } np = inet6_sk(sk); if (type == ICMPV6_PKT_TOOBIG) { struct dst_entry *dst = NULL; if (sock_owned_by_user(sk)) goto out; if ((1 << sk->sk_state) & (DCCPF_LISTEN | DCCPF_CLOSED)) goto out; /* icmp should have updated the destination cache entry */ dst = __sk_dst_check(sk, np->dst_cookie); if (dst == NULL) { struct inet_sock *inet = inet_sk(sk); struct flowi6 fl6; /* BUGGG_FUTURE: Again, it is not clear how to handle rthdr case. Ignore this complexity for now. 
*/ memset(&fl6, 0, sizeof(fl6)); fl6.flowi6_proto = IPPROTO_DCCP; ipv6_addr_copy(&fl6.daddr, &np->daddr); ipv6_addr_copy(&fl6.saddr, &np->saddr); fl6.flowi6_oif = sk->sk_bound_dev_if; fl6.fl6_dport = inet->inet_dport; fl6.fl6_sport = inet->inet_sport; security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); dst = ip6_dst_lookup_flow(sk, &fl6, NULL, false); if (IS_ERR(dst)) { sk->sk_err_soft = -PTR_ERR(dst); goto out; } } else dst_hold(dst); if (inet_csk(sk)->icsk_pmtu_cookie > dst_mtu(dst)) { dccp_sync_mss(sk, dst_mtu(dst)); } /* else let the usual retransmit timer handle it */ dst_release(dst); goto out; } icmpv6_err_convert(type, code, &err); /* Might be for an request_sock */ switch (sk->sk_state) { struct request_sock *req, **prev; case DCCP_LISTEN: if (sock_owned_by_user(sk)) goto out; req = inet6_csk_search_req(sk, &prev, dh->dccph_dport, &hdr->daddr, &hdr->saddr, inet6_iif(skb)); if (req == NULL) goto out; /* * ICMPs are not backlogged, hence we cannot get an established * socket here. */ WARN_ON(req->sk != NULL); if (seq != dccp_rsk(req)->dreq_iss) { NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS); goto out; } inet_csk_reqsk_queue_drop(sk, req, prev); goto out; case DCCP_REQUESTING: case DCCP_RESPOND: /* Cannot happen. It can, it SYNs are crossed. --ANK */ if (!sock_owned_by_user(sk)) { DCCP_INC_STATS_BH(DCCP_MIB_ATTEMPTFAILS); sk->sk_err = err; /* * Wake people up to see the error * (see connect in sock.c) */ sk->sk_error_report(sk); dccp_done(sk); } else sk->sk_err_soft = err; goto out; } if (!sock_owned_by_user(sk) && np->recverr) { sk->sk_err = err; sk->sk_error_report(sk); } else sk->sk_err_soft = err; out: bh_unlock_sock(sk); sock_put(sk); }
0
[ "CWE-362" ]
linux-2.6
f6d8bd051c391c1c0458a30b2a7abcd939329259
250,197,168,998,727,440,000,000,000,000,000,000,000
149
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options. Problem is ip_make_skb() calls ip_setup_cork() and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change inet->opt pointer and free old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We can't insert an rcu_head in struct ip_options since it's included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static void mg_send_u16(struct mg_connection *c, uint16_t value) { mg_send(c, &value, sizeof(value)); }
0
[ "CWE-552" ]
mongoose
c65c8fdaaa257e0487ab0aaae9e8f6b439335945
57,639,774,569,107,490,000,000,000,000,000,000,000
3
Protect against the directory traversal in mg_upload()
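A directory-traversal fix for an upload path typically rejects absolute paths and any ".." component before the name is joined to the upload directory. A deliberately minimal sketch of that core idea, not mongoose's actual mg_upload() code:

```c
#include <assert.h>
#include <string.h>

/* Reject upload filenames that could escape the destination directory:
 * empty names, absolute paths, backslashes, or any ".." sequence.  Real
 * servers normalise the full joined path as well; this is only the core
 * idea, stated conservatively. */
static int upload_name_is_safe(const char *name)
{
    if (name == NULL || name[0] == '\0')
        return 0;
    if (name[0] == '/' || strchr(name, '\\') != NULL)
        return 0;
    if (strstr(name, "..") != NULL)
        return 0;
    return 1;
}
```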
static netdev_features_t xenvif_fix_features(struct net_device *dev, netdev_features_t features) { struct xenvif *vif = netdev_priv(dev); if (!vif->can_sg) features &= ~NETIF_F_SG; if (!vif->gso && !vif->gso_prefix) features &= ~NETIF_F_TSO; if (!vif->csum) features &= ~NETIF_F_IP_CSUM; return features; }
0
[ "CWE-20" ]
linux
48856286b64e4b66ec62b94e504d0b29c1ade664
129,730,241,710,149,850,000,000,000,000,000,000,000
14
xen/netback: shutdown the ring if it contains garbage. A buggy or malicious frontend should not be able to confuse netback. If we spot anything which is not as it should be then shutdown the device and don't try to continue with the ring in a potentially hostile state. Well behaved and non-hostile frontends will not be penalised. As well as making the existing checks for such errors fatal also add a new check that ensures that there isn't an insane number of requests on the ring (i.e. more than would fit in the ring). If the ring contains garbage then previously it was possible to loop over this insane number, getting an error each time and therefore not generating any more pending requests and therefore not exiting the loop in xen_netbk_tx_build_gops for an extended period. Also turn various netdev_dbg calls which now precipitate a fatal error into netdev_err; they are rate limited because the device is shutdown afterwards. This fixes at least one known DoS/softlockup of the backend domain. Signed-off-by: Ian Campbell <[email protected]> Reviewed-by: Konrad Rzeszutek Wilk <[email protected]> Acked-by: Jan Beulich <[email protected]> Signed-off-by: David S. Miller <[email protected]>
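The new "insane number of requests" check reduces to simple ring arithmetic; a minimal sketch with an assumed ring size and no Xen types follows.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 256u   /* assumed ring capacity, not Xen's actual value */

/* prod - cons wraps correctly in unsigned arithmetic even across index
 * wraparound; a count larger than the ring itself can only mean the
 * frontend wrote garbage, so the backend should shut the ring down. */
static int ring_request_count_sane(uint32_t prod, uint32_t cons)
{
    return (uint32_t)(prod - cons) <= RING_SIZE;
}
```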
String *Item_field::str_result(String *str) { if ((null_value=result_field->is_null())) return 0; str->set_charset(str_value.charset()); return result_field->val_str(str,&str_value); }
0
[]
server
b000e169562697aa072600695d4f0c0412f94f4f
275,652,936,467,834,450,000,000,000,000,000,000,000
7
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL)) based on: commit f7316aa0c9a Author: Ajo Robert <[email protected]> Date: Thu Aug 24 17:03:21 2017 +0530 Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL)) Backport of Bug#19143243 fix. NAME_CONST item can return NULL_ITEM type in case of incorrect arguments. NULL_ITEM has special processing in Item_func_in function. In Item_func_in::fix_length_and_dec an array of possible comparators is created. Since NAME_CONST function has NULL_ITEM type, corresponding array element is empty. Then NAME_CONST is wrapped to ITEM_CACHE. ITEM_CACHE can not return proper type(NULL_ITEM) in Item_func_in::val_int(), so the NULL_ITEM is attempted compared with an empty comparator. The fix is to disable the caching of Item_name_const item.
xmlRegStateAddTrans(xmlRegParserCtxtPtr ctxt, xmlRegStatePtr state, xmlRegAtomPtr atom, xmlRegStatePtr target, int counter, int count) { int nrtrans; if (state == NULL) { ERROR("add state: state is NULL"); return; } if (target == NULL) { ERROR("add state: target is NULL"); return; } /* * Other routines follow the philosophy 'When in doubt, add a transition' * so we check here whether such a transition is already present and, if * so, silently ignore this request. */ for (nrtrans = state->nbTrans - 1; nrtrans >= 0; nrtrans--) { xmlRegTransPtr trans = &(state->trans[nrtrans]); if ((trans->atom == atom) && (trans->to == target->no) && (trans->counter == counter) && (trans->count == count)) { #ifdef DEBUG_REGEXP_GRAPH printf("Ignoring duplicate transition from %d to %d\n", state->no, target->no); #endif return; } } if (state->maxTrans == 0) { state->maxTrans = 8; state->trans = (xmlRegTrans *) xmlMalloc(state->maxTrans * sizeof(xmlRegTrans)); if (state->trans == NULL) { xmlRegexpErrMemory(ctxt, "adding transition"); state->maxTrans = 0; return; } } else if (state->nbTrans >= state->maxTrans) { xmlRegTrans *tmp; state->maxTrans *= 2; tmp = (xmlRegTrans *) xmlRealloc(state->trans, state->maxTrans * sizeof(xmlRegTrans)); if (tmp == NULL) { xmlRegexpErrMemory(ctxt, "adding transition"); state->maxTrans /= 2; return; } state->trans = tmp; } #ifdef DEBUG_REGEXP_GRAPH printf("Add trans from %d to %d ", state->no, target->no); if (count == REGEXP_ALL_COUNTER) printf("all transition\n"); else if (count >= 0) printf("count based %d\n", count); else if (counter >= 0) printf("counted %d\n", counter); else if (atom == NULL) printf("epsilon transition\n"); else if (atom != NULL) xmlRegPrintAtom(stdout, atom); #endif state->trans[state->nbTrans].atom = atom; state->trans[state->nbTrans].to = target->no; state->trans[state->nbTrans].counter = counter; state->trans[state->nbTrans].count = count; state->trans[state->nbTrans].nd = 0; state->nbTrans++; xmlRegStateAddTransTo(ctxt, target, 
state->no); }
0
[ "CWE-119" ]
libxml2
cbb271655cadeb8dbb258a64701d9a3a0c4835b4
193,353,388,381,091,450,000,000,000,000,000,000,000
77
Bug 757711: heap-buffer-overflow in xmlFAParsePosCharGroup <https://bugzilla.gnome.org/show_bug.cgi?id=757711> * xmlregexp.c: (xmlFAParseCharRange): Only advance to the next character if there is no error. Advancing to the next character in case of an error while parsing regexp leads to an out of bounds access.
UnicodeStringTest::TestCountChar32(void) { { UnicodeString s=UNICODE_STRING("\\U0002f999\\U0001d15f\\u00c4\\u1ed0", 32).unescape(); // test countChar32() // note that this also calls and tests u_countChar32(length>=0) if( s.countChar32()!=4 || s.countChar32(1)!=4 || s.countChar32(2)!=3 || s.countChar32(2, 3)!=2 || s.countChar32(2, 0)!=0 ) { errln("UnicodeString::countChar32() failed"); } // NUL-terminate the string buffer and test u_countChar32(length=-1) const UChar *buffer=s.getTerminatedBuffer(); if( u_countChar32(buffer, -1)!=4 || u_countChar32(buffer+1, -1)!=4 || u_countChar32(buffer+2, -1)!=3 || u_countChar32(buffer+3, -1)!=3 || u_countChar32(buffer+4, -1)!=2 || u_countChar32(buffer+5, -1)!=1 || u_countChar32(buffer+6, -1)!=0 ) { errln("u_countChar32(length=-1) failed"); } // test u_countChar32() with bad input if(u_countChar32(NULL, 5)!=0 || u_countChar32(buffer, -2)!=0) { errln("u_countChar32(bad input) failed (returned non-zero counts)"); } } /* test data and variables for hasMoreChar32Than() */ static const UChar str[]={ 0x61, 0x62, 0xd800, 0xdc00, 0xd801, 0xdc01, 0x63, 0xd802, 0x64, 0xdc03, 0x65, 0x66, 0xd804, 0xdc04, 0xd805, 0xdc05, 0x67 }; UnicodeString string(str, UPRV_LENGTHOF(str)); int32_t start, length, number; /* test hasMoreChar32Than() */ for(length=string.length(); length>=0; --length) { for(start=0; start<=length; ++start) { for(number=-1; number<=((length-start)+2); ++number) { _testUnicodeStringHasMoreChar32Than(string, start, length-start, number); } } } /* test hasMoreChar32Than() with pinning */ for(start=-1; start<=string.length()+1; ++start) { for(number=-1; number<=((string.length()-start)+2); ++number) { _testUnicodeStringHasMoreChar32Than(string, start, 0x7fffffff, number); } } /* test hasMoreChar32Than() with a bogus string */ string.setToBogus(); for(length=-1; length<=1; ++length) { for(start=-1; start<=length; ++start) { for(number=-1; number<=((length-start)+2); ++number) { _testUnicodeStringHasMoreChar32Than(string, start, 
length-start, number); } } } }
0
[ "CWE-190", "CWE-787" ]
icu
b7d08bc04a4296982fcef8b6b8a354a9e4e7afca
150,443,622,598,826,220,000,000,000,000,000,000,000
73
ICU-20958 Prevent SEGV_MAPERR in append See #971
soup_cookie_jar_set_property (GObject *object, guint prop_id, const GValue *value, GParamSpec *pspec) { SoupCookieJarPrivate *priv = soup_cookie_jar_get_instance_private (SOUP_COOKIE_JAR (object)); switch (prop_id) { case PROP_READ_ONLY: priv->read_only = g_value_get_boolean (value); break; case PROP_ACCEPT_POLICY: priv->accept_policy = g_value_get_enum (value); break; default: G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec); break; } }
0
[ "CWE-125" ]
libsoup
db2b0d5809d5f8226d47312b40992cadbcde439f
251,258,238,240,798,140,000,000,000,000,000,000,000
18
cookie-jar: bail if hostname is an empty string There are several other ways to fix the problem with this function, but skipping over all of the code is probably the simplest. Fixes #3
static int ZEND_FASTCALL ZEND_BW_NOT_SPEC_CONST_HANDLER(ZEND_OPCODE_HANDLER_ARGS) { zend_op *opline = EX(opline); bitwise_not_function(&EX_T(opline->result.u.var).tmp_var, &opline->op1.u.constant TSRMLS_CC); ZEND_VM_NEXT_OPCODE(); }
0
[]
php-src
ce96fd6b0761d98353761bf78d5bfb55291179fd
284,507,229,651,348,770,000,000,000,000,000,000,000
10
- fix #39863, do not accept paths with NULL in them. See http://news.php.net/php.internals/50191, trunk will have the patch later (adding a macro and/or changing (some) APIs. Patch by Rasmus
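The macro the message mentions boils down to rejecting any length-counted path that carries an embedded NUL, since the C-string filesystem calls underneath would silently truncate it at that byte. A minimal sketch (the helper name is made up):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A path that arrives with an explicit length must not contain an embedded
 * '\0'; "a\0/etc/passwd" would otherwise be opened as just "a". */
static int path_has_embedded_nul(const char *path, size_t len)
{
    return memchr(path, '\0', len) != NULL;
}
```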
id3_skip (SF_PRIVATE * psf) { unsigned char buf [10] ; memset (buf, 0, sizeof (buf)) ; psf_binheader_readf (psf, "pb", 0, buf, 10) ; if (buf [0] == 'I' && buf [1] == 'D' && buf [2] == '3') { int offset = buf [6] & 0x7f ; offset = (offset << 7) | (buf [7] & 0x7f) ; offset = (offset << 7) | (buf [8] & 0x7f) ; offset = (offset << 7) | (buf [9] & 0x7f) ; psf_log_printf (psf, "ID3 length : %d\n--------------------\n", offset) ; /* Never want to jump backwards in a file. */ if (offset < 0) return 0 ; /* Calculate new file offset and position ourselves there. */ psf->fileoffset += offset + 10 ; if (psf->fileoffset < psf->filelength) { psf_binheader_readf (psf, "p", psf->fileoffset) ; return 1 ; } ; } ; return 0 ; } /* id3_skip */
0
[ "CWE-119", "CWE-787" ]
libsndfile
f457b7b5ecfe91697ed01cfc825772c4d8de1236
65,484,225,225,491,810,000,000,000,000,000,000,000
29
src/id3.c : Improve error handling
static void __tty_hangup(struct tty_struct *tty, int exit_session) { struct file *cons_filp = NULL; struct file *filp, *f = NULL; struct tty_file_private *priv; int closecount = 0, n; int refs; if (!tty) return; spin_lock(&redirect_lock); if (redirect && file_tty(redirect) == tty) { f = redirect; redirect = NULL; } spin_unlock(&redirect_lock); tty_lock(tty); if (test_bit(TTY_HUPPED, &tty->flags)) { tty_unlock(tty); return; } /* inuse_filps is protected by the single tty lock, this really needs to change if we want to flush the workqueue with the lock held */ check_tty_count(tty, "tty_hangup"); spin_lock(&tty_files_lock); /* This breaks for file handles being sent over AF_UNIX sockets ? */ list_for_each_entry(priv, &tty->tty_files, list) { filp = priv->file; if (filp->f_op->write == redirected_tty_write) cons_filp = filp; if (filp->f_op->write != tty_write) continue; closecount++; __tty_fasync(-1, filp, 0); /* can't block */ filp->f_op = &hung_up_tty_fops; } spin_unlock(&tty_files_lock); refs = tty_signal_session_leader(tty, exit_session); /* Account for the p->signal references we killed */ while (refs--) tty_kref_put(tty); tty_ldisc_hangup(tty); spin_lock_irq(&tty->ctrl_lock); clear_bit(TTY_THROTTLED, &tty->flags); clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); put_pid(tty->session); put_pid(tty->pgrp); tty->session = NULL; tty->pgrp = NULL; tty->ctrl_status = 0; spin_unlock_irq(&tty->ctrl_lock); /* * If one of the devices matches a console pointer, we * cannot just call hangup() because that will cause * tty->count and state->count to go out of sync. * So we just call close() the right number of times. */ if (cons_filp) { if (tty->ops->close) for (n = 0; n < closecount; n++) tty->ops->close(tty, cons_filp); } else if (tty->ops->hangup) tty->ops->hangup(tty); /* * We don't want to have driver/ldisc interactions beyond * the ones we did here. The driver layer expects no * calls after ->hangup() from the ldisc side. However we * can't yet guarantee all that. 
*/ set_bit(TTY_HUPPED, &tty->flags); tty_unlock(tty); if (f) fput(f); }
0
[ "CWE-200", "CWE-362" ]
linux
5c17c861a357e9458001f021a7afa7aab9937439
248,612,011,983,953,160,000,000,000,000,000,000,000
86
tty: Fix unsafe ldisc reference via ioctl(TIOCGETD) ioctl(TIOCGETD) retrieves the line discipline id directly from the ldisc because the line discipline id (c_line) in termios is untrustworthy; userspace may have set termios via ioctl(TCSETS*) without actually changing the line discipline via ioctl(TIOCSETD). However, directly accessing the current ldisc via tty->ldisc is unsafe; the ldisc ptr dereferenced may be stale if the line discipline is changing via ioctl(TIOCSETD) or hangup. Wait for the line discipline reference (just like read() or write()) to retrieve the "current" line discipline id. Cc: <[email protected]> Signed-off-by: Peter Hurley <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
base64_to_byte_array (gunichar2 *start, gint ilength, MonoBoolean allowWhitespaceOnly) { gint ignored; gint i; gunichar2 c; gunichar2 last, prev_last, prev2_last; gint olength; MonoArray *result; guchar *res_ptr; gint a [4], b [4]; MonoException *exc; ignored = 0; last = prev_last = 0, prev2_last = 0; for (i = 0; i < ilength; i++) { c = start [i]; if (c >= sizeof (dbase64)) { exc = mono_exception_from_name_msg (mono_get_corlib (), "System", "FormatException", "Invalid character found."); mono_raise_exception (exc); } else if (isspace (c)) { ignored++; } else { prev2_last = prev_last; prev_last = last; last = c; } } olength = ilength - ignored; if (allowWhitespaceOnly && olength == 0) { return mono_array_new (mono_domain_get (), mono_defaults.byte_class, 0); } if ((olength & 3) != 0 || olength <= 0) { exc = mono_exception_from_name_msg (mono_get_corlib (), "System", "FormatException", "Invalid length."); mono_raise_exception (exc); } if (prev2_last == '=') { exc = mono_exception_from_name_msg (mono_get_corlib (), "System", "FormatException", "Invalid format."); mono_raise_exception (exc); } olength = (olength * 3) / 4; if (last == '=') olength--; if (prev_last == '=') olength--; result = mono_array_new (mono_domain_get (), mono_defaults.byte_class, olength); res_ptr = mono_array_addr (result, guchar, 0); for (i = 0; i < ilength; ) { int k; for (k = 0; k < 4 && i < ilength;) { c = start [i++]; if (isspace (c)) continue; a [k] = (guchar) c; if (((b [k] = dbase64 [c]) & 0x80) != 0) { exc = mono_exception_from_name_msg (mono_get_corlib (), "System", "FormatException", "Invalid character found."); mono_raise_exception (exc); } k++; } *res_ptr++ = (b [0] << 2) | (b [1] >> 4); if (a [2] != '=') *res_ptr++ = (b [1] << 4) | (b [2] >> 2); if (a [3] != '=') *res_ptr++ = (b [2] << 6) | b [3]; while (i < ilength && isspace (start [i])) i++; } return result; }
0
[ "CWE-264" ]
mono
035c8587c0d8d307e45f1b7171a0d337bb451f1e
227,876,953,970,605,480,000,000,000,000,000,000,000
86
Allow only primitive types/enums in RuntimeHelpers.InitializeArray ().
yystrlen (const char *yystr) { YYSIZE_T yylen; for (yylen = 0; yystr[yylen]; yylen++) continue; return yylen; }
1
[ "CWE-19" ]
ntp
fe46889f7baa75fc8e6c0fcde87706d396ce1461
279,845,132,198,429,670,000,000,000,000,000,000,000
7
[Sec 2942]: Off-path DoS attack on auth broadcast mode. HStenn.
static void copy_option(uint8_t *buf, uint16_t code, uint16_t len, uint8_t *msg) { buf[0] = code >> 8; buf[1] = code & 0xff; buf[2] = len >> 8; buf[3] = len & 0xff; if (len > 0 && msg) memcpy(&buf[4], msg, len); }
0
[]
connman
a74524b3e3fad81b0fd1084ffdf9f2ea469cd9b1
301,681,738,139,554,800,000,000,000,000,000,000,000
10
gdhcp: Avoid leaking stack data via uninitialized variable Fixes: CVE-2021-26676
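The fix class here is to zero the wire buffer before a partial fill so stale stack bytes never reach the network. A hedged sketch with a fixed 8-byte option layout (illustrative, not connman's actual gdhcp code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Serialize a 4-byte header plus up to 4 payload bytes.  The memset is the
 * crucial line: without it, any payload shorter than the buffer leaves
 * whatever was on the stack in the trailing bytes. */
static void pack_option(uint8_t buf[8], uint16_t code, const uint8_t *msg,
                        uint16_t len)
{
    memset(buf, 0, 8);                  /* no stale stack bytes on the wire */
    buf[0] = code >> 8;
    buf[1] = code & 0xff;
    buf[2] = len >> 8;
    buf[3] = len & 0xff;
    if (len > 0 && len <= 4 && msg)
        memcpy(&buf[4], msg, len);
}
```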
xsltSetLoaderFunc(xsltDocLoaderFunc f) { if (f == NULL) xsltDocDefaultLoader = xsltDocDefaultLoaderFunc; else xsltDocDefaultLoader = f; }
0
[]
libxslt
e03553605b45c88f0b4b2980adfbbb8f6fca2fd6
245,773,274,450,095,570,000,000,000,000,000,000,000
6
Fix security framework bypass xsltCheckRead and xsltCheckWrite return -1 in case of error but callers don't check for this condition and allow access. With a specially crafted URL, xsltCheckRead could be tricked into returning an error because of a supposedly invalid URL that would still be loaded successfully later on. Fixes #12. Thanks to Felix Wilhelm for the report.
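The bug pattern in miniature: a tri-state checker returning 1 (allow), 0 (deny) or -1 (error) must grant access only on an explicit 1; treating "non-zero" as allowed lets the -1 error path through. Illustrative names only, not libxslt's API.

```c
#include <assert.h>

/* Stand-in for xsltCheckRead: 1 = allowed, 0 = forbidden, -1 = error. */
static int check_read(int outcome)
{
    return outcome;
}

/* Correct caller: grant access only on an explicit 1, so both the deny
 * path (0) and the error path (-1) refuse the load. */
static int may_load(int outcome)
{
    return check_read(outcome) == 1;
}
```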
static int wait_for_thread(thread_t thread) { return pthread_join(thread, NULL) == 0; }
0
[ "CWE-330" ]
openssl
1b0fe00e2704b5e20334a16d3c9099d1ba2ef1be
247,550,686,047,382,650,000,000,000,000,000,000,000
4
drbg: ensure fork-safety without using a pthread_atfork handler When the new OpenSSL CSPRNG was introduced in version 1.1.1, it was announced in the release notes that it would be fork-safe, which the old CSPRNG hadn't been. The fork-safety was implemented using a fork count, which was incremented by a pthread_atfork handler. Initially, this handler was enabled by default. Unfortunately, the default behaviour had to be changed for other reasons in commit b5319bdbd095, so the new OpenSSL CSPRNG failed to keep its promise. This commit restores the fork-safety using a different approach. It replaces the fork count by a fork id, which coincides with the process id on UNIX-like operating systems and is zero on other operating systems. It is used to detect when an automatic reseed after a fork is necessary. To prevent a future regression, it also adds a test to verify that the child reseeds after fork. CVE-2019-1549 Reviewed-by: Paul Dale <[email protected]> Reviewed-by: Matt Caswell <[email protected]> (Merged from https://github.com/openssl/openssl/pull/9802)
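The "fork id" approach can be sketched in a few lines: remember the pid that last (re)seeded, and reseed whenever the current pid differs, which is exactly the condition in a forked child. This is a simplified stand-in, not OpenSSL's DRBG code.

```c
#include <assert.h>
#include <sys/types.h>
#include <unistd.h>

/* Pid that seeded the generator.  Zero-initialized: no real pid is 0, so
 * the very first use also triggers a (re)seed. */
static pid_t seeded_pid;

/* True in a fork child (and before the first seed), because getpid() no
 * longer matches the recorded pid. */
static int needs_reseed(void)
{
    return seeded_pid != getpid();
}

static void mark_seeded(void)
{
    seeded_pid = getpid();
}
```

Compared with a pthread_atfork counter, this check needs no handler registration, so it cannot be disabled by default-behaviour changes like the one the commit describes.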
static int ZEND_FASTCALL ZEND_ADD_ARRAY_ELEMENT_SPEC_CV_CV_HANDLER(ZEND_OPCODE_HANDLER_ARGS) { zend_op *opline = EX(opline); zval *array_ptr = &EX_T(opline->result.u.var).tmp_var; zval *expr_ptr; zval *offset=_get_zval_ptr_cv(&opline->op2, EX(Ts), BP_VAR_R TSRMLS_CC); #if 0 || IS_CV == IS_VAR || IS_CV == IS_CV zval **expr_ptr_ptr = NULL; if (opline->extended_value) { expr_ptr_ptr=_get_zval_ptr_ptr_cv(&opline->op1, EX(Ts), BP_VAR_W TSRMLS_CC); expr_ptr = *expr_ptr_ptr; } else { expr_ptr=_get_zval_ptr_cv(&opline->op1, EX(Ts), BP_VAR_R TSRMLS_CC); } #else expr_ptr=_get_zval_ptr_cv(&opline->op1, EX(Ts), BP_VAR_R TSRMLS_CC); #endif if (0) { /* temporary variable */ zval *new_expr; ALLOC_ZVAL(new_expr); INIT_PZVAL_COPY(new_expr, expr_ptr); expr_ptr = new_expr; } else { #if 0 || IS_CV == IS_VAR || IS_CV == IS_CV if (opline->extended_value) { SEPARATE_ZVAL_TO_MAKE_IS_REF(expr_ptr_ptr); expr_ptr = *expr_ptr_ptr; Z_ADDREF_P(expr_ptr); } else #endif if (IS_CV == IS_CONST || PZVAL_IS_REF(expr_ptr)) { zval *new_expr; ALLOC_ZVAL(new_expr); INIT_PZVAL_COPY(new_expr, expr_ptr); expr_ptr = new_expr; zendi_zval_copy_ctor(*expr_ptr); } else { Z_ADDREF_P(expr_ptr); } } if (offset) { switch (Z_TYPE_P(offset)) { case IS_DOUBLE: zend_hash_index_update(Z_ARRVAL_P(array_ptr), zend_dval_to_lval(Z_DVAL_P(offset)), &expr_ptr, sizeof(zval *), NULL); break; case IS_LONG: case IS_BOOL: zend_hash_index_update(Z_ARRVAL_P(array_ptr), Z_LVAL_P(offset), &expr_ptr, sizeof(zval *), NULL); break; case IS_STRING: zend_symtable_update(Z_ARRVAL_P(array_ptr), Z_STRVAL_P(offset), Z_STRLEN_P(offset)+1, &expr_ptr, sizeof(zval *), NULL); break; case IS_NULL: zend_hash_update(Z_ARRVAL_P(array_ptr), "", sizeof(""), &expr_ptr, sizeof(zval *), NULL); break; default: zend_error(E_WARNING, "Illegal offset type"); zval_ptr_dtor(&expr_ptr); /* do nothing */ break; } } else { zend_hash_next_index_insert(Z_ARRVAL_P(array_ptr), &expr_ptr, sizeof(zval *), NULL); } if (opline->extended_value) { } else { } 
ZEND_VM_NEXT_OPCODE(); }
0
[]
php-src
ce96fd6b0761d98353761bf78d5bfb55291179fd
47,009,062,418,023,820,000,000,000,000,000,000,000
78
- fix #39863, do not accept paths with NULL in them. See http://news.php.net/php.internals/50191, trunk will have the patch later (adding a macro and/or changing (some) APIs. Patch by Rasmus
int Curl_strncasecompare(const char *first, const char *second, size_t max) { while(*first && *second && max) { if(Curl_raw_toupper(*first) != Curl_raw_toupper(*second)) { break; } max--; first++; second++; } if(0 == max) return 1; /* they are equal this far */ return Curl_raw_toupper(*first) == Curl_raw_toupper(*second); }
0
[]
curl
852aa5ad351ea53e5f01d2f44b5b4370c2bf5425
161,226,856,599,520,340,000,000,000,000,000,000,000
15
url: check sasl additional parameters for connection reuse. Also move static function safecmp() as non-static Curl_safecmp() since its purpose is needed at several places. Bug: https://curl.se/docs/CVE-2022-22576.html CVE-2022-22576 Closes #8746
void ValidateInputs(OpKernelContext* ctx, const CSRSparseMatrix& sparse_matrix, const Tensor& permutation_indices, int* batch_size, int64* num_rows) { OP_REQUIRES(ctx, sparse_matrix.dtype() == DataTypeToEnum<T>::value, errors::InvalidArgument( "Asked for a CSRSparseMatrix of type ", DataTypeString(DataTypeToEnum<T>::value), " but saw dtype: ", DataTypeString(sparse_matrix.dtype()))); const Tensor& dense_shape = sparse_matrix.dense_shape(); const int rank = dense_shape.dim_size(0); OP_REQUIRES(ctx, rank == 2 || rank == 3, errors::InvalidArgument("sparse matrix must have rank 2 or 3; ", "but dense_shape has size ", rank)); const int row_dim = (rank == 2) ? 0 : 1; auto dense_shape_vec = dense_shape.vec<int64>(); *num_rows = dense_shape_vec(row_dim); const int64 num_cols = dense_shape_vec(row_dim + 1); OP_REQUIRES(ctx, *num_rows == num_cols, errors::InvalidArgument("sparse matrix must be square; got: ", *num_rows, " != ", num_cols)); const TensorShape& perm_shape = permutation_indices.shape(); OP_REQUIRES( ctx, perm_shape.dims() + 1 == rank, errors::InvalidArgument( "sparse matrix must have the same rank as permutation; got: ", rank, " != ", perm_shape.dims(), " + 1.")); OP_REQUIRES( ctx, perm_shape.dim_size(rank - 2) == *num_rows, errors::InvalidArgument( "permutation must have the same number of elements in each batch " "as the number of rows in sparse matrix; got: ", perm_shape.dim_size(rank - 2), " != ", *num_rows)); *batch_size = sparse_matrix.batch_size(); if (*batch_size > 1) { OP_REQUIRES( ctx, perm_shape.dim_size(0) == *batch_size, errors::InvalidArgument("permutation must have the same batch size " "as sparse matrix; got: ", perm_shape.dim_size(0), " != ", *batch_size)); } }
1
[ "CWE-476" ]
tensorflow
e6a7c7cc18c3aaad1ae0872cb0a959f5c923d2bd
336,558,180,459,502,140,000,000,000,000,000,000,000
44
Remove `OP_REQUIRES` call from helper function. Since `OP_REQUIRES` macro expands to a `return;` (among other), calling it in a helper function only ends the helper function's execution earlier, but the kernel will still run from start to end. Thus, all the expected validations are actually broken/useless as the code ploughs through the next crash anyway. PiperOrigin-RevId: 369524386 Change-Id: I54f6cf9328445675ccc392e661b04336b229c9da
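The pitfall described here, a validation macro whose expansion contains a bare return, can be reproduced in miniature: the macro exits only the helper it appears in, so the caller keeps running, while a status-returning helper lets the caller actually stop. Illustrative code, not TensorFlow's.

```c
#include <assert.h>

/* Records whether the "kernel" ran past validation. */
static int validated;

/* Like OP_REQUIRES: expands to a bare return on failure. */
#define REQUIRE_OR_RETURN(cond) do { if (!(cond)) return; } while (0)

/* Broken shape: the return inside the macro only leaves the helper. */
static void helper_broken(int ok) { REQUIRE_OR_RETURN(ok); }

/* Fixed shape: report failure through a status the caller must check. */
static int helper_fixed(int ok) { return ok ? 0 : -1; }

static void kernel_broken(int ok)
{
    helper_broken(ok);          /* helper bailed, but we never notice */
    validated = 1;              /* runs even when ok == 0 */
}

static void kernel_fixed(int ok)
{
    if (helper_fixed(ok) != 0)
        return;                 /* caller actually stops on failure */
    validated = 1;
}
```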
static int rsi_usb_load_data_master_write(struct rsi_hw *adapter, u32 base_address, u32 instructions_sz, u16 block_size, u8 *ta_firmware) { u16 num_blocks; u32 cur_indx, i; u8 temp_buf[256]; int status; num_blocks = instructions_sz / block_size; rsi_dbg(INFO_ZONE, "num_blocks: %d\n", num_blocks); for (cur_indx = 0, i = 0; i < num_blocks; i++, cur_indx += block_size) { memcpy(temp_buf, ta_firmware + cur_indx, block_size); status = rsi_usb_write_register_multiple(adapter, base_address, (u8 *)(temp_buf), block_size); if (status < 0) return status; rsi_dbg(INFO_ZONE, "%s: loading block: %d\n", __func__, i); base_address += block_size; } if (instructions_sz % block_size) { memset(temp_buf, 0, block_size); memcpy(temp_buf, ta_firmware + cur_indx, instructions_sz % block_size); status = rsi_usb_write_register_multiple (adapter, base_address, (u8 *)temp_buf, instructions_sz % block_size); if (status < 0) return status; rsi_dbg(INFO_ZONE, "Written Last Block in Address 0x%x Successfully\n", cur_indx); } return 0; }
0
[ "CWE-415" ]
wireless-drivers
8b51dc7291473093c821195c4b6af85fadedbc2f
326,048,552,812,324,320,000,000,000,000,000,000,000
41
rsi: fix a double free bug in rsi_91x_deinit() `dev` (struct rsi_91x_usbdev *) field of adapter (struct rsi_91x_usbdev *) is allocated and initialized in `rsi_init_usb_interface`. If any error is detected in information read from the device side, `rsi_init_usb_interface` will be freed. However, in the higher level error handling code in `rsi_probe`, if error is detected, `rsi_91x_deinit` is called again, in which `dev` will be freed again, resulting double free. This patch fixes the double free by removing the free operation on `dev` in `rsi_init_usb_interface`, because `rsi_91x_deinit` is also used in `rsi_disconnect`, in that code path, the `dev` field is not (and thus needs to be) freed. This bug was found in v4.19, but is also present in the latest version of kernel. Fixes CVE-2019-15504. Reported-by: Hui Peng <[email protected]> Reported-by: Mathias Payer <[email protected]> Signed-off-by: Hui Peng <[email protected]> Reviewed-by: Guenter Roeck <[email protected]> Signed-off-by: Kalle Valo <[email protected]>
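The usual guard against this class of double free is to make the release idempotent: free the member once and null the pointer, so a second teardown path becomes a no-op. A sketch with made-up structure names:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative shape of the problem: two teardown paths can both reach
 * release_dev() for the same adapter. */
struct adapter { void *dev; };

static void release_dev(struct adapter *a)
{
    free(a->dev);               /* free(NULL) is defined to do nothing */
    a->dev = NULL;              /* so the second call becomes harmless */
}
```

The commit takes the stricter route of removing the duplicate free so that exactly one path owns the member, but nulling after free is the common belt-and-braces companion to that ownership rule.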
static void lis_close(struct tls_listener *listener) { if (listener->is_registered) { pjsip_tpmgr_unregister_tpfactory(listener->tpmgr, &listener->factory); listener->is_registered = PJ_FALSE; } if (listener->ssock) { pj_ssl_sock_close(listener->ssock); listener->ssock = NULL; } }
0
[ "CWE-362", "CWE-703" ]
pjproject
d5f95aa066f878b0aef6a64e60b61e8626e664cd
338,626,231,057,090,300,000,000,000,000,000,000,000
12
Merge pull request from GHSA-cv8x-p47p-99wr * - Avoid SSL socket parent/listener getting destroyed during handshake by increasing parent's reference count. - Add missing SSL socket close when the newly accepted SSL socket is discarded in SIP TLS transport. * - Fix silly mistake: accepted active socket created without group lock in SSL socket. - Replace assertion with normal validation check of SSL socket instance in OpenSSL verification callback (verify_cb()) to avoid crash, e.g: if somehow race condition with SSL socket destroy happens or OpenSSL application data index somehow gets corrupted.
static int test_GENERAL_NAME_cmp(void) { size_t i, j; GENERAL_NAME **namesa = OPENSSL_malloc(sizeof(*namesa) * OSSL_NELEM(gennames)); GENERAL_NAME **namesb = OPENSSL_malloc(sizeof(*namesb) * OSSL_NELEM(gennames)); int testresult = 0; if (!TEST_ptr(namesa) || !TEST_ptr(namesb)) goto end; for (i = 0; i < OSSL_NELEM(gennames); i++) { const unsigned char *derp = gennames[i].der; /* * We create two versions of each GENERAL_NAME so that we ensure when * we compare them they are always different pointers. */ namesa[i] = d2i_GENERAL_NAME(NULL, &derp, gennames[i].derlen); derp = gennames[i].der; namesb[i] = d2i_GENERAL_NAME(NULL, &derp, gennames[i].derlen); if (!TEST_ptr(namesa[i]) || !TEST_ptr(namesb[i])) goto end; } /* Every name should be equal to itself and not equal to any others. */ for (i = 0; i < OSSL_NELEM(gennames); i++) { for (j = 0; j < OSSL_NELEM(gennames); j++) { if (i == j) { if (!TEST_int_eq(GENERAL_NAME_cmp(namesa[i], namesb[j]), 0)) goto end; } else { if (!TEST_int_ne(GENERAL_NAME_cmp(namesa[i], namesb[j]), 0)) goto end; } } } testresult = 1; end: for (i = 0; i < OSSL_NELEM(gennames); i++) { if (namesa != NULL) GENERAL_NAME_free(namesa[i]); if (namesb != NULL) GENERAL_NAME_free(namesb[i]); } OPENSSL_free(namesa); OPENSSL_free(namesb); return testresult; }
0
[ "CWE-476" ]
openssl
97ab3c4b538840037812c8d9164d09a1f4bf11a1
62,915,723,312,988,620,000,000,000,000,000,000,000
52
Add a test for GENERAL_NAME_cmp Based on a boringssl test contributed by David Benjamin Reviewed-by: Tomas Mraz <[email protected]>
handle_image (void *data, unsigned int datasize, EFI_LOADED_IMAGE *li, EFI_IMAGE_ENTRY_POINT *entry_point, EFI_PHYSICAL_ADDRESS *alloc_address, UINTN *alloc_pages) { EFI_STATUS efi_status; char *buffer; int i; EFI_IMAGE_SECTION_HEADER *Section; char *base, *end; UINT32 size; PE_COFF_LOADER_IMAGE_CONTEXT context; unsigned int alignment, alloc_size; int found_entry_point = 0; UINT8 sha1hash[SHA1_DIGEST_SIZE]; UINT8 sha256hash[SHA256_DIGEST_SIZE]; /* * The binary header contains relevant context and section pointers */ efi_status = read_header(data, datasize, &context); if (EFI_ERROR(efi_status)) { perror(L"Failed to read header: %r\n", efi_status); return efi_status; } /* * Perform the image verification before we start copying data around * in order to load it. */ if (secure_mode ()) { efi_status = verify_buffer(data, datasize, &context, sha256hash, sha1hash); if (EFI_ERROR(efi_status)) { if (verbose) console_print(L"Verification failed: %r\n", efi_status); else console_error(L"Verification failed", efi_status); return efi_status; } else { if (verbose) console_print(L"Verification succeeded\n"); } } /* * Calculate the hash for the TPM measurement. * XXX: We're computing these twice in secure boot mode when the * buffers already contain the previously computed hashes. Also, * this is only useful for the TPM1.2 case. We should try to fix * this in a follow-up. */ efi_status = generate_hash(data, datasize, &context, sha256hash, sha1hash); if (EFI_ERROR(efi_status)) return efi_status; /* Measure the binary into the TPM */ #ifdef REQUIRE_TPM efi_status = #endif tpm_log_pe((EFI_PHYSICAL_ADDRESS)(UINTN)data, datasize, (EFI_PHYSICAL_ADDRESS)(UINTN)context.ImageAddress, li->FilePath, sha1hash, 4); #ifdef REQUIRE_TPM if (efi_status != EFI_SUCCESS) { return efi_status; } #endif /* The spec says, uselessly, of SectionAlignment: * ===== * The alignment (in bytes) of sections when they are loaded into * memory. It must be greater than or equal to FileAlignment. 
The * default is the page size for the architecture. * ===== * Which doesn't tell you whose responsibility it is to enforce the * "default", or when. It implies that the value in the field must * be > FileAlignment (also poorly defined), but it appears visual * studio will happily write 512 for FileAlignment (its default) and * 0 for SectionAlignment, intending to imply PAGE_SIZE. * * We only support one page size, so if it's zero, nerf it to 4096. */ alignment = context.SectionAlignment; if (!alignment) alignment = 4096; alloc_size = ALIGN_VALUE(context.ImageSize + context.SectionAlignment, PAGE_SIZE); *alloc_pages = alloc_size / PAGE_SIZE; efi_status = BS->AllocatePages(AllocateAnyPages, EfiLoaderCode, *alloc_pages, alloc_address); if (EFI_ERROR(efi_status)) { perror(L"Failed to allocate image buffer\n"); return EFI_OUT_OF_RESOURCES; } buffer = (void *)ALIGN_VALUE((unsigned long)*alloc_address, alignment); dprint(L"Loading 0x%llx bytes at 0x%llx\n", (unsigned long long)context.ImageSize, (unsigned long long)(uintptr_t)buffer); update_mem_attrs((uintptr_t)buffer, alloc_size, MEM_ATTR_R|MEM_ATTR_W, MEM_ATTR_X); CopyMem(buffer, data, context.SizeOfHeaders); *entry_point = ImageAddress(buffer, context.ImageSize, context.EntryPoint); if (!*entry_point) { perror(L"Entry point is invalid\n"); BS->FreePages(*alloc_address, *alloc_pages); return EFI_UNSUPPORTED; } char *RelocBase, *RelocBaseEnd; /* * These are relative virtual addresses, so we have to check them * against the image size, not the data size. 
*/ RelocBase = ImageAddress(buffer, context.ImageSize, context.RelocDir->VirtualAddress); /* * RelocBaseEnd here is the address of the last byte of the table */ RelocBaseEnd = ImageAddress(buffer, context.ImageSize, context.RelocDir->VirtualAddress + context.RelocDir->Size - 1); EFI_IMAGE_SECTION_HEADER *RelocSection = NULL; /* * Copy the executable's sections to their desired offsets */ Section = context.FirstSection; for (i = 0; i < context.NumberOfSections; i++, Section++) { /* Don't try to copy discardable sections with zero size */ if ((Section->Characteristics & EFI_IMAGE_SCN_MEM_DISCARDABLE) && !Section->Misc.VirtualSize) continue; /* * Skip sections that aren't marked readable. */ if (!(Section->Characteristics & EFI_IMAGE_SCN_MEM_READ)) continue; if (!(Section->Characteristics & EFI_IMAGE_SCN_MEM_DISCARDABLE) && (Section->Characteristics & EFI_IMAGE_SCN_MEM_WRITE) && (Section->Characteristics & EFI_IMAGE_SCN_MEM_EXECUTE) && (mok_policy & MOK_POLICY_REQUIRE_NX)) { perror(L"Section %d is writable and executable\n", i); return EFI_UNSUPPORTED; } base = ImageAddress (buffer, context.ImageSize, Section->VirtualAddress); end = ImageAddress (buffer, context.ImageSize, Section->VirtualAddress + Section->Misc.VirtualSize - 1); if (end < base) { perror(L"Section %d has negative size\n", i); BS->FreePages(*alloc_address, *alloc_pages); return EFI_UNSUPPORTED; } if (Section->VirtualAddress <= context.EntryPoint && (Section->VirtualAddress + Section->SizeOfRawData - 1) > context.EntryPoint) found_entry_point++; /* We do want to process .reloc, but it's often marked * discardable, so we don't want to memcpy it. */ if (CompareMem(Section->Name, ".reloc\0\0", 8) == 0) { if (RelocSection) { perror(L"Image has multiple relocation sections\n"); return EFI_UNSUPPORTED; } /* If it has nonzero sizes, and our bounds check * made sense, and the VA and size match RelocDir's * versions, then we believe in this section table. 
*/ if (Section->SizeOfRawData && Section->Misc.VirtualSize && base && end && RelocBase == base && RelocBaseEnd == end) { RelocSection = Section; } } if (Section->Characteristics & EFI_IMAGE_SCN_MEM_DISCARDABLE) { continue; } if (!base) { perror(L"Section %d has invalid base address\n", i); return EFI_UNSUPPORTED; } if (!end) { perror(L"Section %d has zero size\n", i); return EFI_UNSUPPORTED; } if (!(Section->Characteristics & EFI_IMAGE_SCN_CNT_UNINITIALIZED_DATA) && (Section->VirtualAddress < context.SizeOfHeaders || Section->PointerToRawData < context.SizeOfHeaders)) { perror(L"Section %d is inside image headers\n", i); return EFI_UNSUPPORTED; } if (Section->Characteristics & EFI_IMAGE_SCN_CNT_UNINITIALIZED_DATA) { ZeroMem(base, Section->Misc.VirtualSize); } else { if (Section->PointerToRawData < context.SizeOfHeaders) { perror(L"Section %d is inside image headers\n", i); return EFI_UNSUPPORTED; } size = Section->Misc.VirtualSize; if (size > Section->SizeOfRawData) size = Section->SizeOfRawData; if (size > 0) CopyMem(base, data + Section->PointerToRawData, size); if (size < Section->Misc.VirtualSize) ZeroMem(base + size, Section->Misc.VirtualSize - size); } } if (context.NumberOfRvaAndSizes <= EFI_IMAGE_DIRECTORY_ENTRY_BASERELOC) { perror(L"Image has no relocation entry\n"); FreePool(buffer); return EFI_UNSUPPORTED; } if (context.RelocDir->Size && RelocSection) { /* * Run the relocation fixups */ efi_status = relocate_coff(&context, RelocSection, data, buffer); if (EFI_ERROR(efi_status)) { perror(L"Relocation failed: %r\n", efi_status); FreePool(buffer); return efi_status; } } /* * Now set the page permissions appropriately. 
*/ Section = context.FirstSection; for (i = 0; i < context.NumberOfSections; i++, Section++) { uint64_t set_attrs = MEM_ATTR_R; uint64_t clear_attrs = MEM_ATTR_W|MEM_ATTR_X; uintptr_t addr; uint64_t length; /* * Skip discardable sections with zero size */ if ((Section->Characteristics & EFI_IMAGE_SCN_MEM_DISCARDABLE) && !Section->Misc.VirtualSize) continue; /* * Skip sections that aren't marked readable. */ if (!(Section->Characteristics & EFI_IMAGE_SCN_MEM_READ)) continue; base = ImageAddress (buffer, context.ImageSize, Section->VirtualAddress); end = ImageAddress (buffer, context.ImageSize, Section->VirtualAddress + Section->Misc.VirtualSize - 1); addr = (uintptr_t)base; length = (uintptr_t)end - (uintptr_t)base + 1; if (Section->Characteristics & EFI_IMAGE_SCN_MEM_WRITE) { set_attrs |= MEM_ATTR_W; clear_attrs &= ~MEM_ATTR_W; } if (Section->Characteristics & EFI_IMAGE_SCN_MEM_EXECUTE) { set_attrs |= MEM_ATTR_X; clear_attrs &= ~MEM_ATTR_X; } update_mem_attrs(addr, length, set_attrs, clear_attrs); } /* * grub needs to know its location and size in memory, so fix up * the loaded image protocol values */ li->ImageBase = buffer; li->ImageSize = context.ImageSize; /* Pass the load options to the second stage loader */ li->LoadOptions = load_options; li->LoadOptionsSize = load_options_size; if (!found_entry_point) { perror(L"Entry point is not within sections\n"); return EFI_UNSUPPORTED; } if (found_entry_point > 1) { perror(L"%d sections contain entry point\n", found_entry_point); return EFI_UNSUPPORTED; } return EFI_SUCCESS; }
0
[ "CWE-787" ]
shim
159151b6649008793d6204a34d7b9c41221fb4b0
37,439,915,958,794,330,000,000,000,000,000,000,000
319
Also avoid CVE-2022-28737 in verify_image() PR 446 ("Add verify_image") duplicates some of the code affected by Chris Coulson's defense in depth patch against CVE-2022-28737 ("pe: Perform image verification earlier when loading grub"). This patch makes the same change to the new function. Signed-off-by: Peter Jones <[email protected]>
mvcur(int yold, int xold, int ynew, int xnew) { return NCURSES_SP_NAME(mvcur) (CURRENT_SCREEN, yold, xold, ynew, xnew); }
0
[]
ncurses
790a85dbd4a81d5f5d8dd02a44d84f01512ef443
283,981,859,131,734,900,000,000,000,000,000,000,000
4
ncurses 6.2 - patch 20200531 + correct configure version-check/warning for g++ to allow for 10.x + re-enable "bel" in konsole-base (report by Nia Huang) + add linux-s entry (patch by Alexandre Montaron). + drop long-obsolete convert_configure.pl + add test/test_parm.c, for checking tparm changes. + improve parameter-checking for tparm, adding function _nc_tiparm() to handle the most-used case, which accepts only numeric parameters (report/testcase by "puppet-meteor"). + use a more conservative estimate of the buffer-size in lib_tparm.c's save_text() and save_number(), in case the sprintf() function passes-through unexpected characters from a format specifier (report/testcase by "puppet-meteor"). + add a check for end-of-string in cvtchar to handle a malformed string in infotocap (report/testcase by "puppet-meteor").
can_clear_with(NCURSES_SP_DCLx ARG_CH_T ch) { if (!back_color_erase && SP_PARM->_coloron) { #if NCURSES_EXT_FUNCS int pair; if (!SP_PARM->_default_color) return FALSE; if (!(isDefaultColor(SP_PARM->_default_fg) && isDefaultColor(SP_PARM->_default_bg))) return FALSE; if ((pair = GetPair(CHDEREF(ch))) != 0) { NCURSES_COLOR_T fg, bg; if (NCURSES_SP_NAME(pair_content) (NCURSES_SP_ARGx (short) pair, &fg, &bg) == ERR || !(isDefaultColor(fg) && isDefaultColor(bg))) { return FALSE; } } #else if (AttrOfD(ch) & A_COLOR) return FALSE; #endif } return (ISBLANK(CHDEREF(ch)) && (AttrOfD(ch) & ~(NONBLANK_ATTR | A_COLOR)) == BLANK_ATTR); }
0
[]
ncurses
790a85dbd4a81d5f5d8dd02a44d84f01512ef443
13,862,017,367,073,534,000,000,000,000,000,000,000
28
ncurses 6.2 - patch 20200531 + correct configure version-check/warning for g++ to allow for 10.x + re-enable "bel" in konsole-base (report by Nia Huang) + add linux-s entry (patch by Alexandre Montaron). + drop long-obsolete convert_configure.pl + add test/test_parm.c, for checking tparm changes. + improve parameter-checking for tparm, adding function _nc_tiparm() to handle the most-used case, which accepts only numeric parameters (report/testcase by "puppet-meteor"). + use a more conservative estimate of the buffer-size in lib_tparm.c's save_text() and save_number(), in case the sprintf() function passes-through unexpected characters from a format specifier (report/testcase by "puppet-meteor"). + add a check for end-of-string in cvtchar to handle a malformed string in infotocap (report/testcase by "puppet-meteor").
static int i2r_IPAddrBlocks(const X509V3_EXT_METHOD *method, void *ext, BIO *out, int indent) { const IPAddrBlocks *addr = ext; int i; for (i = 0; i < sk_IPAddressFamily_num(addr); i++) { IPAddressFamily *f = sk_IPAddressFamily_value(addr, i); const unsigned int afi = X509v3_addr_get_afi(f); switch (afi) { case IANA_AFI_IPV4: BIO_printf(out, "%*sIPv4", indent, ""); break; case IANA_AFI_IPV6: BIO_printf(out, "%*sIPv6", indent, ""); break; default: BIO_printf(out, "%*sUnknown AFI %u", indent, "", afi); break; } if (f->addressFamily->length > 2) { switch (f->addressFamily->data[2]) { case 1: BIO_puts(out, " (Unicast)"); break; case 2: BIO_puts(out, " (Multicast)"); break; case 3: BIO_puts(out, " (Unicast/Multicast)"); break; case 4: BIO_puts(out, " (MPLS)"); break; case 64: BIO_puts(out, " (Tunnel)"); break; case 65: BIO_puts(out, " (VPLS)"); break; case 66: BIO_puts(out, " (BGP MDT)"); break; case 128: BIO_puts(out, " (MPLS-labeled VPN)"); break; default: BIO_printf(out, " (Unknown SAFI %u)", (unsigned)f->addressFamily->data[2]); break; } } switch (f->ipAddressChoice->type) { case IPAddressChoice_inherit: BIO_puts(out, ": inherit\n"); break; case IPAddressChoice_addressesOrRanges: BIO_puts(out, ":\n"); if (!i2r_IPAddressOrRanges(out, indent + 2, f->ipAddressChoice-> u.addressesOrRanges, afi)) return 0; break; } } return 1; }
0
[ "CWE-119", "CWE-787" ]
openssl
068b963bb7afc57f5bdd723de0dd15e7795d5822
199,205,484,802,727,830,000,000,000,000,000,000,000
67
Avoid out-of-bounds read Fixes CVE 2017-3735 Reviewed-by: Kurt Roeckx <[email protected]> (Merged from https://github.com/openssl/openssl/pull/4276) (cherry picked from commit b23171744b01e473ebbfd6edad70c1c3825ffbcd)
char *ip6_string(char *p, const char *addr, const char *fmt) { int i; for (i = 0; i < 8; i++) { p = hex_byte_pack(p, *addr++); p = hex_byte_pack(p, *addr++); if (fmt[0] == 'I' && i != 7) *p++ = ':'; } *p = '\0'; return p; }
0
[ "CWE-200" ]
linux
ad67b74d2469d9b82aaa572d76474c95bc484d57
225,582,408,543,093,700,000,000,000,000,000,000,000
14
printk: hash addresses printed with %p Currently there exist approximately 14 000 places in the kernel where addresses are being printed using an unadorned %p. This potentially leaks sensitive information regarding the Kernel layout in memory. Many of these calls are stale, instead of fixing every call lets hash the address by default before printing. This will of course break some users, forcing code printing needed addresses to be updated. Code that _really_ needs the address will soon be able to use the new printk specifier %px to print the address. For what it's worth, usage of unadorned %p can be broken down as follows (thanks to Joe Perches). $ git grep -E '%p[^A-Za-z0-9]' | cut -f1 -d"/" | sort | uniq -c 1084 arch 20 block 10 crypto 32 Documentation 8121 drivers 1221 fs 143 include 101 kernel 69 lib 100 mm 1510 net 40 samples 7 scripts 11 security 166 sound 152 tools 2 virt Add function ptr_to_id() to map an address to a 32 bit unique identifier. Hash any unadorned usage of specifier %p and any malformed specifiers. Signed-off-by: Tobin C. Harding <[email protected]>
static ssize_t TIFFReadCustomStream(unsigned char *data,const size_t count, void *user_data) { PhotoshopProfile *profile; size_t total; ssize_t remaining; if (count == 0) return(0); profile=(PhotoshopProfile *) user_data; remaining=(MagickOffsetType) profile->length-profile->offset; if (remaining <= 0) return(-1); total=MagickMin(count, (size_t) remaining); (void) memcpy(data,profile->data->datum+profile->offset,total); return(total); }
0
[ "CWE-770" ]
ImageMagick
32a3eeb9e0da083cbc05909e4935efdbf9846df9
338,616,658,694,561,800,000,000,000,000,000,000,000
22
https://github.com/ImageMagick/ImageMagick/issues/736
system_command() { char *cmd; ++c_token; cmd = try_to_get_string(); do_system(cmd); free(cmd); }
0
[ "CWE-415" ]
gnuplot
052cbd17c3cbbc602ee080b2617d32a8417d7563
237,445,037,548,121,860,000,000,000,000,000,000,000
8
successive failures of "set print <foo>" could cause double-free Bug #2312
static void blk_mq_run_work_fn(struct work_struct *work) { struct blk_mq_hw_ctx *hctx; hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work); __blk_mq_run_hw_queue(hctx); }
0
[ "CWE-362", "CWE-264" ]
linux
0048b4837affd153897ed1222283492070027aa9
306,938,426,392,193,500,000,000,000,000,000,000,000
8
blk-mq: fix race between timeout and freeing request Inside timeout handler, blk_mq_tag_to_rq() is called to retrieve the request from one tag. This way is obviously wrong because the request can be freed any time and some fields of the request can't be trusted, then kernel oops might be triggered[1]. Currently wrt. blk_mq_tag_to_rq(), the only special case is that the flush request can share same tag with the request cloned from, and the two requests can't be active at the same time, so this patch fixes the above issue by updating tags->rqs[tag] with the active request(either flush rq or the request cloned from) of the tag. Also blk_mq_tag_to_rq() gets much simplified with this patch. Given blk_mq_tag_to_rq() is mainly for drivers and the caller must make sure the request can't be freed, so in bt_for_each() this helper is replaced with tags->rqs[tag]. [1] kernel oops log [ 439.696220] BUG: unable to handle kernel NULL pointer dereference at 0000000000000158^M [ 439.697162] IP: [<ffffffff812d89ba>] blk_mq_tag_to_rq+0x21/0x6e^M [ 439.700653] PGD 7ef765067 PUD 7ef764067 PMD 0 ^M [ 439.700653] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC ^M [ 439.700653] Dumping ftrace buffer:^M [ 439.700653] (ftrace buffer empty)^M [ 439.700653] Modules linked in: nbd ipv6 kvm_intel kvm serio_raw^M [ 439.700653] CPU: 6 PID: 2779 Comm: stress-ng-sigfd Not tainted 4.2.0-rc5-next-20150805+ #265^M [ 439.730500] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011^M [ 439.730500] task: ffff880605308000 ti: ffff88060530c000 task.ti: ffff88060530c000^M [ 439.730500] RIP: 0010:[<ffffffff812d89ba>] [<ffffffff812d89ba>] blk_mq_tag_to_rq+0x21/0x6e^M [ 439.730500] RSP: 0018:ffff880819203da0 EFLAGS: 00010283^M [ 439.730500] RAX: ffff880811b0e000 RBX: ffff8800bb465f00 RCX: 0000000000000002^M [ 439.730500] RDX: 0000000000000000 RSI: 0000000000000202 RDI: 0000000000000000^M [ 439.730500] RBP: ffff880819203db0 R08: 0000000000000002 R09: 0000000000000000^M [ 439.730500] R10:
0000000000000000 R11: 0000000000000000 R12: 0000000000000202^M [ 439.730500] R13: ffff880814104800 R14: 0000000000000002 R15: ffff880811a2ea00^M [ 439.730500] FS: 00007f165b3f5740(0000) GS:ffff880819200000(0000) knlGS:0000000000000000^M [ 439.730500] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b^M [ 439.730500] CR2: 0000000000000158 CR3: 00000007ef766000 CR4: 00000000000006e0^M [ 439.730500] Stack:^M [ 439.730500] 0000000000000008 ffff8808114eed90 ffff880819203e00 ffffffff812dc104^M [ 439.755663] ffff880819203e40 ffffffff812d9f5e 0000020000000000 ffff8808114eed80^M [ 439.755663] Call Trace:^M [ 439.755663] <IRQ> ^M [ 439.755663] [<ffffffff812dc104>] bt_for_each+0x6e/0xc8^M [ 439.755663] [<ffffffff812d9f5e>] ? blk_mq_rq_timed_out+0x6a/0x6a^M [ 439.755663] [<ffffffff812d9f5e>] ? blk_mq_rq_timed_out+0x6a/0x6a^M [ 439.755663] [<ffffffff812dc1b3>] blk_mq_tag_busy_iter+0x55/0x5e^M [ 439.755663] [<ffffffff812d88b4>] ? blk_mq_bio_to_request+0x38/0x38^M [ 439.755663] [<ffffffff812d8911>] blk_mq_rq_timer+0x5d/0xd4^M [ 439.755663] [<ffffffff810a3e10>] call_timer_fn+0xf7/0x284^M [ 439.755663] [<ffffffff810a3d1e>] ? call_timer_fn+0x5/0x284^M [ 439.755663] [<ffffffff812d88b4>] ? blk_mq_bio_to_request+0x38/0x38^M [ 439.755663] [<ffffffff810a46d6>] run_timer_softirq+0x1ce/0x1f8^M [ 439.755663] [<ffffffff8104c367>] __do_softirq+0x181/0x3a4^M [ 439.755663] [<ffffffff8104c76e>] irq_exit+0x40/0x94^M [ 439.755663] [<ffffffff81031482>] smp_apic_timer_interrupt+0x33/0x3e^M [ 439.755663] [<ffffffff815559a4>] apic_timer_interrupt+0x84/0x90^M [ 439.755663] <EOI> ^M [ 439.755663] [<ffffffff81554350>] ? _raw_spin_unlock_irq+0x32/0x4a^M [ 439.755663] [<ffffffff8106a98b>] finish_task_switch+0xe0/0x163^M [ 439.755663] [<ffffffff8106a94d>] ? finish_task_switch+0xa2/0x163^M [ 439.755663] [<ffffffff81550066>] __schedule+0x469/0x6cd^M [ 439.755663] [<ffffffff8155039b>] schedule+0x82/0x9a^M [ 439.789267] [<ffffffff8119b28b>] signalfd_read+0x186/0x49a^M [ 439.790911] [<ffffffff8106d86a>] ? 
wake_up_q+0x47/0x47^M [ 439.790911] [<ffffffff811618c2>] __vfs_read+0x28/0x9f^M [ 439.790911] [<ffffffff8117a289>] ? __fget_light+0x4d/0x74^M [ 439.790911] [<ffffffff811620a7>] vfs_read+0x7a/0xc6^M [ 439.790911] [<ffffffff8116292b>] SyS_read+0x49/0x7f^M [ 439.790911] [<ffffffff81554c17>] entry_SYSCALL_64_fastpath+0x12/0x6f^M [ 439.790911] Code: 48 89 e5 e8 a9 b8 e7 ff 5d c3 0f 1f 44 00 00 55 89 f2 48 89 e5 41 54 41 89 f4 53 48 8b 47 60 48 8b 1c d0 48 8b 7b 30 48 8b 53 38 <48> 8b 87 58 01 00 00 48 85 c0 75 09 48 8b 97 88 0c 00 00 eb 10 ^M [ 439.790911] RIP [<ffffffff812d89ba>] blk_mq_tag_to_rq+0x21/0x6e^M [ 439.790911] RSP <ffff880819203da0>^M [ 439.790911] CR2: 0000000000000158^M [ 439.790911] ---[ end trace d40af58949325661 ]---^M Cc: <[email protected]> Signed-off-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
executing_line_number () { if (executing && showing_function_line == 0 && (variable_context == 0 || interactive_shell == 0) && currently_executing_command) { #if defined (COND_COMMAND) if (currently_executing_command->type == cm_cond) return currently_executing_command->value.Cond->line; #endif #if defined (DPAREN_ARITHMETIC) else if (currently_executing_command->type == cm_arith) return currently_executing_command->value.Arith->line; #endif #if defined (ARITH_FOR_COMMAND) else if (currently_executing_command->type == cm_arith_for) return currently_executing_command->value.ArithFor->line; #endif return line_number; } else return line_number; }
0
[]
bash
863d31ae775d56b785dc5b0105b6d251515d81d5
306,993,517,462,164,440,000,000,000,000,000,000,000
24
commit bash-20120224 snapshot
static ssize_t objs_per_slab_show(struct kmem_cache *s, char *buf) { return sprintf(buf, "%d\n", oo_objects(s->oo)); }
0
[ "CWE-189" ]
linux
f8bd2258e2d520dff28c855658bd24bdafb5102d
149,421,529,791,126,130,000,000,000,000,000,000,000
4
remove div_long_long_rem x86 is the only arch right now, which provides an optimized div_long_long_rem and it has the downside that one has to be very careful that the divide doesn't overflow. The API is a little awkward, as the arguments for the unsigned divide are signed. The signed version also doesn't handle a negative divisor and produces worse code on 64bit archs. There is little incentive to keep this API alive, so this converts the few users to the new API. Signed-off-by: Roman Zippel <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: john stultz <[email protected]> Cc: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
TEST(Url, ParsingFails) { Utility::Url url; EXPECT_FALSE(url.initialize("", false)); EXPECT_FALSE(url.initialize("foo", false)); EXPECT_FALSE(url.initialize("http://", false)); EXPECT_FALSE(url.initialize("random_scheme://host.com/path", false)); EXPECT_FALSE(url.initialize("http://www.foo.com", true)); EXPECT_FALSE(url.initialize("foo.com", true)); }
0
[]
envoy
3b5acb2f43548862dadb243de7cf3994986a8e04
110,821,334,231,473,080,000,000,000,000,000,000,000
9
http, url: Bring back chromium_url and http_parser_parse_url (#198) * Revert GURL as HTTP URL parser utility This reverts: 1. commit c9c4709c844b90b9bb2935d784a428d667c9df7d 2. commit d828958b591a6d79f4b5fa608ece9962b7afbe32 3. commit 2d69e30c51f2418faf267aaa6c1126fce9948c62 Signed-off-by: Dhi Aurrahman <[email protected]>