Column schema (each record below lists these eight fields, one per line, in this order):

  func        string    length 0 to 484k
  target      int64     0 to 1
  cwe         list      length 0 to 4
  project     string    799 distinct values
  commit_id   string    fixed length 40
  hash        float64   1,215,700,430,453,689,100,000,000 to 340,281,914,521,452,260,000,000,000,000
  size        int64     1 to 24k
  message     string    length 0 to 13.3k
BasicWriter &operator<<(char value) { buffer_.push_back(value); return *this; }
0
[ "CWE-134", "CWE-119", "CWE-787" ]
fmt
8cf30aa2be256eba07bb1cefb998c52326e846e7
174,159,489,359,749,760,000,000,000,000,000,000,000
4
Fix segfault on complex pointer formatting (#642)
void rds_wake_sk_sleep(struct rds_sock *rs) { unsigned long flags; read_lock_irqsave(&rs->rs_recv_lock, flags); __rds_wake_sk_sleep(rds_rs_to_sk(rs)); read_unlock_irqrestore(&rs->rs_recv_lock, flags); }
0
[ "CWE-787" ]
linux
780e982905bef61d13496d9af5310bf4af3a64d3
117,980,289,796,542,200,000,000,000,000,000,000,000
8
RDS: validate the requested traces user input against max supported Larger than supported value can lead to array read/write overflow. Reported-by: Colin Ian King <[email protected]> Signed-off-by: Santosh Shilimkar <[email protected]> Signed-off-by: David S. Miller <[email protected]>
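The fix the message above describes — rejecting a requested trace count larger than the supported maximum before it can index fixed-size arrays — can be sketched as follows. The names, the bound of 3, and the -EINVAL-style return are illustrative stand-ins, not the actual RDS patch:

```c
#include <assert.h>

/* Illustrative bound; the real maximum lives in the RDS headers. */
#define MAX_SUPPORTED_TRACES 3

/* Reject user-supplied trace counts that would overflow the
 * per-trace arrays; returns 0 on success, -22 (-EINVAL) otherwise. */
static int validate_trace_count(unsigned int requested)
{
    if (requested > MAX_SUPPORTED_TRACES)
        return -22;
    return 0;
}
```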
xfs_inode_ag_walk_grab( struct xfs_inode *ip, int flags) { struct inode *inode = VFS_I(ip); bool newinos = !!(flags & XFS_AGITER_INEW_WAIT); ASSERT(rcu_read_lock_held()); /* * check for stale RCU freed inode * * If the inode has been reallocated, it doesn't matter if it's not in * the AG we are walking - we are walking for writeback, so if it * passes all the "valid inode" checks and is dirty, then we'll write * it back anyway. If it has been reallocated and still being * initialised, the XFS_INEW check below will catch it. */ spin_lock(&ip->i_flags_lock); if (!ip->i_ino) goto out_unlock_noent; /* avoid new or reclaimable inodes. Leave for reclaim code to flush */ if ((!newinos && __xfs_iflags_test(ip, XFS_INEW)) || __xfs_iflags_test(ip, XFS_IRECLAIMABLE | XFS_IRECLAIM)) goto out_unlock_noent; spin_unlock(&ip->i_flags_lock); /* nothing to sync during shutdown */ if (XFS_FORCED_SHUTDOWN(ip->i_mount)) return -EFSCORRUPTED; /* If we can't grab the inode, it must on it's way to reclaim. */ if (!igrab(inode)) return -ENOENT; /* inode is valid */ return 0; out_unlock_noent: spin_unlock(&ip->i_flags_lock); return -ENOENT; }
0
[ "CWE-476" ]
linux
afca6c5b2595fc44383919fba740c194b0b76aff
169,859,838,599,628,020,000,000,000,000,000,000,000
43
xfs: validate cached inodes are free when allocated A recent fuzzed filesystem image caused random dcache corruption when the reproducer was run. This often showed up as panics in lookup_slow() on a null inode->i_ops pointer when doing pathwalks. BUG: unable to handle kernel NULL pointer dereference at 0000000000000000 .... Call Trace: lookup_slow+0x44/0x60 walk_component+0x3dd/0x9f0 link_path_walk+0x4a7/0x830 path_lookupat+0xc1/0x470 filename_lookup+0x129/0x270 user_path_at_empty+0x36/0x40 path_listxattr+0x98/0x110 SyS_listxattr+0x13/0x20 do_syscall_64+0xf5/0x280 entry_SYSCALL_64_after_hwframe+0x42/0xb7 but had many different failure modes including deadlocks trying to lock the inode that was just allocated or KASAN reports of use-after-free violations. The cause of the problem was a corrupt INOBT on a v4 fs where the root inode was marked as free in the inobt record. Hence when we allocated an inode, it chose the root inode to allocate, found it in the cache and re-initialised it. We recently fixed a similar inode allocation issue caused by inobt record corruption problem in xfs_iget_cache_miss() in commit ee457001ed6c ("xfs: catch inode allocation state mismatch corruption"). This change adds similar checks to the cache-hit path to catch it, and turns the reproducer into a corruption shutdown situation. Reported-by: Wen Xu <[email protected]> Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Carlos Maiolino <[email protected]> Reviewed-by: Darrick J. Wong <[email protected]> [darrick: fix typos in comment] Signed-off-by: Darrick J. Wong <[email protected]>
INST_HANDLER (spm) { // SPM Z+ ut64 spmcsr; // read SPM Control Register (SPMCR) r_anal_esil_reg_read (anal->esil, "spmcsr", &spmcsr, NULL); // clear SPMCSR ESIL_A ("0x7c,spmcsr,&=,"); // decide action depending on the old value of SPMCSR switch (spmcsr & 0x7f) { case 0x03: // PAGE ERASE // invoke SPM_CLEAR_PAGE (erases target page writing // the 0xff value ESIL_A ("16,rampz,<<,z,+,"); // push target address ESIL_A ("SPM_PAGE_ERASE,"); // do magic break; case 0x01: // FILL TEMPORARY BUFFER ESIL_A ("r1,r0,"); // push data ESIL_A ("z,"); // push target address ESIL_A ("SPM_PAGE_FILL,"); // do magic break; case 0x05: // WRITE PAGE ESIL_A ("16,rampz,<<,z,+,"); // push target address ESIL_A ("SPM_PAGE_WRITE,"); // do magic break; default: eprintf ("SPM: I dont know what to do with SPMCSR %02x.\n", (unsigned int) spmcsr); } op->cycles = 1; // This is truly false. Datasheets do not publish how // many cycles this instruction uses in all its // operation modes and I am pretty sure that this value // can vary substantially from one MCU type to another. // So... one cycle is fine. }
0
[ "CWE-125" ]
radare2
041e53cab7ca33481ae45ecd65ad596976d78e68
155,326,726,084,858,460,000,000,000,000,000,000,000
40
Fix crash in anal.avr
void ha_partition::release_auto_increment() { DBUG_ENTER("ha_partition::release_auto_increment"); if (table->s->next_number_keypart) { for (uint i= 0; i < m_tot_parts; i++) m_file[i]->ha_release_auto_increment(); } else if (next_insert_id) { ulonglong next_auto_inc_val; lock_auto_increment(); next_auto_inc_val= table_share->ha_part_data->next_auto_inc_val; /* If the current auto_increment values is lower than the reserved value, and the reserved value was reserved by this thread, we can lower the reserved value. */ if (next_insert_id < next_auto_inc_val && auto_inc_interval_for_cur_row.maximum() >= next_auto_inc_val) { THD *thd= ha_thd(); /* Check that we do not lower the value because of a failed insert with SET INSERT_ID, i.e. forced/non generated values. */ if (thd->auto_inc_intervals_forced.maximum() < next_insert_id) table_share->ha_part_data->next_auto_inc_val= next_insert_id; } DBUG_PRINT("info", ("table_share->ha_part_data->next_auto_inc_val: %lu", (ulong) table_share->ha_part_data->next_auto_inc_val)); /* Unlock the multi row statement lock taken in get_auto_increment */ if (auto_increment_safe_stmt_log_lock) { auto_increment_safe_stmt_log_lock= FALSE; DBUG_PRINT("info", ("unlocking auto_increment_safe_stmt_log_lock")); } unlock_auto_increment(); } DBUG_VOID_RETURN; }
0
[]
mysql-server
be901b60ae59c93848c829d1b0b2cb523ab8692e
334,086,517,636,375,900,000,000,000,000,000,000,000
44
Bug#26390632: CREATE TABLE CAN CAUSE MYSQL TO EXIT. Analysis ======== CREATE TABLE of InnoDB table with a partition name which exceeds the path limit can cause the server to exit. During the preparation of the partition name, there was no check to identify whether the complete path name for partition exceeds the max supported path length, causing the server to exit during subsequent processing. Fix === During the preparation of partition name, check and report an error if the partition path name exceeds the maximum path name limit. This is a 5.5 patch.
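The fix section above can be sketched as a simple length check performed while composing the partition path. The limit and names below are hypothetical stand-ins, not the server's actual code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical path-length cap standing in for the real limit. */
#define MAX_PATH_LEN 512

/* Check that directory + partition name (plus a separator and the
 * terminating NUL) fit the supported maximum; the caller reports an
 * error instead of exiting when this returns 0. */
static int partition_path_fits(size_t dir_len, size_t part_name_len)
{
    return dir_len + part_name_len + 2 <= MAX_PATH_LEN;
}
```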
static OPJ_BOOL opj_j2k_write_sot(opj_j2k_t *p_j2k, OPJ_BYTE * p_data, OPJ_UINT32 * p_data_written, const opj_stream_private_t *p_stream, opj_event_mgr_t * p_manager ) { /* preconditions */ assert(p_j2k != 00); assert(p_manager != 00); assert(p_stream != 00); OPJ_UNUSED(p_stream); OPJ_UNUSED(p_manager); opj_write_bytes(p_data, J2K_MS_SOT, 2); /* SOT */ p_data += 2; opj_write_bytes(p_data, 10, 2); /* Lsot */ p_data += 2; opj_write_bytes(p_data, p_j2k->m_current_tile_number, 2); /* Isot */ p_data += 2; /* Psot */ p_data += 4; opj_write_bytes(p_data, p_j2k->m_specific_param.m_encoder.m_current_tile_part_number, 1); /* TPsot */ ++p_data; opj_write_bytes(p_data, p_j2k->m_cp.tcps[p_j2k->m_current_tile_number].m_nb_tile_parts, 1); /* TNsot */ ++p_data; /* UniPG>> */ #ifdef USE_JPWL /* update markers struct */ /* OPJ_BOOL res = j2k_add_marker(p_j2k->cstr_info, J2K_MS_SOT, p_j2k->sot_start, len + 2); */ assert(0 && "TODO"); #endif /* USE_JPWL */ * p_data_written = 12; return OPJ_TRUE; }
0
[ "CWE-416", "CWE-787" ]
openjpeg
4241ae6fbbf1de9658764a80944dc8108f2b4154
58,369,328,727,972,650,000,000,000,000,000,000,000
53
Fix assertion in debug mode / heap-based buffer overflow in opj_write_bytes_LE for Cinema profiles with numresolutions = 1 (#985)
static int posix_cpu_clock_get(const clockid_t which_clock, struct timespec64 *tp) { const pid_t pid = CPUCLOCK_PID(which_clock); int err = -EINVAL; if (pid == 0) { /* * Special case constant value for our own clocks. * We don't have to do any lookup to find ourselves. */ err = posix_cpu_clock_get_task(current, which_clock, tp); } else { /* * Find the given PID, and validate that the caller * should be able to see it. */ struct task_struct *p; rcu_read_lock(); p = find_task_by_vpid(pid); if (p) err = posix_cpu_clock_get_task(p, which_clock, tp); rcu_read_unlock(); } return err; }
0
[ "CWE-190" ]
linux
78c9c4dfbf8c04883941445a195276bb4bb92c76
71,290,548,450,087,510,000,000,000,000,000,000,000
26
posix-timers: Sanitize overrun handling The posix timer overrun handling is broken because the forwarding functions can return a huge number of overruns which does not fit in an int. As a consequence timer_getoverrun(2) and siginfo::si_overrun can turn into random number generators. The k_clock::timer_forward() callbacks return a 64 bit value now. Make k_itimer::ti_overrun[_last] 64bit as well, so the kernel internal accounting is correct. Remove the temporary (int) casts. Add a helper function which clamps the overrun value returned to user space via timer_getoverrun(2) or siginfo::si_overrun limited to a positive value between 0 and INT_MAX. INT_MAX is an indicator for user space that the overrun value has been clamped. Reported-by: Team OWL337 <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: John Stultz <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Michael Kerrisk <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
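The clamping helper that commit message describes can be sketched like this; the function name is illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>
#include <limits.h>

/* Clamp a 64-bit overrun count to what timer_getoverrun(2) and
 * siginfo::si_overrun can carry: a value between 0 and INT_MAX,
 * where INT_MAX signals to user space that clamping happened. */
static int clamp_overrun_to_int(int64_t overrun)
{
    if (overrun < 0)
        return 0;
    if (overrun > INT_MAX)
        return INT_MAX;
    return (int)overrun;
}
```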
void kvm_ioapic_reset(struct kvm_ioapic *ioapic) { int i; for (i = 0; i < IOAPIC_NUM_PINS; i++) ioapic->redirtbl[i].fields.mask = 1; ioapic->base_address = IOAPIC_DEFAULT_BASE_ADDRESS; ioapic->ioregsel = 0; ioapic->irr = 0; ioapic->id = 0; update_handled_vectors(ioapic); }
0
[ "CWE-20" ]
kvm
a2c118bfab8bc6b8bb213abfc35201e441693d55
160,620,087,376,532,980,000,000,000,000,000,000,000
12
KVM: Fix bounds checking in ioapic indirect register reads (CVE-2013-1798) If the guest specifies a IOAPIC_REG_SELECT with an invalid value and follows that with a read of the IOAPIC_REG_WINDOW KVM does not properly validate that request. ioapic_read_indirect contains an ASSERT(redir_index < IOAPIC_NUM_PINS), but the ASSERT has no effect in non-debug builds. In recent kernels this allows a guest to cause a kernel oops by reading invalid memory. In older kernels (pre-3.3) this allows a guest to read from large ranges of host memory. Tested: tested against apic unit tests. Signed-off-by: Andrew Honig <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]>
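The fix pattern the message describes — an explicit range check on the guest-controlled index instead of a debug-only ASSERT — can be sketched as below. The table contents and helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define IOAPIC_NUM_PINS 24

/* Demo redirection table; entry 3 is given a recognizable value. */
static uint64_t redirtbl_demo[IOAPIC_NUM_PINS] = { [3] = 0xfee0 };

/* Validate the index on every read, even in non-debug builds. */
static int redir_index_valid(unsigned int index)
{
    return index < IOAPIC_NUM_PINS;
}

static uint64_t read_redir(unsigned int index)
{
    if (!redir_index_valid(index))
        return 0;               /* refuse out-of-range window reads */
    return redirtbl_demo[index];
}
```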
bool init_read_record(READ_RECORD *info,THD *thd, TABLE *table, SQL_SELECT *select, SORT_INFO *filesort, int use_record_cache, bool print_error, bool disable_rr_cache) { IO_CACHE *tempfile; SORT_ADDON_FIELD *addon_field= filesort ? filesort->addon_field : 0; DBUG_ENTER("init_read_record"); bzero((char*) info,sizeof(*info)); info->thd=thd; info->table=table; info->forms= &info->table; /* Only one table */ info->addon_field= addon_field; if ((table->s->tmp_table == INTERNAL_TMP_TABLE) && !addon_field) (void) table->file->extra(HA_EXTRA_MMAP); if (addon_field) { info->rec_buf= (uchar*) filesort->addon_buf.str; info->ref_length= filesort->addon_buf.length; info->unpack= filesort->unpack; } else { empty_record(table); info->record= table->record[0]; info->ref_length= table->file->ref_length; } info->select=select; info->print_error=print_error; info->unlock_row= rr_unlock_row; info->ignore_not_found_rows= 0; table->status= 0; /* Rows are always found */ tempfile= 0; if (select && my_b_inited(&select->file)) tempfile= &select->file; else if (filesort && my_b_inited(&filesort->io_cache)) tempfile= &filesort->io_cache; if (tempfile && !(select && select->quick)) { DBUG_PRINT("info",("using rr_from_tempfile")); info->read_record= (addon_field ? 
rr_unpack_from_tempfile : rr_from_tempfile); info->io_cache= tempfile; reinit_io_cache(info->io_cache,READ_CACHE,0L,0,0); info->ref_pos=table->file->ref; if (!table->file->inited) if (table->file->ha_rnd_init_with_error(0)) DBUG_RETURN(1); /* addon_field is checked because if we use addon fields, it doesn't make sense to use cache - we don't read from the table and filesort->io_cache is read sequentially */ if (!disable_rr_cache && !addon_field && thd->variables.read_rnd_buff_size && !(table->file->ha_table_flags() & HA_FAST_KEY_READ) && (table->db_stat & HA_READ_ONLY || table->reginfo.lock_type <= TL_READ_NO_INSERT) && (ulonglong) table->s->reclength* (table->file->stats.records+ table->file->stats.deleted) > (ulonglong) MIN_FILE_LENGTH_TO_USE_ROW_CACHE && info->io_cache->end_of_file/info->ref_length * table->s->reclength > (my_off_t) MIN_ROWS_TO_USE_TABLE_CACHE && !table->s->blob_fields && info->ref_length <= MAX_REFLENGTH) { if (! init_rr_cache(thd, info)) { DBUG_PRINT("info",("using rr_from_cache")); info->read_record=rr_from_cache; } } } else if (select && select->quick) { DBUG_PRINT("info",("using rr_quick")); info->read_record=rr_quick; } else if (filesort && filesort->record_pointers) { DBUG_PRINT("info",("using record_pointers")); if (table->file->ha_rnd_init_with_error(0)) DBUG_RETURN(1); info->cache_pos= filesort->record_pointers; info->cache_end= (info->cache_pos+ filesort->return_rows * info->ref_length); info->read_record= (addon_field ? 
rr_unpack_from_buffer : rr_from_pointers); } else if (table->file->keyread_enabled()) { int error; info->read_record= rr_index_first; if (!table->file->inited && (error= table->file->ha_index_init(table->file->keyread, 1))) { if (print_error) table->file->print_error(error, MYF(0)); DBUG_RETURN(1); } } else { DBUG_PRINT("info",("using rr_sequential")); info->read_record=rr_sequential; if (table->file->ha_rnd_init_with_error(1)) DBUG_RETURN(1); /* We can use record cache if we don't update dynamic length tables */ if (!table->no_cache && (use_record_cache > 0 || (int) table->reginfo.lock_type <= (int) TL_READ_HIGH_PRIORITY || !(table->s->db_options_in_use & HA_OPTION_PACK_RECORD) || (use_record_cache < 0 && !(table->file->ha_table_flags() & HA_NOT_DELETE_WITH_CACHE)))) (void) table->file->extra_opt(HA_EXTRA_CACHE, thd->variables.read_buff_size); } /* Condition pushdown to storage engine */ if ((table->file->ha_table_flags() & HA_CAN_TABLE_CONDITION_PUSHDOWN) && select && select->cond && (select->cond->used_tables() & table->map) && !table->file->pushed_cond) table->file->cond_push(select->cond); DBUG_RETURN(0); } /* init_read_record */
0
[]
server
1b8bb44106f528f742faa19d23bd6e822be04f39
135,238,532,333,325,230,000,000,000,000,000,000,000
135
MDEV-26351 segfault - (MARIA_HA *) 0x0 in ha_maria::extra use the correct check. before invoking handler methods we need to know that the table was opened, not only created.
static void flat_print_str(WriterContext *wctx, const char *key, const char *value) { FlatContext *flat = wctx->priv; AVBPrint buf; printf("%s", wctx->section_pbuf[wctx->level].str); av_bprint_init(&buf, 1, AV_BPRINT_SIZE_UNLIMITED); printf("%s=", flat_escape_key_str(&buf, key, flat->sep)); av_bprint_clear(&buf); printf("\"%s\"\n", flat_escape_value_str(&buf, value)); av_bprint_finalize(&buf, NULL); }
0
[ "CWE-476" ]
FFmpeg
837cb4325b712ff1aab531bf41668933f61d75d2
181,545,842,968,407,170,000,000,000,000,000,000,000
12
ffprobe: Fix null pointer dereference with color primaries Found-by: AD-lab of venustech Signed-off-by: Michael Niedermayer <[email protected]>
int read_print_config_dir(void) { return 0; }
0
[ "CWE-787" ]
123elf
92738c435690ae467ecc1f99d2bcea56f198205a
328,455,766,165,197,540,000,000,000,000,000,000,000
4
Reimplementation of function at 0x80bb148 that prevents overflowing the destination buffer. - Adds symbol FUN_80bb148 using objcopy --add-symbol - Adds it to undefine.lst so it can be replaced - Replaces it with a function that stops copying if the destination buffer is full. The size is determined based on the calling function.
static int alloc_identity_pagetable(struct kvm *kvm) { struct page *page; struct kvm_userspace_memory_region kvm_userspace_mem; int r = 0; mutex_lock(&kvm->slots_lock); if (kvm->arch.ept_identity_pagetable) goto out; kvm_userspace_mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT; kvm_userspace_mem.flags = 0; kvm_userspace_mem.guest_phys_addr = kvm->arch.ept_identity_map_addr; kvm_userspace_mem.memory_size = PAGE_SIZE; r = __kvm_set_memory_region(kvm, &kvm_userspace_mem); if (r) goto out; page = gfn_to_page(kvm, kvm->arch.ept_identity_map_addr >> PAGE_SHIFT); if (is_error_page(page)) { r = -EFAULT; goto out; } kvm->arch.ept_identity_pagetable = page; out: mutex_unlock(&kvm->slots_lock); return r; }
0
[ "CWE-20" ]
linux
bfd0a56b90005f8c8a004baf407ad90045c2b11e
332,314,985,567,021,060,000,000,000,000,000,000,000
29
nEPT: Nested INVEPT If we let L1 use EPT, we should probably also support the INVEPT instruction. In our current nested EPT implementation, when L1 changes its EPT table for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02), and in the course of this modification already calls INVEPT. But if last level of shadow page is unsync not all L1's changes to EPT12 are intercepted, which means roots need to be synced when L1 calls INVEPT. Global INVEPT should not be different since roots are synced by kvm_mmu_load() each time EPTP02 changes. Reviewed-by: Xiao Guangrong <[email protected]> Signed-off-by: Nadav Har'El <[email protected]> Signed-off-by: Jun Nakajima <[email protected]> Signed-off-by: Xinhao Xu <[email protected]> Signed-off-by: Yang Zhang <[email protected]> Signed-off-by: Gleb Natapov <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
static uint64_t enet_read(void *opaque, hwaddr addr, unsigned size) { XgmacState *s = opaque; uint64_t r = 0; addr >>= 2; switch (addr) { case XGMAC_VERSION: r = 0x1012; break; default: if (addr < ARRAY_SIZE(s->regs)) { r = s->regs[addr]; } break; } return r; }
0
[ "CWE-787" ]
qemu
5519724a13664b43e225ca05351c60b4468e4555
152,204,616,326,757,900,000,000,000,000,000,000,000
18
hw/net/xgmac: Fix buffer overflow in xgmac_enet_send() A buffer overflow issue was reported by Mr. Ziming Zhang, CC'd here. It occurs while sending an Ethernet frame due to missing break statements and improper checking of the buffer size. Reported-by: Ziming Zhang <[email protected]> Signed-off-by: Mauro Matteo Cascella <[email protected]> Reviewed-by: Peter Maydell <[email protected]> Signed-off-by: Jason Wang <[email protected]>
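The "improper checking of the buffer size" part of this fix amounts to bounding every copy into the fixed transmit buffer. A minimal sketch follows; the buffer size and function names are assumptions, not QEMU's code:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

static unsigned char tx_buf[16];    /* stand-in for the frame buffer */

/* Append a fragment, truncating instead of writing past the end;
 * returns the new fill level. */
static size_t append_bounded(size_t used, const unsigned char *frag,
                             size_t len)
{
    size_t room = sizeof(tx_buf) - used;
    if (len > room)
        len = room;
    memcpy(tx_buf + used, frag, len);
    return used + len;
}
```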
static inline int finish_nested_data(UNSERIALIZE_PARAMETER) { if (*p >= max || **p != '}') { return 0; } (*p)++; return 1; }
0
[ "CWE-416" ]
php-src
1a23ebc1fff59bf480ca92963b36eba5c1b904c4
165,566,961,263,101,950,000,000,000,000,000,000,000
9
Fixed bug #74103 and bug #75054 Directly fail unserialization when trying to acquire an r/R reference to an UNDEF HT slot. Previously this left an UNDEF and later deleted the index/key from the HT. What actually caused the issue here is a combination of two factors: First, the key deletion was performed using the hash API, rather than the symtable API, such that the element was not actually removed if it used an integral string key. Second, a subsequent deletion operation, while collecting trailing UNDEF ranges, would mark the element as available for reuse (leaving a corrupted HT state with nNumOfElemnts > nNumUsed). Fix this by failing early and dropping the deletion code.
static void _slurm_rpc_shutdown_controller(slurm_msg_t * msg) { int error_code = SLURM_SUCCESS, i; uint16_t options = 0; shutdown_msg_t *shutdown_msg = (shutdown_msg_t *) msg->data; uid_t uid = g_slurm_auth_get_uid(msg->auth_cred, slurmctld_config.auth_info); /* Locks: Read node */ slurmctld_lock_t node_read_lock = { NO_LOCK, NO_LOCK, READ_LOCK, NO_LOCK, NO_LOCK }; if (!validate_super_user(uid)) { error("Security violation, SHUTDOWN RPC from uid=%d", uid); error_code = ESLURM_USER_ID_MISSING; } if (error_code); else if (msg->msg_type == REQUEST_CONTROL) { info("Performing RPC: REQUEST_CONTROL"); /* resume backup mode */ slurmctld_config.resume_backup = true; } else { info("Performing RPC: REQUEST_SHUTDOWN"); options = shutdown_msg->options; } /* do RPC call */ if (error_code) ; else if (options == 1) info("performing immeditate shutdown without state save"); else if (slurmctld_config.shutdown_time) debug2("shutdown RPC issued when already in progress"); else { if ((msg->msg_type == REQUEST_SHUTDOWN) && (options == 0)) { /* This means (msg->msg_type != REQUEST_CONTROL) */ lock_slurmctld(node_read_lock); msg_to_slurmd(REQUEST_SHUTDOWN); unlock_slurmctld(node_read_lock); } if (slurmctld_config.thread_id_sig) /* signal clean-up */ pthread_kill(slurmctld_config.thread_id_sig, SIGTERM); else { error("thread_id_sig undefined, hard shutdown"); slurmctld_config.shutdown_time = time(NULL); /* send REQUEST_SHUTDOWN_IMMEDIATE RPC */ slurmctld_shutdown(); } } if (msg->msg_type == REQUEST_CONTROL) { /* Wait for workload to dry up before sending reply. * One thread should remain, this one. 
*/ for (i = 1; i < (CONTROL_TIMEOUT * 10); i++) { if (slurmctld_config.server_thread_count <= 1) break; usleep(100000); } if (slurmctld_config.server_thread_count > 1) error("REQUEST_CONTROL reply with %d active threads", slurmctld_config.server_thread_count); /* save_all_state(); performed by _slurmctld_background */ /* * jobcomp/elasticsearch saves/loads the state to/from file * elasticsearch_state. Since the jobcomp API isn't designed * with save/load state operations, the jobcomp/elasticsearch * _save_state() is highly coupled to its fini() function. This * state doesn't follow the same execution path as the rest of * Slurm states, where in save_all_sate() they are all indepen- * dently scheduled. So we save it manually here. */ (void) g_slurm_jobcomp_fini(); } slurm_send_rc_msg(msg, error_code); if ((error_code == SLURM_SUCCESS) && (options == 1) && (slurmctld_config.thread_id_sig)) pthread_kill(slurmctld_config.thread_id_sig, SIGABRT); }
0
[ "CWE-20" ]
slurm
033dc0d1d28b8d2ba1a5187f564a01c15187eb4e
230,909,536,534,606,900,000,000,000,000,000,000,000
81
Fix insecure handling of job requested gid. Only trust MUNGE signed values, unless the RPC was signed by SlurmUser or root. CVE-2018-10995.
static unsigned int dn_poll(struct file *file, struct socket *sock, poll_table *wait) { struct sock *sk = sock->sk; struct dn_scp *scp = DN_SK(sk); int mask = datagram_poll(file, sock, wait); if (!skb_queue_empty(&scp->other_receive_queue)) mask |= POLLRDBAND; return mask; }
0
[]
net
79462ad02e861803b3840cc782248c7359451cd9
294,813,764,094,135,180,000,000,000,000,000,000,000
11
net: add validation for the socket syscall protocol argument 郭永刚 reported that one could simply crash the kernel as root by using a simple program: int socket_fd; struct sockaddr_in addr; addr.sin_port = 0; addr.sin_addr.s_addr = INADDR_ANY; addr.sin_family = 10; socket_fd = socket(10,3,0x40000000); connect(socket_fd , &addr,16); AF_INET, AF_INET6 sockets actually only support 8-bit protocol identifiers. inet_sock's skc_protocol field thus is sized accordingly, thus larger protocol identifiers simply cut off the higher bits and store a zero in the protocol fields. This could lead to e.g. NULL function pointer because as a result of the cut off inet_num is zero and we call down to inet_autobind, which is NULL for raw sockets. kernel: Call Trace: kernel: [<ffffffff816db90e>] ? inet_autobind+0x2e/0x70 kernel: [<ffffffff816db9a4>] inet_dgram_connect+0x54/0x80 kernel: [<ffffffff81645069>] SYSC_connect+0xd9/0x110 kernel: [<ffffffff810ac51b>] ? ptrace_notify+0x5b/0x80 kernel: [<ffffffff810236d8>] ? syscall_trace_enter_phase2+0x108/0x200 kernel: [<ffffffff81645e0e>] SyS_connect+0xe/0x10 kernel: [<ffffffff81779515>] tracesys_phase2+0x84/0x89 I found no particular commit which introduced this problem. CVE: CVE-2015-8543 Cc: Cong Wang <[email protected]> Reported-by: 郭永刚 <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Signed-off-by: David S. Miller <[email protected]>
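The validation this commit adds boils down to rejecting protocol numbers that do not fit the 8-bit field before they are stored. A hedged sketch (constant name and return value illustrative, details simplified):

```c
#include <assert.h>

#define IPPROTO_MAX_SKETCH 256   /* protocol identifiers are 8-bit */

/* Reject protocol values that would be silently truncated when
 * stored in the 8-bit protocol field; returns 0 or -22 (-EINVAL). */
static int validate_socket_protocol(int protocol)
{
    if (protocol < 0 || protocol >= IPPROTO_MAX_SKETCH)
        return -22;
    return 0;
}
```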
static int vxlan_open(struct net_device *dev) { struct vxlan_dev *vxlan = netdev_priv(dev); int ret; ret = vxlan_sock_add(vxlan); if (ret < 0) return ret; if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip)) { ret = vxlan_igmp_join(vxlan); if (ret == -EADDRINUSE) ret = 0; if (ret) { vxlan_sock_release(vxlan); return ret; } } if (vxlan->cfg.age_interval) mod_timer(&vxlan->age_timer, jiffies + FDB_AGE_INTERVAL); return ret; }
0
[]
net
6c8991f41546c3c472503dff1ea9daaddf9331c2
114,090,663,636,282,820,000,000,000,000,000,000,000
24
net: ipv6_stub: use ip6_dst_lookup_flow instead of ip6_dst_lookup ipv6_stub uses the ip6_dst_lookup function to allow other modules to perform IPv6 lookups. However, this function skips the XFRM layer entirely. All users of ipv6_stub->ip6_dst_lookup use ip_route_output_flow (via the ip_route_output_key and ip_route_output helpers) for their IPv4 lookups, which calls xfrm_lookup_route(). This patch fixes this inconsistent behavior by switching the stub to ip6_dst_lookup_flow, which also calls xfrm_lookup_route(). This requires some changes in all the callers, as these two functions take different arguments and have different return types. Fixes: 5f81bd2e5d80 ("ipv6: export a stub for IPv6 symbols used by vxlan") Reported-by: Xiumei Mu <[email protected]> Signed-off-by: Sabrina Dubroca <[email protected]> Signed-off-by: David S. Miller <[email protected]>
ConnStateData::handleChunkedRequestBody() { debugs(33, 7, "chunked from " << clientConnection << ": " << inBuf.length()); try { // the parser will throw on errors if (inBuf.isEmpty()) // nothing to do return ERR_NONE; BodyPipeCheckout bpc(*bodyPipe); bodyParser->setPayloadBuffer(&bpc.buf); const bool parsed = bodyParser->parse(inBuf); inBuf = bodyParser->remaining(); // sync buffers bpc.checkIn(); // dechunk then check: the size limit applies to _dechunked_ content if (clientIsRequestBodyTooLargeForPolicy(bodyPipe->producedSize())) return ERR_TOO_BIG; if (parsed) { finishDechunkingRequest(true); Must(!bodyPipe); return ERR_NONE; // nil bodyPipe implies body end for the caller } // if chunk parser needs data, then the body pipe must need it too Must(!bodyParser->needsMoreData() || bodyPipe->mayNeedMoreData()); // if parser needs more space and we can consume nothing, we will stall Must(!bodyParser->needsMoreSpace() || bodyPipe->buf().hasContent()); } catch (...) { // TODO: be more specific debugs(33, 3, HERE << "malformed chunks" << bodyPipe->status()); return ERR_INVALID_REQ; } debugs(33, 7, HERE << "need more chunked data" << *bodyPipe->status()); return ERR_NONE; }
0
[ "CWE-444" ]
squid
fd68382860633aca92065e6c343cfd1b12b126e7
234,376,442,091,291,100,000,000,000,000,000,000,000
38
Improve Transfer-Encoding handling (#702) Reject messages containing Transfer-Encoding header with coding other than chunked or identity. Squid does not support other codings. For simplicity and security sake, also reject messages where Transfer-Encoding contains unnecessary complex values that are technically equivalent to "chunked" or "identity" (e.g., ",,chunked" or "identity, chunked"). RFC 7230 formally deprecated and removed identity coding, but it is still used by some agents.
start_initial_timeout(NCR_Instance inst) { if (!inst->timer_running) { /* This will be the first transmission after mode change */ /* Mark source active */ SRC_SetActive(inst->source); } restart_timeout(inst, INITIAL_DELAY); }
0
[]
chrony
a78bf9725a7b481ebff0e0c321294ba767f2c1d8
203,319,319,912,351,200,000,000,000,000,000,000,000
11
ntp: restrict authentication of server/peer to specified key When a server/peer was specified with a key number to enable authentication with a symmetric key, packets received from the server/peer were accepted if they were authenticated with any of the keys contained in the key file and not just the specified key. This allowed an attacker who knew one key of a client/peer to modify packets from its servers/peers that were authenticated with other keys in a man-in-the-middle (MITM) attack. For example, in a network where each NTP association had a separate key and all hosts had only keys they needed, a client of a server could not attack other clients of the server, but it could attack the server and also attack its own clients (i.e. modify packets from other servers). To not allow the server/peer to be authenticated with other keys extend the authentication test to check if the key ID in the received packet is equal to the configured key number. As a consequence, it's no longer possible to authenticate two peers to each other with two different keys, both peers have to be configured to use the same key. This issue was discovered by Matt Street of Cisco ASIG.
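The extended authentication test described above amounts to requiring both a valid MAC and a key-ID match; sketched below with illustrative names, not chrony's actual functions:

```c
#include <assert.h>
#include <stdint.h>

/* A packet from a server/peer is accepted only when its MAC verifies
 * AND the key ID it carries equals the key number configured for that
 * source; any other key in the key file is no longer sufficient. */
static int packet_auth_ok(int mac_valid, uint32_t packet_key_id,
                          uint32_t configured_key_id)
{
    return mac_valid && packet_key_id == configured_key_id;
}
```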
static int __maybe_unused i740fb_suspend(struct device *dev) { struct fb_info *info = dev_get_drvdata(dev); struct i740fb_par *par = info->par; console_lock(); mutex_lock(&(par->open_lock)); /* do nothing if framebuffer is not active */ if (par->ref_count == 0) { mutex_unlock(&(par->open_lock)); console_unlock(); return 0; } fb_set_suspend(info, 1); mutex_unlock(&(par->open_lock)); console_unlock(); return 0; }
0
[ "CWE-369" ]
linux-fbdev
15cf0b82271b1823fb02ab8c377badba614d95d5
142,637,254,358,097,400,000,000,000,000,000,000,000
22
video: fbdev: i740fb: Error out if 'pixclock' equals zero The userspace program could pass any values to the driver through ioctl() interface. If the driver doesn't check the value of 'pixclock', it may cause divide error. Fix this by checking whether 'pixclock' is zero in the function i740fb_check_var(). The following log reveals it: divide error: 0000 [#1] PREEMPT SMP KASAN PTI RIP: 0010:i740fb_decode_var drivers/video/fbdev/i740fb.c:444 [inline] RIP: 0010:i740fb_set_par+0x272f/0x3bb0 drivers/video/fbdev/i740fb.c:739 Call Trace: fb_set_var+0x604/0xeb0 drivers/video/fbdev/core/fbmem.c:1036 do_fb_ioctl+0x234/0x670 drivers/video/fbdev/core/fbmem.c:1112 fb_ioctl+0xdd/0x130 drivers/video/fbdev/core/fbmem.c:1191 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:874 [inline] Signed-off-by: Zheyu Ma <[email protected]> Signed-off-by: Helge Deller <[email protected]>
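The guard described for i740fb_check_var() reduces to rejecting a zero divisor up front. A minimal sketch, with the var struct cut down to the one relevant field:

```c
#include <assert.h>

struct fb_var_sketch { unsigned int pixclock; };

/* Refuse a zero 'pixclock' before it can reach a divide in
 * i740fb_decode_var(); returns 0 or -22 (-EINVAL). */
static int check_pixclock(const struct fb_var_sketch *var)
{
    if (var->pixclock == 0)
        return -22;
    return 0;
}
```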
qtdemux_tag_add_uint32 (GstQTDemux * qtdemux, GstTagList * taglist, const char *tag1, const char *dummy, GNode * node) { GNode *data; int len; int type; guint32 num; data = qtdemux_tree_get_child_by_type (node, FOURCC_data); if (data) { len = QT_UINT32 (data->data); type = QT_UINT32 ((guint8 *) data->data + 8); GST_DEBUG_OBJECT (qtdemux, "have %s tag, type=%d,len=%d", tag1, type, len); /* some files wrongly have a type 0x0f=15, but it should be 0x15 */ if ((type == 0x00000015 || type == 0x0000000f) && len >= 20) { num = QT_UINT32 ((guint8 *) data->data + 16); if (num) { /* do not add num=0 */ GST_DEBUG_OBJECT (qtdemux, "adding tag %d", num); gst_tag_list_add (taglist, GST_TAG_MERGE_REPLACE, tag1, num, NULL); } } } }
0
[ "CWE-125" ]
gst-plugins-good
d0949baf3dadea6021d54abef6802fed5a06af75
45,536,681,746,660,110,000,000,000,000,000,000,000
24
qtdemux: Fix out of bounds read in tag parsing code We can't simply assume that the length of the tag value as given inside the stream is correct but should also check against the amount of data we have actually available. https://bugzilla.gnome.org/show_bug.cgi?id=775451
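The check the message calls for — trusting neither the in-stream length nor assuming enough bytes remain — can be sketched as below. The constant 20 mirrors the parser's own minimum (the uint32 value sits at offset 16); the function name is illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A uint32 tag value sits at offset 16, so the atom must declare at
 * least 20 bytes, and must not declare more than the demuxer
 * actually has buffered. */
static int tag_value_readable(uint32_t declared_len, size_t available)
{
    if (declared_len < 20)
        return 0;
    if ((size_t)declared_len > available)
        return 0;
    return 1;
}
```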
static int alloc_kmem_cache_cpus(struct kmem_cache *s, gfp_t flags) { int cpu; for_each_online_cpu(cpu) { struct kmem_cache_cpu *c = get_cpu_slab(s, cpu); if (c) continue; c = alloc_kmem_cache_cpu(s, cpu, flags); if (!c) { free_kmem_cache_cpus(s); return 0; } s->cpu_slab[cpu] = c; } return 1; }
0
[ "CWE-189" ]
linux
f8bd2258e2d520dff28c855658bd24bdafb5102d
206,318,721,529,821,570,000,000,000,000,000,000,000
19
remove div_long_long_rem x86 is the only arch right now, which provides an optimized for div_long_long_rem and it has the downside that one has to be very careful that the divide doesn't overflow. The API is a little akward, as the arguments for the unsigned divide are signed. The signed version also doesn't handle a negative divisor and produces worse code on 64bit archs. There is little incentive to keep this API alive, so this converts the few users to the new API. Signed-off-by: Roman Zippel <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: john stultz <[email protected]> Cc: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
output_buffer& Finished::get(output_buffer& out) const { return out << *this; }
0
[]
mysql-server
b9768521bdeb1a8069c7b871f4536792b65fd79b
335,853,433,876,977,770,000,000,000,000,000,000,000
4
Updated yassl to yassl-2.3.8 (cherry picked from commit 7f9941eab55ed672bfcccd382dafbdbcfdc75aaa)
keystr (u32 *keyid) { static char keyid_str[KEYID_STR_SIZE]; switch (opt.keyid_format) { case KF_SHORT: snprintf (keyid_str, sizeof keyid_str, "%08lX", (ulong)keyid[1]); break; case KF_LONG: if (keyid[0]) snprintf (keyid_str, sizeof keyid_str, "%08lX%08lX", (ulong)keyid[0], (ulong)keyid[1]); else snprintf (keyid_str, sizeof keyid_str, "%08lX", (ulong)keyid[1]); break; case KF_0xSHORT: snprintf (keyid_str, sizeof keyid_str, "0x%08lX", (ulong)keyid[1]); break; case KF_0xLONG: if(keyid[0]) snprintf (keyid_str, sizeof keyid_str, "0x%08lX%08lX", (ulong)keyid[0],(ulong)keyid[1]); else snprintf (keyid_str, sizeof keyid_str, "0x%08lX", (ulong)keyid[1]); break; default: BUG(); } return keyid_str; }
0
[ "CWE-20" ]
gnupg
2183683bd633818dd031b090b5530951de76f392
262,826,992,550,133,230,000,000,000,000,000,000,000
36
Use inline functions to convert buffer data to scalars. * common/host2net.h (buf16_to_ulong, buf16_to_uint): New. (buf16_to_ushort, buf16_to_u16): New. (buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New. -- Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to avoid all sign extension on shift problems. Hanno Böck found a case with an invalid read due to this problem. To fix that once and for all almost all uses of "<< 24" and "<< 8" are changed by this patch to use an inline function from host2net.h. Signed-off-by: Werner Koch <[email protected]>
static int copy_verifier_state(struct bpf_verifier_state *dst, const struct bpf_verifier_state *src) { int err; err = realloc_verifier_state(dst, src->allocated_stack, false); if (err) return err; memcpy(dst, src, offsetof(struct bpf_verifier_state, allocated_stack)); return copy_stack_state(dst, src); }
0
[ "CWE-20" ]
linux
c131187db2d3fa2f8bf32fdf4e9a4ef805168467
111,433,643,106,887,540,000,000,000,000,000,000,000
11
bpf: fix branch pruning logic when the verifier detects that register contains a runtime constant and it's compared with another constant it will prune exploration of the branch that is guaranteed not to be taken at runtime. This is all correct, but malicious program may be constructed in such a way that it always has a constant comparison and the other branch is never taken under any conditions. In this case such path through the program will not be explored by the verifier. It won't be taken at run-time either, but since all instructions are JITed the malicious program may cause JITs to complain about using reserved fields, etc. To fix the issue we have to track the instructions explored by the verifier and sanitize instructions that are dead at run time with NOPs. We cannot reject such dead code, since llvm generates it for valid C code, since it doesn't do as much data flow analysis as the verifier does. Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)") Signed-off-by: Alexei Starovoitov <[email protected]> Acked-by: Daniel Borkmann <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
void WebContents::DevToolsIndexPath( int request_id, const std::string& file_system_path, const std::string& excluded_folders_message) { if (!IsDevToolsFileSystemAdded(GetDevToolsWebContents(), file_system_path)) { OnDevToolsIndexingDone(request_id, file_system_path); return; } if (devtools_indexing_jobs_.count(request_id) != 0) return; std::vector<std::string> excluded_folders; std::unique_ptr<base::Value> parsed_excluded_folders = base::JSONReader::ReadDeprecated(excluded_folders_message); if (parsed_excluded_folders && parsed_excluded_folders->is_list()) { for (const base::Value& folder_path : parsed_excluded_folders->GetListDeprecated()) { if (folder_path.is_string()) excluded_folders.push_back(folder_path.GetString()); } } devtools_indexing_jobs_[request_id] = scoped_refptr<DevToolsFileSystemIndexer::FileSystemIndexingJob>( devtools_file_system_indexer_->IndexPath( file_system_path, excluded_folders, base::BindRepeating( &WebContents::OnDevToolsIndexingWorkCalculated, weak_factory_.GetWeakPtr(), request_id, file_system_path), base::BindRepeating(&WebContents::OnDevToolsIndexingWorked, weak_factory_.GetWeakPtr(), request_id, file_system_path), base::BindRepeating(&WebContents::OnDevToolsIndexingDone, weak_factory_.GetWeakPtr(), request_id, file_system_path))); }
0
[]
electron
e9fa834757f41c0b9fe44a4dffe3d7d437f52d34
49,464,281,531,819,640,000,000,000,000,000,000,000
34
fix: ensure ElectronBrowser mojo service is only bound to appropriate render frames (#33344) * fix: ensure ElectronBrowser mojo service is only bound to authorized render frames Notes: no-notes * refactor: extract electron API IPC to its own mojo interface * fix: just check main frame not primary main frame Co-authored-by: Samuel Attard <[email protected]> Co-authored-by: Samuel Attard <[email protected]>
HttpTransact::did_forward_server_send_0_9_response(State* s) { if (s->hdr_info.server_response.version_get() == HTTPVersion(0, 9)) { s->current.server->http_version.set(0, 9); return true; } return false; }
0
[ "CWE-119" ]
trafficserver
8b5f0345dade6b2822d9b52c8ad12e63011a5c12
22,513,862,327,159,112,000,000,000,000,000,000,000
8
Fix the internal buffer sizing. Thanks to Sudheer for helping isolating this bug
cmp_item *cmp_item_datetime::make_same() { return new cmp_item_datetime(); }
0
[ "CWE-617" ]
server
807945f2eb5fa22e6f233cc17b85a2e141efe2c8
294,172,693,288,117,430,000,000,000,000,000,000,000
4
MDEV-26402: A SEGV in Item_field::used_tables/update_depend_map_for_order... When doing condition pushdown from HAVING into WHERE, Item_equal::create_pushable_equalities() calls item->set_extraction_flag(IMMUTABLE_FL) for constant items. Then, Item::cleanup_excluding_immutables_processor() checks for this flag to see if it should call item->cleanup() or leave the item as-is. The failure happens when a constant item has a non-constant one inside it, like: (tbl.col=0 AND impossible_cond) item->walk(cleanup_excluding_immutables_processor) works in a bottom-up way so it 1. will call Item_func_eq(tbl.col=0)->cleanup() 2. will not call Item_cond_and->cleanup (as the AND is constant) This creates an item tree where a fixed Item has an un-fixed Item inside it which eventually causes an assertion failure. Fixed by introducing this rule: instead of just calling item->set_extraction_flag(IMMUTABLE_FL); we call Item::walk() to set the flag for all sub-items of the item.
e_util_free_object_slist (GSList *objects) { g_slist_free_full (objects, (GDestroyNotify) g_object_unref); }
0
[ "CWE-295" ]
evolution-data-server
6672b8236139bd6ef41ecb915f4c72e2a052dba5
215,006,933,046,875,230,000,000,000,000,000,000,000
4
Let child source with 'none' authentication method use collection source authentication That might be the same as having set NULL authentication method. Related to https://gitlab.gnome.org/GNOME/evolution-ews/issues/27
static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id) { struct cm_id_private *cm_id_priv; cm_id_priv = idr_find(&cm.local_id_table, (__force int) (local_id ^ cm.random_id_operand)); if (cm_id_priv) { if (cm_id_priv->id.remote_id == remote_id) atomic_inc(&cm_id_priv->refcount); else cm_id_priv = NULL; } return cm_id_priv; }
0
[ "CWE-20" ]
linux
b2853fd6c2d0f383dbdf7427e263eb576a633867
210,804,856,012,840,930,000,000,000,000,000,000,000
15
IB/core: Don't resolve passive side RoCE L2 address in CMA REQ handler The code that resolves the passive side source MAC within the rdma_cm connection request handler was both redundant and buggy, so remove it. It was redundant since later, when an RC QP is modified to RTR state, the resolution will take place in the ib_core module. It was buggy because this callback also deals with UD SIDR exchange, for which we incorrectly looked at the REQ member of the CM event and dereferenced a random value. Fixes: dd5f03beb4f7 ("IB/core: Ethernet L2 attributes in verbs/cm structures") Signed-off-by: Moni Shoua <[email protected]> Signed-off-by: Or Gerlitz <[email protected]> Signed-off-by: Roland Dreier <[email protected]>
double GetZoomLevel(v8::Isolate* isolate) { double result = 0.0; content::RenderFrame* render_frame; if (!MaybeGetRenderFrame(isolate, "getZoomLevel", &render_frame)) return result; mojo::AssociatedRemote<mojom::ElectronWebContentsUtility> web_contents_utility_remote; render_frame->GetRemoteAssociatedInterfaces()->GetInterface( &web_contents_utility_remote); web_contents_utility_remote->DoGetZoomLevel(&result); return result; }
0
[]
electron
e9fa834757f41c0b9fe44a4dffe3d7d437f52d34
3,259,543,042,889,835,000,000,000,000,000,000,000
13
fix: ensure ElectronBrowser mojo service is only bound to appropriate render frames (#33344) * fix: ensure ElectronBrowser mojo service is only bound to authorized render frames Notes: no-notes * refactor: extract electron API IPC to its own mojo interface * fix: just check main frame not primary main frame Co-authored-by: Samuel Attard <[email protected]> Co-authored-by: Samuel Attard <[email protected]>
INST_HANDLER (cpi) { // CPI Rd, K int d = ((buf[0] >> 4) & 0xf) + 16; int k = (buf[0] & 0xf) | ((buf[1] & 0xf) << 4); ESIL_A ("%d,r%d,-,", k, d); // Rd - k __generic_sub_update_flags_rk (op, d, k, 0); // FLAGS (carry) }
0
[ "CWE-125" ]
radare2
041e53cab7ca33481ae45ecd65ad596976d78e68
247,359,219,611,248,660,000,000,000,000,000,000,000
6
Fix crash in anal.avr
node_extended_grapheme_cluster(Node** np, ScanEnv* env) { /* same as (?>\P{M}\p{M}*) */ Node* np1 = NULL; Node* np2 = NULL; Node* qn = NULL; Node* list1 = NULL; Node* list2 = NULL; int r = 0; #ifdef USE_UNICODE_PROPERTIES if (ONIGENC_IS_UNICODE(env->enc)) { /* UTF-8, UTF-16BE/LE, UTF-32BE/LE */ CClassNode* cc1; CClassNode* cc2; UChar* propname = (UChar* )"M"; int ctype = env->enc->property_name_to_ctype(ONIG_ENCODING_ASCII, propname, propname + 1); if (ctype >= 0) { /* \P{M} */ np1 = node_new_cclass(); if (IS_NULL(np1)) goto err; cc1 = NCCLASS(np1); r = add_ctype_to_cc(cc1, ctype, 0, 0, env); if (r != 0) goto err; NCCLASS_SET_NOT(cc1); /* \p{M}* */ np2 = node_new_cclass(); if (IS_NULL(np2)) goto err; cc2 = NCCLASS(np2); r = add_ctype_to_cc(cc2, ctype, 0, 0, env); if (r != 0) goto err; qn = node_new_quantifier(0, REPEAT_INFINITE, 0); if (IS_NULL(qn)) goto err; NQTFR(qn)->target = np2; np2 = NULL; /* \P{M}\p{M}* */ list2 = node_new_list(qn, NULL_NODE); if (IS_NULL(list2)) goto err; qn = NULL; list1 = node_new_list(np1, list2); if (IS_NULL(list1)) goto err; np1 = NULL; list2 = NULL; /* (?>...) */ *np = node_new_enclose(ENCLOSE_STOP_BACKTRACK); if (IS_NULL(*np)) goto err; NENCLOSE(*np)->target = list1; return ONIG_NORMAL; } } #endif /* USE_UNICODE_PROPERTIES */ if (IS_NULL(*np)) { /* PerlSyntax: (?s:.), RubySyntax: (?m:.) */ OnigOptionType option; np1 = node_new_anychar(); if (IS_NULL(np1)) goto err; option = env->option; ONOFF(option, ONIG_OPTION_MULTILINE, 0); *np = node_new_option(option); if (IS_NULL(*np)) goto err; NENCLOSE(*np)->target = np1; } return ONIG_NORMAL; err: onig_node_free(np1); onig_node_free(np2); onig_node_free(qn); onig_node_free(list1); onig_node_free(list2); return (r == 0) ? ONIGERR_MEMORY : r; }
0
[ "CWE-125" ]
Onigmo
29e7e6aedebafd5efbbd90655c8e0d495035d7b4
223,296,650,090,924,450,000,000,000,000,000,000,000
78
bug: Fix out of bounds read Add boundary check before PFETCH. Based on the following commits on https://github.com/kkos/oniguruma , but not the same. * 68c395576813b3f9812427f94d272bcffaca316c * dc0a23eb16961f98d2a5a2128d18bd4602058a10 * 5186c7c706a7f280110e6a0b060f87d0f7d790ce * 562bf4825b301693180c674994bf708b28b00592 * 162cf9124ba3bfaa21d53ebc506f3d9354bfa99b
static int __init esp4_init(void) { if (xfrm_register_type(&esp_type, AF_INET) < 0) { pr_info("%s: can't add xfrm type\n", __func__); return -EAGAIN; } if (xfrm4_protocol_register(&esp4_protocol, IPPROTO_ESP) < 0) { pr_info("%s: can't add protocol\n", __func__); xfrm_unregister_type(&esp_type, AF_INET); return -EAGAIN; } return 0; }
0
[ "CWE-787" ]
linux
ebe48d368e97d007bfeb76fcb065d6cfc4c96645
280,545,905,740,650,400,000,000,000,000,000,000,000
13
esp: Fix possible buffer overflow in ESP transformation The maximum message size that can be send is bigger than the maximum site that skb_page_frag_refill can allocate. So it is possible to write beyond the allocated buffer. Fix this by doing a fallback to COW in that case. v2: Avoid get get_order() costs as suggested by Linus Torvalds. Fixes: cac2661c53f3 ("esp4: Avoid skb_cow_data whenever possible") Fixes: 03e2a30f6a27 ("esp6: Avoid skb_cow_data whenever possible") Reported-by: valis <[email protected]> Signed-off-by: Steffen Klassert <[email protected]>
static inline void security_sb_post_pivotroot(struct path *old_path, struct path *new_path) { }
0
[]
linux-2.6
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
92,687,251,114,517,320,000,000,000,000,000,000,000
3
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6] Add a keyctl to install a process's session keyring onto its parent. This replaces the parent's session keyring. Because the COW credential code does not permit one process to change another process's credentials directly, the change is deferred until userspace next starts executing again. Normally this will be after a wait*() syscall. To support this, three new security hooks have been provided: cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in the blank security creds and key_session_to_parent() - which asks the LSM if the process may replace its parent's session keyring. The replacement may only happen if the process has the same ownership details as its parent, and the process has LINK permission on the session keyring, and the session keyring is owned by the process, and the LSM permits it. Note that this requires alteration to each architecture's notify_resume path. This has been done for all arches barring blackfin, m68k* and xtensa, all of which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the replacement to be performed at the point the parent process resumes userspace execution. This allows the userspace AFS pioctl emulation to fully emulate newpag() and the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to alter the parent process's PAG membership. However, since kAFS doesn't use PAGs per se, but rather dumps the keys into the session keyring, the session keyring of the parent must be replaced if, for example, VIOCSETTOK is passed the newpag flag. This can be tested with the following program: #include <stdio.h> #include <stdlib.h> #include <keyutils.h> #define KEYCTL_SESSION_TO_PARENT 18 #define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0) int main(int argc, char **argv) { key_serial_t keyring, key; long ret; keyring = keyctl_join_session_keyring(argv[1]); OSERROR(keyring, "keyctl_join_session_keyring"); key = add_key("user", "a", "b", 1, keyring); OSERROR(key, "add_key"); ret = keyctl(KEYCTL_SESSION_TO_PARENT); OSERROR(ret, "KEYCTL_SESSION_TO_PARENT"); return 0; } Compiled and linked with -lkeyutils, you should see something like: [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: _ses 355907932 --alswrv 4043 -1 \_ keyring: _uid.4043 [dhowells@andromeda ~]$ /tmp/newpag [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: _ses 1055658746 --alswrv 4043 4043 \_ user: a [dhowells@andromeda ~]$ /tmp/newpag hello [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: hello 340417692 --alswrv 4043 4043 \_ user: a Where the test program creates a new session keyring, sticks a user key named 'a' into it and then installs it on its parent. Signed-off-by: David Howells <[email protected]> Signed-off-by: James Morris <[email protected]>
static int find_group_orlov(struct super_block *sb, struct inode *parent, ext4_group_t *group, umode_t mode, const struct qstr *qstr) { ext4_group_t parent_group = EXT4_I(parent)->i_block_group; struct ext4_sb_info *sbi = EXT4_SB(sb); ext4_group_t real_ngroups = ext4_get_groups_count(sb); int inodes_per_group = EXT4_INODES_PER_GROUP(sb); unsigned int freei, avefreei, grp_free; ext4_fsblk_t freeb, avefreec; unsigned int ndirs; int max_dirs, min_inodes; ext4_grpblk_t min_clusters; ext4_group_t i, grp, g, ngroups; struct ext4_group_desc *desc; struct orlov_stats stats; int flex_size = ext4_flex_bg_size(sbi); struct dx_hash_info hinfo; ngroups = real_ngroups; if (flex_size > 1) { ngroups = (real_ngroups + flex_size - 1) >> sbi->s_log_groups_per_flex; parent_group >>= sbi->s_log_groups_per_flex; } freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter); avefreei = freei / ngroups; freeb = EXT4_C2B(sbi, percpu_counter_read_positive(&sbi->s_freeclusters_counter)); avefreec = freeb; do_div(avefreec, ngroups); ndirs = percpu_counter_read_positive(&sbi->s_dirs_counter); if (S_ISDIR(mode) && ((parent == d_inode(sb->s_root)) || (ext4_test_inode_flag(parent, EXT4_INODE_TOPDIR)))) { int best_ndir = inodes_per_group; int ret = -1; if (qstr) { hinfo.hash_version = DX_HASH_HALF_MD4; hinfo.seed = sbi->s_hash_seed; ext4fs_dirhash(qstr->name, qstr->len, &hinfo); grp = hinfo.hash; } else grp = prandom_u32(); parent_group = (unsigned)grp % ngroups; for (i = 0; i < ngroups; i++) { g = (parent_group + i) % ngroups; get_orlov_stats(sb, g, flex_size, &stats); if (!stats.free_inodes) continue; if (stats.used_dirs >= best_ndir) continue; if (stats.free_inodes < avefreei) continue; if (stats.free_clusters < avefreec) continue; grp = g; ret = 0; best_ndir = stats.used_dirs; } if (ret) goto fallback; found_flex_bg: if (flex_size == 1) { *group = grp; return 0; } /* * We pack inodes at the beginning of the flexgroup's * inode tables. Block allocation decisions will do * something similar, although regular files will * start at 2nd block group of the flexgroup. See * ext4_ext_find_goal() and ext4_find_near(). */ grp *= flex_size; for (i = 0; i < flex_size; i++) { if (grp+i >= real_ngroups) break; desc = ext4_get_group_desc(sb, grp+i, NULL); if (desc && ext4_free_inodes_count(sb, desc)) { *group = grp+i; return 0; } } goto fallback; } max_dirs = ndirs / ngroups + inodes_per_group / 16; min_inodes = avefreei - inodes_per_group*flex_size / 4; if (min_inodes < 1) min_inodes = 1; min_clusters = avefreec - EXT4_CLUSTERS_PER_GROUP(sb)*flex_size / 4; /* * Start looking in the flex group where we last allocated an * inode for this parent directory */ if (EXT4_I(parent)->i_last_alloc_group != ~0) { parent_group = EXT4_I(parent)->i_last_alloc_group; if (flex_size > 1) parent_group >>= sbi->s_log_groups_per_flex; } for (i = 0; i < ngroups; i++) { grp = (parent_group + i) % ngroups; get_orlov_stats(sb, grp, flex_size, &stats); if (stats.used_dirs >= max_dirs) continue; if (stats.free_inodes < min_inodes) continue; if (stats.free_clusters < min_clusters) continue; goto found_flex_bg; } fallback: ngroups = real_ngroups; avefreei = freei / ngroups; fallback_retry: parent_group = EXT4_I(parent)->i_block_group; for (i = 0; i < ngroups; i++) { grp = (parent_group + i) % ngroups; desc = ext4_get_group_desc(sb, grp, NULL); if (desc) { grp_free = ext4_free_inodes_count(sb, desc); if (grp_free && grp_free >= avefreei) { *group = grp; return 0; } } } if (avefreei) { /* * The free-inodes counter is approximate, and for really small * filesystems the above test can fail to find any blockgroups */ avefreei = 0; goto fallback_retry; } return -1; }
0
[]
linux
7dac4a1726a9c64a517d595c40e95e2d0d135f6f
107,875,112,794,622,770,000,000,000,000,000,000,000
147
ext4: add validity checks for bitmap block numbers An privileged attacker can cause a crash by mounting a crafted ext4 image which triggers a out-of-bounds read in the function ext4_valid_block_bitmap() in fs/ext4/balloc.c. This issue has been assigned CVE-2018-1093. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=199181 BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1560782 Reported-by: Wen Xu <[email protected]> Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected]
static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); u64 phys_addr = __pa(per_cpu(vmxarea, cpu)); bool already_loaded = vmx->loaded_vmcs->cpu == cpu; if (!vmm_exclusive) kvm_cpu_vmxon(phys_addr); else if (!already_loaded) loaded_vmcs_clear(vmx->loaded_vmcs); if (!already_loaded) { local_irq_disable(); crash_disable_local_vmclear(cpu); /* * Read loaded_vmcs->cpu should be before fetching * loaded_vmcs->loaded_vmcss_on_cpu_link. * See the comments in __loaded_vmcs_clear(). */ smp_rmb(); list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link, &per_cpu(loaded_vmcss_on_cpu, cpu)); crash_enable_local_vmclear(cpu); local_irq_enable(); } if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) { per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs; vmcs_load(vmx->loaded_vmcs->vmcs); } if (!already_loaded) { struct desc_ptr *gdt = this_cpu_ptr(&host_gdt); unsigned long sysenter_esp; kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu); /* * Linux uses per-cpu TSS and GDT, so set these when switching * processors. */ vmcs_writel(HOST_TR_BASE, kvm_read_tr_base()); /* 22.2.4 */ vmcs_writel(HOST_GDTR_BASE, gdt->address); /* 22.2.4 */ rdmsrl(MSR_IA32_SYSENTER_ESP, sysenter_esp); vmcs_writel(HOST_IA32_SYSENTER_ESP, sysenter_esp); /* 22.2.3 */ vmx->loaded_vmcs->cpu = cpu; } /* Setup TSC multiplier */ if (kvm_has_tsc_control && vmx->current_tsc_ratio != vcpu->arch.tsc_scaling_ratio) decache_tsc_multiplier(vmx); vmx_vcpu_pi_load(vcpu, cpu); vmx->host_pkru = read_pkru(); }
0
[ "CWE-388" ]
linux
ef85b67385436ddc1998f45f1d6a210f935b3388
244,638,910,458,362,020,000,000,000,000,000,000,000
60
kvm: nVMX: Allow L1 to intercept software exceptions (#BP and #OF) When L2 exits to L0 due to "exception or NMI", software exceptions (#BP and #OF) for which L1 has requested an intercept should be handled by L1 rather than L0. Previously, only hardware exceptions were forwarded to L1. Signed-off-by: Jim Mattson <[email protected]> Cc: [email protected] Signed-off-by: Paolo Bonzini <[email protected]>
static void test_view_2where() { MYSQL_STMT *stmt; int rc, i; MYSQL_BIND my_bind[8]; char parms[8][100]; ulong length[8]; const char *query= "select relid, report, handle, log_group, username, variant, type, " "version, erfdat, erftime, erfname, aedat, aetime, aename, dependvars, " "inactive from V_LTDX where mandt = ? and relid = ? and report = ? and " "handle = ? and log_group = ? and username in ( ? , ? ) and type = ?"; myheader("test_view_2where"); rc= mysql_query(mysql, "DROP TABLE IF EXISTS LTDX"); myquery(rc); rc= mysql_query(mysql, "DROP VIEW IF EXISTS V_LTDX"); myquery(rc); rc= mysql_query(mysql, "CREATE TABLE LTDX (MANDT char(3) NOT NULL default '000', " " RELID char(2) NOT NULL, REPORT varchar(40) NOT NULL," " HANDLE varchar(4) NOT NULL, LOG_GROUP varchar(4) NOT NULL," " USERNAME varchar(12) NOT NULL," " VARIANT varchar(12) NOT NULL," " TYPE char(1) NOT NULL, SRTF2 int(11) NOT NULL," " VERSION varchar(6) NOT NULL default '000000'," " ERFDAT varchar(8) NOT NULL default '00000000'," " ERFTIME varchar(6) NOT NULL default '000000'," " ERFNAME varchar(12) NOT NULL," " AEDAT varchar(8) NOT NULL default '00000000'," " AETIME varchar(6) NOT NULL default '000000'," " AENAME varchar(12) NOT NULL," " DEPENDVARS varchar(10) NOT NULL," " INACTIVE char(1) NOT NULL, CLUSTR smallint(6) NOT NULL," " CLUSTD blob," " PRIMARY KEY (MANDT, RELID, REPORT, HANDLE, LOG_GROUP, " "USERNAME, VARIANT, TYPE, SRTF2))" " CHARSET=latin1 COLLATE latin1_bin"); myquery(rc); rc= mysql_query(mysql, "CREATE VIEW V_LTDX AS select T0001.MANDT AS " " MANDT,T0001.RELID AS RELID,T0001.REPORT AS " " REPORT,T0001.HANDLE AS HANDLE,T0001.LOG_GROUP AS " " LOG_GROUP,T0001.USERNAME AS USERNAME,T0001.VARIANT AS " " VARIANT,T0001.TYPE AS TYPE,T0001.VERSION AS " " VERSION,T0001.ERFDAT AS ERFDAT,T0001.ERFTIME AS " " ERFTIME,T0001.ERFNAME AS ERFNAME,T0001.AEDAT AS " " AEDAT,T0001.AETIME AS AETIME,T0001.AENAME AS " " AENAME,T0001.DEPENDVARS AS DEPENDVARS,T0001.INACTIVE AS " " INACTIVE from LTDX T0001 where (T0001.SRTF2 = 0)"); myquery(rc); memset(my_bind, 0, sizeof(my_bind)); for (i=0; i < 8; i++) { my_stpcpy(parms[i], "1"); my_bind[i].buffer_type = MYSQL_TYPE_VAR_STRING; my_bind[i].buffer = (char *)&parms[i]; my_bind[i].buffer_length = 100; my_bind[i].is_null = 0; my_bind[i].length = &length[i]; length[i] = 1; } stmt= mysql_stmt_init(mysql); rc= mysql_stmt_prepare(stmt, query, strlen(query)); check_execute(stmt, rc); rc= mysql_stmt_bind_param(stmt, my_bind); check_execute(stmt,rc); rc= mysql_stmt_execute(stmt); check_execute(stmt, rc); rc= my_process_stmt_result(stmt); DIE_UNLESS(0 == rc); mysql_stmt_close(stmt); rc= mysql_query(mysql, "DROP VIEW V_LTDX"); myquery(rc); rc= mysql_query(mysql, "DROP TABLE LTDX"); myquery(rc); }
0
[ "CWE-284", "CWE-295" ]
mysql-server
3bd5589e1a5a93f9c224badf983cd65c45215390
96,001,402,788,749,140,000,000,000,000,000,000,000
81
WL#6791 : Redefine client --ssl option to imply enforced encryption # Changed the meaning of the --ssl=1 option of all client binaries to mean force ssl, not try ssl and fail over to eunecrypted # Added a new MYSQL_OPT_SSL_ENFORCE mysql_options() option to specify that an ssl connection is required. # Added a new macro SSL_SET_OPTIONS() to the client SSL handling headers that sets all the relevant SSL options at once. # Revamped all of the current native clients to use the new macro # Removed some Windows line endings. # Added proper handling of the new option into the ssl helper headers. # If SSL is mandatory assume that the media is secure enough for the sha256 plugin to do unencrypted password exchange even before establishing a connection. # Set the default ssl cipher to DHE-RSA-AES256-SHA if none is specified. # updated test cases that require a non-default cipher to spawn a mysql command line tool binary since mysqltest has no support for specifying ciphers. # updated the replication slave connection code to always enforce SSL if any of the SSL config options is present. # test cases added and updated. # added a mysql_get_option() API to return mysql_options() values. Used the new API inside the sha256 plugin. # Fixed compilation warnings because of unused variables. # Fixed test failures (mysql_ssl and bug13115401) # Fixed whitespace issues. # Fully implemented the mysql_get_option() function. # Added a test case for mysql_get_option() # fixed some trailing whitespace issues # fixed some uint/int warnings in mysql_client_test.c # removed shared memory option from non-windows get_options tests # moved MYSQL_OPT_LOCAL_INFILE to the uint options
mrb_objspace_each_objects(mrb_state *mrb, mrb_each_object_callback *callback, void *data) { gc_each_objects(mrb, &mrb->gc, callback, data); }
0
[ "CWE-416" ]
mruby
5c114c91d4ff31859fcd84cf8bf349b737b90d99
32,029,417,403,882,955,000,000,000,000,000,000,000
4
Clear unused stack region that may refer freed objects; fix #3596
const git_tree_entry *git_tree_entry_byid( const git_tree *tree, const git_oid *id) { size_t i; const git_tree_entry *e; assert(tree); git_vector_foreach(&tree->entries, i, e) { if (memcmp(&e->oid.id, &id->id, sizeof(id->id)) == 0) return e; } return NULL; }
0
[ "CWE-20" ]
libgit2
928429c5c96a701bcbcafacb2421a82602b36915
295,355,269,792,520,750,000,000,000,000,000,000,000
15
tree: Check for `.git` with case insensitivy
static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages, void *arg) { unsigned long i; unsigned long onlined_pages = *(unsigned long *)arg; struct page *page; if (PageReserved(pfn_to_page(start_pfn))) for (i = 0; i < nr_pages; i++) { page = pfn_to_page(start_pfn + i); (*online_page_callback)(page); onlined_pages++; } *(unsigned long *)arg = onlined_pages; return 0; }
0
[]
linux-2.6
08dff7b7d629807dbb1f398c68dd9cd58dd657a1
169,270,042,720,845,680,000,000,000,000,000,000,000
15
mm/hotplug: correctly add new zone to all other nodes' zone lists When online_pages() is called to add new memory to an empty zone, it rebuilds all zone lists by calling build_all_zonelists(). But there's a bug which prevents the new zone to be added to other nodes' zone lists. online_pages() { build_all_zonelists() ..... node_set_state(zone_to_nid(zone), N_HIGH_MEMORY) } Here the node of the zone is put into N_HIGH_MEMORY state after calling build_all_zonelists(), but build_all_zonelists() only adds zones from nodes in N_HIGH_MEMORY state to the fallback zone lists. build_all_zonelists() ->__build_all_zonelists() ->build_zonelists() ->find_next_best_node() ->for_each_node_state(n, N_HIGH_MEMORY) So memory in the new zone will never be used by other nodes, and it may cause strange behavor when system is under memory pressure. So put node into N_HIGH_MEMORY state before calling build_all_zonelists(). Signed-off-by: Jianguo Wu <[email protected]> Signed-off-by: Jiang Liu <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rusty Russell <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Tony Luck <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: KOSAKI Motohiro <[email protected]> Cc: David Rientjes <[email protected]> Cc: Keping Chen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
static void ahci_cmd_done(IDEDMA *dma) { AHCIDevice *ad = DO_UPCAST(AHCIDevice, dma, dma); DPRINTF(ad->port_no, "cmd done\n"); /* update d2h status */ ahci_write_fis_d2h(ad); if (!ad->check_bh) { /* maybe we still have something to process, check later */ ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad); qemu_bh_schedule(ad->check_bh); } }
0
[ "CWE-772", "CWE-401" ]
qemu
d68f0f778e7f4fbd674627274267f269e40f0b04
233,009,922,100,344,870,000,000,000,000,000,000,000
15
ide: ahci: call cleanup function in ahci unit This can avoid memory leak when hotunplug the ahci device. Signed-off-by: Li Qiang <[email protected]> Message-id: [email protected] Signed-off-by: John Snow <[email protected]>
static int raw_getname(struct socket *sock, struct sockaddr *uaddr, int *len, int peer) { struct sockaddr_can *addr = (struct sockaddr_can *)uaddr; struct sock *sk = sock->sk; struct raw_sock *ro = raw_sk(sk); if (peer) return -EOPNOTSUPP; addr->can_family = AF_CAN; addr->can_ifindex = ro->ifindex; *len = sizeof(*addr); return 0; }
1
[ "CWE-200" ]
linux-2.6
e84b90ae5eb3c112d1f208964df1d8156a538289
110,676,071,555,549,100,000,000,000,000,000,000,000
17
can: Fix raw_getname() leak raw_getname() can leak 10 bytes of kernel memory to user (two bytes hole between can_family and can_ifindex, 8 bytes at the end of sockaddr_can structure) Signed-off-by: Eric Dumazet <[email protected]> Acked-by: Oliver Hartkopp <[email protected]> Signed-off-by: David S. Miller <[email protected]>
TfLiteRegistration* Register_LOCAL_RESPONSE_NORMALIZATION() { return Register_LOCAL_RESPONSE_NORM_GENERIC_OPT(); }
0
[ "CWE-125", "CWE-787" ]
tensorflow
1970c2158b1ffa416d159d03c3370b9a462aee35
122,553,243,035,616,100,000,000,000,000,000,000,000
3
[tflite]: Insert `nullptr` checks when obtaining tensors. As part of ongoing refactoring, `tflite::GetInput`, `tflite::GetOutput`, `tflite::GetTemporary` and `tflite::GetIntermediates` will return `nullptr` in some cases. Hence, we insert the `nullptr` checks on all usages. We also insert `nullptr` checks on usages of `tflite::GetVariableInput` and `tflite::GetOptionalInputTensor` but only in the cases where there is no obvious check that `nullptr` is acceptable (that is, we only insert the check for the output of these two functions if the tensor is accessed as if it is always not `nullptr`). PiperOrigin-RevId: 332521299 Change-Id: I29af455bcb48d0b92e58132d951a3badbd772d56
ews_backend_ref_connection_thread (GSimpleAsyncResult *simple, GObject *object, GCancellable *cancellable) { EEwsConnection *connection; GError *error = NULL; connection = e_ews_backend_ref_connection_sync (E_EWS_BACKEND (object), NULL, NULL, NULL, cancellable, &error); /* Sanity check. */ g_return_if_fail ( ((connection != NULL) && (error == NULL)) || ((connection == NULL) && (error != NULL))); if (connection != NULL) g_simple_async_result_set_op_res_gpointer ( simple, connection, g_object_unref); if (error != NULL) g_simple_async_result_take_error (simple, error); }
0
[ "CWE-295" ]
evolution-ews
915226eca9454b8b3e5adb6f2fff9698451778de
191,886,030,184,611,340,000,000,000,000,000,000,000
21
I#27 - SSL Certificates are not validated This depends on https://gitlab.gnome.org/GNOME/evolution-data-server/commit/6672b8236139bd6ef41ecb915f4c72e2a052dba5 too. Closes https://gitlab.gnome.org/GNOME/evolution-ews/issues/27
g_file_unmount_mountable_with_operation_finish (GFile *file, GAsyncResult *result, GError **error) { GFileIface *iface; g_return_val_if_fail (G_IS_FILE (file), FALSE); g_return_val_if_fail (G_IS_ASYNC_RESULT (result), FALSE); if (g_async_result_legacy_propagate_error (result, error)) return FALSE; else if (g_async_result_is_tagged (result, g_file_unmount_mountable_with_operation)) return g_task_propagate_boolean (G_TASK (result), error); iface = G_FILE_GET_IFACE (file); if (iface->unmount_mountable_with_operation_finish != NULL) return (* iface->unmount_mountable_with_operation_finish) (file, result, error); else return (* iface->unmount_mountable_finish) (file, result, error); }
0
[ "CWE-362" ]
glib
d8f8f4d637ce43f8699ba94c9b7648beda0ca174
207,231,597,169,613,440,000,000,000,000,000,000,000
20
gfile: Limit access to files when copying file_copy_fallback creates new files with default permissions and set the correct permissions after the operation is finished. This might cause that the files can be accessible by more users during the operation than expected. Use G_FILE_CREATE_PRIVATE for the new files to limit access to those files.
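The glib message above fixes a time-of-creation race: creating a file with default permissions and tightening them afterwards leaves a window where other users can open it. The general mitigation (which `G_FILE_CREATE_PRIVATE` implements inside GIO) is to create the file with its final restrictive mode atomically. A POSIX sketch of the same idea, independent of glib; the path and function name are illustrative:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a file that is owner-private (0600) from the very first
 * instant, instead of creating with default mode and chmod'ing later.
 * O_EXCL also refuses to reuse a pre-existing (possibly attacker-
 * placed) file at that path. */
static int create_private(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);
}
```

Because the mode is applied at `open()` time, there is no window in which group or world bits are ever set on the new file.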
ClientRequestContext::clientAccessCheckDone(const Acl::Answer &answer) { acl_checklist = NULL; err_type page_id; Http::StatusCode status; debugs(85, 2, "The request " << http->request->method << ' ' << http->uri << " is " << answer << "; last ACL checked: " << (AclMatchedName ? AclMatchedName : "[none]")); #if USE_AUTH char const *proxy_auth_msg = "<null>"; if (http->getConn() != NULL && http->getConn()->getAuth() != NULL) proxy_auth_msg = http->getConn()->getAuth()->denyMessage("<null>"); else if (http->request->auth_user_request != NULL) proxy_auth_msg = http->request->auth_user_request->denyMessage("<null>"); #endif if (!answer.allowed()) { // auth has a grace period where credentials can be expired but okay not to challenge. /* Send an auth challenge or error */ // XXX: do we still need aclIsProxyAuth() ? bool auth_challenge = (answer == ACCESS_AUTH_REQUIRED || aclIsProxyAuth(AclMatchedName)); debugs(85, 5, "Access Denied: " << http->uri); debugs(85, 5, "AclMatchedName = " << (AclMatchedName ? AclMatchedName : "<null>")); #if USE_AUTH if (auth_challenge) debugs(33, 5, "Proxy Auth Message = " << (proxy_auth_msg ? proxy_auth_msg : "<null>")); #endif /* * NOTE: get page_id here, based on AclMatchedName because if * USE_DELAY_POOLS is enabled, then AclMatchedName gets clobbered in * the clientCreateStoreEntry() call just below. Pedro Ribeiro * <[email protected]> */ page_id = aclGetDenyInfoPage(&Config.denyInfoList, AclMatchedName, answer != ACCESS_AUTH_REQUIRED); http->logType.update(LOG_TCP_DENIED); if (auth_challenge) { #if USE_AUTH if (http->request->flags.sslBumped) { /*SSL Bumped request, authentication is not possible*/ status = Http::scForbidden; } else if (!http->flags.accel) { /* Proxy authorisation needed */ status = Http::scProxyAuthenticationRequired; } else { /* WWW authorisation needed */ status = Http::scUnauthorized; } #else // need auth, but not possible to do. 
status = Http::scForbidden; #endif if (page_id == ERR_NONE) page_id = ERR_CACHE_ACCESS_DENIED; } else { status = Http::scForbidden; if (page_id == ERR_NONE) page_id = ERR_ACCESS_DENIED; } error = clientBuildError(page_id, status, nullptr, http->getConn(), http->request, http->al); #if USE_AUTH error->auth_user_request = http->getConn() != NULL && http->getConn()->getAuth() != NULL ? http->getConn()->getAuth() : http->request->auth_user_request; #endif readNextRequest = true; } /* ACCESS_ALLOWED continues here ... */ xfree(http->uri); http->uri = SBufToCstring(http->request->effectiveRequestUri()); http->doCallouts(); }
0
[ "CWE-116" ]
squid
7024fb734a59409889e53df2257b3fc817809fb4
221,569,862,419,161,940,000,000,000,000,000,000,000
81
Handle more Range requests (#790) Also removed some effectively unused code.
unsigned int vector_copy(const unsigned int arg) { const unsigned int siz = _cimg_mp_size(arg), pos = vector(siz); CImg<ulongT>::vector((ulongT)mp_vector_copy,pos,arg,siz).move_to(code); return pos; }
0
[ "CWE-770" ]
cimg
619cb58dd90b4e03ac68286c70ed98acbefd1c90
171,571,189,582,970,900,000,000,000,000,000,000,000
7
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
SetBackColor(new) int new; { if (!display) return; SetColor(rend_getfg(&D_rend), new); }
0
[]
screen
c5db181b6e017cfccb8d7842ce140e59294d9f62
219,687,512,585,824,080,000,000,000,000,000,000,000
7
ansi: add support for xterm OSC 11 It allows for getting and setting the background color. Notably, Vim uses OSC 11 to learn whether it's running on a light or dark colored terminal and choose a color scheme accordingly. Tested with gnome-terminal and xterm. When called with "?" argument the current background color is returned: $ echo -ne "\e]11;?\e\\" $ 11;rgb:2323/2727/2929 Signed-off-by: Lubomir Rintel <[email protected]> (cherry picked from commit 7059bff20a28778f9d3acf81cad07b1388d02309) Signed-off-by: Amadeusz Sławiński <[email protected]>
int jpc_tagtree_decode(jpc_tagtree_t *tree, jpc_tagtreenode_t *leaf, int threshold, jpc_bitstream_t *in) { jpc_tagtreenode_t *stk[JPC_TAGTREE_MAXDEPTH - 1]; jpc_tagtreenode_t **stkptr; jpc_tagtreenode_t *node; int low; int ret; /* Avoid compiler warnings about unused parameters. */ tree = 0; assert(threshold >= 0); /* Traverse to the root of the tree, recording the path taken. */ stkptr = stk; node = leaf; while (node->parent_) { *stkptr++ = node; node = node->parent_; } low = 0; for (;;) { if (low > node->low_) { node->low_ = low; } else { low = node->low_; } while (low < threshold && low < node->value_) { if ((ret = jpc_bitstream_getbit(in)) < 0) { return -1; } if (ret) { node->value_ = low; } else { ++low; } } node->low_ = low; if (stkptr == stk) { break; } node = *--stkptr; } return (node->value_ < threshold) ? 1 : 0; }
0
[ "CWE-189" ]
jasper
3c55b399c36ef46befcb21e4ebc4799367f89684
233,191,178,549,445,800,000,000,000,000,000,000,000
48
At many places in the code, jas_malloc or jas_recalloc was being invoked with the size argument being computed in a manner that would not allow integer overflow to be detected. Now, these places in the code have been modified to use special-purpose memory allocation functions (e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow. This should fix many security problems.
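The JasPer message above replaces raw `jas_malloc(a * b)` calls with wrappers such as `jas_alloc2` that detect multiplication overflow before allocating; otherwise a wrapped size yields a tiny buffer followed by out-of-bounds writes. A sketch of the checked-multiply idea (the function name is illustrative, not the actual JasPer API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Allocate num * size bytes, failing cleanly if the product would
 * overflow size_t, in the spirit of JasPer's jas_alloc2(). */
static void *alloc2_checked(size_t num, size_t size)
{
    if (size != 0 && num > SIZE_MAX / size)
        return NULL;              /* would wrap: refuse, don't truncate */
    return malloc(num * size);
}
```

The division-based guard is branch-cheap and avoids any undefined behavior, which is why the same shape appears in calloc implementations and in the postgres fix cited further down in this set.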
static int do_ip_getsockopt(struct sock *sk, int level, int optname, char __user *optval, int __user *optlen, unsigned int flags) { struct inet_sock *inet = inet_sk(sk); int val; int len; if (level != SOL_IP) return -EOPNOTSUPP; if (ip_mroute_opt(optname)) return ip_mroute_getsockopt(sk, optname, optval, optlen); if (get_user(len, optlen)) return -EFAULT; if (len < 0) return -EINVAL; lock_sock(sk); switch (optname) { case IP_OPTIONS: { unsigned char optbuf[sizeof(struct ip_options)+40]; struct ip_options *opt = (struct ip_options *)optbuf; struct ip_options_rcu *inet_opt; inet_opt = rcu_dereference_protected(inet->inet_opt, sock_owned_by_user(sk)); opt->optlen = 0; if (inet_opt) memcpy(optbuf, &inet_opt->opt, sizeof(struct ip_options) + inet_opt->opt.optlen); release_sock(sk); if (opt->optlen == 0) return put_user(0, optlen); ip_options_undo(opt); len = min_t(unsigned int, len, opt->optlen); if (put_user(len, optlen)) return -EFAULT; if (copy_to_user(optval, opt->__data, len)) return -EFAULT; return 0; } case IP_PKTINFO: val = (inet->cmsg_flags & IP_CMSG_PKTINFO) != 0; break; case IP_RECVTTL: val = (inet->cmsg_flags & IP_CMSG_TTL) != 0; break; case IP_RECVTOS: val = (inet->cmsg_flags & IP_CMSG_TOS) != 0; break; case IP_RECVOPTS: val = (inet->cmsg_flags & IP_CMSG_RECVOPTS) != 0; break; case IP_RETOPTS: val = (inet->cmsg_flags & IP_CMSG_RETOPTS) != 0; break; case IP_PASSSEC: val = (inet->cmsg_flags & IP_CMSG_PASSSEC) != 0; break; case IP_RECVORIGDSTADDR: val = (inet->cmsg_flags & IP_CMSG_ORIGDSTADDR) != 0; break; case IP_TOS: val = inet->tos; break; case IP_TTL: val = (inet->uc_ttl == -1 ? 
sysctl_ip_default_ttl : inet->uc_ttl); break; case IP_HDRINCL: val = inet->hdrincl; break; case IP_NODEFRAG: val = inet->nodefrag; break; case IP_MTU_DISCOVER: val = inet->pmtudisc; break; case IP_MTU: { struct dst_entry *dst; val = 0; dst = sk_dst_get(sk); if (dst) { val = dst_mtu(dst); dst_release(dst); } if (!val) { release_sock(sk); return -ENOTCONN; } break; } case IP_RECVERR: val = inet->recverr; break; case IP_MULTICAST_TTL: val = inet->mc_ttl; break; case IP_MULTICAST_LOOP: val = inet->mc_loop; break; case IP_UNICAST_IF: val = (__force int)htonl((__u32) inet->uc_index); break; case IP_MULTICAST_IF: { struct in_addr addr; len = min_t(unsigned int, len, sizeof(struct in_addr)); addr.s_addr = inet->mc_addr; release_sock(sk); if (put_user(len, optlen)) return -EFAULT; if (copy_to_user(optval, &addr, len)) return -EFAULT; return 0; } case IP_MSFILTER: { struct ip_msfilter msf; int err; if (len < IP_MSFILTER_SIZE(0)) { release_sock(sk); return -EINVAL; } if (copy_from_user(&msf, optval, IP_MSFILTER_SIZE(0))) { release_sock(sk); return -EFAULT; } err = ip_mc_msfget(sk, &msf, (struct ip_msfilter __user *)optval, optlen); release_sock(sk); return err; } case MCAST_MSFILTER: { struct group_filter gsf; int err; if (len < GROUP_FILTER_SIZE(0)) { release_sock(sk); return -EINVAL; } if (copy_from_user(&gsf, optval, GROUP_FILTER_SIZE(0))) { release_sock(sk); return -EFAULT; } err = ip_mc_gsfget(sk, &gsf, (struct group_filter __user *)optval, optlen); release_sock(sk); return err; } case IP_MULTICAST_ALL: val = inet->mc_all; break; case IP_PKTOPTIONS: { struct msghdr msg; release_sock(sk); if (sk->sk_type != SOCK_STREAM) return -ENOPROTOOPT; msg.msg_control = optval; msg.msg_controllen = len; msg.msg_flags = flags; if (inet->cmsg_flags & IP_CMSG_PKTINFO) { struct in_pktinfo info; info.ipi_addr.s_addr = inet->inet_rcv_saddr; info.ipi_spec_dst.s_addr = inet->inet_rcv_saddr; info.ipi_ifindex = inet->mc_index; put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info); } if 
(inet->cmsg_flags & IP_CMSG_TTL) { int hlim = inet->mc_ttl; put_cmsg(&msg, SOL_IP, IP_TTL, sizeof(hlim), &hlim); } if (inet->cmsg_flags & IP_CMSG_TOS) { int tos = inet->rcv_tos; put_cmsg(&msg, SOL_IP, IP_TOS, sizeof(tos), &tos); } len -= msg.msg_controllen; return put_user(len, optlen); } case IP_FREEBIND: val = inet->freebind; break; case IP_TRANSPARENT: val = inet->transparent; break; case IP_MINTTL: val = inet->min_ttl; break; default: release_sock(sk); return -ENOPROTOOPT; } release_sock(sk); if (len < sizeof(int) && len > 0 && val >= 0 && val <= 255) { unsigned char ucval = (unsigned char)val; len = 1; if (put_user(len, optlen)) return -EFAULT; if (copy_to_user(optval, &ucval, 1)) return -EFAULT; } else { len = min_t(unsigned int, sizeof(int), len); if (put_user(len, optlen)) return -EFAULT; if (copy_to_user(optval, &val, len)) return -EFAULT; } return 0; }
0
[ "CWE-20" ]
net
85fbaa75037d0b6b786ff18658ddf0b4014ce2a4
312,709,252,802,332,860,000,000,000,000,000,000,000
229
inet: fix addr_len/msg->msg_namelen assignment in recv_error and rxpmtu functions Commit bceaa90240b6019ed73b49965eac7d167610be69 ("inet: prevent leakage of uninitialized memory to user in recv syscalls") conditionally updated addr_len if the msg_name is written to. The recv_error and rxpmtu functions relied on the recvmsg functions to set up addr_len before. As this does not happen any more we have to pass addr_len to those functions as well and set it to the size of the corresponding sockaddr length. This broke traceroute and such. Fixes: bceaa90240b6 ("inet: prevent leakage of uninitialized memory to user in recv syscalls") Reported-by: Brad Spengler <[email protected]> Reported-by: Tom Labanowski Cc: mpb <[email protected]> Cc: David S. Miller <[email protected]> Cc: Eric Dumazet <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Signed-off-by: David S. Miller <[email protected]>
eval_expr_to_bool(typval_T *expr, int *error) { typval_T rettv; int res; if (eval_expr_typval(expr, NULL, 0, &rettv) == FAIL) { *error = TRUE; return FALSE; } res = (tv_get_bool_chk(&rettv, error) != 0); clear_tv(&rettv); return res; }
0
[ "CWE-122", "CWE-787" ]
vim
605ec91e5a7330d61be313637e495fa02a6dc264
180,496,761,844,651,540,000,000,000,000,000,000,000
14
patch 8.2.3847: illegal memory access when using a lambda with an error Problem: Illegal memory access when using a lambda with an error. Solution: Avoid skipping over the NUL after a string.
int usb_cypress_load_firmware(struct usb_device *udev, const struct firmware *fw, int type) { struct hexline *hx; u8 reset; int ret,pos=0; hx = kmalloc(sizeof(*hx), GFP_KERNEL); if (!hx) return -ENOMEM; /* stop the CPU */ reset = 1; if ((ret = usb_cypress_writemem(udev,cypress[type].cpu_cs_register,&reset,1)) != 1) err("could not stop the USB controller CPU."); while ((ret = dvb_usb_get_hexline(fw, hx, &pos)) > 0) { deb_fw("writing to address 0x%04x (buffer: 0x%02x %02x)\n", hx->addr, hx->len, hx->chk); ret = usb_cypress_writemem(udev, hx->addr, hx->data, hx->len); if (ret != hx->len) { err("error while transferring firmware (transferred size: %d, block size: %d)", ret, hx->len); ret = -EINVAL; break; } } if (ret < 0) { err("firmware download failed at %d with %d",pos,ret); kfree(hx); return ret; } if (ret == 0) { /* restart the CPU */ reset = 0; if (ret || usb_cypress_writemem(udev,cypress[type].cpu_cs_register,&reset,1) != 1) { err("could not restart the USB controller CPU."); ret = -EINVAL; } } else ret = -EIO; kfree(hx); return ret; }
1
[ "CWE-119", "CWE-787" ]
linux
67b0503db9c29b04eadfeede6bebbfe5ddad94ef
256,557,976,954,160,700,000,000,000,000,000,000,000
46
[media] dvb-usb-firmware: don't do DMA on stack The buffer allocation for the firmware data was changed in commit 43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load") but the same applies for the reset value. Fixes: 43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load") Cc: [email protected] Signed-off-by: Stefan Brüns <[email protected]> Signed-off-by: Mauro Carvalho Chehab <[email protected]>
GF_Err stbl_AppendTrafMap(GF_SampleTableBox *stbl, Bool is_seg_start, u64 seg_start_offset, u64 frag_start_offset, u8 *moof_template, u32 moof_template_size, u64 sidx_start, u64 sidx_end) { GF_TrafToSampleMap *tmap; GF_TrafMapEntry *tmap_ent; if (!stbl->traf_map) { //nope, create one GF_SAFEALLOC(stbl->traf_map, GF_TrafToSampleMap); if (!stbl->traf_map) return GF_OUT_OF_MEM; } tmap = stbl->traf_map; if (tmap->nb_entries >= stbl->SampleSize->sampleCount) { u32 i; for (i=0; i<tmap->nb_entries; i++) { if (tmap->frag_starts[i].moof_template) gf_free(tmap->frag_starts[i].moof_template); } memset(tmap->frag_starts, 0, sizeof(GF_TrafMapEntry)*tmap->nb_alloc); tmap->nb_entries = 0; } if (tmap->nb_entries + 1 > tmap->nb_alloc) { tmap->nb_alloc++; tmap->frag_starts = gf_realloc(tmap->frag_starts, sizeof(GF_TrafMapEntry) * tmap->nb_alloc); if (!tmap->frag_starts) return GF_OUT_OF_MEM; } tmap_ent = &tmap->frag_starts[tmap->nb_entries]; tmap->nb_entries += 1; memset(tmap_ent, 0, sizeof(GF_TrafMapEntry)); tmap_ent->sample_num = stbl->SampleSize->sampleCount; tmap_ent->moof_template = moof_template; tmap_ent->moof_template_size = moof_template_size; tmap_ent->moof_start = frag_start_offset; tmap_ent->sidx_start = sidx_start; tmap_ent->sidx_end = sidx_end; if (is_seg_start) tmap_ent->seg_start_plus_one = 1 + seg_start_offset; return GF_OK; }
0
[ "CWE-120", "CWE-787" ]
gpac
77ed81c069e10b3861d88f72e1c6be1277ee7eae
192,832,077,402,496,400,000,000,000,000,000,000,000
40
fixed #1774 (fuzz)
load_xwd_f2_d24_b32 (const gchar *filename, FILE *ifp, L_XWDFILEHEADER *xwdhdr, L_XWDCOLOR *xwdcolmap, GError **error) { register guchar *dest, lsbyte_first; gint width, height, linepad, i, j, c0, c1, c2, c3; gint tile_height, scan_lines; L_CARD32 pixelval; gint red, green, blue, ncols; gint maxred, maxgreen, maxblue; gulong redmask, greenmask, bluemask; guint redshift, greenshift, blueshift; guchar redmap[256], greenmap[256], bluemap[256]; guchar *data; PIXEL_MAP pixel_map; gint err = 0; gint32 layer_ID, image_ID; GeglBuffer *buffer; #ifdef XWD_DEBUG printf ("load_xwd_f2_d24_b32 (%s)\n", filename); #endif width = xwdhdr->l_pixmap_width; height = xwdhdr->l_pixmap_height; redmask = xwdhdr->l_red_mask; greenmask = xwdhdr->l_green_mask; bluemask = xwdhdr->l_blue_mask; if (redmask == 0) redmask = 0xff0000; if (greenmask == 0) greenmask = 0x00ff00; if (bluemask == 0) bluemask = 0x0000ff; /* How to shift RGB to be right aligned ? */ /* (We rely on the the mask bits are grouped and not mixed) */ redshift = greenshift = blueshift = 0; while (((1 << redshift) & redmask) == 0) redshift++; while (((1 << greenshift) & greenmask) == 0) greenshift++; while (((1 << blueshift) & bluemask) == 0) blueshift++; /* The bits_per_rgb may not be correct. 
Use redmask instead */ maxred = 0; while (redmask >> (redshift + maxred)) maxred++; maxred = (1 << maxred) - 1; maxgreen = 0; while (greenmask >> (greenshift + maxgreen)) maxgreen++; maxgreen = (1 << maxgreen) - 1; maxblue = 0; while (bluemask >> (blueshift + maxblue)) maxblue++; maxblue = (1 << maxblue) - 1; if (maxred > sizeof (redmap) || maxgreen > sizeof (greenmap) || maxblue > sizeof (bluemap)) { g_set_error (error, G_FILE_ERROR, G_FILE_ERROR_FAILED, _("XWD-file %s is corrupt."), gimp_filename_to_utf8 (filename)); return -1; } image_ID = create_new_image (filename, width, height, GIMP_RGB, GIMP_RGB_IMAGE, &layer_ID, &buffer); tile_height = gimp_tile_height (); data = g_malloc (tile_height * width * 3); /* Set map-arrays for red, green, blue */ for (red = 0; red <= maxred; red++) redmap[red] = (red * 255) / maxred; for (green = 0; green <= maxgreen; green++) greenmap[green] = (green * 255) / maxgreen; for (blue = 0; blue <= maxblue; blue++) bluemap[blue] = (blue * 255) / maxblue; ncols = xwdhdr->l_colormap_entries; if (xwdhdr->l_ncolors < ncols) ncols = xwdhdr->l_ncolors; set_pixelmap (ncols, xwdcolmap, &pixel_map); /* What do we have to consume after a line has finished ? 
*/ linepad = xwdhdr->l_bytes_per_line - (xwdhdr->l_pixmap_width*xwdhdr->l_bits_per_pixel)/8; if (linepad < 0) linepad = 0; lsbyte_first = (xwdhdr->l_byte_order == 0); dest = data; scan_lines = 0; if (xwdhdr->l_bits_per_pixel == 32) { for (i = 0; i < height; i++) { for (j = 0; j < width; j++) { c0 = getc (ifp); c1 = getc (ifp); c2 = getc (ifp); c3 = getc (ifp); if (c3 < 0) { err = 1; break; } if (lsbyte_first) pixelval = c0 | (c1 << 8) | (c2 << 16) | (c3 << 24); else pixelval = (c0 << 24) | (c1 << 16) | (c2 << 8) | c3; if (get_pixelmap (pixelval, &pixel_map, dest, dest+1, dest+2)) { dest += 3; } else { *(dest++) = redmap[(pixelval & redmask) >> redshift]; *(dest++) = greenmap[(pixelval & greenmask) >> greenshift]; *(dest++) = bluemap[(pixelval & bluemask) >> blueshift]; } } scan_lines++; if (err) break; for (j = 0; j < linepad; j++) getc (ifp); if ((i % 20) == 0) gimp_progress_update ((gdouble) (i + 1) / (gdouble) height); if ((scan_lines == tile_height) || ((i+1) == height)) { gegl_buffer_set (buffer, GEGL_RECTANGLE (0, i - scan_lines + 1, width, scan_lines), 0, NULL, data, GEGL_AUTO_ROWSTRIDE); scan_lines = 0; dest = data; } } } else /* 24 bits per pixel */ { for (i = 0; i < height; i++) { for (j = 0; j < width; j++) { c0 = getc (ifp); c1 = getc (ifp); c2 = getc (ifp); if (c2 < 0) { err = 1; break; } if (lsbyte_first) pixelval = c0 | (c1 << 8) | (c2 << 16); else pixelval = (c0 << 16) | (c1 << 8) | c2; if (get_pixelmap (pixelval, &pixel_map, dest, dest+1, dest+2)) { dest += 3; } else { *(dest++) = redmap[(pixelval & redmask) >> redshift]; *(dest++) = greenmap[(pixelval & greenmask) >> greenshift]; *(dest++) = bluemap[(pixelval & bluemask) >> blueshift]; } } scan_lines++; if (err) break; for (j = 0; j < linepad; j++) getc (ifp); if ((i % 20) == 0) gimp_progress_update ((gdouble) (i + 1) / (gdouble) height); if ((scan_lines == tile_height) || ((i+1) == height)) { gegl_buffer_set (buffer, GEGL_RECTANGLE (0, i - scan_lines + 1, width, scan_lines), 0, NULL, data, 
GEGL_AUTO_ROWSTRIDE); scan_lines = 0; dest = data; } } } g_free (data); if (err) g_message (_("EOF encountered on reading")); g_object_unref (buffer); return err ? -1 : image_ID; }
0
[ "CWE-190" ]
gimp
32ae0f83e5748299641cceaabe3f80f1b3afd03e
166,522,069,606,136,660,000,000,000,000,000,000,000
210
file-xwd: sanity check colormap size (CVE-2013-1913)
QPDF::showXRefTable() { for (std::map<QPDFObjGen, QPDFXRefEntry>::iterator iter = this->xref_table.begin(); iter != this->xref_table.end(); ++iter) { QPDFObjGen const& og = (*iter).first; QPDFXRefEntry const& entry = (*iter).second; *out_stream << og.getObj() << "/" << og.getGen() << ": "; switch (entry.getType()) { case 1: *out_stream << "uncompressed; offset = " << entry.getOffset(); break; case 2: *out_stream << "compressed; stream = " << entry.getObjStreamNumber() << ", index = " << entry.getObjStreamIndex(); break; default: throw std::logic_error("unknown cross-reference table type while" " showing xref_table"); break; } *out_stream << std::endl; } }
0
[ "CWE-399", "CWE-835" ]
qpdf
701b518d5c56a1449825a3a37a716c58e05e1c3e
265,542,729,821,096,200,000,000,000,000,000,000,000
28
Detect recursion loops resolving objects (fixes #51) During parsing of an object, sometimes parts of the object have to be resolved. An example is stream lengths. If such an object directly or indirectly points to the object being parsed, it can cause an infinite loop. Guard against all cases of re-entrant resolution of objects.
lseg_length(PG_FUNCTION_ARGS) { LSEG *lseg = PG_GETARG_LSEG_P(0); PG_RETURN_FLOAT8(point_dt(&lseg->p[0], &lseg->p[1])); }
0
[ "CWE-703", "CWE-189" ]
postgres
31400a673325147e1205326008e32135a78b4d8a
150,249,295,071,490,570,000,000,000,000,000,000,000
6
Predict integer overflow to avoid buffer overruns. Several functions, mostly type input functions, calculated an allocation size such that the calculation wrapped to a small positive value when arguments implied a sufficiently-large requirement. Writes past the end of the inadvertent small allocation followed shortly thereafter. Coverity identified the path_in() vulnerability; code inspection led to the rest. In passing, add check_stack_depth() to prevent stack overflow in related functions. Back-patch to 8.4 (all supported versions). The non-comment hstore changes touch code that did not exist in 8.4, so that part stops at 9.0. Noah Misch and Heikki Linnakangas, reviewed by Tom Lane. Security: CVE-2014-0064
TEST_P(Security, BuiltinAuthenticationAndCryptoPlugin_reliable_submessage_data300kb) { PubSubReader<Data1mbType> reader(TEST_TOPIC_NAME); PubSubWriter<Data1mbType> writer(TEST_TOPIC_NAME); PropertyPolicy pub_part_property_policy, sub_part_property_policy, pub_property_policy, sub_property_policy; sub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.plugin", "builtin.PKI-DH")); sub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.identity_ca", "file://" + std::string(certs_path) + "/maincacert.pem")); sub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.identity_certificate", "file://" + std::string(certs_path) + "/mainsubcert.pem")); sub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.private_key", "file://" + std::string(certs_path) + "/mainsubkey.pem")); sub_part_property_policy.properties().emplace_back(Property("dds.sec.crypto.plugin", "builtin.AES-GCM-GMAC")); sub_property_policy.properties().emplace_back("rtps.endpoint.submessage_protection_kind", "ENCRYPT"); reader.history_depth(5). reliability(eprosima::fastrtps::RELIABLE_RELIABILITY_QOS). property_policy(sub_part_property_policy). 
entity_property_policy(sub_property_policy).init(); ASSERT_TRUE(reader.isInitialized()); pub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.plugin", "builtin.PKI-DH")); pub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.identity_ca", "file://" + std::string(certs_path) + "/maincacert.pem")); pub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.identity_certificate", "file://" + std::string(certs_path) + "/mainpubcert.pem")); pub_part_property_policy.properties().emplace_back(Property("dds.sec.auth.builtin.PKI-DH.private_key", "file://" + std::string(certs_path) + "/mainpubkey.pem")); pub_part_property_policy.properties().emplace_back(Property("dds.sec.crypto.plugin", "builtin.AES-GCM-GMAC")); pub_property_policy.properties().emplace_back("rtps.endpoint.submessage_protection_kind", "ENCRYPT"); // When doing fragmentation, it is necessary to have some degree of // flow control not to overrun the receive buffer. uint32_t bytesPerPeriod = 65536; uint32_t periodInMs = 50; writer.history_depth(5). asynchronously(eprosima::fastrtps::ASYNCHRONOUS_PUBLISH_MODE). add_throughput_controller_descriptor_to_pparams(bytesPerPeriod, periodInMs). property_policy(pub_part_property_policy). entity_property_policy(pub_property_policy).init(); ASSERT_TRUE(writer.isInitialized()); // Wait for authorization reader.waitAuthorized(); writer.waitAuthorized(); // Wait for discovery. writer.wait_discovery(); reader.wait_discovery(); auto data = default_data300kb_data_generator(5); reader.startReception(data); // Send data writer.send(data); // In this test all data should be sent. ASSERT_TRUE(data.empty()); // Block reader until reception finished or timeout. reader.block_for_all(); }
0
[ "CWE-284" ]
Fast-DDS
d2aeab37eb4fad4376b68ea4dfbbf285a2926384
258,243,460,487,521,680,000,000,000,000,000,000,000
71
check remote permissions (#1387) * Refs 5346. Blackbox test Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. one-way string compare Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Do not add partition separator on last partition Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Access control unit testing It only covers Partition and Topic permissions Signed-off-by: Iker Luengo <[email protected]> * Refs #3680. Fix partition check on Permissions plugin. Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix tests on mac Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix windows tests Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Avoid memory leak on test Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Proxy data mocks should not return temporary objects Signed-off-by: Iker Luengo <[email protected]> * refs 3680. uncrustify Signed-off-by: Iker Luengo <[email protected]> Co-authored-by: Miguel Company <[email protected]>
static void JS_DebugSetFailAt(v8::FunctionCallbackInfo<v8::Value> const& args) { TRI_V8_TRY_CATCH_BEGIN(isolate); v8::HandleScope scope(isolate); TRI_GET_GLOBALS(); if (v8g->_vocbase == nullptr) { TRI_V8_THROW_EXCEPTION_MEMORY(); } std::string dbname(v8g->_vocbase->name()); // extract arguments if (args.Length() != 1) { TRI_V8_THROW_EXCEPTION_USAGE("debugSetFailAt(<point>)"); } std::string const point = TRI_ObjectToString(isolate, args[0]); TRI_AddFailurePointDebugging(point.c_str()); if (ServerState::instance()->isCoordinator()) { auto res = clusterSendToAllServers(isolate, dbname, "_admin/debug/failat/" + StringUtils::urlEncode(point), arangodb::rest::RequestType::PUT, ""); if (res != TRI_ERROR_NO_ERROR) { TRI_V8_THROW_EXCEPTION(res); } } TRI_V8_RETURN_UNDEFINED(); TRI_V8_TRY_CATCH_END }
0
[ "CWE-918" ]
arangodb
d9b7f019d2435f107b19a59190bf9cc27d5f34dd
15,920,875,017,866,218,000,000,000,000,000,000,000
32
[APM-78] Disable installation from remote URL (#15292)
dns_zone_getxfrsource4dscp(dns_zone_t *zone) { REQUIRE(DNS_ZONE_VALID(zone)); return (zone->xfrsource4dscp); }
0
[ "CWE-327" ]
bind9
f09352d20a9d360e50683cd1d2fc52ccedcd77a0
38,969,076,657,245,550,000,000,000,000,000,000,000
4
Update keyfetch_done compute_tag check If in keyfetch_done the compute_tag fails (because for example the algorithm is not supported), don't crash, but instead ignore the key.
SAPI_API SAPI_TREAT_DATA_FUNC(php_default_treat_data) { char *res = NULL, *var, *val, *separator = NULL; const char *c_var; zval array; int free_buffer = 0; char *strtok_buf = NULL; zend_long count = 0; ZVAL_UNDEF(&array); switch (arg) { case PARSE_POST: case PARSE_GET: case PARSE_COOKIE: array_init(&array); switch (arg) { case PARSE_POST: zval_ptr_dtor(&PG(http_globals)[TRACK_VARS_POST]); ZVAL_COPY_VALUE(&PG(http_globals)[TRACK_VARS_POST], &array); break; case PARSE_GET: zval_ptr_dtor(&PG(http_globals)[TRACK_VARS_GET]); ZVAL_COPY_VALUE(&PG(http_globals)[TRACK_VARS_GET], &array); break; case PARSE_COOKIE: zval_ptr_dtor(&PG(http_globals)[TRACK_VARS_COOKIE]); ZVAL_COPY_VALUE(&PG(http_globals)[TRACK_VARS_COOKIE], &array); break; } break; default: ZVAL_COPY_VALUE(&array, destArray); break; } if (arg == PARSE_POST) { sapi_handle_post(&array); return; } if (arg == PARSE_GET) { /* GET data */ c_var = SG(request_info).query_string; if (c_var && *c_var) { res = (char *) estrdup(c_var); free_buffer = 1; } else { free_buffer = 0; } } else if (arg == PARSE_COOKIE) { /* Cookie data */ c_var = SG(request_info).cookie_data; if (c_var && *c_var) { res = (char *) estrdup(c_var); free_buffer = 1; } else { free_buffer = 0; } } else if (arg == PARSE_STRING) { /* String data */ res = str; free_buffer = 1; } if (!res) { return; } switch (arg) { case PARSE_GET: case PARSE_STRING: separator = PG(arg_separator).input; break; case PARSE_COOKIE: separator = ";\0"; break; } var = php_strtok_r(res, separator, &strtok_buf); while (var) { val = strchr(var, '='); if (arg == PARSE_COOKIE) { /* Remove leading spaces from cookie names, needed for multi-cookie header where ; can be followed by a space */ while (isspace(*var)) { var++; } if (var == val || *var == '\0') { goto next_cookie; } } if (++count > PG(max_input_vars)) { php_error_docref(NULL, E_WARNING, "Input variables exceeded " ZEND_LONG_FMT ". 
To increase the limit change max_input_vars in php.ini.", PG(max_input_vars)); break; } if (val) { /* have a value */ size_t val_len; size_t new_val_len; *val++ = '\0'; php_url_decode(var, strlen(var)); val_len = php_url_decode(val, strlen(val)); val = estrndup(val, val_len); if (sapi_module.input_filter(arg, var, &val, val_len, &new_val_len)) { php_register_variable_safe(var, val, new_val_len, &array); } efree(val); } else { size_t val_len; size_t new_val_len; php_url_decode(var, strlen(var)); val_len = 0; val = estrndup("", val_len); if (sapi_module.input_filter(arg, var, &val, val_len, &new_val_len)) { php_register_variable_safe(var, val, new_val_len, &array); } efree(val); } next_cookie: var = php_strtok_r(NULL, separator, &strtok_buf); } if (free_buffer) { efree(res); } }
1
[ "CWE-565" ]
php-src
6559fe912661ca5ce5f0eeeb591d928451428ed0
129,611,870,495,713,550,000,000,000,000,000,000,000
127
Do not decode cookie names anymore
SPL_METHOD(SplFileObject, fread) { spl_filesystem_object *intern = (spl_filesystem_object*)zend_object_store_get_object(getThis() TSRMLS_CC); long length = 0; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "l", &length) == FAILURE) { return; } if (length <= 0) { php_error_docref(NULL TSRMLS_CC, E_WARNING, "Length parameter must be greater than 0"); RETURN_FALSE; } if (length > INT_MAX) { php_error_docref(NULL TSRMLS_CC, E_WARNING, "Length parameter must be no more than %d", INT_MAX); RETURN_FALSE; } Z_STRVAL_P(return_value) = emalloc(length + 1); Z_STRLEN_P(return_value) = php_stream_read(intern->u.file.stream, Z_STRVAL_P(return_value), length); /* needed because recv/read/gzread doesnt put a null at the end*/ Z_STRVAL_P(return_value)[Z_STRLEN_P(return_value)] = 0; Z_TYPE_P(return_value) = IS_STRING; }
0
[ "CWE-190" ]
php-src
7245bff300d3fa8bacbef7897ff080a6f1c23eba
39,186,748,832,896,263,000,000,000,000,000,000,000
25
Fix bug #72262 - do not overflow int
xmlCtxtResetPush(xmlParserCtxtPtr ctxt, const char *chunk, int size, const char *filename, const char *encoding) { xmlParserInputPtr inputStream; xmlParserInputBufferPtr buf; xmlCharEncoding enc = XML_CHAR_ENCODING_NONE; if (ctxt == NULL) return(1); if ((encoding == NULL) && (chunk != NULL) && (size >= 4)) enc = xmlDetectCharEncoding((const xmlChar *) chunk, size); buf = xmlAllocParserInputBuffer(enc); if (buf == NULL) return(1); if (ctxt == NULL) { xmlFreeParserInputBuffer(buf); return(1); } xmlCtxtReset(ctxt); if (ctxt->pushTab == NULL) { ctxt->pushTab = (void **) xmlMalloc(ctxt->nameMax * 3 * sizeof(xmlChar *)); if (ctxt->pushTab == NULL) { xmlErrMemory(ctxt, NULL); xmlFreeParserInputBuffer(buf); return(1); } } if (filename == NULL) { ctxt->directory = NULL; } else { ctxt->directory = xmlParserGetDirectory(filename); } inputStream = xmlNewInputStream(ctxt); if (inputStream == NULL) { xmlFreeParserInputBuffer(buf); return(1); } if (filename == NULL) inputStream->filename = NULL; else inputStream->filename = (char *) xmlCanonicPath((const xmlChar *) filename); inputStream->buf = buf; inputStream->base = inputStream->buf->buffer->content; inputStream->cur = inputStream->buf->buffer->content; inputStream->end = &inputStream->buf->buffer->content[inputStream->buf->buffer->use]; inputPush(ctxt, inputStream); if ((size > 0) && (chunk != NULL) && (ctxt->input != NULL) && (ctxt->input->buf != NULL)) { int base = ctxt->input->base - ctxt->input->buf->buffer->content; int cur = ctxt->input->cur - ctxt->input->base; xmlParserInputBufferPush(ctxt->input->buf, size, chunk); ctxt->input->base = ctxt->input->buf->buffer->content + base; ctxt->input->cur = ctxt->input->base + cur; ctxt->input->end = &ctxt->input->buf->buffer->content[ctxt->input->buf->buffer-> use]; #ifdef DEBUG_PUSH xmlGenericError(xmlGenericErrorContext, "PP: pushed %d\n", size); #endif } if (encoding != NULL) { xmlCharEncodingHandlerPtr hdlr; if (ctxt->encoding != NULL) xmlFree((xmlChar *) ctxt->encoding); 
ctxt->encoding = xmlStrdup((const xmlChar *) encoding); hdlr = xmlFindCharEncodingHandler(encoding); if (hdlr != NULL) { xmlSwitchToEncoding(ctxt, hdlr); } else { xmlFatalErrMsgStr(ctxt, XML_ERR_UNSUPPORTED_ENCODING, "Unsupported encoding %s\n", BAD_CAST encoding); } } else if (enc != XML_CHAR_ENCODING_NONE) { xmlSwitchEncoding(ctxt, enc); } return(0); }
0
[ "CWE-125" ]
libxml2
77404b8b69bc122d12231807abf1a837d121b551
208,703,729,129,275,170,000,000,000,000,000,000,000
96
Make sure the parser returns when getting a Stop order patch backported from chromiun bug fixes, assuming author is Chris
void readYAMLConf(YAML::Node &node) { YAML::Node section = node["common"]; std::string strLine; string_array tempArray; section["api_mode"] >> global.APIMode; section["api_access_token"] >> global.accessToken; if(section["default_url"].IsSequence()) { section["default_url"] >> tempArray; if(tempArray.size()) { strLine = std::accumulate(std::next(tempArray.begin()), tempArray.end(), tempArray[0], [](std::string a, std::string b) { return std::move(a) + "|" + std::move(b); }); global.defaultUrls = strLine; eraseElements(tempArray); } } global.enableInsert = safe_as<std::string>(section["enable_insert"]); if(section["insert_url"].IsSequence()) { section["insert_url"] >> tempArray; if(tempArray.size()) { strLine = std::accumulate(std::next(tempArray.begin()), tempArray.end(), tempArray[0], [](std::string a, std::string b) { return std::move(a) + "|" + std::move(b); }); global.insertUrls = strLine; eraseElements(tempArray); } } section["prepend_insert_url"] >> global.prependInsert; if(section["exclude_remarks"].IsSequence()) section["exclude_remarks"] >> global.excludeRemarks; if(section["include_remarks"].IsSequence()) section["include_remarks"] >> global.includeRemarks; global.filterScript = safe_as<bool>(section["enable_filter"]) ? 
safe_as<std::string>(section["filter_script"]) : ""; section["base_path"] >> global.basePath; section["clash_rule_base"] >> global.clashBase; section["surge_rule_base"] >> global.surgeBase; section["surfboard_rule_base"] >> global.surfboardBase; section["mellow_rule_base"] >> global.mellowBase; section["quan_rule_base"] >> global.quanBase; section["quanx_rule_base"] >> global.quanXBase; section["loon_rule_base"] >> global.loonBase; section["sssub_rule_base"] >> global.SSSubBase; section["default_external_config"] >> global.defaultExtConfig; section["append_proxy_type"] >> global.appendType; section["proxy_config"] >> global.proxyConfig; section["proxy_ruleset"] >> global.proxyRuleset; section["proxy_subscription"] >> global.proxySubscription; if(node["userinfo"].IsDefined()) { section = node["userinfo"]; if(section["stream_rule"].IsSequence()) { readRegexMatch(section["stream_rule"], "|", tempArray, false); auto configs = INIBinding::from<RegexMatchConfig>::from_ini(tempArray, "|"); safe_set_streams(configs); eraseElements(tempArray); } if(section["time_rule"].IsSequence()) { readRegexMatch(section["time_rule"], "|", tempArray, false); auto configs = INIBinding::from<RegexMatchConfig>::from_ini(tempArray, "|"); safe_set_times(configs); eraseElements(tempArray); } } if(node["node_pref"].IsDefined()) { section = node["node_pref"]; /* section["udp_flag"] >> udp_flag; section["tcp_fast_open_flag"] >> tfo_flag; section["skip_cert_verify_flag"] >> scv_flag; */ global.UDPFlag.set(safe_as<std::string>(section["udp_flag"])); global.TFOFlag.set(safe_as<std::string>(section["tcp_fast_open_flag"])); global.skipCertVerify.set(safe_as<std::string>(section["skip_cert_verify_flag"])); global.TLS13Flag.set(safe_as<std::string>(section["tls13_flag"])); section["sort_flag"] >> global.enableSort; section["sort_script"] >> global.sortScript; section["filter_deprecated_nodes"] >> global.filterDeprecated; section["append_sub_userinfo"] >> global.appendUserinfo; 
section["clash_use_new_field_name"] >> global.clashUseNewField; section["clash_proxies_style"] >> global.clashProxiesStyle; } if(section["rename_node"].IsSequence()) { readRegexMatch(section["rename_node"], "@", tempArray, false); auto configs = INIBinding::from<RegexMatchConfig>::from_ini(tempArray, "@"); safe_set_renames(configs); eraseElements(tempArray); } if(node["managed_config"].IsDefined()) { section = node["managed_config"]; section["write_managed_config"] >> global.writeManagedConfig; section["managed_config_prefix"] >> global.managedConfigPrefix; section["config_update_interval"] >> global.updateInterval; section["config_update_strict"] >> global.updateStrict; section["quanx_device_id"] >> global.quanXDevID; } if(node["surge_external_proxy"].IsDefined()) { node["surge_external_proxy"]["surge_ssr_path"] >> global.surgeSSRPath; node["surge_external_proxy"]["resolve_hostname"] >> global.surgeResolveHostname; } if(node["emojis"].IsDefined()) { section = node["emojis"]; section["add_emoji"] >> global.addEmoji; section["remove_old_emoji"] >> global.removeEmoji; if(section["rules"].IsSequence()) { readEmoji(section["rules"], tempArray, false); auto configs = INIBinding::from<RegexMatchConfig>::from_ini(tempArray, ","); safe_set_emojis(configs); eraseElements(tempArray); } } const char *rulesets_title = node["rulesets"].IsDefined() ? "rulesets" : "ruleset"; if(node[rulesets_title].IsDefined()) { section = node[rulesets_title]; section["enabled"] >> global.enableRuleGen; if(!global.enableRuleGen) { global.overwriteOriginalRules = false; global.updateRulesetOnRequest = false; } else { section["overwrite_original_rules"] >> global.overwriteOriginalRules; section["update_ruleset_on_request"] >> global.updateRulesetOnRequest; } const char *ruleset_title = section["rulesets"].IsDefined() ? 
"rulesets" : "surge_ruleset"; if(section[ruleset_title].IsSequence()) { string_array vArray; readRuleset(section[ruleset_title], vArray, false); global.customRulesets = INIBinding::from<RulesetConfig>::from_ini(vArray); } } const char *groups_title = node["proxy_groups"].IsDefined() ? "proxy_groups" : "proxy_group"; if(node[groups_title].IsDefined() && node[groups_title]["custom_proxy_group"].IsDefined()) { string_array vArray; readGroup(node[groups_title]["custom_proxy_group"], vArray, false); global.customProxyGroups = INIBinding::from<ProxyGroupConfig>::from_ini(vArray); } if(node["template"].IsDefined()) { node["template"]["template_path"] >> global.templatePath; if(node["template"]["globals"].IsSequence()) { eraseElements(global.templateVars); for(size_t i = 0; i < node["template"]["globals"].size(); i++) { std::string key, value; node["template"]["globals"][i]["key"] >> key; node["template"]["globals"][i]["value"] >> value; global.templateVars[key] = value; } } } if(node["aliases"].IsSequence()) { webServer.reset_redirect(); for(size_t i = 0; i < node["aliases"].size(); i++) { std::string uri, target; node["aliases"][i]["uri"] >> uri; node["aliases"][i]["target"] >> target; webServer.append_redirect(uri, target); } } if(node["tasks"].IsSequence()) { string_array vArray; for(size_t i = 0; i < node["tasks"].size(); i++) { std::string name, exp, path, timeout; node["tasks"][i]["import"] >> name; if(name.size()) { vArray.emplace_back("!!import:" + name); continue; } node["tasks"][i]["name"] >> name; node["tasks"][i]["cronexp"] >> exp; node["tasks"][i]["path"] >> path; node["tasks"][i]["timeout"] >> timeout; strLine = name + "`" + exp + "`" + path + "`" + timeout; vArray.emplace_back(std::move(strLine)); } importItems(vArray, false); global.enableCron = !vArray.empty(); global.cronTasks = INIBinding::from<CronTaskConfig>::from_ini(vArray); refresh_schedule(); } if(node["server"].IsDefined()) { node["server"]["listen"] >> global.listenAddress; 
node["server"]["port"] >> global.listenPort; node["server"]["serve_file_root"] >>= webServer.serve_file_root; webServer.serve_file = !webServer.serve_file_root.empty(); } if(node["advanced"].IsDefined()) { std::string log_level; node["advanced"]["log_level"] >> log_level; node["advanced"]["print_debug_info"] >> global.printDbgInfo; if(global.printDbgInfo) global.logLevel = LOG_LEVEL_VERBOSE; else { switch(hash_(log_level)) { case "warn"_hash: global.logLevel = LOG_LEVEL_WARNING; break; case "error"_hash: global.logLevel = LOG_LEVEL_ERROR; break; case "fatal"_hash: global.logLevel = LOG_LEVEL_FATAL; break; case "verbose"_hash: global.logLevel = LOG_LEVEL_VERBOSE; break; case "debug"_hash: global.logLevel = LOG_LEVEL_DEBUG; break; default: global.logLevel = LOG_LEVEL_INFO; } } node["advanced"]["max_pending_connections"] >> global.maxPendingConns; node["advanced"]["max_concurrent_threads"] >> global.maxConcurThreads; node["advanced"]["max_allowed_rulesets"] >> global.maxAllowedRulesets; node["advanced"]["max_allowed_rules"] >> global.maxAllowedRules; node["advanced"]["max_allowed_download_size"] >> global.maxAllowedDownloadSize; if(node["advanced"]["enable_cache"].IsDefined()) { if(safe_as<bool>(node["advanced"]["enable_cache"])) { node["advanced"]["cache_subscription"] >> global.cacheSubscription; node["advanced"]["cache_config"] >> global.cacheConfig; node["advanced"]["cache_ruleset"] >> global.cacheRuleset; node["advanced"]["serve_cache_on_fetch_fail"] >> global.serveCacheOnFetchFail; } else global.cacheSubscription = global.cacheConfig = global.cacheRuleset = 0; //disable cache } node["advanced"]["script_clean_context"] >> global.scriptCleanContext; node["advanced"]["async_fetch_ruleset"] >> global.asyncFetchRuleset; node["advanced"]["skip_failed_links"] >> global.skipFailedLinks; } }
1
[ "CWE-434", "CWE-94" ]
subconverter
ce8d2bd0f13f05fcbd2ed90755d097f402393dd3
215,023,881,635,728,620,000,000,000,000,000,000,000
279
Enhancements Add authorization check before loading scripts. Add detailed logs when loading preference settings.
BGD_DECLARE(int) gdAlphaBlend (int dst, int src) { int src_alpha = gdTrueColorGetAlpha(src); int dst_alpha, alpha, red, green, blue; int src_weight, dst_weight, tot_weight; /* -------------------------------------------------------------------- */ /* Simple cases we want to handle fast. */ /* -------------------------------------------------------------------- */ if( src_alpha == gdAlphaOpaque ) return src; dst_alpha = gdTrueColorGetAlpha(dst); if( src_alpha == gdAlphaTransparent ) return dst; if( dst_alpha == gdAlphaTransparent ) return src; /* -------------------------------------------------------------------- */ /* What will the source and destination alphas be? Note that */ /* the destination weighting is substantially reduced as the */ /* overlay becomes quite opaque. */ /* -------------------------------------------------------------------- */ src_weight = gdAlphaTransparent - src_alpha; dst_weight = (gdAlphaTransparent - dst_alpha) * src_alpha / gdAlphaMax; tot_weight = src_weight + dst_weight; /* -------------------------------------------------------------------- */ /* What red, green and blue result values will we use? */ /* -------------------------------------------------------------------- */ alpha = src_alpha * dst_alpha / gdAlphaMax; red = (gdTrueColorGetRed(src) * src_weight + gdTrueColorGetRed(dst) * dst_weight) / tot_weight; green = (gdTrueColorGetGreen(src) * src_weight + gdTrueColorGetGreen(dst) * dst_weight) / tot_weight; blue = (gdTrueColorGetBlue(src) * src_weight + gdTrueColorGetBlue(dst) * dst_weight) / tot_weight; /* -------------------------------------------------------------------- */ /* Return merged result. */ /* -------------------------------------------------------------------- */ return ((alpha << 24) + (red << 16) + (green << 8) + blue); }
0
[ "CWE-119", "CWE-787" ]
libgd
77f619d48259383628c3ec4654b1ad578e9eb40e
47,143,068,322,639,710,000,000,000,000,000,000,000
44
fix #215 gdImageFillToBorder stack-overflow when invalid color is used
void _gnutls_session_client_cert_type_set(gnutls_session_t session, gnutls_certificate_type_t ct) { _gnutls_handshake_log ("HSK[%p]: Selected client certificate type %s (%d)\n", session, gnutls_certificate_type_get_name(ct), ct); session->security_parameters.client_ctype = ct; }
0
[]
gnutls
3d7fae761e65e9d0f16d7247ee8a464d4fe002da
74,469,231,642,592,420,000,000,000,000,000,000,000
8
valgrind: check if session ticket key is used without initialization This adds a valgrind client request for session->key.session_ticket_key to make sure that it is not used without initialization. Signed-off-by: Daiki Ueno <[email protected]>
void kvm_arch_hardware_unsetup(void) { kvm_unregister_perf_callbacks(); static_call(kvm_x86_hardware_unsetup)(); }
0
[ "CWE-459" ]
linux
683412ccf61294d727ead4a73d97397396e69a6b
195,652,523,584,830,760,000,000,000,000,000,000,000
6
KVM: SEV: add cache flush to solve SEV cache incoherency issues Flush the CPU caches when memory is reclaimed from an SEV guest (where reclaim also includes it being unmapped from KVM's memslots). Due to lack of coherency for SEV encrypted memory, failure to flush results in silent data corruption if userspace is malicious/broken and doesn't ensure SEV guest memory is properly pinned and unpinned. Cache coherency is not enforced across the VM boundary in SEV (AMD APM vol.2 Section 15.34.7). Confidential cachelines, generated by confidential VM guests have to be explicitly flushed on the host side. If a memory page containing dirty confidential cachelines was released by VM and reallocated to another user, the cachelines may corrupt the new user at a later time. KVM takes a shortcut by assuming all confidential memory remain pinned until the end of VM lifetime. Therefore, KVM does not flush cache at mmu_notifier invalidation events. Because of this incorrect assumption and the lack of cache flushing, malicous userspace can crash the host kernel: creating a malicious VM and continuously allocates/releases unpinned confidential memory pages when the VM is running. Add cache flush operations to mmu_notifier operations to ensure that any physical memory leaving the guest VM get flushed. In particular, hook mmu_notifier_invalidate_range_start and mmu_notifier_release events and flush cache accordingly. The hook after releasing the mmu lock to avoid contention with other vCPUs. Cc: [email protected] Suggested-by: Sean Christpherson <[email protected]> Reported-by: Mingwei Zhang <[email protected]> Signed-off-by: Mingwei Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
_rl_callback_data_dispose (arg) _rl_callback_generic_arg *arg; { xfree (arg); }
0
[]
bash
955543877583837c85470f7fb8a97b7aa8d45e6c
254,179,623,958,938,400,000,000,000,000,000,000,000
5
bash-4.4-rc2 release
int ip_options_get(struct net *net, struct ip_options_rcu **optp, unsigned char *data, int optlen) { struct ip_options_rcu *opt = ip_options_get_alloc(optlen); if (!opt) return -ENOMEM; if (optlen) memcpy(opt->opt.__data, data, optlen); return ip_options_get_finish(net, optp, opt, optlen); }
0
[ "CWE-362" ]
linux-2.6
f6d8bd051c391c1c0458a30b2a7abcd939329259
152,256,801,938,089,260,000,000,000,000,000,000,000
11
inet: add RCU protection to inet->opt We lack proper synchronization to manipulate inet->opt ip_options Problem is ip_make_skb() calls ip_setup_cork() and ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options), without any protection against another thread manipulating inet->opt. Another thread can change inet->opt pointer and free old one under us. Use RCU to protect inet->opt (changed to inet->inet_opt). Instead of handling atomic refcounts, just copy ip_options when necessary, to avoid cache line dirtying. We cant insert an rcu_head in struct ip_options since its included in skb->cb[], so this patch is large because I had to introduce a new ip_options_rcu structure. Signed-off-by: Eric Dumazet <[email protected]> Cc: Herbert Xu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static void windowIfNewPeer( Parse *pParse, ExprList *pOrderBy, int regNew, /* First in array of new values */ int regOld, /* First in array of old values */ int addr /* Jump here */ ){ Vdbe *v = sqlite3GetVdbe(pParse); if( pOrderBy ){ int nVal = pOrderBy->nExpr; KeyInfo *pKeyInfo = sqlite3KeyInfoFromExprList(pParse, pOrderBy, 0, 0); sqlite3VdbeAddOp3(v, OP_Compare, regOld, regNew, nVal); sqlite3VdbeAppendP4(v, (void*)pKeyInfo, P4_KEYINFO); sqlite3VdbeAddOp3(v, OP_Jump, sqlite3VdbeCurrentAddr(v)+1, addr, sqlite3VdbeCurrentAddr(v)+1 ); VdbeCoverageEqNe(v); sqlite3VdbeAddOp3(v, OP_Copy, regNew, regOld, nVal-1); }else{ sqlite3VdbeAddOp2(v, OP_Goto, 0, addr); } }
0
[ "CWE-476" ]
sqlite
75e95e1fcd52d3ec8282edb75ac8cd0814095d54
155,633,484,751,677,920,000,000,000,000,000,000,000
22
When processing constant integer values in ORDER BY clauses of window definitions (see check-in [7e4809eadfe99ebf]) be sure to fully disable the constant value to avoid an invalid pointer dereference if the expression is ever duplicated. This fixes a crash report from Yongheng and Rui. FossilOrigin-Name: 1ca0bd982ab1183bbafce0d260e4dceda5eb766ed2e7793374a88d1ae0bdd2ca
bool RWFunction::Validate(RandomNumberGenerator &rng, unsigned int level) const { bool pass = true; pass = pass && m_n > Integer::One() && m_n%8 == 5; return pass; }
0
[ "CWE-200", "CWE-399" ]
cryptopp
9425e16437439e68c7d96abef922167d68fafaff
200,925,029,368,727,770,000,000,000,000,000,000,000
6
Fix for CVE-2015-2141. Thanks to Evgeny Sidorov for reporting. Squaring to satisfy Jacobi requirements suggested by JPM.
void neigh_table_init(struct neigh_table *tbl) { unsigned long now = jiffies; unsigned long phsize; atomic_set(&tbl->parms.refcnt, 1); INIT_RCU_HEAD(&tbl->parms.rcu_head); tbl->parms.reachable_time = neigh_rand_reach_time(tbl->parms.base_reachable_time); if (!tbl->kmem_cachep) tbl->kmem_cachep = kmem_cache_create(tbl->id, tbl->entry_size, 0, SLAB_HWCACHE_ALIGN, NULL, NULL); if (!tbl->kmem_cachep) panic("cannot create neighbour cache"); tbl->stats = alloc_percpu(struct neigh_statistics); if (!tbl->stats) panic("cannot create neighbour cache statistics"); #ifdef CONFIG_PROC_FS tbl->pde = create_proc_entry(tbl->id, 0, proc_net_stat); if (!tbl->pde) panic("cannot create neighbour proc dir entry"); tbl->pde->proc_fops = &neigh_stat_seq_fops; tbl->pde->data = tbl; #endif tbl->hash_mask = 1; tbl->hash_buckets = neigh_hash_alloc(tbl->hash_mask + 1); phsize = (PNEIGH_HASHMASK + 1) * sizeof(struct pneigh_entry *); tbl->phash_buckets = kmalloc(phsize, GFP_KERNEL); if (!tbl->hash_buckets || !tbl->phash_buckets) panic("cannot allocate neighbour cache hashes"); memset(tbl->phash_buckets, 0, phsize); get_random_bytes(&tbl->hash_rnd, sizeof(tbl->hash_rnd)); rwlock_init(&tbl->lock); init_timer(&tbl->gc_timer); tbl->gc_timer.data = (unsigned long)tbl; tbl->gc_timer.function = neigh_periodic_timer; tbl->gc_timer.expires = now + 1; add_timer(&tbl->gc_timer); init_timer(&tbl->proxy_timer); tbl->proxy_timer.data = (unsigned long)tbl; tbl->proxy_timer.function = neigh_proxy_process; skb_queue_head_init(&tbl->proxy_queue); tbl->last_flush = now; tbl->last_rand = now + tbl->parms.reachable_time * 20; write_lock(&neigh_tbl_lock); tbl->next = neigh_tables; neigh_tables = tbl; write_unlock(&neigh_tbl_lock); }
0
[ "CWE-200" ]
linux-2.6
9ef1d4c7c7aca1cd436612b6ca785b726ffb8ed8
20,430,728,187,028,306,000,000,000,000,000,000,000
63
[NETLINK]: Missing initializations in dumped data Mostly missing initialization of padding fields of 1 or 2 bytes length, two instances of uninitialized nlmsgerr->msg of 16 bytes length. Signed-off-by: Patrick McHardy <[email protected]> Signed-off-by: David S. Miller <[email protected]>
lacks_deep_count (NautilusFile *file) { return file->details->deep_counts_status != NAUTILUS_REQUEST_DONE; }
0
[ "CWE-20" ]
nautilus
1630f53481f445ada0a455e9979236d31a8d3bb0
48,091,846,732,075,770,000,000,000,000,000,000,000
4
mime-actions: use file metadata for trusting desktop files Currently we only trust desktop files that have the executable bit set, and don't replace the displayed icon or the displayed name until it's trusted, which prevents for running random programs by a malicious desktop file. However, the executable permission is preserved if the desktop file comes from a compressed file. To prevent this, add a metadata::trusted metadata to the file once the user acknowledges the file as trusted. This adds metadata to the file, which cannot be added unless it has access to the computer. Also remove the SHEBANG "trusted" content we were putting inside the desktop file, since that doesn't add more security since it can come with the file itself. https://bugzilla.gnome.org/show_bug.cgi?id=777991
DLLEXPORT int tjDecodeYUV(tjhandle handle, const unsigned char *srcBuf, int pad, int subsamp, unsigned char *dstBuf, int width, int pitch, int height, int pixelFormat, int flags) { const unsigned char *srcPlanes[3]; int pw0, ph0, strides[3], retval = -1; tjinstance *this = (tjinstance *)handle; if (!this) _throwg("tjDecodeYUV(): Invalid handle"); this->isInstanceError = FALSE; if (srcBuf == NULL || pad < 0 || !isPow2(pad) || subsamp < 0 || subsamp >= NUMSUBOPT || width <= 0 || height <= 0) _throw("tjDecodeYUV(): Invalid argument"); pw0 = tjPlaneWidth(0, width, subsamp); ph0 = tjPlaneHeight(0, height, subsamp); srcPlanes[0] = srcBuf; strides[0] = PAD(pw0, pad); if (subsamp == TJSAMP_GRAY) { strides[1] = strides[2] = 0; srcPlanes[1] = srcPlanes[2] = NULL; } else { int pw1 = tjPlaneWidth(1, width, subsamp); int ph1 = tjPlaneHeight(1, height, subsamp); strides[1] = strides[2] = PAD(pw1, pad); srcPlanes[1] = srcPlanes[0] + strides[0] * ph0; srcPlanes[2] = srcPlanes[1] + strides[1] * ph1; } return tjDecodeYUVPlanes(handle, srcPlanes, strides, subsamp, dstBuf, width, pitch, height, pixelFormat, flags); bailout: return retval; }
0
[ "CWE-787" ]
libjpeg-turbo
3d9c64e9f8aa1ee954d1d0bb3390fc894bb84da3
222,444,050,006,608,360,000,000,000,000,000,000,000
38
tjLoadImage(): Fix int overflow/segfault w/big BMP Fixes #304
static int genl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, struct netlink_ext_ack *extack) { const struct genl_family *family; int err; family = genl_family_find_byid(nlh->nlmsg_type); if (family == NULL) return -ENOENT; if (!family->parallel_ops) genl_lock(); err = genl_family_rcv_msg(family, skb, nlh, extack); if (!family->parallel_ops) genl_unlock(); return err; }
0
[ "CWE-399", "CWE-401" ]
linux
ceabee6c59943bdd5e1da1a6a20dc7ee5f8113a2
56,017,898,878,448,580,000,000,000,000,000,000,000
20
genetlink: Fix a memory leak on error path In genl_register_family(), when idr_alloc() fails, we forget to free the memory we possibly allocate for family->attrbuf. Reported-by: Hulk Robot <[email protected]> Fixes: 2ae0f17df1cd ("genetlink: use idr to track families") Signed-off-by: YueHaibing <[email protected]> Reviewed-by: Kirill Tkhai <[email protected]> Signed-off-by: David S. Miller <[email protected]>
STBIDEF int stbi_info(char const *filename, int *x, int *y, int *comp) { FILE *f = stbi__fopen(filename, "rb"); int result; if (!f) return stbi__err("can't fopen", "Unable to open file"); result = stbi_info_from_file(f, x, y, comp); fclose(f); return result; }
0
[ "CWE-787" ]
stb
5ba0baaa269b3fd681828e0e3b3ac0f1472eaf40
276,156,734,390,331,330,000,000,000,000,000,000,000
9
stb_image: Reject fractional JPEG component subsampling ratios The component resamplers are not written to support this and I've never seen it happen in a real (non-crafted) JPEG file so I'm fine rejecting this as outright corrupt. Fixes issue #1178.
int copy_screen(void) { char *fbp; int i, y, block_size; if (! fs_factor) { return 0; } if (debug_tiles) fprintf(stderr, "copy_screen\n"); if (unixpw_in_progress) return 0; if (! main_fb) { return 0; } block_size = ((dpy_y/fs_factor) * main_bytes_per_line); fbp = main_fb; y = 0; X_LOCK; /* screen may be too big for 1 shm area, so broken into fs_factor */ for (i=0; i < fs_factor; i++) { XRANDR_SET_TRAP_RET(-1, "copy_screen-set"); copy_image(fullscreen, 0, y, 0, 0); XRANDR_CHK_TRAP_RET(-1, "copy_screen-chk"); memcpy(fbp, fullscreen->data, (size_t) block_size); y += dpy_y / fs_factor; fbp += block_size; } X_UNLOCK; if (blackouts) { blackout_regions(); } mark_rect_as_modified(0, 0, dpy_x, dpy_y, 0); return 0; }
0
[ "CWE-862", "CWE-284", "CWE-732" ]
x11vnc
69eeb9f7baa14ca03b16c9de821f9876def7a36a
63,389,055,161,149,080,000,000,000,000,000,000,000
44
scan: limit access to shared memory segments to current user
static void io_free_file_tables(struct io_file_table *table, unsigned nr_files) { size_t size = nr_files * sizeof(struct io_fixed_file); io_free_page_table((void **)table->files, size); table->files = NULL; }
0
[ "CWE-125" ]
linux
89c2b3b74918200e46699338d7bcc19b1ea12110
263,970,551,028,217,560,000,000,000,000,000,000,000
7
io_uring: reexpand under-reexpanded iters [ 74.211232] BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x809/0x900 [ 74.212778] Read of size 8 at addr ffff888025dc78b8 by task syz-executor.0/828 [ 74.214756] CPU: 0 PID: 828 Comm: syz-executor.0 Not tainted 5.14.0-rc3-next-20210730 #1 [ 74.216525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 74.219033] Call Trace: [ 74.219683] dump_stack_lvl+0x8b/0xb3 [ 74.220706] print_address_description.constprop.0+0x1f/0x140 [ 74.224226] kasan_report.cold+0x7f/0x11b [ 74.226085] iov_iter_revert+0x809/0x900 [ 74.227960] io_write+0x57d/0xe40 [ 74.232647] io_issue_sqe+0x4da/0x6a80 [ 74.242578] __io_queue_sqe+0x1ac/0xe60 [ 74.245358] io_submit_sqes+0x3f6e/0x76a0 [ 74.248207] __do_sys_io_uring_enter+0x90c/0x1a20 [ 74.257167] do_syscall_64+0x3b/0x90 [ 74.257984] entry_SYSCALL_64_after_hwframe+0x44/0xae old_size = iov_iter_count(); ... iov_iter_revert(old_size - iov_iter_count()); If iov_iter_revert() is done base on the initial size as above, and the iter is truncated and not reexpanded in the middle, it miscalculates borders causing problems. This trace is due to no one reexpanding after generic_write_checks(). Now iters store how many bytes has been truncated, so reexpand them to the initial state right before reverting. Cc: [email protected] Reported-by: Palash Oswal <[email protected]> Reported-by: Sudip Mukherjee <[email protected]> Reported-and-tested-by: [email protected] Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Al Viro <[email protected]>
static struct socket *get_raw_socket(int fd) { struct { struct sockaddr_ll sa; char buf[MAX_ADDR_LEN]; } uaddr; int uaddr_len = sizeof uaddr, r; struct socket *sock = sockfd_lookup(fd, &r); if (!sock) return ERR_PTR(-ENOTSOCK); /* Parameter checking */ if (sock->sk->sk_type != SOCK_RAW) { r = -ESOCKTNOSUPPORT; goto err; } r = sock->ops->getname(sock, (struct sockaddr *)&uaddr.sa, &uaddr_len, 0); if (r) goto err; if (uaddr.sa.sll_family != AF_PACKET) { r = -EPFNOSUPPORT; goto err; } return sock; err: fput(sock->file); return ERR_PTR(r); }
0
[ "CWE-399" ]
linux
dd7633ecd553a5e304d349aa6f8eb8a0417098c5
309,536,326,285,069,340,000,000,000,000,000,000,000
32
vhost-net: fix use-after-free in vhost_net_flush vhost_net_ubuf_put_and_wait has a confusing name: it will actually also free it's argument. Thus since commit 1280c27f8e29acf4af2da914e80ec27c3dbd5c01 "vhost-net: flush outstanding DMAs on memory change" vhost_net_flush tries to use the argument after passing it to vhost_net_ubuf_put_and_wait, this results in use after free. To fix, don't free the argument in vhost_net_ubuf_put_and_wait, add an new API for callers that want to free ubufs. Acked-by: Asias He <[email protected]> Acked-by: Jason Wang <[email protected]> Signed-off-by: Michael S. Tsirkin <[email protected]> Signed-off-by: David S. Miller <[email protected]>
PyBytes_Size(PyObject *op) { if (!PyBytes_Check(op)) { PyErr_Format(PyExc_TypeError, "expected bytes, %.200s found", Py_TYPE(op)->tp_name); return -1; } return Py_SIZE(op); }
0
[ "CWE-190" ]
cpython
6c004b40f9d51872d848981ef1a18bb08c2dfc42
208,599,048,517,710,800,000,000,000,000,000,000,000
9
bpo-30657: Fix CVE-2017-1000158 (#4758) Fixes possible integer overflow in PyBytes_DecodeEscape. Co-Authored-By: Jay Bosamiya <[email protected]>
GF_Err tsro_Write(GF_Box *s, GF_BitStream *bs) { GF_Err e; GF_TimeOffHintEntryBox *ptr = (GF_TimeOffHintEntryBox *)s; if (ptr == NULL) return GF_BAD_PARAM; e = gf_isom_box_write_header(s, bs); if (e) return e; gf_bs_write_u32(bs, ptr->TimeOffset); return GF_OK; }
0
[ "CWE-125" ]
gpac
bceb03fd2be95097a7b409ea59914f332fb6bc86
129,381,015,136,224,110,000,000,000,000,000,000,000
11
fixed 2 possible heap overflows (inc. #1088)
static noinline int btrfs_ioctl_subvol_setflags(struct file *file, void __user *arg) { struct inode *inode = file_inode(file); struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); struct btrfs_root *root = BTRFS_I(inode)->root; struct btrfs_trans_handle *trans; u64 root_flags; u64 flags; int ret = 0; if (!inode_owner_or_capable(inode)) return -EPERM; ret = mnt_want_write_file(file); if (ret) goto out; if (btrfs_ino(BTRFS_I(inode)) != BTRFS_FIRST_FREE_OBJECTID) { ret = -EINVAL; goto out_drop_write; } if (copy_from_user(&flags, arg, sizeof(flags))) { ret = -EFAULT; goto out_drop_write; } if (flags & BTRFS_SUBVOL_CREATE_ASYNC) { ret = -EINVAL; goto out_drop_write; } if (flags & ~BTRFS_SUBVOL_RDONLY) { ret = -EOPNOTSUPP; goto out_drop_write; } down_write(&fs_info->subvol_sem); /* nothing to do */ if (!!(flags & BTRFS_SUBVOL_RDONLY) == btrfs_root_readonly(root)) goto out_drop_sem; root_flags = btrfs_root_flags(&root->root_item); if (flags & BTRFS_SUBVOL_RDONLY) { btrfs_set_root_flags(&root->root_item, root_flags | BTRFS_ROOT_SUBVOL_RDONLY); } else { /* * Block RO -> RW transition if this subvolume is involved in * send */ spin_lock(&root->root_item_lock); if (root->send_in_progress == 0) { btrfs_set_root_flags(&root->root_item, root_flags & ~BTRFS_ROOT_SUBVOL_RDONLY); spin_unlock(&root->root_item_lock); } else { spin_unlock(&root->root_item_lock); btrfs_warn(fs_info, "Attempt to set subvolume %llu read-write during send", root->root_key.objectid); ret = -EPERM; goto out_drop_sem; } } trans = btrfs_start_transaction(root, 1); if (IS_ERR(trans)) { ret = PTR_ERR(trans); goto out_reset; } ret = btrfs_update_root(trans, fs_info->tree_root, &root->root_key, &root->root_item); if (ret < 0) { btrfs_end_transaction(trans); goto out_reset; } ret = btrfs_commit_transaction(trans); out_reset: if (ret) btrfs_set_root_flags(&root->root_item, root_flags); out_drop_sem: up_write(&fs_info->subvol_sem); out_drop_write: mnt_drop_write_file(file); out: return ret; }
0
[ "CWE-476", "CWE-284" ]
linux
09ba3bc9dd150457c506e4661380a6183af651c1
279,827,574,986,158,050,000,000,000,000,000,000,000
93
btrfs: merge btrfs_find_device and find_device Both btrfs_find_device() and find_device() does the same thing except that the latter does not take the seed device onto account in the device scanning context. We can merge them. Signed-off-by: Anand Jain <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
int SN_Client_WillTopicUpdate(MqttClient *client, SN_Will *will) { int rc = 0, len = 0; /* Validate required arguments */ if (client == NULL) { return MQTT_CODE_ERROR_BAD_ARG; } if (will->stat == MQTT_MSG_BEGIN) { #ifdef WOLFMQTT_MULTITHREAD /* Lock send socket mutex */ rc = wm_SemLock(&client->lockSend); if (rc != 0) { return rc; } #endif /* Encode Will Topic Update */ len = rc = SN_Encode_WillTopicUpdate(client->tx_buf, client->tx_buf_len, will); #ifdef WOLFMQTT_DEBUG_CLIENT PRINTF("MqttClient_EncodePacket: Len %d, Type %s (%d)", rc, SN_Packet_TypeDesc(SN_MSG_TYPE_WILLTOPICUPD), SN_MSG_TYPE_WILLTOPICUPD); #endif if (rc <= 0) { #ifdef WOLFMQTT_MULTITHREAD wm_SemUnlock(&client->lockSend); #endif return rc; } #ifdef WOLFMQTT_MULTITHREAD rc = wm_SemLock(&client->lockClient); if (rc == 0) { /* inform other threads of expected response */ rc = MqttClient_RespList_Add(client, (MqttPacketType)SN_MSG_TYPE_WILLTOPICRESP, 0, &will->pendResp, &will->resp.topicResp); wm_SemUnlock(&client->lockClient); } if (rc != 0) { wm_SemUnlock(&client->lockSend); return rc; /* Error locking client */ } #endif /* Send Will Topic Update packet */ rc = MqttPacket_Write(client, client->tx_buf, len); #ifdef WOLFMQTT_MULTITHREAD wm_SemUnlock(&client->lockSend); #endif if (rc != len) { #ifdef WOLFMQTT_MULTITHREAD if (wm_SemLock(&client->lockClient) == 0) { MqttClient_RespList_Remove(client, &will->pendResp); wm_SemUnlock(&client->lockClient); } #endif } will->stat = MQTT_MSG_WAIT; } /* Wait for Will Topic Update Response packet */ rc = SN_Client_WaitType(client, &will->resp.topicResp, SN_MSG_TYPE_WILLTOPICRESP, 0, client->cmd_timeout_ms); #ifdef WOLFMQTT_NONBLOCK if (rc == MQTT_CODE_CONTINUE) return rc; #endif #ifdef WOLFMQTT_MULTITHREAD if (wm_SemLock(&client->lockClient) == 0) { MqttClient_RespList_Remove(client, &will->pendResp); wm_SemUnlock(&client->lockClient); } #endif /* reset state */ will->stat = MQTT_MSG_BEGIN; return rc; }
0
[ "CWE-787" ]
wolfMQTT
84d4b53122e0fa0280c7872350b89d5777dabbb2
299,161,248,120,408,300,000,000,000,000,000,000,000
83
Fix wolfmqtt-fuzzer: Null-dereference WRITE in MqttProps_Free
static port::StatusOr<CudnnRnnDescriptor> Create( const CudnnHandle& cudnn, int num_layers, int hidden_size, int input_size, int cell_size, int batch_size, cudnnRNNInputMode_t input_mode, cudnnDirectionMode_t direction_mode, cudnnRNNMode_t rnn_mode, cudnnDataType_t data_type, cudnnDataType_t compute_type, const dnn::AlgorithmConfig& algorithm_config, float dropout, uint64 seed, ScratchAllocator* state_allocator, bool use_padded_io) { SE_ASSIGN_OR_RETURN( CudnnDropoutDescriptor dropout_desc, CudnnDropoutDescriptor::Create(cudnn, dropout, seed, state_allocator)); gpu::RnnDescriptor rnn_desc = CreateRnnDescriptor(); cudnnRNNAlgo_t rnn_algo = ToCudnnRNNAlgo(algorithm_config.algorithm()); // TODO: allow the user to choose an algorithm. auto proj_size = hidden_size; hidden_size = std::max(hidden_size, cell_size); // Require explicit algorithm config to enable tensor cores. Some configs // return CUDNN_NOT_SUPPORTED when tensor ops are enabled (which is against // the idiom that enabling tensor ops is only a hint: see nvbugs/2172799). // We can only reasonably expect the user to handle the subsequent failure // in profile mode, which is run with algorithms returned from // GetRnnAlgorithms() (which are non-default and explicitly set whether to // use tensor ops). CuDNN 7.2.1 fixed this issue. // TODO(csigg): Minimal support cuDNN version is 7.3, clean up. bool allow_tensor_ops = data_type == CUDNN_DATA_HALF; if (data_type == CUDNN_DATA_FLOAT) allow_tensor_ops = tensorflow::tensor_float_32_execution_enabled(); bool use_tensor_ops = algorithm_config.algorithm().has_value() ? algorithm_config.algorithm()->tensor_ops_enabled() : allow_tensor_ops; if (use_tensor_ops && !allow_tensor_ops) { return port::Status(port::error::INVALID_ARGUMENT, "Algo requests disallowed tensor op evaluation."); } cudnnMathType_t math_type = use_tensor_ops ? CUDNN_TENSOR_OP_MATH : CUDNN_DEFAULT_MATH; #if CUDNN_VERSION >= 8000 cudnnRNNBiasMode_t bias_mode = CUDNN_RNN_DOUBLE_BIAS; uint32_t aux_flags = 0; if (use_padded_io) aux_flags |= CUDNN_RNN_PADDED_IO_ENABLED; RETURN_IF_CUDNN_ERROR(cudnnSetRNNDescriptor_v8( /*rnnDesc=*/rnn_desc.get(), /*algo=*/rnn_algo, /*cellMode=*/rnn_mode, /*biasMode=*/bias_mode, /*dirMode=*/direction_mode, /*inputMode=*/input_mode, /*dataType=*/data_type, /*mathPrec=*/compute_type, /*mathType=*/math_type, /*inputSize=*/input_size, /*hiddenSize=*/hidden_size, /*projSize=*/proj_size, /*numLayers=*/num_layers, /*dropoutDesc=*/dropout_desc.handle(), /*auxFlags=*/aux_flags)); #else RETURN_IF_CUDNN_ERROR(cudnnSetRNNDescriptor_v6( cudnn.handle(), /*rnnDesc=*/rnn_desc.get(), /*hiddenSize=*/hidden_size, /*numLayers=*/num_layers, /*dropoutDesc=*/dropout_desc.handle(), /*inputMode=*/input_mode, /*direction=*/direction_mode, /*mode=*/rnn_mode, /*algo=*/rnn_algo, /*dataType=*/compute_type)); CHECK_CUDNN_OK(cudnnSetRNNMatrixMathType(rnn_desc.get(), math_type)); if (proj_size < hidden_size) { RETURN_IF_CUDNN_ERROR(cudnnSetRNNProjectionLayers( cudnn.handle(), /*rnnDesc=*/rnn_desc.get(), /*recProjSize=*/proj_size, /*outProjSize=*/0)); } // TODO: For now, we only use cudnnRNN**Ex API to process padded inputs. // But in the future if these APIs are used to process full length arrays, // we need to distinguish when to set it. if (use_padded_io) { RETURN_IF_CUDNN_ERROR( cudnnSetRNNPaddingMode(rnn_desc.get(), CUDNN_RNN_PADDED_IO_ENABLED)); } #endif port::StatusOr<PersistentRnnPlan> rnn_plan_wrapper; PersistentRnnPlan rnn_plan; if (rnn_algo == CUDNN_RNN_ALGO_PERSIST_DYNAMIC) { CHECK_GE(batch_size, 0); rnn_plan_wrapper = CreatePersistentRnnPlan(rnn_desc.get(), batch_size, data_type); if (!rnn_plan_wrapper.ok()) { return port::StatusOr<CudnnRnnDescriptor>(rnn_plan_wrapper.status()); } else { rnn_plan = rnn_plan_wrapper.ConsumeValueOrDie(); RETURN_IF_CUDNN_ERROR( cudnnSetPersistentRNNPlan(rnn_desc.get(), rnn_plan.get())); } } // Create the params handle. SE_ASSIGN_OR_RETURN(auto params_desc, CudnnRnnParamsDescriptor::Create( cudnn, input_size, data_type, rnn_desc.get(), rnn_mode, direction_mode, num_layers)); return CudnnRnnDescriptor(cudnn, std::move(rnn_desc), std::move(rnn_plan), num_layers, hidden_size, input_size, cell_size, batch_size, input_mode, direction_mode, rnn_mode, data_type, compute_type, algorithm_config, std::move(dropout_desc), std::move(params_desc)); }
0
[ "CWE-20" ]
tensorflow
14755416e364f17fb1870882fa778c7fec7f16e3
268,640,351,339,768,700,000,000,000,000,000,000,000
107
Prevent CHECK-fail in LSTM/GRU with zero-length input. PiperOrigin-RevId: 346239181 Change-Id: I5f233dbc076aab7bb4e31ba24f5abd4eaf99ea4f
static UINT video_data_on_data_received(IWTSVirtualChannelCallback* pChannelCallback, wStream* s) { VIDEO_CHANNEL_CALLBACK* callback = (VIDEO_CHANNEL_CALLBACK*)pChannelCallback; VIDEO_PLUGIN* video; VideoClientContext* context; UINT32 cbSize, packetType; TSMM_VIDEO_DATA data; video = (VIDEO_PLUGIN*)callback->plugin; context = (VideoClientContext*)video->wtsPlugin.pInterface; if (Stream_GetRemainingLength(s) < 4) return ERROR_INVALID_DATA; Stream_Read_UINT32(s, cbSize); if (cbSize < 8 || Stream_GetRemainingLength(s) < (cbSize - 4)) { WLog_ERR(TAG, "invalid cbSize"); return ERROR_INVALID_DATA; } Stream_Read_UINT32(s, packetType); if (packetType != TSMM_PACKET_TYPE_VIDEO_DATA) { WLog_ERR(TAG, "only expecting VIDEO_DATA on the data channel"); return ERROR_INVALID_DATA; } if (Stream_GetRemainingLength(s) < 32) { WLog_ERR(TAG, "not enough bytes for a TSMM_VIDEO_DATA"); return ERROR_INVALID_DATA; } Stream_Read_UINT8(s, data.PresentationId); Stream_Read_UINT8(s, data.Version); Stream_Read_UINT8(s, data.Flags); Stream_Seek_UINT8(s); /* reserved */ Stream_Read_UINT64(s, data.hnsTimestamp); Stream_Read_UINT64(s, data.hnsDuration); Stream_Read_UINT16(s, data.CurrentPacketIndex); Stream_Read_UINT16(s, data.PacketsInSample); Stream_Read_UINT32(s, data.SampleNumber); Stream_Read_UINT32(s, data.cbSample); data.pSample = Stream_Pointer(s); /* WLog_DBG(TAG, "videoData: id:%"PRIu8" version:%"PRIu8" flags:0x%"PRIx8" timestamp=%"PRIu64" duration=%"PRIu64 " curPacketIndex:%"PRIu16" packetInSample:%"PRIu16" sampleNumber:%"PRIu32" cbSample:%"PRIu32"", data.PresentationId, data.Version, data.Flags, data.hnsTimestamp, data.hnsDuration, data.CurrentPacketIndex, data.PacketsInSample, data.SampleNumber, data.cbSample); */ return video_VideoData(context, &data); }
0
[ "CWE-190" ]
FreeRDP
06c32f170093a6ecde93e3bc07fed6a706bfbeb3
79,256,016,384,876,460,000,000,000,000,000,000,000
56
Fixed int overflow in PresentationContext_new Thanks to hac425 CVE-2020-11038
static double mp_set_Ixyz_v(_cimg_math_parser& mp) { CImg<T> &img = mp.imgout; const int x = (int)_mp_arg(2), y = (int)_mp_arg(3), z = (int)_mp_arg(4); const double *ptrs = &_mp_arg(1) + 1; if (x>=0 && x<img.width() && y>=0 && y<img.height() && z>=0 && z<img.depth()) { const unsigned int vsiz = (unsigned int)mp.opcode[5]; T *ptrd = &img(x,y,z); const ulongT whd = (ulongT)img._width*img._height*img._depth; cimg_for_inC(img,0,vsiz - 1,c) { *ptrd = (T)*(ptrs++); ptrd+=whd; } } return cimg::type<double>::nan(); }
0
[ "CWE-770" ]
cimg
619cb58dd90b4e03ac68286c70ed98acbefd1c90
333,346,554,653,431,270,000,000,000,000,000,000,000
15
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
static int FillBasicWEBPInfo(Image *image,const uint8_t *stream,size_t length, WebPDecoderConfig *configure) { WebPBitstreamFeatures *magick_restrict features = &configure->input; int webp_status; webp_status=WebPGetFeatures(stream,length,features); if (webp_status != VP8_STATUS_OK) return(webp_status); image->columns=(size_t) features->width; image->rows=(size_t) features->height; image->depth=8; image->matte=features->has_alpha != 0 ? MagickTrue : MagickFalse; return(webp_status); }
0
[ "CWE-369" ]
ImageMagick6
a78d92dc0f468e79c3d761aae9707042952cdaca
88,954,484,969,752,500,000,000,000,000,000,000,000
21
https://github.com/ImageMagick/ImageMagick/issues/3176
int mnt_fs_set_target(struct libmnt_fs *fs, const char *tgt) { return strdup_to_struct_member(fs, target, tgt); }
0
[ "CWE-552", "CWE-703" ]
util-linux
166e87368ae88bf31112a30e078cceae637f4cdb
149,221,947,296,171,900,000,000,000,000,000,000,000
4
libmount: remove support for deleted mount table entries The "(deleted)" suffix has been originally used by kernel for deleted mountpoints. Since kernel commit 9d4d65748a5ca26ea8650e50ba521295549bf4e3 (Dec 2014) kernel does not use this suffix for mount stuff in /proc at all. Let's remove this support from libmount too. Signed-off-by: Karel Zak <[email protected]>
xmlSchemaGetNextComponent(xmlSchemaBasicItemPtr item) { switch (item->type) { case XML_SCHEMA_TYPE_ELEMENT: return ((xmlSchemaBasicItemPtr) ((xmlSchemaElementPtr) item)->next); case XML_SCHEMA_TYPE_ATTRIBUTE: return ((xmlSchemaBasicItemPtr) ((xmlSchemaAttributePtr) item)->next); case XML_SCHEMA_TYPE_COMPLEX: case XML_SCHEMA_TYPE_SIMPLE: return ((xmlSchemaBasicItemPtr) ((xmlSchemaTypePtr) item)->next); case XML_SCHEMA_TYPE_ANY: case XML_SCHEMA_TYPE_ANY_ATTRIBUTE: return (NULL); case XML_SCHEMA_TYPE_PARTICLE: return ((xmlSchemaBasicItemPtr) ((xmlSchemaParticlePtr) item)->next); case XML_SCHEMA_TYPE_SEQUENCE: case XML_SCHEMA_TYPE_CHOICE: case XML_SCHEMA_TYPE_ALL: return (NULL); case XML_SCHEMA_TYPE_GROUP: return (NULL); case XML_SCHEMA_TYPE_ATTRIBUTEGROUP: return ((xmlSchemaBasicItemPtr) ((xmlSchemaAttributeGroupPtr) item)->next); case XML_SCHEMA_TYPE_IDC_UNIQUE: case XML_SCHEMA_TYPE_IDC_KEY: case XML_SCHEMA_TYPE_IDC_KEYREF: return ((xmlSchemaBasicItemPtr) ((xmlSchemaIDCPtr) item)->next); default: return (NULL); } }
0
[ "CWE-134" ]
libxml2
4472c3a5a5b516aaf59b89be602fbce52756c3e9
175,359,828,951,788,340,000,000,000,000,000,000,000
31
Fix some format string warnings with possible format string vulnerability For https://bugzilla.gnome.org/show_bug.cgi?id=761029 Decorate every method in libxml2 with the appropriate LIBXML_ATTR_FORMAT(fmt,args) macro and add some cleanups following the reports.
char *my_memmem(char *haystack, size_t haystacklen, char *needle, size_t needlelen) { char *c; for (c = haystack; c <= haystack + haystacklen - needlelen; c++) if (!memcmp(c, needle, needlelen)) return c; return 0; }
0
[ "CWE-476", "CWE-119" ]
LibRaw
d7c3d2cb460be10a3ea7b32e9443a83c243b2251
322,000,658,541,269,180,000,000,000,000,000,000,000
8
Secunia SA75000 advisory: several buffer overruns
tracing_trace_options_write(struct file *filp, const char __user *ubuf, size_t cnt, loff_t *ppos) { struct seq_file *m = filp->private_data; struct trace_array *tr = m->private; char buf[64]; int ret; if (cnt >= sizeof(buf)) return -EINVAL; if (copy_from_user(buf, ubuf, cnt)) return -EFAULT; buf[cnt] = 0; ret = trace_set_options(tr, buf); if (ret < 0) return ret; *ppos += cnt; return cnt; }
0
[ "CWE-415" ]
linux
4397f04575c44e1440ec2e49b6302785c95fd2f8
239,651,879,930,428,300,000,000,000,000,000,000,000
24
tracing: Fix possible double free on failure of allocating trace buffer Jing Xia and Chunyan Zhang reported that on failing to allocate part of the tracing buffer, memory is freed, but the pointers that point to them are not initialized back to NULL, and later paths may try to free the freed memory again. Jing and Chunyan fixed one of the locations that does this, but missed a spot. Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code") Reported-by: Jing Xia <[email protected]> Reported-by: Chunyan Zhang <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
static inline void xen_set_restricted_virtio_memory_access(void) { if (IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain()) platform_set(PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS); }
0
[]
linux
fa1f57421e0b1c57843902c89728f823abc32f02
289,860,416,961,336,850,000,000,000,000,000,000,000
5
xen/virtio: Enable restricted memory access using Xen grant mappings In order to support virtio in Xen guests add a config option XEN_VIRTIO enabling the user to specify whether in all Xen guests virtio should be able to access memory via Xen grant mappings only on the host side. Also set PLATFORM_VIRTIO_RESTRICTED_MEM_ACCESS feature from the guest initialization code on Arm and x86 if CONFIG_XEN_VIRTIO is enabled. Signed-off-by: Juergen Gross <[email protected]> Signed-off-by: Oleksandr Tyshchenko <[email protected]> Reviewed-by: Stefano Stabellini <[email protected]> Reviewed-by: Boris Ostrovsky <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Juergen Gross <[email protected]>
get_issuers_num (gnutls_session_t session, uint8_t * data, ssize_t data_size) { int issuers_dn_len = 0, result; unsigned size; /* Count the number of the given issuers; * This is used to allocate the issuers_dn without * using realloc(). */ if (data_size == 0 || data == NULL) return 0; if (data_size > 0) do { /* This works like DECR_LEN() */ result = GNUTLS_E_UNEXPECTED_PACKET_LENGTH; DECR_LENGTH_COM (data_size, 2, goto error); size = _gnutls_read_uint16 (data); result = GNUTLS_E_UNEXPECTED_PACKET_LENGTH; DECR_LENGTH_COM (data_size, size, goto error); data += 2; if (size > 0) { issuers_dn_len++; data += size; } if (data_size == 0) break; } while (1); return issuers_dn_len; error: return result; }
0
[ "CWE-399" ]
gnutls
9c62f4feb2bdd6fbbb06eb0c60bfdea80d21bbb8
97,348,370,262,734,770,000,000,000,000,000,000,000
44
Deinitialize the correct number of certificates. Reported by Remi Gacogne.
bool unit_stop_pending(Unit *u) { assert(u); /* This call does check the current state of the unit. It's * hence useful to be called from state change calls of the * unit itself, where the state isn't updated yet. This is * different from unit_inactive_or_pending() which checks both * the current state and for a queued job. */ return u->job && u->job->type == JOB_STOP; }
0
[ "CWE-269" ]
systemd
bf65b7e0c9fc215897b676ab9a7c9d1c688143ba
25,102,928,019,009,906,000,000,000,000,000,000,000
11
core: imply NNP and SUID/SGID restriction for DynamicUser=yes service Let's be safe, rather than sorry. This way DynamicUser=yes services can neither take benefit of, nor create SUID/SGID binaries. Given that DynamicUser= is a recent addition only we should be able to get away with turning this on, even though this is strictly speaking a binary compatibility breakage.
static MAIN_WINDOW_REC *mainwindows_find_left(MAIN_WINDOW_REC *window, int find_last) { int first_line, last_line, first_column; MAIN_WINDOW_REC *best; GSList *tmp; if (window != NULL) { first_line = window->first_line; last_line = window->last_line; first_column = window->first_column; } else { first_line = last_line = screen_height; first_column = screen_width; } if (find_last) first_column = screen_width; best = NULL; for (tmp = mainwindows; tmp != NULL; tmp = tmp->next) { MAIN_WINDOW_REC *rec = tmp->data; if (rec->first_line >= first_line && rec->last_line <= last_line && rec->last_column < first_column && (best == NULL || rec->last_column > best->last_column)) best = rec; } return best; }
0
[ "CWE-476" ]
irssi
5b5bfef03596d95079c728f65f523570dd7b03aa
337,720,089,556,321,300,000,000,000,000,000,000,000
31
check the error condition of mainwindow_create
whichmodule(PyObject *global, PyObject *dotted_path) { PyObject *module_name; PyObject *module = NULL; Py_ssize_t i; PyObject *modules; _Py_IDENTIFIER(__module__); _Py_IDENTIFIER(modules); _Py_IDENTIFIER(__main__); if (_PyObject_LookupAttrId(global, &PyId___module__, &module_name) < 0) { return NULL; } if (module_name) { /* In some rare cases (e.g., bound methods of extension types), __module__ can be None. If it is so, then search sys.modules for the module of global. */ if (module_name != Py_None) return module_name; Py_CLEAR(module_name); } assert(module_name == NULL); /* Fallback on walking sys.modules */ modules = _PySys_GetObjectId(&PyId_modules); if (modules == NULL) { PyErr_SetString(PyExc_RuntimeError, "unable to get sys.modules"); return NULL; } if (PyDict_CheckExact(modules)) { i = 0; while (PyDict_Next(modules, &i, &module_name, &module)) { if (_checkmodule(module_name, module, global, dotted_path) == 0) { Py_INCREF(module_name); return module_name; } if (PyErr_Occurred()) { return NULL; } } } else { PyObject *iterator = PyObject_GetIter(modules); if (iterator == NULL) { return NULL; } while ((module_name = PyIter_Next(iterator))) { module = PyObject_GetItem(modules, module_name); if (module == NULL) { Py_DECREF(module_name); Py_DECREF(iterator); return NULL; } if (_checkmodule(module_name, module, global, dotted_path) == 0) { Py_DECREF(module); Py_DECREF(iterator); return module_name; } Py_DECREF(module); Py_DECREF(module_name); if (PyErr_Occurred()) { Py_DECREF(iterator); return NULL; } } Py_DECREF(iterator); } /* If no module is found, use __main__. */ module_name = _PyUnicode_FromId(&PyId___main__); Py_XINCREF(module_name); return module_name; }
0
[ "CWE-190", "CWE-369" ]
cpython
a4ae828ee416a66d8c7bf5ee71d653c2cc6a26dd
259,377,905,945,003,300,000,000,000,000,000,000,000
73
closes bpo-34656: Avoid relying on signed overflow in _pickle memos. (GH-9261)
static void sig_chatnet_saved(IRC_CHATNET_REC *rec, CONFIG_NODE *node) { if (!IS_IRC_CHATNET(rec)) return; if (rec->usermode != NULL) iconfig_node_set_str(node, "usermode", rec->usermode); if (rec->max_cmds_at_once > 0) iconfig_node_set_int(node, "cmdmax", rec->max_cmds_at_once); if (rec->cmd_queue_speed > 0) iconfig_node_set_int(node, "cmdspeed", rec->cmd_queue_speed); if (rec->max_query_chans > 0) iconfig_node_set_int(node, "max_query_chans", rec->max_query_chans); if (rec->max_kicks > 0) iconfig_node_set_int(node, "max_kicks", rec->max_kicks); if (rec->max_msgs > 0) iconfig_node_set_int(node, "max_msgs", rec->max_msgs); if (rec->max_modes > 0) iconfig_node_set_int(node, "max_modes", rec->max_modes); if (rec->max_whois > 0) iconfig_node_set_int(node, "max_whois", rec->max_whois); }
0
[ "CWE-416" ]
irssi
b8d3301d34f383f039071214872570385de1bb59
288,351,219,471,545,050,000,000,000,000,000,000,000
24
SASL support The only supported methods are PLAIN and EXTERNAL, the latter is untested as of now. The code gets the values from the keys named sasl_{mechanism,username,password} specified for each chatnet.
PHP_FUNCTION(header_register_callback) { zval *callback_func; char *callback_name; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "z", &callback_func) == FAILURE) { return; } if (!zend_is_callable(callback_func, 0, &callback_name TSRMLS_CC)) { efree(callback_name); RETURN_FALSE; } efree(callback_name); if (SG(callback_func)) { zval_ptr_dtor(&SG(callback_func)); SG(fci_cache) = empty_fcall_info_cache; } Z_ADDREF_P(callback_func); SG(callback_func) = callback_func; RETURN_TRUE; }
0
[]
php-src
2438490addfbfba51e12246a74588b2382caa08a
110,459,272,630,786,650,000,000,000,000,000,000,000
25
slim post data
void HllVal::init(FunctionContext* ctx) { len = doris::HLL_COLUMN_DEFAULT_LEN; ptr = ctx->allocate(len); memset(ptr, 0, len); // the HLL type is HLL_DATA_FULL in UDF or UDAF ptr[0] = doris::HllDataType::HLL_DATA_FULL; is_null = false; }
0
[ "CWE-200" ]
incubator-doris
246ac4e37aa4da6836b7850cb990f02d1c3725a3
127,575,135,615,849,300,000,000,000,000,000,000,000
9
[fix] fix a bug of encryption function with iv may return wrong result (#8277)
static long native_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { long ret = -ENOIOCTLCMD; if (file->f_op->unlocked_ioctl) ret = file->f_op->unlocked_ioctl(file, cmd, arg); return ret; }
0
[ "CWE-787" ]
linux
a1dfb4c48cc1e64eeb7800a27c66a6f7e88d075a
105,840,834,099,933,650,000,000,000,000,000,000,000
9
media: v4l2-compat-ioctl32.c: refactor compat ioctl32 logic The 32-bit compat v4l2 ioctl handling is implemented based on its 64-bit equivalent. It converts 32-bit data structures into its 64-bit equivalents and needs to provide the data to the 64-bit ioctl in user space memory which is commonly allocated using compat_alloc_user_space(). However, due to how that function is implemented, it can only be called a single time for every syscall invocation. Supposedly to avoid this limitation, the existing code uses a mix of memory from the kernel stack and memory allocated through compat_alloc_user_space(). Under normal circumstances, this would not work, because the 64-bit ioctl expects all pointers to point to user space memory. As a workaround, set_fs(KERNEL_DS) is called to temporarily disable this extra safety check and allow kernel pointers. However, this might introduce a security vulnerability: The result of the 32-bit to 64-bit conversion is writeable by user space because the output buffer has been allocated via compat_alloc_user_space(). A malicious user space process could then manipulate pointers inside this output buffer, and due to the previous set_fs(KERNEL_DS) call, functions like get_user() or put_user() no longer prevent kernel memory access. The new approach is to pre-calculate the total amount of user space memory that is needed, allocate it using compat_alloc_user_space() and then divide up the allocated memory to accommodate all data structures that need to be converted. An alternative approach would have been to retain the union type karg that they allocated on the kernel stack in do_video_ioctl(), copy all data from user space into karg and then back to user space. However, we decided against this approach because it does not align with other compat syscall implementations. Instead, we tried to replicate the get_user/put_user pairs as found in other places in the kernel: if (get_user(clipcount, &up->clipcount) || put_user(clipcount, &kp->clipcount)) return -EFAULT; Notes from [email protected]: This patch was taken from: https://github.com/LineageOS/android_kernel_samsung_apq8084/commit/97b733953c06e4f0398ade18850f0817778255f7 Clearly nobody could be bothered to upstream this patch or at minimum tell us :-( We only heard about this a week ago. This patch was rebased and cleaned up. Compared to the original I also swapped the order of the convert_in_user arguments so that they matched copy_in_user. It was hard to review otherwise. I also replaced the ALLOC_USER_SPACE/ALLOC_AND_GET by a normal function. Fixes: 6b5a9492ca ("v4l: introduce string control support.") Signed-off-by: Daniel Mentz <[email protected]> Co-developed-by: Hans Verkuil <[email protected]> Acked-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]> Cc: <[email protected]> # for v4.15 and up Signed-off-by: Mauro Carvalho Chehab <[email protected]>