Dataset schema (each record below lists these eight fields, in this order):

  func        string     lengths 0-484k     (function source code)
  target      int64      range 0-1
  cwe         sequence   lengths 0-4        (CWE identifiers)
  project     string     799 distinct values
  commit_id   string     length 40          (full commit SHA-1)
  hash        float64    range ~1.2157e33 to ~3.4028e38
  size        int64      range 1-24k
  message     string     lengths 0-13.3k    (commit message)
njs_block_type(njs_generator_block_type_t type) { switch (type) { case NJS_GENERATOR_LOOP: return "LOOP "; case NJS_GENERATOR_SWITCH: return "SWITCH"; case NJS_GENERATOR_BLOCK: return "BLOCK "; default: return "TRY "; } }
0
[ "CWE-703", "CWE-754" ]
njs
404553896792b8f5f429dc8852d15784a59d8d3e
79,628,358,480,669,690,000,000,000,000,000,000,000
13
Fixed break instruction in a try-catch block. Previously, the JUMP offset for a break instruction inside a try-catch block was not set to the correct offset during code generation when a return instruction was present in an inner try-catch block. The fix is to update the JUMP offset appropriately. This closes issue #553 on GitHub.
static int do_rt_sigqueueinfo(pid_t pid, int sig, siginfo_t *info) { /* Not even root can pretend to send signals from the kernel. * Nor can they impersonate a kill()/tgkill(), which adds source info. */ if ((info->si_code >= 0 || info->si_code == SI_TKILL) && (task_pid_vnr(current) != pid)) { /* We used to allow any < 0 si_code */ WARN_ON_ONCE(info->si_code < 0); return -EPERM; } info->si_signo = sig; /* POSIX.1b doesn't mention process groups. */ return kill_proc_info(sig, info, pid); }
0
[ "CWE-399" ]
linux
b9e146d8eb3b9ecae5086d373b50fa0c1f3e7f0f
279,826,248,916,580,000,000,000,000,000,000,000,000
16
kernel/signal.c: stop info leak via the tkill and the tgkill syscalls This fixes a kernel memory contents leak via the tkill and tgkill syscalls for compat processes. This is visible in the siginfo_t->_sifields._rt.si_sigval.sival_ptr field when handling signals delivered from tkill. The place of the infoleak: int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from) { ... put_user_ex(ptr_to_compat(from->si_ptr), &to->si_ptr); ... } Signed-off-by: Emese Revfy <[email protected]> Reviewed-by: PaX Team <[email protected]> Signed-off-by: Kees Cook <[email protected]> Cc: Al Viro <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Serge Hallyn <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
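The record above shows the infoleak pattern: a siginfo_t populated field by field, so untouched bytes and padding can leak to user space. A minimal standalone sketch of the defensive idiom (hypothetical struct and function names, not the kernel code):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical message struct with implicit padding between fields. */
struct msg {
    char code;   /* padding typically follows on most ABIs */
    int  value;
};

/* Unsafe: field-by-field init leaves padding (and any skipped fields)
 * holding stale stack contents, which leak if the object is copied out. */
static void fill_unsafe(struct msg *m) {
    m->code = 1;
    m->value = 42;
}

/* Safe: zero the whole object first, then set the fields. */
static void fill_safe(struct msg *m) {
    memset(m, 0, sizeof(*m));
    m->code = 1;
    m->value = 42;
}

int main(void) {
    struct msg a, b;
    fill_unsafe(&a);
    fill_safe(&b);
    printf("unsafe code: %d, safe code: %d\n", a.code, b.code);
    return 0;
}
```

Zeroing the whole object first guarantees that padding and any fields the caller skips are never stale stack memory.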
dissect_usb_video_colorformat(proto_tree *tree, tvbuff_t *tvb, int offset) { proto_tree_add_item(tree, hf_usb_vid_color_primaries, tvb, offset, 1, ENC_LITTLE_ENDIAN); proto_tree_add_item(tree, hf_usb_vid_transfer_characteristics, tvb, offset+1, 1, ENC_LITTLE_ENDIAN); proto_tree_add_item(tree, hf_usb_vid_matrix_coefficients, tvb, offset+2, 1, ENC_LITTLE_ENDIAN); offset +=3; return offset; }
0
[ "CWE-476" ]
wireshark
2cb5985bf47bdc8bea78d28483ed224abdd33dc6
27,096,917,785,980,833,000,000,000,000,000,000,000
9
Make class "type" for USB conversations. USB dissectors can't assume that only their class type has been passed around in the conversation. Make explicit check that class type expected matches the dissector and stop/prevent dissection if there isn't a match. Bug: 12356 Change-Id: Ib23973a4ebd0fbb51952ffc118daf95e3389a209 Reviewed-on: https://code.wireshark.org/review/15212 Petri-Dish: Michael Mann <[email protected]> Reviewed-by: Martin Kaiser <[email protected]> Petri-Dish: Martin Kaiser <[email protected]> Tested-by: Petri Dish Buildbot <[email protected]> Reviewed-by: Michael Mann <[email protected]>
static int apparmor_file_alloc_security(struct file *file) { /* freed by apparmor_file_free_security */ file->f_security = aa_alloc_file_context(GFP_KERNEL); if (!file->f_security) return -ENOMEM; return 0; }
0
[ "CWE-119", "CWE-264", "CWE-369" ]
linux
30a46a4647fd1df9cf52e43bf467f0d9265096ca
137,182,360,817,983,510,000,000,000,000,000,000,000
9
apparmor: fix oops, validate buffer size in apparmor_setprocattr() When proc_pid_attr_write() was changed to use memdup_user apparmor's (interface violating) assumption that the setprocattr buffer was always a single page was violated. The size test is not strictly speaking needed as proc_pid_attr_write() will reject anything larger, but for the sake of robustness we can keep it in. SMACK and SELinux look safe to me, but somebody else should probably have a look just in case. Based on original patch from Vegard Nossum <[email protected]> modified for the case that apparmor provides null termination. Fixes: bb646cdb12e75d82258c2f2e7746d5952d3e321a Reported-by: Vegard Nossum <[email protected]> Cc: Al Viro <[email protected]> Cc: John Johansen <[email protected]> Cc: Paul Moore <[email protected]> Cc: Stephen Smalley <[email protected]> Cc: Eric Paris <[email protected]> Cc: Casey Schaufler <[email protected]> Cc: [email protected] Signed-off-by: John Johansen <[email protected]> Reviewed-by: Tyler Hicks <[email protected]> Signed-off-by: James Morris <[email protected]>
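The apparmor fix above keeps a defensive size check even though the caller already bounds the write. A self-contained sketch of that style of check, with invented names and a stand-in constant for PAGE_SIZE:

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define FAKE_PAGE_SIZE 4096  /* stand-in for PAGE_SIZE */

/* Reject writes the one-page parser cannot hold, leaving room for a
 * null terminator, instead of assuming callers never exceed it. */
static int setattr_write(const char *value, size_t size) {
    if (size == 0 || size >= FAKE_PAGE_SIZE)
        return -E2BIG;
    char buf[FAKE_PAGE_SIZE];
    memcpy(buf, value, size);
    buf[size] = '\0';  /* safe: size < FAKE_PAGE_SIZE */
    return (int)size;
}

int main(void) {
    printf("%d\n", setattr_write("profile", 7));     /* accepted: 7 */
    char big[FAKE_PAGE_SIZE + 1];
    memset(big, 'x', sizeof(big));
    printf("%d\n", setattr_write(big, sizeof(big))); /* rejected: -E2BIG */
    return 0;
}
```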
static bool freelist_state_initialize(union freelist_init_state *state, struct kmem_cache *cachep, unsigned int count) { bool ret; unsigned int rand; /* Use best entropy available to define a random shift */ rand = get_random_int(); /* Use a random state if the pre-computed list is not available */ if (!cachep->random_seq) { prandom_seed_state(&state->rnd_state, rand); ret = false; } else { state->list = cachep->random_seq; state->count = count; state->pos = 0; state->rand = rand; ret = true; } return ret; }
1
[ "CWE-703" ]
linux
c4e490cf148e85ead0d1b1c2caaba833f1d5b29f
65,539,602,228,336,130,000,000,000,000,000,000,000
23
mm/slab.c: fix SLAB freelist randomization duplicate entries This patch fixes a bug in the freelist randomization code. When a high random number is used, the freelist will contain duplicate entries. It will result in different allocations sharing the same chunk. It will result in odd behaviours and crashes. It should be uncommon but it depends on the machines. We saw it happening more often on some machines (every few hours of running tests). Fixes: c7ce4f60ac19 ("mm: SLAB freelist randomization") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: John Sperbeck <[email protected]> Signed-off-by: Thomas Garnier <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
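The commit above replaces per-slot use of the random value with a scheme that cannot introduce duplicates. A runnable sketch of the repaired idea, walking a precomputed permutation from a random starting offset and wrapping around (illustrative names; the kernel's freelist code differs in detail):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define COUNT 8

/* Walk a precomputed permutation from a random start and wrap around;
 * every slot is visited exactly once, so no chunk is handed out twice. */
static unsigned next_slot(const unsigned *list, unsigned *pos) {
    if (*pos >= COUNT)
        *pos = 0;
    return list[(*pos)++];
}

int main(void) {
    unsigned list[COUNT] = {3, 1, 4, 0, 5, 2, 7, 6}; /* permutation of 0..7 */
    bool seen[COUNT] = {false};

    srand((unsigned)time(NULL));
    unsigned pos = (unsigned)rand() % COUNT; /* random start, not a per-slot offset */

    for (unsigned i = 0; i < COUNT; i++) {
        unsigned idx = next_slot(list, &pos);
        assert(!seen[idx]);  /* a duplicate entry would trip this */
        seen[idx] = true;
        printf("slot %u -> object %u\n", i, idx);
    }
    return 0;
}
```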
static int __ext4_get_inode_loc(struct inode *inode, struct ext4_iloc *iloc, int in_mem) { struct ext4_group_desc *gdp; struct buffer_head *bh; struct super_block *sb = inode->i_sb; ext4_fsblk_t block; int inodes_per_block, inode_offset; iloc->bh = NULL; if (!ext4_valid_inum(sb, inode->i_ino)) return -EIO; iloc->block_group = (inode->i_ino - 1) / EXT4_INODES_PER_GROUP(sb); gdp = ext4_get_group_desc(sb, iloc->block_group, NULL); if (!gdp) return -EIO; /* * Figure out the offset within the block group inode table */ inodes_per_block = (EXT4_BLOCK_SIZE(sb) / EXT4_INODE_SIZE(sb)); inode_offset = ((inode->i_ino - 1) % EXT4_INODES_PER_GROUP(sb)); block = ext4_inode_table(sb, gdp) + (inode_offset / inodes_per_block); iloc->offset = (inode_offset % inodes_per_block) * EXT4_INODE_SIZE(sb); bh = sb_getblk(sb, block); if (!bh) { ext4_error(sb, "unable to read inode block - " "inode=%lu, block=%llu", inode->i_ino, block); return -EIO; } if (!buffer_uptodate(bh)) { lock_buffer(bh); /* * If the buffer has the write error flag, we have failed * to write out another inode in the same block. In this * case, we don't have to read the block because we may * read the old inode data successfully. */ if (buffer_write_io_error(bh) && !buffer_uptodate(bh)) set_buffer_uptodate(bh); if (buffer_uptodate(bh)) { /* someone brought it uptodate while we waited */ unlock_buffer(bh); goto has_buffer; } /* * If we have all information of the inode in memory and this * is the only valid inode in the block, we need not read the * block. */ if (in_mem) { struct buffer_head *bitmap_bh; int i, start; start = inode_offset & ~(inodes_per_block - 1); /* Is the inode bitmap in cache? */ bitmap_bh = sb_getblk(sb, ext4_inode_bitmap(sb, gdp)); if (!bitmap_bh) goto make_io; /* * If the inode bitmap isn't in cache then the * optimisation may end up performing two reads instead * of one, so skip it. */ if (!buffer_uptodate(bitmap_bh)) { brelse(bitmap_bh); goto make_io; } for (i = start; i < start + inodes_per_block; i++) { if (i == inode_offset) continue; if (ext4_test_bit(i, bitmap_bh->b_data)) break; } brelse(bitmap_bh); if (i == start + inodes_per_block) { /* all other inodes are free, so skip I/O */ memset(bh->b_data, 0, bh->b_size); set_buffer_uptodate(bh); unlock_buffer(bh); goto has_buffer; } } make_io: /* * If we need to do any I/O, try to pre-readahead extra * blocks from the inode table. */ if (EXT4_SB(sb)->s_inode_readahead_blks) { ext4_fsblk_t b, end, table; unsigned num; table = ext4_inode_table(sb, gdp); /* s_inode_readahead_blks is always a power of 2 */ b = block & ~(EXT4_SB(sb)->s_inode_readahead_blks-1); if (table > b) b = table; end = b + EXT4_SB(sb)->s_inode_readahead_blks; num = EXT4_INODES_PER_GROUP(sb); if (EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_GDT_CSUM)) num -= ext4_itable_unused_count(sb, gdp); table += num / inodes_per_block; if (end > table) end = table; while (b <= end) sb_breadahead(sb, b++); } /* * There are other valid inodes in the buffer, this inode * has in-inode xattrs, or we don't have this inode in memory. * Read the block from disk. */ get_bh(bh); bh->b_end_io = end_buffer_read_sync; submit_bh(READ_META, bh); wait_on_buffer(bh); if (!buffer_uptodate(bh)) { ext4_error(sb, "unable to read inode block - inode=%lu," " block=%llu", inode->i_ino, block); brelse(bh); return -EIO; } } has_buffer: iloc->bh = bh; return 0; }
0
[ "CWE-703" ]
linux
744692dc059845b2a3022119871846e74d4f6e11
268,196,574,071,773,230,000,000,000,000,000,000,000
138
ext4: use ext4_get_block_write in buffer write Allocate uninitialized extent before ext4 buffer write and convert the extent to initialized after io completes. The purpose is to make sure an extent can only be marked initialized after it has been written with new data so we can safely drop the i_mutex lock in ext4 DIO read without exposing stale data. This helps to improve multi-thread DIO read performance on high-speed disks. Skip the nobh and data=journal mount cases to make things simple for now. Signed-off-by: Jiaying Zhang <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]>
gnutls_cipher_algorithm_t gnutls_cipher_get(gnutls_session_t session) { record_parameters_st *record_params; int ret; ret = _gnutls_epoch_get(session, EPOCH_READ_CURRENT, &record_params); if (ret < 0) return gnutls_assert_val(GNUTLS_CIPHER_NULL); return record_params->cipher->id; }
0
[ "CWE-400" ]
gnutls
1ffb827e45721ef56982d0ffd5c5de52376c428e
25,478,415,437,483,343,000,000,000,000,000,000,000
12
handshake: set a maximum number of warning messages that can be received per handshake. That is to avoid DoS due to the asymmetry between the cost of sending an alert and the cost of processing one.
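A tiny sketch of the rate-limiting idea described above: count warning alerts per handshake and abort once a cap is exceeded (the constant and names here are invented, not GnuTLS internals):

```c
#include <stdio.h>

/* Illustrative cap; the real limit and names are library-internal. */
#define MAX_WARNINGS_PER_HANDSHAKE 5

struct handshake_state {
    unsigned warnings_seen;
};

/* Processing an alert costs the receiver far more than sending one
 * costs the peer, so bound how many warnings we accept. */
static int handle_warning_alert(struct handshake_state *hs) {
    if (++hs->warnings_seen > MAX_WARNINGS_PER_HANDSHAKE)
        return -1; /* treat as fatal: likely a DoS attempt */
    return 0;      /* tolerate and continue the handshake */
}

int main(void) {
    struct handshake_state hs = {0};
    for (int i = 0; i < 8; i++)
        printf("warning %d -> %s\n", i + 1,
               handle_warning_alert(&hs) == 0 ? "tolerated" : "abort");
    return 0;
}
```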
static inline RPVector *parse_vec(RBinWasmObj *bin, ut64 bound, ParseEntryFcn parse_entry, RPVectorFree free_entry) { RBuffer *buf = bin->buf; ut32 count; if (!consume_u32_r (buf, bound, &count)) { return NULL; } RPVector *vec = r_pvector_new (free_entry); if (vec) { r_pvector_reserve (vec, count); ut32 i; for (i = 0; i < count; i++) { ut64 start = r_buf_tell (buf); void *e = parse_entry (bin, bound, i); if (!e || !r_pvector_push (vec, e)) { eprintf ("[wasm] Failed to parse entry %u/%u of vec at 0x%" PFMT64x "\n", i, count, start); free_entry (e); break; } } } return vec; }
0
[ "CWE-787" ]
radare2
b4ca66f5d4363d68a6379e5706353b3bde5104a4
328,531,736,830,511,800,000,000,000,000,000,000,000
24
Fix #20336 - wasm bin parser ##crash
static gboolean io_watch_poll_prepare(GSource *source, gint *timeout_) { IOWatchPoll *iwp = io_watch_poll_from_source(source); bool now_active = iwp->fd_can_read(iwp->opaque) > 0; bool was_active = iwp->src != NULL; if (was_active == now_active) { return FALSE; } if (now_active) { iwp->src = qio_channel_create_watch( iwp->ioc, G_IO_IN | G_IO_ERR | G_IO_HUP | G_IO_NVAL); g_source_set_callback(iwp->src, iwp->fd_read, iwp->opaque, NULL); g_source_attach(iwp->src, iwp->context); } else { g_source_destroy(iwp->src); g_source_unref(iwp->src); iwp->src = NULL; } return FALSE; }
0
[ "CWE-416" ]
qemu
a4afa548fc6dd9842ed86639b4d37d4d1c4ad480
292,740,653,713,610,170,000,000,000,000,000,000,000
22
char: move front end handlers in CharBackend Since the handlers are associated with a CharBackend, rather than the CharDriverState, it is more appropriate to store in CharBackend. This avoids the handler copy dance in qemu_chr_fe_set_handlers() then mux_chr_update_read_handler(), by storing the CharBackend pointer directly. Also a mux CharDriver should go through mux->backends[focused], since chr->be will stay NULL. Before that, it was possible to call chr->handler by mistake with surprising results, for example through qemu_chr_be_can_write(), which would result in calling the last set handler front end, not the one with focus. Signed-off-by: Marc-André Lureau <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
add_selected_from_list_view (GtkTreeModel *model, GtkTreePath *path, GtkTreeIter *iter, gpointer data) { GList **list = data; FileData *fdata; gtk_tree_model_get (model, iter, COLUMN_FILE_DATA, &fdata, -1); *list = g_list_prepend (*list, fdata); }
0
[ "CWE-22" ]
file-roller
b147281293a8307808475e102a14857055f81631
158,201,848,940,248,940,000,000,000,000,000,000,000
13
libarchive: sanitize filenames before extracting
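The one-line message above summarizes the fix: reject archive entry names that could escape the extraction directory. A standalone sketch of such a sanitizer (hypothetical helper; the real fix lives in file-roller's libarchive glue):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Reject archive member names that could escape the extraction root:
 * absolute paths and any ".." path component. */
static bool is_safe_entry_name(const char *name) {
    if (name == NULL || name[0] == '/')
        return false;
    const char *p = name;
    while (*p) {
        const char *slash = strchr(p, '/');
        size_t len = slash ? (size_t)(slash - p) : strlen(p);
        if (len == 2 && p[0] == '.' && p[1] == '.')
            return false;          /* ".." component: path traversal */
        if (!slash)
            break;
        p = slash + 1;
    }
    return true;
}

int main(void) {
    const char *samples[] = {
        "docs/readme.txt", "../../etc/passwd", "/etc/shadow", "a/..b/c",
    };
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-20s %s\n", samples[i],
               is_safe_entry_name(samples[i]) ? "ok" : "rejected");
    return 0;
}
```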
static inline void l2cap_le_sig_channel(struct l2cap_conn *conn, struct sk_buff *skb) { struct hci_conn *hcon = conn->hcon; struct l2cap_cmd_hdr *cmd; u16 len; int err; if (hcon->type != LE_LINK) goto drop; if (skb->len < L2CAP_CMD_HDR_SIZE) goto drop; cmd = (void *) skb->data; skb_pull(skb, L2CAP_CMD_HDR_SIZE); len = le16_to_cpu(cmd->len); BT_DBG("code 0x%2.2x len %d id 0x%2.2x", cmd->code, len, cmd->ident); if (len != skb->len || !cmd->ident) { BT_DBG("corrupted command"); goto drop; } err = l2cap_le_sig_cmd(conn, cmd, len, skb->data); if (err) { struct l2cap_cmd_rej_unk rej; BT_ERR("Wrong link type (%d)", err); rej.reason = cpu_to_le16(L2CAP_REJ_NOT_UNDERSTOOD); l2cap_send_cmd(conn, cmd->ident, L2CAP_COMMAND_REJ, sizeof(rej), &rej); } drop: kfree_skb(skb); }
0
[ "CWE-787" ]
linux
e860d2c904d1a9f38a24eb44c9f34b8f915a6ea3
202,991,656,146,654,070,000,000,000,000,000,000,000
40
Bluetooth: Properly check L2CAP config option output buffer length Validate the output buffer length for L2CAP config requests and responses to avoid overflowing the stack buffer used for building the option blocks. Cc: [email protected] Signed-off-by: Ben Seri <[email protected]> Signed-off-by: Marcel Holtmann <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
TEST_P(Security, BuiltinAuthenticationAndAccessAndCryptoPlugin_PermissionsEnableDiscoveryEnableAccessNone_validation_ok_enable_discovery_disable_access_none) // *INDENT-ON* { PubSubReader<HelloWorldType> reader(TEST_TOPIC_NAME); PubSubWriter<HelloWorldType> writer(TEST_TOPIC_NAME); std::string governance_file("governance_enable_discovery_enable_access_none.smime"); BuiltinAuthenticationAndAccessAndCryptoPlugin_Permissions_validation_ok_common(reader, writer, governance_file); }
0
[ "CWE-284" ]
Fast-DDS
d2aeab37eb4fad4376b68ea4dfbbf285a2926384
93,434,255,811,533,420,000,000,000,000,000,000,000
9
check remote permissions (#1387) * Refs 5346. Blackbox test Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. one-way string compare Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Do not add partition separator on last partition Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 5346. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Access control unit testing It only covers Partition and Topic permissions Signed-off-by: Iker Luengo <[email protected]> * Refs #3680. Fix partition check on Permissions plugin. Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Uncrustify Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix tests on mac Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Fix windows tests Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Avoid memory leak on test Signed-off-by: Iker Luengo <[email protected]> * Refs 3680. Proxy data mocks should not return temporary objects Signed-off-by: Iker Luengo <[email protected]> * refs 3680. uncrustify Signed-off-by: Iker Luengo <[email protected]> Co-authored-by: Miguel Company <[email protected]>
static ssize_t auto_remove_on_show(struct device *dev, struct device_attribute *attr, char *buf) { struct device_link *link = to_devlink(dev); char *str; if (link->flags & DL_FLAG_AUTOREMOVE_SUPPLIER) str = "supplier unbind"; else if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER) str = "consumer unbind"; else str = "never"; return sysfs_emit(buf, "%s\n", str); }
0
[ "CWE-787" ]
linux
aa838896d87af561a33ecefea1caa4c15a68bc47
333,192,211,870,556,750,000,000,000,000,000,000,000
15
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions Convert the various sprintf fmaily calls in sysfs device show functions to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety. Done with: $ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 . And cocci script: $ cat sysfs_emit_dev.cocci @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - sprintf(buf, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - snprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - scnprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> } @@ identifier d_show; identifier dev, attr, buf; expression chr; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... return - strcpy(buf, chr); + sysfs_emit(buf, chr); ...> } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - sprintf(buf, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - snprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... len = - scnprintf(buf, PAGE_SIZE, + sysfs_emit(buf, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; identifier len; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { <... - len += scnprintf(buf + len, PAGE_SIZE - len, + len += sysfs_emit_at(buf, len, ...); ...> return len; } @@ identifier d_show; identifier dev, attr, buf; expression chr; @@ ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf) { ... - strcpy(buf, chr); - return strlen(buf); + return sysfs_emit(buf, chr); } Signed-off-by: Joe Perches <[email protected]> Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com Signed-off-by: Greg Kroah-Hartman <[email protected]>
static Item_cache* get_cache(THD *thd, const Item *item) { return get_cache(thd, item, item->cmp_type()); }
0
[ "CWE-617" ]
server
2e7891080667c59ac80f788eef4d59d447595772
193,968,543,518,614,700,000,000,000,000,000,000,000
4
MDEV-25635 Assertion failure when pushing from HAVING into WHERE of view This bug could manifest itself after pushing a WHERE condition over a mergeable derived table / view / CTE DT into a grouping view / derived table / CTE V whose item list contained set functions with constant arguments such as MIN(2), SUM(1), etc. In such cases the field references used in the condition pushed into the view V that correspond to set functions are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation of the virtual method const_item() for the class Item_direct_view_ref the wrapped set functions with constant arguments could be erroneously taken for constant items. This could lead to a wrong result set returned by the main select query in 10.2. In 10.4, where the possibility of pushing conditions from HAVING into WHERE had been added, this could cause a crash. Approved by Sergey Petrunya <[email protected]>
inline void Pow(const T* input1_data, const Dims<4>& input1_dims, const T* input2_data, const Dims<4>& input2_dims, T* output_data, const Dims<4>& output_dims) { Pow(DimsToShape(input1_dims), input1_data, DimsToShape(input2_dims), input2_data, DimsToShape(output_dims), output_data); }
0
[ "CWE-703", "CWE-835" ]
tensorflow
dfa22b348b70bb89d6d6ec0ff53973bacb4f4695
43,157,301,744,481,530,000,000,000,000,000,000,000
6
Prevent a division by 0 in average ops. PiperOrigin-RevId: 385184660 Change-Id: I7affd4554f9b336fca29ac68f633232c094d0bd3
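A minimal illustration of the guard described above: an averaging reduction that refuses to divide when the element count is zero (generic code, not the TensorFlow kernel):

```c
#include <stdio.h>

/* Average-pool style reduction with an explicit guard: if the window
 * contributes zero elements, emit 0 instead of dividing by zero. */
static float safe_average(const float *vals, int count) {
    if (count <= 0)
        return 0.0f;            /* guard in the spirit of the fix */
    float sum = 0.0f;
    for (int i = 0; i < count; i++)
        sum += vals[i];
    return sum / (float)count;
}

int main(void) {
    float window[] = {1.0f, 2.0f, 3.0f};
    printf("avg of 3: %f\n", safe_average(window, 3));
    printf("avg of 0: %f\n", safe_average(window, 0)); /* no trap, returns 0 */
    return 0;
}
```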
void TfLiteTensorRealloc(size_t num_bytes, TfLiteTensor* tensor) { if (tensor->allocation_type != kTfLiteDynamic && tensor->allocation_type != kTfLitePersistentRo) { return; } // TODO(b/145340303): Tensor data should be aligned. if (!tensor->data.raw) { tensor->data.raw = malloc(num_bytes); } else if (num_bytes > tensor->bytes) { tensor->data.raw = realloc(tensor->data.raw, num_bytes); } tensor->bytes = num_bytes; }
0
[ "CWE-190" ]
tensorflow
7c8cc4ec69cd348e44ad6a2699057ca88faad3e5
116,072,997,571,063,430,000,000,000,000,000,000,000
13
Fix a dangerous integer overflow and a malloc of negative size. PiperOrigin-RevId: 371254154 Change-Id: I250a98a3df26328770167025670235a963a72da0
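The TfLiteTensorRealloc code above trusts num_bytes as given; the commit message says the fix targets an integer overflow and a negative-size allocation. A generic sketch of an overflow-checked size computation before realloc (invented wrapper, not the TfLite API):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Compute count * elem_size with an explicit overflow check before
 * passing it to realloc. A wrapped product would silently request a
 * tiny buffer while callers believe they got the full size. */
static void *checked_realloc(void *p, size_t count, size_t elem_size) {
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;                 /* product would overflow size_t */
    return realloc(p, count * elem_size);
}

int main(void) {
    int *a = checked_realloc(NULL, 1024, sizeof(int));
    printf("normal alloc: %s\n", a ? "ok" : "failed");
    void *b = checked_realloc(NULL, SIZE_MAX / 2, sizeof(int));
    printf("overflowing alloc: %s\n", b ? "BUG" : "rejected");
    free(a);
    free(b);
    return 0;
}
```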
static inline int ieee802154_match_sock(u8 *hw_addr, u16 pan_id, u16 short_addr, struct dgram_sock *ro) { if (!ro->bound) return 1; if (ro->src_addr.addr_type == IEEE802154_ADDR_LONG && !memcmp(ro->src_addr.hwaddr, hw_addr, IEEE802154_ADDR_LEN)) return 1; if (ro->src_addr.addr_type == IEEE802154_ADDR_SHORT && pan_id == ro->src_addr.pan_id && short_addr == ro->src_addr.short_addr) return 1; return 0; }
0
[ "CWE-20" ]
net
bceaa90240b6019ed73b49965eac7d167610be69
118,085,248,869,919,540,000,000,000,000,000,000,000
17
inet: prevent leakage of uninitialized memory to user in recv syscalls Only update *addr_len when we actually fill in sockaddr, otherwise we can return uninitialized memory from the stack to the caller in the recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL) checks because we only get called with a valid addr_len pointer either from sock_common_recvmsg or inet_recvmsg. If a blocking read waits on a socket which is concurrently shut down we now return zero and set msg_msgnamelen to 0. Reported-by: mpb <[email protected]> Suggested-by: Eric Dumazet <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Signed-off-by: David S. Miller <[email protected]>
long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { void __user *argp = (void __user *)arg; long r; switch (ioctl) { case KVM_GET_MSR_INDEX_LIST: { struct kvm_msr_list __user *user_msr_list = argp; struct kvm_msr_list msr_list; unsigned n; r = -EFAULT; if (copy_from_user(&msr_list, user_msr_list, sizeof msr_list)) goto out; n = msr_list.nmsrs; msr_list.nmsrs = num_msrs_to_save + ARRAY_SIZE(emulated_msrs); if (copy_to_user(user_msr_list, &msr_list, sizeof msr_list)) goto out; r = -E2BIG; if (n < msr_list.nmsrs) goto out; r = -EFAULT; if (copy_to_user(user_msr_list->indices, &msrs_to_save, num_msrs_to_save * sizeof(u32))) goto out; if (copy_to_user(user_msr_list->indices + num_msrs_to_save, &emulated_msrs, ARRAY_SIZE(emulated_msrs) * sizeof(u32))) goto out; r = 0; break; } case KVM_GET_SUPPORTED_CPUID: case KVM_GET_EMULATED_CPUID: { struct kvm_cpuid2 __user *cpuid_arg = argp; struct kvm_cpuid2 cpuid; r = -EFAULT; if (copy_from_user(&cpuid, cpuid_arg, sizeof cpuid)) goto out; r = kvm_dev_ioctl_get_cpuid(&cpuid, cpuid_arg->entries, ioctl); if (r) goto out; r = -EFAULT; if (copy_to_user(cpuid_arg, &cpuid, sizeof cpuid)) goto out; r = 0; break; } case KVM_X86_GET_MCE_CAP_SUPPORTED: { u64 mce_cap; mce_cap = KVM_MCE_CAP_SUPPORTED; r = -EFAULT; if (copy_to_user(argp, &mce_cap, sizeof mce_cap)) goto out; r = 0; break; } default: r = -EINVAL; } out: return r; }
0
[ "CWE-119", "CWE-703", "CWE-120" ]
linux
a08d3b3b99efd509133946056531cdf8f3a0c09b
14,202,587,724,531,713,000,000,000,000,000,000,000
69
kvm: x86: fix emulator buffer overflow (CVE-2014-0049) The problem occurs when the guest performs a pusha with the stack address pointing to an mmio address (or an invalid guest physical address) to start with, but then extending into an ordinary guest physical address. When doing repeated emulated pushes emulator_read_write sets mmio_needed to 1 on the first one. On a later push when the stack points to regular memory, mmio_nr_fragments is set to 0, but mmio_needed is not set to 0. As a result, KVM exits to userspace, and then returns to complete_emulated_mmio. In complete_emulated_mmio vcpu->mmio_cur_fragment is incremented. The termination condition of vcpu->mmio_cur_fragment == vcpu->mmio_nr_fragments is never achieved. The code bounces back and forth to userspace incrementing mmio_cur_fragment past its buffer. If the guest does nothing else it eventually leads to a crash on a memcpy from an invalid memory address. However, if guest code can cause the VM to be destroyed in another vcpu with excellent timing, then kvm_clear_async_pf_completion_queue can be used by the guest to control the data that's pointed to by the call to cancel_work_item, which can be used to gain execution. Fixes: f78146b0f9230765c6315b2e14f56112513389ad Signed-off-by: Andrew Honig <[email protected]> Cc: [email protected] (3.5+) Signed-off-by: Paolo Bonzini <[email protected]>
Bool rfbOptOtpAuth(void) { SecTypeData *s; for (s = secTypes; s->name != NULL; s++) { if (!strcmp(&s->name[strlen(s->name) - 3], "otp") && s->enabled) return TRUE; } return FALSE; }
0
[ "CWE-787" ]
turbovnc
cea98166008301e614e0d36776bf9435a536136e
300,916,413,642,148,400,000,000,000,000,000,000,000
11
Server: Fix two issues identified by ASan 1. If the TLSPlain and X509Plain security types were both disabled, then rfbOptPamAuth() would overflow the name field in the secTypes structure when testing the "none" security type, since the name of that security type has less than five characters. This issue was innocuous, since the overflow was fully contained within the secTypes structure, but the ASan error caused Xvnc to abort, which made it difficult to detect other errors. 2. If an ill-behaved RFB client sent the TurboVNC Server a fence message with more than 64 bytes, then the TurboVNC Server would try to read that message and subsequently overflow the stack before it detected that the payload was too large. This could never have occurred with any of the VNC viewers that currently support the RFB flow control extensions (TigerVNC and TurboVNC, namely.) This issue was also innocuous, since the stack overflow affected two variables (newScreens and errMsg) that were never accessed before the function returned.
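Issue 2 above is a classic read-then-validate bug. A self-contained sketch of the validate-then-read ordering for a length-prefixed message, using the 64-byte fence payload limit from the description (the wire format here is invented):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FENCE_PAYLOAD_MAX 64  /* limit taken from the description above */

/* Check the client-declared length BEFORE copying the payload into a
 * fixed-size stack buffer, instead of reading first and checking later. */
static int read_fence(const uint8_t *wire, size_t wire_len) {
    if (wire_len < 1)
        return -1;
    uint8_t declared = wire[0];           /* length byte from the client */
    if (declared > FENCE_PAYLOAD_MAX)
        return -1;                        /* reject before any copy */
    if ((size_t)declared + 1 > wire_len)
        return -1;                        /* truncated message */
    uint8_t payload[FENCE_PAYLOAD_MAX];
    memcpy(payload, wire + 1, declared);  /* cannot overflow now */
    return declared;
}

int main(void) {
    uint8_t ok[65]   = {64};   /* declares a full 64-byte payload: accepted */
    uint8_t bad[130] = {128};  /* declares 128 bytes: rejected up front */
    printf("ok:  %d\n", read_fence(ok, sizeof(ok)));
    printf("bad: %d\n", read_fence(bad, sizeof(bad)));
    return 0;
}
```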
static void perf_event_switch(struct task_struct *task, struct task_struct *next_prev, bool sched_in) { struct perf_switch_event switch_event; /* N.B. caller checks nr_switch_events != 0 */ switch_event = (struct perf_switch_event){ .task = task, .next_prev = next_prev, .event_id = { .header = { /* .type */ .misc = sched_in ? 0 : PERF_RECORD_MISC_SWITCH_OUT, /* .size */ }, /* .next_prev_pid */ /* .next_prev_tid */ }, }; perf_event_aux(perf_event_switch_output, &switch_event, NULL); }
0
[ "CWE-416", "CWE-362" ]
linux
12ca6ad2e3a896256f086497a7c7406a547ee373
57,754,481,850,397,830,000,000,000,000,000,000,000
25
perf: Fix race in swevent hash There's a race on CPU unplug where we free the swevent hash array while it can still have events on. This will result in a use-after-free which is BAD. Simply do not free the hash array on unplug. This leaves the thing around and no use-after-free takes place. When the last swevent dies, we do a for_each_possible_cpu() iteration anyway to clean these up, at which time we'll free it, so no leakage will occur. Reported-by: Sasha Levin <[email protected]> Tested-by: Sasha Levin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) { int msr; struct page *page; unsigned long *msr_bitmap_l1; unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap; /* This shortcut is ok because we support only x2APIC MSRs so far. */ if (!nested_cpu_has_virt_x2apic_mode(vmcs12)) return false; page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap); if (is_error_page(page)) return false; msr_bitmap_l1 = (unsigned long *)kmap(page); memset(msr_bitmap_l0, 0xff, PAGE_SIZE); if (nested_cpu_has_virt_x2apic_mode(vmcs12)) { if (nested_cpu_has_apic_reg_virt(vmcs12)) for (msr = 0x800; msr <= 0x8ff; msr++) nested_vmx_disable_intercept_for_msr( msr_bitmap_l1, msr_bitmap_l0, msr, MSR_TYPE_R); nested_vmx_disable_intercept_for_msr( msr_bitmap_l1, msr_bitmap_l0, APIC_BASE_MSR + (APIC_TASKPRI >> 4), MSR_TYPE_R | MSR_TYPE_W); if (nested_cpu_has_vid(vmcs12)) { nested_vmx_disable_intercept_for_msr( msr_bitmap_l1, msr_bitmap_l0, APIC_BASE_MSR + (APIC_EOI >> 4), MSR_TYPE_W); nested_vmx_disable_intercept_for_msr( msr_bitmap_l1, msr_bitmap_l0, APIC_BASE_MSR + (APIC_SELF_IPI >> 4), MSR_TYPE_W); } } kunmap(page); kvm_release_page_clean(page); return true; }
0
[ "CWE-20", "CWE-617" ]
linux
3a8b0677fc6180a467e26cc32ce6b0c09a32f9bb
29,768,296,439,688,285,000,000,000,000,000,000,000
47
KVM: VMX: Do not BUG() on out-of-bounds guest IRQ The value of the guest_irq argument to vmx_update_pi_irte() is ultimately coming from a KVM_IRQFD API call. Do not BUG() in vmx_update_pi_irte() if the value is out of bounds. (Especially since KVM as a whole seems to hang after that.) Instead, print a message only once if we find that we don't have a route for a certain IRQ (which can be out-of-bounds or within the array). This fixes CVE-2017-1000252. Fixes: efc644048ecde54 ("KVM: x86: Update IRTE for posted-interrupts") Signed-off-by: Jan H. Schönherr <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
static int k_comp(int e, int alpha, int gamma) { return ceil((alpha-e+63) * D_1_LOG2_10); }
0
[ "CWE-190" ]
mujs
25821e6d74fab5fcc200fe5e818362e03e114428
284,291,426,081,212,400,000,000,000,000,000,000,000
3
Fix 698920: Guard jsdtoa from integer overflow wreaking havoc.
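k_comp() above does signed arithmetic on an exponent that can be attacker-influenced. A sketch of one plausible guard, clamping the exponent to a sane range before computing (the upstream fix may differ in detail; D_1_LOG2_10 is assumed here to be 1/log2(10)):

```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

#define D_1_LOG2_10 0.30102999566398119802  /* assumed: 1 / log2(10) */

/* Reject binary exponents far outside anything IEEE-754 can produce,
 * so (alpha - e + 63) cannot wrap around INT_MIN/INT_MAX. */
static int k_comp_guarded(int e, int alpha) {
    if (e < -10000 || e > 10000)
        return 0;  /* absurd input: refuse rather than overflow */
    return (int)ceil((alpha - e + 63) * D_1_LOG2_10);
}

int main(void) {
    printf("%d\n", k_comp_guarded(-52, 0));     /* typical double exponent */
    printf("%d\n", k_comp_guarded(INT_MIN, 0)); /* would overflow unguarded */
    return 0;
}
```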
xfs_itruncate_extents( struct xfs_trans **tpp, struct xfs_inode *ip, int whichfork, xfs_fsize_t new_size) { struct xfs_mount *mp = ip->i_mount; struct xfs_trans *tp = *tpp; xfs_bmap_free_t free_list; xfs_fsblock_t first_block; xfs_fileoff_t first_unmap_block; xfs_fileoff_t last_block; xfs_filblks_t unmap_len; int committed; int error = 0; int done = 0; ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(!atomic_read(&VFS_I(ip)->i_count) || xfs_isilocked(ip, XFS_IOLOCK_EXCL)); ASSERT(new_size <= XFS_ISIZE(ip)); ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES); ASSERT(ip->i_itemp != NULL); ASSERT(ip->i_itemp->ili_lock_flags == 0); ASSERT(!XFS_NOT_DQATTACHED(mp, ip)); trace_xfs_itruncate_extents_start(ip, new_size); /* * Since it is possible for space to become allocated beyond * the end of the file (in a crash where the space is allocated * but the inode size is not yet updated), simply remove any * blocks which show up between the new EOF and the maximum * possible file size. If the first block to be removed is * beyond the maximum file size (ie it is the same as last_block), * then there is nothing to do. */ first_unmap_block = XFS_B_TO_FSB(mp, (xfs_ufsize_t)new_size); last_block = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); if (first_unmap_block == last_block) return 0; ASSERT(first_unmap_block < last_block); unmap_len = last_block - first_unmap_block + 1; while (!done) { xfs_bmap_init(&free_list, &first_block); error = xfs_bunmapi(tp, ip, first_unmap_block, unmap_len, xfs_bmapi_aflag(whichfork), XFS_ITRUNC_MAX_EXTENTS, &first_block, &free_list, &done); if (error) goto out_bmap_cancel; /* * Duplicate the transaction that has the permanent * reservation and commit the old transaction. */ error = xfs_bmap_finish(&tp, &free_list, &committed); if (committed) xfs_trans_ijoin(tp, ip, 0); if (error) goto out_bmap_cancel; error = xfs_trans_roll(&tp, ip); if (error) goto out; } /* * Always re-log the inode so that our permanent transaction can keep * on rolling it forward in the log. */ xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); trace_xfs_itruncate_extents_end(ip, new_size); out: *tpp = tp; return error; out_bmap_cancel: /* * If the bunmapi call encounters an error, return to the caller where * the transaction can be properly aborted. We just need to make sure * we're not holding any resources that we were not when we came in. */ xfs_bmap_cancel(&free_list); goto out; }
0
[ "CWE-19" ]
linux
fc0561cefc04e7803c0f6501ca4f310a502f65b8
230,604,154,541,378,300,000,000,000,000,000,000,000
90
xfs: optimise away log forces on timestamp updates for fdatasync xfs: timestamp updates cause excessive fdatasync log traffic Sage Weil reported that a ceph test workload was writing to the log on every fdatasync during an overwrite workload. Event tracing showed that the only metadata modification being made was the timestamp updates during the write(2) syscall, but fdatasync(2) is supposed to ignore them. The key observation was that the transactions in the log all looked like this: INODE: #regs: 4 ino: 0x8b flags: 0x45 dsize: 32 And contained a flags field of 0x45 or 0x85, and had data and attribute forks following the inode core. This means that the timestamp updates were triggering dirty relogging of previously logged parts of the inode that hadn't yet been flushed back to disk. There are two parts to this problem. The first is that XFS relogs dirty regions in subsequent transactions, so it carries around the fields that have been dirtied since the last time the inode was written back to disk, not since the last time the inode was forced into the log. The second part is that on v5 filesystems, the inode change count update during inode dirtying also sets the XFS_ILOG_CORE flag, so on v5 filesystems this makes a timestamp update dirty the entire inode. As a result when fdatasync is run, it looks at the dirty fields in the inode, and sees more than just the timestamp flag, even though the only metadata change since the last fdatasync was just the timestamps. Hence we force the log on every subsequent fdatasync even though it is not needed. To fix this, add a new field to the inode log item that tracks changes since the last time fsync/fdatasync forced the log to flush the changes to the journal. This flag is updated when we dirty the inode, but we do it before updating the change count so it does not carry the "core dirty" flag from timestamp updates. The fields are zeroed when the inode is marked clean (due to writeback/freeing) or when an fsync/datasync forces the log. Hence if we only dirty the timestamps on the inode between fsync/fdatasync calls, the fdatasync will not trigger another log force. Over 100 runs of the test program: Ext4 baseline: runtime: 1.63s +/- 0.24s avg lat: 1.59ms +/- 0.24ms iops: ~2000 XFS, vanilla kernel: runtime: 2.45s +/- 0.18s avg lat: 2.39ms +/- 0.18ms log forces: ~400/s iops: ~1000 XFS, patched kernel: runtime: 1.49s +/- 0.26s avg lat: 1.46ms +/- 0.25ms log forces: ~30/s iops: ~1500 Reported-by: Sage Weil <[email protected]> Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Brian Foster <[email protected]> Signed-off-by: Dave Chinner <[email protected]>
static double Hann(const double x, const ResizeFilter *magick_unused(resize_filter)) { /* Cosine window function: 0.5+0.5*cos(pi*x). */ const double cosine=cos((double) (MagickPI*x)); magick_unreferenced(resize_filter); return(0.5+0.5*cosine); }
0
[ "CWE-125" ]
ImageMagick
c5402b6e0fcf8b694ae2af6a6652ebb8ce0ccf46
153,968,314,379,065,940,000,000,000,000,000,000,000
11
https://github.com/ImageMagick/ImageMagick/issues/717
static ThreadId Invalid() { return ThreadId(kInvalidId); }
0
[ "CWE-20", "CWE-119" ]
node
530af9cb8e700e7596b3ec812bad123c9fa06356
275,098,090,009,089,760,000,000,000,000,000,000,000
1
v8: Interrupts must not mask stack overflow. Backport of https://codereview.chromium.org/339883002
NTSTATUS tstream_tls_params_server(TALLOC_CTX *mem_ctx, const char *dns_host_name, bool enabled, const char *key_file, const char *cert_file, const char *ca_file, const char *crl_file, const char *dhp_file, struct tstream_tls_params **_tlsp) { struct tstream_tls_params *tlsp; #if ENABLE_GNUTLS int ret; if (!enabled || key_file == NULL || *key_file == 0) { tlsp = talloc_zero(mem_ctx, struct tstream_tls_params); NT_STATUS_HAVE_NO_MEMORY(tlsp); talloc_set_destructor(tlsp, tstream_tls_params_destructor); tlsp->tls_enabled = false; *_tlsp = tlsp; return NT_STATUS_OK; } ret = gnutls_global_init(); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS %s - %s\n", __location__, gnutls_strerror(ret))); return NT_STATUS_NOT_SUPPORTED; } tlsp = talloc_zero(mem_ctx, struct tstream_tls_params); NT_STATUS_HAVE_NO_MEMORY(tlsp); talloc_set_destructor(tlsp, tstream_tls_params_destructor); if (!file_exist(ca_file)) { tls_cert_generate(tlsp, dns_host_name, key_file, cert_file, ca_file); } ret = gnutls_certificate_allocate_credentials(&tlsp->x509_cred); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS %s - %s\n", __location__, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_NO_MEMORY; } if (ca_file && *ca_file) { ret = gnutls_certificate_set_x509_trust_file(tlsp->x509_cred, ca_file, GNUTLS_X509_FMT_PEM); if (ret < 0) { DEBUG(0,("TLS failed to initialise cafile %s - %s\n", ca_file, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_CANT_ACCESS_DOMAIN_INFO; } } if (crl_file && *crl_file) { ret = gnutls_certificate_set_x509_crl_file(tlsp->x509_cred, crl_file, GNUTLS_X509_FMT_PEM); if (ret < 0) { DEBUG(0,("TLS failed to initialise crlfile %s - %s\n", crl_file, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_CANT_ACCESS_DOMAIN_INFO; } } ret = gnutls_certificate_set_x509_key_file(tlsp->x509_cred, cert_file, key_file, GNUTLS_X509_FMT_PEM); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS failed to initialise certfile %s and keyfile %s - %s\n", cert_file, key_file, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_CANT_ACCESS_DOMAIN_INFO; } ret = gnutls_dh_params_init(&tlsp->dh_params); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS %s - %s\n", __location__, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_NO_MEMORY; } if (dhp_file && *dhp_file) { gnutls_datum_t dhparms; size_t size; dhparms.data = (uint8_t *)file_load(dhp_file, &size, 0, tlsp); if (!dhparms.data) { DEBUG(0,("TLS failed to read DH Parms from %s - %d:%s\n", dhp_file, errno, strerror(errno))); talloc_free(tlsp); return NT_STATUS_CANT_ACCESS_DOMAIN_INFO; } dhparms.size = size; ret = gnutls_dh_params_import_pkcs3(tlsp->dh_params, &dhparms, GNUTLS_X509_FMT_PEM); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS failed to import pkcs3 %s - %s\n", dhp_file, gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_CANT_ACCESS_DOMAIN_INFO; } } else { ret = gnutls_dh_params_generate2(tlsp->dh_params, DH_BITS); if (ret != GNUTLS_E_SUCCESS) { DEBUG(0,("TLS failed to generate dh_params - %s\n", gnutls_strerror(ret))); talloc_free(tlsp); return NT_STATUS_INTERNAL_ERROR; } } gnutls_certificate_set_dh_params(tlsp->x509_cred, tlsp->dh_params); tlsp->tls_enabled = true; #else /* ENABLE_GNUTLS */ tlsp = talloc_zero(mem_ctx, struct tstream_tls_params); NT_STATUS_HAVE_NO_MEMORY(tlsp); talloc_set_destructor(tlsp, tstream_tls_params_destructor); tlsp->tls_enabled = false; #endif /* ENABLE_GNUTLS */ *_tlsp = tlsp; return NT_STATUS_OK; }
1
[]
samba
22af043d2f20760f27150d7d469c7c7b944c6b55
306,906,282,743,807,170,000,000,000,000,000,000,000
135
CVE-2013-4476: s4:libtls: check for safe permissions of tls private key file (key.pem) If the TLS key is not owned by root or does not have mode 0600, Samba will not start up. Bug: https://bugzilla.samba.org/show_bug.cgi?id=10234 Pair-Programmed-With: Stefan Metzmacher <[email protected]> Signed-off-by: Björn Baumbach <[email protected]> Signed-off-by: Stefan Metzmacher <[email protected]> Reviewed-by: Stefan Metzmacher <[email protected]> Autobuild-User(master): Karolin Seeger <[email protected]> Autobuild-Date(master): Mon Nov 11 13:07:16 CET 2013 on sn-devel-104
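A standalone sketch of the startup check described above, using stat() to require root ownership and mode 0600 or stricter (illustrative code, not the Samba implementation):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Refuse to use a TLS private key that is group- or world-accessible
 * or not owned by root, mirroring the described startup check. */
static int key_file_is_safe(const char *path) {
    struct stat st;
    if (stat(path, &st) != 0)
        return 0;                         /* missing or unreadable */
    if (st.st_uid != 0)
        return 0;                         /* must be owned by root */
    if (st.st_mode & (S_IRWXG | S_IRWXO))
        return 0;                         /* must be 0600 or stricter */
    return 1;
}

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : "key.pem";
    printf("%s: %s\n", path,
           key_file_is_safe(path) ? "ok" : "unsafe, refusing to start");
    return 0;
}
```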
flow_hash_symmetric_l3l4(const struct flow *flow, uint32_t basis, bool inc_udp_ports) { uint32_t hash = basis; /* UDP source and destination port are also taken into account. */ if (flow->dl_type == htons(ETH_TYPE_IP)) { hash = hash_add(hash, (OVS_FORCE uint32_t) (flow->nw_src ^ flow->nw_dst)); } else if (flow->dl_type == htons(ETH_TYPE_IPV6)) { /* IPv6 addresses are 64-bit aligned inside struct flow. */ const uint64_t *a = ALIGNED_CAST(uint64_t *, flow->ipv6_src.s6_addr); const uint64_t *b = ALIGNED_CAST(uint64_t *, flow->ipv6_dst.s6_addr); for (int i = 0; i < sizeof flow->ipv6_src / sizeof *a; i++) { hash = hash_add64(hash, a[i] ^ b[i]); } } else { /* Cannot hash non-IP flows */ return 0; } hash = hash_add(hash, flow->nw_proto); if (flow->nw_proto == IPPROTO_TCP || flow->nw_proto == IPPROTO_SCTP || (inc_udp_ports && flow->nw_proto == IPPROTO_UDP)) { hash = hash_add(hash, (OVS_FORCE uint16_t) (flow->tp_src ^ flow->tp_dst)); } return hash_finish(hash, basis); }
0
[ "CWE-400" ]
ovs
48ceca0446b1c2c2c03e7551048c5b19ed23cc97
204,561,420,925,707,660,000,000,000,000,000,000,000
31
flow: Support extra padding length. Although not required, padding can be optionally added until the packet length is MTU bytes. A packet with extra padding currently fails sanity checks. Vulnerability: CVE-2020-35498 Fixes: fa8d9001a624 ("miniflow_extract: Properly handle small IP packets.") Reported-by: Joakim Hindersson <[email protected]> Acked-by: Ilya Maximets <[email protected]> Signed-off-by: Flavio Leitner <[email protected]> Signed-off-by: Ilya Maximets <[email protected]>
static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed) { struct page *page; int nr_nodes, node; int ret = 0; for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) { page = alloc_fresh_huge_page_node(h, node); if (page) { ret = 1; break; } } if (ret) count_vm_event(HTLB_BUDDY_PGALLOC); else count_vm_event(HTLB_BUDDY_PGALLOC_FAIL); return ret; }
0
[ "CWE-703" ]
linux
5af10dfd0afc559bb4b0f7e3e8227a1578333995
132,652,623,102,748,870,000,000,000,000,000,000,000
21
userfaultfd: hugetlbfs: remove superfluous page unlock in VM_SHARED case huge_add_to_page_cache->add_to_page_cache implicitly unlocks the page before returning in case of errors. The error returned was -EEXIST by running UFFDIO_COPY on a non-hole offset of a VM_SHARED hugetlbfs mapping. It was an userland bug that triggered it and the kernel must cope with it returning -EEXIST from ioctl(UFFDIO_COPY) as expected. page dumped because: VM_BUG_ON_PAGE(!PageLocked(page)) kernel BUG at mm/filemap.c:964! invalid opcode: 0000 [#1] SMP CPU: 1 PID: 22582 Comm: qemu-system-x86 Not tainted 4.11.11-300.fc26.x86_64 #1 RIP: unlock_page+0x4a/0x50 Call Trace: hugetlb_mcopy_atomic_pte+0xc0/0x320 mcopy_atomic+0x96f/0xbe0 userfaultfd_ioctl+0x218/0xe90 do_vfs_ioctl+0xa5/0x600 SyS_ioctl+0x79/0x90 entry_SYSCALL_64_fastpath+0x1a/0xa9 Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrea Arcangeli <[email protected]> Tested-by: Maxime Coquelin <[email protected]> Reviewed-by: Mike Kravetz <[email protected]> Cc: "Dr. David Alan Gilbert" <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Alexey Perevalov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
ves_icall_System_Array_SetValueImpl (MonoArray *this, MonoObject *value, guint32 pos) { MonoClass *ac, *vc, *ec; gint32 esize, vsize; gpointer *ea, *va; int et, vt; guint64 u64 = 0; gint64 i64 = 0; gdouble r64 = 0; MONO_ARCH_SAVE_REGS; if (value) vc = value->vtable->klass; else vc = NULL; ac = this->obj.vtable->klass; ec = ac->element_class; esize = mono_array_element_size (ac); ea = (gpointer*)((char*)this->vector + (pos * esize)); va = (gpointer*)((char*)value + sizeof (MonoObject)); if (mono_class_is_nullable (ec)) { mono_nullable_init ((guint8*)ea, value, ec); return; } if (!value) { memset (ea, 0, esize); return; } #define NO_WIDENING_CONVERSION G_STMT_START{\ mono_raise_exception (mono_get_exception_argument ( \ "value", "not a widening conversion")); \ }G_STMT_END #define CHECK_WIDENING_CONVERSION(extra) G_STMT_START{\ if (esize < vsize + (extra)) \ mono_raise_exception (mono_get_exception_argument ( \ "value", "not a widening conversion")); \ }G_STMT_END #define INVALID_CAST G_STMT_START{\ mono_raise_exception (mono_get_exception_invalid_cast ()); \ }G_STMT_END /* Check element (destination) type. */ switch (ec->byval_arg.type) { case MONO_TYPE_STRING: switch (vc->byval_arg.type) { case MONO_TYPE_STRING: break; default: INVALID_CAST; } break; case MONO_TYPE_BOOLEAN: switch (vc->byval_arg.type) { case MONO_TYPE_BOOLEAN: break; case MONO_TYPE_CHAR: case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_I1: case MONO_TYPE_I2: case MONO_TYPE_I4: case MONO_TYPE_I8: case MONO_TYPE_R4: case MONO_TYPE_R8: NO_WIDENING_CONVERSION; default: INVALID_CAST; } break; } if (!ec->valuetype) { if (!mono_object_isinst (value, ec)) INVALID_CAST; mono_gc_wbarrier_set_arrayref (this, ea, (MonoObject*)value); return; } if (mono_object_isinst (value, ec)) { if (ec->has_references) mono_value_copy (ea, (char*)value + sizeof (MonoObject), ec); else memcpy (ea, (char *)value + sizeof (MonoObject), esize); return; } if (!vc->valuetype) INVALID_CAST; vsize = mono_class_instance_size (vc) - sizeof (MonoObject); et = ec->byval_arg.type; if (et == MONO_TYPE_VALUETYPE && ec->byval_arg.data.klass->enumtype) et = mono_class_enum_basetype (ec->byval_arg.data.klass)->type; vt = vc->byval_arg.type; if (vt == MONO_TYPE_VALUETYPE && vc->byval_arg.data.klass->enumtype) vt = mono_class_enum_basetype (vc->byval_arg.data.klass)->type; #define ASSIGN_UNSIGNED(etype) G_STMT_START{\ switch (vt) { \ case MONO_TYPE_U1: \ case MONO_TYPE_U2: \ case MONO_TYPE_U4: \ case MONO_TYPE_U8: \ case MONO_TYPE_CHAR: \ CHECK_WIDENING_CONVERSION(0); \ *(etype *) ea = (etype) u64; \ return; \ /* You can't assign a signed value to an unsigned array. */ \ case MONO_TYPE_I1: \ case MONO_TYPE_I2: \ case MONO_TYPE_I4: \ case MONO_TYPE_I8: \ /* You can't assign a floating point number to an integer array. */ \ case MONO_TYPE_R4: \ case MONO_TYPE_R8: \ NO_WIDENING_CONVERSION; \ } \ }G_STMT_END #define ASSIGN_SIGNED(etype) G_STMT_START{\ switch (vt) { \ case MONO_TYPE_I1: \ case MONO_TYPE_I2: \ case MONO_TYPE_I4: \ case MONO_TYPE_I8: \ CHECK_WIDENING_CONVERSION(0); \ *(etype *) ea = (etype) i64; \ return; \ /* You can assign an unsigned value to a signed array if the array's */ \ /* element size is larger than the value size. */ \ case MONO_TYPE_U1: \ case MONO_TYPE_U2: \ case MONO_TYPE_U4: \ case MONO_TYPE_U8: \ case MONO_TYPE_CHAR: \ CHECK_WIDENING_CONVERSION(1); \ *(etype *) ea = (etype) u64; \ return; \ /* You can't assign a floating point number to an integer array. 
*/ \ case MONO_TYPE_R4: \ case MONO_TYPE_R8: \ NO_WIDENING_CONVERSION; \ } \ }G_STMT_END #define ASSIGN_REAL(etype) G_STMT_START{\ switch (vt) { \ case MONO_TYPE_R4: \ case MONO_TYPE_R8: \ CHECK_WIDENING_CONVERSION(0); \ *(etype *) ea = (etype) r64; \ return; \ /* All integer values fit into a floating point array, so we don't */ \ /* need to CHECK_WIDENING_CONVERSION here. */ \ case MONO_TYPE_I1: \ case MONO_TYPE_I2: \ case MONO_TYPE_I4: \ case MONO_TYPE_I8: \ *(etype *) ea = (etype) i64; \ return; \ case MONO_TYPE_U1: \ case MONO_TYPE_U2: \ case MONO_TYPE_U4: \ case MONO_TYPE_U8: \ case MONO_TYPE_CHAR: \ *(etype *) ea = (etype) u64; \ return; \ } \ }G_STMT_END switch (vt) { case MONO_TYPE_U1: u64 = *(guint8 *) va; break; case MONO_TYPE_U2: u64 = *(guint16 *) va; break; case MONO_TYPE_U4: u64 = *(guint32 *) va; break; case MONO_TYPE_U8: u64 = *(guint64 *) va; break; case MONO_TYPE_I1: i64 = *(gint8 *) va; break; case MONO_TYPE_I2: i64 = *(gint16 *) va; break; case MONO_TYPE_I4: i64 = *(gint32 *) va; break; case MONO_TYPE_I8: i64 = *(gint64 *) va; break; case MONO_TYPE_R4: r64 = *(gfloat *) va; break; case MONO_TYPE_R8: r64 = *(gdouble *) va; break; case MONO_TYPE_CHAR: u64 = *(guint16 *) va; break; case MONO_TYPE_BOOLEAN: /* Boolean is only compatible with itself. */ switch (et) { case MONO_TYPE_CHAR: case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_I1: case MONO_TYPE_I2: case MONO_TYPE_I4: case MONO_TYPE_I8: case MONO_TYPE_R4: case MONO_TYPE_R8: NO_WIDENING_CONVERSION; default: INVALID_CAST; } break; } /* If we can't do a direct copy, let's try a widening conversion. */ switch (et) { case MONO_TYPE_CHAR: ASSIGN_UNSIGNED (guint16); case MONO_TYPE_U1: ASSIGN_UNSIGNED (guint8); case MONO_TYPE_U2: ASSIGN_UNSIGNED (guint16); case MONO_TYPE_U4: ASSIGN_UNSIGNED (guint32); case MONO_TYPE_U8: ASSIGN_UNSIGNED (guint64); case MONO_TYPE_I1: ASSIGN_SIGNED (gint8); case MONO_TYPE_I2: ASSIGN_SIGNED (gint16); case MONO_TYPE_I4: ASSIGN_SIGNED (gint32); case MONO_TYPE_I8: ASSIGN_SIGNED (gint64); case MONO_TYPE_R4: ASSIGN_REAL (gfloat); case MONO_TYPE_R8: ASSIGN_REAL (gdouble); } INVALID_CAST; /* Not reached, INVALID_CAST does not return. Just to avoid a compiler warning ... */ return; #undef INVALID_CAST #undef NO_WIDENING_CONVERSION #undef CHECK_WIDENING_CONVERSION #undef ASSIGN_UNSIGNED #undef ASSIGN_SIGNED #undef ASSIGN_REAL }
0
[ "CWE-264" ]
mono
035c8587c0d8d307e45f1b7171a0d337bb451f1e
57,798,510,170,109,800,000,000,000,000,000,000,000
275
Allow only primitive types/enums in RuntimeHelpers.InitializeArray ().
void RtmpProtocol::check_C1_Digest(const string &digest,const string &data){ auto sha256 = openssl_HMACsha256(FPKey, C1_FPKEY_SIZE, data.data(), data.size()); if (sha256 != digest) { throw std::runtime_error("digest mismatched"); } else { InfoL << "check rtmp complex handshark success!"; } }
0
[ "CWE-703" ]
ZLMediaKit
7d8b212a3c3368bc2f6507cb74664fc419eb9327
179,615,557,000,239,060,000,000,000,000,000,000,000
8
Fix a bug where a too-small reported RTMP window size caused recursive looping: #1839
sixel_allocator_free( sixel_allocator_t /* in */ *allocator, /* allocator object */ void /* in */ *p) /* existing buffer to be freed */ { /* precondition */ assert(allocator); assert(allocator->fn_free); allocator->fn_free(p); }
0
[ "CWE-125" ]
libsixel
0b1e0b3f7b44233f84e5c9f512f8c90d6bbbe33d
138,919,985,224,302,780,000,000,000,000,000,000,000
10
Introduce SIXEL_ALLOCATE_BYTES_MAX macro and limit allocation size to 128MB(#74)
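A minimal sketch of the allocation cap named above: a malloc wrapper that rejects requests beyond a hard 128 MB limit rather than trusting attacker-controlled sizes (wrapper name invented; SIXEL_ALLOCATE_BYTES_MAX is the macro named in the message):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hard upper bound on a single allocation, in the spirit of the
 * 128 MB SIXEL_ALLOCATE_BYTES_MAX limit. Malformed images can
 * otherwise request absurd buffer sizes. */
#define ALLOCATE_BYTES_MAX (128UL * 1024UL * 1024UL)

static void *bounded_malloc(size_t n) {
    if (n == 0 || n > ALLOCATE_BYTES_MAX)
        return NULL;  /* reject instead of trusting input-derived sizes */
    return malloc(n);
}

int main(void) {
    void *small = bounded_malloc(4096);
    void *huge  = bounded_malloc(ALLOCATE_BYTES_MAX + 1);
    printf("small: %s, huge: %s\n", small ? "ok" : "failed",
           huge ? "BUG" : "rejected");
    free(small);
    return 0;
}
```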
void __blkg_release_rcu(struct rcu_head *rcu_head) { struct blkcg_gq *blkg = container_of(rcu_head, struct blkcg_gq, rcu_head); /* release the blkcg and parent blkg refs this blkg has been holding */ css_put(&blkg->blkcg->css); if (blkg->parent) blkg_put(blkg->parent); wb_congested_put(blkg->wb_congested); blkg_free(blkg); }
0
[ "CWE-415" ]
linux
9b54d816e00425c3a517514e0d677bb3cec49258
97,297,487,189,412,030,000,000,000,000,000,000,000
13
blkcg: fix double free of new_blkg in blkcg_init_queue If blkg_create fails, new_blkg passed as an argument will be freed by blkg_create, so there is no need to free it again. Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
TEST_F(HttpConnectionManagerImplTest, RequestTimeoutIsDisarmedOnCompleteRequestWithData) { request_timeout_ = std::chrono::milliseconds(10); setup(false, ""); EXPECT_CALL(*codec_, dispatch(_)).WillOnce(Invoke([&](Buffer::Instance& data) -> Http::Status { Event::MockTimer* request_timer = setUpTimer(); EXPECT_CALL(*request_timer, enableTimer(request_timeout_, _)).Times(1); RequestDecoder* decoder = &conn_manager_->newStream(response_encoder_); RequestHeaderMapPtr headers{ new TestRequestHeaderMapImpl{{":authority", "host"}, {":path", "/"}, {":method", "POST"}}}; decoder->decodeHeaders(std::move(headers), false); EXPECT_CALL(*request_timer, disableTimer()).Times(1); decoder->decodeData(data, true); return Http::okStatus(); })); Buffer::OwnedImpl fake_input("1234"); conn_manager_->onData(fake_input, false); EXPECT_EQ(0U, stats_.named_.downstream_rq_timeout_.value()); }
0
[ "CWE-400" ]
envoy
0e49a495826ea9e29134c1bd54fdeb31a034f40c
206,499,704,500,522,300,000,000,000,000,000,000,000
23
http/2: add stats and stream flush timeout (#139) This commit adds a new stream flush timeout to guard against a remote server that does not open window once an entire stream has been buffered for flushing. Additional stats have also been added to better understand the codec's view of active streams as well as the amount of data buffered. Signed-off-by: Matt Klein <[email protected]>
inline int PacketCopyDataOffset(Packet *p, uint32_t offset, uint8_t *data, uint32_t datalen) { if (unlikely(offset + datalen > MAX_PAYLOAD_SIZE)) { /* too big */ return -1; } /* Do we have already an packet with allocated data */ if (! p->ext_pkt) { uint32_t newsize = offset + datalen; // check overflow if (newsize < offset) return -1; if (newsize <= default_packet_size) { /* data will fit in memory allocated with packet */ memcpy(GET_PKT_DIRECT_DATA(p) + offset, data, datalen); } else { /* here we need a dynamic allocation */ p->ext_pkt = SCMalloc(MAX_PAYLOAD_SIZE); if (unlikely(p->ext_pkt == NULL)) { SET_PKT_LEN(p, 0); return -1; } /* copy initial data */ memcpy(p->ext_pkt, GET_PKT_DIRECT_DATA(p), GET_PKT_DIRECT_MAX_SIZE(p)); /* copy data as asked */ memcpy(p->ext_pkt + offset, data, datalen); } } else { memcpy(p->ext_pkt + offset, data, datalen); } return 0; }
0
[ "CWE-20" ]
suricata
11f3659f64a4e42e90cb3c09fcef66894205aefe
262,499,756,358,856,720,000,000,000,000,000,000,000
33
teredo: be stricter on what to consider valid teredo Invalid Teredo can lead to valid DNS traffic (or other UDP traffic) being misdetected as Teredo. This leads to false negatives in the UDP payload inspection. Make the teredo code only consider a packet teredo if the encapsulated data was decoded without any 'invalid' events being set. Bug #2736.
if (key->handle != -1) { return sp_dsp_ecc_verify_256(key->handle, hash, hashlen, key->pubkey.x, key->pubkey.y, key->pubkey.z, r, s, res, key->heap); }
0
[ "CWE-326", "CWE-203" ]
wolfssl
1de07da61f0c8e9926dcbd68119f73230dae283f
243,154,859,634,717,440,000,000,000,000,000,000,000
4
Constant time EC map to affine for private operations For fast math, use a constant time modular inverse when mapping to affine when the operation involves a private key - key gen, calc shared secret, sign.
int exec_command_append(ExecCommand *c, const char *path, ...) { _cleanup_strv_free_ char **l = NULL; va_list ap; int r; assert(c); assert(path); va_start(ap, path); l = strv_new_ap(path, ap); va_end(ap); if (!l) return -ENOMEM; r = strv_extend_strv(&c->argv, l, false); if (r < 0) return r; return 0; }
0
[ "CWE-269" ]
systemd
f69567cbe26d09eac9d387c0be0fc32c65a83ada
269,663,270,549,324,830,000,000,000,000,000,000,000
21
core: expose SUID/SGID restriction as new unit setting RestrictSUIDSGID=
ldap_error_to_response (gint ldap_error) { if (ldap_error == LDAP_SUCCESS) return EDB_ERROR (SUCCESS); else if (ldap_error == LDAP_INVALID_DN_SYNTAX) return e_data_book_create_error (E_DATA_BOOK_STATUS_OTHER_ERROR, _("Invalid DN syntax")); else if (LDAP_NAME_ERROR (ldap_error)) return EDB_ERROR (CONTACT_NOT_FOUND); else if (ldap_error == LDAP_INSUFFICIENT_ACCESS) return EDB_ERROR (PERMISSION_DENIED); else if (ldap_error == LDAP_STRONG_AUTH_REQUIRED) return EDB_ERROR (AUTHENTICATION_REQUIRED); else if (ldap_error == LDAP_SERVER_DOWN) return EDB_ERROR (REPOSITORY_OFFLINE); else if (ldap_error == LDAP_ALREADY_EXISTS) return EDB_ERROR (CONTACTID_ALREADY_EXISTS); else if (ldap_error == LDAP_TYPE_OR_VALUE_EXISTS ) return EDB_ERROR (CONTACTID_ALREADY_EXISTS); else return e_data_book_create_error_fmt ( E_DATA_BOOK_STATUS_OTHER_ERROR, _("LDAP error 0x%x (%s)"), ldap_error, ldap_err2string (ldap_error) ? ldap_err2string (ldap_error) : _("Unknown error")); }
0
[]
evolution-data-server
34bad61738e2127736947ac50e0c7969cc944972
295,611,928,647,182,250,000,000,000,000,000,000,000
26
Bug 796174 - strcat() considered unsafe for buffer overflow
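The title's concern is the classic one: unbounded strcat() into a fixed-size buffer can write past its end. A small portable illustration of the bounded alternative — in GLib code such as this project, g_strdup_printf() is the more idiomatic replacement; plain snprintf() is shown here:

#include <stdio.h>

static void build_msg(char *out, size_t outsz, const char *a, const char *b)
{
    /* Risky pattern: strcat(out, a); strcat(out, b);  -- no bound check. */
    snprintf(out, outsz, "%s%s", a, b); /* bounded, always NUL-terminated */
}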
rsvg_filter_primitive_displacement_map_free (gpointer impl) { RsvgFilterPrimitiveDisplacementMap *dmap = impl; g_string_free (dmap->in2, TRUE); rsvg_filter_primitive_free (impl); }
0
[ "CWE-369" ]
librsvg
ecf9267a24b2c3c0cd211dbdfa9ef2232511972a
24,803,910,131,062,315,000,000,000,000,000,000,000
8
bgo#783835 - Don't divide by zero in box_blur_line() for gaussian blurs We were making the decision to use box blurs, instead of a true Gaussian kernel, based on the size of *both* x and y dimensions. Do them individually instead.
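A hedged sketch of the per-axis decision the message describes, not librsvg's internals: compute the box size for each dimension separately (the formula is the usual 3-box approximation of a Gaussian) and guard the later division against a zero size. Assumes POSIX M_PI:

#include <math.h>

static void blur_axis_params(double sigma, int *use_box, int *boxsize)
{
    /* Standard box width approximating a Gaussian of this sigma. */
    int d = (int)(sigma * 3.0 * sqrt(2.0 * M_PI) / 4.0 + 0.5);
    *use_box = (d >= 10);      /* decide for this axis alone, not both */
    *boxsize = d > 0 ? d : 1;  /* never divide by zero in the blur loop */
}

The threshold 10 is illustrative; the point is that x and y each get their own decision and a nonzero divisor.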
drill_parse_header_is_inch(gerb_file_t *fd, drill_state_t *state, gerbv_image_t *image, ssize_t file_line) { gerbv_drill_stats_t *stats = image->drill_stats; char c; dprintf(" %s(): entering\n", __FUNCTION__); if (DRILL_HEADER != state->curr_section) return 0; switch (file_check_str(fd, "INCH")) { case -1: gerbv_stats_printf(stats->error_list, GERBV_MESSAGE_ERROR, -1, _("Unexpected EOF found while parsing \"%s\" string " "in file \"%s\""), "INCH", fd->filename); return 0; case 0: return 0; } /* Look for TZ/LZ */ if (',' != gerb_fgetc(fd)) { /* Unget the char in case we just advanced past a new line char */ gerb_ungetc (fd); } else { c = gerb_fgetc(fd); if (c != EOF && 'Z' == gerb_fgetc(fd)) { switch (c) { case 'L': if (state->autod) { image->format->omit_zeros = GERBV_OMIT_ZEROS_TRAILING; state->header_number_format = state->number_format = FMT_00_0000; state->decimals = 4; } break; case 'T': if (state->autod) { image->format->omit_zeros = GERBV_OMIT_ZEROS_LEADING; state->header_number_format = state->number_format = FMT_00_0000; state->decimals = 4; } break; default: gerbv_stats_printf(stats->error_list, GERBV_MESSAGE_WARNING, -1, _("Found junk '%s' after " "INCH command " "at line %ld in file \"%s\""), gerbv_escape_char(c), file_line, fd->filename); break; } } else { gerbv_stats_printf(stats->error_list, GERBV_MESSAGE_WARNING, -1, _("Found junk '%s' after INCH command " "at line %ld in file \"%s\""), gerbv_escape_char(c), file_line, fd->filename); } } state->unit = GERBV_UNIT_INCH; return 1; } /* drill_parse_header_is_inch() */
0
[ "CWE-787" ]
gerbv
672214abb47a802fc000125996e6e0a46c623a4e
203,945,269,297,079,800,000,000,000,000,000,000,000
72
Add test to demonstrate buffer overrun
int SSL_get_fd(const SSL *s) { return (SSL_get_rfd(s)); }
0
[ "CWE-310" ]
openssl
56f1acf5ef8a432992497a04792ff4b3b2c6f286
71,631,560,030,828,135,000,000,000,000,000,000,000
4
Disable SSLv2 default build, default negotiation and weak ciphers. SSLv2 is by default disabled at build-time. Builds that are not configured with "enable-ssl2" will not support SSLv2. Even if "enable-ssl2" is used, users who want to negotiate SSLv2 via the version-flexible SSLv23_method() will need to explicitly call either of: SSL_CTX_clear_options(ctx, SSL_OP_NO_SSLv2); or SSL_clear_options(ssl, SSL_OP_NO_SSLv2); as appropriate. Even if either of those is used, or the application explicitly uses the version-specific SSLv2_method() or its client or server variants, SSLv2 ciphers vulnerable to exhaustive search key recovery have been removed. Specifically, the SSLv2 40-bit EXPORT ciphers, and SSLv2 56-bit DES are no longer available. Mitigation for CVE-2016-0800 Reviewed-by: Emilia Käsper <[email protected]>
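To make the quoted opt-in concrete for the API of that OpenSSL era (SSLv23_method() and SSLv2 support are gone from modern OpenSSL, so treat this purely as a historical sketch):

#include <openssl/ssl.h>

SSL_CTX *make_ctx_allowing_sslv2(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_method()); /* version-flexible */
    if (ctx == NULL)
        return NULL;
    SSL_CTX_clear_options(ctx, SSL_OP_NO_SSLv2); /* explicit SSLv2 opt-in */
    return ctx;
}

Without that explicit clear (or the per-connection SSL_clear_options() variant the message quotes), an enable-ssl2 build still refused to negotiate SSLv2.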
static int usbredirparser_verify_bulk_recv_cap( struct usbredirparser *parser_pub, int send) { struct usbredirparser_priv *parser = (struct usbredirparser_priv *)parser_pub; if ((send && !usbredirparser_peer_has_cap(parser_pub, usb_redir_cap_bulk_receiving)) || (!send && !usbredirparser_have_cap(parser_pub, usb_redir_cap_bulk_receiving))) { ERROR("error bulk_receiving without cap_bulk_receiving"); return 0; } return 1; /* Verify ok */ }
0
[]
usbredir
03c519ff5831ba75120e00ebebbf1d5a1f7220ab
70,623,237,537,864,040,000,000,000,000,000,000,000
15
Avoid use-after-free in serialization. Serializing parsers with large amounts of buffered write data (e.g. in case of a slow or blocked write destination) would cause "serialize_data" to reallocate the state buffer whose default size is 64kB (USBREDIRPARSER_SERIALIZE_BUF_SIZE). The pointer to the position for the write buffer count would then point to a location outside the buffer where the number of write buffers would be written as a 32-bit value. As of QEMU 5.2.0 the serializer is invoked for migrations. Serializations for migrations may happen regularly such as when using the COLO feature[1]. Serialization happens under QEMU's I/O lock. The guest can't control the state while the serialization is happening. The value written is the number of outstanding buffers which would be susceptible to timing and host system load. The guest would have to continuously groom the write buffers. A useful value needs to be allocated in the exact position freed during the buffer size increase, but before the buffer count is written. The author doesn't consider it realistic to exploit this use-after-free reliably. [1] https://wiki.qemu.org/Features/COLO Signed-off-by: Michael Hanselmann <[email protected]>
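The bug class here is worth a standalone illustration: a raw pointer saved into a growable buffer dangles as soon as realloc() moves the allocation; saving an offset and recomputing the pointer late survives the move. This is a generic sketch, not usbredir's serializer:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct serializer {
    uint8_t *buf;
    size_t   len, cap;
};

static int ser_reserve(struct serializer *s, size_t extra)
{
    if (s->len + extra <= s->cap)
        return 0;
    size_t ncap = s->cap ? s->cap * 2 : 64;   /* overflow checks omitted */
    while (ncap < s->len + extra)
        ncap *= 2;
    uint8_t *nbuf = realloc(s->buf, ncap);    /* may move the block */
    if (nbuf == NULL)
        return -1;
    s->buf = nbuf;
    s->cap = ncap;
    return 0;
}

int ser_write_count_late(struct serializer *s, uint32_t count)
{
    /* WRONG: uint8_t *pos = s->buf + s->len;  -- dangles after a grow. */
    size_t count_off = s->len;                /* offset survives realloc */
    if (ser_reserve(s, sizeof(count)) < 0)
        return -1;
    s->len += sizeof(count);
    /* ... further appends may grow (and move) the buffer ... */
    memcpy(s->buf + count_off, &count, sizeof(count));
    return 0;
}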
static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb, struct super_block *newsb, unsigned long kern_flags, unsigned long *set_kern_flags) { int rc = 0; const struct superblock_security_struct *oldsbsec = selinux_superblock(oldsb); struct superblock_security_struct *newsbsec = selinux_superblock(newsb); int set_fscontext = (oldsbsec->flags & FSCONTEXT_MNT); int set_context = (oldsbsec->flags & CONTEXT_MNT); int set_rootcontext = (oldsbsec->flags & ROOTCONTEXT_MNT); /* * if the parent was able to be mounted it clearly had no special lsm * mount options. thus we can safely deal with this superblock later */ if (!selinux_initialized(&selinux_state)) return 0; /* * Specifying internal flags without providing a place to * place the results is not allowed. */ if (kern_flags && !set_kern_flags) return -EINVAL; /* how can we clone if the old one wasn't set up?? */ BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED)); /* if fs is reusing a sb, make sure that the contexts match */ if (newsbsec->flags & SE_SBINITIALIZED) { if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context) *set_kern_flags |= SECURITY_LSM_NATIVE_LABELS; return selinux_cmp_sb_context(oldsb, newsb); } mutex_lock(&newsbsec->lock); newsbsec->flags = oldsbsec->flags; newsbsec->sid = oldsbsec->sid; newsbsec->def_sid = oldsbsec->def_sid; newsbsec->behavior = oldsbsec->behavior; if (newsbsec->behavior == SECURITY_FS_USE_NATIVE && !(kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context) { rc = security_fs_use(&selinux_state, newsb); if (rc) goto out; } if (kern_flags & SECURITY_LSM_NATIVE_LABELS && !set_context) { newsbsec->behavior = SECURITY_FS_USE_NATIVE; *set_kern_flags |= SECURITY_LSM_NATIVE_LABELS; } if (set_context) { u32 sid = oldsbsec->mntpoint_sid; if (!set_fscontext) newsbsec->sid = sid; if (!set_rootcontext) { struct inode_security_struct *newisec = backing_inode_security(newsb->s_root); newisec->sid = sid; } newsbsec->mntpoint_sid = sid; } if (set_rootcontext) { const struct inode_security_struct *oldisec = backing_inode_security(oldsb->s_root); struct inode_security_struct *newisec = backing_inode_security(newsb->s_root); newisec->sid = oldisec->sid; } sb_finish_set_opts(newsb); out: mutex_unlock(&newsbsec->lock); return rc; }
0
[ "CWE-416" ]
linux
a3727a8bac0a9e77c70820655fd8715523ba3db7
257,323,809,014,435,240,000,000,000,000,000,000,000
81
selinux,smack: fix subjective/objective credential use mixups Jann Horn reported a problem with commit eb1231f73c4d ("selinux: clarify task subjective and objective credentials") where some LSM hooks were attempting to access the subjective credentials of a task other than the current task. Generally speaking, it is not safe to access another task's subjective credentials and doing so can cause a number of problems. Further, while looking into the problem, I realized that Smack was suffering from a similar problem brought about by a similar commit 1fb057dcde11 ("smack: differentiate between subjective and objective task credentials"). This patch addresses this problem by restoring the use of the task's objective credentials in those cases where the task is other than the current executing task. Not only does this resolve the problem reported by Jann, it is arguably the correct thing to do in these cases. Cc: [email protected] Fixes: eb1231f73c4d ("selinux: clarify task subjective and objective credentials") Fixes: 1fb057dcde11 ("smack: differentiate between subjective and objective task credentials") Reported-by: Jann Horn <[email protected]> Acked-by: Eric W. Biederman <[email protected]> Acked-by: Casey Schaufler <[email protected]> Signed-off-by: Paul Moore <[email protected]>
static int make_callable_ex(pdo_stmt_t *stmt, zval *callable, zend_fcall_info * fci, zend_fcall_info_cache * fcc, int num_args TSRMLS_DC) /* {{{ */ { char *is_callable_error = NULL; if (zend_fcall_info_init(callable, 0, fci, fcc, NULL, &is_callable_error TSRMLS_CC) == FAILURE) { if (is_callable_error) { pdo_raise_impl_error(stmt->dbh, stmt, "HY000", is_callable_error TSRMLS_CC); efree(is_callable_error); } else { pdo_raise_impl_error(stmt->dbh, stmt, "HY000", "user-supplied function must be a valid callback" TSRMLS_CC); } return 0; } if (is_callable_error) { /* Possible E_STRICT error message */ efree(is_callable_error); } fci->param_count = num_args; /* probably less */ fci->params = safe_emalloc(sizeof(zval**), num_args, 0); return 1;
0
[ "CWE-476" ]
php-src
6045de69c7dedcba3eadf7c4bba424b19c81d00d
167,063,689,166,217,800,000,000,000,000,000,000,000
23
Fix bug #73331 - do not try to serialize/unserialize objects wddx cannot handle. The proper solution would be to call serialize/unserialize and deal with the result, but this requires more work that should be done by the wddx maintainer (not me).
int cil_resolve_selinuxuser(struct cil_tree_node *current, void *extra_args) { struct cil_selinuxuser *selinuxuser = current->data; struct cil_symtab_datum *user_datum = NULL; struct cil_symtab_datum *lvlrange_datum = NULL; struct cil_tree_node *user_node = NULL; int rc = SEPOL_ERR; rc = cil_resolve_name(current, selinuxuser->user_str, CIL_SYM_USERS, extra_args, &user_datum); if (rc != SEPOL_OK) { goto exit; } user_node = NODE(user_datum); if (user_node->flavor != CIL_USER) { cil_log(CIL_ERR, "Selinuxuser must be a user: %s\n", user_datum->fqn); rc = SEPOL_ERR; goto exit; } selinuxuser->user = (struct cil_user*)user_datum; if (selinuxuser->range_str != NULL) { rc = cil_resolve_name(current, selinuxuser->range_str, CIL_SYM_LEVELRANGES, extra_args, &lvlrange_datum); if (rc != SEPOL_OK) { goto exit; } selinuxuser->range = (struct cil_levelrange*)lvlrange_datum; /* This could still be an anonymous levelrange even if range_str is set, if range_str is a param_str*/ if (selinuxuser->range->datum.name == NULL) { rc = cil_resolve_levelrange(current, selinuxuser->range, extra_args); if (rc != SEPOL_OK) { goto exit; } } } else if (selinuxuser->range != NULL) { rc = cil_resolve_levelrange(current, selinuxuser->range, extra_args); if (rc != SEPOL_OK) { goto exit; } } rc = SEPOL_OK; exit: return rc; }
0
[ "CWE-125" ]
selinux
340f0eb7f3673e8aacaf0a96cbfcd4d12a405521
58,658,542,505,265,140,000,000,000,000,000,000,000
48
libsepol/cil: Check for statements not allowed in optional blocks While there are some checks for invalid statements in an optional block when resolving the AST, there are no checks when building the AST. OSS-Fuzz found the following policy which caused a null dereference in cil_tree_get_next_path(). (blockinherit b3) (sid SID) (sidorder(SID)) (optional o (ibpkeycon :(1 0)s) (block b3 (filecon""block()) (filecon""block()))) The problem is that the blockinherit copies block b3 before the optional block is disabled. When the optional is disabled, block b3 is deleted along with everything else in the optional. Later, when filecon statements with the same path are found an error message is produced and in trying to find out where the block was copied from, the reference to the deleted block is used. The error handling code assumes (rightly) that if something was copied from a block then that block should still exist. It is clear that in-statements, blocks, and macros cannot be in an optional, because that allows nodes to be copied from the optional block to somewhere outside even though the optional could be disabled later. When optionals are disabled the AST is reset and the resolution is restarted at the point of resolving macro calls, so anything resolved before macro calls will never be re-resolved. This includes tunableifs, in-statements, blockinherits, blockabstracts, and macro definitions. Tunable declarations also cannot be in an optional block because they are needed to resolve tunableifs. It should be fine to allow blockinherit statements in an optional, because that is copying nodes from outside the optional to the optional and if the optional is later disabled, everything will be deleted anyway. Check and quit with an error if a tunable declaration, in-statement, block, blockabstract, or macro definition is found within an optional when either building or resolving the AST. Signed-off-by: James Carter <[email protected]>
init_root_domain_thread (MonoInternalThread *thread, MonoThread *candidate) { MonoDomain *domain = mono_get_root_domain (); if (!candidate || candidate->obj.vtable->domain != domain) candidate = new_thread_with_internal (domain, thread); set_current_thread_for_domain (domain, thread, candidate); g_assert (!thread->root_domain_thread); MONO_OBJECT_SETREF (thread, root_domain_thread, candidate); }
0
[ "CWE-399", "CWE-264" ]
mono
722f9890f09aadfc37ae479e7d946d5fc5ef7b91
90,784,278,367,558,920,000,000,000,000,000,000,000
10
Fix access to freed members of a dead thread * threads.c: Fix access to freed members of a dead thread. Found and fixed by Rodrigo Kumpera <[email protected]> Ref: CVE-2011-0992
static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); unsigned long debugctlmsr, cr4; /* Record the guest's net vcpu time for enforced NMI injections. */ if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked)) vmx->entry_time = ktime_get(); /* Don't enter VMX if guest state is invalid, let the exit handler start emulation until we arrive back to a valid state */ if (vmx->emulation_required) return; if (vmx->ple_window_dirty) { vmx->ple_window_dirty = false; vmcs_write32(PLE_WINDOW, vmx->ple_window); } if (vmx->nested.sync_shadow_vmcs) { copy_vmcs12_to_shadow(vmx); vmx->nested.sync_shadow_vmcs = false; } if (test_bit(VCPU_REGS_RSP, (unsigned long *)&vcpu->arch.regs_dirty)) vmcs_writel(GUEST_RSP, vcpu->arch.regs[VCPU_REGS_RSP]); if (test_bit(VCPU_REGS_RIP, (unsigned long *)&vcpu->arch.regs_dirty)) vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]); cr4 = cr4_read_shadow(); if (unlikely(cr4 != vmx->host_state.vmcs_host_cr4)) { vmcs_writel(HOST_CR4, cr4); vmx->host_state.vmcs_host_cr4 = cr4; } /* When single-stepping over STI and MOV SS, we must clear the * corresponding interruptibility bits in the guest state. Otherwise * vmentry fails as it then expects bit 14 (BS) in pending debug * exceptions being set, but that's not correct for the guest debugging * case. */ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) vmx_set_interrupt_shadow(vcpu, 0); atomic_switch_perf_msrs(vmx); debugctlmsr = get_debugctlmsr(); vmx->__launched = vmx->loaded_vmcs->launched; asm( /* Store host registers */ "push %%" _ASM_DX "; push %%" _ASM_BP ";" "push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */ "push %%" _ASM_CX " \n\t" "cmp %%" _ASM_SP ", %c[host_rsp](%0) \n\t" "je 1f \n\t" "mov %%" _ASM_SP ", %c[host_rsp](%0) \n\t" __ex(ASM_VMX_VMWRITE_RSP_RDX) "\n\t" "1: \n\t" /* Reload cr2 if changed */ "mov %c[cr2](%0), %%" _ASM_AX " \n\t" "mov %%cr2, %%" _ASM_DX " \n\t" "cmp %%" _ASM_AX ", %%" _ASM_DX " \n\t" "je 2f \n\t" "mov %%" _ASM_AX", %%cr2 \n\t" "2: \n\t" /* Check if vmlaunch of vmresume is needed */ "cmpl $0, %c[launched](%0) \n\t" /* Load guest registers. Don't clobber flags. 
*/ "mov %c[rax](%0), %%" _ASM_AX " \n\t" "mov %c[rbx](%0), %%" _ASM_BX " \n\t" "mov %c[rdx](%0), %%" _ASM_DX " \n\t" "mov %c[rsi](%0), %%" _ASM_SI " \n\t" "mov %c[rdi](%0), %%" _ASM_DI " \n\t" "mov %c[rbp](%0), %%" _ASM_BP " \n\t" #ifdef CONFIG_X86_64 "mov %c[r8](%0), %%r8 \n\t" "mov %c[r9](%0), %%r9 \n\t" "mov %c[r10](%0), %%r10 \n\t" "mov %c[r11](%0), %%r11 \n\t" "mov %c[r12](%0), %%r12 \n\t" "mov %c[r13](%0), %%r13 \n\t" "mov %c[r14](%0), %%r14 \n\t" "mov %c[r15](%0), %%r15 \n\t" #endif "mov %c[rcx](%0), %%" _ASM_CX " \n\t" /* kills %0 (ecx) */ /* Enter guest mode */ "jne 1f \n\t" __ex(ASM_VMX_VMLAUNCH) "\n\t" "jmp 2f \n\t" "1: " __ex(ASM_VMX_VMRESUME) "\n\t" "2: " /* Save guest registers, load host registers, keep flags */ "mov %0, %c[wordsize](%%" _ASM_SP ") \n\t" "pop %0 \n\t" "mov %%" _ASM_AX ", %c[rax](%0) \n\t" "mov %%" _ASM_BX ", %c[rbx](%0) \n\t" __ASM_SIZE(pop) " %c[rcx](%0) \n\t" "mov %%" _ASM_DX ", %c[rdx](%0) \n\t" "mov %%" _ASM_SI ", %c[rsi](%0) \n\t" "mov %%" _ASM_DI ", %c[rdi](%0) \n\t" "mov %%" _ASM_BP ", %c[rbp](%0) \n\t" #ifdef CONFIG_X86_64 "mov %%r8, %c[r8](%0) \n\t" "mov %%r9, %c[r9](%0) \n\t" "mov %%r10, %c[r10](%0) \n\t" "mov %%r11, %c[r11](%0) \n\t" "mov %%r12, %c[r12](%0) \n\t" "mov %%r13, %c[r13](%0) \n\t" "mov %%r14, %c[r14](%0) \n\t" "mov %%r15, %c[r15](%0) \n\t" #endif "mov %%cr2, %%" _ASM_AX " \n\t" "mov %%" _ASM_AX ", %c[cr2](%0) \n\t" "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" "setbe %c[fail](%0) \n\t" ".pushsection .rodata \n\t" ".global vmx_return \n\t" "vmx_return: " _ASM_PTR " 2b \n\t" ".popsection" : : "c"(vmx), "d"((unsigned long)HOST_RSP), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [fail]"i"(offsetof(struct vcpu_vmx, fail)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])), [rdx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RDX])), [rsi]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RSI])), [rdi]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RDI])), [rbp]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBP])), #ifdef CONFIG_X86_64 [r8]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R8])), [r9]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R9])), [r10]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R10])), [r11]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R11])), [r12]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R12])), [r13]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R13])), [r14]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R14])), [r15]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R15])), #endif [cr2]"i"(offsetof(struct vcpu_vmx, vcpu.arch.cr2)), [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 , "rax", "rbx", "rdi", "rsi" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else , "eax", "ebx", "edi", "esi" #endif ); /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ if (debugctlmsr) update_debugctlmsr(debugctlmsr); #ifndef CONFIG_X86_64 /* * The sysexit path does not restore ds/es, so we must set them to * a reasonable value ourselves. * * We can't defer this to vmx_load_host_state() since that function * may be executed in interrupt context, which saves and restore segments * around it, nullifying its effect. 
*/ loadsegment(ds, __USER_DS); loadsegment(es, __USER_DS); #endif vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP) | (1 << VCPU_EXREG_RFLAGS) | (1 << VCPU_EXREG_PDPTR) | (1 << VCPU_EXREG_SEGMENTS) | (1 << VCPU_EXREG_CR3)); vcpu->arch.regs_dirty = 0; vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD); vmx->loaded_vmcs->launched = 1; vmx->exit_reason = vmcs_read32(VM_EXIT_REASON); trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX); /* * the KVM_REQ_EVENT optimization bit is only on for one entry, and if * we did not inject a still-pending event to L1 now because of * nested_run_pending, we need to re-enable this bit. */ if (vmx->nested.nested_run_pending) kvm_make_request(KVM_REQ_EVENT, vcpu); vmx->nested.nested_run_pending = 0; vmx_complete_atomic_exit(vmx); vmx_recover_nmi_blocking(vmx); vmx_complete_interrupts(vmx); }
0
[ "CWE-399" ]
linux
54a20552e1eae07aa240fa370a0293e006b5faed
196,690,700,571,569,830,000,000,000,000,000,000,000
197
KVM: x86: work around infinite loop in microcode when #AC is delivered It was found that a guest can DoS a host by triggering an infinite stream of "alignment check" (#AC) exceptions. This causes the microcode to enter an infinite loop where the core never receives another interrupt. The host kernel panics pretty quickly due to the effects (CVE-2015-5307). Signed-off-by: Eric Northup <[email protected]> Cc: [email protected] Signed-off-by: Paolo Bonzini <[email protected]>
hrtimer_start(struct hrtimer *timer, ktime_t tim, const enum hrtimer_mode mode) { struct hrtimer_clock_base *base, *new_base; unsigned long flags; int ret; base = lock_hrtimer_base(timer, &flags); /* Remove an active timer from the queue: */ ret = remove_hrtimer(timer, base); /* Switch the timer base, if necessary: */ new_base = switch_hrtimer_base(timer, base); if (mode == HRTIMER_MODE_REL) { tim = ktime_add(tim, new_base->get_time()); /* * CONFIG_TIME_LOW_RES is a temporary way for architectures * to signal that they simply return xtime in * do_gettimeoffset(). In this case we want to round up by * resolution when starting a relative timer, to avoid short * timeouts. This will go away with the GTOD framework. */ #ifdef CONFIG_TIME_LOW_RES tim = ktime_add(tim, base->resolution); #endif } timer->expires = tim; timer_stats_hrtimer_set_start_info(timer); enqueue_hrtimer(timer, new_base, base == new_base); unlock_hrtimer_base(timer, &flags); return ret; }
0
[ "CWE-189" ]
linux-2.6
13788ccc41ceea5893f9c747c59bc0b28f2416c2
71,188,575,773,950,225,000,000,000,000,000,000,000
37
[PATCH] hrtimer: prevent overrun DoS in hrtimer_forward() hrtimer_forward() does not check for the possible overflow of timer->expires. This can happen on 64-bit machines with large interval values and currently results in an endless loop in the softirq because the expiry value becomes negative and therefore the timer is expired all the time. Check for this condition and set the expiry value to the max. expiry time in the future. The fix should be applied to stable kernel series as well. Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Ingo Molnar <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
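The guard the patch describes fits in a few lines; here is a simplified model using plain signed 64-bit nanoseconds rather than the kernel's ktime API, handling positive intervals only:

#include <stdint.h>

static int64_t expiry_forward(int64_t expires, int64_t interval)
{
    /* Would expires + interval overflow and wrap negative? */
    if (interval > 0 && expires > INT64_MAX - interval)
        return INT64_MAX;   /* clamp to "far future" instead of wrapping */
    return expires + interval;
}

A wrapped-negative expiry reads as "already expired", which is exactly the endless softirq loop the message describes.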
static inline int u24(byte *p) { return (p[0] << 16) | (p[1] << 8) | p[2]; }
0
[ "CWE-125" ]
ghostpdl
d2ab84732936b6e7e5a461dc94344902965e9a06
306,388,473,170,239,500,000,000,000,000,000,000,000
4
Bug 698025: validate offsets reading TTF name table in xps
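The u24() helper in the record above does raw reads, so the safety has to come from the caller: every (offset, length) pair taken from the name table must be checked against the table's bounds before dereferencing, with the comparisons arranged so they cannot wrap. A hedged sketch (names are illustrative, not Ghostscript's xps code):

#include <stddef.h>

typedef unsigned char byte;

/* Return a pointer to a name record's string, or NULL if it would run
 * past the end of the table. */
static const byte *name_string(const byte *nametab, size_t tablen,
                               size_t str_base, size_t off, size_t len)
{
    if (str_base > tablen)
        return NULL;
    /* off + len could wrap; compare against remaining space instead. */
    if (off > tablen - str_base || len > tablen - str_base - off)
        return NULL;
    return nametab + str_base + off;
}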
span_renderer_fini (cairo_abstract_span_renderer_t *_r, cairo_int_status_t status) { cairo_image_span_renderer_t *r = (cairo_image_span_renderer_t *) _r; TRACE ((stderr, "%s\n", __FUNCTION__)); if (likely (status == CAIRO_INT_STATUS_SUCCESS && r->bpp == 0)) { const cairo_composite_rectangles_t *composite = r->composite; if (r->base.finish) r->base.finish (r); pixman_image_composite32 (r->op, r->src, r->mask, to_pixman_image (composite->surface), composite->unbounded.x + r->u.mask.src_x, composite->unbounded.y + r->u.mask.src_y, 0, 0, composite->unbounded.x, composite->unbounded.y, composite->unbounded.width, composite->unbounded.height); } if (r->src) pixman_image_unref (r->src); if (r->mask) pixman_image_unref (r->mask); }
0
[ "CWE-787" ]
cairo
c986a7310bb06582b7d8a566d5f007ba4e5e75bf
78,607,634,416,812,080,000,000,000,000,000,000,000
29
image: Enable inplace compositing with opacities for general routines On a SNB i5-2500: Speedups ======== firefox-chalkboard 34284.16 -> 19637.40: 1.74x speedup swfdec-giant-steps 778.35 -> 665.37: 1.17x speedup ocitysmap 485.64 -> 431.94: 1.12x speedup Slowdowns ========= firefox-fishbowl 46878.98 -> 54407.14: 1.16x slowdown That slow down is due to overhead of the increased number of calls to pixman_image_composite32() (pixman_transform_point for analyzing the source extents in particular) outweighing any advantage gained by performing the rasterisation in a single pass and eliding gaps. The solution that has been floated in the past is for an interface into pixman to only perform the analysis once and then to return a kernel to use for all spans. Signed-off-by: Chris Wilson <[email protected]>
char *full_fname(const char *fn) { static char *result = NULL; char *m1, *m2, *m3; char *p1, *p2; if (result) free(result); if (*fn == '/') p1 = p2 = ""; else { p1 = curr_dir + module_dirlen; for (p2 = p1; *p2 == '/'; p2++) {} if (*p2) p2 = "/"; } if (module_id >= 0) { m1 = " (in "; m2 = lp_name(module_id); m3 = ")"; } else m1 = m2 = m3 = ""; if (asprintf(&result, "\"%s%s%s\"%s%s%s", p1, p2, fn, m1, m2, m3) < 0) out_of_memory("full_fname"); return result; }
0
[ "CWE-59" ]
rsync
4cad402ea8a91031f86c53961d78bb7f4f174790
38,145,224,569,791,200,000,000,000,000,000,000,000
29
Receiver now rejects invalid filenames in filelist. If the receiver gets a filename with a leading slash (w/o --relative) and/or a filename with an embedded ".." dir in the path, it dies with an error (rather than continuing). Those invalid paths should never happen in reality, so just reject someone trying to pull a fast one.
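A compact model of the receiver-side check the message describes — reject absolute names unless --relative-style behavior allows them, and reject any name containing ".." as a full path component. Not rsync's actual implementation:

#include <stdbool.h>
#include <string.h>

static bool fname_is_safe(const char *fn, bool allow_absolute)
{
    if (fn[0] == '/' && !allow_absolute)
        return false;
    for (const char *p = fn; (p = strstr(p, "..")) != NULL; p += 2) {
        bool at_start = (p == fn) || (p[-1] == '/');
        bool at_end   = (p[2] == '\0') || (p[2] == '/');
        if (at_start && at_end)
            return false;       /* ".." is a complete component */
    }
    return true;
}

Note "a..b" passes while "a/../b", "../x" and ".." are rejected, matching the embedded-dotdot wording.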
TiledInputFile::dataWindowForTile (int dx, int dy, int lx, int ly) const { try { if (!isValidTile (dx, dy, lx, ly)) throw IEX_NAMESPACE::ArgExc ("Arguments not in valid range."); return OPENEXR_IMF_INTERNAL_NAMESPACE::dataWindowForTile ( _data->tileDesc, _data->minX, _data->maxX, _data->minY, _data->maxY, dx, dy, lx, ly); } catch (IEX_NAMESPACE::BaseExc &e) { REPLACE_EXC (e, "Error calling dataWindowForTile() on image " "file \"" << fileName() << "\". " << e.what()); throw; } }
0
[ "CWE-125" ]
openexr
e79d2296496a50826a15c667bf92bdc5a05518b4
217,417,403,483,073,220,000,000,000,000,000,000,000
20
fix memory leaks and invalid memory accesses Signed-off-by: Peter Hillman <[email protected]>
build_object (GoaProvider *provider, GoaObjectSkeleton *object, GKeyFile *key_file, const gchar *group, GDBusConnection *connection, gboolean just_added, GError **error) { GoaAccount *account; GoaCalendar *calendar; GoaContacts *contacts; GoaExchange *exchange; GoaMail *mail; GoaPasswordBased *password_based; gboolean calendar_enabled; gboolean contacts_enabled; gboolean mail_enabled; gboolean ret; account = NULL; calendar = NULL; contacts = NULL; exchange = NULL; mail = NULL; password_based = NULL; ret = FALSE; /* Chain up */ if (!GOA_PROVIDER_CLASS (goa_exchange_provider_parent_class)->build_object (provider, object, key_file, group, connection, just_added, error)) goto out; password_based = goa_object_get_password_based (GOA_OBJECT (object)); if (password_based == NULL) { password_based = goa_password_based_skeleton_new (); /* Ensure D-Bus method invocations run in their own thread */ g_dbus_interface_skeleton_set_flags (G_DBUS_INTERFACE_SKELETON (password_based), G_DBUS_INTERFACE_SKELETON_FLAGS_HANDLE_METHOD_INVOCATIONS_IN_THREAD); goa_object_skeleton_set_password_based (object, password_based); g_signal_connect (password_based, "handle-get-password", G_CALLBACK (on_handle_get_password), NULL); } account = goa_object_get_account (GOA_OBJECT (object)); /* Email */ mail = goa_object_get_mail (GOA_OBJECT (object)); mail_enabled = g_key_file_get_boolean (key_file, group, "MailEnabled", NULL); if (mail_enabled) { if (mail == NULL) { const gchar *email_address; email_address = goa_account_get_presentation_identity (account); mail = goa_mail_skeleton_new (); g_object_set (G_OBJECT (mail), "email-address", email_address, NULL); goa_object_skeleton_set_mail (object, mail); } } else { if (mail != NULL) goa_object_skeleton_set_mail (object, NULL); } /* Calendar */ calendar = goa_object_get_calendar (GOA_OBJECT (object)); calendar_enabled = g_key_file_get_boolean (key_file, group, "CalendarEnabled", NULL); if (calendar_enabled) { if (calendar == NULL) { calendar = goa_calendar_skeleton_new (); goa_object_skeleton_set_calendar (object, calendar); } } else { if (calendar != NULL) goa_object_skeleton_set_calendar (object, NULL); } /* Contacts */ contacts = goa_object_get_contacts (GOA_OBJECT (object)); contacts_enabled = g_key_file_get_boolean (key_file, group, "ContactsEnabled", NULL); if (contacts_enabled) { if (contacts == NULL) { contacts = goa_contacts_skeleton_new (); goa_object_skeleton_set_contacts (object, contacts); } } else { if (contacts != NULL) goa_object_skeleton_set_contacts (object, NULL); } /* Exchange */ exchange = goa_object_get_exchange (GOA_OBJECT (object)); if (exchange == NULL) { gchar *host; host = g_key_file_get_string (key_file, group, "Host", NULL); exchange = goa_exchange_skeleton_new (); g_object_set (G_OBJECT (exchange), "host", host, NULL); goa_object_skeleton_set_exchange (object, exchange); g_free (host); } if (just_added) { goa_account_set_mail_disabled (account, !mail_enabled); goa_account_set_calendar_disabled (account, !calendar_enabled); goa_account_set_contacts_disabled (account, !contacts_enabled); g_signal_connect (account, "notify::mail-disabled", G_CALLBACK (goa_util_account_notify_property_cb), "MailEnabled"); g_signal_connect (account, "notify::calendar-disabled", G_CALLBACK (goa_util_account_notify_property_cb), "CalendarEnabled"); g_signal_connect (account, "notify::contacts-disabled", G_CALLBACK (goa_util_account_notify_property_cb), "ContactsEnabled"); } ret = TRUE; out: if (exchange != NULL) g_object_unref (exchange); if (contacts != NULL) 
g_object_unref (contacts); if (calendar != NULL) g_object_unref (calendar); if (mail != NULL) g_object_unref (mail); if (password_based != NULL) g_object_unref (password_based); return ret; }
0
[ "CWE-310" ]
gnome-online-accounts
ecad8142e9ac519b9fc74b96dcb5531052bbffe1
195,398,530,000,458,860,000,000,000,000,000,000,000
158
Guard against invalid SSL certificates. None of the branded providers (e.g., Google, Facebook and Windows Live) should ever have an invalid certificate. So set "ssl-strict" on the SoupSession object being used by GoaWebView. Providers like ownCloud and Exchange might have to deal with certificates that are not up to the mark, e.g., self-signed certificates. For those, show a warning when the account is being created, and only proceed if the user decides to ignore it. In any case, save the status of the certificate that was used to create the account. So an account created with a valid certificate will never work with an invalid one, and one created with an invalid certificate will not throw any further warnings. Fixes: CVE-2013-0240
TEST(ExpressionStrLenCP, ComputesLengthOfStringWithNullAtEnd) { assertExpectedResults("$strLenCP", {{{Value("abc\0"_sd)}, Value(4)}}); }
0
[ "CWE-835" ]
mongo
0a076417d1d7fba3632b73349a1fd29a83e68816
28,905,127,228,935,250,000,000,000,000,000,000,000
3
SERVER-38070 fix infinite loop in agg expression
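The test above pins two behaviors: the length is taken from an explicit byte count (so the embedded NUL in "abc\0" contributes, giving 4) and counting terminates on malformed input. MongoDB's fix lives in its aggregation expression code; this standalone C sketch just shows the two invariants — honor the explicit length, and always advance at least one byte so no input can loop forever:

#include <stddef.h>
#include <stdint.h>

static size_t utf8_strlen_cp(const uint8_t *s, size_t len)
{
    size_t n = 0, i = 0;
    while (i < len) {
        uint8_t c = s[i];
        size_t step =
            (c < 0x80)           ? 1 :
            ((c & 0xE0) == 0xC0) ? 2 :
            ((c & 0xF0) == 0xE0) ? 3 :
            ((c & 0xF8) == 0xF0) ? 4 : 1; /* invalid byte: still advance */
        if (step > len - i)
            step = len - i;               /* truncated trailing sequence */
        i += step;
        n++;
    }
    return n;
}

utf8_strlen_cp((const uint8_t *)"abc\0", 4) returns 4, matching the test.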
static int ntop_get_interface_hosts(lua_State* vm, LocationPolicy location) { NetworkInterface *ntop_interface = getCurrentInterface(vm); bool show_details = true; char *sortColumn = (char*)"column_ip", *country = NULL, *os_filter = NULL, *mac_filter = NULL; bool a2zSortOrder = true; u_int16_t vlan_filter, *vlan_filter_ptr = NULL; u_int32_t asn_filter, *asn_filter_ptr = NULL; int16_t network_filter, *network_filter_ptr = NULL; u_int32_t toSkip = 0, maxHits = CONST_MAX_NUM_HITS; ntop->getTrace()->traceEvent(TRACE_DEBUG, "%s() called", __FUNCTION__); if(lua_type(vm, 1) == LUA_TBOOLEAN) show_details = lua_toboolean(vm, 1) ? true : false; if(lua_type(vm, 2) == LUA_TSTRING) sortColumn = (char*)lua_tostring(vm, 2); if(lua_type(vm, 3) == LUA_TNUMBER) maxHits = (u_int16_t)lua_tonumber(vm, 3); if(lua_type(vm, 4) == LUA_TNUMBER) toSkip = (u_int16_t)lua_tonumber(vm, 4); if(lua_type(vm, 5) == LUA_TBOOLEAN) a2zSortOrder = lua_toboolean(vm, 5) ? true : false; if(lua_type(vm, 6) == LUA_TSTRING) country = (char*)lua_tostring(vm, 6); if(lua_type(vm, 7) == LUA_TSTRING) os_filter = (char*)lua_tostring(vm, 7); if(lua_type(vm, 8) == LUA_TNUMBER) vlan_filter = (u_int16_t)lua_tonumber(vm, 8), vlan_filter_ptr = &vlan_filter; if(lua_type(vm, 9) == LUA_TNUMBER) asn_filter = (u_int32_t)lua_tonumber(vm, 9), asn_filter_ptr = &asn_filter; if(lua_type(vm,10) == LUA_TNUMBER) network_filter = (int16_t)lua_tonumber(vm, 10), network_filter_ptr = &network_filter; if(lua_type(vm,11) == LUA_TSTRING) mac_filter = (char*)lua_tostring(vm, 11); if(!ntop_interface || ntop_interface->getActiveHostsList(vm, get_allowed_nets(vm), show_details, location, country, mac_filter, vlan_filter_ptr, os_filter, asn_filter_ptr, network_filter_ptr, sortColumn, maxHits, toSkip, a2zSortOrder) < 0) return(CONST_LUA_ERROR); return(CONST_LUA_OK); }
0
[ "CWE-284", "CWE-352" ]
ntopng
f91fbe3d94c8346884271838ae3406ae633f6f15
128,977,288,932,476,500,000,000,000,000,000,000,000
35
Check for presence of CSRF in admin scripts
test_if_cheaper_ordering(const JOIN_TAB *tab, ORDER *order, TABLE *table, key_map usable_keys, int ref_key, ha_rows select_limit_arg, int *new_key, int *new_key_direction, ha_rows *new_select_limit, uint *new_used_key_parts, uint *saved_best_key_parts) { DBUG_ENTER("test_if_cheaper_ordering"); /* Check whether there is an index compatible with the given order usage of which is cheaper than usage of the ref_key index (ref_key>=0) or a table scan. It may be the case if ORDER/GROUP BY is used with LIMIT. */ ha_rows best_select_limit= HA_POS_ERROR; JOIN *join= tab ? tab->join : NULL; uint nr; key_map keys; uint best_key_parts= 0; int best_key_direction= 0; ha_rows best_records= 0; double read_time; int best_key= -1; bool is_best_covering= FALSE; double fanout= 1; ha_rows table_records= table->stat_records(); bool group= join && join->group && order == join->group_list; ha_rows refkey_rows_estimate= table->quick_condition_rows; const bool has_limit= (select_limit_arg != HA_POS_ERROR); /* If not used with LIMIT, only use keys if the whole query can be resolved with a key; This is because filesort() is usually faster than retrieving all rows through an index. */ if (select_limit_arg >= table_records) { keys= *table->file->keys_to_use_for_scanning(); keys.merge(table->covering_keys); /* We are adding here also the index specified in FORCE INDEX clause, if any. This is to allow users to use index in ORDER BY. */ if (table->force_index) keys.merge(group ? table->keys_in_use_for_group_by : table->keys_in_use_for_order_by); keys.intersect(usable_keys); } else keys= usable_keys; if (join) { uint tablenr= (uint)(tab - join->join_tab); read_time= join->best_positions[tablenr].read_time; for (uint i= tablenr+1; i < join->table_count; i++) fanout*= join->best_positions[i].records_read; // fanout is always >= 1 } else read_time= table->file->scan_time(); /* TODO: add cost of sorting here. */ read_time += COST_EPS; /* Calculate the selectivity of the ref_key for REF_ACCESS. For RANGE_ACCESS we use table->quick_condition_rows. */ if (ref_key >= 0 && ref_key != MAX_KEY && tab->type == JT_REF) { if (table->quick_keys.is_set(ref_key)) refkey_rows_estimate= table->quick_rows[ref_key]; else { const KEY *ref_keyinfo= table->key_info + ref_key; refkey_rows_estimate= ref_keyinfo->rec_per_key[tab->ref.key_parts - 1]; } set_if_bigger(refkey_rows_estimate, 1); } for (nr=0; nr < table->s->keys ; nr++) { int direction; ha_rows select_limit= select_limit_arg; uint used_key_parts= 0; if (keys.is_set(nr) && (direction= test_if_order_by_key(join, order, table, nr, &used_key_parts))) { /* At this point we are sure that ref_key is a non-ordering key (where "ordering key" is a key that will return rows in the order required by ORDER BY). */ DBUG_ASSERT (ref_key != (int) nr); bool is_covering= (table->covering_keys.is_set(nr) || (table->file->index_flags(nr, 0, 1) & HA_CLUSTERED_INDEX)); /* Don't use an index scan with ORDER BY without limit. For GROUP BY without limit always use index scan if there is a suitable index. Why we hold to this asymmetry hardly can be explained rationally. It's easy to demonstrate that using temporary table + filesort could be cheaper for grouping queries too. 
*/ if (is_covering || select_limit != HA_POS_ERROR || (ref_key < 0 && (group || table->force_index))) { double rec_per_key; double index_scan_time; KEY *keyinfo= table->key_info+nr; if (select_limit == HA_POS_ERROR) select_limit= table_records; if (group) { /* Used_key_parts can be larger than keyinfo->user_defined_key_parts when using a secondary index clustered with a primary key (e.g. as in Innodb). See Bug #28591 for details. */ uint used_index_parts= keyinfo->user_defined_key_parts; uint used_pk_parts= 0; if (used_key_parts > used_index_parts) used_pk_parts= used_key_parts-used_index_parts; rec_per_key= used_key_parts ? keyinfo->actual_rec_per_key(used_key_parts-1) : 1; /* Take into account the selectivity of the used pk prefix */ if (used_pk_parts) { KEY *pkinfo=tab->table->key_info+table->s->primary_key; /* If the values of of records per key for the prefixes of the primary key are considered unknown we assume they are equal to 1. */ if (used_key_parts == pkinfo->user_defined_key_parts || pkinfo->rec_per_key[0] == 0) rec_per_key= 1; if (rec_per_key > 1) { rec_per_key*= pkinfo->actual_rec_per_key(used_pk_parts-1); rec_per_key/= pkinfo->actual_rec_per_key(0); /* The value of rec_per_key for the extended key has to be adjusted accordingly if some components of the secondary key are included in the primary key. */ for(uint i= 1; i < used_pk_parts; i++) { if (pkinfo->key_part[i].field->key_start.is_set(nr)) { /* We presume here that for any index rec_per_key[i] != 0 if rec_per_key[0] != 0. */ DBUG_ASSERT(pkinfo->actual_rec_per_key(i)); rec_per_key*= pkinfo->actual_rec_per_key(i-1); rec_per_key/= pkinfo->actual_rec_per_key(i); } } } } set_if_bigger(rec_per_key, 1); /* With a grouping query each group containing on average rec_per_key records produces only one row that will be included into the result set. */ if (select_limit > table_records/rec_per_key) select_limit= table_records; else select_limit= (ha_rows) (select_limit*rec_per_key); } /* group */ /* If tab=tk is not the last joined table tn then to get first L records from the result set we can expect to retrieve only L/fanout(tk,tn) where fanout(tk,tn) says how many rows in the record set on average will match each row tk. Usually our estimates for fanouts are too pessimistic. So the estimate for L/fanout(tk,tn) will be too optimistic and as result we'll choose an index scan when using ref/range access + filesort will be cheaper. */ select_limit= (ha_rows) (select_limit < fanout ? 1 : select_limit/fanout); /* We assume that each of the tested indexes is not correlated with ref_key. Thus, to select first N records we have to scan N/selectivity(ref_key) index entries. selectivity(ref_key) = #scanned_records/#table_records = refkey_rows_estimate/table_records. In any case we can't select more than #table_records. N/(refkey_rows_estimate/table_records) > table_records <=> N > refkey_rows_estimate. */ if (select_limit > refkey_rows_estimate) select_limit= table_records; else select_limit= (ha_rows) (select_limit * (double) table_records / refkey_rows_estimate); rec_per_key= keyinfo->actual_rec_per_key(keyinfo->user_defined_key_parts-1); set_if_bigger(rec_per_key, 1); /* Here we take into account the fact that rows are accessed in sequences rec_per_key records in each. Rows in such a sequence are supposed to be ordered by rowid/primary key. When reading the data in a sequence we'll touch not more pages than the table file contains. TODO. 
Use the formula for a disk sweep sequential access to calculate the cost of accessing data rows for one index entry. */ index_scan_time= select_limit/rec_per_key * MY_MIN(rec_per_key, table->file->scan_time()); double range_scan_time; if (get_range_limit_read_cost(tab, table, nr, select_limit, &range_scan_time)) { if (range_scan_time < index_scan_time) index_scan_time= range_scan_time; } if ((ref_key < 0 && (group || table->force_index || is_covering)) || index_scan_time < read_time) { ha_rows quick_records= table_records; ha_rows refkey_select_limit= (ref_key >= 0 && !is_hash_join_key_no(ref_key) && table->covering_keys.is_set(ref_key)) ? refkey_rows_estimate : HA_POS_ERROR; if ((is_best_covering && !is_covering) || (is_covering && refkey_select_limit < select_limit)) continue; if (table->quick_keys.is_set(nr)) quick_records= table->quick_rows[nr]; if (best_key < 0 || (select_limit <= MY_MIN(quick_records,best_records) ? keyinfo->user_defined_key_parts < best_key_parts : quick_records < best_records) || (!is_best_covering && is_covering)) { best_key= nr; best_key_parts= keyinfo->user_defined_key_parts; if (saved_best_key_parts) *saved_best_key_parts= used_key_parts; best_records= quick_records; is_best_covering= is_covering; best_key_direction= direction; best_select_limit= select_limit; } } } } } if (best_key < 0 || best_key == ref_key) DBUG_RETURN(FALSE); *new_key= best_key; *new_key_direction= best_key_direction; *new_select_limit= has_limit ? best_select_limit : table_records; if (new_used_key_parts != NULL) *new_used_key_parts= best_key_parts; DBUG_RETURN(TRUE); }
0
[ "CWE-89" ]
server
5ba77222e9fe7af8ff403816b5338b18b342053c
253,031,158,977,151,000,000,000,000,000,000,000,000
280
MDEV-21028 Server crashes in Query_arena::set_query_arena upon SELECT from view. If the view has algorithm=temptable, it is not updatable, so DEFAULT() for its fields is meaningless, and thus it's NULL or 0/'' for NOT NULL columns.
TEST(LengthFieldFrameDecoder, StripPrePostHeaderFrameInclHeader) { auto pipeline = Pipeline<IOBufQueue&, std::unique_ptr<IOBuf>>::create(); int called = 0; (*pipeline) .addBack(LengthFieldBasedFrameDecoder(2, 10, 2, -2, 4)) .addBack(test::FrameTester([&](std::unique_ptr<IOBuf> buf) { auto sz = buf->computeChainDataLength(); called++; EXPECT_EQ(sz, 3); })) .finalize(); auto bufFrame = createZeroedBuffer(6); RWPrivateCursor c(bufFrame.get()); c.write((uint16_t)100); // pre header c.writeBE((uint16_t)5); // frame size c.write((uint16_t)100); // post header auto bufData = createZeroedBuffer(1); IOBufQueue q(IOBufQueue::cacheChainLength()); q.append(std::move(bufFrame)); pipeline->read(q); EXPECT_EQ(called, 0); q.append(std::move(bufData)); pipeline->read(q); EXPECT_EQ(called, 1); }
0
[ "CWE-119", "CWE-787" ]
wangle
5b3bceca875e4ea4ed9d14c20b20ce46c92c13c6
65,627,431,402,929,340,000,000,000,000,000,000,000
30
Peek for \n in LineBasedFrameDecoder. Summary: Previously this could underflow if there was not a following \n. CVE-2019-3563 Reviewed By: siyengar Differential Revision: D14935715 fbshipit-source-id: 25c3eecf373f89efa1232456aeeb092f13b7fa06
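wangle is C++, but the fix's idea fits this document's C: before consuming a "\r\n" terminator, peek whether the next byte really is '\n'; blindly subtracting for a '\n' that is not there is where the length arithmetic could underflow. Illustrative only:

#include <stddef.h>
#include <stdint.h>

/* cr_pos indexes a '\r' found in buf; return the index just past the
 * line terminator. */
static size_t frame_end(const uint8_t *buf, size_t len, size_t cr_pos)
{
    if (cr_pos + 1 < len && buf[cr_pos + 1] == '\n')
        return cr_pos + 2;      /* swallow the CRLF pair */
    return cr_pos + 1;          /* bare CR: nothing extra to consume */
}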
aclitemout(PG_FUNCTION_ARGS) { AclItem *aip = PG_GETARG_ACLITEM_P(0); char *p; char *out; HeapTuple htup; unsigned i; out = palloc(strlen("=/") + 2 * N_ACL_RIGHTS + 2 * (2 * NAMEDATALEN + 2) + 1); p = out; *p = '\0'; if (aip->ai_grantee != ACL_ID_PUBLIC) { htup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(aip->ai_grantee)); if (HeapTupleIsValid(htup)) { putid(p, NameStr(((Form_pg_authid) GETSTRUCT(htup))->rolname)); ReleaseSysCache(htup); } else { /* Generate numeric OID if we don't find an entry */ sprintf(p, "%u", aip->ai_grantee); } } while (*p) ++p; *p++ = '='; for (i = 0; i < N_ACL_RIGHTS; ++i) { if (ACLITEM_GET_PRIVS(*aip) & (1 << i)) *p++ = ACL_ALL_RIGHTS_STR[i]; if (ACLITEM_GET_GOPTIONS(*aip) & (1 << i)) *p++ = '*'; } *p++ = '/'; *p = '\0'; htup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(aip->ai_grantor)); if (HeapTupleIsValid(htup)) { putid(p, NameStr(((Form_pg_authid) GETSTRUCT(htup))->rolname)); ReleaseSysCache(htup); } else { /* Generate numeric OID if we don't find an entry */ sprintf(p, "%u", aip->ai_grantor); } PG_RETURN_CSTRING(out); }
0
[ "CWE-264" ]
postgres
fea164a72a7bfd50d77ba5fb418d357f8f2bb7d0
91,401,396,458,183,210,000,000,000,000,000,000,000
60
Shore up ADMIN OPTION restrictions. Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role. Issuing SET ROLE before the GRANT bypassed that, because the role itself had an implicit right to add or remove members. Plug that hole by recognizing that implicit right only when the session user matches the current role. Additionally, do not recognize it during a security-restricted operation or during execution of a SECURITY DEFINER function. The restriction on SECURITY DEFINER is not security-critical. However, it seems best for a user testing his own SECURITY DEFINER function to see the same behavior others will see. Back-patch to 8.4 (all supported versions). The SQL standards do not conflate roles and users as PostgreSQL does; only SQL roles have members, and only SQL users initiate sessions. An application using PostgreSQL users and roles as SQL users and roles will never attempt to grant membership in the role that is the session user, so the implicit right to add or remove members will never arise. The security impact was mostly that a role member could revoke access from others, contrary to the wishes of his own grantor. Unapproved role member additions are less notable, because the member can still largely achieve that by creating a view or a SECURITY DEFINER function. Reviewed by Andres Freund and Tom Lane. Reported, independently, by Jonas Sundman and Noah Misch. Security: CVE-2014-0060
TEST_F(QuicUnencryptedServerTransportTest, FirstPacketProcessedCallback) { getFakeHandshakeLayer()->allowZeroRttKeys(); EXPECT_CALL(connCallback, onFirstPeerPacketProcessed()).Times(1); recvClientHello(); loopForWrites(); AckBlocks acks; acks.insert(0); auto aead = getInitialCipher(); auto headerCipher = getInitialHeaderCipher(); EXPECT_CALL(connCallback, onFirstPeerPacketProcessed()).Times(0); deliverData(packetToBufCleartext( createAckPacket( server->getNonConstConn(), clientNextInitialPacketNum, acks, PacketNumberSpace::Initial, aead.get()), *aead, *headerCipher, clientNextInitialPacketNum)); }
0
[ "CWE-617", "CWE-703" ]
mvfst
a67083ff4b8dcbb7ee2839da6338032030d712b0
298,846,480,881,963,170,000,000,000,000,000,000,000
21
Close connection if we derive an extra 1-rtt write cipher Summary: Fixes CVE-2021-24029 Reviewed By: mjoras, lnicco Differential Revision: D26613890 fbshipit-source-id: 19bb2be2c731808144e1a074ece313fba11f1945
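The CWE tags (reachable assertion / improper check) name the pattern: a condition a remote peer can trigger used to hit an abort. The fix's shape — fail the connection instead of the process — can be modeled schematically in C with made-up names; the real change is in mvfst's C++ handshake layer:

#include <stdbool.h>

struct conn { bool one_rtt_write_cipher_set; bool closed; };

static bool on_one_rtt_write_cipher(struct conn *c)
{
    if (c->one_rtt_write_cipher_set) {
        /* Previously assert(!set): a peer-reachable process abort. */
        c->closed = true;       /* close this connection, keep running */
        return false;
    }
    c->one_rtt_write_cipher_set = true;
    return true;
}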
rend_parse_introduction_points(rend_service_descriptor_t *parsed, const char *intro_points_encoded, size_t intro_points_encoded_size) { const char *current_ipo, *end_of_intro_points; smartlist_t *tokens; directory_token_t *tok; rend_intro_point_t *intro; extend_info_t *info; int result, num_ok=1; memarea_t *area = NULL; tor_assert(parsed); /** Function may only be invoked once. */ tor_assert(!parsed->intro_nodes); tor_assert(intro_points_encoded); tor_assert(intro_points_encoded_size > 0); /* Consider one intro point after the other. */ current_ipo = intro_points_encoded; end_of_intro_points = intro_points_encoded + intro_points_encoded_size; tokens = smartlist_create(); parsed->intro_nodes = smartlist_create(); area = memarea_new(); while (!fast_memcmpstart(current_ipo, end_of_intro_points-current_ipo, "introduction-point ")) { /* Determine end of string. */ const char *eos = tor_memstr(current_ipo, end_of_intro_points-current_ipo, "\nintroduction-point "); if (!eos) eos = end_of_intro_points; else eos = eos+1; tor_assert(eos <= intro_points_encoded+intro_points_encoded_size); /* Free tokens and clear token list. */ SMARTLIST_FOREACH(tokens, directory_token_t *, t, token_clear(t)); smartlist_clear(tokens); memarea_clear(area); /* Tokenize string. */ if (tokenize_string(area, current_ipo, eos, tokens, ipo_token_table, 0)) { log_warn(LD_REND, "Error tokenizing introduction point"); goto err; } /* Advance to next introduction point, if available. */ current_ipo = eos; /* Check minimum allowed length of introduction point. */ if (smartlist_len(tokens) < 5) { log_warn(LD_REND, "Impossibly short introduction point."); goto err; } /* Allocate new intro point and extend info. */ intro = tor_malloc_zero(sizeof(rend_intro_point_t)); info = intro->extend_info = tor_malloc_zero(sizeof(extend_info_t)); /* Parse identifier. */ tok = find_by_keyword(tokens, R_IPO_IDENTIFIER); if (base32_decode(info->identity_digest, DIGEST_LEN, tok->args[0], REND_INTRO_POINT_ID_LEN_BASE32) < 0) { log_warn(LD_REND, "Identity digest contains illegal characters: %s", tok->args[0]); rend_intro_point_free(intro); goto err; } /* Write identifier to nickname. */ info->nickname[0] = '$'; base16_encode(info->nickname + 1, sizeof(info->nickname) - 1, info->identity_digest, DIGEST_LEN); /* Parse IP address. */ tok = find_by_keyword(tokens, R_IPO_IP_ADDRESS); if (tor_addr_from_str(&info->addr, tok->args[0])<0) { log_warn(LD_REND, "Could not parse introduction point address."); rend_intro_point_free(intro); goto err; } if (tor_addr_family(&info->addr) != AF_INET) { log_warn(LD_REND, "Introduction point address was not ipv4."); rend_intro_point_free(intro); goto err; } /* Parse onion port. */ tok = find_by_keyword(tokens, R_IPO_ONION_PORT); info->port = (uint16_t) tor_parse_long(tok->args[0],10,1,65535, &num_ok,NULL); if (!info->port || !num_ok) { log_warn(LD_REND, "Introduction point onion port %s is invalid", escaped(tok->args[0])); rend_intro_point_free(intro); goto err; } /* Parse onion key. */ tok = find_by_keyword(tokens, R_IPO_ONION_KEY); if (!crypto_pk_public_exponent_ok(tok->key)) { log_warn(LD_REND, "Introduction point's onion key had invalid exponent."); rend_intro_point_free(intro); goto err; } info->onion_key = tok->key; tok->key = NULL; /* Prevent free */ /* Parse service key. 
*/ tok = find_by_keyword(tokens, R_IPO_SERVICE_KEY); if (!crypto_pk_public_exponent_ok(tok->key)) { log_warn(LD_REND, "Introduction point key had invalid exponent."); rend_intro_point_free(intro); goto err; } intro->intro_key = tok->key; tok->key = NULL; /* Prevent free */ /* Add extend info to list of introduction points. */ smartlist_add(parsed->intro_nodes, intro); } result = smartlist_len(parsed->intro_nodes); goto done; err: result = -1; done: /* Free tokens and clear token list. */ SMARTLIST_FOREACH(tokens, directory_token_t *, t, token_clear(t)); smartlist_free(tokens); if (area) memarea_drop_all(area); return result; }
0
[ "CWE-399" ]
tor
57e35ad3d91724882c345ac709666a551a977f0f
295,348,384,027,096,900,000,000,000,000,000,000,000
126
Avoid possible segfault when handling networkstatus vote with bad flavor Fix for 6530; fix on 0.2.2.6-alpha.
Config::WeightedClusterEntry::WeightedClusterEntry( const Config& parent, const envoy::extensions::filters::network::tcp_proxy::v3::TcpProxy:: WeightedCluster::ClusterWeight& config) : parent_(parent), cluster_name_(config.name()), cluster_weight_(config.weight()) { if (config.has_metadata_match()) { const auto filter_it = config.metadata_match().filter_metadata().find( Envoy::Config::MetadataFilters::get().ENVOY_LB); if (filter_it != config.metadata_match().filter_metadata().end()) { if (parent.cluster_metadata_match_criteria_) { metadata_match_criteria_ = parent.cluster_metadata_match_criteria_->mergeMatchCriteria(filter_it->second); } else { metadata_match_criteria_ = std::make_unique<Router::MetadataMatchCriteriaImpl>(filter_it->second); } } } }
0
[ "CWE-416" ]
envoy
ce0ae309057a216aba031aff81c445c90c6ef145
66,606,675,178,624,890,000,000,000,000,000,000,000
18
CVE-2021-43826 Signed-off-by: Yan Avlasov <[email protected]>
nfs4_find_client_sessionid(struct net *net, const struct sockaddr *addr, struct nfs4_sessionid *sid, u32 minorversion) { struct nfs_client *clp; struct nfs_net *nn = net_generic(net, nfs_net_id); spin_lock(&nn->nfs_client_lock); list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) { if (!nfs4_cb_match_client(addr, clp, minorversion)) continue; if (!nfs4_has_session(clp)) continue; /* Match sessionid*/ if (memcmp(clp->cl_session->sess_id.data, sid->data, NFS4_MAX_SESSIONID_LEN) != 0) continue; refcount_inc(&clp->cl_count); spin_unlock(&nn->nfs_client_lock); return clp; } spin_unlock(&nn->nfs_client_lock); return NULL; }
0
[ "CWE-703" ]
linux
dd99e9f98fbf423ff6d365b37a98e8879170f17c
62,295,332,169,746,670,000,000,000,000,000,000,000
26
NFSv4: Initialise connection to the server in nfs4_alloc_client() Set up the connection to the NFSv4 server in nfs4_alloc_client(), before we've added the struct nfs_client to the net-namespace's nfs_client_list so that a downed server won't cause other mounts to hang in the trunking detection code. Reported-by: Michael Wakabayashi <[email protected]> Fixes: 5c6e5b60aae4 ("NFS: Fix an Oops in the pNFS files and flexfiles connection setup to the DS") Signed-off-by: Trond Myklebust <[email protected]>
_asn1_get_time_der (const unsigned char *der, int der_len, int *ret_len, char *str, int str_size) { int len_len, str_len; if (der_len <= 0 || str == NULL) return ASN1_DER_ERROR; str_len = asn1_get_length_der (der, der_len, &len_len); if (str_len <= 0 || str_size < str_len) return ASN1_DER_ERROR; memcpy (str, der + len_len, str_len); str[str_len] = 0; *ret_len = str_len + len_len; return ASN1_SUCCESS; }
0
[]
libtasn1
51612fca32dda445056ca9a7533bae258acd3ecb
68,277,054,056,425,725,000,000,000,000,000,000,000
18
check for zero size in time and object ids.
static inline bool is_base64(unsigned char c) { return (isalnum(c) || (c == '+') || (c == '/')); }
0
[ "CWE-20" ]
tinygltf
52ff00a38447f06a17eab1caa2cf0730a119c751
238,297,595,809,150,340,000,000,000,000,000,000,000
3
Do not expand the file path since it's not necessary for a glTF asset path (URI) and for security reasons (`wordexp`).
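The message's reasoning: a glTF asset URI is not a shell word, so wordexp()-style expansion is both unnecessary and dangerous (crafted URIs could trigger command or variable expansion). All a loader needs is percent-decoding, which is small enough to do directly. A standalone sketch (tinygltf itself is C++; this is illustrative):

#include <stddef.h>

static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decode %XX escapes in place; returns the new length, or -1 on a bad
 * or truncated escape. */
static long uri_percent_decode(char *s, size_t len)
{
    size_t r = 0, w = 0;
    while (r < len) {
        if (s[r] == '%') {
            if (r + 2 >= len)
                return -1;
            int hi = hexval(s[r + 1]), lo = hexval(s[r + 2]);
            if (hi < 0 || lo < 0)
                return -1;
            s[w++] = (char)((hi << 4) | lo);
            r += 3;
        } else {
            s[w++] = s[r++];
        }
    }
    return (long)w;
}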
int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t *ppflags) { int q = oci->filter_level * 10 / 6; int flags = ppflags->post_proc_flag; int deblock_level = ppflags->deblocking_level; int noise_level = ppflags->noise_level; if (!oci->frame_to_show) return -1; if (q > 63) q = 63; if (!flags) { *dest = *oci->frame_to_show; /* handle problem with extending borders */ dest->y_width = oci->Width; dest->y_height = oci->Height; dest->uv_height = dest->y_height / 2; oci->postproc_state.last_base_qindex = oci->base_qindex; oci->postproc_state.last_frame_valid = 1; return 0; } if (flags & VP8D_ADDNOISE) { if (!oci->postproc_state.generated_noise) { oci->postproc_state.generated_noise = vpx_calloc( oci->Width + 256, sizeof(*oci->postproc_state.generated_noise)); if (!oci->postproc_state.generated_noise) return 1; } } /* Allocate post_proc_buffer_int if needed */ if ((flags & VP8D_MFQE) && !oci->post_proc_buffer_int_used) { if ((flags & VP8D_DEBLOCK) || (flags & VP8D_DEMACROBLOCK)) { int width = (oci->Width + 15) & ~15; int height = (oci->Height + 15) & ~15; if (vp8_yv12_alloc_frame_buffer(&oci->post_proc_buffer_int, width, height, VP8BORDERINPIXELS)) { vpx_internal_error(&oci->error, VPX_CODEC_MEM_ERROR, "Failed to allocate MFQE framebuffer"); } oci->post_proc_buffer_int_used = 1; /* insure that postproc is set to all 0's so that post proc * doesn't pull random data in from edge */ memset((&oci->post_proc_buffer_int)->buffer_alloc, 128, (&oci->post_proc_buffer)->frame_size); } } vpx_clear_system_state(); if ((flags & VP8D_MFQE) && oci->postproc_state.last_frame_valid && oci->current_video_frame >= 2 && oci->postproc_state.last_base_qindex < 60 && oci->base_qindex - oci->postproc_state.last_base_qindex >= 20) { vp8_multiframe_quality_enhance(oci); if (((flags & VP8D_DEBLOCK) || (flags & VP8D_DEMACROBLOCK)) && oci->post_proc_buffer_int_used) { vp8_yv12_copy_frame(&oci->post_proc_buffer, &oci->post_proc_buffer_int); if (flags & VP8D_DEMACROBLOCK) { vp8_deblock(oci, &oci->post_proc_buffer_int, &oci->post_proc_buffer, q + (deblock_level - 5) * 10, 1, 0); vp8_de_mblock(&oci->post_proc_buffer, q + (deblock_level - 5) * 10); } else if (flags & VP8D_DEBLOCK) { vp8_deblock(oci, &oci->post_proc_buffer_int, &oci->post_proc_buffer, q, 1, 0); } } /* Move partially towards the base q of the previous frame */ oci->postproc_state.last_base_qindex = (3 * oci->postproc_state.last_base_qindex + oci->base_qindex) >> 2; } else if (flags & VP8D_DEMACROBLOCK) { vp8_deblock(oci, oci->frame_to_show, &oci->post_proc_buffer, q + (deblock_level - 5) * 10, 1, 0); vp8_de_mblock(&oci->post_proc_buffer, q + (deblock_level - 5) * 10); oci->postproc_state.last_base_qindex = oci->base_qindex; } else if (flags & VP8D_DEBLOCK) { vp8_deblock(oci, oci->frame_to_show, &oci->post_proc_buffer, q, 1, 0); oci->postproc_state.last_base_qindex = oci->base_qindex; } else { vp8_yv12_copy_frame(oci->frame_to_show, &oci->post_proc_buffer); oci->postproc_state.last_base_qindex = oci->base_qindex; } oci->postproc_state.last_frame_valid = 1; if (flags & VP8D_ADDNOISE) { if (oci->postproc_state.last_q != q || oci->postproc_state.last_noise != noise_level) { double sigma; struct postproc_state *ppstate = &oci->postproc_state; vpx_clear_system_state(); sigma = noise_level + .5 + .6 * q / 63.0; ppstate->clamp = vpx_setup_noise(sigma, ppstate->generated_noise, oci->Width + 256); ppstate->last_q = q; ppstate->last_noise = noise_level; } vpx_plane_add_noise( oci->post_proc_buffer.y_buffer, oci->postproc_state.generated_noise, 
oci->postproc_state.clamp, oci->postproc_state.clamp, oci->post_proc_buffer.y_width, oci->post_proc_buffer.y_height, oci->post_proc_buffer.y_stride); } *dest = oci->post_proc_buffer; /* handle problem with extending borders */ dest->y_width = oci->Width; dest->y_height = oci->Height; dest->uv_height = dest->y_height / 2; return 0; }
0
[ "CWE-20" ]
libvpx
52add5896661d186dec284ed646a4b33b607d2c7
171,906,525,104,444,400,000,000,000,000,000,000,000
117
VP8: Fix use-after-free in postproc. The pointer in vp8 postproc refers to show_frame_mi, which is only updated on show frame. However, when there is a no-show frame which also changes the size (and thus new frame buffers are allocated), show_frame_mi is not updated with the new frame buffer memory. Change the pointer in postproc to mi, which is always updated. Bug: 842265 Change-Id: I33874f2112b39f74562cba528432b5f239e6a7bd
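A hedged editorial sketch of the bug class described in this message (plain C, not the libvpx code): a raw pointer cached before a buffer is reallocated dangles afterwards, while reading through the live, always-updated handle stays safe.

#include <stdlib.h>
#include <string.h>

/* A handle whose backing array may be reallocated when the size changes. */
struct ctx { int *mi; size_t n; };

static int resize_and_read(struct ctx *c)
{
    int *grown = realloc(c->mi, c->n * 2 * sizeof(*grown));
    if (grown == NULL)
        return -1;
    memset(grown + c->n, 0, c->n * sizeof(*grown)); /* initialize new half */
    c->mi = grown;
    c->n *= 2;
    /* Read through the live handle; a pointer cached before the resize
     * (the show_frame_mi analogue) could now point at freed memory. */
    return c->mi[0];
}

int main(void)
{
    struct ctx c = { calloc(4, sizeof(int)), 4 };
    int v = c.mi ? resize_and_read(&c) : -1;
    free(c.mi);
    return v < 0;
}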
GF_Err gf_isom_truehd_config_get(GF_ISOFile *the_file, u32 trackNumber, u32 StreamDescriptionIndex, u32 *format_info, u32 *peak_data_rate) { GF_TrackBox *trak; GF_MPEGAudioSampleEntryBox *entry; trak = gf_isom_get_track_from_file(the_file, trackNumber); if (!trak || !StreamDescriptionIndex) return GF_BAD_PARAM; entry = (GF_MPEGAudioSampleEntryBox *)gf_list_get(trak->Media->information->sampleTable->SampleDescription->child_boxes, StreamDescriptionIndex-1); if (!entry) return GF_BAD_PARAM; if (entry->type != GF_ISOM_SUBTYPE_MLPA) return GF_BAD_PARAM; if (!entry->cfg_mlp) return GF_ISOM_INVALID_FILE; if (format_info) *format_info = entry->cfg_mlp->format_info; if (peak_data_rate) *peak_data_rate = entry->cfg_mlp->peak_data_rate; return GF_OK; }
0
[ "CWE-476", "CWE-401" ]
gpac
328c6d682698fdb9878dbb4f282963d42c538c01
138,494,702,351,549,150,000,000,000,000,000,000,000
16
fixed #1756
static void __unix_remove_socket(struct sock *sk) { sk_del_node_init(sk); }
0
[]
linux-2.6
6209344f5a3795d34b7f2c0061f49802283b6bdd
175,771,693,388,606,060,000,000,000,000,000,000,000
4
net: unix: fix inflight counting bug in garbage collector Previously I assumed that the receive queues of candidates don't change during the GC. This is only half true: nothing can be received from the queues (see comment in unix_gc()), but buffers could be added through the other half of the socket pair, which may still have file descriptors referring to it. This can result in inc_inflight_move_tail() erroneously increasing the "inflight" counter for a unix socket for which dec_inflight() wasn't previously called. This in turn can trigger the "BUG_ON(total_refs < inflight_refs)" in a later garbage collection run. Fix this by only manipulating the "inflight" counter for sockets which are candidates themselves. Duplicating the file references in unix_attach_fds() is also needed to prevent a socket becoming a candidate for GC while the skb that contains it is not yet queued. Reported-by: Andrea Bittau <[email protected]> Signed-off-by: Miklos Szeredi <[email protected]> CC: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
pipe_write(struct kiocb *iocb, struct iov_iter *from) { struct file *filp = iocb->ki_filp; struct pipe_inode_info *pipe = filp->private_data; ssize_t ret = 0; int do_wakeup = 0; size_t total_len = iov_iter_count(from); ssize_t chars; /* Null write succeeds. */ if (unlikely(total_len == 0)) return 0; __pipe_lock(pipe); if (!pipe->readers) { send_sig(SIGPIPE, current, 0); ret = -EPIPE; goto out; } /* We try to merge small writes */ chars = total_len & (PAGE_SIZE-1); /* size of the last buffer */ if (pipe->nrbufs && chars != 0) { int lastbuf = (pipe->curbuf + pipe->nrbufs - 1) & (pipe->buffers - 1); struct pipe_buffer *buf = pipe->bufs + lastbuf; const struct pipe_buf_operations *ops = buf->ops; int offset = buf->offset + buf->len; if (ops->can_merge && offset + chars <= PAGE_SIZE) { ret = ops->confirm(pipe, buf); if (ret) goto out; ret = copy_page_from_iter(buf->page, offset, chars, from); if (unlikely(ret < chars)) { ret = -EFAULT; goto out; } do_wakeup = 1; buf->len += ret; if (!iov_iter_count(from)) goto out; } } for (;;) { int bufs; if (!pipe->readers) { send_sig(SIGPIPE, current, 0); if (!ret) ret = -EPIPE; break; } bufs = pipe->nrbufs; if (bufs < pipe->buffers) { int newbuf = (pipe->curbuf + bufs) & (pipe->buffers-1); struct pipe_buffer *buf = pipe->bufs + newbuf; struct page *page = pipe->tmp_page; int copied; if (!page) { page = alloc_page(GFP_HIGHUSER); if (unlikely(!page)) { ret = ret ? : -ENOMEM; break; } pipe->tmp_page = page; } /* Always wake up, even if the copy fails. Otherwise * we lock up (O_NONBLOCK-)readers that sleep due to * syscall merging. * FIXME! Is this really true? */ do_wakeup = 1; copied = copy_page_from_iter(page, 0, PAGE_SIZE, from); if (unlikely(copied < PAGE_SIZE && iov_iter_count(from))) { if (!ret) ret = -EFAULT; break; } ret += copied; /* Insert it into the buffer array */ buf->page = page; buf->ops = &anon_pipe_buf_ops; buf->offset = 0; buf->len = copied; buf->flags = 0; if (is_packetized(filp)) { buf->ops = &packet_pipe_buf_ops; buf->flags = PIPE_BUF_FLAG_PACKET; } pipe->nrbufs = ++bufs; pipe->tmp_page = NULL; if (!iov_iter_count(from)) break; } if (bufs < pipe->buffers) continue; if (filp->f_flags & O_NONBLOCK) { if (!ret) ret = -EAGAIN; break; } if (signal_pending(current)) { if (!ret) ret = -ERESTARTSYS; break; } if (do_wakeup) { wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM); kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN); do_wakeup = 0; } pipe->waiting_writers++; pipe_wait(pipe); pipe->waiting_writers--; } out: __pipe_unlock(pipe); if (do_wakeup) { wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM); kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN); } if (ret > 0 && sb_start_write_trylock(file_inode(filp)->i_sb)) { int err = file_update_time(filp); if (err) ret = err; sb_end_write(file_inode(filp)->i_sb); } return ret; }
0
[ "CWE-399" ]
linux
759c01142a5d0f364a462346168a56de28a80f52
169,063,989,323,754,050,000,000,000,000,000,000,000
136
pipe: limit the per-user amount of pages allocated in pipes On not-so-small systems, it is possible for a single process to cause an OOM condition by filling large pipes with data that are never read. A typical process filling 4000 pipes with 1 MB of data will use 4 GB of memory. On small systems it may be tricky to set the pipe max size to prevent this from happening. This patch makes it possible to enforce a per-user soft limit above which new pipes will be limited to a single page, effectively limiting them to 4 kB each, as well as a hard limit above which no new pipes may be created for this user. This has the effect of protecting the system against memory abuse without hurting other users, and still allowing pipes to work correctly though with less data at once. The limits are controlled by two new sysctls: pipe-user-pages-soft and pipe-user-pages-hard. Both may be disabled by setting them to zero. The default soft limit allows the default number of FDs per process (1024) to create pipes of the default size (64kB), thus reaching a limit of 64MB before starting to create only smaller pipes. With 256 processes limited to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB = 1084 MB of memory allocated for a user. The hard limit is disabled by default to avoid breaking existing applications that make intensive use of pipes (e.g. for splicing). Reported-by: [email protected] Reported-by: Tetsuo Handa <[email protected]> Mitigates: CVE-2013-4312 (Linux 2.0+) Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Willy Tarreau <[email protected]> Signed-off-by: Al Viro <[email protected]>
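To make the policy concrete, here is an editorial userspace model of the soft/hard limit behavior described above (names and defaults are illustrative, not the kernel implementation):

#include <stdio.h>

/* 0 disables a limit, matching the sysctls described in the message. */
static unsigned long soft_limit_pages = 16384; /* e.g. 64 MB of 4 kB pages */
static unsigned long hard_limit_pages = 0;     /* hard limit off by default */

static long pipe_pages_for_user(unsigned long user_pages, long wanted_pages)
{
    if (hard_limit_pages && user_pages >= hard_limit_pages)
        return -1;          /* creation denied for this user */
    if (soft_limit_pages && user_pages >= soft_limit_pages)
        return 1;           /* fall back to a single page (4 kB) */
    return wanted_pages;    /* default, e.g. 16 pages = 64 kB */
}

int main(void)
{
    printf("%ld\n", pipe_pages_for_user(100, 16));   /* under limit: 16 */
    printf("%ld\n", pipe_pages_for_user(20000, 16)); /* over soft limit: 1 */
    return 0;
}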
static int cpia2_g_fmt_vid_cap(struct file *file, void *fh, struct v4l2_format *f) { struct camera_data *cam = video_drvdata(file); f->fmt.pix.width = cam->width; f->fmt.pix.height = cam->height; f->fmt.pix.pixelformat = cam->pixelformat; f->fmt.pix.field = V4L2_FIELD_NONE; f->fmt.pix.bytesperline = 0; f->fmt.pix.sizeimage = cam->frame_size; f->fmt.pix.colorspace = V4L2_COLORSPACE_JPEG; f->fmt.pix.priv = 0; return 0; }
0
[ "CWE-416" ]
linux
dea37a97265588da604c6ba80160a287b72c7bfd
212,911,909,860,948,000,000,000,000,000,000,000,000
16
media: cpia2: Fix use-after-free in cpia2_exit Syzkaller report this: BUG: KASAN: use-after-free in sysfs_remove_file_ns+0x5f/0x70 fs/sysfs/file.c:468 Read of size 8 at addr ffff8881f59a6b70 by task syz-executor.0/8363 CPU: 0 PID: 8363 Comm: syz-executor.0 Not tainted 5.0.0-rc8+ #3 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xfa/0x1ce lib/dump_stack.c:113 print_address_description+0x65/0x270 mm/kasan/report.c:187 kasan_report+0x149/0x18d mm/kasan/report.c:317 sysfs_remove_file_ns+0x5f/0x70 fs/sysfs/file.c:468 sysfs_remove_file include/linux/sysfs.h:519 [inline] driver_remove_file+0x40/0x50 drivers/base/driver.c:122 usb_remove_newid_files drivers/usb/core/driver.c:212 [inline] usb_deregister+0x12a/0x3b0 drivers/usb/core/driver.c:1005 cpia2_exit+0xa/0x16 [cpia2] __do_sys_delete_module kernel/module.c:1018 [inline] __se_sys_delete_module kernel/module.c:961 [inline] __x64_sys_delete_module+0x3dc/0x5e0 kernel/module.c:961 do_syscall_64+0x147/0x600 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x462e99 Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f86f3754c58 EFLAGS: 00000246 ORIG_RAX: 00000000000000b0 RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000300 RBP: 0000000000000002 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00007f86f37556bc R13: 00000000004bcca9 R14: 00000000006f6b48 R15: 00000000ffffffff Allocated by task 8363: set_track mm/kasan/common.c:85 [inline] __kasan_kmalloc.constprop.3+0xa0/0xd0 mm/kasan/common.c:495 kmalloc include/linux/slab.h:545 [inline] kzalloc include/linux/slab.h:740 [inline] bus_add_driver+0xc0/0x610 drivers/base/bus.c:651 driver_register+0x1bb/0x3f0 drivers/base/driver.c:170 usb_register_driver+0x267/0x520 drivers/usb/core/driver.c:965 0xffffffffc1b4817c do_one_initcall+0xfa/0x5ca init/main.c:887 do_init_module+0x204/0x5f6 kernel/module.c:3460 load_module+0x66b2/0x8570 kernel/module.c:3808 __do_sys_finit_module+0x238/0x2a0 kernel/module.c:3902 do_syscall_64+0x147/0x600 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe Freed by task 8363: set_track mm/kasan/common.c:85 [inline] __kasan_slab_free+0x130/0x180 mm/kasan/common.c:457 slab_free_hook mm/slub.c:1430 [inline] slab_free_freelist_hook mm/slub.c:1457 [inline] slab_free mm/slub.c:3005 [inline] kfree+0xe1/0x270 mm/slub.c:3957 kobject_cleanup lib/kobject.c:662 [inline] kobject_release lib/kobject.c:691 [inline] kref_put include/linux/kref.h:67 [inline] kobject_put+0x146/0x240 lib/kobject.c:708 bus_remove_driver+0x10e/0x220 drivers/base/bus.c:732 driver_unregister+0x6c/0xa0 drivers/base/driver.c:197 usb_register_driver+0x341/0x520 drivers/usb/core/driver.c:980 0xffffffffc1b4817c do_one_initcall+0xfa/0x5ca init/main.c:887 do_init_module+0x204/0x5f6 kernel/module.c:3460 load_module+0x66b2/0x8570 kernel/module.c:3808 __do_sys_finit_module+0x238/0x2a0 kernel/module.c:3902 do_syscall_64+0x147/0x600 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe The buggy address belongs to the object at ffff8881f59a6b40 which belongs to the cache kmalloc-256 of size 256 The buggy address is located 48 bytes inside of 256-byte region [ffff8881f59a6b40, 
ffff8881f59a6c40) The buggy address belongs to the page: page:ffffea0007d66980 count:1 mapcount:0 mapping:ffff8881f6c02e00 index:0x0 flags: 0x2fffc0000000200(slab) raw: 02fffc0000000200 dead000000000100 dead000000000200 ffff8881f6c02e00 raw: 0000000000000000 00000000800c000c 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff8881f59a6a00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffff8881f59a6a80: 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc >ffff8881f59a6b00: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb ^ ffff8881f59a6b80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8881f59a6c00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc cpia2_init does not check the return value of cpia2_init; if it failed in usb_register_driver, there is already cleanup using driver_unregister. No need to call cpia2_usb_cleanup on module exit. Reported-by: Hulk Robot <[email protected]> Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Hans Verkuil <[email protected]> Signed-off-by: Mauro Carvalho Chehab <[email protected]>
static int ext4_ext_fiemap_cb(struct inode *inode, struct ext4_ext_path *path, struct ext4_ext_cache *newex, struct ext4_extent *ex, void *data) { struct fiemap_extent_info *fieinfo = data; unsigned char blksize_bits = inode->i_sb->s_blocksize_bits; __u64 logical; __u64 physical; __u64 length; __u32 flags = 0; int error; logical = (__u64)newex->ec_block << blksize_bits; if (newex->ec_type == EXT4_EXT_CACHE_GAP) { pgoff_t offset; struct page *page; struct buffer_head *bh = NULL; offset = logical >> PAGE_SHIFT; page = find_get_page(inode->i_mapping, offset); if (!page || !page_has_buffers(page)) return EXT_CONTINUE; bh = page_buffers(page); if (!bh) return EXT_CONTINUE; if (buffer_delay(bh)) { flags |= FIEMAP_EXTENT_DELALLOC; page_cache_release(page); } else { page_cache_release(page); return EXT_CONTINUE; } } physical = (__u64)newex->ec_start << blksize_bits; length = (__u64)newex->ec_len << blksize_bits; if (ex && ext4_ext_is_uninitialized(ex)) flags |= FIEMAP_EXTENT_UNWRITTEN; /* * If this extent reaches EXT_MAX_BLOCK, it must be last. * * Or if ext4_ext_next_allocated_block is EXT_MAX_BLOCK, * this also indicates no more allocated blocks. * * XXX this might miss a single-block extent at EXT_MAX_BLOCK */ if (ext4_ext_next_allocated_block(path) == EXT_MAX_BLOCK || newex->ec_block + newex->ec_len - 1 == EXT_MAX_BLOCK) { loff_t size = i_size_read(inode); loff_t bs = EXT4_BLOCK_SIZE(inode->i_sb); flags |= FIEMAP_EXTENT_LAST; if ((flags & FIEMAP_EXTENT_DELALLOC) && logical+length > size) length = (size - logical + bs - 1) & ~(bs-1); } error = fiemap_fill_next_extent(fieinfo, logical, physical, length, flags); if (error < 0) return error; if (error == 1) return EXT_BREAK; return EXT_CONTINUE; }
0
[ "CWE-703" ]
linux
744692dc059845b2a3022119871846e74d4f6e11
92,520,414,816,126,780,000,000,000,000,000,000,000
72
ext4: use ext4_get_block_write in buffer write Allocate uninitialized extent before ext4 buffer write and convert the extent to initialized after io completes. The purpose is to make sure an extent can only be marked initialized after it has been written with new data so we can safely drop the i_mutex lock in ext4 DIO read without exposing stale data. This helps to improve multi-thread DIO read performance on high-speed disks. Skip the nobh and data=journal mount cases to make things simple for now. Signed-off-by: Jiaying Zhang <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]>
gc_mark_gray_list(mrb_state *mrb, mrb_gc *gc) { while (gc->gray_list) { if (is_gray(gc->gray_list)) gc_mark_children(mrb, gc, gc->gray_list); else gc->gray_list = gc->gray_list->gcnext; } }
0
[ "CWE-415" ]
mruby
97319697c8f9f6ff27b32589947e1918e3015503
141,060,422,086,697,430,000,000,000,000,000,000,000
8
Cancel 9cdf439 Should not free the pointer in `realloc` since it can cause a use-after-free problem.
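The rule this revert restores, as an editorial C sketch: after a successful realloc() the old pointer is owned (and possibly already freed) by the allocator, so freeing it again is a double free or use-after-free.

#include <stdlib.h>
#include <string.h>

static char *grow(char *buf, size_t newsz)
{
    char *p = realloc(buf, newsz);
    if (p == NULL) {
        free(buf);      /* only on failure does the old block survive */
        return NULL;
    }
    return p;           /* never free(buf) here: realloc already owns it */
}

int main(void)
{
    char *b = grow(NULL, 16);   /* realloc(NULL, n) acts like malloc(n) */
    if (b == NULL)
        return 1;
    strcpy(b, "ok");
    free(b);
    return 0;
}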
is_window_local_option(int opt_idx) { return options[opt_idx].var == VAR_WIN; }
0
[ "CWE-122" ]
vim
b7081e135a16091c93f6f5f7525a5c58fb7ca9f9
2,869,457,340,712,041,400,000,000,000,000,000,000
4
patch 8.2.3402: invalid memory access when using :retab with large value Problem: Invalid memory access when using :retab with large value. Solution: Check the number is positive.
static int get_target_version(struct file *filp, struct dm_ioctl *param, size_t param_size) { return __list_versions(param, param_size, param->name); }
0
[ "CWE-787" ]
linux
4edbe1d7bcffcd6269f3b5eb63f710393ff2ec7a
37,356,517,143,544,630,000,000,000,000,000,000,000
4
dm ioctl: fix out of bounds array access when no devices If there are no dm devices, we need to zero the "dev" argument in the first structure dm_name_list. However, this can cause an out-of-bounds write, because the "needed" variable is zero and len may be less than eight. Fix this bug by reporting DM_BUFFER_FULL_FLAG if the result buffer is too small to hold the "nl->dev" value. Signed-off-by: Mikulas Patocka <[email protected]> Reported-by: Dan Carpenter <[email protected]> Cc: [email protected] Signed-off-by: Mike Snitzer <[email protected]>
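An editorial sketch of the fix pattern (struct and names are illustrative, not the device-mapper ABI): when the caller's buffer cannot hold even the fixed-size header, report a buffer-full flag instead of writing past the end.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct name_list { unsigned long long dev; unsigned next; char name[]; };

static bool fill_list(void *buf, size_t len, bool *buffer_full)
{
    size_t needed = sizeof(struct name_list);
    if (len < needed) {       /* was: an unconditional write of ->dev */
        *buffer_full = true;  /* the DM_BUFFER_FULL_FLAG analogue */
        return false;
    }
    memset(buf, 0, needed);   /* zero the header, including ->dev */
    return true;
}

int main(void)
{
    char small[4], big[64];
    bool full = false;
    fill_list(small, sizeof(small), &full);  /* too small: flag, no write */
    printf("full=%d\n", full);
    return fill_list(big, sizeof(big), &full) ? 0 : 1;
}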
void HeaderMapImpl::addViaMove(HeaderString&& key, HeaderString&& value) { // If this is an inline header, we can't addViaMove, because we'll overwrite // the existing value. auto* entry = getExistingInline(key.getStringView()); if (entry != nullptr) { const uint64_t added_size = appendToHeader(entry->value(), value.getStringView()); addSize(added_size); key.clear(); value.clear(); } else { insertByKey(std::move(key), std::move(value)); } }
0
[ "CWE-400", "CWE-703" ]
envoy
afc39bea36fd436e54262f150c009e8d72db5014
191,477,461,308,138,420,000,000,000,000,000,000,000
13
Track byteSize of HeaderMap internally. Introduces a cached byte size updated internally in HeaderMap. The value is stored as an optional, and is cleared whenever a non-const pointer or reference to a HeaderEntry is accessed. The cached value can be set with refreshByteSize() which performs an iteration over the HeaderMap to sum the size of each key and value in the HeaderMap. Signed-off-by: Asra Ali <[email protected]>
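An editorial restatement of the caching scheme in plain C (the real code is Envoy C++; names here are illustrative): the byte size is cached alongside a validity flag, invalidated whenever a mutable handle to an entry escapes, and recomputed by a full walk on demand.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct header { const char *key, *val; };
struct header_map {
    struct header *entries;
    size_t n;
    bool size_valid;    /* the "optional": a validity flag plus a value */
    size_t cached_size;
};

static struct header *mutable_entry(struct header_map *m, size_t i)
{
    m->size_valid = false;      /* the caller may change key or value */
    return &m->entries[i];
}

static size_t byte_size(struct header_map *m)  /* refreshByteSize analogue */
{
    if (!m->size_valid) {
        size_t s = 0;
        for (size_t i = 0; i < m->n; i++)
            s += strlen(m->entries[i].key) + strlen(m->entries[i].val);
        m->cached_size = s;
        m->size_valid = true;
    }
    return m->cached_size;
}

int main(void)
{
    struct header e[] = { { "host", "example.org" } };
    struct header_map m = { e, 1, false, 0 };
    printf("%zu\n", byte_size(&m));            /* computes and caches */
    mutable_entry(&m, 0)->val = "example.com"; /* invalidates the cache */
    printf("%zu\n", byte_size(&m));            /* recomputes */
    return 0;
}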
Item_cache_temporal::Item_cache_temporal(THD *thd, enum_field_types field_type_arg): Item_cache_int(thd, field_type_arg) { if (mysql_type_to_time_type(Item_cache_temporal::field_type()) == MYSQL_TIMESTAMP_ERROR) set_handler_by_field_type(MYSQL_TYPE_DATETIME); }
0
[ "CWE-89" ]
server
b5e16a6e0381b28b598da80b414168ce9a5016e5
117,441,376,727,051,970,000,000,000,000,000,000,000
8
MDEV-26061 MariaDB server crash at Field::set_default * Item_default_value::fix_fields creates a copy of its argument's field. * Field::default_value is changed when its expression is prepared in unpack_vcol_info_from_frm() This means we must unpack any vcol expression that includes DEFAULT(x) strictly after unpacking x->default_value. To avoid building and solving this dependency graph on every table open, we update Item_default_value::field->default_value after all vcols are unpacked and fixed.
static bool disconnected_whitelist_entries(struct hci_dev *hdev) { struct bdaddr_list *b; list_for_each_entry(b, &hdev->whitelist, list) { struct hci_conn *conn; conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &b->bdaddr); if (!conn) return true; if (conn->state != BT_CONNECTED && conn->state != BT_CONFIG) return true; } return false; }
0
[ "CWE-362" ]
linux
e2cb6b891ad2b8caa9131e3be70f45243df82a80
258,985,222,302,500,880,000,000,000,000,000,000,000
17
bluetooth: eliminate the potential race condition when removing the HCI controller There is a possible race condition vulnerability between issuing an HCI command and removing the controller. Specifically, functions hci_req_sync() and hci_dev_do_close() can race each other like below: thread-A in hci_req_sync() | thread-B in hci_dev_do_close() | hci_req_sync_lock(hdev); test_bit(HCI_UP, &hdev->flags); | ... | test_and_clear_bit(HCI_UP, &hdev->flags) hci_req_sync_lock(hdev); | | In this commit we alter the sequence in function hci_req_sync(), so that the lock is taken before HCI_UP is tested. Hence, thread-A cannot issue the HCI command after thread-B has cleared HCI_UP and begun closing the device. Signed-off-by: Lin Ma <[email protected]> Cc: Marcel Holtmann <[email protected]> Fixes: 7c6a329e4447 ("[Bluetooth] Fix regression from using default link policy") Signed-off-by: Greg Kroah-Hartman <[email protected]>
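The reordering described above, modeled as an editorial pthread sketch (simplified; the real code uses hdev flags and hci_req_sync_lock): taking the request lock before testing the up flag removes the window in which the close path can clear it.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t req_lock = PTHREAD_MUTEX_INITIALIZER;
static bool dev_up = true;

static int req_sync(void (*req)(void))
{
    pthread_mutex_lock(&req_lock);   /* was: test dev_up before locking */
    if (!dev_up) {
        pthread_mutex_unlock(&req_lock);
        return -1;                   /* device already closed */
    }
    req();                           /* close cannot run concurrently */
    pthread_mutex_unlock(&req_lock);
    return 0;
}

static void dev_close(void)
{
    pthread_mutex_lock(&req_lock);   /* serialized with req_sync() */
    dev_up = false;
    pthread_mutex_unlock(&req_lock);
}

static void noop(void) {}

int main(void)
{
    int first = req_sync(noop);   /* succeeds while the device is up */
    dev_close();
    int second = req_sync(noop);  /* now reliably fails: no race window */
    return (first == 0 && second == -1) ? 0 : 1;
}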
static BROTLI_INLINE void BrotliUnalignedWrite64(void* p, uint64_t v) { memcpy(p, &v, sizeof v); }
0
[ "CWE-120" ]
brotli
223d80cfbec8fd346e32906c732c8ede21f0cea6
315,112,362,131,508,620,000,000,000,000,000,000,000
3
Update (#826) * IMPORTANT: decoder: fix potential overflow when input chunk is >2GiB * simplify max Huffman table size calculation * eliminate symbol duplicates (static arrays in .h files) * minor combing in research/ code
static void cli_full_connection_done(struct tevent_req *subreq) { struct tevent_req *req = tevent_req_callback_data( subreq, struct tevent_req); NTSTATUS status; status = cli_tree_connect_recv(subreq); TALLOC_FREE(subreq); if (tevent_req_nterror(req, status)) { return; } tevent_req_done(req); }
0
[ "CWE-94" ]
samba
94295b7aa22d2544af5323bca70d3dcb97fd7c64
102,932,507,166,658,630,000,000,000,000,000,000,000
14
CVE-2016-2019: s3:libsmb: add comment regarding smbXcli_session_is_guest() with mandatory signing BUG: https://bugzilla.samba.org/show_bug.cgi?id=11860 Signed-off-by: Stefan Metzmacher <[email protected]>
extension_info_cancel (NautilusDirectory *directory) { if (directory->details->extension_info_in_progress != NULL) { if (directory->details->extension_info_idle) { g_source_remove (directory->details->extension_info_idle); } else { nautilus_info_provider_cancel_update (directory->details->extension_info_provider, directory->details->extension_info_in_progress); } directory->details->extension_info_in_progress = NULL; directory->details->extension_info_file = NULL; directory->details->extension_info_provider = NULL; directory->details->extension_info_idle = 0; async_job_end (directory, "extension info"); } }
0
[]
nautilus
7632a3e13874a2c5e8988428ca913620a25df983
274,803,958,388,699,400,000,000,000,000,000,000,000
19
Check for trusted desktop file launchers. 2009-02-24 Alexander Larsson <[email protected]> * libnautilus-private/nautilus-directory-async.c: Check for trusted desktop file launchers. * libnautilus-private/nautilus-file-private.h: * libnautilus-private/nautilus-file.c: * libnautilus-private/nautilus-file.h: Add nautilus_file_is_trusted_link. Allow unsetting of custom display name. * libnautilus-private/nautilus-mime-actions.c: Display dialog when trying to launch a non-trusted desktop file. svn path=/trunk/; revision=15003
static void VirtIONetRelease(PARANDIS_ADAPTER *pContext) { BOOLEAN b; ULONG i; DEBUG_ENTRY(0); /* list NetReceiveBuffersWaiting must be free */ for (i = 0; i < ARRAYSIZE(pContext->ReceiveQueues); i++) { pRxNetDescriptor pBufferDescriptor; while (NULL != (pBufferDescriptor = ReceiveQueueGetBuffer(pContext->ReceiveQueues + i))) { pBufferDescriptor->Queue->ReuseReceiveBuffer(FALSE, pBufferDescriptor); } } do { b = pContext->m_upstreamPacketPending != 0; if (b) { DPrintf(0, ("[%s] There are waiting buffers\n", __FUNCTION__)); PrintStatistics(pContext); NdisMSleep(5000000); } } while (b); RestoreMAC(pContext); for (i = 0; i < pContext->nPathBundles; i++) { if (pContext->pPathBundles[i].txCreated) { pContext->pPathBundles[i].txPath.Shutdown(); } if (pContext->pPathBundles[i].rxCreated) { pContext->pPathBundles[i].rxPath.Shutdown(); /* this can be freed, queue shut down */ pContext->pPathBundles[i].rxPath.FreeRxDescriptorsFromList(); } } if (pContext->bCXPathCreated) { pContext->CXPath.Shutdown(); } PrintStatistics(pContext); }
0
[ "CWE-20" ]
kvm-guest-drivers-windows
723416fa4210b7464b28eab89cc76252e6193ac1
309,661,723,128,044,000,000,000,000,000,000,000,000
55
NetKVM: BZ#1169718: Checking the length only on read Signed-off-by: Joseph Hindin <[email protected]>
static void nf_ct_frag6_expire(unsigned long data) { struct frag_queue *fq; struct net *net; fq = container_of((struct inet_frag_queue *)data, struct frag_queue, q); net = container_of(fq->q.net, struct net, nf_frag.frags); ip6_expire_frag_queue(net, fq, &nf_frags); }
0
[]
linux
3ef0eb0db4bf92c6d2510fe5c4dc51852746f206
205,876,914,341,535,330,000,000,000,000,000,000,000
10
net: frag, move LRU list maintenance outside of rwlock Updating the fragmentation queues LRU (Least-Recently-Used) list, required taking the hash writer lock. However, the LRU list isn't tied to the hash at all, so we can use a separate lock for it. Original-idea-by: Florian Westphal <[email protected]> Signed-off-by: Jesper Dangaard Brouer <[email protected]> Signed-off-by: David S. Miller <[email protected]>
R_API char *r_bin_java_get_item_desc_from_cp_item_list(RList *cp_list, RBinJavaCPTypeObj *obj, int depth) { /* Given a constant pool object FieldRef, MethodRef, or InterfaceMethodRef return the actual descriptor string. @rvalue ut8* (user frees) or NULL */ if (!obj || !cp_list || depth < 0) { return NULL; } switch (obj->tag) { case R_BIN_JAVA_CP_NAMEANDTYPE: return r_bin_java_get_utf8_from_cp_item_list (cp_list, obj->info.cp_name_and_type.descriptor_idx); // XXX - Probably not good form, but they are the same memory structure case R_BIN_JAVA_CP_FIELDREF: case R_BIN_JAVA_CP_INTERFACEMETHOD_REF: case R_BIN_JAVA_CP_METHODREF: obj = r_bin_java_get_item_from_cp_item_list (cp_list, obj->info.cp_method.name_and_type_idx); return r_bin_java_get_item_desc_from_cp_item_list ( cp_list, obj, depth - 1); default: return NULL; } return NULL; }
0
[ "CWE-125" ]
radare2
0927ed3ae99444e7b47b84e43118deb10fe37529
227,265,448,365,020,700,000,000,000,000,000,000,000
26
Fix oobread crash in java parser ##crash * Reported by @bet4it via @huntrdev * BountyID: 229a2e0d-9e5c-402f-9a24-57fa2eb1aaa7 * Reproducer: poc4java
void AV1_RewriteESDescriptorEx(GF_MPEGVisualSampleEntryBox *av1, GF_MediaBox *mdia) { GF_BitRateBox *btrt = gf_isom_sample_entry_get_bitrate((GF_SampleEntryBox *)av1, GF_FALSE); if (av1->emul_esd) gf_odf_desc_del((GF_Descriptor *)av1->emul_esd); av1->emul_esd = gf_odf_desc_esd_new(2); av1->emul_esd->decoderConfig->streamType = GF_STREAM_VISUAL; av1->emul_esd->decoderConfig->objectTypeIndication = GF_CODECID_AV1; if (btrt) { av1->emul_esd->decoderConfig->bufferSizeDB = btrt->bufferSizeDB; av1->emul_esd->decoderConfig->avgBitrate = btrt->avgBitrate; av1->emul_esd->decoderConfig->maxBitrate = btrt->maxBitrate; } if (av1->av1_config) { GF_AV1Config *av1_cfg = AV1_DuplicateConfig(av1->av1_config->config); if (av1_cfg) { gf_odf_av1_cfg_write(av1_cfg, &av1->emul_esd->decoderConfig->decoderSpecificInfo->data, &av1->emul_esd->decoderConfig->decoderSpecificInfo->dataLength); gf_odf_av1_cfg_del(av1_cfg); } } }
1
[ "CWE-476" ]
gpac
b2eab95e07cb5819375a50358d4806a8813b6e50
26,677,708,125,559,030,000,000,000,000,000,000,000
22
fixed #1738
static int cm_rtu_handler(struct cm_work *work) { struct cm_id_private *cm_id_priv; struct cm_rtu_msg *rtu_msg; int ret; rtu_msg = (struct cm_rtu_msg *)work->mad_recv_wc->recv_buf.mad; cm_id_priv = cm_acquire_id(rtu_msg->remote_comm_id, rtu_msg->local_comm_id); if (!cm_id_priv) return -EINVAL; work->cm_event.private_data = &rtu_msg->private_data; spin_lock_irq(&cm_id_priv->lock); if (cm_id_priv->id.state != IB_CM_REP_SENT && cm_id_priv->id.state != IB_CM_MRA_REP_RCVD) { spin_unlock_irq(&cm_id_priv->lock); atomic_long_inc(&work->port->counter_group[CM_RECV_DUPLICATES]. counter[CM_RTU_COUNTER]); goto out; } cm_id_priv->id.state = IB_CM_ESTABLISHED; ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg); ret = atomic_inc_and_test(&cm_id_priv->work_count); if (!ret) list_add_tail(&work->list, &cm_id_priv->work_list); spin_unlock_irq(&cm_id_priv->lock); if (ret) cm_process_work(cm_id_priv, work); else cm_deref_id(cm_id_priv); return 0; out: cm_deref_id(cm_id_priv); return -EINVAL; }
0
[ "CWE-20" ]
linux
b2853fd6c2d0f383dbdf7427e263eb576a633867
118,879,586,598,905,470,000,000,000,000,000,000,000
39
IB/core: Don't resolve passive side RoCE L2 address in CMA REQ handler The code that resolves the passive side source MAC within the rdma_cm connection request handler was both redundant and buggy, so remove it. It was redundant since later, when an RC QP is modified to RTR state, the resolution will take place in the ib_core module. It was buggy because this callback also deals with UD SIDR exchange, for which we incorrectly looked at the REQ member of the CM event and dereferenced a random value. Fixes: dd5f03beb4f7 ("IB/core: Ethernet L2 attributes in verbs/cm structures") Signed-off-by: Moni Shoua <[email protected]> Signed-off-by: Or Gerlitz <[email protected]> Signed-off-by: Roland Dreier <[email protected]>
ExprMatchTest() : _expCtx(new ExpressionContextForTest()) {}
0
[]
mongo
ee97c0699fd55b498310996ee002328e533681a3
288,706,472,175,858,620,000,000,000,000,000,000,000
1
SERVER-36993 Fix crash due to incorrect $or pushdown for indexed $expr.
static int selinux_task_getscheduler(struct task_struct *p) { return current_has_perm(p, PROCESS__GETSCHED); }
0
[]
linux-2.6
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
287,859,421,750,704,100,000,000,000,000,000,000,000
4
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6] Add a keyctl to install a process's session keyring onto its parent. This replaces the parent's session keyring. Because the COW credential code does not permit one process to change another process's credentials directly, the change is deferred until userspace next starts executing again. Normally this will be after a wait*() syscall. To support this, three new security hooks have been provided: cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in the blank security creds and key_session_to_parent() - which asks the LSM if the process may replace its parent's session keyring. The replacement may only happen if the process has the same ownership details as its parent, and the process has LINK permission on the session keyring, and the session keyring is owned by the process, and the LSM permits it. Note that this requires alteration to each architecture's notify_resume path. This has been done for all arches barring blackfin, m68k* and xtensa, all of which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the replacement to be performed at the point the parent process resumes userspace execution. This allows the userspace AFS pioctl emulation to fully emulate newpag() and the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to alter the parent process's PAG membership. However, since kAFS doesn't use PAGs per se, but rather dumps the keys into the session keyring, the session keyring of the parent must be replaced if, for example, VIOCSETTOK is passed the newpag flag. This can be tested with the following program: #include <stdio.h> #include <stdlib.h> #include <keyutils.h> #define KEYCTL_SESSION_TO_PARENT 18 #define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0) int main(int argc, char **argv) { key_serial_t keyring, key; long ret; keyring = keyctl_join_session_keyring(argv[1]); OSERROR(keyring, "keyctl_join_session_keyring"); key = add_key("user", "a", "b", 1, keyring); OSERROR(key, "add_key"); ret = keyctl(KEYCTL_SESSION_TO_PARENT); OSERROR(ret, "KEYCTL_SESSION_TO_PARENT"); return 0; } Compiled and linked with -lkeyutils, you should see something like: [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: _ses 355907932 --alswrv 4043 -1 \_ keyring: _uid.4043 [dhowells@andromeda ~]$ /tmp/newpag [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: _ses 1055658746 --alswrv 4043 4043 \_ user: a [dhowells@andromeda ~]$ /tmp/newpag hello [dhowells@andromeda ~]$ keyctl show Session Keyring -3 --alswrv 4043 4043 keyring: hello 340417692 --alswrv 4043 4043 \_ user: a Where the test program creates a new session keyring, sticks a user key named 'a' into it and then installs it on its parent. Signed-off-by: David Howells <[email protected]> Signed-off-by: James Morris <[email protected]>
PHPAPI ZEND_COLD void php_error_docref0(const char *docref, int type, const char *format, ...) { va_list args; va_start(args, format); php_verror(docref, "", type, format, args); va_end(args); }
0
[]
php-src
9a07245b728714de09361ea16b9c6fcf70cb5685
274,978,220,526,900,130,000,000,000,000,000,000,000
8
Fixed bug #71273 A wrong ext directory setup in php.ini leads to crash
StreamListener::~StreamListener() { if (stream_ != nullptr) stream_->RemoveStreamListener(this); }
0
[ "CWE-416" ]
node
4f8772f9b731118628256189b73cd202149bbd97
193,940,499,036,447,670,000,000,000,000,000,000,000
4
src: retain pointers to WriteWrap/ShutdownWrap Avoids potential use-after-free when wrap req's are synchronously destroyed. CVE-ID: CVE-2020-8265 Fixes: https://github.com/nodejs-private/node-private/issues/227 Refs: https://hackerone.com/bugs?subject=nodejs&report_id=988103 PR-URL: https://github.com/nodejs-private/node-private/pull/23 Reviewed-By: Anna Henningsen <[email protected]> Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Rich Trott <[email protected]>
virtual void ms_fast_preprocess(Message *m) {}
0
[ "CWE-287", "CWE-284" ]
ceph
5ead97120e07054d80623dada90a5cc764c28468
147,208,992,118,580,110,000,000,000,000,000,000,000
1
auth/cephx: add authorizer challenge Allow the accepting side of a connection to reject an initial authorizer with a random challenge. The connecting side then has to respond with an updated authorizer proving they are able to decrypt the service's challenge and that the new authorizer was produced for this specific connection instance. The accepting side requires this challenge and response unconditionally if the client side advertises they have the feature bit. Servers wishing to require this improved level of authentication simply have to require the appropriate feature. Signed-off-by: Sage Weil <[email protected]> (cherry picked from commit f80b848d3f830eb6dba50123e04385173fa4540b) # Conflicts: # src/auth/Auth.h # src/auth/cephx/CephxProtocol.cc # src/auth/cephx/CephxProtocol.h # src/auth/none/AuthNoneProtocol.h # src/msg/Dispatcher.h # src/msg/async/AsyncConnection.cc - const_iterator - ::decode vs decode - AsyncConnection ctor arg noise - get_random_bytes(), not cct->random()
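A toy editorial model of the added round trip (the mixing function below is a stand-in, not the cephx MAC): the server issues a random challenge, and the client must return a value bound to both the challenge and this specific connection, so a captured authorizer cannot be replayed on a new connection.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for an authenticated function of (key, challenge, nonce). */
static uint64_t prove(uint64_t key, uint64_t challenge, uint64_t conn_nonce)
{
    return key ^ (challenge * 0x9e3779b97f4a7c15ULL) ^ conn_nonce;
}

int main(void)
{
    uint64_t key = 0xdeadbeef;   /* shared secret */
    uint64_t challenge = 12345;  /* random, chosen by the accepting side */
    uint64_t nonce = 77;         /* identifies this connection instance */

    uint64_t reply = prove(key, challenge, nonce);          /* client side */
    int ok = (reply == prove(key, challenge, nonce));       /* server side */
    uint64_t other = prove(key, challenge, nonce + 1);      /* another conn */
    printf("%s %s\n", ok ? "accept" : "reject",
           other == reply ? "replayable" : "replay-proof");
    return 0;
}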
static int do_cgroup_get(const char *cgroup_path, const char *sub_filename, char *value, size_t len) { const char *parts[3] = { cgroup_path, sub_filename, NULL }; char *filename; int ret, saved_errno; filename = lxc_string_join("/", parts, false); if (!filename) return -1; ret = lxc_read_from_file(filename, value, len); saved_errno = errno; free(filename); errno = saved_errno; return ret; }
0
[ "CWE-59", "CWE-61" ]
lxc
592fd47a6245508b79fe6ac819fe6d3b2c1289be
57,765,451,055,290,030,000,000,000,000,000,000,000
21
CVE-2015-1335: Protect container mounts against symlinks When a container starts up, lxc sets up the container's initial fstree by doing a bunch of mounting, guided by the container configuration file. The container config is owned by the admin or user on the host, so we do not try to guard against bad entries. However, since the mount target is in the container, it's possible that the container admin could divert the mount with symbolic links. This could bypass proper container startup (i.e. confinement of a root-owned container by the restrictive apparmor policy, by diverting the required write to /proc/self/attr/current), or bypass the (path-based) apparmor policy by diverting, say, /proc to /mnt in the container. To prevent this, 1. do not allow mounts to paths containing symbolic links 2. do not allow bind mounts from relative paths containing symbolic links. Details: Define safe_mount which ensures that the container has not inserted any symbolic links into any mount targets for mounts to be done during container setup. The host's mount path may contain symbolic links. As it is under the control of the administrator, that's ok. So safe_mount begins the check for symbolic links after the rootfs->mount, by opening that directory. It opens each directory along the path using openat() relative to the parent directory using O_NOFOLLOW. When the target is reached, it mounts onto /proc/self/fd/<targetfd>. Use safe_mount() in mount_entry(), when mounting container proc, and when needed. In particular, safe_mount() need not be used in any case where: 1. the mount is done in the container's namespace 2. the mount is for the container's rootfs 3. the mount is relative to a tmpfs or proc/sysfs which we have just safe_mount()ed ourselves Since we were using proc/net as a temporary placeholder for /proc/sys/net during container startup, and proc/net is a symbolic link, use proc/tty instead. Update the lxc.container.conf manpage with details about the new restrictions. Finally, add a testcase to test some symbolic link possibilities. Reported-by: Roman Fiedler Signed-off-by: Serge Hallyn <[email protected]> Acked-by: Stéphane Graber <[email protected]>
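A hedged editorial sketch of the walk described above (simplified, not the lxc safe_mount code): each path component is opened with O_NOFOLLOW relative to the previous fd, so a symlink planted anywhere along the path fails with ELOOP, and the final fd can serve as a symlink-free mount target via /proc/self/fd/<fd>.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int open_without_symlinks(int rootfd, char *relpath)
{
    int fd = dup(rootfd);
    for (char *tok = strtok(relpath, "/"); tok != NULL && fd >= 0;
         tok = strtok(NULL, "/")) {
        int next = openat(fd, tok, O_RDONLY | O_NOFOLLOW | O_CLOEXEC);
        close(fd);
        fd = next;              /* -1 (ELOOP) if tok is a symlink */
    }
    return fd;
}

int main(void)
{
    int root = open("/", O_RDONLY | O_CLOEXEC);
    char path[] = "usr/bin";
    int fd = open_without_symlinks(root, path);
    if (fd >= 0) {
        /* The symlink-free handle; a mount could target /proc/self/fd/<fd>. */
        printf("/proc/self/fd/%d\n", fd);
        close(fd);
    }
    close(root);
    return fd >= 0 ? 0 : 1;
}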
static int rfcomm_get_dev_list(void __user *arg) { struct rfcomm_dev *dev; struct rfcomm_dev_list_req *dl; struct rfcomm_dev_info *di; int n = 0, size, err; u16 dev_num; BT_DBG(""); if (get_user(dev_num, (u16 __user *) arg)) return -EFAULT; if (!dev_num || dev_num > (PAGE_SIZE * 4) / sizeof(*di)) return -EINVAL; size = sizeof(*dl) + dev_num * sizeof(*di); dl = kzalloc(size, GFP_KERNEL); if (!dl) return -ENOMEM; di = dl->dev_info; spin_lock(&rfcomm_dev_lock); list_for_each_entry(dev, &rfcomm_dev_list, list) { if (test_bit(RFCOMM_TTY_RELEASED, &dev->flags)) continue; (di + n)->id = dev->id; (di + n)->flags = dev->flags; (di + n)->state = dev->dlc->state; (di + n)->channel = dev->channel; bacpy(&(di + n)->src, &dev->src); bacpy(&(di + n)->dst, &dev->dst); if (++n >= dev_num) break; } spin_unlock(&rfcomm_dev_lock); dl->dev_num = n; size = sizeof(*dl) + n * sizeof(*di); err = copy_to_user(arg, dl, size); kfree(dl); return err ? -EFAULT : 0; }
0
[ "CWE-200" ]
linux
f9432c5ec8b1e9a09b9b0e5569e3c73db8de432a
168,980,827,765,376,610,000,000,000,000,000,000,000
49
Bluetooth: RFCOMM - Fix info leak in ioctl(RFCOMMGETDEVLIST) The RFCOMM code fails to initialize the two padding bytes of struct rfcomm_dev_list_req inserted for alignment before copying it to userland. Additionally there are two padding bytes in each instance of struct rfcomm_dev_info. The ioctl() thereby discloses two bytes plus dev_num times two bytes of uninitialized kernel heap memory. Allocate the memory using kzalloc() to fix this issue. Signed-off-by: Mathias Krause <[email protected]> Cc: Marcel Holtmann <[email protected]> Cc: Gustavo Padovan <[email protected]> Cc: Johan Hedberg <[email protected]> Signed-off-by: David S. Miller <[email protected]>
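The leak mechanism above, as an editorial userspace sketch: member assignments never initialize compiler-inserted padding, so a struct copied out wholesale must come from zeroed memory (kzalloc in the kernel; calloc here).

#include <stdint.h>
#include <stdlib.h>

struct dev_info {
    int16_t id;       /* 2 bytes of padding typically follow this field */
    int32_t flags;
};

static struct dev_info *make_info(int16_t id, int32_t flags)
{
    struct dev_info *di = calloc(1, sizeof(*di)); /* zeroes padding too */
    if (di == NULL)
        return NULL;
    di->id = id;      /* member stores alone would leave padding stale */
    di->flags = flags;
    return di;        /* now safe to copy the whole struct to userland */
}

int main(void)
{
    struct dev_info *di = make_info(7, 1);
    int ok = (di != NULL);
    free(di);
    return ok ? 0 : 1;
}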
ciConstant ciEnv::get_constant_by_index_impl(constantPoolHandle cpool, int pool_index, int cache_index, ciInstanceKlass* accessor) { EXCEPTION_CONTEXT; int index = pool_index; if (cache_index >= 0) { assert(index < 0, "only one kind of index at a time"); oop obj = cpool->resolved_references()->obj_at(cache_index); if (obj != NULL) { ciObject* ciobj = get_object(obj); if (ciobj->is_array()) { return ciConstant(T_ARRAY, ciobj); } else { assert(ciobj->is_instance(), "should be an instance"); return ciConstant(T_OBJECT, ciobj); } } index = cpool->object_to_cp_index(cache_index); } constantTag tag = cpool->tag_at(index); if (tag.is_int()) { return ciConstant(T_INT, (jint)cpool->int_at(index)); } else if (tag.is_long()) { return ciConstant((jlong)cpool->long_at(index)); } else if (tag.is_float()) { return ciConstant((jfloat)cpool->float_at(index)); } else if (tag.is_double()) { return ciConstant((jdouble)cpool->double_at(index)); } else if (tag.is_string()) { oop string = NULL; assert(cache_index >= 0, "should have a cache index"); if (cpool->is_pseudo_string_at(index)) { string = cpool->pseudo_string_at(index, cache_index); } else { string = cpool->string_at(index, cache_index, THREAD); if (HAS_PENDING_EXCEPTION) { CLEAR_PENDING_EXCEPTION; record_out_of_memory_failure(); return ciConstant(); } } ciObject* constant = get_object(string); if (constant->is_array()) { return ciConstant(T_ARRAY, constant); } else { assert (constant->is_instance(), "must be an instance, or not? "); return ciConstant(T_OBJECT, constant); } } else if (tag.is_klass() || tag.is_unresolved_klass()) { bool will_link; ciKlass* klass = get_klass_by_index_impl(cpool, index, will_link, accessor); if (HAS_PENDING_EXCEPTION) { CLEAR_PENDING_EXCEPTION; record_out_of_memory_failure(); return ciConstant(); } assert (klass->is_instance_klass() || klass->is_array_klass(), "must be an instance or array klass "); ciInstance* mirror = (will_link ? klass->java_mirror() : get_unloaded_klass_mirror(klass)); return ciConstant(T_OBJECT, mirror); } else if (tag.is_method_type()) { // must execute Java code to link this CP entry into cache[i].f1 ciSymbol* signature = get_symbol(cpool->method_type_signature_at(index)); ciObject* ciobj = get_unloaded_method_type_constant(signature); return ciConstant(T_OBJECT, ciobj); } else if (tag.is_method_handle()) { // must execute Java code to link this CP entry into cache[i].f1 bool ignore_will_link; int ref_kind = cpool->method_handle_ref_kind_at(index); int callee_index = cpool->method_handle_klass_index_at(index); ciKlass* callee = get_klass_by_index_impl(cpool, callee_index, ignore_will_link, accessor); ciSymbol* name = get_symbol(cpool->method_handle_name_ref_at(index)); ciSymbol* signature = get_symbol(cpool->method_handle_signature_ref_at(index)); ciObject* ciobj = get_unloaded_method_handle_constant(callee, name, signature, ref_kind); return ciConstant(T_OBJECT, ciobj); } else { ShouldNotReachHere(); return ciConstant(); } }
0
[]
jdk8u
1dafef08cc922ee85a8e216387100dc681a5484d
212,562,859,574,027,000,000,000,000,000,000,000,000
80
8281859: Improve class compilation Reviewed-by: andrew Backport-of: 3ac62a66efd05d0842076dd4cfbea0e53b12630f
has_column_privilege_name_id_attnum(PG_FUNCTION_ARGS) { Name username = PG_GETARG_NAME(0); Oid tableoid = PG_GETARG_OID(1); AttrNumber colattnum = PG_GETARG_INT16(2); text *priv_type_text = PG_GETARG_TEXT_P(3); Oid roleid; AclMode mode; int privresult; roleid = get_role_oid_or_public(NameStr(*username)); mode = convert_column_priv_string(priv_type_text); privresult = column_privilege_check(tableoid, colattnum, roleid, mode); if (privresult < 0) PG_RETURN_NULL(); PG_RETURN_BOOL(privresult); }
0
[ "CWE-264" ]
postgres
fea164a72a7bfd50d77ba5fb418d357f8f2bb7d0
56,945,794,360,422,900,000,000,000,000,000,000,000
18
Shore up ADMIN OPTION restrictions. Granting a role without ADMIN OPTION is supposed to prevent the grantee from adding or removing members from the granted role. Issuing SET ROLE before the GRANT bypassed that, because the role itself had an implicit right to add or remove members. Plug that hole by recognizing that implicit right only when the session user matches the current role. Additionally, do not recognize it during a security-restricted operation or during execution of a SECURITY DEFINER function. The restriction on SECURITY DEFINER is not security-critical. However, it seems best for a user testing his own SECURITY DEFINER function to see the same behavior others will see. Back-patch to 8.4 (all supported versions). The SQL standards do not conflate roles and users as PostgreSQL does; only SQL roles have members, and only SQL users initiate sessions. An application using PostgreSQL users and roles as SQL users and roles will never attempt to grant membership in the role that is the session user, so the implicit right to add or remove members will never arise. The security impact was mostly that a role member could revoke access from others, contrary to the wishes of his own grantor. Unapproved role member additions are less notable, because the member can still largely achieve that by creating a view or a SECURITY DEFINER function. Reviewed by Andres Freund and Tom Lane. Reported, independently, by Jonas Sundman and Noah Misch. Security: CVE-2014-0060
SWFShape_setLineStyle_internal(SWFShape shape, unsigned short width, byte r, byte g, byte b, byte a) { int line; if ( shape->isEnded ) return; for ( line=0; line<shape->nLines; ++line ) { if ( SWFLineStyle_equals(shape->lines[line], width, r, g, b, a, 0) ) break; } if ( line == shape->nLines ) line = SWFShape_addLineStyle(shape, width, r, g, b, a); else ++line; finishSetLine(shape, line, width); }
0
[ "CWE-20", "CWE-476" ]
libming
6e76e8c71cb51c8ba0aa9737a636b9ac3029887f
203,614,836,395,317,360,000,000,000,000,000,000,000
21
SWFShape_setLeftFillStyle: prevent fill overflow
static void __update_rq_clock(struct rq *rq) { u64 prev_raw = rq->prev_clock_raw; u64 now = sched_clock(); s64 delta = now - prev_raw; u64 clock = rq->clock; #ifdef CONFIG_SCHED_DEBUG WARN_ON_ONCE(cpu_of(rq) != smp_processor_id()); #endif /* * Protect against sched_clock() occasionally going backwards: */ if (unlikely(delta < 0)) { clock++; rq->clock_warps++; } else { /* * Catch too large forward jumps too: */ u64 max_jump = max_skipped_ticks(rq) * TICK_NSEC; u64 max_time = rq->tick_timestamp + max_jump; if (unlikely(clock + delta > max_time)) { if (clock < max_time) clock = max_time; else clock++; rq->clock_overflows++; } else { if (unlikely(delta > rq->clock_max_delta)) rq->clock_max_delta = delta; clock += delta; } } rq->prev_clock_raw = now; rq->clock = clock; }
0
[]
linux-2.6
8f1bc385cfbab474db6c27b5af1e439614f3025c
254,862,540,538,729,000,000,000,000,000,000,000,000
39
sched: fair: weight calculations In order to level the hierarchy, we need to calculate load based on the root view. That is, each task's load is in the same unit. Consider a tree in which root A has children B and 1, and B has children 2 and 3. To compute 1's load we do: load(1) = weight(1) / rq_weight(A). To compute 2's load we do: load(2) = (weight(2) / rq_weight(B)) * (weight(B) / rq_weight(A)). This yields load fractions in comparable units. The consequence is that it changes virtual time. We used to have: vtime_{i} = time_{i} / weight_{i}, and vtime = \Sum vtime_{i} = time / rq_weight. But with the new way of load calculation we get that vtime equals time. Signed-off-by: Peter Zijlstra <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
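Editorial restatement of the message's ASCII fractions in LaTeX (w = weight, rq = rq_weight, t = time, v = vtime):

% Tree from the message: A has children B and 1; B has children 2 and 3.
\[
  \mathrm{load}(1)=\frac{w_1}{\mathrm{rq}_A},\qquad
  \mathrm{load}(2)=\frac{w_2}{\mathrm{rq}_B}\cdot\frac{w_B}{\mathrm{rq}_A}
\]
% Old virtual time versus the new scheme:
\[
  v_i=\frac{t_i}{w_i},\qquad v=\sum_i v_i=\frac{t}{\mathrm{rq}}
  \quad\longrightarrow\quad v=t
\]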
gdm_session_worker_watch_child (GdmSessionWorker *worker) { g_debug ("GdmSession worker: watching pid %d", worker->priv->child_pid); worker->priv->child_watch_id = g_child_watch_add (worker->priv->child_pid, (GChildWatchFunc)session_worker_child_watch, worker); }
0
[ "CWE-362" ]
gdm
dcdbaaa04012541ad2813cf83559d91d52f208b9
184,168,643,116,661,420,000,000,000,000,000,000,000
8
session-worker: Don't switch back VTs until session is fully exited There's a race condition on shutdown where the session worker is switching VTs back to the initial VT at the same time as the session exit is being processed. This means that manager may try to start a login screen (because of the VT switch) when autologin is enabled when there shouldn't be a login screen. This commit makes sure both the PostSession script, and session-exited signal emission are complete before initiating the VT switch back to the initial VT. https://gitlab.gnome.org/GNOME/gdm/-/issues/660
static int execute(struct sockaddr *addr) { static char line[1000]; int pktlen, len, i; if (addr) { char addrbuf[256] = ""; int port = -1; if (addr->sa_family == AF_INET) { struct sockaddr_in *sin_addr = (void *) addr; inet_ntop(addr->sa_family, &sin_addr->sin_addr, addrbuf, sizeof(addrbuf)); port = ntohs(sin_addr->sin_port); #ifndef NO_IPV6 } else if (addr && addr->sa_family == AF_INET6) { struct sockaddr_in6 *sin6_addr = (void *) addr; char *buf = addrbuf; *buf++ = '['; *buf = '\0'; /* stpcpy() is cool */ inet_ntop(AF_INET6, &sin6_addr->sin6_addr, buf, sizeof(addrbuf) - 1); strcat(buf, "]"); port = ntohs(sin6_addr->sin6_port); #endif } loginfo("Connection from %s:%d", addrbuf, port); setenv("REMOTE_ADDR", addrbuf, 1); } else { unsetenv("REMOTE_ADDR"); } alarm(init_timeout ? init_timeout : timeout); pktlen = packet_read_line(0, line, sizeof(line)); alarm(0); len = strlen(line); if (pktlen != len) loginfo("Extended attributes (%d bytes) exist <%.*s>", (int) pktlen - len, (int) pktlen - len, line + len + 1); if (len && line[len-1] == '\n') { line[--len] = 0; pktlen--; } free(hostname); free(canon_hostname); free(ip_address); free(tcp_port); hostname = canon_hostname = ip_address = tcp_port = NULL; if (len != pktlen) parse_extra_args(line + len + 1, pktlen - len - 1); for (i = 0; i < ARRAY_SIZE(daemon_service); i++) { struct daemon_service *s = &(daemon_service[i]); int namelen = strlen(s->name); if (!prefixcmp(line, "git-") && !strncmp(s->name, line + 4, namelen) && line[namelen + 4] == ' ') { /* * Note: The directory here is probably context sensitive, * and might depend on the actual service being performed. */ return run_service(line + namelen + 5, s); } } logerror("Protocol error: '%s'", line); return -1; }
1
[]
git
73bb33a94ec67a53e7d805b12ad9264fa25f4f8d
167,556,915,230,880,960,000,000,000,000,000,000,000
72
daemon: Strictly parse the "extra arg" part of the command Since 1.4.4.5 (49ba83fb67 "Add virtualization support to git-daemon") git daemon enters an infinite loop and never terminates if a client hides any extra arguments in the initial request line which is not exactly "\0host=blah\0". Since that change, a client must never insert additional extra arguments, or attempt to use any argument other than "host=", as any daemon will get stuck parsing the request line and will never complete the request. Since the client can't tell if the daemon is patched or not, it is not possible to know if additional extra args might actually be able to be safely requested. If we ever need to extend the git daemon protocol to support a new feature, we may have to do something like this to the exchange: # If both support git:// v2 # C: 000cgit://v2 S: 0010ok host user C: 0018host git.kernel.org C: 0027git-upload-pack /pub/linux-2.6.git S: ...git-upload-pack header... # If client supports git:// v2, server does not: # C: 000cgit://v2 S: <EOF> C: 003bgit-upload-pack /pub/linux-2.6.git\0host=git.kernel.org\0 S: ...git-upload-pack header... This requires the client to create two TCP connections to talk to an older git daemon, however all daemons since the introduction of daemon.c will safely reject the unknown "git://v2" command request, so the client can quite easily determine the server supports an older protocol. Signed-off-by: Shawn O. Pearce <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
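The strict parsing the title refers to, as an editorial C sketch (modeled on the description, simplified from the real daemon): a terminating walk over NUL-separated "key=value" extra args in a fixed-length request, always advancing so parsing cannot loop forever.

#include <stdio.h>
#include <string.h>

static void parse_extra_args(const char *extra, int len)
{
    const char *end = extra + len;
    while (extra < end) {
        if (*extra) {                     /* a non-empty argument */
            if (strncmp(extra, "host=", 5) == 0)
                printf("host: %s\n", extra + 5);
            /* unknown args are skipped, not spun on forever */
            extra += strlen(extra);
        }
        extra++;                          /* always advance past the NUL */
    }
}

int main(void)
{
    static const char buf[] = "host=git.kernel.org\0extra=ignored\0";
    parse_extra_args(buf, (int)sizeof(buf) - 1);
    return 0;
}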
size_t vterm_screen_get_text(const VTermScreen *screen, char *str, size_t len, const VTermRect rect) { return _get_chars(screen, 1, str, len, rect); }
0
[ "CWE-476" ]
vim
cd929f7ba8cc5b6d6dcf35c8b34124e969fed6b8
319,029,035,588,556,330,000,000,000,000,000,000,000
4
patch 8.1.0633: crash when out of memory while opening a terminal window Problem: Crash when out of memory while opening a terminal window. Solution: Handle out-of-memory more gracefully.
static int uvesafb_probe(struct platform_device *dev) { struct fb_info *info; struct vbe_mode_ib *mode = NULL; struct uvesafb_par *par; int err = 0, i; info = framebuffer_alloc(sizeof(*par) + sizeof(u32) * 256, &dev->dev); if (!info) return -ENOMEM; par = info->par; err = uvesafb_vbe_init(info); if (err) { pr_err("vbe_init() failed with %d\n", err); goto out; } info->fbops = &uvesafb_ops; i = uvesafb_vbe_init_mode(info); if (i < 0) { err = -EINVAL; goto out; } else { mode = &par->vbe_modes[i]; } if (fb_alloc_cmap(&info->cmap, 256, 0) < 0) { err = -ENXIO; goto out; } uvesafb_init_info(info, mode); if (!request_region(0x3c0, 32, "uvesafb")) { pr_err("request region 0x3c0-0x3e0 failed\n"); err = -EIO; goto out_mode; } if (!request_mem_region(info->fix.smem_start, info->fix.smem_len, "uvesafb")) { pr_err("cannot reserve video memory at 0x%lx\n", info->fix.smem_start); err = -EIO; goto out_reg; } uvesafb_init_mtrr(info); uvesafb_ioremap(info); if (!info->screen_base) { pr_err("abort, cannot ioremap 0x%x bytes of video memory at 0x%lx\n", info->fix.smem_len, info->fix.smem_start); err = -EIO; goto out_mem; } platform_set_drvdata(dev, info); if (register_framebuffer(info) < 0) { pr_err("failed to register framebuffer device\n"); err = -EINVAL; goto out_unmap; } pr_info("framebuffer at 0x%lx, mapped to 0x%p, using %dk, total %dk\n", info->fix.smem_start, info->screen_base, info->fix.smem_len / 1024, par->vbe_ib.total_memory * 64); fb_info(info, "%s frame buffer device\n", info->fix.id); err = sysfs_create_group(&dev->dev.kobj, &uvesafb_dev_attgrp); if (err != 0) fb_warn(info, "failed to register attributes\n"); return 0; out_unmap: iounmap(info->screen_base); out_mem: release_mem_region(info->fix.smem_start, info->fix.smem_len); out_reg: release_region(0x3c0, 32); out_mode: if (!list_empty(&info->modelist)) fb_destroy_modelist(&info->modelist); fb_destroy_modedb(info->monspecs.modedb); fb_dealloc_cmap(&info->cmap); out: kfree(par->vbe_modes); framebuffer_release(info); return err; }
0
[ "CWE-190" ]
linux
9f645bcc566a1e9f921bdae7528a01ced5bc3713
208,064,175,202,548,530,000,000,000,000,000,000,000
96
video: uvesafb: Fix integer overflow in allocation cmap->len can get close to INT_MAX/2, allowing for an integer overflow in allocation. This uses kmalloc_array() instead to catch the condition. Reported-by: Dr Silvio Cesare of InfoSect <[email protected]> Fixes: 8bdb3a2d7df48 ("uvesafb: the driver core") Cc: [email protected] Signed-off-by: Kees Cook <[email protected]>
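The overflow named above, in an editorial sketch: the product n * size can wrap before it reaches the allocator, yielding an undersized buffer; kmalloc_array() in the kernel, or an explicit check as below, rejects the wrap.

#include <stdlib.h>

static void *alloc_array(size_t n, size_t size)
{
    if (size != 0 && n > (size_t)-1 / size)
        return NULL;            /* n * size would wrap around */
    return malloc(n * size);
}

int main(void)
{
    void *ok = alloc_array(256, 4);           /* 1024 bytes, fine */
    void *bad = alloc_array((size_t)-1, 8);   /* rejected, not a tiny buffer */
    int ret = (ok != NULL && bad == NULL) ? 0 : 1;
    free(ok);
    return ret;
}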
static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp) { struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; struct sev_data_receive_finish data; if (!sev_guest(kvm)) return -ENOTTY; data.handle = sev->handle; return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error); }
0
[ "CWE-459" ]
linux
683412ccf61294d727ead4a73d97397396e69a6b
31,599,833,030,237,940,000,000,000,000,000,000,000
11
KVM: SEV: add cache flush to solve SEV cache incoherency issues Flush the CPU caches when memory is reclaimed from an SEV guest (where reclaim also includes it being unmapped from KVM's memslots). Due to lack of coherency for SEV encrypted memory, failure to flush results in silent data corruption if userspace is malicious/broken and doesn't ensure SEV guest memory is properly pinned and unpinned. Cache coherency is not enforced across the VM boundary in SEV (AMD APM vol.2 Section 15.34.7). Confidential cachelines generated by confidential VM guests have to be explicitly flushed on the host side. If a memory page containing dirty confidential cachelines was released by the VM and reallocated to another user, the cachelines may corrupt the new user at a later time. KVM takes a shortcut by assuming all confidential memory remains pinned until the end of VM lifetime. Therefore, KVM does not flush caches at mmu_notifier invalidation events. Because of this incorrect assumption and the lack of cache flushing, malicious userspace can crash the host kernel by creating a malicious VM that continuously allocates/releases unpinned confidential memory pages while the VM is running. Add cache flush operations to mmu_notifier operations to ensure that any physical memory leaving the guest VM gets flushed. In particular, hook mmu_notifier_invalidate_range_start and mmu_notifier_release events and flush caches accordingly. The flushing happens after releasing the mmu lock to avoid contention with other vCPUs. Cc: [email protected] Suggested-by: Sean Christpherson <[email protected]> Reported-by: Mingwei Zhang <[email protected]> Signed-off-by: Mingwei Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
void SdamServerSelector::_getCandidateServers(std::vector<ServerDescriptionPtr>* result, const TopologyDescriptionPtr topologyDescription, const ReadPreferenceSetting& criteria) { // when querying the primary we don't need to consider tags bool shouldTagFilter = true; // TODO SERVER-46499: check to see if we want to enforce minOpTime at all since // it was effectively optional in the original implementation. if (!criteria.minOpTime.isNull()) { auto eligibleServers = topologyDescription->findServers([](const ServerDescriptionPtr& s) { return (s->getType() == ServerType::kRSPrimary || s->getType() == ServerType::kRSSecondary); }); auto beginIt = eligibleServers.begin(); auto endIt = eligibleServers.end(); auto maxIt = std::max_element(beginIt, endIt, [topologyDescription](const ServerDescriptionPtr& left, const ServerDescriptionPtr& right) { return left->getOpTime() < right->getOpTime(); }); if (maxIt != endIt) { auto maxOpTime = (*maxIt)->getOpTime(); if (maxOpTime && maxOpTime < criteria.minOpTime) { // ignore minOpTime const_cast<ReadPreferenceSetting&>(criteria) = ReadPreferenceSetting(criteria.pref); } } } switch (criteria.pref) { case ReadPreference::Nearest: { auto filter = (topologyDescription->getType() != TopologyType::kSharded) ? nearestFilter(criteria) : shardedFilter(criteria); *result = topologyDescription->findServers(filter); break; } case ReadPreference::SecondaryOnly: *result = topologyDescription->findServers(secondaryFilter(criteria)); break; case ReadPreference::PrimaryOnly: { const auto primaryCriteria = ReadPreferenceSetting(criteria.pref); *result = topologyDescription->findServers(primaryFilter(primaryCriteria)); shouldTagFilter = false; break; } case ReadPreference::PrimaryPreferred: { // ignore tags and max staleness for primary query auto primaryCriteria = ReadPreferenceSetting(ReadPreference::PrimaryOnly); _getCandidateServers(result, topologyDescription, primaryCriteria); if (result->size()) { shouldTagFilter = false; break; } // keep tags and maxStaleness for secondary query auto secondaryCriteria = criteria; secondaryCriteria.pref = ReadPreference::SecondaryOnly; _getCandidateServers(result, topologyDescription, secondaryCriteria); break; } case ReadPreference::SecondaryPreferred: { // keep tags and maxStaleness for secondary query auto secondaryCriteria = criteria; secondaryCriteria.pref = ReadPreference::SecondaryOnly; _getCandidateServers(result, topologyDescription, secondaryCriteria); if (result->size()) { break; } // ignore tags and maxStaleness for primary query shouldTagFilter = false; auto primaryCriteria = ReadPreferenceSetting(ReadPreference::PrimaryOnly); _getCandidateServers(result, topologyDescription, primaryCriteria); break; } default: MONGO_UNREACHABLE } if (shouldTagFilter) { filterTags(result, criteria.tags); } }
0
[ "CWE-755" ]
mongo
75f7184eafa78006a698cda4c4adfb57f1290047
239,670,115,127,374,400,000,000,000,000,000,000,000
91
SERVER-50170 fix max staleness read preference parameter for server selection