Dataset schema (one record per function):

    func       : string, length 0 – 484k (function source code)
    target     : int64, 0 – 1 (binary vulnerability label)
    cwe        : list, length 0 – 4 (CWE identifiers)
    project    : string, 799 classes (source project name)
    commit_id  : string, length 40 (commit hash)
    hash       : float64, range 1,215,700,430,453,689,100,000,000B – 340,281,914,521,452,260,000,000,000,000B
    size       : int64, 1 – 24k (function length in lines)
    message    : string, length 0 – 13.3k (commit message)
---
func:
double time_millis_since_init() {
    return OS::TimeCurrentMillis() - time_millis_at_init_;
}
target: 0
cwe: [ "CWE-20", "CWE-119" ]
project: node
commit_id: 530af9cb8e700e7596b3ec812bad123c9fa06356
hash: 36,081,632,078,761,136,000,000,000,000,000,000,000
size: 3
message: v8: Interrupts must not mask stack overflow. Backport of https://codereview.chromium.org/339883002
---
func:
GF_Err hnti_dump(GF_Box *a, FILE * trace)
{
    gf_isom_box_dump_start(a, "HintTrackInfoBox", trace);
    fprintf(trace, ">\n");
    gf_isom_box_dump_done("HintTrackInfoBox", NULL, trace);
    return GF_OK;
}
target: 0
cwe: [ "CWE-125" ]
project: gpac
commit_id: bceb03fd2be95097a7b409ea59914f332fb6bc86
hash: 185,652,313,198,340,350,000,000,000,000,000,000,000
size: 7
message: fixed 2 possible heap overflows (inc. #1088)
---
func:
static IWTSVirtualChannelManager* dvcman_new(drdynvcPlugin* plugin)
{
    DVCMAN* dvcman;

    dvcman = (DVCMAN*) calloc(1, sizeof(DVCMAN));
    if (!dvcman)
    {
        WLog_Print(plugin->log, WLOG_ERROR, "calloc failed!");
        return NULL;
    }

    dvcman->iface.CreateListener = dvcman_create_listener;
    dvcman->iface.FindChannelById = dvcman_find_channel_by_id;
    dvcman->iface.GetChannelId = dvcman_get_channel_id;
    dvcman->drdynvc = plugin;
    dvcman->channels = ArrayList_New(TRUE);
    if (!dvcman->channels)
    {
        WLog_Print(plugin->log, WLOG_ERROR, "ArrayList_New failed!");
        free(dvcman);
        return NULL;
    }

    dvcman->channels->object.fnObjectFree = dvcman_channel_free;
    dvcman->pool = StreamPool_New(TRUE, 10);
    if (!dvcman->pool)
    {
        WLog_Print(plugin->log, WLOG_ERROR, "StreamPool_New failed!");
        ArrayList_Free(dvcman->channels);
        free(dvcman);
        return NULL;
    }

    return (IWTSVirtualChannelManager*) dvcman;
}
target: 0
cwe: [ "CWE-125" ]
project: FreeRDP
commit_id: baee520e3dd9be6511c45a14c5f5e77784de1471
hash: 255,991,301,977,237,280,000,000,000,000,000,000,000
size: 37
message: Fix for #4866: Added additional length checks
---
func:
int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                          void *data, unsigned long len)
{
    return kvm_read_guest_offset_cached(kvm, ghc, data, 0, len);
}
target: 0
cwe: [ "CWE-459" ]
project: linux
commit_id: 683412ccf61294d727ead4a73d97397396e69a6b
hash: 131,234,426,322,304,220,000,000,000,000,000,000,000
size: 5
message: KVM: SEV: add cache flush to solve SEV cache incoherency issues Flush the CPU caches when memory is reclaimed from an SEV guest (where reclaim also includes it being unmapped from KVM's memslots). Due to lack of coherency for SEV encrypted memory, failure to flush results in silent data corruption if userspace is malicious/broken and doesn't ensure SEV guest memory is properly pinned and unpinned. Cache coherency is not enforced across the VM boundary in SEV (AMD APM vol.2 Section 15.34.7). Confidential cachelines, generated by confidential VM guests have to be explicitly flushed on the host side. If a memory page containing dirty confidential cachelines was released by VM and reallocated to another user, the cachelines may corrupt the new user at a later time. KVM takes a shortcut by assuming all confidential memory remain pinned until the end of VM lifetime. Therefore, KVM does not flush cache at mmu_notifier invalidation events. Because of this incorrect assumption and the lack of cache flushing, malicous userspace can crash the host kernel: creating a malicious VM and continuously allocates/releases unpinned confidential memory pages when the VM is running. Add cache flush operations to mmu_notifier operations to ensure that any physical memory leaving the guest VM get flushed. In particular, hook mmu_notifier_invalidate_range_start and mmu_notifier_release events and flush cache accordingly. The hook after releasing the mmu lock to avoid contention with other vCPUs. Cc: [email protected] Suggested-by: Sean Christpherson <[email protected]> Reported-by: Mingwei Zhang <[email protected]> Signed-off-by: Mingwei Zhang <[email protected]> Message-Id: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
---
func:
void Gfx::opMoveSetShowText(Object args[], int numArgs) {
  double tx, ty;

  if (!state->getFont()) {
    error(getPos(), "No font in move/set/show");
    return;
  }
  if (fontChanged) {
    out->updateFont(state);
    fontChanged = gFalse;
  }
  state->setWordSpace(args[0].getNum());
  state->setCharSpace(args[1].getNum());
  tx = state->getLineX();
  ty = state->getLineY() - state->getLeading();
  state->textMoveTo(tx, ty);
  out->updateWordSpace(state);
  out->updateCharSpace(state);
  out->updateTextPos(state);
  out->beginStringOp(state);
  doShowText(args[2].getString());
  out->endStringOp(state);
}
target: 0
cwe: []
project: poppler
commit_id: abf167af8b15e5f3b510275ce619e6fdb42edd40
hash: 162,013,866,766,451,070,000,000,000,000,000,000,000
size: 23
message: Implement tiling/patterns in SplashOutputDev Fixes bug 13518
---
func:
smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
{
    unsigned char key2[8];
    struct crypto_cipher *tfm_des;

    str_to_key(key, key2);

    tfm_des = crypto_alloc_cipher("des", 0, 0);
    if (IS_ERR(tfm_des)) {
        cifs_dbg(VFS, "could not allocate des crypto API\n");
        return PTR_ERR(tfm_des);
    }

    crypto_cipher_setkey(tfm_des, key2, 8);
    crypto_cipher_encrypt_one(tfm_des, out, in);
    crypto_free_cipher(tfm_des);

    return 0;
}
target: 0
cwe: [ "CWE-119", "CWE-703" ]
project: linux
commit_id: 06deeec77a5a689cc94b21a8a91a76e42176685d
hash: 152,915,868,707,921,740,000,000,000,000,000,000,000
size: 19
message: cifs: Fix smbencrypt() to stop pointing a scatterlist at the stack smbencrypt() points a scatterlist to the stack, which is breaks if CONFIG_VMAP_STACK=y. Fix it by switching to crypto_cipher_encrypt_one(). The new code should be considerably faster as an added benefit. This code is nearly identical to some code that Eric Biggers suggested. Cc: [email protected] # 4.9 only Reported-by: Eric Biggers <[email protected]> Signed-off-by: Andy Lutomirski <[email protected]> Acked-by: Jeff Layton <[email protected]> Signed-off-by: Steve French <[email protected]>
---
func:
int modbus_get_header_length(modbus_t *ctx)
{
    if (ctx == NULL) {
        errno = EINVAL;
        return -1;
    }

    return ctx->backend->header_length;
}
target: 0
cwe: [ "CWE-125" ]
project: libmodbus
commit_id: 5ccdf5ef79d742640355d1132fa9e2abc7fbaefc
hash: 328,656,956,927,268,300,000,000,000,000,000,000,000
size: 9
message: Fix VD-1301 and VD-1302 vulnerabilities This patch was contributed by Maor Vermucht and Or Peles from VDOO Connected Trust.
---
func:
xfs_filemap_fault(
    struct vm_area_struct *vma,
    struct vm_fault *vmf)
{
    struct inode *inode = file_inode(vma->vm_file);
    int ret;

    trace_xfs_filemap_fault(XFS_I(inode));

    /* DAX can shortcut the normal fault path on write faults! */
    if ((vmf->flags & FAULT_FLAG_WRITE) && IS_DAX(inode))
        return xfs_filemap_page_mkwrite(vma, vmf);

    xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
    if (IS_DAX(inode)) {
        /*
         * we do not want to trigger unwritten extent conversion on read
         * faults - that is unnecessary overhead and would also require
         * changes to xfs_get_blocks_direct() to map unwritten extent
         * ioend for conversion on read-only mappings.
         */
        ret = __dax_fault(vma, vmf, xfs_get_blocks_direct, NULL);
    } else
        ret = filemap_fault(vma, vmf);
    xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);

    return ret;
}
target: 0
cwe: [ "CWE-19" ]
project: linux
commit_id: fc0561cefc04e7803c0f6501ca4f310a502f65b8
hash: 120,413,484,483,542,550,000,000,000,000,000,000,000
size: 28
message: xfs: optimise away log forces on timestamp updates for fdatasync xfs: timestamp updates cause excessive fdatasync log traffic Sage Weil reported that a ceph test workload was writing to the log on every fdatasync during an overwrite workload. Event tracing showed that the only metadata modification being made was the timestamp updates during the write(2) syscall, but fdatasync(2) is supposed to ignore them. The key observation was that the transactions in the log all looked like this: INODE: #regs: 4 ino: 0x8b flags: 0x45 dsize: 32 And contained a flags field of 0x45 or 0x85, and had data and attribute forks following the inode core. This means that the timestamp updates were triggering dirty relogging of previously logged parts of the inode that hadn't yet been flushed back to disk. There are two parts to this problem. The first is that XFS relogs dirty regions in subsequent transactions, so it carries around the fields that have been dirtied since the last time the inode was written back to disk, not since the last time the inode was forced into the log. The second part is that on v5 filesystems, the inode change count update during inode dirtying also sets the XFS_ILOG_CORE flag, so on v5 filesystems this makes a timestamp update dirty the entire inode. As a result when fdatasync is run, it looks at the dirty fields in the inode, and sees more than just the timestamp flag, even though the only metadata change since the last fdatasync was just the timestamps. Hence we force the log on every subsequent fdatasync even though it is not needed. To fix this, add a new field to the inode log item that tracks changes since the last time fsync/fdatasync forced the log to flush the changes to the journal. This flag is updated when we dirty the inode, but we do it before updating the change count so it does not carry the "core dirty" flag from timestamp updates. The fields are zeroed when the inode is marked clean (due to writeback/freeing) or when an fsync/datasync forces the log. Hence if we only dirty the timestamps on the inode between fsync/fdatasync calls, the fdatasync will not trigger another log force. Over 100 runs of the test program: Ext4 baseline: runtime: 1.63s +/- 0.24s avg lat: 1.59ms +/- 0.24ms iops: ~2000 XFS, vanilla kernel: runtime: 2.45s +/- 0.18s avg lat: 2.39ms +/- 0.18ms log forces: ~400/s iops: ~1000 XFS, patched kernel: runtime: 1.49s +/- 0.26s avg lat: 1.46ms +/- 0.25ms log forces: ~30/s iops: ~1500 Reported-by: Sage Weil <[email protected]> Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Brian Foster <[email protected]> Signed-off-by: Dave Chinner <[email protected]>
---
func:
const char* ExtensionSet::ParseFieldWithExtensionInfo(
    int number, bool was_packed_on_wire, const ExtensionInfo& extension,
    InternalMetadata* metadata, const char* ptr, internal::ParseContext* ctx) {
  if (was_packed_on_wire) {
    switch (extension.type) {
#define HANDLE_TYPE(UPPERCASE, CPP_CAMELCASE)                                \
  case WireFormatLite::TYPE_##UPPERCASE:                                     \
    return internal::Packed##CPP_CAMELCASE##Parser(                          \
        MutableRawRepeatedField(number, extension.type, extension.is_packed, \
                                extension.descriptor),                       \
        ptr, ctx);
      HANDLE_TYPE(INT32, Int32);
      HANDLE_TYPE(INT64, Int64);
      HANDLE_TYPE(UINT32, UInt32);
      HANDLE_TYPE(UINT64, UInt64);
      HANDLE_TYPE(SINT32, SInt32);
      HANDLE_TYPE(SINT64, SInt64);
      HANDLE_TYPE(FIXED32, Fixed32);
      HANDLE_TYPE(FIXED64, Fixed64);
      HANDLE_TYPE(SFIXED32, SFixed32);
      HANDLE_TYPE(SFIXED64, SFixed64);
      HANDLE_TYPE(FLOAT, Float);
      HANDLE_TYPE(DOUBLE, Double);
      HANDLE_TYPE(BOOL, Bool);
#undef HANDLE_TYPE
      case WireFormatLite::TYPE_ENUM:
        return internal::PackedEnumParserArg<T>(
            MutableRawRepeatedField(number, extension.type, extension.is_packed,
                                    extension.descriptor),
            ptr, ctx, extension.enum_validity_check.func,
            extension.enum_validity_check.arg, metadata, number);
      case WireFormatLite::TYPE_STRING:
      case WireFormatLite::TYPE_BYTES:
      case WireFormatLite::TYPE_GROUP:
      case WireFormatLite::TYPE_MESSAGE:
        GOOGLE_LOG(FATAL) << "Non-primitive types can't be packed.";
        break;
    }
  } else {
    switch (extension.type) {
#define HANDLE_VARINT_TYPE(UPPERCASE, CPP_CAMELCASE)                        \
  case WireFormatLite::TYPE_##UPPERCASE: {                                  \
    uint64_t value;                                                         \
    ptr = VarintParse(ptr, &value);                                         \
    GOOGLE_PROTOBUF_PARSER_ASSERT(ptr);                                     \
    if (extension.is_repeated) {                                            \
      Add##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE,          \
                         extension.is_packed, value, extension.descriptor); \
    } else {                                                                \
      Set##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE, value,   \
                         extension.descriptor);                             \
    }                                                                       \
  } break
      HANDLE_VARINT_TYPE(INT32, Int32);
      HANDLE_VARINT_TYPE(INT64, Int64);
      HANDLE_VARINT_TYPE(UINT32, UInt32);
      HANDLE_VARINT_TYPE(UINT64, UInt64);
      HANDLE_VARINT_TYPE(BOOL, Bool);
#undef HANDLE_VARINT_TYPE
#define HANDLE_SVARINT_TYPE(UPPERCASE, CPP_CAMELCASE, SIZE)                 \
  case WireFormatLite::TYPE_##UPPERCASE: {                                  \
    uint64_t val;                                                           \
    ptr = VarintParse(ptr, &val);                                           \
    GOOGLE_PROTOBUF_PARSER_ASSERT(ptr);                                     \
    auto value = WireFormatLite::ZigZagDecode##SIZE(val);                   \
    if (extension.is_repeated) {                                            \
      Add##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE,          \
                         extension.is_packed, value, extension.descriptor); \
    } else {                                                                \
      Set##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE, value,   \
                         extension.descriptor);                             \
    }                                                                       \
  } break
      HANDLE_SVARINT_TYPE(SINT32, Int32, 32);
      HANDLE_SVARINT_TYPE(SINT64, Int64, 64);
#undef HANDLE_SVARINT_TYPE
#define HANDLE_FIXED_TYPE(UPPERCASE, CPP_CAMELCASE, CPPTYPE)                \
  case WireFormatLite::TYPE_##UPPERCASE: {                                  \
    auto value = UnalignedLoad<CPPTYPE>(ptr);                               \
    ptr += sizeof(CPPTYPE);                                                 \
    if (extension.is_repeated) {                                            \
      Add##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE,          \
                         extension.is_packed, value, extension.descriptor); \
    } else {                                                                \
      Set##CPP_CAMELCASE(number, WireFormatLite::TYPE_##UPPERCASE, value,   \
                         extension.descriptor);                             \
    }                                                                       \
  } break
      HANDLE_FIXED_TYPE(FIXED32, UInt32, uint32_t);
      HANDLE_FIXED_TYPE(FIXED64, UInt64, uint64_t);
      HANDLE_FIXED_TYPE(SFIXED32, Int32, int32_t);
      HANDLE_FIXED_TYPE(SFIXED64, Int64, int64_t);
      HANDLE_FIXED_TYPE(FLOAT, Float, float);
      HANDLE_FIXED_TYPE(DOUBLE, Double, double);
#undef HANDLE_FIXED_TYPE
      case WireFormatLite::TYPE_ENUM: {
        uint64_t val;
        ptr = VarintParse(ptr, &val);
        GOOGLE_PROTOBUF_PARSER_ASSERT(ptr);
        int value = val;
        if (!extension.enum_validity_check.func(
                extension.enum_validity_check.arg, value)) {
          WriteVarint(number, val, metadata->mutable_unknown_fields<T>());
        } else if (extension.is_repeated) {
          AddEnum(number, WireFormatLite::TYPE_ENUM, extension.is_packed, value,
                  extension.descriptor);
        } else {
          SetEnum(number, WireFormatLite::TYPE_ENUM, value,
                  extension.descriptor);
        }
        break;
      }
      case WireFormatLite::TYPE_BYTES:
      case WireFormatLite::TYPE_STRING: {
        std::string* value =
            extension.is_repeated
                ? AddString(number, WireFormatLite::TYPE_STRING,
                            extension.descriptor)
                : MutableString(number, WireFormatLite::TYPE_STRING,
                                extension.descriptor);
        int size = ReadSize(&ptr);
        GOOGLE_PROTOBUF_PARSER_ASSERT(ptr);
        return ctx->ReadString(ptr, size, value);
      }
      case WireFormatLite::TYPE_GROUP: {
        MessageLite* value =
            extension.is_repeated
                ? AddMessage(number, WireFormatLite::TYPE_GROUP,
                             *extension.message_info.prototype,
                             extension.descriptor)
                : MutableMessage(number, WireFormatLite::TYPE_GROUP,
                                 *extension.message_info.prototype,
                                 extension.descriptor);
        uint32_t tag = (number << 3) + WireFormatLite::WIRETYPE_START_GROUP;
        return ctx->ParseGroup(value, ptr, tag);
      }
      case WireFormatLite::TYPE_MESSAGE: {
        MessageLite* value =
            extension.is_repeated
                ? AddMessage(number, WireFormatLite::TYPE_MESSAGE,
                             *extension.message_info.prototype,
                             extension.descriptor)
                : MutableMessage(number, WireFormatLite::TYPE_MESSAGE,
                                 *extension.message_info.prototype,
                                 extension.descriptor);
        return ctx->ParseMessage(value, ptr);
      }
    }
  }
  return ptr;
}
target: 0
cwe: [ "CWE-703" ]
project: protobuf
commit_id: d1635e1496f51e0d5653d856211e8821bc47adc4
hash: 126,616,697,497,240,600,000,000,000,000,000,000,000
size: 160
message: Apply patch
---
func:
void hlenCommand(client *c) {
    robj *o;

    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||
        checkType(c,o,OBJ_HASH)) return;

    addReplyLongLong(c,hashTypeLength(o));
}
target: 0
cwe: [ "CWE-190" ]
project: redis
commit_id: f6a40570fa63d5afdd596c78083d754081d80ae3
hash: 111,997,650,204,454,580,000,000,000,000,000,000,000
size: 8
message: Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628) - fix possible heap corruption in ziplist and listpack resulting by trying to allocate more than the maximum size of 4GB. - prevent ziplist (hash and zset) from reaching size of above 1GB, will be converted to HT encoding, that's not a useful size. - prevent listpack (stream) from reaching size of above 1GB. - XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB. - XADD will respond with an error if a single stream record is over 1GB - List type (ziplist in quicklist) was truncating strings that were over 4GB, now it'll respond with an error.
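The CVE-2021-32627/32628 commit message above comes down to checking a proposed growth against a hard cap before reallocating the encoded structure, in a way that cannot itself wrap around. A minimal sketch of that overflow-safe check; the helper name and the 1 GB cap constant are illustrative, not the actual Redis code:

```c
#include <stdint.h>

#define SAFETY_LIMIT (1ULL << 30)  /* 1 GB cap, mirroring the fix description */

/* Return 1 if growing a buffer of `cur` bytes by `add` bytes stays under
 * the cap. Comparing against SAFETY_LIMIT - add (after bounding `add`)
 * avoids computing cur + add, so the check itself cannot overflow. */
static int safe_to_grow(uint64_t cur, uint64_t add) {
    if (add > SAFETY_LIMIT) return 0;        /* single element already too big */
    if (cur > SAFETY_LIMIT - add) return 0;  /* total would exceed the cap */
    return 1;
}
```

The key design point, as in the fix: reject before allocating, rather than letting a 32-bit length field silently truncate a >4GB request.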
---
func:
_copyRangeTblFunction(const RangeTblFunction *from)
{
    RangeTblFunction *newnode = makeNode(RangeTblFunction);

    COPY_NODE_FIELD(funcexpr);
    COPY_SCALAR_FIELD(funccolcount);
    COPY_NODE_FIELD(funccolnames);
    COPY_NODE_FIELD(funccoltypes);
    COPY_NODE_FIELD(funccoltypmods);
    COPY_NODE_FIELD(funccolcollations);
    COPY_BITMAPSET_FIELD(funcparams);

    return newnode;
}
target: 0
cwe: [ "CWE-362" ]
project: postgres
commit_id: 5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
hash: 177,693,047,070,099,300,000,000,000,000,000,000,000
size: 14
message: Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
---
func:
llsec_do_decrypt_auth(struct sk_buff *skb, const struct mac802154_llsec *sec,
                      const struct ieee802154_hdr *hdr,
                      struct mac802154_llsec_key *key, __le64 dev_addr)
{
    u8 iv[16];
    unsigned char *data;
    int authlen, datalen, assoclen, rc;
    struct scatterlist sg;
    struct aead_request *req;

    authlen = ieee802154_sechdr_authtag_len(&hdr->sec);
    llsec_geniv(iv, dev_addr, &hdr->sec);

    req = aead_request_alloc(llsec_tfm_by_len(key, authlen), GFP_ATOMIC);
    if (!req)
        return -ENOMEM;

    assoclen = skb->mac_len;

    data = skb_mac_header(skb) + skb->mac_len;
    datalen = skb_tail_pointer(skb) - data;

    sg_init_one(&sg, skb_mac_header(skb), assoclen + datalen);

    if (!(hdr->sec.level & IEEE802154_SCF_SECLEVEL_ENC)) {
        assoclen += datalen - authlen;
        datalen = authlen;
    }

    aead_request_set_callback(req, 0, NULL, NULL);
    aead_request_set_crypt(req, &sg, &sg, datalen, iv);
    aead_request_set_ad(req, assoclen);

    rc = crypto_aead_decrypt(req);

    kfree_sensitive(req);
    skb_trim(skb, skb->len - authlen);

    return rc;
}
target: 0
cwe: [ "CWE-416" ]
project: linux
commit_id: 1165affd484889d4986cf3b724318935a0b120d8
hash: 327,278,495,877,045,380,000,000,000,000,000,000,000
size: 40
message: net: mac802154: Fix general protection fault syzbot found general protection fault in crypto_destroy_tfm()[1]. It was caused by wrong clean up loop in llsec_key_alloc(). If one of the tfm array members is in IS_ERR() range it will cause general protection fault in clean up function [1]. Call Trace: crypto_free_aead include/crypto/aead.h:191 [inline] [1] llsec_key_alloc net/mac802154/llsec.c:156 [inline] mac802154_llsec_key_add+0x9e0/0xcc0 net/mac802154/llsec.c:249 ieee802154_add_llsec_key+0x56/0x80 net/mac802154/cfg.c:338 rdev_add_llsec_key net/ieee802154/rdev-ops.h:260 [inline] nl802154_add_llsec_key+0x3d3/0x560 net/ieee802154/nl802154.c:1584 genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:739 genl_family_rcv_msg net/netlink/genetlink.c:783 [inline] genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:800 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502 genl_rcv+0x24/0x40 net/netlink/genetlink.c:811 netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline] netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338 netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927 sock_sendmsg_nosec net/socket.c:654 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:674 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2350 ___sys_sendmsg+0xf3/0x170 net/socket.c:2404 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2433 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae Signed-off-by: Pavel Skripkin <[email protected]> Reported-by: [email protected] Acked-by: Alexander Aring <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Stefan Schmidt <[email protected]>
---
func:
static void _php_curl_set_default_options(php_curl *ch)
{
    char *cainfo;

    curl_easy_setopt(ch->cp, CURLOPT_NOPROGRESS, 1);
    curl_easy_setopt(ch->cp, CURLOPT_VERBOSE, 0);
    curl_easy_setopt(ch->cp, CURLOPT_ERRORBUFFER, ch->err.str);
    curl_easy_setopt(ch->cp, CURLOPT_WRITEFUNCTION, curl_write);
    curl_easy_setopt(ch->cp, CURLOPT_FILE, (void *) ch);
    curl_easy_setopt(ch->cp, CURLOPT_READFUNCTION, curl_read);
    curl_easy_setopt(ch->cp, CURLOPT_INFILE, (void *) ch);
    curl_easy_setopt(ch->cp, CURLOPT_HEADERFUNCTION, curl_write_header);
    curl_easy_setopt(ch->cp, CURLOPT_WRITEHEADER, (void *) ch);
#if !defined(ZTS)
    curl_easy_setopt(ch->cp, CURLOPT_DNS_USE_GLOBAL_CACHE, 1);
#endif
    curl_easy_setopt(ch->cp, CURLOPT_DNS_CACHE_TIMEOUT, 120);
    curl_easy_setopt(ch->cp, CURLOPT_MAXREDIRS, 20); /* prevent infinite redirects */

    cainfo = INI_STR("openssl.cafile");
    if (!(cainfo && cainfo[0] != '\0')) {
        cainfo = INI_STR("curl.cainfo");
    }
    if (cainfo && cainfo[0] != '\0') {
        curl_easy_setopt(ch->cp, CURLOPT_CAINFO, cainfo);
    }

#if defined(ZTS)
    curl_easy_setopt(ch->cp, CURLOPT_NOSIGNAL, 1);
#endif
}
target: 0
cwe: [ "CWE-119", "CWE-787" ]
project: php-src
commit_id: 72dbb7f416160f490c4e9987040989a10ad431c7
hash: 177,320,537,213,431,020,000,000,000,000,000,000,000
size: 31
message: Fix bug #72674 - check both curl_escape and curl_unescape
---
func:
void unit_add_to_gc_queue(Unit *u) {
    assert(u);

    if (u->in_gc_queue || u->in_cleanup_queue)
        return;

    if (!unit_may_gc(u))
        return;

    LIST_PREPEND(gc_queue, u->manager->gc_unit_queue, u);
    u->in_gc_queue = true;
}
target: 0
cwe: [ "CWE-269" ]
project: systemd
commit_id: bf65b7e0c9fc215897b676ab9a7c9d1c688143ba
hash: 130,858,163,139,363,430,000,000,000,000,000,000,000
size: 12
message: core: imply NNP and SUID/SGID restriction for DynamicUser=yes service Let's be safe, rather than sorry. This way DynamicUser=yes services can neither take benefit of, nor create SUID/SGID binaries. Given that DynamicUser= is a recent addition only we should be able to get away with turning this on, even though this is strictly speaking a binary compatibility breakage.
---
func:
concat_left_node_opt_info(OnigEncoding enc, NodeOpt* to, NodeOpt* add)
{
    int exb_reach, exm_reach;
    OptAnc tanc;

    concat_opt_anc_info(&tanc, &to->anc, &add->anc, to->len.max, add->len.max);
    copy_opt_anc_info(&to->anc, &tanc);

    if (add->exb.len > 0 && to->len.max == 0) {
        concat_opt_anc_info(&tanc, &to->anc, &add->exb.anc,
                            to->len.max, add->len.max);
        copy_opt_anc_info(&add->exb.anc, &tanc);
    }

    if (add->map.value > 0 && to->len.max == 0) {
        if (add->map.mmd.max == 0)
            add->map.anc.left |= to->anc.left;
    }

    exb_reach = to->exb.reach_end;
    exm_reach = to->exm.reach_end;

    if (add->len.max != 0)
        to->exb.reach_end = to->exm.reach_end = 0;

    if (add->exb.len > 0) {
        if (exb_reach) {
            concat_opt_exact(&to->exb, &add->exb, enc);
            clear_opt_exact(&add->exb);
        }
        else if (exm_reach) {
            concat_opt_exact(&to->exm, &add->exb, enc);
            clear_opt_exact(&add->exb);
        }
    }
    select_opt_exact(enc, &to->exm, &add->exb);
    select_opt_exact(enc, &to->exm, &add->exm);

    if (to->expr.len > 0) {
        if (add->len.max > 0) {
            if (to->expr.len > (int )add->len.max)
                to->expr.len = add->len.max;

            if (to->expr.mmd.max == 0)
                select_opt_exact(enc, &to->exb, &to->expr);
            else
                select_opt_exact(enc, &to->exm, &to->expr);
        }
    }
    else if (add->expr.len > 0) {
        copy_opt_exact(&to->expr, &add->expr);
    }

    select_opt_map(&to->map, &add->map);
    add_mml(&to->len, &add->len);
}
target: 0
cwe: [ "CWE-476" ]
project: oniguruma
commit_id: 410f5916429e7d2920e1d4867388514f605413b8
hash: 311,748,582,147,749,800,000,000,000,000,000,000,000
size: 55
message: fix #87: Read unknown address in onig_error_code_to_str()
---
func:
static int io_tee_prep(struct io_kiocb *req,
                       const struct io_uring_sqe *sqe)
{
    if (READ_ONCE(sqe->splice_off_in) || READ_ONCE(sqe->off))
        return -EINVAL;
    return __io_splice_prep(req, sqe);
}
target: 0
cwe: []
project: linux
commit_id: 0f2122045b946241a9e549c2a76cea54fa58a7ff
hash: 323,406,473,734,784,500,000,000,000,000,000,000,000
size: 7
message: io_uring: don't rely on weak ->files references Grab actual references to the files_struct. To avoid circular references issues due to this, we add a per-task note that keeps track of what io_uring contexts a task has used. When the tasks execs or exits its assigned files, we cancel requests based on this tracking. With that, we can grab proper references to the files table, and no longer need to rely on stashing away ring_fd and ring_file to check if the ring_fd may have been closed. Cc: [email protected] # v5.5+ Reviewed-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
---
func:
static int sd_compat_ioctl(struct block_device *bdev, fmode_t mode,
                           unsigned int cmd, unsigned long arg)
{
    struct scsi_device *sdev = scsi_disk(bdev->bd_disk)->device;

    /*
     * If we are in the middle of error recovery, don't let anyone
     * else try and use this device. Also, if error recovery fails, it
     * may try and take the device offline, in which case all further
     * access to the device is prohibited.
     */
    if (!scsi_block_when_processing_errors(sdev))
        return -ENODEV;

    if (sdev->host->hostt->compat_ioctl) {
        int ret;

        ret = sdev->host->hostt->compat_ioctl(sdev, cmd, (void __user *)arg);

        return ret;
    }

    /*
     * Let the static ioctl translation table take care of it.
     */
    return -ENOIOCTLCMD;
}
target: 1
cwe: [ "CWE-284", "CWE-264" ]
project: linux
commit_id: 0bfc96cb77224736dfa35c3c555d37b3646ef35e
hash: 182,261,452,713,770,400,000,000,000,000,000,000,000
size: 27
message: block: fail SCSI passthrough ioctls on partition devices Linux allows executing the SG_IO ioctl on a partition or LVM volume, and will pass the command to the underlying block device. This is well-known, but it is also a large security problem when (via Unix permissions, ACLs, SELinux or a combination thereof) a program or user needs to be granted access only to part of the disk. This patch lets partitions forward a small set of harmless ioctls; others are logged with printk so that we can see which ioctls are actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred. Of course it was being sent to a (partition on a) hard disk, so it would have failed with ENOTTY and the patch isn't changing anything in practice. Still, I'm treating it specially to avoid spamming the logs. In principle, this restriction should include programs running with CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and /dev/sdb, it still should not be able to read/write outside the boundaries of /dev/sda2 independent of the capabilities. However, for now programs with CAP_SYS_RAWIO will still be allowed to send the ioctls. Their actions will still be logged. This patch does not affect the non-libata IDE driver. That driver however already tests for bd != bd->bd_contains before issuing some ioctl; it could be restricted further to forbid these ioctls even for programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO. Cc: [email protected] Cc: Jens Axboe <[email protected]> Cc: James Bottomley <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]> [ Make it also print the command name when warning - Linus ] Signed-off-by: Linus Torvalds <[email protected]>
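The patch described in this record's message works by letting partitions forward only a small allowlist of harmless ioctls and rejecting the rest (notably SG_IO). A minimal sketch of that allowlist pattern; the command constants below are invented for illustration and are not the real kernel ioctl numbers:

```c
/* Hypothetical command numbers standing in for real ioctls. */
enum {
    IOC_GET_CAPABILITY = 0x1001,  /* harmless query, like CDROM_GET_CAPABILITY */
    IOC_GET_GEOMETRY   = 0x1002,  /* harmless query */
    IOC_PASSTHROUGH    = 0x2001   /* dangerous passthrough, like SG_IO */
};

/* Return 1 if `cmd` may be forwarded from a partition to the whole-disk
 * device, 0 if it must be rejected there. The default-deny switch is the
 * point: anything not explicitly listed is refused. */
static int partition_ioctl_allowed(unsigned int cmd) {
    switch (cmd) {
    case IOC_GET_CAPABILITY:
    case IOC_GET_GEOMETRY:
        return 1;
    default:
        return 0;  /* e.g. raw passthrough commands stop at the partition */
    }
}
```

Default-deny is the design choice the commit message argues for: an allowlist stays safe when new commands are added, where a denylist silently lets them through.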
---
func:
static struct segment *new_init_section(struct playlist *pls,
                                        struct init_section_info *info,
                                        const char *url_base)
{
    struct segment *sec;
    char *ptr;
    char tmp_str[MAX_URL_SIZE];

    if (!info->uri[0])
        return NULL;

    sec = av_mallocz(sizeof(*sec));
    if (!sec)
        return NULL;

    ff_make_absolute_url(tmp_str, sizeof(tmp_str), url_base, info->uri);
    sec->url = av_strdup(tmp_str);
    if (!sec->url) {
        av_free(sec);
        return NULL;
    }

    if (info->byterange[0]) {
        sec->size = strtoll(info->byterange, NULL, 10);
        ptr = strchr(info->byterange, '@');
        if (ptr)
            sec->url_offset = strtoll(ptr+1, NULL, 10);
    } else {
        /* the entire file is the init section */
        sec->size = -1;
    }

    dynarray_add(&pls->init_sections, &pls->n_init_sections, sec);

    return sec;
}
target: 0
cwe: [ "CWE-703", "CWE-835" ]
project: FFmpeg
commit_id: 7ec414892ddcad88313848494b6fc5f437c9ca4a
hash: 11,663,049,502,347,047,000,000,000,000,000,000,000
size: 36
message: avformat/hls: Fix DoS due to infinite loop Fixes: loop.m3u The default max iteration count of 1000 is arbitrary and ideas for a better solution are welcome Found-by: Xiaohei and Wangchu from Alibaba Security Team Previous version reviewed-by: Steven Liu <[email protected]> Signed-off-by: Michael Niedermayer <[email protected]>
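The infinite-loop fix described in this record's message bounds playlist resolution with a maximum iteration count (1000 by default) so a self-referencing playlist cannot spin forever. A minimal sketch of that bounded-loop guard; the chain representation and function name are made up for the example, only the capping pattern mirrors the fix:

```c
#define MAX_RESOLVE_DEPTH 1000  /* arbitrary cap, as the commit message notes */

/* Follow a chain of indices where next[i] names the next element and -1
 * terminates. A malicious input can form a cycle; the depth counter turns
 * the would-be infinite loop into a clean failure. Returns the terminal
 * index, or -1 if the cap is hit. */
static int resolve_chain(const int *next, int start) {
    int i = start, depth = 0;
    while (next[i] != -1) {
        if (++depth > MAX_RESOLVE_DEPTH)
            return -1;  /* cycle detected: fail instead of hanging */
        i = next[i];
    }
    return i;
}
```

The cap trades completeness for availability: legitimate chains deeper than the limit are rejected too, which is why the message calls the value arbitrary and invites better ideas.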
---
func:
CImg<T>& operator<<=(const t value) {
    if (is_empty()) return *this;
    cimg_openmp_for(*this,((longT)*ptr) << (int)value,65536);
    return *this;
}
target: 0
cwe: [ "CWE-770" ]
project: cimg
commit_id: 619cb58dd90b4e03ac68286c70ed98acbefd1c90
hash: 2,549,272,255,284,635,300,000,000,000,000,000,000
size: 5
message: CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
---
func:
void dccp_print(netdissect_options *ndo, const u_char *bp,
                const u_char *data2, u_int len)
{
    const struct dccp_hdr *dh;
    const struct ip *ip;
    const struct ip6_hdr *ip6;
    const u_char *cp;
    u_short sport, dport;
    u_int hlen;
    u_int fixed_hdrlen;
    uint8_t dccph_type;

    dh = (const struct dccp_hdr *)bp;
    ip = (const struct ip *)data2;
    if (IP_V(ip) == 6)
        ip6 = (const struct ip6_hdr *)data2;
    else
        ip6 = NULL;

    /* make sure we have enough data to look at the X bit */
    cp = (const u_char *)(dh + 1);
    if (cp > ndo->ndo_snapend) {
        ND_PRINT((ndo, "[Invalid packet|dccp]"));
        return;
    }
    if (len < sizeof(struct dccp_hdr)) {
        ND_PRINT((ndo, "truncated-dccp - %u bytes missing!",
                  len - (u_int)sizeof(struct dccp_hdr)));
        return;
    }

    /* get the length of the generic header */
    fixed_hdrlen = dccp_basic_hdr_len(dh);
    if (len < fixed_hdrlen) {
        ND_PRINT((ndo, "truncated-dccp - %u bytes missing!",
                  len - fixed_hdrlen));
        return;
    }
    ND_TCHECK2(*dh, fixed_hdrlen);

    sport = EXTRACT_16BITS(&dh->dccph_sport);
    dport = EXTRACT_16BITS(&dh->dccph_dport);
    hlen = dh->dccph_doff * 4;

    if (ip6) {
        ND_PRINT((ndo, "%s.%d > %s.%d: ",
                  ip6addr_string(ndo, &ip6->ip6_src), sport,
                  ip6addr_string(ndo, &ip6->ip6_dst), dport));
    } else {
        ND_PRINT((ndo, "%s.%d > %s.%d: ",
                  ipaddr_string(ndo, &ip->ip_src), sport,
                  ipaddr_string(ndo, &ip->ip_dst), dport));
    }

    ND_PRINT((ndo, "DCCP"));

    if (ndo->ndo_qflag) {
        ND_PRINT((ndo, " %d", len - hlen));
        if (hlen > len) {
            ND_PRINT((ndo, " [bad hdr length %u - too long, > %u]",
                      hlen, len));
        }
        return;
    }

    /* other variables in generic header */
    if (ndo->ndo_vflag) {
        ND_PRINT((ndo, " (CCVal %d, CsCov %d, ",
                  DCCPH_CCVAL(dh), DCCPH_CSCOV(dh)));
    }

    /* checksum calculation */
    if (ndo->ndo_vflag && ND_TTEST2(bp[0], len)) {
        uint16_t sum = 0, dccp_sum;

        dccp_sum = EXTRACT_16BITS(&dh->dccph_checksum);
        ND_PRINT((ndo, "cksum 0x%04x ", dccp_sum));
        if (IP_V(ip) == 4)
            sum = dccp_cksum(ndo, ip, dh, len);
        else if (IP_V(ip) == 6)
            sum = dccp6_cksum(ndo, ip6, dh, len);
        if (sum != 0)
            ND_PRINT((ndo, "(incorrect -> 0x%04x)",
                      in_cksum_shouldbe(dccp_sum, sum)));
        else
            ND_PRINT((ndo, "(correct)"));
    }

    if (ndo->ndo_vflag)
        ND_PRINT((ndo, ")"));
    ND_PRINT((ndo, " "));

    dccph_type = DCCPH_TYPE(dh);
    switch (dccph_type) {
    case DCCP_PKT_REQUEST: {
        const struct dccp_hdr_request *dhr =
            (const struct dccp_hdr_request *)(bp + fixed_hdrlen);
        fixed_hdrlen += 4;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_TCHECK(*dhr);
        ND_PRINT((ndo, "%s (service=%d) ",
                  tok2str(dccp_pkt_type_str, "", dccph_type),
                  EXTRACT_32BITS(&dhr->dccph_req_service)));
        break;
    }
    case DCCP_PKT_RESPONSE: {
        const struct dccp_hdr_response *dhr =
            (const struct dccp_hdr_response *)(bp + fixed_hdrlen);
        fixed_hdrlen += 12;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_TCHECK(*dhr);
        ND_PRINT((ndo, "%s (service=%d) ",
                  tok2str(dccp_pkt_type_str, "", dccph_type),
                  EXTRACT_32BITS(&dhr->dccph_resp_service)));
        break;
    }
    case DCCP_PKT_DATA:
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    case DCCP_PKT_ACK: {
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    }
    case DCCP_PKT_DATAACK: {
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    }
    case DCCP_PKT_CLOSEREQ:
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    case DCCP_PKT_CLOSE:
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    case DCCP_PKT_RESET: {
        const struct dccp_hdr_reset *dhr =
            (const struct dccp_hdr_reset *)(bp + fixed_hdrlen);
        fixed_hdrlen += 12;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_TCHECK(*dhr);
        ND_PRINT((ndo, "%s (code=%s) ",
                  tok2str(dccp_pkt_type_str, "", dccph_type),
                  dccp_reset_code(dhr->dccph_reset_code)));
        break;
    }
    case DCCP_PKT_SYNC:
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    case DCCP_PKT_SYNCACK:
        fixed_hdrlen += 8;
        if (len < fixed_hdrlen) {
            ND_PRINT((ndo, "truncated-%s - %u bytes missing!",
                      tok2str(dccp_pkt_type_str, "", dccph_type),
                      len - fixed_hdrlen));
            return;
        }
        ND_PRINT((ndo, "%s ", tok2str(dccp_pkt_type_str, "", dccph_type)));
        break;
    default:
        ND_PRINT((ndo, "%s ",
                  tok2str(dccp_pkt_type_str, "unknown-type-%u", dccph_type)));
        break;
    }

    if ((DCCPH_TYPE(dh) != DCCP_PKT_DATA) &&
        (DCCPH_TYPE(dh) != DCCP_PKT_REQUEST))
        dccp_print_ack_no(ndo, bp);

    if (ndo->ndo_vflag < 2)
        return;

    ND_PRINT((ndo, "seq %" PRIu64, dccp_seqno(bp)));

    /* process options */
    if (hlen > fixed_hdrlen) {
        u_int optlen;
        cp = bp + fixed_hdrlen;
        ND_PRINT((ndo, " <"));

        hlen -= fixed_hdrlen;
        while (1) {
            optlen = dccp_print_option(ndo, cp, hlen);
            if (!optlen)
                break;
            if (hlen <= optlen)
                break;
            hlen -= optlen;
            cp += optlen;
            ND_PRINT((ndo, ", "));
        }
        ND_PRINT((ndo, ">"));
    }
    return;
trunc:
    ND_PRINT((ndo, "%s", tstr));
    return;
}
target: 0
cwe: [ "CWE-125" ]
project: tcpdump
commit_id: 211124b972e74f0da66bc8b16f181f78793e2f66
hash: 249,882,691,013,207,930,000,000,000,000,000,000,000
size: 244
(for 4.9.3) CVE-2018-16229/DCCP: Fix printing "Timestamp" and "Timestamp Echo" options Add some comments. Moreover: Put a function definition name at the beginning of the line. (This change was ported from commit 6df4852 in the master branch.) Ryan Ackroyd had independently identified this buffer over-read later by means of fuzzing and provided the packet capture file for the test.
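The dccp_print() record above walks DCCP options and bails out when dccp_print_option() reports a bad length; the underlying pattern — validate a declared option length against the bytes actually remaining before touching the payload — can be sketched as below. This is an illustrative standalone sketch, not tcpdump code: the function name and the "types below 32 are single-byte" convention are borrowed loosely from the DCCP option format, and the buffer is assumed fully captured.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of TLV option walking with length validation, loosely modeled on
 * the dccp_print() option loop above. Returns the number of options parsed,
 * or -1 if a declared length would run past the buffer -- reading option
 * payload without this check is the CVE-2018-16229 class of over-read. */
static int walk_options(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    int count = 0;

    while (off < len) {
        uint8_t type = buf[off];

        if (type < 32) {            /* single-byte option, no length field */
            off += 1;
            count++;
            continue;
        }
        if (off + 2 > len)          /* need both type and length bytes */
            return -1;

        uint8_t optlen = buf[off + 1];
        if (optlen < 2 || off + optlen > len)
            return -1;              /* declared length exceeds remaining data */

        /* payload is buf[off+2 .. off+optlen-1]; only now safe to read */
        off += optlen;
        count++;
    }
    return count;
}
```

The key detail is that both the two-byte header and the full declared length are checked against `len` before any payload byte is dereferenced.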
inline t maxabs(const t& a, const t& b, const t& abs_b) { return abs_b>cimg::abs(a)?b:a; }
0
[ "CWE-770" ]
cimg
619cb58dd90b4e03ac68286c70ed98acbefd1c90
114,084,438,328,578,030,000,000,000,000,000,000,000
3
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file do not exceed file size.
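The CImg commit above guards image loaders against header-declared dimensions larger than the file itself. A hedged sketch of that sanity check follows; the helper name and the bytes-per-pixel parameter are mine for illustration, not CImg's API, and the point is doing the size comparison without integer overflow.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Reject a BMP-style header whose declared pixel data could not possibly
 * fit in the file (the CWE-770 pattern: allocating from untrusted sizes). */
static bool dimensions_plausible(uint32_t width, uint32_t height,
                                 uint32_t bytes_per_pixel, uint64_t file_size)
{
    if (width == 0 || height == 0 || bytes_per_pixel == 0)
        return false;
    /* width*height always fits in 64 bits; dividing file_size by the pixel
     * stride instead of multiplying keeps the comparison overflow-free. */
    uint64_t pixels = (uint64_t)width * height;
    return pixels <= file_size / bytes_per_pixel;
}
```

Using division rather than `pixels * bytes_per_pixel <= file_size` avoids the 64-bit multiply wrapping for adversarial width/height values.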
static int sig_cb(const char *elem, int len, void *arg) { sig_cb_st *sarg = arg; size_t i; char etmp[20], *p; int sig_alg, hash_alg; if (elem == NULL) return 0; if (sarg->sigalgcnt == MAX_SIGALGLEN) return 0; if (len > (int)(sizeof(etmp) - 1)) return 0; memcpy(etmp, elem, len); etmp[len] = 0; p = strchr(etmp, '+'); if (!p) return 0; *p = 0; p++; if (!*p) return 0; if (!strcmp(etmp, "RSA")) sig_alg = EVP_PKEY_RSA; else if (!strcmp(etmp, "DSA")) sig_alg = EVP_PKEY_DSA; else if (!strcmp(etmp, "ECDSA")) sig_alg = EVP_PKEY_EC; else return 0; hash_alg = OBJ_sn2nid(p); if (hash_alg == NID_undef) hash_alg = OBJ_ln2nid(p); if (hash_alg == NID_undef) return 0; for (i = 0; i < sarg->sigalgcnt; i += 2) { if (sarg->sigalgs[i] == sig_alg && sarg->sigalgs[i + 1] == hash_alg) return 0; } sarg->sigalgs[sarg->sigalgcnt++] = hash_alg; sarg->sigalgs[sarg->sigalgcnt++] = sig_alg; return 1; }
0
[]
openssl
76343947ada960b6269090638f5391068daee88d
60,240,487,868,110,200,000,000,000,000,000,000,000
45
Fix for CVE-2015-0291 If a client renegotiates using an invalid signature algorithms extension it will crash a server with a NULL pointer dereference. Thanks to David Ramos of Stanford University for reporting this bug. CVE-2015-0291 Reviewed-by: Tim Hudson <[email protected]> Conflicts: ssl/t1_lib.c
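The sig_cb() function in this record shows OpenSSL's defensive parsing of "ALG+HASH" elements: bound the copy, require the separator, reject empty halves. A standalone sketch of that shape (my own function and buffer sizes, not OpenSSL's) might look like:

```c
#include <assert.h>
#include <string.h>

/* Split an "ALG+HASH" element into two NUL-terminated parts, rejecting
 * anything that would not fit or lacks both non-empty halves -- the same
 * defensive shape as sig_cb() above. Returns 1 on success, 0 otherwise. */
static int split_sigalg(const char *elem, size_t len,
                        char *alg, size_t algsz, char *hash, size_t hashsz)
{
    char tmp[32];
    const char *plus;

    if (elem == NULL || len == 0 || len >= sizeof(tmp))
        return 0;
    memcpy(tmp, elem, len);
    tmp[len] = '\0';

    plus = strchr(tmp, '+');
    if (plus == NULL || plus == tmp || plus[1] == '\0')
        return 0;                       /* need non-empty halves */

    size_t alen = (size_t)(plus - tmp);
    size_t hlen = len - alen - 1;
    if (alen >= algsz || hlen >= hashsz)
        return 0;
    memcpy(alg, tmp, alen);
    alg[alen] = '\0';
    memcpy(hash, plus + 1, hlen);
    hash[hlen] = '\0';
    return 1;
}
```

Every rejection path returns before any out-of-bounds write can occur, which is why the parser can safely face attacker-controlled extension data.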
add_acl_headers(int where, uschar *acl_name) { header_line *h, *next; header_line *last_received = NULL; switch(where) { case ACL_WHERE_DKIM: case ACL_WHERE_MIME: case ACL_WHERE_DATA: if ( cutthrough.fd >= 0 && cutthrough.delivery && (acl_removed_headers || acl_added_headers)) { log_write(0, LOG_MAIN|LOG_PANIC, "Header modification in data ACLs" " will not take effect on cutthrough deliveries"); return; } } if (acl_removed_headers) { DEBUG(D_receive|D_acl) debug_printf_indent(">>Headers removed by %s ACL:\n", acl_name); for (h = header_list; h; h = h->next) if (h->type != htype_old) { const uschar * list = acl_removed_headers; int sep = ':'; /* This is specified as a colon-separated list */ uschar *s; uschar buffer[128]; while ((s = string_nextinlist(&list, &sep, buffer, sizeof(buffer)))) if (header_testname(h, s, Ustrlen(s), FALSE)) { h->type = htype_old; DEBUG(D_receive|D_acl) debug_printf_indent(" %s", h->text); } } acl_removed_headers = NULL; DEBUG(D_receive|D_acl) debug_printf_indent(">>\n"); } if (!acl_added_headers) return; DEBUG(D_receive|D_acl) debug_printf_indent(">>Headers added by %s ACL:\n", acl_name); for (h = acl_added_headers; h; h = next) { next = h->next; switch(h->type) { case htype_add_top: h->next = header_list; header_list = h; DEBUG(D_receive|D_acl) debug_printf_indent(" (at top)"); break; case htype_add_rec: if (last_received == NULL) { last_received = header_list; while (!header_testname(last_received, US"Received", 8, FALSE)) last_received = last_received->next; while (last_received->next != NULL && header_testname(last_received->next, US"Received", 8, FALSE)) last_received = last_received->next; } h->next = last_received->next; last_received->next = h; DEBUG(D_receive|D_acl) debug_printf_indent(" (after Received:)"); break; case htype_add_rfc: /* add header before any header which is NOT Received: or Resent- */ last_received = header_list; while ( (last_received->next != NULL) && ( (header_testname(last_received->next, US"Received", 8, 
FALSE)) || (header_testname_incomplete(last_received->next, US"Resent-", 7, FALSE)) ) ) last_received = last_received->next; /* last_received now points to the last Received: or Resent-* header in an uninterrupted chain of those header types (seen from the beginning of all headers. Our current header must follow it. */ h->next = last_received->next; last_received->next = h; DEBUG(D_receive|D_acl) debug_printf_indent(" (before any non-Received: or Resent-*: header)"); break; default: h->next = NULL; header_last->next = h; break; } if (h->next == NULL) header_last = h; /* Check for one of the known header types (From:, To:, etc.) though in practice most added headers are going to be "other". Lower case identification letters are never stored with the header; they are used for existence tests when messages are received. So discard any lower case flag values. */ h->type = header_checkname(h, FALSE); if (h->type >= 'a') h->type = htype_other; DEBUG(D_receive|D_acl) debug_printf_indent(" %s", header_last->text); } acl_added_headers = NULL; DEBUG(D_receive|D_acl) debug_printf_indent(">>\n"); }
0
[ "CWE-416" ]
exim
4e6ae6235c68de243b1c2419027472d7659aa2b4
11,561,020,290,966,343,000,000,000,000,000,000,000
109
Avoid release of store if there have been later allocations. Bug 2199
static int fpga_write(int iobase, unsigned char wrd) { unsigned char bit; int k; unsigned long timeout = jiffies + HZ / 10; for (k = 0; k < 8; k++) { bit = (wrd & 0x80) ? (MCR_RTS | MCR_DTR) : MCR_DTR; outb(bit | MCR_OUT1 | MCR_OUT2, MCR(iobase)); wrd <<= 1; outb(0xfc, THR(iobase)); while ((inb(LSR(iobase)) & LSR_TSRE) == 0) if (time_after(jiffies, timeout)) return -1; } return 0; }
0
[ "CWE-401" ]
linux
29eb31542787e1019208a2e1047bb7c76c069536
266,063,814,541,304,720,000,000,000,000,000,000,000
18
yam: fix a memory leak in yam_siocdevprivate() ym needs to be freed when ym->cmd != SIOCYAMSMCS. Fixes: 0781168e23a2 ("yam: fix a missing-check bug") Signed-off-by: Hangyu Hua <[email protected]> Signed-off-by: David S. Miller <[email protected]>
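The yam commit in this record plugs a leak on an early-return error path. A common discipline that prevents this bug class is the single-exit `goto out` pattern; the sketch below is illustrative user-space code (the struct, command values, and errno stand-ins are mine, not the yam driver's).

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the error-path discipline behind the yam_siocdevprivate() fix:
 * once a buffer is allocated and filled, every exit path must release it. */
struct fake_cfg { int cmd; char data[16]; };

static int handle_ioctl(const struct fake_cfg *user, int expected_cmd)
{
    struct fake_cfg *ym = malloc(sizeof(*ym));
    int err = 0;

    if (ym == NULL)
        return -12;                     /* stand-in for -ENOMEM */
    memcpy(ym, user, sizeof(*ym));      /* stand-in for copy_from_user() */

    if (ym->cmd != expected_cmd) {
        err = -22;                      /* -EINVAL: the leaked path in the bug */
        goto out;
    }
    /* ... act on ym->data ... */
out:
    free(ym);                           /* single exit frees on every path */
    return err;
}
```

Because the mismatch branch jumps to `out` instead of returning directly, the allocation is released regardless of which path is taken.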
report_default_term(char_u *term) { mch_errmsg(_("defaulting to '")); mch_errmsg((char *)term); mch_errmsg("'\r\n"); if (emsg_silent == 0 && !in_assert_fails) { screen_start(); // don't know where cursor is now out_flush(); if (!is_not_a_term()) ui_delay(2007L, TRUE); } }
0
[ "CWE-125", "CWE-787" ]
vim
e178af5a586ea023622d460779fdcabbbfac0908
22,526,637,893,631,933,000,000,000,000,000,000,000
13
patch 8.2.5160: accessing invalid memory after changing terminal size Problem: Accessing invalid memory after changing terminal size. Solution: Adjust cmdline_row and msg_row to the value of Rows.
CImg<_cimg_Tt> get_cross(const CImg<t>& img) const { return CImg<_cimg_Tt>(*this).cross(img); }
0
[ "CWE-770" ]
cimg
619cb58dd90b4e03ac68286c70ed98acbefd1c90
246,053,058,465,114,460,000,000,000,000,000,000,000
3
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file does not exceed file size.
void Http2UpstreamIntegrationTest::bidirectionalStreaming(uint32_t bytes) { initialize(); codec_client_ = makeHttpConnection(lookupPort("http")); // Start the request. auto encoder_decoder = codec_client_->startRequest(Http::TestRequestHeaderMapImpl{{":method", "POST"}, {":path", "/test/long/url"}, {":scheme", "http"}, {":authority", "host"}}); auto response = std::move(encoder_decoder.second); request_encoder_ = &encoder_decoder.first; ASSERT_TRUE(fake_upstreams_[0]->waitForHttpConnection(*dispatcher_, fake_upstream_connection_)); ASSERT_TRUE(fake_upstream_connection_->waitForNewStream(*dispatcher_, upstream_request_)); // Send part of the request body and ensure it is received upstream. codec_client_->sendData(*request_encoder_, bytes, false); ASSERT_TRUE(upstream_request_->waitForData(*dispatcher_, bytes)); // Start sending the response and ensure it is received downstream. upstream_request_->encodeHeaders(Http::TestResponseHeaderMapImpl{{":status", "200"}}, false); upstream_request_->encodeData(bytes, false); response->waitForBodyData(bytes); // Finish the request. codec_client_->sendTrailers(*request_encoder_, Http::TestRequestTrailerMapImpl{{"trailer", "foo"}}); ASSERT_TRUE(upstream_request_->waitForEndStream(*dispatcher_)); // Finish the response. upstream_request_->encodeTrailers(Http::TestResponseTrailerMapImpl{{"trailer", "bar"}}); response->waitForEndStream(); EXPECT_TRUE(response->complete()); }
0
[ "CWE-400" ]
envoy
0e49a495826ea9e29134c1bd54fdeb31a034f40c
28,516,107,128,001,844,000,000,000,000,000,000,000
34
http/2: add stats and stream flush timeout (#139) This commit adds a new stream flush timeout to guard against a remote server that does not open window once an entire stream has been buffered for flushing. Additional stats have also been added to better understand the codecs view of active streams as well as amount of data buffered. Signed-off-by: Matt Klein <[email protected]>
GF_Err tenc_dump(GF_Box *a, FILE * trace) { GF_TrackEncryptionBox *ptr = (GF_TrackEncryptionBox*) a; if (!a) return GF_BAD_PARAM; gf_isom_box_dump_start(a, "TrackEncryptionBox", trace); fprintf(trace, "isEncrypted=\"%d\"", ptr->isProtected); if (ptr->Per_Sample_IV_Size) fprintf(trace, " IV_size=\"%d\" KID=\"", ptr->Per_Sample_IV_Size); else { fprintf(trace, " constant_IV_size=\"%d\" constant_IV=\"", ptr->constant_IV_size); dump_data_hex(trace, (char *) ptr->constant_IV, ptr->constant_IV_size); fprintf(trace, "\" KID=\""); } dump_data_hex(trace, (char *) ptr->KID, 16); if (ptr->version) fprintf(trace, "\" crypt_byte_block=\"%d\" skip_byte_block=\"%d", ptr->crypt_byte_block, ptr->skip_byte_block); fprintf(trace, "\">\n"); gf_isom_box_dump_done("TrackEncryptionBox", a, trace); return GF_OK; }
1
[ "CWE-125" ]
gpac
bceb03fd2be95097a7b409ea59914f332fb6bc86
260,234,517,776,416,400,000,000,000,000,000,000,000
22
fixed 2 possible heap overflows (inc. #1088)
static char *mode_to_str(mode_t m) { static char str[11]; snprintf(str, sizeof(str), "%c%c%c%c%c%c%c%c%c%c", S_ISDIR(m) ? 'd' : '-', m & S_IRUSR ? 'r' : '-', m & S_IWUSR ? 'w' : '-', m & S_IXUSR ? 'x' : '-', m & S_IRGRP ? 'r' : '-', m & S_IWGRP ? 'w' : '-', m & S_IXGRP ? 'x' : '-', m & S_IROTH ? 'r' : '-', m & S_IWOTH ? 'w' : '-', m & S_IXOTH ? 'x' : '-'); return str; }
0
[ "CWE-120", "CWE-787" ]
uftpd
0fb2c031ce0ace07cc19cd2cb2143c4b5a63c9dd
216,588,653,288,237,900,000,000,000,000,000,000,000
18
FTP: Fix buffer overflow in PORT parser, reported by Aaron Esau Signed-off-by: Joachim Nilsson <[email protected]>
__u32 tcp_init_cwnd(const struct tcp_sock *tp, const struct dst_entry *dst) { __u32 cwnd = (dst ? dst_metric(dst, RTAX_INITCWND) : 0); if (!cwnd) cwnd = TCP_INIT_CWND; return min_t(__u32, cwnd, tp->snd_cwnd_clamp); }
0
[]
net-next
fdf5af0daf8019cec2396cdef8fb042d80fe71fa
158,885,573,745,228,800,000,000,000,000,000,000,000
8
tcp: drop SYN+FIN messages Denys Fedoryshchenko reported that SYN+FIN attacks were bringing his linux machines to their limits. Dont call conn_request() if the TCP flags includes SYN flag Reported-by: Denys Fedoryshchenko <[email protected]> Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
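The commit above drops segments carrying both SYN and FIN before they reach conn_request(); the flag test itself is a plain bitmask check. A minimal sketch (flag constants follow the usual TCP header bit layout, independent of any kernel struct):

```c
#include <assert.h>
#include <stdint.h>

#define TH_FIN 0x01
#define TH_SYN 0x02
#define TH_RST 0x04
#define TH_ACK 0x10

/* Returns nonzero iff BOTH SYN and FIN are set -- the illegal combination
 * the tcp fix rejects early instead of handing to conn_request(). */
static int is_syn_fin(uint8_t tcp_flags)
{
    return (tcp_flags & (TH_SYN | TH_FIN)) == (TH_SYN | TH_FIN);
}
```

Comparing the masked value against the full mask (rather than testing each bit separately) makes the "both bits present" intent unambiguous.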
static __inline__ void isdn_net_device_wake_queue(isdn_net_local *lp) { if (lp->master) netif_wake_queue(lp->master); else netif_wake_queue(lp->netdev->dev); }
0
[ "CWE-119" ]
linux
9f5af546e6acc30f075828cb58c7f09665033967
172,118,282,409,734,300,000,000,000,000,000,000,000
7
isdn/i4l: fix buffer overflow This fixes a potential buffer overflow in isdn_net.c caused by an unbounded strcpy. [ ISDN seems to be effectively unmaintained, and the I4L driver in particular is long deprecated, but in case somebody uses this.. - Linus ] Signed-off-by: Jiten Thakkar <[email protected]> Signed-off-by: Annie Cherkaev <[email protected]> Cc: Karsten Keil <[email protected]> Cc: Kees Cook <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
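The isdn_net fix replaced an unbounded strcpy with a length-checked copy. A portable user-space equivalent is snprintf() into the fixed-size destination, which always NUL-terminates and lets the caller detect truncation; the helper below is my sketch, not the kernel's.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded string copy: returns 1 if src fit entirely, 0 if truncated.
 * Unlike strcpy, the destination is always NUL-terminated (for dstsz > 0). */
static int copy_name(char *dst, size_t dstsz, const char *src)
{
    int n = snprintf(dst, dstsz, "%s", src);
    return n >= 0 && (size_t)n < dstsz;
}
```

snprintf()'s return value is the length it *wanted* to write, so comparing it against the buffer size is a reliable truncation test.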
XMLRPC_VALUE XMLRPC_CreateValueDateTime(const char* id, time_t time) { XMLRPC_VALUE val = XMLRPC_CreateValueEmpty(); if(val) { XMLRPC_SetValueDateTime(val, time); if(id) { XMLRPC_SetValueID(val, id, 0); } } return val; }
0
[ "CWE-119" ]
php-src
88412772d295ebf7dd34409534507dc9bcac726e
298,195,999,709,884,400,000,000,000,000,000,000,000
10
Fix bug #68027 - fix date parsing in XMLRPC lib
static void mon_text_read_data(struct mon_reader_text *rp, struct mon_text_ptr *p, const struct mon_event_text *ep) { int data_len, i; if ((data_len = ep->length) > 0) { if (ep->data_flag == 0) { p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, " ="); if (data_len >= DATA_MAX) data_len = DATA_MAX; for (i = 0; i < data_len; i++) { if (i % 4 == 0) { p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, " "); } p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, "%02x", ep->data[i]); } p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, "\n"); } else { p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, " %c\n", ep->data_flag); } } else { p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, "\n"); } }
0
[ "CWE-787" ]
linux
a5f596830e27e15f7a0ecd6be55e433d776986d8
278,609,282,494,044,600,000,000,000,000,000,000,000
31
usb: usbmon: Read text within supplied buffer size This change fixes buffer overflows and silent data corruption with the usbmon device driver text file read operations. Signed-off-by: Fredrik Noring <[email protected]> Signed-off-by: Pete Zaitcev <[email protected]> Cc: stable <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
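The usbmon function above accumulates output with `p->cnt += snprintf(p->pbuf + p->cnt, p->limit - p->cnt, ...)`. That idiom is subtle: snprintf returns the length it *wanted* to write, so `cnt` can pass `limit` and the next `limit - cnt` underflows to a huge size. A hedged sketch of a wrapper that closes that hole (my own struct and names, not usbmon's):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

struct textbuf { char *p; size_t cnt, limit; };

/* Safe accumulation: refuses to write once full, and clamps cnt after a
 * truncated vsnprintf so `limit - cnt` can never underflow on the next call. */
static void buf_append(struct textbuf *b, const char *fmt, ...)
{
    va_list ap;
    int n;

    if (b->cnt >= b->limit)
        return;                          /* already full: never underflow */
    va_start(ap, fmt);
    n = vsnprintf(b->p + b->cnt, b->limit - b->cnt, fmt, ap);
    va_end(ap);
    if (n < 0)
        return;
    b->cnt += (size_t)n;
    if (b->cnt > b->limit)
        b->cnt = b->limit;               /* clamp past-the-end "wanted" length */
}
```

With the clamp, a sequence of appends degrades gracefully into truncation instead of scribbling past the buffer, which is the behavior the usbmon fix restores.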
void enable_sched_clock_irqtime(void) { sched_clock_irqtime = 1; }
0
[ "CWE-703", "CWE-835" ]
linux
f26f9aff6aaf67e9a430d16c266f91b13a5bff64
32,587,360,432,012,805,000,000,000,000,000,000,000
4
Sched: fix skip_clock_update optimization idle_balance() drops/retakes rq->lock, leaving the previous task vulnerable to set_tsk_need_resched(). Clear it after we return from balancing instead, and in setup_thread_stack() as well, so no successfully descheduled or never scheduled task has it set. Need resched confused the skip_clock_update logic, which assumes that the next call to update_rq_clock() will come nearly immediately after being set. Make the optimization robust against the waking a sleeper before it sucessfully deschedules case by checking that the current task has not been dequeued before setting the flag, since it is that useless clock update we're trying to save, and clear unconditionally in schedule() proper instead of conditionally in put_prev_task(). Signed-off-by: Mike Galbraith <[email protected]> Reported-by: Bjoern B. Brandenburg <[email protected]> Tested-by: Yong Zhang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
get_bitmap_glyph(ASS_Renderer *render_priv, GlyphInfo *info) { if (!info->outline || info->symbol == '\n' || info->symbol == 0 || info->skip) return; BitmapHashValue *val; OutlineBitmapHashKey *key = &info->hash_key.u.outline; if (ass_cache_get(render_priv->cache.bitmap_cache, &info->hash_key, &val)) { info->image = val; return; } if (!val) return; ASS_Outline *outline = outline_copy(info->outline); ASS_Outline *border = outline_copy(info->border); // calculating rotation shift vector (from rotation origin to the glyph basepoint) FT_Vector shift = { key->shift_x, key->shift_y }; double scale_x = render_priv->font_scale_x; double fax_scaled = info->fax / info->scale_y * info->scale_x; double fay_scaled = info->fay / info->scale_x * info->scale_y; // apply rotation // use blur_scale because, like blurs, VSFilter forgets to scale this transform_3d(shift, outline, border, info->frx, info->fry, info->frz, fax_scaled, fay_scaled, render_priv->blur_scale, info->asc); // PAR correction scaling FT_Matrix m = { double_to_d16(scale_x), 0, 0, double_to_d16(1.0) }; // subpixel shift if (outline) { if (scale_x != 1.0) outline_transform(outline, &m); outline_translate(outline, key->advance.x, -key->advance.y); } if (border) { if (scale_x != 1.0) outline_transform(border, &m); outline_translate(border, key->advance.x, -key->advance.y); } // render glyph int error = outline_to_bitmap2(render_priv, outline, border, &val->bm, &val->bm_o); if (error) info->symbol = 0; ass_cache_commit(val, bitmap_size(val->bm) + bitmap_size(val->bm_o) + sizeof(BitmapHashKey) + sizeof(BitmapHashValue)); info->image = val; outline_free(outline); free(outline); outline_free(border); free(border); }
0
[ "CWE-125" ]
libass
f4f48950788b91c6a30029cc28a240b834713ea7
334,023,131,320,035,300,000,000,000,000,000,000,000
60
Fix line wrapping mode 0/3 bugs This fixes two separate bugs: a) Don't move a linebreak into the first symbol. This results in a empty line at the front, which does not help to equalize line lengths at all. Instead, merge line with the second one. b) When moving a linebreak into a symbol that already is a break, the number of lines must be decremented. Otherwise, uninitialized memory is possibly used for later layout operations. Found by fuzzer test case id:000085,sig:11,src:003377+003350,op:splice,rep:8. This might also affect and hopefully fix libass#229. v2: change semantics according to review
term_initialise() { FPRINTF((stderr, "term_initialise()\n")); if (!term) int_error(NO_CARET, "No terminal defined"); /* check if we have opened the output file in the wrong mode * (text/binary), if set term comes after set output * This was originally done in change_term, but that * resulted in output files being truncated */ if (outstr && (term->flags & TERM_NO_OUTPUTFILE)) { if (interactive) fprintf(stderr,"Closing %s\n",outstr); term_close_output(); } if (outstr && (((term->flags & TERM_BINARY) && !opened_binary) || ((!(term->flags & TERM_BINARY) && opened_binary)))) { /* this is nasty - we cannot just term_set_output(outstr) * since term_set_output will first free outstr and we * end up with an invalid pointer. I think I would * prefer to defer opening output file until first plot. */ char *temp = gp_alloc(strlen(outstr) + 1, "temp file string"); if (temp) { FPRINTF((stderr, "term_initialise: reopening \"%s\" as %s\n", outstr, term->flags & TERM_BINARY ? "binary" : "text")); strcpy(temp, outstr); term_set_output(temp); /* will free outstr */ if (temp != outstr) { if (temp) free(temp); temp = outstr; } } else fputs("Cannot reopen output file in binary", stderr); /* and carry on, hoping for the best ! */ } #if defined(MSDOS) || defined (_WIN32) || defined(OS2) # ifdef _WIN32 else if (!outstr && (term->flags & TERM_BINARY)) # else else if (!outstr && !interactive && (term->flags & TERM_BINARY)) # endif { #if defined(_WIN32) && !defined(WGP_CONSOLE) #ifdef PIPES if (!output_pipe_open) #endif if (outstr == NULL && !(term->flags & TERM_NO_OUTPUTFILE)) int_error(c_token, "cannot output binary data to wgnuplot text window"); #endif /* binary to stdout in non-interactive session... */ fflush(stdout); setmode(fileno(stdout), O_BINARY); } #endif if (!term_initialised || term_force_init) { FPRINTF((stderr, "- calling term->init()\n")); (*term->init) (); term_initialised = TRUE; #ifdef HAVE_LOCALE_H /* This is here only from an abundance of caution (a.k.a. paranoia). 
* Some terminals (wxt qt caca) are known to change the locale when * initialized. Others have been implicated (gd). Rather than trying * to catch all such offenders one by one, cover for all of them here. */ setlocale(LC_NUMERIC, "C"); #endif } }
0
[ "CWE-787" ]
gnuplot
963c7df3e0c5266efff260d0dff757dfe03d3632
155,488,710,479,536,420,000,000,000,000,000,000,000
76
Better error handling for faulty font syntax A missing close-quote in an enhanced text font specification could cause a segfault. Bug #2303
static int hid_debug_events_open(struct inode *inode, struct file *file) { int err = 0; struct hid_debug_list *list; unsigned long flags; if (!(list = kzalloc(sizeof(struct hid_debug_list), GFP_KERNEL))) { err = -ENOMEM; goto out; } if (!(list->hid_debug_buf = kzalloc(sizeof(char) * HID_DEBUG_BUFSIZE, GFP_KERNEL))) { err = -ENOMEM; kfree(list); goto out; } list->hdev = (struct hid_device *) inode->i_private; file->private_data = list; mutex_init(&list->read_mutex); spin_lock_irqsave(&list->hdev->debug_list_lock, flags); list_add_tail(&list->node, &list->hdev->debug_list); spin_unlock_irqrestore(&list->hdev->debug_list_lock, flags); out: return err; }
0
[ "CWE-835", "CWE-787" ]
linux
717adfdaf14704fd3ec7fa2c04520c0723247eac
70,414,457,699,156,000,000,000,000,000,000,000,000
27
HID: debug: check length before copy_to_user() If our length is greater than the size of the buffer, we overflow the buffer Cc: [email protected] Signed-off-by: Daniel Rosenberg <[email protected]> Reviewed-by: Benjamin Tissoires <[email protected]> Signed-off-by: Jiri Kosina <[email protected]>
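The HID debug fix checks the length before copy_to_user(): "If our length is greater than the size of the buffer, we overflow the buffer." Stripped of kernel machinery, the core is clamping the copied chunk to min(have, want); the sketch below models copy_to_user() with a plain memcpy for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy at most `want` bytes of the `have` available -- the clamp that the
 * HID debug fix adds before handing data to copy_to_user(). Returns the
 * number of bytes actually copied. */
static size_t bounded_copy(char *dst, size_t want, const char *src, size_t have)
{
    size_t n = have < want ? have : want;
    memcpy(dst, src, n);
    return n;
}
```

Returning the clamped count also lets a read() implementation report the correct short-read length to the caller.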
void dccp_close(struct sock *sk, long timeout) { struct dccp_sock *dp = dccp_sk(sk); struct sk_buff *skb; u32 data_was_unread = 0; int state; lock_sock(sk); sk->sk_shutdown = SHUTDOWN_MASK; if (sk->sk_state == DCCP_LISTEN) { dccp_set_state(sk, DCCP_CLOSED); /* Special case. */ inet_csk_listen_stop(sk); goto adjudge_to_death; } sk_stop_timer(sk, &dp->dccps_xmit_timer); /* * We need to flush the recv. buffs. We do this only on the * descriptor close, not protocol-sourced closes, because the *reader process may not have drained the data yet! */ while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) { data_was_unread += skb->len; __kfree_skb(skb); } if (data_was_unread) { /* Unread data was tossed, send an appropriate Reset Code */ DCCP_WARN("ABORT with %u bytes unread\n", data_was_unread); dccp_send_reset(sk, DCCP_RESET_CODE_ABORTED); dccp_set_state(sk, DCCP_CLOSED); } else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) { /* Check zero linger _after_ checking for unread data. */ sk->sk_prot->disconnect(sk, 0); } else if (sk->sk_state != DCCP_CLOSED) { /* * Normal connection termination. May need to wait if there are * still packets in the TX queue that are delayed by the CCID. */ dccp_flush_write_queue(sk, &timeout); dccp_terminate_connection(sk); } /* * Flush write queue. This may be necessary in several cases: * - we have been closed by the peer but still have application data; * - abortive termination (unread data or zero linger time), * - normal termination but queue could not be flushed within time limit */ __skb_queue_purge(&sk->sk_write_queue); sk_stream_wait_close(sk, timeout); adjudge_to_death: state = sk->sk_state; sock_hold(sk); sock_orphan(sk); /* * It is the last release_sock in its life. It will remove backlog. */ release_sock(sk); /* * Now socket is owned by kernel and we acquire BH lock * to finish close. No need to check for user refs. 
*/ local_bh_disable(); bh_lock_sock(sk); WARN_ON(sock_owned_by_user(sk)); percpu_counter_inc(sk->sk_prot->orphan_count); /* Have we already been destroyed by a softirq or backlog? */ if (state != DCCP_CLOSED && sk->sk_state == DCCP_CLOSED) goto out; if (sk->sk_state == DCCP_CLOSED) inet_csk_destroy_sock(sk); /* Otherwise, socket is reprieved until protocol close. */ out: bh_unlock_sock(sk); local_bh_enable(); sock_put(sk); }
0
[]
linux
7bced397510ab569d31de4c70b39e13355046387
332,744,672,263,191,620,000,000,000,000,000,000,000
92
net_dma: simple removal Per commit "77873803363c net_dma: mark broken" net_dma is no longer used and there is no plan to fix it. This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards. Reverting the remainder of the net_dma induced changes is deferred to subsequent patches. Marked for stable due to Roman's report of a memory leak in dma_pin_iovec_pages(): https://lkml.org/lkml/2014/9/3/177 Cc: Dave Jiang <[email protected]> Cc: Vinod Koul <[email protected]> Cc: David Whipple <[email protected]> Cc: Alexander Duyck <[email protected]> Cc: <[email protected]> Reported-by: Roman Gushchin <[email protected]> Acked-by: David S. Miller <[email protected]> Signed-off-by: Dan Williams <[email protected]>
static BOOL rdg_ntlm_init(rdpRdg* rdg, rdpTls* tls) { BOOL continueNeeded = FALSE; rdpContext* context = rdg->context; rdpSettings* settings = context->settings; rdg->ntlm = ntlm_new(); if (!rdg->ntlm) return FALSE; if (!rdg_get_gateway_credentials(context)) return FALSE; if (!ntlm_client_init(rdg->ntlm, TRUE, settings->GatewayUsername, settings->GatewayDomain, settings->GatewayPassword, tls->Bindings)) return FALSE; if (!ntlm_client_make_spn(rdg->ntlm, _T("HTTP"), settings->GatewayHostname)) return FALSE; if (!ntlm_authenticate(rdg->ntlm, &continueNeeded)) return FALSE; return continueNeeded; }
0
[ "CWE-125" ]
FreeRDP
6b485b146a1b9d6ce72dfd7b5f36456c166e7a16
232,453,736,369,827,200,000,000,000,000,000,000,000
25
Fixed oob read in irp_write and similar
xmlXPtrNewRangeNodeObject(xmlNodePtr start, xmlXPathObjectPtr end) { xmlNodePtr endNode; int endIndex; xmlXPathObjectPtr ret; if (start == NULL) return(NULL); if (end == NULL) return(NULL); switch (end->type) { case XPATH_POINT: endNode = end->user; endIndex = end->index; break; case XPATH_RANGE: endNode = end->user2; endIndex = end->index2; break; case XPATH_NODESET: /* * Empty set ... */ if (end->nodesetval->nodeNr <= 0) return(NULL); endNode = end->nodesetval->nodeTab[end->nodesetval->nodeNr - 1]; endIndex = -1; break; default: /* TODO */ return(NULL); } ret = xmlXPtrNewRangeInternal(start, -1, endNode, endIndex); xmlXPtrRangeCheckOrder(ret); return(ret); }
0
[ "CWE-119" ]
libxml2
c1d1f7121194036608bf555f08d3062a36fd344b
229,416,524,710,414,260,000,000,000,000,000,000,000
36
Disallow namespace nodes in XPointer ranges Namespace nodes must be copied to avoid use-after-free errors. But they don't necessarily have a physical representation in a document, so simply disallow them in XPointer ranges. Found with afl-fuzz. Fixes CVE-2016-4658.
GF_Err sdp_box_size(GF_Box *s) { GF_SDPBox *ptr = (GF_SDPBox *)s; //don't count the NULL char!!! if (ptr->sdpText) ptr->size += strlen(ptr->sdpText); return GF_OK; }
0
[ "CWE-787" ]
gpac
388ecce75d05e11fc8496aa4857b91245007d26e
238,578,199,431,772,760,000,000,000,000,000,000,000
8
fixed #1587
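The sdp_box_size() comment above ("don't count the NULL char!!!") highlights a recurring off-by-one hazard: the serialized (on-wire) length of a C string is strlen(), while any in-memory copy must allocate strlen() + 1 for the terminator. A tiny sketch keeping the two notions apart (helper names are mine, not GPAC's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* On-wire payload length: the terminator is not serialized. */
static size_t wire_size(const char *text)
{
    return text ? strlen(text) : 0;
}

/* In-memory storage length: must include room for the terminator. */
static size_t alloc_size(const char *text)
{
    return text ? strlen(text) + 1 : 0;
}
```

Mixing the two up in either direction yields the CWE-787 one-byte overflow (allocating wire_size) or a one-byte-too-long box size (serializing alloc_size).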
static int ebb_set(struct task_struct *target, const struct user_regset *regset, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { int ret = 0; /* Build tests */ BUILD_BUG_ON(TSO(ebbrr) + sizeof(unsigned long) != TSO(ebbhr)); BUILD_BUG_ON(TSO(ebbhr) + sizeof(unsigned long) != TSO(bescr)); if (!cpu_has_feature(CPU_FTR_ARCH_207S)) return -ENODEV; if (target->thread.used_ebb) return -ENODATA; ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &target->thread.ebbrr, 0, sizeof(unsigned long)); if (!ret) ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &target->thread.ebbhr, sizeof(unsigned long), 2 * sizeof(unsigned long)); if (!ret) ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &target->thread.bescr, 2 * sizeof(unsigned long), 3 * sizeof(unsigned long)); return ret; }
0
[ "CWE-119", "CWE-787" ]
linux
c1fa0768a8713b135848f78fd43ffc208d8ded70
262,112,258,709,268,950,000,000,000,000,000,000,000
32
powerpc/tm: Flush TM only if CPU has TM feature Commit cd63f3c ("powerpc/tm: Fix saving of TM SPRs in core dump") added code to access TM SPRs in flush_tmregs_to_thread(). However flush_tmregs_to_thread() does not check if TM feature is available on CPU before trying to access TM SPRs in order to copy live state to thread structures. flush_tmregs_to_thread() is indeed guarded by CONFIG_PPC_TRANSACTIONAL_MEM but it might be the case that kernel was compiled with CONFIG_PPC_TRANSACTIONAL_MEM enabled and ran on a CPU without TM feature available, thus rendering the execution of TM instructions that are treated by the CPU as illegal instructions. The fix is just to add proper checking in flush_tmregs_to_thread() if CPU has the TM feature before accessing any TM-specific resource, returning immediately if TM is no available on the CPU. Adding that checking in flush_tmregs_to_thread() instead of in places where it is called, like in vsr_get() and vsr_set(), is better because avoids the same problem cropping up elsewhere. Cc: [email protected] # v4.13+ Fixes: cd63f3c ("powerpc/tm: Fix saving of TM SPRs in core dump") Signed-off-by: Gustavo Romero <[email protected]> Reviewed-by: Cyril Bur <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
const Type_handler *type_handler() const { return &type_handler_enum; }
0
[ "CWE-416", "CWE-703" ]
server
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
305,124,507,112,944,300,000,000,000,000,000,000,000
1
MDEV-24176 Server crashes after insert in the table with virtual column generated using date_format() and if() vcol_info->expr is allocated on expr_arena at parsing stage. Since expr item is allocated on expr_arena all its containee items must be allocated on expr_arena too. Otherwise fix_session_expr() will encounter prematurely freed item. When table is reopened from cache vcol_info contains stale expression. We refresh expression via TABLE::vcol_fix_exprs() but first we must prepare a proper context (Vcol_expr_context) which meets some requirements: 1. As noted above expr update must be done on expr_arena as there may be new items created. It was a bug in fix_session_expr_for_read() and was just not reproduced because of no second refix. Now refix is done for more cases so it does reproduce. Tests affected: vcol.binlog 2. Also name resolution context must be narrowed to the single table. Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes 3. sql_mode must be clean and not fail expr update. sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc must not affect vcol expression update. If the table was created successfully any further evaluation must not fail. Tests affected: main.func_like Reviewed by: Sergei Golubchik <[email protected]>
void HRangeAnalysis::InferControlFlowRange(HCompareIDAndBranch* test, HBasicBlock* dest) { ASSERT((test->FirstSuccessor() == dest) == (test->SecondSuccessor() != dest)); if (test->GetInputRepresentation().IsInteger32()) { Token::Value op = test->token(); if (test->SecondSuccessor() == dest) { op = Token::NegateCompareOp(op); } Token::Value inverted_op = Token::InvertCompareOp(op); UpdateControlFlowRange(op, test->left(), test->right()); UpdateControlFlowRange(inverted_op, test->right(), test->left()); } }
0
[]
node
fd80a31e0697d6317ce8c2d289575399f4e06d21
295,056,694,438,038,980,000,000,000,000,000,000,000
13
deps: backport 5f836c from v8 upstream Original commit message: Fix Hydrogen bounds check elimination When combining bounds checks, they must all be moved before the first load/store that they are guarding. BUG=chromium:344186 LOG=y [email protected] Review URL: https://codereview.chromium.org/172093002 git-svn-id: https://v8.googlecode.com/svn/branches/bleeding_edge@19475 ce2b1a6d-e550-0410-aec6-3dcde31c8c00 fix #8070
static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, struct bpf_insn *insn, const struct bpf_reg_state *ptr_reg, const struct bpf_reg_state *off_reg) { struct bpf_verifier_state *vstate = env->cur_state; struct bpf_func_state *state = vstate->frame[vstate->curframe]; struct bpf_reg_state *regs = state->regs, *dst_reg; bool known = tnum_is_const(off_reg->var_off); s64 smin_val = off_reg->smin_value, smax_val = off_reg->smax_value, smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value; u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value, umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value; u8 opcode = BPF_OP(insn->code); u32 dst = insn->dst_reg; dst_reg = &regs[dst]; if ((known && (smin_val != smax_val || umin_val != umax_val)) || smin_val > smax_val || umin_val > umax_val) { /* Taint dst register if offset had invalid bounds derived from * e.g. dead branches. */ __mark_reg_unknown(dst_reg); return 0; } if (BPF_CLASS(insn->code) != BPF_ALU64) { /* 32-bit ALU ops on pointers produce (meaningless) scalars */ verbose(env, "R%d 32-bit pointer arithmetic prohibited\n", dst); return -EACCES; } if (ptr_reg->type == PTR_TO_MAP_VALUE_OR_NULL) { verbose(env, "R%d pointer arithmetic on PTR_TO_MAP_VALUE_OR_NULL prohibited, null-check it first\n", dst); return -EACCES; } if (ptr_reg->type == CONST_PTR_TO_MAP) { verbose(env, "R%d pointer arithmetic on CONST_PTR_TO_MAP prohibited\n", dst); return -EACCES; } if (ptr_reg->type == PTR_TO_PACKET_END) { verbose(env, "R%d pointer arithmetic on PTR_TO_PACKET_END prohibited\n", dst); return -EACCES; } /* In case of 'scalar += pointer', dst_reg inherits pointer type and id. * The id may be overwritten later if we create a new variable offset. 
*/ dst_reg->type = ptr_reg->type; dst_reg->id = ptr_reg->id; if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) || !check_reg_sane_offset(env, ptr_reg, ptr_reg->type)) return -EINVAL; switch (opcode) { case BPF_ADD: /* We can take a fixed offset as long as it doesn't overflow * the s32 'off' field */ if (known && (ptr_reg->off + smin_val == (s64)(s32)(ptr_reg->off + smin_val))) { /* pointer += K. Accumulate it into fixed offset */ dst_reg->smin_value = smin_ptr; dst_reg->smax_value = smax_ptr; dst_reg->umin_value = umin_ptr; dst_reg->umax_value = umax_ptr; dst_reg->var_off = ptr_reg->var_off; dst_reg->off = ptr_reg->off + smin_val; dst_reg->range = ptr_reg->range; break; } /* A new variable offset is created. Note that off_reg->off * == 0, since it's a scalar. * dst_reg gets the pointer type and since some positive * integer value was added to the pointer, give it a new 'id' * if it's a PTR_TO_PACKET. * this creates a new 'base' pointer, off_reg (variable) gets * added into the variable offset, and we copy the fixed offset * from ptr_reg. */ if (signed_add_overflows(smin_ptr, smin_val) || signed_add_overflows(smax_ptr, smax_val)) { dst_reg->smin_value = S64_MIN; dst_reg->smax_value = S64_MAX; } else { dst_reg->smin_value = smin_ptr + smin_val; dst_reg->smax_value = smax_ptr + smax_val; } if (umin_ptr + umin_val < umin_ptr || umax_ptr + umax_val < umax_ptr) { dst_reg->umin_value = 0; dst_reg->umax_value = U64_MAX; } else { dst_reg->umin_value = umin_ptr + umin_val; dst_reg->umax_value = umax_ptr + umax_val; } dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off); dst_reg->off = ptr_reg->off; if (reg_is_pkt_pointer(ptr_reg)) { dst_reg->id = ++env->id_gen; /* something was added to pkt_ptr, set range to zero */ dst_reg->range = 0; } break; case BPF_SUB: if (dst_reg == off_reg) { /* scalar -= pointer. 
Creates an unknown scalar */ verbose(env, "R%d tried to subtract pointer from scalar\n", dst); return -EACCES; } /* We don't allow subtraction from FP, because (according to * test_verifier.c test "invalid fp arithmetic", JITs might not * be able to deal with it. */ if (ptr_reg->type == PTR_TO_STACK) { verbose(env, "R%d subtraction from stack pointer prohibited\n", dst); return -EACCES; } if (known && (ptr_reg->off - smin_val == (s64)(s32)(ptr_reg->off - smin_val))) { /* pointer -= K. Subtract it from fixed offset */ dst_reg->smin_value = smin_ptr; dst_reg->smax_value = smax_ptr; dst_reg->umin_value = umin_ptr; dst_reg->umax_value = umax_ptr; dst_reg->var_off = ptr_reg->var_off; dst_reg->id = ptr_reg->id; dst_reg->off = ptr_reg->off - smin_val; dst_reg->range = ptr_reg->range; break; } /* A new variable offset is created. If the subtrahend is known * nonnegative, then any reg->range we had before is still good. */ if (signed_sub_overflows(smin_ptr, smax_val) || signed_sub_overflows(smax_ptr, smin_val)) { /* Overflow possible, we know nothing */ dst_reg->smin_value = S64_MIN; dst_reg->smax_value = S64_MAX; } else { dst_reg->smin_value = smin_ptr - smax_val; dst_reg->smax_value = smax_ptr - smin_val; } if (umin_ptr < umax_val) { /* Overflow possible, we know nothing */ dst_reg->umin_value = 0; dst_reg->umax_value = U64_MAX; } else { /* Cannot overflow (as long as bounds are consistent) */ dst_reg->umin_value = umin_ptr - umax_val; dst_reg->umax_value = umax_ptr - umin_val; } dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off); dst_reg->off = ptr_reg->off; if (reg_is_pkt_pointer(ptr_reg)) { dst_reg->id = ++env->id_gen; /* something was added to pkt_ptr, set range to zero */ if (smin_val < 0) dst_reg->range = 0; } break; case BPF_AND: case BPF_OR: case BPF_XOR: /* bitwise ops on pointers are troublesome, prohibit. 
*/ verbose(env, "R%d bitwise operator %s on pointer prohibited\n", dst, bpf_alu_string[opcode >> 4]); return -EACCES; default: /* other operators (e.g. MUL,LSH) produce non-pointer results */ verbose(env, "R%d pointer arithmetic with %s operator prohibited\n", dst, bpf_alu_string[opcode >> 4]); return -EACCES; } if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type)) return -EINVAL; __update_reg_bounds(dst_reg); __reg_deduce_bounds(dst_reg); __reg_bound_offset(dst_reg); return 0; }
0
[ "CWE-125" ]
linux
b799207e1e1816b09e7a5920fbb2d5fcf6edd681
308,805,106,951,555,240,000,000,000,000,000,000,000
192
bpf: 32-bit RSH verification must truncate input before the ALU op When I wrote commit 468f6eafa6c4 ("bpf: fix 32-bit ALU op verification"), I assumed that, in order to emulate 64-bit arithmetic with 32-bit logic, it is sufficient to just truncate the output to 32 bits; and so I just moved the register size coercion that used to be at the start of the function to the end of the function. That assumption is true for almost every op, but not for 32-bit right shifts, because those can propagate information towards the least significant bit. Fix it by always truncating inputs for 32-bit ops to 32 bits. Also get rid of the coerce_reg_to_size() after the ALU op, since that has no effect. Fixes: 468f6eafa6c4 ("bpf: fix 32-bit ALU op verification") Acked-by: Daniel Borkmann <[email protected]> Signed-off-by: Jann Horn <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
static void detect_partial_match(compiler_common *common, jump_list **backtracks) { DEFINE_COMPILER; struct sljit_jump *jump; if (common->mode == PCRE2_JIT_COMPLETE) { add_jump(compiler, backtracks, CMP(SLJIT_GREATER_EQUAL, STR_PTR, 0, STR_END, 0)); return; } /* Partial matching mode. */ jump = CMP(SLJIT_LESS, STR_PTR, 0, STR_END, 0); add_jump(compiler, backtracks, CMP(SLJIT_GREATER_EQUAL, SLJIT_MEM1(SLJIT_SP), common->start_used_ptr, STR_PTR, 0)); if (common->mode == PCRE2_JIT_PARTIAL_SOFT) { OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), common->hit_start, SLJIT_IMM, 0); add_jump(compiler, backtracks, JUMP(SLJIT_JUMP)); } else { if (common->partialmatchlabel != NULL) JUMPTO(SLJIT_JUMP, common->partialmatchlabel); else add_jump(compiler, &common->partialmatch, JUMP(SLJIT_JUMP)); } JUMPHERE(jump); }
0
[ "CWE-125" ]
php-src
8947fd9e9fdce87cd6c59817b1db58e789538fe9
80,710,716,307,458,320,000,000,000,000,000,000,000
28
Fix #78338: Array cross-border reading in PCRE We backport r1092 from pcre2.
virNodeDeviceGetPCIIOMMUGroupCaps(virNodeDevCapPCIDevPtr pci_dev) { size_t i; int tmpGroup; virPCIDeviceAddress addr; /* this could be a refresh, so clear out the old data */ for (i = 0; i < pci_dev->nIommuGroupDevices; i++) VIR_FREE(pci_dev->iommuGroupDevices[i]); VIR_FREE(pci_dev->iommuGroupDevices); pci_dev->nIommuGroupDevices = 0; pci_dev->iommuGroupNumber = 0; addr.domain = pci_dev->domain; addr.bus = pci_dev->bus; addr.slot = pci_dev->slot; addr.function = pci_dev->function; tmpGroup = virPCIDeviceAddressGetIOMMUGroupNum(&addr); if (tmpGroup == -1) { /* error was already reported */ return -1; } if (tmpGroup == -2) /* -2 return means there is no iommu_group data */ return 0; if (tmpGroup >= 0) { if (virPCIDeviceAddressGetIOMMUGroupAddresses(&addr, &pci_dev->iommuGroupDevices, &pci_dev->nIommuGroupDevices) < 0) return -1; pci_dev->iommuGroupNumber = tmpGroup; } return 0; }
0
[ "CWE-119" ]
libvirt
4c4d0e2da07b5a035b26a0ff13ec27070f7c7b1a
213,371,801,175,747,770,000,000,000,000,000,000,000
34
conf: Fix segfault when parsing mdev types Commit f1b0890 introduced a potential crash due to incorrect operator precedence when accessing an element from a pointer to an array. Backtrace below: #0 virNodeDeviceGetMdevTypesCaps (sysfspath=0x7fff801661e0 "/sys/devices/pci0000:00/0000:00:02.0", mdev_types=0x7fff801c9b40, nmdev_types=0x7fff801c9b48) at ../src/conf/node_device_conf.c:2676 #1 0x00007ffff7caf53d in virNodeDeviceGetPCIDynamicCaps (sysfsPath=0x7fff801661e0 "/sys/devices/pci0000:00/0000:00:02.0", pci_dev=0x7fff801c9ac8) at ../src/conf/node_device_conf.c:2705 #2 0x00007ffff7cae38f in virNodeDeviceUpdateCaps (def=0x7fff80168a10) at ../src/conf/node_device_conf.c:2342 #3 0x00007ffff7cb11c0 in virNodeDeviceObjMatch (obj=0x7fff84002e50, flags=0) at ../src/conf/virnodedeviceobj.c:850 #4 0x00007ffff7cb153d in virNodeDeviceObjListExportCallback (payload=0x7fff84002e50, name=0x7fff801cbc20 "pci_0000_00_02_0", opaque=0x7fffe2ffc6a0) at ../src/conf/virnodedeviceobj.c:909 #5 0x00007ffff7b69146 in virHashForEach (table=0x7fff9814b700 = {...}, iter=0x7ffff7cb149e <virNodeDeviceObjListExportCallback>, opaque=0x7fffe2ffc6a0) at ../src/util/virhash.c:394 #6 0x00007ffff7cb1694 in virNodeDeviceObjListExport (conn=0x7fff98013170, devs=0x7fff98154430, devices=0x7fffe2ffc798, filter=0x7ffff7cf47a1 <virConnectListAllNodeDevicesCheckACL>, flags=0) at ../src/conf/virnodedeviceobj.c:943 #7 0x00007fffe00694b2 in nodeConnectListAllNodeDevices (conn=0x7fff98013170, devices=0x7fffe2ffc798, flags=0) at ../src/node_device/node_device_driver.c:228 #8 0x00007ffff7e703aa in virConnectListAllNodeDevices (conn=0x7fff98013170, devices=0x7fffe2ffc798, flags=0) at ../src/libvirt-nodedev.c:130 #9 0x000055555557f796 in remoteDispatchConnectListAllNodeDevices (server=0x555555627080, client=0x5555556bf050, msg=0x5555556c0000, rerr=0x7fffe2ffc8a0, args=0x7fffd4008470, ret=0x7fffd40084e0) at src/remote/remote_daemon_dispatch_stubs.h:1613 #10 0x000055555557f6f9 in 
remoteDispatchConnectListAllNodeDevicesHelper (server=0x555555627080, client=0x5555556bf050, msg=0x5555556c0000, rerr=0x7fffe2ffc8a0, args=0x7fffd4008470, ret=0x7fffd40084e0) at src/remote/remote_daemon_dispatch_stubs.h:1591 #11 0x00007ffff7ce9542 in virNetServerProgramDispatchCall (prog=0x555555690c10, server=0x555555627080, client=0x5555556bf050, msg=0x5555556c0000) at ../src/rpc/virnetserverprogram.c:428 #12 0x00007ffff7ce90bd in virNetServerProgramDispatch (prog=0x555555690c10, server=0x555555627080, client=0x5555556bf050, msg=0x5555556c0000) at ../src/rpc/virnetserverprogram.c:302 #13 0x00007ffff7cf042b in virNetServerProcessMsg (srv=0x555555627080, client=0x5555556bf050, prog=0x555555690c10, msg=0x5555556c0000) at ../src/rpc/virnetserver.c:137 #14 0x00007ffff7cf04eb in virNetServerHandleJob (jobOpaque=0x5555556b66b0, opaque=0x555555627080) at ../src/rpc/virnetserver.c:154 #15 0x00007ffff7bd912f in virThreadPoolWorker (opaque=0x55555562bc70) at ../src/util/virthreadpool.c:163 #16 0x00007ffff7bd8645 in virThreadHelper (data=0x55555562bc90) at ../src/util/virthread.c:233 #17 0x00007ffff6d90432 in start_thread () at /lib64/libpthread.so.0 #18 0x00007ffff75c5913 in clone () at /lib64/libc.so.6 Signed-off-by: Jonathon Jongsma <[email protected]> Reviewed-by: Ján Tomko <[email protected]> Signed-off-by: Ján Tomko <[email protected]>
decrypt_tkt (krb5_context context, krb5_keyblock *key, krb5_key_usage usage, krb5_const_pointer decrypt_arg, krb5_kdc_rep *dec_rep) { krb5_error_code ret; krb5_data data; size_t size; krb5_crypto crypto; ret = krb5_crypto_init(context, key, 0, &crypto); if (ret) return ret; ret = krb5_decrypt_EncryptedData (context, crypto, usage, &dec_rep->kdc_rep.enc_part, &data); krb5_crypto_destroy(context, crypto); if (ret) return ret; ret = decode_EncASRepPart(data.data, data.length, &dec_rep->enc_part, &size); if (ret) ret = decode_EncTGSRepPart(data.data, data.length, &dec_rep->enc_part, &size); krb5_data_free (&data); if (ret) { krb5_set_error_message(context, ret, N_("Failed to decode encpart in ticket", "")); return ret; } return 0; }
0
[ "CWE-345" ]
heimdal
6dd3eb836bbb80a00ffced4ad57077a1cdf227ea
338,186,112,001,309,860,000,000,000,000,000,000,000
42
CVE-2017-11103: Orpheus' Lyre KDC-REP service name validation In _krb5_extract_ticket() the KDC-REP service name must be obtained from encrypted version stored in 'enc_part' instead of the unencrypted version stored in 'ticket'. Use of the unencrypted version provides an opportunity for successful server impersonation and other attacks. Identified by Jeffrey Altman, Viktor Duchovni and Nico Williams. Change-Id: I45ef61e8a46e0f6588d64b5bd572a24c7432547c
static void encrypted_destroy(struct key *key) { struct encrypted_key_payload *epayload = key->payload.data[0]; if (!epayload) return; memzero_explicit(epayload->decrypted_data, epayload->decrypted_datalen); kfree(key->payload.data[0]); }
0
[ "CWE-125" ]
linux
794b4bc292f5d31739d89c0202c54e7dc9bc3add
225,425,673,720,928,540,000,000,000,000,000,000,000
10
KEYS: encrypted: fix buffer overread in valid_master_desc() With the 'encrypted' key type it was possible for userspace to provide a data blob ending with a master key description shorter than expected, e.g. 'keyctl add encrypted desc "new x" @s'. When validating such a master key description, validate_master_desc() could read beyond the end of the buffer. Fix this by using strncmp() instead of memcmp(). [Also clean up the code to deduplicate some logic.] Cc: Mimi Zohar <[email protected]> Signed-off-by: Eric Biggers <[email protected]> Signed-off-by: David Howells <[email protected]> Signed-off-by: James Morris <[email protected]>
bool Item_field::update_table_bitmaps_processor(uchar *arg) { update_table_bitmaps(); return FALSE; }
0
[]
server
b000e169562697aa072600695d4f0c0412f94f4f
13,857,870,315,759,622,000,000,000,000,000,000,000
5
Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL)) based on: commit f7316aa0c9a Author: Ajo Robert <[email protected]> Date: Thu Aug 24 17:03:21 2017 +0530 Bug#26361149 MYSQL SERVER CRASHES AT: COL IN(IFNULL(CONST, COL), NAME_CONST('NAME', NULL)) Backport of Bug#19143243 fix. NAME_CONST item can return NULL_ITEM type in case of incorrect arguments. NULL_ITEM has special processing in Item_func_in function. In Item_func_in::fix_length_and_dec an array of possible comparators is created. Since NAME_CONST function has NULL_ITEM type, corresponding array element is empty. Then NAME_CONST is wrapped to ITEM_CACHE. ITEM_CACHE can not return proper type(NULL_ITEM) in Item_func_in::val_int(), so the NULL_ITEM is attempted compared with an empty comparator. The fix is to disable the caching of Item_name_const item.
static int dvb_frontend_get_event(struct dvb_frontend *fe, struct dvb_frontend_event *event, int flags) { struct dvb_frontend_private *fepriv = fe->frontend_priv; struct dvb_fe_events *events = &fepriv->events; dev_dbg(fe->dvb->device, "%s:\n", __func__); if (events->overflow) { events->overflow = 0; return -EOVERFLOW; } if (events->eventw == events->eventr) { int ret; if (flags & O_NONBLOCK) return -EWOULDBLOCK; up(&fepriv->sem); ret = wait_event_interruptible (events->wait_queue, events->eventw != events->eventr); if (down_interruptible (&fepriv->sem)) return -ERESTARTSYS; if (ret < 0) return ret; } mutex_lock(&events->mtx); *event = events->events[events->eventr]; events->eventr = (events->eventr + 1) % MAX_EVENT; mutex_unlock(&events->mtx); return 0; }
0
[ "CWE-416" ]
linux
b1cb7372fa822af6c06c8045963571d13ad6348b
10,105,094,822,488,774,000,000,000,000,000,000,000
38
dvb_frontend: don't use-after-free the frontend struct dvb_frontend_invoke_release() may free the frontend struct. So, the free logic can't update it anymore after calling it. That's OK, as __dvb_frontend_free() is called only when the krefs are zeroed, so nobody is using it anymore. That should fix the following KASAN error: The KASAN report looks like this (running on kernel 3e0cc09a3a2c40ec1ffb6b4e12da86e98feccb11 (4.14-rc5+)): ================================================================== BUG: KASAN: use-after-free in __dvb_frontend_free+0x113/0x120 Write of size 8 at addr ffff880067d45a00 by task kworker/0:1/24 CPU: 0 PID: 24 Comm: kworker/0:1 Not tainted 4.14.0-rc5-43687-g06ab8a23e0e6 #545 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Workqueue: usb_hub_wq hub_event Call Trace: __dump_stack lib/dump_stack.c:16 dump_stack+0x292/0x395 lib/dump_stack.c:52 print_address_description+0x78/0x280 mm/kasan/report.c:252 kasan_report_error mm/kasan/report.c:351 kasan_report+0x23d/0x350 mm/kasan/report.c:409 __asan_report_store8_noabort+0x1c/0x20 mm/kasan/report.c:435 __dvb_frontend_free+0x113/0x120 drivers/media/dvb-core/dvb_frontend.c:156 dvb_frontend_put+0x59/0x70 drivers/media/dvb-core/dvb_frontend.c:176 dvb_frontend_detach+0x120/0x150 drivers/media/dvb-core/dvb_frontend.c:2803 dvb_usb_adapter_frontend_exit+0xd6/0x160 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:340 dvb_usb_adapter_exit drivers/media/usb/dvb-usb/dvb-usb-init.c:116 dvb_usb_exit+0x9b/0x200 drivers/media/usb/dvb-usb/dvb-usb-init.c:132 dvb_usb_device_exit+0xa5/0xf0 drivers/media/usb/dvb-usb/dvb-usb-init.c:295 usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423 __device_release_driver drivers/base/dd.c:861 device_release_driver_internal+0x4f1/0x5c0 drivers/base/dd.c:893 device_release_driver+0x1e/0x30 drivers/base/dd.c:918 bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565 device_del+0x5c4/0xab0 drivers/base/core.c:1985 usb_disable_device+0x1e9/0x680 
drivers/usb/core/message.c:1170 usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124 hub_port_connect drivers/usb/core/hub.c:4754 hub_port_connect_change drivers/usb/core/hub.c:5009 port_event drivers/usb/core/hub.c:5115 hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195 process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119 worker_thread+0x221/0x1850 kernel/workqueue.c:2253 kthread+0x363/0x440 kernel/kthread.c:231 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431 Allocated by task 24: save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59 save_stack+0x43/0xd0 mm/kasan/kasan.c:447 set_track mm/kasan/kasan.c:459 kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:551 kmem_cache_alloc_trace+0x11e/0x2d0 mm/slub.c:2772 kmalloc ./include/linux/slab.h:493 kzalloc ./include/linux/slab.h:666 dtt200u_fe_attach+0x4c/0x110 drivers/media/usb/dvb-usb/dtt200u-fe.c:212 dtt200u_frontend_attach+0x35/0x80 drivers/media/usb/dvb-usb/dtt200u.c:136 dvb_usb_adapter_frontend_init+0x32b/0x660 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:286 dvb_usb_adapter_init drivers/media/usb/dvb-usb/dvb-usb-init.c:86 dvb_usb_init drivers/media/usb/dvb-usb/dvb-usb-init.c:162 dvb_usb_device_init+0xf73/0x17f0 drivers/media/usb/dvb-usb/dvb-usb-init.c:277 dtt200u_usb_probe+0xa1/0xe0 drivers/media/usb/dvb-usb/dtt200u.c:155 usb_probe_interface+0x35d/0x8e0 drivers/usb/core/driver.c:361 really_probe drivers/base/dd.c:413 driver_probe_device+0x610/0xa00 drivers/base/dd.c:557 __device_attach_driver+0x230/0x290 drivers/base/dd.c:653 bus_for_each_drv+0x161/0x210 drivers/base/bus.c:463 __device_attach+0x26b/0x3c0 drivers/base/dd.c:710 device_initial_probe+0x1f/0x30 drivers/base/dd.c:757 bus_probe_device+0x1eb/0x290 drivers/base/bus.c:523 device_add+0xd0b/0x1660 drivers/base/core.c:1835 usb_set_configuration+0x104e/0x1870 drivers/usb/core/message.c:1932 generic_probe+0x73/0xe0 drivers/usb/core/generic.c:174 usb_probe_device+0xaf/0xe0 drivers/usb/core/driver.c:266 really_probe drivers/base/dd.c:413 
driver_probe_device+0x610/0xa00 drivers/base/dd.c:557 __device_attach_driver+0x230/0x290 drivers/base/dd.c:653 bus_for_each_drv+0x161/0x210 drivers/base/bus.c:463 __device_attach+0x26b/0x3c0 drivers/base/dd.c:710 device_initial_probe+0x1f/0x30 drivers/base/dd.c:757 bus_probe_device+0x1eb/0x290 drivers/base/bus.c:523 device_add+0xd0b/0x1660 drivers/base/core.c:1835 usb_new_device+0x7b8/0x1020 drivers/usb/core/hub.c:2457 hub_port_connect drivers/usb/core/hub.c:4903 hub_port_connect_change drivers/usb/core/hub.c:5009 port_event drivers/usb/core/hub.c:5115 hub_event+0x194d/0x3740 drivers/usb/core/hub.c:5195 process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119 worker_thread+0x221/0x1850 kernel/workqueue.c:2253 kthread+0x363/0x440 kernel/kthread.c:231 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431 Freed by task 24: save_stack_trace+0x1b/0x20 arch/x86/kernel/stacktrace.c:59 save_stack+0x43/0xd0 mm/kasan/kasan.c:447 set_track mm/kasan/kasan.c:459 kasan_slab_free+0x72/0xc0 mm/kasan/kasan.c:524 slab_free_hook mm/slub.c:1390 slab_free_freelist_hook mm/slub.c:1412 slab_free mm/slub.c:2988 kfree+0xf6/0x2f0 mm/slub.c:3919 dtt200u_fe_release+0x3c/0x50 drivers/media/usb/dvb-usb/dtt200u-fe.c:202 dvb_frontend_invoke_release.part.13+0x1c/0x30 drivers/media/dvb-core/dvb_frontend.c:2790 dvb_frontend_invoke_release drivers/media/dvb-core/dvb_frontend.c:2789 __dvb_frontend_free+0xad/0x120 drivers/media/dvb-core/dvb_frontend.c:153 dvb_frontend_put+0x59/0x70 drivers/media/dvb-core/dvb_frontend.c:176 dvb_frontend_detach+0x120/0x150 drivers/media/dvb-core/dvb_frontend.c:2803 dvb_usb_adapter_frontend_exit+0xd6/0x160 drivers/media/usb/dvb-usb/dvb-usb-dvb.c:340 dvb_usb_adapter_exit drivers/media/usb/dvb-usb/dvb-usb-init.c:116 dvb_usb_exit+0x9b/0x200 drivers/media/usb/dvb-usb/dvb-usb-init.c:132 dvb_usb_device_exit+0xa5/0xf0 drivers/media/usb/dvb-usb/dvb-usb-init.c:295 usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423 __device_release_driver drivers/base/dd.c:861 
device_release_driver_internal+0x4f1/0x5c0 drivers/base/dd.c:893 device_release_driver+0x1e/0x30 drivers/base/dd.c:918 bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565 device_del+0x5c4/0xab0 drivers/base/core.c:1985 usb_disable_device+0x1e9/0x680 drivers/usb/core/message.c:1170 usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124 hub_port_connect drivers/usb/core/hub.c:4754 hub_port_connect_change drivers/usb/core/hub.c:5009 port_event drivers/usb/core/hub.c:5115 hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195 process_one_work+0xc73/0x1d90 kernel/workqueue.c:2119 worker_thread+0x221/0x1850 kernel/workqueue.c:2253 kthread+0x363/0x440 kernel/kthread.c:231 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431 The buggy address belongs to the object at ffff880067d45500 which belongs to the cache kmalloc-2048 of size 2048 The buggy address is located 1280 bytes inside of 2048-byte region [ffff880067d45500, ffff880067d45d00) The buggy address belongs to the page: page:ffffea00019f5000 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0 flags: 0x100000000008100(slab|head) raw: 0100000000008100 0000000000000000 0000000000000000 00000001000f000f raw: dead000000000100 dead000000000200 ffff88006c002d80 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff880067d45900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff880067d45980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff880067d45a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff880067d45a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff880067d45b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ================================================================== Fixes: ead666000a5f ("media: dvb_frontend: only use kref after initialized") Reported-by: Andrey Konovalov <[email protected]> Suggested-by: Matthias Schwarzott <[email protected]> Tested-by: Andrey Konovalov <[email protected]> Signed-off-by: Mauro Carvalho 
Chehab <[email protected]>
CImg<Tfloat> get_HSItoRGB() const { return CImg< Tuchar>(*this,false).HSItoRGB(); }
0
[ "CWE-770" ]
cimg
619cb58dd90b4e03ac68286c70ed98acbefd1c90
45,240,721,952,459,380,000,000,000,000,000,000,000
3
CImg<>::load_bmp() and CImg<>::load_pandore(): Check that dimensions encoded in file do not exceed file size.
static int init_blkz_info(struct f2fs_sb_info *sbi, int devi) { struct block_device *bdev = FDEV(devi).bdev; sector_t nr_sectors = bdev->bd_part->nr_sects; sector_t sector = 0; struct blk_zone *zones; unsigned int i, nr_zones; unsigned int n = 0; int err = -EIO; if (!f2fs_sb_mounted_blkzoned(sbi->sb)) return 0; if (sbi->blocks_per_blkz && sbi->blocks_per_blkz != SECTOR_TO_BLOCK(bdev_zone_sectors(bdev))) return -EINVAL; sbi->blocks_per_blkz = SECTOR_TO_BLOCK(bdev_zone_sectors(bdev)); if (sbi->log_blocks_per_blkz && sbi->log_blocks_per_blkz != __ilog2_u32(sbi->blocks_per_blkz)) return -EINVAL; sbi->log_blocks_per_blkz = __ilog2_u32(sbi->blocks_per_blkz); FDEV(devi).nr_blkz = SECTOR_TO_BLOCK(nr_sectors) >> sbi->log_blocks_per_blkz; if (nr_sectors & (bdev_zone_sectors(bdev) - 1)) FDEV(devi).nr_blkz++; FDEV(devi).blkz_type = kmalloc(FDEV(devi).nr_blkz, GFP_KERNEL); if (!FDEV(devi).blkz_type) return -ENOMEM; #define F2FS_REPORT_NR_ZONES 4096 zones = kcalloc(F2FS_REPORT_NR_ZONES, sizeof(struct blk_zone), GFP_KERNEL); if (!zones) return -ENOMEM; /* Get block zones type */ while (zones && sector < nr_sectors) { nr_zones = F2FS_REPORT_NR_ZONES; err = blkdev_report_zones(bdev, sector, zones, &nr_zones, GFP_KERNEL); if (err) break; if (!nr_zones) { err = -EIO; break; } for (i = 0; i < nr_zones; i++) { FDEV(devi).blkz_type[n] = zones[i].type; sector += zones[i].len; n++; } } kfree(zones); return err; }
0
[ "CWE-284" ]
linux
b9dd46188edc2f0d1f37328637860bb65a771124
194,752,396,513,282,320,000,000,000,000,000,000,000
62
f2fs: sanity check segment count F2FS uses 4 bytes to represent block address. As a result, supported size of disk is 16 TB and it equals to 16 * 1024 * 1024 / 2 segments. Signed-off-by: Jin Qian <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
int generalizedTimeIndexer( slap_mask_t use, slap_mask_t flags, Syntax *syntax, MatchingRule *mr, struct berval *prefix, BerVarray values, BerVarray *keysp, void *ctx ) { int i, j; BerVarray keys; char tmp[5]; BerValue bvtmp; /* 40 bit index */ struct lutil_tm tm; struct lutil_timet tt; bvtmp.bv_len = sizeof(tmp); bvtmp.bv_val = tmp; for( i=0; values[i].bv_val != NULL; i++ ) { /* just count them */ } /* we should have at least one value at this point */ assert( i > 0 ); keys = slap_sl_malloc( sizeof( struct berval ) * (i+1), ctx ); /* GeneralizedTime YYYYmmddHH[MM[SS]][(./,)d...](Z|(+/-)HH[MM]) */ for( i=0, j=0; values[i].bv_val != NULL; i++ ) { assert(values[i].bv_val != NULL && values[i].bv_len >= 10); /* Use 40 bits of time for key */ if ( lutil_parsetime( values[i].bv_val, &tm ) == 0 ) { lutil_tm2gtime( &tm, &tt ); tmp[0] = tt.tt_gsec & 0xff; tmp[4] = tt.tt_sec & 0xff; tt.tt_sec >>= 8; tmp[3] = tt.tt_sec & 0xff; tt.tt_sec >>= 8; tmp[2] = tt.tt_sec & 0xff; tt.tt_sec >>= 8; tmp[1] = tt.tt_sec & 0xff; ber_dupbv_x(&keys[j++], &bvtmp, ctx ); } } keys[j].bv_val = NULL; keys[j].bv_len = 0; *keysp = keys; return LDAP_SUCCESS; }
0
[ "CWE-617" ]
openldap
3539fc33212b528c56b716584f2c2994af7c30b0
230,812,441,082,089,920,000,000,000,000,000,000,000
54
ITS#9454 fix issuerAndThisUpdateCheck
void gf_hinter_track_get_payload_name(GF_RTPHinter *tkHinter, char *payloadName) { char mediaName[30]; gf_rtp_builder_get_payload_name(tkHinter->rtp_p, payloadName, mediaName); }
0
[ "CWE-476" ]
gpac
ebfa346eff05049718f7b80041093b4c5581c24e
209,141,596,735,371,070,000,000,000,000,000,000,000
5
fixed #1706
static int jpc_unk_getparms(jpc_ms_t *ms, jpc_cstate_t *cstate, jas_stream_t *in) { jpc_unk_t *unk = &ms->parms.unk; /* Eliminate compiler warning about unused variables. */ cstate = 0; if (ms->len > 0) { if (!(unk->data = jas_malloc(ms->len * sizeof(unsigned char)))) { return -1; } if (jas_stream_read(in, (char *) unk->data, ms->len) != JAS_CAST(int, ms->len)) { jas_free(unk->data); return -1; } unk->len = ms->len; } else { unk->data = 0; unk->len = 0; } return 0; }
1
[ "CWE-189" ]
jasper
3c55b399c36ef46befcb21e4ebc4799367f89684
213,869,555,503,483,260,000,000,000,000,000,000,000
22
At many places in the code, jas_malloc or jas_recalloc was being invoked with the size argument being computed in a manner that would not allow integer overflow to be detected. Now, these places in the code have been modified to use special-purpose memory allocation functions (e.g., jas_alloc2, jas_alloc3, jas_realloc2) that check for overflow. This should fix many security problems.
static inline int str2in_method(char *optarg) { if (optarg) { #ifdef PROC_NET_DEV if (!strcasecmp(optarg,"proc")) return PROC_IN; #endif #ifdef NETSTAT if (!strcasecmp(optarg,"netstat")) return NETSTAT_IN; #endif #ifdef LIBSTATGRAB if (!strcasecmp(optarg,"libstat") || !strcasecmp(optarg,"statgrab") || !strcasecmp(optarg,"libstatgrab")) return LIBSTAT_IN; if (!strcasecmp(optarg,"libstatdisk")) return LIBSTATDISK_IN; #endif #ifdef GETIFADDRS if (!strcasecmp(optarg,"getifaddrs")) return GETIFADDRS_IN; #endif #if DEVSTAT_IN if (!strcasecmp(optarg,"devstat")) return DEVSTAT_IN; #endif #ifdef SYSCTL if (!strcasecmp(optarg,"sysctl")) return SYSCTL_IN; #endif #if SYSCTLDISK_IN if (!strcasecmp(optarg,"sysctldisk")) return SYSCTLDISK_IN; #endif #ifdef PROC_DISKSTATS if (!strcasecmp(optarg,"disk")) return DISKLINUX_IN; #endif #ifdef WIN32 if (!strcasecmp(optarg,"win32")) return WIN32_IN; #endif #ifdef HAVE_LIBKSTAT if (!strcasecmp(optarg,"kstat")) return KSTAT_IN; if (!strcasecmp(optarg,"kstatdisk")) return KSTATDISK_IN; #endif #if IOSERVICE_IN if (!strcasecmp(optarg,"ioservice")) return IOSERVICE_IN; #endif } return -1; }
0
[ "CWE-476" ]
bwm-ng
9774f23bf78a6e6d3ae4cfe3d73bad34f2fdcd17
108,344,679,584,290,360,000,000,000,000,000,000,000
40
Fix https://github.com/vgropp/bwm-ng/issues/26
ExecBSDeleteTriggers(EState *estate, ResultRelInfo *relinfo) { TriggerDesc *trigdesc; int i; TriggerData LocTriggerData; trigdesc = relinfo->ri_TrigDesc; if (trigdesc == NULL) return; if (!trigdesc->trig_delete_before_statement) return; LocTriggerData.type = T_TriggerData; LocTriggerData.tg_event = TRIGGER_EVENT_DELETE | TRIGGER_EVENT_BEFORE; LocTriggerData.tg_relation = relinfo->ri_RelationDesc; LocTriggerData.tg_trigtuple = NULL; LocTriggerData.tg_newtuple = NULL; LocTriggerData.tg_trigtuplebuf = InvalidBuffer; LocTriggerData.tg_newtuplebuf = InvalidBuffer; for (i = 0; i < trigdesc->numtriggers; i++) { Trigger *trigger = &trigdesc->triggers[i]; HeapTuple newtuple; if (!TRIGGER_TYPE_MATCHES(trigger->tgtype, TRIGGER_TYPE_STATEMENT, TRIGGER_TYPE_BEFORE, TRIGGER_TYPE_DELETE)) continue; if (!TriggerEnabled(estate, relinfo, trigger, LocTriggerData.tg_event, NULL, NULL, NULL)) continue; LocTriggerData.tg_trigger = trigger; newtuple = ExecCallTriggerFunc(&LocTriggerData, i, relinfo->ri_TrigFunctions, relinfo->ri_TrigInstrument, GetPerTupleMemoryContext(estate)); if (newtuple) ereport(ERROR, (errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED), errmsg("BEFORE STATEMENT trigger cannot return a value"))); } }
0
[ "CWE-362" ]
postgres
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
52,764,582,941,507,280,000,000,000,000,000,000,000
48
Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
static int load_elf_binary(struct linux_binprm *bprm) { struct file *interpreter = NULL; /* to shut gcc up */ unsigned long load_addr = 0, load_bias = 0; int load_addr_set = 0; char * elf_interpreter = NULL; unsigned long error; struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL; unsigned long elf_bss, elf_brk; int bss_prot = 0; int retval, i; unsigned long elf_entry; unsigned long interp_load_addr = 0; unsigned long start_code, end_code, start_data, end_data; unsigned long reloc_func_desc __maybe_unused = 0; int executable_stack = EXSTACK_DEFAULT; struct pt_regs *regs = current_pt_regs(); struct { struct elfhdr elf_ex; struct elfhdr interp_elf_ex; } *loc; struct arch_elf_state arch_state = INIT_ARCH_ELF_STATE; loc = kmalloc(sizeof(*loc), GFP_KERNEL); if (!loc) { retval = -ENOMEM; goto out_ret; } /* Get the exec-header */ loc->elf_ex = *((struct elfhdr *)bprm->buf); retval = -ENOEXEC; /* First of all, some simple consistency checks */ if (memcmp(loc->elf_ex.e_ident, ELFMAG, SELFMAG) != 0) goto out; if (loc->elf_ex.e_type != ET_EXEC && loc->elf_ex.e_type != ET_DYN) goto out; if (!elf_check_arch(&loc->elf_ex)) goto out; if (!bprm->file->f_op->mmap) goto out; elf_phdata = load_elf_phdrs(&loc->elf_ex, bprm->file); if (!elf_phdata) goto out; elf_ppnt = elf_phdata; elf_bss = 0; elf_brk = 0; start_code = ~0UL; end_code = 0; start_data = 0; end_data = 0; for (i = 0; i < loc->elf_ex.e_phnum; i++) { if (elf_ppnt->p_type == PT_INTERP) { /* This is the program interpreter used for * shared libraries - for now assume that this * is an a.out format binary */ retval = -ENOEXEC; if (elf_ppnt->p_filesz > PATH_MAX || elf_ppnt->p_filesz < 2) goto out_free_ph; retval = -ENOMEM; elf_interpreter = kmalloc(elf_ppnt->p_filesz, GFP_KERNEL); if (!elf_interpreter) goto out_free_ph; retval = kernel_read(bprm->file, elf_ppnt->p_offset, elf_interpreter, elf_ppnt->p_filesz); if (retval != elf_ppnt->p_filesz) { if (retval >= 0) retval = -EIO; goto out_free_interp; } /* make sure 
path is NULL terminated */ retval = -ENOEXEC; if (elf_interpreter[elf_ppnt->p_filesz - 1] != '\0') goto out_free_interp; interpreter = open_exec(elf_interpreter); retval = PTR_ERR(interpreter); if (IS_ERR(interpreter)) goto out_free_interp; /* * If the binary is not readable then enforce * mm->dumpable = 0 regardless of the interpreter's * permissions. */ would_dump(bprm, interpreter); /* Get the exec headers */ retval = kernel_read(interpreter, 0, (void *)&loc->interp_elf_ex, sizeof(loc->interp_elf_ex)); if (retval != sizeof(loc->interp_elf_ex)) { if (retval >= 0) retval = -EIO; goto out_free_dentry; } break; } elf_ppnt++; } elf_ppnt = elf_phdata; for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++) switch (elf_ppnt->p_type) { case PT_GNU_STACK: if (elf_ppnt->p_flags & PF_X) executable_stack = EXSTACK_ENABLE_X; else executable_stack = EXSTACK_DISABLE_X; break; case PT_LOPROC ... PT_HIPROC: retval = arch_elf_pt_proc(&loc->elf_ex, elf_ppnt, bprm->file, false, &arch_state); if (retval) goto out_free_dentry; break; } /* Some simple consistency checks for the interpreter */ if (elf_interpreter) { retval = -ELIBBAD; /* Not an ELF interpreter */ if (memcmp(loc->interp_elf_ex.e_ident, ELFMAG, SELFMAG) != 0) goto out_free_dentry; /* Verify the interpreter has a valid arch */ if (!elf_check_arch(&loc->interp_elf_ex)) goto out_free_dentry; /* Load the interpreter program headers */ interp_elf_phdata = load_elf_phdrs(&loc->interp_elf_ex, interpreter); if (!interp_elf_phdata) goto out_free_dentry; /* Pass PT_LOPROC..PT_HIPROC headers to arch code */ elf_ppnt = interp_elf_phdata; for (i = 0; i < loc->interp_elf_ex.e_phnum; i++, elf_ppnt++) switch (elf_ppnt->p_type) { case PT_LOPROC ... PT_HIPROC: retval = arch_elf_pt_proc(&loc->interp_elf_ex, elf_ppnt, interpreter, true, &arch_state); if (retval) goto out_free_dentry; break; } } /* * Allow arch code to reject the ELF at this point, whilst it's * still possible to return an error to the code that invoked * the exec syscall. 
*/ retval = arch_check_elf(&loc->elf_ex, !!interpreter, &loc->interp_elf_ex, &arch_state); if (retval) goto out_free_dentry; /* Flush all traces of the currently running executable */ retval = flush_old_exec(bprm); if (retval) goto out_free_dentry; /* Do this immediately, since STACK_TOP as used in setup_arg_pages may depend on the personality. */ SET_PERSONALITY2(loc->elf_ex, &arch_state); if (elf_read_implies_exec(loc->elf_ex, executable_stack)) current->personality |= READ_IMPLIES_EXEC; if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space) current->flags |= PF_RANDOMIZE; setup_new_exec(bprm); install_exec_creds(bprm); /* Do this so that we can load the interpreter, if need be. We will change some of these later */ retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP), executable_stack); if (retval < 0) goto out_free_dentry; current->mm->start_stack = bprm->p; /* Now we do a little grungy work by mmapping the ELF image into the correct location in memory. */ for(i = 0, elf_ppnt = elf_phdata; i < loc->elf_ex.e_phnum; i++, elf_ppnt++) { int elf_prot = 0, elf_flags; unsigned long k, vaddr; unsigned long total_size = 0; if (elf_ppnt->p_type != PT_LOAD) continue; if (unlikely (elf_brk > elf_bss)) { unsigned long nbyte; /* There was a PT_LOAD segment with p_memsz > p_filesz before this one. Map anonymous pages, if needed, and clear the area. */ retval = set_brk(elf_bss + load_bias, elf_brk + load_bias, bss_prot); if (retval) goto out_free_dentry; nbyte = ELF_PAGEOFFSET(elf_bss); if (nbyte) { nbyte = ELF_MIN_ALIGN - nbyte; if (nbyte > elf_brk - elf_bss) nbyte = elf_brk - elf_bss; if (clear_user((void __user *)elf_bss + load_bias, nbyte)) { /* * This bss-zeroing can fail if the ELF * file specifies odd protections. 
So * we don't check the return value */ } } } if (elf_ppnt->p_flags & PF_R) elf_prot |= PROT_READ; if (elf_ppnt->p_flags & PF_W) elf_prot |= PROT_WRITE; if (elf_ppnt->p_flags & PF_X) elf_prot |= PROT_EXEC; elf_flags = MAP_PRIVATE | MAP_DENYWRITE | MAP_EXECUTABLE; vaddr = elf_ppnt->p_vaddr; if (loc->elf_ex.e_type == ET_EXEC || load_addr_set) { elf_flags |= MAP_FIXED; } else if (loc->elf_ex.e_type == ET_DYN) { /* Try and get dynamic programs out of the way of the * default mmap base, as well as whatever program they * might try to exec. This is because the brk will * follow the loader, and is not movable. */ load_bias = ELF_ET_DYN_BASE - vaddr; if (current->flags & PF_RANDOMIZE) load_bias += arch_mmap_rnd(); load_bias = ELF_PAGESTART(load_bias); total_size = total_mapping_size(elf_phdata, loc->elf_ex.e_phnum); if (!total_size) { retval = -EINVAL; goto out_free_dentry; } } error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt, elf_prot, elf_flags, total_size); if (BAD_ADDR(error)) { retval = IS_ERR((void *)error) ? PTR_ERR((void*)error) : -EINVAL; goto out_free_dentry; } if (!load_addr_set) { load_addr_set = 1; load_addr = (elf_ppnt->p_vaddr - elf_ppnt->p_offset); if (loc->elf_ex.e_type == ET_DYN) { load_bias += error - ELF_PAGESTART(load_bias + vaddr); load_addr += load_bias; reloc_func_desc = load_bias; } } k = elf_ppnt->p_vaddr; if (k < start_code) start_code = k; if (start_data < k) start_data = k; /* * Check to see if the section's size will overflow the * allowed task size. Note that p_filesz must always be * <= p_memsz so it is only necessary to check p_memsz. */ if (BAD_ADDR(k) || elf_ppnt->p_filesz > elf_ppnt->p_memsz || elf_ppnt->p_memsz > TASK_SIZE || TASK_SIZE - elf_ppnt->p_memsz < k) { /* set_brk can never work. Avoid overflows. 
*/ retval = -EINVAL; goto out_free_dentry; } k = elf_ppnt->p_vaddr + elf_ppnt->p_filesz; if (k > elf_bss) elf_bss = k; if ((elf_ppnt->p_flags & PF_X) && end_code < k) end_code = k; if (end_data < k) end_data = k; k = elf_ppnt->p_vaddr + elf_ppnt->p_memsz; if (k > elf_brk) { bss_prot = elf_prot; elf_brk = k; } } loc->elf_ex.e_entry += load_bias; elf_bss += load_bias; elf_brk += load_bias; start_code += load_bias; end_code += load_bias; start_data += load_bias; end_data += load_bias; /* Calling set_brk effectively mmaps the pages that we need * for the bss and break sections. We must do this before * mapping in the interpreter, to make sure it doesn't wind * up getting placed where the bss needs to go. */ retval = set_brk(elf_bss, elf_brk, bss_prot); if (retval) goto out_free_dentry; if (likely(elf_bss != elf_brk) && unlikely(padzero(elf_bss))) { retval = -EFAULT; /* Nobody gets to see this, but.. */ goto out_free_dentry; } if (elf_interpreter) { unsigned long interp_map_addr = 0; elf_entry = load_elf_interp(&loc->interp_elf_ex, interpreter, &interp_map_addr, load_bias, interp_elf_phdata); if (!IS_ERR((void *)elf_entry)) { /* * load_elf_interp() returns relocation * adjustment */ interp_load_addr = elf_entry; elf_entry += loc->interp_elf_ex.e_entry; } if (BAD_ADDR(elf_entry)) { retval = IS_ERR((void *)elf_entry) ? (int)elf_entry : -EINVAL; goto out_free_dentry; } reloc_func_desc = interp_load_addr; allow_write_access(interpreter); fput(interpreter); kfree(elf_interpreter); } else { elf_entry = loc->elf_ex.e_entry; if (BAD_ADDR(elf_entry)) { retval = -EINVAL; goto out_free_dentry; } } kfree(interp_elf_phdata); kfree(elf_phdata); set_binfmt(&elf_format); #ifdef ARCH_HAS_SETUP_ADDITIONAL_PAGES retval = arch_setup_additional_pages(bprm, !!elf_interpreter); if (retval < 0) goto out; #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */ retval = create_elf_tables(bprm, &loc->elf_ex, load_addr, interp_load_addr); if (retval < 0) goto out; /* N.B. 
passed_fileno might not be initialized? */ current->mm->end_code = end_code; current->mm->start_code = start_code; current->mm->start_data = start_data; current->mm->end_data = end_data; current->mm->start_stack = bprm->p; if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) { current->mm->brk = current->mm->start_brk = arch_randomize_brk(current->mm); #ifdef compat_brk_randomized current->brk_randomized = 1; #endif } if (current->personality & MMAP_PAGE_ZERO) { /* Why this, you ask??? Well SVr4 maps page 0 as read-only, and some applications "depend" upon this behavior. Since we do not have the power to recompile these, we emulate the SVr4 behavior. Sigh. */ error = vm_mmap(NULL, 0, PAGE_SIZE, PROT_READ | PROT_EXEC, MAP_FIXED | MAP_PRIVATE, 0); } #ifdef ELF_PLAT_INIT /* * The ABI may specify that certain registers be set up in special * ways (on i386 %edx is the address of a DT_FINI function, for * example. In addition, it may also specify (eg, PowerPC64 ELF) * that the e_entry field is the address of the function descriptor * for the startup routine, rather than the address of the startup * routine itself. This macro performs whatever initialization to * the regs structure is required as well as any relocations to the * function descriptor entries when executing dynamically links apps. */ ELF_PLAT_INIT(regs, reloc_func_desc); #endif start_thread(regs, elf_entry, bprm->p); retval = 0; out: kfree(loc); out_ret: return retval; /* error cleanup */ out_free_dentry: kfree(interp_elf_phdata); allow_write_access(interpreter); if (interpreter) fput(interpreter); out_free_interp: kfree(elf_interpreter); out_free_ph: kfree(elf_phdata); goto out; }
1
[]
linux
eab09532d40090698b05a07c1c87f39fdbc5fab5
58,994,664,817,333,900,000,000,000,000,000,000,000
445
binfmt_elf: use ELF_ET_DYN_BASE only for PIE The ELF_ET_DYN_BASE position was originally intended to keep loaders away from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2 /bin/cat" might cause the subsequent load of /bin/cat into where the loader had been loaded.) With the advent of PIE (ET_DYN binaries with an INTERP Program Header), ELF_ET_DYN_BASE continued to be used since the kernel was only looking at ET_DYN. However, since ELF_ET_DYN_BASE is traditionally set at the top 1/3rd of the TASK_SIZE, a substantial portion of the address space is unused. For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs are loaded above the mmap region. This means they can be made to collide (CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with pathological stack regions. Lowering ELF_ET_DYN_BASE solves both by moving programs below the mmap region in all cases, and will now additionally avoid programs falling back to the mmap region by enforcing MAP_FIXED for program loads (i.e. if it would have collided with the stack, now it will fail to load instead of falling back to the mmap region). To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP) are loaded into the mmap region, leaving space available for either an ET_EXEC binary with a fixed location or PIE being loaded into mmap by the loader. Only PIE programs are loaded offset from ELF_ET_DYN_BASE, which means architectures can now safely lower their values without risk of loaders colliding with their subsequently loaded programs. For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to use the entire 32-bit address space for 32-bit pointers. Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and suggestions on how to implement this solution. 
Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR") Link: http://lkml.kernel.org/r/20170621173201.GA114489@beast Signed-off-by: Kees Cook <[email protected]> Acked-by: Rik van Riel <[email protected]> Cc: Daniel Micay <[email protected]> Cc: Qualys Security Advisory <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Dmitry Safonov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Grzegorz Andrejczuk <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: James Hogan <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Pratyush Anand <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
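The load-address policy described in the commit message above can be summarized as: a PIE executable (ET_DYN with a PT_INTERP header) is loaded offset from ELF_ET_DYN_BASE, while a bare loader (ET_DYN without PT_INTERP) now goes into the mmap region, and ET_EXEC keeps its fixed virtual addresses. A toy decision function, not kernel code — the constants are illustrative, the real values are per-architecture:

```c
#define ET_EXEC 2
#define ET_DYN  3

/* Illustrative constants -- the real values are arch-specific in the kernel. */
#define ELF_ET_DYN_BASE_EX 0x100000000ULL
#define MMAP_BASE_EX       0x7f0000000000ULL

/* Pick a load base following the policy in the commit message. */
static unsigned long long pick_load_base(int e_type, int has_interp)
{
    if (e_type == ET_EXEC)
        return 0;                    /* fixed-position binary: uses its own vaddrs */
    if (has_interp)
        return ELF_ET_DYN_BASE_EX;   /* PIE: offset from ELF_ET_DYN_BASE */
    return MMAP_BASE_EX;             /* bare loader: placed in the mmap region */
}
```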
ves_icall_MonoDebugger_GetMethodToken (MonoReflectionMethod *method)
{
	return method->method->token;
}
0
[ "CWE-264" ]
mono
035c8587c0d8d307e45f1b7171a0d337bb451f1e
234,629,036,052,022,700,000,000,000,000,000,000,000
4
Allow only primitive types/enums in RuntimeHelpers.InitializeArray ().
mrb_class_inherited(mrb_state *mrb, struct RClass *super, struct RClass *klass)
{
  mrb_value s;
  mrb_sym mid;

  if (!super)
    super = mrb->object_class;
  super->flags |= MRB_FLAG_IS_INHERITED;
  s = mrb_obj_value(super);
  mc_clear_by_class(mrb, klass);
  mid = mrb_intern_lit(mrb, "inherited");
  if (!mrb_func_basic_p(mrb, s, mid, mrb_bob_init)) {
    mrb_value c = mrb_obj_value(klass);
    mrb_funcall_argv(mrb, s, mid, 1, &c);
  }
}
0
[ "CWE-476", "CWE-415" ]
mruby
faa4eaf6803bd11669bc324b4c34e7162286bfa3
25,165,747,789,213,260,000,000,000,000,000,000,000
16
`mrb_class_real()` did not work for `BasicObject`; fix #4037
static void cirrus_vga_write_sr(CirrusVGAState * s, uint32_t val) { switch (s->vga.sr_index) { case 0x00: // Standard VGA case 0x01: // Standard VGA case 0x02: // Standard VGA case 0x03: // Standard VGA case 0x04: // Standard VGA s->vga.sr[s->vga.sr_index] = val & sr_mask[s->vga.sr_index]; if (s->vga.sr_index == 1) s->vga.update_retrace_info(&s->vga); break; case 0x06: // Unlock Cirrus extensions val &= 0x17; if (val == 0x12) { s->vga.sr[s->vga.sr_index] = 0x12; } else { s->vga.sr[s->vga.sr_index] = 0x0f; } break; case 0x10: case 0x30: case 0x50: case 0x70: // Graphics Cursor X case 0x90: case 0xb0: case 0xd0: case 0xf0: // Graphics Cursor X s->vga.sr[0x10] = val; s->vga.hw_cursor_x = (val << 3) | (s->vga.sr_index >> 5); break; case 0x11: case 0x31: case 0x51: case 0x71: // Graphics Cursor Y case 0x91: case 0xb1: case 0xd1: case 0xf1: // Graphics Cursor Y s->vga.sr[0x11] = val; s->vga.hw_cursor_y = (val << 3) | (s->vga.sr_index >> 5); break; case 0x07: // Extended Sequencer Mode cirrus_update_memory_access(s); case 0x08: // EEPROM Control case 0x09: // Scratch Register 0 case 0x0a: // Scratch Register 1 case 0x0b: // VCLK 0 case 0x0c: // VCLK 1 case 0x0d: // VCLK 2 case 0x0e: // VCLK 3 case 0x0f: // DRAM Control case 0x13: // Graphics Cursor Pattern Address case 0x14: // Scratch Register 2 case 0x15: // Scratch Register 3 case 0x16: // Performance Tuning Register case 0x18: // Signature Generator Control case 0x19: // Signature Generator Result case 0x1a: // Signature Generator Result case 0x1b: // VCLK 0 Denominator & Post case 0x1c: // VCLK 1 Denominator & Post case 0x1d: // VCLK 2 Denominator & Post case 0x1e: // VCLK 3 Denominator & Post case 0x1f: // BIOS Write Enable and MCLK select s->vga.sr[s->vga.sr_index] = val; #ifdef DEBUG_CIRRUS printf("cirrus: handled outport sr_index %02x, sr_value %02x\n", s->vga.sr_index, val); #endif break; case 0x12: // Graphics Cursor Attribute s->vga.sr[0x12] = val; s->vga.force_shadow = !!(val & CIRRUS_CURSOR_SHOW); #ifdef 
DEBUG_CIRRUS printf("cirrus: cursor ctl SR12=%02x (force shadow: %d)\n", val, s->vga.force_shadow); #endif break; case 0x17: // Configuration Readback and Extended Control s->vga.sr[s->vga.sr_index] = (s->vga.sr[s->vga.sr_index] & 0x38) | (val & 0xc7); cirrus_update_memory_access(s); break; default: #ifdef DEBUG_CIRRUS printf("cirrus: outport sr_index %02x, sr_value %02x\n", s->vga.sr_index, val); #endif break; } }
0
[ "CWE-119" ]
qemu
026aeffcb4752054830ba203020ed6eb05bcaba8
227,409,899,026,445,820,000,000,000,000,000,000,000
91
cirrus: stop passing around dst pointers in the blitter Instead pass around the address (aka offset into vga memory). Calculate the pointer in the rop_* functions, after applying the mask to the address, to make sure the address stays within the valid range. Signed-off-by: Gerd Hoffmann <[email protected]> Message-id: [email protected]
nfs3svc_encode_createres(struct svc_rqst *rqstp, __be32 *p,
		struct nfsd3_diropres *resp)
{
	if (resp->status == 0) {
		*p++ = xdr_one;
		p = encode_fh(p, &resp->fh);
		p = encode_post_op_attr(rqstp, p, &resp->fh);
	}
	p = encode_wcc_data(rqstp, p, &resp->dirfh);
	return xdr_ressize_check(rqstp, p);
}
0
[ "CWE-119", "CWE-703" ]
linux
13bf9fbff0e5e099e2b6f003a0ab8ae145436309
93,925,868,377,889,390,000,000,000,000,000,000,000
11
nfsd: stricter decoding of write-like NFSv2/v3 ops The NFSv2/v3 code does not systematically check whether we decode past the end of the buffer. This generally appears to be harmless, but there are a few places where we do arithmetic on the pointers involved and don't account for the possibility that a length could be negative. Add checks to catch these. Reported-by: Tuomas Haanpää <[email protected]> Reported-by: Ari Kauppi <[email protected]> Reviewed-by: NeilBrown <[email protected]> Cc: [email protected] Signed-off-by: J. Bruce Fields <[email protected]>
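The over-read class this commit guards against comes from doing pointer arithmetic with an attacker-controlled length before checking it against the end of the receive buffer; a huge or effectively negative length then walks past the buffer. A hedged sketch of the safer shape (illustrative helper, not the nfsd code):

```c
#include <stddef.h>

/* Return 1 if [p, p+len) lies entirely inside [base, end), without
 * ever computing p + len, which could wrap for huge len values. */
static int decode_in_bounds(const unsigned char *base, const unsigned char *end,
                            const unsigned char *p, size_t len)
{
    if (p < base || p > end)
        return 0;
    /* compare len against the remaining space instead of the sum */
    return len <= (size_t)(end - p);
}
```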
static void mark_used_tables_as_free_for_reuse(THD *thd, TABLE *table)
{
  DBUG_ENTER("mark_used_tables_as_free_for_reuse");
  for (; table ; table= table->next)
  {
    DBUG_ASSERT(table->pos_in_locked_tables == NULL ||
                table->pos_in_locked_tables->table == table);
    if (table->query_id == thd->query_id)
    {
      table->query_id= 0;
      table->file->ha_reset();
    }
    else if (table->file->check_table_binlog_row_based_done)
      table->file->clear_cached_table_binlog_row_based_flag();
  }
  DBUG_VOID_RETURN;
}
0
[ "CWE-416" ]
server
0beed9b5e933f0ff79b3bb346524f7a451d14e38
283,747,419,119,284,400,000,000,000,000,000,000,000
17
MDEV-28097 use-after-free when WHERE has subquery with an outer reference in HAVING when resolving WHERE and ON clauses, do not look in SELECT list/aliases.
static unsigned ucvector_push_back(ucvector* p, unsigned char c)
{
  if(!ucvector_resize(p, p->size + 1)) return 0;
  p->data[p->size - 1] = c;
  return 1;
}
0
[ "CWE-401" ]
FreeRDP
9fee4ae076b1ec97b97efb79ece08d1dab4df29a
234,438,751,018,169,950,000,000,000,000,000,000,000
6
Fixed #5645: realloc return handling
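The fix referenced above is the classic safe-realloc pattern: assign the result to a temporary and only overwrite the original pointer on success, so a failing realloc neither leaks the old buffer nor leaves a NULL pointer behind. A minimal sketch of that pattern (not the FreeRDP code itself):

```c
#include <stdlib.h>

/* Grow *buf to new_size. Returns 1 on success, 0 on failure.
 * On failure the original buffer is left intact and still owned
 * by the caller -- no leak, no NULL dereference. */
static int safe_grow(unsigned char **buf, size_t new_size)
{
    unsigned char *tmp = realloc(*buf, new_size);
    if (!tmp)
        return 0;   /* *buf is unchanged */
    *buf = tmp;
    return 1;
}
```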
static bool parse_notify(struct pool *pool, json_t *val) { char *job_id, *prev_hash, *coinbase1, *coinbase2, *bbversion, *nbit, *ntime; bool clean, ret = false; int merkles, i; json_t *arr; arr = json_array_get(val, 4); if (!arr || !json_is_array(arr)) goto out; merkles = json_array_size(arr); job_id = json_array_string(val, 0); prev_hash = json_array_string(val, 1); coinbase1 = json_array_string(val, 2); coinbase2 = json_array_string(val, 3); bbversion = json_array_string(val, 5); nbit = json_array_string(val, 6); ntime = json_array_string(val, 7); clean = json_is_true(json_array_get(val, 8)); if (!job_id || !prev_hash || !coinbase1 || !coinbase2 || !bbversion || !nbit || !ntime) { /* Annoying but we must not leak memory */ if (job_id) free(job_id); if (prev_hash) free(prev_hash); if (coinbase1) free(coinbase1); if (coinbase2) free(coinbase2); if (bbversion) free(bbversion); if (nbit) free(nbit); if (ntime) free(ntime); goto out; } mutex_lock(&pool->pool_lock); free(pool->swork.job_id); free(pool->swork.prev_hash); free(pool->swork.coinbase1); free(pool->swork.coinbase2); free(pool->swork.bbversion); free(pool->swork.nbit); free(pool->swork.ntime); pool->swork.job_id = job_id; pool->swork.prev_hash = prev_hash; pool->swork.coinbase1 = coinbase1; pool->swork.cb1_len = strlen(coinbase1) / 2; pool->swork.coinbase2 = coinbase2; pool->swork.cb2_len = strlen(coinbase2) / 2; pool->swork.bbversion = bbversion; pool->swork.nbit = nbit; pool->swork.ntime = ntime; pool->submit_old = !clean; pool->swork.clean = true; pool->swork.cb_len = pool->swork.cb1_len + pool->n1_len + pool->n2size + pool->swork.cb2_len; for (i = 0; i < pool->swork.merkles; i++) free(pool->swork.merkle[i]); if (merkles) { pool->swork.merkle = realloc(pool->swork.merkle, sizeof(char *) * merkles + 1); for (i = 0; i < merkles; i++) pool->swork.merkle[i] = json_array_string(arr, i); } pool->swork.merkles = merkles; if (clean) pool->nonce2 = 0; pool->swork.header_len = strlen(pool->swork.bbversion) + 
strlen(pool->swork.prev_hash) + strlen(pool->swork.ntime) + strlen(pool->swork.nbit) + /* merkle_hash */ 32 + /* nonce */ 8 + /* workpadding */ 96; pool->swork.header_len = pool->swork.header_len * 2 + 1; align_len(&pool->swork.header_len); mutex_unlock(&pool->pool_lock); applog(LOG_DEBUG, "Received stratum notify from pool %u with job_id=%s", pool->pool_no, job_id); if (opt_protocol) { applog(LOG_DEBUG, "job_id: %s", job_id); applog(LOG_DEBUG, "prev_hash: %s", prev_hash); applog(LOG_DEBUG, "coinbase1: %s", coinbase1); applog(LOG_DEBUG, "coinbase2: %s", coinbase2); for (i = 0; i < merkles; i++) applog(LOG_DEBUG, "merkle%d: %s", i, pool->swork.merkle[i]); applog(LOG_DEBUG, "bbversion: %s", bbversion); applog(LOG_DEBUG, "nbit: %s", nbit); applog(LOG_DEBUG, "ntime: %s", ntime); applog(LOG_DEBUG, "clean: %s", clean ? "yes" : "no"); } /* A notify message is the closest stratum gets to a getwork */ pool->getwork_requested++; total_getworks++; if ((merkles && (!pool->swork.transparency_probed || rand() <= RAND_MAX / (opt_skip_checks + 1))) || pool->swork.transparency_time != (time_t)-1) if (pool->stratum_auth) stratum_probe_transparency(pool); ret = true; out: return ret; }
0
[ "CWE-119", "CWE-787" ]
bfgminer
c80ad8548251eb0e15329fc240c89070640c9d79
127,715,730,442,668,480,000,000,000,000,000,000,000
110
Stratum: extract_sockaddr: Truncate overlong addresses rather than stack overflow Thanks to Mick Ayzenberg <[email protected]> for finding this!
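Truncating an overlong input instead of copying it unchecked into a fixed-size stack buffer is the generic remedy for the overflow this commit describes. A hedged sketch, assuming a hypothetical 64-byte address buffer (not bfgminer's actual limit or code):

```c
#include <stdio.h>

#define ADDR_MAX 64  /* hypothetical buffer size for illustration */

/* Copy src into a fixed-size buffer, truncating instead of overflowing.
 * snprintf always NUL-terminates and never writes past ADDR_MAX bytes. */
static void copy_addr(char dst[ADDR_MAX], const char *src)
{
    snprintf(dst, ADDR_MAX, "%s", src);
}
```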
int wait_for_prior_commit(THD *thd)
{
  /*
    Quick inline check, to avoid function call and locking in the common case
    where no wakeup is registered, or a registered wait was already signalled.
  */
  if (waitee)
    return wait_for_prior_commit2(thd);
  else
  {
    if (wakeup_error)
      my_error(ER_PRIOR_COMMIT_FAILED, MYF(0));
    return wakeup_error;
  }
}
0
[ "CWE-416" ]
server
4681b6f2d8c82b4ec5cf115e83698251963d80d5
275,480,038,339,471,170,000,000,000,000,000,000,000
15
MDEV-26281 ASAN use-after-poison when complex conversion is involved in blob the bug was that in_vector array in Item_func_in was allocated in the statement arena, not in the table->expr_arena. revert part of the 5acd391e8b2d. Instead, change the arena correctly in fix_all_session_vcol_exprs(). Remove TABLE_ARENA, that was introduced in 5acd391e8b2d to force item tree changes to be rolled back (because they were allocated in the wrong arena and didn't persist. now they do)
static int krb5_save_ccname(TALLOC_CTX *mem_ctx,
                            struct sysdb_ctx *sysdb,
                            struct sss_domain_info *domain,
                            const char *name,
                            const char *ccname)
{
    TALLOC_CTX *tmpctx;
    struct sysdb_attrs *attrs;
    int ret;

    if (name == NULL || ccname == NULL) {
        DEBUG(1, ("Missing user or ccache name.\n"));
        return EINVAL;
    }

    DEBUG(9, ("Save ccname [%s] for user [%s].\n", ccname, name));

    tmpctx = talloc_new(mem_ctx);
    if (!tmpctx) {
        return ENOMEM;
    }

    attrs = sysdb_new_attrs(mem_ctx);
    if (!attrs) {
        ret = ENOMEM;
        goto done;
    }

    ret = sysdb_attrs_add_string(attrs, SYSDB_CCACHE_FILE, ccname);
    if (ret != EOK) {
        DEBUG(1, ("sysdb_attrs_add_string failed.\n"));
        goto done;
    }

    ret = sysdb_transaction_start(sysdb);
    if (ret != EOK) {
        DEBUG(6, ("Error %d starting transaction (%s)\n", ret, strerror(ret)));
        goto done;
    }

    ret = sysdb_set_user_attr(tmpctx, sysdb, domain, name, attrs,
                              SYSDB_MOD_REP);
    if (ret != EOK) {
        DEBUG(6, ("Error: %d (%s)\n", ret, strerror(ret)));
        sysdb_transaction_cancel(sysdb);
        goto done;
    }

    ret = sysdb_transaction_commit(sysdb);
    if (ret != EOK) {
        DEBUG(1, ("Failed to commit transaction!\n"));
    }

done:
    talloc_zfree(tmpctx);
    return ret;
}
0
[ "CWE-287" ]
sssd
fffdae81651b460f3d2c119c56d5caa09b4de42a
303,586,883,176,240,600,000,000,000,000,000,000,000
57
Fix bad password caching when using automatic TGT renewal Fixes CVE-2011-1758, https://fedorahosted.org/sssd/ticket/856
static void cleanup_special_delivery(deliver_data_t *mydata)
{
    fclose(mydata->m->f);
    prot_free(mydata->m->data);
    append_removestage(mydata->stage);
    if (mydata->content->base) {
        map_free(&mydata->content->base, &mydata->content->len);
        if (mydata->content->body) {
            message_free_body(mydata->content->body);
            free(mydata->content->body);
        }
    }
}
0
[ "CWE-269" ]
cyrus-imapd
673ebd96e2efbb8895d08648983377262f35b3f7
233,916,555,432,498,930,000,000,000,000,000,000,000
13
lmtp_sieve: don't create mailbox with admin for sieve autocreate
const Type_handler *type_handler() const { return &type_handler_null; }
0
[ "CWE-416", "CWE-703" ]
server
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
17,115,308,997,702,156,000,000,000,000,000,000,000
1
MDEV-24176 Server crashes after insert in the table with virtual column generated using date_format() and if() vcol_info->expr is allocated on expr_arena at parsing stage. Since expr item is allocated on expr_arena all its containee items must be allocated on expr_arena too. Otherwise fix_session_expr() will encounter prematurely freed item. When table is reopened from cache vcol_info contains stale expression. We refresh expression via TABLE::vcol_fix_exprs() but first we must prepare a proper context (Vcol_expr_context) which meets some requirements: 1. As noted above expr update must be done on expr_arena as there may be new items created. It was a bug in fix_session_expr_for_read() and was just not reproduced because of no second refix. Now refix is done for more cases so it does reproduce. Tests affected: vcol.binlog 2. Also name resolution context must be narrowed to the single table. Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes 3. sql_mode must be clean and not fail expr update. sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc must not affect vcol expression update. If the table was created successfully any further evaluation must not fail. Tests affected: main.func_like Reviewed by: Sergei Golubchik <[email protected]>
static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp) { struct net *net = dev_net(ifp->idev->dev); inet6_ifa_notify(event ? : RTM_NEWADDR, ifp); switch (event) { case RTM_NEWADDR: update_valid_ll_addr_cnt(ifp, 1); /* * If the address was optimistic * we inserted the route at the start of * our DAD process, so we don't need * to do it again */ if (!(ifp->rt->rt6i_node)) ip6_ins_rt(ifp->rt); if (ifp->idev->cnf.forwarding) addrconf_join_anycast(ifp); if (!ipv6_addr_any(&ifp->peer_addr)) addrconf_prefix_route(&ifp->peer_addr, 128, ifp->idev->dev, 0, 0); break; case RTM_DELADDR: update_valid_ll_addr_cnt(ifp, -1); if (ifp->idev->cnf.forwarding) addrconf_leave_anycast(ifp); addrconf_leave_solict(ifp->idev, &ifp->addr); if (!ipv6_addr_any(&ifp->peer_addr)) { struct rt6_info *rt; struct net_device *dev = ifp->idev->dev; rt = rt6_lookup(dev_net(dev), &ifp->peer_addr, NULL, dev->ifindex, 1); if (rt) { dst_hold(&rt->dst); if (ip6_del_rt(rt)) dst_free(&rt->dst); } } dst_hold(&ifp->rt->dst); if (ip6_del_rt(ifp->rt)) dst_free(&ifp->rt->dst); break; } atomic_inc(&net->ipv6.dev_addr_genid); }
0
[]
net
4b08a8f1bd8cb4541c93ec170027b4d0782dab52
65,424,035,632,356,490,000,000,000,000,000,000,000
50
ipv6: remove max_addresses check from ipv6_create_tempaddr Because of the max_addresses check attackers were able to disable privacy extensions on an interface by creating enough autoconfigured addresses: <http://seclists.org/oss-sec/2012/q4/292> But the check is not actually needed: max_addresses protects the kernel to install too many ipv6 addresses on an interface and guards addrconf_prefix_rcv to install further addresses as soon as this limit is reached. We only generate temporary addresses in direct response of a new address showing up. As soon as we filled up the maximum number of addresses of an interface, we stop installing more addresses and thus also stop generating more temp addresses. Even if the attacker tries to generate a lot of temporary addresses by announcing a prefix and removing it again (lifetime == 0) we won't install more temp addresses, because the temporary addresses do count to the maximum number of addresses, thus we would stop installing new autoconfigured addresses when the limit is reached. This patch fixes CVE-2013-0343 (but other layer-2 attacks are still possible). Thanks to Ding Tianhong to bring this topic up again. Cc: Ding Tianhong <[email protected]> Cc: George Kargiotakis <[email protected]> Cc: P J P <[email protected]> Cc: YOSHIFUJI Hideaki <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Acked-by: Ding Tianhong <[email protected]> Signed-off-by: David S. Miller <[email protected]>
SSLNetVConnection::sslServerHandShakeEvent(int &err) { // Continue on if we are in the invoked state. The hook has not yet reenabled if (sslHandshakeHookState == HANDSHAKE_HOOKS_CERT_INVOKE || sslHandshakeHookState == HANDSHAKE_HOOKS_CLIENT_CERT_INVOKE || sslHandshakeHookState == HANDSHAKE_HOOKS_PRE_INVOKE) { return SSL_WAIT_FOR_HOOK; } // Go do the preaccept hooks if (sslHandshakeHookState == HANDSHAKE_HOOKS_PRE) { if (!curHook) { Debug("ssl", "Initialize preaccept curHook from NULL"); curHook = ssl_hooks->get(TS_VCONN_START_INTERNAL_HOOK); } else { curHook = curHook->next(); } // If no more hooks, move onto SNI if (nullptr == curHook) { sslHandshakeHookState = HANDSHAKE_HOOKS_SNI; } else { sslHandshakeHookState = HANDSHAKE_HOOKS_PRE_INVOKE; ContWrapper::wrap(nh->mutex.get(), curHook->m_cont, TS_EVENT_VCONN_START, this); return SSL_WAIT_FOR_HOOK; } } // If a blind tunnel was requested in the pre-accept calls, convert. // Again no data has been exchanged, so we can go directly // without data replay. // Note we can't arrive here if a hook is active. if (SSL_HOOK_OP_TUNNEL == hookOpRequested && !SNIMapping) { this->attributes = HttpProxyPort::TRANSPORT_BLIND_TUNNEL; SSL_free(this->ssl); this->ssl = nullptr; // Don't mark the handshake as complete yet, // Will be checking for that flag not being set after // we get out of this callback, and then will shuffle // over the buffered handshake packets to the O.S. return EVENT_DONE; } else if (SSL_HOOK_OP_TERMINATE == hookOpRequested) { sslHandShakeComplete = true; return EVENT_DONE; } Debug("ssl", "Go on with the handshake state=%d", sslHandshakeHookState); // All the pre-accept hooks have completed, proceed with the actual accept. if (this->handShakeReader) { if (BIO_eof(SSL_get_rbio(this->ssl))) { // No more data in the buffer // Is this the first read? 
if (!this->handShakeReader->is_read_avail_more_than(0) && !this->handShakeHolder->is_read_avail_more_than(0)) { Debug("ssl", "%p first read\n", this); // Read from socket to fill in the BIO buffer with the // raw handshake data before calling the ssl accept calls. int retval = this->read_raw_data(); if (retval < 0) { if (retval == -EAGAIN) { // No data at the moment, hang tight SSLVCDebug(this, "SSL handshake: EAGAIN"); return SSL_HANDSHAKE_WANT_READ; } else { // An error, make us go away SSLVCDebug(this, "SSL handshake error: read_retval=%d", retval); return EVENT_ERROR; } } else if (retval == 0) { // EOF, go away, we stopped in the handshake SSLVCDebug(this, "SSL handshake error: EOF"); return EVENT_ERROR; } } else { update_rbio(false); } } // Still data in the BIO } #if TS_USE_TLS_ASYNC if (SSLConfigParams::async_handshake_enabled) { SSL_set_mode(ssl, SSL_MODE_ASYNC); } #endif ssl_error_t ssl_error = SSLAccept(ssl); #if TS_USE_TLS_ASYNC if (ssl_error == SSL_ERROR_WANT_ASYNC) { size_t numfds; OSSL_ASYNC_FD waitfd; // Set up the epoll entry for the signalling if (SSL_get_all_async_fds(ssl, &waitfd, &numfds) && numfds > 0) { // Temporarily disable regular net read_disable(nh, this); this->ep.stop(); // Modify used in read_disable doesn't work for edge triggered epol // Have to have the read NetState enabled because we are using it for the signal vc read.enabled = true; write_disable(nh, this); PollDescriptor *pd = get_PollDescriptor(this_ethread()); this->ep.start(pd, waitfd, this, EVENTIO_READ); this->ep.type = EVENTIO_READWRITE_VC; } } else if (SSLConfigParams::async_handshake_enabled) { // Clean up the epoll entry for signalling SSL_clear_mode(ssl, SSL_MODE_ASYNC); this->ep.stop(); // Rectivate the socket, ready to rock PollDescriptor *pd = get_PollDescriptor(this_ethread()); this->ep.start( pd, this, EVENTIO_READ | EVENTIO_WRITE); // Again we must muck with the eventloop directly because of limits with these methods and edge trigger if (ssl_error == 
SSL_ERROR_WANT_READ) { this->reenable(&read.vio); this->read.triggered = 1; } } #endif bool trace = getSSLTrace(); if (ssl_error != SSL_ERROR_NONE) { err = errno; SSLVCDebug(this, "SSL handshake error: %s (%d), errno=%d", SSLErrorName(ssl_error), ssl_error, err); // start a blind tunnel if tr-pass is set and data does not look like ClientHello char *buf = handShakeBuffer ? handShakeBuffer->buf() : nullptr; if (getTransparentPassThrough() && buf && *buf != SSL_OP_HANDSHAKE) { SSLVCDebug(this, "Data does not look like SSL handshake, starting blind tunnel"); this->attributes = HttpProxyPort::TRANSPORT_BLIND_TUNNEL; sslHandShakeComplete = false; return EVENT_CONT; } } switch (ssl_error) { case SSL_ERROR_NONE: if (is_debug_tag_set("ssl")) { X509 *cert = SSL_get_peer_certificate(ssl); Debug("ssl", "SSL server handshake completed successfully"); if (cert) { debug_certificate_name("client certificate subject CN is", X509_get_subject_name(cert)); debug_certificate_name("client certificate issuer CN is", X509_get_issuer_name(cert)); X509_free(cert); } } sslHandShakeComplete = true; TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake completed successfully"); // do we want to include cert info in trace? if (sslHandshakeBeginTime) { sslHandshakeEndTime = Thread::get_hrtime(); const ink_hrtime ssl_handshake_time = sslHandshakeEndTime - sslHandshakeBeginTime; Debug("ssl", "ssl handshake time:%" PRId64, ssl_handshake_time); SSL_INCREMENT_DYN_STAT_EX(ssl_total_handshake_time_stat, ssl_handshake_time); SSL_INCREMENT_DYN_STAT(ssl_total_success_handshake_count_in_stat); } { const unsigned char *proto = nullptr; unsigned len = 0; // If it's possible to negotiate both NPN and ALPN, then ALPN // is preferred since it is the server's preference. The server // preference would not be meaningful if we let the client // preference have priority. 
#if TS_USE_TLS_ALPN SSL_get0_alpn_selected(ssl, &proto, &len); #endif /* TS_USE_TLS_ALPN */ #if TS_USE_TLS_NPN if (len == 0) { SSL_get0_next_proto_negotiated(ssl, &proto, &len); } #endif /* TS_USE_TLS_NPN */ if (len) { // If there's no NPN set, we should not have done this negotiation. ink_assert(this->npnSet != nullptr); this->npnEndpoint = this->npnSet->findEndpoint(proto, len); this->npnSet = nullptr; if (this->npnEndpoint == nullptr) { Error("failed to find registered SSL endpoint for '%.*s'", (int)len, (const char *)proto); return EVENT_ERROR; } Debug("ssl", "client selected next protocol '%.*s'", len, proto); TraceIn(trace, get_remote_addr(), get_remote_port(), "client selected next protocol'%.*s'", len, proto); } else { Debug("ssl", "client did not select a next protocol"); TraceIn(trace, get_remote_addr(), get_remote_port(), "client did not select a next protocol"); } } return EVENT_DONE; case SSL_ERROR_WANT_CONNECT: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_CONNECT"); return SSL_HANDSHAKE_WANT_CONNECT; case SSL_ERROR_WANT_WRITE: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_WRITE"); return SSL_HANDSHAKE_WANT_WRITE; case SSL_ERROR_WANT_READ: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_READ"); return SSL_HANDSHAKE_WANT_READ; // This value is only defined in openssl has been patched to // enable the sni callback to break out of the SSL_accept processing #ifdef SSL_ERROR_WANT_SNI_RESOLVE case SSL_ERROR_WANT_X509_LOOKUP: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_X509_LOOKUP"); return EVENT_CONT; case SSL_ERROR_WANT_SNI_RESOLVE: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_SNI_RESOLVE"); #elif SSL_ERROR_WANT_X509_LOOKUP case SSL_ERROR_WANT_X509_LOOKUP: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_X509_LOOKUP"); 
#endif #if defined(SSL_ERROR_WANT_SNI_RESOLVE) || defined(SSL_ERROR_WANT_X509_LOOKUP) if (this->attributes == HttpProxyPort::TRANSPORT_BLIND_TUNNEL || SSL_HOOK_OP_TUNNEL == hookOpRequested) { this->attributes = HttpProxyPort::TRANSPORT_BLIND_TUNNEL; sslHandShakeComplete = false; return EVENT_CONT; } else { // Stopping for some other reason, perhaps loading certificate return SSL_WAIT_FOR_HOOK; } #endif #if TS_USE_TLS_ASYNC case SSL_ERROR_WANT_ASYNC: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_ASYNC"); return SSL_WAIT_FOR_ASYNC; #endif case SSL_ERROR_WANT_ACCEPT: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_WANT_ACCEPT"); return EVENT_CONT; case SSL_ERROR_SSL: { SSL_CLR_ERR_INCR_DYN_STAT(this, ssl_error_ssl, "SSLNetVConnection::sslServerHandShakeEvent, SSL_ERROR_SSL errno=%d", errno); char buf[512]; unsigned long e = ERR_peek_last_error(); ERR_error_string_n(e, buf, sizeof(buf)); TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_SSL: sslErr=%d, ERR_get_error=%ld (%s) errno=%d", ssl_error, e, buf, errno); return EVENT_ERROR; } case SSL_ERROR_ZERO_RETURN: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_ZERO_RETURN"); return EVENT_ERROR; case SSL_ERROR_SYSCALL: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_SYSCALL"); return EVENT_ERROR; default: TraceIn(trace, get_remote_addr(), get_remote_port(), "SSL server handshake ERROR_OTHER"); return EVENT_ERROR; } }
0
[ "CWE-284" ]
trafficserver
d3f36f79820ea10c26573c742b1bbc370c351716
133,609,762,607,775,270,000,000,000,000,000,000,000
265
Bug fix in origin connection handling (#8731) Co-authored-by: Takuya Kitano <[email protected]>
index_recheck_constraint(Relation index, Oid *constr_procs, Datum *existing_values, bool *existing_isnull, Datum *new_values) { int index_natts = index->rd_index->indnatts; int i; for (i = 0; i < index_natts; i++) { /* Assume the exclusion operators are strict */ if (existing_isnull[i]) return false; if (!DatumGetBool(OidFunctionCall2Coll(constr_procs[i], index->rd_indcollation[i], existing_values[i], new_values[i]))) return false; } return true; }
0
[ "CWE-209" ]
postgres
804b6b6db4dcfc590a468e7be390738f9f7755fb
153,357,063,271,745,720,000,000,000,000,000,000,000
22
Fix column-privilege leak in error-message paths While building error messages to return to the user, BuildIndexValueDescription, ExecBuildSlotValueDescription and ri_ReportViolation would happily include the entire key or entire row in the result returned to the user, even if the user didn't have access to view all of the columns being included. Instead, include only those columns which the user is providing or which the user has select rights on. If the user does not have any rights to view the table or any of the columns involved then no detail is provided and a NULL value is returned from BuildIndexValueDescription and ExecBuildSlotValueDescription. Note that, for key cases, the user must have access to all of the columns for the key to be shown; a partial key will not be returned. Further, in master only, do not return any data for cases where row security is enabled on the relation and row security should be applied for the user. This required a bit of refactoring and moving of things around related to RLS- note the addition of utils/misc/rls.c. Back-patch all the way, as column-level privileges are now in all supported versions. This has been assigned CVE-2014-8161, but since the issue and the patch have already been publicized on pgsql-hackers, there's no point in trying to hide this commit.
GF_Err audio_sample_entry_on_child_box(GF_Box *s, GF_Box *a) { GF_UnknownBox *wave = NULL; Bool drop_wave=GF_FALSE; GF_MPEGAudioSampleEntryBox *ptr = (GF_MPEGAudioSampleEntryBox *)s; switch (a->type) { case GF_ISOM_BOX_TYPE_ESDS: if (ptr->esd) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->esd = (GF_ESDBox *)a; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_DAMR: case GF_ISOM_BOX_TYPE_DEVC: case GF_ISOM_BOX_TYPE_DQCP: case GF_ISOM_BOX_TYPE_DSMV: if (ptr->cfg_3gpp) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_3gpp = (GF_3GPPConfigBox *) a; /*for 3GP config, remember sample entry type in config*/ ptr->cfg_3gpp->cfg.type = ptr->type; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_DOPS: if (ptr->cfg_opus) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_opus = (GF_OpusSpecificBox *)a; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_DAC3: if (ptr->cfg_ac3) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_ac3 = (GF_AC3ConfigBox *) a; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_DEC3: if (ptr->cfg_ac3) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_ac3 = (GF_AC3ConfigBox *) a; break; case GF_ISOM_BOX_TYPE_MHAC: if (ptr->cfg_mha) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_mha = (GF_MHAConfigBox *) a; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_DFLA: if (ptr->cfg_flac) ERROR_ON_DUPLICATED_BOX(a, ptr) ptr->cfg_flac = (GF_FLACConfigBox *) a; ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; break; case GF_ISOM_BOX_TYPE_UNKNOWN: wave = (GF_UnknownBox *)a; /*HACK for QT files: get the esds box from the track*/ if (s->type == GF_ISOM_BOX_TYPE_MP4A) { if (ptr->esd) ERROR_ON_DUPLICATED_BOX(a, ptr) //wave subboxes may have been properly parsed if ((wave->original_4cc == GF_QT_BOX_TYPE_WAVE) && gf_list_count(wave->child_boxes)) { u32 i; for (i =0; i<gf_list_count(wave->child_boxes); i++) { GF_Box *inner_box = (GF_Box *)gf_list_get(wave->child_boxes, i); if (inner_box->type == GF_ISOM_BOX_TYPE_ESDS) { 
ptr->esd = (GF_ESDBox *)inner_box; if (ptr->qtff_mode & GF_ISOM_AUDIO_QTFF_CONVERT_FLAG) { gf_list_rem(a->child_boxes, i); drop_wave=GF_TRUE; ptr->compression_id = 0; gf_list_add(ptr->child_boxes, inner_box); } } } if (drop_wave) { gf_isom_box_del_parent(&ptr->child_boxes, a); ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; ptr->version = 0; return GF_OK; } ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_ON_EXT_VALID; return GF_OK; } gf_isom_box_del_parent(&ptr->child_boxes, a); return GF_ISOM_INVALID_MEDIA; } ptr->qtff_mode &= ~GF_ISOM_AUDIO_QTFF_CONVERT_FLAG; if ((wave->original_4cc == GF_QT_BOX_TYPE_WAVE) && gf_list_count(wave->child_boxes)) { ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_ON_NOEXT; } return GF_OK; case GF_QT_BOX_TYPE_WAVE: { u32 subtype = 0; GF_Box **cfg_ptr = NULL; if (s->type == GF_ISOM_BOX_TYPE_MP4A) { cfg_ptr = (GF_Box **) &ptr->esd; subtype = GF_ISOM_BOX_TYPE_ESDS; } else if (s->type == GF_ISOM_BOX_TYPE_AC3) { cfg_ptr = (GF_Box **) &ptr->cfg_ac3; subtype = GF_ISOM_BOX_TYPE_DAC3; } else if (s->type == GF_ISOM_BOX_TYPE_EC3) { cfg_ptr = (GF_Box **) &ptr->cfg_ac3; subtype = GF_ISOM_BOX_TYPE_DEC3; } else if (s->type == GF_ISOM_BOX_TYPE_OPUS) { cfg_ptr = (GF_Box **) &ptr->cfg_opus; subtype = GF_ISOM_BOX_TYPE_DOPS; } else if ((s->type == GF_ISOM_BOX_TYPE_MHA1) || (s->type == GF_ISOM_BOX_TYPE_MHA2) || (s->type == GF_ISOM_BOX_TYPE_MHM1) || (s->type == GF_ISOM_BOX_TYPE_MHM2) ) { cfg_ptr = (GF_Box **) &ptr->cfg_mha; subtype = GF_ISOM_BOX_TYPE_MHAC; } if (cfg_ptr) { if (*cfg_ptr) ERROR_ON_DUPLICATED_BOX(a, ptr) //wave subboxes may have been properly parsed if (gf_list_count(a->child_boxes)) { u32 i; for (i =0; i<gf_list_count(a->child_boxes); i++) { GF_Box *inner_box = (GF_Box *)gf_list_get(a->child_boxes, i); if (inner_box->type == subtype) { *cfg_ptr = inner_box; if (ptr->qtff_mode & GF_ISOM_AUDIO_QTFF_CONVERT_FLAG) { gf_list_rem(a->child_boxes, i); drop_wave=GF_TRUE; gf_list_add(ptr->child_boxes, inner_box); } break; } } if (drop_wave) { 
gf_isom_box_del_parent(&ptr->child_boxes, a); ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_NONE; ptr->compression_id = 0; ptr->version = 0; return GF_OK; } ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_ON_EXT_VALID; return GF_OK; } } } ptr->qtff_mode = GF_ISOM_AUDIO_QTFF_ON_EXT_VALID; return GF_OK; } return GF_OK; }
0
[ "CWE-787" ]
gpac
388ecce75d05e11fc8496aa4857b91245007d26e
255,517,528,846,233,000,000,000,000,000,000,000,000
153
fixed #1587
set_execreg_lastc(int lastc) { execreg_lastc = lastc; }
0
[ "CWE-122", "CWE-787" ]
vim
d25f003342aca9889067f2e839963dfeccf1fe05
65,569,241,539,252,970,000,000,000,000,000,000,000
4
patch 9.0.0011: reading beyond the end of the line with put command Problem: Reading beyond the end of the line with put command. Solution: Adjust the end mark position.
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { return (pmd_t *)get_zeroed_page(GFP_KERNEL | __GFP_REPEAT); }
0
[]
linux
1d18c47c735e8adfe531fc41fae31e98f86b68fe
22,095,490,338,007,600,000,000,000,000,000,000,000
4
arm64: MMU fault handling and page table management This patch adds support for the handling of the MMU faults (exception entry code introduced by a previous patch) and page table management. The user translation table is pointed to by TTBR0 and the kernel one (swapper_pg_dir) by TTBR1. There is no translation information shared or address space overlapping between user and kernel page tables. Signed-off-by: Will Deacon <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Acked-by: Tony Lindgren <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Acked-by: Olof Johansson <[email protected]> Acked-by: Santosh Shilimkar <[email protected]> Acked-by: Arnd Bergmann <[email protected]>
static void ZSTD_reduceTable(U32* const table, U32 const size, U32 const reducerValue) { ZSTD_reduceTable_internal(table, size, reducerValue, 0); }
0
[ "CWE-362" ]
zstd
3e5cdf1b6a85843e991d7d10f6a2567c15580da0
247,021,452,173,671,130,000,000,000,000,000,000,000
4
fixed T36302429
transit_unintern (struct transit *transit) { if (transit->refcnt) transit->refcnt--; if (transit->refcnt == 0) { hash_release (transit_hash, transit); transit_free (transit); } }
0
[]
quagga
8794e8d229dc9fe29ea31424883433d4880ef408
245,946,923,394,656,300,000,000,000,000,000,000,000
11
bgpd: Fix regression in args consolidation, total should be inited from args * bgp_attr.c: (bgp_attr_unknown) total should be initialised from the args.
static int atusb_start(struct ieee802154_hw *hw) { struct atusb *atusb = hw->priv; struct usb_device *usb_dev = atusb->usb_dev; int ret; dev_dbg(&usb_dev->dev, "atusb_start\n"); schedule_delayed_work(&atusb->work, 0); atusb_command(atusb, ATUSB_RX_MODE, 1); ret = atusb_get_and_clear_error(atusb); if (ret < 0) usb_kill_anchored_urbs(&atusb->idle_urbs); return ret; }
0
[ "CWE-399", "CWE-119" ]
linux
05a974efa4bdf6e2a150e3f27dc6fcf0a9ad5655
100,745,878,291,079,000,000,000,000,000,000,000,000
14
ieee802154: atusb: do not use the stack for buffers to make them DMA able From 4.9 we should really avoid using the stack here as this will not be DMA able on various platforms. This changes the buffers already being present in time of 4.9 being released. This should go into stable as well. Reported-by: Dan Carpenter <[email protected]> Cc: [email protected] Signed-off-by: Stefan Schmidt <[email protected]> Signed-off-by: Marcel Holtmann <[email protected]>
void strk_del(GF_Box *s) { GF_SubTrackBox *ptr = (GF_SubTrackBox *)s; if (ptr == NULL) return; if (ptr->info) gf_isom_box_del((GF_Box *)ptr->info); gf_free(ptr); }
0
[ "CWE-125" ]
gpac
bceb03fd2be95097a7b409ea59914f332fb6bc86
335,968,246,326,047,400,000,000,000,000,000,000,000
7
fixed 2 possible heap overflows (inc. #1088)
GF_Err hinf_box_read(GF_Box *s, GF_BitStream *bs) { return gf_isom_box_array_read(s, bs, hinf_on_child_box); }
0
[ "CWE-787" ]
gpac
388ecce75d05e11fc8496aa4857b91245007d26e
249,989,443,595,036,150,000,000,000,000,000,000,000
4
fixed #1587
u32 gf_isom_get_avc_svc_type(GF_ISOFile *the_file, u32 trackNumber, u32 DescriptionIndex) { u32 type; GF_TrackBox *trak; GF_MPEGVisualSampleEntryBox *entry; trak = gf_isom_get_track_from_file(the_file, trackNumber); if (!trak || !trak->Media || !DescriptionIndex) return GF_ISOM_AVCTYPE_NONE; if (trak->Media->handler->handlerType != GF_ISOM_MEDIA_VISUAL) return GF_ISOM_AVCTYPE_NONE; entry = (GF_MPEGVisualSampleEntryBox*)gf_list_get(trak->Media->information->sampleTable->SampleDescription->other_boxes, DescriptionIndex-1); if (!entry) return GF_ISOM_AVCTYPE_NONE; type = entry->type; if (type == GF_ISOM_BOX_TYPE_ENCV) { GF_ProtectionSchemeInfoBox *sinf = (GF_ProtectionSchemeInfoBox *) gf_list_get(entry->protections, 0); if (sinf && sinf->original_format) type = sinf->original_format->data_format; } else if (type == GF_ISOM_BOX_TYPE_RESV) { if (entry->rinf && entry->rinf->original_format) type = entry->rinf->original_format->data_format; } switch (type) { case GF_ISOM_BOX_TYPE_AVC1: case GF_ISOM_BOX_TYPE_AVC2: case GF_ISOM_BOX_TYPE_AVC3: case GF_ISOM_BOX_TYPE_AVC4: case GF_ISOM_BOX_TYPE_SVC1: case GF_ISOM_BOX_TYPE_MVC1: break; default: return GF_ISOM_AVCTYPE_NONE; } if (entry->avc_config && !entry->svc_config && !entry->mvc_config) return GF_ISOM_AVCTYPE_AVC_ONLY; if (entry->avc_config && entry->svc_config) return GF_ISOM_AVCTYPE_AVC_SVC; if (entry->avc_config && entry->mvc_config) return GF_ISOM_AVCTYPE_AVC_MVC; if (!entry->avc_config && entry->svc_config) return GF_ISOM_AVCTYPE_SVC_ONLY; if (!entry->avc_config && entry->mvc_config) return GF_ISOM_AVCTYPE_MVC_ONLY; return GF_ISOM_AVCTYPE_NONE; }
0
[ "CWE-119", "CWE-787" ]
gpac
90dc7f853d31b0a4e9441cba97feccf36d8b69a4
334,178,068,162,584,400,000,000,000,000,000,000,000
39
fix some exploitable overflows (#994, #997)
MagickExport void UnregisterStaticModules(void) { size_t extent; ssize_t i; extent=sizeof(MagickModules)/sizeof(MagickModules[0]); for (i=0; i < (ssize_t) extent; i++) { if (MagickModules[i].registered != MagickFalse) { (MagickModules[i].unregister_module)(); MagickModules[i].registered=MagickFalse; } } }
0
[ "CWE-200", "CWE-362" ]
ImageMagick
01faddbe2711a4156180c4a92837e2f23683cc68
273,473,413,827,484,700,000,000,000,000,000,000,000
18
Use the correct rights.
SplashRadialPattern::~SplashRadialPattern() { }
0
[ "CWE-369" ]
poppler
b224e2f5739fe61de9fa69955d016725b2a4b78d
197,114,180,744,366,560,000,000,000,000,000,000,000
2
SplashOutputDev::tilingPatternFill: Fix crash on broken file Issue #802
static int io_timeout(struct io_kiocb *req, unsigned int issue_flags) { struct io_ring_ctx *ctx = req->ctx; struct io_timeout_data *data = req->async_data; struct list_head *entry; u32 tail, off = req->timeout.off; spin_lock_irq(&ctx->timeout_lock); /* * sqe->off holds how many events that need to occur for this * timeout event to be satisfied. If it isn't set, then this is * a pure timeout request, sequence isn't used. */ if (io_is_timeout_noseq(req)) { entry = ctx->timeout_list.prev; goto add; } tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); req->timeout.target_seq = tail + off; /* Update the last seq here in case io_flush_timeouts() hasn't. * This is safe because ->completion_lock is held, and submissions * and completions are never mixed in the same ->completion_lock section. */ ctx->cq_last_tm_flush = tail; /* * Insertion sort, ensuring the first entry in the list is always * the one we need first. */ list_for_each_prev(entry, &ctx->timeout_list) { struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, timeout.list); if (io_is_timeout_noseq(nxt)) continue; /* nxt.seq is behind @tail, otherwise would've been completed */ if (off >= nxt->timeout.target_seq - tail) break; } add: list_add(&req->timeout.list, entry); data->timer.function = io_timeout_fn; hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode); spin_unlock_irq(&ctx->timeout_lock); return 0;
}
0
[ "CWE-416" ]
linux
e677edbcabee849bfdd43f1602bccbecf736a646
252,079,926,207,792,040,000,000,000,000,000,000,000
49
io_uring: fix race between timeout flush and removal io_flush_timeouts() assumes the timeout isn't in progress of triggering or being removed/canceled, so it unconditionally removes it from the timeout list and attempts to cancel it. Leave it on the list and let the normal timeout cancelation take care of it. Cc: [email protected] # 5.5+ Signed-off-by: Jens Axboe <[email protected]>
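The insertion-sort walk in `io_timeout()` above compares `off >= nxt->timeout.target_seq - tail` rather than comparing absolute sequence numbers; because the operands are `u32`, the subtraction wraps modulo 2^32 and the ordering stays correct across sequence-number wraparound. A minimal sketch of that comparison (illustrative names, not the kernel's):

```c
#include <stdint.h>

/* Returns nonzero if a timeout that fires "off" completions from now
 * should sit at or after a neighbor whose target sequence is
 * nxt_target. nxt_target - tail wraps modulo 2^32, yielding the
 * neighbor's remaining offset even when the counter has wrapped. */
static int fires_no_earlier_than(uint32_t off, uint32_t nxt_target, uint32_t tail)
{
    return off >= nxt_target - tail;
}
```

Doing all arithmetic on offsets relative to the current tail is what lets a free-running 32-bit completion counter be used for ordering without any special wraparound handling.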
static PyObject *wsgi_signal_intercept(PyObject *self, PyObject *args) { PyObject *h = NULL; int n = 0; PyObject *m = NULL; if (!PyArg_ParseTuple(args, "iO:signal", &n, &h)) return NULL; Py_BEGIN_ALLOW_THREADS ap_log_error(APLOG_MARK, WSGI_LOG_WARNING(0), wsgi_server, "mod_wsgi (pid=%d): Callback registration for " "signal %d ignored.", getpid(), n); Py_END_ALLOW_THREADS m = PyImport_ImportModule("traceback"); if (m) { PyObject *d = NULL; PyObject *o = NULL; d = PyModule_GetDict(m); o = PyDict_GetItemString(d, "print_stack"); if (o) { PyObject *log = NULL; PyObject *args = NULL; PyObject *result = NULL; Py_INCREF(o); log = newLogObject(NULL, APLOG_WARNING, NULL); args = Py_BuildValue("(OOO)", Py_None, Py_None, log); result = PyEval_CallObject(o, args); Py_XDECREF(result); Py_DECREF(args); Py_DECREF(log); Py_DECREF(o); } } Py_XDECREF(m); Py_INCREF(h); return h; }
0
[ "CWE-264" ]
mod_wsgi
d9d5fea585b23991f76532a9b07de7fcd3b649f4
12,318,669,523,461,873,000,000,000,000,000,000,000
44
Local privilege escalation when using daemon mode. (CVE-2014-0240)
inline void Comparison(const T* input1_data, const Dims<4>& input1_dims, const T* input2_data, const Dims<4>& input2_dims, bool* output_data, const Dims<4>& output_dims) { ComparisonParams op_params; // No parameters needed. ComparisonImpl<T, F>(op_params, DimsToShape(input1_dims), input1_data, DimsToShape(input2_dims), input2_data, DimsToShape(output_dims), output_data); }
0
[ "CWE-703", "CWE-835" ]
tensorflow
dfa22b348b70bb89d6d6ec0ff53973bacb4f4695
119,385,806,458,119,620,000,000,000,000,000,000,000
9
Prevent a division by 0 in average ops. PiperOrigin-RevId: 385184660 Change-Id: I7affd4554f9b336fca29ac68f633232c094d0bd3
allocate_trace_buffer(struct trace_array *tr, struct trace_buffer *buf, int size) { enum ring_buffer_flags rb_flags; rb_flags = tr->trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0; buf->tr = tr; buf->buffer = ring_buffer_alloc(size, rb_flags); if (!buf->buffer) return -ENOMEM; buf->data = alloc_percpu(struct trace_array_cpu); if (!buf->data) { ring_buffer_free(buf->buffer); return -ENOMEM; } /* Allocate the first page for all buffers */ set_buffer_entries(&tr->trace_buffer, ring_buffer_size(tr->trace_buffer.buffer, 0)); return 0; }
1
[ "CWE-415" ]
linux
4397f04575c44e1440ec2e49b6302785c95fd2f8
326,507,675,290,447,440,000,000,000,000,000,000,000
24
tracing: Fix possible double free on failure of allocating trace buffer Jing Xia and Chunyan Zhang reported that on failing to allocate part of the tracing buffer, memory is freed, but the pointers that point to them are not initialized back to NULL, and later paths may try to free the freed memory again. Jing and Chunyan fixed one of the locations that does this, but missed a spot. Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code") Reported-by: Jing Xia <[email protected]> Reported-by: Chunyan Zhang <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
CAMLexport value caml_input_value_from_block(char * data, intnat len) { uint32_t magic; mlsize_t block_len; value obj; intern_input = (unsigned char *) data; intern_src = intern_input; intern_input_malloced = 0; magic = read32u(); if (magic != Intext_magic_number) caml_failwith("input_value_from_block: bad object"); block_len = read32u(); if (5*4 + block_len > len) caml_failwith("input_value_from_block: bad block length"); obj = input_val_from_block(); return obj; }
0
[ "CWE-200" ]
ocaml
659615c7b100a89eafe6253e7a5b9d84d0e8df74
8,525,045,821,093,520,000,000,000,000,000,000,000
18
fix PR#7003 and a few other bugs caused by misuse of Int_val git-svn-id: http://caml.inria.fr/svn/ocaml/trunk@16525 f963ae5c-01c2-4b8c-9fe0-0dff7051ff02
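Length guards of the shape seen in `caml_input_value_from_block` — `5*4 + block_len > len` — belong to a class that can wrap when the length operand approaches its type's maximum, letting an oversized value slip past the check. A common hardening, shown here as a hedged sketch (the helper name is illustrative, not the OCaml runtime's API), subtracts the fixed header from the total instead of adding it to the untrusted length:

```c
#include <stddef.h>
#include <stdint.h>

#define HEADER_BYTES (5 * 4)  /* header size used by the marshalled format above */

/* Overflow-safe variant of "HEADER_BYTES + block_len > total_len":
 * subtract the header from the known-good total instead of adding it
 * to the attacker-controlled length, so nothing can wrap. */
static int block_fits(uint32_t block_len, size_t total_len)
{
    if (total_len < HEADER_BYTES)
        return 0;               /* no room even for the header */
    return (size_t)block_len <= total_len - HEADER_BYTES;
}
```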
bool Track::VetEntry(const BlockEntry* pBlockEntry) const { assert(pBlockEntry); const Block* const pBlock = pBlockEntry->GetBlock(); assert(pBlock); assert(pBlock->GetTrackNumber() == m_info.number); if (!pBlock || pBlock->GetTrackNumber() != m_info.number) return false; // This function is used during a seek to determine whether the // frame is a valid seek target. This default function simply // returns true, which means all frames are valid seek targets. // It gets overridden by the VideoTrack class, because only video // keyframes can be used as seek target. return true; }
0
[ "CWE-20" ]
libvpx
34d54b04e98dd0bac32e9aab0fbda0bf501bc742
199,113,294,451,367,900,000,000,000,000,000,000,000
16
update libwebm to libwebm-1.0.0.27-358-gdbf1d10 changelog: https://chromium.googlesource.com/webm/libwebm/+log/libwebm-1.0.0.27-351-g9f23fbc..libwebm-1.0.0.27-358-gdbf1d10 Change-Id: I28a6b3ae02a53fb1f2029eee11e9449afb94c8e3
static void FVMouse(FontView *fv, GEvent *event) { int pos = (event->u.mouse.y/fv->cbh + fv->rowoff)*fv->colcnt + event->u.mouse.x/fv->cbw; int gid; int realpos = pos; SplineChar *sc, dummy; int dopopup = true; if ( event->type==et_mousedown ) CVPaletteDeactivate(); if ( pos<0 ) { pos = 0; dopopup = false; } else if ( pos>=fv->b.map->enccount ) { pos = fv->b.map->enccount-1; if ( pos<0 ) /* No glyph slots in font */ return; dopopup = false; } sc = (gid=fv->b.map->map[pos])!=-1 ? fv->b.sf->glyphs[gid] : NULL; if ( sc==NULL ) sc = SCBuildDummy(&dummy,fv->b.sf,fv->b.map,pos); if ( event->type == et_mouseup && event->u.mouse.clicks==2 ) { if ( fv->pressed ) { GDrawCancelTimer(fv->pressed); fv->pressed = NULL; } if ( fv->b.container!=NULL && fv->b.container->funcs->is_modal ) return; if ( fv->cur_subtable!=NULL ) { sc = FVMakeChar(fv,pos); pos = fv->b.map->backmap[sc->orig_pos]; } if ( sc==&dummy ) { sc = SFMakeChar(fv->b.sf,fv->b.map,pos); gid = fv->b.map->map[pos]; } if ( fv->show==fv->filled ) { SplineFont *sf = fv->b.sf; gid = -1; if ( !OpenCharsInNewWindow ) for ( gid=sf->glyphcnt-1; gid>=0; --gid ) if ( sf->glyphs[gid]!=NULL && sf->glyphs[gid]->views!=NULL ) break; if ( gid!=-1 ) { CharView *cv = (CharView *) (sf->glyphs[gid]->views); // printf("calling CVChangeSC() sc:%p %s\n", sc, sc->name ); CVChangeSC(cv,sc); GDrawSetVisible(cv->gw,true); GDrawRaise(cv->gw); } else CharViewCreate(sc,fv,pos); } else { BDFFont *bdf = fv->show; BDFChar *bc =BDFMakeGID(bdf,gid); gid = -1; if ( !OpenCharsInNewWindow ) for ( gid=bdf->glyphcnt-1; gid>=0; --gid ) if ( bdf->glyphs[gid]!=NULL && bdf->glyphs[gid]->views!=NULL ) break; if ( gid!=-1 ) { BitmapView *bv = bdf->glyphs[gid]->views; BVChangeBC(bv,bc,true); GDrawSetVisible(bv->gw,true); GDrawRaise(bv->gw); } else BitmapViewCreate(bc,bdf,fv,pos); } } else if ( event->type == et_mousemove ) { if ( dopopup ) SCPreparePopup(fv->v,sc,fv->b.map->remap,pos,sc==&dummy?dummy.unicodeenc: UniFromEnc(pos,fv->b.map->enc)); } if ( event->type 
== et_mousedown ) { if ( fv->drag_and_drop ) { GDrawSetCursor(fv->v,ct_mypointer); fv->any_dd_events_sent = fv->drag_and_drop = false; } if ( !(event->u.mouse.state&ksm_shift) && event->u.mouse.clicks<=1 ) { if ( !fv->b.selected[pos] ) FVDeselectAll(fv); else if ( event->u.mouse.button!=3 ) { fv->drag_and_drop = fv->has_dd_no_cursor = true; fv->any_dd_events_sent = false; GDrawSetCursor(fv->v,ct_prohibition); GDrawGrabSelection(fv->v,sn_drag_and_drop); GDrawAddSelectionType(fv->v,sn_drag_and_drop,"STRING",fv,0,sizeof(char), ddgencharlist,noop); } } fv->pressed_pos = fv->end_pos = pos; FVShowInfo(fv); if ( !fv->drag_and_drop ) { if ( !(event->u.mouse.state&ksm_shift)) fv->sel_index = 1; else if ( fv->sel_index<255 ) ++fv->sel_index; if ( fv->pressed!=NULL ) { GDrawCancelTimer(fv->pressed); fv->pressed = NULL; } else if ( event->u.mouse.state&ksm_shift ) { fv->b.selected[pos] = fv->b.selected[pos] ? 0 : fv->sel_index; FVToggleCharSelected(fv,pos); } else if ( !fv->b.selected[pos] ) { fv->b.selected[pos] = fv->sel_index; FVToggleCharSelected(fv,pos); } if ( event->u.mouse.button==3 ) GMenuCreatePopupMenuWithName(fv->v,event, "Popup", fvpopupmenu); else fv->pressed = GDrawRequestTimer(fv->v,200,100,NULL); } } else if ( fv->drag_and_drop ) { GWindow othergw = GDrawGetPointerWindow(fv->v); if ( othergw==fv->v || othergw==fv->gw || othergw==NULL ) { if ( !fv->has_dd_no_cursor ) { fv->has_dd_no_cursor = true; GDrawSetCursor(fv->v,ct_prohibition); } } else { if ( fv->has_dd_no_cursor ) { fv->has_dd_no_cursor = false; GDrawSetCursor(fv->v,ct_ddcursor); } } if ( event->type==et_mouseup ) { if ( pos!=fv->pressed_pos ) { GDrawPostDragEvent(fv->v,event,event->type==et_mouseup?et_drop:et_drag); fv->any_dd_events_sent = true; } fv->drag_and_drop = fv->has_dd_no_cursor = false; GDrawSetCursor(fv->v,ct_mypointer); if ( !fv->any_dd_events_sent ) FVDeselectAll(fv); fv->any_dd_events_sent = false; } } else if ( fv->pressed!=NULL ) { int showit = realpos!=fv->end_pos; 
FVReselect(fv,realpos); if ( showit ) FVShowInfo(fv); if ( event->type==et_mouseup ) { GDrawCancelTimer(fv->pressed); fv->pressed = NULL; } } if ( event->type==et_mouseup && dopopup ) SCPreparePopup(fv->v,sc,fv->b.map->remap,pos,sc==&dummy?dummy.unicodeenc: UniFromEnc(pos,fv->b.map->enc)); if ( event->type==et_mouseup ) SVAttachFV(fv,2); }
0
[ "CWE-119", "CWE-787" ]
fontforge
626f751752875a0ddd74b9e217b6f4828713573c
185,998,352,567,974,430,000,000,000,000,000,000,000
151
Warn users before discarding their unsaved scripts (#3852) * Warn users before discarding their unsaved scripts This closes #3846.
static int tls1_in_list(uint16_t id, const uint16_t *list, size_t listlen) { size_t i; for (i = 0; i < listlen; i++) if (list[i] == id) return 1; return 0; }
0
[ "CWE-476" ]
openssl
5235ef44b93306a14d0b6c695b13c64b16e1fdec
111,160,820,390,806,700,000,000,000,000,000,000,000
8
Fix SSL_check_chain() The function SSL_check_chain() can be used by applications to check that a cert and chain is compatible with the negotiated parameters. This could be useful (for example) from the certificate callback. Unfortunately this function was applying TLSv1.2 sig algs rules and did not work correctly if TLSv1.3 was negotiated. We refactor tls_choose_sigalg to split it up and create a new function find_sig_alg which can (optionally) take a certificate and key as parameters and find an appropriate sig alg if one exists. If the cert and key are not supplied then we try to find a cert and key from the ones we have available that matches the shared sig algs. Reviewed-by: Tomas Mraz <[email protected]> (Merged from https://github.com/openssl/openssl/pull/9442)
attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) {}
0
[ "CWE-400", "CWE-703", "CWE-835" ]
linux
c40f7d74c741a907cfaeb73a7697081881c497d0
14,425,259,749,862,835,000,000,000,000,000,000,000
1
sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c Zhipeng Xie, Xie XiuQi and Sargun Dhillon reported lockups in the scheduler under high loads, starting at around the v4.18 time frame, and Zhipeng Xie tracked it down to bugs in the rq->leaf_cfs_rq_list manipulation. Do a (manual) revert of: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path") It turns out that the list_del_leaf_cfs_rq() introduced by this commit is a surprising property that was not considered in followup commits such as: 9c2791f936ef ("sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list") As Vincent Guittot explains: "I think that there is a bigger problem with commit a9e7f6544b9c and cfs_rq throttling: Let take the example of the following topology TG2 --> TG1 --> root: 1) The 1st time a task is enqueued, we will add TG2 cfs_rq then TG1 cfs_rq to leaf_cfs_rq_list and we are sure to do the whole branch in one path because it has never been used and can't be throttled so tmp_alone_branch will point to leaf_cfs_rq_list at the end. 2) Then TG1 is throttled 3) and we add TG3 as a new child of TG1. 4) The 1st enqueue of a task on TG3 will add TG3 cfs_rq just before TG1 cfs_rq and tmp_alone_branch will stay on rq->leaf_cfs_rq_list. With commit a9e7f6544b9c, we can del a cfs_rq from rq->leaf_cfs_rq_list. So if the load of TG1 cfs_rq becomes NULL before step 2) above, TG1 cfs_rq is removed from the list. Then at step 4), TG3 cfs_rq is added at the beginning of rq->leaf_cfs_rq_list but tmp_alone_branch still points to TG3 cfs_rq because its throttled parent can't be enqueued when the lock is released. tmp_alone_branch doesn't point to rq->leaf_cfs_rq_list whereas it should. So if TG3 cfs_rq is removed or destroyed before tmp_alone_branch points on another TG cfs_rq, the next TG cfs_rq that will be added, will be linked outside rq->leaf_cfs_rq_list - which is bad. 
In addition, we can break the ordering of the cfs_rq in rq->leaf_cfs_rq_list but this ordering is used to update and propagate the update from leaf down to root." Instead of trying to work through all these cases and trying to reproduce the very high loads that produced the lockup to begin with, simplify the code temporarily by reverting a9e7f6544b9c - which change was clearly not thought through completely. This (hopefully) gives us a kernel that doesn't lock up so people can continue to enjoy their holidays without worrying about regressions. ;-) [ mingo: Wrote changelog, fixed weird spelling in code comment while at it. ] Analyzed-by: Xie XiuQi <[email protected]> Analyzed-by: Vincent Guittot <[email protected]> Reported-by: Zhipeng Xie <[email protected]> Reported-by: Sargun Dhillon <[email protected]> Reported-by: Xie XiuQi <[email protected]> Tested-by: Zhipeng Xie <[email protected]> Tested-by: Sargun Dhillon <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Acked-by: Vincent Guittot <[email protected]> Cc: <[email protected]> # v4.13+ Cc: Bin Li <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Fixes: a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
static int verify_anonymous_token(sd_bus *b, const char *p, size_t l) {
        _cleanup_free_ char *token = NULL;
        size_t len;
        int r;

        if (!b->anonymous_auth)
                return 0;

        if (l <= 0)
                return 1;

        assert(p[0] == ' ');
        p++; l--;

        if (l % 2 != 0)
                return 0;

        r = unhexmem(p, l, (void **) &token, &len);
        if (r < 0)
                return 0;

        if (memchr(token, 0, len))
                return 0;

        return !!utf8_is_valid(token);
}
0
[ "CWE-787" ]
systemd
6d586a13717ae057aa1b4127400c3de61cd5b9e7
13,926,001,930,210,580,000,000,000,000,000,000,000
26
sd-bus: if we receive an invalid dbus message, ignore and proceeed dbus-daemon might have a slightly different idea of what a valid msg is than us (for example regarding valid msg and field sizes). Let's hence try to proceed if we can and thus drop messages rather than fail the connection if we fail to validate a message. Hopefully the differences in what is considered valid are not visible for real-life usecases, but are specific to exploit attempts only.
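A userspace sketch of the trace validation that `verify_anonymous_token` performs (hypothetical helper: the real code unhexes the trace with `unhexmem` and then rejects embedded NUL bytes and invalid UTF-8; here we only model the hex-shape check):

```c
#include <ctype.h>
#include <stddef.h>

/* The optional ANONYMOUS auth trace must decode as hex: an even number
 * of characters, all of them hex digits. An empty trace is acceptable. */
static int anonymous_trace_ok(const char *p, size_t l)
{
    size_t i;

    if (l == 0)
        return 1;
    if (l % 2 != 0)
        return 0;               /* unhexing needs an even digit count */
    for (i = 0; i < l; i++)
        if (!isxdigit((unsigned char) p[i]))
            return 0;
    return 1;
}
```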
GF_Box *stss_box_new()
{
	ISOM_DECL_BOX_ALLOC(GF_SyncSampleBox, GF_ISOM_BOX_TYPE_STSS);
	return (GF_Box*)tmp;
}
0
[ "CWE-787" ]
gpac
388ecce75d05e11fc8496aa4857b91245007d26e
94,620,132,188,464,930,000,000,000,000,000,000,000
5
fixed #1587
rsvg_bbox_clip (RsvgBbox * dst, RsvgBbox * src)
{
    cairo_matrix_t affine;
    double xmin, ymin;
    double xmax, ymax;
    int i;

    if (src->virgin)
        return;

    if (!dst->virgin) {
        xmin = dst->rect.x + dst->rect.width, ymin = dst->rect.y + dst->rect.height;
        xmax = dst->rect.x, ymax = dst->rect.y;
    } else {
        xmin = ymin = xmax = ymax = 0;
    }

    affine = dst->affine;
    if (cairo_matrix_invert (&affine) != CAIRO_STATUS_SUCCESS)
        return;
    cairo_matrix_multiply (&affine, &src->affine, &affine);

    for (i = 0; i < 4; i++) {
        double rx, ry, x, y;
        rx = src->rect.x + src->rect.width * (double) (i % 2);
        ry = src->rect.y + src->rect.height * (double) (i / 2);
        x = affine.xx * rx + affine.xy * ry + affine.x0;
        y = affine.yx * rx + affine.yy * ry + affine.y0;
        if (dst->virgin) {
            xmin = xmax = x;
            ymin = ymax = y;
            dst->virgin = 0;
        } else {
            if (x < xmin)
                xmin = x;
            if (x > xmax)
                xmax = x;
            if (y < ymin)
                ymin = y;
            if (y > ymax)
                ymax = y;
        }
    }

    if (xmin < dst->rect.x)
        xmin = dst->rect.x;
    if (ymin < dst->rect.y)
        ymin = dst->rect.y;
    if (xmax > dst->rect.x + dst->rect.width)
        xmax = dst->rect.x + dst->rect.width;
    if (ymax > dst->rect.y + dst->rect.height)
        ymax = dst->rect.y + dst->rect.height;

    dst->rect.x = xmin;
    dst->rect.width = xmax - xmin;
    dst->rect.y = ymin;
    dst->rect.height = ymax - ymin;
}
0
[ "CWE-20" ]
librsvg
d83e426fff3f6d0fa6042d0930fb70357db24125
284,217,366,725,746,770,000,000,000,000,000,000,000
59
io: Use XML_PARSE_NONET We don't want to load resources off the net. Bug #691708.
void zero_fill_bio(struct bio *bio)
{
	unsigned long flags;
	struct bio_vec bv;
	struct bvec_iter iter;

	bio_for_each_segment(bv, bio, iter) {
		char *data = bvec_kmap_irq(&bv, &flags);
		memset(data, 0, bv.bv_len);
		flush_dcache_page(bv.bv_page);
		bvec_kunmap_irq(data, &flags);
	}
}
0
[ "CWE-772", "CWE-787" ]
linux
95d78c28b5a85bacbc29b8dba7c04babb9b0d467
43,788,537,836,456,420,000,000,000,000,000,000,000
13
fix unbalanced page refcounting in bio_map_user_iov bio_map_user_iov and bio_unmap_user do unbalanced pages refcounting if IO vector has small consecutive buffers belonging to the same page. bio_add_pc_page merges them into one, but the page reference is never dropped. Cc: [email protected] Signed-off-by: Vitaly Mayatskikh <[email protected]> Signed-off-by: Al Viro <[email protected]>
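A toy counter model of the imbalance the commit fixes (an assumed simplification, not the real bio code): each mapped segment takes a page reference, segments landing in the same page are merged into one bio_vec entry, and unmap later drops only one reference per entry.

```c
/* Simulate mapping 'segments_in_page' consecutive buffers that all fall
 * in one page, then unmapping. 'release_merged' models the fix: give
 * back the references held by segments that were merged away. Returns
 * the net reference count; non-zero means leaked page references. */
static int map_and_unmap(int segments_in_page, int release_merged)
{
    int refs = 0;
    for (int i = 0; i < segments_in_page; i++)
        refs++;                           /* get_page() per segment */
    /* bio_add_pc_page merges them all into one bio_vec entry */
    if (release_merged)
        refs -= segments_in_page - 1;     /* put_page() for merged segments */
    refs--;                               /* unmap: one put per bio_vec entry */
    return refs;
}
```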
mptctl_remove(struct pci_dev *pdev) { }
0
[ "CWE-362", "CWE-369" ]
linux
28d76df18f0ad5bcf5fa48510b225f0ed262a99b
75,024,292,813,702,340,000,000,000,000,000,000,000
3
scsi: mptfusion: Fix double fetch bug in ioctl Tom Hatskevich reported that we look up "iocp" then, in the called functions we do a second copy_from_user() and look it up again. The problem that could cause is: drivers/message/fusion/mptctl.c 674 /* All of these commands require an interrupt or 675 * are unknown/illegal. 676 */ 677 if ((ret = mptctl_syscall_down(iocp, nonblock)) != 0) ^^^^ We take this lock. 678 return ret; 679 680 if (cmd == MPTFWDOWNLOAD) 681 ret = mptctl_fw_download(arg); ^^^ Then the user memory changes and we look up "iocp" again but a different one so now we are holding the incorrect lock and have a race condition. 682 else if (cmd == MPTCOMMAND) 683 ret = mptctl_mpt_command(arg); The security impact of this bug is not as bad as it could have been because these operations are all privileged and root already has enormous destructive power. But it's still worth fixing. This patch passes the "iocp" pointer to the functions to avoid the second lookup. That deletes 100 lines of code from the driver so it's a nice clean up as well. Link: https://lore.kernel.org/r/20200114123414.GA7957@kadam Reported-by: Tom Hatskevich <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
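The race the commit describes is the classic double-fetch pattern: validate one `copy_from_user()` snapshot, then act on a second one. A deterministic userspace sketch (hypothetical names; the write between the two fetches stands in for the racing user-space thread):

```c
/* Buggy pattern: fetch, validate, fetch again before use. */
static int use_double_fetch(volatile int *ubuf)
{
    int first = *ubuf;            /* fetch #1: validate */
    if (first != 1)
        return -1;
    *ubuf = 99;                   /* simulated racing user write */
    return *ubuf;                 /* fetch #2: use - sees 99, not the
                                     value that passed validation */
}

/* Fixed pattern: fetch once, validate and use the same snapshot. */
static int use_single_fetch(volatile int *ubuf)
{
    int snap = *ubuf;             /* single fetch */
    if (snap != 1)
        return -1;
    *ubuf = 99;                   /* the racing write changes nothing */
    return snap;
}
```

The patch applies the same idea at a pointer level: look up `iocp` once and pass it down, instead of re-reading user memory in the callees.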
void * CAPSTONE_API cs_winkernel_calloc(size_t n, size_t size)
{
	size_t total = n * size;

	void *new_ptr = cs_winkernel_malloc(total);
	if (!new_ptr) {
		return NULL;
	}

	return RtlFillMemory(new_ptr, total, 0);
}
0
[ "CWE-190" ]
capstone
6fe86eef621b9849f51a5e1e5d73258a93440403
297,668,375,766,999,730,000,000,000,000,000,000,000
11
provide a validity check to prevent against Integer overflow conditions (#870) * provide a validity check to prevent against Integer overflow conditions * fix some style issues.
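The `n * size` product in `cs_winkernel_calloc` above can wrap around `SIZE_MAX`, so a huge `n` yields a tiny allocation. One common form of the validity check the commit adds (a sketch over `malloc`/`memset`, not the exact Capstone patch):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Overflow-checked calloc: reject products n * size that would wrap
 * before any allocation happens. */
static void *checked_calloc(size_t n, size_t size)
{
    if (size != 0 && n > SIZE_MAX / size)
        return NULL;                 /* n * size would overflow */
    void *p = malloc(n * size);
    if (p != NULL)
        memset(p, 0, n * size);
    return p;
}
```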
SendReceiveNoRsp(const unsigned int xid, struct cifs_ses *ses, char *in_buf,
		 int flags)
{
	int rc;
	struct kvec iov[1];
	int resp_buf_type;

	iov[0].iov_base = in_buf;
	iov[0].iov_len = get_rfc1002_length(in_buf) + 4;
	flags |= CIFS_NO_RESP;

	rc = SendReceive2(xid, ses, iov, 1, &resp_buf_type, flags);
	cFYI(DBG2, "SendRcvNoRsp flags %d rc %d", flags, rc);

	return rc;
}
0
[ "CWE-362" ]
linux
ea702b80e0bbb2448e201472127288beb82ca2fe
191,237,478,528,185,800,000,000,000,000,000,000,000
15
cifs: move check for NULL socket into smb_send_rqst Cai reported this oops: [90701.616664] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028 [90701.625438] IP: [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.632167] PGD fea319067 PUD 103fda4067 PMD 0 [90701.637255] Oops: 0000 [#1] SMP [90701.640878] Modules linked in: des_generic md4 nls_utf8 cifs dns_resolver binfmt_misc tun sg igb iTCO_wdt iTCO_vendor_support lpc_ich pcspkr i2c_i801 i2c_core i7core_edac edac_core ioatdma dca mfd_core coretemp kvm_intel kvm crc32c_intel microcode sr_mod cdrom ata_generic sd_mod pata_acpi crc_t10dif ata_piix libata megaraid_sas dm_mirror dm_region_hash dm_log dm_mod [90701.677655] CPU 10 [90701.679808] Pid: 9627, comm: ls Tainted: G W 3.7.1+ #10 QCI QSSC-S4R/QSSC-S4R [90701.688950] RIP: 0010:[<ffffffff814a343e>] [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.698383] RSP: 0018:ffff88177b431bb8 EFLAGS: 00010206 [90701.704309] RAX: ffff88177b431fd8 RBX: 00007ffffffff000 RCX: ffff88177b431bec [90701.712271] RDX: 0000000000000003 RSI: 0000000000000006 RDI: 0000000000000000 [90701.720223] RBP: ffff88177b431bc8 R08: 0000000000000004 R09: 0000000000000000 [90701.728185] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001 [90701.736147] R13: ffff88184ef92000 R14: 0000000000000023 R15: ffff88177b431c88 [90701.744109] FS: 00007fd56a1a47c0(0000) GS:ffff88105fc40000(0000) knlGS:0000000000000000 [90701.753137] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [90701.759550] CR2: 0000000000000028 CR3: 000000104f15f000 CR4: 00000000000007e0 [90701.767512] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [90701.775465] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [90701.783428] Process ls (pid: 9627, threadinfo ffff88177b430000, task ffff88185ca4cb60) [90701.792261] Stack: [90701.794505] 0000000000000023 ffff88177b431c50 ffff88177b431c38 ffffffffa014fcb1 [90701.802809] ffff88184ef921bc 0000000000000000 
00000001ffffffff ffff88184ef921c0 [90701.811123] ffff88177b431c08 ffffffff815ca3d9 ffff88177b431c18 ffff880857758000 [90701.819433] Call Trace: [90701.822183] [<ffffffffa014fcb1>] smb_send_rqst+0x71/0x1f0 [cifs] [90701.828991] [<ffffffff815ca3d9>] ? schedule+0x29/0x70 [90701.834736] [<ffffffffa014fe6d>] smb_sendv+0x3d/0x40 [cifs] [90701.841062] [<ffffffffa014fe96>] smb_send+0x26/0x30 [cifs] [90701.847291] [<ffffffffa015801f>] send_nt_cancel+0x6f/0xd0 [cifs] [90701.854102] [<ffffffffa015075e>] SendReceive+0x18e/0x360 [cifs] [90701.860814] [<ffffffffa0134a78>] CIFSFindFirst+0x1a8/0x3f0 [cifs] [90701.867724] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.875601] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.883477] [<ffffffffa01578e6>] cifs_query_dir_first+0x26/0x30 [cifs] [90701.890869] [<ffffffffa015480d>] initiate_cifs_search+0xed/0x250 [cifs] [90701.898354] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.904486] [<ffffffffa01554cb>] cifs_readdir+0x45b/0x8f0 [cifs] [90701.911288] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.917410] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.923533] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.929657] [<ffffffff81195848>] vfs_readdir+0xb8/0xe0 [90701.935490] [<ffffffff81195b9f>] sys_getdents+0x8f/0x110 [90701.941521] [<ffffffff815d3b99>] system_call_fastpath+0x16/0x1b [90701.948222] Code: 66 90 55 65 48 8b 04 25 f0 c6 00 00 48 89 e5 53 48 83 ec 08 83 fe 01 48 8b 98 48 e0 ff ff 48 c7 80 48 e0 ff ff ff ff ff ff 74 22 <48> 8b 47 28 ff 50 68 65 48 8b 14 25 f0 c6 00 00 48 89 9a 48 e0 [90701.970313] RIP [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.977125] RSP <ffff88177b431bb8> [90701.981018] CR2: 0000000000000028 [90701.984809] ---[ end trace 24bd602971110a43 ]--- This is likely due to a race vs. a reconnection event. The current code checks for a NULL socket in smb_send_kvec, but that's too late. 
By the time that check is done, the socket will already have been passed to kernel_setsockopt. Move the check into smb_send_rqst, so that it's checked earlier. In truth, this is a bit of a half-assed fix. The -ENOTSOCK error return here looks like it could bubble back up to userspace. The locking rules around the ssocket pointer are really unclear as well. There are cases where the ssocket pointer is changed without holding the srv_mutex, but I'm not clear whether there's a potential race here yet or not. This code seems like it could benefit from some fundamental re-think of how the socket handling should behave. Until then though, this patch should at least fix the above oops in most cases. Cc: <[email protected]> # 3.7+ Reported-and-Tested-by: CAI Qian <[email protected]> Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: Steve French <[email protected]>
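The fix moves the NULL-socket check to the top of the send path, before any helper (like `kernel_setsockopt` in the oops above) can dereference it. A minimal sketch with a hypothetical struct and names:

```c
#include <errno.h>
#include <stddef.h>

/* Stand-in for the transport state; 'ssocket' may be NULL during a
 * reconnect race. */
struct server_sketch {
    void *ssocket;
};

/* Fail early with -ENOTSOCK instead of passing a NULL socket down to
 * helpers that would oops on it. */
static int send_rqst_sketch(struct server_sketch *server)
{
    if (server == NULL || server->ssocket == NULL)
        return -ENOTSOCK;
    return 0;   /* safe to hand ssocket to the send helpers */
}
```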