Dataset schema (per-column value ranges as given in the original header). Each record below lists its fields in this order: func, target, cwe, project, commit_id, hash, size, message.

- func: string, length 0 to 484k
- target: int64, 0 to 1
- cwe: list, 0 to 4 entries
- project: string, 799 distinct values
- commit_id: string, length 40 (a full git commit hash)
- hash: float64, 1,215,700,430,453,689,100,000,000B to 340,281,914,521,452,260,000,000,000,000B
- size: int64, 1 to 24k
- message: string, length 0 to 13.3k
NO_INLINE int getRadix(const char **s, int forceRadix, bool *hasError) { int radix = 10; if (forceRadix > 36) { if (hasError) *hasError = true; return 0; } if (**s == '0') { radix = 8; (*s)++; // OctalIntegerLiteral: 0o01, 0O01 if (**s == 'o' || **s == 'O') { radix = 8; if (forceRadix && forceRadix!=8 && forceRadix<25) return 0; (*s)++; // HexIntegerLiteral: 0x01, 0X01 } else if (**s == 'x' || **s == 'X') { radix = 16; if (forceRadix && forceRadix!=16 && forceRadix<34) return 0; (*s)++; // BinaryIntegerLiteral: 0b01, 0B01 } else if (**s == 'b' || **s == 'B') { radix = 2; if (forceRadix && forceRadix!=2 && forceRadix<12) return 0; else (*s)++; } else if (!forceRadix) { // check for '.' or digits 8 or 9 - if so it's decimal const char *p; for (p=*s;*p;p++) if (*p=='.' || *p=='8' || *p=='9') radix = 10; else if (*p<'0' || *p>'9') break; } } if (forceRadix>0 && forceRadix<=36) radix = forceRadix; return radix; }
0
[ "CWE-119", "CWE-787" ]
Espruino
0a7619875bf79877907205f6bee08465b89ff10b
82,385,644,382,472,285,000,000,000,000,000,000,000
45
Fix strncat/cpy bounding issues (fix #1425)
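The commit message above only names the strncat/strncpy bounding problem. As a generic sketch of that class of fix (not the actual Espruino patch; the helper name and buffer size below are hypothetical), a bounded copy should always leave the destination NUL-terminated:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: copy src into a fixed-size buffer without
 * overrunning it and always leave the result NUL-terminated.
 * strncpy() alone does not guarantee termination when src is too long. */
static void copy_bounded(char *dst, size_t dst_size, const char *src)
{
    if (dst_size == 0)
        return;
    strncpy(dst, src, dst_size - 1);  /* leave room for the terminator */
    dst[dst_size - 1] = '\0';         /* terminate even on truncation */
}

int main(void)
{
    char buf[8];
    copy_bounded(buf, sizeof(buf), "a string longer than the buffer");
    printf("%s\n", buf);              /* prints the truncated prefix */
    return 0;
}
```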
void t_cpp_generator::generate_deserialize_list_element(ofstream& out, t_list* tlist, string prefix, bool use_push, string index) { if (use_push) { string elem = tmp("_elem"); t_field felem(tlist->get_elem_type(), elem); indent(out) << declare_field(&felem) << endl; generate_deserialize_field(out, &felem); indent(out) << prefix << ".push_back(" << elem << ");" << endl; } else { t_field felem(tlist->get_elem_type(), prefix + "[" + index + "]"); generate_deserialize_field(out, &felem); } }
0
[ "CWE-20" ]
thrift
cfaadcc4adcfde2a8232c62ec89870b73ef40df1
198,274,664,361,117,560,000,000,000,000,000,000,000
16
THRIFT-3231 CPP: Limit recursion depth to 64 Client: cpp Patch: Ben Craig <[email protected]>
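The THRIFT-3231 message says the fix caps deserialization recursion depth at 64. The real change lives in Thrift's C++ protocol code; the C sketch below only shows the general pattern, with hypothetical names: thread a depth counter through the recursive decoder and fail once the limit is exceeded.

```c
#include <stdio.h>

#define MAX_RECURSION_DEPTH 64   /* limit taken from the commit message */

struct decoder {
    int depth;                   /* current nesting depth */
};

/* Hypothetical recursive decode step: returns 0 on success, -1 on error. */
static int decode_value(struct decoder *d, int nesting_remaining_in_input)
{
    if (++d->depth > MAX_RECURSION_DEPTH) {
        d->depth--;
        return -1;               /* reject pathologically nested input */
    }
    int rc = 0;
    if (nesting_remaining_in_input > 0)
        rc = decode_value(d, nesting_remaining_in_input - 1);
    d->depth--;
    return rc;
}

int main(void)
{
    struct decoder d = { 0 };
    printf("depth 10:  %d\n", decode_value(&d, 10));   /* 0  (accepted) */
    printf("depth 500: %d\n", decode_value(&d, 500));  /* -1 (rejected) */
    return 0;
}
```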
double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2) { spin_unlock(&hb1->lock); if (hb1 != hb2) spin_unlock(&hb2->lock); }
0
[]
linux-2.6
5ecb01cfdf96c5f465192bdb2a4fd4a61a24c6cc
170,182,949,102,844,300,000,000,000,000,000,000,000
6
futex_lock_pi() key refcnt fix This fixes a futex key reference count bug in futex_lock_pi(), where a key's reference count is incremented twice but decremented only once, causing the backing object to not be released. If the futex is created in a temporary file in an ext3 file system, this bug causes the file's inode to become an "undead" orphan, which causes an oops from a BUG_ON() in ext3_put_super() when the file system is unmounted. glibc's test suite is known to trigger this, see <http://bugzilla.kernel.org/show_bug.cgi?id=14256>. The bug is a regression from 2.6.28-git3, namely Peter Zijlstra's 38d47c1b7075bd7ec3881141bb3629da58f88dab "[PATCH] futex: rely on get_user_pages() for shared futexes". That commit made get_futex_key() also increment the reference count of the futex key, and updated its callers to decrement the key's reference count before returning. Unfortunately the normal exit path in futex_lock_pi() wasn't corrected: the reference count is incremented by get_futex_key() and queue_lock(), but the normal exit path only decrements once, via unqueue_me_pi(). The fix is to put_futex_key() after unqueue_me_pi(), since 2.6.31 this is easily done by 'goto out_put_key' rather than 'goto out'. Signed-off-by: Mikael Pettersson <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Acked-by: Darren Hart <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: <[email protected]>
JBIG2Bitmap::JBIG2Bitmap(unsigned int segNumA, int wA, int hA) : JBIG2Segment(segNumA) { w = wA; h = hA; int auxW; if (unlikely(checkedAdd(wA, 7, &auxW))) { error(errSyntaxError, -1, "invalid width"); data = nullptr; return; } line = auxW >> 3; if (w <= 0 || h <= 0 || line <= 0 || h >= (INT_MAX - 1) / line) { error(errSyntaxError, -1, "invalid width/height"); data = nullptr; return; } // need to allocate one extra guard byte for use in combine() data = (unsigned char *)gmalloc_checkoverflow(h * line + 1); if (data != nullptr) { data[h * line] = 0; } }
0
[ "CWE-476", "CWE-190" ]
poppler
27354e9d9696ee2bc063910a6c9a6b27c5184a52
334,549,171,783,152,450,000,000,000,000,000,000,000
23
JBIG2Stream: Fix crash on broken file https://github.com/jeffssh/CVE-2021-30860 Thanks to David Warren for the heads up
static void *hi_calloc_fail(size_t nmemb, size_t size) { (void)nmemb; (void)size; return NULL; }
0
[ "CWE-190", "CWE-680" ]
redis
0215324a66af949be39b34be2d55143232c1cb71
241,741,600,139,907,330,000,000,000,000,000,000,000
5
Fix redis-cli / redis-sential overflow on some platforms (CVE-2021-32762) (#9587) The redis-cli command line tool and redis-sentinel service may be vulnerable to integer overflow when parsing specially crafted large multi-bulk network replies. This is a result of a vulnerability in the underlying hiredis library which does not perform an overflow check before calling the calloc() heap allocation function. This issue only impacts systems with heap allocators that do not perform their own overflow checks. Most modern systems do and are therefore not likely to be affected. Furthermore, by default redis-sentinel uses the jemalloc allocator which is also not vulnerable. Co-authored-by: Yossi Gottlieb <[email protected]>
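The message explains that hiredis called calloc() without first checking that the element count times the element size fits in size_t. The sketch below shows that standard guard; it is not the actual hiredis patch, and the wrapper name is hypothetical.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical wrapper: refuse allocations where nmemb * size would
 * overflow size_t instead of relying on the platform allocator to
 * detect it (the commit notes that not every allocator does). */
static void *calloc_checked(size_t nmemb, size_t size)
{
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;             /* nmemb * size would wrap around */
    return calloc(nmemb, size);
}

int main(void)
{
    void *p = calloc_checked((size_t)-1, 16);   /* rejected, returns NULL */
    free(p);
    return p == NULL ? 0 : 1;
}
```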
static int remove_cgroup(struct cgroup_mount_point *mp, const char *path, bool recurse) { return create_or_remove_cgroup(true, mp, path, recurse); }
0
[ "CWE-59", "CWE-61" ]
lxc
592fd47a6245508b79fe6ac819fe6d3b2c1289be
284,485,755,512,056,500,000,000,000,000,000,000,000
5
CVE-2015-1335: Protect container mounts against symlinks When a container starts up, lxc sets up the container's inital fstree by doing a bunch of mounting, guided by the container configuration file. The container config is owned by the admin or user on the host, so we do not try to guard against bad entries. However, since the mount target is in the container, it's possible that the container admin could divert the mount with symbolic links. This could bypass proper container startup (i.e. confinement of a root-owned container by the restrictive apparmor policy, by diverting the required write to /proc/self/attr/current), or bypass the (path-based) apparmor policy by diverting, say, /proc to /mnt in the container. To prevent this, 1. do not allow mounts to paths containing symbolic links 2. do not allow bind mounts from relative paths containing symbolic links. Details: Define safe_mount which ensures that the container has not inserted any symbolic links into any mount targets for mounts to be done during container setup. The host's mount path may contain symbolic links. As it is under the control of the administrator, that's ok. So safe_mount begins the check for symbolic links after the rootfs->mount, by opening that directory. It opens each directory along the path using openat() relative to the parent directory using O_NOFOLLOW. When the target is reached, it mounts onto /proc/self/fd/<targetfd>. Use safe_mount() in mount_entry(), when mounting container proc, and when needed. In particular, safe_mount() need not be used in any case where: 1. the mount is done in the container's namespace 2. the mount is for the container's rootfs 3. the mount is relative to a tmpfs or proc/sysfs which we have just safe_mount()ed ourselves Since we were using proc/net as a temporary placeholder for /proc/sys/net during container startup, and proc/net is a symbolic link, use proc/tty instead. Update the lxc.container.conf manpage with details about the new restrictions. Finally, add a testcase to test some symbolic link possibilities. Reported-by: Roman Fiedler Signed-off-by: Serge Hallyn <[email protected]> Acked-by: Stéphane Graber <[email protected]>
int crypt_keyslot_area(struct crypt_device *cd, int keyslot, uint64_t *offset, uint64_t *length) { if (_onlyLUKS(cd, CRYPT_CD_QUIET | CRYPT_CD_UNRESTRICTED) || !offset || !length) return -EINVAL; if (isLUKS2(cd->type)) return LUKS2_keyslot_area(&cd->u.luks2.hdr, keyslot, offset, length); return LUKS_keyslot_area(&cd->u.luks1.hdr, keyslot, offset, length); }
0
[ "CWE-345" ]
cryptsetup
0113ac2d889c5322659ad0596d4cfc6da53e356c
63,229,921,689,068,510,000,000,000,000,000,000,000
13
Fix CVE-2021-4122 - LUKS2 reencryption crash recovery attack Fix possible attacks against data confidentiality through LUKS2 online reencryption extension crash recovery. An attacker can modify on-disk metadata to simulate decryption in progress with crashed (unfinished) reencryption step and persistently decrypt part of the LUKS device. This attack requires repeated physical access to the LUKS device but no knowledge of user passphrases. The decryption step is performed after a valid user activates the device with a correct passphrase and modified metadata. There are no visible warnings for the user that such recovery happened (except using the luksDump command). The attack can also be reversed afterward (simulating crashed encryption from a plaintext) with possible modification of revealed plaintext. The problem was caused by reusing a mechanism designed for actual reencryption operation without reassessing the security impact for new encryption and decryption operations. While the reencryption requires calculating and verifying both key digests, no digest was needed to initiate decryption recovery if the destination is plaintext (no encryption key). Also, some metadata (like encryption cipher) is not protected, and an attacker could change it. Note that LUKS2 protects visible metadata only when a random change occurs. It does not protect against intentional modification but such modification must not cause a violation of data confidentiality. The fix introduces additional digest protection of reencryption metadata. The digest is calculated from known keys and critical reencryption metadata. Now an attacker cannot create correct metadata digest without knowledge of a passphrase for used keyslots. For more details, see LUKS2 On-Disk Format Specification version 1.1.0.
default_interface_handler(vector_t *strvec) { if (vector_size(strvec) < 2) { report_config_error(CONFIG_GENERAL_ERROR, "default_interface requires interface name"); return; } FREE_PTR(global_data->default_ifname); global_data->default_ifname = set_value(strvec); /* On a reload, the VRRP process needs the default_ifp */ #ifndef _DEBUG_ if (prog_type == PROG_TYPE_VRRP) #endif { global_data->default_ifp = if_get_by_ifname(global_data->default_ifname, IF_CREATE_IF_DYNAMIC); if (!global_data->default_ifp) report_config_error(CONFIG_GENERAL_ERROR, "WARNING - default interface %s doesn't exist", global_data->default_ifname); } }
0
[ "CWE-200" ]
keepalived
c6247a9ef2c7b33244ab1d3aa5d629ec49f0a067
169,609,129,240,129,910,000,000,000,000,000,000,000
19
Add command line and configuration option to set umask Issue #1048 identified that files created by keepalived are created with mode 0666. This commit changes the default to 0644, and also allows the umask to be specified in the configuration or as a command line option. Signed-off-by: Quentin Armitage <[email protected]>
static bool work_is_canceling(struct work_struct *work) { unsigned long data = atomic_long_read(&work->data); return !(data & WORK_STRUCT_PWQ) && (data & WORK_OFFQ_CANCELING); }
0
[ "CWE-200" ]
tip
dfb4357da6ddbdf57d583ba64361c9d792b0e0b1
129,984,981,045,536,900,000,000,000,000,000,000,000
6
time: Remove CONFIG_TIMER_STATS Currently CONFIG_TIMER_STATS exposes process information across namespaces: kernel/time/timer_list.c print_timer(): SEQ_printf(m, ", %s/%d", tmp, timer->start_pid); /proc/timer_list: #11: <0000000000000000>, hrtimer_wakeup, S:01, do_nanosleep, cron/2570 Given that the tracer can give the same information, this patch entirely removes CONFIG_TIMER_STATS. Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Kees Cook <[email protected]> Acked-by: John Stultz <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: [email protected] Cc: Lai Jiangshan <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Xing Gao <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Jessica Frazelle <[email protected]> Cc: [email protected] Cc: Nicolas Iooss <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Cochran <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Michal Marek <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Olof Johansson <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: Arjan van de Ven <[email protected]> Link: http://lkml.kernel.org/r/20170208192659.GA32582@beast Signed-off-by: Thomas Gleixner <[email protected]>
f_ch_setoptions(typval_T *argvars, typval_T *rettv UNUSED) { channel_T *channel; jobopt_T opt; channel = get_channel_arg(&argvars[0], FALSE, FALSE, 0); if (channel == NULL) return; clear_job_options(&opt); if (get_job_options(&argvars[1], &opt, JO_CB_ALL + JO_TIMEOUT_ALL + JO_MODE_ALL, 0) == OK) channel_set_options(channel, &opt); free_job_options(&opt); }
0
[ "CWE-78" ]
vim
8c62a08faf89663e5633dc5036cd8695c80f1075
146,397,072,813,614,110,000,000,000,000,000,000,000
14
patch 8.1.0881: can execute shell commands in rvim through interfaces Problem: Can execute shell commands in rvim through interfaces. Solution: Disable using interfaces in restricted mode. Allow for writing file with writefile(), histadd() and a few others.
static int dissect_CPMGetNotify(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree _U_, gboolean in _U_, void *data _U_) { col_append_str(pinfo->cinfo, COL_INFO, "GetNotify"); return tvb_reported_length(tvb); }
0
[ "CWE-770" ]
wireshark
b7a0650e061b5418ab4a8f72c6e4b00317aff623
224,107,743,032,095,920,000,000,000,000,000,000,000
5
MS-WSP: Don't allocate huge amounts of memory. Add a couple of memory allocation sanity checks, one of which fixes #17331.
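The message says the dissector gained memory-allocation sanity checks so a malformed packet cannot request a huge buffer. A generic C sketch of that idea follows (plain C, not the Wireshark/wmem API; all names are hypothetical): only allocate for as many elements as the remaining packet bytes could actually contain.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* A count claiming more elements than the remaining bytes could possibly
 * hold is bogus and would only serve to force a huge allocation, so
 * reject it before allocating anything. */
static bool allocation_is_sane(uint64_t element_count, size_t element_size,
                               size_t bytes_remaining_in_packet)
{
    if (element_size == 0)
        return false;
    return element_count <= bytes_remaining_in_packet / element_size;
}

int main(void)
{
    /* 100 four-byte elements fit in a 1500-byte packet remainder... */
    printf("%d\n", allocation_is_sane(100, 4, 1500));        /* 1 */
    /* ...but a claimed count of 4 billion elements does not. */
    printf("%d\n", allocation_is_sane(UINT32_MAX, 4, 1500)); /* 0 */
    return 0;
}
```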
static int FIPS_hmac_sha1_test() { unsigned char key[] = "etaonrishd"; unsigned char iv[] = "Sample text"; unsigned char kaval[EVP_MAX_MD_SIZE] = {0x73, 0xf7, 0xa0, 0x48, 0xf8, 0x94, 0xed, 0xdd, 0x0a, 0xea, 0xea, 0x56, 0x1b, 0x61, 0x2e, 0x70, 0xb2, 0xfb, 0xec, 0xc6}; unsigned char out[EVP_MAX_MD_SIZE]; unsigned int outlen; ERR_clear_error(); if (!HMAC(EVP_sha1(),key,sizeof(key)-1,iv,sizeof(iv)-1,out,&outlen)) return 0; if (memcmp(out,kaval,outlen)) return 0; return 1; }
0
[]
openssl
200f249b8c3b6439e0200d01caadc24806f1a983
43,788,219,771,523,490,000,000,000,000,000,000,000
17
Remove Dual EC DRBG from FIPS module.
void Magick::Image::quantizeColors(const size_t colors_) { modifyImage(); options()->quantizeColors(colors_); }
0
[ "CWE-416" ]
ImageMagick
8c35502217c1879cb8257c617007282eee3fe1cc
325,042,077,892,120,450,000,000,000,000,000,000,000
5
Added missing return to avoid use after free.
static bool manager_get_show_status(Manager *m) { assert(m); if (m->running_as != SYSTEMD_SYSTEM) return false; if (m->no_console_output) return false; if (m->show_status > 0) return true; /* If Plymouth is running make sure we show the status, so * that there's something nice to see when people press Esc */ return plymouth_running(); }
0
[]
systemd
5ba6985b6c8ef85a8bcfeb1b65239c863436e75b
103,114,869,006,910,070,000,000,000,000,000,000,000
17
core: allow PIDs to be watched by two units at the same time In some cases it is interesting to map a PID to two units at the same time. For example, when a user logs in via a getty, which is reexeced to /sbin/login that binary will be explicitly referenced as main pid of the getty service, as well as implicitly referenced as part of the session scope.
void HeaderMapImpl::remove(const LowerCaseString& key) { EntryCb cb = ConstSingleton<StaticLookupTable>::get().find(key.get()); if (cb) { StaticLookupResponse ref_lookup_response = cb(*this); removeInline(ref_lookup_response.entry_); } else { for (auto i = headers_.begin(); i != headers_.end();) { if (i->key() == key.get().c_str()) { subtractSize(i->key().size() + i->value().size()); i = headers_.erase(i); } else { ++i; } } } }
0
[ "CWE-400", "CWE-703" ]
envoy
afc39bea36fd436e54262f150c009e8d72db5014
76,904,088,685,435,090,000,000,000,000,000,000,000
16
Track byteSize of HeaderMap internally. Introduces a cached byte size updated internally in HeaderMap. The value is stored as an optional, and is cleared whenever a non-const pointer or reference to a HeaderEntry is accessed. The cached value can be set with refreshByteSize() which performs an iteration over the HeaderMap to sum the size of each key and value in the HeaderMap. Signed-off-by: Asra Ali <[email protected]>
find_method (MonoClass *in_class, MonoClass *ic, const char* name, MonoMethodSignature *sig, MonoClass *from_class) { int i; char *qname, *fqname, *class_name; gboolean is_interface; MonoMethod *result = NULL; is_interface = MONO_CLASS_IS_INTERFACE (in_class); if (ic) { class_name = mono_type_get_name_full (&ic->byval_arg, MONO_TYPE_NAME_FORMAT_IL); qname = g_strconcat (class_name, ".", name, NULL); if (ic->name_space && ic->name_space [0]) fqname = g_strconcat (ic->name_space, ".", class_name, ".", name, NULL); else fqname = NULL; } else class_name = qname = fqname = NULL; while (in_class) { g_assert (from_class); result = find_method_in_class (in_class, name, qname, fqname, sig, from_class); if (result) goto out; if (name [0] == '.' && (!strcmp (name, ".ctor") || !strcmp (name, ".cctor"))) break; g_assert (from_class->interface_offsets_count == in_class->interface_offsets_count); for (i = 0; i < in_class->interface_offsets_count; i++) { MonoClass *in_ic = in_class->interfaces_packed [i]; MonoClass *from_ic = from_class->interfaces_packed [i]; char *ic_qname, *ic_fqname, *ic_class_name; ic_class_name = mono_type_get_name_full (&in_ic->byval_arg, MONO_TYPE_NAME_FORMAT_IL); ic_qname = g_strconcat (ic_class_name, ".", name, NULL); if (in_ic->name_space && in_ic->name_space [0]) ic_fqname = g_strconcat (in_ic->name_space, ".", ic_class_name, ".", name, NULL); else ic_fqname = NULL; result = find_method_in_class (in_ic, ic ? name : NULL, ic_qname, ic_fqname, sig, from_ic); g_free (ic_class_name); g_free (ic_fqname); g_free (ic_qname); if (result) goto out; } in_class = in_class->parent; from_class = from_class->parent; } g_assert (!in_class == !from_class); if (is_interface) result = find_method_in_class (mono_defaults.object_class, name, qname, fqname, sig, mono_defaults.object_class); out: g_free (class_name); g_free (fqname); g_free (qname); return result; }
0
[]
mono
8e890a3bf80a4620e417814dc14886b1bbd17625
258,256,107,898,645,300,000,000,000,000,000,000,000
64
Search for dllimported shared libs in the base directory, not cwd. * loader.c: we don't search the current directory anymore for shared libraries referenced in DllImport attributes, as it has a slight security risk. We search in the same directory where the referencing image was loaded from, instead. Fixes bug# 641915.
static void fix_rmode_seg(int seg, struct kvm_save_segment *save) { struct kvm_vmx_segment_field *sf = &kvm_vmx_segment_fields[seg]; save->selector = vmcs_read16(sf->selector); save->base = vmcs_readl(sf->base); save->limit = vmcs_read32(sf->limit); save->ar = vmcs_read32(sf->ar_bytes); vmcs_write16(sf->selector, save->base >> 4); vmcs_write32(sf->base, save->base & 0xfffff); vmcs_write32(sf->limit, 0xffff); vmcs_write32(sf->ar_bytes, 0xf3); }
0
[ "CWE-20" ]
linux-2.6
16175a796d061833aacfbd9672235f2d2725df65
330,228,898,353,448,940,000,000,000,000,000,000,000
13
KVM: VMX: Don't allow uninhibited access to EFER on i386 vmx_set_msr() does not allow i386 guests to touch EFER, but they can still do so through the default: label in the switch. If they set EFER_LME, they can oops the host. Fix by having EFER access through the normal channel (which will check for EFER_LME) even on i386. Reported-and-tested-by: Benjamin Gilbert <[email protected]> Cc: [email protected] Signed-off-by: Avi Kivity <[email protected]>
GuestFsfreezeStatus qmp_guest_fsfreeze_status(Error **errp) { if (!vss_initialized()) { error_setg(errp, QERR_UNSUPPORTED); return 0; } if (ga_is_frozen(ga_state)) { return GUEST_FSFREEZE_STATUS_FROZEN; } return GUEST_FSFREEZE_STATUS_THAWED; }
0
[ "CWE-190" ]
qemu
141b197408ab398c4f474ac1a728ab316e921f2b
332,926,590,283,207,030,000,000,000,000,000,000,000
13
qga: check bytes count read by guest-file-read While reading file content via 'guest-file-read' command, 'qmp_guest_file_read' routine allocates buffer of count+1 bytes. It could overflow for large values of 'count'. Add check to avoid it. Reported-by: Fakhri Zulkifli <[email protected]> Signed-off-by: Prasad J Pandit <[email protected]> Cc: [email protected] Signed-off-by: Michael Roth <[email protected]>
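The message notes that qmp_guest_file_read allocated count+1 bytes, so a huge guest-supplied count could overflow the computation. Below is a minimal C sketch of the kind of validation described; the limit constant and function name are hypothetical, not the actual QEMU guest-agent code.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cap on a single read request; the real fix in the QEMU
 * guest agent uses its own limit, this value is only illustrative. */
#define READ_COUNT_MAX (48u * 1024u * 1024u)

static char *alloc_read_buffer(int64_t count)
{
    /* Reject negative or oversized counts before computing count + 1,
     * so the "+ 1" for a terminator can never overflow the allocation. */
    if (count < 0 || count > READ_COUNT_MAX) {
        fprintf(stderr, "invalid read count %lld\n", (long long)count);
        return NULL;
    }
    return malloc((size_t)count + 1);
}

int main(void)
{
    char *buf = alloc_read_buffer(4096);       /* accepted */
    char *bad = alloc_read_buffer(INT64_MAX);  /* rejected, NULL */
    free(buf);
    free(bad);
    return 0;
}
```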
sraRgnEmpty(const sraRegion *rgn) { return sraSpanListEmpty((sraSpanList*)rgn); }
0
[ "CWE-476" ]
libvncserver
38e98ee61d74f5f5ab4aa4c77146faad1962d6d0
61,629,359,696,956,705,000,000,000,000,000,000,000
3
libvncserver: add missing NULL pointer checks
static int set_map_elem_callback_state(struct bpf_verifier_env *env, struct bpf_func_state *caller, struct bpf_func_state *callee, int insn_idx) { struct bpf_insn_aux_data *insn_aux = &env->insn_aux_data[insn_idx]; struct bpf_map *map; int err; if (bpf_map_ptr_poisoned(insn_aux)) { verbose(env, "tail_call abusing map_ptr\n"); return -EINVAL; } map = BPF_MAP_PTR(insn_aux->map_ptr_state); if (!map->ops->map_set_for_each_callback_args || !map->ops->map_for_each_callback) { verbose(env, "callback function not allowed for map\n"); return -ENOTSUPP; } err = map->ops->map_set_for_each_callback_args(env, caller, callee); if (err) return err; callee->in_callback_fn = true; return 0; }
0
[ "CWE-125" ]
bpf
049c4e13714ecbca567b4d5f6d563f05d431c80e
339,571,914,807,823,350,000,000,000,000,000,000,000
28
bpf: Fix alu32 const subreg bound tracking on bitwise operations Fix a bug in the verifier's scalar32_min_max_*() functions which leads to incorrect tracking of 32 bit bounds for the simulation of and/or/xor bitops. When both the src & dst subreg is a known constant, then the assumption is that scalar_min_max_*() will take care to update bounds correctly. However, this is not the case, for example, consider a register R2 which has a tnum of 0xffffffff00000000, meaning, lower 32 bits are known constant and in this case of value 0x00000001. R2 is then and'ed with a register R3 which is a 64 bit known constant, here, 0x100000002. What can be seen in line '10:' is that 32 bit bounds reach an invalid state where {u,s}32_min_value > {u,s}32_max_value. The reason is scalar32_min_max_*() delegates 32 bit bounds updates to scalar_min_max_*(), however, that really only takes place when both the 64 bit src & dst register is a known constant. Given scalar32_min_max_*() is intended to be designed as closely as possible to scalar_min_max_*(), update the 32 bit bounds in this situation through __mark_reg32_known() which will set all {u,s}32_{min,max}_value to the correct constant, which is 0x00000000 after the fix (given 0x00000001 & 0x00000002 in 32 bit space). This is possible given var32_off already holds the final value as dst_reg->var_off is updated before calling scalar32_min_max_*(). Before fix, invalid tracking of R2: [...] 9: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,smin_value=-9223372036854775807 (0x8000000000000001),smax_value=9223372032559808513 (0x7fffffff00000001),umin_value=1,umax_value=0xffffffff00000001,var_off=(0x1; 0xffffffff00000000),s32_min_value=1,s32_max_value=1,u32_min_value=1,u32_max_value=1) R3_w=inv4294967298 R10=fp0 9: (5f) r2 &= r3 10: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,smin_value=0,smax_value=4294967296 (0x100000000),umin_value=0,umax_value=0x100000000,var_off=(0x0; 0x100000000),s32_min_value=1,s32_max_value=0,u32_min_value=1,u32_max_value=0) R3_w=inv4294967298 R10=fp0 [...] After fix, correct tracking of R2: [...] 9: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,smin_value=-9223372036854775807 (0x8000000000000001),smax_value=9223372032559808513 (0x7fffffff00000001),umin_value=1,umax_value=0xffffffff00000001,var_off=(0x1; 0xffffffff00000000),s32_min_value=1,s32_max_value=1,u32_min_value=1,u32_max_value=1) R3_w=inv4294967298 R10=fp0 9: (5f) r2 &= r3 10: R0_w=inv1337 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,smin_value=0,smax_value=4294967296 (0x100000000),umin_value=0,umax_value=0x100000000,var_off=(0x0; 0x100000000),s32_min_value=0,s32_max_value=0,u32_min_value=0,u32_max_value=0) R3_w=inv4294967298 R10=fp0 [...] Fixes: 3f50f132d840 ("bpf: Verifier, do explicit ALU32 bounds tracking") Fixes: 2921c90d4718 ("bpf: Fix a verifier failure with xor") Reported-by: Manfred Paul (@_manfp) Reported-by: Thadeu Lima de Souza Cascardo <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: John Fastabend <[email protected]> Acked-by: Alexei Starovoitov <[email protected]>
zfont_mark_glyph_name(const gs_memory_t *mem, gs_glyph glyph, void *ignore_data) { return (glyph >= gs_c_min_std_encoding_glyph || glyph == GS_NO_GLYPH ? false : name_mark_index(mem, (uint) glyph)); }
0
[ "CWE-704" ]
ghostpdl
548bb434e81dadcc9f71adf891a3ef5bea8e2b4e
60,964,562,478,748,540,000,000,000,000,000,000,000
5
PS interpreter - add some type checking These were 'probably' safe anyway, since they mostly treat the objects as integers without checking, which at least can't result in a crash. Nevertheless, we ought to check. The return from comparedictkeys could be wrong if one of the keys had a value which was not an array, it could incorrectly decide the two were in fact the same.
megasas_mgmt_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { switch (cmd) { case MEGASAS_IOC_FIRMWARE32: return megasas_mgmt_compat_ioctl_fw(file, arg); case MEGASAS_IOC_GET_AEN: return megasas_mgmt_ioctl_aen(file, arg); } return -ENOTTY; }
0
[ "CWE-476" ]
linux
bcf3b67d16a4c8ffae0aa79de5853435e683945c
336,300,313,460,074,780,000,000,000,000,000,000,000
12
scsi: megaraid_sas: return error when create DMA pool failed when create DMA pool for cmd frames failed, we should return -ENOMEM, instead of 0. In some case in: megasas_init_adapter_fusion() -->megasas_alloc_cmds() -->megasas_create_frame_pool create DMA pool failed, --> megasas_free_cmds() [1] -->megasas_alloc_cmds_fusion() failed, then goto fail_alloc_cmds. -->megasas_free_cmds() [2] we will call megasas_free_cmds twice, [1] will kfree cmd_list, [2] will use cmd_list.it will cause a problem: Unable to handle kernel NULL pointer dereference at virtual address 00000000 pgd = ffffffc000f70000 [00000000] *pgd=0000001fbf893003, *pud=0000001fbf893003, *pmd=0000001fbf894003, *pte=006000006d000707 Internal error: Oops: 96000005 [#1] SMP Modules linked in: CPU: 18 PID: 1 Comm: swapper/0 Not tainted task: ffffffdfb9290000 ti: ffffffdfb923c000 task.ti: ffffffdfb923c000 PC is at megasas_free_cmds+0x30/0x70 LR is at megasas_free_cmds+0x24/0x70 ... Call trace: [<ffffffc0005b779c>] megasas_free_cmds+0x30/0x70 [<ffffffc0005bca74>] megasas_init_adapter_fusion+0x2f4/0x4d8 [<ffffffc0005b926c>] megasas_init_fw+0x2dc/0x760 [<ffffffc0005b9ab0>] megasas_probe_one+0x3c0/0xcd8 [<ffffffc0004a5abc>] local_pci_probe+0x4c/0xb4 [<ffffffc0004a5c40>] pci_device_probe+0x11c/0x14c [<ffffffc00053a5e4>] driver_probe_device+0x1ec/0x430 [<ffffffc00053a92c>] __driver_attach+0xa8/0xb0 [<ffffffc000538178>] bus_for_each_dev+0x74/0xc8 [<ffffffc000539e88>] driver_attach+0x28/0x34 [<ffffffc000539a18>] bus_add_driver+0x16c/0x248 [<ffffffc00053b234>] driver_register+0x6c/0x138 [<ffffffc0004a5350>] __pci_register_driver+0x5c/0x6c [<ffffffc000ce3868>] megasas_init+0xc0/0x1a8 [<ffffffc000082a58>] do_one_initcall+0xe8/0x1ec [<ffffffc000ca7be8>] kernel_init_freeable+0x1c8/0x284 [<ffffffc0008d90b8>] kernel_init+0x1c/0xe4 Signed-off-by: Jason Yan <[email protected]> Acked-by: Sumit Saxena <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
void run() { intrusive_ptr<ExpressionContextForTest> expCtx(new ExpressionContextForTest()); const Document spec = getSpec(); const Value args = spec["input"]; if (!spec["expected"].missing()) { FieldIterator fields(spec["expected"].getDocument()); while (fields.more()) { const Document::FieldPair field(fields.next()); const Value expected = field.second; const BSONObj obj = BSON(field.first << args); VariablesParseState vps = expCtx->variablesParseState; const intrusive_ptr<Expression> expr = Expression::parseExpression(expCtx, obj, vps); Value result = expr->evaluate(Document()); if (result.getType() == Array) { result = sortSet(result); } if (ValueComparator().evaluate(result != expected)) { string errMsg = str::stream() << "for expression " << field.first.toString() << " with argument " << args.toString() << " full tree: " << expr->serialize(false).toString() << " expected: " << expected.toString() << " but got: " << result.toString(); FAIL(errMsg); } // TODO test optimize here } } if (!spec["error"].missing()) { const vector<Value>& asserters = spec["error"].getArray(); size_t n = asserters.size(); for (size_t i = 0; i < n; i++) { const BSONObj obj = BSON(asserters[i].getString() << args); VariablesParseState vps = expCtx->variablesParseState; ASSERT_THROWS( { // NOTE: parse and evaluatation failures are treated the // same const intrusive_ptr<Expression> expr = Expression::parseExpression(expCtx, obj, vps); expr->evaluate(Document()); }, AssertionException); } } }
0
[ "CWE-835" ]
mongo
0a076417d1d7fba3632b73349a1fd29a83e68816
52,231,415,577,690,690,000,000,000,000,000,000,000
46
SERVER-38070 fix infinite loop in agg expression
xmlXPtrGetStartPoint(xmlXPathObjectPtr obj, xmlNodePtr *node, int *indx) { if ((obj == NULL) || (node == NULL) || (indx == NULL)) return(-1); switch (obj->type) { case XPATH_POINT: *node = obj->user; if (obj->index <= 0) *indx = 0; else *indx = obj->index; return(0); case XPATH_RANGE: *node = obj->user; if (obj->index <= 0) *indx = 0; else *indx = obj->index; return(0); default: break; } return(-1); }
0
[ "CWE-415" ]
libxml2
f5048b3e71fc30ad096970b8df6e7af073bae4cb
167,389,323,034,567,480,000,000,000,000,000,000,000
24
Hardening of XPath evaluation Add a mechanism of frame for XPath evaluation when entering a function or a scoped evaluation, also fix a potential problem in predicate evaluation.
static int __bmc_get_device_id(struct ipmi_smi *intf, struct bmc_device *bmc, struct ipmi_device_id *id, bool *guid_set, guid_t *guid, int intf_num) { int rv = 0; int prev_dyn_id_set, prev_guid_set; bool intf_set = intf != NULL; if (!intf) { mutex_lock(&bmc->dyn_mutex); retry_bmc_lock: if (list_empty(&bmc->intfs)) { mutex_unlock(&bmc->dyn_mutex); return -ENOENT; } intf = list_first_entry(&bmc->intfs, struct ipmi_smi, bmc_link); kref_get(&intf->refcount); mutex_unlock(&bmc->dyn_mutex); mutex_lock(&intf->bmc_reg_mutex); mutex_lock(&bmc->dyn_mutex); if (intf != list_first_entry(&bmc->intfs, struct ipmi_smi, bmc_link)) { mutex_unlock(&intf->bmc_reg_mutex); kref_put(&intf->refcount, intf_free); goto retry_bmc_lock; } } else { mutex_lock(&intf->bmc_reg_mutex); bmc = intf->bmc; mutex_lock(&bmc->dyn_mutex); kref_get(&intf->refcount); } /* If we have a valid and current ID, just return that. */ if (intf->in_bmc_register || (bmc->dyn_id_set && time_is_after_jiffies(bmc->dyn_id_expiry))) goto out_noprocessing; prev_guid_set = bmc->dyn_guid_set; __get_guid(intf); prev_dyn_id_set = bmc->dyn_id_set; rv = __get_device_id(intf, bmc); if (rv) goto out; /* * The guid, device id, manufacturer id, and product id should * not change on a BMC. If it does we have to do some dancing. */ if (!intf->bmc_registered || (!prev_guid_set && bmc->dyn_guid_set) || (!prev_dyn_id_set && bmc->dyn_id_set) || (prev_guid_set && bmc->dyn_guid_set && !guid_equal(&bmc->guid, &bmc->fetch_guid)) || bmc->id.device_id != bmc->fetch_id.device_id || bmc->id.manufacturer_id != bmc->fetch_id.manufacturer_id || bmc->id.product_id != bmc->fetch_id.product_id) { struct ipmi_device_id id = bmc->fetch_id; int guid_set = bmc->dyn_guid_set; guid_t guid; guid = bmc->fetch_guid; mutex_unlock(&bmc->dyn_mutex); __ipmi_bmc_unregister(intf); /* Fill in the temporary BMC for good measure. */ intf->bmc->id = id; intf->bmc->dyn_guid_set = guid_set; intf->bmc->guid = guid; if (__ipmi_bmc_register(intf, &id, guid_set, &guid, intf_num)) need_waiter(intf); /* Retry later on an error. */ else __scan_channels(intf, &id); if (!intf_set) { /* * We weren't given the interface on the * command line, so restart the operation on * the next interface for the BMC. */ mutex_unlock(&intf->bmc_reg_mutex); mutex_lock(&bmc->dyn_mutex); goto retry_bmc_lock; } /* We have a new BMC, set it up. */ bmc = intf->bmc; mutex_lock(&bmc->dyn_mutex); goto out_noprocessing; } else if (memcmp(&bmc->fetch_id, &bmc->id, sizeof(bmc->id))) /* Version info changes, scan the channels again. */ __scan_channels(intf, &bmc->fetch_id); bmc->dyn_id_expiry = jiffies + IPMI_DYN_DEV_ID_EXPIRY; out: if (rv && prev_dyn_id_set) { rv = 0; /* Ignore failures if we have previous data. */ bmc->dyn_id_set = prev_dyn_id_set; } if (!rv) { bmc->id = bmc->fetch_id; if (bmc->dyn_guid_set) bmc->guid = bmc->fetch_guid; else if (prev_guid_set) /* * The guid used to be valid and it failed to fetch, * just use the cached value. */ bmc->dyn_guid_set = prev_guid_set; } out_noprocessing: if (!rv) { if (id) *id = bmc->id; if (guid_set) *guid_set = bmc->dyn_guid_set; if (guid && bmc->dyn_guid_set) *guid = bmc->guid; } mutex_unlock(&bmc->dyn_mutex); mutex_unlock(&intf->bmc_reg_mutex); kref_put(&intf->refcount, intf_free); return rv; }
0
[ "CWE-416", "CWE-284" ]
linux
77f8269606bf95fcb232ee86f6da80886f1dfae8
176,041,155,505,042,640,000,000,000,000,000,000,000
132
ipmi: fix use-after-free of user->release_barrier.rda When we do the following test, we got oops in ipmi_msghandler driver while((1)) do service ipmievd restart & service ipmievd restart done --------------------------------------------------------------- [ 294.230186] Unable to handle kernel paging request at virtual address 0000803fea6ea008 [ 294.230188] Mem abort info: [ 294.230190] ESR = 0x96000004 [ 294.230191] Exception class = DABT (current EL), IL = 32 bits [ 294.230193] SET = 0, FnV = 0 [ 294.230194] EA = 0, S1PTW = 0 [ 294.230195] Data abort info: [ 294.230196] ISV = 0, ISS = 0x00000004 [ 294.230197] CM = 0, WnR = 0 [ 294.230199] user pgtable: 4k pages, 48-bit VAs, pgdp = 00000000a1c1b75a [ 294.230201] [0000803fea6ea008] pgd=0000000000000000 [ 294.230204] Internal error: Oops: 96000004 [#1] SMP [ 294.235211] Modules linked in: nls_utf8 isofs rpcrdma ib_iser ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_umad rdma_cm ib_cm iw_cm dm_mirror dm_region_hash dm_log dm_mod aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce ses sha256_arm64 sha1_ce hibmc_drm hisi_sas_v2_hw enclosure sg hisi_sas_main sbsa_gwdt ip_tables mlx5_ib ib_uverbs marvell ib_core mlx5_core ixgbe ipmi_si mdio hns_dsaf ipmi_devintf ipmi_msghandler hns_enet_drv hns_mdio [ 294.277745] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Not tainted 5.0.0-rc2+ #113 [ 294.285511] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.37 11/21/2017 [ 294.292835] pstate: 80000005 (Nzcv daif -PAN -UAO) [ 294.297695] pc : __srcu_read_lock+0x38/0x58 [ 294.301940] lr : acquire_ipmi_user+0x2c/0x70 [ipmi_msghandler] [ 294.307853] sp : ffff00001001bc80 [ 294.311208] x29: ffff00001001bc80 x28: ffff0000117e5000 [ 294.316594] x27: 0000000000000000 x26: dead000000000100 [ 294.321980] x25: dead000000000200 x24: ffff803f6bd06800 [ 294.327366] x23: 0000000000000000 x22: 0000000000000000 [ 294.332752] x21: ffff00001001bd04 x20: ffff80df33d19018 [ 294.338137] x19: ffff80df33d19018 x18: 0000000000000000 [ 294.343523] x17: 0000000000000000 x16: 0000000000000000 [ 294.348908] x15: 0000000000000000 x14: 0000000000000002 [ 294.354293] x13: 0000000000000000 x12: 0000000000000000 [ 294.359679] x11: 0000000000000000 x10: 0000000000100000 [ 294.365065] x9 : 0000000000000000 x8 : 0000000000000004 [ 294.370451] x7 : 0000000000000000 x6 : ffff80df34558678 [ 294.375836] x5 : 000000000000000c x4 : 0000000000000000 [ 294.381221] x3 : 0000000000000001 x2 : 0000803fea6ea000 [ 294.386607] x1 : 0000803fea6ea008 x0 : 0000000000000001 [ 294.391994] Process swapper/3 (pid: 0, stack limit = 0x0000000083087293) [ 294.398791] Call trace: [ 294.401266] __srcu_read_lock+0x38/0x58 [ 294.405154] acquire_ipmi_user+0x2c/0x70 [ipmi_msghandler] [ 294.410716] deliver_response+0x80/0xf8 [ipmi_msghandler] [ 294.416189] deliver_local_response+0x28/0x68 [ipmi_msghandler] [ 294.422193] handle_one_recv_msg+0x158/0xcf8 [ipmi_msghandler] [ 294.432050] handle_new_recv_msgs+0xc0/0x210 [ipmi_msghandler] [ 294.441984] smi_recv_tasklet+0x8c/0x158 [ipmi_msghandler] [ 294.451618] tasklet_action_common.isra.5+0x88/0x138 [ 294.460661] tasklet_action+0x2c/0x38 [ 294.468191] __do_softirq+0x120/0x2f8 [ 294.475561] irq_exit+0x134/0x140 [ 294.482445] __handle_domain_irq+0x6c/0xc0 [ 294.489954] gic_handle_irq+0xb8/0x178 [ 294.497037] el1_irq+0xb0/0x140 [ 294.503381] arch_cpu_idle+0x34/0x1a8 [ 294.510096] do_idle+0x1d4/0x290 [ 294.516322] cpu_startup_entry+0x28/0x30 [ 294.523230] secondary_start_kernel+0x184/0x1d0 [ 294.530657] Code: d538d082 d2800023 8b010c81 8b020021 
(c85f7c25) [ 294.539746] ---[ end trace 8a7a880dee570b29 ]--- [ 294.547341] Kernel panic - not syncing: Fatal exception in interrupt [ 294.556837] SMP: stopping secondary CPUs [ 294.563996] Kernel Offset: disabled [ 294.570515] CPU features: 0x002,21006008 [ 294.577638] Memory Limit: none [ 294.587178] Starting crashdump kernel... [ 294.594314] Bye! Because the user->release_barrier.rda is freed in ipmi_destroy_user(), but the refcount is not zero, when acquire_ipmi_user() uses user->release_barrier.rda in __srcu_read_lock(), it causes oops. Fix this by calling cleanup_srcu_struct() when the refcount is zero. Fixes: e86ee2d44b44 ("ipmi: Rework locking and shutdown for hot remove") Cc: [email protected] # 4.18 Signed-off-by: Yang Yingliang <[email protected]> Signed-off-by: Corey Minyard <[email protected]>
int vrend_renderer_resource_create(struct vrend_renderer_resource_create_args *args, struct iovec *iov, uint32_t num_iovs, void *image_oes) { struct vrend_resource *gr; int ret; char error_string[256]; ret = check_resource_valid(args, error_string); if (ret) { vrend_printf("%s, Illegal resource parameters, error: %s\n", __func__, error_string); return EINVAL; } gr = (struct vrend_resource *)CALLOC_STRUCT(vrend_texture); if (!gr) return ENOMEM; vrend_renderer_resource_copy_args(args, gr); gr->iov = iov; gr->num_iovs = num_iovs; if (args->flags & VIRGL_RESOURCE_Y_0_TOP) gr->y_0_top = true; pipe_reference_init(&gr->base.reference, 1); if (args->target == PIPE_BUFFER) { if (args->bind == VIRGL_BIND_CUSTOM) { /* use iovec directly when attached */ gr->storage = VREND_RESOURCE_STORAGE_GUEST_ELSE_SYSTEM; gr->ptr = malloc(args->width); if (!gr->ptr) { FREE(gr); return ENOMEM; } } else if (args->bind == VIRGL_BIND_STAGING) { /* Staging buffers use only guest memory. */ gr->storage = VREND_RESOURCE_STORAGE_GUEST; } else if (args->bind == VIRGL_BIND_INDEX_BUFFER) { gr->target = GL_ELEMENT_ARRAY_BUFFER_ARB; vrend_create_buffer(gr, args->width); } else if (args->bind == VIRGL_BIND_STREAM_OUTPUT) { gr->target = GL_TRANSFORM_FEEDBACK_BUFFER; vrend_create_buffer(gr, args->width); } else if (args->bind == VIRGL_BIND_VERTEX_BUFFER) { gr->target = GL_ARRAY_BUFFER_ARB; vrend_create_buffer(gr, args->width); } else if (args->bind == VIRGL_BIND_CONSTANT_BUFFER) { gr->target = GL_UNIFORM_BUFFER; vrend_create_buffer(gr, args->width); } else if (args->bind == VIRGL_BIND_QUERY_BUFFER) { gr->target = GL_QUERY_BUFFER; vrend_create_buffer(gr, args->width); } else if (args->bind == VIRGL_BIND_COMMAND_ARGS) { gr->target = GL_DRAW_INDIRECT_BUFFER; vrend_create_buffer(gr, args->width); } else if (args->bind == 0 || args->bind == VIRGL_BIND_SHADER_BUFFER) { gr->target = GL_ARRAY_BUFFER_ARB; vrend_create_buffer(gr, args->width); } else if (args->bind & VIRGL_BIND_SAMPLER_VIEW) { /* * On Desktop we use GL_ARB_texture_buffer_object on GLES we use * GL_EXT_texture_buffer (it is in the ANDRIOD extension pack). */ #if GL_TEXTURE_BUFFER != GL_TEXTURE_BUFFER_EXT #error "GL_TEXTURE_BUFFER enums differ, they shouldn't." #endif /* need to check GL version here */ if (has_feature(feat_arb_or_gles_ext_texture_buffer)) { gr->target = GL_TEXTURE_BUFFER; } else { gr->target = GL_PIXEL_PACK_BUFFER_ARB; } vrend_create_buffer(gr, args->width); } else { vrend_printf("%s: Illegal buffer binding flags 0x%x\n", __func__, args->bind); FREE(gr); return EINVAL; } } else { int r = vrend_renderer_resource_allocate_texture(gr, image_oes); if (r) { FREE(gr); return r; } } ret = vrend_resource_insert(gr, args->handle); if (ret == 0) { vrend_renderer_resource_destroy(gr); return ENOMEM; } return 0; }
0
[ "CWE-787" ]
virglrenderer
cbc8d8b75be360236cada63784046688aeb6d921
2,001,250,916,882,477,000,000,000,000,000,000,000
95
vrend: check transfer bounds for negative values too and report error Closes #138 Signed-off-by: Gert Wollny <[email protected]> Reviewed-by: Emil Velikov <[email protected]>
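The commit message says the fix also rejects negative transfer bounds before they are used. The C sketch below illustrates that kind of check with hypothetical structure and field names; it is not the virglrenderer code.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical transfer box with signed coordinates, as a guest could
 * hand negative values to the host. */
struct transfer_box {
    int x, y, z;
    int width, height, depth;
};

static bool transfer_box_is_valid(const struct transfer_box *box,
                                  int res_width, int res_height, int res_depth)
{
    /* Negative coordinates or sizes must be rejected explicitly: once
     * they reach offset/size arithmetic they can wrap or address memory
     * outside the backing resource. */
    if (box->x < 0 || box->y < 0 || box->z < 0 ||
        box->width < 0 || box->height < 0 || box->depth < 0)
        return false;
    /* The box must also lie entirely inside the resource. */
    if (box->x > res_width  - box->width ||
        box->y > res_height - box->height ||
        box->z > res_depth  - box->depth)
        return false;
    return true;
}

int main(void)
{
    struct transfer_box ok  = { 0, 0, 0, 16, 16, 1 };
    struct transfer_box bad = { 0, 0, 0, -16, 16, 1 };  /* negative width */
    printf("%d %d\n", transfer_box_is_valid(&ok, 64, 64, 1),
                      transfer_box_is_valid(&bad, 64, 64, 1)); /* 1 0 */
    return 0;
}
```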
cl_error_t cli_egg_bzip2_decompress(char* compressed, size_t compressed_size, char** decompressed, size_t* decompressed_size) { cl_error_t status = CL_EPARSE; char* decoded_tmp; char* decoded = NULL; uint32_t declen = 0, capacity = 0; bz_stream stream; int bzstat; if (NULL == compressed || compressed_size == 0 || NULL == decompressed || NULL == decompressed_size) { cli_errmsg("cli_egg_bzip2_decompress: Invalid args!\n"); status = CL_EARG; goto done; } *decompressed = NULL; *decompressed_size = 0; if (!(decoded = (char*)cli_calloc(BUFSIZ, sizeof(Bytef)))) { cli_errmsg("cli_egg_bzip2_decompress: cannot allocate memory for decompressed output\n"); status = CL_EMEM; goto done; } capacity = BUFSIZ; memset(&stream, 0, sizeof(stream)); stream.next_in = compressed; stream.avail_in = compressed_size; stream.next_out = decoded; stream.avail_out = BUFSIZ; if (BZ_OK != (bzstat = BZ2_bzDecompressInit(&stream, 0, 0))) { cli_warnmsg("cli_egg_bzip2_decompress: bzinit failed\n"); status = CL_EMEM; goto done; } /* initial inflate */ bzstat = BZ2_bzDecompress(&stream); /* check if nothing written whatsoever */ if ((bzstat != BZ_OK) && (stream.avail_out == BUFSIZ)) { /* Inflation failed */ cli_errmsg("cli_egg_bzip2_decompress: failed to decompress data\n"); status = CL_EPARSE; goto done; } while (bzstat == BZ_OK && stream.avail_in) { /* extend output capacity if needed,*/ if (stream.avail_out == 0) { if (!(decoded_tmp = cli_realloc(decoded, capacity + BUFSIZ))) { cli_errmsg("cli_egg_bzip2_decompress: cannot reallocate memory for decompressed output\n"); status = CL_EMEM; goto done; } decoded = decoded_tmp; stream.next_out = decoded + capacity; stream.avail_out = BUFSIZ; declen += BUFSIZ; capacity += BUFSIZ; } /* continue inflation */ bzstat = BZ2_bzDecompress(&stream); } /* add end fragment to decoded length */ declen += (BUFSIZ - stream.avail_out); /* error handling */ switch (bzstat) { case BZ_OK: cli_dbgmsg("cli_egg_bzip2_decompress: BZ_OK on stream decompression\n"); /* intentional fall-through */ case BZ_STREAM_END: cli_dbgmsg("cli_egg_bzip2_decompress: decompressed %lu bytes from %lu total bytes (%lu bytes remaining)\n", (unsigned long)declen, (unsigned long)(compressed_size), (unsigned long)(stream.avail_in)); break; /* potentially fatal */ case BZ_DATA_ERROR: case BZ_MEM_ERROR: default: cli_dbgmsg("cli_egg_bzip2_decompress: after decompressing %lu bytes, got error %d\n", (unsigned long)declen, bzstat); if (declen == 0) { cli_dbgmsg("cli_egg_bzip2_decompress: no bytes were decompressed.\n"); status = CL_EPARSE; } break; } *decompressed = (char*)decoded; *decompressed_size = declen; status = CL_SUCCESS; done: (void)BZ2_bzDecompressEnd(&stream); if (CL_SUCCESS != status) { free(decoded); } return status; }
0
[ "CWE-476" ]
clamav-devel
8bb3716be9c7ab7c6a3a1889267b1072f48af87b
71,721,506,159,610,930,000,000,000,000,000,000,000
113
fuzz-22348 null deref in egg utf8 conversion Corrected memory leaks and a null dereference in the egg utf8 conversion.
int flush_master_info(Master_info* mi, bool force) { DBUG_ENTER("flush_master_info"); DBUG_ASSERT(mi != NULL && mi->rli != NULL); /* The previous implementation was not acquiring locks. We do the same here. However, this is quite strange. */ /* With the appropriate recovery process, we will not need to flush the content of the current log. For now, we flush the relay log BEFORE the master.info file, because if we crash, we will get a duplicate event in the relay log at restart. If we change the order, there might be missing events. If we don't do this and the slave server dies when the relay log has some parts (its last kilobytes) in memory only, with, say, from master's position 100 to 150 in memory only (not on disk), and with position 150 in master.info, there will be missing information. When the slave restarts, the I/O thread will fetch binlogs from 150, so in the relay log we will have "[0, 100] U [150, infinity[" and nobody will notice it, so the SQL thread will jump from 100 to 150, and replication will silently break. */ mysql_mutex_t *log_lock= mi->rli->relay_log.get_log_lock(); mysql_mutex_lock(log_lock); int err= (mi->rli->flush_current_log() || mi->flush_info(force)); mysql_mutex_unlock(log_lock); DBUG_RETURN (err); }
0
[ "CWE-284", "CWE-295" ]
mysql-server
3bd5589e1a5a93f9c224badf983cd65c45215390
243,647,344,528,423,630,000,000,000,000,000,000,000
35
WL#6791 : Redefine client --ssl option to imply enforced encryption # Changed the meaning of the --ssl=1 option of all client binaries to mean force ssl, not try ssl and fail over to eunecrypted # Added a new MYSQL_OPT_SSL_ENFORCE mysql_options() option to specify that an ssl connection is required. # Added a new macro SSL_SET_OPTIONS() to the client SSL handling headers that sets all the relevant SSL options at once. # Revamped all of the current native clients to use the new macro # Removed some Windows line endings. # Added proper handling of the new option into the ssl helper headers. # If SSL is mandatory assume that the media is secure enough for the sha256 plugin to do unencrypted password exchange even before establishing a connection. # Set the default ssl cipher to DHE-RSA-AES256-SHA if none is specified. # updated test cases that require a non-default cipher to spawn a mysql command line tool binary since mysqltest has no support for specifying ciphers. # updated the replication slave connection code to always enforce SSL if any of the SSL config options is present. # test cases added and updated. # added a mysql_get_option() API to return mysql_options() values. Used the new API inside the sha256 plugin. # Fixed compilation warnings because of unused variables. # Fixed test failures (mysql_ssl and bug13115401) # Fixed whitespace issues. # Fully implemented the mysql_get_option() function. # Added a test case for mysql_get_option() # fixed some trailing whitespace issues # fixed some uint/int warnings in mysql_client_test.c # removed shared memory option from non-windows get_options tests # moved MYSQL_OPT_LOCAL_INFILE to the uint options
struct page *follow_page(struct vm_area_struct *vma, unsigned long address, unsigned int flags) { pgd_t *pgd; pud_t *pud; pmd_t *pmd; pte_t *ptep, pte; spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; page = follow_huge_addr(mm, address, flags & FOLL_WRITE); if (!IS_ERR(page)) { BUG_ON(flags & FOLL_GET); goto out; } page = NULL; pgd = pgd_offset(mm, address); if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) goto no_page_table; pud = pud_offset(pgd, address); if (pud_none(*pud)) goto no_page_table; if (pud_huge(*pud) && vma->vm_flags & VM_HUGETLB) { BUG_ON(flags & FOLL_GET); page = follow_huge_pud(mm, address, pud, flags & FOLL_WRITE); goto out; } if (unlikely(pud_bad(*pud))) goto no_page_table; pmd = pmd_offset(pud, address); if (pmd_none(*pmd)) goto no_page_table; if (pmd_huge(*pmd) && vma->vm_flags & VM_HUGETLB) { BUG_ON(flags & FOLL_GET); page = follow_huge_pmd(mm, address, pmd, flags & FOLL_WRITE); goto out; } if (pmd_trans_huge(*pmd)) { if (flags & FOLL_SPLIT) { split_huge_page_pmd(mm, pmd); goto split_fallthrough; } spin_lock(&mm->page_table_lock); if (likely(pmd_trans_huge(*pmd))) { if (unlikely(pmd_trans_splitting(*pmd))) { spin_unlock(&mm->page_table_lock); wait_split_huge_page(vma->anon_vma, pmd); } else { page = follow_trans_huge_pmd(mm, address, pmd, flags); spin_unlock(&mm->page_table_lock); goto out; } } else spin_unlock(&mm->page_table_lock); /* fall through */ } split_fallthrough: if (unlikely(pmd_bad(*pmd))) goto no_page_table; ptep = pte_offset_map_lock(mm, pmd, address, &ptl); pte = *ptep; if (!pte_present(pte)) goto no_page; if ((flags & FOLL_WRITE) && !pte_write(pte)) goto unlock; page = vm_normal_page(vma, address, pte); if (unlikely(!page)) { if ((flags & FOLL_DUMP) || !is_zero_pfn(pte_pfn(pte))) goto bad_page; page = pte_page(pte); } if (flags & FOLL_GET) get_page_foll(page); if (flags & FOLL_TOUCH) { if ((flags & FOLL_WRITE) && !pte_dirty(pte) && !PageDirty(page)) set_page_dirty(page); /* * pte_mkyoung() would be more correct here, but atomic care * is needed to avoid losing the dirty bit: it is easier to use * mark_page_accessed(). */ mark_page_accessed(page); } if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) { /* * The preliminary mapping check is mainly to avoid the * pointless overhead of lock_page on the ZERO_PAGE * which might bounce very badly if there is contention. * * If the page is already locked, we don't need to * handle it now - vmscan will handle it later if and * when it attempts to reclaim the page. */ if (page->mapping && trylock_page(page)) { lru_add_drain(); /* push cached pages to LRU */ /* * Because we lock page here and migration is * blocked by the pte's page reference, we need * only check for file-cache page truncation. */ if (page->mapping) mlock_vma_page(page); unlock_page(page); } } unlock: pte_unmap_unlock(ptep, ptl); out: return page; bad_page: pte_unmap_unlock(ptep, ptl); return ERR_PTR(-EFAULT); no_page: pte_unmap_unlock(ptep, ptl); if (!pte_none(pte)) return page; no_page_table: /* * When core dumping an enormous anonymous area that nobody * has touched so far, we don't want to allocate unnecessary pages or * page tables. Return error instead of NULL to skip handle_mm_fault, * then get_dump_page() will return NULL to leave a hole in the dump. * But we can only make this optimization where a hole would surely * be zero-filled if handle_mm_fault() actually did handle it. */ if ((flags & FOLL_DUMP) && (!vma->vm_ops || !vma->vm_ops->fault)) return ERR_PTR(-EFAULT); return page; }
0
[ "CWE-264" ]
linux-2.6
1a5a9906d4e8d1976b701f889d8f35d54b928f25
147,759,488,797,181,010,000,000,000,000,000,000,000
144
mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode In some cases it may happen that pmd_none_or_clear_bad() is called with the mmap_sem hold in read mode. In those cases the huge page faults can allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a false positive from pmd_bad() that will not like to see a pmd materializing as trans huge. It's not khugepaged causing the problem, khugepaged holds the mmap_sem in write mode (and all those sites must hold the mmap_sem in read mode to prevent pagetables to go away from under them, during code review it seems vm86 mode on 32bit kernels requires that too unless it's restricted to 1 thread per process or UP builds). The race is only with the huge pagefaults that can convert a pmd_none() into a pmd_trans_huge(). Effectively all these pmd_none_or_clear_bad() sites running with mmap_sem in read mode are somewhat speculative with the page faults, and the result is always undefined when they run simultaneously. This is probably why it wasn't common to run into this. For example if the madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page fault, the hugepage will not be zapped, if the page fault runs first it will be zapped. Altering pmd_bad() not to error out if it finds hugepmds won't be enough to fix this, because zap_pmd_range would then proceed to call zap_pte_range (which would be incorrect if the pmd become a pmd_trans_huge()). The simplest way to fix this is to read the pmd in the local stack (regardless of what we read, no need of actual CPU barriers, only compiler barrier needed), and be sure it is not changing under the code that computes its value. Even if the real pmd is changing under the value we hold on the stack, we don't care. If we actually end up in zap_pte_range it means the pmd was not none already and it was not huge, and it can't become huge from under us (khugepaged locking explained above). All we need is to enforce that there is no way anymore that in a code path like below, pmd_trans_huge can be false, but pmd_none_or_clear_bad can run into a hugepmd. The overhead of a barrier() is just a compiler tweak and should not be measurable (I only added it for THP builds). I don't exclude different compiler versions may have prevented the race too by caching the value of *pmd on the stack (that hasn't been verified, but it wouldn't be impossible considering pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge, pmd_none are all inlines and there's no external function called in between pmd_trans_huge and pmd_none_or_clear_bad). if (pmd_trans_huge(*pmd)) { if (next-addr != HPAGE_PMD_SIZE) { VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem)); split_huge_page_pmd(vma->vm_mm, pmd); } else if (zap_huge_pmd(tlb, vma, pmd, addr)) continue; /* fall through */ } if (pmd_none_or_clear_bad(pmd)) Because this race condition could be exercised without special privileges this was reported in CVE-2012-1179. The race was identified and fully explained by Ulrich who debugged it. I'm quoting his accurate explanation below, for reference. ====== start quote ======= mapcount 0 page_mapcount 1 kernel BUG at mm/huge_memory.c:1384! At some point prior to the panic, a "bad pmd ..." message similar to the following is logged on the console: mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7). The "bad pmd ..." message is logged by pmd_clear_bad() before it clears the page's PMD table entry. 
143 void pmd_clear_bad(pmd_t *pmd) 144 { -> 145 pmd_ERROR(*pmd); 146 pmd_clear(pmd); 147 } After the PMD table entry has been cleared, there is an inconsistency between the actual number of PMD table entries that are mapping the page and the page's map count (_mapcount field in struct page). When the page is subsequently reclaimed, __split_huge_page() detects this inconsistency. 1381 if (mapcount != page_mapcount(page)) 1382 printk(KERN_ERR "mapcount %d page_mapcount %d\n", 1383 mapcount, page_mapcount(page)); -> 1384 BUG_ON(mapcount != page_mapcount(page)); The root cause of the problem is a race of two threads in a multithreaded process. Thread B incurs a page fault on a virtual address that has never been accessed (PMD entry is zero) while Thread A is executing an madvise() system call on a virtual address within the same 2 MB (huge page) range. virtual address space .---------------------. | | | | .-|---------------------| | | | | | |<-- B(fault) | | | 2 MB | |/////////////////////|-. huge < |/////////////////////| > A(range) page | |/////////////////////|-' | | | | | | '-|---------------------| | | | | '---------------------' - Thread A is executing an madvise(..., MADV_DONTNEED) system call on the virtual address range "A(range)" shown in the picture. sys_madvise // Acquire the semaphore in shared mode. down_read(&current->mm->mmap_sem) ... madvise_vma switch (behavior) case MADV_DONTNEED: madvise_dontneed zap_page_range unmap_vmas unmap_page_range zap_pud_range zap_pmd_range // // Assume that this huge page has never been accessed. // I.e. content of the PMD entry is zero (not mapped). // if (pmd_trans_huge(*pmd)) { // We don't get here due to the above assumption. } // // Assume that Thread B incurred a page fault and .---------> // sneaks in here as shown below. | // | if (pmd_none_or_clear_bad(pmd)) | { | if (unlikely(pmd_bad(*pmd))) | pmd_clear_bad | { | pmd_ERROR | // Log "bad pmd ..." message here. | pmd_clear | // Clear the page's PMD entry. | // Thread B incremented the map count | // in page_add_new_anon_rmap(), but | // now the page is no longer mapped | // by a PMD entry (-> inconsistency). | } | } | v - Thread B is handling a page fault on virtual address "B(fault)" shown in the picture. ... do_page_fault __do_page_fault // Acquire the semaphore in shared mode. down_read_trylock(&mm->mmap_sem) ... handle_mm_fault if (pmd_none(*pmd) && transparent_hugepage_enabled(vma)) // We get here due to the above assumption (PMD entry is zero). do_huge_pmd_anonymous_page alloc_hugepage_vma // Allocate a new transparent huge page here. ... __do_huge_pmd_anonymous_page ... spin_lock(&mm->page_table_lock) ... page_add_new_anon_rmap // Here we increment the page's map count (starts at -1). atomic_set(&page->_mapcount, 0) set_pmd_at // Here we set the page's PMD entry which will be cleared // when Thread A calls pmd_clear_bad(). ... spin_unlock(&mm->page_table_lock) The mmap_sem does not prevent the race because both threads are acquiring it in shared mode (down_read). Thread B holds the page_table_lock while the page's map count and PMD table entry are updated. However, Thread A does not synchronize on that lock. 
====== end quote ======= [[email protected]: checkpatch fixes] Reported-by: Ulrich Obergfell <[email protected]> Signed-off-by: Andrea Arcangeli <[email protected]> Acked-by: Johannes Weiner <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Dave Jones <[email protected]> Acked-by: Larry Woodman <[email protected]> Acked-by: Rik van Riel <[email protected]> Cc: <[email protected]> [2.6.38+] Cc: Mark Salter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
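To make the fix's shape concrete, here is a hedged user-space sketch of the pattern the message describes: read the pmd once into a local variable, place a compiler barrier after the read, and run every check against that snapshot so a concurrent huge-page fault cannot change the value between checks. pmd_t, barrier(), and the pmd_* predicates below are simplified stand-ins, not the kernel implementation.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pmd_t;                      /* stand-in for the kernel type */

#define barrier() __asm__ __volatile__("" ::: "memory")

static int pmd_none_sketch(pmd_t v) { return v == 0; }
static int pmd_trans_huge_sketch(pmd_t v) { return (v >> 63) & 1; } /* pretend bit 63 = huge */
static int pmd_bad_sketch(pmd_t v)
{
    return !pmd_none_sketch(v) && !pmd_trans_huge_sketch(v) && (v & 1) == 0;
}

/* Snapshot the possibly-changing entry exactly once, then check the local
 * copy, so a racing fault cannot turn a "none" pmd into a huge pmd between
 * the pmd_none() and pmd_bad() tests. */
static int pmd_none_or_clear_bad_snapshot(const pmd_t *pmdp)
{
    pmd_t val = *pmdp;   /* single read into the local stack */
    barrier();           /* all later tests must use 'val', never a re-read */

    if (pmd_none_sketch(val))
        return 1;
    if (pmd_trans_huge_sketch(val))
        return 0;        /* a huge entry is not "bad"; the caller skips it */
    if (pmd_bad_sketch(val)) {
        fprintf(stderr, "bad pmd %llx\n", (unsigned long long)val);
        return 1;
    }
    return 0;
}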
build_privkey_template (app_t app, int keyno, const unsigned char *rsa_n, size_t rsa_n_len, const unsigned char *rsa_e, size_t rsa_e_len, const unsigned char *rsa_p, size_t rsa_p_len, const unsigned char *rsa_q, size_t rsa_q_len, const unsigned char *rsa_u, size_t rsa_u_len, const unsigned char *rsa_dp, size_t rsa_dp_len, const unsigned char *rsa_dq, size_t rsa_dq_len, unsigned char **result, size_t *resultlen) { size_t rsa_e_reqlen; unsigned char privkey[7*(1+3+3)]; size_t privkey_len; unsigned char exthdr[2+2+3]; size_t exthdr_len; unsigned char suffix[2+3]; size_t suffix_len; unsigned char *tp; size_t datalen; unsigned char *template; size_t template_size; *result = NULL; *resultlen = 0; switch (app->app_local->keyattr[keyno].rsa.format) { case RSA_STD: case RSA_STD_N: case RSA_CRT: case RSA_CRT_N: break; default: return gpg_error (GPG_ERR_INV_VALUE); } /* Get the required length for E. Rounded up to the nearest byte */ rsa_e_reqlen = (app->app_local->keyattr[keyno].rsa.e_bits + 7) / 8; assert (rsa_e_len <= rsa_e_reqlen); /* Build the 7f48 cardholder private key template. */ datalen = 0; tp = privkey; tp += add_tlv (tp, 0x91, rsa_e_reqlen); datalen += rsa_e_reqlen; tp += add_tlv (tp, 0x92, rsa_p_len); datalen += rsa_p_len; tp += add_tlv (tp, 0x93, rsa_q_len); datalen += rsa_q_len; if (app->app_local->keyattr[keyno].rsa.format == RSA_CRT || app->app_local->keyattr[keyno].rsa.format == RSA_CRT_N) { tp += add_tlv (tp, 0x94, rsa_u_len); datalen += rsa_u_len; tp += add_tlv (tp, 0x95, rsa_dp_len); datalen += rsa_dp_len; tp += add_tlv (tp, 0x96, rsa_dq_len); datalen += rsa_dq_len; } if (app->app_local->keyattr[keyno].rsa.format == RSA_STD_N || app->app_local->keyattr[keyno].rsa.format == RSA_CRT_N) { tp += add_tlv (tp, 0x97, rsa_n_len); datalen += rsa_n_len; } privkey_len = tp - privkey; /* Build the extended header list without the private key template. */ tp = exthdr; *tp++ = keyno ==0 ? 0xb6 : keyno == 1? 0xb8 : 0xa4; *tp++ = 0; tp += add_tlv (tp, 0x7f48, privkey_len); exthdr_len = tp - exthdr; /* Build the 5f48 suffix of the data. */ tp = suffix; tp += add_tlv (tp, 0x5f48, datalen); suffix_len = tp - suffix; /* Now concatenate everything. */ template_size = (1 + 3 /* 0x4d and len. */ + exthdr_len + privkey_len + suffix_len + datalen); tp = template = xtrymalloc_secure (template_size); if (!template) return gpg_error_from_syserror (); tp += add_tlv (tp, 0x4d, exthdr_len + privkey_len + suffix_len + datalen); memcpy (tp, exthdr, exthdr_len); tp += exthdr_len; memcpy (tp, privkey, privkey_len); tp += privkey_len; memcpy (tp, suffix, suffix_len); tp += suffix_len; memcpy (tp, rsa_e, rsa_e_len); if (rsa_e_len < rsa_e_reqlen) { /* Right justify E. */ memmove (tp + rsa_e_reqlen - rsa_e_len, tp, rsa_e_len); memset (tp, 0, rsa_e_reqlen - rsa_e_len); } tp += rsa_e_reqlen; memcpy (tp, rsa_p, rsa_p_len); tp += rsa_p_len; memcpy (tp, rsa_q, rsa_q_len); tp += rsa_q_len; if (app->app_local->keyattr[keyno].rsa.format == RSA_CRT || app->app_local->keyattr[keyno].rsa.format == RSA_CRT_N) { memcpy (tp, rsa_u, rsa_u_len); tp += rsa_u_len; memcpy (tp, rsa_dp, rsa_dp_len); tp += rsa_dp_len; memcpy (tp, rsa_dq, rsa_dq_len); tp += rsa_dq_len; } if (app->app_local->keyattr[keyno].rsa.format == RSA_STD_N || app->app_local->keyattr[keyno].rsa.format == RSA_CRT_N) { memcpy (tp, rsa_n, rsa_n_len); tp += rsa_n_len; } /* Sanity check. We don't know the exact length because we allocated 3 bytes for the first length header. 
*/ assert (tp - template <= template_size); *result = template; *resultlen = tp - template; return 0; }
0
[ "CWE-20" ]
gnupg
2183683bd633818dd031b090b5530951de76f392
160,755,338,427,340,200,000,000,000,000,000,000,000
144
Use inline functions to convert buffer data to scalars. * common/host2net.h (buf16_to_ulong, buf16_to_uint): New. (buf16_to_ushort, buf16_to_u16): New. (buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New. -- Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to avoid all sign extension on shift problems. Hanno Böck found a case with an invalid read due to this problem. To fix that once and for all almost all uses of "<< 24" and "<< 8" are changed by this patch to use an inline function from host2net.h. Signed-off-by: Werner Koch <[email protected]>
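A hedged sketch of the helper style this message describes: assemble the scalar from unsigned char bytes so no signed char is ever shifted (and thus never sign-extended). The names echo the commit's buf16/buf32 helpers, but the bodies below are illustrative rather than copied from common/host2net.h.

#include <stdint.h>

static inline uint32_t buf32_to_u32(const void *buffer)
{
    const unsigned char *p = buffer;         /* unsigned: no sign extension */
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static inline uint16_t buf16_to_u16(const void *buffer)
{
    const unsigned char *p = buffer;
    return (uint16_t)(((uint16_t)p[0] << 8) | (uint16_t)p[1]);
}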
static void nested_svm_set_tdp_cr3(struct kvm_vcpu *vcpu, unsigned long root) { struct vcpu_svm *svm = to_svm(vcpu); svm->vmcb->control.nested_cr3 = __sme_set(root); mark_dirty(svm->vmcb, VMCB_NPT); }
0
[ "CWE-401" ]
linux
d80b64ff297e40c2b6f7d7abc1b3eba70d22a068
302,250,471,134,694,960,000,000,000,000,000,000,000
8
KVM: SVM: Fix potential memory leak in svm_cpu_init() When allocating memory for sd->sev_vmcbs fails, we forget to free the page held by sd->save_area. Also get rid of the var r as '-ENOMEM' is actually the only possible outcome here. Reviewed-by: Liran Alon <[email protected]> Reviewed-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: Miaohe Lin <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
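The shape of the fix, as a hedged sketch: when the second allocation fails, the first one must be released before returning -ENOMEM. The struct and field names below follow the message but are simplified stand-ins for the real KVM structures.

#include <stdlib.h>
#include <errno.h>

struct svm_cpu_data_sketch {
    void  *save_area;
    void **sev_vmcbs;
};

static int svm_cpu_init_sketch(struct svm_cpu_data_sketch *sd, size_t nguests)
{
    sd->save_area = malloc(4096);
    if (!sd->save_area)
        return -ENOMEM;

    sd->sev_vmcbs = calloc(nguests, sizeof(*sd->sev_vmcbs));
    if (!sd->sev_vmcbs) {
        free(sd->save_area);                 /* the release the fix adds */
        sd->save_area = NULL;
        return -ENOMEM;                      /* the only possible error here */
    }
    return 0;
}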
multiply_ms(tmsize_t m1, tmsize_t m2) { tmsize_t bytes = m1 * m2; if (m1 && bytes / m1 != m2) bytes = 0; return bytes; }
0
[ "CWE-787" ]
libtiff
aaab5c3c9d2a2c6984f23ccbc79702610439bc65
324,082,009,763,283,140,000,000,000,000,000,000,000
9
* libtiff/tif_luv.c: fix potential out-of-bound writes in decode functions in non debug builds by replacing assert()s by regular if checks (bugzilla #2522). Fix potential out-of-bound reads in case of short input data.
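A hedged sketch of how a caller can use this style of check instead of an assert(): compute the byte count with an overflow-aware multiply and treat zero as "reject the input", which still works in non-debug builds. The variant below checks before multiplying (unlike multiply_ms above, which checks afterwards); names are illustrative.

#include <stdint.h>
#include <stdlib.h>

typedef int64_t tmsize_t;

static tmsize_t checked_mul(tmsize_t m1, tmsize_t m2)
{
    if (m1 < 0 || m2 < 0)
        return 0;
    if (m1 != 0 && m2 > INT64_MAX / m1)
        return 0;                            /* would overflow: signal failure */
    return m1 * m2;
}

static void *alloc_rows(tmsize_t rows, tmsize_t rowbytes)
{
    tmsize_t total = checked_mul(rows, rowbytes);
    if (total == 0)
        return NULL;                         /* regular if-check, not assert() */
    return malloc((size_t)total);
}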
static void set_cr4_guest_host_mask(struct vcpu_vmx *vmx) { vmx->vcpu.arch.cr4_guest_owned_bits = KVM_CR4_GUEST_OWNED_BITS; if (enable_ept) vmx->vcpu.arch.cr4_guest_owned_bits |= X86_CR4_PGE; if (is_guest_mode(&vmx->vcpu)) vmx->vcpu.arch.cr4_guest_owned_bits &= ~get_vmcs12(&vmx->vcpu)->cr4_guest_host_mask; vmcs_writel(CR4_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr4_guest_owned_bits); }
0
[]
kvm
a642fc305053cc1c6e47e4f4df327895747ab485
14,218,512,350,142,852,000,000,000,000,000,000,000
10
kvm: vmx: handle invvpid vm exit gracefully On systems with invvpid instruction support (corresponding bit in IA32_VMX_EPT_VPID_CAP MSR is set) guest invocation of invvpid causes vm exit, which is currently not handled and results in propagation of unknown exit to userspace. Fix this by installing an invvpid vm exit handler. This is CVE-2014-3646. Cc: [email protected] Signed-off-by: Petr Matousek <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
static void assign_masked(ulong *dest, ulong src, ulong mask) { *dest = (*dest & ~mask) | (src & mask); }
0
[]
kvm
d1442d85cc30ea75f7d399474ca738e0bc96f715
149,699,183,334,669,420,000,000,000,000,000,000,000
4
KVM: x86: Handle errors when RIP is set during far jumps Far jmp/call/ret may fault while loading a new RIP. Currently KVM does not handle this case, which may result in a failed vm-entry once the assignment is done. The tricky part of doing so is that loading the new CS affects the VMCS/VMCB state, so if we fail during loading the new RIP, we are left in an inconsistent state. Therefore, this patch saves the old CS descriptor on 64-bit and restores it if loading RIP failed. This fixes CVE-2014-3647. Cc: [email protected] Signed-off-by: Nadav Amit <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
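A generic, hedged sketch of the save-then-rollback shape the message describes: snapshot the old segment, apply the new one, and restore the snapshot if the follow-up RIP load fails, so the emulated CPU is never left half-updated. All types and the failure condition below are simplified stand-ins for the emulator's real structures.

struct seg_sketch { unsigned long base; unsigned short sel; };
struct cpu_sketch { struct seg_sketch cs; unsigned long rip; };

static int load_rip_sketch(struct cpu_sketch *c, unsigned long new_rip)
{
    if (new_rip > 0xffffffffUL)              /* pretend: invalid target */
        return -1;
    c->rip = new_rip;
    return 0;
}

static int far_jump_sketch(struct cpu_sketch *c,
                           struct seg_sketch new_cs, unsigned long new_rip)
{
    struct seg_sketch old_cs = c->cs;        /* save the old descriptor */

    c->cs = new_cs;                          /* loading CS already mutates state */
    if (load_rip_sketch(c, new_rip) != 0) {
        c->cs = old_cs;                      /* roll back to a consistent state */
        return -1;
    }
    return 0;
}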
unsigned long ring_buffer_entries_cpu(struct trace_buffer *buffer, int cpu) { struct ring_buffer_per_cpu *cpu_buffer; if (!cpumask_test_cpu(cpu, buffer->cpumask)) return 0; cpu_buffer = buffer->buffers[cpu]; return rb_num_of_entries(cpu_buffer); }
0
[ "CWE-362" ]
linux
bbeb97464eefc65f506084fd9f18f21653e01137
93,619,797,898,609,250,000,000,000,000,000,000,000
11
tracing: Fix race in trace_open and buffer resize call The race below can occur if trace_open and a resize of the cpu buffer run in parallel on different cpus: CPUX CPUY ring_buffer_resize atomic_read(&buffer->resize_disabled) tracing_open tracing_reset_online_cpus ring_buffer_reset_cpu rb_reset_cpu rb_update_pages remove/insert pages resetting pointer This race can cause a data abort or sometimes an infinite loop in rb_remove_pages and rb_insert_pages while checking pages for sanity. Take the buffer lock to fix this. Link: https://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: b23d7a5f4a07a ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU") Signed-off-by: Gaurav Kohli <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
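A minimal pthread-based sketch of the fix's shape: the reset path and the resize path take the same per-buffer lock, so pages cannot be removed or inserted while another CPU resets the pointers. The mutex and fields stand in for the kernel's ring-buffer internals.

#include <pthread.h>

struct rb_sketch {
    pthread_mutex_t lock;                    /* stands in for the buffer mutex */
    int nr_pages;
    int head, tail;
};

static void rb_reset_sketch(struct rb_sketch *b)
{
    pthread_mutex_lock(&b->lock);            /* the lock the fix adds around reset */
    b->head = b->tail = 0;
    pthread_mutex_unlock(&b->lock);
}

static void rb_resize_sketch(struct rb_sketch *b, int new_pages)
{
    pthread_mutex_lock(&b->lock);
    b->nr_pages = new_pages;                 /* remove/insert pages happens here */
    pthread_mutex_unlock(&b->lock);
}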
size_t RemoteIo::Impl::populateBlocks(size_t lowBlock, size_t highBlock) { assert(isMalloced_); // optimize: ignore all true blocks on left & right sides. while(!blocksMap_[lowBlock].isNone() && lowBlock < highBlock) lowBlock++; while(!blocksMap_[highBlock].isNone() && highBlock > lowBlock) highBlock--; size_t rcount = 0; if (blocksMap_[highBlock].isNone()) { std::string data; getDataByRange( (long) lowBlock, (long) highBlock, data); rcount = data.length(); if (rcount == 0) { throw Error(kerErrorMessage, "Data By Range is empty. Please check the permission."); } byte* source = (byte*)data.c_str(); size_t remain = rcount, totalRead = 0; size_t iBlock = (rcount == size_) ? 0 : lowBlock; while (remain) { size_t allow = EXV_MIN(remain, blockSize_); blocksMap_[iBlock].populate(&source[totalRead], allow); remain -= allow; totalRead += allow; iBlock++; } } return rcount; }
0
[ "CWE-190" ]
exiv2
c73d1e27198a389ce7caf52ac30f8e2120acdafd
119,627,411,693,728,990,000,000,000,000,000,000,000
32
Avoid negative integer overflow when `filesize < io_->tell()`. This fixes #791.
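A hedged sketch of the guard this message implies: never form filesize - tell() when the current position is already past the end, because the negative signed difference would later be reinterpreted as a huge unsigned size. Names and types are illustrative, not the Exiv2 API.

#include <stddef.h>

static size_t remaining_bytes(long filesize, long current)
{
    if (filesize < 0 || current < 0 || current > filesize)
        return 0;                            /* avoid a negative difference */
    return (size_t)(filesize - current);
}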
snd_compr_tstamp(struct snd_compr_stream *stream, unsigned long arg) { struct snd_compr_tstamp tstamp = {0}; int ret; ret = snd_compr_update_tstamp(stream, &tstamp); if (ret == 0) ret = copy_to_user((struct snd_compr_tstamp __user *)arg, &tstamp, sizeof(tstamp)) ? -EFAULT : 0; return ret; }
0
[ "CWE-703" ]
linux
6217e5ede23285ddfee10d2e4ba0cc2d4c046205
262,551,229,256,275,280,000,000,000,000,000,000,000
11
ALSA: compress: fix an integer overflow check I previously added an integer overflow check here but looking at it now, it's still buggy. The bug happens in snd_compr_allocate_buffer(). We multiply ".fragments" and ".fragment_size" and that doesn't overflow but then we save it in an unsigned int so it truncates the high bits away and we allocate a smaller than expected size. Fixes: b35cc8225845 ('ALSA: compress_core: integer overflow in snd_compr_allocate_buffer()') Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
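A hedged sketch of the check the message calls for: validate the fragments * fragment_size product in a 64-bit type before it is stored into the narrower unsigned int field, so the value can neither wrap nor be silently truncated into a too-small allocation. Names are illustrative.

#include <stdint.h>
#include <limits.h>

static int check_fragments(uint32_t fragments, uint32_t fragment_size,
                           unsigned int *buffer_size_out)
{
    uint64_t total = (uint64_t)fragments * fragment_size;

    if (total == 0 || total > UINT_MAX)
        return -1;                           /* reject instead of truncating */
    *buffer_size_out = (unsigned int)total;
    return 0;
}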
agoo_con_http_events(agooCon c) { short events = 0; agooRes res = agoo_con_res_peek(c); if (NULL != res && NULL != res->message) { events = POLLIN | POLLOUT; } else if (!c->closing) { events = POLLIN; } return events; }
0
[ "CWE-444", "CWE-61" ]
agoo
23d03535cf7b50d679a60a953a0cae9519a4a130
123,436,309,665,908,310,000,000,000,000,000,000,000
11
Remote addr (#99) * REMOTE_ADDR added * Ready for merge
swab_rect (guint32 *rect) { rect[0] = GUINT32_FROM_LE (rect[0]); rect[1] = GUINT32_FROM_LE (rect[1]); rect[2] = GUINT32_FROM_LE (rect[2]); rect[3] = GUINT32_FROM_LE (rect[3]); }
0
[ "CWE-787" ]
gimp
48ec15890e1751dede061f6d1f469b6508c13439
289,248,889,209,183,500,000,000,000,000,000,000,000
7
file-psp: fix for bogus input data. Fixes bug #639203
GF_Err gf_isom_remove_track(GF_ISOFile *movie, u32 trackNumber) { GF_Err e; GF_TrackBox *the_trak, *trak; GF_TrackReferenceTypeBox *tref; u32 i, j, k, descIndex; GF_ISOTrackID *newRefs; u8 found; GF_ISOSample *samp; the_trak = gf_isom_get_track_from_file(movie, trackNumber); if (!the_trak) return GF_BAD_PARAM; e = CanAccessMovie(movie, GF_ISOM_OPEN_WRITE); if (e) return e; if (movie->moov->iods && movie->moov->iods->descriptor) { GF_Descriptor *desc; GF_ES_ID_Inc *inc; GF_List *ESDs; desc = movie->moov->iods->descriptor; if (desc->tag == GF_ODF_ISOM_IOD_TAG) { ESDs = ((GF_IsomInitialObjectDescriptor *)desc)->ES_ID_IncDescriptors; } else if (desc->tag == GF_ODF_ISOM_OD_TAG) { ESDs = ((GF_IsomObjectDescriptor *)desc)->ES_ID_IncDescriptors; } else { return GF_ISOM_INVALID_FILE; } //remove the track ref from the root OD if any i=0; while ((inc = (GF_ES_ID_Inc *)gf_list_enum(ESDs, &i))) { if (inc->trackID == the_trak->Header->trackID) { gf_odf_desc_del((GF_Descriptor *)inc); i--; gf_list_rem(ESDs, i); } } } //remove the track from the movie gf_list_del_item(movie->moov->trackList, the_trak); //rewrite any OD tracks i=0; while ((trak = (GF_TrackBox *)gf_list_enum(movie->moov->trackList, &i))) { if (trak->Media->handler->handlerType != GF_ISOM_MEDIA_OD) continue; //this is an OD track... j = gf_isom_get_sample_count(movie, i); for (k=0; k < j; k++) { //getting the sample will remove the references to the deleted track in the output OD frame samp = gf_isom_get_sample(movie, i, k+1, &descIndex); if (!samp) break; //so let's update with the new OD frame ! If the sample is empty, remove it if (!samp->dataLength) { e = gf_isom_remove_sample(movie, i, k+1); if (e) return e; } else { e = gf_isom_update_sample(movie, i, k+1, samp, GF_TRUE); if (e) return e; } //and don't forget to delete the sample gf_isom_sample_del(&samp); } } //remove the track ref from any "tref" box in all tracks, except the one to delete //note that we don't touch scal references, as we don't want to rewrite AVC/HEVC samples ... i=0; while ((trak = (GF_TrackBox *)gf_list_enum(movie->moov->trackList, &i))) { if (trak == the_trak) continue; if (! trak->References || ! gf_list_count(trak->References->child_boxes)) continue; j=0; while ((tref = (GF_TrackReferenceTypeBox *)gf_list_enum(trak->References->child_boxes, &j))) { if (tref->reference_type==GF_ISOM_REF_SCAL) continue; found = 0; for (k=0; k<tref->trackIDCount; k++) { if (tref->trackIDs[k] == the_trak->Header->trackID) found++; } if (!found) continue; //no more refs, remove this ref_type if (found == tref->trackIDCount) { gf_isom_box_del_parent(&trak->References->child_boxes, (GF_Box *)tref); j--; } else { newRefs = (GF_ISOTrackID*)gf_malloc(sizeof(GF_ISOTrackID) * (tref->trackIDCount - found)); if (!newRefs) return GF_OUT_OF_MEM; found = 0; for (k = 0; k < tref->trackIDCount; k++) { if (tref->trackIDs[k] != the_trak->Header->trackID) { newRefs[k-found] = tref->trackIDs[k]; } else { found++; } } gf_free(tref->trackIDs); tref->trackIDs = newRefs; tref->trackIDCount -= found; } } //a little opt: remove the ref box if empty... if (! 
gf_list_count(trak->References->child_boxes)) { gf_isom_box_del_parent(&trak->child_boxes, (GF_Box *)trak->References); trak->References = NULL; } } gf_isom_disable_inplace_rewrite(movie); //delete the track gf_isom_box_del_parent(&movie->moov->child_boxes, (GF_Box *)the_trak); /*update next track ID*/ movie->moov->mvhd->nextTrackID = 0; i=0; while ((trak = (GF_TrackBox *)gf_list_enum(movie->moov->trackList, &i))) { if (trak->Header->trackID>movie->moov->mvhd->nextTrackID) movie->moov->mvhd->nextTrackID = trak->Header->trackID; } if (!gf_list_count(movie->moov->trackList)) { gf_list_del_item(movie->TopBoxes, movie->moov); gf_isom_box_del((GF_Box *)movie->moov); movie->moov = NULL; } return GF_OK; }
0
[ "CWE-476" ]
gpac
ebfa346eff05049718f7b80041093b4c5581c24e
251,582,272,212,355,130,000,000,000,000,000,000,000
129
fixed #1706
zip_dealloc(zipobject *lz) { PyObject_GC_UnTrack(lz); Py_XDECREF(lz->ittuple); Py_XDECREF(lz->result); Py_TYPE(lz)->tp_free(lz); }
0
[ "CWE-125" ]
cpython
dcfcd146f8e6fc5c2fc16a4c192a0c5f5ca8c53c
316,677,155,375,377,780,000,000,000,000,000,000,000
7
bpo-35766: Merge typed_ast back into CPython (GH-11645)
} static inline void f2fs_clear_page_private(struct page *page) { if (!PagePrivate(page)) return; set_page_private(page, 0); ClearPagePrivate(page);
0
[ "CWE-476" ]
linux
4969c06a0d83c9c3dc50b8efcdc8eeedfce896f6
79,087,012,986,812,640,000,000,000,000,000,000,000
9
f2fs: support swap file w/ DIO Signed-off-by: Jaegeuk Kim <[email protected]>
read_region_section(FILE *fd, slang_T *lp, int len) { int i; if (len > 16) return SP_FORMERROR; for (i = 0; i < len; ++i) lp->sl_regions[i] = getc(fd); /* <regionname> */ lp->sl_regions[len] = NUL; return 0; }
0
[ "CWE-190" ]
vim
399c297aa93afe2c0a39e2a1b3f972aebba44c9d
178,604,726,536,465,460,000,000,000,000,000,000,000
11
patch 8.0.0322: possible overflow with corrupted spell file Problem: Possible overflow with spell file where the tree length is corrupted. Solution: Check for an invalid length (suggested by shqking)
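A hedged sketch of the kind of validation the fix adds: a length field read byte-by-byte from an untrusted spell file is range-checked before it drives any loop or allocation. The byte assembly only loosely mirrors Vim's get4c() style, and the bound is illustrative.

#include <stdio.h>

#define MAX_TREE_LEN (64L * 1024 * 1024)     /* illustrative upper bound */

static long read_tree_len(FILE *fd)
{
    unsigned long len = 0;
    int i, c;

    for (i = 0; i < 4; i++) {                /* 4-byte big-endian length */
        c = getc(fd);
        if (c == EOF)
            return -1;
        len = (len << 8) | (unsigned)c;
    }
    if (len == 0 || len > (unsigned long)MAX_TREE_LEN)
        return -1;                           /* reject corrupted lengths */
    return (long)len;
}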
void cpudef_init(void) { #if defined(cpudef_setup) cpudef_setup(); /* parse cpu definitions in target config file */ #endif }
0
[ "CWE-20" ]
qemu
0be839a2701369f669532ea5884c15bead1c6e08
79,596,106,876,103,325,000,000,000,000,000,000,000
6
migration: fix parameter validation on ram load During migration, the values read from migration stream during ram load are not validated. Especially offset in host_from_stream_offset() and also the length of the writes in the callers of said function. To fix this, we need to make sure that the [offset, offset + length] range fits into one of the allocated memory regions. Validating addr < len should be sufficient since data seems to always be managed in TARGET_PAGE_SIZE chunks. Fixes: CVE-2014-7840 Note: follow-up patches add extra checks on each block->host access. Signed-off-by: Michael S. Tsirkin <[email protected]> Reviewed-by: Paolo Bonzini <[email protected]> Reviewed-by: Dr. David Alan Gilbert <[email protected]> Signed-off-by: Amit Shah <[email protected]>
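An illustrative bounds check in the spirit of the fix: an offset taken from the migration stream is accepted only if the whole [offset, offset + size) range fits inside the selected memory block. The types below are simplified stand-ins for QEMU's RAMBlock.

#include <stdint.h>

struct ram_block_sketch {
    uint8_t  *host;
    uint64_t  length;
};

static uint8_t *host_from_offset_sketch(struct ram_block_sketch *block,
                                        uint64_t offset, uint64_t size)
{
    if (!block || size > block->length || offset > block->length - size)
        return NULL;                         /* reject out-of-range stream data */
    return block->host + offset;
}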
httpErrorNocloseStreamHandler(int status, FdEventHandlerPtr event, StreamRequestPtr srequest) { HTTPConnectionPtr connection = srequest->data; if(status == 0 && !streamRequestDone(srequest)) return 0; httpClientFinish(connection, 0); return 1; }
0
[ "CWE-617" ]
polipo
0e2b44af619e46e365971ea52b97457bc0778cd3
104,724,337,001,946,960,000,000,000,000,000,000,000
12
Try to read POST requests to local configuration interface correctly.
applet_is_any_device_activating (NMApplet *applet) { const GPtrArray *devices; int i; /* Check for activating devices */ devices = nm_client_get_devices (applet->nm_client); for (i = 0; devices && (i < devices->len); i++) { NMDevice *candidate = NM_DEVICE (g_ptr_array_index (devices, i)); NMDeviceState state; state = nm_device_get_state (candidate); if (state > NM_DEVICE_STATE_DISCONNECTED && state < NM_DEVICE_STATE_ACTIVATED) return TRUE; } return FALSE; }
0
[ "CWE-200" ]
network-manager-applet
8627880e07c8345f69ed639325280c7f62a8f894
277,291,753,049,184,800,000,000,000,000,000,000,000
17
editor: prevent any registration of objects on the system bus D-Bus access-control is name-based; so requests for a specific name are allowed/denied based on the rules in /etc/dbus-1/system.d. But apparently apps still get a non-named service on the bus, and if we register *any* object even though we don't have a named service, dbus and dbus-glib will happily proxy signals. Since the connection editor shouldn't ever expose anything having to do with connections on any bus, make sure that's the case.
static void write_response(ESPState *s) { trace_esp_write_response(s->status); s->ti_buf[0] = s->status; s->ti_buf[1] = 0; if (s->dma) { s->dma_memory_write(s->dma_opaque, s->ti_buf, 2); s->rregs[ESP_RSTAT] = STAT_TC | STAT_ST; s->rregs[ESP_RINTR] = INTR_BS | INTR_FC; s->rregs[ESP_RSEQ] = SEQ_CD; } else { s->ti_size = 2; s->ti_rptr = 0; s->ti_wptr = 2; s->rregs[ESP_RFLAGS] = 2; } esp_raise_irq(s); }
0
[ "CWE-787" ]
qemu
926cde5f3e4d2504ed161ed0cb771ac7cad6fd11
330,770,673,743,199,770,000,000,000,000,000,000,000
18
scsi: esp: make cmdbuf big enough for maximum CDB size While doing DMA read into ESP command buffer 's->cmdbuf', it could write past the 's->cmdbuf' area, if it was transferring more than 16 bytes. Increase the command buffer size to 32, which is maximum when 's->do_cmd' is set, and add a check on 'len' to avoid OOB access. Reported-by: Li Qiang <[email protected]> Signed-off-by: Prasad J Pandit <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
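A hedged sketch of the bounded copy the message describes: clamp the incoming transfer length to the space left in the (now larger) command buffer before writing into it. The struct is a stand-in for the ESP device state, and 32 is the enlarged size mentioned above.

#include <string.h>
#include <stdint.h>

#define CMDBUF_SZ 32

struct esp_sketch {
    uint8_t  cmdbuf[CMDBUF_SZ];
    uint32_t cmdlen;
};

static void cmd_dma_write_sketch(struct esp_sketch *s,
                                 const uint8_t *src, uint32_t len)
{
    uint32_t space = (s->cmdlen < CMDBUF_SZ) ? CMDBUF_SZ - s->cmdlen : 0;

    if (len > space)
        len = space;                         /* the check that stops the OOB write */
    memcpy(s->cmdbuf + s->cmdlen, src, len);
    s->cmdlen += len;
}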
static int do_setvfinfo(struct net_device *dev, struct nlattr **tb) { const struct net_device_ops *ops = dev->netdev_ops; int err = -EINVAL; if (tb[IFLA_VF_MAC]) { struct ifla_vf_mac *ivm = nla_data(tb[IFLA_VF_MAC]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_mac) err = ops->ndo_set_vf_mac(dev, ivm->vf, ivm->mac); if (err < 0) return err; } if (tb[IFLA_VF_VLAN]) { struct ifla_vf_vlan *ivv = nla_data(tb[IFLA_VF_VLAN]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_vlan) err = ops->ndo_set_vf_vlan(dev, ivv->vf, ivv->vlan, ivv->qos); if (err < 0) return err; } if (tb[IFLA_VF_TX_RATE]) { struct ifla_vf_tx_rate *ivt = nla_data(tb[IFLA_VF_TX_RATE]); struct ifla_vf_info ivf; err = -EOPNOTSUPP; if (ops->ndo_get_vf_config) err = ops->ndo_get_vf_config(dev, ivt->vf, &ivf); if (err < 0) return err; err = -EOPNOTSUPP; if (ops->ndo_set_vf_rate) err = ops->ndo_set_vf_rate(dev, ivt->vf, ivf.min_tx_rate, ivt->rate); if (err < 0) return err; } if (tb[IFLA_VF_RATE]) { struct ifla_vf_rate *ivt = nla_data(tb[IFLA_VF_RATE]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_rate) err = ops->ndo_set_vf_rate(dev, ivt->vf, ivt->min_tx_rate, ivt->max_tx_rate); if (err < 0) return err; } if (tb[IFLA_VF_SPOOFCHK]) { struct ifla_vf_spoofchk *ivs = nla_data(tb[IFLA_VF_SPOOFCHK]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_spoofchk) err = ops->ndo_set_vf_spoofchk(dev, ivs->vf, ivs->setting); if (err < 0) return err; } if (tb[IFLA_VF_LINK_STATE]) { struct ifla_vf_link_state *ivl = nla_data(tb[IFLA_VF_LINK_STATE]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_link_state) err = ops->ndo_set_vf_link_state(dev, ivl->vf, ivl->link_state); if (err < 0) return err; } if (tb[IFLA_VF_RSS_QUERY_EN]) { struct ifla_vf_rss_query_en *ivrssq_en; err = -EOPNOTSUPP; ivrssq_en = nla_data(tb[IFLA_VF_RSS_QUERY_EN]); if (ops->ndo_set_vf_rss_query_en) err = ops->ndo_set_vf_rss_query_en(dev, ivrssq_en->vf, ivrssq_en->setting); if (err < 0) return err; } if (tb[IFLA_VF_TRUST]) { struct ifla_vf_trust *ivt = nla_data(tb[IFLA_VF_TRUST]); err = -EOPNOTSUPP; if (ops->ndo_set_vf_trust) err = ops->ndo_set_vf_trust(dev, ivt->vf, ivt->setting); if (err < 0) return err; } if (tb[IFLA_VF_IB_NODE_GUID]) { struct ifla_vf_guid *ivt = nla_data(tb[IFLA_VF_IB_NODE_GUID]); if (!ops->ndo_set_vf_guid) return -EOPNOTSUPP; return handle_vf_guid(dev, ivt, IFLA_VF_IB_NODE_GUID); } if (tb[IFLA_VF_IB_PORT_GUID]) { struct ifla_vf_guid *ivt = nla_data(tb[IFLA_VF_IB_PORT_GUID]); if (!ops->ndo_set_vf_guid) return -EOPNOTSUPP; return handle_vf_guid(dev, ivt, IFLA_VF_IB_PORT_GUID); } return err; }
0
[ "CWE-200" ]
net
5f8e44741f9f216e33736ea4ec65ca9ac03036e6
284,749,509,871,030,100,000,000,000,000,000,000,000
122
net: fix infoleak in rtnetlink The stack object “map” has a total size of 32 bytes. Its last 4 bytes are padding generated by compiler. These padding bytes are not initialized and sent out via “nla_put”. Signed-off-by: Kangjie Lu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
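The standard remedy for this class of leak, as a hedged sketch: zero the whole on-stack structure, including any compiler-inserted padding, before filling its fields and copying it out. The struct below is a stand-in, not the real ifla_vf_* layout.

#include <string.h>
#include <stdint.h>

struct vf_info_sketch {
    uint32_t vf;
    uint8_t  mac[6];
    /* 2 bytes of compiler padding typically follow here */
    uint32_t flags;
};

static void fill_vf_info_sketch(struct vf_info_sketch *map, uint32_t vf,
                                const uint8_t mac[6], uint32_t flags)
{
    memset(map, 0, sizeof(*map));            /* padding bytes no longer leak */
    map->vf = vf;
    memcpy(map->mac, mac, 6);
    map->flags = flags;
}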
static int __init sched_domain_debug_setup(char *str) { sched_domain_debug_enabled = 1; return 0; }
0
[ "CWE-703", "CWE-835" ]
linux
f26f9aff6aaf67e9a430d16c266f91b13a5bff64
79,763,215,576,022,550,000,000,000,000,000,000,000
6
Sched: fix skip_clock_update optimization idle_balance() drops/retakes rq->lock, leaving the previous task vulnerable to set_tsk_need_resched(). Clear it after we return from balancing instead, and in setup_thread_stack() as well, so no successfully descheduled or never scheduled task has it set. Need resched confused the skip_clock_update logic, which assumes that the next call to update_rq_clock() will come nearly immediately after being set. Make the optimization robust against the case of waking a sleeper before it successfully deschedules by checking that the current task has not been dequeued before setting the flag, since it is that useless clock update we're trying to save, and clear unconditionally in schedule() proper instead of conditionally in put_prev_task(). Signed-off-by: Mike Galbraith <[email protected]> Reported-by: Bjoern B. Brandenburg <[email protected]> Tested-by: Yong Zhang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
GF_Err stbl_AppendPadding(GF_SampleTableBox *stbl, u8 padding) { if (!stbl->PaddingBits) { stbl->PaddingBits = (GF_PaddingBitsBox *) gf_isom_box_new_parent(&stbl->child_boxes, GF_ISOM_BOX_TYPE_PADB); if (!stbl->PaddingBits) return GF_OUT_OF_MEM; } stbl->PaddingBits->padbits = (u8*)gf_realloc(stbl->PaddingBits->padbits, sizeof(u8) * stbl->SampleSize->sampleCount); if (!stbl->PaddingBits->padbits) return GF_OUT_OF_MEM; stbl->PaddingBits->padbits[stbl->SampleSize->sampleCount-1] = padding; stbl->PaddingBits->SampleCount = stbl->SampleSize->sampleCount; return GF_OK; }
0
[ "CWE-120", "CWE-787" ]
gpac
77ed81c069e10b3861d88f72e1c6be1277ee7eae
21,685,779,084,877,400,000,000,000,000,000,000,000
12
fixed #1774 (fuzz)
static int v9fs_file_lock(struct file *filp, int cmd, struct file_lock *fl) { int res = 0; struct inode *inode = file_inode(filp); p9_debug(P9_DEBUG_VFS, "filp: %p lock: %p\n", filp, fl); /* No mandatory locks */ if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK) return -ENOLCK; if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) { filemap_write_and_wait(inode->i_mapping); invalidate_mapping_pages(&inode->i_data, 0, -1); } return res; }
0
[ "CWE-835" ]
linux
5e3cc1ee1405a7eb3487ed24f786dec01b4cbe1f
53,017,603,056,706,380,000,000,000,000,000,000,000
18
9p: use inode->i_lock to protect i_size_write() under 32-bit Use inode->i_lock to protect i_size_write(), else i_size_read() in generic_fillattr() may loop infinitely in read_seqcount_begin() when multiple processes invoke v9fs_vfs_getattr() or v9fs_vfs_getattr_dotl() simultaneously under 32-bit SMP environment, and a soft lockup will be triggered as show below: watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [stat:2217] Modules linked in: CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system PC is at generic_fillattr+0x104/0x108 LR is at 0xec497f00 pc : [<802b8898>] lr : [<ec497f00>] psr: 200c0013 sp : ec497e20 ip : ed608030 fp : ec497e3c r10: 00000000 r9 : ec497f00 r8 : ed608030 r7 : ec497ebc r6 : ec497f00 r5 : ee5c1550 r4 : ee005780 r3 : 0000052d r2 : 00000000 r1 : ec497f00 r0 : ed608030 Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none Control: 10c5387d Table: ac48006a DAC: 00000051 CPU: 5 PID: 2217 Comm: stat Not tainted 5.0.0-rc1-00005-g7f702faf5a9e #4 Hardware name: Generic DT based system Backtrace: [<8010d974>] (dump_backtrace) from [<8010dc88>] (show_stack+0x20/0x24) [<8010dc68>] (show_stack) from [<80a1d194>] (dump_stack+0xb0/0xdc) [<80a1d0e4>] (dump_stack) from [<80109f34>] (show_regs+0x1c/0x20) [<80109f18>] (show_regs) from [<801d0a80>] (watchdog_timer_fn+0x280/0x2f8) [<801d0800>] (watchdog_timer_fn) from [<80198658>] (__hrtimer_run_queues+0x18c/0x380) [<801984cc>] (__hrtimer_run_queues) from [<80198e60>] (hrtimer_run_queues+0xb8/0xf0) [<80198da8>] (hrtimer_run_queues) from [<801973e8>] (run_local_timers+0x28/0x64) [<801973c0>] (run_local_timers) from [<80197460>] (update_process_times+0x3c/0x6c) [<80197424>] (update_process_times) from [<801ab2b8>] (tick_nohz_handler+0xe0/0x1bc) [<801ab1d8>] (tick_nohz_handler) from [<80843050>] (arch_timer_handler_virt+0x38/0x48) [<80843018>] (arch_timer_handler_virt) from [<80180a64>] (handle_percpu_devid_irq+0x8c/0x240) [<801809d8>] (handle_percpu_devid_irq) from [<8017ac20>] (generic_handle_irq+0x34/0x44) [<8017abec>] (generic_handle_irq) from [<8017b344>] (__handle_domain_irq+0x6c/0xc4) [<8017b2d8>] (__handle_domain_irq) from [<801022e0>] (gic_handle_irq+0x4c/0x88) [<80102294>] (gic_handle_irq) from [<80101a30>] (__irq_svc+0x70/0x98) [<802b8794>] (generic_fillattr) from [<8056b284>] (v9fs_vfs_getattr_dotl+0x74/0xa4) [<8056b210>] (v9fs_vfs_getattr_dotl) from [<802b8904>] (vfs_getattr_nosec+0x68/0x7c) [<802b889c>] (vfs_getattr_nosec) from [<802b895c>] (vfs_getattr+0x44/0x48) [<802b8918>] (vfs_getattr) from [<802b8a74>] (vfs_statx+0x9c/0xec) [<802b89d8>] (vfs_statx) from [<802b9428>] (sys_lstat64+0x48/0x78) [<802b93e0>] (sys_lstat64) from [<80101000>] (ret_fast_syscall+0x0/0x28) [[email protected]: updated comment to not refer to a function in another subsystem] Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 7549ae3e81cc ("9p: Use the i_size_[read, write]() macros instead of using inode->i_size directly.") Reported-by: Xing Gaopeng <[email protected]> Signed-off-by: Hou Tao <[email protected]> Signed-off-by: Dominique Martinet <[email protected]>
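A hedged sketch of the fix's shape: on 32-bit SMP, i_size_write() updates a sequence counter, so concurrent writers must be serialized or a reader can spin forever on a torn sequence; the patch uses inode->i_lock for that. A pthread mutex stands in for the spinlock below.

#include <pthread.h>
#include <stdint.h>

struct inode_sketch {
    pthread_mutex_t i_lock;                  /* stands in for inode->i_lock */
    uint64_t        i_size;                  /* imagine a seqcount guards this on 32-bit */
};

static void v9fs_i_size_write_sketch(struct inode_sketch *inode, uint64_t size)
{
    pthread_mutex_lock(&inode->i_lock);      /* serialize the seqcount update */
    inode->i_size = size;
    pthread_mutex_unlock(&inode->i_lock);
}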
RZ_API void rz_analysis_extract_vars(RzAnalysis *analysis, RzAnalysisFunction *fcn, RzAnalysisOp *op) { rz_return_if_fail(analysis && fcn && op); const char *BP = analysis->reg->name[RZ_REG_NAME_BP]; const char *SP = analysis->reg->name[RZ_REG_NAME_SP]; if (BP) { extract_arg(analysis, fcn, op, BP, "+", RZ_ANALYSIS_VAR_KIND_BPV); extract_arg(analysis, fcn, op, BP, "-", RZ_ANALYSIS_VAR_KIND_BPV); } extract_arg(analysis, fcn, op, SP, "+", RZ_ANALYSIS_VAR_KIND_SPV); }
1
[ "CWE-703" ]
rizin
6ce71d8aa3dafe3cdb52d5d72ae8f4b95916f939
339,889,037,033,291,400,000,000,000,000,000,000,000
11
Initialize retctx,ctx before freeing the inner elements In rz_core_analysis_type_match the retctx structure was initialized on the stack only after a "goto out_function", where a field of that structure was freed. When the goto path is taken, the field is not properly initialized and it can cause a crash of Rizin or have other effects. Fixes: CVE-2021-4022
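A hedged sketch of the initialize-before-goto pattern the message describes: if any early-exit path frees a member of a stack structure, the structure must be zeroed before the first goto can be taken, so the cleanup sees NULL rather than garbage. Names are illustrative, not Rizin's.

#include <stdlib.h>
#include <string.h>

struct retctx_sketch {
    char *ret_reg;
    void *ctx;
};

static int type_match_sketch(int fail_early)
{
    struct retctx_sketch rctx;

    memset(&rctx, 0, sizeof(rctx));          /* initialize before any goto */

    if (fail_early)
        goto out_function;                   /* safe: free(NULL) is a no-op */

    rctx.ret_reg = strdup("rax");

out_function:
    free(rctx.ret_reg);
    return 0;
}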
static int file_read(struct tcmu_device *dev, struct tcmulib_cmd *cmd, struct iovec *iov, size_t iov_cnt, size_t length, off_t offset) { struct file_state *state = tcmu_get_dev_private(dev); size_t remaining = length; ssize_t ret; while (remaining) { ret = preadv(state->fd, iov, iov_cnt, offset); if (ret < 0) { tcmu_err("read failed: %m\n"); ret = tcmu_set_sense_data(cmd->sense_buf, MEDIUM_ERROR, ASC_READ_ERROR, NULL); goto done; } if (ret == 0) { /* EOF, then zeros the iovecs left */ tcmu_zero_iovec(iov, iov_cnt); break; } tcmu_seek_in_iovec(iov, ret); offset += ret; remaining -= ret; } ret = SAM_STAT_GOOD; done: cmd->done(dev, cmd, ret); return 0; }
0
[ "CWE-200" ]
tcmu-runner
8cf8208775022301adaa59c240bb7f93742d1329
236,164,176,344,928,250,000,000,000,000,000,000,000
32
removed all check_config callback implementations to avoid security issues; see GitHub issue #194. qcow.c contained an information leak: it could test for the existence of any file in the system. file_example.c and file_optical.c also allow testing for the existence of any file, plus temporarily creating empty new files anywhere in the file system. This also involves a race condition: if a file didn't exist in the first place but was created in between by some other process, the file would then be deleted by the check_config implementation.
int CLASS parse_tiff_ifd(int base) { unsigned entries, tag, type, len, plen = 16, save; int ifd, use_cm = 0, cfa, i, j, c, ima_len = 0; char *cbuf, *cp; uchar cfa_pat[16], cfa_pc[] = {0, 1, 2, 3}, tab[256]; double fm[3][4], cc[4][4], cm[4][3], cam_xyz[4][3], num; double ab[] = {1, 1, 1, 1}, asn[] = {0, 0, 0, 0}, xyz[] = {1, 1, 1}; unsigned sony_curve[] = {0, 0, 0, 0, 0, 4095}; unsigned *buf, sony_offset = 0, sony_length = 0, sony_key = 0; struct jhead jh; int pana_raw = 0; #ifndef LIBRAW_LIBRARY_BUILD FILE *sfp; #endif if (tiff_nifds >= sizeof tiff_ifd / sizeof tiff_ifd[0]) return 1; ifd = tiff_nifds++; for (j = 0; j < 4; j++) for (i = 0; i < 4; i++) cc[j][i] = i == j; entries = get2(); if (entries > 512) return 1; #ifdef LIBRAW_LIBRARY_BUILD INT64 fsize = ifp->size(); #endif while (entries--) { tiff_get(base, &tag, &type, &len, &save); #ifdef LIBRAW_LIBRARY_BUILD INT64 savepos = ftell(ifp); if (len > 8 && savepos + len > 2 * fsize) { fseek(ifp, save, SEEK_SET); // Recover tiff-read position!! continue; } if (callbacks.exif_cb) { callbacks.exif_cb(callbacks.exifparser_data, tag | (pana_raw ? 0x30000 : ((ifd + 1) << 20)), type, len, order, ifp,base); fseek(ifp, savepos, SEEK_SET); } #endif switch (tag) { case 1: if (len == 4) pana_raw = get4(); break; case 5: width = get2(); break; case 6: height = get2(); break; case 7: width += get2(); break; case 9: if ((i = get2())) filters = i; break; #ifdef LIBRAW_LIBRARY_BUILD case 10: if (pana_raw && len == 1 && type == 3) { pana_bpp = get2(); } break; #endif case 14: case 15: case 16: #ifdef LIBRAW_LIBRARY_BUILD if (pana_raw) { imgdata.color.linear_max[tag - 14] = get2(); if (tag == 15) imgdata.color.linear_max[3] = imgdata.color.linear_max[1]; } #endif break; case 17: case 18: if (type == 3 && len == 1) cam_mul[(tag - 17) * 2] = get2() / 256.0; break; #ifdef LIBRAW_LIBRARY_BUILD case 19: if (pana_raw) { ushort nWB, cnt, tWB; nWB = get2(); if (nWB > 0x100) break; for (cnt = 0; cnt < nWB; cnt++) { tWB = get2(); if (tWB < 0x100) { imgdata.color.WB_Coeffs[tWB][0] = get2(); imgdata.color.WB_Coeffs[tWB][2] = get2(); imgdata.color.WB_Coeffs[tWB][1] = imgdata.color.WB_Coeffs[tWB][3] = 0x100; } else get4(); } } break; case 0x0120: if (pana_raw) { unsigned sorder = order; unsigned long sbase = base; base = ftell(ifp); order = get2(); fseek(ifp, 2, SEEK_CUR); fseek(ifp, get4()-8, SEEK_CUR); parse_tiff_ifd (base); base = sbase; order = sorder; } break; case 0x0121: if (pana_raw) { /* 0 is Off, 65536 is Pixel Shift */ imgdata.makernotes.panasonic.Multishot = get4(); } break; case 0x2009: if ((pana_encoding == 4) || (pana_encoding == 5)) { int n = MIN (8, len); int permut[8] = {3, 2, 1, 0, 3+4, 2+4, 1+4, 0+4}; imgdata.makernotes.panasonic.BlackLevelDim = len; for (int i=0; i < n; i++) { imgdata.makernotes.panasonic.BlackLevel[permut[i]] = (float) (get2()) / (float) (powf(2.f, 14.f-pana_bpp)); } } break; #endif case 23: if (type == 3) iso_speed = get2(); break; case 28: case 29: case 30: #ifdef LIBRAW_LIBRARY_BUILD if (pana_raw && len == 1 && type == 3) { pana_black[tag - 28] = get2(); } else #endif { cblack[tag - 28] = get2(); cblack[3] = cblack[1]; } break; case 36: case 37: case 38: cam_mul[tag - 36] = get2(); break; case 39: #ifdef LIBRAW_LIBRARY_BUILD if (pana_raw) { ushort nWB, cnt, tWB; nWB = get2(); if (nWB > 0x100) break; for (cnt = 0; cnt < nWB; cnt++) { tWB = get2(); if (tWB < 0x100) { imgdata.color.WB_Coeffs[tWB][0] = get2(); imgdata.color.WB_Coeffs[tWB][1] = imgdata.color.WB_Coeffs[tWB][3] = get2(); imgdata.color.WB_Coeffs[tWB][2] = get2(); } else 
fseek(ifp, 6, SEEK_CUR); } } break; #endif if (len < 50 || cam_mul[0]>0.001f) break; fseek(ifp, 12, SEEK_CUR); FORC3 cam_mul[c] = get2(); break; #ifdef LIBRAW_LIBRARY_BUILD case 45: if (pana_raw && len == 1 && type == 3) { pana_encoding = get2(); } break; #endif case 46: if (type != 7 || fgetc(ifp) != 0xff || fgetc(ifp) != 0xd8) break; thumb_offset = ftell(ifp) - 2; thumb_length = len; break; case 61440: /* Fuji HS10 table */ fseek(ifp, get4() + base, SEEK_SET); parse_tiff_ifd(base); break; case 2: /* ImageWidth */ case 256: case 61441: /* Fuji RAF IFD 0xf001 RawImageFullWidth */ tiff_ifd[ifd].t_width = getint(type); break; case 3: /* ImageHeight */ case 257: case 61442: /* Fuji RAF IFD 0xf002 RawImageFullHeight */ tiff_ifd[ifd].t_height = getint(type); break; case 258: /* BitsPerSample */ case 61443: /* Fuji RAF IFD 0xf003 */ tiff_ifd[ifd].samples = len & 7; tiff_ifd[ifd].bps = getint(type); if (tiff_bps < tiff_ifd[ifd].bps) tiff_bps = tiff_ifd[ifd].bps; break; case 61446: /* Fuji RAF IFD 0xf006 */ raw_height = 0; if (tiff_ifd[ifd].bps > 12) break; load_raw = &CLASS packed_load_raw; load_flags = get4() ? 24 : 80; break; case 259: /* Compression */ tiff_ifd[ifd].comp = getint(type); break; case 262: /* PhotometricInterpretation */ tiff_ifd[ifd].phint = get2(); break; case 270: /* ImageDescription */ fread(desc, 512, 1, ifp); break; case 271: /* Make */ fgets(make, 64, ifp); break; case 272: /* Model */ fgets(model, 64, ifp); break; #ifdef LIBRAW_LIBRARY_BUILD case 278: tiff_ifd[ifd].rows_per_strip = getint(type); break; #endif case 280: /* Panasonic RW2 offset */ if (type != 4) break; load_raw = &CLASS panasonic_load_raw; load_flags = 0x2008; case 273: /* StripOffset */ #ifdef LIBRAW_LIBRARY_BUILD if (len > 1 && len < 16384) { off_t sav = ftell(ifp); tiff_ifd[ifd].strip_offsets = (int *)calloc(len, sizeof(int)); tiff_ifd[ifd].strip_offsets_count = len; for (int i = 0; i < len; i++) tiff_ifd[ifd].strip_offsets[i] = get4() + base; fseek(ifp, sav, SEEK_SET); // restore position } /* fallback */ #endif case 513: /* JpegIFOffset */ case 61447: tiff_ifd[ifd].offset = get4() + base; if (!tiff_ifd[ifd].bps && tiff_ifd[ifd].offset > 0) { fseek(ifp, tiff_ifd[ifd].offset, SEEK_SET); if (ljpeg_start(&jh, 1)) { tiff_ifd[ifd].comp = 6; tiff_ifd[ifd].t_width = jh.wide; tiff_ifd[ifd].t_height = jh.high; tiff_ifd[ifd].bps = jh.bits; tiff_ifd[ifd].samples = jh.clrs; if (!(jh.sraw || (jh.clrs & 1))) tiff_ifd[ifd].t_width *= jh.clrs; if ((tiff_ifd[ifd].t_width > 4 * tiff_ifd[ifd].t_height) & ~jh.clrs) { tiff_ifd[ifd].t_width /= 2; tiff_ifd[ifd].t_height *= 2; } i = order; parse_tiff(tiff_ifd[ifd].offset + 12); order = i; } } break; case 274: /* Orientation */ tiff_ifd[ifd].t_flip = "50132467"[get2() & 7] - '0'; break; case 277: /* SamplesPerPixel */ tiff_ifd[ifd].samples = getint(type) & 7; break; case 279: /* StripByteCounts */ #ifdef LIBRAW_LIBRARY_BUILD if (len > 1 && len < 16384) { off_t sav = ftell(ifp); tiff_ifd[ifd].strip_byte_counts = (int *)calloc(len, sizeof(int)); tiff_ifd[ifd].strip_byte_counts_count = len; for (int i = 0; i < len; i++) tiff_ifd[ifd].strip_byte_counts[i] = get4(); fseek(ifp, sav, SEEK_SET); // restore position } /* fallback */ #endif case 514: case 61448: tiff_ifd[ifd].bytes = get4(); break; case 61454: // FujiFilm "As Shot" FORC3 cam_mul[(4 - c) % 3] = getint(type); break; case 305: case 11: /* Software */ if ((pana_raw) && (tag == 11) && (type == 3)) { #ifdef LIBRAW_LIBRARY_BUILD imgdata.makernotes.panasonic.Compression = get2(); #endif break; } fgets(software, 64, ifp); if 
(!strncmp(software, "Adobe", 5) || !strncmp(software, "dcraw", 5) || !strncmp(software, "UFRaw", 5) || !strncmp(software, "Bibble", 6) || !strcmp(software, "Digital Photo Professional")) is_raw = 0; break; case 306: /* DateTime */ get_timestamp(0); break; case 315: /* Artist */ fread(artist, 64, 1, ifp); break; case 317: tiff_ifd[ifd].predictor = getint(type); break; case 322: /* TileWidth */ tiff_ifd[ifd].t_tile_width = getint(type); break; case 323: /* TileLength */ tiff_ifd[ifd].t_tile_length = getint(type); break; case 324: /* TileOffsets */ tiff_ifd[ifd].offset = len > 1 ? ftell(ifp) : get4(); if (len == 1) tiff_ifd[ifd].t_tile_width = tiff_ifd[ifd].t_tile_length = 0; if (len == 4) { load_raw = &CLASS sinar_4shot_load_raw; is_raw = 5; } break; case 325: tiff_ifd[ifd].bytes = len > 1 ? ftell(ifp) : get4(); break; case 330: /* SubIFDs */ if (!strcmp(model, "DSLR-A100") && tiff_ifd[ifd].t_width == 3872) { load_raw = &CLASS sony_arw_load_raw; data_offset = get4() + base; ifd++; #ifdef LIBRAW_LIBRARY_BUILD if (ifd >= sizeof tiff_ifd / sizeof tiff_ifd[0]) throw LIBRAW_EXCEPTION_IO_CORRUPT; #endif break; } #ifdef LIBRAW_LIBRARY_BUILD if (!strncmp(make, "Hasselblad", 10) && libraw_internal_data.unpacker_data.hasselblad_parser_flag) { fseek(ifp, ftell(ifp) + 4, SEEK_SET); fseek(ifp, get4() + base, SEEK_SET); parse_tiff_ifd(base); break; } #endif if (len > 1000) len = 1000; /* 1000 SubIFDs is enough */ while (len--) { i = ftell(ifp); fseek(ifp, get4() + base, SEEK_SET); if (parse_tiff_ifd(base)) break; fseek(ifp, i + 4, SEEK_SET); } break; case 339: tiff_ifd[ifd].sample_format = getint(type); break; case 400: strcpy(make, "Sarnoff"); maximum = 0xfff; break; #ifdef LIBRAW_LIBRARY_BUILD case 700: if ((type == 1 || type == 2 || type == 6 || type == 7) && len > 1 && len < 5100000) { xmpdata = (char *)malloc(xmplen = len + 1); fread(xmpdata, len, 1, ifp); xmpdata[len] = 0; } break; case 0x7000: imgdata.makernotes.sony.SonyRawFileType = get2(); break; #endif case 28688: FORC4 sony_curve[c + 1] = get2() >> 2 & 0xfff; for (i = 0; i < 5; i++) for (j = sony_curve[i] + 1; j <= sony_curve[i + 1]; j++) curve[j] = curve[j - 1] + (1 << i); break; case 29184: // Sony SR2Private 0x7200 sony_offset = get4(); break; case 29185: // Sony SR2Private 0x7201 sony_length = get4(); break; case 29217: // Sony SR2Private 0x7221 sony_key = get4(); break; case 29264: // Sony SR2Private 0x7250 parse_minolta(ftell(ifp)); raw_width = 0; break; case 29443: // Sony SR2SubIFD 0x7303 FORC4 cam_mul[c ^ (c < 2)] = get2(); break; case 29459: // Sony SR2SubIFD 0x7313 FORC4 cam_mul[c ^ (c >> 1)] = get2(); break; case 29456: // Sony SR2SubIFD 0x7310 FORC4 cblack[c ^ c >> 1] = get2(); i = cblack[3]; FORC3 if (i > cblack[c]) i = cblack[c]; FORC4 cblack[c] -= i; black = i; #ifdef DCRAW_VERBOSE if (verbose) fprintf(stderr, _("...Sony black: %u cblack: %u %u %u %u\n"), black, cblack[0], cblack[1], cblack[2], cblack[3]); #endif break; case 33405: /* Model2 */ fgets(model2, 64, ifp); break; case 33421: /* CFARepeatPatternDim */ if (get2() == 6 && get2() == 6) filters = 9; break; case 33422: /* CFAPattern */ if (filters == 9) { FORC(36)((char *)xtrans)[c] = fgetc(ifp) & 3; break; } case 64777: /* Kodak P-series */ if (len == 36) { filters = 9; colors = 3; FORC(36) xtrans[0][c] = fgetc(ifp) & 3; } else if (len > 0) { if ((plen = len) > 16) plen = 16; fread(cfa_pat, 1, plen, ifp); for (colors = cfa = i = 0; i < plen && colors < 4; i++) { if(cfa_pat[i] > 31) continue; // Skip wrong data colors += !(cfa & (1 << cfa_pat[i])); cfa |= 1 << cfa_pat[i]; 
} if (cfa == 070) memcpy(cfa_pc, "\003\004\005", 3); /* CMY */ if (cfa == 072) memcpy(cfa_pc, "\005\003\004\001", 4); /* GMCY */ goto guess_cfa_pc; } break; case 33424: case 65024: fseek(ifp, get4() + base, SEEK_SET); parse_kodak_ifd(base); break; case 33434: /* ExposureTime */ tiff_ifd[ifd].t_shutter = shutter = getreal(type); break; case 33437: /* FNumber */ aperture = getreal(type); break; #ifdef LIBRAW_LIBRARY_BUILD // IB start case 0x9400: imgdata.other.exifAmbientTemperature = getreal(type); if ((imgdata.other.CameraTemperature > -273.15f) && (OlyID == 0x4434353933ULL)) // TG-5 imgdata.other.CameraTemperature += imgdata.other.exifAmbientTemperature; break; case 0x9401: imgdata.other.exifHumidity = getreal(type); break; case 0x9402: imgdata.other.exifPressure = getreal(type); break; case 0x9403: imgdata.other.exifWaterDepth = getreal(type); break; case 0x9404: imgdata.other.exifAcceleration = getreal(type); break; case 0x9405: imgdata.other.exifCameraElevationAngle = getreal(type); break; case 0xa405: // FocalLengthIn35mmFormat imgdata.lens.FocalLengthIn35mmFormat = get2(); break; case 0xa431: // BodySerialNumber case 0xc62f: stmread(imgdata.shootinginfo.BodySerial, len, ifp); break; case 0xa432: // LensInfo, 42034dec, Lens Specification per EXIF standard imgdata.lens.MinFocal = getreal(type); imgdata.lens.MaxFocal = getreal(type); imgdata.lens.MaxAp4MinFocal = getreal(type); imgdata.lens.MaxAp4MaxFocal = getreal(type); break; case 0xa435: // LensSerialNumber stmread(imgdata.lens.LensSerial, len, ifp); break; case 0xc630: // DNG LensInfo, Lens Specification per EXIF standard imgdata.lens.MinFocal = getreal(type); imgdata.lens.MaxFocal = getreal(type); imgdata.lens.MaxAp4MinFocal = getreal(type); imgdata.lens.MaxAp4MaxFocal = getreal(type); break; case 0xa433: // LensMake stmread(imgdata.lens.LensMake, len, ifp); break; case 0xa434: // LensModel stmread(imgdata.lens.Lens, len, ifp); if (!strncmp(imgdata.lens.Lens, "----", 4)) imgdata.lens.Lens[0] = 0; break; case 0x9205: imgdata.lens.EXIF_MaxAp = libraw_powf64l(2.0f, (getreal(type) / 2.0f)); break; // IB end #endif case 34306: /* Leaf white balance */ FORC4 { int q = get2(); if(q) cam_mul[c ^ 1] = 4096.0 / q; } break; case 34307: /* Leaf CatchLight color matrix */ fread(software, 1, 7, ifp); if (strncmp(software, "MATRIX", 6)) break; colors = 4; for (raw_color = i = 0; i < 3; i++) { FORC4 fscanf(ifp, "%f", &rgb_cam[i][c ^ 1]); if (!use_camera_wb) continue; num = 0; FORC4 num += rgb_cam[i][c]; FORC4 rgb_cam[i][c] /= MAX(1, num); } break; case 34310: /* Leaf metadata */ parse_mos(ftell(ifp)); case 34303: strcpy(make, "Leaf"); break; case 34665: /* EXIF tag */ fseek(ifp, get4() + base, SEEK_SET); parse_exif(base); break; case 34853: /* GPSInfo tag */ { unsigned pos; fseek(ifp, pos = (get4() + base), SEEK_SET); parse_gps(base); #ifdef LIBRAW_LIBRARY_BUILD fseek(ifp, pos, SEEK_SET); parse_gps_libraw(base); #endif } break; case 34675: /* InterColorProfile */ case 50831: /* AsShotICCProfile */ profile_offset = ftell(ifp); profile_length = len; break; case 37122: /* CompressedBitsPerPixel */ kodak_cbpp = get4(); break; case 37386: /* FocalLength */ focal_len = getreal(type); break; case 37393: /* ImageNumber */ shot_order = getint(type); break; case 37400: /* old Kodak KDC tag */ for (raw_color = i = 0; i < 3; i++) { getreal(type); FORC3 rgb_cam[i][c] = getreal(type); } break; case 40976: strip_offset = get4(); switch (tiff_ifd[ifd].comp) { case 32770: load_raw = &CLASS samsung_load_raw; break; case 32772: load_raw = &CLASS samsung2_load_raw; 
break; case 32773: load_raw = &CLASS samsung3_load_raw; break; } break; case 46275: /* Imacon tags */ strcpy(make, "Imacon"); data_offset = ftell(ifp); ima_len = len; break; case 46279: if (!ima_len) break; fseek(ifp, 38, SEEK_CUR); case 46274: fseek(ifp, 40, SEEK_CUR); raw_width = get4(); raw_height = get4(); left_margin = get4() & 7; width = raw_width - left_margin - (get4() & 7); top_margin = get4() & 7; height = raw_height - top_margin - (get4() & 7); if (raw_width == 7262 && ima_len == 234317952) { height = 5412; width = 7216; left_margin = 7; filters = 0; } else if (raw_width == 7262) { height = 5444; width = 7244; left_margin = 7; } fseek(ifp, 52, SEEK_CUR); FORC3 cam_mul[c] = getreal(11); fseek(ifp, 114, SEEK_CUR); flip = (get2() >> 7) * 90; if (width * height * 6 == ima_len) { if (flip % 180 == 90) SWAP(width, height); raw_width = width; raw_height = height; left_margin = top_margin = filters = flip = 0; } sprintf(model, "Ixpress %d-Mp", height * width / 1000000); load_raw = &CLASS imacon_full_load_raw; if (filters) { if (left_margin & 1) filters = 0x61616161; load_raw = &CLASS unpacked_load_raw; } maximum = 0xffff; break; case 50454: /* Sinar tag */ case 50455: if (len < 1 || len > 2560000 || !(cbuf = (char *)malloc(len))) break; #ifndef LIBRAW_LIBRARY_BUILD fread(cbuf, 1, len, ifp); #else if (fread(cbuf, 1, len, ifp) != len) throw LIBRAW_EXCEPTION_IO_CORRUPT; // cbuf to be free'ed in recycle #endif cbuf[len - 1] = 0; for (cp = cbuf - 1; cp && cp < cbuf + len; cp = strchr(cp, '\n')) if (!strncmp(++cp, "Neutral ", 8)) sscanf(cp + 8, "%f %f %f", cam_mul, cam_mul + 1, cam_mul + 2); free(cbuf); break; case 50458: if (!make[0]) strcpy(make, "Hasselblad"); break; case 50459: /* Hasselblad tag */ #ifdef LIBRAW_LIBRARY_BUILD libraw_internal_data.unpacker_data.hasselblad_parser_flag = 1; #endif i = order; j = ftell(ifp); c = tiff_nifds; order = get2(); fseek(ifp, j + (get2(), get4()), SEEK_SET); parse_tiff_ifd(j); maximum = 0xffff; tiff_nifds = c; order = i; break; case 50706: /* DNGVersion */ FORC4 dng_version = (dng_version << 8) + fgetc(ifp); if (!make[0]) strcpy(make, "DNG"); is_raw = 1; break; case 50708: /* UniqueCameraModel */ #ifdef LIBRAW_LIBRARY_BUILD stmread(imgdata.color.UniqueCameraModel, len, ifp); imgdata.color.UniqueCameraModel[sizeof(imgdata.color.UniqueCameraModel) - 1] = 0; #endif if (model[0]) break; #ifndef LIBRAW_LIBRARY_BUILD fgets(make, 64, ifp); #else strncpy(make, imgdata.color.UniqueCameraModel, MIN(len, sizeof(imgdata.color.UniqueCameraModel))); #endif if ((cp = strchr(make, ' '))) { strcpy(model, cp + 1); *cp = 0; } break; case 50710: /* CFAPlaneColor */ if (filters == 9) break; if (len > 4) len = 4; colors = len; fread(cfa_pc, 1, colors, ifp); guess_cfa_pc: FORCC tab[cfa_pc[c]] = c; cdesc[c] = 0; for (i = 16; i--;) filters = filters << 2 | tab[cfa_pat[i % plen]]; filters -= !filters; break; case 50711: /* CFALayout */ if (get2() == 2) fuji_width = 1; break; case 291: case 50712: /* LinearizationTable */ #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_LINTABLE; tiff_ifd[ifd].lineartable_offset = ftell(ifp); tiff_ifd[ifd].lineartable_len = len; #endif linear_table(len); break; case 50713: /* BlackLevelRepeatDim */ #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BLACK; tiff_ifd[ifd].dng_levels.dng_fcblack[4] = tiff_ifd[ifd].dng_levels.dng_cblack[4] = #endif cblack[4] = get2(); #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.dng_fcblack[5] = tiff_ifd[ifd].dng_levels.dng_cblack[5] = #endif 
cblack[5] = get2(); if (cblack[4] * cblack[5] > (sizeof(cblack) / sizeof(cblack[0]) - 6)) #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.dng_fcblack[4] = tiff_ifd[ifd].dng_levels.dng_fcblack[5] = tiff_ifd[ifd].dng_levels.dng_cblack[4] = tiff_ifd[ifd].dng_levels.dng_cblack[5] = #endif cblack[4] = cblack[5] = 1; break; #ifdef LIBRAW_LIBRARY_BUILD case 0xf00d: if (!is_4K_RAFdata) { FORC3 imgdata.color.WB_Coeffs[LIBRAW_WBI_Auto][(4 - c) % 3] = getint(type); imgdata.color.WB_Coeffs[LIBRAW_WBI_Auto][3] = imgdata.color.WB_Coeffs[LIBRAW_WBI_Auto][1]; } break; case 0xf00c: if (!is_4K_RAFdata) { unsigned fwb[4]; FORC4 fwb[c] = get4(); if (fwb[3] < 0x100) { imgdata.color.WB_Coeffs[fwb[3]][0] = fwb[1]; imgdata.color.WB_Coeffs[fwb[3]][1] = imgdata.color.WB_Coeffs[fwb[3]][3] = fwb[0]; imgdata.color.WB_Coeffs[fwb[3]][2] = fwb[2]; if ((fwb[3] == 17) && (libraw_internal_data.unpacker_data.lenRAFData > 3) && (libraw_internal_data.unpacker_data.lenRAFData < 10240000)) { INT64 f_save = ftell(ifp); ushort *rafdata = (ushort *)malloc(sizeof(ushort) * libraw_internal_data.unpacker_data.lenRAFData); fseek(ifp, libraw_internal_data.unpacker_data.posRAFData, SEEK_SET); fread(rafdata, sizeof(ushort), libraw_internal_data.unpacker_data.lenRAFData, ifp); fseek(ifp, f_save, SEEK_SET); int fj, found = 0; for (int fi = 0; fi < (libraw_internal_data.unpacker_data.lenRAFData - 3); fi++) { if ((fwb[0] == rafdata[fi]) && (fwb[1] == rafdata[fi + 1]) && (fwb[2] == rafdata[fi + 2])) { if (rafdata[fi - 15] != fwb[0]) continue; for (int wb_ind = 0, ofst = fi - 15; wb_ind < nFuji_wb_list1; wb_ind++, ofst += 3) { imgdata.color.WB_Coeffs[Fuji_wb_list1[wb_ind]][1] = imgdata.color.WB_Coeffs[Fuji_wb_list1[wb_ind]][3] = rafdata[ofst]; imgdata.color.WB_Coeffs[Fuji_wb_list1[wb_ind]][0] = rafdata[ofst + 1]; imgdata.color.WB_Coeffs[Fuji_wb_list1[wb_ind]][2] = rafdata[ofst + 2]; } fi += 0x60; for (fj = fi; fj < (fi + 15); fj += 3) if (rafdata[fj] != rafdata[fi]) { found = 1; break; } if (found) { fj = fj - 93; for (int iCCT = 0; iCCT < 31; iCCT++) { imgdata.color.WBCT_Coeffs[iCCT][0] = FujiCCT_K[iCCT]; imgdata.color.WBCT_Coeffs[iCCT][1] = rafdata[iCCT * 3 + 1 + fj]; imgdata.color.WBCT_Coeffs[iCCT][2] = imgdata.color.WBCT_Coeffs[iCCT][4] = rafdata[iCCT * 3 + fj]; imgdata.color.WBCT_Coeffs[iCCT][3] = rafdata[iCCT * 3 + 2 + fj]; } } free(rafdata); break; } } } } FORC4 fwb[c] = get4(); if (fwb[3] < 0x100) { imgdata.color.WB_Coeffs[fwb[3]][0] = fwb[1]; imgdata.color.WB_Coeffs[fwb[3]][1] = imgdata.color.WB_Coeffs[fwb[3]][3] = fwb[0]; imgdata.color.WB_Coeffs[fwb[3]][2] = fwb[2]; } } break; #endif #ifdef LIBRAW_LIBRARY_BUILD case 50709: stmread(imgdata.color.LocalizedCameraModel, len, ifp); break; #endif case 61450: cblack[4] = cblack[5] = MIN(sqrt((double)len), 64); case 50714: /* BlackLevel */ #ifdef LIBRAW_LIBRARY_BUILD if (tiff_ifd[ifd].samples > 1 && tiff_ifd[ifd].samples == len) // LinearDNG, per-channel black { tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BLACK; for (i = 0; i < 4 && i < len; i++) { tiff_ifd[ifd].dng_levels.dng_fcblack[i] = getreal(type); tiff_ifd[ifd].dng_levels.dng_cblack[i] = cblack[i] = tiff_ifd[ifd].dng_levels.dng_fcblack[i]+0.5; } tiff_ifd[ifd].dng_levels.dng_fblack = tiff_ifd[ifd].dng_levels.dng_black = black = 0; } else #endif if ((cblack[4] * cblack[5] < 2) && len == 1) { #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BLACK; tiff_ifd[ifd].dng_levels.dng_fblack = getreal(type); black = tiff_ifd[ifd].dng_levels.dng_black = tiff_ifd[ifd].dng_levels.dng_fblack; #else black 
= getreal(type); #endif } else if (cblack[4] * cblack[5] <= len) { FORC(cblack[4] * cblack[5]) #ifdef LIBRAW_LIBRARY_BUILD { tiff_ifd[ifd].dng_levels.dng_fcblack[6+c] = getreal(type); cblack[6+c] = tiff_ifd[ifd].dng_levels.dng_fcblack[6+c]; } #else cblack[6 + c] = getreal(type); #endif black = 0; FORC4 cblack[c] = 0; #ifdef LIBRAW_LIBRARY_BUILD if (tag == 50714) { tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BLACK; FORC(cblack[4] * cblack[5]) tiff_ifd[ifd].dng_levels.dng_cblack[6 + c] = cblack[6 + c]; tiff_ifd[ifd].dng_levels.dng_fblack = 0; tiff_ifd[ifd].dng_levels.dng_black = 0; FORC4 tiff_ifd[ifd].dng_levels.dng_fcblack[c] = tiff_ifd[ifd].dng_levels.dng_cblack[c] = 0; } #endif } break; case 50715: /* BlackLevelDeltaH */ case 50716: /* BlackLevelDeltaV */ for (num = i = 0; i < len && i < 65536; i++) num += getreal(type); if(len>0) { black += num / len + 0.5; #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.dng_fblack += num/float(len); tiff_ifd[ifd].dng_levels.dng_black += num / len + 0.5; tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BLACK; #endif } break; case 50717: /* WhiteLevel */ #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_WHITE; tiff_ifd[ifd].dng_levels.dng_whitelevel[0] = #endif maximum = getint(type); #ifdef LIBRAW_LIBRARY_BUILD if (tiff_ifd[ifd].samples > 1) // Linear DNG case for (i = 1; i < 4 && i < len; i++) tiff_ifd[ifd].dng_levels.dng_whitelevel[i] = getint(type); #endif break; case 50718: /* DefaultScale */ { float q1 = getreal(type); float q2 = getreal(type); if(q1 > 0.00001f && q2 > 0.00001f) { pixel_aspect = q1/q2; if (pixel_aspect > 0.995 && pixel_aspect < 1.005) pixel_aspect = 1.0; } } break; #ifdef LIBRAW_LIBRARY_BUILD case 50719: /* DefaultCropOrigin */ if (len == 2) { tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_CROPORIGIN; tiff_ifd[ifd].dng_levels.default_crop[0] = getreal(type); tiff_ifd[ifd].dng_levels.default_crop[1] = getreal(type); if (!strncasecmp(make, "SONY", 4)) { imgdata.sizes.raw_crop.cleft = tiff_ifd[ifd].dng_levels.default_crop[0]; imgdata.sizes.raw_crop.ctop = tiff_ifd[ifd].dng_levels.default_crop[1]; } } break; case 50720: /* DefaultCropSize */ if (len == 2) { tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_CROPSIZE; tiff_ifd[ifd].dng_levels.default_crop[2] = getreal(type); tiff_ifd[ifd].dng_levels.default_crop[3] = getreal(type); if (!strncasecmp(make, "SONY", 4)) { imgdata.sizes.raw_crop.cwidth = tiff_ifd[ifd].dng_levels.default_crop[2]; imgdata.sizes.raw_crop.cheight = tiff_ifd[ifd].dng_levels.default_crop[3]; } } break; #define imSony imgdata.makernotes.sony case 0x74c7: if ((len == 2) && !strncasecmp(make, "SONY", 4)) { imSony.raw_crop.cleft = get4(); imSony.raw_crop.ctop = get4(); } break; case 0x74c8: if ((len == 2) && !strncasecmp(make, "SONY", 4)) { imSony.raw_crop.cwidth = get4(); imSony.raw_crop.cheight = get4(); } break; #undef imSony #endif #ifdef LIBRAW_LIBRARY_BUILD case 50778: tiff_ifd[ifd].dng_color[0].illuminant = get2(); tiff_ifd[ifd].dng_color[0].parsedfields |= LIBRAW_DNGFM_ILLUMINANT; break; case 50779: tiff_ifd[ifd].dng_color[1].illuminant = get2(); tiff_ifd[ifd].dng_color[1].parsedfields |= LIBRAW_DNGFM_ILLUMINANT; break; #endif case 50721: /* ColorMatrix1 */ case 50722: /* ColorMatrix2 */ { int chan = (len == 9)? 3 : (len == 12?4:0); #ifdef LIBRAW_LIBRARY_BUILD i = tag == 50721 ? 
0 : 1; if(chan) tiff_ifd[ifd].dng_color[i].parsedfields |= LIBRAW_DNGFM_COLORMATRIX; #endif FORC(chan) for (j = 0; j < 3; j++) { #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_color[i].colormatrix[c][j] = #endif cm[c][j] = getreal(type); } use_cm = 1; } break; case 0xc714: /* ForwardMatrix1 */ case 0xc715: /* ForwardMatrix2 */ { int chan = (len == 9)? 3 : (len == 12?4:0); #ifdef LIBRAW_LIBRARY_BUILD i = tag == 0xc714 ? 0 : 1; if(chan) tiff_ifd[ifd].dng_color[i].parsedfields |= LIBRAW_DNGFM_FORWARDMATRIX; #endif for (j = 0; j < 3; j++) FORC(chan) { #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_color[i].forwardmatrix[j][c] = #endif fm[j][c] = getreal(type); } } break; case 50723: /* CameraCalibration1 */ case 50724: /* CameraCalibration2 */ { int chan = (len == 9)? 3 : (len == 16?4:0); #ifdef LIBRAW_LIBRARY_BUILD j = tag == 50723 ? 0 : 1; if(chan) tiff_ifd[ifd].dng_color[j].parsedfields |= LIBRAW_DNGFM_CALIBRATION; #endif for (i = 0; i < chan; i++) FORC(chan) { #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_color[j].calibration[i][c] = #endif cc[i][c] = getreal(type); } } break; case 50727: /* AnalogBalance */ #ifdef LIBRAW_LIBRARY_BUILD if(len>=3) tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_ANALOGBALANCE; #endif for(c = 0; c < len && c < 4; c++) { #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.analogbalance[c] = #endif ab[c] = getreal(type); } break; case 50728: /* AsShotNeutral */ #ifdef LIBRAW_LIBRARY_BUILD if(len>=3) tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_ASSHOTNEUTRAL; for(c = 0; c < len && c < 4; c++) tiff_ifd[ifd].dng_levels.asshotneutral[c] = asn[c] = getreal(type); #else FORCC asn[c] = getreal(type); #endif break; case 50729: /* AsShotWhiteXY */ xyz[0] = getreal(type); xyz[1] = getreal(type); xyz[2] = 1 - xyz[0] - xyz[1]; FORC3 xyz[c] /= d65_white[c]; break; #ifdef LIBRAW_LIBRARY_BUILD case 50730: /* DNG: 0xc62a BaselineExposure */ tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_BASELINEEXPOSURE; tiff_ifd[ifd].dng_levels.baseline_exposure = getreal(type); break; case 50734: /* DNG: 0xc62e LinearResponseLimit */ tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_LINEARRESPONSELIMIT; tiff_ifd[ifd].dng_levels.LinearResponseLimit = getreal(type); break; #endif // IB start case 50740: /* tag 0xc634 : DNG Adobe, DNG Pentax, Sony SR2, DNG Private */ #ifdef LIBRAW_LIBRARY_BUILD if (!(imgdata.params.raw_processing_options & LIBRAW_PROCESSING_SKIP_MAKERNOTES)) { char mbuf[64]; unsigned short makernote_found = 0; INT64 curr_pos, start_pos = ftell(ifp); unsigned MakN_order, m_sorder = order; unsigned MakN_length; unsigned pos_in_original_raw; fread(mbuf, 1, 6, ifp); if (!strcmp(mbuf, "Adobe")) { order = 0x4d4d; // Adobe header is always in "MM" / big endian curr_pos = start_pos + 6; while (curr_pos + 8 - start_pos <= len) { fread(mbuf, 1, 4, ifp); curr_pos += 8; if (!strncmp(mbuf, "Pano", 4)) { // PanasonicRaw, yes, they use "Pano" as signature parseAdobePanoMakernote(); } if (!strncmp(mbuf, "MakN", 4)) { makernote_found = 1; MakN_length = get4(); MakN_order = get2(); pos_in_original_raw = get4(); order = MakN_order; INT64 save_pos = ifp->tell(); parse_makernote_0xc634(curr_pos + 6 - pos_in_original_raw, 0, AdobeDNG); curr_pos = save_pos + MakN_length - 6; fseek(ifp, curr_pos, SEEK_SET); fread(mbuf, 1, 4, ifp); curr_pos += 8; if (!strncmp(mbuf, "Pano ", 4)) { parseAdobePanoMakernote(); } if (!strncmp(mbuf, "SR2 ", 4)) { order = 0x4d4d; MakN_length = get4(); MakN_order = get2(); pos_in_original_raw = get4(); order = MakN_order; unsigned *buf_SR2; unsigned 
entries, tag, type, len, save; unsigned SR2SubIFDOffset = 0; unsigned SR2SubIFDLength = 0; unsigned SR2SubIFDKey = 0; int base = curr_pos + 6 - pos_in_original_raw; entries = get2(); while (entries--) { tiff_get(base, &tag, &type, &len, &save); if (tag == 0x7200) { SR2SubIFDOffset = get4(); } else if (tag == 0x7201) { SR2SubIFDLength = get4(); } else if (tag == 0x7221) { SR2SubIFDKey = get4(); } fseek(ifp, save, SEEK_SET); } if (SR2SubIFDLength && (SR2SubIFDLength < 10240000) && (buf_SR2 = (unsigned *)malloc(SR2SubIFDLength+1024))) { // 1024b for safety fseek(ifp, SR2SubIFDOffset + base, SEEK_SET); fread(buf_SR2, SR2SubIFDLength, 1, ifp); sony_decrypt(buf_SR2, SR2SubIFDLength / 4, 1, SR2SubIFDKey); parseSonySR2 ((uchar *)buf_SR2, SR2SubIFDOffset, SR2SubIFDLength, AdobeDNG); free(buf_SR2); } } /* SR2 processed */ break; } } } else { fread(mbuf + 6, 1, 2, ifp); if (!strcmp(mbuf, "PENTAX ") || !strcmp(mbuf, "SAMSUNG")) { makernote_found = 1; fseek(ifp, start_pos, SEEK_SET); parse_makernote_0xc634(base, 0, CameraDNG); } } fseek(ifp, start_pos, SEEK_SET); order = m_sorder; } // IB end #endif if (dng_version) { break; } parse_minolta(j = get4() + base); fseek(ifp, j, SEEK_SET); parse_tiff_ifd(base); break; case 50752: read_shorts(cr2_slice, 3); break; case 50829: /* ActiveArea */ top_margin = getint(type); left_margin = getint(type); height = getint(type) - top_margin; width = getint(type) - left_margin; break; case 50830: /* MaskedAreas */ for (i = 0; i < len && i < 32; i++) ((int *)mask)[i] = getint(type); black = 0; break; #ifdef LIBRAW_LIBRARY_BUILD case 50970: /* PreviewColorSpace */ tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_PREVIEWCS; tiff_ifd[ifd].dng_levels.preview_colorspace = getint(type); break; #endif case 51009: /* OpcodeList2 */ #ifdef LIBRAW_LIBRARY_BUILD tiff_ifd[ifd].dng_levels.parsedfields |= LIBRAW_DNGFM_OPCODE2; tiff_ifd[ifd].opcode2_offset = #endif meta_offset = ftell(ifp); break; case 64772: /* Kodak P-series */ if (len < 13) break; fseek(ifp, 16, SEEK_CUR); data_offset = get4(); fseek(ifp, 28, SEEK_CUR); data_offset += get4(); load_raw = &CLASS packed_load_raw; break; case 65026: if (type == 2) fgets(model2, 64, ifp); } fseek(ifp, save, SEEK_SET); } if (sony_length && sony_length < 10240000 && (buf = (unsigned *)malloc(sony_length))) { fseek(ifp, sony_offset, SEEK_SET); fread(buf, sony_length, 1, ifp); sony_decrypt(buf, sony_length / 4, 1, sony_key); #ifndef LIBRAW_LIBRARY_BUILD sfp = ifp; if ((ifp = tmpfile())) { fwrite(buf, sony_length, 1, ifp); fseek(ifp, 0, SEEK_SET); parse_tiff_ifd(-sony_offset); fclose(ifp); } ifp = sfp; #else parseSonySR2 ((uchar *)buf, sony_offset, sony_length, nonDNG); #endif free(buf); } for (i = 0; i < colors; i++) FORCC cc[i][c] *= ab[i]; if (use_cm) { FORCC for (i = 0; i < 3; i++) for (cam_xyz[c][i] = j = 0; j < colors; j++) cam_xyz[c][i] += cc[c][j] * cm[j][i] * xyz[i]; cam_xyz_coeff(cmatrix, cam_xyz); } if (asn[0]) { cam_mul[3] = 0; FORCC if(fabs(asn[c])>0.0001) cam_mul[c] = 1 / asn[c]; } if (!use_cm) FORCC if(fabs(cc[c][c])>0.0001) pre_mul[c] /= cc[c][c]; return 0; }
0
[ "CWE-400" ]
LibRaw
e67a9862d10ebaa97712f532eca1eb5e2e410a22
209,446,621,080,287,430,000,000,000,000,000,000,000
1,323
Fixed Secunia Advisory SA86384 - possible infinite loop in unpacked_load_raw() - possible infinite loop in parse_rollei() - possible infinite loop in parse_sinar_ia() Credits: Laurent Delosieres, Secunia Research at Flexera
oop MemberNameTable::find_or_add_member_name(jweak mem_name_wref) { assert_locked_or_safepoint(MemberNameTable_lock); oop new_mem_name = JNIHandles::resolve(mem_name_wref); // Find matching member name in the list. // This is linear because these are short lists. int len = this->length(); int new_index = len; for (int idx = 0; idx < len; idx++) { oop mname = JNIHandles::resolve(this->at(idx)); if (mname == NULL) { new_index = idx; continue; } if (java_lang_invoke_MemberName::equals(new_mem_name, mname)) { JNIHandles::destroy_weak_global(mem_name_wref); return mname; } } if (new_index < len) { assert(JNIHandles::resolve(this->at(new_index)) == NULL, "sanity"); // destroy the old handle JNIHandles::destroy_weak_global(this->at(new_index)); } // Not found, push the new one, or reuse empty slot this->at_put_grow(new_index, mem_name_wref); return new_mem_name; }
0
[]
jdk8u
f14e35d20e1a4d0f507f05838844152f2242c6d3
246,958,200,344,295,240,000,000,000,000,000,000,000
30
8281866: Enhance MethodHandle invocations Reviewed-by: andrew Backport-of: d974d9da365f787f67971d88c79371c8b0769f75
uint32_t GetPayloadTime(size_t handle, uint32_t index, float *in, float *out) { mp4object *mp4 = (mp4object *)handle; if (mp4 == NULL) return 0; if (mp4->metaoffsets == 0 || mp4->basemetadataduration == 0 || mp4->meta_clockdemon == 0 || in == NULL || out == NULL) return 1; *in = (float)((double)index * (double)mp4->basemetadataduration / (double)mp4->meta_clockdemon); *out = (float)((double)(index + 1) * (double)mp4->basemetadataduration / (double)mp4->meta_clockdemon); return 0; }
1
[ "CWE-125", "CWE-369", "CWE-787" ]
gpmf-parser
341f12cd5b97ab419e53853ca00176457c9f1681
240,146,925,824,367,840,000,000,000,000,000,000,000
11
fixed many security issues with the too crude mp4 reader
static void svm_cpuid_update(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && boot_cpu_has(X86_FEATURE_XSAVE) && boot_cpu_has(X86_FEATURE_XSAVES); /* Update nrips enabled cache */ svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) && guest_cpuid_has(&svm->vcpu, X86_FEATURE_NRIPS); if (!kvm_vcpu_apicv_active(vcpu)) return; /* * AVIC does not work with an x2APIC mode guest. If the X2APIC feature * is exposed to the guest, disable AVIC. */ if (guest_cpuid_has(vcpu, X86_FEATURE_X2APIC)) kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_X2APIC); /* * Currently, AVIC does not work with nested virtualization. * So, we disable AVIC when cpuid for SVM is set in the L1 guest. */ if (nested && guest_cpuid_has(vcpu, X86_FEATURE_SVM)) kvm_request_apicv_update(vcpu->kvm, false, APICV_INHIBIT_REASON_NESTED); }
0
[ "CWE-835" ]
linux
e72436bc3a5206f95bb384e741154166ddb3202e
243,815,664,031,038,430,000,000,000,000,000,000,000
31
KVM: SVM: avoid infinite loop on NPF from bad address When a nested page fault is taken from an address that does not have a memslot associated to it, kvm_mmu_do_page_fault returns RET_PF_EMULATE (via mmu_set_spte) and kvm_mmu_page_fault then invokes svm_need_emulation_on_page_fault. The default answer there is to return false, but in this case this just causes the page fault to be retried ad libitum. Since this is not a fast path, and the only other case where it is taken is an erratum, just stick a kvm_vcpu_gfn_to_memslot check in there to detect the common case where the erratum is not happening. This fixes an infinite loop in the new set_memory_region_test. Signed-off-by: Paolo Bonzini <[email protected]>
_pwgPageSizeForMedia( pwg_media_t *media, /* I - Media */ char *name, /* I - PageSize name buffer */ size_t namesize) /* I - Size of name buffer */ { const char *sizeptr, /* Pointer to size in PWG name */ *dimptr; /* Pointer to dimensions in PWG name */ /* * Range check input... */ if (!media || !name || namesize < PPD_MAX_NAME) return (NULL); /* * Copy or generate a PageSize name... */ if (media->ppd) { /* * Use a standard Adobe name... */ strlcpy(name, media->ppd, namesize); } else if (!media->pwg || !strncmp(media->pwg, "custom_", 7) || (sizeptr = strchr(media->pwg, '_')) == NULL || (dimptr = strchr(sizeptr + 1, '_')) == NULL || (size_t)(dimptr - sizeptr) > namesize) { /* * Use a name of the form "wNNNhNNN"... */ snprintf(name, namesize, "w%dh%d", (int)PWG_TO_POINTS(media->width), (int)PWG_TO_POINTS(media->length)); } else { /* * Copy the size name from class_sizename_dimensions... */ memcpy(name, sizeptr + 1, (size_t)(dimptr - sizeptr - 1)); name[dimptr - sizeptr - 1] = '\0'; } return (name); }
0
[ "CWE-93" ]
cups
07428f6a640ff93aa0b4cc69ca372e2cf8490e41
207,256,358,210,138,300,000,000,000,000,000,000,000
52
Only list supported PDLs (Issue #4923)
DEFUN(goLine, GOTO_LINE, "Go to the specified line") { char *str = searchKeyData(); if (prec_num) _goLine("^"); else if (str) _goLine(str); else /* FIXME: gettextize? */ _goLine(inputStr("Goto line: ", "")); }
0
[ "CWE-59", "CWE-241" ]
w3m
18dcbadf2771cdb0c18509b14e4e73505b242753
177,540,319,452,349,500,000,000,000,000,000,000,000
12
Make temporary directory safely when ~/.w3m is unwritable
httpConnect2( const char *host, /* I - Host to connect to */ int port, /* I - Port number */ http_addrlist_t *addrlist, /* I - List of addresses or @code NULL@ to lookup */ int family, /* I - Address family to use or @code AF_UNSPEC@ for any */ http_encryption_t encryption, /* I - Type of encryption to use */ int blocking, /* I - 1 for blocking connection, 0 for non-blocking */ int msec, /* I - Connection timeout in milliseconds, 0 means don't connect */ int *cancel) /* I - Pointer to "cancel" variable */ { http_t *http; /* New HTTP connection */ DEBUG_printf(("httpConnect2(host=\"%s\", port=%d, addrlist=%p, family=%d, encryption=%d, blocking=%d, msec=%d, cancel=%p)", host, port, (void *)addrlist, family, encryption, blocking, msec, (void *)cancel)); /* * Create the HTTP structure... */ if ((http = http_create(host, port, addrlist, family, encryption, blocking, _HTTP_MODE_CLIENT)) == NULL) return (NULL); /* * Optionally connect to the remote system... */ if (msec == 0 || !httpReconnect2(http, msec, cancel)) return (http); /* * Could not connect to any known address - bail out! */ httpClose(http); return (NULL); }
0
[ "CWE-120" ]
cups
f24e6cf6a39300ad0c3726a41a4aab51ad54c109
133,358,182,385,651,760,000,000,000,000,000,000,000
38
Fix multiple security/disclosure issues: - CVE-2019-8696 and CVE-2019-8675: Fixed SNMP buffer overflows (rdar://51685251) - Fixed IPP buffer overflow (rdar://50035411) - Fixed memory disclosure issue in the scheduler (rdar://51373853) - Fixed DoS issues in the scheduler (rdar://51373929)
Node() : camera(-1), skin(-1), mesh(-1) {}
0
[ "CWE-20" ]
tinygltf
52ff00a38447f06a17eab1caa2cf0730a119c751
250,291,158,486,867,930,000,000,000,000,000,000,000
1
Do not expand the file path since it's not necessary for the glTF asset path (URI) and for security reasons (`wordexp`).
PackLinuxElf64::invert_pt_dynamic(Elf64_Dyn const *dynp) { if (dt_table[Elf64_Dyn::DT_NULL]) { return; // not 1st time; do not change upx_dt_init } Elf64_Dyn const *const dynp0 = dynp; unsigned ndx = 1+ 0; if (dynp) for (; ; ++ndx, ++dynp) { upx_uint64_t const d_tag = get_te64(&dynp->d_tag); if (d_tag>>32) { // outrageous char msg[50]; snprintf(msg, sizeof(msg), "bad Elf64_Dyn[%d].d_tag %#lx", -1+ ndx, (long unsigned)d_tag); throwCantPack(msg); } if (d_tag < DT_NUM) { if (Elf64_Dyn::DT_NEEDED != d_tag && dt_table[d_tag] && get_te64(&dynp->d_val) != get_te64(&dynp0[-1+ dt_table[d_tag]].d_val)) { char msg[50]; snprintf(msg, sizeof(msg), "duplicate DT_%#x: [%#x] [%#x]", (unsigned)d_tag, -1+ dt_table[d_tag], -1+ ndx); throwCantPack(msg); } dt_table[d_tag] = ndx; } if (Elf64_Dyn::DT_NULL == d_tag) { break; // check here so that dt_table[DT_NULL] is set } } upx_dt_init = 0; if (dt_table[Elf64_Dyn::DT_INIT]) upx_dt_init = Elf64_Dyn::DT_INIT; else if (dt_table[Elf64_Dyn::DT_PREINIT_ARRAY]) upx_dt_init = Elf64_Dyn::DT_PREINIT_ARRAY; else if (dt_table[Elf64_Dyn::DT_INIT_ARRAY]) upx_dt_init = Elf64_Dyn::DT_INIT_ARRAY; unsigned const z_str = dt_table[Elf64_Dyn::DT_STRSZ]; if (z_str) { strtab_end = get_te64(&dynp0[-1+ z_str].d_val); if ((u64_t)file_size <= strtab_end) { // FIXME: weak char msg[50]; snprintf(msg, sizeof(msg), "bad DT_STRSZ %#x", strtab_end); throwCantPack(msg); } } // DT_SYMTAB has no designated length. // End it when next area else starts; often DT_STRTAB. (FIXME) unsigned const x_sym = dt_table[Elf64_Dyn::DT_SYMTAB]; unsigned const x_str = dt_table[Elf64_Dyn::DT_STRTAB]; if (x_sym && x_str) { upx_uint64_t const v_sym = get_te64(&dynp0[-1+ x_sym].d_val); upx_uint64_t const v_str = get_te64(&dynp0[-1+ x_str].d_val); unsigned const z_sym = dt_table[Elf64_Dyn::DT_SYMENT]; unsigned const sz_sym = !z_sym ? sizeof(Elf64_Sym) : get_te64(&dynp0[-1+ z_sym].d_val); if (v_sym < v_str) { symnum_end = (v_str - v_sym) / sz_sym; } } // DT_HASH often ends at DT_SYMTAB unsigned const v_hsh = elf_unsigned_dynamic(Elf64_Dyn::DT_HASH); if (v_hsh && file_image) { hashtab = (unsigned const *)elf_find_dynamic(Elf64_Dyn::DT_HASH); if (!hashtab) { char msg[40]; snprintf(msg, sizeof(msg), "bad DT_HASH %#x", v_hsh); throwCantPack(msg); } unsigned const nbucket = get_te32(&hashtab[0]); unsigned const *const buckets = &hashtab[2]; unsigned const *const chains = &buckets[nbucket]; (void)chains; unsigned const v_sym = get_te32(&dynp0[-1+ x_sym].d_val); if (!nbucket || (nbucket>>31) || (file_size/sizeof(unsigned)) <= (2*nbucket) // FIXME: weak || ((v_hsh < v_sym) && (v_sym - v_hsh) < (sizeof(unsigned)*2 // headers + sizeof(*buckets)*nbucket // buckets + sizeof(*chains) *nbucket // chains )) ) { char msg[90]; snprintf(msg, sizeof(msg), "bad DT_HASH nbucket=%#x len=%#x", nbucket, (v_sym - v_hsh)); throwCantPack(msg); } } // DT_GNU_HASH often ends at DT_SYMTAB; FIXME: not for Android? 
unsigned const v_gsh = elf_unsigned_dynamic(Elf64_Dyn::DT_GNU_HASH); if (v_gsh && file_image) { gashtab = (unsigned const *)elf_find_dynamic(Elf64_Dyn::DT_GNU_HASH); if (!gashtab) { char msg[40]; snprintf(msg, sizeof(msg), "bad DT_GNU_HASH %#x", v_gsh); throwCantPack(msg); } unsigned const n_bucket = get_te32(&gashtab[0]); unsigned const n_bitmask = get_te32(&gashtab[2]); unsigned const gnu_shift = get_te32(&gashtab[3]); upx_uint64_t const *const bitmask = (upx_uint64_t const *)(void const *)&gashtab[4]; unsigned const *const buckets = (unsigned const *)&bitmask[n_bitmask]; unsigned const *const hasharr = &buckets[n_bucket]; (void)hasharr; //unsigned const *const gashend = &hasharr[n_bucket]; // minimum upx_uint64_t const v_sym = get_te64(&dynp0[-1+ x_sym].d_val); if (!n_bucket || !n_bitmask || (-1+ n_bitmask) & n_bitmask // not a power of 2 || 8*sizeof(upx_uint64_t) <= gnu_shift // shifted result always == 0 || (n_bucket>>30) // fie on fuzzers || (n_bitmask>>30) || (file_size/sizeof(unsigned)) <= ((sizeof(*bitmask)/sizeof(unsigned))*n_bitmask + 2*n_bucket) // FIXME: weak // FIXME: next test does work for Android? || ((v_gsh < v_sym) && (v_sym - v_gsh) < (sizeof(unsigned)*4 // headers + sizeof(*bitmask)*n_bitmask // bitmask + sizeof(*buckets)*n_bucket // buckets + sizeof(*hasharr)*n_bucket // hasharr )) ) { char msg[90]; snprintf(msg, sizeof(msg), "bad DT_GNU_HASH n_bucket=%#x n_bitmask=%#x len=%#lx", n_bucket, n_bitmask, (long unsigned)(v_sym - v_gsh)); throwCantPack(msg); } } unsigned const e_shstrndx = get_te16(&ehdri.e_shstrndx); if (e_shnum <= e_shstrndx && !(0==e_shnum && 0==e_shstrndx) ) { char msg[40]; snprintf(msg, sizeof(msg), "bad .e_shstrndx %d >= .e_shnum %d", e_shstrndx, e_shnum); throwCantPack(msg); } }
1
[ "CWE-703", "CWE-369" ]
upx
eb90eab6325d009004ffb155e3e33f22d4d3ca26
37,378,469,729,196,886,000,000,000,000,000,000,000
131
Detect bogus DT_SYMENT. https://github.com/upx/upx/issues/331 modified: p_lx_elf.cpp
void NumberFormatTest::TestSurrogateSupport(void) { UErrorCode status = U_ZERO_ERROR; DecimalFormatSymbols custom(Locale::getUS(), status); CHECK(status, "DecimalFormatSymbols constructor"); custom.setSymbol(DecimalFormatSymbols::kDecimalSeparatorSymbol, "decimal"); custom.setSymbol(DecimalFormatSymbols::kPlusSignSymbol, "plus"); custom.setSymbol(DecimalFormatSymbols::kMinusSignSymbol, " minus "); custom.setSymbol(DecimalFormatSymbols::kExponentialSymbol, "exponent"); UnicodeString patternStr("*\\U00010000##.##", ""); patternStr = patternStr.unescape(); UnicodeString expStr("\\U00010000\\U00010000\\U00010000\\U000100000", ""); expStr = expStr.unescape(); expect2(new DecimalFormat(patternStr, custom, status), int32_t(0), expStr, status); status = U_ZERO_ERROR; expect2(new DecimalFormat("*^##.##", custom, status), int32_t(0), "^^^^0", status); status = U_ZERO_ERROR; expect2(new DecimalFormat("##.##", custom, status), -1.3, " minus 1decimal3", status); status = U_ZERO_ERROR; expect2(new DecimalFormat("##0.0####E0 'g-m/s^2'", custom, status), int32_t(0), "0decimal0exponent0 g-m/s^2", status); status = U_ZERO_ERROR; expect(new DecimalFormat("##0.0####E0 'g-m/s^2'", custom, status), 1.0/3, "333decimal333exponent minus 3 g-m/s^2", status); status = U_ZERO_ERROR; expect2(new DecimalFormat("##0.0#### 'g-m/s^2'", custom, status), int32_t(0), "0decimal0 g-m/s^2", status); status = U_ZERO_ERROR; expect(new DecimalFormat("##0.0#### 'g-m/s^2'", custom, status), 1.0/3, "0decimal33333 g-m/s^2", status); UnicodeString zero((UChar32)0x10000); UnicodeString one((UChar32)0x10001); UnicodeString two((UChar32)0x10002); UnicodeString five((UChar32)0x10005); custom.setSymbol(DecimalFormatSymbols::kZeroDigitSymbol, zero); custom.setSymbol(DecimalFormatSymbols::kOneDigitSymbol, one); custom.setSymbol(DecimalFormatSymbols::kTwoDigitSymbol, two); custom.setSymbol(DecimalFormatSymbols::kFiveDigitSymbol, five); expStr = UnicodeString("\\U00010001decimal\\U00010002\\U00010005\\U00010000", ""); expStr = expStr.unescape(); status = U_ZERO_ERROR; expect2(new DecimalFormat("##0.000", custom, status), 1.25, expStr, status); custom.setSymbol(DecimalFormatSymbols::kZeroDigitSymbol, (UChar)0x30); custom.setSymbol(DecimalFormatSymbols::kCurrencySymbol, "units of money"); custom.setSymbol(DecimalFormatSymbols::kMonetarySeparatorSymbol, "money separator"); patternStr = UNICODE_STRING_SIMPLE("0.00 \\u00A4' in your bank account'"); patternStr = patternStr.unescape(); expStr = UnicodeString(" minus 20money separator00 units of money in your bank account", ""); status = U_ZERO_ERROR; expect2(new DecimalFormat(patternStr, custom, status), int32_t(-20), expStr, status); custom.setSymbol(DecimalFormatSymbols::kPercentSymbol, "percent"); patternStr = "'You''ve lost ' -0.00 %' of your money today'"; patternStr = patternStr.unescape(); expStr = UnicodeString(" minus You've lost minus 2000decimal00 percent of your money today", ""); status = U_ZERO_ERROR; expect2(new DecimalFormat(patternStr, custom, status), int32_t(-20), expStr, status); }
0
[ "CWE-190" ]
icu
53d8c8f3d181d87a6aa925b449b51c4a2c922a51
250,716,003,459,174,050,000,000,000,000,000,000,000
68
ICU-20246 Fixing another integer overflow in number parsing.
static uint8_t next_transaction_id(uint8_t id) { return (((id + 1) & XACT_ID_MAX) | (id & (XACT_ID_MAX+1))); }
0
[ "CWE-787" ]
zephyr
6ec31c852049641095f2cdf25591f2e1bff86774
330,290,631,414,290,870,000,000,000,000,000,000,000
4
Bluetooth: Mesh: Check SegN when receiving Transaction Start PDU When receiving Transaction Start PDU, assure that number of segments needed to send a Provisioning PDU with TotalLength size is equal to SegN value provided in the Transaction Start PDU. Signed-off-by: Pavel Vasilyev <[email protected]> (cherry picked from commit a63c51567991ded3939bbda4f82bcb8c44228331)
**/ CImg<T>& cumulate(const char axis=0) { switch (cimg::lowercase(axis)) { case 'x' : cimg_pragma_openmp(parallel for collapse(3) cimg_openmp_if(_width>=512 && _height*_depth*_spectrum>=16)) cimg_forYZC(*this,y,z,c) { T *ptrd = data(0,y,z,c); Tlong cumul = (Tlong)0; cimg_forX(*this,x) { cumul+=(Tlong)*ptrd; *(ptrd++) = (T)cumul; } } break; case 'y' : { const ulongT w = (ulongT)_width; cimg_pragma_openmp(parallel for collapse(3) cimg_openmp_if(_height>=512 && _width*_depth*_spectrum>=16)) cimg_forXZC(*this,x,z,c) { T *ptrd = data(x,0,z,c); Tlong cumul = (Tlong)0; cimg_forY(*this,y) { cumul+=(Tlong)*ptrd; *ptrd = (T)cumul; ptrd+=w; } } } break; case 'z' : { const ulongT wh = (ulongT)_width*_height; cimg_pragma_openmp(parallel for collapse(3) cimg_openmp_if(_depth>=512 && _width*_depth*_spectrum>=16)) cimg_forXYC(*this,x,y,c) { T *ptrd = data(x,y,0,c); Tlong cumul = (Tlong)0; cimg_forZ(*this,z) { cumul+=(Tlong)*ptrd; *ptrd = (T)cumul; ptrd+=wh; } } } break; case 'c' : { const ulongT whd = (ulongT)_width*_height*_depth; cimg_pragma_openmp(parallel for collapse(3) cimg_openmp_if(_spectrum>=512 && _width*_height*_depth>=16)) cimg_forXYZ(*this,x,y,z) { T *ptrd = data(x,y,z,0); Tlong cumul = (Tlong)0; cimg_forC(*this,c) { cumul+=(Tlong)*ptrd; *ptrd = (T)cumul; ptrd+=whd; } } } break; default : { // Global cumulation. Tlong cumul = (Tlong)0; cimg_for(*this,ptrd,T) { cumul+=(Tlong)*ptrd; *ptrd = (T)cumul; } } } return *this;
0
[ "CWE-125" ]
CImg
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
279,542,860,993,466,800,000,000,000,000,000,000,000
44
Fix other issues in 'CImg<T>::load_bmp()'.
string_get_localpart(address_item * addr, gstring * yield) { uschar * s; s = addr->prefix; if (testflag(addr, af_include_affixes) && s) { #ifdef SUPPORT_I18N if (testflag(addr, af_utf8_downcvt)) s = string_localpart_utf8_to_alabel(s, NULL); #endif yield = string_cat(yield, s); } s = addr->local_part; #ifdef SUPPORT_I18N if (testflag(addr, af_utf8_downcvt)) s = string_localpart_utf8_to_alabel(s, NULL); #endif yield = string_cat(yield, s); s = addr->suffix; if (testflag(addr, af_include_affixes) && s) { #ifdef SUPPORT_I18N if (testflag(addr, af_utf8_downcvt)) s = string_localpart_utf8_to_alabel(s, NULL); #endif yield = string_cat(yield, s); } return yield; }
0
[ "CWE-78" ]
exim
d740d2111f189760593a303124ff6b9b1f83453d
249,042,750,905,022,400,000,000,000,000,000,000,000
33
Fix CVE-2019-10149
void jas_matrix_asl(jas_matrix_t *matrix, int n) { int i; int j; jas_seqent_t *rowstart; int rowstep; jas_seqent_t *data; rowstep = jas_matrix_rowstep(matrix); for (i = matrix->numrows_, rowstart = matrix->rows_[0]; i > 0; --i, rowstart += rowstep) { for (j = matrix->numcols_, data = rowstart; j > 0; --j, ++data) { *data <<= n; } } }
1
[ "CWE-20" ]
jasper
c87ad330a8b8d6e5eb0065675601fdfae08ebaab
174,502,462,739,640,650,000,000,000,000,000,000,000
17
Added fix for CVE-2016-2089.
static ssize_t aio_setup_single_vector(struct kiocb *kiocb, int rw, char __user *buf, unsigned long *nr_segs, size_t len, struct iovec *iovec) { if (unlikely(!access_ok(!rw, buf, len))) return -EFAULT; iovec->iov_base = buf; iovec->iov_len = len; *nr_segs = 1; return 0; }
1
[ "CWE-703" ]
linux
4c185ce06dca14f5cea192f5a2c981ef50663f2b
17,685,527,957,282,691,000,000,000,000,000,000,000
14
aio: lift iov_iter_init() into aio_setup_..._rw() the only non-trivial detail is that we do it before rw_verify_area(), so we'd better cap the length ourselves in aio_setup_single_rw() case (for vectored case rw_copy_check_uvector() will do that for us). Signed-off-by: Al Viro <[email protected]>
static void dump_vmcs(void) { u32 vmentry_ctl = vmcs_read32(VM_ENTRY_CONTROLS); u32 vmexit_ctl = vmcs_read32(VM_EXIT_CONTROLS); u32 cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL); u32 pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL); u32 secondary_exec_control = 0; unsigned long cr4 = vmcs_readl(GUEST_CR4); u64 efer = vmcs_read64(GUEST_IA32_EFER); int i, n; if (cpu_has_secondary_exec_ctrls()) secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); pr_err("*** Guest State ***\n"); pr_err("CR0: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n", vmcs_readl(GUEST_CR0), vmcs_readl(CR0_READ_SHADOW), vmcs_readl(CR0_GUEST_HOST_MASK)); pr_err("CR4: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n", cr4, vmcs_readl(CR4_READ_SHADOW), vmcs_readl(CR4_GUEST_HOST_MASK)); pr_err("CR3 = 0x%016lx\n", vmcs_readl(GUEST_CR3)); if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT) && (cr4 & X86_CR4_PAE) && !(efer & EFER_LMA)) { pr_err("PDPTR0 = 0x%016llx PDPTR1 = 0x%016llx\n", vmcs_read64(GUEST_PDPTR0), vmcs_read64(GUEST_PDPTR1)); pr_err("PDPTR2 = 0x%016llx PDPTR3 = 0x%016llx\n", vmcs_read64(GUEST_PDPTR2), vmcs_read64(GUEST_PDPTR3)); } pr_err("RSP = 0x%016lx RIP = 0x%016lx\n", vmcs_readl(GUEST_RSP), vmcs_readl(GUEST_RIP)); pr_err("RFLAGS=0x%08lx DR7 = 0x%016lx\n", vmcs_readl(GUEST_RFLAGS), vmcs_readl(GUEST_DR7)); pr_err("Sysenter RSP=%016lx CS:RIP=%04x:%016lx\n", vmcs_readl(GUEST_SYSENTER_ESP), vmcs_read32(GUEST_SYSENTER_CS), vmcs_readl(GUEST_SYSENTER_EIP)); vmx_dump_sel("CS: ", GUEST_CS_SELECTOR); vmx_dump_sel("DS: ", GUEST_DS_SELECTOR); vmx_dump_sel("SS: ", GUEST_SS_SELECTOR); vmx_dump_sel("ES: ", GUEST_ES_SELECTOR); vmx_dump_sel("FS: ", GUEST_FS_SELECTOR); vmx_dump_sel("GS: ", GUEST_GS_SELECTOR); vmx_dump_dtsel("GDTR:", GUEST_GDTR_LIMIT); vmx_dump_sel("LDTR:", GUEST_LDTR_SELECTOR); vmx_dump_dtsel("IDTR:", GUEST_IDTR_LIMIT); vmx_dump_sel("TR: ", GUEST_TR_SELECTOR); if ((vmexit_ctl & (VM_EXIT_SAVE_IA32_PAT | VM_EXIT_SAVE_IA32_EFER)) || (vmentry_ctl & (VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_IA32_EFER))) pr_err("EFER = 0x%016llx PAT = 0x%016llx\n", efer, vmcs_read64(GUEST_IA32_PAT)); pr_err("DebugCtl = 0x%016llx DebugExceptions = 0x%016lx\n", vmcs_read64(GUEST_IA32_DEBUGCTL), vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS)); if (vmentry_ctl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) pr_err("PerfGlobCtl = 0x%016llx\n", vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL)); if (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) pr_err("BndCfgS = 0x%016llx\n", vmcs_read64(GUEST_BNDCFGS)); pr_err("Interruptibility = %08x ActivityState = %08x\n", vmcs_read32(GUEST_INTERRUPTIBILITY_INFO), vmcs_read32(GUEST_ACTIVITY_STATE)); if (secondary_exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) pr_err("InterruptStatus = %04x\n", vmcs_read16(GUEST_INTR_STATUS)); pr_err("*** Host State ***\n"); pr_err("RIP = 0x%016lx RSP = 0x%016lx\n", vmcs_readl(HOST_RIP), vmcs_readl(HOST_RSP)); pr_err("CS=%04x SS=%04x DS=%04x ES=%04x FS=%04x GS=%04x TR=%04x\n", vmcs_read16(HOST_CS_SELECTOR), vmcs_read16(HOST_SS_SELECTOR), vmcs_read16(HOST_DS_SELECTOR), vmcs_read16(HOST_ES_SELECTOR), vmcs_read16(HOST_FS_SELECTOR), vmcs_read16(HOST_GS_SELECTOR), vmcs_read16(HOST_TR_SELECTOR)); pr_err("FSBase=%016lx GSBase=%016lx TRBase=%016lx\n", vmcs_readl(HOST_FS_BASE), vmcs_readl(HOST_GS_BASE), vmcs_readl(HOST_TR_BASE)); pr_err("GDTBase=%016lx IDTBase=%016lx\n", vmcs_readl(HOST_GDTR_BASE), vmcs_readl(HOST_IDTR_BASE)); pr_err("CR0=%016lx CR3=%016lx CR4=%016lx\n", vmcs_readl(HOST_CR0), vmcs_readl(HOST_CR3), vmcs_readl(HOST_CR4)); pr_err("Sysenter 
RSP=%016lx CS:RIP=%04x:%016lx\n", vmcs_readl(HOST_IA32_SYSENTER_ESP), vmcs_read32(HOST_IA32_SYSENTER_CS), vmcs_readl(HOST_IA32_SYSENTER_EIP)); if (vmexit_ctl & (VM_EXIT_LOAD_IA32_PAT | VM_EXIT_LOAD_IA32_EFER)) pr_err("EFER = 0x%016llx PAT = 0x%016llx\n", vmcs_read64(HOST_IA32_EFER), vmcs_read64(HOST_IA32_PAT)); if (vmexit_ctl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL) pr_err("PerfGlobCtl = 0x%016llx\n", vmcs_read64(HOST_IA32_PERF_GLOBAL_CTRL)); pr_err("*** Control State ***\n"); pr_err("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n", pin_based_exec_ctrl, cpu_based_exec_ctrl, secondary_exec_control); pr_err("EntryControls=%08x ExitControls=%08x\n", vmentry_ctl, vmexit_ctl); pr_err("ExceptionBitmap=%08x PFECmask=%08x PFECmatch=%08x\n", vmcs_read32(EXCEPTION_BITMAP), vmcs_read32(PAGE_FAULT_ERROR_CODE_MASK), vmcs_read32(PAGE_FAULT_ERROR_CODE_MATCH)); pr_err("VMEntry: intr_info=%08x errcode=%08x ilen=%08x\n", vmcs_read32(VM_ENTRY_INTR_INFO_FIELD), vmcs_read32(VM_ENTRY_EXCEPTION_ERROR_CODE), vmcs_read32(VM_ENTRY_INSTRUCTION_LEN)); pr_err("VMExit: intr_info=%08x errcode=%08x ilen=%08x\n", vmcs_read32(VM_EXIT_INTR_INFO), vmcs_read32(VM_EXIT_INTR_ERROR_CODE), vmcs_read32(VM_EXIT_INSTRUCTION_LEN)); pr_err(" reason=%08x qualification=%016lx\n", vmcs_read32(VM_EXIT_REASON), vmcs_readl(EXIT_QUALIFICATION)); pr_err("IDTVectoring: info=%08x errcode=%08x\n", vmcs_read32(IDT_VECTORING_INFO_FIELD), vmcs_read32(IDT_VECTORING_ERROR_CODE)); pr_err("TSC Offset = 0x%016llx\n", vmcs_read64(TSC_OFFSET)); if (secondary_exec_control & SECONDARY_EXEC_TSC_SCALING) pr_err("TSC Multiplier = 0x%016llx\n", vmcs_read64(TSC_MULTIPLIER)); if (cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW) pr_err("TPR Threshold = 0x%02x\n", vmcs_read32(TPR_THRESHOLD)); if (pin_based_exec_ctrl & PIN_BASED_POSTED_INTR) pr_err("PostedIntrVec = 0x%02x\n", vmcs_read16(POSTED_INTR_NV)); if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT)) pr_err("EPT pointer = 0x%016llx\n", vmcs_read64(EPT_POINTER)); n = vmcs_read32(CR3_TARGET_COUNT); for (i = 0; i + 1 < n; i += 4) pr_err("CR3 target%u=%016lx target%u=%016lx\n", i, vmcs_readl(CR3_TARGET_VALUE0 + i * 2), i + 1, vmcs_readl(CR3_TARGET_VALUE0 + i * 2 + 2)); if (i < n) pr_err("CR3 target%u=%016lx\n", i, vmcs_readl(CR3_TARGET_VALUE0 + i * 2)); if (secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING) pr_err("PLE Gap=%08x Window=%08x\n", vmcs_read32(PLE_GAP), vmcs_read32(PLE_WINDOW)); if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID) pr_err("Virtual processor ID = 0x%04x\n", vmcs_read16(VIRTUAL_PROCESSOR_ID)); }
0
[ "CWE-20", "CWE-617" ]
linux
3a8b0677fc6180a467e26cc32ce6b0c09a32f9bb
145,841,155,056,638,860,000,000,000,000,000,000,000
139
KVM: VMX: Do not BUG() on out-of-bounds guest IRQ The value of the guest_irq argument to vmx_update_pi_irte() is ultimately coming from a KVM_IRQFD API call. Do not BUG() in vmx_update_pi_irte() if the value is out-of bounds. (Especially, since KVM as a whole seems to hang after that.) Instead, print a message only once if we find that we don't have a route for a certain IRQ (which can be out-of-bounds or within the array). This fixes CVE-2017-1000252. Fixes: efc644048ecde54 ("KVM: x86: Update IRTE for posted-interrupts") Signed-off-by: Jan H. Schönherr <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
_outA_ArrayExpr(StringInfo str, const A_ArrayExpr *node) { WRITE_NODE_TYPE("A_ARRAYEXPR"); WRITE_NODE_FIELD(elements); WRITE_LOCATION_FIELD(location); }
0
[ "CWE-362" ]
postgres
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
126,729,650,289,411,130,000,000,000,000,000,000,000
7
Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
xmlSplitQName2(const xmlChar *name, xmlChar **prefix) { int len = 0; xmlChar *ret = NULL; if (prefix == NULL) return(NULL); *prefix = NULL; if (name == NULL) return(NULL); #ifndef XML_XML_NAMESPACE /* xml: prefix is not really a namespace */ if ((name[0] == 'x') && (name[1] == 'm') && (name[2] == 'l') && (name[3] == ':')) return(NULL); #endif /* nasty but valid */ if (name[0] == ':') return(NULL); /* * we are not trying to validate but just to cut, and yes it will * work even if this is as set of UTF-8 encoded chars */ while ((name[len] != 0) && (name[len] != ':')) len++; if (name[len] == 0) return(NULL); *prefix = xmlStrndup(name, len); if (*prefix == NULL) { xmlTreeErrMemory("QName split"); return(NULL); } ret = xmlStrdup(&name[len + 1]); if (ret == NULL) { xmlTreeErrMemory("QName split"); if (*prefix != NULL) { xmlFree(*prefix); *prefix = NULL; } return(NULL); } return(ret); }
0
[ "CWE-20" ]
libxml2
bdd66182ef53fe1f7209ab6535fda56366bd7ac9
219,283,350,116,160,200,000,000,000,000,000,000,000
46
Avoid building recursive entities For https://bugzilla.gnome.org/show_bug.cgi?id=762100 When we detect a recursive entity we should really not build the associated data; moreover, if someone bypasses libxml2 fatal errors and still tries to serialize a broken entity, make sure we don't risk getting into a recursion * parser.c: xmlParserEntityCheck() don't build if an entity loop was found and remove the associated text content * tree.c: xmlStringGetNodeList() avoid a potential recursion
pgp_seek_blob(sc_card_t *card, pgp_blob_t *root, unsigned int id, pgp_blob_t **ret) { pgp_blob_t *child; int r; if ((r = pgp_get_blob(card, root, id, ret)) == 0) /* the sought blob is right under root */ return r; /* not found, seek deeper */ for (child = root->files; child; child = child->next) { /* The DO of SIMPLE type or the DO holding certificate * does not contain children */ if ((child->info && child->info->type == SIMPLE) || child->id == DO_CERT) continue; r = pgp_seek_blob(card, child, id, ret); if (r == 0) return r; } return SC_ERROR_FILE_NOT_FOUND; }
0
[ "CWE-125" ]
OpenSC
8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
64,989,735,226,979,880,000,000,000,000,000,000,000
23
fixed out of bounds reads Thanks to Eric Sesterhenn from X41 D-SEC GmbH for reporting and suggesting security fixes.
void lcdSetPixel_ArrayBuffer_flat(JsGraphics *gfx, short x, short y, unsigned int col) { lcdSetPixels_ArrayBuffer_flat(gfx, x, y, 1, col); }
0
[ "CWE-125" ]
Espruino
8a44b04b584b3d3ab1cb68fed410f7ecb165e50e
69,830,592,577,072,350,000,000,000,000,000,000,000
3
Add height check for Graphics.createArrayBuffer(...vertical_byte:true) (fix #1421)
xmlTreeErrMemory(const char *extra) { __xmlSimpleError(XML_FROM_TREE, XML_ERR_NO_MEMORY, NULL, NULL, extra); }
0
[ "CWE-20" ]
libxml2
bdd66182ef53fe1f7209ab6535fda56366bd7ac9
204,932,373,661,070,700,000,000,000,000,000,000,000
4
Avoid building recursive entities For https://bugzilla.gnome.org/show_bug.cgi?id=762100 When we detect a recursive entity we should really not build the associated data; moreover, if someone bypasses libxml2 fatal errors and still tries to serialize a broken entity, make sure we don't risk getting into a recursion * parser.c: xmlParserEntityCheck() don't build if an entity loop was found and remove the associated text content * tree.c: xmlStringGetNodeList() avoid a potential recursion
static int path_parentat(struct nameidata *nd, unsigned flags, struct path *parent) { const char *s = path_init(nd, flags); int err = link_path_walk(s, nd); if (!err) err = complete_walk(nd); if (!err) { *parent = nd->path; nd->path.mnt = NULL; nd->path.dentry = NULL; } terminate_walk(nd); return err; }
0
[ "CWE-416", "CWE-284" ]
linux
d0cb50185ae942b03c4327be322055d622dc79f6
85,063,775,780,821,235,000,000,000,000,000,000,000
15
do_last(): fetch directory ->i_mode and ->i_uid before it's too late may_create_in_sticky() call is done when we already have dropped the reference to dir. Fixes: 30aba6656f61e (namei: allow restricted O_CREAT of FIFOs and regular files) Signed-off-by: Al Viro <[email protected]>
int main() { gdImagePtr im; FILE *fp; fp = gdTestFileOpen2("gif", "php_bug_75571.gif"); gdTestAssert(fp != NULL); im = gdImageCreateFromGif(fp); gdTestAssert(im == NULL); fclose(fp); return gdNumFailures(); }
0
[ "CWE-681" ]
libgd
a11f47475e6443b7f32d21f2271f28f417e2ac04
254,922,445,461,762,960,000,000,000,000,000,000,000
13
Fix #420: Potential infinite loop in gdImageCreateFromGifCtx Due to a signedness confusion in `GetCode_` a corrupt GIF file can trigger an infinite loop. Furthermore we make sure that a GIF without any palette entries is treated as invalid *after* open palette entries have been removed. CVE-2018-5711 See also https://bugs.php.net/bug.php?id=75571.
void sqlite3VdbeResetStepResult(Vdbe *p){ p->rc = SQLITE_OK; }
0
[ "CWE-755" ]
sqlite
8654186b0236d556aa85528c2573ee0b6ab71be3
100,281,965,441,700,010,000,000,000,000,000,000,000
3
When an error occurs while rewriting the parser tree for window functions in the sqlite3WindowRewrite() routine, make sure that pParse->nErr is set, and make sure that this shuts down any subsequent code generation that might depend on the transformations that were implemented. This fixes a problem discovered by the Yongheng and Rui fuzzer. FossilOrigin-Name: e2bddcd4c55ba3cbe0130332679ff4b048630d0ced9a8899982edb5a3569ba7f
PHP_METHOD(snmp, close) { php_snmp_object *snmp_object; zval *object = getThis(); snmp_object = Z_SNMP_P(object); if (zend_parse_parameters_none() == FAILURE) { RETURN_FALSE; } netsnmp_session_free(&(snmp_object->session)); RETURN_TRUE; }
0
[ "CWE-20" ]
php-src
6e25966544fb1d2f3d7596e060ce9c9269bbdcf8
165,175,824,228,993,320,000,000,000,000,000,000,000
15
Fixed bug #71704 php_snmp_error() Format String Vulnerability
static int mov_read_covr(MOVContext *c, AVIOContext *pb, int type, int len) { AVStream *st; MOVStreamContext *sc; enum AVCodecID id; int ret; switch (type) { case 0xd: id = AV_CODEC_ID_MJPEG; break; case 0xe: id = AV_CODEC_ID_PNG; break; case 0x1b: id = AV_CODEC_ID_BMP; break; default: av_log(c->fc, AV_LOG_WARNING, "Unknown cover type: 0x%x.\n", type); avio_skip(pb, len); return 0; } sc = av_mallocz(sizeof(*sc)); if (!sc) return AVERROR(ENOMEM); ret = ff_add_attached_pic(c->fc, NULL, pb, NULL, len); if (ret < 0) { av_free(sc); return ret; } st = c->fc->streams[c->fc->nb_streams - 1]; st->priv_data = sc; if (st->attached_pic.size >= 8 && id != AV_CODEC_ID_BMP) { if (AV_RB64(st->attached_pic.data) == 0x89504e470d0a1a0a) { id = AV_CODEC_ID_PNG; } else { id = AV_CODEC_ID_MJPEG; } } st->codecpar->codec_id = id; return 0; }
0
[ "CWE-703" ]
FFmpeg
c953baa084607dd1d84c3bfcce3cf6a87c3e6e05
158,991,819,911,914,570,000,000,000,000,000,000,000
39
avformat/mov: Check count sums in build_open_gop_key_points() Fixes: ffmpeg.md Fixes: Out of array access Fixes: CVE-2022-2566 Found-by: Andy Nguyen <[email protected]> Found-by: 3pvd <[email protected]> Reviewed-by: Andy Nguyen <[email protected]> Signed-off-by: Michael Niedermayer <[email protected]>
vte_sequence_handler_sr (VteTerminal *terminal, GValueArray *params) { long start, end; VteScreen *screen; screen = terminal->pvt->screen; if (screen->scrolling_restricted) { start = screen->scrolling_region.start + screen->insert_delta; end = screen->scrolling_region.end + screen->insert_delta; } else { start = terminal->pvt->screen->insert_delta; end = start + terminal->row_count - 1; } if (screen->cursor_current.row == start) { /* If we're at the top of the scrolling region, add a * line at the top to scroll the bottom off. */ _vte_terminal_ring_remove (terminal, end); _vte_terminal_ring_insert (terminal, start, TRUE); /* Update the display. */ _vte_terminal_scroll_region(terminal, start, end - start + 1, 1); _vte_invalidate_cells(terminal, 0, terminal->column_count, start, 2); } else { /* Otherwise, just move the cursor up. */ screen->cursor_current.row--; } /* Adjust the scrollbars if necessary. */ _vte_terminal_adjust_adjustments(terminal); /* We modified the display, so make a note of it. */ terminal->pvt->text_modified_flag = TRUE; }
0
[]
vte
8b971a7b2c59902914ecbbc3915c45dd21530a91
134,261,939,098,117,740,000,000,000,000,000,000,000
34
Fix terminal title reporting Fixed CVE-2003-0070 again. See also http://marc.info/?l=bugtraq&m=104612710031920&w=2 . (cherry picked from commit 6042c75b5a6daa0e499e61c8e07242d890d38ff1)
writeTarData( #ifdef HAVE_LIBZ gzFile ztarfile, #endif FILE *tarfile, char *buf, int r, char *current_file) { #ifdef HAVE_LIBZ if (ztarfile != NULL) { if (gzwrite(ztarfile, buf, r) != r) { fprintf(stderr, _("%s: could not write to compressed file \"%s\": %s\n"), progname, current_file, get_gz_error(ztarfile)); disconnect_and_exit(1); } } else #endif { if (fwrite(buf, r, 1, tarfile) != 1) { fprintf(stderr, _("%s: could not write to file \"%s\": %s\n"), progname, current_file, strerror(errno)); disconnect_and_exit(1); } } }
0
[ "CWE-119" ]
postgres
01824385aead50e557ca1af28640460fa9877d51
40,939,404,534,089,156,000,000,000,000,000,000,000
28
Prevent potential overruns of fixed-size buffers. Coverity identified a number of places in which it couldn't prove that a string being copied into a fixed-size buffer would fit. We believe that most, perhaps all of these are in fact safe, or are copying data that is coming from a trusted source so that any overrun is not really a security issue. Nonetheless it seems prudent to forestall any risk by using strlcpy() and similar functions. Fixes by Peter Eisentraut and Jozef Mlich based on Coverity reports. In addition, fix a potential null-pointer-dereference crash in contrib/chkpass. The crypt(3) function is defined to return NULL on failure, but chkpass.c didn't check for that before using the result. The main practical case in which this could be an issue is if libc is configured to refuse to execute unapproved hashing algorithms (e.g., "FIPS mode"). This ideally should've been a separate commit, but since it touches code adjacent to one of the buffer overrun changes, I included it in this commit to avoid last-minute merge issues. This issue was reported by Honza Horak. Security: CVE-2014-0065 for buffer overruns, CVE-2014-0066 for crypt()
WriteAdtsHeader(AP4_ByteStream& output, unsigned int frame_size, unsigned int sampling_frequency_index, unsigned int channel_configuration) { unsigned char bits[7]; bits[0] = 0xFF; bits[1] = 0xF1; // 0xF9 (MPEG2) bits[2] = 0x40 | (sampling_frequency_index << 2) | (channel_configuration >> 2); bits[3] = ((channel_configuration&0x3)<<6) | ((frame_size+7) >> 11); bits[4] = ((frame_size+7) >> 3)&0xFF; bits[5] = (((frame_size+7) << 5)&0xFF) | 0x1F; bits[6] = 0xFC; return output.Write(bits, 7); /* 0: syncword 12 always: '111111111111' 12: ID 1 0: MPEG-4, 1: MPEG-2 13: layer 2 always: '00' 15: protection_absent 1 16: profile 2 18: sampling_frequency_index 4 22: private_bit 1 23: channel_configuration 3 26: original/copy 1 27: home 1 28: emphasis 2 only if ID == 0 ADTS Variable header: these can change from frame to frame 28: copyright_identification_bit 1 29: copyright_identification_start 1 30: aac_frame_length 13 length of the frame including header (in bytes) 43: adts_buffer_fullness 11 0x7FF indicates VBR 54: no_raw_data_blocks_in_frame 2 ADTS Error check crc_check 16 only if protection_absent == 0 */ }
0
[ "CWE-703" ]
Bento4
33331ce2d35d45d855af7441db6116b4a9e2b70f
204,665,149,348,748,440,000,000,000,000,000,000,000
40
fix #691
static double percentage(i64 num, i64 den) { double d_num, d_den; if (den < 100) { d_num = num * 100; d_den = den; if (!d_den) d_den = 1; } else { d_num = num; d_den = den / 100; } return d_num / d_den; }
0
[ "CWE-703" ]
lrzip
4b3942103b57c639c8e0f31d6d5fd7bac53bbdf4
47,426,215,030,812,590,000,000,000,000,000,000,000
15
Fix possible race condition between zpaq_decompress_buf() and clear_rulist() function as reported by wcventure.
static int query_get_string_answer(cmd_request_t cmd) { struct booth_site *site; struct boothc_hdr_msg reply; struct boothc_header *header; char *data; int data_len; int rv; struct booth_transport const *tpt; int (*test_reply_f) (cmd_result_t reply_code, cmd_request_t cmd); size_t msg_size; void *request; if (cl.type == GEOSTORE) { test_reply_f = test_attr_reply; msg_size = sizeof(cl.attr_msg); request = &cl.attr_msg; } else { test_reply_f = test_reply; msg_size = sizeof(cl.msg); request = &cl.msg; } header = (struct boothc_header *)request; data = NULL; init_header(header, cmd, 0, cl.options, 0, 0, msg_size); if (!*cl.site) site = local; else if (!find_site_by_name(cl.site, &site, 1)) { log_error("cannot find site \"%s\"", cl.site); rv = ENOENT; goto out; } tpt = booth_transport + TCP; rv = tpt->open(site); if (rv < 0) goto out_close; rv = tpt->send(site, request, msg_size); if (rv < 0) goto out_close; rv = tpt->recv_auth(site, &reply, sizeof(reply)); if (rv < 0) goto out_close; data_len = ntohl(reply.header.length) - rv; /* no attribute, or no ticket found */ if (!data_len) { goto out_test_reply; } data = malloc(data_len+1); if (!data) { rv = -ENOMEM; goto out_close; } rv = tpt->recv(site, data, data_len); if (rv < 0) goto out_close; *(data+data_len) = '\0'; *(data + data_len) = '\0'; (void)fputs(data, stdout); fflush(stdout); rv = 0; out_test_reply: rv = test_reply_f(ntohl(reply.header.result), cmd); out_close: tpt->close(site); out: if (data) free(data); return rv; }
0
[ "CWE-287", "CWE-284" ]
booth
35bf0b7b048d715f671eb68974fb6b4af6528c67
74,273,591,707,448,065,000,000,000,000,000,000,000
79
Revert "Refactor: main: substitute is_auth_req macro" This reverts commit da79b8ba28ad4837a0fee13e5f8fb6f89fe0e24c. authfile != authkey Signed-off-by: Jan Friesse <[email protected]>
PyString_FromFormat(const char *format, ...) { PyObject* ret; va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES va_start(vargs, format); #else va_start(vargs); #endif ret = PyString_FromFormatV(format, vargs); va_end(vargs); return ret; }
0
[ "CWE-190" ]
cpython
c3c9db89273fabc62ea1b48389d9a3000c1c03ae
182,346,414,360,560,330,000,000,000,000,000,000,000
14
[2.7] bpo-30657: Check & prevent integer overflow in PyString_DecodeEscape (#2174)
const SlicePtr& back() const { return ring_[internalIndex(size_ - 1)]; }
0
[ "CWE-401" ]
envoy
5eba69a1f375413fb93fab4173f9c393ac8c2818
7,403,347,792,809,913,000,000,000,000,000,000,000
1
[buffer] Add on-drain hook to buffer API and use it to avoid fragmentation due to tracking of H2 data and control frames in the output buffer (#144) Signed-off-by: antonio <[email protected]>
const AsyncTransportCertificate* AsyncSSLSocket::getPeerCertificate() const { if (peerCertData_) { return peerCertData_.get(); } if (ssl_ != nullptr) { auto peerX509 = SSL_get_peer_certificate(ssl_.get()); if (peerX509) { // already up ref'd folly::ssl::X509UniquePtr peer(peerX509); auto cn = OpenSSLUtils::getCommonName(peerX509); peerCertData_ = std::make_unique<BasicTransportCertificate>( std::move(cn), std::move(peer)); } } return peerCertData_.get(); }
0
[ "CWE-125" ]
folly
c321eb588909646c15aefde035fd3133ba32cdee
90,035,018,196,985,250,000,000,000,000,000,000,000
16
Handle close_notify as standard writeErr in AsyncSSLSocket. Summary: Fixes CVE-2019-11934 Reviewed By: mingtaoy Differential Revision: D18020613 fbshipit-source-id: db82bb250e53f0d225f1280bd67bc74abd417836
Network::DrainDecision& ListenerFactoryContextBaseImpl::drainDecision() { return *this; }
0
[ "CWE-400" ]
envoy
dfddb529e914d794ac552e906b13d71233609bf7
179,754,034,720,924,350,000,000,000,000,000,000,000
1
listener: Add configurable accepted connection limits (#153) Add support for per-listener limits on accepted connections. Signed-off-by: Tony Allen <[email protected]>
static int r_core_cmd_subst_i(RCore *core, char *cmd, char *colon) { const char *quotestr = "`"; const char *tick = NULL; char *ptr, *ptr2, *str; char *arroba = NULL; int i, ret = 0, pipefd; bool usemyblock = false; int scr_html = -1; int scr_color = -1; bool eos = false; bool haveQuote = false; if (!cmd) { return 0; } cmd = r_str_trim_head_tail (cmd); /* quoted / raw command */ switch (*cmd) { case '.': if (cmd[1] == '"') { /* interpret */ return r_cmd_call (core->rcmd, cmd); } break; case '"': for (; *cmd; ) { int pipefd = -1; ut64 oseek = UT64_MAX; char *line, *p; haveQuote = *cmd == '"'; if (haveQuote) { // *cmd = 0; cmd++; p = find_eoq (cmd + 1); if (!p || !*p) { eprintf ("Missing \" in (%s).", cmd); return false; } *p++ = 0; if (!*p) { eos = true; } } else { char *sc = strchr (cmd, ';'); if (sc) { *sc = 0; } r_core_cmd0 (core, cmd); if (!sc) { break; } cmd = sc + 1; continue; } if (p[0]) { // workaround :D if (p[0] == '@') { p--; } while (p[1] == ';' || IS_WHITESPACE (p[1])) { p++; } if (p[1] == '@' || (p[1] && p[2] == '@')) { char *q = strchr (p + 1, '"'); if (q) { *q = 0; } haveQuote = q != NULL; oseek = core->offset; r_core_seek (core, r_num_math (core->num, p + 2), 1); if (q) { *p = '"'; p = q; } else { p = strchr (p + 1, ';'); } } if (p && *p && p[1] == '>') { str = p + 2; while (*str == '>') { str++; } while (IS_WHITESPACE (*str)) { str++; } r_cons_flush (); pipefd = r_cons_pipe_open (str, 1, p[2] == '>'); } } line = strdup (cmd); line = r_str_replace (line, "\\\"", "\"", true); if (p && *p && p[1] == '|') { str = p + 2; while (IS_WHITESPACE (*str)) { str++; } r_core_cmd_pipe (core, cmd, str); } else { r_cmd_call (core->rcmd, line); } free (line); if (oseek != UT64_MAX) { r_core_seek (core, oseek, 1); oseek = UT64_MAX; } if (pipefd != -1) { r_cons_flush (); r_cons_pipe_close (pipefd); } if (!p) { break; } if (eos) { break; } if (haveQuote) { if (*p == ';') { cmd = p + 1; } else { if (*p == '"') { cmd = p + 1; } else { *p = '"'; cmd = p; } } } else { cmd = p + 1; } } return true; case '(': if (cmd[1] != '*') { return r_cmd_call (core->rcmd, cmd); } } // TODO must honor " and ` /* comments */ if (*cmd != '#') { ptr = (char *)r_str_lastbut (cmd, '#', quotestr); if (ptr && (ptr[1] == ' ' || ptr[1] == '\t')) { *ptr = '\0'; } } /* multiple commands */ // TODO: must honor " and ` boundaries //ptr = strrchr (cmd, ';'); if (*cmd != '#') { ptr = (char *)r_str_lastbut (cmd, ';', quotestr); if (colon && ptr) { int ret ; *ptr = '\0'; if (r_core_cmd_subst (core, cmd) == -1) { return -1; } cmd = ptr + 1; ret = r_core_cmd_subst (core, cmd); *ptr = ';'; return ret; //r_cons_flush (); } } // TODO must honor " and ` /* pipe console to shell process */ //ptr = strchr (cmd, '|'); ptr = (char *)r_str_lastbut (cmd, '|', quotestr); if (ptr) { char *ptr2 = strchr (cmd, '`'); if (!ptr2 || (ptr2 && ptr2 > ptr)) { if (!tick || (tick && tick > ptr)) { *ptr = '\0'; cmd = r_str_clean (cmd); if (!strcmp (ptr + 1, "?")) { // "|?" // TODO: should be disable scr.color in pd| ? eprintf ("Usage: <r2command> | <program|H|>\n"); eprintf (" pd|? - show this help\n"); eprintf (" pd| - disable scr.html and scr.color\n"); eprintf (" pd|H - enable scr.html, respect scr.color\n"); return ret; } else if (!strcmp (ptr + 1, "H")) { // "|H" scr_html = r_config_get_i (core->config, "scr.html"); r_config_set_i (core->config, "scr.html", true); } else if (ptr[1]) { // "| grep .." 
int value = core->num->value; if (*cmd) { r_core_cmd_pipe (core, cmd, ptr + 1); } else { r_io_system (core->io, ptr + 1); } core->num->value = value; return 0; } else { // "|" scr_html = r_config_get_i (core->config, "scr.html"); r_config_set_i (core->config, "scr.html", 0); scr_color = r_config_get_i (core->config, "scr.color"); r_config_set_i (core->config, "scr.color", false); } } } } // TODO must honor " and ` /* bool conditions */ ptr = (char *)r_str_lastbut (cmd, '&', quotestr); //ptr = strchr (cmd, '&'); while (ptr && ptr[1] == '&') { *ptr = '\0'; ret = r_cmd_call (core->rcmd, cmd); if (ret == -1) { eprintf ("command error(%s)\n", cmd); if (scr_html != -1) { r_config_set_i (core->config, "scr.html", scr_html); } if (scr_color != -1) { r_config_set_i (core->config, "scr.color", scr_color); } return ret; } for (cmd = ptr + 2; cmd && *cmd == ' '; cmd++); ptr = strchr (cmd, '&'); } /* Out Of Band Input */ free (core->oobi); core->oobi = NULL; ptr = strstr (cmd, "?*"); if (ptr) { char *prech = ptr - 1; if (*prech != '~') { ptr[1] = 0; if (*cmd != '#' && strlen (cmd) < 5) { r_cons_break_push (NULL, NULL); recursive_help (core, cmd); r_cons_break_pop (); r_cons_grep_parsecmd (ptr + 2, "`"); if (scr_html != -1) { r_config_set_i (core->config, "scr.html", scr_html); } if (scr_color != -1) { r_config_set_i (core->config, "scr.color", scr_color); } return 0; } } } #if 0 ptr = strchr (cmd, '<'); if (ptr) { ptr[0] = '\0'; if (r_cons_singleton()->is_interactive) { if (ptr[1] == '<') { /* this is a bit mess */ //const char *oprompt = strdup (r_line_singleton ()->prompt); //oprompt = ">"; for (str = ptr + 2; str[0] == ' '; str++) { //nothing to see here } eprintf ("==> Reading from stdin until '%s'\n", str); free (core->oobi); core->oobi = malloc (1); if (core->oobi) { core->oobi[0] = '\0'; } core->oobi_len = 0; for (;;) { char buf[1024]; int ret; write (1, "> ", 2); fgets (buf, sizeof (buf) - 1, stdin); // XXX use r_line ?? if (feof (stdin)) { break; } if (*buf) buf[strlen (buf) - 1]='\0'; ret = strlen (buf); core->oobi_len += ret; core->oobi = realloc (core->oobi, core->oobi_len + 1); if (core->oobi) { if (!strcmp (buf, str)) { break; } strcat ((char *)core->oobi, buf); } } //r_line_set_prompt (oprompt); } else { for (str = ptr + 1; *str == ' '; str++) { //nothing to see here } if (!*str) { goto next; } eprintf ("Slurping file '%s'\n", str); free (core->oobi); core->oobi = (ut8*)r_file_slurp (str, &core->oobi_len); if (!core->oobi) { eprintf ("cannot open file\n"); } else if (ptr == cmd) { return r_core_cmd_buffer (core, (const char *)core->oobi); } } } else { eprintf ("Cannot slurp with << in non-interactive mode\n"); return 0; } } next: #endif // TODO must honor " and ` /* pipe console to file */ ptr = strchr (cmd, '>'); if (ptr) { int fdn = 1; int pipecolor = r_config_get_i (core->config, "scr.pipecolor"); int use_editor = false; int ocolor = r_config_get_i (core->config, "scr.color"); *ptr = '\0'; str = r_str_trim_head_tail (ptr + 1 + (ptr[1] == '>')); if (!*str) { eprintf ("No output?\n"); goto next2; } /* r_cons_flush() handles interactive output (to the terminal) * differently (e.g. asking about too long output). This conflicts * with piping to a file. Disable it while piping. 
*/ if (ptr > (cmd + 1) && ISWHITECHAR (ptr[-2])) { char *fdnum = ptr - 1; if (*fdnum == 'H') { // "H>" scr_html = r_config_get_i (core->config, "scr.html"); r_config_set_i (core->config, "scr.html", true); pipecolor = true; *fdnum = 0; } else { if (IS_DIGIT(*fdnum)) { fdn = *fdnum - '0'; } *fdnum = 0; } } r_cons_set_interactive (false); if (!strcmp (str, "-")) { use_editor = true; str = r_file_temp ("dumpedit"); r_config_set (core->config, "scr.color", "false"); } if (fdn > 0) { pipefd = r_cons_pipe_open (str, fdn, ptr[1] == '>'); if (pipefd != -1) { if (!pipecolor) { r_config_set_i (core->config, "scr.color", 0); } ret = r_core_cmd_subst (core, cmd); r_cons_flush (); r_cons_pipe_close (pipefd); } } r_cons_set_last_interactive (); if (!pipecolor) { r_config_set_i (core->config, "scr.color", ocolor); } if (use_editor) { const char *editor = r_config_get (core->config, "cfg.editor"); if (editor && *editor) { r_sys_cmdf ("%s '%s'", editor, str); r_file_rm (str); } else { eprintf ("No cfg.editor configured\n"); } r_config_set_i (core->config, "scr.color", ocolor); free (str); } if (scr_html != -1) { r_config_set_i (core->config, "scr.html", scr_html); } if (scr_color != -1) { r_config_set_i (core->config, "scr.color", scr_color); } return ret; } next2: /* sub commands */ ptr = strchr (cmd, '`'); if (ptr) { int empty = 0; int oneline = 1; if (ptr[1] == '`') { memmove (ptr, ptr + 1, strlen (ptr)); oneline = 0; empty = 1; } ptr2 = strchr (ptr + 1, '`'); if (empty) { /* do nothing */ } else if (!ptr2) { eprintf ("parse: Missing backtick in expression.\n"); goto fail; } else { int value = core->num->value; *ptr = '\0'; *ptr2 = '\0'; if (ptr[1] == '!') { str = r_core_cmd_str_pipe (core, ptr + 1); } else { str = r_core_cmd_str (core, ptr + 1); } if (!str) { goto fail; } // ignore contents if first char is pipe or comment if (*str == '|' || *str == '*') { eprintf ("r_core_cmd_subst_i: invalid backticked command\n"); free (str); goto fail; } if (oneline && str) { for (i = 0; str[i]; i++) { if (str[i] == '\n') { str[i] = ' '; } } } str = r_str_append (str, ptr2 + 1); cmd = r_str_append (strdup (cmd), str); core->num->value = value; ret = r_core_cmd_subst (core, cmd); free (cmd); if (scr_html != -1) { r_config_set_i (core->config, "scr.html", scr_html); } free (str); return ret; } } // TODO must honor " and ` core->fixedblock = false; if (r_str_endswith (cmd, "~?") && cmd[2] == '\0') { r_cons_grep_help (); return true; } if (*cmd != '.') { r_cons_grep_parsecmd (cmd, quotestr); } /* temporary seek commands */ if (*cmd!= '(' && *cmd != '"') { ptr = strchr (cmd, '@'); if (ptr == cmd + 1 && *cmd == '?') { ptr = NULL; } } else { ptr = NULL; } core->tmpseek = ptr? true: false; int rc = 0; if (ptr) { char *f, *ptr2 = strchr (ptr + 1, '!'); ut64 addr = UT64_MAX; const char *tmpbits = NULL; const char *offstr = NULL; ut64 tmpbsz = core->blocksize; char *tmpeval = NULL; ut64 tmpoff = core->offset; char *tmpasm = NULL; int tmpfd = -1; int sz, len; ut8 *buf; *ptr = '\0'; for (ptr++; *ptr == ' '; ptr++) { //nothing to see here } if (*ptr && ptr[1] == ':') { /* do nothing here */ } else { ptr--; } arroba = (ptr[0] && ptr[1] && ptr[2])? 
strchr (ptr + 2, '@'): NULL; repeat_arroba: if (arroba) { *arroba = 0; } if (ptr[1] == '?') { helpCmdAt (core); } else if (ptr[0] && ptr[1] == ':' && ptr[2]) { usemyblock = true; switch (ptr[0]) { case 'f': // "@f:" // slurp file in block f = r_file_slurp (ptr + 2, &sz); if (f) { buf = malloc (sz); if (buf) { free (core->block); core->block = buf; core->blocksize = sz; memcpy (core->block, f, sz); } else { eprintf ("cannot alloc %d", sz); } free (f); } else { eprintf ("cannot open '%s'\n", ptr + 3); } break; case 'r': // "@r:" // regname if (ptr[1] == ':') { ut64 regval; char *mander = strdup (ptr + 2); char *sep = findSeparator (mander); if (sep) { char ch = *sep; *sep = 0; regval = r_debug_reg_get (core->dbg, mander); *sep = ch; char *numexpr = r_str_newf ("0x%"PFMT64x"%s", regval, sep); regval = r_num_math (core->num, numexpr); free (numexpr); } else { regval = r_debug_reg_get (core->dbg, ptr + 2); } r_core_seek (core, regval, 1); free (mander); } break; case 'b': // "@b:" // bits tmpbits = strdup (r_config_get (core->config, "asm.bits")); r_config_set_i (core->config, "asm.bits", r_num_math (core->num, ptr + 2)); break; case 'i': // "@i:" { ut64 addr = r_num_math (core->num, ptr + 2); if (addr) { r_core_cmdf (core, "so %s", ptr + 2); } } break; case 'e': // "@e:" tmpeval = parse_tmp_evals (core, ptr + 2); break; case 'x': // "@x:" // hexpairs if (ptr[1] == ':') { buf = malloc (strlen (ptr + 2) + 1); if (buf) { len = r_hex_str2bin (ptr + 2, buf); r_core_block_size (core, R_ABS(len)); memcpy (core->block, buf, core->blocksize); core->fixedblock = true; free (buf); } else { eprintf ("cannot allocate\n"); } } else { eprintf ("Invalid @x: syntax\n"); } break; case 'k': // "@k" { char *out = sdb_querys (core->sdb, NULL, 0, ptr + ((ptr[1])? 2: 1)); if (out) { r_core_seek (core, r_num_math (core->num, out), 1); free (out); } } break; case 'o': // "@o:3" if (ptr[1] == ':') { tmpfd = core->io->raised; r_io_raise (core->io, atoi (ptr + 2)); } break; case 'a': // "@a:" if (ptr[1] == ':') { char *q = strchr (ptr + 2, ':'); tmpasm = strdup (r_config_get (core->config, "asm.arch")); if (q) { *q++ = 0; tmpbits = r_config_get (core->config, "asm.bits"); r_config_set (core->config, "asm.bits", q); } r_config_set (core->config, "asm.arch", ptr + 2); // TODO: handle asm.bits } else { eprintf ("Usage: pd 10 @a:arm:32\n"); } break; case 's': // "@s:" len = strlen (ptr + 2); r_core_block_size (core, len); memcpy (core->block, ptr + 2, len); break; default: goto ignore; } *ptr = '@'; goto next_arroba; //ignore; //return ret; } ignore: ptr = r_str_trim_head (ptr + 1); ptr--; cmd = r_str_clean (cmd); if (ptr2) { if (strlen (ptr + 1) == 13 && strlen (ptr2 + 1) == 6 && !memcmp (ptr + 1, "0x", 2) && !memcmp (ptr2 + 1, "0x", 2)) { /* 0xXXXX:0xYYYY */ } else if (strlen (ptr + 1) == 9 && strlen (ptr2 + 1) == 4) { /* XXXX:YYYY */ } else { *ptr2 = '\0'; if (!ptr2[1]) { goto fail; } r_core_block_size ( core, r_num_math (core->num, ptr2 + 1)); } } offstr = r_str_trim_head (ptr + 1); addr = r_num_math (core->num, offstr); if (isalpha ((unsigned char)ptr[1]) && !addr) { if (!r_flag_get (core->flags, ptr + 1)) { eprintf ("Invalid address (%s)\n", ptr + 1); goto fail; } } else { char ch = *offstr; if (ch == '-' || ch == '+') { addr = core->offset + addr; } } next_arroba: if (arroba) { ptr = arroba; arroba = NULL; goto repeat_arroba; } if (ptr[1] == '@') { // TODO: remove temporally seek (should be done by cmd_foreach) if (ptr[2] == '@') { char *rule = ptr + 3; while (*rule && *rule == ' ') rule++; ret = r_core_cmd_foreach3 
(core, cmd, rule); } else { ret = r_core_cmd_foreach (core, cmd, ptr + 2); } //ret = -1; /* do not run out-of-foreach cmd */ } else { bool tmpseek = false; const char *fromvars[] = { "anal.from", "diff.from", "graph.from", "io.buffer.from", "lines.from", "search.from", "zoom.from", NULL }; const char *tovars[] = { "anal.to", "diff.to", "graph.to", "io.buffer.to", "lines.to", "search.to", "zoom.to", NULL }; ut64 curfrom[R_ARRAY_SIZE (fromvars) - 1], curto[R_ARRAY_SIZE (tovars) - 1]; // @.. if (ptr[1] == '.' && ptr[2] == '.') { char *range = ptr + 3; char *p = strchr (range, ' '); if (!p) { eprintf ("Usage: / ABCD @..0x1000 0x3000\n"); free (tmpeval); free (tmpasm); goto fail; } *p = '\x00'; ut64 from = r_num_math (core->num, range); ut64 to = r_num_math (core->num, p + 1); // save current ranges for (i = 0; fromvars[i]; i++) { curfrom[i] = r_config_get_i (core->config, fromvars[i]); } for (i = 0; tovars[i]; i++) { curto[i] = r_config_get_i (core->config, tovars[i]); } // set new ranges for (i = 0; fromvars[i]; i++) { r_config_set_i (core->config, fromvars[i], from); } for (i = 0; tovars[i]; i++) { r_config_set_i (core->config, tovars[i], to); } tmpseek = true; } if (usemyblock) { if (addr != UT64_MAX) { core->offset = addr; } ret = r_cmd_call (core->rcmd, r_str_trim_head (cmd)); } else { if (addr != UT64_MAX) { if (!ptr[1] || r_core_seek (core, addr, 1)) { r_core_block_read (core); ret = r_cmd_call (core->rcmd, r_str_trim_head (cmd)); } else { ret = 0; } } } if (tmpseek) { // restore ranges for (i = 0; fromvars[i]; i++) { r_config_set_i (core->config, fromvars[i], curfrom[i]); } for (i = 0; tovars[i]; i++) { r_config_set_i (core->config, tovars[i], curto[i]); } } } if (ptr2) { *ptr2 = '!'; r_core_block_size (core, tmpbsz); } if (tmpasm) { r_config_set (core->config, "asm.arch", tmpasm); tmpasm = NULL; } if (tmpfd != -1) { r_io_raise (core->io, tmpfd); } if (tmpbits) { r_config_set (core->config, "asm.bits", tmpbits); tmpbits = NULL; } if (tmpeval) { r_core_cmd0 (core, tmpeval); R_FREE (tmpeval); } r_core_seek (core, tmpoff, 1); *ptr = '@'; rc = ret; goto beach; } rc = cmd? r_cmd_call (core->rcmd, r_str_trim_head (cmd)): false; beach: if (scr_html != -1) { r_cons_flush (); r_config_set_i (core->config, "scr.html", scr_html); } if (scr_color != -1) { r_config_set_i (core->config, "scr.color", scr_color); } core->fixedblock = false; return rc; fail: rc = -1; goto beach; }
1
[ "CWE-119", "CWE-787" ]
radare2
00e8f205475332d7842d0f0d1481eeab4e83017c
28,499,214,244,404,140,000,000,000,000,000,000,000
758
Fix #7727 - undefined pointers and out of band string access fixes
static void sctp_v4_pf_init(void) { /* Initialize the SCTP specific PF functions. */ sctp_register_pf(&sctp_pf_inet, PF_INET); sctp_register_af(&sctp_af_inet); }
0
[ "CWE-119", "CWE-787" ]
linux
8e2d61e0aed2b7c4ecb35844fe07e0b2b762dee4
266,455,749,680,068,670,000,000,000,000,000,000,000
6
sctp: fix race on protocol/netns initialization Consider sctp module is unloaded and is being requested because an user is creating a sctp socket. During initialization, sctp will add the new protocol type and then initialize pernet subsys: status = sctp_v4_protosw_init(); if (status) goto err_protosw_init; status = sctp_v6_protosw_init(); if (status) goto err_v6_protosw_init; status = register_pernet_subsys(&sctp_net_ops); The problem is that after those calls to sctp_v{4,6}_protosw_init(), it is possible for userspace to create SCTP sockets like if the module is already fully loaded. If that happens, one of the possible effects is that we will have readers for net->sctp.local_addr_list list earlier than expected and sctp_net_init() does not take precautions while dealing with that list, leading to a potential panic but not limited to that, as sctp_sock_init() will copy a bunch of blank/partially initialized values from net->sctp. The race happens like this: CPU 0 | CPU 1 socket() | __sock_create | socket() inet_create | __sock_create list_for_each_entry_rcu( | answer, &inetsw[sock->type], | list) { | inet_create /* no hits */ | if (unlikely(err)) { | ... | request_module() | /* socket creation is blocked | * the module is fully loaded | */ | sctp_init | sctp_v4_protosw_init | inet_register_protosw | list_add_rcu(&p->list, | last_perm); | | list_for_each_entry_rcu( | answer, &inetsw[sock->type], sctp_v6_protosw_init | list) { | /* hit, so assumes protocol | * is already loaded | */ | /* socket creation continues | * before netns is initialized | */ register_pernet_subsys | Simply inverting the initialization order between register_pernet_subsys() and sctp_v4_protosw_init() is not possible because register_pernet_subsys() will create a control sctp socket, so the protocol must be already visible by then. Deferring the socket creation to a work-queue is not good specially because we loose the ability to handle its errors. So, as suggested by Vlad, the fix is to split netns initialization in two moments: defaults and control socket, so that the defaults are already loaded by when we register the protocol, while control socket initialization is kept at the same moment it is today. Fixes: 4db67e808640 ("sctp: Make the address lists per network namespace") Signed-off-by: Vlad Yasevich <[email protected]> Signed-off-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
e_book_backend_ldap_notify_online_cb (EBookBackend *backend, GParamSpec *pspec) { EBookBackendLDAP *bl = E_BOOK_BACKEND_LDAP (backend); /* Cancel all running operations */ ldap_cancel_all_operations (backend); if (!e_backend_get_online (E_BACKEND (backend))) { /* Go offline */ e_book_backend_set_writable (backend, FALSE); bl->priv->connected = FALSE; } else { /* Go online */ e_book_backend_set_writable (backend, TRUE); if (e_book_backend_is_opened (backend)) { GError *error = NULL; if (!e_book_backend_ldap_connect (bl, &error)) { e_book_backend_notify_error ( backend, error->message); g_error_free (error); } if (bl->priv->marked_for_offline && bl->priv->cache) generate_cache (bl); } } }
0
[]
evolution-data-server
34bad61738e2127736947ac50e0c7969cc944972
17,387,984,750,384,192,000,000,000,000,000,000,000
33
Bug 796174 - strcat() considered unsafe for buffer overflow
static void fix_hostname(struct SessionHandle *data, struct connectdata *conn, struct hostname *host) { size_t len; #ifndef USE_LIBIDN (void)data; (void)conn; #elif defined(CURL_DISABLE_VERBOSE_STRINGS) (void)conn; #endif /* set the name we use to display the host name */ host->dispname = host->name; len = strlen(host->name); if(len && (host->name[len-1] == '.')) /* strip off a single trailing dot if present, primarily for SNI but there's no use for it */ host->name[len-1]=0; if(!is_ASCII_name(host->name)) { #ifdef USE_LIBIDN /************************************************************* * Check name for non-ASCII and convert hostname to ACE form. *************************************************************/ if(stringprep_check_version(LIBIDN_REQUIRED_VERSION)) { char *ace_hostname = NULL; int rc = idna_to_ascii_lz(host->name, &ace_hostname, 0); infof (data, "Input domain encoded as `%s'\n", stringprep_locale_charset ()); if(rc != IDNA_SUCCESS) infof(data, "Failed to convert %s to ACE; %s\n", host->name, Curl_idn_strerror(conn, rc)); else { /* tld_check_name() displays a warning if the host name contains "illegal" characters for this TLD */ (void)tld_check_name(data, ace_hostname); host->encalloc = ace_hostname; /* change the name pointer to point to the encoded hostname */ host->name = host->encalloc; } } #elif defined(USE_WIN32_IDN) /************************************************************* * Check name for non-ASCII and convert hostname to ACE form. *************************************************************/ char *ace_hostname = NULL; int rc = curl_win32_idn_to_ascii(host->name, &ace_hostname); if(rc == 0) infof(data, "Failed to convert %s to ACE;\n", host->name); else { host->encalloc = ace_hostname; /* change the name pointer to point to the encoded hostname */ host->name = host->encalloc; } #else infof(data, "IDN support not present, can't parse Unicode domains\n"); #endif } }
0
[ "CWE-119" ]
curl
0583e87ada7a3cfb10904ae4ab61b339582c5bd3
327,923,086,832,638,570,000,000,000,000,000,000,000
63
fix_hostname: zero length host name caused -1 index offset If a URL is given with a zero-length host name, like in "http://:80" or just ":80", `fix_hostname()` will index the host name pointer with a -1 offset (as it blindly assumes a non-zero length) and both read and assign that address. CVE-2015-3144 Bug: http://curl.haxx.se/docs/adv_20150422D.html Reported-by: Hanno Böck
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s) { BlockDriverState *source = s->mirror_top_bs->backing->bs; MirrorOp *pseudo_op; int64_t offset; uint64_t delay_ns = 0, ret = 0; /* At least the first dirty chunk is mirrored in one iteration. */ int nb_chunks = 1; bool write_zeroes_ok = bdrv_can_write_zeroes_with_unmap(blk_bs(s->target)); int max_io_bytes = MAX(s->buf_size / MAX_IN_FLIGHT, MAX_IO_BYTES); bdrv_dirty_bitmap_lock(s->dirty_bitmap); offset = bdrv_dirty_iter_next(s->dbi); if (offset < 0) { bdrv_set_dirty_iter(s->dbi, 0); offset = bdrv_dirty_iter_next(s->dbi); trace_mirror_restart_iter(s, bdrv_get_dirty_count(s->dirty_bitmap)); assert(offset >= 0); } bdrv_dirty_bitmap_unlock(s->dirty_bitmap); mirror_wait_on_conflicts(NULL, s, offset, 1); job_pause_point(&s->common.job); /* Find the number of consective dirty chunks following the first dirty * one, and wait for in flight requests in them. */ bdrv_dirty_bitmap_lock(s->dirty_bitmap); while (nb_chunks * s->granularity < s->buf_size) { int64_t next_dirty; int64_t next_offset = offset + nb_chunks * s->granularity; int64_t next_chunk = next_offset / s->granularity; if (next_offset >= s->bdev_length || !bdrv_dirty_bitmap_get_locked(s->dirty_bitmap, next_offset)) { break; } if (test_bit(next_chunk, s->in_flight_bitmap)) { break; } next_dirty = bdrv_dirty_iter_next(s->dbi); if (next_dirty > next_offset || next_dirty < 0) { /* The bitmap iterator's cache is stale, refresh it */ bdrv_set_dirty_iter(s->dbi, next_offset); next_dirty = bdrv_dirty_iter_next(s->dbi); } assert(next_dirty == next_offset); nb_chunks++; } /* Clear dirty bits before querying the block status, because * calling bdrv_block_status_above could yield - if some blocks are * marked dirty in this window, we need to know. */ bdrv_reset_dirty_bitmap_locked(s->dirty_bitmap, offset, nb_chunks * s->granularity); bdrv_dirty_bitmap_unlock(s->dirty_bitmap); /* Before claiming an area in the in-flight bitmap, we have to * create a MirrorOp for it so that conflicting requests can wait * for it. mirror_perform() will create the real MirrorOps later, * for now we just create a pseudo operation that will wake up all * conflicting requests once all real operations have been * launched. */ pseudo_op = g_new(MirrorOp, 1); *pseudo_op = (MirrorOp){ .offset = offset, .bytes = nb_chunks * s->granularity, .is_pseudo_op = true, }; qemu_co_queue_init(&pseudo_op->waiting_requests); QTAILQ_INSERT_TAIL(&s->ops_in_flight, pseudo_op, next); bitmap_set(s->in_flight_bitmap, offset / s->granularity, nb_chunks); while (nb_chunks > 0 && offset < s->bdev_length) { int ret; int64_t io_bytes; int64_t io_bytes_acct; MirrorMethod mirror_method = MIRROR_METHOD_COPY; assert(!(offset % s->granularity)); ret = bdrv_block_status_above(source, NULL, offset, nb_chunks * s->granularity, &io_bytes, NULL, NULL); if (ret < 0) { io_bytes = MIN(nb_chunks * s->granularity, max_io_bytes); } else if (ret & BDRV_BLOCK_DATA) { io_bytes = MIN(io_bytes, max_io_bytes); } io_bytes -= io_bytes % s->granularity; if (io_bytes < s->granularity) { io_bytes = s->granularity; } else if (ret >= 0 && !(ret & BDRV_BLOCK_DATA)) { int64_t target_offset; int64_t target_bytes; bdrv_round_to_clusters(blk_bs(s->target), offset, io_bytes, &target_offset, &target_bytes); if (target_offset == offset && target_bytes == io_bytes) { mirror_method = ret & BDRV_BLOCK_ZERO ? 
MIRROR_METHOD_ZERO : MIRROR_METHOD_DISCARD; } } while (s->in_flight >= MAX_IN_FLIGHT) { trace_mirror_yield_in_flight(s, offset, s->in_flight); mirror_wait_for_free_in_flight_slot(s); } if (s->ret < 0) { ret = 0; goto fail; } io_bytes = mirror_clip_bytes(s, offset, io_bytes); io_bytes = mirror_perform(s, offset, io_bytes, mirror_method); if (mirror_method != MIRROR_METHOD_COPY && write_zeroes_ok) { io_bytes_acct = 0; } else { io_bytes_acct = io_bytes; } assert(io_bytes); offset += io_bytes; nb_chunks -= DIV_ROUND_UP(io_bytes, s->granularity); delay_ns = block_job_ratelimit_get_delay(&s->common, io_bytes_acct); } ret = delay_ns; fail: QTAILQ_REMOVE(&s->ops_in_flight, pseudo_op, next); qemu_co_queue_restart_all(&pseudo_op->waiting_requests); g_free(pseudo_op); return ret; }
0
[ "CWE-476" ]
qemu
66fed30c9cd11854fc878a4eceb507e915d7c9cd
264,131,357,542,127,500,000,000,000,000,000,000,000
137
block/mirror: fix NULL pointer dereference in mirror_wait_on_conflicts() In mirror_iteration() we call mirror_wait_on_conflicts() with `self` parameter set to NULL. Starting from commit d44dae1a7c we dereference `self` pointer in mirror_wait_on_conflicts() without checks if it is not NULL. Backtrace: Program terminated with signal SIGSEGV, Segmentation fault. #0 mirror_wait_on_conflicts (self=0x0, s=<optimized out>, offset=<optimized out>, bytes=<optimized out>) at ../block/mirror.c:172 172 self->waiting_for_op = op; [Current thread is 1 (Thread 0x7f0908931ec0 (LWP 380249))] (gdb) bt #0 mirror_wait_on_conflicts (self=0x0, s=<optimized out>, offset=<optimized out>, bytes=<optimized out>) at ../block/mirror.c:172 #1 0x00005610c5d9d631 in mirror_run (job=0x5610c76a2c00, errp=<optimized out>) at ../block/mirror.c:491 #2 0x00005610c5d58726 in job_co_entry (opaque=0x5610c76a2c00) at ../job.c:917 #3 0x00005610c5f046c6 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at ../util/coroutine-ucontext.c:173 #4 0x00007f0909975820 in ?? () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91 from /usr/lib64/libc.so.6 Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2001404 Fixes: d44dae1a7c ("block/mirror: fix active mirror dead-lock in mirror_wait_on_conflicts") Signed-off-by: Stefano Garzarella <[email protected]> Message-Id: <[email protected]> Reviewed-by: Vladimir Sementsov-Ogievskiy <[email protected]> Signed-off-by: Hanna Reitz <[email protected]>
static void __exit rbd_exit(void) { ida_destroy(&rbd_dev_id_ida); rbd_sysfs_cleanup(); if (single_major) unregister_blkdev(rbd_major, RBD_DRV_NAME); destroy_workqueue(rbd_wq); rbd_slab_exit(); }
0
[ "CWE-863" ]
linux
f44d04e696feaf13d192d942c4f14ad2e117065a
15,145,220,578,523,788,000,000,000,000,000,000,000
9
rbd: require global CAP_SYS_ADMIN for mapping and unmapping It turns out that currently we rely only on sysfs attribute permissions: $ ll /sys/bus/rbd/{add*,remove*} --w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/add --w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/add_single_major --w------- 1 root root 4096 Sep 3 20:37 /sys/bus/rbd/remove --w------- 1 root root 4096 Sep 3 20:38 /sys/bus/rbd/remove_single_major This means that images can be mapped and unmapped (i.e. block devices can be created and deleted) by a UID 0 process even after it drops all privileges or by any process with CAP_DAC_OVERRIDE in its user namespace as long as UID 0 is mapped into that user namespace. Be consistent with other virtual block devices (loop, nbd, dm, md, etc) and require CAP_SYS_ADMIN in the initial user namespace for mapping and unmapping, and also for dumping the configuration string and refreshing the image header. Cc: [email protected] Signed-off-by: Ilya Dryomov <[email protected]> Reviewed-by: Jeff Layton <[email protected]>
static int proc_sys_delete(const struct dentry *dentry) { return !!PROC_I(d_inode(dentry))->sysctl->unregistering; }
0
[ "CWE-20", "CWE-399" ]
linux
93362fa47fe98b62e4a34ab408c4a418432e7939
206,176,564,509,294,300,000,000,000,000,000,000,000
4
sysctl: Drop reference added by grab_header in proc_sys_readdir Fixes CVE-2016-9191, proc_sys_readdir doesn't drop reference added by grab_header when return from !dir_emit_dots path. It can cause any path called unregister_sysctl_table will wait forever. The calltrace of CVE-2016-9191: [ 5535.960522] Call Trace: [ 5535.963265] [<ffffffff817cdaaf>] schedule+0x3f/0xa0 [ 5535.968817] [<ffffffff817d33fb>] schedule_timeout+0x3db/0x6f0 [ 5535.975346] [<ffffffff817cf055>] ? wait_for_completion+0x45/0x130 [ 5535.982256] [<ffffffff817cf0d3>] wait_for_completion+0xc3/0x130 [ 5535.988972] [<ffffffff810d1fd0>] ? wake_up_q+0x80/0x80 [ 5535.994804] [<ffffffff8130de64>] drop_sysctl_table+0xc4/0xe0 [ 5536.001227] [<ffffffff8130de17>] drop_sysctl_table+0x77/0xe0 [ 5536.007648] [<ffffffff8130decd>] unregister_sysctl_table+0x4d/0xa0 [ 5536.014654] [<ffffffff8130deff>] unregister_sysctl_table+0x7f/0xa0 [ 5536.021657] [<ffffffff810f57f5>] unregister_sched_domain_sysctl+0x15/0x40 [ 5536.029344] [<ffffffff810d7704>] partition_sched_domains+0x44/0x450 [ 5536.036447] [<ffffffff817d0761>] ? __mutex_unlock_slowpath+0x111/0x1f0 [ 5536.043844] [<ffffffff81167684>] rebuild_sched_domains_locked+0x64/0xb0 [ 5536.051336] [<ffffffff8116789d>] update_flag+0x11d/0x210 [ 5536.057373] [<ffffffff817cf61f>] ? mutex_lock_nested+0x2df/0x450 [ 5536.064186] [<ffffffff81167acb>] ? cpuset_css_offline+0x1b/0x60 [ 5536.070899] [<ffffffff810fce3d>] ? trace_hardirqs_on+0xd/0x10 [ 5536.077420] [<ffffffff817cf61f>] ? mutex_lock_nested+0x2df/0x450 [ 5536.084234] [<ffffffff8115a9f5>] ? css_killed_work_fn+0x25/0x220 [ 5536.091049] [<ffffffff81167ae5>] cpuset_css_offline+0x35/0x60 [ 5536.097571] [<ffffffff8115aa2c>] css_killed_work_fn+0x5c/0x220 [ 5536.104207] [<ffffffff810bc83f>] process_one_work+0x1df/0x710 [ 5536.110736] [<ffffffff810bc7c0>] ? process_one_work+0x160/0x710 [ 5536.117461] [<ffffffff810bce9b>] worker_thread+0x12b/0x4a0 [ 5536.123697] [<ffffffff810bcd70>] ? process_one_work+0x710/0x710 [ 5536.130426] [<ffffffff810c3f7e>] kthread+0xfe/0x120 [ 5536.135991] [<ffffffff817d4baf>] ret_from_fork+0x1f/0x40 [ 5536.142041] [<ffffffff810c3e80>] ? kthread_create_on_node+0x230/0x230 One cgroup maintainer mentioned that "cgroup is trying to offline a cpuset css, which takes place under cgroup_mutex. The offlining ends up trying to drain active usages of a sysctl table which apprently is not happening." The real reason is that proc_sys_readdir doesn't drop reference added by grab_header when return from !dir_emit_dots path. So this cpuset offline path will wait here forever. See here for details: http://www.openwall.com/lists/oss-security/2016/11/04/13 Fixes: f0c3b5093add ("[readdir] convert procfs") Cc: [email protected] Reported-by: CAI Qian <[email protected]> Tested-by: Yang Shukui <[email protected]> Signed-off-by: Zhou Chengming <[email protected]> Acked-by: Al Viro <[email protected]> Signed-off-by: Eric W. Biederman <[email protected]>
_copyBitmapAnd(const BitmapAnd *from) { BitmapAnd *newnode = makeNode(BitmapAnd); /* * copy node superclass fields */ CopyPlanFields((const Plan *) from, (Plan *) newnode); /* * copy remainder of node */ COPY_NODE_FIELD(bitmapplans); return newnode; }
0
[ "CWE-362" ]
postgres
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
317,718,714,495,884,420,000,000,000,000,000,000,000
16
Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
void udp_set_csum(bool nocheck, struct sk_buff *skb, __be32 saddr, __be32 daddr, int len) { struct udphdr *uh = udp_hdr(skb); if (nocheck) { uh->check = 0; } else if (skb_is_gso(skb)) { uh->check = ~udp_v4_check(len, saddr, daddr, 0); } else if (skb->ip_summed == CHECKSUM_PARTIAL) { uh->check = 0; uh->check = udp_v4_check(len, saddr, daddr, lco_csum(skb)); if (uh->check == 0) uh->check = CSUM_MANGLED_0; } else { skb->ip_summed = CHECKSUM_PARTIAL; skb->csum_start = skb_transport_header(skb) - skb->head; skb->csum_offset = offsetof(struct udphdr, check); uh->check = ~udp_v4_check(len, saddr, daddr, 0); } }
0
[]
linux
a612769774a30e4fc143c4cb6395c12573415660
215,594,302,923,327,200,000,000,000,000,000,000,000
21
udp: prevent bugcheck if filter truncates packet too much If socket filter truncates an udp packet below the length of UDP header in udpv6_queue_rcv_skb() or udp_queue_rcv_skb(), it will trigger a BUG_ON in skb_pull_rcsum(). This BUG_ON (and therefore a system crash if kernel is configured that way) can be easily enforced by an unprivileged user which was reported as CVE-2016-6162. For a reproducer, see http://seclists.org/oss-sec/2016/q3/8 Fixes: e6afc8ace6dd ("udp: remove headers from UDP packets before queueing") Reported-by: Marco Grassi <[email protected]> Signed-off-by: Michal Kubecek <[email protected]> Acked-by: Eric Dumazet <[email protected]> Acked-by: Willem de Bruijn <[email protected]> Signed-off-by: David S. Miller <[email protected]>
static int piv_get_key(sc_card_t *card, unsigned int alg_id, u8 **key, size_t *len) { int r; size_t fsize; FILE *f = NULL; char * keyfilename = NULL; size_t expected_keylen; size_t keylen; u8 * keybuf = NULL; u8 * tkey = NULL; SC_FUNC_CALLED(card->ctx, SC_LOG_DEBUG_VERBOSE); keyfilename = (char *)getenv("PIV_EXT_AUTH_KEY"); if (keyfilename == NULL) { sc_log(card->ctx, "Unable to get PIV_EXT_AUTH_KEY=(null) for general_external_authenticate"); r = SC_ERROR_FILE_NOT_FOUND; goto err; } r = get_keylen(alg_id, &expected_keylen); if(r) { sc_debug(card->ctx, SC_LOG_DEBUG_VERBOSE, "Invalid cipher selector, none found for: %02x", alg_id); r = SC_ERROR_INVALID_ARGUMENTS; goto err; } f = fopen(keyfilename, "rb"); if (!f) { sc_log(card->ctx, " Unable to load key from file\n"); r = SC_ERROR_FILE_NOT_FOUND; goto err; } if (0 > fseek(f, 0L, SEEK_END)) r = SC_ERROR_INTERNAL; fsize = ftell(f); if (0 > (long) fsize) r = SC_ERROR_INTERNAL; if (0 > fseek(f, 0L, SEEK_SET)) r = SC_ERROR_INTERNAL; if(r) { sc_debug(card->ctx, SC_LOG_DEBUG_VERBOSE, "Could not read %s\n", keyfilename); goto err; } keybuf = malloc(fsize+1); /* if not binary, need null to make it a string */ if (!keybuf) { sc_log(card->ctx, " Unable to allocate key memory"); r = SC_ERROR_OUT_OF_MEMORY; goto err; } keybuf[fsize] = 0x00; /* in case it is text need null */ if (fread(keybuf, 1, fsize, f) != fsize) { sc_log(card->ctx, " Unable to read key\n"); r = SC_ERROR_WRONG_LENGTH; goto err; } tkey = malloc(expected_keylen); if (!tkey) { sc_log(card->ctx, " Unable to allocate key memory"); r = SC_ERROR_OUT_OF_MEMORY; goto err; } if (fsize == expected_keylen) { /* it must be binary */ memcpy(tkey, keybuf, expected_keylen); } else { /* if the key-length is larger then binary length, we assume hex encoded */ sc_debug(card->ctx, SC_LOG_DEBUG_VERBOSE, "Treating key as hex-encoded!\n"); sc_right_trim(keybuf, fsize); keylen = expected_keylen; r = sc_hex_to_bin((char *)keybuf, tkey, &keylen); if (keylen !=expected_keylen || r != 0 ) { sc_debug(card->ctx, SC_LOG_DEBUG_VERBOSE, "Error formatting key\n"); if (r == 0) r = SC_ERROR_INCOMPATIBLE_KEY; goto err; } } *key = tkey; tkey = NULL; *len = expected_keylen; r = SC_SUCCESS; err: if (f) fclose(f); if (keybuf) { free(keybuf); } if (tkey) { free(tkey); } LOG_FUNC_RETURN(card->ctx, r); return r; }
0
[ "CWE-125" ]
OpenSC
8fe377e93b4b56060e5bbfb6f3142ceaeca744fa
246,526,814,575,912,270,000,000,000,000,000,000,000
103
fixed out of bounds reads Thanks to Eric Sesterhenn from X41 D-SEC GmbH for reporting and suggesting security fixes.
struct inode *inode_insert5(struct inode *inode, unsigned long hashval, int (*test)(struct inode *, void *), int (*set)(struct inode *, void *), void *data) { struct hlist_head *head = inode_hashtable + hash(inode->i_sb, hashval); struct inode *old; bool creating = inode->i_state & I_CREATING; again: spin_lock(&inode_hash_lock); old = find_inode(inode->i_sb, head, test, data); if (unlikely(old)) { /* * Uhhuh, somebody else created the same inode under us. * Use the old inode instead of the preallocated one. */ spin_unlock(&inode_hash_lock); if (IS_ERR(old)) return NULL; wait_on_inode(old); if (unlikely(inode_unhashed(old))) { iput(old); goto again; } return old; } if (set && unlikely(set(inode, data))) { inode = NULL; goto unlock; } /* * Return the locked inode with I_NEW set, the * caller is responsible for filling in the contents */ spin_lock(&inode->i_lock); inode->i_state |= I_NEW; hlist_add_head(&inode->i_hash, head); spin_unlock(&inode->i_lock); if (!creating) inode_sb_list_add(inode); unlock: spin_unlock(&inode_hash_lock); return inode; }
0
[ "CWE-416" ]
tip
8019ad13ef7f64be44d4f892af9c840179009254
240,957,848,943,769,520,000,000,000,000,000,000,000
47
futex: Fix inode life-time issue As reported by Jann, ihold() does not in fact guarantee inode persistence. And instead of making it so, replace the usage of inode pointers with a per boot, machine wide, unique inode identifier. This sequence number is global, but shared (file backed) futexes are rare enough that this should not become a performance issue. Reported-by: Jann Horn <[email protected]> Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
longlong Field_timestamp::val_int(void) { MYSQL_TIME ltime; if (get_date(&ltime, TIME_NO_ZERO_DATE)) return 0; return ltime.year * 10000000000LL + ltime.month * 100000000LL + ltime.day * 1000000L + ltime.hour * 10000L + ltime.minute * 100 + ltime.second; }
0
[ "CWE-416", "CWE-703" ]
server
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
188,689,395,264,212,400,000,000,000,000,000,000,000
10
MDEV-24176 Server crashes after insert in the table with virtual column generated using date_format() and if() vcol_info->expr is allocated on expr_arena at parsing stage. Since expr item is allocated on expr_arena all its containee items must be allocated on expr_arena too. Otherwise fix_session_expr() will encounter prematurely freed item. When table is reopened from cache vcol_info contains stale expression. We refresh expression via TABLE::vcol_fix_exprs() but first we must prepare a proper context (Vcol_expr_context) which meets some requirements: 1. As noted above expr update must be done on expr_arena as there may be new items created. It was a bug in fix_session_expr_for_read() and was just not reproduced because of no second refix. Now refix is done for more cases so it does reproduce. Tests affected: vcol.binlog 2. Also name resolution context must be narrowed to the single table. Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes 3. sql_mode must be clean and not fail expr update. sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc must not affect vcol expression update. If the table was created successfully any further evaluation must not fail. Tests affected: main.func_like Reviewed by: Sergei Golubchik <[email protected]>