func (string, len 0-484k) | target (int64, 0-1) | cwe (list, len 0-4) | project (string, 799 classes) | commit_id (string, len 40) | hash (float64, ~1.2e24-3.4e29) | size (int64, 1-24k) | message (string, len 0-13.3k)
---|---|---|---|---|---|---|---|
static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
struct kvm_clear_dirty_log *log)
{
int r;
mutex_lock(&kvm->slots_lock);
r = kvm_clear_dirty_log_protect(kvm, log);
mutex_unlock(&kvm->slots_lock);
return r;
}
| 0 | ["CWE-459"] | linux | 683412ccf61294d727ead4a73d97397396e69a6b | 295,512,583,303,334,200,000,000,000,000,000,000,000 | 12 |
KVM: SEV: add cache flush to solve SEV cache incoherency issues
Flush the CPU caches when memory is reclaimed from an SEV guest (where
reclaim also includes it being unmapped from KVM's memslots). Due to lack
of coherency for SEV encrypted memory, failure to flush results in silent
data corruption if userspace is malicious/broken and doesn't ensure SEV
guest memory is properly pinned and unpinned.
Cache coherency is not enforced across the VM boundary in SEV (AMD APM
vol.2 Section 15.34.7). Confidential cachelines, generated by confidential
VM guests have to be explicitly flushed on the host side. If a memory page
containing dirty confidential cachelines was released by VM and reallocated
to another user, the cachelines may corrupt the new user at a later time.
KVM takes a shortcut by assuming all confidential memory remains pinned
until the end of the VM's lifetime. Therefore, KVM does not flush caches at
mmu_notifier invalidation events. Because of this incorrect assumption and
the lack of cache flushing, malicious userspace can crash the host kernel
by creating a malicious VM that continuously allocates and releases unpinned
confidential memory pages while the VM is running.
Add cache flush operations to the mmu_notifier operations to ensure that any
physical memory leaving the guest VM gets flushed. In particular, hook the
mmu_notifier_invalidate_range_start and mmu_notifier_release events and
flush caches accordingly. The hooks run after releasing the mmu lock to avoid
contention with other vCPUs.
Cc: [email protected]
Suggested-by: Sean Christpherson <[email protected]>
Reported-by: Mingwei Zhang <[email protected]>
Signed-off-by: Mingwei Zhang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
static struct ucma_context *ucma_get_ctx_dev(struct ucma_file *file, int id)
{
struct ucma_context *ctx = ucma_get_ctx(file, id);
if (IS_ERR(ctx))
return ctx;
if (!ctx->cm_id->device) {
ucma_put_ctx(ctx);
return ERR_PTR(-EINVAL);
}
return ctx;
}
| 0 | ["CWE-416", "CWE-703"] | linux | cb2595c1393b4a5211534e6f0a0fbad369e21ad8 | 127,179,742,981,885,940,000,000,000,000,000,000,000 | 12 |
infiniband: fix a possible use-after-free bug
ucma_process_join() will free the newly allocated "mc" struct
if there is any error after that, especially in the copy_to_user().
But in parallel, ucma_leave_multicast() could find this "mc"
through idr_find() before ucma_process_join() frees it, since it
is already published.
So "mc" could be used in ucma_leave_multicast() after it has been
allocated and freed in ucma_process_join(), since we don't refcount
it.
Fix this by separating "publish" from ID allocation, so that we
can get an ID first and publish it later after copy_to_user().
Fixes: c8f6a362bf3e ("RDMA/cma: Add multicast communication support")
Reported-by: Noam Rathaus <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
static int crypto_pcomp_init_tfm(struct crypto_tfm *tfm)
{
return 0;
}
| 0 | ["CWE-310"] | linux | 9a5467bf7b6e9e02ec9c3da4e23747c05faeaac6 | 34,670,342,473,313,313,000,000,000,000,000,000,000 | 4 |
crypto: user - fix info leaks in report API
Three errors resulting in kernel memory disclosure:
1/ The structures used for the netlink based crypto algorithm report API
are located on the stack. As snprintf() does not fill the remainder of
the buffer with null bytes, those stack bytes will be disclosed to users
of the API. Switch to strncpy() to fix this.
2/ crypto_report_one() does not initialize all fields of struct
crypto_user_alg. Fix this to close the heap info leak.
3/ For the module name we should copy only as many bytes as
module_name() returns -- not as many as the destination buffer could
hold. But the current code does not, and therefore copies random data
from beyond the end of the module name, as the module name is always
shorter than CRYPTO_MAX_ALG_NAME.
Also switch to use strncpy() to copy the algorithm's name and
driver_name. They are strings, after all.
Signed-off-by: Mathias Krause <[email protected]>
Cc: Steffen Klassert <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
{
struct ieee80211_sub_if_data *sdata = rx->sdata;
struct ieee80211_local *local = rx->local;
struct net_device *dev = sdata->dev;
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
__le16 fc = hdr->frame_control;
bool port_control;
int err;
if (unlikely(!ieee80211_is_data(hdr->frame_control)))
return RX_CONTINUE;
if (unlikely(!ieee80211_is_data_present(hdr->frame_control)))
return RX_DROP_MONITOR;
/*
* Send unexpected-4addr-frame event to hostapd. For older versions,
* also drop the frame to cooked monitor interfaces.
*/
if (ieee80211_has_a4(hdr->frame_control) &&
sdata->vif.type == NL80211_IFTYPE_AP) {
if (rx->sta &&
!test_and_set_sta_flag(rx->sta, WLAN_STA_4ADDR_EVENT))
cfg80211_rx_unexpected_4addr_frame(
rx->sdata->dev, rx->sta->sta.addr, GFP_ATOMIC);
return RX_DROP_MONITOR;
}
err = __ieee80211_data_to_8023(rx, &port_control);
if (unlikely(err))
return RX_DROP_UNUSABLE;
if (!ieee80211_frame_allowed(rx, fc))
return RX_DROP_MONITOR;
/* directly handle TDLS channel switch requests/responses */
if (unlikely(((struct ethhdr *)rx->skb->data)->h_proto ==
cpu_to_be16(ETH_P_TDLS))) {
struct ieee80211_tdls_data *tf = (void *)rx->skb->data;
if (pskb_may_pull(rx->skb,
offsetof(struct ieee80211_tdls_data, u)) &&
tf->payload_type == WLAN_TDLS_SNAP_RFTYPE &&
tf->category == WLAN_CATEGORY_TDLS &&
(tf->action_code == WLAN_TDLS_CHANNEL_SWITCH_REQUEST ||
tf->action_code == WLAN_TDLS_CHANNEL_SWITCH_RESPONSE)) {
skb_queue_tail(&local->skb_queue_tdls_chsw, rx->skb);
schedule_work(&local->tdls_chsw_work);
if (rx->sta)
rx->sta->rx_stats.packets++;
return RX_QUEUED;
}
}
if (rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
unlikely(port_control) && sdata->bss) {
sdata = container_of(sdata->bss, struct ieee80211_sub_if_data,
u.ap);
dev = sdata->dev;
rx->sdata = sdata;
}
rx->skb->dev = dev;
if (!ieee80211_hw_check(&local->hw, SUPPORTS_DYNAMIC_PS) &&
local->ps_sdata && local->hw.conf.dynamic_ps_timeout > 0 &&
!is_multicast_ether_addr(
((struct ethhdr *)rx->skb->data)->h_dest) &&
(!local->scanning &&
!test_bit(SDATA_STATE_OFFCHANNEL, &sdata->state)))
mod_timer(&local->dynamic_ps_timer, jiffies +
msecs_to_jiffies(local->hw.conf.dynamic_ps_timeout));
ieee80211_deliver_skb(rx);
return RX_QUEUED;
}
| 0 | [] | linux | 588f7d39b3592a36fb7702ae3b8bdd9be4621e2f | 304,371,262,619,511,640,000,000,000,000,000,000,000 | 79 |
mac80211: drop robust management frames from unknown TA
When receiving a robust management frame, drop it if we don't have
rx->sta since then we don't have a security association and thus
couldn't possibly validate the frame.
Cc: [email protected]
Signed-off-by: Johannes Berg <[email protected]>
|
OPJ_BOOL opj_jp2_decode_tile(opj_jp2_t * p_jp2,
OPJ_UINT32 p_tile_index,
OPJ_BYTE * p_data,
OPJ_UINT32 p_data_size,
opj_stream_private_t *p_stream,
opj_event_mgr_t * p_manager
)
{
return opj_j2k_decode_tile(p_jp2->j2k, p_tile_index, p_data, p_data_size,
p_stream, p_manager);
}
| 0 | ["CWE-20"] | openjpeg | 4edb8c83374f52cd6a8f2c7c875e8ffacccb5fa5 | 10,055,536,857,159,193,000,000,000,000,000,000,000 | 11 |
Add support for generation of PLT markers in encoder
* -PLT switch added to opj_compress
* Add a opj_encoder_set_extra_options() function that
accepts a PLT=YES option, and could be expanded later
for other uses.
-------
Testing with a Sentinel2 10m band, T36JTT_20160914T074612_B02.jp2,
coming from S2A_MSIL1C_20160914T074612_N0204_R135_T36JTT_20160914T081456.SAFE
Decompress it to TIFF:
```
opj_uncompress -i T36JTT_20160914T074612_B02.jp2 -o T36JTT_20160914T074612_B02.tif
```
Recompress it with similar parameters as original:
```
opj_compress -n 5 -c [256,256],[256,256],[256,256],[256,256],[256,256] -t 1024,1024 -PLT -i T36JTT_20160914T074612_B02.tif -o T36JTT_20160914T074612_B02_PLT.jp2
```
Dump codestream detail with GDAL dump_jp2.py utility (https://github.com/OSGeo/gdal/blob/master/gdal/swig/python/samples/dump_jp2.py)
```
python dump_jp2.py T36JTT_20160914T074612_B02.jp2 > /tmp/dump_sentinel2_ori.txt
python dump_jp2.py T36JTT_20160914T074612_B02_PLT.jp2 > /tmp/dump_sentinel2_openjpeg_plt.txt
```
The diff between both shows a very similar structure, and an identical number of packets in PLT markers
Now testing with Kakadu (KDU803_Demo_Apps_for_Linux-x86-64_200210)
Full file decompression:
```
kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp.tif
Consumed 121 tile-part(s) from a total of 121 tile(s).
Consumed 80,318,806 codestream bytes (excluding any file format) = 5.329697
bits/pel.
Processed using the multi-threaded environment, with
8 parallel threads of execution
```
Partial decompression (presumably using PLT markers):
```
kdu_expand -i T36JTT_20160914T074612_B02.jp2 -o tmp.pgm -region "{0.5,0.5},{0.01,0.01}"
kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp2.pgm -region "{0.5,0.5},{0.01,0.01}"
diff tmp.pgm tmp2.pgm && echo "same !"
```
-------
Funded by ESA for S2-MPC project
|
void end_clause()
{
if (clause_depth == 0)
int_error(c_token, "unexpected }");
else
clause_depth--;
c_token++;
return;
}
| 0 | ["CWE-415"] | gnuplot | 052cbd17c3cbbc602ee080b2617d32a8417d7563 | 112,457,778,450,498,050,000,000,000,000,000,000,000 | 9 |
successive failures of "set print <foo>" could cause double-free
Bug #2312
|
reload_vrrp_thread(__attribute__((unused)) thread_t * thread)
{
log_message(LOG_INFO, "Reloading");
/* Use standard scheduling while reloading */
reset_process_priorities();
/* set the reloading flag */
SET_RELOAD;
/* Terminate all script process */
script_killall(master, SIGTERM, false);
if (vrrp_data->vrrp_track_files)
stop_track_files();
vrrp_initialised = false;
/* Destroy master thread */
vrrp_dispatcher_release(vrrp_data);
thread_cleanup_master(master);
thread_add_base_threads(master);
#ifdef _WITH_LVS_
if (global_data->lvs_syncd.ifname)
ipvs_syncd_cmd(IPVS_STOPDAEMON, &global_data->lvs_syncd,
(global_data->lvs_syncd.vrrp->state == VRRP_STATE_MAST) ? IPVS_MASTER:
IPVS_BACKUP,
true, false);
#endif
/* Remove the notify fifo - we don't know if it will be the same after a reload */
notify_fifo_close(&global_data->notify_fifo, &global_data->vrrp_notify_fifo);
#ifdef _WITH_LVS_
if (vrrp_ipvs_needed()) {
/* Clean ipvs related */
ipvs_stop();
}
#endif
/* Save previous conf data */
old_vrrp_data = vrrp_data;
vrrp_data = NULL;
old_global_data = global_data;
global_data = NULL;
reset_interface_queue();
#ifdef _HAVE_FIB_ROUTING_
reset_next_rule_priority();
#endif
/* Reload the conf */
start_vrrp(old_global_data);
#ifdef _WITH_LVS_
if (global_data->lvs_syncd.ifname)
ipvs_syncd_cmd(IPVS_STARTDAEMON, &global_data->lvs_syncd,
(global_data->lvs_syncd.vrrp->state == VRRP_STATE_MAST) ? IPVS_MASTER:
IPVS_BACKUP,
true, false);
#endif
/* free backup data */
free_vrrp_data(old_vrrp_data);
free_global_data(old_global_data);
free_old_interface_queue();
UNSET_RELOAD;
return 0;
}
| 0 | ["CWE-200"] | keepalived | 26c8d6374db33bcfcdcd758b1282f12ceef4b94f | 245,453,105,375,389,080,000,000,000,000,000,000,000 | 72 |
Disable fopen_safe() append mode by default
If a non privileged user creates /tmp/keepalived.log and has it open
for read (e.g. tail -f), then even though keepalived will change the
owner to root and remove all read/write permissions from non owners,
the application which already has the file open will be able to read
the added log entries.
Accordingly, opening a file in append mode is disabled by default, and
only enabled if --enable-smtp-alert-debug or --enable-log-file (which
are debugging options and unset by default) are enabled.
This should further alleviate security concerns related to CVE-2018-19046.
Signed-off-by: Quentin Armitage <[email protected]>
|
static struct file *do_open_execat(int fd, struct filename *name, int flags)
{
struct file *file;
int err;
struct open_flags open_exec_flags = {
.open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC,
.acc_mode = MAY_EXEC | MAY_OPEN,
.intent = LOOKUP_OPEN,
.lookup_flags = LOOKUP_FOLLOW,
};
if ((flags & ~(AT_SYMLINK_NOFOLLOW | AT_EMPTY_PATH)) != 0)
return ERR_PTR(-EINVAL);
if (flags & AT_SYMLINK_NOFOLLOW)
open_exec_flags.lookup_flags &= ~LOOKUP_FOLLOW;
if (flags & AT_EMPTY_PATH)
open_exec_flags.lookup_flags |= LOOKUP_EMPTY;
file = do_filp_open(fd, name, &open_exec_flags);
if (IS_ERR(file))
goto out;
err = -EACCES;
if (!S_ISREG(file_inode(file)->i_mode))
goto exit;
if (file->f_path.mnt->mnt_flags & MNT_NOEXEC)
goto exit;
err = deny_write_access(file);
if (err)
goto exit;
if (name->name[0] != '\0')
fsnotify_open(file);
out:
return file;
exit:
fput(file);
return ERR_PTR(err);
}
| 0 | ["CWE-362"] | linux | 8b01fc86b9f425899f8a3a8fc1c47d73c2c20543 | 318,098,935,583,987,160,000,000,000,000,000,000,000 | 43 |
fs: take i_mutex during prepare_binprm for set[ug]id executables
This prevents a race between chown() and execve(), where chowning a
setuid-user binary to root would momentarily make the binary setuid
root.
This patch was mostly written by Linus Torvalds.
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
BOOL nla_verify_header(wStream* s)
{
if ((s->pointer[0] == 0x30) && (s->pointer[1] & 0x80))
return TRUE;
return FALSE;
}
| 0 | ["CWE-476", "CWE-125"] | FreeRDP | 0773bb9303d24473fe1185d85a424dfe159aff53 | 323,631,590,968,621,070,000,000,000,000,000,000,000 | 7 |
nla: invalidate sec handle after creation
If the sec pointer isn't invalidated after creation, it is not possible
to check whether the upper and lower pointers are valid.
This fixes a segfault in the server part if the client disconnects before
the authentication was finished.
|
string_center(PyStringObject *self, PyObject *args)
{
Py_ssize_t marg, left;
Py_ssize_t width;
char fillchar = ' ';
if (!PyArg_ParseTuple(args, "n|c:center", &width, &fillchar))
return NULL;
if (PyString_GET_SIZE(self) >= width && PyString_CheckExact(self)) {
Py_INCREF(self);
return (PyObject*) self;
}
marg = width - PyString_GET_SIZE(self);
left = marg / 2 + (marg & width & 1);
return pad(self, left, marg - left, fillchar);
}
| 0 | ["CWE-190"] | cpython | c3c9db89273fabc62ea1b48389d9a3000c1c03ae | 191,406,702,358,575,540,000,000,000,000,000,000,000 | 19 |
[2.7] bpo-30657: Check & prevent integer overflow in PyString_DecodeEscape (#2174)
|
adv_error adv_png_write_rns(
unsigned pix_width, unsigned pix_height, unsigned pix_pixel,
const unsigned char* pix_ptr, int pix_pixel_pitch, int pix_scanline_pitch,
const unsigned char* pal_ptr, unsigned pal_size,
const unsigned char* rns_ptr, unsigned rns_size,
adv_bool fast,
adv_fz* f, unsigned* count)
{
if (adv_png_write_signature(f, count) != 0) {
return -1;
}
return adv_png_write_raw(
pix_width, pix_height, pix_pixel,
pix_ptr, pix_pixel_pitch, pix_scanline_pitch,
pal_ptr, pal_size,
rns_ptr, rns_size,
fast,
f, count
);
}
| 0 | ["CWE-119"] | advancecomp | 78a56b21340157775be2462a19276b4d31d2bd01 | 191,286,979,093,170,050,000,000,000,000,000,000 | 21 |
Fix a buffer overflow caused by invalid images
|
int Option::validate(const Option::value_t &new_value, std::string *err) const
{
// Generic validation: min
if (!boost::get<boost::blank>(&(min))) {
if (new_value < min) {
std::ostringstream oss;
oss << "Value '" << new_value << "' is below minimum " << min;
*err = oss.str();
return -EINVAL;
}
}
// Generic validation: max
if (!boost::get<boost::blank>(&(max))) {
if (new_value > max) {
std::ostringstream oss;
oss << "Value '" << new_value << "' exceeds maximum " << max;
*err = oss.str();
return -EINVAL;
}
}
// Generic validation: enum
if (!enum_allowed.empty() && type == Option::TYPE_STR) {
auto found = std::find(enum_allowed.begin(), enum_allowed.end(),
boost::get<std::string>(new_value));
if (found == enum_allowed.end()) {
std::ostringstream oss;
oss << "'" << new_value << "' is not one of the permitted "
"values: " << joinify(enum_allowed.begin(),
enum_allowed.end(),
std::string(", "));
*err = oss.str();
return -EINVAL;
}
}
return 0;
}
| 0 | ["CWE-770"] | ceph | ab29bed2fc9f961fe895de1086a8208e21ddaddc | 107,905,653,057,041,640,000,000,000,000,000,000,000 | 39 |
rgw: fix issues with 'enforce bounds' patch
The patch to enforce bounds on max-keys/max-uploads/max-parts had a few
issues that would prevent us from compiling it. Instead of changing the
code provided by the submitter, we're addressing them in a separate
commit to maintain the DCO.
Signed-off-by: Joao Eduardo Luis <[email protected]>
Signed-off-by: Abhishek Lekshmanan <[email protected]>
(cherry picked from commit 29bc434a6a81a2e5c5b8cfc4c8d5c82ca5bf538a)
mimic-specific fixes:
As the largeish g_conf() change from master isn't in mimic yet, use the g_conf
global structure; also make rgw_op use the value from the req_info ceph context
as we do for all the requests
|
_zip_file_exists(zip_source_t *src, zip_error_t *error)
{
struct zip_stat st;
zip_stat_init(&st);
if (zip_source_stat(src, &st) != 0) {
zip_error_t *src_error = zip_source_error(src);
if (zip_error_code_zip(src_error) == ZIP_ER_READ && zip_error_code_system(src_error) == ENOENT) {
return EXISTS_NOT;
}
_zip_error_copy(error, src_error);
return EXISTS_ERROR;
}
return (st.valid & ZIP_STAT_SIZE) && st.size == 0 ? EXISTS_EMPTY : EXISTS_NONEMPTY;
}
| 0 | ["CWE-119", "CWE-770", "CWE-787"] | libzip | 9b46957ec98d85a572e9ef98301247f39338a3b5 | 47,234,739,504,109,980,000,000,000,000,000,000,000 | 16 |
Make eocd checks more consistent between zip and zip64 cases.
|
mz_uint mz_zip_reader_get_filename(mz_zip_archive *pZip, mz_uint file_index,
char *pFilename, mz_uint filename_buf_size) {
mz_uint n;
const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index);
if (!p) {
if (filename_buf_size) pFilename[0] = '\0';
return 0;
}
n = MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS);
if (filename_buf_size) {
n = MZ_MIN(n, filename_buf_size - 1);
memcpy(pFilename, p + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE, n);
pFilename[n] = '\0';
}
return n + 1;
}
| 0 | ["CWE-20", "CWE-190"] | tinyexr | a685e3332f61cd4e59324bf3f669d36973d64270 | 303,994,373,405,965,170,000,000,000,000,000,000,000 | 16 |
Make line_no with too large value(2**20) invalid. Fixes #124
|
static int proc_pid_schedstat(struct task_struct *task, char *buffer)
{
return sprintf(buffer, "%llu %llu %lu\n",
(unsigned long long)task->se.sum_exec_runtime,
(unsigned long long)task->sched_info.run_delay,
task->sched_info.pcount);
}
| 0 | [] | linux | 0499680a42141d86417a8fbaa8c8db806bea1201 | 146,865,734,449,799,300,000,000,000,000,000,000,000 | 7 |
procfs: add hidepid= and gid= mount options
Add support for mount options to restrict access to /proc/PID/
directories. The default backward-compatible "relaxed" behaviour is left
untouched.
The first mount option is called "hidepid" and its value defines how much
info about processes we want to be available for non-owners:
hidepid=0 (default) means the old behavior - anybody may read all
world-readable /proc/PID/* files.
hidepid=1 means users may not access any /proc/<pid>/ directories, but
their own. Sensitive files like cmdline, sched*, status are now protected
against other users. As permission checking done in proc_pid_permission()
and files' permissions are left untouched, programs expecting specific
files' modes are not confused.
hidepid=2 means hidepid=1 plus all /proc/PID/ will be invisible to other
users. It doesn't mean that it hides whether a process exists (it can be
learned by other means, e.g. by kill -0 $PID), but it hides process' euid
and egid. It compicates intruder's task of gathering info about running
processes, whether some daemon runs with elevated privileges, whether
another user runs some sensitive program, whether other users run any
program at all, etc.
gid=XXX defines a group that will be able to gather all processes' info
(as in hidepid=0 mode). This group should be used instead of putting
nonroot user in sudoers file or something. However, untrusted users (like
daemons, etc.) which are not supposed to monitor the tasks in the whole
system should not be added to the group.
hidepid=1 or higher is designed to restrict access to procfs files, which
might reveal some sensitive private information like precise keystrokes
timings:
http://www.openwall.com/lists/oss-security/2011/11/05/3
hidepid=1/2 doesn't break monitoring userspace tools. ps, top, pgrep, and
conky gracefully handle EPERM/ENOENT and behave as if the current user is
the only user running processes. pstree shows the process subtree which
contains "pstree" process.
Note: the patch doesn't deal with setuid/setgid issues of keeping
preopened descriptors of procfs files (like
https://lkml.org/lkml/2011/2/7/368). We rely on that the leaked
information like the scheduling counters of setuid apps doesn't threaten
anybody's privacy - only the user started the setuid program may read the
counters.
Signed-off-by: Vasiliy Kulikov <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Greg KH <[email protected]>
Cc: Theodore Tso <[email protected]>
Cc: Alan Cox <[email protected]>
Cc: James Morris <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Hugh Dickins <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
ProxyProtocolTest()
: api_(Api::createApiForTest(stats_store_)),
dispatcher_(api_->allocateDispatcher("test_thread")),
socket_(std::make_shared<Network::TcpListenSocket>(
Network::Test::getCanonicalLoopbackAddress(GetParam()), nullptr, true)),
connection_handler_(new Server::ConnectionHandlerImpl(*dispatcher_)), name_("proxy"),
filter_chain_(Network::Test::createEmptyFilterChainWithRawBufferSockets()) {
EXPECT_CALL(socket_factory_, socketType()).WillOnce(Return(Network::Socket::Type::Stream));
EXPECT_CALL(socket_factory_, localAddress()).WillOnce(ReturnRef(socket_->localAddress()));
EXPECT_CALL(socket_factory_, getListenSocket()).WillOnce(Return(socket_));
connection_handler_->addListener(absl::nullopt, *this);
conn_ = dispatcher_->createClientConnection(socket_->localAddress(),
Network::Address::InstanceConstSharedPtr(),
Network::Test::createRawBufferSocket(), nullptr);
conn_->addConnectionCallbacks(connection_callbacks_);
}
| 0 | ["CWE-400"] | envoy | dfddb529e914d794ac552e906b13d71233609bf7 | 75,316,700,242,880,660,000,000,000,000,000,000,000 | 16 |
listener: Add configurable accepted connection limits (#153)
Add support for per-listener limits on accepted connections.
Signed-off-by: Tony Allen <[email protected]>
|
int __init ccid_initialize_builtins(void)
{
int i, err = tfrc_lib_init();
if (err)
return err;
for (i = 0; i < ARRAY_SIZE(ccids); i++) {
err = ccid_activate(ccids[i]);
if (err)
goto unwind_registrations;
}
return 0;
unwind_registrations:
while(--i >= 0)
ccid_deactivate(ccids[i]);
tfrc_lib_exit();
return err;
}
| 0 | ["CWE-476"] | linux-2.6 | 8ed030dd0aa400d18c63861c2c6deb7c38f4edde | 27,432,973,520,972,867,000,000,000,000,000,000,000 | 20 |
dccp: fix bug in cache allocation
This fixes a bug introduced in commit de4ef86cfce60d2250111f34f8a084e769f23b16
("dccp: fix dccp rmmod when kernel configured to use slub", 17 Jan): the
vsnprintf used sizeof(slab_name_fmt), which became truncated to 4 bytes, since
slab_name_fmt is now a 4-byte pointer and no longer a 32-character array.
This led to error messages such as
FATAL: Error inserting dccp: No buffer space available
>> kernel: [ 1456.341501] kmem_cache_create: duplicate cache cci
generated due to the truncation after the 3rd character.
Fixed for the moment by introducing a symbolic constant. Tested to fix the bug.
Signed-off-by: Gerrit Renker <[email protected]>
Acked-by: Neil Horman <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static int proc_sys_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int query_flags)
{
struct inode *inode = d_inode(path->dentry);
struct ctl_table_header *head = grab_header(inode);
struct ctl_table *table = PROC_I(inode)->sysctl_entry;
if (IS_ERR(head))
return PTR_ERR(head);
generic_fillattr(inode, stat);
if (table)
stat->mode = (stat->mode & S_IFMT) | table->mode;
sysctl_head_finish(head);
return 0;
}
| 0 | ["CWE-476"] | linux | 23da9588037ecdd4901db76a5b79a42b529c4ec3 | 308,836,954,479,464,950,000,000,000,000,000,000,000 | 17 |
fs/proc/proc_sysctl.c: fix NULL pointer dereference in put_links
Syzkaller reports:
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN PTI
CPU: 1 PID: 5373 Comm: syz-executor.0 Not tainted 5.0.0-rc8+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:put_links+0x101/0x440 fs/proc/proc_sysctl.c:1599
Code: 00 0f 85 3a 03 00 00 48 8b 43 38 48 89 44 24 20 48 83 c0 38 48 89 c2 48 89 44 24 28 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 <80> 3c 02 00 0f 85 fe 02 00 00 48 8b 74 24 20 48 c7 c7 60 2a 9d 91
RSP: 0018:ffff8881d828f238 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: ffff8881e01b1140 RCX: ffffffff8ee98267
RDX: 0000000000000007 RSI: ffffc90001479000 RDI: ffff8881e01b1178
RBP: dffffc0000000000 R08: ffffed103ee27259 R09: ffffed103ee27259
R10: 0000000000000001 R11: ffffed103ee27258 R12: fffffffffffffff4
R13: 0000000000000006 R14: ffff8881f59838c0 R15: dffffc0000000000
FS: 00007f072254f700(0000) GS:ffff8881f7100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fff8b286668 CR3: 00000001f0542002 CR4: 00000000007606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
drop_sysctl_table+0x152/0x9f0 fs/proc/proc_sysctl.c:1629
get_subdir fs/proc/proc_sysctl.c:1022 [inline]
__register_sysctl_table+0xd65/0x1090 fs/proc/proc_sysctl.c:1335
br_netfilter_init+0xbc/0x1000 [br_netfilter]
do_one_initcall+0xfa/0x5ca init/main.c:887
do_init_module+0x204/0x5f6 kernel/module.c:3460
load_module+0x66b2/0x8570 kernel/module.c:3808
__do_sys_finit_module+0x238/0x2a0 kernel/module.c:3902
do_syscall_64+0x147/0x600 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462e99
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f072254ec58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99
RDX: 0000000000000000 RSI: 0000000020000280 RDI: 0000000000000003
RBP: 00007f072254ec70 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f072254f6bc
R13: 00000000004bcefa R14: 00000000006f6fb0 R15: 0000000000000004
Modules linked in: br_netfilter(+) dvb_usb_dibusb_mc_common dib3000mc dibx000_common dvb_usb_dibusb_common dvb_usb_dw2102 dvb_usb classmate_laptop palmas_regulator cn videobuf2_v4l2 v4l2_common snd_soc_bd28623 mptbase snd_usb_usx2y snd_usbmidi_lib snd_rawmidi wmi libnvdimm lockd sunrpc grace rc_kworld_pc150u rc_core rtc_da9063 sha1_ssse3 i2c_cros_ec_tunnel adxl34x_spi adxl34x nfnetlink lib80211 i5500_temp dvb_as102 dvb_core videobuf2_common videodev media videobuf2_vmalloc videobuf2_memops udc_core lnbp22 leds_lp3952 hid_roccat_ryos s1d13xxxfb mtd vport_geneve openvswitch nf_conncount nf_nat_ipv6 nsh geneve udp_tunnel ip6_udp_tunnel snd_soc_mt6351 sis_agp phylink snd_soc_adau1761_spi snd_soc_adau1761 snd_soc_adau17x1 snd_soc_core snd_pcm_dmaengine ac97_bus snd_compress snd_soc_adau_utils snd_soc_sigmadsp_regmap snd_soc_sigmadsp raid_class hid_roccat_konepure hid_roccat_common hid_roccat c2port_duramar2150 core mdio_bcm_unimac iptable_security iptable_raw iptable_mangle
iptable_nat nf_nat_ipv4 nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim devlink vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel joydev mousedev ide_pci_generic piix aesni_intel aes_x86_64 ide_core crypto_simd atkbd cryptd glue_helper serio_raw ata_generic pata_acpi i2c_piix4 floppy sch_fq_codel ip_tables x_tables ipv6 [last unloaded: lm73]
Dumping ftrace buffer:
(ftrace buffer empty)
---[ end trace 770020de38961fd0 ]---
A new dir entry can be created in get_subdir and its 'header->parent' is
set to NULL. Only after insert_header success, it will be set to 'dir',
otherwise 'header->parent' is set to NULL and drop_sysctl_table is called.
However in err handling path of get_subdir, drop_sysctl_table also be
called on 'new->header' regardless its value of parent pointer. Then
put_links is called, which triggers NULL-ptr deref when access member of
header->parent.
In fact we have multiple error paths which call drop_sysctl_table() there;
upon failure of insert_links() we also call drop_sysctl_table(). And even
in the successful case of __register_sysctl_table() we still always call
drop_sysctl_table(). This patch fixes it.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 0e47c99d7fe25 ("sysctl: Replace root_list with links between sysctl_table_sets")
Signed-off-by: YueHaibing <[email protected]>
Reported-by: Hulk Robot <[email protected]>
Acked-by: Luis Chamberlain <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Eric W. Biederman <[email protected]>
Cc: <[email protected]> [3.4+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
static void nhmldump_send_header(GF_NHMLDumpCtx *ctx)
{
GF_FilterPacket *dst_pck;
char nhml[1024];
u32 size;
u8 *output;
const GF_PropertyValue *p;
ctx->szRootName = "NHNTStream";
if (ctx->dims) {
ctx->szRootName = "DIMSStream";
}
if (!ctx->filep) {
sprintf(nhml, "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>\n");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
/*write header*/
sprintf(nhml, "<%s version=\"1.0\" ", ctx->szRootName);
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
NHML_PRINT_UINT(GF_PROP_PID_ID, NULL, "trackID")
NHML_PRINT_UINT(GF_PROP_PID_TIMESCALE, NULL, "timeScale")
p = gf_filter_pid_get_property(ctx->ipid, GF_PROP_PID_IN_IOD);
if (p && p->value.boolean) {
sprintf(nhml, "inRootOD=\"yes\" ");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
if (ctx->oti && (ctx->oti<GF_CODECID_LAST_MPEG4_MAPPING)) {
sprintf(nhml, "streamType=\"%d\" objectTypeIndication=\"%d\" ", ctx->streamtype, ctx->oti);
gf_bs_write_data(ctx->bs_w, nhml, (u32)strlen(nhml));
} else {
p = gf_filter_pid_get_property(ctx->ipid, GF_PROP_PID_SUBTYPE);
if (p) {
sprintf(nhml, "%s=\"%s\" ", "mediaType", gf_4cc_to_str(p->value.uint));
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
NHML_PRINT_4CC(GF_PROP_PID_ISOM_SUBTYPE, "mediaSubType", "mediaSubType")
} else {
NHML_PRINT_4CC(GF_PROP_PID_CODECID, NULL, "codecID")
}
}
if (ctx->w && ctx->h) {
//compatibility with old arch, we might want to remove this
switch (ctx->streamtype) {
case GF_STREAM_VISUAL:
case GF_STREAM_SCENE:
sprintf(nhml, "width=\"%d\" height=\"%d\" ", ctx->w, ctx->h);
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
break;
default:
break;
}
}
else if (ctx->sr && ctx->chan) {
sprintf(nhml, "sampleRate=\"%d\" numChannels=\"%d\" ", ctx->sr, ctx->chan);
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
sprintf(nhml, "sampleRate=\"%d\" numChannels=\"%d\" ", ctx->sr, ctx->chan);
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
p = gf_filter_pid_get_property(ctx->ipid, GF_PROP_PID_AUDIO_FORMAT);
if (p) {
sprintf(nhml, "bitsPerSample=\"%d\" ", gf_audio_fmt_bit_depth(p->value.uint));
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
}
NHML_PRINT_4CC(0, "codec_vendor", "codecVendor")
NHML_PRINT_UINT(0, "codec_version", "codecVersion")
NHML_PRINT_UINT(0, "codec_revision", "codecRevision")
NHML_PRINT_STRING(0, "compressor_name", "compressorName")
NHML_PRINT_UINT(0, "temporal_quality", "temporalQuality")
NHML_PRINT_UINT(0, "spatial_quality", "spatialQuality")
NHML_PRINT_UINT(0, "hres", "horizontalResolution")
NHML_PRINT_UINT(0, "vres", "verticalResolution")
NHML_PRINT_UINT(GF_PROP_PID_BIT_DEPTH_Y, NULL, "bitDepth")
NHML_PRINT_STRING(0, "meta:xmlns", "xml_namespace")
NHML_PRINT_STRING(0, "meta:schemaloc", "xml_schema_location")
NHML_PRINT_STRING(0, "meta:mime", "mime_type")
NHML_PRINT_STRING(0, "meta:config", "config")
NHML_PRINT_STRING(0, "meta:aux_mimes", "aux_mime_type")
if (ctx->codecid == GF_CODECID_DIMS) {
if (gf_filter_pid_get_property_str(ctx->ipid, "meta:xmlns")==NULL) {
sprintf(nhml, "xmlns=\"http://www.3gpp.org/richmedia\" ");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
NHML_PRINT_UINT(0, "dims:profile", "profile")
NHML_PRINT_UINT(0, "dims:level", "level")
NHML_PRINT_UINT(0, "dims:pathComponents", "pathComponents")
p = gf_filter_pid_get_property_str(ctx->ipid, "dims:fullRequestHost");
if (p) {
sprintf(nhml, "useFullRequestHost=\"%s\" ", p->value.boolean ? "yes" : "no");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
p = gf_filter_pid_get_property_str(ctx->ipid, "dims:streamType");
if (p) {
sprintf(nhml, "stream_type=\"%s\" ", p->value.boolean ? "primary" : "secondary");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
p = gf_filter_pid_get_property_str(ctx->ipid, "dims:redundant");
if (p) {
sprintf(nhml, "contains_redundant=\"%s\" ", (p->value.uint==1) ? "main" : ((p->value.uint==1) ? "redundant" : "main+redundant") );
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
NHML_PRINT_UINT(0, "dims:scriptTypes", "scriptTypes")
}
//send DCD
if (ctx->opid_info) {
sprintf(nhml, "specificInfoFile=\"%s\" ", gf_file_basename(ctx->info_file) );
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
dst_pck = gf_filter_pck_new_shared(ctx->opid_info, ctx->dcfg, ctx->dcfg_size, NULL);
gf_filter_pck_set_framing(dst_pck, GF_TRUE, GF_TRUE);
gf_filter_pck_set_readonly(dst_pck);
gf_filter_pck_send(dst_pck);
}
NHML_PRINT_STRING(0, "meta:encoding", "encoding")
NHML_PRINT_STRING(0, "meta:contentEncoding", "content_encoding")
ctx->uncompress = GF_FALSE;
if (p) {
if (!strcmp(p->value.string, "deflate")) ctx->uncompress = GF_TRUE;
else {
GF_LOG(GF_LOG_ERROR, GF_LOG_AUTHOR, ("[NHMLMx] content_encoding %s not supported\n", p->value.string ));
}
}
if (ctx->opid_mdia) {
sprintf(nhml, "baseMediaFile=\"%s\" ", gf_file_basename(ctx->media_file) );
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
}
sprintf(nhml, ">\n");
gf_bs_write_data(ctx->bs_w, nhml, (u32) strlen(nhml));
gf_bs_get_content_no_truncate(ctx->bs_w, &ctx->nhml_buffer, &size, &ctx->nhml_buffer_size);
if (ctx->filep) {
gf_fwrite(ctx->nhml_buffer, size, ctx->filep);
return;
}
dst_pck = gf_filter_pck_new_alloc(ctx->opid_nhml, size, &output);
memcpy(output, ctx->nhml_buffer, size);
gf_filter_pck_set_framing(dst_pck, GF_TRUE, GF_FALSE);
gf_filter_pck_send(dst_pck);
}
| 0 |
[
"CWE-476"
] |
gpac
|
9eeac00b38348c664dfeae2525bba0cf1bc32349
| 192,488,898,754,254,960,000,000,000,000,000,000,000 | 155 |
fixed #1565
|
sd_markdown_new(
unsigned int extensions,
size_t max_nesting,
const struct sd_callbacks *callbacks,
void *opaque)
{
struct sd_markdown *md = NULL;
assert(max_nesting > 0 && callbacks);
md = malloc(sizeof(struct sd_markdown));
if (!md)
return NULL;
memcpy(&md->cb, callbacks, sizeof(struct sd_callbacks));
redcarpet_stack_init(&md->work_bufs[BUFFER_BLOCK], 4);
redcarpet_stack_init(&md->work_bufs[BUFFER_SPAN], 8);
memset(md->active_char, 0x0, 256);
if (md->cb.emphasis || md->cb.double_emphasis || md->cb.triple_emphasis) {
md->active_char['*'] = MD_CHAR_EMPHASIS;
md->active_char['_'] = MD_CHAR_EMPHASIS;
if (extensions & MKDEXT_STRIKETHROUGH)
md->active_char['~'] = MD_CHAR_EMPHASIS;
if (extensions & MKDEXT_HIGHLIGHT)
md->active_char['='] = MD_CHAR_EMPHASIS;
}
if (md->cb.codespan)
md->active_char['`'] = MD_CHAR_CODESPAN;
if (md->cb.linebreak)
md->active_char['\n'] = MD_CHAR_LINEBREAK;
if (md->cb.image || md->cb.link)
md->active_char['['] = MD_CHAR_LINK;
md->active_char['<'] = MD_CHAR_LANGLE;
md->active_char['\\'] = MD_CHAR_ESCAPE;
md->active_char['&'] = MD_CHAR_ENTITITY;
if (extensions & MKDEXT_AUTOLINK) {
md->active_char[':'] = MD_CHAR_AUTOLINK_URL;
md->active_char['@'] = MD_CHAR_AUTOLINK_EMAIL;
md->active_char['w'] = MD_CHAR_AUTOLINK_WWW;
}
if (extensions & MKDEXT_SUPERSCRIPT)
md->active_char['^'] = MD_CHAR_SUPERSCRIPT;
if (extensions & MKDEXT_QUOTE)
md->active_char['"'] = MD_CHAR_QUOTE;
/* Extension data */
md->ext_flags = extensions;
md->opaque = opaque;
md->max_nesting = max_nesting;
md->in_link_body = 0;
return md;
}
| 0 |
[] |
redcarpet
|
e5a10516d07114d582d13b9125b733008c61c242
| 307,592,278,170,251,260,000,000,000,000,000,000,000 | 63 |
Avoid rewinding previous inline when auto-linking
When a bit like "[email protected]" is processed, first the emphasis is
rendered, then the 1 is output verbatim. When the `@` is encountered,
Redcarpet tries to find the "local part" of the address and stops when
it encounters an invalid char (i.e. here the `!`).
The problem is that when it searches for the local part, Redcarpet
rewinds the characters but here, the emphasis is already rendered so
the previous HTML tag is rewinded as well and is not correctly closed.
|
void RSA_Private_Decoder::Decode(RSA_PrivateKey& key)
{
ReadHeader();
if (source_.GetError().What()) return;
// public
key.SetModulus(GetInteger(Integer().Ref()));
key.SetPublicExponent(GetInteger(Integer().Ref()));
// private
key.SetPrivateExponent(GetInteger(Integer().Ref()));
key.SetPrime1(GetInteger(Integer().Ref()));
key.SetPrime2(GetInteger(Integer().Ref()));
key.SetModPrime1PrivateExponent(GetInteger(Integer().Ref()));
key.SetModPrime2PrivateExponent(GetInteger(Integer().Ref()));
key.SetMultiplicativeInverseOfPrime2ModPrime1(GetInteger(Integer().Ref()));
}
| 0 |
[
"CWE-254"
] |
mysql-server
|
e7061f7e5a96c66cb2e0bf46bec7f6ff35801a69
| 16,644,339,753,683,346,000,000,000,000,000,000,000 | 16 |
Bug #22738607: YASSL FUNCTION X509_NAME_GET_INDEX_BY_NID IS NOT WORKING AS EXPECTED.
|
str_replace (char *str,
const char *chars_to_replace,
char replacement)
{
gboolean success;
int i;
success = FALSE;
for (i = 0; str[i] != '\0'; i++) {
if (strchr (chars_to_replace, str[i])) {
success = TRUE;
str[i] = replacement;
}
}
return success;
}
| 0 |
[] |
nautilus
|
ca2fd475297946f163c32dcea897f25da892b89d
| 248,973,553,462,723,820,000,000,000,000,000,000,000 | 17 |
Add nautilus_file_mark_desktop_file_trusted(), this now adds a #! line if
2009-02-24 Alexander Larsson <[email protected]>
* libnautilus-private/nautilus-file-operations.c:
* libnautilus-private/nautilus-file-operations.h:
Add nautilus_file_mark_desktop_file_trusted(), this now
adds a #! line if there is none as well as makes the file
executable.
* libnautilus-private/nautilus-mime-actions.c:
Use nautilus_file_mark_desktop_file_trusted() instead of
just setting the permissions.
svn path=/trunk/; revision=15006
|
tokenexec_continue(i_ctx_t *i_ctx_p, scanner_state * pstate, bool save)
{
os_ptr op = osp;
int code;
/* Since we might free pstate below, and we're dealing with
* gc memory referenced by the stack, we need to explicitly
* remove the reference to pstate from the stack, otherwise
* the garbager will fall over
*/
make_null(osp);
/* Note that gs_scan_token may change osp! */
pop(1);
again:
check_estack(1);
code = gs_scan_token(i_ctx_p, (ref *) (esp + 1), pstate);
op = osp;
switch (code) {
case 0:
if (r_is_proc(esp + 1)) { /* Treat procedure as a literal. */
push(1);
ref_assign(op, esp + 1);
code = 0;
break;
}
/* falls through */
case scan_BOS:
++esp;
code = o_push_estack;
break;
case scan_EOF: /* no tokens */
code = 0;
break;
case scan_Refill: /* need more data */
code = gs_scan_handle_refill(i_ctx_p, pstate, save,
ztokenexec_continue);
switch (code) {
case 0: /* state is not copied to the heap */
goto again;
case o_push_estack:
return code;
}
break; /* error */
case scan_Comment:
case scan_DSC_Comment:
return ztoken_handle_comment(i_ctx_p, pstate, esp + 1, code,
save, true, ztokenexec_continue);
default: /* error */
gs_scanner_error_object(i_ctx_p, pstate, &i_ctx_p->error_object);
break;
}
if (!save) { /* Deallocate the scanner state record. */
gs_free_object(((scanner_state_dynamic *)pstate)->mem, pstate, "token_continue");
}
return code;
}
| 0 |
[] |
ghostpdl
|
671fd59eb657743aa86fbc1895cb15872a317caa
| 47,033,312,241,768,610,000,000,000,000,000,000,000 | 55 |
Bug 698158: prevent trying to reloc a freed object
In the token reader, we pass the scanner state structure around as a
t_struct ref on the Postscript operand stack.
But we explicitly free the scanner state when we're done, which leaves a
dangling reference on the operand stack and, unless that reference gets
overwritten before the next garbager run, we can end up with the garbager
trying to deal with an already freed object - that can cause a crash, or
memory corruption.
|
class ColorTrafo *LineBitmapRequester::ColorTrafoOf(bool encoding,bool disabletorgb)
{
return m_pFrame->TablesOf()->ColorTrafoOf(m_pFrame,NULL,PixelTypeOf(),
encoding,disabletorgb);
}
| 0 |
[
"CWE-476"
] |
libjpeg
|
51c3241b6da39df30f016b63f43f31c4011222c7
| 333,280,466,734,244,820,000,000,000,000,000,000,000 | 5 |
Fixed a NULL-pointer access in the line-based reconstruction process
in case no valid scan was found and no data is present.
|
static inline void layout_symtab(struct module *mod, struct load_info *info)
{
}
| 0 |
[
"CWE-362",
"CWE-347"
] |
linux
|
0c18f29aae7ce3dadd26d8ee3505d07cc982df75
| 154,958,210,195,473,430,000,000,000,000,000,000,000 | 3 |
module: limit enabling module.sig_enforce
Irrespective as to whether CONFIG_MODULE_SIG is configured, specifying
"module.sig_enforce=1" on the boot command line sets "sig_enforce".
Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured.
This patch makes the presence of /sys/module/module/parameters/sig_enforce
dependent on CONFIG_MODULE_SIG=y.
Fixes: fda784e50aac ("module: export module signature enforcement status")
Reported-by: Nayna Jain <[email protected]>
Tested-by: Mimi Zohar <[email protected]>
Tested-by: Jessica Yu <[email protected]>
Signed-off-by: Mimi Zohar <[email protected]>
Signed-off-by: Jessica Yu <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
get_display_info_cb(gint fd, GIOCondition condition, gpointer user_data)
{
struct virtio_gpu_resp_display_info dpy_info = { {} };
VuGpu *vg = user_data;
struct virtio_gpu_ctrl_command *cmd = QTAILQ_LAST(&vg->fenceq);
g_debug("disp info cb");
assert(cmd->cmd_hdr.type == VIRTIO_GPU_CMD_GET_DISPLAY_INFO);
if (!vg_recv_msg(vg, VHOST_USER_GPU_GET_DISPLAY_INFO,
sizeof(dpy_info), &dpy_info)) {
return G_SOURCE_CONTINUE;
}
QTAILQ_REMOVE(&vg->fenceq, cmd, next);
vg_ctrl_response(vg, cmd, &dpy_info.hdr, sizeof(dpy_info));
vg->wait_in = 0;
vg_handle_ctrl(&vg->dev.parent, 0);
return G_SOURCE_REMOVE;
}
| 0 |
[] |
qemu
|
86dd8fac2acc366930a5dc08d3fb1b1e816f4e1e
| 184,018,983,040,111,000,000,000,000,000,000,000,000 | 21 |
vhost-user-gpu: fix resource leak in 'vg_resource_create_2d' (CVE-2021-3544)
Call 'vugbm_buffer_destroy' in error path to avoid resource leak.
Fixes: CVE-2021-3544
Reported-by: Li Qiang <[email protected]>
Reviewed-by: Prasad J Pandit <[email protected]>
Signed-off-by: Li Qiang <[email protected]>
Reviewed-by: Marc-André Lureau <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Gerd Hoffmann <[email protected]>
|
static AddrRange addrrange_make(Int128 start, Int128 size)
{
return (AddrRange) { start, size };
}
| 0 |
[
"CWE-476"
] |
unicorn
|
3d3deac5e6d38602b689c4fef5dac004f07a2e63
| 257,291,178,538,943,650,000,000,000,000,000,000,000 | 4 |
Fix crash when mapping a big memory and calling uc_close
|
static s32 brcmf_inform_bss(struct brcmf_cfg80211_info *cfg)
{
struct wiphy *wiphy = cfg_to_wiphy(cfg);
struct brcmf_scan_results *bss_list;
struct brcmf_bss_info_le *bi = NULL; /* must be initialized */
s32 err = 0;
int i;
bss_list = (struct brcmf_scan_results *)cfg->escan_info.escan_buf;
if (bss_list->count != 0 &&
bss_list->version != BRCMF_BSS_INFO_VERSION) {
bphy_err(wiphy, "Version %d != WL_BSS_INFO_VERSION\n",
bss_list->version);
return -EOPNOTSUPP;
}
brcmf_dbg(SCAN, "scanned AP count (%d)\n", bss_list->count);
for (i = 0; i < bss_list->count; i++) {
bi = next_bss_le(bss_list, bi);
err = brcmf_inform_single_bss(cfg, bi);
if (err)
break;
}
return err;
}
| 0 |
[
"CWE-787"
] |
linux
|
1b5e2423164b3670e8bc9174e4762d297990deff
| 317,022,420,001,423,480,000,000,000,000,000,000,000 | 24 |
brcmfmac: assure SSID length from firmware is limited
The SSID length as received from firmware should not exceed
IEEE80211_MAX_SSID_LEN as that would result in heap overflow.
Reviewed-by: Hante Meuleman <[email protected]>
Reviewed-by: Pieter-Paul Giesberts <[email protected]>
Reviewed-by: Franky Lin <[email protected]>
Signed-off-by: Arend van Spriel <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
|
frag6_print(netdissect_options *ndo, register const u_char *bp, register const u_char *bp2)
{
register const struct ip6_frag *dp;
register const struct ip6_hdr *ip6;
dp = (const struct ip6_frag *)bp;
ip6 = (const struct ip6_hdr *)bp2;
ND_TCHECK(*dp);
if (ndo->ndo_vflag) {
ND_PRINT((ndo, "frag (0x%08x:%d|%ld)",
EXTRACT_32BITS(&dp->ip6f_ident),
EXTRACT_16BITS(&dp->ip6f_offlg) & IP6F_OFF_MASK,
sizeof(struct ip6_hdr) + EXTRACT_16BITS(&ip6->ip6_plen) -
(long)(bp - bp2) - sizeof(struct ip6_frag)));
} else {
ND_PRINT((ndo, "frag (%d|%ld)",
EXTRACT_16BITS(&dp->ip6f_offlg) & IP6F_OFF_MASK,
sizeof(struct ip6_hdr) + EXTRACT_16BITS(&ip6->ip6_plen) -
(long)(bp - bp2) - sizeof(struct ip6_frag)));
}
/* it is meaningless to decode non-first fragment */
if ((EXTRACT_16BITS(&dp->ip6f_offlg) & IP6F_OFF_MASK) != 0)
return -1;
else
{
ND_PRINT((ndo, " "));
return sizeof(struct ip6_frag);
}
trunc:
ND_PRINT((ndo, "[|frag]"));
return -1;
}
| 0 |
[
"CWE-125",
"CWE-787"
] |
tcpdump
|
2d669862df7cd17f539129049f6fb70d17174125
| 91,099,372,855,770,860,000,000,000,000,000,000,000 | 35 |
CVE-2017-13031/Check for the presence of the entire IPv6 fragment header.
This fixes a buffer over-read discovered by Bhargava Shastry,
SecT/TU Berlin.
Add a test using the capture file supplied by the reporter(s), modified
so the capture file won't be rejected as an invalid capture.
Clean up some whitespace in tests/TESTLIST while we're at it.
|
static inline int ok_jpg_load_next_bits(ok_jpg_decoder *decoder, int num_bits) {
ok_jpg_load_bits(decoder, num_bits);
int mask = (1 << num_bits) - 1;
decoder->input_buffer_bit_count -= num_bits;
return (int)(decoder->input_buffer_bits >> decoder->input_buffer_bit_count) & mask;
}
| 0 |
[
"CWE-787"
] |
ok-file-formats
|
a9cc1711dd4ed6a215038f1c5c03af0ef52c3211
| 112,717,507,630,813,940,000,000,000,000,000,000,000 | 6 |
ok_jpg: Fix invalid DHT (#11)
|
void tty_flip_buffer_push(struct tty_struct *tty)
{
unsigned long flags;
spin_lock_irqsave(&tty->buf.lock, flags);
if (tty->buf.tail != NULL)
tty->buf.tail->commit = tty->buf.tail->used;
spin_unlock_irqrestore(&tty->buf.lock, flags);
if (tty->low_latency)
flush_to_ldisc(&tty->buf.work);
else
schedule_work(&tty->buf.work);
}
| 0 |
[] |
linux
|
c56a00a165712fd73081f40044b1e64407bb1875
| 248,086,565,363,781,060,000,000,000,000,000,000,000 | 13 |
tty: hold lock across tty buffer finding and buffer filling
tty_buffer_request_room is well protected, but while after it returns,
it releases the port->lock. tty->buf.tail might be modified
by either irq handler or other threads. The patch adds more protection
by holding the lock across tty buffer finding and buffer filling.
Signed-off-by: Alek Du <[email protected]>
Signed-off-by: Xiaobing Tu <[email protected]>
Cc: Jiri Slaby <[email protected]>
Cc: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
void nntp_hash_destructor(int type, void *obj, intptr_t data)
{
nntp_data_free(obj);
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
neomutt
|
6296f7153f0c9d5e5cd3aaf08f9731e56621bdd3
| 304,996,051,661,254,730,000,000,000,000,000,000,000 | 4 |
Set length modifiers for group and desc
nntp_add_group parses a line controlled by the connected nntp server.
Restrict the maximum lengths read into the stack buffers group, and
desc.
|
static int sisusb_send_bulk_msg(struct sisusb_usb_data *sisusb, int ep, int len,
char *kernbuffer, const char __user *userbuffer, int index,
ssize_t *bytes_written, unsigned int tflags, int async)
{
int result = 0, retry, count = len;
int passsize, thispass, transferred_len = 0;
int fromuser = (userbuffer != NULL) ? 1 : 0;
int fromkern = (kernbuffer != NULL) ? 1 : 0;
unsigned int pipe;
char *buffer;
(*bytes_written) = 0;
/* Sanity check */
if (!sisusb || !sisusb->present || !sisusb->sisusb_dev)
return -ENODEV;
/* If we copy data from kernel or userspace, force the
* allocation of a buffer/urb. If we have the data in
* the transfer buffer[index] already, reuse the buffer/URB
* if the length is > buffer size. (So, transmitting
* large data amounts directly from the transfer buffer
* treats the buffer as a ring buffer. However, we need
* to sync in this case.)
*/
if (fromuser || fromkern)
index = -1;
else if (len > sisusb->obufsize)
async = 0;
pipe = usb_sndbulkpipe(sisusb->sisusb_dev, ep);
do {
passsize = thispass = (sisusb->obufsize < count) ?
sisusb->obufsize : count;
if (index < 0)
index = sisusb_get_free_outbuf(sisusb);
if (index < 0)
return -EIO;
buffer = sisusb->obuf[index];
if (fromuser) {
if (copy_from_user(buffer, userbuffer, passsize))
return -EFAULT;
userbuffer += passsize;
} else if (fromkern) {
memcpy(buffer, kernbuffer, passsize);
kernbuffer += passsize;
}
retry = 5;
while (thispass) {
if (!sisusb->sisusb_dev)
return -ENODEV;
result = sisusb_bulkout_msg(sisusb, index, pipe,
buffer, thispass, &transferred_len,
async ? 0 : 5 * HZ, tflags);
if (result == -ETIMEDOUT) {
/* Will not happen if async */
if (!retry--)
return -ETIME;
continue;
}
if ((result == 0) && !async && transferred_len) {
thispass -= transferred_len;
buffer += transferred_len;
} else
break;
}
if (result)
return result;
(*bytes_written) += passsize;
count -= passsize;
/* Force new allocation in next iteration */
if (fromuser || fromkern)
index = -1;
} while (count > 0);
if (async) {
#ifdef SISUSB_DONTSYNC
(*bytes_written) = len;
/* Some URBs/buffers might be busy */
#else
sisusb_wait_all_out_complete(sisusb);
(*bytes_written) = transferred_len;
/* All URBs and all buffers are available */
#endif
}
return ((*bytes_written) == len) ? 0 : -EIO;
}
| 0 |
[
"CWE-476"
] |
linux
|
9a5729f68d3a82786aea110b1bfe610be318f80a
| 44,773,397,759,197,225,000,000,000,000,000,000,000 | 111 |
USB: sisusbvga: fix oops in error path of sisusb_probe
The pointer used to log a failure of usb_register_dev() must
be set before the error is logged.
v2: fix that minor is not available before registration
Signed-off-by: oliver Neukum <[email protected]>
Reported-by: [email protected]
Fixes: 7b5cd5fefbe02 ("USB: SisUSB2VGA: Convert printk to dev_* macros")
Cc: stable <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
static int ntop_update_host_traffic_policy(lua_State* vm) {
NetworkInterfaceView *ntop_interface = getCurrentInterface(vm);
char *host_ip;
u_int16_t vlan_id = 0;
char buf[64];
Host *h;
ntop->getTrace()->traceEvent(TRACE_INFO, "%s() called", __FUNCTION__);
if(ntop_lua_check(vm, __FUNCTION__, 1, LUA_TSTRING)) return(CONST_LUA_ERROR);
get_host_vlan_info((char*)lua_tostring(vm, 1), &host_ip, &vlan_id, buf, sizeof(buf));
/* Optional VLAN id */
if(lua_type(vm, 2) == LUA_TNUMBER) vlan_id = (u_int16_t)lua_tonumber(vm, 2);
if((!ntop_interface)
|| ((h = ntop_interface->findHostsByIP(get_allowed_nets(vm), host_ip, vlan_id)) == NULL))
return(CONST_LUA_ERROR);
h->updateHostTrafficPolicy(host_ip);
return(CONST_LUA_OK);
}
| 0 |
[
"CWE-254"
] |
ntopng
|
2e0620be3410f5e22c9aa47e261bc5a12be692c6
| 86,654,112,734,929,190,000,000,000,000,000,000,000 | 22 |
Added security fix to avoid escalating privileges to non-privileged users
Many thanks to Dolev Farhi for reporting it
|
extend_input_line()
{
if (gp_input_line_len == 0) {
/* first time */
gp_input_line = gp_alloc(MAX_LINE_LEN, "gp_input_line");
gp_input_line_len = MAX_LINE_LEN;
gp_input_line[0] = NUL;
#ifdef OS2_IPC
sprintf( mouseSharedMemName, "\\SHAREMEM\\GP%i_Mouse_Input", getpid() );
if (DosAllocSharedMem((PVOID) & input_from_PM_Terminal,
mouseSharedMemName, MAX_LINE_LEN, PAG_WRITE | PAG_COMMIT))
fputs("command.c: DosAllocSharedMem ERROR\n",stderr);
#endif /* OS2_IPC */
} else {
gp_input_line = gp_realloc(gp_input_line, gp_input_line_len + MAX_LINE_LEN,
"extend input line");
gp_input_line_len += MAX_LINE_LEN;
FPRINTF((stderr, "extending input line to %d chars\n",
gp_input_line_len));
}
}
| 0 |
[
"CWE-415"
] |
gnuplot
|
052cbd17c3cbbc602ee080b2617d32a8417d7563
| 50,983,162,692,680,620,000,000,000,000,000,000,000 | 23 |
successive failures of "set print <foo>" could cause double-free
Bug #2312
|
static u8 tcp_sacktag_one(struct sock *sk,
struct tcp_sacktag_state *state, u8 sacked,
u32 start_seq, u32 end_seq,
int dup_sack, int pcount,
const struct skb_mstamp *xmit_time)
{
struct tcp_sock *tp = tcp_sk(sk);
int fack_count = state->fack_count;
/* Account D-SACK for retransmitted packet. */
if (dup_sack && (sacked & TCPCB_RETRANS)) {
if (tp->undo_marker && tp->undo_retrans > 0 &&
after(end_seq, tp->undo_marker))
tp->undo_retrans--;
if (sacked & TCPCB_SACKED_ACKED)
state->reord = min(fack_count, state->reord);
}
/* Nothing to do; acked frame is about to be dropped (was ACKed). */
if (!after(end_seq, tp->snd_una))
return sacked;
if (!(sacked & TCPCB_SACKED_ACKED)) {
tcp_rack_advance(tp, xmit_time, sacked);
if (sacked & TCPCB_SACKED_RETRANS) {
/* If the segment is not tagged as lost,
* we do not clear RETRANS, believing
* that retransmission is still in flight.
*/
if (sacked & TCPCB_LOST) {
sacked &= ~(TCPCB_LOST|TCPCB_SACKED_RETRANS);
tp->lost_out -= pcount;
tp->retrans_out -= pcount;
}
} else {
if (!(sacked & TCPCB_RETRANS)) {
/* New sack for not retransmitted frame,
* which was in hole. It is reordering.
*/
if (before(start_seq,
tcp_highest_sack_seq(tp)))
state->reord = min(fack_count,
state->reord);
if (!after(end_seq, tp->high_seq))
state->flag |= FLAG_ORIG_SACK_ACKED;
if (state->first_sackt.v64 == 0)
state->first_sackt = *xmit_time;
state->last_sackt = *xmit_time;
}
if (sacked & TCPCB_LOST) {
sacked &= ~TCPCB_LOST;
tp->lost_out -= pcount;
}
}
sacked |= TCPCB_SACKED_ACKED;
state->flag |= FLAG_DATA_SACKED;
tp->sacked_out += pcount;
fack_count += pcount;
/* Lost marker hint past SACKed? Tweak RFC3517 cnt */
if (!tcp_is_fack(tp) && tp->lost_skb_hint &&
before(start_seq, TCP_SKB_CB(tp->lost_skb_hint)->seq))
tp->lost_cnt_hint += pcount;
if (fack_count > tp->fackets_out)
tp->fackets_out = fack_count;
}
/* D-SACK. We can detect redundant retransmission in S|R and plain R
* frames and clear it. undo_retrans is decreased above, L|R frames
* are accounted above as well.
*/
if (dup_sack && (sacked & TCPCB_SACKED_RETRANS)) {
sacked &= ~TCPCB_SACKED_RETRANS;
tp->retrans_out -= pcount;
}
return sacked;
}
| 0 |
[
"CWE-703",
"CWE-189"
] |
linux
|
8b8a321ff72c785ed5e8b4cf6eda20b35d427390
| 161,452,046,081,839,940,000,000,000,000,000,000,000 | 83 |
tcp: fix zero cwnd in tcp_cwnd_reduction
Patch 3759824da87b ("tcp: PRR uses CRB mode by default and SS mode
conditionally") introduced a bug that cwnd may become 0 when both
inflight and sndcnt are 0 (cwnd = inflight + sndcnt). This may lead
to a div-by-zero if the connection starts another cwnd reduction
phase by setting tp->prior_cwnd to the current cwnd (0) in
tcp_init_cwnd_reduction().
To prevent this we skip PRR operation when nothing is acked or
sacked. Then cwnd must be positive in all cases as long as ssthresh
is positive:
1) The proportional reduction mode
inflight > ssthresh > 0
2) The reduction bound mode
a) inflight == ssthresh > 0
b) inflight < ssthresh
sndcnt > 0 since newly_acked_sacked > 0 and inflight < ssthresh
Therefore in all cases inflight and sndcnt can not both be 0.
We check invalid tp->prior_cwnd to avoid potential div0 bugs.
In reality this bug is triggered only with a sequence of less common
events. For example, the connection is terminating an ECN-triggered
cwnd reduction with an inflight 0, then it receives reordered/old
ACKs or DSACKs from prior transmission (which acks nothing). Or the
connection is in fast recovery stage that marks everything lost,
but fails to retransmit due to local issues, then receives data
packets from other end which acks nothing.
Fixes: 3759824da87b ("tcp: PRR uses CRB mode by default and SS mode conditionally")
Reported-by: Oleksandr Natalenko <[email protected]>
Signed-off-by: Yuchung Cheng <[email protected]>
Signed-off-by: Neal Cardwell <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
shape_desc *bind_shape(char *name, node_t * np)
{
shape_desc *ptr, *rv = NULL;
const char *str;
str = safefile(agget(np, "shapefile"));
/* If shapefile is defined and not epsf, set shape = custom */
if (str && !streq(name, "epsf"))
name = "custom";
if (!streq(name, "custom")) {
for (ptr = Shapes; ptr->name; ptr++) {
if (streq(ptr->name, name)) {
rv = ptr;
break;
}
}
}
if (rv == NULL)
rv = user_shape(name);
return rv;
}
| 0 |
[
"CWE-120"
] |
graphviz
|
784411ca3655c80da0f6025ab20634b2a6ff696b
| 228,606,245,648,697,370,000,000,000,000,000,000,000 | 21 |
fix: out-of-bounds write on invalid label
When the label for a node cannot be parsed (due to it being malformed), it falls
back on the symbol name of the node itself. I.e. the default label the node
would have had if it had no label attribute at all. However, this is applied by
dynamically altering the node's label to "\N", a shortcut for the symbol name of
the node. All of this is fine, however if the hand written label itself is
shorter than the literal string "\N", not enough memory would have been
allocated to write "\N" into the label text.
Here we account for the possibility of error during label parsing, and assume
that the label text may need to be overwritten with "\N" after the fact. Fixes
issue #1700.
|
bool destroyed() const { return state_.destroyed_; }
| 0 |
[
"CWE-416"
] |
envoy
|
148de954ed3585d8b4298b424aa24916d0de6136
| 306,026,410,850,007,240,000,000,000,000,000,000,000 | 1 |
CVE-2021-43825
Response filter manager crash
Signed-off-by: Yan Avlasov <[email protected]>
|
int X509_print_ex(BIO *bp, X509 *x, unsigned long nmflags,
unsigned long cflag)
{
long l;
int ret = 0, i;
char *m = NULL, mlch = ' ';
int nmindent = 0;
ASN1_INTEGER *bs;
EVP_PKEY *pkey = NULL;
const char *neg;
if ((nmflags & XN_FLAG_SEP_MASK) == XN_FLAG_SEP_MULTILINE) {
mlch = '\n';
nmindent = 12;
}
if (nmflags == X509_FLAG_COMPAT)
nmindent = 16;
if (!(cflag & X509_FLAG_NO_HEADER)) {
if (BIO_write(bp, "Certificate:\n", 13) <= 0)
goto err;
if (BIO_write(bp, " Data:\n", 10) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_VERSION)) {
l = X509_get_version(x);
if (l >= 0 && l <= 2) {
if (BIO_printf(bp, "%8sVersion: %ld (0x%lx)\n", "", l + 1, (unsigned long)l) <= 0)
goto err;
} else {
if (BIO_printf(bp, "%8sVersion: Unknown (%ld)\n", "", l) <= 0)
goto err;
}
}
if (!(cflag & X509_FLAG_NO_SERIAL)) {
if (BIO_write(bp, " Serial Number:", 22) <= 0)
goto err;
bs = X509_get_serialNumber(x);
if (bs->length <= (int)sizeof(long)) {
ERR_set_mark();
l = ASN1_INTEGER_get(bs);
ERR_pop_to_mark();
} else {
l = -1;
}
if (l != -1) {
unsigned long ul;
if (bs->type == V_ASN1_NEG_INTEGER) {
ul = 0 - (unsigned long)l;
neg = "-";
} else {
ul = l;
neg = "";
}
if (BIO_printf(bp, " %s%lu (%s0x%lx)\n", neg, ul, neg, ul) <= 0)
goto err;
} else {
neg = (bs->type == V_ASN1_NEG_INTEGER) ? " (Negative)" : "";
if (BIO_printf(bp, "\n%12s%s", "", neg) <= 0)
goto err;
for (i = 0; i < bs->length; i++) {
if (BIO_printf(bp, "%02x%c", bs->data[i],
((i + 1 == bs->length) ? '\n' : ':')) <= 0)
goto err;
}
}
}
if (!(cflag & X509_FLAG_NO_SIGNAME)) {
const X509_ALGOR *tsig_alg = X509_get0_tbs_sigalg(x);
if (BIO_puts(bp, " ") <= 0)
goto err;
if (X509_signature_print(bp, tsig_alg, NULL) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_ISSUER)) {
if (BIO_printf(bp, " Issuer:%c", mlch) <= 0)
goto err;
if (X509_NAME_print_ex(bp, X509_get_issuer_name(x), nmindent, nmflags)
< 0)
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_VALIDITY)) {
if (BIO_write(bp, " Validity\n", 17) <= 0)
goto err;
if (BIO_write(bp, " Not Before: ", 24) <= 0)
goto err;
if (!ASN1_TIME_print(bp, X509_get0_notBefore(x)))
goto err;
if (BIO_write(bp, "\n Not After : ", 25) <= 0)
goto err;
if (!ASN1_TIME_print(bp, X509_get0_notAfter(x)))
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_SUBJECT)) {
if (BIO_printf(bp, " Subject:%c", mlch) <= 0)
goto err;
if (X509_NAME_print_ex
(bp, X509_get_subject_name(x), nmindent, nmflags) < 0)
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_PUBKEY)) {
X509_PUBKEY *xpkey = X509_get_X509_PUBKEY(x);
ASN1_OBJECT *xpoid;
X509_PUBKEY_get0_param(&xpoid, NULL, NULL, NULL, xpkey);
if (BIO_write(bp, " Subject Public Key Info:\n", 33) <= 0)
goto err;
if (BIO_printf(bp, "%12sPublic Key Algorithm: ", "") <= 0)
goto err;
if (i2a_ASN1_OBJECT(bp, xpoid) <= 0)
goto err;
if (BIO_puts(bp, "\n") <= 0)
goto err;
pkey = X509_get0_pubkey(x);
if (pkey == NULL) {
BIO_printf(bp, "%12sUnable to load Public Key\n", "");
ERR_print_errors(bp);
} else {
EVP_PKEY_print_public(bp, pkey, 16, NULL);
}
}
if (!(cflag & X509_FLAG_NO_IDS)) {
const ASN1_BIT_STRING *iuid, *suid;
X509_get0_uids(x, &iuid, &suid);
if (iuid != NULL) {
if (BIO_printf(bp, "%8sIssuer Unique ID: ", "") <= 0)
goto err;
if (!X509_signature_dump(bp, iuid, 12))
goto err;
}
if (suid != NULL) {
if (BIO_printf(bp, "%8sSubject Unique ID: ", "") <= 0)
goto err;
if (!X509_signature_dump(bp, suid, 12))
goto err;
}
}
if (!(cflag & X509_FLAG_NO_EXTENSIONS))
X509V3_extensions_print(bp, "X509v3 extensions",
X509_get0_extensions(x), cflag, 8);
if (!(cflag & X509_FLAG_NO_SIGDUMP)) {
const X509_ALGOR *sig_alg;
const ASN1_BIT_STRING *sig;
X509_get0_signature(&sig, &sig_alg, x);
if (X509_signature_print(bp, sig_alg, sig) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_AUX)) {
if (!X509_aux_print(bp, x, 0))
goto err;
}
ret = 1;
err:
OPENSSL_free(m);
return ret;
}
| 0 |
[
"CWE-125"
] |
openssl
|
d9d838ddc0ed083fb4c26dd067e71aad7c65ad16
| 220,694,221,776,536,500,000,000,000,000,000,000,000 | 173 |
Fix a read buffer overrun in X509_aux_print().
The ASN1_STRING_get0_data(3) manual explitely cautions the reader
that the data is not necessarily NUL-terminated, and the function
X509_alias_set1(3) does not sanitize the data passed into it in any
way either, so we must assume the return value from X509_alias_get0(3)
is merely a byte array and not necessarily a string in the sense
of the C language.
I found this bug while writing manual pages for X509_print_ex(3)
and related functions. Theo Buehler <[email protected]> checked my
patch to fix the same bug in LibreSSL, see
http://cvsweb.openbsd.org/src/lib/libcrypto/asn1/t_x509a.c#rev1.9
As an aside, note that the function still produces incomplete and
misleading results when the data contains a NUL byte in the middle
and that error handling is consistently absent throughout, even
though the function provides an "int" return value obviously intended
to be 1 for success and 0 for failure, and even though this function
is called by another function that also wants to return 1 for success
and 0 for failure and even does so in many of its code paths, though
not in others. But let's stay focussed. Many things would be nice
to have in the wide wild world, but a buffer overflow must not be
allowed to remain in our backyard.
CLA: trivial
Reviewed-by: Tim Hudson <[email protected]>
Reviewed-by: Paul Dale <[email protected]>
Reviewed-by: Tomas Mraz <[email protected]>
(Merged from https://github.com/openssl/openssl/pull/16108)
(cherry picked from commit c5dc9ab965f2a69bca964c709e648158f3e4cd67)
void Compute(OpKernelContext* ctx) override {
const Tensor* inputs;
const Tensor* labels_indices;
const Tensor* labels_values;
const Tensor* seq_len;
OP_REQUIRES_OK(ctx, ctx->input("inputs", &inputs));
OP_REQUIRES_OK(ctx, ctx->input("labels_indices", &labels_indices));
OP_REQUIRES_OK(ctx, ctx->input("labels_values", &labels_values));
OP_REQUIRES_OK(ctx, ctx->input("sequence_length", &seq_len));
OP_REQUIRES(ctx, inputs->shape().dims() == 3,
errors::InvalidArgument("inputs is not a 3-Tensor"));
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(seq_len->shape()),
errors::InvalidArgument("sequence_length is not a vector"));
OP_REQUIRES(ctx, TensorShapeUtils::IsMatrix(labels_indices->shape()),
errors::InvalidArgument("labels_indices is not a matrix"));
OP_REQUIRES(ctx, labels_indices->dim_size(1) > 1,
errors::InvalidArgument(
"labels_indices second dimension must be >= 1. Received ",
labels_indices->dim_size(1)));
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(labels_values->shape()),
errors::InvalidArgument("labels_values is not a vector"));
const TensorShape& inputs_shape = inputs->shape();
const int64 max_time = inputs_shape.dim_size(0);
OP_REQUIRES(ctx, max_time != 0,
errors::InvalidArgument(
"Max time or first dimension of input cannot be 0."));
const int64 batch_size = inputs_shape.dim_size(1);
const int64 num_classes_raw = inputs_shape.dim_size(2);
OP_REQUIRES(
ctx, FastBoundsCheck(num_classes_raw, std::numeric_limits<int>::max()),
errors::InvalidArgument("num_classes cannot exceed max int"));
const int num_classes = static_cast<const int>(num_classes_raw);
OP_REQUIRES(
ctx, batch_size == seq_len->dim_size(0),
errors::InvalidArgument("len(sequence_length) != batch_size. ",
"len(sequence_length): ", seq_len->dim_size(0),
" batch_size: ", batch_size));
auto seq_len_t = seq_len->vec<int32>();
OP_REQUIRES(ctx, labels_indices->dim_size(0) == labels_values->dim_size(0),
errors::InvalidArgument(
"labels_indices and labels_values must contain the "
"same number of rows, but saw shapes: ",
labels_indices->shape().DebugString(), " vs. ",
labels_values->shape().DebugString()));
OP_REQUIRES(ctx, batch_size != 0,
errors::InvalidArgument("batch_size must not be 0"));
// Figure out the maximum label length to use as sparse tensor dimension.
auto labels_indices_t = labels_indices->matrix<int64>();
int64 max_label_len = 0;
for (int i = 0; i < labels_indices->dim_size(0); i++) {
max_label_len = std::max(max_label_len, labels_indices_t(i, 1) + 1);
}
TensorShape labels_shape({batch_size, max_label_len});
std::vector<int64> order{0, 1};
sparse::SparseTensor labels_sp;
OP_REQUIRES_OK(
ctx, sparse::SparseTensor::Create(*labels_indices, *labels_values,
labels_shape, order, &labels_sp));
Status labels_sp_valid = labels_sp.IndicesValid();
OP_REQUIRES(ctx, labels_sp_valid.ok(),
errors::InvalidArgument("label SparseTensor is not valid: ",
labels_sp_valid.error_message()));
typename ctc::CTCLossCalculator<T>::LabelSequences labels_t(batch_size);
for (const auto& g : labels_sp.group({0})) { // iterate by batch
const int64 batch_indices = g.group()[0];
OP_REQUIRES(ctx, FastBoundsCheck(batch_indices, batch_size),
errors::InvalidArgument("labels batch index must be between ",
0, " and ", batch_size,
" but saw: ", batch_indices));
auto values = g.values<int32>();
std::vector<int>* b_values = &labels_t[batch_indices];
b_values->resize(values.size());
for (int i = 0; i < values.size(); ++i) (*b_values)[i] = values(i);
}
OP_REQUIRES(ctx, static_cast<size_t>(batch_size) == labels_t.size(),
errors::InvalidArgument("len(labels) != batch_size. ",
"len(labels): ", labels_t.size(),
" batch_size: ", batch_size));
for (int64 b = 0; b < batch_size; ++b) {
OP_REQUIRES(
ctx, seq_len_t(b) <= max_time,
errors::InvalidArgument("sequence_length(", b, ") <= ", max_time));
}
Tensor* loss = nullptr;
OP_REQUIRES_OK(ctx, ctx->allocate_output("loss", seq_len->shape(), &loss));
auto loss_t = loss->vec<T>();
Tensor* gradient;
OP_REQUIRES_OK(ctx,
ctx->allocate_output("gradient", inputs_shape, &gradient));
auto gradient_t = gradient->tensor<T, 3>();
auto inputs_t = inputs->tensor<T, 3>();
std::vector<OutputMap> gradient_list_t;
std::vector<InputMap> input_list_t;
for (std::size_t t = 0; t < max_time; ++t) {
input_list_t.emplace_back(inputs_t.data() + t * batch_size * num_classes,
batch_size, num_classes);
gradient_list_t.emplace_back(
gradient_t.data() + t * batch_size * num_classes, batch_size,
num_classes);
}
gradient_t.setZero();
// Assumption: the blank index is num_classes - 1
ctc::CTCLossCalculator<T> ctc_loss_calculator(num_classes - 1, 0);
DeviceBase::CpuWorkerThreads workers =
*ctx->device()->tensorflow_cpu_worker_threads();
OP_REQUIRES_OK(ctx, ctc_loss_calculator.CalculateLoss(
seq_len_t, labels_t, input_list_t,
preprocess_collapse_repeated_, ctc_merge_repeated_,
ignore_longer_outputs_than_inputs_, &loss_t,
&gradient_list_t, &workers));
}
target: 0 | cwe: CWE-665 | project: tensorflow | commit: 14607c0707040d775e06b6817325640cb4b5864c | hash: 181,245,892,044,849,270,000,000,000,000,000,000,000 | size: 128 | message:
Fix nullptr deref in `tf.raw_ops.CTCLoss`.
PiperOrigin-RevId: 372266334
Change-Id: Ic52c3e9f13a38f54482d670907eda1688450862b
txt_shift_text_currentpoint(textw_text_enum_t *penum, gs_point *wpt)
{
return gs_moveto_aux(penum->pgs, gx_current_path(penum->pgs),
fixed2float(penum->origin.x) + wpt->x,
fixed2float(penum->origin.y) + wpt->y);
}
target: 0 | cwe: CWE-476 | project: ghostpdl | commit: 407c98a38c3a6ac1681144ed45cc2f4fc374c91f | hash: 261,194,264,268,542,580,000,000,000,000,000,000,000 | size: 6 | message:
txtwrite - guard against using GS_NO_GLYPH to retrieve Unicode values
Bug 701822 "Segmentation fault at psi/iname.c:296 in names_index_ref"
Avoid using a glyph with the value GS_NO_GLYPH to retrieve a glyph
name or Unicode code point from the glyph ID, as this is not a valid
ID.
static int read_id_table(long long *table_start)
{
/*
* Note on overflow limits:
* Size of SBlk.s.no_ids is 2^16 (unsigned short)
* Max size of bytes is 2^16*4 or 256K
* Max indexes is (2^16*4)/8K or 32
* Max length is ((2^16*4)/8K)*8 or 256
*/
int res, i;
int bytes = SQUASHFS_ID_BYTES(sBlk.s.no_ids);
int indexes = SQUASHFS_ID_BLOCKS(sBlk.s.no_ids);
int length = SQUASHFS_ID_BLOCK_BYTES(sBlk.s.no_ids);
long long *id_index_table;
/*
* The size of the index table (length bytes) should match the
* table start and end points
*/
if(length != (*table_start - sBlk.s.id_table_start)) {
ERROR("read_id_table: Bad id count in super block\n");
return FALSE;
}
TRACE("read_id_table: no_ids %d\n", sBlk.s.no_ids);
id_index_table = alloc_index_table(indexes);
id_table = malloc(bytes);
if(id_table == NULL) {
ERROR("read_id_table: failed to allocate id table\n");
return FALSE;
}
res = read_fs_bytes(fd, sBlk.s.id_table_start, length, id_index_table);
if(res == FALSE) {
ERROR("read_id_table: failed to read id index table\n");
return FALSE;
}
SQUASHFS_INSWAP_ID_BLOCKS(id_index_table, indexes);
/*
* id_index_table[0] stores the start of the compressed id blocks.
* This by definition is also the end of the previous filesystem
* table - this may be the exports table if it is present, or the
* fragments table if it isn't.
*/
*table_start = id_index_table[0];
for(i = 0; i < indexes; i++) {
int expected = (i + 1) != indexes ? SQUASHFS_METADATA_SIZE :
bytes & (SQUASHFS_METADATA_SIZE - 1);
res = read_block(fd, id_index_table[i], NULL, expected,
((char *) id_table) + i * SQUASHFS_METADATA_SIZE);
if(res == FALSE) {
ERROR("read_id_table: failed to read id table block"
"\n");
return FALSE;
}
}
SQUASHFS_INSWAP_INTS(id_table, sBlk.s.no_ids);
return TRUE;
}
target: 0 | cwe: CWE-22 | project: squashfs-tools | commit: 79b5a555058eef4e1e7ff220c344d39f8cd09646 | hash: 124,964,703,413,217,550,000,000,000,000,000,000,000 | size: 64 | message:
Unsquashfs: fix write outside destination directory exploit
An issue on Github (https://github.com/plougher/squashfs-tools/issues/72)
shows how some specially crafted Squashfs filesystems containing
invalid file names (with '/' and ..) can cause Unsquashfs to write
files outside of the destination directory.
This commit fixes this exploit by checking all names for
validity.
In doing so I have also added checks for '.' and for names that
are shorter than they should be (names in the file system should
not have '\0' terminators).
Signed-off-by: Phillip Lougher <[email protected]>
int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp,
const struct sk_buff *skb, unsigned int header_len)
{
struct scatterlist sg;
const struct tcphdr *tp = tcp_hdr(skb);
struct hash_desc *desc = &hp->md5_desc;
unsigned int i;
const unsigned int head_data_len = skb_headlen(skb) > header_len ?
skb_headlen(skb) - header_len : 0;
const struct skb_shared_info *shi = skb_shinfo(skb);
struct sk_buff *frag_iter;
sg_init_table(&sg, 1);
sg_set_buf(&sg, ((u8 *) tp) + header_len, head_data_len);
if (crypto_hash_update(desc, &sg, head_data_len))
return 1;
for (i = 0; i < shi->nr_frags; ++i) {
const struct skb_frag_struct *f = &shi->frags[i];
unsigned int offset = f->page_offset;
struct page *page = skb_frag_page(f) + (offset >> PAGE_SHIFT);
sg_set_page(&sg, page, skb_frag_size(f),
offset_in_page(offset));
if (crypto_hash_update(desc, &sg, skb_frag_size(f)))
return 1;
}
skb_walk_frags(skb, frag_iter)
if (tcp_md5_hash_skb_data(hp, frag_iter, 0))
return 1;
return 0;
}
target: 0 | cwe: none | project: linux | commit: 7bced397510ab569d31de4c70b39e13355046387 | hash: 321,863,324,440,564,500,000,000,000,000,000,000,000 | size: 35 | message:
net_dma: simple removal
Per commit "77873803363c net_dma: mark broken" net_dma is no longer used
and there is no plan to fix it.
This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards.
Reverting the remainder of the net_dma induced changes is deferred to
subsequent patches.
Marked for stable due to Roman's report of a memory leak in
dma_pin_iovec_pages():
https://lkml.org/lkml/2014/9/3/177
Cc: Dave Jiang <[email protected]>
Cc: Vinod Koul <[email protected]>
Cc: David Whipple <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: <[email protected]>
Reported-by: Roman Gushchin <[email protected]>
Acked-by: David S. Miller <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
static inline int __pskb_trim(struct sk_buff *skb, unsigned int len)
{
if (skb->data_len)
return ___pskb_trim(skb, len);
__skb_trim(skb, len);
return 0;
target: 0 | cwe: CWE-20 | project: linux | commit: 2b16f048729bf35e6c28a40cbfad07239f9dcd90 | hash: 249,696,768,660,114,430,000,000,000,000,000,000,000 | size: 7 | message:
net: create skb_gso_validate_mac_len()
If you take a GSO skb, and split it into packets, will the MAC
length (L2 + L3 + L4 headers + payload) of those packets be small
enough to fit within a given length?
Move skb_gso_mac_seglen() to skbuff.h with other related functions
like skb_gso_network_seglen() so we can use it, and then create
skb_gso_validate_mac_len to do the full calculation.
Signed-off-by: Daniel Axtens <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
static int mailbox_delete_carddav(struct mailbox *mailbox)
{
struct carddav_db *carddavdb = NULL;
carddavdb = carddav_open_mailbox(mailbox);
if (carddavdb) {
const mbentry_t mbentry = { .name = (char *)mailbox_name(mailbox),
.uniqueid = (char *)mailbox_uniqueid(mailbox) };
int r = carddav_delmbox(carddavdb, &mbentry);
carddav_close(carddavdb);
if (r) return r;
}
return 0;
}
target: 0 | cwe: none | project: cyrus-imapd | commit: 1d6d15ee74e11a9bd745e80be69869e5fb8d64d6 | hash: 238,428,100,860,250,150,000,000,000,000,000,000,000 | size: 15 | message:
mailbox.c/reconstruct.c: Add mailbox_mbentry_from_path()
static void
e1000e_vm_state_change(void *opaque, int running, RunState state)
{
E1000ECore *core = opaque;
if (running) {
trace_e1000e_vm_state_running();
e1000e_intrmgr_resume(core);
e1000e_autoneg_resume(core);
} else {
trace_e1000e_vm_state_stopped();
e1000e_autoneg_pause(core);
e1000e_intrmgr_pause(core);
}
target: 0 | cwe: CWE-835 | project: qemu | commit: 4154c7e03fa55b4cf52509a83d50d6c09d743b77 | hash: 24,541,824,895,269,088,000,000,000,000,000,000,000 | size: 14 | message:
net: e1000e: fix an infinite loop issue
This issue is like the issue in e1000 network card addressed in
this commit:
e1000: eliminate infinite loops on out-of-bounds transfer start.
Signed-off-by: Li Qiang <[email protected]>
Reviewed-by: Dmitry Fleytman <[email protected]>
Signed-off-by: Jason Wang <[email protected]>
char **lxc_string_split_quoted(char *string)
{
char *nextword = string, *p, state;
char **result = NULL;
size_t result_capacity = 0;
size_t result_count = 0;
if (!string || !*string)
return calloc(1, sizeof(char *));
// TODO I'm *not* handling escaped quote
state = ' ';
for (p = string; *p; p++) {
switch(state) {
case ' ':
if (isspace(*p))
continue;
else if (*p == '"' || *p == '\'') {
nextword = p;
state = *p;
continue;
}
nextword = p;
state = 'a';
continue;
case 'a':
if (isspace(*p)) {
complete_word(&result, nextword, p, &result_capacity, &result_count);
state = ' ';
continue;
}
continue;
case '"':
case '\'':
if (*p == state) {
complete_word(&result, nextword+1, p, &result_capacity, &result_count);
state = ' ';
continue;
}
continue;
}
}
if (state == 'a')
complete_word(&result, nextword, p, &result_capacity, &result_count);
return realloc(result, (result_count + 1) * sizeof(char *));
}
target: 0 | cwe: CWE-417 | project: lxc | commit: c1cf54ebf251fdbad1e971679614e81649f1c032 | hash: 252,954,524,246,042,700,000,000,000,000,000,000,000 | size: 48 | message:
CVE 2018-6556: verify netns fd in lxc-user-nic
Signed-off-by: Christian Brauner <[email protected]>
static int decode_block(AVCodecContext *avctx, void *tdata,
int jobnr, int threadnr)
{
EXRContext *s = avctx->priv_data;
AVFrame *const p = s->picture;
EXRThreadData *td = &s->thread_data[threadnr];
const uint8_t *channel_buffer[4] = { 0 };
const uint8_t *buf = s->buf;
uint64_t line_offset, uncompressed_size;
uint8_t *ptr;
uint32_t data_size;
int line, col = 0;
uint64_t tile_x, tile_y, tile_level_x, tile_level_y;
const uint8_t *src;
int step = s->desc->flags & AV_PIX_FMT_FLAG_FLOAT ? 4 : 2 * s->desc->nb_components;
int bxmin = 0, axmax = 0, window_xoffset = 0;
int window_xmin, window_xmax, window_ymin, window_ymax;
int data_xoffset, data_yoffset, data_window_offset, xsize, ysize;
int i, x, buf_size = s->buf_size;
int c, rgb_channel_count;
float one_gamma = 1.0f / s->gamma;
avpriv_trc_function trc_func = avpriv_get_trc_function_from_trc(s->apply_trc_type);
int ret;
line_offset = AV_RL64(s->gb.buffer + jobnr * 8);
if (s->is_tile) {
if (buf_size < 20 || line_offset > buf_size - 20)
return AVERROR_INVALIDDATA;
src = buf + line_offset + 20;
if (s->is_multipart)
src += 4;
tile_x = AV_RL32(src - 20);
tile_y = AV_RL32(src - 16);
tile_level_x = AV_RL32(src - 12);
tile_level_y = AV_RL32(src - 8);
data_size = AV_RL32(src - 4);
if (data_size <= 0 || data_size > buf_size - line_offset - 20)
return AVERROR_INVALIDDATA;
if (tile_level_x || tile_level_y) { /* tile level, is not the full res level */
avpriv_report_missing_feature(s->avctx, "Subres tile before full res tile");
return AVERROR_PATCHWELCOME;
}
if (tile_x && s->tile_attr.xSize + (int64_t)FFMAX(s->xmin, 0) >= INT_MAX / tile_x )
return AVERROR_INVALIDDATA;
if (tile_y && s->tile_attr.ySize + (int64_t)FFMAX(s->ymin, 0) >= INT_MAX / tile_y )
return AVERROR_INVALIDDATA;
line = s->ymin + s->tile_attr.ySize * tile_y;
col = s->tile_attr.xSize * tile_x;
if (line < s->ymin || line > s->ymax ||
s->xmin + col < s->xmin || s->xmin + col > s->xmax)
return AVERROR_INVALIDDATA;
td->ysize = FFMIN(s->tile_attr.ySize, s->ydelta - tile_y * s->tile_attr.ySize);
td->xsize = FFMIN(s->tile_attr.xSize, s->xdelta - tile_x * s->tile_attr.xSize);
if (td->xsize * (uint64_t)s->current_channel_offset > INT_MAX)
return AVERROR_INVALIDDATA;
td->channel_line_size = td->xsize * s->current_channel_offset;/* uncompress size of one line */
uncompressed_size = td->channel_line_size * (uint64_t)td->ysize;/* uncompress size of the block */
} else {
if (buf_size < 8 || line_offset > buf_size - 8)
return AVERROR_INVALIDDATA;
src = buf + line_offset + 8;
if (s->is_multipart)
src += 4;
line = AV_RL32(src - 8);
if (line < s->ymin || line > s->ymax)
return AVERROR_INVALIDDATA;
data_size = AV_RL32(src - 4);
if (data_size <= 0 || data_size > buf_size - line_offset - 8)
return AVERROR_INVALIDDATA;
td->ysize = FFMIN(s->scan_lines_per_block, s->ymax - line + 1); /* s->ydelta - line ?? */
td->xsize = s->xdelta;
if (td->xsize * (uint64_t)s->current_channel_offset > INT_MAX)
return AVERROR_INVALIDDATA;
td->channel_line_size = td->xsize * s->current_channel_offset;/* uncompress size of one line */
uncompressed_size = td->channel_line_size * (uint64_t)td->ysize;/* uncompress size of the block */
if ((s->compression == EXR_RAW && (data_size != uncompressed_size ||
line_offset > buf_size - uncompressed_size)) ||
(s->compression != EXR_RAW && (data_size > uncompressed_size ||
line_offset > buf_size - data_size))) {
return AVERROR_INVALIDDATA;
}
}
window_xmin = FFMIN(avctx->width, FFMAX(0, s->xmin + col));
window_xmax = FFMIN(avctx->width, FFMAX(0, s->xmin + col + td->xsize));
window_ymin = FFMIN(avctx->height, FFMAX(0, line ));
window_ymax = FFMIN(avctx->height, FFMAX(0, line + td->ysize));
xsize = window_xmax - window_xmin;
ysize = window_ymax - window_ymin;
/* tile or scanline not visible skip decoding */
if (xsize <= 0 || ysize <= 0)
return 0;
/* is the first tile or is a scanline */
if(col == 0) {
window_xmin = 0;
/* pixels to add at the left of the display window */
window_xoffset = FFMAX(0, s->xmin);
/* bytes to add at the left of the display window */
bxmin = window_xoffset * step;
}
/* is the last tile or is a scanline */
if(col + td->xsize == s->xdelta) {
window_xmax = avctx->width;
/* bytes to add at the right of the display window */
axmax = FFMAX(0, (avctx->width - (s->xmax + 1))) * step;
}
if (data_size < uncompressed_size || s->is_tile) { /* td->tmp is use for tile reorganization */
av_fast_padded_malloc(&td->tmp, &td->tmp_size, uncompressed_size);
if (!td->tmp)
return AVERROR(ENOMEM);
}
if (data_size < uncompressed_size) {
av_fast_padded_malloc(&td->uncompressed_data,
&td->uncompressed_size, uncompressed_size + 64);/* Force 64 padding for AVX2 reorder_pixels dst */
if (!td->uncompressed_data)
return AVERROR(ENOMEM);
ret = AVERROR_INVALIDDATA;
switch (s->compression) {
case EXR_ZIP1:
case EXR_ZIP16:
ret = zip_uncompress(s, src, data_size, uncompressed_size, td);
break;
case EXR_PIZ:
ret = piz_uncompress(s, src, data_size, uncompressed_size, td);
break;
case EXR_PXR24:
ret = pxr24_uncompress(s, src, data_size, uncompressed_size, td);
break;
case EXR_RLE:
ret = rle_uncompress(s, src, data_size, uncompressed_size, td);
break;
case EXR_B44:
case EXR_B44A:
ret = b44_uncompress(s, src, data_size, uncompressed_size, td);
break;
case EXR_DWAA:
case EXR_DWAB:
ret = dwa_uncompress(s, src, data_size, uncompressed_size, td);
break;
}
if (ret < 0) {
av_log(avctx, AV_LOG_ERROR, "decode_block() failed.\n");
return ret;
}
src = td->uncompressed_data;
}
/* offsets to crop data outside display window */
data_xoffset = FFABS(FFMIN(0, s->xmin + col)) * (s->pixel_type == EXR_HALF ? 2 : 4);
data_yoffset = FFABS(FFMIN(0, line));
data_window_offset = (data_yoffset * td->channel_line_size) + data_xoffset;
if (!s->is_luma) {
channel_buffer[0] = src + (td->xsize * s->channel_offsets[0]) + data_window_offset;
channel_buffer[1] = src + (td->xsize * s->channel_offsets[1]) + data_window_offset;
channel_buffer[2] = src + (td->xsize * s->channel_offsets[2]) + data_window_offset;
rgb_channel_count = 3;
} else { /* put y data in the first channel_buffer */
channel_buffer[0] = src + (td->xsize * s->channel_offsets[1]) + data_window_offset;
rgb_channel_count = 1;
}
if (s->channel_offsets[3] >= 0)
channel_buffer[3] = src + (td->xsize * s->channel_offsets[3]) + data_window_offset;
if (s->desc->flags & AV_PIX_FMT_FLAG_FLOAT) {
/* todo: change this when a floating point pixel format with luma with alpha is implemented */
int channel_count = s->channel_offsets[3] >= 0 ? 4 : rgb_channel_count;
if (s->is_luma) {
channel_buffer[1] = channel_buffer[0];
channel_buffer[2] = channel_buffer[0];
}
for (c = 0; c < channel_count; c++) {
int plane = s->desc->comp[c].plane;
ptr = p->data[plane] + window_ymin * p->linesize[plane] + (window_xmin * 4);
for (i = 0; i < ysize; i++, ptr += p->linesize[plane]) {
const uint8_t *src;
union av_intfloat32 *ptr_x;
src = channel_buffer[c];
ptr_x = (union av_intfloat32 *)ptr;
// Zero out the start if xmin is not 0
memset(ptr_x, 0, bxmin);
ptr_x += window_xoffset;
if (s->pixel_type == EXR_FLOAT ||
s->compression == EXR_DWAA ||
s->compression == EXR_DWAB) {
// 32-bit
union av_intfloat32 t;
if (trc_func && c < 3) {
for (x = 0; x < xsize; x++) {
t.i = bytestream_get_le32(&src);
t.f = trc_func(t.f);
*ptr_x++ = t;
}
} else if (one_gamma != 1.f) {
for (x = 0; x < xsize; x++) {
t.i = bytestream_get_le32(&src);
if (t.f > 0.0f && c < 3) /* avoid negative values */
t.f = powf(t.f, one_gamma);
*ptr_x++ = t;
}
} else {
for (x = 0; x < xsize; x++) {
t.i = bytestream_get_le32(&src);
*ptr_x++ = t;
}
}
} else if (s->pixel_type == EXR_HALF) {
// 16-bit
if (c < 3 || !trc_func) {
for (x = 0; x < xsize; x++) {
*ptr_x++ = s->gamma_table[bytestream_get_le16(&src)];
}
} else {
for (x = 0; x < xsize; x++) {
ptr_x[0].i = half2float(bytestream_get_le16(&src),
s->mantissatable,
s->exponenttable,
s->offsettable);
ptr_x++;
}
}
}
// Zero out the end if xmax+1 is not w
memset(ptr_x, 0, axmax);
channel_buffer[c] += td->channel_line_size;
}
}
} else {
av_assert1(s->pixel_type == EXR_UINT);
ptr = p->data[0] + window_ymin * p->linesize[0] + (window_xmin * s->desc->nb_components * 2);
for (i = 0; i < ysize; i++, ptr += p->linesize[0]) {
const uint8_t * a;
const uint8_t *rgb[3];
uint16_t *ptr_x;
for (c = 0; c < rgb_channel_count; c++) {
rgb[c] = channel_buffer[c];
}
if (channel_buffer[3])
a = channel_buffer[3];
ptr_x = (uint16_t *) ptr;
// Zero out the start if xmin is not 0
memset(ptr_x, 0, bxmin);
ptr_x += window_xoffset * s->desc->nb_components;
for (x = 0; x < xsize; x++) {
for (c = 0; c < rgb_channel_count; c++) {
*ptr_x++ = bytestream_get_le32(&rgb[c]) >> 16;
}
if (channel_buffer[3])
*ptr_x++ = bytestream_get_le32(&a) >> 16;
}
// Zero out the end if xmax+1 is not w
memset(ptr_x, 0, axmax);
channel_buffer[0] += td->channel_line_size;
channel_buffer[1] += td->channel_line_size;
channel_buffer[2] += td->channel_line_size;
if (channel_buffer[3])
channel_buffer[3] += td->channel_line_size;
}
}
return 0;
}
target: 0 | cwe: CWE-20, CWE-129 | project: FFmpeg | commit: 26d3c81bc5ef2f8c3f09d45eaeacfb4b1139a777 | hash: 301,334,367,618,255,430,000,000,000,000,000,000,000 | size: 304 | message:
avcodec/exr: More strictly check dc_count
Fixes: out of array access
Fixes: exr/deneme
Found-by: Burak Çarıkçı <[email protected]>
Signed-off-by: Michael Niedermayer <[email protected]>
BGD_DECLARE(void) gdImageChar (gdImagePtr im, gdFontPtr f, int x, int y, int c, int color)
{
int cx, cy;
int px, py;
int fline;
cx = 0;
cy = 0;
#ifdef CHARSET_EBCDIC
c = ASC (c);
#endif /*CHARSET_EBCDIC */
if ((c < f->offset) || (c >= (f->offset + f->nchars)))
{
return;
}
fline = (c - f->offset) * f->h * f->w;
for (py = y; (py < (y + f->h)); py++)
{
for (px = x; (px < (x + f->w)); px++)
{
if (f->data[fline + cy * f->w + cx])
{
gdImageSetPixel (im, px, py, color);
}
cx++;
}
cx = 0;
cy++;
}
}
target: 0 | cwe: CWE-190 | project: libgd | commit: cfee163a5e848fc3e3fb1d05a30d7557cdd36457 | hash: 132,561,768,005,802,520,000,000,000,000,000,000,000 | size: 29 | message:
- #18, Removed invalid gdFree call when overflow2 fails
- #17, Free im->pixels as well on error
void test_nghttp2_session_reprioritize_stream_with_idle_stream_dep(void) {
nghttp2_session *session;
nghttp2_session_callbacks callbacks;
nghttp2_stream *stream;
nghttp2_priority_spec pri_spec;
memset(&callbacks, 0, sizeof(nghttp2_session_callbacks));
callbacks.send_callback = block_count_send_callback;
nghttp2_session_server_new(&session, &callbacks, NULL);
stream = open_recv_stream(session, 1);
session->pending_local_max_concurrent_stream = 1;
nghttp2_priority_spec_init(&pri_spec, 101, 10, 0);
nghttp2_session_reprioritize_stream(session, stream, &pri_spec);
  /* idle stream is not counted to max concurrent streams */
CU_ASSERT(10 == stream->weight);
CU_ASSERT(101 == stream->dep_prev->stream_id);
stream = nghttp2_session_get_stream_raw(session, 101);
CU_ASSERT(NGHTTP2_DEFAULT_WEIGHT == stream->weight);
nghttp2_session_del(session);
}
target: 0 | cwe: none | project: nghttp2 | commit: 0a6ce87c22c69438ecbffe52a2859c3a32f1620f | hash: 272,392,462,305,990,530,000,000,000,000,000,000,000 | size: 30 | message:
Add nghttp2_option_set_max_outbound_ack
bytestring_to_str(wmem_allocator_t *scope, const guint8 *ad, const guint32 len, const char punct)
{
gchar *buf;
guint32 buflen = len;
gchar *buf_ptr;
int truncated = 0;
if (len == 0)
return wmem_strdup(scope, "");
if (!ad)
REPORT_DISSECTOR_BUG("Null pointer passed to bytestring_to_str()");
if (!punct)
return bytes_to_str(scope, ad, len);
buf=(gchar *)wmem_alloc(scope, MAX_BYTE_STR_LEN+3+1);
if (buflen > MAX_BYTE_STR_LEN/3) { /* bd_len > 16 */
truncated = 1;
buflen = MAX_BYTE_STR_LEN/3;
}
buf_ptr = bytes_to_hexstr_punct(buf, ad, buflen, punct); /* max MAX_BYTE_STR_LEN-1 bytes */
if (truncated) {
*buf_ptr++ = punct; /* 1 byte */
buf_ptr = g_stpcpy(buf_ptr, UTF8_HORIZONTAL_ELLIPSIS); /* 3 bytes */
}
*buf_ptr = '\0';
return buf;
}
target: 0 | cwe: CWE-125 | project: wireshark | commit: d5f2657825e63e4126ebd7d13a59f3c6e8a9e4e1 | hash: 54,978,328,335,822,900,000,000,000,000,000,000,000 | size: 32 | message:
epan: Limit our bits in decode_bits_in_field.
Limit the number of bits we process in decode_bits_in_field, otherwise
we'll overrun our buffer. Fixes #16958.
static void virtnet_config_changed(struct virtio_device *vdev)
{
struct virtnet_info *vi = vdev->priv;
schedule_work(&vi->config_work);
}
target: 0 | cwe: CWE-119, CWE-787 | project: linux | commit: 48900cb6af4282fa0fb6ff4d72a81aa3dadb5c39 | hash: 208,841,214,079,977,920,000,000,000,000,000,000,000 | size: 6 | message:
virtio-net: drop NETIF_F_FRAGLIST
virtio declares support for NETIF_F_FRAGLIST, but assumes
that there are at most MAX_SKB_FRAGS + 2 fragments which isn't
always true with a fraglist.
A longer fraglist in the skb will make the call to skb_to_sgvec overflow
the sg array, leading to memory corruption.
Drop NETIF_F_FRAGLIST so we only get what we can handle.
Cc: Michael S. Tsirkin <[email protected]>
Signed-off-by: Jason Wang <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
static void Run(OpKernelContext *ctx, typename TTypes<T>::Scalar &s, const typename TTypes<T>::UnalignedVec &v) {
s.device(ctx->eigen_cpu_device()) = v.sum();
}
target: 0 | cwe: CWE-125 | project: tensorflow | commit: 87158f43f05f2720a374f3e6d22a7aaa3a33f750 | hash: 14,352,629,159,288,104,000,000,000,000,000,000,000 | size: 3 | message:
Prevent heap OOB in sparse reduction ops.
PiperOrigin-RevId: 387934524
Change-Id: I894aa30f1e454f09b471d565b4a325da49322c1a
static const char *wsgi_set_group_authoritative(cmd_parms *cmd, void *mconfig,
const char *f)
{
WSGIDirectoryConfig *dconfig = NULL;
dconfig = (WSGIDirectoryConfig *)mconfig;
if (strcasecmp(f, "Off") == 0)
dconfig->group_authoritative = 0;
else if (strcasecmp(f, "On") == 0)
dconfig->group_authoritative = 1;
else
return "WSGIGroupAuthoritative must be one of: Off | On";
return NULL;
}
target: 0 | cwe: CWE-264 | project: mod_wsgi | commit: d9d5fea585b23991f76532a9b07de7fcd3b649f4 | hash: 202,676,134,441,652,100,000,000,000,000,000,000,000 | size: 15 | message:
Local privilege escalation when using daemon mode. (CVE-2014-0240)
static inline struct inet_timewait_sock *tw_next(struct inet_timewait_sock *tw)
{
return !is_a_nulls(tw->tw_node.next) ?
hlist_nulls_entry(tw->tw_node.next, typeof(*tw), tw_node) : NULL;
}
target: 0 | cwe: CWE-362 | project: linux-2.6 | commit: f6d8bd051c391c1c0458a30b2a7abcd939329259 | hash: 208,342,800,436,177,080,000,000,000,000,000,000,000 | size: 5 | message:
inet: add RCU protection to inet->opt
We lack proper synchronization to manipulate inet->opt ip_options
Problem is ip_make_skb() calls ip_setup_cork() and
ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options),
without any protection against another thread manipulating inet->opt.
Another thread can change inet->opt pointer and free old one under us.
Use RCU to protect inet->opt (changed to inet->inet_opt).
Instead of handling atomic refcounts, just copy ip_options when
necessary, to avoid cache line dirtying.
We cant insert an rcu_head in struct ip_options since its included in
skb->cb[], so this patch is large because I had to introduce a new
ip_options_rcu structure.
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Herbert Xu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
static int io_timeout_remove_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
struct io_timeout_rem *tr = &req->timeout_rem;
if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
return -EINVAL;
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
return -EINVAL;
if (sqe->ioprio || sqe->buf_index || sqe->len)
return -EINVAL;
tr->addr = READ_ONCE(sqe->addr);
tr->flags = READ_ONCE(sqe->timeout_flags);
if (tr->flags & IORING_TIMEOUT_UPDATE) {
if (tr->flags & ~(IORING_TIMEOUT_UPDATE|IORING_TIMEOUT_ABS))
return -EINVAL;
if (get_timespec64(&tr->ts, u64_to_user_ptr(sqe->addr2)))
return -EFAULT;
} else if (tr->flags) {
/* timeout removal doesn't support flags */
return -EINVAL;
}
return 0;
target: 0 | cwe: CWE-667 | project: linux | commit: 3ebba796fa251d042be42b929a2d916ee5c34a49 | hash: 243,115,776,930,166,560,000,000,000,000,000,000,000 | size: 26 | message:
io_uring: ensure that SQPOLL thread is started for exit
If we create it in a disabled state because IORING_SETUP_R_DISABLED is
set on ring creation, we need to ensure that we've kicked the thread if
we're exiting before it's been explicitly disabled. Otherwise we can run
into a deadlock where exit is waiting to park the SQPOLL thread, but the
SQPOLL thread itself is waiting to get a signal to start.
That results in the below trace of both tasks hung, waiting on each other:
INFO: task syz-executor458:8401 blocked for more than 143 seconds.
Not tainted 5.11.0-next-20210226-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor458 state:D stack:27536 pid: 8401 ppid: 8400 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4324 [inline]
__schedule+0x90c/0x21a0 kernel/sched/core.c:5075
schedule+0xcf/0x270 kernel/sched/core.c:5154
schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
io_sq_thread_park fs/io_uring.c:7115 [inline]
io_sq_thread_park+0xd5/0x130 fs/io_uring.c:7103
io_uring_cancel_task_requests+0x24c/0xd90 fs/io_uring.c:8745
__io_uring_files_cancel+0x110/0x230 fs/io_uring.c:8840
io_uring_files_cancel include/linux/io_uring.h:47 [inline]
do_exit+0x299/0x2a60 kernel/exit.c:780
do_group_exit+0x125/0x310 kernel/exit.c:922
__do_sys_exit_group kernel/exit.c:933 [inline]
__se_sys_exit_group kernel/exit.c:931 [inline]
__x64_sys_exit_group+0x3a/0x50 kernel/exit.c:931
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x43e899
RSP: 002b:00007ffe89376d48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00000000004af2f0 RCX: 000000000043e899
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffffffffffffc0 R09: 0000000010000000
R10: 0000000000008011 R11: 0000000000000246 R12: 00000000004af2f0
R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001
INFO: task iou-sqp-8401:8402 can't die for more than 143 seconds.
task:iou-sqp-8401 state:D stack:30272 pid: 8402 ppid: 8400 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4324 [inline]
__schedule+0x90c/0x21a0 kernel/sched/core.c:5075
schedule+0xcf/0x270 kernel/sched/core.c:5154
schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
io_sq_thread+0x27d/0x1ae0 fs/io_uring.c:6717
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task iou-sqp-8401:8402 blocked for more than 143 seconds.
Reported-by: [email protected]
Signed-off-by: Jens Axboe <[email protected]>
|
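The hang above is a two-party wait cycle: the exit path waits for the SQPOLL thread to park, while the SQPOLL thread waits to be enabled. A minimal pthread sketch of the pattern behind the fix (hypothetical names, not the actual io_uring code): the shutdown path must deliver the enable signal before it blocks waiting on the thread.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool enabled;

/* Stand-in for the SQPOLL thread created in a disabled state. */
static void *poll_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!enabled)                     /* parked until explicitly enabled */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Exit path: kick the thread BEFORE waiting on it. Without the signal,
 * pthread_join() and pthread_cond_wait() would each wait on the other,
 * which is the mutual hang shown in the trace above. */
static int shutdown_poller(pthread_t t)
{
    pthread_mutex_lock(&lock);
    enabled = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return pthread_join(t, NULL);
}
```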
R_API void r_bin_java_stack_frame_free(void /*RBinJavaStackMapFrame*/ *o) {
RBinJavaStackMapFrame *obj = o;
if (obj) {
r_list_free (obj->local_items);
r_list_free (obj->stack_items);
free (obj->metas);
free (obj);
}
}
| 0 |
[
"CWE-119",
"CWE-788"
] |
radare2
|
6c4428f018d385fc80a33ecddcb37becea685dd5
| 183,038,342,938,310,200,000,000,000,000,000,000,000 | 9 |
Improve boundary checks to fix oobread segfaults ##crash
* Reported by Cen Zhang via huntr.dev
* Reproducer: bins/fuzzed/javaoob-havoc.class
|
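The "improve boundary checks" fix above follows the standard pattern for parsers of untrusted input: validate the offset and length against the buffer end before every read, using overflow-safe arithmetic. A hedged sketch (hypothetical helper, not the radare2 code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy `len` bytes starting at `off` from buf[0..buf_sz) into out.
 * Returns 0 on success, -1 if the read would go out of bounds.
 * The comparison `len > buf_sz - off` avoids overflow in off + len. */
int bounded_read(const uint8_t *buf, size_t buf_sz,
                 size_t off, size_t len, uint8_t *out)
{
    if (!buf || !out || off > buf_sz || len > buf_sz - off) {
        return -1;
    }
    memcpy(out, buf + off, len);
    return 0;
}
```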
RZ_API int rz_core_print_bb_gml(RzCore *core, RzAnalysisFunction *fcn) {
RzAnalysisBlock *bb;
RzListIter *iter;
if (!fcn) {
return false;
}
int id = 0;
HtUUOptions opt = { 0 };
HtUU *ht = ht_uu_new_opt(&opt);
rz_cons_printf("graph\n[\n"
"hierarchic 1\n"
"label \"\"\n"
"directed 1\n");
rz_list_foreach (fcn->bbs, iter, bb) {
RzFlagItem *flag = rz_flag_get_i(core->flags, bb->addr);
char *msg = flag ? strdup(flag->name) : rz_str_newf("0x%08" PFMT64x, bb->addr);
#if USE_ID
ht_uu_insert(ht, bb->addr, id);
rz_cons_printf(" node [\n"
" id %d\n"
" label \"%s\"\n"
" ]\n",
id, msg);
id++;
#else
rz_cons_printf(" node [\n"
" id %" PFMT64d "\n"
" label \"%s\"\n"
" ]\n",
bb->addr, msg);
#endif
free(msg);
}
rz_list_foreach (fcn->bbs, iter, bb) {
if (bb->addr == UT64_MAX) {
continue;
}
#if USE_ID
if (bb->jump != UT64_MAX) {
bool found;
int i = ht_uu_find(ht, bb->addr, &found);
if (found) {
int i2 = ht_uu_find(ht, bb->jump, &found);
if (found) {
rz_cons_printf(" edge [\n"
" source %d\n"
" target %d\n"
" ]\n",
i, i2);
}
}
}
if (bb->fail != UT64_MAX) {
bool found;
int i = ht_uu_find(ht, bb->addr, &found);
if (found) {
int i2 = ht_uu_find(ht, bb->fail, &found);
if (found) {
rz_cons_printf(" edge [\n"
" source %d\n"
" target %d\n"
" ]\n",
i, i2);
}
}
}
if (bb->switch_op) {
RzListIter *it;
RzAnalysisCaseOp *cop;
rz_list_foreach (bb->switch_op->cases, it, cop) {
bool found;
int i = ht_uu_find(ht, bb->addr, &found);
if (found) {
int i2 = ht_uu_find(ht, cop->addr, &found);
if (found) {
rz_cons_printf(" edge [\n"
" source %d\n"
" target %d\n"
" ]\n",
i, i2);
}
}
}
}
#else
if (bb->jump != UT64_MAX) {
rz_cons_printf(" edge [\n"
" source %" PFMT64d "\n"
" target %" PFMT64d "\n"
" ]\n",
bb->addr, bb->jump);
}
if (bb->fail != UT64_MAX) {
rz_cons_printf(" edge [\n"
" source %" PFMT64d "\n"
" target %" PFMT64d "\n"
" ]\n",
bb->addr, bb->fail);
}
if (bb->switch_op) {
RzListIter *it;
RzAnalysisCaseOp *cop;
rz_list_foreach (bb->switch_op->cases, it, cop) {
rz_cons_printf(" edge [\n"
" source %" PFMT64d "\n"
" target %" PFMT64d "\n"
" ]\n",
bb->addr, cop->addr);
}
}
#endif
}
rz_cons_printf("]\n");
ht_uu_free(ht);
return true;
}
| 0 |
[
"CWE-703"
] |
rizin
|
6ce71d8aa3dafe3cdb52d5d72ae8f4b95916f939
| 252,785,632,283,436,000,000,000,000,000,000,000,000 | 120 |
Initialize retctx,ctx before freeing the inner elements
In rz_core_analysis_type_match retctx structure was initialized on the
stack only after a "goto out_function", where a field of that structure
was freed. When the goto path is taken, the field is not properly
initialized and it cause cause a crash of Rizin or have other effects.
Fixes: CVE-2021-4022
|
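The CVE-2021-4022 fix described above boils down to a common C rule: any variable freed on a shared cleanup label must be initialized before the first `goto` that can reach that label. A minimal sketch under that assumption (hypothetical `ctx_t`, not the rizin structures):

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *buf;                  /* freed on the cleanup path */
} ctx_t;

/* Returns 0 on success, -1 on failure. Zero-initializing ctx before
 * any goto guarantees free() never sees an indeterminate pointer. */
int process(int fail_early)
{
    ctx_t ctx = { 0 };          /* initialize BEFORE any goto */
    int ret = -1;
    if (fail_early) {
        goto out;               /* ctx.buf is NULL; free(NULL) is a no-op */
    }
    ctx.buf = malloc(16);
    if (!ctx.buf) {
        goto out;
    }
    memcpy(ctx.buf, "ok", 3);
    ret = 0;
out:
    free(ctx.buf);              /* safe on every path */
    return ret;
}
```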
int subselect_uniquesubquery_engine::index_lookup()
{
DBUG_ENTER("subselect_uniquesubquery_engine::index_lookup");
int error;
TABLE *table= tab->table;
if (!table->file->inited)
table->file->ha_index_init(tab->ref.key, 0);
error= table->file->ha_index_read_map(table->record[0],
tab->ref.key_buff,
make_prev_keypart_map(tab->
ref.key_parts),
HA_READ_KEY_EXACT);
DBUG_PRINT("info", ("lookup result: %i", error));
if (error && error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
{
/*
TIMOUR: I don't understand at all when we need to call report_error.
In most places where we access an index, we don't do this. Why here?
*/
error= report_error(table, error);
DBUG_RETURN(error);
}
table->null_row= 0;
if (!error && (!cond || cond->val_int()))
((Item_in_subselect *) item)->value= 1;
else
((Item_in_subselect *) item)->value= 0;
DBUG_RETURN(0);
}
| 0 |
[
"CWE-89"
] |
server
|
3c209bfc040ddfc41ece8357d772547432353fd2
| 209,183,160,042,314,500,000,000,000,000,000,000,000 | 33 |
MDEV-25994: Crash with union of my_decimal type in ORDER BY clause
When single-row subquery fails with "Subquery returns more than 1 row"
error, it will raise an error and return NULL.
On the other hand, Item_singlerow_subselect sets item->maybe_null=0
for table-less subqueries like "(SELECT not_null_value)" (*)
This discrepancy (item with maybe_null=0 returning NULL) causes the
code in Type_handler_decimal_result::make_sort_key_part() to crash.
Fixed this by allowing inference (*) only when the subquery is NOT a
UNION.
|
void saio_box_del(GF_Box *s)
{
GF_SampleAuxiliaryInfoOffsetBox *ptr = (GF_SampleAuxiliaryInfoOffsetBox*)s;
if (ptr == NULL) return;
if (ptr->offsets) gf_free(ptr->offsets);
if (ptr->cached_data) gf_free(ptr->cached_data);
gf_free(ptr);
}
| 0 |
[
"CWE-787"
] |
gpac
|
77510778516803b7f7402d7423c6d6bef50254c3
| 32,761,446,866,594,940,000,000,000,000,000,000,000 | 8 |
fixed #2255
|
Item_int::Item_int(THD *thd, const char *str_arg, size_t length):
Item_num(thd)
{
char *end_ptr= (char*) str_arg + length;
int error;
value= my_strtoll10(str_arg, &end_ptr, &error);
max_length= (uint) (end_ptr - str_arg);
name.str= str_arg;
/*
We can't trust max_length as in show_routine_code we are using "Pos" as
the field name.
*/
name.length= !str_arg[max_length] ? max_length : strlen(str_arg);
fixed= 1;
}
| 0 |
[
"CWE-416"
] |
server
|
c02ebf3510850ba78a106be9974c94c3b97d8585
| 297,022,181,462,953,200,000,000,000,000,000,000,000 | 15 |
MDEV-24176 Preparations
1. moved fix_vcol_exprs() call to open_table()
mysql_alter_table() doesn't do lock_tables(), so it cannot benefit from
a fix_vcol_exprs() call there. Tests affected: main.default_session
2. Vanilla cleanups and comments.
|
GF_Err sdp_Read(GF_Box *s, GF_BitStream *bs)
{
u32 length;
GF_SDPBox *ptr = (GF_SDPBox *)s;
if (ptr == NULL) return GF_BAD_PARAM;
length = (u32) (ptr->size);
//sdp text has no delimiter !!!
ptr->sdpText = (char*)gf_malloc(sizeof(char) * (length+1));
if (!ptr->sdpText) return GF_OUT_OF_MEM;
gf_bs_read_data(bs, ptr->sdpText, length);
ptr->sdpText[length] = 0;
return GF_OK;
}
| 0 |
[
"CWE-400",
"CWE-401"
] |
gpac
|
d2371b4b204f0a3c0af51ad4e9b491144dd1225c
| 149,245,555,899,707,150,000,000,000,000,000,000,000 | 15 |
prevent dref memleak on invalid input (#1183)
|
static int opt_show_versions(const char *opt, const char *arg)
{
mark_section_show_entries(SECTION_ID_PROGRAM_VERSION, 1, NULL);
mark_section_show_entries(SECTION_ID_LIBRARY_VERSION, 1, NULL);
return 0;
}
| 0 |
[
"CWE-476"
] |
FFmpeg
|
837cb4325b712ff1aab531bf41668933f61d75d2
| 161,365,277,529,502,550,000,000,000,000,000,000,000 | 6 |
ffprobe: Fix null pointer dereference with color primaries
Found-by: AD-lab of venustech
Signed-off-by: Michael Niedermayer <[email protected]>
|
static OPJ_BOOL opj_j2k_update_image_data(opj_tcd_t * p_tcd,
opj_image_t* p_output_image)
{
OPJ_UINT32 i, j;
OPJ_UINT32 l_width_src, l_height_src;
OPJ_UINT32 l_width_dest, l_height_dest;
OPJ_INT32 l_offset_x0_src, l_offset_y0_src, l_offset_x1_src, l_offset_y1_src;
OPJ_SIZE_T l_start_offset_src;
OPJ_UINT32 l_start_x_dest, l_start_y_dest;
OPJ_UINT32 l_x0_dest, l_y0_dest, l_x1_dest, l_y1_dest;
OPJ_SIZE_T l_start_offset_dest;
opj_image_comp_t * l_img_comp_src = 00;
opj_image_comp_t * l_img_comp_dest = 00;
opj_tcd_tilecomp_t * l_tilec = 00;
opj_image_t * l_image_src = 00;
OPJ_INT32 * l_dest_ptr;
l_tilec = p_tcd->tcd_image->tiles->comps;
l_image_src = p_tcd->image;
l_img_comp_src = l_image_src->comps;
l_img_comp_dest = p_output_image->comps;
for (i = 0; i < l_image_src->numcomps;
i++, ++l_img_comp_dest, ++l_img_comp_src, ++l_tilec) {
OPJ_INT32 res_x0, res_x1, res_y0, res_y1;
OPJ_UINT32 src_data_stride;
const OPJ_INT32* p_src_data;
/* Copy info from decoded comp image to output image */
l_img_comp_dest->resno_decoded = l_img_comp_src->resno_decoded;
if (p_tcd->whole_tile_decoding) {
opj_tcd_resolution_t* l_res = l_tilec->resolutions +
l_img_comp_src->resno_decoded;
res_x0 = l_res->x0;
res_y0 = l_res->y0;
res_x1 = l_res->x1;
res_y1 = l_res->y1;
src_data_stride = (OPJ_UINT32)(
l_tilec->resolutions[l_tilec->minimum_num_resolutions - 1].x1 -
l_tilec->resolutions[l_tilec->minimum_num_resolutions - 1].x0);
p_src_data = l_tilec->data;
} else {
opj_tcd_resolution_t* l_res = l_tilec->resolutions +
l_img_comp_src->resno_decoded;
res_x0 = (OPJ_INT32)l_res->win_x0;
res_y0 = (OPJ_INT32)l_res->win_y0;
res_x1 = (OPJ_INT32)l_res->win_x1;
res_y1 = (OPJ_INT32)l_res->win_y1;
src_data_stride = l_res->win_x1 - l_res->win_x0;
p_src_data = l_tilec->data_win;
}
if (p_src_data == NULL) {
/* Happens for partial component decoding */
continue;
}
l_width_src = (OPJ_UINT32)(res_x1 - res_x0);
l_height_src = (OPJ_UINT32)(res_y1 - res_y0);
/* Current tile component size*/
/*if (i == 0) {
fprintf(stdout, "SRC: l_res_x0=%d, l_res_x1=%d, l_res_y0=%d, l_res_y1=%d\n",
res_x0, res_x1, res_y0, res_y1);
}*/
/* Border of the current output component*/
l_x0_dest = opj_uint_ceildivpow2(l_img_comp_dest->x0, l_img_comp_dest->factor);
l_y0_dest = opj_uint_ceildivpow2(l_img_comp_dest->y0, l_img_comp_dest->factor);
l_x1_dest = l_x0_dest +
l_img_comp_dest->w; /* can't overflow given that image->x1 is uint32 */
l_y1_dest = l_y0_dest + l_img_comp_dest->h;
/*if (i == 0) {
fprintf(stdout, "DEST: l_x0_dest=%d, l_x1_dest=%d, l_y0_dest=%d, l_y1_dest=%d (%d)\n",
l_x0_dest, l_x1_dest, l_y0_dest, l_y1_dest, l_img_comp_dest->factor );
}*/
/*-----*/
/* Compute the area (l_offset_x0_src, l_offset_y0_src, l_offset_x1_src, l_offset_y1_src)
* of the input buffer (decoded tile component) which will be move
* in the output buffer. Compute the area of the output buffer (l_start_x_dest,
* l_start_y_dest, l_width_dest, l_height_dest) which will be modified
* by this input area.
* */
assert(res_x0 >= 0);
assert(res_x1 >= 0);
if (l_x0_dest < (OPJ_UINT32)res_x0) {
l_start_x_dest = (OPJ_UINT32)res_x0 - l_x0_dest;
l_offset_x0_src = 0;
if (l_x1_dest >= (OPJ_UINT32)res_x1) {
l_width_dest = l_width_src;
l_offset_x1_src = 0;
} else {
l_width_dest = l_x1_dest - (OPJ_UINT32)res_x0 ;
l_offset_x1_src = (OPJ_INT32)(l_width_src - l_width_dest);
}
} else {
l_start_x_dest = 0U;
l_offset_x0_src = (OPJ_INT32)l_x0_dest - res_x0;
if (l_x1_dest >= (OPJ_UINT32)res_x1) {
l_width_dest = l_width_src - (OPJ_UINT32)l_offset_x0_src;
l_offset_x1_src = 0;
} else {
l_width_dest = l_img_comp_dest->w ;
l_offset_x1_src = res_x1 - (OPJ_INT32)l_x1_dest;
}
}
if (l_y0_dest < (OPJ_UINT32)res_y0) {
l_start_y_dest = (OPJ_UINT32)res_y0 - l_y0_dest;
l_offset_y0_src = 0;
if (l_y1_dest >= (OPJ_UINT32)res_y1) {
l_height_dest = l_height_src;
l_offset_y1_src = 0;
} else {
l_height_dest = l_y1_dest - (OPJ_UINT32)res_y0 ;
l_offset_y1_src = (OPJ_INT32)(l_height_src - l_height_dest);
}
} else {
l_start_y_dest = 0U;
l_offset_y0_src = (OPJ_INT32)l_y0_dest - res_y0;
if (l_y1_dest >= (OPJ_UINT32)res_y1) {
l_height_dest = l_height_src - (OPJ_UINT32)l_offset_y0_src;
l_offset_y1_src = 0;
} else {
l_height_dest = l_img_comp_dest->h ;
l_offset_y1_src = res_y1 - (OPJ_INT32)l_y1_dest;
}
}
if ((l_offset_x0_src < 0) || (l_offset_y0_src < 0) || (l_offset_x1_src < 0) ||
(l_offset_y1_src < 0)) {
return OPJ_FALSE;
}
/* testcase 2977.pdf.asan.67.2198 */
if ((OPJ_INT32)l_width_dest < 0 || (OPJ_INT32)l_height_dest < 0) {
return OPJ_FALSE;
}
/*-----*/
/* Compute the input buffer offset */
l_start_offset_src = (OPJ_SIZE_T)l_offset_x0_src + (OPJ_SIZE_T)l_offset_y0_src
* (OPJ_SIZE_T)src_data_stride;
/* Compute the output buffer offset */
l_start_offset_dest = (OPJ_SIZE_T)l_start_x_dest + (OPJ_SIZE_T)l_start_y_dest
* (OPJ_SIZE_T)l_img_comp_dest->w;
/* Allocate output component buffer if necessary */
if (l_img_comp_dest->data == NULL &&
l_start_offset_src == 0 && l_start_offset_dest == 0 &&
src_data_stride == l_img_comp_dest->w &&
l_width_dest == l_img_comp_dest->w &&
l_height_dest == l_img_comp_dest->h) {
/* If the final image matches the tile buffer, then borrow it */
/* directly to save a copy */
if (p_tcd->whole_tile_decoding) {
l_img_comp_dest->data = l_tilec->data;
l_tilec->data = NULL;
} else {
l_img_comp_dest->data = l_tilec->data_win;
l_tilec->data_win = NULL;
}
continue;
} else if (l_img_comp_dest->data == NULL) {
OPJ_SIZE_T l_width = l_img_comp_dest->w;
OPJ_SIZE_T l_height = l_img_comp_dest->h;
if ((l_height == 0U) || (l_width > (SIZE_MAX / l_height)) ||
l_width * l_height > SIZE_MAX / sizeof(OPJ_INT32)) {
/* would overflow */
return OPJ_FALSE;
}
l_img_comp_dest->data = (OPJ_INT32*) opj_image_data_alloc(l_width * l_height *
sizeof(OPJ_INT32));
if (! l_img_comp_dest->data) {
return OPJ_FALSE;
}
if (l_img_comp_dest->w != l_width_dest ||
l_img_comp_dest->h != l_height_dest) {
memset(l_img_comp_dest->data, 0,
(OPJ_SIZE_T)l_img_comp_dest->w * l_img_comp_dest->h * sizeof(OPJ_INT32));
}
}
/* Move the output buffer to the first place where we will write*/
l_dest_ptr = l_img_comp_dest->data + l_start_offset_dest;
{
const OPJ_INT32 * l_src_ptr = p_src_data;
l_src_ptr += l_start_offset_src;
for (j = 0; j < l_height_dest; ++j) {
memcpy(l_dest_ptr, l_src_ptr, l_width_dest * sizeof(OPJ_INT32));
l_dest_ptr += l_img_comp_dest->w;
l_src_ptr += src_data_stride;
}
}
}
return OPJ_TRUE;
}
| 0 |
[
"CWE-20"
] |
openjpeg
|
73fdf28342e4594019af26eb6a347a34eceb6296
| 109,887,170,465,419,600,000,000,000,000,000,000,000 | 216 |
opj_j2k_write_sod(): avoid potential heap buffer overflow (fixes #1299) (probably master only)
|
relay_websocket_send_http (struct t_relay_client *client,
const char *http)
{
char *message;
int length;
length = 32 + strlen (http) + 1;
message = malloc (length);
if (message)
{
snprintf (message, length, "HTTP/1.1 %s\r\n\r\n", http);
relay_client_send (client, RELAY_CLIENT_MSG_STANDARD,
message, strlen (message), NULL);
free (message);
}
}
| 0 |
[
"CWE-125"
] |
weechat
|
8b1331f98de1714bae15a9ca2e2b393ba49d735b
| 3,218,075,622,556,710,600,000,000,000,000,000,000 | 16 |
relay: fix crash when decoding a malformed websocket frame
|
Network::Socket::Type socketType() const override { return socket_->socketType(); }
| 0 |
[
"CWE-400"
] |
envoy
|
dfddb529e914d794ac552e906b13d71233609bf7
| 95,628,227,934,616,700,000,000,000,000,000,000,000 | 1 |
listener: Add configurable accepted connection limits (#153)
Add support for per-listener limits on accepted connections.
Signed-off-by: Tony Allen <[email protected]>
|
int get_pad_width(const int ngram_width) const {
// Ngrams can be padded with either a fixed pad width or a dynamic pad
// width depending on the 'pad_width' arg, but in no case should the padding
// ever be wider than 'ngram_width' - 1.
return std::min(pad_width_ < 0 ? ngram_width - 1 : pad_width_,
ngram_width - 1);
}
| 0 |
[
"CWE-703",
"CWE-787"
] |
tensorflow
|
ba424dd8f16f7110eea526a8086f1a155f14f22b
| 230,136,420,433,988,200,000,000,000,000,000,000,000 | 7 |
Enhance validation of ngram op and handle case of 0 tokens.
PiperOrigin-RevId: 369940178
Change-Id: Ia82f42c09d14efe76e7dc013505b832a42282f0b
|
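The clamp in `get_pad_width` above can be expressed in plain C: a negative `pad_width` means "dynamic" (use the widest legal padding), and in no case may the result exceed `ngram_width - 1`. A sketch of the same logic (hypothetical free function, not the TensorFlow member):

```c
/* pad_width < 0 means "dynamic": use the widest legal padding.
 * The result is always clamped to ngram_width - 1, mirroring the
 * std::min() in the original C++ method. */
static int get_pad_width(int pad_width, int ngram_width)
{
    int max_pad = ngram_width - 1;
    int want = (pad_width < 0) ? max_pad : pad_width;
    return (want < max_pad) ? want : max_pad;
}
```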
cdio_generic_read (void *user_data, void *buf, size_t size)
{
generic_img_private_t *p_env = user_data;
return read(p_env->fd, buf, size);
}
| 0 |
[
"CWE-415"
] |
libcdio
|
dec2f876c2d7162da213429bce1a7140cdbdd734
| 160,465,779,293,914,120,000,000,000,000,000,000,000 | 5 |
Removed wrong line
|
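CWE-415 (double free) fixes of the kind above are often reinforced by pairing every `free()` with nulling the pointer, so a second cleanup pass becomes a no-op. A hedged sketch of that defensive idiom (not the libcdio code itself):

```c
#include <stdlib.h>

/* Free *p and clear it so a repeated call cannot double-free. */
void safe_free(void **p)
{
    if (p && *p) {
        free(*p);
        *p = NULL;
    }
}
```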
static struct sk_buff *fq_tin_dequeue(struct fq *fq,
struct fq_tin *tin,
fq_tin_dequeue_t dequeue_func)
{
struct fq_flow *flow;
struct list_head *head;
struct sk_buff *skb;
lockdep_assert_held(&fq->lock);
begin:
head = &tin->new_flows;
if (list_empty(head)) {
head = &tin->old_flows;
if (list_empty(head))
return NULL;
}
flow = list_first_entry(head, struct fq_flow, flowchain);
if (flow->deficit <= 0) {
flow->deficit += fq->quantum;
list_move_tail(&flow->flowchain,
&tin->old_flows);
goto begin;
}
skb = dequeue_func(fq, tin, flow);
if (!skb) {
/* force a pass through old_flows to prevent starvation */
if ((head == &tin->new_flows) &&
!list_empty(&tin->old_flows)) {
list_move_tail(&flow->flowchain, &tin->old_flows);
} else {
list_del_init(&flow->flowchain);
flow->tin = NULL;
}
goto begin;
}
flow->deficit -= skb->len;
tin->tx_bytes += skb->len;
tin->tx_packets++;
return skb;
}
| 0 |
[
"CWE-330"
] |
linux
|
55667441c84fa5e0911a0aac44fb059c15ba6da2
| 151,725,938,701,259,040,000,000,000,000,000,000,000 | 46 |
net/flow_dissector: switch to siphash
UDP IPv6 packets auto flowlabels are using a 32bit secret
(static u32 hashrnd in net/core/flow_dissector.c) and
apply jhash() over fields known by the receivers.
Attackers can easily infer the 32bit secret and use this information
to identify a device and/or user, since this 32bit secret is only
set at boot time.
Really, using jhash() to generate cookies sent on the wire
is a serious security concern.
Trying to change the rol32(hash, 16) in ip6_make_flowlabel() would be
a dead end. Trying to periodically change the secret (like in sch_sfq.c)
could change paths taken in the network for long lived flows.
Let's switch to siphash, as we did in commit df453700e8d8
("inet: switch IP ID generator to siphash")
Using a cryptographically strong pseudo random function will solve this
privacy issue and more generally remove other weak points in the stack.
Packet schedulers using skb_get_hash_perturb() benefit from this change.
Fixes: b56774163f99 ("ipv6: Enable auto flow labels by default")
Fixes: 42240901f7c4 ("ipv6: Implement different admin modes for automatic flow labels")
Fixes: 67800f9b1f4e ("ipv6: Call skb_get_hash_flowi6 to get skb->hash in ip6_make_flowlabel")
Fixes: cb1ce2ef387b ("ipv6: Implement automatic flow label generation on transmit")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: Jonathan Berger <[email protected]>
Reported-by: Amit Klein <[email protected]>
Reported-by: Benny Pinkas <[email protected]>
Cc: Tom Herbert <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
CountChars(c)
int c;
{
StrCost++;
return c;
}
| 0 |
[] |
screen
|
c5db181b6e017cfccb8d7842ce140e59294d9f62
| 9,371,097,651,588,571,000,000,000,000,000,000,000 | 6 |
ansi: add support for xterm OSC 11
It allows for getting and setting the background color. Notably, Vim uses
OSC 11 to learn whether it's running on a light or dark colored terminal
and choose a color scheme accordingly.
Tested with gnome-terminal and xterm. When called with "?" argument the
current background color is returned:
$ echo -ne "\e]11;?\e\\"
$ 11;rgb:2323/2727/2929
Signed-off-by: Lubomir Rintel <[email protected]>
(cherry picked from commit 7059bff20a28778f9d3acf81cad07b1388d02309)
Signed-off-by: Amadeusz Sławiński <[email protected]>
autoar_extractor_step_decide_destination (AutoarExtractor *self)
{
/* Step 2: Decide destination */
GList *files = NULL;
GList *l;
GFile *new_destination = NULL;
g_autofree char *destination_name;
for (l = self->files_list; l != NULL; l = l->next) {
char *relative_path;
GFile *file;
relative_path = g_file_get_relative_path (self->output_file, l->data);
file = g_file_resolve_relative_path (self->destination_dir,
relative_path);
files = g_list_prepend (files, file);
g_free (relative_path);
}
files = g_list_reverse (files);
/* When it exists, the common prefix is the actual output of the extraction
* and the client has the opportunity to change it. Also, the old prefix is
* needed in order to replace it with the new one
*/
if (self->prefix != NULL) {
autoar_extractor_signal_decide_destination (self,
self->prefix,
files,
&new_destination);
self->new_prefix = new_destination;
} else {
autoar_extractor_signal_decide_destination (self,
self->destination_dir,
files,
&new_destination);
if (new_destination) {
g_object_unref (self->destination_dir);
self->destination_dir = new_destination;
}
}
destination_name = g_file_get_path (self->new_prefix != NULL ?
self->new_prefix :
self->destination_dir);
g_debug ("autoar_extractor_step_decide_destination: destination %s", destination_name);
g_file_make_directory_with_parents (self->destination_dir,
self->cancellable,
&(self->error));
if (g_error_matches (self->error, G_IO_ERROR, G_IO_ERROR_EXISTS)) {
GFileType file_type;
file_type = g_file_query_file_type (self->destination_dir,
G_FILE_QUERY_INFO_NONE,
NULL);
if (file_type == G_FILE_TYPE_DIRECTORY) {
/* FIXME: Implement a way to solve directory conflicts */
g_debug ("autoar_extractor_step_decide_destination: destination directory exists");
g_clear_error (&self->error);
}
}
g_list_free_full (files, g_object_unref);
}
| 0 |
[
"CWE-22"
] |
gnome-autoar
|
adb067e645732fdbe7103516e506d09eb6a54429
| 61,082,107,917,067,050,000,000,000,000,000,000,000 | 71 |
AutoarExtractor: Do not extract files outside the destination dir
Currently, a malicious archive can cause that the files are extracted
outside of the destination dir. This can happen if the archive contains
a file whose parent is a symbolic link, which points outside of the
destination dir. This is potentially a security threat similar to
CVE-2020-11736. Let's skip such problematic files when extracting.
Fixes: https://gitlab.gnome.org/GNOME/gnome-autoar/-/issues/7
|
void Context::setDecoderFilterCallbacks(Envoy::Http::StreamDecoderFilterCallbacks& callbacks) {
decoder_callbacks_ = &callbacks;
}
| 0 |
[
"CWE-476"
] |
envoy
|
8788a3cf255b647fd14e6b5e2585abaaedb28153
| 184,985,365,101,907,270,000,000,000,000,000,000,000 | 3 |
1.4 - Do not call into the VM unless the VM Context has been created. (#24)
* Ensure that the in VM Context is created before onDone is called.
Signed-off-by: John Plevyak <[email protected]>
* Update as per offline discussion.
Signed-off-by: John Plevyak <[email protected]>
* Set in_vm_context_created_ in onNetworkNewConnection.
Signed-off-by: John Plevyak <[email protected]>
* Add guards to other network calls.
Signed-off-by: John Plevyak <[email protected]>
* Fix common/wasm tests.
Signed-off-by: John Plevyak <[email protected]>
* Patch tests.
Signed-off-by: John Plevyak <[email protected]>
* Remove unecessary file from cherry-pick.
Signed-off-by: John Plevyak <[email protected]>
|
const char *end() const { return str + length + is_quoted(); }
| 0 |
[
"CWE-703"
] |
server
|
39feab3cd31b5414aa9b428eaba915c251ac34a2
| 312,278,987,010,072,400,000,000,000,000,000,000,000 | 1 |
MDEV-26412 Server crash in Item_field::fix_outer_field for INSERT SELECT
If an INSERT/REPLACE SELECT statement contained an ON expression in the top
level select and this expression used a subquery with a column reference
that could not be resolved then an attempt to resolve this reference as
an outer reference caused a crash of the server. This happened because the
outer context field in the Name_resolution_context structure was not set
to NULL for such references. Rather it pointed to the first element in
the select_stack.
Note that starting from 10.4 we cannot use the SELECT_LEX::outer_select()
method when parsing a SELECT construct.
Approved by Oleksandr Byelkin <[email protected]>
|
static int acm_tty_get_icount(struct tty_struct *tty,
struct serial_icounter_struct *icount)
{
struct acm *acm = tty->driver_data;
icount->dsr = acm->iocount.dsr;
icount->rng = acm->iocount.rng;
icount->dcd = acm->iocount.dcd;
icount->frame = acm->iocount.frame;
icount->overrun = acm->iocount.overrun;
icount->parity = acm->iocount.parity;
icount->brk = acm->iocount.brk;
return 0;
}
| 0 |
[
"CWE-416"
] |
linux
|
c52873e5a1ef72f845526d9f6a50704433f9c625
| 250,424,841,862,727,970,000,000,000,000,000,000,000 | 15 |
usb: cdc-acm: make sure a refcount is taken early enough
destroy() will decrement the refcount on the interface, so that
it needs to be taken so early that it never undercounts.
Fixes: 7fb57a019f94e ("USB: cdc-acm: Fix potential deadlock (lockdep warning)")
Cc: stable <[email protected]>
Reported-and-tested-by: [email protected]
Signed-off-by: Oliver Neukum <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
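The cdc-acm fix above is an instance of a general refcounting rule: take the reference before any code path that can drop it, so the count can never undercount. A minimal sketch under that assumption (hypothetical `obj_t`, not the USB core types):

```c
typedef struct {
    int refs;               /* > 0 while the object is alive */
    int destroyed;          /* set when the last ref is dropped */
} obj_t;

static void obj_get(obj_t *o) { o->refs++; }

static void obj_put(obj_t *o)
{
    if (--o->refs == 0)
        o->destroyed = 1;   /* stand-in for actually freeing the object */
}

/* Setup path whose error branch unconditionally drops a reference,
 * like destroy() in the commit message. Taking our ref FIRST means
 * the error-path put releases only the ref we own. */
static int setup(obj_t *o, int fail)
{
    obj_get(o);             /* ref taken early, before any failure path */
    if (fail) {
        obj_put(o);         /* cleanup drops the ref taken above */
        return -1;
    }
    return 0;
}
```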
int unexpected(DviContext *dvi, int opcode)
{
dvierr(dvi, _("unexpected opcode %d\n"), opcode);
return -1;
}
| 0 |
[
"CWE-20"
] |
evince
|
d4139205b010ed06310d14284e63114e88ec6de2
| 68,952,234,594,631,880,000,000,000,000,000,000,000 | 5 |
backends: Fix several security issues in the dvi-backend.
See CVE-2010-2640, CVE-2010-2641, CVE-2010-2642 and CVE-2010-2643.
|
static int smack_sk_alloc_security(struct sock *sk, int family, gfp_t gfp_flags)
{
char *csp = current_security();
struct socket_smack *ssp;
ssp = kzalloc(sizeof(struct socket_smack), gfp_flags);
if (ssp == NULL)
return -ENOMEM;
ssp->smk_in = csp;
ssp->smk_out = csp;
ssp->smk_packet[0] = '\0';
sk->sk_security = ssp;
return 0;
}
| 0 |
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
| 179,350,751,653,330,250,000,000,000,000,000,000,000 | 17 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
NORET_TYPE void do_exit(long code)
{
struct task_struct *tsk = current;
int group_dead;
profile_task_exit(tsk);
WARN_ON(atomic_read(&tsk->fs_excl));
if (unlikely(in_interrupt()))
panic("Aiee, killing interrupt handler!");
if (unlikely(!tsk->pid))
panic("Attempted to kill the idle task!");
tracehook_report_exit(&code);
validate_creds_for_do_exit(tsk);
/*
* We're taking recursive faults here in do_exit. Safest is to just
* leave this task alone and wait for reboot.
*/
if (unlikely(tsk->flags & PF_EXITING)) {
printk(KERN_ALERT
"Fixing recursive fault but reboot is needed!\n");
/*
* We can do this unlocked here. The futex code uses
* this flag just to verify whether the pi state
* cleanup has been done or not. In the worst case it
* loops once more. We pretend that the cleanup was
* done as there is no way to return. Either the
* OWNER_DIED bit is set by now or we push the blocked
* task into the wait for ever nirwana as well.
*/
tsk->flags |= PF_EXITPIDONE;
set_current_state(TASK_UNINTERRUPTIBLE);
schedule();
}
exit_irq_thread();
exit_signals(tsk); /* sets PF_EXITING */
/*
* tsk->flags are checked in the futex code to protect against
* an exiting task cleaning up the robust pi futexes.
*/
smp_mb();
spin_unlock_wait(&tsk->pi_lock);
if (unlikely(in_atomic()))
printk(KERN_INFO "note: %s[%d] exited with preempt_count %d\n",
current->comm, task_pid_nr(current),
preempt_count());
acct_update_integrals(tsk);
group_dead = atomic_dec_and_test(&tsk->signal->live);
if (group_dead) {
hrtimer_cancel(&tsk->signal->real_timer);
exit_itimers(tsk->signal);
if (tsk->mm)
setmax_mm_hiwater_rss(&tsk->signal->maxrss, tsk->mm);
}
acct_collect(code, group_dead);
if (group_dead)
tty_audit_exit();
if (unlikely(tsk->audit_context))
audit_free(tsk);
tsk->exit_code = code;
taskstats_exit(tsk, group_dead);
exit_mm(tsk);
if (group_dead)
acct_process();
trace_sched_process_exit(tsk);
exit_sem(tsk);
exit_files(tsk);
exit_fs(tsk);
check_stack_usage();
exit_thread();
cgroup_exit(tsk, 1);
if (group_dead && tsk->signal->leader)
disassociate_ctty(1);
module_put(task_thread_info(tsk)->exec_domain->module);
proc_exit_connector(tsk);
/*
* Flush inherited counters to the parent - before the parent
* gets woken up by child-exit notifications.
*/
perf_event_exit_task(tsk);
exit_notify(tsk, group_dead);
#ifdef CONFIG_NUMA
mpol_put(tsk->mempolicy);
tsk->mempolicy = NULL;
#endif
#ifdef CONFIG_FUTEX
if (unlikely(current->pi_state_cache))
kfree(current->pi_state_cache);
#endif
/*
* Make sure we are holding no locks:
*/
debug_check_no_locks_held(tsk);
/*
* We can do this unlocked here. The futex code uses this flag
* just to verify whether the pi state cleanup has been done
* or not. In the worst case it loops once more.
*/
tsk->flags |= PF_EXITPIDONE;
if (tsk->io_context)
exit_io_context(tsk);
if (tsk->splice_pipe)
__free_pipe_info(tsk->splice_pipe);
validate_creds_for_do_exit(tsk);
preempt_disable();
exit_rcu();
/* causes final put_task_struct in finish_task_switch(). */
tsk->state = TASK_DEAD;
schedule();
BUG();
/* Avoid "noreturn function does return". */
for (;;)
cpu_relax(); /* For when BUG is null */
}
| 0 |
[
"CWE-20",
"CWE-703",
"CWE-400"
] |
linux
|
b69f2292063d2caf37ca9aec7d63ded203701bf3
| 338,485,346,983,087,950,000,000,000,000,000,000,000 | 136 |
block: Fix io_context leak after failure of clone with CLONE_IO
With CLONE_IO, parent's io_context->nr_tasks is incremented, but never
decremented whenever copy_process() fails afterwards, which prevents
exit_io_context() from calling IO schedulers exit functions.
Give a task_struct to exit_io_context(), and call exit_io_context() instead of
put_io_context() in copy_process() cleanup path.
Signed-off-by: Louis Rilling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
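The CLONE_IO commit message above describes a classic paired-counter leak: `nr_tasks` is incremented early in `copy_process()`, but the error paths never decremented it. A minimal sketch of the pattern (the `io_ctx_t` type and `clone_with_io` helper are illustrative names, not the kernel's actual code):

```c
/* Hypothetical sketch of the leak fixed above: a counter bumped at the
 * start of clone must be dropped on every failure path, not only on
 * normal task exit. */
typedef struct {
    int nr_tasks;   /* stands in for io_context->nr_tasks */
} io_ctx_t;

/* Returns 0 on success, -1 on a simulated late failure. */
int clone_with_io(io_ctx_t *ioc, int fail_later)
{
    ioc->nr_tasks++;            /* taken early, as with CLONE_IO */
    if (fail_later) {
        ioc->nr_tasks--;        /* the fix: undo on the error path */
        return -1;
    }
    return 0;
}
```

Without the decrement on the error path, every failed clone would leave the counter permanently raised, which is what prevented the I/O schedulers' exit functions from ever running.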
void __skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb, int len);
static inline void skb_free_datagram_locked(struct sock *sk,
struct sk_buff *skb)
{
__skb_free_datagram_locked(sk, skb, 0);
| 0 |
[
"CWE-20"
] |
linux
|
2b16f048729bf35e6c28a40cbfad07239f9dcd90
| 65,148,270,407,341,730,000,000,000,000,000,000,000 | 5 |
net: create skb_gso_validate_mac_len()
If you take a GSO skb, and split it into packets, will the MAC
length (L2 + L3 + L4 headers + payload) of those packets be small
enough to fit within a given length?
Move skb_gso_mac_seglen() to skbuff.h with other related functions
like skb_gso_network_seglen() so we can use it, and then create
skb_gso_validate_mac_len to do the full calculation.
Signed-off-by: Daniel Axtens <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
static void zend_error_va(int type, const char *file, uint lineno, const char *format, ...) /* {{{ */
{
va_list args;
va_start(args, format);
zend_error_cb(type, file, lineno, format, args);
va_end(args);
}
| 0 |
[] |
php-src
|
a894a8155fab068d68a04bf181dbaddfa01ccbb0
| 313,633,724,532,951,250,000,000,000,000,000,000,000 | 8 |
More fixes for bug #69152
|
write32 (FILE *f,
guint32 *data,
gint count)
{
gint i;
for (i = 0; i < count; i++)
data[i] = GUINT32_TO_LE (data[i]);
return write8 (f, (guint8*) data, count * 4);
}
| 0 |
[
"CWE-787"
] |
gdk-pixbuf
|
88af50a864195da1a4f7bda5f02539704fbda599
| 96,544,123,685,908,500,000,000,000,000,000,000,000 | 11 |
ico: Be more careful when parsing headers
There is some redundancy between the ico directory and the
bitmap image header. If the two disagree on the icon dimensions,
just toss the image, instead of risking crashes or OOM later. Also
add some more debug spew that helped in tracking this down, and
make error messages more unique.
The commit also includes a test image that has an example of
this discrepancy and triggers the early exit.
https://bugzilla.gnome.org/show_bug.cgi?id=769170
|
static void ext4_ext_show_path(struct inode *inode, struct ext4_ext_path *path)
{
int k, l = path->p_depth;
ext_debug("path:");
for (k = 0; k <= l; k++, path++) {
if (path->p_idx) {
ext_debug(" %d->%llu", le32_to_cpu(path->p_idx->ei_block),
ext4_idx_pblock(path->p_idx));
} else if (path->p_ext) {
ext_debug(" %d:[%d]%d:%llu ",
le32_to_cpu(path->p_ext->ee_block),
ext4_ext_is_uninitialized(path->p_ext),
ext4_ext_get_actual_len(path->p_ext),
ext4_ext_pblock(path->p_ext));
} else
ext_debug(" []");
}
ext_debug("\n");
}
| 0 |
[
"CWE-362"
] |
linux-2.6
|
dee1f973ca341c266229faa5a1a5bb268bed3531
| 56,094,061,843,982,430,000,000,000,000,000,000,000 | 20 |
ext4: race-condition protection for ext4_convert_unwritten_extents_endio
We assumed that at the time we call ext4_convert_unwritten_extents_endio()
extent in question is fully inside [map.m_lblk, map->m_len] because
it was already split during submission. But this may not be true due to
a race between writeback vs fallocate.
If extent in question is larger than requested we will split it again.
Special precautions should be taken if zeroout is required because
[map.m_lblk, map->m_len] already contains valid data.
Signed-off-by: Dmitry Monakhov <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Cc: [email protected]
|
static int parse_arg_object_to_string(zval **arg, char **p, int *pl, int type TSRMLS_DC) /* {{{ */
{
if (Z_OBJ_HANDLER_PP(arg, cast_object)) {
zval *obj;
MAKE_STD_ZVAL(obj);
if (Z_OBJ_HANDLER_P(*arg, cast_object)(*arg, obj, type TSRMLS_CC) == SUCCESS) {
zval_ptr_dtor(arg);
*arg = obj;
*pl = Z_STRLEN_PP(arg);
*p = Z_STRVAL_PP(arg);
return SUCCESS;
}
efree(obj);
}
/* Standard PHP objects */
if (Z_OBJ_HT_PP(arg) == &std_object_handlers || !Z_OBJ_HANDLER_PP(arg, cast_object)) {
SEPARATE_ZVAL_IF_NOT_REF(arg);
if (zend_std_cast_object_tostring(*arg, *arg, type TSRMLS_CC) == SUCCESS) {
*pl = Z_STRLEN_PP(arg);
*p = Z_STRVAL_PP(arg);
return SUCCESS;
}
}
if (!Z_OBJ_HANDLER_PP(arg, cast_object) && Z_OBJ_HANDLER_PP(arg, get)) {
int use_copy;
zval *z = Z_OBJ_HANDLER_PP(arg, get)(*arg TSRMLS_CC);
Z_ADDREF_P(z);
if(Z_TYPE_P(z) != IS_OBJECT) {
zval_dtor(*arg);
Z_TYPE_P(*arg) = IS_NULL;
zend_make_printable_zval(z, *arg, &use_copy);
if (!use_copy) {
ZVAL_ZVAL(*arg, z, 1, 1);
}
*pl = Z_STRLEN_PP(arg);
*p = Z_STRVAL_PP(arg);
return SUCCESS;
}
zval_ptr_dtor(&z);
}
return FAILURE;
}
| 0 |
[
"CWE-416"
] |
php-src
|
0e6fe3a4c96be2d3e88389a5776f878021b4c59f
| 327,761,518,546,504,300,000,000,000,000,000,000,000 | 42 |
Fix bug #73147: Use After Free in PHP7 unserialize()
|
static char ssl_next_proto_validate(unsigned char *d, unsigned len)
{
unsigned int off = 0;
while (off < len) {
if (d[off] == 0)
return 0;
off += d[off];
off++;
}
return off == len;
}
| 0 |
[] |
openssl
|
76343947ada960b6269090638f5391068daee88d
| 196,696,704,964,627,560,000,000,000,000,000,000,000 | 13 |
Fix for CVE-2015-0291
If a client renegotiates using an invalid signature algorithms extension
it will crash a server with a NULL pointer dereference.
Thanks to David Ramos of Stanford University for reporting this bug.
CVE-2015-0291
Reviewed-by: Tim Hudson <[email protected]>
Conflicts:
ssl/t1_lib.c
|
static int item_val_int(struct st_mysql_value *value, long long *buf)
{
Item *item= ((st_item_value_holder*)value)->item;
*buf= item->val_int();
if (item->is_null())
return 1;
return 0;
}
| 0 |
[
"CWE-416"
] |
server
|
c05fd700970ad45735caed3a6f9930d4ce19a3bd
| 318,559,720,795,476,760,000,000,000,000,000,000,000 | 8 |
MDEV-26323 use-after-poison issue of MariaDB server
|
uint64_t smb_roundup(connection_struct *conn, uint64_t val)
{
uint64_t rval = lp_allocation_roundup_size(SNUM(conn));
/* Only roundup for Windows clients. */
enum remote_arch_types ra_type = get_remote_arch();
if (rval && (ra_type != RA_SAMBA) && (ra_type != RA_CIFSFS)) {
val = SMB_ROUNDUP(val,rval);
}
return val;
}
| 0 |
[
"CWE-22"
] |
samba
|
bd269443e311d96ef495a9db47d1b95eb83bb8f4
| 131,758,393,273,664,440,000,000,000,000,000,000,000 | 11 |
Fix bug 7104 - "wide links" and "unix extensions" are incompatible.
Change parameter "wide links" to default to "no".
Ensure "wide links = no" if "unix extensions = yes" on a share.
Fix man pages to reflect this.
Remove "within share" checks for a UNIX symlink set - even if
widelinks = no. The server will not follow that link anyway.
Correct DEBUG message in check_reduced_name() to add missing "\n"
so it's really clear when a path is being denied as it's outside
the enclosing share path.
Jeremy.
|
static int bpf_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
{
struct inode *inode;
if (bpf_dname_reserved(dentry))
return -EPERM;
inode = bpf_get_inode(dir->i_sb, dir, mode | S_IFDIR);
if (IS_ERR(inode))
return PTR_ERR(inode);
inode->i_op = &bpf_dir_iops;
inode->i_fop = &simple_dir_operations;
inc_nlink(inode);
inc_nlink(dir);
d_instantiate(dentry, inode);
dget(dentry);
return 0;
}
| 0 |
[
"CWE-703"
] |
linux
|
92117d8443bc5afacc8d5ba82e541946310f106e
| 161,028,141,806,735,700,000,000,000,000,000,000,000 | 22 |
bpf: fix refcnt overflow
On a system with >32Gbyte of physical memory and infinite RLIMIT_MEMLOCK,
the malicious application may overflow 32-bit bpf program refcnt.
It's also possible to overflow map refcnt on 1Tb system.
Impose 32k hard limit which means that the same bpf program or
map cannot be shared by more than 32k processes.
Fixes: 1be7f75d1668 ("bpf: enable non-root eBPF programs")
Reported-by: Jann Horn <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
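The bpf commit above fixes a refcount overflow by imposing a 32k hard limit on sharing. A minimal sketch of that class of fix, refusing a new reference at the cap instead of letting the counter wrap (the `ref_inc_capped` helper and its limit constant are illustrative, not the kernel's API):

```c
/* Hypothetical illustration of the fix: stop taking references at a
 * hard limit rather than letting a counter wrap back to zero, which
 * would trigger a premature free while users still exist. The 32k
 * value mirrors the commit message. */
#define REF_HARD_LIMIT 32768

/* Returns 1 if the reference was taken, 0 if the cap was reached. */
int ref_inc_capped(unsigned int *refcnt)
{
    if (*refcnt >= REF_HARD_LIMIT)
        return 0;               /* reject: would head toward overflow */
    (*refcnt)++;
    return 1;
}
```

The caller then fails the share operation cleanly (e.g. with -EBUSY in the real patch) instead of corrupting the object's lifetime.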
void gf_bs_prevent_dispatch(GF_BitStream *bs, Bool prevent_dispatch)
{
if (!bs) return;
if (prevent_dispatch) {
bs->prevent_dispatch ++;
return;
}
if (!bs->prevent_dispatch) return;
bs->prevent_dispatch --;
if (bs->on_block_out && !bs->prevent_dispatch) {
assert(bs->position >= bs->bytes_out);
if (bs->position > bs->bytes_out) {
bs->on_block_out(bs->usr_data, bs->original, (u32) (bs->position - bs->bytes_out));
bs->bytes_out = bs->position;
}
}
}
| 0 |
[
"CWE-617",
"CWE-703"
] |
gpac
|
9ea93a2ec8f555ceed1ee27294cf94822f14f10f
| 19,279,048,744,489,270,000,000,000,000,000,000,000 | 18 |
fixed #2165
|
bytes_lstrip_impl(PyBytesObject *self, PyObject *bytes)
/*[clinic end generated code: output=28602e586f524e82 input=88811b09dfbc2988]*/
{
return do_argstrip(self, LEFTSTRIP, bytes);
}
| 0 |
[
"CWE-190"
] |
cpython
|
fd8614c5c5466a14a945db5b059c10c0fb8f76d9
| 119,085,472,523,933,660,000,000,000,000,000,000,000 | 5 |
bpo-30657: Fix CVE-2017-1000158 (#4664)
Fixes possible integer overflow in PyBytes_DecodeEscape.
Co-Authored-By: Jay Bosamiya <[email protected]>
|
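The CPython fix above addresses an integer overflow in a size computation. A minimal sketch of the general technique, checking a multiplication against the type's maximum before allocating (the `checked_mul_size` helper is an illustrative name, not CPython's internal API):

```c
#include <stddef.h>

/* Hypothetical sketch of the class of fix in CVE-2017-1000158:
 * verify that len * factor cannot exceed SIZE_MAX before computing
 * it, instead of multiplying first and allocating a too-small buffer. */

/* Returns 1 and stores the product in *out, or 0 on would-be overflow. */
int checked_mul_size(size_t len, size_t factor, size_t *out)
{
    if (factor != 0 && len > (size_t)-1 / factor)
        return 0;               /* product would overflow size_t */
    *out = len * factor;
    return 1;
}
```

A caller would treat a 0 return as an allocation-size error (CPython raises `OverflowError` at the equivalent point) rather than proceeding with a wrapped value.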
int security_task_setioprio(struct task_struct *p, int ioprio)
{
return security_ops->task_setioprio(p, ioprio);
}
| 0 |
[] |
linux-2.6
|
ee18d64c1f632043a02e6f5ba5e045bb26a5465f
| 145,773,552,185,037,300,000,000,000,000,000,000,000 | 4 |
KEYS: Add a keyctl to install a process's session keyring on its parent [try #6]
Add a keyctl to install a process's session keyring onto its parent. This
replaces the parent's session keyring. Because the COW credential code does
not permit one process to change another process's credentials directly, the
change is deferred until userspace next starts executing again. Normally this
will be after a wait*() syscall.
To support this, three new security hooks have been provided:
cred_alloc_blank() to allocate unset security creds, cred_transfer() to fill in
the blank security creds and key_session_to_parent() - which asks the LSM if
the process may replace its parent's session keyring.
The replacement may only happen if the process has the same ownership details
as its parent, and the process has LINK permission on the session keyring, and
the session keyring is owned by the process, and the LSM permits it.
Note that this requires alteration to each architecture's notify_resume path.
This has been done for all arches barring blackfin, m68k* and xtensa, all of
which need assembly alteration to support TIF_NOTIFY_RESUME. This allows the
replacement to be performed at the point the parent process resumes userspace
execution.
This allows the userspace AFS pioctl emulation to fully emulate newpag() and
the VIOCSETTOK and VIOCSETTOK2 pioctls, all of which require the ability to
alter the parent process's PAG membership. However, since kAFS doesn't use
PAGs per se, but rather dumps the keys into the session keyring, the session
keyring of the parent must be replaced if, for example, VIOCSETTOK is passed
the newpag flag.
This can be tested with the following program:
#include <stdio.h>
#include <stdlib.h>
#include <keyutils.h>
#define KEYCTL_SESSION_TO_PARENT 18
#define OSERROR(X, S) do { if ((long)(X) == -1) { perror(S); exit(1); } } while(0)
int main(int argc, char **argv)
{
key_serial_t keyring, key;
long ret;
keyring = keyctl_join_session_keyring(argv[1]);
OSERROR(keyring, "keyctl_join_session_keyring");
key = add_key("user", "a", "b", 1, keyring);
OSERROR(key, "add_key");
ret = keyctl(KEYCTL_SESSION_TO_PARENT);
OSERROR(ret, "KEYCTL_SESSION_TO_PARENT");
return 0;
}
Compiled and linked with -lkeyutils, you should see something like:
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
355907932 --alswrv 4043 -1 \_ keyring: _uid.4043
[dhowells@andromeda ~]$ /tmp/newpag
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: _ses
1055658746 --alswrv 4043 4043 \_ user: a
[dhowells@andromeda ~]$ /tmp/newpag hello
[dhowells@andromeda ~]$ keyctl show
Session Keyring
-3 --alswrv 4043 4043 keyring: hello
340417692 --alswrv 4043 4043 \_ user: a
Where the test program creates a new session keyring, sticks a user key named
'a' into it and then installs it on its parent.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
vmod_regsub(VRT_CTX, VCL_HTTP hp, VCL_REGEX re,
VCL_STRING sub, VCL_BOOL all)
{
CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
CHECK_OBJ_NOTNULL(hp, HTTP_MAGIC);
AN(re);
for (unsigned u = HTTP_HDR_FIRST; u < hp->nhd; u++) {
const char *hdr;
VCL_STRING rewrite;
Tcheck(hp->hd[u]);
hdr = hp->hd[u].b;
if (!VRT_re_match(ctx, hdr, re))
continue;
rewrite = VRT_regsub(ctx, all, hdr, re, sub);
if (rewrite == hdr)
continue;
http_VSLH_del(hp, u);
hp->hd[u].b = rewrite;
hp->hd[u].e = strchr(rewrite, '\0');
http_VSLH(hp, u);
}
}
| 0 |
[
"CWE-476"
] |
varnish-modules
|
2c120e576ebb73bc247790184702ba58dc0afc39
| 261,053,922,707,714,360,000,000,000,000,000,000,000 | 24 |
Check VRT_StrandsWS() return value
Fixes: VSV00006
|
**/
CImg<T>& load_pfm(const char *const filename) {
return _load_pfm(0,filename);
| 0 |
[
"CWE-125"
] |
CImg
|
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
| 155,178,225,184,449,190,000,000,000,000,000,000,000 | 3 |
Fix other issues in 'CImg<T>::load_bmp()'.
|
static int check_kill_permission(int sig, struct kernel_siginfo *info,
struct task_struct *t)
{
struct pid *sid;
int error;
if (!valid_signal(sig))
return -EINVAL;
if (!si_fromuser(info))
return 0;
error = audit_signal_info(sig, t); /* Let audit system see the signal */
if (error)
return error;
if (!same_thread_group(current, t) &&
!kill_ok_by_cred(t)) {
switch (sig) {
case SIGCONT:
sid = task_session(t);
/*
* We don't return the error if sid == NULL. The
* task was unhashed, the caller must notice this.
*/
if (!sid || sid == task_session(current))
break;
/* fall through */
default:
return -EPERM;
}
}
return security_task_kill(t, info, sig, NULL);
}
| 0 |
[
"CWE-190"
] |
linux
|
d1e7fd6462ca9fc76650fbe6ca800e35b24267da
| 91,227,278,621,405,300,000,000,000,000,000,000,000 | 35 |
signal: Extend exec_id to 64bits
Replace the 32bit exec_id with a 64bit exec_id to make it impossible
to wrap the exec_id counter. With care an attacker can cause exec_id
wrap and send arbitrary signals to a newly exec'd parent. This
bypasses the signal sending checks if the parent changes their
credentials during exec.
The severity of this problem can been seen that in my limited testing
of a 32bit exec_id it can take as little as 19s to exec 65536 times.
Which means that it can take as little as 14 days to wrap a 32bit
exec_id. Adam Zabrocki has succeeded wrapping the self_exe_id in 7
days. Even my slower timing is in the uptime of a typical server.
Which means self_exec_id is simply a speed bump today, and if exec
gets noticeably faster self_exec_id won't even be a speed bump.
Extending self_exec_id to 64bits introduces a problem on 32bit
architectures where reading self_exec_id is no longer atomic and can
take two read instructions. Which means that it is possible to hit
a window where the read value of exec_id does not match the written
value. So with very lucky timing after this change this still
remains expoiltable.
I have updated the update of exec_id on exec to use WRITE_ONCE
and the read of exec_id in do_notify_parent to use READ_ONCE
to make it clear that there is no locking between these two
locations.
Link: https://lore.kernel.org/kernel-hardening/[email protected]
Fixes: 2.3.23pre2
Cc: [email protected]
Signed-off-by: "Eric W. Biederman" <[email protected]>
|
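The exec_id commit above widens a per-task counter from 32 to 64 bits because a 32-bit value wraps after 2^32 execs, letting a stale saved id compare equal again. A tiny demonstration of the wrap itself (the `wrap32_after` helper is purely illustrative):

```c
#include <stdint.h>

/* Illustration of why the commit widens exec_id to 64 bits: a 32-bit
 * counter incremented once per exec eventually returns to a previously
 * used value, so an attacker who saved an old id can match it again. */
uint32_t wrap32_after(uint32_t start, uint64_t increments)
{
    /* Arithmetic is done in 64 bits, then truncated mod 2^32,
     * modelling repeated 32-bit increments. */
    return (uint32_t)(start + increments);
}
```

After exactly 2^32 increments the counter is back at its starting value; a 64-bit counter makes that wrap practically unreachable at the exec rates the message measures.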
zend_function *spl_filesystem_object_get_method_check(zend_object **object, zend_string *method, const zval *key) /* {{{ */
{
spl_filesystem_object *fsobj = spl_filesystem_from_obj(*object);
if (fsobj->u.dir.dirp == NULL && fsobj->orig_path == NULL) {
zend_function *func;
zend_string *tmp = zend_string_init("_bad_state_ex", sizeof("_bad_state_ex") - 1, 0);
func = zend_get_std_object_handlers()->get_method(object, tmp, NULL);
zend_string_release(tmp);
return func;
}
return zend_get_std_object_handlers()->get_method(object, method, key);
}
| 0 |
[
"CWE-74"
] |
php-src
|
a5a15965da23c8e97657278fc8dfbf1dfb20c016
| 300,540,077,702,147,130,000,000,000,000,000,000,000 | 14 |
Fix #78863: DirectoryIterator class silently truncates after a null byte
Since the constructor of DirectoryIterator and friends is supposed to
accept paths (i.e. strings without NUL bytes), we must not accept
arbitrary strings.
|
void __cpuinit init_idle(struct task_struct *idle, int cpu)
{
struct rq *rq = cpu_rq(cpu);
unsigned long flags;
raw_spin_lock_irqsave(&rq->lock, flags);
__sched_fork(idle);
idle->state = TASK_RUNNING;
idle->se.exec_start = sched_clock();
cpumask_copy(&idle->cpus_allowed, cpumask_of(cpu));
/*
* We're having a chicken and egg problem, even though we are
* holding rq->lock, the cpu isn't yet set to this cpu so the
* lockdep check in task_group() will fail.
*
* Similar case to sched_fork(). / Alternatively we could
* use task_rq_lock() here and obtain the other rq->lock.
*
* Silence PROVE_RCU
*/
rcu_read_lock();
__set_task_cpu(idle, cpu);
rcu_read_unlock();
rq->curr = rq->idle = idle;
#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
idle->oncpu = 1;
#endif
raw_spin_unlock_irqrestore(&rq->lock, flags);
/* Set the preempt count _outside_ the spinlocks! */
#if defined(CONFIG_PREEMPT)
task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
#else
task_thread_info(idle)->preempt_count = 0;
#endif
/*
* The idle tasks have their own, simple scheduling class:
*/
idle->sched_class = &idle_sched_class;
ftrace_graph_init_task(idle);
}
| 0 |
[
"CWE-703",
"CWE-835"
] |
linux
|
f26f9aff6aaf67e9a430d16c266f91b13a5bff64
| 23,757,380,416,944,440,000,000,000,000,000,000,000 | 44 |
Sched: fix skip_clock_update optimization
idle_balance() drops/retakes rq->lock, leaving the previous task
vulnerable to set_tsk_need_resched(). Clear it after we return
from balancing instead, and in setup_thread_stack() as well, so
no successfully descheduled or never scheduled task has it set.
Need resched confused the skip_clock_update logic, which assumes
that the next call to update_rq_clock() will come nearly immediately
after being set. Make the optimization robust against the waking
a sleeper before it sucessfully deschedules case by checking that
the current task has not been dequeued before setting the flag,
since it is that useless clock update we're trying to save, and
clear unconditionally in schedule() proper instead of conditionally
in put_prev_task().
Signed-off-by: Mike Galbraith <[email protected]>
Reported-by: Bjoern B. Brandenburg <[email protected]>
Tested-by: Yong Zhang <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: [email protected]
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
getline_equal(
char_u *(*fgetline)(int, void *, int),
void *cookie UNUSED, /* argument for fgetline() */
char_u *(*func)(int, void *, int))
{
#ifdef FEAT_EVAL
char_u *(*gp)(int, void *, int);
struct loop_cookie *cp;
/* When "fgetline" is "get_loop_line()" use the "cookie" to find the
* function that's originally used to obtain the lines. This may be
* nested several levels. */
gp = fgetline;
cp = (struct loop_cookie *)cookie;
while (gp == get_loop_line)
{
gp = cp->getline;
cp = cp->cookie;
}
return gp == func;
#else
return fgetline == func;
#endif
}
| 0 |
[
"CWE-78"
] |
vim
|
8c62a08faf89663e5633dc5036cd8695c80f1075
| 89,586,100,044,232,010,000,000,000,000,000,000,000 | 24 |
patch 8.1.0881: can execute shell commands in rvim through interfaces
Problem: Can execute shell commands in rvim through interfaces.
Solution: Disable using interfaces in restricted mode. Allow for writing
file with writefile(), histadd() and a few others.
|
double js_tonumber(js_State *J, int idx)
{
return jsV_tonumber(J, stackidx(J, idx));
}
| 0 |
[
"CWE-476"
] |
mujs
|
77ab465f1c394bb77f00966cd950650f3f53cb24
| 120,115,160,408,631,730,000,000,000,000,000,000,000 | 4 |
Fix 697401: Error when dropping extra arguments to lightweight functions.
|
static int query_raw_packet_qp_rq_state(struct mlx5_ib_dev *dev,
struct mlx5_ib_rq *rq,
u8 *rq_state)
{
void *out;
void *rqc;
int inlen;
int err;
inlen = MLX5_ST_SZ_BYTES(query_rq_out);
out = kvzalloc(inlen, GFP_KERNEL);
if (!out)
return -ENOMEM;
err = mlx5_core_query_rq(dev->mdev, rq->base.mqp.qpn, out);
if (err)
goto out;
rqc = MLX5_ADDR_OF(query_rq_out, out, rq_context);
*rq_state = MLX5_GET(rqc, rqc, state);
rq->state = *rq_state;
out:
kvfree(out);
return err;
}
| 0 |
[
"CWE-119",
"CWE-787"
] |
linux
|
0625b4ba1a5d4703c7fb01c497bd6c156908af00
| 5,531,552,518,647,873,000,000,000,000,000,000,000 | 26 |
IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <[email protected]>
Acked-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
dummy_callback (gpointer data)
{
return G_SOURCE_CONTINUE;
}
| 0 |
[
"CWE-295"
] |
glib-networking
|
29513946809590c4912550f6f8620468f9836d94
| 19,525,196,826,345,528,000,000,000,000,000,000,000 | 4 |
Return bad identity error if identity is unset
When the server-identity property of GTlsClientConnection is unset, the
documentation says we need to fail the certificate verification with
G_TLS_CERTIFICATE_BAD_IDENTITY. This is important because otherwise,
it's easy for applications to fail to specify server identity.
Unfortunately, we did not correctly implement the intended, documented
behavior. When server identity is missing, we check the validity of the
TLS certificate, but do not check if it corresponds to the expected
server (since we have no expected server). Then we assume the identity
is good, instead of returning bad identity, as documented. This means,
for example, that evil.com can present a valid certificate issued to
evil.com, and we would happily accept it for paypal.com.
Fixes #135
|
bool TLSWrap::IsClosing() {
return underlying_stream()->IsClosing();
}
| 0 |
[
"CWE-416"
] |
node
|
7f178663ebffc82c9f8a5a1b6bf2da0c263a30ed
| 101,174,024,115,042,660,000,000,000,000,000,000,000 | 3 |
src: use unique_ptr for WriteWrap
This commit attempts to avoid a use-after-free error by using unique_ptr
and passing a reference to it.
CVE-ID: CVE-2020-8265
Fixes: https://github.com/nodejs-private/node-private/issues/227
PR-URL: https://github.com/nodejs-private/node-private/pull/238
Reviewed-By: Michael Dawson <[email protected]>
Reviewed-By: Tobias Nießen <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
|
gdk_pixbuf_loader_get_type (void)
{
static GType loader_type = 0;
if (!loader_type)
{
static const GTypeInfo loader_info = {
sizeof (GdkPixbufLoaderClass),
(GBaseInitFunc) NULL,
(GBaseFinalizeFunc) NULL,
(GClassInitFunc) gdk_pixbuf_loader_class_init,
NULL, /* class_finalize */
NULL, /* class_data */
sizeof (GdkPixbufLoader),
0, /* n_preallocs */
(GInstanceInitFunc) gdk_pixbuf_loader_init
};
loader_type = g_type_register_static (G_TYPE_OBJECT,
"GdkPixbufLoader",
&loader_info,
0);
}
return loader_type;
}
| 0 |
[
"CWE-20"
] |
gdk-pixbuf
|
3bac204e0d0241a0d68586ece7099e6acf0e9bea
| 74,402,747,947,637,290,000,000,000,000,000,000,000 | 26 |
Initial stab at getting the focus code to work.
Fri Jun 1 18:54:47 2001 Jonathan Blandford <[email protected]>
* gtk/gtktreeview.c: (gtk_tree_view_focus): Initial stab at
getting the focus code to work.
(gtk_tree_view_class_init): Add a bunch of keybindings.
* gtk/gtktreeviewcolumn.c
(gtk_tree_view_column_set_cell_data_func):
s/GtkCellDataFunc/GtkTreeCellDataFunc.
(_gtk_tree_view_column_set_tree_view): Use "notify::model" instead
of "properties_changed" to help justify the death of the latter
signal. (-:
* tests/testtreefocus.c (main): Let some columns be focussable to
test focus better.
|