commit_msg | commit_hash | project | source | labels | repo_url | commit_url | commit_date |
---|---|---|---|---|---|---|---|
Fix: server will crash if rdbload or rdbsave method is not provided in module (#8670)
With this fix, module data type registration will fail if the load or save callbacks are not defined, or the optional aux load and save callbacks are not either both defined or both missing. | 808f3004f0de8c129b3067d8b2ce5002fa703e77 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/808f3004f0de8c129b3067d8b2ce5002fa703e77 | 2021-04-06 17:09:36+08:00 |
Fix typos and limit unknown command error message (#10634)
minor cleanup for recent changes. | 119ec91a5aa9b655d700d911eae68e8a5fa694d4 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/119ec91a5aa9b655d700d911eae68e8a5fa694d4 | 2022-04-25 22:59:39+08:00 |
fix race in diskless load cluster tests (#8674)
| 2f717c156a0bca757b8a8dfacf27e9cbeb60f99d | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/2f717c156a0bca757b8a8dfacf27e9cbeb60f99d | 2021-03-22 10:51:13+02:00 |
Try to fix SENTINEL SIMULATE-FAILURE test by re-source init-tests before each test (#12194)
This test was introduced in #12079, it works well most of the time, but
occasionally fails:
```
00:34:45> SENTINEL SIMULATE-FAILURE crash-after-election works: OK
00:34:45> SENTINEL SIMULATE-FAILURE crash-after-promotion works: FAILED: Sentinel set crash-after-promotion but did not exit
```
The reason is unknown; it may be affected by the exit of the previous
crash-after-election test. Since it doesn't really make much sense to
dig deeper into it now, we re-source init-tests to get a clean environment
before each test, to try to fix this.
After applying this change, we found a new error:
```
16:39:33> SENTINEL SIMULATE-FAILURE crash-after-election works: FAILED: caught an error in the test couldn't open socket: connection refused
couldn't open socket: connection refused
```
My guess is that the sentinel triggers the failover and exits before SENTINEL FAILOVER,
so a new || condition was added in wait_for_condition to fix it. | da8f7428fae4ccffa5e07d869f65fe7413898f78 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/da8f7428fae4ccffa5e07d869f65fe7413898f78 | 2023-05-29 18:43:26+08:00 |
Merge pull request #4967 from JingchengLi/unstable
fix repeat argument issue and reduce unnecessary loop times for redis-cli. | f45e790125cc8141c10daefe25bf3530e484c61f | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/f45e790125cc8141c10daefe25bf3530e484c61f | 2018-07-10 15:13:20+02:00 |
Merge pull request #5462 from youjiali1995/fix-migrate-expired-keys
migrate: fix mismatch of RESTORE reply when some keys have expired. | 3f6893a4e294e90c2cd333c31b822dc131b0ee41 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/3f6893a4e294e90c2cd333c31b822dc131b0ee41 | 2018-10-22 17:40:37+02:00 |
Redis Benchmark: generate random test data
The random data generation function was designed by antirez. See #7196.
| abff264000a9926bb7b96172fd49dae3b804c4e0 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/abff264000a9926bb7b96172fd49dae3b804c4e0 | 2020-05-18 18:18:20+08:00 |
Improve RM_ModuleTypeReplaceValue() API.
With the previous API, a NULL return value was ambiguous and could
represent either an old value of NULL or an error condition. The new API
returns a status code and allows the old value to be returned
by-reference.
This commit also includes test coverage based on
tests/modules/datatype.c which did not exist at the time of the original
commit.
| 0283db5883e8dc08e8d3c7019a213712adb3d420 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/0283db5883e8dc08e8d3c7019a213712adb3d420 | 2019-12-12 18:50:11+02:00 |
Expose Redis main thread cpu time, and scrape system time via INFO command. (#8132)
Exposes the main thread CPU info via info modules (Linux specific)
(used_cpu_sys_main_thread and used_cpu_user_main_thread). This is important for:
- distinguishing between the total CPU time consumed by the main thread and by the io-threads (to check
what the first bottleneck is on the used config)
- distinguishing between the total CPU time consumed by the main thread and by module threads
Apart from that, this commit also exposes server_time_usec within the Server section so that we can
properly differentiate consecutive collections and calculate, for example, the CPU% and/or CPU time vs
wall time, etc.
| 19d46f8f2f4239bea738a9578e083a64c1467012 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/19d46f8f2f4239bea738a9578e083a64c1467012 | 2020-12-13 17:14:46+00:00 |
Fix function names in zslDeleteNode() top comment.
| 49ccd2a8e1b0cdd6cad6cdcdaa52dab43aeb9345 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/49ccd2a8e1b0cdd6cad6cdcdaa52dab43aeb9345 | 2020-04-14 10:52:40+02:00 |
Test: fix implementation-dependent test after code change.
| dd29d441364236992ce89230556526c707c3c960 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/dd29d441364236992ce89230556526c707c3c960 | 2019-10-10 15:22:42+02:00 |
fix usage typo in redis-cli
| db74d71eb349465b674ff42af3437969dc594f61 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/db74d71eb349465b674ff42af3437969dc594f61 | 2018-09-06 13:40:05+08:00 |
Fix HLL corruption bug
| d659654f53c276a5a96e8559793ffdb9051957fd | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d659654f53c276a5a96e8559793ffdb9051957fd | 2019-07-29 18:11:52-04:00 |
Fix a potential overflow with strncpy
| 1d4ea00d12885108d936b76cd31097dc4894f5ca | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/1d4ea00d12885108d936b76cd31097dc4894f5ca | 2020-01-14 08:10:39+00:00 |
Sentinel: Fix failed daily tests, due to race condition (#9501)
| 53ad5627b7c0fd846590c97b9647fd4ed379d289 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/53ad5627b7c0fd846590c97b9647fd4ed379d289 | 2021-09-15 06:39:50-04:00 |
sds.c: Fix potential overflow in sdsll2str. (#8910)
Fixes undefined behavior, the same way our `ll2string` does. | e1dc979054846f96f77783673e421863700204be | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/e1dc979054846f96f77783673e421863700204be | 2021-08-08 19:30:47+08:00 |
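As an aside, the usual way this kind of signed-negation undefined behavior is avoided is to go through an unsigned accumulator. A minimal sketch, assuming a hypothetical helper named `ll2buf` (not the actual sdsll2str code):
```c
#include <limits.h>
#include <stdio.h>

/* Hypothetical helper, not the actual sdsll2str patch: format a long long
 * without the undefined behavior of negating LLONG_MIN directly. */
int ll2buf(char *dst, size_t cap, long long v) {
    /* -(v + 1) is always representable; adding 1 back restores the magnitude. */
    unsigned long long uv = (v < 0) ? (unsigned long long)(-(v + 1)) + 1ULL
                                    : (unsigned long long)v;
    return snprintf(dst, cap, "%s%llu", v < 0 ? "-" : "", uv);
}
```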
Clean up and stabilize cluster migration tests. (#8745)
This is work in progress, focusing on two main areas:
* Avoiding race conditions with cluster configuration propagation.
* Ignoring limitations with redis-cli --cluster fix which makes it hard
to distinguish real errors (e.g. failure to fix) from expected
conditions in this test (e.g. nodes not agreeing on configuration). | 4724dd439e1a33310008ae244f82f70835d96a2e | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/4724dd439e1a33310008ae244f82f70835d96a2e | 2021-04-06 11:57:57+03:00 |
On 32-bit platforms, the bit position of GETBIT/SETBIT/BITFIELD/BITCOUNT/BITPOS may overflow (see CVE-2021-32761) (#9191)
GETBIT, SETBIT may access wrong address because of wrap.
BITCOUNT and BITPOS may return wrapped results.
BITFIELD may access the wrong address but also allocate insufficient memory and segfault (see CVE-2021-32761).
This commit uses `uint64_t` or `long long` instead of `size_t`.
related https://github.com/redis/redis/pull/8096
At 32bit platform:
> setbit bit 4294967295 1
(integer) 0
> config set proto-max-bulk-len 536870913
OK
> append bit "\xFF"
(integer) 536870913
> getbit bit 4294967296
(integer) 0
When the bit index is larger than 4294967295, size_t can't hold the bit index. In the past, `proto-max-bulk-len` was limited to 536870912, so there was no problem.
After this commit, the bit position is stored in `uint64_t` or `long long`, so when `proto-max-bulk-len > 536870912`, 32-bit platforms can still behave correctly.
For 64-bit platforms, this problem still exists. The major reason is that the bit position is 8 times the byte position, so when proto-max-bulk-len is very large, the bit position may overflow.
But on 64-bit platforms, we don't have strings that long, so this bug may never happen.
Additionally this commit adds a test that costs `512MB` of memory and is tagged `large-memory`. The FreeBSD CI and Valgrind CI ignore this test. | 71d452876ebf8456afaadd6b3c27988abadd1148 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/71d452876ebf8456afaadd6b3c27988abadd1148 | 2021-07-21 21:25:19+08:00 |
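A standalone sketch of the wrap described above, not the Redis patch itself: it emulates a 32-bit `size_t` with `uint32_t` to show how the byte offset comes out wrong, while a 64-bit type keeps it exact.
```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Emulating a 32-bit size_t with uint32_t: bit index 4294967296 wraps
     * to 0, so GETBIT-style code would compute the wrong byte offset. */
    uint32_t narrow_bit = (uint32_t)4294967296ULL;   /* wraps to 0 */
    uint64_t wide_bit   = 4294967296ULL;             /* value preserved */

    printf("byte offset with 32-bit index: %u\n", narrow_bit >> 3);
    printf("byte offset with 64-bit index: %llu\n",
           (unsigned long long)(wide_bit >> 3));
    return 0;
}
```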
Fix memory corruption in moduleHandleBlockedClients
By using a "circular BRPOPLPUSH"-like scenario it was
possible the get the same client on db->blocking_keys
twice (See comment in moduleTryServeClientBlockedOnKey)
The fix was actually already implememnted in
moduleTryServeClientBlockedOnKey but it had a bug:
the funxction should return 0 or 1 (not OK or ERR)
Other changes:
1. Added two commands to blockonkeys.c test module (To
reproduce the case described above)
2. Simplify blockonkeys.c in order to make testing easier
3. cast raxSize() to avoid warning with format spec
| c4dc5b80b26027cba179c21f5b2be4eb81bf3b5e | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/c4dc5b80b26027cba179c21f5b2be4eb81bf3b5e | 2020-01-21 15:09:42+05:30 |
Sentinel: add an option to deny online script reconfiguration.
The ability of "SENTINEL SET" to change the reconfiguration script at
runtime is a problem even in the security model of Redis: any client
inside the network may set any executable to be run once a failover is
triggered.
This option adds protection against this problem: by default the two
SENTINEL SET subcommands modifying script paths are denied. However the
user is still able to revert that using the Sentinel configuration file
in order to allow such a feature.
| 6a66b93b186506bcd37f147cbb353f0961a03870 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/6a66b93b186506bcd37f147cbb353f0961a03870 | 2018-06-14 18:57:58+02:00 |
Fix wrong zmalloc_size() assumption. (#7963)
When using a system with no malloc_usable_size(), zmalloc_size() assumed
that the heap allocator always returns blocks that are long-padded.
This may not always be the case, and will result in zmalloc_size()
returning a size that is bigger than allocated. At least in one case
this leads to out of bound write, process crash and a potential security
vulnerability.
Effectively this does not affect the vast majority of users, who use
jemalloc or glibc.
This problem along with a (different) fix was reported by Drew DeVault. | 9824fe3e392caa04dc1b4071886e9ac402dd6d95 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/9824fe3e392caa04dc1b4071886e9ac402dd6d95 | 2020-10-26 14:49:08+02:00 |
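For context, a minimal sketch of the classic fallback when `malloc_usable_size()` is unavailable, assuming hypothetical helpers `tracked_malloc`/`tracked_size` (this is not the actual zmalloc implementation): record the requested size in a small header instead of assuming allocator padding.
```c
#include <stdlib.h>

/* Hypothetical helpers, not Redis code: remember the requested size in a
 * prefix header so the reported size never exceeds what was allocated. */
typedef struct { size_t size; } alloc_header;

void *tracked_malloc(size_t size) {
    alloc_header *h = malloc(sizeof(*h) + size);
    if (!h) return NULL;
    h->size = size;
    return h + 1;                       /* hand back the user portion */
}

size_t tracked_size(void *ptr) {
    return ((alloc_header *)ptr - 1)->size;   /* exact, no padding guesswork */
}
```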
Merge pull request #5135 from oranagra/rare_repl_corruption
fix rare replication stream corruption with disk-based replication | e03358c0d978bf0352a78b531e0ccc69118640eb | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/e03358c0d978bf0352a78b531e0ccc69118640eb | 2018-07-17 17:33:58+02:00 |
rdb.c: handle fclose error case differently to avoid double fclose (#7307)
When fclose fails, the previous implementation would have attempted to call fclose again;
this can in theory lead to a segfault.
other changes:
check for non-zero return value as failure rather than a specific error code.
this doesn't fix a real bug, just a minor cleanup. | 323029baa6958781ffae5da331dfc918d66a7117 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/323029baa6958781ffae5da331dfc918d66a7117 | 2020-09-24 11:17:53-04:00 |
Use stack for decoding integer-encoded values in list push
Less heap allocations when commands like LMOVE push integer values.
| 683e530cf3c3e71266b8d18073fd026da9de4ddb | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/683e530cf3c3e71266b8d18073fd026da9de4ddb | 2021-02-12 12:31:41+01:00 |
Fix range issues in ZRANDMEMBER and HRANDFIELD (CVE-2023-22458) (#11674)
missing range check in ZRANDMEMBER and HRANDFIELD leading to panic due
to protocol limitations | 16f408b1a0121cacd44cbf8aee275d69dc627f02 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/16f408b1a0121cacd44cbf8aee275d69dc627f02 | 2023-01-16 13:50:27+02:00 |
Merge pull request #6875 from WOOSEUNGHOON/cve20158080_fix
[FIX] revisit CVE-2015-8080 vulnerability | 256ec6c52f5bd41437ea703801f67426af370918 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/256ec6c52f5bd41437ea703801f67426af370918 | 2020-02-10 10:45:52+01:00 |
New security feature: Redis protected mode.
An exposed Redis instance on the internet can be cause of serious
issues. Since Redis, by default, binds to all the interfaces, it is easy
to leave an instance without any protection layer by mistake.
Protected mode tries to address this problem in a soft way, providing a
layer of protection, but giving clues to Redis users about why the
server is not accepting connections.
When protected mode is enabled (the default), and if there are no
minimum hints about the fact that the server is properly configured (no
"bind" directive is used in order to restrict the server to certain
interfaces, nor a password is set), clients connecting from external
interfaces are refused with an error explaining what to do in order to
fix the issue.
Clients connecting from the IPv4 and IPv6 loopback interfaces are still
accepted normally, similarly Unix domain socket connections are not
restricted in any way.
| edd4d555df57dc84265fdfb4ef59a4678832f6da | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/edd4d555df57dc84265fdfb4ef59a4678832f6da | 2016-01-07 13:00:08+01:00 |
Merge pull request #2726 from seppo0010/patch-2
Fix race condition in unit/introspection | 8637384191ade8afbe380c10c2596f4dbc1eedbc | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/8637384191ade8afbe380c10c2596f4dbc1eedbc | 2016-01-15 16:24:06+01:00 |
Merge pull request #2998 from danielhtshih/unstable
Fix a possible race condition of sdown event detection if sentinel's connection to master/slave/sentinel became disconnected just after the last PONG and before the next PING. | f5ff91f675843b132bfee9c0c4b19cc1908cacb9 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/f5ff91f675843b132bfee9c0c4b19cc1908cacb9 | 2016-05-05 17:16:58+02:00 |
Fix timing issue in logging.tcl with FreeBSD (#9910)
A test failure was reported in Daily CI.
`Crash report generated on SIGABRT` with FreeBSD.
```
*** [err]: Crash report generated on SIGABRT in tests/integration/logging.tcl
Expected [string match *crashed by signal* ### Starting...(logs) in tests/integration/logging.tcl]
```
It looks like `tail -1000` was executed too early, before it
printed out all the crash logs. We can give it a few more
chances by using `wait_for_log_messages`.
Other changes:
1. In `Server is able to generate a stack trace on selected systems`,
use `wait_for_log_messages` to reduce the lines of code. And if it
fails, there are more detailed logs that can be printed.
2. In `Crash report generated on DEBUG SEGFAULT`, we also use
`wait_for_log_messages` to avoid possible timing issues. | b947049f8524fe93b383b141d041280a1c67c16d | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b947049f8524fe93b383b141d041280a1c67c16d | 2021-12-07 18:02:58+08:00 |
fix small test suite race conditions
| d0850369c4a06d6362dbaf12d873e54d6ce931cc | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d0850369c4a06d6362dbaf12d873e54d6ce931cc | 2018-11-11 09:22:42+02:00 |
Use SipHash hash function to mitigate HashDos attempts.
This change attempts to switch to a hash function which mitigates
the effects of the HashDoS attack (a denial of service attack trying
to force data structures into worst case behavior) while at the same time
providing Redis with a hash function that does not expect the input
data to be word aligned, a condition no longer true now that sds.c
strings have a variable length header.
Note that it is possible sometimes that even using a hash function
for which collisions cannot be generated without knowing the seed,
special implementation details or the exposure of the seed in an
indirect way (for example the ability to add elements to a Set and
check the return in which Redis returns them with SMEMBERS) may
make the attacker's life simpler in the process of trying to guess
the correct seed, however the next step would be to switch to a
log(N) data structure when too many items in a single bucket are
detected: this seems like an overkill in the case of Redis.
SPEED REGRESSION TESTS:
In order to verify that switching from MurmurHash to SipHash had
no impact on speed, a set of benchmarks involving fast insertion
of 5 million keys was performed.
The result shows Redis with SipHash in high pipelining conditions
to be about 4% slower compared to using the previous hash function.
However this could partially be related to the fact that the current
implementation does not attempt to hash whole words at a time but
reads single bytes, in order to have an output which is endian-neutral
and at the same time works on systems where unaligned memory accesses
are a problem.
Further X86-specific optimizations should be tested; the function
may easily get to the same level as MurmurHash2 if a few optimizations
are performed.
| adeed29a99dcd0efdbfe4dbd5da74e7b01966c67 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/adeed29a99dcd0efdbfe4dbd5da74e7b01966c67 | 2017-02-20 16:09:54+01:00 |
Avoid integer overflows in SETRANGE and SORT (CVE-2022-35977) (#11720)
Authenticated users issuing specially crafted SETRANGE and SORT(_RO)
commands can trigger an integer overflow, resulting in Redis attempting
to allocate impossible amounts of memory and abort with an OOM panic. | 1ec82e6e97e1db06a72ca505f9fbf6b981f31ef7 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/1ec82e6e97e1db06a72ca505f9fbf6b981f31ef7 | 2023-01-16 13:49:30+02:00 |
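A generic sketch of the defensive pattern such fixes rely on, with a hypothetical `grow_string` helper and `hard_limit` parameter (not the actual Redis patch): reject an offset/length combination that would overflow before allocating.
```c
#include <stdlib.h>

/* Hypothetical helper, not the actual Redis patch: refuse offset/length
 * combinations that would overflow size_t or exceed a configured limit
 * before any allocation or copy takes place. */
void *grow_string(size_t offset, size_t append_len, size_t hard_limit) {
    if (append_len > hard_limit || offset > hard_limit - append_len)
        return NULL;                    /* would overflow or exceed the limit */
    return malloc(offset + append_len); /* safe: sum is at most hard_limit */
}
```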
Fix possible memory corruption in FLUSHALL when a client watches more than one key (#11854)
Avoid calling unwatchAllKeys when running touchAllWatchedKeysInDb (which was unnecessary)
This can potentially lead to use-after-free and memory corruption when the next entry
pointer held by the watched keys iterator is freed when unwatching all keys of a specific client.
found with address sanitizer, added a test which will not always fail (depending on the random
dict hashing seed)
problem introduced in #9829 (Redis 7.0)
Co-authored-by: Oran Agra <[email protected]> | 18017df7c1407bc025741c64a90f20f4a8098bd2 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/18017df7c1407bc025741c64a90f20f4a8098bd2 | 2023-02-28 12:02:55+02:00 |
Add --insecure option to command line tools. (#8416)
Disable certificate validation, making it possible to connect to servers
without configuring full trust chain.
The use of this option is insecure and makes the connection vulnerable
to man in the middle attacks. | be83bb13a8eaad68b7580b95c696f2554cf7100e | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/be83bb13a8eaad68b7580b95c696f2554cf7100e | 2021-02-07 12:36:56+02:00 |
Update security page with supported versions. (#10712)
| b414605285244c453f3fadbbe7a157cd83ed5f59 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b414605285244c453f3fadbbe7a157cd83ed5f59 | 2022-05-11 16:18:02+03:00 |
Fix failing cluster tests. (#8763)
Disable replica migration to avoid a race condition where the
migrated-from node turns into a replica.
Long term, this test should probably be improved to handle multiple
slots and accept such auto migrations but this is a quick fix to
stabilize the CI without completely dropping this test. | 5e3a15ae1b58630a10639b34cd016ba9c0ff6b15 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/5e3a15ae1b58630a10639b34cd016ba9c0ff6b15 | 2021-04-13 00:00:57+03:00 |
Fix potential cluster link error.
Function adjustOpenFilesLimit() has an implicit parameter, which is server.maxclients.
This function aims to adjust the maximum file descriptor number according to server.maxclients
on a best-effort basis, meaning "bestlimit" could be lower than "maxfiles" but greater than "oldlimit".
When we try to increase "maxclients" using the CONFIG SET command, we could increase the maximum
file descriptor number to a bigger value without calling aeResizeSetSize at the same time.
When later more and more clients connect to the server, the allocated fd could grow bigger and bigger,
and eventually exceed the events size of aeEventLoop.events. When a new node joins the cluster,
a new link is created, together with a new fd, but when calling aeCreateFileEvent, we did not
check the return value. In this case, we have a non-null "link" but the associated fd is not
registered.
So when we dynamically set "maxclients" we could reach an inconsistency between the maximum file
descriptor number of the process and server.maxclients, which could later cause cluster link and link
fd inconsistency.
While setting "maxclients" dynamically, we consider it failed when the resulting "maxclients" is not
the same as expected. We try to restore the maximum file descriptor number when we fail to set
"maxclients" to the specified value, so that server.maxclients can act as a guard as before.
| 0992ada2fe1cbc8f5f25c0cdec67cb69bb8d3810 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/0992ada2fe1cbc8f5f25c0cdec67cb69bb8d3810 | 2019-12-31 18:15:21+08:00 |
Reduce system calls of write for client->reply by introducing writev (#9934)
There are scenarios where it results in many small objects in the reply list,
such as commands heavily using deferred array replies (`addReplyDeferredLen`).
E.g. what COMMAND command and CLUSTER SLOTS used to do (see #10056, #7123),
but also in the case of a transaction or a pipeline of commands that use just one deferred array reply.
With the current code we used to have to run multiple loops along with multiple calls to `write()` to send data back to
the peer, but by means of `writev()` we can gather those scattered
objects in the reply list, include the static reply buffer as well, and then send it with one system call,
which ought to achieve higher performance.
In the case of TLS, we simply check and concatenate buffers into one big buffer and send it
away with one call to `connTLSWrite()`; if the amount of all buffers exceeds `NET_MAX_WRITES_PER_EVENT`,
we then invoke `connTLSWrite()` multiple times to avoid a huge amount of memory copies.
Note that aside of reducing system calls, this change will also reduce the amount of
small TCP packets sent. | 496375fc36134c72461e6fb97f314be3adfd8b68 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/496375fc36134c72461e6fb97f314be3adfd8b68 | 2022-02-22 20:00:37+08:00 |
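A minimal sketch of the gather-write idea, assuming a hypothetical `send_fragments` helper (not the Redis implementation): build an `iovec` array over the scattered reply fragments and hand them to the kernel with a single `writev()` call.
```c
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Hypothetical helper, not the Redis implementation: gather up to 16 reply
 * fragments into one writev() call instead of one write() per fragment. */
ssize_t send_fragments(int fd, const char **frags, int nfrags) {
    struct iovec iov[16];
    if (nfrags < 0 || nfrags > 16) return -1;   /* keep the sketch simple */
    for (int i = 0; i < nfrags; i++) {
        iov[i].iov_base = (void *)frags[i];
        iov[i].iov_len  = strlen(frags[i]);
    }
    return writev(fd, iov, nfrags);             /* one system call for all fragments */
}
```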
Merge pull request #5045 from guybe7/restore_fix
Enhance RESTORE with RDBv9 new features | c6f4118ce63109d96782b74177383f85822381d8 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/c6f4118ce63109d96782b74177383f85822381d8 | 2018-06-20 11:11:39+02:00 |
Merge pull request #6148 from artix75/redis_bm_dev
Redis Benchmark: prevent CONFIG failure from exiting program | de035c94816722dd923e4aedd852869de79a5185 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/de035c94816722dd923e4aedd852869de79a5185 | 2019-06-05 17:29:50+02:00 |
Fix "default" and overwritten / reset users will not have pubsub channels permissions by default. (#8723)
Background:
Redis 6.2 added ACL control for pubsub channels (#7993), which were supposed
to be permissive by default to retain compatibility with redis 6.0 ACL.
But due to a bug, only newly created users got this `acl-pubsub-default` applied,
while overwritten (updated) users got reset to `resetchannels` (denied).
Since the "default" user exists before loading the config file,
any ACL change to it, results in an update / overwrite.
So when a "default" user is loaded from config file or include ACL
file with no channels related rules, the user will not have any
permissions to any channels. But other users will have default
permissions to any channels.
When upgrading from 6.0 with config rewrite, this will lead to
the "default" user's channels permissions being lost.
When users are loaded from an include file and "acl load" is then called, users
will also lose their channels permissions.
Similarly, the `reset` ACL rule, would have reset the user to be denied
access to any channels, ignoring `acl-pubsub-default` and breaking
compatibility with redis 6.0.
The implication of this fix is that it regains compatibility with redis 6.0,
but breaks compatibility with redis 6.2.0 and 2.0.1. e.g. after the upgrade,
the default user will regain access to pubsub channels.
Other changes:
Additionally this commit renames server.acl_pubusub_default to
server.acl_pubsub_default and fixes a typo in the acl tests. | 3b74b55084b7e902c7e54603b3d6122b2a31d6fa | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/3b74b55084b7e902c7e54603b3d6122b2a31d6fa | 2021-04-06 04:13:20+08:00 |
Update outdated commands descriptions and cleanups in README (#11372)
Redis commands have been significantly refactored in 7.0.
This PR updates the outdated README.md to reflect it.
Based on #10864, doing some additional cleanups.
Co-authored-by: theoboldalex <[email protected]>
Co-authored-by: Oran Agra <[email protected]> | a370bbe263009f806b3f9454674342d350f6475a | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/a370bbe263009f806b3f9454674342d350f6475a | 2022-10-11 16:39:37+08:00 |
Optimize temporary memory allocations for getKeysFromCommand mechanism
now that we may use it more often (ACL), these excessive calls to malloc
and free can become an overhead.
| 774d8cd721055b768dbffbf5c6b2fa9d6310126e | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/774d8cd721055b768dbffbf5c6b2fa9d6310126e | 2020-02-05 18:06:33+02:00 |
Merge pull request #6252 from soloestoy/tracking-flushdb
Tracking flushdb | 41ed85bd268c9e1f1afe05e9ed317d5d55d8eeef | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/41ed85bd268c9e1f1afe05e9ed317d5d55d8eeef | 2019-07-18 16:41:37+02:00 |
Allow scripts to timeout even if from the master instance.
However the master scripts will be impossible to kill.
Related to #5297.
| 83af8ef1fdbfb69dd53cedc8e772184b096a5da8 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/83af8ef1fdbfb69dd53cedc8e772184b096a5da8 | 2018-08-30 11:46:42+02:00 |
fix typo
| 42b36c5ce9071ebdfd5580fa0499a7bf354f1841 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/42b36c5ce9071ebdfd5580fa0499a7bf354f1841 | 2015-04-19 23:42:27+08:00 |
Modules hooks: fix a leak and a few more issues.
| 1e78681df83570c287a137e7acb0af705ea59ab4 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/1e78681df83570c287a137e7acb0af705ea59ab4 | 2019-10-22 10:15:12+02:00 |
Moved security bugs and vulnerability policy to SECURITY.md (#8938)
Moved security bugs and vulnerability policy to SECURITY.MD and extended security policy.
Co-authored-by: Yossi Gottlieb <[email protected]> | df4d916007c285d01b11193272419ab228916d8a | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/df4d916007c285d01b11193272419ab228916d8a | 2021-05-13 21:16:27-07:00 |
TLS: Fix missing initialization in redis-cli.
| 93edb3ff3a800a701e2c33eb8f20330569a0a134 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/93edb3ff3a800a701e2c33eb8f20330569a0a134 | 2020-01-29 21:40:02+02:00 |
ACL: CAT subcommand implemented.
| b9c97c0b2e58826efe797e53fec4558fd7f7b95d | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b9c97c0b2e58826efe797e53fec4558fd7f7b95d | 2019-02-12 17:02:45+01:00 |
adding missing test cases GET and GETEX (#12125)
adding test cases for expired and non-existent keys for GET and GETEX,
for better test coverage. | 42dd98ec191fd71528785bd7d31bd18112078f66 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/42dd98ec191fd71528785bd7d31bd18112078f66 | 2023-05-07 04:46:11-04:00 |
lazyfree: add a new configuration lazyfree-lazy-user-del
Delete keys in async way when executing DEL command, if
lazyfree-lazy-user-del is yes.
| 3c0ed0309ac5bae52464ecb45e92056e212f2b7f | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/3c0ed0309ac5bae52464ecb45e92056e212f2b7f | 2019-07-17 11:00:51+08:00 |
Add year in log.
User: "is there a reason why redis server logs are missing the year in
the "date time"?"
Me: "I guess I did not imagine it would be stable enough to run for
several years".
| 3c19ae941d17dee7f89c282946bc379ba2106b09 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/3c19ae941d17dee7f89c282946bc379ba2106b09 | 2018-07-30 17:42:30+02:00 |
Cluster: discard pong times in the future.
However we allow for 500 milliseconds of tolerance, in order to
avoid often discarding semantically valid info (the node is up)
because of natural few milliseconds desync among servers even when
NTP is used.
Note that anyway we should ping the node from time to time regardless and
discover if it's actually down from our point of view, since no update
is accepted while we have an active ping on the node.
Related to #3929.
| 271733f4f83552acc52a8baba4ae3fa7bd6b4ba0 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/271733f4f83552acc52a8baba4ae3fa7bd6b4ba0 | 2017-04-15 10:08:39+02:00 |
redis-cli: fix bugs in hints of commands with subcommands. (#8914)
There are two bugs in redis-cli hints:
* The hints of commands with subcommands lack first params.
* When searching for the command matching the current input, we should find the
command with the longest matching prefix. If not, COMMAND INFO will always
match COMMAND and display no hints. | 8827aae83bf58b91f155005102675ff57061d9eb | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/8827aae83bf58b91f155005102675ff57061d9eb | 2021-05-17 21:52:40+08:00 |
Remove the PSYNC2 meaningful offset test.
| 32d0df0c1fb5fdd39defc6d85887cca73d062f47 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/32d0df0c1fb5fdd39defc6d85887cca73d062f47 | 2020-05-27 12:47:34+02:00 |
Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE
Redis slaves are normally read only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to intersect each time: caching
the intersection for some time is often a good idea.
To do so, it is possible to setup a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.
THE BUG
Problem: in order to have consistent replication, expiring keys in
Redis replication is up to the master, which synthesizes DEL operations to
send in the replication stream. However slaves logically expire keys
by hiding them from read attempts from clients so that if the master did
not promptly send a DEL, the client still sees logically expired keys
as non existing.
Because slaves don't actively expire keys by actually evicting them but
just mask them from the POV of read operations, if a key is created in a
writable slave, and an expire is set, the key will be leaked forever:
1. No DEL will be received from the master, which does not know about
such a key at all.
2. No eviction will be performed by the slave, since it needs to disable
eviction because it's up to masters, otherwise consistency of data is
lost.
THE FIX
In order to fix the problem, the slave should be able to tag keys that
were created in the slave side and have an expire set in some way.
My solution involved using a unique additional dictionary created by
the writable slave only if needed. The dictionary is obviously keyed by
the key name that we need to track: all the keys that are set with an
expire directly by a client writing to the slave are tracked.
The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (actually this limits the solution
to the first 64 DBs, but the default with Redis is to use 16 DBs).
This solution comes with both a small complexity and CPU penalty,
which is actually zero when the feature is not used. The slave-side
eviction is encapsulated in code which is not coupled with the rest of
the Redis core, if not for the hook to track the keys.
TODO
I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
| 04542cff92147b9b686a2071c4c53574771f4f88 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/04542cff92147b9b686a2071c4c53574771f4f88 | 2016-12-13 10:20:06+01:00 |
In Redis RDB check: initial POC.
So far we used an external program (later executed within Redis) and
parser in order to check RDB files for correctness. This forces us, at each
RDB format update, to have two copies of the same format implementation
that are hard to keep in sync. Moreover the former RDB checker only
checked the very high-level format of the file, without actually trying
to load things in memory. Certain corruptions can only be handled by
really loading key-value pairs.
This first commit attempts to unify the Redis RDB loading code with the
task of checking the RDB file for correctness. More work is needed but
it looks like a sound direction so far.
| e97fadb045c8bd10efd00d18374417009feb18c5 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/e97fadb045c8bd10efd00d18374417009feb18c5 | 2016-06-30 23:44:44+02:00 |
Fix typo in function_load local variable (#10209)
Refs: https://github.com/redis/redis/pull/10141 | 4e17154060fc074a413255dfd65f4fa0305445e1 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/4e17154060fc074a413255dfd65f4fa0305445e1 | 2022-01-30 13:38:58+01:00 |
Refactor config.c for generic setter interface (#9644)
This refactors all `CONFIG SET`s and conf file loading arguments to go through
the generic config handling interface.
Refactoring changes:
- All config params go through the `standardConfig` interface (some stuff which
is only related to the config file and not the `CONFIG` command still has special
handling for rewrite/config file parsing, `loadmodule`, for example).
- Added `MULTI_ARG_CONFIG` flag for configs to signify they receive a variable
number of arguments instead of a single argument. This is used to break up space
separated arguments to `CONFIG SET` so the generic setter interface can pass
multiple arguments to the setter function. When parsing the config file we also break
up anything after the config name into multiple arguments to the setter function.
Interface changes:
- A side effect of the above interface is that the `bind` argument in the config file can
be empty (no argument at all); this is treated the same as passing a single empty
string argument (same as how `save` already used to work).
- Support rewrite and setting `watchdog-period` from config file (was only supported
by the CONFIG command till now).
- Another side effect is that the `save T X` config argument now supports multiple
Time-Changes pairs in a single line like its `CONFIG SET` counterpart. So in the
config file you can either do:
```
save 3600 1
save 600 10
```
or do
```
save 3600 1 600 10
```
Co-authored-by: Bjorn Svensson <[email protected]> | 79ac57561f268814babe212c9216efe45cfdf937 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/79ac57561f268814babe212c9216efe45cfdf937 | 2021-11-07 13:40:08+02:00 |
ACL: flags refactoring, function to describe user.
| c7cd10dfe96e91582d7e341b20334464d5c51830 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/c7cd10dfe96e91582d7e341b20334464d5c51830 | 2019-01-31 16:49:22+01:00 |
Merge pull request #1998 from grobe0ba/unstable
Fix missing '-' in redis-benchmark help output (Issue #1996) | 25c231c4c1efb1099be188a74144353ace499d01 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/25c231c4c1efb1099be188a74144353ace499d01 | 2017-07-24 15:18:08+02:00 |
fix processing of large bulks (above 2GB)
- protocol parsing (processMultibulkBuffer) was limited to 32-bit positions in the buffer
readQueryFromClient potential overflow
- rioWriteBulkCount used int, although rioWriteBulkString gave it size_t
- several places in sds.c that used int for string length or index.
- bugfix in RM_SaveAuxField (return was 1 or -1 and not length)
- RM_SaveStringBuffer was limited to a 32-bit length
| 60a4f12f8b998c44dfff0e88202b01598287390d | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/60a4f12f8b998c44dfff0e88202b01598287390d | 2017-12-21 11:10:48+02:00 |
RESP3: Allow any command in RESP3 Pub/Sub mode.
| 9018388c3f2d183f5110b57649d3239bd699291e | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/9018388c3f2d183f5110b57649d3239bd699291e | 2018-12-19 17:56:50+01:00 |
fix benchmark failure in daily test with TLS (#10896)
The new test added in #10891 can fail with a different error.
see comment in networking.c saying
```c
/* That's a best effort error message, don't check write errors.
* Note that for TLS connections, no handshake was done yet so nothing
* is written and the connection will just drop. */
``` | d2405b9b6bf1d4e6b84d3974a52fabcbeac533f1 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d2405b9b6bf1d4e6b84d3974a52fabcbeac533f1 | 2022-06-23 23:19:36+08:00 |
Sentinel: test command renaming feature.
| 438317796b7a34b100e47ce2c5acdf184dfdf53d | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/438317796b7a34b100e47ce2c5acdf184dfdf53d | 2018-06-26 16:08:32+02:00 |
fix memory leak in sentinel connection sharing
| 1bfa2d27a637119226ee3244d2d219c7e5a7ff33 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/1bfa2d27a637119226ee3244d2d219c7e5a7ff33 | 2020-06-21 23:04:28-04:00 |
Refactor multi-key command get keys proc
| b7ce583a5e31dbb674e0aa24366d4648c2a45dfb | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b7ce583a5e31dbb674e0aa24366d4648c2a45dfb | 2020-09-17 15:54:37+08:00 |
More improvements and fixes to generic config infra
- Adding is_valid_fn and update_fn, both return 1 for success and 0 for failure with an optional error message.
- Bugfix in handling boundary checks of unsigned numeric types (boundaries were treated as signed)
- Adding more numeric types to the generic mechanism: uint, ulonglong, long, time_t, off_t
- More verbose error replies ("argument must be between" on out-of-range CONFIG SET, like config file parsing)
| 28beb05aa322e4e72ac6b7b477f38f8c5eab0d57 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/28beb05aa322e4e72ac6b7b477f38f8c5eab0d57 | 2019-11-28 11:11:07+02:00 |
Fix integer overflow (CVE-2021-21309). (#8522)
On 32-bit systems, setting the proto-max-bulk-len config parameter to a high value may result with integer overflow and a subsequent heap overflow when parsing an input bulk (CVE-2021-21309).
This fix has two parts:
Set a reasonable limit to the config parameter.
Add additional checks to prevent the problem in other potential but unknown code paths. | d32f2e9999ce003bad0bd2c3bca29f64dcce4433 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d32f2e9999ce003bad0bd2c3bca29f64dcce4433 | 2021-02-22 15:41:32+02:00 |
Merge pull request #5529 from yongman/fix-rediscli-malloc
fix zmalloc in clusterManagerComputeReshardTable | d9e822a14bfbca8ea0599052e6ec87b63e7be76f | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d9e822a14bfbca8ea0599052e6ec87b63e7be76f | 2018-11-06 12:05:24+01:00 |
Cleanup: fix typo and remove some obsoleting definitions. (#9851)
| 596635fa0c01b377d6e86c1766c7014504f5e1db | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/596635fa0c01b377d6e86c1766c7014504f5e1db | 2021-11-28 06:47:51+08:00 |
Regression test for issue #2813.
| 5f0fef5eb9d4ca2e4f5a21388b92d9443e495da9 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/5f0fef5eb9d4ca2e4f5a21388b92d9443e495da9 | 2015-10-15 11:23:13+02:00 |
Make git ignore all files starting with appendonly.aof (#10351)
| a6fd23753701a6700fc1e9d4d9d2c3404ab02890 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/a6fd23753701a6700fc1e9d4d9d2c3404ab02890 | 2022-02-28 15:24:47+08:00 |
fix memory corruption on RM_FreeCallReply
| 8521cde570f574006ee36a2d3e0ed1b2f6953d2f | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/8521cde570f574006ee36a2d3e0ed1b2f6953d2f | 2016-11-30 11:49:49+02:00 |
Prevent use after free for inbound cluster link (#11255)
| 6c03786b66d27a53629cac21d5b89b17bfad6b65 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/6c03786b66d27a53629cac21d5b89b17bfad6b65 | 2022-09-13 16:19:29-05:00 |
ACL DRYRUN does not validate the verified command args. (#10405)
As a result we segfault when parsing and matching the command keys. | 11b071a22bd1662166213e86f7bb7394a945ea00 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/11b071a22bd1662166213e86f7bb7394a945ea00 | 2022-03-10 10:08:41+02:00 |
Fixes around clients that must be obeyed. Replica report disk errors in PING. (#10603)
This PR unifies all the places that test if the current client is the
master client or AOF client, and uses a method to test that on all of
these.
Other than some refactoring, these are the actual implications:
- Replicas **don't** ignore disk error when processing commands not
coming from their master.
**This is important for PING to be used for health check of replicas**
- SETRANGE, APPEND, SETBIT, BITFIELD don't do proto_max_bulk_len check for AOF
- RM_Call in SCRIPT_MODE ignores disk error when coming from master /
AOF
- RM_Call in cluster mode ignores slot check when processing AOF
- Scripts ignore disk error when processing AOF
- Scripts **don't** ignore disk error on a replica, if the command comes
from clients other than the master
- SCRIPT KILL won't kill script coming from AOF
- Scripts **don't** skip OOM check on replica if the command comes from
clients other than the master
Note that Script, AOF, and module clients don't reach processCommand,
which is why some of the changes don't actually have any implications.
Note, reverting the change done to processCommand in 2f4240b9d9
should be dead code due to the above mentioned fact. | ee220599b0481cf4b10e8a5293691258fd6f35a6 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/ee220599b0481cf4b10e8a5293691258fd6f35a6 | 2022-04-20 11:11:21+03:00 |
Backport Lua 5.2.2 stack overflow fix. (#7733)
This fixes the issue described in CVE-2014-5461. At this time we cannot
confirm that the original issue has a real impact on Redis, but it is
included as an extra safety measure. | d75ad774a92bd7de0b9448be3d622d7a13b7af27 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/d75ad774a92bd7de0b9448be3d622d7a13b7af27 | 2020-08-31 20:42:46+03:00 |
Fix race condition in psync2-pingoff test (#9712)
Test failed on freebsd:
```
*** [err]: Make the old master a replica of the new one and check conditions in tests/integration/psync2-pingoff.tcl
Expected '162' to be equal to '176' (context: type eval line 18 cmd {assert_equal [status $R(0) master_repl_offset] [status $R(1) master_repl_offset]} proc ::test)
```
There are two possible race conditions in the test.
1. The code waits for sync_full to increment, and assumes that means the
master did the fork. But in fact there are cases where the master will increment
that sync_full counter (after replica asks for sync), but will see that
there's already a fork running and will delay the fork creation.
In this case the INCR will be executed before the fork happens, so it'll
not be in the command stream. Solve that by waiting for `master_link_status: up`
on the replica before the INCR.
2. The repl-ping-replica-period is still high (1 second), so there's a chance the
master will send an additional PING between the two calls to INFO (the line that
fails is the one that samples INFO from both servers). So there's a chance one of
them will have an incremented offset due to PING and the other won't have it yet.
In theory we can wait for the repl_offset to match, but then we risk facing a
situation where that race will hide an offset mismatch. So instead, I think we
should just change repl-ping-replica-period to prevent further pings from being pushed.
Co-authored-by: Oran Agra <[email protected]> | cea7809ceac68b58c6d538e9e1898e3421a26ade | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/cea7809ceac68b58c6d538e9e1898e3421a26ade | 2021-11-01 22:07:08+08:00 |
Turn into replica on SETSLOT (#10489)
* Fix race condition where node loses its last slot and turns into replica
When a node has lost its last slot and finds out from the SETSLOT command
before the cluster bus PONG from the new owner arrives. In this case, the
node didn't turn itself into a replica of the new slot owner.
This commit adds the same logic to the SETSLOT command as already exists
for the cluster bus PONG processing.
* Revert "Fix new / failing cluster slot migration test (#10482)"
This reverts commit 0b21ef8d49c47a5dd47a0bcc0cf1b2c235f24fd0.
In this test, the old slot owner finds out that it has lost its last
slot in a nondeterministic way: either from the cluster bus PONG from the
new slot owner or sometimes from a SETSLOT command from redis-cli. In
both cases, the result should be the same and the old owner should
turn itself into a replica of the new slot owner. | b53c7f2c0bb8692fd30c59348190c2bd897958a0 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b53c7f2c0bb8692fd30c59348190c2bd897958a0 | 2022-04-02 23:58:07+02:00 |
performEvictions: mem_freed may be negative (#7908)
If 'delta' is negative 'mem_freed' may underflow and cause
the while loop to exit prematurely (and not evict enough
memory).
mem_freed can be negative when:
1. We use lazy free (consuming memory by appending to a list)
2. A thread does an allocation between the two calls to zmalloc_used_memory.
| 37fd3d40ae5168eba5bfb5e5e966e1ac9aab990c | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/37fd3d40ae5168eba5bfb5e5e966e1ac9aab990c | 2020-10-13 18:50:57+02:00 |
Clean Lua stack before parsing call reply to avoid crash on a call with many arguments (#9809)
Commit 0f8b634cd (CVE-2021-32626, released in 6.2.6, 6.0.16, 5.0.14)
fixes an invalid memory write issue by using the `lua_checkstack` API to make
sure the Lua stack does not overflow. This fix was added in 3 places:
1. `luaReplyToRedisReply`
2. `ldbRedis`
3. `redisProtocolToLuaType`
On the first 2 functions, `lua_checkstack` is handled gracefully while the
last is handled with an assert and a statement that this situation can
not happen (only with a misbehaving module):
> the Redis reply might be deep enough to explode the LUA stack (notice
that currently there is no such command in Redis that returns such a nested
reply, but modules might do it)
The issue that was discovered is that user arguments are also considered part
of the stack, and so the following script (for example) makes the assertion reachable:
```
local a = {}
for i=1,7999 do
a[i] = 1
end
return redis.call("lpush", "l", unpack(a))
```
This is a regression because such a script would have worked before and now
it's crashing Redis. The solution is to clear the function arguments from the Lua
stack which makes the original assumption true and the assertion unreachable. | 6b0b04f1b265c429bd19d6c99c9e7e2921723601 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/6b0b04f1b265c429bd19d6c99c9e7e2921723601 | 2021-11-28 11:59:39+02:00 |
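A short C sketch of the graceful `lua_checkstack` pattern mentioned above, using a hypothetical `push_many` helper (not the Redis code): grow the stack before pushing and report failure instead of hitting an assertion.
```c
#include <lua.h>

/* Hypothetical helper, not Redis code: make room on the Lua stack before
 * pushing 'n' values, and fail gracefully instead of tripping an assert
 * when the stack cannot grow. */
static int push_many(lua_State *L, int n) {
    if (!lua_checkstack(L, n))
        return 0;                 /* caller reports a regular error instead */
    for (int i = 1; i <= n; i++)
        lua_pushinteger(L, i);    /* stand-in for converting reply elements */
    return 1;
}
```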
Prevent RDB autosave from overwriting full resync results
During the full database resync we may still have unsaved changes
on the receiving side. This causes a race condition between
synced data rename/load and the rename of rdbSave tempfile.
| 98a64523c451d9f6519342b78a857a4aa729cf58 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/98a64523c451d9f6519342b78a857a4aa729cf58 | 2018-09-19 19:58:39+03:00 |
Merge pull request #5549 from oranagra/fix_test_races
fix small test suite race conditions | 46a51cdcdc0bd92473163068c2ec3bef4dffe63c | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/46a51cdcdc0bd92473163068c2ec3bef4dffe63c | 2018-11-28 18:17:05+01:00 |
fix hincrbyfloat not to create a key if the new value is invalid (#11149)
Check the validity of the value before performing the create operation;
this prevents new data from being generated when the request fails to execute.
Co-authored-by: Oran Agra <[email protected]>
Co-authored-by: chendianqiang <[email protected]>
Co-authored-by: Binbin <[email protected]> | bc7fe41e5857a0854d524e2a63a028e9394d2a5c | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/bc7fe41e5857a0854d524e2a63a028e9394d2a5c | 2022-08-28 16:33:41+08:00 |
Swap '\r\n' with spaces when returning a big number reply from Lua script. (#9870)
The issue can only happen with a bad Lua script that claims to return
a big number while actually returning data which is not a big number (contains
chars that are not digits). Such a thing will not cause an issue unless the big
number value contains `\r\n` and then it messes up the resp3 structure. The fix
replaces all the appearances of '\r\n' with spaces.
Such an issue can also happen on simple string or error replies but those
already handle it the same way this PR does (replace `\r\n` with spaces).
Other reply types are not vulnerable to this issue because they do not
count on free text that is terminated with `\r\n` (either they contain the
bulk length, like string replies, or they are typed replies that can not inject free
text, like booleans or numbers).
The issue only exists on unstable branch, big number reply on Lua script
was not yet added to any official release. | b8e82d205b010c2d8c89457ad69e25b1ba620fd0 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b8e82d205b010c2d8c89457ad69e25b1ba620fd0 | 2021-11-30 12:27:05+02:00 |
Lua debugger: try to eval as expression first.
It's handy to just eval "5+5" without the return and see it printed on
the screen as result. However prepending "return" does not always result
into valid Lua code. So what we do is to exploit a common Lua community
trick of trying to compile with return prepended, and if compilation
fails then it's not an expression that can be returned, so we try again
without prepending "return". Works great apparently.
| e6eb6eadec7e34df82e2a5bd502fe9c3487d7e61 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/e6eb6eadec7e34df82e2a5bd502fe9c3487d7e61 | 2015-11-11 22:29:56+01:00 |
Avoid assertion when MSETNX is used with the same key twice (CVE-2023-28425) (#11940)
Using the same key twice in MSETNX command would trigger an assertion.
This reverts #11594 (introduced in Redis 7.0.8) | 48e0d4788434833b47892fe9f3d91be7687f25c9 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/48e0d4788434833b47892fe9f3d91be7687f25c9 | 2023-03-20 18:50:44+02:00 |
Sentinel: persist its unique ID across restarts.
Previously Sentinels always changed unique ID across restarts, relying
on the server.runid field. This is not a good idea, and forced Sentinel
to rely on detection of duplicated Sentinels and a potentially dangerous
clean-up and re-add operation of the Sentinel instance that was
rebooted.
Now the ID is generated at the first start and persisted in the
configuration file, so that a given Sentinel will have its unique
ID forever (unless the configuration is manually deleted or there is a
filesystem corruption).
| 794fc4c9a8b2e4721196df341b84cb0569ab0efa | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/794fc4c9a8b2e4721196df341b84cb0569ab0efa | 2015-05-06 16:19:14+02:00 |
Redis Function Libraries (#10004)
# Redis Function Libraries
This PR implements Redis Functions Libraries as describe on: https://github.com/redis/redis/issues/9906.
The purpose of libraries is to provide better code sharing between functions by allowing multiple
functions to be created in a single command. Functions that were created together can safely share code between
each other without worrying about compatibility issues and versioning.
Creating a new library is done using the 'FUNCTION LOAD' command (the full API is described below).
This PR introduces a new struct called libraryInfo; libraryInfo holds information about a library:
* name - name of the library
* engine - engine used to create the library
* code - library code
* description - library description
* functions - the functions exposed by the library
When Redis gets the `FUNCTION LOAD` command it creates a new empty libraryInfo.
Redis passes the `CODE` to the relevant engine alongside the empty libraryInfo.
As a result, the engine will create one or more functions by calling 'libraryCreateFunction'.
The new function will be added to the newly created libraryInfo. So far everything is happening
locally on the libraryInfo so it is easy to abort the operation (in case of an error) by simply
freeing the libraryInfo. After the library info is fully constructed we start the joining phase, by
which we join the new library to the other libraries that currently exist on Redis.
The joining phase makes sure there is no function collision and adds the library to the
librariesCtx (renamed from functionCtx). LibrariesCtx is used all around the code in the exact
same way as functionCtx was used (with respect to RDB loading, replication, ...).
The only difference is that apart from the function dictionary (which maps function name to functionInfo
object), the librariesCtx also contains a libraries dictionary that maps library name to libraryInfo object.
## New API
### FUNCTION LOAD
`FUNCTION LOAD <ENGINE> <LIBRARY NAME> [REPLACE] [DESCRIPTION <DESCRIPTION>] <CODE>`
Create a new library with the given parameters:
* ENGINE - Engine name to use to create the library.
* LIBRARY NAME - The new library name.
* REPLACE - If the library already exists, replace it.
* DESCRIPTION - Library description.
* CODE - Library code.
Return "OK" on success, or error on the following cases:
* Library name already taken and REPLACE was not used
* Name collision with another existing library (even if REPLACE was used)
* Library registration failed by the engine (usually compilation error)
## Changed API
### FUNCTION LIST
`FUNCTION LIST [LIBRARYNAME <LIBRARY NAME PATTERN>] [WITHCODE]`
The command was modified to also allow getting libraries' code (so the `FUNCTION INFO` command is no longer
needed and was removed). In addition, the command gets an optional argument, `LIBRARYNAME`, which allows you to
only get libraries that match the given `LIBRARYNAME` pattern. By default, it returns all libraries.
### INFO MEMORY
Added number of libraries to `INFO MEMORY`
### Commands flags
The `DENYOOM` flag was set on `FUNCTION LOAD` and `FUNCTION RESTORE`. We consider those commands
as commands that add new data to the dataset (functions are data), so we want to disallow
running them when Redis is out of memory.
## Removed API
* FUNCTION CREATE - Decided on https://github.com/redis/redis/issues/9906
* FUNCTION INFO - Decided on https://github.com/redis/redis/issues/9899
## Lua engine changes
When the Lua engine gets the code given on the `FUNCTION LOAD` command, it immediately runs it; we call
this run the loading run. The loading run is not a usual script run: it is not possible to invoke any
Redis command from within it.
Instead, there is a new API provided by the `library` object. The new APIs are:
* `redis.log` - behaves the same as the regular `redis.log`
* `redis.register_function` - registers a new function to the library
The loading run's purpose is to register functions using the new `redis.register_function` API.
Any attempt to use any other API will result in an error. In addition, the load run has a time
limit of 500ms; an error is raised on timeout and the entire operation is aborted.
### `redis.register_function`
`redis.register_function(<function_name>, <callback>, [<description>])`
This new API allows users to register a new function that will be linked to the newly created library.
This API can only be called during the load run (see definition above). Any attempt to use it outside
of the load run will result in an error.
The parameters passed to the API are:
* function_name - Function name (must be a Lua string)
* callback - Lua function object that will be called when the function is invoked using fcall/fcall_ro
* description - Function description, optional (must be a Lua string).
### Example
The following example creates a library called `lib` with 2 functions, `f1` and `f2`, returning 1 and 2 respectively:
```
local function f1(keys, args)
    return 1
end
local function f2(keys, args)
    return 2
end
redis.register_function('f1', f1)
redis.register_function('f2', f2)
```
Notice: Unlike `eval`, functions inside a library get KEYS and ARGV as arguments to the
functions and not as globals.
### Technical Details
On the load run we only want the user to be able to call a whitelist of APIs. This way, in
the future, if new APIs are added, they will not be available to the load run
unless specifically added to this whitelist. We put the whitelist on the `library` object and
make sure the `library` object is only available to the load run by using the [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv) API. This API allows us to set
the `globals` of a function (and all the functions it creates). Before starting the load run we
create a new fresh Lua table (call it `g`) that only contains the `library` API (we make sure
to set global protection on this table just like the general global protection already exists
today), then we use [lua_setfenv](https://www.lua.org/manual/5.1/manual.html#lua_setfenv)
to set `g` as the global table of the load run. After the load run finishes we update `g`'s
metatable and set the `__index` and `__newindex` functions to be `_G` (Lua's default globals);
we also pop out the `library` object as we do not need it anymore.
This way, any function that was created during the load run (and will be invoked using `fcall`) will
see the default globals as it expects to see them and will not have the `library` API anymore.
An important outcome of this new approach is that now we can achieve a distinct global table
for each library (it is not yet like that but it is very easy to achieve it now). In the future we can
decide to remove global protection because globals on different libraries will not collide, or we
can choose to give different APIs to different libraries based on some configuration or input.
Notice that this technique was meant to prevent errors and was not meant to prevent a malicious
user from exploiting it. For example, the load run can still save the `library` object in some local
variable and then use it in the `fcall` context. To prevent such a malicious use, the C code also makes
sure it is running in the right context and raises an error if not. | 885f6b5cebf80108a857cd50a4b84f5daf013e29 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/885f6b5cebf80108a857cd50a4b84f5daf013e29 | 2022-01-06 13:39:38+02:00 |
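A hedged sketch of the sandboxing technique from the technical details above, using the Lua 5.1 `lua_setfenv` API: the library code is compiled, given a fresh environment table that exposes only a whitelisted `redis.register_function`, and then executed. `register_function_stub` and `run_loading_run` are placeholder names, not the real engine code.
```
#include <lua.h>
#include <lauxlib.h>

static int register_function_stub(lua_State *L) {
    /* In the real engine this would record the function in the library. */
    luaL_checkstring(L, 1);                 /* function name */
    luaL_checktype(L, 2, LUA_TFUNCTION);    /* callback */
    return 0;
}

int run_loading_run(lua_State *L, const char *code, size_t len) {
    if (luaL_loadbuffer(L, code, len, "@library") != 0) return -1;

    /* Fresh environment: only the whitelisted API is visible to the chunk. */
    lua_newtable(L);                                   /* env */
    lua_newtable(L);                                   /* env, redis */
    lua_pushcfunction(L, register_function_stub);
    lua_setfield(L, -2, "register_function");          /* redis.register_function */
    lua_setfield(L, -2, "redis");                      /* env.redis = redis */
    lua_setfenv(L, -2);                                /* set env on the chunk */

    /* Execute the loading run; on error the message is left on the stack. */
    return lua_pcall(L, 0, 0, 0) == 0 ? 0 : -1;
}
```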
Fix race condition in unit/introspection
Make sure monitor is attached in one connection before issuing commands to be monitored in another one | 97a2248309937a2cecb8b800af40526e06fc64c4 | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/97a2248309937a2cecb8b800af40526e06fc64c4 | 2015-08-11 22:56:17-07:00 |
Sanitize dump payload: ziplist, listpack, zipmap, intset, stream
When loading an encoded payload we will at least do a shallow validation to
check that the size that's encoded in the payload matches the size of the
allocation.
This lets us later use this encoded size to make sure the various offsets
inside encoded payload don't reach outside the allocation, if they do, we'll
assert/panic, but at least we won't segfault or smear memory.
We can also do 'deep' validation which runs on all the records of the encoded
payload and validates that they don't contain invalid offsets. This lets us
detect corruptions early and reject a RESTORE command rather than accepting
it and asserting (crashing) later when accessing that payload via some command.
configuration:
- adding ACL flag skip-sanitize-payload
- adding config sanitize-dump-payload [yes/no/clients]
For now, we don't have a good way to ensure MIGRATE in cluster resharding isn't
being slowed down by this sanitization, so I'm setting the default value to `no`,
but later on it should be set to `clients` by default.
changes:
- changing rdbReportError not to `exit` in RESTORE command
- adding a new stat to be able to later check if cluster MIGRATE isn't being
slowed down by sanitization.
| ca1c182567add4092e9cb6ea829e9c5193e8fd55 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/ca1c182567add4092e9cb6ea829e9c5193e8fd55 | 2020-08-13 16:41:05+03:00 |
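As a hedged illustration of the shallow validation idea (the size encoded in the payload must match the size of the allocation), here is a minimal check against a ziplist-style header; the macro names and the little-endian assumption are part of the example, not the shipped validation code.
```
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define ZIPLIST_HEADER_SIZE (sizeof(uint32_t)*2 + sizeof(uint16_t))
#define ZIPLIST_END_BYTE    0xFF

int ziplist_shallow_validate(const unsigned char *zl, size_t alloc_size) {
    if (alloc_size < ZIPLIST_HEADER_SIZE + 1) return 0;  /* too small */

    uint32_t zlbytes;
    memcpy(&zlbytes, zl, sizeof(zlbytes));   /* header stores little-endian;
                                                assume a little-endian host
                                                for this sketch */
    if (zlbytes != alloc_size) return 0;     /* encoded size != allocation */
    if (zl[alloc_size - 1] != ZIPLIST_END_BYTE) return 0; /* missing terminator */
    return 1;                                /* safe for deeper validation */
}
```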
Print IP and port on Possible SECURITY ATTACK detected (#12024)
Add a print statement to indicate which IP/port is sending the attack, so that the offending connection can be tracked
down, if necessary. | f3e16a1a1eac082aa3c54f24eaada3f6bbbd808c | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/f3e16a1a1eac082aa3c54f24eaada3f6bbbd808c | 2023-04-13 09:23:00+08:00 |
solve race conditions in psync2-pingoff test (#8720)
Another test race condition in the macOS tests.
The test was waiting for PINGs to be generated and put on the replication stream,
but waiting for 1 or 2 seconds doesn't really guarantee that.
Then the test that expected 6 full syncs found only 4. | cd81dcf18bf9dafbe0edf29d9bba8c626cd2459e | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/cd81dcf18bf9dafbe0edf29d9bba8c626cd2459e | 2021-03-30 11:41:06+03:00 |
RDB: handle encoding errors with rdbExitReportCorruptRDB().
Without this change, the diskless replicas, when loading RDB files from
the socket, will not abort when a broken RDB file gets loaded. This is
potentially unsafe, because right now Redis is not able to guarantee
that encoding errors are safe from the POV of memory corruptions (for
instance the LZF library may not be safe against untrusted data?) so
better to abort when the RDB file we are going to load is corrupted.
I/O errors, instead, are still returned to the caller without aborting,
so that in case of a short read the diskless replica can try again.
| bd0f06c18ccea62cb5e0fff018a5eaa876d3f90e | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/bd0f06c18ccea62cb5e0fff018a5eaa876d3f90e | 2019-07-18 18:51:45+02:00 |
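A minimal sketch of the policy described above, assuming a simplified loader: corruption aborts the process, while I/O errors (e.g. a short read on the replication socket) are returned to the caller so the load can be retried. The function names and the type-range check are illustrative, not the real RDB loader.
```
#include <stdio.h>
#include <stdlib.h>

enum rdb_status { RDB_OK = 0, RDB_IO_ERROR = -1 };

static void exit_report_corrupt_rdb(const char *msg) {
    fprintf(stderr, "Corrupt RDB detected: %s\n", msg);
    exit(1);                       /* unsafe to keep going: abort loading */
}

enum rdb_status rdb_load_object(FILE *fp) {
    unsigned char type;
    if (fread(&type, 1, 1, fp) != 1) {
        /* Short read / socket error: let the caller decide to retry. */
        return RDB_IO_ERROR;
    }
    if (type > 200) {              /* illustrative bound for a "known type" */
        /* Encoding error: cannot trust anything that follows. */
        exit_report_corrupt_rdb("unknown object type");
    }
    /* ... decode the object payload here ... */
    return RDB_OK;
}
```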
Fix failing cluster tests (#9707)
Fix failures introduced by #9695 which was an attempt to solve failures introduced by #9679.
And alternative to #9703 (i didn't like the extra argument to kill_instance).
Reverting #9695.
Instead of stopping AOF on all terminations, stop it only on the two which need it.
Do it as part of the test rather than the infra (it was odd that kill_instance used `R`
to communicate with the instance).
Note that the original purpose of these tests was to trigger a crash, but that upsets
Valgrind, so in Redis 6.2 I changed it to use SIGTERM; I now rename the tests
(removing "kill" and "crash").
Also add some colors to failures, and the word "FAILED" so that it's searchable.
And solve a semi-related race condition in 14-consistency-check.tcl | 48d54265ce16cff0764b0aad7b56e091401b3d4b | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/48d54265ce16cff0764b0aad7b56e091401b3d4b | 2021-10-31 19:22:21+02:00 |
use embedded string object and more efficient ll2string for long long value convert to string (#12250)
A value of type long long is always less than 21 bytes when converted to a
string, so it always meets the conditions for using an embedded string object,
which always gives a memory reduction and a performance gain (fewer calls
to the heap allocator).
Additionally, for the conversion of the long long type to sds, we also use a faster
algorithm (the one in util.c instead of the one that used to be in sds.c).
For the DECR command on 32-bit Redis, we get about a 5.7% performance
improvement. There will also be some performance gains for some commands
that heavily use sdscatfmt to convert numbers, such as INFO.
Co-authored-by: Oran Agra <[email protected]> | 93708c7f6a0e702657e4f296ea6fc299225eea8d | redis | neuralsentry | 0 | https://github.com/redis/redis | https://github.com/redis/redis/commit/93708c7f6a0e702657e4f296ea6fc299225eea8d | 2023-06-20 20:14:44+08:00 |
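A hedged sketch of the reasoning above: the decimal form of a long long fits in a 21-byte buffer (19 digits plus an optional sign, plus the terminator), so the string is always short enough to be embedded next to its header in a single allocation. The `embedded_str` layout and the `snprintf`-based conversion are simplifications, not the actual robj/ll2string code.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LONG_STR_SIZE 21           /* 19 digits + sign = 20 chars, plus '\0' */

typedef struct {
    size_t len;
    char buf[LONG_STR_SIZE];       /* value embedded in the same allocation */
} embedded_str;

embedded_str *create_from_longlong(long long value) {
    char tmp[LONG_STR_SIZE];
    /* snprintf stands in for the faster ll2string-style conversion. */
    int n = snprintf(tmp, sizeof(tmp), "%lld", value);
    if (n < 0 || (size_t)n >= sizeof(tmp)) return NULL;

    embedded_str *o = malloc(sizeof(*o));
    if (!o) return NULL;
    o->len = (size_t)n;
    memcpy(o->buf, tmp, (size_t)n + 1);
    return o;
}
```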
Add sanitizer support and clean up sanitizer findings (#9601)
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with desired sanitizer : `make SANITIZER=undefined`
- There were some sanitizer findings, cleaned up codebase
- Added tests with address and undefined behavior sanitizers to daily CI.
- Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
Basically, there are three types of issues:
**1- Unaligned load/store**: Most probably, this issue may cause a crash on a platform that
does not support unaligned access. Redis does unaligned access only on supported platforms.
**2- Signed integer overflow.** Although a signed overflow issue can be problematic from time to time
and change how the compiler generates code, the current findings are mostly about signed shift or simple
addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue
as far as I can tell (checked generated code on godbolt.org).
**3- Minor leak** (redis-cli), **use-after-free** (just before calling exit()).
UB means nothing is guaranteed and it is risky to reason about program behavior, but I don't think any
of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
will be the real benefit. | b91d8b289bb64965c5eaa445809f9f49293e99c0 | redis | neuralsentry | 1 | https://github.com/redis/redis | https://github.com/redis/redis/commit/b91d8b289bb64965c5eaa445809f9f49293e99c0 | 2021-11-11 14:51:33+03:00 |
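As a hedged example of the signed-shift findings mentioned above (not an actual diff from the codebase), shifting a signed `1` into the sign bit is undefined behavior; building the mask from an unsigned constant avoids it.
```
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Undefined behavior: 1 is a signed int, so 1 << 31 overflows it. */
    /* uint32_t bad = 1 << 31; */

    /* Well-defined: start from an unsigned constant instead. */
    uint32_t good = 1u << 31;
    printf("%u\n", good);
    return 0;
}
```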