# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Note that the "include" option won't be rewritten by the "CONFIG REWRITE"
# command from an admin or Redis Sentinel. Since Redis always uses the last
# processed line as the value of a configuration directive, you'd better put
# includes at the beginning of this file to avoid overwriting config changes
# at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# Included paths may contain wildcards. All files matching the wildcards will
# be included in alphabetical order.
# Note that if an include path contains a wildcard but no files match it when
# the server is started, the include statement will be ignored and no error
# will be emitted. It is safe, therefore, to include wildcard files from empty
# directories.
#
# include /path/to/local.conf
# include /path/to/other.conf
# include /path/to/fragments/*.conf
#

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all available network interfaces on the host machine.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# Each address can be prefixed by "-", which means that redis will not fail to
# start if the address is not available. Being not available only refers to
# addresses that do not correspond to any network interface. Addresses that
# are already in use will always fail, and unsupported protocols will always
# be silently skipped.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses
# bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6
# bind * -::*                     # like the default, all available interfaces
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only on the
# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis
# will only be able to accept client connections from the same host that it is
# running on).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# COMMENT OUT THE FOLLOWING LINE.
#
# You will also need to set a password unless you explicitly disable protected
# mode.
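#
# Illustrative sketch only (hypothetical address and secret): if you do expose
# the instance on a non-loopback address, it is common to pair the wider bind
# with authentication rather than disabling protected mode, e.g.:
#
# bind 10.0.0.5 -::1
# requirepass use-a-long-random-secret-here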
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#>GNUNUX
#bind 127.0.0.1 -::1
bind 0.0.0.0
#GNUNUX
#tcp-keepalive 300
tcp-keepalive %%redis_tcp_keepalive
#GNUNUX
tls-port 6380
tls-cert-file /etc/pki/tls/certs/redis.crt
tls-key-file /etc/pki/tls/private/redis.key
tls-ca-cert-file /etc/pki/ca-trust/source/anchors/ca_Redis.crt

# Explicitly specify TLS versions to support. Allowed values are case
# insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"
# (OpenSSL >= 1.1.1) or any combination.
# To enable only TLSv1.2 and TLSv1.3, use:
#
# tls-protocols "TLSv1.2 TLSv1.3"

# Configure allowed ciphers. See the ciphers(1ssl) manpage for more information
# about the syntax of this string.
#
# Note: this configuration applies only to <= TLSv1.2.
#
# tls-ciphers DEFAULT:!MEDIUM

# Configure allowed TLSv1.3 ciphersuites. See the ciphers(1ssl) manpage for more
# information about the syntax of this string, and specifically for TLSv1.3
# ciphersuites.
#
# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256

# When choosing a cipher, use the server's preference instead of the client
# preference. By default, the server follows the client's preference.
#
# tls-prefer-server-ciphers yes

# By default, TLS session caching is enabled to allow faster and less expensive
# reconnections by clients that support it. Use the following directive to disable
# caching.
#
# tls-session-caching no

# Change the default number of TLS sessions cached. A zero value sets the cache
# to unlimited size. The default size is 20480.
#
# tls-session-cache-size 5000

# Change the default timeout of cached TLS sessions. The default timeout is 300
# seconds.
#
# tls-session-cache-timeout 60

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# When Redis is supervised by upstart or systemd, this parameter has no impact.
daemonize no

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#                        requires "expect stop" in your upstart job config
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#                        on startup, and updating Redis status on a regular
#                        basis.
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous pings back to your supervisor.
#
# The default is "no". To run under upstart/systemd, you can simply uncomment
# the line below:
#
# supervised auto
#>GNUNUX
supervised systemd
#GNUNUX
#pidfile /var/run/redis_6379.pid
pidfile /run/redis.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
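
# Illustrative only: the verbosity can also be raised temporarily at runtime
# while debugging, without editing this file, and lowered again afterwards:
#
# redis-cli config set loglevel debug
# redis-cli config set loglevel notice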
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
#GNUNUX
logfile /var/log/redis/redis.log

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
#>GNUNUX
syslog-enabled yes

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY and syslog logging is
# disabled. Basically this means that normally a logo is displayed only in
# interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show an
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo no

# By default, Redis modifies the process title (as seen in 'top' and 'ps') to
# provide some runtime information. It is possible to disable this and leave
# the process name as executed by setting the following to no.
set-proc-title yes

# When changing the process title, Redis uses the following template to construct
# the modified title.
#
# Template variables are specified in curly brackets. The following variables are
# supported:
#
# {title}       Name of process as executed if parent, or type of child process.
# {listen-addr} Bind address or '*' followed by TCP or TLS port listening on, or
#               Unix socket if only that's available.
# {server-mode} Special mode, i.e. "[sentinel]" or "[cluster]".
# {port}        TCP port listening on, or 0.
# {tls-port}    TLS port listening on, or 0.
# {unixsocket}  Unix domain socket listening on, or "".
# {config-file} Name of configuration file used.
#
proc-title-template "{title} {listen-addr} {server-mode}"

################################ SNAPSHOTTING  ################################

# Save the DB to disk.
#
# save <seconds> <changes> [<seconds> <changes> ...]
#
# Redis will save the DB if the given number of seconds elapsed and it
# surpassed the given number of write operations against the DB.
#
# Snapshotting can be completely disabled with a single empty string argument
# as in following example:
#
# save ""
#
# Unless specified otherwise, by default Redis will save the DB:
#   * After 3600 seconds (an hour) if at least 1 change was performed
#   * After 300 seconds (5 minutes) if at least 100 changes were performed
#   * After 60 seconds if at least 10000 changes were performed
#
# You can set these explicitly by uncommenting the following line.
#
# save 3600 1 300 100 60 10000
#>GNUNUX
%if %%redis_save
save 900 1 300 10 60 10000
%end if
#GNUNUX
sanitize-dump-payload clients
#GNUNUX
#dir /var/lib/redis
dir /srv/redis

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
# masteruser <username>
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error
#    "MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'"
#    to all data access commands, excluding commands such as:
#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,
#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,
#    HOST and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# New replicas and reconnecting replicas that are not able to continue the
# replication process just receiving differences, need to do what is called a
# "full synchronization". An RDB file is transmitted from the master to the
# replicas.
#
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#    file on disk. Later the file is transferred by the parent
#    process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#    RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child
# producing the RDB file finishes its work. With diskless replication instead
# once the transfer starts, new replicas arriving will be queued and a new
# transfer will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple
# replicas will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync yes

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the
# server waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# When diskless replication is enabled with a delay, it is possible to let
# the replication start before the maximum delay is reached if the maximum
# number of replicas expected have connected. Default of 0 means that the
# maximum is not defined and Redis will wait the full delay.
repl-diskless-sync-max-replicas 0

# -----------------------------------------------------------------------------
# WARNING: RDB diskless load is experimental. Since in this setup the replica
# does not immediately store an RDB on disk, it may cause data loss during
# failovers. RDB diskless load + Redis modules not handling I/O reads may also
# cause Redis to abort in case of I/O errors during the initial synchronization
# stage with the master. Use only if you know what you are doing.
# -----------------------------------------------------------------------------
#
# Replica can load the RDB it reads from the replication link directly from the
# socket, or store the RDB to a file and read that file after it was completely
# received from the master.
#
# In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's
# Copy on Write memory and replica buffers).
# However, parsing the RDB file directly from the socket may mean that we have
# to flush the contents of the current database before the full rdb was
# received. For this reason we have the following options:
#
# "disabled"    - Don't use diskless load (store the rdb file to the disk first)
# "on-empty-db" - Use diskless load only when it is completely safe.
# "swapdb"      - Keep current db contents in RAM while parsing the data directly
#                 from the socket. Replicas in this mode can keep serving current
#                 data set while replication is in progress, except for cases where
#                 they can't recognize master as having a data set from same
#                 replication history.
#                 Note that this requires sufficient memory, if you don't have it,
#                 you risk an OOM kill.
repl-diskless-load disabled

# The master sends PINGs to its replicas in a predefined interval. It's possible
# to change this interval with the repl_ping_replica_period option. The default
# value is 10 seconds.
#
# repl-ping-replica-period 10
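
# Illustrative sketch only (the values are assumptions, not recommendations):
# a master with slow disks, a fast network and several replicas that usually
# resync together might combine the diskless directives above like this:
#
# repl-diskless-sync yes
# repl-diskless-sync-delay 5
# repl-diskless-sync-max-replicas 2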
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica. The default
# value is 60 seconds.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a
# replica wants to reconnect again, often a full resync is not needed, but a
# partial resync is enough, just passing the portion of data the replica
# missed while disconnected.
#
# The bigger the replication backlog, the longer the replica can endure the
# disconnect and later be able to perform a partial resynchronization.
#
# The backlog is only allocated if there is at least one replica connected.
#
# repl-backlog-size 1mb

# After a master has no connected replicas for some time, the backlog will be
# freed. The following option configures the amount of seconds that need to
# elapse, starting from the time the last replica disconnected, for the backlog
# buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with other replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO
# output. It is used by Redis Sentinel in order to select a replica to promote
# into a master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel
# will pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
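
# Illustrative sizing sketch for the replication backlog discussed above (the
# rule of thumb is an assumption, adjust to your workload): the backlog should
# roughly hold write_traffic_per_second * longest_expected_disconnect. For
# about 1 MB/s of writes and disconnections of up to 60 seconds:
#
# repl-backlog-size 64mb
# repl-backlog-ttl 3600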
# The propagation error behavior controls how Redis will behave when it is
# unable to handle a command being processed in the replication stream from a master
# or processed while reading from an AOF file. Errors that occur during propagation
# are unexpected, and can cause data inconsistency. However, there are edge cases
# in earlier versions of Redis where it was possible for the server to replicate or persist
# commands that would fail on future versions. For this reason the default behavior
# is to ignore such errors and continue processing commands.
#
# If an application wants to ensure there is no data divergence, this configuration
# should be set to 'panic' instead. The value can also be set to 'panic-on-replicas'
# to only panic when a replica encounters an error on the replication stream. One of
# these two panic values will become the default value in the future once there are
# sufficient safety mechanisms in place to prevent false positive crashes.
#
# propagation-error-behavior ignore

# Replica ignore disk write errors controls the behavior of a replica when it is
# unable to persist a write command received from its master to disk. By default,
# this configuration is set to 'no' and will crash the replica in this condition.
# It is not recommended to change this default, however in order to be compatible
# with older versions of Redis this config can be toggled to 'yes' which will just
# log a warning and execute the write command it got from the master.
#
# replica-ignore-disk-write-errors no

# -----------------------------------------------------------------------------
# By default, Redis Sentinel includes all replicas in its reports. A replica
# can be excluded from Redis Sentinel's announcements. An unannounced replica
# will be ignored by the 'sentinel replicas <master>' command and won't be
# exposed to Redis Sentinel's clients.
#
# This option does not change the behavior of replica-priority. Even with
# replica-announced set to 'no', the replica can be promoted to master. To
# prevent this behavior, set replica-priority to 0.
#
# replica-announced yes

# It is possible for a master to stop accepting writes if there are fewer than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP address and port normally reported by a replica is
# obtained in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may actually be reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

############################### KEYS TRACKING #################################

# Redis implements server assisted support for client side caching of values.
# This is implemented using an invalidation table that remembers, using
# a radix key indexed by key name, what clients have which keys. In turn
# this is used in order to send invalidation messages to clients. Please
# check this page to understand more about the feature:
#
#   https://redis.io/topics/client-side-caching
#
# When tracking is enabled for a client, all the read only queries are assumed
# to be cached: this will force Redis to store information in the invalidation
# table. When keys are modified, such information is flushed away, and
# invalidation messages are sent to the clients. However if the workload is
# heavily dominated by reads, Redis could use more and more memory in order
# to track the keys fetched by many clients.
#
# For this reason it is possible to configure a maximum fill value for the
# invalidation table. By default it is set to 1M of keys, and once this limit
# is reached, Redis will start to evict keys in the invalidation table
# even if they were not modified, just to reclaim memory: this will in turn
# force the clients to invalidate the cached values. Basically the table
# maximum size is a trade off between the memory you want to spend server
# side to track information about who cached what, and the ability of clients
# to retain cached objects in memory.
#
# If you set the value to 0, it means there are no limits, and Redis will
# retain as many keys as needed in the invalidation table.
# In the "stats" INFO section, you can find information about the number of
# keys in the invalidation table at every given moment.
#
# Note: when key tracking is used in broadcasting mode, no memory is used
# in the server side so this setting is useless.
#
# tracking-table-max-keys 1000000

################################## SECURITY ###################################

# Warning: since Redis is pretty fast, an outside user can try up to
# 1 million passwords per second against a modern box. This means that you
# should use very strong passwords, otherwise they will be very easy to break.
# Note that because the password is really a shared secret between the client
# and the server, and should not be memorized by any human, the password
# can be easily a long string from /dev/urandom or whatever, so by using a
# long and unguessable password no brute force attack will be possible.

# Redis ACL users are defined in the following format:
#
#   user <username> ... acl rules ...
#
# For example:
#
#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
#
# The special username "default" is used for new connections. If this user
# has the "nopass" rule, then new connections will be immediately authenticated
# as the "default" user without the need of any password provided via the
# AUTH command. Otherwise if the "default" user is not flagged with "nopass"
# the connections will start in not authenticated state, and will require
# AUTH (or the HELLO command AUTH option) in order to be authenticated and
# start to work.
#
# The ACL rules that describe what a user can do are the following:
#
#  on           Enable the user: it is possible to authenticate as this user.
#  off          Disable the user: it's no longer possible to authenticate
#               with this user, however the already authenticated connections
#               will still work.
#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.
#  sanitize-payload         RESTORE dump-payload is sanitized (default).
#  +<command>   Allow the execution of that command.
#               May be used with `|` for allowing subcommands (e.g "+config|get")
#  -<command>   Disallow the execution of that command.
#               May be used with `|` for blocking subcommands (e.g "-config|set")
#  +@<category> Allow the execution of all the commands in such category,
#               with valid categories being @admin, @set, @sortedset, ...
#               and so forth, see the full list in the server.c file where
#               the Redis command table is described and defined.
#               The special category @all means all the commands, both those
#               currently present in the server and those that will be loaded
#               in the future via modules.
#  +<command>|first-arg  Allow a specific first argument of an otherwise
#                        disabled command. It is only supported on commands with
#                        no sub-commands, and is not allowed as negative form
#                        like -SELECT|1, only additive starting with "+". This
#                        feature is deprecated and may be removed in the future.
#  allcommands  Alias for +@all. Note that it implies the ability to execute
#               all the future commands loaded via the modules system.
#  nocommands   Alias for -@all.
#  ~<pattern>   Add a pattern of keys that can be mentioned as part of
#               commands. For instance ~* allows all the keys. The pattern
#               is a glob-style pattern like the one of KEYS.
#               It is possible to specify multiple patterns.
#  %R~<pattern> Add key read pattern that specifies which keys can be read
#               from.
#  %W~<pattern> Add key write pattern that specifies which keys can be
#               written to.
#  allkeys      Alias for ~*
#  resetkeys    Flush the list of allowed keys patterns.
#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be
#               accessed by the user. It is possible to specify multiple channel
#               patterns.
#  allchannels  Alias for &*
#  resetchannels  Flush the list of allowed channel patterns.
#  ><password>  Add this password to the list of valid passwords for the user.
#               For example >mypass will add "mypass" to the list.
#               This directive clears the "nopass" flag (see later).
#  <<password>  Remove this password from the list of valid passwords.
#  nopass       All the set passwords of the user are removed, and the user
#               is flagged as requiring no password: it means that every
#               password will work against this user. If this directive is
#               used for the default user, every new connection will be
#               immediately authenticated with the default user without
#               any explicit AUTH command required. Note that the "resetpass"
#               directive will clear this condition.
#  resetpass    Flush the list of allowed passwords. Moreover removes the
#               "nopass" status. After "resetpass" the user has no associated
#               passwords and there is no way to authenticate without adding
#               some password (or setting it as "nopass" later).
#  reset        Performs the following actions: resetpass, resetkeys, off,
#               -@all. The user returns to the same state it has immediately
#               after its creation.
#  (<options>)  Create a new selector with the options specified within the
#               parentheses and attach it to the user. Each option should be
#               space separated. The first character must be ( and the last
#               character must be ).
#  clearselectors  Remove all of the currently attached selectors.
#                  Note this does not change the "root" user permissions,
#                  which are the permissions directly applied onto the
#                  user (outside the parentheses).
#
# ACL rules can be specified in any order: for instance you can start with
# passwords, then flags, or key patterns. However note that the additive
# and subtractive rules will CHANGE MEANING depending on the ordering.
# For instance see the following example:
#
#   user alice on +@all -DEBUG ~* >somepassword
#
# This will allow "alice" to use all the commands with the exception of the
# DEBUG command, since +@all added all the commands to the set of the commands
# alice can use, and later DEBUG was removed. However if we invert the order
# of the two ACL rules the result will be different:
#
#   user alice on -DEBUG +@all ~* >somepassword
#
# Now DEBUG was removed when alice had yet no commands in the set of allowed
# commands, later all the commands are added, so the user will be able to
# execute everything.
#
# Basically ACL rules are processed left-to-right.
#
# The following is a list of command categories and their meanings:
# * keyspace - Writing or reading from keys, databases, or their metadata
#     in a type agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,
#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,
#     key or metadata will also have the `write` category. Commands that only read
#     the keyspace, key or metadata will have the `read` category.
# * read - Reading from keys (values or metadata). Note that commands that don't
#     interact with keys, will not have either `read` or `write`.
# * write - Writing to keys (values or metadata)
# * admin - Administrative commands. Normal applications will never need to use
#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.
# * dangerous - Potentially dangerous (each should be considered with care for
#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,
#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.
# * connection - Commands affecting the connection or other connections.
#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.
# * blocking - Potentially blocking the connection until released by another
#     command.
# * fast - Fast O(1) commands. May loop on the number of arguments, but not the
#     number of elements in the key.
# * slow - All commands that are not Fast.
# * pubsub - PUBLISH / SUBSCRIBE related
# * transaction - WATCH / MULTI / EXEC related commands.
# * scripting - Scripting related.
# * set - Data type: sets related.
# * sortedset - Data type: zsets related.
# * list - Data type: lists related.
# * hash - Data type: hashes related.
# * string - Data type: strings related.
# * bitmap - Data type: bitmaps related.
# * hyperloglog - Data type: hyperloglog related.
# * geo - Data type: geo related.
# * stream - Data type: streams related.
#
# For more information about ACL configuration please refer to
# the Redis web site at https://redis.io/topics/acl
#>GNUNUX
user %%account.username on >%%account.password ~* &* +@all

# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility
# layer on top of the new ACL system. The option effect will be just setting
# the password for the default user. Clients will still authenticate using
# AUTH <password> as usually, or more explicitly with AUTH default <password>
# if they follow the new protocol: both will work.
#
# The requirepass is not compatible with aclfile option and the ACL LOAD
# command, these will cause requirepass to be ignored.
#
# requirepass foobared
#>GNUNUX
requirepass %%account.password
#GNUNUX
maxclients %%redis_max_clients

############################## MEMORY MANAGEMENT #################################
#
#>GNUNUX
maxmemory %%{redis_max_memory}mb
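
# Illustrative only: the memory limit can also be inspected and adjusted at
# runtime, which is handy when tuning the value fed to the template above:
#
# redis-cli config get maxmemory
# redis-cli config set maxmemory 2gb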
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select one from the following behaviors:
#
# volatile-lru -> Evict using approximated LRU, only keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key having an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, when there are no suitable keys for
# eviction, Redis will return an error on write operations that require
# more memory. These are usually commands that create new keys, add data or
# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,
# SORT (due to the STORE argument), and EXEC (if the transaction includes any
# command that requires memory).
#
# The default is:
#
# maxmemory-policy noeviction
#>GNUNUX
maxmemory-policy %%redis_memory_policy

################################ LATENCY MONITOR ##############################

# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

################################ LATENCY TRACKING ##############################

# The Redis extended latency monitoring tracks the per command latencies and enables
# exporting the percentile distribution via the INFO latencystats command,
# and cumulative latency distributions (histograms) via the LATENCY command.
#
# By default, the extended latency monitoring is enabled since the overhead
# of keeping track of the command latency is very small.
# latency-tracking yes

# By default the exported latency percentiles via the INFO latencystats command
# are the p50, p99, and p999.
# latency-tracking-info-percentiles 50 99 99.9

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at https://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  n     New key events (Note: not included in the 'A' class)
#  t     Stream commands
#  d     Module key type events
#  m     Key-miss events (Note: It is not included in the 'A' class)
#  A     Alias for "g$lshzxetd", so that the "AKE" string means all the events
#        (Except key-miss events which are excluded from 'A' due to their
#        unique nature).
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
#  notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
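
# Illustrative only: with "notify-keyspace-events Ex" set as in Example 2
# above, a client can watch key expirations in database 0 with:
#
# redis-cli psubscribe '__keyevent@0__:expired'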
############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-listpack-entries 512
hash-max-listpack-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-listpack-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-listpack-entries 128
zset-max-listpack-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
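
# Illustrative only: you can verify which internal encoding a given key is
# currently using, and therefore whether it falls under the thresholds above,
# with a command like the following (the key name is hypothetical):
#
# redis-cli object encoding myhash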
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Note that it doesn't make sense to set the replica clients output buffer
# limit lower than the repl-backlog-size config (partial sync will succeed
# and then replica will get disconnected).
# Such a configuration is ignored (the size of repl-backlog-size will be used).
# This doesn't have memory consumption implications since the replica client
# will share the backlog buffers memory.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
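
# Illustrative only (the values are assumptions, not recommendations): a
# deployment with heavy Pub/Sub fan-out and occasionally slow consumers might
# relax the pubsub class limits, for example:
#
# client-output-buffer-limit pubsub 64mb 16mb 120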
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In some scenarios client connections can hog up memory leading to OOM
# errors or data eviction. To avoid this we can cap the accumulated memory
# used by all client connections (all pubsub and normal clients). Once we
# reach that limit connections will be dropped by the server freeing up
# memory. The server will attempt to drop the connections using the most
# memory first. We call this mechanism "client eviction".
#
# Client eviction is configured using the maxmemory-clients setting as follows:
# 0 - client eviction is disabled (default)
#
# A memory value can be used for the client eviction threshold,
# for example:
# maxmemory-clients 1g
#
# A percentage value (between 1% and 100%) means the client eviction threshold
# is based on a percentage of the maxmemory setting. For example to set client
# eviction at 5% of maxmemory:
# maxmemory-clients 5%

# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here, but it must be 1mb or greater.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid that too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporarily raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 4 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
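
# Illustrative only: when experimenting with the LFU parameters above, the
# access frequency counter of a key can be inspected like this (requires an
# lfu maxmemory policy; the key name is hypothetical):
#
# redis-cli config set maxmemory-policy allkeys-lfu
# redis-cli object freq mykey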
########################### ACTIVE DEFRAGMENTATION #######################
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in a "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Active defragmentation is disabled by default
# activedefrag no

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage, to be used when the lower
# threshold is reached
# active-defrag-cycle-min 1

# Maximal effort for defrag in CPU percentage, to be used when the upper
# threshold is reached
# active-defrag-cycle-max 25

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000

# Jemalloc background thread for purging will be enabled by default
jemalloc-bg-thread yes

# It is possible to pin different threads and processes of Redis to specific
# CPUs in your system, in order to maximize the performances of the server.
# This is useful both in order to pin different Redis threads in different
# CPUs, but also in order to make sure that multiple Redis instances running
# in the same host will be pinned to different CPUs.
#
# Normally you can do this using the "taskset" command, however it is also
# possible to do this via Redis configuration directly, both in Linux and FreeBSD.
#
# You can pin the server/IO threads, bio threads, aof rewrite child process, and
# the bgsave child process. The syntax to specify the cpu list is the same as
# the taskset command:
#
# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11

# In some cases redis will emit warnings and even refuse to start if it detects
# that the system is in bad state, it is possible to suppress these warnings
# by setting the following config which takes a space delimited list of warnings
# to suppress
#
# ignore-warnings ARM64-COW-BUG
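
# Illustrative only: before enabling active defragmentation (see the section
# above) it can be useful to check how fragmented the instance actually is,
# and the feature can then be toggled at runtime as already mentioned:
#
# redis-cli info memory | grep mem_fragmentation_ratio
# redis-cli config set activedefrag yes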