Introduction to Using Redis
1. Redis basics
1.1 Redis serialization
Redis itself has no interface for storing object types; it can only store bytes, so any data structure must be serialized before Redis stores it.
A class that implements the Serializable interface (a marker interface that declares no methods) is thereby marked as serializable. The point of serialization is to turn an object of such a class into a byte sequence. That byte sequence can be saved (for example, to a file) and restored to the original object at any later time. It can even be copied to another machine, or sent there over the network, and restored, as long as the corresponding class exists on that platform.
A quick look at the serialization utilities:
The in-house serialization/deserialization wrapper is the convert method of org.springframework.core.serializer.support.DeserializingConverter,
but underneath it still relies on plain Java deserialization:
ByteArrayInputStream byteStream = new ByteArrayInputStream(source);
ObjectInputStream objectInputStream = new ObjectInputStream(byteStream);
objectInputStream.readObject();
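The same round trip can be sketched using only the JDK; the User class below is a hypothetical example for illustration, not the in-house utility:

```java
import java.io.*;

public class SerializationDemo {
    // Marker interface only: Serializable declares no methods.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Object -> byte[]: the form Redis can actually store.
    static byte[] serialize(Object obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // byte[] -> Object: what DeserializingConverter ultimately does.
    static Object deserialize(byte[] source) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(source))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = serialize(new User("alice"));
        User restored = (User) deserialize(data);
        System.out.println(restored.name); // prints "alice"
    }
}
```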
1.2 Applying the five data structures: string, hash, set, list, sorted set
References:
zhuanlan.zhihu.com/p/21368183?refer=zhangtielei
blog.csdn.net/a809146548/article/category/2915269/2
The implementations underlying these data structures include:
1) Dict: the data structure that maps keys to values; the hash-table algorithm applies a hash function to the key to compute its position in the table.
2) Quicklist: a doubly linked list in which every node is a ziplist. The doubly linked list is a trade-off between storage efficiency and lookup time: besides its data, each node must keep two extra pointers, and because the nodes are separate memory blocks at non-contiguous addresses, a long list easily produces memory fragmentation. The ziplist length per node therefore needs tuning for the situation at hand.
Redis exposes a configuration parameter for this, list-max-ziplist-size:
· -5: the ziplist on each quicklist node may not exceed 64 KB (note: 1 KB = 1024 bytes).
· -4: the ziplist on each quicklist node may not exceed 32 KB.
· -3: the ziplist on each quicklist node may not exceed 16 KB.
· -2: the ziplist on each quicklist node may not exceed 8 KB (-2 is the default Redis ships with).
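As a sketch, the cap is set in redis.conf; the line below simply restates the shipped default:

```conf
# each quicklist node's ziplist may hold at most 8 KB (the default)
list-max-ziplist-size -2
```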
1.3 Redis persistence
Its purpose is to let the in-memory database be reloaded when the Redis service restarts after the process dies.
1.3.1 The first form is snapshotting: data is stored according to how often it changes. The rule is save N M: if at least M modifications occur within N seconds, Redis snapshots the dataset to disk. This keeps most data intact and is efficient, but the data on disk is not fully in sync; the most recent writes are lost when the Redis process stops.
The second form is AOF persistence: every modification can be appended to the AOF file, which costs throughput; alternatively, the changes can be synced to the AOF once per second, which preserves throughput but may lose up to one second of data.
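A minimal redis.conf fragment sketching both modes (the values shown are the commonly shipped defaults, matching the configuration listing later in this article):

```conf
# RDB snapshotting: save N M rules
save 900 1        # snapshot if at least 1 key changed within 900 s
save 300 10       # snapshot if at least 10 keys changed within 300 s
save 60 10000     # snapshot if at least 10000 keys changed within 60 s

# AOF: append every write, fsync once per second
appendonly yes
appendfsync everysec
```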
2. Redis thundering-herd handling
2.1 Where the scheme comes from
A Redis cache is one effective way to respond to clients quickly while easing database load. One of its features is cache expiry, whose advantage is that space occupied by rarely used business data can be reclaimed from time to time, stretching limited memory. But synchronizing data between the database and the cache raises a problem: under heavy concurrency, once a cached entry expires, many concurrent threads simultaneously send requests to the backend database for the same business data. If a large batch of cache entries is created in one period and then expires together in another, the load on the backend database spikes. This phenomenon can be called the "thundering herd caused by cache expiry".
2.2 Handling logic
time1: the real expiry time inside the cache.
time2: an artificial expiry timestamp stored in the cached value (time2 is always earlier than time1).
A lock for the cached value (simply another key paired with the value), used to tell which thread is currently reading the value from Redis.
After the database data has been written to the cache, a client reading the cache for the first time takes the current system time system_time. If system_time >= time2, the cache is treated as expired by convention (if system_time < time1 it has not really expired yet). The thread then takes the value's lock by calling Redis's INCR (a single-threaded, atomic increment) to learn which thread it is: the first thread to take the lock gets 1 back, and later callers get steadily increasing values. The first thread fetches the latest value from the database, puts it back into the cache with a fresh timestamp, and deletes the lock key. Until the lock is deleted, every other client thread reading the value sees a lock count greater than 1; those threads keep reading the stale value in Redis instead of converging on the database.
2.3 Pseudocode
private long expireTime = 1000 * 40; // artificial expiry window (40 seconds)
private int cacheTtlSeconds = 6 * 60; // real TTL in Redis: six minutes
KooJedisClient client = SpringContextUtils.getBean("redisClient", KooJedisClient.class);
private final String user_key = "USER_REDIS";
private final String user_key_lock = "USER_REDIS_lock";

public void setExpireTime(HttpServletRequest request) {
    String userId = request.getParameter("userId");
    // the array holds: [0] the real value, [1] the artificial expiry timestamp
    String key = org.apache.commons.lang3.StringUtils.join(new Object[]{user_key, userId});
    String[] info = client.get(key, String[].class);
    long nowTime = System.currentTimeMillis();
    if (null != info) {
        long expireRealTime = Long.parseLong(info[1]);
        // artificially expired: only the first thread may refresh
        if (nowTime >= expireRealTime) {
            // INCR increments atomically; the same primitive also suits flash sales,
            // distributed sequence generation, and other high-concurrency scenarios
            Long lockNum = client.incr(user_key_lock + userId);
            if (lockNum != null && lockNum == 1) {
                // first thread: reload from the database
                User user = teacherDataMaintain.findUserInfo(new Integer(userId));
                info[0] = user.getUserName();
                info[1] = String.valueOf(nowTime + expireTime);
                client.setex(key, cacheTtlSeconds, info); // really expires after six minutes
                client.del(user_key_lock + userId);
            } else {
                System.out.println("cache expired, but not the first thread: return the stale value");
            }
        } else {
            // return the stale value from the cache
            System.out.println("cache not yet expired");
        }
    } else {
        User user = teacherDataMaintain.findUserInfo(new Integer(userId));
        String[] userInfo = {user.getUserName(), String.valueOf(nowTime + expireTime)};
        client.setex(key, cacheTtlSeconds, userInfo);
    }
}
3. Redis distributed locks
3.1 A distributed lock mainly solves synchronized access to a shared resource in a distributed environment
In a single-process environment, a synchronized lock is entirely enough to restrict multi-threaded access to a shared resource, but in a distributed environment such a lock cannot coordinate threads across processes. What is needed then is a "lock" that a single process can serialize access through, and Redis is one candidate.
3.2 Use case:
For example, a flash sale: a video course sold on the official site has only a limited number of seats, and each course has a concrete stock. How can every client see the course's stock accurately and in real time?
Before the sale starts, the stock in the Redis cache matches the database. When the sale begins, many clients access the cache and the database at the same moment, from different processes. After the cached figure changes, and before the change lands, another thread may read it again, so dirty reads and overwritten data become possible (dirty read < non-repeatable read < phantom read).
3.3 Solution:
Operations on the shared resource must contend in a mutually exclusive but well-behaved way: in a distributed setting, how can only one thread at a time handle the shared resource, without the threads deadlocking?
3.3.1 A few basic commands:
SETNX key value: if key does not exist, the caller acquires the lock and 1 is returned; if key already exists, nothing happens and 0 is returned.
GETSET key value: sets value and returns key's old value, or null if key did not exist.
GET key: returns the current value of key.
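As an illustration of these semantics only, the three commands map naturally onto map operations; here a ConcurrentHashMap stands in for the Redis server (this is not how Redis itself is implemented):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CommandSemantics {
    static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // SETNX key value: 1 if the key was absent (lock acquired), else 0
    static long setnx(String key, String value) {
        return store.putIfAbsent(key, value) == null ? 1 : 0;
    }

    // GETSET key value: store the new value, return the old one (null if absent)
    static String getset(String key, String value) {
        return store.put(key, value);
    }

    // GET key: return the current value of key
    static String get(String key) {
        return store.get(key);
    }

    public static void main(String[] args) {
        System.out.println(setnx("lock.foo", "100"));  // 1: key was absent
        System.out.println(setnx("lock.foo", "200"));  // 0: key exists, left untouched
        System.out.println(getset("lock.foo", "300")); // 100: the old value comes back
        System.out.println(get("lock.foo"));           // 300
    }
}
```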
3.3.2 Deadlock
SETNX is processed on a single thread, yet deadlock can still occur.
E.g.: thread0 has timed out while still holding the lock; thread1 and thread2 read lock.foo, check the timestamp, and both find it expired;
thread1 sends DEL lock.foo;
thread1 sends SETNX lock.foo and succeeds;
thread2 sends DEL lock.foo;
thread2 sends SETNX lock.foo and succeeds.
Now thread1 and thread2 both hold the lock: lock safety is broken!
3.3.3 Resolving the deadlock
thread3 sends SETNX lock.foo to acquire the lock; since thread0 still holds it, Redis returns 0 to thread3;
thread3 sends GET lock.foo to check whether the lock has timed out; if it has not, thread3 waits or retries;
otherwise, if it has timed out, thread3 attempts to take the lock with:
GETSET lock.foo <current Unix time + lock timeout + 1>
If the timestamp GETSET returns to thread3 is still the expired one, then thread3 has acquired the lock as hoped.
If some client thread4 executed the same operation one step ahead of thread3, then the timestamp thread3 gets back is a non-expired value; thread3 has not obtained the lock and must wait or retry again. Note that although thread3 failed, it overwrote the timeout thread4 had set, but the effect of that tiny skew is negligible.
3.4 Pseudocode
public synchronized boolean acquire(Jedis jedis, String lockKey, long expires) throws InterruptedException {
    int timeoutMsecs = 10 * 1000;
    int timeout = timeoutMsecs;
    while (timeout >= 0) {
        String expiresStr = String.valueOf(expires); // the time at which the lock expires
        if (jedis.setnx(lockKey, expiresStr) == 1) {
            // lock acquired
            return true;
        }
        String currentValueStr = jedis.get(lockKey); // the timestamp currently stored in Redis
        // null check first; if another thread has set a new value in the meantime,
        // the second condition will not pass
        if (currentValueStr != null && Long.parseLong(currentValueStr) < System.currentTimeMillis()) {
            // lock is expired
            // GETSET stores the given value and returns the key's old value
            String oldValueStr = jedis.getSet(lockKey, expiresStr);
            // fetch the previous expiry while setting our own; only one thread can get the
            // previous value back, because jedis.getSet is atomic
            if (oldValueStr != null && oldValueStr.equals(currentValueStr)) {
                // even if several threads reach this point at once, only the one whose
                // GETSET result matches the value it read earlier has the right to the lock
                // lock acquired
                return true;
            }
        }
        timeout -= 100;
        Thread.sleep(100); // retry every 100 ms until the timeout budget is used up
    }
    return false;
}
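To see the decision logic of the loop above in isolation, here is a single-attempt sketch that runs without a server. FakeRedis is a hypothetical in-memory stand-in for the three Jedis calls, not the real Jedis client, and the retry loop is omitted to keep the core SETNX/GETSET race-recovery visible:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for the three Jedis calls the lock uses.
class FakeRedis {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
    long setnx(String k, String v) { return store.putIfAbsent(k, v) == null ? 1 : 0; }
    String get(String k) { return store.get(k); }
    String getSet(String k, String v) { return store.put(k, v); }
}

public class LockDemo {
    // One attempt of the acquire logic: take a free lock, or reclaim an expired one.
    static boolean acquire(FakeRedis redis, String lockKey, long expires) {
        String expiresStr = String.valueOf(expires);
        if (redis.setnx(lockKey, expiresStr) == 1) return true; // lock was free
        String current = redis.get(lockKey);
        if (current != null && Long.parseLong(current) < System.currentTimeMillis()) {
            // expired: only the caller whose GETSET returns the old value wins
            String old = redis.getSet(lockKey, expiresStr);
            return old != null && old.equals(current);
        }
        return false; // held by someone else and not expired
    }

    public static void main(String[] args) {
        FakeRedis redis = new FakeRedis();
        long expires = System.currentTimeMillis() + 10_000;
        System.out.println(acquire(redis, "lock.foo", expires)); // true: lock was free
        System.out.println(acquire(redis, "lock.foo", expires)); // false: held, not expired
        redis.getSet("lock.foo", "1"); // simulate a stale, long-expired lock
        System.out.println(acquire(redis, "lock.foo", expires)); // true: reclaimed via GETSET
    }
}
```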
The following explains the parameters in the Redis configuration file, redis.conf:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# Redis does not run as a daemon by default; change this option to yes to enable daemon mode
daemonize no
# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
# When Redis runs as a daemon, it writes the pid file to /var/run/redis.pid by default
# This can be pointed elsewhere; when running several redis services, each must be given its own pid file and port
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
# Port
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
# The IP address(es) on which Redis accepts requests; unset, it serves all requests - setting it is recommended in production
# bind 127.0.0.1
# Close the connection after a client is idle for N seconds (0 to disable)
# Client connection timeout in seconds; the connection is closed once it is exceeded (0 disables)
timeout 0
# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level; four values are available
loglevel notice
# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Log file location; by default logs are printed to the terminal window, and /dev/null can be set to discard them
logfile stdout
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT where
# dbid is a number between 0 and 'databases'-1
# Number of databases; the SELECT command switches between them.
databases 16
#
# Save the DB on disk:
#
# save
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving at all commenting all the "save" lines.
# How often Redis mirrors the database - the policy for saving data to disk
# when at least 1 key has changed within 900 seconds
# when at least 10 keys have changed within 300 seconds
# when at least 10000 keys have changed within 60 seconds
save 900 1
save 300 10
save 60 10000
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Whether to compress the data when taking the mirror backup
rdbcompression yes
# The filename where to dump the DB
# File name of the mirror backup
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# Directory in which the database mirror backup is placed
# The path is configured separately from the file name because when Redis backs up,
# it first writes the current database state to a temporary file and, once the backup
# completes, replaces the file configured above with that temporary file;
# both the temporary file and the configured backup file live in this directory
# Default: ./
dir /var/lib/redis/
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
# Make this instance a slave of another database
# slaveof <masterip> <masterport> - when this machine is a slave, set the master's IP and port
# slaveof
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
# Password required when connecting to the master database
# masterauth <master-password> - when this machine is a slave, set the master's password
# masterauth
# When a slave lost the connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of data data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
# When the slave loses its connection to the master, or replication is still in progress (not yet consistent with the master),
# the slave can respond to client requests in two ways:
# 1) with slave-serve-stale-data set to 'yes' (the default), the slave still answers client requests, possibly with stale data
# 2) with 'no', the slave returns the error "SYNC with master in progress" for everything except the INFO and SLAVEOF commands.
slave-serve-stale-data yes
# Require clients to issue AUTH before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# Password a client must present after connecting before any other command is accepted
# redis is fast enough that an outside user can attempt 150k passwords per second, so specify a strong password to resist brute force
# requirepass foobared
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# Limit on the number of simultaneously connected clients.
# Beyond this value, redis stops accepting further connections and connecting clients receive an error
# maxclients 128
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# accordingly to the eviction policy selected (see maxmemmory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# an hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
# Maximum memory redis is allowed to use.
# Once the limit is reached, Redis first tries to purge keys that have expired or are about to expire (keys with expiry information set)
# deleting by expiry time, soonest-to-expire first
# If all such keys are gone and a set operation still arrives, an error is returned:
# redis then no longer accepts write requests, only get requests.
# The maxmemory setting suits using redis as a memcached-style cache
# maxmemory <bytes>
# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.aof. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
# By default redis asynchronously mirrors the database to disk in the background after updates,
# but the dump is very time-consuming and should not run too often
# redis syncs the data file according to the save rules above
# so events like a power cut or a pulled plug can lose a fairly wide range of data
# redis therefore offers a more efficient backup and disaster-recovery mechanism:
# with append-only mode on, redis appends every write request to the appendonly.aof file
# and rebuilds its previous state from that file on restart.
# appendonly.aof may grow too large, so redis supports the BGREWRITEAOF command to compact it
appendonly no
# The name of the append only file (default: "appendonly.aof")
# Name of the update-log file; the default is appendonly.aof
# appendfilename appendonly.aof
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "everysec" that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# If unsure, use "everysec".
# How often the appendonly.aof file is synced
# always means sync after every write; everysec means accumulate writes and sync once per second;
# no means wait for the operating system to flush its data cache to disk on its own
# appendfsync always
appendfsync everysec
# appendfsync no
# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so very used keys are taken in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters accordingly to your needs.
# Whether to enable virtual-memory support.
# redis is an in-memory database and cannot accept new writes once memory is full, so redis 2.0 added virtual-memory support
# note, however, that redis keeps all keys in memory; when memory runs short, only values are moved to the swap area
# performance is largely unaffected by virtual memory, but take care to set vm-max-memory high enough to hold all the keys
vm-enabled no
# vm-enabled yes
# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best kind of storage for the Redis swap file (that's accessed at random)
# is a Solid State Disk (SSD).
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
# Path of the virtual-memory swap file; it must not be shared by multiple Redis instances
vm-swap-file /tmp/redis.swap
# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that deos not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
# Maximum physical memory redis will use once virtual memory is enabled.
# With the default of 0, redis puts everything it can into the swap file, using as little physical memory as possible
# that is, with vm-max-memory set to 0, effectively all values live on disk
# In production, set this value according to the actual situation; it is best not to keep the default 0
vm-max-memory 0
# Redis swap files is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If you page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
# Page size of the virtual memory
# If values are fairly large, e.g. whole blog posts or news articles stored in a value, set it larger
vm-page-size 32
# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
#
# The total swap size is vm-page-size * vm-pages
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
# Total number of pages in the swap file
# Note the page table (a bitmap of free/used pages) sits in physical memory: every 8 pages on disk consume 1 byte of RAM
# Total virtual-memory size = vm-page-size * vm-pages
vm-pages 134217728
# Max number of VM I/O threads running at the same time.
# This threads are used to read/write data from/to swap file, since they
# also encode and decode objects from disk to memory or the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself as the physical device may not be able to couple with many
# reads/writes operations at the same time.
#
# The special value of 0 turn off threaded I/O and enables the blocking
# Virtual Memory implementation.
# Number of threads VM I/O may use at the same time.
vm-max-threads 4
# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given numer of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
# redis 2.0 introduced the hash data structure.
# while a hash holds no more than the configured number of entries and its largest entry stays under the threshold, it is stored as a zipmap
# a zipmap, also called a small hash, greatly reduces memory use
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into an hash table
# that is rhashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply form time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# Whether to rehash the hash table actively
# When yes, redis spends 1 millisecond of every 100 milliseconds of CPU time rehashing its main hash table, which lowers memory use
# If the workload has fairly strict real-time requirements and cannot tolerate redis occasionally adding 2 ms of latency to requests, set this to no.
# Without such strict requirements, set it to yes so memory is freed as quickly as possible
activerehashing yes
The official Redis documentation offers some advice on using VM:
When keys are small and values are large, VM works well, because the memory saved is substantial
When keys are not small, consider unconventional tricks to turn a large key into a large value, for example combining key and value into a new value
It is best to keep the swap file on a filesystem with good sparse-file support, such as linux ext3
The vm-max-threads parameter sets the number of threads accessing the swap file; it is best not to exceed the machine's core count. Set to 0, all swap-file operations are serialized, which may cause fairly long delays but gives a strong guarantee of data integrity
Redis data storage
Redis storage consists of three parts - memory, disk, and the log file - and three parameters in the configuration file govern them.
save seconds updates: the save rules specify that when a given number of update operations occur within a given time span, the data is synced to the data file. Several conditions can be combined; three are configured by default.
appendonly yes/no: whether to write a log entry after every update operation. If disabled, a power failure may lose the data from some window of time, because redis syncs its data file only according to the save rules above, so some data exists only in memory for a while.
appendfsync no/always/everysec: no lets the operating system flush the data cache to disk on its own; always calls fsync() after every update to force the data to disk; everysec syncs once per second.