
[Experience Sharing] MemcacheDB, Tokyo Tyrant, Redis performance test

Posted on 2018-11-8 06:07:02
  I tested the following key-value stores for set() and get() operations:

  • MemcacheDB, using the memcached client protocol
  • Tokyo Tyrant (Tokyo Cabinet), using the memcached client protocol
  • Redis, using the JRedis Java client
1. Test environment
1.1 Hardware/OS
  2 Linux boxes on a LAN: 1 server and 1 test client
  CentOS 5.2 64-bit
  Intel(R) Xeon(R) CPU E5410 @ 2.33GHz (L2 cache: 6M), Quad-Core * 2
  8 GB memory
  SCSI disk (standalone disk, no other access)
1.2 Software version
  db-4.7.25.tar.gz
  libevent-1.4.11-stable.tar.gz
  memcached-1.2.8.tar.gz
  memcachedb-1.2.1-beta.tar.gz
  redis-0.900_2.tar.gz
  tokyocabinet-1.4.9.tar.gz
  tokyotyrant-1.1.9.tar.gz
1.3 Configuration
  MemcacheDB startup parameters
  Test with 100-byte values:
  ./memcachedb -H /data5/kvtest/bdb/data -d -p 11212 -m 2048 -N -L 8192
  (Update: as mentioned by Steve, the 100-byte test missed the -N parameter, so I added it and updated the data)
  Test with 20k-byte values:
  ./memcachedb -H /data5/kvtest/mcdb/data -d -p 11212 -b 21000 -N -m 2048
  Tokyo Tyrant (Tokyo Cabinet) configuration
  Used the default Tokyo Tyrant sbin/ttservctl
  Used the .tch (hash table) database
  ulimsiz="256m"
  sid=1
  dbname="$basedir/casket.tch#bnum=50000000" # the default 1M buckets is not enough!
  maxcon="65536"
  retval=0
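The bnum=50000000 tuning matters because Tokyo Cabinet's hash database chains colliding records within buckets; with far more buckets than records, lookups stay close to O(1). A rough load-factor sketch, using only the numbers from this test's configuration:

```python
# Rough load-factor arithmetic for Tokyo Cabinet's hash database (.tch).
# bnum is the number of hash buckets; the average chain length per bucket
# is roughly records / bnum.

records = 5_000_000       # keys used in the small-data test
bnum_default = 1_000_000  # Tokyo Cabinet's default ("1M is not enough")
bnum_tuned = 50_000_000   # value set via ttservctl above

load_default = records / bnum_default  # avg records per bucket, untuned
load_tuned = records / bnum_tuned      # avg records per bucket, tuned

print(load_default)  # 5.0 -> ~5 records chained per bucket
print(load_tuned)    # 0.1 -> buckets mostly empty, minimal chaining
```

This is why the comment in the config warns that the default is not enough: at the default bnum, every lookup walks a ~5-record chain on average.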
  Redis configuration
  timeout 300
  save 900 1
  save 300 10
  save 60 10000
  # no maxmemory settings
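The three save lines mean Redis snapshots the dataset to disk when any one (seconds, changes) threshold is met since the last save. A minimal sketch of that decision rule, assuming the config above (the function name is illustrative, not Redis's actual code):

```python
# Sketch of Redis's RDB snapshot trigger: each "save <seconds> <changes>"
# rule fires a background save once at least <changes> writes have
# accumulated and at least <seconds> have elapsed since the last save.

SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]  # from the config above

def should_snapshot(elapsed_seconds, changes_since_last_save):
    # A snapshot happens if ANY rule is satisfied.
    return any(elapsed_seconds >= secs and changes_since_last_save >= n
               for secs, n in SAVE_RULES)

print(should_snapshot(905, 1))     # True  (900s rule: >=1 change)
print(should_snapshot(61, 9999))   # False (60s rule needs 10000 changes)
print(should_snapshot(61, 10000))  # True  (60s rule satisfied)
```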
1.4 Test client
  Client in Java, JDK 1.6.0, 16 threads
  Used the memcached client java_memcached-release_2.0.1.jar
  Used the JRedis client for the Redis test; another client, JDBC-Redis, had poor performance.
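The benchmark shape described above, 16 threads each issuing a set() then a get() and reporting mean requests per second, can be sketched as follows. The real client spoke the memcached protocol over a LAN; here a lock-protected in-memory dict stands in for the server, so the absolute throughput is meaningless and only the harness structure mirrors the test:

```python
# Minimal sketch of the 16-thread set/get benchmark loop. A dict behind a
# lock replaces the real memcached/Tyrant/Redis server; key counts are
# scaled down from the real test's 5,000,000 keys.
import threading
import time

THREADS = 16
KEYS_PER_THREAD = 10_000   # real test: 5,000,000 keys total
VALUE = "x" * 100          # 100-byte string value, as in test 1

store = {}
lock = threading.Lock()

def worker(tid):
    for i in range(KEYS_PER_THREAD):
        key = f"{tid}-{i}"
        with lock:                      # set()
            store[key] = VALUE
        with lock:                      # get(); every get must hit
            assert store[key] == VALUE

start = time.time()
threads = [threading.Thread(target=worker, args=(t,)) for t in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

ops = THREADS * KEYS_PER_THREAD * 2     # one set + one get per key
print(f"{ops / elapsed:,.0f} requests/sec (mean)")
```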
2. Small data size test
  Test 1: keys 1-5,000,000, 100-byte string values; run the set test, then the get test; every get returns a result.
  Requests per second (mean)

  Store          Write    Read
  Memcached      55,989   50,974
  MemcacheDB     25,583   35,260
  Tokyo Tyrant   42,988   46,238
  Redis          85,765   71,708

  Server load average
  Store          Write             Read
  Memcached      1.80, 1.53, 0.87  1.17, 1.16, 0.83
  MemcacheDB     1.44, 0.93, 0.64  4.35, 1.94, 1.05
  Tokyo Tyrant   3.70, 1.71, 1.14  2.98, 1.81, 1.26
  Redis          1.06, 0.32, 0.18  1.56, 1.00, 0.54
3. Larger data size test
  Test 2: keys 1-500,000, 20k-byte string values; run the set test, then the get test; every get returns a result.
  Requests per second (mean)
  (Aug 13 update: fixed a bug in get() that read a non-existent key)

  Store          Write    Read
  MemcacheDB     357      327
  Tokyo Tyrant   3,501    257
  Redis          1,542    957
4. Some notes about the test
  When testing the Redis server, memory use grew steadily until all 8 GB were consumed and swap came into use (and write speed slowed down); once all memory and swap space was exhausted, the client got exceptions. So using Redis in a production environment requires limiting the data size to what fits in physical memory. Comparing Redis with MemcacheDB/TC may also be unfair, because Redis did not actually save data to disk during the test.
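The memory exhaustion in the large-data test is unsurprising from the raw numbers alone. A back-of-the-envelope check, assuming "20k" means 20,000 bytes per value and ignoring Redis's per-key overhead (which only makes it worse):

```python
# Back-of-the-envelope dataset sizes versus the server's 8 GB of RAM.
GIB = 1024 ** 3

small_test = 5_000_000 * 100    # test 1: 5M keys x 100-byte values
large_test = 500_000 * 20_000   # test 2: 500k keys x 20k-byte values

print(small_test / GIB)  # ~0.47 GiB -> fits in RAM easily
print(large_test / GIB)  # ~9.3 GiB  -> exceeds 8 GB, forcing swap
```

Even before accounting for key strings and allocator overhead, test 2's raw values alone exceed physical memory, which is exactly when an in-memory store like Redis starts swapping.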
  Tokyo Cabinet and MemcacheDB were very stable under heavy load: they used very little memory in the set test and less than physical memory in the get test.

  MemcacheDB performance is poor when writing large (20k-byte) values.
  Call response times were not monitored in this test.


Source: https://www.yunweiku.com/thread-632049-1-1.html