Using LACP with a vSphere Distributed Switch 5.1

  by Chris Wahl on Oct 15th, 2012

  One of the more exciting and eagerly anticipated announcements in vSphere 5.1 (at least for me) was the set of distributed switch enhancements. Among the new features is the ability to use LACP (Link Aggregation Control Protocol) on a distributed switch. Previously, the switch was limited to static EtherChannel, or to a third-party switch such as the Cisco 1000v if LACP was desired. This post walks through how to configure and enable LACP on the latest vSphere Distributed Switch 5.1.
  Want more details on the new features? Check out my Deep Dive on vSphere Distributed Switch 5.1 Features posts.
vSphere Distributed Switch Configuration

  To start with, you’ll need a VDS running version 5.1. Additionally, you’ll need to pay heed to the list of caveats below:

  • vSphere supports only one LACP group per distributed switch and only one LACP group per host.
  • LACP does not support port mirroring.
  • LACP settings do not exist in host profiles.
  • LACP works only with IP Hash load balancing and Link Status network failover detection.
  • LACP between two nested ESXi hosts is not possible.

  Source KB
  Take note of that last one if you're running a nested ESXi lab. Sorry!

  In my lab environment, I have created a VDS named “VDS-LACP” and a single port group named “LACP-TEST”. Make sure to set the load balancing method to IP Hash and ensure that all uplinks are active.

  The LACP-TEST port group configuration details
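
  If you prefer to script this step rather than click through the GUI, here's a minimal sketch using pyVmomi (the open-source vSphere Python SDK). The vCenter hostname and credentials are placeholders, the port group name matches my lab, and "loadbalance_ip" is the API's name for the IP Hash policy:

```python
# Minimal pyVmomi sketch: set an existing distributed port group's load
# balancing policy to "Route based on IP hash". The hostname, user, and
# password below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcenter.lab.local', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the distributed port group by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == 'LACP-TEST')
view.DestroyView()

# Build a reconfigure spec that flips the teaming policy to IP hash.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.uplinkTeamingPolicy = \
    vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value='loadbalance_ip'))
spec.defaultPortConfig = port_cfg

WaitForTask(pg.ReconfigureDVPortgroup_Task(spec=spec))
Disconnect(si)
```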
  One additional step remains: you must enable LACP on the uplink port group. This must be done via the vSphere Web Client. Navigate to the uplink port group, click the Manage tab, and then select the Settings button and click Edit.
  Have no fear, I’ve drawn you a GUI map!

  Uplinks > Manage > Settings > Properties > Edit … say that 3 times fast
  Once there, enable LACP and set it to Active.
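
  The vSphere 5.1 API exposes the same toggle as a lacpPolicy (VMwareUplinkLacpPolicy) on the uplink port group's default port configuration, so the Web Client step above can also be sketched in pyVmomi. Again, the connection details are placeholders, and this is an illustration rather than the only way to do it:

```python
# Sketch: enable LACP in "active" mode on a VDS 5.1 uplink port group.
# Placeholder vCenter hostname and credentials, lab VDS name as above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcenter.lab.local', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the VDS by name, then grab its uplink port group.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'VDS-LACP')
view.DestroyView()
uplink_pg = dvs.config.uplinkPortgroup[0]

# Reconfigure the uplink port group: LACP enabled, mode "active".
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = uplink_pg.config.configVersion
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.lacpPolicy = vim.dvs.VmwareDistributedVirtualSwitch.UplinkLacpPolicy(
    enable=vim.BoolPolicy(value=True),
    mode=vim.StringPolicy(value='active'))  # 'passive' is the other option
spec.defaultPortConfig = port_cfg

WaitForTask(uplink_pg.ReconfigureDVPortgroup_Task(spec=spec))
Disconnect(si)
```

  Setting the mode to "active" makes the VDS initiate LACP negotiation, while "passive" only responds to the physical switch; at least one side of the link must be active for the LAG to form.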
Physical Switch Configuration

  Ensure that your vSphere uplinks are plumbed into a physical switch that is configured properly for LACP. In my case, this means creating a Link Aggregation Group (LAG) on my HP V1910 switch in the lab, shown below:

  The LAG creation screen on my V1910
  Note that I selected the "Dynamic (LACP Enabled)" radio button for the interface type. For instructions on how to create a static EtherChannel, you can refer to the video "Creating a Link Aggregation Group for a vSphere Lab" that I posted a while back.
Results

  If you’ve done everything right, the LACP status should show all members as active. If you forgot to enable LACP on the VDS uplinks, you’ll get an error and one or more of the ports will be in standby mode as shown below:

  This most likely means that you did not enable LACP on the VDS uplink port group
  You’ll also get a neat new item on the ESXTOP list called “LACP_MgmtPort”. I assume this handles the LACP negotiation on behalf of the VDS.
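
  If you'd rather confirm the state from a script than squint at the GUI, a quick read-only pyVmomi check (same placeholder credentials and lab names as above) prints both policies back:

```python
# Read-only sketch: verify LACP on the uplink port group and IP hash on
# the regular port group. Placeholder credentials and lab names assumed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.lab.local', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'VDS-LACP')
view.DestroyView()

# The uplink port group carries the LACP policy...
uplink_cfg = dvs.config.uplinkPortgroup[0].config.defaultPortConfig
print('LACP enabled:', uplink_cfg.lacpPolicy.enable.value,
      '| mode:', uplink_cfg.lacpPolicy.mode.value)

# ...while the regular port group carries the load balancing policy.
pg = next(p for p in dvs.portgroup if p.name == 'LACP-TEST')
print('Load balancing:', pg.config.defaultPortConfig
      .uplinkTeamingPolicy.policy.value)  # expect 'loadbalance_ip'
Disconnect(si)
```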
Thoughts

  I’m really glad to see LACP join the ranks of native features in the VDS, and I'm a little surprised it took this long to get included. After all, this is a feature limited to those with the highest licensing level: Enterprise Plus. I haven’t had a lot of stick time with the feature yet, so I can’t comment on any pros or cons from a troubleshooting or reliability standpoint, but the configuration process was simple enough.
  If you are running, or plan to run, LACP on your VDS 5.1 and care to share your experiences, please leave a comment below!
