vSphere vSwitch primer: Design considerations

(Forum repost, 2015-04-03)
  http://searchnetworking.techtarget.com/tip/vSphere-vSwitch-primer-Design-considerations
  Virtual switches (vSwitches) are the core networking component on a vSphere host, connecting the physical NICs (pNICs) in the host server to the virtual NICs (vNICs) in virtual machines. As a managed Layer 2 switch, the vSphere vSwitch emulates a traditional Ethernet switch, performing similar functions such as VLAN segmentation. It has no routing function, however, and must rely on either virtual routers or physical Layer 3 routers.
  With this in mind, there are many ways to design vSwitches in vSphere. In planning vSwitch architecture, engineers must decide how they will use pNICs and assign port groups to ensure redundancy, segmentation and security. The more pNICs available, the more options there are for segregation, load balancing and failover; with fewer pNICs, options are limited, and balancing security, performance and redundancy among vSwitches can be difficult.
  Three kinds of vSphere vSwitches
  Engineers must start by choosing the right kind of vSwitch for their environment. Three types of vSwitches can be used with vSphere: vNetwork Standard vSwitch (vSS), vNetwork Distributed vSwitch (vDS) and the Cisco Nexus 1000v.
  Standard vSwitches are easy to use and work for smaller environments. A vSphere host may have up to 248 vSSs configured with up to 512 port groups per vSwitch for a maximum of 4,088 total vSwitch ports per host. They must be configured individually on each host, so in larger environments it can be time consuming to maintain them. A vSS lacks any of the advanced networking features that are present in the vDS and 1000v.
  Distributed vSwitches are very similar to standard vSwitches, but whereas standard vSwitches are configured individually on each host, vDSs are configured centrally using vCenter Server. You may have up to 32 vDSs per vCenter Server, and each host may connect to up to 16 of these switches. While vDSs are created and maintained using vCenter Server, they are not dependent on vCenter Server for operation.
  The Cisco Nexus 1000v is a hybrid distributed vSwitch developed jointly by Cisco and VMware. It adds further intelligence and management features for protecting virtual machine traffic.
  Which vSphere vSwitch should you choose? Without a vSphere Enterprise Plus license the answer is simple: standard vSwitches are your only choice. If you do have an Enterprise Plus license, the vDS or Nexus 1000v becomes an option. However, while the vDS is included with vSphere, the 1000v carries an additional license cost per host.
  Using a vSphere vSwitch for 802.1Q VLAN tagging
  Virtual switches support 802.1Q VLAN tagging, which allows for multiple VLANs to be used on a single physical switch port. This capability can greatly reduce the number of pNICs needed in a host. Instead of needing a separate pNIC for each VLAN on a host, you can use a single NIC to connect to multiple VLANs.
  Tagging works by applying tags to all network frames to identify them as belonging to a particular VLAN. There are several methods for doing this in vSphere, with the main difference between the modes being where the tags are applied. Virtual Machine Guest Tagging (VGT) mode does this at the guest operating system layer; External Switch Tagging (EST) mode does this on the external physical switch; and Virtual Switch Tagging (VST) mode does this inside the VMkernel. The VST mode is the one that is most commonly used with vSphere.
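  To make the tagging concrete, here is a minimal Python sketch of 802.1Q tag insertion as a VST-style switch might perform it on each outgoing frame. The function names and frame handling are illustrative, not VMkernel code; only the 802.1Q frame layout (a 4-byte tag inserted after the two MAC addresses) follows the standard.

```python
import struct
from typing import Optional

TPID_8021Q = 0x8100  # Tag Protocol Identifier that marks an 802.1Q-tagged frame

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the destination and source MAC
    addresses (bytes 0-11), marking the frame as belonging to vlan_id."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (priority << 13) | vlan_id  # 3-bit PCP, 1-bit DEI (0), 12-bit VID
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

def vlan_of(frame: bytes) -> Optional[int]:
    """Return the VLAN ID of a tagged frame, or None if the frame is untagged."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != TPID_8021Q:
        return None
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF
```

  The modes differ only in where this step runs: in VST mode the VMkernel tags on egress and strips on ingress, in VGT mode the guest OS does it, and in EST mode the physical switch does.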
  vSphere vSwitch design considerations
  The main influence on vSwitch design is the number of pNICs in a host. This determines how much redundancy and separation of traffic types can be implemented using vSwitches.

  • Redundancy: There must be enough pNICs so a vSwitch can survive a pNIC failure.
  • Load balancing: There must be enough pNICs so traffic can be spread across multiple pNICs.
  • Separation: Sensitive VM traffic types must be physically separated.
  • Function: Varying host traffic types must be physically separated.
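  The checklist above can be expressed as a small validation sketch. This is an illustrative model only; the dict layout and traffic-type names are assumptions for the example, not a vSphere API:

```python
def check_design(vswitches):
    """vswitches: dict mapping vSwitch name -> (list of pNICs, set of traffic types).
    Returns a list of warnings; an empty list means the layout passes the
    redundancy and separation rules."""
    warnings = []
    seen_pnics = set()
    for name, (pnics, traffic) in vswitches.items():
        # Redundancy: a vSwitch must survive a single pNIC failure.
        if len(pnics) < 2:
            warnings.append(f"{name}: only {len(pnics)} pNIC, no redundancy")
        # Separation: VM traffic should not share a vSwitch with host traffic.
        if "vm" in traffic and traffic & {"management", "vmotion"}:
            warnings.append(f"{name}: VM traffic shares a vSwitch with host traffic")
        # A pNIC can belong to only one vSwitch.
        overlap = seen_pnics & set(pnics)
        if overlap:
            warnings.append(f"{name}: pNICs {sorted(overlap)} assigned to multiple vSwitches")
        seen_pnics |= set(pnics)
    return warnings
```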

  Assigning ports and port groups in vSwitch design
  For security and performance reasons, avoid lumping all ports and port groups together on a single vSwitch with all pNICs assigned to it. When creating vSwitches, several types of ports and port groups can be added for separation, security and management:

  • Service Console: This is a critical port that is unique to ESX hosts. It is the management interface for the Service Console VM, also known as a VSWIF interface. Every ESX host must have a service console, and a second can be created on another vSwitch for redundancy.
  • VMkernel: With ESXi this port serves as the interface for the management console. For both ESX and ESXi hosts this port is also used for vMotion, Fault Tolerance logging and connections to NFS and iSCSI datastores. VMkernel ports can be created on multiple vSwitches for redundancy and to isolate the different traffic types onto their own vSwitches.
  • Virtual Machine: These are the port groups connected to the vNICs of a VM. Multiple VM port groups can be created for each VLAN supported on a vSwitch. VLAN IDs can be set to route traffic to the correct port group.

  Service Console and VMkernel traffic is critical to the proper operation of the host and should always be separated from VM traffic. This traffic also contains sensitive data, and not all of it (e.g., vMotion) is encrypted. Therefore you want to split your pNICs into multiple vSwitches dedicated to specific functions and types of traffic.
  vSwitch design for redundancy
  If only one pNIC is used in a vSwitch and it fails, all the VMs on that vSwitch lose network connectivity. This is especially important for the Service Console and VMkernel ports. Each host continually sends a heartbeat via its Service Console (ESX) or VMkernel (ESXi) ports, and the High Availability (HA) feature uses these heartbeats to detect host failures. If those ports become unavailable for longer than 12 seconds because of a network failure, HA is triggered: the VMs on the host are shut down and restarted on other hosts. Redundancy is therefore important to prevent false alarms caused by a single network port failure.
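  As a rough illustration of the 12-second window described above, the following is a simplified sketch of heartbeat tracking, not VMware's actual HA detection logic; host names and the class are hypothetical:

```python
HA_TIMEOUT = 12.0  # seconds without a heartbeat before HA reacts (per the text)

class HeartbeatMonitor:
    """Toy monitor: record heartbeats per host and report which hosts have
    been silent longer than the HA timeout."""

    def __init__(self, timeout: float = HA_TIMEOUT):
        self.timeout = timeout
        self.last_seen = {}  # host name -> timestamp of last heartbeat

    def beat(self, host: str, when: float) -> None:
        self.last_seen[host] = when

    def silent_hosts(self, now: float):
        """Hosts whose last heartbeat is older than the timeout."""
        return sorted(h for h, t in self.last_seen.items()
                      if now - t > self.timeout)
```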
  To achieve redundancy in a vSwitch, at least two pNICs must be assigned to it; this provides redundancy at the host level but not at the path level. For further resilience, each pNIC should connect to a different physical switch, so that even a complete switch failure leaves a path to the network intact. pNICs can be set up in Active or Standby mode. Active means the pNIC actively carries traffic in the vSwitch; Standby means the pNIC does not participate until an active pNIC fails. In most cases a pNIC is set to Active mode so it is fully used. A pNIC can be Active for one port group and Standby for another port group on the same vSwitch, which is useful when pNICs are limited and engineers want to dedicate certain traffic types to each pNIC while retaining redundancy.
  The following example shows a vSwitch that contains both a Service Console and VMkernel port group. Instead of creating a vSwitch for each, which would require four pNICs total for redundancy on each vSwitch, one vSwitch can be created for both port groups with two pNICs operating in Active/Standby mode:

  • vSwitch0 – Service Console port group – vmnic0 (active) – vmnic1 (standby)
  • vSwitch0 – VMkernel port group – vmnic1 (active) – vmnic0 (standby)
  In this configuration, each of the critical port groups has a dedicated pNIC, but if a failure occurs, the other pNIC can stand in and serve both port groups.
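  A minimal model of this Active/Standby arrangement, using the vmnic names from the example. The selection logic is an illustrative sketch of teaming failover, not the VMkernel's implementation:

```python
# Teaming policy from the example: each port group reverses the other's
# active/standby order so each pNIC is dedicated to one traffic type.
PORT_GROUPS = {
    "Service Console": {"active": ["vmnic0"], "standby": ["vmnic1"]},
    "VMkernel":        {"active": ["vmnic1"], "standby": ["vmnic0"]},
}

def uplink_for(port_group, failed=frozenset()):
    """Pick the uplink a port group uses: the first healthy active pNIC,
    falling back to the first healthy standby pNIC, else None."""
    policy = PORT_GROUPS[port_group]
    for nic in policy["active"] + policy["standby"]:
        if nic not in failed:
            return nic
    return None
```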
