
Network calculations

To fulfill the plan drawn up earlier for reference, let's revisit our assumptions:

  • 200 Mbits/second is needed per VM
  • Minimum network latency

To meet these requirements, we can serve our VMs by using a 10 Gbps link for each server, which gives:

10,000 Mbits/second / 25 VMs = 400 Mbits/second

This is a very comfortable margin over the 200 Mbits/second requirement per VM. We also need to consider another factor: a highly available network architecture. For this reason, we will use two data switches, each with a minimum of 24 data ports.

To allow for future growth, two 48-port switches will be put in place instead.
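
The per-VM bandwidth figure can be verified with a quick calculation. The following is a minimal Python sketch using the assumptions of this example (a 10 Gbps data link per server and 25 VMs per server); adjust the figures to your own design.

    # Per-VM bandwidth check for a single compute server.
    # Figures are the assumptions used in this chapter's example.
    link_speed_mbps = 10_000       # one 10 Gbps data link per server
    vms_per_server = 25
    required_mbps_per_vm = 200

    bandwidth_per_vm = link_speed_mbps / vms_per_server
    print(f"Bandwidth per VM: {bandwidth_per_vm:.0f} Mbits/second")
    print(f"Meets the {required_mbps_per_vm} Mbits/second requirement: "
          f"{bandwidth_per_vm >= required_mbps_per_vm}")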

What about the growth of the rack size? In this case, consider switch aggregation using Multi-Chassis Link Aggregation (MCLAG/MLAG) technology between the aggregated switches. This feature allows each server rack to divide its links between the pair of switches to achieve active-active forwarding, using the full bandwidth capability with no need for the spanning-tree protocol.

MCLAG is a Layer 2 link aggregation technique applied between the switches that the servers connect to, offering a redundant, load-balanced connection to the core network and replacing the spanning-tree protocol.

The network configuration also depends heavily on the chosen network topology. As shown in the previous example network diagram, you should be aware that all nodes in the OpenStack environment must communicate with each other. Based on this requirement, administrators will need to standardize which units are planned for use and count the number of public and floating IP addresses required. This calculation depends on which network type the OpenStack environment will run, including whether Neutron or the legacy nova-network service is used. It is crucial to determine which OpenStack units will need to be assigned public and floating IPs. Our first basic example assumes the use of public IPs for the following units:

  • Cloud Controller Nodes: 3
  • Compute Nodes: 15
  • Storage Nodes: 5

In this case, we will initially need at least 23 public IP addresses (3 + 15 + 5). Moreover, when implementing a highly available setup using virtual IPs fronted by load balancers, these virtual IPs must be counted as additional public IP addresses.
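
As a quick sanity check, the public IP count can be tallied programmatically. The following minimal Python sketch uses the node counts from this example; the number of load-balancer virtual IPs is an assumed placeholder for illustration.

    # Public IP tally for the example deployment.
    # Node counts come from the list above; the number of virtual IPs
    # (VIPs) for the HA load balancers is an assumed placeholder.
    nodes_with_public_ip = {
        "cloud_controller": 3,
        "compute": 15,
        "storage": 5,
    }
    ha_virtual_ips = 2  # assumption: e.g. one VIP per load-balanced endpoint pair

    public_ips = sum(nodes_with_public_ip.values()) + ha_virtual_ips
    print(f"Public IP addresses required: {public_ips}")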

Using Neutron in our OpenStack network design also requires planning for the number of virtual devices and interfaces that interact with the network node and the rest of the private cloud environment, including:

  • Virtual routers for 20 tenants: 20
  • Virtual machines in 15 Compute Nodes: 375

In this case, we will initially need at least 395 floating IP addresses (20 + 375), given that every virtual router is capable of connecting to the public network.
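
Along the same lines, here is a minimal Python sketch of the floating IP calculation; the tenant count and VM density per compute node are the assumptions from this example.

    # Floating IP tally for the Neutron-based example.
    # Figures are the assumptions used in this chapter's example.
    tenants = 20                  # one virtual router per tenant
    compute_nodes = 15
    vms_per_compute_node = 25

    virtual_routers = tenants
    virtual_machines = compute_nodes * vms_per_compute_node

    # Each router gateway and each VM may be reachable via a floating IP.
    floating_ips = virtual_routers + virtual_machines
    print(f"Floating IP addresses required: {floating_ips}")  # 20 + 375 = 395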

Additionally, increasing the available bandwidth should be planned for in advance. For this purpose, we will consider the use of NIC bonding, which doubles the number of NICs per server. Bonding improves the high availability of the cloud network and boosts bandwidth performance.
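
The impact of bonding on port counts can be estimated with another small sketch; the per-server data NIC count below is an assumption for illustration, and the links are assumed to be split across the two MLAG switches described earlier.

    # Estimate NIC and switch-port counts when every data link is bonded.
    # The base data NIC count per server is an assumed figure for illustration.
    servers = 3 + 15 + 5          # controllers + compute + storage nodes
    data_nics_per_server = 1      # assumption: one data NIC before bonding
    bonding_factor = 2            # bonding doubles the NIC count

    total_nics = servers * data_nics_per_server * bonding_factor
    ports_per_switch = total_nics // 2  # links split across the two MLAG switches
    print(f"Total data NICs: {total_nics}")
    print(f"Ports consumed on each 48-port switch: {ports_per_switch}")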
