Hyper-V Virtual Networking configuration and best practices

If you’re new to the world of virtualization, networking configuration can be one of the toughest concepts to grasp. Networking is also different in Hyper-V than in other hypervisors, so even those with years of experience can stumble a bit when meeting Hyper-V for the first time. This article starts by looking at the conceptual design of virtual networking in Hyper-V, moves on to configuration, and then works through implementation best practices.

Networking Basics

Before beginning, it might be helpful to ensure that you have a solid grasp of the fundamentals of Ethernet and TCP/IP networking in general. Several articles explaining the common aspects are available, beginning with this explanation of the OSI model.

The Hyper-V Virtual Switch

The single most important component of networking in Hyper-V is the virtual switch. There’s an in-depth article on the Hyper-V Virtual Switch on this blog, but for the sake of this article I’ll give you a basic introduction to the concept, within the bigger picture.

The key to understanding is realizing that it truly is a switch, just like a physical switch. It operates at layer 2 as the go-between for virtual switch ports. It directs frames to MAC addresses. It handles VLAN tagging. It can even perform some Quality of Service (QoS) tasks. It’s also responsible for isolating network traffic to the virtual adapter that is supposed to be receiving it. When visualized, the Hyper-V virtual switch should be thought of in the same way as a standard switch:

The next part of understanding the virtual switch is how it interacts with the host. To open that discussion, you must first become acquainted with the available types of virtual switches.

Virtual Switch Modes

There are three possible modes for the Hyper-V switch: private, internal, and external. Do not confuse these with IP addressing schemes or any other virtual networking configuration in a different technology.

Hyper-V’s Private Switch

The private switch allows communications among the virtual machines on its host and nothing else. Even the management operating system is not allowed to participate. This switch is purely logical and does not use any physical adapter in any way. “Private” in this sense is not related to private IP addressing. You can mentally think of this as a switch that has no ability to uplink to other switches.

Hyper-V’s Internal Switch

The internal switch is similar to the private switch with one exception: the management operating system can have a virtual adapter on this type of switch. This allows the management operating system to directly communicate with any virtual machines that also have virtual adapters on the same internal switch. Like the private switch, the internal switch has no relation to a physical adapter and therefore also cannot uplink to any other switch.

Hyper-V’s External Switch

The external switch type must be connected to a physical adapter. It allows communications among the physical network, the management operating system, and the virtual adapters on virtual machines. Do not confuse this switch type with public IP addressing schemes or let its name suggest that it needs to be connected to an Internet-facing system. You can use the same private IP address range for the adapters on an external virtual switch that you’re using on the physical network it’s attached to. External in this usage simply means that it can connect to systems that are external to the Hyper-V host.

How to Conceptualize the External Virtual Switch

Part of what makes understanding the external virtual switch artificially difficult is the way that the related settings are worded. In the Hyper-V Manager GUI, it’s worded as Allow management operating system to share this network adapter. In PowerShell’s New-VMSwitch cmdlet, there’s an AllowManagementOS parameter which is no better, and its description — Specifies whether the parent partition (i.e. the management operating system) is to have access to the physical NIC bound to the virtual switch to be created. — makes it worse. What seems to happen far too often is that people read these and think of the virtual switch and the virtual adapters like this:

Unfortunately, this is not at all an accurate representation of Hyper-V’s virtual network stack. Once the virtual switch is bound to a physical adapter, that adapter is no longer used for anything else. TCP/IP, and most other items, are removed from it. The management operating system is quite simply unable to “share” it. If you attempt to bind anything else to the adapter, it’s quite probable that you’ll break the virtual switch.

In truth, the management operating system is getting a virtual network adapter of its own. That’s what gets connected to the virtual switch. That adapter isn’t exactly like the adapters attached to the virtual machines; it’s not quite as feature-rich. However, it’s nothing at all like actually sharing the physical adapter in the way that the controls imply. A better term would be, “Connect the management operating system to the virtual switch”. That’s what the settings really do. The following image is a much more accurate depiction of what is happening:

As you can see, the management operating system’s virtual adapter is treated the same way as the virtual machines’ adapters. Of course, you always have the option to keep one or more physical adapters out of the virtual switch. Those will be used by the management operating system as normal. If you do that, then you don’t necessarily need to “share” the virtual switch’s adapter with the management operating system:

How to Use Physical NIC Teaming with the Hyper-V Virtual Switch

As of Windows Server 2012, network adapter teaming is a native function of the Windows Server operating system. Teaming allows you to combine two or more adapters into a single logical communications channel to distribute network traffic. Hyper-V Server can also team physical adapters.

When a teamed adapter is created, the individual adapters still appear in Windows but, in a fashion very similar to the virtual switch, can no longer be bound to anything except the teaming protocol. When the team is created, a new adapter is presented to the operating system. It would be correct to call this adapter “virtual”, since it doesn’t physically exist, but that can cause confusion with the virtual adapters used with the Hyper-V virtual switch. More common terms are team adapter or logical adapter, and sometimes the abbreviation tNIC is used.

Because teaming is not a central feature or requirement of Hyper-V, it won’t be discussed in detail here. Hyper-V does utilize native adapter teaming to great effect and, therefore, it should be used whenever possible. As a general rule, you should choose the Dynamic load balancing algorithm unless you have a clearly defined overriding need; it combines the best features of the Hyper-V Port and Transport Ports algorithms. As for whether to use the switch independent teaming mode or one of the switch dependent modes, that is a deeper discussion that involves balancing your goals against the capabilities of the hardware that is available to you. For a much deeper treatment of the subject of teaming with Hyper-V, consult the teaming articles on the Altaro blog.
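
Creating a team only takes a single cmdlet. The following is only a sketch; the team name and the member adapter names are placeholders for whatever Get-NetAdapter reports in your environment:

New-NetLbfoTeam -Name ConvergedTeam -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# review the team and its members
Get-NetLbfoTeam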

Hyper-V and Network Convergence

Network convergence simply means that multiple traffic types are combined in a single communications channel. To a certain degree, Hyper-V always does this since several virtual machines use the same virtual switch, therefore the same network hardware. However, that could all technically be classified under a single heading of “virtual machine traffic”, so it’s not quite convergence.

In the Hyper-V space, true convergence would include at least one other role and it would include at least two physical network adapters. The simplest way to achieve this is by teaming two or more adapters as talked about in the preceding section and then creating a virtual switch atop the team adapter. When the virtual switch is created, use the “share” option or PowerShell to create a virtual adapter for the management operating system as well. If that adapter is used for anything in the management operating system, then that is considered convergence. Other possible roles will be discussed later on.

While the most common convergence typically binds all adapters of the same speed into a single channel, that’s not a requirement. You may use one team for virtual machine traffic and another for the management operating system if you wish.
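
As a minimal sketch of convergence, assuming the placeholder team name from the previous section, you might build the switch on the team and give the management operating system its own virtual adapter like this:

New-VMSwitch -Name ConvergedSwitch -NetAdapterName ConvergedTeam -AllowManagementOS $false -MinimumBandwidthMode Weight
Add-VMNetworkAdapter -ManagementOS -SwitchName ConvergedSwitch -Name Management

With the management adapter and the virtual machines riding the same team, you already have a basic converged network.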

Hyper-V and Networking within a Cluster

Failover Clustering has its own special networking needs, and Hyper-V extends those requirements further. Each node begins with the same requirements as a standalone Hyper-V system: one management adapter and a virtual switch. A cluster adds the need for cluster-related traffic and Live Migration.

In versions prior to 2012, the only supported configuration required that all of these roles be separated onto unique gigabit connections. With the enhancements introduced in 2012 and 2012 R2, these requirements are much more relaxed. There aren’t any published requirements for the newer versions (although it could be argued that the requirements for 2008 R2 were never officially superseded, so they are technically still in force). In practice, at least two unique cluster paths are absolutely necessary, but the rest can be adjusted up or down depending on your workloads.

The following describes each role and gives a brief description of its traffic:

  • Management: This role will carry all traffic for host-level backups and any host-related file sharing activities, such as accessing or copying ISO images from a remote system. During other periods, this role usually does not experience a heavy traffic load. The typical usage is for remote management traffic, such as RDP and WS-Man (PowerShell), which are very light.
  • Cluster Communications: Each node in the cluster continually communicates with all the other nodes in a mesh pattern to ensure that the cluster is still in operation. This operation is commonly known as the “heartbeat”, although network configuration information is also traded. Heartbeat traffic is typically very light, but it is extremely sensitive to latency. If it does not have a dedicated network, it can easily be drowned out by other operations, such as large file copies, which will cause nodes to lose quorum and fail over virtual machines even though nothing is technically wrong.
    • Cluster Shared Volumes: CSV traffic is not a unique role; it travels as part of standard cluster communications. When all is well, CSV traffic is fairly minimal, only passing CSV metadata information between the nodes. If a CSV goes into Redirected Access mode, then all traffic to and from that CSV will be handled by the owner node. If any other node needs to access that CSV, it will do so over a cluster network. The cluster will ensure that the normal cluster communications, such as heartbeat, are not sacrificed, but any struggle for bandwidth will cause virtual machines to perform poorly – and possibly crash. If your cluster does not use CSVs, then this traffic is not a concern.
  • Live Migration: Without constraints, a Live Migration operation will use up as much bandwidth as it can. The typical configuration provides a dedicated adapter for this role. With converged networking, the requirement is not as strict.
  • Virtual Machine traffic: VM traffic is arguably the most important in the cluster, but it also tends to not be excessively heavy. The traditional approach is to dedicate at least one adapter to the virtual switch.

While legacy builds simply separated these onto unique, dedicated gigabit pipes, you now have more options at your disposal.

SMB Enhancements for Cluster Communications

Cluster communications have always used the SMB protocol, which was upgraded substantially in 2012 and now supports multichannel operation. This feature auto-negotiates between the source and destination hosts and automatically spreads SMB traffic across all available adapters.

Whereas it used to be necessary to set networks for cluster communications and then modify metric assignments to guide traffic, the preferred approach in 2012 R2 is to simply designate two or more networks as cluster networks. The hosts will automatically balance traffic loads.
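
If you want to confirm that multichannel is actually in play between nodes, the SMB cmdlets introduced in 2012 can show you. This is a quick check rather than a configuration step; run it on a node while traffic is flowing:

Get-SmbMultichannelConnection
Get-SmbServerConfiguration | Select-Object EnableMultiChannel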

SMB Enhancements for Live Migration

If the cluster’s nodes are all set to use SMB for Live Migration, then Live Migration will take advantage of the same SMB enhancements that the standard cluster communications use. In this way, management traffic, cluster communications traffic, and Live Migration could all be run across only two distinct networks instead of three. This is potentially risky, especially if Redirected Access mode is triggered.
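
Choosing SMB for Live Migration is a per-host setting. As a sketch, the following would be run on each node (the option name is as exposed by Set-VMHost in 2012 R2):

Set-VMHost -VirtualMachineMigrationPerformanceOption SMB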

Converged Networking Benefits for Clustering

By using converged networks, you gain substantially more options with less hardware. SMB multichannel divides traffic across distinct networks – that is, unique subnets. By using converged networks, you can create more subnets than you have physical adapters.

This is especially handy for 10GbE adapters since few hosts will have more than two. It also has its place on 1GbE networks. You can simply combine all physical adapters into one single large team and create the same number of logical networks that you would have used in a traditional role-per-adapter build, but enable each of them for cluster communications and Live Migration. This way, SMB multichannel will be able to automatically load balance its needs. Remember that even with converged networking, it’s best not to combine all roles onto a single virtual or teamed adapter. SMB multichannel requires distinct subnets to perform its role, and teaming balances some traffic according to the virtual adapter.

Quality of Service Benefits for Clustering

While the concern is rarely manifested, it is technically possible for one traffic type to fully consume a converged team. There are a number of QoS (Quality of Service) options available to prevent this from occurring. You can specifically limit SMB and/or Live Migration traffic and set maximums and minimums on virtual adapters.

Before you spend much time investigating these options, be aware that most deployments do not require this degree of control and will perform perfectly well with defaults. Hyper-V will automatically work to maintain a balance of traffic that does not completely drown out any particular virtual network adapter. Because the complexity of configuring QoS outweighs its benefits in the typical environment, this topic will not be investigated in this series. The most definitive work on the subject is available on TechNet.
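
If you decide you do need such controls, the following is only a sketch of the kinds of commands involved. The adapter name, weight, and limit are placeholder values, and Set-SmbBandwidthLimit requires the SMB Bandwidth Limit feature to be installed:

# give a management OS virtual adapter a relative share of a Weight-mode switch
Set-VMNetworkAdapter -ManagementOS -Name 'Live Migration' -MinimumBandwidthWeight 40
# cap SMB-based Live Migration traffic
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB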

How to Design Cluster Networks for Hyper-V

The one critical concept is that cluster networks are defined by TCP/IP subnet. The cluster service will detect every IP address and subnet mask on each node. From those, it will create a network for each unique subnet that it finds. If any node has more than one IP address in a subnet, the cluster service will use one and ignore the rest unless the first is removed. If the service finds networks that only some nodes have IP addresses for, the network will be marked as partitioned. A network will also be marked as partitioned if cluster communications are allowed but there are problems with inter-node traffic flow. The following diagram shows some sample networks and how clustering will detect them.

In the illustration, the only valid network is Cluster Network 2. The worst is Cluster Network 4. Due to the way the subnet is configured, it overlaps with all of the other networks. The cluster service will automatically lock the node 2 adapter with IP address 192.168.5.11 out of cluster communications and mark the network as None to indicate that it is disallowed for cluster communications.

Before building your cluster, determine the IP subnets that you’ll be using. It’s perfectly acceptable to create all-new networks if necessary. For cluster communications, the nodes will not intentionally communicate with anything other than the nodes in the same cluster. The minimum number of unique networks is two. One must be marked to allow client and cluster communications; this is the management network. One must be marked to allow cluster communications (client communications optional but not recommended). Further networks are optional, but will grant the cluster the opportunity to create additional TCP streams which can help with load-balancing across teamed adapters.
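
Once the cluster is formed, you can review how the service detected and classified your subnets with the Failover Clustering cmdlets. The network names below are examples only; Role 3 allows cluster and client traffic, 1 allows cluster-only traffic, and 0 disables cluster use. (As noted later in this article, Failover Cluster Manager is the friendlier way to change these settings.)

Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask
(Get-ClusterNetwork -Name 'Management').Role = 3
(Get-ClusterNetwork -Name 'Cluster').Role = 1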

Hyper-V Networking Best Practices – Configuration in Practice

There isn’t any single “correct” way to configure networking in Hyper-V any more than there is a single “correct” way to configure a physical network. This section is going to work through a number of best practices and procedures to show you how things are done and provide guidance where possible. The best advice that anyone can give you is to not overthink it. Very few virtual machines will demand a great deal of networking bandwidth.

There are a few best practices to help you make some basic configuration decisions:

  • A converged network results in the best overall bandwidth distribution. It is extremely rare to have any situation in which a single network role will be utilizing an entire gigabit connection constantly. By dedicating one or more adapters to a single role, you prevent any other role from using that adapter, even when its owning role is idle.

  • A single TCP/IP stream can only use a single physical link. One of the most confusing things about teaming that newcomers face is that combining multiple links into a single team does not automatically mean that all traffic will use all available links. It means that different communications streams will be balanced across the available links. To put it another way, you need at least four different communications streams to fully utilize four adapters in a team.
  • Avoid using iSCSI or SMB 3 directly with teaming. It is supported for both, but it is less efficient than using MPIO (for iSCSI) or SMB multichannel. It is supported to have multiple virtual network adapters on a team that are configured for iSCSI or SMB multichannel. However, you will always get the best performance for network storage by using unteamed adapters that are not bound to a virtual switch. This article explains how to configure MPIO.

  • If iSCSI and/or SMB connections are made through virtual adapters on a converged team, they will establish only one connection per unique IP address. Create multiple virtual adapters in order to enable MPIO and/or SMB multichannel.
  • For Failover Clustering, plan in advance what IP range you want to use for each role (a scripted sketch of carving these out on a converged switch follows this list). For example:
    • Management: 192.168.10.0/24
    • Cluster communications/CSV: 192.168.15.0/24
    • Live Migration: 192.168.20.0/24
    • SMB network 1: 192.168.30.0/24
    • SMB network 2: 192.168.31.0/24
  • The only adapter in the management operating system that should have a default gateway is the management adapter. Assigning default gateways to other adapters will cause the system unnecessary difficulty when choosing outbound connections.
  • If cluster nodes have adapters that will only be used to communicate with back-end storage (iSCSI or SMB), exclude their networks from participating in cluster communications.
  • Only the management adapter should register itself in DNS.
  • Except for the one created by checking Allow the management operating system to share this network adapter, you cannot use the GUI to create virtual network adapters for the management operating system’s use.
  • You cannot use the GUI to establish a QoS policy for the virtual switch. The only time this policy can be selected is during switch creation.
  • If desired, virtual machines can have their IP addresses in the same range as any of the cluster roles. Failover Clustering does not see the ranges in use by virtual machines and will not collide with them.
  • The management operating system will allow you to team network adapters with different feature sets and even different speeds, but it is highly recommended that you not do this. Different features can result in odd behaviors as communications are load balanced. The system balances loads in round-robin fashion, not based on adapter characteristics (for instance, it will not prioritize a 10GbE link over a 1GbE link).
  • Networking QoS only applies to outbound communications. Inbound traffic will flow as quickly as it is delivered and can be processed.
  • 10GbE links have the ability to outpace the processing capabilities of the virtual switch. A single virtual adapter or communications stream may top out at speeds as low as 3.5 Gbps, depending upon the processing power of the CPU. Balanced loads will be able to consume the entire 10GbE link, especially when offloading technologies, primarily VMQ, are in place and functional.
  • When teaming, choose the Dynamic load balancing algorithm unless you have a definite, verifiable reason not to. Do not prefer the Hyper-V Port mode simply based on its name; Dynamic combines the best aspects of the Hyper-V Port and Hash modes.
  • You can use iSCSI on a virtual machine’s virtual adapter(s) to connect the guest directly to network storage, but you will have better performance and access to more features by connecting from the host and exposing storage to the guests through a VHDX. Virtual machines can have multiple network adapters, which enables you to connect the same virtual machine to different VLANs and subnets.
  • Avoid the creation of multiple virtual switches. Some other hypervisors require the administrator to create multiple virtual switches and attach them to the same hardware. Hyper-V allows only a single virtual switch per physical adapter or team. Likewise, it is not advisable to segregate physical adapters, whether standalone or in separate teams, for the purpose of hosting multiple virtual switches. It is more efficient to combine them into a single large team. The most common exception to this guideline is in situations where physical isolation of networks is required.
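
To tie several of the points above together, here is a rough sketch of carving one of the planned cluster subnets out of a single converged switch as a management OS virtual adapter. The switch name, adapter name, VLAN ID, and addresses are placeholders; repeat the pattern for each role:

# assumes an external switch named ConvergedSwitch created in Weight mode
Add-VMNetworkAdapter -ManagementOS -SwitchName ConvergedSwitch -Name 'Live Migration'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Live Migration' -Access -VlanId 20
New-NetIPAddress -InterfaceAlias 'vEthernet (Live Migration)' -IPAddress 192.168.20.11 -PrefixLength 24
# only the management adapter should register in DNS
Set-DnsClient -InterfaceAlias 'vEthernet (Live Migration)' -RegisterThisConnectionsAddress $false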

The necessary steps to create a team were linked earlier, but here’s the link again: https://www.altaro.com/hyper-v/how-to-set-up-native-teams-in-hyper-v-server-2012/.

Adapter and TCP/IP Configuration

If your system is running a GUI edition of Windows Server, you can configure TCP/IP for all adapters using the traditional graphical tools. For all versions, you can also use sconfig.cmd for a guided process. This section shows how to perform these tasks using PowerShell. To keep the material as concise as possible, not all possible options will be shown. Refer to the introductory PowerShell article for assistance on discovering the capabilities of cmdlets using Get-Help and other tools.

See Adapter Status (and Names to Use in Other Cmdlets)

Get-NetAdapter

Rename a Physical or Team Adapter

Rename-NetAdapter -Name CurrentName -NewName NewName

Set an Adapter’s IP Address

New-NetIPAddress -InterfaceAlias AdapterName -IPAddress 192.168.20.20 -PrefixLength 24

Set an Adapter’s Default Gateway

New-NetRoute -InterfaceAlias AdapterName -DestinationPrefix 0.0.0.0/0 -NextHop 192.168.20.1

Tip: use “Set-NetRoute” to make changes, or “Remove-NetRoute” to get rid of a gateway.

Set DNS Server Addresses

Set-DnsClientServerAddress -InterfaceAlias AdapterName -ServerAddresses 192.168.20.5, 192.168.20.6

Prevent an Adapter from Registering in DNS

Set-DnsClient -InterfaceAlias AdapterName -RegisterThisConnectionsAddress $false

One final option that you may wish to consider is setting Jumbo Frames on your virtual adapters. A jumbo frame is any Ethernet frame that exceeds the standard size of 1514 bytes. It’s most commonly used for iSCSI connections, but can also help a bit with SMB 3 and Live Migration traffic. It’s not useful at all for traffic crossing the Internet, and most regular LAN traffic doesn’t benefit much from it either. If you’d like to use it, the following post explains it in detail: https://www.altaro.com/hyper-v/how-to-adjust-mtu-jumbo-frames-on-hyper-v-and-windows-server-2012/. That particular article was written for 2012. The virtual switch in 2012 R2 has Jumbo Frames enabled by default, so you only need to follow the portions that explain how to set it on your physical and virtual adapters.
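
If you do enable jumbo frames, a hedged example of setting the size on an adapter looks like the following. The adapter name is a placeholder, and the exact registry keyword and accepted values can vary by driver:

Set-NetAdapterAdvancedProperty -Name 'vEthernet (Live Migration)' -RegistryKeyword '*JumboPacket' -RegistryValue 9014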

Configuring Virtual Switches and Virtual Adapters

All of the graphical tools for creating a virtual switch and setting up a single virtual adapter for the management operating system were covered in this previous article in the series. You cannot use the graphical tools to create any further virtual adapters for use by the management operating system. You also must use PowerShell to create your virtual switch if you want to control its QoS policy. The following PowerShell commands deal with the virtual switch and its adapters.

Create an External Virtual Switch

New-VMSwitch -InterfaceAlias AdapterName -Name vSwitch -AllowManagementOS $false -EnableIOV $false -MinimumBandwidthMode Weight

There are several things to note about this particular cmdlet:

  • The “InterfaceAlias” parameter shown above is actually an alias for “NetAdapterName”. The alias was chosen here because it aligns with the parameter name and output of Get-NetAdapter.
  • The cmdlet was typed with “vSwitch” as the virtual switch’s name, but you’re allowed to use anything you like. If your chosen name has a space in it, you must enclose it in single or double quotes.
  • If you do not specify the “AllowManagementOS” parameter or if you set it to true, it will automatically create a virtual adapter for the management operating system with the same name as the virtual switch. Skipping this automatic creation gives you greater control over creating and setting your own virtual adapters.
  • If you do not wish to enable SR-IOV on your virtual switch, it is not necessary to specify that parameter at all. It is shown here as a reminder that if you’re going to set it, you must set it when the switch is created. You cannot change this later.
  • The help documentation for New-VMSwitch indicates that the default for “MinimumBandwidthMode” is “Weight”. This is incorrect. The default mode is “Absolute”. As with SR-IOV support, you cannot modify this setting after the switch is created, so it’s worth verifying it right away; see the check after this list.
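
A quick way to verify what you ended up with is to inspect the switch afterward. The property names below are how recent Hyper-V modules expose these settings; adjust the switch name to whatever you chose:

Get-VMSwitch -Name vSwitch | Format-List Name, BandwidthReservationMode, IovEnabled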

Create a Private Virtual Switch

New-VMSwitch -Name Isolated -SwitchType Private -MinimumBandwidthMode Weight

Many of the notes from the creation of the external switch apply here as well. The “EnableIOV” parameter is not applicable to a private or internal switch at all. The “AllowManagementOS” parameter is redundant: if the switch type is “Private”, then no virtual adapter is created; if the switch type is “Internal”, then one is created. Adding a management OS virtual adapter to a private switch will convert it to internal; removing all management OS virtual adapters from an internal switch will make it private.

Permanently Remove a Virtual Switch

Remove-VMSwitch -Name vSwitch

This operation is permanent. The entire switch and all of its settings are lost. All virtual adapters in the management operating system on this switch are permanently lost. Virtual adapters in virtual machines connected to this switch are disconnected.

Add a Virtual Adapter to the Management OS

Add-VMNetworkAdapter -ManagementOS -SwitchName vSwitch -Name 'New vAdapter'

The first thing to note is that, for some reason, this cmdlet uses “Add” instead of the normal “New” verb for creating a new object. Be aware that this new adapter will show up in Get-NetAdapter entries as vEthernet (New vAdapter), and that is the name that you’ll use for all such non-Hyper-V cmdlets. Use the same cmdlets from the previous section to configure its IP address, gateway, and DNS settings.
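
For instance (the addresses are placeholders), assigning an IP address to the adapter created above uses its vEthernet name:

New-NetIPAddress -InterfaceAlias 'vEthernet (New vAdapter)' -IPAddress 192.168.15.20 -PrefixLength 24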

Retrieve a List of Virtual Adapters in the Management OS

Get-VMNetworkAdapter -ManagementOS

Rename a Virtual Adapter in the Management OS

Rename-VMNetworkAdapter -ManagementOS -Name CurrentName -NewName NewName

How to Set VLAN Information for Hyper-V Virtual Adapters

Adapters for the management operating system and virtual machines can be assigned to VLANs. When this occurs, the Hyper-V virtual switch will handle the 802.1q tagging process for communications across the virtual switches and for packets to and from physical switches. As shown in the article on Virtual Machine settings, you can use Hyper-V Manager to change the VLAN for any of the adapters attached to virtual machines. You can only use PowerShell to change the VLAN for virtual adapters in the management operating system.

Retrieve the VLAN Assignments for All Virtual Adapters on the Host

Get-VMNetworkAdapterVlan

You can use the “ManagementOS” parameter to see only adapters in the management operating system. You can use the “VMName” parameter with an asterisk to see only adapters attached to virtual machines.

Set the VLAN for a Virtual Adapter in the Management Operating System

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName vAdapterName -Access -VlanId 10

Set the VLAN for all of a Virtual Machine’s Adapters

Set-VMNetworkAdapterVlan -VMName svtest -Access -VlanId 7

Remove VLAN Tagging from all of a Virtual Machine’s Adapters

Set-VMNetworkAdapterVlan -VMName svtest -Untagged

If a virtual machine has more than one virtual adapter and you’d like to operate on just one of them, that might require a bit more work. When the GUI is used to create virtual adapters for a virtual machine, they are always named Network Adapter, even if there are several. So, you’ll have to use PowerShell to rename them as they are created, or you won’t be able to use the “VMNetworkAdapterName” parameter to distinguish them. Instead, you can use Get-VMNetworkAdapter to locate other distinguishing features and pipe the output to cmdlets that accept VMNetworkAdapter objects. For example, suppose you want to change the VLAN of only one adapter attached to the virtual machine named “svtest”. By using the tools inside the guest operating system, you’ve determined that the MAC address of the adapter you want to change is “00-15-5D-19-0A-24”. With the MAC address, you can change the VLAN of only that adapter by using the following PowerShell construct:

Get-VMNetworkAdapter -VMName svtest | where { $_.MacAddress -eq '00155D190A24' } | Set-VMNetworkAdapterVlan -Access -VlanId 7

Cluster Networking Configuration

It is possible to use PowerShell to configure networking for your Failover Cluster, but it’s very inelegant with the current state of those cmdlets. At this time, they are not well developed, so you must directly manipulate object property values and registry settings in ways that are risky and error-prone. It is much preferred that you use Failover Cluster Manager to make these settings, as explained earlier in this series.

Continue Exploring Networking

There’s a lot to digest in Hyper-V virtual networking. What you’ve seen so far truly is only the fundamentals. For a relatively simplistic deployment with no more than a few dozen virtual machines, you might not ever need any more information. As densities start to climb, the need to more closely tune networking increases. With gigabit adapters, your best option is to scale out. 10GbE adapters allow you to overcome physical CPU limitations with a number of offloading techniques, chief among these being VMQ. Begin your research on that topic by starting with the definitive article series on the subject, VMQ Deep Dive.

Otherwise, your best next steps are to practice with the PowerShell cmdlets. For example, learn how to use Set-VMNetworkAdapter to modify virtual adapters in similar fashion to the procedures you saw in the earlier GUI articles. With a little effort, you’ll be able to change groups of adapters at once. Hyper-V’s networking may be multi-faceted and complicated, but the level of control granted to you is equally vast.
