Host Server:
- Dual Xeon E5530s
- 24 GB RAM
- 4x 1 Gb NICs (operating at 100 Mb because of the 100 Mb managed switch)
- Server 2008 R2 Enterprise with the Hyper-V role + BackupExec 2012 backing up to a NAS connected via iSCSI
VMs:
- Domain Controller (also the DNS server)
- File Server
- Database Server
- Two App Servers
Current NIC Setup:
- 1 physical NIC dedicated to the host OS so that BackupExec can do its thing without choking out the VMs.
- 1 physical NIC shared between the app servers (NIC usage peaks and valleys through the day).
- 1 physical NIC shared between SQL Server and the File Server (the two biggest bandwidth hogs).
- 1 physical NIC dedicated to the DC.
Questions:
- Is dedicating one physical NIC to the DC/DNS overkill? I have about 20 users.
- Any tips on setting this whole thing up better?
- Is there any way to prioritize the different VMs sharing a NIC?
- I'm going to stack a 1 Gb switch on the 100 Mb one. The three physical servers, the NASes, and that kind of thing will connect to the 1 Gb switch; users will all be plugged into the 100 Mb switch. With the increased bandwidth, am I safe putting more VMs on one physical NIC, or are there other factors to consider?
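For the last question, one way to reason about it is a back-of-envelope bandwidth budget: add up the assumed peak demand of each VM on the shared NIC and compare it to the link speed. This is only a sketch with made-up numbers (the per-VM peaks below are hypothetical placeholders, not measurements from this setup), but it shows the kind of arithmetic involved:

```python
# Rough back-of-envelope check: can several VMs share one physical NIC?
# All per-VM figures are illustrative assumptions, not real measurements.

LINK_MBPS = 1000  # gigabit uplink after the switch upgrade

# Hypothetical peak demand per VM, in Mbps
vm_peaks = {
    "sql_server": 400,
    "file_server": 300,
    "app_server_1": 100,
    "app_server_2": 100,
    "dc_dns": 20,
}

total_peak = sum(vm_peaks.values())
headroom = LINK_MBPS - total_peak

print(f"Combined peak demand: {total_peak} Mbps")
print(f"Headroom on a 1 Gbps link: {headroom} Mbps")
```

If the peaks rarely coincide, sharing a link is usually fine; if they do coincide (e.g. backups overlapping SQL traffic), it is safer to keep the heaviest talkers on separate NICs regardless of the headline link speed.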
Thanks!