Is it possible to cause artificial network packet loss or latency?

I’m trying to reproduce some issues on a deployed application where the MSSQL server and client run on two separate machines. I suspect there may be network issues between the two machines, so I’d like to try to reproduce these conditions on two Hyper-V virtual machines (on the same virtual server). Of course, the network between these virtual machines is “local”, so it’s actually far from the conditions in a live environment.

Is there a program I can run on either virtual machine that will degrade the network performance? Or perhaps some other workaround? For example, one way to reproduce the conditions might be to run the VMs on separate Hyper-V servers in geographically dispersed locations (so the SQL traffic goes over a VPN or similar), but that seems rather long-winded. There must be a simpler way.

On Linux you’d use netem, on FreeBSD you’d use dummynet.
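For reference, here is a rough sketch of what those tools look like in use (assumptions: `eth0` is the interface carrying the SQL traffic, and the delay/loss figures are arbitrary illustrative values; both commands require root):

```shell
# Linux: use netem via tc to add 100 ms latency and 5% packet loss
tc qdisc add dev eth0 root netem delay 100ms loss 5%
# ...and remove the emulation again when done
tc qdisc del dev eth0 root

# FreeBSD: use dummynet via ipfw for a similar effect
ipfw pipe 1 config delay 100ms plr 0.05
ipfw add pipe 1 ip from any to any
```

Both apply to all traffic on the interface; you can narrow them with filters (tc) or more specific ipfw rules if you only want to degrade the SQL connection.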

Neither of those solutions will work on a single Windows machine running Hyper-V. I searched and wasn’t able to find any network emulators compatible with Windows and Hyper-V.

You could put the two VMs on two different physical machines with a Linux or FreeBSD box between them, but it doesn’t look like any solution will do exactly what you want on a single VM host.