The throughput we see Veeam reach for any single concurrent disk restore from Wasabi ranges from 150 to 300 MB/s. For VMware server backups, Veeam restores multiple VMDKs concurrently; for example, a VMware server backup with five 1 TB disks will restore roughly 5x faster than a backup with a single 5 TB disk.
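To see why, here is a rough back-of-the-envelope estimate in PowerShell. The 200 MB/s figure and the disk layout are illustrative assumptions within the range quoted above, not measurements from any specific restore:

```powershell
# Rough restore-time estimate, assuming ~200 MB/s per concurrent disk stream
# (within the 150-300 MB/s range observed above). Numbers are illustrative only.
$throughputMBps = 200
$diskSizesTB    = @(1, 1, 1, 1, 1)   # five 1 TB VMDKs restored in parallel

# With concurrent restores, total time is driven by the largest single disk
$largestTB     = ($diskSizesTB | Measure-Object -Maximum).Maximum
$hoursParallel = ($largestTB * 1024 * 1024) / $throughputMBps / 3600
"Parallel restore of five 1 TB disks: ~{0:N1} hours" -f $hoursParallel

# A single 5 TB disk is one stream, so it takes ~5x longer at the same rate
$hoursSingle = (5 * 1024 * 1024) / $throughputMBps / 3600
"Single 5 TB disk restore:            ~{0:N1} hours" -f $hoursSingle
```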
However, when performing the “Migrate to Production” step of an agent-based backup restore or a Hyper-V Instant Recovery to a VMware environment, Veeam v12.x is limited to a single concurrent task at a time. In the scenario above there would therefore be no difference in restore speed: Veeam restores one disk at a time and does not move on to the next until the prior disk restore completes.
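Under the same illustrative assumptions, serializing the disks means the per-disk times simply add up, so five 1 TB disks take about as long as one 5 TB disk:

```powershell
# Same illustrative numbers, but with the single-task limitation described above:
# disks restore one after another, so their times add up.
$throughputMBps = 200
$diskSizesTB    = @(1, 1, 1, 1, 1)

$totalTB     = ($diskSizesTB | Measure-Object -Sum).Sum
$hoursSerial = ($totalTB * 1024 * 1024) / $throughputMBps / 3600
"Serialized migration of five 1 TB disks: ~{0:N1} hours (same as one 5 TB disk)" -f $hoursSerial
```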
Finally, the step that takes the longest: when Veeam migrates an Instant Recovery to VMware via PowerShell or REST API calls, it creates eager zeroed thick disks on the destination VMware host, and VMware can take approximately 1 hour per TB to create an eager zeroed VMDK. If the PowerShell and REST API calls allowed thin disks as the default, the migration portion of an Instant Recovery could be cut roughly in half.
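The difference between the two disk formats is easy to see by creating them directly with VMware PowerCLI. This is only an illustration of the formats themselves, not the Veeam migration call; the vCenter address, VM name, and datastore name below are placeholders:

```powershell
# Illustration only: creating the same-size disk thin vs. eager-zeroed thick with
# VMware PowerCLI, to show where the time goes during migration.
Connect-VIServer -Server vcenter.example.local

$vm = Get-VM -Name "RestoredServer01"
$ds = Get-Datastore -Name "Datastore01"

# Thin disk: only metadata is written, completes in seconds regardless of size
New-HardDisk -VM $vm -CapacityGB 1024 -StorageFormat Thin -Datastore $ds

# Eager-zeroed thick: every block is zeroed up front, roughly ~1 hour per TB
# on the destination hosts described above
New-HardDisk -VM $vm -CapacityGB 1024 -StorageFormat EagerZeroedThick -Datastore $ds
```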
We have submitted feature requests asking Veeam to support thin disks for Instant Recovery migrations and to support concurrent tasks for the restore, as it already does for native VMware backups. Please see this post on the Veeam Forums, leave a comment, and upvote it! https://forums.veeam.com/restful-api-f30/instant-recovery-and-migration-from-veeam-data-vault-t97504.html