I should document this for my own reference so I don't have to figure it out again, but hopefully it helps someone else too. It seems most people run vRR on VMware or KVM. Our systems guys prefer Xen for various reasons, so rather than throwing a bunch of goofball servers out there, I decided to try getting it running on Xen.
The vRR install image comes packaged as a .img file, which I would have guessed to be a raw disk image. It's not. The qemu-img utility will give you the truth:
$ qemu-img info jinstall64-vrr-14.2R4.9-domestic.img
image: jinstall64-vrr-14.2R4.9-domestic.img
file format: qcow2
virtual size: 16G (17174416896 bytes)
disk size: 523M
cluster_size: 65536
Format specific information:
    compat: 0.10
There you have it. I initially tried importing the qcow2 image directly, but Xen complained that it wasn't a valid disk image. So I had to convert it to something supported - I picked vpc, which is qemu-img's name for the Microsoft VHD format.
$ qemu-img convert -f qcow2 -O vpc jinstall64-vrr-14.2R4.9-domestic.img jinstall64-vrr-14.2R4.9-domestic.vhd
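If you'd rather not trust the file extension at all, you can check the magic bytes yourself - qcow2 images always start with the four bytes 'Q' 'F' 'I' 0xfb. A minimal sketch (the /tmp/demo.img file here is a made-up stand-in; point the check at your real image instead):

```shell
# qcow2 images start with the magic bytes "QFI\xfb" (hex 51 46 49 fb).
# Write those four bytes to a dummy file to demonstrate the check;
# on a real image you'd run the od line against the .img file itself.
printf 'QFI\373' > /tmp/demo.img

magic=$(od -An -tx1 -N4 /tmp/demo.img | tr -d ' ')
if [ "$magic" = "514649fb" ]; then
  echo "qcow2 detected"
else
  echo "not qcow2 (magic: $magic)"
fi
```

On the real jinstall image this reports qcow2, matching what qemu-img info said above.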
That imported into XenServer and expanded into a 16GB disk. I booted it up, but found no usable NICs. This is because XenServer (at least 6.5) emulates NICs as a Realtek RTL8139 by default. The internet tells me that, somewhere deep down, it also supports the Intel e1000 - but there's no obvious way to pick it when creating a NIC in XenServer.
I'll save you the trouble of hunting down the solution: this guy (and probably others) figured it out. The README has instructions - you patch the script that generates your virtual NICs, and then the MAC address you assign determines whether you get the e1000 or the default.
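For context, the trick those patches use boils down to mapping a MAC prefix to a NIC model inside the script that builds the emulated devices. This is not the actual patch - the 00:e1:00 prefix and the variable names here are made up for illustration - but it shows the shape of the logic:

```shell
# Illustration only, not the real XenServer patch.
# Idea: reserve a MAC prefix (here 00:e1:00, hypothetical) and rewrite
# the NIC model for any virtual NIC whose MAC starts with it.
MAC="00:e1:00:12:34:56"   # the MAC you assigned to the VIF

case "$MAC" in
  00:e1:00:*)
    MODEL="e1000"   # Intel e1000, which the vRR image has a driver for
    ;;
  *)
    MODEL="rtl8139" # XenServer's default emulated NIC
    ;;
esac

echo "vif $MAC -> model=$MODEL"
```

When you create the VIF, you specify that MAC yourself - something like `xe vif-create vm-uuid=<uuid> network-uuid=<uuid> device=0 mac=00:e1:00:12:34:56` (check `xe help vif-create` for the exact syntax on your version).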
Once it's patched, create a NIC with the proper MAC address and boot vRR back up. You should now see an em0 interface. That's it - have fun.