Just in case anyone had any doubts, it's official now: don't install your hypervisor on SD cards! To be a little more specific, don't install your production vSphere ESXi host OS on an SD card. Yes, they're cost effective; yes, they've worked in the past; and yes, you can RAID a pair for availability... but honestly, times have moved on and it's not worth fighting over.
The hypervisor is the heart of your virtualization platform and we need it to be rock solid. That's why we run VMware vSphere as our hypervisor of choice and why we should listen to their recommendations (it also happens to align with our experiences to date).
I know we've replaced the odd failed SD card over the years in hosts running vSphere 6.5 or 6.7, and that's why we started transitioning to better technologies quite some time ago. Our latest generation of hosts (Dell's PowerEdge M740c) all have a pair of M.2 flash devices mirrored for availability. These are orders of magnitude more reliable than SD cards even without mirroring, which could easily be argued is overkill. Why, you ask? Well, the MTBF of an M.2 SSD is substantially better than that of an SD card; I'd argue you could comfortably get away with just one if you wanted.
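To put some rough numbers behind that "overkill" claim, here's a back-of-the-envelope sketch. The MTBF figures below are purely illustrative assumptions (not vendor specs for any particular SD card or M.2 module), and it uses the simple constant-failure-rate model, where survival probability is R(t) = exp(-t / MTBF):

```python
import math

# Illustrative, assumed figures -- NOT vendor specifications:
MTBF_SD_HOURS = 100_000      # hypothetical SD card MTBF
MTBF_M2_HOURS = 1_500_000    # hypothetical M.2 SSD MTBF

HOURS_5_YEARS = 5 * 365 * 24  # 43,800 hours of continuous runtime

def reliability(mtbf_hours: float, t_hours: float) -> float:
    """Probability a device survives t_hours, assuming a constant
    failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

r_sd = reliability(MTBF_SD_HOURS, HOURS_5_YEARS)
r_m2 = reliability(MTBF_M2_HOURS, HOURS_5_YEARS)

# A mirrored pair only loses the boot volume if BOTH devices fail
# (independent failures assumed, no rebuild window modelled):
r_m2_mirror = 1 - (1 - r_m2) ** 2

print(f"Single SD card, 5 years:  {r_sd:.3f}")
print(f"Single M.2 SSD, 5 years:  {r_m2:.3f}")
print(f"Mirrored M.2 pair, 5 yrs: {r_m2_mirror:.5f}")
```

Even under these rough assumptions a single M.2 device comfortably outlasts an SD card over a five-year service life, and the mirrored pair pushes the survival probability close enough to certainty that the second device is belt-and-braces.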
In short, the increasing demands placed on the ESX-OSDATA partition mean it's no longer appropriate to use this type of media: "Starting from the next major vSphere release, SD cards/USB media as a standalone boot device will not be supported" and "VMware strongly advises that you move away completely from using SD card/USB as a boot device option on any future server hardware". The VMware KB details the reasons and makes them pretty clear.
For us that means retrofitting enterprise-grade persistent storage to a number of our existing hosts, followed by a quick rebuild. Not ideal, but certainly worth doing as we march on towards vSphere 7! Again, this is one of those "little" things we take care of as an IaaS provider, and it happens in the background without impact to your running workloads.