You should not think of the physical NICs (actually, ports on a PCI device) as endpoints, and they will not have IPs assigned to them.
In vSphere (ESXi), the NICs are merely uplinks into the rest of the network. You always want redundancy, so no configuration should ever have fewer than two uplinks in a team.
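You can see the distinction from the ESXi shell (assuming ESXi 5.x or later, where the `esxcli network` namespace exists): the physical NICs list without addresses, while the VMkernel interfaces carry the IPs.

```
# Physical NICs (vmnic0, vmnic1, ...) appear only as uplinks -- no IPs here
esxcli network nic list

# VMkernel interfaces (vmk0, vmk1, ...) are what hold the IP addresses
esxcli network ip interface ipv4 get
```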
With four (10Gb?) uplinks (NICs) on a blade, you have two practical strategies, whether or not iSCSI is in use, to maintain redundancy across all vSwitch configurations:
Strategy one:
Place all four uplinks on one vSwitch (standard or distributed, it does not matter) and use VLANs to separate:
- Management Network
  - The Management Network VMkernel is what has the IP, not the NIC
- iSCSI, vMotion, FT
- Production
- Other networks
The advantage here is greater failover resilience, including the option of using Beacon Probing as the failure detection method (it needs at least three uplinks in a team to work well).
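As a rough sketch of strategy one from the ESXi shell (assuming a standard vSwitch0 already exists with vmnic0 attached; the vmnic names and VLAN IDs are placeholders you would replace with your own):

```
# Add the remaining uplinks to the team
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic3

# One port group per traffic type, separated by VLAN (IDs are examples)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=iSCSI
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=21
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Production
esxcli network vswitch standard portgroup set --portgroup-name=Production --vlan-id=30

# Switch the team's failure detection from link status to Beacon Probing
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --failure-detection=beacon
```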
Strategy two:
Create two vSwitches with two uplinks each. Use one vSwitch for all normal IP networking (with VLANs separating the networks), including:
- Management Network
- Production
- Other networks
Use the other vSwitch for:
- iSCSI, vMotion, FT
I can think of no actual advantage to doing things this way, except conforming to a very rigid physical network design.
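If you do go this route, a minimal sketch of the split (same caveats as above: the NIC names, VLAN ID, and the iSCSI address are assumptions, not prescriptions):

```
# Second vSwitch gets two of the four uplinks; vSwitch0 keeps the other two
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# iSCSI port group on its own VLAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=20

# The VMkernel interface on that port group is what actually gets the IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=192.168.20.11 --netmask=255.255.255.0
```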