Why do the disks turn up as isLocal = false?
Historically, you could share a JBOD if you had the correct cabling to connect its disks to, say, two hosts at once. That meant you could create a VMFS volume on a JBOD and treat it as if it were a "shared" SAN.
This was typically achieved with a SAS adapter that has external ports connected to the JBOD. ESXi interprets such an adapter as one whose disks are potentially shareable between hosts.
To err on the side of caution, ESXi marks any disk hanging off certain SAS adapters that **could** be used for shared storage as a "remote" disk. That is basically why the devices show up with Is Local = false.
I hope that makes sense?
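If you want to confirm how a particular device is being reported, you can grep the flag out of the device list. The real command only runs on an ESXi host, so this rough sketch simulates the relevant slice of output (the device ID and surrounding fields are made up for illustration):

```shell
# On a real host you would run something like:
#   localcli storage core device list -d naa.5000c50057b932a3 | grep "Is Local"
# (hypothetical device ID). Below we simulate the relevant slice of the
# output to show what the flag looks like when ESXi treats the disk as remote.
device_info='naa.5000c50057b932a3
   Display Name: SEAGATE Disk (naa.5000c50057b932a3)
   Is Local: false
   Is SSD: false'

# Pull out just the "Is Local" line, exactly as the script below does at scale.
echo "$device_info" | grep "Is Local"
```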
Now, if I understand correctly, you want to automate the VSAN creation. When you enable VSAN in automatic mode it claims the disks, and they obviously must be marked local.
Since you are automating anyway, you might decide to do a scripted install of your ESXi 6.0 hosts first, then run a post-install script that finds all disks marked Is Local: false and changes them to true. I hope that's what you're asking for.
This is my horrible, untested script, but it gives you an idea of how you could grab the disks that are marked Is Local: false:
for line in $(localcli storage core device list | grep "Is Local: false" -B 15 | awk '{print $1}' | grep '^naa')
do
   # Add a SATP rule that forces the device to be treated as local
   localcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device $line --option "enable_local"
   # Unclaim the device and re-run the claim rules so the new rule takes effect
   localcli storage core claiming unclaim --type=device --device $line
   localcli storage core claimrule load
   localcli storage core claimrule run
   localcli storage core claiming reclaim -d $line
done
But the pro William Lam has a kickstart script that solves the Is Local = false problem using the vdq -q output.
Here is his example:
http://www.virtuallyghetto.com/2014/07/esxi-5-5-kickstart-script-for-setting-up-vsan.html
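To give a flavour of that approach: vdq -q prints a JSON-style list of disks with their VSAN eligibility state, which a kickstart script can parse to pick out device names. A rough, untested sketch; the sample output below is assumed for illustration, not copied from a real host:

```shell
# Simulated slice of 'vdq -q' output (field names assumed). On a host you
# would feed the real command's output into the same pipeline instead.
vdq_sample='[
{
"Name"     : "naa.5000c50057b932a3",
"VSANUUID" : "",
"State"    : "Ineligible for use by VSAN",
"Reason"   : "Has partitions",
"IsSSD"    : "0",
},
]'

# Pull the device names out of the JSON-ish blob: take the "Name" lines
# and print the value between the second pair of double quotes.
echo "$vdq_sample" | grep '"Name"' | awk -F'"' '{print $4}'
```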
HTH