AzureDisk
AzureDisk provides flexible block storage for virtual machines running on Azure. It is attached to the virtual machine in VHD format and can then be used by containers in Kubernetes. One of the highlights of AzureDisk is its strong performance, especially with Premium Storage. Its main limitation is that it does not support shared access: an AzureDisk can only be used within a single Pod.
Depending on its configuration, Kubernetes supports several types of AzureDisk (a StorageClass sketch showing how the disk kind is selected follows this list):
Managed Disks: Azure automatically manages disks and storage accounts
Blob Disks:
Dedicated (default): A unique storage account is created for each AzureDisk, which gets deleted when the PVC is deleted.
Shared: AzureDisks share a single storage account within the same ResourceGroup. Deleting the PVC does not remove this storage account.
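For reference, a minimal StorageClass sketch showing how the kind parameter of the azure-disk provisioner selects one of the types above (the class name and SKU are illustrative choices, not values from this page):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-dedicated               # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Dedicated                    # Managed | Dedicated | Shared
  storageaccounttype: Standard_LRS   # illustrative SKU
```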
Note:
The AzureDisk type must match the VM OS disk type: they must both be Managed Disks or both be Blob Disks. If they do not match, the AzureDisk PV reports a mount error.
Because Managed Disks require storage accounts to be created and managed, they take longer to provision than Blob Disks (about 3 minutes vs. 1-2 minutes).
Also note that a Node can mount at most 16 AzureDisks at the same time.
Recommended Kubernetes versions for using AzureDisk (cluster minor version: recommended release):
1.12: 1.12.9 or higher
1.13: 1.13.6 or higher
1.14: 1.14.2 or higher
>=1.15: >=1.15
For Kubernetes clusters deployed using , two StorageClasses are automatically created. The default is managed-standard (HDD):
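For illustration, the managed-standard class typically resembles the following sketch (the exact parameters on your cluster may differ):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-standard
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed                      # Managed Disks
  storageaccounttype: Standard_LRS   # standard HDD tier
```

A PersistentVolumeClaim then selects it via storageClassName: managed-standard.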
You can identify the root cause of the issue by examining the kube-controller-manager logs. A common error log might look like the following:
Temporary solutions include:
(1) Updating the status of all affected virtual machines
Use PowerShell:
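For example, a command along these lines, assuming $rg and $vmname hold the affected resource group and VM name (cmdlet names may differ in newer Az module releases):

```powershell
# Re-apply the VM model so its provisioning state is refreshed
$vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmname
Update-AzureRmVM -ResourceGroupName $rg -VM $vm
```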
Use Azure CLI:
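Or, with the Azure CLI (resource group and VM name are placeholders):

```sh
# Re-apply the VM model, refreshing a VM stuck in a failed state
az vm update -g <resource-group> -n <vm-name>
```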
(2) Restarting the virtual machine
kubectl cordon NODE
If a StatefulSet is running on the Node, the relevant Pod should be manually deleted.
kubectl drain NODE
Get-AzureRMVM -ResourceGroupName $rg -Name $vmname | Restart-AzureVM
kubectl uncordon NODE
This change causes Pods that were originally using the lun0 disk to lose access to the AzureDisk:
A temporary workaround is to set cachingmode: None in the AzureDisk StorageClass, as shown below:
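A sketch of such a StorageClass, assuming managed Standard_LRS disks (the class name and SKU are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-standard-nocache     # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Standard_LRS   # illustrative SKU
  cachingmode: None                  # avoids the device-name change described above
```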
Moreover, if a Node uses the Standard_B1s type of virtual machine, the first mount of the AzureDisk is likely to time out and only succeed on the second attempt. This is because formatting the AzureDisk in Standard_B1s virtual machines takes a long time (more than 70 seconds, for instance).
By default, an AzureDisk using the ext4 or xfs filesystem cannot set the uid and gid at mount time through mountOptions such as uid=x,gid=x. For instance, if you set mountOptions to uid=999,gid=999, you will see an error similar to:
This issue can be alleviated by either of the following:
Configuring the pod's security context, setting the uid via runAsUser and the gid via fsGroup. For instance, the following configuration runs the pod as root, making all files accessible:
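A minimal sketch (the pod name, image and claim name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-root-demo          # illustrative name
spec:
  securityContext:
    runAsUser: 0                     # uid: run as root
    fsGroup: 0                       # gid applied to the mounted volume
  containers:
  - name: app
    image: busybox                   # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: disk-pvc            # illustrative AzureDisk PVC name
```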
Note: Since the gid and uid default to root (0) when mounting, if the gid or uid is set to a non-root value (for example, 1000), Kubernetes will use chown to change all directories and files beneath the disk. This operation can be time-consuming and may result in slow disk mounting.
Setting the gid and uid using chown in initContainers. An example:
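A sketch (the volume name, mount path and 100:100 ownership are illustrative):

```yaml
initContainers:
- name: volume-permissions
  image: busybox                     # illustrative image
  command: ["sh", "-c", "chown -R 100:100 /data"]
  volumeMounts:
  - name: data                       # the AzureDisk volume
    mountPath: /data
```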
Errors might occur if you try to delete an Azure Disk PersistentVolumeClaim being used by a pod. For instance:
In Kubernetes 1.10 and later, PersistentVolumeClaim protection is enabled by default to prevent this problem. If your Kubernetes version does not include this protection, you can work around the issue by deleting the Pod that uses the PersistentVolumeClaim before deleting the PersistentVolumeClaim itself.
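In other words, something along these lines (the names are placeholders):

```sh
# Delete the Pod that mounts the claim first, then the claim itself
kubectl delete pod <pod-using-the-pvc>
kubectl delete pvc <pvc-name>
```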
This issue may occur when an AzureDisk is moved from a Pod on one Node to a Pod on another Node, or when multiple AzureDisks are used on the same Node. It is caused by the kube-controller-manager not taking locks around AttachDisk and DetachDisk operations, which leads to contention ( ).
This issue has been addressed in the v1.10 patch with fix .
In Kubernetes v1.7, the default caching policy for AzureDisk was switched to ReadWrite. This change led to an issue where mounting more than five AzureDisks on a Node resulted in randomly changing disk identifiers of existing AzureDisks ( ). For instance, after the sixth AzureDisk is mounted, the lun0 disk originally recognised as sdc might change to sdk:
This issue will be addressed in v1.10 in patch .
Mounting an AzureDisk PVC typically takes about one minute the first time, most of which is spent on Azure ARM API calls (querying the VM and mounting the Disk). With , a cache was added for Azure VMs, which removes the VM query time and brings the overall mount time down to about 30 seconds. This fix is included in v1.9.2+ and v1.10.
Azure German Cloud supports AzureDisk only in v1.7.9+, v1.8.3+ and later versions (). Upgrading Kubernetes resolves the problem.
only exists in Kubernetes v1.10.0 and v1.10.1 and will be fixed in v1.10.2.