In this chapter, we'll delve into methods for troubleshooting anomalies in Windows containers.
RDP Login to Node
When troubleshooting issues with Windows containers, you often need to log into the Windows node using RDP to check the status and logs of kubelet, Docker, HNS, and so forth. When using a cloud platform, you can assign a public IP to the relevant VM. When deploying on a physical machine, access can be obtained through port mapping on the router.
In addition, there is a simpler method: expose the node's port 3389 externally through a Kubernetes Service (be sure to replace node-ip with your own node's IP):
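A sketch of such a Service pairs a selector-less Service with a manually specified Endpoints object that points at the node. The name rdp is an arbitrary choice for illustration; keep <node-ip> as a placeholder for your node's address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rdp            # arbitrary name for this illustration
spec:
  type: LoadBalancer   # a cloud platform will allocate an external IP
  ports:
    - protocol: TCP
      port: 3389
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rdp            # must match the Service name above
subsets:
  - addresses:
      - ip: <node-ip>  # the Windows node's IP address
    ports:
      - port: 3389
```

Once the Service has an external IP (`kubectl get service rdp`), connect to it with an RDP client. Delete the Service after troubleshooting (`kubectl delete service rdp`) so port 3389 is not left exposed.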
Matching Image Versions to the Host OS
On Windows Server 1709, images tagged 1709 should be used, for example:
microsoft/aspnet:4.7.2-windowsservercore-1709
microsoft/windowsservercore:1709
microsoft/iis:windowsservercore-1709
On Windows Server 1803, images tagged 1803 should be used, including:
microsoft/aspnet:4.7.2-windowsservercore-1803
microsoft/iis:windowsservercore-1803
microsoft/windowsservercore:1803
DNS Cannot Be Resolved Within Windows Pod
This is a known issue, and there are three temporary solutions:
After Windows restarts, clear the stale HNS policies and restart the kube-proxy service:
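A sketch of this workaround in PowerShell, run on the affected node. It assumes the hns.psm1 helper module from the Microsoft/SDN repository is available (the path c:\k\hns.psm1 is a common deployment convention, not a fixed requirement) and that kube-proxy is registered as a Windows service named kubeproxy:

```powershell
# Load the HNS helper module (assumed location: c:\k\hns.psm1)
Import-Module c:\k\hns.psm1

# Remove all HNS policy lists left over from before the reboot
Get-HnsPolicyList | Remove-HnsPolicyList

# Restart kube-proxy so that it re-creates the policies
Restart-Service kubeproxy
```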
Configure the kube-dns Pod's address directly in the Pod's DNS configuration:
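A sketch of such a Pod spec. The nameserver 10.244.0.2 is a placeholder; look up a real kube-dns Pod IP in your cluster with `kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide`. Note that dnsConfig requires Kubernetes 1.10 or later, where the feature is in beta:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
  dnsPolicy: "None"        # ignore the cluster DNS defaults
  dnsConfig:
    nameservers:
      - 10.244.0.2         # placeholder: a kube-dns Pod IP in your cluster
  containers:
    - name: iis
      image: microsoft/iis:windowsservercore-1803
```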
More simply, run an extra Pod on each Windows node, so that at least two Pods are running per node; DNS resolution then works correctly.
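One way to keep an extra Pod on every Windows node is a placeholder Deployment sized to the node count. This is only a sketch: the name win-placeholder and the sleep command are illustrative assumptions, not part of any official workaround:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-placeholder
spec:
  replicas: 2                  # set to at least the number of Windows nodes
  selector:
    matchLabels:
      app: win-placeholder
  template:
    metadata:
      labels:
        app: win-placeholder
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows
      containers:
        - name: sleep
          image: microsoft/windowsservercore:1803
          command: ["powershell", "-Command", "Start-Sleep 360000"]
```

A Deployment does not guarantee an even spread across nodes, so verify placement with `kubectl get pods -o wide` and scale up (or add pod anti-affinity) if some Windows node ends up without a placeholder Pod.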
#!/bin/bash
# On Azure deployments that use a custom VNET, the route table created
# during provisioning must be associated with the Kubernetes subnet;
# otherwise Pod traffic cannot be routed between nodes.
# KubernetesSubnet is the name of the vnet subnet
# KubernetesCustomVNET is the name of the custom VNET itself
rt=$(az network route-table list -g acs-custom-vnet -o json | jq -r '.[].id')
az network vnet subnet update -n KubernetesSubnet \
  -g acs-custom-vnet \
  --vnet-name KubernetesCustomVNET \
  --route-table "$rt"