NPS VM does not start up if the NPS host is rebooted

Symptom

The NPS VM is in the poweroff state.

Cause

The NPS host was rebooted, either intentionally or accidentally.

Action
  1. Log in to the NPS host using valid credentials.
  2. Navigate to the directory that contains the Vagrantfile.
    cd <sync_folder>/nps
  3. To check the status of the NPS VM, run the following commands:
    source source.rc
    vagrant status
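    For example, on a single-VM setup the output typically resembles the following (the machine name and provider shown here are assumptions and may differ in your environment):
      Current machine states:

      default                   poweroff (virtualbox)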
  4. If the status is displayed as poweroff, run the following command to bring up the NPS VM:
    vagrant up
  5. To check the status of the NPS VM, rerun the following command:
    vagrant status
  6. Once the status is displayed as running, log in to the NPS VM from outside the NPS host using the OAM IP.
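    For example, from a workstation outside the NPS host (the user name is a placeholder; use the account configured for your deployment):
      ssh <username>@<OAM_IP>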
  7. If the NPS VM is not accessible from outside the NPS host using the OAM IP, log in (or SSH) to the NPS VM from the NPS host and delete the default route created with the NAT network.
    1. To check the routes configured on the NPS VM, run the following command:
      route -n
    2. To delete the route created with the NAT network, run the following command:
      route del -net 0.0.0.0 gw <gateway_ip_for_NAT_network>
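    For example, if the NAT gateway is 10.0.2.2 (the VirtualBox NAT default; this address is an assumption and may differ in your setup), the route to delete appears in the route -n output as follows:
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 eth0
    The matching delete command would then be:
      route del -net 0.0.0.0 gw 10.0.2.2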
  8. Log in to the NPS VM and wait a few minutes for the Kubernetes services to come up.
  9. To check the status of the cluster node and all the Pods, run the following commands:
    kubectl get nodes
    kubectl get pods --all-namespaces
    
    NOTE:

    All Pods must be in the Running state.
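    To quickly list any Pods that are not yet Running, you can also filter by phase (an optional convenience check, not part of the standard procedure):
      kubectl get pods --all-namespaces --field-selector=status.phase!=Running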