Hypriot Support or Raspbian Lite?


#1

First post and a bit new to Ansible, K8S and Raspberry Pis.
I have created a 4 node Raspberry Pi cluster with the sole purpose of learning K8S.
I was reading a lot of articles, and several suggested using the Hypriot image instead of Raspbian Lite. Also, I primarily use a Chromebook, so I am running Ansible from one of the Raspberry Pis; namely, the node that I want to be the Master.

So, I’ve done the following:

  1. Flash Hypriot image on all 4 nodes
  2. Installed Ansible on one of the nodes
  3. Included all 4 nodes’ IP addresses in the inventory and in /etc/ansible/hosts
  4. Verified that ansible all -m ping works correctly
  5. Cloned the Rak8s repo to the Master
  6. Modified the inventory file to include all 4 nodes and designated 1 of them as the Master
  7. Modified ansible.cfg to set the user to “pirate” instead of “pi”
  8. Kicked off ansible-playbook cluster.yml
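
For reference, my changes from steps 6 and 7 looked roughly like this. The hostnames, IPs, and group names here are placeholders from my setup; check the rak8s README for the exact group names it expects:

```ini
# inventory — hypothetical addresses; rak8s' real group names may differ
[master]
pi-master ansible_host=192.168.1.100

[node]
pi-node1 ansible_host=192.168.1.101
pi-node2 ansible_host=192.168.1.102
pi-node3 ansible_host=192.168.1.103
```

```ini
# ansible.cfg — switch the remote user from Raspbian's "pi" to Hypriot's "pirate"
[defaults]
inventory = inventory
remote_user = pirate
```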

The playbook runs without errors and eventually reports that it is going to reboot the nodes.
After the reboot, I log back into the Master and find that kubectl is not installed. I need to poke around the logs some more, but before that, I have 2 questions.

  1. Will Rak8s work with Hypriot as the operating system instead of Raspbian Lite?
  2. Will Rak8s work with ansible being executed from one of the nodes, which is the Master?

If the recommendation is to use Raspbian Lite and also not run Ansible from one of the nodes, I can easily do that.

Advice appreciated. Thanks.


#2

I made no considerations for Hypriot. Use at your own risk.

If you can ping all the nodes, Ansible should work fine. As always, read the playbooks before executing.

I have just pushed some changes to master that might be helpful.


#3

I was able to get rak8s to work with Hypriot. The only change I had to make was adding ignore_errors: true to the Disable Swap task. Other than that it worked like a charm. Thank you again for the hard work of automating this.
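
For anyone else hitting this: the tweak is a one-liner on the existing task. I'm sketching the task body from memory here (the dphys-swapfile command is an assumption about what rak8s runs; only the ignore_errors line is my addition). Hypriot doesn't ship dphys-swapfile, which is why the task fails there:

```yaml
# Disable Swap task — task body is a sketch; ignore_errors: true is the added line
- name: Disable Swap
  shell: dphys-swapfile swapoff && dphys-swapfile uninstall && update-rc.d dphys-swapfile remove
  ignore_errors: true
```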


#4

“Worked like a charm” meant that it ran. I’m finding multiple issues since then. For example, the /etc/kubernetes/admin.conf file wasn’t copied to ~/.kube/config on the master node, so I couldn’t run kubectl get nodes there until I copied it myself. And the master node is stuck in NotReady due to “network plugin not ready: cni config uninitialized”. I thought this setup used flannel. Or do CNI and Flannel co-exist? I’m still grokking the networking piece.
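
For reference, the kubeconfig fix I applied is the standard kubeadm post-init step. And as I understand it, CNI is the plugin interface and flannel is one implementation of it, so if the play never applied the flannel manifest, applying it by hand should clear the NotReady state. (The manifest URL below is the one the flannel repo documented at the time; it may have moved since.)

```shell
# On the master: point kubectl at the admin kubeconfig (standard kubeadm post-init step)
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Flannel is a CNI plugin; if the playbook never installed one, apply the manifest manually
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```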