Hello everyone!!! I'm an avid reader of this forum, so I wanted to contribute to the community by sharing my cluster design. The goal is to pack a multi-node cluster into the smallest form factor possible.
As of now, the project includes the whole automation to deploy the cluster and the diagrams for building the case.
Main project: https://github.com/Kubeinit/
Ansible automation: https://github.com/Kubeinit/kubeinit/
This case design: https://github.com/Kubeinit/box/
What I'm pasting here is the content of the project's README on GitHub. The project is open source, so feedback and improvements are always welcome. Also, if you like these projects, please help by starring them!!

The KUBErnetes INITiator (The box)
Kubeinit In A Box
Reference Architecture:
- Latest revision: 2.1 BOM update [Carlos Camacho].
Enclosure design
External design, views, and components organization.
The enclosure is designed as a rack-mountable unit occupying 7U. It minimizes the space needed to deploy a cluster of up to 8 nodes with redundancy for both power and networking.
Enclosure 3D renders
Mechanical views
Hardware components description
This section describes the different hardware combinations for configuring a Kubeinit cluster. The configuration of a Kubeinit chassis is flexible enough to hold 8 different devices in its bays. The next picture shows a general frontal view of the different components that can be allocated in a Kubeinit chassis; from left to right: compute-node, gpu-bay, compute-node, gpu-bay, compute-node, storage-bay, compute-node, gpu-bay.
Raspberry Pi remote management
There is a Raspberry Pi attached to the front cover of the chassis with the official 7” touch screen. This allows you to:
- Control the three-fan array in the back of the case (see the sketch after this list).
- Show SNMP statistics from the cluster.
- Display cluster state.
- Reprovision the cluster using Ansible.
- Monitor temperature inside the chassis.
- Remote access to the cluster.
- Attach external Monitor/Keyboard/Mouse to a node with direct access to the cluster.
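The fan-control and temperature-monitoring items above can be scripted directly on the Pi. Below is a minimal Python sketch under stated assumptions: the fan array is driven through a PWM-capable GPIO pin (BCM 18 is only an example, the real wiring depends on the box build), and the Pi's own CPU temperature is used as a stand-in for a dedicated chassis sensor.

```python
# Minimal sketch: drive the rear fan array and watch the temperature from the
# front-cover Raspberry Pi. Assumptions: the fans are wired to a PWM-capable
# GPIO pin (BCM 18 is just an example) and the Pi's own CPU temperature is
# used as a stand-in for a dedicated chassis sensor.
import time

import RPi.GPIO as GPIO

FAN_PWM_PIN = 18   # hypothetical pin; depends on how the fan array is wired
PWM_FREQ_HZ = 100  # software PWM; real 4-pin fans expect ~25 kHz, which would
                   # need hardware PWM (e.g. pigpio)

def read_temperature_c():
    """Read the Pi's CPU temperature in Celsius as a chassis-temperature proxy."""
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def duty_for(temp_c):
    """Map temperature to a fan duty cycle: 30% below 40 C, 100% above 70 C."""
    if temp_c <= 40:
        return 30.0
    if temp_c >= 70:
        return 100.0
    return 30.0 + (temp_c - 40) * 70.0 / 30.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PWM_PIN, GPIO.OUT)
fan = GPIO.PWM(FAN_PWM_PIN, PWM_FREQ_HZ)
fan.start(30)

try:
    while True:
        fan.ChangeDutyCycle(duty_for(read_temperature_c()))
        time.sleep(5)
finally:
    fan.stop()
    GPIO.cleanup()
```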
Motherboards
Each Kubeinit bay must use a Mini-ITX form factor motherboard. Some examples are the following.
Supermicro motherboards
Check the Supermicro Mini-ITX motherboards (up to 512GB RAM, up to 16 cores, up to 10GbE); refer only to SoC boards.
Examples:
Networking
A Kubeinit chassis can hold up to two networking switches; depending on the reference design they can be GbE or 10GbE, and they can be configured redundantly or not.
10GbE configuration
Production-ready configurations should use 10GbE by default.
Redundant scenarios
Redundancy should be provided by two networking switches of the same model; in a Kubeinit chassis the maximum switch width is 340mm.
Non-redundant scenarios
For non-redundant scenarios, which can be used for non-critical production environments, there are multiple options depending on the size and number of ports:
- 16-port 10GbE switch: Netgear XS716E
Raspberry Pi node
Raspberry Pi display views
Additional storage
There are three disk bays in each server pod, so the cluster can hold up to 24 2.5” disks. In addition, if the ASRock motherboard is used, an additional 8-disk hot-swap caddy with mini-SAS ports can be fitted.
Expansion slots
Each node pod can hold a full-size PCIe x16 card; this can be an additional GPU, FPGA board, or any other custom board for specific use cases, for example an additional hot-swap disk caddy. There is a maximum number of additional devices that can be added, based on two constraints: space and power consumption.
- Space: Up to 4 node pods with 1 external device each.
- Power: Each PCIe x16 slot can deliver up to 75W. If more power is needed, 2 Flex ATX PSUs can be added. Each 8-pin connector on an external card can theoretically deliver up to 150W, so the 2 PSUs should allow connecting up to 2 external cards with 2 x 8-pin connectors each for high power demands (see the sketch below).
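To make the power arithmetic above concrete, here is a small Python sketch. The 75W per-slot and 150W per-8-pin figures come from the text; the 250W example card is purely illustrative.

```python
# Rough power-budget check for an expansion card, using the figures above:
# a PCIe x16 slot supplies up to 75 W and each 8-pin connector up to 150 W.
PCIE_SLOT_W = 75
EIGHT_PIN_W = 150

def connectors_needed(card_power_w):
    """Return how many 8-pin connectors a card needs beyond the slot's 75 W."""
    extra = card_power_w - PCIE_SLOT_W
    if extra <= 0:
        return 0
    # Round up to whole connectors.
    return -(-extra // EIGHT_PIN_W)

# Example: a hypothetical 250 W GPU needs the slot plus 2 x 8-pin connectors,
# which matches the "2 external cards with 2 x 8-pin connectors" limit above.
print(connectors_needed(250))  # -> 2
```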
Deployment configurations
Nodes configuration for deploying OpenShift
Node type | Quantity | Description |
---|---|---|
Bastion**** | 1 | Deployment of the environment, Ansible playbooks, hardware management, etc. |
Infrastructure | 2 | OpenShift HAProxy, container registry, routing, etcd, logging, metrics. |
Master | 3 | OpenShift API master, Kubernetes scheduler. |
Compute | 3 | Runs the application containers. |

**** The bastion host in this POC can be the Raspberry Pi node integrated in the front cover.
Nodes configuration for deploying OpenStack
Node type | Quantity | Description |
---|---|---|
Controllers | 3 | OpenStack control plane services. |
Computes | 4 | Run the workload virtual machines. |
Undercloud*** | 1 | Deploys and manages the OpenStack nodes. |

*** This node can also be the Raspberry Pi node integrated in the front cover.
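As a quick sanity check that both layouts above fit the 8-bay chassis once the bastion/undercloud role moves to the front-cover Raspberry Pi (as the table footnotes suggest), here is a tiny Python sketch. The role names mirror the tables; the Pi-hosted assumption is mine.

```python
# Check that a deployment layout fits in the 8 chassis bays, assuming the
# bastion/undercloud role can live on the front-cover Raspberry Pi instead
# of occupying a bay (see the table footnotes above).
CHASSIS_BAYS = 8
PI_HOSTED_ROLES = {"bastion", "undercloud"}

openshift_layout = {"bastion": 1, "infrastructure": 2, "master": 3, "compute": 3}
openstack_layout = {"controllers": 3, "computes": 4, "undercloud": 1}

def bays_used(layout):
    """Count bays needed when Pi-hosted roles are excluded from the chassis."""
    return sum(count for role, count in layout.items()
               if role not in PI_HOSTED_ROLES)

for name, layout in [("OpenShift", openshift_layout), ("OpenStack", openstack_layout)]:
    used = bays_used(layout)
    print(f"{name}: {used}/{CHASSIS_BAYS} bays used, fits={used <= CHASSIS_BAYS}")
# OpenShift: 8/8 bays used, fits=True
# OpenStack: 7/8 bays used, fits=True
```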
Disclaimer note: All third party trademarks (including logos and icons) referenced by Kubeinit remain the property of their respective owners. Unless specifically identified as such, Kubeinit's use of third party trademarks does not indicate any relationship, sponsorship, or endorsement between Kubeinit and the owners of these trademarks. All references by Kubeinit are for educational or reference purposes.