A Kubernetes Device Plugin for Hailo AI accelerators, enabling seamless scheduling of AI inference workloads on edge devices like the Raspberry Pi AI HAT+.
The plugin only deploys on nodes with this label, preventing unnecessary pods on nodes without Hailo devices:

```bash
kubectl label nodes <node-name> hailo.ai/device=present
```

```bash
kubectl apply -f https://raw.githubusercontent.com/gllm-dev/hailo-device-plugin/main/deploy/daemonset.yaml
```

```bash
# Check pod is running
kubectl -n kube-system get pods -l app.kubernetes.io/name=hailo-device-plugin

# Check device is registered
kubectl get nodes -o custom-columns=NAME:.metadata.name,HAILO:.status.allocatable.hailo\\.ai/h10
```

Should show:

```
NAME          HAILO
<node-name>   1
```
```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hailo-test
spec:
  containers:
  - name: test
    image: debian:bookworm-slim
    command: ["sh", "-c", "ls -la /dev/hailo* && sleep 3600"]
    resources:
      limits:
        hailo.ai/h10: 1
  restartPolicy: Never
EOF
```
```bash
# Check logs
kubectl logs hailo-test
```

Should show `/dev/hailo0`.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: model
    image: your-inference-image
    resources:
      limits:
        hailo.ai/h10: 1
```

- Kubernetes v1.26+
- Hailo driver installed on worker nodes (installation guide)
- Hailo device accessible at `/dev/hailo*`
The DaemonSet uses a node selector (`hailo.ai/device=present`) to deploy only on labeled nodes.
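As an illustration, the node-selector stanza in the DaemonSet's pod template would look roughly like the fragment below. This is a sketch based on the label described above, not the authoritative manifest; see `deploy/daemonset.yaml` in the repository for the real definition.

```yaml
# Sketch: how the hailo.ai/device=present label gates scheduling.
# The actual manifest lives in deploy/daemonset.yaml.
spec:
  template:
    spec:
      nodeSelector:
        hailo.ai/device: "present"
```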
1. Label nodes with Hailo devices:

```bash
kubectl label nodes <node-name> hailo.ai/device=present
```

2. Deploy the plugin:

```bash
kubectl apply -f https://raw.githubusercontent.com/gllm-dev/hailo-device-plugin/main/deploy/daemonset.yaml
```

Configure the plugin using environment variables in the DaemonSet:
| Environment Variable | Default | Description |
|---|---|---|
| `HAILO_RESOURCE_NAME` | `hailo.ai/h10` | Kubernetes resource name to register |
| `HAILO_ARCHITECTURE` | `HAILO10H` | Device architecture identifier |
| `HAILO_DEVICE_PATH` | `/dev` | Path to device directory |
| `HAILO_DEVICE_PATTERN` | `hailo*` | Glob pattern to match device files |
Edit the DaemonSet to add environment variables:

```bash
kubectl edit daemonset hailo-device-plugin -n kube-system
```

```yaml
spec:
  template:
    spec:
      containers:
      - name: hailo-device-plugin
        env:
        - name: HAILO_RESOURCE_NAME
          value: "hailo.ai/h8l"
        - name: HAILO_ARCHITECTURE
          value: "HAILO8L"
```

| Device | Resource Name | Architecture | Default |
|---|---|---|---|
| AI HAT+ 2 (Hailo-10H) | `hailo.ai/h10` | `HAILO10H` | Yes |
| AI HAT+ (Hailo-8L) | `hailo.ai/h8l` | `HAILO8L` | No |
| Hailo-8L | `hailo.ai/h8l` | `HAILO8L` | No |
| Hailo-8 | `hailo.ai/h8` | `HAILO8` | No |
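For example, on a node where the plugin is configured for a Hailo-8L, a pod would request the device by its resource name. This is a sketch: the image name is a placeholder, and the resource name must match whatever `HAILO_RESOURCE_NAME` the plugin registered on that node.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-h8l
spec:
  containers:
  - name: model
    image: your-inference-image  # placeholder image
    resources:
      limits:
        hailo.ai/h8l: 1          # must match HAILO_RESOURCE_NAME
```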
This plugin handles device scheduling and allocation. For inference workloads, you'll need to build containers with HailoRT on your Hailo nodes (the SDK is not publicly redistributable).
Resources:
- HailoRT SDK - Runtime library for Hailo devices
- Hailo Model Zoo - Pre-trained models and examples
- hailo_model_zoo_genai - LLM support (Ollama-compatible API)
```bash
git clone https://github.com/gllm-dev/hailo-device-plugin.git
cd hailo-device-plugin
go build -o hailo-device-plugin ./cmd/plugin
docker build -t hailo-device-plugin:local .
```

Contributions are welcome! Please read our Contributing Guidelines before submitting a Pull Request.
This project follows Conventional Commits and Keep a Changelog.
This project is licensed under the MIT License - see the LICENSE file for details.
- Hailo for their AI accelerator technology
- Kubernetes Device Plugin framework
