This guide documents a VM-friendly workflow for running PyPNM on kind and managing it in FreeLens. The model below assumes one PyPNM service per CMTS, with multiple namespaces living on a single VM/cluster.
- VM with Docker installed and running.
- `curl` and `sudo` available.
- Network access to GHCR (`ghcr.io/PyPNMApps/pypnm`).
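Before running the bootstrap, a quick preflight sketch like the following (not part of the PyPNM tooling) can confirm the prerequisites above are actually on the PATH:

```shell
# Preflight sketch: report any required command that is missing.
check_prereqs() {
  local missing=0 cmd
  for cmd in "$@"; do
    if ! command -v "$cmd" > /dev/null 2>&1; then
      echo "missing prerequisite: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Matches the prerequisite list above:
# check_prereqs docker curl sudo && echo "all prerequisites found"
```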
Use the bootstrap helper for Debian/Ubuntu-style hosts (no repo clone required):
```bash
curl -fsSL https://raw.githubusercontent.com/PyPNMApps/PyPNM/main/tools/k8s/pypnm_kind_vm_bootstrap.sh \
  -o /tmp/pypnm_kind_vm_bootstrap.sh
bash /tmp/pypnm_kind_vm_bootstrap.sh
```

Pick a release tag, then deploy into a namespace (one namespace per CMTS). This script pulls manifests from GitHub, so no repo clone is required:
```bash
TAG="v1.6.4.0"
NAMESPACE="pypnm-cmts-a"
curl -fsSL https://raw.githubusercontent.com/PyPNMApps/PyPNM/main/tools/k8s/pypnm_k8s_remote_deploy.sh \
  -o /tmp/pypnm_k8s_remote_deploy.sh
bash /tmp/pypnm_k8s_remote_deploy.sh --create --tag "${TAG}" --namespace "${NAMESPACE}" --replicas 1
```

Health check:
```bash
kubectl port-forward --namespace "${NAMESPACE}" deploy/pypnm-api 8000:8000
curl -i http://127.0.0.1:8000/health
```

Example response:
```json
{
  "status": "ok",
  "service": {
    "name": "pypnm-docsis",
    "version": "1.4.3.0"
  },
  "uptime": {
    "starttime": 1773640097,
    "uptime": 1
  },
  "memory": {
    "rss_bytes": 12582912,
    "total_bytes": 17179869184,
    "free_bytes": 8216707072,
    "available_bytes": 10379091968,
    "usage_percent": 0.07
  },
  "data": {
    "path": ".data",
    "size_bytes": 1761579619,
    "directories": {
      "json": 1728244816,
      "xlsx": 0,
      "pnm": 17349665,
      "csv": 2819493,
      "png": 2805111,
      "db": 3388387,
      "archive": 6968882,
      "msg_rsp": 3265
    }
  }
}
```

- `memory.rss_bytes`: resident memory used by the running PyPNM process, in bytes.
- `memory.total_bytes`: host total memory, in bytes.
- `memory.free_bytes`: host free memory, in bytes.
- `memory.available_bytes`: host available memory, in bytes.
- `memory.usage_percent`: resident process memory as a percent of host total memory.
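As a sanity check on these fields, `usage_percent` is `rss_bytes` divided by `total_bytes`, times 100. Recomputing it from the example response above (a one-liner sketch using awk):

```shell
# rss_bytes and total_bytes are the values from the example /health response.
awk 'BEGIN { printf "%.2f\n", 12582912 / 17179869184 * 100 }'
# prints 0.07, matching memory.usage_percent in the example
```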
If the returned service.version is older than expected, verify the tag used in the deploy command and confirm the namespace matches the running deployment.
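To compare the running version against the deployed tag without eyeballing the JSON, a small helper along these lines (a sketch; assumes `python3` is available where you run it) can pull `service.version` out of the `/health` response:

```shell
# Extract service.version from a /health response read on stdin.
health_version() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["service"]["version"])'
}

# With a live port-forward, compare against the deployed tag
# (tags carry a leading "v", e.g. TAG="v1.6.4.0" -> version "1.6.4.0"):
# curl -fsS http://127.0.0.1:8000/health | health_version
```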
- Add the kubeconfig from the VM to FreeLens.
- If FreeLens runs on the VM, it can use `~/.kube/config` directly.
- If FreeLens runs on your workstation, copy the kubeconfig from the VM:

```bash
VM_USER="dev01"
VM_HOST="10.0.0.10"
LOCAL_KUBECONFIG="${HOME}/.kube/pypnm-dev-kubeconfig"
scp ${VM_USER}@${VM_HOST}:~/.kube/config "${LOCAL_KUBECONFIG}"
```
PowerShell:

```powershell
$VM_USER = "dev01"
$VM_HOST = "10.0.0.10"
$LOCAL_KUBECONFIG_DIR = "$env:USERPROFILE\.kube"
$LOCAL_KUBECONFIG = "$LOCAL_KUBECONFIG_DIR\pypnm-dev-kubeconfig"
New-Item -ItemType Directory -Force -Path $LOCAL_KUBECONFIG_DIR | Out-Null
scp ${VM_USER}@${VM_HOST}:~/.kube/config "$LOCAL_KUBECONFIG"
```
- In FreeLens: Catalog → Clusters → Add Cluster → From kubeconfig, then select the copied file.
- If FreeLens runs on Windows, update any file paths inside the kubeconfig (for example, replace `/home/dev01/...` with `C:\Users\<you>\...`). Then use an SSH tunnel for the API port:

```bash
VM_USER="dev01"
VM_HOST="10.0.0.10"
VM_PORT="8000"
ssh -L ${VM_PORT}:127.0.0.1:${VM_PORT} ${VM_USER}@${VM_HOST}
```
PowerShell:

```powershell
$VM_USER = "dev01"
$VM_HOST = "10.0.0.10"
$VM_PORT = "8000"
ssh -L ${VM_PORT}:127.0.0.1:${VM_PORT} ${VM_USER}@${VM_HOST}
```
- Select the target namespace (`pypnm-cmts-a`).
  - In FreeLens 1.7.0: open Catalog (top-left icon), choose Clusters, then switch the namespace selector in the top bar to `pypnm-cmts-a`.
- In FreeLens, open the target namespace and locate `pypnm-api` (Service or Pod).
- Start a port-forward for `pypnm-api` to local port 8000.
  - Service: forward `8000 -> 8000`.
  - Pod: forward `8000 -> 8000`.
- Open `http://127.0.0.1:8000/health` in your browser.
  - If you are tunneling, keep the SSH tunnel running and use the same `http://127.0.0.1:8000` URL on Windows.
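If the port-forward is fresh, `/health` can take a moment to start responding. A small retry wrapper like this hypothetical helper (not part of PyPNM) saves refreshing the browser by hand:

```shell
# Hypothetical helper: retry a health probe until it succeeds or times out.
# $1 is any command that exits 0 when the service is healthy; $2 is a timeout in seconds.
wait_for_health() {
  local check_cmd="$1" timeout="${2:-30}" elapsed=0
  until eval "$check_cmd"; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "health check timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 1
  done
  echo "healthy after ${elapsed}s"
}

# Example against the forwarded port:
# wait_for_health 'curl -fsS http://127.0.0.1:8000/health > /dev/null' 60
```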
Tip: keep one FreeLens workspace per CMTS namespace so sessions stay isolated.
| Example | When to use |
|---|---|
| Single namespace with 10 replicas | Load testing or simple HA with one endpoint. |
| 10 namespaces, 10 ports | One instance per CMTS or per customer. |
| Multiple kind clusters | Hard isolation between environments. |
Deploy a PyPNM instance per CMTS by assigning a unique namespace and configuration per CMTS:
```bash
TAG="v1.6.4.0"
for CMTS in cmts-a cmts-b cmts-c; do
  NAMESPACE="pypnm-${CMTS}"
  bash /tmp/pypnm_k8s_remote_deploy.sh --create --tag "${TAG}" --namespace "${NAMESPACE}" --replicas 1
done
```

Each instance should point at exactly one CMTS. Use a per-namespace config patch and apply it as described in PyPNM on Kubernetes (kind) ("Config overrides" section). Keep the CMTS routing and retrieval settings unique per namespace.
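With several namespaces on one VM, each concurrent port-forward needs a distinct local port (the "10 namespaces, 10 ports" layout above). One simple convention, sketched here with arbitrary offsets, maps each CMTS to 8000 plus its index:

```shell
# Sketch: derive a unique local port per CMTS namespace (8001, 8002, ...).
port=8000
for CMTS in cmts-a cmts-b cmts-c; do
  port=$((port + 1))
  echo "pypnm-${CMTS} -> local port ${port}"
  # Pair each mapping with a port-forward when you need concurrent sessions:
  # kubectl port-forward --namespace "pypnm-${CMTS}" deploy/pypnm-api "${port}:8000" &
done
```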
```bash
NAMESPACE="pypnm-cmts-a"
bash /tmp/pypnm_k8s_remote_deploy.sh --teardown --namespace "${NAMESPACE}"
```

To delete the cluster after all namespaces are removed:

```bash
kind delete cluster --name pypnm-dev
```