Chasing My Own Tail
Setting up Mongo DB on K8S to support Appsmith was an adventure in circular reasoning, but a simple solution eventually presented itself...
Using Terraform to manage infrastructure is great. Having it spin up a MongoDB resource on Kubernetes is nifty, too. But what about needing a Replica Set configuration in that environment?
Start with a single-node Mongo resource, not in replSet mode. This resource attaches to a PVC for its operation, but if you want to switch it to replSet operation, updating your Terraform configuration won't be enough.
The new container will try to spin up, see that another mongod is attached to the PVC, and enter a backoff crash loop.
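If you want to confirm you're in this state, a couple of kubectl commands will show it (the namespace, label, and deployment name below are placeholders; adjust to your setup):

```shell
# Look for CrashLoopBackOff on the new mongo pod
kubectl -n yourNameSpaceHere get pods -l app=mongo

# Check the logs of the crashed container; expect a complaint
# about the data directory being locked by another mongod
kubectl -n yourNameSpaceHere logs deployment/mongo --previous
```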
First step to resolve? Try wiping it all out and starting in replSet mode from the beginning. That seems to work, but here's the fun part: you need to authenticate in order to run the `eval "rs.initiate()"` command to start replica set mode... and Mongo doesn't create the initdb user if it starts in replica set mode.
So... we have to start in normal mode, let non-replSet Mongo create the initdb user, and then convert to replica set, but the old container won't let go of that pesky PVC. What to do? Scale to the rescue!
kubectl -n (yourNameSpaceHere) scale deployment mongo --replicas=0
Modify this command to target your specific Mongo deployment and it'll scale it down to zero, releasing the hold on the PVC. THEN update your Terraform container configuration to include the replSet command, like so:
spec {
  container {
    name  = "mongo"
    image = "mongo:6"

    port { container_port = 27017 }

    command = [
      "mongod",
      "--replSet", "rs0",
      "--bind_ip_all",
    ]
  }
}
... and reapply with Tofu or Terraform. This will (a) update your container definition, and then (b) scale it back up. Without another pod fighting for the PVC, the newly configured replSet pod can grab it and start running.
Then you just need to retrieve your Mongo root password and use the command line to run rs.initiate().
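One way to do that last step from outside the pod, assuming your root credentials live in a Kubernetes secret (the secret name and key here are guesses; match them to whatever your Terraform config actually creates):

```shell
# Pull the root password out of the secret (names are assumptions)
MONGO_PASS=$(kubectl -n yourNameSpaceHere get secret mongo-secret \
  -o jsonpath='{.data.mongo-root-password}' | base64 -d)

# Exec into the deployment's pod and initiate the replica set
kubectl -n yourNameSpaceHere exec deploy/mongo -- \
  mongosh -u root -p "$MONGO_PASS" --eval "rs.initiate()"
```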
UPDATE: I have all containers "working," but AppSmith still has some configuration issues. I get a 502 from the service worker when hitting API routes, despite everything seeming OK. Once I've finally ironed it all out, I'll post a final Terraform configuration that I know works.