Easier development when dealing with docker-compose stacks

For some time I've had to deal with two separate, docker-compose based application stacks. One combines a Ruby on Rails app with a whole suite of Elasticsearch nodes, Sidekiq workers, PostgreSQL, nginx, the whole shebang. The other is just a plain Zope/Plone stack, but the difficulty is the same: when I want to do production debugging, or just plain development in that environment, I need something I can start manually inside the stack. I don't want to deal with rpdb or a remote byebug session just to be able to debug. I want to poke around the whole stack and see what happens.

So my solution, in both cases, was to configure an extra service in the docker-compose stack that does nothing.

...
debug:
  image: plone
  ports:
    - "8090:8080"
  volumes:
    - ./src:/plone/instance/src
  entrypoint: sh -c "tail -f /dev/null"
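The trick works because `tail -f /dev/null` blocks forever: it waits for data that never arrives, so the container's main process never exits. You can see this locally, without docker at all:

```shell
# tail -f /dev/null waits forever for input that never comes,
# so the process (and hence the container running it) stays alive.
# timeout kills it after one second; exit status 124 means timeout
# had to step in, i.e. tail was still happily running.
timeout 1 tail -f /dev/null
echo "exit: $?"
```

Any other command that blocks indefinitely would do just as well; `tail -f /dev/null` is simply cheap and available in every base image.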

Something like the above. Notice the entrypoint, which keeps the container up but does nothing. Now I can run

docker exec -it debug_1 bash

And inside the container, I can edit the eggs to add a pdb.set_trace() line wherever I need it, then start the instance in the foreground:

bin/standalone fg
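The edit itself is nothing special: open the file in the mounted source tree and splice a breakpoint in before the line you care about. A throwaway sketch of that step, using a made-up demo module (the real path would be something under /plone/instance/src or the eggs directory):

```shell
# Create a small stand-in module to edit (hypothetical file, for demo only).
cat > /tmp/demo_view.py <<'EOF'
def render(context):
    value = context.get("title")
    return value.upper()
EOF

# Splice a breakpoint in before line 3, i.e. just before the return.
sed -i '3i\    import pdb; pdb.set_trace()' /tmp/demo_view.py
cat /tmp/demo_view.py
```

With the breakpoint in place, `bin/standalone fg` runs the instance attached to your terminal, so pdb drops you into an interactive prompt right there when the line is hit.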

Why go through this trouble instead of just running the Plone container directly, with something like

docker run --name debug plone

Usually the services in a docker-compose stack are intertwined and need to connect to one another. My debug service can be linked to any of the others: postfix, postgresql, elasticsearch, and so on. Why do that linking manually, from the command line, when I can just get docker-compose to do it for me?
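Because the debug service is declared in the same compose file, wiring it to the rest of the stack is one more stanza. A sketch, assuming services named postgresql and elasticsearch already exist in the file:

```
debug:
  image: plone
  entrypoint: sh -c "tail -f /dev/null"
  links:
    - postgresql
    - elasticsearch
```

Inside the debug container, those services are then reachable by name, exactly as they are from the real application containers, so whatever I start by hand sees the same environment the stack does.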
