A while ago we started making use of Docker Compose for setting up environments for servers and clusters. One of the biggest advantages we found in such a setup is the reproducibility of an environment for production, staging or development systems without much effort, together with the easy automation of build and deployment processes without a lot of script writing.
One case we didn’t know how to address properly was how to provide initial data dumps for the environments without exposing the Mongo port to the outside world and/or a lot of manual intervention. Recently we found an easy solution we’d like to share.
```yaml
version: '2'
services:
  mongo-db:
    image: mongo:3.2
    entrypoint: [ "/usr/bin/mongod", "--journal", "--smallfiles", "--rest" ]
    restart: always
    networks:
      - mongo
  mongosetup:
    image: mongo:3.2
    volumes:
      - ./dump:/dump
    entrypoint: ["mongorestore", "--host", "mongo-db", "/dump"]
    networks:
      - mongo
networks:
  mongo:
```
This simple setup utilises the ability of docker-compose to override the default entrypoint of a container. The mongosetup container runs mongorestore against the mongo-db service over the internal mongo network, so all the data in the mounted dump directory is imported into the MongoDB container on startup.
This approach can be taken further by mounting a script into the container and running it as the entrypoint instead. Such a script could, for example, delete the dump after a successful import to prevent a re-import attempt if you don’t want to restart the service all the time, or trigger other actions once the dump has been imported.
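As a minimal sketch of what such a mounted script could look like: the script name, the `RESTORE_CMD` override and the function wrapper below are assumptions for illustration, not part of the original setup. It restores the dump and then removes it, so restarting the mongosetup service becomes a no-op.

```shell
#!/bin/sh
# restore-dump.sh -- hypothetical entrypoint script for the mongosetup
# container; mount it alongside ./dump and point the service's entrypoint
# at it instead of calling mongorestore directly.

restore_dump() {
    dump_dir="${1:-/dump}"
    host="${2:-mongo-db}"
    # RESTORE_CMD can be overridden (e.g. for dry runs); defaults to mongorestore.
    restore_cmd="${RESTORE_CMD:-mongorestore}"

    # Only import when the dump directory exists and is non-empty.
    if [ -d "$dump_dir" ] && [ -n "$(ls -A "$dump_dir" 2>/dev/null)" ]; then
        "$restore_cmd" --host "$host" "$dump_dir" || return 1
        # Delete the dump so a restart of the service does not re-import it.
        rm -rf "${dump_dir:?}"/*
        echo "dump imported, files removed"
    else
        echo "no dump found, nothing to import"
    fi
}

restore_dump "$@"
```

Because the dump is deleted after the first run, the service can stay under `restart: always` without repeatedly hammering the database with imports.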