Go inside the `notes-api/api` directory and create a `Dockerfile.dev` file. Put the following code in it:
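The original file isn't reproduced in this copy, but a minimal sketch consistent with the differences listed below would look something like this (the base image tag, working directory, and `dev` script name are assumptions, not the project's exact values):

```dockerfile
# development image for the API service (sketch; names are assumptions)
FROM node:lts-alpine

# development, so that dev tooling behaves accordingly
ENV NODE_ENV=development

WORKDIR /home/node/app

COPY ./package.json .

# plain npm install pulls in devDependencies as well
RUN npm install

COPY . .

# assumes a "dev" script exists in package.json
CMD [ "npm", "run", "dev" ]
```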
This code is almost identical to the `Dockerfile` that you worked with in the previous section. The differences in this file are as follows:

- We use `npm install` instead of `npm run install --only=prod` because we want the development dependencies also.
- We set the `NODE_ENV` environment variable to `development`.
- `notes-db` - A database server powered by PostgreSQL.
- `notes-api` - A REST API powered by Express.js.
Just like the `Dockerfile` used for building images, Docker Compose uses a `docker-compose.yaml` file to read service definitions from.
Go to the `notes-api` directory and create a new `docker-compose.yaml` file. Put the following code into the newly created file:
- The `services` block holds the definitions for each of the services or containers in the application. `db` and `api` are the two services that comprise this project.
- The `db` block defines a new service in the application and holds the information necessary to start the container. Every service requires either a pre-built image or a `Dockerfile` to run a container. For the `db` service we're using the official PostgreSQL image.
- Unlike the `db` service, a pre-built image for the `api` service doesn't exist. Hence, we use the `Dockerfile.dev` file.
- The `volumes` block defines any named volume needed by any of the services. At this point it only lists the `db-data` volume used by the `db` service.

Now that you've seen the whole `docker-compose.yaml` file, let's have a closer look at the individual services.
The definition code for the `db` service is as follows:
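Here is a minimal sketch of that definition; the container name and the credential values are placeholders rather than the project's exact ones:

```yaml
db:
  image: postgres:12
  container_name: notes-db-dev          # placeholder container name
  volumes:
    - db-data:/var/lib/postgresql/data  # named volume for persisting data
  environment:
    POSTGRES_DB: notesdb                # placeholder database name
    POSTGRES_PASSWORD: secret           # placeholder password
```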
- The `image` key holds the image repository and tag used for this container. We're using the `postgres:12` image for running the database container.
- The `container_name` key indicates the name of the container. By default, containers are named following the `<project directory name>_<service name>` syntax. You can override that using `container_name`.
- The `volumes` array holds the volume mappings for the service and supports named volumes, anonymous volumes, and bind mounts. The `<source>:<destination>` syntax is identical to what you've seen before.
- The `environment` map holds the values of the various environment variables needed for the service.
The definition code for the `api` service is as follows:
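A sketch of that definition; the image name, container name, paths, port, and credential values here are placeholders and assumptions, not necessarily the project's exact values:

```yaml
api:
  build:
    context: ./api                    # assumed context directory
    dockerfile: Dockerfile.dev
  image: notes-api:dev                # placeholder image name
  container_name: notes-api-dev       # placeholder container name
  environment:
    DB_HOST: db                       # resolves to the db service container
    DB_DATABASE: notesdb              # placeholder; matches POSTGRES_DB
    DB_USERNAME: postgres             # placeholder; matches POSTGRES_USER
    DB_PASSWORD: secret               # placeholder; matches POSTGRES_PASSWORD
  volumes:
    - /home/node/app/node_modules     # anonymous volume (assumed path)
    - ./api:/home/node/app            # bind mount (assumed paths)
  ports:
    - 3000:3000                       # assumed port
```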
- The `api` service doesn't come with a pre-built image. Instead, what it has is a build configuration. Under the `build` block we define the context and the name of the Dockerfile for building an image. You should have an understanding of context and Dockerfile by now, so I won't spend time explaining those.
- The `image` key holds the name of the image to be built. If not assigned, the image will be named following the `<project directory name>_<service name>` syntax.
- The `DB_HOST` variable demonstrates a feature of Compose. That is, you can refer to another service in the same application by using its name. So the `db` here will be replaced by the IP address of the `db` service container. The `DB_USERNAME` and `DB_PASSWORD` variables have to match up with `POSTGRES_USER` and `POSTGRES_PASSWORD` respectively from the `db` service definition.
- In the `volumes` map, you can see an anonymous volume and a bind mount described. The syntax is identical to what you've seen in previous sections.
- The `ports` map defines any port mapping. The `<host port>:<container port>` syntax is identical to the `--publish` option you used before.
The definition code for `volumes` is as follows:
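A sketch of the block; `db-data` here takes no extra options and therefore uses the default local driver:

```yaml
volumes:
  db-data:
```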
Volumes defined in this block are named following the `<project directory name>_<volume key>` syntax, and the key here is `db-data`. You can learn about the different options for volume configuration in the official docs.
Compose also creates a default network for the application, named following the `<project name>_default` syntax (where the project name is the project directory name, unless overridden using the `--project-name` switch or the `COMPOSE_PROJECT_NAME` environment variable). So, since this file doesn't include that information, you can look for a `notes-api_default` network after bringing up the composed project below.
The `up` command builds any missing images, creates containers, and starts them in one go. Before you run it, make sure you're in the directory where the `docker-compose.yaml` file is. This is very important for every `docker-compose` command you execute.
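With the file from this section, the full invocation looks like this:

```shell
docker-compose -f docker-compose.yaml up -d
```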
The `-d` option here functions the same as the one you've seen before. The `-f` option is only needed if the YAML file is not named `docker-compose.yaml`, but I've used it here for demonstration purposes.
Apart from the `up` command, there is the `start` command. The main difference between these two is that the `start` command doesn't create missing containers; it only starts existing containers. It's basically the same as the `container start` command.

Just like the `container ls` command, there is the `ps` command for listing only the containers defined in the YAML. Its output differs a bit from the `container ls` output, but it's useful when you have tons of containers running simultaneously.
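To try it, run the following from the project directory:

```shell
docker-compose ps
```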
Just like the `container exec` command, there is an `exec` command for `docker-compose`. The generic syntax for the command is as follows:
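It takes the service name from the YAML rather than a container name:

```shell
docker-compose exec <service name> <command>
```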
To run the `npm run db:migrate` command inside the `api` service, you can execute the following command:
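Putting the service name and the command together:

```shell
docker-compose exec api npm run db:migrate
```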
Unlike the `container exec` command, you don't need to pass the `-it` flag for interactive sessions; `docker-compose` does that automatically.
You can use the `logs` command to retrieve logs from a running service. The generic syntax for the command is as follows:
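Like `exec`, it takes the service name:

```shell
docker-compose logs <service name>
```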
To view the logs from the `api` service, execute the following command:
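With the `--follow` flag included, that is:

```shell
docker-compose logs --follow api
```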
Note the `--follow` option: any later log will show up instantly in the terminal as long as you don't exit by pressing the `ctrl + c` key combination or closing the window. The container will keep running even if you exit out of the log window.
The `down` command stops all running containers and removes them from the system. It also removes any networks:
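Run it from the project directory:

```shell
docker-compose down
```

If you want the named volumes gone as well, `docker-compose down` also accepts a `--volumes` flag.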
There is also the `stop` command, which functions identically to the `container stop` command. It stops all the containers for the application but keeps the containers themselves. These containers can later be started with the `start` command.
I won't go over the `Dockerfile.dev` files in this sub-section (except the one for the `nginx` service), as they are identical to some of the others you've already seen in previous sub-sections.
The code for this project lives in the `fullstack-notes-application` directory. Each directory inside the project root contains the code for one of the services along with the corresponding `Dockerfile.dev`. Before we get to the `docker-compose.yaml` file, let's look at a diagram of how the application is going to work:
Every incoming request first hits an NGINX router, which checks whether the requested endpoint has `/api` in it. If yes, the router will route the request to the back-end; if not, the router will route the request to the front-end.
The NGINX configuration files, including `/notes-api/nginx/production.conf`, live in the `/notes-api/nginx` directory. Code for the `/notes-api/nginx/Dockerfile.dev` is as follows:
All it does is copy a configuration file to `/etc/nginx/conf.d/default.conf` inside the container.
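A sketch of such a Dockerfile; the base image tag and the configuration file name are assumptions:

```dockerfile
FROM nginx:stable-alpine

# assumed development configuration file name
COPY ./development.conf /etc/nginx/conf.d/default.conf
```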
Now it's time to write the `docker-compose.yaml` file. Apart from the `api` and `db` services, there will be the `client` and `nginx` services. There will also be some network definitions that I'll get into shortly.
The code for the `networks` block is as follows:
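A sketch of the block; the network names `frontend` and `backend` are assumptions:

```yaml
networks:
  frontend:   # joined by the client and nginx services
  backend:    # joined by the api, db, and nginx services
```

Each service definition then lists the networks it joins under its own `networks` key.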
Each service attaches to a network through a `networks` block in its own service definition. This way, the `api` and `db` services will be attached to one network, and the `client` service will be attached to a separate network. The `nginx` service, however, will be attached to both networks so that it can act as the router between the front-end and back-end services.
The project also comes with a set of shell scripts and a `Makefile`. Explore them to see how you can run this project without the help of `docker-compose`, like you did in the previous section.