Docker with Mosh

These are the notes I took from the Docker course on Mosh's website.

Here's a link to the tutorial

Here are the project files

Getting Started

Containers share the kernel of the host operating system. The kernel manages memory and CPU.

To see the version of the docker client and server running

docker version

A Dockerfile contains all the instructions needed to create a Docker image.

An image is made up of:

  1. Parts of the OS that are required for the container (but not the full OS)
  2. A runtime environment (for example Node)
  3. Application files
  4. Libraries required for the app to run
  5. Environment variables

Once you have an image, you can create a container from it.

A container is a process, but it's a special process because it has its own file system.

Activity

  1. Create a directory named hello-docker
  2. Create an app.js file in the folder that simply does a console.log (see the example below)
  3. Run the app with node: node app.js
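
For reference, app.js can be a single line (the exact message is arbitrary):

console.log('Hello from Docker!');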

To run this simple app we need:

  1. An OS
  2. Node
  3. A copy of the app files (app.js)
  4. A way to launch the app: node app.js

Create a Dockerfile in the directory:

FROM node:alpine
COPY . /app
WORKDIR /app
CMD node app.js 
  1. The FROM command pulls a base image from the Docker repo that has Node installed on top of the Alpine Linux OS
  2. The COPY command copies all files from the current directory (.) into the /app folder of the image
  3. The WORKDIR command is like using cd to specify that the CMD command should execute in the /app folder
  4. The CMD command specifies the command to run when a container starts (here, node app.js)

To build the image:

docker build -t my-first-image .
  1. The -t stands for 'tag', which allows you to name (and optionally version) your image
  2. The dot specifies that the Dockerfile is in the current directory

The above code sample creates an image named my-first-image. If you also wanted to tag (version) the image, you could build the image like so (use a colon after the name):

docker build -t my-first-image:1.1 .
  1. The image name is my-first-image
  2. The tag (version) is 1.1. Some teams like to use version numbers, others like to use colorful names for the tag/version

To list the images you have:

docker image ls

This displays:

  1. The (tag) name of the image
  2. The image ID
  3. When it was created
  4. The size of the image

Now you can run this image on any computer that has docker installed.

To run the image (you can run this from any folder on the computer):

docker run my-first-image

Linux Terminal

To pull and run an Ubuntu image from the Docker repository:

docker run ubuntu

(you could just do docker pull ubuntu if you want to download it and not run it)

To show running containers:

docker ps

To show all containers (including ones that are not running):

docker ps -a

To run an Ubuntu container in interactive mode:

docker run -it ubuntu  

This will switch the current terminal to the container, so that you are running your commands in the container, rather than on the host machine.

Note that the command line will end with a #, rather than the usual $, which indicates that you are logged in as the root user.

Some notes on grep

grep hello file.txt        # to search for hello in file.txt
grep -i hello file.txt     # case-insensitive search 
grep -i hello file*.txt    # to search in files with a pattern
grep -i -r hello .         # to search in the current directory

Some notes on find

find               # to list all files and directories
find -type d       # to list directories only
find -type f       # to list files only
find -name "f*"    # to filter by name using a pattern

Building Images

You could create multiple containers from a single image and they will be isolated from one another, so you could add/remove files in one container and it won't affect the others.

Docker instructions (that you would put in a Dockerfile)

  1. FROM - to inherit from a base image
  2. WORKDIR - to set the directory from which all the following commands will be executed
  3. COPY - to copy files and dirs into the container
  4. ADD - to copy files and dirs (it has a few more features than COPY)
    1. You can copy a file by specifying a URL
    2. If you copy a .zip file, it will automatically extract it in the container
  5. RUN - to execute commands
  6. ENV - to set env vars in a container
  7. EXPOSE - to expose ports to the host machine
  8. USER - to specify the user that should run an application
  9. CMD - to specify a command to run when a container is started
  10. ENTRYPOINT - to specify a command to run when a container is started (unlike CMD, it is not overridden by arguments passed to docker run)

Be careful if you use a 'latest' version of an image, like so:

FROM node:latest

This would specify the newest version of the node base image. But it's better to use a specific version so that you don't experience breaking changes.

For containers that use a Linux OS, the smallest ones are based on the Alpine distro because it is very lightweight, so that's a good choice if you are messing around.

To get started with the project for this section

  1. Extract the section4-react app folder and open the project in VSCode
  2. I installed the Docker extension (from Microsoft) in VSCode - not sure if this is required or not
  3. Add a Dockerfile to the project folder and put this in it:
FROM node:16-alpine3.18

This image uses Node v16 running on Alpine 3.18, which no longer appears on the Docker repo (although pulling it still works). If you use a newer version of Node, your container will not work, so stick with this version.

Then run this command to build the image (it will download the base image first):

docker build -t react-app .

This will tag/name the image 'react-app'.

View all the images that you have downloaded:

docker image ls

Run the image in interactive mode (-it):

docker run -it react-app

This will start a container and put you in a Node terminal.

Press Ctrl+c (a few times) to get back to the host machine's terminal.

Run the image in interactive mode but use the 'shell' terminal (Alpine doesn't come with the 'bash' terminal):

docker run -it react-app sh

NOTE: If the container OS had bash (as ubuntu does) then you could replace sh with bash.

Now you can run shell commands. If you run ls you'll see that the container has all the common folders that you find on linux distros.

Copying files into a container

Update the Dockerfile to look like this:

FROM node:16-alpine3.18
WORKDIR /app
COPY . .

WORKDIR creates a folder named /app in the container (and makes it the working directory), and COPY . . copies all files from the current folder (on the host) into that working dir in the container.

Then rebuild the image:

docker build -t react-app .

Then run it again (using the 'shell' terminal):

docker run -it react-app sh

Now if you run ls you'll see your app folder with all the project files in it.

Excluding the node_modules folder

It's better not to copy the node_modules folder when you build an image. Instead you should exclude it from the image and run npm install in the container.

When using docker in the cloud, you often build containers on remote machines, so you don't want to copy large files over the network.

But doesn't the remote machine have to then download all the modules????

Create a file in the project folder named .dockerignore and put this in it:

node_modules/

Now update the Dockerfile to look like this:

FROM node:16-alpine3.18
WORKDIR /app
COPY . .
RUN npm install

Setting environment variables

You can add an environment variable to your container by adding ENV to your docker file:

ENV SOME_VAR=hello

Note: you could omit the = (just a space) also.

Update the Dockerfile like so:

FROM node:16-alpine3.18
WORKDIR /app
COPY . .
RUN npm install
ENV SOME_VAR=hello

Then build and run the container (and open a shell when you run it):

docker build -t react-app .
docker run -it react-app sh

Once the shell terminal opens, run this command to see all the environment variables in the container:

printenv
# OR
printenv SOME_VAR
# OR
echo $SOME_VAR

Exposing ports

FROM node:16-alpine3.18
WORKDIR /app
COPY . .
RUN npm install
ENV SOME_VAR=hello
EXPOSE 3000

The EXPOSE command doesn't actually do anything, other than document that the container will listen on port 3000. You still have to publish the port with -p when you run the container, as sketched below.
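
For example (a minimal sketch; mapping host port 3000 here is an arbitrary choice):

docker run -p 3000:3000 react-app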

Setting the user account that the app runs under

By default, Docker runs the application as the root user in the container. To run it as a non-root user, update the Dockerfile:

FROM node:16-alpine3.18
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
COPY . .
RUN npm install
ENV SOME_VAR=hello
EXPOSE 3000
CMD npm start

The RUN command creates a group and user named app in the container.

Then the USER command sets the user account that will be used for all the commands that follow.

The CMD command starts the react app (under the app account).

RUN vs CMD: Use RUN to run commands when building the image. Use CMD to specify the command to run when a container starts.

Two variations of CMD

# Shell form
CMD npm start

# Exec form
CMD ["npm", "start"]

The shell form runs the command inside a shell (/bin/sh -c), which means an extra shell process in the container. The exec form runs the executable directly, without a shell.

Apparently the exec form is more optimal, so you should use the exec form.

The ENTRYPOINT command

If you have multiple CMD commands at the end of your Dockerfile, all but the last are ignored. ENTRYPOINT is different: it is not overridden by arguments passed to docker run.

It's recommended that you use the 'exec' form of ENTRYPOINT.
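
Here's a sketch of how ENTRYPOINT and CMD combine (the npm commands are just illustrative): CMD supplies default arguments that docker run can override, while ENTRYPOINT stays fixed.

ENTRYPOINT ["npm"]
CMD ["start"]
# docker run react-app       -> runs "npm start"
# docker run react-app test  -> runs "npm test" (CMD overridden, ENTRYPOINT kept)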

Docker layers and optimizing your build times

Many of the commands in a Dockerfile will add files to the container:

# FROM adds files needed for Alpine OS and Node
FROM node:16-alpine3.18
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
# COPY adds the project files
COPY . .
# RUN npm install adds the node_modules folder
RUN npm install
ENV SOME_VAR=hello
EXPOSE 3000
CMD npm start

These are known as layers. Images are made up of several layers.

The Docker engine will cache these layers so that if a layer/command does not change from one build to the next, the layer will be pulled from Docker's cache.

Here is the output from running our build:

user@machine-name:~/Desktop/section4-react-app$ docker build -t react-app .
[+] Building 1.2s (9/9) FINISHED                                                                docker:default
 => [internal] load .dockerignore                                                                         0.0s
 => => transferring context: 53B                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                      0.0s
 => => transferring dockerfile: 99B                                                                       0.0s
 => [internal] load metadata for docker.io/library/node:16-alpine3.18                                     0.9s
 => [1/4] FROM docker.io/library/node:16-alpine3.18@sha256:9e38d3d4117da74a643f67041c83914480b335c3bd44d  0.0s
 => [internal] load build context                                                                         0.1s
 => => transferring context: 4.44kB                                                                       0.0s
 => CACHED [2/4] WORKDIR /app                                                                             0.0s
 => CACHED [3/4] COPY . .                                                                                 0.0s
 => CACHED [4/4] RUN npm install                                                                          0.0s
 => exporting to image                                                                                    0.0s
 => => exporting layers                                                                                   0.0s
 => => writing image sha256:b7da23088e46855316b4cb0eca574432fb2ec3ba2bbbbe163e8973d8342b88dd              0.0s
 => => naming to docker.io/library/react-app    

This shows how the layers are being built. Notice that some of the layers are being pulled from the cache.

You can run the docker history command followed by an image name to see more info about how the layers were created during a build:

docker history react-app

This is the output that is produced:

~/Desktop/section4-react-app$ docker history react-app
IMAGE          CREATED        CREATED BY                                      SIZE      
b7da23088e46   15 hours ago   RUN /bin/sh -c npm install # buildkit           361MB     
<missing>      15 hours ago   COPY . . # buildkit                             798kB     
<missing>      15 hours ago   WORKDIR /app                                    0B        
<missing>      4 weeks ago    /bin/sh -c #(nop)  CMD ["node"]                 0B        
<missing>      4 weeks ago    /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B        
<missing>      4 weeks ago    /bin/sh -c #(nop) COPY file:4d192565a7220e13…   388B      
<missing>      4 weeks ago    /bin/sh -c apk add --no-cache --virtual .bui…   7.76MB    
<missing>      4 weeks ago    /bin/sh -c #(nop)  ENV YARN_VERSION=1.22.19     0B        
<missing>      4 weeks ago    /bin/sh -c addgroup -g 1000 node     && addu…   121MB     
<missing>      4 weeks ago    /bin/sh -c #(nop)  ENV NODE_VERSION=20.10.0     0B        
<missing>      4 weeks ago    /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B        
<missing>      4 weeks ago    /bin/sh -c #(nop) ADD file:1f4eb46669b5b6275…   7.38MB

This shows you all the layers and how much disk space (size) each one added to the container.

If you make a change to a line in your Dockerfile, or if any of the files change when you use COPY, then Docker will not pull the layer from its cache.

Here is how we update the Dockerfile to optimize our build

FROM node:16-alpine3.18
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV SOME_VAR=hello
EXPOSE 3000
CMD ["npm",  "start"]

In this version the first COPY copies the package.json file and the package-lock.json file (note the wild card character).

Then it runs npm install

Then the second COPY copies the rest of the files in the project folder.

So, if you do not update package.json (and therefore the node_modules folder), then the Docker engine will pull these layers from its cache when you do a new build.

Likewise, if you do not make any changes to the other files in the project folder, then the COPY . . layer will be pulled from cache.

So, here's a general rule: set up your Dockerfile so that the instructions that don't change frequently are near the top of the Dockerfile (they will be cached), and the instructions that copy frequently changing files are at the bottom.

Removing images

If you run docker images in the terminal, you may see some that have no name or tag. These are image layers that are no longer associated with a tagged image. They are known as dangling images.

To get rid of dangling images, run this:

docker image prune

If you run docker images again, you might still see dangling images - stopped containers can still reference them, which is why you remove containers first.

To remove all stopped containers:

docker container prune

To clean up everything (I think) run docker container prune first, then run docker image prune.

But the images that you downloaded will remain.

If you want to remove a downloaded image:

docker image rm IMAGE NAME OR ID GOES HERE

If you use an ID, you only have to enter the first few characters, not the entire ID.

Tagging images

If you don't specify a tag for an image when you build it, the tag will default to latest.

You should always tag your images in production so that you can identify them.

Assume you build an image like so:

docker build -t my-first-image .
  1. The -t stands for 'tag', which allows you to name (and optionally version) your image
  2. The dot specifies that the Dockerfile is in the current directory

The above code sample creates an image named my-first-image. If you also wanted to tag (version) the image, you could build the image like so (use a colon after the name):

docker build -t my-first-image:1.1 .
  1. The image name is my-first-image
  2. The tag (version) is 1.1. Some teams like to use version numbers, others like to use colorful names for the tag/version

The above code example demonstrates how to tag an image when you build it. But you can also tag images after they've been built:

docker image tag image-name:latest image-name:some-tag

You could use the image ID instead of the name.

Remember that if a tag is not specified when the image is built, it will default to 'latest'. So the above command adds the tag 'some-tag' to the image.
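
For instance (a sketch using the image from this section; the 1.0 tag is arbitrary):

docker image tag react-app:latest react-app:1.0
docker images   # react-app now shows two tags pointing at the same IMAGE ID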

Working with containers

Starting containers

Some review of the commands we've seen so far:

# To view images that have been downloaded
	docker images

# To show all running containers
	docker ps

# To show all running and stopped containers
	docker ps -a  

# To run a container in 'detached mode' 
# (in the background so that you can continue to use the terminal for other things)
	docker run -d imagenameorid

Viewing logs

docker logs containerID

Add the -f option to 'follow' the logs as they get populated.

Publishing ports

docker run -p 80:3000 --name myContainer someImage

Here we are running a container and naming it myContainer. Port 80 on the host will map to port 3000 in the container.
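
To check the mapping, you could hit the host port from the host machine (assuming the app in the container listens on 3000):

curl http://localhost:80   # the request is forwarded to port 3000 in the container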

Executing commands in the container

If you have a container that's already running, you can execute a command in it like so:

docker exec -it containerNameOrID sh

This will open a shell in the container and then you can run commands in the container.

Stopping and starting containers

To stop a running container:

docker stop containerNameOrID

To start a stopped container:

docker start containerNameOrID

Note: docker run starts a NEW container while docker start starts a container that has been stopped.

Removing containers

You cannot remove a running container (unless you force it with -f); you must stop it first:

docker stop containerNameOrID
docker rm containerNameOrID

docker container prune will remove all stopped containers.

Persisting data with volumes

A container has its own file system which is isolated from other containers. If you remove a container you will delete its files. But you can use a volume to store data for a container on the host.

Here are the commands you can run with docker volume

  1. create
  2. inspect
  3. ls (list)
  4. prune
  5. rm (remove)

To create a new volume:

docker volume create myVolume

To inspect a volume:

docker volume inspect myVolume

Here's the output from the above command:

[
    {
        "CreatedAt": "2024-01-11T16:17:00-06:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/myVolume/_data",
        "Name": "myVolume",
        "Options": null,
        "Scope": "local"
    }
]

The Mountpoint property will show you where the volume is on the host.

The Driver property indicates that the volume is 'local' and therefore on the host machine. But you can put volumes in the cloud and have containers use them.

Now that you have a volume on the host you can connect it to a container like so:

docker run -d -p 4000:3000 -v myVolume:/app/data react-app
  1. The -d runs the container in 'detached mode', which leaves the terminal available for you to use for other things
  2. The -p is the port mapping
  3. The -v myVolume:/app/data maps the volume on the host to /app/data in the container. Note that if myVolume has not already been created, Docker will create it when you run this command.
  4. react-app is the name of the image we've been using.

The output will show you the ID of the container (since we didn't specify a name with --name).

Docker will create an /app/data folder in the container and map it to the volume, BUT the owner of the folder will be the ROOT user.

Now open a shell on the container with this command:

docker exec -it ID GOES HERE sh

If you try to run a command like this in the container, you'll get a permission denied error

echo data > data.txt

This is because the user who runs the app is app, but the data folder was created and owned by the root user (since docker created the data folder for us).

To fix this we need to update the Dockerfile:

FROM node:16-alpine3.18
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
RUN mkdir data
COPY package*.json ./
RUN npm install
COPY . .
ENV SOME_VAR=hello
EXPOSE 3000
CMD ["npm",  "start"]

Notice we've added the RUN mkdir data command. After we set the working dir to /app we create the data folder, and note that we are doing it as the app user because of the USER command.

Now you have to rebuild the image (exit the shell first, if you are still in it):

docker build -t react-app . 

Now run the container again (note that I changed the port, because the previous one is 'in use' by the previous container (???)):

docker run -d -p 5000:3000 -v myVolume:/app/data react-app

Now log back in to the container

docker exec -it ID GOES HERE sh

And you should be able to run these commands now:

cd data
echo hello > some.txt

SO APPARENTLY WHEN YOU LOG INTO THE CONTAINER, YOU DO SO AS THE USER THAT WAS SET IN THE Dockerfile (I think).

LIKEWISE, WHEN YOU LOG INTO THE CONTAINER, YOU AUTOMATICALLY START IN THE WORKDIR (I think).

Later in the course I learned this cool trick, you can log in as the root user like so:

docker exec -it -u root ID GOES HERE sh

ANYWAY - back to volumes - if you delete the container, the file that you just created will persist in the volume!

To see this in action, remove the container:

docker rm -f ID GOES HERE

Now create a new container from the same image like so:

docker run -d -p 5000:3000 -v myVolume:/app/data react-app

Now log back in to the container

docker exec -it ID GOES HERE sh

If you cd into the data, you should see your text file!

Copying files between the host and a container

This comes in handy if you want to view a log file on the container, you can copy it to the host and then analyze it.

docker cp IDGOESHERE:/app/data/somefile.txt .

This will copy somefile.txt from the container to the current directory on the host.

To copy from the host to the container:

docker cp ./somefile.txt IDGOESHERE:/app/data

Multi-container applications

To see if docker compose is installed:

docker-compose --version

BUT I THINK THIS (docker-compose) IS DEPRECATED, SO I DID THIS:

docker compose version

Before diving in, Mosh wants to clean some things up:

docker images   # SHOWS THE IMAGES
docker ps 		# SHOWS THE RUNNING CONTAINERS

You should always remove containers before their associated images.

You can remove multiple (running) containers like so:

docker container rm -f ID ID

You can pass in multiple container ids, remember that -f will force a running container to be removed.

Likewise, you can remove multiple images:

docker image rm ID ID ID

Our next project!

These are the contents of the project

  1. A back end node project that runs on port 3001 (uses MongoDB), docker runs a migration script to populate the db.
  2. A front end React project that talks to the back end
  3. A docker-compose.yml file

To run everything, just cd into the project folder and do this:

docker compose up

To view the app, go to localhost:3000

JSON and YAML formats

Here's a sample .yml file

---
name: Some string
price: 111
is_published: true
tags:
  - item1
  - item2
author:
  first_name: Bob
  last_name: Smith
  1. Uses --- to start the file
  2. No curly braces, indentation instead
  3. No commas after key/value pairs
  4. No quotes around key names
  5. No square brackets for arrays, use hyphens instead (compare with the JSON version below)
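
For comparison, here is the same data as JSON:

{
  "name": "Some string",
  "price": 111,
  "is_published": true,
  "tags": ["item1", "item2"],
  "author": { "first_name": "Bob", "last_name": "Smith" }
}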

Compose files

Create a file named docker-compose.yml

version: "3.8"
services:
	web:
		build: ./frontend
		ports:
			- 3000:3000
		environment:
			DB_URL: mongodb://db/vidly
	api:
		build: ./backend
		ports:
			- 3001:3001
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:
			- vidly:/data/db
volumes:
	vidly:
  1. You must use quotes around the version ("3.8") - otherwise YAML would parse it as a number
  2. You can define your own names for the services
  3. build specifies the location of the Dockerfile for the container
    1. Each service should have its own Dockerfile
  4. The db service does not have a Dockerfile because it pulls an image directly from Docker's repo
  5. Notice that the DB_URL value is referring to the db service. Each of the service names becomes the host name for its container.
  6. Apparently if you create a volume for a service, you must also declare its name (with no value) in the top-level volumes property

Building images

All the commands that are available when you run docker will also be available when you run docker compose.

Run this:

docker compose --help

Notes about some of the options:

  1. --no-cache prevents docker from pulling layers from cache. This comes in handy when things aren't working with your builds (see the examples below)
  2. --pull means that you will always pull the newest version of an image
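
For example (both flags go straight onto the build command):

docker compose build --no-cache   # rebuild every layer, ignoring the cache
docker compose build --pull       # always pull the newest base images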

To build the containers:

docker compose build

To see all the images:

docker images

This will produce output that looks something like this:

REPOSITORY                         TAG          IMAGE ID       CREATED         SIZE
docker-project-section6_frontend   latest       7bae6a469784   2 minutes ago   299MB
docker-project-section6_backend    latest       967a35e74b90   2 minutes ago   184MB
react-app                          latest       86bf8ba64996   24 hours ago    544MB
mongo                              4.0-xenial   fb1435e8841c   16 months ago   430MB

The first column defaults to your project folder name plus the service name. But I noticed that the mongo image is just 'mongo', maybe because we pulled it directly from Docker Hub.

Starting the app

If you have already built the containers:

docker compose up

You can build and start the containers like so:

docker compose up --build

To run the container in the background (detached mode):

docker compose up -d

To see the containers relevant to this project (run this from the project folder, I think):

docker compose ps

This is different than docker ps, which shows you all containers.

To stop an app (all of its containers):

docker compose down

Docker networking

When you run docker compose up, docker will create a network for your containers to communicate.

To view the network info:

docker network ls

You can log into a container (as root) and then ping the other containers in the app by their service name:

docker exec -it -u root ID GOES HERE sh

From the shell you could ping another container like so:

ping api

Note that this ping command would not work if you ran it from the host. To communicate from the host, you use the port mappings.

Docker comes with an embedded DNS server that can resolve the service names to IP addresses inside its network.

Viewing logs

To view all the logs from all containers in the app/project:

docker compose logs

To 'follow' the logs:

docker compose logs -f

To look at the logs for just one container in the project:

docker logs ID GOES HERE -f

Publishing changes

You can set up your docker-compose.yml file so that any changes you make to the code in your project will update the containers.

version: "3.8"
services:
	web:
		build: ./frontend
		ports:
			- 3000:3000
		environment:
			DB_URL: mongodb://db/vidly
	api:
		build: ./backend
		ports:
			- 3001:3001
		volumes:
			- ./backend:/app   <<<<<<<<<<<<<
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:
			- vidly:/data/db
volumes:
	vidly:

This version of the file includes a volume that maps the ./backend folder on the host to the /app folder in the container

Note that if the node_modules have not been installed on the host, then the container won't work.
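
Under that assumption, a quick fix is to install the dependencies on the host too, so the bind-mounted folder contains node_modules:

cd backend
npm install   # creates node_modules on the host, which the mapping makes visible at /app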

Migrating the database

Note this update to the docker-compose.yml file:

version: "3.8"
services:
	web:
		build: ./frontend
		ports:
			- 3000:3000
		environment:
			DB_URL: mongodb://db/vidly
	api:
		build: ./backend
		ports:
			- 3001:3001
		volumes:
			- ./backend:/app
		command: migrate-mongo up && npm start <<<<<<<<<<<<<   
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:
			- vidly:/data/db
volumes:
	vidly:

Note that this command will override any CMD that you have in the Dockerfile for the api (the backend folder).

The first command runs the database migration (it's using an npm package named 'migrate-mongo') and the second command starts the api app.

BUT THERE CAN BE A PROBLEM WITH THIS: If the db service has not yet started then the migrate command will fail.

So there is a hack to 'wait for' another container.

First add this file to the backend project folder, it's called wait-for (with no file extension):

#!/bin/sh

set -- "$@" -- "$TIMEOUT" "$QUIET" "$HOST" "$PORT" "$result"
TIMEOUT=15
QUIET=0

echoerr() {
  if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
}

usage() {
  exitcode="$1"
  cat << USAGE >&2
Usage:
  $cmdname host:port [-t timeout] [-- command args]
  -q | --quiet                        Do not output any status messages
  -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
  -- COMMAND ARGS                     Execute command with args after the test finishes
USAGE
  exit "$exitcode"
}

wait_for() {
 if ! command -v nc >/dev/null; then
    echoerr 'nc command is missing!'
    exit 1
  fi

  while :; do
    nc -z "$HOST" "$PORT" > /dev/null 2>&1
    
    result=$?
    if [ $result -eq 0 ] ; then
      if [ $# -gt 6 ] ; then
        for result in $(seq $(($# - 6))); do
          result=$1
          shift
          set -- "$@" "$result"
        done

        TIMEOUT=$2 QUIET=$3 HOST=$4 PORT=$5 result=$6
        shift 6
        exec "$@"
      fi
      exit 0
    fi

    if [ "$TIMEOUT" -le 0 ]; then
      break
    fi
    TIMEOUT=$((TIMEOUT - 1))

    sleep 1
  done
  echo "Operation timed out" >&2
  exit 1
}

while :; do
  case "$1" in
    *:* )
    HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
    PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
    shift 1
    ;;
    -q | --quiet)
    QUIET=1
    shift 1
    ;;
    -q-*)
    QUIET=0
    echoerr "Unknown option: $1"
    usage 1
    ;;
    -q*)
    QUIET=1
    result=$1
    shift 1
    set -- -"${result#-q}" "$@"
    ;;
    -t | --timeout)
    TIMEOUT="$2"
    shift 2
    ;;
    -t*)
    TIMEOUT="${1#-t}"
    shift 1
    ;;
    --timeout=*)
    TIMEOUT="${1#*=}"
    shift 1
    ;;
    --)
    shift
    break
    ;;
    --help)
    usage 0
    ;;
    -*)
    QUIET=0
    echoerr "Unknown option: $1"
    usage 1
    ;;
    *)
    QUIET=0
    echoerr "Unknown argument: $1"
    usage 1
    ;;
  esac
done

if ! [ "$TIMEOUT" -ge 0 ] 2>/dev/null; then
  echoerr "Error: invalid timeout '$TIMEOUT'"
  usage 3
fi

if [ "$HOST" = "" -o "$PORT" = "" ]; then
  echoerr "Error: you need to provide a host and port to test."
  usage 2
fi

wait_for "$@"

Then update the associated line in the docker-compose file to look like this:

command: ./wait-for db:27017 && migrate-mongo up && npm start

Note that you have to specify the service name and its port number (I believe the container port, not the host port).

Alternatively, you could create an entry point script to run that looks like this (docker-entrypoint.sh in the backend folder):

#!/bin/sh

echo "Waiting for MongoDB to start..."
./wait-for db:27017 

echo "Migrating the databse..."
npm run db:up 

echo "Starting the server..."
npm start 
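
One assumption on my part (the course files may already handle this): both scripts must be executable inside the container, so mark them on the host before building:

chmod +x backend/wait-for backend/docker-entrypoint.sh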

Then you could update the docker-compose file to run this script:

version: "3.8"
services:
	web:
		build: ./frontend
		ports:
			- 3000:3000
		environment:
			DB_URL: mongodb://db/vidly
	api:
		build: ./backend
		ports:
			- 3001:3001
		volumes:
			- ./backend:/app
		command: ./docker-entrypoint.sh <<<<<<<<<<<<<   
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:
			- vidly:/data/db
volumes:
	vidly:

Running tests

You could run tests in the containers, but I'm not interested in it at this time!

Deploying Applications

These are some options for using docker on cloud platforms:

  1. Digital Ocean (easiest)
  2. Google Cloud Platform
  3. Azure
  4. AWS

Use docker machine to deploy your containers to a production server.

You can download it from the docker github account.

Provisioning a host

Mosh is using Digital Ocean. Other platforms have a similar approach

You need to generate an access token on the Digital Ocean control panel

docker-machine create \
 --driver digitalocean \
 --digitalocean-access-token TOKEN-GOES-HERE \
 --engine-install-url "some url" \
 nameForYourMachineGoesHere

When it's done, you should be able to run this command to see your remote machines:

docker-machine ls

Connecting to the host

You can use this command to connect to your machine:

docker-machine ssh nameForYourMachineGoesHere

Defining the production configuration

Create a new file called docker-compose.prod.yml and paste the contents of the docker-compose file in it.

Then make the following change to the prod version:

version: "3.8"
services:
	web:
		build: ./frontend
		ports:
			- 80:3000          <<<<<<<<<<<<<
		environment:
			DB_URL: mongodb://db/vidly
		restart: unless-stopped   <<<<<<<<<<<
	api:
		build: ./backend
		ports:
			- 3001:3001
		volumes:				<<<<<<<<<<<<<
			- ./backend:/app
		command: ./docker-entrypoint.sh
		restart: unless-stopped   <<<<<<<<<<<    
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:				<<<<<<<<<<<<<
			- vidly:/data/db
		restart: unless-stopped   <<<<<<<<<<<
volumes:
	vidly:
  1. Change the port mapping for the web service host to 80
  2. Remove the volume mapping for the api service.
  3. Remove the volume mapping for the db service
  4. Add a restart setting for each service
  5. Do we need to remove the last volumes entry ??? (mosh did not)

Options for restart:

  1. no - the container will not restart on failure
  2. always - Docker will always restart the container, no matter what stopped it
  3. on-failure - the container will only restart on crashes
  4. unless-stopped - the container will always restart if it wasn't manually stopped (???)

Reducing the image sizes

For the react app, run npm run build

Then create a new docker file in the frontend folder named Dockerfile.prod

# step 1 - build phase
FROM node:14.16.0-alpine3.13 AS build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# step 2 - for production
FROM nginx:1.12-alpine
RUN addgroup app && adduser -S -G app app
USER app
COPY --from=build-stage /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "damon off;"] 
  1. Added AS build-stage to the first line, so that we can refer to it later in the file
  2. We are using nginx (alpine) for production
  3. We COPY the /app/build folder from the build phase to the doc root dir of the nginx server
  4. The ENTRYPOINT starts the nginx server

Now cd into the frontend folder and run this:

docker build -t someName -f Dockerfile.prod .
  1. The -f specifies the Dockerfile to use for the build
  2. The . sets the build context to the current directory (see the test run below)
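
To test the prod image locally (a sketch; host port 8080 is an arbitrary choice), map a host port to nginx's port 80:

docker run -d -p 8080:80 someName
# then browse to localhost:8080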

Now update the docker-compose.prod.yml file to use this Dockerfile:

version: "3.8"
services:
	web:
		build: 
			context: ./frontend 			<<<<<<<
			dockerfile: Dockerfile.prod 	<<<<<<<
		ports:
			- 80:3000          
		environment:
			DB_URL: mongodb://db/vidly
		restart: unless-stopped   
	api:
		build: ./backend
		ports:
			- 3001:3001
		volumes:				
			- ./backend:/app
		command: ./docker-entrypoint.sh
		restart: unless-stopped   
	db:
		image: mongo:4.0-xenial
		ports:
			- 27017:27017
		volumes:				
			- vidly:/data/db
		restart: unless-stopped   
volumes:
	vidly:
  1. The context specifies the folder to use as the build context (where the Dockerfile lives)
  2. The dockerfile specifies which Dockerfile to use

To build the images using this version of the compose file:

docker compose -f docker-compose.prod.yml build
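
Then start the app with the same -f flag (assuming the build above succeeded):

docker compose -f docker-compose.prod.yml up -d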

Lists the remote machines:

docker-machine ls

Shows the environment variables needed to connect your Docker client to a remote machine:

docker-machine env IDorNAME