Table of contents
  1. Optimizing Image Sizes
  2. Create a container from image
  3. Interact with docker container
  4. Manage Docker images
  5. Docker compose file use networks
    1. method 1:
    2. method 2:
  6. docker error: http: invalid Host header
  7. use docker command without sudo
  8. Use Docker to containerize Flask and Nginx
  9. How to use MongoDB docker container
  10. How to combine Flask, MongoDB and Docker

Optimizing Image Sizes

The following text comes from the book "Kubernetes: Up and Running", 2nd edition, by Brendan Burns, Joe Beda, and Kelsey Hightower (O'Reilly). Copyright 2019 Brendan Burns, Joe Beda, and Kelsey Hightower, 978-1-492-04653-0.

There are two common pitfalls when building images:

  • Create large image size that contains large files
    .
    └── layer A: contains a large file named 'BigFile'
        └── layer B: removes 'BigFile'
            └── layer C: builds on B by adding a static binary
    

    You might think that BigFile is no longer present in this image. After all, when you run the image, it is no longer accessible. But in fact it is still present in layer A, which means that whenever you push or pull the image, BigFile is still transmitted through the network, even if you can no longer access it.

  • Image caching and building
    .
    └── layer A: contains a base OS
        └── layer B: adds source code server.js
            └── layer C: installs the 'node' package
    
    versus:
      
    .
    └── layer A: contains a base OS
        └── layer B: installs the 'node' package
            └── layer C: adds source code server.js
      
    

    It seems obvious that both of these images will behave identically, and indeed the first time they are pulled they do. However, consider what happens when server.js changes. In one case, it is only the change that needs to be pulled or pushed, but in the other case, both server.js and the layer providing the node package need to be pulled and pushed, since the node layer is dependent on the server.js layer. In general, you want to order your layers from least likely to change to most likely to change in order to optimize the image size for pushing and pulling. This is why, in Example 2-4, we copy the package*.json files and install dependencies before copying the rest of the program files. A developer is going to update and change the program files much more often than the dependencies.
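Both pitfalls above can be sketched as Dockerfiles. This is only an illustration: the file names, URLs, and base images are hypothetical, not from the book.

```dockerfile
# Pitfall 1: deleting in a later layer does not shrink the image.
# BAD: big.tar.gz persists in the first RUN's layer even after the rm
FROM ubuntu:22.04
RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz
RUN rm /tmp/big.tar.gz
# FIX: download and delete inside a single RUN so the file never lands in any layer:
# RUN curl -fsSL -o /tmp/big.tar.gz https://example.com/big.tar.gz && rm /tmp/big.tar.gz

# Pitfall 2: order layers from least likely to change to most likely to change.
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./     # dependency manifest changes rarely
RUN npm install           # this layer stays cached as long as package*.json is unchanged
COPY . .                  # server.js changes often; only this layer is rebuilt
CMD ["node", "server.js"]
```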

Create a container from image

docker run -d --name <container-name> --publish 3000:3000 <image-name or image-id>

This command starts the kuard container and maps port 3000 on your local machine to 3000 in the container. The --publish option can be shortened to -p. This forwarding is necessary because each container gets its own IP address, so listening on localhost inside the container doesn't cause you to listen on your machine. Without the port forwarding, connections will be inaccessible to your machine. The -d option specifies that this should run in the background (daemon), while --name kuard gives the container a friendly name.
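As a concrete instance, assuming the kuard demo image from the book (these commands need a running Docker daemon):

```shell
docker run -d --name kuard --publish 3000:3000 gcr.io/kuar-demo/kuard-amd64:blue
docker ps --filter name=kuard    # confirm the container is up
curl http://localhost:3000       # reaches kuard through the forwarded port
docker stop kuard && docker rm kuard   # clean up
```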

Interact with docker container

  • Option 1: interact from image directly
    docker run --rm -it --entrypoint bash <image-name or image-id>
    

    Here --rm deletes the container and its associated resources after you finish using it. -it lets you interact with the container. --entrypoint bash uses bash as the entry command instead of the command you set in the Dockerfile.

  • Option 2: interact from container
    docker exec -it <container-name or container-id> bash
    

    Here you have already created a container based on an image; then use exec with -it to interact with the running container.

Manage Docker images

  • list unused docker images: docker images -f "dangling=true" -q
  • Remove unused docker images using one of the following methods:
    • docker images -f "dangling=true" -q | xargs -d '\n' -I{} docker rmi {}
    • docker rmi $(docker images --filter dangling=true -q )
    • docker image prune

Docker compose file use networks

Sometimes the subnet of a Docker network conflicts with the subnet of the current server or an AWS server, so you need to change it. There are two solutions:

  1. Create a network first, then add it to the docker-compose file as an external network
  2. Create a new network directly in the docker-compose file

method 1:

Create a new network using docker network create -d bridge --subnet=192.168.0.0/16 data-management

then write the docker-compose.yml file as follows:

version : '3'

services:
    data_management_backend:
        container_name : data_management_backend
        restart: always

        build: ./backend
        tty: true
        ports:
            - "3103:3003" # thus you can split the production and test environment
        command: python run.py
        volumes:
            - /absolut-path/data-management/backend/temp:/absolut-path/data-management/backend/temp
        networks:
            - data-management

    data_management_frontend:
        container_name: data_management_frontend
        restart: always
        build: ./frontend
        ports:
            - "3104:3004"
        command: npm run dev
        depends_on:
            - data_management_backend
        networks:
            - data-management

networks:
    data-management:
        external: true

method 2:

version : '3'

services:
    data_management_backend:
        container_name : data_management_backend
        restart: always

        build: ./backend
        tty: true
        ports:
            - "3103:3003" # thus you can split the production and test environment
        command: python run.py
        volumes:
            - /absolut-path/data-management/backend/temp:/absolut-path/data-management/backend/temp
        networks:
            - data-management

    data_management_frontend:
        container_name: data_management_frontend
        restart: always
        build: ./frontend
        ports:
            - "3104:3004"
        command: npm run dev
        depends_on:
            - data_management_backend
        networks:
            - data-management

networks:
    data-management:
        ipam:
            config:
                - subnet: 172.177.0.0/16
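To confirm which subnet was actually assigned, you can inspect the network after starting the stack (these commands need a running Docker daemon; Compose prefixes the network name with the project name, assumed here to be the directory name):

```shell
docker-compose up -d
docker network ls          # find the generated network name
docker network inspect <project-name>_data-management   # shows the 172.177.0.0/16 subnet
```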

docker error: http: invalid Host header

This error comes from Docker version 20.10.24. We need to upgrade Docker to the newest version.

sudo snap refresh docker --channel=latest/edge

Please remember to change the /var/run/docker.sock file permissions afterwards.

use docker command without sudo

After you have installed Docker successfully, do the following steps:

  1. sudo groupadd docker
  2. sudo usermod -aG docker $USER
  3. if you run docker ps and it gives you an error like below:

    $ docker ps
    Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied

  4. then you need to change the permissions of /var/run/docker.sock
  5. ls -l /var/run/docker.sock
    srw-rw---- 1 root root 0 Jan 29 19:00 /var/run/docker.sock
  6. sudo chmod 666 /var/run/docker.sock
  7. ls -l /var/run/docker.sock
    srw-rw-rw- 1 root root 0 Jan 29 19:00 /var/run/docker.sock

You see that the srw-rw---- changed to srw-rw-rw-

  8. now docker can run correctly. (Note that chmod 666 makes the socket world-writable; alternatively, log out and back in, or run newgrp docker, so that the group change from step 2 takes effect.)

Use Docker to containerize Flask and Nginx

Here’s the tree structure of the project

.
├── docker-compose.yml
├── flask-app
│   ├── Dockerfile
│   ├── home
│   ├── __init__.py
│   ├── __pycache__
│   ├── requirements.txt
│   ├── run.py
│   ├── static
│   └── templates
└── nginx
    ├── default.conf
    └── Dockerfile

One important thing for Flask is that you should use python run.py in the docker-compose.yml instead of 'flask run --port 5002', because flask run binds to 127.0.0.1 by default, so the app would only be reachable from inside the container.

Here’s the run.py

from flask import Flask
import pathlib
import sys,os

root_path = pathlib.Path(__file__).resolve().parent
home_path = root_path/'home'
sys.path.insert(0, str(home_path))
print(f'root_path is {root_path}')
from routes import blueprint

app = Flask(__name__)

app.register_blueprint(blueprint)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True, port=5002)

Here’s the Dockerfile in the flask-app

FROM python:3.8

WORKDIR /usr/src/app

ENV FLASK_ENV=development
ENV FLASK_APP=run.py

COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

Here’s the docker-compose.yml file

version : '3'

services:
    flask_app:
        container_name: flask_app
        restart: always

        build: ./flask-app
        tty: true
        # volumes:
        ports:
            - "5002:5002"
        command:  python run.py # this one gives you the server's address and localhost address
        # command: flask run --port 5002 # this one is not working: it gives you the address of localhost, not the server's address

    flask_nginx:
        container_name: flask_nginx
        restart: always
        build: ./nginx
        ports:
            - "8085:8085"
        depends_on:
            - flask_app
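The contents of nginx/default.conf are not listed above. Here is a minimal sketch that would match this compose file, assuming nginx should proxy to the flask_app service on the ports used above:

```nginx
server {
    listen 8085;

    location / {
        # 'flask_app' resolves through Docker's internal DNS to the Flask container
        proxy_pass http://flask_app:5002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```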

How to use MongoDB docker container
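A minimal sketch for running MongoDB in a container, assuming the official mongo image and a named volume for persistence (these commands need a running Docker daemon):

```shell
# start MongoDB and persist its data directory in a named volume
docker run -d --name mongodb \
    -p 27017:27017 \
    -v mongo-data:/data/db \
    mongo:6

# open a shell inside the container to verify it is serving
docker exec -it mongodb mongosh
```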

How to combine Flask, MongoDB and Docker

  1. Create the Flask app and use pymongo to access the MongoDB data. Here you need to pay attention to the MongoDB host address (see step 5).
  2. Create a Dockerfile in the Flask app folder
  3. Create a Dockerfile for MongoDB
  4. Create a docker-compose.yml file to combine the Flask app and MongoDB: persist the MongoDB data with a Docker volume, e.g. docker-data:/data/db, which mounts the MongoDB data at /data/db
  5. Check the MongoDB host IP: docker inspect mongodb-docker-container-name and get the IPAddress from the Networks section. Use this IP as the MongoDB host address. If the IPAddress does not work, use the Gateway address, or just omit the host parameter in MongoClient.
  6. docker-compose up --build -d
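The steps above can be sketched in one docker-compose.yml (service and image names are assumptions; note that within a compose network the service name, e.g. mongodb, can also be used directly as the MongoClient host):

```yaml
version : '3'

services:
    flask_app:
        build: ./flask-app
        ports:
            - "5002:5002"
        command: python run.py
        depends_on:
            - mongodb

    mongodb:
        image: mongo:6
        volumes:
            - docker-data:/data/db   # persist the MongoDB data

volumes:
    docker-data:
```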

Sometimes you have already created a MongoDB container and you need to use it in other projects. In that case you need to use networks to connect your project with the MongoDB container.

  1. check the network that the MongoDB container is using: docker inspect mongodb_docker_name, find the Networks keyword in the output and extract the network name, e.g. web-net
  2. in your project's docker-compose.yml file, add the network to each container as shown below:

version : '3'

services:
    myWeb_backend:
        container_name : myWeb_backend
        restart: always
        build: ./backend
        networks:
            - web-net
        tty: true
        ports:
            - "3103:3003"
        command: python run.py

    myWeb_frontend:
        container_name: myWeb_frontend
        restart: always
        build: ./frontend
        networks:
            - web-net
        ports:
            - "3104:3104"
        command: npm run docker
        depends_on:
            - myWeb_backend

networks:
    web-net:
        external: true

  3. Here we use Flask as the backend and React as the frontend and add the external network. You must add the following lines to make the external network usable:

    networks:
        web-net:
            external: true
  4. At the backend, we expose the Flask port as 3103:3003. Inside Docker, Flask uses port 3003, but it is exposed to the public as 3103. At the frontend, we use 3104:3104 as the ports, so it uses 3104 inside Docker and also 3104 to the public. When you configure vite, pay attention to the port the backend is actually using: right now it is 3103, not 3003.
  5. vite.config.js file as shown below:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// for bootstrap
import * as path from 'path';

// https://vitejs.dev/config/
export default defineConfig({
    // add these so you can use material packages
    optimizeDeps: {
        include: ['@mui/material/Tooltip', '@emotion/styled'],
    },
    plugins: [react()],
    server: {
        host: true,
        // port: 3004, // local
        port: 3104, // docker
        proxy: {
            '/api': {
                // target: 'http://server_ip:3003', // local
                target: 'http://server_ip:3103', // docker
                changeOrigin: true,
                secure: false,
                ws: true,
                rewrite: (path) => path.replace(/^\/api/, ""),
            }
        }
    },
})

  6. Uncomment the docker lines to use the docker-compose command, or uncomment the local lines to run it locally.