What is a Dockerfile?
A Dockerfile is a list of instructions and rules that the Docker daemon follows to build a container image.
(In case you don't know about the Docker daemon, containers, images and the basics of Docker, check out my earlier blog on Docker for complete beginners.)
Dockerfile Instructions
Dockerfile instructions are keywords, conventionally written in upper case, that each perform a specific action on your image.
For comments, "#" is used; anything after it is ignored by Docker while performing actions.
List of Dockerfile Instructions
FROM
FROM <Image_Name>
It is the first line of your Dockerfile and declares the base image <Image_Name> that our image is built on. Say, for example, we have a JavaScript application with Node.js in the backend running in the container; to run our Node application we can base our image directly on the official node image, instead of basing it on a bare Linux image where we would have to download Node.js manually ourselves first.
In case we don't have the base image locally, we can download it from DockerHub by writing the following in the terminal:
docker pull node
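As a minimal sketch, the first line of our Dockerfile would then look like this (node:13-alpine is the example tag used later in this post):
FROM node:13-alpine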
ENV
ENV MONGO_DB_USERNAME=admin MONGO_DB_PWD=password
ENV defines environment variables inside the image. It is often better to define environment variables in docker-compose instead, as it is easier to change them there when something gets changed or updated, rather than rebuilding the whole image again.
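For comparison, a minimal sketch of that docker-compose alternative (the service name my-app and the file name docker-compose.yaml are just assumptions for illustration):
# docker-compose.yaml (hypothetical service name)
services:
  my-app:
    image: my-app
    environment:
      - MONGO_DB_USERNAME=admin
      - MONGO_DB_PWD=password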
RUN
RUN mkdir -p /home/app
RUN can run any kind of Linux terminal command. Here, a new directory is created using the mkdir command; it is created inside the Docker image being built, not on the localhost or in the laptop's memory we are programming on at the moment.
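RUN is not limited to mkdir; as an illustrative sketch (assuming an Alpine-based image like node:13-alpine, with curl purely as an example package), several shell commands can even be chained in a single instruction:
RUN apk add --no-cache curl && curl --version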
COPY
COPY . /home/app
Unlike the RUN instruction, where a command such as RUN cp would execute inside the container and not on my localhost, COPY works from the localhost: it copies files or data present on the localhost (the laptop's memory) into the image, here from the current directory (.) into /home/app.
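If we only want specific files rather than the whole directory, a sketch could look like this (package.json is an assumption for a Node project; server.js is the file the CMD below runs):
COPY package.json server.js /home/app/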
CMD
CMD ["node", "server.js"]
CMD sets the default command that runs when a container is started from our image; here it starts our application with the node runtime that the FROM instruction pulled in earlier.
Here, ["node", "server.js"] is equivalent to running node server.js in the terminal.
We are able to do this because node comes pre-installed in the base image.
The listed Dockerfile instructions are written together in a code editor like VS Code, for example like this:
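A minimal sketch of the complete file, assembled from the instructions above (the node:13-alpine tag and the credentials are the example values used in this post):
FROM node:13-alpine

ENV MONGO_DB_USERNAME=admin MONGO_DB_PWD=password

RUN mkdir -p /home/app

COPY . /home/app

CMD ["node", "server.js"]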
So in the Dockerfile we can also mention a specific version (tag) of the base image. In the example we have been using node:13-alpine, but if, for example, we take the newer 20-alpine3.17, we can mention it in the same way.
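Only the FROM line changes:
FROM node:20-alpine3.17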
If we want to see which new versions are available, we can look them up on DockerHub under the tags of the image we have chosen as the base image.
Now, when we click on the selected image version, we can also see the Dockerfile behind that version.
As mentioned earlier, every image is built on a base image, and we can see that here as well: version 20-alpine3.17 is itself based on the alpine3.17 image, and everything else follows in the same manner discussed earlier in the Dockerfile instructions section.
After creating the Dockerfile, save the file with the exact name Dockerfile (with a capital 'D' at the start of the name and no extension).
Building Image out of Dockerfile
To build an image out of the Dockerfile:
Open up a terminal.
Navigate to the path of the directory on the system where the Dockerfile and the project files are saved.
Give our image a name and build it in the terminal with:
docker build -t <Image_name> <Path-to-Dockerfile>
<Path-to-Dockerfile> is the path to the Dockerfile which we tell Docker to use. If the Dockerfile is present in the same directory from which we run the command, then a "." at the end represents the current directory:
docker build -t <Image_name> .
The docker build command takes the path to the build context as an argument. The build context defines which paths you can reference within your Dockerfile; paths outside the build context will be invisible to Dockerfile instructions such as COPY. It's most common to set the build context to ".", as in this example, to refer to your working directory.
Ok, so in various situations the directory we point the command at may contain multiple Dockerfiles, which might create errors, so to get around this problem we can use:
$ docker build -f dockerfiles/app.dockerfile -t demo-image:latest .
where -f is a flag used to point to the exact path of the Dockerfile: here it tells Docker to use app.dockerfile inside the dockerfiles folder of the current directory, while the -t flag tags the resulting image as demo-image:latest.
Finally, when our image is generated we can view it by writing:
docker images
and to get the list of all containers currently running:
docker ps
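To actually see a container listed there, we first have to start one from our image; a minimal sketch, assuming the image is tagged demo-image and the Node app listens on port 3000:
docker run -d -p 3000:3000 demo-image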
Precautions and Methods to Solve Errors
Whenever you adjust the Dockerfile, you MUST rebuild the image!!
It is recommended to rebuild the image whenever you change or adjust your Dockerfile; otherwise, containers will keep using the old image and it may result in errors.
When you are done adjusting the Dockerfile and have saved the adjusted file, you may wish to delete the Docker image you created earlier. But before deleting the Docker image, you may need to delete its containers first with the following commands. First, find the container that uses the image:
docker ps -a | grep <Image-ID>
then delete the Docker container:
docker rm <container-ID>
Then we can finally delete the Docker image:
docker rmi <image-ID>
Also, we can enter a running container and get a shell inside it by specifying its Container_ID in the command:
docker exec -it <Container_ID> /bin/bash
This gives us access to the container's filesystem (including its binary files), and if we look at the container's environment we will see the environment variables we referred to at the start available inside it.
Sometimes bash isn't installed in a container, so in that condition we can try running sh instead.
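A sketch of that fallback, using the same command with sh (which minimal images such as Alpine-based ones usually ship):
docker exec -it <Container_ID> /bin/sh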
Sometimes the latest version of a base image brings changes which may cause errors in our code, so instead specify a fixed version of the image, like node:16 (for example), to use as the base image while creating a Dockerfile.
Use only safe and trusted base images, in order to protect your containers from malware and other security issues.
Use HEALTHCHECK to check container health by adding a HEALTHCHECK instruction to your Dockerfile.
HEALTHCHECK --timeout=3s CMD curl -f http://localhost || exit 1
Orchestrators like Kubernetes and Docker Swarm can use this information to automatically restart problematic containers. The HEALTH of your containers is displayed when you run the docker ps command:
$ docker ps
CONTAINER ID   IMAGE               COMMAND              CREATED       STATUS
335889ed4698   demo-image:latest   "httpd-foreground"   2 hours ago   Up 2 hours (healthy)
Don't bake important secrets like passwords and API keys into your images; if these images reach the hands of anyone else, they can harm your system and architecture, and things might get pretty ugly and out of hand quickly.
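A sketch of the safer alternative is to pass such values only at run time (the variable names are the placeholders from earlier; the value is obviously made up):
docker run -d -e MONGO_DB_USERNAME=admin -e MONGO_DB_PWD=supersecret demo-image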
Manage your images by attaching arbitrary metadata to them using the Dockerfile LABEL instruction; labelling them with the most relevant data possible makes working in a team easier and faster.
LABEL com.example.project=api
LABEL com.example.team=backend
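The labels can then be read back from the built image, for example (a sketch, assuming the image is tagged demo-image:latest):
docker image inspect --format '{{ json .Config.Labels }}' demo-image:latest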
Images can grow to occupy huge chunks of storage, so to avoid losing speed and efficiency, try to keep your images small in size.
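One simple way to help with this (a sketch, not something from the original example) is a .dockerignore file next to the Dockerfile, so that COPY . does not drag unnecessary files into the image:
# .dockerignore
node_modules
.git
*.log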
Docker defaults to running container processes as root. This is problematic because the root in the container is the same as the root on your host. A malicious process that escapes the container’s isolation could run arbitrary commands on your Docker host.
You can mitigate this risk by including the USER instruction in your Dockerfile. This sets the user and group that your container will run as. It's good practice to assign a non-root user in all of your Dockerfiles:
# set the user
USER demo-app

# set the user with a UID
USER 1000

# set the user and group
USER demo-app:demo-group
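Note that a named user has to exist in the image before USER can switch to it; a sketch of creating one on an Alpine-based image (demo-app and demo-group are the placeholder names above):
RUN addgroup -S demo-group && adduser -S demo-app -G demo-group
USER demo-app:demo-group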
What happens next??
After we are done creating the Dockerfile, Jenkins comes into use: it takes the Dockerfile along with the code and creates an image based on the instructions provided in the Dockerfile.
Usually we work on these projects with a team, so in order to make the image available for everyone, it is then pushed to a Docker repository so that people can download it and make changes locally, or it is deployed to a development server so that the whole team can get access to it.
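A sketch of that push step, assuming a hypothetical registry at registry.example.com:
docker tag demo-image:latest registry.example.com/demo-image:latest
docker push registry.example.com/demo-image:latest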
Thanks for reading my blog; I hope to keep up the sharing-in-public motto and share my learnings 😄😄.
If you like my article then please react to it, and connect with me on Twitter if you are also a tech enthusiast. I would love to collaborate with people and share tech experiences 😄😄.
My Twitter Profile: Aryan_2407
Anyway, if you want any more knowledge about Docker or Devops and how things work, keep an eye on my blogging channel or visit