Dockerfiles vs. Buildpacks vs. Jib

by Simon Holzmann | April 22, 2024 | English, Tools & Frameworks


Containerization technology has become a cornerstone of modern software development, but the tools and methods you choose for packaging your applications can impact your development workflow, deployment speed, and operational efficiency.

In this blog post, we will look at three popular methods for building container images: Dockerfiles, Buildpacks and Jib.

The first part of this blog post will provide a basic understanding of the tools by briefly introducing them and giving an overview of how to use them.

After the introduction, we will continue with some important considerations regarding continuous delivery in combination with container builders.

Finally, we will draw a short conclusion about what we have explored and offer a piece of advice.

Introduction to Dockerfiles, Buildpacks and Jib 

Before we dive into the details, let’s briefly introduce the different methods. In this section, we will look at how straightforward it is to get started building images with each of the containerization methods.

Dockerfiles 

Writing Dockerfiles is the most traditional and widely used method for creating Docker images. It’s a text document which contains all the commands for building a Docker image. Using Dockerfiles gives the developer fine-grained control over the image being built, including the base image, configurations, dependencies and the application code itself.  

Here is a very basic example of a Dockerfile:

FROM amazoncorretto:21
# Run from /app so the relative Jar path in the ENTRYPOINT resolves
WORKDIR /app
COPY ./build/libs/my-java-app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

In this example we use an amazoncorretto base image, which contains Java to run our application. We set the working directory, copy a precompiled Java application into the Docker image, and define an entrypoint that executes our application when the Docker container starts. Keep in mind that this is just a trivial example of what a Dockerfile may look like; in a real-world scenario, a Dockerfile often gets much more advanced and complicated.
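To turn this Dockerfile into a runnable image, the standard Docker CLI can be used (the image name is arbitrary):

docker build -t my-java-app .
docker run my-java-app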

Dockerfiles have a steep learning curve for beginners but offer precise control over the resulting container image. Writing effective Dockerfiles requires profound knowledge of Docker commands and best practices, as they can become quite long and complex.

There are many practices that can be applied when writing Dockerfiles, such as multi-stage builds, running the container process as a non-root user, or building highly optimized micro containers with a minimalistic footprint; a small sketch of these techniques follows below. Discussing the general functionality of Docker and its usage for building container images in different scenarios would be too much for this article, and since there is plenty of common knowledge available on the web, we simply refer to the official documentation of best practices for writing Dockerfiles.
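As an illustration, here is a minimal sketch of a multi-stage build that also drops root privileges. It assumes a Gradle-based Spring Boot project; the image tags, Jar name and project layout are assumptions for illustration, not part of the example above:

# Build stage: compile the application inside a container (Gradle project layout assumed)
FROM gradle:jdk21 AS build
WORKDIR /home/app
COPY . .
RUN gradle bootJar --no-daemon

# Runtime stage: only the finished Jar ends up in the final image
FROM amazoncorretto:21
WORKDIR /app
COPY --from=build /home/app/build/libs/my-java-app.jar app.jar
# Run the process as the unprivileged 'nobody' user instead of root
USER nobody
ENTRYPOINT ["java", "-jar", "app.jar"]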

Buildpacks 

Buildpacks represent an alternative approach that facilitates the process of turning code into a runnable container image, providing a higher level of abstraction compared to Dockerfiles.

Originating from Heroku and later adopted by the Cloud Native Computing Foundation (CNCF), Buildpacks automatically detect your application’s programming language and framework, then build a container image from it without the need for a Dockerfile. This abstraction aims to simplify the developer’s job, letting them focus on their application code rather than containerization specifics. 

Buildpacks are particularly user-friendly for developers not familiar with containerization details. They are less flexible than Dockerfiles due to their level of abstraction, but still offer some customization through configuration. Compared to other container builders, Buildpacks offer a much bigger range of features and claim to give balanced control to developers and operators, to ensure security and compliance requirements, and to perform upgrades with minimal effort and intervention.

There are several implementations of the Cloud Native Buildpacks specification which can be used to build a container image:

  • Cloud Foundry Buildpacks (originally developed by Heroku)
  • Paketo Buildpacks (open-source Buildpacks implementing the CNCF specification)
  • Google Cloud Buildpacks (specifically designed for use with Google Cloud services)

A central part of the specification is the “Platform”, for which several implementations exist; the best known is the pack CLI tool. Another one is the Kubernetes-native implementation kpack.

The Buildpack platform makes use of a “Builder” image, which can be swapped out if the platform’s default Builder does not fit the user’s needs. The Builder may pull referenced images needed to run the build process, and together those images define the “Buildpacks”. The platform is then responsible for delegating the build to the different Buildpacks, and the result is stored in a Docker image.

For the actual execution, Buildpacks work in two phases. The first is the “detect” phase, which decides, based on the application code, which Buildpacks take part in the build. The second is the “build” phase, which executes the actual work of the build process. The results of the participating Buildpacks are baked into a Docker image, layer by layer.
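To make the two phases tangible, here is a toy sketch of the two executables a shell-based Buildpack provides according to the Cloud Native Buildpacks specification. The file names come from the specification; the detection logic and the layer name are made up, and details such as the layer metadata format depend on the Buildpack API version:

# bin/detect — "detect" phase: exit code 0 means this Buildpack participates
#!/usr/bin/env bash
[ -f "pom.xml" ] || [ -f "build.gradle" ]

# bin/build — "build" phase: contribute a layer to the resulting image
#!/usr/bin/env bash
layers_dir="$1"
mkdir -p "${layers_dir}/hello-layer"
echo "Hello from a custom Buildpack" > "${layers_dir}/hello-layer/greeting.txt"
printf '[types]\nlaunch = true\n' > "${layers_dir}/hello-layer.toml"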

The different Buildpacks can be configured via environment variables and offer many different features such as advanced caching or building a minimal app image. Further information and details can be found in the documentation. 

The following example shows the Paketo Buildpacks being used to build a container image:

pack build my-app --path . --builder paketobuildpacks/builder:base

The pack CLI tool will automatically detect the application type, compile it, and package it into a Docker image. This process may initially take a few minutes because the pack CLI tool downloads the specific builder images needed for the build execution, depending on the application code and its dependencies.

The result is a ready-to-run Docker image; building a Docker image cannot get much easier! For Java applications there is no need to install Java or any build tools locally. With newer Spring Boot versions, integrations like the Spring Boot Maven plugin can be used to execute the same process, e.g. by running the Maven goal spring-boot:build-image, as shown below.
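For example, with the Maven wrapper in a Spring Boot project (the image name property is optional, and its value here is an assumption):

./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-app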

Once the build process is complete, the application can be run by using Docker: 

docker run -p 8080:8080 my-app

This example shows how easy it is to use Buildpacks. 

The whole experience feels like using a smart kitchen appliance that automatically prepares a dish once you’ve selected the recipe and added the necessary ingredients; it’s a bit of magic!

When looking deeper into the resulting Docker image, e.g. for a basic Java Spring Boot application, a few things can be noticed that are often missing from Docker images built manually with Dockerfiles. The Paketo Java Buildpack, for example, makes use of the BellSoft Liberica Buildpack, which installs the JDK in the build container but only contributes the JRE to the application image. It applies optimized memory settings for running Java applications in a container with enforced memory limits. Additionally, the Java process is configured to run as a non-root user in the container.

The image, however, is not particularly small, with a total size of 274 MB: the base image layer accounts for 63 MB, the JVM layer for 158 MB, and our application layer for 20 MB. The rest of the image size results from configurations and other small helpers contributed by the Buildpacks.
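As a side note, the individual layer sizes can be inspected with the standard Docker CLI:

docker history my-app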

We only used the default behavior of the Paketo Buildpacks in our example, but the documentation lists many configuration options for further optimization, e.g. installing a minimal JRE built with jlink.
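As a rough sketch, enabling the jlink optimization should look something like the following, assuming the Paketo Liberica Buildpack’s BP_JVM_JLINK_ENABLED flag (check the Paketo documentation for the current option names):

pack build my-app --path . --builder paketobuildpacks/builder:base --env BP_JVM_JLINK_ENABLED=true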

Jib 

Jib is an open-source tool developed by Google for building optimized Docker and OCI images for Java applications without the need for a Docker daemon or a Dockerfile. It integrates directly with Maven and Gradle, allowing Java developers to build container images as part of their existing build process.

Like Buildpacks, Jib abstracts away Docker specifics but requires familiarity with Java build tools. Another main difference compared to Buildpacks is that Jib does not require a running Docker daemon, making it suitable for environments where Docker cannot be installed or is not preferred.

To use Jib with Gradle for building Docker images, the Jib plugin first needs to be added to the build.gradle file:

plugins {  
    id 'com.google.cloud.tools.jib' version '3.4.1'  
}  
  
jib {  
    to {   
       image = 'my-docker-id/my-java-app'  
    }  
}

The above example declares the Jib plugin in the plugins section; in this case, plugin version 3.4.1 is selected. In the jib extension section, the required “to” field needs to be configured, which defines the target image the application is built into. By default, Jib builds on top of an OpenJDK base image, but a custom base image can also be configured.

After configuring the Jib plugin, Gradle can be used to build a Docker image and automatically push it to the defined container registry:

./gradlew jib

When looking into the resulting image of a basic Java Spring Boot application, we notice that Jib automatically layers the dependencies and the compiled application for better caching. This results in faster image rebuilds and is one of the main features of Jib, which is specifically optimized for Java applications.

The whole experience of using Jib is akin to having a dedicated assistant for Java applications that takes care of all the containerization details behind the scenes. This tool excels in creating optimized images by intelligently managing image layers.  

There are many configurations that can be applied to tweak the resulting image even further, e.g. defining a user and group so that the process in the container runs as non-root. If further features are required that are not available via the Jib configuration, a custom base image containing these features can be specified. The sketch below shows both options.
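Here is a hedged sketch of such a configuration, building on the earlier example; the base image tag and the numeric user/group IDs are assumptions for illustration:

plugins {
    id 'com.google.cloud.tools.jib' version '3.4.1'
}

jib {
    from {
        // custom base image instead of Jib's default (tag chosen for illustration)
        image = 'eclipse-temurin:21-jre'
    }
    to {
        image = 'my-docker-id/my-java-app'
    }
    container {
        // run the container process as a non-root user and group
        user = '1000:1000'
    }
}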

Continuous Delivery 

Thinking about continuous delivery in combination with container images raises the question of what the actual artifact should be: is it the Docker image, or is it the application itself, with the Docker image simply being the packaging?

This question can make a difference in the context of a proper continuous delivery setup. If the artifact is a collection of Jar files, for example, then nothing else should be built initially in the pipeline. The packaging would then happen at the end of the pipeline, after we are confident that all quality requirements are met.

Tools like Buildpacks or Jib are a bit problematic in that case because they merge compilation and the Docker build into one single step. Following the rule to build the artifact only once, this forces a continuous delivery setup to treat the Docker image as the artifact, which is then processed further in the pipeline. The container needs to be started for further pipeline stages, and since we no longer have access to the compiled classes, these stages take on a black-box character, which can make the use of tools, e.g. for measuring test coverage, difficult.

This problem could be solved by simply working with Jar files first and only creating the Docker image later in the pipeline, as the sketch below illustrates. Container builders are then only used to package the files, which is possible, but it also means losing a large part of the potential that the different container builders offer.
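A rough sketch of such a split pipeline, assuming a Gradle-based Spring Boot project and the Paketo Buildpacks’ support for building from a precompiled artifact (paths and names are assumptions):

# Stage 1: compile and test once; the Jar is the artifact
./gradlew test bootJar

# ... further quality gates run against the Jar here ...

# Final stage: only package the already-built Jar into an image
pack build my-app --path build/libs/my-java-app.jar --builder paketobuildpacks/builder:base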

Conclusion 

Developers have a range of options, from manual Dockerfile creation offering maximum control to automated solutions like Buildpacks and Jib, which simplify the process and integrate seamlessly into existing workflows.  

When looking at Buildpacks and Jib, many examples show a simple single-command solution for building a Docker image. For basic usage this promise often holds true, but for more advanced usage, and especially when integrating into a continuous delivery pipeline, it is worth thinking twice about whether the automatic compile-and-build is really wanted. In complex project scenarios, the advantages of automatic container builders can be smaller than expected. That does not mean these tools are not worth using, but it is important to keep this in mind.

The decision on which tool to use ultimately rests on the specific requirements of your project, your team’s expertise, and the desired balance between control and convenience in your development workflow. 

As container technology continues to evolve, staying informed about these tools is essential for developers aiming to enhance their development processes.   

Hopefully, reading this blog post helps in that regard.