Mastering Production-Ready Docker Images For Your Apps
In today's fast-paced software development world, deploying applications reliably and efficiently is paramount. Docker has emerged as an indispensable tool, fundamentally changing how we package, distribute, and run our applications. If you're looking to take your applications from development to a robust, scalable production environment, understanding how to craft a truly production-ready Docker image is not just a good idea—it's essential. This comprehensive guide will walk you through the critical steps, best practices, and considerations involved in creating optimized Docker images that are not only performant but also secure and easy to manage in a live setting. We'll delve into everything from choosing the right base image to implementing advanced security measures, ensuring your application runs smoothly and consistently, no matter the scale.
Why Docker for Production Apps? Unleashing Reliability and Scalability
Docker images offer a compelling solution for deploying applications into production environments due to their unparalleled ability to ensure consistency, isolation, and portability. When you create a Docker image for your application, you're essentially packaging your code, its dependencies, and its configuration into a single, self-contained unit. This means "it works on my machine" finally translates to "it works everywhere"—from development to staging, and crucially, to production.

The core benefit of Docker in production is predictability. No more "dependency hell" or environment mismatches; your application runs inside a standardized container, isolated from the host system and other applications. This isolation significantly boosts reliability, as updates to one application won't inadvertently break another.

Furthermore, Docker facilitates scalability like never before. With container orchestration tools such as Kubernetes, deploying multiple instances of your production-ready Docker image to handle increased load becomes a seamless operation. You can scale up or down based on demand, optimizing resource utilization and ensuring your application remains responsive. Beyond scaling, Docker images enable rapid deployment and rollback. If a new version introduces an issue, rolling back to a previous, stable Docker image is straightforward and quick, minimizing downtime. Security is also enhanced; by running applications in isolated containers, the potential attack surface is reduced, and any compromise is contained.

The consistency provided by Docker containers also extends to development workflows, enabling developers to work in environments that closely mirror production, leading to fewer surprises later on. In essence, by embracing Docker for your production applications, you're investing in a robust, flexible, and efficient deployment strategy that pays dividends in stability, performance, and peace of mind.
We'll explore how to leverage these benefits by meticulously crafting your production Docker images to meet the highest standards. Optimizing your Dockerfile is key to unlocking these advantages, ensuring your images are lean, secure, and performant.
Essential Steps to Craft a Production-Ready Docker Image
Creating a production-ready Docker image is a nuanced process that goes far beyond simply packaging your application. It requires a strategic mindset, focusing on optimization for size, security, performance, and long-term maintainability. Every decision in your Dockerfile, from the base image you select to the way you manage application dependencies and configure environment variables, directly impacts the robustness and efficiency of your final deployment. The overarching objective is to construct lean, secure, and highly efficient Docker images that perform well under real-world traffic while minimizing potential attack vectors and operational overhead. This section guides you through the fundamental building blocks and best practices needed to achieve that goal: selecting a minimal foundation, multi-stage builds, careful user and permission management, health checks, and security hardening. By focusing on these interconnected steps, you will streamline your deployment process and strengthen the stability, reliability, and security posture of your services in production.
Understanding and applying these principles is paramount for anyone serious about deploying applications reliably in today's complex cloud landscapes, making your application truly resilient and ready for anything the production environment throws at it.
Start with a Minimal Base Image: Foundation for Efficiency
The journey to creating a production-ready Docker image begins with selecting the right base image, and for production, "minimal" is often synonymous with "optimal." A smaller base image translates directly into a smaller final image size, which offers numerous benefits: faster build times, quicker image pulls (especially crucial in distributed production environments or for CI/CD pipelines), and a reduced attack surface. Less software means fewer potential vulnerabilities. For many applications, Alpine Linux is an excellent choice due to its incredibly small footprint (often just a few megabytes). It's built around musl libc, which can sometimes introduce compatibility issues with certain compiled languages or complex libraries, but for many web applications, it works perfectly. Alternatives include Debian's slim variants (e.g., debian:bookworm-slim), which offer a more traditional glibc environment but are still significantly smaller than the full images. When selecting, consider your application's specific needs; for example, if your application relies heavily on C libraries that require a glibc environment, then a Debian slim image might be a better compromise than Alpine. Always opt for a specific version tag (e.g., node:16-alpine or python:3.9-slim) rather than latest to ensure reproducibility. This practice guarantees that your production image remains consistent across builds and deployments, preventing unexpected breaking changes that can occur when the latest tag updates. A well-chosen, minimal base image sets the stage for a highly efficient and secure Docker image right from the start, laying a solid foundation for your production application.
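Version pinning can go one step further than a tag: pinning the image digest makes the base fully immutable even if the tag is later re-pushed. A sketch (the digest below is a placeholder, not a real hash):

```dockerfile
# Reproducible across rebuilds, as long as the tag itself isn't re-pushed:
FROM node:16-alpine

# Stricter: pin the exact content digest. Obtain it from the "Digest:" line
# printed by `docker pull node:16-alpine`. Placeholder shown here:
# FROM node:16-alpine@sha256:<digest>
```

The trade-off: digest pins never pick up security patches automatically, so pair them with a scheduled rebuild-and-rescan process.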
Multi-Stage Builds: The Secret Sauce for Lean Images
One of the most powerful techniques for optimizing Docker images for production is the multi-stage build. This feature allows you to use multiple FROM statements in a single Dockerfile, where each FROM begins a new build stage. The magic happens when you selectively copy only the necessary artifacts from one stage to another, discarding all the build tools, source code, and intermediate dependencies that are only needed during the compilation phase. For instance, in a Node.js application, you might use an initial stage to install npm dependencies (potentially including development dependencies), compile your TypeScript or Babel code, and then copy only the compiled application and its runtime dependencies into a much smaller, production-ready image in the final stage. The result is a dramatically smaller final image that contains only what's absolutely required to run your application. This not only reduces the image size, speeding up deployments and reducing storage costs, but also significantly enhances security by removing unnecessary components that could be exploited. Imagine a Go application: the first stage could build the binary, and the second stage could simply be FROM alpine where you copy only the static binary. The build tools (Go compiler) are never part of the final image. This principle applies to almost any compiled or interpreted language that has a build step, making multi-stage builds an indispensable tool in your arsenal for creating truly production-grade Docker images. It’s a game-changer for Docker image optimization.
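The Go scenario described above can be sketched as follows (module layout, binary name, and image versions are assumptions for illustration):

```dockerfile
# Stage 1: build a statically linked binary with the full Go toolchain.
FROM golang:1.21-alpine AS builder
WORKDIR /src
# Copy dependency manifests first to leverage layer caching.
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 produces a static binary that runs without glibc.
RUN CGO_ENABLED=0 go build -o /bin/server .

# Stage 2: ship only the binary; the Go compiler never reaches production.
FROM alpine:3.19
COPY --from=builder /bin/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The final image here is little more than Alpine plus one executable, typically tens of megabytes instead of the near-gigabyte toolchain image.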
Managing Dependencies Efficiently: Layer Caching and Best Practices
Efficiently managing dependencies is a cornerstone of creating optimized production Docker images. Dependencies often account for a significant portion of an image's size and can greatly influence build times. The key is to leverage Docker's layer caching mechanism effectively. Docker builds images layer by layer, and each instruction in a Dockerfile creates a new layer. If a layer hasn't changed, Docker can reuse the cached version, drastically speeding up subsequent builds. Therefore, when installing dependencies, it’s best practice to place instructions that are less likely to change earlier in the Dockerfile. For example, installing npm or pip dependencies before copying your application code means that if only your code changes, Docker can reuse the cached dependency layer.
# Example for Node.js
FROM node:16-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci # Clean, reproducible install; dev dependencies are needed for the build step below
COPY . .
RUN npm run build # Build your application
In this example, `npm ci` (or `pip install -r requirements.txt` for Python) is executed after copying only the `package.json`/`package-lock.json` files. If these files don't change, this layer is cached; only when they change will this `RUN` instruction be re-executed. Note that when a build step follows, development dependencies are still needed at install time; strip them afterwards (for example with `npm prune --production`) or use a multi-stage build so they never reach the final image. Additionally, always use the `--no-cache-dir` or similar flags when installing packages (e.g., `pip install --no-cache-dir`) to prevent package managers from storing cached files within the image layer, which unnecessarily inflates image size. For multi-stage builds, ensure that only the production-required dependencies are copied into the final image, further slimming down your production Docker image. This meticulous approach to dependency management is vital for maintaining lean, fast, and production-ready Docker containers.
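The same caching pattern in a Python project might look like this (paths and the start command are illustrative):

```dockerfile
FROM python:3.9-slim
WORKDIR /app
# Copy only the dependency manifest first so this layer stays cached
# until requirements.txt itself changes.
COPY requirements.txt ./
# --no-cache-dir keeps pip's download cache out of the image layer.
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often, so it comes last.
COPY . .
CMD ["python", "app.py"]
```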
Environment Variables & Configuration: Flexible and Secure Settings
Configuring production-ready Docker images often involves dynamic settings that differ between environments (development, staging, production). Environment variables are the standard and most flexible way to handle configuration within Docker containers. Instead of hardcoding values, you can use the ENV instruction in your Dockerfile to set default values, which can then be easily overridden at runtime using the docker run -e flag, Docker Compose, or Kubernetes manifests. This separation of configuration from the image itself is crucial for maintaining production flexibility and security. For example, database connection strings, API keys, or application specific settings should never be hardcoded into the image. While ENV is great for non-sensitive data, for truly sensitive information like database passwords or API secrets, it's highly recommended to use Docker Secrets or Kubernetes Secrets. These tools provide a secure mechanism to inject sensitive data into containers at runtime without exposing them in environment variables or commit logs. Avoid placing sensitive ENV values directly in your Dockerfile, especially if the image is publicly accessible. Best practice involves defining placeholders or default, non-sensitive values in the Dockerfile, then providing the actual production secrets via orchestration tools. This approach ensures your production-ready Docker images are generic enough to be deployed across various environments while keeping sensitive data secure and out of the image layers.
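To make this concrete, here is a small sketch of how an application might read such configuration at startup. `DATABASE_URL`, `PORT`, and `APP_LOG_LEVEL` are illustrative variable names, and failing fast on a missing secret is one possible design choice, not a requirement:

```javascript
// Read configuration from environment variables, with defaults only for
// non-sensitive values. Secrets get no default: fail fast if they're missing.
function loadConfig(env = process.env) {
  if (!env.DATABASE_URL) {
    throw new Error('DATABASE_URL is required (inject it at runtime via secrets)');
  }
  return {
    dbUrl: env.DATABASE_URL,
    port: parseInt(env.PORT || '3000', 10),
    logLevel: env.APP_LOG_LEVEL || 'info',
  };
}
```

The same image can then be configured per environment at run time, e.g. `docker run -e PORT=8080 --env-file prod.env myapp`, with the secret values supplied by the orchestrator rather than baked into a layer.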
User and Permissions Management: Securing Your Containers
A critical aspect of creating secure, production-ready Docker images is managing user privileges and file permissions. By default, processes inside a Docker container run as the root user, which is a significant security risk. If an attacker gains control of your application within the container, they would have root privileges on the container, potentially allowing them to escalate privileges or access sensitive host resources. The best practice is to run your application as a non-root user. You can create a dedicated user and group within your Dockerfile and switch to it using the USER instruction. For example:
FROM node:16-alpine
WORKDIR /app
# Create a non-root user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "app.js"]
This ensures that your application runs with the minimum necessary privileges, limiting the potential damage in case of a security breach. Additionally, pay close attention to file permissions. Ensure that your application files and directories have appropriate permissions, typically read-only for application code and read/write for specific data directories if needed. Use chmod and chown commands within your Dockerfile to set these correctly. Avoid granting global write permissions (chmod 777) unless absolutely necessary, as this significantly broadens the attack surface. By carefully managing user and permissions, you harden your production Docker images against potential exploits and contribute to a more secure production environment.
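On a Debian-based image the same idea uses `groupadd`/`useradd` instead of the BusyBox tools, and the permission tightening described above can be done in the same Dockerfile. A sketch (directory names are illustrative):

```dockerfile
FROM node:16-slim
WORKDIR /app
# Create a dedicated system user and group for the application.
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
COPY --chown=appuser:appgroup . .
# Application code becomes read-only; only a dedicated data directory is writable.
RUN chmod -R a-w /app \
    && mkdir /app/data \
    && chown appuser:appgroup /app/data \
    && chmod u+w /app/data
USER appuser
CMD ["node", "app.js"]
```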
Health Checks and Liveness Probes: Ensuring Application Availability
For production-ready Docker images, simply having your application start doesn't mean it's healthy or ready to serve traffic. An application might start but then fail to connect to its database, or it might enter a state where it's unresponsive. This is where health checks (Docker's HEALTHCHECK instruction) and liveness/readiness probes (in Kubernetes) become indispensable. A health check allows Docker to periodically check if your containerized application is still functioning correctly. If the health check fails too many times, Docker can automatically restart the container, thereby improving the overall resilience of your service. For example, a simple health check might curl an endpoint that returns a 200 OK status if the application is healthy.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
CMD curl --fail http://localhost:3000/health || exit 1
This instruction tells Docker to check the /health endpoint every 30 seconds, with a 10-second timeout, and to consider the container unhealthy after 3 consecutive failures. Note that curl must actually be present in the image; minimal Alpine-based images ship BusyBox wget instead, so either install curl or adapt the command. In a Kubernetes production environment, liveness probes determine if a container is running and should be restarted if it fails, while readiness probes determine if a container is ready to serve traffic. Implementing these checks in your production Docker images is crucial for ensuring high availability and robust operation, allowing orchestration systems to intelligently manage your services and provide a seamless experience for your users.
Optimizing Image Layers and Caching: Speeding Up Builds
Optimizing Docker image layers is crucial for both reducing image size and accelerating build times, two key factors for production efficiency. Docker builds images by executing each instruction in your Dockerfile, creating a read-only layer for each. Subsequent builds can reuse these layers from the cache if the instruction and its context haven't changed. To leverage this, order your Dockerfile instructions strategically. Place instructions that change infrequently (like installing system dependencies or copying package.json/requirements.txt) at the beginning. Instructions that change frequently (like copying your application code) should come later. This way, Docker can reuse cached layers for the unchanging parts, only rebuilding the latter layers when your code updates.
- `.dockerignore`: Just like `.gitignore`, a `.dockerignore` file prevents unnecessary files (e.g., `.git` directories, `node_modules` from local development, `.env` files, local build artifacts) from being sent to the Docker daemon as part of the build context. This significantly reduces the build context size, speeding up the `COPY` instruction and preventing sensitive or superfluous files from accidentally ending up in your production image.
- Combine `RUN` Instructions: Multiple `RUN` commands create multiple layers. If feasible and logical, combine related `RUN` commands into a single instruction using `&&` to reduce the number of layers; this also lets you clean up temporary files within the same layer that created them.
- Clean Up: Always clean up temporary files and caches generated during `RUN` instructions (e.g., `apt-get clean`, `rm -rf /var/lib/apt/lists/*`) within the same layer where they were created. If you clean them in a subsequent layer, the original files still exist in the previous layer, bloating the image.
- `WORKDIR`: Use `WORKDIR` to set the working directory for subsequent instructions. This makes your Dockerfile more readable and prevents long absolute paths.
- `COPY --from`: In multi-stage builds, use `COPY --from` to copy only the necessary build artifacts from previous stages, discarding all intermediate build dependencies and tools.

By applying these optimization techniques, you can ensure your production Docker images are as lean and efficient as possible, leading to faster deployments and reduced operational overhead.
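As a concrete illustration of the first point, a minimal `.dockerignore` for a Node.js project might look like this (entries are examples; adapt them to your layout):

```
# Keep the build context lean and secrets out of the image
.git
node_modules
dist
coverage
*.log
.env
Dockerfile
.dockerignore
```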
Security Best Practices: Hardening Your Production Images
Security is paramount when creating production-ready Docker images. A compromised container can pose a significant risk to your entire infrastructure. Beyond running as a non-root user (as discussed), there are several other critical security best practices to implement.
- Minimize Attack Surface: This goes hand-in-hand with choosing a minimal base image and using multi-stage builds. Only install and include the absolute minimum software and dependencies required for your application to run. Every additional package is a potential vulnerability. Remove development tools, compilers, and unnecessary libraries from your final production image.
- Regularly Update Base Images: Vulnerabilities are constantly discovered. Ensure your base images are kept up-to-date. Using specific version tags (e.g., `node:16-alpine` instead of `node:alpine`) prevents unexpected updates but also means you need a strategy (like rebuilding weekly) to pick up security patches.
- Scan Your Images: Incorporate Docker image scanning tools (like Clair, Trivy, or commercial solutions) into your CI/CD pipeline. These tools analyze your image layers for known vulnerabilities (CVEs) and can provide valuable insights into potential weaknesses before deployment to production.
- Avoid Sensitive Data in Images: Never store secrets (API keys, passwords, private keys) directly in your Dockerfile or commit them to your image layers. As mentioned earlier, use Docker Secrets, Kubernetes Secrets, or external secret management systems (like HashiCorp Vault) to inject sensitive information at runtime.
- Set Resource Limits: While not directly a Dockerfile instruction, it's a crucial production deployment consideration. Define CPU and memory limits for your containers to prevent a single misbehaving application from consuming all host resources, potentially leading to a denial-of-service for other services.
- Content Trust: Enable Docker Content Trust to verify the integrity and publisher of Docker images. This helps ensure that the images you pull and run are exactly what they claim to be and haven't been tampered with.

By diligently following these security best practices, you can significantly harden your production Docker images, making them more resilient against attacks and protecting your production environment.
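For the resource-limits point above, here is what such limits might look like in a Kubernetes container spec; the values are illustrative and should be tuned from real measurements:

```yaml
# Excerpt from a container spec: requests guide scheduling, limits are hard caps.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
```

A container exceeding its memory limit is OOM-killed, so set limits with headroom above observed peak usage.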
Testing Your Docker Image Thoroughly: From Build to Runtime
A production-ready Docker image isn't complete without rigorous testing. While your application code likely has unit and integration tests, you also need to verify that the container itself behaves as expected across different stages: build, runtime, and interaction.
- Dockerfile Linting: Start by linting your Dockerfile. Tools like `hadolint` can check your Dockerfile for common best practices and potential issues, helping to catch misconfigurations early.
- Build-Time Tests: Verify that your image builds successfully and efficiently. Are all dependencies installed correctly? Is the final image size within acceptable limits? Are there any unexpected warnings during the build process?
- Runtime Tests: This is where you test the actual behavior of your application inside the container.
- Unit and Integration Tests: Run your existing application tests within the Docker container. This ensures that the application behaves correctly in the isolated containerized environment, catching any subtle differences that might arise from containerization (e.g., file paths, environment variables).
- Health Checks Validation: Manually test the health check endpoint (if you've implemented one) to ensure it correctly reports the application's status.
- Smoke Tests: After the container starts, perform basic "smoke tests" to confirm the core functionality. Can it connect to its database? Can it serve a simple request?
- Security Scans: As mentioned, regularly scan your built images for known vulnerabilities using tools like Trivy or Clair.
- Performance Testing: Run performance tests against your containerized application to ensure it meets performance requirements under load.
- End-to-End Tests: Deploy your Docker image to a staging environment that closely mirrors production and run end-to-end tests. This verifies the complete system, including interactions with other services, databases, and external APIs.

Incorporating these testing stages into your CI/CD pipeline ensures that only validated, high-quality production-ready Docker images make it to your live production environment, minimizing risks and enhancing reliability.
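A hypothetical GitHub Actions job wiring several of these stages together might look like the sketch below; the action versions, image name, and health endpoint are assumptions to adapt:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Lint the Dockerfile
    uses: hadolint/hadolint-action@v3.1.0
  - name: Build the image
    run: docker build -t myapp:${{ github.sha }} .
  - name: Scan for known vulnerabilities
    run: trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:${{ github.sha }}
  - name: Smoke test
    run: |
      docker run -d --name smoke -p 3000:3000 myapp:${{ github.sha }}
      sleep 5
      curl --fail http://localhost:3000/health
```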
Putting It All Together: A Practical Dockerfile Example
After delving into the theory and best practices behind production-ready Docker images, it's time to bring these concepts together in a tangible, practical example. This section consolidates everything we've discussed—from multi-stage builds and minimal base images to secure user management, efficient dependency handling, and robust health checks—into a single, coherent Dockerfile. The example focuses on a Node.js application, but the underlying principles and structural patterns transfer readily to other languages and frameworks, whether you're working with Python, Go, Java, or PHP. The aim is to provide a concrete template you can adapt for your own projects, showing how each best practice integrates to produce a lean, secure, and performant image. We'll walk through each part, explaining its purpose and how it contributes to a container image that meets the demands of a live deployment.
# Stage 1: Build the application (using a full-featured Node.js image)
FROM node:18-alpine AS builder
# Set the working directory for the builder stage
WORKDIR /app
# Copy package.json and package-lock.json first to leverage Docker layer caching.
# This step only invalidates if package*.json files change.
COPY package.json package-lock.json ./
# Install all dependencies. Dev dependencies are needed for the build step below;
# they are pruned afterwards so only runtime packages reach the final stage.
# npm ci gives clean, reproducible installs; clearing the npm cache keeps the layer lean.
RUN npm ci && npm cache clean --force
# Copy the rest of the application source code
COPY . .
# Run the build process (e.g., transpiling TypeScript, bundling assets).
# Ensure your build command outputs to a specific directory (e.g., 'dist').
RUN npm run build
# Strip devDependencies so only runtime packages are copied to the final stage.
RUN npm prune --production
# Stage 2: Create the final, lean production image
# Use a minimal runtime image for security and size optimization.
# node:18-alpine is already small; distroless Node images are an even leaner option.
FROM node:18-alpine AS production
# Set the working directory for the production stage
WORKDIR /app
# Create a non-root user and group for enhanced security.
# Running as root is a significant security risk.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy only the necessary artifacts from the builder stage:
# - The compiled application (e.g., from 'dist')
# - The production node_modules
# - package.json (might be needed by some apps at runtime, or for health checks)
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
# If your app has static assets or views (e.g., a 'public' directory), copy them too.
# Keep comments on their own lines: Dockerfile instructions don't support trailing
# comments. --chown makes the copied files accessible to appuser.
COPY --chown=appuser:appgroup --from=builder /app/public ./public
# Change to the non-root user for running the application.
# This greatly reduces the attack surface.
USER appuser
# Expose the port your application listens on.
# This is documentation; it doesn't actually publish the port.
EXPOSE 3000
# Define a health check to allow Docker and orchestrators (like Kubernetes)
# to determine if the container is healthy and responsive.
# Adjust the URL to your app's actual health check endpoint.
# Note: Alpine's BusyBox wget supports only short flags such as -q and -T,
# not GNU wget's --tries/--timeout.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD wget -q -T 5 --spider http://localhost:3000/health || exit 1
# Set environment variables for production.
# Sensitive variables should be injected at runtime via Docker Secrets or Kubernetes Secrets.
ENV NODE_ENV=production \
PORT=3000 \
APP_LOG_LEVEL=info
# Define the command to run your application.
# Ensure this command is for starting the production-ready build.
CMD ["node", "dist/index.js"]
This Dockerfile showcases several key concepts for production-ready Docker images:
- Multi-Stage Build: Separates build-time dependencies from runtime dependencies. The `builder` stage handles compilation, and only the essential artifacts are copied to the `production` stage.
- Minimal Base Image: Uses a minimal Alpine-based Node.js image for the final production stage, drastically reducing its size and attack surface.
- Dependency Caching: `package.json` and `package-lock.json` are copied and dependencies installed early, leveraging Docker's layer cache.
- Non-Root User: The application runs as `appuser`, significantly improving security.
- Selective Copying: Only the necessary compiled code and production `node_modules` are copied to the final image.
- Health Check: Implements a `HEALTHCHECK` to ensure the application is truly healthy.
- Environment Variables: Uses `ENV` for configuration, ready for runtime overrides.

By following this structure, you create a lean, secure, and highly efficient Docker image ready for production deployment.
Beyond the Dockerfile: Deployment Considerations for Production
Crafting a pristine, production-ready Docker image is an indispensable step in modern application deployment, but it is only one part of the broader ecosystem required for successful operation in a live environment. Once your optimized image is built, thoroughly tested, and deemed ready for prime time, the next challenge is integrating it into a robust deployment pipeline and managing its lifecycle in production. This involves considerations that extend beyond a single container: container orchestration, comprehensive logging and monitoring, secure secrets management, and streamlined continuous integration/continuous delivery (CI/CD). This section outlines these elements and shows how they complement your finely tuned Docker images to form a resilient, scalable, and observable application infrastructure.
- Container Orchestration: For any serious production application, running individual `docker run` commands is simply not scalable or resilient. This is where container orchestration platforms like Kubernetes or Docker Swarm come into play. These tools automate the deployment, scaling, and management of containerized applications. They can handle tasks like rolling updates, self-healing (restarting failed containers), load balancing, and service discovery. Adopting an orchestrator is crucial for achieving high availability and managing complex microservices architectures in production. It allows you to define the desired state of your application (e.g., "run 3 instances of this Docker image") and the orchestrator works to maintain that state.
- Logging and Monitoring: In production, you need to know what your application is doing. Implement a centralized logging strategy where container logs are collected and aggregated (e.g., using the ELK Stack, Splunk, or cloud-native logging services). This allows you to quickly debug issues, analyze application behavior, and ensure compliance. Similarly, comprehensive monitoring is non-negotiable. Tools like Prometheus and Grafana can collect metrics from your Docker containers (CPU, memory, network I/O, application-specific metrics) and visualize them, providing crucial insights into performance and potential bottlenecks. Implementing alerts based on these metrics ensures that your team is notified of critical issues before they impact users.
- CI/CD Pipeline Integration: A truly production-ready workflow integrates Docker image building and deployment into a Continuous Integration/Continuous Delivery (CI/CD) pipeline. Every code commit should trigger an automated build of your Docker image, run tests (including security scans), and push the new image to a private container registry (like Docker Hub, AWS ECR, Google Container Registry). For CD, successful builds can then trigger automated deployments to staging and production environments, potentially involving manual approval steps. This automation significantly speeds up release cycles, reduces human error, and ensures that only validated Docker images reach production.
- Secrets Management: While we touched upon not embedding secrets in the Dockerfile, the deployment phase is where you actually provide these secrets securely to your containers. Orchestrators offer native solutions (Kubernetes Secrets, Docker Secrets), or you can integrate with external secrets management systems like HashiCorp Vault. The key is to ensure sensitive information is encrypted at rest and in transit, and only accessible by authorized containers at runtime.
- Networking and Service Discovery: In a multi-service production environment, containers need to communicate with each other. Orchestration platforms provide robust networking solutions and service discovery mechanisms, allowing containers to find and connect to other services by name rather than IP address, simplifying configuration and increasing flexibility.

By considering these broader deployment aspects, you transform your production-ready Docker images from standalone artifacts into integral components of a robust, scalable, and manageable production system.
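Tying several of these threads together, a hypothetical Kubernetes Deployment excerpt for the image built earlier might look like this; names, replica counts, and probe timings are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                      # desired state; Kubernetes maintains it
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 3000
          livenessProbe:           # restart the container if this fails
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:          # gate traffic until the app is ready
            httpGet:
              path: /health
              port: 3000
            periodSeconds: 10
```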
Conclusion
Creating a production-ready Docker image is a fundamental skill in modern software deployment, moving beyond simple containerization to encompass crucial aspects of performance, security, and maintainability. We've explored the journey from selecting a minimal base image and mastering multi-stage builds to implementing rigorous security practices, efficient dependency management, and robust health checks. Each of these steps contributes to building a lean, secure, and highly performant Docker image that ensures your application runs consistently and reliably in any production environment. Remember, the ultimate goal is not just to package your application, but to package it smartly, making it resilient, scalable, and easy to operate. By embracing these best practices, you empower your teams to deploy with confidence, knowing that your Docker images are built for the challenges of production. Continue to iterate, optimize, and secure your images as your application evolves. Happy containerizing!
For further reading and in-depth understanding, explore these trusted resources:
- Docker's Official Documentation on Dockerfile Best Practices: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
- Kubernetes Documentation on Liveness, Readiness and Startup Probes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
- OWASP Docker Security Cheatsheet: https://cheatsheetseries.owasp.org/Docker_Security_Cheat_Sheet.html