Containerization in Production

Containers have become the standard deployment unit for modern applications. Docker packages applications with their dependencies into portable images, and Kubernetes orchestrates those containers across clusters of machines. At Nexis Limited, all our SaaS products run on Kubernetes, and we have learned valuable lessons about production container operations.

Docker Best Practices

Multi-Stage Builds

Use multi-stage Docker builds to separate the build environment from the runtime environment. The build stage includes compilers, build tools, and development dependencies. The runtime stage includes only the compiled application and runtime dependencies. This produces smaller, more secure images.
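A minimal sketch of a multi-stage build, assuming a hypothetical Go service (the module paths, binary name, and base images are illustrative):

```dockerfile
# Build stage: includes the Go toolchain and source
FROM golang:1.22-alpine AS build
WORKDIR /src
# Copy dependency manifests first so this layer caches across code changes
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: only the compiled binary, no compilers or build tools
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]
```

The final image contains only the static binary on a distroless base, which keeps the attack surface and image size small.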

Image Optimization

  • Use minimal base images: Alpine Linux, distroless, or scratch for Go binaries.
  • Order Dockerfile instructions by change frequency — put rarely changing instructions (OS packages) before frequently changing ones (application code) to maximize layer caching.
  • Run as a non-root user inside containers.
  • Scan images for vulnerabilities with Trivy, Snyk, or Docker Scout.
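The ordering and non-root points above can be sketched in a single Dockerfile; this example assumes a hypothetical Node.js app, and the file names are illustrative:

```dockerfile
FROM node:20-alpine
# Rarely changing: OS-level packages first
RUN apk add --no-cache tini
WORKDIR /app
# Dependencies change less often than source code, so copy manifests first
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Frequently changing: application code last, to preserve the cached layers above
COPY . .
# Run as the unprivileged user the base image provides
USER node
ENTRYPOINT ["/sbin/tini", "--", "node", "server.js"]
```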

Kubernetes Architecture

Kubernetes core concepts for application developers:

  • Pods: The smallest deployable unit — one or more containers that share networking and storage.
  • Deployments: Manage Pod replicas with rolling updates and rollbacks.
  • Services: Stable network endpoints that route traffic to Pods.
  • ConfigMaps and Secrets: Externalize configuration and sensitive data from container images.
  • Ingress: Routes external HTTP(S) traffic to internal Services.
  • Namespaces: Logical isolation within a cluster for different environments or teams.
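Several of these concepts fit together in a pair of manifests. A minimal sketch, assuming a hypothetical `web` application (image registry, ports, and names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: staging          # logical isolation per environment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config   # configuration externalized from the image
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: staging
spec:
  selector:
    app: web                  # stable endpoint routing to matching Pods
  ports:
    - port: 80
      targetPort: 8080
```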

Helm Charts

Helm packages Kubernetes manifests into reusable charts with templated values. We maintain Helm charts for each product, with environment-specific values files for development, staging, and production. This ensures consistent deployments across environments while allowing environment-specific configuration.
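The pattern looks roughly like this; the chart layout and value names here are hypothetical, not our actual charts:

```yaml
# values.yaml — chart defaults
replicaCount: 2
image:
  repository: registry.example.com/web
  tag: "1.4.2"

# values-production.yaml — per-environment overrides
# applied with: helm upgrade --install web ./chart -f values-production.yaml
replicaCount: 6
```

A template then references the values, e.g. `replicas: {{ .Values.replicaCount }}` in `templates/deployment.yaml`, so the same chart deploys everywhere while each environment supplies its own values file.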

Scaling Strategies

  • Horizontal Pod Autoscaler (HPA): Scale Pods based on CPU, memory, or custom metrics. Set minimum and maximum replica counts.
  • Vertical Pod Autoscaler (VPA): Adjust Pod resource requests based on actual usage. Useful for rightsizing workloads.
  • Cluster Autoscaler: Add or remove nodes based on pending Pod scheduling. Integrates with cloud provider autoscaling groups.
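An HPA targeting the hypothetical `web` Deployment from earlier might look like this (the thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2              # floor for availability
  maxReplicas: 10             # ceiling to bound cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```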

Production Operations

Health Checks

  • Liveness probes: Detect when a container is stuck; Kubernetes restarts containers that fail them.
  • Readiness probes: Detect when a container is ready to serve traffic. Kubernetes routes traffic only to ready Pods.
  • Startup probes: For slow-starting applications, delay liveness checks so Pods are not killed before they finish initializing.
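All three probes can be declared on a single container. A sketch, assuming the application exposes `/healthz` and `/ready` endpoints on port 8080 (both hypothetical):

```yaml
containers:
  - name: web
    image: registry.example.com/web:1.4.2
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30   # up to 30 × 5s = 150s allowed for startup
      periodSeconds: 5
    livenessProbe:           # takes over once the startup probe succeeds
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:          # gates Service traffic, does not restart the Pod
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```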

Resource Management

Always set resource requests and limits. Requests guarantee minimum resources. Limits prevent runaway containers from affecting other workloads. Set requests based on average usage and limits based on peak usage with headroom.
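In a container spec this is a short block; the numbers below are illustrative placeholders, not recommendations:

```yaml
resources:
  requests:            # sized from observed average usage; used for scheduling
    cpu: 250m
    memory: 256Mi
  limits:              # peak usage plus headroom; enforced at runtime
    cpu: "1"
    memory: 512Mi
```

Note that exceeding the memory limit gets the container OOM-killed, while exceeding the CPU limit only throttles it.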

Rolling Updates

Configure the rolling update strategy with maxSurge and maxUnavailable to control the update pace. Use readiness probes so traffic does not reach Pods before they finish starting up. Implement graceful shutdown handling (SIGTERM) in your application.
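A sketch of these settings on a Deployment; the surge values and the preStop sleep are illustrative choices, not universal defaults:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: web
          lifecycle:
            preStop:
              exec:
                # Brief pause lets endpoints deregister before SIGTERM arrives
                command: ["sleep", "5"]
```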

Our Container Stack

At Nexis Limited we also built DockWarden, an open-source Docker monitoring tool that provides container health checks, resource metrics, and notifications through Discord, Slack, and Telegram.

Conclusion

Containerization with Docker and Kubernetes is the standard for production deployments. Invest in proper Docker images, understand Kubernetes concepts, use Helm for deployment management, and implement health checks, resource limits, and monitoring. The operational complexity is real, but the benefits — portability, scaling, and deployment reliability — are substantial.

Need help with container deployment? Our DevOps team operates Kubernetes clusters for production SaaS products.