Preparing Google Cloud deployments for Docker Hub pull request limits
Docker Hub is a popular registry for hosting public container images. Earlier this summer, Docker announced it will begin rate-limiting the number of pull requests to the service by “Free Plan” users. For pull requests by anonymous users this limit is now 100 pull requests per 6 hours; authenticated users have a limit of 200 pull requests per 6 hours. When the new rate limits take effect on November 1st, they might disrupt your automated build and deployment processes on Cloud Build or how you deploy artifacts to Google Kubernetes Engine (GKE), Cloud Run or App Engine Flex from Docker Hub.
This situation is made more challenging because, in many cases, you may not be aware that a Google Cloud service you are using is pulling images from Docker Hub. For example, if your Dockerfile has a statement like “FROM debian:latest” or your Kubernetes Deployment manifest has a statement like “image: postgres:latest”, the image is pulled directly from Docker Hub. To help you identify these cases, Google Cloud has prepared a guide with instructions on how to scan your codebase and workloads for container image dependencies on third-party container registries, like Docker Hub.
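As a rough first pass, a shell one-liner can surface likely Docker Hub references in a source tree (a sketch under assumptions, not a substitute for the guide; the function name, directory path, and file patterns are placeholders):

```shell
# Scan a source tree for image references in Dockerfiles and Kubernetes
# manifests. FROM lines and image: fields whose first path segment has no
# dot or colon default to Docker Hub.
scan_image_refs() {
  grep -rh -E --include='Dockerfile*' --include='*.yaml' --include='*.yml' \
    '^[[:space:]]*-?[[:space:]]*(FROM|image:)[[:space:]]' "$1" 2>/dev/null \
    || true   # grep exits 1 when nothing matches; treat as an empty result
}

scan_image_refs .
```

From that list, any reference without an explicit registry host still needs a manual check, which is what the guide walks through in detail.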
We are committed to helping you run highly reliable workloads and automation processes. In the rest of the blog post, we’ll discuss how these new Docker Hub pull rate limits may affect your deployments running on various Google Cloud services, and strategies for mitigating against any potential impact. Be sure to check back often, as we will update this post regularly.
Impact on Kubernetes and GKE
One of the groups that may see the most impact from these Docker Hub changes is users of managed container services. As with other managed Kubernetes platforms, Docker Hub treats GKE as an anonymous user by default. This means that unless you specify Docker Hub credentials in your configuration, your cluster is subject to the new throttling of 100 image pulls per six hours, per IP address. And many Kubernetes deployments on GKE use public images: any container name that doesn’t carry a registry prefix such as gcr.io is pulled from Docker Hub. Examples include references like postgres:latest or nginx:latest, which resolve to Docker Hub’s “library” repository.
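The defaulting convention is simple enough to sketch as a shell predicate (an illustration of the rule, not an official tool): if the segment before the first “/” contains no dot or colon and isn’t localhost, the reference resolves to Docker Hub.

```shell
# Return success when an image reference resolves to Docker Hub, i.e. its
# first path segment is not a registry host (no "." or ":", not "localhost"),
# or the registry is spelled out as docker.io.
is_dockerhub() {
  local first="${1%%/*}"
  [ "$first" = "$1" ] && return 0   # bare name, e.g. "postgres:latest"
  case "$first" in
    docker.io)          return 0 ;; # explicit Docker Hub host
    *.*|*:*|localhost)  return 1 ;; # some other registry host
    *)                  return 0 ;; # org/name, e.g. "bitnami/nginx"
  esac
}

is_dockerhub "postgres:latest"        && echo "postgres:latest -> Docker Hub"
is_dockerhub "gcr.io/my-proj/app:v1"  || echo "gcr.io/my-proj/app:v1 -> not Docker Hub"
```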
Container Registry hosts a cache of the most-requested Docker Hub images from Google Cloud, and GKE is configured to use this cache by default. This means that the majority of image pulls by GKE workloads should not be affected by Docker Hub’s new rate limits. Furthermore, to remove any chance of an image you depend on being absent from the cache in the future, we recommend that you migrate your dependencies into Container Registry, so that you can pull all your images from a registry under your control.
In the interim, to verify whether or not you are affected, you can generate a list of the Docker Hub images your cluster consumes:
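One way to do this (a sketch assuming kubectl access to your cluster; the helper name is ours) is to dump every image referenced by running Pods, then keep only the references that default to Docker Hub:

```shell
# Keep only Docker Hub references from a newline-separated list on stdin:
# normalize explicit docker.io/ prefixes, then drop anything with another
# registry host (a segment containing "." or ":" before the first "/").
dockerhub_only() {
  sed 's|^docker\.io/||' \
    | grep -E -v '^([a-z0-9.-]+\.[a-z]{2,}|localhost)(:[0-9]+)?/' \
    || true   # grep exits 1 when nothing matches; treat as an empty list
}

# List every unique image referenced by Pods in all namespaces.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --all-namespaces \
    -o jsonpath='{.items[*].spec.containers[*].image}' \
    | tr -s '[:space:]' '\n' | sort -u | dockerhub_only
fi
```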
You may want to know if the images you use are in the cache. The cache will change frequently but you can check for current images via a simple command:
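The cache is served from mirror.gcr.io, so one way to check (assuming the gcloud CLI is installed and authenticated; the helper is ours) is to list the cached copies of Docker Hub “library” images:

```shell
# Docker Hub "library" (official) images map to mirror.gcr.io/library/<name>.
mirror_path() { echo "mirror.gcr.io/library/${1%%:*}"; }

if command -v gcloud >/dev/null 2>&1; then
  # Browse the cached library images, or check one image's cached tags:
  gcloud container images list --repository=mirror.gcr.io/library || true
  gcloud container images list-tags "$(mirror_path debian:latest)" || true
fi
```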
It is impractical to predict cache hit rates, especially when usage is likely to change dramatically. However, we are increasing cache retention times to ensure that most images that are in the cache stay there.
GKE nodes also have their own local disk cache, so when reviewing your usage of Docker Hub, you only need to count the number of unique image pulls (of images not in our cache) made from GKE nodes:
For private clusters, consider the total number of such image pulls across your cluster (as all image pulls will be routed via a single NAT gateway).
For public clusters you have a bit of extra breathing room, as you only need to consider the number of unique image pulls on a per-node basis. For public nodes, you would need to churn through more than 100 unique public uncached images every 6 hours to be impacted, which is fairly uncommon.
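To estimate per-node exposure, you can pair each Pod’s node with its images and count unique Docker Hub references per node (a sketch assuming kubectl access; the helper name is ours, and explicit docker.io/ references would need normalizing first):

```shell
# Count unique Docker Hub image references per node. Input lines look like
# "<node-name> <image> <image> ..."; images whose first path segment
# contains a dot or colon live on another registry and are skipped.
count_per_node() {
  awk '{ for (i = 2; i <= NF; i++)
           if ($i !~ /^[^\/]*[.:][^\/]*\//) seen[$1 SUBSEP $i] = 1 }
       END { for (k in seen) { split(k, a, SUBSEP); count[a[1]]++ }
             for (n in count) print count[n], n }' | sort
}

if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.spec.nodeName}{" "}{.spec.containers[*].image}{"\n"}{end}' \
    | count_per_node
fi
```

A node whose count approaches 100 unique uncached images per six hours is the case to worry about.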
If you determine that your cluster may be impacted, you can authenticate to Docker Hub by adding imagePullSecrets with your Docker Hub credentials to every Pod that references a container image on Docker Hub.
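In outline (the Secret name dockerhub-creds, the Pod, and the credentials below are all placeholders), that takes two steps: store the credentials in a Secret, then reference it from each Pod spec.

```shell
# Step 1: store Docker Hub credentials in a Secret (run against your
# cluster with real values):
# kubectl create secret docker-registry dockerhub-creds \
#   --docker-server=https://index.docker.io/v1/ \
#   --docker-username=YOUR_USERNAME --docker-password=YOUR_TOKEN

# Step 2: reference the Secret from any Pod that pulls from Docker Hub.
cat > /tmp/pod-with-pull-secret.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:latest
  imagePullSecrets:
  - name: dockerhub-creds
EOF
# kubectl apply -f /tmp/pod-with-pull-secret.yaml
```

Alternatively, attaching the Secret to a namespace’s default service account (kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}') applies it to Pods that don’t set the field themselves.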
While GKE is one of the Google Cloud services that may see an impact from the Docker Hub rate limits, any service that relies on container images may be affected, including Cloud Build, Cloud Run, and App Engine.
Finding the right path forward
Upgrade to a paid Docker Hub account
Arguably, the simplest—but most expensive—solution to Docker Hub’s new rate limits is to upgrade to a paid Docker Hub account. If you choose to do that and you use Cloud Build, Cloud Run on Anthos, or GKE, you can configure the runtime to pull with your credentials. Below are instructions for how to configure each of these services:
Cloud Run on Anthos: Deploying private container images from other container registries
Google Kubernetes Engine: Pull an Image from a Private Registry
Switch to Container Registry
Another way to avoid this issue is to move any container artifacts you use from Docker Hub to Container Registry. Container Registry stores images as Google Cloud Storage objects, allowing you to incorporate container image management as part of your overall Google Cloud environment. More to the point, opting for a private image repository for your organization puts you in control of your software delivery destiny.
To help you migrate, the above-mentioned guide also provides instructions on how to copy your container image dependencies from Docker Hub and other third-party container image registries to Container Registry. Please note that these instructions are not exhaustive—you will have to adjust them based on the structure of your codebase.
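The basic move, per image, is pull, retag under gcr.io, push (a sketch; PROJECT_ID and the image name are placeholders, it assumes docker is installed and gcloud auth configure-docker has been run, and the guide covers the process more thoroughly):

```shell
# Compute the Container Registry target for a Docker Hub image reference.
gcr_target() { echo "gcr.io/${PROJECT_ID}/${1}"; }

PROJECT_ID="my-project"   # placeholder: your Google Cloud project ID
if command -v docker >/dev/null 2>&1; then
  docker pull debian:latest || true
  docker tag debian:latest "$(gcr_target debian:latest)" || true
  docker push "$(gcr_target debian:latest)" || true
fi
```

After copying, update your Dockerfiles and manifests to reference the gcr.io path instead of the bare Docker Hub name.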
Additionally, you can use Managed Base Images, which are automatically patched by Google for security vulnerabilities, using the most recent patches available from the project upstream (for example, GitHub). These images are available in the GCP Marketplace.
Here to help you weather the change
The new rate limits on Docker Hub pull requests will have a swift and significant impact on how organizations build and deploy container-based applications. In partnership with the Open Container Initiative (OCI), a community devoted to open industry standards around container formats and runtimes, we are committed to ensuring that you weather this change as painlessly as possible.