Why does docker push need the clock to be in sync?

Hello everyone,

A few days back I was trying to push to the docker hub and docker push kept failing at just 99.x%. I tried to push multiple times but it kept exiting with the same error message: Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password

The message was quite confusing to me, as it didn't make any sense to give such an error message after pushing up to 99.x%. I tried to look it up on the internet and, after visiting a lot of links, I saw someone talking about the machine's time in some irrelevant thread. Then it clicked: sometimes the clock of my linux VM, on which docker runs, drifts off the wall clock. Even though ntp runs in the background, it won't make a big jump to correct the time. So, I manually fixed the time and docker push succeeded.
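For reference, the fix itself is just a one-shot clock sync; something along these lines, assuming an ntpdate-style setup (the exact commands differ per distro):

# check how far the VM's clock has drifted
date
# step the clock in a single jump; unlike the ntp daemon,
# ntpdate has no problem making a large correction
sudo ntpdate pool.ntp.org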

But I was still wondering: why does docker push need clocks to be in sync? Any guesses?

The answer is the authentication process. I was easily able to find the docker docs which describe the authentication process. But why does authentication need time to be in sync? I'll answer this in a while.

MITM docker requests

I decided to go wild and debug the authentication process manually. From the error message Get https://registry-1.docker.io/v2/: unauthorized, I saw that docker uses the standard HTTPS protocol for authentication. So, I decided to use a standard HTTPS MITM proxy to inspect the traffic. I had never done MITM on OSX, so I wasn't aware of any good tools for it. Moreover, the traffic was originating from my linux VM, which means regular HTTP proxy solutions like burpsuite or charles proxy wouldn't work. So, I started looking up other options for setting up a transparent proxy.

squid proxy - I had used it once in the past, during my internship at ZScaler, but it's quite a lengthy process to set up, given that I'm quite lazy.

mitmproxy - Somehow I came across it and I loved its simplicity. I found its doc on setting up a transparent proxy quite easy to follow. But the example there only covers the case in which the test device is different from the host on which the proxy server is running, and the test device has to be configured to use the proxy host as its default gateway.

Well, I knew I had to do some magic with routing so that I could use the same machine both as the test device and for proxying requests. But I wasn't sure how exactly it had to be done. I tried to look it up on the internet and realised I didn't have enough understanding of the linux IP stack. Obviously, it's quite easy to copy-paste a few commands to block/unblock a specific IP or port using iptables, but routing the traffic from the docker instance to a proxy server listening on port 8080 isn't possible without a proper understanding of how packets go through the linux networking stack.

While going through various links on StackOverflow, trying to learn this black magic, I came across linux-ip.net. It explains the linux IP stack in quite good detail. So, I decided to step back and learn about it from the beginning. Then I recalled it's the same site I had added to my ToDo list a year ago for learning networking on linux.

I skimmed through the whole guide and dug into the parts which seemed relevant to me. I might have understood just 20% of it, but believe me, it was more than enough: I was able to follow and understand all of those SO links.

[Diagram: packet flow through the linux netfilter chains]

The above diagram is of utmost importance if you want to play with packets using the netfilter interface of linux. Now I knew that I had to add a rule to the OUTPUT chain to send all IP packets with a destination port of 80/443 (HTTP/HTTPS) to 127.0.0.1:8080 (on which mitmproxy was listening). With this, mitmproxy would be able to see all the traffic and show me everything.

iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:8080
iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to-destination 127.0.0.1:8080
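For these DNAT-ed packets to be handled correctly, mitmproxy itself has to run in transparent mode; with recent versions that's roughly the following (the flag names have changed across mitmproxy releases, so treat this as a sketch):

# listen on 8080 in transparent mode, so mitmproxy recovers the
# original destination of redirected connections from the kernel
mitmproxy --mode transparent --showhost -p 8080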

But after intercepting these packets, mitmproxy needs to send them on to the original destination, so it changes the destination address of the packets back to the original one. If that's the case, my rules above would send the packets emitted by mitmproxy right back to mitmproxy itself, resulting in a never-ending loop. So, I needed to do something to break this loop. I decided to DNAT only the packets sent by processes owned by a "specific" user.

Basically I added these rules to achieve it:

iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner 1000 -j DNAT --to-destination 127.0.0.1:8080
iptables -t nat -A OUTPUT -p tcp --dport 443 -m owner --uid-owner 1000 -j DNAT --to-destination 127.0.0.1:8080

With this in place, I ran mitmproxy as the root user (uid=0), so my rules would only DNAT packets originating from processes running as the vagrant user (uid=1000). After installing the certs from mitm.it, I ran curl https://google.com and was able to see the intercepted traffic in mitmproxy. But when I ran docker push, I couldn't see anything. Then I recalled one difference I had learnt between rkt and docker, i.e. the docker daemon runs as root, which isn't safe, etc. So basically the docker client was sending requests to the docker daemon, and since the daemon was running as root, its traffic wouldn't show up in mitmproxy, as I was only intercepting packets from the uid=1000 user. I changed the above rules to intercept traffic from the root user instead.

iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner 0 -j DNAT --to-destination 127.0.0.1:8080
iptables -t nat -A OUTPUT -p tcp --dport 443 -m owner --uid-owner 0 -j DNAT --to-destination 127.0.0.1:8080

This time I ran mitmproxy as a non-root user and voila! I was able to see docker's intercepted traffic. Now it was time to inspect it.
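A quick way to double-check that it's really these rules doing the work is to look at the iptables packet counters (nothing docker-specific here):

# list the nat OUTPUT chain with counters; the pkts column of each
# DNAT rule grows as the docker daemon makes requests
sudo iptables -t nat -L OUTPUT -n -v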

Inspecting docker HTTP requests

[Screenshot: docker's requests intercepted in mitmproxy]

So, first of all docker sends a request to registry-1.docker.io/v2/, which returns a 401 response. Let's take a look at the full response headers.

[Screenshot: full headers of the 401 response from registry-1.docker.io, including Www-Authenticate]

It returns a Bearer realm in the Www-Authenticate header, in accordance with the OAuth format described in https://tools.ietf.org/html/rfc6750#section-3. This tells the client that it first has to authenticate itself with auth.docker.io and obtain an access token from there, using which registry-1.docker.io will service the client's requests.
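You can reproduce the challenge with a plain curl request; the realm and service values below are what the public Docker Hub returned for me, so treat them as illustrative:

# unauthenticated request to the registry API root
curl -i https://registry-1.docker.io/v2/
# HTTP/1.1 401 Unauthorized
# Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"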

Now, let’s check the request and response associated with auth.docker.io.

Request headers:
[Screenshot: request headers sent to auth.docker.io, including the Authorization header]

Response headers:
[Screenshot: response from auth.docker.io containing access_token, expires_in, and issued_at]

In the request headers, the docker client has specified the Authorization header, passing our credentials to the server, and in the response the server returns an access_token, which we have to send back to registry-1.docker.io to get access to the resource. That access_token has associated expires_in and issued_at fields, which define for how long the token is valid. These fields play the same role as the nbf, iat, and exp claims in the JWT specification. Well, that's the reason why authentication was failing: with the VM's clock far off the wall clock, the token's validity window no longer lines up with what the machine believes the current time is, so the token is treated as expired (or not yet valid) and the push is rejected. Though I'm still not sure why docker allowed the push up to 99.x% and then failed. The real world is complex for sure.
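To make the flow concrete, here is roughly what happens under the hood, reproduced with curl; the token endpoint and query parameters come from docker's token authentication docs, while the credentials and repository name below are placeholders:

# 1. exchange credentials for a short-lived access token
#    (scope format: repository:<name>:<actions>)
curl -s -u "youruser:yourpass" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:youruser/yourrepo:pull,push"
# => {"token":"<jwt>","access_token":"<jwt>","expires_in":300,"issued_at":"..."}

# 2. present the token to the registry
curl -s -H "Authorization: Bearer <jwt>" https://registry-1.docker.io/v2/

# the token is a JWT; its payload (second dot-separated segment) carries
# the iat/nbf/exp claims -- decode it to inspect the validity window
# (base64 may complain about missing '=' padding; the JSON still prints)
echo "<jwt>" | cut -d. -f2 | base64 -d 2>/dev/null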

If you want to read more about docker's authentication process, you can check it out here.

That’s it for now, thanks for reading!