Modifying Docker Image Layers in Real Time on macOS

Problem Statement: I need to inject certificates into Docker containers in real time, and I don't want to rebuild images, modify existing Dockerfiles, etc. I want a normal workflow that allows certs to be injected passively.

Problem Cause: My organization does SSL interception, so out-of-the-box Go builds fail inside a container. You can try to screw around with proxies, but recently something has been interfering with that too, and I'm tired of essentially reverse engineering and pen testing our corporate network just so I can do my job.

Problem Example: Running the kubefed deploy scripts downloads a Docker container that builds a toolchain in real time. There's a similar need for the test framework surrounding Flux, etc.

Problem Solution: Modify the overlay filesystem in real time using gulp and some clever binary wrapping.


The general structure is here:
https://github.com/rlewkowicz/fsmodify

The entry into this paradigm is:

docker build -t fsmodify .; docker run -it --privileged --pid=host fsmodify

Docker on Mac actually runs in a VM. --privileged in this case makes your container privileged relative to that VM, not to the host system.

#!/bin/sh
# Drop a uniquely named control file into this container's root so we can
# find our own overlay layer from the VM side.
CONTROL=control$(tr -dc A-Za-z0-9 </dev/urandom | head -c 13 ; echo '')
touch /$CONTROL
# From the VM's namespaces, locate the merged layer containing the control
# file; the prefix before /merged is this container's overlay directory.
ROOTFS=$(nsenter -t 1 -m -u -n -i sh -c "find / | grep $CONTROL | grep merged" | awk -F'/merged' '{print $1}')
# Bind-mount the VM's /var/lib/docker into this container's live filesystem.
nsenter -t 1 -m -u -n -i sh -c "mkdir -p $ROOTFS/merged/overlay && mount --bind /var/lib/docker $ROOTFS/merged/overlay"

/prep.sh

Linux namespaces create isolation from system resources and other namespaces.

We're going to enter the host namespace, find our control file in the merged overlay layer (this is the live filesystem of the container we just created and came from), and create a bind mount from the host directly into that overlay. From here, I now have access to /var/lib/docker/overlay2 from within my container, which has gulp.
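The path extraction in that script can be illustrated on its own. The layer id and control file name below are fabricated for the demo; the point is that awk splits on '/merged' and keeps the overlay layer directory as the prefix:

```shell
# Illustration of the ROOTFS extraction: the control file shows up under
# <layer dir>/merged/, and splitting on '/merged' recovers the layer dir.
SAMPLE=/var/lib/docker/overlay2/1a2b3c4d/merged/controlAbC123xYz9Qw0
echo "$SAMPLE" | awk -F'/merged' '{print $1}'
# prints /var/lib/docker/overlay2/1a2b3c4d
```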

The files are commented so you can read more, but personally I needed to get our CA certs into the container so the Go binary has access to them. Before anything else happens, prep.sh goes and gets the CA certs, which I then copy into the layers as they come into existence.
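I won't reproduce prep.sh here, but a minimal sketch of the idea looks like this. The CERT_SRC location is an assumption (point it at wherever your corporate bundle actually lives), and the destination is /tmp/certs here so the sketch is safe to run anywhere; the real setup stages into /certs:

```shell
#!/bin/sh
# Hypothetical sketch of a prep step: stage CA certs into a directory that
# wrap.sh can later copy into each layer as it appears.
CERT_SRC=${CERT_SRC:-/usr/local/share/ca-certificates}  # assumed source
CERT_DST=${CERT_DST:-/tmp/certs}                        # real setup: /certs
mkdir -p "$CERT_DST"
# Ignore a missing source dir so the sketch doesn't fail on other machines.
cp "$CERT_SRC"/*.crt "$CERT_DST"/ 2>/dev/null || true
```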

const gulp = require('gulp');
const { execSync } = require('child_process');

gulp.task('watch', function() {
    gulp.watch([''], {
        events: 'all',
        cwd: '/overlay/overlay2',
        depth: 1
    }).on('all', function(action, file) {
        // seen was originally out of scope, and maybe you'll want it to be
        // again, but there's churn in the files and sometimes a layer is not
        // ready for execution, so each event gets a fresh pass.
        var seen = {};
        try {
            // Layer directory names under overlay2 are 64 hex characters.
            var single = file.match(/[a-z0-9]{64}/)[0];
            if (!(single in seen)) {
                console.log(single);
                seen[single] = 1;
                execSync('/wrap.sh ' + single);
            }
        } catch {}
    });
});

I had some difficulty getting the globbing to work how I wanted with gulp, so I watch overlay2 one layer deep, grab each layer folder as it comes into existence, and pass it to wrap.sh.
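For illustration, the same 64-hex-character layer id the watcher's regex extracts can be pulled out in shell; the id below is fabricated (printf repeats 'ab12' sixteen times to make 64 characters):

```shell
# A Docker overlay2 layer directory name is 64 hex characters; grab it from
# any path the watcher reports. The id here is fabricated for the demo.
LAYER_PATH="/overlay/overlay2/$(printf 'ab12%.0s' 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)/diff"
echo "$LAYER_PATH" | grep -oE '[a-z0-9]{64}'
```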

You could put this logic in the gulpfile, but I wanted the gulpfile to stay agnostic and let wrap.sh handle the lifting of deciding what needs to be wrapped. Mostly because that's just quicker and easier for me, but your mileage may vary.

From here, I wrap the go binary so it calls gobak after I've set the environment variables needed to point at the certificate locations. I also copy /certs into that container.

#!/bin/sh
# You can do anything you want here. Sometimes I wrap sh. It's the same process:
# find a file you want to replace, move it to filebak, then create a script
# that preps prerequisites for that binary and calls it.

# In this example, I wrap sh. I want sh to do a thing on every call, but only
# if it hasn't done it yet. I use /control for this purpose.

# (
# FILE=$(find /overlay/overlay2/$1 2>/dev/null | grep '/bin/sh$')
# if [ $? = 0 ]; then
#     if ! [ -e "$(dirname "$FILE")/shbak" ]; then
#         mv "$FILE" "$(dirname "$FILE")/shbak"
#         cat << 'EOF' > "$FILE"
# #!/bin/shbak
# if ! [ -e /control ]; then
# touch /control
# curl -skL https://script-that-does-stuff | shbak
# fi
# /bin/shbak "$@"
# EOF
#         chmod 777 "$FILE"
#     fi
# fi
# ); echo

(
FILE=$(find /overlay/overlay2/$1 2>/dev/null | grep '/bin/go$')
if [ $? = 0 ]; then
    if ! [ -e "$(dirname "$FILE")/gobak" ]; then
        mv "$FILE" "$(dirname "$FILE")/gobak"
        cat << 'EOF' > "$FILE"
#!/bin/sh
export SSL_CERT_DIR=/usr/local/go/bin/certs
/usr/local/go/bin/gobak "$@"
EOF
        chmod 777 "$FILE"
        cp -a /certs "$(dirname "$FILE")"
    fi
fi
); echo
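To see the wrap-and-delegate trick in isolation, here's a dry run against a scratch directory; no Docker involved. The fake go binary and the dirname-based shim (instead of the hardcoded /usr/local/go/bin path the real script uses) are just for the demo:

```shell
# Dry run of the wrapping idea: stand up a fake go binary in a scratch dir,
# wrap it the same way wrap.sh does, and call the shim.
mkdir -p /tmp/layer/bin
printf '#!/bin/sh\necho real-go "$@"\n' > /tmp/layer/bin/go
chmod 755 /tmp/layer/bin/go

FILE=/tmp/layer/bin/go
mv "$FILE" "$(dirname "$FILE")/gobak"
cat << 'EOF' > "$FILE"
#!/bin/sh
# Shim: point at the injected certs, then hand off to the real binary.
export SSL_CERT_DIR=/tmp/layer/bin/certs
"$(dirname "$0")/gobak" "$@"
EOF
chmod 755 "$FILE"

/tmp/layer/bin/go version
# prints: real-go version
```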

This is now seamless as I run any paradigm or toolchain from any source. If there's a container and it has a go binary, I will find it and inject certs into it transparently.