FTP Docker container with Active Directory Authentication

I am working on a project that required me to come up with a container that could serve FTP and use Active Directory as its authentication provider. After several hours of tinkering and reading blog after blog (thank you all for the inspiration!), I finally have a working configuration that is stable.

Shout out to the most useful blog I ran across, which helped me get further down the line: https://warlord0blog.wordpress.com/2015/08/04/vsftpd-ldap-active-directory-and-virtual-users/

Source code for this project is located at https://github.com/joharper54SEG/Docker-vsftp-ldap

Running the container

Let's start with the basics. This project is based on vsftpd and Ubuntu 18.04. It uses the libpam-ldapd module for authentication, and confd to dynamically configure the services from environment variables I pass in via a Kubernetes ConfigMap and Secret.

The environment variables you will need to run this solution as-is are:

LDAP_BASE: DC=domain,DC=com
LDAP_BINDDN: username@domain.com
LDAP_BINDPW: password for bind user
LDAP_FILTERMEMBEROF: memberOf=distinguished name of group
LDAP_SSL_CACERTFILE: /etc/ssl/certs/ca-certificates.crt
LDAP_SSL_ENABLE: "off"
LDAP_URI: ldap://ldap.fqdn.com
VSFTPD_GUEST_ENABLE: "YES"
VSFTPD_LOCAL_ENABLE: "YES"
VSFTPD_PASV_ADDRESS: IP of host
VSFTPD_PASV_ENABLE: "YES"
VSFTPD_PASV_MAX_PORT: "10095"
VSFTPD_PASV_MIN_PORT: "10090"
VSFTPD_SSL_CIPHERS: HIGH
VSFTPD_SSL_ENABLE: "NO"
VSFTPD_SSL_PRIVATEKEY: /etc/ssl/certs/privatekey.key
VSFTPD_SSL_PUBLICKEY: /etc/ssl/certs/publickey.pem
VSFTPD_SSL_SSLV2: "NO"
VSFTPD_SSL_SSLV3: "NO"
VSFTPD_SSL_TLSV1: "YES"

Notable ones here are:

VSFTPD_PASV_ADDRESS – this needs to be set to the IP address of the host the container is running on. Without it, you will be unable to connect using passive FTP.

LDAP_FILTERMEMBEROF – I only wanted members of a certain group to have FTP rights on this server. This is a standard LDAP filter and can be modified to fit whatever you need.
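Since I pass these in via a Kubernetes ConfigMap and Secret, the wiring looks roughly like this (object names and values here are made up for illustration, not taken from the repo):

```yaml
# Illustrative only: names and values are examples, not from the repo.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vsftpd-env
data:
  VSFTPD_PASV_ADDRESS: "203.0.113.10"
  VSFTPD_PASV_ENABLE: "YES"
  LDAP_URI: "ldap://ldap.fqdn.com"
---
apiVersion: v1
kind: Secret
metadata:
  name: vsftpd-secrets
type: Opaque
stringData:
  LDAP_BINDPW: "changeme"
```

The pod spec then pulls these in with envFrom (configMapRef/secretRef), which hands every key to the container as an environment variable.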

Confd Process

I wanted a way to adjust settings dynamically without having to rebuild the entire container. I also did not want to store passwords and other sensitive information in the config files, both because it's best practice not to and because I wanted to share the project with the public. At first, I tried using bash scripting to dynamically build the config. That turned out to be a giant PITA, so I started digging for alternatives and came across confd. I am going to do a high-level review of how this works; refer to the confd project for more detail: http://www.confd.io/

To get confd working you need config files instructing confd what to do. These are .toml files located in /etc/confd/conf.d and look like this:

[template]
src = "vsftpd.conf.tmpl"
dest = "/etc/vsftpd.conf"
keys = [
  "/vsftpd/pasv/address",
  "/vsftpd/pasv/min/port",
  "/vsftpd/pasv/max/port",
  "/vsftpd/pasv/enable",
  "/vsftpd/ssl/enable",
  "/vsftpd/ssl/tlsv1",
  "/vsftpd/ssl/sslv2",
  "/vsftpd/ssl/sslv3",
  "/vsftpd/ssl/ciphers",
  "/vsftpd/ssl/publickey",
  "/vsftpd/ssl/privatekey",
]

src = the name of the template file located in /etc/confd/templates. You will have a template file for each config you want confd to manage.
dest = where confd writes the file after it does its thing.
keys = these correspond to the environment variables we want confd to use. "/vsftpd/pasv/address" maps to the environment variable VSFTPD_PASV_ADDRESS we pass at container runtime. Note: environment variable names have to be all caps.
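The mapping from key to variable name is mechanical: drop the leading slash, replace the remaining slashes with underscores, and uppercase. A quick shell sketch of the rule:

```shell
# confd's env backend derives environment variable names from keys:
# strip the leading "/", turn "/" into "_", then uppercase.
key_to_env() {
  printf '%s\n' "$1" | sed 's|^/||' | tr '/' '_' | tr 'a-z' 'A-Z'
}

key_to_env /vsftpd/pasv/address   # prints VSFTPD_PASV_ADDRESS
```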

Next are the templates. These are your config files with the variable strings confd uses. Just copy your config files here, rename them with a .tmpl extension, and change the key values to the confd strings.

Example: pasv_address={{getv "/vsftpd/pasv/address"}}
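A slightly fuller sketch of what the vsftpd.conf.tmpl template ends up looking like (an illustrative subset; see the repo for the full template):

```
pasv_enable={{getv "/vsftpd/pasv/enable"}}
pasv_address={{getv "/vsftpd/pasv/address"}}
pasv_min_port={{getv "/vsftpd/pasv/min/port"}}
pasv_max_port={{getv "/vsftpd/pasv/max/port"}}
ssl_enable={{getv "/vsftpd/ssl/enable"}}
```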

Now, in the container start script (/sbin/init.sh), the line below runs confd, which takes all our environment variables (keys), builds the config file, and copies it to the correct location (dest).

confd -onetime -backend env
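If you want a feel for what that one-shot run does, here is a toy stand-in for the env backend's substitution written in plain shell (an illustration only; the container uses the real confd binary):

```shell
# Toy stand-in for "confd -onetime -backend env": replaces each
# {{getv "/a/b/c"}} placeholder read from stdin with the value of $A_B_C.
render() {
  while IFS= read -r line; do
    while printf '%s\n' "$line" | grep -q '{{getv'; do
      # pull out the key, e.g. /vsftpd/pasv/address
      key=$(printf '%s\n' "$line" | sed 's/.*{{getv "\([^"]*\)"}}.*/\1/')
      # derive the variable name, e.g. VSFTPD_PASV_ADDRESS
      var=$(printf '%s\n' "$key" | sed 's|^/||' | tr '/' '_' | tr 'a-z' 'A-Z')
      val=$(eval echo "\$$var")   # look up the matching environment variable
      line=$(printf '%s\n' "$line" | sed "s|{{getv \"$key\"}}|$val|")
    done
    printf '%s\n' "$line"
  done
}

export VSFTPD_PASV_ADDRESS=203.0.113.10
echo 'pasv_address={{getv "/vsftpd/pasv/address"}}' | render
# prints: pasv_address=203.0.113.10
```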

Active Directory Authentication

To make AD authentication work properly, the attribute mappings had to be configured. These mappings live in the /etc/nslcd.conf file. Or, if you're looking at the source, that's the /etc/confd/templates/nslcd.conf.tmpl file, which gets copied to /etc/nslcd.conf on container start via the confd process.

Each user in AD needs 3 attributes set in order for nslcd to pick them up: uidNumber, gidNumber, and unixHomeDirectory. I won't go into detail on these beyond saying that uidNumber should be unique for each user in your environment. How you enforce that is up to you.
unixHomeDirectory is not being used in this example, but is still required to be set. If you want to use it, uncomment line 18 in /etc/nslcd.conf. In my case, I am setting the home directory to /home/vftp/samaccountname.
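For reference, the relevant mapping lines in nslcd.conf look something like this (an illustrative sketch, not copied from the repo; the homeDirectory expression matches the /home/vftp/samaccountname behavior described above):

```
# nslcd.conf mapping sketch (illustrative; check the repo's nslcd.conf.tmpl)
# map passwd homeDirectory unixHomeDirectory    # uncomment to honor the AD attribute
map passwd uid           sAMAccountName
map passwd homeDirectory "/home/vftp/$sAMAccountName"
```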

Next up are the PAM modules. There is one module referenced in our config (/etc/pam.d/vsftpd), as I only want AD users logging in. To allow local users, this would have to be modified.

Next look at /etc/nsswitch.conf. You will see the ldap entries in there for passwd, group, and shadow.
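For reference, those entries look roughly like this (the first source may be files or compat depending on the base image):

```
# /etc/nsswitch.conf (relevant entries)
passwd:  compat ldap
group:   compat ldap
shadow:  compat ldap
```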

Now, if everything lines up, you should be able to log in to FTP with your AD accounts. Run "getent passwd" from inside the container and you should see users from AD listed there.

Virtual Users

So before this project, I had no idea what virtual users were. The basic explanation is that these are users that are not local on the system. If you run "cat /etc/passwd" you will not see them listed, but you will see them if you run "getent passwd". These users cannot log in via SSH or any other service, but are granted access to FTP by our PAM configs. This is why we need the "guest_enable=YES" option in our vsftpd.conf.

This also creates some interesting behavior around file permissions, as the vsftpd daemon runs as the "ftp" user when a guest logs in. You can create your own user and tell vsftpd to use it instead via the "guest_username=svc_ftp" option. I needed to do this because I wanted to share data with other containers on the system via Docker volumes: I had to set a consistent UID across this dataset for permissions to work, and the built-in ftp user had a conflicting UID. This also means the uidNumber set for the user in AD is not really used here beyond adding them into the cache service as a virtual user.
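Pieced together, the relevant vsftpd.conf lines look like this (svc_ftp and the UID are examples; pick values that match your volume permissions):

```
# vsftpd.conf excerpt: run guest (virtual user) sessions as a dedicated account
guest_enable=YES
guest_username=svc_ftp
# svc_ftp is created in the image with a fixed UID, e.g.:
#   useradd --uid 2000 --shell /usr/sbin/nologin svc_ftp
```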

Depending on your requirements you may want to tinker with these permission settings a bit more if security is a concern. For my use case, these settings work fine.

Conclusion

It works! Feel free to use this information as you need, and take the time to familiarize yourself with all the settings in each config file, as I am skipping over quite a few of them in this post.

Azure DevOps Agent in Docker with Deployment Groups

I recently started learning how to use Azure DevOps, formerly VSTS, to do automated build and deploy tasks against several Docker hosts. In the POC I am running, I will have about 600 machines running Docker in remote locations acting as edge/branch servers. These Docker hosts will each run 5-6 containers to handle basic services such as DNS, FTP, FTE, and PowerShell for automation. I wanted an easy way to deploy and update containers on these 600 remote machines. I looked at various options, from Rancher to custom PowerShell scripts that hit the Docker API and/or SSH into each machine and run the docker commands. For this post, we are going to focus on how I accomplished these tasks using Azure DevOps.

Microsoft has already released pre-built Docker images for the VSTS agent (https://hub.docker.com/_/microsoft-azure-pipelines-vsts-agent). While this worked great, it only allowed me to register the agent into an agent pool in DevOps. In my case, I wanted the agent to register with a deployment group so I could run the same task on every agent. It turns out Microsoft has no documented way to accomplish that. So I started digging into how they built their container. The agent looks exactly like the one they install on Linux (no surprise there), and for that one they do provide instructions on registering with a deployment group. So now it's just a matter of getting the container version to accept the deployment group options. Here's how I did it.

Pull the image down from the hub to your local host. In this case, I want the agent to have the Docker command-line utilities installed so I can interface with Docker on the host through the Unix socket. Microsoft has an image already built for this:

docker pull mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce

Now, if you look at the directions in DevOps on how to deploy an agent to Linux (Pipelines > Deployment Groups > Register), you will find that two options are needed that are not in the documentation for the Docker-based agent, nor is the container from Microsoft set up to accept these options. Those options are --deploymentgroup and --deploymentgroupname.

After inspecting the image we can see that it is launching the start.sh script: CMD /bin/sh -c ./start.sh

This script can be downloaded from GitHub here:
https://github.com/Microsoft/vsts-agent-docker

To make this do what we need we will need to modify the start.sh script and then build a custom container with our changes.

#Lines 40-46
if [ -n "$VSTS_DEPLOYMENTPOOL" ]; then
  export VSTS_DEPLOYMENTPOOL="$(eval echo $VSTS_DEPLOYMENTPOOL)"
fi

if [ -n "$VSTS_PROJECTNAME" ]; then
  export VSTS_PROJECTNAME="$(eval echo $VSTS_PROJECTNAME)"
fi

#Lines 91-100.  Remove --pool. 
#Add --deploymentgroup, --deploymentgroupname, --projectname

./bin/Agent.Listener configure --unattended \
  --agent "${VSTS_AGENT:-$(hostname)}" \
  --url "https://$VSTS_ACCOUNT.visualstudio.com" \
  --auth PAT \
  --token $(cat "$VSTS_TOKEN_FILE") \
  --work "${VSTS_WORK:-_work}" \
  --deploymentgroup \
  --deploymentgroupname "${VSTS_DEPLOYMENTPOOL:-default}" \
  --projectname "${VSTS_PROJECTNAME:-projectnamedefault}" \
  --replace & wait $!

Create a new Dockerfile that copies in your modified start.sh, and build your new container.

FROM mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce

COPY start.sh /vsts

WORKDIR /vsts

RUN chmod +x start.sh

CMD ./start.sh

Now you can run your container. Specify your deployment group and project name as environment variables at runtime.

docker run -e VSTS_ACCOUNT=yourAccountName \
-e VSTS_TOKEN=yourtoken \
-e VSTS_DEPLOYMENTPOOL=yourpoolname \
-e VSTS_PROJECTNAME=yourprojectname \
-v /var/run/docker.sock:/var/run/docker.sock \
-d \
--name DevOps-Agent \
--hostname $(hostname) \
-it \
yourprivateregistry/devops-agent

And boom, achievement badge unlocked! You now have agents reporting to a deployment group.