FTP Docker container with Active Directory Authentication

I am working on a project that required me to come up with a container that could serve FTP and use Active Directory as its authentication provider. After several hours of tinkering and reading blog after blog (thank you all for the inspiration!), I finally have a working, stable configuration.

Shout out to the most useful blog I ran across, which helped me get further down the line: https://warlord0blog.wordpress.com/2015/08/04/vsftpd-ldap-active-directory-and-virtual-users/

Source code for this project is located at https://github.com/joharper54SEG/Docker-vsftp-ldap

Running the container

Let's start with the basics. This project is based on vsftpd and Ubuntu 18.04. It uses the libpam-ldapd module for authentication and confd to dynamically configure the services from environment variables passed in via a Kubernetes ConfigMap and Secret.

The environment variables you will need to run this solution as-is are:

LDAP_BASE: DC=domain,DC=com
LDAP_BINDDN: username@domain.com
LDAP_BINDPW: password for bind user
LDAP_FILTERMEMBEROF: memberOf=distinguished name of group
LDAP_SSL_CACERTFILE: /etc/ssl/certs/ca-certificates.crt
LDAP_SSL_ENABLE: "off"
LDAP_URI: ldap://ldap.fqdn.com
VSFTPD_GUEST_ENABLE: "YES"
VSFTPD_LOCAL_ENABLE: "YES"
VSFTPD_PASV_ADDRESS: IP of host
VSFTPD_PASV_ENABLE: "YES"
VSFTPD_PASV_MAX_PORT: "10095"
VSFTPD_PASV_MIN_PORT: "10090"
VSFTPD_SSL_CIPHERS: HIGH
VSFTPD_SSL_ENABLE: "NO"
VSFTPD_SSL_PRIVATEKEY: /etc/ssl/certs/privatekey.key
VSFTPD_SSL_PUBLICKEY: /etc/ssl/certs/publickey.pem
VSFTPD_SSL_SSLV2: "NO"
VSFTPD_SSL_SSLV3: "NO"
VSFTPD_SSL_TLSV1: "YES"

Notable ones here are:

VSFTPD_PASV_ADDRESS – this needs to be set to the IP address of the host the container is running on. Without it, you will be unable to connect using passive FTP.

LDAP_FILTERMEMBEROF – I only wanted to allow members of a certain group to have FTP rights to this server. This is a standard LDAP filter and can be modified to fit whatever you would like.
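
Since I pass these in through Kubernetes, a minimal ConfigMap and Secret might look something like the sketch below (the names and values here are placeholders, not the exact manifests from my cluster); the container would then consume both via envFrom in the pod spec.

apiVersion: v1
kind: ConfigMap
metadata:
  name: vsftpd-config
data:
  LDAP_BASE: "DC=domain,DC=com"
  LDAP_URI: "ldap://ldap.fqdn.com"
  VSFTPD_PASV_ADDRESS: "192.0.2.10"
  VSFTPD_PASV_MIN_PORT: "10090"
  VSFTPD_PASV_MAX_PORT: "10095"
---
apiVersion: v1
kind: Secret
metadata:
  name: vsftpd-secret
type: Opaque
stringData:
  LDAP_BINDDN: "username@domain.com"
  LDAP_BINDPW: "changeme"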

Confd Process

I wanted a way to dynamically adjust settings without having to rebuild the entire container. I also did not want to store passwords and other sensitive information in the config files, both because it's best practice not to and because I wanted to share the project with the public. At first I tried using bash scripting to dynamically build the config. That turned out to be a giant PITA, so I started digging for alternatives and came across confd. I am going to do a high-level review of how this works; refer to the confd project for more detail: http://www.confd.io/

To get confd working you need config files telling confd what to do. These are .toml files located in /etc/confd/conf.d and look like this:

[template]
src = "vsftpd.conf.tmpl"
dest = "/etc/vsftpd.conf"
keys = [
  "/vsftpd/pasv/address",
  "/vsftpd/pasv/min/port",
  "/vsftpd/pasv/max/port",
  "/vsftpd/pasv/enable",
  "/vsftpd/ssl/enable",
  "/vsftpd/ssl/tlsv1",
  "/vsftpd/ssl/sslv2",
  "/vsftpd/ssl/sslv3",
  "/vsftpd/ssl/ciphers",
  "/vsftpd/ssl/publickey",
  "/vsftpd/ssl/privatekey",
]

src = the name of the template file located in /etc/confd/templates. You will have a template file for each config you want confd to update.
dest = where confd writes the rendered file.
keys = these correspond to the environment variables we want confd to use. "/vsftpd/pasv/address" maps to the environment variable VSFTPD_PASV_ADDRESS we pass at container runtime. Note: environment variables have to be all caps.

Next are the templates. These are your config files with the variable strings confd substitutes. Just copy your config files here, rename them with a .tmpl extension and change the values to the confd key strings.

Example: pasv_address={{getv "/vsftpd/pasv/address"}}
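
Putting a few of the keys from the .toml file together, the vsftpd.conf.tmpl ends up with entries like these (a sketch, not the exact template from the repo):

pasv_enable={{getv "/vsftpd/pasv/enable"}}
pasv_address={{getv "/vsftpd/pasv/address"}}
pasv_min_port={{getv "/vsftpd/pasv/min/port"}}
pasv_max_port={{getv "/vsftpd/pasv/max/port"}}
ssl_enable={{getv "/vsftpd/ssl/enable"}}
rsa_cert_file={{getv "/vsftpd/ssl/publickey"}}
rsa_private_key_file={{getv "/vsftpd/ssl/privatekey"}}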

Now, in the container start script (/sbin/init.sh), the line below runs confd, which takes all our environment variables (keys), builds the config files and writes them to the correct locations (dest).

confd -onetime -backend env
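
For context, a minimal init.sh along those lines might look like this (a sketch assuming nslcd and a foreground vsftpd, not the exact script from the repo):

#!/bin/bash
set -e

# Render all config templates from the environment variables
confd -onetime -backend env

# Start the LDAP name service daemon, then run vsftpd in the foreground
nslcd
exec vsftpd /etc/vsftpd.conf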

Active Directory Authentication

To make AD authentication work properly, the attribute mappings had to be configured. These mappings live in /etc/nslcd.conf. Or, if you're looking at the source, it's the /etc/confd/templates/nslcd.conf.tmpl file, which gets rendered to /etc/nslcd.conf on container start via the confd process.

Each user in AD needs 3 attributes set in order for them to be picked up by nslcd. These attributes are uidNumber, gidNumber, and unixHomeDirectory. I am not going to go into detail on these; I will only say that uidNumber should be unique for each user in your environment. How you do that is up to you.
unixHomeDirectory is not being used in this example, but it still has to be set. If you want to use it, uncomment line 18 in /etc/nslcd.conf. In my case, I am setting the home directory to /home/vftp/samaccountname (the user's sAMAccountName).
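
The relevant part of nslcd.conf ends up looking roughly like this (a sketch with placeholder values; the group DN and home directory path are examples):

uri ldap://ldap.fqdn.com
base DC=domain,DC=com
binddn username@domain.com
bindpw secret

# Only pull in users that are members of a specific AD group
filter passwd (&(objectClass=user)(memberOf=CN=FTP-Users,OU=Groups,DC=domain,DC=com))

# Map AD attributes onto the fields nslcd expects
map passwd uid           sAMAccountName
map passwd homeDirectory "/home/vftp/$sAMAccountName"
# To use the AD attribute instead, map homeDirectory to unixHomeDirectory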

Next up are the PAM modules. There is one config we reference (/etc/pam.d/vsftpd), and it only allows AD users to log in; to allow local users it would have to be modified.
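
A minimal /etc/pam.d/vsftpd that only consults LDAP might look like this (a sketch, assuming pam_ldap.so from libpam-ldapd):

auth    required pam_ldap.so
account required pam_ldap.so
session required pam_ldap.so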

Next look at /etc/nsswitch.conf. You will see the ldap entries in there for passwd, group, and shadow.
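
Those entries look something like this:

passwd:         files ldap
group:          files ldap
shadow:         files ldap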

Now, if everything lines up, you should be able to log in to FTP with your AD accounts. Run "getent passwd" from inside the container and you should see the users from AD listed there.

Virtual Users

So before this project, I had no idea what virtual users were. The basic explanation is that these are users that are not local on the system. If you run "cat /etc/passwd" you will not see them listed, but you will see them if you run "getent passwd". These users cannot log in via SSH or any other service, but they are granted access to FTP by our PAM config. This is why we need the "guest_enable=YES" option in our vsftpd.conf.

This also creates some interesting things when it comes to file permissions, because the vsftpd daemon runs as the "ftp" user when a guest logs in. You can create your own user and tell vsftpd to use it instead via the "guest_username=svc_ftp" option. I needed to do this because I wanted to share data with other containers on the system via Docker volumes: I had to set a consistent uid across the dataset for permissions to work, and the built-in ftp user had a conflicting uid. This also means that the uid set for the user in AD is not really used here beyond adding them into the cache service as a virtual user.
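
The relevant vsftpd.conf options end up looking something like this (a sketch; svc_ftp and the paths are examples, and guest_enable / guest_username are the options described above):

# Authenticate through PAM, which in turn hits LDAP via nslcd
pam_service_name=vsftpd
local_enable=YES
# Treat authenticated non-local users as guests...
guest_enable=YES
# ...and run their sessions as a dedicated local account instead of the built-in "ftp" user
guest_username=svc_ftp
# Keep users inside the shared FTP area
chroot_local_user=YES
local_root=/home/vftp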

Depending on your requirements you may want to tinker with these permission settings a bit more if security is a concern. For my use case, these settings work fine.

Conclusion

It works! Feel free to use this information as you need and take the time to familiarize yourself with all the settings in each config file as I am skipping over quite a few of them in this blog.

Storage Replica in Server 2016

One of the new features of Server 2016 (Datacenter edition only) that I have been playing with is Storage Replica.  I now have this technology running in production for one of our file servers.  Before Storage Replica came along, we were using DFS-R to replicate this particular set of data (~16TB, ~27 million files) to our DR facility for faster recovery.  Anyone out there who has been using DFS-R for any length of time knows how much of a pain it can be.  Anytime we had a slight hiccup on either side of the replication link, it would take weeks to get back to normal.  Granted, DFS-R has its use cases that work very well, but for a dataset this large it was nothing but problem after problem for me.  That is what made me look for alternatives that could be done quickly and cheaply.  In comes Storage Replica, a new feature in 2016 that does volume block-level replication.  I have tested this in a stretch cluster configuration and in a server-to-server configuration.  This post will focus on server-to-server replication, as that is what I decided to do for production.

Why server to server and not stretch cluster?

  1. Stretch clusters work with synchronous replication only.  This is Microsoft’s officially documented stance.  I found that async will work with a stretch cluster, but it will not fail over automatically; getting it to fail over was a manual process.
  2. Our link did not meet the requirements for sync replication.  Our WAN link to DR is 10Gb with around 25ms latency, and MS recommends nothing over 5ms latency.  Now, did I try sync over 25ms?  I sure did.  I put the cowboy hat on and went to town testing all kinds of scenarios that were not officially documented or supported.  But did it work?  Yes, it did, quite well actually, since the way Microsoft does sync replication means it behaves more like async in reality.  Why not stretch cluster then?  Because this is a production set of data and Microsoft does not support sync over 25ms links (as of now).  I was not going to put us in an unsupported setup even though I think it may have worked just fine.

So server to server replication it is.  Here is the setup:

  • 16 TB of unstructured data on 1 volume
  • 2 VMs running on VMware with a standard 18TB vmdk.  RDMs will work too.
  • D drive is replicated, formatted in GPT NTFS, 8k Allocation unit size
  • Users access data using DFS-N

Steps to configure

Install features on both servers

Install-WindowsFeature -ComputerName SERVERNAME -Name Storage-Replica,FS-FileServer -IncludeManagementTools -restart

Add Disks and Format

Each server needs two disks: one for the data and another for the replication log.  In my setup, D is the data volume and L is the log volume, both formatted GPT NTFS.  The D drive is set to 8k allocation units (the default is 4k), which allows for a volume larger than the roughly 16TB that 4k clusters support.  Note: I have seen some info on the interwebs stating not to use Storage Replica for volumes larger than 10TB.  However, I can’t find anything in MS documentation that says that, so I continued on full steam ahead.


Verify your partition sizes are exactly the same on both sides.  The best way to see this is to run "Get-Partition -DriveLetter D | select size" and compare the size of the partition on both servers.  The numbers should match (represented in bytes).  In my case, they did not match exactly and I got the error "Data partition sizes are different in those two groups".  From the research I have done, this is most likely because these VMs were not on identical storage (primary server on Tintri, secondary on an EMC VNX) and the NTFS format rounded differently on each VM.  To fix this, run "Resize-Partition -DriveLetter D -Size xxxxx", xxxxx being the size of the smaller volume.
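
If you want to do the comparison from one machine, something like this works (a sketch; the server names are placeholders and PowerShell remoting is assumed to be enabled):

# Get the data partition size (in bytes) from both servers
$sizes = Invoke-Command -ComputerName PrimaryServerName, SecondaryServerName -ScriptBlock {
    (Get-Partition -DriveLetter D).Size
}
$sizes  # both numbers should match exactly

# Shrink the server with the larger partition down to the smaller size
# (shown against PrimaryServerName here purely as an example)
$target = ($sizes | Measure-Object -Minimum).Minimum
Invoke-Command -ComputerName PrimaryServerName -ScriptBlock {
    Resize-Partition -DriveLetter D -Size $using:target
}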

Copy your Data over

I used robocopy with /mir to get the data copied to our new primary 2016 server.  You only need to copy to one of the servers, not both.  As soon as we fire up Storage Replica, the initial block copy will get the data onto your secondary server for you (much faster than robocopy will, I promise you).  MS did mention that seeding the data "may" help the speed of this initial block copy.  During testing, I did not see a major difference in sync times when I seeded the data first.  Your mileage may vary.
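
The copy itself was just a plain robocopy run, something along these lines (paths are placeholders; tune the retry and thread options to taste):

robocopy \\OldFileServer\Groups D:\Groups /MIR /COPYALL /R:1 /W:1 /MT:32 /LOG:C:\temp\robocopy.log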

Configure Storage Replica

Before turning this on, make sure Windows is fully updated.  There were some bugs in the RTM version that were fixed in later updates.

Test Topology

This will create a pretty little HTML output to verify you have everything you need in place.

Test-SRTopology -SourceComputerName PrimaryServerName -SourceVolumeName d: -SourceLogVolumeName l: -DestinationComputerName SecondaryServerName -DestinationVolumeName d: -DestinationLogVolumeName l: -DurationInMinutes 5 -ResultPath c:\temp

If you have errors when running this, fix them before going on.

Create replication connections

New-SRPartnership -SourceComputerName PrimaryServerName -SourceRGName Groups-rg01 -SourceVolumeName d: -SourceLogVolumeName l: -DestinationComputerName SecondaryServerName -DestinationRGName Groups-rg02 -DestinationVolumeName d: -DestinationLogVolumeName l: -ReplicationMode Asynchronous

Warning: Make sure you get the replication direction correct!  If it's backwards, you will replicate your empty secondary volume over to your primary and wipe out all that data you just robocopied over.

After you run this, the D drive will go offline on the secondary server.  The initial block copy will start.  Check status by running "Get-SRGroup".

Another thing you will probably want to do is increase the log size. By default, it is set to 8GB.  I changed ours to 250GB by running these commands.

Set-SRGroup -LogSizeInBytes 250181844992 -Name Groups-rg01
Set-SRGroup -LogSizeInBytes 250181844992 -Name Groups-rg02

You can also run this handy snippet on the secondary server to see how much data is left to copy.

# Poll the remaining byte count every 5 seconds
while($true) {
 $v = (Get-SRGroup -Name "Groups-rg02").Replicas | Select-Object NumOfBytesRemaining
 [System.Console]::Write("Number of bytes remaining: {0}`r", $v.NumOfBytesRemaining)
 Start-Sleep -s 5
}

I found this initial replication to be very fast.  We moved 16TB of data to the secondary site in just under 24 hours.  DFS-R took weeks to move this same data.

Configure DFS-N

In our case, users were already accessing the data through DFS-N.  However, we needed to change some defaults to make this work the way I wanted.  By default, the namespace client cache is good for 1800 seconds (30 minutes).  The problem with this is that if we fail over to the secondary server, users would need to reboot or wait up to 30 minutes for their drives to reconnect.  That could be a problem, so I lowered the cache time to 30 seconds.  Yes, it will create a lot more referrals from your namespace servers, but it will be worth it :).  To do this, open the DFS Management console and open the properties for the folder you're working with.  Under Referrals, change the cache duration to 30.
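
The same setting can be changed from PowerShell with the DFSN module, if you prefer (the namespace path here is a placeholder):

# Lower the client referral cache (TTL) for the folder to 30 seconds
Set-DfsnFolder -Path "\\domain.com\dfs\Groups" -TimeToLiveSec 30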


Next, create the shares and add your primary server in as a target for the needed folders.  Obviously, you need to be careful here and know what you're doing with your data.  This data is not replicated from your old server to the new one, only robocopied over.  I am not going to go into all the scenarios where that can be a problem, but think about it for a minute.

Now fail over to your secondary server (see the section below on how to do that).  You need to do this so the volume comes online and you can configure the shares on the secondary.  Once the shares are created, add the secondary server in as a DFS-N target, then fail back to the primary.

I keep the secondary server target disabled in DFS to prevent clients from trying to use a volume that is offline.

Failing over to secondary server

Set-SRPartnership -NewSourceComputerName SecondaryServerName -SourceRGName Groups-rg02 -DestinationComputerName PrimaryServerName -DestinationRGName Groups-rg01

Run the above command to bring the volume online on the secondary server and reverse replication.  Then head over to DFS, enable the secondary target and disable the primary target.  Clients should reconnect in 30 seconds or less.
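
The DFS target flip can be scripted as well, which keeps the failover down to a couple of commands (a sketch; the namespace and share paths are placeholders):

# Enable the secondary target and disable the primary
Set-DfsnFolderTarget -Path "\\domain.com\dfs\Groups" -TargetPath "\\SecondaryServerName\Groups" -State Online
Set-DfsnFolderTarget -Path "\\domain.com\dfs\Groups" -TargetPath "\\PrimaryServerName\Groups" -State Offline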

Questions I still have

RPO by default is set to 30, but what that means exactly I don't know.  30 blocks, 30 seconds, 30 minutes, 30% of the log size?  I cannot find anything from Microsoft that explains it in detail.  My guess would be that it is a percentage of the log.  I say that because before I resized the log to 250GB we would fall out of RPO pretty quickly; after the resize we almost never fall out of RPO.

Update:  Microsoft answered this finally.  RPO is defined by seconds.

What is the recommended size of the log?  A percentage of the active data, maybe, or perhaps the change rate multiplied by a certain RTO?  There is no clear guidance from MS.

Update:  https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/storage-replica-frequently-asked-questions#FAQ15.5

Final thoughts

So far I have been impressed with how Storage Replica works.  It was fairly simple to set up, replicated data very fast, and overcomes a bunch of DFS limitations and problems.

Positives

  • Fast replication
  • Replicates encrypted and locked files
  • Decent visibility into replication status
  • Runs over SMB3 (side note: we did try running this through a Riverbed SteelHead for testing and got about a 30% reduction in WAN traffic)

Negatives

  • 1-to-1 replication only.  Cannot do 1-to-many like DFS-R can.
  • Manual steps to fail over (unless you stretch cluster it)

References

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/storage-replica-overview?f=255&MSPPError=-2147217396

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/server-to-server-storage-replication?f=255&MSPPError=-2147217396

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/storage-replica-frequently-asked-questions