  • Ansible Playbook for a Django Stack (Nginx, Gunicorn, PostgreSQL, Memcached, Virtualenv, Supervisor)

    Posted on April 20th, 2014 by webmaster

    I decided to create a separate GitHub project for the Ansible playbook I’m currently using to fully provision a production server for my open-source app, GlucoseTracker, so it can be reused by people who are using the same stack.

    You can download the playbook here: https://github.com/jcalazan/ansible-django-stack

    The playbook can fully provision an Ubuntu 12.04 LTS server (I’ll test 14.04 soon) from a base image with the following applications, which are quite popular in the Django community:

    • Nginx
    • Gunicorn
    • PostgreSQL
    • Memcached
    • Virtualenv
    • Supervisor

    I used this awesome guide to set up my server initially (which took like half a day) before automating the entire process with Ansible.  If I need to move to a new server or cloud provider, I can pretty much rebuild a fully-configured server in about 5 minutes with one command.  Pretty neat.

    Note: I’ve also run this playbook successfully on Amazon EC2, Rackspace, and Digital Ocean virtual private servers.

    TL;DR

    For those who are in a hurry, simply install Ansible, Vagrant, and VirtualBox (if you don’t have them already), clone the project from GitHub, and type this in from the project directory:

    vagrant up

    Wait a few minutes for Ansible to do its magic.  Visit http://192.168.33.15 when finished. Congrats, you just deployed a fully configured Django app!
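
    For reference, the whole sequence looks roughly like this (assuming you let git clone use the repository’s default directory name):

    git clone https://github.com/jcalazan/ansible-django-stack.git
    cd ansible-django-stack
    vagrant up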

    The Juicy Details

    Below are some things you should know before using this playbook for your projects.

    Project Structure

    I have my Django project structure set up this way:

    glucose-tracker\ (project directory)
        glucosetracker\ (application directory)
        settings\
            base.py
            local.py
            dev.py
        requirements.txt file, scripts, and other files and directories I don’t consider part of the application

    If you have the same project structure that I have, then all you really have to change is the env_vars/base file to get started, where you can set the Git repo location, project name, and the application name which are used throughout the playbook.
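
    As a rough sketch, the relevant part of env_vars/base might look something like the following (the variable names below are illustrative; check the file itself for the exact names it uses):

    # env_vars/base (illustrative values)
    project_name: glucose-tracker
    application_name: glucosetracker
    git_repo: https://github.com/jcalazan/glucose-tracker.git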

    If you don’t have the same project structure, you will need to change the group_vars/webservers file as well (and possibly the environment-specific vars file in env_vars/ if you don’t split up your settings file), where you can set the path settings to match your project structure.

    Environment Variables

    I like to separate my environment-specific settings from the main code repo for security reasons and for easier management.  For example,  in my Django settings file, I set the EMAIL_HOST_PASSWORD setting to something like:

    EMAIL_HOST_PASSWORD = os.environ['EMAIL_HOST_PASSWORD']

    This way, I won’t have to leave the password in the code, and if I need to change the email password, I can do so quickly by changing the environment variable on the server instead of modifying the code and re-deploying it.

    The way I have this set up is that a postactivate script (see roles/web/templates/) creates the environment variables.  It gets run after the virtualenv is activated, so those settings apply only to that virtualenv.
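
    Stripped down, a postactivate script is really just a series of export statements, along these lines (the values here are placeholders):

    #!/bin/bash
    # Environment variables for this virtualenv only; sourced after 'activate'.
    export DJANGO_SECRET_KEY='replace-me'
    export EMAIL_HOST_PASSWORD='replace-me'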

    Now, because I like having all my configurations in my Ansible playbook repo, I keep these values in a vars file and encrypt them with Ansible Vault (see my previous post about this for more details).

    Applying Roles

    The playbooks in the repo apply all the roles.  If you only want certain roles applied to your server, simply remove the ones you don’t need from the roles: section.

    For example, if you don’t use Memcached, your roles: section will look something like this:

      roles:
        - base
        - db
        - web
    

    Django Management Commands

    In env_vars/, you will see the following settings in the environment-specific vars files:

    run_django_syncdb: yes
    run_django_south_migration: yes
    run_django_collectstatic: yes
    

    If you don’t want to run some of these commands, simply set the value to no.
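
    The playbook can then use these flags to decide whether to run each command.  In Ansible this is typically done with a when: condition; a simplified sketch (not necessarily the exact task from the repo) looks like this:

    - name: Run Django syncdb
      django_manage: command=syncdb
                     app_path={{ application_path }}
                     settings={{ django_settings_file }}
                     virtualenv={{ virtualenv_path }}
      when: run_django_syncdb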

    OpenSSL ‘Heartbleed’ Patch

    Don’t worry, the playbook already takes care of this for you.  The first task in the playbook does an apt-get update and ensures that openssl and libssl are at their latest versions. ;)
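
    If you’re curious, a task along those lines looks roughly like this (the package names are the Ubuntu 12.04 ones and are shown only as an example):

    - name: Update apt cache and upgrade OpenSSL packages
      apt: name={{ item }} state=latest update_cache=yes
      with_items:
        - openssl
        - libssl1.0.0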

    I think that pretty much covers most of the questions you might have if you decide to use this Ansible playbook for your Django projects. I might add a few more roles here in the next few weeks, such as Celery, RabbitMQ, and Solr, as we use them at work and we’re currently in the process of automating our infrastructure.

    If you have any questions or suggestions, please feel free to leave a comment below.

  • How to deploy encrypted copies of your SSL keys and other files with Ansible and OpenSSL

    Posted on April 5th, 2014 by webmaster

    I’ve been working on fully automating server provisioning and deployment of my Django app, GlucoseTracker, with Ansible over the last couple of weeks. Since I made this project open-source, I needed to make sure that passwords, secret keys, and other sensitive information are encrypted when I push my code to my repository.

    Of course, I have the option to not commit them to the repo, but I want to be able to build my entire infrastructure from code and maintain all configuration in one place, the git repo.

    Fortunately, Ansible has a command-line tool called Ansible Vault (it comes with the core package) that allows you to encrypt your configuration files and decrypt them during deployment by passing in the password or a password file on the command line.  This is mainly useful for encrypting your environment variable files that contain the passwords/keys for your application.

    For example, in my Django settings, instead of assigning the values directly in the settings file, I do something like this:

    SECRET_KEY = os.environ['DJANGO_SECRET_KEY']

    Django would then read the value from the server’s environment variables.  In my case, since I use virtualenv, I have a postactivate script that sets the environment variables for that virtualenv.

    Since I want Ansible to fully automate my server configuration and store all the information that I need to do so in a git repo, I have to encrypt the variables that my app uses in production.  For example, I have a file called production in the env_vars folder of my repo that the playbook will use.  I encrypt this file with a password using Ansible Vault, and when running the playbook I decrypt it by passing in the password as a command argument.  If you use CI tools like Jenkins, you can set this password in Jenkins (perhaps as an environment variable) so you won’t have to type it in manually.
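
    Encrypting the vars file in the first place is a single command (Ansible Vault will prompt for the password to use):

    ansible-vault encrypt env_vars/production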

    If I need to make a change in the vars file, I can simply type:

    ansible-vault edit env_vars/production

    This will prompt me for the password and open the decrypted contents in vim, where I can make my changes and save them back in encrypted form.  This is a nice option when you just need to make small changes, since you can’t forget to re-encrypt the file after decrypting it.

    Here’s what an Ansible Vault encrypted vars file will look like:

    $ANSIBLE_VAULT;1.0;AES
    53616c7465645f5f5d2e57e60827b394fec9e16fef1954b9578542cc3098c72cf30cfe25dbf1fb49
    0c0ace757c1b4f9e60860bf91988b21dc2636bf5cf5295396c22e7ba34af68a702ce2b224091aaa7
    7e579aff2159f5cbfa2e05caf432cef3a32729aef212f5509c89f2a681d113b4b2ffc01dab88e5a6
    9e8092f59b9adb9960af646551490111131912ece4df55c6045e5d4e49ee8d3c143ca6ba95492f12
    ddaf044ef02aa25d78a14ba60411ecdc72aa68c6a756d5000906cfeac62e2e975a2f72b172a1a386
    b0e8431213018e074c810b7851e82c1770c985fbae8bd3f1c15367aeebd674da7e228e0a0864467e
    88a67ce40646f39d43ea9810f8e5f4273d7c035704f2087b85a18e6f2ea37f054ca1deae8de5a588
    99727f9116540552cc4ed107bb8ba133787bb0321bdd0f6464ece4a88e60cf301cf24c6b646a931b
    5b8caa27dc5b6cc6f3c40bff172b0ea778b6ced036d45acb24a34e016ff9d55a2a75de65307eaf97
    593e88a6c492f97427d3536f7ca0f15f5a253d4e16903efdcbb92b2cf20c74bc45d2cc2ee90ecd0d
    257e8f334416de3369735d7e5b2b48afc2c7d34948523c8e429a7e15f2c62c1c7de0b88d87af6096
    d44581bfd1d547362874fabd5ae188be92cd8c38fbdaeabb2af217c11eb6010579fb8ae0f1a24608
    94504a5e07e866b9c3daa7335d0f18cc7cec914790b39d17a2c2c8c76d3cfd24903d58817290b873
    92101ee222074488c3a9b61d2f8ccaf6261dd897f1e49abe6f5c45945d3d84bcce3acaf8c9f16c17
    536f3aea2f289395ef6987908b20b99d3377b4b50f8cb2064c628b5cc281b609fcba7f49a35b2c73
    e1d232dbc107f311db7dd2391f7e8f77c187bcef2904b11ab0c24ebf69af37901e8ad178ff383414
    335a448f44b63e39cbcbabaef94a0f33024393c16e78240bfd33b3f031a9461287ef1aef1d2959fd
    ec8fb7c8be7f7601bb4fc9fbefd5f6f62b7d44dde16ad59ac6144a4b08339efeb9bc39a106082eb9
    da14ab70e43b261ae1d717f97edaff5a1a40ef5455da5a2ee69e83893a11b859b758ffe9eb2b09ff
    bc9169086fcf3b6e66c15ba5ae2c6b98705c0a486fa3e6a05cfbc1bff5006fc3abe078643b372655
    0c47e02238f533d877e1fd764fa7eb0772d3fb75532156c928d5ecf3fd0980f9ac274f43cab3ca71
    42b8130b116ecf497771f927bed2f1e9f0a39a96f27dd8d894d15821614ad4953785767080492d2c
    e898145cc0430c19bcc2f8b139b864307867940b32a8f1adb5fa39114b18304b595a3240611ccac4
    9988715adcab5420eb68e0abf2cf8133467fe2f54680102ef5ab3b8b158af1b8048bac65b178a847
    30ab48c579aee1a820e2b386ae3719d4a923e6d2a3440b55672bb872774386d8a10ddd8d347aaabc
    2dfecacbb1b7018aa79ead9cb820cfcf519efdf31e956be89b1b13a659dab3a769f24100b226da6b
    87d4793ef44b3157984681455dfa00e295b50f7ae7d2ed5e1070f296a9d297e4c05190d65537ec10
    3a7684ce2da75ecd8c6aebfe0616e67dd1d64ac216db208ba8afdb701d4402c203f0238a69443d71

    What about files that get copied to the server, such as private keys for SSL certificates?

    This is where I had to do something extra, as the Ansible copy module copies these files exactly as they’re stored in the repository (i.e. in their encrypted form).  To get around this, I simply used OpenSSL to encrypt these files with symmetric encryption and set the password to decrypt them in my vars file (which Ansible Vault encrypts).

    To encrypt a file with OpenSSL using AES 256 encryption:

    openssl aes-256-cbc -salt -a -e -in ssl_signed/unencrypted.key -out ssl_signed/encrypted.key -k MysupasecuresecretPasswordZ.x!!

    To decrypt an AES 256 encrypted file with OpenSSL:

    openssl aes-256-cbc -salt -a -d -in ssl_signed/encrypted.key -out ssl_signed/unencrypted.key -k MysupasecuresecretPasswordZ.x!!

    Example vars file:

    # Nginx settings.
    nginx_server_name: www.glucosetracker.net
    ssl_src_dir: ssl_signed
    ssl_dest_dir: /etc/ssl
    ssl_key_password: MysupasecuresecretPasswordZ.x!!

    I have tasks in my playbook that copy my SSL cert and key to the remote server and run the OpenSSL command to decrypt the key (using the password from the ssl_key_password variable in my vars file):

    - name: Copy the SSL cert and key to the remote server
      copy: src={{ ssl_src_dir }}/ dest={{ ssl_dest_dir }}
    
    - name: Decrypt the SSL key
      command: openssl aes-256-cbc -salt -a -d -in {{ ssl_dest_dir }}/nginx.key
               -out {{ ssl_dest_dir }}/decrypted.key -k {{ ssl_key_password }}
               creates={{ ssl_dest_dir }}/decrypted.key
    
    - name: Rename the decrypted SSL key
      command: mv {{ ssl_dest_dir }}/decrypted.key {{ ssl_dest_dir }}/nginx.key
               removes={{ ssl_dest_dir }}/decrypted.key
    

    Now let’s run the production playbook:

    ansible-playbook -i inventory/production --private-key=/aws-keys/ec2-glucosetracker.pem --vault-password-file=~/ansible/decryption_password -vvvv production.yml

    This is just one example, and the same simple concept can be applied to different scenarios.  Just to summarize the steps:

    1. Encrypt your files with OpenSSL using symmetric encryption.
    2. Assign the decryption password to a variable in your Ansible vars file.
    3. Encrypt your vars file using Ansible Vault.
    4. Create a task in your playbook to decrypt the encrypted files using OpenSSL and the password in the encrypted vars file.
    5. Run your Ansible playbook, passing in the Ansible Vault password in the command or specifying the file where the password is stored.

    View my entire playbook here:

    https://github.com/jcalazan/glucose-tracker/tree/master/deployment/ansible

  • Django Tip: How to configure Gunicorn to auto-reload your code during development

    Posted on March 30th, 2014 by webmaster

    I just finished fully automating my entire server stack for my Django app with Ansible and Vagrant (using VirtualBox).  One of the reasons I did this is to make my development environment as close to production as possible, to hopefully eliminate any surprises when deploying to production.  It also allows me to set up a development environment very quickly, as I won’t have to deal with the manual installation and configuration of different packages.  In a team environment, the benefit of doing this multiplies.

    This is basically my process:

    1. Type in ‘vagrant up’ to create or start the VirtualBox virtual machine.

    2. I have my virtual machine configured via Vagrant to share my local code (which is located in my Dropbox folder) with the virtual machine.

    3. I make a change to my code, then open my web browser and enter my virtual machine’s IP, which is statically set to 192.168.33.10.  The browser shows my changes.

    What I want Gunicorn to do is similar to what the Django runserver does: automatically reload the application server when the code changes.

    There are different ways to approach this, such as using a package called watchdog to watch for file changes and then restart Gunicorn.  But it turns out there’s an even simpler way to do it with Gunicorn: set the max_requests setting to 1 (see Gunicorn’s full list of settings).  When calling Gunicorn, simply add this option (note that it starts with 2 dashes):

    --max-requests 1

    What this basically does is tell Gunicorn to restart the process for every request, which reloads your code.  It won’t know whether your code changed or not; it will always reload it.  For production this is probably not a good idea, but during development it’s a nice, simple trick, and you won’t really see a difference in performance since you’d be the only user.
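
    If you launch Gunicorn directly from the command line rather than through a script, the equivalent would be something like this (the WSGI module path is just an example):

    gunicorn glucosetracker.wsgi:application --max-requests 1 --bind 0.0.0.0:8000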

    Here’s the shell script that I use to start my Gunicorn process (note that I use Nginx to communicate with Gunicorn via a socket file; also note that the variables here are placeholders that Ansible replaces with the actual values):

    #!/bin/bash
    
    NAME="{{ application_name }}"
    DJANGODIR={{ application_path }}
    SOCKFILE={{ virtualenv_path }}/run/gunicorn.sock
    USER={{ gunicorn_user }}
    GROUP={{ gunicorn_group }}
    NUM_WORKERS=3
    
    # Set this to 0 for unlimited requests. During development, you might want
    # to set this to 1 to automatically restart the process on each request
    # (i.e. your code will be reloaded on every request).
    MAX_REQUESTS={{ gunicorn_max_requests }}
    
    echo "Starting $NAME as `whoami`"
    
    # Activate the virtual environment.
    cd $DJANGODIR
    source ../../bin/activate
    source ../../bin/postactivate
    
    # Create the run directory if it doesn't exist.
    RUNDIR=$(dirname $SOCKFILE)
    test -d $RUNDIR || mkdir -p $RUNDIR
    
    # Programs meant to be run under supervisor should not daemonize themselves
    # (do not use --daemon).
    exec python manage.py run_gunicorn \
        --settings={{ django_settings_file }} \
        --name $NAME \
        --workers $NUM_WORKERS \
        --max-requests $MAX_REQUESTS \
        --user=$USER --group=$GROUP \
        --log-level=debug \
        --bind=unix:$SOCKFILE
    
  • Deploying your Django app with Fabric

    Posted on January 25th, 2014 by webmaster

    I’ve been making quite a few improvements and changes to my Django app, GlucoseTracker, lately, so the small amount of time I spent creating a deployment script with Fabric has already paid off.

    Fabric is basically a library written in Python that lets you run commands on remote servers (works locally as well) via SSH. It’s very easy to use and can save you a lot of time. It eliminates the need to connect to a remote server manually, and if you have a lot of servers to update, then the time savings really add up.

    Since I only have one server for my production environment, my setup and deployment process are very simple.  I use GitHub to store my code, a virtualenv environment that my code is deployed to, Gunicorn as the WSGI server (managed by Supervisor), PostgreSQL for the database, and Nginx for the web server.

    My deployment process goes something like this:

    1. Run unit tests.
    2. Pull latest code from the master branch hosted on GitHub.
    3. Activate virtualenv.
    4. Run pip install against the requirements file (in case a new library was added or updated).
    5. Run South migrations for all apps (in case there were changes to the database/table schemas).
    6. Restart Gunicorn with Supervisor.

    Here’s what my fabfile.py looks like:

    from fabric.api import local, env, cd, sudo
    
    env.hosts = ['www.glucosetracker.net']
    
    # The user account that owns the application files and folders.
    owner = 'glucosetracker'
    
    app_name = 'glucosetracker'
    app_directory = '/webapps/glucosetracker/glucose-tracker'
    settings_file = 'settings.production'
    
    
    def run_tests():
        local('coverage run manage.py test -v 2 --settings=settings.test')
    
    
    def deploy():
        """
        Deploy the app to the remote host.
    
        Steps:
            1. Change to the app's directory.
            2. Pull changes from master branch in git.
            3. Activate virtualenv.
            4. Run pip install using the requirements.txt file.
            5. Run South migrations.
            6. Restart gunicorn WSGI server using supervisor.
        """
        with cd(app_directory):
    
            sudo('git pull', user=owner)
    
            venv_command = 'source ../bin/activate'
    
            pip_command = 'pip install -r requirements.txt'
            sudo('%s && %s' % (venv_command, pip_command), user=owner)
    
            south_command = 'python glucosetracker/manage.py migrate --all ' \
                            '--settings=%s' % settings_file
            sudo('%s && %s' % (venv_command, south_command), user=owner)
    
            sudo('supervisorctl restart glucosetracker')
    

    To run this script, you first need to install the Fabric library:

    pip install fabric

    Then call the run_tests task by typing:

    fab run_tests

    Deploy with:

    fab deploy

    Make sure to run the fab command in the directory where the fabfile.py file is located.

    For the run_tests() task, you’ll notice that I use the local() function.  Since I don’t have a staging environment, I just run my tests locally and then deploy directly to my production server.

    Also note that each command runs in its own SSH session (state isn’t persistent between calls), which is why the script combines the virtualenv activation command with the commands that depend on it being active.  I also run my app under a user account named glucosetracker, which has limited access to the server, to minimize the damage in case someone figures out a way to run malicious code through my app.

    That’s pretty much it.  This is just a very simple example and you can do a lot more with it.  It takes very little time to get started, so even for small projects it’s definitely worth checking out.  It’s really nice to be able to make even just tiny changes to your app and have it deployed in seconds by running one simple command.

  • Display messages to your users with django-sticky-messages

    Posted on January 8th, 2014 by webmaster

    I launched my Django app, GlucoseTracker, at the beginning of this year, and I’ve already added a few new things to it.  To notify users about these new features, I used a nice, simple app called django-sticky-messages.  It was written by a friend of mine for his Django app, Pool Manager.

    The app lets you set the message to display to your users in the Django admin.  You can set a start and end time for when the message will be displayed.  If you use Twitter Bootstrap 3, you can make the message dismissible, similar to the one shown below, using the CSS classes alert alert-dismissable.

    [Screenshot: a sticky message rendered as a dismissible Bootstrap alert]

    The code in my dashboard template looks something like this:

    
    {% if sticky_message %}
    <div class="alert alert-info alert-dismissable">
      <button type="button" class="close" data-dismiss="alert"
        aria-hidden="true">&times;</button>
      {{ sticky_message.message|safe }}
    </div>
    {% endif %}
    
    

    You can change the alert-info class to change the color of the message. For example, alert-success will display a message with a green font and background.

    This is also great for notifying your users if you need to take down the server for scheduled maintenance.