A Django Build and Deploy Example¶
The Example Project¶
This project brings up a Django site on an Ubuntu 14 server. The site
itself is trivial, just what django-admin startproject
produces. The cluster consists of an Nginx http proxy in front of a
uwsgi server, the latter launched by supervisord. The uwsgi server is
running the Django app in a virtualenv that we build, configure and
install. The deploy will copy Django static files into a path
accessible to Nginx for static delivery. Static file processing will
only execute once per deploy, as will database migrations, even if we
add multiple Django application server instances.
We will deploy local settings to the Python app only if those
settings change. The Django site SECRET_KEY
will be kept encrypted
in our deploy configuration, and only decrypted in memory while the
deploy runs.
Project Source and DIY¶
If you want to play along at home and try this project yourself, the source can be found in the Fabex repo at https://bitbucket.org/rmorison/fabex/src under the fabex-example tree.
Once you pull the source, and pip install Fabex, check and change the
hostenvs
section in targets/example.yaml
hostenvs:
  server: {ip: 127.0.0.1, ssh_host: 127.0.0.1, ssh_user: ubuntu}
You might want to build an Ubuntu VM and point at that, or you can run the example on localhost. Note, the build does install the apt packages listed in example.yaml, so decide if that’s ok for your local machine.
After you’re all installed and configured, you can build the “whole enchilada” with
fab password:123 target:example install setup deploy
Target Yaml & Encrypting Sensitive Settings¶
Here’s the yaml configuration our Fabex fabfile will use:
domain: example.com
timezone: America/Los_Angeles
locale: en_US.UTF-8

roledefs:
  proxy: [server]
  app: [server]

hostenvs:
  server: {ip: 192.168.56.33, ssh_host: 192.168.56.33, ssh_user: ubuntu}

roleenvs:
  proxy:
    packages: [ntp, nginx]
    server_names: "*.example.com"
    httpd_user: www-data
    static_path: /opt/html
    static_user: ubuntu
    project_name: example
  app:
    packages: [ntp, supervisor, python-dev, python-pip, virtualenvwrapper]
    user: ubuntu
    virtualenvwrapper: /usr/share/virtualenvwrapper/virtualenvwrapper.sh
    workon_home: /home/ubuntu/pyves
    project_name: example
    requirements: [django==1.9, uwsgi]
    debug: yes
    secret_key: !decrypt DeZkYMRlEbWoMre7waFBoKFTxB0AFIBCsDxCzuttAkEeVHL1Xivbuc16OaPc7HmCVi2fOXUWEZp8ZDbFRXSCGjHG903lcAyhhdtuv70GdIU=
    wsgi_port: 10000
Note the !decrypt on the secret_key. !decrypt is a Fabex-specific
yaml macro to decrypt on the fly. That encrypted text was generated
by the fxcrypt utility provided by Fabex.
fxcrypt 123 'el!qznto#i@v=v+b-(3^3*%nv=kzx!j3+%j6h*7or95anju#uc'
which outputs
DeZkYMRlEbWoMre7waFBoKFTxB0AFIBCsDxCzuttAkEeVHL1Xivbuc16OaPc7HmCVi2fOXUWEZp8ZDbFRXSCGjHG903lcAyhhdtuv70GdIU=
Note: your output will differ due to the AES initialization
vector. See fabex/crypt.py
for more info. We will, however, need the 123 encryption key, or
Fabex will raise an exception when it reads the target yaml.
As a check
fxcrypt 123 --decrypt 'DeZkYMRlEbWoMre7waFBoKFTxB0AFIBCsDxCzuttAkEeVHL1Xivbuc16OaPc7HmCVi2fOXUWEZp8ZDbFRXSCGjHG903lcAyhhdtuv70GdIU='
yields
el!qznto#i@v=v+b-(3^3*%nv=kzx!j3+%j6h*7or95anju#uc
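The reason repeated runs of fxcrypt yield different ciphertext for the same plaintext is the fresh random initialization vector on each run. Here is a toy sketch of that idea, not fxcrypt's actual AES scheme (see fabex/crypt.py for the real one): a throwaway XOR stream cipher keyed off the key plus a random IV, with the IV prepended to the output before base64 encoding.

```python
import base64
import hashlib
import os

def toy_encrypt(key, plaintext, iv=None):
    """Toy stream cipher (illustration only, NOT fxcrypt's real AES):
    a random IV makes each encryption of the same plaintext differ."""
    iv = iv if iv is not None else os.urandom(16)
    data = plaintext.encode()
    # derive a keystream from key+IV, long enough to cover the plaintext
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + iv + str(counter).encode()).digest()
        counter += 1
    cipher = bytes(p ^ s for p, s in zip(data, stream))
    return base64.b64encode(iv + cipher).decode()

def toy_decrypt(key, token):
    """Recover plaintext: split off the IV, regenerate the keystream, XOR."""
    raw = base64.b64decode(token)
    iv, cipher = raw[:16], raw[16:]
    stream, counter = b"", 0
    while len(stream) < len(cipher):
        stream += hashlib.sha256(key + iv + str(counter).encode()).digest()
        counter += 1
    return bytes(c ^ s for c, s in zip(cipher, stream)).decode()

key = b"123"
token_a = toy_encrypt(key, "secret")
token_b = toy_encrypt(key, "secret")
print(token_a == token_b)                  # False: fresh IV each call
print(toy_decrypt(key, token_a))           # secret
```

The roundtrip always recovers the plaintext, while the ciphertext never repeats, which is exactly why your fxcrypt output will not match the one shown above.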
Fabfile Walkthrough¶
A Fabex fabfile.py should import from fabex.api instead of
fabric.api. Fabex wraps many of the Fabric functions and adds several
of its own, e.g., the dryrun, target, and password tasks.
Again, you’ll find this complete example at https://bitbucket.org/rmorison/fabex/src under the fabex-example tree.
Import and Initialize¶
"""A Fabex example deploy of Django with an Nginx proxy"""
import os
from contextlib import contextmanager
## get fabric from fabex, with addons
from fabex.api import *
from fabex.contrib.files import *
## fabex specific config
fabex_config(config={'target_dir': 'targets',
                     'template_dir': 'templates',
                     'template_config': 'templates.yaml'})
The fabex_config call sets up the
- target directory, where site-specific deploy values are stored in yaml files
- template directory, where Jinja2 templates for upload to the server are stored
- template config, with per-template upload directives.
More on the templating subsystem as we proceed.
Helper Functions¶
Next come two helpers used later: a virtualenv context manager, for
running commands under a Python virtualenv, and an ssh task, to run a
command on a host via ssh.
## utility functions and helper tasks

@contextmanager
def virtualenv(workon=None, cd=None):
    """Source .virtualenvrc for virtualenvwrapper; workon and cdvirtualenv, if given"""
    pre = ["source ~/.virtualenvrc"]
    if workon:
        pre.append("workon {}".format(workon))
        pre.append("cdvirtualenv")
    if cd:
        pre.append("cd {}".format(cd))
    with prefix(" && ".join(pre)):
        yield

@task
def ssh(cmd, use_sudo=''):
    """Run a command remotely over ssh. Use sudo if use_sudo=y"""
    return sudo(cmd) if use_sudo.startswith('y') else run(cmd)
Install Tasks¶
Next up, we have our first block of tasks, for the install step. We’ll
update existing packages, install per role packages, and setup ssh
keys. In the task_roles
decorators, note the group='install'
setting, along with the runs_once_per_host
decorator.
The group
argument allows several tasks to be bundled into an
“uber” task. The runs_once_per_host decorator is a Fabex add-on,
similar to Fabric’s runs_once. A task so decorated will run once,
and only once, on each host it matches up with.
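The once-per-host idea can be sketched in plain Python. This is a stand-in, not Fabex's real decorator (which keys off Fabric's env.host implicitly); here the host is passed as an explicit argument for illustration.

```python
import functools

def runs_once_per_host(func):
    """Sketch: run the wrapped task at most once per host name."""
    seen = set()
    @functools.wraps(func)
    def wrapper(host, *args, **kwargs):
        if host in seen:
            return None  # already ran on this host; skip
        seen.add(host)
        return func(host, *args, **kwargs)
    return wrapper

calls = []

@runs_once_per_host
def apt_upgrade(host):
    calls.append(host)

for host in ["web1", "web1", "web2"]:
    apt_upgrade(host)
print(calls)  # ['web1', 'web2']
```

Even though apt_upgrade is invoked three times, it executes only once per distinct host.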
## install

@task_roles(['proxy', 'app'], group='install')
@runs_once_per_host
def apt_upgrade():
    """apt update package source listings and update installed packages"""
    sudo("DEBIAN_FRONTEND=noninteractive apt-get -y update")
    sudo("DEBIAN_FRONTEND=noninteractive apt-get -y upgrade")

@task_roles(['proxy', 'app'], group='install')
def apt_install():
    """apt install role required packages"""
    sudo('DEBIAN_FRONTEND=noninteractive apt-get install --yes {}'
         .format(' '.join(env.packages)))

@task_roles(['proxy', 'app'], group='install')
@runs_once_per_host
def setup_sshkey():
    """ssh-keygen a key"""
    run("[ -e ~/.ssh/id_dsa ] && echo id_dsa exists"
        " || ssh-keygen -t dsa -N '' -f ~/.ssh/id_dsa")
In fact, if you look at the fab -l output for this fabfile you’ll see

install    Fabex group: apt_upgrade, apt_install, setup_sshkey

in the listed tasks. Fabex added this task when it saw the group
arg in task_roles, and built a list of tasks to invoke.
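The group mechanism can be sketched as a registry the decorator appends to. Again, this is an illustrative stand-in, not Fabex's actual task_roles implementation (role handling is omitted here).

```python
# Hypothetical sketch of group-based task bundling, not Fabex's real code
task_groups = {}

def task_roles(roles, group=None):
    """Register the decorated task under a group, building an 'uber' task."""
    def decorator(func):
        if group:
            task_groups.setdefault(group, []).append(func)
        return func
    return decorator

@task_roles(['proxy', 'app'], group='install')
def apt_upgrade():
    return 'apt_upgrade'

@task_roles(['proxy', 'app'], group='install')
def apt_install():
    return 'apt_install'

def run_group(name):
    """Invoke every task registered under the group, in decoration order."""
    return [task() for task in task_groups[name]]

print(run_group('install'))  # ['apt_upgrade', 'apt_install']
```

Invoking the group name then runs its member tasks in the order they were defined.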
Setup Tasks¶
Here things get interesting, as setting up Nginx and a Python
virtualenv are more involved. In the Nginx setup, we handle the tricky
business of getting app server ssh public keys and adding them to
the proxy’s .ssh/authorized_keys file. (Remember, there could be many app servers.)
We use env.roledefs to find all the app servers, and the Fabex host_settings
context manager to run the ssh task on an app server instead of the proxy.
With that, we get that app server’s ssh key, and apply it to the proxy.
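The settings swap at the heart of this trick looks roughly like the following. This is a minimal stand-in, not Fabex's actual host_settings; env here is a plain dict rather than Fabric's env object.

```python
from contextlib import contextmanager

# stand-in for Fabric's env object
env = {'host': 'proxy1', 'user': 'ubuntu'}

@contextmanager
def host_settings(host, **overrides):
    """Sketch: temporarily point env at another host, restoring on exit
    (the real Fabex version also applies that host's hostenvs values)."""
    saved = dict(env)
    env['host'] = host
    env.update(overrides)
    try:
        yield
    finally:
        env.clear()
        env.update(saved)

with host_settings('app1'):
    inside = env['host']   # tasks here run against app1
print(inside, env['host'])  # app1 proxy1
```

Inside the with block, tasks target the app server; on exit the original proxy context is restored, so the key can then be applied to the proxy.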
## setup

@task_roles('proxy', group='setup')
def setup_nginx():
    """Build an (nginx) static server/load balancer/appserver proxy"""
    # make sure dirs exist and any default site file is gone
    sudo("mkdir -p /etc/nginx/conf.d /etc/nginx/sites-enabled /etc/nginx/sites-available")
    sudo("rm -f /etc/nginx/sites-enabled/default")
    # landing path for django static files
    sudo("mkdir -p {static_path} && chown {static_user} {static_path}".format(**env))
    # get app host ssh keys, so app can sync static files to proxy
    for app_server in env.roledefs['app']:
        with host_settings(app_server):
            try:
                pubkey = execute(ssh, cmd="cat ~/.ssh/id_dsa.pub").values()[0]
            except (ValueError, IndexError):
                abort("{} could not get ssh key from {}".format(env.host, app_server))
        append('~/.ssh/authorized_keys', pubkey)
Now we set up the Django application server. The virtualenvwrapper package was installed by the previous install task, so we’ll use that to build up a Python virtualenv and pip install to it.
@task_roles('app', group='setup')
def setup_virtualenv():
    """Setup virtualenvwrapper and make a virtualenv for the project"""
    # virtualenv setup
    upload_project_template('virtualenvrc')
    append(".bashrc", "source $HOME/.virtualenvrc")
    if not exists(env.workon_home):
        run("mkdir -v -p {workon_home}".format(**env))
    # mkvirtualenv
    with virtualenv():
        run("lsvirtualenv | grep -q '^{project_name}$'"
            " && echo virtualenv {project_name} exists"
            " || mkvirtualenv {project_name}".format(**env))

@task_roles('app', group='setup')
def pip_install():
    """Install requirements"""
    # clone repo, setup django project, pip install
    with virtualenv(env.project_name):
        run('pip install {}'.format(' '.join(env.requirements)))
Here, we meet Fabex’s use of Jinja2 templates. Fabric’s contrib.files
module has an upload_template function. Fabex extends that with an
upload_project_template facility. “Project templates” are listed
in the templates.yaml we referenced in the fabex_config init.
Here is templates.yaml from the example project:
nginx-site.conf:
  local_path: nginx-site.conf
  remote_path: /etc/nginx/sites-available/{{project_name}}
  reload_command: >
    cd /etc/nginx/sites-enabled
    && ln -s -f -v ../sites-available/{{project_name}} .
    && service nginx restart
  owner: root:root

# app
virtualenvrc:
  local_path: virtualenvrc
  remote_path: .virtualenvrc

supervisord-uwsgi:
  local_path: supervisord-uwsgi.conf
  remote_path: "/etc/supervisor/conf.d/uwsgi.conf"
  reload_command: supervisorctl update
  owner: root

local_settings:
  local_path: local_settings.py
  remote_path: "{{settings_dir}}/local_settings.py"

uwsgi.ini:
  local_path: uwsgi.ini
  remote_path: "{{settings_dir}}/uwsgi.ini"
The top-level key is what we reference in the
upload_project_template call. The sub-attributes tell Fabex where
to find the source file, where to put the templated output file, an
optional command to run after placing, and an optional owner for a
chown after placing.
Note especially, we can inject env
values into attribute values
using Jinja2 syntax. This feature turns out to be extremely useful
in practical applications. Of course, the templates themselves are
Jinja2 processed, with the env
context, as provided by Fabex.
And finally, templated output is built and compared to what is on the
host. If there’s no diff, no file is uploaded and no reload_command
is run.
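The diff check boils down to something like the following sketch (Fabex's actual comparison may differ in detail): render locally, compare against the remote content, and only upload and reload on a mismatch.

```python
import hashlib

def needs_upload(rendered_text, remote_text):
    """Sketch of the change check: upload (and run reload_command)
    only when the rendered template differs from the host's copy."""
    digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
    return digest(rendered_text) != digest(remote_text)

print(needs_upload("server_name *.example.com;",
                   "server_name *.example.com;"))  # False: skip upload
print(needs_upload("server_name *.example.com;",
                   "server_name example.org;"))    # True: upload + reload
```

Skipping no-op uploads is what makes repeated deploys idempotent: Nginx and supervisord only restart when their configs actually change.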
Kudos to the Mezzanine project for inspiring this approach.
Deploy Tasks¶
And finally, deploy. We use a number of “cross role tricks”. One in
config_proxy
is to build the list of app server IP addresses for
Nginx’s upstream
proxy directive. Another, in once_after_app, gets the proxy’s IP address to rsync Django’s collectstatic output into an Nginx-accessible tree.
The remainder I’ll leave as an exercise for the reader ;-)
## deploy

@task_roles('proxy', group='deploy')
def config_proxy():
    """Deploy config changes to load balancer"""
    wsgi_port = env.roleenvs['app']['wsgi_port']
    appserver_ips = [env.hostenvs[host]['ip'] for host in env.roledefs['app']]
    wsgi_servers = ["server {}:{};".format(ip, wsgi_port) for ip in appserver_ips]
    upstream_servers = '\n '.join(wsgi_servers)
    with settings(upstream_servers=upstream_servers):
        upload_project_template('nginx-site.conf')

@task_roles('app', group='deploy')
def deploy_app():
    """Deploy webapp"""
    with virtualenv(env.project_name):
        # pull and update code in real app
        run("[ -e {project_name} ] && echo app {project_name} exists"
            " || django-admin startproject {project_name}".format(**env))
    # uploads, appends don't obey cd
    django_path = os.path.join(env.workon_home, *[env.project_name]*2)
    app_settings = {'django_path': django_path,
                    'settings_dir': os.path.join(django_path, env.project_name),
                    'settings_py': os.path.join(django_path, env.project_name, 'settings.py'),
                    'static_root': os.path.join(django_path, 'static')}
    with settings(**app_settings):
        upload_project_template('local_settings', reload=False)
        upload_project_template('uwsgi.ini', reload=False)
        append(app_settings['settings_py'], "from local_settings import *")
        upload_project_template('supervisord-uwsgi')
    start_or_restart = sudo("supervisorctl status uwsgi | grep -q '^uwsgi *RUNNING'"
                            " && echo restart || echo start")
    sudo("supervisorctl {} uwsgi".format(start_or_restart))

@task_roles('app', group='deploy')
@runs_once
def once_after_app():
    """Deploy actions that should run only once per deploy; should follow deploy_app"""
    with virtualenv(env.project_name, cd=env.project_name):
        # update db and statics
        run("./manage.py migrate --noinput")
        run("mkdir -p static && ./manage.py collectstatic --noinput --clear")
        # rsync statics to proxy
        proxy_ip = env.hostenvs[env.roledefs['proxy'][0]]['ip']
        with role_settings('proxy', proxy_ip=proxy_ip):
            run("rsync -e 'ssh -o StrictHostKeyChecking=no' -avz"
                " static {proxy_ip}:{static_path}".format(**env))
Target for a “Real” Cluster¶
Finally, let’s look at what it would take to build a cluster with 3 application servers. The top of our target yaml becomes
roledefs:
  proxy: [server]
  app: [app1, app2, app3]

hostenvs:
  server: {ip: 192.168.56.33, ssh_host: 192.168.56.33, ssh_user: ubuntu}
  app1: {ip: 192.168.56.34, ssh_host: 192.168.56.34, ssh_user: ubuntu}
  app2: {ip: 192.168.56.35, ssh_host: 192.168.56.35, ssh_user: ubuntu}
  app3: {ip: 192.168.56.36, ssh_host: 192.168.56.36, ssh_user: ubuntu}
That’s it, not a line of change in the fabfile!
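To see why no fabfile change is needed, here is a standalone sketch (plain Python, outside Fabric/Fabex) of the upstream-list computation from config_proxy run against the expanded target data:

```python
# In-memory copy of the expanded target yaml, mirroring config_proxy's logic
roledefs = {'proxy': ['server'], 'app': ['app1', 'app2', 'app3']}
hostenvs = {'server': {'ip': '192.168.56.33'},
            'app1': {'ip': '192.168.56.34'},
            'app2': {'ip': '192.168.56.35'},
            'app3': {'ip': '192.168.56.36'}}
wsgi_port = 10000

# same list comprehension as config_proxy, scaled by the data alone
appserver_ips = [hostenvs[host]['ip'] for host in roledefs['app']]
wsgi_servers = ["server {}:{};".format(ip, wsgi_port) for ip in appserver_ips]
upstream_servers = '\n '.join(wsgi_servers)
print(upstream_servers)
```

With three app hosts in roledefs, the same code emits three upstream server lines for Nginx; adding a fourth host is again a yaml-only change.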