Fabric is an awesome tool. Like Capistrano and Vlad, it makes deployments a lot simpler than with shell scripts on their own.
However, once the complexity of your setup starts to grow, you very quickly start wishing for a cleaner and more powerful API.
And once you are deploying to more than 30 servers, you really wish that Fabric would run commands in parallel instead of doing them sequentially, one after another.
Having recently experienced this pain, I decided to rework the core of Fabric. I've documented it below — describing all the changes to the current Fabric 1.0a — including funky new features like fab shell and staged deployments!
https://github.com/tav/pylibs/tree/master/fabric
Task Decorator
Traditionally, all functions within your fabfile.py were implicitly exposed as Fabric commands. So once you started abstracting your script to be more manageable, you had to use _underscore hacks to stop functions being exposed as commands, e.g.
from support.module import SomeClass as _SomeClass
from urllib import urlencode as _urlencode, urlopen as _urlopen
def _support_function():
    ...
Not only is this ugly and cumbersome, but it also goes against The Zen of Python philosophy of “explicit is better than implicit”. So I've added a @task decorator to explicitly mark functions as Fabric commands, e.g.
from urllib import urlopen
from fabric.api import *
def get_latest_commit():
    return urlopen('http://commits.server.com/latest').read()

@task
def check():
    """check if local changes have been committed"""
    local_version = local('git rev-parse HEAD')
    if local_version != get_latest_commit():
        abort("!! Local changes haven't been committed !!")

@task
def deploy():
    """publish the latest version of the app"""
    with cd('/var/app'):
        run('git remote update')
        run('git checkout %s' % get_latest_commit())
    sudo("/etc/init.d/apache2 graceful")
The above uses @task to explicitly mark check and deploy as Fabric commands so that they can be run as usual, i.e. fab check and fab deploy. Any functions and classes you import or define don't have to be “hidden” with the _underscore hack.
For backwards compatibility, if you never use the @task decorator, you'll continue to get the traditional behaviour.
YAML Config
It's preferable not to hard-code configurable values into fabfiles. This enables you to reuse the same fabfile across different projects and keep it updated without having to constantly copy, paste and modify.
So I've added native config support to Fabric. You can enable it by simply specifying the env.config_file variable:
env.config_file = 'deploy.yaml'
This will load and parse the specified YAML file before running any commands. The configuration values will then be accessible from env.config. This object is simply a dictionary with dot-attribute access similar to the env dictionary.
You can then keep your fabfiles clean from hard-coded values, e.g.
def get_latest_commit():
    return urlopen(env.config.commits_server).read()

@task
def deploy():
    with cd(env.config.app_directory):
        ...
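For instance, a minimal deploy.yaml matching the tasks above might contain the following — the keys are entirely up to you and your fabfile:

commits_server: http://commits.server.com/latest
app_directory: /var/app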
Having configurable values also means that frameworks like Django could ship default tasks for you to use, e.g.
from django.fabric import deploy, test
Script Execution
It really helps to be able to quickly iterate on the scripts that you will be calling with Fabric. And whilst you should eventually create them as individual files and sync them as part of your deployment, it'd be nice if you could try them out during development.
To help with this I've added a new execute() builtin. This lets you execute arbitrary scripts on remote hosts, e.g.
@task
def stats():
    memusage = execute("""
        #! /usr/bin/env python
        import psutil
        pid = open('pidfile').read()
        process = psutil.Process(int(pid))
        print process.get_memory_percent()
        """, 'memusage.py', verbose=False, dir='/var/ampify')
Behind the scenes, execute() will:
- Strip any common leading whitespace from every line in the source.
- Copy the source into a file on the remote server.
- If the optional name parameter (the 2nd parameter) has been specified, then the given name will be used; otherwise an auto-generated name will be used.
- If the optional dir parameter was specified, then the file will be created within the given directory; otherwise it'll be created in the remote $HOME directory.
- The file will then be made executable, i.e. chmod +x.
- And, finally, it will be run and the response will be returned. By default, the response is also printed to the screen, unless the verbose parameter is set to False.
As you can imagine, execute() makes it very easy to rapidly try out and develop your deployment logic. Your scripts can be in whatever languages you can run on your remote hosts — Ruby, Perl, Bash, Python, JavaScript, whatever.
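To pin down the semantics, here's a rough sketch of what execute() amounts to in terms of the existing builtins. This is illustrative only, not the actual implementation:

from textwrap import dedent
from uuid import uuid4
from StringIO import StringIO

from fabric.api import hide, put, run

def execute_sketch(source, name=None, verbose=True, dir=None):
    # strip any common leading whitespace from the source
    source = dedent(source).lstrip()
    # use the given name, or fall back to an auto-generated one
    name = name or 'fab-script-%s' % uuid4().hex
    # create the file within `dir`, or in the remote $HOME otherwise
    path = '%s/%s' % (dir, name) if dir else name
    put(StringIO(source), path)  # assumes put() accepts file-like objects
    run('chmod +x %s' % path)
    cmd = path if dir else './%s' % name
    if verbose:
        return run(cmd)
    with hide('stdout'):
        return run(cmd)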
Environment Variable Manager
When calling various command line tools, you often have to mess with environment variables like $PATH, $NODE_PATH, $PYTHONPATH, etc. To deal with this, Fabric forced you to write things like:
NODE_PATH = "NODE_PATH=$NODE_PATH:%s" % node_libs
run("%s vows" % NODE_PATH)
run("%s coffee -c amp.coffee" % NODE_PATH)
Instead you can now do:
with env.NODE_PATH(node_libs):
    run("vows")
    run("coffee -c amp.coffee")
That is, if you access an attribute on the env object whose name is composed entirely of upper-case ASCII characters, a Python context manager is generated which will manage the use of that environment variable for you.
The manager has the following constructor signature:
manager(value, behaviour='append', sep=os.pathsep, reset=False)
So, if you'd called env.PATH(value, behaviour), the way the value is treated will depend on the behaviour parameter which can be one of the following:
- append — appends value to the current $PATH, i.e. PATH=$PATH:<value>
- prepend — prepends value to the current $PATH, i.e. PATH=<value>:$PATH
- replace — ignores current $PATH altogether, i.e. PATH=<value>
The sep defaults to : on most platforms and determines how the values are concatenated. And if you call the manager with no arguments at all or stringify it, it will return the string which would be used during execution, e.g.
print env.NODE_PATH # NODE_PATH=$NODE_PATH:/opt/ampify/libs/node
The various calls can be also nested, e.g.
with env.PATH('/usr/local/bin'):
    with env.PATH('/home/tav/bin'):
        print env.PATH  # PATH=$PATH:/usr/local/bin:/home/tav/bin
    with env.PATH('/opt/local/bin', 'replace'):
        with env.PATH('/bin', 'prepend'):
            print env.PATH  # PATH=/bin:/opt/local/bin
    print env.PATH  # PATH=$PATH:/usr/local/bin
You can specify reset=True to discard any nested values, e.g.
with env.PATH('/opt/ampify/bin', 'prepend'):
    with env.PATH('/home/tav/bin', reset=True):
        print env.PATH  # PATH=$PATH:/home/tav/bin
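For the curious, here's how the string-composition half of such a manager might work. This is a minimal sketch, not the actual implementation; it leaves out the sep parameter handling and the part where the variable actually gets applied to run()/sudo() calls:

import os

class EnvVariable(object):
    """Sketch of the manager behind upper-case env attributes."""

    def __init__(self, name, sep=os.pathsep):
        self.name = name
        self.sep = sep
        self.frames = []  # stack of nested (value, behaviour) pairs

    def __call__(self, value=None, behaviour='append', reset=False):
        # with no arguments, return the composed string
        if value is None:
            return str(self)
        return _Frame(self, (value, behaviour), reset)

    def __str__(self):
        composed = '$' + self.name
        for value, behaviour in self.frames:
            if behaviour == 'append':
                composed = composed + self.sep + value
            elif behaviour == 'prepend':
                composed = value + self.sep + composed
            else:  # replace
                composed = value
        return '%s=%s' % (self.name, composed)

class _Frame(object):
    """Context manager handling nesting and reset=True."""

    def __init__(self, var, frame, reset):
        self.var, self.frame, self.reset = var, frame, reset

    def __enter__(self):
        if self.reset:
            # discard any nested values for the duration of this block
            self.saved = self.var.frames
            self.var.frames = [self.frame]
        else:
            self.saved = None
            self.var.frames.append(self.frame)

    def __exit__(self, *exc_info):
        if self.saved is None:
            self.var.frames.pop()
        else:
            self.var.frames = self.saved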
Extension Hooks
I've added native hooks support so that you can extend certain aspects of Fabric without having to resort to monkey-patching. You attach functions to specific hooks by using the new @hook decorator, e.g.
@hook('config.loaded')
def default_config():
    if 'port' in env.config:
        return
    port = prompt("port number?", default=8080, validate=int)
    env.config.port = port
For the moment, only the following builtin hooks are defined. It's possible that I may add support for individual <command>.before and <command>.after hooks, but so far I haven't needed that functionality.
| Hook Name       | Hook Function Signature | Description                                                              |
|-----------------|-------------------------|--------------------------------------------------------------------------|
| commands.before | func(cmds, cmds_to_run) | Run before any of the commands are run (including the config handling).  |
| config.loaded   | func()                  | Run after any config file has been parsed and loaded.                    |
| commands.after  | func()                  | Run after all the commands have been run.                                |
| listing.display | func()                  | Run when the command listing has finished running in response to fab or fab --list. Useful for adding extra info. |
You can access the functions attached to a specific hook by calling hook.get(), e.g.
functions = hook.get('config.loaded')
You can also define your own custom hooks and call the attached functions by using hook.call(). This takes the hook name as the first parameter and all subsequent parameters are used when calling the hook's attached functions.
Here's an example that shows hook.call() in action along with env.hook and the ability to attach the same function to multiple hooks by passing multiple names to the @hook decorator:
@hook('deploy.success', 'deploy.failure')
def irc_notify(release_id, info=None):
    client = IrcClient('devbot', 'irc.freenode.net', '#esp')
    if env.hook == 'deploy.success':
        client.msg("Successfully deployed: %s" % release_id)
    else:
        client.msg("Failed deployment: %s" % release_id)

@hook('deploy.success')
def webhook_notify(release_id, info):
    webhook.post({'id': release_id, 'payload': info})

@task
def deploy():
    release_id = get_release_id()
    try:
        ...
        info = get_release_info()
    except Exception:
        hook.call('deploy.failure', release_id)
    else:
        hook.call('deploy.success', release_id, info)
And, finally, you can disable specific hooks via the --disable-hooks command line option which takes space-separated patterns, e.g.
$ fab deploy --disable-hooks 'config.loaded deploy.*'
You can override these disabled hooks in turn with the --enable-hooks option, e.g.
$ fab deploy --disable-hooks 'deploy.*' --enable-hooks 'deploy.success'
Both hook.call() and hook.get() respect these command-line parameters, so if you ever need to access the raw hook registry dict, you can access it directly via hook.registry.
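Conceptually, the registry is little more than a dict mapping hook names to lists of functions. A stripped-down sketch, ignoring the enable/disable filtering:

from fabric.api import env

registry = {}

def hook(*names):
    # attach the decorated function to each of the named hooks
    def register(func):
        for name in names:
            registry.setdefault(name, []).append(func)
        return func
    return register

def get(name):
    return registry.get(name, [])

def call(name, *args, **kwargs):
    # set env.hook so attached functions know how they were invoked
    for func in get(name):
        env.hook = name
        func(*args, **kwargs)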
Interpolation & Directories Support
Fabric provides a number of builtin functions that let you make command line calls:
- local() — runs a command on the local host.
- run() — runs a command on a remote host.
- sudo() — runs a sudoed command on a remote host.
I've added two optional parameters to these. A dir parameter can be used to specify the directory from within which the command should be run, e.g.
@task
def restart():
    run("amp restart", dir='/opt/ampify')
And a format parameter can be used to access the new formatted strings functionality, e.g.
@task
def backup():
    run("backup.rb {app_directory} {s3_key}", format=True)
This makes use of the Advanced String Formatting support that is available in Python 2.6 and later. The env object is used to look-up the various parameters, i.e. the above is equivalent to:
@task
def backup():
    command = "backup.rb {app_directory} {s3_key}".format(**env)
    run(command)
By default, format is set to False and the string is run without any substitution. But if you happen to find formatting as useful as I do, you can enable it by default by setting:
env.format = True
You can then use it without having to constantly set format=True, e.g.
@task
def backup():
    run("backup.rb {app_directory} {s3_key}")
Fabric Contexts
I've used Fabric for everything from simple single-machine deployments to web app deployments across multiple data centers to even physical multi-touch installations at the Museum of Australian Democracy.
And for everything but the simplest of setups, I've had to work around the limited support offered by Fabric's @hosts and @roles in terms of:
- Managing configurable env values when dealing with different servers.
- Executing workflows involving different sets of servers within the same Fabric command, e.g. app servers, db servers, load balancers, etc.
And, clearly, I am not the only one who has experienced this pain. So I've introduced the notion of “contexts” and an env() context runner into the Fabric core. This means that you can now do things like:
def purge_caches():
    run('./purge-redis-cache', dir=env.redis_directory)
    run('./purge-varnish-cache', dir=env.varnish_directory)

@task
def deploy():
    env('app-servers').run('./manage.py set-read-only {app_directory}')
    env('db-servers').run('./upgrade.py {db_name}')
    env('cache-servers').run(purge_caches)
    env('app-servers').run('./manage.py restart {app_directory}')
And run it with a simple:
$ fab deploy
That's right. You no longer have to manually manage env values or run insanely long command strings like:
$ fab run_tests freeze_requirements update_repository create_snapshot update_virtualenv update_media update_nginx graceful_migrate
You can call env() with a sequence of contexts as arguments and it'll act as a constructor for the new context runner object, e.g. with multiple contexts:
env('app-servers', 'db-servers')
And when you call a context runner method, e.g. .run(), it will be run for each of the hosts/settings returned by env.get_settings() for the given contexts. So, for the following:
env('app-servers').run('./manage.py upgrade {app_directory}')
What happens behind the scenes is somewhat equivalent to:
ctx = ('app-servers',)
responses = []
for cfg in env.get_settings(ctx):
    with settings(ctx=ctx, **cfg):
        response = run('./manage.py upgrade {app_directory}')
        responses.append(response)
return responses
That is, it is up to env.get_settings() to return the appropriate sequence of hosts and env settings for the given contexts, e.g.
[{'host_string': 'amp@appsrv21.espians.com', 'host': 'appsrv21.espians.com',
  'app_directory': '/opt/ampify'},
 {'host_string': 'admin@appsrv23.espians.com', 'host': 'appsrv23.espians.com',
  'app_directory': '/var/ampify', 'shell': '/bin/tcsh -c'}]
And since deployment setups vary tremendously, instead of trying to make one size fit all, I've made it so that you can define your own env.get_settings() function, e.g.
from fabric.network import host_regex

def get_settings(contexts):
    return [{
        'host_string': context,
        'host': host_regex.match(context).groupdict()['host']
    } for context in contexts]

env.get_settings = get_settings
It doesn't matter to Fabric what “context” means to you. In the above, for example, contexts are treated as hosts, e.g.
env('dev2.ampify.it', 'dev6.ampify.it')
The default env.get_settings() interprets “contexts” in a specific way and this is described in the Default Settings Manager section. But you can simply override it to suit your specific deployment setup.
Context Runner
The context runner supports a number of methods, including:
- .execute()
- .local()
- .reboot()
- .run()
- .sudo()
They behave in the exact same way as their builtin counterparts. The return value for each of the calls will be a list-like object containing the response for each host in order, e.g.
responses = env('app-servers').sudo('make install')
for response in responses:
    ...
You can also pass in a function as the first parameter to .run(). It will then be run for each of the hosts/settings. You can specify a warn_only=True parameter to suppress and return any exceptions instead of raising them directly, e.g.
responses = env('app-servers').run(some_func, warn_only=True)
if succeeded(responses):
    ...
In addition, two new builtins are provided to quickly determine the nature of the responses:
- succeeded — returns True if all the response items were successful.
- failed — returns True if any of the response items failed or threw an exception.
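So a guarded deployment step might look something like this (illustrative):

@task
def deploy():
    responses = env('app-servers').run('make deploy', warn_only=True)
    if failed(responses):
        # abort() is the existing Fabric builtin
        abort("make deploy failed on at least one of the app servers")
    ...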
You can loop through the response and the corresponding host/setting values by using the .ziphost() and .zipsetting() generator methods on the returned list-like responses object, e.g.
responses = env('app-servers').multirun('make build')
for response, host in responses.ziphost():
    ...
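And .zipsetting() works the same way, except you get the full settings dict back alongside each response — handy when you need more than just the host. Assuming, as with .ziphost(), that the response comes first in each pair:

responses = env('app-servers').run('cat {app_directory}/VERSION')
for response, setting in responses.zipsetting():
    print setting['host'], setting['app_directory'], '->', response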
You might occasionally want to make a call on a subset of hosts for particular contexts. A utility .select() method is provided to help with this. If you pass it an integer value, it'll randomly sample that many of the settings, e.g.
@task
def build():
    env('build-servers').select(3).run('{build_command}')
The .select() method can also take a filter function to let you do more custom selection, e.g. to select 2 build servers for each platform, you might do something like:
from itertools import groupby
from random import sample

def selector(settings):
    settings = sorted(settings, key=lambda s: s['platform'])
    grouped = groupby(settings, lambda s: s['platform'])
    # flatten the sampled groups into a single list of settings
    return [s for platform, group in grouped
              for s in sample(list(group), 2)]

@task
def build():
    env('build-servers').select(selector).run('{build_command}')
Parallel Deployment
You no longer have to patiently wait whilst Fabric runs your commands on each server one by one. By taking advantage of the .multirun() context runner call, you can now run commands in parallel, e.g.
@task
def deploy():
    responses = env('app-servers').multirun('make build')
    if failed(responses):
        ...
The above will run make build on the app-servers in parallel — reducing your deployment time significantly! The returned responses will be a list of the individual responses from each host in order, just like with .run().
If any of the hosts threw an exception, then the corresponding element within responses will contain it. Exceptions will not be raised, regardless of whether you'd set warn_only — it is up to you to deal with them.
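For instance, you might weed out any failures yourself along these lines:

responses = env('app-servers').multirun('make build')
for response, host in responses.ziphost():
    if isinstance(response, Exception):
        print "build failed on %s: %r" % (host, response)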
By default you see the output from all hosts, but sometimes you just want a summary, so you can specify condensed=True to .multirun() and it'll provide a much quieter, running report instead, e.g.
[multirun] Running 'make build test' on 31 hosts
[multirun] Finished on dev2.ampify.it
[multirun] Finished on dev7.ampify.it
[multirun] 2/31 completed
...
Also, when dealing with a large number of servers, you'll run into laggy servers from time to time. And depending on your setup, it may not be critical that all servers run the command. To help with this, .multirun() takes two optional parameters:
- laggards_timeout — the number of seconds to wait for laggards once the satisfactory number of responses have been received — needs to be an int value.
- wait_for — the number of responses to wait for — this should be an int, but can also be a float value between 0.0 and 1.0, in which case it'll be used as the factor of the total number of servers.
So, for the following:
responses = env('app-servers').multirun(
    'make build', laggards_timeout=20, wait_for=0.8
)
The make build command will be run on the various app-servers and once 80% of the responses have been received, it'll continue to wait up to 20 seconds for any laggy servers before returning the responses.
The responses list will contain the new TIMEOUT builtin object for the servers that didn't return a response in time. Like with the .run() context runner call, .multirun() also accepts a function to execute instead of a command string, e.g.
env.format = True

def migrate_db():
    run('./manage.py schemamigration --auto {app_name}')
    run('./manage.py migrate {app_name}')

@task
def deploy():
    env('frontends').multirun('routing --disable')
    env('app-nodes').multirun(migrate_db)
    env('frontends').multirun('routing --enable')

@task
def migrate():
    env('app-nodes').multirun(migrate_db)
There's also a .multilocal() which runs the command locally for each of the hosts in parallel, e.g.
env.format = True

@task
def db_backup():
    env('db-servers').multilocal('backup.rb {host}')
Behind the scenes, .multirun() uses fork() and domain sockets, so it'll only work on Unix platforms. You can specify the maximum number of parallel processes to spawn at any one time by specifying env.multirun_pool_size which is set to 10 by default, e.g.
env.multirun_pool_size = 200
There's also an env.multirun_child_timeout which specifies the number of seconds a child process will wait to hear from the parent before committing suicide. It defaults to 10 seconds, but you can configure it to suit your setup:
env.multirun_child_timeout = 24
And, oh, there's also a .multisudo() for your convenience.
Contextualised Tasks
If you call env() without any contexts, it will use the contexts defined in the env.ctx tuple. This is automatically set by the context runner methods, but you can also set it by specifying contexts to the @task() decorator, e.g.
@task('app-servers')
def single():
    print env.ctx

@task('app-servers', 'db-servers')
def multiple():
    print env.ctx
All this does is set env.ctx to the new tuple value, e.g.
$ fab single
('app-servers',)
$ fab multiple
('app-servers', 'db-servers')
This is useful if you want to specify default contexts for a specific Fabric command, e.g.
@task('app-servers')
def deploy():
    env().run('./manage.py set-read-only {app_directory}')
    env('db-servers').run('./upgrade.py {db_name}')
    env().run('./manage.py restart {app_directory}')
One reason you might want to use default contexts is so that you can later use the new command line @context syntax to override it. So if you wanted to deploy to a specific server, perhaps a new one, you could do something like:
$ fab deploy @serv21.espians.com
And serv21.espians.com will be used in place of app-servers. You can specify multiple @context parameters on the command line and they will only apply to the immediately preceding Fabric command, e.g.
$ fab build @bld4.espians.com deploy @serv21.espians.com @serv22.espians.com
Fabric Shell
Sometimes you just want to run a bunch of commands on a number of different servers. So I've added a funky new feature: Fabric shell mode.
Once you are in the shell, all of your commands will be sequentially run() on the respective hosts for your calling context. You can also use the .shell-command syntax to call shell builtin commands — including your own custom ones!
You start shells from within your Fabric commands by invoking the .shell() method of a context runner. Here's sample code to enable fab shell:
@task('app-servers')
def shell():
    """start a shell within the current context"""
    env().shell(format=True)
The above defaults to the app-servers context when you run fab shell, but you can always override the context from the command line, e.g.
$ fab shell @db-servers
And if your Python has been compiled with readline support, then you can also leverage the tab-completion support that I've added:
- Terms starting with { auto-complete on the properties within the env object to make it easy for you to use the string formatting support.
- Terms starting with . auto-complete on the available shell builtin commands.
- All other terms auto-complete on the list of available executables on your local machine's $PATH.
Other readline features have been enabled too, e.g. command history with up/down arrows, ^r search and even cross-session history. You can configure where the history is saved by setting env.shell_history_file which defaults to:
env.shell_history_file = '~/.fab-shell-history'
Shell Builtins
As mentioned above, you can call various .shell-commands from within a Fabric shell to do something other than run() a command on the various hosts, e.g.
>> .multilocal backup.rb {host_string}
The above will run the equivalent of the following in parallel for each host:
local("backup.rb {host_string}")
The following builtins are currently available:
- .cd — changes the working directory for all future commands.
- .help — displays a list of the available shell commands.
- .info — list the current context and the respective hosts.
- .local — runs the command locally.
- .multilocal — runs the command in parallel locally for each host.
- .multirun — runs the command in parallel on the various hosts.
- .multisudo — runs the sudoed command in parallel on the various hosts.
- .sudo — runs the sudoed command on the various hosts.
- .toggle-format — toggles string formatting support.
You can define your own using the new @shell decorator, e.g.
@shell
def python_version(spec, arg):
    with hide('running', 'output'):
        version = run('./get-py-version')
    puts("python %s" % version)
And since the above satisfies the necessary func(spec, arg) function signature, it can be called from within a shell, e.g.
>> .python-version
[dev1.ampify.it] python 2.5.2
[dev3.ampify.it] python 2.7.0
The spec object is initialised with the various variables that the .shell() method was originally called with, e.g. dir, format, etc. All changes made, e.g. spec.dir = '/var/ampify', are persistent and will affect all subsequent commands within the shell.
The arg is a string value of everything following the shell command name, e.g. for .local echo hi, it will be “echo hi”. The command name is derived from the function name, but you can provide your own as the first parameter to the @shell decorator.
And, finally, you can specify single=True to the @shell decorator to specify that you only want the command to be run a single time and not repeatedly for every host, e.g.
@shell('hello', single=True)
def demo(spec, arg):
    print "Hello", arg.upper()
The above can then be called from within a shell and it won't repeat for every host/setting, e.g.
>> .hello tav
Hello TAV
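Since changes to spec persist, single-mode commands are also a natural place for session-wide toggles along the lines of the builtin .cd, e.g. this hypothetical custom builtin:

@shell('set-dir', single=True)
def set_dir(spec, arg):
    # persistently change the working directory for all
    # subsequent commands in this shell session
    spec.dir = arg.strip()
    print "working directory set to %s" % spec.dir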
The env.ctx is set for even single-mode shell commands, so you can always take advantage of context runners, e.g.
@shell
def backup(spec, arg):
    env().multirun(...)
Command Line Listing
When fab was run without any arguments, it used to spit out the help message with the various command line options:
$ fab
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
Options:
  -h, --help            show this help message and exit
  -V, --version         show program's version number and exit
  -l, --list            print list of possible commands and exit
  --shortlist           print non-verbose list of possible commands and exit
  -d COMMAND, --display=COMMAND
                        print detailed info about a given command and exit
  ...
As useful as this was, I figured it'd be more useful to list the various commands without having to constantly type fab --list. So now when you run fab without any arguments, it lists the available commands instead, e.g.
$ fab
Available commands:
  check    check if local changes have been committed
  deploy   publish the latest version of the app
You can of course still access the help message by running either fab -h or fab --help. And you can hide commands from being listed by specifying display=None to the @task decorator, e.g.
@task(display=None)
def armageddon():
    ...
Staged Deployment
For most serious deployments, you tend to have distinct environments, e.g. testing, staging, production, etc. And existing fabric setups tend to have similarly named commands so you can run things like:
$ fab production deploy
Now, not being a fan of repeatedly adding the same functionality to all my fabfiles, I've added staged deployment support to Fabric. You can enable it by specifying an env.stages list value, e.g.
env.stages = ['staging', 'production']
The first item in the list is taken to be the default, and an overridable command of the following structure is automatically generated for each item:
@task(display=None)
def production():
    puts("env.stage = production", prefix='system')
    env.stage = 'production'
    config_file = env.config_file
    if config_file:
        if not isinstance(config_file, basestring):
            config_file = '%s.yaml'
        try:
            env.config_file = config_file % env.stage
        except TypeError:
            env.config_file = config_file
That is, it will update env.stage and env.config_file with the appropriate values and print out a message, e.g.
$ fab production check deploy
[system] env.stage = production
...
And if your fab command line call didn't start with one of the stages, it will automatically run the one for the default stage, e.g.
$ fab check deploy
[system] env.stage = staging
...
For added convenience, the environments are also listed when you run fab without any arguments, e.g.
$ fab
Available commands:
  check    check if local changes have been committed
  deploy   publish the latest version of the app
Available environments:
  staging
  production
And, for further flexibility, you can override env.stages with the FAB_STAGES environment variable. This takes a comma-separated list of environment stages and, again, the first is treated as the default, e.g.
$ FAB_STAGES=development,production fab deploy
[system] env.stage = development
...
The staged deployment support is compatible with the YAML Config support. So if you set env.config_file to True, it will automatically look for a <stage>.yaml file, e.g. production.yaml:
env.config_file = True
However if you set it to a string which can be formatted, it will be replaced with the name of the staged environment, e.g.
env.config_file = 'etc/ampify/%s.yaml'
That is, for fab development, the above will use etc/ampify/development.yaml. And, finally, if env.config_file is simply a string with no formatting characters, it will be used as is, i.e. the same config file regardless of env.stage.
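To summarise the three behaviours:

env.config_file = True                  # <stage>.yaml, e.g. production.yaml
env.config_file = 'etc/ampify/%s.yaml'  # formatted, e.g. etc/ampify/production.yaml
env.config_file = 'deploy.yaml'         # no formatting, used as-is for every stage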
Default Settings Manager
The default env.get_settings() can be found in the fabric.contrib.tav module. If env.config_file support isn't enabled it mimics the traditional @hosts behaviour, i.e. all the contexts are treated as host strings, e.g.
>>> env.get_settings(('admin@dev1.ampify.it', 'dev3.ampify.it:2222'))
[{'host': 'dev1.ampify.it',
  'host_string': 'admin@dev1.ampify.it',
  'port': '22',
  'user': 'admin'},
 {'host': 'dev3.ampify.it',
  'host_string': 'dev3.ampify.it',
  'port': '2222',
  'user': 'tav'}]
However, if env.config_file is enabled, then a YAML config file like the following is expected:
default:
  shell: /bin/bash -l -c

app-servers:
  directory: /var/ampify
  hosts:
    - dev1.ampify.it:
        debug: on
    - dev2.ampify.it
    - dev3.ampify.it:
        directory: /opt/ampify
    - dev4.ampify.it

build-servers:
  cmd: make build
  directory: /var/ampify/build
  hosts:
    - build@dev5.ampify.it
    - dev6.ampify.it:
        user: tav
    - dev7.ampify.it

hostinfo:
  dev*.ampify.it:
    user: dev
    debug: off
  dev3.ampify.it:
    shell: /bin/tcsh -c
  dev7.ampify.it:
    user: admin
In the root of the config, everything except the default and hostinfo properties are assumed to be contexts. And in each of the contexts, everything except the hosts property is assumed to be an env value.
Three distinct types of contexts are supported:
Host contexts — any context containing the . character but no slashes, e.g. dev1.ampify.it.
>>> env.get_settings(
...     ('tav@dev1.ampify.it', 'dev7.ampify.it', 'dev3.ampify.it')
... )
[{'debug': False,
  'host': 'dev1.ampify.it',
  'host_string': 'tav@dev1.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'tav'},
 {'debug': False,
  'host': 'dev7.ampify.it',
  'host_string': 'dev7.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'admin'},
 {'debug': False,
  'host': 'dev3.ampify.it',
  'host_string': 'dev3.ampify.it',
  'port': '22',
  'shell': '/bin/tcsh -c',
  'user': 'dev'}]

The settings are composed in layers: first, any default values; then any matching *-patterns in hostinfo; then any explicit matches within hostinfo; and, finally, any specific information embedded within the context itself, e.g. user@host:port.
Non-host contexts, e.g. app-servers.
>>> env.get_settings(('app-servers',))
[{'debug': True,
  'directory': '/var/ampify',
  'host': 'dev1.ampify.it',
  'host_string': 'dev1.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'dev'},
 {'debug': False,
  'directory': '/var/ampify',
  'host': 'dev2.ampify.it',
  'host_string': 'dev2.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'dev'},
 {'debug': False,
  'directory': '/opt/ampify',
  'host': 'dev3.ampify.it',
  'host_string': 'dev3.ampify.it',
  'port': '22',
  'shell': '/bin/tcsh -c',
  'user': 'dev'},
 {'debug': False,
  'directory': '/var/ampify',
  'host': 'dev4.ampify.it',
  'host_string': 'dev4.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'dev'}]

First, the hosts are looked up in the hosts property of the specific context, and then a similar layered lookup is done for each of the hosts as with host contexts, with two minor differences:
- Any context-specific values, e.g. the properties for app-servers, are added as a layer after any default values.
- Any host-specific values defined within the context override everything else.
Composite contexts — made up of a non-host context followed by a slash and comma-separated hosts, e.g. app-servers/dev1.ampify.it,dev3.ampify.it.
>>> env.get_settings(('app-servers/dev1.ampify.it,dev3.ampify.it',))
[{'debug': True,
  'directory': '/var/ampify',
  'host': 'dev1.ampify.it',
  'host_string': 'dev1.ampify.it',
  'port': '22',
  'shell': '/bin/bash -l -c',
  'user': 'dev'},
 {'debug': False,
  'directory': '/opt/ampify',
  'host': 'dev3.ampify.it',
  'host_string': 'dev3.ampify.it',
  'port': '22',
  'shell': '/bin/tcsh -c',
  'user': 'dev'}]

These are looked up in the same way as non-host contexts, except that the hosts specified in the context itself are used instead of the hosts property within the config. This is particularly useful for overriding contexts to a subset of servers from the command line.
All in all, I've tried to provide a reasonably flexible default — but you are welcome to override it to suit your particular needs, e.g. you might want to load the config dynamically over the network, etc.
Finally, it's worth noting that all responses are cached by the default env.get_settings() — making it very cheap to switch contexts.
Hyphenated Commands
This is a minor point, but I find hyphens, e.g. deploy-ampify, to be more aesthetically pleasing than underscores, i.e. deploy_ampify. So Fabric now displays and supports hyphenated variants of all commands, e.g. for the following fabfile:
from fabric.api import *

@task
def deploy_ampify():
    """deploy the current version of ampify"""
    ...
The command listing shows:
$ fab
Available commands:
  deploy-ampify   deploy the current version of ampify
And you can run the command with:
$ fab deploy-ampify
Command Line Env Flags
You can now update the env object directly from the command line using the new env flags syntax:
- +<key> sets the value of the key to True.
- +<key>:<value> sets the key to the given string value.
So, for the following content in your fabfile:
env.config_file = 'deploy.yaml'
env.debug = False
Running the following will set env.config_file to the new value:
$ fab <commands> +config_file:alt.yaml
And the following will set env.debug to True:
$ fab <commands> +debug
The env flags are all set in the order given on the command line and before any commands are run (including the config file handling if one is specified). And you can, of course, specify as many env flags as you want.
Optimisations
It won't make much of a difference, but for my own sanity I've made a bunch of minor optimisations to Fabric, e.g.
- Removed repeated look-up of attributes within loops.
- Removed repeated local imports within functions.
- Removed redundant double-wrapping of functions within the @hosts and @roles decorators.
Colors Support
You can enable the optional colors support by setting env.colors, i.e.
env.colors = True
You can customise the colors by modifying the env.color_settings property. By default it is set to:
env.color_settings = {
    'abort': yellow,
    'error': yellow,
    'finish': cyan,
    'host_prefix': green,
    'prefix': red,
    'prompt': blue,
    'task': red,
    'warn': yellow
}
You can find the color functions in the fabric.colors module.
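The color functions just wrap text in the relevant ANSI escape sequences, so you can also use them directly in your own output, e.g.

from fabric.colors import green, red

print green("all hosts completed")
print red("deploy failed", bold=True)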
Logging Improvements
The builtin puts() and fastprint() logging functions have also been extended with the optional format parameter and env.format support similar to the local() and run() builtins, so that instead of having to do something like:
@task
def db_migrate():
    puts(
        "[%s] [database] migrating schemas for %s" %
        (env.host_string, env.db_name)
    )
You can now just do:
env.format = True

@task
def db_migrate():
    puts("migrating schemas for {db_name}", 'database')
And it will output something like:
$ fab db-migrate
[somehost.com] [database] migrating schemas for ampify
The second parameter, which was previously a boolean-only value controlling whether the host string was printed, can now also be a string value, in which case the string will be used as the prefix instead of the host string.
You can still control if a host prefix is also printed by using the new optional show_host parameter, e.g.
@task
def db_migrate():
    puts("migrating schemas for {db_name}", 'database', show_host=False)
This will output something like:
$ fab db-migrate
[database] migrating schemas for ampify
And if you'd enabled env.colors, the prefix will be also colored according to your settings!
Autocompletion
Bash completion is one of those features that really helps you to be more productive. Just include the following in your ~/.bashrc or equivalent file and you'll be able to use Fabric's new command completion support:
_fab_completion() {
    COMPREPLY=( $( \
        COMP_LINE=$COMP_LINE COMP_POINT=$COMP_POINT \
        COMP_WORDS="${COMP_WORDS[*]}" COMP_CWORD=$COMP_CWORD \
        OPTPARSE_AUTO_COMPLETE=1 $1 ) )
}

complete -o default -F _fab_completion fab
It completes on all available commands and command line options, e.g.
$ fab --dis<tab>
--disable-known-hosts --disable-hooks --display
Also, since Fabric has no way of knowing which command line env flags and contexts you might be using, you can specify additional autocompletion items as an env.autocomplete list value, e.g.
env.autocomplete = ['+config_file:', '+debug', '@db-servers']
This will then make those values available for you to autocomplete, i.e.
$ fab +<tab>
+config_file: +debug
Backwards Compatibility
All these changes should be fully backwards compatible. That is, unless you happen to have specified any of the new env variables like env.stages, your existing fabfiles should run as they've always done. Do let me know if this is not the case…
Usage
If you'd like to take advantage of these various changes, the simplest thing to do is to clone my pylibs repository and put it on your $PYTHONPATH, i.e.
$ git clone git://github.com/tav/pylibs.git
$ cd pylibs
$ python setup.py
$ export PYTHONPATH=$PYTHONPATH:`pwd`
Then create a fab script somewhere on your $PATH with the following content:
#! /usr/bin/env python
from fabric.main import main
main()
Make sure the script is executable, i.e.
$ chmod +x fab
And as long as you have Python 2.6+, you should be good to go…
Next Steps
I have been talking to the Fabric maintainers about merging these changes upstream. And so far they've been quite positive. But, as you can imagine, there are a number of competing ideas about what is best for Fabric's future.
So if you like these features, then do leave a comment expressing your support. It'd really help get these features into the next version of Fabric.
— Thanks, tav