After recently writing the reusable wsgi-app role for juju charms, I wanted to see how easy it would be to apply it to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing the newer click applications).
Below I’ve included the workflow I used to create the charm, with the detailed steps and some mistakes, in case it’s useful for others. If you’re after more of an overview of using reusable ansible roles in your juju charms, check out these slides.
1. Create the new charm from the bootstrap-wsgi charm
First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:
$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/
Then update the charm metadata to reflect the ubuntu-reviews service:
--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
description: |
<Multi-line description here>
categories:
I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is set up):
--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
- hosts: localhost
vars:
- - app_label: wsgi-app.example.com
+ - app_label: click-reviews.ubuntu.com
roles:
- role: wsgi-app
listen_port: 8080
- wsgi_application: example_wsgi:application
- code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+ wsgi_application: wsgi:app
+ code_archive: "{{ build_label }}/rnr-server.tgz"
when: build_label != ''
Although when deploying this service we’d be deploying a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role lets you switch between those two options. So the last step here is to add some Makefile targets for pulling in the application code and creating the tarball (you can see the details in the git commit for this step).
With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.
Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:
$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log
At this point, juju status shows no issues, but curling the service does (as expected):
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer
Checking the logs (which, oh joy, are already set up and configured with log rotation etc. by the wsgi-app role) shows the expected error:
$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi
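The error makes sense once you see how gunicorn interprets the wsgi_application setting: the part before the colon is a module to import, the part after it is the callable attribute to serve. A minimal sketch of that module:callable resolution (simplified – real gunicorn’s loader handles more cases, and `resolve_wsgi_app` is just an illustrative name):

```python
import importlib

def resolve_wsgi_app(spec):
    """Resolve a gunicorn-style 'module:callable' spec (simplified sketch)."""
    module_name, _, attr = spec.partition(':')
    module = importlib.import_module(module_name)  # ImportError if not on the path
    return getattr(module, attr)

# With no wsgi.py on the PYTHONPATH, 'wsgi:app' fails just like the log above:
try:
    resolve_wsgi_app('wsgi:app')
except ImportError as err:
    print(err)  # the same "No module named wsgi" error seen in the log
```

So the fix is simply to put a wsgi module with the expected callable on the path, which is the next step.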
2. Adding the wsgi, settings and urls
The current rnr-server code does have a django project set up, but it’s specific to the current deployment (requiring a 3rd-party library for settings, and serving the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm: first the standard (django-generated) manage.py, wsgi.py and near-default settings.py (which you can see in the actual commit).
An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:
--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
roles:
- role: wsgi-app
listen_port: 8080
- wsgi_application: wsgi:app
+ python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+ wsgi_application: clickreviewsproject.wsgi:application
code_archive: "{{ build_label }}/rnr-server.tgz"
when: build_label != ''
@@ -18,20 +19,18 @@
tasks:
- # Pretend there are some package dependencies for the example wsgi app.
- name: Install any required packages for your app.
- apt: pkg={{ item }} state=latest update_cache=yes
+ apt: pkg={{ item }}
with_items:
- - python-django
- - python-django-celery
+ - python-django=1.5.4-1ubuntu1~ctools0
tags:
- install
- upgrade-charm
- name: Write any custom configuration files
- debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+ copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
tags:
- - config-changed
- # Also any backend relation-changed events, such as databases.
+ - install
+ - upgrade-charm
notify:
- Restart wsgi
I can now run juju upgrade-charm and see that the application responds, but with a 500 error highlighting the first (of a few) missing dependencies…
3. Adding missing dependencies
At this point, it’s easiest in my opinion to run debug-hooks:
$ juju debug-hooks ubuntu-reviews/0
and directly test and install missing dependencies, adding them to the playbook as you go:
ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y (and add it to the playbook in your charm editor).
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart
Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:
$ juju upgrade-charm --repository=../.. ubuntu-reviews
to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.
Other times you might want to destroy the environment to redeploy from scratch.
Sometimes you’ll have dependencies which are not available in the distro. For our deployments,
we always ensure these are included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated the Makefile target that builds the tarball to use it. But there was another dependency, south, which wasn’t included by that script, so for simplicity here I’m going to install it via pip (which you don’t want to do in reality – you’d update your build process instead).
You can see the extra dependencies in the commit.
4. Customise settings
At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.
So I remove the vanilla settings.py, add a custom settings.py.j2 template, and add an extra django_secret_key option to the charm’s config.yaml:
--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
current_symlink:
default: "latest"
type: string
- description: |
+ description: >
The symlink of the code to run. The default of 'latest' will use
the most recently added build on the instance. Specifying a
different label (eg. "r235") will symlink to that directory assuming
it has been previously added to the instance.
+ django_secret_key:
+ default: "you-really-should-set-this"
+ type: string
+ description: >
+ The secret key that django should use for each instance.
--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
+ 'django.contrib.sessions',
+ 'django.contrib.sites',
+ 'django_openid_auth',
+ 'django.contrib.messages',
+ 'django.contrib.staticfiles',
+ 'clickreviews',
+ 'south',
+)
...
Again, full diff in the commit.
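The charm’s config options become the context when the role renders these Jinja2 templates, which is how the django_secret_key option above ends up in SECRET_KEY. As a rough stand-in for that substitution (using the stdlib’s string.Template rather than Jinja2, purely for illustration):

```python
from string import Template

# Stand-in for the Jinja2 rendering: charm config options become template vars.
config = {'django_secret_key': 'you-really-should-set-this'}  # the config.yaml default
line = Template("SECRET_KEY = '$django_secret_key'").substitute(config)
print(line)  # SECRET_KEY = 'you-really-should-set-this'
```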
With the extra settings now defined and one more dependency added, I’m now able to ping the service successfully:
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}
But that only works because the input validation is fired before anything tries to use the (non-existent) database…
5. Adding the database relation
I had expected this step to be simple – add the database relation, call syncdb/migrate – but three factors conspired to complicate things:
- The reviews application uses a custom postgres “debversion” column type
- A migration for the older non-click reviewsapp service requires postgres superuser creds (specifically, reviews/0013_add_debversion_field.py)
- The clickreviews app, which I wanted to deploy on its own, is currently dependent on the older non-click reviews app, so it’s necessary to have both enabled and therefore the tables from both synced
So to work around the first point, I split the ‘deploy’ task into two so that I can install and set up the custom debversion field on the postgres units before running syncdb:
--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
@juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
@juju deploy gunicorn
@juju deploy nrpe-external-master
+ @juju deploy postgresql
@juju add-relation ubuntu-reviews gunicorn
@juju add-relation ubuntu-reviews nrpe-external-master
+ @juju add-relation ubuntu-reviews postgresql:db
+ @juju set postgresql extra-packages=postgresql-9.1-debversion
+ @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+ @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+ @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+ @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
@echo See the README for explorations after deploying.
I’ve separated syncdb out into a manual step (rather than running it in a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and it might be safe, but I don’t like it conceptually).
--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
vars:
- app_label: click-reviews.ubuntu.com
+ - code_archive: "{{ build_label }}/rnr-server.tgz"
roles:
- role: wsgi-app
listen_port: 8080
python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
wsgi_application: clickreviewsproject.wsgi:application
- code_archive: "{{ build_label }}/rnr-server.tgz"
when: build_label != ''
- role: nrpe-external-master
@@ -25,6 +25,7 @@
- python-django=1.5.4-1ubuntu1~ctools0
- python-tz
- python-pip
+ - python-psycopg2
tags:
- install
- upgrade-charm
@@ -46,13 +47,24 @@
tags:
- install
- upgrade-charm
+ - db-relation-changed
notify:
- Restart wsgi
+
# XXX Normally our deployment build would ensure these are all available in the tarball.
- name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
pip: name={{ item.key }} version={{ item.value }}
with_dict:
south: 0.7.6
+ django-openid-auth: 0.2
tags:
- install
- upgrade-charm
+
+ - name: sync the database
+ django_manage: >
+ command="syncdb --noinput"
+ app_path="{{ application_dir }}/django-project"
+ pythonpath="{{ code_dir }}/current/src"
+ tags:
+ - syncdb
To work around the second and third points above, I updated the settings to remove the ‘south’ django application so that syncdb will set up the correct current tables without using migrations (while leaving south on the pythonpath, as some code currently imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.
By far the most complicated part of this change though, was the database settings:
--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
DEBUG = True
TEMPLATE_DEBUG = DEBUG
SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+ {% for dbrelation in relations['db'] %}
+ {% if loop.first %}
+ {% set services=relations['db'][dbrelation] %}
+ {% for service in services %}
+ {% if service.startswith('postgres') %}
+ {% set dbdetails = services[service] %}
+ {% if 'database' in dbdetails %}
+ 'default': {
+ 'ENGINE': 'django.db.backends.postgresql_psycopg2',
+ 'NAME': '{{ dbdetails["database"] }}',
+ 'HOST': '{{ dbdetails["host"] }}',
+ 'USER': '{{ dbdetails["user"] }}',
+ 'PASSWORD': '{{ dbdetails["password"] }}',
+ }
+ {% endif %}
+ {% endif %}
+ {% endfor %}
+ {% endif %}
+ {% endfor %}
+}
+{% endif %}
+
TIME_ZONE = 'UTC'
ROOT_URLCONF = 'clickreviewsproject.urls'
WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
'django_openid_auth',
'django.contrib.messages',
'django.contrib.staticfiles',
+ 'reviewsapp',
'clickreviews',
- 'south',
)
AUTHENTICATION_BACKENDS = (
'django_openid_auth.auth.OpenIDBackend',
Why are the database settings so complicated? I’m currently rendering all of the settings on any config change as well as on database relation changes, which means I need to use the global juju state for the relation data – and as far as juju knows, there could be multiple database relations for this stack. The current charm helpers give you this state as a dict (‘relations’ above). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of it (each relation has two keys: one for postgres, the other for the connecting service). Yuk (see below for a better solution).
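The same selection logic, written out as plain Python against a sample of that relations dict, may make the template’s nesting easier to follow (the dict structure is assumed from the charm helpers; `first_postgres_details` and the values are illustrative):

```python
# The template's selection logic as plain Python (sample structure assumed
# from the charm-helpers relations dict; names here are illustrative).
def first_postgres_details(relations):
    """Details from the postgres side of the first 'db' relation, or None."""
    for dbrelation in relations.get('db', {}).values():
        for service, dbdetails in dbrelation.items():
            if service.startswith('postgres') and 'database' in dbdetails:
                return dbdetails
        break  # mirror the template's {% if loop.first %}: first relation only
    return None

# A made-up example of the global relation state juju exposes:
relations = {'db': {'db:1': {
    'ubuntu-reviews/0': {},  # the connecting service's side of the relation
    'postgresql/0': {'database': 'ubuntu-reviews', 'host': '10.0.3.2',
                     'user': 'rel-1', 'password': 'secret'},
}}}
print(first_postgres_details(relations)['database'])  # ubuntu-reviews
```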
Full details are in the commit. With those changes I can now use the service:
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]
A. Cleanup: Simplify database settings
In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. So in this next step I’ll clean that up by moving the database settings into a separate file which is written only by the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service only has one database relation).
This simplifies things quite a bit:
--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
owner: "{{ wsgi_user }}"
group: "{{ wsgi_group }}"
tags:
- - install
- - upgrade-charm
+ - config-changed
+ notify:
+ - Restart wsgi
+
+ - name: Write the database settings
+ template:
+ src: "templates/database_settings.py.j2"
+ dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+ owner: "{{ wsgi_user }}"
+ group: "{{ wsgi_group }}"
+ tags:
- db-relation-changed
notify:
- Restart wsgi
+ when: "'database' in current_relation"
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.db.backends.postgresql_psycopg2',
+ 'NAME': '{{ current_relation["database"] }}',
+ 'HOST': '{{ current_relation["host"] }}',
+ 'USER': '{{ current_relation["user"] }}',
+ 'PASSWORD': '{{ current_relation["password"] }}',
+ }
+}
--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
TEMPLATE_DEBUG = DEBUG
SECRET_KEY = '{{ django_secret_key }}'
-{% if 'db' in relations %}
-DATABASES = {
- {% for dbrelation in relations['db'] %}
- {% if loop.first %}
- {% set services=relations['db'][dbrelation] %}
- {% for service in services %}
- {% if service.startswith('postgres') %}
- {% set dbdetails = services[service] %}
- {% if 'database' in dbdetails %}
- 'default': {
- 'ENGINE': 'django.db.backends.postgresql_psycopg2',
- 'NAME': '{{ dbdetails["database"] }}',
- 'HOST': '{{ dbdetails["host"] }}',
- 'USER': '{{ dbdetails["user"] }}',
- 'PASSWORD': '{{ dbdetails["password"] }}',
- }
- {% endif %}
- {% endif %}
- {% endfor %}
- {% endif %}
- {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
TIME_ZONE = 'UTC'
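To see what that task produces, here’s a sketch of rendering the database_settings.py.j2 content from a sample current_relation dict (using str.format as a stand-in for Jinja2, with made-up relation values):

```python
# Hypothetical stand-in for the template task: str.format instead of Jinja2.
TEMPLATE = """DATABASES = {{
    'default': {{
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '{database}',
        'HOST': '{host}',
        'USER': '{user}',
        'PASSWORD': '{password}',
    }}
}}
"""

# Sample relation data as the postgres side would provide it (values invented).
current_relation = {'database': 'ubuntu-reviews', 'host': '10.0.3.2',
                    'user': 'rel-1', 'password': 'secret'}
print(TEMPLATE.format(**current_relation))
```

The `when: "'database' in current_relation"` guard matters because the hook can fire before postgres has published its connection details.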
With those changes, the deploy still works as expected:
$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]
Summary
Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:
- Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
- Handling the application settings
- Handling the database relation and custom setup
The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a config set (i.e. `juju set ubuntu-reviews build_label=r156`) from a configured url. It provides the complete directory layout, user and group setup, log rotation and so on. All of that comes for free.
Things I’d do now if I were doing this charm for a real deployment:
- Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and so can’t change the schema, etc.). This functionality could be factored out into another reusable role.
- Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
- Ensure all dependencies are either available in the distro, or included in the tarball
Filed under:
cloud automation,
juju