
Brendel Consulting

composing beautiful software solutions

Apr 18, 2012

Random primary keys for Django models

The automatically generated primary keys for Django models tend to be just sequentially increasing integer numbers. If you expose those IDs at any point to your users, for example in the URLs referring to particular IDs, then your users can oftentimes easily guess how many users your site has, or how many objects of a certain kind your database holds.

You could follow the approach of never exposing those primary keys to the user, and always use an additional, randomly generated key instead whenever a user needs to refer to an object in your database. However, you then need to implement this random key creation, need to maintain two fields, etc. It would be nice if we didn't have to worry about this and could just use the primary keys of our models to refer to them, internally as well as externally.

For this purpose, I have created a new base class, which you can use instead of models.Model when you create your own Django models. This base class is called RandomPrimaryIdModel. With this base class, your primary IDs will look random, similar to what you know from URL shorteners.

Here's an example. Let's say you have defined a Django model like this (in the example, the only change you have to make to your normal model definition is to replace the models.Model base class with RandomPrimaryIdModel):

    from random_primary import RandomPrimaryIdModel

    class MyModel(RandomPrimaryIdModel):
        # ... define your model fields as usual ...

    for i in xrange(3):
        m = MyModel(... parameters ...)
        m.save()
        print m.id

As output you might get something like this:


You can tune the key length as well as the characters that are used to construct the key. The docstring of the class is pretty extensive, so please have a look.
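The core idea is simple enough to sketch in a few lines. The real RandomPrimaryIdModel adds collision handling and Django integration on top of this; the names below are purely illustrative:

```python
import random
import string

# Character set and key length are tunable, just like in the real class.
CHARSET = string.ascii_letters + string.digits
KEY_LEN = 6

def make_random_key(length=KEY_LEN, charset=CHARSET):
    # Draw `length` characters uniformly at random from `charset`.
    return "".join(random.choice(charset) for _ in range(length))
```

A model base class would call something like this from its save() method, retrying on the (rare) key collision until an unused key is found.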

The code for the new model base class is free to use for anyone and can be found in this GitHub repository.

Hopefully, this can be useful to you. I'd welcome any feedback or comment.



Jan 24, 2012

Django ModelForms: Setting any field attributes via Meta class

Django's ModelForms are a convenient way to create forms from already existing models. However, if any customization of the form definition is desired, then we are often forced to manually re-define the field in the ModelForm definition. For example, consider this discussion on Stack Overflow. This is of course very unfortunate.

Some things about the form fields can already be modified via the Meta class of the ModelForm, for example the widgets used to render the form fields. Unfortunately, setting custom error messages via the Meta class does not seem to be possible.

So I implemented a small change to the ModelForm class, which allows you to set arbitrary field attributes via the Meta class. For that, I introduced a new Meta class field, called "field_args". It is used like this (note that we are deriving from a new base class, called ExtendedMetaModelForm):

class AuthorForm(ExtendedMetaModelForm):
    class Meta:
        model = Author
        field_args = {
            "first_name" : {
                "error_messages" : {
                    "required" : "Please let us know what to call you!"
                }
            },
            "notes" : {
                "+error_messages" : {
                    "required" : "Please also enter some notes.",
                    "invalid"  : "This is not a valid note."
                },
                "widget" : forms.Textarea(attrs={'cols': 70, 'rows': 15}),
            }
        }

As you can see, field_args is a dictionary of dictionaries. Each dictionary within field_args specifies the field attributes you wish to set for a given field. You are not limited to the error messages: for instance, the "notes" field here receives a custom widget. Please note that the Meta class already allows you to set custom widgets; this example is just to show that you are free to use field_args to set any field attribute.

Another interesting feature is illustrated with the error messages for the "notes" field. As you can see, we have a plus sign at the start of "+error_messages". This is used as an indicator that we wish to merge the specified attributes into an already existing attribute, rather than fully replace it. This is only supported if both the newly defined value and the already existing value for the named field attribute are dictionary types. The advantage of the "+" notation is that you do not have to fully specify all error messages if you only wish to customize a few of them. In our example above, the error messages for the first_name field are fully replaced, while for the notes field they are merely modified or appended to.
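Stripped of the form machinery, the "+" merge logic boils down to a dictionary update. Here is a standalone sketch, using a plain dict in place of a real field object (so the names are illustrative):

```python
def apply_field_arg(field_attrs, name, value):
    """Set or merge one attribute, honoring a leading '+' as merge marker."""
    if name.startswith("+"):
        name = name[1:]
        existing = field_attrs.get(name)
        if isinstance(existing, dict) and isinstance(value, dict):
            existing.update(value)   # merge into the existing dict
            return
    field_attrs[name] = value        # plain replace

attrs = {"error_messages": {"required": "Required.", "invalid": "Invalid."}}
apply_field_arg(attrs, "+error_messages", {"required": "Please enter some notes."})
# "required" is overridden, "invalid" survives the merge
```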

The new ExtendedMetaModelForm can be used anywhere you traditionally would have used forms.ModelForm. The source code for this class is as follows:

class ExtendedMetaModelForm(forms.ModelForm):
    """
    Allow the setting of any field attributes via the Meta class.
    """
    def __init__(self, *args, **kwargs):
        """
        Iterate over fields, set attributes from Meta.field_args.
        """
        super(ExtendedMetaModelForm, self).__init__(*args, **kwargs)
        if hasattr(self.Meta, "field_args"):
            # Look at the field_args Meta class attribute to get
            # any (additional) attributes we should set for a field.
            field_args = self.Meta.field_args
            # Iterate over all fields...
            for fname, field in self.fields.items():
                # Check if we have something for that field in field_args
                fargs = field_args.get(fname)
                if fargs:
                    # Iterate over all attributes for a field that we
                    # have specified in field_args
                    for attr_name, attr_val in fargs.items():
                        if attr_name.startswith("+"):
                            merge_attempt = True
                            attr_name = attr_name[1:]
                        else:
                            merge_attempt = False
                        orig_attr_val = getattr(field, attr_name, None)
                        if orig_attr_val and merge_attempt and \
                                type(orig_attr_val) == dict and \
                                type(attr_val) == dict:
                            # Merge the new dictionary into the existing one
                            orig_attr_val.update(attr_val)
                        else:
                            # Replace the existing attribute
                            setattr(field, attr_name, attr_val)

I hope this little code snippet here helps you to make your work with Django's model forms easier.



Jan 19, 2012

How to use django_extensions' runscript command

The django_extensions app provides many helpful additions for the Django developer, which are worth checking out. I highly recommend it to anyone working on a Django project. One such addition is runscript, which allows you to run any script in the Django context, without having to manually import and configure your paths and Django settings. Unfortunately, there is very little documentation on how to correctly use it. This blog post here just quickly sums it all up:

1. Running a script

This is done like this:

    ./manage.py runscript <scriptname>

You cannot specify the name of a specific Python file, though. Scripts have to be within modules in a particular location, which is explained next.

2. The location of scripts

The runscript command looks for modules called scripts. These modules may be located in your project root directory, but may also reside in any of your apps' directories. This allows you to maintain app-specific scripts. So, create the scripts directory (either at the root or within an app's directory), along with an __init__.py file:

    mkdir scripts
    touch scripts/__init__.py

3. The script itself

Scripts are simple Python files, but they need to contain a run() function. For example like this:

    def run():
        print "I am a script"

Within such a script, you can import and use any of your models or other parts of your Django project. When you store the above code in a file called scripts/testscript.py, then you can run it like so (note that this is without the .py extension):

    ./manage.py runscript testscript

4. Run multiple scripts at once

A nice feature of runscript is that you can execute multiple scripts at the same time, if those scripts have the same name. Assume that you have different applications and wish to initialize data for them by running a script. Rather than putting all of this into a single script, you could have a scripts directory in each of your apps' directories, and within each of those have - for example - an init_data script, thus keeping each script smaller and easier to understand and maintain.
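For example, a project with a root-level script and two app-specific init_data scripts might be laid out like this (the app names are hypothetical):

```
myproject/
    scripts/
        __init__.py
        init_data.py        # root-level version
    one_of_my_apps/
        scripts/
            __init__.py
            init_data.py    # app-specific version
    some_other_app/
        scripts/
            __init__.py
            init_data.py    # app-specific version
```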

You can see this in action if you specify more verbose output via the "-v 2" option. 

    ./manage.py runscript -v 2 init_data

The output will look something like this:

    Check for django_extensions.scripts.init_data
    Check for django.contrib.auth.scripts.init_data
    Check for one_of_my_apps.scripts.init_data
    Found script 'one_of_my_apps.scripts.init_data' ...
    Running script 'one_of_my_apps.scripts.init_data' ...
    I am a script for one of my apps.
    Check for some_other_app.scripts.init_data
    Found script 'some_other_app.scripts.init_data' ...
    Running script 'some_other_app.scripts.init_data' ...
    I am a script for some other app.
    Check for scripts.init_data
    Found script 'scripts.init_data' ...
    Running script 'scripts.init_data' ...
    I am a script at the root level.

As you can see, runscript checks for a scripts module for every installed application, plus on the root level. If it finds a match, it runs the script and then keeps looking for more matches.



Aug 28, 2011

Kiwi Pycon 2011

Kiwi Pycon 2011, the annual Python conference in New Zealand, has just concluded. Two days in Wellington, full of inspiring talks and discussions. Thank you to the organizers and all the presenters. It's really nice to see other developers interested in Python.

In New Zealand, the Python community is pretty small. If you look through the software-related section of a bookstore, you will find very little about Python. And the Python job market... well, there really isn't much. That is probably one of the reasons why I mostly work with overseas clients.

Hopefully, Python continues to make inroads with New Zealand organizations. It would benefit them and all of us here on this small, South Pacific island.



Aug 17, 2011

Tips on using oDesk contractors as system administrators

Sites like oDesk, eLance and others give any business instant access to skilled, remote staff, often working at very competitive hourly prices. Basically, you can find outsourced workers (usually individuals, sometimes teams) for small to mid-sized projects you need to get done, while not having to deal with paperwork or recruiting. A lot has been written about the pitfalls of outsourcing and also on how to make such outsourced work succeed: Define the project very well, document the APIs against which the contractor should work, demand good documentation, etc. However, most of these tips and tricks have been written about software development and coding.

But what about tasks such as system administration? You can find quite a few sysadmins on these sites. If you are an entrepreneur who wishes to focus on the business idea, or maybe even just on the software development, what is more tempting than to outsource those pesky aspects of system administration to someone else? Setting up a web-server with caching proxy, a database server, a CMS, a cluster of build and test servers, etc. After all, these might not be your core skills. Figuring all of this out yourself would take time, you might still not get it right and it certainly is quite distracting. So, in theory, these should make for perfect outsourced projects.

There's just one problem.

Consider that most of the system administration tasks I just mentioned require root or administrator access to your systems. Those systems later will hold your software, sometimes your source code or - even more important - your customer data, maybe logins, addresses or even credit card numbers.

Now consider that the contractors you meet on oDesk are very often in different countries and that you will have no contract with them besides whatever automated agreement you strike up via oDesk. You won't meet them in person, there usually are no NDAs and sometimes you won't even talk to them on the phone.

I have worked with quite a few oDesk contractors now and was lucky enough to meet many fine engineers and administrators. But if push comes to shove, you just don't know ahead of time who you are really going to get, and considering that any dispute would likely be across borders and legal systems, realistically you would have no legal recourse in case something goes wrong.

While any issues or disputes are bad enough when it comes to software development, you at least have a deliverable you can review and examine relatively easily, or even throw away should you decide to do so. However, for system administration tasks, there are numerous ways in which a disgruntled or simply malicious or careless person with root access could compromise your system before it goes into production. Backdoors could have been installed, timed jobs could run unbeknownst to you, collecting and sending off important data from your system for some nefarious purposes, or some passwords and accounts could accidentally have been left open and unprotected.

So, having used oDesk contractors for a number of projects, including system administration tasks, let me give you two recommendations:

Firstly - and true for any type of task - choose contractors who have worked a lot on oDesk (or whichever other outsourcing site you use) and who are still active there. Look for the "recently worked hours" and recent projects. Look for contractors with a good reputation. The idea is to choose contractors who really work projects on those sites for a living and thus rely on a good reputation. That, of course, is still not a guarantee, but it is a good starting point.

Secondly, if at all possible, do NOT give contractors "system administration" tasks (which require root access to production systems). Instead, give them "system administration automation" tasks. That's a subtle but important difference, and it pays off in several ways. The definitions that follow are slightly arbitrary, but they serve to make the point:

With "system administration" tasks I mean: Setting up a complete server for you, or installing and configuring an additional software package on your server. Basically, working directly on a production system. You give someone root on those machines and they have the keys to the kingdom, with all the issues we have described before. If you need to have those tasks done, you should probably look for someone you can meet in person, get a better feel for and develop a better trust relationship with.

By "system administration automation" tasks, on the other hand, I mean the development of scripts which set up a server (or software package) for you in an automated fashion. How does this work? There are dedicated tools (for example puppet or chef), or just plain custom shell scripts, which can be used to automate the administration and setup of machines. Now the project description is different: while before your description may have read "Configure a caching proxy for our application server...", it now changes to "Provide scripts, which automate the setup of a caching proxy for our application server...".
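To make the difference concrete, here is a minimal sketch of what such an automation deliverable might look like: a plain shell script rather than puppet or chef, with purely hypothetical paths and settings, which can be re-run safely on any fresh throw-away server.

```shell
#!/bin/sh
# Hypothetical automation sketch: set up a caching-proxy configuration.
# Idempotent: running it twice leaves the system in the same state.
set -e

# Config location; overridable so the script can be tested anywhere.
PREFIX="${PREFIX:-./myapp_config}"

setup_proxy_config() {
    mkdir -p "$PREFIX"
    # Only write a default config if none exists yet.
    if [ ! -f "$PREFIX/proxy.conf" ]; then
        printf 'cache_size_mb=256\nlisten_port=8080\n' > "$PREFIX/proxy.conf"
    fi
}

setup_proxy_config
```

The contractor develops and tests this on a throw-away machine; you verify it by running it on a second throw-away machine and checking the result. No root access to production ever changes hands.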

What do you gain with this? For one, you can give the contractor a throw-away system on which to develop those scripts. Throw-away servers are easy and cheap to get these days: you may use Amazon EC2, Rackspace, Linode or any number of other providers to get "dedicated looking" machines for just a few dollars per month, or even on an hourly basis. On those machines, the contractor can get a root login and can develop the automation scripts until they work. Once the project is completed, you can test it easily: get yourself another throw-away server and run the finished scripts (which naturally have to come with documentation on how to run them). If your server ends up configured properly, the project was successful. You may specify scripts for automated tests and self-diagnosis as part of the deliverable as well. At no point, however, did the contractor require root or administration rights on your actual production server.

It's important to use the same or a similar base machine image for the throw-away server as for the production server. For example, if you run your production server on Linode with Ubuntu 10.04 LTS as the base OS, you should create a similar Linode instance with the same base OS as the contractor's throw-away server, and likewise to test the deliverables before applying those scripts to your production server. Also, consider that if all the configuration setups for your production server are automated, it becomes easy to give the contractor a complete mock production server as a throw-away machine: you can quickly bring the throw-away server into the same state (or at least one containing the same configurations and software packages) as your production server before the contractor starts work. Just be careful not to divulge important passwords or SSH keys that way.

A second advantage of requiring automation scripts is that the scripts themselves act as documentation of sorts for all the tasks and steps involved in completing the system administration task. They will therefore allow you and other contractors to more easily pick up where one contractor left off.

Finally, if you did not automate the system administration task, you cannot easily reproduce the work in case you want to bring up an additional production server. So, having automated scripts helps you to scale more effectively, as your organization grows.

So, the next time you need a system administration task performed, before handing the root password of your production server to a contractor, consider whether it might be better to phrase the project as a system administration automation task instead.

Do you have any other tips and ideas on how to effectively work with remote contractors for system administration tasks? If so, please let us know in the comments.

You should follow me on Twitter here.



Jul 14, 2011

Automatically create a table of contents with jQuery

Recently, I had to create an FAQ page. A good FAQ should have a table of contents (TOC) at the start, easily allowing you to see the various sections of the FAQ and the questions in each section. Naturally, I didn't want to maintain the TOC manually every time a question/answer item had to be moved, added or deleted. On the server side, we didn't have a CMS in place (the project doesn't really require one; the FAQ is the only page with a TOC).

So, what to do? I could write scripts, running on the server, parsing a questions/answer file, generating the faq.html file out of that. Or I could use one of any number of other server-side tools. But since I had just started to get into jQuery, I decided to try something slightly different. Why not dynamically create the TOC and the numbering of the questions when the page is loaded? In other words, the HTML of the page would only contain questions and answers, but no TOC and no fixed numbering of the questions; all of that would be added dynamically in the browser when the page was displayed.

This seemed to be as good an exercise as any to try out my fledgling jQuery skills. Well, it works! Here is the JavaScript:

You can see the result here, in this extremely simple test page, which has no external dependencies except the jQuery library. So, all you need can be seen in the source of that page. For illustration purposes, switch off JavaScript in your browser and reload the page. As you can see, it still looks readable; just the numbering and TOC are gone.

If you examine the page source, you can see a blob of jQuery in the ready() function, but no TOC. Instead, you have the questions and answers, following a particular structure. The jQuery code examines this structure and dynamically creates the TOC, the numbering of each entry and the links.

The structure you have to follow with your questions and answers is very simple:

A div with id "faq_toc" is where the TOC will be created. All top-level sections (or 'chapters') of the FAQ need to be enclosed in div elements with class "level_1_elem". In each of those sections, the individual questions and answers are enclosed in a simple div element. You can see that there are definitions for "level_1_link" and "qlink". These are the anchor tags, which allow linking to the individual answers and sections. These anchors do not contain any numbering - static or auto-generated. This means that even after you move questions around or add questions ahead of them, any links to those questions remain valid.

The result is a properly formatted TOC and numbered questions and answers, looking something like this:

This, I think, is not bad at all. No need to add a CMS or anything complicated on the server side, if the client can generate the TOC for us so nicely.



Jun 25, 2011

Wireless USB headsets on Linux: My experience with the Logitech G930

USB wireless headsets can be a bit of a hit and miss affair on Linux. Stories abound of these devices failing to work, people not being able to use them for Skype, etc. I'm running Ubuntu 10.04 (Lucid Lynx) on my somewhat aging Dell Latitude D820 laptop, but recently decided to give one of those headsets a try: The Logitech G930. Here I am describing how it's performing for me.

For many years, I have used cheap wired headsets with ordinary analog audio headphone/mic plugs. But I'm spending a lot of time on Skype, talking to my overseas colleagues and customers. Since it seems to help me think, I have the tendency to walk around during discussions, usually just in circles in my home-office. But even for this small distance from my desk, wired headsets get in the way. So, a comfortable fitting wireless headset would be great.

While general USB support on Linux is excellent, wireless USB headsets historically were not very well supported, which is why I have been hesitant to try them out. I was looking around for a good RF-based headset, but what I found was either very pricey or just headphones (without a microphone). Then I came across some reports of people claiming to successfully use the Logitech G930 wireless USB headset on Linux. While this is marketed as a “gaming headset”, I figured that as long as it worked, I wouldn't care what it's called. So, with some trepidation, I decided to order this headset from Amazon. Pricing for this product can differ quite a bit, but you should expect to spend somewhere between US$90 to US$120.

The hardware

The headset arrived a few days later. In the packaging I found the headset itself, which feels like a solidly made device. It is very light and provides ample space around your ears, so it remains comfortable even for longer usage. There is also a USB-connected base station – which conveniently also contains the charger cable for the headset itself – and a USB stick with the actual dongle for the wireless connection. You can see it all in this picture here (all images in this article – except screenshots – are courtesy of Logitech).

The 'base station' is a purely passive device: basically, a fancy USB cable or hub with a drum to tidily roll up your cables. You can see that it contains a single USB plug at the top. This allows the wireless dongle to be plugged in there, rather than into one of the ports of your computer. That is very welcome, since the dongle is quite long and would inevitably be bumped into or bent. You avoid this danger by plugging it into the base station and placing it somewhere out of the way. The second, smaller cable from the base station is used to charge your headset. Fortunately, the headset works even while charging. The USB dongle may also be plugged straight into the computer, in which case the cable roll is not needed for operation.

The microphone feels solid and automatically mutes itself if it is flipped upwards. There is also a mute button on the left ear cup. When muted, a small red light at the end of the microphone serves as visual reminder to the wearer. A nice feature. The microphone beam actually is flexible so that it can be bent towards your mouth when talking. This probably also prevents it from breaking easily should the headset be dropped. The entire microphone connection and assembly feels solid and well done. Additional controls on the left ear cup are a roller for volume, an on-off button, a Dolby 7.1 surround button and three special effect G-buttons, which supposedly can be programmed with various functions.

The Linux experience

There is some software that comes in the packaging, but as usual, it is Windows-only. So, I ignored it and just plugged in the base station cable and USB dongle, switched on the headphones, waited for the green lights on dongle and headset to indicate a good connection, and anxiously wondered what would happen. At first, nothing: music output continued on my laptop's built-in speakers.

But in no time, I noticed that a new sound device had been identified: When you open the PulseAudio volume control application, you can see it in the Output and Input tabs: Logitech G930 Headset Analog Stereo for output and Logitech G930 Headset Analog Mono for input. I have to say, I was very happy to see that without any special software the device was correctly recognized. Compliments to Linux's great USB support.

I selected the Logitech device in both the input and output tabs, restarted my music player and there it was: wireless sound in my new headset! Then I started Skype and made my first test call, which worked as well. The microphone was recognized and used without problems; I did not have to change any settings in Skype. In fact, the sound quality of the microphone is very good, better than with my previous standard wired headsets. The built-in sound card in my laptop had always produced a slight crackle whenever I used the microphone, which was annoying for others in conference calls or when I wanted to record voice-overs for screencasts. With this USB headset there was no crackle at all, just a nice and clear recording.

The roller for volume adjustments works well. However, it does not change the headset volume alone, but actually the volume for the entire system. Using it has the same effect as using the ACPI volume buttons on my laptop, with the familiar volume notifier appearing in the upper right corner of my desktop when the roller is used.

Sound quality: Using an equalizer to get some bass

While the sound is clear, I was initially a little disappointed by the lack of depth and bass on the G930. Apparently for good reason, the Windows version of the Logitech software for this headset contains an equalizer, which allows you to put more emphasis on the lower frequencies. So, I figured I should try this on Linux as well. All I needed was an equalizer. Sadly, there wasn't one in the repositories; why PulseAudio doesn't come bundled with an equalizer by default, or even have one available in the standard repos, is beyond me. After some searching, however, I found Conn O Griofa's wonderful equalizer application for PulseAudio. Conn has his own repository, so all you need to do is execute these instructions:

    sudo add-apt-repository ppa:psyke83/ppa
    sudo apt-get update
    sudo apt-get install pulseaudio-equalizer

The PulseAudio Equalizer then becomes available under your Applications > Sound menu. Select the settings you like, then save and apply. This equalizer application makes a huge difference: while before the headset sounded “tinny” and flat, music suddenly had great volume, very satisfying bass and depth. Here are my settings:

Notice that when active, the equalizer appears as its own output device, which has to be selected.

It's a good idea to start the equalizer whenever you login. For that, go to System > Preferences > Startup Applications. Click 'Add' to enter a new startup-application. Enter “pulseaudio-equalizer enable” in the command field, as shown here. That way, the equalizer always starts and its settings are applied.

Please make sure not to turn the low frequencies up too high. If you do, you will get some pretty nasty distortions or clipping on louder bass tones. My settings shown above are quite moderate; they avoid those distortions and result in great sound with the Logitech G930 headset.

Wireless performance

The range and quality of the wireless signal is impressive. I have walked out of my room and down a pretty long corridor, with the signal having to traverse 'around' a TV, through a heavy door and a pretty solid wall, and along the corridor. Only then were the first distortions audible. The range is advertised as 40 feet, but I would think it's actually more than that.

Where there is much light...

Considering that the custom Windows software for the G930 cannot be used on Linux, it is no surprise that a few things do not work. Likewise, Linux itself – particularly PulseAudio – still has a few noticeable quirks. Here's a list of what I found so far:

  • The programmable “G buttons” on the left ear cup don't do anything. On Windows, using Logitech's provided software, you can program them with various functions, such as voice morph, or special application specific shortcuts. It would be nice if I could program those on Linux as well, for example to pause my music player, or to pick up an incoming Skype call. Sadly, it's not meant to be, unless Logitech (or some enterprising soul) manages to release useful Linux software for this task.
  • The Dolby Surround Sound button has no effect. Since I don't use the headset for gaming, though, this doesn't concern me much.
  • When you adjust the volume, either with your computer's ACPI buttons or the roller on the side of the headset, sometimes a faint crackling is audible. It's not really very bothersome, since it's only there right when you change the volume, but I noticed it nevertheless.
  • The ear cups are nicely padded and thus muffle ambient noise well. However, when you are in a VoIP call, you therefore also have no good feel for the sound or volume of your own voice. It doesn't seem to be possible to redirect your microphone input to the headset speakers; this, however, seems to be more of a PulseAudio limitation than one of the headset. I tried to switch on the loopback module for PulseAudio, but there is too much of a delay to make this useful. So, when using VoIP, you have to be a bit mindful of your own voice. You can get used to it, though.
  • For some reason, I cannot switch sound devices in the middle of a song. When I want to switch to the external speakers, or back to the USB headset, I have to do that switch in the Input and Output device dialogs of the PulseAudio volume control application. And even then, at first nothing changes: I actually have to restart my sound applications (Skype, gnome-player, etc.) before the change is recognized. Quite annoying. Again, this seems to be mostly a PulseAudio limitation and doesn't appear to have anything to do with the Logitech headset per se. However, after years of using the standard built-in sound card and cheap headphones, where changing devices was merely a matter of pulling a plug, I certainly noticed this particular aspect. Pulling the plug on the USB dongle turned out to be not a good idea: doing so at some point locked up my computer. So, switching the devices in software is the preferred way. A properly working and simple-to-use PulseAudio device chooser application would be great; there is a package with exactly this name in the Ubuntu repositories, but it does not do (at least not easily) what the name seems to promise.


For the most part, the Logitech G930 on Ubuntu (10.04, and most likely later versions as well) has been a very positive “works out of the box” experience for me. Great sound quality, comfortable to wear, good for music and VoIP, good build quality. I'm happy with my purchase. While there is room for improvement, I would certainly recommend this headset for use on Linux.

You should follow me on Twitter...