Monday, 26 November 2012

End Of Term: Freeseer & UCOSP

Well, this is my second attempt at this post, because Blogger doesn't play nice with Ctrl+Z. It deleted EVERYTHING and wouldn't revert the changes because it auto-saves EVERYTHING. Bad Blogger. Bad.

This is my end of term blog entry for the UCOSP program and Freeseer. It's been a good few months, but it's drawing to an end. In this post, I will detail what I am handing off for the test framework, share some roadblocks I encountered, and offer some advice and thoughts on UCOSP in general.

First off: how far did I get this term?
Well, I'm happy to report that at the time of writing, my code is being reviewed and updated with suggested changes. More specifically, there are about 25 unit tests primarily spanning the record, config tool and talk editor apps but also covering a few other easy things.

Some advice to the mentors about assigning requirements for testing: requiring "X unit tests" isn't a good metric. I found this out rather early. If we go by what unittest reports, we only get the number of test_* methods which get executed. However, a single test_* method can contain several assertions (a well-written, modular test usually will). A contributor can write 3 test_* methods but wind up spanning dozens of assertions. Watch out for this :-)
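To make the counting mismatch concrete, here is a minimal sketch (the settings dict is a hypothetical stand-in for Freeseer's config values, not real Freeseer code): unittest reports one test, even though the method makes three assertions.

```python
import io
import unittest

class TestConfigDefaults(unittest.TestCase):
    """One test_* method that spans several assertions."""

    def test_default_settings(self):
        # Hypothetical defaults standing in for Freeseer's config.
        defaults = {"resolution": "720p", "audio": True, "fps": 30}
        # unittest counts this method as ONE test...
        self.assertEqual(defaults["resolution"], "720p")
        self.assertTrue(defaults["audio"])
        self.assertEqual(defaults["fps"], 30)
        # ...even though it just made three assertions.

# Run the case and see what unittest reports.
suite = unittest.TestLoader().loadTestsFromTestCase(TestConfigDefaults)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print("tests counted: %d" % result.testsRun)  # tests counted: 1
```

So "write 25 unit tests" and "write 25 test_* methods" can describe very different amounts of coverage.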

In addition to the unit tests I wrote, I posted a short series of blogs detailing why I went with unittest versus py.test and some documentation for extending and running Freeseer's test suite. This documentation is being moved into Freeseer's documentation.

The initial project proposal included 2-3 weeks of code refactoring, but it was decided that the final few weeks would be better spent on more focused testing of the record, config tool and talk editor apps.

In all good projects, we encounter roadblocks. For each roadblock below, I give a description and the solutions I tried, whether they worked or not. I include the solutions that worked (if any) because others can turn temporary hacks into fully working solutions, or apply them to other problems. That's what working in a team is about: knowledge transfer.

Here's a list of some of the roadblocks I encountered:

1. The Qt library
For starters, anyone planning to work on testing needs to master this library.
When testing a GUI app like Freeseer, you should test it by interacting with it the way your user does. Something I wish I had found time for this term was developing use cases for the app, i.e. "when the user clicks X, this happens". This aids testing immensely.

The Qt library is amazing. It is purely object-oriented, and once you get the hang of its event-driven framework, working with it is straightforward.
In my honest opinion, though, the QtTest library/module is horrible. I'm still wondering how such an amazing framework ended up with such a poor test library.

As you read through the test code, you'll realize that not all GUI objects respond to simulated UI events uniformly. A great example is how a QCheckBox cannot be clicked using the mouseClick() method available in the test library. Instead, one must call click() on the instance explicitly, which breaks the user-oriented test paradigm.
All hope is not lost: as long as the widget instance can be referenced, there is a way to interact with it, perhaps just not directly. This is a "gotcha" of testing Qt widgets -- more on this in part 3.

2. GStreamer
Alright, so this isn't actually an issue with GStreamer or the freeseer-record application itself.
The plan was to implement a thorough unit test which would make the record application switch from its initial state to standby, then to record, wait 10 seconds, then stop. At each stage, the test would verify that the application was in the expected state.

In theory, this is a fantastic way of ensuring the record app stays stable and correct. The issue occurs when the app switches from initial state to standby to record. Between initial state and standby, a preview is loaded. I'm told this is done by starting GStreamer and pausing it almost immediately. Then, when the app goes into the record state, GStreamer is run for real.

Unfortunately, loading GStreamer is not instantaneous, so the preview takes some time to appear. Switching from standby to record too quickly actually causes Freeseer problems, and in some cases a full UI freeze. This is exactly what the test triggers: transitioning from standby to record before the preview has loaded.

The obvious solution is to detect when GStreamer has finished loading the preview before continuing the record tests. I found this was not such a trivial matter. For example, it turns out that GStreamer's state is set to Paused before the preview has loaded, so it is not sufficient to check that flag and move on. With a bit more trial and error, it seems that no UI elements tied to the RecordApp instance can be checked to determine whether the preview has loaded. I assume there must be a way to query GStreamer itself, or to read the log and determine the state of the application that way, but I did not dig much further into this.
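Whatever readiness signal eventually gets found, the test-side shape of the fix is a "wait for condition" helper rather than a fixed sleep. This is only a sketch; `preview_loaded` below is an assumed readiness check that would still have to be found or built, not an attribute Freeseer actually exposes.

```python
import time

def wait_for(condition, timeout=10.0, interval=0.1):
    """Poll `condition()` until it returns True or `timeout` seconds pass.

    Returns True if the condition became true in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage in a record-app test: block until the preview is up
# before switching from standby to record.
#
#   assert wait_for(lambda: app.preview_loaded), "preview never loaded"
```

Compared to `time.sleep(5)`, this keeps the test fast on quick machines and merely slow (instead of flaky) on loaded ones.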

3. Pop-up boxes
This isn't a huge problem, but it needs to be said: without resorting to hacks, you cannot test widgets which have no link back to the application instance.
For example, if a pop-up box is created and torn down in the local scope of a function, then it cannot be reached from a test -- but we want to test these things.

Developers, please ensure that ALL of your widget instances are tied to the main class instance.
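Here is the pattern in plain Python (no Qt, and all names hypothetical) just to show why the local-scope version is untestable while the instance-tied version is not:

```python
class MainWindow(object):
    """Plain-Python stand-in for a Qt main window (hypothetical names)."""

    def __init__(self):
        self.error_dialog = None  # a handle that tests can reach

    def show_error_untestable(self, message):
        # Anti-pattern: the dialog exists only in this local scope, so a
        # test has no reference it could use to click or inspect it.
        dialog = {"text": message, "visible": True}
        del dialog  # gone as soon as the function returns

    def show_error_testable(self, message):
        # Preferred: tie the dialog to the instance, as asked above.
        self.error_dialog = {"text": message, "visible": True}

win = MainWindow()
win.show_error_untestable("disk full")
print(win.error_dialog)  # None -- nothing for a test to assert on
win.show_error_testable("disk full")
print(win.error_dialog["text"])  # disk full
```

With a real Qt dialog, holding the reference on the instance is what lets a test find the widget and interact with it.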

4. Logging and VERBOSE output
This is not officially a roadblock.

The way logging is used in Freeseer is excellent -- it reminds me of old C utilities with configurable verbosity. Unfortunately, unittest has its own verbose output which should take priority when running the test suite, but it ends up drowned out by Freeseer's output.

This needs to be resolved. The test modules should have some means of turning off Freeseer's output temporarily. I tried disabling logging in each module; this didn't work for all the apps, and it is bad practice because there is no code reuse. I'd entertain the possibility of creating a FreeseerTest class which each test module would subclass. This base class would control global settings for the test suite -- like turning logging on and off.
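A minimal sketch of that idea, using only the standard library (FreeseerTest here is the proposed class, not something that exists in Freeseer yet, and TestSomething is a made-up subclass):

```python
import io
import logging
import unittest

class FreeseerTest(unittest.TestCase):
    """Proposed shared base class controlling suite-wide settings."""

    @classmethod
    def setUpClass(cls):
        # Silence the application's logging so unittest's own verbose
        # output is not drowned out.
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        # Restore normal logging once this class's tests have run.
        logging.disable(logging.NOTSET)

class TestSomething(FreeseerTest):
    def test_logging_is_silenced(self):
        # This would normally print to the console; under FreeseerTest
        # it is suppressed.
        logging.getLogger("freeseer").error("noise the suite should hide")
        self.assertGreaterEqual(logging.root.manager.disable,
                                logging.CRITICAL)

suite = unittest.TestLoader().loadTestsFromTestCase(TestSomething)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

Every test module would subclass FreeseerTest instead of unittest.TestCase, so the on/off policy lives in exactly one place.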

Some final thoughts on UCOSP and Freeseer
I don't regret doing UCOSP; I'm just a little underwhelmed. This year, we had about 2 months to work on our projects. This sounds awesome because 2 months earns me a university credit, but it's much too short a time to put serious work into a project -- let alone a somewhat mature open source project.

I like the idea of UCOSP, but I'd argue that it isn't for everyone (which is true of anything) as the project choices are limited. The Freeseer project is well developed and continues to attract students' contributions. Some advice to future Freeseer contributors: if this is your UCOSP project choice, know what you're interested in working on before the code sprint. The mentors warn you about this, but many of us -- myself included -- didn't listen. We showed up to the code sprint, spent nearly two days setting Freeseer up and had to scramble to decide what we'd do for the term.

We've been asked to mention what we did and didn't enjoy. Honestly, I can't say I didn't enjoy working with this team -- a bunch of great people. The only thing I would like to see changed is the way we pick projects and the way Freeseer is presented to us.

I think students should be given a better idea of where the project is going, or the team should have more project descriptions ready. That way, students spend more time being creative with design and implementation than trying to dream up some new magical feature. Obviously, some will have excellent ideas, and they will be encouraged (as we were) to be vocal about them.

A final note: I chose UCOSP over my university campus' Capstone Project course (CSC490). The amount of work the students there were able to put in, compared to what we managed in UCOSP, is drastically different. I'd say one of the factors is that for several students in UCOSP, working with little to no supervision is new, and it causes people to drop off the radar. Once you drop off the radar in a project with a short deadline, it's essentially impossible to catch up -- don't let this happen to you. Again: UCOSP is NOT for everyone!

I had a blast this term and, though I probably won't be a regular contributor to Freeseer after this (I have my own project to contribute to), I thank the mentors and UCOSP organizers for the opportunity!