Monday, 26 November 2012

End Of Term: Freeseer & UCOSP

Well, this is my second attempt because blogger doesn't play nice with Ctrl+Z. It deleted EVERYTHING and wouldn't revert changes because it auto-saves EVERYTHING. Bad blogger. Bad.

This is my end-of-term blog entry for the UCOSP program and Freeseer. It's been a good few months, but it's drawing to an end. In this post, I will detail what I am handing off for the test framework, share some roadblocks I encountered, and give some advice and my thoughts on UCOSP in general.

First off: how far did I get this term?
Well, I'm happy to report that at the time of writing, my code is being reviewed and updated with suggested changes. More specifically, there are about 25 unit tests, primarily spanning the record, config tool and talk editor apps, but also covering a few other, simpler areas.

Some advice to the mentors about assigning requirements for testing: asking for "X unit tests" isn't a good metric. I found this out rather early. If we go by what unittest reports, we only count the test_* methods that get executed. However, a single test_* method can contain several assertions (a well-modularized test module will). So a student can write 3 test_* methods yet wind up covering dozens of assertions. Watch out for this :-)
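For example (these names are made up, not actual Freeseer tests), the following counts as a single unit test in unittest's books, yet it checks three separate things:

import unittest

class TestConfigDefaults(unittest.TestCase):  # hypothetical test class

    def test_defaults(self):
        # One test_* method == one "unit test" by unittest's count,
        # but it spans three assertions.
        config = {"resolution": "default", "auto_hide": False, "videodir": "~/Videos"}
        self.assertEqual(config["resolution"], "default")
        self.assertFalse(config["auto_hide"])
        self.assertNotEqual(config["videodir"], "")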

In addition to the unit tests I wrote, I posted a short series of blogs detailing why I went with unittest versus py.test and some documentation for extending and running Freeseer's test suite. This documentation is being moved into Freeseer's documentation.

The initial project proposal included 2-3 weeks of code refactoring, but it was decided that the final few weeks would be better spent on more focused testing of the record, config tool and talk editor apps.

In all good projects, we encounter roadblocks. For each of mine below, I give a description and the possible solutions I tried, whether they worked or not. I include the solutions that worked (if any) because others can turn temporary hacks into proper solutions, or apply them to another problem. That's what working in a team is about: knowledge transfer.

Here's a list of some of the roadblocks I encountered:

1. The QT library
For starters, anyone planning to work on testing needs a mastery of this.
When testing a GUI app like Freeseer, you should test it by interacting with it the way your user does. Something I wish I had found time to do this term was to develop use cases for the app, i.e. "when the user clicks X, this happens". This aids in testing immensely.
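To give an idea of what user-style testing looks like with QtTest, here is a small sketch (the widget names on self.app are hypothetical, not actual Freeseer attribute names):

from PyQt4.QtCore import Qt
from PyQt4.QtTest import QTest

# Simulate the user typing a talk title, then clicking the record button.
QTest.keyClicks(self.app.talk_title_edit, "My Talk")      # hypothetical widget
QTest.mouseClick(self.app.record_button, Qt.LeftButton)   # hypothetical widget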

The QT library is amazing. It is purely object-oriented and once you get the hang of its event-driven framework, it's trivial to work with.
In my honest opinion, the QtTest library/module is horrible. I'm still wondering how such an amazing framework has such a crappy test library.

As you read through the test code, you'll realize that not all GUI objects respond to simulated user input uniformly. A great example is how a QCheckBox cannot be clicked using the mouseClick() method available in the test library. Instead, one must call click() on the instance explicitly, which breaks the user-oriented test paradigm.
All hope is not lost: as long as the widget instance can be referenced, there is a way to interact with it, perhaps just not directly. This is a "gotcha" of testing QT widgets -- more on this in part 3.
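As a concrete example of the workaround (the checkbox reference is again a hypothetical name):

from PyQt4.QtCore import Qt
from PyQt4.QtTest import QTest

# What you would expect to work, in the user-oriented style:
QTest.mouseClick(self.config_tool.auto_hide_checkbox, Qt.LeftButton)  # does not toggle it

# The workaround: drive the widget through its own API instead.
self.config_tool.auto_hide_checkbox.click()  # QAbstractButton.click() toggles it and emits clicked()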

2. Gstreamer
Alright, so it's not actually an issue with GStreamer or the freeseer-record application.
The plan was to implement a powerful unit test which would make the record application switch from its initial state, to standby, to record, wait 10 seconds, then stop. At each stage, the test would verify that everything was in order.

In theory, this is a fantastic way of ensuring the record app stays stable and correct. The issue occurs when the app switches from the initial state to standby to record. Between the initial state and standby, a preview is loaded. I'm told this is done by running GStreamer and pausing it almost immediately. Then, when the app goes into the record state, we run GStreamer.

Unfortunately, loading GStreamer is not instantaneous, and therefore the preview takes some time to appear. Switching from standby to record too quickly actually causes Freeseer problems and in some cases a full UI freeze. This is exactly what the test causes to happen: transitioning from standby to record before the preview has loaded.

The obvious solution is to detect when GStreamer has loaded the preview before continuing the tests for record. I found this was not such a trivial matter. For example, it turns out that GStreamer's state is set to Paused before the preview has actually loaded, so it is not sufficient to check that flag and move on. With a bit more trial and error, it seems that no UI elements tied to the RecordApp instance can be checked to determine if the preview has loaded. I assume there must be a way to query GStreamer itself, or to read the log and determine the state of the application that way. I did not go much further into this.
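For anyone who picks this up, here are the two directions I would try first. Both are untested sketches: pipeline stands in for whatever object freeseer-record uses to drive GStreamer, and the calls assume the gst-python 0.10 bindings Freeseer uses.

import gst                        # gst-python 0.10 bindings (assumption)
from PyQt4.QtTest import QTest

# (a) Ask the pipeline directly. get_state() blocks up to the timeout and
#     returns (return_code, current_state, pending_state).
ret, current, pending = pipeline.get_state(gst.SECOND)
preview_ready = (current == gst.STATE_PAUSED and pending == gst.STATE_VOID_PENDING)

# (b) The crude fallback: give the preview a fixed head start before
#     switching from standby to record.
QTest.qWait(2000)                 # spin the Qt event loop for 2 seconds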


3.  Pop-up Boxes
So this isn't a huge problem, but it needs to be said. Without a hack, there is no good way to test widgets which have no link back to the application instance.
For example, if your pop-up box is created and torn down in the local scope of a function, then it effectively can't be tested -- but we want to test these things.

Developers, please ensure that ALL of your widget instances are tied to the main class instance.
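In code, the difference is simply whether a test can still reach the widget after the method returns (AboutDialog here is a made-up class, purely for illustration):

# Hard to test: the dialog only ever exists in the method's local scope.
def show_about(self):
    dialog = AboutDialog(self)
    dialog.show()

# Testable: keep the reference on the instance so a test can reach it,
# e.g. via self.app.about_dialog.
def show_about(self):
    self.about_dialog = AboutDialog(self)
    self.about_dialog.show()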


4.  Logging and VERBOSE output
This is not officially a roadblock. 

The way logging is used in Freeseer is excellent -- it reminds me of old C utilities with configurable verbosity. Unfortunately, unittest has its own verbose output which should take priority when running the test suite, but it ends up getting drowned out by Freeseer's output.

This needs to be resolved. The test modules should have some means of turning off Freeseer's output temporarily. I tried disabling logging in each test module; this didn't work for all the apps and is bad practice because there is no code reuse. I'd entertain the possibility of making a FreeseerTest class which each test module would subclass. This main class would control global settings for the test suite -- like turning logging on and off.
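FreeseerTest doesn't exist yet, but a minimal sketch of the idea, using the standard logging module and the class-level hooks unittest gained in Python 2.7, might look like this:

import logging
import unittest

class FreeseerTest(unittest.TestCase):
    """Hypothetical base class: silences Freeseer's log output while tests run."""

    @classmethod
    def setUpClass(cls):
        logging.disable(logging.CRITICAL)  # mute everything below CRITICAL

    @classmethod
    def tearDownClass(cls):
        logging.disable(logging.NOTSET)    # restore normal logging

# Test modules would then subclass FreeseerTest instead of unittest.TestCase.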

Some final thoughts on UCOSP and Freeseer
I don't regret doing UCOSP, I'm just a little underwhelmed. This year, we had about 2 months to work on our projects. That sounds awesome because 2 months gives me a university credit, but it's much too short a time to put any serious work into a project -- let alone a somewhat mature open source project.

I like the idea of UCOSP, but I'd argue that it isn't for everyone (which is true for anything) as the project choices are limited. Freeseer itself is well developed and continues to attract students' contributions. Some advice to future Freeseer contributors: if this is your UCOSP project choice, know what you're interested in working on before the code sprint. The mentors warn of this, but many -- including myself -- didn't do so. We showed up to the code sprint, spent nearly two days setting Freeseer up, and had to scramble to decide what we'd do for the term.

We've been asked to mention what we did and didn't enjoy. Honestly, I can't say I didn't enjoy working with this team -- a bunch of great people. The only thing I would like to see changed is the way we pick projects and the way Freeseer is presented to us.

I think students should be given a better idea of where the project is going, or the team should have more project descriptions ready. That way, students spend more time being creative with design and implementation and less time just trying to come up with some new magical feature. Obviously, some will have excellent ideas of their own, and they should be encouraged (as we were) to be vocal about them.

A final note: I chose UCOSP over my university's Capstone Project course (CSC490). The amount of work the students there were able to put in, compared to what we were able to do in UCOSP, is drastically different. I'd say one of the factors is that for several students in UCOSP, working with little to no supervision is new, and it causes people to drop off the radar. Once you drop off the radar in a project with a short deadline, it's essentially impossible to catch up -- don't let this happen to you. Again: UCOSP is NOT for everyone!

I had a blast this term and though I probably won't be a regular contributor to Freeseer after this (I have my own project to contribute to), I thank the mentors and UCOSP organizers for the opportunity!

Wednesday, 31 October 2012

Freeseer Test Framework Design


In this post, we'll learn about the steps required for developers to implement their own unit tests.

The steps we'll cover are:
  1. Setting up your environment to support the testing
  2. Structure of the src/freeseer/test folder
  3. Creating a new unit test module & the unittest.TestCase "lifecycle"
  4. Running the test suite
  5. Gotchas!

1. Setting up your environment to support testing

This is one of the easiest steps because there are no extra dependencies for testing! All you need is a Python installation that contains the unittest module (part of the standard library) and the QtTest module (part of the PyQt4 package, which is used throughout Freeseer).

If you want to make sure, you can fire up python and check:
$ python
Python 2.7.3 (default, Aug  1 2012, 05:16:07)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>>

We're using Python 2.7.x, which is the minimum version required for Freeseer at this time (and also the maximum, I believe, since we do not support Python 3 yet).

Next, import unittest then import QtTest:
>>> import unittest
>>> from PyQt4 import QtTest

If there are any errors, you won't be able to proceed with testing. If unittest fails to import, then something is missing in your basic Python installation. If QtTest fails to import then it's likely Freeseer will not even run!


2. Structure of the src/freeseer/test folder

Now that we know the installation dependencies are working, let's dive into the test suite. At this time, the test modules are contained in a folder in the src/freeseer directory called test. Since as of Python 2.7, unittest supports (recursive) test module discovery, all test modules should exist somewhere inside the src/freeseer/test folder because when we run the suite later, we'll be pointing to this folder as the root.

Since Freeseer is well organized into modules, we'd like to mirror this setup in the test folder. This means that if your code is located in src/freeseer/framework/core.py, then your test code should be found in src/freeseer/test/framework/test_core.py (more about file naming conventions later). We do this for logical ordering: it tells us that test modules in src/freeseer/test/folder_name are for testing modules in src/freeseer/folder_name.

Note: If you are creating a new folder in src/freeseer/test/*, ensure that your folder contains a __init__.py such that your test module can be imported by unittest during discovery.
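To make that concrete, here is roughly what the mirrored layout looks like for the framework example used later in this post:

src/freeseer/framework/presentation.py               <- code under test
src/freeseer/test/__init__.py
src/freeseer/test/framework/__init__.py
src/freeseer/test/framework/test_presentation.py     <- its test module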



3. Creating a new unit test module & the unittest.TestCase "lifecycle"

We now know where the test suite is located: src/freeseer/test. We also know that the directory hierarchy below test matches the one below src/freeseer. Next, as developers we'd like to add a new test module or modify an existing one.

Let's write a simple test case for the Presentation class found in src/freeseer/framework/presentation.py. This class is simple: it is a model which holds data. We pass a bunch of parameters, and all the class attributes are public. We'll create an instance, ensure the values we pass are correctly set, and learn about the unittest module in the process.

We'll need to go to src/freeseer/test and check if there is a folder named framework. If there isn't, let's create it and immediately add an empty __init__.py inside it (see the Note from part 2).

Next, we'll create our test module. It would be nice to keep with convention and name it test_presentation.py (i.e. the convention is test_my_module_name.py, where the module counterpart is named my_module_name.py), but there is no way to enforce this, so it is going to be a "best practice". Fortunately, unittest does enforce something: the name of the test module must have the form test_*.py or it will not be discovered. Thus, your module name must start with test_ and finish with .py. This can be configured (we'll see how in part 4), but the default pattern for unittest discovery is 'test_*.py'.

We now have our test module which can be found at:
src/freeseer/test/framework/test_presentation.py.

Let's add some functionality!

Note: The example used in the rest of this entry is to experiment with unittest and may not implement logical test cases!

$ cat test_presentation.py
#!/usr/bin/python

import unittest

from freeseer.framework.presentation import Presentation

class TestPresentation(unittest.TestCase):
   
    def setUp(self):
        self.pres = Presentation("John Doe", event="haha", time="NOW")

    def test_correct_time_set(self):
        self.assertTrue(self.pres.time == "NOW")
        self.pres.speaker = "John Doe"

    def test_speaker_not_first_param(self):
        self.assertNotEquals(self.pres.speaker, "John Doe")

    def test_event_is_default(self):
        self.assertFalse(self.pres.event != "Default")





There we have it, our first unit test to test Freeseer functionality!
Since there are no comments, let's go through it quickly and take some time to understand the important parts:

import unittest
This will be in every test module. As we will see, each test class we write will subclass unittest.TestCase and this class is also where we get the assert* family of calls.

from freeseer.framework.presentation import Presentation
This will also be in every module but slightly modified. Our target class to test this time is the Presentation class found in src/freeseer/framework/presentation.py. When python loads the packages, it will convert src/freeseer/framework/presentation.py into freeseer.framework.presentation (this is an oversimplification of course). Therefore, ensure your import path is correct for the target module. From the import path, simply import the class you wish to test against -- in our case this is Presentation.

class TestPresentation(unittest.TestCase):
First: unittest.TestCase is required as the parent class or this will not be treated as a test class.
This is the class we're creating which will encapsulate all the testing functionality for the Presentation class.  How do we know we're using this class to test the Presentation class? -- we don't. It's up to the developer to name it appropriately and naming the class TestPresentation is another unenforceable best practice.


setUp, runTest, test_*, tearDown and the unittest "lifecycle"
I invite you to read the documentation.

unittest.TestCase offers a "life cycle", i.e. an ordered sequence of method calls allowing a developer to set up, run and tear down tests.

If the unittest.TestCase has implemented the setUp() method, then this method runs first. It is used to set up anything required for the tests.

The next method which will run depends on whether the developer implemented runTest() or test_* methods. The choice here is a matter of opinion, but if runTest() is implemented, then all tests are in this method. If no assertion fails, runTest() will return OK, otherwise it will return FAIL. If a collection of test_* methods is implemented, then we can still have several assertions in each test_* method, but now every individual test_* gets its own OK/FAIL.

If the unittest.TestCase has implemented the tearDown() method, then this method runs last. It is used to unset or destroy anything that was required for the tests.

The lifecycle
There is a predefined order of execution for the above methods:

Case 1: User implements runTest()
First, setUp() will be executed. If it raises an exception, then runTest() will not be executed. If setUp() succeeds, then runTest() is executed. Regardless of the result of runTest(), tearDown() will be executed.

Case 2: User implements test_* methods
As above, if setUp() fails, then the test_* method will not be executed, and regardless of the result of the test_* method, tearDown() will be executed. The difference is that the cycle repeats: for each test_* method, we execute setUp(), that test_* method, then tearDown().


Note: The order in which test_* methods are run is determined by their alphanumeric ordering. For a given unittest.TestCase class, the test_* methods will be sorted alphanumerically in increasing order, then run in that order.
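Here is a toy illustration (not Freeseer code) of how the calls interleave for two test_* methods:

import unittest

class TestLifecycle(unittest.TestCase):

    def setUp(self):       # runs before EACH test_* method
        pass

    def tearDown(self):    # runs after EACH test_* method, pass or fail
        pass

    def test_a(self):      # alphanumerically first, so it runs first
        pass

    def test_b(self):      # runs second, with its own setUp()/tearDown()
        pass

# Execution order: setUp, test_a, tearDown, setUp, test_b, tearDown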
 
The assert* family of methods (documentation)
Each of these has the power to FAIL a test_* or runTest method. A test could contain several assert methods and will continue to run until an assertion fails. If no assertion fails, then the test will be marked as OK.

A useful addition to the assert methods provided is the option to pass a message to be shown when the assertion fails. For example:

    def runTest(self):
        self.assertEquals(3, 4, "Silly you, 3 is not 4!")

If the optional message ( "Silly you, 3 is not 4!" in this case) is given, then if the assertion fails the user will be given this optional message instead of the generic message. 


4. Running the test suite (documentation)
We've written our test case(s) and now we want to see the results. First, let's go over the expected results:

Recall: we are using the test_* methods, thus setUp() will execute before each test_* and the test_* methods will be executed in alphanumeric order. A test_* will FAIL if any of its assertions are false.

   def setUp(self):
        self.pres = Presentation("John Doe", event="haha", time="NOW")


In setUp(), we are creating a Presentation instance and storing it in self.pres. Now, each test_* will access this instance using self.pres.   

   def test_correct_time_set(self):
        self.assertTrue(self.pres.time == "NOW")
        self.pres.speaker = "John Doe"


In test_correct_time_set(), we are checking that the time parameter in the constructor was correctly set to "NOW", then we are setting self.pres.speaker to "John Doe". 

    def test_speaker_not_first_param(self):
        self.assertNotEquals(self.pres.speaker, "John Doe")

In test_speaker_not_first_param(), we are checking that "John Doe" was in fact not set as the Presentation.speaker (it will be set as Presentation.title).

    def test_event_is_default(self):
        self.assertFalse(self.pres.event != "Default")


Finally, in test_event_is_default(), we are checking that self.pres.event was set as "Default". Note that this case should fail. 

Before we begin, a note about the alphanumeric order. The test_* methods will run in the following order:
  1. setUp(), test_correct_time_set()
  2. setUp(), test_event_is_default()
  3. setUp(), test_speaker_not_first_param()

Note: to avoid package import errors, we need to run the following commands from the src folder.

Example #1:

This first method is the most basic and least verbose version. We are running unittest as a module and telling it to "discover": recursively find tests starting from freeseer/test.

In the output, we see a FAIL in test_event_is_default, which we expected to fail.
Along with the FAIL message, we get the module information (framework.test_presentation.TestPresentation), the line number where the failure occurred, the code of the failed assertion, and the AssertionError with a generic message (this is where your custom message would be printed instead). Finally, we get the number of tests executed, the total time and the number of failures.

Something to note here is that even if a test fails, we mark it as FAIL and move on. This can be configured (see -f in Example #3).



$ python -m unittest discover freeseer/test/
.F.
======================================================================
FAIL: test_event_is_default (framework.test_presentation.TestPresentation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
(path to test_presentation.py), line 20, in test_event_is_default
    self.assertFalse(self.pres.event != "Default")
AssertionError: True is not false

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)


Example #2:

This next method is the same as before, but with the added -v. We are telling unittest to be more verbose (output more information). The output will be as above but will also contain a listing of each test method, module and result information.

$ python -m unittest discover freeseer/test/ -v
test_correct_time_set (framework.test_presentation.TestPresentation) ... ok
test_event_is_default (framework.test_presentation.TestPresentation) ... FAIL
test_speaker_not_first_param (framework.test_presentation.TestPresentation) ... ok

======================================================================
FAIL: test_event_is_default (framework.test_presentation.TestPresentation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File (path to test_presentation.py), line 20, in test_event_is_default
    self.assertFalse(self.pres.event != "Default")
AssertionError: True is not false

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)



Example #3:

As in the previous method, we are telling unittest to be more verbose, but now we are also passing -f. This option means "fail fast" and will cancel the entire test execution on a failure. Looking at the output, only 2 tests were executed because the second one failed.

Note: If the intent is to see whether or not your new code breaks any functionality, you will likely use this method.

$ python -m unittest discover freeseer/test/ -v -f
test_correct_time_set (framework.test_presentation.TestPresentation) ... ok
test_event_is_default (framework.test_presentation.TestPresentation) ... FAIL

======================================================================
FAIL: test_event_is_default (framework.test_presentation.TestPresentation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File
(path to test_presentation.py), line 20, in test_event_is_default
    self.assertFalse(self.pres.event != "Default")
AssertionError: True is not false

----------------------------------------------------------------------
Ran 2 tests in 0.006s

FAILED (failures=1)



5. Gotchas!

This section is like a Q&A.

Q: Why didn't test_speaker_not_first_param() fail if it is being set to "John Doe" in test_correct_time_set() ?
A: Because before test_speaker_not_first_param() is invoked, setUp() is executed, which resets self.pres to a new instance. Thus self.pres is back in its freshly constructed state, with self.pres.speaker == "".

Q: When I run (example #1, #2 and/or #3), I get the unittest help menu -- why could this be happening? Or: I am getting weird import errors from unittest -- what's going on?
A: From experience this was ultimately the result of an import error or invocation from the wrong place...

1. Ensure you are in the src folder 
2. Check that you are in fact placing -v and/or -f after the discover path (easiest to remember if they're at the end of the command).
3. Ensure that you have __init__.py files (they can be empty; they only tell Python to treat the folder as a package) in all the directories inside src/freeseer/test.
4. Make sure that in your test module, you are importing from freeseer.folder.module_name


Test Frameworks for Freeseer


We learn a bunch of programming languages in school. Alongside these, we are told about debuggers and test libraries, but for such small assignments, it's much simpler to use print statements to debug the code. In fact, I've rarely had to use testing libraries or sanity checks in code I've written for an assignment (scratch that, I HAD to debug my CSC458 router implementation... curse those TCP/UDP packets -- GDB FTW!).

As projects mature, it is good to supplement the project's stable code with a test framework -- there are several good reasons to test and it's never too late to start. There are obvious pros to testing often but a team needs to be motivated to use it. For this, a test framework needs to have a few important qualities.
Note: These points are my opinion only and many people have their own views on the matter!

A test framework should:
  • Have proper documentation
  • Contain a simple "Start Tests" method or script
  • Allow a simple and lightweight structure for developers to add their test cases
  • Seamlessly integrate into existing code
  • Allow features to be tested using real use cases
  • Be as portable as the application it is testing
  • Clearly output results for tests
  • etc...
Let's talk about those requirements for a bit. For my UCOSP project, I proposed to design and implement a test framework for Freeseer. Bear in mind, the design and implementation criteria mainly involve researching and analyzing what is best for the team. In addition to the above requirements, Freeseer poses some interesting requirements of its own:
  • Freeseer is written in Python, uses QT and GStreamer
  • Freeseer's source tree is already organized into package format
  • In the best case scenario, a test framework would impose little to no extra dependencies on Freeseer
Now I'll take some time to explain why each of these requirements makes designing a test framework tricky. For starters, we're using Python. As I'll discuss later, the main choices for Python testing are unittest and py.test. In addition, the Python QT bindings offer functionality to test GUI applications, so the final test framework choice will need to be compatible with this to avoid fragmented testing. The fact that Freeseer already has a well-organized source tree means that test modules shouldn't break this organization. Because of this, the test framework should be able to run inline with the source modules, in a completely separate location, or both. Finally, the most important constraint: little to no extra dependencies. This one is tough because many Python developers will know that despite the functionality available in the standard library, it is simple to add new functionality using third-party modules which are trivial to install.


Unittest vs. Py.Test

I didn't have to look very hard to find exhaustive comparisons of these two. A quick Google search reveals several blog entries and documents from developers, Python enthusiasts and testers. I read a few, but the explanation that stood out to me was a series of blog entries found here and here.



The first link is about unittest, the second about Py.Test. For both, the author goes into great detail while focusing on: availability, ease of use, API complexity, test execution customization, test fixture management, test reuse and organization, assertion syntax, and dealing with exceptions. The entries also have code snippets to show how everything is set up. At the end of each entry, the author wraps up with pros and cons for the framework just discussed.

If you're interested in general, I'd absolutely suggest reading the series, but to keep this short, I'll summarize my findings using relevant points to Freeseer.

Unittest 
Unittest is part of the Python standard library and has been for quite some time. To use it, one creates a class which subclasses unittest.TestCase. This module also provides a setup and teardown mechanism for test cases, along with test suites to run several automated tests in sequence. To create a test suite, all tests must be imported and aggregated into a single main module (a small sketch of this follows the pros/cons list below). A test will pass or fail based on the result of assertions or explicit fails, and test result output is customizable. Here are some pros/cons from the author for unittest:

Pros
  • available in the Python standard library
  • easy to use by people familiar with the xUnit frameworks
  • flexibility in test execution via command-line arguments
  • support for test fixture/state management via set-up/tear-down hooks
  • strong support for test organization and reuse via test suites
Cons
  • xUnit flavor may be too strong for "pure" Pythonistas
  • API can get in the way and can make the test code intent hard to understand
  • tests can end up having a different look-and-feel from the code under test
  • tests are executed in alphanumerical order
  • assertions use custom syntax
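Before moving on to Py.Test, here is the aggregation step mentioned above in its smallest form (a toy TestCase, not Freeseer code):

import unittest

class TestExample(unittest.TestCase):  # stand-in for a real test class
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Load the TestCase into a suite and hand the suite to a runner.
suite = unittest.TestSuite()
suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestExample))
unittest.TextTestRunner(verbosity=2).run(suite)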


Py.Test
"As Python unit test frameworks go, py.test is the new kid on the block. It already has an impressive set of features, with more to come, since the tool is under very active development."
Py.Test has no explicit API and automatically finds and runs tests (using prefix rules) -- a toy example of what this looks like follows the installation note below. Like unittest, there are setup and teardown mechanisms. Py.test has a huge collection of configurable command-line parameters which customize the test execution. Since py.test automatically finds test cases, it is trivial to add a new test case to the suite (py.test can be configured to run all tests it finds in a given directory). Here are some pros and cons from the author:

Pros
  • no API!
  • great flexibility in test execution via command-line arguments
  • strong support for test fixture/state management via setup/teardown hooks
  • strong support for test organization via collection mechanism
  • strong debugging support via customized traceback and assertion output
  • very active and responsive development team
Cons
  • many details, especially the ones related to customizing the collection process, are subject to refactorings and thus may change in the future
  • a lot of magic goes on behind the scenes, which can sometimes obscure the tool's intent (it sure obscures its output sometimes)


For Freeseer, another con is that py.test is a third party dependency, but installation is as easy as:
    $ easy_install pytest
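Once installed, the "no API" point mentioned above is easiest to see in code: a py.test test is just a function whose name starts with test_, and plain assert statements do the job (a toy example, not Freeseer code):

# test_example.py -- py.test collects this file and these functions by name.

def test_addition():
    assert 1 + 1 == 2  # plain assert; no TestCase class, no assertEquals

def test_concatenation():
    assert "free" + "seer" == "freeseer"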


Final Thoughts:
The first thing to note is that the blog posts are from 2005. There are things that py.test did then that unittest does now. For example, as of python 2.7, unittest now supports test discovery! Here's what that means:
    $ python -m unittest discover /path/to/test_dir

will recursively go through /path/to/test_dir and attempt to import all test_*.py modules. If the import succeeds and the module contains a class which inherits from unittest.TestCase, all of that class's test methods will be executed. At the time of the blog posts above, this was a beautiful feature of py.test which set it apart from unittest.

In Python 2.7, unittest offers discovery, modular and logical grouping of test cases, informative verbose output (class name, method name, result) and a fail-fast switch that stops the suite when a single unit test fails. In addition, we can run the test suite in discovery mode from the command line (like we did above), and as tests are executed, information is printed to the screen. Upon completion, the number of tests, the time taken and the result (OK/FAIL) are also printed. All of this functionality is included in unittest, which is in the Python standard library and requires no external dependencies. Remember: LESS IS MORE. In all honesty, the only real problem with unittest for a project in general is that it uses its own methods for assertions which pass/fail tests (it's not too complicated, it just seems unnecessary).

To conclude, I am not saying unittest is better than py.test (because it isn't), but not all projects need py.test. The problem py.test is trying to solve is compatibility with any code environment -- it does discovery and allows any function or method starting with test_ to be used as a test case. Though this is useful and simple, a project like Freeseer could actually benefit more from a structured, module-based approach like unittest's. Again, py.test can do this as well (in fact, unittest is clearly playing catch-up), but it does so at the price of adding a dependency -- something Freeseer would rather not do.

Looking at the current organization of the source code in Freeseer, we have framework, frontend and plugins as the main separators. It would be simplest for developers to be able to create a test class for each class in the source and bundle all of its unit tests together in the test counterpart. This way, when functionality changes or gets extended, there is no guessing where the test code should go, making it dead simple for developers to implement their own test cases (BOOM: no excuse not to test your code).

In the next post, I'll detail the design and show the power of using unittest in Freeseer to create a similar test code structure which will allow an obvious link between source modules and their test module counterparts.

Stay Tuned!



Saturday, 6 October 2012

UCOSP & Freeseer

This fall term, I have the opportunity to join the Undergraduate Capstone Open Source Project (UCOSP). As in traditional Capstone courses, the goal is to undertake a large, term-long project, usually as part of a team. Unlike Capstone (usually), UCOSP brings students from various schools together to work on one of several open source projects.

For a while, I had been trying to join the open source community. Surprisingly, I had issues. I realized early on that I didn't really know what it was I wanted to work on. I knew I wanted to contribute, but the overwhelming thought of trying to do something useful for the Chrome or Firefox projects kept me away. On top of that, even with a few years of development experience and the drive to learn the programming languages used in projects of interest, I simply didn't have time to learn them.

Alas, after seeing a post about UCOSP, I decided that was my ticket in. Without a second thought, I enrolled. Of all the available projects, I chose Freeseer. I should note that the others were interesting as well, but Freeseer stood out because I have tons of Python experience, I've never worked on a multimedia tool, and the code base is small and young, which makes it easier to contribute big ideas. Anything we add at this stage is revolutionary for Freeseer and will affect its growth. On the other hand, when working on a huge, well-established project, contributions are just as important but have a much smaller relative impact. Really, I feel this is true for code bases in general, not just open source.

Freeseer is an open source screencasting application suite. It records audio, it records video from the desktop or a VGA source, and it does so through a simple, easy-to-comprehend user interface. If you haven't played with it, know that it is as simple as "push to record, push to stop". It's like an old VCR unit (if you have one lying around), which is why the makers of Freeseer pronounce it "Free-C-R".

From a user's point of view, the application installs rather easily. Simply download the installer for your platform and, once the install completes, you're good to go. For developers, it's not quite as simple. In its current state, it's easiest to develop for Freeseer on a Linux box -- actually, Windows didn't turn out THAT bad for some.

For UCOSP, we met for a weekend "first sprint" and students from all the UCOSP projects traveled (some cross-country) to meet up in Kitchener. We met each other, our teammates and our project mentors. We received a tour of the Google office, had lunch in their cafeteria and heard them talk about open source -- after all, Google did support UCOSP this term and Facebook is doing the next one.

Once we broke into our teams, we got to know each other more. This term, according to the mentors, the Freeseer team is larger than usual. I suppose this is a good sign as there is plenty to do in Freeseer and the team is eager to contribute. We spent the weekend discussing Freeseer: our mentor Andrew Ross spent a good deal of time explaining why he started Freeseer. In short, there was a need for a simple and free solution to recording conferences.

Freeseer is written in Python and is easiest to develop for on the Linux platform. We spent most of Saturday realizing it wasn't that simple. On a fresh Ubuntu 12.04 install, Freeseer takes about an hour to set up and works mostly well (some things were broken). I'm pretty sure those developing on Windows got it going in about that much time as well. Once finally set up, each member was to decide how they would contribute to Freeseer and write a proposal for a term-long project around it. Some have opted to add a new feature -- like a YouTube uploader or a YouTube streamer -- while others are stabilizing existing features -- like alerting the recorder when the microphone volume is low.

For my project, I am working on the testing area. Currently, Freeseer has no unit test suite, no use cases and a lot of new code which needs to be refactored and reviewed. My proposal is to investigate and suggest a method of integrating unit tests into Freeseer. Once the community agrees on the design, I'll move forward and implement some unit tests. In addition, I'll be taking an in-depth look at some modules in Freeseer to determine how to stabilize some components, essentially refactoring. I feel that stabilizing Freeseer and ensuring that future contributions are also stable will make Freeseer a more adoptable tool.

Blog entries will document the process I will follow to select a test framework, add it to Freeseer, implement unit tests and overall ensure that future contributors will be adding to a solid, stable and bug-free open source project!