Saturday, October 20, 2012

Gah! TCP doesn't work the way I want :(

Well, I've been having more fun messing around with Node.js and allowing myself to be distracted by interesting problems. The latest of which was triggered by my desire to integrate the BrowserStack beta API for cross browser testing. This is a nice service that will fire up any number of different versions of browsers and point them at a URL that you specify. Integrating this with Testacular and Mocha means that I can run all my browser javascript tests in all browser variants and get the results right back in my shell immediately, without having to run a myriad of browser versions locally :) This even includes mobile platforms :D

So what's the catch?

Well, in order for BrowserStack to connect to my Testacular server it needs to hit a public URL. Unfortunately my development machine is not reachable on a public URL (nor do I want it to be, at least not really public). The solution suggested by BrowserStack was to use a simple service called LocalTunnel. This service provides a client with which you can create an SSH tunnel to a local port that you specify. The service then allocates a random subdomain from which it will forward HTTP requests to your local port. Very useful and sounds easy, right? Unfortunately when I tried the client it didn't work and the only clues were leading me into a world of SSH keys, etc.

Hence the distraction. As I probably want to fire up my tunnel and browsers programmatically I'm not so fond of relying on command line interfaces and really I want a node module to do it. What's more, if I'm going to dig around in secure connections, why not take the opportunity to expand my knowledge in a direction that I want it expanded? So I decided I would implement my own tunnel service and client solution in node and thus the tls-tunnel package was born.

Early on I figured I didn't want to mess about with generating random subdomains and trying to route based on the subdomain on which a connection was made, so instead I decided to assign ports on the server to satisfy client connections. This way whenever a new client connects and requests a tunnel, the server will allocate a port from a predefined range of available ports and start listening on that.
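Something like this hypothetical port pool captures the allocation side of that idea (the names and port range are made up for illustration, not the real tls-tunnel code):

```javascript
// Hypothetical sketch of the port pool idea: hand out ports from a fixed
// range as tunnels are opened, and take them back when tunnels close.
function PortPool(first, last) {
  this.available = [];
  for (var port = first; port <= last; port++) {
    this.available.push(port);
  }
}

PortPool.prototype.allocate = function () {
  if (this.available.length === 0) {
    return null; // no free ports - the client will have to wait or fail
  }
  return this.available.shift();
};

PortPool.prototype.release = function (port) {
  this.available.push(port);
};

var pool = new PortPool(8080, 8082);
console.log(pool.allocate()); // 8080
console.log(pool.allocate()); // 8081
pool.release(8080);
```

The server would then start a listener on each allocated port and pipe its connections down the corresponding client tunnel.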

My plan was to use a free Heroku or Nodejitsu instance to then deploy my tls-tunnel server when I needed it.

This is where I learnt a hard lesson in the problems of bottom up development. Although I am applying TDD principles I did in fact fail to validate one of my initial assumptions - that I could use multiple ports! Both Heroku and Nodejitsu will only expose one port to your application... this could/should have been a red flag. I realised this early on but plowed ahead anyway thinking that at a later date I could apply a small change to my tunnel and instead use the random subdomain solution to differentiate between tunnels.

So I got my tunnel working using TLS (hence the name) with clients and servers authenticating each other with their own self-signed SSL certificates. I was pretty proud of myself for implementing something that was in theory protocol agnostic - I had noticed that other similar solutions were limited to HTTP traffic... this should have been a red flag!

I next turned to the problem of making it all work on one port. Having already learnt quite a bit about the TLS/SSL problem domain I now learned a hard lesson about the TCP domain or more specifically the Node.js net domain.

I had made the assumption that when a raw TCP socket was connected to a server I would be able to read out the domain name that it had used... Wrong!!!

What LocalTunnel is doing is using the HTTP protocol to get the domain name that was used for the connection. GAH!! And what do you know, this is the same reason that Heroku and Nodejitsu limit access to a single port. Double GAH!!!
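A minimal sketch of why this forces you up to the HTTP layer: at the TCP level the server just sees a connected socket and a stream of bytes, while the domain the client actually used only appears inside the request itself, in the HTTP Host header (the hostname below is made up):

```javascript
// A raw TCP socket carries no record of the hostname the client resolved.
// The only place it shows up is in the application protocol - for HTTP,
// the Host header - which is why subdomain routing means parsing HTTP.
function extractHost(rawHttpRequest) {
  var lines = rawHttpRequest.split('\r\n');
  for (var i = 1; i < lines.length; i++) {
    var match = /^Host:\s*(.+)$/i.exec(lines[i]);
    if (match) {
      return match[1];
    }
  }
  return null; // no Host header - at the TCP level we'd know nothing at all
}

var request = 'GET / HTTP/1.1\r\nHost: random123.localtunnel.example\r\n\r\n';
console.log(extractHost(request)); // random123.localtunnel.example
```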

So now I'm left with a choice. My solution can still work but I'm going to have to put it on an Amazon EC2 instance or something (I can get one for free for now). Or I can bite the bullet and implement the same HTTP restriction (boo) and do subdomain based tunnelling.

It's not such a simple choice though. On the one hand it's easy to integrate Heroku and Nodejitsu into my development and testing process (and even share that) as opposed to the hoops I will have to jump through to get it up and running on an EC2 instance. But on the other I don't want to limit my solution to HTTP and I haven't actually verified yet that I can use random subdomains on either service (once bitten, etc).

Perhaps there is a third way though - maybe if I only support one tunnel at a time I can use a single port...

That said, I'm leaning towards the EC2 solution for flexibility ("lean"-ing might be a bad choice of word here though - if you'll excuse the pun ;))

Saturday, July 21, 2012

Scheduling for the internet

While trying to figure out the best way to synchronise scheduled event start times across different users in different time zones, I managed to work my way around in a bit of a circle yesterday.

I have an app that allows users to schedule events and obviously specify start times. In my initial hacking I was specifying those start times and storing them in the database as strings (not even validated as dates, really just a placeholder to mock up the site). So yesterday I thought I would tackle this to make it more functional.

I added a date picker widget and a time picker widget and fixed things so that only dates and times could be specified which I then stored in my database.

But wait, I thought, how do I know what timezone the user intends? After all, when I publish this event to other users I will want to give the start time in their timezone. Hmm... So I started my research on timezones.

I started out trying to put a timezone picker on my event scheduler page which would default to the current timezone of the client's browser. This actually isn't so simple.

A major complication is that I don't really want the timezone, I actually want the locale of which the timezone is only a feature. The other feature is daylight savings time (DST). There are only so many time zones (which is quite manageable) but there are lots of variations in the treatment of DST (not so manageable). Unfortunately for me I need to consider DST if I am to know what real time an event organiser is actually aiming for (they will always be working in local time I presume and would not care to specify start times in UTC).

Here are a few of the interesting libraries that I looked at to get a handle on this.
  • On the client to detect and select a timezone
    • Josh Fraser provided the best hope for something simple with his client timezone and DST detection algorithm in Javascript. But he does mention that instead folks should use...
    • Jon Nylander's jsTimezoneDetect solution. This is apparently much more advanced and works off the complete list of time locales from the Olson zoneinfo database. Unfortunately this would be tricky to integrate in my web page and would provide a huge number of options for users. I've seen these before on the internet and they are annoying.
  • Then on the server to get a nice unix time in my database
    • node-time looked interesting
    • moment.js seemed to talk the talk but on further analysis I wasn't sure if it knew about DST or if I would have to tell it
    • timezone-js may have been the most promising
But then came my small eureka moment... Why am I doing all this work? Well, pretty much because my client side controls give me strings and not date objects. However the browser does know what time locale it's in and how to present dates. So this is where I returned almost to my starting point.

I ripped out all the timezone selector stuff from my page and instead I used the client side date functions to generate a Date object there and transmit a nice simple unix time back to the server for storage. For those that don't know, unix times (at least as JavaScript reports them) are the number of milliseconds since 00:00:00.000 01/01/1970 (UTC). They don't care about time locales. So now I do all the locale specific formatting and parsing in the browser. Seemed obvious after I'd done it :)
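The whole round trip amounts to something like this (field names are hypothetical, just to sketch the shape of it):

```javascript
// Sketch: turn the picker's local date/time parts into a zone-independent
// unix time on the organiser's browser, then format it back in whatever
// zone the *viewing* browser happens to be in.

// e.g. the date picker gives { year: 2012, month: 6, day: 21 } and the time
// picker gives { hours: 18, minutes: 30 } - month is 0-based, as in Date
function toUnixTime(datePart, timePart) {
  // new Date(...) with parts is interpreted in the browser's local zone,
  // DST rules included - exactly what the organiser means by "6:30pm"
  var local = new Date(datePart.year, datePart.month, datePart.day,
                       timePart.hours, timePart.minutes);
  return local.getTime(); // milliseconds since the epoch, no zone attached
}

// On another user's browser, the same number formats in *their* zone
function formatForViewer(unixTime) {
  return new Date(unixTime).toString();
}

var when = toUnixTime({year: 2012, month: 6, day: 21}, {hours: 18, minutes: 30});
// 'when' is a plain number - store it, transmit it, compare it, no locales
```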

I may add a widget to my pages to let the user know which zone the times are being displayed in but I'm not sure even that is worth the effort. It would only catch a few issues with people working on devices with the wrong timezone set.

Friday, July 20, 2012

Grunt watch and the Node.js require cache revisited

Still inspired by James Shore's series "Let's Code: Test-Driven Javascript",  I've been continuing with my endeavors to get the Grunt watch stuff working flawlessly(?).

In my last post I mentioned some niggles that were remaining from my previous workaround.
  • The workaround only addresses the Mongoose issue
  • The workaround assumes intimate knowledge of Mongoose
  • Grunt watch still explodes silently when unhandled errors are encountered in tests
    • undefined references
    • nonexistent requires
    • etc.
The good news is that I think I have addressed all of these. In addition to that I've figured out some stuff about how to extend grunt and how to manipulate the Node.js require cache.

First off I thought I'd take a look at Mocha to see if it handled things better. After all Mocha also has a watch function.
  • Mocha watch does not explode on undefined references (which is nice)
  • Mocha watch does still explode on nonexistent requires (actually I didn't find this out till much later on when integrating with grunt)
  • Mocha watch still failed to handle my Mongoose issue
  • Unfortunately Mocha watch doesn't integrate with JSHint and actually I'd quite like to lint my code on file changes too
So despite only having a small advantage in not falling over so much I thought Mocha showed more promise than NodeUnit and as James noted it is much more active on GitHub. In fact it's under the same banner as Express and Jade which are definitely very popular and well maintained frameworks for Node.js.

Next thing was to integrate Mocha with Grunt so that I can use the Grunt watch function to both lint and run tests on file changes.

The nice thing about writing my own task to run Mocha instead of NodeUnit is that it was then quite easy to fix the issue of exploding on nonexistent requires... It just needed a try/catch around the call. In retrospect I could probably have added this to the existing NodeUnit task but by the time I got to this point, I'd already ported all my tests to Mocha.
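The pattern is nothing fancier than this (an illustrative sketch, not the real task code - the file name is made up):

```javascript
// Wrap the test-file loading in try/catch so a nonexistent require inside
// a test file fails that run instead of killing the watch process.
function safeLoad(file) {
  try {
    require(file);
    return null; // loaded fine
  } catch (error) {
    // Report it like a failing test and let the watcher live on
    return error;
  }
}

var error = safeLoad('./no-such-test-file.js');
console.log(error ? 'load failed' : 'loaded ok'); // load failed
```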

[A short interlude on Mocha and Should...]

James noted in his videos that Mocha is targeted as a BDD test framework and as such he is not so keen on its verbosity. I can see what he means but, to be honest, I don't find it much of an issue and in fact quite like it, so for a while at least, I think I'll stick with it.

I also tried the should.js assert library that provides an interesting take on asserts by making them a bit more natural-language-like. Things like value.should.equal(expected) in place of assert.equal(value, expected).

On first take I thought cool and went full steam ahead in making all my asserts like this. Currently though I'm not sure I like it.

For one, I keep thinking that I should be able to write something in a natural way but find that it's not really supported - it kinda feels like I'm being teased. This will lessen I guess as I really learn the idioms.

A more annoying problem though is related to the way Javascript handles types and comparisons. I keep finding comparisons that I think should work and don't, and then comparisons that I think shouldn't work and do! I think this is made worse by hiding the comparisons inside assert functions. As a result I'm starting to come to the opinion that not only is the should framework more trouble than it's worth but in fact any assert framework that hides comparison logic is not such a good idea to use in tests in Javascript. This includes very standard things like: assert.equal(object1, object2);

I may revert to just a single check function that will better reflect how comparisons would actually be written in production code. I.e.: assert(conditionalCodeThatResolvesToTrueOrFalse);

[...interlude over]

So there I have it, I can now run my tests as files change and rely on the watch task to keep going no matter what happens (so far!). Just the mongoose problems to resolve then, and actually I added another.
  • If a unit test beforeEach function falls over then the after functions are not run
    • This means that as I open a database connection in before and close it in after, when I get such an error I then continue to get failures when files change due to not being able to open the database anymore (it's already open)
    • Not as serious as the silent failures as at least the watch process keeps pinging me and I can restart it. But still a little annoying
This new issue got me thinking again about the require cache. My previous investigations here had proven fruitless but then, perhaps I had been led astray by some dubious comments on StackOverflow. Beware, this code does not work:

for (var key in Object.keys(require.cache)) {delete require.cache[key];}

So now I was thinking about the Mongoose module.
  • The problem isn't that the changed module is still in cache
  • The problem is that the Mongoose module is still in cache
  • In fact the problem is that any modules are still in cache
  • I must clear the cache completely before running my tests!
    • Actually I had tried this and it didn't seem to work
    • However I had tried it in my tests themselves, now I could try it in my new grunt task :)
      • I had already needed to add code that dropped all my own files from cache to make things work. It made sense to drop the rest too, come to think of it.
So I fixed the code above (the broken version iterates over the indices of the array returned by Object.keys, not over the cache's module paths):

for (var key in require.cache) {delete require.cache[key];}

Tidied up my mocha task adding support for options and this is what I have in a new module...

To use this I dropped it in a grunt tasks directory and updated my grunt.js file...

Note that the call to loadTasks takes the directory name. Also note that I overrode the built-in NodeUnit test task and that the options to pass into mocha are given in the mocha config property.

So that's it, I no longer have to use my Mongoose workaround as the Mongoose module is cleaned up along with everything else before I run the tests :)

I hope this will save me from similar gotchas in other modules too, but I guess I'll just have to code and find out :D

Wednesday, July 18, 2012

NodeUnit, Mongoose and Grunt watch

Edit: Although interesting to me as a history to my Node.js testing issues, this article is now pretty much superseded by this one which better addresses all of the below problems

Now that James Shore's "Test Driven Javascript" series has kicked off I've been integrating unit tests into the 5Live hangout project. This has, for the most part, been simpler than I expected. NodeUnit is pretty easy to use and from one of the comment threads I have been introduced to Grunt which allows me to tie all my lint tasks and unit tests into a single automated script (James has been doing this himself in Jake but I figured I would give Grunt a try as it does some of the 'grunt' work for me :)) .

Like I said, for the most part this has all been going swimmingly. One of the nice features of Grunt that I discovered is the watch task. This allows me to watch a list of files and when they change, automagically kick off my lint tasks and unit tests - very nice :D

There are some problems though. My application uses Mongoose to interact with a MongoDB database. As such I follow the standard mongoose pattern of using a singleton and defining my model schemas like this...

As I'm doing TDD I actually start off with something like this in a separate test file...

That's all hunkydory. I can run the tests and they pass. I can kick off grunt watch and leave it running while I start editing my files. Let's see what happens when I change my test, thusly...

As expected grunt watch pings me to let me know that my test has failed :)

So I go back to my model and update the greeting function...

Gah, grunt watch pings me again to say that my test still fails. Puzzlement abounds!

If I stop grunt watch and run the tests manually they pass! So what's going on?

Well I wasted a lot of time messing around with the require.cache object, as I figured it was something to do with Node.js module caching, but that wasn't it at all. Either NodeUnit or Grunt is smart enough to remove the changed files from the module cache (I think it must be Grunt that does this but I didn't check).

Eventually I realised that it was the mongoose singleton that was causing the problem. After all this only happened with my mongoose model tests. As the mongoose singleton persists between test runs it doesn't matter that I change the methods on my models, the old versions also persist.

Again I tried a number of workarounds but so far the best seems to be the following.

First I created a wrapper for the mongoose singleton which allows me to reset the schemas...

Next I integrated this wrapper into my tests (only the tests, I still use the mongoose singleton directly in the model implementations)...

So why do I prefer this solution and what else did I try?

Well I also had another workaround which fixed the problems with method updates.
  • Instead of using Schema.methods to assign methods I used Model.prototype
  • Instead of using Schema.statics to assign static methods I just assigned them to the Model directly
Why didn't I like this?
  • This solution meant a little rejigging of the code to what seemed like a non standard pattern in the actual implementations
  • This did not fix a similar problem with updating the actual Schema - ie. adding or removing fields
I still don't much like my eventual workaround as...
  • it depends on knowledge of the internals of Mongoose (which might change)
But at least it's contained in my tests and seems to work for all changes in the model.

However, even with this workaround in place I'm still not fully happy with the way grunt watch works.
  • Annoyingly, it exits on a number of test failures particularly when things are not yet defined. This happens a lot when doing TDD, it's how we write failing tests.
    • When it does exit it doesn't actually ping me. As such I have to keep looking to see if it stopped (if I have to do this all the time, it occurs to me that I may as well not use it and instead run my tests manually)
  • I'm now just waiting for the next gotcha as I only worked around a problem with Mongoose.
    • It seems to me quite likely that there will be other libraries that follow similar internal patterns and they are likely to trip me up in the same way
I have a solution to suggest though...
  • Spawn a new Node.js process at least for every grunt watch event if not for every NodeUnit test
    • Wouldn't this fix the problem once and for all?

Thursday, July 12, 2012

Amazon EC2 learnings

Yesterday I discovered that Amazon AWS offer a free tier which basically means I can have a free server (possibly more than one) in their cloud for a year!

Awesome, I'll have some of that :)

I decided to see if I could get our 5Live hangout project up and running there. Figured it would be useful as our free Heroku plan limits us to a 16MB database!!!! On AWS I can have up to 30GB of storage for free :)

Of course I'll have to be my own sys admin to get it, though. So that's where the adventure begins. This is what I needed to set up...
  • A server
  • Some storage
  • Install Node.js
  • Install MongoDB
  • Install Git
  • Open ports to allow access from the interwebs
In figuring this stuff out I probably created and terminated about 20 EC2 instances. There's the first 2 things I learned...
  • Amazon refers to its virtual machines as EC2 instances
  • Amazon calls deleting an instance "terminating it"
    • When you terminate an instance it does not go away immediately (takes about 20 minutes) but it is not recoverable
    • There is an option for termination protection which I haven't tried but might be a good idea :)
Only a limited number of the virtual machine types are covered by the free tier but that's ok I only actually tried 2 of them. Didn't think I'd be interested in running my stuff on windows so I only tried the Amazon Linux and Ubuntu 12.04 images. Both of which are free in the micro configuration (1 core, 613MB RAM). After switching between the 2 a few times I settled on Ubuntu mainly because it is more familiar to me. However my research suggests that the Amazon Linux images might be better optimized for EC2.

Now for the real purpose of this blog post, which is mainly for my own notes, these are the steps to setting up the above list of requirements.

Create an Amazon AWS account

First we need an AWS account
  1. Sign up for a new account on the AWS site if you don't have one and verify with the fancy phone call verification
  2. Wait for email confirmation of the new account

Create an EC2 instance

We need a virtual machine

Choose the free tier eligible machine type
Keep the default machine options

Create a new security group

  1. Head over to the AWS console and sign in with your new account
  2. Select the EC2 link
  3. Select the Instances/Instances link on the left hand side
  4. Click the Launch Instance button
  5. Choose the Classic Wizard option and click Continue 
  6. Choose Ubuntu Server 12.04 LTS 64bit and click Select 
  7. Keep the default options for the machine type as pictured above and click Continue 
  8. Keep the default options for the machine features as pictured above and click Continue 
  9. Enter a name for the instance (this is only used for display in the AWS console and is not the machine name) and click Continue 
  10. Next you will have to create a key pair - this is used instead of passwords to log on to the virtual machine using SSH (If this is not the first instance on the account then you can reuse an existing key pair). Enter a name for the key pair and click Create & Download your Key Pair - keep this somewhere safe but accessible. Then click Continue 
  11. Create a new security group with at least port 22 open so that you can SSH to the instance as pictured above. I have decided that it is best to create a new security group for each EC2 instance as it is not possible to change to a different security group after the instance has been created. However it is possible to change the rules in a security group, so if you want different instances to have different rules then you need to create different security groups for each instance. Then click Continue 
  12. You will then be presented with a page to review so just click Launch and on the next dialog click Close 

Create an EBS volume

We need an Elastic Block Store volume so we can separate our MongoDB data from the OS volume
  1. Select the Elastic Block Store/Volumes link on the left hand side. Notice that there is already an 8GB volume for the EC2 instance OS. Make a note of the zone for this existing volume (eg. us-east-1d), we will want to create our new volume in the same zone so the EC2 instance can be attached to it
  2. Click Create Volume 
  3. Select the size of the volume (eg. 10GB) and the same zone as noted in the last step. Don't select a snapshot. Click Yes, Create 
  4. Right click the newly created volume and select Attach Volume 
  5. Select the newly created Ubuntu instance and leave the Device field to the default. Click Yes, Attach. This will actually attach the volume to /dev/xvdf and not /dev/sdf on this version of Ubuntu, as noted on the dialog

Start the instance and log on using SSH

We're going to need our key pair file in the next step. On OS X and Linux it can be supplied to the ssh command using the -i option but on Windows I use Putty. Putty does not accept *.pem files as generated by Amazon so it's necessary to convert it to a *.ppk file using PuttyGen. Anyway follow these steps to logon...
  1. In the AWS console go back to Instances/Instances on the left hand side
  2. Select the instance and on the Description tab scroll down until you find the Public DNS entry. This is the public host name of your server. As an aside it also contains the static IP address in case you want to know what that is
  3. Launch Putty and paste the Public DNS host name into the host name field
  4. Prepend the host name with ubuntu@ so that you don't need to specify the user name when connecting (the default user is called ubuntu)
  5. On the left hand side select Connection/SSH/Auth.
  6. Under Private key file for authentication browse for the *.ppk file generated by PuttyGen from the *.pem file created and downloaded from Amazon
  7. Go back to the Session section at the top on the left hand side and save the session with a sensible name
  8. Click Open and you should just be logged in as the ubuntu user (after accepting the public key)

Format the EBS volume and mount it permanently

We want a nice efficient file system and it seems that it's de rigueur to use XFS. XFS is supported by the Ubuntu 12.04 kernel but the tools to format volumes are not there by default. Anyway here are the steps to follow at the command line...
  1. sudo apt-get install xfsprogs
  2. sudo mkfs -t xfs /dev/xvdf
  3. sudo mkdir /mnt/data
  4. sudo nano /etc/fstab
The last step will start nano so that we can edit the /etc/fstab file to ensure that our volume is mounted whenever the machine reboots. Add the following line...
  • /dev/xvdf /mnt/data xfs noatime,noexec,nodiratime 0 0
Write out the file with ctrl-o and exit with ctrl-x.

Now we need to mount the data volume. At the command line...
  • sudo mount -a

Install the latest stable Node.js

At the time of writing the default Node.js package available in Ubuntu is 0.6.12 and the latest stable is 0.8.2. In order to get the latest stable release do the following at the command line...
  1. sudo apt-get install python-software-properties
  2. sudo apt-add-repository ppa:chris-lea/node.js
  3. sudo apt-get update
  4. sudo apt-get install nodejs npm

Install and start the latest stable MongoDB

At the time of writing the latest MongoDB was 2.0.6 and that is what we download in the following steps. Check the MongoDB downloads page to see if there is a newer version. At the command line...
  1. cd ~
  2. curl -O
  3. tar -xzf mongodb-linux-x86_64-2.0.6.tgz
  4. cd mongodb-linux-x86_64-2.0.6/bin
  5. sudo mkdir /mnt/data/db
  6. sudo chown ubuntu /mnt/data/db
  7. ./mongod --fork --logpath ~/mongod.log --dbpath /mnt/data/db/
  8. cd ~
  9. tail -f mongod.log
This will start the MongoDB daemon in the background and output the logging to ~/mongod.log. The last command allows you to check that the daemon starts up ok. Once it has completed the startup sequence then it is safe to ctrl-c out of the tail and mongod will continue running. To stop mongod, the safest way is from the mongo client. At the command line...
  1. cd ~/mongodb-linux-x86_64-2.0.6/bin
  2. ./mongo
  3. use admin
  4. db.shutdownServer()
The last command shuts down the server and prints out lots of stuff that looks like errors but it should be fine and it should be possible to start the server again as before.

Install Git

I use GitHub and all my code is up there so I need git to put it on my new server. At the command line...
  • sudo apt-get install git

Opening more ports

While developing Node.js applications I usually use the default Express port of 3000. You will remember that when we created the server instance we only opened port 22 in the security group. In order to hit the server on port 3000 we have to add that to our security group too...
  1. In the AWS console select Network & Security/Security Groups on the left hand side
  2. Select the security group created specifically for the server instance
  3. Select the Inbound tab
  4. For Create a new rule select Custom TCP rule 
  5. For Port range enter 3000
  6. For source enter 
  7. Click Add Rule 
  8. Click Apply Rule Changes 
It should now be possible to connect to services running on port 3000 from the internet. Remember that the host name is the Public DNS entry under the EC2 instance description.

Monday, July 9, 2012

Startup Weekend Amsterdam - The 5Live Crew

Had a fantastic time at Startup Weekend Amsterdam this weekend. Big up to the organisers :)

It was hard work and I think it will take a while for my girlfriend to forgive me (I found out about it and signed up Thursday with much too little discussion) but I'm so glad I did it.

It kicked off at 6pm Friday with 66 pitches! Some good, some bad, some completely indecipherable but all of them given and received in great spirit, in 60 seconds or under (with changeovers I guess it would have been at least 2 hours of pitches, phew).

There was a shameless pitch to clone Kickstarter in Europe (did you know Europeans couldn't raise funds on Kickstarter?)... This actually got one of my votes.

There were a lot of pitches to provide match up services for sports enthusiasts, I counted at least 4. These were interesting to me as I know a French guy (hey, Alex, if you're reading!) who's had some success with this in Canada for ice hockey. Check out Hockey Community! Like I say interesting but I didn't vote for any of them as I didn't want to encourage competition for my friend (who I think should branch out to other sports ;))

A cool sounding app called OhHeyWorld got my second vote. With one click it lets you notify friends and family when you arrive at a destination by checking in by email, facebook, twitter, etc all at once. I really liked this but it didn't get so many other votes so happily the guy pitching it ended up on our team :)

My final vote though went to a Googler who wanted to build an app on top of Google Hangouts to match coaches, music teachers, etc to people online for live lessons via video. 2 things piqued my interest. I've had this conversation before, particularly with musicians. And second it would be nice to learn something about integrating with Google Hangouts.

I didn't pitch any of my own ideas, although I kinda wish I had. Next time I will... and there will be a next time :)

So with all voting completed the pitches were narrowed down to 20. By which time the Google Hangout idea had merged with another to provide business mentoring over video conferencing, inspired by Quora (of which I'm largely unaware, but have just signed up). This is what I ended up working on with 7 other cool guys. We were 5 business guys, 2 developers and 1 designer (I think). Heavy on the business side and it showed after the first day. I'm not sure how many visions there were or how many times we pivoted (being 1 of the 2 techies I was pretty much head down coding the whole time - hoping that it was in the right direction) but at the end of the night on Saturday the cracks were starting to show. It was time for rest.

The next morning, although we were down one business guy who I think was unimpressed by the lack of focus and constant direction changing, there was definitely more commitment to get things done and to try and ship, after all we only had till 4pm.

We seemed to have a single vision now and we set about building it up and applying copy, etc. We needed a presentation and we needed some kind of validation. To be honest we already had some validation. All the pivoting the day before had come from surveying conference participants and collecting feedback. Ok, we didn't get out of the building physically but we did sell the expertise of one of our team members something like 7 times for $5 (Now he has to do some video conferences with the people who stumped up the cash). This wasn't even with conference folks, these were real people on the interwebs. We sold him through fiverr at an admittedly knock down price for his real estate know how and you may say that we only validated that people will buy gold if we sell for the price of coal. I disagree though, we validated that people are willing to pay for live interactive video sessions in which they can learn something... and they actually paid, not just promise to pay or expressed an interest.

This was our eventual proposition (I almost typed final but there may be more pivots to come) and 5Live was born. We will provide a marketplace to match up those with skills/knowledge to those who want to have those skills/knowledge. Sessions can be scheduled by the skilled for free and spots can be reserved by the young padawans for 5 euros/dollars or less. Then at the designated time each will click on some button and enter into a live interactive video conference through which knowledge will be imparted. The subsequent revenues will then be distributed back to the skilled.

I see it as a place where everyone is equal and sharing their wisdom with each other. Perhaps in the evening instead of watching TV. I see it as a natural progression from broadcast TV through social video like YouTube to the next wave of social knowledge sharing where you can ask your questions live. It's Etsy meets YouTube meets fiverr meets Quora (if I understand Quora correctly). This vision may not be the same as the other founders and may change once it's tested (will change!) but right now it looks like it has legs.

So at the end of the weekend what did we have?

Third place!! Yay :)

We were pushed out by a very worthy non-profit charity/donation app called Easygiving, which took first place for allowing people to easily manage and adjust monthly charity plans and donation distribution, and by Aloha, a service that helps you transfer your apps between mobile phone platforms when you switch to Android and can't find that app you loved so much on your iPhone (or vice versa). Kudos to them :)

But we still believe in our thing! And to prove it here are the slides, a video and a link so you can see more (if the link is dead then it probably means we changed the name and hit the big time - you should have been here last week ;))

Saturday, June 30, 2012

On Puzzles and Mysteries

While reading Steve Denning's "The Leader's Guide to Radical Management" this morning, I was introduced to an interesting distinction between puzzles and mysteries. I'm not sure this is a dictionary type of distinction, but it is certainly a useful one. Steve asserts that a puzzle is a problem for which there is a known (or at least knowable) solution. Even though that solution may be challenging to apply, it is at least plannable. Traditional management and waterfall-style methods are a good fit for puzzles, which is not surprising, as that is exactly the kind of problem they evolved to solve.

Mysteries, on the other hand, are problems that are new. That no one has solved before. They are not really knowable (at least in advance) as they are unique. The solution is going to be hard to find and will have to be uncovered a piece at a time with each successive discovery leading the investigation like clues in a good mystery novel :)

This idea seems to be basically the same as the difference between complicated and complex in complexity theory (as I understand it). Puzzles are complicated, mysteries are complex.

In his book, Steve is asserting that the traditional methods of management are aimed at solving the puzzle of efficiently providing goods and services. We have to accept that this is a valuable puzzle to solve, it provides obvious competitive advantages. He further asserts that this does not address the issue of what goods and services to provide and that there is a gap between what customers get and what customers want that is not answered through traditional management. This is where there is an opportunity to exploit another competitive advantage by refocusing the organisation on delighting customers. This seems to be the main driving force for the radical management techniques and I have to agree that this is the new frontier where real gains in competitive advantage can be found.

Steve's book provides a lot of insight into how an organisation can refocus on this frontier; self organizing teams, leaving decisions to the last responsible moment, iterations and increments, focusing on a minimum marketable product, etc. These are the same principles (and maybe some additional ones) that come from the Agile movement and Scrum.

What interested me this morning though was the nature of this problem of delighting customers. It may be mysterious now, but if the solution is Radical Management/Agile/Scrum, what happens when everyone is doing it? Will the problem become a puzzle? What will the next frontier be?

I think the answer is no. Delighting customers is not the problem that is solved by these new practices, it is the result of solving some other problems. It is a moving target, the key is in finding the problems/mysteries that then lead to customer delight. Delight comes from having a problem solved that you didn't even know you had. This may be leading to a delighting customers arms race as mysteries are found and converted to puzzles but that doesn't sound so bad.

In statistics there is a concept of a non-stationary time series (I know a little about this as it was the subject of my final year dissertation at college). This is a series of data over time that is by its nature not possible to model, as it changes behaviour (technically, variance) over time. Such activity is found in stock market prices, where feedback in the system is a factor. While there are people making a lot of money out of quantitative analysis (predicting the future by analysing past data), the problem is that the activity that results from applying a model may change the model.

My feeling is that delighting customers follows the same pattern. Once customers figure out what it is that's delighting them they will no longer be delighted and will look for something more/different (I hope mainly different rather than more as that's what makes chasing these problems interesting).

"Any sufficiently advanced technology is indistinguishable from magic."
Arthur C. Clarke, "Profiles of the Future", 1961 (Clarke's third law)

The problems we solve that delight people are the magic that then quickly depreciate to the status of advanced technology (sorry, advanced technology, you're just not as cool as magic). Magic (and delight) is a moving boundary.

So why am I worried about this? Well, I'm sure that there will be something new after Radical Management/Agile/Scrum, but for now I wanted to reassure myself that these are the tools we need now to keep chasing the magic boundary, which is still far ahead of us. When we catch it (or get close to it, as traditional management has gotten us close to the magic of efficiency) we will likely need to adjust our focus to a new magic boundary, and with that will come new techniques and practices.


Apologies to Steve Denning if I have failed to get what he wanted to say in his book or in fact completely misrepresented him in this post. Suffice to say that everything written here is my own understanding and I do not wish to put words in anyone's mouths. Also, I'm less than a third of the way through the book so there is likely more in it than I have gotten out of it so far.

Friday, June 29, 2012


So I almost ditched the OpenTV IDE again! Nothing is ever easy. I have been busy building the Eclipse plugin to complement the OOOCode libraries I have been making, and everything was going great, until I tried to actually deploy the plugin in the OpenTV IDE...

Nothing, nada, zip. It just doesn't show up and nowhere does it report any errors!

All morning I spent cursing this hacked-together Frankenstein Eclipse monstrosity. This afternoon, though, I have to apologise so we can move on together in harmony.

So what's the story...

Well, I'm still a bit peeved that when Eclipse plugins don't load there is no obvious error reported anywhere, but eventually I found a comment in a forum suggesting the use of the OSGi console to load the plugin and diagnose it manually. Easy, I just have to launch the OpenTV IDE with the -console option.

Sure enough a console launches alongside the editor. Next I tried...

  • ? - Yay! Lots of handy help.
  • diag com.pghalliday.ooocode - Boo! bundle cannot be found
  • install com.pghalliday.ooocode - Boo! invalid URL and stuff
  • install file:plugins/com.pghalliday.ooocode_1.0.0.201206281825.jar - Yay! bundle installed
  • diag com.pghalliday.ooocode - Oh... turns out my plugin is dependent on the org.junit bundle

So that was it. In my eagerness to have everything unit tested and to adopt test-driven development I had fallen into a trap. My deployable jar file also includes the unit tests and is thus dependent on the JUnit bundle. Now I have to get Eclipse to export a jar file without the unit tests (for now I have worked around the problem by copying the org.junit bundle from the JEE Eclipse distribution to the OpenTV IDE plugins folder as well).

Wednesday, June 27, 2012

OOOCode - Part 2

Phew, a lot has happened since my last post on this subject. After much wrangling with the C preprocessor I have a working pattern which I'm mostly happy with. It currently includes support for the following...

  • Classes
  • Private data
  • Public methods
  • Private methods
  • Multiple interfaces
  • A unit test framework
  • Some eclipse file templates

Still to do (in no particular order)...

  • Documentation
  • Performance profiling
  • Exception handling
  • Improve unit test automation
  • Eclipse wizards

Things that I'm not quite sure about...

  • Not sure if I really needed to implement the unit test stuff as classes and interfaces but it has an elegance
  • Don't much like having multiple calling conventions for public methods, private methods and interface methods
  • No support for inheritance - is it really needed anyway?
  • No support for up casting and figuring out what type something is at run time - this may become more of an issue when thinking about exception handling
  • Currently only support a single constructor

This state can be found on GitHub here

So how does it look right now (after all there is no documentation ;))

Create an application that runs unit tests

After generating your own application as described in OOOCode - Part 1, copy in the OOOCode/src/OOOCode directory from the above GitHub project and add it and its subdirectories to the OpenTV options include paths.

Then create a Main.c as below...

The above code achieves the following...

  • Records the memory available at start up
  • Creates a debug output object using the OOOConstruct macro
    • In this case an OOODebug class is used instead of direct calls to O_debug as in tests for the unit test classes themselves it is necessary to use mock objects
  • Creates a debug reporter object
    • Unit test reports need to go somewhere, this class dumps them to debug output
    • The debug instance is cast to an IDebug interface using the OOOCast macro, in this way it is possible to pass in a mock object when needed
  • Calls the OOOUnitTestsRun method passing in the reporter to run the tests
    • The debug reporter is cast to an IReporter interface using the OOOCast macro, this will allow the unit test framework to be extended later with different reporter objects (eg. over HTTP, etc)
  • Then the instantiated objects are destroyed using the OOODestroy macro and the memory is checked to ensure that the tests and the unit test framework used did not leak
  • The last part (while loop) just ensures that the VSTB does not exit so we can see the test report in the debug output

So there are some key concepts introduced here...

  • OOOConstruct - use this to construct an instance of a class, the first parameter is the class name, additional parameters are passed into the constructor as arguments
  • OOOCast - use this to cast an instance of a class to an interface, the first parameter is the interface name, the second parameter is the class instance
  • OOODestroy - use this to destroy an instance of a class and free its memory, the only argument is the class instance

However, this will not yet compile. The function OOOUnitTestsRun is a special function that is generated by the OOOUnitTestsRun.h header file using xmacros. It generates an array of tests to run and runs them based on the contents of another header file: OOOTests.h

This initial OOOTests.h is empty and so this application does not yet run any tests. Now the application can be compiled (assuming that the OOOCode source has been added to the include paths).

NB. The OOOTests.h file does not have an include guard and this is deliberate. An include guard would prevent the xmacros that use it from working. For more details on the xmacro pattern see this Dr. Dobb's article.

Adding a test for MyClass

First update OOOTests.h...

Once again this will not compile, but hey, we're doing test driven development.

Add the MyClass.Test.h header...

Still this will not compile, but note that the OOOTest macro call does not have a semicolon on the end - this is important. This just declares the test; it does not compile yet because the test has not been implemented. Other tests can be declared with other names by adding further OOOTest calls (without semicolons).

Add the MyClass.Test.c file...

Now it gets a little bit more interesting and in fact should compile and run again. When it is run, this code will print the "MyClass test" string to the debug output in an XML test report format, signifying that it is just information. The important concepts that we now have are...

  • Declare tests with calls to OOOTest in unguarded headers that are included in OOOTests.h
  • Implement tests using the OOOTest macro as defined in OOOUnitTestDefines.h so that they look like functions
  • We can output information to the test report using the OOOInfo macro (this is actually a variadic macro that behaves like printf). Two other similar macros are also available in test implementations...
    • OOOWarning - adds a warning to the test report
    • OOOError - adds an error to the test report

Adding MyClass

We are going to add a class that takes an integer in the constructor and exports a method to retrieve that integer. So let's first write some more of the test. We update MyClass.Test.c as follows...

Again this will not compile but we can see how we want our class to behave...

  • We include the class header (does not yet exist)
  • We construct an instance of the class
  • We check the retrieval of the integer constructor parameter
  • We destroy the instance of the class

The key concepts are...

  • The memory allocated in a test must be freed in the test, the unit test framework does check for memory anomalies and adds them to the test report
  • Public methods are called with the OOOCall macro, the first argument is the instance, the second argument is the method name, additional arguments would be the parameters for the method
  • OOOCheck is used to test a condition that must be true for the test to pass, it can be called as many times as you like but if the condition resolves to FALSE then an error entry will be added to the test report along with the file, line and condition that failed, etc.

Add MyClass.h...

Now things are getting really interesting. Still this will not compile, as we do not have an implementation for MyClass, but let's go through what's happening here in the header...

  • We have an include guard - that's fine here :)
  • The OOCode.h header is included to enable all the OOOCode goodness ;)
  • The name of the class is #defined as OOOClass - this is used inside other macros as the class name and simplifies those macro calls
  • The class is declared using the OOODeclare macro, it takes the constructor arguments as parameters
  • A list of implemented interfaces is given using the OOOImplements block, this must be present even if it is empty as in this case
  • A list of public methods is given using the OOOExports block, in this case one method is exported using OOOExport
  • The declare block is closed and importantly OOOClass is #undef'd so that other classes can be declared

Let's go through those macro calls in a bit more detail...

  • OOODeclare - actually declares the type and the constructor hence the addition of the constructor arguments
  • OOOImplements - starts the structure defining the public interfaces available (OOOImplement will be detailed later)
  • OOOImplementsEnd - finalizes the interfaces structure
  • OOOExports - starts the vtable structure providing access to public methods
  • OOOExport - adds a public method to the vtable, the first argument is the return type, the second argument is the name of the method, any further arguments will be the parameters for the method (in this case there are none)
  • OOOExportsEnd - finalizes the vtable structure
  • OOODeclareEnd - this finalizes everything and defines a public structure used to access the public methods and interfaces

Now we're also ready to add the implementation, so create the following MyClass.c file...

So what's this all about then...

  • Include the MyClass.h header
  • #define OOOClass to the name of the class, again this is so that other macro calls can use it and their interfaces are thus simplified (it would be nice if macros could define other macros internally, but hey ho...)
  • Declare the private data fields using OOOPrivateData - just one integer field
  • Implement the destructor function with OOODestructor - in this case it is empty as no additional memory is allocated when constructing objects of this class
  • Implement a method using OOOMethod - this one just returns the integer field accessed through a call to the OOOF macro
  • Implement the constructor using OOOConstructor - in the constructor it is also necessary to map any internal functions to external methods and interfaces, in this case...
    • OOOMapMethods is used to open a list of methods to map to the exported vtable
    • OOOMethodMapping is used to map the getMyField method to the first entry in the vtable - the compiler will pick up any type incompatibilities here
    • The mapping is closed with OOOMapMethodsEnd
    • Lastly the constructor assigns the nMyField parameter to the nMyField private data entry (again using the OOOF accessor macro)

Again let's look at these new macros...

  • OOOPrivateData - starts a new private data structure, this should only appear once
  • OOOPrivateDataEnd - closes the private data structure, fields in the structure should be placed between these 2 macros in the same format you would use for a struct (it is a struct!)
  • OOODestructor - this starts the destructor method, destructors take no additional arguments, this should only appear once
  • OOODestructorEnd - this ends the destructor method; it actually also frees the class instance, which is why you don't have to do it yourself. The curly braces between these 2 macro calls are in this case purely a matter of style and optional; they would only be required if it were necessary to declare any local variables in the destructor method. I use them anyway because it makes the implementation look more like a standard C method (a bit)
  • OOOMethod - this starts a method implementation, the first argument is the return type, the second is the method name, any additional arguments will be passed into the method. The method is effectively private until it is mapped to the class or an interface vtable (the macro declares it static)
  • OOOMethodEnd - this closes the method implementation, again the curly braces are mostly optional
  • OOOConstructor - this starts the constructor implementation, it should appear only once. The arguments are the constructor parameters
  • OOOMapMethods - this starts the class vtable mapping
  • OOOMethodMapping - this maps a method to an entry in the class vtable, the only parameter is the method name - this is the private method name as defined in the call to OOOMethod, it does not have to match the exported method name in the vtable defined in the header. It is important to add the methods to the mapping in the same order that they are added to the vtable in the header using calls to OOOExport
  • OOOMapMethodsEnd - this closes the vtable mapping
  • OOOConstructorEnd - this closes the constructor implementation
  • OOOF - this macro accesses the private fields of the current class instance, the only argument is the name of the field. It is not possible to access fields of instances of other classes but later we will see how to access fields of other instances of the same class
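For readers who haven't seen this idiom before, here is a sketch in plain C of roughly what a class pattern like this tends to expand to: a vtable of function pointers plus a private data struct. To be clear, the names and layout are illustrative assumptions of mine, not the actual OOOCode expansion.

```c
#include <stdlib.h>

/* Hypothetical plain-C equivalent of the macro-generated pattern;
   names are illustrative, not the real OOOCode expansion. */

/* Private data (what OOOPrivateData would declare) */
typedef struct MyClassData {
    int nMyField;
} MyClassData;

/* Public structure: a vtable of exported methods plus the private
   data (roughly what OOODeclare/OOOExports might produce) */
typedef struct MyClass MyClass;
struct MyClass {
    struct {
        int (*getMyField)(MyClass *pThis);
    } vtable;
    MyClassData data;
};

/* A "private" method: static until mapped into the vtable */
static int getMyField(MyClass *pThis) {
    return pThis->data.nMyField;
}

/* Constructor: allocates, maps methods, initialises fields */
MyClass *MyClass_construct(int nMyField) {
    MyClass *pThis = malloc(sizeof(MyClass));
    if (pThis) {
        pThis->vtable.getMyField = getMyField;  /* the mapping step */
        pThis->data.nMyField = nMyField;
    }
    return pThis;
}

/* Destructor (what OOODestroy would ultimately call) */
void MyClass_destroy(MyClass *pThis) {
    free(pThis);
}
```

With a layout like this, a call such as OOOCall(pInstance, getMyField) would presumably amount to pInstance->vtable.getMyField(pInstance).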

Adding IMyInterface

We will now add an interface that defines a single method that returns an integer. Again let's start with the test and update MyClass.Test.c as follows...

Only a small change but again this will not compile...

  • We added a new check, casting the instance to IMyInterface and calling the getData interface method using the OOOICall calling convention - in the test I have assumed that getData will be mapped to retrieving the constructor parameter

Only one new macro has been introduced here...

  • OOOICall - this macro must be used when calling methods on interface instances, it is much the same as OOOCall in that the first argument is the interface instance, the second argument is the interface method name and any additional arguments are passed through as parameters to the implementation

To make this compile we will have to add IMyInterface.h and update MyClass to implement the interface...

This is pretty similar to the pattern used to declare the class...

  • There is an include guard
  • OOOCode.h is included
  • This time we specify the interface name in a #define called OOOInterface
  • The interface vtable is then started with a call to the OOOVirtuals macro
  • Methods are added to the vtable using calls to the OOOVirtual macro
  • The vtable is then closed and OOOInterface is #undef'd so that other interfaces can be declared

So the new macros are...

  • OOOVirtuals - starts the interface vtable
  • OOOVirtual - declares a method entry in the vtable, the first argument will be the return type, the second argument is the method name and any additional arguments will be the parameters for the method. Any method implementing this virtual method will have to have the same signature and the compiler will check
  • OOOVirtualsEnd - closes the interface vtable

This still won't compile so next we update MyClass.h...

Again a small change...

  • The interface header has been added
  • An OOOImplement call has been added to the OOOImplements block to add the interface to the declaration

Just one new macro then...

  • OOOImplement - this adds the interface to the interface table, the only argument is the name of the interface

This will compile but the test will fail. In fact the test should crash with a NULL pointer exception, as the method has neither been implemented nor mapped to the interface. We need to update MyClass.c too...

Now the code will both compile and the tests will run successfully! (assuming I transcribed everything correctly).

There are 2 additions here...

  • The method has been implemented as getData and this method just calls the other method using the private method calling convention, OOOC
  • The interface vtable has been mapped in the constructor
    • The interface name is given in the OOOInterface #define to simplify the other macro calls
    • The interface vtable mapping is started with a call to OOOMapVirtuals
    • The getData method is mapped using a call to the OOOVirtualMapping macro
    • The mapping is closed with a call to OOOMapVirtualsEnd and OOOInterface is undef'd so that other interfaces can be mapped

So the new macros we have used are...

  • OOOC - this macro accesses the private methods of the current class instance, the first argument is the name of the method and any additional arguments are passed through as parameters to the method. It is not possible to access private methods of instances of other classes but later we will see how to access methods of other instances of the same class
  • OOOMapVirtuals - this starts the interface vtable mapping
  • OOOVirtualMapping - this maps a method to an entry in the interface vtable, the only parameter is the method name - this is the private method name as defined in the call to OOOMethod, it does not have to match the exported method name in the vtable defined in the interface header. It is important to add the methods to the mapping in the same order that they are added to the vtable in the interface header using calls to OOOVirtual
  • OOOMapVirtualsEnd - this closes the interface vtable mapping

Adding copy and isEqual methods

As a final example and to round out the macro examples let's see how we can add additional methods to copy and compare instances of MyClass. Of course, we start with a test so let's update MyClass.Test.c...

The following changes were made...

  • A new instance of MyClass, pMyClassCopy, is generated through a call to a new copy method
  • We check that the new copy is equal to the original
  • We check that the new copy returns the same value from getMyField
  • We check that the copy method didn't cheat and that the copy is a different instance (pointer address)
  • We remember to clean up the new instance too

Once again our code does not compile, but that's ok. We need to update MyClass.h to export the new methods...

This will now compile but, as with the interface implementation, the test will crash when it gets to the copy call as the method has not been implemented and mapped in the vtable. Anyway let's see what we've done...

  • Two new calls have been made to OOOExport to export the copy and isEqual methods

Now we implement and map the methods in MyClass.c...

Yay, success! The code compiles, runs and the tests pass. We added 2 new methods and mapped them, so what's new in this...

  • In the compare method we used a new macro, OOOPCall, this is more efficient than OOOCall and can also be used to access unmapped methods in a class. The first argument is the class instance, the second is the method name and any additional arguments will be passed into the method. In this case OOOC could not be used as we wanted to call a method on another instance
  • Notice that the additional mappings in OOOMapMethods are preceded by commas - this is because they result in static initialiser elements in a structure. The same applies to the virtual mappings if there is more than one.

So what haven't we seen? Well, two additional things spring to mind...

  • It is also possible to access fields on other instances of a class. This is achieved through calls to OOOField, like OOOPCall this can only be used in the class implementation and the first parameter will be the instance, the second parameter is the field name. We could have used this in place of OOOPCall above but it is a matter of style to use the accessor method instead (performance optimisations could dictate otherwise though)
  • If it is necessary to access the current instance (perhaps to return from a method or pass into another method) then it is always available in the methods, constructor and destructor through the OOOThis pointer

So that's it. Although this will all probably change in the next 5 minutes. If you're interested then keep an eye on this blog and the GitHub repository.

Saturday, June 23, 2012

A Macro Interlude

Or... Traversing Macro Pasting Hell

Full disclaimer: This article used to be very different but was also complete tosh. The code I wrote based on its assumptions and misinformation was working fine for days and then broke around 5 minutes after posting the article when I discovered a new use case. This new article addresses that use case and the subsequent solution

Macro pasting

The C preprocessor has a handy (if not essential) operator for pasting 2 tokens together: ##. I use it all over the place in my OOOCode project to generate class and function names, etc. using macros. It can be used like this...

Now consider the following...

In this second example I have passed a macro in as an argument to the PASTE macro and as a result it does not get expanded. In order to fix this it is necessary to add a level of indirection...
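Again the embedded snippets are missing, but the failure and the indirection fix can be sketched like this (FOO is an illustrative macro of my own, not from the original post):

```c
#define PASTE(a, b) a##b
/* extra level of indirection: arguments are fully expanded before
   being substituted into PASTE, where ## would block expansion */
#define PASTE2(a, b) PASTE(a, b)

#define FOO my

static int my_var = 42;

/* PASTE(FOO, _var) would paste to the token FOO_var, because FOO is
   an operand of ## and is therefore NOT expanded first; it would not
   compile here. PASTE2 expands FOO to my first, yielding my_var. */
static int indirection_demo(void) {
    return PASTE2(FOO, _var);
}
```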

As an aside, this same problem (and solution) occurs with the quoting operator, #, too.

Variadic macros and swallowing extra commas

Now it gets interesting as the pasting operator can also be used in variadic macros to swallow commas when no arguments are provided...

Handy, yeah? Well sort of. The problem is that this still exhibits the same problems as above when macros are used as arguments...

Macro pasting variadic hell

The above code does not compile as the second macro call does not get expanded. So I tried this...

I'm not sure if this would work in other environments but the C Preprocessor that comes with the OpenTV IDE doesn't swallow the comma in this case.

This gave me a big problem. I can either support macros as arguments or zero length argument lists... but not both :(

Believe me I tried a great many more constructions involving the ## operator and various indirections, but to no avail. It just wasn't happening. Eventually (it took quite a long time and may even have involved praying, as I was a long way into my OOOCode stuff and this was pretty key) I came across a different solution involving detecting empty argument lists. Doing this is not simple and definitely not something I want to get into here, but just know that in the following example the ISEMPTY macro expands to 1 if the argument list supplied is empty or 0 if not...

For the ISEMPTY macro stuff, special thanks have to go to Jens Gustedt and this article:

Sunday, June 17, 2012

OOOCode - Part 1

This is a continuation of the following post on Object Oriented C:

Object Oriented C - Part 2

First thing is to get the existing pattern example into an OCode project and stick that up on GitHub. I don't like the default OpenTV IDE project templates as they actually require multiple Eclipse projects and, due to the nature of the OpenTV-hacked Eclipse, cannot be built and run from a command line. This creates obvious problems when trying to set up a continuous integration environment. However, for these purposes a standard OCode template will suffice.

To do this:

  • I created an OOOCode directory and checked out the OOOCode git repository to it.
  • I opened the OpenTV IDE and selected the new OOOCode directory as the workspace location.
  • I created a new OpenTV Project called OOOCode in the default location (pressing Finish immediately as I did not want the hello world example code).

This resulted in the following projects being created in my OOOCode directory.
  • OOOCode
  • OOOCodeDir
  • OOOCodeFlow
  • OOOCodeRes
As pictured:

To build and run the project in the OpenTV Virtual STB:
  • Choose "Build All" from the "Project" menu.
  • Select the OOOCodeFlow project and press the Run button.
  • In the next dialog choose "Local OpenTV Application".

That's as much as I'm going to say about the OpenTV IDE for now but next I copied in my existing OO pattern example code and committed it to GitHub. This starting state can be found here:

Full disclosure... I did fix a couple of issues in the code I originally posted in the gists:
  • I missed a parameter in the MyClass constructor
  • I forgot that the VSTB exits when the main method exits and as such it can be difficult to check the debug output unless you put a real message loop in and wait for the quit message

Object Oriented C - Part 2

Now that I have detailed my existing pattern it's time to start working on an easier to implement/maintain replacement. To do this I need to start thinking about how to test it. After all I am fully embracing test driven development these days.

One thing I forgot, of course, when detailing the existing pattern is how to use it in an application. I think that part at least is pretty straightforward but, just to complete the square, here's an example main.c:

So here are the tests, although I haven't used the unit test framework I might ordinarily use (I didn't want to cloud the details and introduce more dependencies). Again, I haven't run this code (I'm currently working in OS X and the OpenTV IDE only runs on Windows) but as soon as I do I'll correct any mistakes found and likely add it to my previously created project on GitHub (currently lying empty).

One last thing for this post though, I thought of a better name for my project. From Object Oriented OCode we get OOOCode. That's right... Oooooo, code ;) (or Oh! Oh! Oh! Code - haven't decided)

Previous link is now almost certainly broken so here's a new one.

Saturday, June 16, 2012

Object Oriented C Reboot

As is normally the case I started something and then put it down and did lots of other things instead. In a rare bout of refocusing though I'm picking up the object oriented C (OCode) stuff again.

Previously I got very side tracked into setting up a perfect development environment as I would like to use at work. Something that would be compatible with automated build systems. Furthermore, I started trying to port the subversion externals pattern that I like to use for shared code to Git on GitHub - I quickly discovered that I'm still very much a Git newbie. This was not very lean of me...

So I'm rebooting. This time around I will focus on the task at hand, which is to create a simple templating and/or preprocessing system for generating class and interface boiler plate for use in OpenTV applications, which are exclusively written in C.

Let's start with a description of the pattern I currently use to implement classes. This part is simple and the boiler plate is not so bad.

The header:
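The embedded gist has not survived, so here is a sketch of a header in the pattern described: an opaque type plus constructor, destructor and an example method. All names are illustrative, not the original code.

```c
/* MyClass.h - sketch; names are illustrative */
#ifndef MY_CLASS_H
#define MY_CLASS_H

/* Opaque type: the struct fields are only visible inside MyClass.c */
typedef struct MyClass MyClass;

/* Constructor and destructor */
MyClass * MyClass_create(int value);
void MyClass_destroy(MyClass * self);

/* An example method; self always comes first, like an implicit this */
int MyClass_getValue(MyClass * self);

#endif /* MY_CLASS_H */
```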
The implementation:
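And a matching implementation sketch (again illustrative; in the real file the declarations would come from MyClass.h rather than being repeated here):

```c
#include <assert.h>
#include <stdlib.h>

/* Normally #include "MyClass.h"; repeated here so the sketch stands alone */
typedef struct MyClass MyClass;

/* The struct definition lives only in this file, so the fields are
 * effectively private */
struct MyClass {
    int value;
};

MyClass * MyClass_create(int value) {
    MyClass * self = malloc(sizeof(MyClass));
    if (self != NULL) {
        self->value = value;
    }
    return self;
}

void MyClass_destroy(MyClass * self) {
    free(self);
}

int MyClass_getValue(MyClass * self) {
    return self->value;
}
```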
Excuse any obvious errors, I just typed that out without trying to run or compile it, but I think you get the idea - nicely encapsulated, right?

So that's all good. A little boiler plate in exposing opaque types and constructors/destructors but not so bad. The problems/challenges(/opportunities ;)) start when I try to extend this pattern with interfaces. Let's extend the example with that pattern.

The interface:
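The gist is missing here as well, so the sketch below shows the shape of the pattern: a typedef per method prototype, an interface structure, an interface constructor taking the implementation callbacks, and redirector methods. Names are illustrative, and the structure is left visible in one file so the snippet compiles on its own; the real pattern would keep it opaque behind a header.

```c
#include <assert.h>
#include <stdlib.h>

/* One typedef per interface method prototype; the first argument is the
 * opaque implementation pointer */
typedef int (* MyInterface_getValueMethod)(void * implementation);

typedef struct MyInterface {
    void * implementation;
    MyInterface_getValueMethod getValue;
} MyInterface;

/* Interface constructor: takes the implementation pointer and one
 * function pointer per method */
MyInterface * MyInterface_create(void * implementation,
                                 MyInterface_getValueMethod getValue) {
    MyInterface * self = malloc(sizeof(MyInterface));
    if (self != NULL) {
        self->implementation = implementation;
        self->getValue = getValue;
    }
    return self;
}

void MyInterface_destroy(MyInterface * self) {
    free(self);
}

/* Redirector: forwards the call to whichever class implements the
 * interface */
int MyInterface_getValue(MyInterface * self) {
    return self->getValue(self->implementation);
}

/* Tiny standalone implementation, just to show a call going through */
static int answerGetValue(void * implementation) {
    (void) implementation;  /* no state needed for this demo */
    return 42;
}

static int demoInterfaceCall(void) {
    MyInterface * interface = MyInterface_create(NULL, answerGetValue);
    int result = MyInterface_getValue(interface);
    MyInterface_destroy(interface);
    return result;
}
```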
Now we can see some really obvious complexity. Just look at the length of it and it doesn't even do anything really. Immediately apparent is how much work it would be to add a new method to the interface.

  1. Add a new typedef for the method prototype
  2. Add a field to the interface structure
  3. Add an argument to the interface constructor
  4. Add a redirector method to allow the implementation to be called

It's fiddly work and potentially error prone (and this stuff can be hard to debug). The good news is that interfaces tend to be fairly stable once done as they don't contain business logic (although they may represent it I guess).

We're not finished though. The interface has to be implemented by our class for it to be useful.

Here's the new header for our class:
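This gist is also missing, so here is a sketch of what the extended header might look like, with the casting-convention method added (names illustrative; MyInterface is forward declared here so the sketch stands alone, where the real header would include MyInterface.h):

```c
/* MyClass.h - sketch of the extended header; names are illustrative */
#ifndef MY_CLASS_H
#define MY_CLASS_H

/* Normally: #include "MyInterface.h" */
typedef struct MyInterface MyInterface;

typedef struct MyClass MyClass;

MyClass * MyClass_create(int value);
void MyClass_destroy(MyClass * self);
int MyClass_getValue(MyClass * self);

/* The casting convention: get this instance as a MyInterface instance */
MyInterface * MyClass_asMyInterface(MyClass * self);

#endif /* MY_CLASS_H */
```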

Notice the addition of a method to get an instance as an instance of MyInterface; this is our casting convention.

Here's the new implementation of our class:
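A sketch of what that implementation might look like (illustrative names; the MyInterface machinery is inlined at the top so the snippet compiles on its own, where the real file would just include the two headers):

```c
#include <assert.h>
#include <stdlib.h>

/* --- Inlined MyInterface machinery so the sketch stands alone --- */
typedef int (* MyInterface_getValueMethod)(void * implementation);
typedef struct MyInterface {
    void * implementation;
    MyInterface_getValueMethod getValue;
} MyInterface;
static MyInterface * MyInterface_create(void * implementation,
                                        MyInterface_getValueMethod getValue) {
    MyInterface * self = malloc(sizeof(MyInterface));
    if (self != NULL) {
        self->implementation = implementation;
        self->getValue = getValue;
    }
    return self;
}
static void MyInterface_destroy(MyInterface * self) { free(self); }
static int MyInterface_getValue(MyInterface * self) {
    return self->getValue(self->implementation);
}

/* --- MyClass implementing MyInterface --- */
typedef struct MyClass {
    int value;
    MyInterface * asMyInterface;  /* field storing the interface instance */
} MyClass;

/* The interface method implementation; note the cast from void *,
 * which is exactly where copy paste errors like to hide */
static int MyClass_getValueImplementation(void * implementation) {
    MyClass * self = (MyClass *) implementation;
    return self->value;
}

MyClass * MyClass_create(int value) {
    MyClass * self = malloc(sizeof(MyClass));
    if (self != NULL) {
        self->value = value;
        /* construct the interface instance in the class constructor */
        self->asMyInterface =
            MyInterface_create(self, MyClass_getValueImplementation);
    }
    return self;
}

void MyClass_destroy(MyClass * self) {
    if (self != NULL) {
        /* destroy the interface instance in the class destructor */
        MyInterface_destroy(self->asMyInterface);
        free(self);
    }
}

/* The casting convention */
MyInterface * MyClass_asMyInterface(MyClass * self) {
    return self->asMyInterface;
}
```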

That wasn't so bad, we:

  1. Added a field to the class structure to store the interface instance
  2. Implemented the interface method
  3. Constructed an interface instance in the class constructor
  4. Destroyed the interface instance in the class destructor
  5. Added the method to implement our casting convention
Again it was fiddly though, and remembering that we have to do these things in every class that implements the interface, we are now exposed to the following types of errors:
  • Memory leaks due to forgetting to destroy the interface
  • Strange effects from casting incorrectly in method implementations (doesn't seem likely until you remember that casts are a nightmare for hiding copy paste errors)

So let's review. We now have three quite complicated files that are hard to maintain. In particular, changes in interfaces result in a large amount of refactoring radiating out all over the place. Plus we have to be careful whenever we implement a new instance of an interface. And remember that this example only implements one interface method!

This is a barrier to using the pattern which I would like to overcome. Just writing it up took longer than I expected and I have to go out now so I guess there will be a part 2 where I actually get started on how I would like it to look and work :)


As an aside, the work I have been doing on makefiles that build and test OpenTV applications using make function implementations will likely be the subject of a future post. As will figuring out how to share code and resources across Git repositories/projects.

Friday, June 15, 2012

Required watching!

Well it might be if it weren't 197 episodes and counting, but this series of videos illustrating test driven development (TDD) in Java and Eclipse is really excellent. I'm currently up to episode 25 (since yesterday... addicted!) and already I have learnt so much about:

  • writing tests up front
  • emergent design
  • AND just how magical Eclipse is when it comes to generating code and refactoring in Java

I really recommend watching some of these. It starts with a completely blank application so I would go from there.

I really want some of these IDE features in the OCode development we do too. And I can see that there may be some mileage in that.

Now I can't wait for James' new series that I gladly helped fund on Kickstarter.

2 things that have got me excited recently in one package - TDD and server side Javascript... Yay, Node.js!

Thursday, June 7, 2012

Error handling in C

An interesting and somewhat familiar set of macros for error handling in C:
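The embedded snippet is gone, so below is my own reconstruction in the same spirit: a check macro that logs and jumps to a single error label, with the function's return value doubling as the error code. This is a sketch of the style, not the original gist.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* On failure: log file/line/message, then jump to the cleanup label.
 * The do/while(0) wrapper makes the macro behave like one statement. */
#define check(condition, message) \
    do { \
        if (!(condition)) { \
            fprintf(stderr, "[ERROR] %s:%d: %s\n", \
                    __FILE__, __LINE__, (message)); \
            goto error; \
        } \
    } while (0)

/* Example: the return value is the error code, checks stay unindented */
static int copyString(char * destination, size_t size, const char * source) {
    check(destination != NULL, "destination is NULL");
    check(source != NULL, "source is NULL");
    check(strlen(source) < size, "destination too small");
    strcpy(destination, source);
    return 0;   /* success */

error:
    return -1;  /* single exit point for all failures */
}
```

Compare that with the usual nested if-return style and you can see where the saved indentation comes from.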

A possible coding standard? Not with the debug flags (I hate debug builds!), but interesting all the same. I can see this approach getting rid of some of the over-indentation caused by checking return values. As a coding standard though, I am resistant to always using the return value of a function as an error code... maybe I shouldn't be.

The most interesting part is that it looks and behaves like exception handling to an extent, something which is altogether too difficult to mess around with in C. However, by stopping short of trying to automatically pass you back up the stack, perhaps this does capture some of the essence (read: value). A good, lean application of the 80/20 rule.

Still, to retrofit this stuff would likely fall short of being lean. Maybe I can mix it into the Object Oriented C stuff I started working on.

Saturday, June 2, 2012

Object Oriented C

I'm waiting for virus updates to complete and the new windows GitHub client to install - why is my computer so slow today!?

Anyway a chance to write a blog post then.

I'm becoming increasingly agile :) ... no not like a cat :s

Recently I completed my certified Scrum Master and Product Owner training and we're making great progress at work putting it all into practice. It really has reinvigorated my interest in software development and once again I am reading voraciously on subjects that actually excite me (you'll maybe shake your head when you realise what those subjects are ;)). Can't remember having this feeling since I first started coding back in University.

So what's on my mind today?

Well, I've been reading Michael Feathers' "Working Effectively With Legacy Code" and once again I have found an author who really does reflect, distill and reinforce what I only really knew at the back of my mind. This is great as it gives me more confidence that I'm on the right path but also provides a glimpse of where that path leads.

One place that it leads is to greater use of object oriented principles in our code.

Some background... We write software for the OpenTV middleware in C to run on digital TV set top boxes. This is not object oriented. It isn't even really C, we can't use standard libraries and have to write pretty much everything from scratch. This has included basic libraries and a lot of tools and frameworks. Our main focuses at the moment are:

  • Reduce build times and complexity
  • Use more off the shelf and advanced development/debugging tools
    • There is a freely available OpenTV IDE based on Eclipse which is very useful, as it also includes a set top box emulator allowing us to step through code
  • Automate builds / implement continuous integration (Jenkins)
    • This conflicts with the Eclipse IDE, which is not good with headless builds. We've worked around this though; it just means we can't take advantage of standard OpenTV project templates. No biggy, we know what we're doing :)
  • Add a unit testing framework and increase test coverage of legacy code
  • Test driven development
    • We have the framework so we have to do this to really reap the benefits in my opinion
  • Automate tests
    • Very much a TODO - the set top box emulator will not run on our continuous integration server as a service; it only works when someone is logged on. It uses DirectX :s
  • Move away from specification documents to specification by tests.
    • We are looking at integrating Fit and FitNesse to capture business rules and automate acceptance testing

So why more OO?

Simple really. We need units to test. We need those units to be independent so that we can develop and verify them quickly. The approaches of TDD and the means of dealing with legacy code to make it more maintainable/manageable require great discipline and effort. We need to make the task less onerous. A major part of doing that is to use interfaces to break dependencies between libraries. There are other options in C (preprocessor for example) and we have to be careful about performance on our platform. But with careful application I am convinced that we will get the best mileage out of a consistent application of OO interfaces.

So today I am going to figure out a good standard pattern for implementing classes and interfaces in C. Probably using the preprocessor to do some of the hard work. Previous attempts have always resulted in a lot of unwieldy repetition that I sometimes think only I understand (making it instant legacy code!).

Ok, installs completed :) New git repository created. I wonder if I'll just end up reinventing Objective C.

Wednesday, April 4, 2012

I'm back! Let's see how long for :)

Wow, we're well into 2012 already and I haven't posted anything since 2010!

So what's changed?

I guess I was too busy doing stuff to blog about it for a while but yeah, quite a bit has changed. I have a lovely girlfriend now (which is probably another reason I haven't posted) :)

I quit drinking just over a year ago and haven't looked back on that. Just before my exam for the WSET advanced certificate in wines and spirits (yeah that helped - fail!)

I have taken up tea drinking in quite a serious way (and am now a very fussy tea snob ;))

I took up Ashtanga Yoga, but have lapsed already :o (although still cycling, yay!)

I have a renewed interest in work mostly due to discovering Scrum.

So now my plan is to get back into blogging so perhaps we can expect some stuff about tea and agile development :)