Friday 19 December 2008

Soundwave XAML

I visited the NoiseTrade website this week and they had a neat-looking soundwave icon that they are using throughout the site. I doubt it is original to them, but I think it works nicely and I thought I would have a go at creating it in XAML.

It's very simple to do: you create a series of concentric circles and then mask off the bit you don't want with a Polygon.

Soundwave in XAML

Here’s the XAML:

<Grid Width="100" Height="100">
    <Ellipse Width="10" Height="10" Fill="Orange" />
    <Ellipse Width="30" Height="30" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="50" Height="50" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="70" Height="70" Stroke="Orange" StrokeThickness="5" />
    <Polygon Stretch="Fill" Fill="White" Points="0,0, 1,1 -1,1 -1,-1 1,-1" />
</Grid>

The only really clever thing going on here is that we use the Stretch property of the Polygon to centre our cut-out shape over the centre point of the circles. The Stretch property also means we can use nominal coordinates, making it easier to get the coordinates right. Alternatively we could have used the following:

<Polygon Fill="Red" Points="50,50, 100,100 0,100 0,0 100,0" />

The outer Grid must be sized to be a square so that the angles are 45 degrees when using the Stretch=Fill method. Here’s the whole thing with the cut-out in red to make it more obvious what is going on:

XAML Soundwave with clipping polygon
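In markup, that is just the first snippet again with the Polygon's Fill switched from White to Red:

<Grid Width="100" Height="100">
    <Ellipse Width="10" Height="10" Fill="Orange" />
    <Ellipse Width="30" Height="30" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="50" Height="50" Stroke="Orange" StrokeThickness="5" />
    <Ellipse Width="70" Height="70" Stroke="Orange" StrokeThickness="5" />
    <Polygon Stretch="Fill" Fill="Red" Points="0,0, 1,1 -1,1 -1,-1 1,-1" />
</Grid>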

I then tried using Expression Blend to subtract the Polygon from each of the Ellipses in turn, but it struggled to do what I wanted, as it closed the new arcs, giving the following effect (not sure why the right-hand sides look clipped):

Soundwave closed path

However, I was able to fix this by first going in and removing the final “z” from each newly created Path, which meant that the shapes were no longer closed. Then, using the node editor tool, I deleted the extra node from each path. This gets us almost there, with the exception that Expression Blend has let the central circle drift slightly out of position:

XAML Soundwave in Blend

The resulting XAML isn't quite as clean as our initial attempt, but it has the advantage of not relying on a white shape covering things up, meaning that it can be placed on top of any background image.

<Grid Width="100" Height="100">
  <Path HorizontalAlignment="Right" Margin="0,46.464,45,46.464" Width="5" Fill="Orange" Stretch="Fill" Data="M3.535533,0 C4.4403558,0.90481973 5,2.1548209 5,3.535533 C5,4.916245 4.4403558,6.1662469 3.535533,7.0710659 L0,3.535533 z"/>
  <Path Margin="0,38.661,34.5,38.339" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M11.338835,2.5 C13.600883,4.7620502 15,7.8870525 15,11.338835 C15,14.790609 13.600889,17.915615 11.338835,20.17767" Width="8.808" HorizontalAlignment="Right"/>
  <Path Margin="0,31.59,24.5,31.41" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M18.409901,2.5 C22.481602,6.5716963 25,12.19669 25,18.409901 C25,24.623108 22.481606,30.248114 18.409901,34.319801" Width="11.737" HorizontalAlignment="Right"/>
  <Path Margin="0,24.519,14.5,24.481" Stretch="Fill" Stroke="Orange" StrokeThickness="5" Data="M25.480968,2.5 C31.362314,8.3813419 35,16.506346 35,25.48097 C35,34.455612 31.362314,42.580608 25.480968,48.461937" Width="14.665" HorizontalAlignment="Right"/>
</Grid>

Interestingly, rendering this in Kaxaml, the inner segment now appears in the right place:

XAML Soundwave in Kaxaml

Thursday 18 December 2008

When to Review Code

I am assuming I can take it for granted that most developers acknowledge the value of performing code reviews. All the companies I have worked for have had some kind of policy stating that code reviews must take place, but when the code review takes place can vary dramatically. While I was writing this post, I came across a couple of helpful Stack Overflow questions with more insights on when a code review should take place (When to review code - before or after checkin to main and When is the most effective time to do code reviews).

I'll discuss three possible times to perform a code review:

Post Check-in

I think this is the most common time for code reviews to take place. The developer checks in their code, then arranges for it to be reviewed.

Advantages:
  • The developer has actually "finished" coding, meaning that issues found really are issues as opposed to "oh yes I was going to do that later".
  • The developer is not held up waiting around for a reviewer - they can get their code checked in and start work on the next task.
  • Source Control systems typically have tools which make it very easy to highlight just the changes that have been made in a particular check-in. This is very convenient for reviewing bug fixes.
Disadvantages:
  • Unreviewed code can get into the main branch, increasing the chances of an inadvertent show-stopper bug.
  • There is potential for the review to be put on the back burner and forgotten as there is nothing being held up by it.
  • Management may view this feature as "complete" and thus resist allocating further time to it if the review recommends some refactoring.
  • The checked in code may have already gone through a manual testing cycle which is a strong disincentive to touch the code even to implement minor enhancement suggestions (as they invalidate the tests).
  • The developer is likely working on a new task when the code review takes place, meaning that it is no longer fresh in their mind, and (depending on the way source control is used) can easily end up mixing code review enhancements with new code for an unrelated feature in a single check-in.
It is interesting how the use of branches in source control can lessen the impact of many of the points I have made here (both advantages and disadvantages).

Pre-Checkin

One company I worked for had a policy of pre-checkin code reviews. For small changes, another developer came and looked over your shoulder while you talked them through the changes, and for larger features, you emailed the code across and awaited the response.

Advantages

  • The code is still fresh in the developer's mind as they have not yet moved on.
  • It is clear to the manager that we are not yet "finished" until the code review, resulting code modifications and re-testing have taken place.
  • The fact that the code is still checked out means that it is very easy to make changes or add TODOs right in the code during the review, which results in fewer suggestions getting forgotten.
  • The code can be checked in with a comment detailing who reviewed it - share the blame with your reviewer if it contains a show-stopper bug!
Disadvantages
  • Though the code is "ready" for check-in, it may need to be merged again before the actual check-in, which can sometimes result in significant changes.
  • It can make it harder for the reviewer to view the changes offline.
  • A strict no check-in without code review policy can result in people making fewer check-ins, and batching too much up into one check-in.
  • Can hold up the developer from starting new work if the reviewer is not available.

Got it working

There is one other potential time for a code review. In Clean Code, Bob Martin talks about the "separation of concerns" developers tend to employ while coding. That is, first we "make it work", and then we "make it clean". We have got the main task complete, but things like error handling, comments and removing code smells have yet to take place. What if a code review took place before the "make it clean" phase?

Advantages
  • The developer does not need to feel so defensive about their code, as they have not completely finished.
  • It's not too late to make architectural changes, and maybe even refactor other areas of the codebase to allow the new code to "fit" better.
  • Because there is still time allocated to the task, it can effectively serve as a second design review, with an opportunity to make some changes to the design with the benefit of hindsight.
  • Could be done as a paired programming exercise, experimenting with cleaning up the code structure and writing some unit tests.
Disadvantages
  • Wasting time looking at issues that were going to be fixed anyway.
  • May not be appropriate for small changes, and with large pieces of development, may depend heavily on whether the coder follows many small iterations of "make it work, make it clean", or a few large iterations.

Final thoughts

There are two factors that greatly affect the choice of when to code review. The first is your use of Source Control. If individual developers and development teams are able to check work into sub-branches, before merging into the main branch, then many of the disadvantages associated with a post-checkin code review go away. In fact, you can get the best of both worlds by code reviewing after checking in to the feature branch, but before merging into the main branch.

Second is the scope of the change. A code review of a minor feature or bugfix can happen late on in the development process as the issues found are not likely to involve major rearchitecture. But if you are developing a whole new major component of a system, failing to review all the code until the developer has "finished" invariably results in a list of suggestions that simply cannot be implemented due to time constraints.

Thursday 11 December 2008

List Filtering in WPF with M-V-VM

The first Windows Forms application I ever wrote was an anagram generator back in 2002. As part of my ongoing efforts to learn WPF and M-V-VM, I have been porting it to WPF, and adding a few new features along the way. Today, I wanted to add a TextBox that would allow you filter the anagram results to only show results that contained a specific sub-string.

The first task is to create a TextBox and a ListBox in XAML and set their binding properties. We want the filter to update every time the user types a character, so we use UpdateSourceTrigger to specify this.

<TextBox Margin="5" 
     Text="{Binding Path=Filter, UpdateSourceTrigger=PropertyChanged}" 
     Width="150" />
...
<ListBox Margin="5" ItemsSource="{Binding Phrases}" />

Now we need to create the corresponding Filter and Phrases properties in our ViewModel. Then we need to get an ICollectionView based on our ObservableCollection of phrases. Once we have done this, we can attach a delegate to it that will perform our filtering. The final step is to call Refresh on the view whenever the user changes the filter box.

private ICollectionView phrasesView;
private string filter;

public ObservableCollection<string> Phrases { get; private set; }
                    
public AnagramViewModel()
{
   ...
   Phrases = new ObservableCollection<string>();
   phrasesView = CollectionViewSource.GetDefaultView(Phrases);
   phrasesView.Filter = o => String.IsNullOrEmpty(Filter) ? true : ((string)o).Contains(Filter); 
}

public string Filter 
{
   get
   {
      return filter;
   }
   set
   {
      if (value != filter)
      {
         filter = value;
         phrasesView.Refresh();
         RaisePropertyChanged("Filter");
      }
   }
}
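For completeness, RaisePropertyChanged isn't shown above; the ViewModel is assumed to implement INotifyPropertyChanged in the usual way, along these lines:

public event PropertyChangedEventHandler PropertyChanged;

private void RaisePropertyChanged(string propertyName)
{
    // raise the standard INotifyPropertyChanged event for data binding
    if (PropertyChanged != null)
    {
        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}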

And that's all there is to it. It can be kept nicely encapsulated within the ViewModel without the need for any code-behind in the view.

List filtering in WPF

Wednesday 10 December 2008

Measuring TDD Effectiveness

There is a new video on Channel 9 about the findings of an experimental study of TDD. They compared some projects at Microsoft and IBM with and without TDD, and published a paper with their findings (which can be downloaded here).

The headline results were that the TDD teams had a 40-90% lower pre-release "defect density", at the cost of a 15-35% increase in development time. To put those figures into perspective, imagine a development team working for 100 days and producing a product with 100 defects. What kind of improvement might they experience if they used TDD?

                 Days   Defects
Non TDD           100       100
TDD worst case    115        60
TDD best case     135        10

Here's a few observations I have on this experiment and the result.

Limitations of measuring Time + Defects

From the table above it seems apparent that a non-TDD team could use the extra time afforded by their quick and dirty approach to development to do some defect fixing, and would therefore be able to comfortably match the defect count of a TDD team given the same amount of time. From a project manager's perspective, if their only measures of success are in terms of time and defect count, then they will not likely perceive TDD as being a great success.

Limitations of measuring Defect Density

The test actually measures "defect density" (i.e. defects per KLOC) rather than simply "defects". This is obviously to give a normalised defect number, as the comparable projects were not exactly the same size, but there are a couple of implications to this metric.

First, if you write succinct code, your defect density is higher than that of verbose code with the same number of defects. I wonder whether this might penalise TDD, which ought to produce fewer lines of code (due to not writing more than is needed to pass the tests). Second, the number of defects found pre-release often says more about the testing effort put in than the quality of the code. If we freeze the code and spend a month testing, the bug count goes up, and therefore so does the defect density, but the quality of the code has not changed one bit. The TDD team may well have failing tests that reveal more limitations of their system than a non-TDD team would have time to discover.

In short, defect density favours the non-TDD team so the actual benefits may be even more impressive than those shown by the study.

TDD results in better quality code

The lower defect density results of this study vindicate the idea that TDD will produce better quality code. I thought it was disappointing that the experiment didn't report any code quality metrics, such as cyclomatic complexity, average class length, class coupling etc. I think these may have revealed an even bigger difference between the two approaches. As it is, the study seems to assume that quality is simply proportional to defect density, which in my view is a mistake. You can have a horrendous code-base that through sheer brute force has had most of its defects fixed, but that does not mean it is high quality.

TDD really needs management buy-in

The fact that TDD takes longer means that you really need management buy-in to introduce TDD in your workplace. Taking 35% longer does not go down well with most managers, unless they really believe it will result in significant quality gains. I should also add that to do TDD effectively, you need a fast computer, which also requires management buy-in. TDD is based on minute-by-minute cycles of writing a failing unit test, and writing code to fix it. There are two compile steps in that cycle. If your compile takes several minutes then TDD is not a practical way of working.

TDD is a long-term win

Two of the biggest benefits of TDD relate to what happens next time you revisit your code-base to add more features. First is the comprehensive suite of unit tests that allow you to refactor with confidence. Second is the fact that applications written with TDD tend to be much more loosely coupled, and thus easier to extend and modify. This is another thing the study failed to address at all (perhaps a follow-up could measure this). Even if for the first release of your software the TDD approach was worse than non-TDD, it would still pay for itself many times over when you came round for the second drop.

Tuesday 9 December 2008

Some thoughts on assemblies versus namespaces

I have noticed a few debates over on StackOverflow and various blogs recently on whether components in a large .NET application should each reside in their own assembly, or whether there should be fewer assemblies and namespaces used instead to separate components (for example, see this article).

Much of the debate revolves around refuting the idea that simply by breaking every component out into its own assembly you have necessarily achieved separation of concerns (e.g. Separate Assemblies != Loose Coupling).

This may be the case, but why not have lots of assemblies? There are two main reasons to favour fewer assemblies:

  • Speed - Visual Studio simply does not cope well with large numbers of projects. If you want to maintain a short cycle of "code a little, compile, test a little", then you want your compile time to be as quick as possible.
  • Simplified Deployment - it's easier to deploy applications containing half a dozen files than those with hundreds.

Both are valid concerns (and the speed issue is causing real problems on the project I am working on), but I want to revisit the two main reasons why I think that separate assemblies can still bring benefits to a large project.

Separate assemblies enforce dependency rules. This is the controversial one. Many people have said that NDepend can do this for you, which is true, but not every company has NDepend (or at least not on every developer's PC). It is a shame there is not a simpler light-weight tool available for this task.

While I agree that separate assemblies do not automatically mean loose coupling, I have found they are a great way to help inexperienced developers put code in the right place (or at least make it very difficult to put it in the wrong place). Yes, in an ideal world, there would be training and code reviews (and NDepend) to protect against the design being compromised. And it seems likely that genius developers such as Jeremy Miller are in a position to control these factors in their workplace. But many of us are not.

And in a large project team, separating concerns such as business logic and GUI into separate assemblies does make a very real and noticeable difference in how often coupling is inadvertently introduced between the layers, and thus how testable the application is.

Separate assemblies enable component reuse. What happens when someone creates a new application that needs to reuse a component from an existing one? Well you simply take a dependency on whatever assembly contains that component. But if that assembly contains other components, you not only bring them along for the ride, but all their dependencies too. Now you might say that you could split the assembly up at this point, but this is not a decision to be taken lightly - no one wants to be the one who breaks someone else's build. After all, there are other applications depending on this assembly - you don't want to be checking out their project files and modifying them just because you needed to take a dependency on something.

Conclusion

I guess I'm saying that the namespace versus assembly decision isn't quite as simple as some bloggers are making out, and the side you fall on in this debate probably reveals a lot about the size and ability of the team you are working on, the working practices and tools you have at your disposal, and what sort of application you are creating (in particular, is it a single application or a whole suite of inter-related applications, and is there one customer, or many customers each with their own customised flavour of your applications?).

It seems to me that much of this problem could have been solved if Visual Studio had given us a way of using netmodules. This little-known feature of .NET allows you to compile code into pieces that can be put together into a single assembly. It would be extremely useful if you could compose a solution of many netmodules, and decide at build time whether you want to make them into one or many assemblies. That way, in one application component A could share a DLL with component B, while in another application, component A could be in a DLL all by itself.
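As a rough sketch of how this can be done with the raw command-line tools today (ComponentA.cs and ComponentB.cs are hypothetical source files standing in for two components):

rem compile each component to a netmodule rather than an assembly
csc /target:module /out:ComponentA.netmodule ComponentA.cs
csc /target:module /out:ComponentB.netmodule ComponentB.cs

rem one application packages both components behind a single assembly manifest
al /target:library /out:Components.dll ComponentA.netmodule ComponentB.netmodule

Note that the assembly linker produces a multifile assembly here (the netmodules still ship alongside the manifest DLL), so this gives you one logical assembly rather than one physical file; the pity is that Visual Studio offers no project type that drives this for you.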

Wednesday 19 November 2008

Visual Studio Debugging Feature - Tracepoints

While watching the PDC 2008 session on Visual Studio Debugger Tips and Tricks (TL59), I came across a handy Visual Studio debugging feature I didn't know about - tracepoints. A tracepoint is simply a breakpoint that will emit a debug trace statement when it is reached. It can emit things like the current function name, stack trace, or the contents of a variable. What's really cool is that you don't have to actually stop execution. This allows you to quickly add debugging statements without the need to check out your code or remember to take the Debug.WriteLine statements out afterwards.

It's really handy for working out what order things are getting called in multi-threaded code, when setting normal breakpoints would change the order of execution. I have also used them a bit for performance measurements, although I still need to create and start a Stopwatch instance in the code itself (perhaps there is another trick I am missing).

To create a tracepoint, simply create a normal breakpoint and right-click it, and select the "When Hit..." option.

Creating a tracepoint step 1

This brings up a dialog that allows you to specify the format of the trace statement, as well as allowing you to specify that it should continue execution (otherwise this will just remain a normal breakpoint). Finally, you can specify a macro to be run as well. The dialog gives good instructions for how to specify the tracepoint message. Most useful is being able to put an expression in curly braces. For example, {stopwatch.ElapsedMilliseconds}
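For instance, given code along these lines (DoWork is just a stand-in for whatever you are measuring), you could set a tracepoint on the stopwatch.Stop line with a message like "DoWork took {stopwatch.ElapsedMilliseconds} ms":

var stopwatch = System.Diagnostics.Stopwatch.StartNew();
DoWork(); // the operation being timed
stopwatch.Stop(); // tracepoint here logs the elapsed time without pausing execution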

Creating a tracepoint step 2

Once you have done this, the tracepoint is indicated by a diamond instead of the normal circle:

A tracepoint

It's available in both Visual Studio 2005 and 2008.

Monday 17 November 2008

SourceSafe Item Change Counter

Although I remain hopeful that we will switch to TFS soon, my work currently uses SourceSafe for our source control. I wanted to identify which files in our repository had been changed the most times, and which files had been changed by the most different people. This data is to be used as part of a presentation I will be doing on dependencies and the impact they have on code quality (low-level components that are frequently changing cause a lot of pain).

Using the SourceSafe COM object makes this a simple task. It's not quick (and my code isn't optimised in any way), but it does provide a useful view into which are the classes subject to constant change. The project parameter of the Count method should be of the form $/MyProject.

Here's the code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SourceSafeTypeLib;

namespace SourceSafeAnalysis
{
    class SourceSafeChangeCounter : IDisposable
    {
        VSSDatabase db;
        List<SourceSafeChange> changes;

        public SourceSafeChangeCounter(string sourceSafeIni, string user, string password)
        {
            db = new VSSDatabase();
            db.Open(sourceSafeIni, user, password);            
        }

        public void Count(string project)
        {
            changes = new List<SourceSafeChange>();
            VSSItem item = db.get_VSSItem(project, false);
            CountItem(item);
            changes.Sort((x,y) => y.Changes - x.Changes);
        }

        private void CountItem(IVSSItem item)
        {
            if (item.Type == (int)VSSItemType.VSSITEM_PROJECT)
            {
                IVSSItems children = item.get_Items(false);
                foreach (IVSSItem child in children)
                {
                    CountItem(child);
                }
            }
            else
            {
                if (item.Name.EndsWith(".cs"))
                {
                    IVSSVersions versions = item.get_Versions((int)VSSFlags.VSSFLAG_RECURSNO);
                    SourceSafeChange change = new SourceSafeChange();
                    change.FileName = item.Spec;

                    foreach (IVSSVersion version in versions)
                    {
                        // ignore labels
                        if (String.IsNullOrEmpty(version.Label))
                        {
                            change.AddUser(version.Username);
                            change.Changes++;
                        }
                    }
                    changes.Add(change);
                }
            }
        }

        public IList<SourceSafeChange> Results
        {
            get { return changes; }
        }

        public void Dispose()
        {
            db.Close();
            db = null;
        }
    }

    class SourceSafeChange
    {
        private List<string> users;

        public SourceSafeChange()
        {
            users = new List<string>();
        }

        public string FileName { get; set; }
        public int Changes { get; set; }
        public IList<string> Users { get { return users; } }

        public void AddUser(string username)
        {
            if (!users.Contains(username))
            {
                users.Add(username);
            }
        }

        public override string ToString()
        {
            return string.Format("{0}: {1} ({2} users)", FileName, Changes, Users.Count);
        }
    }
}
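To use it, a small console driver along these lines will do (the srcsafe.ini path, user name and password are placeholders for your own):

using System;

namespace SourceSafeAnalysis
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var counter = new SourceSafeChangeCounter(
                @"\\server\vss\srcsafe.ini", "username", "password"))
            {
                counter.Count("$/MyProject");
                // most frequently changed files come first, thanks to the sort in Count
                foreach (SourceSafeChange change in counter.Results)
                {
                    Console.WriteLine(change);
                }
            }
        }
    }
}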

Thursday 13 November 2008

WaveOutOpen Callbacks in NAudio

The waveOutOpen Windows API allows you to specify a variety of callback modes to receive information about when a buffer has finished playing. The choices are:

  • CALLBACK_EVENT - the dwCallback parameter is an event handle.
  • CALLBACK_FUNCTION - the dwCallback parameter is a callback procedure address.
  • CALLBACK_NULL - no callback mechanism (the default setting).
  • CALLBACK_THREAD - the dwCallback parameter is a thread identifier.
  • CALLBACK_WINDOW - the dwCallback parameter is a window handle.

Of these, only CALLBACK_FUNCTION and CALLBACK_WINDOW are of interest to us in NAudio, as they are the only two that will actually give back information about which particular buffer has finished playing.

My preference was to use CALLBACK_FUNCTION as this means that the user has no need to pass a window handle to the WaveOut constructor. There are of course implications to this. It means that the callback function itself will not be running on the GUI thread. However, so long as you are aware of this and don't attempt to talk to GUI components, all is well.

The company I work for has been using CALLBACK_FUNCTION for a large .NET application installed on thousands of PCs worldwide. However, we have discovered that certain audio chipsets have intermittent problems hanging in waveOutReset or waveOutPause if the CALLBACK_FUNCTION is used. Early versions of Realtek drivers exhibited this problem, but the latest drivers work correctly. However, the SoundMAX chipset found on a lot of modern laptops has this problem, and we have not been able to find a workaround.

So no problem, we decided to switch over to the CALLBACK_WINDOW mechanism. This works great on all chipsets we have tested. There is however one big problem. If you decide you want to stop playing, close the WaveOut device, and immediately create a new one and start playing, it all falls over horribly, and the .NET framework crashes with an execution engine exception.

The problem is related to the use of a native window to intercept the WaveOut messages. Here's the NativeWindow used by NAudio:

private class WaveOutWindow : System.Windows.Forms.NativeWindow
{
    private WaveInterop.WaveOutCallback waveOutCallback;

    public WaveOutWindow(WaveInterop.WaveOutCallback waveOutCallback)
    {
        this.waveOutCallback = waveOutCallback;
    }

    protected override void WndProc(ref System.Windows.Forms.Message m)
    {
        if (m.Msg == (int)WaveInterop.WaveOutMessage.Done)
        {
            IntPtr hOutputDevice = m.WParam;
            WaveHeader waveHeader = new WaveHeader();
            Marshal.PtrToStructure(m.LParam, waveHeader);

            waveOutCallback(hOutputDevice, WaveInterop.WaveOutMessage.Done, 0, waveHeader, 0);
        }
        else if (m.Msg == (int)WaveInterop.WaveOutMessage.Open)
        {
            waveOutCallback(m.WParam, WaveInterop.WaveOutMessage.Open, 0, null, 0);
        }
        else if (m.Msg == (int)WaveInterop.WaveOutMessage.Close)
        {
            waveOutCallback(m.WParam, WaveInterop.WaveOutMessage.Close, 0, null, 0);
        }
        else
        {
            base.WndProc(ref m);
        }
    }
}

When the WaveOut device is created we attach our NativeWindow to the handle passed in. Then when the WaveOut device is closed, we dispose of our buffers and detach from the NativeWindow:

protected void Dispose(bool disposing)
{
    Stop();
    lock (waveOutLock)
    {
        WaveInterop.waveOutClose(hWaveOut);
    }
    if (disposing)
    {
        if (buffers != null)
        {
            for (int n = 0; n < numBuffers; n++)
            {
                if (buffers[n] != null)
                {
                    buffers[n].Dispose();
                }
            }
            buffers = null;
        }
        if (waveOutWindow != null)
        {
            waveOutWindow.ReleaseHandle();
            waveOutWindow = null;
        }
    }
}

This works fine with CALLBACK_FUNCTION, as by the time the call to waveOutReset (which happens in the Stop function) returns, all the callbacks have been made. In CALLBACK_WINDOW mode, this is not the case. We might still get Done and Closed WaveOutMessages. What happens to those messages? Well in the normal case they simply will be ignored by the message loop of the main window now that we have unregistered our NativeWindow. But if we immediately open another WaveOut device, using the same Window handle, then messages for the (now disposed) WaveOut device will come through to be handled by the new WaveOut's callback function. This means that the associated buffer GCHandle will be to a disposed object.

Could accessing an invalid GCHandle cause an ExecutionEngineException? I'm not sure. I put in various safety checks to ensure we didn't access the Target of a dead GCHandle, but that didn't solve the problem. Sadly there is no stack trace available to point to the exact cause of the fault.

The solution? Don't use an existing window. Instead of inheriting from NativeWindow, inherit directly from Form. No attaching of handles is needed any more. The form doesn't even need to be shown. I will do some more testing on this, but if it looks robust enough, the change will be checked in to NAudio.
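I haven't finalised the code yet, but the replacement would be something along these lines - the same message dispatch as the NativeWindow version, just hosted in an invisible Form of its own (a sketch only, abbreviated to the Done message):

private class WaveCallbackForm : System.Windows.Forms.Form
{
    private WaveInterop.WaveOutCallback waveOutCallback;

    public WaveCallbackForm(WaveInterop.WaveOutCallback waveOutCallback)
    {
        // the form is never shown; it exists purely to own a window handle
        // for waveOutOpen to post its CALLBACK_WINDOW messages to
        this.waveOutCallback = waveOutCallback;
    }

    protected override void WndProc(ref System.Windows.Forms.Message m)
    {
        if (m.Msg == (int)WaveInterop.WaveOutMessage.Done)
        {
            WaveHeader waveHeader = new WaveHeader();
            Marshal.PtrToStructure(m.LParam, waveHeader);
            waveOutCallback(m.WParam, WaveInterop.WaveOutMessage.Done, 0, waveHeader, 0);
        }
        else
        {
            base.WndProc(ref m);
        }
    }
}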

Of course the CALLBACK_FUNCTION method continues to work fine, and there are no known issues with the current CALLBACK_WINDOW implementation so long as you don't create a new device based on the same window handle immediately after closing the last. There will be an additional benefit to this change - the need for window handles to be passed to the WaveOut constructor will be completely removed.

Saturday 8 November 2008

Model View View-Model (MVVM) in Silverlight

I watched an excellent screencast by Jason Dolinger recently, showing how to implement the Model View View-Model pattern in WPF. A lot of the documentation on the MVVM pattern seems unnecessarily complicated, but Jason's demonstration explains it very clearly.

The basic idea of MVVM is that we would like to use data binding to connect our View to our Model, but data binding is often tricky because our model doesn't have the right properties. So we create a View Model that is perfectly set up to have just the right properties for our View to bind to, and gets its actual data from the Model.

To try the technique out I decided to refactor a small piece of a Silverlight game I have written, called SilverNibbles, which is a port of the old QBasic Nibbles game, keeping the graphics fairly similar. At the top of the screen there is a scoreboard, whose role it is to keep track of the scores, number of lives, level, speed etc. This is the piece I will attempt to modify to use MVVM.

SilverNibbles Scoreboard

Let's have a look at the original code-behind for the Scoreboard user control. As you can see, there is not a lot going on here. Whenever a property on the Scoreboard is set, the appropriate changes are made to the graphical components. This is very similar to the typical code that would be written for a Windows Forms user control.

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace SilverNibbles
{
    public partial class Scoreboard : UserControl
    {
        private int players;
        private int level;
        private int speed;
        private int jakeScore;
        private int sammyScore;

        public Scoreboard()
        {
            InitializeComponent();
        }

        public int Players
        {
            get 
            {
                return players; 
            }
            set 
            {
                players = value;
                jakeLives.Visibility = players == 2 ? Visibility.Visible : Visibility.Collapsed;
                jakeScoreLabel.Visibility = players == 2 ? Visibility.Visible : Visibility.Collapsed;
                jakeScoreTextBlock.Visibility = players == 2 ? Visibility.Visible : Visibility.Collapsed;
            }
        }

        public int Level
        {
            get
            {
                return level;
            }
            set
            {
                level = value;
                levelTextBlock.Text = value.ToString();
            }
        }

        public int Speed
        {
            get
            {
                return speed;
            }
            set
            {
                speed = value;
                speedTextBlock.Text = value.ToString();
            }
        }

        public int SammyScore
        {
            get
            {
                return sammyScore;
            }
            set
            {
                sammyScore = value;
                sammyScoreTextBlock.Text = value.ToString();
            }
        }

        public int JakeScore
        {
            get
            {
                return jakeScore;
            }
            set
            {
                jakeScore = value;
                jakeScoreTextBlock.Text = value.ToString();
            }
        }

        public int JakeLives
        {
            get
            {
                return jakeLives.Lives;
            }
            set
            {
                jakeLives.Lives = value;
            }
        }

        public int SammyLives
        {
            get
            {
                return sammyLives.Lives;
            }
            set
            {
                sammyLives.Lives = value;
            }
        }
    }
}

Instead of having all these properties, the Scoreboard will become a very simple view. Now the code-behind of Scoreboard is completely minimalised:

namespace SilverNibbles
{
    public partial class Scoreboard : UserControl
    {
        public Scoreboard()
        {
            InitializeComponent();
        }
    }
}

Now it is the job of whoever creates Scoreboard to give it a ScoreboardViewModel as its DataContext. It doesn't even have to be a ScoreboardViewModel, so long as it has the appropriate properties. This means that a graphic designer could use an XML file instead to create dummy data to help while designing the appearance. Here's my code elsewhere in the project that sets up the data context of the Scoreboard with its View Model.

scoreData = new ScoreboardViewModel();
scoreboard.DataContext = scoreData;

Now I simply modify the scoreData object's properties, and the Scoreboard will update its view automatically.

What has changed in the Scoreboard user control's XAML? Well first, it has Binding statements for every property that gets its value from the view model. I was also able to remove all the x:Name attributes from the markup, which is a sign that MVVM has been done right. I also needed to make my Lives property on my LivesControl user control into a dependency property to allow it to accept the binding syntax as a value.
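Making Lives a dependency property boils down to something roughly like this in LivesControl (UpdateLives is my own invented name for wherever the control redraws itself):

public static readonly DependencyProperty LivesProperty =
    DependencyProperty.Register("Lives", typeof(int), typeof(LivesControl),
        new PropertyMetadata(0, OnLivesChanged));

public int Lives
{
    get { return (int)GetValue(LivesProperty); }
    set { SetValue(LivesProperty, value); }
}

private static void OnLivesChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e)
{
    // redraw the lives indicators whenever the bound value changes
    ((LivesControl)sender).UpdateLives((int)e.NewValue);
}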

<UserControl x:Class="SilverNibbles.Scoreboard"
    xmlns="http://schemas.microsoft.com/client/2007" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
    xmlns:sn="clr-namespace:SilverNibbles"
     >
    <Grid x:Name="LayoutRoot" Background="LightYellow" ShowGridLines="False">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="20*" />
            <ColumnDefinition Width="35*" />
            <ColumnDefinition Width="25*" />
            <ColumnDefinition Width="10*" />
        </Grid.ColumnDefinitions>
        
        <!-- row 0 -->
        <TextBlock 
            Grid.Row="0" 
            Grid.Column="0"
            Text="High Score" 
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        <TextBlock 
            Grid.Row="0" 
            Grid.Column="1"
            Margin="5,0,5,0"
            Text="{Binding Record}" 
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        
        <TextBlock 
            Grid.Column="2" 
            Text="Level" 
            HorizontalAlignment="Right" 
            Margin="5,0,5,0"
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        <TextBlock 
            Grid.Column="3" 
            Margin="5,0,5,0"
            Text="{Binding Level}" 
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
                
        <!-- row 1 -->        
        <TextBlock 
            Grid.Row="1"
            Text="Sammy" 
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        <sn:LivesControl
            Grid.Row="1"
            Grid.Column="1"
            HorizontalAlignment="Right"
            Fill="{StaticResource SammyBrush}"
            Lives="{Binding SammyLives}" />
        <TextBlock
            Grid.Row="1"
            Grid.Column="1"
            Margin="5,0,5,0"
            Text="{Binding SammyScore}"
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />

        <TextBlock
            Grid.Row="1"
            Grid.Column="2"
            Text="Speed"
            HorizontalAlignment="Right"
            Margin="5,0,5,0"
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        <TextBlock
            Grid.Row="1"
            Grid.Column="3"
            Margin="5,0,5,0"
            Text="{Binding Speed}"
            FontSize="18"
            FontFamily="teen bold.ttf#Teen" />
        
        <!-- row 2 -->
        <TextBlock 
            Grid.Row="2"
            Grid.Column="0"
            Text="Jake" 
            FontSize="18"
            FontFamily="teen bold.ttf#Teen"
            Visibility="{Binding JakeVisible}"            
            />
        <sn:LivesControl
            Grid.Row="2"
            Grid.Column="1"
            HorizontalAlignment="Right"
            Fill="{StaticResource JakeBrush}"
            Visibility="{Binding JakeVisible}"
            Lives="{Binding JakeLives}" />
        <TextBlock
            Grid.Row="2"
            Grid.Column="1"
            FontSize="18"
            Margin="5,0,5,0"
            Text="{Binding JakeScore}"
            FontFamily="teen bold.ttf#Teen"
            Visibility="{Binding JakeVisible}" />
    </Grid>
</UserControl>

Here's what my ScoreboardViewModel looks like. The main thing to notice is that I have implemented the INotifyPropertyChanged interface. I rather lazily raise the changed event every time the setter is called irrespective of whether the value really changed. Notice also I have created a JakeVisible property. This highlights how the View Model can be used to create properties that are exactly what the View needs.

using System.ComponentModel;
using System.Windows;

namespace SilverNibbles
{
    public class ScoreboardViewModel : INotifyPropertyChanged
    {
        private int players;
        private int level;
        private int speed;
        private int jakeScore;
        private int sammyScore;

        public int Players
        {
            get
            {
                return players;
            }
            set
            {
                players = value;
                RaisePropertyChanged("Players");
                RaisePropertyChanged("JakeVisible");
            }
        }

        public Visibility JakeVisible
        {
            get
            {
                return players == 2 ? Visibility.Visible : Visibility.Collapsed;
            }
        }

        public int Level
        {
            get
            {
                return level;
            }
            set
            {
                level = value;
                RaisePropertyChanged("Level");
            }
        }

        public int Speed
        {
            get
            {
                return speed;
            }
            set
            {
                speed = value;
                RaisePropertyChanged("Speed");
            }
        }

        public int SammyScore
        {
            get
            {
                return sammyScore;
            }
            set
            {
                sammyScore = value;
                RaisePropertyChanged("SammyScore");
            }
        }

        public int JakeScore
        {
            get
            {
                return jakeScore;
            }
            set
            {
                jakeScore = value;
                RaisePropertyChanged("JakeScore");
            }
        }

        int jakeLives;
        int sammyLives;

        public int JakeLives
        {
            get
            {
                return jakeLives;
            }
            set
            {
                jakeLives = value;
                RaisePropertyChanged("JakeLives");
            }
        }

        public int SammyLives
        {
            get
            {
                return sammyLives;
            }
            set
            {
                sammyLives = value;
                RaisePropertyChanged("SammyLives");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void RaisePropertyChanged(string property)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(property));
            }
        }
    }
}

I think there are alternatives to using INotifyPropertyChanged, such as creating dependency properties or inheriting from DependencyObject. I don't know what the advantages or disadvantages of that approach would be. That is something for a future investigation.

There are lots of other parts of the SilverNibbles application that could be refactored to use this pattern, but that is also a task for another day. View the code at CodePlex, and play SilverNibbles here.

Saturday 1 November 2008

Using NAudio to Replace the BabySmash Audio Stack

After adding MIDI in support to BabySmash, the next obvious step was to replace the audio playback mechanism with NAudio too, which would allow better mixing of sounds, and cool things like controlling the volume and panning of individual sounds. It also gives me a chance to write some more example documentation for NAudio.

To be able to play from one of the embedded WAV files, we need to create a WaveStream derived class that can read from an embedded resource, and convert the audio into a common format ready for mixing. One problem with BabySmash is that the current embedded audio files represent a whole smorgasbord of formats:

  babygigl2.wav, scooby2.wav               MP3 11025Hz mono
  babylaugh.wav, ccgiggle.wav, giggle.wav  PCM 11kHz mono 8 bit
  EditedJackPlaysBabySmash.wav             PCM 22kHz mono 16 bit
  falling.wav, rising.wav                  PCM 8kHz mono 16 bit
  laughingmice.wav                         PCM 11127Hz! mono 8 bit
  smallbumblebee                           PCM 22kHz stereo 16 bit

So I created WavResourceStream whose job it was to take an embedded resource and output a 32bit IEEE floating point stereo stream at 44.1kHz. I could equally have chosen 22kHz, which would reduce the amount of data that needs to be passed around. The choice of floating point audio is important for the mixing phase, as it gives us plenty of headroom.

The constructor is the most interesting part of this class. It takes the resource name, and uses a WaveFileReader to read from that resource (n.b. the constructor for WaveFileReader that takes a Stream has only recently been checked in to NAudio; there were other ways of doing this in the past, but in the latest code I am trying to clean things up a bit).

The next step is to convert to PCM if it is not already (there are two files whose audio is actually MP3, even though they are contained within a WAV file). The WaveFormatConversionStream looks for an ACM codec that can perform the requested format conversion. The BlockAlignReductionStream helps us ensure we call the ACM functions with sensible buffer sizes.

The step afterwards is to get ourselves to 44.1kHz. You can't mix audio unless all streams are at the same sample rate. Again we use a BlockAlignReductionStream to help with the buffer sizing. Finally we go into a WaveChannel32 stream, which converts us to a 32 bit floating point stereo stream, and allows us to set volume and pan if required. So the audio graph is already potentially six streams deep. It may seem confusing at first, but once you get the hang of it, chaining WaveStreams together is quite simple.

class WavResourceStream : WaveStream
{
    WaveStream sourceStream;

    public WavResourceStream(string resourceName)
    {
        // get the namespace 
        string strNameSpace = Assembly.GetExecutingAssembly().GetName().Name;

        // get the resource into a stream
        Stream stream = Assembly.GetExecutingAssembly().GetManifestResourceStream(strNameSpace + resourceName);
        sourceStream = new WaveFileReader(stream);
        var format = new WaveFormat(44100, 16, sourceStream.WaveFormat.Channels);
            
        if (sourceStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm)
        {
            sourceStream = WaveFormatConversionStream.CreatePcmStream(sourceStream);
            sourceStream = new BlockAlignReductionStream(sourceStream);
        }
        if (sourceStream.WaveFormat.SampleRate != 44100 ||
            sourceStream.WaveFormat.BitsPerSample != 16)
        {
            sourceStream = new WaveFormatConversionStream(format, sourceStream);
            sourceStream = new BlockAlignReductionStream(sourceStream);
        }
        
        sourceStream = new WaveChannel32(sourceStream);            
    }

The rest of the WavResourceStream is simply implementing the WaveStream abstract class members by calling into the source stream we constructed:

    public override WaveFormat WaveFormat
    {
        get { return sourceStream.WaveFormat; }
    }

    public override long Length
    {
        get { return sourceStream.Length; }
    }

    public override long Position
    {
        get { return sourceStream.Position; }
        set { sourceStream.Position = value; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return sourceStream.Read(buffer, offset, count);
    }

    protected override void Dispose(bool disposing)
    {
        if (sourceStream != null)
        {
            sourceStream.Dispose();
            sourceStream = null;
        }
        base.Dispose(disposing);
    }
}

Now we need to create a mixer that can take multiple WavResourceStreams and mix their contents together into a single stream. NAudio includes the WaveMixer32Stream, but it is designed more for sequencer use, where you want exact sample level control over the positioning of the source streams, and also can reposition itself. BabySmash's needs are simpler - we simply want to play sounds once through. So I created a simpler MixerStream, which may find its way in modified form into NAudio in the future.

The first part of the code simply allows us to add inputs to the mixer. Our mixer doesn't really care what the sample rate is as long as all inputs have the same sample rate. The PlayResource function simply creates one of our WavResourceStream instances and adds it to our list of inputs. Currently I have no upper limit on how many inputs can be added, but it would make sense to limit it in some way (probably by throwing away some existing inputs rather than failing to play new ones).

public class MixerStream : WaveStream
{
    private List<WaveStream> inputStreams;
    private WaveFormat waveFormat;
    private int bytesPerSample;

    public MixerStream()
    {
        this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
        this.bytesPerSample = 4;
        this.inputStreams = new List<WaveStream>();
    }

    public void PlayResource(string resourceName)
    {
        WaveStream stream = new WavResourceStream(resourceName);
        AddInputStream(stream);
    }

    public void AddInputStream(WaveStream waveStream)
    {
        if (waveStream.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
            throw new ArgumentException("Must be IEEE floating point", "waveStream.WaveFormat");
        if (waveStream.WaveFormat.BitsPerSample != 32)
            throw new ArgumentException("Only 32 bit audio currently supported", "waveStream.WaveFormat");

        if (inputStreams.Count == 0)
        {
            // first one - set the format
            int sampleRate = waveStream.WaveFormat.SampleRate;
            int channels = waveStream.WaveFormat.Channels;
            this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channels);
        }
        else
        {
            if (!waveStream.WaveFormat.Equals(waveFormat))
                throw new ArgumentException("All incoming channels must have the same format", "inputStreams.WaveFormat");
        }

        lock (this)
        {
            this.inputStreams.Add(waveStream);
        }
    }

The real work of MixerStream is done in the Read method. Here we loop through all the inputs and mix them together. We also detect when an input stream has finished playing and dispose it and remove it from our list of inputs. The mixing is performed using unsafe code so you need to set the unsafe flag on the project to get it to compile. One important note is that we always return a full empty buffer even if we have no inputs, because our IWavePlayer expects full reads if it is to keep going.

public override int Read(byte[] buffer, int offset, int count)
{
    if (count % bytesPerSample != 0)
        throw new ArgumentException("Must read an whole number of samples", "count");            

    // blank the buffer
    Array.Clear(buffer, offset, count);
    int bytesRead = 0;

    // sum the channels in
    byte[] readBuffer = new byte[count];
    lock (this)
    {
        for (int index = 0; index < inputStreams.Count; index++)
        {
            WaveStream inputStream = inputStreams[index];

            int readFromThisStream = inputStream.Read(readBuffer, 0, count);
            System.Diagnostics.Debug.Assert(readFromThisStream == count, "A mixer input stream did not provide the requested amount of data");
            bytesRead = Math.Max(bytesRead, readFromThisStream);
            if (readFromThisStream > 0)
            {
                Sum32BitAudio(buffer, offset, readBuffer, readFromThisStream);
            }
            else
            {
                inputStream.Dispose();
                inputStreams.RemoveAt(index);
                index--;
            }
        }
    }
    return count;
}

static unsafe void Sum32BitAudio(byte[] destBuffer, int offset, byte[] sourceBuffer, int bytesRead)
{
    fixed (byte* pDestBuffer = &destBuffer[offset],
              pSourceBuffer = &sourceBuffer[0])
    {
        float* pfDestBuffer = (float*)pDestBuffer;
        float* pfReadBuffer = (float*)pSourceBuffer;
        int samplesRead = bytesRead / 4;
        for (int n = 0; n < samplesRead; n++)
        {
            pfDestBuffer[n] += pfReadBuffer[n];
        }
    }
}

The remaining functions of the MixerStream are quite simple. We don't need to report position or length as BabySmash simply plays a continuous stream.

public override long Length
{
    get { return 0; }
}

public override long Position
{
    get { return 0; }
    set 
    {
        throw new NotImplementedException("This mixer is not repositionable");
    }
}

public override WaveFormat WaveFormat
{
    get { return waveFormat; }
}

protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        if (inputStreams != null)
        {
            foreach (WaveStream inputStream in inputStreams)
            {
                inputStream.Dispose();
            }
            inputStreams = null;
        }
    }
    else
    {
        System.Diagnostics.Debug.Assert(false, "WaveMixerStream32 was not disposed");
    }
    base.Dispose(disposing);
}

Finally we are ready to set up BabySmash to play its audio using MixerStream. We add two new members to the Controller class:

private MixerStream mainOutputStream;
private IWavePlayer wavePlayer;

And then in the Controller.Launch method, we create a new MixerStream, and use WaveOut to play it. We have chosen a read-ahead of 300 milliseconds which should mean that we don't get stuttering on a reasonably powerful modern PC. We need to pass the window handle to WaveOut as I have found that some laptop chipsets (most notably SoundMAX) have issues with running managed code in their callback functions. We don't have to use WaveOut if we don't want to. WASAPI works as well if you have Vista (although I found it was stuttering a bit for me).

IntPtr windowHandle = new WindowInteropHelper(Application.Current.MainWindow).Handle;
wavePlayer = new WaveOut(0, 300, windowHandle);
//wavePlayer = new WasapiOut(AudioClientShareMode.Shared, 300);
mainOutputStream = new MixerStream();
wavePlayer.Init(mainOutputStream);
wavePlayer.Play();

The call to wavePlayer.Play will mean we start making calls into the Read method of the MixerStream, but initially it will just be silence. When we are ready to make a sound, simply call the MixerStream.PlayResource method instead of the PlayWavResourceYield method:

//audio.PlayWavResourceYield(".Resources.Sounds." + "rising.wav");
mainOutputStream.PlayResource(".Resources.Sounds." + "rising.wav");

So what does it sound like? Well I haven't tested it too much, but it certainly is possible to have a large number of simultaneous laughs on my PC. "Cacophony" would sum it up pretty well. The next step of course would be to implement the "play notes from songs on keypress" feature request. Another obvious enhancement would be to cache the converted audio so that we don't need to continually pass the same files through ACM again and again.

Friday 31 October 2008

Rethinking Syncing

I have been doing a lot of synchronization work recently, trying to get multiple video and audio streams to play back together as well synchronized as possible. This is not a trivial task, especially if those streams are only partially downloaded and the user can reposition anywhere they like during playback.

In the past I have tended to use locks for synchronization. So you have a renderer thread, that is writing video frames or audio samples to an output device, and when it needs to read some more data, it takes a lock, reads what it needs and releases that lock. Meanwhile, over in another thread, quite probably controlled by a slider of some description on the GUI, the user makes repositioning requests. When one of these requests comes in, the various audio and video streams take a lock on the same object that the renderer wants, reposition their streams to the new position, and then release the lock. Hey presto, the renderer will now read from the correct place on its next read call and is in no danger of reading corrupt data packets because we are midway through repositioning.

However, I am beginning to think that there is a better approach. When the user attempts to reposition the playback cursor, this simply posts a repositioning request into a queue. Then, when the renderer threads want to read more data, they first check to see if a reposition request needs to be serviced. I think this approach will perform just as well as the first (if not better, because we can throw away repositioning requests if they are not serviced quickly enough). It also means that locks do not need to be held for significant periods of time (just a quick one to govern the read/write of the reposition queue). I am thinking I might trial this new sync model in a future version of NAudio. Let me know in the comments if you think this is a better approach or not.
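A minimal sketch of the idea (the names are mine, not NAudio's): the GUI thread posts requests, of which only the most recent matters, and the renderer drains the queue at the top of each read.

class RepositionRequestQueue
{
    private readonly object queueLock = new object();
    private long? pendingPosition; // only the most recent request is kept

    // called from the GUI thread (e.g. a slider)
    public void RequestReposition(long position)
    {
        lock (queueLock)
        {
            // unserviced older requests are simply overwritten
            pendingPosition = position;
        }
    }

    // called by the renderer thread before each read
    public bool TryGetRequest(out long position)
    {
        lock (queueLock)
        {
            if (pendingPosition.HasValue)
            {
                position = pendingPosition.Value;
                pendingPosition = null;
                return true;
            }
            position = 0;
            return false;
        }
    }
}

The renderer's read loop then begins with something like if (requests.TryGetRequest(out newPosition)) { RepositionStreams(newPosition); } before pulling the next block of data, so the lock is only ever held for the duration of a field assignment.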

There are a couple of interesting implications of this approach that need to be considered. First, when you ask a stream to reposition, you won't find out immediately whether the reposition was successful; if you query the current position just after setting it, the value will not yet be up to date. Second, if the renderer thread is the one processing the repositioning requests, then when playback is stopped, repositioning requests may not get serviced at all. It very much depends on your application as to whether this is a problem or not.

Thursday 30 October 2008

MIDI In for BabySmash

I watched Scott Hanselman's presentation at the PDC on BabySmash this afternoon, and was very impressed both with the quality of his presentation skills as well as the sheer coolness of all the new technologies he was using.

As I watched it, I thought to myself how easy it would be to add MIDI in to this application using the NAudio open source .NET audio toolkit I have written. I downloaded the BabySmash source code from CodePlex and it only took about 10 minutes for me to add MIDI in support.

The first thing to do was to listen for MIDI in events on the default device. Thanks to NAudio, this is just a couple of lines of code, called after InitializeComponent.

private void StartMonitoringMidi()
{
    midiIn = new MidiIn(0); // default device
    midiIn.MessageReceived += midiIn_MessageReceived;
    midiIn.Start();
}

Next we needed to handle the MIDI in messages. I only care about NoteOn messages, and need to switch to the dispatcher thread before interacting with the GUI:

void midiIn_MessageReceived(object sender, MidiInMessageEventArgs e)
{
    if (this.Dispatcher.Thread == Thread.CurrentThread)
    {
        NoteOnEvent noteOnEvent = e.MidiEvent as NoteOnEvent;
        if (noteOnEvent != null)
        {
            controller.ProcessMidiNoteOn(this, noteOnEvent);
        }
    }
    else
    {
        Dispatcher.BeginInvoke(new EventHandler<MidiInMessageEventArgs>
            (midiIn_MessageReceived), sender, e);
    }
}

The final step is for the new Controller method to somehow convert the note on event into something to be displayed. Obviously we could have lots of fun here, but for this initial demo I wanted to keep it really simple.

public void ProcessMidiNoteOn(FrameworkElement uie, NoteOnEvent noteOn)
{
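    // notes 30 to 55 map onto the capital letters 'A' to 'Z'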
    AddFigure(uie, new string((char)('A' + noteOn.NoteNumber - 30), 1));
}

And that's all there is to it. Less than 20 lines of code. Here's a brief video of me demonstrating it:

Some notes:

  • The existing audio playback in BabySmash seems problematic and caused hanging on my laptop. I might see if I can replace it with the audio playback mechanism from NAudio. This would allow me to put some drum / musical sounds in and manage the number of concurrent sounds.
  • I didn't change the graphics at all. I had hoped that the MEF plugin architecture demoed in the PDC talk would be there for me to create my own "SmashPack", but it looks like that code hasn't made it onto CodePlex just yet. One advantage of MIDI in is that each input has a velocity (how hard you smashed it) as well as what note you hit. I would like to try out using colour or initial size to indicate velocity.
  • BabySmash seemed to cope very well with the speed of new notes coming at it. It probably would feel a bit more responsive if the shapes appeared instantaneously rather than animating in.
  • Apologies for the very poor sound quality on the video. I sound like I am speaking with a lisp, and the piano sounds dreadful. As for the drums, well the less said the better. I am not a drummer and there is something badly wrong when you can hear the sticks louder than the drum sounds themselves.
  • Earlier this week two keys on my piano actually got "smashed" for real by one of my children, which means I have an expensive repair bill coming my way. There's no way I'm going to let any of them have a go at smashing on my laptop!
  • Also apologies for the difficulty of seeing what's on the screen in the video. My laptop screen viewing angles aren't that great (and my cheap camera isn't exactly the greatest either, having suffered yet another "baby smash" incident of its own).

Sunday 26 October 2008

NAudio and the PDC

Every time a new version of the .NET framework is due to be announced, I always get my hopes up that this will be the year that we are finally given a low-level audio API for .NET. Looking at the sessions for this year's PDC, it sadly seems that audio has been overlooked once more. In fact, the only audio-related session is one from Larry Osterman, which seems like it will announce some new Windows 7 APIs for writing apps with real-time audio communications (presumably chat / phone apps).

But there is one piece of news that might prove very useful for NAudio, which is that there are some upcoming changes in the .NET type system that promise to make COM interop a lot easier. I'm not getting my hopes up too high, but anything that makes COM and P/Invoke code easier in .NET would be a huge benefit to NAudio. I have been looking at adding a managed wrapper around the Windows Media Format SDK, and the sheer amount of interop code required is daunting. Anyway, I'll be watching the PDC sessions with interest this week, and will post back here if there turns out to be anything relevant to NAudio.

Monday 20 October 2008

Volume Metering and Audio Waveform Display in NAudio

I spent a couple of hours this evening adding two features to NAudio that I have been meaning to add for a long time. They are two Windows Forms controls, one to display volume levels, and the other to display audio waveforms. They are still in a very raw state, and I will probably make some enhancements to their appearance as well as how they calculate their volume levels in the future, but they work pretty well for basic visualisation purposes. I'm also looking forward to creating their equivalents in WPF.

winforms-waveform

I've made a very short screencast to show them in action. Watch it here: http://screencast.com/t/m13tSFGAG
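On the subject of how the levels are calculated: here's a sketch of one simple approach (take the peak value of each block of 16-bit PCM samples) - not necessarily exactly what the controls will end up doing:

// returns the peak level (0.0 to 1.0) of a buffer of 16-bit PCM samples
private static float GetPeakLevel(byte[] buffer, int bytesRead)
{
    float max = 0f;
    for (int n = 0; n + 1 < bytesRead; n += 2)
    {
        float sample = BitConverter.ToInt16(buffer, n) / 32768f;
        if (sample < 0) sample = -sample;
        if (sample > max) max = sample;
    }
    return max;
}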

Find Orphaned Source Files Using LINQ

On a project I am working on, there is a growing number of files that are in source control but are not actually referenced by any .csproj files. I decided to write a quick and dirty command line program to find these files, and at the same time learn a bit of LINQ to XML.

During the course of my development, I ran into a couple of tricky issues. First was how to combine some foreach loops into a LINQ statement, and second was how to construct the regex for source file matching. I could probably have solved both myself with a bit of time reading books, but I decided to throw them out onto Stack Overflow. Both were answered within a couple of minutes of asking. I have to say this site is incredible, and rather than treating it as a last resort for questions I have exhausted my own resources on, I am now thinking of it more like a super-knowledgeable co-worker who you can ask a quick question and get a pointer in the right direction.

Here's the final code. I'm sure it could easily be turned into one nested LINQ query and improved on a little, but it does what I need. Feel free to suggest refactorings and enhancements in the comments.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using System.Xml.Linq;

namespace SolutionChecker
{
    public class Program
    {
        public const string SourceFilePattern = @"(?<!\.g)\.cs$"; // .cs files, but not generated .g.cs files

        static void Main(string[] args)
        {
            string path = (args.Length > 0) ? args[0] : GetWorkingFolder();
            Regex regex = new Regex(SourceFilePattern);
            var allSourceFiles = from file in Directory.GetFiles(path, "*.cs", SearchOption.AllDirectories)
                                 where regex.IsMatch(file)
                                 select file;
            var projects = Directory.GetFiles(path, "*.csproj", SearchOption.AllDirectories);
            var activeSourceFiles = FindCSharpFiles(projects);
            var orphans = from sourceFile in allSourceFiles                          
                          where !activeSourceFiles.Contains(sourceFile)
                          select sourceFile;
            int count = 0;
            foreach (var orphan in orphans)
            {
                Console.WriteLine(orphan);
                count++;
            }
            Console.WriteLine("Found {0} orphans",count);
        }

        static string GetWorkingFolder()
        {
            return Path.GetDirectoryName(typeof(Program).Assembly.CodeBase.Replace("file:///", String.Empty));
        }

        static IEnumerable<string> FindCSharpFiles(IEnumerable<string> projectPaths)
        {
            string xmlNamespace = "{http://schemas.microsoft.com/developer/msbuild/2003}";
            
            return from projectPath in projectPaths
                   let xml = XDocument.Load(projectPath)
                   let dir = Path.GetDirectoryName(projectPath)
                   from c in xml.Descendants(xmlNamespace + "Compile")
                   let inc = c.Attribute("Include").Value
                   where inc.EndsWith(".cs")
                   select Path.Combine(dir, inc);
        }
    }
}
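To run it, pass the root folder to scan as the first argument; with no arguments it scans the folder the executable lives in. For example (path obviously just an illustration): SolutionChecker.exe C:\code\MyBigSolution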

Tuesday 14 October 2008

Styling a ListBox in Silverlight 2

Back in April, I started a series of posts here on how to completely customise the style of a ListBox in Silverlight 2 beta 1, including the scroll-bar styles. That was followed by a rather painful update to Silverlight 2 beta 2 (see Part 1, Part 2, Part 3). Now that Silverlight 2 has officially shipped, the code required one last update. Thankfully the changes were not so great this time. A few deprecated properties needed to be removed from the XAML, and some Duration attributes needed to be changed to GeneratedDuration, but that was about it.

There is some good news. From initial inspection, it looks like the scroll-bar bug, where you saw a stretched horizontal scroll-bar behind your vertical scroll-bar, has finally been fixed. And first impressions of the XAML preview in Visual Studio are good - you can actually see what it looks like without having to run the project.

What it hasn't fixed is my appalling colour selections and graphical design skills. I might take the time to work a bit more on the appearance of this thing now that I can be sure the syntax isn't going to change yet again.

Fully Styled Listbox in Silverlight 2

I've uploaded my example project which can be opened in Visual Studio 2008 (with Silverlight Tools) or in Expression Blend 2 SP1. Download it here.

Friday 10 October 2008

Refactoring to Reduce Method Complexity

I gave a seminar to the developers at my work this week on the subject of how we can keep the overall complexity of our projects down by refactoring methods. Here are some of my key points...

Measure the Complexity of Your Source Code

I have been experimenting recently with the excellent freeware application SourceMonitor. This can scan your source code folder (it supports several languages including C#, C++, VB.NET and Java), and give you a report that helps you quickly identify classes with the most methods, methods with the most lines of code, methods with the deepest level of indentation, and methods with the greatest "complexity" rating (which I believe is a cyclomatic complexity measure - roughly, how many possible routes there are through the code). You will be surprised at how good this simple tool is at identifying the areas of your code that have the most problems.

SourceMonitor output

Measuring complexity allows you to identify whether new code you have written is over-complicated. It also alerts you to the fact that you may have increased the complexity of the existing code you have modified. If developers make it a goal to leave the code they work on less complex than it was when they started, gradually an overcomplicated codebase can be brought under control.

Complexity Grows Gradually

A competent developer writing new code knows not to create functions that are hugely long and convoluted. But complexity sneaks in gradually, most commonly as bug fixes are made or special cases come up. Often these 5-10 line additions are considered so trivial that the method in question does not need to be refactored. But before long, you have gigantic methods that cause headaches for anyone who needs to debug, modify or reuse them.

Consider this simple line of code:

string filename = Path.Combine(folderBrowserDialog.SelectedPath,
    String.Format("RecordedItem {0}-{1}-{2}-{3}-{4}-{5} Channel {6}.xml", 
        recordedItem.StartTime.Year,
        recordedItem.StartTime.Month,
        recordedItem.StartTime.Day,
        recordedItem.StartTime.Hour,
        recordedItem.StartTime.Minute,
        recordedItem.StartTime.Second,
        recordedItem.Channel));

All it is doing is creating a filename. Many developers would not consider putting this into a function of its own (even though it is taking up 9 lines of vertical space). What happens next is that a bug report is filed, and suddenly the code to create a filename gets even longer...

string filename = Path.Combine(folderBrowserDialog.SelectedPath,
    String.Format("RecordedItem {0}-{1}-{2}-{3}-{4}-{5} Channel {6}.xml", 
        recordedItem.StartTime.Year,
        recordedItem.StartTime.Month,
        recordedItem.StartTime.Day,
        recordedItem.StartTime.Hour,
        recordedItem.StartTime.Minute,
        recordedItem.StartTime.Second,
        recordedItem.Channel));
int attempts = 0;
while (File.Exists(filename))
{
    attempts++;
    if (attempts > 20)
    {
        log.Error("Could not create a filename for {0}", recordedItem);
        throw new InvalidOperationException("Error creating filename");
    }
    filename = Path.Combine(folderBrowserDialog.SelectedPath,
        String.Format("RecordedItem {0}-{1}-{2}-{3}-{4}-{5} Channel {6} {7}.xml", 
            recordedItem.StartTime.Year,
            recordedItem.StartTime.Month,
            recordedItem.StartTime.Day,
            recordedItem.StartTime.Hour,
            recordedItem.StartTime.Minute,
            recordedItem.StartTime.Second,
            recordedItem.Channel,
            attempts));
}

Now we have thirty lines of code all devoted to the relatively simple task of creating a unique filename. Most developers who will work on our code do not need or want to know the details. This code therefore is an ideal candidate for breaking out into a single method:

string filename = GetFilenameForRecordedItem(folderBrowserDialog.SelectedPath, recordedItem);

Now developers can skip over this code and only look within the function if they actually need to know how the filename is generated.
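For completeness, here's roughly what the extracted method might look like - just one possible implementation (the date format string produces the same output as the String.Format calls above):

private string GetFilenameForRecordedItem(string folder, RecordedItem recordedItem)
{
    for (int attempt = 0; attempt <= 20; attempt++)
    {
        // first try has no suffix; collisions get " 1", " 2", etc.
        string suffix = (attempt == 0) ? String.Empty : " " + attempt;
        string filename = Path.Combine(folder,
            String.Format("RecordedItem {0:yyyy-M-d-H-m-s} Channel {1}{2}.xml",
                recordedItem.StartTime, recordedItem.Channel, suffix));
        if (!File.Exists(filename))
        {
            return filename;
        }
    }
    throw new InvalidOperationException("Error creating filename");
}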

Break Long Methods into Individual Tasks

The most common reason for a method being very long, or deeply nested is simply that it is performing more than one task. If a method doesn't fit onto one screen in an IDE, the chances are it does too much.

Methods are not just for code that needs to be reused. It is fine to have private methods that only have one caller. If creating a method makes overall comprehension easier, then that is reason enough to create it.

Let's take another simple example. It is common to see functions that have a comment followed by several lines of code, and then another comment, and then more code, as follows:

private void buttonSearch_Click(object sender, EventArgs args)
{
    // validate the user input
    ... several lines of code
    // do the search
    ... several lines of code
}

When presented like this, it is pretty obvious that the search button click handler performs two discrete tasks, one after the other. The code can therefore be made much simpler to read if it is refactored into...

private void buttonSearch_Click(object sender, EventArgs e)
{
    if (IsUserInputValid())
    {
        PerformSearch();
    }
}

Now a developer can quickly see what is going on, and can easily choose which of the two functions to delve deeper into, depending on what part of the functionality needs to be changed. It is also interesting to note that we don't need the comments any more. Comments are added when the code isn't self-explanatory. So clear code requires fewer comments.

Refactor Similar Code, Not Just Duplicated Code

Most developers know not to reuse code by cutting and pasting large blocks of it. If exactly the same code is needed in two places, they will move it into a common function. But often the code isn't an exact duplicate; it is simply similar.

Consider this example:

XmlNode childNode = xmlDoc.CreateElement("StartTime");
XmlText textNode = xmlDoc.CreateTextNode(recordedItem.StartTime.ToString());
childNode.AppendChild(textNode);
recItemNode.AppendChild(childNode);

childNode = xmlDoc.CreateElement("Duration");
textNode = xmlDoc.CreateTextNode(recordedItem.Duration.ToString());
childNode.AppendChild(textNode);
recItemNode.AppendChild(childNode);

Here we have two very similar blocks of code, but they are not identical. This should not be seen as a barrier to reuse. Without too much effort this near-duplicate code can be extracted into a helper method. The end result is less complexity, and the intent is much easier to see:

AddTextNode(recItemNode,"StartTime", recordedItem.StartTime.ToString());
AddTextNode(recItemNode,"Duration", recordedItem.Duration.ToString());

Extract Reusable Functionality into Static Helper Methods

The AddTextNode method in the previous example could actually be used by any class that needed to add a child node to an XML node. So rather than leaving it buried inside the class that uses it, where the wheel will only be reinvented by the next coder who needs to do a similar task, move it out into a static method in a utility class. In this case, we could create an XmlUtils class. Obviously there is no guarantee that other developers will notice it and use it, but they stand a greater chance of finding it if it resides in a common utilities location rather than hidden away in the code behind of a Windows form.

Use Delegates To Enable Reuse Of Common Surrounding Code

There is one type of cut and paste code that can seem too hard to effectively refactor. Consider the following two functions. They are virtually identical, with the exception that the line of code inside the try block is different.

private void buttonSaveAsXml_Click(object sender, EventArgs args)
{
    FolderBrowserDialog folderBrowserDialog = new FolderBrowserDialog();
    folderBrowserDialog.Description = "Select folder to save search results to";
    if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
    {
        try
        {
            SaveAsXml(folderBrowserDialog.SelectedPath);
        }
        catch (Exception e)
        {
            log.Error(e, "Error exporting to XML");
            MessageBox.Show(this, e.Message, this.Text, MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}

private void buttonSaveAsCsv_Click(object sender, EventArgs args)
{
    FolderBrowserDialog folderBrowserDialog = new FolderBrowserDialog();
    folderBrowserDialog.Description = "Select folder to save search results to";
    if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
    {
        try
        {
            SaveAsCsv(folderBrowserDialog.SelectedPath);
        }
        catch (Exception e)
        {
            log.Error(e, "Error exporting to CSV");
            MessageBox.Show(this, e.Message, this.Text, MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}

The answer is to make use of delegates. We can take all the common code and put it into a SelectSaveFolder function, and pass in, as a delegate, the action to perform once the folder has been selected. I've used the new C# 3 lambda syntax, but ordinary or anonymous delegates work just as well:

delegate void PerformSave(string path);

void SelectSaveFolder(PerformSave performSaveFunction)
{
    FolderBrowserDialog folderBrowserDialog = new FolderBrowserDialog();
    folderBrowserDialog.Description = "Select folder to save search results to";
    if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
    {
        try
        {
            performSaveFunction(folderBrowserDialog.SelectedPath);
        }
        catch (Exception e)
        {
            log.Error(e, "Error exporting");
            MessageBox.Show(this, e.Message, this.Text, MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}

private void buttonSaveAsXml_Click(object sender, EventArgs args)
{
    SelectSaveFolder(path => SaveAsXml(path));
}

private void buttonSaveAsCsv_Click(object sender, EventArgs args)
{
    SelectSaveFolder(path => SaveAsCsv(path));
}

The benefit here is that the two button click handlers, which previously contained a lot of duplicated code, now show very simply what they will do. And that is the point of refactoring your methods: make them show the intent, not the implementation details.

Use IDE Refactoring Support

Finally, remember to use the Visual Studio Refactor | Extract Method command wherever possible. It can't always work out exactly what you would like it to do, but it can save you a lot of time and avoid a lot of silly mistakes if you make a habit of using it. (If you have a tool like ReSharper, then I'm sure that's even better.)

Tuesday 7 October 2008

The Cost of Complexity and Coupling

Five weeks ago I was given a task to add some custom video playback to a large C# application. The video format in question is a proprietary codec wrapped in a custom file format, and can only be played back using a third-party ActiveX control. There was no documentation for the codec, the file format or the ActiveX control. We did, however, have a working application that could play back these videos. All I needed to do was extract the bits we needed as a reusable component.

Sounds simple, right? The problem was that the working application was huge and did far more than simply playing back video. There were dependencies on over 100 .NET assemblies plus a few COM objects thrown in for good measure. The Visual Studio solution itself contained 20 projects. My task was to understand how it all worked, so I could reuse the bits we needed.

Approach 1 - Assembly Reuse

My first approach was to identify the classes that we could reuse and copy their assemblies to my test application and use them as dependencies. This seemed a good idea, until I found that tight coupling resulted in me copying the majority of the 100 dependent assemblies, and required me to construct instances of classes that really were not closely related to what I was trying to do. I spent a long time (probably too long) trying to persevere with this approach until I finally admitted defeat.

Approach 2 - Refactor

My second approach was to take a copy of the source for the entire application and gradually delete all the stuff that wasn't needed. Eventually I would end up with just the bits that were necessary for my needs. This approach was a dead end. First there was far too much to delete, and second, the tight coupling meant that removal of one class would cause all kinds of others to break, with cascading side-effects until I inadvertently broke the core functionality.

Approach 3 - Code Reuse

My third approach was to simply copy the source code of the classes that seemed to be needed over to my test project. That way, I could edit the code and comment out properties or methods that introduced unnecessary dependencies. Even so, the tight coupling still caused a lot of junk to be moved across as well. In the end my test application contained about 200 classes in total. And when I finally finished putting all the pieces together, it didn't work. Four weeks gone, and no closer to my reusable component.

Approach 4 - DIY

My final approach was to go to the lowest level possible. I would use a hex editor and really understand how the file format worked. And I would examine the interface of the ActiveX control and call it directly rather than through several layers of wrapper classes. This meant I would need to re-implement a lot of low-level features myself, including the complex task of synchronization and seeking. The existing application, which I was getting to know quite well by this point, would simply act as a reference. It was another week of work before I got the breakthrough I was looking for. Finally, five weeks after starting, I could display video in a window in my test application.

Technical Debt

Why am I telling this story? I think it demonstrates the concept of technical debt. The working application did what it was supposed to and was bug-free (at least in terms of the feature I was interested in). But its complexity and coupling meant that I spent weeks trying to understand it and found it almost impossible to extract reusable components from it.

And this is exactly the meaning of technical debt. Sure, you can do stuff the "quick way" rather than the right way, and it may allow you to meet project deadlines. But if the application is going to be developed further, sooner or later you will have to pay for your shortcuts. And the way that usually plays out is that a future feature that was supposed to take a couple of weeks to implement ends up taking several months.

What happens next is that the managers will hold an inquest. How on earth did this feature overrun by so much? Was the developer stupid or just lazy? Or is he useless at estimating timescales? And it sounds like passing the buck to blame architectural mistakes made by previous developers - after all, we're programmers; we're supposed to be able to understand complicated things and find a way to get new features in.

How Big is Your Debt?

Technical debt is real, it really costs companies lots of wasted time, and both developers and managers need to take it very seriously. But how can it be measured? If we could find some way of quantifying it, we could then observe whether we are accumulating debt, or slowly but surely paying it off.

For most of my development career there were only two metrics that counted: number of open bugs, and number of support cases raised. But these measure existing problems. Technical debt is a problem waiting to happen. Fortunately, there has been an increase in the number of tools that can put some numbers against the quality of source code and its overall design. I'll list a few of the ones I have tried or want to try here...

I would be very interested to hear of any other suggestions or tools for measuring "technical debt".

Tuesday 30 September 2008

Getting Started With MSBuild

I tried using MSBuild for the first time this week, and took a couple of wrong turnings along the way. Here's a brief summary of how I got started and how to perform a few basic tasks...

What is it?

MSBuild is the build system used by Visual Studio to compile your .NET projects. Check out this nice introduction to MSBuild on CodeProject. The .csproj files you may be familiar with from Visual Studio are in fact nothing more than MSBuild project files.

Why use it?

There are a few reasons why you might want to use MSBuild directly in preference to simply compiling your solution in Visual Studio:

  • You can use it on a dedicated build machine without the need to install Visual Studio.
  • You can use it to batch build several Visual Studio solutions.
  • You can use it to perform additional tasks such as creating installers, archiving, retrieving from source control etc.

It is the third reason that appeals to me, as I wanted to automate the creation of zip files for source code and runtime binaries, as well as run some unit tests.

Creating a project file

MSBuild project files are simply XML files. I created master.proj and put it in the root of my source code folder, alongside my .sln file.

Build Target

The first code I put into my project file was an example I found on the web that performed a Clean followed by a Build. Here is a slightly modified version of the project file:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="3.5" DefaultTargets="Build"  
    xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <DeploymentProject>MyApplication</DeploymentProject>
    <OutputDirectory>$(DeploymentProject)\bin\$(Configuration)</OutputDirectory>
  </PropertyGroup>

  <Target Name="Clean">    
    <RemoveDir Directories="$(OutputDirectory)" 
            Condition="Exists($(OutputDirectory))"></RemoveDir>
  </Target>
  <Target Name="Build">
    <MSBuild 
      Projects="$(DeploymentProject)\MyApplication.csproj"
      Properties="Configuration=$(Configuration)" >      
    </MSBuild>
  </Target>
</Project>

I ran the build script using the following command line: MSBuild master.proj

It deleted all my source code! Aarrghh!! Fortunately I had a recent backup available.

The reason was that I had not put a default value for Configuration in the PropertyGroup, and my OutputDirectory property was missing the bin from its folder path - so the Clean target's RemoveDir was pointed at the project source folder itself. Note to self: the first time you run any script that can delete files, take a backup first.

Zip Target

After my unfortunate experience with clean, I was more determined than ever to create a target that would perform a source-code backup for me! To use the Zip task in MSBuild you first need to download and install MSBuild Community Tasks.

Then you create an ItemGroup defining which files you want included (and excluded) before using the Zip task to actually create the archive. Here are the two targets I created - one to make a release package, and one to make a source code archive:

<Target Name="Package" DependsOnTargets="Build">
    <ItemGroup>
      <!-- All files from build -->
      <ZipFiles Include="$(DeploymentProject)\bin\$(Configuration)\**\*.*"
         Exclude="**\*.zip;**\*.pdb;**\*.vshost.*" />
    </ItemGroup>
    <Zip Files="@(ZipFiles)"
         WorkingDirectory="$(DeploymentProject)\bin\$(Configuration)\"
         ZipFileName="$(ApplicationName)-$(Configuration)-$(Version).zip"
         Flatten="True" />
    </Target>

    <Target Name="Backup">
    <ItemGroup>
      <!-- All source code -->
      <SourceFiles Include="**\*.*" 
        Exclude="**\bin\**\*.*;**\obj\**\*.*;*.zip" />
    </ItemGroup>
    <Zip Files="@(SourceFiles)"
         WorkingDirectory=""
         ZipFileName="$(ApplicationName)-SourceCode-$(Version).zip" />
</Target>

See Ben Hall's blog for another example.
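Incidentally, you don't have to run the default target; individual targets can be invoked from the command line with the /t switch (and properties overridden with /p), for example: MSBuild master.proj /t:Backup /p:Configuration=Release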

Test Task

My final task was to get some NUnit tests running. Again the MSBuild Community Tasks are required for this. It took me a while to get this working correctly, particularly because it seemed to have trouble detecting where my copy of NUnit was installed. Here's the XML:

<Target Name="Test" DependsOnTargets="Build">
    <NUnit Assemblies="@(TestAssembly)"
           WorkingDirectory="MyApplication.UnitTests\bin\$(Configuration)"
           ToolPath="C:\\Program Files\\NUnit 2.4.7\\bin"
           />
</Target>

Conclusion

It took me longer than I wanted to work out how to do these basic tasks with MSBuild, but even so, I am sure the time will be very quickly recouped, as I will be able to reuse most of these tasks on future projects. The jury is still out on whether MSBuild is preferable to NAnt, though.