Tuesday 17 November 2009

.NET Gotcha – Static Initialiser Ordering

Last week I had to troubleshoot a strange problem. A developer had broken up the source code for a large class of constants (defining various colours) into separate files using the partial keyword, and all of a sudden our Windows Forms controls were painting themselves black instead of the correct colour. Put them back into one file again and the problem went away.

Eventually we tracked the problem down to the order in which the static fields were being initialised. Consider the following code and unit test (which passes):

static class MyConstants
{
    public static readonly int MyNumber = 5;
    public static readonly int MyOtherNumber = MyNumber;
}

[TestFixture]
public class MyConstantsTests
{
    [Test]
    public void ConstantsAreInitialisedCorrectly()
    {
        Assert.AreEqual(5, MyConstants.MyNumber);
        Assert.AreEqual(5, MyConstants.MyOtherNumber);
    }
}

Now if we simply change the ordering of the statements in MyConstants…

static class MyConstants
{
    public static readonly int MyOtherNumber = MyNumber;
    public static readonly int MyNumber = 5;
}

… the test will fail, as MyOtherNumber will be 0. Obviously, if the two definitions exist in different source files, this type of problem is much harder to spot. The test does pass if we use the const keyword instead of static readonly, but since we were initialising our values using the Color.FromArgb method, this was not an option for us.

The moral of the story is to avoid setting static read-only fields to values dependent on other fields. Or at least be aware of the problems that can arise from doing so.
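For completeness, here is a sketch (not the code from our actual project) of two ways to sidestep the issue: use const where the value allows it, or assign the dependent fields in an explicit static constructor so the ordering is visible in one place:

```csharp
static class MyConstants
{
    // const values are evaluated at compile time, so declaration order is irrelevant
    public const int MyNumber = 5;

    // for values that cannot be const (e.g. the results of Color.FromArgb),
    // assigning in a static constructor keeps the ordering explicit and in one place
    public static readonly int MyOtherNumber;

    static MyConstants()
    {
        MyOtherNumber = MyNumber;
    }
}
```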

Friday 23 October 2009

The “Unlegacy my Code” Kata

Daily TDD Kata

Recently I’ve read a number of blog posts recommending doing a daily TDD ‘kata’. For those unfamiliar with the concept, essentially you attempt to solve a simple programming task using TDD. The problems are often simple mathematical challenges, such as finding the prime factors of a number, or creating a calculator.

You limit yourself to 30 minutes to work on the problem. One of the goals of doing this is to speed up the way you work. For this reason, many people recommend doing the same kata many times, just like a musician would practice the same piece daily, gradually improving in speed, fluency and accuracy.

Benefits

There are several benefits to taking the time each day for a coding warm-up.

  • Learning the TDD way of working: red, green, refactor
  • Learning a unit testing or mocking framework
  • Stretching your brain to see if you can come up with an even more elegant solution than last time
  • Mouseless programming (learning to use keyboard shortcuts)
  • Solving a familiar problem in a new language
  • Learning how to apply a design pattern

A Possible Weakness?

While I think these benefits are great, I do think most of the kata examples I have seen suffer from a weakness. And that is that you are always testing something that is inherently easy to test. After all, what could be easier to write unit tests for than an Add method? This can mean that when you transition to attempting to write tests for your “real” code, you can fall at the first hurdle, as the “arrange” part of your test is horribly complicated, and you have no idea what to do for the “assert” part.

Introducing the “Unlegacy my code” Kata

Legacy code has been described as “code without tests”. Which means that unless your development team is comprised of TDD champions, you likely have a lot of “legacy code”. Recently I decided I would try my own variation on a daily kata, which goes like this:

  1. Load up the source code of the application you are working on.
  2. Choose a class or method that is not covered by any existing unit tests (preferably something you are currently working on).
  3. Give yourself a maximum of 30 minutes to create a meaningful unit test.
  4. If it passes, check it in, and congratulations, you now have slightly less legacy code than before.
  5. If it fails, rollback and get on with your day’s work. Hopefully you learned something from the experience. You could write up a memo on what is wrong with the design of the class you tried to test. And you could always have another go at it tomorrow.

Problems you will run into

This kata will not necessarily be straightforward. Here are the two main difficulties you will encounter:

1. Tight Coupling & Hidden Dependencies.

You may find that it is almost impossible to instantiate an instance of the class you want to test because of its dependencies (often hidden through the use of singletons). Sometimes your 30 minutes is up before you have even managed to successfully instantiate the class.

2. Multiple Responsibility Syndrome.

Classes that fail to adhere to the “Single Responsibility Principle” often fail spectacularly. They are responsible for everything from the printout of wage slips to the launching of nuclear warheads. They talk to the database, the file system, the network, the registry, and create a few background threads too. This means that they typically have dozens of dependencies unrelated to the behaviour you want to test. And when you call a single method you aren’t doing one thing, you are doing 100 additional things, one of which is bound to throw an exception. The best course of action is to extract a single, isolated responsibility and move it into its own testable class.
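As a purely hypothetical sketch of that extraction (the class and names are invented for illustration): pull the one piece of logic you care about into a small class with no external dependencies, and suddenly there is something you can instantiate and assert on:

```csharp
// Hypothetical example: wage-slip formatting extracted from an imaginary god-class
public class WageSlipFormatter
{
    public string Format(string employeeName, decimal netPay)
    {
        // pure logic: no database, file system, registry or threads involved,
        // so a unit test can construct this class and call it directly
        return string.Format(CultureInfo.InvariantCulture,
            "{0}: {1:0.00}", employeeName, netPay);
    }
}

[TestFixture]
public class WageSlipFormatterTests
{
    [Test]
    public void FormatsNameAndNetPay()
    {
        var formatter = new WageSlipFormatter();
        Assert.AreEqual("Alice: 1234.50", formatter.Format("Alice", 1234.5m));
    }
}
```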

Give it a try

I’ve been doing this “unlegacy my code” kata for a couple of weeks now, and am (very) slowly seeing the unit test coverage rise. The fact that it is directly related to what you are working on also makes it much easier to justify the 30 minute daily investment to your manager.

Friday 16 October 2009

Merge-Friendly Code

If you are working on a large application where you have to simultaneously support hotfixes, service packs and new features for customers running several different versions, while developing new versions of your software, chances are you have some kind of branching structure in your source control system.

And before too long, you will experience the joy of merging features from one branch into another. Here’s my top six tips for writing code that is easy to merge…

1. Little and Often

The first principle is that it is better to make many small, focused check-ins, and merge them early, rather than checking a vast collection of changes in one hit and attempting a gigantic merge after several months of development. Sadly, this is not always possible, as sometimes a major change has to be kept out of a branch for a long time.

2. Get Your Branch Strategy Right

It is worth spending some time making sure you choose the correct branching strategy. A lot could be said about this, but here’s just two things to consider.

Don’t put two major features into one branch unless you are willing to deploy them together. Although you can “cherry-pick” changesets to merge, the reality is that this only works if your changes are completely isolated from one another. In other words if changeset B depends on something that was in changeset A, then you can’t just merge changeset B into another branch, changeset A has to come along for the ride too.

Also, avoid getting into the situation where you need to merge between two branches that are not directly related. In TFS, these are called “baseless merges”, and they can result in deleted code getting re-inserted.

3. Check in Clean Code

Nothing messes up a merge more quickly than someone making widespread “cosmetic” changes – introducing or deleting whitespace, renaming things, moving methods to a different part of the file, etc. Sweeping changes like this have a high probability of conflicting with someone else’s change.

The solution is of course, to reduce the need for this kind of change by making sure that what you check in is formatted correctly, and follows the appropriate coding standards and naming conventions. Tools like StyleCop, Resharper, and FxCop are all able to help here.

4. Single Responsibility Principle (SRP)

Simply put, the Single Responsibility Principle dictates that every class should have one and only one responsibility. If it has two or more, you should extract functionality into additional classes. Similarly every method should perform one and only one task.

Adhering to this straightforward principle results in many classes, each composed of short methods. Very often merge conflicts are due to two people working on the same file or method, but changing it for very different reasons. But if a class or method has only “one reason to change”, then the chances of two developers working on different features needing to simultaneously change it are greatly reduced.

5. Open Closed Principle (OCP)

The Open Closed Principle states that classes should be open for extension but closed for modification. Or to say it another way, it should be possible to add new features and capabilities to your codebase simply by creating new classes, rather than having to mess with the internals of existing classes. And if you use technologies like MEF, it really is possible to add whole new features without touching a single line of your existing codebase.
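To illustrate with a deliberately simplified (and entirely hypothetical) example: if the report engine only ever talks to an exporter interface, then supporting a new format means writing a new class, not editing the engine:

```csharp
// Hypothetical example: the existing engine depends only on this abstraction
public interface IReportExporter
{
    string Name { get; }
    void Export(string reportContent, Stream output);
}

// Adding PDF support later is a new class - the engine and the
// existing exporters are never touched
public class PdfExporter : IReportExporter
{
    public string Name { get { return "PDF"; } }

    public void Export(string reportContent, Stream output)
    {
        // PDF writing logic would go here
    }
}
```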

Obviously, in any large real-world application, there will always be the need to make some changes to legacy code. But this should be the exception rather than the norm. In fact, the only real reasons to change the existing code are to fix bugs, and to make it more extensible.

If commits to source control of new features consist mostly of changes to existing files rather than the creation of new ones, maybe you need to do some research into how you can better design your classes to conform to OCP.

6. Unit Tests

After performing a merge, it is vital that you are able to determine as quickly as possible if you made any mistakes in the conflict resolution process. A unit test suite with good code coverage is invaluable in helping with this task. Obviously this may need to be complemented with further integration testing and manual testing, but the quicker you can identify problems with a merge, the better.

So those are my top recommendations for avoiding merging nightmares. Anyone got any other suggestions?

Saturday 10 October 2009

NAudio 1.3 Release Notes

It has been well over a year since I last released a version of NAudio, and since then there have been loads of new features and bugfixes added, so I have decided it is time for a new drop. Another reason for releasing now is that NAudio has been getting a lot more attention recently, mainly due to StackOverflow (and even got a mention on This Week on Channel 9).

As always, head to CodePlex to download the latest source code and binaries.

What’s New?

  • WaveOut has a new constructor (this is a breaking change), which allows three options for WaveOut callbacks. This is because there is no “one size fits all” solution, but if you are creating WaveOut on the GUI thread of a Winforms or WPF application, then the default constructor should work just fine. WaveOut also allows better flexibility over controlling the number of buffers and desired latency.
  • Mp3FileReader and WaveFileReader can have a stream as input, and WaveFileWriter can write to a stream. These features are useful to those wanting to stream data over networks.
  • The new IWaveProvider interface is like a lightweight WaveStream. It doesn’t support repositioning or length and current position reporting, making the implementation of synthesizers much simpler. The IWavePlayer interface takes an IWaveProvider rather than WaveStream. WaveStream implements IWaveProvider, so existing code continues to work just fine.
  • Added the LoopStream, WaveProvider32 and WaveProvider16 helper classes. Expect more to be built upon these in the future.
  • I have also started using the WaveBuffer class. This clever idea from Alexandre Mutel allows us to trick the .NET type system into letting us cast from byte[] to float[] or short[]. This improves performance by eliminating unnecessary copying and converting of data.
  • There have been many bugfixes including better support for VBR MP3 file playback.
  • The mixer API has had a lot of bugs fixed and improvements, though differences between Vista and XP continue to prove frustrating.
  • The demo project (NAudioDemo) has been improved and includes audio wave-form drawing sample code.
  • There is now a WPF demo as well (NAudioWpfDemo), which also shows how to draw wave-forms in WPF, and even includes some preliminary FFT drawing code.
  • The WaveIn support has been updated and enhanced. WaveInStream is now obsolete.
  • WASAPI audio capture is now supported.
  • NAudio should now work correctly on x64 operating systems (accomplished by setting Visual Studio to compile for x86).

As usual, I welcome any feedback on this release. Do let me know if you use NAudio to build anything cool.

Friday 9 October 2009

Recording the Soundcard Output to WAV in NAudio

Suppose you want to not just play back some audio, but record what you are playing to a WAV file. This can be achieved in NAudio by creating an IWaveProvider whose read method reads from another IWaveProvider but also writes to disk as it goes. This is very easy to implement, and the WaveRecorder class I present here will be added to NAudio shortly. Our WaveRecorder also needs to be disposable, as we will want to close the WAV file when we are finished.
/// <summary>
/// Utility class to intercept audio from an IWaveProvider and
/// save it to disk
/// </summary>
public class WaveRecorder : IWaveProvider, IDisposable
{
    private WaveFileWriter writer;
    private IWaveProvider source;
 
    /// <summary>
    /// Constructs a new WaveRecorder
    /// </summary>
    /// <param name="destination">The location to write the WAV file to</param>
    /// <param name="source">The source Wave Provider</param>
    public WaveRecorder(IWaveProvider source, string destination)
    {
        this.source = source;
        this.writer = new WaveFileWriter(destination, source.WaveFormat);
    }
     
    /// <summary>
    /// Read simply returns what the source returns, but writes to disk along the way
    /// </summary>
    public int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = source.Read(buffer, offset, count);
        writer.WriteData(buffer, offset, bytesRead);
        return bytesRead;
    }
 
    /// <summary>
    /// The WaveFormat
    /// </summary>
    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }
 
    /// <summary>
    /// Closes the WAV file
    /// </summary>
    public void Dispose()
    {
        if (writer != null)
        {
            writer.Dispose();
            writer = null;
        }
    }
}

Now that we have our WaveRecorder, we can insert it anywhere we like in the chain. The most obvious place is right at the end, so we wrap the WaveStream or WaveProvider we would normally pass to the Init method of our IWavePlayer with the WaveRecorder class. To demonstrate, I will extend the sine wave generating code I created recently, to save the sine wave you are playing to disk. Only three extra lines of code are required:
IWavePlayer waveOut;
WaveRecorder recorder; 

private void button1_Click(object sender, EventArgs e)
{
    StartStopSineWave();
}

void StartStopSineWave()
{
    if (waveOut == null)
    {
        var sineWaveProvider = new SineWaveProvider16();
        sineWaveProvider.SetWaveFormat(16000, 1); // 16kHz mono
        sineWaveProvider.Frequency = 500;
        sineWaveProvider.Amplitude = 0.1f;
        recorder = new WaveRecorder(sineWaveProvider, @"C:\Users\Mark\Documents\sine.wav");
        waveOut = new WaveOut();
        waveOut.Init(recorder);
        waveOut.Play();
    }
    else
    {
        waveOut.Stop();
        waveOut.Dispose();
        waveOut = null;
        recorder.Dispose();
        recorder = null;                
    }
}

Thursday 8 October 2009

Playback of Sine Wave in NAudio

In this post I will demonstrate how to create and play a sine wave using NAudio. To do this, we need to create a derived WaveStream, or more simply, a class that implements IWaveProvider.
One awkwardness of implementing IWaveProvider or WaveStream is the need to provide the data in a byte array, when it would be much easier to write to an array of floats for 32 bit audio (or shorts for 16 bit). To help with this, I have created the WaveProvider32 class (likely to be committed into the source for NAudio 1.3), which uses the magic of the WaveBuffer class to allow us to cast the target byte array into an array of floats.
public abstract class WaveProvider32 : IWaveProvider
{
    private WaveFormat waveFormat;
    
    public WaveProvider32()
        : this(44100, 1)
    {
    }

    public WaveProvider32(int sampleRate, int channels)
    {
        SetWaveFormat(sampleRate, channels);
    }

    public void SetWaveFormat(int sampleRate, int channels)
    {
        this.waveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, channels);
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        WaveBuffer waveBuffer = new WaveBuffer(buffer);
        int samplesRequired = count / 4;
        int samplesRead = Read(waveBuffer.FloatBuffer, offset / 4, samplesRequired);
        return samplesRead * 4;
    }

    public abstract int Read(float[] buffer, int offset, int sampleCount);

    public WaveFormat WaveFormat
    {
        get { return waveFormat; }
    }
}

Now we can derive from WaveProvider32 to supply our actual sine wave data:
public class SineWaveProvider32 : WaveProvider32
{
    int sample;

    public SineWaveProvider32()
    {
        Frequency = 1000;
        Amplitude = 0.25f; // let's not hurt our ears            
    }

    public float Frequency { get; set; }
    public float Amplitude { get; set; }

    public override int Read(float[] buffer, int offset, int sampleCount)
    {
        int sampleRate = WaveFormat.SampleRate;
        for (int n = 0; n < sampleCount; n++)
        {
            buffer[n+offset] = (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
            sample++;
            if (sample >= sampleRate) sample = 0;
        }
        return sampleCount;
    }
}

Using it is straightforward. We choose our output sample rate, the frequency of the sine wave and amplitude. You can even adjust them in real-time.
private WaveOut waveOut;

private void button1_Click(object sender, EventArgs e)
{
    StartStopSineWave();
}

private void StartStopSineWave()
{
    if (waveOut == null)
    {
        var sineWaveProvider = new SineWaveProvider32();
        sineWaveProvider.SetWaveFormat(16000, 1); // 16kHz mono
        sineWaveProvider.Frequency = 1000;
        sineWaveProvider.Amplitude = 0.25f;
        waveOut = new WaveOut();
        waveOut.Init(sineWaveProvider);
        waveOut.Play();
    }
    else
    {
        waveOut.Stop();
        waveOut.Dispose();
        waveOut = null;
    }
}

Please note, that you will need to be using the latest NAudio code from source control to use this (until 1.3 is released).

Wednesday 7 October 2009

Looped Playback in .NET with NAudio

In this post I will explain how to seamlessly loop audio with NAudio. The first task is to create a WaveStream-derived class that will loop for us. This class takes a source WaveStream, and in its overridden Read method loops back to the beginning once the source stream stops returning data. Obviously this requires that the source stream you pass in does in fact stop returning data. Another option would be to use the Length property of the source stream, and go back to the beginning once we have sent Length bytes. Here’s my implementation of LoopStream, which I might put into NAudio for the next release. (Update: I have fixed a bug in the Read method, thanks Neverbith for spotting it. I may also add an option to use the source’s Length property as well.)

/// <summary>
/// Stream for looping playback
/// </summary>
public class LoopStream : WaveStream
{
    WaveStream sourceStream;

    /// <summary>
    /// Creates a new Loop stream
    /// </summary>
    /// <param name="sourceStream">The stream to read from. Note: the Read method of this stream should return 0 when it reaches the end
    /// or else we will not loop to the start again.</param>
    public LoopStream(WaveStream sourceStream)
    {
        this.sourceStream = sourceStream;
        this.EnableLooping = true;
    }

    /// <summary>
    /// Use this to turn looping on or off
    /// </summary>
    public bool EnableLooping { get; set; }

    /// <summary>
    /// Return source stream's wave format
    /// </summary>
    public override WaveFormat WaveFormat
    {
        get { return sourceStream.WaveFormat; }
    }

    /// <summary>
    /// LoopStream simply returns the source stream's length
    /// </summary>
    public override long Length
    {
        get { return sourceStream.Length; }
    }

    /// <summary>
    /// LoopStream simply passes on positioning to source stream
    /// </summary>
    public override long Position
    {
        get { return sourceStream.Position; }
        set { sourceStream.Position = value; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int totalBytesRead = 0;

        while (totalBytesRead < count)
        {
            int bytesRead = sourceStream.Read(buffer, offset + totalBytesRead, count - totalBytesRead);
            if (bytesRead == 0)
            {
                if (sourceStream.Position == 0 || !EnableLooping)
                {
                    // something wrong with the source stream
                    break;
                }
                // loop
                sourceStream.Position = 0;
            }
            totalBytesRead += bytesRead;
        }
        return totalBytesRead;
    }
}

Now using this to play a looping WAV file is trivial:

private WaveOut waveOut;

private void buttonStartStop_Click(object sender, EventArgs e)
{
    if (waveOut == null)
    {
        WaveFileReader reader = new WaveFileReader(@"C:\Music\Example.wav");
        LoopStream loop = new LoopStream(reader);
        waveOut = new WaveOut();
        waveOut.Init(loop);
        waveOut.Play();
     }
     else
     {
         waveOut.Stop();
         waveOut.Dispose();
         waveOut = null;
     }
}

Monday 28 September 2009

Book Review – The Art of Unit Testing (Roy Osherove)

In this book, Roy Osherove gives a comprehensive introduction to unit testing. He explains why you would want unit tests in the first place, how to go about writing and running them, as well as addressing some of the challenges involved in introducing unit testing to projects or development teams.

Those who have experience writing unit tests might not find a lot of new material in this book, but it is still worth skimming through as he often provides several ways of achieving a goal, one of which you may not have thought of.

Part 1 introduces unit tests and their benefits, and explains how they differ from integration tests. This elementary distinction is an important one as when many developers initially try to write their first unit tests, they actually write integration tests, ending up with tests that are fragile, unrepeatable and slow, and potentially putting them off from writing any more.

Part 2 explains how mocks and stubs can be used to enable automated testing. He shows that you can make classes testable using several techniques, not just passing in interfaces for all dependencies. He introduces a few of the most popular mocking frameworks.

Part 3 deals with the problem of managing and maintaining all your unit tests. He advocates continuous integration, keeping the tests simple, and presents a useful list of unit testing anti-patterns.

Part 4 tackles some of the tricky issues of introducing unit testing into a legacy codebase, or into a development team that is resistant to change. Osherove is a pragmatist rather than a purist. He recognizes that you may have to start very small, and prove that the time taken to write the tests is worth it.

Two appendices provide useful additional information on some of the OO design principles that make for testable code as well as summarising many of the open source and commercial unit testing tools and frameworks available. This is a very helpful resource, as it helps newcomers to navigate their way through the bewildering array of choices as well as highlighting some new tools that I hadn’t come across.

Overall I would say this is an excellent book to pass round developers in a team that is considering using unit testing or is new to the practice. Doubtless some will be disappointed that he doesn’t stridently demand that TDD is used exclusively, but his honest realism is refreshing and may even prove more effective in winning over new converts to test driven development.

Styling a WPF Volume Meter

In WPF, the complete separation of behaviour from appearance means that you can take a control such as the ProgressBar, and style it to look like any kind of meter. For a recent project, I wanted to make an audio volume meter. The look I wanted was a gradient bar, starting with green for low volumes and going to red for the highest volume:

wpf-volume-meter-1

When the volume is at half way, the progress bar should just show the left hand portion of the gradient:

wpf-volume-meter-2

Styling a ProgressBar in WPF is actually very easy. You can start with the very helpful “Simple Style” template that comes with kaxaml. You simply need to define the visuals for two parts, called PART_Track and PART_Indicator. The first is the background, while the second is the part that dynamically changes size as the current value changes.

At first it would seem trivial to create the look I am after – just create two rectangles on top of each other. The trouble is that I only want the whole gradient to appear if the value is at maximum. Here it is incorrectly drawing the entire gradient when the volume is low:

wpf-volume-meter-3

To work around this, I painted the entire gradient on the PART_Track. Then the PART_Indicator was made transparent. This required one further rectangle to cover up the part of the background gradient that I don’t want. I do this with a DockPanel. This allows the PART_Indicator to use its required space, while the masking rectangle fills up the remaining space on the right-hand side, covering up the background gradient.

<Style x:Key="{x:Type ProgressBar}" TargetType="{x:Type ProgressBar}">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type ProgressBar}">
        <Grid MinHeight="14" MinWidth="200">
          <Rectangle Name="PART_Track" Stroke="#888888" StrokeThickness="1">
            <Rectangle.Fill>
              <LinearGradientBrush StartPoint="0,0" EndPoint="1,0">
                <GradientStop Offset="0" Color="#FF00FF00"/>
                <GradientStop Offset="0.9" Color="#FFFFFF00"/>
                <GradientStop Offset="1" Color="#FFFF0000"/>
              </LinearGradientBrush>
            </Rectangle.Fill>
          </Rectangle>
          <DockPanel Margin="1">
            <Rectangle Name="PART_Indicator"/>
            <Rectangle Name="Mask" MinWidth="{TemplateBinding Width}" Fill="#C0C0C0"/>
          </DockPanel>
        </Grid>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

I think there may be an even better way to solve this using a VisualBrush, but I can’t quite get it working at the moment. I’ll post with the solution once I’ve worked it out.

Saturday 26 September 2009

Trimming a WAV file using NAudio

I’m hoping to write a few brief code snippets to demonstrate various uses of NAudio, to eventually form the basis of an FAQ. This example shows how you can take a WAV file and trim a section out of it. You specify the TimeSpan to remove from the beginning and end, as well as an output WAV file. Please note this will only be reliable with PCM format WAV files.

public static class WavFileUtils
{
    public static void TrimWavFile(string inPath, string outPath, TimeSpan cutFromStart, TimeSpan cutFromEnd)
    {
        using (WaveFileReader reader = new WaveFileReader(inPath))
        {
            using (WaveFileWriter writer = new WaveFileWriter(outPath, reader.WaveFormat))
            {
                int bytesPerMillisecond = reader.WaveFormat.AverageBytesPerSecond / 1000;

                int startPos = (int)cutFromStart.TotalMilliseconds * bytesPerMillisecond;
                startPos = startPos - startPos % reader.WaveFormat.BlockAlign;

                int endBytes = (int)cutFromEnd.TotalMilliseconds * bytesPerMillisecond;
                endBytes = endBytes - endBytes % reader.WaveFormat.BlockAlign;
                int endPos = (int)reader.Length - endBytes; 

                TrimWavFile(reader, writer, startPos, endPos);
            }
        }
    }

    private static void TrimWavFile(WaveFileReader reader, WaveFileWriter writer, int startPos, int endPos)
    {
        reader.Position = startPos;
        byte[] buffer = new byte[1024];
        while (reader.Position < endPos)
        {
            int bytesRequired = (int)(endPos - reader.Position);
            if (bytesRequired > 0)
            {
                int bytesToRead = Math.Min(bytesRequired, buffer.Length);
                int bytesRead = reader.Read(buffer, 0, bytesToRead);
                if (bytesRead == 0)
                {
                    // source has no more data; stop to avoid an infinite loop
                    break;
                }
                writer.WriteData(buffer, 0, bytesRead);
            }
        }
    }
}
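Calling it might look like this (the paths and durations here are just examples), cutting two seconds from the start and five from the end:

```csharp
WavFileUtils.TrimWavFile(@"C:\Music\input.wav", @"C:\Music\trimmed.wav",
    TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(5));
```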

Friday 25 September 2009

Circular WPF Button Template

Here’s another WPF button template, since my first one continues to be one of the most popular posts on my blog. This one is in response to a question about how the button could be made circular. The basic background of the button is formed by superimposing three circles on top of each other:

<Grid Width="100" Height="100" Margin="5">
   <Ellipse Fill="#FF6DB4EF"/>
   <Ellipse>
      <Ellipse.Fill>
         <RadialGradientBrush>
            <GradientStop Offset="0" Color="#00000000"/>
            <GradientStop Offset="0.88" Color="#00000000"/>
            <GradientStop Offset="1" Color="#80000000"/>
         </RadialGradientBrush>
      </Ellipse.Fill>
   </Ellipse>
   <Ellipse Margin="10">
      <Ellipse.Fill>
         <LinearGradientBrush>
            <GradientStop Offset="0" Color="#50FFFFFF"/>
            <GradientStop Offset="0.5" Color="#00FFFFFF"/>
            <GradientStop Offset="1" Color="#50FFFFFF"/>
         </LinearGradientBrush>
      </Ellipse.Fill>
   </Ellipse>
</Grid>

Here’s what that looks like:

Circular Button

Now to turn it into a template, we follow a very similar process to before. We allow the Background colour to be overridden by the user if required. I couldn’t come up with a neat way of forcing the button to be circular, but it is not too much of a chore to set the Height and Width in XAML.

The focus rectangle has been made circular. For the IsPressed effect, I simply change the angle of the linear gradient of the inner circle a little, and move the ContentPresenter down a bit, which seems to work OK. I haven’t created any triggers yet for IsMouseOver, IsEnabled or IsFocused, mainly because I’m not quite sure what would create the right visual effect.

<Page.Resources>
  <Style x:Key="MyFocusVisual">
     <Setter Property="Control.Template">
        <Setter.Value>
           <ControlTemplate TargetType="{x:Type Control}">
              <Grid Margin="8">
                 <Ellipse
                    Name="r1"
                    Stroke="Black"
                    StrokeDashArray="2 2"
                    StrokeThickness="1"/>
                 <Border
                    Name="border"
                    Width="{TemplateBinding ActualWidth}"
                    Height="{TemplateBinding ActualHeight}"
                    BorderThickness="1"
                    CornerRadius="2"/>
              </Grid>
           </ControlTemplate>
        </Setter.Value>
     </Setter>
  </Style>
  <Style x:Key="CircleButton" TargetType="Button">
     <Setter Property="OverridesDefaultStyle" Value="True"/>
     <Setter Property="Margin" Value="2"/>
     <Setter Property="FocusVisualStyle" Value="{StaticResource MyFocusVisual}"/>
     <Setter Property="Background" Value="#FF6DB4EF"/>
     <Setter Property="Template">
        <Setter.Value>
           <ControlTemplate TargetType="Button">
              <Grid>
                 <Ellipse Fill="{TemplateBinding Background}"/>
                 <Ellipse>
                    <Ellipse.Fill>
                       <RadialGradientBrush>
                          <GradientStop Offset="0" Color="#00000000"/>
                          <GradientStop Offset="0.88" Color="#00000000"/>
                          <GradientStop Offset="1" Color="#80000000"/>
                       </RadialGradientBrush>
                    </Ellipse.Fill>
                 </Ellipse>
                 <Ellipse Margin="10" x:Name="highlightCircle" >
                    <Ellipse.Fill >
                       <LinearGradientBrush >
                          <GradientStop Offset="0" Color="#50FFFFFF"/>
                          <GradientStop Offset="0.5" Color="#00FFFFFF"/>
                          <GradientStop Offset="1" Color="#50FFFFFF"/>
                       </LinearGradientBrush>
                    </Ellipse.Fill>
                 </Ellipse>
                 <ContentPresenter x:Name="content" HorizontalAlignment="Center" VerticalAlignment="Center"/>
              </Grid>
              <ControlTemplate.Triggers>
                 <Trigger Property="IsPressed" Value="True">
                    <Setter TargetName="highlightCircle" Property="Fill">
                       <Setter.Value>
                       <LinearGradientBrush StartPoint="0.3,0" EndPoint="0.7,1">
                          <GradientStop Offset="0" Color="#50FFFFFF"/>
                          <GradientStop Offset="0.5" Color="#00FFFFFF"/>
                          <GradientStop Offset="1" Color="#50FFFFFF"/>
                       </LinearGradientBrush>
                       </Setter.Value>
                    </Setter>
                    <Setter TargetName="content" Property="RenderTransform">
                       <Setter.Value>
                          <TranslateTransform Y="0.5" X="0.5"/>
                       </Setter.Value>
                    </Setter>
                 </Trigger>
              </ControlTemplate.Triggers>
           </ControlTemplate>
        </Setter.Value>
     </Setter>
  </Style>
</Page.Resources>

Here’s how we declare a few of these buttons of different sizes, and with different background colours:

<WrapPanel>
  <Button Width="100" Height="100" Style="{StaticResource CircleButton}">Hello World</Button>
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="#FF9F1014">Button 2</Button>
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="#FFD8C618">Button 3</Button>      
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="#FF499E1E">Button 4</Button>
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="Orange">Button 5</Button>
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="#FF7C7C7C">Button 6</Button>
  <Button Width="80" Height="80" Style="{StaticResource CircleButton}" Background="Purple" Foreground="White">Button 7</Button>
  <Button Width="100" Height="100" Style="{StaticResource CircleButton}" Background="#FF3120D4" Foreground="White">Button 8</Button>
</WrapPanel>

Circular Buttons

Sadly, the use of ControlTemplate triggers means that this template can’t be used directly in Silverlight. I’ll maybe look into converting them to VisualStates for a future blog post.

Thursday 24 September 2009

Crowd-Sourced Code Reviews

In a post yesterday, I bemoaned the lack of a clean syntax in NUnit for testing whether events fired from an object under test, and set about writing my own helper methods. I posted on the NUnit discussion group, to see whether there was a better way. And there was. A really obvious way. Here’s how my tests could have been written without the need for an Assert.Raises function, or for any code outside the test method:
[Test]
public void TestRaisesClosedEvent_Improved()
{
    Blah blah = new Blah();
    EventArgs savedEventArgs = null;
    blah.Closed += (sender, args) => savedEventArgs = args;
    blah.RaiseClosed(1);
    Assert.IsNotNull(savedEventArgs);
}

[Test]
public void TestCanCheckRaisesMany_Improved()
{
    Blah blah = new Blah();
    var savedEvents = new List<OpenedEventArgs>();
    blah.Opened += (sender, args) => savedEvents.Add(args);
    blah.RaiseOpened(5);
    Assert.AreEqual(5, savedEvents.Count);
    Assert.AreEqual("Message 3", savedEvents[2].Message);
}
Of course, I was kicking myself for missing such an obvious solution, but it left me pondering how often this happens without me realising it.
Earlier today Clint Rutkas tweeted:
i feel like i'm doing such bad things in WPF right now
… and I know exactly how he feels. In a couple of WPF applications I am working on, I am trying to use the MVVM pattern. It often ends up with me writing code that might be a really cool piece of lateral thinking, or it might be a pointlessly overcomplicated hack. For an example, here’s an attached dependency property (usage here) I created to let me bind to Storyboard completed events.
What I need is another person to look over my shoulder and tell me whether I am missing the obvious, unaware of an API, or ignorant of a best practice. But all too often, that person doesn’t exist.
In a commercial environment, hopefully there is at least some kind of provision for code reviews to take place, or maybe even pair programming. But what about the lone programmer working on open source projects?
Maybe we need some way to “crowd-source” code reviews. Some way of getting lots of eyes on your source code, even if it is only for a couple of minutes, and an easy way of getting hold of that feedback. A bit like fivesecondtest but where you help out an open source developer by code reviewing a changeset. I’ve asked for this as a feature on CodePlex.
What do you think? Could it work? How do you make sure you’re not missing the obvious in the code you write?

Wednesday 23 September 2009

Asserting events with NUnit

Suppose we want to write unit tests for a class that raises events. We want to check that the right events are raised, the right number are raised, and that they have the correct parameters. Here’s an example of a class that we might want to test:
class Blah
{
    public event EventHandler Closed;
    public event EventHandler<OpenedEventArgs> Opened;

    public void RaiseClosed(int count)
    {
        for(int n = 0; n < count; n++)
        {
            Closed(this, EventArgs.Empty);
        }
    }

    public void RaiseOpened(int count)
    {
        for(int n = 0; n < count; n++)
        {
            Opened(this, new OpenedEventArgs() { Message = String.Format("Message {0}", n + 1) });
        }
    }
}

class OpenedEventArgs : EventArgs
{
    public string Message { get; set; }
}

To write NUnit test cases for this class, I would usually end up writing an event handler, and storing the parameters of the event in a field like follows:
EventArgs blahEventArgs;

[Test]
public void TestRaisesClosedEvent()
{
    Blah blah = new Blah();
    blahEventArgs = null;
    blah.Closed += new EventHandler(blah_Closed);
    blah.RaiseClosed(1);
    Assert.IsNotNull(blahEventArgs);
}

void blah_Closed(object sender, EventArgs e)
{
    blahEventArgs = e;
}
While this approach works, it doesn’t feel quite right to me. It is fragile – forget to set blahEventArgs back to null and you inadvertently break other tests. I was left wondering what it would take to create an Assert.Raises method that removed the need for a private variable and method for handling the event.
My ideal syntax would be the following:
Blah blah = new Blah();
var args = Assert.Raises<OpenedEventArgs>(blah.Opened, () => blah.RaiseOpened(1));
Assert.AreEqual("Message 1", args.Message);

The trouble is, you can’t specify “blah.Opened” as a parameter – this will cause a compile error, because outside its declaring class an event can only appear on the left-hand side of += or -=. So I have to settle for second best and pass the object that raises the event, and the name of the event. So here’s my attempt at creating Assert.Raises, plus an Assert.RaisesMany method that allows you to see how many were raised and examine the EventArgs for each one:
public static EventArgs Raises(object raiser, string eventName, Action function)
{
    return Raises<EventArgs>(raiser, eventName, function);
}

public static T Raises<T>(object raiser, string eventName, Action function) where T:EventArgs
{
    var listener = new EventListener<T>(raiser, eventName);
    function.Invoke();
    Assert.AreEqual(1, listener.SavedArgs.Count);
    return listener.SavedArgs[0];
}

public static IList<T> RaisesMany<T>(object raiser, string eventName, Action function) where T : EventArgs
{
    var listener = new EventListener<T>(raiser, eventName);
    function.Invoke();
    return listener.SavedArgs;
}

class EventListener<T> where T : EventArgs
{
    private List<T> savedArgs = new List<T>();

    public EventListener(object raiser, string eventName)
    {
        EventInfo eventInfo = raiser.GetType().GetEvent(eventName);
        var handler = Delegate.CreateDelegate(eventInfo.EventHandlerType, this, "EventHandler");
        eventInfo.AddEventHandler(raiser, handler);            
    }

    private void EventHandler(object sender, T args)
    {
        savedArgs.Add(args);
    }

    public IList<T> SavedArgs { get { return savedArgs; } }
}
This allows us to dispense with the private field and event handler method in our test cases, and have nice clean test code:
[Test]
public void TestCanCheckRaisesEventArgs()
{
    Blah blah = new Blah();
    AssertExtensions.Raises(blah, "Closed", () => blah.RaiseClosed(1));
}

[Test]
public void TestCanCheckRaisesGenericEventArgs()
{
    Blah blah = new Blah();
    var args = AssertExtensions.Raises<OpenedEventArgs>(blah, "Opened", () => blah.RaiseOpened(1));
    Assert.AreEqual("Message 1", args.Message);
}

[Test]
public void TestCanCheckRaisesMany()
{
    Blah blah = new Blah();
    var args = AssertExtensions.RaisesMany<OpenedEventArgs>(blah, "Opened", () => blah.RaiseOpened(5));
    Assert.AreEqual(5, args.Count);
    Assert.AreEqual("Message 3", args[2].Message);
}

There are a few drawbacks to this solution:
  • Having to specify the event name as a string is ugly, but there doesn’t seem to be a clean way of doing this.
  • It expects that your events are all of type EventHandler or EventHandler<T>. Any other delegate won’t work.
  • It throws away the sender parameter, but you might want to test this.
  • You can only test on one particular event being raised in a single test (e.g. can’t test that a function raises both the Opened and Closed events), but this may not be a bad thing as tests are not really supposed to assert more than one thing.
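The second and third limitations could be addressed together. As a sketch (the class and member names here are hypothetical, not part of the downloadable source), the listener could record the sender alongside each EventArgs, so a test can assert on both:

```csharp
// Hypothetical variant of EventListener that also records the sender of each
// invocation. Handler is public so that the name-based Delegate.CreateDelegate
// overload can bind to it.
using System;
using System.Collections.Generic;
using System.Reflection;

public class SenderAwareListener<T> where T : EventArgs
{
    private readonly List<object> senders = new List<object>();
    private readonly List<T> savedArgs = new List<T>();

    public SenderAwareListener(object raiser, string eventName)
    {
        EventInfo eventInfo = raiser.GetType().GetEvent(eventName);
        Delegate handler = Delegate.CreateDelegate(
            eventInfo.EventHandlerType, this, "Handler");
        eventInfo.AddEventHandler(raiser, handler);
    }

    public void Handler(object sender, T args)
    {
        senders.Add(sender);
        savedArgs.Add(args);
    }

    public IList<object> Senders { get { return senders; } }
    public IList<T> SavedArgs { get { return savedArgs; } }
}
```

A test could then add an Assert.AreSame(blah, listener.Senders[0]) to verify that the object under test passes itself as the sender.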
I would be interested to know if anyone has a better way of testing that objects raise events. Am I missing a trick here? Any suggestions for how I can make my Raises function even better?
Download full source code here

Tuesday 15 September 2009

Checking MIME type for cross domain Silverlight hosting

I recently ran into some problems trying to embed a Silverlight application in a web page hosted on another server. It seems that to do so, you must have your MIME type for .xap files correctly configured (should be application/x-silverlight-app). Interestingly, this is not required if you are hosting the .xap file in a web page on the same domain.

Unfortunately for me, my shared Linux hosting provider allows no way of configuring or even viewing the MIME types. After searching in vain for a utility that would let me have a look and see what MIME type a given URL was returning, I wrote my own test function. It turns out to take just a few lines of code – create a WebRequest, and examine the ContentType of the WebResponse.

[TestFixture]
public class MimeTypeChecker
{
    [Test]
    public void CheckSilverlightMimeTypeIsCorrect()
    {
        string mimeType = GetMimeType("http://www.mydomain.com/mysilverlightapp.xap");
        Assert.AreEqual("application/x-silverlight-app", mimeType);
    }

    public string GetMimeType(string url)
    {
        WebRequest request = WebRequest.Create(url);
        // dispose the response so the connection is released
        using (WebResponse response = request.GetResponse())
        {
            return response.ContentType;
        }
    }
}

Update: If your .xap file is hosted on an Apache server on Linux, you can set up the MIME types correctly by creating a .htaccess file. This can be put in the root of your hosting space, or just in the folder containing the .xap file. It only needs the .xap registration, but the example I show below configures xaml and xbap as well:

AddType application/x-silverlight-app .xap
AddType application/xaml+xml .xaml
AddType application/x-ms-xbap .xbap

Monday 14 September 2009

Parameterized Tests with TestCase in NUnit 2.5

For some time, NUnit has had a RowTest attribute in the NUnit.Extensions.dll, but with NUnit 2.5, we have built-in support for parameterized tests. These reduce the verbosity of multiple tests that differ only by a few arguments.

For example, if you had test code like this:

[Test]
public void Adding_1_and_1_should_equal_2()
{
    Assert.AreEqual(2,calculator.Add(1,1));
}

[Test]
public void Adding_1_and_2_should_equal_3()
{
    Assert.AreEqual(3, calculator.Add(1, 2));
}

[Test]
public void Adding_1_and_3_should_equal_4()
{
    Assert.AreEqual(4, calculator.Add(1, 3));
}

You can now simply refactor to:

[TestCase(1, 1, 2)]
[TestCase(1, 2, 3)]
[TestCase(1, 3, 4)]
public void AdditionTest(int a, int b, int expectedResult)
{
    Assert.AreEqual(expectedResult, calculator.Add(a, b));
}

The NUnit GUI handles this very nicely, allowing us to see which one failed, and run them individually:

NUnit Parameterized Tests

TestDriven.NET doesn’t give us the ability to specify an individual test case to run (hopefully a future feature), but it will show us the parameters used for any test failures:

TestCase 'SanityCheck.CalculatorTests.AdditionTest(1,2,3)' failed: 
  Expected: 3
  But was:  2
...\CalculatorTests.cs(19,0): at SanityCheck.CalculatorTests.AdditionTest(Int32 a, Int32 b, Int32 expectedResult)

TestCase 'SanityCheck.CalculatorTests.AdditionTest(1,3,4)' failed: 
  Expected: 4
  But was:  2
...\CalculatorTests.cs(19,0): at SanityCheck.CalculatorTests.AdditionTest(Int32 a, Int32 b, Int32 expectedResult)

Another very cool feature is that you can specify a TestCaseSource function, allowing you to generate test cases on the fly. One way I have used this feature is for some integration tests that examine a folder of legacy test data files and create a test for each file.

There are a few options for how to provide the source data. Here I show using a function that returns an IEnumerable<string>:

[TestCaseSource("GetTestFiles")]
public void ImportXmlTest(string xmlFileName)
{
    xmlImporter.Import(xmlFileName);
}

public IEnumerable<string> GetTestFiles()
{
    return Directory.GetFiles("D:\\Test Data", "*.xml");
}
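
Another option, assuming NUnit 2.5’s TestCaseData class, is to have the source yield TestCaseData objects, which can also carry an expected return value via Returns (the calculator field is a stand-in from the earlier examples):

```csharp
// Sketch using TestCaseData: each case specifies its arguments and, via
// Returns, the value the test method is expected to return.
[TestCaseSource("AdditionCases")]
public int AdditionTestFromSource(int a, int b)
{
    return calculator.Add(a, b);
}

public static IEnumerable<TestCaseData> AdditionCases()
{
    yield return new TestCaseData(1, 1).Returns(2);
    yield return new TestCaseData(1, 2).Returns(3);
    yield return new TestCaseData(1, 3).Returns(4);
}
```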

Now when NUnit loads the test assembly, it runs the GetTestFiles function, to get the test case parameters, allowing you to run them individually if necessary.

TestCaseSource Function

There is one gotcha. If your TestCaseSource function takes a long time to run, you will need to wait every time NUnit (or TestDriven.NET) reloads your unit test DLL. So my example of examining a directory for files could negatively impact performance if there are too many test cases (I discovered this after loading 14000 test cases from a network drive).

Friday 11 September 2009

Coding Dojo

I went to my first “coding dojo” last night, which was also the first to be run in Southampton, organised by Tom Quinn. You can read about what a coding dojo is here, but the basic idea is that a group of people collaboratively work on solving a basic programming problem, using TDD. To ensure everyone participates, developers are paired and work together for five minutes, before one leaves the pair and another joins.

The task we attempted was to create a calculator that worked with Roman numerals. We started by listing some tests we wanted to pass. This I think was a good way to start, but we actually made very little progress in the first round of coding. Perhaps it was partly the awkwardness of coding in front of a group – fear of looking stupid, or fear of looking like a show-off – so no one was initially willing to stamp their own design onto the problem. The second time round the table it was a different story, with people eager to get the keyboard to implement their ideas.

The other real problem was that we didn’t agree our overall approach to the problem. This was silly, because it was clear that we were going to have to make something to convert Roman numerals to integers, and something to convert integers back to Roman numerals. But because we hadn’t agreed that, we started off making the tests pass with hard coded results.

It meant we ended up with 7 or 8 tests for the top-level object – the calculator, when really we needed some lower-level tests for Roman numeral conversion. It highlighted a TDD anti-pattern: doing “the simplest thing that could make the test pass” should not mean, “hard code return values for all the test cases”. If you do that, you may have passing tests but you are no closer to solving your problem.

Positives: It was good to see everyone getting involved, and the idea of pairing worked well. It was also interesting to see how different people tackled the same problem.

Suggestions: Perhaps enforce the “red-green-refactor” rule a bit more, so that there is more focus on getting a single test passing before you hand over, and more emphasis on refactoring, rather than just passing tests. Five minutes is surprisingly short, and lots of pairs had to take over with the code in a non-compiling, non-working state. You could perhaps increase the timeslot to 7 minutes, although if there are more than 7 or 8 people present, you would be hard pressed to go twice round the room in a two hour meeting. If you had 10 or more people, it might make sense to break into two groups.

Monday 7 September 2009

Custom Object Factory Unity Extension

Suppose you have an object that cannot be directly created by your IoC container. In my case, it is because that object is a .NET remoting object, so it must be created on a different computer.

One way to solve this would be to register a factory that creates your object. But in my application, there are dozens of objects that need to be created in this special way, and they all inherit from a common base interface. Ideally, I would like it to be completely transparent, so I request the type I want, and the container works out that it needs to be built in a special way.

So I set about making a Unity extension, which would allow me to intercept Resolve requests for certain interfaces, and create them using my custom factory method, or return the ones already cached.

The way to accomplish this is to create a Build Strategy, which checks to see if the requested type meets our criteria. If it does, we have a look to see if we have already cached and constructed the object. If not, we call our factory method to construct it, and cache the result. One important thing to notice is that I pass the “Context” from the extension into the build strategy. That is so that if you call Resolve from a child container, it will return the same instance as if you called it from a different child container. Obviously, your requirements may differ.

The if statement in PreBuildUp contains my rule for deciding if this is a Resolve request I want to intercept. Again, this could be customised for any arbitrary logic.

public class FactoryMethodUnityExtension<T> : UnityContainerExtension
{
    private Func<Type,T> factory;

    public FactoryMethodUnityExtension(Func<Type,T> factory)
    {
        this.factory = factory;
    }

    protected override void Initialize()
    {
        var strategy = new CustomFactoryBuildStrategy<T>(factory, Context);

        Context.Strategies.Add(strategy, UnityBuildStage.PreCreation);            
    }
}

public class CustomFactoryBuildStrategy<T> : BuilderStrategy
{
    private Func<Type,T> factory;
    private ExtensionContext baseContext;

    public CustomFactoryBuildStrategy(Func<Type,T> factory, ExtensionContext baseContext)
    {
        this.factory = factory;
        this.baseContext = baseContext;
    }

    public override void PreBuildUp(IBuilderContext context)
    {
        var key = (NamedTypeBuildKey)context.OriginalBuildKey;

        if (key.Type.IsInterface && typeof(T).IsAssignableFrom(key.Type))
        {
            object existing = baseContext.Locator.Get(key.Type);
            if (existing == null)
            {
                // create it
                context.Existing = factory(key.Type);
                
                // cache it
                baseContext.Locator.Add(key.Type, context.Existing);
            }
            else
            {
                context.Existing = existing;
            }
        }
    }
}

Using the extension is very simple. Simply give it the delegate to use to create the objects, and register it as an extension:

WhateverFactory factory = new WhateverFactory();
container = new UnityContainer();
container.AddExtension(new FactoryMethodUnityExtension<IWhatever>(factory.Create));

Here are a couple of blog posts I found helpful while trying to learn how to create a Unity extension:

Thursday 20 August 2009

Lazy Loading of Dependencies in Unity

I have been learning the Unity IoC container recently as we will be making use of it in a project I am working on. Like all IoC containers, it makes it nice and easy to automatically construct an object, fulfilling all its dependencies.

One issue that comes up frequently when using IoC containers, is how to implement lazy loading. For example, suppose my class has a dependency on IEmailSender, but only uses it in certain circumstances. I might not wish for the concrete implementation to be created until I actually know I need it.

public class MyClass(IEmailSender emailSender)

One quick way round this is to take a dependency on the container instead. With Unity, the container comes already registered, so you can simply change the constructor prototype. Now you can call container.Resolve<IEmailSender> at the point you are ready to use it.

public class MyClass(IUnityContainer container)

The disadvantage of this solution is that we have now obscured the real dependencies of MyClass. It could ask for anything it likes from the container, and we have to examine the code to find out what it actually uses. Fortunately, there is a way we can solve this using Unity’s ability to register open generic types.

Suppose we create a generic class called Lazy, that implements ILazy as follows:

public interface ILazy<T>
{
    T Resolve();
    T Resolve(string namedInstance);
}

public class Lazy<T> : ILazy<T>
{
    IUnityContainer container;

    public Lazy(IUnityContainer container)
    {
        this.container = container;
    }

    public T Resolve()
    {
        return container.Resolve<T>();
    }

    public T Resolve(string namedInstance)
    {
        return container.Resolve<T>(namedInstance);
    }
}

Now we need to tell our container to use Lazy when someone asks for ILazy:

container.RegisterType(typeof(ILazy<>),typeof(Lazy<>));

And now that allows us to change our original class to have the following prototype:

public class MyClass(ILazy<IEmailSender> emailSenderFactory)

Now we have advertised that our class depends on an IEmailSender, but we have not created it immediately on construction of MyClass, nor have we allowed MyClass to get at the IUnityContainer itself.
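To illustrate, here’s a sketch of what a consumer might look like (OrderProcessor and the Send method are hypothetical, for illustration only):

```csharp
// Hypothetical consumer: the IEmailSender is only resolved (and therefore
// only constructed by the container) on the code path that actually needs it.
public class OrderProcessor
{
    private readonly ILazy<IEmailSender> emailSender;

    public OrderProcessor(ILazy<IEmailSender> emailSender)
    {
        this.emailSender = emailSender;
    }

    public void Process(bool notifyCustomer)
    {
        // ... process the order ...
        if (notifyCustomer)
        {
            // the concrete implementation is created here, not at construction
            emailSender.Resolve().Send("Your order has been processed");
        }
    }
}
```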

Saturday 8 August 2009

Live Mesh Wishlist

I have been using Windows Live Mesh for over a year now, and I have to say it has become an invaluable tool for me. However, I do have a few feature requests I would like to see, to improve its usefulness and usability.

1. Recycle Bin

The biggest risk with Live Mesh is that if you accidentally delete a file on one PC, it gets deleted on all PCs. So you must not think of Live Mesh as a viable backup option, even though it does replicate your files across multiple PCs. I would like to see an option for deleted files to go into a recycle bin in the cloud. So long as you have available space in your 5GB, it should be possible to restore files you have previously deleted. Obviously it should be possible to empty that recycle bin if necessary.

2. Syncing status

The client software needs to give an indication of whether it is still in syncing or not. Often I make changes to a file at work just before I go home. I don’t want to turn the PC off until I know that the change has been uploaded to the cloud.

3. Moving folder problem

In theory it should not be possible to inadvertently delete everything from a synced folder, but I managed to do so. I moved the parent folder which contained the Live Mesh folder to another drive, and moments later, the contents of the folder had been deleted on all my synchronized devices. Fortunately I was able to restore the data, but I would like Live Mesh to handle this scenario more gracefully.

4. Ignore list

It should be possible to “ignore” certain files in a synchronized folder. This would be particularly useful for source code. You could do this at the subfolder level or by extension / filename matching.

5. Backup Option

Finally, if Microsoft want to monetise Live Mesh, offering a backup solution with it would be ideal. This should keep not just deleted files, but could also store previous versions of files, allowing you to revert to earlier versions. On top of this, you could add a new type of Live Mesh folder, where you right-click and select that you simply want it to be backed up. Live Mesh would then upload its contents to the cloud, but the synchronization with other devices would not be needed. If it were reasonably priced, and you could control the scheduling of its bandwidth usage, I would definitely be interested.

Thursday 6 August 2009

Using TFS to find what files a user has got checked out

Posting this as a reminder to self of how to do it…

tf.exe status /user:username

Can also add a "/s:" parameter to specify a TFS server, but seems to default to the right one for me.

Saturday 4 July 2009

Audio WaveForm Drawing Using WPF

A while ago I blogged about how to display audio waveforms in WinForms. Porting this code to WPF is not as simple as it may first appear. The rendering models are quite different. In Windows Forms, you can draw points or lines using GDI functions. You are responsible for drawing them all every time the window is invalidated and its Paint method is called. However, in WPF, you create objects to put on a Canvas, and WPF manages the rendering and invalidation.

My first attempt was to stay as close to the WinForms model as I could. I have a sample aggregator that looks for the highest sample value over a short period of time. It then uses that to calculate the height of the line it should draw. Every time we calculate a new one, we add a line to our Canvas at the next X position, wrapping round if necessary, and deleting any old lines.

wpf-waveform-1

As can be seen, this gives a reasonable waveform display. I made use of a LinearGradientBrush to try to improve the visual effect a little (although this requires we cheat and keep the waveform symmetrical). There is a big performance problem, however – it is very inefficient to keep throwing new lines onto a Canvas and removing old ones. The solution was to start re-using lines once we had wrapped past the right-hand edge of the display.

private Line CreateLine(float value)
{
    Line line;
    if (renderPosition >= lines.Count)
    {
        line = new Line();
        lines.Add(line);
        mainCanvas.Children.Add(line);
    }
    else
    {
        line = lines[renderPosition];
    }
    line.Stroke = this.Foreground;
    line.X1 = renderPosition;
    line.X2 = renderPosition;
    line.Y1 = yTranslate + -value * yScale;
    line.Y2 = yTranslate + value * yScale;
    renderPosition++;
    line.Visibility = Visibility.Visible;
    return line;
}

This solves our performance issue, but I still wasn’t too happy with the visual effect – it is too obviously composed of vertical lines. I tried a second approach. This added two instances of Polyline to the canvas. Now, we would add a point to each line when a new maximum sample was created. Again the same trick of re-using points once we had scrolled off the right-hand edge was used for performance reasons.

wpf-waveform-2

As nice as this approach is, there is an obvious problem that we are not able to render the bit in between the top and bottom lines. This requires a Polygon. However, we can’t just add new points to the end of the Polygon’s Points collection. We need all of the top line points first, followed by all of the bottom line points in reverse order if we are to create a shape.

The trick is that when a new sample maximum and minimum come in, we either have to insert those values into the middle of the existing Points collection, or, when re-using points, calculate the correct positions in the points array. Notice that I create a new Point object every time to make sure that the Polygon is invalidated correctly.

private int Points
{
    get { return waveForm.Points.Count / 2; }
}

public void AddValue(float maxValue, float minValue)
{
    int visiblePixels = (int)(ActualWidth / xScale);
    if (visiblePixels > 0)
    {
        CreatePoint(maxValue, minValue);

        if (renderPosition > visiblePixels)
        {
            renderPosition = 0;
        }
        int erasePosition = (renderPosition + blankZone) % visiblePixels;
        if (erasePosition < Points)
        {
            double yPos = SampleToYPosition(0);
            waveForm.Points[erasePosition] = new Point(erasePosition * xScale, yPos);
            waveForm.Points[BottomPointIndex(erasePosition)] = new Point(erasePosition * xScale, yPos);
        }
    }
}

private int BottomPointIndex(int position)
{
    return waveForm.Points.Count - position - 1;
}

private double SampleToYPosition(float value)
{
    return yTranslate + value * yScale;
}

private void CreatePoint(float topValue, float bottomValue)
{
    double topYPos = SampleToYPosition(topValue);
    double bottomYPos = SampleToYPosition(bottomValue);
    double xPos = renderPosition * xScale;
    if (renderPosition >= Points)
    {
        int insertPos = Points;
        waveForm.Points.Insert(insertPos, new Point(xPos, topYPos));
        waveForm.Points.Insert(insertPos + 1, new Point(xPos, bottomYPos));
    }
    else
    {
        waveForm.Points[renderPosition] = new Point(xPos, topYPos);
        waveForm.Points[BottomPointIndex(renderPosition)] = new Point(xPos, bottomYPos);
    }
    renderPosition++;
}

This means that our minimum and maximum lines join together to create a shape, and we can fill in the middle bit.

wpf-waveform-3

Now we are a lot closer to the visual effect I am looking for, but it is still looking a bit spiky. To smooth the edges, I decided to only add one point every two pixels instead of every one:

wpf-waveform-4

This tidies up the edges considerably. You can take this a step further and have a point every third pixel, but this highlights another problem – that our polygons have sharp corners as they draw straight lines between each point. The next step would be to try out using Bezier curves, although I am not sure what the performance implications of that would be. Maybe that is a subject for a future post.

The code for these waveforms will be made available in NAudio in the near future.

Thursday 2 July 2009

Where are you going to put that code?

Often we want to modify existing software by inserting an additional step. Before we do operation X, we want to do operation Y. Consider a simple example of a LabelPrinter class, with a Print method. Suppose a new requirement has come in that before it prints a label for a customer in Sweden, it needs to do some kind of postal code transformation.

Approach 1 – Last Possible Moment

Most developers would immediately gravitate to the Print method and put their new code there. This seems sensible – we run the new code at the last possible moment before performing the original task.

public void Print(LabelDetails labelDetails)
{
    if (labelDetails.Country == "Sweden")
    {
        FixPostalCodeForSweden(labelDetails);
    }
    // do the actual printing....
}

Despite the fact that it works, this approach has several problems. It breaks the Single Responsibility Principle: our LabelPrinter class now has an extra responsibility. If we keep coding this way, before long the Print method will become a magnet for special-case features:

public void Print(LabelDetails labelDetails)
{
    if (labelDetails.Country == "Sweden")
    {
        FixPostalCodeForSweden(labelDetails);
    }
    if (printerType == "Serial Port Printer")
    {
        if (!CanConnectToSerialPrinter())
        {
            MessageBox.Show("Please attach the printer to COM1");
            return;
        }
    }
    // do the actual printing....
}

And before we know it, we have a class where the original functionality is swamped with miscellaneous concerns. Worse still, it tends to become untestable, as bits of GUI code, hard dependencies on the file system and the like creep in.

Approach 2 – Remember to Call This First

Having seen that the LabelPrinter class was not really the best place for our new code to be added, the fallback approach is typically to put the new code in the calling class before it calls into the original method:

private void DoPrint()
{
    LabelDetails details = GetLabelDetails();
    // remember to call this first before calling Print
    DoImportantStuffBeforePrinting(details);
    // now we are ready to print
    labelPrinter.Print(details);
}

This approach keeps our LabelPrinter class free from picking up any extra responsibilities, but it comes at a cost. Now we have to remember to call our DoImportantStuffBeforePrinting method before anyone calls LabelPrinter.Print. We have lost the guarantee we had with approach 1 that no one can call Print without the pre-requisite tasks being performed.

Approach 3 – Open/Closed Principle

So where should our new code go? The answer is found in what is known as the “Open/Closed Principle”, which states that classes should be open for extension but closed for modification. In other words, we want LabelPrinter to be extensible, without having to change it every time we come up with some new task that needs to be done before printing.

There are several ways this can be done including inheritance or the use of the facade pattern. I will just describe one of the simplest – using an event. In the LabelPrinter class, we create a new BeforePrint event. This will fire as soon as the Print function is called. As part of its event arguments, it will have a CancelPrint boolean flag to allow event handlers to request that the print is cancelled:

public void Print(LabelDetails labelDetails)
{
    if (BeforePrint != null)
    {
        var args = new BeforePrintEventArgs();
        BeforePrint(this, args);
        if (args.CancelPrint)
        {
            return;
        }
    }
    // do the actual printing....
}
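The event declaration and its argument class are not shown in the snippet above; a minimal sketch of the missing pieces (names inferred from the Print method) might look like this:

```csharp
// Assumed definitions to support the Print snippet above.
public class BeforePrintEventArgs : EventArgs
{
    // A handler sets this to true to cancel the print.
    public bool CancelPrint { get; set; }
}

public class LabelPrinter
{
    // Raised just before a label is printed.
    public event EventHandler<BeforePrintEventArgs> BeforePrint;

    // Print method as shown above...
}
```

The Swedish postal code fix then becomes an ordinary event handler subscribed to BeforePrint, rather than code inside LabelPrinter itself.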

This approach means that our LabelPrinter class keeps to its single responsibility of printing labels (and thus remains maintainable and testable). It is now open for any future enhancements that require an action to take place before printing.

There are a couple of things to watch out for with this approach. First, you would want to make sure that whenever a LabelPrinter is created, all the appropriate event handlers are hooked up (otherwise you run into the same problem as with approach 2). One solution would be to register a fully wired-up LabelPrinter in your IoC container.

Another potential problem is the ordering of the event handlers. For example, checking if you have permission to print would make sense as the first operation. The simplest approach is to add the handlers in the right order.
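Both the wiring and the ordering concerns can be addressed in a single factory method (or container registration); this is just a sketch, with hypothetical handler names:

```csharp
// Sketch: one place that guarantees handlers are attached,
// and attached in the right order.
public static LabelPrinter CreateLabelPrinter()
{
    var printer = new LabelPrinter();
    // order matters: the permission check should run first
    printer.BeforePrint += CheckPrintPermissions;
    printer.BeforePrint += FixSwedishPostalCodes;
    return printer;
}
```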

Conclusion

Always think about where you are putting the new code you write. Does it really belong there? Can you make a small modification to the class you want to change so that it is extensible, and then implement your new feature as an extension to that class? If you do, you will not only keep the original code maintainable, but also protect your new code from accidental breakage, as it is isolated in a class of its own.

Friday 29 May 2009

Creating HTML using NVelocity

I recently had need to create some HTML output from a .NET console application. Often in this scenario, I will simply crank out the HTML in code, constructing it bit by bit with a StringBuilder. However, this time round I decided to look for a more elegant solution. I wanted to create a text file with a template, and for my data to be dynamically put into the right place.

While this could be done with XSLT, or even some custom string replacement code, I decided to try out a .NET templating engine. There are a number of these available, including NHaml, Brail, and Spark, but I chose NVelocity, whose syntax seemed nice and straightforward, allowing other developers to easily see what is going on and make changes to the templates.

Getting the NVelocity DLL

This proved harder than I was expecting. The original NVelocity project has not been updated in several years, but over at the Castle Project they have taken the source and are improving it. However, I couldn’t find a Castle Project download that contained a built DLL, so I ended up having to download the entire Castle Project source using Subversion, and building it.

Creating a Template

This is the nice and easy bit. Here you can see I am printing out an HTML table for a collection of books. I think the NVelocity syntax is pretty self-explanatory.

<h3>Books</h3>

#foreach($book in $books)
#beforeall
<table>
  <tr>
    <th>Title</th>
    <th>Author</th>
  </tr>
#before
  <tr>
#each
    <td>$book.Title</td>
    <td>$book.Author</td>
#after
  </tr>
#afterall
</table>
#nodata
No books found.
#end

Applying the Transformation

Now we need to get hold of the template we created and load it into a stream; I embedded mine as a resource. Next we set up a VelocityContext, which contains all the data to be injected into our HTML. Finally, we create the VelocityEngine and pass it the context and the template. It returns a string, which can be written to disk if required.

public static string TransformBooksToHtml(IEnumerable<Book> books, string resourceTemplateName)
{
    Stream templateStream = typeof(TemplateEngine).Assembly.GetManifestResourceStream(resourceTemplateName);
    var context = new VelocityContext();
    context.Put("books", books);
    return ApplyTemplate(templateStream, context);
}

public static string ApplyTemplate(Stream templateStream, VelocityContext context)
{
    VelocityEngine velocity = new VelocityEngine();
    ExtendedProperties props = new ExtendedProperties();
    velocity.Init(props);
    var writer = new StringWriter();
    velocity.Evaluate(context, writer, "XYZ", new StreamReader(templateStream));
    return writer.GetStringBuilder().ToString();
}

Limitations

One limitation that comes to mind is that I am not sure what would happen if the data contained characters that need to be encoded for HTML (e.g. the less-than symbol). I haven’t tested this scenario, but I am sure there is some way of working around it (especially as NVelocity is intended specifically for scenarios requiring HTML output).
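One simple workaround (untested with NVelocity, so treat this as a sketch) would be to HTML-encode any untrusted values before putting them into the VelocityContext, for example with HttpUtility.HtmlEncode from System.Web. Assuming Book is a simple class with Title and Author string properties:

```csharp
using System.Web; // reference System.Web.dll for HttpUtility

public static Book HtmlEncoded(Book book)
{
    // Returns a copy with HTML-sensitive characters (<, >, &, ")
    // escaped, safe to drop straight into the template output.
    return new Book
    {
        Title = HttpUtility.HtmlEncode(book.Title),
        Author = HttpUtility.HtmlEncode(book.Author)
    };
}
```

The encoded copies, rather than the raw objects, would then be passed to context.Put.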

Friday 27 March 2009

Binding Combo Boxes in WPF with MVVM

I was recently creating a simple WPF application and was trying to use the MVVM pattern. In this pattern, all the controls on your form are data bound to properties on your “View Model” class. While there are lots of examples of how to do this with Text boxes, List boxes, and even master-detail views, it seems that examples for binding Combo boxes are a little thin on the ground.

What I wanted to do was bind the items in the ComboBox to a list in my ViewModel and to track the currently selected item. I found that there does not seem to be one “official” way of doing this. One approach binds both the ItemsSource and the SelectedValue properties of the ComboBox to corresponding properties on the ViewModel. The approach I went with uses a CollectionView, a class included with .NET that encapsulates a list together with the concept of a current item, and which also supports the INotifyPropertyChanged interface.

So here is the XAML first. I set IsSynchronizedWithCurrentItem to true so that the ComboBox tracks the current item of its ItemsSource.

<ComboBox ItemsSource="{Binding Path=Queries}"                 
          IsSynchronizedWithCurrentItem="True"
          DisplayMemberPath="Name" />

The code in the ViewModel first creates the List, and then creates a CollectionView based on that list. This allows us to set the CurrentItem from the ViewModel as well as get notified whenever the CurrentItem changes.

public MainWindowViewModel()
{
    IList<Query> availableQueries = new List<Query>();
    // fill the list...

    Queries = new CollectionView(availableQueries);
    Queries.MoveCurrentTo(availableQueries[0]);
    Queries.CurrentChanged += new EventHandler(queries_CurrentChanged);
}

public CollectionView Queries { get; private set; }

void queries_CurrentChanged(object sender, EventArgs e)
{
    Query currentQuery = (Query)Queries.CurrentItem;
}

I’m not sure yet whether using CollectionView is a better approach than the alternatives I have seen which bind the SelectedValue or SelectedItem property. I would be interested to hear in the comments if you think either approach has benefits over the other. One consideration is that Silverlight doesn’t seem to support CollectionView at the moment.
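For comparison, the SelectedItem-based alternative mentioned above keeps a plain list and a separate property on the ViewModel. This is only a sketch (the Query class shape and property names are assumed); in the XAML, ItemsSource would bind to Queries and SelectedItem would bind two-way to SelectedQuery:

```csharp
using System.Collections.Generic;
using System.ComponentModel;

public class Query
{
    // assumed shape, matching DisplayMemberPath="Name" above
    public string Name { get; set; }
}

public class MainWindowViewModel : INotifyPropertyChanged
{
    private Query selectedQuery;

    public IList<Query> Queries { get; private set; }

    // Bound to ComboBox.SelectedItem in the XAML.
    public Query SelectedQuery
    {
        get { return selectedQuery; }
        set
        {
            if (selectedQuery != value)
            {
                selectedQuery = value;
                OnPropertyChanged("SelectedQuery");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```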

Monday 2 February 2009

An Audio Effects Framework for NAudio

This is just a quick post to point out that I have had an article published on Coding4Fun this week. It demonstrates how to make a voice changing effect for Skype using NAudio. Check out the article here: Skype Voice Changer.

I have now uploaded all the code (and release binaries) to a CodePlex project.

The other thing to say is that the audio effect framework I developed for this application will eventually find its way into NAudio. I am still deciding on quite how best to integrate it, but watch this space for further news.

Have fun talking to your friends in silly voices on Skype!

Tuesday 6 January 2009

XAML SoundWave Take 2

In my previous post, I described a couple of options for creating a sound-wave icon graphic in XAML. While my final solution worked OK, I have since thought up an even simpler approach by using the XAML Path mini-language.

The reason I didn’t initially go for this option was that I didn’t want to calculate the coordinates of each point. However, we can draw the shape rotated by 45 degrees first, which allows us to put each point nicely on a grid, and then use a RotateTransform afterwards to put it back.

The XAML Path is a very useful tool to master if you prefer to work directly with XAML instead of using a design tool like Expression Blend. Here’s the first path, to create a quadrant shape:

<Path Fill="Orange" Data="M 0,0 h 1 a 1,1 90 0 1 -1,1 Z" />

The M command starts us off by moving to the specified coordinates (0,0 in this case). A lower case h means a relative horizontal line – we move one unit to the right. Now the lower case a command means we are drawing an arc using relative coordinates. The first two arguments are the x and y radii of the arc (1,1 = a circular arc of radius 1), then comes the rotation angle of the ellipse in degrees (90 – this has no visible effect on a circular arc), then a flag indicating whether this is a large arc – in our case, no (0) – and then a sweep flag indicating whether we are going clockwise (1 = yes). Finally, we put the end coordinates (-1,1), which are relative to the starting point because we used a lower case a. This path is then closed with the Z command, although this is not strictly necessary for us as we are not using a Stroke brush on this Path.

Now we can draw the other three sections, each of which consists of a horizontal line followed by an arc, then a vertical line, and finally another arc with a smaller radius. I don’t bother with the Z because I have reached my starting point again with the second arc.

<Path Fill="Orange" Data="M 2,0 h 1 a 3,3 90 0 1 -3,3 v -1 a 2,2 90 0 0 2,-2" />
<Path Fill="Orange" Data="M 4,0 h 1 a 5,5 90 0 1 -5,5 v -1 a 4,4 90 0 0 4,-4" />
<Path Fill="Orange" Data="M 6,0 h 1 a 7,7 90 0 1 -7,7 v -1 a 6,6 90 0 0 6,-6" />

We end up with the right graphics but with the wrong orientation:

image

A RotateTransform (plus a ScaleTransform) allows us to put this exactly how we want it:

image

Here’s the complete XAML:

<Page
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Background="#E0E0E0">
  <Grid Width="8" Height="8">
    
    <Grid.RenderTransform>
      <TransformGroup>
      <ScaleTransform ScaleX="10" ScaleY="10" />
      <RotateTransform Angle="-45" />
      </TransformGroup>
    </Grid.RenderTransform>
    
    <Path Fill="Orange" Data="M 0,0 h 1 a 1,1 90 0 1 -1,1 Z" />
    <Path Fill="Orange" Data="M 2,0 h 1 a 3,3 90 0 1 -3,3 v -1 a 2,2 90 0 0 2,-2" />
    <Path Fill="Orange" Data="M 4,0 h 1 a 5,5 90 0 1 -5,5 v -1 a 4,4 90 0 0 4,-4" />
    <Path Fill="Orange" Data="M 6,0 h 1 a 7,7 90 0 1 -7,7 v -1 a 6,6 90 0 0 6,-6" />
  </Grid>
</Page>