Tuesday 30 December 2014

Mixing and Looping with NAudio

On a recent episode of .NET Rocks (LINK), Carl Franklin mentioned that he had used NAudio to create an application that mixes together audio loops, as part of his “Music to Code By” Kickstarter. He had four loops, covering drums, bass, and guitar, and the application allows the volume of each to be adjusted individually. He made a code sample of his application available for download here.

image

This is quite simple to set up with NAudio. To perform the looping part, Carl made use of a LoopStream (using a technique I describe here). The key to looping is in the Read method: read from your source, and if you reach the end (your source returns 0, or fewer bytes than requested), reposition to the start and keep reading. This gives you a WaveStream that never ends.

Here's the code for a LoopStream that a WaveFileReader can be passed into:

/// <summary>
/// Stream for looping playback
/// </summary>
public class LoopStream : WaveStream
{
    WaveStream sourceStream;

    /// <summary>
    /// Creates a new Loop stream
    /// </summary>
    /// <param name="sourceStream">The stream to read from. Note: the Read method of this stream should return 0 when it reaches the end
    /// or else we will not loop to the start again.</param>
    public LoopStream(WaveStream sourceStream)
    {
        this.sourceStream = sourceStream;
        this.EnableLooping = true;
    }

    /// <summary>
    /// Use this to turn looping on or off
    /// </summary>
    public bool EnableLooping { get; set; }

    /// <summary>
    /// Return source stream's wave format
    /// </summary>
    public override WaveFormat WaveFormat
    {
        get { return sourceStream.WaveFormat; }
    }

    /// <summary>
    /// LoopStream simply returns the source stream's Length
    /// </summary>
    public override long Length
    {
        get { return sourceStream.Length; }
    }

    /// <summary>
    /// LoopStream simply passes on positioning to source stream
    /// </summary>
    public override long Position
    {
        get { return sourceStream.Position; }
        set { sourceStream.Position = value; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int totalBytesRead = 0;

        while (totalBytesRead < count)
        {
            int bytesRead = sourceStream.Read(buffer, offset + totalBytesRead, count - totalBytesRead);
            if (bytesRead == 0)
            {
                if (sourceStream.Position == 0 || !EnableLooping)
                {
                    // something wrong with the source stream
                    break;
                }
                // loop
                sourceStream.Position = 0;
            }
            totalBytesRead += bytesRead;
        }
        return totalBytesRead;
    }
}

For mixing, the approach Carl took was simply to create four instances of DirectSoundOut and start them playing together. To allow adjusting the volume of each channel, he passed each LoopStream into a WaveChannel32, which converts it to 32 bit floating point and has a Volume property (1.0 is full volume). To ensure that the four parts remained in sync, deselecting a part doesn't actually stop it playing - instead it sets its volume to 0.
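
A single part's chain in that approach might look something like this (a minimal sketch; the file name and volume are placeholders):

var reader = new WaveFileReader("drums.wav"); // hypothetical file name
var looped = new LoopStream(reader);
var channel = new WaveChannel32(looped) { Volume = 0.8f }; // 1.0f is full volume
var output = new DirectSoundOut();
output.Init(channel);
output.Play();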

This approach to synchronization works surprisingly well, but it is not actually guaranteed to keep the four parts synchronized: over time, they could drift. A better approach is to use a single output device, and feed each of the four WaveChannels into a mixer. Here’s a block diagram showing a modified signal chain with two inputs feeding into a single mixer:

----------   ----------   -----------
| Wave   |   | Loop   |   | Wave    |
| File   |-->| Stream |-->| Channel |---
| Reader |   |        |   | 32      |  |   ------------
----------   ----------   -----------  --->| Mixing   |
                                           | Wave     |
----------   ----------   -----------  --->| Provider |
| Wave   |   | Loop   |   | Wave    |  |   | 32       |
| File   |-->| Stream |-->| Channel |---   ------------
| Reader |   |        |   | 32      |
----------   ----------   -----------



NAudio has a number of options available for mixing. The best is probably MixingSampleProvider, but for Carl's project it was easier to use MixingWaveProvider32, since he's not making use of the ISampleProvider interface. MixingWaveProvider32 lets you mix together any wave providers that are already in 32 bit floating point format.

MixingWaveProvider32 requires that you specify its inputs up front. So here, we can connect each of our inputs and then start playing. With this simple change, Carl's mixing application is now guaranteed not to go out of sync. This is the recommended way to mix multiple sounds with NAudio.

Here's the code that sets up the mixer (Carl has a class called WavePlayer encapsulating the WaveFileReader, LoopStream and WaveChannel32, allowing you to access the WaveChannel32 with the Channel property):

foreach (string file in files)
{
    Clips.Add(new WavePlayer(file));                    
}
var mixer = new MixingWaveProvider32(Clips.Select(c => c.Channel));
audioOutput.Init(mixer);

You can download my modified version of Carl's application here.

The only caveat is that mixers require all their inputs to be in the same format. For this application that isn’t a problem, but if you want to mix together sounds of arbitrary formats, you'd need to convert them all to a common format first. This is something I cover in my NAudio Pluralsight course if you’re interested in finding out more about how to do it.
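
One way to do that conversion is ACM via NAudio's WaveFormatConversionStream. Here's a rough sketch, assuming ACM can handle the requested conversion in a single step (the target format is just an example):

var target = new WaveFormat(44100, 16, 2); // common format: 44.1kHz, 16 bit, stereo
using (var reader = new WaveFileReader("input.wav"))
using (var converted = new WaveFormatConversionStream(target, reader))
{
    // 'converted' is now a WaveStream in the common format,
    // so it could be wrapped in a WaveChannel32 and fed to the mixer
}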

Tuesday 16 December 2014

ClickOnce Deployment Fundamentals

I'm delighted to announce that my sixth Pluralsight course, ClickOnce Deployment Fundamentals is now live. In it I go through all the options available for customising your ClickOnce deployment, as well as how to handle updates, the capabilities of the deployment API, and what gets stored where on the disk. I also have modules covering some of the more advanced parts of ClickOnce such as handling pre-requisites with the bootstrapper, signing your deployment, and using the MAGE tool.

Why ClickOnce?

You may be surprised that I'm doing a course on ClickOnce, since it is now a fairly old and oft-maligned technology. As I explain in the course, it's not the right choice for all installers, but for simple .NET applications, it may actually prove to be the simplest solution for keeping your application automatically up to date. I go through some of the pros and cons in the course, as well as pointing out a few alternatives you might want to consider.

Some ClickOnce Resources

I've tried to give fairly comprehensive coverage of ClickOnce's capabilities in the course, but you can't cover everything, so here are some of the resources I consider most helpful if you are planning to use it yourself.

  • RobinDotNet – Robin is one of the few genuine ClickOnce experts on the web, and she has provided several really helpful articles, including how you can host your ClickOnce deployments in Azure blob storage.
  • MSDN - MSDN may not be the most thrilling documentation to read, but don't overlook it when it comes to ClickOnce, as it is really the only comprehensive source of information you’ll find. Have a look here and here for some useful material.
  • Smart Client Deployment book by Brian Noyes. This really is the best book out there on ClickOnce. Don’t be put off by the fact that it is fairly old now; ClickOnce hasn’t changed an awful lot, so pretty much everything in the book is still relevant.
  • Finally here’s a video that discusses re-signing with MAGE, which shows how to work around a nasty gotcha when re-signing if you are using .deploy file extensions (which you probably are if deploying via the web).

More to Come on Signing…

I’m also hoping to follow this up with another post about the process of signing your ClickOnce applications. I attempted to buy my own code signing certificate to use in the demos for this course, but it has proved surprisingly difficult to complete the purchase (certainly a story for a future blog post), so for the course I just used a self-generated certificate. As soon as I finally get the real deal, I’ll post again showing what difference it makes to the warnings you receive during installation when your app is signed with a certificate issued by a trusted Certificate Authority.

Saturday 29 November 2014

Effective Debugging with Divide and Conquer

I frequently get support requests for NAudio from people who complain that their audio doesn’t sound right. Can I have a look at their code and see what they are doing wrong?

Frequently the code they post contains multiple steps. Audio is recorded, processed, encoded with a codec, sent over the network, received over the network, decoded with a codec, processed some more, and then played.

Now if the audio doesn’t sound right, there’s clearly a problem somewhere, but how can we pinpoint where exactly? It is possible that a code review might reveal the problem, but often you’ll actually need to debug to get to the bottom of a problem like this.

The Number One Debugging Principle

Perhaps the most basic and foundational skill you need if you are ever to debug effectively is to “divide and conquer”. If the bug might be hiding in thousands of lines of code, going through them line by line would take far too long.

What you need to do is divide your code in half. Is the bug in the first half or the second? Once you’ve identified which half contains the problem, do the same again, and before long you’ll have narrowed down exactly where the problem lies. It’s a very simple and effective technique, and yet all too often overlooked.

Divide and Conquer Debugging in Practice

To illustrate this technique, let’s take the audio example I mentioned earlier. Where’s the problem? Well, let’s start by eliminating the network code. Instead of sending audio out over the network, write it to a WAV file. Then play that WAV file in Windows Media Player. If it sounds fine, the problem isn’t in the first half of the system. With one quick test, we’ve narrowed the problem down to the decode and playback side of things.
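
With NAudio, tapping the pipeline like this is only a few lines; a sketch, where waveFormat, buffer and bytesRecorded are assumed to come from your capture code:

// write what would have been sent over the network into a WAV file instead
using (var writer = new WaveFileWriter("network-tap.wav", waveFormat))
{
    writer.Write(buffer, 0, bytesRecorded);
}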

Now we could test half of the remainder of the code by playing back from a pre-recorded file instead of from the network. If that sounds OK, then it’s something in the code that receives over the network and decodes the audio. So we can very quickly zero in on the problem area.

The point is simple: you don’t need to restrict yourself to looking at the output of the entire system to troubleshoot a problem. Look at the state at intermediate points to find out where things are going wrong. And often you don’t need to run through all the code. Can you pass data through just a small part of your logic, to see if the problem resides there?

Learn it and Use it

If you learn the art of divide and conquer, you’ll not only be great at debugging, but it will improve the way you write your code in the first place. Because as I’ve argued before on this blog, divide and conquer is perhaps the most essential skill for a programmer to have.

Friday 14 November 2014

Extending WPF Control Templates with Attached Properties

One of the great things about WPF is that you can completely customise everything about the way a control looks. Most of the time, creating a custom style is more than sufficient to get your controls looking just how you want them. But what if you need your control to change its appearance based on some criteria? At this point, I’d usually be tempted to create my own custom control. But that isn’t actually always necessary.

For example, I recently wanted to create a peak LED button in an audio application that was monitoring sound levels. The button would show the peak decibel level, and when the sound went above a certain threshold, it would go red. To acknowledge the clipping, you could click the button and it would revert to its default colour. So my button needed a boolean property indicating whether it was clipped or not.

image

But how can you add a property to a class without inheriting it? Well in WPF, you can make use of attached properties. You’ve already used them if you’ve set Grid.Row for a control in your XAML. The control itself has no Grid.Row property, but you can associate a grid row value with that control, which enables it to be positioned correctly within the grid.

So we need an attached property that stores whether a peak has been detected or not. Attached properties are similar to dependency properties, if you’ve ever created those before. I can never remember how to write them from scratch, but if you have an example handy, you can copy it. You need to inherit from DependencyObject, register a static DependencyProperty, and provide static getter and setter methods. Here’s the one I created:

public class PeakHelper : DependencyObject
{
    public static readonly DependencyProperty IsPeakProperty = DependencyProperty.RegisterAttached(
        "IsPeak", typeof (bool), typeof (PeakHelper), new PropertyMetadata(false));


    public static void SetIsPeak(DependencyObject target, Boolean value)
    {
        target.SetValue(IsPeakProperty, value);
    }

    public static bool GetIsPeak(DependencyObject target)
    {
        return (bool)target.GetValue(IsPeakProperty);
    }
}

Now we have our attached property, we can use it in our button template. The regular button template is simply a ContentPresenter inside a Border, and we use a Trigger to set the border’s background and border colours when our attached property is true. Otherwise this is a very simplistic button template, with no triggers for mouse-over, pressed, or disabled states.

<ControlTemplate TargetType="Button" x:Key="PeakButtonControlTemplate" >
    <Border x:Name="PeakBorder" BorderBrush="Gray" 
            BorderThickness="2" Background="LightGray">
        <ContentPresenter HorizontalAlignment="Center">
        </ContentPresenter>        
    </Border>
    <ControlTemplate.Triggers>
        <Trigger Property="local:PeakHelper.IsPeak"
                 Value="True">
            <Setter 
                TargetName="PeakBorder"
                Property="BorderBrush"
                Value="Red"></Setter>
            <Setter 
                TargetName="PeakBorder"
                Property="Background"
                Value="Pink"></Setter>

        </Trigger>
    </ControlTemplate.Triggers>
</ControlTemplate>

And that’s all there is to it. We can now use regular MVVM to set the IsPeak attached property to true, which will turn our button red:

<Button Margin="4" 
    Template="{StaticResource PeakButtonControlTemplate}" 
    Content="{Binding MaxVolume}"  
    local:PeakHelper.IsPeak="{Binding IsPeak}" 
    Command="{Binding PeakReset}" 
    Width="40" Height="20" />

Thursday 6 November 2014

Styling a Vertical ProgressBar in WPF

Styling your own custom progress bar in WPF has always been a relatively straightforward task. I blogged about how I created a volume meter style several years ago. But recently I needed to create a style for a vertical progress bar, and it proved a lot more complicated than I anticipated. The root of the problem appears to be a breaking change in .NET 4, which means the PART_Indicator’s Width, rather than its Height, gets adjusted, irrespective of the orientation of the ProgressBar.

It means you end up with vertical progress bars looking like this:

image

Instead of what we want which is this:

image

The trick to fixing this is to use a LayoutTransform as described in this StackOverflow answer. The transform rotates the root element 270 degrees when the ProgressBar’s Orientation property is set to Vertical. If this sounds a bit of a hack to you, well it is, but it does seem to work.

The XAML for the style shown above is as follows:

<Style TargetType="ProgressBar">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="ProgressBar" >
        <Grid x:Name="Root">
          <Border 
            Name="PART_Track" 
            CornerRadius="2" 
            Background="LightGreen"
            BorderBrush="Green"
            BorderThickness="2" />
          <Border 
            Name="PART_Indicator" 
            CornerRadius="2" 
            Background="ForestGreen" 
            BorderBrush="Green" 
            BorderThickness="2" 
            HorizontalAlignment="Left" />
          </Grid>
          <ControlTemplate.Triggers>
            <!-- Getting vertical style working using technique described here: http://stackoverflow.com/a/6849237/7532 -->
            <Trigger Property="Orientation" Value="Vertical">
              <Setter TargetName="Root" Property="LayoutTransform">
                <Setter.Value>
                  <RotateTransform Angle="270" />
                </Setter.Value>
              </Setter>

              <Setter TargetName="Root" Property="Width"
                Value="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Height}"
              />
              <Setter TargetName="Root" Property="Height"
                Value="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Width}"
              />
            </Trigger>            
          </ControlTemplate.Triggers>       
  
        </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

One word of caution: if you plan to use the gradient fill and dock panel technique I described in my blog, you’ll need another trigger to set the MinWidth property on the Mask element. This allows you to get the gradually revealed gradient fill in either Horizontal or Vertical orientation:

image

I’ve made the XAML for these styles available in a gist on GitHub.

Thursday 23 October 2014

Thoughts on the demise of CodePlex and Mercurial

I've been an enthusiastic user of CodePlex ever since it first launched in 2006. 14 of my open source projects are hosted there, including my "main" contribution to .NET open source, NAudio, and my most downloaded project of all time, Skype Voice Changer.

CodePlex was for me a huge improvement over SourceForge, where I had initially attempted to host NAudio back in 2003, but never actually succeeded in figuring out how to use CVS on Windows. Thanks to the TFS to SVN bridge, CodePlex source control was easy to work with using TortoiseSVN, and offered discussion forums, bug tracking, release hosting, and even ClickOnce hosting, which I make use of for a number of projects.

I was particularly excited in 2010 when CodePlex started to support Mercurial for source control. I was just awakening to the amazing power and flexibility of DVCS, and I quickly settled on Mercurial as my preferred alternative to Git - it just seemed to play a lot better with Windows, had a simpler command line syntax, and wasn’t blocked by my work firewall. So I made the switch to Mercurial for NAudio in 2011.

Git vs Mercurial

It became obvious soon after making my decision that Git was definitely winning the popularity contest in the DVCS space. Behind the sometimes arcane command line syntax, there was an incredibly powerful feature-set there, and slowly but surely thanks to tools like SourceTree and GitHub for Windows, the developer experience on Windows improved and overtook Mercurial.

A case in point would be Microsoft’s decision to support Git natively in Visual Studio. Thanks to the built-in integration, it is trivially easy to enable Git source control for every project you create. For a long time I hoped that Mercurial support would follow, especially since Microsoft had appeared to back it in the past through CodePlex, but they made it clear that Mercurial support was never coming.

Likewise with Windows Azure, when Microsoft added the ability to deploy using DVCS, it was Git that was supported, and Mercurial users were left out in the cold again. I believe that has actually now been rectified, but for me at least, the damage had been done. I’ve used Git for almost all my new projects for over a year now, and I only really use Mercurial now for my legacy projects. It’s obvious that if I want to integrate with the latest tooling, I need to be using Git, not Mercurial.

GitHub vs CodePlex

Although CodePlex added Mercurial hosting back in 2010, it was obvious that they were rapidly losing users to GitHub, and in 2012, they finally added Git support. In theory this should have revived CodePlex as the premier hosting site for .NET open source, but it became apparent that they were falling behind in other areas too. GitHub’s forking and pull request system is very slick, and GitHub pages is a much nicer option than the rather clunky wiki system that CodePlex uses for documentation.

For a while it looked like CodePlex was fighting back, with regular updates of new features, but the CodePlex blog has had no news to announce for over a year now, and perhaps more of an indictment, GitHub has become the hosting provider of choice for the new and exciting open source projects coming out of Microsoft, such as the new ASP.NET vNext project. There are some notable exceptions such as Roslyn (which is considering a move to GitHub) and Visual F# tools (which has a top-voted feature request to move to GitHub).

Does CodePlex offer any benefits over GitHub? Well there are a few. I like having a separate Discussion forum to my Issues list. The ClickOnce hosting is useful for several of my projects. And I can’t complain about the modest income stream that their DeveloperMedia ad integration allows you to tap into if you have a popular project you’d like to generate some income for. But GitHub is a clear winner in most other respects.

Standardisation vs Competition

Now it could be considered a good thing that Git has won the DVCS war and GitHub has won the open source hosting war. It allows us all to embrace them and standardise on one way of working, saving time learning multiple tools and workflows. But there is a part of me that feels reluctant to walk away from Mercurial and CodePlex, as a lack of competition in these spaces will ultimately leave us poorer. If there is no viable competition, what will drive Git and GitHub to keep innovating, and meeting the needs of all their users?

For example, GitHub at one point unexpectedly decided to ditch their uploads feature. This immediately made it unsuitable for hosting lots of the sorts of projects that I work on, which are released as a set of binaries. It looks like they have remedied the situation now with a releases feature, but for me that did highlight a danger that they were already in such a position of strength they could afford to make a decision that would be hugely unpopular with many of their users.

I’m also uneasy about the way that GitHub has become established in the minds of many developers as the only place that counts when evaluating someone’s contribution to open source. My CoderWall page simply ignores all my CodePlex work and focuses entirely on a few peripheral projects I have hosted on GitHub and BitBucket. My OpenHub (formerly Ohloh) page does at least attempt to track my CodePlex work, but somehow only picks up a very limited subset of my actual commit history (apparently I did almost nothing in the last two years). I’d rather they didn’t show anything about my commit history than a misrepresentation. I’ve also read numerous blogs proclaiming that you should only hire a developer after checking their GitHub contributions. So it is concerning that all the work I have done on CodePlex counts for nothing in the minds of some, simply because I did it on the wrong hosting site with the wrong DVCS tool. Hopefully the new Microsoft MVP criteria won’t take the same blinkered approach.

Time to Transition?

So I find myself at the end of 2014 wondering whether the time has come to migrate NAudio to GitHub. I was initially against the idea, but it would certainly make it easier to accept contributions (very few people are willing to learn Mercurial), and GitHub pages would be a great platform to build improved documentation on. And all of a sudden these tools that attempt to “rank” you as a contributor to open source would finally recognize me as having done something!

But part of me hopes that the likes of CodePlex and Mercurial will have a renaissance, and that new DVCSs (Veracity?) and alternative open source hosting sites like the excellent BitBucket will continue to grow and flourish, providing real competition to Git and GitHub and spurring them on to more innovation.

I’d love to know your thoughts on this in the comments. Have you transitioned your open source development to Git and GitHub, and why / why not?

TLDR: Git is awesome and GitHub is awesome but the software development community is poorer for there being no viable competition.

Thursday 16 October 2014

How to Create Circular Backgrounds for your Font Awesome Icons

I was recently using the Font Awesome icons for a webpage I was creating. They provide a really nice scalable set of general purpose icons, and I wanted them to appear on a circular background, something like this:

image

But not being an expert in CSS, I didn’t know how to set this up. After a bit of searching and experimentation I found two different ways to create this effect.

Method 1: CSS circles

The first is to create the circle using the border-radius CSS property, with some padding to create space around your icon. You can set the radius to 50% (or to half the width and height) to create a circle. The only catch is that the container needs to be square, otherwise you’ll end up with an ellipse. For example, if I use this style:

.circle-icon {
    background: #ffc0c0;
    padding:30px;
    border-radius: 50%;
}

And then try to use it with a font awesome icon like this:

<i class="fa fa-bicycle fa-5x circle-icon"/>

I get an ellipse:

image

So we need to specify the height and width explicitly, which means we also need some rules to get our icon centred horizontally (using text-align) and vertically (using line-height and vertical-align). So if we update our CSS style like this:

.circle-icon {
    background: #ffc0c0;
    width: 100px;
    height: 100px;
    border-radius: 50%;
    text-align: center;
    line-height: 100px;
    vertical-align: middle;
    padding: 30px;
}

Now we get the circle as desired:

image

So mission accomplished, sort of, although it feels a shame to have to specify exact sizing. Maybe any CSS experts reading this can tell me a better way of accomplishing this effect. The good news is that Font Awesome itself has a concept of “stacked” icons, which offers another way to achieve the same effect.

Method 2 – Stacked Icons

Stacked icons are basically two Font Awesome icons drawn on top of each other. Font Awesome comes with the fa-circle icon, a solid circle, so we can use that for the background. We need to style it to set the background colour correctly, and we use fa-stack-2x on this icon to indicate that it must be drawn at twice the size of the icon that will appear inside the circle. Then we put both icons inside a span with the fa-stack class, and we can still use the regular Font Awesome styles to choose the overall icon size, such as fa-4x. So here, for example, I have:

<span class="fa-stack fa-4x">
  <i class="fa fa-circle fa-stack-2x icon-background"></i>
  <i class="fa fa-lock fa-stack-1x"></i>
</span>

where icon-background is simply specifying a background colour:

.icon-background {
    color: #c0ffc0;
}

and this looks like this:

image

As you can see this is a nice simple technique, and I prefer it to the CSS approach. Font Awesome also includes circles with borders, so if you want to create something like this (I used three stacked icons), you can:

image

To experiment with this yourself, try this jsfiddle.

Monday 6 October 2014

Auto-Registration with StructureMap

When I want an IoC container I usually either use Unity or create my own really simple one. But I’ve been meaning to try out some of the alternatives, and so I decided to give StructureMap a try for my latest project.

I had two tasks to accomplish. The first was to tell it to use a particular concrete class (ConsoleLogger) as the implementer of my ILog interface. The second was to get it to scan the assembly and find all the implementers of my ICommand interface, without having to register each one explicitly.

As you’d expect, the first task is very simple to accomplish. You create a new Container, and then map the interface to the concrete implementer using a fluent API. Here I’ve said that my logger will be a singleton:

var container = new Container(x => 
    x.ForSingletonOf<ILog>().Use<ConsoleLogger>());
var logger = container.GetInstance<ILog>();

The second task is also straightforward to achieve with StructureMap. We ask StructureMap to scan the calling assembly, and add all types of ICommand. The one gotcha is that I had to make the ICommand interface and implementing classes public for it to detect them. Here’s the registration code:

var container = new Container(x =>
    {
        x.ForSingletonOf<ILog>().Use<ConsoleLogger>();
        x.Scan(a =>
           {
               a.TheCallingAssembly();
               a.AddAllTypesOf<ICommand>();
           });
   });

Now we can easily get hold of all the implementations of the ICommand interface like so:

var commands = container.GetAllInstances<ICommand>();

As you can see, it’s very straightforward. In fact, there are ways with StructureMap to simplify things further by making use of conventions, as the sketch below shows. So if you’re looking for an IoC container that can do auto-registration, why not give StructureMap a try?
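
For example, the default conventions scanner can wire interfaces up to identically-named concrete classes without any explicit registration; a sketch, assuming your naming follows the IFoo/Foo convention:

var container = new Container(x =>
    x.Scan(a =>
    {
        a.TheCallingAssembly();
        a.WithDefaultConventions(); // e.g. maps ILog to a concrete class named Log
    }));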

Thursday 2 October 2014

September Pluralsight Course Recommendations

I blogged in August about some of my Pluralsight course recommendations, and so I thought I’d give another brief roundup of some of the best ones I’ve watched in the last month or two.

Hack your API First (Troy Hunt). With the “shellshock” and “heartbleed” vulnerabilities making national headlines, everyone wants to be sure that their website or network connected application is secure. But do you have the confidence as a developer that you know how to ensure your programs are safe from attack? Troy Hunt has created several security related Pluralsight courses, and they are all excellent as well as being a lot of fun to watch. This latest one gives lots of practical guidance on how you can ensure any APIs your application exposes or uses can be kept secure.

Executable Specifications (Elton Stoneman). Lots of developers (myself included) have embraced the practice of writing “unit tests”, but often these tests are very low level and only cover small components in isolation. What if we could write specifications in such a way that acceptance testing of the whole stack could be automated? Don’t believe it’s possible? Well you might change your mind after watching this course. Elton Stoneman does a superb job of showing the power of the SpecFlow framework. I’ll definitely be watching more of his courses in the future, and looking out for a chance to try SpecFlow on one of my own projects.

F# Functional Data Structures (Kit Eason). F# is a language I find very exciting, and I am slowly getting to grips with its syntax. The real challenge though is to go beyond simply writing C# code in F# syntax, and to start taking advantage of the power of functional programming. In this course Kit Eason guides you through the various data structures offered by F#, showing you how, when and why to use them. You should definitely check it out if you are learning F#.

.NET Interoperability Fundamentals (Pavel Yosifovich). Probably the biggest headache for me when I started creating NAudio was having to learn how to interoperate with unmanaged code effectively. I’ve done a huge amount of P/Invoke to Windows APIs as well as COM interop for the more modern Windows APIs, and it has been a painful and error prone process. This is the course I wish I could have watched 10 years ago. Pavel knows his stuff, and really I’d describe this course as an expert chatting about everything you need to know to effectively work with unmanaged code. Well worth watching if you need to do any kind of interop.

So that’s my recommendations for this month. I know there are probably plenty of other good ones I missed, so let me know in the comments what you’ve been watching. And I know my blogging output has been reduced over the summer. I’m hoping to get back up to speed in the near future, and I’ll have news to share about my next Pluralsight course soon.

Saturday 27 September 2014

Announcing Windows Forms Best Practices

Those of you following my blog will know that I’ve been working on a course entitled Windows Forms Best Practices for Pluralsight. I’m pleased to announce that it is now live, and you can check out some of my ideas for how to write better Windows Forms applications (as well as how to incrementally migrate away to a newer technology).

The approach I took was to make a small Windows Forms demo application in the style often seen in typical line of business applications – all the code sits inside the code behind of a monolithic main form. And through the course I improve this application in various ways, both from a usability and a maintainability perspective.

Along the way I refactor it towards a Model View Presenter pattern, and improve it in various other ways, such as better threading and exception handling, and making it resizable and localizable.

One regret I do have is that there simply wasn’t time to show how the code could be refactored completely to MVP – I just did the first step. I have however since continued the refactoring and have a version of the demo application that uses MVP more extensively. I’ll try to make this available for viewers of my course, so do get in touch (@mark_heath on Twitter) if you’d like to get hold of this.

The course is available for viewing here. Hope you find it helpful.

Wednesday 10 September 2014

Creating RF64 and BWF WAV files with NAudio

There are two extensions to the standard WAV file format which you may sometimes want to make use of. The first is the RF64 extension, which overcomes the inherent limitation that WAV files cannot be larger than 4GB. The second is the Broadcast Wave Format (BWF) which builds on the existing WAV format and specifies various extra chunks containing metadata.

In this post, I’ll explain how you can make a class to create Broadcast Wave Files using NAudio that supports large file sizes using the RF64 extension, and includes the “bext” chunk from the BWF specification.

File Header

First of all, a WAV file starts with the byte sequence ‘RIFF’ and then has a four byte size value, which is the number of bytes following in the entire file. However, for large files, instead of ‘RIFF’, ‘RF64’ is used, and the following four byte integer for the RIFF size is then ignored (it should be set to -1).

Then we have the ‘WAVE’ identifier (another 4 bytes), and following that in a normal WAV file we would usually expect the format chunk (with the ‘fmt ‘ 4 byte identifier). But to support RF64, we add a “JUNK” chunk. This is of size 28 bytes, and initially is all set to zeroes. If the overall size of the entire WAV file grows to over 4GB, then we will turn this “JUNK” chunk into a ‘ds64’ chunk. If the overall file size does not grow beyond 4GB, then the junk chunk is simply left in place, and media players will just ignore it.
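
As a sketch of what that header looks like in code (WavHeaderWriter is a hypothetical helper, not part of NAudio):

using System.IO;
using System.Text;

static class WavHeaderWriter
{
    public static void WriteHeader(BinaryWriter writer, bool isRf64, int riffSize)
    {
        writer.Write(Encoding.ASCII.GetBytes(isRf64 ? "RF64" : "RIFF"));
        writer.Write(isRf64 ? -1 : riffSize); // for RF64 the 32 bit RIFF size is ignored
        writer.Write(Encoding.ASCII.GetBytes("WAVE"));
        // 28 byte JUNK placeholder, zeroed; rewritten as a ds64 chunk if the file passes 4GB
        writer.Write(Encoding.ASCII.GetBytes("JUNK"));
        writer.Write(28);
        writer.Write(new byte[28]);
    }
}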

The ds64 Chunk

A ds64 chunk consists of three 8 byte integers and a four byte integer. These are: the RIFF size, which is the size of the entire file minus 8 bytes; the data size, which is the number of bytes of sample data in the ‘data’ chunk; and the sample count, which is the number of samples. The sample count is optional really, as it corresponds to the sample count found in the ‘fact’ chunk of a standard WAV file; that chunk is usually only present for non-PCM audio formats, since it is trivial to calculate the sample count for PCM from the byte count. Finally, a ds64 chunk can have a table containing the sizes of any other huge chunks, but usually this is unused, since it is likely only the ‘data’ chunk that will grow larger than 4GB. So the final four bytes of a typical ds64 chunk are 0s, indicating no table entries.

The bext chunk

Following the ds64 chunk, we have the bext chunk from the BWF specification. This has space for a textual description of the file as well as timestamps, and newer versions of the bext chunk also allow you to specify various bits of loudness information. The algorithms for calculating this are hard to track down, so I tend to use version 1 of bext and ignore them.

The fmt chunk

Then we have the standard ‘fmt ‘ chunk, which works just the same way it does in a standard WAV file, containing a WAVEFORMATEX structure with information about sample rate, bit depth, encoding and number of channels. RF64 files are almost always either PCM or IEEE floating point samples, since it is only with uncompressed audio that you typically end up creating files larger than 4GB.

The data chunk

Finally, we have the ‘data’ chunk, containing the actual audio data. Again this is used in exactly the same way as it is in a regular WAV file, except that the chunk data length only needs to be filled in if the file is less than 4GB. If it is an RF64 file, the four byte length for the data chunk is ignored (set it to –1), and the size from the ds64 chunk is used instead.

The code

Here’s a simple implementation of a BWF writer class that creates BWF files with a simple bext chunk and turns them into RF64 files if necessary. I plan to clean this code up a little and import it into NAudio in the future (either as its own class, or by upgrading WaveFileWriter to support RF64 and BWF – let me know your preference in the comments).

Friday 8 August 2014

Brush up on your Languages with Pluralsight

Over the last year not only have I created a number of courses for Pluralsight, I’ve also watched a lot too. Most of the time, I’m not watching to learn a brand new technology, but as a refresher for something I’ve already used a bit. Often in just an hour or two (on 1.3x playback) you can watch a whole course and pick up loads of great tips.

I’ve found it a particularly effective way to brush up on my skills in a few programming languages that I’m semi-proficient in, but not completely “fluent” in. So here’s a few programming language related courses that I can recommend:

First, last year I was glad I watched Structuring JavaScript by Dan Wahlin, as I had been hearing lots of people talking about the “revealing module pattern” and “revealing prototype pattern” but hadn’t yet properly learned what those patterns were. He explains them simply and clearly.

Another great course is Python Fundamentals by Austin Bingham and Robert Smallshire. It’s been a number of years since I did any serious Python development, so my skills had grown a bit rusty. This superbly presented course is a brilliant introduction to Python, and filled in a couple of gaps in my knowledge. They’ve got a follow-up course out as well which is undoubtedly also worth watching.

Third, several times over the years I’ve tried and failed to get to grips with PowerShell. The Everyday PowerShell for Developers course by Jim Christopher was exactly what I needed as it shows how to do the sorts of things developers will be interested in doing with PowerShell.

And finally, the F# section of the Pluralsight library is still small, but growing fast, and one fascinating course was Mark Seemann’s Functional Architecture with F#. It’s fast-moving but gives fascinating insights into how you could architect a typical line of business application in a more functional way.

Anyway, that’s enough recommendations for now. I have several other courses I want to highlight, so maybe this will become a regular blog feature. Let me know in the comments if there are any must-see courses you’ve come across.

Wednesday 30 July 2014

Going Beyond the Limits of Windows Forms

One of the things I explore in my Windows Forms Best Practices course on Pluralsight (soon to be released) is how you can go beyond the limitations of the Windows Forms platform, and integrate newer technologies into your legacy applications. Here are three ways you can extend the capabilities of your Windows Forms applications.

1. Use Platform Invoke to harness the full power of the Windows Operating System

Because Windows Forms has not been significantly updated for several years now, many of the newer capabilities of Windows are not directly supported. For example, Windows Forms controls do not provide you with touch events representing your gestures on a touch screen.

However, this does not mean you are limited to using only what the System.Windows.Forms namespace has to offer. For example, with a bit of Platform Invoke, you can register to receive WM_TOUCH messages in your application with a call into the RegisterTouchWindow API in your Form load event. Then you can override the WndProc method on your form to detect the WM_TOUCH messages, and use another Windows API, GetTouchInputInfo, to interpret the message parameters.
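
A sketch of the shape of that interop, inside your Form class (Form1_Load is a hypothetical handler; the GetTouchInputInfo declaration and TOUCHINPUT struct are omitted):

// requires using System.Runtime.InteropServices;
[DllImport("user32.dll")]
static extern bool RegisterTouchWindow(IntPtr hWnd, uint ulFlags);

const int WM_TOUCH = 0x0240;

private void Form1_Load(object sender, EventArgs e)
{
    RegisterTouchWindow(this.Handle, 0); // start receiving WM_TOUCH messages
}

protected override void WndProc(ref Message m)
{
    if (m.Msg == WM_TOUCH)
    {
        // unpack the touch points here with the GetTouchInputInfo API
    }
    base.WndProc(ref m);
}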

This may sound a little complicated to you, but there is a great demo application that is part of the Windows 7 SDK which you can read about here. Of course it would be much nicer if Windows Forms had built-in touch screen support, but don’t let the fact that it doesn’t stop you from taking advantage of operating system features like this. Anything the Windows API can do, your Windows Forms application can do, thanks to P/Invoke.

2. Use the WebBrowser control to host Web Content

Many existing Windows Forms line of business applications are being gradually replaced over time with newer HTML 5 applications, allowing them to be accessed from a much broader set of devices. But the transition can be a slow process, and sometimes the time required to rewrite the entire application is prohibitive. However, with the WebBrowser Windows Forms control, you can host any web content you like, meaning that you could incrementally migrate certain parts of your application to web pages.

The WebBrowser control is essentially Internet Explorer hosted inside a Windows Forms control, and will use whatever version of IE you have installed. Frustratingly, it defaults to IE 7 compatibility mode, and there isn’t an easy programmatic way to change that. However, if you are in control of the HTML that is rendered, or are able to write a registry key, then you can use one of the two techniques described here to fix it.

The WebBrowser control actually has a few cool properties, such as the ability to let you explore and manipulate the DOM as a HtmlDocument using the WebBrowser’s Document property. You can even provide a .NET object that can be manipulated from JavaScript with the ObjectForScripting property. So two-way communication, from your .NET code to the webpage and vice versa, is possible.
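
For example (ScriptBridge and showMessage are hypothetical names; requires using System.Runtime.InteropServices;):

[ComVisible(true)] // ObjectForScripting requires a COM-visible type
public class ScriptBridge
{
    // JavaScript can call this via window.external.SaveRequested('...')
    public void SaveRequested(string data) { /* handle the request */ }
}

public partial class MainForm : Form
{
    private void SetUpBrowser()
    {
        webBrowser1.ObjectForScripting = new ScriptBridge();
        // once a document has loaded, .NET can call into the page's JavaScript:
        webBrowser1.DocumentCompleted += (s, e) =>
            webBrowser1.Document.InvokeScript("showMessage", new object[] { "Hello from .NET" });
    }
}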

Obviously composing your application partly out of Windows Forms controls and partly out of hosted web pages won’t be a completely seamless user experience, but it may provide a way for you to incrementally retire various parts of a large legacy Windows Forms application and replace them with HTML 5.

3. Use the ElementHost control to host WPF content

Alternatively, you may wish to move away from Windows Forms towards WPF and XAML. Not all applications are suited to being web applications, and the WPF platform offers many advantages in terms of rendering capabilities that Windows Forms developers may wish to take advantage of. It also provides a possible route towards supporting other XAML based platforms such as Windows Store apps or Windows Phone apps.

The ElementHost control allows you to host any WPF control inside a Windows Forms application. It’s extremely simple to use, and with the exception of a few quirks here and there, works very well. If you made use of a Model View Presenter pattern, where your Views are entirely passive, then migrating your application to WPF from Windows Forms is not actually as big a task as you might imagine. You simply need to re-implement your view interfaces with WPF components instead of Windows Forms. In my Pluralsight course I show how easy this is, simply swapping out part of the interface for the demo application with a WPF replacement.
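
A minimal sketch, assuming a WPF UserControl called WpfDetailsView that implements one of your view interfaces:

var host = new ElementHost { Dock = DockStyle.Fill };
host.Child = new WpfDetailsView();
detailsPanel.Controls.Add(host); // detailsPanel: an existing Windows Forms panel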

So if you are working on a Windows Forms application and find yourself frustrated by the limitations of the platform, remember that you aren’t limited to what Windows Forms itself has to offer. Consider one of these three techniques to push the boundaries of what is possible, and start to migrate towards more modern UI development technologies at the same time.

Friday 18 July 2014

10 Ways to Create Maintainable and Testable Windows Forms Applications

Most Windows Forms applications I come across have non-existent or extremely low unit test coverage. And they are also typically very hard to maintain, with hundreds if not thousands of lines of code in the code behind of the various Form classes in the project. But it doesn’t have to be this way. Just because Windows Forms is a “legacy” technology doesn’t mean you are doomed to create an unmaintainable mess. Here are ten tips for creating maintainable and testable Windows Forms applications. I’ll be covering many of these in my forthcoming Pluralsight course on Windows Forms best practices.

1. Segregate your user interface with User Controls

First, avoid putting too many controls onto a single form. Usually the main form of your application can be broken up into logical areas (which we could call “views”). You will make life much easier for yourself if you put the controls for each of these areas into their own container, and in Windows Forms, the easiest way to do this is with a User Control. So if you have an explorer style application, with a tree view on the left and details view on the right, then put the TreeView into its own UserControl, and create a UserControl for each of the possible right-hand views. Likewise if you have a tab control, create a separate UserControl for each page in the tab control.

Doing this not only keeps your classes from getting unmanageably large, but it also makes tasks like setting up resizing and tab order much more straightforward. It also allows you to easily disable whole sections of your user interface in one go where necessary. You’ll also find when you break up your user interface into smaller UserControls containing logically grouped controls, that it becomes much easier to redesign the UI layout of your application.

2. Keep non UI code out of code behind

Invariably in Windows Forms applications you’ll find code accessing the network, database or file system in the code behind of a form. This is a serious violation of the “Single Responsibility Principle”. The focus of your Form or UserControl class should simply be the user interface. So when you detect non UI related code exists in your code behind, refactor it out into a class with a single responsibility. So you might create a PreferencesManager class for example, or a class that is responsible for calls to a particular web service. These classes can then be injected as dependencies into your UI components (although this is just the first step – we can take this idea further as we’ll see shortly). 

3. Create passive views with interfaces

One particularly helpful technique is to make each of the forms and user controls you create implement a view interface. This interface should contain properties that allow the state and content of the controls in the view to be set and retrieved. It may also include events to report back user interactions such as clicking on a button or moving a slider. The goal is that the implementation of these view interfaces is completely passive. Ideally there should be no conditional logic whatsoever in the code behind of your Forms and UserControls.

Here’s an example of a view interface for a new user entry view. The implementation of this view should be trivial; business logic doesn’t belong in the code behind (we’ll discuss where it does belong next).

interface INewUserView
{
    string FirstName { get; set; }
    string LastName { get; set; }
    event EventHandler SaveClicked;
}

By ensuring that your view implementations are as simple as possible, you will maximise your chances of being able to migrate to an alternative UI framework such as WPF, since the only thing you will need to do is recreate the views in the new technology. All your other code can be reused.

4. Use presenters to control the views

So if you have made all your views passive and implement interfaces, you need something that will implement the business logic of your application and control the views. And we can call these “presenter” classes. This is the pattern known as “Model View Presenter” or MVP.

In Model View Presenter your views are completely passive, and the presenter instructs the view what data to display. The view is also allowed to communicate back to the presenter. In my example above it does so by raising an event, but often with this pattern your view is allowed to make direct calls into the presenter.

What is absolutely not allowed is for the view to start directly manipulating the model (which includes your business entities, database layer etc). If you follow the MVP pattern, all the business logic in your application can be easily tested because it resides inside presenter or other non-UI classes.
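
To make this concrete, here's a sketch of a presenter for the INewUserView shown earlier; IUserRepository and User are hypothetical model types standing in for your own:

public class NewUserPresenter
{
    private readonly INewUserView view;
    private readonly IUserRepository userRepository;

    public NewUserPresenter(INewUserView view, IUserRepository userRepository)
    {
        this.view = view;
        this.userRepository = userRepository;
        view.SaveClicked += (s, e) => Save();
    }

    private void Save()
    {
        // business logic lives here, where it can be unit tested with a fake view
        userRepository.Add(new User(view.FirstName, view.LastName));
    }
}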

5. Create services for error reporting

Often your presenter classes will need to display error messages. But don’t just put a MessageBox.Show into a non-UI class. You’ll make that method impossible to unit test. Instead create a service (say IErrorDisplayService) that your presenter can call into whenever it needs to report a problem. This keeps your presenters unit testable, and also provides flexibility to change the way errors are presented to the user in the future.
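
A sketch of what such a service might look like (the names here are mine):

public interface IErrorDisplayService
{
    void ShowError(string title, string message);
}

// the Windows Forms implementation is the only place a MessageBox appears
public class MessageBoxErrorDisplayService : IErrorDisplayService
{
    public void ShowError(string title, string message)
    {
        MessageBox.Show(message, title, MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}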

6. Use the Command pattern

If your application contains a toolbar with a large number of buttons for the user to click, the Command pattern may be a very good fit. The command pattern dictates that you create a class for each command. This has the great benefit of segregating your code into small classes each with a single responsibility. It also allows you to centralise everything to do with a particular command. Should the command be enabled? Should it be visible? What is its tooltip and shortcut key? Does it require a specific privilege or license to execute? How should we handle exceptions thrown when the command runs?

The command pattern allows you to standardise how you deal with each of these concerns that are common to all the commands in your application. Your command object will have an Execute method that actually contains the code to perform the required behaviour for that command. In many cases this will involve calling into other objects and business services, so you will need to inject those as dependencies into your command object. Your command objects themselves should be possible (and straightforward) to unit test.
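
Here's a sketch of what one command might look like (IAppCommand is a hypothetical command interface of your own; IProjectService is a hypothetical business service):

public interface IAppCommand
{
    string ToolTip { get; }
    bool Enabled { get; }
    void Execute();
}

public class SaveProjectCommand : IAppCommand
{
    private readonly IProjectService projectService;

    public SaveProjectCommand(IProjectService projectService)
    {
        this.projectService = projectService;
    }

    public string ToolTip { get { return "Save the current project"; } }
    public bool Enabled { get { return projectService.HasUnsavedChanges; } }

    public void Execute()
    {
        projectService.Save();
    }
}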

7. Use an IoC container to manage dependencies

If you are using Presenter classes and Command classes, then you will probably find that the number of classes they depend on grows over time. This is where an Inversion of Control container such as Unity or StructureMap can really help you out. They allow you to easily construct your views and presenters no matter how many levels of dependencies they have.

8. Use the Event Aggregator pattern

Another design pattern that can be really useful in a Windows Forms application is the event aggregator pattern (sometimes also called a “Messenger” or an “event bus”). This is a pattern where the raiser of an event and the handler of an event do not need to be coupled to each other at all. When an “event” happens in your code that needs to be handled elsewhere, simply post a message to the event aggregator. Then the code that needs to respond to that message can subscribe and handle it, without needing to worry about who is raising it.

An example might be that you send a “help requested” message with details of where the user currently is in the UI. Another service then handles that message and ensures the correct page in the help documentation is launched in a web browser. Another example is navigation. If your application has many screens, a “navigate” message can be posted to the event aggregator, and then a subscriber can respond to that by ensuring that the new screen is displayed in the user interface.

As well as radically decoupling publishers and subscribers of events, the event aggregator also has the great benefit of creating code that is extremely easy to unit test.
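
A sketch of the navigation example, assuming a hypothetical IEventAggregator interface exposing Publish and Subscribe<T> (several frameworks provide something similar); a subscriber elsewhere would call eventAggregator.Subscribe<NavigateMessage>(...) and show the requested screen:

public class NavigateMessage
{
    public string TargetScreen { get; private set; }
    public NavigateMessage(string targetScreen) { TargetScreen = targetScreen; }
}

public class ToolbarPresenter
{
    private readonly IEventAggregator eventAggregator;

    public ToolbarPresenter(IEventAggregator eventAggregator)
    {
        this.eventAggregator = eventAggregator;
    }

    // the publisher has no reference to whoever performs the navigation
    public void UserDetailsButtonClicked()
    {
        eventAggregator.Publish(new NavigateMessage("UserDetails"));
    }
}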

9. Use Async and Await for threading

If you are targeting .NET 4 and above and using Visual Studio 2012 or newer, don’t forget that you can make use of the new async and await keywords, which greatly simplify any threading code in your application, and automatically handle the marshalling back onto the UI thread when the background tasks have completed. They also hugely simplify handling exceptions across multiple chained background tasks. They are a great fit for Windows Forms applications and well worth checking out if you haven’t already.

10. Don’t leave it too late

It is possible to retrofit all the patterns and techniques I described above to an existing Windows Forms application, but I can tell you from bitter experience, it can be a lot of work, especially once the code behind for forms gets into the thousands-of-lines territory. If you start off building your applications with patterns like MVP, event aggregators and the command pattern, you’ll find them a lot less painful to maintain as they grow bigger. And you’ll also be able to unit test all your business logic, which is critical for ongoing maintainability.

Thursday 10 July 2014

Six ways to initiate tasks on another thread in .NET

Over the years, Microsoft have provided us with multiple different ways to kick off tasks on a background thread in .NET. It can be rather bewildering to decide which one you ought to use. So here’s my very quick guide to the choices available. We’ll start with the ones that have been around the longest, and move on to the newer options.

Asynchronous Delegates

Since the beginning of .NET you have been able to take any delegate and call BeginInvoke on it to run that method asynchronously. If you don’t care about when it ends, it’s actually quite simple to do. Here we’ll call a method that takes a string:

Action<string> d = BackgroundTask;
d.BeginInvoke("BeginInvoke", null, null);

The BeginInvoke method also takes a callback parameter and some optional state. This allows us to get notification when the background task has completed. We call EndInvoke to get any return value and also to catch any exception thrown in our function. BeginInvoke also returns an IAsyncResult allowing us to check or wait for completion.
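
For example, we can pass the delegate itself as the state, so the callback can retrieve it and call EndInvoke (a sketch using the same BackgroundTask method as above):

Action<string> d = BackgroundTask;
d.BeginInvoke("BeginInvoke", ar =>
{
    var invoked = (Action<string>)ar.AsyncState;
    invoked.EndInvoke(ar); // rethrows any exception thrown by BackgroundTask
}, d);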

This model is called the Asynchronous Programming Model (APM). It is quite powerful, but is a fairly cumbersome programming model, and not particularly great if you want to chain asynchronous methods together as you end up with convoluted flow control over lots of callbacks, and find yourself needing to pass state around in awkward ways.

Thread Class

Another option that has been in .NET since the beginning is the Thread class. You can create a new Thread object, set up various properties such as the method to execute, thread name, and priority, and then start the thread.

var t = new Thread(BackgroundTask);
t.Name = "My Thread";
t.Priority = ThreadPriority.AboveNormal;
t.Start("Thread");
This may seem like the obvious choice if you need to run something on another thread, but it is actually overkill for most scenarios.

It’s better to let .NET manage the creation of threads rather than spinning them up yourself. I only tend to use this approach if I need a dedicated thread for a single task that is running for the lifetime of my application.

ThreadPool

The ThreadPool was introduced fairly early on in .NET (v1.1 I think) and provided an extremely simple way to request that your method be run on a thread from the thread pool. You just call QueueUserWorkItem and pass in your method and state. If there are free threads it will start immediately, otherwise it will be queued up. The disadvantage of this approach compared to the previous two is that it provides no mechanism for notification of when your task has finished. It’s up to you to report completion and catch exceptions.

ThreadPool.QueueUserWorkItem(BackgroundTask, "ThreadPool");

This technique has now really been superseded by the Task Parallel Library (see below), which does everything the ThreadPool class can do and much more. If you’re still using it, it’s time to learn TPL.

BackgroundWorker Component

The BackgroundWorker component was introduced in .NET 2 and is designed to make it really easy for developers to run a task on a background thread. It covers all the basics of reporting progress, cancellation, catching exceptions, and getting you back onto the UI thread so you can update the user interface.

You put the code for your background thread into the DoWork event handler of the background worker:

backgroundWorker1.DoWork += BackgroundWorker1OnDoWork;

and within that function you are able to report progress:

backgroundWorker1.ReportProgress(30, progressMessage);

You can also check if the user has requested cancellation with the BackgroundWorker.CancellationPending flag.

The BackgroundWorker provides a RunWorkerCompleted event that you can subscribe to and get hold of the results of the background task. It makes it easy to determine whether you finished successfully, were cancelled, or an exception was thrown. The RunWorkerCompleted and ProgressChanged events both fire on the UI thread, eliminating any need for you to get back onto that thread yourself.
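
Here’s a sketch of what that completed handler might look like (check Cancelled and Error before touching Result, since reading Result rethrows if the task faulted):

backgroundWorker1.RunWorkerCompleted += (sender, args) =>
{
    if (args.Cancelled)
        label1.Text = "Cancelled";
    else if (args.Error != null)
        label1.Text = args.Error.Message; // the exception from DoWork
    else
        label1.Text = String.Format("Done: {0}", args.Result);
};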

BackgroundWorker is a great choice if you are using WinForms or WPF. My one criticism of it is that it does tend to encourage people to put business logic into the code behind of their UI. But you don't have to use BackgroundWorker like that: you can create one in your ViewModel if you are doing MVVM, or you could make your DoWork event handler simply call directly into a business object.

Task Parallel Library (TPL)

The Task Parallel Library was introduced in .NET 4 as the new preferred way to initiate background tasks. It is a powerful model, supporting chaining tasks together, executing them in parallel, waiting on one or many tasks to complete, passing cancellation tokens around, and even controlling what thread they will run on.

In its simplest form, you can kick off a background task with TPL in much the same way that you kicked off a thread with the ThreadPool. (Task.Run shown here was added in .NET 4.5; on .NET 4 itself you’d use Task.Factory.StartNew.)

Task.Run(() => BackgroundTask("TPL"));

Unlike the ThreadPool though, we get back a Task object, allowing you to wait for completion, or specify another task to be run when this one completes. The TPL is extremely powerful, but there is a lot to learn, so make sure you check out the resources below to learn more.
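
As a quick sketch, here’s how you might chain a continuation onto the task from the example above:

var task = Task.Run(() => BackgroundTask("TPL"));
task.ContinueWith(t =>
{
    if (t.IsFaulted)
        Console.WriteLine("Failed: " + t.Exception.InnerException.Message);
    else
        Console.WriteLine("TPL task completed");
});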

C# 5 async await

The async and await keywords were introduced with C# 5 and .NET 4.5 and allow you to write synchronous-looking code that actually runs asynchronously. It’s not actually an alternative to the TPL; it augments it and provides an easier programming model. You can await any method that returns a Task, or if you need to call an existing synchronous method you can turn it into a task using the TPL:

await Task.Run(() => xdoc.Load("http://feeds.feedburner.com/soundcode"));

The advantages are that this produces much easier-to-read code, and another really nice touch is that when you await a method from the UI thread, control resumes back on the UI thread:

await Task.Run(() => xdoc.Load("http://feeds.feedburner.com/soundcode"));
label1.Text = "Done"; // we’re back on the UI thread!

This model also allows you to put one try…catch block around code that is running on multiple threads, which is not possible with the other models discussed. It is also now possible to use async and await with .NET 4 using the Microsoft.Bcl.Async NuGet package.

Here’s a slightly longer example showing a button click event handler in a Windows Forms application that calls a couple of awaitable tasks, catches exceptions whichever thread they are thrown on, and updates the GUI along the way:

private async void OnButtonAsyncAwaitClick(object sender, EventArgs e)
{
    const string state = "Async Await";
    this.Cursor = Cursors.WaitCursor;
    try
    {
        label1.Text = String.Format("{0} Started", state);
        await AwaitableBackgroundTask(state);
        label1.Text = "About to load XML";
        var xdoc = new XmlDocument();
        await Task.Run(() => xdoc.Load("http://feeds.feedburner.com/soundcode"));
        label1.Text = String.Format("{0} Done {1}", state, xdoc.FirstChild.Name);
    }
    catch (Exception ex)
    {
        label1.Text = ex.Message;
    }
    finally
    {
        this.Cursor = Cursors.Default;
    }
}

Learn More

Obviously I can’t fully explain how to use each of these approaches in a short blog post like this, but many of these programming models are covered in depth by some of my fellow Pluralsight Authors. I’d strongly recommend picking at least one of these technologies to master (TPL with async await would be my suggestion).

Thread and ThreadPool:

Asynchronous Delegates:

BackgroundWorker:

Task Parallel Library:

Async and Await:

Wednesday 2 July 2014

Input Driven Resampling with NAudio using ACM

NAudio provides a number of different mechanisms for resampling audio, but the resampler classes all assume you are doing “output driven” resampling. This is where you know how much resampled audio you want, and you “pull” the necessary amount of input audio through. This is fine for playing back audio, or for resampling existing audio files, but there are cases when you want to do “input driven” resampling. For example, if you are receiving blocks of audio from the network or soundcard, and want to resample just that block of audio before sending it on somewhere else, then input driven is what you want.

All of NAudio’s resamplers (ACM, Media Foundation and WDL) can be used in input driven mode, but in this post I’ll focus on using AcmStream, since it is the most widely available, and doesn’t require floating point samples.

Step 1 – Open an ACM Resampler Stream

We start by opening the ACM resampler, specifying the desired input and output formats. Here, we open an ACM stream that can convert mono 16 bit 44.1kHz to mono 16 bit 8kHz. (Note that the ACM resampler won’t change bit depths or channel counts at the same time – so the WaveFormats should differ by sample rate only).

var resampleStream = new AcmStream(new WaveFormat(44100, 16, 1), new WaveFormat(8000, 16, 1));

Note that you shouldn’t recreate the AcmStream for each buffer you receive. Codecs like to maintain internal state, so you’ll get the best results if you open it once, and then keep passing audio through it.

Step 2 – Copy Audio into the Source Buffer

The ACM stream already has a source and destination buffer allocated that it will use for all conversions. So we need to copy audio into the source buffer, convert it, then read it out of the destination buffer. The source buffer is a limited size, so if you have more input data than can fit into the source buffer, you’ll need to perform this conversion in multiple stages. Here we’ll get some input data as a byte array (e.g. received from the network or a soundcard), and copy it into the source buffer:

byte[] source = GetInputData();
Buffer.BlockCopy(source, 0, resampleStream.SourceBuffer, 0, source.Length);

Step 3 – Resample the Source Buffer with the Convert function

To convert the audio in the source buffer, call the Convert function. This will return the number of bytes available in the destination buffer for you to read out. It will also tell you how many source bytes were used. This should be all of them. If they haven’t been used, it probably means you are trying to convert too many at once. Pass the audio through in smaller block sizes, or copy the “leftovers” out and put them through again next time (this is how NAudio’s WaveFormatConversionStream works internally). Here’s the code to convert what you’ve copied into the source buffer:

int sourceBytesConverted = 0;
var convertedBytes = resampleStream.Convert(source.Length, out sourceBytesConverted);
if (sourceBytesConverted != source.Length)
{
    Console.WriteLine("We didn't convert everything: {0} bytes in, {1} bytes converted", source.Length, sourceBytesConverted);
}

Step 4 – Read the Resampled Audio out of the Destination Buffer

Now we can copy our converted audio out of the ACM stream’s destination buffer:

var converted = new byte[convertedBytes];
Buffer.BlockCopy(resampleStream.DestBuffer, 0, converted, 0, convertedBytes);

And that’s all you have to do (apart from remembering to Dispose your AcmStream when you’re done with it). It is a little more convoluted than output driven resampling, but not too difficult once you know what you’re doing.
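
To put the steps together, here’s a minimal sketch of a helper that resamples a single block of audio (it assumes each block is small enough to fit in the ACM source buffer):

private readonly AcmStream resampleStream =
    new AcmStream(new WaveFormat(44100, 16, 1), new WaveFormat(8000, 16, 1));

// resample one block of mono 16 bit 44.1kHz audio down to 8kHz
public byte[] Resample(byte[] source)
{
    Buffer.BlockCopy(source, 0, resampleStream.SourceBuffer, 0, source.Length);
    int sourceBytesConverted;
    int convertedBytes = resampleStream.Convert(source.Length, out sourceBytesConverted);
    if (sourceBytesConverted != source.Length)
        throw new InvalidOperationException("Not all bytes converted; pass smaller blocks");
    var converted = new byte[convertedBytes];
    Buffer.BlockCopy(resampleStream.DestBuffer, 0, converted, 0, convertedBytes);
    return converted;
}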

I’ll try to do some future posts showing how to do input driven resampling with the Media Foundation resampler, and the WDL resampler.

Sunday 29 June 2014

How to create zip backups of previous git commits

I’m working on a new Pluralsight course at the moment, and one of the things I need to do is provide “before” and “after” code samples of all the demos I create. In theory this should be easy since I’m using git for source control, so I can go back to any previous point in time using git checkout and then zip up my working folder. But the trouble with that approach is that my zip would also contain a whole load of temporary files and build artefacts that I don’t want. What I needed was a way to quickly create a zip file containing only the files under source control for a specific revision.

I thought at first I’d need to write my own tool to do this, but I discovered that the built-in git archive command does exactly what I want. To create a zip file for a specific commit you just use the following command:

git archive --format zip --output example.zip 47636c1

Or if like me you are using tags to identify the commits you want to export, you can use the tag name instead of the hash:

git archive --format zip --output begin-demo.zip begin-demo

Or if you just want to export the latest files under source control, you can use a branch name instead:

git archive --format zip --output master.zip master

In fact, if you have Atlassian’s excellent free SourceTree utility installed, it’s even easier: just right-click any commit and select “archive”. Anyway, this feature saved me a lot of time, so I hope it proves useful to someone else as well.

Tuesday 24 June 2014

How to Create a Breaking Non-Space in HTML

If you’ve done any web development, you’re probably familiar with the commonly used “non-breaking space”, entered in HTML as &nbsp;. The non-breaking space prevents a line break from occurring at that point, even though there is a space. You might use one to keep an icon together with the word it is associated with.

But recently I found myself wanting the opposite: a “breaking non-space” if you like. In other words, I wanted to mark certain points within a long word where I wanted a line-break to be allowed if necessary. This is useful for long camel-cased names of libraries such as my SharpMediaFoundation project. It’s too long to fit in the sidebar of my blog so I wanted it to be able to break after Sharp or Media.

It took a bit of searching, but eventually I found how to do this in HTML. It’s called the “Word Break Opportunity”, and you simply need to insert <wbr> at the points you are happy for a line-break to occur. So for my example I simply needed to enter Sharp<wbr>Media<wbr>Foundation. It’s not a feature you’ll need a lot, but occasionally it comes in handy.

Friday 20 June 2014

How to Zip a Folder with ASP.NET Web Pages

Since I’m basing my new blog on MiniBlog, all my posts are stored as XML files in a posts folder. I wanted a simple way to create a backup of my posts, without resorting to FTP.

My solution is to create a URL that allows me to download the entire posts folder as a zip archive. To do the zipping, I used DotNetZip which is available as a nuget package and has a nice clean API.

Then in the Razor view (e.g. export.cshtml), the following code can be used to create a zip archive:

@using Ionic.Zip
@{
    Response.Clear();
    Response.BufferOutput = false; // for large files...
    System.Web.HttpContext c = System.Web.HttpContext.Current;
    string archiveName = String.Format("archive-{0}.zip",
            DateTime.Now.ToString("yyyy-MMM-dd-HHmmss"));
    Response.ContentType = "application/zip";
    Response.AddHeader("content-disposition",
            "attachment; filename=" + archiveName);
    var postsFolder = Server.MapPath("~/posts");
    using (ZipFile zip = new ZipFile())
    {
        zip.AddDirectory(postsFolder);
        zip.AddEntry("Readme.txt",
            String.Format("Archive created on {0}", DateTime.Now));
        zip.Save(Response.OutputStream);
    }
    Response.Close();
}

As you can see, it’s very straightforward, and I’ve also shown adding your own custom Readme.txt from a string. If you’d rather add each file manually, just pass an enumeration of file names to zip.AddFiles.
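
For example, this sketch would archive just the XML post files (the second argument is the folder name to use inside the archive):

var files = System.IO.Directory.GetFiles(postsFolder, "*.xml");
zip.AddFiles(files, "posts");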

Finally, it would probably not be a great idea to let anyone call this, so you can protect it with a simple call to IsAuthenticated:

if (User.Identity.IsAuthenticated)

Thursday 19 June 2014

Why Use an Event Aggregator?

All programs need to react to events. When X happens, do Y. Even the most trivial “Hello, World” application prints its output in response to the “program was started” event. And as our programs gain features, the number of events they need to respond to (such as button clicks, or incoming network messages) grows, meaning the strategy we use for handling events will have a big impact on the overall maintainability of our codebase.

In this post I want to compare four different ways you can respond to a simple event. For our example, the event is a “Create User” button being clicked in a Windows Forms application. And in response to that event our application needs to do the following things:

  1. Ensure that the entered user data is valid
  2. Save the new user into the database
  3. Send the user a welcome email
  4. Update the GUI to add the new user to a ListBox of users

Approach 1 – Imperative Code

This first approach is in one sense the simplest. When the event happens, just do whatever is needed. So in this example, right in the button click handler, we’d construct whatever objects we needed to perform those four tasks:

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // validate user
    if (string.IsNullOrEmpty(user.Name) ||
        string.IsNullOrEmpty(user.Password) ||
        string.IsNullOrEmpty(user.Email))
    {
        MessageBox.Show("Invalid User");
        return;
    }
    
    // save user to database
    using (var db = new SqlConnection(@"Server=(localdb)\v11.0;Initial Catalog=EventAggregatorDemo;Integrated Security=true;"))
    {
        db.Open();
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO Users (UserName, Password, Email) VALUES (@UserName, @Password, @Email)";
            cmd.Parameters.Add("UserName", SqlDbType.VarChar).Value = user.Name;
            cmd.Parameters.Add("Password", SqlDbType.VarChar).Value = user.Password;
            cmd.Parameters.Add("Email", SqlDbType.VarChar).Value = user.Email;
            cmd.ExecuteNonQuery();
        }
    
        // get the identity of the new user
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "SELECT @@IDENTITY";
            var identity =  cmd.ExecuteScalar();
            user.Id = Convert.ToInt32(identity);
        }
    }
    
    // send welcome email
    try
    {
        var fromAddress = new MailAddress(AppSettings.EmailSenderAddress, AppSettings.EmailSenderName);
        var toAddress = new MailAddress(user.Email, user.Name);
        const string subject = "Welcome";
        const string body = "Congratulations, your account is all set up";
    
        var smtp = new SmtpClient
        {
            Host = AppSettings.SmtpHost,
            Port = AppSettings.SmtpPort,
            EnableSsl = true,
            DeliveryMethod = SmtpDeliveryMethod.Network,
            UseDefaultCredentials = false,
            Credentials = new NetworkCredential(fromAddress.Address, AppSettings.EmailPassword)
        };
        using (var message = new MailMessage(fromAddress, toAddress)
        {
            Subject = subject,
            Body = body
        })
        {
            smtp.Send(message);
        }
    }
    catch (Exception emailException)
    {
        MessageBox.Show(String.Format("Failed to send email {0}", emailException.Message));
    }
    
    // update gui
    listBoxUsers.Items.Add(user.Name);
}

What’s wrong with this code? Well, multiple things. Most notably, it violates the Single Responsibility Principle. Here inside our GUI we have code to talk to the database, code to send emails, and code that knows about our business rules. This means it’s going to be almost impossible to unit test in its current form. It also means that we’ll likely end up with cut-and-paste code if later we discover that another part of our system needs to create new users, because that code will need to perform the same sequence of actions.

The reason we’re in this mess is that we have tightly coupled the code that publishes the event (in this case the GUI), to the code that handles that event.

So what can we do to fix this? Well, if you know the “SOLID” principles, you know it’s always a good idea to introduce some “Dependency Injection”. So let’s do that next…

Approach 2 – Dependency Injection

What we could do here is create a few classes each with a single responsibility. A UserValidator validates the entered data, a UserRepository saves users to the database, and an EmailService sends the welcome email. And we give each one an interface, allowing us to mock them for our unit tests.
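
Based on how they are used below, the three interfaces might look something like this sketch:

public interface IUserValidator
{
    bool Validate(User user);
}

public interface IUserRepository
{
    void AddUser(User user);
}

public interface IEmailService
{
    void Email(User user, string subject, string body);
}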

Suddenly, our create user button click event handler has got a whole lot more simple:

public NewUserForm(IUserValidator userValidator, IUserRepository userRepository, IEmailService emailService)
{
    this.userValidator = userValidator;
    this.userRepository = userRepository;
    this.emailService = emailService;
    InitializeComponent();
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // validate user
    if (!userValidator.Validate(user)) return;

    // save user to database
    userRepository.AddUser(user);

    // send welcome email
    const string subject = "Welcome";
    const string body = "Congratulations, your account is all set up";
    emailService.Email(user, subject, body);

    // update gui
    listBoxUsers.Items.Add(user.Name);
}

So we can see we’ve improved things a lot, but there are still some issues with this style of code. First of all, we’ve probably not taken DI far enough. Our GUI still has quite a lot of knowledge about the workflow of handling this event – we need to validate, then save, then send email. And so probably we’d find ourselves wanting to create another class just to orchestrate these three steps.

Another related problem is the construction of objects. Inversion of Control containers can help you out quite a bit here, but it’s not uncommon to find yourself having classes with dependencies on dozens of interfaces, just because you need to pass them on to child objects that get created later. Even in this simple example we’ve got three dependencies in our constructor.

But does the GUI really need to know anything whatsoever about how this event should be handled? What if it simply publishes a CreateUserRequest event and lets someone else handle it?

Approach 3 – Raising Events

So the third approach is to take advantage of .NET’s built-in events, and simply pass on the message that the CreateUser button has been clicked:

public event EventHandler<UserEventArgs> NewUserRequested;

protected virtual void OnNewUserRequested(UserEventArgs e)
{
    var handler = NewUserRequested;
    if (handler != null) handler(this, e);
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    // send an event
    OnNewUserRequested(new UserEventArgs(user));
}

public void OnNewUserCreated(User newUser)
{
    // now a user has been created, we can update the GUI
    listBoxUsers.Items.Add(newUser.Name);
}

What’s going on here is that the button click handler now does nothing except gather up what was entered on the screen and raise an event. It’s now completely up to whoever subscribes to that event to deal with it (performing our tasks of Validate, Save to Database and Send Email), and then they need to call us back on our “OnNewUserCreated” method so we can update the GUI.

This approach is very nice in terms of simplifying the GUI code. But one issue that you can run into is similar to the one faced with the dependency injection approach. You can easily find yourself handling an event only to pass it on to the thing that really needs to handle it. I’ve seen applications where an event is passed up through 7 or 8 nested GUI controls before it reaches the class that knows how to handle it. Can we avoid this? Enter the event aggregator…

Approach 4 – The Event Aggregator

The event aggregator completely decouples the code that raises the event from the code that handles it. The event publisher doesn’t know or care who is interested, or how many subscribers there are. And the event subscriber doesn’t need to know who is responsible for publishing it. All that is needed is that both the publisher and subscriber can talk to the event aggregator. And it may be acceptable to you to use a Singleton in this case, although you can inject it if you prefer.

So in our next code sample, we see that the GUI component now just publishes one event to the event aggregator (a NewUserRequested event) when the button is clicked, and subscribes to the NewUserCreated event in order to perform its GUI update. It needs no knowledge of who is listening to NewUserRequested or who is publishing NewUserCreated.

public CreateUserForm()
{
    InitializeComponent();
    EventPublisher.Instance.Subscribe<NewUserCreated>
        (n => listBoxUsers.Items.Add(n.User.Name));
}

private void buttonCreateUser_Click(object sender, EventArgs e)
{
    // get user
    var user = new User()
               {
                   Name = textBoxUserName.Text,
                   Password = textBoxPassword.Text,
                   Email = textBoxEmail.Text
               };
    EventPublisher.Instance.Publish(new NewUserRequested(user));
}

As you can see, this approach leaves us with trivially simple code in our GUI class. The subscribers too are simplified since they don’t need to be wired up directly to the class publishing the event.

Benefits of the Event Aggregator Approach

There are many benefits to this approach beyond the decoupling of publishers from subscribers. It is conceptually very simple and easy for developers to get up to speed on. You can introduce new types of messages easily without making changes to public interfaces. It’s very unit testing friendly. It also discourages chatty interactions with dependencies and encourages a more asynchronous way of working – send a message with enough information for the handlers to deal with it, and then wait for the response, which is simply another message on the event bus.

There are several upgrades to a vanilla event aggregator that you can create to make it even more powerful. For example, you can give subscribers the capability to specify what thread they want to handle a message on (e.g. GUI thread or background thread). Or you can use WeakReferences to reduce memory leaks when subscribers forget to unsubscribe. Or you can put global exception handling around each callback to a subscriber so you can guarantee that when you publish a message every subscriber will get a chance to handle it, and the publisher will be able to continue.

There are many situations in which the event aggregator really shines. For example, imagine you need an audit trail tracking many different events in your application. You can create a single Auditor class that simply needs to subscribe to all the messages of interest that come through the event aggregator. This helps keep cross-cutting concerns in one place.
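
Using the same EventPublisher as in the earlier examples, an Auditor could be as simple as this sketch (it assumes the NewUserRequested message exposes a User property, like NewUserCreated does):

public class Auditor
{
    public Auditor()
    {
        // subscribe to each message type we want in the audit trail
        EventPublisher.Instance.Subscribe<NewUserRequested>(
            e => Log("New user requested: " + e.User.Name));
        EventPublisher.Instance.Subscribe<NewUserCreated>(
            e => Log("User created: " + e.User.Name));
    }

    private void Log(string message)
    {
        // hypothetical sink - in a real app, write to your audit store
        Console.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
    }
}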

Another great example is a single event that can be fired from multiple different places in the code, such as when the user requests help within a GUI application. We simply need to publish a HelpRequested message to the aggregator with contextual information of what screen they were on, and a single subscriber can ensure that the correct help page is launched.

Where Can I Get An Event Aggregator?

Curiously, despite the extreme usefulness of this pattern, there doesn’t seem to be an event aggregator implementation that has emerged as a “winner” in the .NET community. Perhaps this is because it is so easy to write your own (see the sketch after this list). And perhaps also because the extensions and upgrades you want depend on what you are using it for. Here are a few to look at to get you started:

  • Udi Dahan’s Domain Event pattern
    • Uses an IDomainEvent marker interface on all messages
    • Integrates with the container to find all handlers of a given event
    • Has a slightly odd approach to threading (pubs and subs always on the same thread)
  • José Romaniello’s Event Aggregator using Reactive Extensions
    • A very elegant and succinct implementation using the power of Rx
    • Subscriptions are Disposable
    • Use the power of Rx to filter out just events you are interested in, and handle on whatever thread you want
  • Laurent Bugnion’s Messenger (part of MVVM Light)
    • Uses weak references to prevent memory leaks
    • Can pass “tokens” for context
    • Includes passing in object as the subscriber, allowing you to unsubscribe from many events at once
    • Allows you to subscribe to a base type and get messages of derived types
  • PRISM Event Aggregator (from Microsoft patterns and practices)
    • A slightly different approach to the interface – events inherit from EventBase, which has Publish, Subscribe and Unsubscribe methods.
    • Supports event filtering, subscribing on UI thread, and optional strong references (weak by default)
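
And if you do decide to write your own, here’s the minimal sketch I promised above of an EventPublisher like the one used in the examples (strong references and no thread safety; the upgrades described earlier are left as an exercise):

using System;
using System.Collections.Generic;

public class EventPublisher
{
    public static readonly EventPublisher Instance = new EventPublisher();

    private readonly Dictionary<Type, List<Delegate>> subscribers =
        new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        List<Delegate> handlers;
        if (!subscribers.TryGetValue(typeof(TMessage), out handlers))
        {
            handlers = new List<Delegate>();
            subscribers[typeof(TMessage)] = handlers;
        }
        handlers.Add(handler);
    }

    public void Publish<TMessage>(TMessage message)
    {
        List<Delegate> handlers;
        if (!subscribers.TryGetValue(typeof(TMessage), out handlers)) return;
        foreach (Action<TMessage> handler in handlers)
        {
            handler(message);
        }
    }
}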

I’ve made a few myself which I may share at some point too. Let me know in the comments what event aggregator you’re using.