Sunday, July 07, 2013

Transferring one TFS repository to another via GIT-TF – with history!

Recently a client needed to migrate a large TFS repository to a new machine and a later version of TFS.  They tried to follow the Microsoft procedure but had problems with it (different OS versions, security settings, that sort of thing).  In the end they decided to just ‘Get Latest’ from the old repo and commit that into the new one, losing all the history of the source code.

As retrieving history and comparing old versions of code is one of the main jobs of a source control provider, I suggested using GIT-TF to do the migration.  After a fair bit of googling I had a stab at doing the import.  As it took me a few attempts and none of the instructions were quite right (at least in our scenario), I thought I’d post a demo of the complete instructions here.  (Prerequisites: you must have a working GIT prompt and have successfully installed GIT-TF.  These instructions assume that you are using GIT Bash.)

Current TFS Repositories

Our two TFS histories look like this (old on the left, new on the right).  Of course, in reality the history on the left would be much bigger.  Notice that the latest commit on the new repository removes all the files that TFS automatically adds – the build process templates etc. – as we want to start with the new repository empty.
[Screenshots: check-in history of the old (left) and new (right) TFS repositories]
Our new TFS server looks the same, but has no history apart from the auto-generated check-ins of the TF Build Automation and template files.  You should delete these files from the new TFS repository now (and remember to check in the deletes!).

Clone the TFS repositories to GIT

Run these commands in a GIT prompt:

cd /c
mkdir git
cd git
git tf clone http://myoldserver:8080/tfs $/OldTfs --deep
git tf clone http://mynewserver:8080/tfs $/NewTfs --deep

This will create two new GIT repositories under c:\git called OldTfs and NewTfs.  The NewTfs git repository should be empty, matching your new TFS repository.  Running git log on the OldTfs git repo should display your complete TFS check-in history.

Remove link between GIT repository and TFS Changesets

[Screenshot: the git-tf file inside the .git folder]

Now, as we need to pull the ‘old’ GIT repository into the new one, we first need to remove the details of the TFS changesets that we’ve already pulled into the new GIT repo.  To do that, delete the file “git-tf” from the “.git” folder in c:\git\NewTfs.

Now we need to re-create the link to the new server (but without the changeset details), so run this command:

cd c/git/NewTfs 
git tf configure http://mynewserver:8080/tfs $/NewTfs

Pull in the old GIT repository and push to new TFS

Next we need to add the old GIT repo as a remote in the new one, and then pull from it.  The important option is to specify “--rebase” to ensure that the full commit history is pulled across:

git remote add master file:///c/git/OldTfs
git pull --rebase master master

Running “git log” should now display the full history of your old TFS repository in the new GIT repo, so the only step left is to push this to your new TFS server:

git tf checkin --deep

Remember the “--deep” option or only the latest changeset will be committed.  Once this is finished, you should be able to see your full TFS history displayed in the Source Control Explorer on your new server!

Friday, July 06, 2012

Multiple submit buttons on an ASP.NET MVC form

Wow, well over 3 years since a blog post!

I recently needed to create a form containing multiple buttons.  Normally I use a variation of this technique to know which button was clicked, and have each handled by a different action method.  On this occasion, however, the button was actually the same button repeated for a list of entities, so mapping by name wasn’t good enough – every button was named “edit”.  I needed a way to know which edit button was pressed.  Having a <form> for each button plus a hidden input specifying which entity was being edited wasn’t acceptable either – each entity also had other input controls that needed to be submitted as one, and the page had to work without JavaScript.

So I created MultiButtonExAttribute (an MVC ActionNameSelector) which matched only on the prefix of the button name, and used the rest of the name to store state information.  All you have to do is create input buttons using this pattern:

<input type="submit" name="edit_id:1234_other:somestring" value="Edit" />

The name is made up of a prefix (“edit”), a separator (“_”), and then key/value pairs of data, with each key separated from its value by a colon and each pair separated from the next by another underscore.  On the server side, create an action method to handle the form submit and decorate it like this:

[MultiButtonEx("edit")]
[HttpPost]
public ActionResult EditEntity(int id, string other)
{
    //TODO: whatever needs to be done
    //The ID will be parsed for you by the DefaultModelBinder
    //and in this case will have the integer value 1234
}

Note that the key/value pairs take part in the normal model binding, so are passed type-safe to the parameters of the action method.
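For illustration, here is a minimal sketch of what such a selector could look like.  This is my approximation, not the published code, and it assumes the parsed pairs can be fed to model binding by stashing them in RouteData:

using System.Linq;
using System.Reflection;
using System.Web.Mvc;

//Sketch only: matches a form field named "<prefix>_key:value_key:value..."
//and pushes the key/value pairs into RouteData so they can be model-bound.
public class MultiButtonExAttribute : ActionNameSelectorAttribute
{
    private readonly string _prefix;

    public MultiButtonExAttribute(string prefix)
    {
        _prefix = prefix;
    }

    public override bool IsValidName(ControllerContext controllerContext,
        string actionName, MethodInfo methodInfo)
    {
        //Look for a submitted form field whose name starts with our prefix.
        var key = controllerContext.HttpContext.Request.Form.AllKeys
            .FirstOrDefault(k => k != null && k.StartsWith(_prefix + "_"));
        if (key == null)
            return false;

        //Split "id:1234_other:somestring" into pairs and stash each one.
        foreach (var pair in key.Substring(_prefix.Length + 1).Split('_'))
        {
            var parts = pair.Split(':');
            if (parts.Length == 2)
                controllerContext.RouteData.Values[parts[0]] = parts[1];
        }
        return true;
    }
}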

To make the submit button easier to render, I also created a HtmlHelper which ensures the ‘name’ attribute is generated correctly:

@Html.MultiButtonEx(new {id = item.Id, other = item.Other}, "edit", "Click Me!")

Which will translate the anonymous object into the correct format.
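Again as a rough sketch (not the published code), the helper could be implemented something like this:

using System.ComponentModel;
using System.Linq;
using System.Web.Mvc;

public static class MultiButtonExtensions
{
    //Renders <input type="submit" name="prefix_key:value_..." value="text" />
    public static MvcHtmlString MultiButtonEx(this HtmlHelper html,
        object values, string prefix, string text)
    {
        var pairs = TypeDescriptor.GetProperties(values)
            .Cast<PropertyDescriptor>()
            .Select(p => p.Name + ":" + p.GetValue(values));
        var builder = new TagBuilder("input");
        builder.MergeAttribute("type", "submit");
        builder.MergeAttribute("name", prefix + "_" + string.Join("_", pairs));
        builder.MergeAttribute("value", text);
        return MvcHtmlString.Create(builder.ToString(TagRenderMode.SelfClosing));
    }
}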

NOTE: The code on GitHub is an example and not production ready – you’ll no doubt want to beef up the error handling, move the separator characters into consts, encode those characters if they appear in your data, etc.  Also, there’s bound to be a limit on the length of an HTML name attribute (which probably, just for fun, varies across browsers).

I am also not even sure this is a good idea – if anyone can think of a better way to achieve this please let me know!!

More info in the GitHub repository.

Wednesday, April 15, 2009

Calling a 3.5 WebService from a 2.0 WebSite

This week I upgraded a web service project to v3.5 of the .Net framework.  However, another website then stopped working, as it called the web service using JavaScript (by referencing the client-side proxy created by the Microsoft Ajax Library).  After a little bit of debugging I found that the response from a v3.5 web service is different to that of a v2.0 service.  My web service just returned a Guid.  When the service was using v2.0 the response contained just the guid.  Once it was upgraded to v3.5, however, it returned a JSON object with a property called ‘d’ whose value was the guid – i.e. {"d":"<guid>"} instead of just "<guid>".
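For context, the kind of service method involved looks something like this (a simplified sketch – the names are illustrative):

[System.Web.Script.Services.ScriptService]
public class MyService : System.Web.Services.WebService
{
    //Under v2.0 the JSON response is just the guid; under v3.5 the
    //same method's response is wrapped as {"d":"<guid>"}.
    [System.Web.Services.WebMethod]
    public Guid GetId()
    {
        return Guid.NewGuid();
    }
}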

How to fix

There are a couple of ways to get around this problem.  The easiest is probably to just upgrade the website to 3.5 as well, as then the serialisation of the object will be done for you automatically.  However, this wasn’t an option for me.  Instead I modified my JavaScript callback method to work with either response format.  The code changed from this:

function onCallback(result, context)
{
    var guid = result;
    //do further processing here...
}

to this:

function onCallback(result, context)
{
    var guid = result.d ? result.d : result;
    //do further processing here...
}


All we are doing is checking for the existence of the ‘d’ property and either getting the result from there or just using the result itself.  The benefit of this simple change is that the callback method will continue to work for any combination of v2.0 and v3.5 websites and services.

Hopefully this will be useful for somebody!  Posting it here so that I don’t forget about it myself in the future!

Saturday, April 04, 2009

Using In Memory SQLite Database for Testing with FluentNHibernate

I’ve been playing with a little bit of TDD with FluentNHibernate and the MVC Framework lately and I had a few issues trying to get unit tests running with an in-memory SQLite database.  There are quite a few blogs describing how to do this but none of them use FluentNHibernate, so I thought I’d document the way I achieved this.  I’m not sure that this is the best way, so if anyone has a better idea please let me know.

I started off with this class to configure my mappings:

public class NHibernateMapping
{
    public ISessionFactory BuildSessionFactory()
    {
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(
                o => o.AutoMappings.Add(
                    AutoPersistenceModel.MapEntitiesFromAssemblyOf<MyDummyEntity>()
                        .WithSetup(a =>
                            {
                                a.IsBaseType = ty => ty.FullName == typeof(DomainEntity).FullName;
                                a.FindIdentity = prop => prop.Name.Equals("Id");
                            }
                        )
                )
            )
        .BuildSessionFactory();
    }
}

At first this worked absolutely fine for my tests.  However, nowhere in here is the schema for the database actually defined.  My initial tests passed only because they were creating, loading and saving objects in the same NHibernate session, so they weren’t actually hitting the database!  NH could supply everything from its level 1 cache.  When I wrote a test to check that an action worked as expected when an invalid ID was specified, it failed with an ADOException from NH – because it now tried to read a row from the database and the table didn’t exist!
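(As an aside, a test only genuinely hits the database once the session cache is out of the picture.  A minimal sketch, assuming MyDummyEntity has a Name property:)

using (var session = _sessionFactory.OpenSession())
{
    var id = session.Save(new MyDummyEntity { Name = "test" });
    session.Flush();  //push the INSERT to the database
    session.Clear();  //empty the level 1 cache
    //This is now a real SELECT rather than a cache hit:
    var reloaded = session.Get<MyDummyEntity>(id);
}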

I then changed my NHibernateMapping class to call SchemaExport, but the test still failed, because SchemaExport creates the schema and then closes the connection.  Closing the connection destroys the in-memory database, so when my test ran its query the table no longer existed!

From this post I found a connection provider which ensured that the same connection would always be used.  The code for this class is:

using System.Data;

public class SQLiteInMemoryTestConnectionProvider :
    NHibernate.Connection.DriverConnectionProvider
{
    private static IDbConnection _connection;

    public override IDbConnection GetConnection()
    {
        if (_connection == null)
            _connection = base.GetConnection();
        return _connection;
    }

    public override void CloseConnection(IDbConnection conn)
    {
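        //Deliberately empty: we keep the connection open so the
        //in-memory database survives between NHibernate calls.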
    }

    /// <summary>
    /// Destroys the connection that is kept open in order to 
    /// keep the in-memory database alive.  Destroying
    /// the connection will destroy all of the data stored in 
    /// the mock database.  Call this method when the
    /// test is complete.
    /// </summary>
    public static void ExplicitlyDestroyConnection()
    {
        if (_connection != null)
        {
            _connection.Close();
            _connection = null;
        }
    }
}

I then modified the NHibernateMapping class to expose the NH configuration and session factory separately, and also to allow an IPersistenceConfigurer to be passed in (so that I could use a different database for testing and live).  The class now looks like this:

public class NHibernateMapping
{

    IPersistenceConfigurer _dbConfig;

    public NHibernateMapping(IPersistenceConfigurer dbConfig)
    {
        _dbConfig = dbConfig;
    }

    public Configuration BuildConfiguration()
    {
        return Fluently.Configure()
            .Database(_dbConfig)
            .Mappings(
                o => o.AutoMappings.Add(
                    AutoPersistenceModel.MapEntitiesFromAssemblyOf<MyDummyEntity>()
                        .WithSetup(a =>
                            {
                                a.IsBaseType = ty => ty.FullName == typeof(DomainEntity).FullName;
                                a.FindIdentity = prop => prop.Name.Equals("Id");
                            }
                        )
                )
            )
        .BuildConfiguration();
    }

    public ISessionFactory BuildSessionFactory()
    {
        return BuildConfiguration().BuildSessionFactory();
    }

}

Then, in the test setup, I just need to tell FluentNH to use my test connection provider, call SchemaExport, and create my SessionFactory:

[TestInitialize]
public void Init()
{
    var m = new NHibernateMapping(
        SQLiteConfiguration.Standard.InMemory()
            .Provider<SQLiteInMemoryTestConnectionProvider>());
    new NHibernate.Tool.hbm2ddl.SchemaExport(m.BuildConfiguration())
        .Execute(true, true, false, true);
    _sessionFactory = m.BuildSessionFactory();
}
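To be symmetrical, the test cleanup can destroy the shared connection so each test starts with a fresh in-memory database – something like:

[TestCleanup]
public void Cleanup()
{
    //Close the kept-alive connection, wiping the in-memory database.
    SQLiteInMemoryTestConnectionProvider.ExplicitlyDestroyConnection();
}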

As I said, I’m not sure if this is the best way to achieve this, so if someone has a more elegant solution please let me know.

Saturday, March 07, 2009

Patch Written for the UpdatePanelAnimationExtender

UPDATE: This patch was finally accepted!

While using the UpdatePanelAnimationExtender control from the Ajax Control Toolkit I decided that I didn’t like the behaviour of the control.  My issue was that I had an update panel that I wanted to ‘collapse’ when an async postback started, and expand again once the postback had completed.  If you view the control’s sample page you can see this effect in operation.  However, if the postback finishes before the ‘collapse’ animation has finished, the animation is aborted and the update panel ‘jumps’ to a height of zero before expanding again.  I wanted the collapse animation to finish regardless of how quickly the server returned, to ensure that the animation always appeared smooth.

The way this is achieved on the sample page is by having a call to Thread.Sleep in the PageLoad method.  I didn’t really want to waste resources on the server just to ensure a client-side animation appeared smoothly, so I set about writing a patch for the control.

Looking at the JavaScript behaviour for the control it was obvious why the control behaved the way it did.  This is the JavaScript code fired when the async postback has completed:

    _pageLoaded : function(sender, args) {
        /// <summary>
        /// Method that will be called when a partial update (via an UpdatePanel) finishes
        /// </summary>
        /// <param name="sender" type="Object">
        /// Sender
        /// </param>
        /// <param name="args" type="Sys.WebForms.PageLoadedEventArgs">
        /// Event arguments
        /// </param>
        
        if (this._postBackPending) {
            this._postBackPending = false;
            
            var element = this.get_element();
            var panels = args.get_panelsUpdated();
            for (var i = 0; i < panels.length; i++) {
                if (panels[i].parentNode == element) {
                    this._onUpdating.quit();
                    this._onUpdated.play();
                    break;
                }
            }
        }
    }

As you can see, once this method is called the _onUpdating animation is cancelled immediately by the call to the quit() method.  What I needed was a way to check whether the animation had finished before playing the _onUpdated animation, and if not, to wait until it had.  The first part was easily accomplished with a simple if:

if (this._onUpdating.get_animation().get_isPlaying()) {…}

The second part – waiting till it had finished – proved a bit harder, however.  My initial thought was to use window.setTimeout to check later whether the animation had finished.  However, the function supplied to setTimeout runs in the context of the ‘window’ object, so I didn’t have a reference to the ‘this._onUpdated’ or ‘this._onUpdating’ private variables.  A quick Google led me to this page by K. Scott Allen, which describes the use of the call() and apply() methods in JavaScript.  These methods are on the function object itself and allow us to alter what ‘this’ refers to in a method call.  Very powerful – and definitely dangerous too – but exactly what I needed.  I added a new private method to the JavaScript class called _tryAndStopOnUpdating, as follows:

    _tryAndStopOnUpdating: function() {
        if (this._onUpdating.get_animation().get_isPlaying()) {
            var context = this;
            window.setTimeout(function() { context._tryAndStopOnUpdating.apply(context); }, 200);
        }
        else {
            this._onUpdating.quit();
            this._onUpdated.play();
        }
    }

Firstly, this method checks if the first animation is still playing, and if so uses window.setTimeout to wait 200ms before calling itself to check again.  The use of ‘apply’ here ensures that when the method is called again the ‘this’ keyword refers to our JavaScript class as expected.  Note that if I hadn’t saved ‘this’ to a local variable and just referred to ‘this’ in the function passed to window.setTimeout, then the call would fail as ‘this’ would then refer to the JavaScript window object itself.

All that remained was to add a new property to the server control to allow this alternative behaviour to be switched on or off and to modify the body of the _pageLoaded method to call my new method like so:

        if (this._postBackPending) {
            this._postBackPending = false;
            
            var element = this.get_element();
            var panels = args.get_panelsUpdated();
            for (var i = 0; i < panels.length; i++) {
                if (panels[i].parentNode == element) {
                    if (this._AlwaysFinishOnUpdatingAnimation) {
                        this._tryAndStopOnUpdating();
                    }
                    else {
                        this._onUpdating.quit();
                        this._onUpdated.play();
                    }
                    break;
                }
            }
        }
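On the server side, the switch is just a boolean property on the extender that gets pushed down to the client-side behaviour.  A hedged sketch follows – the property name matches the client code above, but the backing store and attributes are my assumptions, not the actual patch:

//Sketch only - assumes the Ajax Control Toolkit's attribute-based
//mechanism for surfacing a server property to the client behaviour.
[ExtenderControlProperty]
[DefaultValue(false)]
public bool AlwaysFinishOnUpdatingAnimation
{
    get { return (bool)(ViewState["AlwaysFinishOnUpdatingAnimation"] ?? false); }
    set { ViewState["AlwaysFinishOnUpdatingAnimation"] = value; }
}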


You can see an example of this modified UpdatePanelAnimationExtender here.  The bottom checkbox controls whether the first animation will always complete before the second one starts.  Hopefully you’ll be able to see how much smoother the animation is with the bottom checkbox checked!

Unfortunately this patch hasn’t made it into the control toolkit yet, so if you would like to see it in there please vote for my patch here.  Thanks!

Sunday, February 22, 2009

Alternative Stylesheets for Different Browsers

On a site I've been working on recently, various CSS hacks are used to ensure that the site is displayed consistently in all browsers.  However, as more and more browsers are released the CSS files just get messier and harder to maintain.  Moving the 'hacks' into their own files and selectively including them would make life a lot easier.  The usual way to achieve this is conditional comments.  These are only supported in IE, but as most of our CSS hacks were for IE this was acceptable.

The problem with this, however, is that the site was using ASP.Net Themes, which automatically add the relevant stylesheets to the page for you - meaning that you have no way of selectively choosing the correct stylesheets!  (Incidentally, I'd love to be proved wrong about this, so please let me know if I'm missing something!)

I decided to write a more flexible theming system instead.  The plan was to load all the stylesheets in a certain directory and add them to the pages automatically in the same way ASP.Net themes do.  But it would also support convention-based subdirectories containing the 'hacks' for the different browsers.  The structure would be something like this:

[Image: the theme directory structure - a Theme1 folder with browser-specific sub-folders such as IE and FireFox]

Any CSS files in the Theme1 directory would always be included, but CSS files in the IE directory would only be included if the user was using IE.  The convention for the folder names is to match the Browser property of the HttpBrowserCapabilities class (accessible from Request.Browser).  I ended up also allowing further sub-directories so that different browser versions could have different stylesheets.


If you need a stylesheet for a specific version of a browser, you just create a folder with the version number as its name, e.g. to have a stylesheet specifically for FireFox v2, create a folder called '2' in the FireFox folder.  If you want a stylesheet for IE versions 6 and below, you can place it in a folder called '6-'.  Likewise, if you want a stylesheet for versions 7 and up, you should place it in a folder called '7+'.  In the future I may extend this convention to allow things like '1-3' and '4-7' so that ranges of versions can be included.
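To make the convention concrete, here's a rough sketch of how the folder resolution might work (illustrative only - the names here are mine, not the engine's actual API):

using System.Collections.Generic;
using System.IO;
using System.Web;

public static class ThemeFolderResolver
{
    //Yields every theme folder whose stylesheets apply to the given browser.
    public static IEnumerable<string> GetCandidateFolders(
        string themeRoot, HttpBrowserCapabilities browser)
    {
        yield return themeRoot;  //base stylesheets are always included

        string browserDir = Path.Combine(themeRoot, browser.Browser);  //e.g. "IE"
        if (!Directory.Exists(browserDir))
            yield break;
        yield return browserDir;

        int version = browser.MajorVersion;
        foreach (string dir in Directory.GetDirectories(browserDir))
        {
            string name = Path.GetFileName(dir);
            int bound;
            if (name == version.ToString())
                yield return dir;  //exact version, e.g. "FireFox\2"
            else if (name.EndsWith("+") &&
                     int.TryParse(name.TrimEnd('+'), out bound) && version >= bound)
                yield return dir;  //"7+" means version 7 and up
            else if (name.EndsWith("-") &&
                     int.TryParse(name.TrimEnd('-'), out bound) && version <= bound)
                yield return dir;  //"6-" means version 6 and below
        }
    }
}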


I have uploaded this theming engine here.  To use the engine you must register the StylesheetManager control on your webforms/masterpage like so:

<%@ Register Assembly="SPL.WebSite.Projects" Namespace="SPL.WebSite.Projects.WebControls.Theming" TagPrefix="spl" %>

And then in the <head /> section include an instance of the control:

<spl:StylesheetManager runat="server" ThemeDirectory="~/DemoPages/DemoStyles" />

The only property you need to set is the location of the root directory of your theme.  When the control renders it will figure out which stylesheets are required based on the user's browser and write out <link /> tags for each one.

When running in release mode, instead of linking to n stylesheets, the control will link to an HttpHandler instead which will merge the css files into one and write them directly into the response.  To get this working you need to include this handler in your web.config:

<add verb="GET" path="CssCombiner.axd" type="SPL.WebSite.Projects.HttpHandlers.Theming.CssCombineHandler, SPL.WebSite.Projects"/>

Note that the handler caches the CSS to avoid multiple disk accesses on each request.  Currently it is cached for a hard-coded time of 1 hour.  Depending on your circumstances you may wish to change this to use a configuration value instead.

Feel free to use this theming engine if it meets your needs, and please let me know if you have any improvements.  Note that the uploaded version doesn't contain things like error handling and logging, and the HTTP handler it uses is hard-coded.  These are all things you will probably want to modify before using it in anger.

A demo page is available here.

Wednesday, November 26, 2008

Post-Redirect-Get Pattern in MVC

I found a good write-up of the PRG pattern in MVC by Matt Hawley this week, and have decided to use it in an MVC project I'm working on.  I have made a few changes to Matt's code however. 

1. Use ModelBinders and ModelState to Simplify Code

Firstly, as the new version of MVC (the beta release) supports Model Binders, I updated the example to use these instead.  Now we can save the ModelState into TempData in one go, instead of saving the error messages and the user's input separately, so the Submit action looks something like:

public ActionResult Submit()
{
    //OMITTED: Do work here...
    if (!ModelState.IsValid)
    {
        //Save the current ModelState to TempData:
        TempData["ModelState"] = ModelState;
        //Redirect back to the form so it can redisplay the errors:
        return RedirectToAction("Create");
    }
    //Success path - redirect wherever is appropriate for your app:
    return RedirectToAction("Index");
}

In the Create action we just need to pull these values out of TempData and add them to the ModelState.  MVC will then enter the user's input back into the textboxes for you.  The Create action looks like:

public ActionResult Create()
{
    //If we have previous model state in TempData, add it to our
    //current ModelState property.
    var previousModelState = TempData["ModelState"] as ModelStateDictionary;
    if (previousModelState != null)
    {
        foreach (KeyValuePair<string, ModelState> kvp in previousModelState)
            if (!ModelState.ContainsKey(kvp.Key))
                ModelState.Add(kvp.Key, kvp.Value);
    }

    return View();
}

(You will want to wrap this boilerplate code up in a helper class though - a sketch of one way to do that follows.)
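Something like this (illustrative only - the names are mine, not Matt's):

using System.Web.Mvc;

public static class ModelStateTransfer
{
    private const string Key = "ModelState";

    public static void ExportModelState(this Controller controller)
    {
        controller.TempData[Key] = controller.ViewData.ModelState;
    }

    public static void ImportModelState(this Controller controller)
    {
        var previous = controller.TempData[Key] as ModelStateDictionary;
        if (previous == null)
            return;
        foreach (var kvp in previous)
            if (!controller.ViewData.ModelState.ContainsKey(kvp.Key))
                controller.ViewData.ModelState.Add(kvp.Key, kvp.Value);
    }
}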

2. Fix Scenario where user input will be lost

Secondly, I did find a small issue with the code as posted, when the user does the following:

1. GET the page with a form
2. POST the form with invalid input
3. REDIRECT back to the page (with the user's input intact)
4. REFRESH the page - the user's input is now lost!

I'm not sure this is a particularly common scenario, but losing the user's input is never a good way to instil trust in your application!  The reason the data is lost in this case is that we stored the user's input in TempData, which only exists for this request and the next one.  I thought about putting the values into Session instead, but then you'd have to come up with a strategy for removing the items at the right time (you wouldn't want the form to remember its values from the last time it was used, for instance).  In the end I decided that just putting the values back into TempData would be the best solution.  This requires the following line to be added to the Create action:

public ActionResult Create() 
{ 
    //If we have previous model state in TempData, add it to our 
    //current ModelState property. 
    var previousModelState = TempData["ModelState"] as ModelStateDictionary; 
    if (previousModelState != null) 
    { 
        foreach (KeyValuePair<string, ModelState> kvp in previousModelState) 
            if (!ModelState.ContainsKey(kvp.Key)) 
                ModelState.Add(kvp.Key, kvp.Value); 
        TempData["ModelState"] = ModelState;  //Add back into TempData
    } 

    return View(); 
}

If the user now refreshes the page after a validation failure, they will no longer lose their input.  If they go on to fix the validation errors and submit the form, the saved TempData value will be automatically cleared by the MVC framework.

Sunday, November 23, 2008

DDD7 - My Thoughts

Yesterday, I attended my first DeveloperDay at the Microsoft Campus in Reading.  Below are the sessions that I attended and my thoughts on them:

TDD and Hard-To-Test Code - Ian Cooper

The first session of the day was about testing code which is hard to test.  The speaker, Ian Cooper, clearly had a lot of experience and knowledge of the subject.  Unfortunately, however, I felt that there wasn't really enough code.  Most of the presentation was about the sort of code that is hard to test; I was hoping for more help with how to actually test that code.  Having said that, I still found the talk informative and interesting.  The concept of 'Seams' was something I hadn't come across by that name before (see here for a good example), although it is basically the Open/Closed Principle.

ASP.NET MVC - Show me the code - Steve Sanderson

Having followed MVC from the first preview, I was very interested in this presentation.  The format was very much about code, not PowerPoint, which I personally liked.  I thought Steve was extremely knowledgeable about MVC, and he really showed how easy MVC makes it to create a fully functional website.  Despite not really learning much from the presentation (I'd already used most of the code he showed myself), I really enjoyed its fast pace.  I think anyone in the audience who hadn't already used MVC will have downloaded it by now!  He sold it very well.  Definitely a speaker to look out for in the future.

Steve also has an MVC book coming out soon - definitely one for the wish list!

ASP.NET 4.0 - TOP SECRET - Phil Winstanley and Dave Sussman

This was my most eagerly anticipated session.  However, it ended up being a bit of a disappointment.  The two speakers were unable to talk much about ASP.NET v4, as Microsoft hadn't announced its features at the PDC.  Although they did show some nice features of VS 2010 (adornments look very nice!), it seemed as if the presentation had been put together in a rush.  I didn't feel that there was very much 'meat' in the talk, and it was only the presenters' ability to make the audience laugh that kept it going.

Oslo, Microsoft's vision for the future of Modelling - Robert Hogg

An interesting session about Oslo, something I didn't know too much about.  The subject was clearly too big to cover in just one hour, but Rob did cover quite a few things.  Oslo's main aim is to increase pretty much everything by TEN TIMES!  Productivity, performance, everything!  Bold aims.  It'll be interesting to see how much of it they can achieve.  The idea of improving the communication between BAs, architects and devs could make a huge difference to the future of the industry if they can pull it off.  Oh, and the modelling tool Quadrant looks like it could become a showpiece WPF application.  Very shiny!

The bleeding edge of web - Helen Emerson

This was a good presentation to finish the day on, as my brain was starting to hurt!  Didn't contain anything earth-shattering, but was an excellent introduction to the new features we can expect to have in the future versions of browsers.  Just a shame that it will probably take years for all browsers to implement them (I'm looking at you IE!).  Overall, a fun talk which Helen improved by getting some good audience participation.

Overall Thoughts

I enjoyed my first DeveloperDay overall.  In the future I will try to attend sessions on subjects I'm unfamiliar with, as I believe I will get more out of it that way - they are overviews rather than training sessions, after all, so if you already have an overview of a technology, pick a topic you don't know instead.

Enjoyable day though, and all of the speakers did a brilliant job.

Thursday, November 20, 2008

AutoTabExtender AJAX Control

I have just uploaded my AutoTabExtender control to my Projects page.  Please feel free to take a look at the code and download it if it's useful for you.

I decided to create this control when I had a broken hand.  While entering my memorable information to log in to my online bank account, I realised that having to press 'tab' to move from day to month to year was really slowing me down with only one typing hand!!  This control extends a textbox so that focus automatically moves on when the length of the text entered equals the MaxLength property.  The usage of the extender is as follows:

<asp:TextBox runat="server" ID="txtPart1"MaxLength="2" Columns="2"></asp:TextBox>
<asp:TextBox runat="server" ID="txtPart2"MaxLength="2" Columns="2"></asp:TextBox>
<asp:TextBox runat="server" ID="txtPart3"MaxLength="2" Columns="2"></asp:TextBox>

<spl:AutoTabExtender runat="server" ID="ate1" TargetControlID="txtPart1" NextControlID="txtPart2"></spl:AutoTabExtender>

<spl:AutoTabExtender runat="server" ID="ate2" TargetControlID="txtPart2" NextControlID="txtPart3"></spl:AutoTabExtender>


So, you need an extender for each textbox that should auto-tab, and you specify the NextControlID to indicate which control the focus should move to.  In the example above, a form for entering a sort code, we need an extender for the first and second textboxes.  The extender also allows 'natural' deleting between textboxes.  See the live demo for this here.

The extender should work in IE, FF and Chrome.
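For the curious, the server-side class is roughly this shape (a guess, assuming it is built on the Ajax Control Toolkit's ExtenderControlBase - the real control may differ):

//Sketch only: the NextControlID property is pushed to the client-side
//behaviour, which moves focus when the textbox reaches its MaxLength.
[TargetControlType(typeof(TextBox))]
public class AutoTabExtender : ExtenderControlBase
{
    [ExtenderControlProperty]
    [IDReferenceProperty(typeof(TextBox))]
    public string NextControlID
    {
        get { return GetPropertyValue("NextControlID", string.Empty); }
        set { SetPropertyValue("NextControlID", value); }
    }
}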

Sunday, November 02, 2008

Using Castle's Dynamic Proxy Part 2 - Using Mixins

As promised, this is my follow-up to this post.  This time I will show how to use the DynamicProxy library with mixins.  Using mixins you can add functionality to an object at runtime.  In this example, I will continue the lazy-load theme from the previous post.

Imagine we have a contacts application, which contains a Person object.  This all works splendidly until one day we are asked to create a sales application.  This application must use our existing repository of contacts, but cannot use the same database.  In the new application we derive a Customer class from Person, with a few new properties defining how the customer is to be treated:

public class Customer : Person
{
    public bool ApplyDiscount { get; set; }
    public bool AllowFreeDelivery { get; set; }
}

Now, to load a Customer, we must first load the main person data from the contacts database, and then load the extended data from the new sales database.  This means that each time a Customer object is loaded, there will be two database calls (no, we can't use linked servers to join the databases!).  Most of the time, this would be fine, but if you don't need the extended Customer information, then it's still a wasted database call.  (I know this sounds like a very contrived example, but I have really worked on a project where we needed to do just this).

How can we get round this?  Well, in a similar way to the previous post, we need to load the data dynamically when it's requested.  In the previous example, when we intercepted the 'get' call we checked whether the data had already been loaded, to prevent hitting the database every time the property was read.  In this case, however, the Customer object only has boolean properties - we can't check these to see if the data has already been loaded, as neither true nor false implies that we haven't loaded the properties from the database.  We could change the properties to use nullable booleans (bool? in C#), so the lazy-load method could check whether one of the properties was null before loading the data.  Alternatively, we could add a boolean field called isLoaded to the class and check that instead.

The problem with both of these solutions, though, is that we would then have data-access concerns in our entity model.  That is not good separation of concerns!  The solution we are going to use instead is to add the isLoaded flag at runtime!  We will not have to make any modifications to the Customer class defined above.  To do this, we need to add the following interface and implementation:

public interface ILazyLoad
{
    bool IsLoaded { get; }
}

public class LazyLoadImpl : ILazyLoad
{
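    //Set to true (via reflection) by the interceptor once the data has been loaded.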
    bool _isLoaded;
    public bool IsLoaded
    {
        get { return _isLoaded; }
    }
}

Now we need our Customer object to implement this interface, so when we create a new object we instead tell Castle to create us a proxy (as per previous post), but also add the ILazyLoad interface:

//Create a proxy object instead of a standard Customer object.
Castle.DynamicProxy.ProxyGenerator a = new Castle.DynamicProxy.ProxyGenerator();
//Create an instance of our lazy load implementation and mix it into the proxy.
ILazyLoad mixin = new LazyLoadImpl();
ProxyGenerationOptions pgo = new ProxyGenerationOptions();
pgo.AddMixinInstance(mixin);
Customer c = (Customer)a.CreateClassProxy(typeof(Customer), pgo, new CustomerInterceptor());

Now our CustomerInterceptor can intercept calls to the 'get' properties of the Customer object, cast the Customer instance to ILazyLoad, and check the value of the IsLoaded property to see if the data needs to be loaded.  If it does, the data is retrieved and the properties are set.  The _isLoaded field is then set to true using reflection.  The next time we access a property, the interceptor will know that we've already loaded the data, so won't need to hit the database again.  We now have a Customer object that loads its data only when required.  (Please refer to the previous post to see how to set up the interceptors.)
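To give a flavour, here's a sketch of such an interceptor written against the DynamicProxy 2 API.  It holds a reference to the mixin instance (so you would pass the mixin into its constructor rather than using the parameterless version shown above), and the data-access call is a hypothetical placeholder:

using System.Reflection;
using Castle.DynamicProxy;

public class CustomerInterceptor : IInterceptor
{
    private readonly LazyLoadImpl _mixin;

    public CustomerInterceptor(LazyLoadImpl mixin)
    {
        _mixin = mixin;
    }

    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method.Name.StartsWith("get_") && !_mixin.IsLoaded)
        {
            //LoadExtendedData((Customer)invocation.Proxy);  //hypothetical data access
            //Flag the mixin as loaded so we don't hit the database again:
            typeof(LazyLoadImpl)
                .GetField("_isLoaded", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(_mixin, true);
        }
        invocation.Proceed();
    }
}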


Note:  This example uses DynamicProxy v1.  I believe version 2 of DynamicProxy works in the same way, but having not personally used it I can't guarantee this.