EntityFramework–What’s in it for Microsoft?

I can’t understand why Microsoft persists with the EntityFramework development. Don’t get me wrong, the project isn’t necessarily terminally flawed or anything, but what is their corporate motivation?

Can they go to their shareholders and demonstrably show that development of the EntityFramework will increase the company’s bottom line?

I don’t think so.

The decision on what data access strategy to use does not drive the choice of server platform or development language. Generally, enterprise organisations have an overall strategy or theme for their suite of applications, intended to reduce the technology footprint and maximise the opportunities for re-use, both of components and of physical resources, including developers.

It is very unlikely, IMHO, that a project in a predominantly unix/java environment would suggest creating a Win2008 stack so that their project can use EntityFramework 4.1.

If you’re programming web sites hosted in a windows enterprise environment, then you’re going to use .Net (especially if you’re reading this). You’re not using .Net because of the EntityFramework. If the EntityFramework did not exist you would still use .Net and host on windows servers. In fact, only a year ago, when EntityFramework really, really was the pits, that is exactly what we all did.

Can you seriously imagine, in the pre-EntityFramework days, a developer saying, “hey guys, let’s do this in java and hibernate, because, you know, .Net data access is just too hard…”? Yet that seems to be the business problem Microsoft is trying to solve, and I for one just can’t see it.

So, that brings me back to the thrust of my discussion. Microsoft are not going to sell any more licensed products because of the EntityFramework. That means they will not make a cent from it.

How big is the EntityFramework team? I’m not sure, but whatever millions of dollars a year Microsoft is spending on the development I doubt they will ever see a red cent in return.

Microsoft is not a charity. If they aren’t making a dollar from a business activity, how loyal are they going to be to it?

But it’s worse than that. Microsoft haven’t developed a great product, and by freezing development on Linq-2-Sql they’ve burnt social capital. They’ve lumbered themselves with a product that they’ve asked enterprises to trust, while knowing that it doesn’t scale up to enterprise needs.

There’s a marketing blitz to manage. Books to write, blogs and tweets and whatever people do on Facebook. A new fleet of MVPs spruiking the latest and greatest and demonstrating, once again, how EntityFramework works if you’re designing the world’s simplest blog or dinner-invite management program.

So, projects will take on EntityFramework. They might have been directed to by their bosses, or perhaps they themselves feel cosier in the warm embracing bosom of the Microsoft mother ship. They’ll send off queries to the “support” line at Microsoft asking about specific features, to be told that those features are not supported, may never be supported, and that if they were it would be in a scheduled release sometime in the first quarter of the next millennium. You are not going to get a special build, just for you. And you are certainly not going to be able to compile your own version.

And when it all goes horribly wrong, when entities and contexts and states are sprinkled from the browser to the database and all the tiers in between (oh, yes – who’s ever seen a demo of an n-tier implementation of the EntityFramework, and aren’t self-tracking POCO objects a contradiction?) and it performs like a dog, then the blame will be laid at Microsoft’s feet.

I think there is every likelihood that Microsoft will stop development of EntityFramework. Just. Like. That. Wouldn’t that be a pain. Projects already coupled to EF would have to just get by. Perversely, Microsoft are not likely to lose any revenue from that decision.

Consider NHibernate as an alternative. It hasn’t got a profit motive – it is driven by the passion of its core developers and the community that picks up builds and provides feedback. There is no corporate backer demanding profits and revenue streams, and the source code is free to download and wade through. By any comparison NHibernate is more feature-rich than EntityFramework (a direct result of its maturity and pedigree). The main complaint with NHibernate is the lack of coherent documentation on all the knobs, levers and other points of extensibility that can be made use of.

I think EntityFramework is a case of corporate “Not Invented Here” syndrome, where software is written in the firm but predominantly mistaken belief that you can write something better than that other, mature product, because you’re a genius and you’re starting from scratch. We’ve all done it. But hopefully at some point we mature out of it.

I want to reflect on the fact that Microsoft supports jQuery, to the benefit of both – it certainly adds keyboard-cred to the ASP.Net MVC platform. In my opinion, Microsoft would have done everyone a favour by getting behind the NHibernate project while enhancing Linq-2-Sql for small data access tasks.

Posted in Entity Framework | Leave a comment

Javascript Object Instantiation and Prototypes

This is part of a series on javascript mechanics I’m writing – mainly for my own benefit, but also to help those shy server developers out there venture into the mysterious land of the ‘client side’. In my last post I covered the basics of what a javascript object is, and in my next post I will cover javascript closures, but today I’m going to cover object construction and initialisation.

Following on, I now want to explain javascript objects in greater detail. My previous post had some script that created an object as follows:

 
var rect = new Object();

rect.width = 10;
rect.height = 30;

rect.area = function () {
    return this.width * this.height;
};

This script invokes the Object type’s constructor function, which returns a new Object instance. It is equivalent to defining a function called Object() and calling it in the context of a newly created object, as follows:

 
function Object(){
  return this;
}

var rect = {};

Object.apply(rect);

.....

So what is really going on here?

What the new keyword does is instantiate an empty object ({}) and call the specified constructor function in the context of that newly created object. Once you understand this, you are well on your way to becoming a javascript legend!

So you can see that every javascript object starts the same – as an empty object {}; it is the function that gets called in conjunction with the new keyword that defines the properties of the resultant object. To demonstrate what I mean, consider the following script:

 

var objectCount = 1; 

Object = function(){ 
  this.id = objectCount++; 
}; 

var obj = new Object(); 
//this will have a value of 1
alert(obj.id); 

var otherobj = {}; 
//this object did not get run though the object factory, so this will be undefined
alert(otherobj.id); 

If you run this, obj.id will have an integer value. However otherobj.id is undefined, because its instance has not been run through the object factory. If I now do this…

 
var objectCount = 1; 

Object = function(){ this.id = objectCount++; }; 

var otherobj = {}; 
//apply the factory method in the context of the new object 

Object.apply(otherobj); 

//the object now has the id property set (to 1, since this snippet starts with a fresh objectCount) 
alert(otherobj.id); 

otherobj.id now has an integer value.

So to reiterate, in javascript all objects are created equal. What the new operator does is apply a factory method to the newly instantiated object.
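The behaviour of new can be sketched as a plain function. The construct helper below is hypothetical – my own illustration, not part of the language – and it ignores the prototype link, which is covered later in this post:

```javascript
// A hand-rolled stand-in for what `new ctor(...)` does (minus the prototype link)
function construct(ctor) {
  var obj = {};                                  // 1. create an empty object
  var args = Array.prototype.slice.call(arguments, 1);
  var result = ctor.apply(obj, args);            // 2. call the constructor with `this` bound to it
  // constructors normally return nothing, so hand back the object we made
  return (typeof result === 'object' && result !== null) ? result : obj;
}

function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p1 = new Point(3, 4);
var p2 = construct(Point, 3, 4);
// p1 and p2 end up with the same own properties
```

Both objects carry x and y; the only thing the real new operator adds is the prototype linkage discussed below.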

Now we understand what an object factory is, we can start to create our own so we get uniformly created objects. First we’ll create a factory method for a mythical but possibly useful Rect class as follows:

 

function Rect(width,height){
  this.width = width;
  this.height = height;
  this.area = function () {
    return this.width * this.height;
  };
}

var rect= new Rect(10,5);

alert(rect.area());

var rect2 = new Rect(5,1);

alert(rect2.area());

alert(rect2.area == rect.area);

Here you’ll note the use of parameters in the constructor. Again, the new keyword results in the construction of an empty object {}, which is then set as the “this” context for the factory method that is called with the two parameters. If you run this script, you’ll see that each instance of Rect has an area method defined, but each has its own copy (so the final alert shows false).

This may initially seem bizarre, but remember that functions in javascript are objects that can be assigned to properties at runtime, much like delegates in C#.
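For example, the same function-object can be attached to any object at runtime (the names below are my own, for illustration):

```javascript
// A free-standing function; note it references `this`, which is only
// resolved when the function is invoked as a method
function area() {
  return this.width * this.height;
}

var square = { width: 4, height: 4 };
square.area = area;   // attach the function-object as a method, delegate-style

square.area();        // 16 -- `this` is square at the call site
```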

This is probably sub-optimal, so javascript has the concept of a prototype. The prototype of a constructor function is a special property of the function that is used in the object construction pipeline. Rather than each property being physically copied, the new object is linked to the prototype object, and any property lookup that fails on the object itself falls through to the prototype (from my previous post, you’ll recall javascript objects are more or less associative arrays whose properties can be altered freely at runtime).

The pseudo-code for object construction then becomes

  1. Create an empty object {}
  2. Link the new object to the prototype object of the constructor function, so that property lookups which fail on the object fall through to the prototype
  3. Pass the object to the constructor function to perform any initialisation

Revisiting the script from above

 

function Rect(width,height){
  this.width = width;
  this.height = height;
};

/* Add to the prototype */
Rect.prototype.area  = function () {
   return this.width * this.height;
};

var rect= new Rect(10,5);

alert(rect.area());

var rect2 = new Rect(5,1);

alert(rect2.area());

alert(rect2.area == rect.area);

When this code is run, it will be seen that both rects have the function area and that they are in fact the same function-object. This makes sense because both instances are linked to the same prototype object, so area resolves to the same function for each of them.

Note that the prototype of a constructor function can be re-assigned at any point. If we re-assign the prototype of our Rect, then new instances will get a different area method to the instances created earlier – re-assigning the prototype does not affect already-created objects, which keep their link to the old prototype object. (Mutating the existing prototype object in place, by contrast, is visible to existing instances.)
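To make the re-assignment behaviour concrete, here is a small sketch (runnable in any browser console or node):

```javascript
function Rect(width, height) {
  this.width = width;
  this.height = height;
}
Rect.prototype.area = function () {
  return this.width * this.height;
};

var before = new Rect(2, 3);

// Re-assigning the prototype only affects instances created afterwards;
// `before` keeps its link to the old prototype object
Rect.prototype = {
  area: function () {
    return 'replaced';
  }
};

var after = new Rect(2, 3);

before.area(); // 6 -- still resolved via the old prototype object
after.area();  // 'replaced' -- resolved via the new one
```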

Since prototypes are just objects, a popular way to define one is object-literal notation. This provides a succinct collection of the functions to be made available to new objects. The following sample demonstrates:

 

function Rect(width,height){
  this.width = width;
  this.height = height;
};

/* Add to the prototype */
Rect.prototype = {
  area:function(){
       return this.width * this.height;
   },
   diagonal:function(){
       return Math.sqrt( Math.pow(this.width,2) + Math.pow(this.height,2) );
   }
};


var rect = new Rect(10,5);

alert(rect.diagonal());

One thing to keep in mind when creating prototype methods is the need to reference object properties using the keyword this – otherwise the name is resolved against the enclosing scope rather than the object the method was invoked on. The interplay between this and scope brings us to closures, which will be the topic of my next javascript in-depth post.

Conclusion

In this post I have covered the fundamentals of objects and their instantiation in javascript.

Posted in Javascript | Tagged , | 5 Comments

Copying Jquery Validation from one element to another

A simple note to self about copying jquery validations on one element to another.

1. First get the validation rules pertaining to the element you want to copy from using the validation jquery extensions

var rules = $('#elementtocopy').rules();

2. Then get the messages that match those rules – these live in the validator’s settings object, keyed by element name. Note how the element name is used, not the element id.

rules['messages'] = $('form').data('validator').settings.messages['your element name, NOT id'];

3. Simply invoke the validation jquery extension on the selector that you wish to apply the rules to

$('#newelementid').rules("add", rules);

Simple eh 😉

Posted in jQuery | Leave a comment

Using Unobtrusive Ajax Forms in ASP.Net MVC3

This post covers how to use the new unobtrusive libraries in ASP.net MVC3 to provide an improved form submission user experience utilising AJAX. The idea is to have the whole form contents submitted and then replaced by the server generated content, be it either the submitted form with validation errors, or a success message.

First, you need to ensure that you reference the required javascript – jquery.unobtrusive-ajax.js (or min.js for production).


<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>

Each Ajax form requires at least two views. The first view is a wrapper around the actual form content, while the second is a partial view that contains the form itself. To demonstrate, I have created a simple registration form model (note how it uses the new IValidatableObject interface for complex model validation at binding):


namespace Registration.Models
{
  public class RegistrationFormModel: IValidatableObject
  {
    [Required]
    public string Firstname
    {
      get;
      set;
    }

    [Required]
    public string Surname
    {
      get;
      set;
    }

    [Required]
    public int Age
    {
      get;
      set;
    }
    
    public string RegsitrationUniqueId
    {
      get;
      set;
    }

    #region IValidatableObject Members

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext){
      if (Age < 21){
        yield return new ValidationResult("Age must be at least 21 years.", new string []{ "Age" });
      }
    }

    #endregion
  }
}

The outer form is just a container that holds the dynamic content, be it the data entry form or the success message. I’ve included the current date time to demonstrate how the outer wrapper does not change when the inner form content is submitted.



@{
    ViewBag.Title = "MyForm";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<h2>Register Online - @DateTime.Now.ToString()</h2>

<div id="formContent">
    @{Html.RenderPartial("FormContent");}
</div>


As you can see, it merely renders the content of the partial view inside an element that becomes the target of our ajax post result. The form content, for the purpose of this blog, is trivial. Note how I have set the Post option in the AjaxOptions – this is because I want the data submitted to the Index() overload of the controller that accepts HttpPost.


@model Registration.Models.RegistrationFormModel
           
@{
    AjaxOptions options = new AjaxOptions{
        HttpMethod = "Post",
        UpdateTargetId = "formContent"        
    };        
    
}

@using (Ajax.BeginForm(options)) {    
  <fieldset>
    <legend>Registration Form</legend>
    @Html.ValidationSummary(true)

    <div class="editor-label">
      @Html.LabelFor(model => model.Firstname)
    </div>
    <div class="editor-field">
      @Html.EditorFor(model => model.Firstname)
      @Html.ValidationMessageFor(model => model.Firstname)
    </div>

    <div class="editor-label">
      @Html.LabelFor(model => model.Surname)
    </div>
    <div class="editor-field">
      @Html.EditorFor(model => model.Surname)
      @Html.ValidationMessageFor(model => model.Surname)
    </div>


    <div class="editor-label">
      @Html.LabelFor(model => model.Age)
    </div>
    <div class="editor-field">
      @Html.EditorFor(model => model.Age)
      @Html.ValidationMessageFor(model => model.Age)
    </div>

    <div>
      <input type="submit" value="Register" />
    </div>
  </fieldset>
}

I have added an additional partial view which will be rendered when the form is successfully submitted:


@model Registration.Models.RegistrationFormModel
           
<h3>You have successfully registered.</h3>

<p>Your regsitration number is @Model.RegsitrationUniqueId</p>

The controller for this form is, for the purposes of this demo, pretty simple. If the model is valid then a registration id is calculated by whatever means and the Success view is returned. If the model is invalid (because the IValidatableObject interface has returned a model error), then the data-entry form is returned to the user to fix their errors.

public class FormController : Controller{

  public ActionResult Index(){
    return View(new RegistrationFormModel());
  }

  [HttpPost]
  public PartialViewResult Index(RegistrationFormModel model){
    if (ModelState.IsValid){               
      //go and do registration business logic,
      RNGCryptoServiceProvider csp = new RNGCryptoServiceProvider();

      byte [] regsitrationBytes = new byte[16];
      csp.GetBytes(regsitrationBytes);
      model.RegsitrationUniqueId = Convert.ToBase64String(regsitrationBytes);
      return PartialView("Success", model);
    }
    else
    {
      //return the data entry form
      return PartialView("FormContent", model);
    }            
 }
}

The project layout is as follows:

When the user navigates to the registration page:

When the user attempts to register someone who is too young, server side validation kicks in and the data entry form is returned:

When the data is correctly entered, the success message is displayed:

Note how in all the images the date/time at which the registration started has not changed – there has been no full post back to the server.

Of interest is how the unobtrusive ajax is rendered to the client – not as masses of javascript, but as HTML5-valid data-* attributes on the form element, as follows:

<form action="/PVA/Form/" data-ajax="true" data-ajax-method="Post" data-ajax-mode="replace" data-ajax-update="#formContent" id="form0" method="post">
...
</form>

Conclusion

This post has demonstrated how easy it is to provide a superior user experience using Ajax forms and the unobtrusive javascript libraries in ASP.Net MVC 3.


Posted in Ajax, ASP.Net MVC, Javascript | Tagged , | 12 Comments

Resolving Instances using Delegates in Unity

Introduction

In previous posts, I have covered bootstrapping Unity in MVC, using Unity as a DependencyResolver in MVC and using Unity in conjunction with WCF services. I am going to extend the Unity theme a little more and describe a technique I use to get Unity to return an instance of a WCF Channel for a requested service interface.

One of the things I like about StructureMap as an IOC container is the ability to use a delegate to provide the required instance for a type, as in the following code.


ChannelFactory<IAccountService> accountChannelFactory = 
                                    new ChannelFactory<IAccountService>("AccountService");

ObjectFactory.Initialize(cfg =>
{
  cfg.For<IAccountService>().Use(container 
                               => accountChannelFactory.CreateChannel());
});

Unity has no such method. When specifying an instance to return for a requested type, Unity requires an already-created instance and manages its lifetime as a singleton. This means that the above functionality, where a Channel is created by a delegate to the ChannelFactory to satisfy the resolution of IAccountService, cannot be readily implemented in Unity.

To get around this limitation, I have designed a LifetimeManager that takes a delegate as a parameter. The implementation, shown below, is simple: in the override for GetValue(), if no value is already held, the delegate is called.


public class NewInstanceLifetimeManager : LifetimeManager{
   private LifetimeManager baseManager;
   private Func<object> sourceFunc = null;

   public NewInstanceLifetimeManager(Func<object> sourceFunc, 
                                                   LifetimeManager baseManager = null){

     Contract.Requires(sourceFunc != null, "sourceFunc must be provided");
     this.sourceFunc = sourceFunc;
     this.baseManager = baseManager;
   }

   public override object GetValue(){
     object result = baseManager != null ? baseManager.GetValue() : null;

     if (result == null){
       result = sourceFunc();

       if (baseManager != null){
         baseManager.SetValue(result);
       }
     }

     return result;
  }

  public override void RemoveValue(){
    if (baseManager != null){
      baseManager.RemoveValue();
    }
  }

  public override void SetValue(object newValue){
    if (baseManager != null){
      baseManager.SetValue(newValue);
    }
  }
}   

This seems to work pretty well as in the following:


ChannelFactory<IAccountService> accountChannelFactory = 
                                   new ChannelFactory<IAccountService>("AccountService");

container.RegisterType<IAccountService>(
               new NewInstanceLifetimeManager(()=>accountChannelFactory.CreateChannel()));

Note how the NewInstanceLifetimeManager takes an underlying LifetimeManager implementation as an optional parameter. For channels I like to use a per-request context lifetime manager. I can use the NewInstanceLifetimeManager in conjunction with the ContextLifetimeManager as follows (I resolve the ContextLifetimeManager from the container so that its required ContextItemProvider constructor parameter is set):


ChannelFactory<IBusinessService> businessChannelFactory = 
                      new ChannelFactory<IBusinessService>("BusinessService");

container.RegisterType<IBusinessService>(
  new NewInstanceLifetimeManager(() => businessChannelFactory.CreateChannel(), 
                                 container.Resolve<ContextLifetimeManager>() ));

Conclusion

This post has covered how to extend the Unity IOC container to resolve instances via a delegate, through a LifetimeManager that takes a delegate as a parameter. A useful application of this technique is the creation of WCF Channels from channel factories.


Posted in StructureMap, Unity | Tagged , , | 1 Comment

NHibernate vs EntityFramework – Experience From the Real World

I got asked today what I considered in choosing NHibernate or Entity Framework. This is a modified version of my response.

General Function

Both are object relational mappers capable of working with POCO objects, i.e. objects built without any dependencies on their persistence store.

As ORMs, they implement the Unit of Work, Repository and Query Object patterns (see Fowler’s Patterns of Enterprise Application Architecture). They resolve entity relationships, and both enable the mapping of type hierarchies using Table per Hierarchy (TPH), Table per Type/joined subclass (TPT) and Table per Concrete class (TPC).

Entity Framework out of the box does not support databases other than SqlServer|Express|CE. There are third party drivers available for other databases such as Oracle and MySQL.

NHibernate out of the box provides support for SqlServer|Express|CE, Oracle, MySQL and many others.

NHibernate provides support for second-level caching of entities, which is useful for global reference data or long running session scenarios. This caching is provided through pluggable providers, a number of which ship with NHibernate; the cache can work with the SqlServer broker in a similar manner to a sql cache dependency.

Configuration

Entity Framework supports a database-first approach with an integrated designer for Visual Studio. This designer produces an XML file (EDMX) that describes the required mappings. The designer does not accommodate the full range of possible mappings, which leaves you having to deal with the somewhat cryptic XML file directly – parts of which can be safely edited without being overwritten by the designer.

The build action on the EDMX file generates the entity classes. The big improvement in EF 4 is that the T4 template used to create the classes is easily modified to produce the classes you want.

My personal opinion is that designers are more often a curse than a blessing, particularly for applications of any real complexity. An IDE upgrade may be impossible, or you might modify the underlying XML file in such a way that it never works again, or perhaps the designer is just too sluggish. I don’t rate designers too highly.

The alternative to database-first development is model- or code-first. In this mode, domain entities are coded and their relationship to the underlying database tables is described using either declarative attributes or a fluent API. I have discussed the configuration of Entity Framework CodeFirst here. As a general rule, I would avoid using declarative attributes to define database mappings, as this couples your entities to the persistence store.

There is no code generation with NHibernate. Instead you write your domain classes and use XML files to describe how the database tables relate to the domain entities. These XML files are then read from a location, or as an embedded resource, by NHibernate at runtime to create the mappings. The XML schema is not that difficult to understand, but being XML it feels a bit 2003 rather than 2011.

So these days most folk seem to be using FluentNHibernate. This is an open source companion project to NHibernate that enables the use of compile-time-safe, fluently coded mapping classes to describe each class’s mapping. An interesting feature of FluentNHibernate is Automapping, which uses conventions to map tables to classes without the need for any configuration by the developer.

Entity Framework does not support mapping to enums or other custom types. Database fields must map to scalar primitive .net types. NHibernate supports custom types through the IUserType interface.

API

Entity Framework has a smaller API than NHibernate and exposes fewer knobs and levers. For that reason it is probably easier to “learn”, but when you need greater control the limitations of the API quickly surface.

NHibernate has many points for extensibility for example:

  • In the construction of the connection, which can be useful to set user session variables in the database
  • Caching, as already discussed
  • Interception of queries, inserts, delete and updates
  • Interception of object creation
  • Diagnostic logging with log4net

To the best of my knowledge, Entity Framework does not have any of these extensibility points.

On the other hand, with a larger API comes a steeper learning curve. For example, the NHibernate session has the following save-related methods: Save, Update, SaveOrUpdate, SaveOrUpdateCopy, Replicate and Merge, some of which have overloads that return different types.

Lazy versus eager loading is a huge consideration, particularly with n-tier applications. Eager loading can kill you because you end up sucking down the whole database in a single call, but lazy loading can leave you with n+1 select performance issues. Entity Framework feels really immature in this regard because you can only set the lazy loading behaviour for the entire context, not on a relationship-by-relationship basis. With NHibernate, you can configure the container so that parent objects are eager loaded and child collections are lazy loaded, or mix it up to suit each case.

NHibernate also allows greater specification of loading strategies. You can eager load using a join in the initial query or a subsequent select, and even specify an optimal batch size on a select. None of these features are available with Entity Framework. A sub-select strategy is useful for preventing unnecessary queries for objects that are already in the session, and is something to evaluate on a case-by-case basis.

Both Entity Framework and NHibernate support Linq, although the implementations differ. In Entity Framework, Linq queries against the database context are passed to the native SQL Linq provider, while NHibernate Linq is translated into HQL or criteria in the session to create the sql. From an API perspective the feel is more or less the same.

In addition to Linq, NHibernate provides a Query by Example API, in which you pass in an object with the properties you want matched set, and two other query APIs – ICriteria and HQL. With multiple APIs available, my advice is to choose one and use it consistently, preferably wrapped behind an application-specific repository.

Another important area of difference is cascade specification and the attachment of detached entities. NHibernate allows you to specify whether a relationship should automatically cascade inserts, updates and deletes, including the deletion of orphans. EntityFramework on the other hand only allows you to turn the cascade delete specification on or off. When attaching detached instances, as is done in n-tier applications, NHibernate will automatically cascade the attachment of related objects and will perform the necessary database queries to work out what needs to be persisted. EntityFramework will not cascade an attach; each relationship needs to be traversed and attached explicitly, which is cumbersome for large object graphs.

Both provide access to the raw sql connection for those cases where it is just needed.

Support

Entity Framework, produced and supported by Microsoft, has the advantage over NHibernate in terms of developers being able to find answers to questions. The MSDN documentation for Entity Framework is relatively complete and there are many technical evangelists espousing its use.

NHibernate, on the other hand, has no corporate sponsor, and the quality of its documentation is definitely a pain point for developers learning NHibernate. At the time of writing, many google searches for NHibernate throw up the old JBoss site; however the current “home” of NHibernate is http://www.nhforge.org. This fact alone throws a lot of developers.

As previously stated, the API for NHibernate has a lot more knobs and levers than Entity Framework, and I think for that reason alone much greater emphasis should be placed on getting the documentation right. I noted above the large API for simply saving an object – it requires a fair amount of pain and experience to know what each of those methods does.

Entity Framework is less mature than NHibernate, and I believe it has a number of releases to go before it is enterprise feature-rich. Each release represents an opportunity for breaking changes, which is a consideration. Alternatively, Microsoft may not develop EntityFramework further, which would create a potential legacy dependency. On the other hand, NHibernate is a community project and has to date been successful without a commercial imperative.

Conclusion

I’ve covered a lot of stuff here based on my own real-world experience with both Entity Framework and NHibernate in n-tier applications. In a few words, I would say that Entity Framework is easier to learn but harder to control and extend in complex scenarios, so as the complexity of the domain increases I would lean more and more towards NHibernate configured with FluentNHibernate.

In any event, I really want to stress the importance of isolating the application code from its dependency on the ORM, either through the use of an IOC container or application interfaces. NHibernate may be the better choice for complex applications at the moment; it may not always be that way.

I would also look at the development team. Frankly, working with an ORM and domain classes is a whole different kettle of fish to coding transaction-script style services and record sets, and it is taken to a whole new level when multiple tiers come into play. Some developers find the transition too much, and that needs to be taken into account when choosing a development approach. I would recommend finding a mentor who can guide the team through the pitfalls that most definitely exist.

Lastly, in addition to implementing a number of useful enterprise patterns, an ORM is above all about reducing the complexity of mapping a complex object domain to a relational database. If the project is simple to start with, with little or no object-relational impedance, an ORM will only introduce complexity where none was required, for no perceivable benefit.


Posted in Design Patterns, Entity Framework, NHibernate | Tagged , | 10 Comments

Custom Unobtrusive jQuery Validation in ASP.Net MVC 3

Introduction

This post demonstrates how to add client-side unobtrusive validation for custom validation attributes. I will use a real example: the client-side validation of the Australian Business Number (ABN) and Australian Company Number (ACN) for registered companies in Australia. These numbers are validated using a checksum algorithm. This post implements custom validation by having the validation attributes implement the IClientValidatable interface. This may not always be appropriate or possible, in which case you may want to use the DataAnnotationsModelValidatorProvider, which I have covered here.

Define the Custom Validation Attribute

I’ll define a base CheckSumNumberAttribute that each particular instance will derive from. This needs to implement System.Web.Mvc.IClientValidatable and to return the necessary System.Web.Mvc.ModelClientValidationRule to the HTML renderers when required.

Note how the actual checksum type is passed into the constructor of the abstract class.

public abstract class CheckSumNumberAttribute : ValidationAttribute, IClientValidatable{                       

  private string checkSumType;

  protected CheckSumNumberAttribute(string checkSumType){
    this.checkSumType = checkSumType;
  }
       
  public string CheckSumType{
    get {return checkSumType;}
  }
      
  public IEnumerable<ModelClientValidationRule> 
   GetClientValidationRules(ModelMetadata metadata, ControllerContext context){
    yield return 
       new ModelClientValidationCheckSumNumberRule(this.ErrorMessageString, this.CheckSumType);
  }
}

The code for ModelClientValidationCheckSumNumberRule is shown below:

public class ModelClientValidationCheckSumNumberRule : ModelClientValidationRule{

   public ModelClientValidationCheckSumNumberRule(string errorMessage, string checkSumType)
            : base(){
     this.ErrorMessage = errorMessage;
     this.ValidationType = "checksum";
     this.ValidationParameters.Add("checksumtype", checkSumType);
  }
}

ABN and ACN server side validation

The implementations of the two concrete classes, AustralianBusinessNumberAttribute and AustralianCompanyNumberAttribute, are shown below. The algorithms are publicly available from the Australian Taxation Office and the Australian Business Register.

public class AustralianBusinessNumberAttribute : CheckSumNumberAttribute{
  private static int[] ABN_WEIGHT = { 10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19 };
  private static Regex ABNRegex = new Regex("^\\d{11}$"); // anchored: exactly 11 digits

  public AustralianBusinessNumberAttribute() : base("abn") { }

  public override bool IsValid(object val){
    string ABN = val as string;
    bool valid = false;
    if (ABN != null){
      ABN = ABN.Replace(" ", "").Trim();
    }

    if (string.IsNullOrEmpty(ABN)){
      return true;
    }

    if (!ABNRegex.IsMatch(ABN)){
      return false;
    }

    int sum = 0;
    try{
      for (int i = 0; i < ABN_WEIGHT.Length; i++){
        // Subtract 1 from the first left digit before multiplying against the weight
        if (i == 0){
          sum = (Convert.ToInt32(ABN.Substring(i, 1)) - 1) * ABN_WEIGHT[i];
        }else{
          sum += Convert.ToInt32(ABN.Substring(i, 1)) * ABN_WEIGHT[i];
        }
      }
      valid = (sum % 89 == 0);
    }
    catch{
      valid = false;
    }
    return valid;
  }
}

public class AustralianCompanyNumberAttribute : CheckSumNumberAttribute{
  private static int[] ACN_WEIGHT = { 8, 7, 6, 5, 4, 3, 2, 1 };
  private static Regex ACNRegex = new Regex("^\\d{9}$"); // anchored: exactly 9 digits

  public AustralianCompanyNumberAttribute(): base("acn"){}

  public override bool IsValid(object val){
    string ACN = val as string;
    bool valid = false;

    if (ACN != null){
      ACN = ACN.Replace(" ", "").Trim();
    }

    if (string.IsNullOrEmpty(ACN)){
      return true;
    }

    if (!ACNRegex.IsMatch(ACN)){
      return false;
    }

    int remainder = 0;
    int sum = 0;
    int calculatedCheckDigit = 0;

    try{
      // Sum the multiplication of all the digits and weights
      for (int i = 0; i < ACN_WEIGHT.Length; i++){
        sum += Convert.ToInt32(ACN.Substring(i, 1)) * ACN_WEIGHT[i];
      }

      // Divide by 10 to obtain remainder
      remainder = sum % 10;

      // Complement the remainder to 10
      calculatedCheckDigit = (10 - remainder == 10) ? 0 : (10 - remainder);

      // Compare the calculated check digit with the actual check digit
      valid = (calculatedCheckDigit == Convert.ToInt32(ACN.Substring(8, 1)));
    }
    catch{
      valid = false;
    }
    return valid;
  }
}
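To make the arithmetic concrete, here is a minimal worked example of both calculations, run against sample numbers that satisfy the checks. The function names are mine, not part of the attribute code above:

```javascript
// Sketch of the two checksum calculations with worked sample numbers.
function abnChecksum(abn) {
  var weights = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];
  var sum = 0;
  for (var i = 0; i < weights.length; i++) {
    var digit = parseInt(abn.charAt(i), 10);
    if (i === 0) digit -= 1;            // subtract 1 from the first digit
    sum += digit * weights[i];
  }
  return sum;                           // the ABN is valid when sum % 89 === 0
}

function acnCheckDigit(acn) {
  var weights = [8, 7, 6, 5, 4, 3, 2, 1];
  var sum = 0;
  for (var i = 0; i < weights.length; i++) {
    sum += parseInt(acn.charAt(i), 10) * weights[i];
  }
  var complement = 10 - (sum % 10);     // complement the remainder to 10
  return complement === 10 ? 0 : complement;
}

abnChecksum("51824753556");   // 534, and 534 % 89 === 0, so the ABN is valid
acnCheckDigit("010249966");   // 6, which matches the ninth digit, so the ACN is valid
```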

That’s it for the server-side programming. If you add these attributes to your data model, the model binder will validate accordingly during postback.

Using the attributes

public class CompanyModel{
  public string CompanyName{get;set;}

  [AustralianBusinessNumber(ErrorMessage="The ABN is incorrect")]
  public string AustralianBusinessNumber{get;set;}

  [AustralianCompanyNumber(ErrorMessage = "The ACN is incorrect")]
  public string AustralianCompanyNumber{get;set;}
}

When the editor for this model is emitted, the following HTML is generated:

<input data-val="true" 
data-val-checksum="The ABN is incorrect" 
data-val-checksum-checksumtype="abn" 
id="AustralianBusinessNumber" 
name="AustralianBusinessNumber" 
type="text" value="" />

<input data-val="true" 
data-val-checksum="The ACN is incorrect" 
data-val-checksum-checksumtype="acn" 
id="AustralianCompanyNumber" 
name="AustralianCompanyNumber" 
type="text" value="" />
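The unobtrusive library discovers rules purely from this attribute naming convention: `data-val-<rule>` carries the error message, and `data-val-<rule>-<param>` carries each rule parameter. The sketch below is not the library’s actual implementation, just an illustration of that convention; the function name is mine:

```javascript
// Illustrative only: map data-val-* attribute names to a rule and its parameters,
// the way jquery.validate.unobtrusive's naming convention works.
function parseUnobtrusiveAttributes(attrs) {
  var rules = {};
  Object.keys(attrs).forEach(function (name) {
    var m = name.match(/^data-val-([a-z]+)(?:-([a-z]+))?$/);
    if (!m) return;                       // skips the bare "data-val" switch
    var rule = m[1], param = m[2];
    rules[rule] = rules[rule] || { message: null, params: {} };
    if (param) rules[rule].params[param] = attrs[name];
    else rules[rule].message = attrs[name];
  });
  return rules;
}

var parsed = parseUnobtrusiveAttributes({
  "data-val": "true",
  "data-val-checksum": "The ABN is incorrect",
  "data-val-checksum-checksumtype": "abn"
});
// parsed.checksum.message  -> "The ABN is incorrect"
// parsed.checksum.params.checksumtype -> "abn"
```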

Client side ABN and ACN validation

To enable client-side validation, you need to define the validation methods. I have created client-side versions of the checksum validators for ABN and ACN, as shown below. Note the use of a local namespace ‘Xhalent’ to prevent pollution of the global object.


var Xhalent = Xhalent || {};

Xhalent.validateABN = function (value) {

  value = value.replace(/[ ]+/g, '');

  // Anchored so that strings longer than 11 digits fail
  if (!value.match(/^\d{11}$/)) {
    return false;
  }

  var weighting = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];

  // Subtract 1 from the first digit before applying its weight
  var tally = (parseInt(value.charAt(0), 10) - 1) * weighting[0];

  for (var i = 1; i < value.length; i++) {
    tally += parseInt(value.charAt(i), 10) * weighting[i];
  }

  return (tally % 89) == 0;
};

Xhalent.validateACN = function (value) {
  value = value.replace(/[ ]+/g, '');

  // Anchored so that strings longer than 9 digits fail
  if (!value.match(/^\d{9}$/)) {
    return false;
  }

  var weighting = [8, 7, 6, 5, 4, 3, 2, 1];
  var tally = 0;
  for (var i = 0; i < weighting.length; i++) {
    tally += parseInt(value.charAt(i), 10) * weighting[i];
  }

  // Complement the remainder to 10; a complement of 10 maps to 0
  var check = 10 - (tally % 10);
  check = check == 10 ? 0 : check;

  // Compare against the ninth digit, which is the check digit
  return check == parseInt(value.charAt(8), 10);
};

Hooking it up

You then need to register these functions with the jQuery validation framework. I’ve added an inline function that switches to the correct implementation based on the checksum type parameter.

//get a reference to the global jQuery validator object and addMethod named 'xhalent_checksum'
$.validator.addMethod("xhalent_checksum", function (value, element, checksumtype) {
  if (value == null || value.length == 0) {
    return true;
  }

  if (checksumtype == 'abn') {
    return Xhalent.validateABN(value);
  } else if (checksumtype == 'acn') {
    return Xhalent.validateACN(value);
  }

  // Unknown checksum type: don't block the form
  return true;
});

The final step is to wire up the adapter for the unobtrusive validation library to the now-registered validation method. The method I have used takes three parameters: the name of the rule, the name of the rule’s single parameter, and the jQuery validation method to call to process the rule. There are other methods for adding adapters; for a validator that has no parameters, the best method to use is $.validator.unobtrusive.adapters.addBool(adaptername, rulename).

$.validator.unobtrusive.adapters.addSingleVal('checksum', 'checksumtype', 'xhalent_checksum');

It’s important that any JavaScript you have written is loaded after the libraries on which it depends have been loaded. For that reason, I tend to incorporate these functions into a separate .js file, which can then be minified.
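A typical script ordering might look like the following. The file names and versions are from a default MVC 3 project and will vary with your setup, and ‘xhalent.validators.js’ is a hypothetical name for the file containing the custom validators:

```html
<script src="/Scripts/jquery-1.5.1.min.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.min.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.unobtrusive.min.js" type="text/javascript"></script>
<script src="/Scripts/xhalent.validators.js" type="text/javascript"></script>
```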

Conclusion

This post has demonstrated how to implement client-side unobtrusive validation for custom validators in ASP.Net MVC 3, and shown implementations for the validation of the Australian Business Number and the Australian Company Number.

