
Lightweight ASP.NET Background Processes

Imagine a tool that performs some long-running background task on the server (on the order of several minutes) and wants to notify users of completion. The notification may take the form of an email, or the user may remain on the website and receive a visual confirmation that the task is complete. The “industrial strength” solution to this is SignalR, which provides all sorts of client/server connectivity capabilities.

In many cases, this may be an excessive solution. Running a background task, with both backend and frontend notifications, is straightforward and doesn’t require a lot of wiring. This post shows one simple approach that will work with ASP.NET Web API or MVC.

Core Concept

Use a dictionary in application state to track background tasks and their status. Match each task to an ID for reportability.

This technique will not work if the runtime of the background task is too long compared to the application idle time-out on IIS. For this reason, the technique is appropriate only for non-essential tasks which don’t expect to run more than a few minutes.

Long Running Job Class

Initiating, checking, and maintaining jobs is handled by the long running jobs class. The class has static methods to create and check a job; and instance values of an ID (which can be sent to/from the client) and the task itself.

The non-static fields and methods are the easiest part: they are primarily just wrappers around the ID and the Task.

public class LongRunningJob
{

  public enum JobStatus { Running, Done, Failed }

  /// <summary>
  /// A unique identifier for this job.  Send this ID to the client, 
  /// and it may use it to query the status of the job.
  /// </summary>
  public readonly Guid ID;

  /// <summary>
  /// The actual background task.  Use a continuation to send email 
  /// or perform other server-side operations when the task completes.
  /// </summary>
  public readonly Task Task;
                
  protected LongRunningJob(Guid id, Task task)
  {
    ID = id;
    Task = task;
  }

  /// <summary>
  /// Returns a status based on the Task
  /// </summary>
  public JobStatus Status
  {
    get
    {
      if (Task.IsFaulted)
        return JobStatus.Failed;
      else if (Task.IsCompleted || Task.IsCanceled)
        return JobStatus.Done;
      else
        return JobStatus.Running;
    }

  }

}

The real work is done in the static methods which provide factory construction of a long running job, and status querying via the application state dictionary. The dictionary is accessed via a “Tasks” static property.

Starting a new job is done via the factory method:

/// <summary>
/// Start a new job.  The method provided will be placed into the task 
/// queue and may be started immediately.
/// </summary>
/// <param name="job">The method to run in the background</param>
/// <returns>An object containing the task and a unique identifier that
/// can be used to retrieve the job (and check its status)</returns>
public static LongRunningJob StartJob(Action job)
{
  Guid id = Guid.NewGuid();
  var task = Task.Factory.StartNew(job);
  var lrj = new LongRunningJob(id, task);
  Tasks.Add(id, lrj);
  return lrj;
}

Retrieving a job by ID queries the Tasks dictionary:

/// <summary>
/// Retrieve a job by ID.  If no matching job is found, 
/// returns null.
/// </summary>
/// <param name="id">The ID from LongRunningJob.ID</param>
/// <returns>The LongRunningJob, 
/// or null if no matching job found</returns>
public static LongRunningJob RetrieveJob(Guid id)
{
  if (Tasks.ContainsKey(id))
  {
    return Tasks[id];
  }
  else
  {
    return null;
  }
}

We define the application state dictionary of tasks via a property which instantiates it on first request.

protected static IDictionary<Guid, LongRunningJob> Tasks
{
  get
  {
    var dict = HttpContext.Current.Application["_LongRunningJob"] as 
      IDictionary<Guid, LongRunningJob>;
    if (dict == null)
    {
      dict = new Dictionary<Guid, LongRunningJob>();
      HttpContext.Current.Application["_LongRunningJob"] = dict;
    }
    return dict;
  }
}

Sample Usage

To demonstrate usage, we create a sample application which performs long-running jobs as sleeping threads of various lengths. Failures can be introduced by intentionally throwing exceptions.

Here are the sample Web API controller methods:

[HttpPost]
public Guid CreateRegularJob([FromBody] int seconds)
{
  var job = Models.LongRunningJob.StartJob(new Action(() => 
  {
    // TODO: some long running task
    System.Threading.Thread.Sleep(seconds * 1000);
  }));
  job.Task.ContinueWith(new Action<System.Threading.Tasks.Task>(t => {
    // TODO: send email notification of completion
  }));
  return job.ID;
}

[HttpPost]
public Guid CreateFailJob([FromBody] int seconds)
{
  var job = Models.LongRunningJob.StartJob(new Action(() =>
  {
    System.Threading.Thread.Sleep(seconds * 1000);
    throw new Exception();
  }));
  return job.ID;
}

[HttpGet]
public Models.LongRunningJob.JobStatus CheckStatus(Guid id)
{
  var job = Models.LongRunningJob.RetrieveJob(id);
  return job.Status;               
}

On the client side, sample jobs are created by POSTing to the appropriate create-job methods. Each ID is placed in a table, and a polling routine queries the status of each running ID every second until it completes or fails. The AJAX calls return immediately rather than blocking until the long-running task completes.

A frontend sample shows several jobs queued up to run simultaneously for various lengths:
[Screenshot: table of several queued jobs, polled each second until they complete or fail]

Future Work

  • Tasks can be extended to return data.
  • Old jobs should be removed from the dictionary to avoid memory leaks; a possible cleanup sketch follows below.
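
As a starting point, here is a hedged sketch of such a cleanup method (not part of the original class; it assumes it lives on LongRunningJob and is called periodically, e.g. at the start of StartJob, and it requires System.Linq):

/// <summary>
/// Remove jobs that have finished (Done or Failed) from the dictionary.
/// Clients polling a removed ID will get null back from RetrieveJob.
/// </summary>
public static void RemoveFinishedJobs()
{
  var finished = Tasks.Where(kv => kv.Value.Status != JobStatus.Running)
                      .Select(kv => kv.Key)
                      .ToList();
  foreach (var id in finished)
  {
    Tasks.Remove(id);
  }
}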

“Effective JavaScript”, a Review

David Herman’s book “Effective JavaScript” is written for developers who are already comfortable with basic JavaScript syntax and semantics, but want to learn the idioms (he calls it “pragmatics”) of the language. There’s an old saying, “You can write FORTRAN in any language,” meaning that developers who learned the best practices of one (possibly older) platform may not update their style on a newer platform and may simply continue to code in the older style in a more (or differently) capable language.

Egregious examples of this weakness include non-object oriented style in C++ or non-functional style in C#. As an experienced C# developer who writes enough JavaScript to get by, and no more, I often worry that I’m “writing C# in JavaScript” (see also “How Good C# Habits can Encourage Bad JavaScript Habits“). Herman writes his book to developers in a similar situation.

The book is divided into a series of items, where each item addresses one facet of JavaScript that might be misunderstood, misused, or unknown to a developer inexperienced in JavaScript.

Most Interesting Items

Coming from C#, I found several items to be particularly valuable:

Item 13: Use Immediately Invoked Function Expressions to Create Local Scopes

In C# there are (somewhat rare) occasions where a variable needs to be declared inside a loop to avoid an incorrect reference. In JavaScript, however, two language features conspire to make this difficulty substantially more dangerous: all “outer” variables are stored in the closure by reference, not by value, and JavaScript does not support block scoping. In other words, you can’t have a value type, and you can’t declare a local variable scoped inside a loop (or other block).

Since functions are the only scoping construct in JavaScript, a workaround is to create and immediately execute a function to enclose the scope.

Item 18: Understand the Difference between Function, Method, and Constructor Calls

In C#, classes (as opposed to objects) are a real distinction, and a constructor is a specific “kind” of thing with special compiler treatment. In JavaScript, it becomes clear through exposure that functions can serve as classes (that is, be instantiated with “new”) but many of the subtleties are unclear. When is a function also a constructor? Since there are no classes, only functions and objects, what does “new” actually do? This item helps clear up some of these confusions.

There’s a lot that isn’t answered by this item, but it alludes to answers that come in later items (mostly in Chapter 4, “Objects and Prototypes”).

Item 21: Use apply to Call Functions with Different Numbers of Arguments

In C#, methods usually have a fixed number of parameters. A method can be made to accept variable parameters using a params array. Consider the String.Format method:

public static string Format(
	string format,
	params Object[] args
)

Such a method could then be called either with an actual array or with variable parameters. The “work” of allowing this flexibility is done by the method definition, in the use of the “params” keyword.

String.Format("{0} - {1} - {2}", someArrayOfThreeObjects);
// OR
String.Format("{0} - {1} - {2}", object1, object2, object3);

In JavaScript, the same capability exists, but this work is instead done by the caller using the “apply” syntax. An interesting comparison.

Item 36: Store Instance State Only on Instance Objects

What the author here shows as an “accident” could easily be reclassified as the JavaScript distinction between static and instance members. Instance members go on the instance. Static members go on the prototype. I suppose the distinction is only equivalent for fields/properties, not for methods.

Item 49: Prefer for Loops to for…in Loops for Array Iteration

C# programmers are very accustomed to using foreach loops for iterating over the items in a collection (such as an array). This naturally leads to the temptation to use the similar-seeming for…in loop in JavaScript to iterate over an array. Bad plan. Use .forEach method on arrays instead (in ES5).

Concurrency

The final chapter is on concurrency. Overall, this section made me appreciate the .NET Task Parallel Library and C#’s async/await that much more. It also effectively exposed the dangers of trying to handle concurrency “on your own” without using appropriate high-level constructs.

Overall

An excellent review of JavaScript “gotchas” and best practices that might not be obvious to a developer approaching from another language, such as C# or Java. It’s a short book and a quick read, but worthwhile.

“Bolt-On” CSRF Protection in Intranet Web API Windows Authentication Scenarios

Web API is a powerful tool for constructing web services, but also for separating concerns of a web application. However, like any web application, security concerns must be addressed. One of the most common security concerns with authenticated operations is cross-site request forgery. In a traditional ASP.NET MVC application, we can use the built-in AntiForgeryToken mechanism to place a unique token within each page served to the client, and require that token be included in any requests back to the server. Since an attacker cannot access the contents of the actual served page, they are unable to acquire the token; and since the token is not a cookie or credential, the browser will not send it automatically. Thus, an attacking page cannot craft an effective CSRF attack.
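
As a reminder, a minimal sketch of that traditional MVC pattern (the controller and action here are illustrative, not from a real application): the view emits the hidden token with @Html.AntiForgeryToken(), and the action validates it.

using System.Web.Mvc;

public class AccountController : Controller
{
   // Rejects any POST whose form data does not carry a token matching
   // the one issued to this user in the served page.
   [HttpPost]
   [ValidateAntiForgeryToken]
   public ActionResult ChangeEmail(string newEmail)
   {
      // ... perform the authenticated operation ...
      return RedirectToAction("Index");
   }
}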

When moving into the Web API world, things are a little more muddy. The general tack has been to take a different angle on authentication and authorization altogether: for example, the use of certificates to sign each request, bearer tokens, OAuth, or similar approaches. However, for purely intranet applications, we may still wish to use Windows authentication. This means the end user of a website which calls our API will automatically do so with that user’s Windows credential. (It is also possible for server-side applications to use impersonation to call the API with a particular Windows service account; however, this is not subject to CSRF vulnerability.)

There are also ways to make a joint MVC/Web API application use the MVC anti-forgery tokens, which is really neat, but only works if the API and application are one cohesive web application solution, as otherwise the Web API’s instance of ASP.NET would differ from the MVC instance of ASP.NET, and they would not share the set of valid anti-forgery tokens. It also doesn’t work if “plain” HTML/JS is being used to access the API.

However, by using Windows authentication, we open our API up to CSRF attackers. An intranet user may be using a web application which uses our API (and thus, carries their credential), while at the same time they are browsing evilattacksites.com which can construct a form with the intranet URL and post it to the API’s intranet address. This post will carry the user’s credentials with it, and execute a successful attack.

You might note that the API is on an intranet site and the attacker is (probably) external, so CORS might be suggested as an answer. However, CORS only controls what data can be read; it does not protect against CSRF submissions.

We want Web API CSRF protection that:

  • Works with any client (doesn’t require pages generated by MVC)
  • Works with Windows authentication/intranet
  • Is easy to “bolt-on” to existing Web API services AND clients

Closing the CSRF Vulnerability

The solution we will use is to provide a per-IP CSRF token that must be attached to the HTTP header and is validated on all POST/PUT/DELETE requests.

The technique here is to construct a message handler which will process all Web API requests before they go to the controller. It will validate the presence of a CSRF token when needed, and produce it when requested. Thus, there will be no changes to the controllers! The only server-side application change (besides importing the message handler) is to add it in the WebApiConfig Register method.

config.MessageHandlers.Add(new Infrastructure.CSRFMessageHandler());
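
For context, here is roughly where that line sits in a typical WebApiConfig (a sketch using the default route template; this scaffolding is assumed rather than taken from the original project):

using System.Web.Http;

public static class WebApiConfig
{
   public static void Register(HttpConfiguration config)
   {
      // The handler intercepts every API request before it reaches a controller
      config.MessageHandlers.Add(new Infrastructure.CSRFMessageHandler());

      config.Routes.MapHttpRoute(
         name: "DefaultApi",
         routeTemplate: "api/{controller}/{id}",
         defaults: new { id = RouteParameter.Optional });
   }
}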

On the client side, none of the individual requests to the API need to be altered. An initial login call (handled by the CSRFMessageHandler on the server) acquires the CSRF token and places it into the headers for all subsequent ajax calls. There will be no other changes needed to the client. However, clients which consume multiple Web API services secured in this way will have a more complex setup procedure. 🙂

$(function () {
   $.ajax("../api/login")
      .done(function (data) { $.ajaxSetup({ headers: 
         { "X-CSRF-Key": data } }) })
      .fail(function () { /* DO SOMETHING */ });
});

With no further changes, the API is now secure, and the clients receive and use the CSRF token to gain access. This protection can easily be “bolted on” to existing APIs and clients.

Implementation of CSRFMessageHandler

The real magic of this technique is in CSRFMessageHandler. This handler intercepts each call to the Web API before it is sent to the controller. There are two parts: the CSRF token verification, and the token generation.

First, the overall class structure of the handler. We prepare an RNGCryptoServiceProvider for generating the token, and extract the request context so we have access to the ASP.NET application state object, where the CSRF tokens will be stored (you could also store them in a database or other repository).

public class CSRFMessageHandler : DelegatingHandler
{
   private static readonly RNGCryptoServiceProvider rng = 
      new RNGCryptoServiceProvider();
   public const string CSRF_HEADER_NAME = "X-CSRF-Key";      

   protected override async Task<HttpResponseMessage> SendAsync(
      HttpRequestMessage request, CancellationToken cancellationToken)
   {            
      var method = request.Method.Method.Trim().ToUpper();

      // Extract the context from the request property
      // so that application "state" can be accessed
      var context = ((HttpContextBase)request.
         Properties["MS_HttpContext"]);

      // PART ONE: On "login" request, create token.  
      // Put this here so no need to modify controller.
      ... see below
      // PART TWO: For update methods, enforce the CSRF key
      ... see below

      return await base.SendAsync(request, cancellationToken);
   }
}

When the client sends a “login” request (any request ending in /login), it will be intercepted by the handler and a CSRF token will be created and returned. If desired, you could check the dictionary to see if the client already has a key and reuse it. You could send back the username along with the key. You could set an expiration time for the key and make a process to remove old (expired) keys. Some additional improvement is definitely possible.

if (method == "GET" && 
   request.RequestUri.AbsolutePath.ToUpper().EndsWith("/LOGIN"))
{
   var keys = context.Application[CSRF_HEADER_NAME] as 
      IDictionary<string, string>;
   if (keys == null)
   {
      keys = new Dictionary<string, string>();
      context.Application[CSRF_HEADER_NAME] = keys;
   }

   byte[] bkey = new byte[16];
   rng.GetBytes(bkey);
   var key = Convert.ToBase64String(bkey); 
   keys[context.Request.UserHostAddress] = key;
   return new HttpResponseMessage(HttpStatusCode.OK) 
   { 
      Content = new StringContent(key) 
   };
}

For all modification requests (POST, PUT, DELETE), the handler will validate that the appropriate key is included in the headers.

if (method == "POST" || method == "PUT" || method == "DELETE")
{
   HttpResponseMessage response = request.CreateErrorResponse(
      HttpStatusCode.Forbidden,
      "POST/PUT/DELETE require valid and matching anti-CSRF key, use Login method");

   string key = null;
                
   if (request.Headers.Contains(CSRF_HEADER_NAME))
      key = request.Headers.GetValues(CSRF_HEADER_NAME).Single();

   var keys = context.Application[CSRF_HEADER_NAME] as 
      IDictionary<string, string>;
   string ipaddr = context.Request.UserHostAddress;
   
   // match to the key for user's IP address
   if (keys == null ||
      !keys.ContainsKey(ipaddr) ||
      keys[ipaddr] != key) throw new HttpResponseException(response);
}

“Class”-like Inheritance, Overriding, and Superclass Calls in JavaScript

JavaScript is a prototype-based language with objects, rather than classes, as its fundamental construct. However, the use of functions to approximate classes (complete with the “new” keyword) is very common. Although we’re often told that JavaScript supports inheritance, it’s not entirely obvious how class-like inheritance works in a prototype-based language such as JavaScript. Adding to the confusion are the various syntax approaches that may be used to define objects and “classes”.

As an aside, the techniques shown in this post require ES5 support, which, for example, excludes IE 8 and below. There are fairly simple shims that can be used to enable support in older browsers.

We’ll first consider a “naive” approach to defining a class in JavaScript, then discuss why this approach isn’t suitable for an inheritance scenario.

function BaseClass(initialValue) {
  this.value = initialValue;
  this.compute = function() { return this.value + 1; };
};

This class seems to work fine:

var instance = new BaseClass(3);
console.log(instance.compute()); // outputs 4

However, this approach is not suitable for inheritance. The function “compute” (and any other functions defined in this way) will be defined not on the “class” (prototype) but on the object itself. When overridden, there will be no reasonable way to access the base function, because the override will actually replace the original function on that object. Thus, functions should be defined on the prototype, while fields are defined on the object itself. If you define a field on the prototype, it is like a static member in C#.

function BaseClass(initialValue) {
  this.value = initialValue;
};
BaseClass.prototype.compute = function() { return this.value + 1; };

This produces the same result as before, but now the function is defined on the prototype instead of the object, so it can be referenced by derived classes.

Now imagine we want to construct a derived class. There are two important considerations: ensuring that the super-constructor is called correctly, and ensuring that “instanceof” continues to give the correct result. There is no “base” or “super” keyword in JavaScript, so you must use the actual name of the base class whenever you want to refer to it.

function DerivedClass(initialValue, secondValue) {
  BaseClass.call(this, initialValue); // *** Call super-constructor
  this.anotherValue = secondValue; // Additional field
};
// *** Establish prototype relationship
DerivedClass.prototype = Object.create(BaseClass.prototype); 

// Additional function
DerivedClass.prototype.anotherCompute = 
   function() { return this.compute() + this.anotherValue; }; 

Now let’s look at the things we can do with an instance of the derived class. They should all work as expected.

var instance2 = new DerivedClass(4, 6);
// base method with base fields: outputs 5 (4+1)
console.log(instance2.compute()); 

// derived method with base method and derived field: 
// outputs 11 ((4+1)+6)
console.log(instance2.anotherCompute()); 

// outputs true
console.log(instance2 instanceof DerivedClass); 

// ALSO outputs true
console.log(instance2 instanceof BaseClass); 

Now imagine we want to create a derived class that overrides some behavior of the base class. First, we will create a class inheriting from the previous derived class, and override the compute method.

function ThirdClass(initialValue, secondValue) {
  // In this case, the "base" class 
  // of this class is called "DerivedClass"
  DerivedClass.call(this, initialValue, secondValue); 

  // no new member fields
};
ThirdClass.prototype = Object.create(DerivedClass.prototype); 

// override base function compute
ThirdClass.prototype.compute = function() { return 100; }; 

Let’s see this overridden function in action. The override also impacts, for example, the “anotherCompute” function, since it calls compute. The overriding works exactly as you would expect in, say, C#.

var instance3 = new ThirdClass(4, 6);
// overridden method always outputs 100
console.log(instance3.compute()); 

// derived method with OVERRIDDEN method (inside) and derived field: 
// outputs 106 (100+6)
console.log(instance3.anotherCompute()); 

// confirm that the compute has only been overridden 
// for instances of ThirdClass: 
// outputs 5, as before
console.log(instance2.compute()); 

One final task remains: can we override a function, and then still make a call to the base class version of that function? In this case, we’ll replace the definition of compute in the third class with one that overrides compute while also making a call to the base class definition of compute. This is where the technique for function definition (defining on the prototype vs. defining on the object) becomes critical. Since the functions are actually defined on the prototypes, we can call back to the base class prototype for that version of the function. Remember that “ThirdClass” derives from “DerivedClass”.

// override base function compute, but still use it in this function
ThirdClass.prototype.compute = function() { 
  return DerivedClass.prototype.compute.call(this) * 5; 
}; 

Now we’ll retry the two instance3 calls from earlier:

// overridden method calls base implementation of compute, 
// which returns 5, and multiplies it by 5. output: 25
console.log(instance3.compute()); 

// derived method with overridden method
// (inside, which calls base implementation) 
// and derived field: outputs 31 ((5*5)+6)
console.log(instance3.anotherCompute()); 

We now have a reliable technique for constructing “class”es in JavaScript which fully supports inheritance, overriding, and superclass calls in the way that is conventional in class-based object-oriented languages.

Remember, though, that this is “writing C#/Java in JavaScript” and should be used sparingly. Also, some authors believe implementation inheritance is usually bad even in languages which support it as a primary form of reuse.

Quickly Tell When You’re Connected to Production in SQL Management Studio

This is one of those things that “everybody knows” but I didn’t find out about until years later (like pressing Windows+L to lock the workstation).

If you have access to various environments in SQL (dev, multiple test/QA machines, and production), sometimes there can be fear: am I connected to the correct server? I often stop and re-read the connection information to be sure I’m on the right server before I execute a command that could be particularly troublesome. I intentionally minimize my permissions in production to help reduce this risk, but some permissions remain; and some folks have broader production permissions because they make changes there frequently.

So, here’s an easy way to help reduce anxiety. In SQL Management Studio 2008 and above, you can set custom status bar colors depending on which server you are connected to. Very easy.

When establishing the connection, simply press the “Options” button on the bottom right:
[Screenshot: Connect to Server dialog with the Options button]

Then, check the box to use a custom color. SQL Management Studio will save this setting and, for every new query window opened against that server, will color the bottom status bar accordingly. It’s an easy way to “warn” yourself when you’re connected to a shared environment, and a quick confirmation when you’re connected only to a local dev environment.

[Screenshot: custom status bar color option in the connection properties]

Effective Internal/External Secure WCF Services

We have a service-based application which has both internal (intranet) and external (internet) customers. We want to expose a secure WCF service for this application using appropriate and effective protocols based on the kind of connection.

Originally, the “obvious” answer appeared to be use a WCF routing service. A WCF router can be placed on an externally-facing web server and proxy requests to the internal server which actually serves the requests. However, upon further investigation, it turns out that the WCF router is extremely limited in the default implementation when security is in use:

It turns out that the WCF team did not provide a very good solution here. The only use case that supports security context forwarding is message security with Windows credentials. … Other use cases such as username password , X.509 or federated credentials does not allow security context forwarding. It means that the router can be configured to enforce message security but the service must be configured to disable security and it cannot access the security context such as the user’s Identity. (Source)

Well, that’s a total non-starter. Since we have external users, we support custom username/password authentication. The server’s behavior is tightly integrated with the user’s identity, which must be known. Furthermore, the idea of transporting messages completely in the open between the router and the primary server was not appealing. We could enable that message transport by re-attaching the client’s credentials at the router; however, this would require loading the primary server’s private key certificate on the router server (which is a DMZ server). Neither secure nor appealing.

The solution was to use our NetScaler DMZ device to accept HTTP connections and forward them to an HTTP WCF endpoint on the primary server. Encryption is delegated to the message level, using a certificate, with authentication handled by username/password also at the message level. The client (whether internal or external) encrypts and signs the message with the server’s public key certificate. The private key only exists on the server. Thus, the communication is secure at all points.

In order to ensure no lapses in security implementation occur, we can set the “EncryptAndSign” protection level on the service interface. This ensures that the service will reject any circumstance in which the message is not encrypted.

[ServiceContract(ProtectionLevel = ProtectionLevel.EncryptAndSign)]
public interface IWsbisService { ... }

Since the requests will be forwarded from the NetScaler, the URL that the client requests will not match the actual answering host. By default, WCF will reject connections of this type. It is necessary to modify the service behavior to disable address filtering.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class WsbisService : IWsbisService { ... }    

In order to implement this technique, we relieve the WCF service of any transport-level security. Although external transport-level security (HTTPS) can be provided by the NetScaler, this interferes with WCF (it wants to encrypt either the message or the transport, but not both; and since the HTTPS is stripped off at the NetScaler, transport security won’t work end to end). All WCF security (encryption and signing) will be provided at the message level, which can then be passed without inspection by the NetScaler.

We then configure the two server endpoints (HTTP, which will serve the NetScaler and external connections, and Net.TCP, which will be faster for internal connections) to both require message security. Although message security is slower than transport security, WCF cannot provide consistent transport security in these scenarios, given that we don’t wish to expose the private key on an externally facing server, which would be necessary to perform the encryption at the transport level.

<netTcpBinding>        
   <binding name="netTcpBindingConfig">
      <security mode="Message">
         <message clientCredentialType="UserName"/>
      </security>
   </binding>
</netTcpBinding>
<wsHttpBinding>
   <binding name="wsHttpBindingConfig" messageEncoding="Mtom">
      <security mode="Message">
         <message clientCredentialType="UserName"/>
      </security>
   </binding>
</wsHttpBinding>

Even though “UserName” is the credential type, a certificate will still be required for encryption. You will need to generate and apply a certificate. We load the certificate in code, and use a custom certificate validator to ensure the thumbprint matches.

On the server:

X509Certificate2 certificate = new X509Certificate2(certFile);
host.Credentials.ServiceCertificate.Certificate = certificate;

On the client:

client.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode = System.ServiceModel.Security.X509CertificateValidationMode.Custom;
client.ClientCredentials.ServiceCertificate.Authentication.CustomCertificateValidator = Certificate;

Where “Certificate” is an instance of a certificate validator class (X509CertificateValidator) which loads the public key of the certificate and compares the thumbprint of the presented certificate to the local certificate. This prohibits “man in the middle” type attacks by forcing one specific certificate (whose private key exists only on the primary server) to be used for encryption. Together with a client username/password, mutual authentication is assured. The server is thus authenticated to the client with this certificate, and the client is authenticated to the server with a username/password.
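
For illustration, a minimal thumbprint-matching validator might look like the following (a sketch under assumed names; the original post does not show its validator implementation):

using System;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.Security.Cryptography.X509Certificates;

public class ThumbprintCertificateValidator : X509CertificateValidator
{
   private readonly string expectedThumbprint;

   public ThumbprintCertificateValidator(string publicCertFile)
   {
      // Load the local copy of the server's public certificate
      expectedThumbprint = new X509Certificate2(publicCertFile).Thumbprint;
   }

   public override void Validate(X509Certificate2 certificate)
   {
      // Accept only the expected certificate; anything else is rejected,
      // which blocks substitution of a different certificate by a man in the middle.
      if (!string.Equals(certificate.Thumbprint, expectedThumbprint,
         StringComparison.OrdinalIgnoreCase))
      {
         throw new SecurityTokenValidationException(
            "Presented certificate does not match the expected service certificate.");
      }
   }
}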

This architecture grants end-to-end security for both intranet and external customers without exposure of keys or dangerous “en route” decryption/re-encryption.

(Note: the diagram shows HTTPS but it’s just HTTP, with the message content being encrypted)
[Diagram: client connections through the NetScaler to the internal WCF endpoint, with message-level encryption end to end]

Handling Performance Issues with INotifyPropertyChanged and Expensive PropertyChanged Event Handlers

Consider the case of an object which implements INotifyPropertyChanged and has properties that may be dependent on each other. You may have a circumstance where a single user command causes several properties to change, and, depending on the object’s implementation, may generate several PropertyChanged events. If an event handler for the object’s PropertyChanged event executes an expensive or UI intensive task, the multiple PropertyChanged events can cause a real performance problem.

One approach to this problem is to “batch” the PropertyChanged events, combining any duplicate notifications, and dispatch them as a single event after some quiet timeout. The quiet timeout can be very small to maintain the appearance of responsiveness while still capturing multiple change events into a single event dispatch.

I constructed a static class, “AccumulatedPropertyChangeNotification”, with two public methods: AddHandler and RemoveHandler. These methods allow an event handler to be attached to an INotifyPropertyChanged object indirectly, through the accumulator. To appropriately handle accumulated property changes (which may include more than one property), I created an event args type that carries a sequence of changed properties.

public class AccumulatedPropertyChangedEventArgs : EventArgs 
{
    /// <summary>
    /// A distinct list of properties changed within the last notification cycle.
    /// If any notification with an empty or null property name (meaning possibly
    /// all properties changed) has been submitted, this property will be null.
    /// </summary>
    public IEnumerable<string> Properties { get; set; }
}

In order to accumulate dispatches, I used a timer which checks which events are queued and, when appropriate, dispatches them in bulk. This means that the AccumulatedPropertyChanged event is dispatched on a different thread than the object that generated the original PropertyChanged event. To address this situation, I also allow listeners to supply an optional ISynchronizeInvoke for the timer.

Adding and removing handlers are performed with these signatures:

public static void AddHandler(INotifyPropertyChanged objectToWatch, AccumulatedPropertyChangedHandler handler, ISynchronizeInvoke syncObject = null);
public static void RemoveHandler(INotifyPropertyChanged objectToWatch, AccumulatedPropertyChangedHandler handler);
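
Hypothetical usage (assuming the AccumulatedPropertyChangedHandler delegate follows the usual (sender, AccumulatedPropertyChangedEventArgs) pattern; the view model and refresh method are illustrative):

// viewModel implements INotifyPropertyChanged; RefreshExpensiveView is the
// expensive operation we only want to run once per batch of changes.
AccumulatedPropertyChangedHandler handler = (sender, e) =>
{
    // e.Properties is null when a blanket ""/null notification was raised
    RefreshExpensiveView(e.Properties);
};

AccumulatedPropertyChangeNotification.AddHandler(viewModel, handler);
// ... later, when the listener goes away ...
AccumulatedPropertyChangeNotification.RemoveHandler(viewModel, handler);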

When a handler is added, the handler is added to an internal notification list, and the PropertyChanged event of the objectToWatch is wired to an internal handler in the AccumulatedPropertyChangeNotification class. This handler simply puts the event, and the time it occurred, into an internal event queue.

private static List<Tuple<INotifyPropertyChanged, AccumulatedPropertyChangedHandler, ISynchronizeInvoke>> notifications;
private static List<Tuple<INotifyPropertyChanged, PropertyChangedEventArgs, DateTime>> changeNotificationsQueued;

When the timer fires, it looks at the change notification queue, groups by INotifyPropertyChanged instance, and checks for the most recent DateTime of property changed. If it is at least as old as the mandatory delay (by default, 50 milliseconds), then all the changes for that instance are batched and dispatched to all AccumulatedPropertyChangedHandler delegates associated with that object. This was a bit tricky to get correct.

The other tricky issue was RemoveHandler: looking up a given handler in order to remove it. Apparently Object.ReferenceEquals doesn’t work well with method delegates (or didn’t for me), so I ended up using .Equals on the delegate, which still isn’t what I expected to have to do.

Download implementation and unit tests. You will probably want to change the namespace to match your project.

If you’re using WPF, you’ll need a wrapper for the Dispatcher to implement ISynchronizeInvoke (come on Microsoft, get your shit together). A good implementation can be found at http://blogs.openspan.com/2010/04/hosting-openspan-a-complete-isynchronizeinvoke-wrapper-for-the-dispatcher/, though beware I found that InvokeRequired was returning false when it should have been returning true.

Upgrading MVC 3 in Visual Studio 2013

Visual Studio 2013 does not support MVC 3 in a “first-class” way. Therefore, it is desirable to upgrade MVC 3 projects to MVC 4. A tool exists which is very useful to accomplish this, but some additional steps might be necessary (as we found).

Primary Web MVC 3 Project Upgrade

Open the project in Visual Studio 2013. Select the Web project. Open the Package Manager Console:
[Screenshot: opening the Package Manager Console]

“Install” the UpgradeMvc3ToMvc4 package. Be sure “Default Project” (top bar of console) is set to the correct project!

[Screenshot: Install-Package UpgradeMvc3ToMvc4 in the Package Manager Console]

This should complete the initial upgrade. However, you might receive a dependency error:

[Screenshot: NuGet dependency error (“already referencing” a newer version)]

In this case, you can remove the conflict (replace Microsoft.AspNet.Razor with whatever the dependency error is):

Uninstall-Package Microsoft.AspNet.Razor -Force

Uninstall the package indicated in the “already referencing” error. Ignore the warnings about breaking packages. Then try the upgrade again. You may need to uninstall several packages before the install succeeds.

Upgrading Unit Test Project References

If any unit test projects or other dependency projects exist, they will need to be upgraded as well. Change “default project” on the top bar of the package manager console to these projects and run the install command again.

Some errors may happen here; generally this is OK. The main point is to update the references automatically. If this does not succeed, the references are relatively easy to update manually: make them match those in the web project (which was already upgraded).

Attempt to “Rebuild Solution”. It should be successful (no errors). Check for Model intellisense in the views; it should work. Run all tests; they should pass.

Reset Windows Authentication Mode

Go to web project’s primary Web.config (in the root of the web project).

In the “appSettings” section, add the following two lines:

<add key="autoFormsAuthentication" value="false" />
<add key="enableSimpleMembership" value="false"/>

This is to switch back to Windows authentication. Otherwise the app will try to use forms authentication even if it previously used Windows authentication.

Ensure IIS Express is set to Windows authentication

If this is your first web project in Visual Studio 2013, you may also need to configure IIS express to support Windows Authentication. Follow these instructions: http://stackoverflow.com/a/19515891.

You only need to do that once per machine.

Model Validation with Data Annotations and Metadata Classes in WPF

In the ASP.NET MVC world, Microsoft has provided an interesting technique for data validation: data annotations. Data annotations are lightweight and, using metadata classes (shown in the previous link), can be applied effortlessly to Entity Framework or dataset-generated models (for database-first uses). This technique has been very convenient for our EF/MVC applications.

What about WPF applications?

Suddenly the waters get murky. WPF data binding supports its own set of validation techniques (such as exceptions thrown from property setters, ValidationRule classes on bindings, IDataErrorInfo, and INotifyDataErrorInfo), but none of these fit directly with data annotations. However, there is a way forward: the blog post “Data Validation in WPF” includes a bit on implementing data annotations with the help of a validator class. Unfortunately, that approach does not support metadata classes!

To work with generated data classes and metadata model validation in WPF, I have extended the approach above to support validation rules defined in metadata classes. In this way, you can share your metadata validation rule classes between WPF and MVC applications with no changes.

Implementation

For the model class that you want to provide data annotation support to, extend it with a partial class implementing INotifyDataErrorInfo.  The implementation is completely boilerplate, and you could extend your code generator template to include it. For this example, my model class is called “Bridges”, as it is directly derived from the table of that name in the database. I’ll assume we’re going to add a metadata class called “BridgesValidation” to validate some aspects of this model.
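
For illustration, a hypothetical version of that metadata class (the property names and rules are invented for this example; in practice they must match the generated model's properties):

using System.ComponentModel.DataAnnotations;

public class BridgesValidation
{
   [Required(ErrorMessage = "Bridge name is required.")]
   [StringLength(100)]
   public object BridgeName { get; set; }

   [Range(0, 10000)]
   public object SpanLengthFeet { get; set; }
}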

To begin with, we can create a partial class for the model, adding an interface for INotifyDataErrorInfo and attaching the metadata class in the usual way (the same way you would with EF database first).

[MetadataType(typeof(BridgesValidation))]
public partial class Bridges :  INotifyDataErrorInfo

Next, we need to provide the interface event and prepare a dictionary for storing which fields are in error. The GetErrors method simply looks up the requested property in the dictionary.

public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;
private Dictionary<string, string> errorsForName = new Dictionary<string, string>();

public System.Collections.IEnumerable GetErrors(string propertyName)
{
   if (errorsForName.ContainsKey(propertyName))
      return new string[] { errorsForName[propertyName] };
   else
      return new string[0];
}
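
INotifyDataErrorInfo also requires a HasErrors property, which the snippet above omits; a minimal implementation over the same dictionary might be:

public bool HasErrors
{
   get { return errorsForName.Count > 0; }
}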

How (and when?) to calculate which fields are in error? I created a calculate errors method, which I attached to the column changed event notification for the model (which derives from DataRow). You could just as easily attach to the property changed event notification for the model. Depending on how the model is implemented (DataRow, EF, etc) the exact attachment procedure will vary. In any case, however, a boilerplate error calculation routine can be used.

The process for this routine is:

  1. Clear all existing errors
  2. Acquire all metadata class types
  3. For each property and each metadata class, find any validation attributes
  4. Apply validation attributes and save errors

First, clear all existing errors. If deriving from DataRow, could also call “ClearErrors” here.

errorsForName.Clear();

Second, acquire all metadata class types (this is the interesting bit that the above link does not include).

var types = this.GetType().GetCustomAttributes(typeof(MetadataTypeAttribute), false)
                          .OfType<MetadataTypeAttribute>()
                          .Select(mta => mta.MetadataClassType);

Finally, look through all classes and all properties to find any validation attributes. When the “Add” is called on errorsForName, if using a DataRow derivative, that would be a good time to call “SetColumnError” as well.

foreach (Type t in types.Union(new Type[] { this.GetType() }))
{
    foreach (var prop in t.GetProperties())
    {
        var validations = prop.GetCustomAttributes(true).OfType<ValidationAttribute>().ToArray();
        foreach (var val in validations)
        {
            var propVal = this[prop.Name];
            if (!val.IsValid(propVal))
            {
                var errMsg = val.FormatErrorMessage(prop.Name);
                // Use the indexer (rather than Add) so a property that fails more
                // than one validation attribute does not throw on a duplicate key.
                errorsForName[prop.Name] = errMsg;
                if (ErrorsChanged != null)
                    ErrorsChanged(this, new DataErrorsChangedEventArgs(prop.Name));
            }
        }
    }
}

Now you are ready to use data annotation validations in WPF!

Simply use the same kind of data annotation metadata class as you would for an ASP.NET MVC application, and use the above partial class template to extend each generated model class file. WPF will pick up the INotifyDataErrorInfo, which will in turn provide a seamless link between the metadata class data annotations and the WPF user interface.

Preventing Memory (and Time!) Leaks in WPF with Weak Events

Normally in .NET the garbage collector eliminates any concern we might have regarding memory leaks.  However, there are certain cases where “lost” objects may be retained because references to them still exist.  One possible mechanism for this is event wiring.

Imagine the case where a host component, like a main application window, contains an object which can notify of changes.  The main window instantiates a child component, and sets a property on the child to the observable component.  The child component then begins to listen for changes.  This adds a reference to the child window to the listener collection of the observable component.  Since the observable component is long-lived, that reference will persist.  If the child component is created and destroyed many times, a noticeable memory leak may result.

Let’s create a simple main and child components (in this case, windows). The main window will own the reference to the observable item. The child window will accept it as a property, and listen for changes on it. Very important to note that these components do not need to be windows; they can be any created and destroyed component.

Here’s a framework for the main window:

public partial class MainWindow : Window
{
   private ObservableCollection<int> items;

   public MainWindow()
   {
      InitializeComponent();
      items = new ObservableCollection<int>();
      items.Add(5);
      items.Add(7);
   }

   public void ShowCloseOtherWindow()
   {
      Window1 w = new Window1 { Items = items };
      w.Show();
      w.Close();
   }
}

And here’s a framework for the child window. Notice I am omitting the event wiring, which will occur in the setter. This is the point of difference and where weak events will come into play. The assumption is some control is databound to the Sum property and will receive notifications of changes via Window1’s PropertyChanged event.

public partial class Window1 : Window, INotifyPropertyChanged
{

   private ObservableCollection<int> _items;
   public ObservableCollection<int> Items
   {
      get
      {
         return _items;
      }
      set
      {
         _items = value; 
         // TODO: Wire this to notify collection changed
      }
   }

   public int Sum { get; protected set; }

   void _items_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
   {
      Sum = Items.Sum();
      if (PropertyChanged != null)
         PropertyChanged(this, new PropertyChangedEventArgs("Sum"));
   }

   public event PropertyChangedEventHandler PropertyChanged;
}

For testing purposes, we can rig up a simple timer on the main window which will create and close many child windows. The timer will simply invoke ShowCloseOtherWindow every, say, 250 ms or so. We can then monitor the memory usage of the application with different notification techniques. To monitor memory usage, I’m using perfmon on the specific process. To monitor performance, I’m using a Stopwatch instance around a call to .Add in the main window. This Add occurs every 30 seconds. When the Add (or any other change) occurs, the observable collection notifies all listeners. The time taken to notify all listeners is monitored by the stopwatch.
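
The timing instrumentation is roughly along these lines (a sketch; the value added and the output format are illustrative):

// Every 30 seconds, add an item and measure how long the resulting
// CollectionChanged notification takes across all attached listeners.
var sw = System.Diagnostics.Stopwatch.StartNew();
items.Add(42);
sw.Stop();
System.Diagnostics.Debug.WriteLine("Add + notification took {0} ms",
   sw.ElapsedMilliseconds);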

To establish a baseline, I will start with the empty implementation of the Items setter. This is obviously unacceptable in practice, since neither changes to Items nor even its original value are ever exposed to the Sum calculation.

In this diagram, overall app memory (blue) and time to add item (red) are indicated. Overall memory usage increases slightly over time (as expected), since items are continually added to the collection. Since the collection is never actually listened to or processed in any way, time to add stays relatively flat.
[Graph: app memory (blue) and add time (red) with no event listener wired]

A Naive Implementation

Now assume we want Window1 to listen to the collection for changes, and when changes occur, to recalculate the value of Sum. We can modify the setter of Items:

if (_items != null)
{
   _items.CollectionChanged -= _items_CollectionChanged;                    
}
_items = value;
_items.CollectionChanged += _items_CollectionChanged;

This seems very reasonable: it ensures that the items collection can be changed, and that extraneous events will not cause a recalculation. Nevertheless, this implementation hides a dangerous flaw: By adding the listener, the items collection now has a reference to the component (in this case, an instance of Window1). If the component is intended to be discarded, the garbage collector will not collect it because a reference still exists (from the items collection). This causes a memory leak if many instances are created and then discarded. Furthermore, since the “zombie” components are still listening for changes and still recalculate the Sum on changes, as more and more instances are created (and zombified), the processing overhead of adding values will increase dramatically, causing a time leak.

Let’s run the program again, with the same instrumentation on memory (blue) and time (red). Notice how both skyrocket quickly (time usage even increases polynomially). The flat line of memory at the end is where the program crashed!
[Graph: memory (blue) and add time (red) with the naive event subscription; both climb rapidly until the crash]

Hopefully you can see how aggressive this kind of memory/time leak can be, and how insidiously difficult it might be to locate in a large and complex program!

How to Fix

The “obvious” solution is to use IDisposable and dispose of the object, having the Dispose method remove the event listener. In some cases, however, explicitly disposing of objects is tricky, and we would rather let the garbage collector do its work. For the remainder of this article, I will assume we want to find a garbage-collectable solution.

Problems with garbage collection can sometimes be addressed using “weak references”. A weak reference tells the garbage collector not to count that particular reference when considering whether an object can be freed or not. Thus, if we modify the event handler above so that it is a weak reference, the garbage collector will still be able to collect it, and the problem (both time and space) will go away.

However, this is not as trivial as it seems. Although .NET has a WeakReference class, this class only supports object references, not delegates (and by extension, not event handlers). What we really need is a “weak event” mechanism.

Several solutions exist. In general, an intermediate static object which maintains the weak references and dispatches events as appropriate is needed. In many cases, developers have needed to write these classes themselves. For example, see the detailed Weak Event Handlers blog entry by Steven Hollidge.

In WPF, Microsoft has provided a framework of this sort in the .NET library through WPF weak events. This implementation makes it very easy for WPF developers to use weak events. Although the documentation on weak events is somewhat confusing, the generic WeakEventManager is sufficient and can be easily used in place of a traditional unsubscribe/subscribe model. We modify the Items setter as follows:

if (_items != null)
{                    
   WeakEventManager<ObservableCollection<int>, NotifyCollectionChangedEventArgs>.RemoveHandler(
      _items, "CollectionChanged", _items_CollectionChanged);
}
_items = value;                
WeakEventManager<ObservableCollection<int>, NotifyCollectionChangedEventArgs>.AddHandler(
   _items, "CollectionChanged", _items_CollectionChanged);

There is no need to modify the event handler or write any other boilerplate code.

The result? Memory usage and time increase slowly as items are added to the collection (which is exactly the expected behavior). The time increase is due to the calculation of sum which now occurs, whereas in the first graph, since the event handler was never wired, no sum calculation ever occurred.
[Graph: memory and add time with weak events; both grow slowly as items are added]

Download the complete WPF Weak Reference demo source code, including both techniques and instrumentation.

A comprehensive article by Daniel Grunwald deals with weak events in all their facets.