Debugging ASP .NET memory leak - watch for static references

A while ago, I needed to debug an ASP .NET website that was eating more and more memory the longer it ran, and never seemed to release it again. Even though the number of concurrent users was more or less constant, the memory usage seemed to be proportional to the total number of sessions since the last application restart.

What was happening here? Could it be session-state elements that never got garbage collected? To find out, I simulated load on a test version of the website, let the sessions time out, and used WinDbg with the Son of Strike (SOS) debugger extension's "gcroot" command to find out which objects were still alive and why they could not be collected by the garbage collector. Before examining the heap, I forced a few garbage collections by calling GC.Collect() to make sure that the garbage collector had actually run.

I found that some of the objects we keep in Session were rooted in an object array which, it turned out, held static references.
After some investigation, I found the problem in the code: a developer had created a static event and had the objects in Session subscribe to it. This meant that the session objects could never be collected, since there were still reachable references to them.
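
To illustrate the pattern, here is a simplified sketch (not the actual production code; all names are made up):

    using System;

    // Hypothetical example of the problematic pattern.
    public static class CacheNotifier
    {
        // A static event: every subscriber is reachable from this static
        // field, and therefore can never be garbage collected.
        public static event EventHandler CacheCleared;

        public static void RaiseCacheCleared()
        {
            if (CacheCleared != null)
                CacheCleared(null, EventArgs.Empty);
        }
    }

    public class SessionItem
    {
        public SessionItem()
        {
            // Subscribing stores a reference to 'this' in the static
            // invocation list, rooting the object for the lifetime of
            // the application domain.
            CacheNotifier.CacheCleared += HandleCacheCleared;
        }

        private void HandleCacheCleared(object sender, EventArgs e)
        {
            // React to the event ...
        }
    }

Even after a session expires, each SessionItem stays reachable through the event's invocation list, so the garbage collector cannot reclaim it.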

It turned out that the event did not need to be static, so the fix was rather easy to implement. The debugging of the issue, however, took some time.

The moral of this is that even though we have managed code with automatic memory management, you should still keep an eye on memory usage during development. And with ASP .NET, you should be very careful with static references, since they can root your objects so that they can't be garbage collected.

You should also be aware that static objects are shared between all users. This should not come as a surprise if you know just a bit about the ASP .NET process model, but in my experience it is easy to forget, even for experienced developers. Because static objects are shared, you should treat them as such, and remember to use proper synchronization when accessing them.
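
For example, a static collection touched from multiple requests needs a lock around every access. A minimal sketch (the class and its members are made up for illustration):

    using System.Collections.Generic;

    public static class SharedCounters
    {
        private static readonly object _sync = new object();
        private static readonly Dictionary<string, int> _hits = new Dictionary<string, int>();

        public static void RegisterHit(string page)
        {
            // Requests execute on multiple threads, so access to the
            // shared dictionary must be synchronized.
            lock (_sync)
            {
                int count;
                _hits.TryGetValue(page, out count);
                _hits[page] = count + 1;
            }
        }
    }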

I might blog more about these kinds of issues in the future.


Built on Sitecore Xpress

As of today, this blog runs on Sitecore Xpress. I have spent the day setting it up.

So, what is Sitecore Xpress? Sitecore Xpress is a personal edition of the professional Sitecore CMS product, which was recently released and made free of charge for developers. This is really cool, since Sitecore is a very good, mature and professional CMS that has loads of features and is built on .NET. It also has a lot of extension points, which makes it a really nice development platform for large enterprise websites. The free Xpress edition has almost all the features of the full product, though it is restricted to one user and can only be used for personal, non-commercial sites (details here).

Since we use Sitecore a lot at work, and it is so developer-friendly, the decision to use Sitecore Xpress for my personal website was really easy to make. It will make a good platform for the various features I would like to add to my website in the future.

A nice example of how easy Sitecore is to use and set up: although there is no built-in blogging application in Sitecore, it took me just a day to set up the entire website, including the blog and the migration of content from the old platform. The blog is actually a tree of so-called Sitecore content Items. Items are the cornerstone of Sitecore. Using a few XSLT renderings, the blog page, archives and individual posts and comments can be displayed. No additional code was written to achieve this. The only code I have written for this website so far is a small User Control (< 30 lines of code) that allows users to add comments to the articles.

I will probably be blogging a bit more about Sitecore and ASP .NET related subjects in the future. I also hope that this change of platform will increase my motivation for writing, so the frequency of new posts should go up.


Sitecore certification

Sitecore is a .NET CMS developed in Denmark by Sitecore A/S. We are starting to use the product more and more at work.

Today I attended the second Sitecore development course in Copenhagen, "SCD2". So now I am a certified Sitecore developer ;-)

Sitecore is a rather good CMS (in my humble opinion). Some of its strengths are that it is easy to extend, and that extensions can be created with well-known technologies such as .NET controls and XSLT transformations (amongst others), so one can choose whichever is best for the task at hand. My impression is that it has a good, well-structured API, and that it is very stable. I am looking forward to doing some real development work using the techniques I learned today.


NEF File reader is on hold ...

I have not had much time to blog or to work on my NEF file reader project for a while. Things have been turbulent and busy at work lately.

This post is just to let you know that the project is currently on hold. Perhaps I will resume it later, when I have more free time at hand. If anyone is interested in a peek at the source code as it looks now (incomplete and raw), please let me know, and I can mail it to you. There is not much functionality yet, however. The largest obstacle to getting further with the project is the need to decode the Huffman-encoded sample bit lengths embedded in the file - I have not resolved this yet.


NEF file reader update

Progress on the NEF / RAW file reader is slow, since I only work on it occasionally, when I feel like coding in my spare time. However, I made some progress tonight that I would like to share.

First, NEF files are really TIFF files, so I need to parse them as TIFF files to get useful information out of them. TIFF files consist of so-called IFD (Image File Directory) tags, which contain both metadata about the image and the actual image data. Currently I have developed a basic IFD tag parser that parses all of the IFD tags in a NEF file.

Furthermore, I have found by looking at the IFD tags that there are at least 3 embedded images in each NEF file: a small, low-quality 160x120 thumbnail represented as RGB data, a nearly full-size, low-quality JPEG, and the actual NEF image data. While implementing the IFD parser, I had a lot of help from this TIFF FAQ, as well as from the official Adobe TIFF specification, version 6.
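
The basic structure is quite simple to read. Here is a rough sketch of an IFD walker along the lines of what I implemented, assuming little-endian ("II") byte order as my NEF files use (the real parser also has to follow sub-IFD offsets stored in tags to find the embedded images):

    using System;
    using System.IO;

    static class TiffReader
    {
        // Walks the chain of IFDs in a TIFF/NEF file and dumps each entry.
        public static void DumpIfds(string path)
        {
            using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
            {
                // TIFF header: 2 bytes byte order ("II"), 2 bytes magic (42),
                // then the offset of the first IFD.
                reader.BaseStream.Seek(4, SeekOrigin.Begin);
                uint ifdOffset = reader.ReadUInt32();
                while (ifdOffset != 0)
                {
                    reader.BaseStream.Seek(ifdOffset, SeekOrigin.Begin);
                    ushort entryCount = reader.ReadUInt16();
                    for (int i = 0; i < entryCount; i++)
                    {
                        // Each entry is 12 bytes: tag, type, value count and
                        // the value itself (or an offset to it).
                        ushort tag = reader.ReadUInt16();
                        ushort type = reader.ReadUInt16();
                        uint count = reader.ReadUInt32();
                        uint valueOrOffset = reader.ReadUInt32();
                        Console.WriteLine("Tag 0x{0:X4}, type {1}, count {2}, value/offset {3}",
                                          tag, type, count, valueOrOffset);
                    }
                    ifdOffset = reader.ReadUInt32(); // next IFD; 0 means no more
                }
            }
        }
    }

The 12-byte entry layout comes straight from the TIFF specification mentioned above.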

Of course, the NEF image data is the interesting part. I have studied the file format and reached the following conclusions, mostly based on what I can tell from the dcraw source code:

  • The NEF file consists of Width * Height samples.
  • The data is represented as a Color Filter Array (CFA): each sample represents the intensity of a single color, since that is what the camera sensor actually captures (as in most other digital cameras).
  • Therefore, I must interpolate the 2 missing colors for each sample to get a full-color, full-scale image (see the sketch after this list).
  • Each sample is 12 bits.
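
To make the interpolation step concrete: the simplest approach is bilinear demosaicing, where each missing color is estimated from the neighboring samples of that color. A rough sketch of what I have in mind (this is not code from the project; the array layout and names are made up):

    // Estimates a missing color at (x, y) as the average of the
    // neighboring samples that do capture that color.
    static double InterpolateColor(double[,] samples, int[,] cfaColor,
                                   int x, int y, int wantedColor)
    {
        double sum = 0;
        int count = 0;
        for (int dy = -1; dy <= 1; dy++)
        {
            for (int dx = -1; dx <= 1; dx++)
            {
                int nx = x + dx, ny = y + dy;
                // Skip positions outside the image.
                if (nx < 0 || ny < 0 || nx >= samples.GetLength(0) || ny >= samples.GetLength(1))
                    continue;
                if (cfaColor[nx, ny] == wantedColor)
                {
                    sum += samples[nx, ny];
                    count++;
                }
            }
        }
        return count > 0 ? sum / count : 0;
    }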

But this is not all; it is not as simple as just reading 12 bits per sample, because the data is compressed. As far as I can tell at the moment, the compression scheme is as follows.

  • A µ-law type compression curve is used to limit the number of possible sample values to 634 (consequently companding the 12-bit values to log2(634) ≈ 9.3 bits of actual accuracy, while preserving the 12 bits of dynamic range).
  • The curve values are embedded in the file.
  • Samples are stored as indices into this curve.

Samples are encoded (compressed) like this (a decoding sketch follows the list):

  • A simple predictor is used, so each value read is actually the difference between a previous sample value and the current one. This keeps the values that need to be stored in the file small.
  • Each difference is stored as an encoded length (in bits), immediately followed by the actual difference bits.
  • The lengths are encoded using a classic Huffman table.
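
Putting the pieces together, I imagine the decoding of a row of samples looking roughly like this (a sketch based on my current understanding of the format; the bit reader and Huffman decoder are assumed helper delegates, not code from the project):

    using System;

    static class NefDecodeSketch
    {
        // Decodes one row of CFA sample values.
        // readCodeLength: reads one Huffman code and returns the bit length
        //                 of the next difference.
        // readBits:       reads that many raw bits from the stream.
        // curve:          the 634-entry curve embedded in the file.
        // predictors:     running predictor values, one per column parity.
        static ushort[] DecodeRow(Func<int> readCodeLength, Func<int, int> readBits,
                                  ushort[] curve, int width, int[] predictors)
        {
            ushort[] row = new ushort[width];
            for (int x = 0; x < width; x++)
            {
                int length = readCodeLength();
                int diff = length == 0 ? 0 : readBits(length);
                // Sign extension as in lossless JPEG: if the top bit of
                // the difference is 0, the difference is negative.
                if (length > 0 && diff < (1 << (length - 1)))
                    diff -= (1 << length) - 1;
                predictors[x & 1] += diff;
                // The decoded sample is an index into the embedded curve;
                // clamp defensively, since this is partly guesswork.
                int index = Math.Max(0, Math.Min(curve.Length - 1, predictors[x & 1]));
                row[x] = curve[index];
            }
            return row;
        }
    }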

So, now I need to implement the reading of the actual sample values, and to interpolate the values to come up with a full-color image. After that, I will probably need to implement reading or approximating the white balance of the image, so that the program can produce actually usable images.


Hobby project - Nikon NEF file reader

So, my new hobby is digital photography. I have bought a slightly used Nikon D50 camera, and have already taken a lot of pictures.

If you're into digital photography, you will know that most cameras can produce both JPG and RAW output, where the JPG images are compressed by the camera firmware before being written to the flash card. The RAW files are the uncompressed data as sampled by the camera sensor (typically a CMOS). Remember, JPG is a lossy compression format, so detail will be lost if you just save the JPGs. Therefore, if you are serious about photography, you will want to shoot in RAW.

However, as RAW files are essentially untouched image data, a RAW file needs to be processed by some image processing software before it can be used. This can be a cumbersome process: loading up the camera manufacturer's software and maybe Photoshop or another image processing utility, processing each image, perhaps adjusting colors, and saving the file into a more common format.

Therefore I have decided to try to create a .NET program that does the conversion for me. For "version 1", it will probably only support the NEF file format that my Nikon produces (Nikon's proprietary RAW format), but perhaps I will add support for others along the way. This program will not be a replacement for professional photo processing software; it is meant to be an easy-to-use tool for when one simply needs to convert a NEF file to a JPEG or bitmap representation quickly.

The NEF file format is not documented anywhere that I have been able to find on the web. But looking at the files, they seem to be based on the TIFF image format, and they carry the TIFF file header.

Also, I have found that Dave Coffin has created an excellent utility, dcraw, that does just what I want. I could simply use that, but I think it will be an interesting challenge to try to implement it myself. And I am quite sure that digging into the dcraw source code will provide me with most of the details I need to decode the format.


Using an HTTP module to enforce a single domain

When I first set up this blog, you could access it on both "driis.dk" and "www.driis.dk". For various reasons, it is best practice to have a single URL for the same content (it makes analyzing the site traffic easier, and it avoids ambiguous URLs), so I wanted to set up the site to use just the www version of the domain name.

Now, there are a couple of ways you could do this. The easiest would be to set up IIS to redirect incoming requests to http://www.driis.dk. However, this site is hosted by a hosting company, and I do not have access to the IIS manager.

Instead, I decided to write a custom HTTP module to do the redirecting. HTTP modules are great, as they allow you to hook into the ASP .NET application pipeline using the events of the HttpApplication object. This makes them a very powerful tool for building advanced functionality. In this case, I decided to handle the BeginRequest event and check whether the domain name is correct. If not, I issue an HTTP redirect and end the request. This way, all of the work associated with rendering the page is avoided when we are going to redirect anyway. The code is really simple, and basically goes like this:

        /// <summary>
        /// Handles the BeginRequest event by redirecting to the target
        /// domain if the request arrived on any other domain.
        /// </summary>
        /// <param name="sender">The HttpApplication raising the event.</param>
        /// <param name="e">Not used.</param>
        private void HandleBeginRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication) sender;
            string currentDomain = app.Context.Request.Url.DnsSafeHost.ToLower();
            if ( currentDomain != TargetDomain )
            {
                // PathAndQuery already starts with a slash.
                string newUrl = String.Format("http://{0}{1}", TargetDomain, app.Request.Url.PathAndQuery);
                // Redirect without ending the response, then short-circuit
                // the rest of the pipeline.
                app.Response.Redirect(newUrl, false);
                app.CompleteRequest();
            }
        }
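
For completeness, the rest of the module is just plumbing: an Init method that subscribes to BeginRequest, and a property that reads the target domain from configuration. It looks roughly like this (a sketch from memory; the downloadable code below is the authoritative version):

        using System;
        using System.Configuration;
        using System.Web;

        public class DomainEnforcementModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                // Hook into the pipeline.
                context.BeginRequest += HandleBeginRequest;
            }

            public void Dispose()
            {
            }

            /// <summary>
            /// The domain to enforce, read from the EnforceDomain appSetting
            /// (assumed to be present in web.config).
            /// </summary>
            private static string TargetDomain
            {
                get { return ConfigurationManager.AppSettings["EnforceDomain"].ToLower(); }
            }

            // HandleBeginRequest as shown above ...
        }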

Another advantage of this method is that once it is configured in web.config, it always works. Had you used the IIS method, you would have to remember to configure IIS correctly again if the site moves to another server.

You can download the code as well as the compiled assembly here: dr.BlogEngine.DomainEnforcement.zip (19,18 kb)

This is a domain enforcement module in its simplest form. It simply looks at all incoming requests, and if they don't match the specified domain, the request is redirected to the corresponding URL on the correct domain. Feel free to use the module, expand it, or use it as inspiration. Just don't blame me if it kills a kitten.

To use the module, simply add it to the <httpModules> section in web.config:

<add name="DomainEnforcement" type="dr.BlogEngine.DomainEnforcement.DomainEnforcementModule,dr.BlogEngine.DomainEnforcement"/>

Then add an appSetting specifying the domain you would like to use as the primary domain:

<appSettings>
  <add key="EnforceDomain" value="www.driis.dk" />
</appSettings>

That's all there is to it.


Setting up BlogEngine.NET

After deciding to get this blog, I needed to find some software to run it on. I briefly considered writing it myself, but decided that it would be too much effort, and that I would probably never finish it, considering work and other projects.

So I started hunting around for some open source blogging software, and after an hour of googling, I decided on BlogEngine.NET. My main considerations were that it needed to be open source .NET (so I can fix stuff if it breaks), highly customizable and easy to extend. It was also important that it would work without a database, since my current web host only provides access to a sluggish and unreliable MySQL server (but hey, it's cheap ;-)

I also considered dasBlog, but BlogEngine.NET seemed simpler and more suitable for my simple needs. 

Setting stuff up was simpler than expected. BlogEngine.NET's target audience is developers, according to their web page, so I was prepared to hand-edit a myriad of obscure configuration files - but after downloading the source and creating a virtual directory for the application on my local machine, it just ran. I actually started editing the settings.xml file before I found out that a UI was provided for it. So I customized some settings and made a few changes to one of the default templates, and then I was ready to go online. I actually think that setting it up for debugging in Visual Studio took longer than the actual customizations (my home machine runs Vista, and I had some problems with ASP .NET and IIS7, which I was able to resolve by using the Classic ASP .NET pipeline mode).

I downloaded the source so that I could compile it myself and poke around a bit before ultimately deciding to base my blog on this application. I haven't spent too much time looking at the code, but it seems to be well designed and thought through.

Anyways, uploading the site via FTP to my web host was no problem. I used Visual Studio to make a precompiled copy of the web site first.

Setting up the blog, customizing it and publishing it took less than 2 hours. I did not get stuck at all, and everything behaved as expected; overall it was a nice experience. So thumbs up to the BlogEngine.NET developers!


Got a blog

So, I decided today that I needed a blog. Here it is: me blogging about programming in general, .NET, and my everyday experiences.

Why does the world need another blog, you might ask? Well, it is an experiment for now. I will try to provide some interesting content, and at the same time it will be a place for me to document various findings and thoughts. Anyway, it is much better than the old "Web 0.1" static content website that I used to have on this domain ;-)

I encourage everyone to use the comments for questions and/or suggestions. 

Don't know who I am?

Well, you don't really need to in order to read my posts. My name is Dennis Riis, and I am a Danish software developer / architect, currently working with web technologies. If you want to read more about me, you can always head over to the About me page.