Friday 18 December 2015

ASP.NET Tips #62 - Delayed execution in EF can trip you up

If your Model exposes an IQueryable object, your View may actually re-run the underlying query each time it reads the object's data.

List properties in the Model should be of type IList (populated by calling ToList() on the query) to force the query to be executed once and only once.
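
For example, here is a minimal sketch of the difference; the Product and ProductsViewModel names are invented for illustration:

using System.Collections.Generic;
using System.Linq;

public class Product
{
  public int Id { get; set; }
  public string Name { get; set; }
}

public class ProductsViewModel
{
  // Risky: every enumeration in the View can trigger another database query.
  public IQueryable<Product> ProductsQuery { get; set; }

  // Safer: the query runs once and the View just reads the in-memory list.
  public IList<Product> Products { get; set; }
}

// In the controller:
// model.Products = context.Products.ToList();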

It’s easy to spot these additional queries with a tool like ANTS Performance Profiler, which shows you what queries are being run by your application.

Wednesday 16 December 2015

ASP.NET Tips #61 - Confirm you are retrieving only the records you need

If you are calling the database using Entity Framework, you may write this command:

return context.Products.ToList().FirstOrDefault();

Although this returns a single record, it retrieves the entire Products table from the database and then picks the first record in memory. A better command would be:

return context.Products.FirstOrDefault();

This will produce a SQL command equivalent to grabbing the TOP 1 record instead of the entire contents of the table.

Monday 14 December 2015

ASP.NET Tips #60 - Minimize your database calls

While Entity Framework can be a fantastic ORM tool, it can also be a major pain when you are executing a number of calls that are constantly hitting the database.

For simple database activities (SELECT, INSERT, UPDATE, or DELETE), use Entity Framework's standard Add, Remove, and SaveChanges methods. For complex result sets, it can sometimes be more efficient to put the query in a stored procedure and leave the heavy lifting to the database.
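
For example, with EF6 you can hand a complex query off to a stored procedure and map the rows back with Database.SqlQuery<T>. This is only a sketch; the OrderSummary type and the dbo.GetOrderSummary procedure are invented for illustration:

using System.Collections.Generic;
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public class OrderSummary
{
  public int CustomerId { get; set; }
  public decimal Total { get; set; }
}

public static class ReportQueries
{
  public static List<OrderSummary> GetOrderSummaries(DbContext context, int year)
  {
    // One round-trip: the stored procedure does the joins and aggregation,
    // and SqlQuery<T> maps each returned row onto an OrderSummary.
    return context.Database
                  .SqlQuery<OrderSummary>("EXEC dbo.GetOrderSummary @Year",
                                          new SqlParameter("@Year", year))
                  .ToList();
  }
}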

ASP.NET Tips #59 - If your ORM is capable of handling multi-dataset returns, use this to reduce your database round-trips

This is often called a 'non-chatty' design, where a single larger call is preferred over multiple smaller calls. Each call involves a degree of negotiation and overhead, and the more calls you make, the more overhead is incurred; a single larger call reduces that overhead.
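
For example, with plain ADO.NET a single command can return several result sets, and SqlDataReader.NextResult() moves between them, so related data comes back in one round-trip. A sketch with made-up table and column names:

using System.Data.SqlClient;

public static class CustomerDashboardLoader
{
  public static void Load(string connectionString, int customerId)
  {
    const string sql =
      "SELECT Id, Name FROM Customers WHERE Id = @Id; " +
      "SELECT Id, Total FROM Orders WHERE CustomerId = @Id;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
      command.Parameters.AddWithValue("@Id", customerId);
      connection.Open();

      using (var reader = command.ExecuteReader())
      {
        while (reader.Read())
        {
          // ... read the customer row ...
        }

        reader.NextResult(); // move to the second result set

        while (reader.Read())
        {
          // ... read the order rows ...
        }
      }
    }
  }
}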

Wednesday 9 December 2015

ASP.NET Tips #58 - Spot potential issues in your code with Concurrency Visualizer

From Visual Studio 2013 onwards, Concurrency Visualizer is an extension rather than a built-in feature. It is still a tremendously useful performance analysis tool, however, particularly when you use its SDK to instrument your code with flags, messages, alerts, and spans.
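
As a rough sketch, assuming the Concurrency Visualizer SDK has been added to the project (which brings in the Microsoft.ConcurrencyVisualizer.Instrumentation namespace), the Markers class lets you wrap interesting regions in named spans and drop flags and messages onto the timeline; the AdminMaintenance class below is invented for illustration:

using Microsoft.ConcurrencyVisualizer.Instrumentation;

public static class AdminMaintenance
{
  public static void CleanBackEnd()
  {
    // The span appears as a named block on this thread's timeline.
    using (Markers.EnterSpan("Clean"))
    {
      Markers.WriteFlag("Clean started");

      // ... hot-swap / clean-up work ...

      Markers.WriteMessage("Back end swapped");
    }
  }
}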

Concurrency Visualizer provides several views: Utilization, Threads, and Cores. For me, the Threads view is the most useful. MSDN describes the Threads view, but that page does not do justice to the power of what you can see and find (although another page within the docs gives a bigger glimpse of the power lurking beneath your fingertips).

The following Concurrency Visualizer timeline is a good example:

This is from a recent project where a multithreaded system has a large number of users making requests via a web interface and expecting prompt responses back. The system allows an administrator to hot swap pieces of the back end supposedly without interfering with the horde of eager users. But there was a delay of a few seconds now and then, and the source was not clear.

After I instrumented the likely sections of code for Concurrency Visualizer, started a data collection, and ran thousands of automated user inputs in parallel with dozens of automated admin inputs, a pattern quickly emerged in the graph.

There was a synchronization lock being held in the admin chunk of code (the "Clean" span) that blocked the end-user processing (the "converse" span) significantly. You can see in the illustration that most of the end-user time was waiting for that lock (the red bar).

I had only to visually scan the timeline to find my markers. Concurrency Visualizer revealed what was being blocked; showed when the lock was released on one thread and, via the vertical black line at the end of the red block, how that tied back to the other thread and allowed it to continue; and provided call stacks that told me exactly where in the code all this was happening.

For more details, see https://msdn.microsoft.com/en-us/library/dd627193%28v=vs.120%29.aspx

Tuesday 8 December 2015

ASP.NET Tips #57 - Keep an eye on your server-side code

Practice good performance hygiene by keeping an eye on your server-side code with a tool like Glimpse. Glimpse inspects web requests as they happen, providing insights and tooling that reduce debugging time and empower every developer to improve their web applications.

Monday 7 December 2015

ASP.NET Tips #56 - Cache your static content by directory

If you have a directory of content like JavaScript, images, and CSS, place a web.config in that content directory to cache your static content.

For example, if you had a directory called 'Contents' with images, styles, and scripts directories underneath it, you would place a web.config (similar to the one below) in just the 'Contents' directory.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <system.webServer>
      <staticContent>
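         <!-- cacheControlMaxAge uses d.hh:mm:ss format, so 1.00:00:00 caches responses for one day -->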
         <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
      </staticContent>
   </system.webServer>
</configuration>

You can include this web.config in any static directory to get the full benefit of client-side caching for that content.

For more details, see https://www.iis.net/configreference/system.webserver/staticcontent/clientcache

Thursday 3 December 2015

ASP.NET Tips #55 - Send as little as possible to the web browser

Web page load time is often limited by network latency, which can run to hundreds of milliseconds, especially for mobile devices. The more files you transfer and the larger they are, the more round trips are needed and the greater the impact that latency has. As a developer, there's rarely much you can do to reduce the latency itself, but you can cut down the number of times the round-trip cost is incurred.

HTTP compression should always be turned on, and it makes a particularly big impact for easily compressible content like HTML. Minification and bundling of JavaScript and CSS files can be handled automatically by ASP.NET from v4.5 onwards; just make sure you set BundleTable.EnableOptimizations = true;
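
A sketch of the bundling configuration, assuming the default '~/Scripts' and '~/Content' layout of a new ASP.NET project:

using System.Web.Optimization;

public static class BundleConfig
{
  public static void RegisterBundles(BundleCollection bundles)
  {
    bundles.Add(new ScriptBundle("~/bundles/jquery")
      .Include("~/Scripts/jquery-{version}.js"));

    bundles.Add(new StyleBundle("~/Content/css")
      .Include("~/Content/site.css"));

    // Forces bundling and minification even when compilation debug="true".
    BundleTable.EnableOptimizations = true;
  }
}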

If you're including libraries like jQuery, serve them from Google's or Microsoft's free CDNs. Google's datacenters are probably better than yours, and it's very likely a user will already have a cached copy of the same library from visiting other websites that use the same CDN, entirely eliminating their browser's need to re-download it. It also reduces your own server's load and bandwidth.

Clear out the cruft! A new ASP.NET project includes many libraries, not all of which are necessarily used, and if something's being unnecessarily sent to the client, it's incurring performance cost for no good reason. It can also be astonishing how many libraries get added to a project over time, but not removed when they stop being required. An occasional spring clean helps keep this in check.

Wednesday 2 December 2015

ASP.NET Tips #54 - Use ASP.NET generic handlers instead of WebForms or MVC

In ASP.NET applications, generic handlers (.ashx), WebForms pages, and MVC all ultimately service requests through the IHttpHandler interface. A generic handler is normally the most lightweight option, because it avoids the page lifecycle and framework overhead of the other two.
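
A minimal generic handler might look like the sketch below (typically the code-behind of an .ashx file; the PingHandler name and response are purely illustrative):

using System.Web;

public class PingHandler : IHttpHandler
{
  public void ProcessRequest(HttpContext context)
  {
    context.Response.ContentType = "text/plain";
    context.Response.Write("pong");
  }

  // Returning true lets ASP.NET reuse a single instance across requests.
  public bool IsReusable
  {
    get { return true; }
  }
}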

For more information, see: https://msdn.microsoft.com/en-us/library/bb398986.aspx

Tuesday 1 December 2015

ASP.NET Tips #53 - Efficiently Streaming Large HTTP Responses With HttpClient

HttpClient in .NET buffers the response body by default when you make a call through GetAsync, PostAsync, PutAsync, and so on. This is generally fine when you are dealing with small response bodies. However, if you want to download a large image and write it to disk, you might end up consuming far more memory than necessary.

The following code, for example, will use up lots of memory if the response body is large:

static async Task HttpGetForLargeFileInWrongWay()
{
  using (HttpClient client = new HttpClient())
  {
    const string url = "https://github.com/tugberkugurlu/ASPNETWebAPISamples/archive/master.zip";
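    // With the default HttpCompletionOption (ResponseContentRead), GetAsync buffers the entire response body in memory before returning.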
    using (HttpResponseMessage response = await client.GetAsync(url))
    using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
    {
      string fileToWriteTo = Path.GetTempFileName();
      using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
      {
        await streamToReadFrom.CopyToAsync(streamToWriteTo);
      }
      response.Content = null;
    }
  }
}

By calling the GetAsync method with its default completion option, every single byte of the response body is loaded into memory before the call returns.

A far better approach is to read only the headers of the response first and then get a handle on the network stream, as below:

static async Task HttpGetForLargeFileInRightWay()
{
  using (HttpClient client = new HttpClient())
  {
    const string url = "https://github.com/tugberkugurlu/ASPNETWebAPISamples/archive/master.zip";
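    // ResponseHeadersRead makes GetAsync return as soon as the headers arrive; the body stays on the network stream.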
    using (HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
    using (Stream streamToReadFrom = await response.Content.ReadAsStreamAsync())
    {
      string fileToWriteTo = Path.GetTempFileName();
      using (Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create))
      {
        await streamToReadFrom.CopyToAsync(streamToWriteTo);
      }
    }
  }
}

The HttpCompletionOption.ResponseHeadersRead enumeration value tells HttpClient to return control as soon as the response headers have been read, rather than buffering the whole response. The CopyToAsync method then streams the content to disk instead of loading it all into memory.

For more information, see http://www.tugberkugurlu.com/archive/efficiently-streaming-large-http-responses-with-httpclient