NoSQL databases bring an interesting perspective to modern application design. There are different kinds of them, but one common characteristic is that they are optimized for fast data access. For master data the normal choice would be a relational database, but that will in some cases not perform well, especially when it comes to reading data. This is where NoSQL databases come in handy as a good complement. On the other hand they need optimized (de-normalized) data structures to perform well, and are therefore not the obvious choice for master data. That is not a hard rule – it depends.

But it doesn't have to be one or the other. They can interact in the same environment, which is what makes this interesting. To mention one example, consider using the Command Query Responsibility Segregation (CQRS) principle in your application. The data source for the command side could be a normalized SQL database holding the master data, while a NoSQL database serves the query side with de-normalized copies of that master data.
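As a loose illustration of that split – the class names, the SQL table and the handler methods below are purely hypothetical and not taken from any particular codebase – the command side could write master data to SQL while the query side reads a de-normalized copy from Azure Table storage:

using System.Data.SqlClient;
using Microsoft.WindowsAzure.Storage.Table;

// Command side: writes go to the normalized SQL master data
public class RenameItemCommandHandler
{
    private readonly SqlConnection _masterData; // assumed to be an open connection

    public RenameItemCommandHandler(SqlConnection masterData)
    {
        _masterData = masterData;
    }

    public void Handle(string itemId, string newText)
    {
        using (var cmd = new SqlCommand("UPDATE Items SET Text = @text WHERE Id = @id", _masterData))
        {
            cmd.Parameters.AddWithValue("@text", newText);
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.ExecuteNonQuery();
        }
        // ...and typically publish an event so the de-normalized query store gets updated
    }
}

// Query side: reads come from the de-normalized copy in Azure Table storage
public class ItemQueryHandler
{
    private readonly CloudTable _queryStore;

    public ItemQueryHandler(CloudTable queryStore)
    {
        _queryStore = queryStore;
    }

    public DynamicTableEntity GetItem(string rowKey)
    {
        var result = _queryStore.Execute(TableOperation.Retrieve<DynamicTableEntity>("Item", rowKey));
        return (DynamicTableEntity)result.Result;
    }
}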

Azure NoSQL Services

Microsoft Azure offers four fully managed NoSQL services.

This article will focus, as the title says, on Azure Table storage and how to make it perform well. Note: the code examples below assume you have set up a reference to an Azure cloud table. See the external references at the bottom of this page to get started.
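For reference, a minimal setup could look like the sketch below, assuming the classic WindowsAzure.Storage client library (which the snippets in this article appear to use); the connection string and table name are placeholders to adjust to your own environment:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

var myCloudStorageAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>");

var tableClient = myCloudStorageAccount.CreateCloudTableClient();
var myCloudTable = tableClient.GetTableReference("Items");
myCloudTable.CreateIfNotExists();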

Some notes about the Azure Storage Emulator

During development the Azure Storage Emulator is a great tool for testing your application. It uses a local SQL instance and the local file system to emulate the Azure storage services. To manage this local database installation you can use Microsoft SQL Server Management Studio. The drawback is the lack of performance: each table insert operation generates a couple of related SQL operations, wrapped in a transaction. Depending on the transaction level, those have limited ability to execute in parallel. So to actually test the performance you should use a real table storage account up in the Azure cloud to get a fair result. Take a look at the SQL Server Profiler to trace the stored procedures and SQL queries executed when running against the emulator.
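As a side note, switching the code between the emulator and a real account is just a matter of which storage account you parse; a small sketch, assuming the same WindowsAzure.Storage SDK as above:

// Target the local Azure Storage Emulator instead of a real storage account
var emulatorAccount = CloudStorageAccount.DevelopmentStorageAccount;

// ...or, equivalently, via the well-known emulator connection string
var emulatorAccountFromConnectionString = CloudStorageAccount.Parse("UseDevelopmentStorage=true");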

First test

In this first test a list of dynamic table entities is created from a data source of 88,000 items. These are inserted, one at a time, into the Azure table. With this approach – running on my laptop – the performance reached 12 items/second.

// Build one DynamicTableEntity per item, all under the same partition key
var entities = items
    .Select(x =>
    {
        var entity = new DynamicTableEntity("Item", x.Id);
        entity["Text"] = new EntityProperty(x.Text);
        return entity;
    }).ToList();

// Insert the entities one at a time
foreach (var entity in entities)
{
    myCloudTable.Execute(TableOperation.Insert(entity));
}
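The items/second figures quoted throughout this article can be obtained by simply timing the insert loop; the measurement code below is my own sketch, not part of the original test:

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

foreach (var entity in entities)
{
    myCloudTable.Execute(TableOperation.Insert(entity));
}

stopwatch.Stop();
Console.WriteLine("Inserted {0} items in {1:F1} s ({2:F0} items/second)",
    entities.Count, stopwatch.Elapsed.TotalSeconds,
    entities.Count / stopwatch.Elapsed.TotalSeconds);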

Service Point Manager

Every operation performed against the Azure storage seems to imply one or more network hops. Most of them are small HTTP requests to a single endpoint address. By default the .NET Framework is very restrictive with this kind of communication. By overriding the settings exposed by the ServicePointManager it is possible to make .NET less restrictive and thus gain improved performance. Note that those settings must be applied before the application makes any outbound connections.

There are three main settings. The first one is DefaultConnectionLimit, which has a default value of 2. A higher value allows more requests to run in parallel. The optimal value depends on your application, on the surrounding environment and of course on the endpoint itself. There are some recommendations around which can be a great starting point, but I think you'll be better off, in the end, trying different values to find the magic threshold for your environment.

The second setting to pay attention to is Nagle's algorithm. It is used to reduce network traffic and increase performance in TCP/IP based networks, and by default it is on. But it has a negative effect on performance when using the Azure storage services. So try setting the UseNagleAlgorithm property to false on the ServicePointManager.

The third setting is the 100-Continue behavior. By default this is on, which tells the client to wait for an HTTP status 100 Continue from the server before sending the request body. Disabling this may result in better performance when using the Azure storage services.

So to summarize those settings in code:

// This code applies the settings to the table storage endpoint only
var tableServicePoint = ServicePointManager.FindServicePoint(myCloudStorageAccount.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
tableServicePoint.Expect100Continue = false;
tableServicePoint.ConnectionLimit = 100;
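The same three settings can also be applied globally, via the static defaults on the ServicePointManager, which then affect every service point created afterwards – again, set them early, before any outbound connection is made:

// Global defaults – affect all endpoints, not just the table storage endpoint
ServicePointManager.DefaultConnectionLimit = 100;  // default is 2
ServicePointManager.UseNagleAlgorithm = false;     // turn off Nagle's algorithm
ServicePointManager.Expect100Continue = false;     // skip the 100-Continue handshake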

With these settings the insert operation performance doubles: 24 items/second. But that is still not close to great performance. In the next code snippet we'll take advantage of the increased connection limit applied to the service point, using Parallel.ForEach to run the iterations over the dataset in parallel. The MaxDegreeOfParallelism property is set to 20, because higher values did not seem to increase performance any further. This could of course be different in your environment.

Parallel.ForEach(entities, new ParallelOptions { MaxDegreeOfParallelism = 20 }, (entity) =>
{
    myCloudTable.Execute(TableOperation.Insert(entity));
});

Using the parallel approach the insert operation performance reached 240 items/second – a tenfold increase.

Batch operations

The Azure storage services also provide entity group transactions, also known as batch operations, which make it possible to insert up to 100 items at a time. Note that all entities in a batch must share the same partition key. Using batches the application managed to insert 363 items/second. Combining the batch operations with the parallel approach, some serious performance gains start to happen: 3,267 items/second. This is done in the code example below. Note the TableConstants.TableServiceBatchMaximumOperations constant, which holds the maximum number of operations allowed in a TableBatchOperation.

// Split the entities into batches of at most 100 operations (the maximum allowed)
var batches = new List<TableBatchOperation>();
for (var i = 0; i < entities.Count; i += TableConstants.TableServiceBatchMaximumOperations)
{
    var batchItems = entities.Skip(i)
                             .Take(TableConstants.TableServiceBatchMaximumOperations)
                             .ToList();

    var batch = new TableBatchOperation();
    foreach (var item in batchItems)
    {
        batch.Insert(item);
    }

    batches.Add(batch);
}

// Execute the batches in parallel
Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = 20 }, (batchOperation) =>
{
    myCloudTable.ExecuteBatch(batchOperation);
});

Asynchronous batch operations

Now we're almost there. Just one more thing. The Azure Table storage service also comes with an asynchronous interface. ExecuteBatchAsync starts an asynchronous operation and returns a task that can be awaited. So if we put all the batch tasks into a list and wait for them in parallel, do we get any further performance gains? Yes. With this approach the application managed to insert 21k items/second and finished all 88,000 items in 4 seconds. That is a great improvement over the 12 items/second we had at the beginning of this article.

var batchTasks = new List<Task<IList<TableResult>>>();

for (var i = 0; i < entities.Count; i += TableConstants.TableServiceBatchMaximumOperations)
{
    var batchItems = entities.Skip(i)
                             .Take(TableConstants.TableServiceBatchMaximumOperations)
                             .ToList();

    var batch = new TableBatchOperation();
    foreach (var item in batchItems)
    {
        batch.Insert(item);
    }

    // ExecuteBatchAsync starts the batch immediately and returns a task to wait for
    var task = myCloudTable.ExecuteBatchAsync(batch);
    batchTasks.Add(task);
}

// Wait for the already started batches to complete, at most 20 at a time
Parallel.ForEach(batchTasks, new ParallelOptions { MaxDegreeOfParallelism = 20 }, (task) =>
{
    task.Wait();
});

However, it is not considered good practice (correct me if I'm wrong) to start tasks and process them inside a Parallel.ForEach. You should avoid that because, if the task pool has reached the maximum number of tasks being started, the Parallel.ForEach will compete for the same compute resources and start to slow things down. In the final code example below a slight modification has been made: a task threshold limits the number of batch tasks in flight, and when the limit is reached the accumulated tasks are awaited before continuing. This approach gave the same performance as the Parallel.ForEach version: 21k items/second.

var taskCount = 0;
var taskThreshold = 200; // Seems to be a good value to start with
var batchTasks = new List<Task<IList<TableResult>>>();

for (var i = 0; i < entities.Count; i += TableConstants.TableServiceBatchMaximumOperations)
{
    taskCount++;

    var batchItems = entities.Skip(i)
                             .Take(TableConstants.TableServiceBatchMaximumOperations)
                             .ToList();

    var batch = new TableBatchOperation();
    foreach (var item in batchItems)
    {
        batch.Insert(item);
    }

    var task = myCloudTable.ExecuteBatchAsync(batch);
    batchTasks.Add(task);

    // When the threshold is reached, block until the outstanding batches have completed
    if (taskCount >= taskThreshold)
    {
        Task.WhenAll(batchTasks).Wait();
        taskCount = 0;
    }
}

// Wait for any remaining batches
Task.WhenAll(batchTasks).Wait();

Conclusion

NoSQL databases give an interesting perspective when it comes to software architecture and design. As we saw with the CQRS principle, it doesn't have to be one or the other. They can cooperate in the same environment and you get the best of both worlds: SQL for normalized master data and NoSQL for de-normalized data, optimized for speed.

But there is more than one thing to consider before your application performs at its best. When it comes to Azure Table storage, the basic recipe is batch operations and parallelism to speed up the table operations. Changing the settings on the ServicePointManager is another thing to consider. But all of this depends on your environment.

Another important thing – which we don't deal with in this article – is the table design, which matters a great deal for building scalable, performant tables. Please read more about this in the Microsoft documentation.

External resources:
