Design a reliable data access layer to access SQL Azure

May include but is not limited to: define client data access standards, connection timeout scenarios

There’s a nice article on TechNet on this topic and the next one, Design an efficient strategy to avoid data access throttling. I won’t publish another post for it, because there’s such a huge overlap between these subjects. The linked article above explains both of them pretty well.

As I have stated numerous times in these posts (I know I'm constantly repeating myself), SQL Azure has the unique feature of killing your connections if you don't play by the rules and try to monopolize the resources of the server where your database resides. This feature is called throttling, and you should be aware of it and code your data access logic accordingly. This means building in retry logic (the preferred way, according to the examples floating around online, is to use extension methods on the basic ADO.NET classes, such as SqlConnection and SqlCommand). The typical approach (a Microsoft recommendation) is to wait ten seconds and try again. Of course, you can increase this delay if needed.
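Here's a minimal sketch of such a retry extension method. The method name, retry count, and delay are my own illustrative choices, not a standard ADO.NET API:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlRetryExtensions
{
    // Illustrative extension method: opens a connection, retrying on
    // SQL errors with a fixed delay (ten seconds by default, per the
    // Microsoft recommendation mentioned above).
    public static void OpenWithRetry(this SqlConnection connection,
                                     int maxRetries = 3,
                                     int delaySeconds = 10)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                connection.Open();
                return;
            }
            catch (SqlException)
            {
                if (attempt >= maxRetries) throw;  // give up, rethrow
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
            }
        }
    }
}
```

A similar extension method on SqlCommand would wrap ExecuteNonQuery or ExecuteReader in the same kind of loop.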

Trying to force something that fails constantly isn't such a bright idea, though. So on the second or third attempt, you might as well check what the real problem is and change your workload accordingly – and if all else fails, you can partition your database horizontally, but that's a whole different topic.

Microsoft says you should do three things in your data access layer: execute transactions in a retry loop, catch connection termination errors, and wait a little before reconnecting (the retry sketch above does exactly this). Now let's see some possible reasons for database throttling. I remind you again to check out the TechNet article that is the basis of this post for a complete reference. The following conditions result in throttling:

  • Consuming more than a million locks.
  • Uncommitted transactions.
  • Locking a system resource for more than 20 seconds.
  • A single transaction’s log file size exceeds 1 GB.
  • Using more than 5 GB tempdb space.
  • Consuming more than 16 MB of memory for more than 20 seconds when memory is limited.
  • Exceeding maximum database size.
  • Idle connections longer than 30 minutes.
  • A transaction running for more than 24 hours.
  • A denial-of-service attack (connections from the offending IP address or address range are throttled).
  • Network errors.
  • Failover errors.

These are the main reasons for connection throttling. Of course there's a cure, and it comes in the form of some general best practices recommended by Microsoft. They make such a perfect set of exam questions that I won't leave anything out. So, the best practices to avoid throttling:

  • Minimize network latency (choose the closest data center).
  • Reduce network usage: cache data and minimize round-trips to the server.
  • Keep your connections open for as short a time as possible.
  • Set short timeout durations in the connection string.
  • Use connection pooling (see the sample connection string after this list).
  • Wrap all database operations in transactions with TRY CATCH blocks.
  • Fine-tune your T-SQL.
  • Keep (or push) business logic in SQL Azure.
  • Use stored procedures and batching.
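To make the timeout and pooling items concrete, here's what such a connection string might look like. The server, database, and credentials are placeholders, and the specific values are illustrative rather than official recommendations:

```csharp
// Placeholder server/database/credentials; note the explicit timeout
// and pooling settings from the best-practice list above.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;" +
    "Database=mydb;" +
    "User ID=myuser@myserver;" +
    "Password=<password>;" +
    "Encrypt=True;" +
    "Connect Timeout=30;" +              // fail fast instead of hanging
    "Pooling=True;Max Pool Size=100;";   // reuse connections
```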

Optimize a data access strategy

May include but is not limited to: batch operations and performance techniques, data latency due to location, saving bandwidth cost

These topics were mentioned in some of my previously published posts, but a little review couldn't hurt. This topic is about improving performance – both "real" performance, the raw speed of retrieving data, and perceived performance, such as not blocking your UI thread while you're waiting for data to arrive.

When dealing with the Azure environment, you'd like to be cost effective (you should be cost effective in every case, but cloud computing is a bit special, since you literally pay for your laziness). This means that you should design your data access strategy in a chunky manner. Round-trips to the database over an encrypted connection tend to be slow, not to mention the possibility of failure. Because of this, you'd like to query bigger chunks of data, and let the SQL Azure engine process them for you to save computing costs, too. You'd better use views and stored procedures and implement your business logic in them, very close to the data (an arguable design, I admit). Also, you'd like to submit your changes in batches – a good ORM such as EF or Hibernate can do this for you.
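As a sketch of the batching part with Entity Framework – the ShopEntities context and the Order entity here are hypothetical – changes are accumulated locally and submitted together in a single SaveChanges call (one transaction), instead of a round-trip per row:

```csharp
using System.Collections.Generic;

public static class OrderWriter
{
    // ShopEntities and Order are a hypothetical EF model.
    public static void InsertOrders(IEnumerable<Order> newOrders)
    {
        using (var context = new ShopEntities())
        {
            // Queue up all the inserts locally, without touching the network.
            foreach (var order in newOrders)
            {
                context.Orders.AddObject(order);
            }

            // Submit everything together, in one transaction.
            context.SaveChanges();
        }
    }
}
```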

Another great way of reducing bandwidth usage is to load only the data that is absolutely necessary, and nothing else. The concept of lazy loading is very useful here – and mature ORM tools provide it for you nowadays. Note that lazy loading and chunky data retrieval pull in opposite directions, so you have to find the right balance between them.
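A minimal lazy-loading sketch with EF (using the virtual navigation property convention; the entities and the context are again hypothetical):

```csharp
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    // virtual lets EF create a lazy-loading proxy: the orders are
    // only queried when the property is first accessed.
    public virtual ICollection<Order> Orders { get; set; }
}

// Only the customer row is fetched here (context is a hypothetical DbContext)...
var customer = context.Customers.Find(42);

// ...and the orders query runs only now, on first access.
int orderCount = customer.Orders.Count;
```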

You can also do a lot by caching your results. If you deploy an ASP.NET application to the cloud, you get a great and easy-to-use caching feature.
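A simple read-through caching sketch using .NET's in-memory cache (GetProductsFromDatabase and Product are hypothetical stand-ins for your data access code):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default;

// Serve from the cache if we can; hit the database only on a miss.
var products = cache["products"] as List<Product>;
if (products == null)
{
    products = GetProductsFromDatabase();  // the expensive round-trip
    cache.Set("products", products,
              DateTimeOffset.UtcNow.AddMinutes(5));  // absolute expiration
}
```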

The concept of sharding (or horizontal partitioning, or federating…) can help a lot in scaling an application horizontally. The idea is relatively simple: you have multiple databases with the exact same schema, and you store data in them based on some criterion (geographic location, customer ID, etc.). The benefits are smaller indexes and faster query results; the drawback is more complicated data access code.
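A toy sketch of that complication – routing each customer to a shard by ID. The connection strings are placeholders, and a real sharding scheme needs range maps, rebalancing, fan-out queries, and so on:

```csharp
using System.Data.SqlClient;

public static class ShardRouter
{
    // Identical schema in every shard; placeholder connection strings.
    private static readonly string[] Shards =
    {
        "Server=tcp:shard0.database.windows.net;Database=app;...",
        "Server=tcp:shard1.database.windows.net;Database=app;...",
    };

    // Simple modulo routing on the sharding key.
    public static SqlConnection ForCustomer(int customerId)
    {
        return new SqlConnection(Shards[customerId % Shards.Length]);
    }
}
```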

The techniques above are real performance tuners, but you can easily boost the perceived performance of your applications by not blocking the main (UI) thread. Make database calls asynchronously, show a nice animation while loading, and your app will be perceived as faster.
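A sketch of such a non-blocking call using ADO.NET's Task-based methods (the query and the connection string are placeholders):

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class OrderQueries
{
    public static async Task<int> GetOrderCountAsync(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            await conn.OpenAsync();
            // The calling (UI) thread is free while the query runs.
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}
```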

Design a database migration plan from SQL Server to SQL Azure

May include but is not limited to: differences between SQL Azure and SQL Server, concessions for unsupported features, schema, data, reporting and analytic tooling

Differences between SQL Azure and SQL Server

As I stated in the previous blog post, there are a lot of differences between SQL Azure and SQL Server. Let's start at the beginning. When you create a SQL Azure database, you have to specify the edition and the maximum size of the database you wish to use. Here's the list of database sizes available:

  • Web edition
    • 1 GB
    • 5 GB
  • Business edition
    • 10 GB
    • 20 GB
    • 30 GB
    • 40 GB
    • 50 GB
    • 100 GB
    • 150 GB

Note that the pricing varies – you pay $9.99 per database per month for up to 1 GB, or $49.95 per database per month for up to 5 GB of database size in the Web edition. The Business edition costs $99.99 per 10 GB of database per month, and maxes out at $499.95 per database.


Choose the appropriate data storage model based on technical requirements

May include but is not limited to: SQL Azure, Cloud drive, performance, scalability, accessibility from other applications and platforms, Windows Azure storage services: blobs, tables and queues

SQL Azure

SQL Azure in itself is big enough to fill a book (in fact, it does fill a book – most of this section is based on Pro SQL Azure by Scott Klein and Herve Roggero), so this section is just a quick introduction. SQL Azure is a transactional database based on SQL Server 2008. It supports the T-SQL language and a limited set of functions from SQL Server. It also supports ADO.NET and ODBC data access. You can even use your favorite SSMS to connect to and manage SQL Azure databases, but there's an online solution, too.

You should be aware that SQL Azure runs in a multitenant environment. This means that you have restrictions on query time, CPU, etc. So if you have a long-running query, massive CPU usage, or something similar that might affect other users' databases on the server, your database connection can be (and will be) throttled (terminated).

Despite this fact, you should be aware that with SQL Azure you pay for storage (GBs of database size)*, so you should perform some CPU-intensive tasks within SQL Azure instead of your application. The benefit is that CPU usage in SQL Azure is free, while you have to pay for it on an hourly basis in an app hosted in the cloud.

Scalability in SQL Azure revolves around sharding. The design guidelines are explained here. Sharding is a kind of horizontal partitioning: you split the data by rows (as opposed to vertical partitioning, which splits by columns) and store the row subsets in separate databases. I'll explain the concept in another blog post later.

Last but not least, have a look at the (most important) limitations of SQL Azure:

  • No support for backing up/restoring databases (there are workarounds, of course)
  • No USE statement, and you cannot use database names (this rules out cross-database queries)
  • No Windows Authentication
  • Setting server-level collation is disabled
  • No heap tables; clustered indexes are a must
  • Maximum database size is 150 GB
  • No SQL Server Agent
  • Idle connections are terminated (after 30 minutes)

For the full list of limitations, see http://msdn.microsoft.com/en-us/library/ee336245.aspx.

Windows Azure Storage Services

Blobs

Blobs – as their name suggests – are large binary objects stored in the cloud. At the time of this writing, their size maxes out at 200 GB in the case of a block blob and 1 TB when using page blobs. Usually you would store images or video/audio in blobs. Video is an especially good fit, because block blobs support streaming.

I introduced two different kinds of blobs: block blobs and page blobs. Let's elaborate on them a bit. If you need further info, refer to MSDN.

Block blobs

A block blob is built from blocks, which can have a maximum size of 4 MB (the largest block supported in one operation). You are free to modify, delete, and insert blocks of a block blob, and commit or discard your changes as needed. The maximum size of a block blob is 200 GB, and it can contain a total of 50,000 blocks.
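A quick upload sketch, assuming the Microsoft.WindowsAzure.StorageClient library of this era; the container name, blob name, file path, and connection string are all placeholders:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

var account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();

CloudBlobContainer container = client.GetContainerReference("media");
container.CreateIfNotExist();

CloudBlockBlob blob = container.GetBlockBlobReference("intro.wmv");
// The library splits the file into blocks behind the scenes and
// commits the block list when the upload completes.
blob.UploadFile(@"C:\videos\intro.wmv");
```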

Page blobs

Page blobs are optimized for random access. They can be up to 1 TB in size, and they are built from 512-byte pages. You cannot "version" your pages, so updates to one or more pages take effect immediately.

A special subtype of page blob is Azure Drive (or Cloud Drive): a VHD mounted as a local drive letter. It was mostly used before the other APIs were available.

Queue storage

Windows Azure provides a queue-based messaging service that you can use for communication between Azure roles (more on them later). Messages can be up to 64 KB in size, and they are generally FIFO, but there's no guarantee they will be delivered in that order. You can of course handle payloads bigger than 64 KB by storing them in blobs and enqueuing a reference to them.
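A producer/consumer sketch with the StorageClient library; the queue name, message content, and connection string are placeholders:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

var account = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = account.CreateCloudQueueClient();

CloudQueue queue = queueClient.GetQueueReference("orders");
queue.CreateIfNotExist();

// Producer role: enqueue a small (< 64 KB) message.
queue.AddMessage(new CloudQueueMessage("process-order:42"));

// Consumer role: dequeue, process, then delete explicitly.
CloudQueueMessage msg = queue.GetMessage();
if (msg != null)
{
    // ...process the message...
    queue.DeleteMessage(msg);
}
```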

Tables

Tables allow you to store entities of up to 1 MB each, and a table can hold up to 100 TB of data. An entity can have 255 "columns" (properties) with different data types. Unlike SQL Azure, there is no relational support, so you can't have foreign keys, joins, etc. A good use for these tables is, for example, a leader board for a game: small in size, not complex, no relationships required.

Table entities have three reserved properties: a partition key (which groups entities within the table), a row key (which identifies the entity within its partition), and a timestamp. The partition key and row key together form the entity's unique key.

Tables come with a fairly limited type set: byte[], bool, DateTime, double, Guid, int, long, and string. For more info on tables, refer to MSDN.
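Here's what the leader-board example might look like as a table entity, using the TableServiceEntity base class from the StorageClient library (which supplies PartitionKey, RowKey, and Timestamp); the entity and property names are my own:

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;

public class HighScoreEntity : TableServiceEntity
{
    public HighScoreEntity() { }

    public HighScoreEntity(string game, string player)
    {
        PartitionKey = game;   // all scores of one game share a partition
        RowKey = player;       // unique within the partition
    }

    // Regular properties become table "columns" (from the limited type set).
    public int Score { get; set; }
    public DateTime AchievedAt { get; set; }
}
```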

There is more info about the various storage options in Windows Azure in this TechNet article.

 

*The full truth is that you pay for two things: storage and bandwidth. However, bandwidth between a SQL Azure database and an application running inside Windows Azure is free.