Performance Tuning ASP/SQL Server applications
Developers and DBAs have traditionally had a somewhat antagonistic relationship: DBAs get upset with developers who don’t tune their applications to avoid putting strain on the precious databases, and developers get frustrated by the sometimes esoteric rules that the way SQL works imposes on them.
I’ve been lucky to have worked as both a developer (from COBOL and Focus on IBM mainframes down to hand-crafting assembler on 8 bit micro-processors) and a DBA (on DB2, Datacom/DB and SQL Server), so I can see both sides of the equation. Having worked with ‘pure’ DBAs and ‘pure’ developers, I’ve come to the realisation that the world is a much better place if you have a bit of an understanding of the requirements and limitations on both sides of the fence, and you’re willing to make a bit of an effort to tune both the DB and the code to support the end goal - stable, efficient applications.
Over the years traffic on my pet project has grown from 1 million page impressions a day to 2, then to 3.4, and currently a peak of over 6 million in a day (690,000 in one hour), with pretty much the same number of servers handling the load - so optimising the code and the database becomes more and more critical. Of course each year the hardware also gets better for the same price - this year, with AMD Opteron processors and 64 bit versions of Windows 2003 Server and SQL Server 2005, we have much more raw computing power available to throw at the problem.
A lot of performance tuning is very application / environment specific, but over the years there are a few tricks I’ve applied to my favourite site and also a number of other projects, either building from scratch or going in fire-fighting to work out why response times were so slow.
Because I’m a nice person…. here they are to share. If you find any of them useful, or have any suggestions to improve them please feel free to share in the comments below.
Oh, and because I’m English I spell optimise with an “s” but I know our cousins across the water like to spell it optimize with a “z” … but it’s just about making things work better at the end of the day!
Indexing - Easy to miss, devastating if not done right.
When you’re developing database structures and code side-by-side it’s very common to add new queries - a quick select or an update - without checking that SQL Server is going to be able to optimise them without a bit of extra help. Imagine the table has 100,000 rows and you’re doing an update on a column that’s not the primary key (why are you doing that in the first place?!) and isn’t part of an index… the poor old SQL Server has to sift through every one of those rows to find the one you wanted to update (maintaining a lock on the table, in addition to the CPU, RAM and disk resource required for the trawl). Index optimisation can be assisted (given sufficient test data) by the Index Tuning Wizard in Enterprise Manager (the Database Engine Tuning Advisor in SQL Server 2005), but in most cases a manual tuning exercise - identifying queries and either re-structuring them to suit the data, or adding indices to support their requirements - is more beneficial. Note: don’t go overboard on index creation… every additional index brings with it overhead in terms of data storage and the resources required to maintain it.
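A sketch in T-SQL (the opt_in column and the index name are invented for illustration; tMember and its email column appear in examples later in this article):

```sql
-- With no index on email, this update forces SQL Server to scan
-- every row of tMember (holding a lock while it trawls):
UPDATE tMember SET opt_in = 0 WHERE email = 'someone@example.com'

-- A supporting non-clustered index lets it seek straight to the row:
CREATE NONCLUSTERED INDEX ix_tMember_email ON tMember (email)
```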
Locking – Sometimes it’s good to share.
SQL Server is, by default, fairly cautious about locking. While this is often good, especially with rapidly changing data and transactions, it can be very resource intensive for simple data retrieval where you either know the data won’t be changing, or it’s okay to use the last completed values rather than worry about a record that’s currently being updated. Reading without observing locks is often referred to as a dirty read.
Telling SQL Server not to observe locks on a table is simple – add “with (nolock)” after the table name in the from clause (eg “select email from tMember with (nolock) where member_id = 1”)
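If a page runs a lot of queries on one connection, the same effect can be had in one go by dropping the isolation level for that connection instead of adding the hint to every table (a sketch; the behaviour is equivalent to nolock on every table referenced afterwards):

```sql
-- Subsequent queries on this connection perform dirty reads,
-- as if each table carried with (nolock):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select email from tMember where member_id = 1
```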
Don’t nest queries on the same connection - Sometimes it’s not so good to share
A lot of the time in an application it’s easy to just create one connection to a database and use that for all communications. While it makes for a quick and easy programming model, it’s not making the best use of the database and communications resources. In a typical web application the ASP pages communicate with SQL Server via a TCP connection. Each of those connections requires a dedicated port which is held open (and locked) for as long as SQL Server and the operating system believe it’s still needed. In a simple model where all requests are sequential this doesn’t cause a huge problem, as the system is able to keep track of resources and almost all the assumptions it makes are correct. If, however, you nest queries on the same connection, it all starts to get confused.
rs = con.execute("select email, first_name, last_name from tMember with (nolock) where member_id = 1")
is okay as it’s obvious to the database when you’ve finished with the connection
rs = con.execute("select email, first_name, last_name from tMember with (nolock) order by member_id")
While not rs.eof
    rs2 = con.execute("select answer from tCompetitionEntry with (nolock) where comp_id = 1 and member_id = " & rs("member_id").value)
    ' ... process rs2 ...
    rs.movenext
End While
is bad because SQL Server and the underlying operating system can never be sure exactly when you’ve finished with the inner query… so they leave the connection to time out rather than freeing it up for re-use nice and quickly.
The solution is very simple… open a second database connection for the nested transaction
rs = con.execute("select email, first_name, last_name from tMember with (nolock) order by member_id")
While not rs.eof
    rs2 = con2.execute("select answer from tCompetitionEntry with (nolock) where comp_id = 1 and member_id = " & rs("member_id").value)
    ' ... process rs2 ...
    rs.movenext
End While
- just don’t forget that for each level of nesting you need a new database connection. When you have nested includes, function and subroutine calls and (especially dangerous) recursion, it’s quite easy to lose track of which database connection you are using, but for a high traffic, database intensive site getting this wrong can be pretty catastrophic.
Release resources - some things just want to be free
Further to eliminating nested queries on a connection there are a number of other things that you can do to help SQL and the operating system free up resources to make for a much smoother and stable environment.
Always close connections yourself, as soon as you can (and don’t rely on timeouts and garbage collection to do your housekeeping).
rs = con.execute("select count(*) from tCompetitionEntry with (nolock) where comp_id = 1 and member_id = " & uMemberId)
if rs(0).value > 0 then
    ' they have already entered so bounce them to an error page
    response.redirect("error.aspx?err=" & server.urlencode("Sorry, you have already entered this competition"))
end if

This works, but it’s bad because it leaves the database connection hanging around after the redirect. Better to include a specific con.close before the redirect. In fact I always use a small subroutine to ensure the database connection(s) have been closed before performing the redirection (makes sure I don’t forget!)…
sub page_redirect(url)
    closeDB() ' subroutine to ensure all database connections are closed
    response.redirect(url)
end sub

' they have already entered so bounce them to an error page
page_redirect("error.aspx?err=" & server.urlencode("Sorry, you have already entered this competition"))
Move the data to the application and free the connection. While it’s quick and easy to keep a connection open while you loop through and process the data, be aware that in doing so you tie up the connection for the entire duration of the processing.
rs = con.execute("select email, first_name, last_name from tMember with (nolock) order by member_id")
While not rs.eof
    ' ... lengthy processing with the connection held open ...
    rs.movenext
End While

In this day and age RAM is pretty much a commodity and it’s very cost effective to add a GB or two of RAM to a server - certainly better than having to upgrade your entire infrastructure because it’s easier than optimising some code. To take advantage of the available memory, simply dump the data from the query into an array, release the connection and then process the data (although, of course, monitoring memory usage to ensure you’re not thrashing to disc is important):

rs = con.execute("select email, first_name, last_name from tMember with (nolock) order by member_id")
memberArray = rs.getRows()
rs = nothing
con.close
for memberLoop = 0 to ubound(memberArray, 2)
    ' ... process memberArray(column, memberLoop) with no connection held ...
next
TCP Time_Wait and Max User (Ephemeral) Ports
By default Windows uses the conventional BSD range of 1024 to 5000 for its anonymous (ephemeral) ports. This places a limit on the number of concurrent sessions you can have active at any one time, which for a high traffic site can be a significant issue. You can increase the port range that is used for anonymous ports to approximately 20,000 ports by modifying the MaxUserPort registry key (this parameter controls the maximum port number used when an application requests any available user port from the system; you can set only the upper bound of the ephemeral port range). To modify the MaxUserPort registry key, follow these steps:
Start Registry Editor (Regedt32.exe). Please note that you should back up your registry and any important files on your computer before editing the registry. Locate the MaxUserPort key in the registry: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
On the Edit menu, click Add Value, and then add the following registry value: Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534 (for example)
Valid Range: 5000-65534 (decimal)
Default: 0x1388 (5000 decimal)
Quit Registry Editor.

When a connection is closed it stays in the TIME_WAIT state for a certain length of time, in case the system decides it is better to re-use the connection than create a new one. On a highly dynamic website, however, most connections are from new users wanting new data, so this re-use rarely happens and the waiting connections just tie up ports. By default the TIME_WAIT state is maintained for 240 seconds, and while a connection is in the TIME_WAIT state the socket pair cannot be reused. For more information, see RFC 793.
You can change the timeout on the connections from 240 seconds (the default) to any length from 30 seconds to 300 seconds. Use the TcpTimedWaitDelay registry parameter to change this value (this parameter determines the length of time that a connection stays in the TIME_WAIT state when the connection is being closed). To do this:
Start Registry Editor (Regedt32.exe), locate the following key in the registry, and then modify the value: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
Value Name: TcpTimedWaitDelay
Value Type: REG_DWORD - time in seconds
Valid Range: 30-300 (decimal)
Default: 0xF0 (240 decimal)
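For reference, both settings can be captured in a single .reg file; the values below are examples only (65534 ports = 0xfffe, 30 seconds = 0x1e), not recommendations:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe
"TcpTimedWaitDelay"=dword:0000001e
```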
More information available at http://support.microsoft.com/default.aspx?scid=kb;EN-US;319502

Select * - sometimes wishing on a star isn’t good
It’s very simple when writing a query to take the easy route and ask the database for all the columns in a table - by using "select * from tMember with (nolock)". This innocent piece of coding can have a lot of undesired results. Firstly, in terms of performance, not only does SQL Server have to go away and find out what those column names are before it can execute the query, it also has to pull back more data than you actually need. Secondly, any code which relies on the sequence the columns are returned in can break when a new column is added.
Far better to craft SQL Selections that return exactly what you need (with a minimal overhead) by specifically defining the required columns (eg “select first_name, last_name, email from tMember with (nolock) where member_id = 1” is a lot better for the system than “select * from tMember where member_id = 1” in so many ways)
OPTION (FAST n) - For when bigger isn’t better
Before executing a query SQL Server has no real way of knowing how large the returned set of data will be, so it has to allocate an estimated buffer. Although its estimates are good (and get better as it works with the data) there are many occasions where a little human intervention and knowledge of the application can really help.
If you know that a query will only ever return one row, or a hundred rows, or for that matter 100,000 rows, why not tell SQL Server up-front so it can marshal its resources in the most efficient way.
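For example, the member lookup used earlier in this article can only ever return one row, so it can carry the hint like this:

```sql
select email, first_name, last_name
from tMember with (nolock)
where member_id = 1
OPTION (FAST 1)
```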
Note: There can be a dark side to this hint… although it optimises for the number of rows specified, if your query returns more rows than declared, overall performance will be compromised. Only use it if you’re sure of the size of the return set, or add a bit of margin.
Note: OPTION (FAST n) is similar to the SQL 7 hint FASTFIRSTROW, which only optimised for single-row returns; FASTFIRSTROW will not be supported in future versions of SQL Server.
Stored Procedures vs ad hoc queries – It’s always better if you know what you’re doing
If you send an ad hoc query to SQL Server it has to optimise it (based on what it knows about the database, the tables, existing indices etc, assisted by any hints and help you’ve already given it) and then execute it - and it has to do that every time the query is submitted.
Stored Procedures are a way in which a series of SQL statements (and associated logic) can be packaged together with input and output parameters to perform a specific function. While requiring a little more forethought on behalf of the developer and DBA Stored Procedures are great for SQL Server as it only has to perform the optimisation step once when the query is defined, and not every time it is executed. Those small savings can add up over time.
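As a minimal sketch (the procedure name is invented), the competition-entry check from earlier could be packaged as:

```sql
CREATE PROCEDURE getCompetitionEntryCount
    @comp_id int,
    @member_id int
AS
    select count(*)
    from tCompetitionEntry with (nolock)
    where comp_id = @comp_id and member_id = @member_id
```

The ASP code then calls "exec getCompetitionEntryCount 1, 42" (or uses an ADO Command object with parameters), and SQL Server re-uses the execution plan it built the first time around.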
The other bonus of using Stored procedures is code maintainability. If you make all DB references via Stored Procedures then when you need to make a change to the database you don’t have to spend as much time checking every dynamic SQL statement to see if it breaks - check the Stored Procedures referencing the table in question and work from there….
Sadly stored procedures are often an area where the demarcation really rears its head, with DBAs not wanting to let the developers near their database, and the programmers not wanting a DBA dictating how they develop their queries. But it’s also one of the best places to forge some common ground and shared understanding of what’s possible, desirable and practical.
It is strongly recommended that you do not create any stored procedures using sp_ as a prefix. SQL Server always looks for a stored procedure beginning with sp_ in this order:
1. The stored procedure in the master database.
2. The stored procedure based on any qualifiers provided (database name or owner).
3. The stored procedure using dbo as the owner, if one is not specified.

Therefore, although the user-created stored procedure prefixed with sp_ may exist in the current database, the master database is always checked first, even if the stored procedure is qualified with the database name.
Important: If any user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.
Database Housekeeping – It’s easier to find things when they’re organised
Over time, just like the filesystem on your PC, any database will become fragmented - the data in tables and indices scattered around the structure of the database file, and even the physical files underlying the database scattered around the hard drive(s). This all adds overhead to every database operation. Sadly tidying this up is not a quick or easy task - but if scheduled regularly it can be made more predictable, and certainly less painful.
There are a number of DBCC tasks which can be run within SQL Server. Please check the documentation before diving in and trying these on a live database.
DBCC INDEXDEFRAG is one of the more useful utilities as it defragments the target index. DBCC SHOWCONTIG can be used to show which tables and indices are in need of defragmentation. These two DBCC commands can be used together to identify trouble-spots and rectify them.
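A sketch of that workflow (tMember comes from earlier examples; the index name is invented - substitute one of your own):

```sql
-- Report fragmentation statistics for a table and its indices:
DBCC SHOWCONTIG ('tMember')

-- Defragment a specific index (0 means the current database):
DBCC INDEXDEFRAG (0, 'tMember', 'ix_tMember_email')
```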
Externally to the database itself the physical health of the machine (fragmentation of the hard drive, sufficient RAM etc) can all contribute to smooth running, or poor performance. Use of SQL Profiler and the Windows Performance Monitor tools is highly recommended to examine all projects on a regular basis – in new development to make sure there are no unexpected poor performers, in old projects just to make sure you can’t eke out a little more performance or improve stability and in maintenance projects just to make sure the latest changes have not impacted the database unduly.
Is that it? It all sounds so simple….
Taken on their own, any of these techniques have value. Used in combination they can have some pretty good results. Obviously it’s always better to design and build your application from the ground up to include these guidelines and any other site-specific performance rules… but always remember there’s no one panacea that’s guaranteed to make your application perform perfectly, and you can never stop researching other/better solutions to problems you think you’ve got covered. Often you may have to try a number of combinations and review code, database and environment to find the answer to the problem. And it’s not always obvious.

Although these hints are mostly based around development work, I have had an experience where a website was performing terribly even though the database server was at less than 15% utilisation and the front-end webservers were all below 20% utilisation… the problem turned out to be a faulty network switch in the infrastructure between them (so… developers and DBAs… remember, play nice with the networking gurus as well as each other!)