
Dipping My Toes Into SQL Azure – Part 2 – Protection Mechanisms



I left off my previous post stating that I was going to start looking at performance and reliability issues associated with SQL Azure.  In doing this work I discovered and encountered some of the protection mechanisms built into the service.

 

SQL Azure Protection Mechanisms

SQL Azure is a secure, isolated, multi-tenant system.  As such, the system is designed with various protection mechanisms.  An obvious protection mechanism is denial of service (DOS) prevention.  SQL Azure also implements SQL Server specific mechanisms to maintain performance and reliability standards.

Note: These mechanisms are added and changed frequently to maintain the best environment possible, providing high levels of reliability and performance for SQL Azure users.  Because of this, any values I use in this post should be treated as rough estimates.  In fact, I have inflated or deflated the values I use in testing to levels that will trigger possible issues in my code; they may never trigger the same behavior in SQL Azure, but they allow me to harden my code base.

Stand-Alone Testing – Simulate Some SQL Azure Behavior(s)

I found that I could simulate many of SQL Azure's protection mechanisms on a stand-alone server and run my test suites.  I specifically inflated or deflated the protection targets to trigger the issues in my testing.  Many of these simulations can be done with simple configuration changes to establish limits.  I can’t cover all the mechanisms in a single blog, but I tried to cover the ones I found to be gotchas.

SQL Azure Mechanisms

Tempdb  (http://technet.microsoft.com/en-us/library/cc966545.aspx#EGAA)

Since TEMPDB is a shared resource on an instance, there is potential for overuse by a single user.  I started thinking of the locations in my code that use temp tables, triggers (version store), queries that could lead to hash spills, cursors, and sorting activities.  For those that could use significant space in TEMPDB, I evaluated how to reduce the usage by breaking work into batches, coming up with a new design, adding an index, and such.

The TEMPDB limitation is currently based on per-session usage.  Stand-alone doesn’t have a simple per-session usage limit for TEMPDB, so I set a small fixed size for TEMPDB that will trigger out-of-space issues in my test runs.  By limiting this to functional tests I avoid multi-tenant usage of TEMPDB and identify those queries and data sets that could encounter a limit on SQL Azure.  I currently test with a TEMPDB MAX = 512MB limit.  As of today this is about ¼ the limit of SQL Azure.  It is big enough for me to run most of my queries without issue and small enough to catch design and usage patterns in my code that I need to reconsider.
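Here is a minimal sketch of how such a cap can be applied on a stand-alone test server; the logical file names (tempdev, templog) are the installation defaults and may differ on your instance, and the size split is my own choice:

-- Cap the TEMPDB data and log files at a fixed size so test runs
-- hit out-of-space errors instead of growing unbounded.
-- Takes effect the next time the instance restarts (TEMPDB is recreated).
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 448MB, MAXSIZE = 448MB, FILEGROWTH = 0);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 64MB, MAXSIZE = 64MB, FILEGROWTH = 0);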

I use the Trace Events: Errors and Warnings category for events such as Exchange Split, Hash Warning, and Sort Warning to find queries that I might need or want to tune.  I also add the Cursor category to find cursors that might be using TEMPDB because of cursor conversions.

There are performance counters for TEMPDB that I can set alerts on as well as DMV queries to help me monitor TEMPDB usage.

dm_tran_version_store and dm_tran_top_version_generators for looking at version store activities

dm_db_file_space_usage and dm_db_task_space_usage for looking at space usage
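As a rough sketch, a query such as the following (built on the space-usage DMV above, with 8KB page counts converted to MB) can be scheduled to watch per-session TEMPDB consumption:

-- Approximate TEMPDB space reserved per session, in MB (8KB pages).
SELECT session_id,
       SUM(user_objects_alloc_page_count - user_objects_dealloc_page_count) / 128.0 AS user_object_mb,
       SUM(internal_objects_alloc_page_count - internal_objects_dealloc_page_count) / 128.0 AS internal_object_mb
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_object_mb DESC;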

Locks

Each lock requires a memory allocation, and SQL Azure requires a given transaction to be a good citizen.  I set the sp_configure value for locks to 1,000,000 (1 million), which is far below the current SQL Azure limit, but again a good way for me to find possible lock consumers in my code.
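A sketch of that configuration change; 'locks' is an advanced option and, to my understanding, a non-zero value takes effect after a restart of the instance:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 1000000;  -- fixed lock limit instead of dynamic
RECONFIGURE;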

The DMVs for locks are helpful (dm_tran_locks), and the Trace Events: Locks - Escalation event is a good place to find queries that could consume locks if escalation is not achieved.

I also enabled trace flag -T1211 (test system only), which disables lock escalation, to help find those queries that could consume a large number of locks.

Once I found these queries I revisited the batch size of the transaction and in some instances updated query hints.  For example, ReadTrace does a load operation, and during the load there is no other access to the tables.  I could use the TABLOCK query hint to reduce the lock memory I am using, and it does not change any concurrent access paths for my application.

Transaction Log Space

The databases from various tenants using SQL Azure share disk drives.  It would be an inefficient system if each database got its own set of disk drives, and even if each had its own disk drive, allowing the log to grow unbounded would lead to out-of-space issues.  The protection mechanisms in SQL Azure monitor how much log space a transaction has consumed as well as how much log can’t be truncated because an open transaction is holding the truncation point of the transaction log.

Just like I did for TEMPDB, I set the log size to a fixed MAX of 512MB on my stand-alone server and set up a job to back up the log once a minute.  Again, far lower than the current SQL Azure target but good for identifying code lines I need to evaluate.  If a transaction produces more than 512MB of log per minute, my testing encounters the out-of-space errors and I can make the necessary application changes.
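A minimal sketch of that setup, assuming a test database named dbTest with a default-style log file name (both names are placeholders):

-- Fix the transaction log at 512MB so runaway transactions fail fast.
ALTER DATABASE dbTest
    MODIFY FILE (NAME = dbTest_log, MAXSIZE = 512MB);

-- Body of the once-a-minute SQL Agent job: keep truncating the log.
BACKUP LOG dbTest TO DISK = 'c:\temp\dbTest_log.bak' WITH INIT;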

You can monitor the space usage with DMVs similar to how I described in the TEMPDB section.

I also had a test run where I allowed the log to grow unbounded and monitored the size of my log backups.  This accounts for situations where a transaction might not use a large amount of transaction log space itself, but stays open for a long time, holds the truncation point, and causes other transactions to accumulate large amounts of combined log space.

Transaction Length

Along with the transaction log space check is a transaction duration check.  As described above, a long running transaction could be idle but holding the log truncation point.  I established a job that looks at dm_tran_database_transactions for a database_transaction_begin_time older than 5 minutes.  When I find one I issue the appropriate KILL, triggering errors in my test run for those areas in my code that need to look at their transaction duration and concurrency needs.
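A sketch of the detection query behind such a job; the join through dm_tran_session_transactions (my addition, not mentioned above) maps the transaction back to a session_id that can be killed:

-- Sessions with a transaction open longer than 5 minutes.
SELECT st.session_id,
       dt.database_transaction_begin_time
FROM sys.dm_tran_database_transactions dt
JOIN sys.dm_tran_session_transactions st
     ON st.transaction_id = dt.transaction_id
WHERE dt.database_transaction_begin_time < DATEADD(MINUTE, -5, GETDATE());
-- For each session_id returned, the job issues: KILL <session_id>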

CPU

CPU is another resource that has to be shared on any server, and a runaway query is something that SQL Azure can prevent as well.  This is yet another scenario where I can’t just give you a number.  If you are using CPU but it is not causing CPU contention on the system, SQL Azure may not take any action.  If you are familiar with the Resource Governor behavior in SQL Server, the SQL Azure behavior is similar.  The resource governor does not penalize you for using CPU; it only makes sure the contenders for CPU are treated fairly.  Without getting into the details of the CPU activity of SQL Azure, you can assume that a session may be terminated if the system determines it is a CPU hog that impacts the overall performance characteristics of the server.

I can’t simulate this directly with resource governor because resource governor only handles fairness and does not take a KILL action on the session like SQL Azure.  Instead I created a job that uses the information in dm_exec_sessions and dm_exec_requests to find CPU consumers.  Since SQL Azure currently sets MAX DOP = 1, you can set the same sp_configure option so you don’t have to worry about parallel worker roll-ups in the queries.  If CPU consumption since the last batch start time exceeds 50% over a 5 minute period, I KILL the session.  Such a query is long running, and I need to evaluate whether it can be tuned or batched in a way that prevents constant CPU usage for 5+ minutes.
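A rough sketch of the kind of check that job performs; the 50%/5-minute thresholds are the test values above, not anything SQL Azure publishes:

-- Requests running over 5 minutes whose CPU time exceeds 50% of elapsed time.
SELECT r.session_id,
       r.cpu_time,                                        -- ms of CPU consumed
       DATEDIFF(MILLISECOND, r.start_time, GETDATE()) AS elapsed_ms
FROM sys.dm_exec_requests r
WHERE r.session_id > 50                                   -- skip system sessions
  AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 5
  AND r.cpu_time > 0.5 * DATEDIFF(MILLISECOND, r.start_time, GETDATE());
-- Candidates get a KILL, and the query gets reviewed for tuning/batching.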

Memory

A single query is not allowed to consume all the memory on the instance.  Just like each database does not get its own disk drive, each database/query does not get its own memory bank.  When SQL Azure determines a session is consuming detrimental amounts of memory, the session can be killed.  To simulate this I set the sp_configure 'max server memory' value to 1GB.  Again, this works better with functional tests, but it is a good way to find batches that need to be evaluated.

At a more individual query level you can use resource governor to establish a MAX MEMORY setting.  I went as far as to modify the application name I was connecting with for each connection, for example Conn_1, Conn_2, …, and set up matching resource governor pools and groups so I could impose the necessary limits to help me test for SQL Azure compliance.
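A sketch of that setup for one such connection; the pool, group, and function names are mine, the percentages are arbitrary test values, and the classifier function must live in master:

-- Pool and workload group that cap memory for sessions from Conn_1.
CREATE RESOURCE POOL Pool_Conn1 WITH (MAX_MEMORY_PERCENT = 10);
CREATE WORKLOAD GROUP Grp_Conn1
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)
    USING Pool_Conn1;
GO
-- Classifier routes sessions by application name.
CREATE FUNCTION dbo.fn_AzureTestClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE APP_NAME() WHEN 'Conn_1' THEN N'Grp_Conn1' ELSE N'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_AzureTestClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;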

Request Limit

A request is an active command for the SQL Server (dm_exec_requests).  One of the mechanisms is to make sure requests are progressing properly and doing meaningful work.  If the number of active requests for a single database grows past a reasonable limit, requests may be killed to maintain proper capacity.  The check is based on the number of active requests and how long the transactions at the request level have been active.  The more requests there are, the shorter the transaction duration must be to maintain the same active request level.  For example (let me just pick a random number), if you have fewer than 400 requests the allowable transaction duration might be 10 minutes, but if you exceed 400 active requests the allowable duration may drop to 5 minutes.

This is easy to simulate, and I added some logic to my transaction length simulation job.  As the number of requests grows, I shrink the time a transaction may be active before I issue a KILL on the request.  I picked some low numbers to make sure my application was processing transactions in an efficient way: 5 minutes by default, and for every 50 additional active requests I dropped the target duration by ¼, down to a 30 second minimum cap for the KILL logic.

A Peek Inside

As I mentioned SQL Azure is evolving and adapting to customer needs but here are some of the common protection mechanisms currently employed.

| Check | Brief Description | Error Message/Text |
| --- | --- | --- |
| DOS | Denial Of Service | |
| Lock Count | A single transaction has accumulated a large number of locks | 40550 - The session has been terminated because it has acquired too many locks. Try reading or modifying fewer rows in a single transaction. |
| Blocking System Tasks | A session is blocking a critical system task | 40549 - Session is terminated because you have a long running transaction. Try shortening your transaction. |
| Too Many Requests | A given database appears to have too many pending requests | 40549 - Session is terminated because you have a long running transaction. Try shortening your transaction. |
| TEMPDB Space Usage | A single transaction has accumulated a large amount of TEMPDB space | 40551 - The session has been terminated because of excessive TEMPDB usage. Try modifying your query to reduce temporary table space usage. |
| Log Bytes Usage | A single transaction has accumulated a large amount of log space | 40552 - The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction. |
| Transaction Length | A single transaction has been open for a long time, holding the log truncation point | 40552 - The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction. |
| Memory Usage | A session is consuming a significant amount of memory resources to the detriment of other users | 40553 - The session has been terminated because of excessive memory usage. Try modifying your query to process fewer rows. |
| CPU Usage | A session is consuming a significant amount of CPU resources for an extended period of time to the detriment of other users | Might be errors/messages such as: 40545 - The service is experiencing a problem that is currently under investigation. |
| DB Size | Your database has reached a size quota | 40544 - The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. |
| Failover | The database is likely under some sort of recovery action (failover perhaps) and should be available shortly | 40174 - The partition is in transition and transactions are being terminated. |

Additional Reference: http://social.technet.microsoft.com/wiki/contents/articles/1541.aspx#Troubleshooting

Other Testing Hints and Tips

Cross Database

Since you can’t issue a USE <database> statement or cross-database queries, I changed my testing system to use multiple SQL Server instances.  Each instance supports only a single database, so even if my application uses DB1 and DB2 they are on separate instances and I am able to identify issues in my code.

Idle Connections

I have found that with the additional firewalls, proxies and routers involved, going external to my corporate network may also drop connections for reasons totally unrelated to the SQL Azure backend.  For my testing I added a job that would KILL any session that had not issued a batch in the last 1 minute.  1 minute is a short window, but I found my home ISP has a rule that terminates idle connections at 60 seconds.  This led to any number of design reviews in my code and possible keep-alive activities for critical connections.  (Keep-alive is expensive, so only consider this for critical connections and assume that they can be terminated by other mechanisms anyway.)

Random Kills

Since the protection mechanisms are not under your control (Microsoft updates and changes them to maintain the best systems possible, as can your network provider and even your network administrator), a simple test is to add a random KILL job to your test suite.  This will simulate any number of dropped connection issues and allow you to validate the application's recovery capabilities and handling.

Database Size

When you sign up for SQL Azure the contract involves a database size.  Fix the database max size in your testing environment to simulate the same behavior as SQL Azure.
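A one-line sketch, assuming a test database dbTest with a data file named dbTest_data and a 1GB quota (all placeholders):

-- Cap the data file at the quota size from the SQL Azure contract.
ALTER DATABASE dbTest MODIFY FILE (NAME = dbTest_data, MAXSIZE = 1GB);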

Failover Simulations

SQL Azure keeps 3 replicas of the database to protect your data.  Along with the SQL Server protection mechanisms I have described, there are additional mechanisms that check the service state, machine-level stability issues and more.  These can trigger replica movement/failover actions.  You can simulate this easily by restarting the SQL Server service, or an even easier way is ALTER DATABASE SET RECOVERY SIMPLE WITH ROLLBACK IMMEDIATE followed by ALTER DATABASE SET RECOVERY FULL WITH ROLLBACK IMMEDIATE.  It is a great way to test your application stability.
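Spelled out against an assumed test database dbTest (ROLLBACK IMMEDIATE disconnects active sessions, which is the point of the simulation):

-- Kick every session off the database, the way a failover would.
ALTER DATABASE dbTest SET RECOVERY SIMPLE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE dbTest SET RECOVERY FULL WITH ROLLBACK IMMEDIATE;
-- Note: switching out of FULL breaks the log backup chain; take a
-- full or differential backup afterward on any non-throwaway database.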

Max Workspace Memory

This is another place for memory consumption.  Using max server memory and the memory settings of resource governor is a good way to find queries that may need attention.  For example, if you query sys.dm_exec_query_memory_grants and a grant is greater than ~16384KB and has been running for more than, say, 20 seconds, and you have another worker that has waited for a memory grant for more than 20 seconds, the memory-consuming query may be terminated.  By setting max server memory or the workload properties you can simulate the conditions.
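A sketch of both halves of that check, using the example thresholds above:

-- Large, long-held grants alongside workers stuck waiting for memory.
SELECT session_id, granted_memory_kb, requested_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE (granted_memory_kb > 16384
       AND DATEDIFF(SECOND, grant_time, GETDATE()) > 20)
   OR (granted_memory_kb IS NULL AND wait_time_ms > 20000);  -- still waiting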

Bob Dorr - Principal SQL Server Escalation Engineer
Assistance Provided by Keith Elmore - Principal SQL Server Escalation Engineer


Tracking Down Missing Indexes in SQL Azure


One of the challenges of SQL Azure is that not all of the TSQL that you are used to using is supported yet.  Since the underlying engine is plain ole’ SQL Server, the engine can understand the TSQL, but we just block its use because we haven’t yet made it work in the multi-tenant, multi-server environment that is SQL Azure. 

One of the classic missing index scripts can be seen in Bart Duncan’s classic post.  For simplicity, I have reposted the TSQL below:

SELECT 
  migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure, 
  'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle) 
  + '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
  + ' ON ' + mid.statement 
  + ' (' + ISNULL (mid.equality_columns,'') 
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END 
    + ISNULL (mid.inequality_columns, '')
  + ')' 
  + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement, 
  migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

Unfortunately, if you try to use this TSQL, you immediately run into the problem that none of these missing index DMVs are supported in SQL Azure.  So much for the easy way…

Since the DMVs are just ongoing collections of information that you can collect manually from dm_exec_query_stats, I decided to try to build this up manually.  This led me to generate the following query:

 
SELECT top (50) cast(replace(cast(qp.query_plan as nvarchar(max)),'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"','') as xml),
qp.query_plan.value('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan"; (/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes/MissingIndexGroup/@Impact)[1]' , 'decimal(18,4)') * execution_count AS TotalImpact
FROM sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(sql_handle) st cross apply sys.dm_exec_query_plan(plan_handle) qp 
WHERE qp.query_plan.exist('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes/MissingIndexGroup/MissingIndex[@Database!="m"]') = 1
ORDER BY TotalImpact DESC

This generates a list of ShowPlanXMLs ordered by “execution count * missing index group impact”.  Now that we have a list of ShowPlans ordered by overall impact, we need to parse the ShowPlanXML to pull out the missing indexes.  For those unfamiliar with the missing index information in ShowPlanXML data, here is an example:

<MissingIndexes><MissingIndexGroup Impact="98.6314"><MissingIndex Database="[BugCheck]" Schema="[dbo]" Table="[Watchlists]"><ColumnGroup Usage="EQUALITY"><Column Name="[ID]" ColumnId="1" /></ColumnGroup></MissingIndex></MissingIndexGroup></MissingIndexes>

As you can see, it contains all the information necessary to define the indexes the engine thinks are missing.

Now, for each ShowPlanXML row, we need to use XQuery to shred the MissingIndexes information into its key information.  In a classic case of copying good work already done instead of spending time doing it myself, I found that the Performance Dashboard Reports already do this shredding in one of their reports, so I could copy it:

SELECT cast(index_node.query('concat(string((./@Database)[1]),".",string((./@Schema)[1]),".",string((./@Table)[1]))') as nvarchar(100)) as target_object_name
,replace(convert(nvarchar(max), index_node.query('for $colgroup in ./ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "EQUALITY" return string($col/@Name)')), '] [', '],[') as equality_columns
,replace(convert(nvarchar(max), index_node.query('for $colgroup in ./ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "INEQUALITY" return string($col/@Name)')), '] [', '],[') as inequality_columns
,replace(convert(nvarchar(max), index_node.query('for $colgroup in .//ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "INCLUDE"    return string($col/@Name)')), '] [', '],[') as included_columns 
from (select convert(xml, @query_plan) as xml_showplan) as t outer apply xml_showplan.nodes ('//MissingIndexes/MissingIndexGroup/MissingIndex') as missing_indexes(index_node)

By combining the above two queries with a cursor, I can stick each shredded missing index into a temporary table.  Then, I can use the equality, inequality, and included columns from the temporary table to generate CREATE INDEX statements as follows:

select distinct 'Create NonClustered Index IX_' + substring(replace(replace(target_object_name,'[',''),']',''), 0, charindex('.',replace(replace(target_object_name,'[',''),']',''))) +' On ' + target_object_name + 
' (' + IsNull(equality_columns,'') + 
CASE WHEN equality_columns <> '' And inequality_columns <> '' THEN ',' ELSE '' END + IsNull(inequality_columns, '') + ')' + -- comma only when both column lists are present
CASE WHEN included_columns='' THEN
';'
ELSE
' Include (' + included_columns + ');'
END
from #results

DISCLAIMER:  As with all automated index suggestion scripts, you need to take a look at the CREATE INDEX statements suggested and decide if they make sense for you before you run out and apply them to your production instance!!

One important thing to point out is that even though I was designing this script for SQL Azure, it works just fine against an on-premises instance of SQL Server.  Since SQL Azure supports a subset of the overall SQL Server functionality, you will almost always find that a solution for SQL Azure works just fine against SQL Server.  Lastly, this functionality has been added to the CSS SQL Azure Diagnostics Tool (CSAD) so that you don’t have to worry about running this manually if you don’t want to.

For completeness, here is the TSQL statement in its entirety:

create table #results (target_object_name nvarchar(100), equality_columns nvarchar(100), inequality_columns nvarchar(100), included_columns nvarchar(100))
 
declare @query_plan as xml
declare @totalimpact as float
 
declare querycursor CURSOR FAST_FORWARD FOR
SELECT top (50) cast(replace(cast(qp.query_plan as nvarchar(max)),'xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan"','') as xml),
qp.query_plan.value('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan"; (/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes/MissingIndexGroup/@Impact)[1]' , 'decimal(18,4)') * execution_count AS TotalImpact
FROM sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(sql_handle) st cross apply sys.dm_exec_query_plan(plan_handle) qp 
WHERE qp.query_plan.exist('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes/MissingIndexGroup/MissingIndex[@Database!="m"]') = 1
ORDER BY TotalImpact DESC
 
OPEN querycursor
FETCH NEXT FROM querycursor
INTO @query_plan, @totalimpact  --need to remove the namespace
 
WHILE @@FETCH_STATUS=0
BEGIN
 
    insert into #results (target_object_name, equality_columns, inequality_columns, included_columns)
    SELECT cast(index_node.query('concat(string((./@Database)[1]),".",string((./@Schema)[1]),".",string((./@Table)[1]))') as nvarchar(100)) as target_object_name
    ,replace(convert(nvarchar(max), index_node.query('for $colgroup in ./ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "EQUALITY" return string($col/@Name)')), '] [', '],[') as equality_columns
    ,replace(convert(nvarchar(max), index_node.query('for $colgroup in ./ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "INEQUALITY" return string($col/@Name)')), '] [', '],[') as inequality_columns
    ,replace(convert(nvarchar(max), index_node.query('for $colgroup in .//ColumnGroup,$col in $colgroup/Column where $colgroup/@Usage = "INCLUDE"    return string($col/@Name)')), '] [', '],[') as included_columns 
    from (select convert(xml, @query_plan) as xml_showplan) as t outer apply xml_showplan.nodes ('//MissingIndexes/MissingIndexGroup/MissingIndex') as missing_indexes(index_node)
    
    FETCH NEXT FROM querycursor
    INTO @query_plan, @totalimpact
    
END
 
CLOSE querycursor
DEALLOCATE querycursor
 
select distinct 'Create NonClustered Index IX_' + substring(replace(replace(target_object_name,'[',''),']',''), 0, charindex('.',replace(replace(target_object_name,'[',''),']',''))) +' On ' + target_object_name + 
' (' + IsNull(equality_columns,'') + 
CASE WHEN equality_columns <> '' And inequality_columns <> '' THEN ',' ELSE '' END + IsNull(inequality_columns, '') + ')' + -- comma only when both column lists are present
CASE WHEN included_columns='' THEN
';'
ELSE
' Include (' + included_columns + ');'
END
from #results
 
drop table #results

How It Works: FileStream (RsFx) Garbage Collection


In a previous blog I outlined how file stream transactions work and retain the before and after images of the file to allow various forms of recovery.  Reading that blog should be considered a prerequisite: http://blogs.msdn.com/b/psssql/archive/2008/01/15/how-it-works-file-stream-the-before-and-after-image-of-a-file.aspx

I have had several questions lately about how the files get cleaned up as it is possible to build up a large set of files in the file stream container (directory) because of the transactional activity.  

File streams use a garbage collection process to clean up files that are no longer needed.  A system task wakes up periodically (~10 seconds or so) and checks for file stream garbage collection needs.

If you execute the following example it shows the behavior.  The table has a single row, but the update runs 100 times and builds up a set of files.

create database dbFS
on primary(name='FSData', filename = 'c:\temp\FS\FSData.mdf'),
filegroup RSFX CONTAINS FILESTREAM DEFAULT(name='Rsfx', filename='c:\temp\FS\RsFx')
log on(name='FSLog', filename='c:\temp\fs\FSData.ldf')
go
set nocount on
go
ALTER DATABASE dbFS set recovery full
go
backup database dbFS to disk = 'c:\temp\DelMe.bak' with init
go
use dbFS
go
CREATE TABLE RsFx
(
      [Id] [uniqueidentifier] ROWGUIDCOL NOT NULL UNIQUE,
      [Text] VARBINARY(MAX) FILESTREAM NULL
)
GO
insert into RsFx values(NEWID(), 0x10101010)
go
update RsFx set Text = 0x20202020
go 100

The file names are the LSN values associated with the transaction log records.   Since I have not backed up the log the files remain.

[Screenshot: the file stream container directory listing files named by LSN values]

In the file stream container directory tree is the $FSLOG directory, with a set of files as well.  The file names have embedded information and are considered part of the database log chain.  Part of the file name is an LSN that matches the LSN file name of the actual file storage.  The file stream garbage collector uses these files to help determine the cleanup requirements.

DANGER:  Deleting a file directly from the file stream container is considered database corruption and dbcc checkdb will report corruption errors.  These files are linked to the database in a transactional way and the deletion of a file is the same as a damaged page within the database.

File stream GC runs on a periodic basis.  It checks several requirements before deleting the physical file.

  • What is the truncation point in the log?  File stream GC can only remove files in the truncated portion of the log.  Beyond the truncation point the file could still be needed for backup, rollback, replication, or other data recovery situations.
  • Oldest open transaction - this helps determine the log truncation point
  • Last checkpoint - this helps determine the log truncation point
  • Looks for valid recovery capabilities - backups have been taken, etc.  The GC activity always ensures the file remains until it is clearly safe to remove it.

The easiest way to kick start the File Stream GC activity is to backup the log and issue a checkpoint in the database.  (Making sure you don't have any long running transactions.)

backup log dbFS to disk = 'c:\temp\DelMe.bak' with init
go
checkpoint
go

Shortly after completing this action, file stream garbage collection will clean up some files.  This may not be all the files.  It honors the log truncation point (see the log_reuse_wait_desc column in sys.databases).  It also may limit the number of files collected during one iteration of the system task, so you may see files collected, a slight delay, and then more files collected.

Why don't all the files get collected?  Even after this you may see a few files remaining but only a single row in the database.  You issue the backup log and checkpoint several times, but the files are not getting deleted.  This is because of how SQL Server truncates the log.  The truncation point can also account for things like the VLFs.  Read http://support.microsoft.com/kb/907511 for more details on how the log file activity could impact the file stream GC behavior.
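A quick way to check what is currently holding truncation (and thus file stream GC) for the example database:

-- What is preventing log truncation right now?
select name, log_reuse_wait_desc
from sys.databases
where name = 'dbFS'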

Using the Dedicated Admin Connection (DAC - admin:) you can see the file stream tombstone tables.  These tables hold the GC actions, LSN values and other details used by GC when it is executed.

Shown below is a set of queries against the tombstone in dbFS.  I order the entries based on LSN sequence.   Looking at the first row you can compare this to the log truncation point to help determine the GC behaviors.

use dbFS
go
select * from sys.objects where name like '%tombstone%'
go
select * from sys.filestream_tombstone_2073058421 order by oplsn_fseqno asc, oplsn_bOffset asc, oplsn_slotid asc
go

[Screenshot: query results showing the filestream tombstone table entries in LSN order]

Notice that you can see the rowset GUID and column GUID as well as the file stream value name (file), so you can determine versioned copies of the data.  The GC accesses this table in sorted LSN order and cleans up the files as appropriate.

Bob Dorr - Principal SQL Server Escalation Engineer

Inappropriate usage of high isolation level isn’t just about blocking when it comes to performance


Normally when you see a high isolation level such as SERIALIZABLE used, your natural instinct tells you that it may cause blocking or deadlocking.  But you may be surprised to know that it can actually cause high CPU as well.  When you use the SERIALIZABLE or REPEATABLE READ isolation levels, even shared locks can’t be released until the transaction is committed or rolled back.  This can cause a large number of locks to be held at any given time.  Granted, inappropriately managing transactions in read committed isolation level can also cause a large number of locks to be held, but that generally accumulates locks other than shared locks.

A large number of locks at the server level can cause a query with the exact same plan to consume more CPU than normal.  The reason is that in order for SQL Server to grant a lock request, it has to traverse internal structures to figure out who owns what.  When the number of locks becomes large, it takes time and CPU to figure out who owns what, even when the request has no conflict and eventually results in a grant.

Below is a chart of data from an actual customer.  The green line is lock memory (KB) and the red line is total CPU (%).  That’s right!  Lock memory reached about 9.9GB at one point.  During the period of high lock memory, you see a sustained period of 100% CPU.  There were some periodic dips in CPU; those were actual times of blocking.  But during the periods of 100% CPU, we noticed a particular query would consume 30 seconds of CPU per execution, whereas it normally consumed only 0.3 seconds of CPU with the exact same query plan.  This is because of the high number of locks.  As more requests piled in, the server became totally non-responsive.

Upon further investigation, we discovered that the procedure used the SERIALIZABLE level and there were insufficient indexes.  As a result, every execution would hold a quarter million locks until the end of the procedure execution.  The system would work fine with a few concurrent executions, but as more concurrent requests came in, the increase in CPU consumption was no longer linear.  It was actually exponential, because more and more locks were being held.  Long term, this application needs to evaluate if SERIALIZABLE is truly needed.  Luckily, we were able to find a good index that dramatically reduced the locks required.

 

[Chart: lock memory in KB (green) against total CPU % (red); CPU stays pegged at 100% while lock memory climbs toward 9.9GB]

 

Symptoms of this type of issue

  1. Blocking/deadlocking
  2. High CPU
  3. Out of memory errors
  4. A query that normally takes a small amount of CPU now takes much more CPU even though the plan remains the same.
  5. Non-responsive server

Identification of this type of issue

There are several ways you can identify this issue.

  1. Use perfmon to look at lock memory under SQL Server:Memory Manager.  If you see this value reaching 1 GB, you should really pay attention.  (You can also read the same counter from TSQL, as sketched after this list.)
  2. exec sp_lock will enumerate all the locks for the server
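A sketch of reading that counter with a query instead of perfmon; the object_name column carries the instance name, so matching on counter_name alone is the simplest approach:

-- Current lock memory, from the same counter perfmon reads.
select cntr_value as lock_memory_kb
from sys.dm_os_performance_counters
where counter_name = 'Lock Memory (KB)'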

Solution

  1. Evaluate if you truly need a high isolation level.
  2. Evaluate individual queries that hold a large number of locks.  You may be able to tune the query by adding indexes to reduce the number of locks required.
  3. Also note that incorrectly managing transactions that accumulate a large number of locks (even with read committed isolation level, not just SERIALIZABLE or REPEATABLE READ) can cause the exact same behavior.

Jack Li | Senior Escalation Engineer |Microsoft SQL Server Support

SQL Server 2008 R2 does not start when applying certain hotfix updates


We have noticed that when you install certain hotfixes (this includes Cumulative Update 1 and certain versions of security update MS11-049, etc.) on a SQL Server 2008 R2 RTM instance on which a Utility Control Point is configured, the installer fails to apply the upgrade scripts and SQL Server does not start. You will see the following messages in the SQL Server error log:

2011-06-17 14:35:11.94 spid7s Executing [sysutility_mdw].sysutility_ucp_core.sp_initialize_mdw_internal
2011-06-17 14:35:13.08 spid7s SQL Server blocked access to procedure 'sys.xp_qv' of component 'Agent XPs' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Agent XPs' by using sp_configure. For more information about enabling 'Agent XPs', see "Surface Area Configuration" in SQL Server Books Online.
2011-06-17 14:35:13.08 spid7s Error: 15281, Severity: 16, State: 1.
2011-06-17 14:35:13.08 spid7s SQL Server blocked access to procedure 'sys.xp_qv' of component 'Agent XPs' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Agent XPs' by using sp_configure. For more information about enabling 'Agent XPs', see "Surface Area Configuration" in SQL Server Books Online.
2011-06-17 14:35:13.15 spid7s Error: 912, Severity: 21, State: 2.
2011-06-17 14:35:13.15 spid7s Script level upgrade for database 'master' failed because upgrade step 'sqlagent100_msdb_upgrade.sql' encountered error 15281, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2011-06-17 14:35:13.27 spid7s Error: 3417, Severity: 21, State: 3.
2011-06-17 14:35:13.27 spid7s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

This is a known issue that has been documented in KB: http://support.microsoft.com/kb/2163980 and fixed in Cumulative Update 2 for SQL Server 2008 R2 RTM .

Here are actions you can take to prevent this from happening when applying a hotfix or corrective actions to start SQL Server back if in a failed state.

ALREADY IN FAILED STATE:

If the SQL Server service is already in a failed state (for example: an attempt to install the security patch or CU1 was made), follow the steps mentioned in the Workaround section of the KB 2163980 to recover from the problem.

Once you implement the workaround and restart SQL Server the upgrade will complete successfully.

PATCH NOT YET APPLIED (STEPS TO PREVENT FAILURE FOR MANUAL UPDATES):

If the patch has not yet been applied and you want to prevent this issue from happening, you can follow these steps:

1. Verify the version of your SQL Server 2008 R2 instance. Also, to check if the instance is configured as a UCP, run the following query:

declare @isUCP bit
select @isUCP=msdb.dbo.fn_sysutility_get_is_instance_ucp()
select (case(@isUCP) when 0 then 'This is not a UCP instance.' else 'This is a UCP instance.' end)

If SQL Server version is < 10.50.1720.0, and if this instance of SQL Server is a UCP instance, then you will be impacted by this issue. Proceed further with rest of the steps below.

NOTE: For MS11-049, which is the recently released security update for SQL Server, this issue is specific to KB 2494088 version of the fix for SQL Server 2008 R2 which is applicable only for server instances on RTM version of the product. If you already have applied a Cumulative Update for the instance, then you will need to apply KB2494086 of the security fix. Since KB2494086 includes the fix from KB 2163980, installation of that KB 2494086 version will not run into this issue.

2. Stop SQL Server Agent service for the associated instance. If it is a clustered instance, take the SQL Server Agent resource offline.

This is a critical step - if the SQL Server Agent service is running when setup is launched, it will be stopped during setup, which in turn disables the Agent XPs sp_configure option, resulting in the above failures.

3. Log on to SQL Server and make sure the sp_configure option 'Agent XPs' is enabled (see the sketch after these steps).

4. Run the setup to install the security fix (MS11-049) or Cumulative update 1.

5. Start the SQL Server Agent service once the patch is installed successfully.

6. Repeat steps 1-5 for every SQL Server 2008 R2 instance.
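For step 3, a minimal sketch of enabling the option ('Agent XPs' is an advanced option):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs', 1;   -- keep the Agent extended procedures enabled
RECONFIGURE;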

Ajay Jagannathan | Microsoft SQL Server Escalation Services

 

 

Trace shows the incorrect Session Login Name


This is more of an FYI blog post, but I have read several blog and forum posts on this subject and decided to dig into the behavior, which revealed a trace bug.

For the vast majority of events the Session Login Name represents the originating session credentials, whereas the Login Name represents the current session credentials, and everything works correctly.  The login name could be different, for instance, if there was an EXECUTE AS in progress.

What I found was that the Existing Connection event incorrectly produces the session login name of the user who is starting the trace instead of the user associated with the login information of the existing connection.

I have filed a defect to get this addressed in future releases of the SQL Server.

Bob Dorr - Principal SQL Server Escalation Engineer

How It Works: Return codes from SQLIOSim


I have been asked how to automate SQLIOSim on several occasions.  SQLIOSim is a utility to test SQL Server I/O integrity (not performance) patterns against a system without needing to install SQL Server on the system.  It ships with SQL Server 2008 and SQL Server 2008 R2 and is located in the BINN directory.

There is a GUI version of SQLIOSim.exe and a command line version SQLIOSim.com.

You can invoke the command line version (.com) from a batch file and check the return value.

  • 0 = No Errors or Warnings encountered and all cycles completed
  • NOT 0 = Error, Warning or CTRL+C, cycles were not completed

Be sure to set the StopOnError=TRUE in the configuration file you are using.  This way when an error is encountered the testing cycles are interrupted and control is returned to the command prompt where you can check the ERRORLEVEL.
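As a minimal sketch of that automation (a Windows batch file, since SQLIOSim is driven from the command line; the configuration file name is an assumption):

@echo off
rem Run the command line version; StopOnError=TRUE in the configuration
rem file returns control here as soon as a problem is encountered.
sqliosim.com -cfg AlwaysOn.SQLIOSim.cfg.ini
if errorlevel 1 (
    echo SQLIOSim reported an error or warning, or was interrupted.
) else (
    echo SQLIOSim completed all cycles cleanly.
)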

You should take a look at the SQL Server High Availability website for more details, especially the "SQL Server I/O Reliability Program" that outlines a common SQLIOSim stress test.

http://www.microsoft.com/sqlserver/en/us/solutions-technologies/mission-critical-operations/high-availability.aspx 

http://download.microsoft.com/download/6/E/8/6E882A06-B71B-4642-9EB4-D1EA0D6223C8/SQL%20Server%20IO%20Reliability%20Program%20Requirements%20Document.docx  (AlwaysOn.SQLIOSim.cfg.ini)

Bob Dorr - Principal SQL Server Escalation Engineer

Stored procedure recompile caused by alter table on temp table


Lately we got a call from a customer who reported that a particular statement involving a temp table always got recompiled and caused performance problems.

We have a KB article http://support.microsoft.com/kb/243586  which documents various scenarios that will cause recompile involving temp tables. But none of the conditions seemed to match.  After digging deeper, we were able to track down the problem.

Before we discuss the root cause, let’s talk about how you can track down stored procedure recompiles.  Starting with SQL Server 2005, recompiles occur only at the statement level.  In other words, one statement recompiling won’t cause the entire stored procedure to recompile.

You can use profiler to watch recompiles.  When you choose events, make sure you choose the “SQL:StmtRecompile” event (see below).

[Screenshot: Profiler event selection showing the SQL:StmtRecompile event]

 

When you run your stored procedure, you will see the “SQL:StmtRecompile” event as shown below if that statement gets recompiled.

 

[Screenshot: Profiler trace output showing SQL:StmtRecompile events]

 

Now that we know how to monitor recompiles, let’s talk about what’s expected.  If you have a query involving a temp table inside a stored procedure, you will always see SQL:StmtRecompile for that query the first time you run the procedure.  This is a feature called deferred compile.  When the procedure is compiled, the query involving the temp table is not even compiled; we wait until the first time the query is executed.  So you will see the recompile at least once for a query involving a temp table.  What is not expected is seeing the same query recompiled again and again.  That is what this customer complained about.

It turned out that this customer had an alter table statement on a temp table.  With temp tables, if you have one alter statement on any of the temp tables, statements involving all of the temp tables will recompile.  The stored procedure below (proc_test) has two temp tables (#t1 and #t2).  Though #t2 has no alter table against it, the insert statement involving #t2 will also recompile in addition to the insert on #t1.  You can use profiler to monitor this yourself.  Avoid alter on temp tables.


use tempdb
go
create proc proc_test
as
set nocount on
create table #t1 (c1 int)
create table #t2 (c1 int)
insert into #t1 values (1)
insert into #t2 values (1) --this will always recompile even the alter statement is on a different temp table
alter table #t1 add c2 as c1


How compressed is your backup?…


While working recently with a customer, it was brought to my attention that while SQL Server has a great feature to compress backups (introduced in SQL Server 2008), the space consumed by the backup before it is complete may not be what you expect.  To be more precise, when you back up a database using compression, the space used by the backup file may actually appear to be larger than its “final size”. To be honest, I had not spent much time looking at our compressed backup feature in great detail.

While researching this, I found some people have already discovered this behavior:

http://adventuresinsql.com/2010/02/why-do-my-sql-2008-compressedbackups-shrink/

Also, as it turns out, we had written a KB article explaining this behavior:

http://support.microsoft.com/kb/2001026

You can see from reading these links that when backing up a database using compression, the engine pre-allocates a file that is a percentage of the estimated final size of the database. Then at the end of the backup, if the final size needed is less, we shrink the file to the final needed size. So if you monitor the space used by the backup operation, it is possible to observe that the initial file size created is larger than when the backup completes. For example, you may back up a 22Gb database with compression. You may see the file size of the backup show up somewhere past 7Gb while the backup is running but end up only being 4Gb when it completes. As the KB article explains, we chose this method to avoid the performance penalty of having to grow the file as needed to reach its final size.

The customer I was working with said they didn’t mind a small performance penalty (a possible longer duration for the backup operation)  so they could save on space and only use up the actual size required for the compressed backup.

Thus comes into play trace flag 3042. As you can see from reading the above blog post, trace flag 3042 bypasses the “pre-allocation algorithm” and grows the file as needed. Up until now, this trace flag was officially undocumented and unsupported. But as you can see in the KB article, it is now documented. This was a change we just made in the last few days. Behind the scenes, I was able to work with the SQL Product team (thank you Kevin Farlee and the test team for the engine) to have them run the necessary functional tests to ensure the use of the flag would be supported.
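A quick sketch of using the flag for a single backup (the database name and path are placeholders); it can also be enabled instance-wide with the -T3042 startup parameter:

DBCC TRACEON(3042, -1);   -- grow the backup file as needed, no pre-allocation
BACKUP DATABASE dbTest TO DISK = 'c:\temp\dbTest.bak' WITH COMPRESSION, INIT;
DBCC TRACEOFF(3042, -1);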

I don’t have any numbers on the possible overall performance hit for using this trace flag. The customer I have worked with has said the overall backup time was slightly slower but not impactful and no significant increase in CPU was observed when using this trace flag.

Consider the update to this article as the official support for the use of the trace flag. This flag is supported in SQL Server 2008, SQL Server 2008 R2, and Denali.

Bob Ward
Microsoft

SQL Server 2008/2008 R2 on Newer Machines with More Than 8 CPUs Presented per NUMA Node May Need Trace Flag 8048


Applies To:  SQL Server 2008, 2008 R2 and Denali builds

The SQL Server developer can elect to partition memory allocations at different levels based on what the memory is used for.   The developer may choose a global, CPU, node, or even worker partitioning scheme.   Several of the allocation activities within SQL Server use the CMemPartitioned allocator.  This partitions the memory by CPU or NUMA node to increase concurrency and performance.

You can picture CMemPartitioned like a standard heap (it is not a HeapCreate), but the concept is the same.   When you create a heap you can specify whether you want synchronized access, default size and other attributes.   When the SQL Server developer creates a memory object they indicate whether they want things like thread-safe access, the partitioning scheme and other options.

The developer creates the object so that when a new allocation occurs the behavior is upheld.  On the left is a request from a worker against a NODE based memory object.  This will use a synchronization object (usually a CMEMTHREAD or SOS_SUSPEND_QUEUE type) at the NODE level to allocate memory local to the worker's assigned NUMA node.   On the right is an allocation against a CPU based memory object.  This will use a synchronization object at the CPU level to allocate memory local to the worker's CPU.

In most cases the CPU based design reduces synchronization collisions the most because of the way SQL OS handles logical scheduling.  Preemptive and background tasks make collisions possible but CPU level reduces the frequency greatly.  However, going to CPU based partitioning means more overhead to maintain individual CPU access paths and associated memory lists.  

The NODE based scheme reduces the overhead to the number of nodes but can slightly increase the collision possibilities and may impact ultimate performance results for very specific scenarios.  I want to caution you that the scenarios encountered by Microsoft CSS have been limited to very specific scopes and query patterns.

[Diagram: a worker allocating from a NODE partitioned memory object (left) versus a CPU partitioned memory object (right)]

 

Newer hardware with multi-core CPUs can present more than 8 CPUs within a single NUMA node.  Microsoft has observed that when you approach and exceed 8 CPUs per node, NODE based partitioning may not scale as well for specific query patterns.   However, using trace flag 8048 (startup parameter only, requiring a restart of the SQL Server process), all NODE based partitioning is upgraded to CPU based partitioning.   Remember this requires more memory overhead but can provide performance increases on these systems.

HOW DO I KNOW IF I NEED THE TRACE FLAG?

The issue is commonly identified by looking at the DMVs dm_os_wait_stats and dm_os_spinlock_stats for the types involved (CMEMTHREAD and SOS_SUSPEND_QUEUE).   Microsoft CSS usually sees the spins jump into the trillions and the waits become a hot spot.
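A sketch of the two checks; note that dm_os_spinlock_stats is undocumented on these builds, so treat its availability as an assumption about your version:

-- Waits accumulated on the CMEMTHREAD synchronization type.
select wait_type, waiting_tasks_count, wait_time_ms
from sys.dm_os_wait_stats
where wait_type = 'CMEMTHREAD'

-- Spin activity; look for CMEMTHREAD and SOS_SUSPEND_QUEUE climbing
-- into the trillions on an affected system.
select name, collisions, spins, backoffs
from sys.dm_os_spinlock_stats
where name in ('CMEMTHREAD', 'SOS_SUSPEND_QUEUE')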

Bob Dorr - Principal SQL Server Escalation Engineer

The DMV sys.dm_os_memory_clerks May Show What Appears To Be Duplicate Entries



In SQL Server 2008, some of the memory based DMVs show memory node id 0 when you might not expect them to.   For example, you could see the following on a single-node system.

SQL 2008

memory_clerk_address  type                      name    memory_node_id
0x0000000003EF6828    MEMORYCLERK_SQLBUFFERPOOL Default 0
0x0000000005040828    MEMORYCLERK_SQLBUFFERPOOL Default 0

SQL 2008 R2

memory_clerk_address type                      name    memory_node_id
0x0000000003EF6828   MEMORYCLERK_SQLBUFFERPOOL Default 0
0x0000000005040828   MEMORYCLERK_SQLBUFFERPOOL Default 64

Notice that the clerk addresses are different, so they really do belong to different memory nodes.    The difference is the exposure of the logical memory node id for the DAC node (64).

In order to support the Dedicated Admin Connection, it is given a logical node with a bit of dedicated memory.   The node is not associated with any physical NUMA node or CPU, and it is given the memory_node_id of 64.

To understand this a bit better, you have to understand how SQL considers memory nodes.   At the SQLOS level, a memory node is represented by the physical layout of CPUs to memory as presented by the operating system.   When looking at SQLOS level DMVs, the memory nodes often represent this physical alignment.  Since DAC is a logical implementation, it is just assigned a node id by SQLOS.   In SQL 2008, some of the DMVs did not account for DAC and would output the physical memory node association, making it look like you have duplicate clerks on the same node.

There may also be ways you can see what appear to be duplicates in SQL 2008 R2 if you are using SOFT NUMA.   Under SOFT NUMA you can split a physical NUMA node into logical NUMA nodes at the scheduling level for SQL Server.   However, at the SQLOS memory node level it still works with the physical memory layout presented by the operating system.  This allows SQLOS to provide facilities such as node-local memory access to one or more logical nodes that may have been configured with SOFT NUMA.
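The outputs above come from a query along these lines; a sketch for reproducing the comparison on your own instance:

-- Buffer pool clerks and the memory node they report; on 2008 R2 the
-- DAC clerk shows the logical node id 64 rather than a physical node.
select memory_clerk_address, type, name, memory_node_id
from sys.dm_os_memory_clerks
where type = 'MEMORYCLERK_SQLBUFFERPOOL'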

Bob Dorr - Principal SQL Server Escalation Engineer

The SQL PASS Summit comes again to Seattle and CSS will be there…


For the 9th year, the Microsoft CSS team will speak and meet customers at the SQL PASS Summit, being held  this year in Seattle, Washington from October 11th through the 14th.

As in past summits, CSS will have a presence during pre-conference seminars, the main conference, and our infamous SQL Clinic.

I’ll be posting more details on our involvement as we progress closer to the conference, but for now let me give you a quick summary of what CSS will bring to the conference.

Pre-Conference Seminar

Adam Saxton has become one of the best speakers from CSS to come to recent PASS Summits. We are sending him back again this year to spend a day covering the all-important but misunderstood topic of Kerberos.

Kerberos is something that anyone who must deploy SQL Server wishes they didn’t have to worry about. But unfortunately, it impacts everyone and is not a simple subject.  Preventing and troubleshooting Kerberos issues is a very difficult topic even for our own CSS engineers. Adam will not only walk you through the internals of how it works but give you practical step-by-step advice on how to configure Kerberos and how to diagnose problems should they occur. Kerberos affects all aspects of SQL Server including the engine, Reporting Services, and Analysis Services. Adam covers it all, including integration with SharePoint 2010 and PowerPivot.

Get the details about Adam’s session at http://www.sqlpass.org/summit/2011/Speakers/CallForSpeakers/SessionDetail.aspx?sid=1997. If you haven’t signed up for a pre-conference session yet, take a serious look at Adam’s talk.

Main Conference Talks

CSS has always presented in the main conference on a variety of topics but always focused on two types of talks: 1) Deep internals 2) Troubleshooting.

This year will be no different as we will provide talks in the areas of:

  • Performance showing off the upgraded Performance Dashboard reports for 2011
  • SQL Azure focusing on important troubleshooting tips and steps you can take to avoid problems building apps for SQL Azure
  • Performance and engine tuning showing you “all the magic knobs” you may not know about
  • A deep dive into the internals of tempdb. This will be one of the new “half-day” talks and I fully expect paramedics to be on-call from the brain overload that is likely to take place

SQL Clinic

For the first time last year we partnered with the SQL CAT team for the SQL Server Clinic. It was an amazing success. If you need face to face time with architects or problem solvers, the clinic is for you. This is something you can take advantage of for no extra charge as part of attending the summit. We love the direct customer interaction, so don’t miss a chance to come by and visit. More details on the exact room location at the summit will come later as we get closer to the conference.

So many years ago Ken Henderson encouraged me to get involved with the SQL community through PASS, and it is amazing how much we now contribute. It is a great opportunity for our team to meet customers face to face and to… “show off a bit” on our skills and knowledge about the SQL Server product.

Stay tuned for more details as we move towards mid October.

Bob Ward
Microsoft

Inside the SQL Server Clinic…


In my last post, I reviewed the Microsoft CSS involvement at the upcoming 2011 SQL Server PASS Summit. One big part as I’ve mentioned is the SQL Server Clinic. I thought you might find it interesting to learn more about exactly what the clinic is and how you can make the best use of it.

Starting on Wednesday, October 12th 2011 (which is the first day of the main conference), we will open up the doors of room 611 (I’m fairly confident we will use this room again) of the Seattle Convention Center. Our hours are “after keynote” each day (we want anyone working the clinic to have the chance to attend the keynote) until about 5-6pm. At any point in time, we could have up to 10-15 Microsoft SQL CAT and CSS engineers in the room waiting to answer your questions or troubleshoot your problems. There is no reservation required. Just show up and walk in.

When talking to our CSS engineers, there are some things you can bring that can help make your experience better. If you have a specific problem to solve, it helps to bring the details: ERRORLOG files, error messages, specific query syntax, or details of your environment. However, if you simply want to bring us your laptop with a remote desktop session back to your office, we will take that too. I can’t tell you how many times I’ve debugged a problem live on a customer’s laptop using remote desktop in the clinic.

Will we have all the answers? No. But most of the engineers in the clinic are willing to do some research later for things we can’t answer quickly. This is no substitute for opening a case with Microsoft Technical Support. But in almost every situation I’ve seen in the clinic, we usually can provide some direction or tip that gets you closer to solving your issue.

If there is some issue or problem you have been having with SQL Server and you are struggling for an answer, give us a chance in the Clinic. We love the challenge of new problems we have not seen and also the satisfaction of helping someone face to face resolve a tough or difficult issue involving SQL Server.

We hope to see you at the upcoming PASS Summit.

Bob Ward
Microsoft

RML Questions


 

The following questions have surfaced several times recently so I decided to post the answers to assist others.

 

String is missing proper closing quote near (Char Pos: 0xC1 Byte Pos: 0x182)

This is not a utility bug.  It is a command found in the trace that was malformed.  For example:   select * from tbl where name = ''Bob'  <---- This is missing the trailing quote, and the parser logic in RML is pointing it out to you.  We keep the integrity of the missing quote so we can actually replay the syntax error.

This is usually an application issue and a place to be very careful about T-SQL injection.

Let's say that the Bob in my example comes from a search box in the application.  If I made a mistake and typed 'Bob in the search box, the application might build the string above and return an error about the missing quote.  Now I know the application is submitting this as dynamic T-SQL and not bound parameters.  This means I can enter the following in the search criteria box:

Bob';  drop database production; select '

The application would then build the following command:  select * from tbl where name = 'Bob';  drop database production; select ''

I hit the OK button and your production database might disappear.  These are the places hackers love. Changing this to an RPC event (bound parameters) would be much better for safety.  (Look at things like sp_executesql.)
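As a minimal sketch of the safer pattern (the table and column names here are just the hypothetical ones from the example above), the same search can be issued as a bound parameter through sp_executesql:

-- Minimal sketch: the search text travels as a parameter, not as part of the statement,
-- so the malicious input is treated as data and the query simply finds no matching row.
DECLARE @search nvarchar(128) = N'Bob'';  drop database production; select ''';

EXEC sp_executesql
    N'select * from tbl where name = @name',   -- statement text never changes
    N'@name nvarchar(128)',                    -- parameter definition
    @name = @search;                           -- the (malicious) input is bound, not concatenated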


Subject: RML Read Trace Error

I am getting the following error while using Read Trace to read SQL 2000 trace files. Do we have any latest version where this problem is fixed? I am using RML Version :9.01.0109 Read Trace Error: [Error: 110003][State: 0][Abs Char: 113][Seq: 0] SYNTAX ERROR: String is missing proper closing quote near (Char Pos: 0xC1 Byte Pos: 0x182)



Embedded NULL Characters

It is not a bug; it is a designed output warning.  What I am saying is that EMBEDDED WITHIN the text of the command there is an actual 0x00 character.  This is often unexpected and usually a binding error from a client application.  SQL Server allows 0x00 to be stored, so it is not a violation of T-SQL; it is more of a bad practice.

For example, it is a way to hide T-SQL injection attacks, because a vast majority of applications only display the string up to the first 0x00 character (the C/C++ null termination character).

I could submit the following to your server:    select * from goodtable;(0x00 character added here)drop database Production;

With the right permissions the production database is dropped, and you won't see this in any of the traces because the 0x00 hides it from the common display utilities.

Look at the details more closely to see whether this is something you want to correct in your environment.
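If you want to see the behavior the warning describes, you can reproduce it yourself; a minimal sketch (the table and database names are just the hypothetical ones from the example above):

-- Build a command with an embedded 0x00 character in the middle of the text.
DECLARE @cmd nvarchar(200) =
    N'select * from goodtable;' + NCHAR(0) + N'drop database Production;';

-- Many display utilities stop at the first 0x00, but the full command is really there:
SELECT DATALENGTH(@cmd)             AS total_bytes,  -- counts everything, including the hidden text
       CAST(@cmd AS varbinary(400)) AS raw_bytes;    -- the 0x0000 and the trailing command are visible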


Subject: RML Read Trace Error

 

Do we have any latest version where this bug is fixed?

 

I am using Version : 9.01.0109

 

Error Info:

[Error: 110009][State: 10][Abs Char: 0][Seq: 0] WARNING: The input stream contains an embedded NULL (0x0) character near

 

 

 

Bob Dorr, Microsoft SQL Server Escalation Support - Principal Escalation Engineer

After Applying SQL Server 2008 R2 SP1 Error 9013 is logged (The tail of the log for database %ls is being rewritten to match the new sector size of %d bytes …)


In SQL Server 2008 R2 SP1 we made updates to dynamically accommodate disk drives that present physical sector sizes greater than 512 bytes.  In practice, these are generally 4K physical sector size drives, and the SQL Server 2008 R2 transaction log code will dynamically adjust to the presented physical sector size to accommodate various sector configurations.

To understand this issue completely you have to go back to SQL Server 2000, where we shipped SQL Server's master, model and msdb as if they had been formatted on a 4K physical sector size drive.  This allowed SQL Server to execute on a 4K drive without having to rebuild the master, model or msdb databases.

The formatted sector size (the physical sector size of the disk where the database was created) is stored in metadata for the database, and when the database is restarted (recovered/brought online/…) we check to see if the data written in the log file is aligned on this formatted sector size boundary.  When we detect a situation where the data is not aligned on this boundary, we fix up the tail of the log and write a message in the SQL Server error log.


               ErrorNumber:  9013
               ErrorFormat:  The tail of the log for database %ls is being rewritten to match the new sector size of %d bytes.  %d bytes at offset %I64d in file %ls will be written.

Customers that have been applying SQL Server 2008 R2 SP1 based builds may encounter the error on restart of the SQL Server service for the master, msdb and model databases.

The work to handle varying sector sizes was done in the SQL Server Denali code line and then ported back to the SQL Server 2008 R2 code line.  During the port a specific check was overlooked, which results in the error message condition.  The database will start writing log records at the physical disk sector size.  This is the proper behavior, but for databases like master, msdb, and model the logical sector size is not accommodated properly.

Allow me to try to visualize this with an example of masterlog.ldf, where SQL Server has a logical sector size of 4K but the disk uses 512-byte sectors.

In SQL Server 2008 R2 SP1 we detect the physical sector size of 512 bytes and use that as our write boundary.  Prior versions would have used max(physical, logical), which in this example is the formatted (logical) sector size.  We can now write the 1st and 2nd sectors.  When we restart SQL Server, the SQL 2008 R2 SP1 log file is checked against the metadata (4K); it does not align, so we log the warning and write empty data into the next 6 sectors, re-aligning the log write activity on a 4K boundary.  But the next write to the log encounters the issue and again writes at a 512-byte boundary, so the next restart can trigger the same fix-up of the tail of the log.

Note: This does NOT present a risk to the database. SQL Server is writing on sector-aligned boundaries; it is just a logical/physical mismatch of the sector size that results in the noisy startup messages.
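If you want to confirm whether your instance is logging this message at startup, one quick check (using xp_readerrorlog, which is undocumented but widely used, so treat this as a convenience sketch) is:

-- Search the current SQL Server error log (log 0, log type 1 = SQL Server)
-- for the 9013 message text written during recovery.
EXEC master.dbo.xp_readerrorlog 0, 1, N'being rewritten to match the new sector size';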

Bob Dorr - Principal SQL Server Escalation Engineer


Yes–we made a mistake–and are finally going to fix it


If you are still using ADO in some of your code and tried to upgrade your build machine to Windows 7 SP1, you probably ran into the fact that you cannot run the recompiled program on downlevel OSes unless you modify your references to point to the backward compatible type libraries from http://support.microsoft.com/kb/2517589. Unfortunately, that solution does not work in several scenarios, the biggest of which is Visual Basic for Applications (VBA).

The GUID change itself was not really the big problem. In fact, if you have been developing against MDAC/WDAC for a while, you probably remember the days when applications would only run on the MDAC version they were compiled on or higher. When we stopped revisioning MDAC around the Windows 2003 SP2 days, the problem pretty much went away. Until recently…

What happened is that we realized that some of our ADO APIs used platform-dependent data types. By this I mean that the 32-bit version of the API used LONG, but the 64-bit version of the same API used LONGLONG.  This caused problems when 64-bit applications tried to consume these platform-dependent data types and the caller's data type did not match the callee's. http://support.microsoft.com/kb/983246 discusses a scenario where a VBA macro runs into this exact problem. Unfortunately, we drastically underestimated the number of customers who were recompiling ADO applications on Windows 7 SP1. Even worse, when I say drastically, I really mean DRASTICALLY.

As soon as we realized the magnitude of the problem, we started scrambling to come up with a better solution. By this point, though, our significantly less than ideal first attempt had compounded the problem because it had the potential to spread the changed GUIDs to downlevel OSes. We made the painful decision to pull http://support.microsoft.com/kb/983246. Yes, we recognized that it would leave some scenarios like VBA without a workable solution, but we deemed that a better option than continuing to spread the modified GUIDs. Although not ideal, our standing recommendations were to use either the backward compatible libraries from http://support.microsoft.com/kb/2517589 or to compile on Windows 7 RTM. While not covering every scenario, this covered the bulk of them and was the best option we could provide without massive re-architecting.

Now, I am happy to announce that we are coming out with a much better solution. We are going to do the following:

  1. Ship the 6.0 type library from Windows 7 RTM via the new type library file msado60.tlb. This will ship for multiple platforms.
  2. Ship a new 6.1 type library (which contains both new and deprecated interfaces) and embed it inside msado15.dll.
  3. Revert all 2.x type libraries back to what they looked like in Windows 7 RTM.

Therefore, the 2.x and 6.0 type libraries can be used for backward compatible purposes, and the 6.1 type library can be used for 64-bit VBA solutions. If you find yourself in the enviable place of not having to do any downlevel OS support, you can migrate entirely to the 6.1 type library even for your non-VBA applications.

Although you can see this fix in action in the Windows 8 Preview we shipped in September, we won't be able to deliver the Windows 7 SP1 fix until early next year. As you can imagine, we are checking and re-checking everything because we completely understand that the only thing worse than no fix at this point would be another incomplete fix that requires yet another direction shift and another significant delay. Also, we are hard at work making sure some other known issues with the Windows 7 SP1 WDAC build get addressed with this fix as well.

We are still working hard to try to deliver this fix even sooner than early next year, but at this point I am not able to make any promises for an earlier delivery.

Easy JDBC Logging


I have been supporting Microsoft's JDBC driver for almost six years now, and the one thing I always struggle with is getting logging going.  JDBC logging is probably some of the most useful logging out there (I only wish BID tracing were as easy to enable and consume!), but for some reason I always struggle to get the correct logging.properties file registered and then to figure out exactly where the log file will be generated.  I finally got tired of fighting with it and decided to change both my test code and my command line to make this much, much easier.

The first thing to recognize is that, by default, Java will generate the log file in the user.home folder.  Therefore, I decided to output that location as part of my code:

System.out.println("User Home Path: " + System.getProperty("user.home"));

The second thing to do is to manually specify the logging.properties file in the command-line:

java.exe -Djava.util.logging.config.file=c:\temp\logging.properties myJavaClass

Just in case you were wondering, I am using a very simple logging.properties file:

# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
 
# Set the default logging level for the root logger
.level = ALL
 
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = INFO
 
# Set the default logging level for new FileHandler instances
java.util.logging.FileHandler.level = ALL
 
# Set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
 
 
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
 
# For example, set the com.xyz.foo logger to only log SEVERE
# messages:
com.microsoft.sqlserver.jdbc.level=FINEST
com.xyz.foo.level = SEVERE

Now, for the few times when I need to generate a JDBC log, it works on the first try!


Happy logging!

The week that was PASS 2011 & Moving on…



This is to recap my week at PASS Summit 2011.  This was my 4th US PASS, and every year it is amazing.  I really enjoy sharing information with the community as well as getting to meet the people I talk to on Twitter and in other areas.  Between the technical and the social networking, this really is a great event.  There were a lot of people at PASS this year as well; this was the first year that I've noticed an overflow room for the keynotes, and it was filled.

The Big Announcement


The big announcement this year was that we officially launched the product branding for SQL Server 2012 and stated that it will be released in the first half of next year.  Project Crescent was also named Power View.  A video shown during one of the keynotes was, I thought, a really well made clip with testimonials about some of the new features.

The Sessions

This year we had our normal CSS pre-con done by yours truly, as well as a 3-hour main conference talk that Bob Ward did.  We also had Keith Elmore doing a talk about the new Performance Dashboard that will ship with SQL Server 2012.  I found out that Performance Dashboard has been one of the biggest downloads on the Microsoft Download Center; I hadn't realized that.  Cindy Gross also presented a SQL talk.  I wasn't going to do a main conference talk this year, but one of our presenters had to miss the summit due to medical issues, and I had a talk on standby that I had presented at SQL Saturday.  I found out the Sunday I landed that I would be doing that talk.  It was a high-level walkthrough of Reporting Services.


SQL Clinic

The Clinic this year had a great turnout as ever.  I think this year we also had a much better showing on the CSS side, in part due to the awesome schedule that Chris Wilson and John Gose put together.  It seemed to work out really well.  We got a lot of different questions spanning the entire product line as usual.  The big theme this year was Replication, though.  On Wednesday it seemed that every other question was something about Replication.  I did my part to tout the new SQL Server AlwaysOn feature that will be in SQL Server 2012.

One attraction that was new this year was the SQL Kinect demo!  It was a fun mock setup of some Management Studio interactions with SQL Server.  For example, on one screen, a touchdown-like movement would create a table or database.  Everyone had a blast with this.


Entertainment

As mentioned, the social events and just having time to meet and greet people attending PASS was great.  There was always something going on every night.  This year was my first year at a SQL Karaoke event.  To illustrate the power of social media: two friends and I arrived at the karaoke bar, and there was really no one there at 9:15 after the last crowd left.  We decided to get on Twitter and start telling people to come using the #sqlpass and #sqlkaraoke hashtags.  Within 15-20 minutes we had about 60-70 people there and had a blast.  It was really amazing to witness.

Food is also a great experience at PASS, at least with some of the restaurants that we end up going to, like the Crab Pot, which just dumps a bunch of seafood on your table.  And the massive desserts that you get!


SQL KILT

Unfortunately, I did not get to participate in the SQL KILT event on Thursday.  I really wanted to, but things didn't work out.  This year they had a backup plan for everyone that didn't get a kilt, though.


 

Moving on…

I wasn't really advertising this before, but it came out a lot at the PASS Summit: I have left the SQL Support group and am now with the Health Solutions Support Group.  I'm now supporting products like HealthVault and Amalga.  There were a lot of reasons that led me to this decision, but one thing I weighed heavily was the SQL Community, the fact that it is really awesome, and that I have met a lot of great people.  PASS is one of those events that I have come to love, and it has felt like a family reunion to me (minus the drama that happens at family reunions!).  Our health products are really great and I'm looking forward to the new adventure that awaits there.  There are a lot of great opportunities with this group, and I hope to bring some of the social and community passion with me.

You can continue to follow me at my new blog - http://blogs.msdn.com/hsgsupport.  Also, be sure to check out http://www.whatsnextinhealth.com!

Adam W. Saxton | Microsoft HSG Escalation Services
http://twitter.com/awsaxton

Error 1803 and model size change in SQL Server 2012


Recently I encountered error 1803 while working on SQL Server 2012. The script I ran against a SQL Server 2012 instance was:

CREATE DATABASE [suspect_db] ON PRIMARY
( NAME = N'suspect_db', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL11_CTP3\MSSQL\DATA\suspect_db.mdf' , SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'suspect_db_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQL11_CTP3\MSSQL\DATA\suspect_db_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO

 

I was confident I had used this same script successfully on previous versions of SQL Server, so I examined the error message and the script more carefully:

Msg 1803, Level 16, State 1, Line 3

The CREATE DATABASE statement failed. The primary file must be at least 3 MB to accommodate a copy of the model database.

 

Then I went and compared the physical size of model and noticed that the size changed between SQL Server 2012 and previous versions. Here is a comparison of the sizes:

 

(Physical file sizes are in bytes; reserved, data, index_size and unused come from sp_spaceused.)

| SQL Server version | model.mdf (bytes) | modellog.ldf (bytes) | reserved | data | index_size | unused |
|---|---|---|---|---|---|---|
| 2000 | 655,360 | 524,288 | 528 KB | 144 KB | 280 KB | 104 KB |
| 2005 | 1,245,184 | 524,288 | 1136 KB | 472 KB | 560 KB | 104 KB |
| 2008 | 1,310,720 | 524,288 | 1200 KB | 472 KB | 624 KB | 104 KB |
| 2008 R2 | 1,310,720 | 524,288 | 1216 KB | 512 KB | 632 KB | 72 KB |
| 2012 | 2,162,688 | 524,288 | 2096 KB | 792 KB | 1080 KB | 224 KB |

 

So, the next obvious question is: why this change now?

When the SQL Server product team ships new features or enhancements to existing features, these involve changes to the core metadata. These might be in the form of new system tables, views, stored procedures and other objects. On some occasions, these changes are server-wide, and the change is simply made to the associated catalog tables in the msdb or master database. If the change needs to be implemented in every database, then you will see the effect in the model database. Since we added several exciting new features in SQL Server 2012, we needed to add supporting system tables to every database, and those need to be persisted in the model database first. Depending upon the number of system tables added and the available free space, the data file needs to grow. That is what happened in SQL Server 2012. New stored procedures and system-wide views are normally implemented in the Resource database and installed as part of setup.

 

If you have scripts where you specify the initial size of the database [especially SQL Express], make sure to consider this factor when migrating applications to SQL Server 2012.
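For example, the script at the top of this post succeeds again once the primary file meets the new minimum; a minimal sketch (the file path is hypothetical; adjust it for your instance):

-- Bumping SIZE on the primary file from 2048KB to 3072KB satisfies the
-- "at least 3 MB" requirement reported by error 1803 on SQL Server 2012.
CREATE DATABASE [suspect_db] ON PRIMARY
( NAME = N'suspect_db', FILENAME = N'C:\SQLData\suspect_db.mdf', SIZE = 3072KB, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'suspect_db_log', FILENAME = N'C:\SQLData\suspect_db_log.ldf', SIZE = 1024KB, FILEGROWTH = 10% );
GO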

 

Here is the query I used to find out what system tables were added in each version of SQL Server:

select object_name(p.object_id) as object_name, p.index_id, p.rows, au.total_pages
from sys.allocation_units au left outer join sys.partitions p on (au.container_id = p.hobt_id)
where au.type in (1, 3)
union
select object_name(p.object_id) as object_name, p.index_id, p.rows, au.total_pages
from sys.allocation_units au left outer join sys.partitions p on (au.container_id = p.partition_id)
where au.type in (2)

 

The output I got from the different versions is shown below. As you can see, the system tables that appear only in the newer version correspond to the new features introduced in SQL Server 2012 [e.g. AlwaysOn, contained databases, FileTable].

 

SQL 2012 vs. SQL 2008 R2 (a blank side means the system table does not exist in that version's model database):

| object_name (SQL 2012) | index_id | rows | total_pages | size_bytes | object_name (SQL 2008 R2) | index_id | rows | total_pages | size_bytes |
|---|---|---|---|---|---|---|---|---|---|
| filestream_tombstone_2073058421 | 1 | 0 | 0 | 0 | filestream_tombstone_2073058421 | 1 | 0 | 0 | 0 |
| filestream_tombstone_2073058421 | 2 | 0 | 0 | 0 | filestream_tombstone_2073058421 | 2 | 0 | 0 | 0 |
| filetable_updates_2105058535 | 1 | 0 | 0 | 0 |  |  |  |  |  |
| queue_messages_1977058079 | 1 | 0 | 0 | 0 | queue_messages_1977058079 | 1 | 0 | 0 | 0 |
| queue_messages_1977058079 | 2 | 0 | 0 | 0 | queue_messages_1977058079 | 2 | 0 | 0 | 0 |
| queue_messages_2009058193 | 1 | 0 | 0 | 0 | queue_messages_2009058193 | 1 | 0 | 0 | 0 |
| queue_messages_2009058193 | 2 | 0 | 0 | 0 | queue_messages_2009058193 | 2 | 0 | 0 | 0 |
| queue_messages_2041058307 | 1 | 0 | 0 | 0 | queue_messages_2041058307 | 1 | 0 | 0 | 0 |
| queue_messages_2041058307 | 2 | 0 | 0 | 0 | queue_messages_2041058307 | 2 | 0 | 0 | 0 |
| sysallocunits | 1 | 138 | 4 | 32,768 | sysallocunits | 1 | 103 | 4 | 32,768 |
| sysallocunits | 2 | 138 | 2 | 16,384 | sysallocunits | 2 | 103 | 2 | 16,384 |
| sysasymkeys | 1 | 0 | 0 | 0 | sysasymkeys | 1 | 0 | 0 | 0 |
| sysasymkeys | 2 | 0 | 0 | 0 | sysasymkeys | 2 | 0 | 0 | 0 |
| sysasymkeys | 3 | 0 | 0 | 0 | sysasymkeys | 3 | 0 | 0 | 0 |
| sysaudacts | 1 | 0 | 0 | 0 | sysaudacts | 1 | 0 | 0 | 0 |
| sysbinobjs | 1 | 23 | 2 | 16,384 | sysbinobjs | 1 | 23 | 2 | 16,384 |
| sysbinobjs | 2 | 23 | 2 | 16,384 | sysbinobjs | 2 | 23 | 2 | 16,384 |
| sysbinsubobjs | 1 | 3 | 2 | 16,384 | sysbinsubobjs | 1 | 3 | 2 | 16,384 |
| sysbinsubobjs | 2 | 3 | 2 | 16,384 | sysbinsubobjs | 2 | 3 | 2 | 16,384 |
| sysbrickfiles | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syscerts | 1 | 0 | 0 | 0 | syscerts | 1 | 0 | 0 | 0 |
| syscerts | 2 | 0 | 0 | 0 | syscerts | 2 | 0 | 0 | 0 |
| syscerts | 3 | 0 | 0 | 0 | syscerts | 3 | 0 | 0 | 0 |
| syscerts | 4 | 0 | 0 | 0 | syscerts | 4 | 0 | 0 | 0 |
| syschildinsts | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysclones | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysclsobjs | 1 | 16 | 2 | 16,384 | sysclsobjs | 1 | 16 | 2 | 16,384 |
| sysclsobjs | 2 | 16 | 2 | 16,384 | sysclsobjs | 2 | 16 | 2 | 16,384 |
| syscolpars | 1 | 694 | 17 | 139,264 | syscolpars | 1 | 483 | 16 | 131,072 |
| syscolpars | 2 | 694 | 7 | 57,344 | syscolpars | 2 | 483 | 5 | 40,960 |
| syscommittab | 1 | 0 | 0 | 0 | syscommittab | 1 | 0 | 0 | 0 |
| syscommittab | 2 | 0 | 0 | 0 | syscommittab | 2 | 0 | 0 | 0 |
| syscompfragments | 1 | 0 | 0 | 0 | syscompfragments | 1 | 0 | 0 | 0 |
| sysconvgroup | 1 | 0 | 0 | 0 | sysconvgroup | 1 | 0 | 0 | 0 |
| syscscolsegments | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syscsdictionaries | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysdbfiles | 1 | 2 | 2 | 16,384 |  |  |  |  |  |
| sysdbfrag | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysdbfrag | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysdbreg | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysdbreg | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysdbreg | 3 | 0 | 0 | 0 |  |  |  |  |  |
| sysdercv | 1 | 0 | 0 | 0 | sysdercv | 1 | 0 | 0 | 0 |
| sysdesend | 1 | 0 | 0 | 0 | sysdesend | 1 | 0 | 0 | 0 |
| sysendpts | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysendpts | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysfgfrag | 1 | 0 | 0 | 0 | sysfgfrag | 1 | 2 | 2 | 16,384 |
| sysfiles1 | 0 | 2 | 2 | 16,384 | sysfiles1 | 0 | 2 | 2 | 16,384 |
| sysfoqueues | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysfos | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysfos | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysftinds | 1 | 0 | 0 | 0 | sysftinds | 1 | 0 | 0 | 0 |
| sysftproperties | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysftproperties | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysftproperties | 3 | 0 | 0 | 0 |  |  |  |  |  |
| sysftsemanticsdb | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysftstops | 1 | 0 | 0 | 0 | sysftstops | 1 | 0 | 0 | 0 |
| sysguidrefs | 1 | 0 | 0 | 0 | sysguidrefs | 1 | 0 | 0 | 0 |
| sysguidrefs | 2 | 0 | 0 | 0 | sysguidrefs | 2 | 0 | 0 | 0 |
| sysidxstats | 1 | 180 | 4 | 32,768 | sysidxstats | 1 | 123 | 4 | 32,768 |
| sysidxstats | 2 | 180 | 2 | 16,384 | sysidxstats | 2 | 123 | 2 | 16,384 |
| sysiscols | 1 | 367 | 4 | 32,768 | sysiscols | 1 | 275 | 4 | 32,768 |
| sysiscols | 2 | 367 | 2 | 16,384 | sysiscols | 2 | 275 | 2 | 16,384 |
| syslnklgns | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysmultiobjrefs | 1 | 107 | 2 | 16,384 | sysmultiobjrefs | 1 | 106 | 2 | 16,384 |
| sysmultiobjrefs | 2 | 107 | 2 | 16,384 | sysmultiobjrefs | 2 | 106 | 2 | 16,384 |
| sysnsobjs | 1 | 1 | 2 | 16,384 | sysnsobjs | 1 | 1 | 2 | 16,384 |
| sysnsobjs | 2 | 1 | 2 | 16,384 | sysnsobjs | 2 | 1 | 2 | 16,384 |
| sysobjkeycrypts | 1 | 0 | 0 | 0 | sysobjkeycrypts | 1 | 0 | 0 | 0 |
| sysobjvalues | 1 | 187 | 5 | 40,960 | sysobjvalues | 1 | 125 | 3 | 24,576 |
| sysobjvalues | 1 | 187 | 25 | 204,800 | sysobjvalues | 1 | 125 | 25 | 204,800 |
| sysowners | 1 | 14 | 2 | 16,384 | sysowners | 1 | 14 | 2 | 16,384 |
| sysowners | 2 | 14 | 2 | 16,384 | sysowners | 2 | 14 | 2 | 16,384 |
| sysowners | 3 | 14 | 2 | 16,384 | sysowners | 3 | 14 | 2 | 16,384 |
| sysphfg | 1 | 1 | 2 | 16,384 | sysphfg | 1 | 1 | 2 | 16,384 |
| syspriorities | 1 | 0 | 0 | 0 | syspriorities | 1 | 0 | 0 | 0 |
| syspriorities | 2 | 0 | 0 | 0 | syspriorities | 2 | 0 | 0 | 0 |
| syspriorities | 3 | 0 | 0 | 0 | syspriorities | 3 | 0 | 0 | 0 |
| sysprivs | 1 | 137 | 2 | 16,384 | sysprivs | 1 | 130 | 2 | 16,384 |
| syspru | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysprufiles | 1 | 2 | 2 | 16,384 | sysprufiles | 1 | 2 | 2 | 16,384 |
| sysqnames | 1 | 98 | 2 | 16,384 | sysqnames | 1 | 97 | 2 | 16,384 |
| sysqnames | 2 | 98 | 2 | 16,384 | sysqnames | 2 | 97 | 2 | 16,384 |
| sysremsvcbinds | 1 | 0 | 0 | 0 | sysremsvcbinds | 1 | 0 | 0 | 0 |
| sysremsvcbinds | 2 | 0 | 0 | 0 | sysremsvcbinds | 2 | 0 | 0 | 0 |
| sysremsvcbinds | 3 | 0 | 0 | 0 | sysremsvcbinds | 3 | 0 | 0 | 0 |
| sysrmtlgns | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysrowsetrefs | 1 | 0 | 0 | 0 | sysrowsetrefs | 1 | 0 | 0 | 0 |
| sysrowsets | 1 | 124 | 2 | 16,384 | sysrowsets | 1 | 91 | 2 | 16,384 |
| sysrscols | 1 | 870 | 17 | 139,264 | sysrscols | 1 | 632 | 9 | 73,728 |
| sysrts | 1 | 1 | 2 | 16,384 | sysrts | 1 | 1 | 2 | 16,384 |
| sysrts | 2 | 1 | 2 | 16,384 | sysrts | 2 | 1 | 2 | 16,384 |
| sysrts | 3 | 1 | 2 | 16,384 | sysrts | 3 | 1 | 2 | 16,384 |
| sysscalartypes | 1 | 34 | 2 | 16,384 | sysscalartypes | 1 | 34 | 2 | 16,384 |
| sysscalartypes | 2 | 34 | 2 | 16,384 | sysscalartypes | 2 | 34 | 2 | 16,384 |
| sysscalartypes | 3 | 34 | 2 | 16,384 | sysscalartypes | 3 | 34 | 2 | 16,384 |
| sysschobjs | 1 | 2063 | 33 | 270,336 | sysschobjs | 1 | 53 | 2 | 16,384 |
| sysschobjs | 2 | 2063 | 33 | 270,336 | sysschobjs | 2 | 53 | 2 | 16,384 |
| sysschobjs | 3 | 2063 | 33 | 270,336 | sysschobjs | 3 | 53 | 2 | 16,384 |
| sysschobjs | 4 | 2063 | 6 | 49,152 | sysschobjs | 4 | 53 | 2 | 16,384 |
| sysseobjvalues | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syssingleobjrefs | 1 | 155 | 2 | 16,384 | syssingleobjrefs | 1 | 146 | 2 | 16,384 |
| syssingleobjrefs | 2 | 155 | 2 | 16,384 | syssingleobjrefs | 2 | 146 | 2 | 16,384 |
| syssoftobjrefs | 1 | 0 | 0 | 0 | syssoftobjrefs | 1 | 0 | 0 | 0 |
| syssoftobjrefs | 2 | 0 | 0 | 0 | syssoftobjrefs | 2 | 0 | 0 | 0 |
| syssqlguides | 1 | 0 | 0 | 0 | syssqlguides | 1 | 0 | 0 | 0 |
| syssqlguides | 2 | 0 | 0 | 0 | syssqlguides | 2 | 0 | 0 | 0 |
| syssqlguides | 3 | 0 | 0 | 0 | syssqlguides | 3 | 0 | 0 | 0 |
| systypedsubobjs | 1 | 0 | 0 | 0 | systypedsubobjs | 1 | 0 | 0 | 0 |
| systypedsubobjs | 2 | 0 | 0 | 0 | systypedsubobjs | 2 | 0 | 0 | 0 |
| sysusermsgs | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syswebmethods | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysxlgns | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysxlgns | 2 | 0 | 0 | 0 |  |  |  |  |  |
| sysxlgns | 3 | 0 | 0 | 0 |  |  |  |  |  |
| sysxmitbody | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysxmitqueue | 1 | 0 | 0 | 0 | sysxmitqueue | 1 | 0 | 0 | 0 |
| sysxmlcomponent | 1 | 100 | 2 | 16,384 | sysxmlcomponent | 1 | 99 | 2 | 16,384 |
| sysxmlcomponent | 2 | 100 | 2 | 16,384 | sysxmlcomponent | 2 | 99 | 2 | 16,384 |
| sysxmlfacet | 1 | 112 | 2 | 16,384 | sysxmlfacet | 1 | 112 | 2 | 16,384 |
| sysxmlplacement | 1 | 19 | 2 | 16,384 | sysxmlplacement | 1 | 18 | 2 | 16,384 |
| sysxmlplacement | 2 | 19 | 2 | 16,384 | sysxmlplacement | 2 | 18 | 2 | 16,384 |
| sysxprops | 1 | 0 | 0 | 0 | sysxprops | 1 | 0 | 0 | 0 |
| sysxsrvs | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysxsrvs | 2 | 0 | 0 | 0 |  |  |  |  |  |
| SUM |  |  |  | 2,146,304 | SUM |  |  |  | 1,245,184 |

SQL 2008 vs. SQL 2005 (again, a blank side means the system table does not exist in that version):

| object_name (SQL 2008) | index_id | rows | total_pages | size_bytes | object_name (SQL 2005) | index_id | rows | total_pages | size_bytes |
|---|---|---|---|---|---|---|---|---|---|
| filestream_tombstone_2073058421 | 1 | 0 | 0 | 0 |  |  |  |  |  |
| filestream_tombstone_2073058421 | 2 | 0 | 0 | 0 |  |  |  |  |  |
| queue_messages_1977058079 | 1 | 0 | 0 | 0 | queue_messages_1977058079 | 1 | 0 | 0 | 0 |
| queue_messages_1977058079 | 2 | 0 | 0 | 0 | queue_messages_1977058079 | 2 | 0 | 0 | 0 |
| queue_messages_2009058193 | 1 | 0 | 0 | 0 | queue_messages_2009058193 | 1 | 0 | 0 | 0 |
| queue_messages_2009058193 | 2 | 0 | 0 | 0 | queue_messages_2009058193 | 2 | 0 | 0 | 0 |
| queue_messages_2041058307 | 1 | 0 | 0 | 0 | queue_messages_2041058307 | 1 | 0 | 0 | 0 |
| queue_messages_2041058307 | 2 | 0 | 0 | 0 | queue_messages_2041058307 | 2 | 0 | 0 | 0 |
| sysallocunits | 1 | 103 | 4 | 32,768 | sysallocunits | 1 | 89 | 2 | 16,384 |
| sysallocunits | 2 | 103 | 2 | 16,384 |  |  |  |  |  |
| sysasymkeys | 1 | 0 | 0 | 0 | sysasymkeys | 1 | 0 | 0 | 0 |
| sysasymkeys | 2 | 0 | 0 | 0 | sysasymkeys | 2 | 0 | 0 | 0 |
| sysasymkeys | 3 | 0 | 0 | 0 | sysasymkeys | 3 | 0 | 0 | 0 |
| sysaudacts | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysbinobjs | 1 | 23 | 2 | 16,384 | sysbinobjs | 1 | 23 | 2 | 16,384 |
| sysbinobjs | 2 | 23 | 2 | 16,384 | sysbinobjs | 2 | 23 | 2 | 16,384 |
| sysbinsubobjs | 1 | 3 | 2 | 16,384 | sysbinsubobjs | 1 | 0 | 0 | 0 |
| sysbinsubobjs | 2 | 3 | 2 | 16,384 | sysbinsubobjs | 2 | 0 | 0 | 0 |
| syscerts | 1 | 0 | 0 | 0 | syscerts | 1 | 0 | 0 | 0 |
| syscerts | 2 | 0 | 0 | 0 | syscerts | 2 | 0 | 0 | 0 |
| syscerts | 3 | 0 | 0 | 0 | syscerts | 3 | 0 | 0 | 0 |
| syscerts | 4 | 0 | 0 | 0 | syscerts | 4 | 0 | 0 | 0 |
| sysclsobjs | 1 | 16 | 2 | 16,384 | sysclsobjs | 1 | 14 | 2 | 16,384 |
| sysclsobjs | 2 | 16 | 2 | 16,384 | sysclsobjs | 2 | 14 | 2 | 16,384 |
| syscolpars | 1 | 483 | 16 | 131,072 | syscolpars | 1 | 419 | 16 | 131,072 |
| syscolpars | 2 | 483 | 5 | 40,960 | syscolpars | 2 | 419 | 4 | 32,768 |
| syscommittab | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syscommittab | 2 | 0 | 0 | 0 |  |  |  |  |  |
| syscompfragments | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysconvgroup | 1 | 0 | 0 | 0 | sysconvgroup | 1 | 0 | 0 | 0 |
|  |  |  |  |  | sysdbfiles | 1 | 2 | 2 | 16,384 |
| sysdercv | 1 | 0 | 0 | 0 | sysdercv | 1 | 0 | 0 | 0 |
| sysdesend | 1 | 0 | 0 | 0 | sysdesend | 1 | 0 | 0 | 0 |
| sysfgfrag | 1 | 2 | 2 | 16,384 |  |  |  |  |  |
| sysfiles1 | 0 | 2 | 2 | 16,384 | sysfiles1 | 0 | 2 | 2 | 16,384 |
| sysftinds | 1 | 0 | 0 | 0 | sysftinds | 1 | 0 | 0 | 0 |
| sysftstops | 1 | 0 | 0 | 0 |  |  |  |  |  |
| sysguidrefs | 1 | 0 | 0 | 0 | sysguidrefs | 1 | 0 | 0 | 0 |
| sysguidrefs | 2 | 0 | 0 | 0 | sysguidrefs | 2 | 0 | 0 | 0 |
|  |  |  |  |  | syshobtcolumns | 1 | 538 | 7 | 57,344 |
|  |  |  |  |  | syshobts | 1 | 78 | 2 | 16,384 |
| sysidxstats | 1 | 117 | 2 | 16,384 | sysidxstats | 1 | 102 | 2 | 16,384 |
| sysidxstats | 2 | 117 | 2 | 16,384 | sysidxstats | 2 | 102 | 2 | 16,384 |
| sysiscols | 1 | 269 | 4 | 32,768 | sysiscols | 1 | 216 | 2 | 16,384 |
| sysiscols | 2 | 269 | 2 | 16,384 |  |  |  |  |  |
| sysmultiobjrefs | 1 | 106 | 2 | 16,384 | sysmultiobjrefs | 1 | 102 | 2 | 16,384 |
| sysmultiobjrefs | 2 | 106 | 2 | 16,384 | sysmultiobjrefs | 2 | 102 | 2 | 16,384 |
| sysnsobjs | 1 | 1 | 2 | 16,384 | sysnsobjs | 1 | 1 | 2 | 16,384 |
| sysnsobjs | 2 | 1 | 2 | 16,384 | sysnsobjs | 2 | 1 | 2 | 16,384 |
| sysobjkeycrypts | 1 | 0 | 0 | 0 | sysobjkeycrypts | 1 | 0 | 0 | 0 |
| sysobjvalues | 1 | 119 | 3 | 24,576 | sysobjvalues | 1 | 102 | 3 | 24,576 |
| sysobjvalues | 1 | 119 | 25 | 204,800 | sysobjvalues | 1 | 102 | 25 | 204,800 |
| sysowners | 1 | 14 | 2 | 16,384 | sysowners | 1 | 14 | 2 | 16,384 |
| sysowners | 2 | 14 | 2 | 16,384 | sysowners | 2 | 14 | 2 | 16,384 |
| sysowners | 3 | 14 | 2 | 16,384 | sysowners | 3 | 14 | 2 | 16,384 |
| sysphfg | 1 | 1 | 2 | 16,384 |  |  |  |  |  |
| syspriorities | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syspriorities | 2 | 0 | 0 | 0 |  |  |  |  |  |
| syspriorities | 3 | 0 | 0 | 0 |  |  |  |  |  |
| sysprivs | 1 | 130 | 2 | 16,384 | sysprivs | 1 | 120 | 2 | 16,384 |
| sysprufiles | 1 | 2 | 2 | 16,384 |  |  |  |  |  |
| sysqnames | 1 | 97 | 2 | 16,384 | sysqnames | 1 | 91 | 2 | 16,384 |
| sysqnames | 2 | 97 | 2 | 16,384 | sysqnames | 2 | 91 | 2 | 16,384 |
| sysremsvcbinds | 1 | 0 | 0 | 0 | sysremsvcbinds | 1 | 0 | 0 | 0 |
| sysremsvcbinds | 2 | 0 | 0 | 0 | sysremsvcbinds | 2 | 0 | 0 | 0 |
| sysremsvcbinds | 3 | 0 | 0 | 0 | sysremsvcbinds | 3 | 0 | 0 | 0 |
|  |  |  |  |  | sysrowsetcolumns | 1 | 538 | 7 | 57,344 |
| sysrowsetrefs | 1 | 0 | 0 | 0 | sysrowsetrefs | 1 | 0 | 0 | 0 |
| sysrowsets | 1 | 91 | 2 | 16,384 | sysrowsets | 1 | 78 | 2 | 16,384 |
| sysrscols | 1 | 632 | 9 | 73,728 |  |  |  |  |  |
| sysrts | 1 | 1 | 2 | 16,384 | sysrts | 1 | 1 | 2 | 16,384 |
| sysrts | 2 | 1 | 2 | 16,384 | sysrts | 2 | 1 | 2 | 16,384 |
| sysrts | 3 | 1 | 2 | 16,384 | sysrts | 3 | 1 | 2 | 16,384 |
| sysscalartypes | 1 | 34 | 2 | 16,384 | sysscalartypes | 1 | 27 | 2 | 16,384 |
| sysscalartypes | 2 | 34 | 2 | 16,384 | sysscalartypes | 2 | 27 | 2 | 16,384 |
| sysscalartypes | 3 | 34 | 2 | 16,384 | sysscalartypes | 3 | 27 | 2 | 16,384 |
| sysschobjs | 1 | 53 | 2 | 16,384 | sysschobjs | 1 | 47 | 2 | 16,384 |
| sysschobjs | 2 | 53 | 2 | 16,384 | sysschobjs | 2 | 47 | 2 | 16,384 |
| sysschobjs | 3 | 53 | 2 | 16,384 | sysschobjs | 3 | 47 | 2 | 16,384 |
| sysschobjs | 4 | 53 | 2 | 16,384 | sysschobjs | 4 | 47 | 2 | 16,384 |
|  |  |  |  |  | sysserefs | 1 | 89 | 2 | 16,384 |
| syssingleobjrefs | 1 | 146 | 2 | 16,384 | syssingleobjrefs | 1 | 133 | 2 | 16,384 |
| syssingleobjrefs | 2 | 146 | 2 | 16,384 | syssingleobjrefs | 2 | 133 | 2 | 16,384 |
| syssoftobjrefs | 1 | 0 | 0 | 0 |  |  |  |  |  |
| syssoftobjrefs | 2 | 0 | 0 | 0 |  |  |  |  |  |
| syssqlguides | 1 | 0 | 0 | 0 | syssqlguides | 1 | 0 | 0 | 0 |
| syssqlguides | 2 | 0 | 0 | 0 | syssqlguides | 2 | 0 | 0 | 0 |
| syssqlguides | 3 | 0 | 0 | 0 | syssqlguides | 3 | 0 | 0 | 0 |
| systypedsubobjs | 1 | 0 | 0 | 0 | systypedsubobjs | 1 | 0 | 0 | 0 |
| systypedsubobjs | 2 | 0 | 0 | 0 | systypedsubobjs | 2 | 0 | 0 | 0 |
| sysxmitqueue | 1 | 0 | 0 | 0 | sysxmitqueue | 1 | 0 | 0 | 0 |
| sysxmlcomponent | 1 | 99 | 2 | 16,384 | sysxmlcomponent | 1 | 93 | 2 | 16,384 |
| sysxmlcomponent | 2 | 99 | 2 | 16,384 | sysxmlcomponent | 2 | 93 | 2 | 16,384 |
| sysxmlfacet | 1 | 112 | 2 | 16,384 | sysxmlfacet | 1 | 97 | 2 | 16,384 |
| sysxmlplacement | 1 | 18 | 2 | 16,384 | sysxmlplacement | 1 | 17 | 2 | 16,384 |
| sysxmlplacement | 2 | 18 | 2 | 16,384 | sysxmlplacement | 2 | 17 | 2 | 16,384 |
| sysxprops | 1 | 0 | 0 | 0 | sysxprops | 1 | 0 | 0 | 0 |
| SUM |  |  |  | 1,228,800 | SUM |  |  |  | 1,163,264 |

 

 

NOTE: Do not query information directly from system tables or manipulate them. Use the documented and supported interfaces to query and modify the state of various SQL Server entities.

All the information provided above is based on SQL Server 2012 CTP3. It is possible there are changes when the product is released for production.

 

Thanks

Suresh Kandoth

SQL Server Escalation Services

Microsoft

 

RML: ReadTrace Appears To Hang at "Doing Post-Load Data Cleanup" Phase


Keith and I continue to field the question as to why the Post-Load Data Cleanup appears to take a long time (hours) and can cause SQL Server to use large amounts of CPU.

Notes from Keith:

"What that step does it try to correlate stmt-level events with the batch in which they ran, and show plans with the statement.  If you capture starting events then all of this can be done at import time (not via the query) because ReadTrace caches the previous batch/rpc starting event and previous sp:stmt starting event and uses those sequence numbers to fill in on the values on the completed events/showplan events during the import itself.

I've tried to optimize this post-load import query about half a dozen times now, and invariably it gets better for certain types of scenarios and worse for others.  The best solution is to capture the starting events and avoid a ragged trace starting point.

The other option is to use the ReadTrace command line parameter –T22, which skips ALL of that processing (including things like indexing).  Some of it could then be done by manually calling a subset of the same procedures that ReadTrace uses in this step, skipping the one that does the post-load fixups.

Even then, some of the reporting features may not work correctly because the event association would not be set correctly.

"

Bob Dorr / Keith Elmore
