
Channel Description:

This is the official team Web Log for Microsoft Customer Service and Support (CSS) SQL Support. Posts are provided by the CSS SQL Escalation Services team.




Many of you have used the RML utilities that Keith Elmore and I built and provided to the community on the download center.   One of the features of the RML utilities is the ability to do sophisticated replay operations.   I was recently asked about the new DReplay feature and thought this would make a nice post.

    As my e-mail exchange below highlights, Keith and I spent numerous hours with the DReplay team going over the various designs, pitfalls and issues we had experienced, worked around or otherwise encountered, and much of that feedback was folded into the DReplay design.  So when Bob Ward asked me the question "Should I use DReplay or RML?" my answer was that I have no reason to believe DReplay does not provide the functionality that RML would, and that Keith and I have no plans to enhance RML replay for Denali at this juncture.

    When you combine this with the additional logic targeted for the Profiler replacement, you will have two very nice features that carry forward many parts of the RML feature set, right in the box.   I really like the new grouping and sorting capabilities being added to the XEProfiler: you can take a trace and answer a question like "What query took the most CPU in aggregate within the filtered window?" without needing to call CSS SQL Server Support to help you answer it from a captured trace.

    RML will still be around, and I have been working on updates to handle the new SQL Denali ODBC client version and such, but we are pleased that the development team carried forward and integrated several of the big features from RML into the shipping product.

    From: Robert Dorr
    Sent: Friday, January 07, 2011 12:02 PM
    Subject: RE: SQL Profiler: Load testing


    Thanks for asking.


    No, it is still profiler based rather than RML based, but Keith and I spent weeks with Shirley and the team walking them through our code base and telling them about the edge cases and such.   They took many of the features from RML and added them into their code base, including over 50% of our test suites.   In fact, I set up an entire virtual machine for them with source, build scripts and the tests that they were walking through.


    I think the new DReplay design gives us a better opportunity than we have ever had to do the things that RML did, built right into the box.



    Sent: Friday, January 07, 2011 10:58 AM
    To: Robert Dorr
    Subject: RE: SQL Profiler: Load testing


    Out of curiosity, is that feature based on RML codebase? From a quick look, there are many similarities.

    Bob Dorr - Principal SQL Server Escalation Engineer


    I ran into an issue yesterday where the EventSequence column appears negative (or could be truncated and won't sort correctly) in the Profiler display.

    Here is an example of the display from a trace that I debugged.


    When I looked at the raw format I could see the storage for the EventSequence was 8 bytes (BIGINT) but the display was formatting an INT (4 bytes).

    16332532102          __int64   (actual 8-byte value stored)
    -847337082           int       (value Profiler displays)

    Event Data Column [51] = TRACE_COL_EVENTSEQ
    0x0000000102AF6C28  86 a9 7e cd 03 00 00 00 00
    (int)0xCD7EA986 = -847337082
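    The truncation can be reproduced with a few lines of Python (a sketch; the byte string is the little-endian storage from the dump above):

```python
import struct

# Little-endian bytes from the trace buffer dump above
raw = bytes([0x86, 0xA9, 0x7E, 0xCD, 0x03, 0x00, 0x00, 0x00])

full_value = struct.unpack('<q', raw)[0]      # read all 8 bytes as a BIGINT
truncated = struct.unpack('<i', raw[:4])[0]   # read only 4 bytes, as Profiler does

print(full_value)   # 16332532102
print(truncated)    # -847337082
```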

    WORKAROUND: Using the SQL Server function, fn_trace_gettable, you can see the correct BIGINT values.   Also, the RML Utilities handle the values properly.

    What I found is that the Profiler GUI uses definition files to determine how to format the column values.  These files are located under "Program Files (x86)\Microsoft SQL Server\100\Tools\Profiler\TraceDefinitions" and the EventSequence column is defined as an INT.

    !!! CAUTION !!! - Changing these files is not supported and could cause additional display anomalies or even Profiler to terminate processing. !!!

    Now I had to clear some of the cobwebs and think back a few years.   When SQL 2005 shipped, EventSequence was an INT value; we updated it to a BIGINT during a service pack release and carried the BIGINT forward into the SQL 2008 code base as well.  The definition files are correct when reading an old .TRC produced before these changes.   For such a trace the INT could roll over, making it hard to sort, but nothing would be truncated because the server never produced more than 4 bytes.  This is specifically why you can't just alter the definition file and have it assume it can access 8 bytes of data when only 4 may be stored.

    SQL Server 2008 traces are always produced with 8-byte values (BIGINT), so I updated the .XML definition file to type = 2.   Since I knew I was opening a file with the correct storage, the EventSequence was then correctly displayed as a BIGINT and no longer truncated and formatted as an INT value.

    It is unlikely that you will encounter this problem, as the trace has to be running for quite some time to produce more than a SIGNED INTEGER's worth of events.   The event sequence is kept from the start of the SQL Server process and incremented for each event that is produced.   So the server has to be running for quite some time before the value exceeds INT_MAX and the 4-byte display wraps negative.
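    To put "quite some time" in perspective, here is a back-of-envelope calculation (the event rate is an assumption for illustration, not a measured number):

```python
INT_MAX = 2**31 - 1        # largest value a 4-byte signed display can show
events_per_second = 10_000  # assumed trace event rate, for illustration only

seconds = INT_MAX / events_per_second
days = seconds / 86_400
print(round(days, 1))   # ~2.5 days of continuous events before the display wraps
```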

    Bob Dorr - Principal SQL Server Escalation Engineer


    In a lengthy discussion this past week I was reminded that Jan 2011 is when the hard drive manufacturers agreed to focus on drives with sector sizes of 4K.    I have read all the latest materials about this over the past week and you can too.  Just search for 512e or Advanced Format sector sizes and you will find the same articles I read.  I concentrated on articles by Seagate, Western Digital and other manufacturers.

    Why am I talking about this on a SQL Server blog? - The change has an impact on your SQL Servers.   There are two areas you need to be aware of: PERFORMANCE and DATA INTEGRITY.

    PERFORMANCE: All the articles outline the performance implications of 512e (512 byte sector size emulation mode).   This is important to you because when SQL Server creates a database it makes the Windows API calls to determine the sector size.   When 512e is enabled the operating system reports 512 bytes and SQL Server aligns the log file I/O requests on 512 byte boundaries.    This means that placing a database on a 512e enabled drive will cause SQL Server to engage the RMW (Read-Modify-Write) behavior and you could see elongated I/O times when writing log records.  This may only be a millisecond or two but can accumulate quickly.


    DATA INTEGRITY:  When I point this out I am not indicating that the 4K sector based drives are inherently any better or worse than 512 byte sector drives.   In fact, many of the designs for the 4K sector drives allow an enhanced ECC mechanism so in some respects the drives could be considered more resilient to media failure conditions than the 512 byte sector formats.  


    What I am warning about is the Read-Modify-Write behavior that takes place under the 512e mode.   When SQL Server thinks the drive is handling 512 byte sectors, the log I/O is aligned on 512 byte boundaries, so a partial 4K write can be encountered at the drive level.   Some specifications say that the drive may bundle these until the 4K sector is filled before flushing to the platter media; others are not so detailed in their information.   If the drive holds the 512 byte write in disk cache (not battery backed) but reports the write complete to SQL Server, SQL Server can flush the data page because it thinks it has met the WAL protocol requirement for writing the log record before the data page.   If a crash occurs at this point and the disk cache does not have time to flush, you have missing log records that recovery won't know about.


    Here are a few snippets from the articles I read.

    A drawback to the current r/m/w operation is that a power loss during the r/m/w operation can cause unrecoverable data loss. This possibility occurs during every r/m/w operation, at the point where the two part-modified sectors at the start and end of the logical blocks (i.e., the "boundary" sectors) are being written to the media.

    In modern computing applications, data such as documents, pictures and video streams are much larger than 512 bytes. Therefore, hard drives can store these write requests in cache until there are enough sequential 512-byte blocks to build a 4K sector.

    Read-Modify-Write Prevention

    As described above, a read-modify-write condition occurs when the hard drive is issued a write command for a block of data that is smaller than, or misaligned to, the 4K sectors. These write requests are called runts since they result in a request smaller than 4K. There are two primary root causes for runts in 512-byte emulation.

    1. Write requests that are misaligned because of logical to physical partition misalignment

    2. Write requests smaller than 4K in size
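    The two runt conditions above can be expressed as a small check (a sketch with a hypothetical helper name, not anything the drive firmware exposes):

```python
PHYSICAL_SECTOR = 4096  # Advanced Format physical sector size

def is_runt(offset: int, length: int, sector: int = PHYSICAL_SECTOR) -> bool:
    """True if a write would trigger read-modify-write on a 512e drive:
    either it starts off a 4K boundary (partition misalignment) or it
    does not cover whole 4K sectors (request smaller than 4K)."""
    return offset % sector != 0 or length % sector != 0

print(is_runt(0, 512))       # True  - 512-byte log write, smaller than 4K
print(is_runt(4096, 4096))   # False - aligned, full-sector write
print(is_runt(512, 4096))    # True  - full-size but misaligned
```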


    For SQL Server the best recommendation is to work with the hardware manufacturer to make sure the 512e mode is disabled on drives that hold the SQL Server database and log files, and that the Windows API is reporting 4K sector sizes.   SQL Server will then align the log writes on 4K boundaries and avoid the emulation behavior.

    Moving Databases

    SQL Server stores the initial sector size with the database metadata and may prevent you from attaching or restoring the database to a drive of different sector size.   Going from a 4K to a 512 byte drive can lead to torn write behavior.  Going from a 512 byte to a 4K drive can lead to Read-Modify-Write behavior.

    Bob Dorr - Principal SQL Server Escalation Engineer


    After upgrading from Visual Studio 2008 to Visual Studio 2010, a customer started to experience problems when debugging CLR assemblies. 

    The behaviors are:

    1. Deploying a CLR assembly (debugging automatically invokes deploy) takes a long time
    2. Deploying a CLR assembly may fail, after a long wait, with the error “Deploy error SQL01268: CREATE ASSEMBLY for assembly failed because assembly failed verification"

    After investigation, it turned out to be an issue in the CLR assembly deployment process.  For the issue to occur, the target database needs to have many database objects such as views and stored procedures (on the order of thousands).   Visual Studio 2010 tries to reverse engineer the objects as part of deployment, and a large number of objects can delay the process.

    Note that this issue doesn’t occur if you run a T-SQL script (issuing CREATE ASSEMBLY) manually yourself to deploy the CLR objects (the preferred way to deploy into production).

    The issue does pose challenges to debugging.  By default, debugging automatically invokes deployment, so essentially debugging will take a long time or may fail (as in the error above).  Here are a couple of solutions for this problem.


    Solution 1 – manually deploy before debugging.

    Step 1: do this only once by deploying your project to an empty database

    1. Point your project to an empty database that doesn’t have lots of objects and deploy the project.  This should be fast.
    2. As part of the deploy, VS generates a script called $(ProjectName).sql in the bin\debug directory.   Copy and save this script to a new location.
    3. Make some slight modifications to the script: 
      • comment out the :setvar lines
      • comment out :on error exit
      • replace $(DatabaseName) with your real database name
      • in the CREATE and ALTER ASSEMBLY statements, replace the inline binary data with the file paths of the CLR .dll and .pdb
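    Edits like commenting out the SQLCMD directives and swapping in the database name lend themselves to a quick script. The sketch below uses a hypothetical helper and assumes the generated file uses the $(DatabaseName) SQLCMD variable:

```python
def patch_deploy_script(sql: str, database_name: str) -> str:
    """Comment out SQLCMD directives and substitute the target database
    name in a Visual Studio generated deploy script."""
    patched = []
    for line in sql.splitlines():
        stripped = line.lstrip()
        if stripped.startswith(':setvar') or stripped.startswith(':on error exit'):
            line = '-- ' + line  # comment out the SQLCMD directive
        patched.append(line.replace('$(DatabaseName)', database_name))
    return '\n'.join(patched)

script = ':setvar DatabaseName "CLRTest"\nUSE [$(DatabaseName)]'
print(patch_deploy_script(script, 'MyRealDb'))
```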

    Step 2: debugging

    1. Configure your project to use the real database you plan to debug against
    2. Under the Build menu | Configuration Manager, uncheck “Deploy” for the CLR project
    3. Build the project
    4. Manually run the script from Step 1 to deploy the assembly
    5. Debug your code.
    6. Note that every time you need to debug, you need to run the script, so you may prefer solution 2.

    Solution 2: using post-build events

    Requirement: you need to have the SQL Server tools installed, as this needs sqlcmd.exe.   Instead of relying on automatic deployment, the post-build event will deploy the assembly and objects.

    1. Under the Build menu | Configuration Manager, uncheck “Deploy” for the CLR project.
    2. Come up with a script that drops the existing assembly and its objects and then re-creates them. Note that you can use step 1 of solution 1 to generate a SQL script for this step as well.
    3. Add this script to your project directory as part of the project. 
      • In the script you can use variables so that you can pass ProjectDir
      • At minimum, you need to deploy the .pdb file, as it contains the symbols for debugging. 
    4. Then in the post-build event run: sqlcmd.exe -S<serverName>  -E  -d<databaseName>  -i"$(ProjectDir)deployscript.sql" -v ProjectDir="$(ProjectDir)"   (where <serverName> and <databaseName> are your SQL Server and database names).
    5. To debug, do two steps:
      • Build the project.  The post-build event will run sqlcmd.exe to deploy the assembly.
      • Then start debugging

    Script 1 (an example script for solution 2)

    if OBJECT_ID('StoredProcedure1') is not null
        drop procedure StoredProcedure1
    if exists (select * from sys.assemblies where name = 'CLRProject')
        drop assembly CLRProject
    create assembly CLRProject from '$(ProjectDir)bin\debug\CLRProject.dll'
    ALTER ASSEMBLY [CLRProject]
        DROP FILE ALL
        ADD FILE FROM '$(ProjectDir)bin\debug\CLRProject.pdb'
    CREATE PROCEDURE [dbo].[StoredProcedure1]
    AS EXTERNAL NAME [CLRProject].[StoredProcedures].[StoredProcedure1]


    A short but good discussion about the RML comparison DIFF calculations.


    From: Robert Dorr
    Sent: Wednesday, January 26, 2011 10:21 AM
    Subject: RE: MSDN Blogs: Contact request: RML Tools: Estimated Comparison Differences


    Thanks for the question and feedback.


    For example, ProjectedCPUDiff is one of the columns in tblComparisonBatchPartialAggs.  The calculation works like this:


    • The hash id generated by ReadTrace is the same in the Baseline and Comparison database for the same query text pattern.

    • Look at the Baseline for the number of executions and total CPU burn and calculate the average per execution

    • Look at the Comparison total CPU burn for the same query pattern

    • Total Comparison Burn - (Avg Baseline Burn * Comparison Executes)


    You can see the actual formulas at work in the T-SQL procedure ReadTrace.usp_CreateComparisonData



    update ReadTrace.tblComparisonBatchPartialAggs
    set [ProjectedCPUDiff]      = [c.TotalCPU]      - (([b.TotalCPU]/[b.CompletedEvents])      * [c.CompletedEvents]),
        [ProjectedReadsDiff]    = [c.TotalReads]    - (([b.TotalReads]/[b.CompletedEvents])    * [c.CompletedEvents]),
        [ProjectedWritesDiff]   = [c.TotalWrites]   - (([b.TotalWrites]/[b.CompletedEvents])   * [c.CompletedEvents]),
        [ProjectedDurationDiff] = [c.TotalDuration] - (([b.TotalDuration]/[b.CompletedEvents]) * [c.CompletedEvents])
    where [b.HashID] is not null
      and [c.HashID] is not null
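    Outside of T-SQL, the same projection is just a few operations (a sketch of the formula, not the shipped RML code):

```python
def projected_diff(baseline_total: float, baseline_events: int,
                   comparison_total: float, comparison_events: int) -> float:
    """Comparison total minus what the baseline's per-execution average
    would project for the comparison's number of executions."""
    avg_per_exec = baseline_total / baseline_events
    return comparison_total - (avg_per_exec * comparison_events)

# Baseline: 100 ms CPU over 10 executes (10 ms each).
# Comparison: 250 ms CPU over 20 executes -> 50 ms more than projected.
print(projected_diff(100, 10, 250, 20))   # 50.0
```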



    Sent: Tuesday, January 25, 2011 6:01 PM
    To: PSS SQL Bloggers
    Subject: MSDN Blogs: Contact request: RML Tools: Estimated Comparison Differences


    Subject: RML Tools: Estimated Comparison Differences


    Hello, I have recently started to explore the RML tools for generating workloads and found them very straightforward to use thanks to the help documentation. However, one thing that is not clear to me is how the estimated values are obtained. For example, how are the values calculated for the "Estimated Comparison Differences" in the report table, specifically the "Estimated CPU, Duration, Reads and Writes Diffs"? Knowing this will help me derive meaningful conclusions from the report generated. Thank you. Simon

    Bob Dorr - Principal SQL Server Escalation Engineer


    I had an interesting conversation with some other support engineers and a customer as it relates to the MAXDOP setting in the workload group.

    Inquiry: The customer set MAXDOP=1 for the workload group, but when looking at the showplan the parallel operators still showed up. They were expecting MAXDOP=1 on the workload group to force the login to use only serial plans.   (In the end, SQL Server is using serial plans.)

    The plan output is expected because the workload group and resource pool settings don’t impact plan generation.   The resource governor is a RUNTIME application of the values.   The plan is generated for the server and cached so any session can use it; the MAXDOP is applied at runtime along with the CPU and other resource governor settings.

    When you look at the statistics profile output you will see the parallel operators are never executed (executes column) or capped at the MAXDOP setting for the workload group.

    The confusion came when the customer started using OPTION(MAXDOP 1) on the query and the plan output no longer showed the parallel operators.   This is because the sp_configure and OPTION(MAXDOP) options are seen at compile time, so a version of the plan can be safely compiled without any parallel operators, inserted into cache properly and later matched.

    For example, if you change the max degree of parallelism setting via sp_configure, as soon as you issue the reconfigure action the plan cache is emptied.

    To illustrate this I ran the query with and without OPTION(MAXDOP 1) and looked at what sys.dm_exec_query_stats entries were present after execution.  There are 2 entries present in the DMV, one serial and the other parallel.



    Select ….



    Select … option(MAXDOP 1)


    Serial plan snippet


    Parallel plan snippet


    Bob Dorr - Principal SQL Server Escalation Engineer



    Recently we worked with a customer who could not launch SSMS from a specific computer. We had heard about this issue in the past from a few others but never found the actual cause until now. So I am posting some of that information here, along with how we narrowed it down.

    Whenever you try to open SSMS you will immediately get the following error:


    After you click OK, the window disappears. This happens consistently. So we started off with the favorite tool from Sysinternals: Process Monitor. One key piece of information we found was the inability to load some key .NET Framework libraries, such as the following:


    By this time we had also learnt that only specific users were encountering the problem. This confirmed that it must be a specific permissions issue. So we started to check and compare all the permissions of this folder hierarchy. Some of the tools we used include AccessChk and AccessEnum from Sysinternals. These tools allow you to see the permission changes in a specific folder hierarchy. You can also use the built-in command line tool CACLS or the downloadable XCACLS to review and reset the permissions on various folders. These tools especially come in handy when you are dealing with system and hidden folders like the one we are dealing with here.

    In the non-working machine, we examined the properties of the file that encountered the “ACCESS DENIED” and noticed that this file did not have the EXECUTE permission for the built-in Users group.


    We tracked this lack of permissions all the way up to the C:\Windows\Assembly folder. When we compared the permissions of this folder with other machines where this was working fine, we noticed that all working configurations had the EXECUTE permission set for the Users group. Here is the CACLS output from a working machine.


    Once we added the execute permission for the Users group to everything below the Assembly folder, SSMS started working fine. You can use the CACLS or XCACLS command to reset the permissions on these folders to their defaults.

    Moral of the Story: Do not change permission settings on the system and critical folders whose default permissions are set by the Operating System or .net Framework unless you want to go on adventure rides like this.


    Suresh B. Kandoth


    WARNING:  The series is based on pre-release software so things could change but I will attempt to provide you with the best information I can! 

    The enterprise edition of SQL Server has provided page level repair for quite some time now.  

    For those not familiar with page level repair allow me to provide a brief overview.

    If a page is determined to be damaged (823, 824 - like a checksum failure) during runtime, the page is marked suspect and added to the suspect page table.  All attempts to access this page return an error, but the rest of the database remains usable.   The DBA can then restore a series of backups with the page repair option.   The page is recovered from the backups and returned to the database for use, online.  This restore sequence can be a daunting task, but it is a nice way to avoid a full restore.

    To help you with this type of restore SQL Server Denali is adding a 'Restore Assistant' to help you build various restore steps.   For example, when you elect to restore a database in management studio (SSMS) you can select the timeline option and use the sliders.


    AlwaysOn provides similar capabilities, online, to retain the high availability of your database.

    Page Damaged In Primary


    1. When a page becomes damaged (823, 824) on the primary, access to the page is prevented.   Attempts to access the page result in an error.
    2. A broadcast is made to all secondary replicas asking for a copy of the page at the primary's current LSN.
    3. When a secondary has redone all the log records needed to catch up to the primary, an image of the page is sent to the primary.
    4. The first image received by the primary replaces the damaged image; responses from the other secondaries are then ignored.

    This is essentially the behavior of page restore.   The secondary restores the page to the proper LSN level and responds to the primary with a valid copy.   The repair time is based on the time it takes for a secondary to complete the redo to catch up to the primary.
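    The "first valid response wins" exchange can be sketched as follows (hypothetical function and names; the real implementation lives inside the server):

```python
def repair_page_from_secondaries(primary_lsn, responses):
    """responses: iterable of (secondary_redo_lsn, page_image) tuples in
    the order the replies arrive at the primary.  A reply is usable only
    once that secondary's redo has caught up to the primary's LSN; the
    first usable copy replaces the damaged page and the rest are ignored."""
    for redo_lsn, page_image in responses:
        if redo_lsn >= primary_lsn:
            return page_image
    return None  # no secondary caught up; the page remains suspect

replies = [(90, b'stale'), (105, b'good'), (110, b'also good')]
print(repair_page_from_secondaries(100, replies))   # b'good'
```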

    Page Damaged On Secondary


    1. When a page becomes damaged (823, 824) on the secondary, access to the page is prevented.   Attempts to access the page result in an error.
    2. A request is made to the primary for the current copy of the page.
    3. The primary responds with the current copy of the page, which is held in memory, and only redo access to the page is allowed.
    4. Redo continues on the secondary and applies the log records.
    5. When redo reaches the LSN level of the page retrieved from the primary, the page can again be used by all workers.

    This is essentially the behavior of page restore.   The secondary restores the page and makes sure redo has advanced far enough in the log records to match the image.   When redo reaches the same LSN level the database is transactionally consistent and access to the page can again be granted.

    Bob Dorr - Principal SQL Server Escalation Engineer


    I've had some questions in the past regarding Reporting Services integration with SharePoint. For this post, I'm going to focus on SharePoint 2010. I'm also going to assume that you already have a SharePoint farm set up. The examples I'm going to use will be a full SharePoint Server installation, but the steps will be similar for SharePoint Foundation Server.

    Overview of Reporting Services and SharePoint Technology Integration

    Deployment Topologies for Reporting Services in SharePoint Integrated Mode

    Configuring Reporting Services for SharePoint 2010 Integration

    Let's start off with some key documentation that you can use for reference when you do this:

    Let's also talk about the setup that I will be walking through. I will have 4 servers: a Domain Controller, a SQL Server, a SharePoint Server and a server for Reporting Services. You may opt to have SharePoint and Reporting Services on the same box, which simplifies this a bit; I will point out some of the differences.

    The Reporting Services Add-In for SharePoint is one of the key components to getting integration working properly. The Add-In needs to be installed on each of the Web Front Ends (WFEs) in your SharePoint farm, along with the Central Admin server. One of the new changes with SQL 2008 R2 & SharePoint 2010 is that the 2008 R2 Add-In is now a prerequisite for the SharePoint install. This means that the RS Add-In will be laid down when you go to install SharePoint.


    This actually avoids a lot of issues we saw with SP 2007 and RS 2008 when installing the Add-In.

    SharePoint Authentication

    Before we jump into the RS integration pieces, one thing I want to point out about the SharePoint farm is how you set up the site, more specifically how you configure authentication for the site: Classic or Claims. This choice is important at the beginning. I don't believe that you can change this option once it is made; if you can, it would not be a simple process.

    NOTE: Reporting Services 2008 R2 is NOT Claims aware

    Even if you choose your SharePoint site to use Claims, Reporting Services itself isn't Claims aware. That said, it does affect how authentication works with Reporting Services. So, what is the difference from a Reporting Services perspective? It comes down to whether you want to forward User Credentials to the datasource.

    Classic - Can use Kerberos and forward the user's credentials to your back-end datasource (you will need to use Kerberos for that).

    Claims - A Claims token is used and not a windows token. RS will always use Trusted Authentication in this scenario and will only have access to the SPUser token. You will need to store your credentials within your data source.

    I'll look at the authentication pieces in later posts. For now I just want to focus on setup of RS. At this point SharePoint is installed on my SharePoint Box and setup with a Classic Auth Site on port 80. On the RS Server I have just installed Reporting Services and that's it.

    Setting up Reporting Services

    Our first stop on the RS Server is the Reporting Services Configuration Manager.

    Service Account:

    Be sure to understand what service account you are using for Reporting Services. If we run into issues, it may be related to the service account you are using. The default is Network Service. When I go to deploy new builds, I always use Domain Accounts, because that is where I'm likely to hit issues. For my server I've used a Domain Account called RSService.


    Web Service URL:

    We will need to configure the Web Service URL. This is the ReportServer virtual directory (vdir) that hosts the Web Services Reporting Services uses, and what SharePoint will communicate with. Unless you want to customize the properties of the vdir (i.e. SSL, ports, host headers, etc…), you should just be able to click Apply here and be good to go.


    When that is done you should see the following



    We need to create the Reporting Services Catalog Database. This can be placed on any SQL 2008 or SQL 2008 R2 Database Engine. SQL11 would work ok as well, but that is still in BETA. This action will create two databases, ReportServer and ReportServerTempDB, by default.

    The other important step with this is to make sure that you choose SharePoint Integrated for the database type. Once this choice is made, it cannot be changed.




    For the credentials: this is how the Report Server will communicate with the SQL Server. Whatever account you select will be given certain rights within the Catalog database, as well as in a few of the system databases via the RSExecRole. MSDB is one of these databases, used for subscriptions since we make use of SQL Agent.


    Once that is done, it should look like the following:



    Report Manager URL:

    We can skip the Report Manager URL as it isn't used when we are in SharePoint Integrated mode. SharePoint is our frontend. Report Manager doesn't work.

    Encryption Keys:

    Back up your Encryption Keys and make sure you know where you keep them. If you get into a situation where you need to migrate the database or restore it, you will need them.


    That's it for the Reporting Services Configuration Manager. If you browse to the URL on the Web Service URL tab, it should show something similar to the following.


    What happened? SharePoint is installed on my WFE and I finished setting up Reporting Services, but in this example Reporting Services and SharePoint are on different machines. Had they been on the same machine, you wouldn't have seen this error. We technically need to install SharePoint on the RS box, which means IIS will be enabled as well.

    Setting up SharePoint on the RS Server

    So, we need to do what we did for the SharePoint WFE. The first thing is to go through the prerequisite install. After that is done, start the SharePoint setup.

    For the setup I choose Server Farm and a complete install to match my SharePoint Box, as I do not want a standalone install for SharePoint.

    SharePoint Configuration

    In the SharePoint Configuration Wizard, we want to connect to an existing farm.


    We will then point it to the SharePoint_Config database that our farm is using. If you don't know where this is, you can find out in Central Admin under System Settings -> Manage servers in this farm.



    Once the wizard is done, that is all we need to do on the Report Server box for now. Going back to the ReportServer URL, we will see another error, but that is because we have not configured it through Central Administration.


    Reporting Services SharePoint Configuration

    Now that SharePoint is installed and configured on the RS server, and RS is set up through the Reporting Services Configuration Manager, we can move on to the configuration within Central Admin. RS 2008 R2 has really simplified this process: we used to have a 3-step process to get this to work, and now there is just one step.

    We want to go to the Central Administrator Web site and then into General Application Settings. Towards the bottom we will see Reporting Services.


    We will click on "Reporting Services Integration".


    Web Service URL:

    We will provide the URL for the Report Server that we found in the Reporting Services Configuration Manager.

    Authentication Mode:

    We will also select an Authentication Mode. The following MSDN link goes through in detail what these are.

    Security Overview for Reporting Services in SharePoint Integrated Mode

    In short, if your site is using Claims Authentication, you will always be using Trusted Authentication regardless of what you choose here. If you want to pass windows credentials, you will want to choose Windows Authentication. For Trusted Authentication, we will pass the SPUser token and not rely on the Windows credential.

    You will also want to use Trusted Authentication if you have configured your Classic Mode sites for NTLM and RS is setup for NTLM. Kerberos would be needed to use Windows Authentication and to pass that through for your data source.

    Activate feature:

    This gives you an option of activating the Reporting Services on all Site collections, or you can choose which ones you want to activate it on. This just really means which sites will be able to use Reporting Services.

    When it is done, you should see the following


    Going back to the ReportServer URL, we should see something similar to the following


    NOTE: If your SharePoint site is configured for SSL, it won't show up in this list. It is a known issue and doesn't mean there is a problem. Your reports should still work.

    So, that's it! I know, that makes it look simple and depending on your deployment/configuration, you may run into other issues. I'll try and cover some of those in later posts.


    Adam W. Saxton | Microsoft SQL Server Escalation Services

    0 0

    My colleagues asked me if 'squirrely' is a technical term, and for this post the answer is yes.  CSS is not going to deny support to customers, but SQL Server was not tested in this scenario, so you may have chased yourself up a tree; hence I use the term squirrely.

    SQL Server 2005 introduced snapshot databases and modified DBCC to create secondary snapshot streams for online DBCC operations.   The online DBCC creates a secondary stream of the database files that is SPARSE.  CSS has found that if a 3rd party backup utility or NT Backup is used against the database files, the SPARSE setting may get incorrectly propagated to the parent stream.   In the case of DBCC this is the original database file(s).


    • Create Database MyDB
    • DBCC checkdb(MyDb)   -- Completed without error
    • Utility like NT Backup touches the database files  (Incorrectly makes SPARSE sticky on the main file stream)

    The next time the database is opened (recovery, restart, etc…) the sparse attribute is detected by SQL Server and the status shown in sys.database_files is updated to indicate is_sparse = TRUE.

    use MyDB
    select is_sparse, * from sys.database_files

    If is_sparse is not equal to 0, it indicates that SQL Server is treating the file as a sparse file.  This causes alternate, inappropriate code paths to be used in areas such as auto grow.  Future releases of SQL Server may contain additional messages in the error log when this situation is encountered.
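    To scan the whole instance in one pass instead of database by database, a query along these lines against sys.master_files can be used (a sketch; files belonging to databases created with CREATE DATABASE ... AS SNAPSHOT are expected to be sparse and can be ignored):

```sql
-- Find any database files SQL Server is treating as sparse
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,
       is_sparse
FROM sys.master_files
WHERE is_sparse <> 0;
```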

    The case I am working on today shows a primary database file (not a snapshot or secondary sparse stream) using the Windows API DeviceIoControl to zero the contents during an auto grow.   This is not the normal code path, as DeviceIoControl is only used to zero the contents of sparse files.   The MSDN documentation associated with this control code (FSCTL_SET_ZERO_DATA) indicates that the processing may deallocate other locations in the file while handling the request.

    "If the file is sparse or compressed, the NTFS file system may deallocate disk space in the file. This sets the range of bytes to zeroes (0) without extending the file size."

    SQL Server does not support backup and restore of snapshot/sparse files, so when the primary database is treated as sparse, SQL Server is running in untested situations and the support boundaries blur.

    What Should I Do?

    Run a query against each of your databases and look for is_sparse <> 0.  For any files that are not part of snapshot databases, you need to copy the data out of the file, drop the file, create a new file and load the data.  I.e.: transfer your data to a new file.

    Then determine what utility is touching the file and propagating the sparse attribute and configure it to avoid the SQL Server files.

    [Sep 27 Update]

    One of the things I love about blogging is the interaction.  I have already had a great list of questions related to this post, so I would like to add a Q/A section.

    Are you saying that if any Snapshots exist when using these third party tools that it could cause the is_sparse flag to be turned on?
    [[rdorr]] No, it is a Windows-based issue.  When DBCC runs it creates a sparse stream.   SQL Server destroys the stream at the end of DBCC, but the sparse bit becomes 'sticky' and gets propagated to the primary stream.

    Is the problem because the snapshots exist _while_ those backups are taking place?
    [[rdorr]] No, the DBCC just needs to have been executed, so at some time the secondary stream existed.   It does not apply to snapshot databases (CREATE DATABASE ... AS SNAPSHOT).

    In the case of the DBCC activity – is the problem only going to happen if the DBCC is taking place while the third party utility kicks in?
    [[rdorr]] No it can occur after DBCC has completed successfully.

    You talk about copying the contents out of one file and putting them into another file -- i'm assuming you are meaning a new FileGroup and moving the objects into a new filegroup to transfer all the data to the new filegroup.
    [[rdorr]] Yes a new file in the same file group or a new file group.

    The question I have there is that if it is the filegroup that contains the system objects for the database, how do we get that information over into the new file?
    [[rdorr]] You would need a new database; you can't move system objects.

    Is it not possible to change the SPARSE attribute on the file system back so that SQL treats the file correctly?
    [[rdorr]] Not cleanly.  If you have such a situation the file system is tracking it as a sparse file and we have all kinds of unknowns.

    This sounds to me like a very dangerous situation that could easily result in data loss.
    [[rdorr]] Should not result in data loss.  NTFS tracks the correct allocations.  The problem is that sparse files are limited in size so you are running along and you can’t grow anymore or a backup may not restore.  This is where you get into possible data loss.
    !!! NOTE !!! This brings up a good point.  After the problem is corrected you should take a full backup.

    Is the problem only encountered during growth?   If so, should we disable growth until we can reset the file attribute or move the contents to another file(group)?
    [[rdorr]] Grow shows the behavior but the file is already getting tracked as sparse.

    Continued work on this has revealed SQL 2005, SQL 2008 and Denali differences.    Windows has published various KB articles on how to change a file at the NTFS level from sparse to non-sparse.   The basics are a file copy; copy expands the destination file (it does not retain the sparse attribute).

    FIX STEPS for SQL 2008

    !!! WARNING !!!  Take appropriate backups and precautions when attempting these corrections.  A failed step in this process could render the database unusable and you may lose data.

    For SQL Server 2008 you must detach the database, copy the files and attach the database to correct the NTFS and SQL Server state from is_sparse = 1 to is_sparse = 0.

    -- T-SQL: detach the database
    sp_detach_db 'dbISSPARSE'

    REM Command prompt: rename the sparse file, then copy it (copy does not retain the sparse attribute)
    ren dbISSPARSE.mdf dbISSPARSE.mdf.orig
    copy dbISSPARSE.mdf.orig dbISSPARSE.mdf
    fsutil sparse queryflag dbISSPARSE.mdf

    -- T-SQL: attach the database and verify
    sp_attach_db 'dbISSPARSE', 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\dbISSPARSE.mdf'
    select is_sparse, * from sys.master_files where name = 'dbISSPARSE'     -- is_sparse = 0: correct at NTFS and SQL level


    !!! WARNING !!!  Take appropriate backups and precautions when attempting these corrections.  A failed step in this process could render the database unusable and you may lose data.

    • Close the database files:  ALTER DATABASE MyDB OFFLINE
    • Validate the sparse setting using an elevated admin cmd prompt.  Examples are shown below.

    C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA>fsutil sparse queryflag dbTest.mdf
    This file is NOT set as sparse
    C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA>fsutil sparse queryflag dbTest.mdf
    This file is set as sparse

    Note: Files that are 'set as sparse' and are NOT part of databases created using 'CREATE DATABASE ... AS SNAPSHOT' should be corrected.

    • Rename the database file: ren MyDB.mdf  MyDBSparse.mdf
    • Copy the data into a new file.  (Note:  Do not copy with overwrite as copy will retain the sparse attribute): copy MyDBSparse.mdf MyDb.mdf
    • Use FSUtil to validate all files for the database are no longer sparse.
    • Bring the database back online: ALTER DATABASE MyDB ONLINE
    • Validate that is_sparse returns the expected outcome: use MyDB; select is_sparse, * from sys.database_files


    Take a full SQL Server backup of the database and make sure other backups are no longer causing the issue to re-occur.
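    A full backup can be taken with something like the following (the database name and backup path are illustrative):

```sql
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_Full.bak'
WITH INIT, CHECKSUM;
```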

    FIX STEPS for SQL 2005

    !!! WARNING !!!  Take appropriate backups and precautions when attempting these corrections.  A failed step in this process could render the database unusable and you may lose data.

    Use the Copy Database Wizard to copy the data into a new database, or upgrade to SQL Server 2008 and use the steps outlined above.   I tested sp_detach_db on SQL 2005 with sp_attach_db on SQL 2008, and the is_sparse status can be restored to the expected running state.

     Bob Dorr - Principal SQL Server Escalation Engineer 

    0 0

    Previous Post: SharePoint Adventures : Setting up Reporting Services with SharePoint Integration

    In the previous post, I walked through getting RS 2008 R2 integrated with SharePoint 2010. What I didn't touch on was how to get this working with Kerberos. Kerberos itself can be complicated, partly because you need to track so many things. And, as the deployment becomes more distributed, you have to track more things.

    A while back, I posted a blog post describing my Kerberos Checklist. I'll use this as we step through my SharePoint deployment to get Kerberos working in this environment.

    Before we get into the details, there is one piece I want to point out that is special with SharePoint 2010. The authentication model that you select for your site makes a big difference in whether this will work or not. SharePoint allows you to choose between Classic and Claims. If you choose to have a Claims site, you will not be able to get Kerberos to work with RS 2008 R2 when integrated with SharePoint 2010. If the site is Claims based, you won't be able to change it back either. Part of the reason why Kerberos won't work is because when we detect you are in a Claims site, we always go with Trusted Authentication from the RS Perspective. This means that a Windows Token will not be passed from SharePoint to the Report Server. An SPUser token will be passed instead. My next post will go into how you can determine if your site is Classic or Claims.

    That being said, let's dive in…

    For this setup, I only have 3 servers involved.

    Role                  Server Name      Service Account          Delegation Required   Custom App Settings?
    SharePoint            DSDContosoSP     DSDCONTOSO\spservice     Yes                   Yes
    Reporting Services    DSDContosoRS     DSDCONTOSO\rsservice     Yes                   Yes
    SQL Server            DSDContosoSQL    DSDCONTOSO\sqlservice    No                    No


    Service Principal Name (SPN)

    Because we are going to do Kerberos, we are going to need some SPNs. In the table above, we see that the SharePoint service account is DSDCONTOSO\spservice. This means that those SPNs will need to go on that user account (DSDCONTOSO\spservice) and not the machine account (DSDContosoSP). Had we been using LocalSystem or Network Service, the SPNs would have gone on the machine account. That's always how you figure out where the SPNs go: it's always based on the context that the service is running under.

    So, let's have a look at DSDCONTOSO\spservice. We'll use SETSPN to do that. Starting with Windows 2008, SETSPN ships with the operating system and is available directly from a command prompt.
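    For reference (the screenshots aside), the listing commands look like this; in a single-domain environment both forms target the same account:

```
setspn -l DSDCONTOSO\spservice
setspn -l spservice
```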


    We can see that there are no SPNs registered on the spservice account. You'll also notice that I can run SETSPN with or without the domain name. This is because I only have a single domain. If you had multiple domains, supplying the domain name tells SETSPN where you want to modify the account. This is really helpful if you have the same account name in multiple domains. Going forward, I will leave out the domain name.

    As a quick check, I also look at the Machine Account (DSDContosoSP). This is just a double check to make sure I won't run into a duplicate situation.


    Because SharePoint is a web application, we are interested in the HTTP SPN. We do not see one listed on the machine account. You can take note of the HOST SPN though. This will be found on any machine account; you should never see it on a user account. HOST SPNs get created when a machine account is created. The HOST SPN will cover the HTTP service if you are running within the context of the machine account (LocalSystem or Network Service). Had that been the case, we wouldn't have needed an HTTP SPN. But, because we are using the spservice user account, we will need to put an HTTP SPN on the spservice account.

    NOTE: Domain Admin permissions are required to add (-a) or delete (-d) an SPN. Anyone can list (-l) out an SPN.


    We added two SPNs to the spservice account: HTTP/dsdcontososp and HTTP/dsdcontososp.dsdcontoso.local. HTTP SPNs are based on the URL that you are going to use. In this case we are just using the machine name for the URL (http://dsdcontososp). Internet Explorer will convert this to the Fully Qualified Domain Name (FQDN) when it builds out the required SPN. So the SPN request for that URL would be HTTP/dsdcontososp.dsdcontoso.local. And we have just added that.
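    Spelled out, the add commands from the screenshot would be:

```
setspn -a HTTP/dsdcontososp DSDCONTOSO\spservice
setspn -a HTTP/dsdcontososp.dsdcontoso.local DSDCONTOSO\spservice
```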

    NOTE: HTTP SPNs should NOT have a port listed. They are purely based on the host within the URL, without the port number. This means that if you have two sites running on different ports, you should use the same service account for both, as you will have a shared SPN between the two. For example: http://dsdcontososp & http://dsdcontososp:5555 both use the following SPN: HTTP/dsdcontososp.

    What about the Netbios SPN? Well, I always add that for good measure. Hopefully it will never be needed. But, on the off chance that the name lookup fails, we will be covered. When we do the reverse name lookup to get the FQDN, we have to go out to the DNS Server to do that. If for whatever reason the DNS Lookup fails, we will just use the netbios name. So, the SPN would look like HTTP/dsdcontososp. The fact that we added both means I won't be hit by intermittent DNS issues and end users won't be interrupted unnecessarily. So, my take on it is to always add both the NETBIOS and FQDN SPNs.

    If we look at the spservice account now, we will see both SPNs that we added.



    We know that we are going from the SharePoint server to the Report Server. In order for credentials to be forwarded from SharePoint to RS, we need to give the Service Account permission to delegate. By default, this is disabled.

    NOTE: Domain Admin permissions are required to modify delegation settings on an account.


    NOTE: The delegation tab will only be visible if SPN's are present on that account.

    The Delegation Tab of the account is where we will find these settings. Here is how the options break down:

    • Do not trust this user for delegation: No trust. We cannot delegate.
    • Trust this user for delegation to any service (Kerberos only): Full trust. We can delegate to any service.
    • Trust this user for delegation to specified services only: Constrained delegation. Requires you to list the services that we can delegate to in the list below the radio buttons.
        • Use Kerberos only: Constrained delegation with the Kerberos protocol only.
        • Use any authentication protocol: Constrained delegation with protocol transitioning. Useful on the Claims side of things.

    For this example, I'm not going to go into the Constrained Delegation side of things. I will do that in a later post. We will just stick with Full Trust.

    So, I select "Trust this user for delegation to any service (Kerberos Only)". Please take into account that constrained delegation is the more secure option, but it also comes with restrictions as a result. Stay tuned for more information about that.


    SharePoint Settings

    The SPN and delegation settings are really the basic Kerberos settings needed for any application. However, SharePoint has some app specific settings that we need to pay attention to. For this, we will head over to SharePoint's Central Admin Site.

    We will go to Application Management and then to Manage Web Applications.


    You will select the site you are interested in, in my case it is the SharePoint - 80 site, and then click on Authentication Providers.


    Click on Default.


    We want to choose "Negotiate (Kerberos)" and then hit "Save".


    This configures that SharePoint site to use Negotiate. Negotiate will always attempt to use Kerberos first if an SPN is available to use. We can test to see if this is working properly by going back to the SharePoint site. It should come up as normal without any prompts for credentials or 401.1 errors. If you encounter that, something isn't right.

    However, at this point our reports should no longer work. The underlying error here will be a 401.1 against the Report Server because it hasn't been setup for Kerberos.


    In the SharePoint ULS Log we will see:

    02/21/2011 08:16:46.68         w3wp.exe (0x0F44)         0x0F68        SQL Server Reporting Services         UI Pages         aacz        High         Web part failed in SetParamPanelVisibilityForParamAreaContent: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.GetSecureMethods()
    at Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.IsSecureMethod(String methodname)
    at Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.SetConnectionSSLForMethod(String methodname)
    at Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.ProxyMethodInvocation.Execute[TReturn](RSExecutionConnection connection, ProxyMethod`1 initialMethod, ProxyMethod`1 retryMethod)
    at Microso...        23c6017c-3d37-4b70-b378-d5dd875518f6

    Which brings us to the next stop in our journey…

    Reporting Services

    Service Principal Name (SPN)

    Our service account for Reporting Services is rsservice and not Network Service, so the SPNs will go on the rsservice account itself. Also, Reporting Services is a web application, so we are still sticking with an HTTP SPN. Let's check out what is on the service account and the machine account.



    Everything looks good here. Again, HTTP SPNs are URL based. So, we are going to create the SPN based on the URL, which you can get from the Reporting Services Configuration Manager under the Web Service URL tab.


    Based on that, our SPNs will be the following - HTTP/dsdcontosors and HTTP/dsdcontosors.dsdcontoso.local
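    The corresponding add commands would be:

```
setspn -a HTTP/dsdcontosors DSDCONTOSO\rsservice
setspn -a HTTP/dsdcontosors.dsdcontoso.local DSDCONTOSO\rsservice
```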


    And doing a listing of the rsservice account, we should see two SPNs on it.


    Reporting Services Settings

    I'm doing settings first instead of delegation to show that delegation may not be needed. However, there is a setting for Reporting Services that is needed in order for Kerberos to work successfully against Reporting Services. This setting resides in the rsreportserver.config file, which by default should be found at:

    C:\Program Files\Microsoft SQL Server\MSRS10_50.<instance name>\Reporting Services\ReportServer

    The setting that we are interested in is Authentication Type. If you look at the current setting, you may see different results.


    For mine, I see RSWindowsNTLM under the Authentication Types. This is because when I first set up Reporting Services, I used a domain account instead of the default Network Service. When you do this, it will default the setting to RSWindowsNTLM. If I had chosen Network Service as the account to use during setup, this setting would have reflected RSWindowsNegotiate, and you could later change to a domain account without this setting changing.

    All I need to do for mine to get Kerberos working is to change it over to RSWindowsNegotiate. You can either add it on top of RSWindowsNTLM or replace RSWindowsNTLM.

    NOTE: RSWindowsNegotiate is specific to Internet Explorer. Other browsers may need RSWindowsKerberos instead. You will need to test that to see what works best for your configuration.

    In my case, I just added it on top of RSWindowsNTLM
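    The resulting section of rsreportserver.config would look roughly like this (a sketch; other child elements of the Authentication section are omitted):

```xml
<Authentication>
  <AuthenticationTypes>
    <RSWindowsNegotiate/>
    <RSWindowsNTLM/>
  </AuthenticationTypes>
</Authentication>
```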


    At this point, my Hello World report should come up ok as I'm not using any data sources for it.


    However, the report where I do have a data source will fail, although with a different message this time.


    In the ULS log, we won't see an error by default, because the Reporting Services monitoring trace points have not been enabled within Central Admin. The error itself will be a "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". That error comes directly from SQL Server, whereas the 401.1 errors were web-related errors.


    In order for Reporting Services to forward credentials to a back-end data source, we need to enable delegation permissions on the service account. The data sources are processed within the Report Server Windows service and not SharePoint, so the SharePoint settings don't help us here.

    We will do what we did with the SharePoint account and enable Full Trust for the Reporting Services account.


    This in itself is not enough to get our report with the data source working, though. This just allows Reporting Services to forward the user's credentials to another service. That service we are forwarding to still needs to be set up properly. In this example it is SQL Server we are forwarding to, and it does not have its SPN configured yet. So, we will still fail with a Login Failed message from SQL. Reporting Services at this point should be good to go, though.

    Which brings us to the last stop in our journey…

    SQL Server

    Service Principal Name (SPN)

    I have previously written a blog post concerning the SQL Server SPNs. What SPN do I use and how does it get there?

    It goes through how SQL Server can make use of its ability to manage the SPN for you, and which SPN is needed based on which protocol you are trying to connect with. I won't go through all the details again here, so I will make a few assumptions.

    First, that the ability for SQL to manage its SPNs is not working, because I'm using a domain account and I haven't given it the permissions necessary for that to occur. You can also verify this in the SQL ERRORLOG:

    2011-02-21 08:58:01.40 Server The SQL Server Network Interface library could not register the Service Principal Name (SPN) for the SQL Server service. Error: 0x2098, state: 15. Failure to register an SPN may cause integrated authentication to fall back to NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies.

    Second, that I'm going to be connecting with the TCP protocol and not Named Pipes.

    The SQL service is using the sqlservice account. And because we are using the TCP protocol, the SPN will need the port number. In this case, it is a default instance, so we know the port will be 1433. So, our SPNs will look like the following for SQL: MSSQLSvc/dsdcontososql:1433 and MSSQLSvc/dsdcontososql.dsdcontoso.local:1433. You'll notice I'm doing both the NETBIOS and FQDN SPNs here, for the same reason as with the HTTP SPN. In this case, the SQL client connectivity components will do a reverse lookup on the server name to try and resolve the FQDN. So, with everything working as it should, it will always try to get the FQDN SPN even if you supply the NETBIOS server name in the connection string.

    The SPNs for SQL Server are derived from the Connection String that the client is using. The client in this case being Reporting Services. Reporting Services is a .NET Application, so it is using SqlClient to connect to SQL.

    We can see that there are no SQL SPNs registered on the service account or the machine account


    So, let's go ahead and add the SPNs.
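    With the account and port from above, the commands are:

```
setspn -a MSSQLSvc/dsdcontososql:1433 DSDCONTOSO\sqlservice
setspn -a MSSQLSvc/dsdcontososql.dsdcontoso.local:1433 DSDCONTOSO\sqlservice
```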



    Everything looks good on the SPN front. For good measure, you may want to use the SETSPN tool to search for duplicates. This is a new feature of SETSPN that was added in Windows 2008: the -X command. It will search the entire domain for duplicates. You should never have duplicate SPNs, as they will cause authentication errors.
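    The duplicate search is a single command:

```
setspn -X
```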


    Looks like we do not have any duplicate SPNs. At this point the report that we have with a data source to the SQL Server should run OK, as there are no application-specific settings that need to be set for SQL Server outside of the SPN.


    NOTE: Depending on how you have approached the setup, you may still encounter an error due to the fact that the failed Kerberos requests may still be cached. You can either wait for the cache to clear out, or you can restart the services to get it going. I had to recycle SharePoint and Reporting Services for it to start working on my box, as well as log off and back on (or just run klist purge on the client).
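    If you just want to clear the client's Kerberos ticket cache without logging off, klist can do it from a command prompt (klist ships in the box starting with Windows 7 / Windows Server 2008 R2):

```
klist purge
```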


    For your back-end server, you may not need to enable delegation. If the hops stop with this server, then we are done and do not need delegation. However, if this back-end server will be continuing on to another service, then delegation will be necessary if it will try to forward the Windows user credential.

    A great example of this with SQL Server is the use of a Linked Server. However, just the fact that you have a Linked Server doesn't mean that you need delegation. It is dependent on how you configure authentication on the Linked Server.


    If "Be made using the login's current security context" is selected for the Linked Server, then we will need to enable delegation for the SQL Service account.
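    In T-SQL, the "current security context" choice corresponds to @useself = 'TRUE' on sp_addlinkedsrvlogin (a sketch; the linked server name is hypothetical):

```sql
-- Map every local login to itself on the linked server; remote Windows
-- logins then require delegation, since their credentials are forwarded
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'REMOTESQL',
     @useself    = N'TRUE';
```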

    There are also other things that may require delegation from SQL. SQLCLR is one that might, depending on what you are doing. The general rule of thumb is that if anything within SQL is trying to reach out to another resource and will need to send the current user's credentials, then you will need delegation enabled on the SQL service account.

    In my case I'm not doing any of that, so I'm going to leave it alone.


    So, that's it. We went through each stop along the communication path (SharePoint, RS and SQL), and we validated the settings for each one as we got there. We also saw that certain things began to work as we enabled items. The report without the data source started working before SQL was set up, because we weren't reaching out to SQL. And, we also looked at when you need to enable delegation depending on whether that service needs to reach out to another service. For Reporting Services, had we not been hitting a data source, we would not have needed to enable delegation on the rsservice account, as I showed with the HelloWorld report. But when we need to access data, we have to have it if we want to use Kerberos. The other option would be to store the credentials within the data source.

    Hopefully this helps someone when trying to setup this type of deployment, or any deployment that requires Kerberos in order to work.

    Adam W. Saxton | Microsoft SQL Server Escalation Services

    0 0

    When integrating Reporting Services with SharePoint, the authentication scheme for the SharePoint site can affect how Reporting Services works. I've been asked many times how to tell if the SharePoint site is using Claims or is in Classic mode.

    Central Admin

    To determine whether you are using Claims or Classic authentication for your SharePoint site, go to Central Admin, Application Management and Manage Web Applications.


    When you click on a given web application, the Authentication Providers button will be enabled.

    For my port 80 application, which is using Classic authentication, you will see the following


    However, if we look at my port 5555 application, we will see something different


    For Claims applications, it will actually show you "Claims Based Authentication". When we are in Classic Mode, we will just see "Windows".


    We can also use PowerShell to determine the authentication mode that the applications are using. SharePoint 2010 has moved towards PowerShell for scripting, as most Microsoft server products are doing. I ran these scripts by opening the SharePoint 2010 Management Shell, but you could load the cmdlets manually through a normal PowerShell command prompt.

    The script is pretty simple. We can just make use of Get-SPWebApplication.


    $web = Get-SPWebApplication "<URL for Application>"
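    With a web application object in hand, the mode can be read from its UseClaimsAuthentication property (a sketch using my Classic site's URL; run from the SharePoint 2010 Management Shell):

```powershell
# True means the web application is Claims based; False means Classic mode
$web = Get-SPWebApplication "http://dsdcontoso"
$web.UseClaimsAuthentication
```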


    For my Classic Application (http://dsdcontoso), we see the following:


    And, for my Claims Application (http://dsdcontoso:5555), we see the following:


    Knowing which mode SharePoint is in can really help with your deployment of Reporting Services and help you avoid or account for certain behaviors and issues. Remember, when we are using Claims Based Authentication with the SharePoint web application, we will always use Trusted Authentication with Reporting Services, even if you select Windows Authentication.


    Adam W. Saxton | Microsoft SQL Server Escalation Services

    0 0

    WARNING:  This is based on pre-release software so things could change!

    I got started playing with the new Azure Reporting CTP and had an interesting experience.  I struggled getting to the Report Server URL for my Azure instance.  I think part of this was because it was the end of the day and my brain was fried.  Within the Azure Portal, it indicates the Web Service URL that you can use.  However, it just presents the host's Fully Qualified Domain Name (FQDN).  For example:

    So, in my excited haste, I typed just that FQDN into the browser.  And I'm using IE9, so it kept trying to do a Bing search for me.  After a while I started getting proxy errors saying the site refused my request.

    After some frustrated digging and question asking, I finally got the answer I needed.  The URL format is the following:

    NOTE: Be sure to replace abcdefgh with your correct server that you can find within the Azure Portal.

    I really should have known that.  I’m going to chalk it up to a brain freeze.  And I’m throwing it out there in case anyone else struggles with it.

    Of note, Mary Lingel on the Documentation Team was very quick to post this on the CTP Release Notes for people to find as well.  You can also have a look at the FAQ and Troubleshooting articles.

    Adam W. Saxton | Microsoft SQL Server Escalation Services

    0 0

    SharePoint 2010 introduced a new authentication architecture called Claims Based Authentication

    SharePoint Claims-Based Identity

    When using Claims Based Authentication for your SharePoint site, in order for authentication to flow between services, those services need to be claims aware. That means that they understand what to do with the Claims token, as it is not the Windows/NT token that we normally think of. Applications that are claims aware can transition a Claims token to a Windows token by way of the Claims to Windows Token Service (C2WTS). But this would be an explicit call from that application. Excel Calculation Services within SharePoint is a good example of a claims-aware service that makes use of C2WTS to transition from the Claims token to the Windows token when you refresh an Excel workbook from within SharePoint.

    Reporting Services 2008 R2, in contrast, is a good example of a Service that is not Claims aware and does not recognize the token. So, what does RS do? The following Books Online Documentation talks about it briefly.

    Claims Authentication and Reporting Services

    RS has two authentication modes within the Add-In for SharePoint:


    "Windows Authentication" will do what we expect. It will pass the Windows Token and can make use of Kerberos, if Kerberos is configured. See my previous Blog post about using Kerberos with the Report Server. When using "Trusted Account", we do not pass the Windows Token back to the Report Server. Instead, the RS Proxy will attach the SPUser Token from SharePoint into the header of the SOAP request.

    The Report Server will take the SPUser token and use that as the user's credential. We can validate the user using the SharePoint APIs to make sure they have access to things such as the RDL (Report). One thing you will notice though is that the actual User ID looks a little different from a Report perspective.

    Classic Windows Site w/ Windows Authentication:


    This shows up like we would expect with the Domain\User format.

    Windows Claim site w/ Trusted Authentication:


    This looks a little different. This is what it will show when we are in a claims site. Specifically, this is the format when using Windows Claims. The "w" before the pipe indicates that we are using Windows Claims. I will show in a later post what that looks like for a different provider within SharePoint. For now, I'm just going to stick with Windows Authentication. This is what we extract from the SPUser token because there is no Windows Token. We still see the security context of Domain\User, but we have a prefix on it and it is separated by a pipe "|".
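    As an illustration, a Windows claims identity encoded by SharePoint generally looks something like the following (the account name here is hypothetical, and the exact prefix characters can vary by claim type and provider):

```
i:0#.w|contoso\jdoe
```

    The "i" indicates an identity claim and the ".w" before the pipe is the Windows Claims marker; everything after the pipe is the familiar Domain\User.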

    I mentioned before that RS will use "Trusted Account" even when you choose "Windows Authentication" from the Integration settings. To prove that, let's run the same report again with "Windows Authentication" selected to see what the User ID looks like.



    You can see that we still don't see the Windows representation, but we see the Claims representation. A nice trick though is that you can leave the setting for "Windows Authentication" and have a Classic and Claims site side by side, and they will both work. You can have Kerberos working with the Classic site, while using the SPUser token with the Claims site.

    From a Data perspective this looks a little different. If we have a data source on a Claims site and that data source is set up to use Windows authentication, it will not work.


    We will get the following error:

    An error has occurred during report processing. (rsProcessingAborted)
      Cannot impersonate user for data source 'DataSource1'. (rsErrorImpersonatingUser)
        This data source is configured to use Windows integrated security. Windows integrated security is either disabled for
        this report server or your report server is using Trusted Account mode. (rsWindowsIntegratedSecurityDisabled)

    You'll notice that the exception is "rsWindowsIntegratedSecurityDisabled".

    When using a Claims Site with Reporting Services, you have to store the credentials within the Data Source. There is no way to pass the Windows Credential to the backend data source. This is because RS doesn't have the Windows Token. We only have the SPUser token, which is not even the Claims/SAML Token. RS doesn't know about the Claims aspect and also doesn't know about the Claims to Windows Token Service (C2WTS).

    So, you can still use reports and they will render under a Claims site, but the restriction is that you can't use Windows Authentication back to the data source. As a result also, this makes configuration a little simpler with Reporting Services as you don't really need Kerberos in this situation. You can avoid that headache if all you are going to have is a Claims site with Reporting Services Integrated.


    Adam W. Saxton | Microsoft SQL Server Escalation Services


    I was helping Bob Dorr out with a report he was creating.  He was trying to get some images and a column to hide under certain conditions.  He added the expression based on data using the Iif() function.  His complaint was that the column and/or image was still showing and wasn’t hidden.

    For my example, I’ll use the ExecutionLog and the Format value to display an image (Excel) if the format is EXCEL. The expression in question that we were using was something similar to the following.

    =iif("EXCEL" = Fields!Format.Value, true, false)

    This resulted in no image showing.


    What happened is that, without looking, we went on the premise that the expression was for the Show question and not the Hide.  Meaning we formatted the expression thinking that the resulting true or false was to answer the question “Do we show it?”.  It’s actually the opposite.  It answers the question “Do we hide it?”.

    We have actually seen this crop up from customers quite a bit.  When he asked me what was going on, the thought that popped into my head was “think negative”, which reminded me that I should look at it based on Hidden rather than Show.

    So, if we change the expression to the following

    =iif("EXCEL" = Fields!Format.Value, false, true)

    we see the following outcome


    Writing up this blog post, I actually spotted something I hadn’t seen before.  The dialog box actually states that the expression equates to Hidden rather than Show.


    That’s honestly the first time I actually saw that.  Normally I just blew right past it, which I think most people do based on the cases that I’ve had relating to this.

    The moral of the story is that when writing an expression for Visibility, make sure to keep in mind that it should equate to “Do we hide it?”.
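    Since the Hidden property just takes a Boolean, you can also skip Iif() entirely and use the comparison by itself (this uses the same Format field from the example above):

```
=Fields!Format.Value <> "EXCEL"
```

    This evaluates to True (hide) for every format except EXCEL, which is the same behavior as the corrected Iif() expression.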

    Adam W. Saxton | Microsoft SQL Server Escalation Services


    I recently ran into a problem with one of my own internal applications and it re-raised a philosophical question I have had before with customers.  There are really two sides to the question:
    1) Should I set my non-default instance to listen on TCP 1433?
    2) Should I set my default instance to listen on something other than TCP 1433?

    In both cases, I recommend "no". 

    Let me tackle the second question first since that is a simpler question.  I know that years ago one of the common security recommendations was to put your default instance on a port other than TCP 1433 to make it more difficult for attackers to find it.  However, I can say with complete comfort that if an attacker has a network trace that has a connection attempt to your instance, they can figure out the port on which your instance is listening very easily.  Even if you encrypt the conversation, the first five packets of the conversation are unencrypted because you cannot encrypt anything until you have contacted the instance.  Since those first five packets are the same for any connection attempt, it is very easy to detect them in a network trace.  Although I don't do this maliciously (I swear!), I do this on a regular basis when someone sends me a network trace to a named instance and neglects to tell me the port on which the instance is listening.

    In addition, if you change your default instance to a port other than TCP 1433, you now need to specify it in every connection string - either directly (servername,port#) or indirectly via client alias.  Given how easy it is to find this conversation in a network, I really cannot see the additional effort as being worth the negligible security benefit (security by obfuscation is never a great idea).

    The first question is a little bit more complex.  Setting your named instance to TCP 1433 does indeed give you the benefit of not having to specify the instance name or port number in the connection string.  This is because the SQL Server client libraries don't bother querying SQL Browser for the port number if they don't detect an instance name in the connection string.  Instead, they go straight to TCP 1433. 

    The downside to this approach shows up when you are working with application administrators who don't know anything about the SQL Server instance.  If they don't know that the instance is a named instance, they might configure their connection string as if the SQL Server instance was a default instance.  Since the instance is listening on TCP 1433, the attempt to connect will succeed.  The real problem comes later when you decide to change the port on which your SQL Server instance is listening (maybe you read my blog:)).  If you do, but don't change the client connection string, the client won't be able to connect.  And, because the client thinks your instance is a default instance, it won't query SQL Browser, so it will never find out the new port.  The only way to fix this is to create an alias on the client (tough to maintain over time) or to modify the connection string to specify an instance name.  Now, instead of just getting downtime on the SQL Server side, you have to take downtime on the client side, too.
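    To make the contrast concrete, here are the three connection-string shapes involved (the server, instance, and port values are hypothetical):

```
Default instance : Server=SQLPROD01;Database=AppDb;Integrated Security=SSPI;
Named instance   : Server=SQLPROD01\INST1;Database=AppDb;Integrated Security=SSPI;
Explicit port    : Server=SQLPROD01,2433;Database=AppDb;Integrated Security=SSPI;
```

    Only the middle form causes the client library to query SQL Browser for the port; the first assumes TCP 1433, and the third goes directly to the specified port without consulting SQL Browser at all.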

    In conclusion, there is a negligible security benefit to modifying the port for your default instance, and there is significant potential for outages if you set your named instance to TCP 1433.  Therefore, with the exception of setting a static port for your named instances, I recommend you just leave the port settings at their defaults.

    P.S.  Please don't set any of your instances to TCP 1434 either.  While not technically wrong, it is very confusing since SQL Browser listens on UDP 1434 and hardly anybody references the protocol (TCP vs. UDP) when talking about ports.  Making sure both sides of the conversation are talking about the same service can then get quite confusing if you put SQL Server on TCP 1434.

    Evan Basalik | Senior Support Escalation Engineer | Microsoft SQL Server Escalation Services


    Last November I was part of an exciting launch at the SQL PASS conference of the beta of a project called Atlanta (see my original post on this topic at

    I’m excited to tell you that what started as an idea several years ago, and then became a project with a codename (this is big at Microsoft), now has an official product name. The Release Candidate (RC) for System Center Advisor was launched today at the 2011 Microsoft Management Summit in Las Vegas.

    Like the beta, the RC for System Center Advisor is free for anyone to download and explore. Some of the new features that have been built into this release are:

    • Improvements to the User Interface
    • New SQL Server rules (bringing the total to 50 rules for SQL Server)
    • New rules for Windows Server, Windows Server Active Directory, and Windows Server Hyper-V
    • Email notifications when alerts are fired
    • Auto-resolution of alerts based on changes you make on your server (you are going to really like this one)

    Over the next two weeks I will be posting a series of blogs on more information about System Center Advisor including some of these features but more importantly to show you some of the rules for SQL Server and behind the scenes information on how and why we created them.

    Questions that come up about System Center Advisor might already be answered at the following product website:

    But to start with, let me show you how easy it can be to install this product. For a complete set of instructions on how to deploy System Center Advisor, see this link:

    First, you need to sign-up for the service at

    On this page, select “Create Account”. All you need is a valid Windows Live account. You will be asked to fill in a few small details about yourself and then you are ready to proceed to install this on servers you want to assess.

    For me, I’m already a System Center Advisor user (because I’ve been testing a bunch leading up to the launch) so I’ll pick the option here that I’m already on System Center Advisor.


    I’m immediately led to this page which is the main “Alert Dashboard” to show me what alerts System Center Advisor has found on servers I have selected to assess. The warning on this screen indicates I’ve not installed System Center Advisor on any servers associated with this account. It includes a hyperlink to do this.

    I’m prompted to download 2 things: 1) A certificate used to identify a specific gateway server with my account 2) The setup installation program. The name of the setup program is AdvisorSetup.exe.


    I want to install this on a server in the Microsoft network at my office in Texas that has SQL Server 2008 R2 installed on Windows Server 2008 R2. I know for this server it has internet access so I’ll be installing everything on this computer. When I launch AdvisorSetup.exe, the following describes the simple setup process to get this going. It took no more than 5 minutes to go through this on my server.


    After selecting Next I must accept the License agreement.


    Now I’m asked what folder to install the Software. By default it is %programfiles%\System Center Advisor


    Next I need to select whether I’m installing the Agent Service, Gateway Service, or both. I’ll talk more in future blog posts about what these services are. But for now think of the Agent service as the software that performs the assessment and the Gateway service as the software that communicates with the System Center Advisor service in the cloud. For my server I know it is connected to the internet (via proxy) so I’ll install both services. If you have a server that has no internet connectivity, you would first install the software and choose Gateway on a computer that does have internet access (can be through a proxy). Then, on the server you are assessing that is not connected to the internet, copy over AdvisorSetup.exe and select Agent. When you select Agent you are prompted to provide the name of the computer where you installed the Gateway. For me, I’m just installing them both on the same server.


    Since I chose Gateway I have to provide some information. First, use the Browse button to select the Certificate you downloaded from the web site as I described earlier. Second, you may need to provide a proxy server and port depending on your network infrastructure. I won’t cover the last checkbox for now so have left that blank.


    After selecting Next, I’m prompted to Install.


    In about 2-3 minutes, I received the screen that the install had completed


    OK, so now what? The setup was easy. What do I do now? Well, first I always like to go to the web portal and ensure the System Center Advisor back-end services have acknowledged my installation. Go back to the portal you logged in at and select the icon on the left side which looks like servers stacked on each other. You should see that a Gateway and Agent have been registered for your server name. You should also see a Last update time very close to when the install completed for the Gateway. This means the Gateway has successfully registered with the System Center Advisor service.


    System Center Advisor was built by default to be lightweight. So by default we schedule a daily check of the server you have installed the agent on and then transmit this to the System Center Advisor service in the cloud through the Gateway. This typically does not occur immediately but generally in off hours. So now that you have seen how simple it can be to deploy this, my next post tomorrow will show what happens the next day when I come into the office and see what System Center Advisor has found on my server.


    Bob Ward


    Yesterday I created a blog post on the Release Candidate launch of System Center Advisor:

    At the end of that post, I told you the gateway had successfully registered my server but I had not seen any alerts posted yet. So what happened today?

    I logged in today using the same LiveID account I used to install the gateway and agent on my server. When I signed in, my Alert dashboard looked like this (remember this server has SQL Server 2008 R2 RTM installed on Windows Server 2008 R2 RTM).


    I will spend other blog posts touring you through some of the user interface features but I want to first talk about some of the rules. For this post, let’s focus on the first 2 alerts that System Center Advisor has recommended as Critical Alerts (note the RED icon)

    The first Alert is for a Windows Update that can affect SQL Server performance:




    It is very possible that if you have deployed SQL Server on Windows 2008 R2 that you might not have seen this Windows update. It is not pushed as part of Windows Update as it is not considered a critical fix that should be pushed to all Windows customers. However, in CSS we have seen many customers contact us with performance problems exhibited by unexplained high CPU (mostly kernel or privileged time) running SQL Server on Windows Server 2008 R2. Most of these customers were using fairly heavy user loads and applications that required a large amount of I/O. We’ve seen enough calls that we thought it was important to detect whether a customer had this update applied.  Since System Center Advisor can detect what is running on the server, this update will only be recommended to customers it applies to (namely Windows Server 2008 R2 where this fix has not been applied).

    Note the section to the right of the description Information detected relating to this issue.  This information allows us to provide the user context on how we detected the alert. You can see from this table that we look at the version of NTOSKRNL.EXE to determine if the fix has been applied. This is because for this Windows fix, this is the file that requires updating. We display the version of the file detected and the minimum version of the file required to fix the problem.

    If you select Click here to view Solutions / Knowledge Base Article you will find that System Center Advisor takes you directly to the article that allows you to apply the fix:


    From here I can download and apply the fix, should I determine it makes sense for my server and environment. From our experience in CSS, if you are running any normal load of I/O for SQL Server on Windows Server 2008 R2, we recommend you apply this update. While this update is for Windows, the SQL CSS team recommended this alert go into System Center Advisor given the impact it has had on SQL Server users.

    The second fix represents something new for the RC release of System Center Advisor: Windows Server alerts. The first alert was also for Windows Server, but its origins came from research of SQL Server cases. The second alert is pure Windows.


    This Windows update addresses a common customer problem when attempting to use basic shared resources of a Windows 2008 R2 server such as files and printers. Again, we show the version of the file affected by this fix, mrxsmb10.sys, and the minimum version required to address the problem. The article link exists as well to apply the fix.

    I’ve shown you 2 critical alerts that are both Windows updates that can save you a lot of pain and suffering. You should know that we carefully consider which updates we believe you should apply even if you are not seeing any problems. We pick updates that we’ve seen help many customers, so that you can avoid the pain they went through by applying them to your server.

    Tomorrow I’ll show you a great new feature of System Center Advisor before I show you more alerts: the ability of System Center Advisor to detect that you have resolved a problem described in an alert. We call this the “autoclose” feature. So what I’m going to do now is apply the updates recommended by these two alerts and see if the alerts still appear tomorrow on my dashboard.


    Bob Ward


    Yesterday I posted that System Center Advisor had found several alerts for me to look at the day after I installed it:

    We focused on the top 2 “Error” alerts which were recommended Windows updates. I decided to go ahead and apply these fixes last night to this server.

    Today when I came into the office, I signed into System Center Advisor again and found these 2 alerts were gone. Before I show you what happened, I wanted to point out another great advantage of System Center Advisor being a cloud service.  Today I didn’t even log on to the server where I had installed System Center Advisor. I signed in on my laptop and used the same LiveID I used when installing System Center Advisor on that server. Anywhere I have internet connectivity I can view the current status of my servers.

    When I signed in,  this is what my initial page looked like:


    If you have been reading my blogs, you will see that there are no red icons. No errors or “critical alerts”. Where did they go?

    Notice the columns of this grid. To the right you can see a column called Status. Note that all of the alerts have a Status = Active.

    Let’s zoom in a bit and look at the column header for status


    The “funnel” icon symbolizes the ability to “filter” on status values. If we click this icon, the following appears:


    By default we only show you Active alerts. Let’s change this and uncheck Active and check Closed. The Alert “grid” now changes to this:


    The two alerts I’ve highlighted are closed alerts. Who closed them? System Center Advisor did this automatically. In this RC release, a feature was added that detects when an alert condition has been resolved and will mark the alert closed. Now you know you have addressed the problem in the alert. This is a really nice feature. Now your alert grid can become a “todo list” of actions to take on recommended alerts. As you work through to resolve these, System Center Advisor will close them and save that state in the backend database so you can review a history of these. In fact, the other alerts on the closed list are from earlier tests I had run on a different server back in the Atlanta beta.

    I’ve shown you today how Atlanta can detect when you have taken action on alerts you believe make sense for your environment. Next week we will look at other rules Atlanta has detected on this server and other features you can use to manage the recommendations System Center provides you to help manage your Windows and SQL Server systems.


    Bob Ward


    I’ve been in Seattle this week at an internal Microsoft conference so haven’t had time to check back on what other things System Center Advisor found out about my server from last week. I've got a few minutes before I catch a plane back to Texas, so while here at SeaTac airport I thought I would just pop open a browser and see what else System Center Advisor says about my Windows and SQL Server installation.

    I logged in with my Live ID, which brought up my list of active alerts (I call this my “todo list”). I wanted to get more organized with this list so I clicked on the Path header column to sort my alerts by “object”. The Path column is the context of the alert from an object perspective. If the alert applies to the computer, the path will be the server name. If the alert applies to the SQL instance, the path will be the SQL instance name. And finally if the alert applies to a specific database, the path will be the instance name and database name.

    The first alert at the top of my list is for the computer


    This rule is about Hyper-V. I do have Hyper-V enabled on this server and in fact have several “dormant” virtual machines I use from time to time. What the alert is saying is that if you are using the Hyper-V role, it is best not to have other roles enabled (in this case File Services).  This is a best practice for any production server using Hyper-V. You want to ensure your virtual machines have the maximum resources available so using other roles could possibly affect overall performance for virtualization.

    For me, I’m not all that interested in optimal performance of my virtual machines, so I’m OK with allowing File Services to be on this machine. But what do I do? I don’t want the alert to show up anymore and I don’t want to remove the File Services role. Fortunately, System Center Advisor allows me to Ignore this rule.

    At the top of the alert list is an Ignore button


    When I click Ignore I get this window


    So I have a choice about the scope of how I want to ignore this rule. I’ll leave the default for now and move on.

    The next rule recommends I apply a SQL Server update


    This rule looks interesting to me, and I think this update may make sense to avoid any possible problems with tempdb, especially since the errors that can occur appear to be related to database corruption (but really are not). I’ll leave this rule active and address it when I get back to Texas.

    The next rule says something about performance and tempdb


    I know from personal experience that concurrency problems with tempdb are very common. I’m not using tempdb much yet for this SQL Server, but I want to stay ahead of the game. I certainly want to address this problem. I’ve heard you need to have multiple files but am not sure how many to add. One clue that can help me know why this rule fired is in the section at the top corner of this image


    This information indicates that I only have 1 tempdb file but 4 logical processors. What we did with this rule is very simple and conservative. We know there is debate internally and in the community about how many files for tempdb you actually need. So we didn’t try to recommend an optimal number of files. What we do look for are SQL instances with only one tempdb file on a server with multiple logical processors. We know in these situations you will have contention you can avoid.
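    As a sketch of how you might check and address this yourself (the file path and sizes below are hypothetical; adjust them for your environment), you can compare the tempdb data-file count to the logical processor count and then add files with ALTER DATABASE:

```sql
-- Compare tempdb data files to logical processors
SELECT (SELECT COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS tempdb_data_files,
       (SELECT cpu_count FROM sys.dm_os_sys_info) AS logical_processors;

-- Add a second data file (hypothetical path and sizes; keep all tempdb files sized equally)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```

    Adding the file takes effect immediately, though the restart of the instance re-creates tempdb with the configured files.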

    I’d like to address this as well when I get back to Texas so I’ll leave this active.

    Yikes. Last call for my flight. I yearn for the blue sunny skies of Texas, so I don’t want to miss it.  One thing I noticed, though, when connecting is that I received an email to my live account from System Center Advisor. I wonder what it is for. I’ll talk about that email and more alerts in my next post

    Bob Ward, Microsoft
