CSS SQL Server Engineers

Working with Power View: Click to Interact is not available


I was working on a Power View report for our internal business numbers and we are making use of the Export to PowerPoint feature to include them in our slide deck.  If you aren’t familiar with this feature in Power View, when you are in the report, click on “file” and then “Export to PowerPoint”.  This will go through each slide you have in the report and generate a PowerPoint slide for each one.

[screenshot]

 

When you run the presentation within PowerPoint, you should see “click to interact” in the lower right.

[screenshot]

One of my colleagues tried opening the PowerPoint and mentioned that he didn’t see the “click to interact”.  He just had what looked like an image snapshot on the slide.

I had him go to the SharePoint site to try it out, and we were prompted to install Silverlight.  Turns out he didn’t have it installed.

After Silverlight was installed, the “click to interact” was available. 

Within the slide itself, the content uses the Silverlight hosted control (XcpControl2), which points back to the SharePoint server, so if Silverlight is not installed, it won't function properly.

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton


Working with Power View: Some controls on this presentation can't be activated


When one of our Managers opened up the PowerPoint slide deck that Power View generated, he got the following error:

Some controls on this presentation can't be activated. They might not be registered on this computer.

I had found some references from the RC0 timeframe indicating that if you had PowerPoint x64 you would get this issue because the Silverlight 5 x64 bits had not been released yet, so there was a platform mismatch.  However, in this person's case, he was running the 32-bit version of PowerPoint.

When he checked the version of Silverlight, he discovered that he was running version 4.1.

I pointed him to the Silverlight 5 download to install the latest bits.  After that was done, everything worked as expected.

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

Behavior Change When Handling Character Conversions in SQL Server’s ODBC Driver (SQL 2012 – Version 11.xx)


Summary

The SQL Server 2012 ODBC Driver (SQLNCLI11.dll) updated its handling of SQL_WCHAR* (NCHAR/NVARCHAR/NVARCHAR(MAX)) and SQL_CHAR* (CHAR/VARCHAR/VARCHAR(MAX)) conversions. The change was made to accommodate the surrogate pair behavior outlined at: http://technet.microsoft.com/en-us/library/ms143179(v=SQL.110).aspx

 

“In versions of SQL Server prior to SQL Server 2012, string functions did not recognize surrogate pairs as a single character. Some string operations – such as string length calculations and substring extractions – returned incorrect results. SQL Server 2012 now fully supports UTF-16 and the correct handling of surrogate pairs.”

 

ODBC functions such as SQLGetData, SQLBindCol, and SQLBindParameter may return (-4) SQL_NO_TOTAL as the length/indicator parameter when using the SQL Server 2012 driver, where prior versions of the SQL Server ODBC driver returned a length value (which may not have been correct in all cases).

 

More Information

 

SQLGetData Behavior

Many Windows APIs allow you to call the API with a buffer size of 0 and the returned length is the size needed to store the data returned from the API. The following pattern is common for Win32 programmers (minus error handling for clarity).

int iSize = 0;
BYTE * pBuffer = NULL;
GetMyFavoriteAPI(pBuffer, &iSize); // Returns needed size in iSize
pBuffer = new BYTE[iSize]; // Allocate buffer
GetMyFavoriteAPI(pBuffer, &iSize); // Retrieve actual data  

Some applications made the assumption that SQLGetData provides the same capabilities.  

SQLLEN iSize = 0;
WCHAR * pBuffer = NULL;
SQLGetData(hstmt, ...., SQL_C_WCHAR, (SQLPOINTER)0x1, 0, &iSize);         // Get storage size needed
pBuffer = new WCHAR[(iSize / sizeof(WCHAR)) + 1];                         // Allocate buffer
SQLGetData(hstmt, ...., SQL_C_WCHAR, (SQLPOINTER)pBuffer, iSize, &iSize); // Retrieve data

Note: The ODBC specification clearly states that SQLGetData can only be called to retrieve chunks of actual data. Calling it using this paradigm relies on behavior that was never guaranteed.

Let's look at a specific example of the driver change when using the incorrect logic shown above. The application is querying a varchar column and binding it as Unicode (SQL_UNICODE/SQL_WCHAR).

 

Query: select convert(varchar(36), '123')

SQLGetData(hstmt, ...., SQL_C_WCHAR, (SQLPOINTER)0x1, 0, &iSize); // Attempting to determine the storage size needed

 

Driver version < 11.xxx
   Length or indicator outcome: 6
   Description: The driver made the incorrect assumption that converting CHAR to WCHAR could be accomplished as length * 2.

Driver version >= 11.xxx
   Length or indicator outcome: -4 (SQL_NO_TOTAL)
   Description: The driver logic no longer assumes that converting from CHAR to WCHAR, or WCHAR to CHAR, is a simple multiply-by-2 or divide-by-2 operation.

As such, calling SQLGetData will no longer return the length of the expected conversion as prior driver versions may have provided. The driver detects the conversion between CHAR to WCHAR or WCHAR to CHAR and returns (-4) SQL_NO_TOTAL instead of the *2 or /2 behavior that could be incorrect.

 

The proper way to use SQLGetData is to retrieve the data in chunks. Pseudo code is shown below.

 

while( (SQL_SUCCESS or SQL_SUCCESS_WITH_INFO) == SQLFetch(...) )
{
   SQLNumResultCols(... &iTotalCols ...);

   for(int iCol = 1; iCol <= iTotalCols; iCol++)
   {
      WCHAR * pBufOrig = new WCHAR[100];
      WCHAR * pBuffer  = pBufOrig;

      SQLGetData(.... iCol ... pBuffer, 100 * sizeof(WCHAR), &iSize);      // Get the first chunk

      while( NOT ALL DATA RETRIEVED (SQL_SUCCESS_WITH_INFO / SQL_NO_TOTAL, ...) )
      {
         pBuffer += 99;   // Advance past the data already retrieved (the final WCHAR of each chunk is the NULL terminator)
         // May need to realloc (grow) the buffer when you reach its current size

         SQLGetData(.... iCol ... pBuffer, 100 * sizeof(WCHAR), &iSize);   // Get the next chunk
      }
   }
}

SQLBindCol Behavior

 

Query: select convert(varchar(36), '1234567890')

SQLBindCol(…, SQL_C_WCHAR, …) – only a buffer of WCHAR[4] was bound, expecting String Data Right Truncation behavior

 

Driver version < 11.xxx
   Length or indicator outcome: 20
   · SQLFetch reports String Data Right Truncation
   · Length indicates the complete length of the data returned, not what was stored (assumes a *2 CHAR to WCHAR conversion, which may not be correct for glyphs)
   · Data stored in the buffer is 123\0 - the buffer is guaranteed to be NULL terminated

Driver version >= 11.xxx
   Length or indicator outcome: -4 (SQL_NO_TOTAL)
   · SQLFetch reports String Data Right Truncation
   · Length indicates -4 (SQL_NO_TOTAL) because the rest of the data was not converted
   · Data stored in the buffer is 123\0 - the buffer is guaranteed to be NULL terminated

 

SQLBindParameter (OUTPUT Parameter Behavior)

 

Query: create procedure spTest @p1 varchar(max) OUTPUT
       as
       select @p1 = replicate('B', 1234)

SQLBindParameter(…, SQL_C_WCHAR, …) // Only bind up to the first 64 characters

 

Driver version < 11.xxx
   Length or indicator outcome: 2468
   · SQLFetch returns no more data available
   · SQLMoreResults returns no more data available
   · Length indicates the size of the data returned from the server, not what was stored in the buffer
   · The original buffer contains 63 B's and a NULL terminator - the buffer is guaranteed to be NULL terminated

Driver version >= 11.xxx
   Length or indicator outcome: -4 (SQL_NO_TOTAL)
   · SQLFetch returns no more data available
   · SQLMoreResults returns no more data available
   · Length indicates (-4) SQL_NO_TOTAL because the rest of the data was not converted
   · The original buffer contains 63 B's and a NULL terminator - the buffer is guaranteed to be NULL terminated

 

Handling CHAR and WCHAR Conversions

The SQL Server 2012 ODBC Driver provides you with several ways to accommodate this change. In fact, if you are handling blobs (varchar(max), nvarchar(max), …) the logic is similar.

· If you bind (SQLBindCol or SQLBindParameter) the data is saved or truncated into the specified buffer

· If you do NOT bind you can retrieve the data in chunks using SQLGetData, SQLParamData, … functions  

 

I asked 'Kamil Sykora - Senior Escalation Engineer' to help me explain this behavior. 

Drivers MAY return the total remaining length or they MAY NOT – driver specific. If the driver returns a length, the app is free to use that to allocate the buffer of an appropriate size. If it returns SQL_NO_TOTAL it should chunk it with a fixed size buffer. http://msdn.microsoft.com/en-us/library/windows/desktop/ms715441(v=vs.85).aspx

 

1. Places the length of the data in *StrLen_or_IndPtr. If StrLen_or_IndPtr was a null pointer, SQLGetData does not return the length.

· For character or binary data, this is the length of the data after conversion and before truncation due to BufferLength. If the driver cannot determine the length of the data after conversion, as is sometimes the case with long data, it returns SQL_SUCCESS_WITH_INFO and sets the length to SQL_NO_TOTAL.  

I tested this with the new driver; it supports the ODBC 3.8 extensions, allowing us to stream output parameter data using SQLGetData. This is not directly related to this change, but it allows us to chunk the output parameter without having to know the size ahead of time. Here's a sample with some output:

 

OdbcHandle& dbc = getDbc(handles);
wstring driverOdbcVersion(32, L' ');
dbc.check(SQLGetInfo(dbc.h, SQL_DRIVER_ODBC_VER, &driverOdbcVersion[0], driverOdbcVersion.size(), NULL));

bool use38 = false;
if (wstring::npos != driverOdbcVersion.find(L"3.8")) {
    use38 = true;
}

val.resize(100, L' ');
int paramToken = 222; // app-defined identifier, can be anything

if (use38) {
    // binding as SQL_WCHAR allows the converted remaining length to be returned
    //stmt.check(SQLBindParameter(stmt.h, 1, SQL_PARAM_OUTPUT_STREAM, SQL_C_WCHAR, SQL_WCHAR, 1234, 0, (SQLPOINTER) paramToken, 0, &ind));
    stmt.check(SQLBindParameter(stmt.h, 1, SQL_PARAM_OUTPUT_STREAM, SQL_C_WCHAR, SQL_CHAR, 1234, 0, (SQLPOINTER) paramToken, 0, &ind));
} else {
    stmt.check(SQLBindParameter(stmt.h, 1, SQL_PARAM_OUTPUT, SQL_C_WCHAR, SQL_CHAR, 1234, 0, &val[0], val.size(), &ind));
}

stmt.check(ret = SQLExecDirect(stmt.h, &sqlProc[0], SQL_NTS), "Convert SQLExecDirect 3 failed");
wcout << L"SQLExecDirect: " << mRet.str(ret) << endl;

ret = SQLFetch(stmt.h); // this might fail if there are no actual results returned by the stored proc
wcout << L"SQLFetch: " << mRet.str(ret) << endl;
if (SQL_SUCCESS != ret) {
    stmt.handleError(ret);
}

stmt.check(ret = SQLMoreResults(stmt.h));
wcout << L"SQLMoreResults: " << mRet.str(ret) << endl;

if (use38 && SQL_PARAM_DATA_AVAILABLE == ret) {
    int outParamToken = 0;
    stmt.check(ret = SQLParamData(stmt.h, (SQLPOINTER*) &outParamToken));
    wcout << L"SQLParamData: " << mRet.str(ret) << endl;

    if (outParamToken == paramToken) { // for multiple params, this needs to be checked so we retrieve the right parameter
        stmt.check(ret = SQLGetData(stmt.h, 1, SQL_C_WCHAR, &val[0], val.size(), &ind), "Convert SQLGetData (parameter, initial) failed");
        while (SQL_SUCCESS == ret || SQL_SUCCESS_WITH_INFO == ret) {
            wcout << L" Indicator: " << mInd.str(ind) << L", Value: " << val.c_str() << endl;
            stmt.check(ret = SQLGetData(stmt.h, 1, SQL_C_WCHAR, &val[0], val.size(), &ind), "Convert SQLGetData(parameter, loop) failed");
        }
    }
} else {
    wcout << val.c_str() << endl;
}

 

SQLExecDirect: SQL_SUCCESS
SQLFetch: SQL_SUCCESS
SQLMoreResults: SQL_PARAM_DATA_AVAILABLE
SQLParamData: SQL_PARAM_DATA_AVAILABLE
Indicator: SQL_NO_TOTAL, Value: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
Indicator: SQL_NO_TOTAL, Value: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB

Indicator: SQL_NO_TOTAL, Value: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
Indicator: 18, Value: BBBBBBBBB

I also inadvertently found that if you bind the parameter as SQL_C_WCHAR/SQL_WCHAR (see commented out in code above), you still get the remaining length. I believe this length should be correct even with surrogate pairs since binding it this way causes the parameter to be sent as Unicode by SQL Server, thus the correct length is available to the driver:

SQL Profiler event with SQL_C_WCHAR/SQL_WCHAR binding:

declare @p1 nchar(1234)
set @p1=N'BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB'
exec spTestOutVarchar @p1 output
select @p1

SQLExecDirect: SQL_SUCCESS
SQLFetch: SQL_SUCCESS
SQLMoreResults: SQL_PARAM_DATA_AVAILABLE
SQLParamData: SQL_PARAM_DATA_AVAILABLE
Indicator: 2468, Value: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB

Indicator: 116, Value: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
Indicator: 18, Value: BBBBBBBBB  

Bob Dorr - Principal SQL Server Escalation Engineer

 

 

BISM: value of the 'EffectiveUserName' XML for Analysis property is not valid


I’ve been working with Power View reports and one of the items I was setting up was using a BISM Data source connecting to an Analysis Services Instance in Tabular Mode.  When I went to spin up a new Power View Report against the BISM Connection, I got the following error:

[screenshot]

Within the details, we saw the following:

<Source>Microsoft SQL Server 2012 Analysis Services</Source><Message>The 'HSGCONTOSO\asaxton' value of the 'EffectiveUserName' XML for Analysis property is not valid.</Message>

Of note, the asaxton account is a Domain Admin and is listed as a Server Administrator on the Analysis Services service, and is a Farm Administrator for the SharePoint Farm.

Searching Bing didn’t pull anything up specific to BISM for this error.  However, I stumbled across the following documentation.

Grant Analysis Services Administrative Permissions to Shared Service Applications
http://technet.microsoft.com/en-us/library/hh230972.aspx#bkmk_ssas

The Service account I was using for the PowerPivot Shared Service was the spservice account.  After adding that account to the AS Server Admins, the report came up without error.

[screenshots]

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

Clarification about the two LPIM upgrade rules that did not FAIL


Consider the following scenario on an x64 system:
     You install SQL Server [2008 R2, 2008, 2005] Standard Edition.
     You grant the "Lock Pages in Memory" user right to the SQL Server service startup account.
     You did not enable trace flag 845; as a result, the SQL Server instance did not use locked page allocations.
     Now you attempt to upgrade this SQL Server instance to SQL Server 2012.
     The upgraded SQL Server instance starts using locked page allocations.
     There is an upgrade rule named "LPIM check for x64 installations" which is supposed to warn you about this change in behavior. But you notice that this rule indicates PASSED under the above circumstances.

Consider the following scenario on an x86 system:
     You install SQL Server [2008 R2, 2008, 2005] Standard Edition.
     You grant the "Lock Pages in Memory" user right to the SQL Server service startup account.
     You did not set up and configure the ‘awe enabled’ feature; as a result, the SQL Server instance did not use locked page allocations.
     Now you attempt to upgrade this SQL Server instance to SQL Server 2012.
     The upgraded SQL Server instance starts using locked page allocations.
     There is an upgrade rule named "LPIM check for x86 installations" which is supposed to warn you about this change in behavior. But you notice that this rule indicates PASSED under the above circumstances.

The upgrade rule shows a PASSED status due to a bug in the upgrade rule code. The upgrade rule incorrectly checks for the "Lock Pages in Memory" user right assignment.
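
If you want to confirm whether an instance is actually using locked page allocations, here is a minimal T-SQL sketch (sys.dm_os_process_memory is available in SQL Server 2008 and later; a non-zero value indicates locked pages are in use):

select locked_page_allocations_kb
from sys.dm_os_process_memory;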

A bug is filed and work is underway to fix this in a future Cumulative Update.

You can find more information about the change in behavior from the Knowledge Base article:
     2659143    How to enable the "locked pages" feature in SQL Server 2012
      http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143

Aaron Bertrand originally blogged about these two rules not working in the blog: http://sqlblog.com/blogs/aaron_bertrand/archive/2012/02/06/upgrading-to-sql-server-2012-with-lock-pages-in-memory.aspx

Thanks
Suresh Kandoth
Microsoft

CSS will be at SQL Rally


I’m happy to announce that the SQL Support group will be present at SQL Rally 2012 which will be held in Dallas, TX.  It helps that we have one of our major support centers in the area.  Our team will be speaking at the conference and we will also be available to talk to you in person at the SQL Server Clinic.

Here are the talks that will be given:

(SS12-404) What’s New for SQL Server 2012 Supportability
Friday 5/11 8:45am
Bob Ward

Bob will go through how you can support SQL Server 2012.  This will look at some items that are new along with some items that were improved.  This is definitely a talk a DBA does not want to miss!

(SS12-400) Digging into Reporting Services 2012 with SharePoint 2010
Friday 5/11 10:15am
Adam Saxton

I will be having a look at how Reporting Services integrates with SharePoint 2010 in this new release.  It is a significant change from previous versions and I’ll go through how it can affect you.

(SS12-402) SQL Server 2012: Memory Manager Rebooted
Friday 5/11 10:15am
Suresh Kandoth

Suresh is the person you want speaking to you about the internals of how SQL manages memory.  He will dive into the changes that occurred in SQL 2012 along with what tools you can use to monitor the server.

(SS12-403) Troubleshooting Performance on SQL Server 2012 with Extended Events
Friday 5/11 1:00pm
Rohit Nayak

Rohit has a great talk about Extended Events.  He will go through the new items that are present in SQL 2012 along with showing the new UI tools that you can use to manage Extended Events.

SQL Server Clinic

The SQL Clinic has become a staple of the PASS Summit.  We are happy that we will be able to offer the SQL Clinic at SQL Rally this year.  Both the SQL CAT team and the Support team will be present to answer your questions.  If you have a question about architecture and design, the SQL CAT team will be there to help you out.  If you have a problem with SQL Server, or a question that you don’t know the answer to, the CSS team will be there to help you.  In addition to the speakers listed above, we will have Ajay Jagannathan, Bala Shankar, Lisa Liu and Wayne Robertson at the Clinic.

We will be in the SQL Exhibit Hall where the meals and vendors will be.  Look for the 3 round tables in the corner of the room.  We will be present on Thursday and Friday.

I’m really looking forward to the conference and am excited to meet members of the SQL Community.  I’m sure we will see some familiar faces, and I’m also excited to meet some new faces.  Also, feel free to stop us if you just feel like chatting about your experience with SQL or the Support group.

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

SQL Rally 2012 recap


Getting back into the swing of things, I thought I would post my thoughts about how SQL Rally 2012 went, along with sharing some of the experiences.  I had posted before about what our presence would be like at Rally.  You can read that here.

I didn't get to attend last year's SQL Rally, so I guess I was a first timer to Rally.  It was definitely a smaller-scale event.  It didn't seem to have the full presence that PASS Summit does, but that is to be expected.

Sessions

The Support team had 4 sessions that were highlighted in the previous post.  We had a great showing at all 4 sessions and they were well received, even if one of them had some demo issues (I won't say whose that was…  /frown).  Unfortunately, the sessions were not recorded.

 

SQL Clinic

[photo]

 

The SQL Clinic has a great history with PASS.  It has evolved over the years from labs to what it is today.  This is really the chance for people to interact with Microsoft directly via the SQL CAT team and the SQL Support team.  Thursday was slow from my perspective.  I think part of this was the fact that, while we were advertising the SQL Clinic pretty heavily, people didn't really know what it was or that Microsoft was available to answer questions.  We changed some of our advertising throughout Thursday and in our sessions on Friday.  Friday had a much better showing and we had some great conversations.

We heard a lot of positive comments about the Clinic and I really hope that it was helpful for the people that we talked with.  I also hope that it helped to build some relationships. While the Clinic is primarily a technical function, it is also a networking function.  We did have a few people stop by just to introduce themselves.  I also had a few conversations about what it is like to work in Support.

 

A taste of the Clinic:

 

1. SQL Server 2005 32 bit running ~250+ databases having some issues on a 32bit version of Windows 2003 with 4GB of memory.

My first response was “Are you having memory issues?”

2. Questions around the new AS Tabular Model/PowerPivot/BISM and how it will work going forward

3. An OLAP 2000 Cube processing data from SQL 2005.  The Relational database was upgraded to SQL 2008 and now the Query Plan is huge when doing the processing

This led into a discussion about updating statistics and looking at indexes after upgrading from 2005 to 2008

4. Several questions relating to Merge Replication

5. Several questions around SSIS

Thanks to Tim Mitchell (Twitter | Blog) for assisting.  #SQLFamily

 

One of the best comments I saw about the SQL Clinic was the following Twitter post from Nancy Wilson (Twitter | Blog)

[screenshot]

We also had a few questions about how to get more involved in the SQL Community and with things like PASS.  The best advice I can give to that is to get on Twitter if you aren’t already on.  The SQL Community uses Twitter heavily and if you want to know what is going on, that is the place to find out.

 

 

 

[photos]

Other Pictures:

[photos]

 

Thanks to Tim Mitchell (Twitter | Blog) for some of the photos.

Also special thanks to PASS and the local NTSSUG group for helping us put this together. 

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

AlwaysON - HADRON Learning Series: Worker Pool Usage for HADRON Enabled Databases


I am on several e-mail aliases related to Always On databases (also referenced as Availability Group, AG, or HADRON) and the question of worker thread usage is a hot topic this week.  I developed some training around this during the beta, so I decided to pull out the relevant details and share them with you all.  Hopefully this will provide you with a 10,000-foot view of the basic worker thread consumption related to HADRON enabled databases.

The following is a drawing I created for the training, showing the HadrThreadPool on a primary (upper left) and a secondary (lower right).

[diagram]

Always On is different from database mirroring.  Database mirroring uses dedicated threads per database; Always On uses a request queue and a worker pool to handle the requests.  The HadrThreadPool is shared among all HADRON enabled databases.

Primary

On the primary, the active log scanner is the long pole.  When a secondary is ready to receive log blocks, a message is sent to the primary to start the log scanning.  This message is handled by a worker in the HadrThreadPool.  The startup and teardown of a log scan operation can be expensive, so the request will retain the worker thread, waiting on new log record flushes, until it has been idle for at least 20 seconds (usually 60 seconds) before returning the worker to the pool for reuse.  All other messages acquire a worker, perform the operation and return the worker to the pool.

Secondary

The expensive path on the secondary is the redo work.  Similar to how the primary waits for idle log scan activity, the secondary will wait for idle redo activity for at least 20 seconds before returning the worker to the pool.

Messages/Task Types

There is a wide set of messages exchanged between the primary and secondary, as depicted in the following diagram.

[diagram]

Task Types

TransportRouting
DbMsg
Conversation
BuildMsgAndSend
TransportNotification
Timer
EndpointChange
ArMgrDbOp
TransportVersioned
ArMgrDbSerializedAccess
SyncProgress
DbRestart
DbRedo
EvalReadonlyRoutingInfo
LogPoolTrunc
NewLogReady

 

HadrThreadPool - High Level Formula for HADRON Worker Needs

The formula is as follows but I have to temper this with 'ACTIVE.'

[formula diagram]

To keep the pool of workers fully utilized you have to have activity in the database.  If I have 100 databases in 25 different AGs but only 10 active databases at any point in time, I would pick a Max Databases value of around 15 when calculating the relative size of the HadrThreadPool used on my system.  If all 100 databases are active, then account for 100 Max Databases in your calculation.

How Do I See The Pool Workers?

The common tasks assigned to HADRON activities can be observed using the following query.

select *
from sys.dm_exec_requests
where command like '%HADR%'
   or command like '%DB%'
   or command like '%BRKR%'   -- Not HadrThreadPool, but Service Broker transport threads are needed

Relevant Command Types

HADR_BACKUP_LOCK_HOLDER
HADR_AR_MGR_STARTUP
HADR_AR_MGR_RESTART
HADR_AR_MGR_NOTIFICATION_WORKER
DB_MIRROR

XEvents

There are many new XEvents associated with HADRON.   The XeSqlPkg::hadr_worker_pool_task allows you to watch which HADRON tasks are executing and completing on your system so you can establish a specific baseline for concurrent task execution levels.
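
As an illustration only, here is a minimal sketch of an Extended Events session that captures this activity to a file.  It assumes the event is exposed as sqlserver.hadr_worker_pool_task; the session name and file name are placeholders you would choose yourself.

create event session [HadrWorkerPoolWatch] on server
add event sqlserver.hadr_worker_pool_task
add target package0.event_file (set filename = N'HadrWorkerPoolWatch.xel')
with (startup_state = off);
go

alter event session [HadrWorkerPoolWatch] on server state = start;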

Backup and File Streams Impacts

A backup activity on a secondary requires a worker from the pool on the primary to maintain the proper locking and backup sequence capabilities.  This could be a longer term thread and scheduling of backups can impact the worker pool.

The file stream data is not part of the physical LDF file so the actual file stream data needs to be streamed to the secondary.  On the primary the log block is cracked (find all File Stream records and send proper requests to secondary) and the file stream data is sent in parallel with the log blocks.   The more file stream activity the database has the more likely additional threads are necessary to handle the parallel file stream shipping activities on the primary and secondary (receive and save).

Max Usage

The formula uses a 2x factor calculation.  For a database under heavy activity, with frequent backups and file stream activity, a 5x factor would be the max use case calculation at full utilization.  Again, the database activity is key to the worker use and reuse.

Cap

The HadrThreadPool is capped at the sp_configure 'max worker threads' value minus 40.  To increase the size of the HadrThreadPool, increase the max worker threads setting.  Note: increasing the max worker threads setting can reduce the buffer pool size.
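
For example, a minimal sketch of viewing and raising the setting follows; the value 1024 is only an illustration (0 means SQL Server computes the value dynamically).

exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max worker threads';        -- view the current configured/running values
exec sp_configure 'max worker threads', 1024;  -- illustrative value only
reconfigure;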

Idle Workers in HadrThreadPool

A worker in the HadrThreadPool that has been in an idle state for more than 2 seconds can be returned to the SQL Server system pool.

[diagram]

 

Bob Dorr -  Principal SQL Server Escalation Engineer

How It Works: When is the FlushCache message added to SQL Server Error Log?


FlushCache is the SQL Server routine that performs the checkpoint operation.  The following message is output to the SQL Server error log when trace flag (3504) is enabled.

2012-05-30 02:01:56.31 spid14s     FlushCache: cleaned up 216539 bufs with 154471 writes in 69071 ms (avoided 11796 new dirty bufs) for db 6:0

2012-05-30 02:01:56.31 spid14s                 average throughput:  24.49 MB/sec, I/O saturation: 68365, context switches 80348
2012-05-30 02:01:56.31 spid14s                 last target outstanding: 1560, avgWriteLatency 15

Prior to SQL Server 2012, the trace flag had to be enabled in order to output the information to the SQL Server error log (the trace flag was the only way to obtain the output).
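
For reference, a minimal sketch of enabling the trace flag globally and confirming it (the CHECKPOINT is only there to generate some checkpoint activity in the current database):

dbcc traceon(3504, -1);   -- enable the trace flag globally
dbcc tracestatus(-1);     -- confirm which trace flags are enabled
checkpoint;               -- issue a checkpoint in the current database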

 

SQL Server 2012 adds an additional condition (is this a long checkpoint?) to the logic. If the trace flag is enabled, or the checkpoint is considered long (IsLong == TRUE), the message is added to the SQL Server error log.

 
Is Long Checkpoint: A long checkpoint is defined as a 'FlushCache / checkpoint' operation on a database that has exceeded the configured recovery interval.
 
If your server does not have the trace flag enabled (use dbcc tracestatus(-1) to check), the message indicates that the checkpoint process for the indicated database exceeded the configured recovery interval. If this is the case, you should review your I/O capabilities as well as the checkpoint and recovery interval targets.
 
Not meeting the recovery interval target means that recovery from a crash could exceed operational goals.
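
A minimal sketch of reviewing those targets is shown below; the database name and the 60-second value are placeholders only, and TARGET_RECOVERY_TIME is the SQL Server 2012 indirect checkpoint option.

select name, value_in_use
from sys.configurations
where name = 'recovery interval (min)';        -- instance-wide recovery interval target

alter database [YourDatabase] set target_recovery_time = 60 seconds;   -- illustrative per-database target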

Bob Dorr - Principal SQL Server Escalation Engineer

How It Works: DROP AVAILABILITY GROUP Behaviors


I just learned something new about the DROP AVAILABILITY GROUP command behavior.  The comment on TechNet alludes to how DROP works, but we definitely need to update our documentation.

Remove an Availability Group (SQL Server) - http://technet.microsoft.com/en-us/library/ff878113(v=sql.110).aspx

"When the availability group is online, deleting it from a secondary-replica causes the primary replica to transition to the RESOLVING state."

The statement is true but it leaves too much to the imagination - so here is How It Works.

Let's start with a fully functioning availability group (AG), for simplicity, with a 2 node cluster example (assume quorum using 3rd disk resource). 

[diagram]

The Windows cluster is communicating between Node 1 and Node 2 and keeping the cluster registry and cluster activities in sync.

The RHS.exe process is running, hosting the SQL Server cluster resource DLL, on the primary.  The SQL Server resource DLL is maintaining the proper connection to the primary instance and servicing the IsAlive and LooksAlive requests.

The Secondary is connected to the primary and receiving data. 

Note:  The primary does not connect to the secondary; only the secondary connects to the primary in Always On.  This will be an important fact for step #3 later.

Step #1 - The first step in dropping an AG is to take the AG offline. 
Step #2 - Remove the AG from the registered cluster resources.

[diagram]

Offline is coordinated using the Windows clustering APIs to trigger the necessary behaviors.

The change in resource state is signaled to the SQL Server instances.  No further changes should be allowed, so the databases on all nodes are taken to the restoring state.

Important Fact: Offline can occur for more than just DROP AVAILABILITY GROUP.  For example, if the cluster loses network communication the resource can be taken offline.

The offline state protects the databases from issues such as split brain and changes in general.

Once the AG is taken offline and the AG resource is removed from the cluster you are left with the databases in restoring state on all the impacted nodes.

WARNING: Running a restore with recovery only, on the specified instance, allows you to access the database and make changes.  Use this with caution: if you want to re-join the AG, only one of the copies can be recovered and modified.  Recovering more than one copy of the database would require you to merge changes (you create your own split-brain situation).
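
For reference, a minimal sketch of that recovery-only step (the database name is a placeholder, and remember to run it against only one copy):

restore database [MyAgDatabase] with recovery;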

Step #3 - Only Occurs when DROP AVAILABILITY GROUP is executed on the OLD/Original Primary

[diagram]

Step #3 recovers the database(s) that were in the AG but it is only executed on the OLD/Original PRIMARY.

It is only safe to execute the recovery on the old primary, and only when the DROP command was executed there.

Offline can occur for many reasons (DROP AG, loss of quorum, …), making the original primary SQL Server instance (the one executing the T-SQL DROP AVAILABILITY GROUP command) the only safe/known owner of the AG.

I pointed out a subtle fact (see the Note) during my description of steps #1 and #2 about the connections to the primary from the secondary.  When the DROP AVAILABILITY GROUP command is executed on a secondary, the communication with the primary SQL Server instance is lost during the offline phase.  (DROP AVAILABILITY GROUP is not recommended on a secondary node and should only be used for emergency purposes.)  The secondary no longer has a safe communication channel with the primary, so there is no clear way to tell the primary it is okay to recover and allow changes, and the secondary can't blindly assume that who it thinks is primary is really the current primary.

The primary is signaled to go offline by the cluster resource changes, and the primary can't be certain whether the offline indicated by the cluster resource change was because of a quorum loss, a forced failover, or a DROP AVAILABILITY GROUP taking place.  Following the safety protocol, offline only takes the database to the restoring state.

The confusion comes with the behavior of step #3.  

  • If the DROP occurs on the original primary the database is restored and changes can be made to the database.  This is safe because all secondaries maintain the restoring state.  You could recreate the AG and the changes made to the primary while the AG was dropped will get propagated to the secondaries.
  • If the DROP occurs on the secondary the primary is left in restoring state, requiring administrator intervention (restore with recovery only) to allow changes to the database, preventing the introduction of something like a split-brain.

Once I went back and looked at the technical flow of this, I understood the behavior, but it is worth noting that it triggers different outcomes.

  • If I drop the AG from the primary changes are allowed in the database without active High Availability (HA) capabilities (though they could be restored by recreating the AG.)
  • If I drop the AG from a secondary no changes are allowed so it may impact production applications, preventing split-brain from occurring. 

Bob Dorr -  Principal SQL Server Escalation Engineer

Could not load file or assembly System.EnterpriseServices


While setting up a SharePoint 2010 environment with Reporting Services 2012, I hit an interesting error. To explain my setup, this is a Classic Mode Auth site with Kerberos enabled for the site.

I had verified that I had the proper accounts configured and the Kerberos configuration in place. Also, keep in mind that even in a Classic Mode site, Claims is still going to be used with RS 2012 in SharePoint Integrated mode because Reporting Services is now a SharePoint Shared Service.

I have my Claims to Windows Token Service (C2WTS) set to a domain account, and I verified it was delegating to the proper services. The claims service account was also in the local Administrators group on the SharePoint server, because I've found that it needs to be there; I haven't yet found what specific right is required to avoid the local Administrators group.

When trying to create a Data Source for Reporting Services, when I clicked on Test Connection, I got the following error:

[screenshot]

Within the SharePoint ULS log, we see the following under the "Report Server Processing" category:

Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: , Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Cannot create a connection to data source ''. ---> System.IO.FileLoadException: Could not load file or assembly 'System.EnterpriseServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Either a required impersonation level was not provided, or the provided impersonation level is invalid. (Exception from HRESULT: 0x80070542) File name: 'System.EnterpriseServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' ---> System.Runtime.InteropServices.COMException (0x80070542): Either a required impersonation level was not provided, or the provided impersonation level is invalid. (Exception from HRESULT: 0x80070542)

0x80070542 = ERROR_BAD_IMPERSONATION_LEVEL

Of note, I got the same error from a Claims Mode Auth site.

While setting this up, I purposely took incremental steps to see what was really needed to get this working. As mentioned above, I had the claims service account within the local Administrators group, as I wanted to see if that would be all that is needed. Apparently not. I know we have documentation indicating that you also need the local policy right "Act as part of the Operating System", but I didn't include that outright.

The issue here is the fact that the Claims Service Account is not in the "Act as part of the Operating System" policy. I added the claims service account to that policy right:

[screenshot]

After that, I restarted the C2WTS Windows Service and performed an IISReset. Restarting just the C2WTS Windows Service was not enough to correct the error. Possibly due to caching.

[screenshot]

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

SharePoint Adventures : “The connection either timed out or was lost” with RS DataSource to SSAS


A customer had encountered an issue with their SharePoint 2010 / Reporting Services 2012 deployment.  They had set up a Data Source for Reporting Services that was configured to connect to a stand-alone Analysis Services instance.  When they clicked on “Test Connection” they saw the following:

 

[screenshot]

Within the SharePoint ULS Log, we saw the following – which was really the same error:

Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: , Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Cannot create a connection to data source 'AdventureWorksAS2012.rsds'. ---> Microsoft.AnalysisServices.AdomdClient.AdomdConnectionException: The connection either timed out or was lost. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace ---

When I saw this error, I did not attribute the error to an authentication issue as this usually indicates a network related issue.  I was actually able to reproduce the issue in my local environment.  Once I had it reproduced I grabbed an Analysis Services Profiler trace and saw the following.

[screenshot]

The minute I saw that, my mindset shifted to an authentication issue and I was pretty sure this was Kerberos related – which, based on our deployment of SharePoint 2010 and RS 2012, also equated to a Claims/Kerberos issue.  Some people think that because we are a Claims aware service now, Kerberos isn’t needed any longer.  What you will see below is that Kerberos is definitely in play and contributes to the issue.

So, I started with a network trace using Network Monitor 3.4.  After I collected the trace, I just filtered on the KerberosV5 protocol and applied that.  Here is what I saw:

[screenshot]

There were actually two things going on here.

  1. I was missing the MSOLAPSvc.3/BSPegasus SPN
  2. The Claims Service Account did not have Constrained Delegation setup to allow delegation to the OLAP Service.

I added my MSOLAP SPNs.  In this case it was requesting the NETBIOS name, so I added both:

[screenshot]

What surprised me on this was that I didn’t see any PRINCIPAL_UNKNOWN errors here, just the KDC_ERR_BADOPTION.  In the past, I usually ignored BADOPTION errors, and sometimes they can be a red herring.  The key here is the number.  The BADOPTIONs I typically ignored had a 5 code with them; these had 13.  Of note, this BADOPTION was because of item 2 above – the lack of Constrained Delegation configured within Active Directory.

The thing to remember about this deployment is that this is going to be Claims to start.  This means that we will be using the Claims to Windows Token Service (C2WTS).  There will be a Windows Service on the server that is affected and it will have an associated service account.  In my case, my service account is BATTLESTAR\claimsservice.  After adding the SPN, I allowed the Constrained Delegation option to the MSOLAP service.  This is done on the Delegation tab for the service account in question.  If you are using LocalSystem for the C2WTS service account, it would be on the machine account for the server that the C2WTS service is running on.

NOTE: In order to see the Delegation tab in Active Directory, an SPN needs to be on that account.  However, there is no SPN needed for the Claims Service Account.  In my case, I just added a bogus SPN to get it to show.  The SPN I added isn’t used for anything other than to get the Delegation tab to show.

[screenshots]

After I had that in place, I did an IISReset to flush the cache for that logged-in session and ran a network trace again – because I got the same error.

[screenshot]

You can notice that the BADOPTION is not present after the MSOLAP TGS request.  That’s because what we did corrected that one.  However, now we see a BADOPTION after the TGS request for the RSService.  This is something I ran into a few months back that a lot of people either aren’t aware of, or the configuration is so confusing that it is just missed.  Even though you set up the Claims Service with delegation settings, the actual Shared Service that we are using also needs these delegation settings.  In this case it would be Reporting Services.  So, we have to repeat what we did for the Claims Service with the Reporting Services account.

NOTE:  In this configuration, the Reporting Services service account will not have any SPNs on it, as they are not needed (unless you are sharing it with something else).  So, we’ll need to add a bogus SPN on the RS service account to get the Delegation tab to show up.

In my case, I’m sharing my RSService account with a native mode service, so I actually have an HTTP SPN on the account and the Delegation tab is available. 

NOTE: Because the Claims Service requires Constrained Delegation (due to the need for protocol transitioning), the RS Service MUST also use Constrained Delegation.  You can’t go from more secure to less secure; it will fail.

[screenshot]

Now let’s look at the network trace with these changes.

[screenshot]

You can see that we got a successful response on the 4th line without getting the BADOPTION.  We still see one more BADOPTION, but I didn’t concern myself with it, because…

[screenshot]

It was now working!!!

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

SharePoint Adventures : Access Denied errors when using RS 2012 with a Claims SharePoint Site


I actually ran into this issue back in late April.  I wanted to get this out sooner but life happened.  This issue has started to crop up a lot more as more people are starting to use Reporting Services 2012 in SharePoint Integrated mode.  Here is the run down.  This issue is specific to a Claims Auth SharePoint site.  I encountered it with both Reporting Services and PowerPivot, but I’m going to focus on Reporting Services, as there are other issues with PowerPivot in a Claims site and it isn’t going to work regardless.

You can see this Access Denied error in different areas; it essentially occurs when you try to work with anything in a document library.  For example, you can create a Reporting Services Data Source and it will perform the Test Connection operation successfully.  But, once you try to open it up after saving it, you will see:

 

[screenshot]

 

Throwing Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: , Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user 'BATTLESTAR\asaxton' are insufficient for performing this operation.; 

From Report Builder, if we try to reference that Data Source, we see the following:

[screenshot]

If we try to deploy a report from a SQL Server Data Tools project, we see the following:

[screenshot]

The error is effectively the same in all circumstances and occurs when we call the SharePoint API to get an item from the document library.  This is actually a SharePoint permission issue of sorts.

If you enable Verbose Logging for the SharePoint Foundation – General Category, you will see the following:

PermissionMask check failed. asking for 0x00010000, have 0x00000000

We can see that the PermissionMask is empty, but the operation requires OpenWeb (0x00010000) from the SPRights enumeration. So, this explains the Access Denied, as it looks like I’m coming in as Anonymous.  The other puzzling piece here is that the error messages show ‘BATTLESTAR\asaxton’ instead of the claims user, which looks different.

I looked at the UserInfo table of the content database and saw the following:

[screenshot]
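
If you want to check for the duplicate entries yourself, here is a minimal read-only sketch against the content database; the WSS_Content database name and the asaxton filter are placeholders, and directly querying a content database should be done for inspection only.

select tp_ID, tp_Login, tp_Deleted
from [WSS_Content].[dbo].[UserInfo]
where tp_Login like '%asaxton%';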

Somehow I’m in there twice, with both my claims user and my Windows user.  And, it looks like it is picking up the Windows user when I’m getting the access denied instead of the claims user.  Listing the users within SharePoint using http://admadama:81/_layouts/people.aspx?MembershipGroupId=0 also shows both users.

[screenshot]

You can click on the users until you see the user with the Windows User  instead of the claims user.

[screenshot]

At this point we can delete this user and leave the claims user.  The Windows user will still show in the UserInfo table, but it will show a value of 3 for tp_Deleted.

[screenshot]

After that, everything should start working as expected.

[screenshot]

Of note, this issue is being addressed in SP1 as indicated in Tristan’s response here.

 

***** UPDATE – 7/13 *****

Of note, this only seems to really affect the Farm Admin account.  The BATTLESTAR\asaxton account is what I used when I set up SharePoint and Reporting Services.  When hitting this error, if I add another user, for example to the Members group, it only shows up once.

[screenshot]

And, those items worked fine, whereas for asaxton they did not.  This should limit the impact to the organization.  If you think you are hitting this issue, you can use the people.aspx URL to check (http://admadama:81/_layouts/people.aspx?MembershipGroupId=0) or look at the UserInfo table to see if you see a claims entry and a Windows logon entry for the same user within the content database for that Claims site.

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

Where Did My Availability Group (AG) Go?


I ran into an issue where the AG was no longer present on a specific node of my cluster but I had NOT dropped the AG from another node in the cluster.  (http://blogs.msdn.com/b/psssql/archive/2012/06/13/how-it-works-drop-availability-group-behaviors.aspx)

  Note: Use the XEvent files captured in the LOG directory to confirm that a DDL command was not issued.
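
A minimal sketch of reading those files with T-SQL is shown below; it assumes the default AlwaysOn_health session file names in the instance's LOG directory (a relative pattern resolves to the LOG directory), so adjust the pattern for your install.

select object_name, event_data, file_name
from sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', null, null, null);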

There are protection mechanisms built into AlwaysOn Availability Groups to protect the data.  In this specific case the registry on Node 1 became damaged.  As a result, the cluster registry no longer matched the SQL Server metadata and my AG was removed from the node.

[screenshot]

Note:  The database(s) still exist on Node 1 but they are no longer joined to the 'My AG' availability group. This protects the data on Node 1 from unexpected/mis-configured changes.

It is recommended that you keep backups that allow a proper restore of the Availability Group configuration.

Bob Dorr -  Principal SQL Server Escalation Engineer

PowerPivot : When versions get mixed up…


I worked a case today where they were trying to get SQL 2012 Reporting Services installed in a SharePoint environment that also had SQL 2008 R2 PowerPivot installed.  This, by itself, is fine and wasn’t really causing the problem.  When they tried to open the Excel Workbook from SharePoint that had the PowerPivot data in it, we saw the following error:

[screenshot]

We found the following within the SharePoint ULS Log for the Excel Calculation Services category:

ExternalSource.ExecuteOperation: We exhausted all available connection information. Exception: Microsoft.Office.Excel.Server.CalculationServer.Interop.ConnectionInfoException: Exception of type 'Microsoft.Office.Excel.Server.CalculationServer.Interop.ConnectionInfoException' was thrown.    
at Microsoft.Office.Excel.Server.CalculationServer.ConnectionInfoManager.GetConnectionInfo(Request request, String externalSourceName, Int32 externalSourceIndex, Boolean& shouldReportFailure)    
at Microsoft.Office.Excel.Server.CalculationServer.ExternalSource.ExecuteOperation(Request request, ExternalSourceStateInfo externalSourceStateInfo, ExternalSourceStateInfo prevExternalSourceStateInfo, Int32 index, ConnectionInfoManager connectionInfoManager, ExternalDataScenario scenario, DataOperation dataOperation, Boolean verifyPreOperationConnection), Data Connection Name: PowerPivot Data

The error above indicates that Excel Services was trying to establish a connection using the data connections defined within the Excel workbook. One of these connections is the PowerPivot connection for the PowerPivot data stored within the workbook itself.  This connection/data source is called “PowerPivot Data”. When we looked at that connection, we found the following:

[screenshot]

You can get to this information by going to the Data Tab within Excel and clicking on “Existing Connections”. Then Right click on “PowerPivot Data” and edit the connection properties. Then go to the Definition tab.

The key here is the MSOLAP.5 Provider. This provider is the PowerPivot 2012 Provider. However, within the SharePoint environment, we had the SQL 2008 R2 version of PowerPivot. These versions are not compatible. Also, the MSOLAP.5 provider does not exist because we hadn’t installed it.  It comes with the PowerPivot 2012 install. So, the error above is really saying that it couldn’t find the MSOLAP.5 provider. Which in this case is true.

This is all about aligning the PowerPivot version of the workbook with the PowerPivot version of the Server. I think what may have caused this situation is that when you go to http://powerpivot.com and click on the “Download PowerPivot” button, it takes you to the SQL 2012 PowerPivot Add-in for Excel.

We have two options at this point:

  1. Upgrade the SharePoint environment to the 2012 version of PowerPivot
  2. Downgrade the Excel Add-in to the 2008 R2 version for PowerPivot and recreate the Excel Workbook.

SQL 2012 PowerPivot Add-in for Excel: http://www.microsoft.com/en-us/download/details.aspx?id=29074

SQL 2008 R2 PowerPivot Add-in for Excel: http://www.microsoft.com/en-us/download/details.aspx?id=7609

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton


SocketException when upgrading Reporting Services from SQL 2008/R2 to SQL 2012


I ran into an interesting case yesterday that brought back some memories.  The customer had a SQL 2008 R2 instance of Reporting Services that they were upgrading to SQL 2012.  They didn’t get far into the upgrade when they hit the following error which was present in the Setup Logs:

(16) 2012-07-24 05:52:09 Slp: Initializing rule      : Valid DSN
(16) 2012-07-24 05:52:09 Slp: Rule is will be executed  : True
(16) 2012-07-24 05:52:09 Slp: Init rule target object: Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers
(16) 2012-07-24 05:52:10 RS: Error validating Report Server database version: System.Net.Sockets.SocketException: The requested name is valid, but no data of the requested type was found
   at System.Net.Dns.GetAddrInfo(String name)
   at System.Net.Dns.InternalGetHostByName(String hostName, Boolean includeIPv6)
   at System.Net.Dns.GetHostEntry(String hostNameOrAddress)
   at Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers.IsLocalDbServer(String dbServer)
   at Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers.SetValidDatabaseVersion(DSN dsn)

(16) 2012-07-24 05:52:10 Slp: Evaluating rule        : RS_ValidDSN
(16) 2012-07-24 05:52:10 Slp: Rule running on machine: MATLKPACSAPP001
(16) 2012-07-24 05:52:10 Slp: Rule evaluation done   : Succeeded
(16) 2012-07-24 05:52:10 Slp: Rule evaluation message: The Report Server has a valid DSN.
(16) 2012-07-24 05:52:10 Slp: Send result to channel : RulesEngineNotificationChannel
(16) 2012-07-24 05:52:10 Slp: Initializing rule      : Valid Database compatibility level and successful connection
(16) 2012-07-24 05:52:10 Slp: Rule is will be executed  : True
(16) 2012-07-24 05:52:10 Slp: Init rule target object: Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers
(16) 2012-07-24 05:52:10 RS: Error validating Report Server database version: System.Net.Sockets.SocketException: The requested name is valid, but no data of the requested type was found
   at System.Net.Dns.GetAddrInfo(String name)
   at System.Net.Dns.InternalGetHostByName(String hostName, Boolean includeIPv6)
   at System.Net.Dns.GetHostEntry(String hostNameOrAddress)
   at Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers.IsLocalDbServer(String dbServer)
   at Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers.SetValidDatabaseVersion(DSN dsn)
(16) 2012-07-24 05:52:10 Slp: Evaluating rule        : RS_ValidDatabaseVersion
(16) 2012-07-24 05:52:10 Slp: Rule running on machine: MATLKPACSAPP001
(16) 2012-07-24 05:52:10 Slp: Rule evaluation done   : Failed
(16) 2012-07-24 05:52:10 Slp: Rule evaluation message: The report server database is not a supported compatibility level or a connection cannot be established. Use Reporting Services Configuration Manager to verify the report server configuration and SQL Server management tools to verify the compatibility level.

The minute I saw the SocketException and the Dns.GetAddrInfo, I remembered a blog post I had created a while back – 2009 to be exact.  This was here, although this one was a little different.  Also, of note, the original item I had filed in 2009 was related to the Reporting Services Configuration Manager and was subsequently fixed with 2008 SP3 as well as SQL 2008 R2 and SQL 2012.  That actually explains how the customer got into this situation to begin with, but I’ll come back to that.

The original problem was when you tried to use a server name in the format of SERVER,PORT (i.e. myserver,1433).  For a normal SQL connection using SQL Native Client or our ODBC/OLE DB providers, this would work.  However, on the Configuration Manager side of the house, we were only parsing for a “\” which is the usual character for when we have a named instance.  There are scenarios though where people don’t want to specify the Named Instance name and just want to use a port.  In the previous issue, it was due to a firewall restriction not wanting to open up UDP 1434 to allow for SQL Browser to work, which is needed for Named Instance resolution to a port.  The fix to the 2009 post was that code was added to also parse for the “,” as well as the “\”.

One thing we will notice is that the call stack in the exception above, differs from the call stack in the original post.

2009 Post:

ReportServicesConfigUI.RSDatabase.IsLocalDbServer(String dbServer)

This issue:

Microsoft.SqlServer.Configuration.RSExtension.DsnUpgradeBlockers.IsLocalDbServer(String dbServer)

So, the correction happened when we were down the ReportServicesConfigUI code path which is specific to the Reporting Services Configuration Manager.  We have a second code path which still had the original issue, which is invoked when we perform a Setup/Upgrade.

How did this even happen?

Now I come back to how we got here in the first place.  We corrected the Configuration Manager issue, which means that within Configuration Manager you can set your server name to the format of SERVER,PORT and it will work.  Think about this scenario.  The customer installed SQL 2008 R2 Reporting Services in Files Only mode, indicating that they want to configure it later.  After setup is complete, they open Reporting Services Configuration Manager and go through the motions to create a new database for Reporting Services, specifying the server name in the SERVER,PORT format.  In the issue I saw, they had changed the port of their default instance to 14330, so they used something like myserver,14330.  This worked. Then they went to upgrade to SQL 2012 and hit the error above.

How do we get around it?

So, while we know this issue can occur, how do we get around it so that we can perform a successful upgrade?  The good thing here is that Setup didn't get very far, so we still have our SQL 2008 R2 instance.

NOTE:  Before doing anything, take a backup of the Reporting Services Catalog DB (by default – ReportServer) and a backup of your Encryption Keys from the Reporting Services Configuration Manager. 
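If you would rather script that backup than click through the UI, a minimal sketch for a native-mode instance might look like the following (paths, server name, and password are placeholders; rskeymgmt.exe ships with native-mode Reporting Services):

# Back up the Reporting Services catalog database (names and paths are examples).
& sqlcmd -S "myserver,14330" -Q "BACKUP DATABASE [ReportServer] TO DISK = N'C:\Backups\ReportServer.bak'"

# Extract the Reporting Services encryption key to a password-protected file.
& rskeymgmt -e -f "C:\Backups\rskey.snk" -p "StrongPasswordHere"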

One of the easier things we could do is to do the following for a native instance:

  1. Use Reporting Services Configuration Manager to create a new Database on a different instance where we can use the traditional Server naming format without the port information.
  2. Perform the Upgrade
  3. Then use the Reporting Services Configuration Manager to point back to the original database, at which point it should be upgraded.

If you don't have access to another SQL instance, you may need to use the SERVER\INSTANCE format, or, if you are in the situation of the customer who changed the default port, change the instance back to the default port to perform the upgrade and then change it back afterwards.  That could potentially affect a lot of other items, so plan accordingly so you don't impact other areas of your business.

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton

How It Works: SQL Server (BCP, Database I/O, Backup/Restore, …) Reports Operating System Error (665, 1450 or 33) when writing to the file - BIG DATA


Suresh and I have blogged about these issues before but this post will put another spin on the information, as it applies to BIG DATA.

Previous Blog References

I ran into a 665 issue with a customer attempting to BCP data out of a database.  The scenario was that it worked if one instance of BCP was running, but if they started a second instance of BCP at the same time (using a where clause to divide the table and the queryout parameter), the BCPs would fail with the following error.

     SQLState = S1000, NativeError = 0  Error = [Microsoft][SQL Server Native Client 10.0]I/O error while writing BCP data-file

After reproducing it in the lab and debugging it, I found the problem to be the file system limitation error (FILE SYSTEM LIMITATION = 665) returned during the WriteFile call.  (I did file work items with the SQL Server development team to surface more error details so we don't have to troubleshoot with the debugger in the future.)

Tracking down the source of the problem led to the physical disk cluster allocations and the way NTFS tracks allocations for an individual file.   I highly recommend you read the following post; it describes the NTFS behavior really nicely: http://blogs.technet.com/b/askcore/archive/2009/10/16/the-four-stages-of-ntfs-file-growth.aspx 

To summarize, when you are at the maximum allocation state for a single NTFS file, you have an MFT allocation (1K) for the Attribute List Entries that points to a number of child records (1K allocations), each holding information about the starting physical cluster and how many contiguous clusters are allocated for that segment of the file.  The more of these you have, the more fragmented the file has become.

[figure: MFT record with Attribute List Entries pointing to child records containing mapping pairs]

A nice example I found was the following:

  • A mapping pair is (physical cluster, ## of clusters from that physical cluster).
  • A file segment starts at physical cluster 100 and uses 8 clusters.
  • The mapping pair entry is (100, 8).

The mapping pair information can be consolidated and compressed, so it is not a simple division calculation of MFT size / mapping pair size; it depends on the cluster locations, the ability to acquire contiguous clusters, and the compression of the mapping pair (for example, if the cluster location can be stored in fewer than 8 bytes, NTFS can compress the LARGE_INTEGER value).
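If you want to see the sizes involved on a particular volume (cluster size and file record segment size), fsutil reports them; for example (the drive letter is a placeholder):

# Reports NTFS details for the volume, including Bytes Per Cluster and
# Bytes Per FileRecord Segment (the 1K MFT record size discussed above).
fsutil fsinfo ntfsinfo D: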

The cluster size and MFT record sizes determine the maximum size an NTFS file can accommodate.  The following articles highlight these options and limitations.



  • Best Case - Contiguous Allocations
    [figure: contiguous cluster allocations]

  • Worst Case - Alternating Allocations between files
    [figure: alternating cluster allocations between files]

In the BCP scenario the customer had determined that running 6 instances of BCP on this target system maximized the rows per second transferred.   However, the more instances of BCP they enabled, the faster they encountered the file system limitation.   As you can see, the more allocations taking place on the same disk/LUN, the higher the chance of fragmentation and the more pressure is put on the mapping pair space.

Making it a bit more pronounced is that BCP uses a 4K buffer for its writes.  It fills the buffer, writes it, and repeats.  On a system that uses 4K clusters this aligns well, but it also allows the cluster allocations to alternate in 4K chunks between multiple copies writing to the same physical media.

Here are a few suggestions for handling Big Data.

Apply QFE (Requires Format)

  • Windows provides a QFE to extend the size of the MFT file record to 4K.  While this will not eliminate the problem, the size of the tracking tree is much larger.  Not only is the ATTRIBUTE ENTRY LIST larger, so too are the allocations for the MAPPING PAIRs.  Each of the two levels is 3x larger.

    Be careful:  Increasing the MFT file record size means any file will require 4K of physical storage.

    http://support.microsoft.com/kb/967351

Avoid NTFS Compression

  • Avoid using NTFS compression for the target files.  NTFS compression is known to use more mapping pairs.

Use 64K Clusters

  • Allow larger allocations for each cluster tracked.

Defragmentation

  • Reducing the physical media fragmentation always helps.  In fact, if a file starts to encounter the limitations, defragmenting the file may restore functionality.

BCP

  • Avoid causing fragmentation by using separate physical paths when available.
  • Use queryout and a where clause to grab chunks of data in reasonable file sizes to avoid hitting the limitations (see the sketch after this list).
  • Evaluate native vs. character mode BCP to determine the smaller output format for the data.

Backup

  • Use T-SQL backup compression to reduce storage space requirements.
  • Avoid increasing fragmentation when using stripes (Disk=File1, Disk=File2) by separating the physical I/O.  Placing stripes on the same physical media can increase the likelihood of physical fragmentation.

MDF/NDF/LDFs

It is unlikely you will encounter this on database files, but for completeness here are some aspects that apply to database files.

  • Avoid auto-grow activity that could lead to numerous fragments, but do not pick a growth size for the LDFs that can lead to concurrency issues during the growth.  It is best to avoid auto-grow and perform specific grows based on need.
  • Create files on drives known to be low on physical fragmentation.
  • Avoid files that will push against the limitations.  For example, you may need to restore, and the restore may not be able to acquire the same physical layout.
  • Use caution with read-only databases on NTFS compressed volumes.
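Here is a minimal sketch of the chunked export idea (the server, database, table, key ranges, and output paths are all placeholders, and native mode via -n is just one option):

# Export the table in chunks, each to its own file, ideally on separate physical paths.
# Names, ranges, and paths below are examples only.
& bcp.exe "SELECT * FROM MyDatabase.dbo.BigTable WHERE Id BETWEEN 1 AND 50000000" queryout "D:\Out1\chunk1.dat" -n -T -S "myserver"
& bcp.exe "SELECT * FROM MyDatabase.dbo.BigTable WHERE Id BETWEEN 50000001 AND 100000000" queryout "E:\Out2\chunk2.dat" -n -T -S "myserver"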

Using Contig.exe from Sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx) with the -a parameter, you can view the number of fragments used by a given file.
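For example, to analyze one of the output files (the paths are placeholders):

# Report the fragmentation of a specific file.
& "C:\Tools\Contig.exe" -a "D:\Out1\chunk1.dat"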

Error References

  • 33 - The process cannot access the file because another process has locked a portion of the file.  Note:  This usually occurs when NTFS compression is enabled on the target.
  • 665 - The requested operation could not be completed due to a file system limitation.
  • 1450 - Insufficient system resources exist to complete the requested service.

Bob Dorr - Principal SQL Server Escalation Engineer

How It Works: XEL Display in SQL Server Management Studio (SSMS) Row Limit


This is a simple issue but if you don't expect the behavior it can surprise you.

The grid used by SSMS to display XEL results is limited to a maximum of 1 million rows.

Note:  There is no warning dialog or flashing toolbar.

The figure below shows the number of rows displayed vs. the total number of events.

[figure: rows displayed in the grid vs. total events in the file]


By design, an internal display filter always includes TOP 1000000.

You may adjust other filter criteria, such as the time range and event type, and up to the first 1 million rows meeting the filter criteria are displayed.
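If you need more than the first million events, you can query the .xel file directly instead of relying on the grid.  A minimal sketch (the file path and instance name are placeholders, and it assumes the SQLPS/Invoke-Sqlcmd module is available):

# Count events server-side via sys.fn_xe_file_target_read_file; no display limit applies.
Invoke-Sqlcmd -ServerInstance "myserver" -Query "SELECT COUNT(*) AS total_events FROM sys.fn_xe_file_target_read_file(N'C:\XEL\MySession*.xel', NULL, NULL, NULL);"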

Bob Dorr - Principal SQL Server Escalation Engineer

PowerShell and AlwaysOn - Gotcha - Exception setting "ConnectionString": "Keyword not supported: 'applicationintent'."


Here is an issue I saw come across an alias that is a gotcha!

I’m running into a problem connecting to an AlwaysOn read-intent secondary and I was wondering if someone could help me out.  I have the .Net Framework 4.5 installed and the newest SQL Client install for SQL Server 2012.  Running this command from the server where SQL Server 2012 is installed works fine:        

 

$conn=new-object System.Data.SqlClient.SQLConnection

$ConnectionString = "Server=tcp:AGL,1800;Database=Contoso;Integrated Security=SSPI"

$conn.ConnectionString=$ConnectionString

$conn.Open()

             

However, as soon as I include the ApplicationIntent=ReadOnly switch, it fails.  I'm not sure how to specify that the SQL Native Client 11 should be used in the connection string (and not sure if I need to do such a thing).

$conn=new-object System.Data.SqlClient.SQLConnection

$ConnectionString = "Server=tcp:AGL,1800;Database=Contoso;Integrated Security=SSPI;ApplicationIntent=ReadOnly"

$conn.ConnectionString=$ConnectionString

$conn.Open() 

             

Here is the error I receive, which is odd.  The casing in the error message is completely different from what I specify.  Must be hard-coded somewhere.

 

Exception setting "ConnectionString": "Keyword not supported: 'applicationintent'."

At line:3 char:7

+ $conn. <<<< ConnectionString=$ConnectionString

    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException

    + FullyQualifiedErrorId : PropertyAssignmentException"

------------------------------------------------------------

Answer: You have to use a version of PowerShell that loads the correct framework. 

You can execute the following PowerShell command to interrogate the version of CLR loaded: $PSVersionTable.CLRVersion
 

You can either run the script from a PowerShell host that loads CLR 4.0 (for example, PowerShell 3.0 or later), or configure powershell.exe to load the .NET 4.0 runtime, so that the System.Data.SqlClient shipped with .NET Framework 4.5 (which understands the ApplicationIntent keyword) is the version actually in use.
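A quick sketch of verifying the runtime before opening the connection (this assumes .NET Framework 4.5 is installed, so the SqlClient that understands ApplicationIntent is what gets loaded under CLR 4.0):

# Confirm which CLR this PowerShell session loaded (a 2.x CLR will not accept ApplicationIntent).
$PSVersionTable.CLRVersion

# Under CLR 4.0 with .NET Framework 4.5 installed, the keyword is accepted.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=tcp:AGL,1800;Database=Contoso;Integrated Security=SSPI;ApplicationIntent=ReadOnly"
$conn.Open()
$conn.Close()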

 

Bob Dorr - Principal SQL Server Escalation Engineer

SharePoint Adventures : AccessDeniedException when a user tries to Add Subscription


I’ve been seeing a few cases that relate to a previous blog post which involved an AccessDeniedException when trying to perform report operations.  However, I ran into a different variation that involved the same error. 

The scenario was that the customer had users who were in the Viewer role within SharePoint.  This could also be the case if the user only had Read permissions within the SharePoint site. When they went to “Manage Subscriptions” for a report in a document library and then clicked “Add Subscription”, they received the following error:

Throwing Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: , Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user 'BATTLESTAR\badama' are insufficient for performing this operation.;   

At face value, my reaction was that this is expected.  I wouldn't expect a user in a Viewer role or with only Read rights to be able to create a subscription. This made sense, until I was pointed to the following documentation for Reporting Services 2012:

Compare Roles and Tasks in Reporting Services to SharePoint Groups and Permissions
http://msdn.microsoft.com/en-us/library/bb283182.aspx

Browser: View reports and self-manage individual subscriptions. Use the Visitors group to grant permissions to view reports and create subscriptions. The Visitors group has Read level permissions, which enables group members to view pages, list items, and documents.

Interesting.  Also interesting is that the customer indicated this was working with the 2008 R2 version.  So, something changed.  Either this was an intended change and the documentation just didn't get updated (which was my initial assumption), or an unintended change occurred that shouldn't have, and the documentation was still accurate.

After some debugging, I found that we had changed the permission type that we check for.

SPBasePermissions Enumeration
http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spbasepermissions.aspx

2008 R2 Version
ViewListItems - View items in lists, documents in document libraries, and view Web discussion comments.

2012 Version
EditListItems - Edit items in lists, edit documents in document libraries, edit Web discussion comments in documents, and customize Web Part Pages in document libraries.

This change was done for security reasons, which makes sense.  The thought behind this was also that, in most organizations, creating a subscription is a relatively high-privileged operation since it could have some performance impact and could affect the security of the data and stability of the server.

As a result, the referenced documentation above, for Reporting Services, is going to be updated to reflect this change.

 

Techie Details

To determine what was going on, I captured a dump using DebugDiag and set a Crash Dump Rule for all IIS processes on the box.  I set the Exception Rule to a .NET 1.0-3.5 exception and set the Exception Name to AccessDeniedException as that’s what was in our error message.

Once I have the dump and open it up using WinDBG, I issue the following command to load the .NET Debugger Extension that ships with the .NET Framework.  This is something you can do on your own machine.

0:055> .loadby sos mscorwks

Then we can verify that this dump actually grabbed the right error.

0:055> !pe
Exception object: 0000000103ad3dd0
Exception type: Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException
Message: The permissions granted to user 'BATTLESTAR\badama' are insufficient for performing this operation.
InnerException: <none>
StackTrace (generated):
<none>
StackTraceString: <none>
HResult: 80131500

We have the right error.  Let’s see what the call stack is.

0:055> !clrstack
OS Thread Id: 0x1128 (55)
Child-SP         RetAddr          Call Site
0000000011d8cc10 000007ff01779e15 Microsoft.ReportingServices.Library.BaseExecutableCatalogItem.ThrowIfNoAccess(Microsoft.ReportingServices.Interfaces.ReportOperation)
0000000011d8cc70 000007ff01768a60 Microsoft.ReportingServices.Library.GetExecutionOptionsAction.PerformActionNow()
0000000011d8cd10 000007ff017799c1 Microsoft.ReportingServices.Library.RSSoapAction`1[[System.__Canon, mscorlib]].Execute()
0000000011d8cdc0 000007ff017798d4 Microsoft.ReportingServices.Library.ReportingService2005Impl.GetExecutionOptions(System.String, Microsoft.ReportingServices.Library.Soap.ExecutionSettingEnum ByRef, Microsoft.ReportingServices.Library.Soap.ScheduleDefinitionOrReference ByRef)
0000000011d8ce30 000007ff0028d8a9 Microsoft.ReportingServices.ServiceRuntime.ReportServiceManagement+<>c__DisplayClassee.<GetExecutionOptions>b__ed()
0000000011d8ce70 000007ff002c7b9f Microsoft.ReportingServices.ServiceRuntime.ReportServiceBase.ExecuteWithContext[[System.__Canon, mscorlib]](System.Func`1<System.__Canon>)
0000000011d8cec0 000007fee835bd28 DynamicClass.SyncInvokeGetExecutionOptions(System.Object, System.Object[], System.Object[])

One thing to note here, telling us we are on the right track: you'll notice the GetExecutionOptionsAction call.  We saw this a few lines above the error in the SharePoint ULS log.

Call to RSGetExecutionOptionsAction().  

So, this lines up.  We see this calling into ThrowIfNoAccess, and there is a ReportOperation being passed in.  We can try to look at the stack objects to see if we can tell what the ReportOperation is that is being passed in.

0:055> !dso
OS Thread Id: 0x1128 (55)
RSP/REG          Object           Name
rbp              0000000103ad3168 Microsoft.ReportingServices.Library.ProfessionalReportCatalogItem
0000000011d8c938 0000000103ad5ac8 Microsoft.ReportingServices.Diagnostics.ContextBody
0000000011d8c940 0000000103ad59f8 Microsoft.SqlServer.SqlDumper.Dumper
...
0000000011d8cbb0 0000000103ad4088 System.String
0000000011d8cbc0 00000001039d50b0 System.String
0000000011d8cbc8 0000000103ad3368 Microsoft.ReportingServices.SharePoint.Server.SharePointSecurity
0000000011d8cbe8 0000000103ad4218 System.Object[]    (System.Object[])
0000000011d8cbf8 00000001039d50b0 System.String
0000000011d8cc00 0000000103ad3dd0 Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException
0000000011d8cc10 0000000103ad3dd0 Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException
0000000011d8cc18 00000001039d50b0 System.String
0000000011d8cc20 00000001409202b0 System.String
0000000011d8cc28 000000014048b5c0 Microsoft.ReportingServices.Diagnostics.Utilities.RSTrace
0000000011d8cc30 0000000103a2e410 Microsoft.ReportingServices.Diagnostics.ExternalItemPath
0000000011d8cc38 0000000103a2af30 Microsoft.ReportingServices.Diagnostics.CatalogItemContext
0000000011d8cc40 0000000103a2a040 Microsoft.ReportingServices.Library.GetExecutionOptionsAction
0000000011d8cc48 0000000103ad3168 Microsoft.ReportingServices.Library.ProfessionalReportCatalogItem
0000000011d8cc50 0000000103a25498 Microsoft.ReportingServices.Library.RSService
0000000011d8cc58 0000000103a2af30 Microsoft.ReportingServices.Diagnostics.CatalogItemContext
0000000011d8cc70 0000000103ad3168 Microsoft.ReportingServices.Library.ProfessionalReportCatalogItem

Unfortunately, nothing looks like the ReportOperation. At this point, we can't tell what right we are trying to validate or why it caused the AccessDeniedException, so we can drop to the code itself.  We can use the !savemodule command to output the .NET assembly, and then we can use something like Telerik's JustDecompile to have a look.

The first .NET Assembly we are interested in is ReportingServicesLibrary.dll.  We can find this within the dump this way.

0:055> lmvm ReportingServicesLibrary
start             end                 module name
00000000`68f80000 00000000`69190000   ReportingServicesLibrary 
    Loaded symbol image file: ReportingServicesLibrary.DLL
    Image path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\RSTempFiles\797da953fbf941e79d445a1791e8fa32\b8ec940d\7ebb764f\assembly\dl3\f0a7062f\000cc91d_1b14cd01\ReportingServicesLibrary.DLL
    Image name: ReportingServicesLibrary.DLL
    Using CLR debugging support for all symbols
    Has CLR image header, track-debug-data flag not set
    Timestamp:        Fri Apr 06 05:35:20 2012 (4F7EC6E8)
    CheckSum:         00213370
    ImageSize:        00210000
    File version:     11.0.2316.0
    Product version:  11.0.2316.0

We can then use the Module Start Address (00000000`68f80000) for the !savemodule command and give it a path to save the module.

0:055> !savemodule 00000000`68f80000 c:\temp\files\ReportingServicesLibrary.dll
3 sections in file
section 0 - VA=2000, VASize=208824, FileAddr=1000, FileSize=209000
section 1 - VA=20c000, VASize=570, FileAddr=20a000, FileSize=1000
section 2 - VA=20e000, VASize=c, FileAddr=20b000, FileSize=1000

Within JustDecompile, we are going to look for Microsoft.ReportingServices.Library.GetExecutionOptionsAction.PerformActionNow().  This is because it is calling the ThrowIfNoAccess, and we want to see if we can pick off what it is passing in for the parameter.

I skipped the load for Microsoft.ReportingServices.Diagnostics, but grabbed the assembly for Microsoft.ReportingServices.Interfaces

0:055> lmvm Microsoft_ReportingServices_Interfaces
start             end                 module name
00000000`697d0000 00000000`697e2000   Microsoft_ReportingServices_Interfaces   (deferred)            
    Image path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\RSTempFiles\797da953fbf941e79d445a1791e8fa32\b8ec940d\7ebb764f\assembly\dl3\c45b2c1c\00f9ce0d_bee8cc01\Microsoft.ReportingServices.Interfaces.DLL
    Image name: Microsoft.ReportingServices.Interfaces.DLL
    Using CLR debugging support for all symbols
    Has CLR image header, track-debug-data flag not set
    Timestamp:        Fri Feb 10 21:47:30 2012 (4F35E4D2)
    CheckSum:         0001FA3C
    ImageSize:        00012000
    File version:     11.0.2100.60
    Product version:  11.0.2100.60

0:055> !savemodule 00000000`697d0000 c:\temp\files\Microsoft.ReportingServices.Interfaces.dll
3 sections in file
section 0 - VA=2000, VASize=aa74, FileAddr=1000, FileSize=b000
section 1 - VA=e000, VASize=598, FileAddr=c000, FileSize=1000
section 2 - VA=10000, VASize=c, FileAddr=d000, FileSize=1000

From there we can expand the PerformActionNow() method and see the code.

internal override void PerformActionNow()
{
    ExecutionSettingEnum executionSettingEnum = ExecutionSettingEnum.Live;
    ScheduleDefinitionOrReference scheduleDefinitionOrReference = null;
    CatalogItemContext catalogItemContext = new CatalogItemContext(base.Service, base.ActionParameters.ReportPath, "report");
    CatalogItem catalogItem = base.Service.CatalogItemFactory.GetCatalogItem(catalogItemContext);
    catalogItem.ThrowIfWrongItemType(ItemType.Report, ItemType.LinkedReport);
    BaseReportCatalogItem baseReportCatalogItem = catalogItem as BaseReportCatalogItem;
    baseReportCatalogItem.ThrowIfNoAccess(ReportOperation.ReadPolicy); <—we are passing ReportOperation.ReadPolicy
    base.Service.ExecCacheDb.GetExecutionOptions(catalogItemContext.CatalogItemPath, baseReportCatalogItem.ItemID, out executionSettingEnum, out scheduleDefinitionOrReference);
    this.ActionParameters.ExecutionSettings = executionSettingEnum;
    this.ActionParameters.Schedule = scheduleDefinitionOrReference;
}

So, we can see that we are passing ReportOperation.ReadPolicy.  This ReadPolicy is Reporting Services view of the permission. But, what is ThrowIfNoAccess doing to compare it.  We can click on that within JustDecompile to see what it is doing.

internal void ThrowIfNoAccess(ReportOperation operation)
{
    if (base.Service.SecMgr.CheckAccess(base.ThisItemType, base.SecurityDescriptor, operation, base.ItemContext.ItemPath))
    {
        return;
    }
    else
    {
        throw new AccessDeniedException(base.Service.UserName); <—this is what threw the error that we captured.
    }
}

The IF statement is what is returning false, which then throws the AccessDeniedException.  Clicking on the CheckAccess method leads us into the Security Class and one of the overloads for CheckAccess, but due to the decompile, it isn’t very useful.

Let's go back to the ReadPolicy to see what we can get from that.  Clicking on ReportOperation will show us the enumeration itself.  We can then click on ReadPolicy.

public const ReportOperation ReadPolicy = 20;

We will probably want to do a search by Symbol or FullText, but at first try you will only really see the enumeration itself.  However, we know that this is being used for a SharePoint operation, so let's go grab the SharePoint assemblies that Reporting Services uses to see if they contain the usage.  Because we aren't sure, the first two modules that look interesting are Microsoft_ReportingServices_SharePoint_Common and Microsoft_ReportingServices_SharePoint_Server.

0:055> lmvm Microsoft_ReportingServices_SharePoint_Common
start             end                 module name
00000000`6f7d0000 00000000`6f7f6000   Microsoft_ReportingServices_SharePoint_Common   (deferred)            
    Image path: C:\Windows\assembly\GAC_MSIL\Microsoft.ReportingServices.SharePoint.Common\11.0.0.0__89845dcd8080cc91\Microsoft.ReportingServices.SharePoint.Common.dll
    Image name: Microsoft.ReportingServices.SharePoint.Common.dll
    Using CLR debugging support for all symbols
    Has CLR image header, track-debug-data flag not set
    Timestamp:        Fri Feb 10 21:49:29 2012 (4F35E549)
    CheckSum:         00029204
    ImageSize:        00026000
    File version:     11.0.2100.60
    Product version:  11.0.2100.60

0:055> !savemodule 00000000`6f7d0000 c:\temp\files\Microsoft.ReportingServices.SharePoint.Common.dll
3 sections in file
section 0 - VA=2000, VASize=1f4f4, FileAddr=1000, FileSize=20000
section 1 - VA=22000, VASize=5b0, FileAddr=21000, FileSize=1000
section 2 - VA=24000, VASize=c, FileAddr=22000, FileSize=1000

0:055> lmvm Microsoft_ReportingServices_SharePoint_Server
start             end                 module name
00000000`69790000 00000000`697c8000   Microsoft_ReportingServices_SharePoint_Server   (deferred)            
    Image path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\RSTempFiles\797da953fbf941e79d445a1791e8fa32\b8ec940d\7ebb764f\assembly\dl3\c5ea2cc7\00f9af03_c4e8cc01\Microsoft.ReportingServices.SharePoint.Server.DLL
    Image name: Microsoft.ReportingServices.SharePoint.Server.DLL
    Using CLR debugging support for all symbols
    Has CLR image header, track-debug-data flag not set
    Timestamp:        Fri Feb 10 21:49:35 2012 (4F35E54F)
    CheckSum:         000478B9
    ImageSize:        00038000
    File version:     11.0.2100.60
    Product version:  11.0.2100.60

0:055> !savemodule 00000000`69790000 c:\temp\files\Microsoft.ReportingServices.SharePoint.Server.dll
3 sections in file
section 0 - VA=2000, VASize=30b54, FileAddr=1000, FileSize=31000
section 1 - VA=34000, VASize=5f0, FileAddr=32000, FileSize=1000
section 2 - VA=36000, VASize=c, FileAddr=33000, FileSize=1000

After those are saved, you can open them within JustDecompile and then perform the FullText search for ReadPolicy.

[screenshot: JustDecompile full-text search results for ReadPolicy]

The first two items are enumerations and not really helpful.  The next item, labeled InitializeMaps(), was interesting, although it prompted us for Microsoft.ReportingServices.SharePoint.ObjectModel.dll.

0:055> lmvm Microsoft_ReportingServices_SharePoint_ObjectModel
start             end                 module name
00000000`697f0000 00000000`69800000   Microsoft_ReportingServices_SharePoint_ObjectModel   (deferred)            
    Image path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\RSTempFiles\797da953fbf941e79d445a1791e8fa32\b8ec940d\7ebb764f\assembly\dl3\13fb0569\00f9af03_c4e8cc01\Microsoft.ReportingServices.SharePoint.ObjectModel.DLL
    Image name: Microsoft.ReportingServices.SharePoint.ObjectModel.DLL
    Using CLR debugging support for all symbols
    Has CLR image header, track-debug-data flag not set
    Timestamp:        Fri Feb 10 21:49:36 2012 (4F35E550)
    CheckSum:         000177BB
    ImageSize:        00010000
    File version:     11.0.2100.60
    Product version:  11.0.2100.60

0:055> !savemodule 00000000`697f0000 c:\temp\files\Microsoft.ReportingServices.SharePoint.ObjectModel.dll
3 sections in file
section 0 - VA=2000, VASize=80f4, FileAddr=1000, FileSize=9000
section 1 - VA=c000, VASize=630, FileAddr=a000, FileSize=1000
section 2 - VA=e000, VASize=c, FileAddr=b000, FileSize=1000

AuthzData.m_RptOper2PermMask.Add(ReportOperation.ReadPolicy, (uint)1048576);

It is mapping ReadPolicy to another value – although we can’t tell what that really is.  Looking at the second InitializeMaps(), we can see the following.

SharePointAuthzData.m_RsReportOperationToSpRight.Add(ReportOperation.ReadPolicy, (RSSPBasePermissions)((long)4));

This is in Microsoft.ReportingServices.SharePoint.Server, within the type SharePointAuthzData. This one is interesting on two fronts.  It shows RsReportOperationToSpRight, which is what we are interested in; we ultimately want to see what SharePoint right we are trying to represent.  The other piece is the RSSPBasePermissions with a long value of 4.  We can click on RSSPBasePermissions to see what that shows.  It is another enumeration.

EditListItems = 4,

And that is our SharePoint permission that links to the SPBasePermission documentation listed at the beginning of the blog.

If you were to follow these same steps (although we know where to go look now) on the 2008 R2 side, you would see that ReadPolicy maps to ViewListItems (1) instead of EditListItems (4), which confirms the change.
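As a quick way to check whether a particular user actually holds EditListItems on the site, here is a sketch using the SharePoint Management Shell (the site URL and login are placeholders):

# Run from the SharePoint 2010 Management Shell on a farm server.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$web = Get-SPWeb "http://sharepoint/sites/reports"
# Returns True only if the user has the EditListItems right that 2012 subscription creation checks for.
$web.DoesUserHavePermissions("BATTLESTAR\badama", [Microsoft.SharePoint.SPBasePermissions]::EditListItems)
$web.Dispose()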

 

Adam W. Saxton | Microsoft Escalation Services
http://twitter.com/awsaxton
