
Reporting Services: Error creating HTTP endpoint – Access is Denied


I've seen this issue a few times. We had a case come in where the customer saw a blank page when they went to Report Manager for Reporting Services. You may also see an HTTP 503 error. This just means that the service had a problem and an exception probably occurred under the hood.

In this case, the issue was with SQL 2008 R2.  When looking at the Reporting Services Logs, we can see the following exception when the service starts.

rshost!rshost!1380!03/13/2015-14:52:11:: e ERROR: Failed to register url=http://+:80/ReportServer_RS2008R2/ for endpoint 2, error=5. <—5 = Access Denied
rshost!rshost!1380!03/13/2015-14:52:11:: w WARN: Endpoint 2 is enabled but no url is registered for vdir=/ReportServer_RS2008R2, pdir=C:\Program Files\Microsoft SQL Server\MSRS10_50.RS2008R2\Reporting Services\ReportServer.
servicecontroller!DefaultDomain!1a20!03/13/2015-14:52:11:: e ERROR: Error creating HTTP endpoint. System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
   at Microsoft.ReportingServices.HostingInterfaces.IRsUnmanagedCallback.CreateHttpEndpoint(RsAppDomainType application, String[] urlPrefixes, Int32 cPrefixes, String[] hosts, Int32 cHosts, Boolean wildCardPresent, String virtualDirectory, String filePath, Int32 authType, Int32 logonMethod, String authDomain, String authRealm, Boolean authPersist, Int32 extendedProtectionLevel, Int32 extendedProtectionScenario, Boolean enabled)
   at Microsoft.ReportingServices.Library.ServiceAppDomainController.SetWebConfiguration(RunningApplication rsApplication, Boolean enabled, String folder)

This comes down to understanding URL reservations and how Reporting Services uses them, starting with Reporting Services 2008.

About URL Reservations and Registration
https://msdn.microsoft.com/en-us/library/bb677364.aspx

What is happening here is that the Service Account was changed outside of Reporting Services Configuration Manager.  On my system, if we look, we see that the service is currently set to my RSService account.

[screenshot: the Reporting Services service configured to run as the RSService account]

However, if we look at the URL reservations that are currently registered, we will see that they are configured for the Network Service account. You can see this by running the following command from an administrative command prompt.

netsh http show urlacl

[screenshot: netsh http show urlacl output showing the ReportServer reservations registered to NT AUTHORITY\NETWORK SERVICE]

The problem here is that the service account was changed within the Services applet and not from within Reporting Services Configuration Manager. As a result, the URL reservation permissions were not updated for the new service account. So, when we try to use the reservation, we get an Access Denied error because the RSService account doesn't have permissions; the Network Service account does.

You could also encounter other issues by doing this. For example, you probably would not have backed up the encryption key, so if you had data sources or other encrypted content present, you wouldn't be able to use it.

How do we get back to working?

We can go back to the Services applet and change the account back to what it was before. If Network Service doesn't work, you can use the netsh command above to see which account is actually listed on the reservations and change it back to that account.

Once the account is back, you can then go into Reporting Services Configuration Manager and change the account on the Service Account Tab to the one you want.  This will also prompt you to make a backup of the Encryption key.

[screenshot: Reporting Services Configuration Manager Service Account tab prompting for an encryption key backup]

You will also see, when you do it this way, that it will remove the reservations and re-add them.

[screenshot: Reporting Services Configuration Manager removing and re-adding the URL reservations]

Running the netsh command, we can also see the correct service account is applied.

[screenshot: netsh http show urlacl output showing the reservations registered to the new service account]

NOTE: Changing the service account from Network Service to a domain user account will add the RSWindowsNegotiate entry to the authentication types in rsreportserver.config. So, if you don't have Kerberos configured, you may get prompted three times followed by a 401 error when you go to Report Manager.

 

Adam W. Saxton | Microsoft Business Intelligence Support - Escalation Services
@GuyInACube | YouTube | Facebook.com\guyinacube


Does statistics update cause a recompile?


This is part of my "statistics series" of blogs. See "Statistics blogs reference" at the end of this post.

In this blog, I will talk about two scenarios related to recompiles in conjunction with statistics updates. A statement can be recompiled for two categories of reasons. The first category is related to correctness (such as a schema change). The other category is related to plan optimality. Statistics-update-related recompiles fall into the second category.

If I were to ask you the question "Does a statistics update cause a recompile for a query referencing the table?", what would your answer be? In most cases, the answer is YES! However, there are a couple of scenarios where a recompile is not necessary. In other words, a query won't recompile even though you have updated statistics for the tables being accessed. We actually get users who call in and ask about this behavior from time to time.

Scenario 1 – trivial plan

When a plan is trivial, it's unnecessary to recompile the query even if statistics have been updated. The optimizer generates a trivial plan for very simple queries (usually referencing a single table). In the XML plan, you will see StatementOptmLevel="TRIVIAL". In such cases, a recompile is futile: you won't get a better or different plan.

Let's see this in action. In the script below, I create a table and two procedures (p_test1 and p_test2). p_test1 has a very simple statement. I execute them once so that the plans are cached. Then one row is inserted (this is very important, as will be explained in the second scenario). Statistics are then updated.

use tempdb
go
if object_id ('t') is not null      drop table t
go
create table t(c1 int)
go
create procedure p_test1 @p1 int
as
    select c1 from t where c1 = @p1
go
create procedure p_test2 @p1 int
as
select t1.c1 from t t1 join t t2 on t1.c1=t2.c1 where t1.c1 = @p1
go
set nocount on
declare @i int = 0

while @i < 100000
begin
    insert into t values (@i)
    set @i += 1
end
go
create index ix on t(c1)
go
exec p_test1 12
exec p_test2  12
go
insert into t values (-1)
go
update statistics t
go

 

I started a Profiler trace to capture the "SQL:StmtRecompile" event and then ran the following queries again:

--no recompile because of trivial plan
exec p_test1 12
--recompile because of stats updated with data change and it's not a trivial plan
exec p_test2  12

Note that only the statement from p_test2 produced a StmtRecompile event. This is because the statement in p_test1 produced a trivial plan; a recompile would be futile anyway.

[screenshot: Profiler trace showing a SQL:StmtRecompile event only for the statement in p_test2]

 

Scenario 2 –no data change

In this scenario, the plan can be a non-trivial plan, but it still won't recompile if the table whose statistics were updated hasn't had any row modifications (insert, delete, or update) since the last statistics update.

Let’s use the same demo above to illustrate the behavior. 

Let's update statistics one more time (update statistics t). Note that I didn't modify the table. Now run p_test2 again as below. Note that no StmtRecompile event is produced; the existing plan is used. In short, if there is no data change, there is no need to recompile.

--no recompile because there is no data change even though stats got updated
exec p_test2  12

[screenshot: Profiler trace showing no StmtRecompile event for p_test2 after the second statistics update]

 

Scenario 2 actually has complications. Suppose you updated statistics yesterday. Today you decide to update statistics with FULLSCAN, thinking it may produce better statistics to benefit queries. But there has been no change in the data. You may be in for a surprise: SQL Server still uses the same plans without recompiling. In that case, you will need to manually free the procedure cache to get rid of the plan.
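If you do end up in that situation, you can free just the plan in question rather than the entire procedure cache. Here is a minimal sketch using the demo procedure p_test2 from above (sp_recompile 'p_test2' is a simpler alternative that just marks the procedure for recompile):

--find the cached plan for the demo procedure and free only that plan
declare @plan_handle varbinary(64)

select top (1) @plan_handle = cp.plan_handle
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
where st.objectid = object_id('tempdb..p_test2')
  and st.dbid = db_id('tempdb')

if @plan_handle is not null
    dbcc freeproccache (@plan_handle)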

Statistics blogs reference

  1.  filtered statistics
  2. statistics update with index rebuild
  3. partitioned table statistics
  4. sampling & statistics quality.

 

Jack LI | Senior Escalation Engineer | Microsoft SQL Server Support

Understanding SQL Server’s Spatial Precision Filtering


A spatial index is not precise on its own. The spatial index is a grid-based design that requires a precision filter as part of the query plan. In this blog I will provide a high-level (10,000-foot) overview of the design.

The spatial index overlays a series of grids. If the shape has any area (representation) that falls into a cell, the shape’s primary key (PK) and cell location are added to the spatial index.

In the figure below:

· The ‘red’ point results in row 2, col 1.
· The ‘green’ point results in row 1, col 2.
· The ‘blue’ polygon results in row 1, col 1, 2 and 3; row 2 col 2; row 3 col 1, 2, and 3; and row 4 col 1, 2 and 3

[figure: grid overlay showing which cells the red point, green point, and blue polygon fall into]

Let us use the following example.

Select * from MySpatialTable mst
  Inner join poly_SpatialDataSet mlt on
         mst.GeoColumn.STIntersects(mlt.GeoColumn) = 1 -- Predicate constant on right side to allow optimizer to select spatial index

The plan below is a portion of this sample query. The spatial index on MySpatialTable is leveraged to determine 'possible' rows of interest. MySpatialTable holds the polygon and the query is searching for the points intersecting the polygon.

1. For each row in poly_SpatialDataSet the nested loop feeds the spatial tessellation function. Tessellation determines the cells, mapped onto the same grid as the polygon index. For each point, the cell identifier and primary key are passed through the nested loop.

2. The nested loop uses the polygon's spatial index to determine if the cell containing the point is a cell contained in the polygon. If any part of the polygon and the point appear in the same cell identifier, a possible hit is recorded. This does not mean the polygon and point intersect, only that they fall within the same grid cell. The primary keys for the point and polygon are then passed to the consumer of the nested loop.

3. Once the primary keys for the polygon and point are identified, the precision filter is invoked. The precision filter deserializes the spatial objects and performs a full STIntersects evaluation, confirming whether the point truly intersects the polygon.

[screenshot: query plan fragment showing the nested loops, spatial index seek, and precision filter]

Deserialization can be expensive. To deserialize a spatial object SQL Server uses the primary key to lookup the row and read the associated blob data storing the spatial data. SQL Server then leverages the Microsoft.SqlServer.Types .NET library to create a spatial object, deserializing the points and other metadata from the blob. The larger the blob the more work to instantiate the spatial object. You can monitor the performance counter (Access Methods : Count of LOB Read Aheads). The performance counter is helpful as deserialization leverages blob read-ahead capabilities.

The precision filter is a non-caching operator. When looking at query plans in SQL Server you may see rewinds, rebinds, and table spools used for such activities. These actions can be used to cache or reset specific portions of the execution plan, reducing the overhead of re-fetching a row, for example. The spatial precision filter does not provide caching operations. In the example we have 2 points and a single polygon; 1 of those points will flow to the precision filter for evaluation.

Let's say we had a million points that fell in a cell matching that of the polygon. The primary keys for a million points would flow to the precision filter. The primary key for the polygon would also flow a million times to the precision filter. The query plan logic does not account for the fact that the same polygon row can arrive at the filter multiple times. Each time the filter is executed, the polygon is materialized, used and destroyed. If the polygon were cached by the spatial filter (a DCR is filed with the SQL development team), the polygon would only need to be created 1 time and it could compare itself to each of the 1 million points. Instead, it creates and destroys the polygon object 1 million times.

Because deserialization can be expensive reducing the size of the spatial object and the number of rows arriving at the precision filter helps your query execute faster.

The following is sample output from 185,000 points using statistics profile. You can see the filter was only invoked (executed) 80 times, indicating that only 80 of the 185,000 points fell within a cell also occupied by the polygon. Of those 80 possible hits, the precision filter found 0 intersections: these 80 points fell outside the bounds of our polygon.

[screenshot: statistics profile output showing the precision filter executed 80 times]

· Without these 80 rows reaching the precision filter, the query runs in 12 seconds

· With these 80 rows reaching the precision filter, the query runs in 52 seconds

The original spatial index was built using a CELLS_PER_OBJECT setting of 64 and HIGH grid levels. Rebuilding the grid with CELLS_PER_OBJECT = 8192 changed the comparisons required from 80 to 27, reducing runtime.

CAUTION: Changing CELLS_PER_OBJECT or grid levels may not yield better results. Study your spatial objects carefully to determine optimal index options.
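For reference, index precision is set when the spatial index is created or rebuilt. Here is a minimal sketch of rebuilding the example index with a higher CELLS_PER_OBJECT setting (MySpatialTable and GeoColumn are the hypothetical names from the example above; the index name and grid levels here are only illustrative, and DROP_EXISTING = ON assumes an index of the same name already exists):

-- Rebuild the spatial index with a higher CELLS_PER_OBJECT setting
CREATE SPATIAL INDEX six_MySpatialTable_GeoColumn
ON MySpatialTable (GeoColumn)
USING GEOGRAPHY_GRID
WITH (
    GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = HIGH, LEVEL_4 = HIGH),
    CELLS_PER_OBJECT = 8192,
    DROP_EXISTING = ON
);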

Another option to reduce the time spent in the precision filter is reducing the size of the spatial object. If you can reliably split the spatial object into multiple rows it may reduce the size of the deserialization and improve performance.

Here is an example taking a MultiPolygon and creating a row for each Polygon. Using smaller polygons reduces the workload for the filter.

CAUTION: Use caution when breaking up a spatial object to accommodate the desired query results.

CAUTION: Be careful when attempting to split the spatial object. Creating too many, small objects can increase the number of rows that fall into a single cell causing lots of rows to match a single cell and degrading performance over a larger object.

declare @iObjects int = 0
select @iObjects = Geography.STNumGeometries() from MyObjects

while(@iObjects > 0)
begin
  insert into MultiplePolyObjects
  select @iObjects, FeatureSid, Geography.STGeometryN(@iObjects) /*.ToString()*/ from MyObjects
  set @iObjects = @iObjects - 1
end

Leverage the knowledge of your spatial data, index precision and the precision filter invocations to tune your spatial queries.

Related Spatial References

http://blogs.msdn.com/b/psssql/archive/2013/12/09/spatial-index-is-not-used-when-subquery-used.aspx
http://blogs.msdn.com/b/psssql/archive/2013/11/19/spatial-indexing-from-4-days-to-4-hours.aspx
http://blogs.msdn.com/b/psssql/archive/2015/01/26/a-faster-checkdb-part-iv-sql-clr-udts.aspx

Bob Dorr - Principal SQL Server Escalation Engineer

Apr 14 2015 – UPDATE (rdorr)

The SQL Server development team provided me with additional information, special thanks to Milan Stojic.

Hint: The precision filtering is often referred to as the secondary filter in various publications.  The index filter is the primary filter and the precision filter is the secondary filter.

The query hint SPATIAL_WINDOW_MAX_CELLS allows fine tuning between primary and secondary filtering. Adjusting SPATIAL_WINDOW_MAX_CELLS can provide increased filtering of the spatial index, similar to increasing the index precision (CELLS_PER_OBJECT) outlined in this blog. The query hint allows targeting of specific queries instead of complete index changes.

… WITH (INDEX ( P1 ), SPATIAL_WINDOW_MAX_CELLS = 8192) …

Reference: SPATIAL_WINDOW_MAX_CELLS - http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/SQLServer_Denali_Spatial.docx 

Index Hint: A general recommendation is HHHH for point data and AUTO grid for other spatial data types.

geography.Filter: When using STIntersects you may consider .Filter instead.  Filter is not an exact match but if your application tolerance allows for such variance it may perform better.  Reference: https://msdn.microsoft.com/en-us/library/cc627367.aspx 

geography.Reduce: Another option may be to use Reduce on large spatial objects, retrieving the possible rows before doing more precise work. This may require a two-step process: first reduce and get an intermediate result set of possible rows, then a final step using the reduced row possibilities against the complete object.

Bob Dorr - Principal SQL Server Escalation Engineer

 

What is RESOURCE_GOVERNOR_IDLE and why you should not ignore it completely


If you have a query that runs slowly, would you believe it if I told you that you instructed SQL Server to do so? This can happen with Resource Governor.

My colleague Bob Dorr has written a great blog about Resource Governor CPU cap titled “Capping CPU using Resource Governor – The Concurrency Mathematics”.

Today, I will explore a customer scenario related to this topic. We had a customer who complained that their queries ran slowly. Our support engineers captured data and noticed that the wait type "RESOURCE_GOVERNOR_IDLE" was very high. Below is the SQL Nexus Bottleneck Analysis report.

[screenshot: SQL Nexus Bottleneck Analysis report showing RESOURCE_GOVERNOR_IDLE as a top wait type]

My initial thought was that this should be ignorable. We have many wait types that are used by idle threads for many different queues when those queues are empty. This must be one of those.

Since I hadn't seen it before, I decided to check the code. It turned out to be significant. This wait type is related to the Resource Governor CPU cap implementation (CAP_CPU_PERCENT). When you enable CAP_CPU_PERCENT for a resource pool, SQL Server ensures that the pool won't exceed the CPU cap. If you configure 10% for CAP_CPU_PERCENT, SQL Server ensures that you only use 10% of the CPU for the pool. If you pound the server (from that pool) with CPU-bound requests, SQL Server inserts an 'idle consumer' into the runnable queue to take up the quantum the pool is not entitled to. While the 'idle consumer' is waiting, we report RESOURCE_GOVERNOR_IDLE to indicate that the 'idle consumer' is taking up the quantum. Here is what the runnable queues for a particular resource pool would look like with and without CAP_CPU_PERCENT configured.

[diagrams: the runnable queue for a resource pool with and without the 'idle consumer' when CAP_CPU_PERCENT is configured]

Not only will you see that wait type in sys.dm_os_wait_stats, but you will also see ring buffer entries like the one below:

select * from sys.dm_os_ring_buffers
where ring_buffer_type ='RING_BUFFER_SCHEDULER' and record like '%SCHEDULER_IDLE_ENQUEUE%'
<Record id = "139903" type ="RING_BUFFER_SCHEDULER" time ="78584090"><Scheduler address="0x00000002F0580040"><Action>SCHEDULER_IDLE_ENQUEUE</Action><TickCount>78584090</TickCount><SourceWorker>0x00000002E301C160</SourceWorker><TargetWorker>0x0000000000000000</TargetWorker><WorkerSignalTime>0</WorkerSignalTime><DiskIOCompleted>0</DiskIOCompleted><TimersExpired>0</TimersExpired><NextTimeout>6080</NextTimeout></Scheduler></Record>

 

Conclusion:

If you see the wait type RESOURCE_GOVERNOR_IDLE, don't ignore it. You need to evaluate whether you have set the CPU cap correctly. It may be exactly what you wanted, but it may also be that you have capped it too low and queries are impacted in a way you didn't intend. If it is what you intended, you will need to explain to your users that they are being "throttled".
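For reference, here is a minimal sketch of checking which resource pools have a CPU cap configured (cap_cpu_percent is 100 when no cap is in effect):

--check for resource pools with a CPU cap
select pool_id, name, cap_cpu_percent
from sys.dm_resource_governor_resource_pools
where cap_cpu_percent < 100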

Demo

For the demo, observe how long the query runs before and after the CPU cap is configured.


--first measure how long this takes
select count_big (*) from sys.messages m1 cross join sys.messages m2  -- cross join sys.messages m3

go
--alter to 5 (make sure you revert it back later)
ALTER RESOURCE POOL [default]
WITH ( CAP_CPU_PERCENT = 5 );
go
ALTER RESOURCE GOVERNOR RECONFIGURE;
go

--see the configuration
select * from sys.dm_resource_governor_resource_pools

go

--now see how long it takes
select count_big (*) from sys.messages m1 cross join sys.messages m2  -- cross join sys.messages m3


go
--While the above query is running, open a different connection and run the following query
--you will see that it keeps going up. note that if you don't configure CAP_CPU_PERCENT, this value will be zero
select * from sys.dm_os_wait_stats where wait_type ='RESOURCE_GOVERNOR_IDLE'

[screenshot: sys.dm_os_wait_stats output showing the RESOURCE_GOVERNOR_IDLE wait time increasing]

go


--revert it back
ALTER RESOURCE POOL [default]
WITH ( CAP_CPU_PERCENT = 100 );
go
ALTER RESOURCE GOVERNOR RECONFIGURE;
go

 

Jack Li | Senior Escalation Engineer | Microsoft SQL Server

Pssdiag Manager update 12.0.0.1001 released


We just released a pssdiag Manager update to codeplex.

Where to download

You can download both binary and source code at http://diagmanager.codeplex.com/.

What's New

This version supports SQL Server 2012 and 2014.

Requirements

  1. Diag Manager requirements
    • Windows 7 or above (32 or 64 bit)
    • .NET framework 2.0 installed
  2. Data collection
    • The collector can only run on a machine that has the targeted version of SQL Server (either client tools only or the full version) installed

Training

  1. Downloading and Installing Diag Manager
  2. Configuring and customizing pssdiag packages using Diag Manager
  3. Running pssdiag package
  4. PSSDIAG performance considerations

 

Jack Li |Senior Escalation Engineer | Microsoft SQL Server Support

twitter| pssdiag |Sql Nexus

Forced parameterization to the rescue


Some features have been around for a long time, but we keep seeing users not taking advantage of them. I wanted to give you an example of how forced parameterization can help you.

Recently I worked with a customer with a very active system serving many concurrent users.  Here is some basic information:

  1. CPU: 160 logical CPU (80 cores with hyper-threading enabled)
  2. RAM: 2TB RAM
  3. Active users: about 1400
  4. Batch requests/sec:  averaging 4000 or above

This is a very mission-critical system. When their users reached the maximum of 1400 and CPU rose above 70-80%, their application started to slow down. With high CPU, the usual troubleshooting is to tune the heavy-hitter queries. But the SQL Nexus & RML report showed that there wasn't a predominant set of queries to tune. The screenshot below shows that the top 10 queries cumulatively accounted for less than 20% of the total CPU consumed. This made it hard to focus on and tune individual queries.

 

[screenshot: RML report showing the top 10 queries accounting for less than 20% of total CPU]

 

We noticed that compilation was fairly high, as shown in the screenshot below. SQL Compilations/sec averaged 730.

[screenshot: performance counter data showing SQL Compilations/sec averaging 730]

 

With compilation this high, it is usually because ad hoc queries are being used at a high rate. To prove this, we pulled "SQL Plans" out of the "Cache Object Counts" counter. It was almost over 160,000 (see screenshot below)! This counter meant that there were almost 160,000 ad hoc plans in the plan cache!

[screenshot: Cache Object Counts showing almost 160,000 SQL Plans in the plan cache]

 

Solution

Many times, ad hoc queries at a high rate can cause issues such as wasting CPU on compilation and wasting plan cache memory. We had this customer enable "forced parameterization" for the database. After that, the CPU dropped to 10-20% even at the highest user load and performance became super fast.
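For reference, forced parameterization is a database-level setting. Here is a minimal sketch (the database name is hypothetical):

--enable forced parameterization (revert with PARAMETERIZATION SIMPLE)
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED
go
--verify the setting
select name, is_parameterization_forced from sys.databases where name = 'YourDatabase'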

Sometimes a solution may be simpler than you might have thought. Just keep this option handy. If things don't work out, it's easy to back it out. Over the course of troubleshooting performance issues, I have used this trick many times. I hope this serves as a reminder for you.

 

Jack Li | Senior Escalation Engineer | Microsoft SQL Server Support

twitter| pssdiag |Sql Nexus

Server’s “Max Degree of Parallelism” setting, Resource Governor’s MAX_DOP and query hint MAXDOP–which one should SQL Server use?


SQL Server allows a user to control the max degree of parallelism of a query in three different ways. Just for reference, here is a list of documentation:

  1. SQL Server wide “max degree of parallelism” configuration is documented in max degree of parallelism Option.   Microsoft Support has recommended guidelines on setting max degree of parallelism per KB “Recommendations and guidelines for the "max degree of parallelism" configuration option in SQL Server”.
  2. Resource Governor’s MAX_DOP is documented in CREATE WORKLOAD GROUP
  3. MAXDOP query hint is documented in “Query Hints (Transact-SQL)

What is the effective setting if all or some of these settings are enabled? The permutations can be confusing, so I decided to do some code research. Here is a table of all possible combinations of the settings:

Query Hint (QH) | Resource Governor (RG) | sp_configure | Effective MAXDOP of a query
----------------|------------------------|--------------|----------------------------
Not set         | Not set                | Not set      | Server decides (max CPU count, up to 64)
Not set         | Not set                | Set          | Use sp_configure
Not set         | Set                    | Not set      | Use RG
Not set         | Set                    | Set          | Use RG
Set             | Not set                | Not set      | Use QH
Set             | Set                    | Not set      | Use min(RG, QH)
Set             | Set                    | Set          | Use min(RG, QH)
Set             | Not set                | Set          | Use QH

When you reference the above table, please note the following:

  1. A value of 0 for any of the settings (query hint, Resource Governor, or sp_configure) means max DOP is not set. For example, if you use the OPTION (MAXDOP 0) query hint, the MAXDOP hint is considered not set at the query level. (A short sketch showing all three settings together follows this list.)
  2. A query can still get a serial plan regardless of these settings. The optimizer decides whether a plan is serial based on cost and certain TSQL constructs (for example, a SQL 2014 query that uses a memory-optimized table).
  3. The actual DOP can be lower than MAXDOP due to memory or thread shortage.
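Here is a minimal sketch of where each of the three settings lives (the workload group name is hypothetical; with RG = 4 and the query hint = 2, the table above says the query runs with MAXDOP = min(RG, QH) = 2):

-- 1. Server-wide setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- 2. Resource Governor workload group setting (hypothetical group name)
ALTER WORKLOAD GROUP [ReportingGroup] WITH (MAX_DOP = 4);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- 3. Query hint
SELECT COUNT_BIG(*)
FROM sys.messages m1 CROSS JOIN sys.messages m2
OPTION (MAXDOP 2);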

 

For reference, my colleague Bob Dorr has written a couple of blogs in this space.

Credits: I'd like to thank Jay Choe, Sr. Software Engineer at Microsoft, for reviewing my code research and confirming the findings, and Bob Ward, CTO CSS Americas at Microsoft, for prompting the research on this topic.

 

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus

XEvent Timestamp is a large integer value not the expected datetime value


The timestamp column for an XEvent is stored internally as an offset from the start of the trace. The XEvent header contains the starting UTC time and each event stores its offset in ticks from the value stored in the header.

On a system where the time is adjusted, for example when daylight saving time falls backward, the offset stored in an individual trace record can become a negative value.

There is a bug in the common XEvent reader code that impacts the TSQL reader as well as the client readers (SSMS, the XEvent LINQ reader, …). Instead of reading the value as signed, the value is read as unsigned. This causes the value to look like 0xFFFFFFFFFFFF####. The signed value should be -#### but it is incorrectly treated as unsigned. An offset this large is illegal and causes the reader to return an error.

When using the TSQL reader function (sys.fn_xe_file_target_read_file), an incorrect timestamp value returned from the reader is output as the calculated tick offset value (%I64u) instead of the datetime value.
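For reference, here is a minimal sketch of reading an .xel file with the TSQL reader and pulling out the timestamp attribute (the file path is hypothetical):

-- Read events from an Extended Events file target; the timestamp appears in the event XML
SELECT object_name,
       CAST(event_data AS xml).value('(event/@timestamp)[1]', 'varchar(30)') AS [timestamp],
       event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Temp\MySession*.xel', NULL, NULL, NULL);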

A correct event looks like:

…. timestamp="2015-04-11T11:19:24.265Z">                               

Incorrect event might look like: 

…. timestamp="18446744070113720036">

Attempting to open the file in SSMS (management Studio) results in the following error.

[screenshot: SSMS error when attempting to open the .xel file containing the invalid timestamp offset]

Bob Dorr - Principal SQL Server Escalation Engineer


SQL Server Page Life Expectancy (PLE)


This week I was involved in a conversation with Paul Randal relating to PLE per node vs PLE server wide.

There is an all-up PLE counter as well as individual, per NUMA node PLE counters.  SQL Server Books Online describes the values as:

SQL Server Buffer Manager \ Page life expectancy – Indicates the number of seconds a page will stay in buffer pool without references.

SQL Server Buffer Node \ Page life expectancy – Indicates the minimum number of seconds a page will stay in buffer pool on this node without references.

The descriptions leave a bit to the imagination. It is pretty common to ask someone about the all-up value and assume it is a simple average of the individual node values. For example, using the following 4 node values, the simple average = 7000 divided by 4 = 1750.

1000
2000
1500
2500

This is not the calculation used for the all-up number. The Buffer Manager value is an average of the rates (the harmonic mean). Using the harmonic mean, 4 / (1/1000 + 1/2000 + 1/1500 + 1/2500), the all-up PLE for this example is approximately 1558, not 1750.

Paul outlines additional calculations and highlights the need to watch per node values for better management of your PLE targets in his post:  http://www.sqlskills.com/blogs/paul/page-life-expectancy-isnt-what-you-think 
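For reference, you can compare the per-node values and the all-up value directly from the performance counter DMV. A minimal sketch (the object_name values include an instance prefix, hence the LIKE):

-- Per-NUMA-node PLE (Buffer Node) versus the all-up value (Buffer Manager)
SELECT [object_name], counter_name, instance_name AS node, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND ([object_name] LIKE N'%Buffer Node%' OR [object_name] LIKE N'%Buffer Manager%');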

Bob Dorr - Principal SQL Server Escalation Engineer

Error in XML document. Hexadecimal value 0x1F, is an invalid character


I worked on an issue recently where we noticed that a large majority of the out-of-the-box System Center Configuration Manager (SCCM) reports were throwing the same error. Very odd! I would expect to see an error from a custom report, but not an out-of-the-box report. Here is the error the reports were throwing.

From SQL Server Reporting Services (SSRS):

The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version.
There is an error in XML document (1, 21726).
'¬', hexadecimal value 0x1F, is an invalid character.
Line 1, position 1869.

From SCCM:

System.InvalidOperationException
There is an error in XML document (1, 21726).
 
Stack Trace:
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
   at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
   at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
   at Microsoft.ConfigurationManagement.Reporting.Internal.Proxy.ReportingService2005.GetReportParameters(String Report, String HistoryID, Boolean ForRendering, ParameterValue[] Values, DataSourceCredentials[] Credentials)
   at Microsoft.ConfigurationManagement.Reporting.Internal.SrsReportServer.GetReportParameters(String path, Dictionary`2 parameterValues, Boolean getValues)
 
-------------------------------
 
System.Xml.XmlException
'­', hexadecimal value 0x1F, is an invalid character. Line 1, position 21726.
 
Stack Trace:
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
   at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
   at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
   at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
   at Microsoft.ConfigurationManagement.Reporting.Internal.Proxy.ReportingService2005.GetReportParameters(String Report, String HistoryID, Boolean ForRendering, ParameterValue[] Values, DataSourceCredentials[] Credentials)
   at Microsoft.ConfigurationManagement.Reporting.Internal.SrsReportServer.GetReportParameters(String path, Dictionary`2 parameterValues, Boolean getValues)

Taking a look at the stack I can see that it appears to be failing to read one of the parameters (GetReportParameters)

I opened a few of the reports in Report Builder and saw that each had a few parameters that are in every SCCM report, but the failing reports all had one parameter in common: CollID. Taking a look at the query for its dataset (Parameter_DataSet_CollectionID):

select CollectionID, CollectionName=Name, NameSort=CollectionID+' - '+Name
from fn_rbac_Collection(@UserSIDs) 
order by 2

I then opened the function fn_rbac_Collection in the SCCM database to see what table it was pulling from. It gets its data from v_Collection.

I used the following SQL query to search through the collection names to find which one(s) contain the 1F hex value:

SELECT Name
FROM v_Collection
WHERE CONVERT(varchar(max),convert(varbinary(max),convert(nvarchar(max),Name)),2) LIKE '%1F%'

Here is what we got:

Name
Uni-InternetExplorer_11.0¬_R01

Nothing seemed off about that name, so I pasted it into Notepad and started keying through the letters. I noticed that, going from left to right, when I keyed past the zero in 11.0 I had to press the arrow key twice! Opening the string in a hex editor, I could see that right between that zero and the underscore is that 1F hex value.
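For reference, you can also locate the exact position of the hidden character directly in T-SQL. A minimal sketch:

-- Return the 1-based position of the 0x1F control character in each affected name
SELECT Name, CHARINDEX(NCHAR(0x1F), Name) AS BadCharPosition
FROM v_Collection
WHERE CHARINDEX(NCHAR(0x1F), Name) > 0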

Knowing that it was the culprit, we went into SCCM, found that collection, and retyped the name so that it would no longer contain the hidden character.

We kicked off the report and had a successful render!

In other cases, I also came across reports that had the same issue but were pulling from assignments. This is the query I used to find the corrupt assignment parameters:

SELECT AssignmentName
FROM CI_CIAssignments
WHERE CONVERT(varchar(max),convert(varbinary(max),convert(nvarchar(max),AssignmentName)),2) LIKE '%1F%'

Mark Hughes
Microsoft Business Intelligence Support – Escalation

How to get unstuck when using SQL Aliases


I got a case recently where the customer had a SQL alias set up but was having issues connecting from their application. Being in Business Intelligence Support, we deal with plenty of connectivity issues, and this is one topic of connectivity that does not get touched on a lot.
 
A SQL alias is an alternate name that can be used to make a connection. The alias encapsulates the required elements of a connection string, and exposes them with a name chosen by the user. Aliases can be used with any client application.

It started with this error message when trying to connect to SQL Server from a client application. For the sake of this write-up we are going to use SQL Server Reporting Services (SSRS) as our application. We received an error similar to the following:

ERROR [08001] [Microsoft][SQL Server Native Client 10.0]TCP Provider: No such host is known. ERROR [HYT00] [Microsoft][SQL Server Native Client 10.0]Login timeout expired ERROR [01S00] [Microsoft][SQL Server Native Client 10.0]Invalid connection string attribute ERROR [08001] [Microsoft][SQL Server Native Client 10.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

[screenshot: SSRS data source connection error]

From this we can tell we cannot connect to SQL Server because it could not find the server.
 
We started with a Universal Data Link (UDL) file test using the SQL Server name the customer provided, not knowing at that point that it was actually a SQL alias. Of note, I always start with a UDL test because it is the simplest way to test connectivity; it takes the customer's application out of the picture.

[screenshot: UDL test failing when using the provided server name]

Next we did a ping on the IP of the SQL Server and this was successful. This test rules out any DNS or name resolution issues in the environment.

[screenshot: successful ping of the SQL Server IP address]

Since ping was successful, I went down the telnet road. This allows us to see if anything is listening on the port we expect to be on. This could be impaired by a firewall or by the listener not being there.

[screenshot: telnet test against the SQL Server IP and port]

We know the connection was successful because we got a blank screen that looks like this…

[screenshot: blank telnet window indicating a successful connection]

This got me thinking: why do ping and telnet work but not the UDL? I tested the UDL with the ODBC driver, the MS OLE DB provider, and SQL Server Native Client (SNAC), none of which worked using the name. I even tested the UDL forcing the TCP protocol, which also failed.

TCP:<server name>,<port #>
e.g. TCP:SQL2008R2,1433

None of this was making sense, especially since the server looked correct in Configuration Manager; the customer commented that they were using an alias and that it looked correct.
 
Aliases can be found under SQL Server Configuration Manager in the SQL Native Client configuration area.

[screenshot: SQL Server Configuration Manager showing the alias under SQL Native Client Configuration]

At this point everything looked OK, so I had to do further research into where a SQL alias can reside. During that research I found a bunch of references to Cliconfg. What is Cliconfg, you may ask? I had the same question! In short, Cliconfg is the old client configuration tool that ships with Windows. You can find more info on Cliconfg here.
 
NOTE: Be aware that Cliconfg is missing an "I" in the config part. This is due to the old 8.3 naming conventions for files.
 
So on the application server we went to Start > Run, typed in Cliconfg, and noticed that the alias was using an IP address instead of the name we saw in SQL Server Configuration Manager; the customer indicated the IP listed was incorrect.
 
In Cliconfg we saw something similar to the following…

[screenshot: Cliconfg showing the alias configured with an IP address]

I knew the SQL Server IP address was 10.0.0.4, but the alias in Cliconfg was configured for 10.0.0.272. To correct this, we edited the connection parameters and set the server name to the actual name of the SQL Server instead of the IP address.

[screenshots: editing the Cliconfg alias connection parameters to use the SQL Server name]

After changing that in Cliconfg, we were able to connect to the SQL Server using the UDL successfully. Then we went back to the application, in this example SSRS, and it was also able to connect successfully.

[screenshots: successful UDL test and successful SSRS connection]

Bitness matters!

Be aware that client aliases can lead to connectivity issues if not configured correctly. These client aliases are really just entries in the registry. Also be aware that BITNESS MATTERS! The alias bitness must match the bitness of the application: if the application is 32-bit, then the alias needs to be 32-bit.
 
From SQL server configuration manager the bitness is broken out in the GUI.

[screenshot: SQL Server Configuration Manager showing separate 32-bit and 64-bit SQL Native Client configuration nodes]

For Cliconfg there are actually separate applications, one for 32-bit and one for 64-bit. By default, if you go to Start > Run and type in Cliconfg on a 64-bit OS, you will get the 64-bit version. If the application is 32-bit and you added a 64-bit alias, it will not pick up the alias. To get the 32-bit version of Cliconfg you can go to this path:

%SystemDrive%\Windows\sysWOW64\cliconfg.exe

Mark Ghanayem
Microsoft Business Intelligence Support

Operating System Error (665 – File System Limitation) Not just for DBCC Anymore


Operating system error 665, indicating that a file system limitation has been reached, continues to gain momentum beyond DBCC snapshot files. Over the last ~18 months I have reproduced the issue with standard database files, BCP output, backups and others.

We have posted previous blogs talking about the NTFS attribute design and the associated limitation (665) that occurs as the storage space for the attribute structures becomes exhausted.

http://blogs.msdn.com/b/psssql/archive/2008/07/10/sql-server-reports-operating-system-error-1450-or-1452-or-665-retries.aspx

http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx

Online DBCC has been susceptible to this limitation because it leverages a secondary stream for copy-on-write activities. The sparse nature of a DBCC snapshot or a snapshot database can drive attribute exhaustion.

As space is acquired the disk storage location(s) and size(s) are stored in the attribute structures.   If the space is adjacent to a cluster already tracked by the file the attributes are compressed into a single entry, spanning the entire size.   However, if the space is fragmented it has to be tracked with multiple attributes.

The 665 issue can pop up with larger file sizes.   As the file grows it acquires more space.  During the space acquisition the attributes are used to track this space.  

Repro 1 – DBCC Snapshot or Snapshot Database

The copy-on-write logic acquires new space as pages are dirtied in the main database. As writes occur to the snapshot target, more attributes are used. The space allocations and clusters are, by definition and design, stored in sparse locations.

Repro 2 – Database files

I can insert data into a database with a small auto-grow size, each growth acquiring new disk space. If the disk space is not acquired in strict, cluster-adjacent order, the growth is tracked with separate attributes. After millions of grows from a large import or index rebuild (usually around 300GB) I can exhaust the attribute list and trigger the 665 error.

Repro 3 – BCP

BCP extends the output file in the process of simply writing to the file.  Each block written has to acquire disk space and it is reliant on adjacent disk space allocation to accommodate a large file.

One customer was trying to use a copy of BCP per CPU along with a query option to partition the output streams and improve export performance. This can work very well, but in this case it backfired. The output location was shared, and as each copy of BCP wrote data they leap-frogged each other on the same physical media. Each of the BCP output streams quickly exhausted the attribute storage because none of them were acquiring adjacent storage.

What Should I Do?

Testing has shown that defragmentation for the physical media may help reduce the attribute usage.   Just be sure your defragmentation utility is transactional.  (Note:  defragmentation is a different story on SSD media and typically does not address the problem.  Copying the file and allowing the SSD firmware to repack the physical storage is often a better solution.)

Copy - Doing a file copy may allow better space acquisition.

Use SQL Server 2014 for DBCC online operations.  SQL Server 2014 no longer uses the secondary stream but a separate, sparse file marked for delete on close.  This may reduce the shared attribute storage required by a secondary stream.

Adjust the auto growth to acquire sizes conducive to production performance as well as packing of attributes.
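For reference, here is a minimal sketch of setting a larger, fixed auto-growth increment (the database and logical file names are hypothetical):

-- Use a larger fixed growth increment so each growth acquires a bigger, more contiguous chunk of disk space
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 1024MB);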

Format the drive using the larger NTFS metadata, file tracking structures providing a larger set of attribute structures. https://support.microsoft.com/en-us/kb/967351

Use ReFS instead of NTFS. ReFS does not share the NTFS attribute design and does not hit this limitation.

Adjust utility write sizes. For example, the BCP buffer size change: I just discovered that the fix list for SQL Server 2012 SP2 does not appear to contain the fix information. The fix changes bcpWrite (the internal API used by the ODBC driver and BCP.exe) from a 4K write size to a 64K write size.

Backup – Carefully plan the number of files (stripe set) as well as transfer and block sizes.

Bob Dorr - Principal SQL Server Escalation Engineer

Identifying SQL Server 2014 New Cardinality Estimator issues and Service Pack 1 improvement


 

In a previous blog, I talked about SQL Server 2014's new Cardinality Estimator (new CE) and how trace flags 9481 and 2312 can be used to control which version of the Cardinality Estimator is used.

In this blog, I will use a modified version of a real scenario a customer hit to show you how you can use these trace flags to spot issues related to the new CE and how SQL Server 2014 SP1 can help.

Problem

A customer reported to us that a particular query ran slowly on SQL Server 2014. After troubleshooting, we determined that the new Cardinality Estimator grossly over-estimated. We compared the plans using trace flag 2312 (new CE) and 9481 (old CE).

Here is a simplified repro without revealing the customer's table and data. The customer's query is more complex, but the issue is the over-estimation of a particular join that produced a bad query plan overall.

The script below inserts 1 million rows into table t. c1 is the primary key. c2 has duplicates (each number repeats 100 times).

create table t (c1 int primary key, c2 int)
go
set nocount on

go
begin tran
declare @i int
set @i = 0
while @i < 1000000
begin

insert into t values (@i, @i % 10000)
set @i = @i +1

end
commit tran
go
create index ix on t(c2)
go
update statistics t
go

 

How many rows do you think the following query will return? It will return 100 rows. Note that I used QUERYTRACEON 2312 to force the new CE on SQL 2014. If your database compatibility level is 120, you will get the new CE without having to use the trace flag. Again, the previous blog has instructions on how to control the new and old CE with trace flags and the database compatibility level.

select t1.c2 from t t1 join t t2 on t1.c2 = t2.c2
where t1.c1 = 0 option (querytraceon 2312)

 

Do you think the query below will return more or fewer rows than the query above? Note that I added an AND clause "t1.c1 <> t2.c1". This makes it more restrictive, so it can return no more rows than the previous query. It actually returns 99 rows because one row is filtered out by t1.c1 <> t2.c1.

select t1.c2 from t t1 join t t2 on t1.c2 = t2.c2 and t1.c1 <> t2.c1
where t1.c1 = 0 option (querytraceon 2312)

Let's take a look at the optimizer estimates.

The first query has a very accurate estimate.

[screenshot: query plan showing an accurate estimate for the first query]

Take a look at the estimate below for the second query. The query returns 99 rows but the estimate is 1,000,000 rows. The only difference is that I added one more AND predicate (t1.c1 <> t2.c1) to the second query. It should have reduced the estimate, but it actually made it much larger.

[screenshot: query plan for the second query showing an estimate of 1,000,000 rows under the new CE]

 

Note that the same query gets a fairly low estimate when forcing the old CE with trace flag 9481:

select t1.c2 from t t1 join t t2 on t1.c2 = t2.c2 and t1.c1 <> t2.c1
where t1.c1 = 0 option (querytraceon 9481)

[screenshot: query plan for the same query under the old CE (trace flag 9481) showing a low estimate]

 

It is this behavior that made customer’s original query much slower. This is a bug in new Cardinality Estimator.

The customer called in for help tuning the query. First we had them revert to the old Cardinality Estimator with trace flag 9481, which made the query fast. We knew that we had quite a few fixes in this space and asked the customer to apply SP1 on a test machine, but the query was still slow. So we collected a statistics clone and started to look at the query in house. We were able to reproduce the issue where the new Cardinality Estimator had a very high estimate but the old Cardinality Estimator had a low estimate, even on SP1.

We thought it was a new bug introduced after SP1. But as we looked at the fix more closely, it required trace flag 4199 to be enabled. In fact, all optimizer fixes require 4199 to activate. After enabling 4199, SP1 was able to estimate correctly. The customer tested their original query on SP1 with trace flag 4199 and it ran fast.

Solution

You need to apply SP1 but you must also enable trace flag 4199 in order to activate the fix.
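For reference, here is a minimal sketch of the two common ways to enable trace flag 4199 (it can also be set as the -T4199 startup parameter):

--enable optimizer hotfixes globally
dbcc traceon (4199, -1)
go
--or enable it for a single query, together with the new CE trace flag used in this repro
select t1.c2 from t t1 join t t2 on t1.c2 = t2.c2 and t1.c1 <> t2.c1
where t1.c1 = 0 option (querytraceon 2312, querytraceon 4199)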

SQL Server 2014 Service Pack 1 made various fixes on new Cardinality Estimator (new CE).  The release notes also documents the fixes.

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

 

twitter| pssdiag |Sql Nexus

Not able to use PFX format certificate in SQL Server?


If you have a security certificate in PFX format generated by the Microsoft certificate store, you cannot use it directly in SQL Server. But you can use the PVKConverter.exe tool to convert it to the PVK/DER format that SQL Server can use. The KB article "How to use PFX-formatted certificates in SQL Server" documents how to use this tool.
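For reference, once converted, the certificate and private key files are loaded into SQL Server with CREATE CERTIFICATE. A minimal sketch (the certificate name, file paths and password are hypothetical):

-- Load the converted DER certificate and PVK private key
USE master;
CREATE CERTIFICATE MyServerCert
FROM FILE = N'C:\Certs\MyServerCert.cer'
WITH PRIVATE KEY (
    FILE = N'C:\Certs\MyServerCert.pvk',
    DECRYPTION BY PASSWORD = N'StrongPasswordHere'
);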

We had a customer who reported that they were not able to use their certificate even after they did the conversion. They got various errors like the ones below:

Msg 15297, Level 16, State 56, Line 1 The certificate, asymmetric key, or private key data is invalid.

Msg 15474, Level 16, State 6, Line 8 Invalid private key. The private key does not match the public key of the certificate.

After digging and debugging, we learned that it was because the serial number of their certificate was too long. Currently SQL Server only allows serial numbers of up to 16 bytes, but the customer's certificate had a 19-byte serial number.

You can check your certificate's serial number by using the certutil.exe -dump option, or just use Certificate Manager (certmgr.msc) and check the property details as shown below. In this example, the serial number is exactly 16 bytes.

 

[screenshot: certificate properties showing a 16-byte serial number]

 

Now the question is: why did the customer's certificate have a 19-byte serial number? They told me that they generated the certificate using the Microsoft certificate store.

It turns out that you can actually have some control over the serial number through the HighSerial setting, as documented in "Custom CA Configuration". If you set it to 0 ("certutil -setreg ca\highserial 0"), you will get a 10-byte serial number for future certificate generation (after you configure and restart your certificate service). There are various other options in that document you can explore to control the length and content of your certificate's serial number.

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus

Upcoming end of support for SQL Server 2005


<This is reposted from http://blogs.msdn.com/b/sql_server_team/archive/2015/05/27/upcoming-end-of-support-for-sql-server-2005.aspx>

 

Please note that SQL Server 2005 will be out of extended support on Apr 12, 2016. If you are still running SQL Server 2005 after April 12, 2016, you will no longer receive security updates. Please see http://www.microsoft.com/en-us/server-cloud/products/sql-server-2005/default.aspx for upgrade options.

   

More information about SQL Server 2005 support policy is available at https://support.microsoft.com/en-us/lifecycle/search/default.aspx?alpha=sql%20server%202005&Filter=FilterNO.

   

Product    | Version | SP  | Mainstream Support End Date | Extended Support End Date
-----------|---------|-----|-----------------------------|--------------------------
SQL Server | 2005    | SP4 | 04/12/2011                  | 04/12/2016

Options / Notes: Technical support continues until 04/12/2016, yet mainstream (hotfix) support ended as of 04/12/2011. Options for hotfix support after 04/12/2011:

· Continue with self-help
· Upgrade to the latest supported service pack for SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2
· Extended hotfix support agreement


Getting the latest SQL Server Native Client


If you are installing a Service Pack (SP) or Cumulative Update (CU) for SQL Server, you may notice that the SQL Server Native Client doesn’t get updated.  It may also be difficult to find the item to actually get it updated.

If you go look at the Feature Packs for the Service Packs, or go look at the Cumulative Update downloads, the sqlncli.msi package may not be listed there.  So, how do you get it?

Get it from the SP or CU Package

When you run the SP or CU package, it self-extracts to a GUID-named folder. When it is done extracting, you will see the SQL Setup landing page. I actually want to ignore the Setup landing page for this. But while that is there, we can go into Explorer and browse to the GUID folder.

[screenshot: the extracted GUID folder in Windows Explorer]

From there, you will want to go to the region folder that matches your region.  For me, it will be the 1033 folder.

[screenshot: the language/region folders inside the extracted package]

From there, we can go to \x64\setup\x64 and you should see the sqlncli.msi there. If you are on 32-bit, it will be an x86 folder.

[screenshot: sqlncli.msi inside the \x64\setup\x64 folder]

From there, you can copy the MSI out to where ever you need to run it at.

If you are looking for the 32bit SQL Server Native Client, but are running on a x64 machine, use the x64 MSI.  It will lay down both the 32bit and 64bit Driver/Provider.

 

Adam W. Saxton | Microsoft Business Intelligence Support - Escalation Services
@GuyInACube | YouTube | Facebook.com\guyinacube

How It Works: SQL Server (SQLNCLI11) ODBC Driver–Keyset Cursor


This blog is based on SQL Server 2014 CU7’s updated release of SQLNCLI11 and MSSQLODBC drivers.

The basics behind KEYSET cursor behavior are described here: https://msdn.microsoft.com/en-us/library/windows/desktop/ms675119(v=vs.85).aspx 

The critical part of the referenced link is the keyset cursors ability to see changes in the data.  

When you open the cursor the sp_cursoropen or sp_cursorprepexec procedure is called returning a cursor handle to the ODBC client. You then use the SQLFetch API to retrieve 1 or more rows.  You have a choice of using SQLBindCol or SQLGetData to retrieve the data for the given columns in your result set.

Shown here is a single row (SQLFetch) scenario using SQLGetData for each column instead of bindings.

ODBC Call                   | Client TDS Request                                   | Server TDS Response
----------------------------|------------------------------------------------------|---------------------
SQLPrepare / SQLExecute     | sp_cursorprepexec                                    | Returns cursor handle to client
For each row: SQLFetch      | sp_cursoroption TXTPTR_ONLY (1), then sp_cursorfetch | Returns row data for all columns and only TEXTPTR values for the blob column
For each column: SQLGetData | sp_cursor REFRESH (40), or sp_cursoroption TEXT_DATA (3) followed by sp_cursor REFRESH (40) | Returns row data for all columns and only TEXTPTR values for the blob column; if the column being retrieved is a blob (TXTPTR_ONLY), returns TEXT_DATA and refreshes row data for all columns

Formula: times the same row is returned = (number of columns + 1), that is, a SQLGetData REFRESH for each column plus the original SQLFetch.

The chatty nature of REFRESH as each column is retrieved surprised me a bit until I did my own testing. You can fetch column 2, change column 3 at the server, and then allow the client to call SQLGetData for column 3, and you see the new data. Without the REFRESH call, the intent of the KEYSET cursor to handle data changes could not be accomplished.

Here is an annotated example.

select iID, strData, tData, dtEntry from tblTest

This results in the following behavior:

  1. SQLFetch – Return data for the row =  int, varchar, text_ptr, datetime
  2. SQLGetData(iID) - Return data for the row =  int, varchar, text_ptr, datetime
  3. SQLGetData(strData) - Return data for the row =  int, varchar, text_ptr, datetime
  4. SQLGetData(tData) - Return data for the row =  int, varchar, text_data, datetime
  5. SQLGetData(dtEntry) - Return data for the row =  int, varchar, *text_ptr, datetime

The same row data is returned to the client 5 times but the actual blob is only streamed beginning at step #4.

*As part of the investigation I found a bug. Once the blob column retrieves the TEXT_DATA, the driver fails to issue TXTPTR_ONLY for subsequent column data retrieval. The 5th event above would actually return the entire blob instead of the pointer. To avoid the additional TDS overhead, place your blob column at the end of the column list.

Note: Prior to the SQL 2014 CU7 fix the SQL Server ODBC drivers may not set the TEXT_PTR option.  This issue caused each REFRESH and the original FETCH to return the entire blob stream, impacting network bandwidth and performance. 

Optimizing The Round Trips

The way to optimize this is to use column bindings and SQLSetPos with a refresh option only when required.

Using SQLBindCol before the SQLFetch allows column data to be retrieved during the SQLFetch call. Indicating a reasonable buffer size for the blob columns is the next step in your optimization. Perhaps you can use a 4K buffer to retrieve the majority of your blobs, then use a local memcpy of the actual blob into your final destination. An indirect column binding, if you will.

If the temporary blob buffer is too small, SQLFetch returns SUCCESS_WITH_INFO indicating string right truncation. Looking at the bound length indicators, you can determine which of the columns was too small. Rebind the column and use SQLSetPos with a refresh to retrieve the data. (Resetting the binding may be necessary before the next SQLFetch invocation.)

BYTE bBlob[4096];
SQLINTEGER iBlobState = 0;
SQLINTEGER istrDataState = 0;
WCHAR strData[65];
int iData;
WCHAR strData2[65];

sqlrc = SQLBindCol(hstmt, 1, SQL_C_BINARY, bBlob, 1024, &iBlobState);    // Initial 1K for sample
sqlrc = SQLBindCol(hstmt, 2, SQL_C_WCHAR, strData, 64, &istrDataState);
sqlrc = SQLBindCol(hstmt, 3, SQL_INTEGER, &iData, 0, NULL);
sqlrc = SQLBindCol(hstmt, 4, SQL_C_WCHAR, strData2, 64, NULL);

// Retrieves data from SQL Server, streaming the entire TEXT_DATA;
// you want this to succeed the majority of the time!
sqlrc = SQLFetch(hstmt);

if (true == truncated)   // e.g. SQL_SUCCESS_WITH_INFO with string right truncation on the blob column
{
    // Allocate a new buffer as needed and rebind using the reported length
    sqlrc = SQLBindCol(hstmt, 1, SQL_C_BINARY, bBlob, iBlobState, &iBlobState);

    // Round trip to SQL Server, streaming the entire TEXT_DATA.
    // Avoid this path as much as possible to optimize performance.
    sqlrc = SQLSetPos(hstmt, iRow, SQL_REFRESH, SQL_LOCK_NO_CHANGE);
}

// Final storage: local copy instead of an across-the-wire retrieval for the actual length
memcpy(bFinal, bBlob, iBlobState);

The optimization technique allows you to retrieve the majority of rows in a single network round trip. Using SQLGetData for data retrieval incurs the additional overhead described above.

Binding Array

ODBC allows binding of an array for a column and a multiple-row fetch operation. This can also be helpful but requires a bit more work to retrieve truncated blob data. In order to retrieve a specific row that encountered a short transfer, clear the current bindings, set the rowset position for the cursor, and use SQLGetData.

BYTE bBlob[2][64];
SQLINTEGER iBlobState[2];
WCHAR strData[2][64];
int iData;
WCHAR strData2[2][64];

sqlrc = SQLBindCol(*hstmt, 1, SQL_C_BINARY, bBlob, 64, iBlobState);
sqlrc = SQLBindCol(*hstmt, 2, SQL_C_WCHAR, strData, 128, NULL);
sqlrc = SQLBindCol(*hstmt, 3, SQL_INTEGER, &iData, 0, NULL);
sqlrc = SQLBindCol(*hstmt, 4, SQL_C_WCHAR, strData2, 128, NULL);

sqlrc = SQLFetch(*hstmt);

if (SQL_SUCCESS_WITH_INFO == sqlrc /* and iBlobState indicates a short blob transfer */)
{
    //======================================================================
    //   Clear all bindings so we don't have any references to mess up memory
    //
    //   NOTE: This means the SQLGetData can be different from the original fetch data
    //======================================================================
    sqlrc = SQLBindCol(*hstmt, 1, SQL_C_BINARY, NULL, 0, NULL);
    sqlrc = SQLBindCol(*hstmt, 2, SQL_C_WCHAR, NULL, 0, NULL);
    sqlrc = SQLBindCol(*hstmt, 3, SQL_INTEGER, NULL, 0, NULL);
    sqlrc = SQLBindCol(*hstmt, 4, SQL_C_WCHAR, NULL, 0, NULL);

    //   Update position to the row with the short transfer of the blob
    sqlrc = SQLSetPos(*hstmt, 2, SQL_POSITION, SQL_LOCK_NO_CHANGE);

    BYTE bBlob2[64] = {};

    //   Get the blob that was short into a new buffer, not the original array
    sqlrc = SQLGetData(*hstmt, 1, SQL_C_BINARY, bBlob2, 64, NULL);

    //   Advance back to the last row of the rowset so the next SQLFetch call moves to the next set of rows
    sqlrc = SQLSetPos(*hstmt, rows_fetched, SQL_POSITION, SQL_LOCK_NO_CHANGE);
}

//   FIX BINDINGS AND LOOP ON NEXT FETCH GROUP

Make sure you have the correct driver version installed: http://blogs.msdn.com/b/psssql/archive/2015/07/14/getting-the-latest-sql-server-native-client.aspx

Bob Dorr - Principal SQL Server Escalation Engineer

Sparse Files – Supported on both NTFS and REFS


The history of sparse file support capabilities has led to confusion that I would like to clear up.

Sparse files ARE SUPPORTED on NTFS and REFS.    SQL Server 2014 takes full advantage of the sparse support for online DBCC and database snapshots.

There are older blogs indicating limited support for sparse files on REFS; those posts are outdated.  REFS supports sparse files and does not encounter the limitation (operating system error 665) seen on NTFS deployments.

Confusion seems to stem from the limited support in SQL Server 2012.    The REFS file system does not support large secondary streams.   Online DBCC uses a snapshot approach and, prior to SQL Server 2014, placed that information in a secondary stream.   Since the stream cannot support SQL Server file sizes, online DBCC could not be supported on REFS.

SQL Server 2014 creates a separate file, in the same location as the parent file, and marks it for sparse allocations when creating snapshot targets.   This allows full support of Online DBCC and snapshot databases stored on REFS.

Another point of confusion is the reporting in Windows File Explorer.   There are conditions where File Explorer indicates the same size on disk as the allocated size for a sparse file.   This is incorrect, as the file on disk is often smaller than the target allocation.
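If you want to see the real numbers without relying on File Explorer, compare the file's logical size with the bytes actually allocated on disk.  The sketch below is my own Win32/C illustration, not part of the original post, and the snapshot file path is a hypothetical placeholder; fsutil sparse queryflag <file> reports the sparse attribute from the command line as well.

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    LPCWSTR path = L"C:\\Data\\MySnapshot.ss";   // hypothetical sparse snapshot file

    WIN32_FILE_ATTRIBUTE_DATA fad;
    if (!GetFileAttributesExW(path, GetFileExInfoStandard, &fad))
    {
        wprintf(L"GetFileAttributesEx failed: %lu\n", GetLastError());
        return 1;
    }

    ULARGE_INTEGER logical;                       // logical (target) size of the file
    logical.HighPart = fad.nFileSizeHigh;
    logical.LowPart  = fad.nFileSizeLow;

    ULARGE_INTEGER onDisk;                        // bytes actually allocated on disk
    onDisk.LowPart = GetCompressedFileSizeW(path, &onDisk.HighPart);
    if (onDisk.LowPart == INVALID_FILE_SIZE && GetLastError() != NO_ERROR)
    {
        wprintf(L"GetCompressedFileSize failed: %lu\n", GetLastError());
        return 1;
    }

    wprintf(L"Sparse attribute set: %s\n",
            (fad.dwFileAttributes & FILE_ATTRIBUTE_SPARSE_FILE) ? L"yes" : L"no");
    wprintf(L"Logical size : %llu bytes\n", logical.QuadPart);
    wprintf(L"Size on disk : %llu bytes\n", onDisk.QuadPart);
    return 0;
}

For a sparse snapshot file the two sizes can differ significantly even when File Explorer reports them as equal.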

 

Bob Dorr -  Principal SQL Server Escalation Engineer

What build of SQL Server are you using?


As a person administering or supporting a SQL Server install base, you will be asked this question very frequently: which build of SQL Server are you using? If all the SQL Server instances you manage are at the same build level, you will know exactly what that build number corresponds to. You also need to be aware of how a specific build number translates to its corresponding service pack level and cumulative update/security update level. But in reality you are managing multiple versions of the product at different service pack and cumulative update/security update levels. So normally people either go to the internet to look up a build number or create quick-reference cheat sheets for the build numbers frequently used in their organization.

How will you react if we told you that starting today you will not need to worry about all that anymore? Extremely excited? Yes. We are too. Download and install the CU released today and you will notice what we are talking about. Starting with this month's CU – SQL Server 2012 Service Pack 2 Cumulative Update 7 – you will notice a very visible change in two places: SELECT @@VERSION and the SQL Server Error log.

Here is a quick snippet of the outputs with the change highlighted:

[Screenshots: SELECT @@VERSION output and the SQL Server Error log startup entry, each with the servicing update level highlighted]

So now, with this change, you will be able to quickly identify the servicing update level of your SQL Server installations: the version of the product, the service pack level, and the cumulative update or security update level.

You will notice that this change will propagate to all future servicing updates released from this point onwards. This was the outcome of great collaboration between the product group and the support team, combined with great feedback from community members. Keep your feedback flowing and we can continue to enhance this information and make it available through other interfaces in the product, making identification and inventory management easier for you.

Suresh Kandoth [SQL Server Escalation Team – Microsoft]

Error 17053 when using third party network storage device / SMB file share


Starting with SQL Server 2012, we support creating databases on a remote SMB file share without requiring a trace flag.  You can even configure a SQL Server cluster to use an SMB file share for databases. This is documented here.

When creating or opening data or log files, SQL Server calls various file manipulation APIs, including a Win32 API called DeviceIoControl, to send commands to the device driver for various operations.

I want to bring to your attention that not every third party SMB device supports every device I/O control code SQL Server uses when calling DeviceIoControl.

Recently we worked with a customer who was configuring to use a third party network attached storage device to store all their databases remotely on an SMB share.

They kept receiving the following error whenever they performed any of the following:

  1. restarting SQL Server
  2. creating a new database
  3. marking a database online

2015-06-04 13:14:19.97 spid9s      Error: 17053, Severity: 16, State: 1.
2015-06-04 13:14:19.97 spid9s      DoDevIoCtlOut() GetOverlappedResult() : Operating system error 1(Incorrect function.) encountered.

Through debugging, we learned that the failure came when SQL Server called DeviceIoControl with the control code FSCTL_FILESYSTEM_GET_STATISTICS.  This call is made when creating or opening any data or log file.

This particular third party device driver, however, doesn’t support the control code FSCTL_FILESYSTEM_GET_STATISTICS.  In other words, the error occurred because the storage driver rejected that control code in the DeviceIoControl call.

If you experience similar errors with a third party device, first test whether you can create a database on a Windows file share without error.  If it works on a Windows file share but not on the vendor’s SMB share, please contact your storage vendor to verify what they support.
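If you want to confirm that this specific control code is the one being rejected, a small standalone probe can help.  The sketch below is my own illustration, not from the original case, and the UNC path is a hypothetical placeholder: it opens a file on the share and issues FSCTL_FILESYSTEM_GET_STATISTICS, the same control code SQL Server sends when opening data and log files.  On a device that does not implement the code, the call fails with operating system error 1 (ERROR_INVALID_FUNCTION), the error quoted in the 17053 message above.

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int wmain(void)
{
    LPCWSTR path = L"\\\\NASDevice\\SQLShare\\probe.dat";   // hypothetical file on the SMB share

    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        wprintf(L"CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    BYTE  buffer[64 * 1024];    // receives an array of FILESYSTEM_STATISTICS (one entry per processor)
    DWORD bytesReturned = 0;
    BOOL  ok = DeviceIoControl(h, FSCTL_FILESYSTEM_GET_STATISTICS,
                               NULL, 0, buffer, sizeof(buffer), &bytesReturned, NULL);

    if (ok || GetLastError() == ERROR_MORE_DATA)
        wprintf(L"FSCTL_FILESYSTEM_GET_STATISTICS supported (%lu bytes returned)\n", bytesReturned);
    else
        wprintf(L"FSCTL_FILESYSTEM_GET_STATISTICS failed: %lu\n", GetLastError());  // 1 = ERROR_INVALID_FUNCTION ("Incorrect function.")

    CloseHandle(h);
    return 0;
}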

A few very important notes:

  1. If the device doesn’t support the I/O control code FSCTL_FILESYSTEM_GET_STATISTICS, the ramifications differ depending on which file system you use:
  • If you are using NTFS for SQL Server data and log files, this error can be safely ignored.
  • But if you are using the ReFS file system (Resilient File System, supported by SQL Server 2014 and above), ignoring this error can lead to serious performance degradation because many optimizations will be skipped by SQL Server.
  2. Not all 17053 errors are the same.  In this case we debugged and discovered it is the I/O control code FSCTL_FILESYSTEM_GET_STATISTICS that is not supported; the same error can be raised in other situations.  The situation described here is very specific to a particular device vendor that supports SMB shares with their devices for SQL Server.  If you are not clear what to do, please open a support call with us.

References

    1. DeviceIoControl API:  https://msdn.microsoft.com/en-us/library/windows/desktop/aa363216(v=vs.85).aspx
    2. Device code FSCTL_FILESYSTEM_GET_STATISTICS:  https://msdn.microsoft.com/en-us/library/windows/desktop/aa364565(v=vs.85).aspx
    3. SQL Server support of SMB file share: https://msdn.microsoft.com/en-us/library/hh759341(v=sql.120).aspx

     

Jack Li | Senior Escalation Engineer | Microsoft SQL Server

twitter | pssdiag | SQL Nexus
