Friday, May 20, 2016

DBCC CHECKDB
Use and Abuse

 

This description of DBCC CHECKDB and how to use it is an update of an article I wrote on SQL Server Pro several years ago.

When I first wrote this article I was in the process of recruiting a senior DBA and I was surprised at the lack of knowledge about database corruption and how to deal with it!

Most candidates were aware that they should run DBCC CHECKDB to check for corruption, but most also thought that the solution should be to run it again with one of the repair options! This is dangerous, as it can cause data loss.

So here is how to use DBCC CHECKDB, and what to do when you have database corruption.

So How Do I Use It?

The primary purpose of DBCC CHECKDB is to check for consistency errors, and it should ideally be run every day.

The basic syntax is:

DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS

NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.

This may take a long time on large databases and you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures, but omits the internal logical checks. The syntax is:

DBCC CHECKDB ('YourDatabase') WITH PHYSICAL_ONLY

It Has Found A Problem - What Do I Do?

This article is about the approach to take to recover the corrupt data, but don't forget to investigate the cause. Corruption, thankfully, is very rare but make sure your disks and/or SAN are checked thoroughly to identify the cause. It is also worth looking at the Windows event logs - sometimes you will see an error message that helps your investigation.

By far the best option for fixing the corruption is to restore from a backup, but let's look at how you investigate which pages are affected and what type of data is affected:

Look at the output from DBCC CHECKDB. You may see something like this:

Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'yourtable' (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'yourdb'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (YourDatabase).

From this you can see which page is corrupted (1:94299).

The first thing to do is check if it is data in a heap, in a clustered index, or in a non-clustered index. In the above text you can see it is index ID 0. You could also examine the page (1:94299 in database 'YourDatabase') as follows:

DBCC TRACEON (3604, -1)
GO
DBCC PAGE('YourDatabase', 1, 94299, 3)
GO

In the output you will see something like:

Metadata: IndexId = n

If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.
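
If you want to confirm which table and index the IDs in the CHECKDB output refer to, here is a minimal sketch (it assumes the database and the Object ID from the example output above; substitute the values from your own output):

USE YourDatabase
GO
-- Map the Object ID from the CHECKDB output to a table name
SELECT OBJECT_NAME(2088535921) AS TableName
GO
-- 0 = heap, 1 = clustered index, greater than 1 = non-clustered index
SELECT name, index_id, type_desc
FROM sys.indexes
WHERE object_id = 2088535921
GO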

Restoring from a backup

If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can back up the tail of the log, perform a restore (with NORECOVERY) from the last clean full backup, followed by subsequent log backups and finally the tail of the log.
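
Here is a minimal sketch of that sequence (the backup file names and paths are illustrative only):

-- Back up the tail of the log on the damaged database
BACKUP LOG YourDatabase TO DISK = 'C:\YourDatabase_tail.trn' WITH NO_TRUNCATE, NORECOVERY

-- Restore the last clean full backup, then the log chain, then the tail
RESTORE DATABASE YourDatabase FROM DISK = 'C:\YourDatabase_full.bak' WITH NORECOVERY
RESTORE LOG YourDatabase FROM DISK = 'C:\YourDatabase_log1.trn' WITH NORECOVERY
RESTORE LOG YourDatabase FROM DISK = 'C:\YourDatabase_tail.trn' WITH RECOVERY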

If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:

RESTORE DATABASE YourDatabase PAGE = '1:94299' FROM DISK = 'C:\YourDatabase.bak' WITH NORECOVERY
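
The page restore is not complete until the page has been rolled forward. A minimal sketch of the remaining steps (file names are illustrative only): apply the existing log backups, take a fresh log backup, apply it, then recover:

RESTORE LOG YourDatabase FROM DISK = 'C:\YourDatabase_log1.trn' WITH NORECOVERY

BACKUP LOG YourDatabase TO DISK = 'C:\YourDatabase_pagefix.trn'

RESTORE LOG YourDatabase FROM DISK = 'C:\YourDatabase_pagefix.trn' WITH NORECOVERY
RESTORE DATABASE YourDatabase WITH RECOVERY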

If the recovery model is simple you don't have that option, and have to accept that a restore from the last full backup will result in subsequent transactions being lost. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.

Automatic Repair Options

First let me emphasise the importance of running a backup BEFORE you go any further.

Have a look at the output of the original CHECKDB. It will specify the minimum repair level.

REPAIR_REBUILD

If the minimum repair level is REPAIR_REBUILD you have been lucky.

The syntax is:

DBCC CHECKDB('DB Name', REPAIR_REBUILD)

REPAIR_ALLOW_DATA_LOSS

This attempts to repair all errors. Sometimes the only way to repair an error is to deallocate the affected page and modify page links so that it looks like the page never existed. This has the desired effect of restoring the database's structural integrity but means that something has been deleted (hence the ALLOW_DATA_LOSS). There are likely to be issues with referential integrity, not to mention the important data that may now be missing.

The syntax is:

DBCC CHECKDB('DB Name', REPAIR_ALLOW_DATA_LOSS)
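
Note that the repair options can only run when the database is in single-user mode. A minimal sketch, assuming a database named YourDatabase:

ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DBCC CHECKDB ('YourDatabase', REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE YourDatabase SET MULTI_USER
GO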

Make sure you run DBCC CHECKCONSTRAINTS afterwards so you are aware of referential integrity issues and can take the appropriate action.
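
For example, to check every constraint in the repaired database:

USE YourDatabase
GO
DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS
GO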

And Finally

My original reason for writing this was to stress that the correct action when faced with corruption is nearly always to restore from a backup. Only use the automatic repair options as a last resort, and with full understanding of the damage this may do.

Just as important is that regular backups are an essential part of a DBA's responsibilities, and you should use the FULL recovery model with regular log backups for all but the most trivial databases.

DBCC CHECKDB is a powerful tool, but also very dangerous in the wrong hands.

Maybe instead of adding the REPAIR_ALLOW_DATA_LOSS option, Microsoft should have created a separate DBCC command called:

DBCC DELETE_DATA_TO_FIX_CORRUPTION

A bit wordy, but DBAs would be under no illusion about exactly what damage they could be doing, and it may make them think twice before running it!

 

Monday, May 2, 2016

Uninstall of SQL fails with error about RsFx Driver


I was working on uninstalling installations of SQL 2012 and SQL 2008 R2 so that I could later do a clean install of SQL 2012. All seemed to be going relatively smoothly until I encountered the following message:
Warning 26003. Microsoft SQL Server 2008 R2 Setup 
Support Files cannot be uninstalled because the 
following products are installed: 
Microsoft SQL Server 2008 R2 RsFx Driver

Understandable; so I went to uninstall it using Control Panel\Programs\Programs and Features (Add/Remove Programs for the old school). However, the item wasn't listed. After searching for how to do this via the registry I found a simpler way, which I am now calling a clean and supported way of doing it.
Steps to correct this issue:
We need to obtain the product GUID from WMI using WMIC so that we can use the MSIEXEC /X {GUID} command to remove the software.
Open a command prompt as administrator and type: WMIC PRODUCT LIST to get a list of products and the GUID associated with them. I found using the following command works best: WMIC PRODUCT GET Caption, IdentifyingNumber > c:\info.txt 

Once you have the GUID of the software you want to uninstall you simply type: MSIEXEC /X {GUID}
Software is now uninstalled and you can proceed.
*Note* The steps provided can be used for any software you are looking to uninstall that is not listed under Add/Remove Programs. Just follow the steps as such:
  1. Get the product GUID from WMI (Win32_Product class)
  2. Find the GUID related to the product or products that the SQL error mentions
  3. Run MSIEXEC /X {GUID} for each of the products to uninstall them manually.

Tuesday, January 12, 2016

TEMP DATABASE

TEMPDB !!  Did we know everything ??

Doesn't it feel bad when you thought you knew everything about something!! And ... hold on ...
The myth just got busted!!

It happened to me :) and it was nothing other than our own SQL tempdb..!!  Hope the below helps every one of my blog readers in some way (will keep it short)!!

What’s Tempdb??

<< General Overview>>
The tempdb is a temporary workspace. Among other uses, SQL Server uses the tempdb for: 
·         Storage of explicitly created temporary tables.
·         Worktables that hold intermediate results created during query processing and sorting.
·         Materialized static cursors.

<< Let's go deep >>

Everything doesn't get logged : SQL Server records only enough information in the tempdb transaction log to roll back a transaction. REDO operations are simply never needed ... Why?? Because tempdb is re-created every time you restart SQL Server; therefore, it never has any transactions to roll forward or roll back. This feature helps in 2 ways!!
·         Increases the performance of INSERT statements
·         Saves us Log Space

Shrinking is not fun !! : Tempdb is the one database which doesn't like being shrunk easily. To shrink tempdb you need to make sure it doesn't have any active transactions going on.

2005 Management Studio is Screwed !!: Never believe the tempdb size displayed by 2005 Management Studio ( 2008 is good ). It does not show the correct size of the tempdb files after a shrink operation. The 'Currently allocated space' value is always pulled from the sys.master_files DMV, and this value never gets updated after a shrink operation.

To find the correct size, use the query below:
use tempdb
select (size*8) as FileSizeKB from sys.database_files


Shrink the tempdb in SQL Server !!


Let's start -- were you aware of the below??

Have you come across a situation where you shrink tempdb (with any of the 3 options mentioned below) and ... it just doesn't work!! Well!! Did you know the below?

"If you run DBCC SHRINKDATABASE/DBCC SHRINKFILE, no other activity should be occurring with the tempdb database." 


If you have any active connections to tempdb it simply won't shrink, or it will fail with one of the errors below:

Server: Msg 2501, Level 16, State 1, Line 1 Could not find table named '1525580473'. Check sysobjects.
-or-

Server: Msg 8909, Level 16, State 1, Line 0 Table Corrupt: Object ID 1, index ID 0, page ID %S_PGID. The PageId in the page header = %S_PGID.


So make sure these:

·         To stop other processes from using tempdb while DBCC SHRINKDATABASE is run, start SQL Server in single-user mode (safest).
           OR

·         Check whether there are any active transactions in tempdb (using DBCC OPENTRAN, as shown below); if not, go ahead keeping your fingers crossed.
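
A quick check before attempting the shrink:

DBCC OPENTRAN ('tempdb')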


Back to Point


TempDB can be shrunk with any of the 3 options below (choose your best!!)

Method 1


This method requires you to restart SQL Server.

1.    Stop SQL Server. Open a command prompt, and then start SQL Server by typing the following command:

sqlservr -c -f 

The -c and -f parameters cause SQL Server to start in a minimum configuration mode with a tempdb size of 1 MB for the data file and 0.5 MB for the log file.

2.    Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:

ALTER DATABASE tempdb MODIFY FILE
(NAME = 'tempdev', SIZE = target_size_in_MB)
--Desired target size for the data file

ALTER DATABASE tempdb MODIFY FILE
(NAME = 'templog', SIZE = target_size_in_MB)
--Desired target size for the log file
    
3.    Stop SQL Server by pressing Ctrl-C at the command prompt window, restart SQL Server as a service, and then verify the size of the Tempdb.mdf and Templog.ldf files.

Limitations:

·         Only operates on the default tempdb logical files, tempdev and templog
·         Additional files can be shrunk with the other 2 methods (below) once the SQL Server service starts
·         All tempdb files are re-created during start-up; therefore, they are empty and any additional files can be removed using the ALTER DATABASE ... REMOVE FILE option (see the sketch below)
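
A minimal sketch of removing an extra, now empty, tempdb data file (the file name tempdev2 is hypothetical):

ALTER DATABASE tempdb REMOVE FILE tempdev2
GO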

Method 2

Use the DBCC SHRINKDATABASE command to shrink the tempdb database as a whole. DBCC SHRINKDATABASE receives the parameter target_percent, which is the desired percentage of free space left in the database file after the database is shrunk. If you use DBCC SHRINKDATABASE, you may have to restart SQL Server.

1.    Determine the space currently used in tempdb by using the sp_spaceused stored procedure. Then, calculate the percentage of free space left for use as a parameter to DBCC SHRINKDATABASE; this calculation is based on the desired database size.

Note: In some cases you may have to execute sp_spaceused @updateusage = 'true' to recalculate the space used and to obtain an updated report. Refer to SQL Server Books Online for more information about the sp_spaceused stored procedure.

Consider this example:
Assume that tempdb has two files, the primary data file (Tempdb.mdf), which is 100 MB in size and the log file (Tempdb.ldf), which is 30 MB. Assume that sp_spaceused reports that the primary data file contains 60 MB of data. Also assume that you want to shrink the primary data file to 80 MB. Calculate the desired percentage of free space left after the shrink, 80 MB - 60 MB = 20 MB. Now, divide 20 MB by 80 MB = 25% and that is your target_percent. The transaction log file is shrunk accordingly, leaving 25% or 20 MB of space free after the database is shrunk.
2.    Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands:

dbcc shrinkdatabase (tempdb, 'target percent')
   -- This command shrinks the tempdb database as a whole
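
Continuing the worked example above (a target_percent of 25), the call would be:

dbcc shrinkdatabase (tempdb, 25)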


·         The target size for data and log files cannot be smaller than the size specified when the database was created ( OR redefined with ALTER DATABASE or DBCC Shrinkfile)
·         A limitation of DBCC SHRINKDATABASE is the calculation of the target_percent parameter and its dependency on the current space used.

Method 3
Use the command DBCC SHRINKFILE to shrink the individual tempdb files. DBCC SHRINKFILE provides more flexibility than DBCC SHRINKDATABASE because you can use it on a single database file without affecting other files that belong to the same database. DBCC SHRINKFILE receives the target size parameter, which is the desired final size for the database file. 
  1. Determine the desired size for the primary data file (tempdb.mdf), the log file (templog.ldf), and/or additional files added to tempdb. Make sure that the space used in the files is less than or equal to the desired target size.
  2. Connect to SQL Server with Query Analyzer, and then run the following Transact-SQL commands for the specific database files that you need to shrink:

use tempdb
go
dbcc shrinkfile (tempdev, 'target size in MB')
go
-- This command shrinks the primary data file

dbcc shrinkfile (templog, 'target size in MB')
go
-- This command shrinks the log file, look at the last paragraph.
     



Advantage

·         Can reduce the size of a file to a size smaller than its original size. 
·         Can be issued on any of the data or log files.

Limitations

·          You cannot make the database smaller than the size of the model database. 


Look out for my next ... !! Something more on 

TempDB !!


Saturday, December 26, 2015

SQL Server 2012 and 2014 New Features



1) What are the main new features introduced in SQL Server 2012?

Below are some of the important features introduced in SQL Server 2012.

    Column store indexes
    Sequence objects
    Contained database
    User defined Server roles
    Windows server core support
    DQS Data quality services
    Tabular Model (SSAS)
    Always ON
    Pagination
    Error handling

2) What are Columnstore Indexes?

A columnstore index stores data in a column-wise (columnar) format, unlike the traditional B-tree structures used for clustered and nonclustered rowstore indexes, which store data row-wise (in rows). A columnstore index organizes the data in individual columns that are joined together to form the index. This structure can offer significant performance gains for queries that summarize large quantities of data, the sort typically used for business intelligence (BI) and data warehousing.
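
A minimal sketch of creating one in SQL Server 2012 (the table and column names are hypothetical):

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount)
GO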

3) What are the benefits of Columnstore Indexes?

The benefits of using a non-clustered columnstore index are:

    Only the columns needed to solve a query are fetched from disk (this is often fewer than 15% of the columns in a typical fact table)
    It is easier to compress the data due to the redundancy of data within a column
    Buffer hit rates are improved because data is highly compressed, and frequently accessed parts of commonly used columns remain in memory, while infrequently used parts are paged out.

4) What are the Limitations of Columnstore Indexes?

Columnstore indexes have the limitations below:

Replication cannot be implemented.

Indexed views cannot be applied.

Columnstore indexes do not support the following data types:

    decimal with precision greater than 18 digits
    binary and varbinary
    BLOB
    CLR
    (n)varchar(max)
    datetimeoffset with precision greater than 2

5) What is Sequence in SQL Server 2012?

SEQUENCE is one of the new features introduced in SQL Server 2012. A sequence is a user-defined object and, as the name suggests, it generates a sequence of numeric values according to the properties with which it is created. A sequence is an object in SQL Server (Transact-SQL) that is used to generate a number sequence. This can be useful when you need to create a unique number to act as a primary key.
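
A minimal sketch of creating and using a sequence (the sequence name is hypothetical):

CREATE SEQUENCE dbo.OrderNumbers
    START WITH 1
    INCREMENT BY 1
GO

-- Get the next value directly (or use NEXT VALUE FOR as a column default)
SELECT NEXT VALUE FOR dbo.OrderNumbers
GO

-- Reserve a range of five sequential numbers in one call
DECLARE @first sql_variant
EXEC sp_sequence_get_range
     @sequence_name = N'dbo.OrderNumbers',
     @range_size = 5,
     @range_first_value = @first OUTPUT
SELECT @first AS FirstReservedValue
GO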

6) When can we use Sequence in place of Identity column?

    The application requires a number before the insert into the table is made.
    The application requires sharing a single series of numbers between multiple tables or multiple columns within a table.
    The application must restart the number series when a specified number is reached. For example, after assigning values 1 through 10, the application starts assigning values 1 through 10 again.
    The application requires sequence values to be sorted by another field. The NEXT VALUE FOR function can apply the OVER clause to the function call. The OVER clause guarantees that the values returned are generated in the order of the OVER clause’s ORDER BY clause.
    An application requires multiple numbers to be assigned at the same time. For example, an application needs to reserve five sequential numbers. Requesting identity values could result in gaps in the series if other processes were simultaneously issued numbers. Calling sp_sequence_get_range can retrieve several numbers in the sequence at once.
    You need to change the specification of the sequence, such as the increment value.

7) What is the difference between Identity and Sequence objects?

Differences between Identity and Sequence are:

    Sequence is used to generate database-wide sequential numbers, whereas an identity column is tied to a table.
    A sequence is not associated with a table.
    The same sequence can be used in multiple tables.
    It can be used in an insert statement to insert identity-style values; it can also be used in T-SQL scripts.

8) What is contained database in SQL Server 2012?

A contained database basically includes all database settings and the metadata within itself thereby resulting in no configuration dependencies on the instance of the SQL Server Database Engine where the database is actually installed. Users will be able to connect to a contained database without authenticating a login at the Database Engine level. This feature really helps to isolate the database from the Database Engine thereby making it possible to easily move the database from one instance of SQL Server to another.

9) Is it possible to create User defined server roles in SQL Server 2012?

Yes, SQL Server 2012 provides the capability of creating User defined Server roles.

10) What is Always ON Feature in SQL Server 2012?

AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases that fail over together. An availability group supports a set of read-write primary databases and one to four sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.

11) What are the benefits of Always ON feature in SQL Server 2012?

    Utilizing database mirroring for the data transfer over TCP/IP
    Providing a combination of synchronous and asynchronous mirroring
    Providing a logical grouping of similar databases via Availability Groups
    Creating up to four readable secondary replicas
    Allowing backups to be undertaken on a secondary replica
    Performing DBCC statements against a secondary replica
    Employing built-in compression & encryption

12) What is Windows Server core support in SQL Server 2012?

Windows Server Core is one of the flavors of the Windows operating system: a GUI-less version of Windows. When you boot into Server Core you are presented with a simple command prompt rather than the Start menu, program files and a crowded desktop of shortcuts. Because only the necessary services are enabled, there is lower memory consumption, simplified management (as many features are not enabled) and greater stability.

SQL Server 2012 supports the Windows Core Operating system.

13) In which edition of SQL Server 2012, auditing feature is available?

In SQL Server 2012, support for server auditing is expanded to include all editions of SQL Server.

14) What are the enhancements in terms of SQL Server Analysis services?

    Analysis Services will include a new BI Semantic Model (BISM). BISM is a 3-layer model that includes:
        Data Model
        Business Logic
        Data Access
    BISM will enhance Microsoft's front-end analysis experience, including Excel, Reporting Services and SharePoint Insights. Microsoft has said that BISM is not a replacement for the current BI models but more of an alternative model. In simple terms, BISM is a relational model that includes BI artifacts such as KPIs and hierarchies.

15) What are DQS Data quality services?

The data-quality solution provided by Data Quality Services (DQS) enables a data steward or IT professional to maintain the quality of their data and ensure that the data is suited for its business usage. DQS is a knowledge-driven solution that provides both computer-assisted and interactive ways to manage the integrity and quality of your data sources. DQS enables you to discover, build, and manage knowledge about your data. You can then use that knowledge to perform data cleansing, matching, and profiling. You can also leverage the cloud-based services of reference data providers in a DQS data-quality project.

16) What are the features provided by DQS to resolve data quality issues?

DQS provides the following features to resolve data quality issues.

    Data Cleansing: the modification, removal, or enrichment of data that is incorrect or incomplete, using both computer-assisted and interactive processes. For more information, see Data Cleansing.
    Matching: the identification of semantic duplicates in a rules-based process that enables you to determine what constitutes a match and perform de-duplication. For more information, see Data Matching.
    Reference Data Services: verification of the quality of your data using the services of a reference data provider. You can use reference data services from Windows Azure Marketplace DataMarket to easily cleanse, validate, match, and enrich data. For more information, see Reference Data Services in DQS.
    Profiling: the analysis of a data source to provide insight into the quality of the data at every stage in the knowledge discovery, domain management, matching, and data cleansing processes. Profiling is a powerful tool in a DQS data quality solution. You can create a data quality solution in which profiling is just as important as knowledge management, matching, or data cleansing. For more information, see Data Profiling and Notifications in DQS.
    Monitoring: the tracking and determination of the state of data quality activities. Monitoring enables you to verify that your data quality solution is doing what it was designed to do. For more information, see DQS Administration.
    Knowledge Base: Data Quality Services is a knowledge-driven solution that analyzes data based upon knowledge that you build with DQS. This enables you to create data quality processes that continually enhance the knowledge about your data and, in so doing, continually improve the quality of your data.

17) What are the components of DQS?

Data Quality Services consists of Data Quality Server and Data Quality Client. These components enable you to perform data quality services separately from other SQL Server operations. Both are installed from within the SQL Server setup program.

18) What is the codename of SQL Server 2012?

SQL Server 2012 was code-named Denali.

19) What is the codename of SQL Server 2014?

SQL Server 2014 is commonly referred to by the codename Hekaton, although strictly speaking Hekaton was the codename of its In-Memory OLTP engine.

20) What are the main new features introduced in SQL Server 2014?

Below are some of the important features introduced in SQL Server 2014.

    In-Memory OLTP
    Managed Backup to Azure
    Azure VMs for Availability replicas
    SQL Server Data Files in Azure
    Updateable columnstore indexes
    Delayed durability
    SSD buffer pool extension
    Incremental statistics

21) What is In-Memory OLTP feature in SQL Server 2014?

In-Memory OLTP is a new feature in SQL Server 2014 for OLTP workloads to significantly improve performance and reduce processing time when you have a plentiful amount of memory and numerous multi-core processors. For a memory optimized table, all data is stored in memory and hence unlike disk based tables, pages don’t need to be brought into the buffer pool or cache. For data persistence of memory optimized tables, the In-memory OLTP engine creates a set of checkpoint files on a filestream filegroup that keeps track of changes to the data in an append only mode and uses this during recovery and the restore process.
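
A minimal sketch of creating a memory-optimized table in SQL Server 2014 (the database, filegroup, path and table names are all hypothetical):

-- The database first needs a filegroup for memory-optimized data
ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE MyDb ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg
GO

USE MyDb
GO
CREATE TABLE dbo.Sessions
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    CreatedAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO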

22) What is AMR tool in SQL Server 2014?

SQL Server 2014’s Analysis, Migrate and Report (AMR) tool can help you find out where the In-Memory OLTP features can be effectively used.

23) Is the new In-Memory OTLP engine the same as the previous SQL Server 6.5 Database Consistency Checker (DBCC) PINTABLE capability?

No, this feature is not the same as the DBCC PINTABLE command. It uses a completely different mechanism to handle locking and data changes.

24) Which SQL Server 2014 editions support in memory OLTP feature?

64-bit Enterprise, Developer, or Evaluation edition of SQL Server 2014.

25) What are the important benefits of in memory OLTP?

    With tables in memory rather than on disk, the time to access those tables is considerably reduced, leading to higher performance.
    Because T-SQL is compiled to machine code, natively compiled stored procedures use fewer instructions, so business logic processing is considerably faster (see the sketch after this list).
    Optimistic multi-version concurrency control removes the need for the latches and locks that are necessary in disk-based tables to ensure transactional integrity, but can cause contention and blocking for highly concurrent workloads. Instead, when rows are modified, new versions are created that use time-stamps for validation to ensure full ACID support.
    In-Memory OLTP is fully integrated with SQL Server and is managed with a similar set of tools.
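
A minimal sketch of a natively compiled stored procedure (it assumes the hypothetical dbo.Sessions table from the earlier sketch):

CREATE PROCEDURE dbo.usp_GetSession
    @SessionId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Business logic compiled to machine code at create time
    SELECT SessionId, UserId, CreatedAt
    FROM dbo.Sessions
    WHERE SessionId = @SessionId
END
GO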

26) What are the Limitations of In memory OLTP SQL Server 2014?

    Functions like replication, mirroring, and database snapshots are not supported, for example, and the data in in-memory tables is limited to 250 GB per server.
    Similarly, In-Memory OLTP does not recognize statements such as ALTER TABLE, CREATE INDEX, DROP INDEX and ALTER INDEX. This means that we cannot simply alter existing disk-based tables to become memory-optimized tables, and once we’ve created a memory-optimized table, we can’t subsequently alter its structure or add more indexes.
    FOREIGN KEYS and CHECK CONSTRAINTS have also been shelved in order to improve performance, as have many classic T-SQL operators and functionalities such as OR, LIKE, BETWEEN, and OUTER JOIN. This poses probably the biggest challenge because it restricts what developers can do in a stored procedure, seriously limiting business logic.

27) When the server fails, do I lose all my data?

NO, memory-optimized tables can still be fully durable! The tables can be created as SCHEMA_AND_DATA and will log the inserts/updates/deletes into the SQL Server database transaction log (and then the data files). SQL Server will write changes to the transaction log and execute recovery through checkpoint files by reading the data into memory (from disk) on recovery.

28) Which Recovery Model supports In Memory OLTP?

Memory-optimized tables are supported with all 3 recovery models for SQL Server databases (full, bulk-logged and simple).

29) What is the difference between the Columnstore index feature in SQL Server 2012 and 2014?

Columnstore indexes in SQL Server 2012 brought a dramatic boost to data warehouse performance, but with a hitch: They couldn’t be updated. With SQL Server 2014, now they can. This means you no longer have to drop and re-create columnstore indexes every time you need to load your warehouse tables. Not only that, but updateability also means you may be able to look at columnstore indexes for certain OLTP applications. The caveat is that you must have a clustered columnstore index on the table. Non-clustered columnstores aren’t supported.
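
A minimal sketch of the SQL Server 2014 form (the table name is hypothetical):

-- An updateable columnstore in SQL Server 2014 must be the clustered index on the table
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales
GO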

30) What is Delayed durability?

In SQL Server, changes to data are written to the log first. This is called write ahead logging (WAL). Control isn’t returned to the application until the log record has been written to disk (a process referred to as “hardening”). Delayed durability allows you to return control back to the application before the log is hardened. This can speed up transactions if you have issues with log performance. Nothing is free, though, and here you sacrifice recoverability. Should the database go down before the log is committed to disk, then you lose those transactions forever. It may be worth the risk if your log performance is severely degrading application response times.
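
A minimal sketch of enabling and using delayed durability (the database name is hypothetical):

-- Allow delayed durability at the database level
ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED
GO

-- Then opt in per transaction at commit time
BEGIN TRANSACTION
    -- ... data modifications ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
GO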

31) What is Incremental statistics feature in SQL Server 2014?

Updating statistics in SQL Server is the very definition of redundant work. Whenever statistics need to be rebuilt, you can’t just update the new items — you have to update everything. This means that a table with 200 million rows and only 40 million changes will need to update all 200 million rows in order to pick up those changes. Incremental statistics in SQL Server 2014 allow you to update just those rows that have changed and merge them with what’s already there. This can have a big impact on query performance in some configurations.