1) What are the main new features introduced in SQL Server 2012?
Below are some of the important features introduced in SQL Server 2012.
Column store indexes
Sequence objects
Contained database
User defined Server roles
Windows server core support
DQS Data quality services
Tabular Model (SSAS)
Always ON
Pagination
Error handling
2) What is a columnstore index?
A columnstore index stores data in a column-wise (columnar) format, unlike the traditional B-tree structures used for clustered and nonclustered rowstore indexes, which store data row-wise (in rows). A columnstore index organizes the data in individual columns that are joined together to form the index. This structure can offer significant performance gains for queries that summarize large quantities of data, the sort typically used for business intelligence (BI) and data warehousing.
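As a minimal sketch (the table and column names here are hypothetical), a nonclustered columnstore index in SQL Server 2012 is created like any other index, listing the columns it should cover:

```sql
-- Hypothetical fact table for illustration
CREATE TABLE dbo.FactSales (
    OrderDateKey INT   NOT NULL,
    ProductKey   INT   NOT NULL,
    SalesAmount  MONEY NOT NULL
);

-- Nonclustered columnstore index over the columns BI queries typically scan
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
```

Note that in SQL Server 2012 creating the columnstore index makes the table read-only; the index must be dropped or disabled before data can be modified.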
3) What are the benefits of columnstore indexes?
The benefits of using a non-clustered columnstore index are:
Only the columns needed to solve a query are fetched from disk (this is often fewer than 15% of the columns in a typical fact table)
It is easier to compress the data due to the redundancy of data within a column
Buffer hit rates are improved because data is highly compressed, and frequently accessed parts of commonly used columns remain in memory, while infrequently used parts are paged out.
4) What are the limitations of columnstore indexes?
Columnstore indexes have the following limitations:
Replication cannot be implemented.
Indexed views cannot be applied.
Columnstore indexes do not support the following data types:
decimal greater than 18 digits
binary and varbinary
BLOB
CLR
(n)varchar(max)
datetimeoffset with precision greater than 2
5) What is Sequence in SQL Server 2012?
SEQUENCE is one of the new features introduced in SQL Server 2012. A sequence is a user-defined object that, as the name suggests, generates a series of numeric values according to the properties with which it is created. Because a sequence is a standalone object in SQL Server (Transact-SQL), it is useful whenever you need a unique number, for example to act as a primary key.
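A minimal sketch (the sequence name is hypothetical) of creating and using a sequence:

```sql
-- Create a sequence starting at 1, incrementing by 1
CREATE SEQUENCE dbo.OrderSeq
    AS INT
    START WITH 1
    INCREMENT BY 1;

-- Fetch the next value; unlike identity, this can be done
-- before (and independently of) any INSERT
SELECT NEXT VALUE FOR dbo.OrderSeq AS NextOrderID;
```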
6) When can we use Sequence in place of Identity column?
The application requires a number before the insert into the table is made.
The application requires sharing a single series of numbers between multiple tables or multiple columns within a table.
The application must restart the number series when a specified number is reached. For example, after assigning values 1 through 10, the application starts assigning values 1 through 10 again.
The application requires sequence values to be sorted by another field. The NEXT VALUE FOR function can apply the OVER clause to the function call. The OVER clause guarantees that the values returned are generated in the order of the OVER clause’s ORDER BY clause.
An application requires multiple numbers to be assigned at the same time. For example, an application needs to reserve five sequential numbers. Requesting identity values could result in gaps in the series if other processes were simultaneously issued numbers. Calling sp_sequence_get_range can retrieve several numbers in the sequence at once.
You need to change the specification of the sequence, such as the increment value.
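The range-reservation scenario above can be sketched with sp_sequence_get_range (the sequence dbo.OrderSeq is assumed to have been created beforehand):

```sql
-- Reserve five sequential numbers in a single call
DECLARE @first SQL_VARIANT;

EXEC sys.sp_sequence_get_range
    @sequence_name     = N'dbo.OrderSeq',
    @range_size        = 5,
    @range_first_value = @first OUTPUT;

-- @first holds the first of the five reserved values;
-- the application can hand out @first .. @first + 4
SELECT @first AS FirstReservedValue;
```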
7) What is the difference between Identity and Sequence objects?
Differences between identity columns and sequences are:
A sequence generates database-wide sequential numbers, whereas an identity column is tied to a single table.
A sequence is a standalone object, not associated with any table.
The same sequence can be used in multiple tables, or in multiple columns within a table.
A sequence value can be fetched before an insert and used anywhere in T-SQL scripts, whereas identity values are generated only when a row is inserted.
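One sequence shared by two tables can be sketched as follows (all object names here are hypothetical):

```sql
-- One sequence feeding the key column of two tables via DEFAULT constraints
CREATE SEQUENCE dbo.TicketSeq AS INT START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.EmailTickets (
    TicketID INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.TicketSeq) PRIMARY KEY,
    Subject  NVARCHAR(200) NOT NULL
);

CREATE TABLE dbo.PhoneTickets (
    TicketID INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.TicketSeq) PRIMARY KEY,
    Caller   NVARCHAR(100) NOT NULL
);

-- Rows in both tables draw IDs from the same number series
INSERT INTO dbo.EmailTickets (Subject) VALUES (N'Login issue');
INSERT INTO dbo.PhoneTickets (Caller)  VALUES (N'J. Smith');
```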
8) What is contained database in SQL Server 2012?
A contained database includes all database settings and metadata within itself, resulting in no configuration dependencies on the instance of the SQL Server Database Engine where the database is installed. Users can connect to a contained database without authenticating a login at the Database Engine level. This isolates the database from the Database Engine, making it easy to move the database from one instance of SQL Server to another.
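A minimal sketch of setting one up (database, user, and password are hypothetical):

```sql
-- Contained database authentication must be enabled at the instance level
EXEC sys.sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Create a partially contained database
CREATE DATABASE SalesDB CONTAINMENT = PARTIAL;
GO

-- A contained user authenticates at the database, not via a server login
USE SalesDB;
CREATE USER AppUser WITH PASSWORD = N'A$trongP@ssw0rd1';
```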
9) Is it possible to create User defined server roles in SQL Server 2012?
Yes, SQL Server 2012 provides the capability of creating User defined Server roles.
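A short sketch (role and login names are hypothetical):

```sql
-- Create a custom server role, grant it a server-level permission,
-- and add an existing login as a member
CREATE SERVER ROLE MonitoringRole;
GRANT VIEW SERVER STATE TO MonitoringRole;
ALTER SERVER ROLE MonitoringRole ADD MEMBER [CONTOSO\MonitorUser];
```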
10) What is the AlwaysOn feature in SQL Server 2012?
AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases that fail over together. An availability group supports a set of read-write primary databases and one to four sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.
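A sketch of creating an availability group follows; server, endpoint, and database names are hypothetical, and the underlying WSFC cluster plus HADR-enabled instances must already be configured:

```sql
CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL,
        -- Secondary opened for read-only access
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```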
11) What are the benefits of the AlwaysOn feature in SQL Server 2012?
Utilizes database mirroring technology for data transfer over TCP/IP
Provides a combination of synchronous and asynchronous mirroring
Provides a logical grouping of related databases via availability groups
Allows up to four readable secondary replicas
Allows backups to be taken on a secondary replica
Allows DBCC statements to be run against a secondary replica
Employs built-in compression and encryption
12) What is Windows Server core support in SQL Server 2012?
Windows Server Core is a minimal, GUI-less installation option of the Windows Server operating system. Instead of a Start menu, program shortcuts, and a crowded desktop, booting into Server Core presents only a command prompt. Because only the necessary services are enabled, it offers lower memory consumption, simplified management (since many features are not installed), and greater stability.
SQL Server 2012 supports the Windows Core Operating system.
13) In which editions of SQL Server 2012 is the auditing feature available?
In SQL Server 2012, support for server auditing is expanded to include all editions of SQL Server.
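A basic server audit can be sketched as follows (the audit names and file path are hypothetical):

```sql
-- Create a server audit that writes to a file, then enable it
CREATE SERVER AUDIT Audit_Logins
TO FILE (FILEPATH = N'C:\SQLAudit\');
ALTER SERVER AUDIT Audit_Logins WITH (STATE = ON);

-- Capture failed login attempts into that audit
CREATE SERVER AUDIT SPECIFICATION Audit_FailedLogins
FOR SERVER AUDIT Audit_Logins
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);
```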
14) What are the enhancements in terms of SQL Server Analysis services?
Analysis Services includes a new BI Semantic Model (BISM). BISM is a three-layer model consisting of:
Data Model
Business Logic
Data Access
BISM enhances Microsoft's front-end analysis experience, including Excel, Reporting Services, and SharePoint Insights. Microsoft has said that BISM is not a replacement for the current BI models but rather an alternative model. In simple terms, BISM is a relational model that includes BI artifacts such as KPIs and hierarchies.
15) What are DQS Data quality services?
The data-quality solution provided by Data Quality Services (DQS) enables a data steward or IT professional to maintain the quality of their data and ensure that the data is suited for its business usage. DQS is a knowledge-driven solution that provides both computer-assisted and interactive ways to manage the integrity and quality of your data sources. DQS enables you to discover, build, and manage knowledge about your data. You can then use that knowledge to perform data cleansing, matching, and profiling. You can also leverage the cloud-based services of reference data providers in a DQS data-quality project.
16) What are the features provided by DQS to resolve data quality issues?
DQS provides the following features to resolve data quality issues.
Data Cleansing: the modification, removal, or enrichment of data that is incorrect or incomplete, using both computer-assisted and interactive processes. For more information, see Data Cleansing.
Matching: the identification of semantic duplicates in a rules-based process that enables you to determine what constitutes a match and perform de-duplication. For more information, see Data Matching.
Reference Data Services: verification of the quality of your data using the services of a reference data provider. You can use reference data services from Windows Azure Marketplace DataMarket to easily cleanse, validate, match, and enrich data. For more information, see Reference Data Services in DQS.
Profiling: the analysis of a data source to provide insight into the quality of the data at every stage in the knowledge discovery, domain management, matching, and data cleansing processes. Profiling is a powerful tool in a DQS data quality solution. You can create a data quality solution in which profiling is just as important as knowledge management, matching, or data cleansing. For more information, see Data Profiling and Notifications in DQS.
Monitoring: the tracking and determination of the state of data quality activities. Monitoring enables you to verify that your data quality solution is doing what it was designed to do. For more information, see DQS Administration.
Knowledge Base: Data Quality Services is a knowledge-driven solution that analyzes data based upon knowledge that you build with DQS. This enables you to create data quality processes that continually enhance the knowledge about your data and, in so doing, continually improve its quality.
17) What are the components of DQS?
Data Quality Services consists of Data Quality Server and Data Quality Client. These components enable you to perform data quality services separately from other SQL Server operations. Both are installed from within the SQL Server setup program.
18) What is the codename of SQL Server 2012?
SQL Server 2012 was code-named 'Denali'.
19) What is the codename of SQL Server 2014?
SQL Server 2014 was code-named 'Hekaton' (strictly, Hekaton was the code name of its In-Memory OLTP engine).
20) What are the main new features introduced in SQL Server 2014?
Below are some of the important features introduced in SQL Server 2014.
In-Memory OLTP
Managed Backup to Azure
Azure VMs for Availability replicas
SQL Server Data Files in Azure
Updateable columnstore indexes
Delayed durability
SSD buffer pool extension
Incremental statistics
21) What is In-Memory OLTP feature in SQL Server 2014?
In-Memory OLTP is a new feature in SQL Server 2014 for OLTP workloads to significantly improve performance and reduce processing time when you have a plentiful amount of memory and numerous multi-core processors. For a memory optimized table, all data is stored in memory and hence unlike disk based tables, pages don’t need to be brought into the buffer pool or cache. For data persistence of memory optimized tables, the In-memory OLTP engine creates a set of checkpoint files on a filestream filegroup that keeps track of changes to the data in an append only mode and uses this during recovery and the restore process.
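A minimal sketch of a durable memory-optimized table (database name, file path, and table are hypothetical):

```sql
-- A memory-optimized filegroup must exist before creating the table
ALTER DATABASE SalesDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = N'imoltp_file', FILENAME = N'C:\Data\imoltp')
    TO FILEGROUP imoltp_fg;
GO

-- Durable memory-optimized table: data is logged and survives restarts
CREATE TABLE dbo.SessionState (
    SessionID   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload     VARBINARY(8000) NULL,
    LastTouched DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```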
22) What is AMR tool in SQL Server 2014?
SQL Server 2014’s Analysis, Migrate and Report (AMR) tool can help you find out where the In-Memory OLTP features can be effectively used.
23) Is the new In-Memory OTLP engine the same as the previous SQL Server 6.5 Database Consistency Checker (DBCC) PINTABLE capability?
No, this feature is not the same as the DBCC PINTABLE command. It uses a completely different mechanism to handle locking and data changes.
24) Which SQL Server 2014 editions support in memory OLTP feature?
In-Memory OLTP is supported in the 64-bit Enterprise, Developer, and Evaluation editions of SQL Server 2014.
25) What are the important benefits of in memory OLTP?
With tables in memory rather than on disk, the time to access those tables is considerably reduced, leading to higher performance.
Because T-SQL is compiled to machine code, natively compiled stored procedures use fewer instructions, so business logic processing is considerably faster.
Optimistic multi-version concurrency control removes the need for the latches and locks that are necessary in disk-based tables to ensure transactional integrity, but can cause contention and blocking for highly concurrent workloads. Instead, when rows are modified, new versions are created that use time-stamps for validation to ensure full ACID support.
In-Memory OLTP is fully integrated with SQL Server and is managed with a similar set of tools.
26) What are the Limitations of In memory OLTP SQL Server 2014?
Features such as replication, mirroring, and database snapshots are not supported, and the data in in-memory tables is limited to 250 GB per server.
Similarly, In-Memory OLTP does not recognize statements such as ALTER TABLE, CREATE INDEX, DROP INDEX and ALTER INDEX. This means that we cannot simply alter existing disk-based tables to become memory-optimized tables, and once we’ve created a memory-optimized table, we can’t subsequently alter its structure or add more indexes.
FOREIGN KEYS and CHECK CONSTRAINTS have also been shelved in order to improve performance, as have many classic T-SQL operators and functionalities such as OR, LIKE, BETWEEN, and OUTER JOIN. This poses probably the biggest challenge because it restricts what developers can do in a stored procedure, seriously limiting business logic.
27) When the server fails, do I lose all my data?
No, memory-optimized tables can still be fully durable. A table created with SCHEMA_AND_DATA durability logs its inserts, updates, and deletes to the SQL Server database transaction log (and then to the checkpoint files). On recovery, SQL Server reads the transaction log and checkpoint files to rebuild the data in memory from disk.
28) Which Recovery Model supports In Memory OLTP?
Memory-optimized tables are supported with all 3 recovery models for SQL Server databases (full, bulk-logged and simple).
29) What is the difference between the columnstore index feature in SQL Server 2012 and 2014?
Columnstore indexes in SQL Server 2012 brought a dramatic boost to data warehouse performance, but with a hitch: They couldn’t be updated. With SQL Server 2014, now they can. This means you no longer have to drop and re-create columnstore indexes every time you need to load your warehouse tables. Not only that, but updateability also means you may be able to look at columnstore indexes for certain OLTP applications. The caveat is that you must have a clustered columnstore index on the table. Non-clustered columnstores aren’t supported.
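A sketch of the SQL Server 2014 approach, assuming a hypothetical fact table dbo.FactSales with the columns shown already exists:

```sql
-- A clustered columnstore index replaces the rowstore entirely and,
-- unlike the 2012 nonclustered variant, remains updateable
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;

-- Inserts and updates now work without dropping the index first
INSERT INTO dbo.FactSales (OrderDateKey, ProductKey, SalesAmount)
VALUES (20140101, 42, 199.99);
```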
30) What is Delayed durability?
In SQL Server, changes to data are written to the log first. This is called write ahead logging (WAL). Control isn’t returned to the application until the log record has been written to disk (a process referred to as “hardening”). Delayed durability allows you to return control back to the application before the log is hardened. This can speed up transactions if you have issues with log performance. Nothing is free, though, and here you sacrifice recoverability. Should the database go down before the log is committed to disk, then you lose those transactions forever. It may be worth the risk if your log performance is severely degrading application response times.
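A sketch of enabling it (database and table names are hypothetical):

```sql
-- Allow delayed durability at the database level
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
GO

-- Opt in per transaction: control returns to the application
-- before the log records are hardened to disk
BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```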
31) What is Incremental statistics feature in SQL Server 2014?
Updating statistics in SQL Server is the very definition of redundant work. Whenever statistics need to be rebuilt, you can’t just update the new items — you have to update everything. This means that a table with 200 million rows and only 40 million changes will need to update all 200 million rows in order to pick up those changes. Incremental statistics in SQL Server 2014 allow you to update just those rows that have changed and merge them with what’s already there. This can have a big impact on query performance in some configurations.
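A sketch, assuming a hypothetical partitioned fact table dbo.FactSales (incremental statistics require the table to be partitioned):

```sql
-- Create statistics that track changes per partition
CREATE STATISTICS Stats_OrderDate
ON dbo.FactSales (OrderDateKey)
WITH INCREMENTAL = ON;

-- Later, refresh only the partitions that changed (e.g. partition 12)
-- and merge the result with the existing statistics
UPDATE STATISTICS dbo.FactSales (Stats_OrderDate)
WITH RESAMPLE ON PARTITIONS (12);
```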