SQL Compare v1.14 serial key or number

Sysinternals Utilities Index

Sysinternals Suite
The entire set of Sysinternals Utilities rolled up into a single download.

Sysinternals Suite for Nano Server
Sysinternals Utilities for Nano Server in a single download.

Sysinternals Suite for ARM64
Sysinternals Utilities for ARM64 in a single download.

AccessChk
v (October 15, )
AccessChk is a command-line tool for viewing the effective permissions on files, registry keys, services, processes, kernel objects, and more.

AccessEnum
v (November 1, )
This simple yet powerful security tool shows you who has what access to directories, files and Registry keys on your systems. Use it to find holes in your permissions.

AdExplorer
v (November 15, )
Active Directory Explorer is an advanced Active Directory (AD) viewer and editor.

AdInsight
v (October 26, )
An LDAP (Light-weight Directory Access Protocol) real-time monitoring tool aimed at troubleshooting Active Directory client applications.

AdRestore
v (November 1, )
Undelete Server Active Directory objects.

Autologon
v (August 29, )
Bypass password screen during logon.

Autoruns
v (June 24, )
See what programs are configured to startup automatically when your system boots and you login. Autoruns also shows you the full list of Registry and file locations where applications can configure auto-start settings.

BgInfo
v (October 19, )
This fully-configurable program automatically generates desktop backgrounds that include important information about the system including IP addresses, computer name, network adapters, and more.

BlueScreen
v (November 1, )
This screen saver not only accurately simulates Blue Screens, but simulated reboots as well (complete with CHKDSK). It works on Windows NT 4, Windows 2000, Windows XP, Server 2003, and Windows 95 and 98.

CacheSet
v (November 1, )
CacheSet is a program that allows you to control the Cache Manager's working set size using functions provided by NT. It's compatible with all versions of NT.

ClockRes
v (July 4, )
View the resolution of the system clock, which is also the maximum timer resolution.

Contig
v (July 4, )
Wish you could quickly defragment your frequently used files? Use Contig to optimize individual files, or to create new files that are contiguous.

Coreinfo
v (August 18, )
Coreinfo is a command-line utility that shows you the mapping between logical processors and the physical processor, NUMA node, and socket on which they reside, as well as the caches assigned to each logical processor.

Ctrl2cap
v (November 1, )
This is a kernel-mode driver that demonstrates keyboard input filtering just above the keyboard class driver in order to turn caps-locks into control keys. Filtering at this level allows conversion and hiding of keys before NT even "sees" them. Ctrl2cap also shows how to use NtDisplayString() to print messages to the initialization blue-screen.

DebugView
v (April 23, )
Another first from Sysinternals: This program intercepts calls made to DbgPrint by device drivers and OutputDebugString made by Win32 programs. It allows for viewing and recording of debug session output on your local machine or across the Internet without an active debugger.

Desktops
v (October 17, )
This new utility enables you to create up to four virtual desktops and to use a tray interface or hotkeys to preview what’s on each desktop and easily switch between them.

Disk2vhd
v (January 21, )
Disk2vhd simplifies the migration of physical systems into virtual machines (P2V).

DiskExt
v (July 4, )
Display volume disk-mappings.

Diskmon
v (November 1, )
This utility captures all hard disk activity or acts like a software disk activity light in your system tray.

DiskView
v (October 15, )
Graphical disk sector utility.

Disk Usage (DU)
v (February 13, )
View disk usage by directory.

EFSDump
v (November 1, )
View information for encrypted files.

FindLinks
v (July 4, )
FindLinks reports the file index and any hard links (alternate file paths on the same volume) that exist for the specified file. A file's data remains allocated as long as it has at least one file name referencing it.

Handle
v (June 14, )
This handy command-line utility will show you what files are open by which processes, and much more.

Hex2dec
v (July 4, )
Convert hex numbers to decimal and vice versa.

Junction
v (July 4, )
Create Win2K NTFS symbolic links.

LDMDump
v (November 1, )
Dump the contents of the Logical Disk Manager's on-disk database, which describes the partitioning of Windows Dynamic disks.

ListDLLs
v (July 4, )
List all the DLLs that are currently loaded, including where they are loaded and their version numbers.

LiveKd
v (May 16, )
Use Microsoft kernel debuggers to examine a live system.

LoadOrder
v (July 4, )
See the order in which devices are loaded on your WinNT/2K system.

LogonSessions
v (July 4, )
List the active logon sessions on a system.

MoveFile
v (January 24, )
Allows you to schedule move and delete commands for the next reboot.

NotMyFault
v (November 18, )
Notmyfault is a tool that you can use to crash, hang, and cause kernel memory leaks on your Windows system.

NTFSInfo
v (July 4, )
Use NTFSInfo to see detailed information about NTFS volumes, including the size and location of the Master File Table (MFT) and MFT-zone, as well as the sizes of the NTFS meta-data files.

PendMoves
v (February 5, )
Enumerate the list of file rename and delete commands that will be executed the next boot.

PipeList
v (July 4, )
Displays the named pipes on your system, including the number of maximum instances and active instances for each pipe.

PortMon
v (January 12, )
Monitor serial and parallel port activity with this advanced monitoring tool. It knows about all standard serial and parallel IOCTLs and even shows you a portion of the data being sent and received. Version 3.x has powerful new UI enhancements and advanced filtering capabilities.

ProcDump
v (September 17, )
This command-line utility is aimed at capturing process dumps of otherwise difficult to isolate and reproduce CPU spikes. It also serves as a general process dump creation utility and can also monitor and generate process dumps when a process has a hung window or unhandled exception.

Process Explorer
v (April 28, )
Find out what files, registry keys and other objects processes have open, which DLLs they have loaded, and more. This uniquely powerful utility will even show you who owns each process.

Process Monitor
v (September 17, )
Monitor file system, Registry, process, thread and DLL activity in real-time.

PsExec
v (June 29, )
Execute processes on remote systems.

PsFile
v (June 29, )
See what files are opened remotely.

PsGetSid
v (June 29, )
Displays the SID of a computer or a user.

PsInfo
v (June 29, )
Obtain information about a system.

PsKill
v (June 29, )
Terminate local or remote processes.

PsPing
v (January 29, )
Measure network performance.

PsList
v (June 29, )
Show information about processes and threads.

PsLoggedOn
v (June 29, )
Show users logged on to a system.

PsLogList
v (June 29, )
Dump event log records.

PsPasswd
v (June 29, )
Changes account passwords.

PsService
v (June 29, )
View and control services.

PsShutdown
v (December 4, )
Shuts down and optionally reboots a computer.

PsSuspend
v (June 29, )
Suspend and resume processes.

PsTools
v (July 4, )
The PsTools suite includes command-line utilities for listing the processes running on local or remote computers, running processes remotely, rebooting computers, dumping event logs, and more.

RAMMap
v (October 15, )
An advanced physical memory usage analysis utility that presents usage information in different ways on its several different tabs.

RegDelNull
v (July 4, )
Scan for and delete Registry keys that contain embedded null-characters that are otherwise undeleteable by standard Registry-editing tools.

Registry Usage (RU)
v (July 4, )
View the registry space usage for the specified registry key.

RegJump
v (April 20, )
Jump to the registry path you specify in Regedit.

SDelete
v (February 13, )
Securely overwrite your sensitive files and cleanse your free space of previously deleted files using this DoD-compliant secure delete program.

ShareEnum
v (November 1, )
Scan file shares on your network and view their security settings to close security holes.

ShellRunas
v (February 28, )
Launch programs as a different user via a convenient shell context-menu entry.

Sigcheck
v (June 24, )
Dump file version information and verify that images on your system are digitally signed.

Streams
v (July 4, )
Reveal NTFS alternate streams.

Strings
v (July 4, )
Search for ANSI and UNICODE strings in binary images.

Sync
v (July 4, )
Flush cached data to disk.

Sysmon
v (October 15, )
Monitors and reports key system activity via the Windows event log.

TCPView
v (July 25, )
Active socket command-line viewer.

VMMap
v (October 15, )
VMMap is a process virtual and physical memory analysis utility.

VolumeId
v (July 4, )
Set Volume ID of FAT or NTFS drives.

Whois
v (December 11, )
See who owns an Internet address.

WinObj
v (February 14, )
The ultimate Object Manager namespace viewer is here.

ZoomIt
v (December 11, )
Presentation utility for zooming and drawing on the screen.


SQL Compare 14


The information on this page applies to several Redgate products. 

Using the user interface

  1. On the Help menu, click Manage my license. For some products you may need to click Enter serial number.

  2. Most products will ask you to log in with a Redgate ID next. 

    If you are the license owner (purchaser/administrator) and want to activate the product, you should log in with your existing Redgate ID.

    If the license was purchased for you, you should log in with your own Redgate ID.  If you don't already have a Redgate ID, you can create one by following the link in the window (more information about creating and using a Redgate ID).

    Enter the email address and password for your Redgate ID and click Login.

  3. Enter your serial number for the product on the next screen.
    You can find your serial number by logging in to your account on the Redgate website or by contacting the license owner.

  4. If you don't want to send your Windows user name and local machine name to Redgate when you activate your products, clear the Send information about this activation to Redgate check box.
    It can be useful to send information about your activation to Redgate in case you need to contact support in the future to find out where your serial keys are being used.

  5. Click Activate.

  6. Your product is activated and a confirmation page is shown.

  7. If your serial number is for a bundle or suite, all the other products in the bundle or suite are also activated.

  8. If there's a problem with your activation request, an error is shown. For information about activation errors and what you can do to resolve them, see Troubleshooting licensing and activation errors.

  9. You can now continue to use your product.

Enabling manual activation using the latest licensing client

You can use manual activation to activate products when your computer doesn't have an internet connection or your internet connection does not allow SOAP requests.

You'll need access to another computer with an internet connection and then transfer the installer over on a flash drive.

  1. Launch the product you wish to activate
  2. Visit http://localhost/redgate/cromwellpsi.com
  3. Under Recently connected products, next to the product you wish to activate choose 'Activate using fallback' 
  4. Enter your serial key and the activation will fail (if you have no internet connection)
  5. Now you have the option to activate manually 

You can then manually activate the license following the additional step-by-step instructions below. 

NB: Make sure you leave the original window with the request text open whilst you generate the response text, then paste in the response. 

Manual activation

You can use manual activation to activate products when your computer doesn't have an internet connection or your internet connection does not allow SOAP requests. You'll need access to another computer with an internet connection.

You can use manual activation when an error is shown and the Activate Manually button is available.

To activate manually:

  1. Click Activate manually.

  2. The Manual activation page is shown.

  3. Under Step 1, copy all of the activation request, and leave this dialog box open (if you close it you may have to start again).

  4. On a computer with an internet connection, go to cromwellpsi.com and under Step 1, paste the activation request into the box.

  5. Click Get Activation Response.

  6. Under Step 2, copy the activation response.

  7. Alternatively you can save the activation response to a .txt file.

  8. Back on the computer where you're activating your Redgate product, under Step 2, paste the activation response.

  9. Click Finish.

  10. The Activation successful page is shown.

  11. You can now continue to use your product.

When you install most Redgate products (apart from free ones), you have a trial period to evaluate them without purchase. Trial periods vary from 14 to 28 days depending on the product.

Manual activations will show in the customer portal as Not logged in (XXX-XXX-XXX-XXX).

If you need more time to evaluate a product, email licensing@cromwellpsi.com

Activating using the command line

Open a command prompt, navigate to the folder where your product executable file is located and run a command with the following syntax:

For example:
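The original syntax and example lines were lost from this copy. As an illustration only (the /activateSerial switch and the executable name are assumptions based on Redgate's typical command-line activation; check your product's own documentation), the invocation generally looks like:

```shell
REM Change to the product's installation folder, then pass your serial number.
REM The switch name and folder below are assumptions, not verified for every product.
cd "C:\Program Files (x86)\Red Gate\SQL Compare 14"
SQLCompare.exe /activateSerial:XXX-XXX-XXXXXX-XXXX
```

Replace the placeholder serial number with your own; the activation dialog then opens as described below.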

The product activation dialog box is displayed. Follow the instructions above for Using the user interface.

Problems activating Redgate products

If you are having problems activating a product on a new computer, you can log in to your account on the Redgate website to free up a license by removing a user or machine that is no longer using the product. For more information, see Managing your Redgate licenses.

You may need to contact your license administrator if you aren't the person who bought the product.

Changing the serial number used to activate a product

To change the serial number used to activate a product you will need to follow one of these options, depending on the product and version you are using:

  • On the Help menu, select Manage my license.

  • On the Help menu, select Enter serial number.

For some products, you will need to deactivate the old serial number first.


Query Processing Architecture Guide

Applies to:

The SQL Server Database Engine processes queries on various data storage architectures such as local tables, partitioned tables, and tables distributed across multiple servers. The following topics cover how SQL Server processes queries and optimizes query reuse through execution plan caching.

Execution modes

The SQL Server Database Engine can process Transact-SQL statements using two distinct processing modes:

  • Row mode execution
  • Batch mode execution

Row mode execution

Row mode execution is a query processing method used with traditional RDBMS tables, where data is stored in row format. When a query is executed and accesses data in row store tables, the execution tree operators and child operators read each required row across all the columns specified in the table schema. From each row that is read, SQL Server then retrieves the columns that are required for the result set, as referenced by a SELECT statement, JOIN predicate, or filter predicate.

Note

Row mode execution is very efficient for OLTP scenarios, but can be less efficient when scanning large amounts of data, for example in Data Warehousing scenarios.

Batch mode execution

Batch mode execution is a query processing method used to process multiple rows together (hence the term batch). Each column within a batch is stored as a vector in a separate area of memory, so batch mode processing is vector-based. Batch mode processing also uses algorithms that are optimized for the multi-core CPUs and increased memory throughput that are found on modern hardware.

Batch mode execution is closely integrated with, and optimized around, the columnstore storage format. Batch mode processing operates on compressed data when possible, and eliminates the exchange operator used by row mode execution. The result is better parallelism and faster performance.

When a query is executed in batch mode, and accesses data in columnstore indexes, the execution tree operators and child operators read multiple rows together in column segments. SQL Server reads only the columns required for the result, as referenced by a SELECT statement, JOIN predicate, or filter predicate.
For more information on columnstore indexes, see Columnstore Index Architecture.

Note

Batch mode execution is very efficient in Data Warehousing scenarios, where large amounts of data are read and aggregated.
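Batch mode is chosen automatically when a query touches a columnstore index. A minimal sketch, assuming a hypothetical dbo.FactSales fact table (table and index names are illustrative, not from the original text):

```sql
-- Hypothetical fact table; a clustered columnstore index stores each column
-- as compressed segments, which makes scans eligible for batch mode execution.
CREATE TABLE dbo.FactSales (
    SaleDate  date  NOT NULL,
    ProductID int   NOT NULL,
    Quantity  int   NOT NULL,
    Amount    money NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

-- An aggregation like this typically runs in batch mode; the mode actually
-- used is shown in the execution plan ("Actual Execution Mode: Batch").
SELECT ProductID, SUM(Amount) AS TotalAmount
FROM dbo.FactSales
GROUP BY ProductID;
```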

SQL Statement Processing

Processing a single Transact-SQL statement is the most basic way that SQL Server executes Transact-SQL statements. The steps used to process a single statement that references only local base tables (no views or remote tables) illustrate the basic process.

Logical Operator Precedence

When more than one logical operator is used in a statement, NOT is evaluated first, then AND, and finally OR. Arithmetic and bitwise operators are handled before logical operators. For more information, see Operator Precedence.

In the following example, the color condition pertains to product model 21, and not to product model 20, because AND has precedence over OR.

You can change the meaning of the query by adding parentheses to force evaluation of the OR first. The following query finds only products under models 20 and 21 that are red.

Using parentheses, even when they are not required, can improve the readability of queries, and reduce the chance of making a subtle mistake because of operator precedence. There is no significant performance penalty in using parentheses. The following example is more readable than the original example, although they are syntactically the same.
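The queries themselves were lost from this copy. They likely resembled the following sketches (AdventureWorks-style table and column names are assumed for illustration):

```sql
-- AND binds tighter than OR, so this returns ALL model-20 products
-- plus only the red model-21 products.
SELECT ProductID, ProductModelID, Color
FROM Production.Product
WHERE ProductModelID = 20 OR ProductModelID = 21 AND Color = 'Red';

-- Parentheses force the OR to be evaluated first: only red products
-- under models 20 and 21 are returned.
SELECT ProductID, ProductModelID, Color
FROM Production.Product
WHERE (ProductModelID = 20 OR ProductModelID = 21) AND Color = 'Red';
```

Writing the first query as `WHERE ProductModelID = 20 OR (ProductModelID = 21 AND Color = 'Red')` changes nothing semantically but makes the precedence explicit to the reader.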

Optimizing SELECT statements

A SELECT statement is non-procedural; it does not state the exact steps that the database server should use to retrieve the requested data. This means that the database server must analyze the statement to determine the most efficient way to extract the requested data. This is referred to as optimizing the statement. The component that does this is called the Query Optimizer. The input to the Query Optimizer consists of the query, the database schema (table and index definitions), and the database statistics. The output of the Query Optimizer is a query execution plan, sometimes referred to as a query plan, or execution plan. The contents of an execution plan are described in more detail later in this topic.

The inputs and outputs of the Query Optimizer during optimization of a single statement are illustrated in the following diagram:

A SELECT statement defines only the following:

  • The format of the result set. This is specified mostly in the select list. However, other clauses such as ORDER BY and GROUP BY also affect the final form of the result set.
  • The tables that contain the source data. This is specified in the FROM clause.
  • How the tables are logically related for the purposes of the SELECT statement. This is defined in the join specifications, which may appear in the WHERE clause or in an ON clause following FROM.
  • The conditions that the rows in the source tables must satisfy to qualify for the SELECT statement. These are specified in the WHERE and HAVING clauses.

A query execution plan is a definition of the following:

  • The sequence in which the source tables are accessed. Typically, there are many sequences in which the database server can access the base tables to build the result set. For example, if the SELECT statement references three tables, the database server could first access TableA, use the data from TableA to extract matching rows from TableB, and then use the data from TableB to extract data from TableC. The other sequences in which the database server could access the tables are:
    TableC, TableB, TableA, or
    TableB, TableA, TableC, or
    TableB, TableC, TableA, or
    TableC, TableA, TableB

  • The methods used to extract data from each table.
    Generally, there are different methods for accessing the data in each table. If only a few rows with specific key values are required, the database server can use an index. If all the rows in the table are required, the database server can ignore the indexes and perform a table scan. If all the rows in a table are required but there is an index whose key columns are in an ORDER BY, performing an index scan instead of a table scan may save a separate sort of the result set. If a table is very small, table scans may be the most efficient method for almost all access to the table.

  • The methods used to compute calculations, and how to filter, aggregate, and sort data from each table.
    As data is accessed from tables, there are different methods to perform calculations over data such as computing scalar values, and to aggregate and sort data as defined in the query text, for example when using a GROUP BY or ORDER BY clause, and how to filter data, for example when using a WHERE or HAVING clause.

The process of selecting one execution plan from potentially many possible plans is referred to as optimization. The Query Optimizer is one of the most important components of the Database Engine. While some overhead is used by the Query Optimizer to analyze the query and select a plan, this overhead is typically saved several-fold when the Query Optimizer picks an efficient execution plan. For example, two construction companies can be given identical blueprints for a house. If one company spends a few days at the beginning to plan how they will build the house, and the other company begins building without planning, the company that takes the time to plan their project will probably finish first.

The SQL Server Query Optimizer is a cost-based optimizer. Each possible execution plan has an associated cost in terms of the amount of computing resources used. The Query Optimizer must analyze the possible plans and choose the one with the lowest estimated cost. Some complex statements have thousands of possible execution plans. In these cases, the Query Optimizer does not analyze all possible combinations. Instead, it uses complex algorithms to find an execution plan that has a cost reasonably close to the minimum possible cost.

The SQL Server Query Optimizer does not choose only the execution plan with the lowest resource cost; it chooses the plan that returns results to the user with a reasonable cost in resources and that returns the results the fastest. For example, processing a query in parallel typically uses more resources than processing it serially, but completes the query faster. The SQL Server Query Optimizer will use a parallel execution plan to return results if the load on the server will not be adversely affected.

The SQL Server Query Optimizer relies on distribution statistics when it estimates the resource costs of different methods for extracting information from a table or index. Distribution statistics are kept for columns and indexes, and hold information on the density1 of the underlying data. This is used to indicate the selectivity of the values in a particular index or column. For example, in a table representing cars, many cars have the same manufacturer, but each car has a unique vehicle identification number (VIN). An index on the VIN is more selective than an index on the manufacturer, because VIN has lower density than manufacturer. If the index statistics are not current, the Query Optimizer may not make the best choice for the current state of the table. For more information about densities, see Statistics.

1 Density defines the distribution of unique values that exist in the data, or the average number of duplicate values for a given column. As density decreases, selectivity of a value increases.

The SQL Server Query Optimizer is important because it enables the database server to adjust dynamically to changing conditions in the database without requiring input from a programmer or database administrator. This enables programmers to focus on describing the final result of the query. They can trust that the SQL Server Query Optimizer will build an efficient execution plan for the state of the database every time the statement is run.

Note

SQL Server Management Studio has three options to display execution plans:

  • The Estimated Execution Plan, which is the compiled plan, as produced by the Query Optimizer.
  • The Actual Execution Plan, which is the same as the compiled plan plus its execution context. This includes runtime information available after the execution completes, such as execution warnings, or in newer versions of the Database Engine, the elapsed and CPU time used during execution.
  • The Live Query Statistics, which is the same as the compiled plan plus its execution context. This includes runtime information during execution progress, and is updated every second. Runtime information includes for example the actual number of rows flowing through the operators.

Processing a SELECT Statement

The basic steps that SQL Server uses to process a single SELECT statement include the following:

  1. The parser scans the statement and breaks it into logical units such as keywords, expressions, operators, and identifiers.
  2. A query tree, sometimes referred to as a sequence tree, is built describing the logical steps needed to transform the source data into the format required by the result set.
  3. The Query Optimizer analyzes different ways the source tables can be accessed. It then selects the series of steps that return the results fastest while using fewer resources. The query tree is updated to record this exact series of steps. The final, optimized version of the query tree is called the execution plan.
  4. The relational engine starts executing the execution plan. As the steps that require data from the base tables are processed, the relational engine requests that the storage engine pass up data from the rowsets requested from the relational engine.
  5. The relational engine processes the data returned from the storage engine into the format defined for the result set and returns the result set to the client.

Constant Folding and Expression Evaluation

SQL Server evaluates some constant expressions early to improve query performance. This is referred to as constant folding. A constant is a Transact-SQL literal, such as 3, 'ABC', '2005-12-31', 1.0e3, or 0x12345678.

Foldable Expressions

SQL Server uses constant folding with the following types of expressions:

  • Arithmetic expressions, such as 1+1, 5/3*2, that contain only constants.
  • Logical expressions, such as 1=1 and 1>2 AND 3>4, that contain only constants.
  • Built-in functions that are considered foldable by SQL Server, including CAST and CONVERT. Generally, an intrinsic function is foldable if it is a function of its inputs only and not other contextual information, such as SET options, language settings, database options, and encryption keys. Nondeterministic functions are not foldable. Deterministic built-in functions are foldable, with some exceptions.
  • Deterministic methods of CLR user-defined types and deterministic scalar-valued CLR user-defined functions (starting with SQL Server 2012 (11.x)). For more information, see Constant Folding for CLR User-Defined Functions and Methods.

Note

An exception is made for large object types. If the output type of the folding process is a large object type (text, ntext, image, nvarchar(max), varchar(max), varbinary(max), or xml), then SQL Server does not fold the expression.

Nonfoldable Expressions

All other expression types are not foldable. In particular, the following types of expressions are not foldable:

  • Nonconstant expressions such as an expression whose result depends on the value of a column.
  • Expressions whose results depend on a local variable or parameter, such as @x.
  • Nondeterministic functions.
  • User-defined Transact-SQL functions1.
  • Expressions whose results depend on language settings.
  • Expressions whose results depend on SET options.
  • Expressions whose results depend on server configuration options.

1 Before SQL Server 2012 (11.x), deterministic scalar-valued CLR user-defined functions and methods of CLR user-defined types were not foldable.

Examples of Foldable and Nonfoldable Constant Expressions

Consider the following query:
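The query itself was lost from this copy. It likely resembled this sketch (AdventureWorks-style names assumed; the constant expression in the predicate is the point of interest):

```sql
SELECT *
FROM Sales.SalesOrderHeader AS s
INNER JOIN Sales.SalesOrderDetail AS d
    ON s.SalesOrderID = d.SalesOrderID
-- The constant expression 117.00 + 1000.00 can be folded to a single
-- literal before the query is compiled.
WHERE TotalDue > 117.00 + 1000.00;
```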

If the database option PARAMETERIZATION is not set to FORCED for this query, then the constant expression in the predicate is evaluated and replaced by its result before the query is compiled. Benefits of this constant folding include the following:

  • The expression does not have to be evaluated repeatedly at run time.
  • The value of the expression after it is evaluated is used by the Query Optimizer to estimate the size of the result set of the portion of the query that the expression filters.

On the other hand, if the expression contains a scalar user-defined function, it is not folded, because SQL Server does not fold expressions that involve user-defined functions, even if they are deterministic. For more information on parameterization, see Forced Parameterization later in this article.

Expression Evaluation

In addition, some expressions that are not constant folded but whose arguments are known at compile time, whether the arguments are parameters or constants, are evaluated by the result-set size (cardinality) estimator that is part of the optimizer during optimization.

Specifically, the following built-in functions and special operators are evaluated at compile time if all their inputs are known: UPPER, LOWER, RTRIM, DATEPART (YY only), GETDATE, CAST, and CONVERT. The following operators are also evaluated at compile time if all their inputs are known:

  • Arithmetic operators: +, -, *, /, unary -
  • Logical operators: AND, OR, NOT
  • Comparison operators: <, >, <=, >=, <>, LIKE, IS NULL, IS NOT NULL

No other functions or operators are evaluated by the Query Optimizer during cardinality estimation.

Examples of Compile-Time Expression Evaluation

Consider this stored procedure:
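A sketch of such a procedure (hypothetical names; an AdventureWorks-style schema is assumed):

```sql
CREATE PROCEDURE dbo.MyProc (@d datetime)
AS
SELECT COUNT(*)
FROM Sales.SalesOrderHeader
-- @d + 1 is not constant-folded (it references a parameter), but the
-- parameter's value IS known when the plan is compiled on first execution.
WHERE OrderDate > @d + 1;
```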

During optimization of the SELECT statement in the procedure, the Query Optimizer tries to evaluate the expected cardinality of the result set for the condition in the WHERE clause. The expression is not constant-folded, because it references a parameter. However, at optimization time, the value of the parameter is known. This allows the Query Optimizer to accurately estimate the size of the result set, which helps it select a good query plan.

Now consider an example similar to the previous one, except that a local variable replaces the parameter in the query and the expression is evaluated in a SET statement instead of in the query.
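A corresponding sketch using a local variable (again hypothetical names): because the SET statement runs at execution time, the optimizer compiles the SELECT without knowing the variable's value.

```sql
CREATE PROCEDURE dbo.MyProc2 (@d datetime)
AS
DECLARE @d2 datetime;
SET @d2 = @d + 1;          -- evaluated at execution time, not compile time
SELECT COUNT(*)
FROM Sales.SalesOrderHeader
WHERE OrderDate > @d2;     -- selectivity of @d2 is unknown during optimization
```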

When the SELECT statement in MyProc2 is optimized in SQL Server, the value of the local variable is not known. Therefore, the Query Optimizer uses a default estimate for the selectivity of the predicate (in this case 30 percent).

Processing Other Statements

The basic steps described for processing a SELECT statement apply to other Transact-SQL statements such as INSERT, UPDATE, and DELETE. UPDATE and DELETE statements both have to target the set of rows to be modified or deleted. The process of identifying these rows is the same process used to identify the source rows that contribute to the result set of a SELECT statement. The UPDATE and INSERT statements may both contain embedded SELECT statements that provide the data values to be updated or inserted.

Even Data Definition Language (DDL) statements, such as CREATE PROCEDURE or ALTER TABLE, are ultimately resolved to a series of relational operations on the system catalog tables and sometimes (such as ALTER TABLE ADD COLUMN) against the data tables.

Worktables

The Relational Engine may need to build a worktable to perform a logical operation specified in a Transact-SQL statement. Worktables are internal tables that are used to hold intermediate results. Worktables are generated for certain GROUP BY, ORDER BY, or UNION queries. For example, if an ORDER BY clause references columns that are not covered by any indexes, the Relational Engine may need to generate a worktable to sort the result set into the order requested. Worktables are also sometimes used as spools that temporarily hold the result of executing a part of a query plan. Worktables are built in tempdb and are dropped automatically when they are no longer needed.

View Resolution

The SQL Server query processor treats indexed and nonindexed views differently:

  • The rows of an indexed view are stored in the database in the same format as a table. If the Query Optimizer decides to use an indexed view in a query plan, the indexed view is treated the same way as a base table.
  • Only the definition of a nonindexed view is stored, not the rows of the view. The Query Optimizer incorporates the logic from the view definition into the execution plan it builds for the Transact-SQL statement that references the nonindexed view.

The logic used by the SQL Server Query Optimizer to decide when to use an indexed view is similar to the logic used to decide when to use an index on a table. If the data in the indexed view covers all or part of the Transact-SQL statement, and the Query Optimizer determines that an index on the view is the low-cost access path, the Query Optimizer will choose the index regardless of whether the view is referenced by name in the query.

When a Transact-SQL statement references a nonindexed view, the parser and Query Optimizer analyze the source of both the Transact-SQL statement and the view, and then resolve them into a single execution plan. There is not one plan for the Transact-SQL statement and a separate plan for the view.

For example, consider the following view:
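The view definition was omitted here; the following sketch, using hypothetical AdventureWorks-style names, shows the kind of nonindexed view intended:

```sql
CREATE VIEW EmployeeName AS
SELECT h.BusinessEntityID, p.LastName, p.FirstName
FROM HumanResources.Employee AS h
JOIN Person.Person AS p
    ON h.BusinessEntityID = p.BusinessEntityID;
```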

Based on this view, both of these Transact-SQL statements perform the same operations on the base tables and produce the same results:
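The statements were omitted here; a sketch of such a pair, assuming a hypothetical EmployeeName view that joins the Employee and Person tables:

```sql
-- Statement referencing the view:
SELECT LastName AS EmployeeLastName, SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader AS soh
JOIN EmployeeName AS EmpN
    ON soh.SalesPersonID = EmpN.BusinessEntityID
WHERE OrderDate > '20020531';

-- Equivalent statement referencing the base tables directly:
SELECT LastName AS EmployeeLastName, SalesOrderID, OrderDate
FROM HumanResources.Employee AS e
JOIN Person.Person AS p
    ON e.BusinessEntityID = p.BusinessEntityID
JOIN Sales.SalesOrderHeader AS soh
    ON e.BusinessEntityID = soh.SalesPersonID
WHERE OrderDate > '20020531';
```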

The SQL Server Management Studio Showplan feature shows that the relational engine builds the same execution plan for both of these statements.

Using Hints with Views

Hints that are placed on views in a query may conflict with other hints that are discovered when the view is expanded to access its base tables. When this occurs, the query returns an error. For example, consider the following view that contains a table hint in its definition:
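The view definition was omitted here; a sketch of a view carrying a SERIALIZABLE table hint, with hypothetical names:

```sql
CREATE VIEW Person.AddrState WITH SCHEMABINDING AS
SELECT a.AddressID, a.StateProvinceID,
       s.StateProvinceCode, s.CountryRegionCode
FROM Person.Address AS a WITH (SERIALIZABLE)  -- table hint inside the view definition
JOIN Person.StateProvince AS s
    ON a.StateProvinceID = s.StateProvinceID;
```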

Now suppose you enter this query:
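The query was omitted here; a sketch of a query that applies a conflicting NOLOCK hint to such a view (names hypothetical):

```sql
SELECT AddressID, StateProvinceID, StateProvinceCode, CountryRegionCode
FROM Person.AddrState WITH (NOLOCK)  -- propagated to the base tables on expansion
WHERE StateProvinceCode = 'WA';
```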

The query fails, because the NOLOCK hint applied to the view in the query is propagated to both base tables in the view when it is expanded. However, expanding the view also reveals the SERIALIZABLE hint in the view definition. Because the NOLOCK and SERIALIZABLE hints conflict, the resulting query is incorrect.

Table hints that request conflicting isolation behavior (for example, NOLOCK and SERIALIZABLE) conflict with each other, as do table hints that request conflicting lock granularity (for example, PAGLOCK and TABLOCK).

Hints can propagate through levels of nested views. For example, suppose a query applies the NOLOCK hint on a view v1. When v1 is expanded, we find that view v2 is part of its definition. v2's definition includes a HOLDLOCK hint on one of its base tables. But this table also inherits the NOLOCK hint from the query on view v1. Because the NOLOCK and HOLDLOCK hints conflict, the query fails.

When the FORCE ORDER hint is used in a query that contains a view, the join order of the tables within the view is determined by the position of the view in the ordered construct. For example, the following query selects from three tables and a view:
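The query was omitted here; a sketch of such a query, with hypothetical table and view names:

```sql
SELECT *
FROM Table1, Table2, View1, Table3
WHERE Table1.Col1 = Table2.Col1
  AND Table2.Col1 = View1.Col1
  AND Table3.Col1 = View1.Col1
OPTION (FORCE ORDER);
```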

And View1 is defined as shown in the following:
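The view definition was omitted here; a sketch, assuming View1 joins two hypothetical base tables TableA and TableB:

```sql
CREATE VIEW View1 AS
SELECT TableA.Colx, TableB.Coly
FROM TableA, TableB
WHERE TableA.ColZ = TableB.ColZ;
```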

The join order in the query plan is Table1, Table2, TableA, TableB, Table3.

Resolving Indexes on Views

As with any index, SQL Server chooses to use an indexed view in its query plan only if the Query Optimizer determines it is beneficial to do so.

Indexed views can be created in any edition of SQL Server. In some editions of some versions of SQL Server, the Query Optimizer automatically considers the indexed view. In other editions, the NOEXPAND table hint must be used for an indexed view to be considered. For clarification, see the documentation for each version.

The SQL Server Query Optimizer uses an indexed view when the following conditions are met:

  • These session options are set to ON: ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, and QUOTED_IDENTIFIER.
  • The NUMERIC_ROUNDABORT session option is set to OFF.
  • The Query Optimizer finds a match between the view index columns and elements in the query, such as the following:
    • Search condition predicates in the WHERE clause
    • Join operations
    • Aggregate functions
    • GROUP BY clauses
    • Table references
  • The estimated cost for using the index is the lowest cost of any access mechanisms considered by the Query Optimizer.
  • Every table referenced in the query (either directly, or by expanding a view to access its underlying tables) that corresponds to a table reference in the indexed view must have the same set of hints applied on it in the query.

    Note

    The READCOMMITTED and READCOMMITTEDLOCK hints are always considered different hints in this context, regardless of the current transaction isolation level.

    Other than the requirements for the SET options and table hints, these are the same rules that the Query Optimizer uses to determine whether a table index covers a query. Nothing else has to be specified in the query for an indexed view to be used.

    A query does not have to explicitly reference an indexed view in the FROM clause for the Query Optimizer to use the indexed view. If the query contains references to columns in the base tables that are also present in the indexed view, and the Query Optimizer estimates that using the indexed view provides the lowest cost access mechanism, the Query Optimizer chooses the indexed view, similar to the way it chooses base table indexes when they are not directly referenced in a query. The Query Optimizer may choose the view when it contains columns that are not referenced by the query, as long as the view offers the lowest cost option for covering one or more of the columns specified in the query.

    The Query Optimizer treats an indexed view referenced in the FROM clause as a standard view. The Query Optimizer expands the definition of the view into the query at the start of the optimization process. Then, indexed view matching is performed. The indexed view may be used in the final execution plan selected by the Query Optimizer, or instead, the plan may materialize necessary data from the view by accessing the base tables referenced by the view. The Query Optimizer chooses the lowest-cost alternative.

    Using Hints with Indexed Views

    You can prevent view indexes from being used for a query by using the EXPAND VIEWS query hint, or you can use the NOEXPAND table hint to force the use of an index for an indexed view specified in the FROM clause of a query. However, you should let the Query Optimizer dynamically determine the best access methods to use for each query. Limit your use of EXPAND and NOEXPAND to specific cases where testing has shown that they improve performance significantly.

    The EXPAND VIEWS option specifies that the Query Optimizer not use any view indexes for the whole query.

    When NOEXPAND is specified for a view, the Query Optimizer considers using any indexes defined on the view. NOEXPAND specified with the optional INDEX() clause forces the Query Optimizer to use the specified indexes. NOEXPAND can be specified only for an indexed view; it cannot be specified for a view that is not indexed.

    When neither NOEXPAND nor EXPAND VIEWS is specified in a query that contains a view, the view is expanded to access underlying tables. If the query that makes up the view contains any table hints, these hints are propagated to the underlying tables. (This process is explained in more detail in View Resolution.) As long as the set of hints that exists on the underlying tables of the view are identical to each other, the query is eligible to be matched with an indexed view. Most of the time, these hints will match each other, because they are inherited directly from the view. However, if the query references tables instead of views, and the hints applied directly on these tables are not identical, then such a query is not eligible for matching with an indexed view. If the INDEX, PAGLOCK, ROWLOCK, TABLOCKX, UPDLOCK, or XLOCK hints apply to the tables referenced in the query after view expansion, the query is not eligible for indexed view matching.

    If a table hint in the form of INDEX (index_val [, ...n]) references a view in a query and you do not also specify the NOEXPAND hint, the index hint is ignored. To specify use of a particular index, use NOEXPAND.

    Generally, when the Query Optimizer matches an indexed view to a query, any hints specified on the tables or views in the query are applied directly to the indexed view. If the Query Optimizer chooses not to use an indexed view, any hints are propagated directly to the tables referenced in the view. For more information, see View Resolution. This propagation does not apply to join hints. They are applied only in their original position in the query. Join hints are not considered by the Query Optimizer when matching queries to indexed views. If a query plan uses an indexed view that matches part of a query that contains a join hint, the join hint is not used in the plan.

    Hints are not allowed in the definitions of indexed views. In compatibility mode 80 and higher, SQL Server ignores hints inside indexed view definitions when maintaining them, or when executing queries that use indexed views; although such hints do not produce a syntax error in 80 compatibility mode, they are ignored.

    Resolving Distributed Partitioned Views

    The SQL Server query processor optimizes the performance of distributed partitioned views. The most important aspect of distributed partitioned view performance is minimizing the amount of data transferred between member servers.

    SQL Server builds intelligent, dynamic plans that make efficient use of distributed queries to access data from remote member tables:

    • The Query Processor first uses OLE DB to retrieve the check constraint definitions from each member table. This allows the query processor to map the distribution of key values across the member tables.
    • The Query Processor compares the key ranges specified in a Transact-SQL statement's WHERE clause to the map that shows how the rows are distributed in the member tables. The query processor then builds a query execution plan that uses distributed queries to retrieve only those remote rows that are required to complete the Transact-SQL statement. The execution plan is also built in such a way that any access to remote member tables, for either data or metadata, is delayed until the information is required.

    For example, consider a system where a customers table is partitioned across Server1, Server2, and Server3, with each server's member table holding a contiguous range of CustomerID key values.

    Consider the execution plan built for this query executed on Server1:
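The query was omitted here; a sketch of a range query against a hypothetical distributed partitioned view named CompanyData.dbo.Customers (the key values are illustrative):

```sql
SELECT *
FROM CompanyData.dbo.Customers
WHERE CustomerID BETWEEN 3200000 AND 3400000;
```

Assuming this range spans the boundary between the key ranges held by Server1 and Server2, only those two member tables need to be touched.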

    The execution plan for this query extracts the rows in the requested key range that reside in the local member table, and issues a distributed query to retrieve only the remaining rows in the range from Server2.

    The SQL Server Query Processor can also build dynamic logic into query execution plans for Transact-SQL statements in which the key values are not known when the plan must be built. For example, consider this stored procedure:
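The procedure was omitted here; a sketch, assuming a hypothetical distributed partitioned view CompanyData.dbo.Customers keyed on CustomerID:

```sql
CREATE PROCEDURE GetCustomer @CustomerIDParameter INT
AS
SELECT *
FROM CompanyData.dbo.Customers
WHERE CustomerID = @CustomerIDParameter;
```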

    SQL Server cannot predict what key value will be supplied by the parameter every time the procedure is executed. Because the key value cannot be predicted, the query processor also cannot predict which member table will have to be accessed. To handle this case, SQL Server builds an execution plan that has conditional logic, referred to as dynamic filters, to control which member table is accessed, based on the input parameter value. Assuming the stored procedure was executed on Server1, the execution plan logic can be represented as shown in the following:
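The plan logic was omitted here; it can be sketched as pseudologic (not executable Transact-SQL), assuming three member servers each holding a contiguous key range:

```sql
-- IF @CustomerIDParameter falls in the range held by Server1
--     retrieve the row from the local member table
-- ELSE IF @CustomerIDParameter falls in the range held by Server2
--     retrieve the row through a distributed query to Server2
-- ELSE IF @CustomerIDParameter falls in the range held by Server3
--     retrieve the row through a distributed query to Server3
```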

    SQL Server sometimes builds these types of dynamic execution plans even for queries that are not parameterized. The Query Optimizer may parameterize a query so that the execution plan can be reused. If the Query Optimizer parameterizes a query referencing a partitioned view, the Query Optimizer can no longer assume the required rows will come from a specified base table. It will then have to use dynamic filters in the execution plan.

    Stored Procedure and Trigger Execution

    SQL Server stores only the source for stored procedures and triggers. When a stored procedure or trigger is first executed, the source is compiled into an execution plan. If the stored procedure or trigger is again executed before the execution plan is aged from memory, the relational engine detects the existing plan and reuses it. If the plan has aged out of memory, a new plan is built. This process is similar to the process SQL Server follows for all Transact-SQL statements. The main performance advantage that stored procedures and triggers have in SQL Server compared with batches of dynamic Transact-SQL is that their Transact-SQL statements are always the same. Therefore, the relational engine easily matches them with any existing execution plans. Stored procedure and trigger plans are easily reused.

    The execution plan for stored procedures and triggers is executed separately from the execution plan for the batch calling the stored procedure or firing the trigger. This allows for greater reuse of the stored procedure and trigger execution plans.

    Execution Plan Caching and Reuse

    SQL Server has a pool of memory that is used to store both execution plans and data buffers. The percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically, depending on the state of the system. The part of the memory pool that is used to store execution plans is referred to as the plan cache.

    The plan cache has two stores for all compiled plans:

    • The Object Plans cache store (OBJCP) used for plans related to persisted objects (stored procedures, functions, and triggers).
    • The SQL Plans cache store (SQLCP) used for plans related to autoparameterized, dynamic, or prepared queries.

    The query below provides information about memory usage for these two cache stores:
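The query was omitted here; one way to inspect the two plan cache stores is through the sys.dm_os_memory_cache_counters DMV:

```sql
SELECT name, type, pages_kb, entries_count
FROM sys.dm_os_memory_cache_counters
WHERE type IN ('CACHESTORE_OBJCP', 'CACHESTORE_SQLCP');
```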

    Note

    The plan cache has two additional stores that are not used for storing plans:

    • The Bound Trees cache store (PHDR) used for data structures used during plan compilation for views, constraints, and defaults. These structures are known as Bound Trees or Algebrizer Trees.
    • The Extended Stored Procedures cache store (XPROC) used for predefined system procedures, like sp_executesql or xp_cmdshell, that are defined using a DLL, not using Transact-SQL statements. The cached structure contains only the function name and the DLL name in which the procedure is implemented.

    SQL Server execution plans have the following main components:

    • Compiled Plan (or Query Plan)
      The query plan produced by the compilation process is mostly a re-entrant, read-only data structure used by any number of users. It stores information about:

      • Physical operators which implement the operation described by logical operators.

      • The order of these operators, which determines the order in which data is accessed, filtered, and aggregated.

      • The number of estimated rows flowing through the operators.

        Note

        In newer versions of the Database Engine, information about the statistics objects that were used for Cardinality Estimation is also stored.

      • What support objects must be created, such as worktables or workfiles in tempdb. No user context or runtime information is stored in the query plan. There are never more than one or two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.

    • Execution Context
      Each user that is currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is referred to as the execution context. The execution context data structures are reused, but their content is not. If another user executes the same query, the data structures are reinitialized with the context for the new user.

    When any Transact-SQL statement is executed in SQL Server, the Database Engine first looks through the plan cache to verify that an existing execution plan for the same Transact-SQL statement exists. The Transact-SQL statement qualifies as existing if it literally matches a previously executed Transact-SQL statement with a cached plan, character per character. SQL Server reuses any existing plan it finds, saving the overhead of recompiling the Transact-SQL statement. If no execution plan exists, SQL Server generates a new execution plan for the query.

    Note

    The execution plans for some Transact-SQL statements are not persisted in the plan cache, such as bulk operation statements running on rowstore or statements containing string literals larger than 8 KB in size. These plans only exist while the query is being executed.

    SQL Server has an efficient algorithm to find any existing execution plans for any specific Transact-SQL statement. In most systems, the minimal resources that are used by this scan are less than the resources that are saved by being able to reuse existing plans instead of compiling every Transact-SQL statement.

    The algorithms to match new Transact-SQL statements to existing, unused execution plans in the plan cache require that all object references be fully qualified. For example, assume that Person is the default schema for the user executing the statements below. While in this example the table does not have to be fully qualified for the statements to execute, the unqualified reference means that the second statement is not matched with an existing plan, but the third is matched:
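The statements were omitted here; a sketch of the idea, assuming Person is the default schema and Person.Person is the table:

```sql
SELECT * FROM Person.Person;   -- compiles and caches a plan
GO
SELECT * FROM Person;          -- unqualified: cannot be matched with the cached plan
GO
SELECT * FROM Person.Person;   -- fully qualified: matched with the first plan
GO
```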

    Changing any of the following SET options for a given execution will affect the ability to reuse plans, because the Database Engine performs constant folding and these options affect the results of such expressions: ANSI_NULL_DFLT_OFF, ANSI_NULL_DFLT_ON, ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, DATEFORMAT, LANGUAGE, NO_BROWSETABLE, NUMERIC_ROUNDABORT, and QUOTED_IDENTIFIER.

    Caching multiple plans for the same query

    Queries and execution plans are uniquely identifiable in the Database Engine, much like a fingerprint:

    • The query plan hash is a binary hash value calculated on the execution plan for a given query, and used to uniquely identify similar execution plans.
    • The query hash is a binary hash value calculated on the Transact-SQL text of a query, and is used to uniquely identify queries.

    A compiled plan can be retrieved from the plan cache using a Plan Handle, which is a transient identifier that remains constant only while the plan remains in the cache. The plan handle is a hash value derived from the compiled plan of the entire batch. The plan handle for a compiled plan remains the same even if one or more statements in the batch get recompiled.

    Note

    If a plan was compiled for a batch instead of a single statement, the plan for individual statements in the batch can be retrieved using the plan handle and statement offsets.
    The sys.dm_exec_requests DMV contains the statement_start_offset and statement_end_offset columns for each record, which refer to the currently executing statement of a currently executing batch or persisted object. For more information, see sys.dm_exec_requests (Transact-SQL).
    The sys.dm_exec_query_stats DMV also contains these columns for each record, which refer to the position of a statement within a batch or persisted object. For more information, see sys.dm_exec_query_stats (Transact-SQL).

    The actual Transact-SQL text of a batch is stored in a separate memory space from the plan cache, called the SQL Manager cache (SQLMGR). The Transact-SQL text for a compiled plan can be retrieved from the sql manager cache using a SQL Handle, which is a transient identifier that remains constant only while at least one plan that references it remains in the plan cache. The sql handle is a hash value derived from the entire batch text and is guaranteed to be unique for every batch.

    Note

    Like a compiled plan, the Transact-SQL text is stored per batch, including the comments. The sql handle contains the MD5 hash of the entire batch text and is guaranteed to be unique for every batch.

    The query below provides information about memory usage for the sql manager cache:
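The query was omitted here; the SQL manager cache can be inspected through the sys.dm_os_memory_objects DMV:

```sql
SELECT SUM(pages_in_bytes) / 1024 AS sql_manager_cache_kb
FROM sys.dm_os_memory_objects
WHERE type = 'MEMOBJ_SQLMGR';
```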

    There is a 1:N relation between a sql handle and plan handles. Such a condition occurs when the cache key for the compiled plans is different, for example due to a change in SET options between two executions of the same batch.

    Consider the following stored procedure:
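The procedure was omitted here; a sketch using hypothetical AdventureWorks-style names:

```sql
CREATE PROCEDURE usp_SalesByCustomer @CustomerID int
AS
SELECT SalesOrderID, OrderDate, TotalDue
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID;
GO
EXEC usp_SalesByCustomer @CustomerID = 11000;
```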

    Verify what can be found in the plan cache using the query below:
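The query was omitted here; a sketch that joins the plan cache DMVs to show the use count, plan handle, sql handle, and hashes for the procedure's plan (the procedure name is hypothetical):

```sql
SELECT cp.usecounts, cp.objtype, cp.plan_handle,
       qs.sql_handle, qs.query_hash, qs.query_plan_hash
FROM sys.dm_exec_cached_plans AS cp
JOIN sys.dm_exec_query_stats AS qs
    ON cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%usp_SalesByCustomer%';
```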

    Here is the result set.

    Now execute the stored procedure with a different parameter, but no other changes to execution context:

    Verify again what can be found in the plan cache. Here is the result set.

    Notice the usecounts value has increased to 2, which means the same cached plan was reused as-is, because the execution context data structures were reused. Now change one of the plan-cache-affecting SET options and execute the stored procedure using the same parameter.

    Verify again what can be found in the plan cache. Here is the result set.

    Notice there are now two entries in the DMV output:

    • The usecounts column shows the value 1 in the first record, which is the plan executed once under the changed SET options.
    • The usecounts column shows the value 2 in the second record, which is the plan executed with the original SET options, because it was executed twice.
    • The different plan_handle refers to a different execution plan entry in the plan cache. However, the sql_handle value is the same for both entries because they refer to the same batch.
      • The execution under the changed SET options has a new plan_handle, and it's available for reuse for calls that have the same set of SET options. The new plan handle is necessary because the execution context was reinitialized due to changed SET options. But that doesn't trigger a recompile: both entries refer to the same plan and query, as evidenced by the same query_plan_hash and query_hash values.

    What this effectively means is that there are two plan entries in the cache corresponding to the same batch. This underscores the importance of ensuring that the plan-cache-affecting SET options are the same when the same queries are executed repeatedly, to optimize for plan reuse and to keep plan cache size to its required minimum.

    Tip

    A common pitfall is that different clients may have different default values for the SET options. For example, a connection made through SQL Server Management Studio automatically sets QUOTED_IDENTIFIER to ON, while SQLCMD sets QUOTED_IDENTIFIER to OFF. Executing the same queries from these two clients will result in multiple plans (as described in the example above).

    Removing execution plans from the Plan Cache

    Execution plans remain in the plan cache as long as there is enough memory to store them. When memory pressure exists, the SQL Server Database Engine uses a cost-based approach to determine which execution plans to remove from the plan cache. To make a cost-based decision, the SQL Server Database Engine increases and decreases a current cost variable for each execution plan according to the following factors.

    When a user process inserts an execution plan into the cache, the user process sets the current cost equal to the original query compile cost; for ad-hoc execution plans, the user process sets the current cost to zero. Thereafter, each time a user process references an execution plan, it resets the current cost to the original compile cost; for ad-hoc execution plans the user process increases the current cost. For all plans, the maximum value for the current cost is the original compile cost.

    When memory pressure exists, the SQL Server Database Engine responds by removing execution plans from the plan cache. To determine which plans to remove, the SQL Server Database Engine repeatedly examines the state of each execution plan and removes plans when their current cost is zero. An execution plan with zero current cost is not removed automatically when memory pressure exists; it is removed only when the SQL Server Database Engine examines the plan and the current cost is zero. When examining an execution plan, the SQL Server Database Engine pushes the current cost towards zero by decreasing the current cost if a query is not currently using the plan.

    The SQL Server Database Engine repeatedly examines the execution plans until enough have been removed to satisfy memory requirements. While memory pressure exists, an execution plan may have its cost increased and decreased more than once. When memory pressure no longer exists, the SQL Server Database Engine stops decreasing the current cost of unused execution plans and all execution plans remain in the plan cache, even if their cost is zero.

    The SQL Server Database Engine uses the resource monitor and user worker threads to free memory from the plan cache in response to memory pressure. The resource monitor and user worker threads can examine plans run concurrently to decrease the current cost for each unused execution plan. The resource monitor removes execution plans from the plan cache when global memory pressure exists. It frees memory to enforce policies for system memory, process memory, resource pool memory, and maximum size for all caches.

    The maximum size for all caches is a function of the buffer pool size and cannot exceed the maximum server memory. For more information on configuring maximum server memory, see the max server memory setting in the server configuration options documentation.

    The user worker threads remove execution plans from the plan cache when single cache memory pressure exists. They enforce policies for maximum single cache size and maximum single cache entries.

    The following examples illustrate which execution plans get removed from the plan cache:

    • An execution plan is frequently referenced so that its cost never goes to zero. The plan remains in the plan cache and is not removed unless there is memory pressure and the current cost is zero.
    • An ad-hoc execution plan is inserted and is not referenced again before memory pressure exists. Since ad-hoc plans are initialized with a current cost of zero, when the SQL Server Database Engine examines the execution plan, it will see the zero current cost and remove the plan from the plan cache. The ad-hoc execution plan remains in the plan cache with a zero current cost when memory pressure does not exist.

    To manually remove a single plan or all plans from the cache, use DBCC FREEPROCCACHE. In more recent versions of SQL Server, you can also use ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE to clear the procedure (plan) cache for the database in scope.
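A sketch of these options (the plan handle value is purely illustrative):

```sql
-- Remove all plans from the plan cache:
DBCC FREEPROCCACHE;

-- Remove a single plan, identified by its plan handle:
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);

-- Clear only the plan cache for the database in scope (newer versions):
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```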

    Recompiling Execution Plans

    Certain changes in a database can cause an execution plan to be either inefficient or invalid, based on the new state of the database. SQL Server detects the changes that invalidate an execution plan and marks the plan as not valid. A new plan must then be recompiled for the next connection that executes the query. The conditions that invalidate a plan include the following:

    • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
    • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
    • Changes to any indexes used by the execution plan.
    • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
    • Dropping an index used by the execution plan.
    • An explicit call to sp_recompile.
    • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
    • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
    • Executing a stored procedure using the WITH RECOMPILE option.

    Most recompilations are required either for statement correctness or to obtain potentially faster query execution plans.

    In SQL Server 2000 and earlier, whenever a statement within a batch caused recompilation, the entire batch, whether submitted through a stored procedure, trigger, ad-hoc batch, or prepared statement, was recompiled. Starting with SQL Server 2005 (9.x), only the statement inside the batch that triggers recompilation is recompiled. Also, there are additional types of recompilations in SQL Server 2005 (9.x) and later because of its expanded feature set.

    Statement-level recompilation benefits performance because, in most cases, a small number of statements causes recompilations and their associated penalties, in terms of CPU time and locks. These penalties are therefore avoided for the other statements in the batch that do not have to be recompiled.

    The sql_statement_recompile extended event (xEvent) reports statement-level recompilations. This xEvent occurs when a statement-level recompilation is required by any kind of batch. This includes stored procedures, triggers, ad hoc batches and queries. Batches may be submitted through several interfaces, including sp_executesql, dynamic SQL, Prepare methods, or Execute methods. The recompile_cause column of this xEvent contains an integer code that indicates the reason for the recompilation. The following table contains the possible reasons:

    • Schema changed
    • Statistics changed
    • Deferred compile
    • SET option changed
    • Temporary table changed
    • Remote rowset changed
    • FOR BROWSE permission changed
    • Query notification environment changed
    • Partitioned view changed
    • Cursor options changed
    • OPTION (RECOMPILE) requested
    • Parameterized plan flushed
    • Plan affecting database version changed
    • Query Store plan forcing policy changed
    • Query Store plan forcing failed
    • Query Store missing the plan

    Note

    In SQL Server versions where xEvents are not available, the SQL Server Profiler SP:Recompile trace event can be used for the same purpose of reporting statement-level recompilations. The SQL:StmtRecompile trace event also reports statement-level recompilations, and this trace event can also be used to track and debug recompilations. Whereas SP:Recompile generates only for stored procedures and triggers, SQL:StmtRecompile generates for stored procedures, triggers, ad-hoc batches, batches that are executed by using sp_executesql, prepared queries, and dynamic SQL. The EventSubClass column of SP:Recompile and SQL:StmtRecompile contains an integer code that indicates the reason for the recompilation.

    Note

    When the AUTO_UPDATE_STATISTICS database option is set to ON, queries are recompiled when they target tables or indexed views whose statistics have been updated or whose cardinalities have changed significantly since the last execution. This behavior applies to standard user-defined tables, temporary tables, and the inserted and deleted tables created by DML triggers. If query performance is affected by excessive recompilations, consider changing this setting to OFF. When the AUTO_UPDATE_STATISTICS database option is set to OFF, no recompilations occur based on statistics or cardinality changes, with the exception of the inserted and deleted tables that are created by DML triggers. Because these tables are created in tempdb, the recompilation of queries that access them depends on the setting of AUTO_UPDATE_STATISTICS in tempdb. Note that in earlier versions of SQL Server, queries continue to recompile based on cardinality changes to the DML trigger inserted and deleted tables, even when this setting is OFF.

    Parameters and Execution Plan Reuse

    The use of parameters, including parameter markers in ADO, OLE DB, and ODBC applications, can increase the reuse of execution plans.

    Warning

    Using parameters or parameter markers to hold values that are typed by end users is more secure than concatenating the values into a string that is then executed by using either a data access API method, the EXECUTE statement, or the sp_executesql stored procedure.

    The only difference between the following two statements is the values that are compared in the WHERE clause:
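The statements were omitted here; a sketch of such a pair, with hypothetical names:

```sql
SELECT * FROM AdventureWorks.Production.Product
WHERE ProductSubcategoryID = 1;

SELECT * FROM AdventureWorks.Production.Product
WHERE ProductSubcategoryID = 4;
```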

    The only difference between the execution plans for these queries is the literal value used in the comparison. While the goal is for SQL Server to always recognize that the statements generate essentially the same plan and reuse the plans, SQL Server sometimes does not detect this in complex Transact-SQL statements.

    Separating constants from the Transact-SQL statement by using parameters helps the relational engine recognize duplicate plans. You can use parameters in the following ways:

    • In Transact-SQL, use sp_executesql:

      This method is recommended for Transact-SQL scripts, stored procedures, or triggers that generate SQL statements dynamically.

    • ADO, OLE DB, and ODBC use parameter markers. Parameter markers are question marks (?) that replace a constant in an SQL statement and are bound to a program variable. For example, you would do the following in an ODBC application:

      • Use SQLBindParameter to bind an integer variable to the first parameter marker in an SQL statement.
      • Put the integer value in the variable.
      • Execute the statement, specifying the parameter marker (?):

      The SQL Server Native Client OLE DB Provider and the SQL Server Native Client ODBC driver included with SQL Server use sp_executesql to send statements to SQL Server when parameter markers are used in applications.

    • Use stored procedures, which use parameters by design.
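    The ways above can be sketched with sp_executesql; the table and parameter names here are illustrative, not from the original:

    ```sql
    DECLARE @IntVariable int;
    DECLARE @SQLString nvarchar(500);
    DECLARE @ParmDefinition nvarchar(500);

    -- Build the statement once, with a named parameter instead of a literal.
    SET @SQLString = N'SELECT * FROM Production.Product WHERE ProductSubcategoryID = @SubcatID';
    SET @ParmDefinition = N'@SubcatID int';

    -- Execute with different values; the plan is compiled once and reused.
    SET @IntVariable = 1;
    EXECUTE sp_executesql @SQLString, @ParmDefinition, @SubcatID = @IntVariable;

    SET @IntVariable = 4;
    EXECUTE sp_executesql @SQLString, @ParmDefinition, @SubcatID = @IntVariable;
    ```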

    If you do not explicitly build parameters into the design of your applications, you can also rely on the SQL Server Query Optimizer to automatically parameterize certain queries by using the default behavior of simple parameterization. Alternatively, you can force the Query Optimizer to consider parameterizing all queries in the database by setting the PARAMETERIZATION option of the ALTER DATABASE statement to FORCED.

    When forced parameterization is enabled, simple parameterization can still occur. For example, the following query cannot be parameterized according to the rules of forced parameterization:

    However, it can be parameterized according to simple parameterization rules. When forced parameterization is tried but fails, simple parameterization is still subsequently tried.

    Simple Parameterization

    In SQL Server, using parameters or parameter markers in Transact-SQL statements increases the ability of the relational engine to match new Transact-SQL statements with existing, previously-compiled execution plans.

    Warning

    Using parameters or parameter markers to hold values typed by end users is more secure than concatenating the values into a string that is then executed using either a data access API method, the EXECUTE statement, or the sp_executesql stored procedure.

    If a Transact-SQL statement is executed without parameters, SQL Server parameterizes the statement internally to increase the possibility of matching it against an existing execution plan. This process is called simple parameterization. In earlier versions of SQL Server, the process was referred to as auto-parameterization.

    Consider this statement:
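    The statement the text refers to was lost in extraction; a representative example, assuming an AdventureWorks-style table, is:

    ```sql
    SELECT * FROM Production.Product WHERE ProductSubcategoryID = 1;
    ```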

    The value 1 at the end of the statement can be specified as a parameter. The relational engine builds the execution plan for this batch as if a parameter had been specified in place of the value 1. Because of this simple parameterization, SQL Server recognizes that the following two statements generate essentially the same execution plan and reuses the first plan for the second statement:
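    A sketch of such a pair, with the same assumed table as above, differing only in the literal value:

    ```sql
    SELECT * FROM Production.Product WHERE ProductSubcategoryID = 1;

    SELECT * FROM Production.Product WHERE ProductSubcategoryID = 4;
    ```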

    When processing complex Transact-SQL statements, the relational engine may have difficulty determining which expressions can be parameterized. To increase the ability of the relational engine to match complex Transact-SQL statements to existing, unused execution plans, explicitly specify the parameters using either sp_executesql or parameter markers.

    Note

    When the +, -, *, /, or % arithmetic operators are used to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, SQL Server applies specific rules to calculate the type and precision of the expression results. However, these rules differ, depending on whether the query is parameterized or not. Therefore, similar expressions in queries can, in some cases, produce differing results.

    Under the default behavior of simple parameterization, SQL Server parameterizes a relatively small class of queries. However, you can specify that all queries in a database be parameterized, subject to certain limitations, by setting the PARAMETERIZATION option of the ALTER DATABASE command to FORCED. Doing so may improve the performance of databases that experience high volumes of concurrent queries by reducing the frequency of query compilations.

    Alternatively, you can specify that a single query, and any others that are syntactically equivalent but differ only in their parameter values, be parameterized.

    Forced Parameterization

    You can override the default simple parameterization behavior of SQL Server by specifying that all SELECT, INSERT, UPDATE, and DELETE statements in a database be parameterized, subject to certain limitations. Forced parameterization is enabled by setting the PARAMETERIZATION option to FORCED in the ALTER DATABASE statement. Forced parameterization may improve the performance of certain databases by reducing the frequency of query compilations and recompilations. Databases that may benefit from forced parameterization are generally those that experience high volumes of concurrent queries from sources such as point-of-sale applications.
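    For example, the setting can be toggled per database as follows (the database name is illustrative):

    ```sql
    -- Enable forced parameterization for one database
    ALTER DATABASE AdventureWorks2022 SET PARAMETERIZATION FORCED;

    -- Revert to the default behavior
    ALTER DATABASE AdventureWorks2022 SET PARAMETERIZATION SIMPLE;
    ```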

    When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement, submitted in any form, is converted to a parameter during query compilation. The exceptions are literals that appear in the following query constructs:

    • INSERT...EXECUTE statements.
    • Statements inside the bodies of stored procedures, triggers, or user-defined functions. SQL Server already reuses query plans for these routines.
    • Prepared statements that have already been parameterized on the client-side application.
    • Statements that contain XQuery method calls, where the method appears in a context where its arguments would typically be parameterized, such as a WHERE clause. If the method appears in a context where its arguments would not be parameterized, the rest of the statement is parameterized.
    • Statements inside a Transact-SQL cursor. (SELECT statements inside API cursors are parameterized.)
    • Deprecated query constructs.
    • Any statement that is run in the context of ANSI_PADDING or ANSI_NULLS set to OFF.
    • Statements that contain more than 2,097 literals that are eligible for parameterization.
    • Statements that reference variables, such as WHERE T.col2 >= @bb.
    • Statements that contain the RECOMPILE query hint.
    • Statements that contain a COMPUTE clause.
    • Statements that contain a WHERE CURRENT OF clause.

    Additionally, the following query clauses are not parameterized. Note that in these cases, only the clauses are not parameterized. Other clauses within the same query may be eligible for forced parameterization.

    • The <select_list> of any SELECT statement. This includes SELECT lists of subqueries and SELECT lists inside INSERT statements.
    • Subquery SELECT statements that appear inside an IF statement.
    • The TOP, TABLESAMPLE, HAVING, GROUP BY, ORDER BY, OUTPUT...INTO, or FOR XML clauses of a query.
    • Arguments, either direct or as subexpressions, to OPENROWSET, OPENQUERY, OPENDATASOURCE, OPENXML, or any FULLTEXT operator.
    • The pattern and escape_character arguments of a LIKE clause.
    • The style argument of a CONVERT clause.
    • Integer constants inside an IDENTITY clause.
    • Constants specified by using ODBC extension syntax.
    • Constant-foldable expressions that are arguments of certain operators. When considering eligibility for forced parameterization, SQL Server considers an expression to be constant-foldable when either of the following conditions is true:
      • No columns, variables, or subqueries appear in the expression.
      • The expression contains a CASE clause.
    • Arguments to query hint clauses. These include the number_of_rows argument of the FAST query hint, the number_of_processors argument of the MAXDOP query hint, and the number argument of the MAXRECURSION query hint.

    Parameterization occurs at the level of individual Transact-SQL statements. In other words, individual statements in a batch are parameterized. After compiling, a parameterized query is executed in the context of the batch in which it was originally submitted. If an execution plan for a query is cached, you can determine whether the query was parameterized by referencing the sql column of the sys.syscacheobjects dynamic management view. If a query is parameterized, the names and data types of parameters come before the text of the submitted batch in this column, such as (@1 tinyint).
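    As an illustrative check, cache entries whose text begins with a parameter list were parameterized by the server:

    ```sql
    -- Entries whose text starts with a parameter list, e.g. (@1 tinyint),
    -- were parameterized by SQL Server.
    SELECT sql
    FROM sys.syscacheobjects
    WHERE sql LIKE '(@%';
    ```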

    Note

    Parameter names are arbitrary. Users or applications should not rely on a particular naming order. Also, the following can change between versions of SQL Server and Service Pack upgrades: Parameter names, the choice of literals that are parameterized, and the spacing in the parameterized text.

    Data Types of Parameters

    When SQL Server parameterizes literals, the parameters are converted to the following data types:

    • Integer literals whose size would otherwise fit within the int data type parameterize to int. Larger integer literals that are parts of predicates that involve any comparison operator (including <, <=, =, !=, >, >=, !<, !>, <>, ALL, ANY, SOME, BETWEEN, and IN) parameterize to numeric(38,0). Larger literals that are not parts of predicates that involve comparison operators parameterize to numeric whose precision is just large enough to support its size and whose scale is 0.
    • Fixed-point numeric literals that are parts of predicates that involve comparison operators parameterize to numeric whose precision is 38 and whose scale is just large enough to support its size. Fixed-point numeric literals that are not parts of predicates that involve comparison operators parameterize to numeric whose precision and scale are just large enough to support its size.
    • Floating point numeric literals parameterize to float(53).
    • Non-Unicode string literals parameterize to varchar(8000) if the literal fits within 8,000 characters, and to varchar(max) if it is larger than 8,000 characters.
    • Unicode string literals parameterize to nvarchar(4000) if the literal fits within 4,000 Unicode characters, and to nvarchar(max) if the literal is larger than 4,000 characters.
    • Binary literals parameterize to varbinary(8000) if the literal fits within 8,000 bytes. If it is larger than 8,000 bytes, it is converted to varbinary(max).
    • Money type literals parameterize to money.

    Guidelines for Using Forced Parameterization

    Consider the following when you set the PARAMETERIZATION option to FORCED:

    • Forced parameterization, in effect, changes the literal constants in a query to parameters when compiling a query. Therefore, the Query Optimizer might choose suboptimal plans for queries. In particular, the Query Optimizer is less likely to match the query to an indexed view or an index on a computed column. It may also choose suboptimal plans for queries posed on partitioned tables and distributed partitioned views. Forced parameterization should not be used for environments that rely heavily on indexed views and indexes on computed columns. Generally, the option should only be used by experienced database administrators after determining that doing this does not adversely affect performance.
    • Distributed queries that reference more than one database are eligible for forced parameterization as long as the PARAMETERIZATION option is set to FORCED in the database in whose context the query is running.
    • Setting the PARAMETERIZATION option to FORCED flushes all query plans from the plan cache of a database, except those that currently are compiling, recompiling, or running. Plans for queries that are compiling or running during the setting change are parameterized the next time the query is executed.
    • Setting the PARAMETERIZATION option is an online operation in that it requires no database-level exclusive locks.
    • The current setting of the PARAMETERIZATION option is preserved when reattaching or restoring a database.

    You can override the behavior of forced parameterization by specifying that simple parameterization be attempted on a single query, and any others that are syntactically equivalent but differ only in their parameter values. Conversely, you can specify that forced parameterization be attempted on only a set of syntactically equivalent queries, even if forced parameterization is disabled in the database. Plan guides are used for this purpose.
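    A hedged sketch of a TEMPLATE plan guide that forces parameterization for one class of syntactically equivalent queries (the query text and guide name are illustrative):

    ```sql
    DECLARE @stmt nvarchar(max);
    DECLARE @params nvarchar(max);

    -- Derive the parameterized form of the query and its parameter list.
    EXEC sp_get_query_template
        N'SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = 45639;',
        @stmt OUTPUT,
        @params OUTPUT;

    -- Create a TEMPLATE plan guide that applies forced parameterization to all
    -- queries that differ from this one only in their literal values.
    EXEC sp_create_plan_guide
        N'ForceParamGuide',
        @stmt,
        N'TEMPLATE',
        NULL,
        @params,
        N'OPTION (PARAMETERIZATION FORCED)';
    ```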

    Note

    When the PARAMETERIZATION option is set to FORCED, the reporting of error messages may differ from when it is set to SIMPLE: multiple error messages may be reported under forced parameterization, where fewer messages would be reported under simple parameterization, and the line numbers in which errors occur may be reported incorrectly.

    Preparing SQL Statements

    The SQL Server relational engine introduces full support for preparing Transact-SQL statements before they are executed. If an application has to execute a Transact-SQL statement several times, it can use the database API to do the following:

    • Prepare the statement once. This compiles the Transact-SQL statement into an execution plan.
    • Execute the precompiled execution plan every time it has to execute the statement. This prevents having to recompile the Transact-SQL statement on each execution after the first time.

    Preparing and executing statements is controlled by API functions and methods. It is not part of the Transact-SQL language. The prepare/execute model of executing Transact-SQL statements is supported by the SQL Server Native Client OLE DB Provider and the SQL Server Native Client ODBC driver. On a prepare request, either the provider or the driver sends the statement to SQL Server with a request to prepare the statement. SQL Server compiles an execution plan and returns a handle for that plan to the provider or driver. On an execute request, either the provider or the driver sends the server a request to execute the plan that is associated with the handle.

    Prepared statements cannot be used to create temporary objects on SQL Server. Prepared statements cannot reference system stored procedures that create temporary objects, such as temporary tables. These procedures must be executed directly.

    Excessive use of the prepare/execute model can degrade performance. If a statement is executed only once, a direct execution requires only one network round-trip to the server. Preparing and executing a Transact-SQL statement that is executed only one time requires an extra network round-trip: one trip to prepare the statement and one trip to execute it.

    Preparing a statement is more effective if parameter markers are used. For example, assume that an application is occasionally asked to retrieve product information from the sample database. There are two ways the application can do this.

    Using the first way, the application can execute a separate query for each product requested:
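    A sketch of this first approach, with assumed table and key names:

    ```sql
    SELECT * FROM Production.Product WHERE ProductID = 63;
    ```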

    Using the second way, the application does the following:

    1. Prepares a statement that contains a parameter marker (?):
    2. Binds a program variable to the parameter marker.
    3. Each time product information is needed, fills the bound variable with the key value and executes the statement.
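    The statement prepared in step 1 would look like the following (same assumed names as above); the application then binds a program variable to the marker and re-executes with different key values:

    ```sql
    SELECT * FROM Production.Product WHERE ProductID = ?;
    ```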

    The second way is more efficient when the statement is executed more than three times.

    In SQL Server, the prepare/execute model has no significant performance advantage over direct execution, because of the way SQL Server reuses execution plans. SQL Server has efficient algorithms for matching current Transact-SQL statements with execution plans that are generated for prior executions of the same Transact-SQL statement. If an application executes a Transact-SQL statement with parameter markers multiple times, SQL Server will reuse the execution plan from the first execution for the second and subsequent executions (unless the plan ages from the plan cache). The prepare/execute model still has these benefits:

    • Finding an execution plan by an identifying handle is more efficient than the algorithms used to match a Transact-SQL statement to existing execution plans.
    • The application can control when the execution plan is created and when it is reused.
    • The prepare/execute model is portable to other databases, including earlier versions of SQL Server.

    Parameter Sensitivity

    Parameter sensitivity, also known as "parameter sniffing", refers to a process whereby SQL Server "sniffs" the current parameter values during compilation or recompilation and passes them along to the Query Optimizer so that they can be used to generate potentially more efficient query execution plans.

    Parameter values are sniffed during compilation or recompilation for the following types of batches:

    • Stored procedures
    • Queries submitted via sp_executesql
    • Prepared queries
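    As an illustrative sketch (the procedure and table names are assumed), the parameter value supplied on the first execution is sniffed and shapes the cached plan:

    ```sql
    CREATE PROCEDURE Sales.GetOrdersByCustomer
        @CustomerID int
    AS
    BEGIN
        -- On first compilation, the optimizer "sniffs" the value of @CustomerID
        -- and builds a plan tuned for that value's estimated cardinality.
        SELECT SalesOrderID, OrderDate
        FROM Sales.SalesOrderHeader
        WHERE CustomerID = @CustomerID;
    END;
    GO

    -- The plan compiled for this call is reused by later calls,
    -- even if their parameter values have very different selectivity.
    EXEC Sales.GetOrdersByCustomer @CustomerID = 11000;
    ```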

    For more information on troubleshooting bad parameter sniffing issues, see Troubleshoot queries with parameter-sensitive query execution plan issues.

    Note

    For queries using the RECOMPILE hint, both parameter values and current values of local variables are sniffed. The values sniffed (of parameters and local variables) are those that exist at the place in the batch just before the statement with the RECOMPILE hint. In particular, for parameters, the values that came along with the batch invocation call are not sniffed.

    Parallel Query Processing

    SQL Server provides parallel queries to optimize query execution and index operations for computers that have more than one microprocessor (CPU). Because SQL Server can perform a query or index operation in parallel by using several operating system worker threads, the operation can be completed quickly and efficiently.

    During query optimization, SQL Server looks for queries or index operations that might benefit from parallel execution. For these queries, SQL Server inserts exchange operators into the query execution plan to prepare the query for parallel execution. An exchange operator is an operator in a query execution plan that provides process management, data redistribution, and flow control. The exchange operator includes the Distribute Streams, Repartition Streams, and Gather Streams logical operators as subtypes, one or more of which can appear in the Showplan output of a query plan for a parallel query.

    Important

    Certain constructs inhibit SQL Server's ability to leverage parallelism on the entire execution plan, or on parts of the execution plan.

    Constructs that inhibit parallelism include, among others, scalar user-defined functions and dynamic cursors.
