To check the Query Execution Plan of an SQL Stored Procedure, you can use SQL Server Management Studio (SSMS) or other query profiling tools. Here’s how to do it using SSMS:
- Open SQL Server Management Studio: Launch SQL Server Management Studio and connect to the SQL Server instance where your database and Stored Procedure are located.
- Select the Database: From the Object Explorer, expand the Databases folder, and select the database that contains your Stored Procedure.
- Enable Actual Execution Plan: Before executing the Stored Procedure, make sure the “Actual Execution Plan” option is enabled. You can find this option on the toolbar or go to the “Query” menu and check the “Include Actual Execution Plan” option.
- Execute the Stored Procedure: Run the Stored Procedure from a new query window by writing an `EXEC` statement for it. For example:

```sql
EXEC YourSchema.YourStoredProcedureName @Param1 = 'Value1', @Param2 = 'Value2';
```

Replace `YourSchema` with the appropriate schema name and `YourStoredProcedureName` with the name of your Stored Procedure. Also, provide appropriate values for each parameter.
- View the Execution Plan: After executing the query, the “Execution Plan” tab should appear alongside the “Results” tab at the bottom of the query window. Click on the “Execution Plan” tab to view the graphical representation of the query execution plan. The execution plan shows how SQL Server is processing the query, including the operations performed, indexes used, and estimates of the cost of each operation. It can help you identify performance bottlenecks and areas for optimization.
- Analyze the Execution Plan: Carefully review the execution plan to understand how the Stored Procedure is being executed. Look for any table scans, expensive operations, or missing indexes that might be impacting performance. You can also hover over each operator in the execution plan to view additional details and statistics about that specific step.
- Optional: Save the Execution Plan: If you want to save the execution plan for further analysis or to share it with others, you can do so by clicking on “File” in the SSMS menu and selecting “Save Execution Plan As…”. This will save the plan as an XML file.
Remember that the execution plan may vary based on the parameters passed to the Stored Procedure, so you might want to test the procedure with different parameter values to get a more comprehensive understanding of its performance.
Finding the SQL Stored Procedure execution plan from recent executions:
To find the SQL Stored Procedure execution plan from the last executions, you can use SQL Server's dynamic management views (DMVs) or Extended Events. I'll show you how to do it using DMVs, as they are easier to work with and provide valuable information about the execution plans. Specifically, we'll use the `sys.dm_exec_query_stats`, `sys.dm_exec_sql_text`, and `sys.dm_exec_query_plan` DMVs. Here's the step-by-step process:
- Open SQL Server Management Studio: Launch SQL Server Management Studio and connect to the SQL Server instance where your database and Stored Procedure are located.
- Identify the Stored Procedure: First, you need to identify the name of the Stored Procedure you want to check the execution plan for.
- Query the Dynamic Management Views: Use the following query to retrieve the execution plan of the last execution of the Stored Procedure:
```sql
SELECT TOP 1
    qs.creation_time,
    qt.text AS [SQLText],
    qp.query_plan AS [ExecutionPlan]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qt.objectid = OBJECT_ID('YourSchema.YourStoredProcedureName')
ORDER BY qs.creation_time DESC;
```

Replace `YourSchema` with the appropriate schema name and `YourStoredProcedureName` with the name of your Stored Procedure.
- Interpret the Results:
  - `creation_time`: The timestamp of when the query plan was compiled.
  - `SQLText`: The text of the Stored Procedure.
  - `ExecutionPlan`: The XML representation of the execution plan.
- View the Execution Plan: In the results grid, click the XML link in the `ExecutionPlan` column; SSMS will open the graphical execution plan. (Alternatively, save the XML to a file with a .sqlplan extension and open that file in SSMS.) You can then analyze the plan to understand how the Stored Procedure is being executed.
- Remember that the `sys.dm_exec_query_stats` DMV holds query execution statistics for the entire SQL Server instance, and the above query will retrieve the execution plan for the last execution of the specified Stored Procedure. If you want to analyze execution plans for multiple executions or for a specific time range, you can modify the query accordingly, such as by using additional filtering conditions or joining with other DMVs.
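One way to broaden the query along those lines, shown here as a sketch with `YourSchema.YourStoredProcedureName` still a placeholder, is to drop the `TOP 1`, add a time filter, and pull in some runtime statistics:

```sql
SELECT
    qs.creation_time,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS [AvgElapsedMicroseconds],
    qt.text AS [SQLText],
    qp.query_plan AS [ExecutionPlan]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qt.objectid = OBJECT_ID('YourSchema.YourStoredProcedureName')
  AND qs.creation_time >= DATEADD(HOUR, -24, GETDATE())  -- last 24 hours
ORDER BY qs.creation_time DESC;
```

Note that this still only covers plans currently in the plan cache; plans that were evicted or recompiled are not returned.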
A slow API execution while executing an SQL Stored Procedure can be caused by various factors. To troubleshoot and improve its performance, you can follow these steps:
- Identify the Bottleneck:
- Check the API logs or monitoring tools to identify if the slowness is in the API itself or if it’s due to the SQL Server.
- Measure the execution time of the API and the Stored Procedure separately to determine which part is causing the delay.
- Examine the SQL Stored Procedure:
- Review the Stored Procedure’s code to ensure it is optimized for performance.
- Check if there are any complex or nested queries, subqueries, or joins that can be simplified or optimized.
- Make sure the Stored Procedure is using appropriate indexes on the tables it accesses.
- Query Execution Plan:
- Analyze the query execution plan of the Stored Procedure to see if there are any performance issues.
- Ensure that the execution plan is using the correct indexes and not performing any unnecessary table scans.
- Verify that the relevant tables have appropriate indexes based on the queries used in the Stored Procedure.
- Missing or incorrect indexes can lead to slow performance.
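To help with the index checks above, SQL Server tracks indexes the optimizer looked for but did not find. The following sketch queries the missing-index DMVs; treat the output as candidates to investigate rather than indexes to create blindly:

```sql
SELECT
    mid.statement AS [TableName],
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact   -- estimated % improvement if the index existed
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```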
- Parameter Sniffing:
- Check if parameter sniffing is affecting the Stored Procedure’s performance.
- Parameter sniffing can cause performance issues when SQL Server generates an execution plan based on the first set of parameters used.
- Ensure that table statistics are up to date, as outdated statistics can lead to suboptimal query plans.
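As a sketch of the usual mitigations (the `Orders` table and `@CustomerId` parameter here are hypothetical), you can refresh statistics, or force a fresh plan for the specific query that suffers from sniffing:

```sql
-- Refresh statistics on a table the procedure reads heavily
UPDATE STATISTICS YourSchema.Orders WITH FULLSCAN;

-- Inside the procedure, compile a fresh plan on every execution,
-- trading extra CPU for a plan that fits the current parameter values
SELECT OrderId, OrderDate, TotalAmount
FROM YourSchema.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);
```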
- Hardware and Resource Utilization:
- Check the hardware and resource utilization of the server running the SQL Server instance.
- Insufficient memory, CPU, or disk I/O can cause performance problems.
- Concurrency and Locking:
- Investigate if there are any contention issues with locks and concurrent access to the same data.
- Ensure the Stored Procedure is not causing excessive locking or contention.
- Database Maintenance:
- Perform regular database maintenance tasks like index reorganization/rebuilding and updating statistics.
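A minimal sketch of such maintenance, assuming a hypothetical `YourSchema.Orders` table; a common rule of thumb is to reorganize at moderate fragmentation (roughly 5-30%) and rebuild above that:

```sql
-- Lightweight, always online, fixes moderate fragmentation
ALTER INDEX ALL ON YourSchema.Orders REORGANIZE;

-- Full rebuild for heavy fragmentation; ONLINE = ON requires Enterprise edition
ALTER INDEX ALL ON YourSchema.Orders REBUILD WITH (ONLINE = ON);

-- Refresh statistics after maintenance
UPDATE STATISTICS YourSchema.Orders;
```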
- Caching and Memory:
- Consider implementing caching mechanisms to reduce the need for repetitive SQL calls.
- If the API and the database server are on separate machines, ensure the API server has sufficient memory to handle database requests efficiently.
- Network Latency:
- Examine the network latency between the API server and the SQL Server.
- If the API and the database server are located in different regions, network latency can be a factor.
- Load Testing:
- Conduct load testing to simulate a realistic user load and identify performance bottlenecks.
- Version and Patching:
- Ensure that the SQL Server version is up-to-date with the latest patches and updates.
By following these steps, you should be able to pinpoint the root cause of the slow API execution and take appropriate actions to improve its performance. Remember to make changes cautiously and test thoroughly after each modification to avoid any unintended consequences.
Performance Metrics for IO:
It's critical to comprehend the primary metrics used to gauge and keep track of IO activity if you want to evaluate and improve IO performance in SQL Server. These measurements offer insightful information about how effectively data is stored and retrieved. Let's examine the main performance indicators for IO.
- Throughput: The amount of data exchanged between the storage subsystem and SQL Server in a specific amount of time is measured by throughput. Megabytes per second (MB/s) or gigabytes per second (GB/s) are the most used units of measurement. Faster data transfer and greater IO performance are indicators of higher throughput. Any restrictions in the IO path, such as those related to network bandwidth, disk controller capacity, or storage device speed, can be found by monitoring and optimizing throughput.
- Latency: The term "latency" describes the amount of time needed for an IO request to go from SQL Server to the storage subsystem and back, taking into account any delays experienced at each step. Typically, it is expressed in milliseconds (ms). Greater responsiveness and quicker IO operations are indicated by lower latency values. Monitoring latency assists in identifying potential performance-limiting factors, such as slow storage units, overworked disk controllers, or contention problems.
- IOPS (Input/Output Operations Per Second): IOPS is a measure of how many IO operations the storage subsystem completes in a second, i.e. the speed at which data may be read from or written to storage devices. IO throughput is increased and performance is improved with higher IOPS levels. Monitoring IOPS aids in evaluating the workload intensity and determining if the storage subsystem can effectively handle the IO demands.
- Queue Length: The number of pending IO requests in the IO queue that are awaiting processing by the storage subsystem is referred to as the queue length. It shows the pressure of the workload on the storage systems. A high queue length could be a sign that the storage subsystem is overloaded and having trouble meeting IO demands. In order to avoid IO bottlenecks and ensure effective IO processing, it is important to monitor and optimize the queue length.
- Response Time: Response time measures the whole amount of time required to accomplish an IO operation, including both the IO execution time and any queue waiting time. It offers a thorough overview of the end-to-end functionality of IO operations. Monitoring response time assists in identifying performance issues and optimizing the IO channel to decrease delays. A lower response time indicates faster IO completion and a better user experience.
Administrators can learn more about the efficacy of IO operations in SQL Server by tracking and examining these IO performance indicators. The SQL Server environment's overall performance and scalability are improved by adopting targeted optimizations to increase IO throughput, decrease latency, and identify bottlenecks. These metrics also serve as benchmarks for evaluating the impact of configuration changes.
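Most of these metrics can be sampled from SQL Server itself. As a sketch, the query below derives per-file latency and throughput figures from `sys.dm_io_virtual_file_stats`; keep in mind the counters are cumulative since instance startup, so deltas between two samples are more meaningful than absolute values:

```sql
SELECT
    DB_NAME(vfs.database_id) AS [DatabaseName],
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    -- average latency per operation, in milliseconds
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS [AvgReadLatencyMs],
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS [AvgWriteLatencyMs],
    -- cumulative bytes transferred since startup
    vfs.num_of_bytes_read + vfs.num_of_bytes_written     AS [TotalBytes]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY [AvgReadLatencyMs] DESC;
```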
Configuring and Tuning Storage:
The storage subsystem's configuration and optimization are essential for maximizing IO performance in SQL Server. Administrators can maximize the effectiveness, dependability, and scalability of the storage system by using good judgment and best practices. Let's look at the main factors to take into account when configuring and optimizing storage in SQL Server.
- RAID Configurations: Various levels of data redundancy, performance, and capacity are available with RAID (Redundant Array of Independent Disks) setups. The selection of a RAID configuration is influenced by various elements, including workload needs, data accessibility, and performance objectives. Commonly used RAID levels include RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), and RAID 10 (striping and mirroring). Optimizing IO throughput and safeguarding data integrity can be accomplished by analyzing the workload characteristics and choosing an appropriate RAID configuration.
- Disk Types: The performance of an IO operation can be greatly impacted by selecting the proper type of storage device. Solid-state drives (SSDs) outperform traditional hard disk drives (HDDs), while HDDs typically offer higher capacity at a lower cost. SSDs are the best choice for IO-intensive tasks because they provide faster data access and lower latency. Combining HDDs with SSDs to create hybrid storage systems helps balance performance and capacity. Choosing the right disk type and being aware of the workload needs will improve IO responsiveness.
- Partition Alignment: IO activities are efficiently spread across storage devices thanks to proper partition alignment. Misaligned partitions can hamper performance and introduce extra IO overhead. Aligning partitions to the proper offset (64 KB for most modern storage devices) increases IO efficiency. Partitions must be accurately aligned during initial storage provisioning or partition creation to prevent potential performance bottlenecks.
- File Placement: By utilizing parallelism, distributing SQL Server data files and transaction logs over many storage devices helps increase IO performance. Separate disks or disk arrays are used to store files, which helps spread out the IO workload and lower contention. Performance can be further improved by moving frequently requested files to faster storage devices or SSDs. Effective file placement strategies can maximize IO throughput and reduce delay by taking into account the workload characteristics and available storage resources.
- Filegroup Configurations: Organizing tables and indexes into filegroups, which may be mapped to certain storage devices, is possible with SQL Server. Administrators can optimize IO operations based on data access patterns and performance needs by efficiently using filegroups. For instance, putting tables or indexes that are often requested in different filegroups on faster storage devices might enhance IO responsiveness. For effective data retrieval and storage, filegroups should be carefully planned and configured.
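A minimal sketch of this technique, with hypothetical database, path, and table names, might look like:

```sql
-- Add a filegroup backed by a file on a faster volume (names are examples)
ALTER DATABASE SalesDb ADD FILEGROUP FastFG;

ALTER DATABASE SalesDb
ADD FILE (
    NAME = 'SalesDb_Fast1',
    FILENAME = 'E:\SQLData\SalesDb_Fast1.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
) TO FILEGROUP FastFG;

-- Place a frequently accessed table on the new filegroup
CREATE TABLE dbo.HotOrders (
    OrderId   INT IDENTITY PRIMARY KEY,
    OrderDate DATETIME2 NOT NULL
) ON FastFG;
```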
- Compression and Encryption: Data compression and encryption technologies provided by SQL Server have an impact on IO performance. Compression reduces the amount of data that must be stored on disk, which lowers the IO needs and boosts throughput. Compression may, however, result in a higher CPU load during IO activities. Data at rest is protected by encryption, but additional CPU overhead is created during IO operations. Optimizing IO performance necessitates weighing the trade-offs and applying compression and encryption only when necessary based on workload demands and available system resources.
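For row or page compression specifically, the usual approach is to estimate the savings first and then rebuild the object; a sketch with a hypothetical `dbo.HotOrders` table:

```sql
-- Estimate how much space PAGE compression would save
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'HotOrders',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Apply PAGE compression if the estimate looks worthwhile
ALTER TABLE dbo.HotOrders REBUILD WITH (DATA_COMPRESSION = PAGE);
```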
Administrators can improve IO performance in SQL Server by configuring and optimizing the storage subsystem in accordance with the workload characteristics and performance goals. The SQL Server environment is more responsive, scalable, and reliable as a result of proper RAID configurations, disk type selection, partition alignment, file placement strategies, and usage of compression and encryption techniques.
Businesses and organizations rely extensively on database management systems to store, analyze, and retrieve enormous amounts of data in today's data-driven world. Microsoft's SQL Server stands out as one of the top relational database management systems, supporting vital applications and managing heavy workloads. An essential component of SQL Server performance optimization is the effective management of Input/Output (IO) activity. IO processes include reading and writing data from and to storage devices such as hard disk drives (HDDs) or solid-state drives (SSDs). The total responsiveness, scalability, and reliability of SQL Server are directly impacted by the efficiency with which IO operations are managed.
In order to provide a thorough examination of SQL Server IO, this report examines its essential components, best practices, and methods for improving IO performance. Database administrators, developers, and system architects can optimize their SQL Server environments and guarantee optimal throughput by comprehending the underlying ideas and utilizing the available tools.
The report will examine a number of SQL Server IO-related topics, such as:
- IO Architecture: A description of the IO architecture used by SQL Server, outlining the key elements involved in the IO process. The function of storage subsystems, disk controllers, buffers, and cache will be examined in this part, shedding insight on how these components interact to promote effective data retrieval and storage.
- IO Performance Metrics: An explanation of the key metrics considered when evaluating IO performance. Understanding these metrics is essential for locating bottlenecks, tracking performance, and implementing targeted optimizations. Throughput, latency, and IOPS (Input/Output Operations Per Second) are critical indicators that help evaluate the efficiency of IO operations.
- Storage Configuration and Tuning: Guidance on configuring and tuning storage for the SQL Server system. It will address issues like RAID configuration and storage tuning. The tactics for selecting and optimizing storage configurations, disk types (HDDs vs. SSDs), partition alignment, and file placement will be the emphasis of this section, which will also emphasize the effects of these decisions on performance and reliability.
- Best Practices for IO Optimization: Examines the methods and practices for optimizing IO performance in SQL Server. Topics covered include maximizing the SQL Server buffer pool, effectively using the storage cache, configuring files and filegroups appropriately, and utilizing compression and encryption methods for IO optimization.
- IO Performance Monitoring and Troubleshooting: Discussion of the tools and techniques for tracking and troubleshooting SQL Server IO performance. The built-in monitoring tools, performance counters, dynamic management views, and third-party monitoring options covered in this section will enable administrators to proactively identify and fix IO-related problems.
This paper intends to provide users with the knowledge and resources essential to improve the responsiveness, scalability, and reliability of their database environments by analyzing the many components and strategies for optimizing IO performance in SQL Server. Organizations that have a thorough understanding of SQL Server IO may maximize the potential of their data and guarantee efficient and effective operations.
Architecture of IO:
The IO architecture of SQL Server consists of a number of essential elements that cooperate to provide effective data retrieval and storage. It is essential to comprehend these elements if you want to improve IO performance.
Let’s examine them carefully:
- Storage Subsystems: The storage subsystem serves as the core of SQL Server IO. It is made up of physical storage components like hard disk drives (HDDs) or solid-state drives (SSDs), which house the SQL Server database's actual data files and transaction logs. It's crucial to pick storage components with the right performance, dependability, and capacity to handle the workload.
- Disk Controllers: Disk controllers govern the data flow between the server and the storage subsystem, providing effective IO operations. They act as the interface between the storage devices and the SQL Server instance. High-performance disk controllers with RAID (Redundant Array of Independent Disks) compatibility and superior caching capabilities can significantly increase IO throughput and reliability.
- Buffers and Cache: To minimize the number of physical IO operations, SQL Server makes use of buffers and cache. Data and index pages are stored in memory by the buffer pool, a dedicated section of memory, to reduce disk access. Through this approach, IO latency is decreased and data retrieval performance is improved. In order to optimize IO operations through caching, SQL Server also uses the storage cache that is located on the disk controller or storage device.
- IO Request and Completion: An IO request is created whenever a SQL Server operation or query needs data from the storage subsystem. Information about the file location, offset, and the number of bytes to read or write is included in the request. The request is processed by the disk controller, which carries out the required IO activities such as fetching or writing data from/to the storage devices. An IO completion event is generated once the operation is finished, enabling SQL Server to carry on with the desired operation.
- IO Path: An IO request's transit from the SQL Server instance to the storage subsystem and back is referred to as the IO path. Multiple layers of hardware and software are involved, including the operating system, disk controllers, disk drivers, and storage devices. Each element in the IO path introduces some latency, which might affect the performance of IO as a whole. For effective IO activities, the IO path must be optimized by lowering latency and minimizing contention.
Administrators can identify possible bottlenecks, optimize the setup of storage subsystems, and fine-tune the IO path by understanding the IO architecture of SQL Server. Organizations can ensure optimal IO performance, which improves the responsiveness, scalability, and reliability of their SQL Server environments, by utilizing the available resources and putting best practices into use.
SQL Connectivity Issues:
You might have observed or attempted to resolve SQL connectivity issues as a DBA, so I decided to post some tips here on how to fix SQL Server connectivity issues. Based on my experience, I've outlined the essential steps below. The following are the things I look for when a connection to SQL Server is failing:
- SQL Server configuration Issue
- Firewall Issue
- Network Issue
- Logon Issue
- Configuration Issue in Application
SQL Server configuration Issue:
- Verify that the machine hosting SQL Server is up and listening on protocols.
- In SQL Server Configuration Manager, protocols can be verified and enabled.
- Named Pipes and TCP/IP protocols are needed for SQL Server to connect remotely; make sure they are enabled.
- In SQL Server Errorlog, we can check to see if the server is listening on the appropriate protocols or not.
- Additionally, if the SQL instance is a named instance, make sure SQL Server Browser is running on the machine.
Firewall Issue:
- SQL connectivity requests between the client and server may be prevented by a firewall.
- Temporarily disabling the firewall can confirm whether it is the cause, but it should not be left disabled in a production environment.
- Adding SQL Server and SQL Browser to the firewall exception list will permanently resolve the issue.
- Provide the port number on which SQL Server is listening when adding the exception for the SQL Server Database Engine.
- When adding a SQL Browser exception, make sure to include UDP port 1434.
Network Issue:
- The ping response from your source server to the target server is the first thing to check for SQL connectivity.
- Additionally, you can use Telnet to connect to the SQL Server machine from a client machine, and test the connection with a UDL file.
- Run an nslookup to see if an incorrect DNS entry is the root of the SQL connectivity problem.
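The checks above can be run from the client machine roughly as follows; `ServerName` is a placeholder for your host, and 1433 is the default port SQL Server listens on (yours may differ):

```shell
ping ServerName          # basic reachability of the SQL Server host
telnet ServerName 1433   # is the SQL Server TCP port open end to end?
nslookup ServerName      # does DNS resolve the name to the right address?
```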
Logon Issue to SQL Server:
- This is distinct from the aforementioned cases. It has to be investigated based on the error message we receive from the application team.
- As it is a problem with the login connection to SQL, we must look at each situation separately.
- If a login attempt is made to a SQL server and is unsuccessful, the errors can be seen in the server’s event viewer and SQL error logs.
- We can examine the logs and resolve the problem.
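From the SQL Server side, the error log can be searched for failed logins directly; the state number in the "Login failed" message narrows down the cause (wrong password, disabled login, missing default database, and so on):

```sql
-- Search the current error log (log 0, SQL Server log type 1)
-- for entries containing 'Login failed'
EXEC sys.xp_readerrorlog 0, 1, N'Login failed';
```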
Configuration Issue in Application:
- An incorrect configuration in the application can also cause a problem with SQL connectivity.
- In the application configuration file, look for the connection string parameters.
- Additionally, verify the login account the application is using to connect to SQL Server.
- If the account is a SQL login, check the SQL instance properties to confirm that SQL Server (Mixed Mode) authentication is enabled.
- Check the account’s permissions level as well.
1. Quarantined nodes in Windows Failover Clusters:
I've observed this issue on Windows Server 2016, where one of the nodes was quarantined after many node failover attempts in an hour. For the following few hours, the WSFC prevents the node from rejoining the cluster. In my environment this was due to a network issue: the ping response from one node to the quarantined node was intermittently timing out.
2. You will encounter the error "The primary replica is not active. The command cannot be executed."
This is due to an issue with the endpoint not listening on the correct port, or the TCP endpoint being stopped for some reason.
3. Unexpectedly, an availability group was missing, dropped, or removed.
This was caused by the cluster information not being communicated from WSFC to SQL Server; SQL Server will then drop the availability group.
4. Availability group is in Resolving State:
Issue: For a few minutes, there was a cluster failure that affected the availability group, and the replicas went into a resolving state. The replicas returned to their regular primary and secondary states once the cluster was brought back online; however, several databases were still not synchronizing. Furthermore, the databases on the primary were unavailable.
Resolution: The only remedy we found was to restart the primary replica's SQL instance. Even a simple restart of the SQL service through the configuration manager, however, was stuck on "stopping service." We had to use the T-SQL command SHUTDOWN WITH NOWAIT to force SQL Server to stop. The databases were available again after SQL Server was brought back up, and the AG was in sync and healthy.
Issue: The secondary replica does not correctly transition to the primary role if an automatic failover event fails. As a result, the availability replica will indicate that this replica is in the RESOLVING state. Furthermore, the availability databases report that they are not synchronizing, and applications are unable to access them.
Reasons this occurs:
Case 1: The value of "Maximum Failures in the Specified Period" has been reached.
With the default behavior, the clustered resource is allowed to fail three times in a six-hour period; once that threshold is reached, the AG replica is left in the RESOLVING state.
Case 2: The local SQL Server NT AUTHORITY\SYSTEM login account has insufficient rights. The following permissions are granted to this local login account by default:
Alter Any Availability Group
View server state
Case 3: If one of the availability databases in the availability group is in the synchronizing or not synchronized state, automatic failover will not be able to successfully transition the secondary replica into the primary role.
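To see which database is holding up the failover, the per-database synchronization state of every replica can be checked with a query along these lines:

```sql
SELECT
    ag.name AS [AvailabilityGroup],
    ar.replica_server_name,
    DB_NAME(drs.database_id) AS [DatabaseName],
    drs.synchronization_state_desc,
    drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id
JOIN sys.availability_groups AS ag
    ON drs.group_id = ag.group_id
ORDER BY ag.name, ar.replica_server_name;
```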
5. Diagnose Unexpected Failover or Availability Group in RESOLVING State
Lease Timeout: A lease timeout can be triggered if SQL Server does not respond within the normal 20-second lease timeout period.
Cause: 100% CPU utilization. A lease timeout might occur if CPU utilization is extremely high for an extended length of time. Using Performance Monitor, keep an eye on your CPU usage.
Sync Issues:
How do you troubleshoot an Always On synchronization issue? There are multiple reasons for a database status changing to "not synchronizing":
- Network Issue
- Huge transactions
- Space Issues
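A quick way to tell these causes apart is to look at the send and redo queues: a growing send queue points at the network (or a huge transaction still being shipped), while a growing redo queue points at the secondary falling behind. A sketch:

```sql
SELECT
    DB_NAME(database_id) AS [DatabaseName],
    log_send_queue_size,   -- KB of log not yet sent to the secondary
    log_send_rate,         -- KB/sec being sent
    redo_queue_size,       -- KB of log received but not yet redone
    redo_rate              -- KB/sec being redone
FROM sys.dm_hadr_database_replica_states;
```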
SQL Server can be started and operated using domain user accounts, local user accounts, managed service accounts, virtual accounts, and built-in system accounts. After installation, a startup account must be configured for each SQL Server service in order for it to function.
The accounts that can be used to start SQL Server services, the default values in SQL Server Setup, the concept of per-service SIDs, startup options, and firewall configuration are all covered in this section.
Default Service Accounts:
The table below lists the default service accounts that setup uses when installing all components. The default accounts listed are the suggested accounts, unless otherwise stated.
Stand-alone Server or Domain Controller
| Component | Windows Server 2008 | Windows 7 and Windows Server 2008 R2 and higher |
| --- | --- | --- |
| Database Engine | NETWORK SERVICE | Virtual Account* |
| SQL Server Agent | NETWORK SERVICE | Virtual Account* |
| SSAS | NETWORK SERVICE | Virtual Account* ** |
| SSIS | NETWORK SERVICE | Virtual Account* |
| SSRS | NETWORK SERVICE | Virtual Account* |
| SQL Server Distributed Replay Controller | NETWORK SERVICE | Virtual Account* |
| SQL Server Distributed Replay Client | NETWORK SERVICE | Virtual Account* |
| FD Launcher (Full-text Search) | LOCAL SERVICE | Virtual Account |
| SQL Server Browser | LOCAL SERVICE | LOCAL SERVICE |
| SQL Server VSS Writer | LOCAL SYSTEM | LOCAL SYSTEM |
| Advanced Analytics Extensions | NT SERVICE\ | |
| PolyBase Engine | NETWORK SERVICE | NETWORK SERVICE |
| PolyBase Data Movement Service | NETWORK SERVICE | NETWORK SERVICE |

Service Accounts for SQL Services
When resources outside the computer running SQL Server are needed, Microsoft advises using a Managed Service Account (MSA) with the least amount of privilege. When SQL Server is set up on a Domain Controller, a virtual account cannot serve as the service account.
SQL Server Failover Cluster Instance
| Component | Windows Server 2008 | Windows Server 2008 R2 |
| --- | --- | --- |
| Database Engine | None. Provide a domain user account. | Provide a domain user account. |
| SQL Server Agent | None. Provide a domain user account. | Provide a domain user account. |
| SSAS | None. Provide a domain user account. | Provide a domain user account. |
| SSIS | NETWORK SERVICE | Virtual Account |
| SSRS | NETWORK SERVICE | Virtual Account |
| FD Launcher (Full-text Search) | LOCAL SERVICE | Virtual Account |
| SQL Server Browser | LOCAL SERVICE | LOCAL SERVICE |
| SQL Server VSS Writer | LOCAL SYSTEM | LOCAL SYSTEM |

SQL Service Accounts for SQL Components
Managed Service Accounts, Group Managed Service Accounts, and Virtual Accounts are three account types designed for running services with minimal administration.
In order to give essential programs like SQL Server their own accounts without requiring an administrator to manually manage the Service Principal Name (SPN) and credentials for these accounts, managed service accounts, group managed service accounts, and virtual accounts were created. The long-term maintenance of service account users, passwords, and SPNs is made simpler as a result.
Managed Service Accounts:
A Managed Service Account (MSA) is a special kind of domain account, established and administered by the domain controller, that is used to run a service and is connected to a single member computer. The password is automatically managed by the domain controller. An MSA can be used by a computer to start a Windows service, but it cannot be used to log into a computer. When granted read and write servicePrincipalName rights, an MSA can register a Service Principal Name (SPN) in Active Directory. A $ suffix, such as DOMAIN\ACCOUNTNAME$, identifies an MSA. The password box should be left empty when specifying an MSA. An MSA cannot be used on several nodes because it can only be used on one machine. Create the MSA in Active Directory before setting up SQL Server.
Group Managed Service Accounts:
Group Managed Service Accounts (gMSAs) are MSAs for multiple servers. Windows manages a service account for applications running across a number of hosts. Active Directory automatically changes the group managed service account password without having to restart any services. You can use a group managed service account principal to set up SQL Server services. Group managed service accounts are available for standalone instances starting with SQL Server 2014, and for failover cluster instances and availability groups starting with SQL Server 2016. To use a group managed service account for SQL Server 2014 or later, the operating system must be Windows Server 2012 R2 or later. KB 2998082 must be deployed on Windows Server 2012 R2 servers in order for services to log in without interruption following a password change. For further information, see Group Managed Service Accounts. Note: The domain administrator must create the group managed service account in Active Directory before SQL Server setup can utilize it for SQL Server services.
Virtual Accounts:
Introduced in Windows Server 2008 R2 and Windows 7, virtual accounts are managed local accounts that simplify service administration. A virtual account is managed automatically and can access the network in a domain environment. If the default option for service accounts is selected during SQL Server setup, a virtual account in the format NT SERVICE\&lt;SERVICENAME&gt; is used, with the instance name as the service name. To start SQL Server under a virtual account, leave the password blank. If the virtual account is unable to register the Service Principal Name (SPN) automatically, register it manually. For further details on manually registering an SPN, see Manual SPN Registration.
Virtual accounts cannot be utilized with SQL Server Failover Cluster Instances because they do not share the same SID across all cluster nodes. The table below gives some names of virtual accounts as examples.
| Service | Virtual Account Name |
| --- | --- |
| Default instance of the Database Engine service | NT SERVICE\MSSQLSERVER |
| Named instance of a Database Engine service named PAYROLL | NT SERVICE\MSSQL$PAYROLL |
| SQL Server Agent service on the default instance of SQL Server | NT SERVICE\SQLSERVERAGENT |
| SQL Server Agent service on an instance of SQL Server named PAYROLL | NT SERVICE\SQLAGENT$PAYROLL |

Virtual SQL Service Accounts
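To confirm which account each SQL Server service actually runs under after setup, you can query the sys.dm_server_services DMV (available from SQL Server 2008 R2 SP1 onward):

```sql
-- Shows each SQL Server service and the account it runs under
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;
```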
For further information on Managed Service Accounts and Virtual Accounts, see the Managed Service Accounts Frequently Asked Questions (FAQ) and the Managed Service Accounts concepts section of the Service Accounts Step-by-Step Guide.
Always run SQL Server services with the fewest user rights possible. When feasible, utilise an MSA, gMSA, or virtual account. When these are not available, utilise a dedicated low-privilege user account or domain account rather than a shared account for SQL Server services. Use separate accounts for the different SQL Server services. Do not grant any additional rights to the SQL Server service account or to the service groups. Where a service SID is supported, permissions are granted either directly to the service SID or through group membership.
The following page has further details on SPN Registration:
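For reference, manual SPN registration is usually done with the Windows setspn utility from an elevated command prompt; a sketch, with example host and account names:

```
REM Sketch: manually register SPNs for the Database Engine.
REM Run as a domain admin; host, port, and account names are examples.
setspn -S MSSQLSvc/SQLHOST01.abc.local:1433 ABC\sqlserviceacct
setspn -S MSSQLSvc/SQLHOST01.abc.local ABC\sqlserviceacct

REM List the SPNs currently registered for the account
setspn -L ABC\sqlserviceacct
```

The -S switch checks for duplicate SPNs before adding, which avoids the Kerberos failures that duplicates cause.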
The steps listed below can be used to generate an SSL certificate for SQL AlwaysOn enabled servers:
- Obtain a server certificate: The first step is to obtain a server certificate from a reputable certificate authority (CA), such as DigiCert or Comodo, or to create a self-signed certificate. To create a self-signed certificate, you can use a tool like OpenSSL. Self-signed certificates are not validated by a third party and are only accepted if the client adds them as an exception.
- Install the certificate: After obtaining the certificate, you must install it on the AlwaysOn availability group’s primary and secondary replicas. You can install the certificate on the local machine using the Microsoft Management Console (MMC) Certificates snap-in.
- Configure the primary replica: After installing the certificate, you must configure the primary replica to use it for SSL encryption. You can accomplish this by setting the “ForceEncryption” option for the SQL Server network protocol to “Yes” using SQL Server Configuration Manager.
- Add the certificate to the listener for the availability group: The certificate must now be added to the availability group listener. The SQL Server Management Studio (SSMS) can be used for this.
- Check the configuration: Next, make sure the certificate is set up correctly and that SSL encryption is being used. The “sys.dm_exec_connections” DMV can be used to determine whether SSL is enabled.
- Set up SQL Server: Open SQL Server Configuration Manager, expand “SQL Server Network Configuration,” right-click the protocols node for your instance (for example, “Protocols for SQLEXPRESS”), and choose “Properties.”
- Choose the certificate: On the “Certificate” tab, choose the certificate you want to use for SSL encryption.
- Restart SQL Server: In order for the modifications to take effect, SQL Server must finally be restarted.
It’s also crucial to test the certificate before deploying it to your production environment, to confirm that it is installed properly and that the configuration is correct.
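For a test environment, a self-signed certificate can be created with PowerShell instead of OpenSSL; a sketch, where the DNS name must match the FQDN clients use to reach the server (the names here are examples):

```powershell
# Sketch: self-signed certificate for TEST environments only.
# The DnsName must match the server's FQDN as seen by clients.
New-SelfSignedCertificate -DnsName "SQLHOST01.abc.local" `
    -CertStoreLocation "Cert:\LocalMachine\My" `
    -KeySpec KeyExchange `
    -FriendlyName "SQL Test SSL Cert"
```

The SQL Server service account also needs read access to the certificate’s private key, which you can grant from the MMC Certificates snap-in (All Tasks > Manage Private Keys).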
How to add the certificate to the availability group listener?
- Open the SQL Server Management Studio (SSMS) and establish a connection to the availability group’s primary replica.
- Expand the “Availability Groups” folder after navigating to the “Always On High Availability” folder.
- Select “Properties” by performing a right-click on the availability group listener.
- Go to the “Certificate” tab in the properties window.
- From the drop-down list, choose the certificate you wish to use for the listener.
- To save the changes, click “OK.”
- Use the T-SQL command below to check the listener’s status:
SELECT * FROM sys.availability_group_listeners
- For the modifications to take effect, restart the availability group listener.
Note: It’s crucial to remember that this phase needs to be completed on the primary replica, and the listener needs to be available and online. All of the replicas in the availability group should also have the certificate installed.
The certificate for the availability group listener can be created and configured using the New-AzSqlAvailabilityGroupListenerCert PowerShell command.
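Beyond `SELECT * FROM sys.availability_group_listeners`, joining to the listener IP DMV shows whether each listener IP address is actually online; for example:

```sql
-- Check listener configuration and IP state on the primary replica
SELECT agl.dns_name, agl.port, aglip.ip_address, aglip.state_desc
FROM sys.availability_group_listeners AS agl
JOIN sys.availability_group_listener_ip_addresses AS aglip
    ON agl.listener_id = aglip.listener_id;
```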
How can you tell if SQL Server successfully loaded the SSL certificate?
- The SQL Server Error Log will indicate that the certificate “ABCXYZ” has been loaded for encryption once you restart your SQL Server service, indicating that the certificate has been loaded.
- You can use the following query to see if the requests are being encrypted:
select * from sys.dm_exec_connections where encrypt_option = 'TRUE'
- To discover which computer is making encrypted requests, you can identify the client with the help of the “client_net_address” and “session_id” columns.
Note: Since encryption is forced on the server side, all requests are encrypted by default, so you should see that for all requests. To view every connection request, execute “select * from sys.dm_exec_connections.”
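Combining the columns mentioned above into one query makes it easy to see exactly which clients are connecting with encryption:

```sql
-- Identify which clients are connecting with encryption
SELECT session_id, client_net_address, encrypt_option, auth_scheme
FROM sys.dm_exec_connections
WHERE encrypt_option = 'TRUE';
```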
What if the SSL certificate is still not shown in SQL Server Configuration Manager?
- In SQL Server Configuration Manager, there is a chance that you won’t initially see the SSL certificate in the drop-down list for Certificate.
- When we viewed the certificate’s details in one of the cases, we discovered that the Subject was “abcxyz-corpau.abc.com,” while the Subject Alternative Name had different names.
- In our instance, the customer chose to use this certificate on numerous servers. SQL Server’s domain name is ABCXYZ-SQLServer.abc.local
- Now, the following conditions were not satisfied, hence we were unable to see the certificate in SQL Server Configuration Manager:
- The common name (CN) and host name or fully qualified domain name (FQDN) of the server computer must match, according to the Subject attribute of the certificate.
- Therefore, in our case, we proposed that the Certificate Authority update the Subject name from ABCXYZ-Corp.ABC.com to ABCXYZ-SQLServer.ABC.Local (FQDN of SQL Server).
- After making this modification, we loaded the certificate in MMC once more, and SQL Server Configuration Manager now displayed the loaded certificate.
Here is a quick approach to automating database migration across SQL Server versions using PowerShell and T-SQL scripting. Before moving on to a set of customised actions, I need to find several databases that can be migrated using a standard procedure, or at least a collection of standard activities. In these circumstances you may want to write general migration scripts and set up a system to automate their execution. A general method like this can be quite helpful if you are going to iterate through several test sessions and want to migrate a fresh copy before each one.
The following actions can be taken to simplify the migration process:
1) Back up the database to be migrated on the source instance.
2) Restore the database on the new (destination) instance.
3) Refresh the allocation usage.
4) Check database consistency with data purity.
5) Set the level of page verification.
6) Set the database compatibility level to match the version of the destination instance (e.g. 120).
7) Statistics should be updated.
8) Remove any orphaned schemas (schemas automatically created for each database user to maintain compatibility but with no objects belonging to them).
9) Fix Orphaned Users if any exists.
10) Rebuild or reorganise all indexes, based on the migration change window and the database sizes.
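Steps 3 through 7 (and the orphaned-user fix in step 9) map onto straightforward T-SQL; a sketch for a single database, where “DB1”, the compatibility level, and the user name are example values:

```sql
-- Sketch of the post-restore steps for one migrated database ("DB1" is an example)
USE [DB1];

-- 3) Refresh allocation usage
DBCC UPDATEUSAGE ([DB1]);

-- 4) Consistency check with data purity
DBCC CHECKDB ([DB1]) WITH DATA_PURITY;

-- 5) Set page verification
ALTER DATABASE [DB1] SET PAGE_VERIFY CHECKSUM;

-- 6) Compatibility level of the destination version (e.g. 120)
ALTER DATABASE [DB1] SET COMPATIBILITY_LEVEL = 120;

-- 7) Update statistics
EXEC sp_updatestats;

-- 9) Re-map an orphaned user to its login (example user name)
-- ALTER USER [AppUser] WITH LOGIN = [AppUser];
```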
Today I’m automating the first two points: database backup and restore. I still have to finish automating the other points, but I’m working on it. I’m hoping that by sharing this backup and restore PowerShell script, some DBAs may find it useful.
When you execute the backup and restore T-SQL commands through SMO, you can attach handlers that SMO calls back during execution, which provides straightforward progress notifications similar to the ones offered by SSMS or sqlcmd. Please remember that the completion handler does nothing more than display the error.
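Since the full script is linked rather than reproduced here, the following is a minimal sketch of that SMO callback pattern in PowerShell; the instance, database, and path names are examples, and it assumes the SMO assemblies are available (e.g. via the SqlServer module):

```powershell
Import-Module SqlServer   # loads the SMO assemblies

$server = New-Object Microsoft.SqlServer.Management.Smo.Server("SQLHOST01")
$backup = New-Object Microsoft.SqlServer.Management.Smo.Backup
$backup.Action   = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database
$backup.Database = "DB1"
$backup.Devices.AddDevice("C:\Backups\DB1.bak",
    [Microsoft.SqlServer.Management.Smo.DeviceType]::File)

# Progress handler: fires every 10 percent, like the SSMS progress display
$backup.PercentCompleteNotification = 10
$backup.add_PercentComplete({ param($sender, $e) Write-Host "$($e.Percent)% complete" })

# Completion handler: does nothing more than display the server message
$backup.add_Complete({ param($sender, $e) Write-Host $e.Error.Message })

$backup.SqlBackup($server)
```

The restore side follows the same pattern with the Microsoft.SqlServer.Management.Smo.Restore class and its SqlRestore method.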
A handful of the PowerShell script’s functions include Get-SQLInstance, DatabaseFullBackup, and RestoreDatabaseFromFullBackup.
Click the link below to open the PowerShell script that implements the backup/restore procedure for each database we want to migrate:
A few variables in the script, including the source instance name, destination instance name, backup path, DATA file path, LOG file path, and database names, will need to be updated.
The script will start taking database backups from the source instance and restoring them to the destination instance once all the parameters have been customised to your environment.
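The customisation amounts to assigning a handful of variables at the top of the script; the values below are hypothetical, for illustration only:

```powershell
# Example values only -- substitute your own environment's names and paths
$SourceInstanceName      = "SQLHOST01\PROD"
$DestinationInstanceName = "SQLHOST02\PROD"
$BackupPath              = "\\fileshare\SQLMigration\Backups"
$DataFilePath            = "D:\SQLData"
$LogFilePath             = "L:\SQLLogs"
$DatabasesNames          = @("DB1", "DB2", "DB3")
```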
My destination instance doesn’t contain any databases yet. Using the PowerShell script above, I’m backing up and restoring my “DB1,” “DB2,” and “DB3” databases. The script first backs up DB1 and, as soon as that backup completes, restores it to the destination instance; it then backs up DB2, restores it once that backup finishes, and so on.
The script completed the migration of the databases I listed in the databases field: all three were migrated successfully from the source instance to the destination instance. Now, to migrate your own databases, all you have to do is run the script in PowerShell ISE.