SQLTreeo Docs

SQLTreeo Docs is the home for documentation for end users, developers, and IT professionals. Check out our tutorials and code examples.

Learn how to use Microsoft SQL Server to address your database needs, both on-premises and in the cloud!

Indexes

Finding Unused Indexes

Situation

Good index management is key to success. Having too many unused indexes around slows down your system. It is not difficult to find them or to define the criteria for dropping them. Here is how you retrieve them:

-- Find Unused Index Script
-- Original Author: Pinal Dave (C) 2011
SELECT 
o.name AS TableName
, i.name AS IndexName
, dm_ius.user_seeks AS UserSeek
, dm_ius.user_scans AS UserScans
, dm_ius.user_lookups AS UserLookups
, dm_ius.user_updates AS UserUpdates
, p.TableRows
, dm_ius.last_user_scan
, dm_ius.last_user_seek
, dm_ius.last_user_update
, 'DROP INDEX ' + QUOTENAME(i.name)
+ ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement',
index_create_script.Statement AS 'Create Index Statement'
FROM sys.dm_db_index_usage_stats dm_ius
INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
FROM sys.partitions p GROUP BY p.index_id, p.OBJECT_ID) p
ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
INNER JOIN (
SELECT 'CREATE INDEX '+IndexName+' ON ' + TableName + ' ('+KeyCols+' ) '+CASE WHEN IncludeCols IS NOT NULL THEN ' INCLUDE ('+IncludeCols+' )' ELSE '' END AS Statement, IndexName
FROM (
SELECT
'[' + Sch.name + '].[' + Tab.[name] + ']' AS TableName,
Ind.Name AS IndexName, 
SUBSTRING(( SELECT ', ' + AC.name
FROM sys.[tables] AS T
INNER JOIN sys.[indexes] I ON T.[object_id] = I.[object_id]
INNER JOIN sys.[index_columns] IC ON I.[object_id] = IC.[object_id]
AND I.[index_id] = IC.[index_id]
INNER JOIN sys.[all_columns] AC ON T.[object_id] = AC.[object_id]
AND IC.[column_id] = AC.[column_id]
WHERE Ind.[object_id] = I.[object_id]
AND Ind.index_id = I.index_id
AND IC.is_included_column = 0
ORDER BY IC.key_ordinal 
FOR
XML PATH('') ), 2, 8000) AS KeyCols,
SUBSTRING(( SELECT ', ' + AC.name
FROM sys.[tables] AS T
INNER JOIN sys.[indexes] I ON T.[object_id] = I.[object_id]
INNER JOIN sys.[index_columns] IC ON I.[object_id] = IC.[object_id]
AND I.[index_id] = IC.[index_id]
INNER JOIN sys.[all_columns] AC ON T.[object_id] = AC.[object_id]
AND IC.[column_id] = AC.[column_id]
WHERE Ind.[object_id] = I.[object_id]
AND Ind.index_id = I.index_id
AND IC.is_included_column = 1
ORDER BY IC.key_ordinal 
FOR
XML PATH('') ), 2, 8000) AS IncludeCols
FROM sys.[indexes] Ind
INNER JOIN sys.[tables] AS Tab ON Tab.[object_id] = Ind.[object_id]
INNER JOIN sys.[schemas] AS Sch ON Sch.[schema_id] = Tab.[schema_id]
) index_create_script) index_create_script ON i.name = index_create_script.Indexname
WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID,'IsUserTable') = 1
AND dm_ius.database_id = DB_ID()
AND i.type_desc = 'nonclustered'
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND i.is_unique = 0
ORDER BY last_user_seek, last_user_scan

The WHERE clause contains a filter that only shows indexes for the current database, and only those that are nonclustered, non-unique, not a unique constraint, and not a primary key. You can adjust that for your situation.

Result

The result is a list of DROP and CREATE statements that you can copy and execute in your database. If you save these results in an Excel file and mark the indexes that you have actually dropped, you have a record of your index maintenance. If at some point you want the original index back, you can use the CREATE statement.

We especially pay attention to the usage columns (UserSeek, UserScans, UserLookups, UserUpdates). For example, if UserSeek, UserScans, and UserLookups are all 0 and your SQL Server has already been running for some time, we could drop those indexes. If UserSeek, UserScans, and UserLookups are low (< 1000) while UserUpdates is orders of magnitude higher, you could mark those indexes to be dropped the next time you do index maintenance. You will always have the CREATE statement in your Excel file. Save the file with the date and time of the maintenance you performed and you will build up a useful history. Files older than 6 months can be discarded.
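
Note that sys.dm_db_index_usage_stats is cleared when the instance restarts, so zero usage counters are only meaningful if the server has been running long enough. A quick way to check the uptime:

SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS days_running
FROM sys.dm_os_sys_info;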

About us

Over the years, SQLTreeo gained precious know-how which we transformed into customer support, managed database services and database solutions. To live up to our knowledge and experience, we work with nothing less than the best-trained staff; that is the key to our success and the biggest asset of our company. All our database management services are handled by highly skilled, senior database administrators. We can help you in every aspect of SQL Server.

Find Missing Indexes

Situation

SQL Server registers every missing index it would have used. Here is how you retrieve that information from the missing-index DMVs:

SELECT 'CREATE INDEX idx_DBO_' + REPLACE(REPLACE(REPLACE(statement,'[',''),']',''),'.','_') + CONVERT(VARCHAR, t1.index_handle) + 'E ON ' + statement + ' (' + equality_columns + ') '
       + CASE WHEN included_columns IS NOT NULL THEN 'INCLUDE (' + ISNULL(included_columns,'') + ') ' ELSE '' END AS [Equality],
       'CREATE INDEX idx_DBO_' + REPLACE(REPLACE(REPLACE(statement,'[',''),']',''),'.','_') + CONVERT(VARCHAR, t1.index_handle) + 'I ON ' + statement + ' (' + inequality_columns + ') '
       + CASE WHEN included_columns IS NOT NULL THEN 'INCLUDE (' + ISNULL(included_columns,'') + ') ' ELSE '' END AS [InEquality],
       DB_NAME(t1.database_id) AS database_name,
       t1.inequality_columns,
       t1.included_columns,
       t1.statement AS table_name,
       (SELECT MAX(rowcnt) FROM sysindexes WHERE id = t1.object_id AND status IN (0, 2066)) AS [rowcount],
       t2.user_seeks,
       t2.last_user_seek,
       t2.last_user_scan,
       t2.avg_total_user_cost,
       t2.avg_user_impact
FROM sys.dm_db_missing_index_details t1 WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups t0 WITH (NOLOCK) ON t1.index_handle = t0.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats t2 WITH (NOLOCK) ON t0.index_group_handle = t2.group_handle
WHERE database_id = DB_ID()
  AND user_seeks > 10
ORDER BY avg_user_impact DESC, user_seeks DESC, user_scans DESC, last_user_seek DESC

The WHERE clause contains a filter that only shows missing indexes for the current database, and only those that were requested more than 10 times (user_seeks > 10). You can change that however you want.

Result

The result is a list of CREATE statements that you can copy and execute in your database. The index names are auto-generated from the table name and the index handle; you can adjust that formula.

We especially pay attention to the usage statistics. For example, if the row count is very high, the user seeks are very low, and the last user seek was a week ago, we would of course not create that index. But if the row count is, for example, 200,000, the user seeks are above 1,000, and the last user seek is close to the current date, then yes.
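
When triaging a long list of suggestions, a commonly used heuristic is to rank them by a combined score of estimated cost, impact, and usage; a minimal sketch:

SELECT TOP (20)
       t1.statement AS table_name,
       t2.avg_total_user_cost * t2.avg_user_impact * (t2.user_seeks + t2.user_scans) AS improvement_measure,
       t2.user_seeks,
       t2.avg_user_impact
FROM sys.dm_db_missing_index_details t1
INNER JOIN sys.dm_db_missing_index_groups t0 ON t1.index_handle = t0.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats t2 ON t0.index_group_handle = t2.group_handle
WHERE t1.database_id = DB_ID()
ORDER BY improvement_measure DESC;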

C# - Read emails from exchange online mailbox (Office 365) into SQL Server

Situation

You have an Office 365 mailbox. You want to use C# to read those mails and save them to SQL Server.

Requirements

  • Office 365 mailbox
  • SQL Server
  • Exchange Web Services
  • C# (Visual Studio)

Exchange WebServices

Applies to: EWS Managed API | Exchange Online | Exchange Server 2013 | Office 365. There is one step you need to take: install Exchange Web Services (MIT License). Download the EWS Managed API via NuGet. This step is very easy and does not require more than a download/install via a NuGet package. This only has to be done on your development machine. Once you want to release your product for production, all you need to include is the Microsoft.Exchange.WebServices.dll and you're all set.

Implemented Solution

using System;
using Microsoft.Exchange.WebServices.Data;
using System.Data.SqlClient;

namespace ReadOffice365Mailbox
{
    class Program
    {
        static void Main(string[] args)
        {
            ExchangeService _service;

            try
            {
                Console.WriteLine("Registering Exchange connection");

                _service = new ExchangeService
                {
                    Credentials = new WebCredentials("yourmailbox@email.com", "*******")
                };
            }
            catch
            {
                Console.WriteLine("new ExchangeService failed. Press enter to exit:");
                return;
            }

            // This is the office365 webservice URL
            _service.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");

            // Prepare a separate class for writing email to the database
            try
            {
                Write2DB db = new Write2DB();

                Console.WriteLine("Reading mail");

                // Read 100 mails
                foreach (EmailMessage email in _service.FindItems(WellKnownFolderName.Inbox, new ItemView(100)))
                {
                    db.Save(email);

                }

                Console.WriteLine("Exiting");
            }
            catch (Exception e)
            {
                Console.WriteLine("An error has occured. \n:" + e.Message);
            }
        }
    }
}

...
using Microsoft.Exchange.WebServices.Data;
...

namespace ReadOffice365Mailbox
{
    class Program
    {
        static void Main(string[] args)
        {
            ExchangeService _service;
            _service = new ExchangeService
                {
                    Credentials = new WebCredentials("yourmailbox@email.com", "*******")
                };
...

Instantiate a new ExchangeService and provide your mailbox credentials.

            // This is the official office365 webservice URL
            _service.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");

            // Read 100 mails
            foreach (EmailMessage email in _service.FindItems(WellKnownFolderName.Inbox, new ItemView(100)))

...

Then, because you want to call a web service (Exchange Web Services, EWS), you provide the web service URL and call the function FindItems(). This call connects to your mailbox (if you've provided the correct credentials) and starts reading, in this case 100 mail headers. After that you can read all the basic email properties like subject, from, and to. Microsoft has implemented EWS as a lightweight API, so if you want to read more properties like HtmlBody or TextBody, you have to send an instruction to load that extra information with the following command:

           
       // Read 100 mails
       foreach (EmailMessage email in _service.FindItems(WellKnownFolderName.Inbox, new ItemView(100)))
       {
            email.Load(new PropertySet(BasePropertySet.FirstClassProperties, ItemSchema.TextBody));

            // Then you can retrieve extra information like this:
            string recipients = "";

            foreach (EmailAddress emailAddress in email.CcRecipients)
            {
                recipients += ";" + emailAddress.Address.ToString();
            }
            cmd.Parameters.AddWithValue("@message_id", email.InternetMessageId);
            cmd.Parameters.AddWithValue("@from", email.From.Address);
            cmd.Parameters.AddWithValue("@body", email.TextBody.ToString());
            cmd.Parameters.AddWithValue("@cc", recipients);
            cmd.Parameters.AddWithValue("@subject", email.Subject);
            cmd.Parameters.AddWithValue("@received_time", email.DateTimeReceived.ToUniversalTime().ToString());

            recipients = "";
            foreach (EmailAddress emailAddress in email.ToRecipients)
            {
                recipients += ";" + emailAddress.Address.ToString();
            }
            cmd.Parameters.AddWithValue("@to", recipients);
       }
...

Save your emails to the database

           
class Write2DB
    {
        SqlConnection conn = null;

        public Write2DB()
        {
            Console.WriteLine("Connecting to SQL Server");
            try
            {
                conn = new SqlConnection("Server=*******;DataBase=***********;uid=******;pwd=**********");
                conn.Open();
                Console.WriteLine("Connected");
            }
            catch (System.Data.SqlClient.SqlException)
            {
                throw; // preserve the original stack trace when re-throwing
            }
        }

        public void Save(EmailMessage email)
        {
            email.Load(new PropertySet(BasePropertySet.FirstClassProperties, ItemSchema.TextBody));

            SqlCommand cmd = new SqlCommand("dbo.usp_servicedesk_savemail", conn)
            {
                CommandType = System.Data.CommandType.StoredProcedure,
                CommandTimeout = 1500

            };

            string recipients = "";

            foreach (EmailAddress emailAddress in email.CcRecipients)
            {
                recipients += ";" + emailAddress.Address.ToString();
            }
            cmd.Parameters.AddWithValue("@message_id", email.InternetMessageId);
            cmd.Parameters.AddWithValue("@from", email.From.Address);
            cmd.Parameters.AddWithValue("@body", email.TextBody.ToString());
            cmd.Parameters.AddWithValue("@cc", recipients);
            cmd.Parameters.AddWithValue("@subject", email.Subject);
            cmd.Parameters.AddWithValue("@received_time", email.DateTimeReceived.ToUniversalTime().ToString());

            recipients = "";
            foreach (EmailAddress emailAddress in email.ToRecipients)
            {
                recipients += ";" + emailAddress.Address.ToString();
            }
            cmd.Parameters.AddWithValue("@to", recipients);

            // Execute the procedure
            cmd.ExecuteNonQuery();
        }
        ~Write2DB()
        {
            Console.WriteLine("Disconnecting from SQLServer");
        }
    }
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE procedure [dbo].[usp_servicedesk_savemail]  @message_id varchar(50),@to varchar(512), @from varchar(256), @cc varchar(256)=null,@subject varchar(256),  @body varchar(max), @received_time datetime
as
begin
	IF NOT EXISTS( SELECT 1 from mnit_email where param1 = @message_id )
		insert into mnit_email( created_datetime, type, [from], [to], cc, [subject], body, param1 )
		values( @received_time, 'receive_supportmail', @from, @to, @cc,  @subject, @body, @message_id)
end
GO
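
The stored procedure writes into a table named mnit_email, whose definition is not part of this article. A minimal sketch of what it could look like, inferred from the procedure's parameters (column sizes match the parameter types; the identity column is an assumption):

CREATE TABLE dbo.mnit_email
(
	id INT IDENTITY(1,1) PRIMARY KEY,
	created_datetime DATETIME NOT NULL,
	[type] VARCHAR(50) NOT NULL,
	[from] VARCHAR(256) NULL,
	[to] VARCHAR(512) NULL,
	cc VARCHAR(256) NULL,
	[subject] VARCHAR(256) NULL,
	body VARCHAR(MAX) NULL,
	param1 VARCHAR(50) NULL -- holds the internet message id used for duplicate detection
)
GO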

The complete Visual Studio solution can be downloaded here: ReadOffice365MailAndSaveToSQL

How to Extract last name or first name(s) from a combined field

Situation

You have a field that contains a display name (a combination of first name(s) and last name), for example 'Michael Jack Abbey', and you would like to split it into its first-name and last-name parts.

Problem

You don't know where the last name begins, but you do know that it is the last word in the field.

Solution

This is how you extract the information:

DECLARE @contacts TABLE( displayname NVARCHAR(512), firstname NVARCHAR(256) NULL, lastname nvarchar(256) NULL)
INSERT INTO @contacts( displayname )
VALUES 
('Michael Jack Abbey'),
('John Accrington'),
('Bob Dillan Addison'),
('Paul Ashfort Adling'),
('Helen Allsworth'),
('Tim Allsworth'),
('Neil Ambridge'),
('Ian Attwood'),
('Michael R.')
SELECT displayname, dbo.fn_extractname(displayname, 0 ) as firstname, dbo.fn_extractname(displayname, 1 ) as lastname from @contacts

dbo.fn_extractname(displayname, 0) is a function (see code below). The first parameter is the string containing the contact information; the second parameter defines which part you want to extract (0 = first name(s), 1 = last name).

REVERSE

Use the REVERSE() function. It does what its name implies!

CREATE FUNCTION dbo.fn_extractname( @string NVARCHAR(MAX), @last BIT = 0 )
RETURNS NVARCHAR(MAX)
AS
BEGIN
	DECLARE @returnstring NVARCHAR(MAX) = REVERSE( @string )
	DECLARE @charindex INT = CHARINDEX( ' ', @returnstring, 1 )

	IF( @charindex = 0 )
		RETURN @string
	
	RETURN REVERSE( SUBSTRING( @returnstring, CASE WHEN @last = 1 THEN 1 ELSE @charindex+1 END, CASE WHEN @last = 1 THEN @charindex-1 ELSE LEN(@string)-1 END))
END

Here is what a reversed string looks like: REVERSE('Michael Jack Abbey') returns 'yebbA kcaJ leahciM'. It doesn't look readable, but it does not have to be. We will reverse it back to normal once we're done. Some explanations:

	DECLARE @returnstring NVARCHAR(MAX) = REVERSE( @string )

Always reverse the string, because the last name is usually a single word while there can be several first names.

	DECLARE @charindex INT = CHARINDEX( ' ', @returnstring, 1 )

Find the first occurrence of a space, which is where the (reversed) last name ends. The last parameter of CHARINDEX is where to start searching; the first character is always at position 1.

	IF( @charindex = 0 )
		RETURN @string

If the name is just one word, or no space was found, return the original string.

	RETURN REVERSE( SUBSTRING( @returnstring, CASE WHEN @last = 1 THEN 1 ELSE @charindex+1 END, CASE WHEN @last = 1 THEN @charindex-1 ELSE LEN(@string)-1 END))

In the end, reverse the string again and return it.

SUBSTRING takes 3 parameters: the string you want to process, the position where you want to start (the first character is at position 1), and the length of the substring you want to retrieve. Example: SUBSTRING('Michael Jack Abbey', 9, 4) returns "Jack".

For the second parameter (where do we start), it depends on @last. If we want the last name, we start at the beginning of the reversed string (position 1); otherwise we start at the first found ' ' (space) + 1 (to skip the space).

For the third parameter (how many characters to return), it also depends on @last. If we want the last name, we stop at the first found ' ' (space) - 1 (to exclude the space); otherwise we take the remainder of the string (LEN(@string)). LEN() returns a larger number than the characters left in the string, but SUBSTRING simply stops at the end, so that is not an issue.
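
A minimal demo of the intermediate values for 'Michael Jack Abbey':

DECLARE @s NVARCHAR(512) = N'Michael Jack Abbey'
DECLARE @r NVARCHAR(512) = REVERSE(@s)                                      -- 'yebbA kcaJ leahciM'
SELECT
	CHARINDEX(' ', @r, 1) AS first_space,                               -- 6
	REVERSE(SUBSTRING(@r, 1, CHARINDEX(' ', @r, 1) - 1)) AS lastname,   -- 'Abbey'
	REVERSE(SUBSTRING(@r, CHARINDEX(' ', @r, 1) + 1, LEN(@s))) AS firstnames -- 'Michael Jack'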

SQL Server Scripts

The sample scripts provided here are not supported under any Microsoft standard support program or service. All scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

That being said, we do our utmost best to provide working scripts for you. SQL Server knowledge is required, though.

VBA script reads outlook mailbox

In one of our projects we needed to analyze email stored in an Outlook or Office 365 mailbox. We wrote this utility in VBA to do the job.

Download this macro-enabled spreadsheet: InboxCalculatorV3

Purpose

This VBA script reads an Outlook mailbox (you can specify which folder to process) into Excel for further pivoting.

Results

The results are written into Excel cells. You can adjust what is read and how it is written into Excel.

VBA script reads outlook mailbox

Simple and generic source code to connect to an Outlook mailbox (local or Office 365) and read mail.

The source code is free to use. Adjustments or improvements are always welcome. If you need support, just leave a comment on this post.

Find Partitions and the Row Count of Each Partition of a Partitioned Table

SELECT
DB_NAME() AS 'DatabaseName'
,OBJECT_NAME(p.OBJECT_ID) AS 'TableName'
,p.index_id AS 'IndexId'
,CASE
WHEN p.index_id = 0 THEN 'HEAP'
ELSE i.name
END AS 'IndexName'
,p.partition_number AS 'PartitionNumber'
,prv_left.value AS 'LowerBoundary'
,prv_right.value AS 'UpperBoundary'
,ps.name as PartitionScheme
,pf.name as PartitionFunction
,CASE
WHEN fg.name IS NULL THEN ds.name
ELSE fg.name
END AS 'FileGroupName'
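-- page counts below are in 8 KB pages; * 0.0078125 (= 8/1024) converts pages to MB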
,CAST(p.used_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'UsedPages_MB'
,CAST(p.in_row_data_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'DataPages_MB'
,CAST(p.reserved_page_count * 0.0078125 AS NUMERIC(18,2)) AS 'ReservedPages_MB'
,CASE
WHEN p.index_id IN (0,1) THEN p.row_count
ELSE 0
END AS 'RowCount'
,CASE
WHEN p.index_id IN (0,1) THEN 'data'
ELSE 'index'
END 'Type'
FROM sys.dm_db_partition_stats p
INNER JOIN sys.indexes i
ON i.OBJECT_ID = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.data_spaces ds
ON ds.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.partition_functions pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
LEFT OUTER JOIN sys.filegroups fg
ON fg.data_space_id = dds.data_space_id
LEFT OUTER JOIN sys.partition_range_values prv_right
ON prv_right.function_id = ps.function_id
AND prv_right.boundary_id = p.partition_number
LEFT OUTER JOIN sys.partition_range_values prv_left
ON prv_left.function_id = ps.function_id
AND prv_left.boundary_id = p.partition_number - 1
WHERE
OBJECTPROPERTY(p.OBJECT_ID, 'IsMSShipped') = 0
AND p.index_id IN (0,1)

Blocking Locks and Deadlocks

Situation

Your query is waiting for another query and the other query is doing nothing.

Blocking Locks

A blocking lock occurs when one lock causes another process to wait until the current process is entirely done with the resources.

Deadlocks

Consider the situation where one partially finished transaction must wait for another transaction to complete. At the same time, the other partially finished transaction must also wait for the original transaction to complete. This is called a deadlock: when each transaction is waiting for resources used by the other. When such a deadlock occurs, the database typically cancels one of the transactions.
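
To make this concrete, here is a minimal sketch of how two sessions can deadlock; dbo.t1 and dbo.t2 are hypothetical tables, and the two halves must run in two separate connections:

-- Session 1
BEGIN TRAN;
UPDATE dbo.t1 SET col = 1 WHERE id = 1;   -- locks a row in t1
-- ... meanwhile Session 2 runs:
--     BEGIN TRAN;
--     UPDATE dbo.t2 SET col = 1 WHERE id = 1;   -- locks a row in t2
UPDATE dbo.t2 SET col = 2 WHERE id = 1;   -- waits for Session 2
-- ... Session 2 then runs:
--     UPDATE dbo.t1 SET col = 2 WHERE id = 1;   -- waits for Session 1: deadlock
-- SQL Server picks one session as the deadlock victim and raises error 1205;
-- the other transaction completes.
COMMIT;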

Lock-Avoiding Design Strategies

There are a few design strategies that can reduce the occurrence of blocking locks and deadlocks:

  • Use clustered indexes on high-usage tables.
  • Avoid high-row-count SQL statements that can cause a table lock. For example, instead of inserting all rows from one table to another at once, put a single INSERT statement in a loop and insert one row (or a small batch) at a time (see the sketch after this list).
  • Break long transactions up into many shorter transactions. With SQL Server, you can use "bound connections" to control the execution sequence of the shorter transactions.
  • Make sure that UPDATE and DELETE statements use an existing index.
  • If you use nested transactions, be sure there are no commit or rollback conflicts.
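
A minimal sketch of the batching idea from the list above; dbo.BigTable, dbo.TargetTable, and the columns are hypothetical:

-- Copy rows in small batches instead of one huge INSERT ... SELECT,
-- so each iteration holds locks only briefly.
DECLARE @batch INT = 1000;
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable (id, payload)
    SELECT TOP (@batch) s.id, s.payload
    FROM dbo.BigTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id);

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to copy
END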
 

Problem

What most users do is kill the query that is blocking theirs. Of course this is very bad practice, but we also understand that it is not an uncommon thing to do. It would be better to look at your indexes, optimize the resource usage of your query, and automatically prevent this situation from happening.

Kill Blocking Locks and Deadlocks

The following script can be automated via SQL Server Agent jobs or executed manually to help you kill connections:

/*
	Functionality  
	=============
	Allocate and (optionally) kill blocking locks

	Applications
	============
	MSSQL

	PRERequisites
	=============
	- Set parameter @who_to_kill (can also be empty)

	VERSION HISTORY
	Version		Date		Author			Reason
	------------------------------------------------------------------
	4			18 dec 17	ak				kill statement replaced with select
	5			25 apr 18	dr				changed from sp_lock to sys.dm_tran_locks

	Parameters
	=======
	@wait_duration_threshold INT: (Milliseconds that a process is waiting)
					from -2,147,483,648 to 2,147,483,647
					only used in combination with @who_to_kill = 'DEADLOCK'

	@who_to_kill VARCHAR(10): (identify what process to analyze)
					'BLOCKER' - the process that blocks another process
					'WAITING' - the process that is waiting until the blocker finishes
					'BOTH' - both
					'NONE' (default) - do nothing
					'DEADLOCK' - Not a real deadlock but 2 processes who are waiting for eachother.
								 it takes into account the parameter @wait_duration_threshold to check for
								 how long either process is waiting for the other

	@showonly BIT: (show or show and kill the processes identified)
					1 - only show the processes that match @who_to_kill; do not execute KILL
					0 (default) - show and KILL the processes that match @who_to_kill

	Logging
	=======
	None

	Permissions
	=======
	if @showonly = 0
		ALTER ANY CONNECTION (inherited via sysadmin or processadmin)
        AND

	VIEW SERVER STATE (for sys.dm_tran_locks and sys.dm_os_waiting_tasks)

	Results
	=======
	result set with blocking locks
	executed KILL command if @showonly = 0
*/
SET NOCOUNT ON

DECLARE @who_to_kill VARCHAR(10) = 'NONE'
DECLARE @wait_duration_threshold INT = 1
DECLARE @showonly BIT = 0
IF (UPPER(ISNULL(@who_to_kill, '')) NOT IN ('BLOCKER', 'WAITING', 'BOTH', 'NONE', 'DEADLOCK'))
	RAISERROR('Wrong victim specification. Only ''BLOCKER'', ''WAITING'', ''BOTH'', ''NONE'', or ''DEADLOCK'' are allowed!', 16, 1)

--	Waiting sessions
IF (ISNULL(OBJECT_ID('tempdb.dbo.#locked_sessions'), 0) != 0)
	DROP TABLE #locked_sessions

SELECT
	IDENTITY(int, 1, 1) AS rownum,
	*
INTO
	#locked_sessions
FROM
	sys.dm_os_waiting_tasks AS wt
		RIGHT JOIN
		(
		SELECT
			l1.request_session_id AS blocking_spid,
			l2.request_session_id AS waiting_spid
		FROM
			sys.dm_tran_locks AS l1
			INNER JOIN
			sys.dm_tran_locks AS l2 ON 
				l1.resource_database_id = l2.resource_database_id AND
				l1.resource_associated_entity_id = l2.resource_associated_entity_id AND
				l1.request_type = l2.request_type AND
				l1.request_session_id != l2.request_session_id AND
				l1.request_status = 'GRANT' AND
				l2.request_status IN ('WAIT', 'CONVERT')
		) AS ws ON
			wt.blocking_session_id = ws.blocking_spid AND
			wt.session_id = ws.waiting_spid

--	Cursor to process dead-locked sessions
--
DECLARE 
	@blocking_spid INT,
	@waiting_spid INT,
	@wait_duration_ms_blocking INT,
	@wait_duration_ms_waiting INT,
	@sql_text VARCHAR(MAX) = ''

DECLARE sessions_cursor CURSOR FAST_FORWARD
FOR
	SELECT
		l1.blocking_spid,
		l1.waiting_spid,
		l1.wait_duration_ms AS wait_duration_ms_blocking,
		l2.wait_duration_ms AS wait_duration_ms_waiting
	FROM
		#locked_sessions AS l1
		--	This join is the filter which actually checks there is some mutual locking.
		--	If we remove it we will be processing "long held locks" which are not necessarily "dead".
		INNER JOIN
		#locked_sessions AS l2 ON 
			(l1.blocking_spid = l2.waiting_spid AND l1.waiting_spid = l2.blocking_spid AND l1.wait_duration_ms > l2.wait_duration_ms AND @who_to_kill = 'DEADLOCK')
			OR 
			(@who_to_kill != 'DEADLOCK' AND 1 = 1)
	WHERE
		l1.wait_duration_ms > @wait_duration_threshold OR
		l2.wait_duration_ms > @wait_duration_threshold


--	If we found any deadlocks, we start processing those by killing them both
OPEN sessions_cursor

FETCH NEXT FROM sessions_cursor INTO @blocking_spid, @waiting_spid, @wait_duration_ms_blocking, @wait_duration_ms_waiting

WHILE @@FETCH_STATUS = 0
BEGIN

	SELECT @sql_text = ISNULL('KILL ' +
		CASE @who_to_kill
		WHEN 'BLOCKER' THEN CAST(@blocking_spid AS varchar(10))
		WHEN 'WAITING' THEN CAST(@waiting_spid AS varchar(10))
		WHEN 'BOTH' THEN CAST(@blocking_spid AS varchar(10)) + CHAR(10) + 'KILL ' + CAST(@waiting_spid AS varchar(10))
		WHEN 'DEADLOCK' THEN CAST(@blocking_spid AS varchar(10))
		WHEN 'NONE' THEN NULL
		END,
		CONCAT('/* Blocking SPID: ',@blocking_spid,' for ',@wait_duration_ms_blocking,'ms*/') )

	IF( @showonly = 0 )
		EXEC(@sql_text)

	SELECT @sql_text

	FETCH NEXT FROM sessions_cursor INTO @blocking_spid, @waiting_spid, @wait_duration_ms_blocking, @wait_duration_ms_waiting

END

CLOSE sessions_cursor
DEALLOCATE sessions_cursor

Permissions

If @showonly = 0: ALTER ANY CONNECTION (inherited via sysadmin or processadmin) and VIEW SERVER STATE (for sys.dm_tran_locks and sys.dm_os_waiting_tasks). If @showonly = 1, NO connections will be killed.

Support

In case you are in need of SQL Server support on this subject or others, you can contact us at SQLTreeo via online chat or e-mail servicedesk@sqltreeo.com. We deliver 24x7x365 managed services and support.

Database Mirroring With Automatic Failover

In the Customer Interaction Center (CIC) software of Genesys it is possible to connect to a mirrored database that supports automatic failover. This article is not supported by Genesys and is to be used only in accordance with Genesys guidelines.

In the Additional Information textbox enter the following connection string:

SQL Server Native Client 10.0:

Driver={SQL Server Native Client 10.0};Database=I3_IC40;Server=[primary server FQDN\INSTANCE];Failover_Partner=[mirror server FQDN\INSTANCE];Trusted_Connection=No

SQL Server Native Client 11.0:

Driver={SQL Server Native Client 11.0};Database=I3_IC40;Server=[primary server FQDN\INSTANCE];Failover_Partner=[mirror server FQDN\INSTANCE];Trusted_Connection=No

Trusted_Connection can be No or Yes depending on your security guidelines.

Database can be any mirrored database. This is obligatory!

[primary server FQDN\INSTANCE] e.g.: server.domain.org or cicsql01.yahoo.com or cicsql01.yahoo.com\instance1

[mirror server FQDN\INSTANCE] e.g.: server.domain.org or cicsql01.yahoo.com or cicsql01.yahoo.com\instance1

After this, CIC makes a connection to the [primary server FQDN\INSTANCE]. If it then detects that the database is not the principal, a connection is established to the [mirror server FQDN\INSTANCE].

If a failover occurs during an established connection, a re-connect is done to the server that has the principal database available, and so on.

The connection timeout (default 15 s) in the connection string can be changed to speed up the connection. Do not set it lower than 5 seconds.
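
How the timeout is expressed depends on the provider; with a .NET SqlClient-style connection string, for example, the same setup with a 5-second timeout would look roughly like this (a sketch; server and database names are placeholders):

Server=[primary server FQDN\INSTANCE];Failover Partner=[mirror server FQDN\INSTANCE];Database=I3_IC40;Integrated Security=SSPI;Connection Timeout=5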

Availability Groups

Topic: Several scripts and best practices related to Availability Groups.

Create a Login in a SQL Server Availability Group

Situation

In SQL Server Availability Groups (AG), logins must have the same SID (security identifier) on all the nodes where they use the AG databases. An Availability Group consists of at least 2 different SQL Servers.

Problem

In case of a failover, select/insert/update/delete in that database will fail.

Explanation

A login (syslogins) has an equivalent record in the database's sysusers table. The sysusers table holds the security configuration (owner, read/write, execute, etc.) for that login. The connection between those two tables is the SID column.

USE [database]
GO
SELECT su.sid, su.name, sl.name,
       CASE WHEN sl.name IS NULL THEN 'SID is not equal' ELSE 'Ok' END AS [Check]
FROM dbo.sysusers su
LEFT OUTER JOIN dbo.syslogins sl ON su.sid = sl.sid
WHERE su.issqluser = 1
  AND su.name NOT IN ('guest', 'INFORMATION_SCHEMA', 'sys')

Create a login on SQL Server 1 and add that login as a user in an AG database. Then grant some rights to that user, e.g.: GRANT SELECT ON table TO user. That specific user, connected via its SID to a specific login with the same SID, has been granted those rights. After a failover takes place, that user is no longer connected to the same login because the SIDs differ. Thus, after a successful login, SQL Server is not able to connect that login to a user in the database and it cannot acquire any of the assigned rights.

The SID will not be the same (1)

If you create a login with Management Studio on SQL Server 1 and you do the same on SQL Server 2, you will have 2 different SIDs.

The SID will not be the same (2)

If you create a login with a Transact-SQL script on both servers, you will have 2 different SIDs.

USE [master]
GO
CREATE LOGIN [test] WITH PASSWORD=N'test', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO

Verify if you already have a problem

Check 1:

Basically, the results from both servers need to be the same.

USE [database]
GO
SELECT su.sid, su.name, sl.name,
       CASE WHEN sl.name IS NULL THEN 'SID is not equal' ELSE 'Ok' END AS [Check]
FROM dbo.sysusers su
LEFT OUTER JOIN dbo.syslogins sl ON su.sid = sl.sid
WHERE su.issqluser = 1
  AND su.name NOT IN ('guest', 'INFORMATION_SCHEMA', 'sys')

When you execute this on each server, in every database that is part of an AG, the results should basically be the same. "Basically" because one server can have more logins than the other if that is how it is configured, but the Check column should show 'Ok' in all cases. If you have orphaned users that are related to the login used for connecting to the AG database, you need to delete the login from the mirror and re-create it with the correct SID.

Check 2:

Check for orphaned users on the mirror server

Orphaned users are users in a database that cannot be linked to a login via their SID. sp_change_users_login helps you out.

USE [database]
GO
-- A Microsoft SQL Server standard system procedure
EXEC sp_change_users_login 'report'

If you have orphaned users that are related to the login used for connecting to the AG database, you need to delete the login and re-create it with the correct SID.
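
As an alternative to dropping and re-creating the login, an orphaned user can also be remapped to an existing login, which rewrites the user's SID to match the login. This only works on a writable (primary) replica; a minimal sketch, with placeholder names:

USE [database]
GO
-- remap the orphaned database user to the login of the same name
ALTER USER [test] WITH LOGIN = [test];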

 

How to create a Login in a SQL Server Availability Group

Solution

  1. Create a login on SQL Server 1 (it does not matter if it is the primary or secondary)
  2. Select the SID of the login
  3. Create a login on SQL Server 2 using that SID
  4. Create a user and give appropriate rights on the primary SQL Server (because the database is in read/write mode)
  5. Done
-- Execute on the primary or secondary server
USE [master]
GO
CREATE LOGIN test WITH PASSWORD = 'TesT123', CHECK_POLICY=OFF

-- Select the SID
SELECT sid, name FROM syslogins where name = 'test'

-- Execute on the SQL Server(s) where you want to create the login
USE [master]
GO
CREATE LOGIN test WITH  PASSWORD=N'TesT123', sid = 0xF0462AD0BEAC7C46B40D1D5D79C32386, CHECK_POLICY=OFF

Microsoft Azure stretch database

Purpose

  Information regarding the new feature in SQL Server 2016.

References

Microsoft documentation: https://docs.microsoft.com/en-us/

Contents

  • What is a stretch database?
  • When should you stretch a database?
  • Implementing a stretch database
  • Microsoft Azure stretch database security
  • Limitations in a stretch database
  • Monitoring a stretch database
  • Contact

What is a stretch database?

Stretching a database migrates your COLD data from your SQL Server to the Microsoft Azure cloud. Unlike typical COLD data that is offline, when you stretch a database your COLD data is still online and available for querying. A stretch database can be a solution for several common issues, if they are present. https://azure.microsoft.com/nl-nl/services/sql-server-stretch-database/

When should you stretch a database?

As mentioned, stretching a database can be a benefit in several scenarios. If any of the following situations apply, you can consider stretching your database to the Microsoft Azure cloud:

  • Keep cold/historical data available and online.
  • You require longer retention periods.
  • Massive tables that number in the millions/billions of rows.
  • Due to large table/database sizes, adding/switching storage is a common occurrence.
  • Maintenance jobs/tasks take longer due to the large size of databases and tables (e.g. creating indexes, index rebuilds, database integrity checks).
  • You want to save expenses on storage.
  • Your SLA requires you to have your data restored within a certain time limit, but due to large sizes it is impossible to achieve.

These are a few examples; any similar scenario can also indicate the possibility of stretching your database.

Implementing a stretch database

Before implementing a stretch database on SQL Server, there are certain prerequisites that need to be completed first:

  • SQL Server version 2016 or higher.
  • Microsoft Azure subscription.
  • Microsoft Azure account
  • Check limitations on tables (constraints, DML operations, Indexes, Properties)
  • Check limitations on datatypes (column types, keys)

The plans differ based on speed, starting from 100 DSU up to 2000 DSU; prices vary per DSU plan chosen.

After the above prerequisites are completed, you can continue to implement a stretch database on your SQL Server. When implementing a stretch database there are several factors to account for, as the initial setup might affect your production performance, and depending on table size it can take a considerable amount of time to complete.
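
Once the prerequisites are in place, enabling Stretch is done at the instance, database, and table level. A minimal sketch (server, credential, database, and table names are placeholders):

-- 1. Allow this instance to use Stretch
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO
-- 2. Enable Stretch for the database
ALTER DATABASE [YourDatabase]
SET REMOTE_DATA_ARCHIVE = ON (SERVER = N'yourserver.database.windows.net', CREDENTIAL = [YourAzureCredential]);
GO
-- 3. Enable Stretch (and start migration) for a cold table
ALTER TABLE dbo.YourColdTable
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));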

Microsoft Azure stretch database Security

When stretching a database to the Azure cloud, this is done only over a secure channel (from source, SQL Server, to destination, Azure cloud). Any sensitive column is converted to ciphertext, so no one will be able to retrieve the information. Security can be extended with Always Encrypted: data that is sent to the Azure cloud is encrypted client-side with keys that are created when configuring the stretch database, and when querying data that is in the cloud it is decrypted by the client driver. A key is used between SQL Server and the Azure cloud that is not visible in the database; this key is used to make the connection from SQL Server to the Azure cloud. https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine

Limitations in a stretch database

  Limitations for Stretch-enabled tables

  Stretch-enabled tables have the following limitations.

  Constraints

  • Uniqueness is not enforced for UNIQUE constraints and PRIMARY KEY constraints in the Azure table that contains the migrated data.

DML operations

  • You can't UPDATE or DELETE rows that have been migrated, or rows that are eligible for migration, in a Stretch-enabled table or in a view that includes Stretch-enabled tables.
  • You can't INSERT rows into a Stretch-enabled table on a linked server.

Indexes  

  • You can't create an index for a view that includes Stretch-enabled tables.
  • Filters on SQL Server indexes are not propagated to the remote table.
  Limitations that currently prevent you from enabling Stretch for a table

  The following items currently prevent you from enabling Stretch for a table.

  Table properties

  • Tables that have more than 1,023 columns or more than 998 indexes
  • FileTables or tables that contain FILESTREAM data
  • Tables that are replicated, or that are actively using Change Tracking or Change Data Capture
  • Memory-optimized tables

  Data types

  • text, ntext and image
  • timestamp
  • sql_variant
  • XML
  • CLR data types including geometry, geography, hierarchyid, and CLR user-defined types

  Column types

  • COLUMN_SET
  • Computed columns

  Constraints

  • Default constraints and check constraints
  • Foreign key constraints that reference the table. In a parent-child relationship (for example, Order and Order_Detail), you can enable Stretch for the child table (Order_Detail) but not for the parent table (Order).

  Indexes  

  • Full-text indexes
  • XML indexes
  • Spatial indexes
  • Indexed views that reference the table

Monitoring a stretch database

Monitoring your SQL Server to Azure stretched tables can be done via a built-in dashboard in SSMS. When opening the dashboard, you can find detailed information regarding status, e.g. how many rows are still in the on-premises database and how many have already been transferred to Azure.
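
Outside the dashboard, migration progress can also be queried directly in the Stretch-enabled database; a minimal sketch using the migration-status DMV:

SELECT *
FROM sys.dm_db_rda_migration_status;   -- one row per migration batch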

SQL Server Management Studio

SQL Server Management Studio (SSMS) is an integrated environment for managing any SQL infrastructure, from SQL Server to Azure SQL Database. SSMS provides tools to configure, monitor, and administer instances of SQL Server and databases. Use SSMS to deploy, monitor, and upgrade the data-tier components used by your applications, and build queries and scripts.

Use SSMS to query, design, and manage your databases and data warehouses, wherever they are - on your local computer, or in the cloud.

SSMS is free!

Download the latest SQL Server Management Studio here!

Enable dark-theme for SSMS

Purpose

Guide to enabling the dark theme in SQL Server Management Studio (SSMS).

References

www.sqlserver.info  

Contents

  • Before you start
  • Requirements
  • How to enable dark theme

Before you start

The dark theme is not officially supported by Microsoft; however, the code is already present. This option is only available in SSMS version 2016 or higher; any version below that does not support it. Since the option is not officially supported, the changes to the configuration file need to be made manually.

Requirements

  • SQL Server Management Studio (SSMS) needs to be closed during the process
  • SSMS version 2016/2017
  • Notepad++ (or any other text editor)
  • Administrator privilege
  • Access to the following path for SSMS 2016: C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio
  • Access to the following path for SSMS 2017: C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio

How to enable dark theme

  • Close SSMS
  • Open Notepad++ (or any other text editor)
  • Navigate to the file path that corresponds with your version
  • Open the file [ssms.pkgundef]
  • CTRL+F to find "// Remove Dark"
  • Add 2 slashes (//) in front of the key that starts with [$RootKey$\Themes\ (see the sketch after this list)
  • Save and close the file
  • Run SSMS
  • Navigate to Tools --> Options --> General
  • At Visual experience, select Color theme --> "Dark"
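
For reference, the relevant fragment of ssms.pkgundef looks roughly like this before the edit (the theme GUID may differ per SSMS version):

// Remove Dark theme
[$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]

and like this after commenting it out:

// Remove Dark theme
//[$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]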

If you need to copy a login without knowing the password

SELECT 'IF(SUSER_ID('+QUOTENAME(SL.name,'''')+') IS NULL)
BEGIN
CREATE LOGIN '+QUOTENAME(SL.name)+' WITH PASSWORD = '+CONVERT(NVARCHAR(MAX),SL.password_hash,1)+' HASHED, 
SID = '+CONVERT(NVARCHAR(MAX),SL.sid,1)+' END'
FROM sys.sql_logins AS SL
WHERE name NOT LIKE '##%##'
  AND name <> 'sa'
ORDER BY name
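
Each row of the result is a ready-to-run statement for the target server. The output looks roughly like this (the password hash and SID below are placeholders):

IF(SUSER_ID('mylogin') IS NULL)
BEGIN
CREATE LOGIN [mylogin] WITH PASSWORD = 0x0200A1B2C3... HASHED, 
SID = 0xF0462AD0BEAC7C46B40D1D5D79C32386 END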

Replication

Topic: Replication

Check for Undistributed Commands

Situation

When you have replication implemented, you need to check for problems in delivering data to the subscribers. If the data cannot be delivered to a subscriber, for any of many possible reasons, the undistributed commands will accumulate.

Checking the number of undistributed commands gives you a heads-up on that.
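
For a quick ad-hoc check, Microsoft also ships a standard procedure in the distribution database that reports pending commands per subscription; a minimal sketch, with placeholder names:

USE distribution
GO
EXEC sp_replmonitorsubscriptionpendingcmds
	@publisher = N'YourPublisherServer'
	,@publisher_db = N'YourPublishedDb'
	,@publication = N'YourPublication'
	,@subscriber = N'YourSubscriberServer'
	,@subscriber_db = N'YourSubscriberDb'
	,@subscription_type = 0;   -- 0 = push, 1 = pull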

Solution

The procedure and function below can be implemented on the distribution server. You can schedule the procedure in a job every five minutes and it will send out a mail informing you of the situation.

CREATE PROCEDURE [dbo].[CheckReplication_UndistributedCommands]
@CmdCountThreshold INT = 100000
,@SendMail BIT = 1
,@Recipients VARCHAR(50) = 'youremail@domain.com'
,@Subscribers VARCHAR(1000) = NULL
,@CopyRecipients VARCHAR(50) = NULL
AS
BEGIN

SET NOCOUNT ON;

IF( DB_ID('distribution') IS NULL )
RETURN

DECLARE
@publisher SYSNAME
,@publisher_id INT
,@publisher_db SYSNAME
,@publication SYSNAME
,@subscriber SYSNAME
,@subscriber_id INT
,@subscriber_db SYSNAME
,@subscription_type INT
,@agent_id INT
,@lastrunts TIMESTAMP
,@avg_rate FLOAT
,@xact_seqno VARBINARY(16)
,@cmd NVARCHAR(500);

DECLARE @t TABLE (
publisher SYSNAME
,publisher_id INT
,publisher_db SYSNAME
,publication SYSNAME
,subscriber SYSNAME
,subscriber_id INT
,subscriber_db SYSNAME
,subscription_type INT
,agent_id INT);

INSERT @t
SELECT DISTINCT
publisher.name publisher
,da.publisher_id
,p.publisher_db
,p.publication
,subscriber.name subscriber
,da.subscriber_id
,s.publisher_db
,da.subscription_type
,da.id
FROM distribution.dbo.MSdistribution_agents da with (nolock)
JOIN master.sys.servers publisher with (nolock)
ON publisher.server_id = da.publisher_id
JOIN master.sys.servers subscriber with (nolock)
ON subscriber.server_id = da.subscriber_id
AND
(
@Subscribers IS NULL
OR charindex(subscriber.[name], @Subscribers) > 0
)
JOIN distribution.dbo.MSpublications p with (nolock)
ON p.publication = da.publication
AND p.publisher_id = da.publisher_id
AND p.publisher_db = da.publisher_db
JOIN distribution.dbo.MSsubscriptions s with (nolock)
ON s.publication_id = p.publication_id
AND s.publisher_id = da.publisher_id;

DECLARE @tres TABLE (
publisher SYSNAME NULL
,publisher_db SYSNAME NULL
,publication SYSNAME NULL
,subscriber SYSNAME NULL
,subscriber_db SYSNAME NULL
,pendingcmdcount INT
,estimatedprocesstime INT);

WHILE EXISTS(select 1 from @t)
BEGIN
SELECT TOP 1
@publisher = publisher
,@publisher_id = publisher_id
,@publisher_db = publisher_db
,@publication = publication
,@subscriber = subscriber
,@subscriber_id = subscriber_id
,@subscriber_db = subscriber_db
,@subscription_type = subscription_type
,@agent_id = agent_id
FROM @t;

;WITH dist_sessions (start_time, runstatus, timestamp)
AS( SELECT
start_time
,max(runstatus)
,max(timestamp)
FROM distribution.dbo.MSdistribution_history with (nolock)
WHERE agent_id = @agent_id
AND runstatus in (2,3,4)
GROUP BY start_time)
SELECT
@lastrunts = max(timestamp)
FROM dist_sessions;

SELECT
@xact_seqno = xact_seqno
,@avg_rate = delivery_rate
FROM distribution.dbo.MSdistribution_history with (nolock)
WHERE agent_id = @agent_id
AND timestamp = @lastrunts;

SELECT @xact_seqno = isnull(@xact_seqno, 0x0), @avg_rate = isnull(@avg_rate, 0.0);

IF (@avg_rate = 0.0)
BEGIN
SELECT @avg_rate = isnull(avg(delivery_rate),0.0)
FROM distribution.dbo.MSdistribution_history
WHERE agent_id = @agent_id;
END;

SET @cmd = N'distribution.sys.sp_MSget_repl_commands
@agent_id = @agent_id
,@last_xact_seqno = @xact_seqno
,@get_count = 2
,@compatibility_level = 9000000';

INSERT @tres(pendingcmdcount)
EXEC sp_executesql
@cmd
,N'@agent_id int, @xact_seqno varbinary(16)'
,@agent_id = @agent_id, @xact_seqno = @xact_seqno;

UPDATE @tres
SET
publisher = @publisher
,publisher_db = @publisher_db
,publication = @publication
,subscriber = @subscriber
,subscriber_db = @subscriber_db
,estimatedprocesstime = CASE WHEN @avg_rate != 0.0
THEN CAST((CAST(pendingcmdcount as float) / @avg_rate) as int)
ELSE pendingcmdcount
END
WHERE publication IS NULL;

DELETE FROM @t
WHERE @publisher = publisher
AND @publisher_db = publisher_db
AND @publication = publication
AND @subscriber = subscriber
AND @subscriber_db = subscriber_db
AND @subscription_type = subscription_type;
END;

IF EXISTS(SELECT 1 FROM @tres WHERE pendingcmdcount >= @CmdCountThreshold)
BEGIN
IF @SendMail = 1
BEGIN
DECLARE
@subject VARCHAR(100) = 'Replication Latency (Undistributed Commands)'
,@body VARCHAR(1000)
,@computername VARCHAR(128) = @@SERVERNAME
,@tableHTML NVARCHAR(MAX)
,@importance VARCHAR(10);

SELECT
@subject = @computername + ' ' + @subject
,@importance='High';

SET @tableHTML =
N'<table border="1">' +
N'<tr>' +
N'<th>Publisher</th>' +
N'<th>Publisher DB</th>' +
N'<th>Publication</th>' +
N'<th>Subscriber</th>' +
N'<th>Subscriber DB</th>' +
N'<th>Pending CMD</th>' +
N'<th>Estimated Process Time</th>' +
isnull(CAST((
SELECT
[td] = publisher, ''
,[td] = publisher_db, ''
,[td] = publication, ''
,[td] = subscriber, ''
,[td] = subscriber_db, ''
,[td] = pendingcmdcount, ''
-- ,[td] = CAST(DATEPART(DAY, DATEADD(SECOND, estimatedprocesstime, '19000101'))-1 AS VARCHAR) + FORMAT(DATEADD(SECOND, estimatedprocesstime, '19000101'), '.HH:mm:ss')
,[td] = dbo.fn_timediff_formatted('1900-01-01', estimatedprocesstime)
FROM @tres
WHERE pendingcmdcount >= @CmdCountThreshold
ORDER BY 1, 2, 3, 4, 5
FOR XML PATH('tr'), TYPE
) AS NVARCHAR(MAX)), '') +
N'</tr></table>';

EXEC msdb.dbo.sp_send_dbmail
@recipients = @Recipients
,@copy_recipients = @CopyRecipients
,@subject = @subject
,@importance = @importance
,@body = @tableHTML
,@body_format = 'HTML';
END
ELSE
BEGIN
SELECT
publisher
,publisher_db
,publication
,subscriber
,subscriber_db
,pendingcmdcount
-- ,CAST(DATEPART(DAY, DATEADD(SECOND, estimatedprocesstime, '19000101'))-1 AS VARCHAR) + FORMAT(DATEADD(SECOND, estimatedprocesstime, '19000101'), '.HH:mm:ss') estimatedprocesstime
,dbo.fn_timediff_formatted('1900-01-01', estimatedprocesstime) AS estimatedprocesstime
FROM @tres
WHERE pendingcmdcount >= @CmdCountThreshold
ORDER BY 1, 2, 3, 4, 5;

-- [BEGIN]-FOR NAGIOS
DECLARE @cmdcount INT;

SELECT @cmdcount = MAX(pendingcmdcount)
FROM @tres;

RETURN @cmdcount;
-- [END]-FOR NAGIOS
END;
END;
END;
GO

 

You will also need the following function:

/****** Object: UserDefinedFunction [dbo].[fn_timediff_formatted] Script Date: 17-06-2019 16:01:59 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author: kde
-- Create date: 12/04/2019
-- Description: Displays time difference between two dates (in random order) in a formatted string
-- Sample call:
-- SELECT dbo.[fn_timediff_formatted](CURRENT_TIMESTAMP, CURRENT_TIMESTAMP-4.33)
-- =============================================
CREATE FUNCTION [dbo].[fn_timediff_formatted]
(
@d1 DATETIME,
@d2 DATETIME
)
RETURNS VARCHAR(50)
AS
BEGIN

DECLARE
@outputstr VARCHAR(50) = '',
@nbr_days INT,
@nbr_hours INT,
@nbr_mins INT,
@nbr_sec INT;

DECLARE @remaining_sec INT = ABS(DATEDIFF(SECOND, @d1, @d2)); -- order of @d1 and @d2 doesn't matter

-- number of days
IF @remaining_sec > 86400 -- 1 day
BEGIN
SET @nbr_days = @remaining_sec / 86400;
SET @remaining_sec = @remaining_sec - (@nbr_days * 86400);
END;
ELSE
SET @nbr_days = 0;

-- number of hours
IF @remaining_sec > 3600
BEGIN
SET @nbr_hours = @remaining_sec / 3600;
SET @remaining_sec = @remaining_sec - (@nbr_hours * 3600);
END;
ELSE
SET @nbr_hours = 0;

-- number of minutes
IF @remaining_sec > 60
BEGIN
SET @nbr_mins = @remaining_sec / 60;
SET @remaining_sec = @remaining_sec - (@nbr_mins * 60);
END;
ELSE
SET @nbr_mins = 0;

-- number of seconds
SET @nbr_sec = @remaining_sec;
SET @remaining_sec = @remaining_sec - @nbr_sec;

SET @outputstr = CASE
WHEN @nbr_days = 0
THEN
''
ELSE
CAST(@nbr_days AS VARCHAR) + 'd '
END + CASE
WHEN @nbr_hours = 0
THEN
''
ELSE
CAST(@nbr_hours AS VARCHAR) + 'h '
END + CASE
WHEN @nbr_mins = 0
THEN
''
ELSE
CAST(@nbr_mins AS VARCHAR) + 'm '
END + CAST(@nbr_sec AS VARCHAR) + 's ';

RETURN @outputstr;

END;
GO
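
With both objects in place, you can schedule the check in a SQL Server Agent job or run it manually; a minimal sketch of a manual call (the recipient address is a placeholder):

EXEC dbo.CheckReplication_UndistributedCommands
	@CmdCountThreshold = 100000
	,@SendMail = 1
	,@Recipients = 'dba@yourdomain.com';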


Backups

Topic: SQL Server Backups

Show the backup history of all databases

Show the backup history of all databases on the server.


SELECT
    T1.name
    ,T3_full.full_backup_start_date
    ,T3_full.full_backup_finish_date
    ,T3_full.full_Duration
    ,T3_full.full_backup_size
    ,T3_full.full_physical_device_name
    ,T3_diff.diff_backup_start_date
    ,T3_diff.diff_backup_finish_date
    ,T3_diff.diff_Duration
    ,T3_diff.diff_backup_size
    ,T3_diff.diff_physical_device_name
    ,T3_log.log_backup_start_date
    ,T3_log.log_backup_finish_date
    ,T3_log.log_Duration
    ,T3_log.log_backup_size
    ,T3_log.log_physical_device_name
FROM master..sysdatabases T1
LEFT OUTER JOIN
(
    SELECT database_name
        ,MAX(full_backup_start_date) AS full_backup_start_date
        ,MAX(full_backup_finish_date) AS full_backup_finish_date
        ,MAX(diff_backup_start_date) AS diff_backup_start_date
        ,MAX(diff_backup_finish_date) AS diff_backup_finish_date
        ,MAX(log_backup_start_date) AS log_backup_start_date
        ,MAX(log_backup_finish_date) AS log_backup_finish_date
    FROM
    (
        SELECT msdb.dbo.backupset.database_name
            ,CASE WHEN msdb.dbo.backupset.type = 'D' THEN MAX(msdb.dbo.backupset.backup_start_date) ELSE NULL END AS full_backup_start_date
            ,CASE WHEN msdb.dbo.backupset.type = 'D' THEN MAX(msdb.dbo.backupset.backup_finish_date) ELSE NULL END AS full_backup_finish_date
            ,CASE WHEN msdb.dbo.backupset.type = 'I' THEN MAX(msdb.dbo.backupset.backup_start_date) ELSE NULL END AS diff_backup_start_date
            ,CASE WHEN msdb.dbo.backupset.type = 'I' THEN MAX(msdb.dbo.backupset.backup_finish_date) ELSE NULL END AS diff_backup_finish_date
            ,CASE WHEN msdb.dbo.backupset.type = 'L' THEN MAX(msdb.dbo.backupset.backup_start_date) ELSE NULL END AS log_backup_start_date
            ,CASE WHEN msdb.dbo.backupset.type = 'L' THEN MAX(msdb.dbo.backupset.backup_finish_date) ELSE NULL END AS log_backup_finish_date
        FROM msdb.dbo.backupset
        GROUP BY msdb.dbo.backupset.database_name, msdb.dbo.backupset.type
    ) max_date_subset
    GROUP BY database_name
) T2 ON T1.name = T2.database_name
LEFT OUTER JOIN
(
    SELECT msdb.dbo.backupset.database_name
        ,msdb.dbo.backupset.backup_start_date AS full_backup_start_date
        ,msdb.dbo.backupset.backup_finish_date AS full_backup_finish_date
        ,DATEDIFF(second, msdb.dbo.backupset.backup_start_date, msdb.dbo.backupset.backup_finish_date) AS full_Duration
        ,msdb.dbo.backupset.backup_size AS full_backup_size
        ,msdb.dbo.backupmediafamily.physical_device_name AS full_physical_device_name
    FROM msdb.dbo.backupmediafamily
    INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
) T3_full ON T2.database_name = T3_full.database_name
    AND T2.full_backup_start_date = T3_full.full_backup_start_date
    AND T2.full_backup_finish_date = T3_full.full_backup_finish_date
LEFT OUTER JOIN
(
    SELECT msdb.dbo.backupset.database_name
        ,msdb.dbo.backupset.backup_start_date AS diff_backup_start_date
        ,msdb.dbo.backupset.backup_finish_date AS diff_backup_finish_date
        ,DATEDIFF(second, msdb.dbo.backupset.backup_start_date, msdb.dbo.backupset.backup_finish_date) AS diff_Duration
        ,msdb.dbo.backupset.backup_size AS diff_backup_size
        ,msdb.dbo.backupmediafamily.physical_device_name AS diff_physical_device_name
    FROM msdb.dbo.backupmediafamily
    INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
) T3_diff ON T2.database_name = T3_diff.database_name
    AND T2.diff_backup_start_date = T3_diff.diff_backup_start_date
    AND T2.diff_backup_finish_date = T3_diff.diff_backup_finish_date
LEFT OUTER JOIN
(
    SELECT msdb.dbo.backupset.database_name
        ,msdb.dbo.backupset.backup_start_date AS log_backup_start_date
        ,msdb.dbo.backupset.backup_finish_date AS log_backup_finish_date
        ,DATEDIFF(second, msdb.dbo.backupset.backup_start_date, msdb.dbo.backupset.backup_finish_date) AS log_Duration
        ,msdb.dbo.backupset.backup_size AS log_backup_size
        ,msdb.dbo.backupmediafamily.physical_device_name AS log_physical_device_name
    FROM msdb.dbo.backupmediafamily
    INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
) T3_log ON T2.database_name = T3_log.database_name
    AND T2.log_backup_start_date = T3_log.log_backup_start_date
    AND T2.log_backup_finish_date = T3_log.log_backup_finish_date

Frequently Asked Questions

07 Aug 2019 0 minutes to read Contributors

Which SSMS versions is the SQLTreeo SSMS AddIn compatible with

The SQLTreeo SSMS AddIn is compatible with every SSMS version from SSMS 2005 onwards. Every time a new SSMS version comes out, SQLTreeo releases a new AddIn version. New versions of the SSMS AddIn may introduce functionality that is not backported to previous versions.

Can I use SQLTreeo on different installations of SSMS

Yes, SQLTreeo is available for all SSMS versions, from SSMS 2005 up to the latest version. If multiple SSMS versions are installed on your machine, the SQLTreeo installer takes care of integrating with all available SSMS installations. If you add a new SSMS installation afterwards, just reinstall SQLTreeo.

Can I use one license on multiple/different machines

Yes, you can! Deactivate the license in MySQLTreeo and use it on your other machine.

Where can I find the SQLTreeo log files

The SQLTreeo Add-in generates log files for troubleshooting in case an issue or error arises. The log files are located at: C:\Users\%user%\SQLTreeo\Logs

Can I add a Compare license to an existing license

Yes!

Do you have any volume licensing pricing packages

Yes, we offer volume pricing packages. Please note that any volume pricing is single-purchase-based; previously purchased or planned purchases of licenses are not considered.

STEPS

  1. Please send your request with details of requested volume to sales@sqltreeo.com for more information.

Can I get an invoice with our PO number

Yes. When you log in to www.sqltreeo.com, go to your license or purchase history and select the license(s) you purchased. There you can generate your invoice.

Can I get an offer? I need this for creating a PO number

You can create the offer in the purchase screen. Press the "Create a quote" button to customize your offer. You can enter your PO number or any other reference. You can send it via email or download the offer directly. Full self-service.

Do I have to buy new licenses to update to the latest SQLTreeo version

No, you buy a time-based license. During that period you can always upgrade without any extra charge.

Where can I download the latest versions

To download the latest version of any of our products:

Go to https://www.sqltreeo.com/downloads.

Can I assign specific rights to users

SQLTreeo Monitoring allows administrators to assign roles and corresponding rights to users or usergroups. There is no limit to the number of users assigned.

STEPS

  1. Log in to your Monitoring environment. You will see the Dashboard of your environment.
  2. Click on "Configuration" on the left.
  3. Click on "Users" (bottom left).
    1. Here you see all users assigned to your environment.
    2. To change the rights per user, click on the grey button on the right. You can choose Edit, Disable, Remove or Impersonate.
    3. Select "Edit" to adjust user rights.
    4. Follow step 6.
  4. Click on "Add user".
  5. Fill in the user details.
  6. Select a usergroup. Usergroups are predefined rights on actions.
    • If you want to create your own usergroup, click on the arrow next to "Users".
    • Select "User groups".
    • Select "Add user group".
    • Complete the fields.
    • Click "Save".
    • Click on "Access Rights".
      • Now you can choose which rights are aligned with specific actions.
  7. Click "Save".

There is a short video: “How to assign rights to users”.

Where can we download the SSMS AddIn manual to set up dynamic folders

The dynamic folders functionality is integrated into the SSMS AddIn and is available to everyone using SSMS 2016 or higher and running SQLTreeo V0.8.6 or higher.

There is no separate manual or documentation for dynamic folders; the process is very easy.

What is SQLTreeo Monitoring Software

SQLTreeo Monitoring is a cloud-based software solution provided as SaaS; what you see is what you get. No long installations and configurations: all you need to do is install an agent in your environment and add your SQL Servers, that’s it. With SQLTreeo Monitoring you have instant access to all relevant information to keep your MSSQL server healthy and performing at its best. It is designed for DBAs, by DBAs! Drilling into the details of job or backup histories cuts down the time you spend searching for answers! Why not try it yourself for 14 days? Watch the YouTube video here!

What is the added value of SQLTreeo Monitoring compared to SSMS

Your MSSQL database has all the information available for you to optimize performance and solve incidents. However, SSMS is simply too limited to show that information, and retrieving the information you need via SSMS is time-consuming and can require several manual interactions (reading error logs, checking the job activity monitor to see if your jobs have failed or succeeded, writing long scripts or checking database by database whether your backups have completed successfully, etc.). Via SQLTreeo Monitoring you see all the important information you need on a daily basis. No need for manual interactions anymore: just open the dashboards and you will have the information you want to see within seconds. All this information is displayed in an intuitive way, using hovering and drill-down techniques, and the use of colors and icons in dashboards.

Does SQLTreeo Monitoring store sensitive data

SQLTreeo Monitoring does not store any company-sensitive or user-related data. The agent “only“ executes a number of queries at given intervals in your environment, polling your server every few seconds to see if there have been any changes since the last poll. The information retrieved from system tables is sent to your browser (interface). Sensitive data is neither stored, replicated, nor sent. All the data is only accessible within your environment. So, in one sentence: SQLTreeo enables you to access all the SQL Server information you need to discover bottlenecks, create awareness of server health, and be notified in case something goes wrong.

Does SQLTreeo Monitoring take backups

SQLTreeo Monitoring does not take backups; that is your responsibility. What SQLTreeo does show is the history and status of backups per server. It also allows you to drill down into detailed data about a backup based on time intervals you select. This provides insight into the status of any backup.

What types of dashboard does SQLTreeo Monitoring provide

SQLTreeo has several types of dashboards, each specific in its purpose. The dashboards are topic-related.

  • Server dashboard, which shows the status of your server. In case something is wrong it will be displayed as such (green/orange/red), with an indication of why your server is in that state.
  • Jobs dashboards provide all the information you need for checking execution history, run times, status, disabled jobs, retries and such.
  • Backups dashboards give insight into server backups per database. They show - in a very short time - the status of your backup plans, failed or outdated backups and missing log backups, and let you easily filter out the root cause of failures and create backups to fix them.
  • Alerting dashboards
  • Reporting dashboards
  • Replication dashboards

What about alerts in SQLTreeo Monitoring

Alerts are set up by default, but with the many thresholds provided you simply click and select the alerts and thresholds you need to monitor your MS SQL Server. Thresholds are created to help you monitor your environment. Each DBA knows there are a few things that need to be taken care of: make sure backups are made, disks have enough free space, there are no blocking queries or long-running transactions, watch the log usage of your database, jobs that failed, and so on.

Thresholds can be configured with conditions. SQLTreeo has created many thresholds which are close to what a DBA monitors daily, and you have SQL Server performance counters to target specific areas of your server for monitoring. Values indicate at which point a condition is triggered. For example, a disk is flagged CRITICAL at more than 90% utilization; between 50% and 90%, and only if it stays in that state for more than X minutes, a warning is sent; once utilization drops below 50%, the status is set to ‘clear’.

Once a threshold is triggered it will be shown in your dashboard, a notification sounds on your interface, and an e-mail is sent to the users subscribed to that servergroup/environment/usergroup/action. When clicking on the red icon you immediately see the reason and you can take action. This saves you a lot of time: instead of getting a call from someone saying the disks are full and having to figure out which server, which disk and what the root cause is (error log), then going to the server to remove unneeded files, SQLTreeo Monitoring takes only seconds, and you avoid that call altogether because you know when an issue is going to occur.
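As an illustration of the kind of check such a threshold performs, here is a minimal T-SQL sketch that classifies disk utilization per volume. The 90%/50% boundaries are the illustrative values from the example above, not SQLTreeo’s actual implementation:

-- Classify disk utilization per volume (illustrative thresholds)
SELECT DISTINCT
    vs.volume_mount_point
    ,CAST(vs.total_bytes / 1048576.0 AS decimal(18, 0)) AS total_mb
    ,CAST(vs.available_bytes / 1048576.0 AS decimal(18, 0)) AS free_mb
    ,CASE
        WHEN 100.0 * (vs.total_bytes - vs.available_bytes) / vs.total_bytes > 90 THEN 'CRITICAL'
        WHEN 100.0 * (vs.total_bytes - vs.available_bytes) / vs.total_bytes > 50 THEN 'WARNING'
        ELSE 'CLEAR'
     END AS threshold_status
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs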

What do I need to install to work with SQLTreeo Monitoring

SQLTreeo Monitoring is a cloud-based monitoring solution, which implies no programming. The only thing you need to do is download and install the SQLTreeo Agent, which acts as a broker between SQL Server and our software. Here is the YouTube video showing how to do it!

What limitations are there for SQLTreeo Monitoring Users

SQLTreeo Monitoring supports an unlimited number of users. You, the SQLTreeo Monitoring administrator, can assign roles to users and invite them to look at the live monitoring of a specific database. You can consult a colleague, for example for a second opinion or comments. The integrated chat function allows you to communicate instantly without being in the same location. You can assign users specific permissions for the actions they can perform (show, add, delete, modify) on all the screens in your interface; if you don’t want testers to make changes to the production environment, or to have any insight into it, you can show them only what they need to see.

Want to know how to add users and assign roles in SQLTreeo Monitoring? Watch the video here.

Which security standards does SQLTreeo follow

SQLTreeo monitoring security standards are documented here.

Why did you develop your own monitoring solution

Some of our DBA friends have asked us why we spent time and resources on developing this SQL Monitoring Platform, because there are already so many. Well, we thought we would share our answer with you as well: there isn’t any that is 100% dedicated to the needs of an MSSQL DBA for a reasonable price! Back in the days when all our energy went into managing databases for clients - before we created our own products - we used other brands’ tools and software to monitor and inform us. But it frustrated us as DBAs that none of the monitoring solutions we looked at seemed to be developed by database administrators. What we missed, for example, was easy access to in-depth real-time data and the ability to understand in detail what happened with a failing job. That is one of many specific features that DBAs need. Of course we tried every other solution that was available, but none filled these needs. We simply could not find a suitable, affordable solution which offered what we, as database administrators, were looking for! Most renowned platforms are heavy, on-premise and enterprise-licensed, with extreme license and implementation costs. But above all, not dedicated to database administrators.

Why is SQLTreeo Monitoring so attractively priced

THE SHORT ANSWER IS: WE ARE NOT GREEDY!

We really think that cloud based database monitoring licensing is the opposite of "old school" on-premise licensing.

  • Our software license model is based on the number of your MSSQL instances;
  • Our base module is extremely feature-rich;
  • Our two additional modules are for specific use and are fixed-price;
  • Our support is 365 days a year and free of charge;
  • Our license model is based on instant purchase; no contracts or investments required.

Contact sales@sqltreeo.com if you require more information.

SQL Server Technologies

07 Aug 2019 0 minutes to read Contributors

SQL Server is a central part of the Microsoft data platform. SQL Server is an industry leader in operational database management systems (ODBMS). 

SQLTreeo is an expert in SQL Server technologies. We provide our expert knowledge and experience ad hoc, project-based, SaaS-based, PaaS-based and via Managed Services.

Database Engine

The Database Engine is the core service for storing, processing, and securing data. The Database Engine provides controlled access and rapid transaction processing to meet the requirements of the most demanding data consuming applications within your enterprise. The Database Engine also provides rich support for sustaining high availability.

Machine Learning Services

Machine Learning Services gives the ability to run Python and R scripts with relational data. You can use open source and Microsoft packages for predictive analytics and machine learning. The scripts are executed in-database without moving data outside SQL Server or over the network.

Integration Services

Integration Services is a platform for building high performance data integration solutions, including packages that provide extract, transform, and load (ETL) processing for data warehousing.

Analysis Services

Analysis Services is an analytical data platform and toolset for personal, team, and corporate business intelligence. Servers and client designers support traditional OLAP solutions, new tabular modeling solutions, as well as self-service analytics and collaboration using Power Pivot, Excel, and a SharePoint Server environment. Analysis Services also includes Data Mining so that you can uncover the patterns and relationships hidden inside large volumes of data.

Reporting Services

Reporting Services delivers enterprise, Web-enabled reporting functionality. You can create reports that draw content from a variety of data sources, publish reports in various formats, and centrally manage security and subscriptions.

Replication

Replication is a set of technologies for copying and distributing data and database objects from one database to another, and then synchronizing between databases to maintain consistency. By using replication, you can distribute data to different locations and to remote or mobile users by means of local and wide area networks, dial-up connections, wireless connections, and the Internet.

Data Quality Services

SQL Server Data Quality Services (DQS) provides you with a knowledge-driven data cleansing solution. DQS enables you to build a knowledge base, and then use that knowledge base to perform data correction and deduplication on your data, using both computer-assisted and interactive means. You can use cloud-based reference data services, and you can build a data management solution that integrates DQS with SQL Server Integration Services and Master Data Services.

Master Data Services

Master Data Services is the SQL Server solution for master data management. A solution built on Master Data Services helps ensure that reporting and analysis is based on the right information. Using Master Data Services, you create a central repository for your master data and maintain an auditable, securable record of that data as it changes over time.

Source: https://docs.microsoft.com/en-us/sql/sql-server/sql-server-technical-documentation

Microsoft SQL Server Trace Flags

07 Aug 2019 0 minutes to read Contributors

⚠ REMEMBER: Be extremely careful with trace flags; test in your development environment first. And consult professionals first if you are at all uncertain about the effects of your changes.

⚠ Some trace flags were introduced in specific SQL Server versions. For more information on the applicable version, see the Microsoft Support article associated with a specific trace flag.

⚠ Trace flag behavior may not be supported in future releases of SQL Server.

 

https://github.com/ktaranov/sqlserver-kit/blob/master/SQL%20Server%20Trace%20Flag.md
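As a quick reference, this is how you list the trace flags that are currently active and enable or disable one globally; trace flag 3226 (which suppresses successful-backup messages in the error log) is used here purely as an illustration:

-- List all trace flags currently enabled on the instance
DBCC TRACESTATUS(-1)

-- Enable a trace flag globally (-1); this does not persist across a restart
-- unless it is also added as a -T startup parameter
DBCC TRACEON(3226, -1)

-- Disable it again
DBCC TRACEOFF(3226, -1)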

Installation Best Practices

07 Aug 2019 0 minutes to read Contributors

Document history

Date             | Version | Reason
24 August 2017   | 0.1     | Initial version
7 September 2017 | 0.2     | Added Windows cluster settings and SQL Server custom configurations
30 June 2019     | 0.3     | Added tabular prerequisites; updated best practices

Installation Prerequisites

07 Aug 2019 0 minutes to read Contributors

Hardware

Component   | Description            | Setup as                                                  | Checked / Present
RAID        | Offers fault tolerance | RAID1(0)                                                  |
CACHE       | BBWC                   | Enabled                                                   |
CACHE       | Write cache            | Enabled                                                   |
CACHE       | Read/write cache ratio | 60/40                                                     |
DISKS (SAS) | Operating system       | 10K 72GB - RAID1 minimum; 15K 146GB - RAID1 recommended   |
DISKS (SAS) | Backup disk            | 10K - any size - RAID1 to hold backups                    |
DISKS (SSD) | SQL Server data files  | 300GB - RAID1 - 64K preformatted 100%                     |
MEMORY      | Memory size            | 8GB minimum; 32GB recommended                             |
ILO         | Open ILO console       | Enabled                                                   |
Teaming     | NIC interface bundle   | Enabled / failover / load balanced                        |

Software

OS

Component | Description               | Setup as                                                              | Checked / Present
OS        | Windows version           | Windows Server 2012 Standard                                          |
OS        | Service Pack              | Latest                                                                |
Volumes   | OS                        | C:\ (72GB on SAS)                                                     |
Volumes   | Page file                 | X:\ (1.5 x RAM, on SSD)                                               |
Volumes   | SQL Server tempdb         | N:\ (146GB on SSD)                                                    |
Volumes   | Database log files (LDF)  | T:\ (146GB on SSD)                                                    |
Volumes   | Database files (MDF)      | I:\ (300GB on SSD)                                                    |
Volumes   | Backup files              | Z:\ (any size on SAS, enough to hold backups for a 2-week retention)  |
Security  | Firewall                  | Off                                                                   |
Security  | Antivirus                 | Off; if on, exclude all SQL Server disks                              |
Mounts    | ISO mounting tool         | Any tool to mount ISOs                                                |

 

Domain

AD

Component       | Description                                                                                                   | Setup as          | Checked / Present
Service account | SQL Server Agent Service: user account to run the SQL Agent service                                          | DOMAIN\sqlagent   |
Service account | SQL Server Service: user account to run the SQL Server service                                               | DOMAIN\sqlservice |
Domain groups   | SQL Server DBAs: AD global group to host DBA administrators (service accounts should be members of this group) | DOMAIN\GROUP    |
Domain groups   | SQL Server users: AD global group to host DBA users                                                          | DOMAIN\GROUP      |
Name resolution | Hosts file / DNS resolution                                                                                   | DBA group able to access the local hosts file |

SQL Server

SINGLE INSTANCE (no high availability)

Component       | Description         | Setup as            | Checked / Present
Version         | SQL Server version  | SQL Server Standard |
Instance name   | SQL Server instance | DEFAULT             |
Binaries        | Installation media  | ISO mount           |
Listening ports | Listening ports     | 1433 - 1434         |

MULTIPLE INSTANCES (high availability)

Component                   | Description                         | Setup as                                        | Checked / Present
Version                     | SQL Server version                  | SQL Server Enterprise Edition                   |
Instance name               | SQL Server instance                 | DEFAULT                                         |
Binaries                    | Installation media                  | ISO mount                                       |
Listening ports             | Listening ports                     | 60000 recommended                               |
Always On                   | Availability Groups: VIP and VDNS   | Add an extra IP and DNS entry for the AG listener |
Windows Clustering Services | Windows Clustering                  | Installed                                       |

 

SQL Server Patches

07 Aug 2019 0 minutes to read Contributors

Install the latest Service Pack (SP) and Cumulative Updates (CUs).
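To verify what is currently installed, you can query the server properties; note that ProductUpdateLevel is only populated on newer builds (roughly SQL Server 2012 SP3 / 2014 SP2 and later):

-- Check version, service pack level and cumulative update level
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion
    ,SERVERPROPERTY('ProductLevel') AS ProductLevel             -- RTM or SPn
    ,SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel -- CUn, where available
    ,SERVERPROPERTY('Edition') AS Edition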

NTFS Allocation Unit Size

07 Aug 2019 0 minutes to read Contributors

OS – default
Data disk – 64KB 100% preformatted
Log disk – 64KB 100% preformatted
TempDB disk – 64KB 100% preformatted

Perform Volume Maintenance Tasks

07 Aug 2019 0 minutes to read Contributors
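
Grant the SQL Server service account the Perform Volume Maintenance Tasks privilege. This enables instant file initialization, so data files can be allocated without being zero-initialized first; see the Server configuration - Service Accounts step below for where to enable this during setup.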

Location Of Installation Files

07 Aug 2019 0 minutes to read Contributors

SQL Binaries

C:\Software\Microsoft SQL Server <VERSION/SP/CU>

Command batch to start the SQL installation

C:\Software\Microsoft SQL Server <VERSION>\setupsql.cmd

SQL Server setup configuration ini file

C:\Software\Microsoft SQL Server <VERSION>\ConfigurationFile_<SERVERNAME>.ini

Installation Steps

07 Aug 2019 0 minutes to read Contributors

Start Setup

Choose New SQL Server stand-alone installation or add features to an existing installation.

Choose Product key

Enter a valid product key if applicable.

License Terms

Accept the license terms.

Microsoft Update Regime

Do not use the Microsoft Update regime for SQL Servers.

Feature Selection

Choose a minimal feature set: Database Engine Services. The rest is only needed if specifically required. The instance root directory should be on the SQL Server data disk. Leave Microsoft SQL Server as the folder name. The shared feature directory should be separated from the operating system drive; the same applies to the x86 directory. Leave the folder names intact.

Instance Configuration

If this is a single SQL Server environment, choose default instance.

Server configuration - Service Accounts

Choose a different domain service account for SQL Server Agent and the Database Engine. These accounts will be given the appropriate rights on the data, log and tempdb drives and the shared features by SQL Server setup. Set the startup type for SQL Server Agent and the Database Engine to automatic. The Browser service can be disabled, with no account name. Enable the Grant Perform Volume Maintenance Task privilege checkbox. This enables instant file initialization, so SQL Server does not have to zero out data files first, resulting in much faster allocation of data on disk.
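To verify afterwards that instant file initialization is actually in effect, you can check the service DMV; the instant_file_initialization_enabled column is available from roughly SQL Server 2012 SP4 / 2016 SP1 onwards:

-- 'Y' means instant file initialization is in effect for the Database Engine service
SELECT servicename
    ,service_account
    ,instant_file_initialization_enabled
FROM sys.dm_server_services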

Server configuration - Collation

Unless a very specific collation is required, leave this at the standard SQL Server Latin character set with Case Insensitive and Accent Sensitive.

Database Engine Configuration - Server Configuration

Specify mixed mode and a secure SA password. Add your domain groups for Windows authentication.
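A minimal sketch of adding such a group, reusing the DOMAIN\GROUP placeholder from the prerequisites above; whether the DBA group should be sysadmin is a policy decision, not a given:

-- Add the DBA domain group as a Windows-authenticated login
CREATE LOGIN [DOMAIN\GROUP] FROM WINDOWS

-- Optionally grant it full administrative rights (adjust to your own policy)
ALTER SERVER ROLE sysadmin ADD MEMBER [DOMAIN\GROUP]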

Database Engine Configuration - Data Directories

Set the Data root directory and the User database directory to drive I:. Set the Database log directory to T:. Set the Backup directory to a location separated from Data, Log and tempdb.

Database Engine Configuration - TempDB

Allow SQL Server to create 8 data files (for fast round-robin tempdb access). Set the initial size to 1GB and allow growth of 256MB per data file. Set the data directories for tempdb to a different drive than Data, Log and Backup, preferably to SSD drives. The log directory for tempdb can be the same as the tempdb data drive. An initial size of 1GB with 256MB growth is sufficient.
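If you need to adjust tempdb after installation, the same layout can be applied with ALTER DATABASE. The file names and the N:\ path below follow the volume layout from the prerequisites and are illustrative only; repeat the ADD FILE statement until you have 8 data files:

-- Resize the primary tempdb data file and add a second one
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = N'N:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB)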

Ready to Install

SQL Server setup will create a configuration file that can be reused for unattended installation on a similar Windows configuration (setup.exe accepts it via the /ConfigurationFile parameter). Copy that file to the C:\Software location together with SQL Server.

Disable Media Sensing Feature

https://support.microsoft.com/en-us/help/239924/how-to-disable-the-media-sensing-feature-for-tcp-ip-in-windows

 

Check the Primary or Secondary Status of an Availability Group Server

07 Aug 2019 0 minutes to read Contributors

DECLARE @role_desc varchar(20)

IF( HAS_PERMS_BY_NAME ('sys.dm_hadr_availability_replica_states', 'OBJECT', 'execute') = 1)
BEGIN
    -- If this is not an AG server, return 'PRIMARY'
    IF NOT EXISTS( SELECT 1 FROM sys.databases d INNER JOIN sys.dm_hadr_availability_replica_states hars ON d.replica_id = hars.replica_id)
        SELECT @role_desc = 'PRIMARY'
    ELSE
        -- Otherwise return 'PRIMARY' if any availability group replica on this server is primary, else 'SECONDARY'
        IF EXISTS( SELECT hars.role_desc FROM sys.databases d INNER JOIN sys.dm_hadr_availability_replica_states hars ON d.replica_id = hars.replica_id WHERE hars.role_desc = 'PRIMARY' )
            SELECT @role_desc = 'PRIMARY'
        ELSE
            SELECT @role_desc = 'SECONDARY'
END
ELSE
    SELECT @role_desc = 'PRIMARY'

SELECT @role_desc

Automatically DBCC CHECKDB your databases

07 Aug 2019 0 minutes to read Contributors


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/*
Functionality
=============
DBCC CHECKDB all databases

Applications
============
--

PRERequisites
=============
- A table containing server names and database names. (This table will be used to restore the backups of those databases onto a server to offload
the DBCC CHECKDB.)
- usp_restoredbV5 - a custom procedure to automatically restore databases.
- Ola Hallengren's [DatabaseIntegrityCheck]

VERSION HISTORY
Version Date Author Reason
------------------------------------------------------------------------------
1.0 20160520 drb creation

Logging
=======
local Table TargetDatabases

Results
=======
TargetDatabases and Ola's Commandlog
*/
CREATE procedure [dbo].[usp_dba_dbcc_checkdatabases]
as
begin
SET NOCOUNT ON
DECLARE @id int,
@starttime datetime,
@endtime datetime,
@OnlyDoThisOnce tinyint = 1,
@dbcc_errorcode int,
@result varchar(max)

-- Parameters for usp_restoredb proc
DECLARE @server sysname,
@database sysname,
@restore_to sysname ='@@SERVERNAME WHERE YOU WANT THE BACKUP TO BE RESTORED TO',
@restore_as sysname,
@restore_data_to sysname = 'I:\DBCC_CHECKDB_FOLDER\',
@restore_log_to sysname = 'T:\DBCC_CHECKDB_FOLDER\',
@immediate_restore bit = 1,
@username sysname = NULL,
@password sysname = NULL

IF OBJECT_ID( 'TargetDatabases' ) IS NULL
BEGIN
EXEC('CREATE TABLE TargetDatabases( id int identity(1,1) PRIMARY KEY, servername sysname,recovery varchar(20) NULL,database_name sysname NULL, State sysname NULL, duration int null, dbcc_checked datetime null, dbcc_error bit default 0, message varchar(150) null, uid sysname, pwd sysname)')
END

WHILE( 1=1 )
BEGIN
-- See if we're in the weekend..., if not... Stop dbcc
-- IF( DATENAME(WEEKDAY, current_timestamp ) NOT IN ('Saturday','Sunday') )
-- BREAK

-- If we have nothing to do, fetch a new list of DB's
IF NOT EXISTS( SELECT 1 FROM TargetDatabases where dbcc_checked IS NULL) AND @OnlyDoThisOnce = 1
BEGIN
SET @OnlyDoThisOnce = 2

insert into TargetDatabases( servername, recovery, database_name, state, UID, PWD )
select servername, recovery, database_name, state, uid, pwd from OPENROWSET('SQLNCLI11','DRIVER={SQL Server};Server=@@SERVERNAME_WHERE_TARGETDATABASES_IS_LOCATED;Database=@@DATABASE;Persist Security Info=True;UID=@@READONLY_LOGIN;PWD=@@READONLY_PASSWORD','select * from targetdatabases')

-- Don't check any Test or UAT DB's and don't check SDW itself either
DELETE FROM TargetDatabases where dbcc_checked is null and ( servername like '@@EXCLUDESERVERS%' OR
servername IN ( @@EXCLUDESERVERS ) )

-- Check what we do not have to check again
UPDATE newbatch
SET dbcc_checked = CURRENT_TIMESTAMP,
message = 'Already checked in the last 28 hours'
FROM TargetDatabases oldbatch LEFT OUTER JOIN
TargetDatabases newbatch ON DATEDIFF( hour, oldbatch.dbcc_checked, current_timestamp ) < 28 and oldbatch.dbcc_error = 0 and newbatch.dbcc_checked IS NULL and newbatch.servername=oldbatch.servername and newbatch.database_name=oldbatch.database_name
WHERE newbatch.id IS NOT NULL
END

-- This is to eliminate any newly added databases to SDW during a batch of DBCC check
DELETE FROM TargetDatabases where dbcc_checked IS NULL and database_name IN (select name from sysdatabases where name not in ('master','msdb','distribution'))

-- Fetch the next DB to be processed
SELECT TOP 1 @id = id, @server = servername, @database = database_name, @restore_as = database_name+'_dbcc', @username=UID, @password=PWD FROM TargetDatabases where dbcc_checked IS NULL

-- If we have processed all, yeah right!
IF(@@ROWCOUNT = 0 )
BREAK

-- update status
UPDATE TargetDatabases
SET message = 'Restoring database...',
dbcc_checked = CURRENT_TIMESTAMP
WHERE ID=@id

RAISERROR( 'Working on %s:%i',1,0, @database,@id )

SET @starttime = CURRENT_TIMESTAMP

BEGIN TRY
EXEC usp_restoredbV5 @server = @server, @database = @database, @restore_to = @restore_to, @restore_as = @restore_as, @restore_data_to= @restore_data_to, @restore_log_to=@restore_log_to, @immediate_restore=@immediate_restore, @username=@username, @password=@password, @result = @result OUTPUT

IF DATABASEPROPERTYEX( @restore_as, 'Status' ) IN ('OFFLINE', 'RESTORING', 'RECOVERING', 'SUSPECT', 'EMERGENCY' ) OR DATABASEPROPERTYEX( @restore_as, 'Updateability' ) = 'READ_ONLY'
BEGIN
UPDATE TargetDatabases
SET message = 'Database in error recovery :'+CAST(DATABASEPROPERTYEX( @restore_as, 'Status' ) as varchar(10))
WHERE ID=@id

EXEC( 'DROP DATABASE ['+@database+'_dbcc]')
END
ELSE
IF DATABASEPROPERTYEX( @restore_as, 'Status' ) IS NULL
BEGIN
UPDATE TargetDatabases
SET message = 'Could not find database'
WHERE ID=@id
END
ELSE
BEGIN
-- update status
UPDATE TargetDatabases
SET message = 'Database restored, doing a DatabaseIntegrityCheck'
WHERE ID=@id


-- OLA HALENGREN
exec @dbcc_errorcode = [DatabaseIntegrityCheck]
@Databases = @restore_as,
@CheckCommands = 'CHECKDB',
@PhysicalOnly = 'N',
@NoIndex = 'N',
@ExtendedLogicalChecks = 'N',
@TabLock = 'Y',
@FileGroups = NULL,
@Objects = NULL,
@LockTimeout = NULL,
@LogToTable = 'Y',
@Execute = 'Y'

EXEC( 'DROP DATABASE ['+@database+'_dbcc]')

SET @endtime = CURRENT_TIMESTAMP

-- update status
IF( @dbcc_errorcode = 0 )
BEGIN
UPDATE TargetDatabases SET duration = DATEDIFF( second, @starttime, @endtime ), dbcc_error = 0, message = 'Finished successfully without errors.'
WHERE ID=@id
END
ELSE
BEGIN
UPDATE TargetDatabases SET duration = DATEDIFF( second, @starttime, @endtime ),dbcc_error = @dbcc_errorcode, message = 'Finished but WITH DBCC ERRORS.'
WHERE ID=@id
END
END
END TRY
BEGIN CATCH
DECLARE @error varchar(max) = ERROR_MESSAGE()+' - '+ISNULL(@result,'')
-- update status
UPDATE TargetDatabases SET message = @error
WHERE ID=@id

-- DROP THE DBCC RESTORED DATABASE
EXEC( 'DROP DATABASE ['+@database+'_dbcc]')

END CATCH
END
end
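
Once the prerequisites are in place, a typical way to run this is from a scheduled SQL Server Agent job step that simply executes the procedure:

EXEC dbo.usp_dba_dbcc_checkdatabases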


Don’t let downtime kill business

07 Aug 2019 0 minutes to read Contributors

Databases are everywhere in the modern world. From our Facebook and Instagram updates to TV streaming and social gaming; to some extent they’re all dependent on reliable database software. On a more serious level, so are our banking transactions and flights, as well as healthcare professionals’ ability to access patients’ medical data.

That’s why even a short period of database downtime is a nightmare for some companies. If an offline server will have a huge impact on your business, you need to take every step possible to ensure your systems and infrastructure are redundant. And, you need to have an effective disaster recovery plan in place to ensure they’re up and running again before too much damage is done.

High availability isn’t just an issue for customer-facing teams. If you have users accessing the system to submit new work, update or alter existing work, or generate the results in a report on a daily basis, any downtime will be detrimental to their productivity. It’s worth bearing in mind that it can take hours (even days) to restore some large databases. Could you afford to bring your business to a standstill for that time?

How can I ensure a high availability database environment?

Technical solutions such as replication, log shipping, mirroring, Windows Failover Clustering and Availability Groups are a means to achieve a high availability database environment. However, it is also important to create the right cultural mindset across your organization. As a starting point, it helps to be clear about your business requirements and your current patterns of database use. Depending on your already defined requirements, you can then proceed with the solution that fits your criteria.

 

There are many factors that can cause downtime. And, in most cases, they are not possible to detect before they happen. These factors include hardware failure, software failure, environmental hazards, human error, power outages and connection issues. It is important to have a good understanding of how database downtime will affect your business, as downtime means users and applications can no longer access the server/database.

We’re not just talking about how much money you might lose per hour in the event of a database failure. You also need to consider the cost of lost productivity while the problem is fixed. Think of the additional customer services resources needed to answer queries, and the potential cost of lost sales, customers and brand reputation.

Depending on your business, the cost could also depend on the time of day or year the problem occurs. The summer holidays for travel companies, for example, or the run up to Christmas for ecommerce retailers. These peak trading seasons are also often the time when your people take holidays and resources are stretched.

Meeting individual business needs

Every business has specific needs and requirements. In some circumstances, a database out of action for a whole day is not business critical. On the other hand, for some companies even a ten-minute outage would be a disaster.

Do you have access to Database Administrator (DBA) knowledge and expertise? Is your server in a redundant environment such as a well-equipped data center or in the Cloud? Do you have client Service Level Agreements (SLAs) to factor in? Weighing up what you really need and finding the best solution very much depends on your answers to these questions.

How can technology help?

There is a range of technical solutions and ways to set up high availability configurations within the SQL Server environment. For example, transaction log shipping can be implemented to automatically send backups from a primary database on a primary server to one or more secondary databases on a separate secondary server. Alternatively, database mirroring creates and maintains an exact replica of a database to ensure continuous data availability and minimize or avoid downtime, even during system upgrades. If you have multiple databases, SQL Always On Availability Groups may be the best solution.

In summary, the best way to prevent downtime and minimize potential losses to your business is to adopt a series of best practices that help you achieve high availability for your database servers. Technology is only part of the solution. At the heart of every effective high availability and disaster recovery strategy is robust business analysis and planning.

Each technology option has its own associated costs, pros and cons. Designing the most appropriate solution (or combination of solutions) will be determined by your individual business requirements, such as the degree of business risk, your SLAs and in-house skills. If you’d like to know more, we are always happy to answer questions and discuss individual requirements. You can contact us here.

Are you ready for the coming HealthTech revolution?

07 Aug 2019 0 minutes to read Contributors

There are enormous shifts taking place across the healthcare sector. But one of the biggest is that almost everything, from patient identification to laboratory testing and trend analysis, is being run on a database system. It’s impossible to overstate the importance of effective data management for today’s healthcare providers.

Digital health records that accompany every patient from cradle to grave are already a reality. And, new systems and digitally connected devices are constantly being developed. In order to provide comprehensive patient care and safety, all that health data from a myriad of systems and care providers needs a secure but easily accessible database. 

No one wants to go back to the days of inefficient paper-based files. After all, easy access to data is also critical to tracking and evaluating healthcare systems and trends, such as vaccination levels, as well as clinical research. Yet, at the same time, healthcare providers must protect private, sensitive information from loss or misuse. 

All these trends pose big challenges specific to the health sector. 

There are a huge number of different, critical healthcare IT systems

Modern healthcare demands specialist practitioners and systems. This can mean different IT systems for patient information, laboratory testing, digital imaging and many other applications. In addition, individual patients have IoT-connected devices to monitor vital signs, such as their blood pressure or heart rate. In fact, there are predictions that the growth of IoT will result in over 30 billion devices in 2020.

Big Data: collation, storage and analytics is a huge challenge

Patients are seen by multiple professionals and organizations throughout their journey through the healthcare system. As they visit clinics, laboratories, hospitals and pharmacies their health and personal data are recorded and stored at every stage. This adds up to a huge amount of sensitive information that needs to be reliable, up-to-date and easily retrieved to draw accurate conclusions and implement the appropriate care. 

Privacy and security must be top priority

These vast healthcare databases are a high-profile target for hackers. The UK’s National Health Service (NHS) suffered a major ransomware attack just last year. As more healthcare systems are migrated to the Cloud, data protection and security awareness training for all staff (not just database professionals) is essential to ensure they can recognize and respond to everything from criminal attacks to unintended internal security lapses.

Supporting the next generation of healthcare professionals

In healthcare, inadequate database management could lead to more than simply wasted time and user frustrations with slow-running systems. Inaccurate, missing or difficult to access information can literally cost lives. As healthcare becomes a more digitally connected service, our highly-valued healthcare professionals will need the support of skilled database experts. 

To run their services efficiently and ensure patient safety, healthcare providers need access to data 24/7. They also need to filter, analyze and report on that data to improve the quality of care they provide. Data sharing can also be highly beneficial at a broader level; policymakers, for example, can use information collated across providers to implement life-changing health reforms. 

Are you ready for the revolution?

Technology is bringing huge benefits to healthcare. But, managing the impact on patient safety and privacy is a major concern. These are big challenges for all healthcare providers and institutions. 

There are no quick fixes or easy answers. But, having trusted database experts working alongside you as strategic partners is the best place to start. We are always happy for healthcare providers to contact us to discuss their specific database management needs. By sharing knowledge we can find the best solutions to current and potential future healthcare challenges.

Are database nightmares losing you money and customers?

07 Aug 2019 0 minutes to read Contributors

When applications slow down, or even shut down, your ability to perform core business functions is often impaired. This can cause loss of sales, loss of productivity and low employee morale as staff are faced with responding to a host of frustrated customers; the costs will ripple through your business. That’s before we get into the potential cost of customer compensation and the harder-to-quantify damage to your brand. Sometimes the solution is not in the applications; adjustments to the database will solve the problem.

The need to protect against database outages and poor performance cannot be overstated. Database monitoring and finding issues in time can help your applications remain healthy and accessible. But, without solid monitoring in place, database problems can go unnoticed until it’s too late and your business is losing money and customers fast.

The top five database time killers

  1. Are our back-ups running as they should be? A good database maintenance plan ensures back-ups are running regularly and at a time when they will not affect other business functions (a query sketch for spotting missed back-ups follows this list).
  2. What disc space do we have? If your disc is almost full with data, your database will eventually break.
  3. Is the database structure clean? To ensure your database has accurate data and is running as fast as possible, you must make sure it’s not overloaded with data that is no longer needed.
  4. Is our database scaled correctly? Is it running on the right machine with the right disc space and memory capacity for the required tasks? Even simply monitoring memory usage can win you a lot of performance.
  5. Are queries optimized for the task? Often queries are slow running because they are not structured in an optimal way.
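
As promised in point 1, here is a minimal sketch of such a back-up check: it lists every database (except tempdb) without a completed full back-up in the last 24 hours. Adjust the window and back-up type to your own maintenance plan:

-- Databases with no full back-up in the last 24 hours
SELECT d.name AS database_name
    ,MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name AND b.type = 'D' -- 'D' = full backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(hour, -24, CURRENT_TIMESTAMP)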

Monitoring should be a high priority for every Database Administrator (DBA)

Paying attention to these five key metrics means your DBA will be able to notice when something is wrong with your database, before you start getting complaints from users about slow performance. But, database monitoring is also probably the DBA’s most time-consuming task. Integrating monitoring tools designed specifically for the SQL database environment with your existing hardware monitoring tools can save a huge amount of time. They’ll monitor your database 24/7 and alert you when, for example, disc space or memory is close to capacity or a back-up has failed.

Maintaining the health of your database environment keeps it running as fast and smoothly as possible. That means fewer performance complaints from users. Most importantly, it means no nasty surprises, loss of revenue or angry customers.

Are you doing everything you can to avoid database disasters?

Don’t wait for the nightmare wake-up call. The smart solution is to take a vigilant and proactive approach to your system’s performance management. This greatly reduces your risk of encountering issues with your database environment.

If you have any questions or concerns about your database performance, get in touch with us. We’ll see how we can help you monitor everything you need to keep your SQL environment running smoothly.

Summer’s coming. Is your business ready for the holidays?

07 Aug 2019 0 minutes to read Contributors

You’ve arranged it all. Your flights are booked. Your passport is up to date. Now you’re ready to relax on your well-earned summer holiday.

But, have you been as well-organized about your business and database planning? Managing everyone’s summer break with the minimum amount of disruption is a challenge for every business, particularly once the school holidays get underway. How will you ensure your customer service doesn’t suffer or that project timelines don’t slip?

Here’s our three-point database checklist to help ease you into summer.

1. What will your database needs be over the summer?

Do you have any special events or sales periods that need extra support? Now is a good time to plan how many team members you might need at any given time. Then, you can map your business and project plans against your team’s annual leave calendar.

Remember to include one-off leave requests for sports events, festivals and family celebrations. There’s an important football world cup happening in Europe this year! Highlighting all the important dates early lets you see where the resource gaps and pressure points are.

2. Consider getting extra help

Hiring temporary or contract staff can certainly take the pressure off your team. As an added bonus, they can also bring excellent benefits to a business in terms of a fresh outlook, different experiences and new ways of solving problems. But, they may not be as familiar with your database environment as your regular team, so it’s worth organizing an internal or external database support helpline to answer any queries.

Depending upon the complexity of the role or task, you may need to invest in more experienced extra team members. Or, you could consider a Virtual Database Administrator (vDBA); this can be an efficient way to plug gaps in internal projects so timelines don’t slip. But, remember to plan ahead as the right people may not always be available at short notice.

3. Give your database a pre-holiday health check

You’ll find it easier to relax on holiday knowing everything at work is under control. Ever thought of checking your database environment before going on holiday? Running a database health check will give you the insight and detailed information you need to fix any issues. Then, you get to go on holiday knowing that everything is working well.

For even greater peace of mind, consider downloading a Desired State Configurator tool to automate your SQL installation and deployment tasks. You can design it to meet your needs and schedule it to keep an eye on your database while you’re away. You can even set it to automatically revert to your original configurations if someone makes a change.

No-one can guarantee that nothing will go wrong with your database while you’re on holiday. But, if you start planning and working on it now, you can minimize the risk of disruption to your customers and projects. And, don’t forget about your teams – robust planning demonstrates you care about their needs as well.

Don’t leave it too late

Plan ahead and fix your database bugs now. Even small gestures, such as a database helpline or vDBA to call on, can go a long way to keeping your team spirit as high as the summer temperatures.

You shouldn’t have to worry about your database when you’re on holiday. Get in touch with us today and see how we can help you plan everything you need to keep your SQL environment running smoothly this summer.

Here’s an easy way to boost your SSMS productivity

07 Aug 2019 0 minutes to read Contributors

“We were getting the same feedback from DBAs practically every day,” says Vincent Riebeek, SQLTreeo’s Operational Manager. “They kept telling us: ‘we work with lots of objects within SQL Server Management Studio (SSMS) and navigating the system is so time-consuming.’”


Never one to shy away from a challenge, Vincent knew there must be a better way and set out to find a solution to our customers’ problem. The result is a unique extension to the Microsoft SSMS software that is already bringing multiple productivity benefits to DBA and developer teams.

“One SQL server can hold thousands of objects, such as tables, procedures and jobs,” explains Vincent. “Finding the correct object can really eat up your developers’ and DBAs’ time. We decided to develop some software that allows you to organize and group objects in certain folders.”

Organized, engaged and efficient teamwork

The customer feedback so far has been that this SSMS add-in radically speeds up the search process. But there have also been other benefits we hadn’t expected. “For a start,” says Vincent, “the DBAs and developers have all said how it makes their work more enjoyable; searching through thousands of objects was so frustrating.”

It may not sound all that significant but, if you’re working with SSMS daily, making lots of adjustments to procedures or reports, being able to navigate quickly to your destination is a huge relief and time saver. You can also customize the add-in, so you can group objects in a way that suits you best and still have a useful, at-a-glance overview of all your work.

“When we listened to our customers, they told us they wanted to be able to group objects in two ways,” continues Vincent. “So, we built this into the tool. You can group into fully customized categories where the user decides exactly how they want to manage it, or you can group dynamically where, within a matter of seconds, you can group together all objects that meet certain criteria.”

Our customers have also seen improvements in the way project teams communicate. With this add-in, teams can work together on dynamic folders categorized by product, customer or team. And, they can import and export data across databases quickly and easily.

User feedback is extremely valuable

Our software development is often driven by users who request certain functionalities. One example of this is our ‘Compare’ feature. Developers can speed up deployment by adding this feature, which lets the user easily compare the differences between any object, such as a database or procedure, in the test and production environments.

Another efficiency gain is with reports and queries. “When a user has to check the properties of an object, such as how large is this table or how much data has been allocated or used, they would normally need to generate a report or query,” explains Vincent. “But, now with this software, you can see the most important object information, such as memory or disc space, at a glance.”

So there you have it. One incredibly simple way for you to save time, boost productivity, improve employee engagement and strengthen team working. The add-in only takes a minute or two to install and the user interface guides you through what it can do, how to do it and what results to expect.

You can find more information and get a 14-day free trial, here.

Your three step guide to building strong client partnerships

07 Aug 2019 0 minutes to read Contributors

A business is only as good as the clients it serves. When we founded SQLTreeo some 20 years ago, we knew our success would depend on building long-term client relationships based on flexibility and collaboration. Now we’d like to share with you our ideas on how to deliver this sort of value consistently to your clients.

In recent months, we’ve had many conversations about how companies can develop productive and enduring client relationships. With technology changing rapidly and specialist skills, such as data management, growing in importance, companies increasingly need to rely on trusted partners to meet their business objectives.

But how do you build those mutually beneficial client relationships? We’d love to share our experience of following these three simple steps:

1. Make client communication your #1 priority

You can eliminate nasty surprises by looping clients in on all project updates. But, if you also take the time to get to know their business, you can anticipate issues and give additional support. For example, clients come to us for our technical expertise and reliable database services. But they also, quite rightly, expect us to keep them informed and offer advice.

If you offer managed services, design them to give you time to listen to your client’s specific needs, understand their pain points and discuss their longer-term objectives. We find this is the most effective way to develop database solutions that help our clients grow their core business. With this type of partnership, we find that even small companies can achieve big things.

For example, one of our clients is an insurance adviser. He’s extremely knowledgeable and successful in his field. But, he isn’t a database expert. Now, he could hire someone who is, but that is expensive and time consuming.

He prefers to collaborate with us because we are fast and efficient. By keeping his databases running smoothly, we free him up to focus on his core insurance business. Together, we make a strong team.

Another client is a fast-growing scale-up. As partners, we think and anticipate alongside them. We monitor their databases 24/7/365 and give a heads up to any potential problems. That’s a huge relief for any growing business; they don’t have to worry about taking on new customers and not having the IT infrastructure to support their growth. Similarly, if your client’s business is project-based, try to find ways to help them scale up and down quickly and easily.

2. Share knowledge

If you want to be in business for the long haul, you need your clients to see you as more than just a vendor or supplier. Strong relationships thrive on two-way listening and sharing knowledge and ideas. For example, our DBAs become virtual members of your team, committed to adding value to your business. Likewise, you should look for ways of partnering with your clients to help them grow their business today, tomorrow and in the years to come.

3. Honesty, transparency and a desire to serve must be your core values

No long-term relationship survives if the two parties aren’t honest with each other. Without a reputation for integrity, you will never be able to cultivate the kind of partnership that successful businesses depend on. That’s why we’ve made “Our word is our bond” our guiding principle; we always do what we say we will do.

We’re a close-knit team here at SQLTreeo and we work with clients to meet their technical, operational and strategic business objectives. That means offering a complete package that ensures vital database updates meet your needs and maintenance tasks are completed on time. When looking at your clients, try to identify ways to reduce their stress and worry so they can focus on what really matters. This means going beyond simply delivering a product or service and developing a real partnership.

Potential clients won’t just take your word for it

You can start to build relationships before you do business with a client in a variety of ways. Social media, client testimonials and face-to-face meetings where you seek to understand their business needs will all help establish credibility and trust. That’s why we’re always happy to discuss your database performance and data management and security needs. It’s what we’re passionate about.

But, you don’t just have to take our word for it. You can read our client testimonials here.

Three ways data masking helps with GDPR compliance

07 Aug 2019 0 minutes to read Contributors

The one-year anniversary of the introduction of GDPR is 25 May 2019. The new regulations generated vast amounts of publicity. You’ll no doubt remember being bombarded with GDPR-related emails.

Are you now in control of your customer data?

According to the European Data Protection Board (EDPB), there were over 200,000 reported cases of GDPR breaches in the first nine months. It seems that many businesses are still neglecting GDPR. Yet, by failing to comply, they risk large fines and reputational damage.

As yet, there’s little hard evidence but anecdotally it seems that for many organizations the issue is not one of willful non-compliance. Rather, there is simply a huge amount of confusion about how to navigate the vast web of rules and regulations. The weight of GDPR communication may simply have resulted in managers switching off, instead of putting practical procedures in place to meet the new obligations.

That is a huge business risk. GDPR is not going away and every business needs to be continually reassessing its GDPR responsibilities and safeguarding policies. But, what actions are you taking?

One practical solution, data masking, helps make you GDPR compliant in three important ways:

1. You can demonstrate you are taking GDPR compliance seriously

Data masking helps you to protect, anonymize and safeguard personal and sensitive information. That’s why it is frequently used for bank account or credit card details, phone numbers and health and social security details. It means that, in the event of a data breach, no Personally Identifiable Information (PII) is visible. You can also set additional security access rules within your organization.
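
For SQL Server specifically, Dynamic Data Masking (SQL Server 2016 and later) is one way to implement this. The table and column names below are hypothetical, for illustration only:

-- Mask a credit card number and an email address on an existing table
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')

ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()')

-- Ordinary users now see masked values; only principals granted UNMASK see the real data
GRANT UNMASK TO ComplianceUser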

2. You can enhance your customer and business relationships

Convincing the data protection authorities that you are GDPR compliant is one thing, but how will you reassure your customers? One aspect of GDPR that many companies are unaware of is Right of Access or Subject Access Requests (SAR). This is where customers can ask companies what data they hold and what they’re using their data for.

Organizations are obliged to comply with these requests. Including your data masking policy sends a strong signal to your customers that you take their data protection seriously. This also applies to your suppliers and business partners across the supply chain. They will need to see evidence that you are doing everything possible to keep your data secure and minimize risk.

3. You can optimize your DevOps function

Another key area in which data masking is increasingly important is DevOps. Your developers, whether they are internal or outsourced to a third party, rely heavily on fictitious sets of data that look and behave like the real thing. Data masking allows you to use your own customers’ data for testing, while maintaining security and minimizing delays to DevOps across your organization.

GDPR compliance is a complex and on-going process. And, data masking will not solve everything. It is a bit like taking out fire insurance; it won’t prevent the fire and you still need a sensible fire safety strategy in place.

But, when you consider the severity of non-compliance (fines of up to €20 million and long-term damage to your brand), implementing data masking is a small price to pay. More and more IT managers are finding that dynamic, real-time data masking is an essential part of their GDPR strategy.

Would you like to know more?

Visit https://www.sqltreeo.com to find out how we can automate data masking across multiple databases and help you with your GDPR compliance.

Do you still need a DBA in the Cloud Economy?

07 Aug 2019 0 minutes to read Contributors

Database Administrators (DBAs) are the gatekeepers and caretakers of our modern economy’s most valuable asset: data. They implement and maintain database systems and ensure data is secure, up-to-date and reliable. But, as with many other roles, new technology and automation are driving rapid change.

More organizations are moving from traditional in-house or data center systems to cloud solutions. Database monitoring, testing, patching and tuning are increasingly being automated.

Is this the end of the DBA? 

We don’t think so. Every organization, big and small, depends on the data in their systems to service customers, personalize marketing and make intelligent business decisions. They still need someone to manage the increasing amount of stored data, spot trends, validate decisions and give management information and advice.

Meet the modern DBA 

In the past, your DBA may have seemed somewhat mysterious. They tended to work alone and quietly. It’s likely that most managers didn’t really understand, or appreciate, what they actually did.

But, senior managers are becoming more aware of the benefits of fast, reliable, real-time data. And data security is now a hot topic everywhere from the boardroom to the European Parliament. As such, DBAs are now taking center stage.

As organizations put more data in applications outside the firewall, DBAs have to find ways to maintain security and data integrity. At the same time, companies may still have their most critical data stored in traditional in-house or data center systems. The DBA of today is responsible for multiple types of database systems, data storage and maintenance. They need to be adaptive security experts rather than account administrators.

DBAs are now focused more on improving performance and finding better solutions than on routine maintenance tasks. They are your data experts, strategists and masters of cloud technology. If you want to understand the impact the Cloud has on the database environment, ask a DBA.

What does it take to be a successful DBA in this new world? 

Successful DBAs will be those who continuously update their knowledge and, in some cases, acquire a whole set of new skills. In the Cloud, companies don’t need to stick with one software provider, so DBAs will need to be familiar with different database platforms. But, there’s more to this new DBA role than technical skills.

DBAs have always been good at identifying what is going on with the database. Now, they also need critical thinking, creative problem solving and communication skills to answer questions such as “What is going to happen?”, “What are the implications?” and “How can I solve the problem?”. They’ll also need a broader business understanding to appreciate where the business is heading and how they can supply the information needed to help achieve those goals.

Far from being out-dated, the DBA role is more critical than ever. Companies will need DBAs or Virtual DBAs (remote DBAs) with deep expertise and attention to detail to work on disaster recovery, performance, security and back-up issues. But, they also need team players with excellent communication skills to act as pivotal collaborators between developers, customer support teams and business managers.

Technology changes at an astonishing pace. But, we’ll always have data and organizations will always want information from that data. The DBA role is evolving. In many ways it’s becoming more important and highly skilled, but it’s definitely not going away.

In fact, demand is high for expert DBAs and Virtual DBAs who embrace, rather than fear, automation. If you want to keep your business moving forward, check out SQLTreeo’s Virtual DBA service here.  

What could you achieve with 60% more time?

07 Aug 2019 0 minutes to read Contributors

Regardless of your job or industry, do you ever have enough hours in the day to get everything done? If not, what's the answer? Work more hours or work smarter? We prefer the second option. Automating as much as possible reduces the number of decisions and tasks you have to deal with throughout the day. That can help you get more done with less stress.

Always wear the same suit

One of Barack Obama’s presidential productivity secrets was apparently ‘always wear the same suit’. By eliminating unnecessary and mundane tasks, he could focus all of his energy on more important work decisions. Your DBAs and DevOps team would love to be able to do the same.

Here are four ways database automation can help you save time and money

1. It can free up 60-70% of your DBAs’ time

Businesses now have vast amounts of data to deal with in multiple locations, including private data centers and public clouds. You also have to take care of compliance, security, costs, GDPR, availability, migrations, upgrades and many other issues.

Yet, one of the biggest issues Database Administrators (DBAs) have to deal with is finding and correcting human errors. These often go undetected, as someone makes a change that accidentally causes a problem in another part of the database. It happens a lot, because database users can make so many small changes. In fact, we estimate that 60-70% of DBA time is spent on solving these issues.

Automation, such as Desired State Configuration (DSC), frees up time for your highly skilled DBAs. This enables them to work on higher value and more engaging database improvement work, rather than doing repetitive, boring and time-consuming tasks.

2. Your business can do more with less

The cost savings of automation are significant. We find some of the biggest savings are in database maintenance, where we estimate you could cut your costs by as much as 50%. For mid to large sized companies, the reduced staffing costs alone can run into hundreds of thousands of dollars a year. For your DevOps teams, database automation means the configuration for every server they are programming is exactly the same. This saves them time, reduces errors and helps them deliver cleaner, better code.

3. You can improve your data security

This is a priority in every organization. Database breaches can destroy brand value faster than problems with products or services. Customers will forgive many things but not the loss of their personal data. Who has access to our databases? What authentication do users need? How can we make the process easier to understand and safer to use? Automation can help answer these questions. It can make it easier for you to protect your customers’ data and stay GDPR compliant.
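
As a starting point for the question of who has access, a simple T-SQL sketch like the one below lists the users and role memberships in a database. It uses only standard catalog views; adapt it to your own environment:

-- List database users and the roles they belong to
SELECT dp.name AS user_name,
       dp.type_desc AS user_type,
       ISNULL(r.name, 'public') AS role_name
FROM sys.database_principals dp
LEFT JOIN sys.database_role_members rm ON rm.member_principal_id = dp.principal_id
LEFT JOIN sys.database_principals r ON r.principal_id = rm.role_principal_id
WHERE dp.type IN ('S', 'U', 'G')   -- SQL users, Windows users and groups
ORDER BY dp.name;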

4. Continuous database delivery is easier

We can help you automate software and server updates, tests and checks. This means you can move from test to production more quickly and with less risk of human error. Implementing our Desired State Configurator (DSC) tool is quick and easy. You certainly don’t need to be a coder. Our simple Q&A sequence guides you through the process. It’s no more difficult than changing your email address.

As Bill Gates famously said, "I always choose a lazy person to do a difficult job because they will find an easy way to do it." The assumption here is, of course, that the lazy person is actually very smart. That's where automation comes in. Savvy DBAs and developers know that database automation relieves them of many arduous, repetitive tasks. It allows them to focus on more important areas. Your people and your database are valuable assets – giving them more time could be the best investment you can make.

Visit https://www.sqltreeo.com/ to find out how our Desired State Configurator can help you automate a wide range of database maintenance and development tasks. See how it will save you time and money, improve data security and keep your highly skilled database professionals happy and engaged.

Database management: what’s in it for me?

07 Aug 2019 0 minutes to read Contributors

Meet Lisa; she’s a marketing manager with an international travel company. She recently bought a new handbag online. She’d spotted her favorite brand, in the right color and at a price she could afford.

Unfortunately, within a few days the bag broke. Disappointed – and prepared for a bit of a battle with the call center – Lisa rang to ask for a replacement. In fact, she was pleasantly surprised. Not only did she get an apology and a replacement bag, but the customer services representative also had all her details to hand.

Delighting your customers

This is a simple example of how companies with good data management processes can create not just satisfied, but delighted, customers. Lisa only had to tell the company her name and, within a few seconds, the customer services agent had the details of when she bought the bag, the model, color and price. Lisa went from being disappointed and irritated to feeling like a valued customer.

Exceptional customer experience creates repeat customers and brand advocates

Because the issue was resolved so quickly and painlessly, Lisa’s likely to use the same website again. She’s probably telling all her friends too. The reason her experience was so positive was down to good database management.

The staff in the contact center had all the data they needed to hand. It was fast, accurate and flowed through all the departments in the business to be in the right place at the right time. All the staff, including non-customer facing teams, had been briefed and trained on the importance of keeping the database up to date and working well.

These database management principles apply to every business and every product and service in both B2B and consumer markets. Maintaining fast, accurate data is one of the most valuable things a business can do. Having the right information at your fingertips helps make customers feel special and is a positive brand differentiator that gives you a tremendous competitive advantage.

Good database management makes good financial sense

It’s easier to plan and budget effectively for your IT environment when you have proper database management processes. You won’t waste time constantly troubleshooting problems and you’ll know how much storage and back-ups you’ll need. Over time, these cost savings can be significant.

A skilled and experienced Database Administrator (DBA) is an invaluable company asset. They will know what’s going on in your database environment and when you need upgrades.

But, they do much more than that.

In my opinion, one of the most important DBA roles is as a management adviser. They don’t simply monitor databases, they understand the business implications of certain actions or inactions. Their advice is worth listening to; it can help prevent bigger problems further down the line.

Successful businesses don’t rely on gut feeling alone

As a business owner, you’ll have an intuitive feeling for what works and what doesn’t in your company. That’s great. But, only up to a point.

Every business needs good data management to be profitable and sustainable. You’ll reduce risk if you base longer-term financial decisions and business plans on accurate market data and trends. If you are looking to raise external finance, you’ll need robust market potential, pricing and cost numbers to convince lenders or investors.

Here’s the really good news

You don’t have to spend time and money on recruiting a new team member to achieve a healthy and effective database. A virtual Database Administrator (vDBA) could be just what your organization needs.

We can get your highly skilled and experienced vDBA up and running within one hour. You’ll have a trusted point of contact with the specialist knowledge to troubleshoot and solve problems quickly. And, with SQLTreeo, there’s no contract, lock-in or monthly subscription to pay.

Our vDBAs are also skilled in maintaining databases, writing procedures and queries to make the most of your data. With experience across different companies, they understand the business implications of data management. This puts them in an ideal position to advise on how your organization could benefit from data-driven business intelligence information.

Would you like us to manage your database, so you can get on with managing your business? Find out more about our vDBA service.

Four ways good database management can boost your business

07 Aug 2019 0 minutes to read Contributors

Whatever industry you work in, you will almost certainly have come across a story about how data is transforming your sector. That might be helping to cure a disease or making a building more energy efficient. Perhaps more annoyingly, data is also responsible for those targeted ads you keep getting – and ‘disrupting’ the global economy.

Data is now widely recognized as a vital business asset. In fact, Harvard Business School Professor, John A. Deighton, goes so far as to say, “Used well, data changes the basis of competition in industry after industry.” The problem is that, in many cases, data is not used well.

Crucially, it’s not just about collecting the data. You only get real value when you manage and analyze data effectively. Here are four ways a smart data management strategy can help you drive business growth.

1. Increased productivity and cost efficiency

When you put systems in place to properly manage and update data, your employees gain faster access to the information they need to do their job. A database that is full of errors and duplications not only wastes valuable time. It also leads to frustrated, dissatisfied and less productive employees.

A modern business needs engaged teams that can find, understand, evaluate and share information easily. Not spending their time grumbling over slow, inaccurate databases. Effective data storage also makes it less likely that your employees will be conducting the same tasks or research projects that have already been completed by another team member.

2. More agile operations

The speed at which you can make decisions, bring new products to market or change direction is a key factor in determining business success. Being slow to react gives your competitors an opportunity to get ahead. A good data management system allows you to access information and keep up-to-date with market trends and competitive threats.

Every part of your business can benefit from good data management. For example, your marketing team can analyze campaigns more accurately, and your HR team will have processes in place to ensure passwords and account access rights are closed when employees leave.

3. Reduced security risks

Data security is an essential part of database management. It protects your customers, employees, suppliers and business partners from data loss, thefts and breaches. Proper planning will ensure you’re GDPR compliant, and reduce the risk of vital information falling into the hands of the wrong people.

4. Enhanced decision-making

The right data and business intelligence gives your management team the opportunity to make smart decisions. Many organizations are now combining data from multiple sources, including finance and accounting platforms, team planning software and performance management systems. Your database is where all this information comes together and good workflow ensures that data is accurate.

This is where the business value can be tremendous. Accurate, rich and easily accessible data can give you real competitive advantage. On the other hand, an inadequate data management system can hamper decision makers across the organization.

Most businesses today have an abundance of data. Too often, they fail to analyze it effectively, which results in poor and inaccurate decisions. The winners will be those who learn to manage and interrogate data to develop smart, actionable insights.

Would you like to know more?

Visit https://www.sqltreeo.com to find out how we can help make your databases run smoothly, accurately and compliantly.

What’s the one thing you must do to win the trust of customers?

07 Aug 2019 0 minutes to read Contributors

We’re all spending much more time in the digital world. Ecommerce continues to grow in popularity and we all benefit from the convenience of booking holidays and paying bills online. But, while convenient, those digital experiences can also cause major headaches.

As consumers increasingly rely on the internet, so they develop ever-higher expectations of a seamless, reliable digital service. When disappointed with their online experience, they don’t always stay quiet about it. Research suggests 90% of online shoppers will respond to a bad experience by taking action, from simply never returning to your site to venting their anger on social media.

Recent events prove that even the big brands are vulnerable. Several international airlines have suffered customer database downtimes that caused massive delays, affected customer service and violated trust. Last year, a ‘weekend’ database upgrade at a UK bank locked around two million customers out of their internet and mobile accounts, with some accounts locked for over six weeks.

The cost of solving such problems pales into insignificance when you consider the damage to brand reputation and loss of consumer trust. And that’s the answer to our question: to win the trust of customers, you must never lose it. Ensuring a seamless and error-free customer experience is important to your brand. But, it is essential that consumers know they can trust you with their data.

In a post-GDPR world, there is heightened alertness and lower customer (and regulator) tolerance for database downtimes and security mistakes. When hackers stole over $2.5 million from around 9,000 customers of a supermarket bank, the regulators imposed a $20 million fine on the company. But, you can be certain the database vulnerabilities that made the hack possible cost the company much more in terms of brand damage and loss of customer trust.

What can you do about it – and how can SQLTreeo help?

The good news is that many database breakdowns and breaches are preventable. Regular Health Checks will tune up database performance to improve customer experience. Most importantly, they can pinpoint areas of weakness and vulnerability before they cause serious damage to your brand. Here’s how our experts check the health of your database:

1. We run risk assessments and give you a comprehensive view of your database health

Our Health Check application helps DBAs know what’s going on and what to focus on. It will tell you how well your databases are performing, how effective your back-ups are and alert you to any potential data security breaches. We can also give you specific reports and a clear overview of your server configuration, without making any changes to your database environment.

2. We help you fix any problems

Our Health Check application also identifies ways to improve your system’s performance and stability. You’ll have a report with these recommendations in your mailbox, normally within one hour. Then, if you need any further help with more complex issues, we are always on the other end of the phone.

Our comprehensive Health Checks are the database equivalent of a consultation with an experienced medical specialist. Our experts have seen – and solved – the same problems multiple times already. We will give you a detailed, best practice analysis of your SQL environment and flag potential issues before they cause problems or business disruption.

Remember Warren Buffett’s advice

The repercussions of database disruption can be huge. It will not only cost you money, but also your brand reputation and customer loyalty. As Warren Buffett says, “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.”

Visit https://www.sqltreeo.com/sqltreeo-health-check to find out more about how you can take a proactive approach to keeping your database healthy. We’ll help you give your customers the experience and security they deserve.

Five database management trends to embrace in 2019

07 Aug 2019 0 minutes to read Contributors

In 2018, companies looked for ways to improve their database performance and security. Their aim was to keep up with ever-higher expectations of speed, privacy and returns on investment. Here are five top trends we think will drive business activity in 2019.

1. Security will continue to be your biggest challenge

Cyber security hit the headlines in 2018. Major outages, frightening data breaches and privacy debates infused boardroom conversations across the globe. The millions of users affected by hacks at Facebook, Google+, Marriott and many others just prove that when database security goes wrong, it can go very wrong. The New Year has started the same way with news of a major hack of politicians’ data in Germany.

The big question of ‘how do we make our databases more secure?’ will continue to vex IT managers in 2019. Some will look to data masking to preserve privacy during DevOps projects. For others, where the number of database users has grown significantly over several years, a strategic review of access rights could improve security without damaging business operations.

2. IoT and Cloud computing pose opportunities and risks

As IoT permeates more aspects of our lives, an increasing amount of personal and business data will exist in the Cloud where it can interact with multiple devices. In fact, Gartner estimates that, by 2020, 50 billion devices will be connected to each other. All this brings new risks and challenges as DBAs will still be expected to manage database performance and functionality.

3. Automation is coming

Automation is affecting many workers, from farmers to taxi drivers, and database professionals are not immune. The need for greater efficiency and accuracy, alongside new tools such as Desired State Configuration, is driving the automation of many routine database maintenance tasks.

4. The skills gap is very real

The good news for database professionals is that automation will not eliminate their jobs. In fact, there is a shortage of people with specific database management skills. This is particularly the case for higher-level roles that require experience with tuning and tweaking databases to ensure security, functionality and performance.

5. GDPR is not going away

You might be breathing a huge sigh of relief because you ticked GDPR off your to-do list in 2018. But, you cannot afford to let this slip in 2019. Consumers are more aware than ever that companies have control of their digital information and the authorities will not be lenient with those organizations that are not GDPR compliant and constantly vigilant.

Are you ready for these challenges?

Looking ahead to 2019, we are heartened to see that most businesses are using what they’ve learned from the past few years to be more proactive in maintaining and improving their databases. With data being collected, analyzed and stored at exponential rates, your database is one of your most valuable assets. Preventing data leaks and security breaches should always be a priority and doing nothing is not an option.

What are your top database predictions for 2019? We’d love to know. If you have any thoughts to share or questions to ask, you can contact us here.

Why do we think water and databases are similar?

07 Aug 2019 0 minutes to read Contributors

It’s 20 years since Google was launched, the Euro was agreed, and the Three Tenors sang at the World Cup opening ceremony. That was also the year (1998) when we launched what became SQLTreeo, although some of the team had been working with databases for years before that. Yet, we’re all still as passionate about databases as we were all those years ago.

While celebrating our success, we also asked SQLTreeo’s COO, René Schreuder to reflect on what drives us, why we do what we do, and what the future holds.

Q: How do you keep everyone enthused when they’ve been working in the same field for so long?

A motivated team is important in any business, but that’s the easiest part of my job. Because, for database professionals, no two days are ever the same. It’s such a dynamic sector.

There are always new issues to solve and new approaches to try. We are constantly asking ourselves ‘why is that happening?’ and ‘what can I do about it?’ Our job is to drill down into the problem, analyze it step-by-step and come up with programmatic solutions or process changes to solve it.

It’s a real intellectual challenge. And, every customer has different needs, so we’re never solving the same problem in exactly the same way. That’s good, because the team thrives on variety.

Q: Do you have a particular sector you prefer to work in?

Not really. We work with everyone from call centers and insurance and energy companies, to government departments and the maritime industry. Every customer is interesting in the way they approach a problem or business need.

Sometimes an IT manager will need a particular set of data or maybe a finance director will need bespoke financial reports. They are all completely different. That’s why it’s so much fun to work with databases.

Q: What’s the most important character trait you need to work with databases?

Curiosity. Many people are surprised by that answer because they assume I’ll recite a list of technical skills and qualifications. Those are important of course, but curiosity is more important.

You have to be the type of person who relishes looking at problems from every angle. We also spend a lot of time listening to people to understand exactly what their business issues and needs are. We want to know how they are approaching their markets and how we can make their life easier.

This is a people business. Sometimes that is forgotten in all the talk about programming languages and other industry jargon. But we always want to talk to the people behind the data issues.

Q: What makes you most proud to be involved in the database industry?

We benefit society in so many ways. Engineers are celebrated when they build big infrastructure projects. And, rightly so.

But in this job we protect one of our society’s most important assets: our data. On a personal level, we all want to feel that the confidential information we give to banks, healthcare providers or other companies is safe. And, for businesses, if they don’t get this right it’s a disaster. Data security is a huge responsibility and one we take very seriously.

Q: Where do you see the big opportunities in the next five years?

Technology never stands still and we have to ensure we stay relevant. We are always looking at ways to make data more secure and give more efficient data access. We also work in partnership with our customers to develop new data management approaches that will help them meet their business challenges.

The Internet of Things (IoT) is booming and that means even more data management and reporting will be needed. Data’s like water. It never stops and it doesn’t stand still. But, that’s why it’s such a great industry to work in.

However, this is creating a big issue, in that many companies are becoming overwhelmed with data. They have so much data, they don’t know how to make the most of it. We are developing tools that will make data management easier and more intuitive. In the database world, there are always more interesting things to discover.

The SQLTreeo team are happy to answer any questions you have about how we can help your business manage and optimize your databases. You can contact us here.

Three tips to help your database team avoid burn out this holiday season

07 Aug 2019 0 minutes to read Contributors

As you’re enjoying the festivities this month, spare a thought for your database administrators and IT teams. It’s one of the busiest and most stressful times of the year for them. They’re the ones running around making sure the systems of profit and non-profit organizations alike are running smoothly.

While you welcome the spikes in customer orders every December, your staff are working extra hours to ensure those orders are fulfilled accurately and on time. They’re also striving to answer customer queries in a timely and friendly manner. It can often leave them feeling over-worked and overwhelmed rather than festive.

Ensuring all your databases are in good working order will go a long way to meeting customer demand and helping your staff have an enjoyable holiday season.

Here are our three tips to help your database teams celebrate rather than burn out:

1. Prevention is better than cure

That’s good advice to avoid an office party hangover, but it also applies to your database. Investing in ongoing, proactive database monitoring tools will alert your teams to potential issues with your databases before the pre-holiday rush kicks in.

If there are issues – double records for example – a regular database health check or audit will find the cause of the problem. Normally, we can fix these remotely on the same day. So, instead of reactive fire-fighting, take time to plan, pinpoint problems and solve them before they turn into expensive customer service disasters.
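
To give a feel for what such a check involves: double records can often be found with a simple grouping query. The table and column names below are hypothetical:

-- Find customers that appear more than once, keyed on email address
SELECT Email, COUNT(*) AS occurrences
FROM dbo.Customers
GROUP BY Email
HAVING COUNT(*) > 1
ORDER BY occurrences DESC;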

2. Get extra help if you need it

Sometimes a business is hit with a double whammy. Sales activity increases at the same time as staff go on leave or succumb to the inevitable outbreaks of seasonal colds and flu. That can put your customer service standards and brand reputation at risk.

It also means a huge amount of stress for those team members left to do the work. They’re doing everything they can to hold it all together but sometimes they need an extra pair of hands. When recruiting additional permanent team members just isn’t practical, why not consider outsourcing to a virtual Database Administrator (vDBA)? They can be put to work immediately to resolve a crisis situation or just ease the pressure during busy times.

3. Have a back-up plan

If, despite all your planning, you have a disaster and your ecommerce site goes down on the busiest shopping day of the year, make sure you have a back-up plan in place. Always have an expert on call. Someone you can rely on 24/7 to investigate and hopefully find a solution they can implement as quickly as possible.

This year, let’s make the holidays as relaxed and enjoyable as possible for your customers and your staff. With forward planning, we can prevent many database problems occurring. Of course, if you need us over the holidays, we’ll be here but it’ll be a lot less stressful for everyone if you contact us now.

Would you give our database monitoring the UX thumbs up?

07 Aug 2019 0 minutes to read Contributors

“You’ve got to start with the customer experience and work backwards to the technology.”

You’ve probably heard this, and other similar UX advice, before. We are passionate believers in the idea that technology should help people solve their problems – but the focus should be on the customer not the clever tech. This is an important distinction and one we in the tech industry should never forget.

History is full of brilliant inventions, scientific breakthroughs and ‘light bulb’ moments. For many new products and services, the British saying “necessity is the mother of invention” explains what drives much of this progress. People invent things because they see a problem that needs solving.

This is exactly what happened to SQLTreeo founder Danny Riebeek in 2015. “We were offering managed monitoring services to multiple customers,” explains Danny. “Each had their own database environment and we had to fetch the information, monitor the alerts and assess what was happening. There was a lot of manual intervention that was costing us several hours a week. And, it wasn’t efficient; we wanted to be spending that time supporting our customers in their day-to-day business and on higher value development work.”

In summary, Danny had three main issues with the database monitoring tools available at the time:
  • They didn’t deliver all the information we needed.
  • There was no central dashboard showing all our customers.
  • We needed it to be faster and more accurate.
Danny decided to dig deeper. He knew that other businesses probably had similar issues. He also felt that, as companies started to rely on multiple cloud-based systems, they would find it challenging to get a reliable overview and accurate metrics of all the databases running across their business.

“I needed a monitoring tool that would let different environments communicate, that was efficient and did away with the manual work,” continues Danny. “I was sure other people would need it too, so I set about designing my ideal database monitoring solution.” The result is a unique, cloud-based monitoring tool that customers can use in different infrastructures and for as many users in as many countries as needed.

Customers can configure the messaging system from any location and retrieve all their performance metrics from the cloud. Other benefits include the monitoring of data replications and high availability environments, and granular security features.

The real test is always, ‘what do our customers think?’

“The monitoring tool is really easy to use,” says René Schreuder, SQLTreeo’s COO. “Our customers need to feel confident that their databases are working well. If there’s an issue, with the disk space for example, they need to know about it before it becomes a threat to the business.

“This tool raises those red flags in one easy-to-follow dashboard. Now, you don’t need a technical background to get insider knowledge of how your database environment is running. Our customers tell us that’s a huge relief.”

“Ultimately,” Danny concludes, “I wanted to design the perfect monitoring service I needed in my own business. But, it’s never finished. It’s a process of continuous improvement as we incorporate customer feedback or changes to the software or business environment.”

This relentless customer focus is why we built our database monitoring tool. You can find more information and download a free demo here. Please let us know if it gets your thumbs up.

A Health Check can keep your SQL database in great shape

07 Aug 2019 0 minutes to read Contributors

Modern businesses are data-driven. Many business leaders will tell you it is the “lifeblood” of their company. Good data helps us provide outstanding service to our customers, identify potential new clients, generate sales leads, track competitors, improve business processes and so much more.

You would think that protecting and optimizing this precious data would be the #1 business priority. Yet we regularly hear of disastrous security breaches and catastrophic database crashes. These seem to happen to everyone from airlines to government departments – and they destroy public trust as well as reputations.

Prevention is better than cure

We all know that eating well, exercising regularly and getting enough sleep is the best way to stay healthy and minimize the risk of serious illness. Similarly, we know it makes sense to look after our homes, our cars and even our bicycles. That’s because neglecting your own health or your car’s annual maintenance can have serious consequences.

Failure to maintain your SQL database can be just as serious. Misconfigurations and security vulnerabilities will put your data – even your whole organization – at risk. That’s why effective database management has to be about more than reacting to issues and disaster recovery.

Proactive management must involve performance monitoring and regular health checks. You can use these to identify potential problems and optimization issues before they turn into business inefficiencies or a PR disaster.

Here are the three questions a database health check should answer:

1. Are your servers and databases optimally configured?

Businesses lose thousands of work hours every year because of poorly performing databases. The most common complaints to IT helpdesks are to do with slow databases, frequent glitches or difficulties running timely and accurate reports. Such problems tend to indicate that a database is sized incorrectly, is blocked by too many resource-intensive processes running during business hours, or is configured incorrectly.
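
As a minimal sketch of one such check, the query below uses standard dynamic management views to show which sessions are currently being blocked and by whom:

-- Show requests that are waiting on another session
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;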

These problems are frustrating for your employees. They also mean your IT team has to spend time firefighting and troubleshooting, rather than working on higher value development work. Moving from a reactive to a proactive approach to database management is the best way to avoid disaster.

2. Do you have the right data security controls in place?

Do the right people – and only the right people – have access to each area of your database? Credit card numbers, for example, should not be available to everyone. A database health check can show the access configurations and give you the information you need to prevent PR disasters and GDPR non-compliance.
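
For illustration, a permissions review can start from the catalog views; this sketch lists the explicit object- and column-level permissions granted in the current database:

SELECT pr.name AS principal_name,
       pe.permission_name,
       pe.state_desc,                        -- GRANT or DENY
       OBJECT_NAME(pe.major_id) AS object_name,
       c.name AS column_name                 -- NULL for whole-object grants
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pr.principal_id = pe.grantee_principal_id
LEFT JOIN sys.columns c ON c.object_id = pe.major_id AND c.column_id = pe.minor_id
WHERE pe.class = 1                           -- object/column permissions
ORDER BY pr.name, object_name;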

3. Is your backup configuration as effective as it could be?

Losing customer data is a nightmare for any company. Even having out-of-date or incomplete data is frustrating for your customers.
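
One basic check you can run yourself, assuming the default msdb backup history is in place, is when each database was last backed up:

SELECT d.name AS database_name,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
WHERE d.database_id <> 2                     -- tempdb is never backed up
GROUP BY d.name
ORDER BY last_full_backup;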

If data is the lifeblood of your organization, then your database is its heart. When this vital asset is running efficiently, your employees and your customers will be satisfied, and your team will be able to boost sales and track the profitability of each client. As an IT manager, you need to have all your database health information to hand at all times, so you can see at a glance if there are any issues.

Are you confident your database is as healthy as it could be? Our Health Checks identify current and potential issues with your database. Get in touch and we’ll be happy to show you how this works and answer any questions on how to optimize the health of your SQL database.

Three ways a virtual Database Administrator (vDBA) could be just what your business needs

07 Aug 2019 0 minutes to read Contributors

How often have you felt like you need an extra pair of hands? When you’re planning a party? Maybe. When moving home? Almost certainly.

But what about in business?

Do you ever feel that an extra pair of hands would help you get through your list of database projects? Or that some specialist expertise could help you solve a particular SQL problem? When it comes to your SQL database, if you answer ‘yes’ to either of those questions, then a virtual Database Administrator (vDBA) could be the perfect solution.

Here are three ways a vDBA could be just what your organization needs:

1. You’ll get the extra help when you need it

When your database maintenance workload seems insurmountable, there’s no need to go through the time and expense of recruiting a new team member. We can have your virtual Database Administrator up and running within one hour. You can have that extra pair of hands right away.

They’ll be available to help at any time from 9am to 5pm. And, there’s no contract, lock-in or monthly subscription to pay. All we ask is that you book a minimum of 20 hours of vDBA time.

We estimate that our vDBA service works out around 70% cheaper than paying for a similar help desk service on an hourly rate. And, most importantly, your project isn’t derailed because of lack of time or resources. We fix the database issues and you get on with running your business.

2. You’ll have a database expert on hand

All our vDBAs are senior professionals with expertise in SQL Server Development, Replication, ETL and other data-oriented fields. They’ve not only done the courses and passed the exams. They have also gained many years of relevant database work experience.

As they work across business sectors, they’re asked different questions and solve multiple problems every day. This experience builds their knowledge and fine tunes their skills in everything from server deployment and always-on systems, to performance optimization and reporting in many languages. It’s like having a chef work in different kitchens – their experience teaches them how to prepare the best meals and they learn how to make the dishes better and better each time.

3. You’ll have a trusted point of contact

You can rest assured that, even though your database administrator is working remotely, we have systems in place to ensure data security and quality control. We work in a totally transparent way and you remain in full control of your workflows at all times. Our vDBAs agree the tasks to be done in advance and provide reports on what, when and how each task is completed.

Our vDBAs are trusted to work with any company of any size and in any sector – from a small business to a major government department. They quickly become an extension of your team with the specialist knowledge to troubleshoot and solve problems quickly. There really is no easier way to manage your database.

If you have any questions, please get in touch. We’d be happy to talk more about how our vDBAs can help you overcome any databases issues you may have.

Four tools to help implement Database Continuous Delivery

07 Aug 2019 0 minutes to read Contributors

“We don’t want to know that it’s broken, just that it’s working.”
“We want our database to always be available.”

These are the types of comments we hear from business leaders all the time. And, we agree. You have a business to run, customers to serve and people to manage, motivate and develop. You do not want to be worrying about your database maintenance - you just want it to work.

Adopting a Continuous Delivery approach to database management and maintenance can speed up development, allowing new features to be introduced faster while reducing errors.

We have identified four innovations that can help with Database Continuous Delivery:

1. Desired State Configuration (DSC)

SQLTreeo’s Desired State Configuration tool allows you to state the configuration you want for your database and then automate various changes and updates. Customizing your SQL servers can save your organization significant time and money. Identifying specific elements, such as the type of server, disk space, management settings, and group and user accounts, means deployment is largely automated and significantly simplified. Automating as much as possible also means fewer human errors.
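
The tool itself is not shown here, but the underlying idea can be sketched in plain T-SQL: declare the configuration values you want, then report any drift. The desired values below are examples only:

DECLARE @desired TABLE (name sysname, desired_value sql_variant);
INSERT INTO @desired VALUES
    ('max degree of parallelism', 4),
    ('cost threshold for parallelism', 50),
    ('backup compression default', 1);

-- Any rows returned indicate drift from the desired state
SELECT c.name, c.value_in_use AS current_value, d.desired_value
FROM sys.configurations c
JOIN @desired d ON d.name = c.name
WHERE c.value_in_use <> d.desired_value;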

2. Data Masking

Currently, most database testing is done using test data (which can be inaccurate and unreliable) rather than real live data. We are looking to solve this issue by developing data masking software. With just one click, we will be able to make all private data anonymous, allowing testing and reporting to be carried out more quickly and accurately while remaining GDPR compliant.

3. Monitoring and Reporting Tools

Effective database maintenance tools make standard reporting and monitoring easier. For example, we can compare two parts of scripts and tell you what code lines are incorrect. It means we can troubleshoot code faster.

We have built our monitoring tools from scratch and can add new customized features easily. We can identify potential problems and solutions, build in alerts and proactively work on issues before they affect the business. By monitoring script queries, we help keep your system performance reliable.
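
A simple example of the kind of proactive check involved, using standard SQL Server views to watch free disk space on every volume that hosts database files:

SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes / 1073741824 AS total_gb,
       vs.available_bytes / 1073741824 AS free_gb
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
ORDER BY vs.volume_mount_point;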

By the time you notice a problem with your database – it’s not working, or there’s a problem with your customer data – it’s too late; your customers are already annoyed and alienated and you may have already lost money. Continuous proactive monitoring tools ensure you fix problems before they become serious, protecting your brand reputation.

4. Security Management Tools

This is every IT Director’s worst nightmare. No-one wants to see their company or customer data on the street. Security management tools can be as simple as setting different access levels, so some users have read-only access while others have the ability to alter or delete data.
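
In SQL Server terms, that can be as little as a few GRANT and DENY statements; the role, schema and user names below are examples:

-- A read-only reporting role: can query the Sales schema, cannot change it
CREATE ROLE reporting_readonly;
GRANT SELECT ON SCHEMA::Sales TO reporting_readonly;
DENY INSERT, UPDATE, DELETE ON SCHEMA::Sales TO reporting_readonly;
ALTER ROLE reporting_readonly ADD MEMBER report_user;   -- an existing database user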

Businesses also need information to make effective management decisions and to comply with security and auditing regulations. With the right plug-ins Database Administrators can report on issues, such as who has logged in, for how long, what workflows were running and what type of data was transferred, with just one click.

We’d be happy to answer any questions you have about how these innovations can help your business. You can contact us here.

Three skills to look for in a technology partner

07 Aug 2019 0 minutes to read Contributors

SQLTreeo’s founder and CEO, Danny Riebeek, reflects on his 25+ years in the IT industry.

Q: What prompted you to set up SQLTreeo?

When I talked to key decision makers – CEOs, finance directors and IT managers – about monitoring and optimizing their SQL Servers, I kept hearing the same three things. They were looking for:

  • Technical expertise
  • Fast, reliable service
  • Flexible, collaborative working relationships.

I keep those three principles front of mind with every product we develop, every service we launch and every customer we work with.

Q: How did you establish SQLTreeo as a credible expert in the market?

We have always been a very technical company. I worked in IT, with databases, for many years before setting up the business. When formulating services and developing products that customers will want to buy, there is no substitute for technical skills.

We are sector neutral: anyone who uses Microsoft SQL Server is a potential customer. We work with everyone from call centers and insurance and energy companies to government departments and the maritime industry. They all value our highly experienced team of senior people with the hands-on knowledge to solve their problems.

Q: Have you seen many changes in the market since founding SQLTreeo?

Yes, definitely. All our customers operate in fast growing business sectors. They often need to change their tactics quickly and we need to be pragmatic and flexible to adapt. Of course, when something in IT doesn’t work, everyone wants it solved quickly. That never changes.

We have the knowledge and expertise because we do this niche job all the time. Those specialist skills – plus our enthusiasm for the work – mean we are highly responsive. It enables us to troubleshoot problems and provide straightforward answers to our customers 24/7/365.

Q: What are the most common database problems you come across?

I would say that about 90% of customers’ problems are disaster recovery situations. The Microsoft system appears easy to install, so many customers don’t know they have a problem until something goes wrong. This is where our troubleshooting and process improvement skills come to the fore.

Performance issues and slow systems are another common scenario. A sales or data input application, for example, could be taking three to four seconds to save. That might not sound like much, but over the course of a day, it makes the business far less efficient.

Managing the huge amount of data our customers collect can also become an issue as they expand. They sometimes need a new application or platform and we work with them to design those. Or, we advise them on process improvements to boost efficiency or on how to consolidate their data to make it more accessible and reduce storage costs.

Q: How can businesses minimize the risk of problems occurring?

By working collaboratively with their IT providers. We work closely with our customers’ IT, maintenance and database teams and take responsibility for any problems – just like a team member. Auditing the environment, identifying threats and proactively maintaining systems also reduces potential problems.

We help by turning our knowledge into monitoring services. For instance, customers can call our virtual Database Administrator (vDBA) service and ask questions at any time. All our tools and products enable customers to manage their databases more efficiently. If they have a specific problem or goal, we can run bespoke workshops to upscale their internal expertise.

Thank you, Danny. We’d be happy to answer any questions you have about how SQLTreeo can help your business manage and optimize your databases.

We’d also love to hear your thoughts. If you have any topics, concerns or questions that you’d like us to cover in future blogs, please let us know. You can contact us here.

Becoming GDPR Compliant

07 Aug 2019 0 minutes to read Contributors

On 25 May 2018, the General Data Protection Regulation (GDPR) comes into force. This new European privacy legislation applies to all member states of the European Union. Its aim is to give citizens back control over their personal data and to protect their privacy. In practice, the GDPR applies to practically all companies and organizations. This document tells you what it all involves, link: GDPR-Compliant worden.

Perform an I/O stress test before you take your SQL Server into production

07 Aug 2019 0 minutes to read Contributors

Stress testing your SQL Server DATA, LOG and TEMPDB drives before go-live is a wise thing to do. This way you gain knowledge about the maximum I/O capacity of each drive.

This knowledge will help whenever you need to troubleshoot your SQL Server's I/O throughput: if you can compare the live numbers with the maximum throughput, you will know whether you have reached your maximum or not.
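
To capture those running numbers on a live server, you can query the I/O statistics SQL Server keeps per database file; a minimal sketch:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       (vfs.num_of_bytes_read + vfs.num_of_bytes_written) / 1048576 AS total_mb,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf ON mf.database_id = vfs.database_id
                        AND mf.file_id = vfs.file_id
ORDER BY total_mb DESC;   -- figures are cumulative since the last restart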

Components

The stress test consists of a series of tests with the sqlio.exe program. We have packaged the program (SQLIO.EXE) together with several .CMD files and a README.TXT into a zip file. You can download it here.

* SQLTreeo (c) 1998 till current date
* README.TXT
* RUNSQLIO.cmd
* RUNSQLIO_data.cmd
* RUNSQLIO_log.cmd
* RUNSQLIO_tempdb.cmd
* SQLIO.EXE

Each of the RUNSQLIO .cmd files performs a series of tests on a specific drive (-d[driveletter]):
RUNSQLIO_data.cmd is currently configured to run on -dI
RUNSQLIO_log.cmd is currently configured to run on -dT
RUNSQLIO_tempdb.cmd is currently configured to run on -dX

You should run RUNSQLIO.CMD (it will execute all of the scripts above).

  • If you have different drives, then change that parameter in the file.
  • The output of each cmd will go to a sqlioresults .txt file. If you feel the need to change that filename, you are free to do so.
  • Ignore the first error (if any) that comes out during execution.
  • Make sure the user account has write permissions in the root of the drives.
  • If you feel the need to start over, delete the result file(s) first.
  • After the stress tests, send the result files to servicedesk@sqltreeo.com.
  • If you have any questions, send a mail to servicedesk@sqltreeo.com.

After the stress test files have been received, you will receive back an analysis showing your MB/s and IOPS capacity.

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

SQLTreeo SSMS Addin

The SQLTreeo SSMS Addin is an application developed by SQLTreeo.

This software allows users to create custom folders in SQL Server Management Studio - Object explorer.

It supports SSMS 2005 up to the latest SSMS version.

SQLTreeoDBASuite

07 Aug 2019 0 minutes to read Contributors

Concept

Provide an improved overview of available products, and make it easy to add new modules or upgrade existing modules with minimal effort.

The SQLTreeoDBASuite allows new and existing users to quickly access and manage their modules by having everything in one place.

 

Accessing the DBASuite

The DBASuite becomes available when you download the free trial version of any product or purchase a product.

After the download/purchase, a download link will be sent to the email address you entered; use this link to download the DBASuite.

The email address you entered is also your username. After you have followed the instructions in the email, you can log in to the DBASuite.

 

Modules

Several actions are possible in the modules:

- Download (will start downloading the module files)

- Information (will redirect you to the module summary)

- Delete (will remove an installed module)

- Upgrade (will upgrade a downloaded module when a new version is available)

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

SQL Server Management Studio

  • SQL Server Management Studio must be installed on the computer before the installation of the SQLTreeo SSMS Addin.
  • SQLTreeo will be installed in all locally installed SSMS versions.
  • Supported versions: SSMS 2005, SSMS 2008, SSMS 2008R2, SSMS 2012, SSMS 2016, SSMS 2017, SSMS 2018, SSMS 2019 and up.

Windows

  • Minimum .Net version: .Net 4.0.
  • Minimum Windows version: Windows XP with additionally installed .Net 4.0.
  • Minimum Windows version with pre-installed .Net 4.0: Windows 8 / Latest Windows Server (all editions).
  • HDD footprint is about 4MB.


Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

Features

The SQLTreeo Addin has the following features and functions available:

  • Create static folder
  • Create dynamic folder
  • Compare objects
  • Quick filter
  • Object information
  • Collapse all
  • Script CREATE/INSERT TO
  • Manage folders

Features per version

SQLTreeo Static Folders

  • Available from SSMS version 2005 and up.

SQLTreeo Dynamic Folders

  • Available from SSMS version 2016 and up.

SQLTreeo Compare

  • Available from SSMS version 2016 and up.
  • Requires license for Compare.

If SQL Server Management Studio is installed after SQLTreeo, then reinstall the SQLTreeo SSMS Addin.

Function / SQL Server version   2008   2008R2   2012   2014   2016   2017   2018   2019
Static folder                    x       x       x      x      x      x      x      x
Dynamic folder                                                 x      x      x      x
Compare                                                        x      x      x      x
Quick filter                                                   x      x      x      x
Object information               x       x       x      x      x      x      x      x
Collapse/script/manage           x       x       x      x      x      x      x      x

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

SQL Server Management Studio – Object tree

Depending on the SSMS version, some features are not available; refer to Features to see which features are available for your version.

Listed below are the objects in which the SQLTreeo features are available:

Object / Function          Static folder   Dynamic folder   Compare   Quick filter
Databases                        x               x             x           x
Tables                           x               x             x           x
Views                            x               x             x           x
Stored procedures                x               x             x           x
Table-valued Functions           x               x             x           x
Scalar-valued Functions          x               x             x           x
Agent jobs                       x               x             x           x

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

Quick filter

The Quick Filter is a function that filters database objects, such as tables, based on the input value. The Quick Filter is much like the default SSMS filter function, but in a more convenient place.

When you enter a value in the “click here to filter” folder, it will immediately start filtering for any targets that fulfill the value condition, allowing for fast searching.

The Quick Filter function works per object in the SSMS explorer; it is not possible, for example, to enter a value in the "Tables" object to find targets in "Views".

The filter does not change the order of existing objects but simply presents them below the folder.

The Quick Filter function is available in several tree objects, as shown in the Object tree overview.

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

Static folders

Static folders are folders that the user can customize; they hold database objects belonging to the tree node in which they are created.

Static folders cannot be created inside System databases.

Right-clicking on an object tree component opens a context menu with the option “New Folder..”, which creates a new folder. The folder name can be in any language.

You can fill the folder by dragging database objects into it (note that a folder can only hold objects of the node it was created under; e.g. a folder created under the Tables node cannot contain stored procedures or views).

Another option is to right-click the folder and select “Manage folder” to open the folder interface in which you can quickly select multiple tables to be placed in the folder at once.

Once objects are placed in the folder it can be expanded to show the content inside.

If a folder is created but no objects are placed inside it, the folder will disappear upon a refresh of the sub-menu or higher.

Created folders are registered inside the database in which they are created; when database storage is selected, this information is visible in the extended properties of that database.
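
If you want to inspect this yourself, the database-level extended properties can be listed with a short query (the exact property names used by the Addin may vary per version):

SELECT name, value
FROM sys.extended_properties
WHERE class = 0;   -- class 0 = properties set on the database itself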

Guide to SQLTreeo Addin

07 Aug 2019 0 minutes to read Contributors

Dynamic Folders

Dynamic folders can be given certain criteria to group all objects that meet the value condition.

By right-clicking and selecting “Create dynamic folder”, a menu will open where you can name the folder and insert the condition value; upon creation, the folder will automatically group all objects that meet the condition.

Dynamic folders cannot be created in the system databases

Selecting the properties of the folder opens the interface in which you can adjust the value condition and folder name.
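
A dynamic folder's condition is essentially a name pattern evaluated against the objects in a node. The sketch below illustrates the grouping a condition such as 'Sales%' expresses; the CASE mapping is purely illustrative and is not how the add-in itself is implemented.

-- Illustrative grouping: tables whose names match the folder condition
-- land in the dynamic folder, the rest stay at the node root.
SELECT t.name AS table_name,
       CASE WHEN t.name LIKE N'Sales%'
            THEN N'Sales (dynamic folder)'
            ELSE N'(ungrouped)'
       END AS folder
FROM sys.tables AS t
ORDER BY folder, table_name;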


Compare

The SQLTreeo Add-in comes with an additional module that allows users to compare objects on the same server or on different servers with each other.

It is possible to add multiple targets, allowing for quick and easy comparison in, for example, a DTAP environment.

The compare module is a separate feature of the SQLTreeo Add-in and is only included during the trial period or when purchased separately.

The link below is a tutorial on how to configure and work with the compare function.

SSMS ADDIN Compare Module – How it Works 



Plugin Load Troubleshooting

Troubleshooting using the Windows assembly-load logging tool, Fuslogvw.exe

  1. Create a temp folder on the local PC where the logging output will be written, e.g. c:\temp\fuslog.

  2. Find the 'FUSLOGVW.exe' tool. It might already be installed on the system. A typical location, depending on the .NET SDK version, would be C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.7.2 Tools.
    If the tool is not installed, it can be downloaded from the Jira.

  3. Run FUSLOGVW.exe as Admin. Click the Settings button. In the opened Log Settings window, select 'Log all binds to the disk'. Tick the 'Enable custom log path' checkbox, and enter the path from step (1), e.g. c:\temp\fuslog, in the 'Custom log path' field.

  4. Click OK.
    NOTE: this utility must NOT be left unattended. The logging SHOULD be stopped after the testing is done, to avoid significant overhead on the system.

  5. Run SSMS and replicate the plugin issue that is being troubleshot.

  6. Collect the log files from the folder whose location was entered in step (3). Please zip the entire content of that folder; it will contain multiple folders and files.

  7. CRITICAL!!! Switch back to the utility. Click the Settings button. In the opened window, tick the 'Log Disabled' radio button in the Settings group. Click OK. Close the Fuslogvw utility.

SQLTreeo Addin Logging

  1. Run SSMS and replicate the plugin issue that is being troubleshot.

  2. Collect the log files from the following folder:
    %appdata%\..\Local\SQLTreeo\SQLTreeo\logs

Running SSMS With Logging Enabled

  1. Create a temp folder on the local PC where the logging output will be written, e.g. c:\temp\ssms_log.

  2. Find SQL Server Management Studio in the Windows Start menu, either using the 'Type here to search' prompt or the Start menu programs list.

  3. Right-click the SSMS menu item and select 'Open file location'.

  4. In the opened window that contains shortcuts, select the one that starts SSMS. Press Ctrl+C and Ctrl+V to duplicate it. Rename the copy (F2) to something like 'SSMS 2017 with log'.

  5. Right-click the new shortcut and select Properties.

  6. At the end of the 'Target' field, add the following, then click OK:
    <Existing path to the SSMS exe> -log "c:\temp\ssms_log\log.xml"

  7. Run SSMS using the new shortcut and replicate the plugin issue that is being troubleshot.

  8. Collect the log files from the c:\temp\ssms_log folder, and upload them to SQLTreeo Plugin Support for further investigation.
    The log files will be named 'log.xml', 'log1.xml', ....
    Please also provide the ActivityLog.xsl file that will be in the same folder.

SSMS *.config-Controlled Logging

  1. Find SQL Server Management Studio in the Windows Start menu, either using the 'Type here to search' prompt or the Start menu programs list.

  2. Right-click the SSMS menu item and select 'Open File Location'.

  3. In the opened window that contains shortcuts, right-click the SQL Server Management Studio item, choose Properties, and then click the Open File Location button.

  4. Find a file named Ssms.exe.config. Make sure it contains the following:

<system.diagnostics>
  <sources>
    <source name="UserSettings" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch">
      <listeners>
        <add name="logFile"
             type="Microsoft.SqlServer.Diagnostics.STrace.LogFileTraceListener,&#xD;&#xA; Microsoft.SqlServer.Diagnostics.STrace, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
             logPath="%TEMP%\SSMS\"
             encrypt="false">
        </add>
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="SourceSwitch" value="Verbose" />
  </switches>
  <trace autoflush="true" indentsize="2" useGlobalLock="false"></trace>
</system.diagnostics>

  5. Run SSMS and replicate the plugin issue that is being troubleshot.

  6. Collect the log files from the %TEMP%\SSMS folder, and upload them to SQLTreeo Plugin Support for further investigation.

SQL Server Release History


 

The table below shows the complete version history of SQL Server. SQLTreeo has extensive experience with Microsoft SQL Server, going back to 1993 and SQL Server 4.21a.

 
Support status                      Version           Year  Release name                           Code name                   Internal database version
Old version, no longer supported    1.0 (OS/2)        1989  SQL Server 1.0 (16-bit)                Filipi                      -
Old version, no longer supported    1.1 (OS/2)        1990  SQL Server 1.1 (16-bit)                Pietro                      -
Old version, no longer supported    4.2A (OS/2)       1992  SQL Server 4.2A (16-bit)               -                           -
Old version, no longer supported    4.2B (OS/2)       1993  SQL Server 4.2B (16-bit)               -                           -
Old version, no longer supported    4.21a (WinNT)     1993  SQL Server 4.21a                       SQLNT                       -
Old version, no longer supported    6.0               1995  SQL Server 6.0                         SQL95                       406
Old version, no longer supported    6.5               1996  SQL Server 6.5                         Hydra                       408
Old version, no longer supported    7.0               1998  SQL Server 7.0                         Sphinx                      515
Old version, no longer supported    -                 1999  SQL Server 7.0 OLAP Tools              Plato                       -
Old version, no longer supported    8.0               2000  SQL Server 2000                        Shiloh                      539
Old version, no longer supported    8.0               2003  SQL Server 2000 64-bit Edition         Liberty                     539
Old version, no longer supported    9.0               2005  SQL Server 2005                        Yukon                       611/612
Older version, yet still supported  10.0              2008  SQL Server 2008                        Katmai                      655
Old version, no longer supported    10.25             2010  Azure SQL database (initial release)   Cloud database or CloudDB   -
Older version, yet still supported  10.50             2010  SQL Server 2008 R2                     Kilimanjaro (aka KJ)        661
Older version, yet still supported  11.0              2012  SQL Server 2012                        Denali                      706
Older version, yet still supported  12.0              2014  SQL Server 2014                        SQL14                       782
Older version, yet still supported  13.0              2016  SQL Server 2016                        SQL16                       852
Current stable version              14.0              2017  SQL Server 2017                        Helsinki                    869
Latest preview version              Not yet released  2019  SQL Server 2019 CTP                    Seattle                     895
 

source: https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Server
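
To relate one of your own databases to the internal database version column above, you can query the documented 'Version' property (shown here for master; the value reflects the version of SQL Server that last upgraded the database):

-- Internal database version, matching the last column of the table
-- above (for example, 869 for a database created by SQL Server 2017).
SELECT DATABASEPROPERTYEX(N'master', 'Version') AS internal_database_version;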

SQL Server Versions and Build Numbers


Build numbers per SQL Server version, from RTM through service packs to the most recent update:

Version            RTM            SP1           SP2          SP3          SP4          Most recent version
SQL Server 2019    -              -             -            -            -            15.0.1800.32 (CTP 3.2)
SQL Server 2017    14.0.1000.169  -             -            -            -            14.0.3223.3 (RTM CU16)
SQL Server 2016    13.00.1601.5   13.0.4001.0   13.0.5026.0  -            -            13.0.5426.0 (SP2 CU8)
SQL Server 2014    12.00.2000.8   12.0.4100.1   12.0.5000.0  12.0.6024.0  -            12.0.6329.1 (SP3 CU4)
SQL Server 2012    11.00.2100.60  11.0.3000.0   11.0.5058.0  11.0.6020.0  11.0.7001.0  11.0.7469.6 (fix for SP4)
SQL Server 2008R2  10.50.1600.1   10.50.2500    10.50.4000   10.50.6000   -            10.50.6560.0 (KB4057113)
SQL Server 2008    10.00.1600.22  10.00.2531    10.00.4000   10.00.5500   10.00.6000   10.00.6556.0 (KB4057114)
SQL Server 2005    9.00.1399.06   9.00.2047     9.00.3042    9.00.4035    9.00.5000    9.00.5324 (MS12-070)
SQL Server 2000    8.00.194       8.00.384      8.00.532     8.00.760     8.00.2039    8.00.2305 (MS12-060)
SQL Server 7.0     7.0.623        7.0.699       7.0.842      7.00.961     7.00.1063    7.00.1152 (MS08-040)
SQL Server 6.5     6.50.201       See below     -            -            -            6.50.416 (SP5a)
SQL Server 6.0     6.00.121       6.00.124      6.00.139     6.00.151     -            6.00.151 (SP3)

Source: https://buildnumbers.wordpress.com/sqlserver/
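
To find where a running instance falls in this table, query its server properties:

-- Build number, servicing level, and edition of the connected instance.
SELECT SERVERPROPERTY('ProductVersion') AS build_number,    -- e.g. 13.0.5426.0
       SERVERPROPERTY('ProductLevel')   AS servicing_level, -- e.g. RTM, SP2
       SERVERPROPERTY('Edition')        AS edition;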

Release notes for SQL Server Management Studio (SSMS)


This article provides details about updates, improvements, and bug fixes for the current and previous versions of SSMS.

SSMS 18.2

Download: Download SSMS 18.2
Build number: 15.0.18142.0
Release date: July 25, 2019

SSMS 18.2 is the latest general availability (GA) release of SSMS. If you need a previous version of SSMS, see previous SSMS releases.

18.2 is an update to 18.1 with the following new items and bug fixes.

What's new in 18.2

New Item Details
Integration Services (SSIS) Perf optimization for SSIS package scheduler in Azure.
Intellisense/Editor Added support for Data Classification.
OPTIMIZE_FOR_SEQUENTIAL_KEY Added Intellisense support.
OPTIMIZE_FOR_SEQUENTIAL_KEY Turns on an optimization within the database engine that helps improve throughput for high-concurrency inserts into the index. This option is intended for indexes that are prone to last-page insert contention, typically seen with indexes that have a sequential key such as an identity column, sequence, or date/time column. See CREATE INDEX for more details, and the short sketch after this table.
Query Execution or Results Added a Completion time in the messages to track when a given query completed its execution.
Query Execution or Results Allow more data to be displayed (Result to Text) and stored in cells (Result to Grid). SSMS now allows up to 2M characters for both (up from 256 and 64 K, respectively). This also addressed the issue of users not able to grab more than 43680 chars from the cells of the grid.
ShowPlan Added a new attribute in QueryPlan when inline scalar UDF feature is enabled (ContainsInlineScalarTsqludfs).
SMO Added support for Feature Restrictions. For more information on the feature itself, see Feature Restrictions.
SMO Added support for SQL Assessment API. For more information, see SQL Assessment API.
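
As a quick illustration of the OPTIMIZE_FOR_SEQUENTIAL_KEY option mentioned above, a minimal sketch (the table and index names are hypothetical; requires SQL Server 2019 or later):

-- Hypothetical table with a sequential (identity) key, the pattern
-- prone to last-page insert contention that this option targets.
CREATE TABLE dbo.OrderEvents
(
    OrderEventId bigint IDENTITY(1,1) NOT NULL,
    Payload      nvarchar(400) NOT NULL
);

CREATE UNIQUE CLUSTERED INDEX CIX_OrderEvents
    ON dbo.OrderEvents (OrderEventId)
    WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);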

Bug fixes in 18.2

New Item Details
Accessibility Updated the XEvent UI (the grid) to be sortable by pressing F3.
Always On Fixed an issue where SSMS was throwing an error when trying to delete an Availability Group (AG).
Always On Fixed an issue where SSMS was presenting the wrong failover wizard when replicas are configured as Synchronous when using read-scale AGs (cluster type = NONE). Now, SSMS presents the wizard for the Force_Failover_Allow_Data_Loss option, which is the only one allowed for cluster type NONE availability groups.
Always On Fixed an issue where the wizard was restricting the number of allowed synchronizations to three.
Data Classification Fixed an issue where SSMS was throwing an Index (zero-based) must be greater than or equal to zero error when trying to view data classification reports on databases with CompatLevel less than 150.
General SSMS Fixed an issue where the user was unable to horizontally scroll the Results pane via the mouse wheel. See UserVoice for more details.
General SSMS Updated Activity Monitor to ignore the benign wait type SQLTRACE_WAIT_ENTRIES.
General SSMS Fixed an issue where some color options (Text Editor > Editor Tab and Status Bar) weren't persisted. See UserVoice for more details.
General SSMS In the connection dialog, replaced Azure Active Directory - Universal with MFA support with Azure Active Directory - Universal with MFA (functionality is the same, but hopefully it's less confusing).
General SSMS Updated SSMS to use correct defaults values when creating an Azure SQL Database.
General SSMS Fixed an issue where the user was not able to Start PowerShell from a node in Register Servers when the server is a SQL Linux container.
Import Flat File Fixed an issue where Import Flat File was not working after upgrading from SSMS 18.0 to 18.1. See UserVoice
Import Flat File Fixed an issue where Import Flat File Wizard was reporting a duplicate or invalid column on a .csv file with headers with Unicode characters.
Object Explorer Fixed an issue where some menu items (for example, SQL server Import and Export Wizard) were missing or disabled when connected to SQL Express. See UserVoice for more details.
Object Explorer Fixed an issue, which was causing SSMS to crash when an object is dragged from Object Explorer to the editor. See UserVoice for more details.
Object Explorer Fixed an issue where renaming databases was causing incorrect database names to show up in Object Explorer. See UserVoice for more details.
Object Explorer Fixed a long outstanding issue where trying to expand the Tables node in Object Explorer for a database, which is set to use a collation that isn't supported by Windows anymore triggers an error (and the user can't expand their tables). An example of such collation would be Korean_Wansung_Unicode_CI_AS.
Register Servers Fixed an issue where trying to issue a query against multiple servers (under a Group in Registered Servers) when the Registered Server uses either Active Directory - Integrated or Azure Active Directory - Universal with MFA did not work because SSMS failed to connect.
Register Servers Fixed an issue where trying to issue a query against multiple servers (under a Group in Registered Servers) when the registered server uses either Active Directory - Password or SQL Auth and the user chose not to remember the password would cause SSMS to crash.
Reports Fixed an issue in Disk Usage reports where the report was failing when data files had a vast number of extents.
Replication Tools Fixed an issue where Replication Monitor was not working with the publisher DB in an AG and the distributor in an AG (this was previously fixed in SSMS 17.x).
SQL Agent Fixed an issue where adding, inserting, editing, or removing job steps was causing focus to be reset to the first row instead of the active row. See UserVoice for more details.
SMO/Scripting Fixed an issue where CREATE OR ALTER was not scripting objects that had extended properties on them. See UserVoice for more details.
SMO/Scripting Fixed an issue where SSMS wasn't able to script CREATE EXTERNAL LIBRARY correctly. See UserVoice for more details.
SMO/Scripting Fixed an issue where trying to run Generate Scripts against a database with a few thousand tables was causing the progress dialog to appear to be stuck.
SMO/Scripting Fixed an issue where scripting of External Table on SQL 2019 did not work.
SMO/Scripting Fixed an issue where scripting of External Data Source on SQL 2019 did not work. See UserVoice for more details.
SMO/Scripting Fixed an issue where extended properties on columns weren't scripted when targeting Azure SQL DB. See stackoverflow for more details.
SMO/Scripting Last-page insert: added the SMO property Index.IsOptimizedForSequentialKey.
SSMS Setup Mitigated an issue where SSMS setup was incorrectly blocking the installation of SSMS reporting mismatching languages. This could have been an issue in some abnormal situations, like an aborted setup or an incorrect uninstall of a previous version of SSMS. See UserVoice for more details.
XEvent Profiler Fixed a crash when the viewer is being closed.

Known issues (18.2)

  • A database diagram created on an SSMS running on machine A cannot be modified from machine B (doing so would crash SSMS). See UserVoice for more details.

  • SSMS 18.0 redraw issues when switching between multiple query windows. See UserVoice. A workaround for this issue is to disable hardware acceleration under Tools > Options.

  • There is a limitation on the size of the data you see from SSMS results shown in grid, text, or file.

You can reference UserVoice for other known issues and to provide feedback to the product team.

Previous SSMS releases

Download previous SSMS versions by clicking the title links in the following sections:

download SSMS 18.1

  • Release number: 18.1
  • Build number: 15.0.18131.0
  • Release date: June 11, 2019

Chinese (Simplified) | Chinese (Traditional) | English (United States) | French | German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish

SSMS 18.1 was the general availability (GA) release preceding SSMS 18.2. If you need a previous version of SSMS, see previous SSMS releases.

18.1 is a small update to 18.0 with the following new items and bug fixes.

What's new in 18.1

New item Details
Database diagrams Database diagrams were added back into SSMS.
SSBDIAGNOSE.EXE The SQL Server Diagnose (command-line tool) was added back into the SSMS package.
Integration Services (SSIS) Support for scheduling SSIS packages, located in the SSIS Catalog in Azure or the File System, in Azure. There are three entry points for launching the New Schedule dialog: the New Schedule… menu item shown when right-clicking an SSIS package in the SSIS Catalog in Azure; the Schedule SSIS Package in Azure menu item under the Migrate to Azure item of the Tools menu; and the Schedule SSIS in Azure item shown when right-clicking the Jobs folder under SQL Server Agent of an Azure SQL Database managed instance.

Bug fixes in 18.1

New Item Details
Accessibility Improved accessibility of the Agent Job UI.
Accessibility Improved accessibility on the Stretch Monitor page by adding an accessible name for the Auto Refresh button, and an intelligent accessible name that tells users not only which button they're on but also the impact of pressing it.
ADS integration Fixed a possible crash in SSMS when trying to use the ADS registered servers.
Database designer Added support for Latin1_General_100_BIN2_UTF8 collation (available in SQL Server 2019 CTP3.0)
Data classification Prevent adding sensitivity labels to columns in the history table, which isn't supported.
Data classification Addressed issue related to incorrect handling of compatibility level (server vs. database).
General SSMS Fixed an issue where scripting of pseudo columns in an index was incorrect.
General SSMS Fixed an issue in Login properties page where clicking on the Add Credential button could throw a Null Reference Exception.
General SSMS Fixed money column size display in the index properties page.
General SSMS Fixed an issue where SSMS wasn't honoring the Intellisense settings from Tools/Options in the SQL editor window.
General SSMS Fixed an issue where SSMS wasn't honoring the Help settings (online vs. offline).
High DPI Fixed layout of controls in error dialogs for unsupported query options.
High DPI Fixed layout of controls in the New Availability Group page, which was broken on some localized versions of SSMS.
High DPI Fixed layout of New Job Schedule page. See UserVoice for more details.
Import flat file Fixed an issue where rows could be silently lost during the import.
Intellisense/editor Reduced SMO-based query traffic to Azure SQL databases for IntelliSense.
Intellisense/editor Fixed grammatical error in the tooltip displayed when typing T-SQL to create a user. Also, fixed the error message to disambiguate between users and logins.
Log Viewer Fixed an issue where SSMS always opened the current server (or agent) log, even when double-clicking an older archived log in Object Explorer. See UserVoice for more details.
SSMS setup Fixed the issue that was causing SSMS setup to fail when the setup log path contained spaces. See UserVoice for more details.
SSMS setup Fixed an issue where SSMS was exiting immediately after showing the splash screen. See these sites for more details: UserVoice, SSMS Refuses to Start, and Database Administrators.
Object explorer Lifted restriction on enabling start PowerShell when connected to SQL on Linux.
Object explorer Fixed an issue that was causing SSMS to crash when trying to expand the Polybase/Scale-out Group node (when connected to a compute node).
Object explorer Fixed an issue where the Disabled menu item was still enabled, even after disabling a given Index. See UserVoice for more details.
Reports Corrected the report to display GrantedQueryMemory in KB (SQL Performance Dashboard report). See UserVoice for more details.
Reports Improved the tracing of the log block in Always-On scenarios.
ShowPlan Added new showplan element SpillOccurred to showplan schema.
ShowPlan Added remote reads (ActualPageServerReads, ActualPageServerReadAheads, ActualLobPageServerReads, ActualLobPageServerReadAheads) to the showplan schema.
SMO/scripting Avoid querying edge constraints during scripting for non-graph tables.
SMO/scripting Removed constraint on sensitivity classification when scripting columns with Data classification.
SMO/scripting Fixed an issue where "Generate Script" on a graph table fails when generating data. See UserVoice for more details.
SMO/scripting Fixed an issue where the EnumObjects() method wasn't fetching schema name for a Synonym.
SMO/scripting Fixed an issue in UIConnectionInfo.LoadFromStream() where the AdvancedOptions section wasn't read (when a password wasn't specified).
SQL Agent Fixed an issue that was causing SSMS to crash while working with a Job Properties window. See UserVoice for more details.
SQL Agent Fixed an issue where the "View" button on the Job-Step Properties wasn't always enabled, thus preventing viewing the output of a given job step.
XEvent UI Added "Package" column to XEvents list to disambiguate events with identical names.
XEvent UI Added missing "EXTERNAL LIBRARY" class type mapping to XEventUI.

Known issues (18.1)

  • Users may see an error when dragging a table object from the Object Explorer into the Query Editor. We are aware of the issue, and the fix is planned for the next release.

  • The Group connections and Single-server connections color options under Options > Text Editor > Editor Tab and Status Bar > Status Bar Layout and Colors do not persist after closing SSMS 18.1. After you reopen SSMS, the Status Bar Layout and Colors options revert to the default (white).

  • There is a limitation on the size of the data you see from SSMS results shown in grid, text, or file.

download SSMS 18.0

  • Release number: 18.0
  • Build number: 15.0.18118.0
  • Release date: April 24, 2019

Chinese (Simplified) | Chinese (Traditional) | English (United States) | French | German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish

What's new in 18.0

New item Details
Support for SQL Server 2019 SSMS 18.0 is the first release that is fully aware of SQL Server 2019 (compatLevel 150).
Support for SQL Server 2019 Support for "BATCH_STARTED_GROUP" and "BATCH_COMPLETED_GROUP" in SQL Server 2019 and SQL managed instance.
Support for SQL Server 2019 SMO: Added support for UDF Inlining.
Support for SQL Server 2019 GraphDB: Add a flag in showplan for Graph TC Sequence.
Support for SQL Server 2019 Always Encrypted: added support for AEv2 / Enclave.
Support for SQL Server 2019 Always Encrypted: connection dialog has a new tab "Always Encrypted" when the user clicks on the "Options" button to enable/configure Enclave support.
Smaller SSMS download size The current size is ~500 MB, approximately half of the SSMS 17.x bundle.
SSMS is based on the Visual Studio 2017 Isolated Shell The new shell (SSMS is based on Visual Studio 2017 15.9.11) unlocks all the accessibility fixes that went into both SSMS and Visual Studio, and includes the latest security fixes.
SSMS accessibility improvements Much work went in to address accessibility issues in all the tools (SSMS, DTA, and Profiler)
SSMS can now be installed in a custom folder This option is available from both the command line (useful for unattended installation) and the setup UI. From the command line, pass this extra argument to the SSMS-Setup-ENU.exe: SSMSInstallRoot=C:\MySSMS18 By default, the new install location for SSMS is: %ProgramFiles(x86)%\Microsoft SQL Server Management Studio 18\Common7\IDE\ssms.exe. This does not mean that SSMS is multi-instance.
SSMS allows installing in a language other than the OS language The block on mixed languages setup has been lifted. You can, for example, install SSMS German on a French Windows. If the OS language does not match the SSMS language, the user needs to change the language under Tools > Options > International Settings, otherwise SSMS shows the English UI.
SSMS no longer shares components with the SQL Engine Much effort went in to avoid sharing components with SQL Engine, which often resulted in serviceability issues (one clobbering the files installed by the other).
SSMS requires NetFx 4.7.2 or greater We upgraded our minimum requirement from NetFx4.6.1 to NetFx4.7.2: this allows us to take advantage of the new functionality exposed by the new framework.
Ability to migrate SSMS settings When SSMS 18 is started for the first time, the user is prompted to migrate the 17.x settings. The user setting files are now stored as a plain XML file, thus improving portability and possibly allowing editing.
Support for High DPI High DPI is now enabled by default.
SSMS ships with the Microsoft OLE DB driver For details, see Download Microsoft OLE DB Driver for SQL Server.
SSMS isn't supported on Windows 8. Windows 10 and Windows Server 2016 require version 1607 (10.0.14393) or later Due to the new dependency on NetFx 4.7.2, SSMS 18.0 does not install on Windows 8 and older versions of Windows 10 and Windows Server 2016. SSMS setup blocks those systems. Windows 8.1 is still supported.
SSMS is no longer added to the PATH environment variable The path to SSMS.EXE (and tools in general) isn't added to PATH anymore. Users can either add it manually or, on a modern Windows computer, launch SSMS from the Start menu.
Package IDs are no longer needed to develop SSMS Extensions In the past, SSMS was selectively loading only well-known packages, thus requiring developers to register their own package. This is no longer the case.
General SSMS Exposing AUTOGROW_ALL_FILES config option for Filegroups in SSMS.
General SSMS Removed risky 'lightweight pooling' and 'priority boost' options from SSMS GUI. For details, see Priority boost details – and why it’s not recommended.
General SSMS New menu and key binding to create files: CTRL+ALT+N. CTRL+N continues to create a new query.
General SSMS New Firewall Rule dialog now allows the user to specify a rule name, instead of automatically generating one on behalf of the user.
General SSMS Improved IntelliSense in Editor especially for v140+ T-SQL.
General SSMS Added support in SSMS UI for UTF-8 on collation dialog.
General SSMS Switched to "Windows Credential Manager" for connection dialog MRU passwords. This addresses a long outstanding issue where persistence of passwords wasn't always reliable.
General SSMS Improved support for multi-monitor systems by making sure that more and more dialogs and windows pop up on the expected monitor.
General SSMS Exposed the 'backup checksum default' server configuration in the new Database Settings page of the Server Properties Dialog. For details, see https://feedback.azure.com/forums/08035-sql-server/suggestions/34634974.
General SSMS Exposed "maximum size for error log files" under "Configure SQL Server Error Logs". For details, see https://feedback.azure.com/forums/908035/suggestions/33624115.
General SSMS Added "Migrate to Azure" under Tools menu – We have integrated Database Migration Assistant and Azure Database Migration Service to provide quick and easy access to help accelerate your migrations to Azure.
General SSMS Added logic to prompt the user to commit open transactions when "Change connection" is used.
Azure Data Studio integration Added menu item to start/download Azure Data Studio.
Azure Data Studio integration Added "Start Azure Data Studio" menu item to Object Explorer.
Azure Data Studio integration When right-clicking on a database node in OE, the user is presented with context menus to either run a query or create a new notebook in Azure Data Studio.
Azure SQL support SLO/Edition/MaxSize database properties now accept custom names, making it easier to support future editions of Azure SQL databases.
Azure SQL support Added support for vCore SKUs (General Purpose and Business Critical): Gen4_24 and all the Gen5.
Azure SQL managed instance Added new "AAD logins" as a new login type in SMO and SSMS when connected to an Azure SQL managed instance.
Always On Rehash RTO (estimated recovery time) and RPO (estimated data loss) in SSMS Always on Dashboard. See the updated documentation at https://docs.microsoft.com/sql/database-engine/availability-groups/windows/monitor-performance-for-always-on-availability-groups.
Always Encrypted The Enable Always Encrypted checkbox in the new Always Encrypted tab in the Connect to Server dialog now provides an easy way to enable/disable Always Encrypted for a database connection.
Always Encrypted with secure enclaves Several enhancements have been made to support Always Encrypted with secure enclaves in SQL Server 2019 preview: A text field for specifying enclave attestation URL in the Connect to Server dialog (the new Always Encrypted tab). The new checkbox in the New Column Master Key dialog to control whether a new column master key allows enclave computations. Other Always Encrypted key management dialogs now expose the information on which column master keys allow enclave computations.
Audit Files Changed authentication method from Storage Account Key based to Azure AD-based authentication.
Audit Files Updated list of known audit actions to include FEATURE RESTRICTION ADD/CHANGE GROUP/DROP.
Data Classification Reorganized data classification task menu: added sub menu to the database tasks menu and added an option to open the report from the menu without opening the classify data window first.
Data Classification Added new feature 'Data classification' to SMO. The Column object exposes new properties: SensitivityLabelName, SensitivityLabelId, SensitivityInformationTypeName, SensitivityInformationTypeId, and IsClassified (read-only). For more information, see ADD SENSITIVITY CLASSIFICATION (Transact-SQL) and the short sketch after this table.
Data Classification Added new "Classification Report" menu item to the "Data Classification" flyout.
Data Classification Updated recommendations.
Database Compatibility Level Upgrade Added a new option under Database name > Tasks > Database Upgrade. This starts the new Query Tuning Assistant (QTA) to guide the user through the process of: Collecting a performance baseline before upgrading the database compatibility level. Upgrading to the desired database compatibility level. Collecting a second pass of performance data over the same workload. Detect workload regressions, and provide tested recommendations to improve workload performance. This is close to the database upgrade process documented in query store usage scenarios, except for the last step where QTA does not rely on a previously known good state to generate recommendations.
Data-tier Application Wizard Added support to import/export data-tier application with graph tables.
Flat File Import Wizard Added logic to notify the user that an import may have resulted in a renaming of the columns.
Integration Services (SSIS) Added support to allow customers to schedule SSIS packages on Azure-SSIS IRs that are in Azure Government cloud.
Integration Services (SSIS) When you use SQL Agent of Azure SQL managed instance via SSMS, you can configure parameter and connection manager in SSIS agent job step.
Integration Services (SSIS) When connecting to Azure SQL DB/managed instance, you can connect to it with default as initial db.
Integration Services (SSIS) Added a new entry item Try SSIS in Azure Data Factory under "Integration Services Catalogs" node, which can be used to launch the "Integration Runtime Creation Wizard" and create "Azure-SSIS Integration Runtime" quickly.
Integration Services (SSIS) Added Create SSIS IR button in "Catalog Creation Wizard," which can be used to launch the "Integration Runtime Creation Wizard" and create "Azure-SSIS Integration Runtime" quickly.
Integration Services (SSIS) ISDeploymentWizard now supports SQL Auth, Azure Active Directory Integrated Auth, and Azure Active Directory Password Auth in command-line mode.
Integration Services (SSIS) Deployment Wizard now supports creating and deploying to Azure Data Factory SSIS Integration Runtime.
Object Scripting Add new menu items for "CREATE OR ALTER" when scripting objects.
Query Store Improved usability of some reports (Overall Resource Consumptions) by adding thousands of separators to numbers displayed on the Y-axis of the charts.
Query Store Added a new Query Wait Statistics report.
Query Store Added "Execution Count" metric to "Tracked Query" View.
Replication Tools Added support for non-default port specification feature in Replication Monitor and SSMS.
ShowPlan Added actual time elapsed, actual vs. estimated rows under ShowPlan operator node if they're available. This makes the actual plan look consistent with Live Query Stats plan.
ShowPlan Modified tooltip and added comment when clicking on Edit Query Button for a ShowPlan, to indicate to user that the ShowPlan might be truncated by the SQL engine if the query is over 4000 characters.
ShowPlan Added logic to display the "Materializer Operator (External Select)".
ShowPlan Added a new showplan attribute, BatchModeOnRowStoreUsed, to easily identify queries that use the "batch-mode scan on rowstores" feature. Anytime a query performs a batch-mode scan on rowstores, a new attribute (BatchModeOnRowStoreUsed="true") gets added to the StmtSimple element.
ShowPlan Added Showplan Support to LocalCube RelOp for DW ROLLUP and CUBE.
ShowPlan New LocalCube operator for the new ROLLUP and CUBE aggregation feature in Azure SQL Data Warehouse.
SMO Extend SMO Support for Resumable Index Creation.
SMO Added new event on SMO objects ("PropertyMissing") to help application authors to detect SMO performance issues sooner.
SMO Exposed new DefaultBackupChecksum property on the Configuration object, which maps to the "backup checksum default" server configuration.
SMO Exposed new ProductUpdateLevel property on the Server object, which maps to the servicing level for the version of SQL in use (for example, CU12, RTM).
SMO Exposed new LastGoodCheckDbTime property on Database object, which maps to "lastgoodcheckdbtime" database property. If such property isn't available, a default value of 1/1/1900 12:00:00 AM will be returned.
SMO Moved location for RegSrvr.xml file (Registered Server configuration file) to "%AppData%\Microsoft\SQL Server Management Studio" (unversioned, so it can be shared across versions of SSMS).
SMO Added "Cloud Witness" as a new quorum type and as a new resource type.
SMO Added support for "Edge Constraints" in both SMO and SSMS.
SMO Added cascade delete support to "Edge Constraints" in both SMO and SSMS.
SMO Added support for data classification "read-write" permissions.
Vulnerability Assessment Enabled Vulnerability Assessment tasks menu on Azure SQL DW.
Vulnerability Assessment Change the set of vulnerability assessment rules that are run on Azure SQL managed instance servers, so that "Vulnerability Assessment" scan results will be consistent with the ones in Azure SQL DB.
Vulnerability Assessment "Vulnerability Assessment" now supports Azure SQL DW.
Vulnerability Assessment Added a new exporting feature to export the vulnerability assessment scan results to Excel.
XEvent Viewer XEvent Viewer: enabled showplan window for more XEvents.
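
The SMO data-classification properties listed above surface what the ADD SENSITIVITY CLASSIFICATION statement stores in the database. A minimal sketch (the table and column are hypothetical; requires SQL Server 2019 or Azure SQL Database):

-- Classify a hypothetical column as confidential contact data.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Review the stored classifications.
SELECT * FROM sys.sensitivity_classifications;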

Bug fixes in 18.0

New item Details
Crashes and freezes Fixed a source of common SSMS crashes related to GDI objects.
Crashes and freezes Fixed a common source of hangs and poor performance when selecting "Script as Create/Update/Drop" (removed unnecessary fetches of SMO objects).
Crashes and freezes Fixed a hang when connecting to an Azure SQL DB using MFA while ADAL traces are enabled.
Crashes and freezes Fixed a hang (or perceived hang) in Live Query Statistics when invoked from Activity Monitor (the issue manifested when using SQL Server authentication with no "Persist Security Info" set).
Crashes and freezes Fixed a hang when selecting "Reports" in Object Explorer, which could manifest on high latency connections or temporary non-accessibility of the resources.
Crashes and freezes Fixed a crash in SSMS when trying to use Central Management Server and Azure SQL servers. For details, see SMSS 17.5 application error and crash when using Central Management Server.
Crashes and freezes Fixed a hang in Object Explorer by optimizing the way IsFullTextEnabled property is retrieved.
Crashes and freezes Fixed a hang in "Copy Database Wizard" by avoiding building unnecessary queries to retrieve database properties.
Crashes and freezes Fixed an issue that was causing SSMS to hang/crash while editing T-SQL.
Crashes and freezes Mitigated an issue where SSMS was becoming unresponsive when editing large T-SQL scripts.
Crashes and freezes Fixed an issue that was causing SSMS to run out of memory when handling the large datasets returned by queries.
General SSMS Fixed an issue where the "ApplicationIntent" wasn't passed along in connections in "Registered Servers".
General SSMS Fixed an issue where the "New XEvent Session Wizard UI" form wasn't rendered properly on High DPI monitors.
General SSMS Fixed an issue encountered when trying to import a bacpac file.
General SSMS Fixed an issue where trying to display the properties of a database (with FILEGROWTH > 2048 GB) was throwing an arithmetic overflow error.
General SSMS Fixed an issue where the Perf Dashboard Report was reporting PAGELATCH and PAGEIOLATCH waits that could not be found in subreports.
General SSMS Another round of fixes to make SSMS more multi-monitor aware by having it open dialog in the correct monitor.
Analysis Services (AS) Fixed an issue where the "Advanced Settings" to the AS XEvent UI was clipped.
Analysis Services (AS) Fixed an issue where DAX parsing throws file not found exception.
Azure SQL Database Fixed an issue where the database list wasn't populated correctly for Azure SQL Database query window when connected to a user database in Azure SQL DB instead of to master.
Azure SQL Database Fixed an issue where it wasn't possible to add a "Temporal Table" to an Azure SQL database.
Azure SQL Database Enabled the Statistics properties sub menu option under menu Statistics in Azure, since it has been fully supported for quite some time now.
Azure SQL - General Support Fixed issues in the common Azure UI control that were preventing the user from displaying Azure subscriptions (if there were more than 50). Also, the sorting has been changed to be by name rather than by Subscription ID. The user could run into this, for example, when trying to restore a backup from URL.
Azure SQL - General Support Fixed an issue in the common Azure UI control when enumerating subscriptions, which could yield an "Index was out of range. Must be non-negative and less than the size of the collection." error when the user had no subscriptions in some tenants. The user could run into this, for example, when trying to restore a backup from URL.
Azure SQL - General Support Fixed an issue where Service Level Objectives were hardcoded, making it harder for SSMS to support newer Azure SQL SLOs. Now, the user can sign in to Azure and allow SSMS to retrieve all the applicable SLO data (Edition and Max Size).
Azure SQL DB managed instance support Improved/polished the support for managed instances: disabled unsupported options in UI and a fix to the View Audit Logs option to handle URL audit target.
Azure SQL DB managed instance support Fixed an issue where the "Generate and Publish Scripts" wizard scripted unsupported CREATE DATABASE clauses.
Azure SQL DB managed instance support Enabled Live Query Statistics for managed instances.
Azure SQL DB managed instance support Fixed an issue where Database properties > Files was incorrectly scripting ALTER DB ADD FILE.
Azure SQL DB managed instance support Fixed a regression with the SQL Agent scheduler where ONIDLE scheduling was chosen even when some other scheduling type was selected.
Azure SQL DB managed instance support Adjusted MAXTRANSFERRATE and MAXBLOCKSIZE for backups to Azure Storage.
Azure SQL DB managed instance support Fixed an issue where the tail-log backup was scripted before the RESTORE operation (this isn't supported on CL).
Azure SQL DB managed instance support Fixed an issue where the Create Database wizard wasn't scripting the CREATE DATABASE statement correctly.
Azure SQL DB managed instance support Special handling of SSIS packages within SSMS when connected to managed instances.
Azure SQL DB managed instance support Fixed an issue where an error was displayed while trying to use "Activity Monitor" when connected to managed instances.
Azure SQL DB managed instance support Improved support for AAD Logins (in SSMS Explorer).
Azure SQL DB managed instance support Improved scripting of SMO Filegroups objects.
Azure SQL DB managed instance support Improved UI for credentials.
Azure SQL DB managed instance support Added support for Logical Replication.
Azure SQL DB managed instance support Fixed an issue, which was causing right-clicking on a database and choosing 'Import data-tier application' to fail.
Azure SQL DB managed instance support Fixed an issue, which was causing right-clicking on a "TempDB" to show errors.
Azure SQL DB managed instance support Fixed an issue where trying to script an ALTER DB ADD FILE statement in SMO was causing the generated T-SQL script to be empty.
Azure SQL DB managed instance support Improved display of managed instances server-specific properties (hardware generation, service tier, storage used and reserved).
Azure SQL DB managed instance support Fixed an issue where scripting of a database ("Script as Create...") wasn't scripting extra filegroups and files. For details, see https://feedback.azure.com/forums/908035/suggestions/37326799.
Backup/Restore/Attach/Detach DB Fixed an issue where the user was unable to attach a database when the physical filename of the .mdf file does not match the original filename.
Backup/Restore/Attach/Detach DB Fixed an issue where SSMS might not find a valid restore plan or might find one, which is suboptimal. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32897752.
Backup/Restore/Attach/Detach DB Fixed issue where the "Attach Database" wizard wasn't displaying secondary files that were renamed. Now, the file is displayed, and a comment about it is added (for example "Not Found"). For details, see https://feedback.azure.com/forums/908035/suggestions/32897434.
Copy Database Wizard Fixed an issue where Generate Scripts/Transfer/Copy Database Wizard didn't force ansi_padding on when creating a table from an in-memory table.
Copy Database Wizard Fixed the Transfer Database task/Copy Database Wizard, which was broken on SQL Server 2017 and SQL Server 2019.
Copy Database Wizard Fixed an issue where Generate Scripts/Transfer/Copy Database Wizard scripted table creation before the creation of the associated external data source.
Connection dialog Enabled the removal of usernames from previous username list by pressing the DEL key. For details, see Allow deletion of users from SSMS login window.
DAC Import Wizard Fixed an issue where the DAC Import Wizard wasn't working when connected using AAD.
Data Classification Fixed an issue when saving classifications in the data classification pane while other data classification panes are open on other databases.
Data-tier Application Wizard Fixed an issue where the user wasn't able to import a Data-tier Application (.dacpac) due to limited access to the server (for example, no access to all the databases on the same server).
Data-tier Application Wizard Fixed an issue, which was causing the import to be extremely slow when many databases happened to be hosted on the same Azure SQL server.
External Tables Added support for Rejected_Row_Location in template, SMO, intellisense, and property grid.
Flat File Import Wizard Fixed an issue where the "Import Flat File Wizard" wasn't handling double quotes correctly (escaping). For details, see https://feedback.azure.com/forums/908035/suggestions/32897998.
Flat File Import Wizard Fixed an issue related to incorrect handling of floating-point types (on locales that use a different delimiter for floating points).
Flat File Import Wizard Fixed an issue related to importing of bits when values are 0 or 1. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32898535.
Flat File Import Wizard Fixed an issue where floats were entered as nulls.
Flat File Import Wizard Fixed an issue where the import wizard wasn't able to process negative decimal values.
Flat File Import Wizard Fixed an issue where the wizard wasn't able to import from single column CSV files.
Flat File Import Wizard Fixed an issue where Flat File Import did not allow changing the destination table when the table already exists. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32896186.
Help Viewer Improved logic around honoring the online/offline modes (there may still be a few issues that need to be addressed).
Help Viewer Fixed the "View Help" to honor the online/offline settings. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32897791.
High Availability Disaster Recovery (HADR) / Availability Groups (AG) Fixed an issue where roles in the "Fail Over Availability Groups" wizard were always displayed as "Resolving".
High Availability Disaster Recovery (HADR) / Availability Groups (AG) Fixed an issue where SSMS was showing truncated warnings in the "AG Dashboard".
Integration Services (IS) Fixed a side-by-side (SxS) issue where the deployment wizard would fail to connect to SQL Server when SQL Server 2019 and SSMS 18.0 are installed on the same machine.
Integration Services (IS) Fixed an issue where a maintenance plan task couldn't be edited when designing the maintenance plan.
Integration Services (IS) Fixed an issue where the deployment wizard got stuck if the project under deployment was renamed.
Integration Services (IS) Enabled environment setting in the Azure-SSIS IR schedule feature.
Integration Services (IS) Fixed an issue where the SSIS Integration Runtime Creation Wizard stopped responding when the customer account belongs to more than one tenant.
Job Activity Monitor Fixed crash while using Job Activity Monitor (with filters).
Object Explorer Fixed an issue where SSMS was throwing an "Object cannot be cast from DBNull to other types" exception when trying to expand "Management" node in OE (misconfigured DataCollector).
Object Explorer Fixed an issue where OE wasn't escaping quotes before invoking the "Edit Top N..." causing the designer to get confused.
Object Explorer Fixed an issue where the "Import Data-Tier application" wizard was failing to launch from the Azure Storage tree.
Object Explorer Fixed an issue in "Database Mail Configuration" where the status of the SSL checkbox wasn't persisted. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32895541.
Object Explorer Fixed an issue where SSMS grayed out the option to close existing connections when trying to restore a database with is_auto_update_stats_async_on.
Object Explorer Fixed an issue where, when right-clicking a node in OE (for example, "Tables") and performing an action such as Filter > Filter Settings, the filter settings form could appear on a different screen than the one where SSMS is currently active. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/34284106.
Object Explorer Fixed a long outstanding issue where the DELETE key wasn't working in OE while trying to rename an object. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/33073510 and https://feedback.azure.com/forums/908035/suggestions/32910247 and other duplicates.
Object Explorer When displaying the properties of existing database files, the size now appears under a column "Size (MB)" instead of "Initial Size (MB)", which is what is displayed when creating a new database. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32629024.
Object Explorer Disabled the "Design" context-menu item on "Graph Tables" since there is no support for those tables in the current version of SSMS.
Object Explorer Fixed an issue where the "New Job Schedule" dialog wasn't rendering properly on High DPI monitors. For details, see https://feedback.azure.com/admin/v3/suggestions/35541262.
Object Explorer Fixed/improved the way a database size ("Size (MB)") is displayed in Object Explorer details: only 2 decimal digits, formatted using the thousands separator. For details, see https://feedback.azure.com/forums/908035/suggestions/34379308.
Object Explorer Fixed an issue that was causing the creation of a "Spatial Index" to fail with an error like "To accomplish this action, set property PartitionScheme".
Object Explorer Minor performance improvements in Object Explorer to avoid issuing extra queries, when possible.
Object Explorer Extended logic to request confirmation when renaming a database to all the schema objects (the setting can be configured).
Object Explorer Added proper escaping in Object Explorer filtering. For details, see https://feedback.azure.com/forums/908035/suggestions/36678803.
Object Explorer Fixed/improved the view in Object Explorer Details to show numbers with proper separators. For details, see https://feedback.azure.com/forums/908035/suggestions/32900944.
Object Explorer Fixed context menu on "Tables" node when connected to SQL Express, where the "New" fly-out was missing, Graph tables were incorrectly listed, and System-Versioned table was missing. For details, see https://feedback.azure.com/forums/908035/suggestions/37245529.
Object Scripting Overall perf improvements - Generate Scripts of WideWorldImporters takes half the time compared to SSMS 17.7.
Object Scripting When scripting objects, DB-scoped configurations that have default values are omitted.
Object Scripting Don't generate dynamic T-SQL when scripting. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32898391.
Object Scripting Omit the graph syntax "as edge" and "as node" when scripting a table on SQL Server 2016 and earlier.
Object Scripting Fixed an issue where scripting of database objects was failing when connecting to an Azure SQL DB using AAD with MFA.
Object Scripting Fixed an issue where trying to script a spatial index with GEOMETRY_AUTO_GRID/GEOGRAPHY_AUTO_GRID on an Azure SQL DB was throwing an error.
Object Scripting Fixed an issue, which was causing the database scripting (of an Azure SQL database) to always target an on-prem SQL, even if the "Object Explorer" scripting settings were set to match the source.
Object Scripting Fixed an issue where trying to script a table in a SQL DW database involving clustered and nonclustered indexes was generating incorrect T-SQL statements.
Object Scripting Fixed an issue where trying to script a table in a SQL DW database with both "Clustered Columnstore Indexes" and "Clustered Indexes" was generating incorrect T-SQL (duplicate statements).
Object Scripting Fixed Partitioned table scripting with no range values (SQL DW databases).
Object Scripting Fixed an issue where the user would be unable to script an audit/audit specification SERVER_PERMISSION_CHANGE_GROUP.
Object Scripting Fixed an issue where the user was unable to script statistics from SQL DW. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32897296.
Object Scripting Fixed an issue where the "Generate script wizard" shows incorrect table having scripting error when "Continue scripting on Error" is set to false.
Object Scripting Improved script generation on SQL Server 2019.
Profiler Added "Aggregate Table Rewrite Query" event to Profiler events.
Query Data Store Fixed an issue where a "DocumentFrame (SQLEditors)" exception could be thrown.
Query Data Store Fixed an issue where, when trying to set a custom time interval in the built-in Query Store reports, the user wasn't able to select AM or PM on the start/end interval.
Results Grid Fixed an issue in High Contrast mode (selected line numbers were not visible).
Results Grid Fixed an issue, which resulted in an "Index out of range" exception when clicking on the grid.
Results Grid Fixed an issue where the grid result background color was being ignored. For details, see https://feedback.azure.com/forums/908035/suggestions/32895916.
ShowPlan New memory grant operator properties display incorrectly when there is more than one thread.
ShowPlan Add the following 4 attributes in RunTimeCountersPerThread of actual execution xml plan: HpcRowCount (Number of rows processed by hpc device), HpcKernelElapsedUs (elapsed time wait for kernel execution in use), HpcHostToDeviceBytes (bytes transferred from host to device), and HpcDeviceToHostBytes (bytes transferred from device to host).
ShowPlan Fixed an issue where the similar plan nodes are highlighted in the wrong position.
SMO Fixed an issue where SMO/ServerConnection did not handle SqlCredential-based connections correctly. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/33698941.
SMO Fixed an issue where an application written using SMO would encounter an error if it tried to enumerate databases from the same server on multiple threads even though it uses separate SqlConnection instances on each thread.
SMO Fixed performance regression in Transfer from External Tables.
SMO Fixed issue in ServerConnection thread-safety, which was causing SMO to leak SqlConnection instances when targeting Azure.
SMO Fixed an issue, which was causing a StringBuilder.FormatError when trying to restore a database, which had curly braces in its name.
SMO Fixed an issue where Azure databases in SMO were defaulting to Case-Insensitive collation for all string comparisons instead of using the specified collation for the database.
SSMS Editor Fixed an issue where restoring the default colors for "SQL System Table" was changing the color to lime green, rather than the default green, making it hard to read on a white background. For details, see Restoring wrong default color for SQL System Table. The issue still persists on non-English versions of SSMS.
SSMS Editor Fixed issue where intellisense wasn't working when connected to Azure SQLDW using AAD authentication.
SSMS Editor Fixed intellisense in Azure when user lacks access to master database.
SSMS Editor Fixed code snippets to create "temporal tables", which were broken when the collation of the target database was case-sensitive.
SSMS Editor The new TRANSLATE function is now recognized by IntelliSense. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32898430.
SSMS Editor Improved IntelliSense on the FORMAT built-in function. For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32898676.
SSMS Editor LAG and LEAD are now recognized as built-in functions (see the short example after this table). For details, see https://feedback.azure.com/forums/908035-sql-server/suggestions/32898757.
SSMS Editor Fixed an issue where intellisense was giving a warning when using "ALTER TABLE...ADD CONSTRAINT...WITH(ONLINE=ON)".
SSMS Editor Fixed an issue where several system views and table values functions weren't properly colorized.
SSMS Editor Fixed an issue where clicking on editor tabs could cause the tab to be closed instead of getting the focus. For details, see https://feedback.azure.com/forums/908035/suggestions/37291114.
SSMS Options Fixed an issue where Tools > Options > SQL Server Object Explorer > Commands page wasn't resizing properly.
SSMS Options SSMS will now by default disable automatic download of DTD in XMLA editor -- XMLA script editor (which uses the xml language service) will by default now prevent automatically downloading the DTD for potentially malicious xmla files. This is controlled by turning off the “Automatically download DTDs and Schemas” setting in Tools > Options > Environment > Text Editor > XML > Miscellaneous.
SSMS Options Restored CTRL+D to be the shortcut as it used to be in older version of SSMS. For details, see https://feedback.azure.com/forums/908035/suggestions/35544754.
Table Designer Fixed a crash in "Edit 200 rows".
Table Designer Fixed an issue where the designer was allowing adding a table when connected to an Azure SQL database.
Vulnerability Assessment Fixed an issue where the scan results are not being loaded properly.
XEvent Added two columns "action_name" and "class_type_desc" that show action ID and class type fields as readable strings.
XEvent Removed the XEvent Viewer cap of 1,000,000 events.
XEvent Profiler Fixed an issue where XEvent Profiler failed to launch when connected to a 96-core SQL Server.
XEvent Viewer Fixed an issue where XEvent Viewer was crashing when trying to group the events using the "Extended Event Toolbar Options".
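
For reference, the built-in functions called out in the editor fixes above (TRANSLATE, FORMAT, LAG/LEAD) in one small self-contained query; the data is made up for illustration:

-- TRANSLATE (SQL Server 2017+), FORMAT, and LAG in a single query.
SELECT TRANSLATE(N'2*[3+2]', N'[]', N'()') AS translated,      -- 2*(3+2)
       FORMAT(SYSDATETIME(), 'yyyy-MM-dd') AS today,
       v,
       LAG(v) OVER (ORDER BY v)            AS previous_value
FROM (VALUES (1), (2), (3)) AS t(v);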

Deprecated and removed features in 18.0

Deprecated / Removed Features

  • T-SQL Debugger
  • Database Diagrams
  • The following tools are no longer installed with SSMS:
    • OSQL.EXE
    • DReplay.exe
    • SQLdiag.exe
    • SSBDiagnose.exe
    • bcp.exe
    • sqlcmd.exe
  • Configuration Manager tools:
Both SQL Server Configuration Manager and Reporting Services Configuration Manager are no longer part of SSMS setup.
  • DMF Standard Policies
The policies are no longer installed with SSMS. They are being moved to Git, where users can contribute and download/install them if they want to.
  • SSMS command-line option -P removed
    • Due to security concerns, the option to specify clear-text passwords on the command line was removed.
  • Generate Scripts > Publish to Web Service removed
    • This deprecated feature was removed from the SSMS UI.
  • Removed node "Maintenance > Legacy" in Object Explorer.
    • The really old "Database Maintenance Plan" and "SQL Mail" nodes won't be accessible anymore. The modern "Database Mail" and "Maintenance Plans" nodes will continue to work as usual.

Known issues (18.0)

  • You might encounter an issue installing version 18.0, where you cannot run SQL Server Management Studio. If you encounter this issue, please follow the steps from the SSMS2018 - Installed, but will not run article.

  • There is a limitation on the size of the data you can see from SSMS results shown in grid, text, or file.

Download SSMS 17.9.1

  • Release number: 17.9.1
  • Build number: 14.0.17289.0
  • Release date: November 21, 2018

17.9.1 is a small update to 17.9 with the following bug fixes:

  • Fixed an issue where users may experience their connection being closed and reopened with each query invocation when using "Azure Active Directory - Universal with MFA support" authentication with the SQL query editor. Side effects of the connection closing included global temporary tables being dropped unexpectedly, and sometimes a new SPID given to the connection.
  • Fixed a long-outstanding issue where the restore planning algorithm would fail to find a restore plan, or would generate an inefficient one, under certain conditions.
  • Fixed an issue in the "Import Data-tier Application" wizard, which could result in an error when connected to an Azure SQL database.

 Note

Non-English localized releases of SSMS 17.x require the KB 2862966 security update package if installing on: Windows 8, Windows 7, Windows Server 2012, and Windows Server 2008 R2.

Available languages: Chinese (Simplified), Chinese (Traditional), English (United States), French, German, Italian, Japanese, Korean, Portuguese (Brazil), Russian, Spanish

Download SSMS 16.5.3

Generally available | Build number: 13.0.16106.4

Available languages: Chinese (Simplified), Chinese (Traditional), English (United States), French, German, Italian, Japanese, Korean, Portuguese (Brazil), Russian, Spanish

The following issues were fixed this release:

  • Fixed an issue introduced in SSMS 16.5.2, which was preventing the expansion of the 'Table' node when the table had more than one sparse column.

  • Users can deploy SSIS packages containing an OData Connection Manager that connects to a Microsoft Dynamics AX/CRM Online resource to the SSIS catalog. For details, see OData Connection Manager.

  • Configuring Always Encrypted on an existing table fails with errors on unrelated objects. Connect ID 3103181

  • Configuring Always Encrypted for an existing database with multiple schemas doesn't work. Connect ID 3109591

  • The Always Encrypted, Encrypted Column wizard fails due to the database containing views that reference system views. Connect ID 3111925

  • When encrypting using Always Encrypted, errors from refreshing modules after encryption are incorrectly handled.

  • Open recent menu doesn't show recently saved files. Connect ID 3113288

  • SSMS is slow when right-clicking an index for a table (over a remote (Internet) connection). Connect ID 3114074

  • Fixed an issue with the SQL Designer scrollbar. Connect ID 3114856

  • Context menu for tables momentarily hangs

  • SSMS occasionally throws exceptions in Activity Monitor and crashes. Connect ID 697527

  • SSMS 2016 crashes with error "The process was terminated due to an internal error in the .NET Runtime at IP 71AF8579 (71AE0000) with exit code 80131506"

Uninstall and reinstall SSMS 17.x

If your SSMS installation is having problems, and a standard uninstall and reinstall doesn't resolve them, you can first try repairing the Visual Studio 2015 IsoShell. If repairing the Visual Studio 2015 IsoShell doesn't resolve the problem, the following steps have been found to fix many random issues:

  1. Uninstall SSMS the same way you uninstall any application (using Apps & features or Programs and Features, depending on your version of Windows).

  2. Uninstall Visual Studio 2015 IsoShell from an elevated cmd prompt:

    PUSHD "C:\ProgramData\Package Cache\FE948F0DAB52EB8CB5A740A77D8934B9E1A8E301\redist"

    vs_isoshell.exe /Uninstall /Force /PromptRestart

  3. Uninstall Microsoft Visual C++ 2015 Redistributable the same way you uninstall any application. Uninstall both x86 and x64 if they're on your computer.

  4. Reinstall Visual Studio 2015 IsoShell from an elevated cmd prompt:

    PUSHD "C:\ProgramData\Package Cache\FE948F0DAB52EB8CB5A740A77D8934B9E1A8E301\redist"

    vs_isoshell.exe /PromptRestart

  5. Reinstall SSMS.

  6. Upgrade to the latest version of the Visual C++ 2015 Redistributable if you're not currently up-to-date.

Additional Downloads

For a list of all SQL Server Management Studio downloads, search the Microsoft Download Center.

For the latest release of SQL Server Management Studio, see Download SQL Server Management Studio (SSMS).

Source: Microsoft, https://docs.microsoft.com/en-us/sql/ssms/release-notes-ssms

VBA Scripts

07 Aug 2019 0 minutes to read Contributors


The sample scripts provided here are not supported under any Microsoft standard support program or service. All scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

That being said, we do our utmost to provide working scripts for you. SQL Server knowledge is required, though.

C# Scripts

07 Aug 2019 0 minutes to read Contributors


The same disclaimer as for the VBA scripts above applies.

That being said, we do our utmost to provide working scripts for you. SQL Server knowledge is required, though.

Powershell Scripts

07 Aug 2019 0 minutes to read Contributors


The same disclaimer as for the VBA scripts above applies.

That being said, we do our utmost to provide working scripts for you. SQL Server knowledge is required, though.

Importing SQL Server Extended Events XEL with BCP

07 Aug 2019 0 minutes to read Contributors

Extended Events

SQL Server has the ability to catch all kinds of 'events' in an Extended Event File (XEL). These files are readable with C# / Powershell / SQL functions.

Reading XEL files can be a lengthy task, especially when you want to analyse the data.

SQL Read XEL Function

SQL Server provides a function to read XEL files: sys.fn_xe_file_target_read_file ( path, mdpath, initial_file_name, initial_offset )

This function reads files that are created by the Extended Events asynchronous file target. One event, in XML format, is returned per row.

More info here:

https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-xe-file-target-read-file-transact-sql
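
For example, a minimal query against an XEL file looks like this (the file path is a placeholder you must adjust to your own target files; passing NULL for the remaining parameters reads all files matching the pattern from the start):

-- Read all events from an XEL file; each row returns one event as XML
SELECT
    object_name AS event_name,
    CAST(event_data AS XML) AS event_xml,
    file_name,
    file_offset
FROM sys.fn_xe_file_target_read_file('C:\XEvents\MySession*.xel', NULL, NULL, NULL);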

C# / Powershell

The function is implemented in Microsoft SQL Server libraries that are invokable by C# or Powershell:

Microsoft.SqlServer.XEvent.Linq.QueryableXEventData

Solution

We have created a powershell solution that uses this library to read XEL files and import them via BCP into a database table.

Function ImportFile($ExEvFile)
{

####################################
# Read the XEL file into $rawXELdata
####################################

$loggingstring = "Reading XEL file...$ExEvFile";
$loggingstring

$rawXELdata = New-Object Microsoft.SqlServer.XEvent.Linq.QueryableXEventData("$exevfile")

#########################################################
# Create new virtual table to put formatted XEL data into
#########################################################

$virtualtable = New-Object -TypeName System.Data.DataTable
$virtualtable.TableName = 'TableExtendedEvents'

#####################################
# Define columns in the virtual table
#####################################

[Void]$virtualtable.Columns.Add("collect_system_time" ,[DateTime])
[Void]$virtualtable.Columns.Add("database_name" ,[String])
[Void]$virtualtable.Columns.Add("object_name" ,[String])
[Void]$virtualtable.Columns.Add("client_hostname" ,[String])
[Void]$virtualtable.Columns.Add("client_app_name" ,[String])
[Void]$virtualtable.Columns.Add("server_name" ,[String])
[Void]$virtualtable.Columns.Add("row_count" ,[Int64])
[Void]$virtualtable.Columns.Add("Duration" ,[Int64])
[Void]$virtualtable.Columns.Add("server_principal_name" ,[String])

#######################################################
# In a loop parse the rawXELdata into the virtual table
#######################################################

$loggingstring = "Parsing raw XEL data into virtual table...";
$loggingstring

try
{
$rawXELdata | %{

###############################################
# Data comes from Actions and Fields structures
###############################################

$collect_system_time = $_.Timestamp[0].Datetime.ToString();

$database_name = $_.Actions['database_name']
$object_name = $_.Fields['object_name']
$client_hostname = $_.Actions['client_hostname']
$client_app_name = $_.Actions['client_app_name']
$server_principal_name = $_.Actions['server_principal_name']
$row_count = $_.Fields['row_count']
$duration = $_.Fields['duration']
$server = $ExEvFile.Split("\")[5]

$row=@(
@{$true = $collect_system_time; $false = $null }[$collect_system_time -ne $null] ,
@{$true = $database_name.Value; $false = $null }[$database_name -ne $null] ,
@{$true = $object_name.Value; $false = $null }[$object_name -ne $null],
@{$true = $client_hostname.Value; $false = $null }[$client_hostname -ne $null],
@{$true = $client_app_name.Value; $false = $null }[$client_app_name -ne $null],
$server,
@{$true = [int64]$row_count.Value; $false = $null }[$row_count -ne $null],
@{$true = $duration.Value; $false = $null }[$duration -ne $null],
@{$true = $server_principal_name.Value; $false = $null }[$server_principal_name -ne $null]
);
$virtualtable.Rows.Add($row);
} |out-null
}
catch{
$loggingstring = "Processing virtual table:`t$_.Exception.Message`tReverting importfile";
$loggingstring >>$logfile
$_ = Invoke-Sqlcmd -ServerInstance $servername -Username $userid -password $pwd -Query "exec ObjectUsage.dbo.usp_objectusage_import '$ExEvFile', -1"
Write-Host $loggingstring
}


##############################################
# Bulk Insert into $dbTable from virtual table
##############################################

$loggingstring = "$($ExEvFile) - records in extended file:`t$($virtualtable.rows.count)`t";
$loggingstring >>$logfile
Write-Host $loggingstring

Try
{
$cn = new-object System.Data.SqlClient.SqlConnection("Data Source=$servername;user id=$userid;password=$pwd;Initial catalog=$dbname")
$cn.Open()

$bcp = New-Object -TypeName System.Data.SqlClient.SqlBulkCopy $cn
$bcp.DestinationTableName = $dbTable
$bcp.Batchsize = 10000
$bcp.BulkCopyTimeout = 0
$bcp.ColumnMappings.add('collect_system_time','collect_system_time') | Out-Null
$bcp.ColumnMappings.add('database_name','database_name') | Out-Null
$bcp.ColumnMappings.add('object_name','object_name')| Out-Null
$bcp.ColumnMappings.add('client_hostname','client_hostname')| Out-Null
$bcp.ColumnMappings.add('client_app_name','client_app_name')| Out-Null
$bcp.ColumnMappings.add('server_name','server_name')| Out-Null
$bcp.ColumnMappings.add('row_count','row_count')| Out-Null
$bcp.ColumnMappings.add('Duration','Duration')| Out-Null
$bcp.ColumnMappings.add('server_principal_name','username')| Out-Null

$loggingstring = "Starting bcp...";
Write-Host $loggingstring

$bcp.WriteToServer($virtualtable)
$bcp.Close()
$cn.close()
$count = $virtualtable.rows.count
$_ = Invoke-Sqlcmd -ServerInstance $servername -Username $userid -password $pwd -Query "exec ObjectUsage.dbo.usp_objectusage_import '$ExEvFile', $count "
$virtualtable.clear()
}
catch{
$loggingstring = "Running BCP:`t$_.Exception.Message`t";
$loggingstring >>$logfile
$_ = Invoke-Sqlcmd -ServerInstance $servername -Username $userid -password $pwd -Query "exec ObjectUsage.dbo.usp_objectusage_import '$ExEvFile', -1"
Write-Host $loggingstring
}

}

################################NOTES#######################
# Notes: bcp Extended Events files into the destination table
############################################################
#$ExEvFile='\\xxxxx-srv-xxxxxx\DBA\DatabaseObjectUsage\filename.xel'
Function Main{
param(
#[Parameter(Mandatory=$True)]
[string]$ExEvFile #New File name to move
)

$error.Clear();

$servername='xxxxx-srv-xxxxx'
$dbName='ObjectUsage'
$dbTable='dbo.ObjectUsage'
$UserID="login_extendedevents"
$Pwd ='password_extendedevents_user'
$connectionString ="Data Source=$ServerName;Database=$dbName;User Id=$UserID;Password=$Pwd;"

$LogFile='\\xxxx-srv-xxxx\dba\DatabaseObjectUsage\ImportXEL.log';

#############################################
# Output text to console and write log header
#############################################

$StartTime = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$loggingstring = "$StartTime`t$ServerName`tFile`t$ExEvFile`t"
$loggingstring >>$logfile
$loggingstring

################################################################################################
# Add-Type when running this script on MS SQL Server
# use this line of code on win2012 server, need only Microsoft.SqlServer.XEvent.Linq.dll library
# Add the path where this dll is.
################################################################################################

Add-Type -Path 'Microsoft.SqlServer.XEvent.Linq.dll' | out-null

if ($error.count -ne 0) {
$loggingstring = "Error:`t$error`t";
$loggingstring >>$logfile
$loggingstring
$error.clear();
}

if( $ExEvFile -eq "" )
{
$filelist = Get-ChildItem -Path "\\xxxxx-srv-xxxxx\DBA\DatabaseObjectUsage" -Filter "*.XEL" -Recurse -ErrorAction SilentlyContinue | Select-Object FullName, Length

foreach( $file in $filelist )
{
if( $file.length -ge 0 )
{
$ExEvFile = $file.fullname

$imported = Invoke-Sqlcmd -ServerInstance $servername -Username $userid -password $pwd -Query "exec ObjectUsage.dbo.usp_objectusage_import '$ExEvFile', NULL, $($file.length)"

if ($imported.ItemArray[0] -eq "0" )
{
ImportFile( $ExEvFile );

}
}
}
}
else
{
ImportFile( $ExEvFile );
}
#######################################
# End processing Write log file trailer
#######################################
$EndDate = Get-Date

if ($error.count -ne 0) {
$loggingstring = "Error:`t$error`t";
$loggingstring >>$logfile
$loggingstring
$error.clear();
}
else {
$loggingstring = "Finished successfully:`t$EndDate`n";
$loggingstring >>$logfile
$loggingstring
}
}
################## END OF SCRIPT FILE ######################
#### Call Main

Main

Managing Multi SQL Server Environments with MSX/TSX

07 Aug 2019 0 minutes to read Contributors

Managing a large SQL Server estate made easy

When you have to manage a large SQL Server estate, starting from 10+ SQL Servers, you do not want to perform repetitive tasks on every SQL Server.

SQL Server has the ability to manage the necessary DBA tasks from one single server in a couple of ways:

MSX/TSX

Its main purpose is managing multi-server jobs, although MSX/TSX offers more possibilities. Multi-server jobs are jobs defined on one server (the MSX) to be executed on multiple SQL Servers (the TSX targets).

Multiserver administration requires that you set up a master server (MSX) and one or more target servers (TSX). Jobs that will be processed on all the target servers are first defined on the master server and then downloaded to the target servers.

Select multiple targets for the job:

On a target the job will look like this:

Any schedule attached to a job honours the local time of that SQL Server.

Targets do NOT need to be in the same domain.

Read more here: https://docs.microsoft.com/en-us/sql/ssms/agent/create-a-multiserver-environment
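
As a sketch, a multi-server job can also be created in T-SQL on the master (MSX) server, assuming the target servers are already enlisted; the job, step, and server names below are placeholders:

USE msdb;
GO
-- Define the job and a step on the master (MSX) server
EXEC dbo.sp_add_job @job_name = N'NightlyIntegrityCheck';
EXEC dbo.sp_add_jobstep
    @job_name  = N'NightlyIntegrityCheck',
    @step_name = N'CheckDB',
    @subsystem = N'TSQL',
    @command   = N'DBCC CHECKDB;';
-- Assign the job to enlisted target (TSX) servers instead of the local server
EXEC dbo.sp_add_jobserver @job_name = N'NightlyIntegrityCheck', @server_name = N'TSX-SERVER-01';
EXEC dbo.sp_add_jobserver @job_name = N'NightlyIntegrityCheck', @server_name = N'TSX-SERVER-02';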

Central Management Server

Central management servers store a list of instances of SQL Server that is organized into one or more central management server groups. Actions that are taken by using a central management server group act on all servers in the server group. This includes connecting to servers by using Object Explorer and executing Transact-SQL statements and Policy-Based Management policies on multiple servers at the same time.

Read more here: https://docs.microsoft.com/en-us/sql/ssms/register-servers/create-a-central-management-server-and-server-group

There are many more techniques for managing a large SQL Server estate.

 

SQL Server Replication

07 Aug 2019 0 minutes to read Contributors

Image result for SQL Server replication

Replication

Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Use replication to distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.

Microsoft Sync Framework

As an alternative to replication, you can synchronize databases by using Microsoft Sync Framework. Sync Framework includes components and an intuitive and flexible API that make it easy to synchronize among SQL Server, SQL Server Express, SQL Server Compact, and SQL Azure databases. 

 

More info here: https://docs.microsoft.com/en-us/sql/relational-databases/replication/sql-server-replication

Auditing Technologies

07 Aug 2019 0 minutes to read Contributors

Image result for auditing

SQL Server Audit

Auditing an instance of the SQL Server Database Engine or an individual database involves tracking and logging events that occur on the Database Engine. SQL Server audit lets you create server audits, which can contain server audit specifications for server level events, and database audit specifications for database level events. Audited events can be written to the event logs or to audit files.

More info here: https://docs.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine
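
As an illustration, a minimal audit that captures failed logins to a file could look like this (the file path and object names are placeholders):

USE master;
GO
-- Create the audit destination
CREATE SERVER AUDIT LoginAudit
    TO FILE (FILEPATH = N'C:\Audit\');
GO
-- Define which server-level events feed the audit
CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO
-- Switch the audit on
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);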

Triggers

Create a DML, DDL, or logon trigger. A trigger is a special type of stored procedure that automatically runs when an event occurs in the database server. DML triggers run when a user tries to modify data through a data manipulation language (DML) event. DML events are INSERT, UPDATE, or DELETE statements on a table or view. These triggers fire when any valid event fires, whether table rows are affected or not. 

More info here: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql
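
For example, a simple DML trigger that logs deletes might look like this (the Orders and OrdersAudit tables are hypothetical):

-- Log every DELETE on dbo.Orders into an audit table
CREATE TRIGGER trg_Orders_Delete
ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrdersAudit (OrderId, DeletedAt, DeletedBy)
    SELECT d.OrderId, SYSUTCDATETIME(), SUSER_SNAME()
    FROM deleted AS d;  -- "deleted" holds the rows removed by the statement
END;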

Temporal tables

SQL Server 2016 introduced support for temporal tables (also known as system-versioned temporal tables) as a database feature that brings built-in support for providing information about data stored in the table at any point in time rather than only the data that is correct at the current moment in time. Temporal is a database feature that was introduced in ANSI SQL 2011.

More info here: https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables
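
A minimal system-versioned table, with table and column names chosen purely for illustration, looks like this:

CREATE TABLE dbo.Employee
(
    EmployeeId INT NOT NULL PRIMARY KEY CLUSTERED,
    Salary     DECIMAL(10, 2) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Query the data as it was at a given point in time
SELECT * FROM dbo.Employee FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00';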

Change Data Capture

Change data capture records insert, update, and delete activity that is applied to a SQL Server table. This makes the details of the changes available in an easily consumed relational format. Column information and the metadata that is required to apply the changes to a target environment is captured for the modified rows and stored in change tables that mirror the column structure of the tracked source tables. Table-valued functions are provided to allow systematic access to the change data by consumers.

More info here: https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server
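
Enabling change data capture takes two steps, sketched below with placeholder database and table names:

USE MyDatabase;
GO
-- Step 1: enable CDC at the database level
EXEC sys.sp_cdc_enable_db;
GO
-- Step 2: enable CDC for a specific table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;  -- NULL = no gating role required to read changes
GO
-- Changes then become readable via generated functions such as
-- cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all')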

Extended Events

The SQL Server Extended Events architecture enables users to collect as much or as little data as is necessary to troubleshoot or identify any change. Extended Events is configurable, and it scales very well.

More info here: https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/extended-events
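
As a small example, the session below (the session name and file path are placeholders) captures statements running longer than one second into an XEL file target, the same kind of file the import script earlier in this document consumes:

CREATE EVENT SESSION CaptureSlowQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.database_name, sqlserver.client_hostname)
    WHERE duration > 1000000  -- duration is measured in microseconds
)
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\SlowQueries.xel');
GO
ALTER EVENT SESSION CaptureSlowQueries ON SERVER STATE = START;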

There are more techniques available.

License Activation

07 Aug 2019 0 minutes to read Contributors

 

Methods for activating a license

The SQLTreeo addin can be activated online, or offline for environments that have no connection to the network.

It is recommended to activate the software online, as the offline method will be deprecated.

Requirements:

  • SQLTreeo Addin installation
  • SSMS (version 2005 or higher)
  • Browser (for offline activation)
  • Notepad
  • License key

Guide to activate SQLTreeo offline:

  1. Install SQLTreeo SSMS Addin
  2. Open SQL Server Management Studio
  3. Go to: Tools –> SQLTreeo –> License
  4. Check the “Use offline activation” box and proceed to Add Key…
  5. Insert your license key and obtain a token.
  6. Open Notepad and copy + paste the token there.
  7. Copy + paste in Notepad: http://ws.sqltreeo.com/sqltreeo.asmx/ActivateOffline?activationToken=(Paste_license_key_here)&hostSecret=(Paste_generated-token_here)
  8. Replace the (…) fields with your license key and token as shown in step 7, copy + paste the URL in any browser, and execute.
  9. A response will be generated, for example: ”SUCCESS: C/FSRSTR6HwUHRP2i6/qmQ==“.
  10. Copy + paste the response into the SQLTreeo dialog: Server response.
  11. SQLTreeo is now licensed.
  12. To activate it on another workstation, first deactivate the license, and follow steps 2 to 10 again on the other workstation.

 

Online activation

To activate your license key after installing the software, navigate to SQL Server Management Studio --> Tools --> SQLTreeo --> License, where you can paste the license key and press Activate.

After activation it is required to restart SSMS for the software to work.

Help needed?

You can contact us any time on our website (support section) or send us a mail at servicedesk@sqltreeo.com.

Change workstation

07 Aug 2019 0 minutes to read Contributors

Changing workstations

The de-activation and re-activation steps are the same for both online and offline activation: all licenses must first be de-activated and then re-activated according to the steps explained in https://www.sqltreeo.com/docs?id=216 for offline activation and https://www.sqltreeo.com/docs?id=217 for online activation.

Duration

The process of deactivating SQLTreeo on your current workstation and re-activating it takes approximately 5 minutes.

Version

Moving a SQLTreeo license key to another workstation is only possible from v0.8.4 or higher.

Prerequisite

Before you switch to a new workstation or reinstall the current one, there are certain steps you need to take. This ensures that SQLTreeo can be used on the new workstation.

  • Verify that your SQLTreeo is activated.

Check this in the SSMS GUI –> Tools –> SQLTreeo –> License.

Once you have verified that the license key is activated, press the Deactivate button, after which you will be prompted to confirm the deactivation.

After the confirmation you need to restart SSMS for the change to take effect.

Now your SQLTreeo license key is deactivated.

Moving the license

Follow this link: https://www.sqltreeo.com/sqltreeo-ssms-add-in to install the SQLTreeo software on the destination workstation.

After successful installation, open SSMS. You will be notified that SQLTreeo is not licensed.

Open Tools –> SQLTreeo –> License and enter your license key.

You will be prompted to restart SSMS to activate the software.

 

 

 

SQLTreeo SQL Server monitoring

07 Aug 2019 0 minutes to read Contributors

SQLTreeo Monitoring

The SQLTreeo monitoring agent is software developed by SQLTreeo for SQL Server database administrators, developers, and anyone else who makes frequent use of SQL Server.

The software allows users to quickly gain insight into their environment, such as performance and the status of key points, and to create thresholds for automated monitoring.

The agent, which can be installed on any server, collects non-user metrics from the assigned SQL Servers and displays this information in a secure fashion in the user dashboard. The collecting of metrics is done in a way that avoids putting pressure on server resources or production.

 

Monitoring components

07 Aug 2019 0 minutes to read Contributors

SQLTreeo monitoring is built in a way that makes it easy to set up your monitoring environment in a matter of minutes.

The main components of SQLTreeo monitoring are as follows:

- Monitoring environment, which includes all the dashboards and configurable options

- Monitoring Agent, which is a SAAS application that monitors your servers by collecting metrics.

 

 

Copy a SQL login without knowing the password

07 Aug 2019 0 minutes to read Contributors

Situation

You don't know the password of a login, but you need to copy it to a destination server.

Solution

You do not need to know the password of a login. You can copy the password (encrypted) to create it on the destination server.

Script

We create the script on the source server. We include checks to see if the login already exists when we create it on the destination server. We take the encrypted password hash (converted to NVARCHAR(MAX)) and the SID to make sure they are the same on the source and destination servers (in case you also want to copy the databases, the security mapping stays intact).

SELECT 'IF(SUSER_ID('+QUOTENAME(SL.name,'''')+') IS NULL)
BEGIN
CREATE LOGIN '+QUOTENAME(SL.name)+' WITH PASSWORD = '+CONVERT(NVARCHAR(MAX),SL.password_hash,1)+' HASHED, SID = '+CONVERT(NVARCHAR(MAX),SL.sid,1)+'
END'
FROM sys.sql_logins AS SL
WHERE name not like '##%##' and name <> 'sa'
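
Run this on the source server and execute the generated statements on the destination server. A generated statement looks roughly like the sketch below (the login name, hash, and SID are shortened, illustrative values):

-- Illustrative output only; real hash and SID values are much longer
IF(SUSER_ID('app_user') IS NULL)
BEGIN
CREATE LOGIN [app_user] WITH PASSWORD = 0x0200A1B2... HASHED, SID = 0x95C1D2...
END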

Register a SQLTreeo account

07 Aug 2019 0 minutes to read Contributors

To access your SQLTreeo software applications it is necessary to have an account.

Creating an account allows you to access all applications, manage licenses, and submit requests for support.

Registration can be done on https://www.sqltreeo.com/#loginmodel

or, when you have already downloaded an application, you can click "register now" in the login screen of the DBASuite.

After registration the account needs to be verified through a verification e-mail.

Setting up a monitoring environment

07 Aug 2019 0 minutes to read Contributors

A monitoring environment in SQLTreeo is a platform in which you can add servers, create new thresholds and agent(s), analyse alerts, and view near-realtime data through the various dashboards.

It is possible for one account to create multiple monitoring environments, depending on personal preferences. For example, if you're required to monitor two clients, each with their own servers, where the data cannot be exposed to one another, you can create environment A for client 1 and environment B for client 2.

To set up a monitoring environment, navigate to https://www.sqltreeo.com/dashboards/servers or simply click the "monitoring" tab, which brings you to a demo environment.

Proceed to select an environment, either by clicking the environment dropdown list in the left panel, or by selecting the environment name under your username in the top right to create a new environment.

Downloading the Monitoring Agent

07 Aug 2019 0 minutes to read Contributors

The monitoring agent is a software application that can be installed as a Service.

The agent is responsible for collecting metrics from the targeted SQL Server instances and sending them back to the monitoring environment for display.

The agent package can be downloaded from the left menu item "Agents" once you are in your environment; there you first add the agent prior to downloading.

Initiating the download directly starts the download of the SQLTreeoDBASuite, from which you install the SQLTreeo monitoring agent module.

Configuring the agent for monitoring

07 Aug 2019 0 minutes to read Contributors

The monitoring agent, which can be downloaded as part of the SQLTreeoDBASuite, can be opened to select the environment to which the agent was added and which it should monitor.

Opening the agent starts the installation wizard; after logging in with your registered account, you can select the environment and decide whether to install the agent as a service or run it in a console.

It is recommended to run it as a service, since closing the console stops the agent from monitoring; monitoring only resumes after the console is started again.

Adding servers to your environment for monitoring

07 Aug 2019 0 minutes to read Contributors

When the setup of the environment and monitoring agent is completed, the next step is to add the target servers that the agents will monitor, so their information can be displayed in your environment.

Adding servers is done in the Configuration section of the left menu, under Servers.

Following the instructions on how to add a server completes registering the server in the environment.

To finalize the process and start monitoring, navigate to the agent and assign the servers to it. It will then automatically begin collecting data, which will initially appear slowly (it may take up to a few minutes to see data) in the respective dashboards.

Database replication: How to get it right!

07 Aug 2019 0 minutes to read Contributors

 

 

Many modern businesses need to have multiple copies of their database, or database objects, in different locations. You may have offices in several countries, or perhaps you have fully remote working teams, or you want to distribute the load by separating, for example, your report server and production server. However your company is structured, there will probably be times when you need to ensure authorized users have access to your database in different places.

Replication is a great way to create multiple copies of your databases or database objects and distribute them over multiple locations across your organization, so they can be used for purposes such as operational reporting, testing, load balancing and disaster recovery.

“By failing to prepare, you are preparing to fail.” Benjamin Franklin

It’s worth remembering that there are different types of database replication and each has its own pros and cons. Every company also has individual needs. It’s essential to consider which type of replication will best fit your current requirements and future plans.

As a starting point for your business preparations, here’s an overview of the three main, and quite distinct, SQL database replication types.

Snapshot Replication

As the name suggests, this is where a snapshot of your data at a specific moment is generated and sent to the subscribers. It does not check for subsequent updates to the data. This type of replication is commonly used to initialize transactional or merge replication. Snapshot replication is generally used in three scenarios: changes to the data or database objects are infrequent, real-time consistency with the publisher is not necessary, or the data sets involved are small or a large volume of changes occurs within a short time frame.

Transactional Replication

Transactional replication offers four different types of publications. With standard transactional replication, any data changes or modifications done on the publisher are replicated (delivered) in near real-time to the respective subscribers. This type of replication is commonly used if you want low latency between changes done on the publisher and their replication to the subscriber(s), or if there is a high volume of Data Manipulation Language (DML) statements on the publisher. It is recommended to treat the subscriber in standard transactional replication as read-only, as making modifications to the data or schema causes data consistency errors.

Merge Replication

Like transactional replication, merge replication also starts with a snapshot of the publication database. In merge replication, changes in data or modifications can be made both on the publisher and the subscriber(s). This is commonly used in scenarios where multiple subscribers update the same data and the changes need to be sent to the publisher to obtain a uniform (identical) result over all nodes.

The type of replication you use depends very much on how and why you are using the data. Is it just a one-way flow of information or do you want users to be able to make changes? If you are creating specific reports, do you need to replicate the entire database or just certain tables or database objects? You also need to think about security, such as encrypting the connections between the publisher and subscriber using several methods or providing only the necessary permissions to the replication agents.

You can gain many benefits from replicating your SQL databases. It can help you run your business more smoothly and efficiently, and deal with increased workloads and disaster recovery challenges. However, it is essential that you plan and prepare to ensure you have the optimal type of replication in place that fits your business requirements.

If you’d like to know more, we are always happy to answer questions and discuss individual requirements. You can contact us here.

Knowledge we share

07 Aug 2019 0 minutes to read Contributors

Blogs of SQLTreeo

Adding thresholds for your servers

07 Aug 2019 0 minutes to read Contributors

Thresholds are required to monitor the servers in your environment. The thresholds can be configured in the left menu under configuration, "Add threshold".

The agent will start collecting data for the configured thresholds and represent the data in your monitoring environment.

 

Pre-requisite

Thresholds are created and assigned per group, allowing you to assign thresholds to the specified servers of interest and to filter out unwanted notifications from lower-priority servers.

Before creating the thresholds it is necessary to first create an action; the action defines what happens when a threshold is triggered.

The action can be one of the following, but is not limited to: email, SNMP, or integration with third-party applications (communication platforms, or existing monitoring tools such as Nagios).

 

Configuring a threshold

Thresholds are divided into two categories: conditions defined by SQLTreeo, and SQL Server counters. The SQLTreeo-defined conditions are mainly focused on daily activities or checks.

 

Once the threshold values are filled in, the threshold needs to be assigned to a pre-defined action, which will be executed when the threshold is triggered.

 

 

 

Deactivating SQLTreeo SSMS addin License

07 Aug 2019 0 minutes to read Contributors

Deactivating a license is a pre-requisite in several scenarios:

- Activating a license on a different workstation.

- Resetting a license that is already activated.

 

It is recommended to always deactivate a license before activating it on another workstation to avoid invalid activation.

In some cases, such as corruption or loss of the machine, the license can no longer be accessed; this issue can be solved in the license overview panel.

Deactivating the SSMS Addin license

The SQLTreeo Add-in is always activated from within SQL Server Management Studio, by navigating to Tools and selecting SQLTreeo.
A license that is already activated can be deactivated by selecting the "Deactivate" button in the SQLTreeo interface in SSMS.

After restarting the SSMS application the license is completely deactivated and can be activated on a different workstation.

Deactivating without access to the software

In some scenarios the software is no longer accessible, for reasons such as disk corruption, hardware failure, or loss of possession.

In these scenarios the software cannot be deactivated through the regular steps, and the license also cannot be activated on a different workstation.

The solution for deactivating the license without access is as follows:

- Log in to your user account on www.sqltreeo.com

- Navigate to license history and select the +/- symbol in the Disable/Enable column.

- Disable the license (enabled by default)

- Enable the license

 

The above steps reset the license key, after which it can be activated through the standard procedure again.

 

 

 

 

Stretching a database to the Microsoft Azure cloud

07 Aug 2019 0 minutes to read Contributors

Database stretching is a SQL Server feature where data can be split between on-premises and cloud storage.

This is commonly used for cold, historical data that still needs to be accessible.

Stretching cold data to the cloud keeps the active data, which in most cases is relatively small, on-premises for maximum performance.

 

When to stretch your database?

Stretching a database has several benefits:

- Cold, historical data needs to be accessible.

- Large retention periods are required.

- Large tables that have huge amounts of rows containing historical data.

- Due to large datasets it is a common occurrence to have to add extra resources.

- Maintenance jobs/tasks take longer due to the large size of databases and tables (e.g. creating indexes, index rebuilds, database integrity checks, backups).

- There is a need to save costs on storage facilities.

- SLA requirements, e.g. downtime must be minimal, but due to a large database containing historical data the restore process will take more time.

 

If any of the above matches your situation, it might be worthwhile to consider database stretching.

 

 

Pre-requisites

Before database stretching can be implemented there are a few actions that need attention first.

- SQL Server version 2016 or higher.

- Microsoft Azure subscription.

- Microsoft Azure account.

- Check limitations on tables (constraints, DML operations, indexes, properties).

- Check limitations on datatypes (column types, keys).

 

Without an Azure subscription or account it is not possible to implement database stretching to the cloud.

The subscriptions start from 100 DSU (Database Stretch Units) up to 2000 DSU.

When implementing a stretch database there are several other factors to account for: the initial setup might affect your production performance, and depending on the table sizes it can take a considerable amount of time before the migration is completed.
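
At a high level, enabling Stretch in T-SQL looks like the sketch below; the database, table, server, and credential names are placeholders, and the database-scoped credential for the Azure server must already exist:

-- 1. Allow the instance to use remote data archiving
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO
-- 2. Link the database to the Azure stretch server
ALTER DATABASE MyDatabase
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = N'mystretchserver.database.windows.net',
         CREDENTIAL = [MyStretchCredential]);
GO
-- 3. Stretch an individual table; eligible rows start migrating to Azure
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));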