The biggest headache when managing any database system is enforcing data quality and consistency across the entire dataset. This can range from ensuring that field values are entered correctly through to enforcing rules that prevent duplicate records from even touching the periphery of your database. If you are using an application like CRM Online/Dynamics 365 for Enterprise (D365E), then the built-in functionality within these systems can assist with this, via the use of Option Sets for field values and handy features such as Duplicate Detection Rules. If you have developed your own system, then all due thought should be directed towards ensuring that your system can adequately enforce data quality at the client side, wherever possible. Doing this early on can save you a lot of work down the line.

If the backend database system you are using is SQL Server, then, fortunately, the SQL standard (and, specifically, Transact-SQL or T-SQL, Microsoft's implementation of the standard) has some tricks up its sleeve that are worth noting. CONSTRAINT's are database objects that can be set up to enforce…well, constraints on the values that are allowed within a particular field. There are a number of different CONSTRAINT's at our disposal, but the most commonly used ones are:

  • PRIMARY KEY CONSTRAINT: Typically used to indicate a column that will always contain a unique value in the database, to facilitate working with a specific row of data in a table. Any field type can be set up as a PRIMARY KEY, but it is generally recommended to use an Integer (INT) or Globally Unique Identifier (UNIQUEIDENTIFIER) field, more generally referred to as a GUID.
  • FOREIGN KEY CONSTRAINT: FOREIGN KEY's are used to indicate a field within a database that is a reference to another table within the database. Values entered into this field must match against a record within the related table, and you can configure a wide range of options for how the record behaves when the related "parent" record is modified or removed in the database. Those coming from a CRM/D365E background can grasp this better by realising that these are essentially lookup fields as part of a one-to-many (1:N) relationship.
  • DEFAULT CONSTRAINT: On some occasions, when a value is not entered into a column, you may need to ensure that something is put into the field. This is particularly the case when you are working with NOT NULL fields, which always require a value. A DEFAULT CONSTRAINT gets around this issue by allowing you to specify an initial value for the column, should an INSERT statement not supply one.
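As a quick, illustrative sketch of how these three CONSTRAINT types can sit together on a single table, the snippet below defines a hypothetical Order table; the table/column names are purely for demonstration and assume that a dbo.Customer table with a CustomerID primary key already exists:

CREATE TABLE [dbo].[Order]
(
	--PRIMARY KEY: uniquely identifies each row in the table
	[OrderID] UNIQUEIDENTIFIER NOT NULL
		CONSTRAINT PK_Order_OrderID PRIMARY KEY CLUSTERED,
	--FOREIGN KEY: the value entered must match an existing record in dbo.Customer
	[CustomerID] INT NOT NULL
		CONSTRAINT FK_Order_Customer REFERENCES [dbo].[Customer] ([CustomerID]),
	--DEFAULT: populated automatically if an INSERT does not supply a value
	[CreatedOn] DATETIME NOT NULL
		CONSTRAINT DF_Order_CreatedOn DEFAULT (GETUTCDATE())
)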

W3 Schools have a handy list that covers all possible CONSTRAINT's within SQL, but be sure to cross-reference this with the relevant Microsoft documentation! As an implementation of the standard, T-SQL can have a few nice – but sometimes surprising – differences that you should be aware of 🙂 The upside of all of this is that, if you need to ensure that your database column values are protected against erroneous entries, then a CHECK CONSTRAINT is your first port of call. What's even better is that these can be set up rather straightforwardly to ensure, for example, that a field only ever conforms to a specific set of values.

A practical illustration is the best way to demonstrate the ease of working with CHECK CONSTRAINT's – and the potential pitfall you may hit when they involve a large list of potential values. Let's say you want to create a table with a field – TestField – that should only ever accept the values A, B or C. Your CREATE TABLE script would look something like this:

CREATE TABLE [dbo].[Test]
(
	[TestField] CHAR(1)	NULL,
	CONSTRAINT CHK_TestField CHECK ([TestField] IN ('A', 'B', 'C'))
)

This works well within the following situations:

  • You are working with a handful of values that need to be checked – ideally no more than a dozen.
  • You can guarantee that the list of values will not be subject to frequent changes.

If your situation sits at the opposite end of the parameters specified above, you may assume that the best way to build a sustainable solution is via a dedicated lookup table within your database. The idea here is that the list of values required for the CONSTRAINT can be managed in bulk, updated/removed via common T-SQL statements, and you avoid having to maintain particularly long-winded table scripts within your database. The following script will create a lookup table that records the fields you require CONSTRAINT's for (assuming this is the case; this can be removed at your discretion) and the values that need checking:

CREATE TABLE [dbo].[lkp_Test] 
(
	[lkpTestID]	INT	IDENTITY(1,1) NOT NULL,
	CONSTRAINT PK_lkp_Test_lkpTestID PRIMARY KEY CLUSTERED ([lkpTestID]),
	[FieldName] VARCHAR(100)	NOT NULL,
	[Value]		VARCHAR(100)	NOT NULL
)
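With the lookup table in place, the allowed values for TestField can then be maintained via ordinary T-SQL statements. For example, to seed the same three values used in the earlier CHECK CONSTRAINT:

INSERT INTO [dbo].[lkp_Test] ([FieldName], [Value])
VALUES	('TestField', 'A'),
	('TestField', 'B'),
	('TestField', 'C');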

Those who have good, but not extensive, experience with T-SQL may then assume that we can simply modify our CHECK constraint to query the table directly, similar to the below:

--To recreate an existing Constraint, it has to be dropped and recreated

ALTER TABLE [dbo].[Test]
DROP CONSTRAINT CHK_TestField;

GO

ALTER TABLE [dbo].[Test]
ADD CONSTRAINT CHK_TestField CHECK ([TestField] IN (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = 'TestField'));

GO

After executing the command, your next reaction may be confusion, as an error is thrown back at you along the lines of "Subqueries are not allowed in this context. Only scalar expressions are allowed."

What this means is that we cannot define a query within our CONSTRAINT to "look up" the allowed values from our table and enforce only the list of approved values. The workaround is to utilise a user-defined function, or UDF. The function performs the necessary query against our lookup table, but satisfies the requirement of returning a single (i.e. scalar) value, which we can then filter against in our CONSTRAINT. Functions are a topic that has been covered previously on the blog and, when used effectively, can help to encapsulate complicated functionality within simple objects that can be referenced via your T-SQL queries. Below is a function that can be set up to achieve the requisite conditions for our scenario:

CREATE FUNCTION dbo.fnGetTestConstraintValues(@FieldName VARCHAR(100), @Value VARCHAR(100))
RETURNS VARCHAR(5)
AS
BEGIN
	IF EXISTS (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = @FieldName AND [Value] = @Value)
		RETURN 'TRUE';
	IF @Value IS NULL
		RETURN 'TRUE';
	RETURN 'FALSE';
END

GO

Let’s break down what this is doing in more detail:

  • When called, you must supply two parameters with values – the name of the field that needs to be checked against (@FieldName) and the value to check (@Value). We’ll see how this works in practice in a few moments.
  • The function will always return a single value – either TRUE or FALSE – thereby ensuring we have a scalar value to interact with.
  • Because TestField is defined as NULL-able, we have to add some additional logic to handle NULL values. SQL Server does not normally enforce a CHECK constraint against NULL values that are inserted/updated, but because our function returns a definite 'FALSE' when no matching record is found in lkp_Test, a NULL value would otherwise cause the CONSTRAINT to reject the row. By adding an IF condition that checks for NULL's and returns 'TRUE' in these instances, NULL values can still be entered into the database successfully.

With this function now in place, our CONSTRAINT just needs to be modified to call the function, verifying that TRUE is returned; if not, then the field value will not be allowed:

ALTER TABLE [dbo].[Test]
ADD CONSTRAINT CHK_TestField CHECK (dbo.fnGetTestConstraintValues('TestField', [TestField]) = 'TRUE');
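To sanity check the end result, we can then fire a few test INSERT statements at the table; based on the lookup values seeded earlier, the first two should succeed and the third should be rejected with a CHECK constraint violation:

INSERT INTO [dbo].[Test] ([TestField]) VALUES ('A');	--Succeeds: 'A' exists in dbo.lkp_Test
INSERT INTO [dbo].[Test] ([TestField]) VALUES (NULL);	--Succeeds: NULL values are explicitly allowed by the function
INSERT INTO [dbo].[Test] ([TestField]) VALUES ('Z');	--Fails: the function returns 'FALSE', so the CHECK CONSTRAINT rejects the value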

Now we are able to reference a query from our CONSTRAINT in a supported manner – excellent! 🙂 CONSTRAINT's are a really useful tool at the disposal of any database administrator/developer, and something that you should always have at the front of your mind as part of any design. Doing so will ensure that you are building a solution which, as much as is realistically conceivable, keeps the data within your database as squeaky clean as possible.

The very recent Microsoft Data Amp event provided an excellent forum for the SQL Server 2017 announcement, which is due to be released at some point this year. Perhaps the most touted feature of the new version is that it will be available to install on Linux; an entirely inconceivable premise 10 years ago, which just goes to show how much Microsoft have changed in their approach to supporting non-Windows platforms as standard. Past the big headline announcements, there is a lot to look forward to underneath the hood with SQL Server 2017 that may act as encouragement for organisations looking to upgrade in the near future.

In this week's post, I'll be taking a closer look at the 3 new features I am most looking forward to, which are present within the SQL Server Community Technology Preview (CTP) 2.0 release and will form part of SQL Server 2017 when it launches later this year.

Power BI in SSRS: A Match Made in Heaven

This is by far the feature I am most looking forward to seeing in action. I have been working more and more with Power BI this year, often diving into the deep end in respect of what can be achieved with the product, and I have been impressed with how it can be used to address reporting scenarios that SSRS may struggle with natively. The announcement earlier this year that Power BI would be included as part of SSRS in the next major release of the product was, therefore, incredibly welcome, and its place within SQL Server 2017 is confirmed by the presence of Power BI reports in the CTP 2.0 release.

For those who are already familiar with Power BI, there is thankfully not much that you need to learn to get up and running with Power BI in SSRS. One thing to point out is that you will need to download a completely separate version of the Power BI Desktop App to allow you to deploy your Power BI reports to SSRS. I would hope that this is addressed once SQL Server 2017 is released, so that we can deploy from a single application to either the online service or SSRS 2017. Users who are experienced with the existing Power BI Desktop application should have no trouble using the equivalent product for SSRS, as they are virtually identical.

The actual process of deploying a Power BI report is relatively straightforward. After making sure that you have installed the SSRS Power BI Desktop Application, you can then navigate to your SSRS homepage and select + New -> Power BI Report:

You will be greeted with a prompt similar to the below and the Power BI Desktop application will open automatically:

Now it’s time to build your report 🙂 As an example, I have used the WideWorldImporters Sample Database to build a simplistic Power BI report:

If you were working with Power BI online, then this would be the stage where you would click the Publish button to get the report onto your online Power BI tenant. The option to deploy to your SSRS instance is currently missing from the Power BI in SSRS application; instead, you will need to manually upload your .pbix file into Reporting Services via the Upload button. Once uploaded, your report will be visible on the home page and can be navigated to in the usual manner:

Simplified CSV Importing

Anyone who has at least some experience working with databases and application systems should have a good overview of the nuances of delimited flat file types – in particular, Comma Separated Value (.csv) files. This file type is generally the de facto format when working with data exported from systems and, more often than not, will be the most common file type that you regularly need to import into a SQL Server database. Previously, if you didn't opt to use the Import Wizard/.dtsx package route to get your .csv file imported, you would have to rely on a script like the following example:

BULK INSERT dbo.TestTable
FROM 'C:\Test.csv'
WITH
	(
		FIELDTERMINATOR = ',',
		ROWTERMINATOR = '\n'
	)

Now, with SQL Server 2017, you can simplify your query by replacing FIELDTERMINATOR and ROWTERMINATOR with a new FORMAT parameter that specifies the file format we are concerned with:

BULK INSERT dbo.TestTable
FROM 'C:\Test.csv'
WITH (FORMAT = 'CSV');

Whilst the overall impact on your query length is somewhat negligible, it is nice that a much simpler means of accomplishing a common database task has been introduced, and that we now also have the option of accessing Azure Blob Storage locations for import files.
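As a rough sketch of the latter point, importing a file from Azure Blob Storage involves creating a database scoped credential (which requires a database master key to exist) and an external data source, which is then referenced via the DATA_SOURCE argument. The storage account, container and SAS token below are placeholders that you would substitute for your own:

--Credential holding a Shared Access Signature for the storage account (omit the leading '?' from the token)
CREATE DATABASE SCOPED CREDENTIAL AzureBlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS token>';

--External data source pointing at the container holding the .csv file
CREATE EXTERNAL DATA SOURCE AzureBlobImport
WITH
	(
		TYPE = BLOB_STORAGE,
		LOCATION = 'https://mystorageaccount.blob.core.windows.net/imports',
		CREDENTIAL = AzureBlobCredential
	);

BULK INSERT dbo.TestTable
FROM 'Test.csv'
WITH (DATA_SOURCE = 'AzureBlobImport', FORMAT = 'CSV');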

Updated Icons for SSMS

Typically, as part of any major update to the application, the "under the hood" visual side of things is generally not changed much. A good example of this can be found within CRM Online/Dynamics 365 for Enterprise, in the Customizations area of the application, which has not seen much of a facelift since CRM 2011. As a result, a lot of the icons can look inconsistent with the application as a whole. As these are generally the areas of the application that we use the most day in, day out, it can be a little discouraging not to see them get any love or attention as part of a major update… 🙁

With this in mind, it is pleasing to see that the updated SSMS client for SQL Server 2017 has been given refreshed icons that bring the application more in line with how Visual Studio and other Microsoft products are looking these days. Below is a comparison screenshot, comparing SSMS 2014 with SSMS 2017:

Conclusions or Wot I Think

Whilst there is a lot more to look forward to with the new release that is not covered in this post (for example, the enhancements to R Server and deeper integration with AI tools), I believe that the most exciting and important announcement for those with their Business Intelligence/Reporting hats on is the introduction of Power BI into SSRS. Previously, each tool was well suited to a specific reporting purpose – SSRS was great for designing reports that require a lot of visual tailoring and widely used export formats, whereas Power BI is more geared towards real-time, dashboard views that marry together disparate data sources in a straightforward way. By being able to leverage SSRS to fully utilise Power BI reports, the application suddenly becomes a lot more versatile and the potential for combining their functionality becomes much more apparent. So, for example, having the ability to drill down to an SSRS report from a Power BI report would be an excellent way of providing reporting capabilities that satisfy end-user consumption in 2 different, but widely applicable, scenarios.

In summary, the SQL Server 2017 release looks to be very much focused on bringing the product up to date with the new state of play at Microsoft, successfully achieving cross-platform support alongside bringing exciting functionality (that was previously cloud-only) into the hands of organisations who still have a requirement to run their database systems on on-premise infrastructure. I'm eagerly looking forward to the release later this year and to seeing the product perform in action. 🙂

When you have spent any length of time working with Dynamics CRM Online/Dynamics 365 for Enterprise (D365E) data programmatically, you become accustomed to how Option Set, State and Status Reason values are presented to you in code. To explain, the application does not store your Option Set value display names within the SQL Server Entity tables; rather, the Option Set Value that has been specified alongside your Label is what is stored as an integer value. That is why you are always mandatorily prompted to provide both values within the application:

A number of benefits are realised as a result of how this is set up – not least that Labels can be renamed, or translated into other languages, without any need to modify the underlying record data.

That being said, when working with these field types in code, you always have to have the application window open, or a list of all Labels/Values to hand, so that you don't get too confused… 🙂

I have previously extolled the virtues of the Data Export Service on the blog, and explained why you should consider it if you have basic integration requirements for your CRM/D365E deployment. One area in which it differs from other products on the market is how it handles the field types discussed above. For example, when exporting data to a SQL database via Scribe Online, new columns are created alongside each Option Set, Status and Status Reason field that contain the corresponding "Display Name" (i.e. Label) value. So by running the following query against a Scribe export database:

SELECT DISTINCT statecode, statecode_displayname
FROM dbo.account

We get the best of both worlds – our underlying statecode values and their display names – all in 2 lines of code:

This is a big help, particularly when you are then using the data as part of a report, as no additional transformation steps are required and your underlying SQL query can be kept as compact as possible.

The Data Export Service differs from the above in an understandable way, as display name values for Status, Status Reason and Option Set column values are instead segregated out into their own separate table objects in your Azure SQL database:

  • OptionSetMetadata
  • GlobalOptionSetMetadata
  • StateMetadata
  • StatusMetadata

Why understandable? If you consider how the application can support multiple languages, then you realise that this can also apply to metadata objects across the application – such as field names, view names and – wouldn’t you have guessed it – Labels too. So when we inspect the OptionSetMetadata table, we can see that the table structure accommodates the storing of labels in multiple languages via the LocalizedLabelLanguageCode field:
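By way of illustration, a simple query along the lines of the below (using the column names referenced throughout the rest of this post) will list every Option Set Label stored for an entity, alongside the language each Label belongs to:

SELECT EntityName, OptionSetName, [Option], LocalizedLabel, LocalizedLabelLanguageCode
FROM dbo.OptionSetMetadata
WHERE EntityName = 'account'
ORDER BY OptionSetName, [Option], LocalizedLabelLanguageCode;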

Unlike the Scribe Online route above (which I assume only retrieves the Labels that correspond to the user account that authenticates with CRM/D365E), the Data Export Service becomes instantly more desirable if you are required to build multi-language reports referencing CRM/D365E application data.

The issue you have to reconcile yourself with is that your SQL queries, if expressed as natively as possible, instantly become a whole lot more complex. For example, to achieve the same results as the query above, it would have to be adapted as follows for the Data Export Service:

SELECT DISTINCT statecode, LocalizedLabel
FROM dbo.account
 LEFT JOIN dbo.StateMetadata
  ON EntityName = 'account'
  AND [State] = statecode
  AND LocalizedLabelLanguageCode = 1033

The above is a very basic example, but if your query is complex – and involves multiple Option Set fields – then you would have to resort to additional JOINs or Common Table Expressions (CTE's) to accommodate each Metadata lookup required to get the information you want.
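To give an indicative sketch of how quickly this escalates, returning both the Status and Status Reason labels for the account entity already requires two separate joins out to the Metadata tables (statuscode being the Status Reason field on the entity):

SELECT DISTINCT A.statecode, SM.LocalizedLabel AS statecode_displayname,
		A.statuscode, STM.LocalizedLabel AS statuscode_displayname
FROM dbo.account AS A
 LEFT JOIN dbo.StateMetadata AS SM
  ON SM.EntityName = 'account'
  AND SM.[State] = A.statecode
  AND SM.LocalizedLabelLanguageCode = 1033
 LEFT JOIN dbo.StatusMetadata AS STM
  ON STM.EntityName = 'account'
  AND STM.[Status] = A.statuscode
  AND STM.LocalizedLabelLanguageCode = 1033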

In these moments, we can look at some of the wider functionality provided as part of SQL Server to develop a solution that keeps things as simple as possible and, in this particular instance, a user-defined function is an excellent candidate to consider. These enable you to perform complex operations against the database platform and encapsulate them within very simply expressed objects that can also accept parameters. The good thing about functions is that they can be used to return either table objects or scalar (i.e. single) values.

Using a scalar function, we can, therefore, remove some of the complexity behind returning Option Set, Status and Status Reason labels by creating a function that returns the correct label, based on input parameters received by the function. You could look at creating a “master” function that, based on the input parameters, queries the correct Metadata table for the information you need; but in this example, we are going to look at creating a function for each type of field – Status, Status Reason, Option Set and Global Option Set.

To do this, connect up to your Data Export Service database and open up a new query window, ensuring that the context is set to the correct database. Paste the following code in the window and then hit Execute:

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

--Create Function to return Global Option Set Labels

CREATE FUNCTION [dbo].[fnGetGlobalOptionSetLabel]
(
	@GlobalOptionSetName NVARCHAR(64), --The logical name of the Global Option Set
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Global Option Set field ' + @GlobalOptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Option Set Labels

CREATE FUNCTION [dbo].[fnGetOptionSetLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Option Set field
	@OptionSetName NVARCHAR(64), --The logical name of the Option Set field
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Option Set field ' + @OptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Labels

CREATE FUNCTION [dbo].[fnGetStateLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status field
	@State INT, --The Status option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain State label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Reason Labels

CREATE FUNCTION [dbo].[fnGetStatusLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status Reason field
	@Status INT, --The Status Reason option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain Status label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

This will then go off and create the functions listed in code, which should then show up under the Programmability folder on your SQL database:

For those who are unsure of what the SQL code is doing, it first attempts to determine whether exactly 1 Label can be found for the appropriate field type, based on the parameters provided. If it is successful, then that value is returned; otherwise, the CAST function is used to force an error back to the caller, indicating that either no Label or more than 1 Label was found. In most cases, this would indicate a typo in the parameters you have specified.

As with anything, the best way to see how something works is in practice! So, returning to our earlier example in this post, we would utilise the dbo.fnGetStateLabel function as follows to return the correct label in English:

SELECT DISTINCT statecode, dbo.fnGetStateLabel('account', statecode, 1033) AS statecode_displayname
FROM dbo.account

With our results returning as follows:

Now we can expose this through our reports and not worry about having to do any kind of transformation/lookup table to get around the issue 🙂
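The remaining functions follow exactly the same pattern, so they can be freely combined within a single query. For example, a hypothetical report query returning both the Status and Status Reason labels for each Account record might look like this:

SELECT A.name,
	dbo.fnGetStateLabel('account', A.statecode, 1033) AS statecode_displayname,
	dbo.fnGetStatusLabel('account', A.statuscode, 1033) AS statuscode_displayname
FROM dbo.account AS A;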

Encapsulating complex functionality within simple, clearly expressed functions is an excellent way of keeping your code streamlined, and also ensures that other colleagues can accomplish complex tasks, even if they do not have in-depth knowledge of Transact-SQL.

Although CRM Online/Dynamics 365 for Enterprise (D365E) does provide a plethora of different tools aimed at satisfying reporting requirements for users of the application, you are restricted in how data can be queried within the application. For example, you cannot just connect straight up to the application's SQL database and start writing stored procedures that perform complex data transformations or joins. Traditionally, to achieve this, you would need to look at one of the several tools in the marketplace that enable you to export your data out into a format that best pleases you; or even take the plunge and get a developer to write a bespoke application that satisfies your integration requirements.

With the recent D365E release and in line with Microsoft's longstanding attitude towards customer data within their applications (i.e. "It's yours! So just do what you want with it!"), the parallel introduction of the Data Export Service last year further consolidates this approach and adds an arguably game-changing tool to the product's arsenal. By using the service, relatively straightforward integration requirements can be satisfied in a pinch, and a lot of the headache involved in setting up a replica of your organisation's databases for backup/LOB reporting purposes can be eliminated. Perhaps the most surprising and crucial aspect of all of this is that using the tool is not going to break the bank too much either.

In this week’s blog post, I’m going to take a closer look at just what the Data Export Service is, the setup involved and the overall experience of using the service from end-to-end.

What is the Data Export Service?

The Data Export Service is a new, free*, add-on for your CRM/D365E subscription, designed to accomplish basic integration requirements. Microsoft perhaps provides the best summary of what the tool is and what it can achieve via TechNet:

The Data Export Service intelligently synchronizes the entire Dynamics 365 data initially and thereafter synchronizes on a continuous basis as changes occur (delta changes) in the Microsoft Dynamics 365 (online) system. This helps enable several analytics and reporting scenarios on top of Dynamics 365 data with Azure data and analytics services and opens up new possibilities for customers and partners to build custom solutions.

The tool is compatible with versions 8.0, 8.1 and 8.2 of the application, which correspond to the following releases of CRM Online/D365E:

  • Dynamics CRM Online 2016
  • Dynamics CRM Online 2016 Update 1
  • Dynamics 365 December Update

*You will still need to pay for all required services in Azure, but the add-on itself is free to download.

The Installation Process

Getting everything configured for the Data Export Service can prove to be the most challenging – and potentially alienating – part of the entire process. For this, you will need the following at your disposal:

  • An active Azure Subscription.
  • An Azure SQL Server configured with a single database, or an Azure VM running SQL Server. Microsoft recommends a Premium P1 database or better if you are using an Azure SQL database, but I have been able to get the service working without any issue on S0 tier databases. This is an important point to make, given that the cost difference can amount to hundreds of pounds per month.
  • An Azure Key Vault. This is what will securely store the credentials for your DB.
  • PowerShell and access to the Azure Resource Manager (AzureRM) Cmdlets. PowerShell can be installed as an OS feature on Windows-based platforms, and can now be downloaded onto OS X/Linux as well. PowerShell is required to create an Azure Key Vault, although you can also use it to create your Azure SQL Server instance/Windows VM with SQL Server.

It is therefore recommended that you have at least some experience in how to use Azure – such as creating Resource Groups, deploying individual resources, how the interface works etc. – before you start setting up the Data Export Service. Failing this, you will have to kindly ask your nearest Azure whizz for assistance 🙂 Fortunately, if you know what you're doing, you can get all of the above set up very quickly; in some cases, in less than 10 minutes if you opt to script out the entire deployment via PowerShell.

For your setup with D365E, all that is required is the installation of the approved solution via the Dynamics 365 Administration Centre. Highlight the instance that you wish to deploy to and click on the pen icon next to Solutions:

Then click on the Solution with the name Data Export Service for Dynamics 365 and click the Install button. The installation process will take a couple of minutes, so keep refreshing the screen until the Status is updated to Installed. Then, within the Settings area of the application, you can access the service via the Data Export icon:

Because the Data Export Service is required to automatically sign into an external provider, you may also need to verify that your web browser pop-up settings/firewall are configured to allow the https://discovery.crmreplication.azure.net/ URL. Otherwise, you are likely to encounter a blank screen when attempting to access the Data Export Service for the first time. You will know everything is working correctly when you are greeted with a screen similar to the below:

Setting up an Export Profile

After accepting the disclaimer and clicking on the New icon, you will be greeted with a wizard-like form, enabling you to specify the following:

  • Mandatory settings required, such as the Export Profile Name and the URL to your Key Vault credentials.
  • Optional settings, such as which database schema to use, any object prefix that you would like to use, retry settings and whether you want to log when records are deleted.
  • The Entities you wish to use with the Export Service. Note that, although most system entities will be pre-enabled to use this service, you will likely need to go into Customizations and enable any additional entities you wish to utilise with the service via the Change Tracking option:

  • Any Relationships that you want to include as part of the sync: To clarify, this is basically asking whether you wish to include any default many-to-many (N:N) intersect tables as part of your export profile. The list of available options for this will depend on which entities you have chosen to sync. For example, if you select the Account, Lead and Product entities, then the following intersect tables will be available for synchronisation:

Once you have configured your profile and saved it, the service will then attempt to start the import process.

The Syncing Experience A.K.A Why Delta Syncing is Awesome

When the service first starts to sync, one thing to point out is that it may initially return a result of Partial Success and show that it has failed for multiple entities. In most cases, this will be because certain entities' dependent records have not yet been synced across (for example, any Opportunity record that references an Account named Test Company ABC Ltd. will not sync until this Account record has been exported successfully). So rather than attempting to interrogate the error logs straightaway, I would suggest holding off for a while. As you may also expect, the first sync will take some time to complete, depending on the number of records involved. My experience, however, suggests it is reasonably quick – for example, just under 1 million records took around 3 hours to sync. The fact that the service is essentially an Azure-to-Azure export no doubt helps in ensuring a smooth data transit.

Following on from the above, syncs will then take place as and when entity data is modified within the application. The delay appears to be very small indeed – often tens of minutes, if not just a few minutes. This, therefore, makes the Data Export Service an excellent candidate for a backup/primary reporting database to satisfy any requirements that cannot be achieved via FetchXML alone.

One small bug I have observed is with how the application deals with the listmember intersect entity. You may get errors thrown back indicating that records have failed to sync across successfully which, upon closer inspection, is not the case. Hopefully, this is something that will get ironed out; it appears to be due to the rather strange way that the listmember entity behaves when you interact with it via the SDK.

Conclusions or Wot I Think

For a free add-on service, I have been incredibly impressed by the Data Export Service and what it can do. For those who have previously had to fork out big bucks for services such as Scribe Online or KingswaySoft to achieve very basic replication/reporting requirements within CRM/D365E, the Data Export Service offers an inexpensive way of replacing them. That's not to say that the service should be your first destination if your integration requirements are complex – for example, integrating Dynamics 365 with SAP/Oracle ERP systems. In these cases, the names mentioned above will no doubt be the best services to look at to achieve your requirements in a straightforward way.

I also have a few concerns that the setup involved as part of the Data Export Service could be a barrier towards its adoption. As mentioned above, experience with Azure is a mandatory requirement to even begin contemplating getting set up with the tool, and your organisation may also need to reconcile itself with utilising Azure SQL databases or SQL Server instances on Azure VM's. Hopefully, as time goes on, we may start to see the setup process simplified – for example, by having the Export Profile Wizard go off and create all the required resources in Azure after you simply enter your Azure login credentials.

The D365E release has brought a lot of great new functionality and features to the table that have been oft requested and add real benefit to organisations who already use, or plan to use, the application in the future. The Data Export Service is perhaps one of the great underdog features of this release, and is one that you should definitely consider using if you want a relatively smooth-sailing data export experience.

Those who have experience working with an RDBMS like SQL Server will become accustomed to a certain way of going about things. These can often involve a mixture of "lazy" query writing (e.g. using SELECT *… as opposed to SELECT Column1, Column2…), the manner in which you write your query (ALL CAPS or lower case) and best practice approaches. One arguable example of a best practice approach is the use of Stored Procedures, and an illustration of how to use one can most readily demonstrate their benefits. Take a look at the T-SQL query below, which should execute fine against the AdventureWorks2012 sample database:

SELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name]
FROM [HumanResources].[Employee] AS E
 INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH
  ON E.[BusinessEntityID] = DH.[BusinessEntityID]
 INNER JOIN [HumanResources].[Department] AS D
  ON DH.[DepartmentID] = D.[DepartmentID]
 INNER JOIN [Person].[Person] AS P
  ON E.[BusinessEntityID] = P.[BusinessEntityID]
WHERE DH.[EndDate] IS NULL
AND E.[JobTitle] = 'Production Technician - WC50'

The query returns the data we need, but not in an efficient manner. Consider the following:

  • Executing a query like the above, in-code, as part of an end-user application could expose your database to the risk of an SQL Injection attack or similar.
  • The query gives away a lot of information regarding our underlying database structure – information which any client executing the query neither cares about nor should have to worry about.
  • The example is a very precise query, with a specific function – i.e. get me all the current employees who have the Job Title of Production Technician – WC50. If we wanted to modify it to instead obtain all Senior Tool Designers, we would have to write a completely separate query to accommodate this.

Implementing a Stored Procedure to encapsulate our query logic immediately addresses the above concerns, by providing us with a single-line query into the database, giving us just the data we need and enabling us to utilise the query for other scenarios as well. Setting one up is very straightforward via the CREATE PROCEDURE command – the rest is pretty much what we have put together already:

CREATE PROCEDURE dbo.uspGetEmployeesByJobTitle 
	@JobTitle NVARCHAR(50)
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

	SELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name]
	FROM [HumanResources].[Employee] AS E
	 INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH
	  ON E.[BusinessEntityID] = DH.[BusinessEntityID]
	 INNER JOIN [HumanResources].[Department] AS D
	  ON DH.[DepartmentID] = D.[DepartmentID]
	 INNER JOIN [Person].[Person] AS P
	  ON E.[BusinessEntityID] = P.[BusinessEntityID]
	WHERE DH.[EndDate] IS NULL
	AND E.[JobTitle] = @JobTitle
END
GO

By utilising a parameter for our WHERE clause filter on the Job Title, we can pass any valid value back to our stored procedure, immediately making our initial query more versatile across our reporting/business application. And, as an added bonus, we can now safely take a 10 line query down to 1:

EXECUTE dbo.uspGetEmployeesByJobTitle @JobTitle = 'Senior Tool Designer'

So we have established that Stored Procedures are wicked cool awesome – but what does this have to do with PowerBI?!? 

Having worked with SQL Server Reporting Services (SSRS) extensively in the past, I have become accustomed to using Stored Procedures as a mechanism for storing underlying query logic within the database and having a straightforward means of referencing this from my .rdl file. I can only assume from this that this is the “norm” and preferred method of querying SQL data, as opposed to a direct SELECT statement.

When recently doing some work within PowerBI involving Azure SQL databases, I was, therefore, surprised that there was no option to return data via a stored procedure by default. Instead, PowerBI would prefer me to directly query the underlying table/view objects:

Thankfully, when inspecting the underlying PowerQuery used to return an example table from the above, it doesn’t use any kind of SELECT query to get the data:

let
    Source = Sql.Databases("mydatabaseinstance"),
    AdventureWorks2012 = Source{[Name="AdventureWorks2012"]}[Data],
    Production_ProductModel = AdventureWorks2012{[Schema="Production",Item="ProductModel"]}[Data]
in
    Production_ProductModel

Unfortunately, the same cannot be said if you select the Advanced options area and input your own SQL query directly:

let
    Source = Sql.Database("mydatabaseinstance", "AdventureWorks2012", [Query="SELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name]#(lf)FROM [HumanResources].[Employee] AS E#(lf) INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH#(lf)  ON E.[BusinessEntityID] = DH.[BusinessEntityID]#(lf) INNER JOIN [HumanResources].[Department] AS D#(lf)  ON DH.[DepartmentID] = D.[DepartmentID]#(lf) INNER JOIN [Person].[Person] AS P#(lf)  ON E.[BusinessEntityID] = P.[BusinessEntityID]#(lf)WHERE DH.[EndDate] IS NULL#(lf)AND E.[JobTitle] = 'Senior Tool Designer'"])
in
    Source

I do NOT recommend you use the above method to query your SQL Server data!

I have spoken previously on the blog about conventions around working with datasets, i.e. only get what you need, and nothing else. As I work more and more with PowerBI, the tool very much seems to be geared towards flipping this mentality on its head. PowerBI has a number of built-in tools that seem to scream out "Just get ALL your data in here, we'll worry about the rest!". I realise that the difference between MB and GB these days, from a storage/cost point of view, is minimal; nevertheless, I still believe it is prudent not to put all your eggs in one basket and to ensure that your business data is not being stored cavalier-esque within a multitude of different cloud services.

With this in mind, it is good to know that you can utilise stored procedures in PowerBI. You basically have two ways in which this can be achieved:

  • Going back to the Advanced options screen above on the SQL Server database wizard, you can EXECUTE your stored procedure directly using the following SQL Statement:
DECLARE @SP VARCHAR(100) = 'dbo.uspGetEmployeesByJobTitle @JobTitle = ''Senior Tool Designer'''
EXEC (@SP)

Be sure to specify your database and don't forget to double up the single quotes around the parameter value!

  • If you prefer to use PowerQuery as opposed to the wizard above, then the following code will also work:
let
    Source = Sql.Database("mydatabaseinstance", "AdventureWorks2012", [Query="DECLARE @SP VARCHAR(100) = 'dbo.uspGetEmployeesByJobTitle @JobTitle = ''Senior Tool Designer'''#(lf)EXEC (@SP)"])
in
    Source

In both cases, you will be required to authenticate with the database and your result set should return as follows if using the AdventureWorks2012 example database/code:

Finally, as a best-practice security step, you should ensure that the account connecting to your SQL Server instance is restricted so that it can only EXECUTE the procedures you have specified. This can be achieved via the following T-SQL snippet, executed against your database:

GRANT EXECUTE ON OBJECT::dbo.uspMyStoredProcedure  
    TO MySQLServerLogin 
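If such an account does not already exist, a minimal sketch of creating one (with placeholder names and password) would be a server login, a corresponding database user and then the EXECUTE grant against the specific procedure used earlier in this post:

--Run against the master database to create the server login
CREATE LOGIN PowerBIReportingLogin WITH PASSWORD = '<use a strong password here>';

--Run against the reporting database to map the login to a database user
CREATE USER PowerBIReportingUser FOR LOGIN PowerBIReportingLogin;

--Grant permission to execute only the required stored procedure
GRANT EXECUTE ON OBJECT::dbo.uspGetEmployeesByJobTitle
	TO PowerBIReportingUser;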

Conclusions or Wot I Think

PowerBI is increasingly becoming a more relevant tool for professionals experienced with traditional Business Intelligence/Reporting Services. The bellwether for this can surely be seen in the current Technical Preview for SQL Server Reporting Services, which includes PowerBI reports built in to the application. Although we have no timescales at this stage as to when the next major version of SQL Server will be released, it is reasonable to assume it will arrive by the end of this year at the earliest, bringing PowerBI reports in SSRS with it as a new feature. I am really excited about the introduction of PowerBI into SSRS, as it would appear to be a match made in heaven – giving those with experience in both products the flexibility to develop a unified, best-of-breed solution, using traditional .rdl reporting capability and/or PowerQuery/DAX functionality.

With the above on the horizon, seamless integration with SQL Server and support for traditional, well-proven practices become crucial indicators of whether this match will be over before the honeymoon. And so, I would hope to see the option to access SQL Server data via Stored Procedures become standard when using the built-in data connector within PowerBI. Based on the feedback I have seen online, I'd wager that this feature would be warmly welcomed and an excellent way of reducing the need for direct coding to achieve a common requirement within PowerBI.