This is part 2 of a 5-part series, where we take a closer look at the practical implications the General Data Protection Regulation (GDPR) will have upon your organisation and some of the ways Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can assist you as part of the transition. Last week, we took a look at the database encryption feature within the application and why you should devote some time to understanding how it works. The primary focus of this week's post is how an organisation can ensure that highly sensitive data categories are made accessible to authorised individuals only.

All posts in the series will make frequent reference to the text (or “Articles”) contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union – a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO’s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.

Introduction – Sensitive Data Categories, their meaning and practical implications

We saw as part of last week's post the importance encryption plays as a “reasonable” step that any well-established organisation should have implemented to safeguard themselves against the risk of a data breach. The implications of a data breach are covered in more depth under Articles 33, 34 and 35 of the regulation. The key takeaway from this is that encryption is by no means a silver bullet, and you must instead look at a complementary range of solutions to mitigate the risk and impact of a data breach.

Although not technically a form of encryption, Field Level Security can be seen as an apparatus for providing encryption-like functionality on a very granular basis within your CRM/D365E deployment. Whilst implementing it does broadly conform to the specifications set out in Article 32 of GDPR, it also provides a means of satisfying some of the requirements set out in Article 9, which states clearly:

Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

Unless one of the following conditions applies:

  • The data subject has provided consent to record the data or has placed the details into the public domain.
  • The data needs to be processed as part of a specific line of legitimate business (employment, social security, social protection law, not-for-profit foundation/association, medical care, public health purposes or as part of scientific/historical research).
  • The recording of such personal details is required to protect the vital interests of the person concerned.

Many of the organisations covered by the conditions above may already be using CRM/D365E as their primary business system and, as a consequence, will be storing the types of information referenced above. Whilst this is surely a legitimate case of data processing, issues may arise, for example, when it comes to which persons within the organisation can see and access this data; a medical doctor/nurse accessing a patient's health information is appropriate, but a receptionist or member of IT support staff has no legitimate interest in viewing that same information. Having appropriate controls in place to protect against these types of scenarios becomes a primary concern under GDPR, and Article 30 enshrines this further by requiring organisations to clearly document and implement processes that define individuals' access to personal data:

Each controller and, where applicable, the controller’s representative, shall maintain a record of processing activities under its responsibility…[including] the categories of recipients to whom the personal data have been or will be disclosed including recipients in third countries or international organisations;

To summarise, therefore, by piggybacking upon the very robust security model contained within CRM/D365E, Field Level Security can very quickly be implemented to ensure that users of the system only see the information that is relevant to them as part of their role, without disrupting the entire end-user experience in the process.

With this in mind, let’s take a look at how straightforward it is to begin working with Field Security, by following the steps outlined below:

  1. Identify the field(s) that need to be secured from being accessed by a specific group of users. Navigate to each field's properties and verify that the Field Security option has been set to Enable. For this example, we are going to use the Primary Contact field on the Account entity:
  2. Within the Customizations area of the application, select the Field Security Profiles option on the left-hand bar and then click on New to create a Field Security Profile:
  3. On the New Field Security Profile window, specify a name and an (optional) description value for the new profile and press the Save button:
  4. Once saved, you can then begin to configure the two most important aspects of the profile – the permissions that are granted to secured fields and the Users/Teams in the application that they apply to. In this example, we are going to restrict the Primary Contact field from step 1) so that users who are part of our Account Executive team role cannot view, update or create a record with a value in this field. To begin with, click on the Teams button and then click on Add to find and select the Account Executive team role:
  5. Next, click on the Field Permissions icon and double-click the Primary Contact field on the list. Verify that the Allow Read, Allow Update and Allow Create options are set to No:

Now, when we log into the application as a user who is a part of the Account Executive role and navigate to a sample record on the system, we can see that the field in question has been obfuscated. We have no way of seeing, changing or otherwise interacting with the value contained within this field:

Fields that are impacted in some way as a result of Field Security can always be clearly distinguished by the key icon on the top left of the field name. This can prove useful in helping users to understand their current levels of access and in troubleshooting why a user cannot read or modify a particular field.

So what have we learned about Field Security Profiles and how they conform to GDPR? Here’s a quick summary of the key points:

  • Demonstrate that sensitive data is stored with “appropriate security” in place (Article 5)
  • Can be used as a tool for storing and controlling access to sensitive data types (Article 9)
  • Provide a mechanism to demonstrate compliance with the relevant articles of GDPR, should the organisation be subject to an audit as a Data Processor (Article 28)
  • Can be seen as an appropriate technical safeguard in the protection of both non-sensitive and sensitive data types (Article 32)
  • Could be used as documentary evidence (or the basis thereof) that covers the documentation requirements for data processing (Article 30)

Thanks for reading! As part of next week's post, we will take a deeper dive into CRM/D365E's wider security model and the importance of documentation in the context of GDPR.

Monday may not have been my day of choice for attending an all-day session on the General Data Protection Regulation (GDPR), but it was something that I walked away from feeling much better informed about.

If you currently work within the IT industry, then I would be very surprised if you have not yet come across GDPR or are not already in the process of assessing what your organisation needs to do to prepare for it. In a nutshell, GDPR replaces existing data protection legislation within EU countries on May 25th 2018 (for the UK, this will be the Data Protection Act 1998). GDPR brings data protection guidelines firmly into the 21st century and provides a framework for organisations to apply the appropriate steps to protect individuals' data. Whilst much within the updated guidelines remains unchanged, there is additional emphasis on organisations implementing the appropriate levels of security (both physical and technical), applying regular auditing processes and documenting processes to protect against a possible data breach. For an IT professional, one of the overriding questions you should be starting to ask yourself is “What can I do to make the systems I support/implement compatible with GDPR?”

Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) is one system that is likely to be in place within businesses/organisations across the EU, and one that is arguably best placed to help meet the challenges that GDPR brings to the table. The wide breadth of functionality within the application can be picked up and adapted to suit the following requirements:

  • Provide backend database encryption, to protect your key customer data in the event of a data breach.
  • Ensure that highly sensitive data categories are only accessed by relevant personnel within your organisation.
  • Enable you to implement a clear and comprehensive security model within your system, which can then be clearly documented.
  • Help you to implement a data retention policy that is in line with contractual and statutory requirements.
  • Allow you to quickly and effectively respond to subject access requests, via the use of easy-to-generate document templates.

All of the above can be achieved using out of the box functionality within the application and, in some cases, more straightforwardly than you may assume.

As part of this and the next couple of weeks' blog posts, I will take a look at each of the bullet points above, step by step. The aim is to highlight the specific elements within GDPR that each potential situation covers, how to go about implementing a solution within CRM/D365E to address each one, and to provide other thoughts/considerations to better prepare yourself for GDPR. By doing so, I hope to make you aware of functionality within the application that you may never have looked at before and to explore specific use cases that have wider business relevance.

All posts in the series will make frequent reference to the text contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union – a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO’s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.

Without further ado, let's jump into the focus for this week's post: Understanding and effectively utilising Transparent Database Encryption (TDE) within your CRM/D365E deployment.

One area within GDPR that has changed significantly is data breaches and the penalties for organisations that have demonstrated a clear dereliction of their responsibilities. When assessing whether a fine is issued by your country's appropriate authority, which could run into millions of pounds or more, a determination is made as to whether the company has implemented sufficient technical controls to mitigate the potential impact of a data breach. Article 32 sets this out in broad terms:

Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the [Data] controller and the [Data] processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
(a) the pseudonymisation and encryption of personal data;
(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

It is worth noting that an assessment will be made of your business's size, turnover etc. when a judgement is made on what “appropriate” steps your organisation has taken to mitigate its risk in this regard. Smaller businesses can, therefore, breathe a sigh of relief in not having to implement large-scale and costly technical solutions. Speaking more generally though, the importance of encryption within your organisation's database and application systems becomes a primary concern in demonstrating GDPR compliance. It could also help you when it comes to determining whether you need to report a data breach, as an encrypted piece of hardware does not necessarily expose personal data; arguably meaning that no data breach has occurred.

CRM/D365E gives us the option to utilise a well-established feature within SQL Server to implement encryption for our data – Transparent Database Encryption (or TDE). Even better, it’s enabled by default. That being said, it is prudent for you to take a copy of the default encryption key or change it entirely if you haven’t done so already.

Doing either of the above is relatively straightforward. Navigate to Settings -> Data Management within the application and then click on the Data Encryption icon:

The Data Encryption pop-up window will appear, as indicated below:

From here you have two options at your disposal:

  • Use the Show Encryption Key option to copy and paste the key to your location of choice. Note that, as outlined by Microsoft, the key may contain Unicode characters, leading to a potential loss of data when using applications such as Notepad.
  • Generate a new key that meets the requirements set out above and then click on Change.

In both cases, ensure that the encryption key is stored securely and segregated as far away as possible from your CRM/D365E deployment. Keep in mind as well that there are specific privileges that control whether a user can view or even modify the encryption key in the first place. These privileges can be found on the Core Records tab within a Security Role page:

It may be tempting, knowing that encryption is enabled by default, to put your feet up and not worry about it. Here’s why it’s important to securely hold/segregate your database encryption key and also to think carefully about which users in your organisation have full Administrative privileges on the application:

Let’s assume the following scenario: your on-premise CRM 2016 organisation has database encryption enabled and SQL Server is installed on the same machine, along with all database files. The database encryption key is saved within a .txt file on the same computer.

A rogue member of staff with full Administrative privileges on CRM, or an attacker, manages to gain access to this server, in the process taking your CRM organisation's .mdf database file. They also manage to obtain the encryption key, either by taking a copy of the .txt file or by retrieving the currently configured key from within your CRM instance. This person now has the ability to both mount and access the database file without issue. Under GDPR, this would constitute a data breach, requiring your business to take the following immediate steps:

  • Notify the supervisory body within your country within 72 hours of becoming aware of the breach (Article 33)
  • Notify every person whose personal data was stored in the database that a breach has occurred (Article 34)
  • Record the nature of the breach, the actual effect caused by it and all remedial steps taken to prevent the occurrence of a breach again in the future. All of this may be required by the supervisory body at any time (Article 35)

The fun does not stop there: depending on what processes your business had in place and, given the specific nature of the scenario, a fine may be more than likely. This is due to the clear steps that could have been taken to prevent the database from being so easily accessible. Having to explain this in front of the senior executives of a business is not a prospect that any of us would particularly relish, and it could have been avoided had the following steps been implemented:

  • The rogue member of staff had been given a much more restrictive security role that did not grant the Manage Data Encryption key – Read privilege.
  • The SQL Server instance had been installed on a different server.
  • The database encryption key had been saved on a different server.
  • The database encryption key had been saved in a password protected/encrypted file.

This list is by no means exhaustive, and there is ultimately no silver bullet when it comes to situations like this; however, you can manage your risk much more effectively and demonstrate to authorities like the ICO that you have taken reasonable steps by taking some of the appropriate steps highlighted above.

In next week’s post, we will take a look at the importance of Field Security Profiles and how they can be utilised to satisfy several of the key requirements of GDPR in a pinch!

The sheer breadth of ways that you can utilise Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can sometimes boggle the mind. Whether it's through a traditional web browser, mobile app, the new interactive service hub or even through your own website created via the SDK, organisations have an ever-increasing array of routes they can go down when deploying the application into their environment. Despite this, more often than not, you would expect a “standard” deployment to involve using the application via a web browser, either on a local machine or potentially via a Remote Desktop Services (RDS) session. Whilst Microsoft's support articles provide fairly definitive software requirements when working on a Windows desktop machine, it is difficult to determine if, for example, Google Chrome on a Windows Server 2012 RDS session is supported. This is an important omission that requires clarification and is worth discussing further to determine if a definitive conclusion can be reached, based on the available evidence.

In this week’s post, I will attempt to sleuth through the various pieces of evidence I can find on this subject, sprinkling this with some experience that I have had with CRM/D365E and RDS, to see if any definitive conclusion can be established.

Before we get into the heart of the matter, it may be useful to provide a brief overview of what RDS is.

RDS is a fancy way of describing connecting to a remote computer via the Remote Desktop Connection client on Windows or your OS of choice. Often referred to as Terminal Services, it is a de facto requirement when accessing remote servers for a variety of reasons. Most commonly, you will witness it deployed as part of an internal corporate network, as a mechanism for users to “remote on” when working outside the office. Due to the familiarity of Windows Server compared with each version's corresponding desktop OS, the look and feel of working on a normal computer can be achieved with minimal effort, and you can often guarantee that the same types of programs will also work without issue.

Whilst RDS is still frequently used, it could be argued to have taken a back seat in recent years with the rise of virtualisation technologies from the likes of Citrix and VMware. These solutions tend to offer the same benefits an RDS server can, but place more emphasis on utilising a local desktop environment to essentially stream desktops/applications to end users. As a result of the rise of these technologies, RDS is perhaps entering a period of uncertainty; whilst it will continue to be essential for remote server management, there are arguably much better technologies available that provide an enriched end-user experience whilst still offering the benefits of having a centralised server within a backed-up/cloud environment.

Now that you (hopefully!) have a good overview of what RDS is, let's take a look at the evidence available in relation to CRM/D365E and RDS.

Evidence #1: Technet Articles

The following TechNet articles provide, when collated together, a consolidated view of supported internet browsers and operating systems for CRM/D365E:

From this, we can distill the following:

  • Windows 10, 8.1, 8 and 7 are supported, so long as they are using a “supported” browser:
    • Internet Explorer 10 is supported for Windows 7 and 8 only.
    • Internet Explorer 11 is supported for all Windows OS’s, with the exception of 8.
    • Edge is supported for Windows 10 only.
    • Firefox and Chrome are supported on all OS’s, so long as they are running the latest version.
  • OS X 10.8 (Mountain Lion), 10.9 (Mavericks) and 10.10 (Yosemite) are supported for Safari only, running the latest version
  • Android 10 is supported for the latest version of Chrome only
  • iPad is supported for the latest version of Safari only (i.e. the latest version of iOS)

The implication from this should be clear – although the following Windows Server versions (currently in mainstream support) can run a supported web browser, they are not included in the supported operating system list above:

  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2012

Evidence #2: Notes from the Field

I have had extensive experience both deploying into and supporting CRM/D365E environments running on RDS. These would typically involve servers with a significant user load (20-30 per RDS server), and the general experience and feedback from end users has always been…underwhelming. All issues generally came down to the speed of the application, which, compared to running on a standard local machine, was at a snail's pace. Things like loading a form, an entity view or a Dialog became tortuous affairs and led to serious issues with user adoption across the deployments. I can only assume that the amount of local CPU/memory required by CRM/D365E when running in a web browser was too much for the RDS server to handle; this was confirmed by frequent CPU spikes and high memory utilisation on the server.

I can also attest to working with Microsoft partners who have explicitly excluded issues concerning RDS and CRM/D365E from the scope of any support agreement. When this was queried, the reasoning boiled down to the perceived hassle and complexity involved in managing these types of deployment.

To summarise, I would argue that this adds further ammunition to Evidence #1, insomuch as RDS-capable server operating systems are not covered on the supported operating system lists because these issues are generally known about.

Evidence #3: What Microsoft Actually Say

I was recently involved in a support case with Microsoft, where we were attempting to diagnose some of the performance issues discussed above within an RDS environment. The support professional assigned to the case came back and stated the following in regard to RDS and CRM/D365E:

…using Windows remote desktop service is supported but unfortunately using Windows server 2012 R2 is not supported. You have to use Windows server 2012. Also windows server 2016 is out of our support boundaries.

Whilst this statement is not backed up by an explicit online source (and I worry whether some confusion has arisen from the Dynamics 365 for Outlook application – see below for more info on this), it can be taken as saying that Windows Server 2012 is the only supported server operating system that can be used to access CRM/D365E with one of the supported web browsers mentioned above.

The Anomalous Piece of Evidence: Dynamics 365 for Outlook Application

Whilst it may not be 100% clear cut in regard to supported server operating systems, we can point to a very definitive statement in respect of the Dynamics 365 for Outlook application when used in conjunction with RDS:

Dynamics 365 for Outlook is supported for running on Windows Server 2012 Remote Desktop Services

Source: https://technet.microsoft.com/en-us/library/hh699743.aspx

Making assumptions here again, but can we take this to mean that the web application is supported within Windows Server 2012 RDS environments, as suggested by the Microsoft engineer above? If not, then you may start thinking to yourself “Well, why not just use this instead of a web browser on RDS to access CRM/D365E?”. Here are a few reasons why you wouldn’t really want to look at rolling out the Dynamics 365 for Outlook application any time soon within RDS:

  • If deploying the application in offline mode, then you will be required to install a SQL Express instance onto the machine in question. This is because the application needs to store copies of your synchronised entity data for whenever you go offline. The impact of this on a standard user machine will be minimal, but on a shared desktop environment, it could lead to eventual performance issues on the RDS server in question.
  • With the introduction of new, more efficient ways to work with CRM/D365E data, such as the Dynamics 365 App for Outlook, the traditional Outlook client is becoming less of a requirement these days. There are plenty of rumours/commentary on the grapevine that the application may be due for deprecation in the near future, and even Microsoft have the following to say on the subject:

    Dynamics 365 App for Outlook isn’t the same thing as Dynamics 365 for Outlook. As of the December 2016 update for Dynamics 365 (online and on-premises), Microsoft Dynamics 365 App for Outlook paired with server-side synchronization is the preferred way to integrate Microsoft Dynamics 365 with Outlook.

  • I have observed performance issues with the add-in myself in the past – Outlook freezing, the occasional crash and also issues with the Outlook ribbon displaying incorrectly.

As you can probably tell, I am not a big fan of the add-in, but the writing on the wall is fairly clear – Microsoft fully supports you accessing CRM/D365E from the Outlook client on Windows Server 2012 RDS.

After reviewing all the evidence, do we have enough to solve this case?

Whilst there is a lot of evidence to consider, the main thing I would highlight is the lack of a “smoking gun” in what has been reviewed. What I mean by this is the lack of a clear support article that states either “X Browser is supported on Windows Server X” or “X Browser is NOT supported on Windows Server X”. Without any of these specific statements, we are left in a situation where we have to infer that RDS is not a supported option for using the CRM/D365E web application. Certainly, the experience I have had with the web client in these environment types would seem to back this up, which may go some way towards explaining why this is not explicitly supported.

So where does this leave you if you are planning to deploy CRM/D365E within an RDS environment? Your only option is to ensure that your RDS environment is running Windows Server 2012 and that your users are utilising the Outlook client, given that there is a very clear statement regarding its supportability. If you are hell-bent on ensuring that your end users have the very best experience with CRM/D365E, then I would urge you to reconsider how your environment is configured and, if possible, move to a supported configuration – whether that's a local desktop or a VDI, running your browser of choice. Hopefully, the benefits of utilising the application will far outweigh any overriding concerns and business reasons for using RDS in the first place.

When you have spent any length of time working with Dynamics CRM Online/Dynamics 365 for Enterprise (D365E) data programmatically, you become accustomed to how Option Set, State and Status Reason values are presented to you in code. To explain, the application does not store your Option Set value display names within the SQL Server Entity tables; rather, the Option Set Value that has been specified alongside your Label is what is stored as an integer value. That is why you are always prompted to provide both values within the application:

The following benefits are realised as a result of how this is set up:

That being said, when working with these field types in code, you do always have to have the application window open or a list of all Labels/Values to hand so that you don’t get too confused… 🙂

I have previously extolled the virtues of the Data Export Service on the blog, and why you should consider it if you have basic integration requirements for your CRM/D365E deployment. One area in which it differs from other products on the market is how it handles the field types discussed above. For example, when exporting data to a SQL database via Scribe Online, new columns are created alongside each such field, containing the “Display Name” (i.e. label value) that corresponds to each Option, Status and Status Reason Label. So by running the following query against a Scribe export database:

SELECT DISTINCT statecode, statecode_displayname
FROM dbo.account

We get the best of both worlds – our underlying statecode values and their display names – all in 2 lines of code:

This is a big help, particularly when you are then using the data as part of a report, as no additional transformation steps are required and your underlying SQL query can be kept as compact as possible.

The Data Export Service differs from the above in an understandable way, as the display names for Status, Status Reason and Option Set columns are instead segregated out into their own separate table objects in your Azure SQL database:

OptionSetMetadata

GlobalOptionSetMetadata

StateMetadata

StatusMetadata

Why understandable? If you consider how the application can support multiple languages, then you realise that this can also apply to metadata objects across the application – such as field names, view names and – wouldn’t you have guessed it – Labels too. So when we inspect the OptionSetMetadata table, we can see that the table structure accommodates the storing of labels in multiple languages via the LocalizedLabelLanguageCode field:
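By way of illustration, a simple query along the following lines (a sketch only, using the same column names that the functions later in this post rely on) would return every Option Set label stored for the Account entity, across all provisioned languages:

--List all Option Set labels held for the account entity, in every language provisioned
SELECT EntityName, OptionSetName, [Option], LocalizedLabel, LocalizedLabelLanguageCode
FROM dbo.OptionSetMetadata
WHERE EntityName = 'account'
ORDER BY OptionSetName, [Option], LocalizedLabelLanguageCode;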

Unlike the Scribe Online route above (which I assume only retrieves the Labels that correspond to the user account that authenticates with CRM/D365E), the Data Export Service becomes instantly more desirable if you are required to build multi-language reports referencing CRM/D365E application data.

The issue that you have to reconcile yourself with is that your SQL queries, if expressed as natively as possible, instantly become a whole lot more complex. For example, to achieve the same results as the query above, it would have to be adapted as follows for the Data Export Service:

SELECT DISTINCT statecode, LocalizedLabel
FROM dbo.account
 LEFT JOIN dbo.StateMetadata
  ON 'account' = EntityName
  AND statecode = [State]
  AND '1033' = LocalizedLabelLanguageCode

The above is a very basic example, but if your query is complex – and involves multiple Option Set values – then you would have to resort to using Common Table Expressions (CTEs) to accommodate each additional JOIN required to get the information you want.
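To give a flavour of how this escalates, here is a rough sketch of what the query might look like if we also wanted the label for a custom Option Set field on the Account entity – new_accounttype is a hypothetical field name used purely for illustration:

--Hypothetical example: new_accounttype is an invented custom Option Set field
SELECT DISTINCT a.statecode, sm.LocalizedLabel AS statecode_displayname,
 a.new_accounttype, osm.LocalizedLabel AS new_accounttype_displayname
FROM dbo.account AS a
 LEFT JOIN dbo.StateMetadata AS sm
  ON sm.EntityName = 'account'
  AND sm.[State] = a.statecode
  AND sm.LocalizedLabelLanguageCode = 1033
 LEFT JOIN dbo.OptionSetMetadata AS osm
  ON osm.EntityName = 'account'
  AND osm.OptionSetName = 'new_accounttype'
  AND osm.[Option] = a.new_accounttype
  AND osm.LocalizedLabelLanguageCode = 1033

Every additional Option Set field in the query means another JOIN onto the relevant metadata table, which soon becomes unwieldy.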

In these moments, we can look at some of the wider functionality provided as part of SQL Server to develop a solution that will keep things as simple as possible and, in this particular instance, a user-defined function is an excellent candidate to consider. These enable you to perform complex operations against the database platform and encapsulate them within very simply expressed objects that can also accept parameters. The good thing about functions is that they can be used to return either table objects or scalar (i.e. single) values.

Using a scalar function, we can, therefore, remove some of the complexity behind returning Option Set, Status and Status Reason labels by creating a function that returns the correct label, based on the input parameters received by the function. You could look at creating a “master” function that, based on the input parameters, queries the correct metadata table for the information you need (a rough sketch of this idea follows below); but in this example, we are going to look at creating a function for each type of field – Status, Status Reason, Option Set and Global Option Set.
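For completeness, here is a minimal sketch of what such a “master” function might look like – purely hypothetical, covering only the State and Status Reason cases, and reusing the same table structures that the functions below rely upon:

--Hypothetical sketch only: a single function covering State and Status Reason labels
CREATE FUNCTION [dbo].[fnGetLabel]
(
	@FieldType NVARCHAR(20), --Hypothetical discriminator: 'State' or 'Status'
	@EntityName NVARCHAR(64), --The Entity logical name
	@Value INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033.
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	IF @FieldType = 'State'
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @Value AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE IF @FieldType = 'Status'
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Value AND LocalizedLabelLanguageCode = @LanguageCode);
	RETURN @Label;

END

GO

Sticking with one function per field type keeps each object's purpose a little more self-evident, however, so that is the approach taken below.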

To do this, connect up to your Data Export Service database and open up a new query window, ensuring that the context is set to the correct database. Paste the following code in the window and then hit Execute:

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

--Create Function to return Global Option Set Labels

CREATE FUNCTION [dbo].[fnGetGlobalOptionSetLabel]
(
	@GlobalOptionSetName NVARCHAR(64), --The logical name of the Global Option Set
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Global Option Set field ' + @GlobalOptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Option Set Labels

CREATE FUNCTION [dbo].[fnGetOptionSetLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Option Set field
	@OptionSetName NVARCHAR(64), --The logical name of the Option Set field
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Option Set field ' + @OptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Labels

CREATE FUNCTION [dbo].[fnGetStateLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status field
	@State INT, --The Status option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain State label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Reason Labels

CREATE FUNCTION [dbo].[fnGetStatusLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status Reason field
	@Status INT, --The Status Reason option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain Status label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

This will then go off and create the four functions, which should then show up under the Programmability folder on your SQL database:

For those who are unsure of what the SQL code is doing, it first attempts to determine whether exactly one label can be found for the appropriate field type, based on the parameters provided. If it is successful, then the label is returned; otherwise, the CAST statement is designed to force an error back to the caller, indicating that either no Option Set value, or more than one, was found. In most cases, this would indicate a typo in the parameters you have specified.
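For instance (a purely hypothetical call, with the entity name deliberately misspelt), the following would fail with a conversion error rather than quietly returning NULL:

SELECT dbo.fnGetStateLabel('acount', 1, 1033) --Misspelt entity name: no matching row is found, so the CAST above forces a conversion error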

As with anything, the best way to see how something works is in practice! So if we again look at our previous examples shown in this post, we would utilise the dbo.fnGetStateLabel function as follows to return the correct label in English:

SELECT DISTINCT statecode, dbo.fnGetStateLabel('account', statecode, 1033) AS statecode_displayname
FROM dbo.account

With our results returning as follows:

Now we can expose this through our reports and not worry about having to do any kind of transformation/lookup table to get around the issue 🙂
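The same approach scales neatly when several of these fields are needed at once. As a further sketch (assuming the standard statecode and statuscode fields are present in your export database), the Status and Status Reason labels for the Account entity can be returned in a single, still very readable, query:

SELECT DISTINCT statecode, dbo.fnGetStateLabel('account', statecode, 1033) AS statecode_displayname,
 statuscode, dbo.fnGetStatusLabel('account', statuscode, 1033) AS statuscode_displayname
FROM dbo.account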

Attempting to keep things as simple as possible by encapsulating complex functionality within simple, clearly expressed functions is an excellent way of ensuring that code can be kept as streamlined as possible; it also ensures that other colleagues can accomplish complex tasks, even if they do not have in-depth knowledge of Transact-SQL.

When working with form-level JScript functionality on Dynamics CRM/Dynamics 365 for Enterprise (D365E), you often uncover some interesting pieces of exposed functionality that can be utilised neatly for a specific business scenario. I did a blog post last year on arguably one of the best of these functions when working with Lookup field controls – the Xrm.Page.getControl().addPreSearch method. Similar to other methods exposed via the SDK, its prudent and effective implementation can greatly reduce the number of steps/clicks involved when populating entity forms.

I’ve already covered as part of last years post just what this method does, its sister method, addCustomFilter, and also some of the interesting problems that are encountered when working with the Customer lookup field type; a special, recently introduced field type that allows you to create a multi-entity lookup/relationship onto the Account and Contact entities on one field. I was doing some work again recently using these method(s) in the exact same conditions, and again came across some interesting quirks that are useful to know when determining whether the utilisation of these SDK methods is a journey worth starting in the first place. Without much further ado, here are two additional scenarios that involve utilising these methods and the “lessons learned” from each:

Pre-Filtering the Customer Lookup to return Account or Contact Records Only

Now, your first assumption with this may be that, if you wanted your lookup control to only return one of the above entity types, then surely it would be more straightforward to just set up a dedicated 1:N relationship between your corresponding entity types to achieve this? The benefits of this seem to be pretty clear – it is a no-code solution that, with a bit of ingenious use of Business Rules/Workflows, could be implemented in a way that the user never even suspects what is taking place (e.g. a Business Rule to hide the corresponding Account/Contact lookup field if the other one contains a value). However, assume one (or all) of the following:

  • You are working with an existing System entity (e.g. Quote, Opportunity) that already has the Customer lookup field defined. This would, therefore, mean you would have to implement duplicate schema changes to your Entity to accommodate your scenario, a potential no-no from a best practice point of view.
  • Your entity in question already has a significant number of custom fields, totalling more than 200-300. Additional performance overheads may occur if you were then to create two separate lookup fields as opposed to one.
  • The entity you are customising already has a Customer lookup field built in, which is populated with data across hundreds, maybe thousands, of records within the application. Attempting to implement two separate lookups and then going through the exercise of updating every record to populate the correct lookup field could take many hours to complete and also have unexpected knock-on effects across the application.

In these instances, it may make more practical sense to implement a small JScript function to conditionally alter how the Customer lookup field allows the user to populate records when working on the form. The benefit of this is that you can take advantage of the multi-entity capabilities that this field type was designed for, whilst also enforcing the integrity of your business logic/requirements on the application's form layer.

To that end, what you can look at doing is applying a custom FetchXML snippet that prevents either Account or Contact records from returning when a user clicks on the control. Paradoxically, this is not done by, as I first assumed, using the following snippet:

var filter = "<filter type='and'><condition attribute='accountid' operator='not-null' /></filter>";
Xrm.Page.getControl("mycustomerlookupfield").addCustomFilter(filter, "account");

This will lead to no records returning on your lookup control. Rather, you will need to filter the opposite way – only return Contact records where the contactid equals Null i.e. the record does not exist:

var filter = "<filter type='and'><condition attribute='contactid' operator='null' /></filter>";
Xrm.Page.getControl("mycustomerlookupfield").addCustomFilter(filter, "contact");

Don’t Try and Pass Parameters to your addCustomerFilter Function (CRM 2016 Update 1)

If your organisation is currently on Dynamics CRM 2016 Update 1, then you may encounter a strange – and, from what I can gather, unresolvable – issue if you are working with multiple, parameterised functions in this scenario. To explain further, let's assume you have a Customer lookup and a Contact lookup field on your form. You want to filter the Contact lookup field to only return Contacts that are associated with the Account populated in the Customer lookup. Assume that there is already a mechanism in place to ensure that the Customer lookup will always have an Account record populated within it, and your functions to use in this specific scenario may look something like this:

function main() {

    //Filter Contact lookup field if Customer lookup contains a value

    var customerID = Xrm.Page.getAttribute('mycustomerlookupfield').getValue();

    if (customerID != null) {
   
        Xrm.Page.getControl("mycontactfield").addPreSearch(filterContactNameLookup(customerID[0].id));
    }
    
}

function filterContactNameLookup(customerID) {

    var filter = "<condition attribute='parentcustomerid' operator='eq' value='" + customerID + "' />";
    Xrm.Page.getControl("mycontactfield").addCustomFilter(filter, "account");

}

The above example seems a perfectly sensible means of implementing this; surely it makes more practical sense to obtain the ID of our Customer lookup field in one place and then pass it along to any subsequent functions? The problem is that CRM 2016 Update 1 throws some rather cryptic errors in the developer console when attempting to execute the code, and does nothing on the form itself:

Yet, when we re-write our functions as follows, explicitly obtaining our Customer ID on two occasions, this runs as we’d expect with no error:

function main() {

    //Filter Contact lookup field if Customer lookup contains a value

    var customerID = Xrm.Page.getAttribute('mycustomerlookupfield').getValue();

    if (customerID != null) {
   
        Xrm.Page.getControl("mycontactfield").addPreSearch(filterContactNameLookup);
    }
    
}

function filterContactNameLookup() {

    var customerID = Xrm.Page.getAttribute('mycustomerlookupfield').getValue()[0].id;
    var filter = "<condition attribute='parentcustomerid' operator='eq' value='" + customerID + "' />";
    Xrm.Page.getControl("mycontactfield").addCustomFilter(filter, "account");

}

I’ve been scratching my head at why this doesn’t work, and the only thing I can think of is that the first function – main – would be executed as part of the forms OnLoad event, whereas the filterContactNameLookup is only triggered at the point in which the lookup control is selected. It’s therefore highly possible that the first instance of the customerID is unobtainable by the platform at this stage, meaning that you have to get the value again each time the lookup control is interacted with. If anyone else can figure out what’s going on here or confirm whether this is a bug or not with Dynamics CRM 2016 Update 1, then do please let me know in the comments below.

Conclusions or Wot I Think

It could be argued quite strongly that the examples shown here in this article have little or no use practical use if you are approaching your CRM/D365E implementation from a purely functional point of view. Going back to my earlier example, it is surely a lot less hassle and error-prone to implement a solution using a mix of out of the box functionality within the application. The problem that you eventually may find with this is that the solution becomes so cumbersome and, frankly, undecipherable when someone is coming into your system cold. With anything, there always a balance should be striven for on all occasions and, with a bit of practical knowledge of how to write JScript functionality (something that any would-be CRM expert should have stored in their arsenal), you can put together a solution that is relatively clean from a coding point of view, but also benefits from utilising some great functionality built-in to the application.