The world of database security and protection can be a difficult path to tread at times. I often find myself having to adopt a “tin-foil hat” approach, obsessing over the smallest potential vulnerability through which a database could be compromised. Thinking this way is arguably the easy part; the protective steps that then need to be implemented in practice can often prove to be mind-bogglingly convoluted. This is one of the reasons why I like working with Microsoft Azure and features such as Azure SQL Database Firewall Rules. They present a familiar means of restricting your databases to specific IP address endpoints and are not inordinately complex to work with: just provide a name and a Start/End IP range and, hey presto, your client/application can communicate with your database. The nicest thing about them is that the feature is enabled by default, meaning you don’t have to worry about designing and implementing a solution to restrict your database from unauthorised access at the outset.
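By way of a quick illustration, a Database Firewall Rule can be created and then reviewed with just a couple of statements – the rule name and IP addresses below are, of course, placeholders:

--Create (or update) a firewall rule scoped to the current database only
EXECUTE sp_set_database_firewall_rule @name = N'ExampleClientRule',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

--Review the rules currently in place on the database
SELECT [name], [start_ip_address], [end_ip_address]
FROM sys.database_firewall_rules;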

As alluded to above, Database Firewall Rules are added via T-SQL code (unlike Server Rules, which can be specified via the Azure portal), using syntax that most SQL developers should feel comfortable with. If you traditionally prefer to design and build your databases using a Visual Studio SQL Database project, however, you may encounter a problem when looking to add a Database Firewall Rule to your project: there is no dedicated template item that can be used to add one to the database. In this eventuality, you would have to look at setting up a Post-Deployment Script or Pre-Deployment Script to handle the creation of any rules you require. Yet this can present the following problems:

  • Visual Studio will be unable to provide you with the basic syntax to create the rules.
  • Related to the above, IntelliSense support will be limited, so you may struggle to identify errors in your code until it is deployed.
  • When deploying changes out to your database, the project will be unable to successfully detect (and remove) any rules that are deleted from your project.

The last one could prove to be particularly cumbersome if you are tightly managing the security of your Azure SQL database. Putting aside the obvious risk of someone forgetting to remove a rule as part of a deployment process, you would then have to manually remove the rules by connecting to your database and executing the following T-SQL statement:

EXECUTE sp_delete_database_firewall_rule 'MyDBFirewallRule'

Not the end of the world by any stretch, but if you are using Visual Studio as your deployment method for managing changes to your database, then having to do this step seems a little counter-intuitive. Fortunately, with a bit of creative thinking and utilisation of more complex T-SQL functionality, we can get around the issue by developing a script that carries out the following steps in order:

  • Retrieve a list of all current Database Firewall Rules.
  • Iterate through the list of rules and remove them all from the database.
  • Proceed to re-create the required Database Firewall Rules from scratch.

The second step involves the use of a T-SQL feature that I have traditionally steered away from using – Cursors. This is not because they are bad in any way, but because a) I have previously struggled to understand how they work and b) I had never found a good scenario in which to use them. The best way of understanding them is to put on your C# hat for a few moments and consider the following code snippet:

string[] array = new string[] { "Test1", "Test2", "Test3" }; 

foreach (string s in array)
{
    Console.WriteLine(s);
}

To summarise how the above works, we take our collection of values – Test1, Test2 and Test3 – and carry out a particular action against each; in this case, printing the value out to the console. This, in a nutshell, is how Cursors work, and you have a great deal of versatility in what action you take during each iteration of the “loop”.
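Translated into T-SQL, the same loop could be sketched out with a Cursor along the following lines (the table variable and its values are purely illustrative):

DECLARE @Values TABLE ([Value] NVARCHAR(10))
INSERT INTO @Values VALUES (N'Test1'), (N'Test2'), (N'Test3')

DECLARE @CurrentValue NVARCHAR(10)

DECLARE EXAMPLE_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT [Value]
FROM @Values

OPEN EXAMPLE_CURSOR
FETCH NEXT FROM EXAMPLE_CURSOR INTO @CurrentValue
WHILE @@FETCH_STATUS = 0
BEGIN
	--The equivalent of Console.WriteLine(s) in the C# example above
	PRINT @CurrentValue
	FETCH NEXT FROM EXAMPLE_CURSOR INTO @CurrentValue
END
CLOSE EXAMPLE_CURSOR
DEALLOCATE EXAMPLE_CURSOR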

With a clear understanding of how Cursors work, the below script that accomplishes the aims set out above should hopefully be a lot clearer:

DECLARE @FirewallRule NVARCHAR(128)

DECLARE REMOVEFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules

OPEN REMOVEFWRULES_CURSOR
FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	PRINT 'Firewall rule ' + @FirewallRule + ' has been successfully deleted.'
	FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
END
CLOSE REMOVEFWRULES_CURSOR
DEALLOCATE REMOVEFWRULES_CURSOR

GO

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';
		

To integrate this as part of your existing database project, add a new Post-Deployment Script file and modify the above to reflect your requirements. As the name indicates, the script will run after all other aspects of your solution deployment have been completed. Now, the key caveat to bear in mind with this solution is that, during deployment, there will be a brief period of time where all Database Firewall Rules are removed from the database. This could potentially cause current database connections to drop or fail to connect altogether. You should take care when using the above code snippet within a production environment, and I would recommend you look at an alternative solution if your application/system cannot tolerate even a second of downtime.
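If even that brief gap is unacceptable, one possible workaround – sketched out below using the same example rule names – is to only delete the rules that are not in your desired list and let sp_set_database_firewall_rule handle the rest; the procedure creates a rule if it does not already exist and simply updates it in place if it does, so the rules you want to keep are never dropped:

DECLARE @FirewallRule NVARCHAR(128)

DECLARE TIDYFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules
WHERE [name] NOT IN (N'MyDBFirewallRule1', N'MyDBFirewallRule2')

OPEN TIDYFWRULES_CURSOR
FETCH NEXT FROM TIDYFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	--Only rules that should no longer exist are removed
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	FETCH NEXT FROM TIDYFWRULES_CURSOR INTO @FirewallRule
END
CLOSE TIDYFWRULES_CURSOR
DEALLOCATE TIDYFWRULES_CURSOR

GO

--Creates each rule if it is missing or updates it in place if it already exists
EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';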

Office 365 Groups have been a recurring topic on the blog in recent months – we’ve seen how we can force Office 365 to use custom domains when creating groups for the very first time and how you can straightforwardly integrate an Office 365 Group within Dynamics 365 for Customer Engagement. With this in mind, there is little point in providing a detailed description of what they are and how they can be used; suffice to say, if you are wanting to collaborate closely with internal/external colleagues on a particular project or within a department, Office 365 Groups are an excellent candidate to consider.

One of the cornerstones of Office 365 Groups is the ability for all conversations to be tracked via the use of a dedicated shared mailbox. This perhaps explains why the Office 365 portal will refuse to let you add any user within your organisation who does not have an Exchange Online license assigned to them. Case in point – let’s assume we have a user account with no such license assigned to them on the Office 365 portal:

When attempting to add this user into an Office 365 group, we get a message to let us know No match was found for the user account entered and, as a consequence, it cannot be added to the group:

From this, you can perhaps make the assumption that Office 365 Groups are not supported at all for users who do not have a mailbox. This is notwithstanding the fact that there are several different business scenarios that may necessitate this requirement:

  • A kiosk/”light-use” account may require access to the group to upload documents and manage the SharePoint site.
  • Integration with external applications may be required, stipulating the need for a service account to authenticate with the group to retrieve/add content dynamically.
  • The need to configure an account for external users to access that is sufficiently locked down and inexpensive to maintain.

Fortunately, as with many other things relating to Office 365, we can get around this limitation within the Office 365 portal by resorting to PowerShell and adding the John Doe user account above to the Group.

The first step towards achieving this is to boot up a PowerShell window. Make sure you have access to this on your machine of choice and then, after opening the application using the Run as administrator option, execute the following script:

##Set Execution Policy to Remote Signed - required to fully execute script

Set-ExecutionPolicy RemoteSigned

##Connect to Exchange Online. Enter administrator details when prompted.

$UserCredential = Get-Credential

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection

Import-PSSession $Session

##Add the non-mailbox user to the Office 365 Group. Substitute the Links value with the username of the account to add.

Add-UnifiedGroupLinks -Identity "Test Office 365 Group" -LinkType Members -Links john.doe@domain.com

##Confirm that the user has been added successfully by returning the Group member list

Get-UnifiedGroupLinks -Identity "Test Office 365 Group" -LinkType Members

##Cleanup by disconnecting from Exchange Online

Remove-PSSession $Session

The penultimate command will make something similar to the below appear in the console window. Interestingly, note that the John.Doe test user has a RecipientType value of User:

Now that the user has been added successfully, they will be able to access the SharePoint site for the group by navigating to the SharePoint library URL. This will look similar to the below and can be grabbed by logging in as another user who has the RecipientType value of UserMailbox and navigating to the Group’s SharePoint site:

https://<Your On Microsoft domain prefix>.sharepoint.com/sites/<Your Office 365 Group Name>/

Note that this will be the only way the non-mailbox user can access the site. For example, there will be no link to SharePoint within Office 365 to guide you to the above location. After logging in, you should be greeted with a window similar to the one below:

The John Doe “light-use” account, as referenced above, will have full access to everything that is accessible within SharePoint concerning the Office 365 Group, such as:

  • The Home/News Page
  • Shared Documents Folder (“Documents”)
  • Shared OneNote (“Notebook”)
  • All Site Pages
  • Planner (navigated to via the following link: https://tasks.office.com/<Your Office 365 Primary domain>/en-GB/Home/Planner/)

Conversely, the following features will be inaccessible (due to requiring a Mailbox):

  • Conversations
  • Shared Calendar

If, for example, you attempt to navigate to Conversations within SharePoint, you will get the following error message:

This is, perhaps, a small price to pay for what ends up being a pretty feature-rich experience that can be given to additional users within your organisation at virtually no cost. Perhaps another good excuse to start rolling out Office 365 Groups across your tenant in the near future 🙂

Perhaps one of the most fiendish aspects of working with SQL Server Integration Services (SSIS) is the inevitable data transformation/conversion issues that get thrown up, even as part of relatively simplistic Extract, Transform & Load (ETL) packages. It also doesn’t help if, having come from a strictly T-SQL focused background, you then have to familiarise yourself with the differently named data types that SSIS has in comparison to SQL Server. Ultimately, whether you are still a newbie or a seasoned veteran in creating .dtsx packages, you should never be disheartened if you find yourself having to tackle data conversion issues during package development – put another way, there is always going to be a new system or data file format that comes out of nowhere to test your patience 🙂

I had a rather strange occurrence of this issue recently when working to import Globally Unique Identifier (GUID) data into SQL Server’s equivalent data type – the uniqueidentifier. GUIDs are very much the first choice these days if you are building large-scale applications requiring unique values to distinguish database records. Whereas back in the old days you could get away with an integer column using an IDENTITY seed, the potential for current datasets to contain billions or more records makes this option less practical compared with GUIDs – a data type that is almost certainly going to be unique, even if you are generating values at an insane pace, and which has the headroom to accommodate huge datasets.
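To illustrate the comparison, the two approaches look something like the below in T-SQL – the table and column definitions here are purely for demonstration purposes:

--Traditional approach: an integer key generated from an IDENTITY seed
CREATE TABLE [IdentityExample]
(
	[ID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[Description] NVARCHAR(100) NULL
)

--GUID approach: a uniqueidentifier key, defaulted via NEWID()
--(NEWSEQUENTIALID() can also be used as a column default to reduce index fragmentation)
CREATE TABLE [GUIDExample]
(
	[UID] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
	[Description] NVARCHAR(100) NULL
)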

Going back to the strange occurrence I mentioned above – perhaps the best way to explain the issue (and its resolution) is to show the steps involved. To do this, access to a SQL Server database instance, interfaced with via SQL Server Management Studio (SSMS), is required. Once this has been obtained, a database needs to be created and the following example script executed against it to create the table used during this post:

CREATE TABLE [GUIDImportTest]
(
	[UID] UNIQUEIDENTIFIER NOT NULL,
	[TestCol1] NVARCHAR(MAX) NULL,
	[TestCol2] NVARCHAR(MAX) NULL
)

We then also have our test import file, saved as a .csv file.
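Its contents run along the following lines – the UID value is the GUID referenced later in this post, while the TestCol values are just sample data:

UID,TestCol1,TestCol2
06e82887-9afc-4064-abad-f6fb60b8a1f3,Test Value 1,Test Value 2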

With both of these ready, we can then get the error to generate using the SQL Server Import and Export Wizard – a handy tool that enables you to straightforwardly move uncomplicated data between applications and file formats. This tool can be accessed via SSMS by right-clicking on any database and selecting Tasks -> Import Data…

Begin the wizard as indicated above and, when specifying the Data Source settings, select Flat File Source. In the Advanced tab, you should also override the default data type settings for the UID field and set it to unique identifier (DT_GUID):

The destination (accessed further along the wizard) should be set to SQL Server Native Client and pointed to the server/database where the table created above resides.

On the Select Source Tables and Views screen, be sure that the correct table is selected in the Destination drop-down. By default, if your import source does not match the destination name, the wizard will assume you want to create a brand new table:

On the Review Data Type Mapping tab, a data conversion warning will be flagged up for the two TestCol fields; these can be safely disregarded, as the import package will successfully convert these values for you without further complaint:

After clicking Next and letting the package run, we can then see the titular error of this post occur, which halts the package execution:

Initially, I thought the error was occurring because the GUID values in the .csv file were not in upper case (when selecting uniqueidentifier data via a SQL query, values are always returned in this format), but the same error is thrown when importing data in that exact format. It turns out the issue was down to something that I should have readily realised, based on my experience working with Dynamics CRM/Dynamics 365 for Customer Engagement. When working with URLs and query string parameters in the application involving individual records, GUID values require special URL encoding to convert the curly brace characters – { and } respectively – into a “URL friendly” format. So, for example, the following:

{06e82887-9afc-4064-abad-f6fb60b8a1f3}

Is converted into:

%7B06e82887-9afc-4064-abad-f6fb60b8a1f3%7D

What does this have to do with SSIS and the task at hand? Well, it turns out that when importing data into the uniqueidentifier (DT_GUID) data type, SSIS expects each value to be in the first format above – surrounded by curly braces. Our source data, therefore, needs to have every GUID value wrapped in curly braces in order to import successfully.
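Based on the example GUID above, a corrected version of the source file would therefore look something like this (the TestCol values, again, are just sample data):

UID,TestCol1,TestCol2
{06e82887-9afc-4064-abad-f6fb60b8a1f3},Test Value 1,Test Value 2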

After making the appropriate changes to the source data, the package will then execute successfully, loading the data into the desired SQL table:

I guess the lesson here is to never take for granted any knowledge you may have garnered from a particular source – even when dealing with what may, at first glance, be a completely disparate challenge. In all likelihood, this past experience could present a means of thinking differently about a problem and, ultimately, overcoming the challenge you are faced with.

Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE) is an incredibly flexible application for the most part. Regardless of how your business operates, you can generally tailor the system to suit your requirements and extend it to your heart’s content, often to the point where it is completely unrecognisable from the base application. Notwithstanding this, you will come across aspects of the application that are (literally) hard-coded to behave a certain way and cannot be straightforwardly overridden via the application interface. The most recognisable example of this is the Lead Qualification process. You are heavily restricted in how this piece of functionality acts by default but, thankfully, there are ways in which it can be modified if you are comfortable working with C#, JScript and Ribbon development.

Before we can start to look at options for tailoring the Lead Qualification process, it is important to understand what occurs during the default action within the application. In developer-speak, this is generally referred to as the QualifyLead message, and it most typically executes when you click the Qualify Lead button on the Lead form:

When called by default, the following occurs:

  • The Status/Status Reason of the Lead is changed to Qualified, making the record inactive and read-only.
  • A new Opportunity, Contact and Account record is created and populated with some of the details entered on the Lead record. For example, the Contact record will have the First Name/Last Name values supplied on the preceding Lead record.
  • You are automatically redirected to the newly created Opportunity record.

This is all well and good if you are able to map your existing business processes to the application, but most organisations will typically differ from the application’s B2B-orientated focus. For example, if you are working within a B2C business process, creating an Account record may not make sense, given that this is typically used to represent a company/organisation. Or, conversely, you may want to jump straight from a Lead to a Quote record. Both of these scenarios would currently require bespoke development to accommodate within CRM/D365CE. This can be broadly categorised into two distinct pieces of work:

  1. Modify the QualifyLead message during its execution to force the desired record creation behaviour.
  2. Implement client-side logic to ensure that the user is redirected to the appropriate record after qualification.

The remaining sections of this post will demonstrate how you can go about achieving the above requirements in two different ways.

Our first step is to “intercept” the QualifyLead message at runtime and inject our own custom business logic instead

I have seen a few ways that this can be done. One way, demonstrated here by the always helpful Jason Lattimer, involves creating a custom JScript function and a button on the form to execute your desired logic. As part of this code, you can then specify your record creation preferences. A nice and effective solution, but one that, in its guise above, will soon become obsolete as a result of the SOAP endpoint deprecation. An alternative is to instead deploy a simple C# plugin class that ensures your custom logic is obeyed across the application, and not just when you are working from within the Lead form (e.g. you could have a custom application that qualifies leads using the SDK). Here’s how the code would look in practice:

using System;
using Microsoft.Xrm.Sdk;

//Class name is illustrative - register this plugin on the Pre-Operation stage of the QualifyLead message (see below).
public class PreOperationQualifyLead : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        //Obtain the execution context from the service provider.
        IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        //Only act on the QualifyLead message.
        if (context.MessageName != "QualifyLead")
            return;

        //Get a reference to the Organization service.
        IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        //Extract the tracing service for use in debugging sandboxed plug-ins.
        ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        tracingService.Trace("Input parameters before:");
        foreach (var item in context.InputParameters)
        {
            tracingService.Trace("{0}: {1}", item.Key, item.Value);
        }

        //Modify the below input parameters to suit your requirements.
        //In this example, only a Contact record will be created.
        context.InputParameters["CreateContact"] = true;
        context.InputParameters["CreateAccount"] = false;
        context.InputParameters["CreateOpportunity"] = false;

        tracingService.Trace("Input parameters after:");
        foreach (var item in context.InputParameters)
        {
            tracingService.Trace("{0}: {1}", item.Key, item.Value);
        }
    }
}

To work correctly, you will need to ensure the plugin is registered on the Pre-Operation stage, as by the time the message reaches the Post-Operation stage, you will be too late to modify the QualifyLead message.

The next challenge is to handle the redirect to your record of choice after Lead qualification

Jason’s code above handles this effectively, with a redirect to the newly created Account occurring after the QualifyLead request has completed successfully (this can be tweaked to redirect to the Contact instead). The downside of the plugin approach is that no such redirect takes place. So, if you choose to disable the creation of an Opportunity record and then press the Qualify Lead button…nothing will appear to happen. The record will qualify successfully (which you can confirm by refreshing the form), but you will then have to manually navigate to the record(s) that have been created.

The only way around this with the plugin approach is to look at implementing a similar solution to the above – a Web API request to retrieve your newly created Contact/Account record and then perform the necessary redirect to your chosen entity form:

function redirectOnQualify() {

    setTimeout(function(){
        
        var leadID = Xrm.Page.data.entity.getId();

        leadID = leadID.replace("{", "");
        leadID = leadID.replace("}", "");

        var req = new XMLHttpRequest();
        req.open("GET", Xrm.Page.context.getClientUrl() + "/api/data/v8.0/leads(" + leadID + ")?$select=_parentaccountid_value,_parentcontactid_value", true);
        req.setRequestHeader("OData-MaxVersion", "4.0");
        req.setRequestHeader("OData-Version", "4.0");
        req.setRequestHeader("Accept", "application/json");
        req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
        req.setRequestHeader("Prefer", "odata.include-annotations=\"OData.Community.Display.V1.FormattedValue\"");
        req.onreadystatechange = function () {
            if (this.readyState === 4) {
                req.onreadystatechange = null;
                if (this.status === 200) {
                    var result = JSON.parse(this.response);
                    
                    //Uncomment based on which record you wish to redirect to.
                    //Currently, this will redirect to the newly created Account record
                    var accountID = result["_parentaccountid_value"];
                    Xrm.Utility.openEntityForm('account', accountID);

                    //var contactID = result["_parentcontactid_value"];
                    //Xrm.Utility.openEntityForm('contact', contactID);

                }
                else {
                    alert(this.statusText);
                }
            }
        };
        req.send();
        
    }, 6000);     
}

The code is set to execute the Web API call 6 seconds after the function triggers. This is to ensure adequate time for the QualifyLead request to finish and make the fields we need available for accessing.

To deploy out, we use the eternally useful Ribbon Workbench to access the existing Qualify Lead button and add on a custom command that will fire alongside the default one:

As this post has hopefully demonstrated, overcoming challenges within CRM/D365CE can often result in different – but no less valid – approaches to achieving your desired outcome. Let me know in the comments below if you have found any other ways of modifying the default Lead Qualification process within the application.

Perhaps one of the most useful features at your disposal when working with Azure SQL Databases is the ability to integrate your Azure Active Directory (Azure AD) login accounts – à la Windows Authentication for on-premises SQL Server. There are numerous benefits in shifting away from SQL Server-only user accounts in favour of Azure AD:

  • Ensures consistent login identities across multiple services.
  • Can enforce password complexity and refresh rules more easily.
  • Once configured, they behave exactly the same as standard SQL Server-only logins.
  • Supports advanced usage scenarios involving Azure AD, such as multi-factor authentication and Single Sign-On (SSO) via Active Directory Federation Services (ADFS).

Setup can be completed fairly quickly, although you will need to allocate a single user (or group of users) as the Active Directory admin for the Azure SQL Server. You should also take due care when choosing your Active Directory admin(s); one suggestion would be to use a dedicated service account with a strong password for this purpose, instead of granting such extensive privileges to normal user accounts.

Regardless of how you go about configuring the feature, I would recommend using it wherever you can, both for internal purposes and for anyone who wishes to access your SQL Server from an external directory. This second scenario is, you may be surprised to hear, fully supported. It assumes, first off, that you have added this account to your directory as a Guest/External User account. Then, you just follow the normal steps to get the account created on your Azure SQL Server.

There is one major “gotcha” to bear in mind when doing this. Let’s assume that you have added john.smith@domain.co.uk to the Azure AD tenant test.onmicrosoft.com. You then go to set up this account to access a SQL Server instance on the tenant. You will more than likely receive an error message when using the example syntax below to create the account:

CREATE USER [john.smith@domain.co.uk] FROM EXTERNAL PROVIDER

The issue is, thankfully, simple to understand and fix. When external user accounts are added to your Active Directory, despite retaining the login name from their source directory, they are stored in the new directory with a different UserPrincipalName (UPN). Consider the above example – the UPN in the source directory would be as follows:

john.smith@domain.co.uk

Whereas, as the Azure AD tenant name in this example is test.onmicrosoft.com, the UPN for the object would be:

john.smith_domain.co.uk#EXT#@test.onmicrosoft.com

I assume that this is done to prevent any UPN duplication across Microsoft’s no-doubt dizzying array of cloud Active Directory tenants and forests. In any event, knowing this, we can adjust our code above to suit – and successfully create our Database user account:

CREATE USER [john.smith_domain.co.uk#EXT#@test.onmicrosoft.com] FROM EXTERNAL PROVIDER
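As with any newly created contained database user, the account will have no permissions beyond those of the public role at this point, so you will typically want to follow this up by granting the appropriate role membership(s) – for example, to give the account read and write access to the database:

ALTER ROLE db_datareader ADD MEMBER [john.smith_domain.co.uk#EXT#@test.onmicrosoft.com];
ALTER ROLE db_datawriter ADD MEMBER [john.smith_domain.co.uk#EXT#@test.onmicrosoft.com];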

I guess this is one of those things where having at least a casual awareness of how other technologies within the Microsoft “stack” work can assist you greatly in troubleshooting what turn out to be simplistic errors in your code. Frustrating all the same, but we can claim knowledge of an obscure piece of Azure AD trivia as our end result 🙂