Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, on a headline reading, enables you to trigger workflows when rows are added or changed within a database. The ability to trigger an email based on a database record update, create a document on OneDrive or even post a Tweet when a database record is modified instantly has a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with the requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it still would be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head banging early on 🙂
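
As a quick sanity check before building anything in Flow, the system catalog views can tell you whether a table already satisfies both requirements. The query below is a minimal sketch, assuming the dbo.MyTable table used in the next example; it returns a count greater than zero in each column if a suitable IDENTITY and ROWVERSION column exist:

SELECT	SUM(CASE WHEN c.is_identity = 1 THEN 1 ELSE 0 END) AS IdentityColumns,
	SUM(CASE WHEN t.name = 'timestamp' THEN 1 ELSE 0 END) AS RowVersionColumns
FROM sys.columns AS c
INNER JOIN sys.types AS t
	ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.MyTable');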

Getting around these restrictions is fairly straightforward if you have the ability to modify the table you want to interact with using Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD	[FlowID] INT IDENTITY(1,1) NOT NULL,
	[RowVersion] ROWVERSION

That being said, there are certain situations where this may not be the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access – one which contains only the fields that Flow needs to process with, is linked back to the source table via a foreign key relationship and which involves the use of a database trigger to automate the creation of the “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design 😉

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!
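
If the application does not already generate these identifiers itself, one option is to let the database supply them via a default constraint – a minimal sketch, assuming you are free to alter dbo.SourceTable and with a purely illustrative constraint name:

ALTER TABLE [dbo].[SourceTable]
ADD CONSTRAINT [DF_SourceTable_SourceTableUID]
	DEFAULT NEWSEQUENTIALID() FOR [SourceTableUID]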

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!
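
To check that the plumbing works end to end, a quick test script along the following lines can be run – a minimal sketch, assuming the objects above have been created exactly as shown:

-- Insert a test record; the trigger should add a matching row to the "gateway" table.
DECLARE @TestUID UNIQUEIDENTIFIER = NEWID();

INSERT INTO [dbo].[SourceTable] ([SourceTableUID], [SourceTableCol1])
VALUES (@TestUID, 'Trigger test');

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID;

-- Delete the test record; ON DELETE CASCADE should remove the log row as well.
DELETE FROM [dbo].[SourceTable] WHERE [SourceTableUID] = @TestUID;

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID;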

With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever actions, conditions etc. you require for your particular scenario. There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic within Stored Procedures (a rough example follows this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
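
To illustrate the first tip above, the kind of wrapper you might expose to Flow could look something like the following – a minimal sketch, assuming the dbo.SourceTableLog table and the mydatabase-flow user from earlier; the procedure name itself is purely illustrative:

CREATE PROCEDURE [dbo].[usp_GetSourceTableLogEntries]
AS
BEGIN
	SET NOCOUNT ON;

	SELECT [SourceTableLogID], [SourceTableUID], [TimeStamp]
	FROM [dbo].[SourceTableLog];
END

GO

-- Grant rights on the procedure only, rather than SELECT against the underlying table.
GRANT EXECUTE ON [dbo].[usp_GetSourceTableLogEntries] TO [mydatabase-flow]

GO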

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services, once you have overcome the initial hurdles.

In last week’s post, we took a look at how a custom Workflow activity can be implemented within Dynamics CRM/Dynamics 365 for Customer Engagement to obtain the name of the user who triggered the workflow. It may be useful to retrieve this information for a variety of different reasons, such as debugging, logging user activity or automating the population of key record information. I mentioned in that post the “treasure trove” of information that the IWorkflowContext interface exposes to developers. Custom Workflow activities are not unique in exposing execution-specific information; an equivalent interface is at our disposal when working with plug-ins. No prizes for guessing its name – the IPluginExecutionContext.

When comparing both interfaces, some comfort can be found in the fact that they share almost identical properties, allowing us to replicate the functionality demonstrated in last week’s post as a post-operation Create step for the Lead entity. The order of work for this is virtually the same:

  1. Develop a plug-in C# class file that retrieves the User ID of the account that has triggered the plugin.
  2. Add supplementary logic to the above class file to retrieve the Display Name of the User.
  3. Deploy the compiled .dll file into the application via the Plug-in Registration Tool, adding on the appropriate execution step.

As will be demonstrated, the emphasis with this approach is much more on working outside of the application – something you may not necessarily be comfortable with. Nevertheless, I hope that the remaining sections provide enough detail to enable you to replicate the steps within your own environment.

Developing the Class File

As before, you’ll need to have ready access to a Visual Studio C# Class file project and the Dynamics 365 SDK. You’ll also need to ensure that your project has a Reference added to the Microsoft.Xrm.Sdk.dll. Create a new Class file and copy and paste the following code into the window:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

namespace D365.BlogDemoAssets.Plugins
{
    public class PostLeadCreate_GetInitiatingUserExample : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            // Obtain the execution context from the service provider.

            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            // Obtain the organization service reference.
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            // The InputParameters collection contains all the data passed in the message request.
            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)

            {
                Entity lead = (Entity)context.InputParameters["Target"];

                //Use the Context to obtain the Guid of the user who triggered the plugin - this is the only piece of information exposed.
      
                Guid user = context.InitiatingUserId;

                //Then, use the GetUserDisplayName method to retrieve the fullname attribute value for the record.

                string displayName = GetUserDisplayName(user, service);

                //Build out the note record with the required field values: Title, Regarding and Description field

                Entity note = new Entity("annotation");
                note["subject"] = "Test Note";
                note["objectid"] = new EntityReference("lead", lead.Id);
                note["notetext"] = @"This is a test note populated with the name of the user who triggered the Post Create plugin on the Lead entity:" + Environment.NewLine + Environment.NewLine + "Executing User: " + displayName;

                //Finally, create the record using the IOrganizationService reference

                service.Create(note);
            }
        }
    }
}

Note also that you will need to rename the namespace value to match against the name of your project.

To explain, the code replicates the same functionality developed as part of the Workflow in last week’s post – namely, create a Note related to a newly created Lead record and populate it with the Display Name of the User who has triggered the plugin.

Retrieving the User’s Display Name

After copying the above code snippet into your project, you may notice a squiggly red line on the following method call:

GetUserDisplayName is a custom method that needs to be added manually; it is the only way in which we can retrieve the Display Name of the user, as this is not returned as part of the IPluginExecutionContext. We, therefore, need to query the User (systemuser) entity to return the Full Name (fullname) field, which we can then use to populate our newly created Note record. The custom method that returns this value is provided below and should be placed after the closing brace of the Execute method, but before the final two closing braces of the class and namespace:

private string GetUserDisplayName(Guid userID, IOrganizationService service)
{
    Entity user = service.Retrieve("systemuser", userID, new ColumnSet("fullname"));
    return user.GetAttributeValue<string>("fullname");
}

Deploy to the application using the Plug-in Registration Tool

The steps involved in this do not differ greatly from what was demonstrated in last week’s post, so I won’t repeat myself 🙂 The only thing you need to make sure you do after you have registered the plug-in is to configure the plug-in Step. Without this, your plug-in will not execute. Right-click your newly deployed plug-in on the main window of the Registration Tool and select Register New Step:

On the form that appears, populate the fields/values indicated below:

  • Message: Create
  • Primary Entity: Lead
  • Run in User’s Context: Calling User
  • Event Pipeline Stage of Execution: Post-Operation

The window should look similar to the below if populated correctly. If so, then you can click Register New Step to update the application:

All that remains is to perform a quick test within the application by creating a new Lead record. After saving, we can then verify that the plug-in has created the Note record as intended:

Having compared both solutions to achieve the same purpose, is there a recommended approach to take?

The examples shown in the past two blog posts illustrate nicely how solutions to specific scenarios within the application can be achieved in different ways. As clearly evidenced, one could argue that there is a code-heavy (plug-in) and a light-touch coding (custom Workflow assembly) option available, depending on how comfortable you are with working with the SDK. Plug-ins are a natural choice if you are confident working solely within Visual Studio or have a requirement to perform additional business logic as part of your requirements. This could range from complex record retrieval operations within the application to an external integration piece involving specific and highly tailored code. The Workflow path clearly favours those of us who prefer to work within the application in a supported manner and, in this particular example, can make certain tasks easier to accomplish. As we have seen, the act of retrieving the Display Name of a user is greatly simplified when we go down the Workflow route. Custom Workflow assemblies also offer greater portability and reusability, meaning that you can tailor logic that can be applied to multiple different scenarios in the future. Code reusability is one of the key drivers in many organisations these days, and the use of custom Workflow assemblies neatly fits into this ethos.

These are perhaps a few considerations that you should make when choosing the option that best fits your particular requirement, but it could be that the approach you feel most comfortable with ultimately wins the day – so long as this does not compromise the organisation as a consequence, that is an acceptable stance to take. Hopefully, this short series of posts has demonstrated the versatility of the application and the ability to approach challenges with equally acceptable pathways for resolution.