AL Extensions: Importing and Exporting Media Sets

One of the things that changes when you build V2 extensions for a cloud environment is that you can no longer access most of the functions that work with physical files.

This presents a bit of a challenge when working with the Media and MediaSet field types, as the typical approach is to provide import and export functions so that a user can get pictures in and out of the field.

An example of this is the Customer Picture fact box that’s on the Customer Card:

[Image: Customer Picture fact box on the Customer Card]

As you can see, the import and export functions in C/Side leverage the FileManagement codeunit in order to transfer the picture image to and from a physical picture file. These functions are now blocked.

So… we have to take another approach. Enter streams.

Using the InStream and OutStream types, we can recreate the import and export functions without using any of the file-based functions.

An import function would look like the following. In this example, the Picture field is defined as a Media Set field.

local procedure ImportPicture();
var
    PicInStream: InStream;
    FromFileName: Text;
    OverrideImageQst: Label 'The existing picture will be replaced. Do you want to continue?', Locked = false, MaxLength = 250;
begin
    if Picture.Count > 0 then
        if not Confirm(OverrideImageQst) then
            exit;

    if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FromFileName, PicInStream) then begin
        Clear(Picture);
        Picture.ImportStream(PicInStream, FromFileName);
        Modify(true);
    end;
end;

The UploadIntoStream function prompts the user to choose a local picture file, and from there we upload it into an InStream. At no point do we put a physical file on the server. Also note that the example above always overwrites any existing picture. You don't have to do this, as media sets allow for multiple pictures; I'm just recreating the original example taken from the Customer Picture page.
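Since a media set can hold multiple pictures, appending instead of replacing is just a matter of dropping the Clear call and the confirmation prompt. A sketch, assuming the same Picture MediaSet field as above (AddPicture is a hypothetical name):

```al
local procedure AddPicture();
var
    PicInStream: InStream;
    FromFileName: Text;
begin
    // No Clear(Picture) here: the uploaded image is added to the
    // media set alongside any pictures that already exist.
    if UploadIntoStream('Import', '', 'All Files (*.*)|*.*', FromFileName, PicInStream) then begin
        Picture.ImportStream(PicInStream, FromFileName);
        Modify(true);
    end;
end;
```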

For the export we have to write a bit more code. When using a MediaSet field, we do not have access to any system function that exports directly to a stream. To deal with this, we loop through the media set and get each of the corresponding media records; once we have those, we can export each one to a stream.

That would look like this:

local procedure ExportPicture();
var
    TenantMedia: Record "Tenant Media";
    PicInStream: InStream;
    Index: Integer;
    FileName: Text;
begin
    if Picture.Count = 0 then
        exit;

    for Index := 1 to Picture.Count do begin
        if TenantMedia.Get(Picture.Item(Index)) then begin
            TenantMedia.CalcFields(Content);
            if TenantMedia.Content.HasValue then begin
                FileName := TableCaption + '_Image' + Format(Index) + GetTenantMediaFileExtension(TenantMedia);
                TenantMedia.Content.CreateInStream(PicInStream);
                DownloadFromStream(PicInStream, '', '', '', FileName);
            end;
        end;
    end;
end;

We use the DownloadFromStream function to prompt the user to save each of the pictures in the media set. As in our first example, there are no physical files ever created on the server, so we’re cloud friendly!

You may notice that I use the function GetTenantMediaFileExtension in the export example to populate the extension of the picture file. Since the user can upload a variety of picture file types, we need to make sure we create the file using the correct format.

The function to do this is quite simple; however, there is currently no function in the product to handle it, so you'll have to build it yourself for now. Hopefully Microsoft will add one in the near future.

local procedure GetTenantMediaFileExtension(var TenantMedia: Record "Tenant Media"): Text;
begin
    case TenantMedia."Mime Type" of
        'image/jpeg': exit('.jpg');
        'image/png': exit('.png');
        'image/bmp': exit('.bmp');
        'image/gif': exit('.gif');
        'image/tiff': exit('.tiff');
        'image/wmf': exit('.wmf');
    end;
end;

Until next time, happy coding!

 


AL Extensions: Extending an Extension

Something I’ve been meaning to write about since I started developing extensions is how to extend another extension, or in other words, the process of creating a dependent extension.

For the sake of keeping things straight in this post, I am going to refer to the following 2 extensions:

  1. Parent: the extension that is being depended on.
  2. Child: the extension that depends on the Parent extension.

So, without further ado, here we go!

First, develop and build the Parent extension. Take note of the following values from the app.json of the Parent extension:

[Image: the Parent extension's app.json, showing its id, name, publisher, and version values]

Now publish the Parent extension to your development system so that we can download the symbols for the Child extension.

Time to move to developing the Child extension, where we need to specify the dependency to the Parent extension. We do that using the following property in the app.json file.

[Image: the empty dependencies property in the Child extension's app.json]

Using the id, name, publisher, and version values from the Parent extension, we populate the property as follows:

[Image: the dependencies property populated with the Parent extension's values]
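In case the screenshot doesn't come through: the property takes an array of objects, one per dependency, built from the Parent's values. A sketch with placeholder values (note that depending on your AL version, the key holding the Parent's id is named appId or id):

```json
{
    "dependencies": [
        {
            "appId": "11111111-1111-1111-1111-111111111111",
            "name": "Parent",
            "publisher": "MyPublisher",
            "version": "1.0.0.0"
        }
    ]
}
```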

As soon as you save the app.json file, you will be prompted that symbols are missing. This is because the AL compiler is now looking for the symbols for the dependent extensions.

Click the Download Symbols button in the VS Code notification and download the symbols from the development database where you published the Parent extension.

Take note that the dependencies property accepts an array of values, which means you can specify multiple dependency extensions.

Once the symbol download is complete, you will be able to reference any objects, events, functions, etc. that are available in the Parent extension. You’ll also be able to extend the Parent extension as needed.

Be Aware

  • When you publish your extensions, you must publish the extensions that are depended on first (e.g. Parent). If you try to publish the extension with the dependency first (e.g. Child), it will fail with an error stating that references do not exist in the database.
  • When you remove extensions, you must remove the extensions that have the dependencies first (e.g. Child) as you cannot remove an extension that has other extensions depending on it.
  • If you update any of the extensions that are being depended on in your development database, you must also change the corresponding version in the dependencies property of any extension that depends on it. Once this is done you will be prompted to download the new symbols file. If you don't do this, you will be developing against an old version of the symbols, and publishing the extension may fail.

How is this useful?

  • A Dynamics NAV ISV can use this method to create a “library” extension that contains standard functionality that is required to be available across one or more functional extensions.
    • By the way, this scenario is fully supported for any extensions submitted to AppSource. The library extension does not appear in AppSource. When a customer installs the functional extension, the library extension will also get installed.
  • A Dynamics NAV partner could create an extension that contains some common functionality they deliver to all of their customers. They could then create a dependent extension that contains a customized layer on top of the original extension that is unique to each customer.
  • A Dynamics NAV partner needs to extend another partner’s extension.
    • If you are going to do this, you must have access to getting the symbols for the extension you are building on top of. You don’t need the actual .al source files, just the symbols file.
  • A customer who develops their own custom solution can build their own extension and make it dependent on another extension that’s in their system.

Until next time, happy coding!

AL Extensions: Reading JSON Info with PowerShell

A quick post today on how to read values from a JSON file with PowerShell.

PowerShell contains the cmdlet ConvertFrom-Json which, as its name implies, converts a JSON object into a PowerShell object, from which you can then access the values and read them into variables.

Let’s assume we have a file named MyFile.json, which contains the following JSON object:

{
   "id": "123456",
   "name": "MyObjectName"
}

We can convert this into a PowerShell object with the following code:

$JsonObject = Get-Content -Raw -Path .\MyFile.json | ConvertFrom-Json

We can then read the value of name into a variable like this:

$Name = $JsonObject.Name
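One handy detail: property access on the converted object is case-insensitive in PowerShell, which is why $JsonObject.Name resolves the lowercase "name" key from the file. Reading both values from the sample file above:

```powershell
# Convert the sample MyFile.json into a PowerShell object
$JsonObject = Get-Content -Raw -Path .\MyFile.json | ConvertFrom-Json

$Id   = $JsonObject.id     # "123456"
$Name = $JsonObject.Name   # "MyObjectName" (case-insensitive lookup)
```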

SO WHAT?

As the title of this post suggests, we can make use of this handy code with our AL extensions.

In my previous post, I showed you how to create a custom AL extension build task with Visual Studio Code. You can expand on this and launch a PowerShell script to execute the ALC compiler. When you do that, one of the parameters you can supply to the compiler is the output path and resulting app file name. By using the app.json file that is part of every AL extension, you can utilize things such as the extension publisher, name, version, etc. in your compiler process.

You can do something like the following…

$JsonObject = Get-Content -Raw -Path .\app.json | ConvertFrom-Json

$ExtensionName = $JsonObject.Publisher + '_' + $JsonObject.Name + '_' + $JsonObject.Version + '.app'

$CompilerPath = 'C:\Users\<username>\.vscode\extensions\Microsoft.al-0.10.13928\bin'

$ALProjectFolder = 'C:\MyProjectFolder'
$ALPackageCachePath = Join-Path -Path $ALProjectFolder -ChildPath '.alPackages'
$AlPackageOutFilePath = Join-Path -Path 'C:\MyCustomOutputFolder' -ChildPath $ExtensionName

Set-Location -Path $CompilerPath
.\alc.exe /project:$ALProjectFolder /packagecachepath:$ALPackageCachePath /out:$AlPackageOutFilePath

…which would replicate the same file name that the out of box compile process does, but will still allow you to direct the file to be output to a custom path. For example, maybe a shared AL package cache that you use for multiple developers.

Until next time, happy coding!

AL Extensions: Custom Build Process Using Visual Studio Code Tasks

I recently discovered the Tasks menu in Visual Studio Code and, after briefly wondering how I had missed it through many months of using VS Code, I took a quick look at what it was all about.

Turns out VS Code has yet another amazing feature called Tasks, which lets you automate things such as builds, packaging, deployment, and probably whatever else you can think of. You can read all about tasks in detail here.

This got me thinking, I wonder if I can use this to make my own build process for AL extensions? Turns out this is quite easy…and here’s how you can do it.

Open up your AL extension folder and select Tasks > Configure Tasks.

[Image: Tasks > Configure Tasks menu]

In the command palette select Create tasks.json from template.

[Image: Create tasks.json from template option in the command palette]

In the command palette select Others.

[Image: Others option in the command palette]

You’ll now see a tasks.json file created in the .vscode folder of your AL project.

[Image: tasks.json file in the .vscode folder]

In the tasks.json file you’ll see some default settings like this:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "taskName": "echo",
            "type": "shell",
            "command": "echo Hello"
        }
    ]
}

We need to change the task command so that it builds our AL extension. When you install the AL Language extension, a command-line compiler (alc.exe) is installed behind the scenes. The alc compiler accepts 3 parameters:

  1. project: The path to the AL project folder
  2. packagecachepath: The path to the folder containing the reference packages
  3. out: The path to output the packaged extension app file to. This parameter is optional; if it’s not specified, the compiler outputs the app file in the project folder with the default name of <Publisher>_<Name>_<Version>.app

For example, in order to build our AL extension with the default output and name, we’d need a command such as this:

C:\Users\<username>\.vscode\extensions\Microsoft.al-0.10.13928\bin\alc.exe /project:C:\ALProject1 /packagecachepath:C:\ALProject1\.alpackages

One more quick thing before we continue. Within VS Code tasks, you have access to some predefined variables. You can read about what they are here. For our task, we’re interested in the ${workspaceFolder} variable, which will give us the path to the workspace folder of the project.

Back to the VS Code task. We need to update the tasks.json file now. We’ll change the taskName property to a more meaningful name, and we’ll update the command property to execute our alc compiler. Make sure to note the double slashes and that you replace the <username> and AL Language extension version number with the correct one as per your environment.

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "taskName": "Offline Build",
            "type": "shell",
            "command": "C:\\Users\\<username>\\.vscode\\extensions\\Microsoft.al-0.10.13928\\bin\\alc.exe /project:${workspaceFolder} /packagecachepath:${workspaceFolder}\\.alpackages"
        }
    ]
}

Now that we’ve updated the task file, you need to set this task as the default build command. To do that you select Tasks > Configure Default Build Task from the menu. Doing this will not change the existing F5 or Ctrl+F5 commands that Microsoft delivers with the AL Language extension.

[Image: Tasks > Configure Default Build Task menu]

In the command palette select the name of the task you created above. In my case, I’ve named the task Offline Build.

[Image: selecting the Offline Build task in the command palette]

The following lines were just added to your tasks.json file:

"group": {
    "kind": "build",
    "isDefault": true
}
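Putting the pieces together, the complete tasks.json should now look roughly like this (again, replace <username> and the AL extension version number as per your environment):

```json
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "taskName": "Offline Build",
            "type": "shell",
            "command": "C:\\Users\\<username>\\.vscode\\extensions\\Microsoft.al-0.10.13928\\bin\\alc.exe /project:${workspaceFolder} /packagecachepath:${workspaceFolder}\\.alpackages",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}
```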

You can close and save the tasks.json file now if you still have it open. You’re ready to use your new build task in the following ways:

  1. Tasks > Run Task
    • With this method you can select to run any task that you’ve created.
  2. Tasks > Run Build Task
    • This will work only if you’ve made your task the default build task.
  3. Ctrl+Shift+B
    • This will work only if you’ve made your task the default build task.

No matter what method you choose to execute the task, you can monitor it for errors in the VS Code terminal window.

[Image: build output in the VS Code terminal window]

That’s it! Now you’ve got a way to build your extension without having to deploy it at the same time.

Pretty sure I’ve only scratched the surface on what VS Code tasks can do, so I’m excited to see how else I can make use of them in my AL development.

Until next time… happy coding!

 

 

AL Language Extension in Marketplace

For those of you that have been trying out the Dynamics NAV Development Preview, you can now grab the AL language extension from the Visual Studio Code marketplace.

This will let you set up your local machine for coding and deploying extensions on the v2 platform. This version of the AL Language extension is not for coding against an on-prem NAV system. It requires a Dynamics 365 for Financials Sandbox tenant.

To get the AL Language extension, just do the following:

In Visual Studio Code, click the Extension icon at the left and in the search box at the top, type in AL

[Image: searching for AL in the Extensions pane]

In the search results, select the AL Language extension, and click Install.

[Image: AL Language extension in the search results with the Install button]

After it is installed, click the Reload button to reinitialize Visual Studio Code with your new extension.

[Image: Reload button for the AL Language extension]

That’s it! You can now connect your AL Language extension to your sandbox environment and code away!

As always, happy coding!

New Development Tools – April Update

As all Dynamics NAV developers probably know by now, Microsoft is working on the next generation of development tools for working with Dynamics NAV and Dynamics 365.

They’ve just released the April update, and included in this update is the tool to convert your v1 extensions to the new v2 format. Exciting!!!

Check out what else is new in the update here.

Find Out if an Extension is Installed, part 2

“Hi, this is Library, is Extension 1 home? What about Extension 2, are they home too?”

In my previous post on this topic, I explained how you can use the NAV App Installed App system table to see if a specific extension is installed. I later found out, though, that access to that table may not always be available on the Dynamics 365 for Financials platform, so it was back to square one. I want a stable, long-lasting solution.

Alas…events!

First… some background on why I need this functionality. I’m developing 2 extensions that share a common library, but I want to develop these extensions so that they are independent, meaning that a customer is able to install only one of the extensions if they choose to, and not be forced to install both. I also need certain features in each extension to act differently depending on whether the other extension is installed. I know… never simple. 🙂

For those that are not aware, when you submit an extension to AppSource, you are able to submit along with it a library extension. Multiple extensions can be dependent on the same library, which makes it easy to deliver foundation type functions that are shared amongst your extensions. The library extension will not be seen in AppSource, but it will be automatically installed when any of the dependent extensions are installed.

What this also means is that when I am developing in the context of one of my “functional” extensions, I am able to directly reference objects that are contained in my library, because the library is guaranteed to exist when the functional extension is installed. What I cannot do is the opposite: although the library extension knows that at least one functional extension was installed, it does not know which one. Make sense!? Clear as mud, I know. 🙂

In the example below, let’s assume that I have a “Library” extension, and two functional extensions, brilliantly named “Extension 1” and “Extension 2”.

First, I need to create a codeunit in my library extension that will act as the “extension handler” so to speak. In other words it will house the functions required to determine what extensions are installed. In my library codeunit, I’ll add the following functions:

CheckIsExtensionInstalled(ExtensionAppID : GUID) : Boolean
IsInstalled := FALSE;
IsExtensionInstalled(ExtensionAppID,IsInstalled);
EXIT(IsInstalled);

GetExtension1AppID() : GUID
EXIT('743ba26c-1f75-4a2b-9973-a0b77d2c77d3');

GetExtension2AppID() : GUID
EXIT('a618dfa7-3cec-463c-83f7-7d8f6f6d699b');

LOCAL [IntegrationEvent] IsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)

The above functions allow me to call into the codeunit to determine whether either of my functional extensions is installed. The GUIDs above are samples; you should use the same GUID that is used in the corresponding extension’s manifest file.

The piece that makes this all possible is the published event IsExtensionInstalled. From within each functional extension, I can subscribe to that event so that each extension can basically “answer” the library when it asks whether it’s installed.

To do that, I create 2 more codeunits, one in each functional extension. These codeunits will contain a subscriber to the event that we published in the library codeunit. This way, if the extension is installed, its subscriber will respond to the event and let the library know that it is installed. If the extension is not installed then there won’t be anyone home to answer that call.

Extension 1

LOCAL [EventSubscriber] OnCheckIsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)
IF ExtensionAppID = ExtensionHandler.GetExtension1AppID THEN
  IsInstalled := TRUE;

Extension 2

LOCAL [EventSubscriber] OnCheckIsExtensionInstalled(ExtensionAppID : GUID;VAR IsInstalled : Boolean)
IF ExtensionAppID = ExtensionHandler.GetExtension2AppID THEN
  IsInstalled := TRUE;

So how do we use all of this? Easy, of course. The example below shows code that you could use from either of the functional extensions, so that your extensions can act differently depending on what other extensions are installed.

IF LibraryCodeunit.CheckIsExtensionInstalled(LibraryCodeunit.GetExtension1AppID) THEN
  MESSAGE('Extension 1 is installed.');

IF LibraryCodeunit.CheckIsExtensionInstalled(LibraryCodeunit.GetExtension2AppID) THEN
  MESSAGE('Extension 2 is installed.');

There, easy right? If you want to try it out yourself, you can grab the above example on GitHub here.

Now, while you can do this if you own the source code for each extension, you cannot use this solution to determine if “any old random extension” is installed, as you need to add the subscriber function to the functional extension’s code. But if you are developing multiple extensions and need to know which of them are installed, this solution works wonderfully!

Happy coding!