Launching H5SDK from H5 Script

Over the last month or two I've been doing some H5 development work and had a proof-of-concept script that ended up with a significant amount of scope creep – this pushed me from using an H5 Script to using the H5SDK.
This in turn introduced the challenge of needing to figure out how to launch the H5SDK application and pass arguments to it.

I ended up using the bookmarks functionality, but even then I had some challenges passing parameters into my application – and this was related to the order of the arguments. The format below, with name first and the application's own parameters appended to program, is what worked for me. The two bookmark arguments are:

name: the name that we want for the tab (this has some inconsistent behaviour between the classic UI and the modern UI); spaces should be URL encoded, so a space becomes %20
program: the URL that you see in the H5 Administration window for the H5SDK application

mforms://bookmark?name=<name for the tab>&program=/mne/apps/<h5sdk deployed name>?<parameters>

We then retrieve the parameters in our application using HttpUtil.getParameter(<parameter name>).

So an example would be:

mforms://bookmark?name=My%20Application&program=/mne/apps/myapplication?orno=123456789

And our H5SDK code would read the parameter like so:

let orderNumber: string = HttpUtil.getParameter('orno');
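
To round things out from the launching side, here's a minimal sketch of building the bookmark URI from an H5 script. The helper name is my own, and navigating via window.location is an assumption about how to hand the mforms:// URI to the client – adapt it to however your script triggers links.

// Hedged sketch: build the bookmark URI and navigate to it.
// launchH5SdkApp is a hypothetical helper, not part of the H5 script API.
function launchH5SdkApp(tabName: string, appName: string, params: { [key: string]: string }): void {
    // Spaces (and other reserved characters) in the tab name must be URL encoded
    const name = encodeURIComponent(tabName);
    const query = Object.keys(params)
        .map(key => key + "=" + encodeURIComponent(params[key]))
        .join("&");
    // Order matters: name first, then program with the application's own query string
    const uri = "mforms://bookmark?name=" + name + "&program=/mne/apps/" + appName + "?" + query;
    window.location.href = uri;
}

// Example: open "My Application" and pass an order number
launchH5SdkApp("My Application", "myapplication", { orno: "123456789" });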


CFT Agent


I've been pinged on several occasions over the last six months to help out with issues with the Cloud File Transfer Agent (CFT Agent), so I thought I'd pen an article around the CFT Agent: troubleshooting, observations, and I'll throw in some anecdotes to boot 🙂

The CFT Agent

The CFT Agent is one of the 'on premise' components that we need to install as part of our M3CE installs. When you get that 'wait' status going into Business Engine Files or Business Engine Data Management under the Administration Tools, M3CE is checking to make sure that there is communication with the CFT Agent.

The CFT Agent is pretty nifty: it establishes an outbound HTTPS connection to M3CE, and work passes over the connection it creates. Very handy, as it means that we don't need to worry about port forwarding and exposing our infrastructure to the internet, and yet M3CE can still send data to the CFT Agent. We just need to allow the server to communicate out over HTTPS to the cloud.

So what does it actually do?



Business Engine Configuration Data – and CRS021 – Sorting Options

Sorting Orders in the M3BE Configuration Files


How many of you have had to manage exported CMS005, CMS010 and CMS015 configurations, and struck that situation where you get a failure on the import of your configuration file?



MNS209 – Getting Data From IDM

When I first started working with M3CE it was still under controlled availability, and there were a few irritations that other consultants and I were having around printouts, especially given we were working with configurable XML – there were some potential gotchas with outputs to IDM in the early days of the deployments.

Trying to gather information for troubleshooting was a challenge (I'm aiming for a post on this subject in the near future). At that point, MNS270 -> Related Options -> View wouldn't display anything even if you had save turned on and the CFT Agent running (well, MNS270 didn't exist then, but…).

We also only had a truncated error message in MNS209, and 9 times out of 10 we only had the useless portion of the error message before it was truncated (the field has long since been increased in length).

In those early days I was quite interested in exploring H5 scripting, so I quickly hacked together two scripts over a couple of evenings. They would retrieve data from the special IDM document type MDS_DistributionJobs: the first would retrieve the complete error message and some other useful information; the second would retrieve either the XML or the raw JSON from IDM and drop it into a textbox on the panel so you could inspect either for troubleshooting.
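
To give a flavour of the approach, here's a rough sketch of querying IDM for those documents from script code. The endpoint path, query syntax and attribute name are assumptions from memory – verify them against the IDM REST API documentation for your tenant.

// Hedged sketch: search IDM for an MDS_DistributionJobs document.
// The /IDM/api/items/search path and the XQuery-style expression are assumptions;
// the JobId attribute name is purely illustrative.
async function getDistributionJob(baseUrl: string, token: string, jobId: string): Promise<any> {
    const query = encodeURIComponent('/MDS_DistributionJobs[@JobId = "' + jobId + '"]');
    const response = await fetch(baseUrl + "/IDM/api/items/search?$query=" + query + "&$offset=0&$limit=1", {
        headers: { "Authorization": "Bearer " + token, "Accept": "application/json" }
    });
    if (!response.ok) {
        throw new Error("IDM search failed: " + response.status);
    }
    // The result would contain the error message and the XML/JSON attachments to inspect
    return response.json();
}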



Send BOD to IMS

This is probably the longest I've had a post sitting started (since July) but unfinished…but it looks like we've finally made it 🙂

Over the last 18 months there have been several times when I've wanted to 'resend' a BOD to ION, however I haven't had access to an sFTP server, or I've wanted to quickly create a simple Document Flow to test something.

Usually in those instances I’ve set up an ION connection point so I can push the document to an ION API via Postman, but the issue with this is that you have to use a specific ION API ClientID value – which is just irritating 🙂

Of-course, I ended up writing a quick hack to solve this problem.

This post will assume that you are familiar with Data Flows, Connection Points and the ION APIs.

The API that we are interested in is /v2/message from the IONMessagingService; we will submit our BOD through this API.
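
For context, here's a rough sketch of what such a call looks like. The endpoint path and the payload field names are my assumptions based on the IONMessagingService documentation rather than code lifted from the tool, so verify them against your tenant's ION API docs.

// Hedged sketch (Node 18+, global fetch and Buffer): post a BOD to IMS.
// sendBodToIms is a hypothetical helper; the payload fields are assumptions to verify.
async function sendBodToIms(
    baseUrl: string,       // the ION API gateway URL including your tenant
    bearerToken: string,   // OAuth2 token obtained with the .ionapi credentials
    documentName: string,  // e.g. "Sync.SalesOrder"
    fromLogicalId: string, // must match the logical ID on the IMS connection point
    bodXml: string
): Promise<void> {
    const payload = {
        documentName: documentName,
        messageId: "msg-" + Date.now(), // any unique identifier
        fromLogicalId: fromLogicalId,
        toLogicalId: "lid://default",
        document: {
            // the BOD itself travels base64 encoded
            value: Buffer.from(bodXml, "utf8").toString("base64"),
            encoding: "BASE64",
            characterSet: "UTF-8"
        }
    };
    const response = await fetch(baseUrl + "/IONSERVICES/api/ion/messaging/service/v2/message", {
        method: "POST",
        headers: {
            "Authorization": "Bearer " + bearerToken,
            "Content-Type": "application/json"
        },
        body: JSON.stringify(payload)
    });
    if (!response.ok) {
        throw new Error("IMS rejected the message: " + response.status + " " + (await response.text()));
    }
}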

Setup

Set up an Authorised App

This supports both Backend Service and Web Application authorisations. In this example I use the Web Application type.

We will want to download the .ionapi file – note that we will need the ci field from that file when we set up the connection point.

Setup the Connection Point

We need to set up a connection point of type IMS via ION API. Here we paste in the client ID (the ci field) from an authorised application's .ionapi file (you can use either a web authentication or a backend service).

Then add the BOD we want to leverage.

Set up the Data Flow

We then want to create the document/data flow. In my case I wanted it to go to M3.

And again, we want to specify the BOD that we want to load, our IMS connection point, and M3 as our destination connection point.

The application

You can retrieve the application from GitHub:

https://github.com/potatoit/SendBODToIMS/

It is a simple .NET application – download it, extract the .zip file and run it (assuming you have .NET installed).

It is a little primitive as it was a quick hack – there's definitely a lot of room for improvement with the UI, and though it fulfilled my initial purposes, there are probably bugs – use with caution.

Running the UI

(The From LogicalID comes from ION Desk's Connection Point)

Simply click on the ION API button and select your .ionapi file (this must be associated with the ClientID entered into the connection point!); this will populate the tenant for you.

Adjust the BOD, the From LogicalID and the message ID if needed. Paste your BOD into the text control and click on Send.

Easy as that.


EVS100 – Loading Data into M3

For those of us that used to use MDIWS or other such tools, you'll know that as of November 2020 certificate-based authentication to the M3BE APIs was finally shut down, which put an end to several tools that people had been using over the years to push data into M3. External programs now need to leverage the ION APIs, which are a bit more complex than what we are used to.

November 2020 (or October?) also saw the introduction of EVS100, a program which allows us to import data into M3 from an Excel spreadsheet or a .csv, provided there is an M3 API for it. Refer to NCR 12180 / KB 2132563 and the Infor Docs (https://docs.infor.com) for details.

In a nutshell, EVS100 allows us to read an Excel .xlsx file with multiple tabs, or a .csv, from the MvxFileTransfer\FileImport directory on the CFT Agent, and it will execute APIs to push the data into M3.

I’ve found that the spreadsheet version is quite useful as it allows me to upload data from multiple sheets in one ‘session’. This happens on the server so it shouldn’t suffer from the round-trip latency you get when running your own home-grown application from your machine.

It does have a few quirks at the moment (it’s particular about missing values in columns and doesn’t like spaces in file names) but it is improving.

When you are using a spreadsheet it does require a fairly specific format, and as I am inherently lazy, I didn't really want to manually create a spreadsheet – so I hacked together a quick script which allows you to select a transaction in MRS002 and click the Generate Sheet button to generate a template spreadsheet. It will also highlight the API's mandatory fields 🙂

Screenshot of the template

The Script adds the Generate Sheet button

After uploading the script to the tenant, it needs to be attached to MRS002/B.

You can download the MRS002_GenerateSheet.js script from the Samples -> js directory of this GitHub repo: https://github.com/potatoit/MRS002_GenerateSheet
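
If you're curious how such a script hangs together, below is a minimal sketch of the general shape of an H5 script. The Init entry point and controller access follow Infor's H5 scripting examples, but the button wiring and the empty click handler are placeholders – the actual generation logic lives in the repo.

// Minimal H5 script skeleton – a sketch only. IScriptArgs and IInstanceController
// come from Infor's H5 script type definitions; the jQuery-based element access
// follows the published H5 scripting examples.
class MRS002_GenerateSheet {
    private controller: IInstanceController;

    constructor(args: IScriptArgs) {
        this.controller = args.controller;
    }

    // H5 calls this static entry point when the script is attached to MRS002/B
    public static Init(args: IScriptArgs): void {
        new MRS002_GenerateSheet(args).run();
    }

    private run(): void {
        const content = this.controller.GetContentElement().GetElement();
        const button = $("<button>").text("Generate Sheet");
        button.on("click", () => {
            // Read the selected MRS002 transaction, fetch its field metadata and
            // build the template workbook (xlsx-style does the heavy lifting in
            // the real script), highlighting the mandatory API fields
        });
        content.append(button);
    }
}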

It is something quick that was thrown together – I'm sure that there's lots of room for improvement…

The full Visual Studio solution is also available for download – if you do download it, please note that it leverages the xlsx-style project, so please keep the license with the distribution.

Happy generating templates! 🙂


The Case of ___ERROR___ in your Configurable XML Output Field (in MWS435PF)

Recently I was involved with a project where the customer was having some issues after adding fields to MWS435PF. They had added OACUOR from the OOHEAD table, and whenever they ran the report they would get a message saying ___ERROR___ instead of the data.

The eagle-eyed will also notice that the date looks a little off as well.

When the XML is generated, if there is bad data or illegal characters for the XML, then the bad data will be substituted with ___ERROR___. This should be relatively easy to track down: you have an order, you have an error – you look at the order's data…

In this case, however, we would get this error regardless of what the customer's order number contained, be that blank or otherwise. It would also happen on every single order.

The odd thing was that if we reset to standard we would get our output; add just these two fields and we would get the error.

In another tenant we could reset to standard, add our two fields, and we wouldn't get the error. This biased my thoughts towards the issue being either a company-specific or a tenant-specific setting.

Normally when examining this sort of issue, I'd be inclined to try importing the CMS005 configuration into another company, ideally on another tenant. If we get the same issue, we know that the trigger is in our configurable XML import file. We also know that we have a good chance of being able to shuffle this off to support to replicate, and get development to debug.

(Incidentally, if you open the exported .zip file and then open the XML file inside, you'll see that these exports just run a sequence of API calls – and if you are having trouble with imports, it can be useful to manually step through each of the API calls.)

In the tenant where we had previously added the two fields without getting an error, we imported the configuration. Surprisingly, when we then ran the output we got the error. So at this point we knew that we could replicate the issue across tenants – it was probably going to be something within the configurable XML itself.

And this is where it was puzzling – in this other tenant it had worked prior to the import; we reset MWS435PF back to standard, added the two fields back in manually, and now got the error. We had now 'broken' this other tenant. 😦

As it turns out, the autojob MWS972 caches configurable XML. If we reset MWS435PF to standard, restarted MWS972 and then added the fields, we wouldn't get the error. If we imported the file, we would get the error; if we then reset to standard and added the fields, we still got the error. The key was that once we generated the output after the import, then reset to standard, the error would persist with those fields until MWS972 was restarted.

As the investigation progressed, it was discovered that the CIDDRA table had been added as a related table with the prefix OA. OOHEAD is also OA, and it is part of the standard MWS435PF related tables. We shouldn't be able to add two tables using the same prefix to our configurable XML (one needs to be given a virtual prefix), and there are warnings to prevent you from doing this; it is still unclear how the conflicting prefix was added. But essentially the error was being caused by trying to read the OACUOR data from a CIDDRA data object. And because this was bad data, it wasn't flushed from MWS972 properly until the autojob was restarted.

Once the customer added a different prefix for the CIDDRA table, this issue disappeared.

I found this particular issue really interesting because we got very unexpected behaviour that left us scratching our heads as to what was going on.

Happy issue hunting!


The Case of – the Long Processing Enterprise Collaborator Map

When I was young I started playing around with 68000 assembly; I was absolutely obsessed with squeezing every last byte out of my code and optimising it as much as I could, spending hours to get those extra few bytes out of the assembled executable. I continued on that front as I started playing with C, until a friend of mine said to me "Computers get faster each year, you shouldn't spend so much time optimising". The general gist of his argument was that you hit the point of diminishing returns, especially in the face of computers getting quicker and quicker.

At the time I remained sceptical and felt that it was just 'giving up'. Many years later, when I was working on some projects for IFL and my time became precious, I remembered his words and finally subscribed to the philosophy of striking a balance between optimisation and time invested.

What does all this have to do with M3CE? Earlier in the year I was asked to help on a project that had a MEC map failing part way through processing. This wasn't the first time I had come across this particular issue, however in this instance I was tasked with helping out at a more detailed level.

In this article I won't discuss the merits or flaws of the approach, as we had some very narrow parameters to work within. For those of you, especially the veterans, that have worked with MEC maps in the past, it is worthwhile checking the available KB articles and resources online about the differences and recommendations when working with MEC in M3CE – it is different, and there are several gotchas that I've seen trip people up in the past. Of course, as M3CE develops, so do the best practices…

But on to the subject of the post. This MEC map, when it was originally developed, was taking a whopping 16-odd hours to process an incoming file. I think it got shelved for several months as other project tasks took priority; however, when it was revisited the MEC map would just fail. When we started looking into this, we noticed that the map was failing after 1 hour – in fact, it was being terminated. The mapping would only succeed if we used a very small subset of the data, which wasn't a workable solution.

At that point in time there was a hard limit on how long a MEC map could run, and that limit was 1 hour. The limit had been introduced after the original map had been written.

We ended up doing a bit of a deeper dive into the map. As it turned out, the map would read a file that was passed to it from ION. This file would have several months' worth of records for various customers that would be ingested by MEC, accumulated and then written into M3. The map was designed in a way that is fairly common: take a record, process it through various logic, update M3, then go on to the next record. Each record processed would be treated discretely.

Because it needed to accumulate data, the map ended up writing records out to the CUGEX1 table – it was a clever solution, and it also allowed that data to be leveraged for another task. (However, we generally shouldn't be using the CUGEX1 table for transactional data.)

On top of this, for each line we would be making API calls to CMS100MI to retrieve some data, and another API call for some other data. We'd then perform some calculations and eventually get around to updating M3 (with the current accumulated value, not the final accumulated value). Then we would need to go and clean up the CUGEX1 table, which resulted in more API calls.

What this meant was that for a file with a little under 50,000 records, we were making more than 200,000 API calls (it could have been more – I really started losing interest in counting when we got to such high numbers). What's worse, we were using CMS100MI, which is not the fastest, and it had a related table whose data we weren't even using. It was probably going to be used, and then later the decision was made not to.

With a little bit of analysis of the file we were ingesting, it became apparent that the accumulated data points would mean we would essentially only need to write a few hundred times to M3 for the 50,000-odd records. With that in mind, we knew that we didn't need to store very much data, so we ended up doing the following:

  • We created an in-memory array for our accumulated data points – this array would only have a few hundred records, so not too much memory
  • We eliminated the CMS100MI API call – we were able to leverage a standard Get-based call to fulfil this requirement
  • Where we could, we cached the results of the Get calls in memory, as we knew that the number of variations would be relatively small; so where we did a Get for the item or the customer, we would cache the values (see the sketch after this list)
  • Because we were using an in-memory array for the data points, we didn't need to use CUGEX1 any more, so we didn't need to populate it, nor clean it up at the end
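
The map itself is MEC (Java under the hood), but the ideas are language-agnostic. Here's an illustrative sketch of the two techniques – the names and the stand-in Get call are hypothetical, not the actual map code.

// Illustrative sketch of in-memory accumulation plus caching of Get results.
// callGetApi is a hypothetical stand-in for the real MI Get transaction.
interface ItemRecord {
    itemNumber: string;
    description: string;
}

declare function callGetApi(itemNumber: string): Promise<ItemRecord>;

// Accumulate per-key totals in memory instead of writing every record to CUGEX1;
// only the few hundred accumulated results get written to M3 at the end.
const accumulated = new Map<string, number>();

function accumulate(key: string, value: number): void {
    accumulated.set(key, (accumulated.get(key) ?? 0) + value);
}

// Cache Get results keyed by their input, so repeating the same lookup
// thousands of times costs only one real API call per distinct key.
const itemCache = new Map<string, Promise<ItemRecord>>();

function getItemCached(itemNumber: string): Promise<ItemRecord> {
    let cached = itemCache.get(itemNumber);
    if (!cached) {
        cached = callGetApi(itemNumber); // only hit the API on a cache miss
        itemCache.set(itemNumber, cached);
    }
    return cached;
}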

In total, we reduced the number of API calls for the 50,000-line file to just a few hundred, which also reduced the time it took to process the file to around 6 to 7 minutes. There were still other opportunities for optimisation, but of course we start to get to the point of diminishing returns, and other project tasks started to take priority.

Closing Comments

In the single-tenant world we could afford to spend less time on optimisation and efficiency, as our servers really spent a lot of time waiting for work. In a multi-tenant environment we are interacting with shared resources, and we need to be conscious of being fairly efficient so we don't consume compute capacity needlessly.

And I know many of you that have known me for a long time have heard me express disdain for MEC, but after working in M3CE for a couple of years I've really come to appreciate and like the technology. With that said, I think it is really important when designing solutions that we use the right tool for the job, and MEC isn't always that tool. With that in mind, here are some general philosophies I have – I cannot emphasise enough that these are general, and there are probably scenarios that I'd consider exceptions – but in general:

  • If a map is taking more than a few minutes, we should consider rethinking the design
    • Is MEC the right tool?
    • Are we using the right APIs?
    • Does the process need to be redesigned?

    If the map is taking more than 10 minutes, then I think it is really important to take a long, hard look at what we are doing and seriously consider a different approach

  • API calls are expensive, especially when we are making a lot of them. If we are submitting the same API call with the same data many times, then investigate caching. This doesn't apply just to MEC – I apply this philosophy to scripts, widgets etc. (where practical). Of course, we need to consider that caching consumes memory, so it may not be ideal for large volumes of data
  • Large files – MEC isn't really the ideal tool for extracting or consuming large volumes of data. If you want to do that, consider other tools

I hope that this has been useful…happy MeCing… 🙂


XtendM3

Over the last few weeks I’ve been working on refactoring some extensions. A couple of these have been XtendM3 extensions and I thought I’d share some of my experiences.

For those of you that haven't come across XtendM3 before, it is an M3CE technology that allows us to change the behaviour of the M3 Business Engine. It allows us to hook into exit points within the M3BE code and manipulate results, read from and write directly to the M3 database, create new tables and create new APIs. XtendM3 extensions are written in Groovy, which is a language based on Java, and they run on the server.

Because this code can manipulate M3BE directly, there are a few things that currently have to be done.

  1. You have to request that XtendM3 is enabled on your tenants
  2. To deploy your XtendM3 extension to your production tenant, you must have it reviewed and signed by Infor. Though I had a very quick turnaround on my development, I would suggest that if you are estimating timelines, it would pay to add a buffer in case you need to rework anything

This, of course, is on top of writing good code 🙂

This post really focuses on my recent experience using a trigger.

General

The first thing that I'll say is that the Xtend team have really put a lot of work into making development and source control easy – take advantage of the resources that they have provided 🙂

Source Control

Source control is important and generally should be used; it is currently required for signing. You can clone the Xtend team's example repository at https://github.com/infor-cloud/acme-corp-extensions

Logging

Those of you that have read my posts in the past will know that I'm a big fan of logging information on code execution, especially when it is challenging or not possible to use debugging. But with XtendM3 we are logging into the BE logs – the same BE logs that are monitored by CloudOps (can you see where this is going? :-O)

During the process of uplifting an extension and running a fair amount of logging to troubleshoot a couple of issues, I ended up getting an email from CloudOps asking why there were all these warnings appearing for the tenant I was working in (I'm pleased I wasn't using error logging :-)).

The lesson here was: be sensible with the logging. When tracing you are better off logging at the info level, and be very conscious that if you use the warning or error levels a lot, you will probably end up in a discussion with the cloud team.

Triggers

Some programs in M3 have extension points which allow you to override specific methods in M3 – for example, so you can add your own custom logic when returning a value. One of the key things about triggers is that they have to be exposed to the framework by M3; at the moment, if a program doesn't have the exit point you need, you can request that one gets added.

You can see the extension points of a specific program by looking in the Metadata Publisher – there is a section called "Extension Points". Taking a look at PPS912, we can see extension points that allow us to trigger off calls to get the Purchase Price, the Purchase Quantity, Status Supplier and several other functions.

There are some considerations when using these extension points (this is not an exhaustive list by any stretch) that I came across, which gave me some odd results while iteratively working through some XtendM3 developments:

  • When you make changes to your extension, close the program session that you are testing with to 'load' the changed code – essentially, close the tab.
  • Extensions are per tenant; if you are developing and testing in a tenant where other companies or users are working, consider adding some temporary code to limit execution to just your user or, if you have exclusive access to a company, to that company. I had someone working in a tenant that had a lot of users; they made a change which created a warning and caused a lot of confusion for the users trying to test normal M3 functionality.
  • Don't forget that some programs get called by autojobs – I got caught out with the logging where I thought I had removed it, but CloudOps sent me an email indicating the logging was still active. It turned out I had to restart one of the autojobs (MNS051) for my updated code to take effect.
  • The extension point may naturally be called multiple times by an M3 program; if you are taking an incoming value, adding another value and returning it, then you need to make sure that – now, and in the future – you don't end up adding your value multiple times because the internal code of M3 calls that function more than once. This may involve keeping track of the number of times you have been run in a session.
  • Be conscious that M3 is getting additional functionality added on a monthly basis, especially when you consider the point above: today, in the normal sequence of events, the extension point may only be called once; next month functionality may be added where it gets called multiple times.
  • Clean and robust code – make sure your code is clean, tidy, robust and thoroughly thought out, especially if you're going to use it in production. Because you are working at a low level and because M3 is updated each month, it is imperative that you think long and hard about the implications of what you are doing and whether or not it will get approved and signed (see the link below for the approval requirements).

Getting the Code Signed

There's a link below with the approval requirements, and here are a few hints to make the process a little smoother:

  • Keep the code clean – remove dead, commented-out code
  • Make sure your inline comments make sense so the review team can understand what is going on; clean up any comments which aren't needed or relevant
  • Check your logging – if you log a lot (like I generally do during development) you may be asked to tone it down
  • Provide JavaDoc comments, making sure you break your code down into functions/methods that make sense and allow you to effectively describe your code

Useful Links

XtendM3 Webinar
https://pages.infor.com/20191112-oth-edu-webinar-m3-extensibility-modifications-in-m3-cloud-edition.html

XtendM3 Documentation
https://infor-cloud.github.io/xtendm3/

XtendM3 SDK – clone this to build your dev environment
https://github.com/infor-cloud/xtendm3-sdk-java

XtendM3 Sample Repository
https://github.com/infor-cloud/acme-corp-extensions

XtendM3 Approval Requirements
https://infor-cloud.github.io/xtendm3/docs/documentation/approval-requirements/


Testing APIs with Postman

Back in the old days I'd use SoapUI extensively for testing SOAP-based connections. With M3CE we now leverage the ION APIs, which are REST based, so nowadays I use Postman for my testing. As I tend to work across many different tenants on a fairly regular basis, I set up Postman in a specific way.

This post runs through how I have Postman set up, and also shows how we can set up Postman to retrieve the OAuth2 bearer token automatically when we call our API. Please note that if you are planning on calling the APIs a lot, you should cache the bearer token and handle its expiry. We should really expire the bearer token once we are finished, but I haven't looked at whether we can do that with Postman.
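
As a taste of where this ends up, here's a sketch of a Postman pre-request script that fetches a token before each call. pm.sendRequest and the environment getters/setters are standard Postman; the variable names and the password-grant fields are my assumptions, so map them to the fields in your .ionapi file. Note that this fetches a fresh token on every request – for heavy use, cache the token and respect its expiry.

// Hedged sketch of a Postman pre-request script (Postman scripts are JavaScript;
// the declare line just keeps this valid as TypeScript).
declare const pm: any;

pm.sendRequest({
    url: pm.environment.get("tokenUrl"), // token endpoint assembled from the .ionapi file
    method: "POST",
    header: { "Content-Type": "application/x-www-form-urlencoded" },
    body: {
        mode: "urlencoded",
        urlencoded: [
            { key: "grant_type", value: "password" },
            { key: "client_id", value: pm.environment.get("clientId") },
            { key: "client_secret", value: pm.environment.get("clientSecret") },
            { key: "username", value: pm.environment.get("saak") },
            { key: "password", value: pm.environment.get("sask") }
        ]
    }
}, (err: any, res: any) => {
    if (!err) {
        // Store the token so the request's Authorization header can use {{bearerToken}}
        pm.environment.set("bearerToken", res.json().access_token);
    }
});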

