Custom widgets in SAP Analytics Cloud (SAC) enable developers to inject JavaScript code directly into an SAC story, adding interactive and customized functionalities. In this blog post, we will dive into the world of custom widgets in SAC, explore how React can supercharge your widget development, and introduce you to a useful example of how we’ve used React to build upon the Data Import API: the File Upload Widget.
The File Upload Widget builds on the Data Import API to allow users to upload their FactData CSV or Excel datasets to public and private versions. The widget serves as an intermediary tool, parsing the file and sending the data contained within it to the Data Import API. The API then creates an ‘import job,’ which we use to post, validate, and eventually write the data to a specified public or private version. The File Upload Widget provides a user-friendly interface for customers to import their data and view the changes within the same story.
To develop the widget, we used the existing UI5 Web Components for React library, which allowed us to align the style of our custom components with the existing styles throughout SAP Analytics Cloud.
To understand the File Upload Widget, a basic understanding of the Data Import API is needed. The Data Import API empowers users to import large volumes of data into their models. The basic flow of the Data Import API is as follows:

1. Create an import job for a model and import type.
2. Post the data to the job.
3. Run the job, which validates the data and writes it to the model.
4. Poll the job status and, if needed, retrieve any invalid rows.
Users can also apply the following parameters to the jobs:

- Mappings between model columns and file columns
- Default values for columns missing from the file
- Job settings such as pivot options, date formats, the import method, and whether to execute with failed rows
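Putting the flow and parameters together, a minimal sketch using fetch might look like the following. The endpoint paths are those referenced throughout this post; the /api/v1/dataimport base path, the request body shapes, and the response field names are assumptions.

// Sketch of the basic Data Import API flow (body and response shapes assumed).
async function importData(modelID, importType, rows, settings) {
  const base = "/api/v1/dataimport";

  // 1. Create an import job, optionally passing Mapping, DefaultValues and
  //    JobSettings in the body.
  const createRes = await fetch(`${base}/models/${modelID}/${importType}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(settings),
  });
  const { jobID } = await createRes.json();

  // 2. Post the data to the job.
  await fetch(`${base}/jobs/${jobID}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ Data: rows }),
  });

  // 3. Run the job: the data is validated and written to the model.
  await fetch(`${base}/jobs/${jobID}/run`, { method: "POST" });
  return jobID; // 4. Poll /jobs/{jobID}/status until COMPLETE (see below).
}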
All service client logic is handled by a class called DataImportServiceApi. This class uses a singleton pattern, so the Story and Builder widgets retrieve the same instance of the service client via getInstance(), and both consume the widget properties that we persisted to the story, such as the model ID, mappings, and default values.
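A minimal sketch of this pattern; the class name comes from the post, while the method body and base path are illustrative.

// Sketch of the singleton service client shared by both widgets.
class DataImportServiceApi {
  static instance = null;

  static getInstance() {
    if (!DataImportServiceApi.instance) {
      DataImportServiceApi.instance = new DataImportServiceApi();
    }
    return DataImportServiceApi.instance;
  }

  async getModels() {
    const res = await fetch("/api/v1/dataimport/models"); // base path assumed
    return res.json();
  }
}

// The Story and Builder widgets both resolve to the same instance.
const api = DataImportServiceApi.getInstance();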
Custom Widgets are built on two main components – the JSON definition file, and the resource files.
The JSON Definition file outlines the metadata for the widget. It communicates to the rest of the story what functionality, such as properties, events, and methods, are contained within this widget, as well as outlining some technical properties, such as web components.
The notable aspects of this file are the web component definitions and the widget’s properties, events, and methods.
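As a rough illustration, a trimmed-down definition file might look like this. The id, version, and property entries are illustrative; the web component entries are shown in full under the development configuration below.

{
  "id": "com.sap.file.upload.widget",
  "version": "1.0.0",
  "name": "File Upload Widget",
  "webcomponents": [
    { "kind": "main", "tag": "com-sap-file-upload-widget", "url": "..." },
    { "kind": "builder", "tag": "com-sap-file-upload-widget-builder", "url": "..." }
  ],
  "properties": {
    "modelId": { "type": "string", "default": "" },
    "importType": { "type": "string", "default": "factData" }
  }
}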
The resource file contains all the JavaScript code for the logic of the widget itself. In traditional widget development, the developer creates and populates one resource file for each of the web components, then sets the URL value in the JSON to point to wherever that resource file is hosted. In the case of the File Upload Widget, the bundle file generated by React is used as the resource file for all web components. The difference is that we pass in a prop to distinguish which code we want to render, as explained in the render process section below.
Integrating the Custom Widget into your story involves two steps: uploading the widget to SAC, and installing the widget in your story.
To upload the widget into SAC, go to SAC Homepage -> Side panel -> Analytical Applications -> Custom Widgets. Here, you will see an option to ‘Create’ a new custom widget. Click this, and you will be prompted to upload both the JSON Definition file, and the resource file. Click ‘OK’, and your widget will be uploaded to SAC.
Figure 1: Uploading Widget to SAC
Once the widget has been uploaded, navigate to your SAC story. Here, inside the Edit container of the story toolbar, click Insert, then navigate to Custom Widgets. Your newly uploaded widget will be available for selection.
Figure 2: Adding Widget to Story
Once added to the story, the widget is available for use.
Figure 3: Widget inside Story page
We can use local React development servers to streamline the development of these file upload widgets. Here, instead of uploading the resource file to SAC, we host it on our own machine. Then, we update ‘FileUploadWidget.json’ and upload a new ‘LocalHost’ version of the widget. In this JSON file, the ‘url’s of our web components point to the local server, which means that every time the web component is rendered, it fetches the bundle file from our local React server.
"webcomponents": [
{
"kind": "main",
"tag": "com-sap-file-upload-widget",
"url": "http://localhost:5173/dist/file-upload-widget.mjs", // local service
"integrity": "",
"ignoreIntegrity": true
},
{
"kind": "builder",
"tag": "com-sap-file-upload-widget-builder",
"url": "http://localhost:5173/dist/file-upload-widget.mjs", // local service
"integrity": "",
"ignoreIntegrity": true
}
]
Figure 4: Uploading Development Configuration
The benefit of serving and configuring the widget in this way is that it allows us to test and debug our widget using browser DevTools, so we can test the functionality and behaviour of our UI elements. We can navigate to the URL where the scripts are hosted and set breakpoints in the code, and we also get the benefit of near-instant updates and hot reloading of the content as we make changes in our development environment.
Figure 5: Debugging on Localhost
An alternative way of debugging, which grants you the context of an SAC story, is debugging from the story itself. This is particularly useful when you want to integrate with services within SAC or deployed on Cloud Foundry, as the widget inherits SAC authentication from the cookies, allowing you to call APIs from the widget and perform validation and testing.
Figure 6: Debugging inside an SAC Story
Here, we will outline the different flows that can be undertaken by either an admin user, or a business user.
Admin User
The Builder Panel of the widget captures all the configuration functionality within a side panel of the SAC story. The idea is that we want the flow for the end user to be as seamless as possible. Therefore, the admin user can select which model to import to, whether to import to a public or private version, configure any data mappings or default values, and set any job settings. This information is saved to the story, allowing it to be used by the story component.
Figure 7: Builder Panel
Business User
Once configured, the File Upload Widget can be triggered by a business user in Story View mode (note: not applicable in Edit mode). The user selects an Excel (xlsx) or CSV file for upload, and the widget parses and validates the structure of the file. Once this small client-side validation is complete, the widget sends the data to the Data Import API, which performs validation on the data itself, ensuring that any filters or formats specified in the model are respected. After completion, the user can check the job status and download a CSV file containing rejected or failed validation records.
Figure 8: File Upload Dialog
This section outlines how the React code is rendered within SAC.
Data is shared between components by persisting a ‘settings’ variable to the story. These settings are attached to the story, in much the same way that ‘localStorage’ attaches properties to a browser window. This allows admin users to make changes to the widget settings in the builder panel and have those settings used when a business user works with the widget.
To update a value in the settings, we use the ‘onCustomWidgetAfterUpdate’ function. This takes an argument, ‘changedProperties,’ a JSON object containing key/value pairs for any properties that have changed since the last time a user saved their story. The keys are the names of the changed properties, and the values are their new values. We update the value of each such property on the component, then update the settings object by calling ‘updateSettings().’ This function creates a new JavaScript object containing all the current properties of the component, and persists a stringified version to the story.
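A minimal sketch of this round-trip on the web component; the property names mirror those used elsewhere in this post, and the class wrapper is illustrative.

class FileUploadWidget extends HTMLElement {
  // Called by SAC whenever widget properties change.
  onCustomWidgetAfterUpdate(changedProperties) {
    if ("importType" in changedProperties) {
      this.importType = changedProperties.importType; // e.g. "privateFactData"
    }
    this.updateSettings();
  }

  // Snapshot the current widget properties and persist them to the story.
  updateSettings() {
    this.settings = JSON.stringify({
      modelId: this.modelId,
      importType: this.importType,
      mappings: this.mappings,
      defaultValues: this.defaultValues,
    });
  }
}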
Here is an example of this flow, where an admin user updates the import type from ‘factData’ to ‘privateFactData.’
Figure 9: Render workflow
The React app uses SAP UI5 React Components, allowing us to maintain visual consistency with the rest of the SAC application. Extensive documentation on these components, and more details on how to use them, can be found here: UI5 Web Components
ModelSelector.jsx
The Model Selector is a ComboBox (text box and dropdown in one), which is populated by making a call to the /models endpoint of the Data Import API. This request returns a list of models available for import for the given SAC user, along with metadata such as the model name and description.
The ComboBox uses live search to filter the list of models by name, description, and ID. Once a model is selected, the ComboBox fires an onSelectionChange event, which sets the model ID in the state. This triggers a React effect that calls /models/{modelID}/metadata, which returns a JSON object containing the column metadata for the given model. This metadata contains the names and datatypes of the columns, which are used later for Mappings and Default Values.
When the widget is saved, the ID of the selected model is persisted to the ‘settings’ variable in the story as modelId.
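A sketch of such a component is shown below. The endpoint paths follow the post; the base path, response shape, and field names are assumptions, and the component props follow the UI5 Web Components for React API.

import { useEffect, useState } from "react";
import { ComboBox, ComboBoxItem } from "@ui5/webcomponents-react";

// Sketch of the model selector (response field names assumed).
export function ModelSelector({ onMetadataLoaded }) {
  const [models, setModels] = useState([]);
  const [modelId, setModelId] = useState(null);

  // Populate the ComboBox from the /models endpoint.
  useEffect(() => {
    fetch("/api/v1/dataimport/models")
      .then((res) => res.json())
      .then(setModels);
  }, []);

  // Once a model is selected, fetch its column metadata.
  useEffect(() => {
    if (!modelId) return;
    fetch(`/api/v1/dataimport/models/${modelId}/metadata`)
      .then((res) => res.json())
      .then((metadata) => onMetadataLoaded(modelId, metadata));
  }, [modelId, onMetadataLoaded]);

  return (
    <ComboBox
      filter="Contains"
      onSelectionChange={(e) => setModelId(e.detail.item.dataset.id)}
    >
      {models.map((m) => (
        <ComboBoxItem key={m.modelID} data-id={m.modelID} text={m.modelName} />
      ))}
    </ComboBox>
  );
}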
Figure 10: Model Selector
Once a Model has been selected, the Import Type Selector will be enabled.
ImportTypeSelector.jsx
Below the Model Selector is the Import Type Selector, allowing users to choose which import type they want to upload to, be it FactData, or PrivateFactData. Upon selecting an import type, the Mappings, Default Values, and Job Settings components become enabled.
Figure 11: Import Type Selector
MappingSelector.jsx
The Mappings section is made up of a single button which triggers a dialog. We handle mapping definitions in two ways: inferring them from an uploaded sample file using fuzzy search, or entering them manually via raw text inputs.
Figure 12: Data Mappings
To provide mappings, a user may upload a CSV/Excel file representing a sample of the real dataset, or even just a single row containing the columns. The first row of the file is parsed and mapped to a list of strings. In the case of an Excel file, we take the file headers from the first sheet by default, but also provide a dropdown allowing the user to trigger a different upload if necessary.
Figure 13: Fuzzy Search Mapping Inference
The MappingSelector uses a node module called Fuse.js to perform a fuzzy search on the column metadata for each column in the uploaded file. An instance of the Fuse search is created with the file headers passed as parameters. Then, we specify a ‘fuzzySearch’ function, which takes a column name from the metadata as an argument and searches to see if any of the columns in the file headers are a good match. It determines whether two columns match by applying a ’threshold of confidence,’ essentially scoring the match. If the score is below 0.34 (lower is better), the dropdown box is populated with the file header as it appears in the user’s source. If no mapping is a good enough match, the dropdown is left blank by default. Either way, users can then select which mappings they want to apply from the dropdown.
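A minimal sketch of this inference with Fuse.js; the 0.34 threshold comes from the post, while the function name and option values are illustrative.

import Fuse from "fuse.js";

const MATCH_THRESHOLD = 0.34; // lower scores are better matches

// Sketch: suggest a file header for each metadata column via fuzzy search.
function inferMappings(metadataColumns, fileHeaders) {
  const fuse = new Fuse(fileHeaders, { includeScore: true });
  const mappings = {};
  for (const column of metadataColumns) {
    const [best] = fuse.search(column);
    // Pre-populate the dropdown only when the match is confident enough.
    if (best && best.score <= MATCH_THRESHOLD) {
      mappings[column] = best.item;
    }
  }
  return mappings;
}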
If the user prefers, they may set the mappings for the import job using raw text inputs. Each column from the metadata is paired with an input field, like so:
Figure 14: Raw Text Input Mapping
Once the mappings are set and the dialog is closed, a mappings object is created whose keys are the columns as they appear in the metadata and whose values are the file columns as they appear in the user’s file. Once saved, the object is persisted to the “mappings” widget property, and then, by extension, to the settings attribute stored on the story. This object is used later in the request body when creating the import job, and looks like this:
"Mapping": {
"ItemNumber": "ITEM NUMBERS",
"Price": "ITEM PRICE"
}
DefaultValuesSelector.jsx
Default Values are values used when no value is present in a row. They are useful when the dataset has repeating values for a column, a column is not included in the user’s file, or some rows in the file do not have a value set. The values are set in the ‘DefaultValues’ dialog, and then sent to the Data Import API on creation of the job. Like the mappings component, we render a list of Label and Input pairs for each column found in the metadata object we fetched from the Data Import API.
Figure 15: Default Values Dialog
Once the fields are set, the properties are saved in the state. When the story is saved, the default values are persisted to the widget properties as defaultValues and later used in job creation, like so:
"DefaultValues" : {
"Version": "public.Actual",
"Date": "202303"
}
JobSettingsSelector.jsx
Job Settings are parameters we define on job creation to affect the behaviour of an import. We expose these settings using controls for the import method, date formats, pivot options, and whether to execute with failed rows.
Figure 16: Job Settings Panel
More details about these settings can be found above in the Data Import API Overview.
Based on the selection from the client, we construct the job settings object. These properties are included in the POST body, along with the Mappings and Default Values, sent to the /models/{modelID}/{importType} endpoint.
{
  "pivotOptions": {
    "pivotKeyName": "ItemNumber",
    "pivotValueName": "Price",
    "pivotColumnStart": "3"
  },
  "dateFormats": {
    "Date": "YYYYWW"
  },
  "executeWithFailedRows": true,
  "importMethod": "Append"
}
The jobs timeline offers a chronological history of previously created import jobs. We make a call to the /jobs endpoint, which returns a list of all import job records. We sort them based on each object’s lastUpdatedTime property, then filter the list by the currently selected model ID, giving us a timeline of the previously executed import jobs for a specific model.
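A compact sketch of that ordering and filtering; lastUpdatedTime is named in the post, while the modelID field on the job record is an assumption.

// Sketch: newest first, limited to the currently selected model.
const timeline = jobs
  .slice()
  .sort((a, b) => new Date(b.lastUpdatedTime) - new Date(a.lastUpdatedTime))
  .filter((job) => job.modelID === selectedModelId);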
Within each panel of the timeline, we expose two buttons to the user, each of which copies job information to the clipboard.
Both events use the Clipboard API, calling its writeText() function to set the content on the user’s clipboard.
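For example, copying a job’s ID might look like this (the button and field names are illustrative):

// Sketch: write the job ID to the user's clipboard on click.
copyJobIdButton.addEventListener("click", () => {
  navigator.clipboard.writeText(job.jobID);
});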
Updating all of these properties updates the state of the Builder Panel Widget, and once we save, these values are persisted to the story.
The Story widget encapsulates the end-user flow: the business user who wishes to import data based on the settings configured in the builder panel. The Story widget acts as a button which triggers an event to create a dialog supplying the interface for file upload. The React app is rendered by both the Story panel and the Builder panel. As mentioned before, the difference is the property ‘mode,’ which is set to ‘STORY’ on the story panel and ‘BUILDER’ on the builder panel.
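A minimal sketch of how the two web components could render the same React bundle with different modes; the custom element class names are illustrative, while the tags come from the JSON definition shown earlier.

import { createRoot } from "react-dom/client";
import App from "./App";

// Both web components render the same React app; only the mode prop differs.
class StoryWidget extends HTMLElement {
  connectedCallback() {
    createRoot(this).render(<App mode="STORY" />);
  }
}

class BuilderWidget extends HTMLElement {
  connectedCallback() {
    createRoot(this).render(<App mode="BUILDER" />);
  }
}

customElements.define("com-sap-file-upload-widget", StoryWidget);
customElements.define("com-sap-file-upload-widget-builder", BuilderWidget);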
Once we add the widget to the story, the user is presented with the ‘End User Upload’ component. This component serves two purposes: to open the ‘Import Dialog,’ and to indicate the status of the import job, which takes the place of the text once the import has completed. In this text, the user is also given a link allowing them to download a CSV file of all their failed rows.
Figure 17: End User Upload Component
Once the user clicks the button, the ‘EndUserEntryDialog’ is loaded. This dialog allows the user to upload their data and indicates how many rows have been counted. If the uploaded file is an XLSX file, a dropdown is rendered, allowing the user to choose which sheet they want to import from. Using the two parsing libraries described in the File Parsing section below, we are able to find out how many rows are in the CSV data.
Figure 18: End User Upload Component with an XLSX file uploaded to it
It also performs some simple client-side validation, to ensure that the uploaded data is consistent with what the Data Import API expects. If the data is not in the expected format, for instance if columns are missing or there are too many columns, we inform the user with simple error messages, as seen in the example below. Here, I replaced the header of the ‘Account’ column with ‘InvalidColumnName’, so we get two errors. These errors are React components, which render a ‘Negative’ Message Strip with the message we want to display.
Figure 19: End User Upload Component after a successful import of 100,000 Rows
It is important to emphasise that the validation performed here covers only the structure of the data, not the data itself; validation of the actual data is handled by the Data Import API. We just want to indicate to users that their data might not be in the format we expect.
Once the file is parsed, the user clicks Run. This sends the data the user has uploaded to the Data Import API, where a job is created (with any mappings, default values, or job settings configured in the builder panel). The CSV data is then posted to the job in chunks of 100,000 rows, and the data is validated and persisted to the model.
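A sketch of the chunked posting step; the chunk size comes from the post, while the base path and request/response shapes are assumptions.

const CHUNK_SIZE = 100000;

// Sketch: post parsed rows to the job in chunks, collecting rejected rows.
async function postRows(jobID, rows) {
  const failedRows = [];
  for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
    const res = await fetch(`/api/v1/dataimport/jobs/${jobID}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ Data: rows.slice(i, i + CHUNK_SIZE) }),
    });
    const result = await res.json();
    // Rows rejected at post time are reported in the response.
    if (result.failedRows) failedRows.push(...result.failedRows);
  }
  return failedRows;
}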
When this job completes, there is an indication of how many failed rows there are. These are rows which the Data Import API deems invalid, and which therefore will not be persisted to the model. Rows can fail at two stages: during the posting of the rows to the job, or during validation.
Rows can fail during posting to the job if some data does not comply with the structure of the table. For instance, if an integer value exceeds the maximum possible value for an integer, or a string exceeds the maximum length for a column, the Data Import API will not post that row to the job, and will instead flag the row as invalid in the response to the request. If there are failed rows in this response, we add them to a ‘failedRows’ array in the React app.
Once all the data is posted, validation against the model occurs. Here, we validate that the data follows the model preferences, such as date ranges, as well as ensuring that the user posting the data to the specific model has permission to do so. Rows that fail this stage of validation can be retrieved by making a request to ‘/jobs/{jobID}/invalidRows’. A sample of up to 2,000 failed rows can be stored, including the reason each row was rejected. These rows are then also added to the ‘failedRows’ array.
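Fetching those rejected rows might look like this (base path and response shape assumed):

// Sketch: retrieve up to 2,000 rejected rows with their rejection reasons.
async function collectInvalidRows(jobID, failedRows) {
  const res = await fetch(`/api/v1/dataimport/jobs/${jobID}/invalidRows`);
  failedRows.push(...(await res.json()));
}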
When the job completes, if the ‘failedRows’ array contains any failed rows from either stage, the End User Upload component informs the user that the import completed with X failed rows, and they are able to download a CSV file containing these failures. This file is in the same format as the user’s model, with an additional column containing the rejection reason.
Figure 20: End User Upload indicating that there are failed rows in a job that has completed.
If there are no failed rows, then the user will receive an indication that their data has been successfully imported:
Figure 21: End User Upload indicating the data has been uploaded successfully
File Parsing
There are separate implementations to handle the two file types that we support, ‘.csv’ and ‘.xlsx’; both flows are captured in FileHandler.js. We use two separate libraries to parse the data from the file into the data that is sent to the Data Import API.
For CSV data, we use an npm package called Papa Parse. It takes in the file and a configuration object containing a completion callback, which lets us specify what should be done with the data once parsing is complete, such as executing some small client-side validation.
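A minimal sketch of the CSV flow; the validation hook inside the callback is illustrative.

import Papa from "papaparse";

// Sketch: parse a CSV file and hand the rows to a completion callback.
function parseCsv(file, onParsed) {
  Papa.parse(file, {
    header: true,         // treat the first row as column headers
    skipEmptyLines: true,
    complete: (results) => {
      // e.g. run client-side structure checks, count rows
      onParsed(results.data, results.meta.fields);
    },
  });
}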
For XLSX files, we use the SheetJS (‘xlsx’) library. This does a very similar thing, converting the Excel data into CSV format, which is usable by the Data Import API. We then perform the same validation to ensure that the data matches the structure we expect. A notable difference is that XLSX files can contain multiple sheets. As such, a dropdown is shown to users holding all the sheet names, and selecting a different sheet name will take the data from that sheet.
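A minimal sketch of the XLSX flow, assuming the ‘xlsx’ package:

import * as XLSX from "xlsx";

// Sketch: convert a chosen sheet of an XLSX workbook to CSV.
async function xlsxToCsv(file, sheetName) {
  const workbook = XLSX.read(await file.arrayBuffer(), { type: "array" });
  const sheet = workbook.Sheets[sheetName ?? workbook.SheetNames[0]];
  return XLSX.utils.sheet_to_csv(sheet);
}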
Both flows complete with a callback which adds the file data to the state. The contents of the file are then used in the POST body when calling /jobs/{jobID} to write data to the job. In the final step of the import, the widget calls /jobs/{jobID}/run using the ID of the job it created and executes the import. The widget polls the status of the job using /jobs/{jobID}/status, and once the job has status COMPLETE, the number of records imported is exposed via a label on the Story widget.
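A sketch of the run-and-poll step; the COMPLETE status comes from the post, while the polling interval, base path, and status field name are assumptions.

// Sketch: execute the job, then poll its status until it completes.
async function runAndAwaitJob(jobID) {
  await fetch(`/api/v1/dataimport/jobs/${jobID}/run`, { method: "POST" });
  for (;;) {
    const res = await fetch(`/api/v1/dataimport/jobs/${jobID}/status`);
    const { status } = await res.json(); // field name assumed
    if (status === "COMPLETE") return;
    await new Promise((resolve) => setTimeout(resolve, 2000)); // poll every 2s
  }
}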
Web Workers
When handling large volumes of data, the libraries that we use can be quite resource intensive. If not handled correctly, this can put a lot of load onto the browser window, leading to poor performance or even crashes. To prevent this, we’ve adopted Web Workers in the file handling functions.
The idea behind Web Workers is to offload heavy computation from the current browser window, which yields two benefits. Firstly, the speed at which the data can be parsed increases significantly, as the parsing libraries are not competing for resources with the rest of SAC. Secondly, the current SAC window stays alert and responsive, as the parsing runs on a separate thread. Once the computation is done, the data is sent back to the current window, ready for consumption.
The way we’ve implemented this varies from library to library. In Papa Parse, we can pass ‘worker: true’ in the parser options, and the library handles the rest for us. For XLSX, we have to create our own Web Worker. To do this, we create a new Worker object and write the code that we want to execute in a ‘blob.’ This code can then take in the file, handle it, and give us back the data we want to use.
Because this Web Worker is completely independent from the rest of the SAC window, and therefore also separate from the rest of the React application, we need a way to import the XLSX library into the worker. To do this, we’ve hosted the minified version of the library on the Data Import API, from where it can be loaded into the Web Worker.
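A sketch of that setup is shown below; the hosted library URL is illustrative, and the worker protocol (a transferred ArrayBuffer in, a CSV string out) is an assumption.

// Sketch: build a dedicated Web Worker from a Blob and load SheetJS into it
// via importScripts. The library URL is illustrative.
const workerCode = `
  importScripts("https://example.com/xlsx.full.min.js");
  self.onmessage = (e) => {
    const workbook = XLSX.read(e.data, { type: "array" });
    const sheet = workbook.Sheets[workbook.SheetNames[0]];
    self.postMessage(XLSX.utils.sheet_to_csv(sheet));
  };
`;

async function parseXlsxInWorker(file) {
  const worker = new Worker(
    URL.createObjectURL(new Blob([workerCode], { type: "application/javascript" }))
  );
  const buffer = await file.arrayBuffer();
  return new Promise((resolve) => {
    worker.onmessage = (e) => {
      resolve(e.data);
      worker.terminate();
    };
    worker.postMessage(buffer, [buffer]); // transfer ownership, no copy
  });
}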
Figure 22: Parsing Excel files using SheetJS and a Web Worker
Here, we are going to explain the process of extending the File Upload Widget to support Master Data. This involves changing the code in certain areas and expanding it to consume different aspects of the Data Import API. We will go through the process of setting up the development environment, making the code changes, and building the widget to produce a final version.
When developing the File Upload Widget, we use a React development server. This lets us make changes to the widget and have them applied (almost) instantly, rather than having to upload the widget each time we make a change.
SAP Analytics Cloud supports integration with these development servers. The JSON file we upload when creating the custom widget specifies where to look for the resource file. Instead of uploading the resource file to SAC, we define the URL of the development server, so that once the SAC story is loaded, it loads the resource file from the development server.
Now, a local development instance of the File Upload Widget will be hosted on a local development server. Given that this widget is built on the Vite template, it will be hosted on localhost, port 5173.
To consume this local instance of the widget within SAC, upload the ‘LocalHost’ version of the JSON definition file described in the development configuration section above, whose web component URLs point to http://localhost:5173, and add the resulting widget to your story.
Now, you will be able to see any changes that you’re making to the Custom Widget appear in the context of the SAC story.
From here, we want to extend the widget to be able to import MasterData. Among the code changes required, the version dropdown must be hidden whenever the MasterData import type is selected:
shouldDisplayVersionDropdown={shouldDisplayVersionDropdown && defaultVersion === "" && props.importType !== "masterData"}
This means that the value can never be true if the selected import type is ‘masterData’.