In my previous blog post, I presented the general concept and a demonstration version of such an assistant. This one is about the frontend part of the development – the shell plugin and chat design. For me, this project was a learning experience, and therefore, I set a few requirements for the chat application:
The second requirement comes with a drawback, but I will get to this in a moment. The source code can be found in this repo. This is an MTA app built in the Business Application Studio.
Launchpad homepage
At first glance, there are two possible launchpad elements we can use to add an assistant: either a separate application with its own tile or using the launchpad’s shell and docking it to it. My thinking was that such an assistant should always be available to a user and ideally provide help regarding the current application (be “aware” of the currently opened application). Therefore, a separate application didn’t quite work for me. Also, shell plugin development was something new for me that I wanted to try.
How do you design a chat window when there are no chat-like elements available? I tried to find some help in the SAP community, but the projects I found (such as Mike's blog and Jens' blog) were standalone apps, something I rejected from the start. On the other hand, there are plenty of examples all over the internet, mostly chats on shop sites or Facebook's Messenger. Since the source I used to develop a plugin (Soeren's great entry) already covered adding a button to a shell, I thought that combining this with the popular approach of docked assistants might be the way to go.
I spent some time scrolling through examples in the UI5 Demo Kit. The best candidates were either a dialog or a popover. A dialog is nice, but a modal one makes all apps in the launchpad unavailable until the popup is closed.
Dialog from UI5 Demo Kit
It was something I could live with, but then I found a popover with nested containers. It has similar problems as a dialog (to some extent), but it gives a much better chat-like feeling when docked to a shell’s button. Even the example from the Demo Kit looks much like a chat. Just take a look at it:
Popover example from UI5 Demo Kit
A popover is a nice container, but how do you build a list of messages inside it? Well… Now I think that sap.m.Feed would be the best choice here. However, I overlooked that control at the time (probably something to change in the next iteration) 🙂 I found sap.m.NotificationListItem first, which actually looks a lot like a messaging app:
NotificationListItem at UI5 Demo Kit
Gathering all elements together, it should look something like this:
Balsamiq mock
Because it was a learning process for me, I moved all complex logic outside the Fiori app. I decided to keep it simple and focus mostly on the UI experience. Therefore, in terms of communication, this is a simple application built on a single JSON data model (there are others used, but for some minor details). You can find a detailed sequence diagram in my first blog entry.
Briefly about the API used here (more about it in the next blog, explaining middleware development). For GPT chat integration, OpenAI provides a completion API (documentation). It requires a model and a messages array. This array includes a system message that will define our chat behavior.
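For illustration, such a request body can be assembled in plain JavaScript. Note that the system prompt text below is a made-up placeholder, not the one used in the project:

```javascript
// Sketch: assemble a chat-completion request body.
// The system message defines the assistant's behavior; the history and the
// new user message follow it in order.
function buildCompletionPayload(aHistory, sUserText) {
    return {
        model: "gpt-4-0613",
        messages: [
            // placeholder system prompt – the real one is defined later in CPI
            { role: "system", content: "You are a technical assistant for our Fiori apps." }
        ].concat(aHistory, [{ role: "user", content: sUserText }])
    };
}

var oPayload = buildCompletionPayload([], "Hi");
// oPayload.messages now holds [system, user], ready to POST
```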
The data model is based on the input payload of the method. Please note that:
In order to simplify UI development, I moved data processing (mapping, context feeding, merging chat history with model response) to the CI platform. So, in the end, the application operates only on a model that looks like this:
{
  "model": "gpt-4-0613",
  "messages": [
    {
      "role": "assistant",
      "content": "Hello I'm fiori gpt-based, technical assistant. I can provide basic information about our fiori apps, team structure. How can I help you?"
    },
    {
      "role": "user",
      "content": "Hi"
    },
    {
      "role": "assistant",
      "content": "Hello, how can I help you?"
    }
  ]
}
This can be easily bound to list- or table-like controls in SAPUI5 applications.
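The history merge that CPI performs (mentioned above) essentially appends the model's reply to the messages array, so the UI always gets the full conversation back. A plain-JavaScript sketch of the idea (illustrative only; the real mapping lives in the integration flow):

```javascript
// Illustrative sketch of the CPI-side merge: take the request that was sent,
// append the assistant's reply (first choice of the completion response),
// and return the combined conversation to the UI.
function mergeChatHistory(oRequestBody, oCompletionResponse) {
    var oMerged = {
        model: oRequestBody.model,
        messages: oRequestBody.messages.slice() // copy, don't mutate the input
    };
    oMerged.messages.push(oCompletionResponse.choices[0].message);
    return oMerged;
}

var oMerged = mergeChatHistory(
    { model: "gpt-4-0613", messages: [{ role: "user", content: "Hi" }] },
    { choices: [{ message: { role: "assistant", content: "Hello, how can I help you?" } }] }
);
// oMerged.messages holds both the user's message and the assistant's reply
```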
The plugin, as mentioned, will communicate with the GPT model via the CI interface. In order to do this, a few things need to be done. First, a route in xs-app.json:
{
  "source": "^/cpiURL(.*)$",
  "target": "/$1",
  "destination": "cpi_gpt",
  "authenticationType": "none"
},
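Since the app later POSTs to its module path plus /cpiURL with no further suffix, the route effectively forwards the call to the root of the cpi_gpt destination. The rewrite can be reproduced with the same regex (a quick sanity check, not approuter code):

```javascript
// Mimics the approuter rewrite rule from xs-app.json:
// source "^/cpiURL(.*)$" -> target "/$1" (prefix stripped, rest kept)
function rewritePath(sPath) {
    return sPath.replace(/^\/cpiURL(.*)$/, "/$1");
}

var sTarget = rewritePath("/cpiURL");
// "/" – the destination's root URL receives the request
```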
MTA:
- name: cpi_gpt
  type: org.cloudfoundry.managed-service
  parameters:
    service: destination
    service-plan: lite
And in manifest.json, the data source:
{
  "_version": "1.59.0",
  "sap.app": {
    "id": "c",
    "type": "component",
    "i18n": "i18n/i18n.properties",
    "applicationVersion": {
      "version": "0.0.1"
    },
    "title": "{{appTitle}}",
    "description": "{{appDescription}}",
    "resources": "resources.json",
    "sourceTemplate": {...},
    "crossNavigation": {...},
    "dataSources": {
      "cpiOpenAIAPI": {
        "uri": "/cpiURL",
        "type": "JSON"
      }
    }
    ...
  }
}
I based the plugin on Soeren's blog mentioned earlier and on Arianna's tutorial. You should definitely check them out if you want to learn more. Here, I'll focus only on my little project's specifics.
I wrapped the entire popup, with the chat inside it, in a fragment—a clean and reusable option. The fragment design follows the mock I created earlier. The full version can be found in the GitHub repo. Generally, you can see the pattern from the previous mock.
<core:FragmentDefinition xmlns="sap.m" xmlns:f="sap.f" xmlns:core="sap.ui.core" xmlns:l="sap.ui.layout">
    <!--Green, main container/popover-->
    <Popover>
        <content>
            <VBox>
                <!--Blue, main chat list-->
                <NotificationList>
                    <!--Black, items as messages, with some conditional formatting-->
                    <NotificationListItem title="{chat>content}"
                        authorName="{= ${chat>role} === 'assistant' ? 'Chat GPT' : ${userModel>/user/FirstName} }"
                        authorPicture="{= ${chat>role} === 'assistant' ? ${resourceModel>/img} : '' }"
                        authorInitials="{= ${chat>role} === 'assistant' ? '' : ${userModel>/user/Initials} }"/>
                </NotificationList>
            </VBox>
        </content>
        <!--Red, text input-->
        <footer>
            <l:VerticalLayout>
                <TextArea/>
                <!--Toolbar with some buttons-->
                <Toolbar>
                    <ToolbarSpacer id="ts"/>
                    <Button id="closeButton" text="Close" press=".handleActionClose"/>
                    <Button id="sendButton" text="Send" press=".handleActionSend"/>
                </Toolbar>
            </l:VerticalLayout>
        </footer>
    </Popover>
</core:FragmentDefinition>
As you can see, there are three models used: the chat model (the conversation itself), a user model, and a resource model. The user model is initialized like this:
initUserModel: function() {
    //UserInfo shell service: retrieves the currently logged-in user's data (like name),
    //used for display purposes in the chat window
    var userInfo = sap.ushell.Container.getService("UserInfo");
    var oData = {
        user: {
            FirstName: userInfo.getFirstName(),
            LastName: userInfo.getLastName(),
            FullName: userInfo.getFullName(),
            Initials: userInfo.getFirstName().charAt(0) + userInfo.getLastName().charAt(0)
        }
    };
    return new JSONModel(oData);
}
The resource model simply holds the path to the bot's avatar:
var oResoucesModel = new JSONModel({img: sap.ui.require.toUrl("shodan") + "/img/bot.jpg"});
Models are used in expression binding on the item level for authorName (either the user’s name from userModel or ‘Chat GPT’), authorPicture (either SHODAN’s picture from resourceModel or empty – if empty, the app will take initials from authorInitials), and authorInitials (taken from userModel or empty for the assistant). And basically, that’s it when it comes to the view. Pretty basic 🙂
Because it is a plugin, all the logic must go into Component.js. I followed the most common approach and initialized the enhancement in the init function, using the Container's renderer. The first few lines handle model initialization:
init: function () {
    // call the base component's init function
    UIComponent.prototype.init.apply(this, arguments);
    //get module path, used later on
    this.modulePath = sap.ui.require.toUrl("shodan");
    //models
    var oModel = new JSONModel(this.modulePath + "/cpiURL");
    var oModelUser = this.initUserModel();
    var oResoucesModel = new JSONModel({img: this.modulePath + "/img/bot.jpg"});
The chat button (which opens the popover) is created after the renderer's promise resolves (method addHeaderItem), and the fragment creation is coded in the press handler:
//gets renderer
var rendererPromise = this._getRenderer();
//create chat button
rendererPromise.then(function(oRenderer) {
    oRenderer.addHeaderItem({
        icon: "sap-icon://discussion",
        tooltip: "Open chat bot",
        press: function() {
Check if the popover is already initialized and open. If it is, close it (so the button toggles the chat open and closed):
if (this.oPopover) {
    if (this.oPopover.isActive()) {
        this.oPopover.close();
        return;
    }
}
The rest retrieves the fragment, binds the models, and opens the popover:
//get popover fragment (chat container in this case)
this.oPopover = sap.ui.xmlfragment("shodan.fragment.Popup");
this.oPopover.attachAfterClose(function() {
    this.destroy();
});
//set models: chat itself and user data (for displaying details)
this.oPopover.setModel(oModel, "chat");
this.oPopover.setModel(oModelUser, "userModel");
this.oPopover.setModel(oResoucesModel, "resourceModel");
//open popover
this.oPopover.openBy(this);
To get the currently logged-in user's information, the UserInfo shell service is used (see the initUserModel function shown earlier). Its data feeds a new JSON model, which is bound to the chat fragment.
FullName is used as the user name on each notification list item, and the initials serve as the user's avatar.
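The conditional logic from the fragment's expression bindings can be written out in plain JavaScript to make it easier to follow (a mirror for illustration, not code from the plugin):

```javascript
// Mirrors the fragment's expression bindings: assistant messages get the bot
// name and picture; user messages get the name and initials (the empty
// picture makes the control fall back to the initials).
function resolveAuthor(oMessage, oUser, sBotImg) {
    var bBot = oMessage.role === "assistant";
    return {
        name: bBot ? "Chat GPT" : oUser.FirstName,
        picture: bBot ? sBotImg : "",
        initials: bBot ? "" : oUser.Initials
    };
}

var oAuthor = resolveAuthor(
    { role: "user", content: "Hi" },
    { FirstName: "John", Initials: "JD" }, // sample user data
    "/img/bot.jpg"
);
// oAuthor -> { name: "John", picture: "", initials: "JD" }
```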
The handleActionSend method is responsible for communication with the GPT model (via CI). jQuery is used for the call. The code is commented, but in short, here is what's going on:
handleActionSend: function(oEvent) {
    //get controls to be read
    var oTextArea = sap.ui.getCore().byId("TextArea1");
    var oNotificationList = sap.ui.getCore().byId("TNotificationList1");
    //get textArea value and clear it
    var textValue = oTextArea.getValue();
    oTextArea.setValue("");
    //get model
    var oModel = this.oPopover.getModel("chat");
    //get data and add new message
    var oCurrentData = oModel.getData();
    oCurrentData.messages.push({
        role: "user",
        content: textValue
    });
    oModel.setData(oCurrentData);
    //set busy
    oNotificationList.setBusy(true);
    //call the external chat API
    jQuery.ajax({
        url: this.modulePath + "/cpiURL",
        type: "POST",
        data: JSON.stringify(oCurrentData),
        async: true,
        dataType: "json",
        timeout: 60000, //jQuery expects "timeout" in milliseconds
        contentType: "application/json",
        success: function (data, textStatus, jqXHR) {
            oModel.setData(data);
            //this scrolls the chat down to the last message.
            //-cont is the area where content is generated, so it scrolls only this container
            $("#myPopover-cont").animate({ scrollTop: 100000000000 }, 1000);
            //disable busy
            oNotificationList.setBusy(false);
        },
        error: function (xhr, ajaxOptions, thrownError) {
            //disable busy so the list doesn't spin forever on failure
            oNotificationList.setBusy(false);
            //beyond that, in case of an error the user is on their own
            console.log("-error->");
            console.log(xhr);
            console.log(thrownError);
            console.log("<-error-");
        }
    });
}
I lack proper error handling at this point (only some logs are put into the console), but it is good enough for a learning project 🙂
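If error handling is ever improved, even a small step up from console logging would help: clear the busy state and show the user something readable. A sketch (the message format and the sap.m.MessageToast usage here are my assumptions, not code from the repo):

```javascript
// Pure helper: turn an AJAX failure into a short, user-readable message.
function buildErrorMessage(iStatus, sThrownError) {
    var sReason = sThrownError || "unknown error";
    return "Chat request failed (" + (iStatus || "no status") + "): " + sReason;
}

// In the jQuery.ajax error callback one could then do something like:
// error: function (xhr, ajaxOptions, thrownError) {
//     oNotificationList.setBusy(false); // don't leave the list spinning
//     sap.m.MessageToast.show(buildErrorMessage(xhr.status, thrownError));
// }

var sMsg = buildErrorMessage(504, "Gateway Timeout");
// "Chat request failed (504): Gateway Timeout"
```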
At this point, if you don’t have the API yet, you should still be able to deploy everything and check if this is working. The chat window will be empty, but that is fine. Also, you can prepare a quick mock interface returning just a welcome message like:
Welcome message
For more details about deployment and shell plugin development, I, again, encourage you to visit Soeren’s blog. I’m using SAP Cloud Portal Service in CF. After building and deploying the app, it should be available in Content Manager. If not, refresh your content channel:
This app needs to be added to My Content (button on the right side of the panel). Then a role is needed (TestPlugin role in my case), which will be assigned to the site, plugin, and user testing the chat:
And in the subaccount's security settings, where the role needs to be assigned to a user:
And that’s it. Plugin is now available and can be accessed. If there’s no model yet, it will be empty chat, loading forever when interacted with:
No API behavior
Now, with a UI that allows interaction with a model, we can focus on OpenAI’s API and prompt engineering. This will be done in SAP Cloud Integration and eventually consumed in the chat application via the configured destination (mentioned earlier, cpi_gpt). I’ll focus on that in my next blog entry.