Introduction:
It’s crucial to recognize that an integration, or the middleware itself, may experience intermittent periods of inactivity stemming from either the source or target system. The integration team is responsible for overseeing all integrations during these downtime windows, with a focus on upholding data integrity and aiming for minimal or zero system failures.
What if I told you that we can make our integration smart enough to handle system downtime autonomously with just 3 components? In this blog, I will demonstrate an ingenious solution for managing system downtimes by harnessing the power of global variables.
Scenario:
Let’s consider a SuccessFactors to third-party integration recurring every 30 minutes, in which a last-run variable is saved on the tenant so that each recurring run can pick up records from the previous run date. Once a run is successful, the last successful run date and time is saved, and the next run starts from that last successful run time.
Let’s consider a scenario where there are 25 integrations on the tenant, and when the SuccessFactors system experiences scheduled downtime, the typical approach is to undeploy the iflows to prevent recurring failures every 30 minutes.
However, my idea is to maintain uninterrupted operations during these downtimes without any manual intervention. I aim to achieve seamless integrations by implementing a solution based on global variables.
Solution:
The concept involves creating an iflow responsible for storing the start and end times of system downtime in global variables. These global variables will then be utilized within the actual iflows to assess whether there is any ongoing system downtime. If downtime is detected, the iflow will be paused, ensuring that processing resumes according to the defined design only when the system is operational. This approach ensures that the integrations can automatically adapt to system downtime without the need for manual intervention.
IFlow Design – Update System Downtimes
This is a simple one-component iflow, where we configure the start time and end time of the systems as global variables. All the values for this iflow are configurable; we can change them whenever needed using the “Configure” button. The iflow operates in a “Run Once” mode, enabling us to update the values and redeploy as needed without any hassle.
Iflow 1
Write Variables – Global Variable
The first step of the mechanism is complete. We can add as many variables as needed in this iflow; just create a new variable and mark it as a configurable parameter, so that we can enter the timings as applicable.
Once you deploy this iflow, the variables would be seen in the variables section in the monitoring screen as below.
Global Variable 1
Global Variable 2
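For illustration, the deployed variables could look like this (the timestamps below are example values, and the variable names are just a convention; they only need to match what the main iflows read):

```
Name           Value                  Visibility
SF_Start_Time  2023-11-04T22:00:00Z   Global
SF_End_Time    2023-11-05T02:00:00Z   Global
```

Note the format: the timestamps should be in UTC using the yyyy-MM-dd'T'HH:mm:ss'Z' pattern, since that is the pattern the Groovy script later parses.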
Now let’s use these variables in the main iflows.
Main Iflow:
In the main iflow, we just have to add three components to accommodate this mechanism. The snip below shows these three components inside the black box; the components outside the black box are part of the existing design.
1. Content Modifier
2. Groovy Script
3. Router
Main Iflow 1
Content Modifier:
In the first step use a content modifier and call those global variables as shown in the snip below:
Main IFlow – CM
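As a sketch, the exchange properties in this content modifier would be configured roughly as follows (the names mirror the global variables created earlier; adjust them to your own variable names):

```
Action  Name           Source Type      Source Value
Create  SF_Start_Time  Global Variable  SF_Start_Time
Create  SF_End_Time    Global Variable  SF_End_Time
```

If a variable has not been deployed yet, the resulting property is empty, which the script treats as “no downtime configured” and lets the run proceed.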
Groovy Script:
In this step, we’ll access the global variables within a script to determine whether the current runtime of the iflow falls between the configured start time and end time. If it does, a new property named “systemAvailable” will be set to “no”; otherwise, it will be set to “yes.” This property will indicate the system’s availability during the iflow’s execution.
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def properties = message.getProperties()
    // Global variables arrive as exchange properties, set by the content modifier step.
    // Default to "" so a missing variable is treated as "no downtime configured".
    def start = properties.get("SF_Start_Time") ?: ""
    def end = properties.get("SF_End_Time") ?: ""
    // Always set the flag so the router has a value on both paths
    message.setProperty("systemAvailable", isSystemAvailable(start, end))
    return message
}

def String isSystemAvailable(def start, def end) {
    def available = "yes"
    if (!(start.equals("") || end.equals(""))) {
        def pattern = "yyyy-MM-dd'T'HH:mm:ss'Z'"
        // Current time formatted in UTC (24-hour clock) so it compares cleanly with the window
        def temp = new Date().format(pattern, TimeZone.getTimeZone("UTC"))
        def timeNow = Date.parse(pattern, temp)
        def startDate = Date.parse(pattern, start)
        def endDate = Date.parse(pattern, end)
        // Inside the downtime window (boundaries inclusive) -> system unavailable
        if (startDate.time <= timeNow.time && timeNow.time <= endDate.time) {
            available = "no"
        }
    }
    return available
}
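If you want to sanity-check the window logic outside the tenant, the same inclusive-boundary check can be reproduced in plain Java (a standalone sketch with a hypothetical class name and example timestamps, not CPI code):

```java
import java.time.Instant;

public class DowntimeCheck {
    // Mirrors the Groovy script: "no" when now falls inside [start, end], else "yes"
    static String isSystemAvailable(String start, String end, Instant now) {
        if (start.isEmpty() || end.isEmpty()) {
            return "yes"; // no downtime window configured
        }
        Instant s = Instant.parse(start);
        Instant e = Instant.parse(end);
        // Boundaries are inclusive, matching the <= comparisons in the script
        boolean inWindow = !now.isBefore(s) && !now.isAfter(e);
        return inWindow ? "no" : "yes";
    }

    public static void main(String[] args) {
        Instant during = Instant.parse("2023-11-04T23:00:00Z");
        Instant after  = Instant.parse("2023-11-05T03:00:00Z");
        System.out.println(isSystemAvailable("2023-11-04T22:00:00Z", "2023-11-05T02:00:00Z", during)); // no
        System.out.println(isSystemAvailable("2023-11-04T22:00:00Z", "2023-11-05T02:00:00Z", after));  // yes
        System.out.println(isSystemAvailable("", "", during)); // yes
    }
}
```

This makes it easy to verify edge cases, such as a run firing exactly at the window boundary, before touching the tenant.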
Router:
This critical step acts as the decision point and routes the message based on the value of the “systemAvailable” property. If the value is “no,” indicating that the SuccessFactors system is down, it will terminate the process, avoiding any message failures. If the value is “yes,” signifying that the SF system is operational, the processing will continue as usual without interruption.
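A minimal sketch of the route conditions (Camel Simple expression syntax, as used in the Cloud Integration router; route names here are illustrative):

```
Route "System Down"  (to End event):  ${property.systemAvailable} = 'no'
Route "System Up"    (default route): continue with the existing design
```

Keeping the “continue” branch as the default route means that even if the property were missing or unset, normal processing would never be blocked.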
Main Iflow – Router
Demo:
Let’s see how it works at runtime now. Consider that we have a SuccessFactors downtime and we have deployed the global variables. All 25 integrations are scheduled to run every 30 minutes through the downtime window.
Demo
Indeed, with this system in place, you can confidently sit back and relax during the scheduled runs of the 25 main iflows, even when SuccessFactors experiences downtime. This setup ensures there is no data loss. Since the last successful run is recorded at the end of the iflow and the variable isn’t updated during downtime, the integration will continue to poll and query SuccessFactors data from the last successful run timestamp. This method effectively maintains data integrity and prevents data loss during system downtime, contributing to a robust and resilient integration process.
This mechanism also eliminates the need to undeploy or redeploy the iflows, as the iflows themselves are designed to intelligently monitor both source and target downtimes. This level of automation ensures a streamlined, hands-off integration process that minimizes the potential for manual intervention and system failures. It’s a robust and self-sustaining approach that significantly enhances the reliability and efficiency of your integration workflows.
This mechanism is not limited to SuccessFactors-based integrations; it can be used in any inbound-to-CPI scenario.
I hope this is helpful.
Cheers,
Punith Oswal