How to Synchronize Data Between Distributed Servers Across Regions

This document explains how to set up one-way data synchronization between two NeevCloud servers located in different regions (Central India and Mumbai). The synchronization will ensure that new and updated files are transferred, while deletions on the source server do not remove files on the destination server.

We will use rclone, a powerful command-line tool for managing and syncing files across different storage systems.

Prerequisites

Two NeevCloud servers:

  • Server A (Central India) → Source

  • Server B (Mumbai) → Destination

SSH access between the two servers.

SSH key-based authentication configured:

  • id_rsa (private key) present on Server A

  • id_rsa.pub added to /home/ubuntu/.ssh/authorized_keys (or /root/.ssh/authorized_keys) on Server B

rclone installed on Server A (Central India). The installation command is shown in Step 2 below.

Step 1: Verify SSH Key Access

From Server A, confirm that you can log in to Server B without a password.

If you do not already have an SSH key pair on Server A, generate one with ssh-keygen:

ssh-keygen -t rsa -b 4096

You can now confirm that the key pair has been created.
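As a quick illustration, the snippet below generates a throwaway key pair in a temporary directory and lists the resulting files. The temp-directory path is illustrative; on Server A the keys live under ~/.ssh by default, so `ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub` gives the same confirmation.

```shell
# Generate a throwaway 4096-bit RSA keypair in a temp directory and list it.
# On Server A you would instead check ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.
d=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -f "$d/id_rsa" -N "" -q
ls "$d"
```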

Copy your public key to Server B:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@server_ip

The command will prompt for Server B's password. Enter it once; after the key is installed, you can optionally disable password authentication on Server B.
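To disable password authentication, the usual approach (assuming the stock OpenSSH configuration at /etc/ssh/sshd_config on Server B) is to set the following directives and then restart sshd, e.g. with `sudo systemctl restart ssh`:

```
# /etc/ssh/sshd_config on Server B
PasswordAuthentication no
PubkeyAuthentication yes
```

Verify key-based login works before restarting sshd, or you can lock yourself out of the server.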

Test the connection

ssh user@server_ip

Step 2: Configure rclone

Install rclone on Server A (Central India):

curl https://rclone.org/install.sh | sudo bash

Run the configuration command on Server A:

rclone config

Follow the prompts:

New remote → n

Name → mumbai

Storage type → 13) SFTP (the number may vary depending on the rclone version used)

SSH host → <Mumbai_Server_IP>

SSH username → ubuntu (or root, depending on your setup)

SSH port → 22 (default)

SSH key file → /root/.ssh/id_rsa

key_use_agent → false

Password → n (leave blank)

After completing the prompts, rclone displays a summary of the new remote for you to confirm and save.
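For reference, the resulting remote definition in rclone's config file (typically ~/.config/rclone/rclone.conf, or /root/.config/rclone/rclone.conf when configured as root) should look roughly like this, using the values chosen above:

```
[mumbai]
type = sftp
host = <Mumbai_Server_IP>
user = ubuntu
port = 22
key_file = /root/.ssh/id_rsa
key_use_agent = false
```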

List the files in Server B’s home directory (use the remote name you configured, mumbai in this example):

rclone ls mumbai:/root

Step 3: Sync Files (Without Deletion)

To copy files from Server A → Server B while preserving existing files:

rclone copy /path/to/data mumbai:/root/ --progress

  • copy → copies new/updated files only and never deletes existing ones

  • --progress → shows real-time progress

Now verify that the files have arrived on the destination server in the Mumbai region.

Step 4: Automate with a systemd Timer

To run the copy every minute on a modern Linux system, the most robust method is a systemd service paired with a timer. This is easier to manage than a traditional cron job.

First, you need the full path to the rclone program and your test.txt file, as systemd services run in a clean environment without your shell's PATH settings.

Find the path to rclone:

which rclone

Create the Sync Script File

This file defines what command to run. We will create it at /usr/local/bin/rclone-sync.sh.

Run the following command. Ensure that you replace the placeholder paths with your actual paths.

vi /usr/local/bin/rclone-sync.sh

In the script below, adjust these two variables to match your environment:

  • SRC="/root/test.txt" → Source path

  • DST="testmumbai:/root/" → Destination path

#!/usr/bin/env bash
set -euo pipefail

LOG_DIR="/var/log/rclone"
LOCK_FILE="/var/lock/rclone-sync.lock"
SRC="/root/test.txt"        # Source: use the full path
DST="testmumbai:/root/"     # Destination (remote:path)

mkdir -p "$LOG_DIR"
ts=$(date +%F_%H-%M-%S)

# Pass variables as arguments to the bash -c shell
exec /usr/bin/flock -w 0 "$LOCK_FILE" bash -c '
  # Script arguments are assigned to variables for clarity
  log_dir="$1"
  timestamp="$2"
  source="$3"
  destination="$4"

  echo "[$(date -Is)] start" >> "${log_dir}/sync.log"

  # Note: "rclone sync" mirrors the source to the destination, so deletions propagate
  rclone sync "${source}" "${destination}" \
    --log-file "${log_dir}/sync-${timestamp}.log" \
    --log-level INFO
  rc=$?

  echo "[$(date -Is)] end rc=$rc" >> "${log_dir}/sync.log"
  exit $rc
' bash "$LOG_DIR" "$ts" "$SRC" "$DST"

Make it executable:

chmod +x /usr/local/bin/rclone-sync.sh
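The `flock -w 0` wrapper in the script ensures that overlapping timer runs do not start a second transfer while one is still in flight: the new run fails immediately instead of queuing up. A minimal, self-contained sketch of that behavior (the lock path here is illustrative):

```shell
# Hold the lock in the background for a moment, then show that a second
# attempt with a zero-second wait is refused rather than queued.
lock=/tmp/flock-demo.lock
flock -w 0 "$lock" sleep 2 &
sleep 0.5
result=$(flock -w 0 "$lock" true && echo acquired || echo locked)
echo "$result"
wait
```

While the background job holds the lock, the second invocation exits non-zero, so this prints "locked".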

Create the Systemd Service File

vi /etc/systemd/system/rclone-sync.service
[Unit]
Description=Rclone periodic sync (test.txt to testmumbai)
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/rclone-sync.sh

[Install]
WantedBy=multi-user.target

Create the systemd Timer File

vi /etc/systemd/system/rclone-sync.timer
[Unit]
Description=Run rclone-sync.service every 1 minute
Requires=rclone-sync.service

[Timer]
OnBootSec=1min
OnUnitActiveSec=1min
Unit=rclone-sync.service

[Install]
WantedBy=timers.target

Reload systemd to make it aware of your new service and timer files.

sudo systemctl daemon-reload

Enable and start the rclone-sync service.

sudo systemctl start rclone-sync.service
sudo systemctl enable rclone-sync.service
sudo systemctl status rclone-sync.service

Enable and start the rclone-sync timer.

sudo systemctl start rclone-sync.timer
sudo systemctl enable rclone-sync.timer
sudo systemctl status rclone-sync.timer

To test the setup, create a file on the Central India (Indore) server and let the timer transfer it to Mumbai.

sudo touch /root/test.txt
echo "Health check is okay" | sudo tee -a /root/test.txt
sudo cat /root/test.txt
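One caveat worth noting: `sudo echo "..." >> file` does not append as root, because the `>>` redirection is performed by the calling (non-root) shell. The portable pattern is `tee -a`, illustrated here with a temp file so no root is needed:

```shell
# tee runs as its own process and does the appending itself,
# so under sudo it would write with root privileges.
f=$(mktemp)
echo "Health check is okay" | tee -a "$f" > /dev/null
content=$(cat "$f")
echo "$content"
```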

Now log in to the Mumbai region server and confirm that the file is present and its content is intact.

Note: rclone sync mirrors the source to the destination. If you delete data on the source, the next sync run deletes it on the destination as well. Use rclone copy instead if deletions must not propagate.
