Automatic geodat file updates in docker #3592
base: main
Conversation
Pull request overview
This PR introduces automated geodata file updates for the 3x-ui Docker deployment to address the challenge of frequently changing blocklists in restrictive internet environments. The implementation uses a cron-based architecture with container orchestration to automatically download and apply updated geodata files without manual intervention.
Key Changes:
- Automated geodata updates via a dedicated cron container that downloads files on a configurable schedule (default: every 6 hours)
- Integration with docker-socket-proxy to enable automatic restart of the main application container after updates
- Refactored geodata download logic with safer atomic file updates that prevent corruption from failed downloads
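The "atomic file update" idea above can be sketched in shell: a freshly downloaded candidate file is only swapped into place if it is valid, so a failed or empty download never clobbers the live file. This is a minimal illustration of the approach, not the PR's exact code; the function name and paths are illustrative.

```shell
# Sketch of the "validate then swap" step of an atomic geodata update.
# An mv within the same filesystem is a rename, so readers never observe
# a partially written file. Names and paths here are illustrative.
commit_if_valid() {
    candidate="$1"   # freshly downloaded file
    dest="$2"        # live geodata file
    if [ -s "$candidate" ]; then      # candidate exists and is non-empty
        mv -f "$candidate" "$dest"    # atomic rename on the same filesystem
        return 0
    fi
    rm -f "$candidate"                # discard a failed or empty download
    return 1
}

# Usage sketch: the downloader writes to a temp file next to the target.
# wget -q -O /data/geosite.dat.new "$GEOSITE_URL" \
#     && commit_if_valid /data/geosite.dat.new /data/geosite.dat
```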
Reviewed changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 17 comments.
| File | Description |
|---|---|
| x-ui.sh | Removed duplicate geodata update functions (moved to shared script); fixed shell comparison operators |
| docker-cron-runner/xray-tools.sh | New shared script providing safe geodata download/update functions and xray-core installation logic |
| docker-cron-runner/entrypoint.sh | Cron container initialization script that sets up scheduled tasks and copies initial geodata files |
| docker-cron-runner/cron-job-script.sh | Script executed by cron to update geodata files and trigger a container restart via docker-proxy |
| docker-cron-runner/Dockerfile | Builds the Alpine-based cron container with the tools required for geodata management |
| docker-compose.yml | Defines the multi-container setup (main app, cron service, docker-proxy) with a shared volume for geodata |
| Dockerfile | Refactored main application Dockerfile with dependency caching; removed inline geodata downloads |
| DockerEntrypoint.sh | Updated to wait for initial geodata setup; fixed shell comparison syntax |
| DockerInit.sh | Removed (functionality moved to xray-tools.sh) |
| .gitignore | Added geodata directory to the ignore list |
| .dockerignore | New file excluding unnecessary files from the Docker build context |
Dockerfile

Copilot (AI), Dec 3, 2025, commenting on the dependency-install lines (`unzip`, `gcc`) and the following:

```dockerfile
# docker CACHE
COPY go.mod go.sum ./
```

Comment contains a typo: "CACHE" should be lowercase, or the comment should be more descriptive. Consider a more descriptive comment:

```dockerfile
# Cache Go dependencies for faster rebuilds
COPY go.mod go.sum ./
```
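For context, the layer-caching pattern this comment refers to generally looks like the sketch below. The base image tag and build command are assumptions for illustration, not the PR's exact Dockerfile.

```dockerfile
FROM golang:1.22-alpine AS builder
WORKDIR /app

# Copy only the module manifests first: this layer (and the downloaded
# dependencies) stays cached until go.mod/go.sum actually change.
COPY go.mod go.sum ./
RUN go mod download

# Source-code changes only invalidate layers from here on.
COPY . .
RUN go build -o /app/x-ui .
```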
docker-cron-runner/cron-job-script.sh

Copilot (AI), Dec 3, 2025, commenting on:

```sh
/app/xray-tools.sh update_geodata_in_docker "${SHARED_VOLUME_PATH}"
touch "$FINISHED_FLAG"

echo "[$(date)] Geodata update finished, restarting container..."
```

The script doesn't check the exit status of the geodata update before proceeding to restart the container. If the update fails, the container will still be restarted unnecessarily, potentially causing service disruption. Add exit status checking:

```sh
if /app/xray-tools.sh update_geodata_in_docker "${SHARED_VOLUME_PATH}"; then
    touch "$FINISHED_FLAG"
    echo "[$(date)] Geodata update finished, restarting container..."
else
    echo "[$(date)] ERROR: Geodata update failed, skipping container restart"
    exit 1
fi
```
docker-cron-runner/cron-job-script.sh

Copilot (AI), Dec 3, 2025, commenting on:

```sh
    -o /dev/null -w "%{http_code}"
)

echo "[$(date)] Restart request sent, HTTP status: ${HTTP_CODE}"
```

The HTTP response code is captured but never validated. If the restart request fails (e.g., returns 404, 500, or a connection error), the script will still report success. This could lead to situations where geodata is updated but the container is never restarted to apply the changes. Add validation of the HTTP response:

```sh
HTTP_CODE=$(
    curl -s -X POST \
        "${DOCKER_PROXY_URL}/containers/${TARGET_CONTAINER_NAME}/restart" \
        -o /dev/null -w "%{http_code}"
)
echo "[$(date)] Restart request sent, HTTP status: ${HTTP_CODE}"
if [ "$HTTP_CODE" -lt 200 ] || [ "$HTTP_CODE" -ge 300 ]; then
    echo "[$(date)] ERROR: Container restart failed with HTTP status ${HTTP_CODE}"
    exit 1
fi
```
Copilot (AI), Dec 3, 2025, commenting on:

```sh
CRON_ENV_FILE="/env.sh"

env | grep -v '^CRON_SCHEDULE=' | sed 's/^/export /' > "$CRON_ENV_FILE"
```

Environment variables are written verbatim into /env.sh and then sourced by cron (`. /env.sh`). If any env value contains shell metacharacters (e.g., `$(...)`, backticks, `;`), it will be executed when the file is sourced, leading to command injection. Fix by writing a safe env file that properly quotes values, e.g.,

```sh
printenv | grep -v '^CRON_SCHEDULE=' | awk -F= '{printf "export %s=%q\n", $1, $2}' > "$CRON_ENV_FILE"
```

or source a restricted whitelist of expected variables and validate their contents before use.
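Note that `awk`'s `printf` has no portable `%q` conversion, so the suggestion above may not work as written on all awk implementations. A more portable sketch (under the assumption that values are single-line and variable names are well-formed identifiers) is to single-quote each value, escaping embedded quotes:

```shell
# Sketch: write each environment variable as a single-quoted assignment,
# escaping embedded single quotes (' becomes '\'') so that sourcing the
# file cannot execute metacharacters contained in values.
# Assumptions: single-line values, identifier-like variable names.
write_safe_env_file() {
    out="$1"
    : > "$out"
    printenv | grep -v '^CRON_SCHEDULE=' | while IFS='=' read -r name value; do
        escaped=$(printf '%s' "$value" | sed "s/'/'\\\\''/g")
        printf "export %s='%s'\n" "$name" "$escaped" >> "$out"
    done
}
```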
docker-cron-runner/xray-tools.sh (Outdated)

Copilot (AI), Dec 3, 2025, commenting on:

```sh
unzip "Xray-linux-${ARCH}.zip" -d ./xray-unzip
cp ./xray-unzip/xray ./"xray-linux-${FNAME}"
rm -r xray-unzip
rm "Xray-linux-${ARCH}.zip"
```

install_xray_core downloads a remote binary from GitHub and unzips it for execution without any authenticity or integrity verification (no checksum or signature validation). This allows a supply-chain compromise or MITM attack to replace the binary and gain code execution in the container. Fix by verifying the download: pin to a specific release and validate it against the signatures or checksums (SHA256/SHA512) published by the project, failing if verification does not match. Suggested change:

```sh
wget -q "https://github.com/XTLS/Xray-core/releases/download/${XRAY_VERSION}/Xray-linux-${ARCH}.zip.sha256sum"
# Verify checksum
if sha256sum -c "Xray-linux-${ARCH}.zip.sha256sum"; then
    unzip "Xray-linux-${ARCH}.zip" -d ./xray-unzip
    cp ./xray-unzip/xray ./"xray-linux-${FNAME}"
    rm -r xray-unzip
    rm "Xray-linux-${ARCH}.zip" "Xray-linux-${ARCH}.zip.sha256sum"
else
    echo "[ERR] Checksum verification failed for Xray-linux-${ARCH}.zip"
    rm -f "Xray-linux-${ARCH}.zip" "Xray-linux-${ARCH}.zip.sha256sum"
    exit 1
fi
```
Co-authored-by: Copilot <[email protected]>
What is the pull request?
Hi everyone!
The main goal of this pull request is to introduce a new killer feature to 3xui: automatic geodat file updates.
Why is this needed?
We live in challenging times, where many governments try to restrict internet freedom and monitor users’ online activity. For example, in Russia, Roskomnadzor blocks or throttles around 20 services every day. Xray is a great protocol, but it’s not completely immune to detection. One of the best ways to protect a VPN server is to minimize outgoing traffic — something 3xui already helps with a lot.
However, as I mentioned earlier, the list of blocked resources changes extremely quickly nowadays, and updating geodat files manually through the panel has become quite inconvenient. This pull request solves that problem by adding automated updates.
What this PR includes:
1. Cron container — downloads geodat files on a configurable schedule (via docker env), storing them in a shared volume.
2. Docker proxy — allows the cron container to restart the main 3xui container so the new geodat files are applied automatically.
3. Some additional refactoring that I believe will be useful.
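As a rough illustration, the wiring described above might look like the following docker-compose sketch. The image references, volume paths, and proxy settings here are assumptions for illustration only; consult the PR's actual docker-compose.yml for the real configuration.

```yaml
services:
  3xui_app:
    image: ghcr.io/mhsanaei/3x-ui:latest    # illustrative image reference
    volumes:
      - ./geodata:/app/bin                  # shared geodata volume (path assumed)

  geodata_cron:
    build: ./docker-cron-runner
    environment:
      CRON_SCHEDULE: "0 */6 * * *"          # default: every 6 hours
      TARGET_CONTAINER_NAME: 3xui_app
    volumes:
      - ./geodata:/geodata

  docker_proxy:
    image: tecnativa/docker-socket-proxy    # restricts what the cron container may do
    environment:
      CONTAINERS: 1                         # expose container endpoints
      POST: 1                               # allow POST (needed for restart)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```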
Answers to frequently asked questions:

- I updated the current geodat download logic. Previously, it used a simple `wget -O`, which overwrote the existing file even if the source was unavailable or returned an empty response. Now it uses a safer method, `safe_download_and_update` in `xray-tools.sh`, which updates the file only if the newly downloaded version is valid and not empty. This prevents accidental corruption of existing geodat files and makes the update process much more reliable.
- No problem. After the first launch, the user can simply disable the `geodata_cron` and `docker_proxy` containers. The existing geodat files will still be available through the shared volume, but they will no longer be updated automatically. Updates will occur again only if the containers are re-enabled or restarted.
- No problem, just add them to the shared volume `$PWD/geodata/` and restart the `3xui_app` container.
- Absolutely! It can only restart the container.
- Update `CRON_SCHEDULE` and restart the `geodata_cron` container.
- How can you check that `geodata_cron` really works? Enter the `geodata_cron` container and view `/var/log/cron.log`.
- I tested all the changes on both Debian and Ubuntu based VPS servers, and everything works correctly even with 1 vCore/1 GB of resources.
Which part of the application is affected by the change?
Type of Changes