diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
index befe7db7..11402129 100644
--- a/.github/FUNDING.yml
+++ b/.github/FUNDING.yml
@@ -1 +1 @@
-custom: https://mailcow.github.io/mailcow-dockerized-docs/#help-mailcow
+custom: ["https://www.servercow.de/mailcow?lang=en#sal"]
diff --git a/.github/ISSUE_TEMPLATE/Bug_report.yml b/.github/ISSUE_TEMPLATE/Bug_report.yml
index 6134a9ad..3cfbbe0d 100644
--- a/.github/ISSUE_TEMPLATE/Bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/Bug_report.yml
@@ -7,8 +7,8 @@ body:
label: Contribution guidelines
description: Please read the contribution guidelines before proceeding.
options:
- - label: I've read the [contribution guidelines](https://github.com/mailcow/mailcow-dockerized/blob/master/CONTRIBUTING.md) and wholeheartedly agree
- required: true
+ - label: I've read the [contribution guidelines](https://github.com/mailcow/mailcow-dockerized/blob/master/CONTRIBUTING.md) and wholeheartedly agree
+ required: true
- type: checkboxes
attributes:
label: I've found a bug and checked that ...
@@ -26,70 +26,132 @@ body:
attributes:
label: Description
description: Please provide a brief description of the bug in 1-2 sentences. If applicable, add screenshots to help explain your problem. Very useful for bugs in mailcow UI.
+ render: plain text
validations:
required: true
- type: textarea
attributes:
- label: Logs
- description: Please take a look at the [official documentation](https://mailcow.github.io/mailcow-dockerized-docs/debug-logs/) and post the last few lines of logs, when the error occurs. For example, docker container logs of affected containers. This will be automatically formatted into code, so no need for backticks.
- render: bash
+ label: "Logs:"
+ description: "Please take a look at the [official documentation](https://docs.mailcow.email/troubleshooting/debug-logs/) and post the last few lines of logs, when the error occurs. For example, docker container logs of affected containers. This will be automatically formatted into code, so no need for backticks."
+ render: plain text
validations:
required: true
- type: textarea
attributes:
- label: Steps to reproduce
- description: Please describe the steps to reproduce the bug. Screenshots can be added, if helpful.
+ label: "Steps to reproduce:"
+ description: "Please describe the steps to reproduce the bug. Screenshots can be added, if helpful."
+ render: plain text
placeholder: |-
1. ...
2. ...
3. ...
validations:
required: true
- - type: textarea
+ - type: markdown
attributes:
- label: System information
- description: In this stage we would kindly ask you to attach general system information about your setup.
- value: |-
- | Question | Answer |
- | --- | --- |
- | My operating system | I_DO_REPLY_HERE |
- | Is Apparmor, SELinux or similar active? | I_DO_REPLY_HERE |
- | Virtualization technology (KVM, VMware, Xen, etc - **LXC and OpenVZ are not supported** | I_DO_REPLY_HERE |
- | Server/VM specifications (Memory, CPU Cores) | I_DO_REPLY_HERE |
- | Docker version (`docker version`) | I_DO_REPLY_HERE |
- | docker-compose version (`docker-compose version`) | I_DO_REPLY_HERE |
- | mailcow version (```git describe --tags `git rev-list --tags --max-count=1` ```) | I_DO_REPLY_HERE |
- | Reverse proxy (custom solution) | I_DO_REPLY_HERE |
-
- Output of `git diff origin/master`, any other changes to the code? If so, **please post them**:
- ```
- YOUR OUTPUT GOES HERE
- ```
-
- All third-party firewalls and custom iptables rules are unsupported. **Please check the Docker docs about how to use Docker with your own ruleset**. Nevertheless, iptabels output can help us to help you:
- iptables -L -vn:
- ```
- YOUR OUTPUT GOES HERE
- ```
-
- ip6tables -L -vn:
- ```
- YOUR OUTPUT GOES HERE
- ```
-
- iptables -L -vn -t nat:
- ```
- YOUR OUTPUT GOES HERE
- ```
-
- ip6tables -L -vn -t nat:
- ```
- YOUR OUTPUT GOES HERE
- ```
-
- DNS problems? Please run `docker exec -it $(docker ps -qf name=acme-mailcow) dig +short stackoverflow.com @172.22.1.254` (set the IP accordingly, if you changed the internal mailcow network) and post the output:
- ```
- YOUR OUTPUT GOES HERE
- ```
+ value: |
+ ## System information
+ ### At this stage, we kindly ask you to attach general system information about your setup.
+ - type: dropdown
+ attributes:
+ label: "Which branch are you using?"
+ description: "#### `git rev-parse --abbrev-ref HEAD`"
+ multiple: false
+ options:
+ - master
+ - nightly
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Operating System:"
+ placeholder: "e.g. Ubuntu 22.04 LTS"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Server/VM specifications:"
+ placeholder: "Memory, CPU Cores"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Is Apparmor, SELinux or similar active?"
+ placeholder: "yes/no"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Virtualization technology:"
+ placeholder: "KVM, VMware, Xen, etc - **LXC and OpenVZ are not supported**"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Docker version:"
+ description: "#### `docker version`"
+ placeholder: "20.10.21"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "docker-compose version or docker compose version:"
+ description: "#### `docker-compose version` or `docker compose version`"
+ placeholder: "v2.12.2"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "mailcow version:"
+ description: "#### ```git describe --tags `git rev-list --tags --max-count=1` ```"
+ placeholder: "2022-08"
+ validations:
+ required: true
+ - type: input
+ attributes:
+ label: "Reverse proxy:"
+ placeholder: "e.g. Nginx/Traefik"
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "Logs of git diff:"
+ description: "#### Output of `git diff origin/master`, any other changes to the code? If so, **please post them**:"
+ render: plain text
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "Logs of iptables -L -vn:"
+ description: "#### Output of `iptables -L -vn`"
+ render: plain text
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "Logs of ip6tables -L -vn:"
+ description: "#### Output of `ip6tables -L -vn`"
+ render: plain text
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "Logs of iptables -L -vn -t nat:"
+ description: "#### Output of `iptables -L -vn -t nat`"
+ render: plain text
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "Logs of ip6tables -L -vn -t nat:"
+ description: "#### Output of `ip6tables -L -vn -t nat`"
+ render: plain text
+ validations:
+ required: true
+ - type: textarea
+ attributes:
+ label: "DNS check:"
+ description: "#### Output of `docker exec -it $(docker ps -qf name=acme-mailcow) dig +short stackoverflow.com @172.22.1.254` (set the IP accordingly, if you changed the internal mailcow network)"
+ render: plain text
validations:
required: true
diff --git a/.github/renovate.json b/.github/renovate.json
new file mode 100644
index 00000000..36b4aec5
--- /dev/null
+++ b/.github/renovate.json
@@ -0,0 +1,31 @@
+{
+ "enabled": true,
+ "timezone": "Europe/Berlin",
+ "dependencyDashboard": true,
+ "dependencyDashboardTitle": "Renovate Dashboard",
+ "commitBody": "Signed-off-by: milkmaker ",
+ "rebaseWhen": "auto",
+ "labels": ["renovate"],
+ "assignees": [
+ "@magiccc"
+ ],
+ "baseBranches": ["staging"],
+ "enabledManagers": ["github-actions", "regex", "docker-compose"],
+ "ignorePaths": [
+ "data\/web\/inc\/lib\/vendor\/matthiasmullie\/minify\/**"
+ ],
+ "regexManagers": [
+ {
+ "fileMatch": ["^helper-scripts\/nextcloud.sh$"],
+ "matchStrings": [
+ "#\\srenovate:\\sdatasource=(?.*?) depName=(?.*?)( versioning=(?.*?))?( extractVersion=(?.*?))?\\s.*?_VERSION=(?.*)"
+ ]
+ },
+ {
+ "fileMatch": ["(^|/)Dockerfile[^/]*$"],
+ "matchStrings": [
+ "#\\srenovate:\\sdatasource=(?.*?) depName=(?.*?)( versioning=(?.*?))?\\s(ENV|ARG) .*?_VERSION=(?.*)\\s"
+ ]
+ }
+ ]
+}
diff --git a/.github/workflows/assets/check_prs_if_on_staging.png b/.github/workflows/assets/check_prs_if_on_staging.png
new file mode 100644
index 00000000..2e0fc7ff
Binary files /dev/null and b/.github/workflows/assets/check_prs_if_on_staging.png differ
diff --git a/.github/workflows/check_prs_if_on_staging.yml b/.github/workflows/check_prs_if_on_staging.yml
new file mode 100644
index 00000000..485dc26e
--- /dev/null
+++ b/.github/workflows/check_prs_if_on_staging.yml
@@ -0,0 +1,33 @@
+name: Check PRs if on staging
+on:
+ pull_request_target:
+ types: [opened, edited]
+permissions: {}
+
+jobs:
+ is_not_staging:
+ runs-on: ubuntu-latest
+ if: github.event.pull_request.base.ref != 'staging' #check if the target branch is not staging
+ steps:
+ - name: Send message
+ uses: thollander/actions-comment-pull-request@v2.3.1
+ with:
+ GITHUB_TOKEN: ${{ secrets.CHECKIFPRISSTAGING_ACTION_PAT }}
+ message: |
+ Thanks for contributing!
+
+ I noticed that you didn't select `staging` as your base branch. Please change the base branch to `staging`.
+ See the attached picture on how to change the base branch to `staging`:
+
+ 
+
+ - name: Fail #we want to see failed checks in the PR
+ if: ${{ success() }} #set exit code to 1 even if commenting somehow failed
+ run: exit 1
+
+ is_staging:
+ runs-on: ubuntu-latest
+ if: github.event.pull_request.base.ref == 'staging' #check if the target branch is staging
+ steps:
+ - name: Success
+ run: exit 0
diff --git a/.github/workflows/close_old_issues_and_prs.yml b/.github/workflows/close_old_issues_and_prs.yml
index 83a75d25..64002617 100644
--- a/.github/workflows/close_old_issues_and_prs.yml
+++ b/.github/workflows/close_old_issues_and_prs.yml
@@ -14,7 +14,7 @@ jobs:
pull-requests: write
steps:
- name: Mark/Close Stale Issues and Pull Requests 🗑️
- uses: actions/stale@v6.0.1
+ uses: actions/stale@v7.0.0
with:
repo-token: ${{ secrets.STALE_ACTION_PAT }}
days-before-stale: 60
diff --git a/.github/workflows/image_builds.yml b/.github/workflows/image_builds.yml
index fe660754..65678dff 100644
--- a/.github/workflows/image_builds.yml
+++ b/.github/workflows/image_builds.yml
@@ -33,13 +33,11 @@ jobs:
run: |
curl -sSL https://get.docker.com/ | CHANNEL=stable sudo sh
sudo service docker start
- sudo curl -L https://github.com/docker/compose/releases/download/v$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- name: Prepair Image Builds
run: |
cp helper-scripts/docker-compose.override.yml.d/BUILD_FLAGS/docker-compose.override.yml docker-compose.override.yml
- name: Build Docker Images
run: |
- docker-compose build ${image}
+ docker compose build ${image}
env:
image: ${{ matrix.images }}
diff --git a/.github/workflows/integration_tests.yml b/.github/workflows/integration_tests.yml
deleted file mode 100644
index ee083bf4..00000000
--- a/.github/workflows/integration_tests.yml
+++ /dev/null
@@ -1,63 +0,0 @@
-name: mailcow Integration Tests
-
-on:
- push:
- branches: [ "master", "staging" ]
- workflow_dispatch:
-
-permissions:
- contents: read
-
-jobs:
- integration_tests:
- runs-on: ubuntu-latest
- steps:
- - name: Setup Ansible
- run: |
- export DEBIAN_FRONTEND=noninteractive
- sudo apt-get update
- sudo apt-get install python3 python3-pip git
- sudo pip3 install ansible
- - name: Prepair Test Environment
- run: |
- git clone https://github.com/mailcow/mailcow-integration-tests.git --branch $(curl -sL https://api.github.com/repos/mailcow/mailcow-integration-tests/releases/latest | jq -r '.tag_name') --single-branch .
- ./fork_check.sh
- ./ci.sh
- ./ci-pip-requirements.sh
- env:
- VAULT_PW: ${{ secrets.MAILCOW_TESTS_VAULT_PW }}
- VAULT_FILE: ${{ secrets.MAILCOW_TESTS_VAULT_FILE }}
- - name: Start Integration Test Server
- run: |
- ./fork_check.sh
- ansible-playbook mailcow-start-server.yml --diff
- env:
- PY_COLORS: '1'
- ANSIBLE_FORCE_COLOR: '1'
- ANSIBLE_HOST_KEY_CHECKING: 'false'
- - name: Setup Integration Test Server
- run: |
- ./fork_check.sh
- sleep 30
- ansible-playbook mailcow-setup-server.yml --private-key id_ssh_rsa --diff
- env:
- PY_COLORS: '1'
- ANSIBLE_FORCE_COLOR: '1'
- ANSIBLE_HOST_KEY_CHECKING: 'false'
- - name: Run Integration Tests
- run: |
- ./fork_check.sh
- ansible-playbook mailcow-integration-tests.yml --private-key id_ssh_rsa --diff
- env:
- PY_COLORS: '1'
- ANSIBLE_FORCE_COLOR: '1'
- ANSIBLE_HOST_KEY_CHECKING: 'false'
- - name: Delete Integration Test Server
- if: always()
- run: |
- ./fork_check.sh
- ansible-playbook mailcow-delete-server.yml --diff
- env:
- PY_COLORS: '1'
- ANSIBLE_FORCE_COLOR: '1'
- ANSIBLE_HOST_KEY_CHECKING: 'false'
diff --git a/.github/workflows/pr_to_nightly.yml b/.github/workflows/pr_to_nightly.yml
index fd9e4946..57aac781 100644
--- a/.github/workflows/pr_to_nightly.yml
+++ b/.github/workflows/pr_to_nightly.yml
@@ -12,7 +12,7 @@ jobs:
with:
fetch-depth: 0
- name: Run the Action
- uses: devops-infra/action-pull-request@v0.5.1
+ uses: devops-infra/action-pull-request@v0.5.5
with:
github_token: ${{ secrets.PRTONIGHTLY_ACTION_PAT }}
title: Automatic PR to nightly from ${{ github.event.repository.updated_at}}
diff --git a/.github/workflows/rebuild_backup_image.yml b/.github/workflows/rebuild_backup_image.yml
new file mode 100644
index 00000000..21c218a8
--- /dev/null
+++ b/.github/workflows/rebuild_backup_image.yml
@@ -0,0 +1,34 @@
+name: Build mailcow backup image
+
+on:
+ schedule:
+ # At 00:00 on Sunday
+ - cron: "0 0 * * 0"
+ workflow_dispatch: # Allow to run workflow manually
+
+jobs:
+ docker_image_build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v3
+
+ - name: Set up QEMU
+ uses: docker/setup-qemu-action@v2
+
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v2
+
+ - name: Login to Docker Hub
+ uses: docker/login-action@v2
+ with:
+ username: ${{ secrets.BACKUPIMAGEBUILD_ACTION_DOCKERHUB_USERNAME }}
+ password: ${{ secrets.BACKUPIMAGEBUILD_ACTION_DOCKERHUB_TOKEN }}
+
+ - name: Build and push
+ uses: docker/build-push-action@v4
+ with:
+ context: .
+ file: data/Dockerfiles/backup/Dockerfile
+ push: true
+ tags: mailcow/backup:latest
diff --git a/.github/workflows/tweet-trigger-publish-release.yml b/.github/workflows/tweet-trigger-publish-release.yml
deleted file mode 100644
index 82f1dc3a..00000000
--- a/.github/workflows/tweet-trigger-publish-release.yml
+++ /dev/null
@@ -1,17 +0,0 @@
-name: "Tweet trigger release"
-on:
- release:
- types: [published]
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - name: Tweet-trigger-publish-release
- uses: mugi111/tweet-trigger-release@v1.1
- with:
- consumer_key: ${{ secrets.CONSUMER_KEY }}
- consumer_secret: ${{ secrets.CONSUMER_SECRET }}
- access_token_key: ${{ secrets.ACCESS_TOKEN_KEY }}
- access_token_secret: ${{ secrets.ACCESS_TOKEN_SECRET }}
- tweet_body: 'A new mailcow-dockerized Release has been Released on GitHub! Checkout our GitHub Page for the latest Release: github.com/mailcow/mailcow-dockerized/releases/latest'
diff --git a/README.md b/README.md
index 313fa13b..c15b8ef0 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,5 @@
# mailcow: dockerized - 🐮 + 🐋 = 💕
-## We stand with 🇺🇦
-
-[](https://github.com/mailcow/mailcow-dockerized/actions/workflows/integration_tests.yml)
[](https://translate.mailcow.email/engage/mailcow-dockerized/)
[](https://twitter.com/mailcow_email)
@@ -36,3 +33,9 @@ Telegram desktop clients are available for [multiple platforms](https://desktop.
**Important**: mailcow makes use of various open-source software. Please assure you agree with their license before using mailcow.
Any part of mailcow itself is released under **GNU General Public License, Version 3**.
+
+mailcow is a registered word mark of The Infrastructure Company GmbH, Parkstr. 42, 47877 Willich, Germany.
+
+The project is managed and maintained by The Infrastructure Company GmbH.
+
+Originated from @andryyy (André)
\ No newline at end of file
diff --git a/data/Dockerfiles/acme/Dockerfile b/data/Dockerfiles/acme/Dockerfile
index f5b7b56c..571c3d08 100644
--- a/data/Dockerfiles/acme/Dockerfile
+++ b/data/Dockerfiles/acme/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "Andre Peters "
diff --git a/data/Dockerfiles/acme/acme.sh b/data/Dockerfiles/acme/acme.sh
index 4f5cb803..1cd456a4 100755
--- a/data/Dockerfiles/acme/acme.sh
+++ b/data/Dockerfiles/acme/acme.sh
@@ -213,11 +213,13 @@ while true; do
done
ADDITIONAL_WC_ARR+=('autodiscover' 'autoconfig')
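+  # Only detect IP addresses when SKIP_IP_CHECK is not explicitly enabled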
+ if [[ ${SKIP_IP_CHECK} != "y" ]]; then
# Start IP detection
log_f "Detecting IP addresses..."
IPV4=$(get_ipv4)
IPV6=$(get_ipv6)
log_f "OK: ${IPV4}, ${IPV6:-"0000:0000:0000:0000:0000:0000:0000:0000"}"
+ fi
#########################################
# IP and webroot challenge verification #
diff --git a/data/Dockerfiles/clamd/Dockerfile b/data/Dockerfiles/clamd/Dockerfile
index efbc6a4d..f381e0ef 100644
--- a/data/Dockerfiles/clamd/Dockerfile
+++ b/data/Dockerfiles/clamd/Dockerfile
@@ -1,4 +1,4 @@
-FROM clamav/clamav:0.105.1_base
+FROM clamav/clamav:1.0.1-1_base
LABEL maintainer "André Peters "
diff --git a/data/Dockerfiles/dockerapi/Dockerfile b/data/Dockerfiles/dockerapi/Dockerfile
index f021b73e..aa4a3858 100644
--- a/data/Dockerfiles/dockerapi/Dockerfile
+++ b/data/Dockerfiles/dockerapi/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "Andre Peters "
@@ -13,6 +13,7 @@ RUN apk add --update --no-cache python3 \
fastapi \
uvicorn \
aiodocker \
+ docker \
redis
COPY docker-entrypoint.sh /app/
diff --git a/data/Dockerfiles/dockerapi/dockerapi.py b/data/Dockerfiles/dockerapi/dockerapi.py
index 304c1781..9e699c22 100644
--- a/data/Dockerfiles/dockerapi/dockerapi.py
+++ b/data/Dockerfiles/dockerapi/dockerapi.py
@@ -1,5 +1,6 @@
from fastapi import FastAPI, Response, Request
import aiodocker
+import docker
import psutil
import sys
import re
@@ -9,11 +10,38 @@ import json
import asyncio
import redis
from datetime import datetime
+import traceback
+import logging
+from logging.config import dictConfig
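+# Logging setup: uvicorn's default formatter adds a level prefix and timestamp; all messages go to stderr.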
+log_config = {
+ "version": 1,
+ "disable_existing_loggers": False,
+ "formatters": {
+ "default": {
+ "()": "uvicorn.logging.DefaultFormatter",
+ "fmt": "%(levelprefix)s %(asctime)s %(message)s",
+ "datefmt": "%Y-%m-%d %H:%M:%S",
+
+ },
+ },
+ "handlers": {
+ "default": {
+ "formatter": "default",
+ "class": "logging.StreamHandler",
+ "stream": "ext://sys.stderr",
+ },
+ },
+ "loggers": {
+ "api-logger": {"handlers": ["default"], "level": "INFO"},
+ },
+}
+dictConfig(log_config)
+
containerIds_to_update = []
host_stats_isUpdating = False
app = FastAPI()
+logger = logging.getLogger('api-logger')
@app.get("/host/stats")
@@ -21,18 +49,15 @@ async def get_host_update_stats():
global host_stats_isUpdating
if host_stats_isUpdating == False:
- print("start host stats task")
asyncio.create_task(get_host_stats())
host_stats_isUpdating = True
while True:
if redis_client.exists('host_stats'):
break
- print("wait for host_stats results")
await asyncio.sleep(1.5)
- print("host stats pulled")
stats = json.loads(redis_client.get('host_stats'))
return Response(content=json.dumps(stats, indent=4), media_type="application/json")
@@ -106,14 +131,14 @@ async def post_containers(container_id : str, post_action : str, request: Reques
else:
api_call_method_name = '__'.join(['container_post', str(post_action) ])
- docker_utils = DockerUtils(async_docker_client)
+ docker_utils = DockerUtils(sync_docker_client)
api_call_method = getattr(docker_utils, api_call_method_name, lambda container_id: Response(content=json.dumps({'type': 'danger', 'msg':'container_post - unknown api call' }, indent=4), media_type="application/json"))
- print("api call: %s, container_id: %s" % (api_call_method_name, container_id))
- return await api_call_method(container_id, request_json)
+ logger.info("api call: %s, container_id: %s" % (api_call_method_name, container_id))
+ return api_call_method(container_id, request_json)
except Exception as e:
- print("error - container_post: %s" % str(e))
+ logger.error("error - container_post: %s" % str(e))
res = {
"type": "danger",
"msg": str(e)
@@ -152,398 +177,289 @@ class DockerUtils:
self.docker_client = docker_client
# api call: container_post - post_action: stop
- async def container_post__stop(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- await container.stop()
- res = {
- 'type': 'success',
- 'msg': 'command completed successfully'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ def container_post__stop(self, container_id, request_json):
+ for container in self.docker_client.containers.list(all=True, filters={"id": container_id}):
+ container.stop()
+ res = { 'type': 'success', 'msg': 'command completed successfully'}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: start
- async def container_post__start(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- await container.start()
- res = {
- 'type': 'success',
- 'msg': 'command completed successfully'
- }
+ def container_post__start(self, container_id, request_json):
+ for container in self.docker_client.containers.list(all=True, filters={"id": container_id}):
+ container.start()
+
+ res = { 'type': 'success', 'msg': 'command completed successfully'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
-
# api call: container_post - post_action: restart
- async def container_post__restart(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- await container.restart()
- res = {
- 'type': 'success',
- 'msg': 'command completed successfully'
- }
+ def container_post__restart(self, container_id, request_json):
+ for container in self.docker_client.containers.list(all=True, filters={"id": container_id}):
+ container.restart()
+
+ res = { 'type': 'success', 'msg': 'command completed successfully'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
-
# api call: container_post - post_action: top
- async def container_post__top(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- ps_exec = await container.exec("ps")
- async with ps_exec.start(detach=False) as stream:
- ps_return = await stream.read_out()
-
- exec_details = await ps_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- res = {
- 'type': 'success',
- 'msg': ps_return.data.decode('utf-8')
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'danger',
- 'msg': ''
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
+ def container_post__top(self, container_id, request_json):
+ for container in self.docker_client.containers.list(all=True, filters={"id": container_id}):
+ res = { 'type': 'success', 'msg': container.top()}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ # api call: container_post - post_action: stats
+ def container_post__stats(self, container_id, request_json):
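+ # container.stats() yields a stream of samples; respond with the first decoded one.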
+ for container in self.docker_client.containers.list(all=True, filters={"id": container_id}):
+ for stat in container.stats(decode=True, stream=True):
+ res = { 'type': 'success', 'msg': stat}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: mailq - task: delete
- async def container_post__exec__mailq__delete(self, container_id, request_json):
+ def container_post__exec__mailq__delete(self, container_id, request_json):
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-d %s' % i for i in filtered_qids]
- sanitized_string = str(' '.join(flagged_qids))
+ sanitized_string = str(' '.join(flagged_qids))
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
+ return exec_run_handler('generic', postsuper_r)
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postsuper_r_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
- return await exec_run_handler('generic', postsuper_r_exec)
# api call: container_post - post_action: exec - cmd: mailq - task: hold
- async def container_post__exec__mailq__hold(self, container_id, request_json):
+ def container_post__exec__mailq__hold(self, container_id, request_json):
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-h %s' % i for i in filtered_qids]
- sanitized_string = str(' '.join(flagged_qids))
-
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postsuper_r_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
- return await exec_run_handler('generic', postsuper_r_exec)
+ sanitized_string = str(' '.join(flagged_qids))
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
+ return exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: mailq - task: cat
- async def container_post__exec__mailq__cat(self, container_id, request_json):
+ def container_post__exec__mailq__cat(self, container_id, request_json):
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
- sanitized_string = str(' '.join(filtered_qids))
+ sanitized_string = str(' '.join(filtered_qids))
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postcat_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postcat -q " + sanitized_string], user='postfix')
- return await exec_run_handler('utf8_text_only', postcat_exec)
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postcat_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postcat -q " + sanitized_string], user='postfix')
+ if not postcat_return:
+ postcat_return = 'err: invalid'
+ return exec_run_handler('utf8_text_only', postcat_return)
# api call: container_post - post_action: exec - cmd: mailq - task: unhold
- async def container_post__exec__mailq__unhold(self, container_id, request_json):
+ def container_post__exec__mailq__unhold(self, container_id, request_json):
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-H %s' % i for i in filtered_qids]
- sanitized_string = str(' '.join(flagged_qids))
-
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postsuper_r_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
- return await exec_run_handler('generic', postsuper_r_exec)
-
+ sanitized_string = str(' '.join(flagged_qids))
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postsuper_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postsuper " + sanitized_string])
+ return exec_run_handler('generic', postsuper_r)
# api call: container_post - post_action: exec - cmd: mailq - task: deliver
- async def container_post__exec__mailq__deliver(self, container_id, request_json):
+ def container_post__exec__mailq__deliver(self, container_id, request_json):
if 'items' in request_json:
r = re.compile("^[0-9a-fA-F]+$")
filtered_qids = filter(r.match, request_json['items'])
if filtered_qids:
flagged_qids = ['-i %s' % i for i in filtered_qids]
-
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- for i in flagged_qids:
- postsuper_r_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postqueue " + i], user='postfix')
- async with postsuper_r_exec.start(detach=False) as stream:
- postsuper_r_return = await stream.read_out()
- # todo: check each exit code
- res = {
- 'type': 'success',
- 'msg': 'Scheduled immediate delivery'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
-
- # api call: container_post - post_action: exec - cmd: mailq - task: list
- async def container_post__exec__mailq__list(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- mailq_exec = await container.exec(["/usr/sbin/postqueue", "-j"], user='postfix')
- return await exec_run_handler('utf8_text_only', mailq_exec)
-
-
- # api call: container_post - post_action: exec - cmd: mailq - task: flush
- async def container_post__exec__mailq__flush(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postsuper_r_exec = await container.exec(["/usr/sbin/postqueue", "-f"], user='postfix')
- return await exec_run_handler('generic', postsuper_r_exec)
-
-
- # api call: container_post - post_action: exec - cmd: mailq - task: super_delete
- async def container_post__exec__mailq__super_delete(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- postsuper_r_exec = await container.exec(["/usr/sbin/postsuper", "-d", "ALL"])
- return await exec_run_handler('generic', postsuper_r_exec)
-
-
- # api call: container_post - post_action: exec - cmd: system - task: fts_rescan
- async def container_post__exec__system__fts_rescan(self, container_id, request_json):
- if 'username' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- rescan_exec = await container.exec(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -u '" + request_json['username'].replace("'", "'\\''") + "'"], user='vmail')
- async with rescan_exec.start(detach=False) as stream:
- rescan_return = await stream.read_out()
-
- exec_details = await rescan_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- res = {
- 'type': 'success',
- 'msg': 'fts_rescan: rescan triggered'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'warning',
- 'msg': 'fts_rescan error'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
- if 'all' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- rescan_exec = await container.exec(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -A"], user='vmail')
- async with rescan_exec.start(detach=False) as stream:
- rescan_return = await stream.read_out()
-
- exec_details = await rescan_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- res = {
- 'type': 'success',
- 'msg': 'fts_rescan: rescan triggered'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'warning',
- 'msg': 'fts_rescan error'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
-
- # api call: container_post - post_action: exec - cmd: system - task: df
- async def container_post__exec__system__df(self, container_id, request_json):
- if 'dir' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- df_exec = await container.exec(["/bin/bash", "-c", "/bin/df -H '" + request_json['dir'].replace("'", "'\\''") + "' | /usr/bin/tail -n1 | /usr/bin/tr -s [:blank:] | /usr/bin/tr ' ' ','"], user='nobody')
- async with df_exec.start(detach=False) as stream:
- df_return = await stream.read_out()
-
- print(df_return)
- print(await df_exec.inspect())
- exec_details = await df_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- return df_return.data.decode('utf-8').rstrip()
- else:
- return "0,0,0,0,0,0"
-
-
- # api call: container_post - post_action: exec - cmd: system - task: mysql_upgrade
- async def container_post__exec__system__mysql_upgrade(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- sql_exec = await container.exec(["/bin/bash", "-c", "/usr/bin/mysql_upgrade -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "'\n"], user='mysql')
- async with sql_exec.start(detach=False) as stream:
- sql_return = await stream.read_out()
-
- exec_details = await sql_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- matched = False
- for line in sql_return.data.decode('utf-8').split("\n"):
- if 'is already upgraded to' in line:
- matched = True
- if matched:
- res = {
- 'type': 'success',
- 'msg': 'mysql_upgrade: already upgraded',
- 'text': sql_return.data.decode('utf-8')
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- await container.restart()
- res = {
- 'type': 'warning',
- 'msg': 'mysql_upgrade: upgrade was applied',
- 'text': sql_return.data.decode('utf-8')
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'error',
- 'msg': 'mysql_upgrade: error running command',
- 'text': sql_return.data.decode('utf-8')
- }
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ for i in flagged_qids:
+ postqueue_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postqueue " + i], user='postfix')
+ # todo: check each exit code
+ res = { 'type': 'success', 'msg': 'Scheduled immediate delivery'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
- # api call: container_post - post_action: exec - cmd: system - task: mysql_tzinfo_to_sql
- async def container_post__exec__system__mysql_tzinfo_to_sql(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- sql_exec = await container.exec(["/bin/bash", "-c", "/usr/bin/mysql_tzinfo_to_sql /usr/share/zoneinfo | /bin/sed 's/Local time zone must be set--see zic manual page/FCTY/' | /usr/bin/mysql -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "' mysql \n"], user='mysql')
- async with sql_exec.start(detach=False) as stream:
- sql_return = await stream.read_out()
-
- exec_details = await sql_exec.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- res = {
- 'type': 'info',
- 'msg': 'mysql_tzinfo_to_sql: command completed successfully',
- 'text': sql_return.data.decode('utf-8')
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'error',
- 'msg': 'mysql_tzinfo_to_sql: error running command',
- 'text': sql_return.data.decode('utf-8')
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
-
- # api call: container_post - post_action: exec - cmd: reload - task: dovecot
- async def container_post__exec__reload__dovecot(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- reload_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/dovecot reload"])
- return await exec_run_handler('generic', reload_exec)
-
-
- # api call: container_post - post_action: exec - cmd: reload - task: postfix
- async def container_post__exec__reload__postfix(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- reload_exec = await container.exec(["/bin/bash", "-c", "/usr/sbin/postfix reload"])
- return await exec_run_handler('generic', reload_exec)
-
-
- # api call: container_post - post_action: exec - cmd: reload - task: nginx
- async def container_post__exec__reload__nginx(self, container_id, request_json):
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- reload_exec = await container.exec(["/bin/sh", "-c", "/usr/sbin/nginx -s reload"])
- return await exec_run_handler('generic', reload_exec)
-
-
- # api call: container_post - post_action: exec - cmd: sieve - task: list
- async def container_post__exec__sieve__list(self, container_id, request_json):
- if 'username' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- sieve_exec = await container.exec(["/bin/bash", "-c", "/usr/bin/doveadm sieve list -u '" + request_json['username'].replace("'", "'\\''") + "'"])
- return await exec_run_handler('utf8_text_only', sieve_exec)
-
-
- # api call: container_post - post_action: exec - cmd: sieve - task: print
- async def container_post__exec__sieve__print(self, container_id, request_json):
- if 'username' in request_json and 'script_name' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- cmd = ["/bin/bash", "-c", "/usr/bin/doveadm sieve get -u '" + request_json['username'].replace("'", "'\\''") + "' '" + request_json['script_name'].replace("'", "'\\''") + "'"]
- sieve_exec = await container.exec(cmd)
- return await exec_run_handler('utf8_text_only', sieve_exec)
-
-
- # api call: container_post - post_action: exec - cmd: maildir - task: cleanup
- async def container_post__exec__maildir__cleanup(self, container_id, request_json):
- if 'maildir' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- sane_name = re.sub(r'\W+', '', request_json['maildir'])
- cmd = ["/bin/bash", "-c", "if [[ -d '/var/vmail/" + request_json['maildir'].replace("'", "'\\''") + "' ]]; then /bin/mv '/var/vmail/" + request_json['maildir'].replace("'", "'\\''") + "' '/var/vmail/_garbage/" + str(int(time.time())) + "_" + sane_name + "'; fi"]
- maildir_cleanup_exec = await container.exec(cmd, user='vmail')
- return await exec_run_handler('generic', maildir_cleanup_exec)
-
- # api call: container_post - post_action: exec - cmd: rspamd - task: worker_password
- async def container_post__exec__rspamd__worker_password(self, container_id, request_json):
- if 'raw' in request_json:
- for container in (await self.docker_client.containers.list()):
- if container._id == container_id:
- cmd = "./set_worker_password.sh '" + request_json['raw'].replace("'", "'\\''") + "' 2> /dev/null"
- rspamd_password_exec = await container.exec(cmd, user='_rspamd')
- async with rspamd_password_exec.start(detach=False) as stream:
- rspamd_password_return = await stream.read_out()
-
- matched = False
- if "OK" in rspamd_password_return.data.decode('utf-8'):
+ # api call: container_post - post_action: exec - cmd: mailq - task: list
+ def container_post__exec__mailq__list(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ mailq_return = container.exec_run(["/usr/sbin/postqueue", "-j"], user='postfix')
+ return exec_run_handler('utf8_text_only', mailq_return)
+ # api call: container_post - post_action: exec - cmd: mailq - task: flush
+ def container_post__exec__mailq__flush(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postqueue_r = container.exec_run(["/usr/sbin/postqueue", "-f"], user='postfix')
+ return exec_run_handler('generic', postqueue_r)
+ # api call: container_post - post_action: exec - cmd: mailq - task: super_delete
+ def container_post__exec__mailq__super_delete(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ postsuper_r = container.exec_run(["/usr/sbin/postsuper", "-d", "ALL"])
+ return exec_run_handler('generic', postsuper_r)
+ # api call: container_post - post_action: exec - cmd: system - task: fts_rescan
+ def container_post__exec__system__fts_rescan(self, container_id, request_json):
+ if 'username' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ rescan_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -u '" + request_json['username'].replace("'", "'\\''") + "'"], user='vmail')
+ if rescan_return.exit_code == 0:
+ res = { 'type': 'success', 'msg': 'fts_rescan: rescan triggered'}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ res = { 'type': 'warning', 'msg': 'fts_rescan error'}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ if 'all' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ rescan_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm fts rescan -A"], user='vmail')
+ if rescan_return.exit_code == 0:
+ res = { 'type': 'success', 'msg': 'fts_rescan: rescan triggered'}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ res = { 'type': 'warning', 'msg': 'fts_rescan error'}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ # api call: container_post - post_action: exec - cmd: system - task: df
+ def container_post__exec__system__df(self, container_id, request_json):
+ if 'dir' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ df_return = container.exec_run(["/bin/bash", "-c", "/bin/df -H '" + request_json['dir'].replace("'", "'\\''") + "' | /usr/bin/tail -n1 | /usr/bin/tr -s [:blank:] | /usr/bin/tr ' ' ','"], user='nobody')
+ if df_return.exit_code == 0:
+ return df_return.output.decode('utf-8').rstrip()
+ else:
+ return "0,0,0,0,0,0"
+ # api call: container_post - post_action: exec - cmd: system - task: mysql_upgrade
+ def container_post__exec__system__mysql_upgrade(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ sql_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/mysql_upgrade -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "'\n"], user='mysql')
+ if sql_return.exit_code == 0:
+ matched = False
+ for line in sql_return.output.decode('utf-8').split("\n"):
+ if 'is already upgraded to' in line:
matched = True
- await container.restart()
+ if matched:
+ res = { 'type': 'success', 'msg':'mysql_upgrade: already upgraded', 'text': sql_return.output.decode('utf-8')}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ container.restart()
+ res = { 'type': 'warning', 'msg':'mysql_upgrade: upgrade was applied', 'text': sql_return.output.decode('utf-8')}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ res = { 'type': 'error', 'msg': 'mysql_upgrade: error running command', 'text': sql_return.output.decode('utf-8')}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ # api call: container_post - post_action: exec - cmd: system - task: mysql_tzinfo_to_sql
+ def container_post__exec__system__mysql_tzinfo_to_sql(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ sql_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/mysql_tzinfo_to_sql /usr/share/zoneinfo | /bin/sed 's/Local time zone must be set--see zic manual page/FCTY/' | /usr/bin/mysql -uroot -p'" + os.environ['DBROOT'].replace("'", "'\\''") + "' mysql \n"], user='mysql')
+ if sql_return.exit_code == 0:
+ res = { 'type': 'info', 'msg': 'mysql_tzinfo_to_sql: command completed successfully', 'text': sql_return.output.decode('utf-8')}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ res = { 'type': 'error', 'msg': 'mysql_tzinfo_to_sql: error running command', 'text': sql_return.output.decode('utf-8')}
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ # api call: container_post - post_action: exec - cmd: reload - task: dovecot
+ def container_post__exec__reload__dovecot(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ reload_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/dovecot reload"])
+ return exec_run_handler('generic', reload_return)
+ # api call: container_post - post_action: exec - cmd: reload - task: postfix
+ def container_post__exec__reload__postfix(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ reload_return = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postfix reload"])
+ return exec_run_handler('generic', reload_return)
+ # api call: container_post - post_action: exec - cmd: reload - task: nginx
+ def container_post__exec__reload__nginx(self, container_id, request_json):
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ reload_return = container.exec_run(["/bin/sh", "-c", "/usr/sbin/nginx -s reload"])
+ return exec_run_handler('generic', reload_return)
+ # api call: container_post - post_action: exec - cmd: sieve - task: list
+ def container_post__exec__sieve__list(self, container_id, request_json):
+ if 'username' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ sieve_return = container.exec_run(["/bin/bash", "-c", "/usr/bin/doveadm sieve list -u '" + request_json['username'].replace("'", "'\\''") + "'"])
+ return exec_run_handler('utf8_text_only', sieve_return)
+ # api call: container_post - post_action: exec - cmd: sieve - task: print
+ def container_post__exec__sieve__print(self, container_id, request_json):
+ if 'username' in request_json and 'script_name' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ cmd = ["/bin/bash", "-c", "/usr/bin/doveadm sieve get -u '" + request_json['username'].replace("'", "'\\''") + "' '" + request_json['script_name'].replace("'", "'\\''") + "'"]
+ sieve_return = container.exec_run(cmd)
+ return exec_run_handler('utf8_text_only', sieve_return)
+ # api call: container_post - post_action: exec - cmd: maildir - task: cleanup
+ def container_post__exec__maildir__cleanup(self, container_id, request_json):
+ if 'maildir' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ sane_name = re.sub(r'\W+', '', request_json['maildir'])
+ cmd = ["/bin/bash", "-c", "if [[ -d '/var/vmail/" + request_json['maildir'].replace("'", "'\\''") + "' ]]; then /bin/mv '/var/vmail/" + request_json['maildir'].replace("'", "'\\''") + "' '/var/vmail/_garbage/" + str(int(time.time())) + "_" + sane_name + "'; fi"]
+ maildir_cleanup = container.exec_run(cmd, user='vmail')
+ return exec_run_handler('generic', maildir_cleanup)
+ # api call: container_post - post_action: exec - cmd: rspamd - task: worker_password
+ def container_post__exec__rspamd__worker_password(self, container_id, request_json):
+ if 'raw' in request_json:
+ for container in self.docker_client.containers.list(filters={"id": container_id}):
+ cmd = "/usr/bin/rspamadm pw -e -p '" + request_json['raw'].replace("'", "'\\''") + "' 2> /dev/null"
+ cmd_response = exec_cmd_container(container, cmd, user="_rspamd")
- if matched:
- res = {
- 'type': 'success',
- 'msg': 'command completed successfully'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
- else:
- res = {
- 'type': 'danger',
- 'msg': 'command did not complete'
- }
- return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ matched = False
+ for line in cmd_response.split("\n"):
+ if '$2$' in line:
+ hash = line.strip()
+ hash_out = re.search(r'\$2\$.+$', hash).group(0)
+ rspamd_passphrase_hash = re.sub(r'[^0-9a-zA-Z\$]+', '', hash_out.rstrip())
+ rspamd_password_filename = "/etc/rspamd/override.d/worker-controller-password.inc"
+ cmd = '''/bin/echo 'enable_password = "%s";' > %s && cat %s''' % (rspamd_passphrase_hash, rspamd_password_filename, rspamd_password_filename)
+ cmd_response = exec_cmd_container(container, cmd, user="_rspamd")
+ if rspamd_passphrase_hash.startswith("$2$") and rspamd_passphrase_hash in cmd_response:
+ container.restart()
+ matched = True
+ if matched:
+ res = { 'type': 'success', 'msg': 'command completed successfully' }
+ logger.info('success changing Rspamd password')
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
+ else:
+ logger.error('failed changing Rspamd password')
+ res = { 'type': 'danger', 'msg': 'command did not complete' }
+ return Response(content=json.dumps(res, indent=4), media_type="application/json")
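+# Run a command inside a container through an attached shell socket (instead of
+# exec_run) so its output can be collected with a soft read timeout.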
+def exec_cmd_container(container, cmd, user, timeout=2, shell_cmd="/bin/bash"):
-async def exec_run_handler(type, exec_obj):
- async with exec_obj.start(detach=False) as stream:
- exec_return = await stream.read_out()
+ def recv_socket_data(c_socket, timeout):
+ c_socket.setblocking(0)
+ total_data=[]
+ data=''
+ begin=time.time()
+ while True:
+ if total_data and time.time()-begin > timeout:
+ break
+ elif time.time()-begin > timeout*2:
+ break
+ try:
+ data = c_socket.recv(8192)
+ if data:
+ total_data.append(data.decode('utf-8'))
+ #change the beginning time for measurement
+ begin=time.time()
+ else:
+ #sleep for sometime to indicate a gap
+ time.sleep(0.1)
+ break
+ except:
+ pass
+ return ''.join(total_data)
+
- if exec_return == None:
- exec_return = ""
- else:
- exec_return = exec_return.data.decode('utf-8')
-
- if type == 'generic':
- exec_details = await exec_obj.inspect()
- if exec_details["ExitCode"] == None or exec_details["ExitCode"] == 0:
- res = {
- "type": "success",
- "msg": "command completed successfully"
- }
+ try:
+ socket = container.exec_run([shell_cmd], stdin=True, socket=True, user=user).output._sock
+ if not cmd.endswith("\n"):
+ cmd = cmd + "\n"
+ socket.send(cmd.encode('utf-8'))
+ data = recv_socket_data(socket, timeout)
+ socket.close()
+ return data
+ except Exception as e:
+ logger.error("error - exec_cmd_container: %s" % str(e))
+ traceback.print_exc(file=sys.stdout)
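+# Map an exec_run result to an HTTP response: 'generic' turns the exit code into
+# a success/danger JSON message, 'utf8_text_only' returns the raw output as plain text.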
+def exec_run_handler(type, output):
+ if type == 'generic':
+ if output.exit_code == 0:
+ res = { 'type': 'success', 'msg': 'command completed successfully' }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
else:
- res = {
- "type": "success",
- "msg": "'command failed: " + exec_return
- }
+ res = { 'type': 'danger', 'msg': 'command failed: ' + output.output.decode('utf-8') }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
if type == 'utf8_text_only':
- return Response(content=exec_return, media_type="text/plain")
+ return Response(content=output.output.decode('utf-8'), media_type="text/plain")
async def get_host_stats(wait=5):
global host_stats_isUpdating
@@ -570,12 +486,10 @@ async def get_host_stats(wait=5):
"type": "danger",
"msg": str(e)
}
- print(json.dumps(res, indent=4))
await asyncio.sleep(wait)
host_stats_isUpdating = False
-
async def get_container_stats(container_id, wait=5, stop=False):
global containerIds_to_update
@@ -598,13 +512,11 @@ async def get_container_stats(container_id, wait=5, stop=False):
"type": "danger",
"msg": str(e)
}
- print(json.dumps(res, indent=4))
else:
res = {
"type": "danger",
"msg": "no or invalid id defined"
}
- print(json.dumps(res, indent=4))
await asyncio.sleep(wait)
if stop == True:
@@ -615,9 +527,13 @@ async def get_container_stats(container_id, wait=5, stop=False):
await get_container_stats(container_id, wait=0, stop=True)
+
if os.environ['REDIS_SLAVEOF_IP'] != "":
redis_client = redis.Redis(host=os.environ['REDIS_SLAVEOF_IP'], port=os.environ['REDIS_SLAVEOF_PORT'], db=0)
else:
redis_client = redis.Redis(host='redis-mailcow', port=6379, db=0)
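+# The synchronous docker-py client serves the container_post actions; the aiodocker client below keeps handling the async stats endpoints.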
+sync_docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock', version='auto')
async_docker_client = aiodocker.Docker(url='unix:///var/run/docker.sock')
+
+logger.info('DockerApi started')
diff --git a/data/Dockerfiles/dovecot/Dockerfile b/data/Dockerfiles/dovecot/Dockerfile
index 1fa97641..2460cb04 100644
--- a/data/Dockerfiles/dovecot/Dockerfile
+++ b/data/Dockerfiles/dovecot/Dockerfile
@@ -6,7 +6,7 @@ ARG DOVECOT=2.3.19.1
ARG FLATCURVE=v0.3.2
ARG XAPIAN=1.4.21
ENV LC_ALL C
-ENV GOSU_VERSION 1.14
+
# Add groups and users before installing Dovecot to not break compatibility
RUN touch /etc/default/locale \
diff --git a/data/Dockerfiles/dovecot/imapsync_runner.pl b/data/Dockerfiles/dovecot/imapsync_runner.pl
index d3aed97b..5b297abd 100644
--- a/data/Dockerfiles/dovecot/imapsync_runner.pl
+++ b/data/Dockerfiles/dovecot/imapsync_runner.pl
@@ -51,8 +51,8 @@ sub sig_handler {
die "sig_handler received signal, preparing to exit...\n";
};
-open my $file, '<', "/etc/sogo/sieve.creds";
-my $creds = <$file>;
+open my $file, '<', "/etc/sogo/sieve.creds";
+my $creds = <$file>;
close $file;
my ($master_user, $master_pass) = split /:/, $creds;
my $sth = $dbh->prepare("SELECT id,
@@ -166,17 +166,11 @@ while ($row = $sth->fetchrow_arrayref()) {
$success = 1;
}
- $keep_job_active = 1;
- if (defined $exit_status && $exit_status eq "EXIT_AUTHENTICATION_FAILURE_USER1") {
- $keep_job_active = 0;
- }
-
- $update = $dbh->prepare("UPDATE imapsync SET returned_text = ?, success = ?, exit_status = ?, active = ? WHERE id = ?");
+ $update = $dbh->prepare("UPDATE imapsync SET returned_text = ?, success = ?, exit_status = ? WHERE id = ?");
$update->bind_param( 1, ${stdout} );
$update->bind_param( 2, ${success} );
$update->bind_param( 3, ${exit_status} );
- $update->bind_param( 4, ${keep_job_active} );
- $update->bind_param( 5, ${id} );
+ $update->bind_param( 4, ${id} );
$update->execute();
} catch {
$update = $dbh->prepare("UPDATE imapsync SET returned_text = 'Could not start or finish imapsync', success = 0 WHERE id = ?");
diff --git a/data/Dockerfiles/netfilter/Dockerfile b/data/Dockerfiles/netfilter/Dockerfile
index 621da149..bc707391 100644
--- a/data/Dockerfiles/netfilter/Dockerfile
+++ b/data/Dockerfiles/netfilter/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "Andre Peters "
ENV XTABLES_LIBDIR /usr/lib/xtables
diff --git a/data/Dockerfiles/netfilter/server.py b/data/Dockerfiles/netfilter/server.py
index 382a3f78..0b0e2a41 100644
--- a/data/Dockerfiles/netfilter/server.py
+++ b/data/Dockerfiles/netfilter/server.py
@@ -97,9 +97,9 @@ def refreshF2bregex():
f2bregex[3] = 'warning: .*\[([0-9a-f\.:]+)\]: SASL .+ authentication failed: (?!.*Connection lost to authentication server).+'
f2bregex[4] = 'warning: non-SMTP command from .*\[([0-9a-f\.:]+)]:.+'
f2bregex[5] = 'NOQUEUE: reject: RCPT from \[([0-9a-f\.:]+)].+Protocol error.+'
- f2bregex[6] = '-login: Disconnected \(auth failed, .+\): user=.*, method=.+, rip=([0-9a-f\.:]+),'
- f2bregex[7] = '-login: Aborted login \(auth failed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
- f2bregex[8] = '-login: Aborted login \(tried to use disallowed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
+ f2bregex[6] = '-login: Disconnected.+ \(auth failed, .+\): user=.*, method=.+, rip=([0-9a-f\.:]+),'
+ f2bregex[7] = '-login: Aborted login.+ \(auth failed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
+ f2bregex[8] = '-login: Aborted login.+ \(tried to use disallowed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
f2bregex[9] = 'SOGo.+ Login from \'([0-9a-f\.:]+)\' for user .+ might not have worked'
f2bregex[10] = '([0-9a-f\.:]+) \"GET \/SOGo\/.* HTTP.+\" 403 .+'
r.set('F2B_REGEX', json.dumps(f2bregex, ensure_ascii=False))
@@ -359,21 +359,28 @@ def snat4(snat_target):
chain = iptc.Chain(table, 'POSTROUTING')
table.autocommit = False
new_rule = get_snat4_rule()
- for position, rule in enumerate(chain.rules):
- match = all((
- new_rule.get_src() == rule.get_src(),
- new_rule.get_dst() == rule.get_dst(),
- new_rule.target.parameters == rule.target.parameters,
- new_rule.target.name == rule.target.name
- ))
- if position == 0:
- if not match:
- logInfo(f'Added POSTROUTING rule for source network {new_rule.src} to SNAT target {snat_target}')
- chain.insert_rule(new_rule)
- else:
- if match:
- logInfo(f'Remove rule for source network {new_rule.src} to SNAT target {snat_target} from POSTROUTING chain at position {position}')
- chain.delete_rule(rule)
+
+ if not chain.rules:
+ # if there are no rules in the chain, insert the new rule directly
+ logInfo(f'Added POSTROUTING rule for source network {new_rule.src} to SNAT target {snat_target}')
+ chain.insert_rule(new_rule)
+ else:
+ for position, rule in enumerate(chain.rules):
+ match = all((
+ new_rule.get_src() == rule.get_src(),
+ new_rule.get_dst() == rule.get_dst(),
+ new_rule.target.parameters == rule.target.parameters,
+ new_rule.target.name == rule.target.name
+ ))
+ if position == 0:
+ if not match:
+ logInfo(f'Added POSTROUTING rule for source network {new_rule.src} to SNAT target {snat_target}')
+ chain.insert_rule(new_rule)
+ else:
+ if match:
+ logInfo(f'Remove rule for source network {new_rule.src} to SNAT target {snat_target} from POSTROUTING chain at position {position}')
+ chain.delete_rule(rule)
+
table.commit()
table.autocommit = True
except:
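The rewrite fixes a corner case: with an empty POSTROUTING chain, the old for loop never ran, so the SNAT rule was never inserted at all. Condensed to a plain list, the intended behavior is: insert into an empty chain, keep the rule pinned at position 0, and delete duplicates further down. A dependency-free sketch, with python-iptables Chain/Rule objects replaced by plain values for illustration:

```python
def reconcile(rules, new_rule):
    # Condensed restatement of the patched snat4 loop (sketch only; the real
    # code compares src, dst, target name and target parameters on Rule objects).
    if not rules:
        rules.append(new_rule)              # empty chain: insert directly
        return rules
    if rules[0] != new_rule:
        rules.insert(0, new_rule)           # keep our rule at position 0
    rules[1:] = [r for r in rules[1:] if r != new_rule]  # drop duplicates below
    return rules

print(reconcile([], "SNAT"))                 # ['SNAT']  (the old loop skipped this case)
print(reconcile(["other", "SNAT"], "SNAT"))  # ['SNAT', 'other']
```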
diff --git a/data/Dockerfiles/olefy/Dockerfile b/data/Dockerfiles/olefy/Dockerfile
index 889f84b4..10d63d02 100644
--- a/data/Dockerfiles/olefy/Dockerfile
+++ b/data/Dockerfiles/olefy/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "Andre Peters <andre.peters@servercow.de>"
WORKDIR /app
diff --git a/data/Dockerfiles/phpfpm/Dockerfile b/data/Dockerfiles/phpfpm/Dockerfile
index 74035c02..c8713e04 100644
--- a/data/Dockerfiles/phpfpm/Dockerfile
+++ b/data/Dockerfiles/phpfpm/Dockerfile
@@ -1,12 +1,18 @@
-FROM php:8.0-fpm-alpine3.16
+FROM php:8.1-fpm-alpine3.17
LABEL maintainer "Andre Peters <andre.peters@servercow.de>"
-ENV APCU_PECL 5.1.21
-ENV IMAGICK_PECL 3.7.0
-# Mailparse is pulled from master branch
-#ENV MAILPARSE_PECL 3.0.2
-ENV MEMCACHED_PECL 3.2.0
-ENV REDIS_PECL 5.3.7
+# renovate: datasource=github-tags depName=krakjoe/apcu versioning=semver-coerced
+ARG APCU_PECL_VERSION=5.1.22
+# renovate: datasource=github-tags depName=Imagick/imagick versioning=semver-coerced
+ARG IMAGICK_PECL_VERSION=3.7.0
+# renovate: datasource=github-tags depName=php/pecl-mail-mailparse versioning=semver-coerced
+ARG MAILPARSE_PECL_VERSION=3.1.4
+# renovate: datasource=github-tags depName=php-memcached-dev/php-memcached versioning=semver-coerced
+ARG MEMCACHED_PECL_VERSION=3.2.0
+# renovate: datasource=github-tags depName=phpredis/phpredis versioning=semver-coerced
+ARG REDIS_PECL_VERSION=5.3.7
+# renovate: datasource=github-tags depName=composer/composer versioning=semver-coerced
+ARG COMPOSER_VERSION=2.5.4
RUN apk add -U --no-cache autoconf \
aspell-dev \
@@ -18,6 +24,7 @@ RUN apk add -U --no-cache autoconf \
freetype-dev \
g++ \
git \
+ gettext \
gettext-dev \
gmp-dev \
gnupg \
@@ -27,8 +34,11 @@ RUN apk add -U --no-cache autoconf \
imagemagick-dev \
imap-dev \
jq \
+ libavif \
+ libavif-dev \
libjpeg-turbo \
libjpeg-turbo-dev \
+ libmemcached \
libmemcached-dev \
libpng \
libpng-dev \
@@ -38,7 +48,9 @@ RUN apk add -U --no-cache autoconf \
libtool \
libwebp-dev \
libxml2-dev \
+ libxpm \
libxpm-dev \
+ libzip \
libzip-dev \
make \
mysql-client \
@@ -49,22 +61,24 @@ RUN apk add -U --no-cache autoconf \
samba-client \
zlib-dev \
tzdata \
- && git clone https://github.com/php/pecl-mail-mailparse \
- && cd pecl-mail-mailparse \
- && pecl install package.xml \
- && cd .. \
- && rm -r pecl-mail-mailparse \
- && pecl install redis-${REDIS_PECL} memcached-${MEMCACHED_PECL} APCu-${APCU_PECL} imagick-${IMAGICK_PECL} \
+ && pecl install APCu-${APCU_PECL_VERSION} \
+ && pecl install imagick-${IMAGICK_PECL_VERSION} \
+ && pecl install mailparse-${MAILPARSE_PECL_VERSION} \
+ && pecl install memcached-${MEMCACHED_PECL_VERSION} \
+ && pecl install redis-${REDIS_PECL_VERSION} \
&& docker-php-ext-enable apcu imagick memcached mailparse redis \
&& pecl clear-cache \
&& docker-php-ext-configure intl \
&& docker-php-ext-configure exif \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ \
--with-jpeg=/usr/include/ \
+ --with-webp \
+ --with-xpm \
+ --with-avif \
&& docker-php-ext-install -j 4 exif gd gettext intl ldap opcache pcntl pdo pdo_mysql pspell soap sockets zip bcmath gmp \
&& docker-php-ext-configure imap --with-imap --with-imap-ssl \
&& docker-php-ext-install -j 4 imap \
- && curl --silent --show-error https://getcomposer.org/installer | php \
+ && curl --silent --show-error https://getcomposer.org/installer | php -- --version=${COMPOSER_VERSION} \
&& mv composer.phar /usr/local/bin/composer \
&& chmod +x /usr/local/bin/composer \
&& apk del --purge autoconf \
@@ -72,15 +86,21 @@ RUN apk add -U --no-cache autoconf \
cyrus-sasl-dev \
freetype-dev \
g++ \
+ gettext-dev \
icu-dev \
imagemagick-dev \
imap-dev \
+ libavif-dev \
libjpeg-turbo-dev \
+ libmemcached-dev \
libpng-dev \
libressl-dev \
libwebp-dev \
libxml2-dev \
+ libxpm-dev \
+ libzip-dev \
make \
+ openldap-dev \
pcre-dev \
zlib-dev
@@ -88,4 +108,4 @@ COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
-CMD ["php-fpm"]
+CMD ["php-fpm"]
\ No newline at end of file
diff --git a/data/Dockerfiles/sogo/Dockerfile b/data/Dockerfiles/sogo/Dockerfile
index f08600ac..da8f23be 100644
--- a/data/Dockerfiles/sogo/Dockerfile
+++ b/data/Dockerfiles/sogo/Dockerfile
@@ -3,8 +3,9 @@ LABEL maintainer "Andre Peters <andre.peters@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ARG SOGO_DEBIAN_REPOSITORY=http://packages.sogo.nu/nightly/5/debian/
+# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced
+ARG GOSU_VERSION=1.16
ENV LC_ALL C
-ENV GOSU_VERSION 1.14
# Prerequisites
RUN echo "Building from repository $SOGO_DEBIAN_REPOSITORY" \
diff --git a/data/Dockerfiles/solr/Dockerfile b/data/Dockerfiles/solr/Dockerfile
index 06299257..0c5af1af 100644
--- a/data/Dockerfiles/solr/Dockerfile
+++ b/data/Dockerfiles/solr/Dockerfile
@@ -2,7 +2,8 @@ FROM solr:7.7-slim
USER root
-ENV GOSU_VERSION 1.11
+# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced
+ARG GOSU_VERSION=1.16
COPY solr.sh /
COPY solr-config-7.7.0.xml /
diff --git a/data/Dockerfiles/unbound/Dockerfile b/data/Dockerfiles/unbound/Dockerfile
index 0b1cefe9..d9756d04 100644
--- a/data/Dockerfiles/unbound/Dockerfile
+++ b/data/Dockerfiles/unbound/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "Andre Peters <andre.peters@servercow.de>"
diff --git a/data/Dockerfiles/watchdog/Dockerfile b/data/Dockerfiles/watchdog/Dockerfile
index 637c4680..654dea08 100644
--- a/data/Dockerfiles/watchdog/Dockerfile
+++ b/data/Dockerfiles/watchdog/Dockerfile
@@ -1,4 +1,4 @@
-FROM alpine:3.16
+FROM alpine:3.17
LABEL maintainer "André Peters <andre.peters@servercow.de>"
# Installation
diff --git a/data/conf/rspamd/custom/bulk_header.map b/data/conf/rspamd/custom/bulk_header.map
index 39aa7fea..69a20af8 100644
--- a/data/conf/rspamd/custom/bulk_header.map
+++ b/data/conf/rspamd/custom/bulk_header.map
@@ -3,7 +3,6 @@
/.*episerver.*/i
/.*supergewinne.*/i
/List-Unsubscribe.*nbps\.eu/i
-/X-Mailer: AWeber.*/i
/.*regiofinder.*/i
/.*EmailSocket.*/i
/List-Unsubscribe:.*respread.*/i
diff --git a/data/conf/rspamd/local.d/metadata_exporter.conf b/data/conf/rspamd/local.d/metadata_exporter.conf
index 47373d99..daaa79b4 100644
--- a/data/conf/rspamd/local.d/metadata_exporter.conf
+++ b/data/conf/rspamd/local.d/metadata_exporter.conf
@@ -16,8 +16,7 @@ rules {
backend = "http";
url = "http://nginx:9081/pushover.php";
selector = "mailcow_rcpt";
- # Only return msgid, do not parse the full message
- formatter = "msgid";
+ formatter = "json";
meta_headers = true;
}
}
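With formatter = "json", Rspamd posts the full message metadata as a JSON body instead of only the message ID, which the pushover.php changes below depend on. A hedged illustration of the minimal shape that script reads — the field names come from the PHP code, the values are placeholders:

```python
import json

# Field names are taken from pushover.php below; all values are placeholders.
body = json.dumps({
    "message_id": "<placeholder@example.org>",
    "header_from": ["Sender Name <sender@example.org>"],
    "header_to": ["Recipient Name <rcpt@example.org>"],
})

decoded = json.loads(body)
print(decoded["message_id"])      # what the PHP side reads as $json_body->message_id
print(decoded["header_from"][0])  # ...and as $json_body->header_from[0]
```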
diff --git a/data/conf/rspamd/local.d/multimap.conf b/data/conf/rspamd/local.d/multimap.conf
index 17ada99e..3f554c5b 100644
--- a/data/conf/rspamd/local.d/multimap.conf
+++ b/data/conf/rspamd/local.d/multimap.conf
@@ -175,7 +175,7 @@ BAD_SUBJECT_00 {
type = "header";
header = "subject";
regexp = true;
- map = "http://nullnull.org/bad-subject-regex.txt";
+ map = "http://fuzzy.mailcow.email/bad-subject-regex.txt";
score = 6.0;
symbols_set = ["BAD_SUBJECT_00"];
}
diff --git a/data/conf/rspamd/meta_exporter/pushover.php b/data/conf/rspamd/meta_exporter/pushover.php
index a5e83343..10265d15 100644
--- a/data/conf/rspamd/meta_exporter/pushover.php
+++ b/data/conf/rspamd/meta_exporter/pushover.php
@@ -47,12 +47,14 @@ if (!function_exists('getallheaders')) {
}
$headers = getallheaders();
+$json_body = json_decode(file_get_contents('php://input'));
$qid = $headers['X-Rspamd-Qid'];
$rcpts = $headers['X-Rspamd-Rcpt'];
$sender = $headers['X-Rspamd-From'];
$ip = $headers['X-Rspamd-Ip'];
$subject = $headers['X-Rspamd-Subject'];
+$messageid = $json_body->message_id;
$priority = 0;
$symbols_array = json_decode($headers['X-Rspamd-Symbols'], true);
@@ -65,6 +67,20 @@ if (is_array($symbols_array)) {
}
}
+$sender_address = $json_body->header_from[0];
+$sender_name = '-';
+if (preg_match('/(?<name>.*?)<(?<address>.*?)>/i', $sender_address, $matches)) {
+ $sender_address = $matches['address'];
+ $sender_name = trim($matches['name'], '"\' ');
+}
+
+$to_address = $json_body->header_to[0];
+$to_name = '-';
+if (preg_match('/(?<name>.*?)<(?<address>.*?)>/i', $to_address, $matches)) {
+ $to_address = $matches['address'];
+ $to_name = trim($matches['name'], '"\' ');
+}
+
$rcpt_final_mailboxes = array();
// Loop through all rcpts
@@ -229,9 +245,16 @@ foreach ($rcpt_final_mailboxes as $rcpt_final) {
$post_fields = array(
"token" => $api_data['token'],
"user" => $api_data['key'],
- "title" => sprintf("%s", str_replace(array('{SUBJECT}', '{SENDER}'), array($subject, $sender), $title)),
+ "title" => sprintf("%s", str_replace(
+ array('{SUBJECT}', '{SENDER}', '{SENDER_NAME}', '{SENDER_ADDRESS}', '{TO_NAME}', '{TO_ADDRESS}', '{MSG_ID}'),
+ array($subject, $sender, $sender_name, $sender_address, $to_name, $to_address, $messageid), $title)
+ ),
"priority" => $priority,
- "message" => sprintf("%s", str_replace(array('{SUBJECT}', '{SENDER}'), array($subject, $sender), $text))
+ "message" => sprintf("%s", str_replace(
+ array('{SUBJECT}', '{SENDER}', '{SENDER_NAME}', '{SENDER_ADDRESS}', '{TO_NAME}', '{TO_ADDRESS}', '{MSG_ID}', '\n'),
+ array($subject, $sender, $sender_name, $sender_address, $to_name, $to_address, $messageid, PHP_EOL), $text)
+ ),
+ "sound" => $attributes['sound'] ?? "pushover"
);
if ($attributes['evaluate_x_prio'] == "1" && $priority == 1) {
$post_fields['expire'] = 600;
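The two preg_match blocks above split a raw "Display Name <address>" header value into named groups, falling back to '-' when no display name is present. The same logic in Python for illustration — Python spells named groups (?P<...>) where PCRE also accepts (?<...>):

```python
import re

# Python equivalent of the PCRE pattern used in pushover.php above.
pattern = re.compile(r'(?P<name>.*?)<(?P<address>.*?)>', re.IGNORECASE)

def split_header(value):
    m = pattern.search(value)
    if not m:
        return value, '-'                        # no display name present
    # Mirror PHP's trim($matches['name'], '"\' '): drop quotes and spaces.
    return m.group('address'), m.group('name').strip('"\' ')

print(split_header('"Sender Name" <sender@example.org>'))
# -> ('sender@example.org', 'Sender Name')
print(split_header('plain@example.org'))
# -> ('plain@example.org', '-')
```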
diff --git a/data/web/_status.502.html b/data/web/_status.502.html
index efbc0e8b..35a66ba9 100644
--- a/data/web/_status.502.html
+++ b/data/web/_status.502.html
@@ -13,12 +13,12 @@
Please check the logs or contact support if the error persists.
Quick debugging
Check Nginx and PHP logs:
- docker-compose logs --tail=200 php-fpm-mailcow nginx-mailcow
+ docker compose logs --tail=200 php-fpm-mailcow nginx-mailcow
Make sure the SQL credentials in mailcow.conf (a link to .env) match your initialized SQL volume. If you see an access-denied error, you might be using the wrong mailcow.conf:
- source mailcow.conf ; docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME}
+ source mailcow.conf ; docker compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME}
If a previous installation failed, create a backup of your existing data, then remove all volumes and start over (NEVER do this on a production system, as it will remove ALL data):
BACKUP_LOCATION=/tmp/ ./helper-scripts/backup_and_restore.sh backup all
- docker-compose down --volumes ; docker-compose up -d
+ docker compose down --volumes ; docker compose up -d
Make sure your timezone is correct. Use "America/New_York", for example; do not use spaces. Check here for a list.
Click to learn more about getting support.