diff --git a/.dockerignore b/.dockerignore
new file mode 100644
index 0000000..cf6c3cd
--- /dev/null
+++ b/.dockerignore
@@ -0,0 +1,28 @@
+# Files and directories excluded from the Docker build context
+
+# Ignore specific files
+.dockerignore
+.git
+
+# Ignore build artifacts
+logs/
+_output/
+# Ignore non-essential documentation
+README.md
+README-zh_CN.md
+CONTRIBUTING.md
+CHANGELOG/
+# LICENSE
+
+# Ignore testing and linting configuration
+.golangci.yml
+
+
+# Ignore assets
+assets/
+
+# Ignore components
+components/
+
+# Ignore tools and scripts
+.github/
diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..dfdb8b7
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1 @@
+*.sh text eol=lf
diff --git a/.gitea/SECRETS_CONFIG.md b/.gitea/SECRETS_CONFIG.md
new file mode 100644
index 0000000..da67ee7
--- /dev/null
+++ b/.gitea/SECRETS_CONFIG.md
@@ -0,0 +1,255 @@
+# OpenIM Server Gitea Actions Secrets Configuration Guide
+
+## 📋 Overview
+
+This guide explains how to configure the secrets required in Gitea to support automated builds of OpenIM Server and deployment to Alibaba Cloud ACK.
+
+## 🔑 Required Secrets
+
+### 1. Docker Hub secrets
+
+#### DOCKER_USERNAME
+- **Description**: Docker Hub username
+- **Example**: `openim`
+- **How to obtain**: Register an account on Docker Hub
+
+#### DOCKER_PASSWORD
+- **Description**: Docker Hub password or access token
+- **Example**: `your-docker-password` or `dckr_pat_xxxxxxxxxxxx`
+- **Recommended**: Use an access token instead of your password
+- **How to obtain**:
+  1. Log in to Docker Hub
+  2. Go to Account Settings → Security
+  3. Create a New Access Token
+
+### 2. Alibaba Cloud ACK secrets
+
+#### KUBECONFIG
+- **Description**: kubeconfig of the Alibaba Cloud ACK cluster (base64-encoded)
+- **Format**: base64-encoded content of the YAML kubeconfig file
+- **How to obtain**: See the detailed instructions below
+
+#### ALIBABA_CLOUD_ACCESS_KEY_ID (optional)
+- **Description**: Alibaba Cloud AccessKey ID
+- **Purpose**: Used to fetch the kubeconfig via the aliyun CLI
+- **How to obtain**: Alibaba Cloud console → RAM → AccessKey management
+
+#### ALIBABA_CLOUD_ACCESS_KEY_SECRET (optional)
+- **Description**: Alibaba Cloud AccessKey Secret
+- **Purpose**: Used to fetch the kubeconfig via the aliyun CLI
+- **How to obtain**: Alibaba Cloud console → RAM → AccessKey management
+
+#### ACK_CLUSTER_ID (optional)
+- **Description**: Alibaba Cloud ACK cluster ID
+- **Purpose**: Used to fetch the kubeconfig via the aliyun CLI
+- **How to obtain**: Cluster detail page in the ACK console
+
+#### ACK_REGION (optional)
+- **Description**: Region of the Alibaba Cloud ACK cluster
+- **Default**: `cn-hangzhou`
+- **Examples**: `cn-beijing`, `cn-shanghai`, `cn-shenzhen`
+
+#### NAMESPACE (optional)
+- **Description**: Kubernetes namespace
+- **Default**: `openim`
+- **Examples**: `openim`, `default`, `production`
+
+## 🚀 Obtaining the Alibaba Cloud ACK KUBECONFIG
+
+### Method 1: Via the Alibaba Cloud console (recommended)
+
+1. **Log in to the Alibaba Cloud console**
+   - Visit the [Container Service console](https://cs.console.aliyun.com/)
+
+2. **Select the cluster**
+   - Open the detail page of the target ACK cluster
+
+3. **Get the connection information**
+   - Click the "Connection Information" tab
+   - Copy the kubeconfig content for "Public Access" or "Internal Access"
+
+4. **Encode the kubeconfig**
+ ```bash
+   # Save the kubeconfig content you obtained to a file
+ cat > kubeconfig.yaml << 'EOF'
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tLS1CRUdJTi...
+ server: https://your-cluster-id.cn-hangzhou.cs.aliyuncs.com:6443
+ name: kubernetes
+ contexts:
+ - context:
+ cluster: kubernetes
+ user: your-user
+ name: kubernetes
+ current-context: kubernetes
+ kind: Config
+ preferences: {}
+ users:
+ - name: your-user
+ user:
+ client-certificate-data: LS0tLS1CRUdJTi...
+ client-key-data: LS0tLS1CRUdJTi...
+ EOF
+
+   # Encode as base64 (GNU coreutils; on macOS use: base64 -i kubeconfig.yaml | tr -d '\n')
+   base64 -w 0 kubeconfig.yaml
+ ```
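
Before saving the encoded string as the `KUBECONFIG` secret, it can help to confirm that the encoding round-trips losslessly. A minimal sketch (the stand-in kubeconfig and file names here are illustrative, not a real cluster config):

```shell
# Stand-in kubeconfig, purely for illustration
printf 'apiVersion: v1\nkind: Config\n' > kubeconfig.yaml

# Encode without line wraps (GNU coreutils; on macOS: base64 -i kubeconfig.yaml | tr -d '\n')
ENCODED=$(base64 -w 0 kubeconfig.yaml)

# Decode again and confirm nothing was lost or wrapped
printf '%s' "$ENCODED" | base64 -d > decoded.yaml
diff -q kubeconfig.yaml decoded.yaml && echo "round-trip OK"
```

The decode step is the same one the workflow performs on the runner, so a clean round-trip locally usually rules out paste or line-wrapping problems.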
+
+### Method 2: Via the Alibaba Cloud CLI
+
+1. **Install the Alibaba Cloud CLI**
+   ```bash
+   # macOS
+   brew install aliyun-cli
+
+   # Linux
+   curl -sSL https://aliyuncli.alicdn.com/aliyun-cli-linux-latest-amd64.tgz | tar -xzC /usr/local/bin
+   ```
+
+2. **Configure authentication**
+   ```bash
+   aliyun configure
+   # Enter your AccessKey ID and AccessKey Secret
+   ```
+
+3. **Fetch and encode the kubeconfig**
+   ```bash
+   # Fetch the kubeconfig of the specified cluster
+   aliyun cs GET /k8s/clusters/{cluster_id}/user_config > kubeconfig.yaml
+
+   # Encode as base64
+   base64 -w 0 kubeconfig.yaml
+   ```
+
+## 🔧 Configuring Secrets in Gitea
+
+### 1. Open the repository settings
+
+1. Open the OpenIM Server repository
+2. Click the "Settings" tab
+3. Click "Secrets" in the left-hand menu
+
+### 2. Add the secrets
+
+Click the "New Secret" button and add the following secrets one by one:
+
+#### Required secrets
+
+##### DOCKER_USERNAME
+- **Name**: `DOCKER_USERNAME`
+- **Value**: Your Docker Hub username
+
+##### DOCKER_PASSWORD
+- **Name**: `DOCKER_PASSWORD`
+- **Value**: Your Docker Hub password or access token
+
+##### KUBECONFIG
+- **Name**: `KUBECONFIG`
+- **Value**: base64-encoded content of the kubeconfig file
+
+#### Optional secrets
+
+##### ALIBABA_CLOUD_ACCESS_KEY_ID
+- **Name**: `ALIBABA_CLOUD_ACCESS_KEY_ID`
+- **Value**: Alibaba Cloud AccessKey ID
+
+##### ALIBABA_CLOUD_ACCESS_KEY_SECRET
+- **Name**: `ALIBABA_CLOUD_ACCESS_KEY_SECRET`
+- **Value**: Alibaba Cloud AccessKey Secret
+
+##### ACK_CLUSTER_ID
+- **Name**: `ACK_CLUSTER_ID`
+- **Value**: ACK cluster ID
+
+##### ACK_REGION
+- **Name**: `ACK_REGION`
+- **Value**: ACK cluster region (default: cn-hangzhou)
+
+##### NAMESPACE
+- **Name**: `NAMESPACE`
+- **Value**: Kubernetes namespace (default: openim)
+
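
A frequent cause of `base64: invalid input` failures when the workflow later decodes the secret is a value pasted with stray newlines or spaces. A small hedged pre-save check (the `SECRET_VALUE` variable here is a stand-in for the string you are about to paste):

```shell
# Stand-in for the value about to be pasted into the KUBECONFIG secret
SECRET_VALUE="YXBpVmVyc2lvbjogdjEK"

# Clean base64 contains only A-Z, a-z, 0-9, +, / and trailing = padding
if printf '%s' "$SECRET_VALUE" | grep -q '[^A-Za-z0-9+/=]'; then
  echo "secret contains unexpected characters (stray newlines or spaces?)"
else
  echo "secret looks like clean base64"
fi
```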
+## 🚨 Security Notes
+
+### 1. Secret hygiene
+- **Never** commit secrets to the code repository
+- **Rotate** access tokens and passwords regularly
+- **Apply least privilege** when granting access
+
+### 2. KUBECONFIG security
+- **Restrict permissions**: Make sure the kubeconfig carries only the permissions it needs
+- **Update regularly**: Rotate certificates and keys on a schedule
+- **Monitor access**: Watch the cluster access logs
+
+### 3. Docker Hub security
+- **Use access tokens**: Prefer access tokens over passwords
+- **Restrict permissions**: Grant push access only to the repositories that need it
+- **Rotate regularly**: Refresh access tokens on a schedule
+
+## 🔍 Troubleshooting
+
+### Common issues
+
+#### 1. Docker login fails
+```
+Error: Cannot perform an interactive login from a non TTY device
+```
+**Solution**: Check that `DOCKER_USERNAME` and `DOCKER_PASSWORD` are configured correctly
+
+#### 2. kubectl cannot connect
+```
+Unable to connect to the server: x509: certificate signed by unknown authority
+```
+**Solution**: Check that the certificate data in `KUBECONFIG` is correct
+
+#### 3. Image pull fails
+```
+Error: pull access denied for openim/openim-api
+```
+**Solution**: Check the Docker Hub permissions and the image name
+
+#### 4. Deployment times out
+```
+deployment "openim-api" exceeded its progress deadline
+```
+**Solution**: Check that the cluster has enough resources and that the Pods start correctly
+
+### Debug commands
+
+```bash
+# Check that the secrets are set
+echo "Docker username: $DOCKER_USERNAME"
+echo "Kubeconfig length: ${#KUBECONFIG}"
+
+# Test the Docker login
+echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
+
+# Test the kubectl connection
+echo "$KUBECONFIG" | base64 -d > ~/.kube/config
+kubectl cluster-info
+kubectl get nodes
+```
+
+## 📚 References
+
+- [Gitea Actions documentation](https://docs.gitea.io/en-us/actions/)
+- [Alibaba Cloud ACK documentation](https://help.aliyun.com/product/85222.html)
+- [Docker Hub access tokens](https://docs.docker.com/docker-hub/access-tokens/)
+- [Kubernetes kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
+- [OpenIM documentation](https://docs.openim.io/)
+
+## 🆘 Support
+
+If you run into problems, check:
+1. Whether the secrets are configured correctly
+2. Whether the network connection is working
+3. Whether the permissions are sufficient
+4. Whether the cluster is healthy
+
+For more help, consult:
+- The Gitea Actions logs
+- The Alibaba Cloud ACK console
+- The Docker Hub status page
+- The OpenIM documentation
diff --git a/.gitea/workflows/build.yml b/.gitea/workflows/build.yml
new file mode 100644
index 0000000..bfb90ea
--- /dev/null
+++ b/.gitea/workflows/build.yml
@@ -0,0 +1,535 @@
+name: OpenIM Server Build and Release
+
+on:
+ push:
+ branches: [ main, master, wallet, develop, release-* ]
+ paths:
+ - '**'
+ pull_request:
+ branches: [ main, master, wallet ]
+ paths:
+ - '**'
+ release:
+ types: [published]
+ workflow_dispatch:
+ inputs:
+ tag:
+ description: "Tag version to be used for Docker image"
+ required: true
+ default: "latest"
+
+concurrency:
+ group: ${{ github.workflow }}-${{ github.ref }}
+ cancel-in-progress: true
+
+env:
+ REGISTRY: docker.io
+ DOCKER_USER: ${{ secrets.DOCKER_USERNAME || 'mag1666888' }}
+ GO_VERSION: "1.24"
+
+jobs:
+ build-and-push:
+ runs-on: openim
+ permissions:
+ contents: read
+ packages: write
+
+ steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+        with:
+          submodules: recursive
+
+      - name: Configure SSH key
+        run: |
+          echo "🔑 Configuring SSH key..."
+          # Create the .ssh directory
+          mkdir -p ~/.ssh
+          chmod 700 ~/.ssh
+
+          # Fetch the SSH private key from secrets and base64-decode it
+          echo "${{ secrets.SSH_PRIVATE_KEY }}" | base64 -d > ~/.ssh/id_rsa
+          chmod 600 ~/.ssh/id_rsa
+
+          # Pre-seed the SSH host key (optional; avoids the first-connection prompt)
+          ssh-keyscan -H git.imall.cloud >> ~/.ssh/known_hosts 2>/dev/null || true
+          chmod 644 ~/.ssh/known_hosts
+
+          # Verify the SSH key
+          if [ -f ~/.ssh/id_rsa ]; then
+            echo "✅ SSH key configured"
+          else
+            echo "❌ SSH key configuration failed"
+            exit 1
+          fi
+
+      - name: Checkout protocol repository
+        run: |
+          echo "📦 Cloning protocol repository..."
+          # Clone the protocol repository over SSH
+          git clone git@git.imall.cloud:openim/protocol.git ./protocol || true
+          cd ./protocol
+          git fetch --all
+          git checkout v1.0.4
+          cd ..
+          # Symlink protocol into the parent directory so that `replace ../protocol` resolves
+          ln -sf $PWD/protocol ../protocol || true
+          echo "✅ protocol repository checked out"
+
+      - name: Set up Go cache
+ uses: actions/cache@v4
+ with:
+ path: |
+ ~/.cache/go-build
+ ~/go/pkg/mod
+ key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
+ restore-keys: |
+ ${{ runner.os }}-go-
+
+      - name: Set up Go environment
+ uses: actions/setup-go@v5
+ with:
+ go-version: ${{ env.GO_VERSION }}
+ cache: true
+
+      - name: Check Go environment
+ run: |
+ if command -v go &> /dev/null; then
+ echo "Go: $(go version)"
+ go mod verify >/dev/null 2>&1
+ go mod tidy >/dev/null 2>&1
+ go mod download >/dev/null 2>&1
+ echo "Go modules verified"
+ else
+ echo "Go not found, skipping Go steps"
+ fi
+
+      - name: Check system resources
+        run: |
+          echo "System resources:"
+          echo "CPU cores: $(nproc)"
+          echo "Total memory: $(free -h | awk '/^Mem:/ {print $2}')"
+          echo "Available memory: $(free -h | awk '/^Mem:/ {print $7}')"
+          echo "Disk space: $(df -h / | awk 'NR==2 {print $4}')"
+          echo ""
+
+      - name: Clean up Docker environment
+        run: |
+          echo "Cleaning up Docker environment..."
+          docker image prune -f >/dev/null 2>&1 || true
+
+      - name: Log in to Docker Hub
+ uses: docker/login-action@v3.3.0
+ with:
+ registry: ${{ env.REGISTRY }}
+ username: ${{ secrets.DOCKER_USERNAME }}
+ password: ${{ secrets.DOCKER_PASSWORD }}
+
+      - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3.8.0
+ with:
+ driver-opts: |
+ image=moby/buildkit:buildx-stable-1
+ network=host
+
+      - name: Build and push all service images
+        working-directory: ${{ github.workspace }}
+        run: |
+          # Set up error handling
+          trap 'echo "Script failed at line $LINENO with exit status $?"' ERR
+          set -e
+
+          # Determine the version tag (falls back to the branch name)
+          if [ "${{ github.event_name }}" = "release" ]; then
+            VERSION_TAG="${{ github.event.release.tag_name }}"
+          elif [ -n "${{ github.event.inputs.tag }}" ]; then
+            VERSION_TAG="${{ github.event.inputs.tag }}"
+          else
+            # Use the branch name as the tag
+            VERSION_TAG="${{ github.ref_name }}"
+          fi
+
+          echo "Building OpenIM Server images (tag: $VERSION_TAG)"
+          echo "Docker user: $DOCKER_USER"
+
+          # Build order (openim-msggateway and openim-push are built first)
+          CORE_SERVICES=(
+            "openim-msggateway"
+            "openim-push"
+            "openim-rpc-msg"
+            "openim-api"
+            "openim-msgtransfer"
+            "openim-crontask"
+          )
+
+          RPC_SERVICES=(
+            "openim-rpc-auth"
+            "openim-rpc-user"
+            "openim-rpc-friend"
+            "openim-rpc-group"
+            "openim-rpc-conversation"
+            "openim-rpc-third"
+          )
+
+          # Build helper
+          build_service() {
+            local service=$1
+            local dockerfile="build/images/$service/Dockerfile"
+            local start_time=$(date +%s)
+
+            if [ -f "$dockerfile" ]; then
+              echo "📦 Building $service..."
+
+              if docker buildx build \
+                --platform linux/amd64 \
+                --file "$dockerfile" \
+                --tag "$DOCKER_USER/$service:$VERSION_TAG" \
+                --tag "$DOCKER_USER/$service:prod" \
+                --push \
+                --progress=plain \
+                --cache-from type=gha,scope=openim-build-${{ github.ref_name }}-$service \
+                --cache-to type=gha,mode=max,scope=openim-build-${{ github.ref_name }}-$service \
+                --build-arg BUILDKIT_INLINE_CACHE=1 \
+                --build-arg GOCACHE=/go-cache \
+                --build-arg GOMAXPROCS=$(nproc) \
+                --provenance=false \
+                --sbom=false \
+                . > "/tmp/build_${service}.log" 2>&1; then
+
+                local end_time=$(date +%s)
+                local duration=$((end_time - start_time))
+                echo "✅ $service built successfully (${duration}s)"
+              else
+                echo "❌ $service build failed; full error log:"
+                echo "=========================================="
+                cat "/tmp/build_${service}.log"
+                echo "=========================================="
+                echo "Last 50 lines of the error log:"
+                tail -50 "/tmp/build_${service}.log"
+                return 1
+              fi
+            else
+              echo "❌ $service - Dockerfile not found: $dockerfile"
+              return 1
+            fi
+          }
+
+          # Initialize the build statistics
+          failed_services=()
+          successful_services=()
+
+          # Total number of services
+          total_services=$((${#CORE_SERVICES[@]} + ${#RPC_SERVICES[@]}))
+          echo "Building $total_services services..."
+
+          # Build the core services first
+          for service in "${CORE_SERVICES[@]}"; do
+            if build_service "$service"; then
+              successful_services+=("$service")
+            else
+              failed_services+=("$service")
+            fi
+          done
+
+          # Build the RPC services
+          for service in "${RPC_SERVICES[@]}"; do
+            if build_service "$service"; then
+              successful_services+=("$service")
+            else
+              failed_services+=("$service")
+            fi
+          done
+
+          # Build summary
+          echo ""
+          echo "🎉 Build summary"
+          echo "✅ Built successfully: ${#successful_services[@]} services"
+          echo "❌ Failed to build: ${#failed_services[@]} services"
+          echo "📊 Success rate: $(( ${#successful_services[@]} * 100 / total_services ))%"
+
+          if [ ${#failed_services[@]} -gt 0 ]; then
+            echo ""
+            echo "Error logs for failed services:"
+            for service in "${failed_services[@]}"; do
+              echo "=== $service ==="
+              cat "/tmp/build_${service}.log" 2>/dev/null || echo "unable to read log file"
+              echo ""
+            done
+            exit 1
+          fi
+
+      - name: Configure kubectl
+        if: success()
+        run: |
+          echo "🔧 Configuring kubectl for Alibaba Cloud ACK..."
+          # Create the kubeconfig directory
+          mkdir -p ~/.kube
+
+          # Fetch the kubeconfig content from secrets and process it
+          echo "📝 Processing kubeconfig file..."
+
+          # Decode the base64 content directly
+          echo "🔍 Decoding the base64-encoded kubeconfig..."
+          echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
+
+          # Verify that decoding succeeded
+          if [ ! -s ~/.kube/config ]; then
+            echo "❌ kubeconfig decoding failed or the file is empty"
+            exit 1
+          fi
+
+          # Summarize the kubeconfig without printing credentials to the log
+          echo "🔍 Decoded kubeconfig summary (secrets redacted):"
+          kubectl config view
+
+          chmod 600 ~/.kube/config
+          echo "✅ kubeconfig configured"
+
+          # Verify the kubectl configuration in detail
+          echo "🔍 Verifying kubectl configuration..."
+          echo "Current context:"
+          kubectl config current-context 2>&1 || echo "unable to get current context"
+
+          echo "Available contexts:"
+          kubectl config get-contexts 2>&1 || echo "unable to list contexts"
+
+          echo "Cluster info:"
+          kubectl cluster-info 2>&1 || echo "unable to get cluster info"
+
+          echo "Version:"
+          kubectl version 2>&1 || echo "unable to get version info"
+
+      - name: Deploy to Alibaba Cloud ACK
+        if: success()
+        run: |
+          echo "🚀 Deploying to Alibaba Cloud ACK..."
+
+          # Quick kubectl sanity check
+          if ! kubectl version --client >/dev/null 2>&1; then
+            echo "❌ kubectl unavailable, skipping deployment"
+            exit 0
+          fi
+
+          echo "✅ kubectl available, attempting deployment..."
+
+          # Determine the version tag (falls back to the branch name)
+          if [ "${{ github.event_name }}" = "release" ]; then
+            VERSION_TAG="${{ github.event.release.tag_name }}"
+          elif [ -n "${{ github.event.inputs.tag }}" ]; then
+            VERSION_TAG="${{ github.event.inputs.tag }}"
+          else
+            # Use the branch name as the tag
+            VERSION_TAG="${{ github.ref_name }}"
+          fi
+
+          echo "Version tag: $VERSION_TAG"
+          echo "Docker user: $DOCKER_USER"
+
+          # Check that the deployment manifests exist
+          DEPLOY_DIR="deployments/deploy"
+          if [ ! -d "$DEPLOY_DIR" ]; then
+            echo "❌ Deployment directory not found: $DEPLOY_DIR"
+            echo "📁 Current directory contents:"
+            ls -la
+            echo "⚠️ Deployment skipped, but the build pipeline continues"
+            exit 0
+          fi
+
+          echo "📁 Entering deployment directory: $DEPLOY_DIR"
+          cd "$DEPLOY_DIR"
+
+          # Set the namespace
+          NS=${NS:-default}
+          echo "Using namespace: $NS"
+
+          # Create the namespace if it does not exist
+          kubectl get ns "$NS" >/dev/null 2>&1 || kubectl create ns "$NS"
+          kubectl config set-context --current --namespace="$NS"
+
+          # Attempt the deployment; continue on failure
+          echo "🚀 Deploying OpenIM Server services..."
+
+          # All deployment manifests
+          DEPLOYMENT_FILES=(
+            "openim-api-deployment.yml"
+            "openim-crontask-deployment.yml"
+            "openim-rpc-user-deployment.yml"
+            "openim-msggateway-deployment.yml"
+            "openim-push-deployment.yml"
+            "openim-msgtransfer-deployment.yml"
+            "openim-rpc-conversation-deployment.yml"
+            "openim-rpc-auth-deployment.yml"
+            "openim-rpc-group-deployment.yml"
+            "openim-rpc-friend-deployment.yml"
+            "openim-rpc-msg-deployment.yml"
+            "openim-rpc-third-deployment.yml"
+          )
+
+          # Rewrite the image references and deploy
+          DEPLOYMENT_ERRORS=()
+          for file in "${DEPLOYMENT_FILES[@]}"; do
+            if [ -f "$file" ]; then
+              echo "📦 Processing $file..."
+              # Back up the original file (it carries imagePullSecrets and other settings)
+              cp "$file" "$file.bak"
+              # Rewrite the image owner; the manifests already pin the :prod tag
+              sed -i "s|image: .*/openim-|image: $DOCKER_USER/openim-|g" "$file"
+              # Verify that imagePullSecrets survived
+              if ! grep -q "imagePullSecrets:" "$file"; then
+                echo "⚠️ Warning: imagePullSecrets missing from $file, restoring from backup..."
+                cp "$file.bak" "$file"
+                # Rewrite only the image owner, leaving everything else untouched
+                sed -i "s|image: .*/openim-|image: $DOCKER_USER/openim-|g" "$file"
+              fi
+              # Apply the file once, capturing output so a failure is not re-applied
+              echo "🔍 Running: kubectl apply -f $file"
+              if APPLY_OUTPUT=$(kubectl apply -f "$file" 2>&1); then
+                echo "$APPLY_OUTPUT"
+                echo "✅ $file applied successfully"
+              else
+                echo "❌ $file failed to apply:"
+                echo "$APPLY_OUTPUT"
+                DEPLOYMENT_ERRORS+=("$file: $APPLY_OUTPUT")
+                echo "⚠️ Continuing with the next file..."
+              fi
+              # Clean up the backup
+              rm -f "$file.bak"
+            else
+              echo "⚠️ $file not found, skipping"
+            fi
+          done
+
+          # Deployment error summary
+          if [ ${#DEPLOYMENT_ERRORS[@]} -gt 0 ]; then
+            echo ""
+            echo "🚨 Deployment error summary:"
+            echo "=========================================="
+            for error in "${DEPLOYMENT_ERRORS[@]}"; do
+              echo "❌ $error"
+            done
+            echo "=========================================="
+          fi
+
+          echo "✅ Deployment step complete"
+
+          # Force-restart all deployments so they pick up the new images
+          echo "🔄 Restarting all deployments to pick up the new images..."
+          DEPLOYMENTS=(
+            "openim-api"
+            "openim-crontask"
+            "messagegateway-rpc-server"
+            "openim-msgtransfer-server"
+            "push-rpc-server"
+            "auth-rpc-server"
+            "user-rpc-server"
+            "friend-rpc-server"
+            "group-rpc-server"
+            "conversation-rpc-server"
+            "third-rpc-server"
+            "msg-rpc-server"
+          )
+
+          for deployment in "${DEPLOYMENTS[@]}"; do
+            echo "🔄 Restarting deployment: $deployment"
+            # Delete the pods first to force an image pull (even with imagePullPolicy: Always, deleting the pods is sometimes necessary)
+            if kubectl delete pods -l app="$deployment" --grace-period=0 --force 2>&1; then
+              echo "✅ $deployment pods deleted; they will be recreated with the new image"
+            fi
+            # Then restart the deployment
+            if kubectl rollout restart deployment "$deployment" 2>&1; then
+              echo "✅ $deployment restarted"
+            else
+              echo "⚠️ $deployment restart failed; it may not exist"
+            fi
+          done
+
+          echo "⏳ Waiting for the rollouts to finish..."
+          for deployment in "${DEPLOYMENTS[@]}"; do
+            echo "⏳ Waiting for $deployment to become ready..."
+            if kubectl rollout status deployment "$deployment" --timeout=300s; then
+              echo "✅ $deployment rolled out"
+            else
+              echo "⚠️ $deployment timed out; checking pod status..."
+              echo "Pod status:"
+              kubectl get pods -l app="$deployment" 2>&1 || echo "unable to get pod status"
+              echo "Pod events:"
+              kubectl get events --field-selector involvedObject.name="$deployment" --sort-by='.lastTimestamp' 2>&1 | tail -10 || echo "unable to get events"
+              echo "Deployment details:"
+              kubectl describe deployment "$deployment" 2>&1 | tail -20 || echo "unable to get deployment details"
+            fi
+          done
+
+      - name: Verify deployment status
+        if: success()
+        run: |
+          echo "🔍 Verifying deployment status..."
+
+          # Check cluster connectivity and permissions in detail
+          echo "🔍 Checking cluster connectivity..."
+          if kubectl cluster-info 2>&1; then
+            echo "✅ Cluster connection OK"
+          else
+            CLUSTER_ERROR=$(kubectl cluster-info 2>&1)
+            echo "❌ Cluster connection failed: $CLUSTER_ERROR"
+          fi
+
+          echo "🔍 Checking namespace permissions..."
+          if kubectl get namespaces 2>&1; then
+            echo "✅ Namespace permissions OK"
+          else
+            NS_ERROR=$(kubectl get namespaces 2>&1)
+            echo "❌ Insufficient namespace permissions: $NS_ERROR"
+          fi
+
+          echo "🔍 Checking pod status..."
+          if kubectl get pods -l app=openim 2>&1; then
+            echo "✅ Pod status check complete"
+          else
+            POD_ERROR=$(kubectl get pods -l app=openim 2>&1)
+            echo "❌ Pod status check failed: $POD_ERROR"
+          fi
+
+          echo "🔍 Checking deployment status..."
+          if kubectl get deployments -l app=openim 2>&1; then
+            echo "✅ Deployment status check complete"
+          else
+            DEPLOY_ERROR=$(kubectl get deployments -l app=openim 2>&1)
+            echo "❌ Deployment status check failed: $DEPLOY_ERROR"
+          fi
+
+          echo "✅ Verification complete"
+
+      - name: Generate build report
+        run: |
+          # Determine the version tag (falls back to the branch name)
+          if [ "${{ github.event_name }}" = "release" ]; then
+            VERSION_TAG="${{ github.event.release.tag_name }}"
+          elif [ -n "${{ github.event.inputs.tag }}" ]; then
+            VERSION_TAG="${{ github.event.inputs.tag }}"
+          else
+            # Use the branch name as the tag
+            VERSION_TAG="${{ github.ref_name }}"
+          fi
+
+          echo "## OpenIM Server Build Complete" >> $GITHUB_STEP_SUMMARY
+          echo "**Tag:** $VERSION_TAG" >> $GITHUB_STEP_SUMMARY
+          echo "**Branch:** ${{ github.ref_name }}" >> $GITHUB_STEP_SUMMARY
+          echo "**Commit:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
+          echo "**Platform:** linux/amd64" >> $GITHUB_STEP_SUMMARY
+          echo "**Mode:** full build and deploy" >> $GITHUB_STEP_SUMMARY
+ echo "" >> $GITHUB_STEP_SUMMARY
+ echo "**Images:**" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-api:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-msggateway:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-msgtransfer:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-push:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-crontask:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-auth:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-user:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-friend:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-group:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-conversation:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-third:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
+ echo "- \`$DOCKER_USER/openim-rpc-msg:$VERSION_TAG\`" >> $GITHUB_STEP_SUMMARY
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..3d7401b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,426 @@
+# Copyright © 2023 OpenIMSDK.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# ==============================================================================
+# Overall design of this .gitignore: files and directories that git should ignore
+#===============================================================================
+#
+
+### OpenIM developer supplement ###
+logs
+.devcontainer
+components
+out-test
+Dockerfile.cross
+
+### Makefile ###
+tmp/
+bin/
+output/
+_output/
+deployments/charts/generated-configs/
+
+### OpenIM Config ###
+.env
+config/config.yaml
+config/notification.yaml
+
+### OpenIM deploy ###
+deployments/openim-server/charts
+
+# files used by the developer
+.idea.md
+.todo.md
+.note.md
+scripts/chat_register_login.sh
+scripts/check_online_vs_register.sh
+scripts/cleanup_online_keys.sh
+scripts/cleanup_online_keys_cluster.sh
+scripts/start_all_local.sh
+scripts/switch_online_prefix_roll_restart_verify.sh
+scripts/verify_online_prefix.sh
+
+# ==============================================================================
+# Created by https://www.toptal.com/developers/gitignore/api/go,git,vim,tags,test,emacs,backup,jetbrains
+# Edit at https://www.toptal.com/developers/gitignore?templates=go,git,vim,tags,test,emacs,backup,jetbrains
+
+### Backup ###
+*.bak
+*.gho
+*.ori
+*.orig
+*.tmp
+
+### Emacs ###
+# -*- mode: gitignore; -*-
+*~
+\#*\#
+/.emacs.desktop
+/.emacs.desktop.lock
+*.elc
+auto-save-list
+tramp
+.\#*
+
+# Org-mode
+.org-id-locations
+*_archive
+
+# flymake-mode
+*_flymake.*
+
+# eshell files
+/eshell/history
+/eshell/lastdir
+
+# elpa packages
+/elpa/
+
+# reftex files
+*.rel
+
+# AUCTeX auto folder
+/auto/
+
+# cask packages
+.cask/
+dist/
+
+# Flycheck
+flycheck_*.el
+
+# server auth directory
+/server/
+
+# projectiles files
+.projectile
+
+# directory configuration
+.dir-locals.el
+
+# network security
+/network-security.data
+
+### vscode ###
+.vscode
+.vscode/*
+!.vscode/settings.json
+!.vscode/tasks.json
+!.vscode/launch.json
+!.vscode/extensions.json
+*.code-workspace
+
+# End of https://www.toptal.com/developers/gitignore/api/vim,jetbrains,vscode,git,go,tags,backup,test
+
+### Git ###
+# Created by git for backups. To disable backups in Git:
+# $ git config --global mergetool.keepBackup false
+
+# Created by git when using merge tools for conflicts
+*.BACKUP.*
+*.BASE.*
+*.LOCAL.*
+*.REMOTE.*
+*_BACKUP_*.txt
+*_BASE_*.txt
+*_LOCAL_*.txt
+*_REMOTE_*.txt
+
+### Go ###
+# If you prefer the allow list template instead of the deny list, see community template:
+# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
+#
+# Binaries for programs and plugins
+*.exe
+*.exe~
+*.dll
+*.so
+*.dylib
+
+# Test binary, built with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Dependency directories (remove the comment below to include it)
+vendor/
+
+# Go workspace file
+# go.work
+go.work.sum
+
+### JetBrains ###
+# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
+# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
+
+# User-specific stuff
+.idea/
+.idea/**/workspace.xml
+.idea/**/tasks.xml
+.idea/**/usage.statistics.xml
+.idea/**/dictionaries
+.idea/**/shelf
+
+# AWS User-specific
+.idea/**/aws.xml
+
+# Generated files
+.idea/**/contentModel.xml
+
+# Sensitive or high-churn files
+.idea/**/dataSources/
+.idea/**/dataSources.ids
+.idea/**/dataSources.local.xml
+.idea/**/sqlDataSources.xml
+.idea/**/dynamic.xml
+.idea/**/uiDesigner.xml
+.idea/**/dbnavigator.xml
+
+# Gradle
+.idea/**/gradle.xml
+.idea/**/libraries
+
+# Gradle and Maven with auto-import
+# When using Gradle or Maven with auto-import, you should exclude module files,
+# since they will be recreated, and may cause churn. Uncomment if using
+# auto-import.
+# .idea/artifacts
+# .idea/compiler.xml
+# .idea/jarRepositories.xml
+# .idea/modules.xml
+# .idea/*.iml
+# .idea/modules
+# *.iml
+# *.ipr
+
+# CMake
+cmake-build-*/
+
+# Mongo Explorer plugin
+.idea/**/mongoSettings.xml
+
+# File-based project format
+*.iws
+
+# IntelliJ
+out/
+
+# mpeltonen/sbt-idea plugin
+.idea_modules/
+
+# JIRA plugin
+atlassian-ide-plugin.xml
+
+# Cursive Clojure plugin
+.idea/replstate.xml
+
+# SonarLint plugin
+.idea/sonarlint/
+
+# Crashlytics plugin (for Android Studio and IntelliJ)
+com_crashlytics_export_strings.xml
+crashlytics.properties
+crashlytics-build.properties
+fabric.properties
+
+# Editor-based Rest Client
+.idea/httpRequests
+
+# Android studio 3.1+ serialized cache file
+.idea/caches/build_file_checksums.ser
+
+### JetBrains Patch ###
+# Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721
+
+# *.iml
+# modules.xml
+# .idea/misc.xml
+# *.ipr
+
+# Sonarlint plugin
+# https://plugins.jetbrains.com/plugin/7973-sonarlint
+.idea/**/sonarlint/
+
+# SonarQube Plugin
+# https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin
+.idea/**/sonarIssues.xml
+
+# Markdown Navigator plugin
+# https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced
+.idea/**/markdown-navigator.xml
+.idea/**/markdown-navigator-enh.xml
+.idea/**/markdown-navigator/
+
+# Cache file creation bug
+# See https://youtrack.jetbrains.com/issue/JBR-2257
+.idea/$CACHE_FILE$
+
+# CodeStream plugin
+# https://plugins.jetbrains.com/plugin/12206-codestream
+.idea/codestream.xml
+
+# Azure Toolkit for IntelliJ plugin
+# https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij
+.idea/**/azureSettings.xml
+
+### Tags ###
+# Ignore tags created by etags, ctags, gtags (GNU global) and cscope
+TAGS
+.TAGS
+!TAGS/
+tags
+.tags
+!tags/
+gtags.files
+GTAGS
+GRTAGS
+GPATH
+GSYMS
+cscope.files
+cscope.out
+cscope.in.out
+cscope.po.out
+
+
+### Test ###
+### Ignore all files that could be used to test your code and
+### you wouldn't want to push
+
+# Reference https://en.wikipedia.org/wiki/Metasyntactic_variable
+
+# Most common
+*foo
+*bar
+*fubar
+*foobar
+*baz
+
+# Less common
+*qux
+*quux
+*bongo
+*bazola
+*ztesch
+
+# UK, Australia
+*wibble
+*wobble
+*wubble
+*flob
+*blep
+*blah
+*boop
+*beep
+
+# Japanese
+*hoge
+*piyo
+*fuga
+*hogera
+*hogehoge
+
+# Portugal, Spain
+*fulano
+*sicrano
+*beltrano
+*mengano
+*perengano
+*zutano
+
+# France, Italy, the Netherlands
+*toto
+*titi
+*tata
+*tutu
+*pippo
+*pluto
+*paperino
+*aap
+*noot
+*mies
+
+# Other names that would make sense
+*tests
+*testsdir
+*testsfile
+*testsfiles
+*testdir
+*testfile
+*testfiles
+*testing
+*testingdir
+*testingfile
+*testingfiles
+*temp
+*tempdir
+*tempfile
+*tempfiles
+*tmp
+*tmpdir
+*tmpfile
+*tmpfiles
+*lol
+
+### Vim ###
+# Swap
+[._]*.s[a-v][a-z]
+# comment out the next line if you don't need vector files
+!*.svg
+[._]*.sw[a-p]
+[._]s[a-rt-v][a-z]
+[._]ss[a-gi-z]
+[._]sw[a-p]
+
+# Session
+Session.vim
+Sessionx.vim
+
+# Temporary
+.netrwhist
+# Auto-generated tag files
+# Persistent undo
+[._]*.un~
+
+# End of https://www.toptal.com/developers/gitignore/api/go,git,vim,tags,test,emacs,backup,jetbrains
+
+### macOS ###
+# General
+.DS_Store
+.AppleDouble
+.LSOverride
+
+# Icon must end with two \r
+Icon
+
+# Thumbnails
+._*
+
+# Files that might appear in the root of a volume
+.DocumentRevisions-V100
+.fseventsd
+.Spotlight-V100
+.TemporaryItems
+.Trashes
+.VolumeIcon.icns
+.com.apple.timemachine.donotpresent
+
+# Directories potentially created on remote AFP share
+.AppleDB
+.AppleDesktop
+Network Trash Folder
+Temporary Items
+.apdisk
+
+.idea
+dist/
diff --git a/.golangci.yml b/.golangci.yml
new file mode 100644
index 0000000..c1a5bfd
--- /dev/null
+++ b/.golangci.yml
@@ -0,0 +1,912 @@
+# options for analysis running
+run:
+  # default concurrency is the number of available CPUs
+ concurrency: 4
+
+ # timeout for analysis, e.g. 30s, 5m, default is 1m
+ timeout: 5m
+
+ # exit code when at least one issue was found, default is 1
+ issues-exit-code: 1
+
+ # include test files or not, default is true
+ tests: true
+
+ # list of build tags, all linters use it. Default is empty list.
+ build-tags:
+ - mytag
+
+ # which dirs to skip: issues from them won't be reported;
+ # can use regexp here: generated.*, regexp is applied on full path;
+ # default value is empty list, but default dirs are skipped independently
+ # from this option's value (see skip-dirs-use-default).
+ # "/" will be replaced by current OS file path separator to properly work
+ # on Windows.
+ # skip-dirs:
+ # - components
+ # - docs
+ # - util
+ # - .*~
+ # - api/swagger/docs
+
+
+ # - server/docs
+ # - components/mnt/config/certs
+ # - logs
+
+ # default is true. Enables skipping of directories:
+ # vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
+ # skip-dirs-use-default: true
+
+ # which files to skip: they will be analyzed, but issues from them
+ # won't be reported. Default value is empty list, but there is
+ # no need to include all autogenerated files, we confidently recognize
+ # autogenerated files. If it's not please let us know.
+ # "/" will be replaced by current OS file path separator to properly work
+ # on Windows.
+ # skip-files:
+ # - ".*\\.my\\.go$"
+ # - _test.go
+ # - ".*_test.go"
+ # - "mocks/"
+ # - ".github/"
+ # - "logs/"
+ # - "_output/"
+ # - "components/"
+
+ # by default isn't set. If set we pass it to "go list -mod={option}". From "go help modules":
+ # If invoked with -mod=readonly, the go command is disallowed from the implicit
+ # automatic updating of go.mod described above. Instead, it fails when any changes
+ # to go.mod are needed. This setting is most useful to check that go.mod does
+ # not need updates, such as in a continuous integration and testing system.
+ # If invoked with -mod=vendor, the go command assumes that the vendor
+ # directory holds the correct copies of dependencies and ignores
+ # the dependency descriptions in go.mod.
+ #modules-download-mode: release|readonly|vendor
+
+ # Allow multiple parallel golangci-lint instances running.
+ # If false (default) - golangci-lint acquires file lock on start.
+ allow-parallel-runners: true
+
+
+# output configuration options
+output:
+ # colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
+ # format: colored-line-number
+
+ # print lines of code with issue, default is true
+ print-issued-lines: true
+
+ # print linter name in the end of issue text, default is true
+ print-linter-name: true
+
+ # make issues output unique by line, default is true
+ uniq-by-line: true
+
+ # add a prefix to the output file references; default is no prefix
+ path-prefix: ""
+
+ # sorts results by: filepath, line and column
+ sort-results: true
+
+# all available settings of specific linters
+linters-settings:
+ bidichk:
+ # The following configurations check for all mentioned invisible unicode
+ # runes. It can be omitted because all runes are enabled by default.
+ left-to-right-embedding: true
+ right-to-left-embedding: true
+ pop-directional-formatting: true
+ left-to-right-override: true
+ right-to-left-override: true
+ left-to-right-isolate: true
+ right-to-left-isolate: true
+ first-strong-isolate: true
+ pop-directional-isolate: true
+
+ dupl:
+ # tokens count to trigger issue, 150 by default
+ threshold: 200
+ errcheck:
+ # report about not checking of errors in type assertions: `a := b.(MyStruct)`;
+ # default is false: such cases aren't reported by default.
+ check-type-assertions: false
+
+ # report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
+ # default is false: such cases aren't reported by default.
+ check-blank: false
+
+ # [deprecated] comma-separated list of pairs of the form pkg:regex
+ # the regex is used to ignore names within pkg. (default "fmt:.*").
+ # see https://github.com/kisielk/errcheck#the-deprecated-method for details
+ #ignore: GenMarkdownTree,os:.*,BindPFlags,WriteTo,Help
+ #ignore: (os\.)?std(out|err)\..*|.*Close|.*Flush|os\.Remove(All)?|.*print(f|ln)?|os\.(Un)?Setenv
+
+ # path to a file containing a list of functions to exclude from checking
+ # see https://github.com/kisielk/errcheck#excluding-functions for details
+ # exclude: errcheck.txt
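+
+    # A hypothetical Go snippet illustrating what errcheck reports with the
+    # settings above (assumed example, not from this codebase):
+    #
+    #   f, _ := os.Open("config.yaml") // not reported: check-blank is false
+    #   f.Close()                      // reported: error return value ignored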
+
+ errorlint:
+ # Check whether fmt.Errorf uses the %w verb for formatting errors. See the readme for caveats
+ errorf: true
+ # Check for plain type assertions and type switches
+ asserts: true
+ # Check for plain error comparisons
+ comparison: true
+
+ exhaustive:
+ # Program elements to check for exhaustiveness.
+ # Default: [ switch ]
+ check:
+ - switch
+ - map
+ # check switch statements in generated files also
+ check-generated: false
+ # indicates that switch statements are to be considered exhaustive if a
+ # 'default' case is present, even if all enum members aren't listed in the
+ # switch
+ default-signifies-exhaustive: false
+ # enum members matching the supplied regex do not have to be listed in
+ # switch statements to satisfy exhaustiveness
+ ignore-enum-members: ""
+ # consider enums only in package scopes, not in inner scopes
+ package-scope-only: false
+
+
+ forbidigo:
+ # # Forbid the following identifiers (identifiers are written using regexp):
+ forbid:
+ # - ^print.*$
+      - 'fmt\.Print.*' # too much log noise
+      - ^unsafe\..*$
+      - ^init$
+      - ^os\.Exit$
+      - ^errors\.New.*$
+      - ^panic$
+ # - ginkgo\\.F.* # these are used just for local development
+ # # Exclude godoc examples from forbidigo checks. Default is true.
+ # exclude_godoc_examples: false
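+
+    # For illustration, calls the patterns above would flag (hypothetical code):
+    #
+    #   fmt.Println("debug")  // flagged: matches 'fmt\.Print.*'
+    #   panic("unreachable")  // flagged: matches ^panic$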
+
+ funlen:
+ lines: 220
+ statements: 80
+
+ gocognit:
+ # minimal code complexity to report, 30 by default (but we recommend 10-20)
+ min-complexity: 30
+
+ goconst:
+ # minimal length of string constant, 3 by default
+ min-len: 3
+ # minimal occurrences count to trigger, 3 by default
+ min-occurrences: 3
+ # ignore test files, false by default
+ ignore-tests: false
+ # look for existing constants matching the values, true by default
+ match-constant: true
+ # search also for duplicated numbers, false by default
+ numbers: false
+ # minimum value, only works with goconst.numbers, 3 by default
+ min: 3
+ # maximum value, only works with goconst.numbers, 3 by default
+ max: 3
+ # ignore when constant is not used as function argument, true by default
+ ignore-calls: true
+
+ gocritic:
+ # Which checks should be enabled; can't be combined with 'disabled-checks';
+ # See https://go-critic.github.io/overview#checks-overview
+ # To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
+ # By default list of stable checks is used.
+ enabled-checks:
+ #- rangeValCopy
+ - ruleguard
+
+ # Which checks should be disabled; can't be combined with 'enabled-checks'; default is empty
+ disabled-checks:
+ - regexpMust
+ - ifElseChain
+ #- exitAfterDefer
+
+ # Enable multiple checks by tags, run `GL_DEBUG=gocritic golangci-lint run` to see all tags and checks.
+ # Empty list by default. See https://github.com/go-critic/go-critic#usage -> section "Tags".
+ enabled-tags:
+ - performance
+ disabled-tags:
+ - experimental
+
+ # Settings passed to gocritic.
+ # The settings key is the name of a supported gocritic checker.
+    # The list of supported checkers can be found at https://go-critic.github.io/overview.
+ settings:
+ captLocal: # must be valid enabled check name
+ # whether to restrict checker to params only (default true)
+ paramsOnly: true
+ elseif:
+ # whether to skip balanced if-else pairs (default true)
+ skipBalanced: true
+ hugeParam:
+ # size in bytes that makes the warning trigger (default 80)
+ sizeThreshold: 80
+ rangeExprCopy:
+ # size in bytes that makes the warning trigger (default 512)
+ sizeThreshold: 512
+ # whether to check test functions (default true)
+ skipTestFuncs: true
+ rangeValCopy:
+ # size in bytes that makes the warning trigger (default 128)
+ sizeThreshold: 32
+ # whether to check test functions (default true)
+ skipTestFuncs: true
+ ruleguard:
+ # path to a gorules file for the ruleguard checker
+ rules: ''
+ underef:
+ # whether to skip (*x).method() calls where x is a pointer receiver (default true)
+ skipRecvDeref: true
+
+ gocyclo:
+ # minimal code complexity to report, 30 by default (but we recommend 10-20)
+ min-complexity: 30
+ cyclop:
+ # the maximal code complexity to report
+ max-complexity: 50
+ # the maximal average package complexity. If it's higher than 0.0 (float) the check is enabled (default 0.0)
+ package-average: 0.0
+ # should ignore tests (default false)
+ skip-tests: false
+ godot:
+ # comments to be checked: `declarations`, `toplevel`, or `all`
+ scope: declarations
+ # list of regexps for excluding particular comment lines from check
+ exclude:
+ # example: exclude comments which contain numbers
+ - '[0-9]+'
+ - 'func\s+\w+'
+ - 'FIXME:'
+ - '.*func.*'
+ # check that each sentence starts with a capital letter
+ capital: true
+ godox:
+    # Report any comments starting with these keywords; useful for TODO or FIXME
+    # comments that might be left in the code accidentally and should be resolved before merging
+    keywords: # default keywords are TODO, BUG, and FIXME; they can be overwritten by this setting
+ #- TODO
+ - BUG
+ - FIXME
+ #- NOTE
+ - OPTIMIZE # marks code that should be optimized before merging
+ - HACK # marks hack-arounds that should be removed before merging
+ gofmt:
+ # simplify code: gofmt with `-s` option, true by default
+ simplify: true
+
+ gofumpt:
+ # Select the Go version to target. The default is `1.18`.
+ go-version: "1.21"
+
+ # Choose whether or not to use the extra rules that are disabled
+ # by default
+ extra-rules: false
+
+ # goheader:
+ # values:
+ # const:
+ # define here const type values in format k:v, for example:
+ # COMPANY: MY COMPANY
+ # regexp:
+ # define here regexp type values, for example
+ # AUTHOR: .*@mycompany\.com
+ # template: # |-
+ # put here copyright header template for source code files, for example:
+ # Note: {{ YEAR }} is a builtin value that returns the year relative to the current machine time.
+ #
+ # {{ AUTHOR }} {{ COMPANY }} {{ YEAR }}
+ # SPDX-License-Identifier: Apache-2.0
+
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at:
+
+ # http://www.apache.org/licenses/LICENSE-2.0
+
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # template-path:
+ # also as alternative of directive 'template' you may put the path to file with the template source
+
+ goimports:
+ # put imports beginning with prefix after 3rd-party packages;
+ # it's a comma-separated list of prefixes
+ local-prefixes: github.com/openimsdk/open-im-server-deploy
+
+ gomnd:
+ # List of enabled checks, see https://github.com/tommy-muehle/go-mnd/#checks for description.
+ # Default: ["argument", "case", "condition", "operation", "return", "assign"]
+ checks:
+ - argument
+ - case
+ - condition
+ - operation
+ - return
+ - assign
+ # List of numbers to exclude from analysis.
+ # The numbers should be written as string.
+ # Values always ignored: "1", "1.0", "0" and "0.0"
+ # Default: []
+ ignored-numbers:
+ - '0666'
+ - '0755'
+ - '42'
+ # List of file patterns to exclude from analysis.
+ # Values always ignored: `.+_test.go`
+ # Default: []
+ ignored-files:
+ - 'magic1_.+\.go$'
+ # List of function patterns to exclude from analysis.
+ # Following functions are always ignored: `time.Date`,
+ # `strconv.FormatInt`, `strconv.FormatUint`, `strconv.FormatFloat`,
+ # `strconv.ParseInt`, `strconv.ParseUint`, `strconv.ParseFloat`.
+ # Default: []
+ ignored-functions:
+ - '^math\.'
+ - '^webhook\.StatusText$'
+ gomoddirectives:
+ # Allow local `replace` directives. Default is false.
+ replace-local: true
+ # List of allowed `replace` directives. Default is empty.
+ replace-allow-list:
+ - google.golang.org/grpc
+
+ # Allow to not explain why the version has been retracted in the `retract` directives. Default is false.
+ retract-allow-no-explanation: false
+ # Forbid the use of the `exclude` directives. Default is false.
+ exclude-forbidden: false
+
+ gomodguard:
+ allowed:
+ modules:
+ - gorm.io/gen # List of allowed modules
+ - gorm.io/gorm
+ - gorm.io/driver/mysql
+ - k8s.io/klog
+ - github.com/allowed/module
+ - go.mongodb.org/mongo-driver/mongo
+ # - gopkg.in/yaml.v2
+ domains: # List of allowed module domains
+ - google.golang.org
+ - gopkg.in
+ - golang.org
+ - github.com
+ - go.mongodb.org
+ - go.uber.org
+ - openim.io
+ - go.etcd.io
+ blocked:
+ versions:
+ - github.com/MakeNowJust/heredoc:
+ version: "> 2.0.9"
+ reason: "use the latest version"
+ local_replace_directives: false # Set to true to raise lint issues for packages that are loaded from a local path via replace directive
+
+ gosec:
+ # To select a subset of rules to run.
+ # Available rules: https://github.com/securego/gosec#available-rules
+ includes:
+ - G401
+ - G306
+ - G101
+ # To specify a set of rules to explicitly exclude.
+ # Available rules: https://github.com/securego/gosec#available-rules
+ excludes:
+ - G204
+ # Exclude generated files
+ exclude-generated: true
+ # Filter out the issues with a lower severity than the given value. Valid options are: low, medium, high.
+ severity: "low"
+ # Filter out the issues with a lower confidence than the given value. Valid options are: low, medium, high.
+ confidence: "low"
+ # To specify the configuration of rules.
+ # The configuration of rules is not fully documented by gosec:
+ # https://github.com/securego/gosec#configuration
+ # https://github.com/securego/gosec/blob/569328eade2ccbad4ce2d0f21ee158ab5356a5cf/rules/rulelist.go#L60-L102
+ config:
+ G306: "0600"
+ G101:
+ pattern: "(?i)example"
+ ignore_entropy: false
+ entropy_threshold: "80.0"
+ per_char_threshold: "3.0"
+ truncate: "32"
+
+ gosimple:
+ # Select the Go version to target. The default is '1.13'.
+ go: "1.20"
+ # https://staticcheck.io/docs/options#checks
+ checks: [ "all" ]
+
+ govet:
+ # settings per analyzer
+ settings:
+ printf: # analyzer name, run `go tool vet help` to see all analyzers
+ funcs: # run `go tool vet help printf` to see available settings for `printf` analyzer
+ - (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
+ - (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
+ - (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
+ - (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
+
+ # enable or disable analyzers by name
+ enable:
+ - atomicalign
+ enable-all: false
+ disable:
+ - shadow
+ disable-all: false
+
+ depguard:
+ rules:
+ prevent_unmaintained_packages:
+        list-mode: lax # allow unless explicitly denied
+ files:
+ - $all
+ - "!$test"
+ allow:
+ - $gostd
+ deny:
+ - pkg: io/ioutil
+ desc: "replaced by io and os packages since Go 1.16: https://tip.golang.org/doc/go1.16#ioutil"
+ - pkg: github.com/OpenIMSDK
+ desc: "The OpenIM organization has been replaced with lowercase, please do not use uppercase organization name, you will use openimsdk"
+ - pkg: log
+ desc: "We have a wrapped log package at openim, we recommend you to use our wrapped log package, https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/logging.md"
+ - pkg: errors
+ desc: "We have a wrapped errors package at openim, we recommend you to use our wrapped errors package, https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/error-code.md"
+
+ importas:
+ # if set to `true`, force to use alias.
+ no-unaliased: true
+ # List of aliases
+ alias:
+ # using `servingv1` alias for `knative.dev/serving/pkg/apis/serving/v1` package
+ - pkg: knative.dev/serving/pkg/apis/serving/v1
+ alias: servingv1
+ - pkg: gopkg.in/yaml.v2
+ alias: yaml
+ # using `autoscalingv1alpha1` alias for `knative.dev/serving/pkg/apis/autoscaling/v1alpha1` package
+ - pkg: knative.dev/serving/pkg/apis/autoscaling/v1alpha1
+ alias: autoscalingv1alpha1
+ # You can specify the package path by regular expression,
+ # and alias by regular expression expansion syntax like below.
+ # see https://github.com/julz/importas#use-regular-expression for details
+ - pkg: knative.dev/serving/pkg/apis/(\w+)/(v[\w\d]+)
+ alias: $1$2
+
+ ireturn:
+ # ireturn allows using `allow` and `reject` settings at the same time.
+ # Both settings are lists of the keywords and regular expressions matched to interface or package names.
+ # keywords:
+ # - `empty` for `interface{}`
+ # - `error` for errors
+ # - `stdlib` for standard library
+ # - `anon` for anonymous interfaces
+
+ # By default, it allows using errors, empty interfaces, anonymous interfaces,
+ # and interfaces provided by the standard library.
+ allow:
+ - anon
+ - error
+ - empty
+ - stdlib
+ # You can specify idiomatic endings for interface
+ - (or|er)$
+
+ # Reject patterns
+ reject:
+ - github.com\/user\/package\/v4\.Type
+
+ lll:
+ # max line length, lines longer will be reported. Default is 250.
+ # '\t' is counted as 1 character by default, and can be changed with the tab-width option
+ line-length: 250
+ # tab width in spaces. Default to 1.
+ tab-width: 4
+ misspell:
+ # Correct spellings using locale preferences for US or UK.
+ # Default is to use a neutral variety of English.
+ # Setting locale to US will correct the British spelling of 'colour' to 'color'.
+ locale: US
+ ignore-words:
+ - someword
+ nakedret:
+ # make an issue if func has more lines of code than this setting and it has naked returns; default is 30
+ max-func-lines: 30
+
+ nestif:
+ # minimal complexity of if statements to report, 5 by default
+ min-complexity: 4
+
+ nilnil:
+ # By default, nilnil checks all returned types below.
+ checked-types:
+ - ptr
+ - func
+ - iface
+ - map
+ - chan
+
+ nlreturn:
+ # size of the block (including return statement that is still "OK")
+ # so no return split required.
+ block-size: 1
+
+ nolintlint:
+ # Disable to ensure that all nolint directives actually have an effect. Default is true.
+ allow-unused: false
+ # Exclude following linters from requiring an explanation. Default is [].
+ allow-no-explanation: [ ]
+ # Enable to require an explanation of nonzero length after each nolint directive. Default is false.
+ require-explanation: false
+ # Enable to require nolint directives to mention the specific linter being suppressed. Default is false.
+ require-specific: true
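+
+    # With require-specific: true, a bare //nolint comment is rejected; each
+    # directive must name the linter it suppresses, e.g. (hypothetical):
+    #
+    #   //nolint:errcheck // error is handled by the caller's recovery logic
+    #   defer f.Close()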
+
+ prealloc:
+ # XXX: we don't recommend using this linter before doing performance profiling.
+ # For most programs usage of prealloc will be a premature optimization.
+
+ # Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
+ # True by default.
+ simple: true
+ range-loops: true # Report preallocation suggestions on range loops, true by default
+ for-loops: false # Report preallocation suggestions on for loops, false by default
+
+ promlinter:
+    # Promlinter cannot infer all metric names via static analysis.
+    # Enabling strict mode will also include the errors caused by failing to parse the args.
+ strict: false
+ # Please refer to https://github.com/yeya24/promlinter#usage for detailed usage.
+ disabled-linters:
+ - "Help"
+ - "MetricUnits"
+ - "Counter"
+ - "HistogramSummaryReserved"
+ - "MetricTypeInName"
+ - "ReservedChars"
+ - "CamelCase"
+
+ predeclared:
+ # comma-separated list of predeclared identifiers to not report on
+ ignore: ""
+ # include method names and field names (i.e., qualified names) in checks
+ q: false
+ rowserrcheck:
+ packages:
+ - github.com/jmoiron/sqlx
+
+ revive:
+ # see https://github.com/mgechev/revive#available-rules for details.
+ ignore-generated-header: true
+ severity: warning
+ rules:
+ - name: indent-error-flow
+ severity: warning
+ - name: exported
+ severity: warning
+ - name: var-naming
+ arguments: [ [ "OpenIM"] ]
+ # arguments: [ ["ID", "HTTP", "URL", "URI", "API", "APIKey", "Token", "TokenID", "TokenSecret", "TokenKey", "TokenSecret", "JWT", "JWTToken", "JWTTokenID", "JWTTokenSecret", "JWTTokenKey", "JWTTokenSecret", "OAuth", "OAuthToken", "RPC" ] ]
+ - name: atomic
+ - name: line-length-limit
+ severity: error
+ arguments: [200]
+ - name: unhandled-error
+        arguments: ["fmt.Printf", "myFunction"]
+
+ staticcheck:
+ # Select the Go version to target. The default is '1.13'.
+ go: "1.20"
+ # https://staticcheck.io/docs/options#checks
+ checks: [ "all" ]
+
+ stylecheck:
+ # Select the Go version to target. The default is '1.13'.
+ go: "1.20"
+
+ # https://staticcheck.io/docs/options#checks
+ checks: [ "all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022" ]
+ # https://staticcheck.io/docs/options#dot_import_whitelist
+ dot-import-whitelist:
+ - fmt
+ # https://staticcheck.io/docs/options#initialisms
+ initialisms: [ "ACL", "API", "ASCII", "CPU", "CSS", "DNS", "EOF", "GUID", "HTML", "HTTP", "HTTPS", "ID", "IP", "JSON", "QPS", "RAM", "RPC", "SLA", "SMTP", "SQL", "SSH", "TCP", "TLS", "TTL", "UDP", "UI", "GID", "UID", "UUID", "URI", "URL", "UTF8", "VM", "XML", "XMPP", "XSRF", "XSS" ]
+ # https://staticcheck.io/docs/options#http_status_code_whitelist
+ http-status-code-whitelist: [ "200", "400", "404", "500" ]
+
+ tagliatelle:
+    # check the struct tag name case
+ case:
+ # use the struct field name to check the name of the struct tag
+ use-field-name: true
+ rules:
+ # any struct tag type can be used.
+ # support string case: `camel`, `pascal`, `kebab`, `snake`, `goCamel`, `goPascal`, `goKebab`, `goSnake`, `upper`, `lower`
+ json: camel
+ yaml: camel
+ xml: camel
+ bson: camel
+ avro: snake
+ mapstructure: kebab
+
+ testpackage:
+ # regexp pattern to skip files
+ skip-regexp: (id|export|internal)_test\.go
+ thelper:
+ # The following configurations enable all checks. It can be omitted because all checks are enabled by default.
+ # You can enable only required checks deleting unnecessary checks.
+ test:
+ first: true
+ name: true
+ begin: true
+ benchmark:
+ first: true
+ name: true
+ begin: true
+ tb:
+ first: true
+ name: true
+ begin: true
+
+ tenv:
+ # The option `all` will run against whole test files (`_test.go`) regardless of method/function signatures.
+ # By default, only methods that take `*testing.T`, `*testing.B`, and `testing.TB` as arguments are checked.
+ all: false
+
+ unparam:
+ # Inspect exported functions, default is false. Set to true if no external program/library imports your code.
+ # XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
+ # if it's called for subdir of a project it can't find external interfaces. All text editor integrations
+ # with golangci-lint call it on a directory with the changed file.
+ check-exported: false
+ # unused:
+ # treat code as a program (not a library) and report unused exported identifiers; default is false.
+ # XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
+ # if it's called for subdir of a project it can't find funcs usages. All text editor integrations
+ # with golangci-lint call it on a directory with the changed file.
+ whitespace:
+ multi-if: false # Enforces newlines (or comments) after every multi-line if statement
+ multi-func: false # Enforces newlines (or comments) after every multi-line function signature
+
+ wrapcheck:
+ # An array of strings that specify substrings of signatures to ignore.
+ # If this set, it will override the default set of ignored signatures.
+ # See https://github.com/tomarrell/wrapcheck#configuration for more information.
+ ignoreSigs:
+ - .Errorf(
+ - errors.New(
+ - errors.Unwrap(
+ - .Wrap(
+ - .WrapMsg(
+ - .Wrapf(
+ - .WithMessage(
+ - .WithMessagef(
+ - .WithStack(
+ ignorePackageGlobs:
+ - encoding/*
+ - github.com/pkg/*
+ - github.com/openimsdk/*
+ - github.com/OpenIMSDK/*
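+
+    # Illustration (assumed code): wrapcheck flags errors returned unwrapped
+    # from another package, while the signatures above are exempt:
+    #
+    #   if err != nil {
+    #       return err                           // flagged: returned unwrapped
+    #   }
+    #   return errs.WrapMsg(err, "load config")  // exempt: matches .WrapMsg(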
+
+ wsl:
+    # If true, append is only allowed to be cuddled if the appended value
+    # matches variables, fields or types on the line above. Default is true.
+ strict-append: true
+ # Allow calls and assignments to be cuddled as long as the lines have any
+ # matching variables, fields or types. Default is true.
+ allow-assign-and-call: true
+ # Allow assignments to be cuddled with anything. Default is false.
+ allow-assign-and-anything: false
+ # Allow multiline assignments to be cuddled. Default is true.
+ allow-multiline-assign: true
+ # Allow declarations (var) to be cuddled.
+ allow-cuddle-declarations: false
+ # Allow trailing comments in ending of blocks
+ allow-trailing-comment: false
+ # Force newlines in end of case at this limit (0 = never).
+ force-case-trailing-whitespace: 0
+ # Force cuddling of err checks with err var assignment
+ force-err-cuddling: false
+    # Allow leading comments to be separated with empty lines
+ allow-separated-leading-comment: false
+ makezero:
+ # Allow only slices initialized with a length of zero. Default is false.
+ always: false
+
+ # The custom section can be used to define linter plugins to be loaded at runtime. See README doc
+ # for more info.
+ #custom:
+ # Each custom linter should have a unique name.
+ #example:
+ # The path to the plugin *.so. Can be absolute or local. Required for each custom linter
+ #path: /path/to/example.so
+ # The description of the linter. Optional, just for documentation purposes.
+ #description: This is an example usage of a plugin linter.
+ # Intended to point to the repo location of the linter. Optional, just for documentation purposes.
+ #original-url: github.com/golangci/example-linter
+
+linters:
+ # please, do not use `enable-all`: it's deprecated and will be removed soon.
+ # inverted configuration with `enable-all` and `disable` is not scalable during updates of golangci-lint
+ # enable-all: true
+ disable-all: true
+ enable:
+ - typecheck # Basic type checking
+ - gofmt # Format check
+ - govet # Go's standard linting tool
+ - gosimple # Suggestions for simplifying code
+    - errcheck # Checks for missed error returns
+ - decorder
+ - ineffassign
+ - forbidigo
+ - revive
+ - reassign
+ - tparallel
+ - unconvert
+ - fieldalignment
+ - dupl
+ - dupword
+ - errname
+ - gci
+ - exhaustive
+ - gocritic
+ - goprintffuncname
+ - gomnd
+ - goconst
+ - gosec
+ - misspell # Spelling mistakes
+ - staticcheck # Static analysis
+ - unused # Checks for unused code
+ # - goimports # Checks if imports are correctly sorted and formatted
+ - godot # Checks for comment punctuation
+ - bodyclose # Ensures HTTP response body is closed
+ - stylecheck # Style checker for Go code
+ fast: true
+
+issues:
+ # List of regexps of issue texts to exclude, empty list by default.
+ # But independently from this option we use default exclude patterns,
+ # it can be disabled by `exclude-use-default: false`. To list all
+ # excluded by default patterns execute `golangci-lint run --help`
+ exclude:
+ - tools/.*
+ - test/.*
+ - components/*
+ - third_party/.*
+
+ # Excluding configuration per-path, per-linter, per-text and per-source
+ exclude-rules:
+ - linters:
+ - revive
+ path: (log/.*)\.go
+
+ - linters:
+ - wrapcheck
+ path: (cmd/.*|pkg/.*)\.go
+
+ - linters:
+ - typecheck
+ #path: (pkg/storage/.*)\.go
+ path: (internal/.*|pkg/.*)\.go
+
+ - path: (cmd/.*|test/.*|tools/.*|internal/pump/pumps/.*)\.go
+ linters:
+ - forbidigo
+
+ - path: (cmd/[a-z]*/.*|store/.*)\.go
+ linters:
+ - dupl
+
+ - linters:
+ - gocritic
+ text: (hugeParam:|rangeValCopy:)
+
+ - path: (cmd/[a-z]*/.*)\.go
+ linters:
+ - lll
+
+    - path: (validator/.*|code/.*|watcher/watcher/.*)
+ linters:
+ - gochecknoinits
+
+ - path: (internal/.*/options|internal/pump|pkg/log/options.go|internal/authzserver|tools/)
+ linters:
+ - tagliatelle
+
+ - path: (pkg/app/.*)\.go
+ linters:
+ - unused
+ - forbidigo
+
+ # Exclude some staticcheck messages
+ - linters:
+ - staticcheck
+ text: "SA9003:"
+
+ # Exclude lll issues for long lines with go:generate
+ - linters:
+ - lll
+ source: "^//go:generate "
+
+ - text: ".*[\u4e00-\u9fa5]+.*"
+ linters:
+ - golint
+ source: "^//.*$"
+
+ # Independently from option `exclude` we use default exclude patterns,
+ # it can be disabled by this option. To list all
+ # excluded by default patterns execute `golangci-lint run --help`.
+ # Default value for this option is true.
+ exclude-use-default: true
+
+ # The default value is false. If set to true exclude and exclude-rules
+ # regular expressions become case sensitive.
+ exclude-case-sensitive: false
+
+ # The list of ids of default excludes to include or disable. By default it's empty.
+ include:
+ - EXC0002 # disable excluding of issues about comments from golint
+
+ # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
+ max-issues-per-linter: 0
+
+ # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
+ max-same-issues: 0
+
+ # Show only new issues: if there are unstaged changes or untracked files,
+ # only those changes are analyzed, else only changes in HEAD~ are analyzed.
+  # It's a super-useful option for integrating golangci-lint into an existing
+  # large codebase. It's not practical to fix all existing issues at the moment
+  # of integration; it's much better not to allow issues in new code.
+ # Default is false.
+ new: false
+
+ # Show only new issues created after git revision `REV`
+ # new-from-rev: REV
+
+ # Show only new issues created in git patch with set file path.
+ #new-from-patch: path/to/patch/file
+
+ # Fix found issues (if it's supported by the linter)
+ fix: true
+
+severity:
+ # Default value is empty string.
+ # Set the default severity for issues. If severity rules are defined and the issues
+ # do not match or no severity is provided to the rule this will be the default
+ # severity applied. Severities should match the supported severity names of the
+ # selected out format.
+ # - Code climate: https://docs.codeclimate.com/docs/issues#issue-severity
+ # - Checkstyle: https://checkstyle.sourceforge.io/property_types.html#severity
+ # - Github: https://help.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-error-message
+ default-severity: error
+
+ # The default value is false.
+ # If set to true severity-rules regular expressions become case sensitive.
+ case-sensitive: false
+
+ # Default value is empty list.
+ # When a list of severity rules are provided, severity information will be added to lint
+ # issues. Severity rules have the same filtering capability as exclude rules except you
+ # are allowed to specify one matcher per severity rule.
+ # Only affects out formats that support setting severity information.
+ rules:
+ - linters:
+ - dupl
+ severity: info
diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 120000
index 0000000..c72a9ce
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1 @@
+CHANGELOG/CHANGELOG.md
\ No newline at end of file
diff --git a/CHANGELOG/CHANGELOG-3.8.md b/CHANGELOG/CHANGELOG-3.8.md
new file mode 100644
index 0000000..e348e56
--- /dev/null
+++ b/CHANGELOG/CHANGELOG-3.8.md
@@ -0,0 +1,70 @@
+## [v3.8.3-patch.6](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.6) (2025-07-23)
+
+### Bug Fixes
+* fix: Add friend DB in notification sender [#3438](https://github.com/openimsdk/open-im-server-deploy/pull/3438)
+* fix: remove update version file workflows have new line in 3.8.3-patch branch. [#3452](https://github.com/openimsdk/open-im-server-deploy/pull/3452)
+* fix: s3 aws init [#3454](https://github.com/openimsdk/open-im-server-deploy/pull/3454)
+* fix: use safe submodule init in workflows in v3.8.3-patch. [#3469](https://github.com/openimsdk/open-im-server-deploy/pull/3469)
+
+**Full Changelog**: [v3.8.3-patch.5...v3.8.3-patch.6](https://github.com/openimsdk/open-im-server-deploy/compare/v3.8.3-patch.5...v3.8.3-patch.6)
+
+## [v3.8.3-patch.5](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.5) (2025-06-10)
+
+### New Features
+* feat: optimize friend and group applications [#3396](https://github.com/openimsdk/open-im-server-deploy/pull/3396)
+
+### Bug Fixes
+* fix: solve incorrect invite notification [#3219](https://github.com/openimsdk/open-im-server-deploy/pull/3219)
+
+### Builds
+* build: update gomake version in dockerfile.[Patch branch] [#3416](https://github.com/openimsdk/open-im-server-deploy/pull/3416)
+
+**Full Changelog**: [v3.8.3...v3.8.3-patch.5](https://github.com/openimsdk/open-im-server-deploy/compare/v3.8.3...v3.8.3-patch.5)
+
+## [v3.8.3-patch.4](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.4) (2025-03-13)
+
+### Bug Fixes
+* fix: solve incorrect invite notification from #3213
+
+**Full Changelog**: [v3.8.3-patch.3...v3.8.3-patch.4](https://github.com/openimsdk/open-im-server-deploy/compare/v3.8.3-patch.3...v3.8.3-patch.4)
+
+## [v3.8.3-patch.3](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.3) (2025-03-07)
+
+### New Features
+* feat: optimizing BatchGetIncrementalGroupMember #3180
+
+### Bug Fixes
+* fix: solve incorrect notification when setting group info #3172
+* fix: the sorting is wrong after canceling the administrator in group settings #3185
+* fix: solve incorrect GroupMember enter-group notification type. #3188
+
+### Refactors
+* refactor: change sendNotification to sendMessage to avoid ambiguity regarding message sending behavior. #3173
+
+**Full Changelog**: [v3.8.3-patch.2...v3.8.3-patch.3](https://github.com/openimsdk/open-im-server-deploy/compare/v3.8.3-patch.2...v3.8.3-patch.3)
+
+## [v3.8.3-patch.2](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.2) (2025-02-28)
+
+### Bug Fixes
+* fix: Offline push does not have a badge && Android offline push (#3146) [#3174](https://github.com/openimsdk/open-im-server-deploy/pull/3174)
+
+**Full Changelog**: [v3.8.3-patch.1...v3.8.3-patch.2](https://github.com/openimsdk/open-im-server-deploy/compare/v3.8.3-patch.1...v3.8.3-patch.2)
+
+## [v3.8.3-patch.1](https://github.com/openimsdk/open-im-server-deploy/releases/tag/v3.8.3-patch.1) (2025-02-25)
+
+### New Features
+* feat: add backup volume && optimize log print [#3121](https://github.com/openimsdk/open-im-server-deploy/pull/3121)
+
+### Bug Fixes
+* fix: seq conversion failed without exiting [#3120](https://github.com/openimsdk/open-im-server-deploy/pull/3120)
+* fix: check error in BatchSetTokenMapByUidPid [#3123](https://github.com/openimsdk/open-im-server-deploy/pull/3123)
+* fix: DeleteDoc crash [#3124](https://github.com/openimsdk/open-im-server-deploy/pull/3124)
+* fix: the abnormal message has no sending time, causing the SDK to be abnormal [#3126](https://github.com/openimsdk/open-im-server-deploy/pull/3126)
+* fix: crash caused [#3127](https://github.com/openimsdk/open-im-server-deploy/pull/3127)
+* fix: the user sets the conversation timer cleanup timestamp unit incorrectly [#3128](https://github.com/openimsdk/open-im-server-deploy/pull/3128)
+* fix: seq conversion not reading env in docker environment [#3131](https://github.com/openimsdk/open-im-server-deploy/pull/3131)
+
+### Builds
+* build: improve workflow contents. [#3125](https://github.com/openimsdk/open-im-server-deploy/pull/3125)
+
+**Full Changelog**: [v3.8.3-e-v1.1.5...v3.8.3-patch.1-e-v1.1.5](https://github.com/openimsdk/open-im-server-deploy-enterprise/compare/v3.8.3-e-v1.1.5...v3.8.3-patch.1-e-v1.1.5)
\ No newline at end of file
diff --git a/CHANGELOG/README.md b/CHANGELOG/README.md
new file mode 100644
index 0000000..204194d
--- /dev/null
+++ b/CHANGELOG/README.md
@@ -0,0 +1,4 @@
+# CHANGELOGs
+
+- [CHANGELOG-3.8.md](./CHANGELOG-3.8.md)
+
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..66b1ee7
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,128 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity
+and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our
+community include:
+
+* Demonstrating empathy and kindness toward other people
+* Being respectful of differing opinions, viewpoints, and experiences
+* Giving and gracefully accepting constructive feedback
+* Accepting responsibility and apologizing to those affected by our mistakes,
+ and learning from the experience
+* Focusing on what is best not just for us as individuals, but for the
+ overall community
+
+Examples of unacceptable behavior include:
+
+* The use of sexualized language or imagery, and sexual attention or
+ advances of any kind
+* Trolling, insulting or derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or email
+ address, without their explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of
+acceptable behavior and will take appropriate and fair corrective action in
+response to any behavior that they deem inappropriate, threatening, offensive,
+or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, and will communicate reasons for moderation
+decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces, and also applies when
+an individual is officially representing the community in public spaces.
+Examples of representing our community include using an official e-mail address,
+posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported to the community leaders responsible for enforcement at
+`security@openim.io`.
+All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the
+reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining
+the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed
+unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing
+clarity around the nature of the violation and an explanation of why the
+behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series
+of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No
+interaction with the people involved, including unsolicited interaction with
+those enforcing the Code of Conduct, for a specified period of time. This
+includes avoiding interactions in community spaces as well as external channels
+like social media. Violating these terms may lead to a temporary or
+permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including
+sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public
+communication with the community for a specified period of time. No public or
+private interaction with the people involved, including unsolicited interaction
+with those enforcing the Code of Conduct, is allowed during this period.
+Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within
+the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.0, available at
+https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+
+Community Impact Guidelines were inspired by [Mozilla's code of conduct
+enforcement ladder](https://github.com/mozilla/diversity).
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see the FAQ at
+https://www.contributor-covenant.org/faq. Translations are available at
+https://www.contributor-covenant.org/translations.
diff --git a/CONTRIBUTING-zh_CN.md b/CONTRIBUTING-zh_CN.md
new file mode 100644
index 0000000..ecb88cb
--- /dev/null
+++ b/CONTRIBUTING-zh_CN.md
@@ -0,0 +1,96 @@
+
+
+# 如何给 OpenIM 贡献代码(提交 Pull Request)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+本指南将以 [openimsdk/open-im-server-deploy](https://github.com/openimsdk/open-im-server-deploy) 为例,详细说明如何为 OpenIM 项目贡献代码。我们采用“一问题一分支”的策略,确保每个 Issue 都对应一个专门的分支,以便有效管理代码变更。
+
+### 1. Fork 仓库
+前往 [openimsdk/open-im-server-deploy](https://github.com/openimsdk/open-im-server-deploy) GitHub 页面,点击右上角的 "Fork" 按钮,将仓库 Fork 到你的 GitHub 账户下。
+
+### 2. 克隆仓库
+将你 Fork 的仓库克隆到本地:
+```bash
+git clone https://github.com/your-username/open-im-server-deploy.git
+```
+
+### 3. 设置远程上游
+添加原始仓库为远程上游以便跟踪其更新:
+```bash
+git remote add upstream https://github.com/openimsdk/open-im-server-deploy.git
+```
+
+### 4. 创建 Issue
+在原始仓库中创建一个新的 Issue,详细描述你遇到的问题或希望添加的新功能。
+
+### 5. 创建新分支
+基于主分支创建一个新分支,并使用描述性的名称与 Issue ID,例如:
+```bash
+git checkout -b fix-bug-123
+```
+
+### 6. 提交更改
+在你的本地分支上进行更改后,提交这些更改:
+```bash
+git add .
+git commit -m "Describe your changes in detail"
+```
+
+### 7. 推送分支
+将你的分支推送回你的 GitHub Fork:
+```bash
+git push origin fix-bug-123
+```
+
+### 8. 创建 Pull Request
+在 GitHub 上转到你的 Fork 仓库,点击 "Pull Request" 按钮。确保 PR 描述清楚,并链接到相关的 Issue。
+
+### 9. 签署 CLA
+如果这是你第一次提交 PR,你需要在 PR 的评论中回复:
+```
+I have read the CLA Document and I hereby sign the CLA
+```
+
+### 编程规范
+请参考以下文档以了解关于 Go 语言编程规范的详细信息:
+- [Go 编码规范](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/go-code.md)
+- [代码约定](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/code-conventions.md)
+
+### 日志规范
+- **禁止使用标准的 `log` 包**。
+- 应使用 `"github.com/openimsdk/tools/log"` 包来打印日志,该包支持多种日志级别:`debug`、`info`、`warn`、`error`。
+- **错误日志应仅在首次调用的函数中打印**,以防止日志重复,并确保错误的上下文清晰。
+
+### 异常及错误处理
+- **禁止使用 `panic`**:程序中不应使用 `panic`,以避免在遇到不可恢复的错误时突然终止。
+- **错误包裹**:使用 `"github.com/openimsdk/tools/errs"` 来包裹错误,保持错误信息的完整性并增加调试便利。
+- **错误传递**:如果函数本身不能处理错误,应将错误返回给调用者,而不是隐藏或忽略这些错误。
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..c7ded50
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,94 @@
+# How to Contribute to OpenIM (Submitting Pull Requests)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+This guide will use [openimsdk/open-im-server-deploy](https://github.com/openimsdk/open-im-server-deploy) as an example to explain in detail how to contribute code to the OpenIM project. We adopt a "one issue, one branch" strategy to ensure each issue corresponds to a dedicated branch for effective code change management.
+
+### 1. Fork the Repository
+Go to the [openimsdk/open-im-server-deploy](https://github.com/openimsdk/open-im-server-deploy) GitHub page, click the "Fork" button in the upper right corner to fork the repository to your GitHub account.
+
+### 2. Clone the Repository
+Clone the repository you forked to your local machine:
+```bash
+git clone https://github.com/your-username/open-im-server-deploy.git
+```
+
+### 3. Set Upstream Remote
+Add the original repository as a remote upstream to track updates:
+```bash
+git remote add upstream https://github.com/openimsdk/open-im-server-deploy.git
+```
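Before branching, it helps to bring your local clone up to date with upstream. A minimal sketch of that workflow — it uses throwaway local repositories as stand-ins for your fork and the openimsdk upstream so the commands can run anywhere; in real use, substitute the actual URLs:

```bash
set -e
# Stand-ins for the real repositories so this sketch is self-contained
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git clone -q "$tmp/upstream" "$tmp/fork"

cd "$tmp/fork"
# In a real checkout this is the openimsdk repository URL
git remote add upstream "$tmp/upstream"

# Pull in the latest upstream changes, then branch from them
git fetch -q upstream
git merge -q --ff-only upstream/main
git checkout -q -b fix-bug-123
echo "on branch: $(git branch --show-current)"
```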
+
+### 4. Create an Issue
+Create a new issue in the original repository detailing the problem you encountered or the new feature you wish to add.
+
+### 5. Create a New Branch
+Create a new branch off the main branch with a descriptive name and Issue ID, for example:
+```bash
+git checkout -b fix-bug-123
+```
+
+### 6. Commit Changes
+After making changes on your local branch, commit these changes:
+```bash
+git add .
+git commit -m "Describe your changes in detail"
+```
+
+### 7. Push the Branch
+Push your branch back to your GitHub fork:
+```bash
+git push origin fix-bug-123
+```
+
+### 8. Create a Pull Request
+Go to your fork on GitHub and click the "Pull Request" button. Ensure the PR description is clear and links to the related issue.
+
+### 9. Sign the CLA
+If this is your first time submitting a PR, you will need to reply in the comments of the PR:
+```
+I have read the CLA Document and I hereby sign the CLA
+```
+
+### Programming Standards
+Please refer to the following documents for detailed information on Go language programming standards:
+- [Go Coding Standards](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/go-code.md)
+- [Code Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/code-conventions.md)
+
+### Logging Standards
+- **Do not use the standard `log` package**.
+- Use the `"github.com/openimsdk/tools/log"` package for logging, which supports multiple log levels: `debug`, `info`, `warn`, `error`.
+- **Error logs should be printed only in the function where the error first occurs**, to prevent duplicate logging and keep the error context clear.
+
+### Exception and Error Handling
+- **Prohibit the use of `panic`**: The code should not use `panic` to avoid abrupt termination when encountering unrecoverable errors.
+- **Error Wrapping**: Use `"github.com/openimsdk/tools/errs"` to wrap errors, maintaining the integrity of error information and facilitating debugging.
+- **Error Propagation**: If a function cannot handle an error itself, it should return the error to the caller, rather than hiding or ignoring it.
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 0000000..8a95b68
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,49 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Set the Go proxy to improve dependency resolution speed
+# ENV GOPROXY=https://goproxy.io,direct
+
+# Copy all files from the current directory into the container
+COPY . .
+
+RUN go mod download
+
+# Install Mage to use for building the application
+RUN go install github.com/magefile/mage@v1.15.0
+
+# Optionally build your application if needed
+RUN mage build
+
+# Using Alpine Linux with Go environment for the final image
+FROM golang:1.22-alpine
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries and mage from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+COPY --from=builder /go/bin/mage /usr/local/bin/mage
+COPY --from=builder $SERVER_DIR/magefile_windows.go $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/magefile_unix.go $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/magefile.go $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/start-config.yml $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/go.mod $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/go.sum $SERVER_DIR/
+
+RUN go get github.com/openimsdk/gomake@v0.0.15-alpha.1
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "mage start && tail -f /dev/null"]
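A quick sketch of how this image might be built and run from the repository root — the image tag and container name below are illustrative, not part of the repo:

```bash
# Build the image (tag is an example)
docker build -t openim-server:local .

# Start a container; the entrypoint runs `mage start` and then
# tails /dev/null to keep the container in the foreground
docker run -d --name openim-server openim-server:local

# Follow the service logs
docker logs -f openim-server
```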
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/README.md b/README.md
index d2beaf1..3810b7d 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,140 @@
-# open-im-server-deploy
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+[](https://gurubase.io/g/openim)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## :busts_in_silhouette: Join Our Community
+
+- 💬 [Follow us on Twitter](https://twitter.com/founder_im63606)
+- 🚀 [Join our Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Join our WeChat Group](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## Ⓜ️ About OpenIM
+
+Unlike standalone chat applications such as Telegram, Signal, and Rocket.Chat, OpenIM offers an open-source instant messaging solution designed specifically for developers rather than as a directly installable standalone chat app. Comprising OpenIM SDK and OpenIM Server, it provides developers with a complete set of tools and services to integrate instant messaging functions into their applications, including message sending and receiving, user management, and group management. Overall, OpenIM aims to provide developers with the necessary tools and framework to implement efficient instant messaging solutions in their applications.
+
+
+
+## 🚀 Introduction to OpenIMSDK
+
+**OpenIMSDK**, designed for **OpenIMServer**, is an IM SDK created specifically for integration into client applications. It supports various functionalities and modules:
+
+- 🌟 Main Features:
+
+ - 📦 Local Storage
+ - 🔔 Listener Callbacks
+ - 🛡️ API Wrapping
+ - 🌐 Connection Management
+
+- 📚 Main Modules:
+ 1. 🚀 Initialization and Login
+ 2. 👤 User Management
+ 3. 👫 Friends Management
+ 4. 🤖 Group Functions
+ 5. 💬 Session Handling
+
+Built with Golang, it supports cross-platform deployment, ensuring a consistent integration experience across all platforms.
+
+👉 **[Explore the GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Introduction to OpenIMServer
+
+- **OpenIMServer** features include:
+ - 🌐 Microservices Architecture: Supports cluster mode, including a gateway and multiple rpc services.
+ - 🚀 Diverse Deployment Options: Supports source code, Kubernetes, or Docker deployment.
+  - Massive User Support: Supports groups with hundreds of thousands of members, tens of millions of users, and tens of billions of messages.
+
+### Enhanced Business Functions:
+
+- **REST API**: Provides a REST API for business systems to enhance functionality, such as group creation and message pushing through backend interfaces.
+
+- **Webhooks**: Extends business capabilities through callbacks, sending requests to business servers before or after specific events.
+
+ 
+
+## :rocket: Quick Start
+
+Experience online for iOS/Android/H5/PC/Web:
+
+👉 **[OpenIM Online Demo](https://www.openim.io/en/commercial)**
+
+To facilitate user experience, we offer various deployment solutions. You can choose your preferred deployment method from the list below:
+
+- **[Source Code Deployment Guide](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker Deployment Guide](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+
+## System Support
+
+Supports Linux, Windows, and macOS, on both ARM and AMD64 CPU architectures.
+
+## :link: Links
+
+- **[Developer Manual](https://docs.openim.io/)**
+- **[Changelog](https://github.com/openimsdk/open-im-server-deploy/blob/main/CHANGELOG.md)**
+
+## :writing_hand: How to Contribute
+
+We welcome contributions of any kind! Please make sure to read our [Contributor Documentation](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md) before submitting a Pull Request.
+
+- **[Report a Bug](https://github.com/openimsdk/open-im-server-deploy/issues/new?assignees=&labels=bug&template=bug_report.md&title=)**
+- **[Suggest a Feature](https://github.com/openimsdk/open-im-server-deploy/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=)**
+- **[Submit a Pull Request](https://github.com/openimsdk/open-im-server-deploy/pulls)**
+
+Thank you for contributing to building a powerful instant messaging solution!
+
+## :closed_book: License
+
+This software is licensed under the Apache License 2.0
+
+## 🔮 Thanks to our contributors!
+
+
+
+
diff --git a/README_zh_CN.md b/README_zh_CN.md
new file mode 100644
index 0000000..066666f
--- /dev/null
+++ b/README_zh_CN.md
@@ -0,0 +1,139 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## :busts_in_silhouette: 加入我们的社区
+
+- 💬 [关注我们的 Twitter](https://twitter.com/founder_im63606)
+- 🚀 [加入我们的 Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2hljfom5u-9ZuzP3NfEKW~BJKbpLm0Hw)
+- :eyes: [加入我们的微信群](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## Ⓜ️ 关于 OpenIM
+
+与 Telegram、Signal、Rocket.Chat 等独立聊天应用不同,OpenIM 提供了专为开发者设计的开源即时通讯解决方案,而不是直接安装使用的独立聊天应用。OpenIM 由 OpenIM SDK 和 OpenIM Server 两大部分组成,为开发者提供了一整套集成即时通讯功能的工具和服务,包括消息发送接收、用户管理和群组管理等。总体来说,OpenIM 旨在为开发者提供必要的工具和框架,帮助他们在自己的应用中实现高效的即时通讯解决方案。
+
+
+
+## 🚀 OpenIMSDK 介绍
+
+**OpenIMSDK** 是为 **OpenIMServer** 设计的 IM SDK,专为集成到客户端应用而生。它支持多种功能和模块:
+
+- 🌟 主要功能:
+
+ - 📦 本地存储
+ - 🔔 监听器回调
+ - 🛡️ API 封装
+ - 🌐 连接管理
+
+- 📚 主要模块:
+ 1. 🚀 初始化及登录
+ 2. 👤 用户管理
+ 3. 👫 好友管理
+ 4. 🤖 群组功能
+ 5. 💬 会话处理
+
+它使用 Golang 构建,并支持跨平台部署,确保在所有平台上提供一致的接入体验。
+
+👉 **[探索 GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 OpenIMServer 介绍
+
+- **OpenIMServer** 的特点包括:
+ - 🌐 微服务架构:支持集群模式,包括网关(gateway)和多个 rpc 服务。
+ - 🚀 多样的部署方式:支持源代码、Kubernetes 或 Docker 部署。
+ - 海量用户支持:支持十万级超大群组,千万级用户和百亿级消息。
+
+### 增强的业务功能:
+
+- **REST API**:为业务系统提供 REST API,增加群组创建、消息推送等后台接口功能。
+
+- **Webhooks**:通过事件前后的回调,向业务服务器发送请求,扩展更多的业务形态。
+
+ 
+
+## :rocket: Quick Start
+
+Try it online on iOS/Android/H5/PC/Web:
+
+👉 **[OpenIM Online Demo](https://www.openim.io/en/commercial)**
+
+To make things easy, we provide several deployment solutions; choose the one that suits you from the list below:
+
+- **[Source Code Deployment Guide](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker Deployment Guide](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+
+## System Support
+
+Supports Linux, Windows, and macOS, as well as ARM and AMD CPU architectures.
+
+## :link: Related Links
+
+- **[Developer Manual](https://docs.openim.io/)**
+- **[Changelog](https://github.com/openimsdk/open-im-server-deploy/blob/main/CHANGELOG.md)**
+
+## :writing_hand: How to Contribute
+
+We welcome contributions of any kind! Before submitting a Pull Request, please make sure you have read our [Contributor Documentation](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+- **[Report a Bug](https://github.com/openimsdk/open-im-server-deploy/issues/new?assignees=&labels=bug&template=bug_report.md&title=)**
+- **[Propose a New Feature](https://github.com/openimsdk/open-im-server-deploy/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=)**
+- **[Submit a Pull Request](https://github.com/openimsdk/open-im-server-deploy/pulls)**
+
+Thank you for contributing; let's build a powerful instant messaging solution together!
+
+## :closed_book: License
+
+This software is licensed under the Apache License 2.0
+
+## 🔮 Thanks to our contributors!
+
+
+
+
diff --git a/assets/README.md b/assets/README.md
new file mode 100644
index 0000000..96e9a78
--- /dev/null
+++ b/assets/README.md
@@ -0,0 +1,32 @@
+# `/assets`
+
+The `/assets` directory in the OpenIM repository contains various assets such as images, logos, and animated GIFs. These assets serve different purposes and contribute to the functionality and aesthetics of the OpenIM project.
+
+## Directory Structure:
+
+```bash
+assets/
+├── README.md # Documentation for the assets directory
+├── images # Directory holding images related to OpenIM
+│ ├── architecture.png # Image depicting the architecture of OpenIM
+│ └── mvc.png # Image illustrating the Model-View-Controller (MVC) pattern
+├── intive-slack.png # Image displaying the Intive Slack logo
+├── logo                   # Directory containing various logo variations for OpenIM
+│   ├── LICENSE                      # License covering the logo files
+│   ├── openim-logo-blue.png         # OpenIM logo in blue
+│   ├── openim-logo-cyan.png         # OpenIM logo in cyan
+│   ├── openim-logo-gradient.png     # OpenIM logo with a gradient fill
+│   ├── openim-logo-green.png        # OpenIM logo in green
+│   ├── openim-logo-orange.png       # OpenIM logo in orange
+│   ├── openim-logo-purple.png       # OpenIM logo in purple
+│   ├── openim-logo-red.png          # OpenIM logo in red
+│   ├── openim-logo-yellow.png       # OpenIM logo in yellow
+│   └── openim-logo.png              # OpenIM logo with a transparent background
+└── logo-gif               # Directory containing animated GIF versions of the OpenIM logo
+    └── openim-logo.gif    # Animated OpenIM logo with a transparent background
+```
+
+## Copyright Notice:
+
+The OpenIM logo, including its variations and animated versions, displayed in this repository [OpenIM](https://github.com/openimsdk/open-im-server-deploy) under the `/assets/logo` and `/assets/logo-gif` directories, is protected by copyright law.
+
+The logo design is credited to @Xx(席欣).
+
+Please respect the intellectual property rights and refrain from unauthorized use and distribution of these assets.
\ No newline at end of file
diff --git a/assets/colors.md b/assets/colors.md
new file mode 100644
index 0000000..cf87777
--- /dev/null
+++ b/assets/colors.md
@@ -0,0 +1,11 @@
+# Official Colors
+
+The OpenIM logo has an official blue color. When reproducing the logo, please use the official color whenever possible.
+
+## Pantone
+
+When possible, the Pantone color is preferred for print material. The official Pantone color is *285C*.
+
+## RGB
+
+When used digitally, the official RGB color code is *#326CE5*.
diff --git a/assets/demo/README.md b/assets/demo/README.md
new file mode 100644
index 0000000..ce4cc0f
--- /dev/null
+++ b/assets/demo/README.md
@@ -0,0 +1,14 @@
+## :star2: Why OpenIM
+
+**🔍 Feature screenshots**
+
+
+
+
+| Multiple message types | Efficient meetings |
+| :---------------------------------------: | :---------------------------------------------: |
+|  |  |
+| **One-to-one and group chats** | **Special features - Custom messages** |
+|  |  |
+
+
diff --git a/assets/demo/efficient-meetings.png b/assets/demo/efficient-meetings.png
new file mode 100644
index 0000000..46b009d
Binary files /dev/null and b/assets/demo/efficient-meetings.png differ
diff --git a/assets/demo/group-chat.png b/assets/demo/group-chat.png
new file mode 100644
index 0000000..1e9f7b6
Binary files /dev/null and b/assets/demo/group-chat.png differ
diff --git a/assets/demo/hello-openim.png b/assets/demo/hello-openim.png
new file mode 100644
index 0000000..2dd7e48
Binary files /dev/null and b/assets/demo/hello-openim.png differ
diff --git a/assets/demo/multi-terminal-synchronization.png b/assets/demo/multi-terminal-synchronization.png
new file mode 100644
index 0000000..62549aa
Binary files /dev/null and b/assets/demo/multi-terminal-synchronization.png differ
diff --git a/assets/demo/multiple-message.png b/assets/demo/multiple-message.png
new file mode 100644
index 0000000..b02fef4
Binary files /dev/null and b/assets/demo/multiple-message.png differ
diff --git a/assets/demo/special-function.png b/assets/demo/special-function.png
new file mode 100644
index 0000000..c6943ec
Binary files /dev/null and b/assets/demo/special-function.png differ
diff --git a/assets/intive-slack.png b/assets/intive-slack.png
new file mode 100644
index 0000000..3f27e0c
Binary files /dev/null and b/assets/intive-slack.png differ
diff --git a/assets/logo-gif/LICENSE b/assets/logo-gif/LICENSE
new file mode 100644
index 0000000..fe61855
--- /dev/null
+++ b/assets/logo-gif/LICENSE
@@ -0,0 +1 @@
+# The OpenIM logo files are licensed under a choice of either Apache-2.0 or CC-BY-4.0 (Creative Commons Attribution 4.0 International).
\ No newline at end of file
diff --git a/assets/logo-gif/openim-logo.gif b/assets/logo-gif/openim-logo.gif
new file mode 100644
index 0000000..f4b4817
Binary files /dev/null and b/assets/logo-gif/openim-logo.gif differ
diff --git a/assets/logo/LICENSE b/assets/logo/LICENSE
new file mode 100644
index 0000000..fe61855
--- /dev/null
+++ b/assets/logo/LICENSE
@@ -0,0 +1 @@
+# The OpenIM logo files are licensed under a choice of either Apache-2.0 or CC-BY-4.0 (Creative Commons Attribution 4.0 International).
\ No newline at end of file
diff --git a/assets/logo/openim-logo-blue.png b/assets/logo/openim-logo-blue.png
new file mode 100644
index 0000000..555bde4
Binary files /dev/null and b/assets/logo/openim-logo-blue.png differ
diff --git a/assets/logo/openim-logo-cyan.png b/assets/logo/openim-logo-cyan.png
new file mode 100644
index 0000000..a27807f
Binary files /dev/null and b/assets/logo/openim-logo-cyan.png differ
diff --git a/assets/logo/openim-logo-gradient.png b/assets/logo/openim-logo-gradient.png
new file mode 100644
index 0000000..40198d5
Binary files /dev/null and b/assets/logo/openim-logo-gradient.png differ
diff --git a/assets/logo/openim-logo-green.png b/assets/logo/openim-logo-green.png
new file mode 100644
index 0000000..d071443
Binary files /dev/null and b/assets/logo/openim-logo-green.png differ
diff --git a/assets/logo/openim-logo-orange.png b/assets/logo/openim-logo-orange.png
new file mode 100644
index 0000000..398d9d6
Binary files /dev/null and b/assets/logo/openim-logo-orange.png differ
diff --git a/assets/logo/openim-logo-purple.png b/assets/logo/openim-logo-purple.png
new file mode 100644
index 0000000..f08dd8a
Binary files /dev/null and b/assets/logo/openim-logo-purple.png differ
diff --git a/assets/logo/openim-logo-red.png b/assets/logo/openim-logo-red.png
new file mode 100644
index 0000000..eb1e542
Binary files /dev/null and b/assets/logo/openim-logo-red.png differ
diff --git a/assets/logo/openim-logo-yellow.png b/assets/logo/openim-logo-yellow.png
new file mode 100644
index 0000000..7440e3c
Binary files /dev/null and b/assets/logo/openim-logo-yellow.png differ
diff --git a/assets/logo/openim-logo.png b/assets/logo/openim-logo.png
new file mode 100644
index 0000000..cc87f0a
Binary files /dev/null and b/assets/logo/openim-logo.png differ
diff --git a/assets/openim-logo-gradient.pdf b/assets/openim-logo-gradient.pdf
new file mode 100644
index 0000000..3621176
Binary files /dev/null and b/assets/openim-logo-gradient.pdf differ
diff --git a/assets/openim-logo-gradient.svg b/assets/openim-logo-gradient.svg
new file mode 100644
index 0000000..2e9b86d
--- /dev/null
+++ b/assets/openim-logo-gradient.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/assets/openim-logo-green.pdf b/assets/openim-logo-green.pdf
new file mode 100644
index 0000000..c8b7e11
Binary files /dev/null and b/assets/openim-logo-green.pdf differ
diff --git a/assets/openim-logo-green.svg b/assets/openim-logo-green.svg
new file mode 100644
index 0000000..cb49a52
--- /dev/null
+++ b/assets/openim-logo-green.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/bootstrap.bat b/bootstrap.bat
new file mode 100644
index 0000000..819f19c
--- /dev/null
+++ b/bootstrap.bat
@@ -0,0 +1,31 @@
+@echo off
+SETLOCAL
+
+mage -version >nul 2>&1
+IF %ERRORLEVEL% EQU 0 (
+ echo Mage is already installed.
+ GOTO DOWNLOAD
+)
+
+go version >nul 2>&1
+IF NOT %ERRORLEVEL% EQU 0 (
+ echo Go is not installed. Please install Go and try again.
+ exit /b 1
+)
+
+echo Installing Mage...
+go install github.com/magefile/mage@latest
+
+mage -version >nul 2>&1
+IF NOT %ERRORLEVEL% EQU 0 (
+ echo Mage installation failed.
+ echo Please ensure that %GOPATH%/bin is in your PATH.
+ exit /b 1
+)
+
+echo Mage installed successfully.
+
+:DOWNLOAD
+go mod download
+
+ENDLOCAL
diff --git a/bootstrap.sh b/bootstrap.sh
new file mode 100644
index 0000000..f79cd1f
--- /dev/null
+++ b/bootstrap.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+if [[ ":$PATH:" == *":$HOME/.local/bin:"* ]]; then
+ TARGET_DIR="$HOME/.local/bin"
+else
+ TARGET_DIR="/usr/local/bin"
+ echo "Using /usr/local/bin as the installation directory. Might require sudo permissions."
+fi
+
+if ! command -v mage &> /dev/null; then
+ echo "Installing Mage to $TARGET_DIR ..."
+ GOBIN=$TARGET_DIR go install github.com/magefile/mage@latest
+fi
+
+if ! command -v mage &> /dev/null; then
+ echo "Mage installation failed."
+ echo "Please ensure that $TARGET_DIR is in your \$PATH."
+ exit 1
+fi
+
+echo "Mage installed successfully."
+
+go mod download
diff --git a/build/README.md b/build/README.md
new file mode 100644
index 0000000..edd419a
--- /dev/null
+++ b/build/README.md
@@ -0,0 +1,65 @@
+# Building OpenIM
+
+Building OpenIM is easy if you take advantage of the containerized build environment. This document will guide you through that build process.
+
+## Requirements
+
+1. Docker, using one of the following configurations:
+ * **macOS** Install Docker for Mac. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
+ **Note**: You will want to set the Docker VM to have at least 4GB of initial memory or building will likely fail.
+ * **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
+ * **Windows with Docker Desktop WSL2 backend** Install Docker according to the [instructions](https://docs.docker.com/docker-for-windows/wsl-tech-preview/). Be sure to store your sources in the local Linux file system, not the Windows remote mount at `/mnt/c`.
+
+ **Note**: You will need to check if Docker CLI plugin buildx is properly installed (`docker-buildx` file should be present in `~/.docker/cli-plugins`). You can install buildx according to the [instructions](https://github.com/docker/buildx/blob/master/README.md#installing).
+
+2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
+
+You must install and configure the Google Cloud SDK if you want to upload your release to Google Cloud Storage; otherwise you can safely omit it.
+
+## Actions
+
+About [Images packages](https://github.com/orgs/OpenIMSDK/packages?repo_name=Open-IM-Server)
+
+The files in the `build/images` directory are not templated; they are rendered automatically by GitHub Actions.
+
+Trigger conditions:
+1. Create a new tag with the format `vX.Y.Z` (e.g. `v1.0.0`).
+2. Push the tag to the remote repository.
+3. Wait for the build to finish.
+4. Download the artifacts from the release page.
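The tagging step can be sketched as below; the tag name and the commented-out `git` commands are examples only, so substitute your actual release version before running them:

```bash
# Example only: the tag name is a placeholder.
TAG="v1.0.0"
if echo "$TAG" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "tag format ok"
  # git tag -a "$TAG" -m "release $TAG"
  # git push origin "$TAG"
else
  echo "tag format invalid" >&2
  exit 1
fi
```

Pushing a tag that does not match `vX.Y.Z` will simply not trigger the release workflow, so checking the format first avoids silent no-ops.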
+
+## Make images
+
+**help info:**
+
+```bash
+$ make image.help
+```
+
+**build images:**
+
+```bash
+$ make image
+```
+
+## Overview
+
+While it is possible to build OpenIM using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial set up and provides for a very consistent build and test environment.
+
+
+## Basic Flow
+
+The scripts directly under [`build/`](.) are used to build and test. They ensure that the `openim-build` Docker image is built (based on [`build/build-image/Dockerfile`](../Dockerfile), after the base image's `OPENIM_BUILD_IMAGE_CROSS_TAG` in the Dockerfile is replaced with an actual tag of the base image, such as `v1.13.9-2`) and then execute the appropriate command in that container. These scripts ensure that the right data is cached from run to run for incremental builds, and they copy the results back out of the container. You can specify a different registry/name and version for `openim-cross` by setting `OPENIM_CROSS_IMAGE` and `OPENIM_CROSS_VERSION`; see [`common.sh`](common.sh) for more details.
+
+The `openim-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the OpenIM repo to minimize the amount of data we need to package up when building the image.
+
+Three different container instances are run from this image. The first is a "data" container that stores all data that needs to persist across runs to support incremental builds. Next is an "rsync" container that is used to transfer data in and out of the data container. Last is a "build" container that is used for actually running build actions. The data container persists across runs, while the rsync and build containers are deleted after each use.
+
+`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. It uses an ephemeral port picked by Docker; you can override this by setting the `OPENIM_RSYNC_PORT` env variable.
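As a sketch, these overrides are plain environment variables set before invoking the build scripts; the registry, version, and port values below are purely illustrative:

```bash
# Illustrative values only; adjust to your registry and checkout.
export OPENIM_CROSS_IMAGE="registry.example.com/openim-cross"
export OPENIM_CROSS_VERSION="v1.13.9-2"
export OPENIM_RSYNC_PORT=8730
echo "cross image: $OPENIM_CROSS_IMAGE:$OPENIM_CROSS_VERSION (rsync port $OPENIM_RSYNC_PORT)"
```

Exported this way, the variables are visible to any build script run from the same shell session.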
+
+All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to be changed and signals to CI systems that old artifacts need to be deleted.
+
+## Build artifacts
+The build system outputs all its products to a top-level directory in the source repository named `_output`.
+These include the compiled binary packages (e.g. imctl, openim-api, etc.) and archived Docker images.
+If you intend to run a component from a Docker image, you will need to import it from this directory with `docker load`.
diff --git a/build/goreleaser.yaml b/build/goreleaser.yaml
new file mode 100644
index 0000000..3704565
--- /dev/null
+++ b/build/goreleaser.yaml
@@ -0,0 +1,431 @@
+# This is an example .goreleaser.yml file with some sensible defaults.
+# Make sure to check the documentation at https://goreleaser.com
+
+before:
+ hooks:
+ - make clean
+ # You may remove this if you don't use go modules.
+ - make tidy
+ - make copyright.add
+ # you may remove this if you don't need go generate
+ - go generate ./...
+
+git:
+  # What should be used to sort tags when gathering the current and previous
+  # tags if there is more than one tag in the same commit.
+ #
+ # Default: '-version:refname'
+ tag_sort: -version:creatordate
+
+  # What should be used to specify prerelease suffix while sorting tags when gathering
+  # the current and previous tags if there is more than one tag in the same commit.
+ #
+ # Since: v1.17
+ prerelease_suffix: "-"
+
+ # Tags to be ignored by GoReleaser.
+ # This means that GoReleaser will not pick up tags that match any of the
+ # provided values as either previous or current tags.
+ #
+ # Templates: allowed.
+ # Since: v1.21.
+ ignore_tags:
+ - nightly
+ # - "{{.Env.IGNORE_TAG}}"
+
+snapshot:
+ name_template: "{{ incpatch .Version }}-next"
+
+# gomod:
+# proxy: true
+
+report_sizes: true
+
+# metadata:
+# mod_timestamp: "{{ .CommitTimestamp }}"
+
+builds:
+ - binary: openim-api
+ id: openim-api
+ main: ./cmd/openim-api/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-cmdutils
+ id: openim-cmdutils
+ main: ./cmd/openim-cmdutils/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-crontask
+ id: openim-crontask
+ main: ./cmd/openim-crontask/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-msggateway
+ id: openim-msggateway
+ main: ./cmd/openim-msggateway/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-msgtransfer
+ id: openim-msgtransfer
+ main: ./cmd/openim-msgtransfer/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-push
+ id: openim-push
+ main: ./cmd/openim-push/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-auth
+ id: openim-rpc-auth
+ main: ./cmd/openim-rpc/openim-rpc-auth/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-conversation
+ id: openim-rpc-conversation
+ main: ./cmd/openim-rpc/openim-rpc-conversation/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-friend
+ id: openim-rpc-friend
+ main: ./cmd/openim-rpc/openim-rpc-friend/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-group
+ id: openim-rpc-group
+ main: ./cmd/openim-rpc/openim-rpc-group/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-msg
+ id: openim-rpc-msg
+ main: ./cmd/openim-rpc/openim-rpc-msg/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-third
+ id: openim-rpc-third
+ main: ./cmd/openim-rpc/openim-rpc-third/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+ - binary: openim-rpc-user
+ id: openim-rpc-user
+ main: ./cmd/openim-rpc/openim-rpc-user/main.go
+ goos:
+ - darwin
+ - windows
+ - linux
+ goarch:
+ - amd64
+ - arm64
+
+
+# TODO: Need a script (such as an init-release helper) so the compiled
+# binaries end up in the right directory.
+archives:
+ - format: tar.gz
+ # this name template makes the OS and Arch compatible with the results of uname.
+ name_template: >-
+ {{ .ProjectName }}_
+ {{- title .Os }}_
+ {{- if eq .Arch "amd64" }}x86_64
+ {{- else if eq .Arch "386" }}i386
+ {{- else }}{{ .Arch }}{{ end }}
+ {{- if .Arm }}v{{ .Arm }}{{ end }}
+
+ # Set this to true if you want all files in the archive to be in a single directory.
+ # If set to true and you extract the archive 'goreleaser_Linux_arm64.tar.gz',
+ # you'll get a folder 'goreleaser_Linux_arm64'.
+ # If set to false, all files are extracted separately.
+ # You can also set it to a custom folder name (templating is supported).
+ wrap_in_directory: true
+
+ # use zip for windows archives
+ files:
+ - CHANGELOG/*
+ - deployment/*
+ - config/*
+ - build/*
+ - scripts/*
+ - Makefile
+ - install.sh
+ - docs/*
+ - src: "*.md"
+ dst: docs
+
+ # Strip parent folders when adding files to the archive.
+ strip_parent: true
+
+ # File info.
+  # Not all fields are supported by all available formats.
+ #
+ # Default: copied from the source file
+ info:
+ # Templates: allowed (since v1.14)
+ owner: root
+
+ # Templates: allowed (since v1.14)
+ group: root
+
+ # Must be in time.RFC3339Nano format.
+ #
+ # Templates: allowed (since v1.14)
+ mtime: "{{ .CommitDate }}"
+
+ # File mode.
+ mode: 0644
+
+ format_overrides:
+ - goos: windows
+ format: zip
+
+changelog:
+ sort: asc
+ use: github
+ filters:
+ exclude:
+ - "^test:"
+ - "^chore"
+ - "merge conflict"
+ - Merge pull request
+ - Merge remote-tracking branch
+ - Merge branch
+ - go mod tidy
+ groups:
+ - title: Dependency updates
+ regexp: '^.*?(feat|fix)\(deps\)!?:.+$'
+ order: 300
+ - title: "New Features"
+ regexp: '^.*?feat(\([[:word:]]+\))??!?:.+$'
+ order: 100
+ - title: "Security updates"
+ regexp: '^.*?sec(\([[:word:]]+\))??!?:.+$'
+ order: 150
+ - title: "Bug fixes"
+ regexp: '^.*?fix(\([[:word:]]+\))??!?:.+$'
+ order: 200
+ - title: "Documentation updates"
+ regexp: ^.*?doc(\([[:word:]]+\))??!?:.+$
+ order: 400
+ - title: "Build process updates"
+ regexp: ^.*?build(\([[:word:]]+\))??!?:.+$
+ order: 400
+ - title: Other work
+ order: 9999
+
+# dockers:
+# - image_templates:
+# - "openimsdk/open-im-server-deploy:{{ .Tag }}-amd64"
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-amd64"
+# dockerfile: build/images/openim-api/Dockerfile.release
+# ids:
+# - openim-api
+# use: buildx
+# build_flag_templates:
+# - "--pull"
+# - "--label=io.artifacthub.package.readme-url=https://raw.githubusercontent.com/openimsdk/open-im-server-deploy/main/README.md"
+#       - "--label=io.artifacthub.package.logo-url=https://github.com/openimsdk/open-im-server-deploy/blob/main/assets/logo/openim-logo-green.png"
+#       - '--label=io.artifacthub.package.maintainers=[{"name":"Xinwei Xiong","email":"3293172751nss@gmail.com"}]'
+#       - "--label=io.artifacthub.package.license=Apache-2.0"
+# - "--label=org.opencontainers.image.description=OpenIM Open source top instant messaging system"
+# - "--label=org.opencontainers.image.created={{.Date}}"
+# - "--label=org.opencontainers.image.name={{.ProjectName}}"
+# - "--label=org.opencontainers.image.revision={{.FullCommit}}"
+# - "--label=org.opencontainers.image.version={{.Version}}"
+# - "--label=org.opencontainers.image.source={{.GitURL}}"
+# - "--platform=linux/amd64"
+# extra_files:
+# - scripts/entrypoint.sh
+# - image_templates:
+# - "goreleaser/goreleaser:{{ .Tag }}-arm64"
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-arm64"
+# dockerfile: build/images/openim-api/Dockerfile.release
+# use: buildx
+# build_flag_templates:
+# - "--pull"
+# - "--label=io.artifacthub.package.readme-url=https://raw.githubusercontent.com/openimsdk/open-im-server-deploy/main/README.md"
+#       - "--label=io.artifacthub.package.logo-url=https://github.com/openimsdk/open-im-server-deploy/blob/main/assets/logo/openim-logo-green.png"
+#       - '--label=io.artifacthub.package.maintainers=[{"name":"Xinwei Xiong","email":"3293172751nss@gmail.com"}]'
+#       - "--label=io.artifacthub.package.license=Apache-2.0"
+# - "--label=org.opencontainers.image.description=OpenIM Open source top instant messaging system"
+# - "--label=org.opencontainers.image.created={{.Date}}"
+# - "--label=org.opencontainers.image.name={{.ProjectName}}"
+# - "--label=org.opencontainers.image.revision={{.FullCommit}}"
+# - "--label=org.opencontainers.image.version={{.Version}}"
+# - "--label=org.opencontainers.image.source={{.GitURL}}"
+# - "--platform=linux/arm64"
+# goarch: arm64
+# extra_files:
+# - scripts/entrypoint.sh
+
+# docker_manifests:
+# - name_template: "goreleaser/goreleaser:{{ .Tag }}"
+# image_templates:
+# - "goreleaser/goreleaser:{{ .Tag }}-amd64"
+# - "goreleaser/goreleaser:{{ .Tag }}-arm64"
+# - name_template: "ghcr.io/goreleaser/goreleaser:{{ .Tag }}"
+# image_templates:
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-amd64"
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-arm64"
+# - name_template: "goreleaser/goreleaser:latest"
+# image_templates:
+# - "goreleaser/goreleaser:{{ .Tag }}-amd64"
+# - "goreleaser/goreleaser:{{ .Tag }}-arm64"
+# - name_template: "ghcr.io/goreleaser/goreleaser:latest"
+# image_templates:
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-amd64"
+# - "ghcr.io/goreleaser/goreleaser:{{ .Tag }}-arm64"
+
+nfpms:
+ - id: packages
+ builds:
+ - openim-api
+ - openim-cmdutils
+ - openim-crontask
+ - openim-msggateway
+ - openim-msgtransfer
+ - openim-push
+ - openim-rpc-auth
+ - openim-rpc-conversation
+ - openim-rpc-friend
+ - openim-rpc-group
+ - openim-rpc-msg
+ - openim-rpc-third
+ - openim-rpc-user
+ # Your app's vendor.
+ vendor: OpenIMSDK
+ homepage: https://github.com/openimsdk/open-im-server-deploy
+ maintainer: kubbot
+ description: |-
+ Auto sync github labels
+ kubbot && openimbot
+ license: MIT
+ formats:
+ - apk
+ - deb
+ - rpm
+ - termux.deb # Since: v1.11
+ - archlinux # Since: v1.13
+ dependencies:
+ - git
+ recommends:
+ - golang
+
+
+# The lines beneath this are called `modelines`. See `:help modeline`
+# Feel free to remove those if you don't want/use them.
+# yaml-language-server: $schema=https://goreleaser.com/static/schema.json
+# vim: set ts=2 sw=2 tw=0 fo=cnqoj
+
+# Default: './dist'
+dist: ./_output/dist
+
+# .goreleaser.yaml
+milestones:
+ # You can have multiple milestone configs
+ -
+ # Repository for the milestone
+ # Default is extracted from the origin remote URL
+ repo:
+ owner: OpenIMSDK
+ name: Open-IM-Server
+
+ # Whether to close the milestone
+ close: true
+
+ # Fail release on errors, such as missing milestone.
+ fail_on_error: false
+
+ # Name of the milestone
+ #
+ # Default: '{{ .Tag }}'
+ name_template: "Current Release"
+
+# publishers:
+# - name: "fury.io"
+# ids:
+# - packages
+# dir: "{{ dir .ArtifactPath }}"
+# cmd: |
+# bash -c '
+# if [[ "{{ .Tag }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
+# curl -F package=@{{ .ArtifactName }} https://{{ .Env.FURY_TOKEN }}@push.fury.io/{{ .Env.USERNAME }}/
+# else
+# echo "Skipping deployment: Non-production release detected"
+# fi'
+
+checksum:
+ name_template: "{{ .ProjectName }}_checksums.txt"
+ algorithm: sha256
+
+release:
+ prerelease: auto
diff --git a/build/images/Dockerfile b/build/images/Dockerfile
new file mode 100644
index 0000000..020d507
--- /dev/null
+++ b/build/images/Dockerfile
@@ -0,0 +1,24 @@
+# # Copyright © 2023 OpenIM. All rights reserved.
+# #
+# # Licensed under the Apache License, Version 2.0 (the "License");
+# # you may not use this file except in compliance with the License.
+# # You may obtain a copy of the License at
+# #
+# # http://www.apache.org/licenses/LICENSE-2.0
+# #
+# # Unless required by applicable law or agreed to in writing, software
+# # distributed under the License is distributed on an "AS IS" BASIS,
+# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# # See the License for the specific language governing permissions and
+# # limitations under the License.
+
+# FROM BASE_IMAGE
+
+# WORKDIR ${SERVER_WORKDIR}
+
+# # Set HTTP proxy
+# ARG BINARY_NAME
+
+# COPY BINARY_NAME ./bin/BINARY_NAME
+
+# ENTRYPOINT ["./bin/BINARY_NAME"]
\ No newline at end of file
diff --git a/build/images/openim-api/Dockerfile b/build/images/openim-api/Dockerfile
new file mode 100644
index 0000000..88f2c72
--- /dev/null
+++ b/build/images/openim-api/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory first (must be before COPY . . to avoid conflicts)
+# This copies protocol to /protocol, which is ../protocol relative to SERVER_DIR
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN echo "Running go mod tidy..." && go mod tidy || (echo "go mod tidy failed" && exit 1)
+
+RUN echo "Building openim-api..." && \
+    go build -v -o _output/openim-api ./cmd/openim-api || \
+    (echo "Build failed; see the error output above for details" && exit 1)
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries and mage from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-api"]
diff --git a/build/images/openim-crontask/Dockerfile b/build/images/openim-crontask/Dockerfile
new file mode 100644
index 0000000..94e043e
--- /dev/null
+++ b/build/images/openim-crontask/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-crontask ./cmd/openim-crontask
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries and mage from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-crontask"]
diff --git a/build/images/openim-msggateway/Dockerfile b/build/images/openim-msggateway/Dockerfile
new file mode 100644
index 0000000..67fb839
--- /dev/null
+++ b/build/images/openim-msggateway/Dockerfile
@@ -0,0 +1,43 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory first (must be before COPY . . to avoid conflicts)
+# This copies protocol to /protocol, which is ../protocol relative to SERVER_DIR
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN echo "Running go mod tidy..." && go mod tidy || (echo "go mod tidy failed" && exit 1)
+
+RUN echo "Building openim-msggateway..." && \
+    go build -v -o _output/openim-msggateway ./cmd/openim-msggateway || \
+    (echo "Build failed; see the error output above for details" && exit 1)
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries and mage from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-msggateway"]
diff --git a/build/images/openim-msgtransfer/Dockerfile b/build/images/openim-msgtransfer/Dockerfile
new file mode 100644
index 0000000..b8a46a3
--- /dev/null
+++ b/build/images/openim-msgtransfer/Dockerfile
@@ -0,0 +1,43 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory first (must be before COPY . . to avoid conflicts)
+# This copies protocol to /protocol, which is ../protocol relative to SERVER_DIR
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN echo "Running go mod tidy..." && go mod tidy || (echo "go mod tidy failed" && exit 1)
+
+RUN echo "Building openim-msgtransfer..." && \
+    go build -v -o _output/openim-msgtransfer ./cmd/openim-msgtransfer || \
+    (echo "Build failed; see the error output above" && exit 1)
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-msgtransfer"]
diff --git a/build/images/openim-push/Dockerfile b/build/images/openim-push/Dockerfile
new file mode 100644
index 0000000..c4da3f7
--- /dev/null
+++ b/build/images/openim-push/Dockerfile
@@ -0,0 +1,43 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory first (must be before COPY . . to avoid conflicts)
+# This copies protocol to /protocol, which is ../protocol relative to SERVER_DIR
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN echo "Running go mod tidy..." && go mod tidy || (echo "go mod tidy failed" && exit 1)
+
+RUN echo "Building openim-push..." && \
+    go build -v -o _output/openim-push ./cmd/openim-push || \
+    (echo "Build failed; see the error output above" && exit 1)
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-push"]
diff --git a/build/images/openim-rpc-auth/Dockerfile b/build/images/openim-rpc-auth/Dockerfile
new file mode 100644
index 0000000..9e7793d
--- /dev/null
+++ b/build/images/openim-rpc-auth/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-rpc-auth ./cmd/openim-rpc/openim-rpc-auth
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-auth"]
diff --git a/build/images/openim-rpc-conversation/Dockerfile b/build/images/openim-rpc-conversation/Dockerfile
new file mode 100644
index 0000000..2b74c17
--- /dev/null
+++ b/build/images/openim-rpc-conversation/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-rpc-conversation ./cmd/openim-rpc/openim-rpc-conversation
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-conversation"]
diff --git a/build/images/openim-rpc-friend/Dockerfile b/build/images/openim-rpc-friend/Dockerfile
new file mode 100644
index 0000000..bb146f1
--- /dev/null
+++ b/build/images/openim-rpc-friend/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-rpc-friend ./cmd/openim-rpc/openim-rpc-friend
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-friend"]
diff --git a/build/images/openim-rpc-group/Dockerfile b/build/images/openim-rpc-group/Dockerfile
new file mode 100644
index 0000000..70c994e
--- /dev/null
+++ b/build/images/openim-rpc-group/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-rpc-group ./cmd/openim-rpc/openim-rpc-group
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-group"]
diff --git a/build/images/openim-rpc-msg/Dockerfile b/build/images/openim-rpc-msg/Dockerfile
new file mode 100644
index 0000000..10a5bf0
--- /dev/null
+++ b/build/images/openim-rpc-msg/Dockerfile
@@ -0,0 +1,64 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git, build tools and dependencies for Quirc
+RUN apk add --no-cache git ca-certificates build-base make gcc musl-dev pkgconfig
+
+# Build and install Quirc library (static library)
+RUN cd /tmp && \
+ git clone --depth 1 https://github.com/dlbeer/quirc.git && \
+ cd quirc && \
+ sed -i 's/\$(shell pkg-config --cflags sdl 2>&1)/\$(shell pkg-config --cflags sdl 2>\/dev\/null || true)/g' Makefile && \
+ sed -i 's/\$(shell pkg-config --libs sdl)/\$(shell pkg-config --libs sdl 2>\/dev\/null || true)/g' Makefile && \
+ sed -i 's/\$(shell pkg-config --cflags opencv4 2>&1)/\$(shell pkg-config --cflags opencv4 2>\/dev\/null || true)/g' Makefile && \
+ sed -i 's/\$(shell pkg-config --libs opencv4)/\$(shell pkg-config --libs opencv4 2>\/dev\/null || true)/g' Makefile && \
+ make libquirc.a && \
+ mkdir -p /usr/local/lib /usr/local/include && \
+ cp libquirc.a /usr/local/lib/ && \
+ cp lib/quirc.h /usr/local/include/ && \
+ ls -la /usr/local/lib/libquirc.a /usr/local/include/quirc.h && \
+ rm -rf /tmp/quirc
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+ENV CGO_ENABLED=1
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+# Build with static linking for Quirc (no shared library needed at runtime)
+# -ldflags forces static linking of the Quirc library and the math library
+# Note: Alpine Linux uses musl libc, so the math library must also be linked statically
+RUN go build -tags cgo \
+ -ldflags '-linkmode external -extldflags "-static -L/usr/local/lib -lquirc -lm"' \
+ -o _output/openim-rpc-msg ./cmd/openim-rpc/openim-rpc-msg && \
+    echo "Build complete, verifying the binary..." && \
+    file _output/openim-rpc-msg && \
+    ldd _output/openim-rpc-msg 2>&1 | head -5 || echo "Static link check: the binary has no dynamic library dependencies (expected)"
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-msg"]
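The `file`/`ldd` check above can also be done programmatically: a fully static ELF carries no `PT_DYNAMIC` program header. A standalone sketch (not part of the image; path handling here is illustrative):

```go
package main

import (
	"debug/elf"
	"fmt"
	"os"
)

// isStaticallyLinked reports whether the ELF binary at path was built
// without dynamic-linker dependencies: a dynamically linked ELF carries a
// PT_DYNAMIC program header, a fully static one does not.
func isStaticallyLinked(path string) (bool, error) {
	f, err := elf.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	for _, p := range f.Progs {
		if p.Type == elf.PT_DYNAMIC {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Default to inspecting the current process; pass a path to check
	// a build artifact such as _output/openim-rpc-msg.
	path := "/proc/self/exe"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	static, err := isStaticallyLinked(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("statically linked:", static)
}
```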
diff --git a/build/images/openim-rpc-third/Dockerfile b/build/images/openim-rpc-third/Dockerfile
new file mode 100644
index 0000000..9f5751b
--- /dev/null
+++ b/build/images/openim-rpc-third/Dockerfile
@@ -0,0 +1,42 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+
+
+RUN go build -o _output/openim-rpc-third ./cmd/openim-rpc/openim-rpc-third
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-third"]
diff --git a/build/images/openim-rpc-user/Dockerfile b/build/images/openim-rpc-user/Dockerfile
new file mode 100644
index 0000000..b36115f
--- /dev/null
+++ b/build/images/openim-rpc-user/Dockerfile
@@ -0,0 +1,40 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+
+# Install git for repository access
+RUN apk add --no-cache git ca-certificates
+
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Copy protocol directory
+COPY protocol /protocol
+
+# Copy current directory
+COPY . .
+
+RUN go mod tidy
+
+RUN go build -o _output/openim-rpc-user ./cmd/openim-rpc/openim-rpc-user
+
+
+# Using Alpine Linux for the final image
+FROM alpine:latest
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+# COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "_output/openim-rpc-user"]
diff --git a/build/images/openim-tools/component/Dockerfile b/build/images/openim-tools/component/Dockerfile
new file mode 100644
index 0000000..ae8de80
--- /dev/null
+++ b/build/images/openim-tools/component/Dockerfile
@@ -0,0 +1,58 @@
+# Use Go 1.22 Alpine as the base image for building the application
+FROM golang:1.22-alpine AS builder
+# Define the base directory for the application as an environment variable
+ENV SERVER_DIR=/openim-server
+
+# Set the working directory inside the container based on the environment variable
+WORKDIR $SERVER_DIR
+
+# Set the Go proxy to improve dependency resolution speed
+
+#ENV GOPROXY=https://goproxy.io,direct
+
+# Copy all files from the current directory into the container
+COPY . .
+
+RUN go mod download
+
+# Install Mage to use for building the application
+RUN go install github.com/magefile/mage@v1.15.0
+
+# ENV BINS=openim-rpc-user
+
+# Optionally build your application if needed
+# RUN mage build ${BINS} check-free-memory seq || true
+RUN mage build check-free-memory seq || true
+
+# Using Alpine Linux with Go environment for the final image
+FROM golang:1.22-alpine
+
+# Install necessary packages, such as bash
+RUN apk add --no-cache bash
+
+# Set the environment and work directory
+ENV SERVER_DIR=/openim-server
+WORKDIR $SERVER_DIR
+
+
+# Copy the compiled binaries and mage from the builder image to the final image
+COPY --from=builder $SERVER_DIR/_output $SERVER_DIR/_output
+COPY --from=builder $SERVER_DIR/config $SERVER_DIR/config
+COPY --from=builder /go/bin/mage /usr/local/bin/mage
+COPY --from=builder $SERVER_DIR/magefile_windows.go $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/magefile_unix.go $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/magefile.go $SERVER_DIR/
+# COPY --from=builder $SERVER_DIR/start-config.yml $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/go.mod $SERVER_DIR/
+COPY --from=builder $SERVER_DIR/go.sum $SERVER_DIR/
+
+
+RUN echo -e "serviceBinaries:\n \n" \
+ > $SERVER_DIR/start-config.yml && \
+ echo -e "toolBinaries:\n - check-free-memory\n - seq\n" >> $SERVER_DIR/start-config.yml && \
+ echo "maxFileDescriptors: 10000" >> $SERVER_DIR/start-config.yml
+
+RUN go get github.com/openimsdk/gomake@v0.0.15-alpha.1
+
+# Set the command to run when the container starts
+ENTRYPOINT ["sh", "-c", "mage start && tail -f /dev/null"]
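The echo chain above assembles `start-config.yml` for mage line by line. A sketch of the intended result as a Go helper (the exact blank-line layout produced by `echo -e` may differ slightly):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// startConfigYAML mirrors the file the Dockerfile's echo chain writes: no
// service binaries (this image only runs the component-check tools), two
// tool binaries, and a file-descriptor limit for mage's preflight check.
func startConfigYAML() string {
	return "serviceBinaries:\n\n" +
		"toolBinaries:\n" +
		" - check-free-memory\n" +
		" - seq\n" +
		"maxFileDescriptors: 10000\n"
}

func main() {
	y := startConfigYAML()
	path := filepath.Join(os.TempDir(), "start-config.yml")
	if err := os.WriteFile(path, []byte(y), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %s (%d bytes)\n", path, len(y))
}
```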
diff --git a/cmd/main.go b/cmd/main.go
new file mode 100644
index 0000000..07a71fe
--- /dev/null
+++ b/cmd/main.go
@@ -0,0 +1,406 @@
+package main
+
+import (
+ "bytes"
+ "context"
+ "flag"
+ "fmt"
+ "net"
+ "os"
+ "os/signal"
+ "path"
+ "path/filepath"
+ "reflect"
+ "runtime"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/api"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/msggateway"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/msgtransfer"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/auth"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/conversation"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/group"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/msg"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/relation"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/third"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/user"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/tools/cron"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/mitchellh/mapstructure"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/discovery/standalone"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/spf13/viper"
+ "google.golang.org/grpc"
+)
+
+func init() {
+ config.SetStandalone()
+ prommetrics.RegistryAll()
+}
+
+func main() {
+ var configPath string
+ flag.StringVar(&configPath, "c", "", "config path")
+ flag.Parse()
+ if configPath == "" {
+ _, _ = fmt.Fprintln(os.Stderr, "config path is empty")
+ os.Exit(1)
+ return
+ }
+ cmd := newCmds(configPath)
+ putCmd(cmd, false, auth.Start)
+ putCmd(cmd, false, conversation.Start)
+ putCmd(cmd, false, relation.Start)
+ putCmd(cmd, false, group.Start)
+ putCmd(cmd, false, msg.Start)
+ putCmd(cmd, false, third.Start)
+ putCmd(cmd, false, user.Start)
+ putCmd(cmd, false, push.Start)
+ putCmd(cmd, true, msggateway.Start)
+ putCmd(cmd, true, msgtransfer.Start)
+ putCmd(cmd, true, api.Start)
+ putCmd(cmd, true, cron.Start)
+ ctx := context.Background()
+ if err := cmd.run(ctx); err != nil {
+ _, _ = fmt.Fprintf(os.Stderr, "server exit %s", err)
+ os.Exit(1)
+ return
+ }
+}
+
+func newCmds(confPath string) *cmds {
+ return &cmds{confPath: confPath}
+}
+
+type cmdName struct {
+ Name string
+ Func func(ctx context.Context) error
+ Block bool
+}
+type cmds struct {
+ confPath string
+ cmds []cmdName
+ config config.AllConfig
+ conf map[string]reflect.Value
+}
+
+func (x *cmds) getTypePath(typ reflect.Type) string {
+ return path.Join(typ.PkgPath(), typ.Name())
+}
+
+func (x *cmds) initDiscovery() {
+ x.config.Discovery.Enable = "standalone"
+ vof := reflect.ValueOf(&x.config.Discovery.RpcService).Elem()
+ tof := reflect.TypeOf(&x.config.Discovery.RpcService).Elem()
+ num := tof.NumField()
+ for i := 0; i < num; i++ {
+ field := tof.Field(i)
+ if !field.IsExported() {
+ continue
+ }
+ if field.Type.Kind() != reflect.String {
+ continue
+ }
+ vof.Field(i).SetString(field.Name)
+ }
+}
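`initDiscovery` walks `RpcService` with reflection and sets every exported string field to its own field name, so that in standalone mode each service is addressed by a stable, unique name. The same trick in isolation (the `rpcService` struct here is a hypothetical stand-in for the real config type):

```go
package main

import (
	"fmt"
	"reflect"
)

// rpcService stands in for config.Discovery.RpcService: one string field
// per rpc service.
type rpcService struct {
	User string
	Msg  string
	Push string
}

// fillWithFieldNames sets every exported string field of the struct pointed
// to by v to its own field name, mirroring what initDiscovery does.
func fillWithFieldNames(v any) {
	vof := reflect.ValueOf(v).Elem()
	tof := vof.Type()
	for i := 0; i < tof.NumField(); i++ {
		field := tof.Field(i)
		if !field.IsExported() || field.Type.Kind() != reflect.String {
			continue
		}
		vof.Field(i).SetString(field.Name)
	}
}

func main() {
	var s rpcService
	fillWithFieldNames(&s)
	fmt.Printf("%+v\n", s) // prints {User:User Msg:Msg Push:Push}
}
```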
+
+func (x *cmds) initAllConfig() error {
+ x.conf = make(map[string]reflect.Value)
+ vof := reflect.ValueOf(&x.config).Elem()
+ num := vof.NumField()
+ for i := 0; i < num; i++ {
+ field := vof.Field(i)
+		for field.Kind() == reflect.Ptr {
+			field = field.Elem()
+		}
+ x.conf[x.getTypePath(field.Type())] = field
+ val := field.Addr().Interface()
+ name := val.(interface{ GetConfigFileName() string }).GetConfigFileName()
+ confData, err := os.ReadFile(filepath.Join(x.confPath, name))
+ if err != nil {
+ if os.IsNotExist(err) {
+ continue
+ }
+ return err
+ }
+ v := viper.New()
+ v.SetConfigType("yaml")
+ if err := v.ReadConfig(bytes.NewReader(confData)); err != nil {
+ return err
+ }
+ opt := func(conf *mapstructure.DecoderConfig) {
+ conf.TagName = config.StructTagName
+ }
+ if err := v.Unmarshal(val, opt); err != nil {
+ return err
+ }
+ }
+ x.initDiscovery()
+ x.config.Redis.Disable = false
+ x.config.LocalCache = config.LocalCache{}
+ config.InitNotification(&x.config.Notification)
+ return nil
+}
+
+func (x *cmds) parseConf(conf any) error {
+ vof := reflect.ValueOf(conf)
+	for vof.Kind() == reflect.Ptr {
+		vof = vof.Elem()
+	}
+ tof := vof.Type()
+ numField := vof.NumField()
+ for i := 0; i < numField; i++ {
+ typeField := tof.Field(i)
+ if !typeField.IsExported() {
+ continue
+ }
+ field := vof.Field(i)
+ pkt := x.getTypePath(field.Type())
+ val, ok := x.conf[pkt]
+ if !ok {
+ switch field.Interface().(type) {
+ case config.Index:
+ case config.Path:
+ field.SetString(x.confPath)
+ case config.AllConfig:
+ field.Set(reflect.ValueOf(x.config))
+ case *config.AllConfig:
+ field.Set(reflect.ValueOf(&x.config))
+ default:
+ return fmt.Errorf("config field %s %s not found", vof.Type().Name(), typeField.Name)
+ }
+ continue
+ }
+ field.Set(val)
+ }
+ return nil
+}
+
+func (x *cmds) add(name string, block bool, fn func(ctx context.Context) error) {
+ x.cmds = append(x.cmds, cmdName{Name: name, Block: block, Func: fn})
+}
+
+func (x *cmds) initLog() error {
+ conf := x.config.Log
+ if err := log.InitLoggerFromConfig(
+ "openim-server",
+ program.GetProcessName(),
+ "", "",
+ conf.RemainLogLevel,
+ conf.IsStdout,
+ conf.IsJson,
+ conf.StorageLocation,
+ conf.RemainRotationCount,
+ conf.RotationTime,
+ strings.TrimSpace(version.Version),
+ conf.IsSimplify,
+ ); err != nil {
+ return err
+ }
+ return nil
+
+}
+
+func (x *cmds) run(ctx context.Context) error {
+ if len(x.cmds) == 0 {
+ return fmt.Errorf("no command to run")
+ }
+ if err := x.initAllConfig(); err != nil {
+ return err
+ }
+ if err := x.initLog(); err != nil {
+ return err
+ }
+
+ ctx, cancel := context.WithCancelCause(ctx)
+
+ go func() {
+ <-ctx.Done()
+ log.ZError(ctx, "context server exit cause", context.Cause(ctx))
+ }()
+
+ if prometheus := x.config.API.Prometheus; prometheus.Enable {
+ var (
+ port int
+ err error
+ )
+ if !prometheus.AutoSetPorts {
+ port, err = datautil.GetElemByIndex(prometheus.Ports, 0)
+ if err != nil {
+ return err
+ }
+ }
+ listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
+ if err != nil {
+ return fmt.Errorf("prometheus listen %d error %w", port, err)
+ }
+ defer listener.Close()
+ log.ZDebug(ctx, "prometheus start", "addr", listener.Addr())
+ go func() {
+ err := prommetrics.Start(listener)
+ if err == nil {
+ err = fmt.Errorf("http done")
+ }
+ cancel(fmt.Errorf("prometheus %w", err))
+ }()
+ }
+
+ go func() {
+ sigs := make(chan os.Signal, 1)
+	// SIGKILL cannot be trapped, so it is not registered here.
+	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
+ select {
+ case <-ctx.Done():
+ return
+ case val := <-sigs:
+ log.ZDebug(ctx, "recv signal", "signal", val.String())
+ cancel(fmt.Errorf("signal %s", val.String()))
+ }
+ }()
+
+ for i := range x.cmds {
+ cmd := x.cmds[i]
+ if cmd.Block {
+ continue
+ }
+ if err := cmd.Func(ctx); err != nil {
+ cancel(fmt.Errorf("server %s exit %w", cmd.Name, err))
+ return err
+ }
+ }
+
+ var wait cmdManger
+ for i := range x.cmds {
+ cmd := x.cmds[i]
+ if !cmd.Block {
+ continue
+ }
+ wait.Start(cmd.Name)
+ go func() {
+ defer wait.Shutdown(cmd.Name)
+ if err := cmd.Func(ctx); err != nil {
+ cancel(fmt.Errorf("server %s exit %w", cmd.Name, err))
+ return
+ }
+ cancel(fmt.Errorf("server %s exit", cmd.Name))
+ }()
+ }
+ <-ctx.Done()
+ exitCause := context.Cause(ctx)
+ log.ZWarn(ctx, "notification of service closure", exitCause)
+ done := wait.Wait()
+ timeout := time.NewTimer(time.Second * 10)
+ defer timeout.Stop()
+ for {
+ select {
+ case <-timeout.C:
+ log.ZWarn(ctx, "server exit timeout", nil, "running", wait.Running())
+ return exitCause
+ case _, ok := <-done:
+ if ok {
+ log.ZWarn(ctx, "waiting for the service to exit", nil, "running", wait.Running())
+ } else {
+ log.ZInfo(ctx, "all server exit done")
+ return exitCause
+ }
+ }
+ }
+}
+
+func putCmd[C any](cmd *cmds, block bool, fn func(ctx context.Context, config *C, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error) {
+ name := path.Base(runtime.FuncForPC(reflect.ValueOf(fn).Pointer()).Name())
+ if index := strings.Index(name, "."); index >= 0 {
+ name = name[:index]
+ }
+ cmd.add(name, block, func(ctx context.Context) error {
+ var conf C
+ if err := cmd.parseConf(&conf); err != nil {
+ return err
+ }
+ return fn(ctx, &conf, standalone.GetSvcDiscoveryRegistry(), standalone.GetServiceRegistrar())
+ })
+}
+
+type cmdManger struct {
+ lock sync.Mutex
+ done chan struct{}
+ count int
+ names map[string]struct{}
+}
+
+func (x *cmdManger) Start(name string) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.names == nil {
+ x.names = make(map[string]struct{})
+ }
+ if x.done == nil {
+ x.done = make(chan struct{}, 1)
+ }
+ if _, ok := x.names[name]; ok {
+ panic(fmt.Errorf("cmd %s already exists", name))
+ }
+ x.count++
+ x.names[name] = struct{}{}
+}
+
+func (x *cmdManger) Shutdown(name string) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if _, ok := x.names[name]; !ok {
+ panic(fmt.Errorf("cmd %s not exists", name))
+ }
+ delete(x.names, name)
+ x.count--
+ if x.count == 0 {
+ close(x.done)
+ } else {
+ select {
+ case x.done <- struct{}{}:
+ default:
+ }
+ }
+}
+
+func (x *cmdManger) Wait() <-chan struct{} {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.count == 0 || x.done == nil {
+ tmp := make(chan struct{})
+ close(tmp)
+ return tmp
+ }
+ return x.done
+}
+
+func (x *cmdManger) Running() []string {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ names := make([]string, 0, len(x.names))
+ for name := range x.names {
+ names = append(names, name)
+ }
+ return names
+}
diff --git a/cmd/openim-api/main.go b/cmd/openim-api/main.go
new file mode 100644
index 0000000..0e15d55
--- /dev/null
+++ b/cmd/openim-api/main.go
@@ -0,0 +1,28 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ _ "net/http/pprof"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewApiCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-cmdutils/main.go b/cmd/openim-cmdutils/main.go
new file mode 100644
index 0000000..fa53c02
--- /dev/null
+++ b/cmd/openim-cmdutils/main.go
@@ -0,0 +1,66 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ msgUtilsCmd := cmd.NewMsgUtilsCmd("openIMCmdUtils", "openIM cmd utils", nil)
+ getCmd := cmd.NewGetCmd()
+ fixCmd := cmd.NewFixCmd()
+ clearCmd := cmd.NewClearCmd()
+ seqCmd := cmd.NewSeqCmd()
+ msgCmd := cmd.NewMsgCmd()
+ getCmd.AddCommand(seqCmd.GetSeqCmd(), msgCmd.GetMsgCmd())
+ getCmd.AddSuperGroupIDFlag()
+ getCmd.AddUserIDFlag()
+ getCmd.AddConfigDirFlag()
+ getCmd.AddIndexFlag()
+ getCmd.AddBeginSeqFlag()
+ getCmd.AddLimitFlag()
+ // openIM get seq --userID=xxx
+ // openIM get seq --superGroupID=xxx
+ // openIM get msg --userID=xxx --beginSeq=100 --limit=10
+ // openIM get msg --superGroupID=xxx --beginSeq=100 --limit=10
+
+ fixCmd.AddCommand(seqCmd.FixSeqCmd())
+ fixCmd.AddSuperGroupIDFlag()
+ fixCmd.AddUserIDFlag()
+ fixCmd.AddConfigDirFlag()
+ fixCmd.AddIndexFlag()
+ fixCmd.AddFixAllFlag()
+ // openIM fix seq --userID=xxx
+ // openIM fix seq --superGroupID=xxx
+ // openIM fix seq --fixAll
+
+ clearCmd.AddCommand(msgCmd.ClearMsgCmd())
+ clearCmd.AddSuperGroupIDFlag()
+ clearCmd.AddUserIDFlag()
+ clearCmd.AddConfigDirFlag()
+ clearCmd.AddIndexFlag()
+ clearCmd.AddClearAllFlag()
+ clearCmd.AddBeginSeqFlag()
+ clearCmd.AddLimitFlag()
+ // openIM clear msg --userID=xxx --beginSeq=100 --limit=10
+ // openIM clear msg --superGroupID=xxx --beginSeq=100 --limit=10
+ // openIM clear msg --clearAll
+ msgUtilsCmd.AddCommand(&getCmd.Command, &fixCmd.Command, &clearCmd.Command)
+ if err := msgUtilsCmd.Execute(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-crontask/main.go b/cmd/openim-crontask/main.go
new file mode 100644
index 0000000..e67abf1
--- /dev/null
+++ b/cmd/openim-crontask/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewCronTaskCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-msggateway/main.go b/cmd/openim-msggateway/main.go
new file mode 100644
index 0000000..4e65223
--- /dev/null
+++ b/cmd/openim-msggateway/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewMsgGatewayCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-msgtransfer/main.go b/cmd/openim-msgtransfer/main.go
new file mode 100644
index 0000000..46089a5
--- /dev/null
+++ b/cmd/openim-msgtransfer/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewMsgTransferCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-push/main.go b/cmd/openim-push/main.go
new file mode 100644
index 0000000..0d9b94c
--- /dev/null
+++ b/cmd/openim-push/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewPushRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-auth/main.go b/cmd/openim-rpc/openim-rpc-auth/main.go
new file mode 100644
index 0000000..40e2730
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-auth/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewAuthRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-conversation/main.go b/cmd/openim-rpc/openim-rpc-conversation/main.go
new file mode 100644
index 0000000..8d5018d
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-conversation/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewConversationRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-friend/main.go b/cmd/openim-rpc/openim-rpc-friend/main.go
new file mode 100644
index 0000000..d115850
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-friend/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewFriendRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-group/main.go b/cmd/openim-rpc/openim-rpc-group/main.go
new file mode 100644
index 0000000..96f4708
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-group/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewGroupRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-msg/main.go b/cmd/openim-rpc/openim-rpc-msg/main.go
new file mode 100644
index 0000000..6eb4cc5
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-msg/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewMsgRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-third/main.go b/cmd/openim-rpc/openim-rpc-third/main.go
new file mode 100644
index 0000000..0902df5
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-third/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewThirdRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/cmd/openim-rpc/openim-rpc-user/main.go b/cmd/openim-rpc/openim-rpc-user/main.go
new file mode 100644
index 0000000..57c7277
--- /dev/null
+++ b/cmd/openim-rpc/openim-rpc-user/main.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ "github.com/openimsdk/tools/system/program"
+)
+
+func main() {
+ if err := cmd.NewUserRpcCmd().Exec(); err != nil {
+ program.ExitWithError(err)
+ }
+}
diff --git a/config/README.md b/config/README.md
new file mode 100644
index 0000000..eff2bb9
--- /dev/null
+++ b/config/README.md
@@ -0,0 +1,87 @@
+# OpenIM Configuration File Descriptions and Common Configuration Modifications
+
+## External Component Configurations
+
+| Configuration File | Description |
+| ------------------ |-------------------------------------------------------------|
+| **kafka.yml** | Configuration for Kafka username, password, address, etc. |
+| **redis.yml** | Configuration for Redis password, address, etc. |
+| **minio.yml** | Configuration for MinIO username, password, address, etc. |
+| **mongodb.yml** | Configuration for MongoDB username, password, address, etc. |
+| **discovery.yml**  | Service discovery settings and etcd username, password, and address |
+
+## OpenIMServer Related Configurations
+| Configuration File | Description |
+| ------------------------------- | ---------------------------------------------- |
+| **log.yml** | Configuration for logging levels and storage directory |
+| **notification.yml** | Event notification settings (e.g., add friend, create group) |
+| **share.yml** | Common settings for all services (e.g., secrets) |
+| **webhooks.yml** | Webhook URLs and related settings |
+| **local-cache.yml** | Local cache settings (generally do not modify) |
+| **openim-rpc-third.yml** | openim-rpc-third listen IP, port, and object storage settings |
+| **openim-rpc-user.yml** | openim-rpc-user listen IP and port settings |
+| **openim-api.yml** | openim-api listen IP, port, and other settings |
+| **openim-crontask.yml** | openim-crontask scheduled task settings |
+| **openim-msggateway.yml** | openim-msggateway listen IP, port, and other settings |
+| **openim-msgtransfer.yml** | Settings for openim-msgtransfer service |
+| **openim-push.yml** | openim-push listen IP, port, and offline push settings |
+| **openim-rpc-auth.yml** | openim-rpc-auth listen IP, port, token validity settings |
+| **openim-rpc-conversation.yml** | openim-rpc-conversation listen IP and port settings |
+| **openim-rpc-friend.yml** | openim-rpc-friend listen IP and port settings |
+| **openim-rpc-group.yml** | openim-rpc-group listen IP and port settings |
+| **openim-rpc-msg.yml** | openim-rpc-msg listen IP and port settings |
+
+
+## Monitoring and Alerting Related Configurations
+| Configuration File | Description |
+| ------------------------------ | --------------- |
+| **prometheus.yml** | Prometheus configuration |
+| **instance-down-rules.yml** | Alert rules |
+| **alertmanager.yml** | Alertmanager configuration |
+| **email.tmpl** | Email alert template |
+| **grafana-template/Demo.json** | Default Grafana dashboard |
+
+## Common Configuration Modifications
+| Configuration Item | Configuration File |
+| -------------------------------------------------------- | ----------------------- |
+| Configure MinIO as object storage (focus on the externalAddress field) | `minio.yml` |
+| Adjust log level and number of log files | `log.yml` |
+| Enable or disable friend verification when sending messages | `openim-rpc-msg.yml` |
+| OpenIMServer secret | `share.yml` |
+| Configure OSS, COS, AWS, or Kodo as object storage | `openim-rpc-third.yml` |
+| Multi-end mutual kick strategy and max concurrent connections per gateway | `openim-msggateway.yml` |
+| Offline message push configuration | `openim-push.yml` |
+| Configure webhooks for callback notifications (e.g., before/after message send) | `webhooks.yml` |
+| Whether new group members can view historical messages | `openim-rpc-group.yml` |
+| Token expiration time settings | `openim-rpc-auth.yml` |
+| Scheduled task settings (e.g., how long to retain messages) | `openim-crontask.yml` |
+
+## Starting Multiple Instances of a Service and Maximum File Descriptors
+
+
+To start multiple instances of an OpenIM service, add one port per instance to the service's port lists and update the `start-config.yml` file in the project's root directory,
+then restart the services. For example, to start 2 instances of `openim-rpc-user`:
+
+```yaml
+rpc:
+ registerIP: ''
+ listenIP: 0.0.0.0
+ ports: [ 10110, 10111 ]
+
+prometheus:
+ enable: true
+ ports: [ 20100, 20101 ]
+```
+
+Modify `start-config.yml`:
+
+```yaml
+serviceBinaries:
+ openim-rpc-user: 2
+```
+
+To set the maximum number of simultaneously open file descriptors (each online user typically occupies one):
+
+```yaml
+maxFileDescriptors: 10000
+```
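+
+The number of entries in each `ports` list must match the instance count, one port per instance. As a sketch (port values here are illustrative, continuing the defaults above), running 3 instances of `openim-rpc-user` would need:
+
+```yaml
+rpc:
+  registerIP: ''
+  listenIP: 0.0.0.0
+  ports: [ 10110, 10111, 10112 ]
+
+prometheus:
+  enable: true
+  ports: [ 20100, 20101, 20102 ]
+```
+
+```yaml
+serviceBinaries:
+  openim-rpc-user: 3
+```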
diff --git a/config/README_zh_CN.md b/config/README_zh_CN.md
new file mode 100644
index 0000000..679bfe5
--- /dev/null
+++ b/config/README_zh_CN.md
@@ -0,0 +1,86 @@
+# OpenIM配置文件说明以及常用配置修改说明
+
+## 外部组件相关配置
+
+| Configuration File | Description |
+| ------------------ | ---------------------------------- |
+| **kafka.yml** | Kafka用户名、密码、地址等配置 |
+| **redis.yml** | Redis密码、地址等配置 |
+| **minio.yml** | MinIO用户名、密码、地址等配置 |
+| **mongodb.yml** | MongoDB用户名、密码、地址等配置 |
+| **discovery.yml** | 服务发现以及etcd用户名、密码、地址 |
+
+## OpenIMServer相关配置
+| Configuration File | Description |
+| ------------------------------- | ---------------------------------------------- |
+| **log.yml** | 日志级别及存储目录等配置 |
+| **notification.yml** | 添加好友、创建群组等事件通知配置 |
+| **share.yml** | 各服务所需的公共配置,如secret等 |
+| **webhooks.yml** | Webhook中URL等配置 |
+| **local-cache.yml** | 本地缓存配置,一般不用修改 |
+| **openim-rpc-third.yml** | openim-rpc-third监听IP、端口及对象存储配置 |
+| **openim-rpc-user.yml** | openim-rpc-user监听IP、端口配置 |
+| **openim-api.yml** | openim-api监听IP、端口等配置 |
+| **openim-crontask.yml** | openim-crontask定时任务配置 |
+| **openim-msggateway.yml** | openim-msggateway监听IP、端口等配置 |
+| **openim-msgtransfer.yml** | openim-msgtransfer服务配置 |
+| **openim-push.yml** | openim-push监听IP、端口及离线推送配置 |
+| **openim-rpc-auth.yml** | openim-rpc-auth监听IP、端口及token有效期等配置 |
+| **openim-rpc-conversation.yml** | openim-rpc-conversation监听IP、端口等配置 |
+| **openim-rpc-friend.yml** | openim-rpc-friend监听IP、端口等配置 |
+| **openim-rpc-group.yml** | openim-rpc-group监听IP、端口等配置 |
+| **openim-rpc-msg.yml** | openim-rpc-msg服务的监听IP、端口等配置 |
+
+
+## 监控告警相关配置
+| Configuration File | Description |
+| ------------------------------ | --------------- |
+| **prometheus.yml** | prometheus配置 |
+| **instance-down-rules.yml** | 告警规则 |
+| **alertmanager.yml** | 告警管理配置 |
+| **email.tmpl** | 邮件告警模版 |
+| **grafana-template/Demo.json** | 默认的dashboard |
+
+## 常用配置修改
+| 修改配置项 | 配置文件 |
+| -------------------------------------------------------- | ----------------------- |
+| 使用minio作为对象存储时配置,重点关注externalAddress字段 | `minio.yml` |
+| 日志级别及日志文件数量调整 | `log.yml` |
+| 发送消息是否需要验证好友关系 | `openim-rpc-msg.yml` |
+| OpenIMServer秘钥 | `share.yml` |
+| 使用oss, cos, aws, kodo作为对象存储时配置 | `openim-rpc-third.yml` |
+| 多端互踢策略,单个gateway同时最大连接数 | `openim-msggateway.yml` |
+| 消息离线推送 | `openim-push.yml` |
+| 配置webhook来通知回调服务器,如消息发送前后回调 | `webhooks.yml` |
+| 新入群用户是否可以查看历史消息 | `openim-rpc-group.yml` |
+| token 过期时间设置 | `openim-rpc-auth.yml` |
+| 定时任务设置,例如消息保存多长时间 | `openim-crontask.yml` |
+
+## 启动某个服务的多个实例和最大文件句柄数
+
+
+若要启动某个OpenIM的多个实例,只需增加对应的端口数,并修改项目根目录下的`start-config.yml`文件,重启服务即可生效。例如,启动2个`openim-rpc-user`实例的配置如下:
+
+```yaml
+rpc:
+ registerIP: ''
+ listenIP: 0.0.0.0
+ ports: [ 10110, 10111 ]
+
+prometheus:
+ enable: true
+ ports: [ 20100, 20101 ]
+```
+
+修改`start-config.yml`:
+
+```yaml
+serviceBinaries:
+ openim-rpc-user: 2
+```
+
+修改最大同时打开的文件句柄数,一般是每个在线用户占用一个
+
+```yaml
+maxFileDescriptors: 10000
+```
diff --git a/config/alertmanager.yml b/config/alertmanager.yml
new file mode 100644
index 0000000..6c675ab
--- /dev/null
+++ b/config/alertmanager.yml
@@ -0,0 +1,34 @@
+global:
+ resolve_timeout: 5m
+ smtp_from: alert@openim.io
+ smtp_smarthost: smtp.163.com:465
+ smtp_auth_username: alert@openim.io
+ smtp_auth_password: YOURAUTHPASSWORD
+ smtp_require_tls: false
+ smtp_hello: xxx
+
+templates:
+ - /etc/alertmanager/email.tmpl
+
+route:
+ group_by: [ 'alertname' ]
+ group_wait: 5s
+ group_interval: 5s
+ repeat_interval: 5m
+ receiver: email
+ routes:
+ - matchers:
+ - alertname = "XXX"
+ group_by: [ 'instance' ]
+ group_wait: 5s
+ group_interval: 5s
+ repeat_interval: 5m
+ receiver: email
+
+receivers:
+ - name: email
+ email_configs:
+ - to: 'alert@example.com'
+ html: '{{ template "email.to.html" . }}'
+ headers: { Subject: "[OPENIM-SERVER]Alarm" }
+ send_resolved: true
diff --git a/config/discovery.yml b/config/discovery.yml
new file mode 100644
index 0000000..2251dce
--- /dev/null
+++ b/config/discovery.yml
@@ -0,0 +1,22 @@
+enable: etcd
+etcd:
+ rootDirectory: openim
+ address: [localhost:12379]
+ ## Attention: If you set auth in etcd
+ ## you must also update the username and password in Chat project.
+ username:
+ password:
+
+kubernetes:
+ namespace: default
+
+rpcService:
+ user: user-rpc-service
+ friend: friend-rpc-service
+ msg: msg-rpc-service
+ push: push-rpc-service
+ messageGateway: messagegateway-rpc-service
+ group: group-rpc-service
+ auth: auth-rpc-service
+ conversation: conversation-rpc-service
+ third: third-rpc-service
diff --git a/config/email.tmpl b/config/email.tmpl
new file mode 100644
index 0000000..824144e
--- /dev/null
+++ b/config/email.tmpl
@@ -0,0 +1,36 @@
+{{ define "email.to.html" }}
+{{ if eq .Status "firing" }}
+    {{ range .Alerts }}
+
+
+OpenIM Alert
+Alert Status: firing
+Alert Program: Prometheus Alert
+Severity Level: {{ .Labels.severity }}
+Alert Type: {{ .Labels.alertname }}
+Affected Host: {{ .Labels.instance }}
+Affected Service: {{ .Labels.job }}
+Alert Subject: {{ .Annotations.summary }}
+Trigger Time: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
+
+    {{ end }}
+
+
+{{ else if eq .Status "resolved" }}
+    {{ range .Alerts }}
+
+
+OpenIM Alert
+Alert Status: resolved
+Alert Program: Prometheus Alert
+Severity Level: {{ .Labels.severity }}
+Alert Type: {{ .Labels.alertname }}
+Affected Host: {{ .Labels.instance }}
+Affected Service: {{ .Labels.job }}
+Alert Subject: {{ .Annotations.summary }}
+Trigger Time: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
+
+    {{ end }}
+
+{{ end }}
+{{ end }}
diff --git a/config/grafana-template/Demo.json b/config/grafana-template/Demo.json
new file mode 100644
index 0000000..ea17d2c
--- /dev/null
+++ b/config/grafana-template/Demo.json
@@ -0,0 +1,5576 @@
+{
+ "__inputs": [
+ {
+ "name": "DS_PROMETHEUS",
+ "label": "prometheus",
+ "description": "",
+ "type": "datasource",
+ "pluginId": "prometheus",
+ "pluginName": "Prometheus"
+ }
+ ],
+ "__elements": {},
+ "__requires": [
+ {
+ "type": "grafana",
+ "id": "grafana",
+ "name": "Grafana",
+ "version": "11.0.1"
+ },
+ {
+ "type": "datasource",
+ "id": "prometheus",
+ "name": "Prometheus",
+ "version": "1.0.0"
+ },
+ {
+ "type": "panel",
+ "id": "timeseries",
+ "name": "Time series",
+ "version": ""
+ }
+ ],
+ "annotations": {
+ "list": [
+ {
+ "builtIn": 1,
+ "datasource": {
+ "type": "grafana",
+ "uid": "-- Grafana --"
+ },
+ "enable": true,
+ "hide": true,
+ "iconColor": "rgba(0, 211, 255, 1)",
+ "name": "Annotations & Alerts",
+ "type": "dashboard"
+ }
+ ]
+ },
+ "editable": true,
+ "fiscalYearStartMonth": 0,
+ "graphTooltip": 0,
+ "id": null,
+ "links": [],
+ "liveNow": false,
+ "panels": [
+ {
+ "collapsed": false,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 0
+ },
+ "id": 35,
+ "panels": [],
+ "title": "Server",
+ "type": "row"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "Is the service up.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "stepBefore",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 2,
+ "pointSize": 9,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bool_on_off"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 6,
+ "y": 1
+ },
+ "id": 1,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "up",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "UP",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of online users and login users within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "online users"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#37bbff",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 12
+ },
+ "id": 37,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "online_user_num",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "online users",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "increase(user_login_total[$time])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "login num",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Login Information",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of register users within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "register users"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#7437ff",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 12
+ },
+ "id": 59,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "user_register_total",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "register users",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Register num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of chat msg success.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 23
+ },
+ "id": 38,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(single_chat_msg_process_success_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "single msgs",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "increase(group_chat_msg_process_success_total[$time])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "group msgs",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Chat Msg Success Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of chat msg failed .",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "single msgs"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#ff00dc",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ },
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "group msgs"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#0cffef",
+ "mode": "fixed"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 23
+ },
+ "id": 39,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(single_chat_msg_process_failed_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "single msgs",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "increase(group_chat_msg_process_failed_total[$time])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "group msgs",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Chat Msg Failed Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of messages that failed to be pushed offline.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "failed msgs"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "dark-red",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 8,
+ "x": 0,
+ "y": 33
+ },
+ "id": 42,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(msg_offline_push_failed_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "addr:{{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Msg Offline Push Failed Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of failed seq set operations.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "failed msgs"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "semi-dark-green",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 8,
+ "x": 8,
+ "y": 33
+ },
+ "id": 43,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(seq_set_failed_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "addr: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Seq Set Failed Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of messages that take a long time to send.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "failed msgs"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "dark-red",
+ "mode": "fixed",
+ "seriesBy": "last"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 8,
+ "x": 16,
+ "y": 33
+ },
+ "id": 60,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "msg_long_time_push_total",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "addr:{{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Long Time Send Msg Total",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of successfully inserted messages.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 44
+ },
+ "id": 44,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(msg_insert_redis_success_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "redis: {{instance}}",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "increase(msg_insert_mongo_success_total[$time])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "mongo: {{instance}}",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Msg Success Insert Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of messages that failed to be inserted.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 44
+ },
+ "id": 45,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "increase(msg_insert_redis_failed_total[$time])",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "redis: {{instance}}",
+ "range": true,
+ "refId": "A"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "increase(msg_insert_mongo_failed_total[$time])",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "mongo: {{instance}}",
+ "range": true,
+ "refId": "B"
+ }
+ ],
+ "title": "Msg Failed Insert Num",
+ "type": "timeseries"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 54
+ },
+ "id": 22,
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the total number of calls across all APIs.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 12,
+ "x": 0,
+ "y": 13
+ },
+ "id": 29,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (api_count)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Requests Total",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of calls across all APIs within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": [
+ {
+ "__systemRef": "hideSeriesFrom",
+ "matcher": {
+ "id": "byNames",
+ "options": {
+ "mode": "exclude",
+ "names": [
+ "/friend/get_friend_list"
+ ],
+ "prefix": "All except:",
+ "readOnly": true
+ }
+ },
+ "properties": [
+ {
+ "id": "custom.hideFrom",
+ "value": {
+ "legend": false,
+ "tooltip": false,
+ "viz": true
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 12,
+ "x": 12,
+ "y": 13
+ },
+ "id": 48,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (increase(api_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Requests Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of API calls that returned an error.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 14,
+ "w": 12,
+ "x": 0,
+ "y": 22
+ },
+ "id": 24,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (api_count{code != \"0\"})",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Error Total",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of API calls that returned an error, grouped by error code.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 14,
+ "w": 12,
+ "x": 12,
+ "y": 22
+ },
+ "id": 23,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path, code) (api_count{code != \"0\"})",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{path}}: code={{code}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Error Total With Code",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the QPS of the API.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "reqps"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "Value"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#1ed9d4",
+ "mode": "fixed"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 24,
+ "x": 0,
+ "y": 36
+ },
+ "id": 51,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum(rate(api_count[1m]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "qps",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API QPS",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of API calls that returned an error within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 12,
+ "w": 12,
+ "x": 0,
+ "y": 45
+ },
+ "id": 49,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (increase(api_count{code != \"0\"}[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Error Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of API calls that returned an error, grouped by error code, within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 12,
+ "w": 12,
+ "x": 12,
+ "y": 45
+ },
+ "id": 50,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path, code) (increase(api_count{code != \"0\"}[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{path}}: code={{code}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "API Error Num With Code",
+ "type": "timeseries"
+ }
+ ],
+ "title": "API",
+ "type": "row"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 55
+ },
+ "id": 28,
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the total number of calls across all RPCs.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 24,
+ "x": 0,
+ "y": 14
+ },
+ "id": 21,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (rpc_count)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Total Count",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of RPC calls that returned an error.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 24
+ },
+ "id": 31,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (rpc_count{code!=\"0\"})",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Error Count",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of RPC calls that returned an error, grouped by error code.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 24
+ },
+ "id": 33,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path, code) (rpc_count{code!=\"0\"})",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{path}}: code={{code}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Error Count With Code",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of calls across all RPCs within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 24,
+ "x": 0,
+ "y": 34
+ },
+ "id": 52,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (increase(rpc_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Total Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of RPC calls within the time frame, aggregated by name.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 13,
+ "w": 12,
+ "x": 0,
+ "y": 43
+ },
+ "id": 30,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (name) (increase(rpc_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Num by Name",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the number of RPC calls within the time frame, aggregated by address.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 13,
+ "w": 12,
+ "x": 12,
+ "y": 43
+ },
+ "id": 32,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (instance) (increase(rpc_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Num by Address",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the number of RPC error returns within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 0,
+ "y": 56
+ },
+ "id": 54,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path) (increase(rpc_count{code!=\"0\"}[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "__auto",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Error Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the number of RPC error returns within the time frame, grouped by error code.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 10,
+ "w": 12,
+ "x": 12,
+ "y": 56
+ },
+ "id": 53,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (path, code) (increase(rpc_count{code!=\"0\"}[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{path}}: code={{code}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "RPC Error Num With Code",
+ "type": "timeseries"
+ }
+ ],
+ "title": "RPC",
+ "type": "row"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 56
+ },
+ "id": 25,
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of HTTP requests.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 15
+ },
+ "id": 27,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (method, path) (http_count)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{method}}: {{path}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "HTTP Total Count",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the number of HTTP requests, grouped by status code.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 15
+ },
+ "id": 26,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (method, path, status) (http_count)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{method}}: {{path}}: {{status}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "HTTP Total Count With Status",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of HTTP requests within the time frame.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 26
+ },
+ "id": 55,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (method, path) (increase(http_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{method}}: {{path}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "HTTP Total Num",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the number of HTTP requests within the time frame, grouped by status code.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 26
+ },
+ "id": 56,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "right",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum by (method, path, status) (increase(http_count[$time]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{method}}: {{path}}: {{status}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "HTTP Total Num With Status",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the QPS (queries per second) of HTTP requests.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "reqps"
+ },
+ "overrides": [
+ {
+ "matcher": {
+ "id": "byName",
+ "options": "Value"
+ },
+ "properties": [
+ {
+ "id": "color",
+ "value": {
+ "fixedColor": "#1ed9d4",
+ "mode": "fixed"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "gridPos": {
+ "h": 9,
+ "w": 24,
+ "x": 0,
+ "y": 37
+ },
+ "id": 57,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "sum(rate(http_count[1m]))",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "qps",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "HTTP QPS",
+ "type": "timeseries"
+ }
+ ],
+ "title": "HTTP",
+ "type": "row"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 57
+ },
+ "id": 6,
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents CPU usage as a percentage, calculated from the average CPU time consumed per second over the last minute.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 5
+ },
+ "id": 5,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n rate(process_cpu_seconds_total{job=~\"$rpcNameFilter\"}[1m])*100,\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "CPU Usage Percentage",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents CPU usage as a percentage, calculated from the average CPU time consumed per second over the last minute.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 5
+ },
+ "id": 4,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n rate(process_cpu_seconds_total{job!~\"$rpcNameFilter\"}[1m])*100,\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "CPU Usage Percentage",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of open file descriptors.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 16
+ },
+ "id": 7,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_open_fds{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Open File Descriptors",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of open file descriptors.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 16
+ },
+ "id": 8,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_open_fds{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Open File Descriptors",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the process virtual memory size in bytes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 27
+ },
+ "id": 9,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_virtual_memory_bytes{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+          "title": "Virtual Memory Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the process virtual memory size in bytes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green",
+ "value": null
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 27
+ },
+ "id": 10,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_virtual_memory_bytes{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+          "title": "Virtual Memory Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the process resident memory size in bytes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 38
+ },
+ "id": 11,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_resident_memory_bytes{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+          "title": "Resident Memory Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+          "description": "This metric represents the process resident memory size in bytes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 38
+ },
+ "id": 12,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "pluginVersion": "10.3.7",
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n process_resident_memory_bytes{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "{{job}}: {{instance}}",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+          "title": "Resident Memory Bytes",
+ "type": "timeseries"
+ }
+ ],
+ "title": "Process",
+ "type": "row"
+ },
+ {
+ "collapsed": true,
+ "gridPos": {
+ "h": 1,
+ "w": 24,
+ "x": 0,
+ "y": 58
+ },
+ "id": 3,
+ "panels": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "Measures the frequency of garbage collection operations in the Go environment, averaged over the last five minutes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "s"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 6
+ },
+ "id": 58,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n rate(go_gc_duration_seconds_count{job=~\"$rpcNameFilter\"}[5m]),\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "GC Rate Per Second",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "Measures the frequency of garbage collection operations in the Go environment, averaged over the last five minutes.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "s"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 6
+ },
+ "id": 2,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "expr": "label_replace(\r\n rate(go_gc_duration_seconds_count{job!~\"$rpcNameFilter\"}[5m]),\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "hide": false,
+ "instant": false,
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "GC Rate Per Second",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of goroutines.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 17
+ },
+ "id": 13,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_goroutines{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Goroutines",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of goroutines.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 17
+ },
+ "id": 14,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_goroutines{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Goroutines",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes allocated and still in use.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 28
+ },
+ "id": 15,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_alloc_bytes{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Alloc Bytes ",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes allocated and still in use.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 28
+ },
+ "id": 16,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_alloc_bytes{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Alloc Bytes ",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes used by the profiling bucket hash table.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 39
+ },
+ "id": 17,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_buck_hash_sys_bytes{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Buck Hash Sys Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes used by the profiling bucket hash table.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 39
+ },
+ "id": 18,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_buck_hash_sys_bytes{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Buck Hash Sys Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes in use by mcache structures.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 0,
+ "y": 50
+ },
+ "id": 19,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_mcache_inuse_bytes{job=~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Mcache Bytes",
+ "type": "timeseries"
+ },
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "description": "This metric represents the number of bytes in use by mcache structures.",
+ "fieldConfig": {
+ "defaults": {
+ "color": {
+ "mode": "palette-classic"
+ },
+ "custom": {
+ "axisBorderShow": false,
+ "axisCenteredZero": false,
+ "axisColorMode": "text",
+ "axisLabel": "",
+ "axisPlacement": "auto",
+ "barAlignment": 0,
+ "drawStyle": "line",
+ "fillOpacity": 0,
+ "gradientMode": "none",
+ "hideFrom": {
+ "legend": false,
+ "tooltip": false,
+ "viz": false
+ },
+ "insertNulls": false,
+ "lineInterpolation": "linear",
+ "lineStyle": {
+ "fill": "solid"
+ },
+ "lineWidth": 1,
+ "pointSize": 5,
+ "scaleDistribution": {
+ "type": "linear"
+ },
+ "showPoints": "auto",
+ "spanNulls": false,
+ "stacking": {
+ "group": "A",
+ "mode": "none"
+ },
+ "thresholdsStyle": {
+ "mode": "off"
+ }
+ },
+ "fieldMinMax": false,
+ "mappings": [],
+ "thresholds": {
+ "mode": "absolute",
+ "steps": [
+ {
+ "color": "green"
+ },
+ {
+ "color": "red",
+ "value": 80
+ }
+ ]
+ },
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "gridPos": {
+ "h": 11,
+ "w": 12,
+ "x": 12,
+ "y": 50
+ },
+ "id": 20,
+ "options": {
+ "legend": {
+ "calcs": [],
+ "displayMode": "list",
+ "placement": "bottom",
+ "showLegend": true
+ },
+ "tooltip": {
+ "maxHeight": 600,
+ "mode": "single",
+ "sort": "none"
+ }
+ },
+ "targets": [
+ {
+ "datasource": {
+ "type": "prometheus",
+ "uid": "${DS_PROMETHEUS}"
+ },
+ "editorMode": "code",
+ "exemplar": false,
+ "expr": "label_replace(\r\n go_memstats_mcache_inuse_bytes{job!~\"$rpcNameFilter\"},\r\n \"job\",\r\n \"$1\",\r\n \"job\",\r\n \".*openim-(.*)\"\r\n)",
+ "format": "time_series",
+ "hide": false,
+ "instant": false,
+ "interval": "",
+ "legendFormat": "$legendName",
+ "range": true,
+ "refId": "A"
+ }
+ ],
+ "title": "Go Mcache Bytes",
+ "type": "timeseries"
+ }
+ ],
+ "title": "Go Information",
+ "type": "row"
+ }
+ ],
+ "refresh": "5s",
+ "schemaVersion": 39,
+ "tags": [],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "selected": false,
+ "text": "openimserver-openim-rpc.*",
+ "value": "openimserver-openim-rpc.*"
+ },
+ "hide": 0,
+ "includeAll": false,
+ "label": "filter",
+ "multi": false,
+ "name": "rpcNameFilter",
+ "options": [
+ {
+ "selected": true,
+ "text": "openimserver-openim-rpc.*",
+ "value": "openimserver-openim-rpc.*"
+ }
+ ],
+ "query": "openimserver-openim-rpc.*",
+ "queryValue": "",
+ "skipUrlSync": false,
+ "type": "custom"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "{{job}}: {{instance}}",
+ "value": "{{job}}: {{instance}}"
+ },
+ "description": "common legend name",
+ "hide": 0,
+ "includeAll": false,
+ "label": "legend",
+ "multi": false,
+ "name": "legendName",
+ "options": [
+ {
+ "selected": true,
+ "text": "{{job}}: {{instance}}",
+ "value": "{{job}}: {{instance}}"
+ }
+ ],
+ "query": "{{job}}: {{instance}}",
+ "queryValue": "",
+ "skipUrlSync": false,
+ "type": "custom"
+ },
+ {
+ "current": {
+ "selected": false,
+ "text": "5m",
+ "value": "5m"
+ },
+ "description": "Global PromQL time range.",
+ "hide": 0,
+ "includeAll": false,
+ "label": "time",
+ "multi": false,
+ "name": "time",
+ "options": [
+ {
+ "selected": false,
+ "text": "1m",
+ "value": "1m"
+ },
+ {
+ "selected": true,
+ "text": "5m",
+ "value": "5m"
+ },
+ {
+ "selected": false,
+ "text": "30m",
+ "value": "30m"
+ },
+ {
+ "selected": false,
+ "text": "1h",
+ "value": "1h"
+ },
+ {
+ "selected": false,
+ "text": "3h",
+ "value": "3h"
+ },
+ {
+ "selected": false,
+ "text": "6h",
+ "value": "6h"
+ },
+ {
+ "selected": false,
+ "text": "12h",
+ "value": "12h"
+ },
+ {
+ "selected": false,
+ "text": "24h",
+ "value": "24h"
+ },
+ {
+ "selected": false,
+ "text": "1w",
+ "value": "1w"
+ },
+ {
+ "selected": false,
+ "text": "4w",
+ "value": "4w"
+ },
+ {
+ "selected": false,
+ "text": "12w",
+ "value": "12w"
+ },
+ {
+ "selected": false,
+ "text": "24w",
+ "value": "24w"
+ },
+ {
+ "selected": false,
+ "text": "1y",
+ "value": "1y"
+ },
+ {
+ "selected": false,
+ "text": "2y",
+ "value": "2y"
+ },
+ {
+ "selected": false,
+ "text": "4y",
+ "value": "4y"
+ },
+ {
+ "selected": false,
+ "text": "10y",
+ "value": "10y"
+ }
+ ],
+ "query": "1m,5m,30m,1h,3h,6h,12h,24h,1w,4w,12w,24w,1y,2y,4y,10y",
+ "queryValue": "",
+ "skipUrlSync": false,
+ "type": "custom"
+ }
+ ]
+ },
+ "time": {
+ "from": "now-15m",
+ "to": "now"
+ },
+ "timeRangeUpdatedDuringEditOrView": false,
+ "timepicker": {},
+ "timezone": "",
+ "title": "Demo",
+ "uid": "a506d250-b606-4702-86a7-ac6aa1d069a1",
+ "version": 2,
+ "weekStart": ""
+}
\ No newline at end of file
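The dashboard's PromQL expressions repeatedly wrap metrics in `label_replace(..., "job", "$1", "job", ".*openim-(.*)")` to shorten scrape-job names in legends. A minimal Python sketch of those regex semantics (full anchored match, greedy `.*`, capture group 1, non-matching labels left untouched — the function name is illustrative, not part of the dashboard):

```python
import re

def rewrite_job(job: str, pattern: str = r".*openim-(.*)") -> str:
    # label_replace() applies an anchored regex; on a match the label is
    # replaced by the capture group, otherwise it is left unchanged.
    m = re.fullmatch(pattern, job)
    return m.group(1) if m else job

print(rewrite_job("openimserver-openim-rpc-user"))  # rpc-user
print(rewrite_job("node-exporter"))                 # node-exporter (no match)
```

Because the leading `.*` is greedy, the capture starts after the last `openim-` in the job name, which is what strips the `openimserver-openim-` prefix in the panels above.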
diff --git a/config/instance-down-rules.yml b/config/instance-down-rules.yml
new file mode 100644
index 0000000..bcac7ba
--- /dev/null
+++ b/config/instance-down-rules.yml
@@ -0,0 +1,44 @@
+groups:
+ - name: instance_down
+ rules:
+ - alert: InstanceDown
+ expr: up == 0
+ for: 1m
+ labels:
+ severity: critical
+ annotations:
+ summary: "Instance {{ $labels.instance }} down"
+ description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
+
+ - name: database_insert_failure_alerts
+ rules:
+ - alert: DatabaseInsertFailed
+ expr: (increase(msg_insert_redis_failed_total[5m]) > 0) or (increase(msg_insert_mongo_failed_total[5m]) > 0)
+ for: 1m
+ labels:
+ severity: critical
+ annotations:
+ summary: "Increase in MsgInsertRedisFailedCounter or MsgInsertMongoFailedCounter detected"
+ description: "Either MsgInsertRedisFailedCounter or MsgInsertMongoFailedCounter has increased in the last 5 minutes, indicating failed message inserts into Redis or MongoDB; Redis or MongoDB may be down."
+
+ - name: registrations_few
+ rules:
+ - alert: RegistrationsFew
+ expr: increase(user_login_total[1h]) == 0
+ for: 1m
+ labels:
+ severity: info
+ annotations:
+ summary: "Too few registrations within the time frame"
+ description: "The number of registrations in the last hour is 0. There might be some issues."
+
+ - name: messages_few
+ rules:
+ - alert: MessagesFew
+ expr: (increase(single_chat_msg_process_success_total[1h])+increase(group_chat_msg_process_success_total[1h])) == 0
+ for: 1m
+ labels:
+ severity: info
+ annotations:
+ summary: "Too few messages within the time frame"
+ description: "The number of messages sent in the last hour is 0. There might be some issues."
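The `registrations_few` and `messages_few` rules both fire when `increase(counter[1h]) == 0`, i.e. when a counter never moved within the window. A simplified Python model of `increase()` (ignoring PromQL's rate extrapolation, but handling counter resets the same way):

```python
def increase(samples):
    # Sum of counter deltas over a window; a sample lower than its
    # predecessor is treated as a counter reset (restart from 0).
    total = 0.0
    for prev, cur in zip(samples, samples[1:]):
        total += cur - prev if cur >= prev else cur
    return total

# registrations_few fires when the counter is flat across the hour:
assert increase([42, 42, 42]) == 0
# Growth on either side of a process restart still counts:
assert increase([10, 15, 3]) == 8  # +5 before the reset, +3 after
```

Real PromQL `increase()` also extrapolates to the window boundaries, so production values are rarely exact integers; the `== 0` comparison in these rules works because a flat counter extrapolates to zero either way.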
diff --git a/config/kafka.yml b/config/kafka.yml
new file mode 100644
index 0000000..2e9b529
--- /dev/null
+++ b/config/kafka.yml
@@ -0,0 +1,40 @@
+## Kafka authentication
+username:
+password:
+
+# Producer acknowledgment settings
+producerAck:
+# Compression type to use (e.g., none, gzip, snappy)
+compressType: none
+# List of Kafka broker addresses
+address: [localhost:19094]
+# Kafka topic for Redis integration
+toRedisTopic: toRedis
+# Kafka topic for MongoDB integration
+toMongoTopic: toMongo
+# Kafka topic for push notifications
+toPushTopic: toPush
+# Kafka topic for offline push notifications
+toOfflinePushTopic: toOfflinePush
+# Consumer group ID for Redis topic
+toRedisGroupID: redis
+# Consumer group ID for MongoDB topic
+toMongoGroupID: mongo
+# Consumer group ID for push notifications topic
+toPushGroupID: push
+# Consumer group ID for offline push notifications topic
+toOfflinePushGroupID: offlinePush
+# TLS (Transport Layer Security) configuration
+tls:
+ # Enable or disable TLS
+ enableTLS: false
+ # CA certificate file path
+ caCrt:
+ # Client certificate file path
+ clientCrt:
+ # Client key file path
+ clientKey:
+ # Client key password
+ clientKeyPwd:
+ # Whether to skip TLS verification (not recommended for production)
+ insecureSkipVerify: false
diff --git a/config/local-cache.yml b/config/local-cache.yml
new file mode 100644
index 0000000..036dfaa
--- /dev/null
+++ b/config/local-cache.yml
@@ -0,0 +1,34 @@
+auth:
+ topic: DELETE_CACHE_AUTH
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+
+user:
+ topic: DELETE_CACHE_USER
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+
+group:
+ topic: DELETE_CACHE_GROUP
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+
+friend:
+ topic: DELETE_CACHE_FRIEND
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+
+conversation:
+ topic: DELETE_CACHE_CONVERSATION
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
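Each section above describes one local-cache namespace: keys are hashed across `slotNum` independent slots of at most `slotSize` entries, and successful lookups are cached for `successExpire` seconds while failed lookups are cached for only `failedExpire` seconds (so a miss is retried soon). A hedged sketch of that layout — class and method names are illustrative, not the actual OpenIM implementation:

```python
import time
from collections import OrderedDict

class SlottedCache:
    def __init__(self, slot_num=100, slot_size=2000,
                 success_expire=300, failed_expire=5):
        self.slots = [OrderedDict() for _ in range(slot_num)]
        self.slot_size = slot_size
        self.success_expire = success_expire
        self.failed_expire = failed_expire

    def _slot(self, key):
        # Hash the key into one of slot_num independent LRU slots.
        return self.slots[hash(key) % len(self.slots)]

    def set(self, key, value, ok=True):
        slot = self._slot(key)
        ttl = self.success_expire if ok else self.failed_expire
        slot[key] = (value, time.monotonic() + ttl)
        slot.move_to_end(key)
        if len(slot) > self.slot_size:
            slot.popitem(last=False)  # evict the least-recently-used entry

    def get(self, key):
        slot = self._slot(key)
        entry = slot.get(key)
        if entry is None or time.monotonic() > entry[1]:
            return None  # absent or expired
        slot.move_to_end(key)
        return entry[0]
```

Splitting the cache into many small slots bounds per-slot lock contention and eviction cost, which is the usual reason for a `slotNum`/`slotSize` pair rather than one large map.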
diff --git a/config/log.yml b/config/log.yml
new file mode 100644
index 0000000..1c563eb
--- /dev/null
+++ b/config/log.yml
@@ -0,0 +1,14 @@
+# Log storage path, default is acceptable, change to a full path if modification is needed
+storageLocation: ../../../../logs/
+# Log rotation period (in hours), default is acceptable
+rotationTime: 24
+# Number of log files to retain, default is acceptable
+remainRotationCount: 2
+# Log level settings: 3 for production environment; 6 for more verbose logging in debugging environments
+remainLogLevel: 6
+# Whether to output to standard output, default is acceptable
+isStdout: true
+# Whether to log in JSON format, default is acceptable
+isJson: false
+# Output a simplified log when the length of a KeyAndValues value exceeds 50 in RPC method logs
+isSimplify: true
\ No newline at end of file
diff --git a/config/minio.yml b/config/minio.yml
new file mode 100644
index 0000000..0836d05
--- /dev/null
+++ b/config/minio.yml
@@ -0,0 +1,16 @@
+# Name of the bucket in MinIO
+bucket: images
+# Access key ID for MinIO authentication
+accessKeyID: Z9Mgqtdm9OczzeRG
+# Secret access key for MinIO authentication
+secretAccessKey: vV6CzNvxYaN9jSZ8g7nOhGF1N4ygLJbE
+# Session token for MinIO authentication (optional)
+sessionToken:
+# Internal address of the MinIO server
+internalAddress: s3.jizhying.com
+# External address of the MinIO server, accessible from outside. Supports both HTTP and HTTPS using a domain name
+externalAddress: https://s3.jizhying.com
+# Flag to enable or disable public read access to the bucket
+publicRead: true
+
+
diff --git a/config/mongodb.yml b/config/mongodb.yml
new file mode 100644
index 0000000..ca45fea
--- /dev/null
+++ b/config/mongodb.yml
@@ -0,0 +1,51 @@
+# URI for database connection, leave empty if using address and credential settings directly
+uri:
+# List of MongoDB server addresses
+address: [localhost:37017]
+# Name of the database
+database: openim_v3
+# Username for database authentication
+username: openIM
+# Password for database authentication
+password: openIM123
+# Authentication source for database authentication; if using the root user, set it to admin
+authSource: openim_v3
+# Maximum number of connections in the connection pool
+maxPoolSize: 100
+# Maximum number of retry attempts for a failed database connection
+maxRetry: 10
+# MongoDB Mode, including "standalone", "replicaSet"
+mongoMode: "standalone"
+
+# The following configurations only take effect when mongoMode is set to "replicaSet"
+replicaSet:
+ name: rs0
+ hosts: [127.0.0.1:37017, 127.0.0.1:37018, 127.0.0.1:37019]
+ # Read concern level: "local", "available", "majority", "linearizable", "snapshot"
+ readConcern: majority
+ # Maximum staleness of data, as a duration
+ maxStaleness: 90s
+
+# The following configurations only take effect when mongoMode is set to "replicaSet"
+readPreference:
+ # Read preference mode, can be "primary", "primaryPreferred", "secondary", "secondaryPreferred", "nearest"
+ mode: primary
+ maxStaleness: 90s
+ # TagSets is an array of maps with priority based on order, empty map must be placed last for fallback tagSets
+ tagSets:
+ - datacenter: "cn-east"
+ rack: "1"
+ storage: "ssd"
+ - datacenter: "cn-east"
+ storage: "ssd"
+ - datacenter: "cn-east"
+ - {} # Empty map, indicates any node
+
+# The following configurations only take effect when mongoMode is set to "replicaSet"
+writeConcern:
+ # Write node count or tag (int, "majority", or custom tag)
+ w: majority
+ # Whether to wait for journal confirmation
+ j: true
+ # Write timeout duration
+ wtimeout: 30s
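When `uri` is left empty, the remaining fields are combined into a MongoDB connection string. A hedged sketch of that assembly (the helper name is illustrative; real clients also percent-encode credentials, omitted here for brevity):

```python
def build_mongo_uri(cfg):
    hosts = ",".join(cfg["address"])
    auth = ""
    if cfg.get("username"):
        auth = f'{cfg["username"]}:{cfg["password"]}@'
    uri = f'mongodb://{auth}{hosts}/{cfg["database"]}?maxPoolSize={cfg["maxPoolSize"]}'
    if cfg.get("authSource"):
        uri += f'&authSource={cfg["authSource"]}'
    if cfg.get("mongoMode") == "replicaSet":
        # Replica-set mode adds the set name so the driver discovers members.
        uri += f'&replicaSet={cfg["replicaSet"]["name"]}'
    return uri

cfg = {"address": ["localhost:37017"], "database": "openim_v3",
       "username": "openIM", "password": "openIM123",
       "authSource": "openim_v3", "maxPoolSize": 100, "mongoMode": "standalone"}
print(build_mongo_uri(cfg))
# mongodb://openIM:openIM123@localhost:37017/openim_v3?maxPoolSize=100&authSource=openim_v3
```

In `replicaSet` mode the `readConcern`, `readPreference`, and `writeConcern` sections above map onto the driver's corresponding client options rather than the URI.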
diff --git a/config/notification.yml b/config/notification.yml
new file mode 100644
index 0000000..4f58219
--- /dev/null
+++ b/config/notification.yml
@@ -0,0 +1,326 @@
+groupCreated:
+ isSendMsg: true
+# Deprecated. Fixed as 1.
+ reliabilityLevel: 1
+# Deprecated. Fixed as false.
+ unreadCount: false
+# Configuration for offline push notifications.
+ offlinePush:
+ # Enables or disables offline push notifications.
+ enable: false
+ # Title for the notification when a group is created.
+ title: create group title
+ # Description for the notification.
+ desc: create group desc
+ # Additional information for the notification.
+ ext: create group ext
+
+groupInfoSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSet title
+ desc: groupInfoSet desc
+ ext: groupInfoSet ext
+
+
+joinGroupApplication:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: joinGroupApplication title
+ desc: joinGroupApplication desc
+ ext: joinGroupApplication ext
+
+memberQuit:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberQuit title
+ desc: memberQuit desc
+ ext: memberQuit ext
+
+groupApplicationAccepted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: groupApplicationAccepted title
+ desc: groupApplicationAccepted desc
+ ext: groupApplicationAccepted ext
+
+groupApplicationRejected:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: groupApplicationRejected title
+ desc: groupApplicationRejected desc
+ ext: groupApplicationRejected ext
+
+
+groupOwnerTransferred:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupOwnerTransferred title
+ desc: groupOwnerTransferred desc
+ ext: groupOwnerTransferred ext
+
+memberKicked:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberKicked title
+ desc: memberKicked desc
+ ext: memberKicked ext
+
+memberInvited:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberInvited title
+ desc: memberInvited desc
+ ext: memberInvited ext
+
+memberEnter:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberEnter title
+ desc: memberEnter desc
+ ext: memberEnter ext
+
+groupDismissed:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupDismissed title
+ desc: groupDismissed desc
+ ext: groupDismissed ext
+
+groupMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMuted title
+ desc: groupMuted desc
+ ext: groupMuted ext
+
+groupCancelMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupCancelMuted title
+ desc: groupCancelMuted desc
+ ext: groupCancelMuted ext
+ defaultTips:
+ tips: group Cancel Muted
+
+
+groupMemberMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberMuted title
+ desc: groupMemberMuted desc
+ ext: groupMemberMuted ext
+
+groupMemberCancelMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberCancelMuted title
+ desc: groupMemberCancelMuted desc
+ ext: groupMemberCancelMuted ext
+
+groupMemberInfoSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberInfoSet title
+ desc: groupMemberInfoSet desc
+ ext: groupMemberInfoSet ext
+
+groupInfoSetAnnouncement:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSetAnnouncement title
+ desc: groupInfoSetAnnouncement desc
+ ext: groupInfoSetAnnouncement ext
+
+
+groupInfoSetName:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSetName title
+ desc: groupInfoSetName desc
+ ext: groupInfoSetName ext
+
+
+#############################friend#################################
+friendApplicationAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: Somebody applies to add you as a friend
+ desc: Somebody applies to add you as a friend
+ ext: Somebody applies to add you as a friend
+
+friendApplicationApproved:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+    title: Someone approved your friend application
+    desc: Someone approved your friend application
+    ext: Someone approved your friend application
+
+friendApplicationRejected:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: Someone rejected your friend application
+ desc: Someone rejected your friend application
+ ext: Someone rejected your friend application
+
+friendAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: We have become friends
+ desc: We have become friends
+ ext: We have become friends
+
+friendDeleted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: deleted a friend
+ desc: deleted a friend
+ ext: deleted a friend
+
+friendRemarkSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: Your friend's profile has been changed
+ desc: Your friend's profile has been changed
+ ext: Your friend's profile has been changed
+
+blackAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: blocked a user
+ desc: blocked a user
+ ext: blocked a user
+
+blackDeleted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: Remove a blocked user
+ desc: Remove a blocked user
+ ext: Remove a blocked user
+
+friendInfoUpdated:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: friend info updated
+ desc: friend info updated
+ ext: friend info updated
+
+#####################user#########################
+userInfoUpdated:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: userInfo updated
+ desc: userInfo updated
+ ext: userInfo updated
+
+userStatusChanged:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: user status changed
+ desc: user status changed
+ ext: user status changed
+
+#####################conversation#########################
+conversationChanged:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: conversation changed
+ desc: conversation changed
+ ext: conversation changed
+
+conversationSetPrivate:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: burn after reading
+ desc: burn after reading
+ ext: burn after reading
diff --git a/config/openim-api.yml b/config/openim-api.yml
new file mode 100644
index 0000000..e29b63c
--- /dev/null
+++ b/config/openim-api.yml
@@ -0,0 +1,33 @@
+api:
+ # Listening IP; 0.0.0.0 means both internal and external IPs are listened to, default is recommended
+ listenIP: 0.0.0.0
+  # Listening ports; configuring multiple ports launches multiple instances, and the count must match the number of prometheus.ports
+ ports: [ 10002 ]
+ # API compression level; 0: default compression, 1: best compression, 2: best speed, -1: no compression
+ compressionLevel: 0
+
+
+prometheus:
+ # Whether to enable prometheus
+ enable: true
+ # autoSetPorts indicates whether to automatically set the ports
+ autoSetPorts: true
+ # Prometheus listening ports, must match the number of api.ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+ # This address can be accessed via a browser
+ grafanaURL:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+onlineCountRefresh:
+ enable: true
+ interval: 30s
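A note on the `cpuThreshold` above: it is expressed on a 0–1000 scale, so 850 corresponds to 85% CPU. A minimal sketch of the load-shedding decision this implies, assuming usage is sampled on the same scale (illustrative names, not OpenIM's actual code):

```python
def should_shed_load(cpu_usage_permille: int, cpu_threshold: int = 850) -> bool:
    """Illustrative sketch: reject new requests once sampled CPU usage
    reaches the configured threshold (both on a 0-1000 scale, 1000 = 100%)."""
    return cpu_usage_permille >= cpu_threshold

assert should_shed_load(900)      # 90% CPU >= 85% threshold
assert not should_shed_load(400)  # 40% CPU is fine
```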
diff --git a/config/openim-crontask.yml b/config/openim-crontask.yml
new file mode 100644
index 0000000..ff69c7d
--- /dev/null
+++ b/config/openim-crontask.yml
@@ -0,0 +1,4 @@
+cronExecuteTime: 0 2 * * *
+retainChatRecords: 365
+fileExpireTime: 180
+deleteObjectType: ["msg-picture","msg-file", "msg-voice","msg-video","msg-video-snapshot","sdklog"]
\ No newline at end of file
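`cronExecuteTime` uses the standard five-field cron format, so `0 2 * * *` runs the cleanup once a day at 02:00. A small sketch of how the fields map (hypothetical helper; OpenIM's actual parser lives in its Go code):

```python
def parse_cron(expr: str) -> dict:
    """Split a standard 5-field cron expression into named fields."""
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    fields = expr.split()
    assert len(fields) == 5, "expected a 5-field cron expression"
    return dict(zip(names, fields))

# "0 2 * * *" fires daily at 02:00
sched = parse_cron("0 2 * * *")
assert sched["minute"] == "0" and sched["hour"] == "2"
```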
diff --git a/config/openim-msggateway.yml b/config/openim-msggateway.yml
new file mode 100644
index 0000000..135259e
--- /dev/null
+++ b/config/openim-msggateway.yml
@@ -0,0 +1,45 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+# IP address that the RPC/WebSocket service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+listenIP: 0.0.0.0
+
+longConnSvr:
+ # WebSocket listening ports, must match the number of rpc.ports
+ ports: [ 10001 ]
+ # Maximum number of WebSocket connections
+ websocketMaxConnNum: 100000
+ # Maximum length of the entire WebSocket message packet
+ websocketMaxMsgLen: 4096
+ # WebSocket connection handshake timeout in seconds
+ websocketTimeout: 10
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
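Reading the `circuitBreaker` values together: each 5s window is split into 100 buckets, the breaker is only evaluated once at least `request` (500) calls have been seen, and it opens when the success rate drops below `success` (0.6). A hedged sketch of that decision rule (illustrative only, not the actual implementation):

```python
def breaker_open(total: int, successes: int,
                 request_threshold: int = 500,
                 success_threshold: float = 0.6) -> bool:
    """Evaluate only once `request` samples exist in the window,
    then open the breaker if the success rate falls below `success`."""
    if total < request_threshold:
        return False  # not enough traffic in the window to judge
    return successes / total < success_threshold

assert not breaker_open(total=100, successes=10)    # below request threshold
assert breaker_open(total=1000, successes=500)      # 50% < 60%
assert not breaker_open(total=1000, successes=700)  # 70% >= 60%
```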
diff --git a/config/openim-msgtransfer.yml b/config/openim-msgtransfer.yml
new file mode 100644
index 0000000..9b46ba6
--- /dev/null
+++ b/config/openim-msgtransfer.yml
@@ -0,0 +1,25 @@
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # autoSetPorts indicates whether to automatically set the ports
+ autoSetPorts: true
+  # List of ports that Prometheus listens on; each port corresponds to one monitored instance
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-push.yml b/config/openim-push.yml
new file mode 100644
index 0000000..e63815e
--- /dev/null
+++ b/config/openim-push.yml
@@ -0,0 +1,64 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+maxConcurrentWorkers: 50
+# Offline push provider: choose getui, fcm, or jpush; the corresponding section below must be configured.
+enable:
+getui:
+ pushUrl: https://restapi.getui.com/v2/$appId
+ masterSecret:
+ appKey:
+ intent:
+ channelID:
+ channelName:
+fcm:
+ # Prioritize using file paths. If the file path is empty, use URL
+  filePath: # The file path is built by joining the config directory passed via -c (`mage` passes `config/` by default) with filePath.
+ authURL: # Must start with https or http.
+jpush:
+ appKey:
+ masterSecret:
+ pushURL:
+ pushIntent:
+
+# iOS system push sound and badge count
+iosPush:
+ pushSound: xxx
+ badgeCount: true
+ production: false
+
+fullUserCache: true
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-auth.yml b/config/openim-rpc-auth.yml
new file mode 100644
index 0000000..b2fbf70
--- /dev/null
+++ b/config/openim-rpc-auth.yml
@@ -0,0 +1,39 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+tokenPolicy:
+ # Token validity period, in days
+ expire: 90
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-conversation.yml b/config/openim-rpc-conversation.yml
new file mode 100644
index 0000000..1cd7119
--- /dev/null
+++ b/config/openim-rpc-conversation.yml
@@ -0,0 +1,35 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-friend.yml b/config/openim-rpc-friend.yml
new file mode 100644
index 0000000..1cd7119
--- /dev/null
+++ b/config/openim-rpc-friend.yml
@@ -0,0 +1,35 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-group.yml b/config/openim-rpc-group.yml
new file mode 100644
index 0000000..4731c19
--- /dev/null
+++ b/config/openim-rpc-group.yml
@@ -0,0 +1,38 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+
+enableHistoryForNewMembers: true
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-msg.yml b/config/openim-rpc-msg.yml
new file mode 100644
index 0000000..9668282
--- /dev/null
+++ b/config/openim-rpc-msg.yml
@@ -0,0 +1,39 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+
+# Whether sending a message requires the sender and receiver to be friends
+friendVerify: false
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/openim-rpc-third.yml b/config/openim-rpc-third.yml
new file mode 100644
index 0000000..f635d14
--- /dev/null
+++ b/config/openim-rpc-third.yml
@@ -0,0 +1,69 @@
+rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
+
+object:
+  # Object storage backend: "minio", "cos", "oss", "kodo", or "aws"; configure the matching section below
+  # Cloudflare R2 uses the "aws" mode; just configure its endpoint
+ enable: minio
+ cos:
+ endpoint: https://e032b3e2e74d56c41118001d0f8e8106.r2.cloudflarestorage.com
+ secretID: TVLQOpXcTCjpePajNI8qnD2tp4C9eean4tVdOT17
+ secretKey: fbafa94b5036c147d5f27ffa55417a5daab662e348acb3a21b73c33405633cc8
+ sessionToken:
+ publicRead: true
+ oss:
+ endpoint: https://oss-ap-southeast-1.aliyuncs.com
+ bucket: chatall
+ bucketURL: http://asset.imall.cloud
+ accessKeyID: LTAI5t6DiZgPducgW28HW9sv
+ accessKeySecret: Hre20TaRDQadYZfQzp8ZwS9HfHIPrw
+ sessionToken:
+ publicRead: true
+ kodo:
+ endpoint: https://s3.cn-south-1.qiniucs.com
+ bucket: testdemo12313
+ bucketURL: http://so2at6d05.hn-bkt.clouddn.com
+ accessKeyID:
+ accessKeySecret:
+ sessionToken:
+ publicRead: false
+ aws:
+ region: ap-southeast-1
+ bucket: im1688
+ accessKeyID: AKIA5TMMSZWVFYCLKJ2G
+ secretAccessKey: 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
+ sessionToken:
+ publicRead: true
\ No newline at end of file
diff --git a/config/openim-rpc-user.yml b/config/openim-rpc-user.yml
new file mode 100644
index 0000000..f951244
--- /dev/null
+++ b/config/openim-rpc-user.yml
@@ -0,0 +1,35 @@
+rpc:
+ # API or other RPCs can access this RPC through this IP; if left blank, the internal network IP is obtained by default
+ registerIP:
+ # Listening IP; 0.0.0.0 means both internal and external IPs are listened to, if blank, the internal network IP is automatically obtained by default
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+  # If running in Kubernetes, set this to false
+ autoSetPorts: true
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+prometheus:
+ # Whether to enable prometheus
+ enable: true
+ # Prometheus listening ports, must be consistent with the number of rpc.ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports:
+
+ratelimiter:
+ # Whether to enable rate limiting
+ enable: false
+ # WindowSize defines time duration per window
+ window: 20s
+ # BucketNum defines bucket number for each window
+ bucket: 500
+ # CPU threshold; valid range 0–1000 (1000 = 100%)
+ cpuThreshold: 850
+
+circuitBreaker:
+ enable: false
+ window: 5s # Time window size (seconds)
+ bucket: 100 # Number of buckets
+ success: 0.6 # Success rate threshold (0.6 means 60%)
+ request: 500 # Request threshold; circuit breaker evaluation occurs when reached
diff --git a/config/prometheus.yml b/config/prometheus.yml
new file mode 100644
index 0000000..0b13326
--- /dev/null
+++ b/config/prometheus.yml
@@ -0,0 +1,119 @@
+# my global config
+global:
+ scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
+ evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
+ # scrape_timeout is set to the global default (10s).
+
+# Alertmanager configuration
+alerting:
+ alertmanagers:
+ - static_configs:
+ - targets: [127.0.0.1:19093]
+
+# Load rules once and periodically evaluate them according to the global evaluation_interval.
+rule_files:
+ - instance-down-rules.yml
+# - first_rules.yml
+# - second_rules.yml
+
+# A scrape configuration containing exactly one endpoint to scrape:
+# Here it's Prometheus itself.
+scrape_configs:
+ # The job name is added as a label "job=job_name" to any timeseries scraped from this config.
+ # Monitored information captured by prometheus
+
+ # prometheus fetches application services
+ - job_name: node_exporter
+ static_configs:
+ - targets: [ 127.0.0.1:19100 ]
+
+ - job_name: openimserver-openim-api
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/api"
+# static_configs:
+# - targets: [ 127.0.0.1:12002 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-msggateway
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/msg_gateway"
+# static_configs:
+# - targets: [ 127.0.0.1:12140 ]
+# # - targets: [ 127.0.0.1:12140, 127.0.0.1:12141, 127.0.0.1:12142, 127.0.0.1:12143, 127.0.0.1:12144, 127.0.0.1:12145, 127.0.0.1:12146, 127.0.0.1:12147, 127.0.0.1:12148, 127.0.0.1:12149, 127.0.0.1:12150, 127.0.0.1:12151, 127.0.0.1:12152, 127.0.0.1:12153, 127.0.0.1:12154, 127.0.0.1:12155 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-msgtransfer
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/msg_transfer"
+# static_configs:
+# - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027 ]
+# # - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027, 127.0.0.1:12028, 127.0.0.1:12029, 127.0.0.1:12030, 127.0.0.1:12031, 127.0.0.1:12032, 127.0.0.1:12033, 127.0.0.1:12034, 127.0.0.1:12035 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-push
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/push"
+# static_configs:
+# - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177 ]
+## - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177, 127.0.0.1:12178, 127.0.0.1:12179, 127.0.0.1:12180, 127.0.0.1:12182, 127.0.0.1:12183, 127.0.0.1:12184, 127.0.0.1:12185, 127.0.0.1:12186 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-auth
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/auth"
+# static_configs:
+# - targets: [ 127.0.0.1:12200 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-conversation
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/conversation"
+# static_configs:
+# - targets: [ 127.0.0.1:12220 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-friend
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/friend"
+# static_configs:
+# - targets: [ 127.0.0.1:12240 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-group
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/group"
+# static_configs:
+# - targets: [ 127.0.0.1:12260 ]
+# labels:
+# namespace: default.
+
+ - job_name: openimserver-openim-rpc-msg
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/msg"
+# static_configs:
+# - targets: [ 127.0.0.1:12280 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-third
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/third"
+# static_configs:
+# - targets: [ 127.0.0.1:12300 ]
+# labels:
+# namespace: default
+
+ - job_name: openimserver-openim-rpc-user
+ http_sd_configs:
+ - url: "http://127.0.0.1:10002/prometheus_discovery/user"
+# static_configs:
+# - targets: [ 127.0.0.1:12320 ]
+# labels:
+# namespace: default
\ No newline at end of file
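Each `http_sd_configs` URL above must return Prometheus's HTTP service-discovery payload: a JSON array of target groups, each with `targets` and optional `labels`. A minimal example of that shape (the target address here is illustrative):

```python
import json

# Shape of the response Prometheus expects from an HTTP SD endpoint
# such as /prometheus_discovery/api: a list of target groups.
example = [
    {"targets": ["127.0.0.1:12002"], "labels": {"namespace": "default"}},
]
decoded = json.loads(json.dumps(example))
assert decoded[0]["targets"] == ["127.0.0.1:12002"]
assert decoded[0]["labels"]["namespace"] == "default"
```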
diff --git a/config/redis.yml b/config/redis.yml
new file mode 100644
index 0000000..b60ac04
--- /dev/null
+++ b/config/redis.yml
@@ -0,0 +1,16 @@
+address: [localhost:16379]
+username:
+password: openIM123
+# Redis mode: "standalone", "cluster", or "sentinel"
+redisMode: "standalone"
+db: 0
+maxRetry: 10
+poolSize: 100
+onlineKeyPrefix: "openim:cms-test"
+onlineKeyPrefixHashTag: false
+# Sentinel configuration (only used when redisMode is "sentinel")
+sentinelMode:
+ masterName: "redis-master"
+ sentinelsAddrs: ["127.0.0.1:26379", "127.0.0.1:26380", "127.0.0.1:26381"]
+ routeByLatency: true
+ routeRandomly: true
diff --git a/config/share.yml b/config/share.yml
new file mode 100644
index 0000000..6240aea
--- /dev/null
+++ b/config/share.yml
@@ -0,0 +1,20 @@
+secret: openIM123
+
+# imAdminUser: Configuration for instant messaging system administrators
+imAdminUser:
+ # userIDs: List of administrator user IDs.
+ # Each entry here corresponds by index to the matching entry in the nicknames list below.
+ userIDs: [imAdmin]
+ # nicknames: List of administrator display names.
+ # Each entry here corresponds by index to the matching entry in the userIDs list above.
+ nicknames: [superAdmin]
+
+# policy 1: on Android, iOS, Windows, Mac, and web, only one instance per platform can be online at a time
+multiLogin:
+ policy: 1
+  # maximum number of tokens per end (platform)
+ maxNumOneEnd: 30
+
+rpcMaxBodySize:
+ requestMaxBodySize: 67108864 # 64MB
+ responseMaxBodySize: 67108864 # 64MB
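The `# 64MB` comments check out: the configured byte counts are exactly 64 MiB.

```python
# 64 MiB expressed in bytes matches the configured value.
assert 64 * 1024 * 1024 == 67108864
```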
diff --git a/config/webhooks.yml b/config/webhooks.yml
new file mode 100644
index 0000000..c1645e7
--- /dev/null
+++ b/config/webhooks.yml
@@ -0,0 +1,202 @@
+url: http://127.0.0.1:10006/callbackExample
+beforeSendSingleMsg:
+ enable: false
+ timeout: 5
+ failedContinue: true
+  # The callback is sent only for messages whose contentType is not in deniedTypes.
+  # If not set, messages of all content types trigger the callback.
+ deniedTypes: []
+beforeUpdateUserInfoEx:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterUpdateUserInfoEx:
+ enable: false
+ timeout: 5
+afterSendSingleMsg:
+ enable: false
+ timeout: 5
+  # The callback is sent only for messages whose recvID is listed in attentionIds.
+  # If not set, all user messages trigger the callback.
+ attentionIds: []
+ # See beforeSendSingleMsg comment.
+ deniedTypes: []
+beforeSendGroupMsg:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ # See beforeSendSingleMsg comment.
+ deniedTypes: []
+beforeMsgModify:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ # See beforeSendSingleMsg comment.
+ deniedTypes: []
+afterSendGroupMsg:
+ enable: false
+ timeout: 5
+  # The callback is sent only for groups whose groupID is listed in attentionIds.
+  # If not set, all group messages trigger the callback.
+ attentionIds: []
+ # See beforeSendSingleMsg comment.
+ deniedTypes: []
+afterMsgSaveDB:
+ enable: false
+ timeout: 5
+afterUserOnline:
+ enable: false
+ timeout: 5
+afterUserOffline:
+ enable: false
+ timeout: 5
+afterUserKickOff:
+ enable: false
+ timeout: 5
+beforeOfflinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+beforeOnlinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+beforeGroupOnlinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+beforeAddFriend:
+ enable: false
+ timeout: 5
+ failedContinue: true
+beforeUpdateUserInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterUpdateUserInfo:
+ enable: false
+ timeout: 5
+beforeCreateGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterCreateGroup:
+ enable: false
+ timeout: 5
+beforeMemberJoinGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+beforeSetGroupMemberInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterSetGroupMemberInfo:
+ enable: false
+ timeout: 5
+afterQuitGroup:
+ enable: false
+ timeout: 5
+afterKickGroupMember:
+ enable: false
+ timeout: 5
+afterDismissGroup:
+ enable: false
+ timeout: 5
+beforeApplyJoinGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterGroupMsgRead:
+ enable: false
+ timeout: 5
+afterSingleMsgRead:
+ enable: false
+ timeout: 5
+beforeUserRegister:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterUserRegister:
+ enable: false
+ timeout: 5
+afterTransferGroupOwner:
+ enable: false
+ timeout: 5
+beforeSetFriendRemark:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterSetFriendRemark:
+ enable: false
+ timeout: 5
+afterGroupMsgRevoke:
+ enable: false
+ timeout: 5
+afterJoinGroup:
+ enable: false
+ timeout: 5
+beforeInviteUserToGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterSetGroupInfo:
+ enable: false
+ timeout: 5
+beforeSetGroupInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterSetGroupInfoEx:
+ enable: false
+ timeout: 5
+beforeSetGroupInfoEx:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterRevokeMsg:
+ enable: false
+ timeout: 5
+beforeAddBlack:
+ enable: false
+ timeout: 5
+  failedContinue: true
+afterAddFriend:
+ enable: false
+ timeout: 5
+beforeAddFriendAgree:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterAddFriendAgree:
+ enable: false
+ timeout: 5
+afterDeleteFriend:
+ enable: false
+ timeout: 5
+beforeImportFriends:
+ enable: false
+ timeout: 5
+ failedContinue: true
+afterImportFriends:
+ enable: false
+ timeout: 5
+afterRemoveBlack:
+ enable: false
+ timeout: 5
+beforeCreateSingleChatConversations:
+ enable: false
+ timeout: 5
+ failedContinue: false
+afterCreateSingleChatConversations:
+ enable: false
+ timeout: 5
+ failedContinue: false
+beforeCreateGroupChatConversations:
+ enable: false
+ timeout: 5
+ failedContinue: false
+afterCreateGroupChatConversations:
+ enable: false
+ timeout: 5
+ failedContinue: false
diff --git a/deployments/Readme.md b/deployments/Readme.md
new file mode 100644
index 0000000..8da4f90
--- /dev/null
+++ b/deployments/Readme.md
@@ -0,0 +1,188 @@
+# Kubernetes Deployment
+
+## Resource Requests
+
+- CPU: 2 cores
+- Memory: 4 GiB
+- Disk usage: 20 GiB (on Node)
+
+## Preconditions
+
+Ensure that you have already deployed the following components:
+
+- Redis
+- MongoDB
+- Kafka
+- MinIO
+
+## Origin Deploy
+
+### Enter the target dir
+
+`cd ./deployments/deploy/`
+
+### Deploy configs and dependencies
+
+Update your ConfigMap `openim-config.yml`. **You can check the official docs for more details.**
+
+In `openim-config.yml`, you need to modify the following configurations:
+
+**discovery.yml**
+
+- `kubernetes.namespace`: default is `default`, you can change it to your namespace.
+
+**mongodb.yml**
+
+- `address`: set to the address of your existing MongoDB instance, or to the Mongo Service name and port in your deployment.
+- `database`: set to your MongoDB database name. (The database must already exist.)
+- `authSource`: set to your MongoDB authSource. (authSource specifies the database associated with the user's credentials; the user must be created in that database.)
+
+**kafka.yml**
+
+- `address`: set to the address of your existing Kafka instance, or to the Kafka Service name and port in your deployment.
+
+**redis.yml**
+
+- `address`: set to the address of your existing Redis instance, or to the Redis Service name and port in your deployment.
+
+**minio.yml**
+
+- `internalAddress`: set to the MinIO Service name and port in your deployment.
+- `externalAddress`: set to the externally exposed MinIO address.
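
For illustration, assuming Services named as in this repo's manifests (`mongo-service`, `kafka-service`, `minio-service`; the `redis-service` name and the external address below are hypothetical placeholders), the fields above might be filled in like this:

```yaml
# Illustrative values only -- substitute your own Service names, ports, and addresses.
# mongodb.yml
address: [ mongo-service:37017 ]
database: openim_v3
authSource: openim_v3

# kafka.yml
address: [ kafka-service:9092 ]

# redis.yml
address: [ redis-service:6379 ]

# minio.yml
internalAddress: minio-service:9000
externalAddress: http://<your-external-ip>:10005
```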
+
+### Set the secret
+
+A Secret is an object that contains a small amount of sensitive data, such as a password or token. Secrets are similar to ConfigMaps.
+
+#### Redis:
+
+Update the `redis-password` value in `redis-secret.yml` to your Redis password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-redis-secret
+type: Opaque
+data:
+  redis-password: b3BlbklNMTIz # update to your redis password encoded in base64; if it should be empty, set it to ""
+```
+
+#### Mongo:
+
+Update the `mongo_openim_username` and `mongo_openim_password` values in `mongo-secret.yml` to your Mongo username and password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-mongo-secret
+type: Opaque
+data:
+  mongo_openim_username: b3BlbklN # update to your mongo username encoded in base64; if it should be empty, set it to "" (these user credentials must exist in the authSource database).
+  mongo_openim_password: b3BlbklNMTIz # update to your mongo password encoded in base64; if it should be empty, set it to ""
+```
+
+#### Minio:
+
+Update the `minio-root-user` and `minio-root-password` values in `minio-secret.yml` to your MinIO accessKeyID and secretAccessKey encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-minio-secret
+type: Opaque
+data:
+  minio-root-user: cm9vdA== # update to your minio accessKeyID encoded in base64; if it should be empty, set it to ""
+  minio-root-password: b3BlbklNMTIz # update to your minio secretAccessKey encoded in base64; if it should be empty, set it to ""
+```
+
+#### Kafka:
+
+Update the `kafka-password` value in `kafka-secret.yml` to your Kafka password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-kafka-secret
+type: Opaque
+data:
+  kafka-password: b3BlbklNMTIz # update to your kafka password encoded in base64; if it should be empty, set it to ""
+```
+
+### Apply the secrets
+
+```shell
+kubectl apply -f redis-secret.yml -f minio-secret.yml -f mongo-secret.yml -f kafka-secret.yml
+```
+
+### Apply all config
+
+`kubectl apply -f ./openim-config.yml`
+
+> Attention: If you use the `default` namespace, you can apply `clusterRole.yml` to create a cluster role binding for the default service account.
+>
+> The namespace is configured via `discovery.yml` in `openim-config.yml`; change `kubernetes.namespace` to your namespace.
+
+**Execute `clusterRole.yml`**
+
+`kubectl apply -f ./clusterRole.yml`
+
+### Run all deployments and services
+
+> Note: Ensure that infrastructure services like MinIO, Redis, and Kafka are running before deploying the main applications.
+
+```bash
+kubectl apply \
+ -f openim-api-deployment.yml \
+ -f openim-api-service.yml \
+ -f openim-crontask-deployment.yml \
+ -f openim-rpc-user-deployment.yml \
+ -f openim-rpc-user-service.yml \
+ -f openim-msggateway-deployment.yml \
+ -f openim-msggateway-service.yml \
+ -f openim-push-deployment.yml \
+ -f openim-push-service.yml \
+ -f openim-msgtransfer-service.yml \
+ -f openim-msgtransfer-deployment.yml \
+ -f openim-rpc-conversation-deployment.yml \
+ -f openim-rpc-conversation-service.yml \
+ -f openim-rpc-auth-deployment.yml \
+ -f openim-rpc-auth-service.yml \
+ -f openim-rpc-group-deployment.yml \
+ -f openim-rpc-group-service.yml \
+ -f openim-rpc-friend-deployment.yml \
+ -f openim-rpc-friend-service.yml \
+ -f openim-rpc-msg-deployment.yml \
+ -f openim-rpc-msg-service.yml \
+ -f openim-rpc-third-deployment.yml \
+ -f openim-rpc-third-service.yml
+```
+
+### Verification
+
+After deploying the services, verify that everything is running smoothly:
+
+```bash
+# Check the status of all pods
+kubectl get pods
+
+# Check the status of services
+kubectl get svc
+
+# Check the status of deployments
+kubectl get deployments
+
+# View all resources
+kubectl get all
+```
+
+### Clean all
+
+`kubectl delete -f ./`
+
+### Notes:
+
+- If you use a specific namespace for your deployment, be sure to append the `-n <namespace>` flag to your `kubectl` commands.
diff --git a/deployments/deploy/clusterRole.yml b/deployments/deploy/clusterRole.yml
new file mode 100644
index 0000000..190c0b2
--- /dev/null
+++ b/deployments/deploy/clusterRole.yml
@@ -0,0 +1,24 @@
+# ClusterRole.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: service-reader
+rules:
+ - apiGroups: [""]
+ resources: ["services", "endpoints"]
+ verbs: ["get", "list", "watch"]
+
+---
+# ClusterRoleBinding.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: default-service-reader-binding
+subjects:
+ - kind: ServiceAccount
+ name: default
+ namespace: default
+roleRef:
+ kind: ClusterRole
+ name: service-reader
+ apiGroup: rbac.authorization.k8s.io
diff --git a/deployments/deploy/ingress.yml b/deployments/deploy/ingress.yml
new file mode 100644
index 0000000..8a4fbaa
--- /dev/null
+++ b/deployments/deploy/ingress.yml
@@ -0,0 +1,25 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: openim-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ ingressClassName: openim-nginx
+ rules:
+ - http:
+ paths:
+ - path: /openim-api
+ pathType: Prefix
+ backend:
+ service:
+ name: openim-api-service
+ port:
+ number: 10002
+ - path: /openim-msggateway
+ pathType: Prefix
+ backend:
+ service:
+ name: openim-msggateway-service
+ port:
+ number: 10001
diff --git a/deployments/deploy/kafka-secret.yml b/deployments/deploy/kafka-secret.yml
new file mode 100644
index 0000000..dcee689
--- /dev/null
+++ b/deployments/deploy/kafka-secret.yml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-kafka-secret
+type: Opaque
+data:
+ kafka-password: ""
diff --git a/deployments/deploy/kafka-service.yml b/deployments/deploy/kafka-service.yml
new file mode 100644
index 0000000..675600b
--- /dev/null
+++ b/deployments/deploy/kafka-service.yml
@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: kafka-service
+ labels:
+ app: kafka
+spec:
+ ports:
+ - name: plaintext
+ port: 9092
+ targetPort: 9092
+ - name: controller
+ port: 9093
+ targetPort: 9093
+ - name: external
+ port: 19094
+ targetPort: 9094
+ selector:
+ app: kafka
+ type: ClusterIP
diff --git a/deployments/deploy/kafka-statefulset.yml b/deployments/deploy/kafka-statefulset.yml
new file mode 100644
index 0000000..0e3c78b
--- /dev/null
+++ b/deployments/deploy/kafka-statefulset.yml
@@ -0,0 +1,71 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: kafka-statefulset
+ labels:
+ app: kafka
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: kafka
+ serviceName: "kafka-service"
+ template:
+ metadata:
+ labels:
+ app: kafka
+ spec:
+ containers:
+ - name: kafka
+ image: bitnami/kafka:3.5.1
+ imagePullPolicy: IfNotPresent
+ resources:
+ limits:
+ memory: "2Gi"
+ cpu: "1000m"
+ requests:
+ memory: "1Gi"
+ cpu: "500m"
+ ports:
+ - containerPort: 9092 # PLAINTEXT
+ - containerPort: 9093 # CONTROLLER
+ - containerPort: 9094 # EXTERNAL
+ env:
+ - name: TZ
+ value: "Asia/Shanghai"
+ - name: KAFKA_CFG_NODE_ID
+ value: "0"
+ - name: KAFKA_CFG_PROCESS_ROLES
+ value: "controller,broker"
+ - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
+ value: "0@kafka-service:9093"
+ - name: KAFKA_CFG_LISTENERS
+ value: "PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094"
+ - name: KAFKA_CFG_ADVERTISED_LISTENERS
+ value: "PLAINTEXT://kafka-service:9092,EXTERNAL://kafka-service:19094"
+ - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
+ value: "CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT"
+ - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
+ value: "CONTROLLER"
+ - name: KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE
+ value: "true"
+ volumeMounts:
+ - name: kafka-data
+ mountPath: /bitnami/kafka
+
+ volumes:
+ - name: kafka-data
+ persistentVolumeClaim:
+ claimName: kafka-pvc
+
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: kafka-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
diff --git a/deployments/deploy/minio-secret.yml b/deployments/deploy/minio-secret.yml
new file mode 100644
index 0000000..3f07102
--- /dev/null
+++ b/deployments/deploy/minio-secret.yml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-minio-secret
+type: Opaque
+data:
+  minio-root-user: WjlNZ3F0ZG05T2N6emVSRw== # base64-encoded MinIO accessKeyID; replace with your own
+  minio-root-password: dlY2Q3pOdnhZYU45alNaOGc3bk9oR0YxTjR5Z0xKYkU= # base64-encoded MinIO secretAccessKey; replace with your own
diff --git a/deployments/deploy/minio-service.yml b/deployments/deploy/minio-service.yml
new file mode 100644
index 0000000..1aeeb5f
--- /dev/null
+++ b/deployments/deploy/minio-service.yml
@@ -0,0 +1,18 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: minio-service
+spec:
+ selector:
+ app: minio
+ ports:
+ - name: minio
+ protocol: TCP
+ port: 10005 # External port for accessing MinIO service
+ targetPort: 9000 # Container port for MinIO service
+ - name: minio-console
+ protocol: TCP
+ port: 19090 # External port for accessing MinIO console
+ targetPort: 9090 # Container port for MinIO console
+ type: NodePort
diff --git a/deployments/deploy/minio-statefulset.yml b/deployments/deploy/minio-statefulset.yml
new file mode 100644
index 0000000..9cf0a42
--- /dev/null
+++ b/deployments/deploy/minio-statefulset.yml
@@ -0,0 +1,79 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: minio
+ labels:
+ app: minio
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: minio
+ template:
+ metadata:
+ labels:
+ app: minio
+ spec:
+ containers:
+ - name: minio
+ image: minio/minio:RELEASE.2024-01-11T07-46-16Z
+ ports:
+ - containerPort: 9000 # MinIO service port
+ - containerPort: 9090 # MinIO console port
+ volumeMounts:
+ - name: minio-data
+ mountPath: /data
+ - name: minio-config
+ mountPath: /root/.minio
+ env:
+ - name: TZ
+ value: "Asia/Shanghai"
+ - name: MINIO_ROOT_USER
+ valueFrom:
+ secretKeyRef:
+ name: openim-minio-secret
+ key: minio-root-user
+ - name: MINIO_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-minio-secret
+ key: minio-root-password
+ command:
+ - "/bin/sh"
+ - "-c"
+ - |
+ mkdir -p /data && \
+ minio server /data --console-address ":9090"
+ volumes:
+ - name: minio-data
+ persistentVolumeClaim:
+ claimName: minio-pvc
+ - name: minio-config
+ persistentVolumeClaim:
+ claimName: minio-config-pvc
+
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: minio-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: minio-config-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 2Gi
+
+
diff --git a/deployments/deploy/mongo-secret.yml b/deployments/deploy/mongo-secret.yml
new file mode 100644
index 0000000..059eca5
--- /dev/null
+++ b/deployments/deploy/mongo-secret.yml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-mongo-secret
+type: Opaque
+data:
+ mongo_openim_username: cm9vdA== # base64 for "root"
+ mongo_openim_password: bGNFdU11OHFzNDdyb2VCUzkzazExNVNPYzRnVTlVWFY= # base64 for "lcEuMu8qs47roeBS93k115SOc4gU9UXV"
diff --git a/deployments/deploy/mongo-service.yml b/deployments/deploy/mongo-service.yml
new file mode 100644
index 0000000..c3b3a10
--- /dev/null
+++ b/deployments/deploy/mongo-service.yml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: mongo-service
+spec:
+ selector:
+ app: mongo
+ ports:
+ - name: mongodb-port
+ protocol: TCP
+ port: 37017
+ targetPort: 27017
+ type: NodePort
diff --git a/deployments/deploy/mongo-statefulset.yml b/deployments/deploy/mongo-statefulset.yml
new file mode 100644
index 0000000..41cd4cb
--- /dev/null
+++ b/deployments/deploy/mongo-statefulset.yml
@@ -0,0 +1,108 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: mongo-statefulset
+spec:
+ serviceName: "mongo"
+ replicas: 2
+ selector:
+ matchLabels:
+ app: mongo
+ template:
+ metadata:
+ labels:
+ app: mongo
+ spec:
+ containers:
+ - name: mongo
+ image: mongo:7.0
+ command: ["/bin/bash", "-c"]
+ args:
+ - >
+ docker-entrypoint.sh mongod --wiredTigerCacheSizeGB ${wiredTigerCacheSizeGB} --auth &
+ until mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })" &>/dev/null; do
+ echo "Waiting for MongoDB to start...";
+ sleep 1;
+ done &&
+ mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "
+ db = db.getSiblingDB(\"${MONGO_INITDB_DATABASE}\");
+ if (!db.getUser(\"${MONGO_OPENIM_USERNAME}\")) {
+ db.createUser({
+ user: \"${MONGO_OPENIM_USERNAME}\",
+ pwd: \"${MONGO_OPENIM_PASSWORD}\",
+ roles: [{role: \"readWrite\", db: \"${MONGO_INITDB_DATABASE}\"}]
+ });
+ print(\"User created successfully: \");
+ print(\"Username: ${MONGO_OPENIM_USERNAME}\");
+ print(\"Password: ${MONGO_OPENIM_PASSWORD}\");
+ print(\"Database: ${MONGO_INITDB_DATABASE}\");
+ } else {
+ print(\"User already exists in database: ${MONGO_INITDB_DATABASE}, Username: ${MONGO_OPENIM_USERNAME}\");
+ }
+ " &&
+ tail -f /dev/null
+ ports:
+ - containerPort: 27017
+ env:
+ - name: MONGO_INITDB_ROOT_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-init-secret
+ key: mongo_initdb_root_username
+ - name: MONGO_INITDB_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-init-secret
+ key: mongo_initdb_root_password
+ - name: MONGO_INITDB_DATABASE
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-init-secret
+ key: mongo_initdb_database
+ - name: MONGO_OPENIM_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-init-secret
+ key: mongo_openim_username
+ - name: MONGO_OPENIM_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-init-secret
+ key: mongo_openim_password
+ - name: TZ
+ value: "Asia/Shanghai"
+ - name: wiredTigerCacheSizeGB
+ value: "1"
+ volumeMounts:
+ - name: mongo-storage
+ mountPath: /data/db
+
+ volumes:
+ - name: mongo-storage
+ persistentVolumeClaim:
+ claimName: mongo-pvc
+
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: mongo-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 5Gi
+
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-mongo-init-secret
+type: Opaque
+data:
+ mongo_initdb_root_username: cm9vdA== # base64 for "root"
+ mongo_initdb_root_password: b3BlbklNMTIz # base64 for "openIM123"
+ mongo_initdb_database: b3BlbmltX3Yz # base64 for "openim_v3"
+ mongo_openim_username: b3BlbklN # base64 for "openIM"
+ mongo_openim_password: b3BlbklNMTIz # base64 for "openIM123"
diff --git a/deployments/deploy/openim-api-deployment.yml b/deployments/deploy/openim-api-deployment.yml
new file mode 100644
index 0000000..5c24be6
--- /dev/null
+++ b/deployments/deploy/openim-api-deployment.yml
@@ -0,0 +1,59 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: openim-api
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: openim-api
+ template:
+ metadata:
+ labels:
+ app: openim-api
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: openim-api-container
+ image: mag1666888/openim-api:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10002
+ - containerPort: 12002
+ resources:
+ requests:
+ cpu: "200m"
+ memory: "256Mi"
+ ephemeral-storage: "2Gi"
+ limits:
+ cpu: "4000m"
+ memory: "4Gi"
+ ephemeral-storage: "20Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-api-service.yml b/deployments/deploy/openim-api-service.yml
new file mode 100644
index 0000000..a75bcd3
--- /dev/null
+++ b/deployments/deploy/openim-api-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: openim-api-service
+spec:
+ selector:
+ app: openim-api
+ ports:
+ - name: http-10002
+ protocol: TCP
+ port: 10002
+ targetPort: 10002
+ - name: prometheus-12002
+ protocol: TCP
+ port: 12002
+ targetPort: 12002
+ type: NodePort
diff --git a/deployments/deploy/openim-config.yml b/deployments/deploy/openim-config.yml
new file mode 100644
index 0000000..1e8bbd5
--- /dev/null
+++ b/deployments/deploy/openim-config.yml
@@ -0,0 +1,1060 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: openim-config
+data:
+ discovery.yml: |
+ enable: "kubernetes" # "kubernetes" or "etcd"
+ kubernetes:
+ namespace: default
+ etcd:
+ rootDirectory: openim
+ address: [ localhost:12379 ]
+ username: ''
+ password: ''
+
+ rpcService:
+ user: user-rpc-service
+ friend: friend-rpc-service
+ msg: msg-rpc-service
+ push: push-rpc-service
+ messageGateway: messagegateway-rpc-service
+ group: group-rpc-service
+ auth: auth-rpc-service
+ conversation: conversation-rpc-service
+ third: third-rpc-service
+
+ log.yml: |
+ # Log storage path, default is acceptable, change to a full path if modification is needed
+ storageLocation: ./logs/
+ # Log rotation period (in hours), default is acceptable
+ rotationTime: 24
+ # Number of log files to retain, default is acceptable
+ remainRotationCount: 2
+ # Log level settings: 3 for production environment; 6 for more verbose logging in debugging environments
+ remainLogLevel: 6
+ # Whether to output to standard output, default is acceptable
+ isStdout: true
+ # Whether to log in JSON format, default is acceptable
+ isJson: false
+    # Output simplified logs in RPC method logs when a KeyAndValues value's length exceeds 50
+ isSimplify: true
+
+ mongodb.yml: |
+ # URI for database connection, leave empty if using address and credential settings directly
+ uri: ''
+ # List of MongoDB server addresses
+ address: [ mongo-service:37017 ]
+ # Name of the database
+ database: openim_v3
+ # Username for database authentication
+ username: '' # openIM
+ # Password for database authentication
+ password: '' # openIM123
+    # Authentication source for database authentication; if using the root user, set it to admin
+ authSource: openim_v3
+ # Maximum number of connections in the connection pool
+ maxPoolSize: 100
+ # Maximum number of retry attempts for a failed database connection
+ maxRetry: 10
+
+ local-cache.yml: |
+ user:
+ topic: DELETE_CACHE_USER
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+ group:
+ topic: DELETE_CACHE_GROUP
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+ friend:
+ topic: DELETE_CACHE_FRIEND
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+ conversation:
+ topic: DELETE_CACHE_CONVERSATION
+ slotNum: 100
+ slotSize: 2000
+ successExpire: 300
+ failedExpire: 5
+
+ openim-api.yml: |
+ api:
+ # Listening IP; 0.0.0.0 means both internal and external IPs are listened to, default is recommended
+ listenIP: 0.0.0.0
+ # Listening ports; if multiple are configured, multiple instances will be launched, must be consistent with the number of prometheus.ports
+ ports: [ 10002 ]
+ # API compression level; 0: default compression, 1: best compression, 2: best speed, -1: no compression
+ compressionLevel: 0
+
+ prometheus:
+ # Whether to enable prometheus
+ enable: true
+ # Prometheus listening ports, must match the number of api.ports
+ ports: [ 12002 ]
+ # This address can be accessed via a browser
+ grafanaURL: http://127.0.0.1:13000/
+
+ openim-rpc-user.yml: |
+ rpc:
+ # API or other RPCs can access this RPC through this IP; if left blank, the internal network IP is obtained by default
+ registerIP:
+ # Listening IP; 0.0.0.0 means both internal and external IPs are listened to, if blank, the internal network IP is automatically obtained by default
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10320 ]
+ prometheus:
+ # Whether to enable prometheus
+ enable: true
+ # Prometheus listening ports, must be consistent with the number of rpc.ports
+ ports: [ 12320 ]
+
+ openim-crontask.yml: |
+ cronExecuteTime: 0 2 * * *
+ retainChatRecords: 365
+ fileExpireTime: 180
+ deleteObjectType: ["msg-picture","msg-file", "msg-voice","msg-video","msg-video-snapshot","sdklog"]
+
+ openim-msggateway.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10140 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12140 ]
+
+ # IP address that the RPC/WebSocket service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+
+ longConnSvr:
+ # WebSocket listening ports, must match the number of rpc.ports
+ ports: [ 10001 ]
+ # Maximum number of WebSocket connections
+ websocketMaxConnNum: 100000
+ # Maximum length of the entire WebSocket message packet
+ websocketMaxMsgLen: 4096
+ # WebSocket connection handshake timeout in seconds
+ websocketTimeout: 10
+
+ openim-msgtransfer.yml: |
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; each port corresponds to an instance of monitoring. Ensure these are managed accordingly
+      # Specify one port for each msgtransfer instance launched
+ ports: [ 12020 ]
+
+ openim-push.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10170 ]
+
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12170 ]
+
+ maxConcurrentWorkers: 3
+    # Use geTui for offline push notifications, or choose fcm or jpush; the corresponding configuration settings must be specified.
+ enable:
+ geTui:
+ pushUrl: https://restapi.getui.com/v2/$appId
+ masterSecret:
+ appKey:
+ intent:
+ channelID:
+ channelName:
+ fcm:
+ # Prioritize using file paths. If the file path is empty, use URL
+      filePath: # The file path is formed by joining the parameter passed via -c (`mage` passes `config/` by default) with filePath.
+ authURL: # Must start with https or http.
+ jpush:
+ appKey:
+ masterSecret:
+ pushURL:
+ pushIntent:
+
+ # iOS system push sound and badge count
+ iosPush:
+ pushSound: xxx
+ badgeCount: true
+ production: false
+
+ fullUserCache: true
+
+ openim-rpc-auth.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10200 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [12200]
+
+ tokenPolicy:
+ # Token validity period, in days
+ expire: 90
+
+ openim-rpc-conversation.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10220 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+      ports: [ 12220 ]
+
+ tokenPolicy:
+ # Token validity period, in days
+ expire: 90
+
+ openim-rpc-friend.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10240 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12240 ]
+
+ openim-rpc-group.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10260 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12260 ]
+
+ enableHistoryForNewMembers: true
+
+ openim-rpc-msg.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # if you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ ports: [ 10280 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12280 ]
+
+
+ # Does sending messages require friend verification
+ friendVerify: false
+
+ openim-rpc-third.yml: |
+ rpc:
+ # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
+ registerIP:
+ # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+ listenIP: 0.0.0.0
+ # autoSetPorts indicates whether to automatically set the ports
+      # If you deploy in Kubernetes, set it to false
+ autoSetPorts: false
+ # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+ # It will only take effect when autoSetPorts is set to false.
+ ports: [ 10300 ]
+
+ prometheus:
+ # Enable or disable Prometheus monitoring
+ enable: true
+ # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+ ports: [ 12300 ]
+
+
+ object:
+      # Use "minio" for MinIO object storage, or set to "cos", "oss", "kodo", or "aws" and configure the corresponding section below
+ enable: minio
+ cos:
+ bucketURL: https://temp-1252357374.cos.ap-chengdu.myqcloud.com
+ secretID:
+ secretKey:
+ sessionToken:
+ publicRead: false
+ oss:
+ endpoint: https://oss-cn-chengdu.aliyuncs.com
+ bucket: demo-9999999
+ bucketURL: https://demo-9999999.oss-cn-chengdu.aliyuncs.com
+ accessKeyID:
+ accessKeySecret:
+ sessionToken:
+ publicRead: false
+ kodo:
+ endpoint: http://s3.cn-south-1.qiniucs.com
+ bucket: kodo-bucket-test
+ bucketURL: http://kodo-bucket-test-oetobfb.qiniudns.com
+ accessKeyID:
+ accessKeySecret:
+ sessionToken:
+ publicRead: false
+ aws:
+ region: ap-southeast-2
+ bucket: testdemo832234
+ accessKeyID:
+ secretAccessKey:
+ sessionToken:
+ publicRead: false
+
+ share.yml: |
+ secret: openIM123
+
+ imAdminUserID: ["imAdmin"]
+
+    # policy 1: for each of Android, iOS, Windows, Mac, and web, only one instance can be online at a time
+ multiLogin:
+ policy: 1
+ maxNumOneEnd: 30
+
+ rpcMaxBodySize:
+ requestMaxBodySize: 8388608
+ responseMaxBodySize: 8388608
+
+ kafka.yml: |
+ # Username for authentication
+ username: ''
+ # Password for authentication
+ password: ''
+ # Producer acknowledgment settings
+ producerAck:
+ # Compression type to use (e.g., none, gzip, snappy)
+ compressType: none
+ # List of Kafka broker addresses
+ address: [ "kafka-service:19094" ]
+ # Kafka topic for Redis integration
+ toRedisTopic: toRedis
+ # Kafka topic for MongoDB integration
+ toMongoTopic: toMongo
+ # Kafka topic for push notifications
+ toPushTopic: toPush
+ # Kafka topic for offline push notifications
+ toOfflinePushTopic: toOfflinePush
+ # Consumer group ID for Redis topic
+ toRedisGroupID: redis
+ # Consumer group ID for MongoDB topic
+ toMongoGroupID: mongo
+ # Consumer group ID for push notifications topic
+ toPushGroupID: push
+ # Consumer group ID for offline push notifications topic
+ toOfflinePushGroupID: offlinePush
+ # TLS (Transport Layer Security) configuration
+ tls:
+ # Enable or disable TLS
+ enableTLS: false
+ # CA certificate file path
+ caCrt:
+ # Client certificate file path
+ clientCrt:
+ # Client key file path
+ clientKey:
+ # Client key password
+ clientKeyPwd:
+ # Whether to skip TLS verification (not recommended for production)
+ insecureSkipVerify: false
+
+ redis.yml: |
+ address: [ "redis-service:16379" ]
+ username:
+ password: # openIM123
+ clusterMode: false
+ db: 0
+ maxRetry: 10
+ poolSize: 100
+
+ minio.yml: |
+ # Name of the bucket in MinIO
+ bucket: openim
+ # Access key ID for MinIO authentication
+ accessKeyID: root
+ # Secret access key for MinIO authentication
+ secretAccessKey: # openIM123
+ # Session token for MinIO authentication (optional)
+ sessionToken:
+ # Internal address of the MinIO server
+ internalAddress: minio-service:10005
+ # External address of the MinIO server, accessible from outside. Supports both HTTP and HTTPS using a domain name
+ externalAddress: http://minio-service:10005
+ # Flag to enable or disable public read access to the bucket
+ publicRead: "false"
+
+ notification.yml: |
+ groupCreated:
+ isSendMsg: true
+ # Reliability level of the message sending.
+ # Set to 1 to send only when online, 2 for guaranteed delivery.
+ reliabilityLevel: 1
+ # This setting is effective only when 'isSendMsg' is true.
+ # It controls whether to count unread messages.
+ unreadCount: false
+ # Configuration for offline push notifications.
+ offlinePush:
+ # Enables or disables offline push notifications.
+ enable: false
+ # Title for the notification when a group is created.
+ title: create group title
+ # Description for the notification.
+ desc: create group desc
+ # Additional information for the notification.
+ ext: create group ext
+
+ groupInfoSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSet title
+ desc: groupInfoSet desc
+ ext: groupInfoSet ext
+
+ joinGroupApplication:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: joinGroupApplication title
+ desc: joinGroupApplication desc
+ ext: joinGroupApplication ext
+
+ memberQuit:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberQuit title
+ desc: memberQuit desc
+ ext: memberQuit ext
+
+ groupApplicationAccepted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupApplicationAccepted title
+ desc: groupApplicationAccepted desc
+ ext: groupApplicationAccepted ext
+
+ groupApplicationRejected:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupApplicationRejected title
+ desc: groupApplicationRejected desc
+ ext: groupApplicationRejected ext
+
+ groupOwnerTransferred:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupOwnerTransferred title
+ desc: groupOwnerTransferred desc
+ ext: groupOwnerTransferred ext
+
+ memberKicked:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberKicked title
+ desc: memberKicked desc
+ ext: memberKicked ext
+
+ memberInvited:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberInvited title
+ desc: memberInvited desc
+ ext: memberInvited ext
+
+ memberEnter:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: memberEnter title
+ desc: memberEnter desc
+ ext: memberEnter ext
+
+ groupDismissed:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupDismissed title
+ desc: groupDismissed desc
+ ext: groupDismissed ext
+
+ groupMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMuted title
+ desc: groupMuted desc
+ ext: groupMuted ext
+
+ groupCancelMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupCancelMuted title
+ desc: groupCancelMuted desc
+ ext: groupCancelMuted ext
+ defaultTips:
+ tips: group Cancel Muted
+
+ groupMemberMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberMuted title
+ desc: groupMemberMuted desc
+ ext: groupMemberMuted ext
+
+ groupMemberCancelMuted:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberCancelMuted title
+ desc: groupMemberCancelMuted desc
+ ext: groupMemberCancelMuted ext
+
+ groupMemberInfoSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupMemberInfoSet title
+ desc: groupMemberInfoSet desc
+ ext: groupMemberInfoSet ext
+
+ groupInfoSetAnnouncement:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSetAnnouncement title
+ desc: groupInfoSetAnnouncement desc
+ ext: groupInfoSetAnnouncement ext
+
+ groupInfoSetName:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: groupInfoSetName title
+ desc: groupInfoSetName desc
+ ext: groupInfoSetName ext
+
+ #############################friend#################################
+ friendApplicationAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: Somebody applies to add you as a friend
+ desc: Somebody applies to add you as a friend
+ ext: Somebody applies to add you as a friend
+
+ friendApplicationApproved:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+        title: Someone approved your friend application
+        desc: Someone approved your friend application
+        ext: Someone approved your friend application
+
+ friendApplicationRejected:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: Someone rejected your friend application
+ desc: Someone rejected your friend application
+ ext: Someone rejected your friend application
+
+ friendAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: We have become friends
+ desc: We have become friends
+ ext: We have become friends
+
+ friendDeleted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: deleted a friend
+ desc: deleted a friend
+ ext: deleted a friend
+
+ friendRemarkSet:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: Your friend's profile has been changed
+ desc: Your friend's profile has been changed
+ ext: Your friend's profile has been changed
+
+ blackAdded:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: blocked a user
+ desc: blocked a user
+ ext: blocked a user
+
+ blackDeleted:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: Remove a blocked user
+ desc: Remove a blocked user
+ ext: Remove a blocked user
+
+ friendInfoUpdated:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: friend info updated
+ desc: friend info updated
+ ext: friend info updated
+
+ #####################user#########################
+ userInfoUpdated:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: userInfo updated
+ desc: userInfo updated
+ ext: userInfo updated
+
+ userStatusChanged:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: false
+ title: user status changed
+ desc: user status changed
+ ext: user status changed
+
+ #####################conversation#########################
+ conversationChanged:
+ isSendMsg: false
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: conversation changed
+ desc: conversation changed
+ ext: conversation changed
+
+ conversationSetPrivate:
+ isSendMsg: true
+ reliabilityLevel: 1
+ unreadCount: false
+ offlinePush:
+ enable: true
+ title: burn after reading
+ desc: burn after reading
+ ext: burn after reading
+
+ webhooks.yml: |
+ url: http://127.0.0.1:10006/callbackExample
+ beforeSendSingleMsg:
+ enable: false
+ timeout: 5
+ failedContinue: true
+      # Only messages whose contentType is in allowedTypes will trigger the callback.
+      # Supports two formats: a single type, or a range whose lower and upper bounds are joined with a hyphen ("-").
+      # e.g. allowedTypes: [1, 100, 200-500, 600-700] means that only contentType within
+      # {1, 100} ∪ [200, 500] ∪ [600, 700] will be allowed through the filter.
+      # If not set, messages of all contentTypes will pass through this filter.
+ allowedTypes: []
+      # Only messages whose contentType is not in deniedTypes will trigger the callback.
+      # Supports two formats, same as allowedTypes.
+      # If not set, messages of all contentTypes will pass through this filter.
+ deniedTypes: []
+ beforeUpdateUserInfoEx:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterUpdateUserInfoEx:
+ enable: false
+ timeout: 5
+ afterSendSingleMsg:
+ enable: false
+ timeout: 5
+      # Only messages whose sendID/recvID is listed in attentionIds will trigger the callback.
+      # If not set, all users' messages trigger the callback.
+ attentionIds: []
+ # See beforeSendSingleMsg comment.
+ allowedTypes: []
+ deniedTypes: []
+ beforeSendGroupMsg:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ # See beforeSendSingleMsg comment.
+ allowedTypes: []
+ deniedTypes: []
+ beforeMsgModify:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ # See beforeSendSingleMsg comment.
+ allowedTypes: []
+ deniedTypes: []
+ afterSendGroupMsg:
+ enable: false
+ timeout: 5
+ # See beforeSendSingleMsg comment.
+ allowedTypes: []
+ deniedTypes: []
+ afterUserOnline:
+ enable: false
+ timeout: 5
+ afterUserOffline:
+ enable: false
+ timeout: 5
+ afterUserKickOff:
+ enable: false
+ timeout: 5
+ beforeOfflinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ beforeOnlinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ beforeGroupOnlinePush:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ beforeAddFriend:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ beforeUpdateUserInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterUpdateUserInfo:
+ enable: false
+ timeout: 5
+ beforeCreateGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterCreateGroup:
+ enable: false
+ timeout: 5
+ beforeMemberJoinGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ beforeSetGroupMemberInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterSetGroupMemberInfo:
+ enable: false
+ timeout: 5
+ afterQuitGroup:
+ enable: false
+ timeout: 5
+ afterKickGroupMember:
+ enable: false
+ timeout: 5
+ afterDismissGroup:
+ enable: false
+ timeout: 5
+ beforeApplyJoinGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterGroupMsgRead:
+ enable: false
+ timeout: 5
+ afterSingleMsgRead:
+ enable: false
+ timeout: 5
+ beforeUserRegister:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterUserRegister:
+ enable: false
+ timeout: 5
+ afterTransferGroupOwner:
+ enable: false
+ timeout: 5
+ beforeSetFriendRemark:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterSetFriendRemark:
+ enable: false
+ timeout: 5
+ afterGroupMsgRevoke:
+ enable: false
+ timeout: 5
+ afterJoinGroup:
+ enable: false
+ timeout: 5
+ beforeInviteUserToGroup:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterSetGroupInfo:
+ enable: false
+ timeout: 5
+ beforeSetGroupInfo:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterSetGroupInfoEx:
+ enable: false
+ timeout: 5
+ beforeSetGroupInfoEx:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterRevokeMsg:
+ enable: false
+ timeout: 5
+ beforeAddBlack:
+ enable: false
+ timeout: 5
+      failedContinue: true
+ afterAddFriend:
+ enable: false
+ timeout: 5
+ beforeAddFriendAgree:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterAddFriendAgree:
+ enable: false
+ timeout: 5
+ afterDeleteFriend:
+ enable: false
+ timeout: 5
+ beforeImportFriends:
+ enable: false
+ timeout: 5
+ failedContinue: true
+ afterImportFriends:
+ enable: false
+ timeout: 5
+ afterRemoveBlack:
+ enable: false
+ timeout: 5
+
+ prometheus.yml: |
+ # my global config
+ global:
+ scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
+ evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
+ # scrape_timeout is set to the global default (10s).
+
+ # Alertmanager configuration
+ alerting:
+ alertmanagers:
+ - static_configs:
+ - targets: [internal_ip:19093]
+
+ # Load rules once and periodically evaluate them according to the global evaluation_interval.
+ rule_files:
+ - instance-down-rules.yml
+ # - first_rules.yml
+ # - second_rules.yml
+
+ # A scrape configuration containing exactly one endpoint to scrape:
+ # Here it's Prometheus itself.
+ scrape_configs:
+ # The job name is added as a label "job=job_name" to any timeseries scraped from this config.
+ # Monitored information captured by prometheus
+
+      # Prometheus scrapes the application services below
+ - job_name: node_exporter
+ static_configs:
+ - targets: [ internal_ip:20500 ]
+ - job_name: openimserver-openim-api
+ static_configs:
+ - targets: [ internal_ip:12002 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-msggateway
+ static_configs:
+ - targets: [ internal_ip:12140 ]
+ # - targets: [ internal_ip:12140, internal_ip:12141, internal_ip:12142, internal_ip:12143, internal_ip:12144, internal_ip:12145, internal_ip:12146, internal_ip:12147, internal_ip:12148, internal_ip:12149, internal_ip:12150, internal_ip:12151, internal_ip:12152, internal_ip:12153, internal_ip:12154, internal_ip:12155 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-msgtransfer
+ static_configs:
+ - targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027 ]
+ # - targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027, internal_ip:12028, internal_ip:12029, internal_ip:12030, internal_ip:12031, internal_ip:12032, internal_ip:12033, internal_ip:12034, internal_ip:12035 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-push
+ static_configs:
+ - targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177 ]
+ # - targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177, internal_ip:12178, internal_ip:12179, internal_ip:12180, internal_ip:12182, internal_ip:12183, internal_ip:12184, internal_ip:12185, internal_ip:12186 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-auth
+ static_configs:
+ - targets: [ internal_ip:12200 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-conversation
+ static_configs:
+ - targets: [ internal_ip:12220 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-friend
+ static_configs:
+ - targets: [ internal_ip:12240 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-group
+ static_configs:
+ - targets: [ internal_ip:12260 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-msg
+ static_configs:
+ - targets: [ internal_ip:12280 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-third
+ static_configs:
+ - targets: [ internal_ip:12300 ]
+ labels:
+ namespace: default
+ - job_name: openimserver-openim-rpc-user
+ static_configs:
+ - targets: [ internal_ip:12320 ]
+ labels:
+ namespace: default
diff --git a/deployments/deploy/openim-crontask-deployment.yml b/deployments/deploy/openim-crontask-deployment.yml
new file mode 100644
index 0000000..d09a13c
--- /dev/null
+++ b/deployments/deploy/openim-crontask-deployment.yml
@@ -0,0 +1,31 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: openim-crontask
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: crontask
+ template:
+ metadata:
+ labels:
+ app: crontask
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: crontask-container
+ image: mag1666888/openim-crontask:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-msggateway-deployment.yml b/deployments/deploy/openim-msggateway-deployment.yml
new file mode 100644
index 0000000..0d006f4
--- /dev/null
+++ b/deployments/deploy/openim-msggateway-deployment.yml
@@ -0,0 +1,48 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: messagegateway-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: messagegateway-rpc-server
+ template:
+ metadata:
+ labels:
+ app: messagegateway-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: openim-msggateway-container
+ image: mag1666888/openim-msggateway:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10140
+ - containerPort: 12001
+ resources:
+ requests:
+ cpu: "200m"
+ memory: "256Mi"
+ ephemeral-storage: "2Gi"
+ limits:
+ cpu: "4000m"
+ memory: "4Gi"
+ ephemeral-storage: "20Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-msggateway-service.yml b/deployments/deploy/openim-msggateway-service.yml
new file mode 100644
index 0000000..15982cf
--- /dev/null
+++ b/deployments/deploy/openim-msggateway-service.yml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: messagegateway-rpc-service
+spec:
+ selector:
+ app: messagegateway-rpc-server
+ ports:
+ - name: longconnserver-10001
+ protocol: TCP
+ port: 10001
+ targetPort: 10001
+ - name: grpc-10140
+ protocol: TCP
+ port: 10140
+ targetPort: 10140
+ - name: prometheus-12001
+ protocol: TCP
+ port: 12001
+ targetPort: 12001
+ type: NodePort
diff --git a/deployments/deploy/openim-msgtransfer-deployment.yml b/deployments/deploy/openim-msgtransfer-deployment.yml
new file mode 100644
index 0000000..c2fd086
--- /dev/null
+++ b/deployments/deploy/openim-msgtransfer-deployment.yml
@@ -0,0 +1,60 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: openim-msgtransfer-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: openim-msgtransfer-server
+ template:
+ metadata:
+ labels:
+ app: openim-msgtransfer-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: openim-msgtransfer-container
+ image: mag1666888/openim-msgtransfer:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ - name: IMENV_KAFKA_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-kafka-secret
+ key: kafka-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 12020
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "512Mi"
+ limits:
+ cpu: "1000m"
+ memory: "1Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-msgtransfer-service.yml b/deployments/deploy/openim-msgtransfer-service.yml
new file mode 100644
index 0000000..387208b
--- /dev/null
+++ b/deployments/deploy/openim-msgtransfer-service.yml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: openim-msgtransfer-service
+spec:
+ selector:
+ app: openim-msgtransfer-server
+ ports:
+ - name: prometheus-12020
+ protocol: TCP
+ port: 12020
+ targetPort: 12020
+ type: ClusterIP
diff --git a/deployments/deploy/openim-push-deployment.yml b/deployments/deploy/openim-push-deployment.yml
new file mode 100644
index 0000000..caf9e88
--- /dev/null
+++ b/deployments/deploy/openim-push-deployment.yml
@@ -0,0 +1,51 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: push-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: push-rpc-server
+ template:
+ metadata:
+ labels:
+ app: push-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: push-rpc-server-container
+ image: mag1666888/openim-push:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_KAFKA_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-kafka-secret
+ key: kafka-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10170
+ - containerPort: 12170
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "512Mi"
+ limits:
+ cpu: "1000m"
+ memory: "1Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-push-service.yml b/deployments/deploy/openim-push-service.yml
new file mode 100644
index 0000000..33f39c2
--- /dev/null
+++ b/deployments/deploy/openim-push-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: push-rpc-service
+spec:
+ selector:
+ app: push-rpc-server
+ ports:
+ - name: http-10170
+ protocol: TCP
+ port: 10170
+ targetPort: 10170
+ - name: prometheus-12170
+ protocol: TCP
+ port: 12170
+ targetPort: 12170
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-auth-deployment.yml b/deployments/deploy/openim-rpc-auth-deployment.yml
new file mode 100644
index 0000000..b41d44a
--- /dev/null
+++ b/deployments/deploy/openim-rpc-auth-deployment.yml
@@ -0,0 +1,39 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: auth-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: auth-rpc-server
+ template:
+ metadata:
+ labels:
+ app: auth-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: auth-rpc-server-container
+ image: mag1666888/openim-rpc-auth:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10200
+ - containerPort: 12200
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-auth-service.yml b/deployments/deploy/openim-rpc-auth-service.yml
new file mode 100644
index 0000000..7d79838
--- /dev/null
+++ b/deployments/deploy/openim-rpc-auth-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: auth-rpc-service
+spec:
+ selector:
+ app: auth-rpc-server
+ ports:
+ - name: http-10200
+ protocol: TCP
+ port: 10200
+ targetPort: 10200
+ - name: prometheus-12200
+ protocol: TCP
+ port: 12200
+ targetPort: 12200
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-conversation-deployment.yml b/deployments/deploy/openim-rpc-conversation-deployment.yml
new file mode 100644
index 0000000..19821ef
--- /dev/null
+++ b/deployments/deploy/openim-rpc-conversation-deployment.yml
@@ -0,0 +1,49 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: conversation-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: conversation-rpc-server
+ template:
+ metadata:
+ labels:
+ app: conversation-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: conversation-rpc-server-container
+ image: mag1666888/openim-rpc-conversation:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10220
+ - containerPort: 12220
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-conversation-service.yml b/deployments/deploy/openim-rpc-conversation-service.yml
new file mode 100644
index 0000000..f9be231
--- /dev/null
+++ b/deployments/deploy/openim-rpc-conversation-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: conversation-rpc-service
+spec:
+ selector:
+ app: conversation-rpc-server
+ ports:
+ - name: http-10220
+ protocol: TCP
+ port: 10220
+ targetPort: 10220
+ - name: prometheus-12220
+ protocol: TCP
+ port: 12220
+ targetPort: 12220
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-friend-deployment.yml b/deployments/deploy/openim-rpc-friend-deployment.yml
new file mode 100644
index 0000000..c538be2
--- /dev/null
+++ b/deployments/deploy/openim-rpc-friend-deployment.yml
@@ -0,0 +1,49 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: friend-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: friend-rpc-server
+ template:
+ metadata:
+ labels:
+ app: friend-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: friend-rpc-server-container
+ image: mag1666888/openim-rpc-friend:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10240
+ - containerPort: 12240
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-friend-service.yml b/deployments/deploy/openim-rpc-friend-service.yml
new file mode 100644
index 0000000..b6b512b
--- /dev/null
+++ b/deployments/deploy/openim-rpc-friend-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: friend-rpc-service
+spec:
+ selector:
+ app: friend-rpc-server
+ ports:
+ - name: http-10240
+ protocol: TCP
+ port: 10240
+ targetPort: 10240
+ - name: prometheus-12240
+ protocol: TCP
+ port: 12240
+ targetPort: 12240
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-group-deployment.yml b/deployments/deploy/openim-rpc-group-deployment.yml
new file mode 100644
index 0000000..10a33ef
--- /dev/null
+++ b/deployments/deploy/openim-rpc-group-deployment.yml
@@ -0,0 +1,49 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: group-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: group-rpc-server
+ template:
+ metadata:
+ labels:
+ app: group-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: group-rpc-server-container
+ image: mag1666888/openim-rpc-group:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10260
+ - containerPort: 12260
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-group-service.yml b/deployments/deploy/openim-rpc-group-service.yml
new file mode 100644
index 0000000..bccc080
--- /dev/null
+++ b/deployments/deploy/openim-rpc-group-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: group-rpc-service
+spec:
+ selector:
+ app: group-rpc-server
+ ports:
+ - name: http-10260
+ protocol: TCP
+ port: 10260
+ targetPort: 10260
+ - name: prometheus-12260
+ protocol: TCP
+ port: 12260
+ targetPort: 12260
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-msg-deployment.yml b/deployments/deploy/openim-rpc-msg-deployment.yml
new file mode 100644
index 0000000..9e5aa7b
--- /dev/null
+++ b/deployments/deploy/openim-rpc-msg-deployment.yml
@@ -0,0 +1,63 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: msg-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: msg-rpc-server
+ template:
+ metadata:
+ labels:
+ app: msg-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: msg-rpc-server-container
+ image: mag1666888/openim-rpc-msg:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ - name: IMENV_KAFKA_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-kafka-secret
+ key: kafka-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10280
+ - containerPort: 12280
+ resources:
+ requests:
+ cpu: "200m"
+ memory: "256Mi"
+ ephemeral-storage: "2Gi"
+ limits:
+ cpu: "4000m"
+ memory: "4Gi"
+ ephemeral-storage: "20Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-msg-service.yml b/deployments/deploy/openim-rpc-msg-service.yml
new file mode 100644
index 0000000..db7610e
--- /dev/null
+++ b/deployments/deploy/openim-rpc-msg-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: msg-rpc-service
+spec:
+ selector:
+ app: msg-rpc-server
+ ports:
+ - name: http-10280
+ protocol: TCP
+ port: 10280
+ targetPort: 10280
+ - name: prometheus-12280
+ protocol: TCP
+ port: 12280
+ targetPort: 12280
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-third-deployment.yml b/deployments/deploy/openim-rpc-third-deployment.yml
new file mode 100644
index 0000000..d78ff29
--- /dev/null
+++ b/deployments/deploy/openim-rpc-third-deployment.yml
@@ -0,0 +1,59 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: third-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: third-rpc-server
+ template:
+ metadata:
+ labels:
+ app: third-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: third-rpc-server-container
+ image: mag1666888/openim-rpc-third:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_MINIO_ACCESSKEYID
+ valueFrom:
+ secretKeyRef:
+ name: openim-minio-secret
+ key: minio-root-user
+ - name: IMENV_MINIO_SECRETACCESSKEY
+ valueFrom:
+ secretKeyRef:
+ name: openim-minio-secret
+ key: minio-root-password
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10300
+ - containerPort: 12300
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-third-service.yml b/deployments/deploy/openim-rpc-third-service.yml
new file mode 100644
index 0000000..8cd34c2
--- /dev/null
+++ b/deployments/deploy/openim-rpc-third-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: third-rpc-service
+spec:
+ selector:
+ app: third-rpc-server
+ ports:
+ - name: http-10300
+ protocol: TCP
+ port: 10300
+ targetPort: 10300
+ - name: prometheus-12300
+ protocol: TCP
+ port: 12300
+ targetPort: 12300
+ type: ClusterIP
diff --git a/deployments/deploy/openim-rpc-user-deployment.yml b/deployments/deploy/openim-rpc-user-deployment.yml
new file mode 100644
index 0000000..a527194
--- /dev/null
+++ b/deployments/deploy/openim-rpc-user-deployment.yml
@@ -0,0 +1,63 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: user-rpc-server
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: user-rpc-server
+ template:
+ metadata:
+ labels:
+ app: user-rpc-server
+ spec:
+ imagePullSecrets:
+ - name: dockerhub-secret
+ containers:
+ - name: user-rpc-server-container
+ image: mag1666888/openim-rpc-user:prod
+ imagePullPolicy: Always
+ env:
+ - name: CONFIG_PATH
+ value: "/config"
+ - name: IMENV_REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-redis-secret
+ key: redis-password
+ - name: IMENV_MONGODB_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_username
+ - name: IMENV_MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-mongo-secret
+ key: mongo_openim_password
+ - name: IMENV_KAFKA_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: openim-kafka-secret
+ key: kafka-password
+ volumeMounts:
+ - name: openim-config
+ mountPath: "/config"
+ readOnly: true
+ ports:
+ - containerPort: 10320
+ - containerPort: 12320
+ resources:
+ requests:
+ cpu: "200m"
+ memory: "256Mi"
+ ephemeral-storage: "2Gi"
+ limits:
+ cpu: "4000m"
+ memory: "4Gi"
+ ephemeral-storage: "20Gi"
+ volumes:
+ - name: openim-config
+ configMap:
+ name: openim-config
diff --git a/deployments/deploy/openim-rpc-user-service.yml b/deployments/deploy/openim-rpc-user-service.yml
new file mode 100644
index 0000000..50cef3c
--- /dev/null
+++ b/deployments/deploy/openim-rpc-user-service.yml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: user-rpc-service
+spec:
+ selector:
+ app: user-rpc-server
+ ports:
+ - name: http-10320
+ protocol: TCP
+ port: 10320
+ targetPort: 10320
+ - name: prometheus-12320
+ protocol: TCP
+ port: 12320
+ targetPort: 12320
+ type: ClusterIP
diff --git a/deployments/deploy/redis-secret.yml b/deployments/deploy/redis-secret.yml
new file mode 100644
index 0000000..f4fae83
--- /dev/null
+++ b/deployments/deploy/redis-secret.yml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: openim-redis-secret
+type: Opaque
+data:
+  redis-password: WU9iaEI5S0dwRm9KT3dMc0dPVE1Ub0x6RU56elVYUXM= # base64-encoded Redis password (a generated 32-character value, not "openIM123")
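Secret values under `data:` must be base64-encoded. A quick way to produce and verify such a value from the shell (using the default `openIM123` credential that appears elsewhere in this setup):

```shell
# Encode a plaintext password for a Secret's data: field
printf '%s' 'openIM123' | base64
# → b3BlbklNMTIz

# Decode a Secret value to check what it actually contains
printf '%s' 'b3BlbklNMTIz' | base64 -d
# → openIM123
```

Note that `printf '%s'` (rather than `echo`) avoids encoding a trailing newline into the stored password.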
diff --git a/deployments/deploy/redis-service.yml b/deployments/deploy/redis-service.yml
new file mode 100644
index 0000000..d076fd1
--- /dev/null
+++ b/deployments/deploy/redis-service.yml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: redis-service
+ labels:
+ app: redis
+spec:
+ type: ClusterIP
+ selector:
+ app: redis
+ ports:
+ - name: redis-port
+ protocol: TCP
+ port: 16379
+ targetPort: 6379
diff --git a/deployments/deploy/redis-statefulset.yml b/deployments/deploy/redis-statefulset.yml
new file mode 100644
index 0000000..5668b20
--- /dev/null
+++ b/deployments/deploy/redis-statefulset.yml
@@ -0,0 +1,55 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: redis-statefulset
+spec:
+ serviceName: "redis"
+ replicas: 2
+ selector:
+ matchLabels:
+ app: redis
+ template:
+ metadata:
+ labels:
+ app: redis
+ spec:
+ containers:
+ - name: redis
+ image: redis:7.0.0
+ ports:
+ - containerPort: 6379
+ env:
+ - name: TZ
+ value: "Asia/Shanghai"
+ - name: REDIS_PASSWORD
+ valueFrom:
+ secretKeyRef:
+              name: openim-redis-secret
+ key: redis-password
+ volumeMounts:
+ - name: redis-data
+ mountPath: /data
+ command:
+ [
+ "/bin/sh",
+ "-c",
+ 'redis-server --requirepass "$REDIS_PASSWORD" --appendonly yes',
+ ]
+      volumes:
+        - name: redis-config-volume
+          configMap:
+            name: openim-config
+  # One PVC per replica: a single ReadWriteOnce claim cannot be shared
+  # by both replicas of the StatefulSet.
+  volumeClaimTemplates:
+    - metadata:
+        name: redis-data
+      spec:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 5Gi
diff --git a/docker-compose.yml b/docker-compose.yml
new file mode 100644
index 0000000..ed67813
--- /dev/null
+++ b/docker-compose.yml
@@ -0,0 +1,342 @@
+networks:
+ openim:
+ driver: bridge
+
+services:
+ mongodb:
+ image: "${MONGO_IMAGE}"
+ ports:
+ - "37017:27017"
+ container_name: mongo
+ command: >
+ bash -c '
+ docker-entrypoint.sh mongod --wiredTigerCacheSizeGB $$wiredTigerCacheSizeGB --auth &
+ until mongosh -u $$MONGO_INITDB_ROOT_USERNAME -p $$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })" &>/dev/null; do
+ echo "Waiting for MongoDB to start..."
+ sleep 1
+ done &&
+ mongosh -u $$MONGO_INITDB_ROOT_USERNAME -p $$MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase admin --eval "
+ db = db.getSiblingDB(\"$$MONGO_INITDB_DATABASE\");
+ if (!db.getUser(\"$$MONGO_OPENIM_USERNAME\")) {
+ db.createUser({
+ user: \"$$MONGO_OPENIM_USERNAME\",
+ pwd: \"$$MONGO_OPENIM_PASSWORD\",
+ roles: [{role: \"readWrite\", db: \"$$MONGO_INITDB_DATABASE\"}]
+ });
+ print(\"User created successfully: \");
+ print(\"Username: $$MONGO_OPENIM_USERNAME\");
+ print(\"Password: $$MONGO_OPENIM_PASSWORD\");
+ print(\"Database: $$MONGO_INITDB_DATABASE\");
+ } else {
+ print(\"User already exists in database: $$MONGO_INITDB_DATABASE, Username: $$MONGO_OPENIM_USERNAME\");
+ }
+ " &&
+ tail -f /dev/null
+ '
+ volumes:
+ - "${DATA_DIR}/components/mongodb/data/db:/data/db"
+ - "${DATA_DIR}/components/mongodb/data/logs:/data/logs"
+ - "${DATA_DIR}/components/mongodb/data/conf:/etc/mongo"
+ - "${MONGO_BACKUP_DIR}:/data/backup"
+ environment:
+ - TZ=Asia/Shanghai
+ - wiredTigerCacheSizeGB=1
+ - MONGO_INITDB_ROOT_USERNAME=root
+ - MONGO_INITDB_ROOT_PASSWORD=openIM123
+ - MONGO_INITDB_DATABASE=openim_v3
+ - MONGO_OPENIM_USERNAME=openIM
+ - MONGO_OPENIM_PASSWORD=openIM123
+ restart: always
+ networks:
+ - openim
+
+ redis:
+ image: "${REDIS_IMAGE}"
+ container_name: redis
+ ports:
+ - "16379:6379"
+ volumes:
+ - "${DATA_DIR}/components/redis/data:/data"
+ - "${DATA_DIR}/components/redis/config/redis.conf:/usr/local/redis/config/redis.conf"
+ environment:
+ TZ: Asia/Shanghai
+ restart: always
+ sysctls:
+ net.core.somaxconn: 1024
+ command: >
+ redis-server
+ --requirepass openIM123
+ --appendonly yes
+ --aof-use-rdb-preamble yes
+ --save ""
+ networks:
+ - openim
+
+ etcd:
+ image: "${ETCD_IMAGE}"
+ container_name: etcd
+ ports:
+ - "12379:2379"
+ - "12380:2380"
+ environment:
+ - ETCD_NAME=s1
+ - ETCD_DATA_DIR=/etcd-data
+ - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
+ - ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379
+ - ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
+ - ETCD_INITIAL_ADVERTISE_PEER_URLS=http://0.0.0.0:2380
+ - ETCD_INITIAL_CLUSTER=s1=http://0.0.0.0:2380
+ - ETCD_INITIAL_CLUSTER_TOKEN=tkn
+ - ETCD_INITIAL_CLUSTER_STATE=new
+ - ALLOW_NONE_AUTHENTICATION=no
+
+ ## Optional: Enable etcd authentication by setting the following credentials
+ # - ETCD_ROOT_USER=root
+ # - ETCD_ROOT_PASSWORD=openIM123
+ # - ETCD_USERNAME=openIM
+ # - ETCD_PASSWORD=openIM123
+ volumes:
+ - "${DATA_DIR}/components/etcd:/etcd-data"
+ command: >
+ /bin/sh -c '
+ etcd &
+ export ETCDCTL_API=3
+ echo "Waiting for etcd to become healthy..."
+ until etcdctl --endpoints=http://127.0.0.1:2379 endpoint health &>/dev/null; do
+ echo "Waiting for ETCD to start..."
+ sleep 1
+ done
+
+ echo "etcd is healthy."
+
+ if [ -n "$${ETCD_ROOT_USER}" ] && [ -n "$${ETCD_ROOT_PASSWORD}" ] && [ -n "$${ETCD_USERNAME}" ] && [ -n "$${ETCD_PASSWORD}" ]; then
+ echo "Authentication credentials provided. Setting up authentication..."
+
+ echo "Checking authentication status..."
+ if ! etcdctl --endpoints=http://127.0.0.1:2379 auth status | grep -q "Authentication Status: true"; then
+ echo "Authentication is disabled. Creating users and enabling..."
+
+ # Create users and setup permissions
+ etcdctl --endpoints=http://127.0.0.1:2379 user add $${ETCD_ROOT_USER} --new-user-password=$${ETCD_ROOT_PASSWORD} || true
+ etcdctl --endpoints=http://127.0.0.1:2379 user add $${ETCD_USERNAME} --new-user-password=$${ETCD_PASSWORD} || true
+
+ etcdctl --endpoints=http://127.0.0.1:2379 role add openim-role || true
+ etcdctl --endpoints=http://127.0.0.1:2379 role grant-permission openim-role --prefix=true readwrite / || true
+ etcdctl --endpoints=http://127.0.0.1:2379 role grant-permission openim-role --prefix=true readwrite "" || true
+ etcdctl --endpoints=http://127.0.0.1:2379 user grant-role $${ETCD_USERNAME} openim-role || true
+
+        etcdctl --endpoints=http://127.0.0.1:2379 user grant-role $${ETCD_ROOT_USER} root || true
+
+ echo "Enabling authentication..."
+ etcdctl --endpoints=http://127.0.0.1:2379 auth enable
+ echo "Authentication enabled successfully"
+ else
+ echo "Authentication is already enabled. Checking OpenIM user..."
+
+ # Check if openIM user exists and can perform operations
+ if ! etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_USERNAME}:$${ETCD_PASSWORD} put /test/auth "auth-check" &>/dev/null; then
+ echo "OpenIM user test failed. Recreating user with root credentials..."
+
+ # Try to create/update the openIM user using root credentials
+          etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} user add $${ETCD_USERNAME} --new-user-password=$${ETCD_PASSWORD} || true
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} role add openim-role || true
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} role grant-permission openim-role --prefix=true readwrite / || true
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} role grant-permission openim-role --prefix=true readwrite "" || true
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} user grant-role $${ETCD_USERNAME} openim-role || true
+          etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_ROOT_USER}:$${ETCD_ROOT_PASSWORD} user grant-role $${ETCD_ROOT_USER} root || true
+
+ echo "OpenIM user recreated with required permissions"
+ else
+ echo "OpenIM user exists and has correct permissions"
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_USERNAME}:$${ETCD_PASSWORD} del /test/auth &>/dev/null
+ fi
+ fi
+ echo "Testing authentication with OpenIM user..."
+ if etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_USERNAME}:$${ETCD_PASSWORD} put /test/auth "auth-works"; then
+ echo "Authentication working properly"
+ etcdctl --endpoints=http://127.0.0.1:2379 --user=$${ETCD_USERNAME}:$${ETCD_PASSWORD} del /test/auth
+ else
+ echo "WARNING: Authentication test failed"
+ fi
+ else
+ echo "No authentication credentials provided. Running in no-auth mode."
+ echo "To enable authentication, set ETCD_ROOT_USER, ETCD_ROOT_PASSWORD, ETCD_USERNAME, and ETCD_PASSWORD environment variables."
+ fi
+
+ tail -f /dev/null
+ '
+ restart: always
+ networks:
+ - openim
+
+ kafka:
+ image: "${KAFKA_IMAGE}"
+ container_name: kafka
+ user: root
+ restart: always
+ ports:
+ - "19094:9094"
+ volumes:
+ - "${DATA_DIR}/components/kafka:/bitnami/kafka"
+ environment:
+ #KAFKA_HEAP_OPTS: "-Xms128m -Xmx256m"
+ TZ: Asia/Shanghai
+ # Unique identifier for the Kafka node (required in controller mode)
+ KAFKA_CFG_NODE_ID: 0
+ # Defines the roles this Kafka node plays: broker, controller, or both
+ KAFKA_CFG_PROCESS_ROLES: controller,broker
+ # Specifies which nodes are controller nodes for quorum voting.
+ # The syntax follows the KRaft mode (no ZooKeeper): node.id@host:port
+ # The controller listener endpoint here is kafka:9093
+ KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafka:9093
+ # Specifies which listener is used for controller-to-controller communication
+ KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
+ # Default number of partitions for new topics
+ KAFKA_NUM_PARTITIONS: 8
+ # Whether to enable automatic topic creation
+ KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
+ # Kafka internal listeners; Kafka supports multiple ports with different protocols
+ # Each port is used for a specific purpose: INTERNAL for internal broker communication,
+ # CONTROLLER for controller communication, EXTERNAL for external client connections.
+ # These logical listener names are mapped to actual protocols via KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
+ # In short, Kafka is listening on three logical ports: 9092 for internal communication,
+ # 9093 for controller traffic, and 9094 for external access.
+ KAFKA_CFG_LISTENERS: "INTERNAL://:9092,CONTROLLER://:9093,EXTERNAL://:9094"
+ # Addresses advertised to clients. INTERNAL://kafka:9092 uses the internal Docker service name 'kafka',
+ # so other containers can access Kafka via kafka:9092.
+ # EXTERNAL://localhost:19094 is the address external clients (e.g., in the LAN) should use to connect.
+ # If Kafka is deployed on a different machine than IM, 'localhost' should be replaced with the LAN IP.
+ KAFKA_CFG_ADVERTISED_LISTENERS: "INTERNAL://kafka:9092,EXTERNAL://localhost:19094"
+ # Maps logical listener names to actual protocols.
+ # Supported protocols include: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL
+ KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT"
+ # Defines which listener is used for inter-broker communication within the Kafka cluster
+ KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INTERNAL"
+
+ # Authentication configuration variables - comment out to disable auth
+ # KAFKA_USERNAME: "openIM"
+ # KAFKA_PASSWORD: "openIM123"
+ command: >
+ /bin/sh -c '
+ if [ -n "$${KAFKA_USERNAME}" ] && [ -n "$${KAFKA_PASSWORD}" ]; then
+ echo "=== Kafka SASL Authentication ENABLED ==="
+ echo "Username: $${KAFKA_USERNAME}"
+
+ # Set environment variables for SASL authentication
+ export KAFKA_CFG_LISTENERS="SASL_PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094"
+ export KAFKA_CFG_ADVERTISED_LISTENERS="SASL_PLAINTEXT://kafka:9092,EXTERNAL://localhost:19094"
+ export KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP="CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT"
+ export KAFKA_CFG_SASL_ENABLED_MECHANISMS="PLAIN"
+ export KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL="PLAIN"
+ export KAFKA_CFG_INTER_BROKER_LISTENER_NAME="SASL_PLAINTEXT"
+ export KAFKA_CLIENT_USERS="$${KAFKA_USERNAME}"
+ export KAFKA_CLIENT_PASSWORDS="$${KAFKA_PASSWORD}"
+ fi
+
+ # Start Kafka with the configured environment
+ exec /opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
+ '
+ networks:
+ - openim
+
+ minio:
+ image: "${MINIO_IMAGE}"
+ ports:
+ - "10005:9000"
+ - "19090:9090"
+ container_name: minio
+ volumes:
+ - "${DATA_DIR}/components/mnt/data:/data"
+ - "${DATA_DIR}/components/mnt/config:/root/.minio"
+ environment:
+ TZ: Asia/Shanghai
+ MINIO_ROOT_USER: root
+ MINIO_ROOT_PASSWORD: openIM123
+ restart: always
+ command: minio server /data --console-address ':9090'
+ networks:
+ - openim
+
+ openim-web-front:
+ image: ${OPENIM_WEB_FRONT_IMAGE}
+ container_name: openim-web-front
+ restart: always
+ ports:
+ - "11001:80"
+ networks:
+ - openim
+
+ # openim-admin-front:
+ # image: ${OPENIM_ADMIN_FRONT_IMAGE}
+ # container_name: openim-admin-front
+ # restart: always
+ # ports:
+ # - "11002:80"
+ # networks:
+ # - openim
+
+ prometheus:
+ image: ${PROMETHEUS_IMAGE}
+ container_name: prometheus
+ restart: always
+ user: root
+ profiles:
+ - m
+ volumes:
+ - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
+ - ./config/instance-down-rules.yml:/etc/prometheus/instance-down-rules.yml:ro
+ - ${DATA_DIR}/components/prometheus/data:/prometheus
+ command:
+ - "--config.file=/etc/prometheus/prometheus.yml"
+ - "--storage.tsdb.path=/prometheus"
+ - "--web.listen-address=:${PROMETHEUS_PORT}"
+ network_mode: host
+
+ alertmanager:
+ image: ${ALERTMANAGER_IMAGE}
+ container_name: alertmanager
+ restart: always
+ profiles:
+ - m
+ volumes:
+ - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
+ - ./config/email.tmpl:/etc/alertmanager/email.tmpl
+ command:
+ - "--config.file=/etc/alertmanager/alertmanager.yml"
+ - "--web.listen-address=:${ALERTMANAGER_PORT}"
+ network_mode: host
+
+ grafana:
+ image: ${GRAFANA_IMAGE}
+ container_name: grafana
+ user: root
+ restart: always
+ profiles:
+ - m
+ environment:
+ - GF_SECURITY_ALLOW_EMBEDDING=true
+ - GF_SESSION_COOKIE_SAMESITE=none
+ - GF_SESSION_COOKIE_SECURE=true
+ - GF_AUTH_ANONYMOUS_ENABLED=true
+ - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
+ - GF_SERVER_HTTP_PORT=${GRAFANA_PORT}
+ volumes:
+ - ${DATA_DIR:-./}/components/grafana:/var/lib/grafana
+ network_mode: host
+
+ node-exporter:
+ image: ${NODE_EXPORTER_IMAGE}
+ container_name: node-exporter
+ restart: always
+ profiles:
+ - m
+ volumes:
+ - /proc:/host/proc:ro
+ - /sys:/host/sys:ro
+ - /:/rootfs:ro
+ command:
+ - "--path.procfs=/host/proc"
+ - "--path.sysfs=/host/sys"
+ - "--path.rootfs=/rootfs"
+ - "--web.listen-address=:19100"
+ network_mode: host
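The Kafka listener comments above boil down to two bootstrap addresses, depending on where the client runs. A client-side properties fragment (addresses taken from this compose file; replace `localhost` with the host's LAN IP for remote clients) would look like:

```properties
# Clients running inside the compose network (other containers)
bootstrap.servers=kafka:9092

# Clients on the host or LAN, via the EXTERNAL listener published on 19094
bootstrap.servers=localhost:19094
```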
diff --git a/docs/.generated_docs b/docs/.generated_docs
new file mode 100644
index 0000000..f9b8da6
--- /dev/null
+++ b/docs/.generated_docs
@@ -0,0 +1,77 @@
+docs/.generated_docs
+
+docs/guide/en-US/cmd/openim/openim.md
+docs/guide/en-US/cmd/openim/openim_color.md
+docs/guide/en-US/cmd/openim/openim_completion.md
+docs/guide/en-US/cmd/openim/openim_info.md
+docs/guide/en-US/cmd/openim/openim_jwt.md
+docs/guide/en-US/cmd/openim/openim_jwt_show.md
+docs/guide/en-US/cmd/openim/openim_jwt_sign.md
+docs/guide/en-US/cmd/openim/openim_jwt_verify.md
+docs/guide/en-US/cmd/openim/openim_new.md
+docs/guide/en-US/cmd/openim/openim_options.md
+docs/guide/en-US/cmd/openim/openim_policy.md
+docs/guide/en-US/cmd/openim/openim_policy_create.md
+docs/guide/en-US/cmd/openim/openim_policy_delete.md
+docs/guide/en-US/cmd/openim/openim_policy_get.md
+docs/guide/en-US/cmd/openim/openim_policy_list.md
+docs/guide/en-US/cmd/openim/openim_policy_update.md
+docs/guide/en-US/cmd/openim/openim_secret.md
+docs/guide/en-US/cmd/openim/openim_secret_create.md
+docs/guide/en-US/cmd/openim/openim_secret_delete.md
+docs/guide/en-US/cmd/openim/openim_secret_get.md
+docs/guide/en-US/cmd/openim/openim_secret_list.md
+docs/guide/en-US/cmd/openim/openim_secret_update.md
+docs/guide/en-US/cmd/openim/openim_set.md
+docs/guide/en-US/cmd/openim/openim-rpc-user.md
+docs/guide/en-US/cmd/openim/openim-rpc-user_create.md
+docs/guide/en-US/cmd/openim/openim-rpc-user_delete.md
+docs/guide/en-US/cmd/openim/openim-rpc-user_get.md
+docs/guide/en-US/cmd/openim/openim-rpc-user_list.md
+docs/guide/en-US/cmd/openim/openim-rpc-user_update.md
+docs/guide/en-US/cmd/openim/openim_validate.md
+docs/guide/en-US/cmd/openim/openim_version.md
+docs/guide/en-US/yaml/openim/config.yaml
+docs/guide/en-US/yaml/openim/openim_color.yaml
+docs/guide/en-US/yaml/openim/openim_completion.yaml
+docs/guide/en-US/yaml/openim/openim_info.yaml
+docs/guide/en-US/yaml/openim/openim_jwt.yaml
+docs/guide/en-US/yaml/openim/openim_new.yaml
+docs/guide/en-US/yaml/openim/openim_options.yaml
+docs/guide/en-US/yaml/openim/openim_policy.yaml
+docs/guide/en-US/yaml/openim/openim_secret.yaml
+docs/guide/en-US/yaml/openim/openim_set.yaml
+docs/guide/en-US/yaml/openim/openim-rpc-user.yaml
+docs/guide/en-US/yaml/openim/openim_validate.yaml
+docs/guide/en-US/yaml/openim/openim_version.yaml
+docs/man/man1/openim-color.1
+docs/man/man1/openim-completion.1
+docs/man/man1/openim-info.1
+docs/man/man1/openim-jwt-show.1
+docs/man/man1/openim-jwt-sign.1
+docs/man/man1/openim-jwt-verify.1
+docs/man/man1/openim-jwt.1
+docs/man/man1/openim-new.1
+docs/man/man1/openim-options.1
+docs/man/man1/openim-policy-create.1
+docs/man/man1/openim-policy-delete.1
+docs/man/man1/openim-policy-get.1
+docs/man/man1/openim-policy-list.1
+docs/man/man1/openim-policy-update.1
+docs/man/man1/openim-policy.1
+docs/man/man1/openim-secret-create.1
+docs/man/man1/openim-secret-delete.1
+docs/man/man1/openim-secret-get.1
+docs/man/man1/openim-secret-list.1
+docs/man/man1/openim-secret-update.1
+docs/man/man1/openim-secret.1
+docs/man/man1/openim-set.1
+docs/man/man1/openim-user-create.1
+docs/man/man1/openim-user-delete.1
+docs/man/man1/openim-user-get.1
+docs/man/man1/openim-user-list.1
+docs/man/man1/openim-user-update.1
+docs/man/man1/openim-user.1
+docs/man/man1/openim-validate.1
+docs/man/man1/openim-version.1
+docs/man/man1/openim.1
diff --git a/docs/CODEOWNERS b/docs/CODEOWNERS
new file mode 100644
index 0000000..5c0d904
--- /dev/null
+++ b/docs/CODEOWNERS
@@ -0,0 +1,4 @@
+# CODEOWNERS file
+# This file is used to specify the individuals who are required to review changes in this repository.
+
+* @Bloomingg @FGadvancer @skiffer-git @withchao
\ No newline at end of file
diff --git a/docs/K8s服务发现Bug最终修复方案.md b/docs/K8s服务发现Bug最终修复方案.md
new file mode 100644
index 0000000..ed9f0a8
--- /dev/null
+++ b/docs/K8s服务发现Bug最终修复方案.md
@@ -0,0 +1,925 @@
+# K8s Service Discovery Bug: Final Fix (with Debug Logging)
+
+## Table of Contents
+- [1. Fix Overview](#1-fix-overview)
+- [2. Complete Fixed Code](#2-complete-fixed-code)
+- [3. Debug Logging Notes](#3-debug-logging-notes)
+- [4. Testing and Verification](#4-testing-and-verification)
+- [5. Troubleshooting Guide](#5-troubleshooting-guide)
+
+---
+
+## 1. Fix Overview
+
+### 1.1 What This Fix Covers
+
+Drawing on lessons from earlier fix attempts, this fix covers the following:
+
+1. ✅ **Watch the correct resource type**: watch Endpoints instead of Pods
+2. ✅ **GetConn uses DNS**: avoids connections being forcibly closed (critical!)
+3. ✅ **GetConns uses Endpoints**: supports load balancing and automatic updates
+4. ✅ **Delayed closing of old connections**: avoids failing requests that are in flight
+5. ✅ **Health checks**: ensure connections are still valid
+6. ✅ **KeepAlive**: enables automatic reconnection
+7. ✅ **Detailed debug logging**: eases troubleshooting
+
+### 1.2 Core Principles
+
+- **GetConn → DNS**: avoids forcibly closed connections, which broke message sync and push
+- **GetConns → Endpoints**: supports load balancing and automatic updates
+- **Delayed close**: gives in-flight requests time to complete
+- **Verbose logging**: records key operations to simplify debugging
+
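+For the "GetConn → DNS" principle above, the gRPC target is the Service's cluster DNS name rather than a Pod IP, so routing stays valid across Pod restarts. The general target format (the `openim` namespace below is illustrative) is:
+
+```text
+<service-name>.<namespace>.svc.cluster.local:<port>
+# e.g. msg-rpc-service.openim.svc.cluster.local:10280
+```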
+---
+
+## 2. Complete Fixed Code
+
+### 2.1 The Complete Fixed File
+
+```go
+package kubernetes
+
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+ "sync"
+ "time"
+
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/connectivity"
+ "google.golang.org/grpc/credentials/insecure"
+ "google.golang.org/grpc/keepalive"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/informers"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "k8s.io/client-go/tools/cache"
+)
+
+type KubernetesConnManager struct {
+ clientset *kubernetes.Clientset
+ namespace string
+ dialOptions []grpc.DialOption
+
+ rpcTargets map[string]string
+ selfTarget string
+
+ mu sync.RWMutex
+ connMap map[string][]*grpc.ClientConn
+}
+
+// NewKubernetesConnManager creates a new connection manager that uses Kubernetes services for service discovery.
+func NewKubernetesConnManager(namespace string, options ...grpc.DialOption) (*KubernetesConnManager, error) {
+ log.Printf("[K8s Discovery] Initializing Kubernetes connection manager, namespace: %s", namespace)
+
+	// Obtain the in-cluster config
+ config, err := rest.InClusterConfig()
+ if err != nil {
+ log.Printf("[K8s Discovery] ERROR: Failed to create in-cluster config: %v", err)
+ return nil, fmt.Errorf("failed to create in-cluster config: %v", err)
+ }
+ log.Printf("[K8s Discovery] Successfully created in-cluster config")
+
+	// Create the Kubernetes API client
+ clientset, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ log.Printf("[K8s Discovery] ERROR: Failed to create clientset: %v", err)
+ return nil, fmt.Errorf("failed to create clientset: %v", err)
+ }
+ log.Printf("[K8s Discovery] Successfully created clientset")
+
+	// Initialize the connection manager
+ k := &KubernetesConnManager{
+ clientset: clientset,
+ namespace: namespace,
+ dialOptions: options,
+ connMap: make(map[string][]*grpc.ClientConn),
+ rpcTargets: make(map[string]string),
+ }
+
+	// Start a background goroutine that watches for Endpoints changes
+ log.Printf("[K8s Discovery] Starting Endpoints watcher")
+ go k.watchEndpoints()
+
+ log.Printf("[K8s Discovery] Kubernetes connection manager initialized successfully")
+ return k, nil
+}
+
+// initializeConns initializes all gRPC connections for the given service
+func (k *KubernetesConnManager) initializeConns(serviceName string) error {
+ log.Printf("[K8s Discovery] [%s] Starting to initialize connections", serviceName)
+
+	// Step 1: get the Service port
+ port, err := k.getServicePort(serviceName)
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to get service port: %v", serviceName, err)
+ return fmt.Errorf("failed to get service port: %w", err)
+ }
+ log.Printf("[K8s Discovery] [%s] Service port: %d", serviceName, port)
+
+	// Step 2: get the Endpoints object for the Service
+ endpoints, err := k.clientset.CoreV1().Endpoints(k.namespace).Get(
+ context.Background(),
+ serviceName,
+ metav1.GetOptions{},
+ )
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to get endpoints: %v", serviceName, err)
+ return fmt.Errorf("failed to get endpoints for service %s: %w", serviceName, err)
+ }
+
+	// Count the endpoint addresses
+ var totalAddresses int
+ for _, subset := range endpoints.Subsets {
+ totalAddresses += len(subset.Addresses)
+ }
+ log.Printf("[K8s Discovery] [%s] Found %d endpoint addresses", serviceName, totalAddresses)
+
+	// Step 3: create a gRPC connection for each Pod IP
+ var newConns []*grpc.ClientConn
+ var newTargets []string
+ var failedTargets []string
+
+ for _, subset := range endpoints.Subsets {
+ for _, address := range subset.Addresses {
+ target := fmt.Sprintf("%s:%d", address.IP, port)
+ log.Printf("[K8s Discovery] [%s] Creating connection to %s", serviceName, target)
+
+			// Create the gRPC connection with client-side KeepAlive configured
+ conn, err := grpc.Dial(
+ target,
+ append(k.dialOptions,
+ grpc.WithTransportCredentials(insecure.NewCredentials()),
+ grpc.WithKeepaliveParams(keepalive.ClientParameters{
+ Time: 10 * time.Second,
+ Timeout: 3 * time.Second,
+ PermitWithoutStream: true,
+ }),
+ )...,
+ )
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to dial %s: %v", serviceName, target, err)
+ failedTargets = append(failedTargets, target)
+				// On dial failure, close the connections created so far and return an error
+ for _, c := range newConns {
+ _ = c.Close()
+ }
+ return fmt.Errorf("failed to dial endpoint %s: %w", target, err)
+ }
+
+			// Check the connection state
+ state := conn.GetState()
+ log.Printf("[K8s Discovery] [%s] Connection to %s created, state: %v", serviceName, target, state)
+
+ newConns = append(newConns, conn)
+ newTargets = append(newTargets, target)
+ }
+ }
+
+ if len(failedTargets) > 0 {
+ log.Printf("[K8s Discovery] [%s] WARNING: Failed to connect to %d targets: %v", serviceName, len(failedTargets), failedTargets)
+ }
+
+ log.Printf("[K8s Discovery] [%s] Successfully created %d connections", serviceName, len(newConns))
+
+	// Step 4: capture the old connections so they can be closed later
+ k.mu.Lock()
+ oldConns, exists := k.connMap[serviceName]
+ var oldConnCount int
+ if exists {
+ oldConnCount = len(oldConns)
+ log.Printf("[K8s Discovery] [%s] Found %d old connections to close", serviceName, oldConnCount)
+ }
+
+	// Step 5: swap in the new connections immediately so new requests use them
+ k.connMap[serviceName] = newConns
+ k.mu.Unlock()
+
+ log.Printf("[K8s Discovery] [%s] Connection map updated: %d old -> %d new", serviceName, oldConnCount, len(newConns))
+
+	// Step 6: close the old connections after a delay, giving in-flight requests time to complete
+ if exists && len(oldConns) > 0 {
+ log.Printf("[K8s Discovery] [%s] Scheduling delayed close for %d old connections (5 seconds delay)", serviceName, len(oldConns))
+ go func() {
+			// Wait 5 seconds so in-flight requests can finish
+ time.Sleep(5 * time.Second)
+ log.Printf("[K8s Discovery] [%s] Closing %d old connections", serviceName, len(oldConns))
+ closedCount := 0
+ for _, oldConn := range oldConns {
+ if err := oldConn.Close(); err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to close old connection: %v", serviceName, err)
+ } else {
+ closedCount++
+ }
+ }
+ log.Printf("[K8s Discovery] [%s] Closed %d/%d old connections", serviceName, closedCount, len(oldConns))
+ }()
+ }
+
+ log.Printf("[K8s Discovery] [%s] Connection initialization completed successfully", serviceName)
+ return nil
+}
+
+// GetConns returns gRPC client connections for a given Kubernetes service name.
+func (k *KubernetesConnManager) GetConns(ctx context.Context, serviceName string, opts ...grpc.DialOption) ([]*grpc.ClientConn, error) {
+ log.Printf("[K8s Discovery] [%s] GetConns called", serviceName)
+
+	// Step 1: first cache lookup (read lock)
+ k.mu.RLock()
+ conns, exists := k.connMap[serviceName]
+ k.mu.RUnlock()
+
+	// Step 2: if cached connections exist, check their health
+ if exists {
+ log.Printf("[K8s Discovery] [%s] Found %d connections in cache, checking health", serviceName, len(conns))
+
+		// Check connection health
+ validConns := k.filterValidConns(serviceName, conns)
+
+		// If valid connections remain, update the cache and return them
+ if len(validConns) > 0 {
+			// If the number of valid connections dropped, update the cache
+ if len(validConns) < len(conns) {
+ log.Printf("[K8s Discovery] [%s] Removed %d invalid connections, %d valid connections remaining",
+ serviceName, len(conns)-len(validConns), len(validConns))
+ k.mu.Lock()
+ k.connMap[serviceName] = validConns
+ k.mu.Unlock()
+ } else {
+ log.Printf("[K8s Discovery] [%s] All %d connections are healthy", serviceName, len(validConns))
+ }
+ return validConns, nil
+ }
+
+		// All connections are invalid: clear the cache and reinitialize
+ log.Printf("[K8s Discovery] [%s] All connections are invalid, clearing cache and reinitializing", serviceName)
+ k.mu.Lock()
+ delete(k.connMap, serviceName)
+ k.mu.Unlock()
+ } else {
+ log.Printf("[K8s Discovery] [%s] No connections in cache, initializing", serviceName)
+ }
+
+	// Step 3: nothing cached or all connections invalid, reinitialize
+ k.mu.Lock()
+	// Double-check after acquiring the write lock to avoid duplicate initialization
+ conns, exists = k.connMap[serviceName]
+ if exists {
+ log.Printf("[K8s Discovery] [%s] Connections were initialized by another goroutine", serviceName)
+ k.mu.Unlock()
+ return conns, nil
+ }
+ k.mu.Unlock()
+
+	// Initialize new connections
+ log.Printf("[K8s Discovery] [%s] Initializing new connections", serviceName)
+ if err := k.initializeConns(serviceName); err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to initialize connections: %v", serviceName, err)
+ return nil, fmt.Errorf("failed to initialize connections for service %s: %w", serviceName, err)
+ }
+
+	// Return the newly initialized connections
+ k.mu.RLock()
+ conns = k.connMap[serviceName]
+ k.mu.RUnlock()
+
+ log.Printf("[K8s Discovery] [%s] Returning %d connections", serviceName, len(conns))
+ return conns, nil
+}
+
+// filterValidConns filters out the valid connections
+func (k *KubernetesConnManager) filterValidConns(serviceName string, conns []*grpc.ClientConn) []*grpc.ClientConn {
+ validConns := make([]*grpc.ClientConn, 0, len(conns))
+ invalidStates := make(map[connectivity.State]int)
+
+ for i, conn := range conns {
+ state := conn.GetState()
+
+		// Keep only connections in the Ready or Idle states
+ if state == connectivity.Ready || state == connectivity.Idle {
+ validConns = append(validConns, conn)
+ } else {
+ invalidStates[state]++
+ log.Printf("[K8s Discovery] [%s] Connection #%d is invalid, state: %v, closing", serviceName, i, state)
+			// The connection is invalid; close it
+ if err := conn.Close(); err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to close invalid connection #%d: %v", serviceName, i, err)
+ }
+ }
+ }
+
+ if len(invalidStates) > 0 {
+ log.Printf("[K8s Discovery] [%s] Invalid connection states: %v", serviceName, invalidStates)
+ }
+
+ return validConns
+}
+
+// GetConn returns a single gRPC client connection for a given Kubernetes service name.
+// Important: GetConn uses DNS to avoid connections being forcibly closed.
+// Rationale:
+// 1. A connection returned by GetConn may be reused for a long time.
+// 2. With direct Endpoints connections, the connection would be closed whenever the Endpoints refresh.
+// 3. That makes in-flight requests fail with: grpc: the client connection is closing
+// 4. With DNS, the gRPC client manages the connection itself and is unaffected by Endpoints refreshes.
+func (k *KubernetesConnManager) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+ log.Printf("[K8s Discovery] [%s] GetConn called (using DNS)", serviceName)
+
+ var target string
+
+	// Check whether a custom target is configured
+	if k.rpcTargets[serviceName] == "" {
+		// Get the Service port
+ svcPort, err := k.getServicePort(serviceName)
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to get service port: %v", serviceName, err)
+ return nil, err
+ }
+
+		// Build the Kubernetes DNS name
+		// Format: <service>.<namespace>.svc.cluster.local:<port>
+		// Kubernetes DNS resolves to the backend Pods and load-balances across them
+ target = fmt.Sprintf("%s.%s.svc.cluster.local:%d", serviceName, k.namespace, svcPort)
+ log.Printf("[K8s Discovery] [%s] Using DNS target: %s", serviceName, target)
+	} else {
+		// Use the custom target, if any
+ target = k.rpcTargets[serviceName]
+ log.Printf("[K8s Discovery] [%s] Using custom target: %s", serviceName, target)
+ }
+
+	// Create the gRPC connection
+ log.Printf("[K8s Discovery] [%s] Dialing DNS target: %s", serviceName, target)
+ conn, err := grpc.DialContext(
+ ctx,
+ target,
+ append([]grpc.DialOption{
+ grpc.WithTransportCredentials(insecure.NewCredentials()),
+ grpc.WithDefaultCallOptions(
+				grpc.MaxCallRecvMsgSize(1024*1024*10), // max receive message size: 10MB
+				grpc.MaxCallSendMsgSize(1024*1024*20), // max send message size: 20MB
+ ),
+			// Configure keepalive
+ grpc.WithKeepaliveParams(keepalive.ClientParameters{
+ Time: 10 * time.Second,
+ Timeout: 3 * time.Second,
+ PermitWithoutStream: true,
+ }),
+ }, k.dialOptions...)...,
+ )
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to dial DNS target %s: %v", serviceName, target, err)
+ return nil, err
+ }
+
+	// Check the connection state
+ state := conn.GetState()
+ log.Printf("[K8s Discovery] [%s] Connection created, state: %v", serviceName, state)
+
+	// If the connection is not Ready yet, wait briefly
+ if state != connectivity.Ready {
+ log.Printf("[K8s Discovery] [%s] Connection not ready, waiting for state change", serviceName)
+ ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
+ defer cancel()
+ if conn.WaitForStateChange(ctx, state) {
+ newState := conn.GetState()
+ log.Printf("[K8s Discovery] [%s] Connection state changed: %v -> %v", serviceName, state, newState)
+ } else {
+ log.Printf("[K8s Discovery] [%s] WARNING: Connection state change timeout", serviceName)
+ }
+ }
+
+ log.Printf("[K8s Discovery] [%s] GetConn completed successfully", serviceName)
+ return conn, nil
+}
+
+// watchEndpoints watches for Endpoints resource changes
+func (k *KubernetesConnManager) watchEndpoints() {
+ log.Printf("[K8s Discovery] Starting Endpoints watcher")
+
+	// Create the informer factory
+	// resyncPeriod: 10 minutes; resources are periodically resynced
+ informerFactory := informers.NewSharedInformerFactory(k.clientset, time.Minute*10)
+
+	// Create the Endpoints informer
+	// Note: this fixes the original bug:
+	// Pods were watched before, now Endpoints are watched
+ informer := informerFactory.Core().V1().Endpoints().Informer()
+ log.Printf("[K8s Discovery] Endpoints Informer created")
+
+	// Register event handlers
+ informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+		// AddFunc: triggered when a new Endpoints resource is created
+ AddFunc: func(obj interface{}) {
+ endpoint := obj.(*v1.Endpoints)
+ log.Printf("[K8s Discovery] [Watcher] Endpoints ADDED: %s", endpoint.Name)
+ k.handleEndpointChange(obj)
+ },
+		// UpdateFunc: triggered when an Endpoints resource is updated
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ oldEndpoint := oldObj.(*v1.Endpoints)
+ newEndpoint := newObj.(*v1.Endpoints)
+
+			// Check whether anything actually changed
+ if k.endpointsChanged(oldEndpoint, newEndpoint) {
+ log.Printf("[K8s Discovery] [Watcher] Endpoints UPDATED: %s", newEndpoint.Name)
+ k.handleEndpointChange(newObj)
+ } else {
+ log.Printf("[K8s Discovery] [Watcher] Endpoints %s updated but no meaningful change", newEndpoint.Name)
+ }
+ },
+		// DeleteFunc: triggered when an Endpoints resource is deleted
+ DeleteFunc: func(obj interface{}) {
+ endpoint := obj.(*v1.Endpoints)
+ log.Printf("[K8s Discovery] [Watcher] Endpoints DELETED: %s", endpoint.Name)
+ k.handleEndpointChange(obj)
+ },
+ })
+
+	// Start the informer
+ log.Printf("[K8s Discovery] Starting Informer factory")
+ informerFactory.Start(context.Background().Done())
+
+	// Wait for the informer cache to sync
+ log.Printf("[K8s Discovery] Waiting for Informer cache to sync")
+ if !cache.WaitForCacheSync(context.Background().Done(), informer.HasSynced) {
+ log.Printf("[K8s Discovery] ERROR: Failed to sync Informer cache")
+ return
+ }
+ log.Printf("[K8s Discovery] Informer cache synced successfully")
+
+	// Block until the program exits
+ log.Printf("[K8s Discovery] Endpoints watcher is running")
+ <-context.Background().Done()
+ log.Printf("[K8s Discovery] Endpoints watcher stopped")
+}
+
+// endpointsChanged reports whether the Endpoints actually changed
+func (k *KubernetesConnManager) endpointsChanged(oldEp, newEp *v1.Endpoints) bool {
+	// Compare the address lists (oldEp/newEp avoid shadowing the builtin `new`)
+	oldAddresses := make(map[string]bool)
+	for _, subset := range oldEp.Subsets {
+		for _, address := range subset.Addresses {
+			oldAddresses[address.IP] = true
+		}
+	}
+
+	newAddresses := make(map[string]bool)
+	for _, subset := range newEp.Subsets {
+		for _, address := range subset.Addresses {
+			newAddresses[address.IP] = true
+		}
+	}
+
+	// Compare the counts
+	if len(oldAddresses) != len(newAddresses) {
+		return true
+	}
+
+	// Compare the contents
+	for ip := range oldAddresses {
+		if !newAddresses[ip] {
+			return true
+		}
+	}
+
+	return false
+}
+
+// handleEndpointChange handles Endpoints resource changes
+func (k *KubernetesConnManager) handleEndpointChange(obj interface{}) {
+	// Type assertion
+	endpoint, ok := obj.(*v1.Endpoints)
+	if !ok {
+		// The type assertion failed; log it but do not abort
+		log.Printf("[K8s Discovery] [Watcher] ERROR: Expected *v1.Endpoints, got %T", obj)
+		return
+	}
+
+ serviceName := endpoint.Name
+ log.Printf("[K8s Discovery] [Watcher] Handling Endpoints change for service: %s", serviceName)
+
+	// Collect Endpoints statistics
+ var totalAddresses int
+ for _, subset := range endpoint.Subsets {
+ totalAddresses += len(subset.Addresses)
+ }
+ log.Printf("[K8s Discovery] [Watcher] Service %s has %d endpoint addresses", serviceName, totalAddresses)
+
+	// Reinitialize the connections
+ if err := k.initializeConns(serviceName); err != nil {
+		// Initialization failed; log the error but do not abort
+ log.Printf("[K8s Discovery] [Watcher] ERROR: Failed to initialize connections for %s: %v", serviceName, err)
+ } else {
+ log.Printf("[K8s Discovery] [Watcher] Successfully updated connections for %s", serviceName)
+ }
+}
+
+// getServicePort returns the RPC port of the Service
+func (k *KubernetesConnManager) getServicePort(serviceName string) (int32, error) {
+ log.Printf("[K8s Discovery] [%s] Getting service port", serviceName)
+
+ svc, err := k.clientset.CoreV1().Services(k.namespace).Get(
+ context.Background(),
+ serviceName,
+ metav1.GetOptions{},
+ )
+ if err != nil {
+ log.Printf("[K8s Discovery] [%s] ERROR: Failed to get service: %v", serviceName, err)
+ return 0, fmt.Errorf("failed to get service %s: %w", serviceName, err)
+ }
+
+ if len(svc.Spec.Ports) == 0 {
+ log.Printf("[K8s Discovery] [%s] ERROR: Service has no ports defined", serviceName)
+ return 0, fmt.Errorf("service %s has no ports defined", serviceName)
+ }
+
+	// Find the RPC port (any port other than 10001)
+ var svcPort int32
+ for _, port := range svc.Spec.Ports {
+ if port.Port != 10001 {
+ svcPort = port.Port
+ break
+ }
+ }
+
+ if svcPort == 0 {
+ log.Printf("[K8s Discovery] [%s] ERROR: Service has no RPC port (all ports are 10001)", serviceName)
+ return 0, fmt.Errorf("service %s has no RPC port (all ports are 10001)", serviceName)
+ }
+
+ log.Printf("[K8s Discovery] [%s] Service port: %d", serviceName, svcPort)
+ return svcPort, nil
+}
+
+// Close closes all connections
+func (k *KubernetesConnManager) Close() {
+ log.Printf("[K8s Discovery] Closing all connections")
+
+ k.mu.Lock()
+ defer k.mu.Unlock()
+
+ totalConns := 0
+ for serviceName, conns := range k.connMap {
+ log.Printf("[K8s Discovery] Closing %d connections for service %s", len(conns), serviceName)
+ for i, conn := range conns {
+ if err := conn.Close(); err != nil {
+ log.Printf("[K8s Discovery] ERROR: Failed to close connection #%d for service %s: %v", i, serviceName, err)
+ }
+ }
+ totalConns += len(conns)
+ }
+
+ log.Printf("[K8s Discovery] Closed %d total connections", totalConns)
+ k.connMap = make(map[string][]*grpc.ClientConn)
+}
+
+// GetSelfConnTarget returns the connection target for the current service.
+func (k *KubernetesConnManager) GetSelfConnTarget() string {
+	if k.selfTarget == "" {
+		hostName := os.Getenv("HOSTNAME")
+		log.Printf("[K8s Discovery] Getting self connection target, HOSTNAME: %s", hostName)
+
+		// Poll until the pod is fetched successfully and has an IP assigned.
+		// The previous version dereferenced a nil pod when the initial Get failed.
+		var pod *v1.Pod
+		for {
+			p, err := k.clientset.CoreV1().Pods(k.namespace).Get(context.Background(), hostName, metav1.GetOptions{})
+			if err != nil {
+				log.Printf("[K8s Discovery] ERROR: Failed to get pod %s: %v", hostName, err)
+			} else if p.Status.PodIP != "" {
+				pod = p
+				break
+			} else {
+				log.Printf("[K8s Discovery] Waiting for pod %s IP to be assigned", hostName)
+			}
+			time.Sleep(3 * time.Second)
+		}
+
+		var selfPort int32
+		for _, port := range pod.Spec.Containers[0].Ports {
+			if port.ContainerPort != 10001 {
+				selfPort = port.ContainerPort
+				break
+			}
+		}
+
+		k.selfTarget = fmt.Sprintf("%s:%d", pod.Status.PodIP, selfPort)
+		log.Printf("[K8s Discovery] Self connection target: %s", k.selfTarget)
+	}
+
+	return k.selfTarget
+}
+
+// AddOption appends gRPC dial options to the existing options.
+func (k *KubernetesConnManager) AddOption(opts ...grpc.DialOption) {
+ k.mu.Lock()
+ defer k.mu.Unlock()
+ k.dialOptions = append(k.dialOptions, opts...)
+ log.Printf("[K8s Discovery] Added %d dial options", len(opts))
+}
+
+// CloseConn closes a given gRPC client connection.
+func (k *KubernetesConnManager) CloseConn(conn *grpc.ClientConn) {
+ log.Printf("[K8s Discovery] Closing single connection")
+ conn.Close()
+}
+
+func (k *KubernetesConnManager) Register(serviceName, host string, port int, opts ...grpc.DialOption) error {
+	// Registration is not needed in a Kubernetes environment; return nil
+ return nil
+}
+
+func (k *KubernetesConnManager) UnRegister() error {
+	// Deregistration is not needed in a Kubernetes environment; return nil
+ return nil
+}
+
+func (k *KubernetesConnManager) GetUserIdHashGatewayHost(ctx context.Context, userId string) (string, error) {
+	// Not supported in a Kubernetes environment; return empty
+ return "", nil
+}
+```
+
+---
+
+## 3. Debug Logging
+
+### 3.1 Log Format
+
+Every log line carries the prefix `[K8s Discovery]`, making the output easy to filter and search.
+
+### 3.2 Log Levels
+
+- **INFO**: normal operation flow
+- **WARNING**: noteworthy, but does not affect functionality
+- **ERROR**: errors that need attention
+
+### 3.3 Key Log Points
+
+#### 3.3.1 Initialization Logs
+
+```
+[K8s Discovery] Initializing Kubernetes connection manager, namespace: default
+[K8s Discovery] Successfully created in-cluster config
+[K8s Discovery] Successfully created clientset
+[K8s Discovery] Starting Endpoints watcher
+[K8s Discovery] Kubernetes connection manager initialized successfully
+```
+
+#### 3.3.2 Connection Initialization Logs
+
+```
+[K8s Discovery] [user-rpc-service] Starting to initialize connections
+[K8s Discovery] [user-rpc-service] Service port: 10320
+[K8s Discovery] [user-rpc-service] Found 3 endpoint addresses
+[K8s Discovery] [user-rpc-service] Creating connection to 10.244.1.5:10320
+[K8s Discovery] [user-rpc-service] Connection to 10.244.1.5:10320 created, state: Connecting
+[K8s Discovery] [user-rpc-service] Successfully created 3 connections
+[K8s Discovery] [user-rpc-service] Found 2 old connections to close
+[K8s Discovery] [user-rpc-service] Connection map updated: 2 old -> 3 new
+[K8s Discovery] [user-rpc-service] Scheduling delayed close for 2 old connections (5 seconds delay)
+[K8s Discovery] [user-rpc-service] Connection initialization completed successfully
+```
+
+#### 3.3.3 Endpoints Watcher Logs
+
+```
+[K8s Discovery] [Watcher] Endpoints UPDATED: user-rpc-service
+[K8s Discovery] [Watcher] Service user-rpc-service has 3 endpoint addresses
+[K8s Discovery] [Watcher] Handling Endpoints change for service: user-rpc-service
+[K8s Discovery] [Watcher] Successfully updated connections for user-rpc-service
+```
+
+#### 3.3.4 Connection Health Check Logs
+
+```
+[K8s Discovery] [user-rpc-service] GetConns called
+[K8s Discovery] [user-rpc-service] Found 3 connections in cache, checking health
+[K8s Discovery] [user-rpc-service] Connection #1 is invalid, state: Shutdown, closing
+[K8s Discovery] [user-rpc-service] Removed 1 invalid connections, 2 valid connections remaining
+[K8s Discovery] [user-rpc-service] Returning 2 connections
+```
+
+#### 3.3.5 GetConn Logs
+
+```
+[K8s Discovery] [msg-rpc-service] GetConn called (using DNS)
+[K8s Discovery] [msg-rpc-service] Using DNS target: msg-rpc-service.default.svc.cluster.local:10280
+[K8s Discovery] [msg-rpc-service] Dialing DNS target: msg-rpc-service.default.svc.cluster.local:10280
+[K8s Discovery] [msg-rpc-service] Connection created, state: Ready
+[K8s Discovery] [msg-rpc-service] GetConn completed successfully
+```
+
+---
+
+## 4. Testing and Verification
+
+### 4.1 Test Scenario 1: Pod Rebuild
+
+**Steps**:
+```bash
+# 1. Watch the logs
+kubectl logs -f | grep "K8s Discovery"
+
+# 2. Trigger a Pod rebuild
+kubectl delete pod
+
+# 3. Observe the log output
+# You should see:
+# - an Endpoints UPDATED event
+# - connections being reinitialized
+# - old connections being closed after a delay
+```
+
+**Expected logs**:
+```
+[K8s Discovery] [Watcher] Endpoints UPDATED: user-rpc-service
+[K8s Discovery] [user-rpc-service] Starting to initialize connections
+[K8s Discovery] [user-rpc-service] Found 2 old connections to close
+[K8s Discovery] [user-rpc-service] Scheduling delayed close for 2 old connections (5 seconds delay)
+[K8s Discovery] [user-rpc-service] Connection initialization completed successfully
+[K8s Discovery] [user-rpc-service] Closing 2 old connections
+[K8s Discovery] [user-rpc-service] Closed 2/2 old connections
+```
+
+### 4.2 Test Scenario 2: Message Sync and Push
+
+**Steps**:
+```bash
+# 1. Send a message
+# 2. Trigger a Pod rebuild
+kubectl delete pod
+
+# 3. Verify that message sync and push do not fail
+# 4. Check the logs to confirm the connections are healthy
+```
+
+**Expected results**:
+- ✅ Message sync does not fail (GetConn uses DNS and is unaffected)
+- ✅ Message push does not fail (GetConns updates connections automatically)
+- ✅ Logs show the connections recovering automatically
+
+### 4.3 Test Scenario 3: Connection Health Check
+
+**Steps**:
+```bash
+# 1. Inspect the current connection state
+kubectl logs | grep "checking health"
+
+# 2. Simulate failed connections (scale the target service down)
+kubectl scale deployment --replicas=0
+
+# 3. Check the logs after a short while
+# Connections should be marked invalid and removed
+```
+
+**Expected logs**:
+```
+[K8s Discovery] [user-rpc-service] GetConns called
+[K8s Discovery] [user-rpc-service] Found 3 connections in cache, checking health
+[K8s Discovery] [user-rpc-service] Connection #0 is invalid, state: TransientFailure, closing
+[K8s Discovery] [user-rpc-service] Connection #1 is invalid, state: TransientFailure, closing
+[K8s Discovery] [user-rpc-service] Connection #2 is invalid, state: TransientFailure, closing
+[K8s Discovery] [user-rpc-service] All connections are invalid, clearing cache and reinitializing
+```
+
+---
+
+## 5. Troubleshooting Guide
+
+### 5.1 Common Issues
+
+#### Issue 1: Connection initialization fails
+
+**Symptom**:
+```
+[K8s Discovery] [user-rpc-service] ERROR: Failed to dial endpoint 10.244.1.5:10320: connection refused
+```
+
+**Possible causes**:
+- The Pod is not ready yet
+- Network problems
+- Misconfigured port
+
+**How to investigate**:
+1. Check the Pod status: `kubectl get pods`
+2. Check the Service configuration: `kubectl get svc user-rpc-service -o yaml`
+3. Check the Endpoints: `kubectl get endpoints user-rpc-service`
+
+#### Issue 2: The Endpoints watcher is not working
+
+**Symptoms**:
+- Connections are not updated after a Pod rebuild
+- No `[Watcher] Endpoints UPDATED` log lines appear
+
+**Possible causes**:
+- The informer was not started
+- Insufficient permissions
+- Network problems
+
+**How to investigate**:
+1. Check the logs for `[K8s Discovery] Starting Endpoints watcher`
+2. Check the ServiceAccount permissions
+3. Inspect the Endpoints manually: `kubectl get endpoints -w`
+
+#### Issue 3: Connections are forcibly closed
+
+**Symptom**:
+```
+grpc: the client connection is closing
+```
+
+**Possible causes**:
+- GetConn connected directly to Endpoints (incorrect)
+- Old connections were closed immediately
+
+**How to investigate**:
+1. Check the logs for `[K8s Discovery] [xxx] GetConn called (using DNS)`
+2. Confirm that GetConn uses DNS rather than Endpoints
+3. Verify that the delayed close takes effect
+
+#### Issue 4: Connection leaks
+
+**Symptoms**:
+- The connection count keeps growing
+- Memory usage keeps growing
+
+**Possible causes**:
+- Old connections are not being closed properly
+- The delayed-close goroutine never ran
+
+**How to investigate**:
+1. Check the logs for `[K8s Discovery] [xxx] Closing X old connections`
+2. Check the delayed-close log lines
+3. Monitor how the connection count changes
+
+### 5.2 Log Filter Commands
+
+**View all K8s Discovery logs**:
+```bash
+kubectl logs | grep "K8s Discovery"
+```
+
+**View logs for a specific service**:
+```bash
+kubectl logs | grep "K8s Discovery.*user-rpc-service"
+```
+
+**View error logs**:
+```bash
+kubectl logs | grep "K8s Discovery.*ERROR"
+```
+
+**View watcher logs**:
+```bash
+kubectl logs | grep "K8s Discovery.*Watcher"
+```
+
+**Follow logs in real time**:
+```bash
+kubectl logs -f | grep "K8s Discovery"
+```
+
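Beyond grep, the fixed `[K8s Discovery]` prefix makes the lines easy to post-process. A small illustrative Python helper (the `summarize` function and its regex are not part of the project; the line format follows the examples in this guide):

```python
import re

# Matches "[K8s Discovery] [<service>] <message>" or "[K8s Discovery] <message>".
LOG_RE = re.compile(r"\[K8s Discovery\] (?:\[(?P<service>[^\]]+)\] )?(?P<message>.*)")

def summarize(lines):
    """Count log lines per service tag; untagged lines are grouped under '-'."""
    counts = {}
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue  # not a K8s Discovery line
        service = m.group("service") or "-"
        counts[service] = counts.get(service, 0) + 1
    return counts
```

Feeding it the output of `kubectl logs` gives a quick per-service overview of how chatty the discovery layer is.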
+### 5.3 Debugging Tips
+
+1. **Enable verbose logging**: raise the log level if the defaults are not enough
+2. **Monitor connection state**: periodically check the number and state of connections
+3. **Compare against Endpoints**: manually verify that the Endpoints match the connection list
+4. **Test Pod rebuilds**: trigger a Pod rebuild deliberately and watch the connections update
+
+---
+
+## 6. Summary
+
+### 6.1 Fix Highlights
+
+1. ✅ **Fixed the watched resource type**: from Pods to Endpoints
+2. ✅ **GetConn uses DNS**: prevents connections from being forcibly closed
+3. ✅ **GetConns uses Endpoints**: supports load balancing and automatic updates
+4. ✅ **Delayed close of old connections**: avoids failing in-flight requests
+5. ✅ **Added health checks**: ensures connection validity
+6. ✅ **Added keepalive**: supports automatic reconnection
+7. ✅ **Added detailed logging**: simplifies troubleshooting
+
+### 6.2 Key Improvements
+
+- **Resolved the problem seen in earlier fix attempts**: GetConn uses DNS, so connections are no longer forcibly closed
+- **Added complete debug logging**: every key operation is logged
+- **Improved error handling**: better error messages and recovery behavior
+
+### 6.3 Usage Recommendations
+
+1. **Before deployment**: thoroughly test Pod rebuild scenarios
+2. **After deployment**: monitor the logs and watch the connection updates
+3. **Troubleshooting**: use the log filter commands to locate problems quickly
+4. **Ongoing tuning**: adjust the delayed-close interval based on real-world usage
+
+---
+
+## 7. Related Files
+
+- Fixed file: `pkg/common/discovery/kubernetes/kubernetes.go`
+- Test scripts: automated tests can be written to verify the fix
+- Monitoring: Prometheus metrics can be added to track connection state
+
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..487b17e
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,66 @@
+# OpenIM Server Docs
+
+Welcome to the OpenIM Documentation hub! This center provides a comprehensive range of guides and manuals designed to help you get the most out of your OpenIM experience.
+
+## Table of Contents
+
+1. [Contrib](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib) - Guidance on contributing and configurations for developers
+2. [Conversions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib) - Coding conventions, logging policies, and related guidelines
+
+
+## Contrib
+
+This section offers developers a detailed guide on how to contribute code, set up their environment, and follow the associated processes.
+
+- [Code Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/code-conventions.md) - Rules and conventions for writing code in OpenIM.
+- [Development Guide](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/development.md) - A guide on how to carry out development within OpenIM.
+- [Git Cherry Pick](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/gitcherry-pick.md) - Guidelines on cherry-picking operations.
+- [Git Workflow](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/git-workflow.md) - The git workflow in OpenIM.
+- [Initialization Configurations](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/init-config.md) - Guidance on setting up and initializing OpenIM.
+- [Docker Installation](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/install-docker.md) - How to install Docker on your machine.
+- [Linux Development Environment](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/linux-development.md) - Guide to set up the development environment on Linux.
+- [Local Actions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/local-actions.md) - Guidelines on how to carry out certain common actions locally.
+- [Offline Deployment](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/offline-deployment.md) - Methods of deploying OpenIM offline.
+- [Protoc Tools](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/protoc-tools.md) - Guide on using protoc tools.
+- [Go Tools](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/util-go.md) - Tools and libraries in OpenIM for Go.
+- [Makefile Tools](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/util-makefile.md) - Best practices and tools for Makefile.
+- [Script Tools](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/util-scripts.md) - Best practices and tools for scripts.
+
+## Conversions
+
+This section introduces various conventions and policies within OpenIM, encompassing code, logs, versions, and more.
+
+- [API Conversions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/api.md) - Guidelines and methods for API conversions.
+- [Logging Policy](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/bash-log.md) - Logging policies and conventions in OpenIM.
+- [CI/CD Actions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/cicd-actions.md) - Procedures and conventions for CI/CD.
+- [Commit Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/commit.md) - Conventions for code commits in OpenIM.
+- [Directory Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/directory.md) - Directory structure and conventions within OpenIM.
+- [Error Codes](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/error-code.md) - List and descriptions of error codes.
+- [Go Code Conversions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/go-code.md) - Conventions and conversions for Go code.
+- [Docker Image Strategy](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md) - Management strategies for OpenIM Docker images, spanning multiple architectures and image repositories.
+- [Logging Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/logging.md) - Further detailed conventions on logging.
+- [Version Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/version.md) - Naming and management strategies for OpenIM versions.
+
+
+## For Developers, Contributors, and Community Maintainers
+
+### Developers & Contributors
+
+If you're a developer or someone keen on contributing:
+
+- Familiarize yourself with our [Code Conventions](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/code-conventions.md) and [Git Workflow](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/git-workflow.md) to ensure smooth contributions.
+- Dive into the [Development Guide](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/development.md) to get the hang of the development practices in OpenIM.
+
+### Community Maintainers
+
+As a community maintainer:
+
+- Ensure that contributions align with the standards outlined in our documentation.
+- Regularly review the [Logging Policy](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/bash-log.md) and [Error Codes](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/error-code.md) to stay updated.
+
+## For Users
+
+Users should pay particular attention to:
+
+- [Docker Installation](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/install-docker.md) - Necessary if you're planning to use Docker images of OpenIM.
+- [Docker Image Strategy](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md) - To understand the different images available and how to choose the right one for your architecture.
\ No newline at end of file
diff --git a/docs/SuperGroup使用限制分析.md b/docs/SuperGroup使用限制分析.md
new file mode 100644
index 0000000..29b1667
--- /dev/null
+++ b/docs/SuperGroup使用限制分析.md
@@ -0,0 +1,183 @@
+# SuperGroup (GroupType=1) Usage Restriction Analysis
+
+## 1. Core Question
+
+If the creation restriction is bypassed and a group with `GroupType=1` (SuperGroup) is created, the following **key differences and potential problems** arise in practice:
+
+## 2. Key Differences
+
+### 2.1 Message Validation Differences ⚠️ **Important**
+
+**Location**: `internal/rpc/msg/verify.go:187-220`
+
+**Special handling for SuperGroup (GroupType=1)**:
+```go
+if groupInfo.GroupType == constant.SuperGroup {
+    // SuperGroup skips most checks, but file permissions are still enforced
+    if data.MsgData.ContentType == constant.File {
+        // Only the file-send permission is checked
+    }
+    return nil // Return immediately, skipping all remaining checks
+}
+```
+
+**Full validation for WorkingGroup (GroupType=2)**:
+- ✅ Checks that the user is in the group
+- ✅ Checks file-send permission (owner/admin/userType=1)
+- ✅ Checks link-send permission (userType=1/owner/admin may send links)
+- ✅ Checks QR-code images (ordinary members with userType=0 may not send images containing QR codes)
+- ✅ Checks mute status (MuteEndTime)
+- ✅ Checks whether the group is muted (GroupStatusMuted)
+
+**Summary of differences**:
+| Check | SuperGroup | WorkingGroup |
+|--------|-----------|--------------|
+| File-send permission | ✅ checked | ✅ checked |
+| Link detection | ❌ **skipped** | ✅ checked |
+| QR-code detection | ❌ **skipped** | ✅ checked |
+| Mute check | ❌ **skipped** | ✅ checked |
+| Group-mute check | ❌ **skipped** | ✅ checked |
+
+**⚠️ Risks**:
+- In a SuperGroup, messages containing links can be sent even with userType=0
+- In a SuperGroup, images containing QR codes can be sent even with userType=0
+- Muted SuperGroup members can still send messages
+- Even when a SuperGroup itself is muted, members can still send messages
+
+### 2.2 Message Push Mechanism
+
+**Location**: `internal/push/push_handler.go:104-116`
+
+**Push logic**:
+```go
+switch msgFromMQ.MsgData.SessionType {
+case constant.ReadGroupChatType:
+    err = c.Push2Group(ctx, msgFromMQ.MsgData.GroupID, msgFromMQ.MsgData)
+default:
+    err = c.Push2User(ctx, pushUserIDList, msgFromMQ.MsgData)
+}
+```
+
+**Key findings**:
+- The push logic does **not depend on GroupType**; it depends on **SessionType**
+- `SessionType` is set by the client when sending a message; it is **not derived automatically from GroupType**
+- If the client sends with `SessionType != ReadGroupChatType`, the message goes through `Push2User` instead of `Push2Group`
+
+**⚠️ Potential problems**:
+- If a group with `GroupType=1` is created but the client sends with `SessionType=WriteGroupChatType` (value 2), the message is pushed as if it were a one-to-one chat, which may break push behavior
+- `Push2Group` and `Push2User` push differently, which can affect message fan-out
+
+### 2.3 Conversation ID Generation Differences
+
+**Location**: `pkg/msgprocessor/conversation.go:68-97`
+
+**Conversation ID rules**:
+```go
+case constant.WriteGroupChatType:
+    return "g_" + msg.GroupID // ordinary group chat
+case constant.ReadGroupChatType:
+    return "sg_" + msg.GroupID // super group chat
+```
+
+**Difference**:
+- `WriteGroupChatType` (value 2): conversation IDs use the `g_` prefix
+- `ReadGroupChatType` (value 3): conversation IDs use the `sg_` prefix
+
+**⚠️ Potential problem**:
+- If `GroupType=1` but `SessionType=WriteGroupChatType` is used, the conversation ID becomes `g_xxx` instead of `sg_xxx`
+- This can lead to inconsistent conversation management
+
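The inconsistency above follows directly from the rule: the same group ID yields two different conversation IDs depending on SessionType. A minimal Python sketch (constants taken from this document; the helper itself is illustrative, not part of the server code):

```python
# SessionType values as documented above.
WRITE_GROUP_CHAT_TYPE = 2  # ordinary group chat -> "g_" prefix
READ_GROUP_CHAT_TYPE = 3   # super group chat    -> "sg_" prefix

def group_conversation_id(session_type, group_id):
    """Reproduce the conversation-ID rule from pkg/msgprocessor/conversation.go."""
    if session_type == WRITE_GROUP_CHAT_TYPE:
        return "g_" + group_id
    if session_type == READ_GROUP_CHAT_TYPE:
        return "sg_" + group_id
    raise ValueError(f"not a group session type: {session_type}")
```

Calling it with both session types for the same group ID makes the split visible: `group_conversation_id(2, "1234")` and `group_conversation_id(3, "1234")` name two different conversations.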
+### 2.4 Online Push Method
+
+**Location**: `internal/push/onlinepusher.go:99`
+
+**Push method**:
+- All group messages (regardless of GroupType) use the `SuperGroupOnlineBatchPushOneMsg` method
+- Despite the "SuperGroup" name, every group chat uses this method
+
+**Conclusion**: the push method itself is **unaffected by GroupType**, but the push logic depends on `SessionType`
+
+## 3. Usage Restrictions and Potential Errors
+
+### 3.1 The Correct SessionType Is Mandatory ⚠️ **Critical**
+
+**Problem**: if a group with `GroupType=1` is created but messages are sent with:
+- ❌ `SessionType=WriteGroupChatType` (value 2) → the message goes through `Push2User` and push behavior may break
+- ✅ `SessionType=ReadGroupChatType` (value 3) must be used → the message goes through `Push2Group` and push works correctly
+
+**Recommendation**: when sending a message, the client should set the correct `SessionType` based on the group's `GroupType`:
+- `GroupType=1` (SuperGroup) → `SessionType=ReadGroupChatType` (value 3)
+- `GroupType=2` (WorkingGroup) → `SessionType=WriteGroupChatType` (value 2) or `ReadGroupChatType` (value 3)
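This mapping can be centralized in a small client-side helper. A hypothetical sketch (constant values from this document; the function name is illustrative):

```python
# GroupType values
SUPER_GROUP = 1    # GroupType=1
WORKING_GROUP = 2  # GroupType=2

# SessionType values
WRITE_GROUP_CHAT_TYPE = 2
READ_GROUP_CHAT_TYPE = 3

def session_type_for_group(group_type):
    """Derive the SessionType to send with, based on the group's GroupType."""
    if group_type == SUPER_GROUP:
        # SuperGroup must use ReadGroupChatType
        return READ_GROUP_CHAT_TYPE
    if group_type == WORKING_GROUP:
        # WorkingGroup accepts either; ReadGroupChatType is also valid here
        return WRITE_GROUP_CHAT_TYPE
    raise ValueError(f"unknown group type: {group_type}")
```

Routing every send through such a helper prevents the `g_`/`sg_` conversation-ID split and the `Push2User` misrouting described in this section.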
+
+### 3.2 Missing Permission Checks ⚠️ **Security Risk**
+
+**Checks skipped by SuperGroup**:
+1. **Link detection**: ordinary members can send messages containing links
+2. **QR-code detection**: ordinary members can send images containing QR codes
+3. **Mute check**: muted members can still send messages
+4. **Group-mute check**: members can still send messages while the group is muted
+
+**Security impact**:
+- Could be abused to send malicious links
+- Could be abused to send QR-code spam
+- The mute feature is effectively disabled
+
+### 3.3 Inconsistent Conversation Management
+
+**Problem**:
+- If `GroupType=1` but `SessionType=WriteGroupChatType` is used, the conversation ID is `g_xxx`
+- If `SessionType=ReadGroupChatType` is used, the conversation ID is `sg_xxx`
+- The same group can therefore produce different conversation IDs, causing conversation-management confusion
+
+## 4. Summary
+
+### 4.1 Main Differences
+
+| Aspect | SuperGroup (GroupType=1) | WorkingGroup (GroupType=2) |
+|------|-------------------------|---------------------------|
+| **Message validation** | Skips most checks (links, QR codes, mutes) | Full validation |
+| **File permission** | Checked (owner/admin/userType=1) | Checked (owner/admin/userType=1) |
+| **Push mechanism** | Depends on SessionType; must use ReadGroupChatType | Depends on SessionType |
+| **Conversation ID** | Must use ReadGroupChatType to get the sg_ prefix | May use WriteGroupChatType or ReadGroupChatType |
+
+### 4.2 Usage Recommendations
+
+If creating `SuperGroup` groups is allowed, you need to:
+
+1. **Adapt the client**:
+   - When sending a message, set the correct `SessionType` automatically based on the group's `GroupType`
+   - `GroupType=1` → `SessionType=ReadGroupChatType` (value 3)
+
+2. **Consider security**:
+   - Make the intent of SuperGroup skipping permission checks explicit
+   - If security controls are needed, implement them in the business layer or via callbacks
+
+3. **Keep things consistent**:
+   - Ensure every SuperGroup code path uses `ReadGroupChatType`
+   - Unify the conversation ID generation rules
+
+### 4.3 Potential Failure Scenarios
+
+1. **Push errors**: with the wrong SessionType, messages may not reach all group members
+2. **Conversation confusion**: the same group can produce different conversation IDs
+3. **Permission bypass**: SuperGroup skips mute and other permission checks
+4. **Security risk**: links and QR codes can be sent and abused
+
+## 5. Suggested Code Changes
+
+To support SuperGroup, consider:
+
+1. **Group creation**: allow both `GroupType=1` and `GroupType=2`
+2. **Message sending**: set the `SessionType` automatically from the `GroupType`
+3. **Message validation**: make the SuperGroup-specific logic explicit and decide whether some security checks should be kept
+4. **Documentation**: spell out the differences and use cases for SuperGroup vs. WorkingGroup
+
diff --git a/docs/client-api.md b/docs/client-api.md
new file mode 100644
index 0000000..8b8171c
--- /dev/null
+++ b/docs/client-api.md
@@ -0,0 +1,1455 @@
+# OpenIM Client API Reference
+
+This document is for front-end developers integrating with the OpenIM server API.
+
+## Table of Contents
+
+- [Basics](#basics)
+- [Authentication](#authentication)
+- [Users](#users)
+- [Friends](#好友接口)
+- [Groups](#群组接口)
+- [Messages](#消息接口)
+- [Conversations](#会话接口)
+- [Third-Party Services](#第三方服务接口)
+- [Red Packets](#红包接口)
+- [Meetings](#会议接口)
+- [Wallet](#钱包接口)
+- [Error Codes](#错误码说明)
+
+---
+
+## Basics
+
+### Request Format
+
+- **Method**: all endpoints use `POST` (unless stated otherwise)
+- **Content-Type**: `application/json`
+- **Headers**: carry a `token` header for authentication (except allowlisted endpoints)
+
+```http
+POST /api/user/get_users_info
+Content-Type: application/json
+token: your_token_here
+```
+
+### Response Format
+
+All endpoints return the following envelope:
+
+```json
+{
+  "errCode": 0,     // error code, 0 means success
+  "errMsg": "",     // error message
+  "errDlt": "",     // error details
+  "data": {}        // response payload; see each endpoint for its shape
+}
+```
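Because every endpoint shares this envelope, clients can unwrap it in one place. A minimal Python sketch (the `ApiError` class and `check_response` helper are illustrative, not part of an official SDK; the field names come from this document):

```python
class ApiError(Exception):
    """Raised when the envelope carries a non-zero errCode."""
    def __init__(self, err_code, err_msg, err_dlt):
        super().__init__(f"errCode={err_code}: {err_msg} ({err_dlt})")
        self.err_code = err_code

def check_response(envelope):
    """Return the data payload, or raise ApiError when errCode != 0."""
    if envelope.get("errCode", -1) != 0:
        raise ApiError(
            envelope.get("errCode"),
            envelope.get("errMsg", ""),
            envelope.get("errDlt", ""),
        )
    return envelope.get("data", {})
```

With this in place, each API call reduces to `data = check_response(resp.json())`, and failures surface as exceptions carrying the server's error code.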
+
+### Authentication
+
+- Most endpoints require a `token` request header
+- Allowlisted endpoints (no token required):
+  - `/auth/get_admin_token` - obtain an admin token
+  - `/auth/parse_token` - parse a token
+
+### Base URL
+
+Depends on the deployment environment, for example:
+- Development: `http://localhost:10002`
+- Production: `https://your-domain.com`
+
+---
+
+## Authentication
+
+### 1. Get a User Token
+
+**Endpoint**: `POST /auth/get_user_token`
+
+**Description**: obtain a token at user login
+
+**Request**:
+```json
+{
+  "secret": "string",   // secret key (required)
+  "userID": "string",   // user ID (required)
+  "platform": 1         // platform ID (required): 1-iOS, 2-Android, 3-Windows, 4-OSX, 5-Web, 6-Mini Program
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
+    "expireTimeSeconds": 604800
+  }
+}
+```
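A hedged example of assembling this login request on the client side. The path, body fields, and platform IDs come from this document; `BASE_URL`, the credential values, and the helper name are placeholders:

```python
import json

BASE_URL = "http://localhost:10002"  # example development base URL from this document

def build_get_user_token_request(secret, user_id, platform):
    """Build the HTTP request parts for POST /auth/get_user_token."""
    return {
        "url": BASE_URL + "/auth/get_user_token",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "secret": secret,      # required
            "userID": user_id,     # required
            "platform": platform,  # required, e.g. 5 for Web
        }),
    }
```

The returned dict can be handed to any HTTP client (`requests.post(req["url"], headers=req["headers"], data=req["body"])`, for instance); the response envelope then follows the format described in the Basics section.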
+
+### 2. Parse a Token
+
+**Endpoint**: `POST /auth/parse_token`
+
+**Description**: parse a token and verify its validity (allowlisted endpoint, no token required)
+
+**Request**:
+```json
+{
+  "token": "string"   // token string (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "userID": "string",
+    "platform": 1,
+    "expireTimeSeconds": 604800
+  }
+}
+```
+
+### 3. Force Logout
+
+**Endpoint**: `POST /auth/force_logout`
+
+**Description**: force a user to log out, invalidating the token
+
+**Request**:
+```json
+{
+  "userID": "string",   // user ID (required)
+  "platform": 1         // platform ID (required)
+}
+```
+
+---
+
+## User APIs
+
+### 1. Register Users
+
+**Endpoint**: `POST /user/user_register`
+
+**Request**:
+```json
+{
+  "secret": "string",    // Secret (required)
+  "users": [             // User list (required)
+    {
+      "userID": "string",    // User ID (required)
+      "nickname": "string",  // Nickname (required)
+      "faceURL": "string",   // Avatar URL (optional)
+      "ex": "string"         // Extension field (optional)
+    }
+  ]
+}
+```
+
+### 2. Update User Info
+
+**Endpoint**: `POST /user/update_user_info_ex`
+
+**Request**:
+```json
+{
+  "userID": "string",    // User ID (required)
+  "nickname": "string",  // Nickname (optional)
+  "faceURL": "string",   // Avatar URL (optional)
+  "ex": "string"         // Extension field (optional)
+}
+```
+
+### 3. Get User Info
+
+**Endpoint**: `POST /user/get_users_info`
+
+**Request**:
+```json
+{
+  "userIDs": ["user1", "user2"]  // User ID list (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "users": [
+      {
+        "userID": "string",
+        "nickname": "string",
+        "faceURL": "string",
+        "ex": "string",
+        "createTime": 1234567890000
+      }
+    ]
+  }
+}
+```
+
+### 4. Get Users' Online Status
+
+**Endpoint**: `POST /user/get_users_online_status`
+
+**Request**:
+```json
+{
+  "userIDs": ["user1", "user2"]  // User ID list (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "successResult": [
+      {
+        "userID": "string",
+        "status": "online",  // online/offline
+        "platformIDs": [1, 2]
+      }
+    ],
+    "failedResult": [
+      {
+        "userID": "string",
+        "errMsg": "string"
+      }
+    ]
+  }
+}
+```
+
+### 5. Subscribe to Users' Status
+
+**Endpoint**: `POST /user/subscribe_users_status`
+
+**Request**:
+```json
+{
+  "userIDs": ["user1", "user2"]  // User IDs to subscribe to (required)
+}
+```
+
+### 6. Get Subscribed Users' Status
+
+**Endpoint**: `POST /user/get_subscribe_users_status`
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "statusList": [
+      {
+        "userID": "string",
+        "status": "online",
+        "platformIDs": [1, 2]
+      }
+    ]
+  }
+}
+```
+
+### 7. Set Global Message Receive Option
+
+**Endpoint**: `POST /user/set_global_msg_recv_opt`
+
+**Request**:
+```json
+{
+  "globalRecvMsgOpt": 0  // 0-receive and notify, 1-receive without notifying, 2-do not receive
+}
+```
+
+---
+
+## Friend APIs
+
+### 1. Send a Friend Request
+
+**Endpoint**: `POST /friend/add_friend`
+
+**Request**:
+```json
+{
+  "toUserID": "string",  // Target user ID (required)
+  "reqMsg": "string",    // Request message (optional)
+  "ex": "string"         // Extension field (optional)
+}
+```
+
+### 2. Handle a Friend Request
+
+**Endpoint**: `POST /friend/add_friend_response`
+
+**Request**:
+```json
+{
+  "fromUserID": "string",  // Requester's user ID (required)
+  "handleResult": 1,       // Handling result (required): 0-reject, 1-accept
+  "handleMsg": "string"    // Handling message (optional)
+}
+```
+
+### 3. Delete a Friend
+
+**Endpoint**: `POST /friend/delete_friend`
+
+**Request**:
+```json
+{
+  "friendUserID": "string"  // Friend's user ID (required)
+}
+```
+
+### 4. Get Friend List
+
+**Endpoint**: `POST /friend/get_friend_list`
+
+**Request**:
+```json
+{
+  "pagination": {  // Pagination (optional)
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "friendsInfo": [
+      {
+        "ownerUserID": "string",
+        "remark": "string",
+        "createTime": 1234567890000,
+        "friendUser": {
+          "userID": "string",
+          "nickname": "string",
+          "faceURL": "string"
+        }
+      }
+    ],
+    "total": 100
+  }
+}
+```
+
+### 5. Get Designated Friends
+
+**Endpoint**: `POST /friend/get_designated_friends`
+
+**Request**:
+```json
+{
+  "friendUserIDs": ["user1", "user2"]  // Friend user ID list (required)
+}
+```
+
+### 6. Set Friend Remark
+
+**Endpoint**: `POST /friend/set_friend_remark`
+
+**Request**:
+```json
+{
+  "toUserID": "string",  // Friend's user ID (required)
+  "remark": "string"     // Remark (required)
+}
+```
+
+### 7. Get Received Friend Requests
+
+**Endpoint**: `POST /friend/get_friend_apply_list`
+
+**Request**:
+```json
+{
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 8. Get Sent Friend Requests
+
+**Endpoint**: `POST /friend/get_self_friend_apply_list`
+
+**Request**: same as above
+
+### 9. Add to Blacklist
+
+**Endpoint**: `POST /friend/add_black`
+
+**Request**:
+```json
+{
+  "toUserID": "string"  // Target user ID (required)
+}
+```
+
+### 10. Get Blacklist
+
+**Endpoint**: `POST /friend/get_black_list`
+
+**Request**:
+```json
+{
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 11. Remove from Blacklist
+
+**Endpoint**: `POST /friend/remove_black`
+
+**Request**:
+```json
+{
+  "toUserID": "string"  // Target user ID (required)
+}
+```
+
+### 12. Check Friendship
+
+**Endpoint**: `POST /friend/is_friend`
+
+**Request**:
+```json
+{
+  "toUserIDs": ["user1", "user2"]  // User ID list (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "results": [
+      {
+        "userID": "string",
+        "isFriend": true
+      }
+    ]
+  }
+}
+```
+
+---
+
+## Group APIs
+
+### 1. Create a Group
+
+**Endpoint**: `POST /group/create_group`
+
+**Request**:
+```json
+{
+  "groupInfo": {
+    "groupName": "string",     // Group name (required)
+    "introduction": "string",  // Group introduction (optional)
+    "faceURL": "string",       // Group avatar (optional)
+    "ex": "string"             // Extension field (optional)
+  },
+  "memberUserIDs": ["user1", "user2"]  // Initial member list (optional)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "groupInfo": {
+      "groupID": "string",
+      "groupName": "string",
+      "introduction": "string",
+      "faceURL": "string",
+      "ownerUserID": "string",
+      "createTime": 1234567890000,
+      "memberCount": 1
+    }
+  }
+}
+```
+
+### 2. Set Group Info
+
+**Endpoint**: `POST /group/set_group_info`
+
+**Request**:
+```json
+{
+  "groupID": "string",       // Group ID (required)
+  "groupName": "string",     // Group name (optional)
+  "introduction": "string",  // Group introduction (optional)
+  "faceURL": "string",       // Group avatar (optional)
+  "ex": "string"             // Extension field (optional)
+}
+```
+
+### 3. Join a Group
+
+**Endpoint**: `POST /group/join_group`
+
+**Request**:
+```json
+{
+  "groupID": "string",     // Group ID (required)
+  "reqMessage": "string",  // Application message (optional)
+  "ex": "string"           // Extension field (optional)
+}
+```
+
+### 4. Quit a Group
+
+**Endpoint**: `POST /group/quit_group`
+
+**Request**:
+```json
+{
+  "groupID": "string"  // Group ID (required)
+}
+```
+
+### 5. Handle a Group Application
+
+**Endpoint**: `POST /group/group_application_response`
+
+**Request**:
+```json
+{
+  "groupID": "string",     // Group ID (required)
+  "fromUserID": "string",  // Applicant's user ID (required)
+  "handleResult": 1,       // Handling result (required): 0-reject, 1-accept
+  "handledMsg": "string"   // Handling message (optional)
+}
+```
+
+### 6. Transfer Group Ownership
+
+**Endpoint**: `POST /group/transfer_group`
+
+**Request**:
+```json
+{
+  "groupID": "string",        // Group ID (required)
+  "newOwnerUserID": "string"  // New owner's user ID (required)
+}
+```
+
+### 7. Get Group Info
+
+**Endpoint**: `POST /group/get_groups_info`
+
+**Request**:
+```json
+{
+  "groupIDs": ["group1", "group2"]  // Group ID list (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "groups": [
+      {
+        "groupID": "string",
+        "groupName": "string",
+        "introduction": "string",
+        "faceURL": "string",
+        "ownerUserID": "string",
+        "createTime": 1234567890000,
+        "memberCount": 10
+      }
+    ]
+  }
+}
+```
+
+### 8. Get Joined Groups
+
+**Endpoint**: `POST /group/get_joined_group_list`
+
+**Request**:
+```json
+{
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 9. Get Group Member List
+
+**Endpoint**: `POST /group/get_group_member_list`
+
+**Request**:
+```json
+{
+  "groupID": "string",  // Group ID (required)
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "members": [
+      {
+        "groupID": "string",
+        "userID": "string",
+        "roleLevel": 1,  // 1-owner, 2-admin, 3-ordinary member
+        "joinTime": 1234567890000,
+        "nickname": "string",
+        "faceURL": "string"
+      }
+    ],
+    "total": 100
+  }
+}
+```
+
+### 10. Invite Users to a Group
+
+**Endpoint**: `POST /group/invite_user_to_group`
+
+**Request**:
+```json
+{
+  "groupID": "string",                   // Group ID (required)
+  "invitedUserIDs": ["user1", "user2"],  // Invited user ID list (required)
+  "reason": "string"                     // Invitation reason (optional)
+}
+```
+
+### 11. Kick Group Members
+
+**Endpoint**: `POST /group/kick_group`
+
+**Request**:
+```json
+{
+  "groupID": "string",        // Group ID (required)
+  "kickedUserIDs": ["user1"]  // Kicked user ID list (required)
+}
+```
+
+### 12. Dismiss a Group
+
+**Endpoint**: `POST /group/dismiss_group`
+
+**Request**:
+```json
+{
+  "groupID": "string"  // Group ID (required)
+}
+```
+
+### 13. Set Group Member Info
+
+**Endpoint**: `POST /group/set_group_member_info`
+
+**Request**:
+```json
+{
+  "groupID": "string",   // Group ID (required)
+  "userID": "string",    // User ID (required)
+  "nickname": "string",  // In-group nickname (optional)
+  "faceURL": "string",   // Avatar (optional)
+  "roleLevel": 2,        // Role level (optional): 1-owner, 2-admin, 3-ordinary member
+  "ex": "string"         // Extension field (optional)
+}
+```
+
+### 14. Mute a Group Member
+
+**Endpoint**: `POST /group/mute_group_member`
+
+**Request**:
+```json
+{
+  "groupID": "string",  // Group ID (required)
+  "userID": "string",   // User ID (required)
+  "mutedSeconds": 3600  // Mute duration in seconds (required)
+}
+```
+
+### 15. Unmute a Group Member
+
+**Endpoint**: `POST /group/cancel_mute_group_member`
+
+**Request**:
+```json
+{
+  "groupID": "string",  // Group ID (required)
+  "userID": "string"    // User ID (required)
+}
+```
+
+### 16. Mute a Group
+
+**Endpoint**: `POST /group/mute_group`
+
+**Request**:
+```json
+{
+  "groupID": "string"  // Group ID (required)
+}
+```
+
+### 17. Unmute a Group
+
+**Endpoint**: `POST /group/cancel_mute_group`
+
+**Request**:
+```json
+{
+  "groupID": "string"  // Group ID (required)
+}
+```
+
+---
+
+## Message APIs
+
+### 1. Send a Message
+
+**Endpoint**: `POST /msg/send_msg`
+
+**Request**:
+```json
+{
+  "sendID": "string",          // Sender ID (required)
+  "groupID": "string",         // Group ID (required for group chat; omit for single chat)
+  "recvID": "string",          // Receiver ID (required for single chat; omit for group chat)
+  "senderNickname": "string",  // Sender nickname (optional)
+  "senderFaceURL": "string",   // Sender avatar (optional)
+  "contentType": 101,          // Message type (required): 101-text, 102-image, 103-voice, 104-video, etc.
+  "content": "string",         // Message content (required)
+  "options": {                 // Message options (optional)
+    "isHistory": true,         // Store in history
+    "isPersistent": true,      // Persist the message
+    "isSenderSync": true       // Sync back to the sender
+  },
+  "clientMsgID": "string",     // Client message ID (required)
+  "offlinePushInfo": {         // Offline push info (optional)
+    "title": "string",
+    "desc": "string"
+  }
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "serverMsgID": "string",
+    "clientMsgID": "string",
+    "sendTime": 1234567890000
+  }
+}
+```
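A client typically generates `clientMsgID` itself and sets exactly one of `recvID` (single chat) or `groupID` (group chat). A minimal payload-builder sketch (Python; the helper name and the option defaults are illustrative assumptions, not an official SDK):

```python
import uuid

def build_text_msg(send_id: str, content: str,
                   recv_id: str = None, group_id: str = None) -> dict:
    """Build a /msg/send_msg body for contentType 101 (text).

    Exactly one of recv_id (single chat) or group_id (group chat) must be set.
    """
    if bool(recv_id) == bool(group_id):
        raise ValueError("set exactly one of recv_id (single chat) or group_id (group chat)")
    msg = {
        "sendID": send_id,
        "contentType": 101,
        "content": content,
        # Client-generated ID; lets the server and client correlate/de-duplicate
        "clientMsgID": uuid.uuid4().hex,
        "options": {"isHistory": True, "isPersistent": True, "isSenderSync": True},
    }
    if recv_id:
        msg["recvID"] = recv_id
    else:
        msg["groupID"] = group_id
    return msg
```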
+
+### 2. Send a Simple Message
+
+**Endpoint**: `POST /msg/send_simple_msg`
+
+**Request**: same as above, with a simplified parameter set
+
+### 3. Batch Send Messages
+
+**Endpoint**: `POST /msg/batch_send_msg`
+
+**Request**:
+```json
+{
+  "sendID": "string",
+  "groupID": "string",
+  "recvIDs": ["user1", "user2"],  // Receiver ID list
+  "contentType": 101,
+  "content": "string",
+  "clientMsgID": "string"
+}
+```
+
+### 4. Get Newest Sequence Number
+
+**Endpoint**: `POST /msg/newest_seq`
+
+**Request**:
+```json
+{
+  "userID": "string"  // User ID (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "seq": 1000  // Newest sequence number
+  }
+}
+```
+
+### 5. Pull Messages by Sequence Number
+
+**Endpoint**: `POST /msg/pull_msg_by_seq`
+
+**Request**:
+```json
+{
+  "userID": "string",  // User ID (required)
+  "seqRanges": [       // Sequence ranges (required)
+    {
+      "start": 1,
+      "end": 100
+    }
+  ]
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "msgs": [
+      {
+        "serverMsgID": "string",
+        "clientMsgID": "string",
+        "sendID": "string",
+        "recvID": "string",
+        "groupID": "string",
+        "contentType": 101,
+        "content": "string",
+        "sendTime": 1234567890000
+      }
+    ]
+  }
+}
+```
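A common sync pattern is: call `/msg/newest_seq`, compare against the locally stored max sequence number, and pull the gap in batches. A sketch of the range computation (Python; the batch size of 100 is an assumption for illustration, not a documented server limit):

```python
def seq_ranges(local_max_seq: int, newest_seq: int, batch: int = 100):
    """Split the missing range (local_max_seq, newest_seq] into seqRanges
    entries covering at most `batch` sequence numbers each."""
    ranges = []
    start = local_max_seq + 1
    while start <= newest_seq:
        end = min(start + batch - 1, newest_seq)
        ranges.append({"start": start, "end": end})
        start = end + 1
    return ranges
```

The resulting list plugs directly into the `seqRanges` field of the request above.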
+
+### 6. Search Messages
+
+**Endpoint**: `POST /msg/search_msg`
+
+**Request**:
+```json
+{
+  "sendID": "string",   // Sender ID (optional)
+  "recvID": "string",   // Receiver ID (optional)
+  "groupID": "string",  // Group ID (optional)
+  "contentType": 101,   // Message type (optional)
+  "keyword": "string",  // Keyword (optional)
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 7. Revoke a Message
+
+**Endpoint**: `POST /msg/revoke_msg`
+
+**Request**:
+```json
+{
+  "conversationID": "string",  // Conversation ID (required)
+  "seq": 100,                  // Message sequence number (required)
+  "userID": "string"           // User ID (required)
+}
+```
+
+### 8. Mark Messages as Read
+
+**Endpoint**: `POST /msg/mark_msgs_as_read`
+
+**Request**:
+```json
+{
+  "conversationID": "string",  // Conversation ID (required)
+  "seqs": [100, 101, 102],     // Message sequence numbers (required)
+  "userID": "string"           // User ID (required)
+}
+```
+
+### 9. Mark a Conversation as Read
+
+**Endpoint**: `POST /msg/mark_conversation_as_read`
+
+**Request**:
+```json
+{
+  "conversationID": "string",  // Conversation ID (required)
+  "hasReadSeq": 100,           // Read sequence number (required)
+  "userID": "string"           // User ID (required)
+}
+```
+
+### 10. Get Conversations' Read and Max Sequence Numbers
+
+**Endpoint**: `POST /msg/get_conversations_has_read_and_max_seq`
+
+**Request**:
+```json
+{
+  "userID": "string",                    // User ID (required)
+  "conversationIDs": ["conv1", "conv2"]  // Conversation ID list (required)
+}
+```
+
+### 11. Delete Messages
+
+**Endpoint**: `POST /msg/delete_msgs`
+
+**Request**:
+```json
+{
+  "userID": "string",          // User ID (required)
+  "conversationID": "string",  // Conversation ID (required)
+  "seqs": [100, 101, 102]      // Message sequence numbers (required)
+}
+```
+
+### 12. Clear Conversation Messages
+
+**Endpoint**: `POST /msg/clear_conversation_msg`
+
+**Request**:
+```json
+{
+  "userID": "string",         // User ID (required)
+  "conversationID": "string"  // Conversation ID (required)
+}
+```
+
+### 13. Get Server Time
+
+**Endpoint**: `POST /msg/get_server_time`
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "serverTime": 1234567890000  // Server timestamp in milliseconds
+  }
+}
+```
+
+---
+
+## Conversation APIs
+
+### 1. Get All Conversations
+
+**Endpoint**: `POST /conversation/get_all_conversations`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string"  // User ID (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "conversations": [
+      {
+        "ownerUserID": "string",
+        "conversationID": "string",
+        "conversationType": 1,   // 1-single chat, 2-group chat
+        "userID": "string",      // Used for single chat
+        "groupID": "string",     // Used for group chat
+        "recvMsgOpt": 0,         // Receive option: 0-receive and notify, 1-receive without notifying, 2-do not receive
+        "unreadCount": 0,        // Unread message count
+        "draftText": "string",   // Draft
+        "draftTextTime": 0,      // Draft timestamp
+        "isPinned": false,       // Pinned
+        "isPrivateChat": false,  // Private chat
+        "burnDuration": 0,       // Burn-after-reading duration in seconds
+        "groupAtType": 0,        // Group @ type
+        "maxSeq": 0,             // Max sequence number
+        "hasReadSeq": 0,         // Read sequence number
+        "updateUnreadCountTime": 0
+      }
+    ]
+  }
+}
+```
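Client UIs usually show pinned conversations first and aggregate unread counts while honoring `recvMsgOpt`. A sketch using only fields from the response above (Python; the helper names and the choice to exclude muted conversations from the badge total are illustrative client-side decisions, not server behavior):

```python
def order_conversations(conversations):
    """Pinned conversations first; within each bucket, keep the incoming order
    (sorted() is stable, so ties preserve server order)."""
    return sorted(conversations, key=lambda c: not c.get("isPinned", False))

def total_unread(conversations):
    """Sum unreadCount over conversations with recvMsgOpt == 0 (receive and notify)."""
    return sum(c.get("unreadCount", 0)
               for c in conversations
               if c.get("recvMsgOpt", 0) == 0)
```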
+
+### 2. Get Sorted Conversation List
+
+**Endpoint**: `POST /conversation/get_sorted_conversation_list`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string",  // User ID (required)
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 3. Get a Conversation
+
+**Endpoint**: `POST /conversation/get_conversation`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string",    // User ID (required)
+  "conversationID": "string"  // Conversation ID (required)
+}
+```
+
+### 4. Get Conversations in Batch
+
+**Endpoint**: `POST /conversation/get_conversations`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string",               // User ID (required)
+  "conversationIDs": ["conv1", "conv2"]  // Conversation ID list (required)
+}
+```
+
+### 5. Set Conversations
+
+**Endpoint**: `POST /conversation/set_conversations`
+
+**Request**:
+```json
+{
+  "conversations": [
+    {
+      "conversationID": "string",
+      "recvMsgOpt": 0,
+      "isPinned": false,
+      "isPrivateChat": false,
+      "groupAtType": 0,
+      "draftText": "string",
+      "draftTextTime": 0,
+      "burnDuration": 0
+    }
+  ]
+}
+```
+
+### 6. Get Incremental Conversations
+
+**Endpoint**: `POST /conversation/get_incremental_conversations`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string",  // User ID (required)
+  "seq": 100                // Starting sequence number (required)
+}
+```
+
+### 7. Get Pinned Conversation IDs
+
+**Endpoint**: `POST /conversation/get_pinned_conversation_ids`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string"  // User ID (required)
+}
+```
+
+### 8. Delete Conversations
+
+**Endpoint**: `POST /conversation/delete_conversations`
+
+**Request**:
+```json
+{
+  "ownerUserID": "string",               // User ID (required)
+  "conversationIDs": ["conv1", "conv2"]  // Conversation ID list (required)
+}
+```
+
+---
+
+## Third-Party Service APIs
+
+### 1. Update FCM Token
+
+**Endpoint**: `POST /third/fcm_update_token`
+
+**Request**:
+```json
+{
+  "platform": 1,         // Platform ID (required)
+  "fcmToken": "string",  // FCM token (required)
+  "userID": "string"     // User ID (required)
+}
+```
+
+### 2. Set App Badge
+
+**Endpoint**: `POST /third/set_app_badge`
+
+**Request**:
+```json
+{
+  "userID": "string",  // User ID (required)
+  "unreadCount": 10    // Unread count (required)
+}
+```
+
+### 3. Upload Logs
+
+**Endpoint**: `POST /third/logs/upload`
+
+**Request**: file upload (multipart/form-data)
+
+### 4. Object Storage - Initiate Multipart Upload
+
+**Endpoint**: `POST /object/initiate_multipart_upload`
+
+**Request**:
+```json
+{
+  "name": "string",        // Object name (required)
+  "contentType": "string"  // Content type (optional)
+}
+```
+
+### 5. Object Storage - Complete Multipart Upload
+
+**Endpoint**: `POST /object/complete_multipart_upload`
+
+**Request**:
+```json
+{
+  "name": "string",      // Object name (required)
+  "uploadID": "string",  // Upload ID (required)
+  "parts": [             // Part list (required)
+    {
+      "partNumber": 1,
+      "etag": "string"
+    }
+  ]
+}
+```
+
+### 6. Object Storage - Get Access URL
+
+**Endpoint**: `POST /object/access_url`
+
+**Request**:
+```json
+{
+  "name": "string",  // Object name (required)
+  "expires": 3600    // Expiration in seconds (optional)
+}
+```
+
+---
+
+## Red Packet APIs
+
+For details, see the [Red Packet API documentation](./redpacket-api.md).
+
+### Main Endpoints
+
+1. **Send a red packet**: `POST /redpacket/send_redpacket`
+2. **Claim a red packet**: `POST /redpacket/receive`
+3. **Get red packet details**: `POST /redpacket/get_detail`
+
+---
+
+## Meeting APIs
+
+For details, see the [Meeting API documentation](./meeting-api.md).
+
+### User-Facing Endpoints
+
+1. **Get meeting info**: `POST /meeting/get_meeting`
+2. **Get public meeting list**: `POST /meeting/get_meetings_public`
+
+---
+
+## Wallet APIs
+
+### 1. Get Wallets (admin endpoint)
+
+**Endpoint**: `POST /wallet/get_wallets`
+
+**Request**:
+```json
+{
+  "pagination": {
+    "pageNumber": 1,
+    "showNumber": 20
+  }
+}
+```
+
+### 2. Batch Update Balances (admin endpoint)
+
+**Endpoint**: `POST /wallet/batch_update_balance`
+
+**Request**:
+```json
+{
+  "updates": [
+    {
+      "userID": "string",
+      "amount": 10000,  // Amount in cents
+      "changeType": 1   // Change type: 1-increase, 2-decrease
+    }
+  ]
+}
+```
+
+---
+
+## Error Codes
+
+### General
+
+| Code | Description |
+|------|-------------|
+| 0 | Success |
+| 500 | Internal server error |
+| 1001 | Invalid argument |
+| 1002 | Permission denied |
+| 1003 | Duplicate key |
+| 1004 | Record not found |
+
+### User
+
+| Code | Description |
+|------|-------------|
+| 1101 | User ID does not exist or is not registered |
+| 1102 | User already registered |
+
+### Group
+
+| Code | Description |
+|------|-------------|
+| 1201 | Group ID does not exist |
+| 1202 | Group ID already exists |
+| 1203 | Not a member of the group |
+| 1204 | Group has been dismissed |
+| 1205 | Group type not supported |
+| 1206 | Group application already handled |
+
+### Friend
+
+| Code | Description |
+|------|-------------|
+| 1301 | Cannot add yourself as a friend |
+| 1302 | Blacklisted by the other user |
+| 1303 | Not the other user's friend |
+| 1304 | Already friends |
+
+### Message
+
+| Code | Description |
+|------|-------------|
+| 1401 | Read receipts disabled |
+| 1402 | Group member is muted |
+| 1403 | Group is muted |
+| 1404 | Message already revoked |
+| 1405 | Message contains a link (not allowed) |
+| 1406 | Image contains a QR code (not allowed) |
+
+### Token
+
+| Code | Description |
+|------|-------------|
+| 1501 | Token expired |
+| 1502 | Token invalid |
+| 1503 | Token malformed |
+| 1504 | Token not valid yet |
+| 1505 | Unknown token error |
+| 1506 | Token kicked out |
+| 1507 | Token does not exist |
+
+### Red Packet
+
+| Code | Description |
+|------|-------------|
+| 1801 | Red packet fully claimed |
+| 1802 | Red packet expired |
+| 1803 | User already claimed this red packet |
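Clients often map `errCode` values to user-facing messages. A partial lookup sketch built from the tables above (Python; the table is deliberately incomplete, and the English wording is this document's, not server output):

```python
# Client-side lookup table (partial) keyed by errCode.
ERROR_MESSAGES = {
    0: "success",
    500: "internal server error",
    1001: "invalid argument",
    1002: "permission denied",
    1101: "user not registered",
    1201: "group does not exist",
    1501: "token expired",
    1801: "red packet fully claimed",
}

def describe(err_code: int) -> str:
    """Return a human-readable description, with a fallback for unmapped codes."""
    return ERROR_MESSAGES.get(err_code, f"unknown error ({err_code})")
```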
+
+---
+
+## Notes
+
+### 1. Authentication
+
+- Except for whitelisted endpoints, all requests must carry a `token` header
+- Tokens are obtained via `/auth/get_user_token`
+- An expired token must be obtained again
+
+### 2. Request Format
+
+- All endpoints use `POST` (unless stated otherwise)
+- Content-Type: `application/json`
+- The request body is JSON
+
+### 3. Response Format
+
+- All endpoints return the unified envelope `{errCode, errMsg, errDlt, data}`
+- `errCode` 0 means success; any other value means failure
+- On failure, inspect `errMsg` and `errDlt` for details
+
+### 4. Pagination
+
+- Pagination uses a unified shape:
+  ```json
+  {
+    "pagination": {
+      "pageNumber": 1,  // Page number, starting at 1
+      "showNumber": 20  // Items per page
+    }
+  }
+  ```
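Given `total` from a paginated response, the number of pages follows directly. A small sketch (Python; helper names are illustrative):

```python
import math

def page_count(total: int, show_number: int) -> int:
    """Number of pages needed for `total` items at `show_number` per page."""
    return math.ceil(total / show_number) if total > 0 else 0

def pagination(page_number: int, show_number: int = 20) -> dict:
    """Build the unified pagination object; pageNumber is 1-based."""
    return {"pagination": {"pageNumber": page_number, "showNumber": show_number}}
```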
+
+### 5. Timestamps
+
+- All timestamps are millisecond Unix timestamps
+- Example: `1234567890000` corresponds to 2009-02-13 23:31:30 UTC
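Millisecond timestamps convert to and from native datetimes by dividing or multiplying by 1000. A sketch using only the Python standard library:

```python
from datetime import datetime, timezone

def from_ms(ts_ms: int) -> datetime:
    """Convert a millisecond Unix timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)

def to_ms(dt: datetime) -> int:
    """Convert a datetime back to a millisecond Unix timestamp."""
    return int(dt.timestamp() * 1000)
```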
+
+### 6. Platform IDs
+
+- 1: iOS
+- 2: Android
+- 3: Windows
+- 4: OSX
+- 5: Web
+- 6: Mini Program
+
+### 7. Message Types
+
+- 101: Text
+- 102: Image
+- 103: Voice
+- 104: Video
+- 105: File
+- 106: @ mention
+- 107: Location
+- 108: Custom message
+- 109: Quote/reply
+- 110: Custom message (red packets, etc.)
+
+### 8. Conversation Types
+
+- 1: Single chat
+- 2: Group chat
+
+### 9. Group Member Roles
+
+- 1: Owner
+- 2: Admin
+- 3: Ordinary member
+
+### 10. Message Receive Options
+
+- 0: Receive and notify
+- 1: Receive without notifying
+- 2: Do not receive
+
+---
+
+## Endpoint Summary
+
+| Module | Endpoints | Main Features |
+|--------|-----------|---------------|
+| Authentication | 3 | Get token, parse token, force logout |
+| User | 19 | Registration, profile updates, queries, online status, etc. |
+| Friend | 16 | Add, delete, query, blacklist, etc. |
+| Group | 17 | Create, join, quit, member management, etc. |
+| Message | 13 | Send, receive, search, revoke, etc. |
+| Conversation | 8 | Get, set, delete conversations, etc. |
+| Third-party services | 6+ | File upload, object storage, etc. |
+| Red packet | 3 | Send, claim, query |
+| Meeting | 2 | Query meeting info |
+| Wallet | 2 | Query wallets, update balances |
+
+---
+
+## Related Documents
+
+- [Red Packet API details](./redpacket-api.md)
+- [Meeting API details](./meeting-api.md)
+- [Error code standard](./contrib/error-code.md)
+- [API standard](./contrib/api.md)
+
+---
+
+**Last updated**: 2025-01-23
+
diff --git a/docs/contrib/README.md b/docs/contrib/README.md
new file mode 100644
index 0000000..3d51704
--- /dev/null
+++ b/docs/contrib/README.md
@@ -0,0 +1,42 @@
+# Contrib Documentation Index
+
+## 📚 General Information
+- [📄 README](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/README.md) - General introduction to the contribution documentation.
+- [📑 Development Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md) - Guidelines for setting up a development environment.
+
+## 🛠 Setup and Installation
+- [🌍 Environment Setup](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md) - Instructions on setting up the development environment.
+- [🐳 Docker Installation Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md) - Steps to install Docker for container management.
+- [🔧 OpenIM Linux System Installation](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md) - Guide for installing OpenIM on a Linux system.
+
+## 💻 Development Practices
+- [👨‍💻 Code Conventions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md) - Coding standards to follow for consistency.
+- [📐 Directory Structure](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md) - Explanation of the repository's directory layout.
+- [🔀 Git Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md) - The workflow for using Git in this project.
+- [💾 GitHub Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md) - Workflow guidelines for GitHub.
+
+## 🧪 Testing and Deployment
+- [⚙️ CI/CD Actions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md) - Continuous integration and deployment configurations.
+- [🚀 Offline Deployment](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md) - How to deploy the application offline.
+
+## 🔧 Utilities and Tools
+- [📦 Protoc Tools](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md) - Protobuf compiler-related utilities.
+- [🔨 Utility Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md) - Go utilities and helper functions.
+- [🛠 Makefile Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md) - Makefile scripts for automation.
+- [📜 Script Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md) - Utility scripts for development.
+
+## 📋 Standards and Conventions
+- [🚦 Commit Guidelines](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md) - Standards for writing commit messages.
+- [✅ Testing Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md) - Guidelines and conventions for writing tests.
+- [📈 Versioning](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md) - Version management for the project.
+
+## 🖼 Additional Resources
+- [🌐 API Reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md) - Detailed API documentation.
+- [📚 Go Code Standards](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md) - Go programming language standards.
+- [🖼 Image Guidelines](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md) - Guidelines for image assets.
+
+## 🐛 Troubleshooting
+- [🔍 Error Code Reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md) - List of error codes and their meanings.
+- [🐚 Bash Logging](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md) - Logging standards for bash scripts.
+- [📈 Logging Conventions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md) - Conventions for application logging.
+- [🛠 Local Actions Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md) - How to perform local actions for troubleshooting.
diff --git a/docs/contrib/api.md b/docs/contrib/api.md
new file mode 100644
index 0000000..1dd3892
--- /dev/null
+++ b/docs/contrib/api.md
@@ -0,0 +1,5 @@
+## Interface Standards
+
+Our project, OpenIM, adheres to the [OpenAPI 3.0](https://spec.openapis.org/oas/latest.html) interface standards.
+
+> Chinese translation: [OpenAPI Specification Chinese Translation](https://fishead.gitbook.io/openapi-specification-zhcn-translation/3.0.0.zhcn)
\ No newline at end of file
diff --git a/docs/contrib/bash-log.md b/docs/contrib/bash-log.md
new file mode 100644
index 0000000..f8b319f
--- /dev/null
+++ b/docs/contrib/bash-log.md
@@ -0,0 +1,49 @@
+## OpenIM Logging System: Design and Usage
+
+**PATH:** `scripts/lib/logging.sh`
+
+
+
+### Introduction
+
+OpenIM, an intricate project, requires a robust logging mechanism to diagnose issues, maintain system health, and provide insights. A custom-built logging system embedded within OpenIM ensures consistent and structured logs. Let's delve into the design of this logging system and understand its various functions and their usage scenarios.
+
+### Design Overview
+
+1. **Initialization**: The system begins by determining the verbosity level through the `OPENIM_VERBOSE` variable. If it's not set, a default value of 5 is assigned. This verbosity level dictates the depth of the log details.
+2. **Log File Setup**: Logs are stored in the directory specified by `OPENIM_OUTPUT`. If this variable isn't explicitly set, it defaults to the `_output` directory relative to the script location. Each log file is named based on the date to facilitate easy identification.
+3. **Logging Function**: The `echo_log()` function plays a pivotal role by writing messages to both the console (stdout) and the log file.
+4. **Logging to a file**: The `echo_log()` function also appends each message, with a timestamp, to the log file under `_output/logs/*`. Logging to file is enabled by default; to disable it, set `export ENABLE_LOGGING=false`.
+
+### Key Functions & Their Usages
+
+1. **Error Handling**:
+ - `openim::log::errexit()`: Activated when a command exits with an error. It prints a call tree showing the sequence of functions leading to the error and then calls `openim::log::error_exit()` with relevant information.
+ - `openim::log::install_errexit()`: Sets up the trap for catching errors and ensures that the error handler (`errexit`) gets propagated to various script constructs like functions, expansions, and subshells.
+2. **Logging Levels**:
+ - `openim::log::error()`: Logs error messages with a timestamp. The log message starts with '!!!' to indicate its severity.
+ - `openim::log::info()`: Provides informational messages. The display of these messages is governed by the verbosity level (`OPENIM_VERBOSE`).
+ - `openim::log::progress()`: Designed for logging progress messages or creating progress bars.
+ - `openim::log::status()`: Logs status messages with a timestamp, prefixing each entry with '+++' for easy identification.
+ - `openim::log::success()`: Highlights successful operations with a bright green prefix. It's ideal for visually signifying operations that completed successfully.
+3. **Exit and Stack Trace**:
+ - `openim::log::error_exit()`: Logs an error message, dumps the call stack, and exits the script with a specified exit code.
+ - `openim::log::stack()`: Prints out a stack trace, showing the call hierarchy leading to the point where this function was invoked.
+4. **Usage Information**:
+ - `openim::log::usage() & openim::log::usage_from_stdin()`: Both functions provide a mechanism to display usage instructions. The former accepts arguments directly, while the latter reads them from stdin.
+5. **Test Function**:
+ - `openim::log::test_log()`: This function is a test suite to verify that all logging functions are operating as expected.
+
+### Usage Scenario
+
+Imagine a situation where an OpenIM operation fails, and you need to ascertain the cause. With the logging system in place, you can:
+
+- Check the log file for the specific day to find error messages with the '!!!' prefix.
+- View the call tree and stack trace to trace back the sequence of operations leading to the failure.
+- Use the verbosity level to filter out unnecessary details and focus on the crux of the issue.
+
+This systematic and structured approach greatly simplifies the debugging process, making system maintenance more efficient.
+
+### Conclusion
+
+OpenIM's logging system is a testament to the importance of structured and detailed logging in complex projects. By using this logging mechanism, developers and system administrators can streamline troubleshooting and ensure the seamless operation of the OpenIM project.
\ No newline at end of file
diff --git a/docs/contrib/cicd-actions.md b/docs/contrib/cicd-actions.md
new file mode 100644
index 0000000..99072f3
--- /dev/null
+++ b/docs/contrib/cicd-actions.md
@@ -0,0 +1,129 @@
+# Continuous Integration and Automation
+
+Every change on the OpenIM repository, whether made through a pull request or a direct push, triggers the continuous integration pipelines defined within the same repository. Needless to say, no OpenIM contribution can be merged until all the checks pass (AKA having green builds).
+
+- [Continuous Integration and Automation](#continuous-integration-and-automation)
+ - [CI Platforms](#ci-platforms)
+ - [GitHub Actions](#github-actions)
+ - [Running locally](#running-locally)
+
+## CI Platforms
+
+Currently, there are two different platforms involved in running the CI processes:
+
+- GitHub actions
+- Drone pipelines on CNCF infrastructure
+
+### GitHub Actions
+
+All the existing GitHub Actions are defined as YAML files under the `.github/workflows` directory. These can be grouped into:
+
+- **PR Checks**. These actions run all the required validations upon PR creation and update. Covering the DCO compliance check, `x86_64` test batteries (unit, integration, smoke), and code coverage.
+- **Repository automation**. Currently, it only covers issues and epic grooming.
+
+Everything runs on GitHub's provided runners; thus, the tests are limited to run in `x86_64` architectures.
+
+
+## Running locally
+
+A contributor should verify their changes locally to speed up the pull request process. Fortunately, all the CI steps, except for the publishing ones, can be run in local environments through either of the following methods:
+
+**Use the Makefile:**
+```bash
+root@PS2023EVRHNCXG:~/workspaces/openim/Open-IM-Server# make help 😊
+
+Usage: make ...
+
+Targets:
+
+all Run tidy, gen, add-copyright, format, lint, cover, build 🚀
+build Build binaries by default 🛠️
+multiarch Build binaries for multiple platforms. See option PLATFORMS. 🌍
+tidy tidy go.mod ✨
+vendor vendor go.mod 📦
+style code style -> fmt,vet,lint 💅
+fmt Run go fmt against code. ✨
+vet Run go vet against code. ✅
+lint Check syntax and styling of go sources. ✔️
+format Gofmt (reformat) package sources (exclude vendor dir if existed). 🔄
+test Run unit test. 🧪
+cover Run unit test and get test coverage. 📊
+updates Check for updates to go.mod dependencies 🆕
+imports task to automatically handle import packages in Go files using goimports tool 📥
+clean Remove all files that are created by building. 🗑️
+image Build docker images for host arch. 🐳
+image.multiarch Build docker images for multiple platforms. See option PLATFORMS. 🌍🐳
+push Build docker images for host arch and push images to registry. 📤🐳
+push.multiarch Build docker images for multiple platforms and push images to registry. 🌍📤🐳
+tools Install dependent tools. 🧰
+gen Generate all necessary files. 🧩
+swagger Generate swagger document. 📖
+serve-swagger Serve swagger spec and docs. 🚀📚
+verify-copyright Verify the license headers for all files. ✅
+add-copyright Add copyright ensure source code files have license headers. 📄
+release release the project 🎉
+help Show this help info. ℹ️
+help-all Show all help details info. ℹ️📚
+
+Options:
+
+DEBUG Whether or not to generate debug symbols. Default is 0. ❓
+
+BINS Binaries to build. Default is all binaries under cmd. 🛠️
+This option is available when using: make {build}(.multiarch) 🧰
+Example: make build BINS="openim-api openim_cms_api".
+
+PLATFORMS Platform to build for. Default is linux_arm64 and linux_amd64. 🌍
+This option is available when using: make {build}.multiarch 🌍
+Example: make multiarch PLATFORMS="linux_s390x linux_mips64
+linux_mips64le darwin_amd64 windows_amd64 linux_amd64 linux_arm64".
+
+V Set to 1 enable verbose build. Default is 0. 📝
+```
+
+
+How to Use Makefile to Help Contributors Build Projects Quickly 😊
+
+The `make help` command is a handy tool that provides useful information on how to utilize the Makefile effectively. By running this command, contributors will gain insights into various targets and options available for building projects swiftly.
+
+Here's a breakdown of the targets and options provided by the Makefile:
+
+**Targets 😃**
+
+1. `all`: This target runs multiple tasks like `tidy`, `gen`, `add-copyright`, `format`, `lint`, `cover`, and `build`. It ensures comprehensive project building.
+2. `build`: The primary target that compiles binaries by default. It is particularly useful for creating the necessary executable files.
+3. `multiarch`: A target that builds binaries for multiple platforms. Contributors can specify the desired platforms using the `PLATFORMS` option.
+4. `tidy`: This target cleans up the `go.mod` file, ensuring its consistency.
+5. `vendor`: A target that updates the project dependencies based on the `go.mod` file.
+6. `style`: Checks the code style using tools like `fmt`, `vet`, and `lint`. It ensures a consistent coding style throughout the project.
+7. `fmt`: Formats the code using the `go fmt` command, ensuring proper indentation and formatting.
+8. `vet`: Runs the `go vet` command to identify common errors in the code.
+9. `lint`: Validates the syntax and styling of Go source files using a linter.
+10. `format`: Reformats the package sources using `gofmt`. It excludes the vendor directory if it exists.
+11. `test`: Executes unit tests to ensure the functionality and stability of the code.
+12. `cover`: Performs unit tests and calculates the test coverage of the code.
+13. `updates`: Checks for updates to the project's dependencies specified in the `go.mod` file.
+14. `imports`: Automatically handles import packages in Go files using the `goimports` tool.
+15. `clean`: Removes all files generated during the build process, effectively cleaning up the project directory.
+16. `image`: Builds Docker images for the host architecture.
+17. `image.multiarch`: Similar to the `image` target, but it builds Docker images for multiple platforms. Contributors can specify the desired platforms using the `PLATFORMS` option.
+18. `push`: Builds Docker images for the host architecture and pushes them to a registry.
+19. `push.multiarch`: Builds Docker images for multiple platforms and pushes them to a registry. Contributors can specify the desired platforms using the `PLATFORMS` option.
+20. `tools`: Installs the necessary tools or dependencies required by the project.
+21. `gen`: Generates all the required files automatically.
+22. `swagger`: Generates the swagger document for the project.
+23. `serve-swagger`: Serves the swagger specification and documentation.
+24. `verify-copyright`: Verifies the license headers for all project files.
+25. `add-copyright`: Adds copyright headers to the source code files.
+26. `release`: Builds and publishes a release of the project.
+27. `help`: Displays information about available targets and options.
+28. `help-all`: Shows detailed information about all available targets and options.
+
+**Options 😄**
+
+1. `DEBUG`: A boolean option that determines whether or not to generate debug symbols. The default value is 0 (false).
+2. `BINS`: Specifies the binaries to build. By default, it builds all binaries under the `cmd` directory. Contributors can provide a list of specific binaries using this option.
+3. `PLATFORMS`: Specifies the platforms to build for. The default platforms are `linux_arm64` and `linux_amd64`. Contributors can specify multiple platforms by providing a space-separated list of platform names.
+4. `V`: A boolean option that enables verbose build output when set to 1 (true). The default value is 0 (false).
+
+With these targets and options in place, contributors can efficiently build projects using the Makefile. Happy coding! 🚀😊
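+
+As a quick illustration of how these options behave as ordinary `make` variables, the toy Makefile below (a hypothetical stand-in, not the project's real Makefile) shows `BINS` and `V` being overridden on the command line:
+
+```shell
+# Create a minimal Makefile that mimics the BINS/V options.
+# .RECIPEPREFIX lets recipes start with '>' instead of a tab (GNU make >= 3.82).
+cat > /tmp/demo.mk <<'EOF'
+.RECIPEPREFIX := >
+BINS ?= all-binaries
+V ?= 0
+build:
+>@echo "building: $(BINS) (verbose=$(V))"
+EOF
+
+# Equivalent in spirit to: make build BINS="openim-api" V=1
+make -f /tmp/demo.mk build BINS="openim-api" V=1
+```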
diff --git a/docs/contrib/code-conventions.md b/docs/contrib/code-conventions.md
new file mode 100644
index 0000000..f93ad64
--- /dev/null
+++ b/docs/contrib/code-conventions.md
@@ -0,0 +1,88 @@
+# Code conventions
+
+- [Code conventions](#code-conventions)
+ - [POSIX shell](#posix-shell)
+ - [Go](#go)
+ - [OpenIM Naming Conventions Guide](#openim-naming-conventions-guide)
+ - [1. General File Naming](#1-general-file-naming)
+ - [2. Special File Types](#2-special-file-types)
+ - [a. Script and Markdown Files](#a-script-and-markdown-files)
+ - [b. Uppercase Markdown Documentation](#b-uppercase-markdown-documentation)
+ - [3. Directory Naming](#3-directory-naming)
+ - [4. Configuration Files](#4-configuration-files)
+ - [Best Practices](#best-practices)
+ - [Directory and File Conventions](#directory-and-file-conventions)
+ - [Testing conventions](#testing-conventions)
+
+## POSIX shell
+
+- [Style guide](https://google.github.io/styleguide/shell.xml)
+
+## Go
+
+- [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
+- [Effective Go](https://golang.org/doc/effective_go.html)
+- Know and avoid [Go landmines](https://gist.github.com/lavalamp/4bd23295a9f32706a48f)
+- Comment your code.
+ - [Go's commenting conventions](http://blog.golang.org/godoc-documenting-go-code)
+ - If reviewers ask questions about why the code is the way it is, that's a sign that comments might be helpful.
+- Command-line flags should use dashes, not underscores
+- Naming
+ - Please consider package name when selecting an interface name, and avoid redundancy. For example, `storage.Interface` is better than `storage.StorageInterface`.
+ - Do not use uppercase characters, underscores, or dashes in package names.
+ - Please consider parent directory name when choosing a package name. For example, `pkg/controllers/autoscaler/foo.go` should say `package autoscaler` not `package autoscalercontroller`.
+ - Unless there's a good reason, the `package foo` line should match the name of the directory in which the `.go` file exists.
+ - Importers can use a different name if they need to disambiguate.
+
+## OpenIM Naming Conventions Guide
+
+Welcome to the OpenIM Naming Conventions Guide. This document outlines the best practices and standardized naming conventions that our project follows to maintain clarity, consistency, and alignment with industry standards, specifically taking cues from the Google Naming Conventions.
+
+### 1. General File Naming
+
+Files within the OpenIM project should adhere to the following rules:
+
++ Both hyphens (`-`) and underscores (`_`) are acceptable in file names.
++ Underscores (`_`) are preferred for general files to enhance readability and compatibility.
++ For example: `data_processor.py`, `user_profile_generator.go`
+
+### 2. Special File Types
+
+#### a. Script and Markdown Files
+
++ Bash scripts and Markdown files should use hyphens (`-`) to facilitate better searchability and compatibility in web browsers.
++ For example: `deploy-script.sh`, `project-overview.md`
+
+#### b. Uppercase Markdown Documentation
+
++ Markdown files with uppercase names, such as `README`, may include underscores (`_`) to separate words if necessary.
++ For example: `README_SETUP.md`, `CONTRIBUTING_GUIDELINES.md`
+
+### 3. Directory Naming
+
++ Directories must use hyphens (`-`) exclusively to maintain a clean and organized file structure.
++ For example: `image-assets`, `user-data`
+
+### 4. Configuration Files
+
++ Configuration files, including but not limited to `.yaml` files, should use hyphens (`-`).
++ For example: `app-config.yaml`, `logging-config.yaml`
+
+### Best Practices
+
++ Keep names concise but descriptive enough to convey the file's purpose or contents at a glance.
++ Avoid using spaces in names; use hyphens or underscores instead to improve compatibility across different operating systems and environments.
++ Stick to lowercase naming where possible for consistency and to prevent issues with case-sensitive systems.
++ Include version numbers or dates in file names if the file is subject to updates, following the format: `project-plan-v1.2.md` or `backup-2023-03-15.sql`.
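+
+The rules above can be sketched as a small shell check; `check_name` is a hypothetical helper, not part of the OpenIM tooling (the uppercase-Markdown exception is omitted for brevity):
+
+```shell
+# Go sources use underscores; scripts, Markdown, and YAML files use hyphens.
+check_name() {
+  case "$1" in
+    *.go)             case "$1" in *-*) echo "bad: $1" ;; *) echo "ok: $1" ;; esac ;;
+    *.sh|*.md|*.yaml) case "$1" in *_*) echo "bad: $1" ;; *) echo "ok: $1" ;; esac ;;
+    *)                echo "ok: $1" ;;
+  esac
+}
+
+check_name "user_profile_generator.go"   # ok
+check_name "deploy-script.sh"            # ok
+check_name "data-processor.go"           # bad: hyphen in a Go filename
+```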
+
+## Directory and File Conventions
+
+- Avoid generic utility packages. Instead of naming a package "util", choose a name that clearly describes its purpose. For instance, functions related to waiting operations are contained within the `wait` package, which includes methods like `Poll`, fully named as `wait.Poll`.
+- All filenames, script files, configuration files, and directories should be in lowercase and use dashes (`-`) as separators.
+- For Go language files, filenames should be in lowercase and use underscores (`_`).
+- Package names should match their directory names to ensure consistency. For example, within the `openim-api` directory, the Go file should be named `openim-api.go`, following the convention of using dashes for directory names and aligning package names with directory names.
+
+
+## Testing conventions
+
+Please refer to [TESTING.md](https://github.com/openimsdk/open-im-server-deploy/tree/main/test/readme) document.
diff --git a/docs/contrib/commit.md b/docs/contrib/commit.md
new file mode 100644
index 0000000..661661f
--- /dev/null
+++ b/docs/contrib/commit.md
@@ -0,0 +1,9 @@
+## Commit Standards
+
+Our project, OpenIM, follows the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0) standards.
+
+> Chinese translation: [Conventional Commits: A Specification Making Commit Logs More Human and Machine-friendly](https://tool.lu/en_US/article/2ac/preview)
+
+In addition to adhering to these standards, we encourage all contributors to the OpenIM project to ensure that their commit messages are clear and descriptive. This helps in maintaining a clean and meaningful project history. Each commit message should succinctly describe the changes made and, where necessary, the reasoning behind those changes.
+
+To facilitate a streamlined process, we also recommend using appropriate commit type based on Conventional Commits guidelines such as `fix:` for bug fixes, `feat:` for new features, and so forth. Understanding and using these conventions helps in generating automatic release notes, making versioning easier, and improving overall readability of commit history.
\ No newline at end of file
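+
+A commit subject can be checked against the Conventional Commits shape with a short shell snippet (an illustrative check, not part of the project's tooling):
+
+```shell
+# Validate a commit subject: type(optional scope)!: description
+msg='feat(msg): add read receipts for group chats'
+if printf '%s\n' "$msg" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([a-z0-9-]+\))?!?: .+'; then
+  echo "valid"
+else
+  echo "invalid"
+fi
+```
+
+For the `msg` above the check prints `valid`; a subject such as `update stuff` would print `invalid`.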
diff --git a/docs/contrib/development.md b/docs/contrib/development.md
new file mode 100644
index 0000000..41209e9
--- /dev/null
+++ b/docs/contrib/development.md
@@ -0,0 +1,72 @@
+# Development Guide
+
+Since OpenIM is written in Go, it is fair to assume that the Go tools are all one needs to contribute to this project. Unfortunately, that stops being true once you need to test or build local changes. This document elaborates on the required tooling for OpenIM development.
+
+- [Development Guide](#development-guide)
+ - [Non-Linux environment prerequisites](#non-linux-environment-prerequisites)
+ - [Windows Setup](#windows-setup)
+ - [macOS Setup](#macos-setup)
+ - [Installing Required Software](#installing-required-software)
+ - [Go](#go)
+ - [Docker](#docker)
+ - [Vagrant](#vagrant)
+ - [Dependency management](#dependency-management)
+
+## Non-Linux environment prerequisites
+
+All the test and build scripts in this repository were written for GNU/Linux development environments. Because of this, it is suggested to run them inside the virtual machine defined in this repository's [Vagrantfile](https://developer.hashicorp.com/vagrant/docs/vagrantfile).
+
+That said, if you still want to build and test OpenIM on a non-Linux environment, follow the specific setup instructions below.
+
+### Windows Setup
+
+Building OpenIM on Windows is only possible on versions that support the Windows Subsystem for Linux (WSL). If the development environment in question has Windows 10, Version 2004, Build 19041 or higher, [follow these instructions to install WSL2](https://docs.microsoft.com/en-us/windows/wsl/install-win10); otherwise, use a Linux virtual machine instead.
+
+### macOS Setup
+
+The shell scripts in charge of the build and test processes rely on GNU utils (e.g. `sed`), [which differ slightly on macOS](https://unix.stackexchange.com/a/79357), meaning that one must make some adjustments before using them.
+
+First, install the GNU utils:
+
+```sh
+brew install coreutils findutils gawk gnu-sed gnu-tar grep make
+```
+
+Then update the shell init script (e.g. `.bashrc`) to prepend the GNU utils to the `$PATH` variable:
+
+```sh
+GNUBINS="$(find /usr/local/opt -type d -follow -name gnubin -print)"
+
+for bindir in ${GNUBINS[@]}; do
+ PATH=$bindir:$PATH
+done
+
+export PATH
+```
+
+## Installing Required Software
+
+### Go
+
+It is well known that OpenIM is written in [Go](http://golang.org). Please follow the [Go Getting Started guide](https://golang.org/doc/install) to install and set up the Go tools used to compile and run the test batteries.
+
+| OpenIM | requires Go |
+|----------------|-------------|
+| 2.24 - 3.00 | 1.15 + |
+| 3.30 + | 1.18 + |
+
+### Docker
+
+The OpenIM build and test processes require Docker for certain steps. [Follow the Docker website instructions to install Docker](https://docs.docker.com/get-docker/) in the development environment.
+
+### Vagrant
+
+As described in the [Testing documentation](https://github.com/openimsdk/open-im-server-deploy/tree/main/test/readme), all the smoke tests are run in virtual machines managed by Vagrant. To install Vagrant in the development environment, [follow the instructions from the Hashicorp website](https://www.vagrantup.com/downloads), alongside any of the following hypervisors:
+
+- [VirtualBox](https://www.virtualbox.org/)
+- [libvirt](https://libvirt.org/) and the [vagrant-libvirt plugin](https://github.com/vagrant-libvirt/vagrant-libvirt#installation)
+
+
+## Dependency management
+
+OpenIM uses [go modules](https://github.com/golang/go/wiki/Modules) to manage dependencies.
diff --git a/docs/contrib/directory.md b/docs/contrib/directory.md
new file mode 100644
index 0000000..4e30d38
--- /dev/null
+++ b/docs/contrib/directory.md
@@ -0,0 +1,3 @@
+## Catalog Service Interface Specification
+
++ [https://github.com/kubecub/go-project-layout](https://github.com/kubecub/go-project-layout)
\ No newline at end of file
diff --git a/docs/contrib/environment.md b/docs/contrib/environment.md
new file mode 100644
index 0000000..fb1d9fd
--- /dev/null
+++ b/docs/contrib/environment.md
@@ -0,0 +1,537 @@
+# OpenIM ENVIRONMENT CONFIGURATION
+
+
+* 1. [OpenIM Deployment Guide](#OpenIMDeploymentGuide)
+ * 1.1. [Deployment Strategies](#DeploymentStrategies)
+ * 1.2. [Source Code Deployment](#SourceCodeDeployment)
+ * 1.3. [Docker Compose Deployment](#DockerComposeDeployment)
+ * 1.4. [Environment Variable Configuration](#EnvironmentVariableConfiguration)
+ * 1.4.1. [Recommended using environment variables](#Recommendedusingenvironmentvariables)
+ * 1.4.2. [Additional Configuration](#AdditionalConfiguration)
+ * 1.4.3. [Security Considerations](#SecurityConsiderations)
+ * 1.4.4. [Data Management](#DataManagement)
+ * 1.4.5. [Monitoring and Logging](#MonitoringandLogging)
+ * 1.4.6. [Troubleshooting](#Troubleshooting)
+ * 1.4.7. [Conclusion](#Conclusion)
+ * 1.4.8. [Additional Resources](#AdditionalResources)
+* 2. [Further Configuration](#FurtherConfiguration)
+ * 2.1. [Image Registry Configuration](#ImageRegistryConfiguration)
+ * 2.2. [OpenIM Docker Network Configuration](#OpenIMDockerNetworkConfiguration)
+ * 2.3. [OpenIM Configuration](#OpenIMConfiguration)
+ * 2.4. [OpenIM Chat Configuration](#OpenIMChatConfiguration)
+ * 2.5. [Zookeeper Configuration](#ZookeeperConfiguration)
+ * 2.6. [MySQL Configuration](#MySQLConfiguration)
+ * 2.7. [MongoDB Configuration](#MongoDBConfiguration)
+ * 2.8. [Tencent Cloud COS Configuration](#TencentCloudCOSConfiguration)
+ * 2.9. [Alibaba Cloud OSS Configuration](#AlibabaCloudOSSConfiguration)
+ * 2.10. [Redis Configuration](#RedisConfiguration)
+ * 2.11. [Kafka Configuration](#KafkaConfiguration)
+ * 2.12. [OpenIM Web Configuration](#OpenIMWebConfiguration)
+ * 2.13. [RPC Configuration](#RPCConfiguration)
+ * 2.14. [Prometheus Configuration](#PrometheusConfiguration)
+ * 2.15. [Grafana Configuration](#GrafanaConfiguration)
+ * 2.16. [RPC Port Configuration Variables](#RPCPortConfigurationVariables)
+ * 2.17. [RPC Register Name Configuration](#RPCRegisterNameConfiguration)
+ * 2.18. [Log Configuration](#LogConfiguration)
+ * 2.19. [Additional Configuration Variables](#AdditionalConfigurationVariables)
+ * 2.20. [Prometheus Configuration](#PrometheusConfiguration-1)
+ * 2.20.1. [General Configuration](#GeneralConfiguration)
+ * 2.20.2. [Service-Specific Prometheus Ports](#Service-SpecificPrometheusPorts)
+ * 2.21. [Qiniu Cloud Kodo Configuration](#QiniuCloudKODOConfiguration)
+
+## 0. OpenIM Config File
+
+Ensuring that OpenIM operates smoothly requires clear direction on the configuration file's location. Here's a detailed step-by-step guide on how to provide this essential path to OpenIM:
+
+1. **Using the Command-line Argument**:
+
+ + **For Configuration Path**: When initializing OpenIM, you can specify the path to the configuration file directly using the `-c` or `--config_folder_path` option.
+
+ ```bash
+ ❯ _output/bin/platforms/linux/amd64/openim-api --config_folder_path="/your/config/folder/path"
+ ```
+
+ + **For Port Specification**: Similarly, if you wish to designate a particular port, utilize the `-p` option followed by the desired port number.
+
+ ```bash
+ ❯ _output/bin/platforms/linux/amd64/openim-api -p 1234
+ ```
+
+ Note: If the port is not specified here, OpenIM will fetch it from the configuration file. Setting the port via environment variables isn't supported. We recommend consolidating settings in the configuration file for a more consistent and streamlined setup.
+
+2. **Leveraging the Environment Variable**:
+
+ You have the flexibility to determine OpenIM's configuration path by setting an `OPENIMCONFIG` environment variable. This method provides a seamless way to instruct OpenIM without command-line parameters every time.
+
+ ```bash
+ export OPENIMCONFIG="/path/to/your/config"
+ ```
+
+3. **Relying on the Default Path**:
+
+ In scenarios where neither command-line arguments nor environment variables are provided, OpenIM will intuitively revert to the `config/` directory to locate its configuration.
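+
+The resulting lookup order (command-line flag, then environment variable, then default) can be sketched as follows; `resolve_config` is an illustrative helper, not an actual OpenIM function:
+
+```shell
+# Precedence: --config_folder_path flag > OPENIMCONFIG env var > config/
+resolve_config() {
+  if [ -n "$1" ]; then
+    echo "$1"                      # explicit flag wins
+  elif [ -n "${OPENIMCONFIG:-}" ]; then
+    echo "$OPENIMCONFIG"           # fall back to the environment variable
+  else
+    echo "config/"                 # finally, the default directory
+  fi
+}
+
+resolve_config "/your/config/folder/path"   # flag takes precedence
+export OPENIMCONFIG="/path/to/your/config"
+resolve_config ""                           # env var is used
+unset OPENIMCONFIG
+resolve_config ""                           # default: config/
+```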
+
+
+
+## 1. OpenIM Deployment Guide
+
+Welcome to the OpenIM Deployment Guide! OpenIM offers a versatile and robust instant messaging server, and deploying it can be achieved through various methods. This guide will walk you through the primary deployment strategies, ensuring you can set up OpenIM in a way that best suits your needs.
+
+### 1.1. Deployment Strategies
+
+OpenIM provides multiple deployment methods, each tailored to different use cases and technical preferences:
+
+1. **[Source Code Deployment Guide](https://doc.rentsoft.cn/guides/gettingStarted/imSourceCodeDeployment)**
+2. **[Docker Deployment Guide](https://doc.rentsoft.cn/guides/gettingStarted/dockerCompose)**
+3. **[Kubernetes Deployment Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/deployments)**
+
+While the first two methods are the main focus of this guide, the third, Kubernetes deployment, is also viable; its configuration can likewise be rendered via the `environment.sh` script variables.
+
+### 1.2. Source Code Deployment
+
+In the source code deployment method, the configuration generation process involves executing `make init`, which fundamentally runs the script `./scripts/init-config.sh`. This script utilizes variables defined in the [`environment.sh`](https://github.com/openimsdk/open-im-server-deploy/blob/main/scripts/install/environment.sh) script to render the [`config.yaml`](https://github.com/openimsdk/open-im-server-deploy/blob/main/deployments/templates/config.yaml) template file, subsequently generating the [`config.yaml`](https://github.com/openimsdk/open-im-server-deploy/blob/main/config/config.yaml) configuration file.
+
+### 1.3. Docker Compose Deployment
+
+Docker deployment offers a slightly more intricate template. Within the [openim-server](https://github.com/openimsdk/openim-docker/tree/main/openim-server) directory, multiple subdirectories correspond to various versions, each aligning with `openim-chat` as illustrated below:
+
+| openim-server | openim-chat |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [main](https://github.com/openimsdk/openim-docker/tree/main/openim-server/main) | [main](https://github.com/openimsdk/openim-docker/tree/main/openim-chat/main) |
+| [release-v3.3](https://github.com/openimsdk/openim-docker/tree/main/openim-server/release-v3.3) | [release-v1.3](https://github.com/openimsdk/openim-docker/tree/main/openim-chat/release-v1.3) |
+| [release-v3.2](https://github.com/openimsdk/openim-docker/tree/main/openim-server/release-v3.2) | [release-v1.2](https://github.com/openimsdk/openim-docker/tree/main/openim-chat/release-v1.2) |
+
+Configuration file modifications can be made by specifying corresponding environment variables, for instance:
+
+```bash
+export CHAT_IMAGE_VERSION="main"
+export SERVER_IMAGE_VERSION="main"
+```
+
+These variables are stored within the [`environment.sh`](https://github.com/OpenIMSDK/open-im-server-deploy/blob/main/scripts/install/environment.sh) configuration:
+
+```bash
+readonly CHAT_IMAGE_VERSION=${CHAT_IMAGE_VERSION:-'main'}
+readonly SERVER_IMAGE_VERSION=${SERVER_IMAGE_VERSION:-'main'}
+```
+> [!IMPORTANT]
+> You can learn more about our image version strategy here: https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md
+
+
+Setting a variable, e.g. `export CHAT_IMAGE_VERSION="release-v1.3"`, overrides the default, so `release-v1.3` is used as the image version when the configuration is rendered via `make init` (or `./scripts/init-config.sh`).
+
+> Note: Direct modifications to the `config.yaml` file are also permissible without utilizing `make init`.
+
+### 1.4. Environment Variable Configuration
+
+For convenience, configuration through modifying environment variables is recommended:
+
+#### 1.4.1. Recommended using environment variables
+
++ PASSWORD
+
+ + **Description**: Password for MongoDB, Redis, and MinIO.
+ + **Default**: `openIM123`
+ + Notes:
+ + Minimum password length: 8 characters.
+ + Special characters are not allowed.
+
+ ```bash
+ export PASSWORD="openIM123"
+ ```
+
++ OPENIM_USER
+
+ + **Description**: Username for Redis and MinIO.
+ + **Default**: `root`
+
+ ```bash
+ export OPENIM_USER="root"
+ ```
+
+> The MongoDB username defaults to `openIM`; it can be overridden with `export MONGO_OPENIM_USERNAME="openIM"`.
+
++ OPENIM_IP
+
+ + **Description**: API address.
+ + **Note**: If the server has a public IP, it is detected automatically. For internal-network deployments, set this variable to the server's internal IP address.
+
+ ```bash
+ export OPENIM_IP="ip"
+ ```
+
++ DATA_DIR
+
+ + **Description**: Data mount directory for components.
+ + **Default**: `/data/openim`
+
+ ```bash
+ export DATA_DIR="/data/openim"
+ ```
+
+#### 1.4.2. Additional Configuration
+
+##### MinIO Access and Secret Key
+
+To secure your MinIO server, you should set up an access key and secret key. These credentials are used to authenticate requests to your MinIO server.
+
+```bash
+export MINIO_ACCESS_KEY="YourAccessKey"
+export MINIO_SECRET_KEY="YourSecretKey"
+```
+
+##### MinIO Browser
+
+MinIO comes with an embedded web-based object browser. You can control the availability of the MinIO browser by setting the `MINIO_BROWSER` environment variable.
+
+```bash
+export MINIO_BROWSER="on"
+```
+
+#### 1.4.3. Security Considerations
+
+##### TLS/SSL Configuration
+
+For secure communication, it's recommended to enable TLS/SSL for your MinIO server. You can do this by providing the path to the SSL certificate and key files.
+
+```bash
+export MINIO_CERTS_DIR="/path/to/certs/directory"
+```
+
+#### 1.4.4. Data Management
+
+##### Data Retention Policy
+
+You may want to set up a data retention policy to automatically delete objects after a specified period.
+
+```bash
+export MINIO_RETENTION_DAYS="30"
+```
+
+#### 1.4.5. Monitoring and Logging
+
+##### [Audit Logging](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/environment.md#audit-logging)
+
+Enable audit logging to keep track of access and changes to your data.
+
+```bash
+export MINIO_AUDIT="on"
+```
+
+#### 1.4.6. Troubleshooting
+
+##### Debug Mode
+
+In case of issues, you may enable debug mode to get more detailed logs to assist in troubleshooting.
+
+```bash
+export MINIO_DEBUG="on"
+```
+
+#### 1.4.7. Conclusion
+
+With the environment variables configured to your requirements, your MinIO server should be ready to securely store and manage object data. Verify the setup and monitor the logs for unusual activity or errors. Regularly update the MinIO server and review your configuration to adapt to changes or improvements in MinIO.
+
+#### 1.4.8. Additional Resources
+
++ [MinIO Client Quickstart Guide](https://docs.min.io/docs/minio-client-quickstart-guide)
++ [MinIO Admin Complete Guide](https://docs.min.io/docs/minio-admin-complete-guide)
++ [MinIO Docker Quickstart Guide](https://docs.min.io/docs/minio-docker-quickstart-guide)
+
+Feel free to explore the MinIO documentation for more advanced configurations and usage scenarios.
+
+
+
+## 2. Further Configuration
+
+### 2.1. Image Registry Configuration
+
+**Description**: The image registry configuration allows users to select which registry images are pulled from. The default is the GitHub Container Registry, but users can opt for Docker Hub or Alibaba Cloud; the Alibaba Cloud registry is especially beneficial for users in China due to its local proximity.
+
+| Parameter | Default Value | Description |
+| ---------------- | --------------------- | ------------------------------------------------------------ |
+| `IMAGE_REGISTRY` | `"ghcr.io/openimsdk"` | The registry from which Docker images will be pulled. Other options include `"openim"` and `"registry.cn-hangzhou.aliyuncs.com/openimsdk"`. |
+
+### 2.2. OpenIM Docker Network Configuration
+
+**Description**: This section configures the Docker network subnet and generates IP addresses for various services within the defined subnet.
+
+| Parameter | Example Value | Description |
+| --------------------------- | ----------------- | ------------------------------------------------------------ |
+| `DOCKER_BRIDGE_SUBNET` | `'172.28.0.0/16'` | The subnet for the Docker network. |
+| `DOCKER_BRIDGE_GATEWAY` | Generated IP | The gateway IP address within the Docker subnet. |
+| `[SERVICE]_NETWORK_ADDRESS` | Generated IP | The network IP address for a specific service (e.g., MYSQL, MONGO, REDIS, etc.) within the Docker subnet. |
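+
+With the default subnet, the gateway is the subnet's first host address; a minimal sketch of that derivation (assuming the `/16` layout shown above):
+
+```shell
+DOCKER_BRIDGE_SUBNET='172.28.0.0/16'
+network=${DOCKER_BRIDGE_SUBNET%/*}        # strip the prefix length -> 172.28.0.0
+DOCKER_BRIDGE_GATEWAY="${network%.*}.1"   # first host address      -> 172.28.0.1
+echo "$DOCKER_BRIDGE_GATEWAY"
+```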
+
+### 2.3. OpenIM Configuration
+
+**Description**: OpenIM configuration involves setting up directories for data, installation, configuration, and logs. It also involves configuring the OpenIM server address and ports for WebSocket and API.
+
+| Parameter | Default Value | Description |
+| ----------------------- | ------------------------ | ----------------------------------------- |
+| `OPENIM_DATA_DIR` | `"/data/openim"` | Directory for OpenIM data. |
+| `OPENIM_INSTALL_DIR` | `"/opt/openim"` | Directory where OpenIM is installed. |
+| `OPENIM_CONFIG_DIR` | `"/etc/openim"` | Directory for OpenIM configuration files. |
+| `OPENIM_LOG_DIR` | `"/var/log/openim"` | Directory for OpenIM logs. |
+| `OPENIM_SERVER_ADDRESS` | Docker Bridge Gateway IP | OpenIM server address. |
+| `OPENIM_WS_PORT` | `'10001'` | Port for OpenIM WebSocket. |
+| `API_OPENIM_PORT` | `'10002'` | Port for OpenIM API. |
+
+### 2.4. OpenIM Chat Configuration
+
+**Description**: Configuration for OpenIM chat, including data directory, server address, and ports for API and chat functionalities.
+
+| Parameter | Example Value | Description |
+| ----------------------- | -------------------------- | ------------------------------- |
+| `OPENIM_CHAT_DATA_DIR` | `"./openim-chat/[BRANCH]"` | Directory for OpenIM chat data. |
+| `OPENIM_CHAT_ADDRESS` | Docker Bridge Gateway IP | OpenIM chat service address. |
+| `OPENIM_CHAT_API_PORT` | `"10008"` | Port for OpenIM chat API. |
+| `OPENIM_ADMIN_API_PORT` | `"10009"` | Port for OpenIM Admin API. |
+| `OPENIM_ADMIN_PORT` | `"30200"` | Port for OpenIM chat Admin. |
+| `OPENIM_CHAT_PORT` | `"30300"` | Port for OpenIM chat. |
+
+### 2.5. Zookeeper Configuration
+
+**Description**: Configuration for Zookeeper, including schema, port, address, and credentials.
+
+| Parameter | Example Value | Description |
+| -------------------- | ------------------------ | ----------------------- |
+| `ZOOKEEPER_SCHEMA` | `"openim"` | Schema for Zookeeper. |
+| `ZOOKEEPER_PORT` | `"12181"` | Port for Zookeeper. |
+| `ZOOKEEPER_ADDRESS` | Docker Bridge Gateway IP | Address for Zookeeper. |
+| `ZOOKEEPER_USERNAME` | `""` | Username for Zookeeper. |
+| `ZOOKEEPER_PASSWORD` | `""` | Password for Zookeeper. |
+
+### 2.7. MongoDB Configuration
+
+This section involves setting up MongoDB, including its port, address, and credentials.
+
+
+| Parameter | Example Value | Description |
+| -------------- | -------------- | ----------------------- |
+| MONGO_PORT | "27017" | Port used by MongoDB. |
+| MONGO_ADDRESS | [Generated IP] | IP address for MongoDB. |
+| MONGO_USERNAME | [User Defined] | Admin Username for MongoDB. |
+| MONGO_PASSWORD | [User Defined] | Admin Password for MongoDB. |
+| MONGO_OPENIM_USERNAME | [User Defined] | OpenIM Username for MongoDB. |
+| MONGO_OPENIM_PASSWORD | [User Defined] | OpenIM Password for MongoDB. |
+
+### 2.8. Tencent Cloud COS Configuration
+
+This section involves setting up Tencent Cloud COS, including its bucket URL and credentials.
+
+| Parameter | Example Value | Description |
+| ----------------- | ------------------------------------------------------------ | ------------------------------------ |
+| COS_BUCKET_URL | "[https://temp-1252357374.cos.ap-chengdu.myqcloud.com](https://temp-1252357374.cos.ap-chengdu.myqcloud.com/)" | Tencent Cloud COS bucket URL. |
+| COS_SECRET_ID | [User Defined] | Secret ID for Tencent Cloud COS. |
+| COS_SECRET_KEY | [User Defined] | Secret key for Tencent Cloud COS. |
+| COS_SESSION_TOKEN | [User Defined] | Session token for Tencent Cloud COS. |
+| COS_PUBLIC_READ | "false" | Public read access. |
+
+### 2.9. Alibaba Cloud OSS Configuration
+
+This section involves setting up Alibaba Cloud OSS, including its endpoint, bucket name, and credentials.
+
+| Parameter | Example Value | Description |
+| --------------------- | ------------------------------------------------------------ | ---------------------------------------- |
+| OSS_ENDPOINT | "[https://oss-cn-chengdu.aliyuncs.com](https://oss-cn-chengdu.aliyuncs.com/)" | Endpoint URL for Alibaba Cloud OSS. |
+| OSS_BUCKET | "demo-9999999" | Bucket name for Alibaba Cloud OSS. |
+| OSS_BUCKET_URL | "[https://demo-9999999.oss-cn-chengdu.aliyuncs.com](https://demo-9999999.oss-cn-chengdu.aliyuncs.com/)" | Bucket URL for Alibaba Cloud OSS. |
+| OSS_ACCESS_KEY_ID | [User Defined] | Access key ID for Alibaba Cloud OSS. |
+| OSS_ACCESS_KEY_SECRET | [User Defined] | Access key secret for Alibaba Cloud OSS. |
+| OSS_SESSION_TOKEN | [User Defined] | Session token for Alibaba Cloud OSS. |
+| OSS_PUBLIC_READ | "false" | Public read access. |
+
+### 2.10. Redis Configuration
+
+This section involves setting up Redis, including its port, address, and credentials.
+
+| Parameter | Example Value | Description |
+| -------------- | -------------------------- | --------------------- |
+| REDIS_PORT | "16379" | Port used by Redis. |
+| REDIS_ADDRESS | "${DOCKER_BRIDGE_GATEWAY}" | IP address for Redis. |
+| REDIS_USERNAME | [User Defined] | Username for Redis. |
+| REDIS_PASSWORD | "${PASSWORD}" | Password for Redis. |
+
+### 2.11. Kafka Configuration
+
+This section involves setting up Kafka, including its port, address, credentials, and topics.
+
+| Parameter | Example Value | Description |
+| ---------------------------- | -------------------------- | ----------------------------------- |
+| KAFKA_USERNAME | [User Defined] | Username for Kafka. |
+| KAFKA_PASSWORD | [User Defined] | Password for Kafka. |
+| KAFKA_PORT | "19094" | Port used by Kafka. |
+| KAFKA_ADDRESS | "${DOCKER_BRIDGE_GATEWAY}" | IP address for Kafka. |
+| KAFKA_LATESTMSG_REDIS_TOPIC | "latestMsgToRedis" | Topic for latest message to Redis. |
+| KAFKA_OFFLINEMSG_MONGO_TOPIC | "offlineMsgToMongoMysql" | Topic for offline message to Mongo. |
+| KAFKA_MSG_PUSH_TOPIC | "msgToPush" | Topic for message to push. |
+| KAFKA_CONSUMERGROUPID_REDIS | "redis" | Consumer group ID to Redis. |
+| KAFKA_CONSUMERGROUPID_MONGO | "mongo" | Consumer group ID to Mongo. |
+| KAFKA_CONSUMERGROUPID_MYSQL | "mysql" | Consumer group ID to MySQL. |
+| KAFKA_CONSUMERGROUPID_PUSH | "push" | Consumer group ID to push. |
+
+Note: Be sure to replace placeholder values (such as [User Defined], `${DOCKER_BRIDGE_GATEWAY}`, and `${PASSWORD}`) with actual values before deploying the configuration.
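+
+For illustration, a fragment of such an environment file might look like the sketch below. The variable names come from the tables above; every value, including `DOCKER_BRIDGE_GATEWAY` and `PASSWORD` themselves, is a placeholder to adjust for your deployment.
+
+```bash
+# Hypothetical .env fragment -- replace every value before deploying
+DOCKER_BRIDGE_GATEWAY=172.28.0.1
+PASSWORD=openIM123
+
+# Redis (section 2.10)
+REDIS_PORT=16379
+REDIS_ADDRESS=${DOCKER_BRIDGE_GATEWAY}
+REDIS_PASSWORD=${PASSWORD}
+
+# Kafka (section 2.11)
+KAFKA_PORT=19094
+KAFKA_ADDRESS=${DOCKER_BRIDGE_GATEWAY}
+KAFKA_LATESTMSG_REDIS_TOPIC=latestMsgToRedis
+```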
+
+
+
+### 2.12. OpenIM Web Configuration
+
+This section involves setting up OpenIM Web, including its port, address, and dist path.
+
+| Parameter | Example Value | Description |
+| -------------------- | -------------------------- | ------------------------- |
+| OPENIM_WEB_PORT | "11001" | Port used by OpenIM Web. |
+| OPENIM_WEB_ADDRESS | "${DOCKER_BRIDGE_GATEWAY}" | Address for OpenIM Web. |
+| OPENIM_WEB_DIST_PATH | "/app/dist" | Dist path for OpenIM Web. |
+
+### 2.13. RPC Configuration
+
+Configuration for RPC, including the register and listen IP.
+
+| Parameter | Example Value | Description |
+| --------------- | -------------- | -------------------- |
+| RPC_REGISTER_IP | [User Defined] | Register IP for RPC. |
+| RPC_LISTEN_IP | "0.0.0.0" | Listen IP for RPC. |
+
+### 2.14. Prometheus Configuration
+
+Setting up Prometheus, including its port and address.
+
+| Parameter | Example Value | Description |
+| ------------------ | -------------------------- | ------------------------ |
+| PROMETHEUS_PORT | "19090" | Port used by Prometheus. |
+| PROMETHEUS_ADDRESS | "${DOCKER_BRIDGE_GATEWAY}" | Address for Prometheus. |
+
+### 2.15. Grafana Configuration
+
+Configuration for Grafana, including its port and address.
+
+| Parameter | Example Value | Description |
+| --------------- | -------------------------- | --------------------- |
+| GRAFANA_PORT | "13000" | Port used by Grafana. |
+| GRAFANA_ADDRESS | "${DOCKER_BRIDGE_GATEWAY}" | Address for Grafana. |
+
+### 2.16. RPC Port Configuration Variables
+
+Configuration for various RPC ports. Note: to launch multiple instances of a service, list multiple ports separated by commas, without spaces.
+
+| Parameter | Example Value | Description |
+| --------------------------- | ------------- | ----------------------------------- |
+| OPENIM_USER_PORT | '10110' | OpenIM User Service Port. |
+| OPENIM_FRIEND_PORT | '10120' | OpenIM Friend Service Port. |
+| OPENIM_MESSAGE_PORT | '10130' | OpenIM Message Service Port. |
+| OPENIM_MESSAGE_GATEWAY_PORT | '10140' | OpenIM Message Gateway Service Port |
+| OPENIM_GROUP_PORT | '10150' | OpenIM Group Service Port. |
+| OPENIM_AUTH_PORT | '10160' | OpenIM Authorization Service Port. |
+| OPENIM_PUSH_PORT | '10170' | OpenIM Push Service Port. |
+| OPENIM_CONVERSATION_PORT | '10180' | OpenIM Conversation Service Port. |
+| OPENIM_THIRD_PORT | '10190' | OpenIM Third-Party Service Port. |
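+
+For example, launching three instances of the message service would look like the following sketch (the extra ports are placeholders):
+
+```bash
+# Three instances of the message service: three comma-separated ports, no spaces
+OPENIM_MESSAGE_PORT=10130,10131,10132
+```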
+
+### 2.17. RPC Register Name Configuration
+
+This section involves setting up the RPC Register Names for various OpenIM services.
+
+| Parameter | Example Value | Description |
+| --------------------------- | ---------------- | ----------------------------------- |
+| OPENIM_USER_NAME | "User" | OpenIM User Service Name |
+| OPENIM_FRIEND_NAME | "Friend" | OpenIM Friend Service Name |
+| OPENIM_MSG_NAME | "Msg" | OpenIM Message Service Name |
+| OPENIM_PUSH_NAME | "Push" | OpenIM Push Service Name |
+| OPENIM_MESSAGE_GATEWAY_NAME | "MessageGateway" | OpenIM Message Gateway Service Name |
+| OPENIM_GROUP_NAME | "Group" | OpenIM Group Service Name |
+| OPENIM_AUTH_NAME | "Auth" | OpenIM Authorization Service Name |
+| OPENIM_CONVERSATION_NAME | "Conversation" | OpenIM Conversation Service Name |
+| OPENIM_THIRD_NAME | "Third" | OpenIM Third-Party Service Name |
+
+### 2.18. Log Configuration
+
+This section involves configuring the log settings, including storage location, rotation time, and log level.
+
+| Parameter | Example Value | Description |
+| ------------------------- | ------------------------ | --------------------------------- |
+| LOG_STORAGE_LOCATION | "${OPENIM_ROOT}/_output/logs/" | Location for storing logs |
+| LOG_ROTATION_TIME | "24" | Log rotation time (in hours) |
+| LOG_REMAIN_ROTATION_COUNT | "2" | Number of log rotations to retain |
+| LOG_REMAIN_LOG_LEVEL | "6" | Log level to retain |
+| LOG_IS_STDOUT | "false" | Output log to standard output |
+| LOG_IS_JSON | "false" | Log in JSON format |
+| LOG_WITH_STACK | "false" | Include stack info in logs |
+
+### 2.19. Additional Configuration Variables
+
+This section involves setting up additional configuration variables for Websocket, Push Notifications, and Chat.
+
+| Parameter | Example Value | Description |
+|-------------------------|-------------------|----------------------------------|
+| WEBSOCKET_MAX_CONN_NUM | "100000" | Maximum Websocket connections |
+| WEBSOCKET_MAX_MSG_LEN | "4096" | Maximum Websocket message length |
+| WEBSOCKET_TIMEOUT | "10" | Websocket timeout |
+| PUSH_ENABLE | "getui" | Push notification enable status |
+| GETUI_PUSH_URL | [Generated URL] | GeTui Push Notification URL |
+| GETUI_MASTER_SECRET | [User Defined] | GeTui Master Secret |
+| GETUI_APP_KEY | [User Defined] | GeTui Application Key |
+| GETUI_INTENT | [User Defined] | GeTui Push Intent |
+| GETUI_CHANNEL_ID | [User Defined] | GeTui Channel ID |
+| GETUI_CHANNEL_NAME | [User Defined] | GeTui Channel Name |
+| FCM_SERVICE_ACCOUNT | "x.json" | FCM Service Account |
+| JPUSH_APP_KEY | [User Defined] | JPUSH Application Key |
+| JPUSH_MASTER_SECRET | [User Defined] | JPUSH Master Secret |
+| JPUSH_PUSH_URL | [User Defined] | JPUSH Push Notification URL |
+| JPUSH_PUSH_INTENT | [User Defined] | JPUSH Push Intent |
+| IM_ADMIN_USERID | "imAdmin" | IM Administrator ID |
+| IM_ADMIN_NAME | "imAdmin" | IM Administrator Nickname |
+| MULTILOGIN_POLICY | "1" | Multi-login Policy |
+| CHAT_PERSISTENCE_MYSQL | "true" | Chat Persistence in MySQL |
+| MSG_CACHE_TIMEOUT | "86400" | Message Cache Timeout |
+| GROUP_MSG_READ_RECEIPT | "true" | Group Message Read Receipt Enable |
+| SINGLE_MSG_READ_RECEIPT | "true" | Single Message Read Receipt Enable |
+| RETAIN_CHAT_RECORDS | "365" | Retain Chat Records (in days) |
+| CHAT_RECORDS_CLEAR_TIME | [Cron Expression] | Chat Records Clear Time |
+| MSG_DESTRUCT_TIME | [Cron Expression] | Message Destruct Time |
+| SECRET | "${PASSWORD}" | Secret Key |
+| TOKEN_EXPIRE | "90" | Token Expiry Time |
+| FRIEND_VERIFY | "false" | Friend Verification Enable |
+| IOS_PUSH_SOUND          | "xxx"             | iOS push notification sound      |
+| CALLBACK_ENABLE | "false" | Enable callback |
+| CALLBACK_TIMEOUT | "5" | Maximum timeout for callback call |
+| CALLBACK_FAILED_CONTINUE| "true"            | Continue to the next step if the callback fails |
+
+### 2.20. Prometheus Configuration
+
+This section involves configuring Prometheus, including enabling/disabling it and setting up ports for various services.
+
+#### 2.20.1. General Configuration
+
+| Parameter | Example Value | Description |
+| ------------------- | ------------- | ----------------------------- |
+| `PROMETHEUS_ENABLE` | "false" | Whether to enable Prometheus. |
+
+#### 2.20.2. Service-Specific Prometheus Ports
+
+| Service | Parameter | Default Port Value | Description |
+| ------------------------ | ------------------------ | ---------------------------- | -------------------------------------------------- |
+| User Service | `USER_PROM_PORT` | '20110' | Prometheus port for the User service. |
+| Friend Service | `FRIEND_PROM_PORT` | '20120' | Prometheus port for the Friend service. |
+| Message Service | `MESSAGE_PROM_PORT` | '20130' | Prometheus port for the Message service. |
+| Message Gateway | `MSG_GATEWAY_PROM_PORT` | '20140' | Prometheus port for the Message Gateway. |
+| Group Service | `GROUP_PROM_PORT` | '20150' | Prometheus port for the Group service. |
+| Auth Service | `AUTH_PROM_PORT` | '20160' | Prometheus port for the Auth service. |
+| Push Service | `PUSH_PROM_PORT` | '20170' | Prometheus port for the Push service. |
+| Conversation Service | `CONVERSATION_PROM_PORT` | '20230' | Prometheus port for the Conversation service. |
+| RTC Service | `RTC_PROM_PORT` | '21300' | Prometheus port for the RTC service. |
+| Third Service | `THIRD_PROM_PORT` | '21301' | Prometheus port for the Third service. |
+| Message Transfer Service | `MSG_TRANSFER_PROM_PORT` | '21400, 21401, 21402, 21403' | Prometheus ports for the Message Transfer service. |
+
+
+### 2.21. Qiniu Cloud Kodo Configuration
+
+This section involves setting up Qiniu Cloud Kodo, including its endpoint, bucket name, and credentials.
+
+| Parameter | Example Value | Description |
+| --------------------- | ------------------------------------------------------------ | ---------------------------------------- |
+| KODO_ENDPOINT | "[http://s3.cn-east-1.qiniucs.com](http://s3.cn-east-1.qiniucs.com)" | Endpoint URL for Qiniu Cloud Kodo. |
+| KODO_BUCKET | "demo-9999999" | Bucket name for Qiniu Cloud Kodo. |
+| KODO_BUCKET_URL | "[http://your.domain.com](http://your.domain.com)" | Bucket URL for Qiniu Cloud Kodo. |
+| KODO_ACCESS_KEY_ID | [User Defined] | Access key ID for Qiniu Cloud Kodo. |
+| KODO_ACCESS_KEY_SECRET | [User Defined] | Access key secret for Qiniu Cloud Kodo. |
+| KODO_SESSION_TOKEN | [User Defined] | Session token for Qiniu Cloud Kodo. |
+| KODO_PUBLIC_READ | "false" | Public read access. |
diff --git a/docs/contrib/error-code.md b/docs/contrib/error-code.md
new file mode 100644
index 0000000..169567b
--- /dev/null
+++ b/docs/contrib/error-code.md
@@ -0,0 +1,22 @@
+## Error Code Standards
+
+Error codes are one of the important means for users to locate and solve problems. When an application encounters an exception, users can quickly locate and resolve the problem based on the error code and the description and solution of the error code in the documentation.
+
+### Error Code Naming Standards
+
+- Follow CamelCase notation;
+- Error codes are divided into two levels. For example, `InvalidParameter.BindError`, separated by a `.`. The first-level error code is platform-level, and the second-level error code is resource-level, which can be customized according to the scenario;
+- The second-level error code can only use English letters or numbers ([a-zA-Z0-9]), and should use standard English word spelling, standard abbreviations, RFC term abbreviations, etc.;
+- The error code should avoid multiple definitions of the same semantics, for example: `InvalidParameter.ErrorBind`, `InvalidParameter.BindError`.
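+
+As a sketch, the naming rules above can be checked mechanically; the regular expression below is our reading of the rules (CamelCase letters for the first level, letters or digits for the second), not an official validator:
+
+```bash
+# Rough check of the two-level error code format, e.g. InvalidParameter.BindError
+is_valid_code() {
+  printf '%s' "$1" | grep -Eq '^[A-Z][A-Za-z]+\.[A-Za-z0-9]+$'
+}
+
+is_valid_code "InvalidParameter.BindError" && echo "valid"
+is_valid_code "invalidparameter" || echo "invalid"
+```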
+
+### First-Level Common Error Codes
+
+| Error Code | Error Description | Error Type |
+| ---------------- | ------------------------------------------------------------ | ---------- |
+| InternalError | Internal error | 1 |
+| InvalidParameter | Parameter error (including errors in parameter type, format, value, etc.) | 0 |
+| AuthFailure | Authentication / Authorization error | 0 |
+| ResourceNotFound | Resource does not exist | 0 |
+| FailedOperation | Operation failed | 2 |
+
+> Error Type: 0 represents the client, 1 represents the server, 2 represents both the client / server.
\ No newline at end of file
diff --git a/docs/contrib/git-workflow.md b/docs/contrib/git-workflow.md
new file mode 100644
index 0000000..30fc95b
--- /dev/null
+++ b/docs/contrib/git-workflow.md
@@ -0,0 +1,102 @@
+# Git workflows
+
+This document is an overview of OpenIM git workflow. It includes conventions, tips, and how to maintain good repository hygiene.
+
+- [Git workflows](#git-workflows)
+ - [Branching model](#branching-model)
+ - [Branch naming conventions](#branch-naming-conventions)
+ - [Backport policy](#backport-policy)
+ - [Git operations](#git-operations)
+ - [Setting up](#setting-up)
+ - [Branching out](#branching-out)
+ - [Keeping local branches in sync](#keeping-local-branches-in-sync)
+ - [Pushing changes](#pushing-changes)
+
+## Branching model
+
+The OpenIM project uses the [GitHub flow](https://docs.github.com/en/get-started/quickstart/github-flow) as its branching model, where most changes come from repository forks rather than branches within the same repository.
+
+### Branch naming conventions
+
+Every forked repository works independently, meaning that any contributor can create branches with the name they see fit. However, it is worth noting that OpenIM mirrors [OpenIM version skew policy](https://github.com/openimsdk/open-im-server-deploy/releases) by maintaining release branches for the most recent three minor releases. The only exception is that the main branch mirrors the latest OpenIM release (3.10) instead of using a `release-` prefixed one.
+
+```text
+main -------------------------------------------. (OpenIM 3.10)
+release-3.0.0 \---------------|---------------. (OpenIM 3.00)
+release-2.4.0 \---------------. (OpenIM 2.40)
+```
+
+
+### Backport policy
+
+All new work happens on the main branch, which means that for most cases, one should branch out from there and create the pull request against it. If the change involves adding a feature or patching OpenIM, the maintainers will backport it into the supported release branches.
+
+## Git operations
+
+There are everyday tasks related to git that every contributor needs to perform, and this section elaborates on them.
+
+### Setting up
+
+Creating an OpenIM fork, cloning it, and setting its upstream remote can be summarized as:
+
+1. Visit https://github.com/openimsdk/open-im-server-deploy
+2. Click the `Fork` button (top right) to establish a cloud-based fork
+3. Clone your fork to local storage
+4. Add the OpenIM repository as the `upstream` remote of your fork
+
+Once done, the commands look like this:
+
+```sh
+## Clone fork to local storage
+export user="your github profile name"
+git clone https://github.com/$user/open-im-server-deploy.git
+# or: git clone git@github.com:$user/open-im-server-deploy.git
+
+## Add OpenIM as upstream to your fork
+cd open-im-server-deploy
+git remote add upstream https://github.com/openimsdk/open-im-server-deploy.git
+# or: git remote add upstream git@github.com:openimsdk/open-im-server-deploy.git
+
+## Ensure to never push to upstream directly
+git remote set-url --push upstream no_push
+
+## Confirm that your remotes make sense:
+git remote -v
+```
+
+### Branching out
+
+Every time you want to work on a new OpenIM feature:
+
+1. Get local main branch up to date
+2. Create a new branch from the main one (i.e.: myfeature branch )
+
+In code it would look this way:
+
+```sh
+## Get local main up to date
+# Assuming the OpenIM clone is the current working directory
+git fetch upstream
+git checkout main
+git rebase upstream/main
+
+## Create a new branch from main
+git checkout -b myfeature
+```
+
+### Keeping local branches in sync
+
+Whether you branched out from main or from a release branch, it is worth checking periodically whether new changes have been pushed upstream:
+
+```sh
+git fetch upstream
+git rebase upstream/main
+```
+
+It is suggested to `fetch` then `rebase` instead of `pull` since the latter does a merge, which leaves merge commits. For this, one can consider changing the local repository configuration by doing `git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`.
+
+### Pushing changes
+
+For commit messages and signatures please refer to the [CONTRIBUTING.md](../../CONTRIBUTING.md) document.
+
+Nobody should push directly to upstream, even if one has such contributor access; instead, prefer [GitHub's pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) mechanism to contribute back into OpenIM. For expectations and guidelines about pull requests, consult the [CONTRIBUTING.md](../../CONTRIBUTING.md) document.
diff --git a/docs/contrib/gitcherry-pick.md b/docs/contrib/gitcherry-pick.md
new file mode 100644
index 0000000..3affb5c
--- /dev/null
+++ b/docs/contrib/gitcherry-pick.md
@@ -0,0 +1,176 @@
+# Git Cherry-Pick Guide
+
+- Git Cherry-Pick Guide
+  - [Introduction](#introduction)
+  - [What is git cherry-pick?](#what-is-git-cherry-pick)
+  - [Prerequisites](#prerequisites)
+  - [What Kind of PRs are Good for Cherry Picks](#what-kind-of-prs-are-good-for-cherry-picks)
+  - [Initiate a Cherry Pick](#initiate-a-cherry-pick)
+  - [Cherry Pick Review](#cherry-pick-review)
+  - [Using git cherry-pick](#using-git-cherry-pick)
+  - [Applying Multiple Commits](#applying-multiple-commits)
+  - [Configurations](#configurations)
+  - [Handling Conflicts](#handling-conflicts)
+  - [Applying Commits from Another Repository](#applying-commits-from-another-repository)
+
+## Introduction
+
+Author: @cubxxw
+
+As OpenIM has progressively embarked on a standardized path, I've had the honor of initiating a significant project, `git cherry-pick`. While some may see it as merely a naming convention in the Go language, it represents more. It's a thoughtful design within the OpenIM project, my very first conscious design, and a first in laying out an extensive collaboration process and copyright management with goals of establishing a top-tier community standard.
+
+## What is git cherry-pick?
+
+In multi-branch repositories, transferring commits from one branch to another is common. You can either merge all changes from one branch (using `git merge`) or selectively apply certain commits. This selective application of commits is where `git cherry-pick` comes into play.
+
+Our collaboration strategy with GitHub necessitates maintenance of multiple `release-v*` branches alongside the `main` branch. To manage this, we mainly develop on the `main` branch and selectively merge into `release-v*` branches. This ensures the `main` branch stays current while the `release-v*` branches remain stable.
+
+Ensuring this strategy's success extends beyond just documentation; it hinges on well-engineered solutions and automation tools, like Makefile, powerful CI/CD processes, and even Prow.
+
+## Prerequisites
+
+- [Contributor License Agreement](https://github.com/openim-sigs/cla) is considered implicit for all code within cherry pick pull requests, **unless there is a large conflict**.
+- A pull request merged against the `main` branch.
+- The release branch exists (example: [`release-v3.1`](https://github.com/openimsdk/open-im-server-deploy/tree/release-v3.1))
+- The normal git and GitHub configured shell environment for pushing to your openim-server `origin` fork on GitHub and making a pull request against a configured remote `upstream` that tracks `https://github.com/openimsdk/open-im-server-deploy.git`, including `GITHUB_USER`.
+- Have GitHub CLI (`gh`) installed following [installation instructions](https://github.com/cli/cli#installation).
+- A github personal access token which has permissions "repo" and "read:org". Permissions are required for [gh auth login](https://cli.github.com/manual/gh_auth_login) and not used for anything unrelated to cherry-pick creation process (creating a branch and initiating PR).
+
+## What Kind of PRs are Good for Cherry Picks
+
+Compared to the main branch's merge volume over time, the release branches see one to two orders of magnitude fewer PRs, because changes to them receive correspondingly higher scrutiny. Again, the emphasis is on critical bug fixes, e.g.,
+
+- Loss of data
+- Memory corruption
+- Panic, crash, hang
+- Security
+
+A bugfix for a functional issue (not a data loss or security issue) that only affects an alpha feature does not qualify as a critical bug fix.
+
+If you are proposing a cherry pick and it is not a clear and obvious critical bug fix, please reconsider. If upon reflection you wish to continue, bolster your case by supplementing your PR with e.g.,
+
+- A GitHub issue detailing the problem
+- Scope of the change
+- Risks of adding a change
+- Risks of associated regression
+- Testing performed, test cases added
+- Key stakeholder SIG reviewers/approvers attesting to their confidence in the change being a required backport
+
+If the change is in cloud provider-specific platform code (which is in the process of being moved out of core openim-server), describe the customer impact, how the issue escaped initial testing, remediation taken to prevent similar future escapes, and why the change cannot be carried in your downstream fork of the openim-server project branches.
+
+It is critical that our full community is actively engaged on enhancements in the project. If a released feature was not enabled on a particular provider's platform, this is a community miss that needs to be resolved in the `main` branch for subsequent releases. Such enabling will not be backported to the patch release branches.
+
+## Initiate a Cherry Pick
+
+### Before you begin
+
+- Plan to initiate a cherry-pick against *every* supported release branch. If you decide to skip some release branch, explain your decision in a comment to the PR being cherry-picked.
+- Initiate cherry-picks in order, from the newest to the oldest supported release branch. For example, if 3.1 is the newest supported release branch, then, before cherry-picking to 2.25, make sure the cherry-pick PR already exists for 2.26 and 3.1. This helps to prevent regressions as a result of an upgrade to the next release.
+
+### Steps
+
+- Run the [cherry pick script](https://github.com/openimsdk/open-im-server-deploy/tree/main/scripts/cherry-pick.sh)
+
+ This example applies a main branch PR #98765 to the remote branch `upstream/release-v3.1`:
+
+ ```
+ scripts/cherry-pick.sh upstream/release-v3.1 98765
+ ```
+
+ - Be aware the cherry pick script assumes you have a git remote called `upstream` that points at the openim-server github org.
+
+ Please see our [recommended Git workflow](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/github-workflow.md#workflow).
+
+ - You will need to run the cherry pick script separately for each patch release you want to cherry pick to. Cherry picks should be applied to all [active](https://github.com/openimsdk/open-im-server-deploy/releases) release branches where the fix is applicable.
+
+ - If `GITHUB_TOKEN` is not set you will be asked for your github password: provide the github [personal access token](https://github.com/settings/tokens) rather than your actual github password. If you can securely set the environment variable `GITHUB_TOKEN` to your personal access token then you can avoid an interactive prompt. Refer [mislav/hub#2655 (comment)](https://github.com/mislav/hub/issues/2655#issuecomment-735836048)
+
+- Your cherry pick PR will immediately get the `do-not-merge/cherry-pick-not-approved` label.
+
+
+## Cherry Pick Review
+
+As with any other PR, code OWNERS review (`/lgtm`) and approve (`/approve`) on cherry pick PRs as they deem appropriate.
+
+The same release note requirements apply as normal pull requests, except the release note stanza will auto-populate from the main branch pull request from which the cherry pick originated.
+
+
+## Using git cherry-pick
+
+`git cherry-pick` applies specified commits from one branch to another.
+
+```bash
+$ git cherry-pick <commitHash>
+```
+
+As an example, consider a repository with `main` and `release-v3.1` branches. To apply commit `f` from the `release-v3.1` branch to the `main` branch:
+
+```
+# Switch to main branch
+$ git checkout main
+
+# Perform cherry-pick
+$ git cherry-pick f
+```
+
+You can also use a branch name instead of a commit hash to cherry-pick the latest commit from that branch.
+
+```bash
+$ git cherry-pick release-v3.1
+```
+
+## Applying Multiple Commits
+
+To apply multiple commits simultaneously:
+
+```bash
+$ git cherry-pick <HashA> <HashB>
+```
+
+To apply a range of consecutive commits (`HashA` must be an older ancestor of `HashB`; note that `HashA` itself is not included):
+
+```bash
+$ git cherry-pick <HashA>..<HashB>
+```
+
+## Configurations
+
+Here are some commonly used configurations for `git cherry-pick`:
+
+- **`-e`, `--edit`**: Open an external editor to modify the commit message.
+- **`-n`, `--no-commit`**: Update the working directory and staging area without creating a new commit.
+- **`-x`**: Append a reference in the commit message for tracking the source of the cherry-picked commit.
+- **`-s`, `--signoff`**: Add a sign-off message at the end of the commit indicating who performed the cherry-pick.
+- **`-m parent-number`, `--mainline parent-number`**: When the original commit is a merge of two branches, specify which parent branch's changes should be used.
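+
+To see `-x` in action, the following self-contained sketch builds a throwaway repository and cherry-picks a commit onto the default branch; all file names and messages are illustrative:
+
+```bash
+set -e
+tmp=$(mktemp -d) && cd "$tmp"
+git init -q
+git config user.email demo@example.com
+git config user.name demo
+echo base > file.txt && git add file.txt && git commit -qm "base"
+main=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
+git checkout -qb feature
+echo fix >> file.txt && git commit -qam "fix: critical bug"
+fixhash=$(git rev-parse HEAD)
+git checkout -q "$main"
+git cherry-pick -x "$fixhash"           # -x appends "(cherry picked from commit ...)"
+git log -1 --format=%B                  # the message now records the source commit
+```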
+
+## Handling Conflicts
+
+If conflicts arise during the cherry-pick:
+
+- **`--continue`**: After resolving conflicts, stage the changes with `git add .` and then continue the cherry-pick process.
+- **`--abort`**: Abandon the cherry-pick and revert to the previous state.
+- **`--quit`**: Exit the cherry-pick without reverting to the previous state.
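+
+The following throwaway sketch provokes a conflict on purpose and then abandons the cherry-pick with `--abort`; names are illustrative:
+
+```bash
+tmp=$(mktemp -d) && cd "$tmp"
+git init -q
+git config user.email demo@example.com
+git config user.name demo
+echo one > f.txt && git add f.txt && git commit -qm "base"
+main=$(git symbolic-ref --short HEAD)
+git checkout -qb topic
+echo two > f.txt && git commit -qam "topic change"
+git checkout -q "$main"
+echo three > f.txt && git commit -qam "main change"
+git cherry-pick topic || echo "conflict, as expected"   # both branches edited f.txt
+git cherry-pick --abort                                 # restores the pre-cherry-pick state
+cat f.txt                                               # prints "three" again
+```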
+
+## Applying Commits from Another Repository
+
+You can also cherry-pick commits from another repository:
+
+1. Add the external repository as a remote:
+
+ ```
+ $ git remote add target git://gitUrl
+ ```
+
+2. Fetch the commits from the remote:
+
+ ```
+ $ git fetch target
+ ```
+
+3. Identify the commit hash you wish to cherry-pick:
+
+ ```
+ $ git log target/main
+ ```
+
+4. Perform the cherry-pick:
+
+ ```
+   $ git cherry-pick <commitHash>
+ ```
\ No newline at end of file
diff --git a/docs/contrib/github-workflow.md b/docs/contrib/github-workflow.md
new file mode 100644
index 0000000..611f323
--- /dev/null
+++ b/docs/contrib/github-workflow.md
@@ -0,0 +1,283 @@
+---
+title: "GitHub Workflow"
+weight: 6
+description: |
+ This document is an overview of the GitHub workflow used by the
+ open-im-server-deploy project. It includes tips and suggestions on keeping your
+ local environment in sync with upstream and how to maintain good
+ commit hygiene.
+---
+
+
+## 1. Fork in the cloud
+
+1. Visit https://github.com/openimsdk/open-im-server-deploy
+2. Click `Fork` button (top right) to establish a cloud-based fork.
+
+## 2. Clone fork to local storage
+
+Per Go's [workspace instructions][go-workspace], place the open-im-server-deploy code on your
+`GOPATH` using the following cloning procedure.
+
+[go-workspace]: https://golang.org/doc/code.html#Workspaces
+
+In your shell, define a local working directory as `working_dir`. If your `GOPATH` has multiple paths, pick
+just one and use it instead of `$GOPATH`. You must follow exactly this pattern,
+neither `$GOPATH/src/github.com/${your github profile name}/`
+nor any other pattern will work.
+
+```sh
+export working_dir="$(go env GOPATH)/src/github.com/openimsdk"
+```
+
+If you already do Go development on github, the `github.com/openimsdk` directory
+will be a sibling to your existing `github.com` directory.
+
+Set `user` to match your github profile name:
+
+```sh
+export user="your github profile name"
+```
+
+Both `$working_dir` and `$user` are used in the commands below.
+
+Create your clone:
+
+```sh
+mkdir -p $working_dir
+cd $working_dir
+git clone https://github.com/$user/open-im-server-deploy.git
+# or: git clone git@github.com:$user/open-im-server-deploy.git
+
+cd $working_dir/open-im-server-deploy
+git remote add upstream https://github.com/openimsdk/open-im-server-deploy.git
+# or: git remote add upstream git@github.com:openimsdk/open-im-server-deploy.git
+
+# Never push to upstream master
+git remote set-url --push upstream no_push
+
+# Confirm that your remotes make sense:
+git remote -v
+```
+
+## 3. Create a Working Branch
+
+Get your local master up to date. Note that depending on which repository you are working from,
+the default branch may be called "main" instead of "master".
+
+```sh
+cd $working_dir/open-im-server-deploy
+git fetch upstream
+git checkout master
+git rebase upstream/master
+```
+
+Create your new branch.
+
+```sh
+git checkout -b myfeature
+```
+
+You may now edit files on the `myfeature` branch.
+
+### Building open-im-server-deploy
+
+This workflow is process-specific. For quick-start build instructions, see the [openimsdk/open-im-server-deploy Makefile utilities guide](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/util-makefile.md).
+
+## 4. Keep your branch in sync
+
+You will need to periodically fetch changes from the `upstream`
+repository to keep your working branch in sync. Note that depending on which repository you are working from,
+the default branch may be called 'main' instead of 'master'.
+
+Make sure your local repository is on your working branch and run the
+following commands to keep it in sync:
+
+```sh
+git fetch upstream
+git rebase upstream/master
+```
+
+Please don't use `git pull` instead of the above `fetch` and
+`rebase`. Since `git pull` executes a merge, it creates merge commits. These make the commit history messy
+and violate the principle that commits ought to be individually understandable
+and useful (see below).
+
+You might also consider changing your `.git/config` file via
+`git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`.
+
+## 5. Commit Your Changes
+
+You will probably want to regularly commit your changes. It is likely that you will go back and edit,
+build, and test multiple times. After a few cycles of this, you might
+[amend your previous commit](https://www.w3schools.com/git/git_amend.asp).
+
+```sh
+git commit
+```
+
+## 6. Push to GitHub
+
+When your changes are ready for review, push your working branch to
+your fork on GitHub.
+
+```sh
+git push -f origin myfeature
+```
+
+## 7. Create a Pull Request
+
+1. Visit your fork at `https://github.com/$user/open-im-server-deploy`
+2. Click the **Compare & Pull Request** button next to your `myfeature` branch.
+3. Check out the pull request process for more details and
+ advice.
+
+_If you have upstream write access_, please refrain from using the GitHub UI for
+creating PRs, because GitHub will create the PR branch inside the main
+repository rather than inside your fork.
+
+### Get a code review
+
+Once your pull request has been opened it will be assigned to one or more
+reviewers. Those reviewers will do a thorough code review, looking for
+correctness, bugs, opportunities for improvement, documentation and comments,
+and style.
+
+Commit changes made in response to review comments to the same branch on your
+fork.
+
+Very small PRs are easy to review. Very large PRs are very difficult to review.
+
+### Squash commits
+
+After a review, prepare your PR for merging by squashing your commits.
+
+All commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process.
+
+Before merging a PR, squash the following kinds of commits:
+
+- Fixes/review feedback
+- Typos
+- Merges and rebases
+- Work in progress
+
+Aim to have every commit in a PR compile and pass tests independently if you can, but it's not a requirement. In particular, `merge` commits must be removed, as they will not pass tests.
+
+To squash your commits, perform an [interactive rebase](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History):
+
+1. Check your git branch:
+
+ ```
+ git status
+ ```
+
+ The output should be similar to this:
+
+ ```
+ On branch your-contribution
+ Your branch is up to date with 'origin/your-contribution'.
+ ```
+
+2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using `HEAD~<n>`, where `<n>` represents the number of commits to include in the rebase.
+
+ ```
+ git rebase -i HEAD~3
+ ```
+
+ The output should be similar to this:
+
+ ```
+ pick 2ebe926 Original commit
+ pick 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ # Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
+ #
+ # Commands:
+ # p, pick = use commit
+ # r, reword = use commit, but edit the commit message
+ # e, edit = use commit, but stop for amending
+ # s, squash = use commit, but meld into previous commit
+ # f, fixup = like "squash", but discard this commit's log message
+
+ ...
+
+ ```
+
+3. Use a command line text editor to change the word `pick` to `squash` for the commits you want to squash, then save your changes and continue the rebase:
+
+ ```
+ pick 2ebe926 Original commit
+ squash 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ ...
+
+ ```
+
+ The output after saving changes should look similar to this:
+
+ ```
+ [detached HEAD 61fdded] Second unit of work
+ Date: Thu Mar 5 19:01:32 2020 +0100
+ 2 files changed, 15 insertions(+), 1 deletion(-)
+
+ ...
+
+   Successfully rebased and updated refs/heads/your-contribution.
+ ```
+4. Force push your changes to your remote branch:
+
+ ```
+ git push --force
+ ```
+
+For mass automated fixups such as automated doc formatting, use one or more
+commits for the changes to tooling and a final commit to apply the fixup en
+masse. This makes reviews easier.
+
+An alternative to this manual squashing process is to use the Prow and Tide automation configured in GitHub: adding a comment with `/label tide/merge-method-squash` to your PR triggers the automation so that GitHub squashes your commits onto the target branch once the PR is approved. This approach simplifies things for those less familiar with Git, but there are situations where it's better to squash locally; reviewers will keep this in mind and can ask for manual squashing.
+
+By squashing locally, you control the commit message(s) for your work, and can separate a large PR into logically separate changes.
+For example: you have a pull request that is code complete and has 24 commits. You rebase this against the same merge base, simplifying the change to two commits. Each of those two commits represents a single logical change and each commit message summarizes what changes. Reviewers see that the set of changes are now understandable, and approve your PR.
+
+## Merging a commit
+
+Once you've received review and approval and your commits are squashed, your PR is ready for merging.
+
+Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven't squashed your commits, they may ask you to do so before approving a PR.
+
+## Reverting a commit
+
+In case you wish to revert a commit, use the following instructions.
+
+_If you have upstream write access_, please refrain from using the
+`Revert` button in the GitHub UI for creating the PR, because GitHub
+will create the PR branch inside the main repository rather than inside your fork.
+
+- Create a branch and sync it with upstream. Note that depending on which repository you are working from, the default branch may be called 'main' instead of 'master'.
+ ```sh
+ # create a branch
+ git checkout -b myrevert
+
+ # sync the branch with upstream
+ git fetch upstream
+ git rebase upstream/master
+ ```
+- If the commit you wish to revert is a *merge commit*, use this command:
+ ```sh
+ # SHA is the hash of the merge commit you wish to revert
+  git revert -m 1 <SHA>
+ ```
+ If it is a *single commit*, use this command:
+ ```sh
+ # SHA is the hash of the single commit you wish to revert
+  git revert <SHA>
+ ```
+
+- This will create a new commit reverting the changes. Push this new commit to your remote.
+ ```sh
+  git push origin myrevert
+ ```
+
+- Finally, [create a Pull Request](#7-create-a-pull-request) using this branch.
\ No newline at end of file
diff --git a/docs/contrib/go-code.md b/docs/contrib/go-code.md
new file mode 100644
index 0000000..79a7271
--- /dev/null
+++ b/docs/contrib/go-code.md
@@ -0,0 +1,1478 @@
+## OpenIM development specification
+We have very high standards for code style and specification, and we want our products to be polished and perfect
+
+## 1. Code style
+
+### 1.1 Code format
+
+- Code must be formatted with `gofmt`.
+- Leave spaces between operators and operands.
+- It is recommended that a line of code does not exceed 120 characters. If the part exceeds, please use an appropriate line break method. But there are also some exception scenarios, such as import lines, code automatically generated by tools, and struct fields with tags.
+- The file length cannot exceed 800 lines.
+- Function length cannot exceed 80 lines.
+- Import specification:
+  - All code must be formatted with `goimports` (it is recommended to set your Go code editor to run `goimports` on save).
+  - Do not use relative paths to import packages, such as `import ../util/net`.
+  - Import aliases must be used when the package name does not match the last directory name of the import path, or when multiple identical package names conflict.
+
+```go
+// bad
+"github.com/dgrijalva/jwt-go/v4"
+
+// good
+jwt "github.com/dgrijalva/jwt-go/v4"
+```
+- Imported packages should be grouped, with anonymous package imports in their own group and a comment explaining why each is imported.
+
+```go
+import (
+ // go standard package
+ "fmt"
+
+ // third party package
+ "github.com/jinzhu/gorm"
+ "github.com/spf13/cobra"
+ "github.com/spf13/viper"
+
+ // Anonymous packages are grouped separately, and anonymous package references are explained
+ // import mysql driver
+ _ "github.com/jinzhu/gorm/dialects/mysql"
+
+ // inner package
+)
+```
+
+### 1.2 Declaration, initialization and definition
+
+When multiple variables are needed in a function, they can be declared with `var` at the beginning of the function. Declarations outside a function must use `var`; do not use `:=`, which makes it easy to get the variable's scope wrong.
+
+```go
+var (
+ Width int
+ Height int
+)
+```
+
+- When initializing a structure reference, please use `&T{}` instead of `new(T)` to make it consistent with structure initialization.
+
+```go
+ // bad
+ sptr := new(T)
+ sptr.Name = "bar"
+
+ // good
+ sptr := &T{Name: "bar"}
+```
+
+- The struct declaration and initialization format takes multiple lines and is defined as follows.
+
+```go
+type User struct {
+	Username string
+	Email    string
+}
+
+user := User{
+	Username: "belm",
+	Email:    "nosbelm@qq.com",
+}
+```
+
+- Similar declarations are grouped together, and the same applies to constant, variable, and type declarations.
+
+```go
+// bad
+import "a"
+import "b"
+
+// good
+import (
+ "a"
+ "b"
+)
+```
+
+- Specify container capacity where possible to pre-allocate memory for the container, for example:
+
+```go
+v := make(map[int]string, 4)
+v := make([]string, 0, 4)
+```
+
+- At the top level, use the standard var keyword. Do not specify a type unless it is different from the type of the expression.
+
+```go
+// bad
+var s string = F()
+
+func F() string { return "A" }
+
+// good
+var s = F()
+// Since F already explicitly returns a string, we don't need to
+// explicitly specify the type of s; it is still of that type.
+
+func F() string { return "A" }
+```
+
+- This example emphasizes using PascalCase for exported constants and camelCase for unexported ones, avoiding all caps and underscores.
+
+```go
+// bad
+const (
+ MAX_COUNT = 100
+ timeout = 30
+)
+
+// good
+const (
+ MaxCount = 100 // Exported constants should use PascalCase.
+ defaultTimeout = 30 // Unexported constants should use camelCase.
+)
+```
+
+- Grouping related constants enhances organization and readability, especially when there are multiple constants related to a particular feature or configuration.
+
+```go
+// bad
+const apiVersion = "v1"
+const retryInterval = 5
+
+// good
+const (
+ ApiVersion = "v1" // Group related constants together for better organization.
+ RetryInterval = 5
+)
+```
+
+- The "good" practice utilizes iota for a clear, concise, and auto-incrementing way to define enumerations, reducing the potential for errors and improving maintainability.
+
+```go
+// bad
+const (
+ StatusActive = 0
+ StatusInactive = 1
+ StatusUnknown = 2
+)
+
+// good
+const (
+ StatusActive = iota // Use iota for simple and efficient constant enumerations.
+ StatusInactive
+ StatusUnknown
+)
+```
+
+- Specifying types explicitly improves clarity, especially when the purpose or type of a constant might not be immediately obvious. Additionally, adding comments to exported constants or those whose purpose isn't clear from the name alone can greatly aid in understanding the code.
+
+```go
+// bad
+const serverAddress = "localhost:8080"
+const debugMode = 1 // Is this supposed to be a boolean or an int?
+
+// good
+const ServerAddress string = "localhost:8080" // Specify type for clarity.
+// DebugMode indicates if the application should run in debug mode (true for debug mode).
+const DebugMode bool = true
+```
+
+- By defining a contextKey type and making userIDKey of this type, you avoid potential collisions with other context keys. This approach leverages Go's type system to provide compile-time checks against misuse.
+
+```go
+// bad
+const userIDKey = "userID"
+
+// In this example, userIDKey is a string type, which can lead to conflicts or accidental misuse because string keys are prone to typos and collisions in a global namespace.
+
+
+// good
+type contextKey string
+
+const userIDKey contextKey = "userID"
+```
+
+
+- Embedded types (such as mutexes) should be at the top of the field list within the struct, and there must be a blank line separating embedded fields from regular fields.
+
+```go
+// bad
+type Client struct {
+ version int
+ http.Client
+}
+
+// good
+type Client struct {
+ http.Client
+
+ version int
+}
+```
+
+### 1.3 Error Handling
+
+- `error` is returned as the value of the function, `error` must be handled, or the return value assigned to explicitly ignore. For `defer xx.Close()`, there is no need to explicitly handle it.
+
+```go
+func load() error {
+	// normal code
+}
+
+// bad
+load()
+
+// good
+_ = load()
+```
+
+- When `error` is returned as the value of a function and there are multiple return values, `error` must be the last parameter.
+
+```go
+// bad
+func load() (error, int) {
+	// normal code
+}
+
+// good
+func load() (int, error) {
+	// normal code
+}
+```
+
+- Perform error handling as early as possible and return as early as possible to reduce nesting.
+
+```go
+// bad
+if err != nil {
+	// error code
+} else {
+	// normal code
+}
+
+// good
+if err != nil {
+	// error handling
+	return err
+}
+// normal code
+```
+
+- If you need to use the result of the function call outside if, you should use the following method.
+
+```go
+// bad
+if v, err := foo(); err != nil {
+	// error handling
+}
+
+// good
+v, err := foo()
+if err != nil {
+	// error handling
+}
+```
+
+- Errors should be judged independently, not combined with other logic.
+
+```go
+// bad
+v, err := foo()
+if err != nil || v == nil {
+ // error handling
+ return err
+}
+
+// good
+v, err := foo()
+if err != nil {
+	// error handling
+	return err
+}
+
+if v == nil {
+	// error handling
+	return errors.New("invalid value v")
+}
+```
+
+- If the return value needs to be initialized, use the following method.
+
+```go
+v, err := f()
+if err != nil {
+ // error handling
+ return // or continue.
+}
+```
+
+- Error description suggestions:
+  - Error descriptions start with a lowercase letter and do not end with punctuation, for example:
+
+```go
+// bad
+errors.New("Redis connection failed")
+errors.New("redis connection failed.")
+
+// good
+errors.New("redis connection failed")
+```
+
+- Tell users what they can do, not what they can't.
+- When declaring a requirement, use must instead of should. For example, `must be greater than 0, must match regex '[a-z]+'`.
+- When declaring that a format is incorrect, use must not. For example, `must not contain`.
+- Use may not when declaring an action. For example, `may not be specified when otherField is empty, only name may be specified`.
+- When quoting a literal string value, indicate the literal in single quotes. For example, `must not contain '..'`.
+- When referencing another field name, specify that name in backticks. For example, must be greater than `request`.
+- When specifying unequal, use words instead of symbols. For example, `must be less than 256, must be greater than or equal to 0 (do not use larger than, bigger than, more than, higher than)`.
+- When specifying ranges of numbers, use inclusive ranges whenever possible.
+- Go 1.13 or above is recommended, and the error generation method is `fmt.Errorf("module xxx: %w", err)`.
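+
+The rules above can be sketched in a short, hypothetical example (the `errEmptyAddr` sentinel and `dial` function are illustrative, not part of OpenIM):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// errEmptyAddr follows the description rules: lowercase start, no
+// trailing punctuation, and "must not" phrasing for the requirement.
+var errEmptyAddr = errors.New("address must not be empty")
+
+// dial wraps the error with module context using the Go 1.13+ %w verb.
+func dial(addr string) error {
+	if addr == "" {
+		return fmt.Errorf("module redis: %w", errEmptyAddr)
+	}
+	return nil
+}
+
+func main() {
+	fmt.Println(dial("")) // module redis: address must not be empty
+}
+```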
+
+### 1.4 Panic Processing
+
+The use of `panic` should be carefully controlled in Go applications to ensure program stability and predictable error handling. Following are revised guidelines emphasizing the restriction on using `panic` and promoting alternative strategies for error handling and program termination.
+
+- **Prohibited in Business Logic:** Using `panic` within business logic processing is strictly prohibited. Business logic should handle errors gracefully and use error returns to propagate issues up the call stack.
+
+- **Restricted Use in Main Package:** In the main package, the use of `panic` should be reserved for situations where the program is entirely inoperable, such as failure to open essential files, inability to connect to the database, or other critical startup issues. Even in these scenarios, prefer using structured error handling to terminate the program.
+
+- **Prohibition on Exportable Interfaces:** Exportable interfaces must not invoke `panic`. They should handle errors gracefully and return errors as part of their contract.
+
+- **Prefer Errors Over Panic:** It is recommended to use error returns instead of panic to convey errors within a package. This approach promotes error handling that integrates smoothly with Go's error handling idioms.
+
+#### Alternative to Panic: Structured Program Termination
+
+To enforce these guidelines, consider implementing structured functions to terminate the program gracefully in the face of unrecoverable errors, while providing clear error messages. Here are two recommended functions:
+
+```go
+// ExitWithError logs an error message and exits the program with a non-zero status.
+func ExitWithError(err error) {
+ progName := filepath.Base(os.Args[0])
+ fmt.Fprintf(os.Stderr, "%s exit -1: %+v\n", progName, err)
+ os.Exit(-1)
+}
+
+// SIGTERMExit logs a warning message when the program receives a SIGTERM signal and exits with status 0.
+func SIGTERMExit() {
+ progName := filepath.Base(os.Args[0])
+ fmt.Fprintf(os.Stderr, "Warning %s receive process terminal SIGTERM exit 0\n", progName)
+}
+```
+
+#### Example Usage:
+
+```go
+import (
+	_ "net/http/pprof"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+ util "git.imall.cloud/openim/open-im-server-deploy/pkg/util/genutil"
+)
+
+func main() {
+ apiCmd := cmd.NewApiCmd()
+ apiCmd.AddPortFlag()
+ apiCmd.AddPrometheusPortFlag()
+ if err := apiCmd.Execute(); err != nil {
+ util.ExitWithError(err)
+ }
+}
+```
+
+In this example, `ExitWithError` is used to terminate the program when an unrecoverable error occurs, providing a clear error message to stderr and exiting with a non-zero status. This approach ensures that critical errors are logged and the program exits in a controlled manner, facilitating troubleshooting and maintaining the stability of the application.
+
+
+### 1.5 Unit Tests
+
+- The unit test filename naming convention is `example_test.go`.
+- Write a test case for every important exportable function.
+- Because the functions in a unit test file are not exported externally, exportable structures, functions, etc. in it do not require comments.
+- If `func (b *Bar) Foo` exists, the single test function can be `func TestBar_Foo`.
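+
+A minimal sketch of these conventions, in a hypothetical `bar_test.go` (the `Bar` type is illustrative):
+
+```go
+package bar
+
+import "testing"
+
+// Bar is a hypothetical type under test.
+type Bar struct{ base int }
+
+// Foo adds the receiver's base value to n.
+func (b *Bar) Foo(n int) int { return b.base + n }
+
+// TestBar_Foo follows the func TestBar_Foo naming convention for
+// testing the method Foo on type Bar.
+func TestBar_Foo(t *testing.T) {
+	b := &Bar{base: 10}
+	if got := b.Foo(5); got != 15 {
+		t.Errorf("Foo(5) = %d, want 15", got)
+	}
+}
+```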
+
+### 1.6 Type assertion failure handling
+
+- A single return value from a type assertion will panic for an incorrect type. Always use the "comma ok" idiom.
+
+```go
+// bad
+t := n.(int)
+
+// good
+t, ok := n.(int)
+if !ok {
+	// error handling
+}
+```
+
+## 2. Naming convention
+
+The naming convention is a very important part of the code specification. A uniform, short, and precise naming convention can greatly improve the readability of the code and avoid unnecessary bugs.
+
+### 2.1 Package Naming
+
+- The package name must be consistent with the directory name, try to use a meaningful and short package name, and do not conflict with the standard library.
+- Package names are all lowercase, without uppercase or underscores, and use multi-level directories to divide the hierarchy.
+- Project names may connect multiple words with hyphens.
+- Do not use plurals for the package name and the directory name where the package is located, for example, `net/url` instead of `net/urls`.
+- Don't use broad, meaningless package names like common, util, shared or lib.
+- The package name should be simple and clear, such as net, time, log.
+
+
+### 2.2 Function Naming Conventions
+
+Function names should adhere to the following guidelines, inspired by OpenIM’s standards and Google’s Go Style Guide:
+
+- Use camel case for function names. Start with an uppercase letter for public functions (`MixedCaps`) and a lowercase letter for private functions (`mixedCaps`).
+- Exceptions to this rule include code automatically generated by tools (e.g., `xxxx.pb.go`) and test functions that use underscores for clarity (e.g., `TestMyFunction_WhatIsBeingTested`).
+
+### 2.3 File and Directory Naming Practices
+
+To maintain consistency and readability across the OpenIM project, observe the following naming practices:
+
+**File Names:**
+- Use underscores (`_`) as the default separator in filenames, keeping them short and descriptive.
+- Both hyphens (`-`) and underscores (`_`) are allowed, but underscores are preferred for general use.
+
+**Script and Markdown Files:**
+- Prefer hyphens (`-`) for shell scripts and Markdown (`.md`) files to enhance searchability and web compatibility.
+
+**Directories:**
+- Name directories with hyphens (`-`) exclusively to separate words, ensuring consistency and readability.
+
+Remember to keep filenames lowercase and use meaningful, concise identifiers to facilitate better organization and navigation within the project.
+
+
+
+### 2.4 Structure Naming
+
+- Camel case is used, with the first letter uppercase or lowercase depending on access control, such as `MixedCaps` or `mixedCaps`.
+- Struct names should not be verbs, but should be nouns, such as `Node`, `NodeSpec`.
+- Avoid using meaningless structure names such as Data and Info.
+- The declaration and initialization of the structure should take multiple lines, for example:
+
+```go
+// User multi-line declaration
+type User struct {
+	Username string
+	Email    string
+}
+
+// multi-line initialization
+u := User{
+	Username: "belm",
+	Email:    "nosbelm@qq.com",
+}
+```
+
+### 2.5 Interface Naming
+
+- The interface naming rules are basically consistent with the structure naming rules:
+- Interface names for single-function interfaces are suffixed with "er" (e.g. Reader, Writer); this can sometimes lead to broken English, but that's okay.
+- An interface with two functions is named by combining the two function names, e.g. ReadWriter.
+- An interface with three or more functions is named like a structure.
+
+For example:
+
+```go
+// Seeking to an offset before the start of the file is an error.
+// Seeking to any positive offset is legal, but the behavior of subsequent
+// I/O operations on the underlying object are implementation-dependent.
+type Seeker interface {
+ Seek(offset int64, whence int) (int64, error)
+}
+
+// ReadWriter is the interface that groups the basic Read and Write methods.
+type ReadWriter interface {
+	Reader
+ Writer
+}
+```
+
+### 2.6 Variable Naming
+
+- Variable names must follow camel case, and the initial letter is uppercase or lowercase according to the access control decision.
+- In relatively simple (few objects, highly targeted) environments, some names can be abbreviated from full words to single letters, for example:
+  - user can be abbreviated as u;
+  - userID can be abbreviated as uid.
+- When using proper nouns, the following rules need to be followed:
+  - If the variable is private and the proper noun is the first word, use lowercase, such as apiClient.
+  - In other cases, use the original wording of the noun, such as APIClient, repoID, UserID.
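+
+A hedged sketch of these proper-noun rules (all identifiers below are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// apiClient: private, and the proper noun is the first word, so it is lowercase.
+var apiClient = "redis"
+
+var (
+	repoID = 42    // proper noun is not the first word: keep its original casing
+	UserID = "u-1" // exported: keep the initialism fully uppercase
+)
+
+func main() {
+	fmt.Println(apiClient, repoID, UserID)
+}
+```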
+
+Some common nouns are listed below.
+
+```go
+// A GonicMapper that contains a list of common initialisms taken from golang/lint
+var LintGonicMapper = GonicMapper{
+ "API": true,
+ "ASCII": true,
+ "CPU": true,
+ "CSS": true,
+ "DNS": true,
+ "EOF": true,
+ "GUID": true,
+ "HTML": true,
+ "HTTP": true,
+ "HTTPS": true,
+ "ID": true,
+ "IP": true,
+ "JSON": true,
+ "LHS": true,
+ "QPS": true,
+ "RAM": true,
+ "RHS": true,
+ "RPC": true,
+ "SLA": true,
+ "SMTP": true,
+ "SSH": true,
+ "TLS": true,
+ "TTL": true,
+ "UI": true,
+ "UID": true,
+ "UUID": true,
+ "URI": true,
+ "URL": true,
+ "UTF8": true,
+ "VM": true,
+ "XML": true,
+ "XSRF": true,
+ "XSS": true,
+}
+```
+
+- If the variable type is bool, the name should start with Has, Is, Can or Allow, for example:
+
+```go
+var hasConflict bool
+var isExist bool
+var canManage bool
+var allowGitHook bool
+```
+
+- Local variables should be as short as possible, for example, use buf to refer to buffer, and use i to refer to index.
+- The code automatically generated by the code generation tool can exclude this rule (such as the Id in `xxx.pb.go`)
+
+### 2.7 Constant Naming
+
+In Go, constants play a critical role in defining values that do not change throughout the execution of a program. Adhering to best practices in naming constants can significantly improve the readability and maintainability of your code. Here are some guidelines for constant naming:
+
+- **Camel Case Naming:** The name of a constant must follow the camel case notation. The initial letter should be uppercase or lowercase based on the access control requirements. Uppercase indicates that the constant is exported (visible outside the package), while lowercase indicates package-private visibility (visible only within its own package).
+
+- **Enumeration Type Constants:** For constants that represent a set of enumerated values, it's recommended to define a corresponding type first. This approach not only enhances type safety but also improves code readability by clearly indicating the purpose of the enumeration.
+
+**Example:**
+
+```go
+// Code defines an error code type.
+type Code int
+
+// Internal errors.
+const (
+ // ErrUnknown - 0: An unknown error occurred.
+ ErrUnknown Code = iota
+ // ErrFatal - 1: A fatal error occurred.
+ ErrFatal
+)
+```
+
+In the example above, `Code` is defined as a new type based on `int`. The enumerated constants `ErrUnknown` and `ErrFatal` are then defined with explicit comments to indicate their purpose and values. This pattern is particularly useful for grouping related constants and providing additional context.
+
+### Global Variables and Constants Across Packages
+
+- **Use Constants for Global Variables:** When defining variables that are intended to be accessed across packages, prefer using constants to ensure immutability. This practice avoids unintended modifications to the value, which can lead to unpredictable behavior or hard-to-track bugs.
+
+- **Lowercase for Package-Private Usage:** If a global variable or constant is intended for use only within its own package, it should start with a lowercase letter. This clearly signals its limited scope of visibility, adhering to Go's access control mechanism based on naming conventions.
+
+**Guideline:**
+
+- For global constants that need to be accessed across packages, declare them with an uppercase initial letter. This makes them exported, adhering to Go's visibility rules.
+- For constants used within the same package, start their names with a lowercase letter to limit their scope to the package.
+
+**Example:**
+
+```go
+package config
+
+// MaxConnections - the maximum number of allowed connections. Visible across packages.
+const MaxConnections int = 100
+
+// minIdleTime - the minimum idle time before a connection is considered stale. Only visible within the config package.
+const minIdleTime int = 30
+```
+
+In this example, `MaxConnections` is a global constant meant to be accessed across packages, hence it starts with an uppercase letter. On the other hand, `minIdleTime` is intended for use only within the `config` package, so it starts with a lowercase letter.
+
+Following these guidelines ensures that your Go code is more readable, maintainable, and consistent with Go's design philosophy and access control mechanisms.
+
+### 2.8 Error naming
+
+- The Error type should be written in the form of FooError.
+
+```go
+type ExitError struct {
+// ....
+}
+```
+
+- The Error variable is written in the form of ErrFoo.
+
+```go
+var ErrFormat = errors.New("unknown format")
+```
+
+CI/CD will report an error for non-standard Err naming.
+
+
+### 2.9 Handling Errors Properly
+
+In Go, proper error handling is crucial for creating reliable and maintainable applications. It's important to ensure that errors are not ignored or discarded, as this can lead to unpredictable behavior and difficult-to-debug issues. Here are the guidelines and examples regarding the proper handling of errors.
+
+#### Guideline: Do Not Discard Errors
+
+- **Mandatory Error Propagation:** When calling a function that returns an error, the calling function must handle or propagate the error, instead of ignoring it. This approach ensures that errors are not silently ignored, allowing higher-level logic to make informed decisions about error handling.
+
+#### Incorrect Example: Discarding an Error
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "log"
+)
+
+func ReadFileContent(filename string) string {
+ content, _ := ioutil.ReadFile(filename) // Incorrect: Error is ignored
+ return string(content)
+}
+
+func main() {
+ content := ReadFileContent("example.txt")
+ log.Println(content)
+}
+```
+
+In this incorrect example, the error returned by `ioutil.ReadFile` is ignored. This can lead to situations where the program continues execution even if the file doesn't exist or cannot be accessed, potentially causing more cryptic errors downstream.
+
+#### Correct Example: Propagating an Error
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "log"
+)
+
+// ReadFileContent attempts to read and return the content of the specified file.
+// It returns an error if reading fails.
+func ReadFileContent(filename string) (string, error) {
+ content, err := ioutil.ReadFile(filename)
+ if err != nil {
+ // Correct: Propagate the error
+ return "", err
+ }
+ return string(content), nil
+}
+
+func main() {
+ content, err := ReadFileContent("example.txt")
+ if err != nil {
+ log.Fatalf("Failed to read file: %v", err)
+ }
+ log.Println(content)
+}
+```
+
+In the correct example, the error returned by `ioutil.ReadFile` is propagated back to the caller. The `main` function then checks the error and terminates the program with an appropriate error message if an error occurred. This approach ensures that errors are handled appropriately, and the program does not proceed with invalid state.
+
+### Best Practices for Error Handling
+
+1. **Always check the error returned by a function.** Do not ignore it.
+2. **Propagate errors up the call stack unless they can be handled gracefully at the current level.**
+3. **Provide context for errors when propagating them, making it easier to trace the source of the error.** This can be achieved using `fmt.Errorf` with the `%w` verb or dedicated wrapping functions provided by some error handling packages.
+4. **Log the error at the point where it is handled or causes the program to terminate, to provide insight into the failure.**
+
+By following these guidelines, you ensure that your Go applications handle errors in a consistent and effective manner, improving their reliability and maintainability.
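+
+A hedged sketch of practices 2 and 3 above, wrapping an error with context via `fmt.Errorf` and `%w` so callers can still match the underlying cause with `errors.Is` (the `loadConfig` function and file name are illustrative):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/fs"
+	"os"
+)
+
+// loadConfig propagates the underlying error, adding context with %w
+// so the original error remains matchable up the call stack.
+func loadConfig(path string) ([]byte, error) {
+	data, err := os.ReadFile(path)
+	if err != nil {
+		return nil, fmt.Errorf("loadConfig %q: %w", path, err)
+	}
+	return data, nil
+}
+
+func main() {
+	_, err := loadConfig("no-such-file.yaml")
+	// errors.Is unwraps the %w chain down to the fs.ErrNotExist sentinel.
+	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
+}
+```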
+
+
+### 2.10 Using Context with IO or Inter-Process Communication (IPC)
+
+In Go, `context.Context` is a powerful construct for managing deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes. It is particularly important in I/O operations or inter-process communication (IPC), where operations might need to be cancelled or timed out.
+
+#### Guideline: Use Context for IO and IPC
+
+- **Mandatory Use of Context:** When performing I/O operations or inter-process communication, it's crucial to use `context.Context` to manage the lifecycle of these operations. This includes setting deadlines, handling cancellation signals, and passing request-scoped values.
+
+#### Incorrect Example: Ignoring Context in an HTTP Call
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "net/http"
+ "log"
+)
+
+// FetchData makes an HTTP GET request to the specified URL and returns the response body.
+// This function does not use context, making it impossible to cancel the request or set a deadline.
+func FetchData(url string) (string, error) {
+ resp, err := http.Get(url) // Incorrect: Ignoring context
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ return string(body), nil
+}
+
+func main() {
+ data, err := FetchData("http://example.com")
+ if err != nil {
+ log.Fatalf("Failed to fetch data: %v", err)
+ }
+ log.Println(data)
+}
+```
+
+In this incorrect example, the `FetchData` function makes an HTTP GET request without using a `context`. This approach does not allow the request to be cancelled or a timeout to be set, potentially leading to resources being wasted if the server takes too long to respond or if the operation needs to be aborted for any reason.
+
+#### Correct Example: Using Context in an HTTP Call
+
+```go
+package main
+
+import (
+ "context"
+ "io/ioutil"
+ "net/http"
+ "log"
+ "time"
+)
+
+// FetchDataWithContext makes an HTTP GET request to the specified URL using the provided context.
+// This allows the request to be cancelled or timed out according to the context's deadline.
+func FetchDataWithContext(ctx context.Context, url string) (string, error) {
+ req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
+ if err != nil {
+ return "", err
+ }
+
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ return string(body), nil
+}
+
+func main() {
+ // Create a context with a 5-second timeout
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
+
+ data, err := FetchDataWithContext(ctx, "http://example.com")
+ if err != nil {
+ log.Fatalf("Failed to fetch data: %v", err)
+ }
+ log.Println(data)
+}
+```
+
+In the correct example, `FetchDataWithContext` uses a context to make the HTTP GET request. This allows the operation to be cancelled or subjected to a timeout, as dictated by the context passed to it. The `context.WithTimeout` function is used in `main` to create a context that cancels the request if it takes longer than 5 seconds, demonstrating a practical use of context to manage operation lifecycle.
+
+### Best Practices for Using Context
+
+1. **Pass context as the first parameter of a function**, following the convention `func(ctx context.Context, ...)`.
+2. **Never ignore the context** provided to you in functions that support it. Always use it in your I/O or IPC operations.
+3. **Avoid storing context in a struct**. Contexts are meant to be passed around within the call stack, not stored.
+4. **Use context's cancellation and deadline features** to control the lifecycle of blocking operations, especially in network I/O and IPC scenarios.
+5. **Propagate context down the call stack** to any function that supports it, ensuring that your application can respond to cancellation signals and deadlines effectively.
+
+By adhering to these guidelines and examples, you can ensure that your Go applications handle I/O and IPC operations more reliably and efficiently, with proper support for cancellation, timeouts, and request-scoped values.
+
+
+## 3. Comment specification
+
+- Every exported name must have a comment that briefly describes the exported variable, function, structure, interface, etc.
+- Use single-line comments (`//`) only; block comments (`/* */`) are prohibited.
+- As with the code style rules, a single-line comment should not be too long: at most 120 characters. If it is longer, wrap it onto a new line and keep the format tidy.
+- A comment must be a complete sentence that starts with the name being commented and ends with a period, in the format `// name description.`. For example:
+
+```go
+// bad
+// logs the flags in the flagset.
+func PrintFlags(flags *pflag.FlagSet) {
+    // normal code
+}
+
+//good
+// PrintFlags logs the flags in the flagset.
+func PrintFlags(flags *pflag.FlagSet) {
+    // normal code
+}
+```
+
+- All commented out code should be deleted before submitting code review, otherwise, it should explain why it is not deleted, and give follow-up processing suggestions.
+
+- Multiple comments can be separated by blank lines, as follows:
+
+```go
+// Package superman implements methods for saving the world.
+//
+// Experience has shown that a small number of procedures can prove
+// helpful when attempting to save the world.
+package superman
+```
+
+### 3.1 Package Comments
+
+- Each package has one and only one package-level annotation.
+- Package comments uniformly use `//`, in the format `// Package <name> <description>`, for example:
+
+```go
+// Package genericclioptions contains flags which can be added to you command, bound, completed, and produce
+// useful helper functions.
+package genericclioptions
+```
+
+### 3.2 Variable/Constant Comments
+
+- Every exported variable/constant must have a comment, in the format `// <name> <description>.`, for example:
+
+```go
+// ErrSigningMethod defines invalid signing method error.
+var ErrSigningMethod = errors.New("Invalid signing method")
+```
+
+- When there is a large block of constant or variable definition, you can comment a general description in front, and then comment the definition of the constant in detail before or at the end of each line of constant, for example:
+```go
+// Code must start with 1xxxxx.
+const (
+ // ErrSuccess - 200: OK.
+ ErrSuccess int = iota + 100001
+
+ // ErrUnknown - 500: Internal server error.
+ ErrUnknown
+
+ // ErrBind - 400: Error occurred while binding the request body to the struct.
+ ErrBind
+
+ // ErrValidation - 400: Validation failed.
+ ErrValidation
+)
+```
+### 3.3 Struct Comments
+
+- Each structure or interface that needs to be exported must have a comment description, the format is `// structure name structure description.`.
+- The name of the exportable member variable in the structure, if the meaning is not clear, a comment must be given and placed before the member variable or at the end of the same line. For example:
+
+```go
+// User represents a user restful resource. It is also used as gorm model.
+type User struct {
+ // Standard object's metadata.
+ metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ Nickname string `json:"nickname" gorm:"column:nickname"`
+ Password string `json:"password" gorm:"column:password"`
+ Email string `json:"email" gorm:"column:email"`
+ Phone string `json:"phone" gorm:"column:phone"`
+ IsAdmin int `json:"isAdmin,omitempty" gorm:"column:isAdmin"`
+}
+```
+
+### 3.4 Method Comments
+
+Each function or method that needs to be exported must have a comment, the format is `// function name function description.`, for example:
+
+```go
+// BeforeUpdate runs before updating a database record.
+func (p *Policy) BeforeUpdate() (err error) {
+    // normal code
+    return nil
+}
+```
+
+### 3.5 Type Comments
+
+- Each type definition and type alias that needs to be exported must have a comment description, the format is `// type name type description.`, for example:
+
+```go
+// Code defines an error code type.
+type Code int
+```
+
+## 4. Type
+
+### 4.1 Strings
+
+- Empty string judgment.
+
+```go
+// bad
+if s == "" {
+ // normal code
+}
+
+//good
+if len(s) == 0 {
+ // normal code
+}
+```
+
+- `[]byte`/`string` equality comparison.
+
+```go
+// bad
+var s1 []byte
+var s2 []byte
+...
+bytes.Compare(s1, s2) == 0
+bytes.Compare(s1, s2) != 0
+
+//good
+var s1 []byte
+var s2 []byte
+...
+bytes.Equal(s1, s2)
+!bytes.Equal(s1, s2)
+```
+
+- Complex strings use raw strings to avoid character escaping.
+
+```go
+// bad
+regexp.MustCompile("\\.")
+
+//good
+regexp.MustCompile(`\.`)
+```
+
+### 4.2 Slicing
+
+- Empty slice judgment.
+
+```go
+// bad
+if slice != nil && len(slice) == 0 {
+    // normal code
+}
+
+//good
+if len(slice) == 0 {
+    // normal code
+}
+```
+
+The above judgment also applies to map and channel.
+
+- Declare a slice.
+
+```go
+// bad
+s := []string{}
+s := make([]string, 0)
+
+//good
+var s []string
+```
+
+- slice copy.
+
+```go
+// bad
+var b1 []byte
+b2 := make([]byte, len(b1)) // destination must be allocated first
+for i, v := range b1 {
+    b2[i] = v
+}
+for i := range b1 {
+    b2[i] = b1[i]
+}
+
+//good
+var b1 []byte
+b2 := make([]byte, len(b1))
+copy(b2, b1)
+```
+
+- Slice append.
+
+```go
+// bad
+var a, b []int
+for _, v := range a {
+ b = append(b, v)
+}
+
+//good
+var a, b []int
+b = append(b, a...)
+```
+
+### 4.3 Structure
+
+- struct initialization.
+
+The struct is initialized in multi-line format.
+
+```go
+type user struct {
+    ID   int64
+    Name string
+}
+
+// bad: positional initialization
+u1 := user{100, "Colin"}
+
+// good: multi-line, field-named initialization
+u2 := user{
+    ID:   200,
+    Name: "Lex",
+```
+
+## 5. Control Structure
+
+### 5.1 if
+
+- `if` accepts an initialization statement; the convention is to create local variables with it, as follows.
+
+```go
+if err := loadConfig(); err != nil {
+    // error handling
+    return err
+}
+```
+
+- For variables of bool type, judge true and false directly.
+
+```go
+var isAllow bool
+if isAllow {
+    // normal code
+}
+```
+
+### 5.2 for
+
+- Create local variables using short declarations.
+
+```go
+sum := 0
+for i := 0; i < 10; i++ {
+ sum += 1
+}
+```
+
+- Don't use defer inside a for loop; deferred calls run only when the enclosing function returns.
+
+```go
+// bad
+for _, file := range files {
+    fd, err := os.Open(file)
+    if err != nil {
+        return err
+    }
+    defer fd.Close() // not executed until the enclosing function returns
+    // normal code
+}
+
+//good
+for _, file := range files {
+    if err := func() error {
+        fd, err := os.Open(file)
+        if err != nil {
+            return err
+        }
+        defer fd.Close() // executed when the anonymous function returns
+        // normal code
+        return nil
+    }(); err != nil {
+        return err
+    }
+}
+```
+
+### 5.3 range
+
+- If only the first item (key) is needed, discard the second.
+
+```go
+for keyIndex := range keys {
+    // normal code
+}
+```
+
+- If only the second item (value) is required, use the blank identifier (`_`) for the first.
+
+```go
+sum := 0
+for _, value := range array {
+ sum += value
+}
+```
+
+### 5.4 switch
+
+- A `switch` must have a `default` case.
+
+```go
+switch os := runtime.GOOS; os {
+case "linux":
+    fmt.Println("Linux.")
+case "darwin":
+    fmt.Println("OS X.")
+default:
+    fmt.Printf("%s.\n", os)
+}
+```
+
+### 5.5 goto
+- Business code prohibits the use of goto.
+- Frameworks and other low-level code should also avoid goto whenever possible.
+
+## 6. Functions
+
+- Incoming variables and return variables start with a lowercase letter.
+- The number of function parameters cannot exceed 5.
+- Function grouping and ordering:
+  - Functions should be sorted in rough calling order.
+  - Functions in the same file should be grouped by receiver.
+- Prefer passing values over passing pointers.
+- Do not pass pointers for parameters of map, slice, chan, or interface type; these types already behave like references.
+
+### 6.1 Function parameters
+
+- If the function returns two or three values of the same type, or if the meaning of a result is not clear from context, use named return values; otherwise named returns are not recommended, for example:
+
+```go
+func coordinate() (x, y float64, err error) {
+    // normal code
+}
+```
+- Both incoming and returned variables start with a lowercase letter.
+- Try to pass by value instead of pointer.
+- The number of parameters cannot exceed 5.
+- Multiple return values can return up to three, and if there are more than three, please use struct.
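+
+For the last rule, a minimal sketch (the `LoginResult` type and `login` function are hypothetical):
+
+```go
+package main
+
+import "fmt"
+
+// LoginResult groups what would otherwise be four separate return values.
+type LoginResult struct {
+    UserID  int64
+    Token   string
+    Expires int64
+    IsAdmin bool
+}
+
+// bad (for illustration): func login(name string) (int64, string, int64, bool, error)
+
+// login returns a single struct plus an error.
+func login(name string) (LoginResult, error) {
+    return LoginResult{UserID: 1, Token: "t-" + name, Expires: 3600}, nil
+}
+
+func main() {
+    r, err := login("belm")
+    if err != nil {
+        fmt.Println("login failed:", err)
+        return
+    }
+    fmt.Println(r.Token)
+}
+```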
+
+### 6.2 defer
+
+- Release a resource with defer immediately after creating it (defer can be used freely: its performance was greatly improved in Go 1.14, so its overhead is negligible even in performance-sensitive code).
+- First judge whether there is an error, and then defer to release resources, for example:
+
+```go
+resp, err := http.Get(url)
+if err != nil {
+ return err
+}
+
+defer resp.Body.Close()
+```
+
+### 6.3 Method Receiver
+
+- It is recommended to use the lowercase first letter of the type name as the receiver name.
+- Don't use a single-character receiver name when the function exceeds 20 lines.
+- Never use confusing receiver names such as me, this, or self.
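+
+A minimal sketch of the receiver-naming convention (the `Client` type and its method are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// Client is an illustrative type; the receiver name "c" is simply the
+// lowercase first letter of the type name.
+type Client struct {
+    endpoint string
+}
+
+// Endpoint returns the configured endpoint.
+func (c *Client) Endpoint() string {
+    return c.endpoint
+}
+
+// bad (for illustration): func (this *Client) Endpoint() string { ... }
+
+func main() {
+    c := &Client{endpoint: "localhost:8080"}
+    fmt.Println(c.Endpoint())
+}
+```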
+
+### 6.4 Nesting
+- The nesting depth cannot exceed 4 levels.
+
+### 6.5 Variable Naming
+- The variable declaration should be placed before the first use of the variable as far as possible, following the principle of proximity.
+- If a magic number appears more than twice, its direct use is prohibited; replace it with a named constant, for example:
+
+```go
+// Price is the unit price used by the cost helpers below.
+const Price = 3.14
+
+func getAppleCost(n float64) float64 {
+    return Price * n
+}
+
+func getOrangeCost(n float64) float64 {
+    return Price * n
+}
+```
+
+## 7. GOPATH setting specification
+- Since Go 1.11 the GOPATH rules have been relaxed, but existing code (many libraries were created before 1.11) still conforms to them. Keeping the GOPATH layout is recommended to ease code maintenance.
+- Only one GOPATH is recommended, multiple GOPATHs are not recommended. If multiple GOPATHs are used, the bin directory where compilation takes effect is under the first GOPATH.
+
+## 8. Dependency Management
+
+- Go 1.11 and above must use Go Modules.
+- When using Go Modules as a dependency management project, it is not recommended to submit the vendor directory.
+- When using Go Modules as a dependency management project, the go.sum file must be submitted.
+
+## 9. Best Practices
+
+- Minimize the use of global variables; pass parameters instead, so that each function is "stateless". This reduces coupling and facilitates division of labor and unit testing.
+- Verify interface compliance at compile time, for example:
+
+```go
+type LogHandler struct {
+    h   http.Handler
+    log *zap.Logger
+}
+var _ http.Handler = LogHandler{}
+```
+
+- When the server processes a request, it should create a context, store the request-scoped information (such as a requestID) in it, and pass it down the function call chain.
+
+### 9.1 Performance
+- string is an immutable string type; modifying a string is a relatively heavy operation that generally requires reallocating memory. Unless there is a special need, prefer []byte when the value has to be modified.
+- Prefer strconv over fmt.
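+
+A small sketch of the second rule: both calls below produce "42", but `strconv` avoids the interface boxing and reflection that `fmt` goes through.
+
+```go
+package main
+
+import (
+    "fmt"
+    "strconv"
+)
+
+func main() {
+    // bad: fmt.Sprint accepts interface{} and uses reflection.
+    s1 := fmt.Sprint(42)
+
+    // good: strconv.Itoa is specialized for int-to-string conversion.
+    s2 := strconv.Itoa(42)
+
+    fmt.Println(s1 == s2) // both are "42"
+}
+```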
+
+### 9.2 Precautions
+
+- append may allocate a new backing array and return a new slice address; always use its return value.
+- To modify a value stored in a map in place, the value must be a pointer; otherwise the whole value has to be written back.
+- A map must be locked (or replaced with sync.Map) when accessed concurrently.
+- interface{} conversions are not checked at compile time, only at runtime; careless type assertions can cause a panic.
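+
+A minimal sketch of the first and third precautions (the helper function and counts are illustrative):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sync"
+)
+
+// concurrentCount sums 0..n-1 into a map from n goroutines,
+// guarding the map with a mutex as required for concurrent access.
+func concurrentCount(n int) int {
+    var (
+        mu sync.Mutex
+        wg sync.WaitGroup
+    )
+    m := map[string]int{}
+    for i := 0; i < n; i++ {
+        wg.Add(1)
+        go func(v int) {
+            defer wg.Done()
+            mu.Lock() // a map must be locked for concurrent writes
+            m["count"] += v
+            mu.Unlock()
+        }(i)
+    }
+    wg.Wait()
+    return m["count"]
+}
+
+func main() {
+    // append may allocate a new backing array; always use the return value.
+    a := make([]int, 0, 1)
+    a = append(a, 1)
+    b := append(a, 2) // capacity exceeded: b points to new memory
+    fmt.Println(&a[0] != &b[0]) // the backing arrays differ
+
+    fmt.Println(concurrentCount(4)) // 0+1+2+3
+}
+```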
+
+## 10. Golang CI Lint
+
+- Golang CI Lint is a fast Go linters runner. It runs linters in parallel, uses caching, and works well with all environments, including CI.
+
+**In local development, you can run Golang CI Lint with the following command:**
+
+```bash
+make lint
+```
+
+**In CI/CD, check the GitHub Actions status below after you push your code:**
+
+[](https://github.com/openimsdk/open-im-server-deploy/actions/workflows/golangci-lint.yml)
+
+golangci-lint lets you select which linters to run; refer to the official documentation: [https://golangci-lint.run/usage/linters/](https://golangci-lint.run/usage/linters/)
+
+The linters we currently enable are listed in the `linters.enable` field of [https://github.com/openimsdk/open-im-server-deploy/blob/main/.golangci.yml](https://github.com/openimsdk/open-im-server-deploy/blob/main/.golangci.yml).
+
+For example:
+```yaml
+linters:
+ # please, do not use `enable-all`: it's deprecated and will be removed soon.
+ # inverted configuration with `enable-all` and `disable` is not scalable during updates of golangci-lint
+ # enable-all: true
+ disable-all: true
+ enable:
+ - typecheck # Basic type checking
+ - gofmt # Format check
+ - govet # Go's standard linting tool
+ - gosimple # Suggestions for simplifying code
+ - misspell # Spelling mistakes
+ - staticcheck # Static analysis
+ - unused # Checks for unused code
+ - goimports # Checks if imports are correctly sorted and formatted
+ - godot # Checks for comment punctuation
+ - bodyclose # Ensures HTTP response body is closed
+ - errcheck # Checks for missed error returns
+ fast: true
+```
+
+In addition, note that Chinese comments are not allowed in Go code; all comments must be written in English and satisfy the golangci-lint rules above.
+
+
+### 10.1 Configuration Document
+
+This section explains how to configure golangci-lint's runtime parameters, customize its output format, and tune individual linters (code checkers). The summary below is drafted from the project's configuration file.
+
+#### 10.1.1 Runtime Options
+
+- **Concurrency** (`concurrency`): Defaults to the number of available CPUs; can be set manually, e.g. to 4 parallel analyses.
+- **Timeout** (`timeout`): Timeout duration for analysis operations, default is 1 minute, set here to 5 minutes.
+- **Issue Exit Code** (`issues-exit-code`): Exit code defaults to 1 if at least one issue is found.
+- **Test Files** (`tests`): Whether to include test files, defaults to true.
+- **Build Tags** (`build-tags`): Specify build tags used by all linters, defaults to an empty list. Example adds `mytag`.
+- **Skip Directories** (`skip-dirs`): Configure which directories' issues are not reported, defaults to empty, but some default directories are independently skipped.
+- **Skip Files** (`skip-files`): Specify files where issues should not be reported, supports regular expressions.
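+
+A sketch of what these options look like in `.golangci.yml` (the values shown are illustrative, not the project's actual settings):
+
+```yaml
+run:
+  concurrency: 4
+  timeout: 5m
+  issues-exit-code: 1
+  tests: true
+  build-tags:
+    - mytag
+  skip-dirs:
+    - vendor
+  skip-files:
+    - ".*\\.pb\\.go$"
+```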
+
+#### 10.1.2 Output Configuration
+
+- **Format** (`format`): Set output format, default is "colored-line-number".
+- **Print Issued Lines** (`print-issued-lines`): Whether to print the lines where issues occur, defaults to true.
+- **Print Linter Name** (`print-linter-name`): Whether to print the linter name at the end of issue text, defaults to true.
+- **Uniqueness Filter** (`uniq-by-line`): Whether to make issue outputs unique per line, defaults to true.
+- **Path Prefix** (`path-prefix`): Prefix to add to output file references, defaults to no prefix.
+- **Sort Results** (`sort-results`): Sort results by file path, line number, and column number.
+
+#### 10.1.3 Linters Settings
+
+In the configuration file, the `linters-settings` section allows detailed configuration of individual linters. Below are examples of specific linters settings and their purposes:
+
+- **bidichk**: Used to check bidirectional text characters, ensuring correct display direction of text, especially when dealing with mixed left-to-right (LTR) and right-to-left (RTL) text.
+
+- **dogsled**: Monitors excessive use of blank identifiers (`_`) in assignment operations, which may obscure data processing errors or unclear logic.
+
+- **dupl**: Identifies duplicate code blocks, helping developers avoid code redundancy. The `threshold` parameter in settings allows adjustment of code similarity threshold triggering warnings.
+
+- **errcheck**: Checks for unhandled errors. In Go, error handling is achieved by checking function return values. This linter helps ensure all errors are properly handled.
+
+- **exhaustive**: Checks if `switch` statements include all possible values of an enum type, ensuring exhaustiveness of code. This helps avoid forgetting to handle certain cases.
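+
+For instance, the `dupl` threshold mentioned above would be tuned in a `linters-settings` block like this (values are illustrative):
+
+```yaml
+linters-settings:
+  dupl:
+    # number of matching tokens before two blocks are reported as duplicates
+    threshold: 150
+  errcheck:
+    # also report ignored errors from type assertions
+    check-type-assertions: true
+```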
+
+#### 10.1.4 Example: `errcheck`
+
+**Incorrect Code Example**:
+```go
+package main
+
+import "os"
+
+func main() {
+    f, _ := os.Open("filename.ext")
+    defer f.Close()
+}
+```
+
+**Issue**: In the above code, the error return value of `os.Open` function is explicitly ignored. This is a common mistake as it may lead to unhandled errors and hard-to-trace bugs.
+
+**Correct Form**:
+```go
+package main
+
+import (
+ "fmt"
+ "os"
+)
+
+func main() {
+ f, err := os.Open("filename.ext")
+ if err != nil {
+ fmt.Printf("error opening file: %v\n", err)
+ return
+ }
+ defer f.Close()
+}
+```
+
+In the correct form, by checking the error (`err`) returned by `os.Open`, we gracefully handle error cases rather than simply ignoring them.
+
+#### 10.1.5 Example: `gofmt`
+
+**Incorrect Code Example**:
+```go
+package main
+import "fmt"
+func main() {
+fmt.Println("Hello, world!")
+}
+```
+
+**Issue**: This code snippet doesn't follow Go's standard formatting rules, for example, incorrect indentation of `fmt.Println`.
+
+**Correct Form**:
+```go
+package main
+
+import "fmt"
+
+func main() {
+ fmt.Println("Hello, world!")
+}
+```
+
+Using `gofmt` tool can automatically fix such formatting issues, ensuring the code adheres to the coding standards of the Go community.
+
+#### 10.1.6 Example: `unused`
+
+**Incorrect Code Example**:
+```go
+package main
+
+func helper() {}
+
+func main() {}
+```
+
+**Issue**: The `helper` function is defined but not called anywhere, indicating potential redundant code or missing functionality implementation.
+
+**Correct Form**:
+```go
+package main
+
+// If the helper function is indeed needed, ensure it's used properly.
+func helper() {
+ // Implement the function's functionality or ensure it's called elsewhere
+}
+
+func main() {
+ helper()
+}
+```
+
+
+#### 10.1.7 Example: `dogsled`
+
+**Incorrect Code Example**:
+```go
+func getValues() (int, int, int) {
+ return 1, 2, 3
+}
+
+func main() {
+ _, _, val := getValues()
+ fmt.Println(val) // Only interested in the third return value
+}
+```
+
+**Explanation**: In the above code, we use two blank identifiers to ignore the first two return values. Excessive use of blank identifiers can make code reading difficult.
+
+**Improved Code**:
+Consider refactoring the function or the usage of return values to reduce the need for blank identifiers or explicitly comment why ignoring certain values is safe.
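+
+One possible refactoring along those lines (the `values` struct is a hypothetical sketch, not the only option):
+
+```go
+package main
+
+import "fmt"
+
+// values groups the three results so a caller can name the one it needs
+// instead of discarding the others with blank identifiers.
+type values struct {
+    first, second, third int
+}
+
+func getValues() values {
+    return values{first: 1, second: 2, third: 3}
+}
+
+func main() {
+    v := getValues()
+    fmt.Println(v.third) // only the third value is of interest
+}
+```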
+
+#### 10.1.8 Example: `exhaustive`
+
+**Incorrect Code Example**:
+```go
+type Fruit int
+
+const (
+ Apple Fruit = iota
+ Banana
+ Orange
+)
+
+func getFruitName(f Fruit) string {
+ switch f {
+ case Apple:
+ return "Apple"
+ case Banana:
+ return "Banana"
+ // Missing handling for Orange
+ }
+ return "Unknown"
+}
+```
+
+**Explanation**: In this code, the `switch` statement doesn't cover all possible values of the `Fruit` type; the case for `Orange` is missing.
+
+**Improved Code**:
+```go
+func getFruitName(f Fruit) string {
+ switch f {
+ case Apple:
+ return "Apple"
+ case Banana:
+ return "Banana"
+ case Orange:
+ return "Orange"
+ }
+ return "Unknown"
+}
+```
+
+By adding the missing `case`, we ensure the `switch` statement is exhaustive, handling every possible enum value.
+
+#### 10.1.9 Optimization of Configuration Files and Application of Code Analysis Tools
+
+Through these examples, we demonstrate how to improve code quality by identifying and fixing common coding issues. The project's golangci-lint configuration file lets developers customize each linter's behavior according to project requirements, ensuring the code complies with the predefined quality standards and style guidelines.
+
+By employing these tools and configuration strategies, teams can reduce the number of bugs, enhance code maintainability, and facilitate efficient collaboration during code review processes.
diff --git a/docs/contrib/go-code1.md b/docs/contrib/go-code1.md
new file mode 100644
index 0000000..0aea671
--- /dev/null
+++ b/docs/contrib/go-code1.md
@@ -0,0 +1,1554 @@
+## OpenIM development specification
+We hold code style and conventions to a very high standard; we want our products to be polished and complete.
+
+## 1. Code style
+
+### 1.1 Code format
+
+- Code must be formatted with `gofmt`.
+- Leave spaces between operators and operands.
+- It is recommended that a line of code does not exceed 120 characters. If the part exceeds, please use an appropriate line break method. But there are also some exception scenarios, such as import lines, code automatically generated by tools, and struct fields with tags.
+- The file length cannot exceed 800 lines.
+- Function length cannot exceed 80 lines.
+- import specification
+- All code must be formatted with `goimports` (it is recommended to set the code Go code editor to: run `goimports` on save).
+- Do not use relative paths to import packages, such as `import ../util/net`.
+- Import aliases must be used when the package name does not match the last directory name of the import path, or when multiple identical package names conflict.
+
+```go
+// bad
+"github.com/dgrijalva/jwt-go/v4"
+
+//good
+jwt "github.com/dgrijalva/jwt-go/v4"
+```
+- It is suggested to group imported packages; anonymous imports go in their own group, with a comment explaining why the anonymous import is needed.
+
+```go
+import (
+ // go standard package
+ "fmt"
+
+ // third party package
+ "github.com/jinzhu/gorm"
+ "github.com/spf13/cobra"
+ "github.com/spf13/viper"
+
+ // Anonymous packages are grouped separately, and anonymous package references are explained
+ // import mysql driver
+ _ "github.com/jinzhu/gorm/dialects/mysql"
+
+ // inner package
+)
+```
+
+### 1.2 Declaration, initialization and definition
+
+When multiple variables are needed in a function, they can be declared with `var` at the beginning of the function. Declarations outside a function must use `var`; do not use `:=`, which makes it easy to run into variable-scoping pitfalls.
+
+```go
+var (
+ Width int
+ Height int
+)
+```
+
+- When initializing a structure reference, please use `&T{}` instead of `new(T)` to make it consistent with structure initialization.
+
+```go
+ // bad
+ sptr := new(T)
+ sptr.Name = "bar"
+
+ // good
+ sptr := &T{Name: "bar"}
+```
+
+- The struct declaration and initialization format takes multiple lines and is defined as follows.
+
+```go
+type User struct {
+    Username string
+    Email    string
+}
+
+user := User{
+    Username: "belm",
+    Email:    "nosbelm@qq.com",
+}
+```
+
+- Similar declarations are grouped together, and the same applies to constant, variable, and type declarations.
+
+```go
+// bad
+import "a"
+import "b"
+
+//good
+import (
+ "a"
+ "b"
+)
+```
+
+- Specify container capacity where possible to pre-allocate memory for the container, for example:
+
+```go
+v := make(map[int]string, 4)
+v := make([]string, 0, 4)
+```
+
+- At the top level, use the standard var keyword. Do not specify a type unless it is different from the type of the expression.
+
+```go
+// bad
+var s string = F()
+
+func F() string { return "A" }
+
+// good
+var s = F()
+// Since F already explicitly returns a string, we don't need to explicitly
+// specify the type of s; it is inferred as that type.
+
+func F() string { return "A" }
+```
+
+- This example emphasizes using PascalCase for exported constants and camelCase for unexported ones, avoiding all caps and underscores.
+
+```go
+// bad
+const (
+ MAX_COUNT = 100
+ timeout = 30
+)
+
+// good
+const (
+ MaxCount = 100 // Exported constants should use PascalCase.
+ defaultTimeout = 30 // Unexported constants should use camelCase.
+)
+```
+
+- Grouping related constants enhances organization and readability, especially when there are multiple constants related to a particular feature or configuration.
+
+```go
+// bad
+const apiVersion = "v1"
+const retryInterval = 5
+
+// good
+const (
+ ApiVersion = "v1" // Group related constants together for better organization.
+ RetryInterval = 5
+)
+```
+
+- The "good" practice utilizes iota for a clear, concise, and auto-incrementing way to define enumerations, reducing the potential for errors and improving maintainability.
+
+```go
+// bad
+const (
+ StatusActive = 0
+ StatusInactive = 1
+ StatusUnknown = 2
+)
+
+// good
+const (
+ StatusActive = iota // Use iota for simple and efficient constant enumerations.
+ StatusInactive
+ StatusUnknown
+)
+```
+
+- Specifying types explicitly improves clarity, especially when the purpose or type of a constant might not be immediately obvious. Additionally, adding comments to exported constants or those whose purpose isn't clear from the name alone can greatly aid in understanding the code.
+
+```go
+// bad
+const serverAddress = "localhost:8080"
+const debugMode = 1 // Is this supposed to be a boolean or an int?
+
+// good
+const ServerAddress string = "localhost:8080" // Specify type for clarity.
+// DebugMode indicates if the application should run in debug mode (true for debug mode).
+const DebugMode bool = true
+```
+
+- By defining a contextKey type and making userIDKey of this type, you avoid potential collisions with other context keys. This approach leverages Go's type system to provide compile-time checks against misuse.
+
+```go
+// bad
+const userIDKey = "userID"
+
+// In this example, userIDKey is a string type, which can lead to conflicts or accidental misuse because string keys are prone to typos and collisions in a global namespace.
+
+
+// good
+type contextKey string
+
+const userIDKey contextKey = "userID"
+```
+
+
+- Embedded types (such as mutexes) should be at the top of the field list within the struct, and there must be a blank line separating embedded fields from regular fields.
+
+```go
+// bad
+type Client struct {
+ version int
+ http.Client
+}
+
+//good
+type Client struct {
+ http.Client
+
+ version int
+}
+```
+
+
+### 1.5 Unit Tests
+
+- The unit test filename naming convention is `example_test.go`.
+- Write a test case for every important exportable function.
+- Because the functions in a unit test file are not part of the public API, comments may be omitted for the exported structures, functions, etc. defined there.
+- If `func (b *Bar) Foo` exists, the single test function can be `func TestBar_Foo`.
+
+## 2. Naming convention
+
+The naming convention is a very important part of the code specification. A uniform, short, and precise naming convention can greatly improve the readability of the code and avoid unnecessary bugs.
+
+### 2.1 Package Naming
+
+- The package name must be consistent with the directory name, try to use a meaningful and short package name, and do not conflict with the standard library.
+- Package names are all lowercase, without uppercase or underscores, and use multi-level directories to divide the hierarchy.
+- Item names can connect multiple words with dashes.
+- Do not use plurals for the package name and the directory name where the package is located, for example, `net/url` instead of `net/urls`.
+- Don't use broad, meaningless package names like common, util, shared or lib.
+- The package name should be simple and clear, such as net, time, log.
+
+
+### 2.2 Function Naming Conventions
+
+Function names should adhere to the following guidelines, inspired by OpenIM’s standards and Google’s Go Style Guide:
+
+- Use camel case for function names. Start with an uppercase letter for public functions (`MixedCaps`) and a lowercase letter for private functions (`mixedCaps`).
+- Exceptions to this rule include code automatically generated by tools (e.g., `xxxx.pb.go`) and test functions that use underscores for clarity (e.g., `TestMyFunction_WhatIsBeingTested`).
+
+### 2.3 File and Directory Naming Practices
+
+To maintain consistency and readability across the OpenIM project, observe the following naming practices:
+
+**File Names:**
+- Use underscores (`_`) as the default separator in filenames, keeping them short and descriptive.
+- Both hyphens (`-`) and underscores (`_`) are allowed, but underscores are preferred for general use.
+
+**Script and Markdown Files:**
+- Prefer hyphens (`-`) for shell scripts and Markdown (`.md`) files to enhance searchability and web compatibility.
+
+**Directories:**
+- Name directories with hyphens (`-`) exclusively to separate words, ensuring consistency and readability.
+
+Remember to keep filenames lowercase and use meaningful, concise identifiers to facilitate better organization and navigation within the project.
+
+### 2.4 Structure Naming
+
+- The camel case is adopted, and the first letter is uppercase or lowercase according to the access control, such as `MixedCaps` or `mixedCaps`.
+- Struct names should not be verbs, but should be nouns, such as `Node`, `NodeSpec`.
+- Avoid using meaningless structure names such as Data and Info.
+- The declaration and initialization of the structure should take multiple lines, for example:
+
+```go
+// User multi-line declaration
+type User struct {
+ name string
+ Email string
+}
+
+// multi-line initialization
+u := User{
+ UserName: "belm",
+ Email: "nosbelm@qq.com",
+}
+```
+
+### 2.5 Interface Naming
+
+- The interface naming rules are basically consistent with the structure naming rules:
+- An interface with a single method is named by suffixing the method name with "er" (e.g. Reader, Writer); this can sometimes produce awkward English, but that's okay.
+- The interface name of the two functions is named after the two function names, eg ReadWriter.
+- An interface name for more than three functions, similar to a structure name.
+
+For example:
+
+```go
+// Seeking to an offset before the start of the file is an error.
+// Seeking to any positive offset is legal, but the behavior of subsequent
+// I/O operations on the underlying object are implementation-dependent.
+type Seeker interface {
+ Seek(offset int64, whence int) (int64, error)
+}
+
+// ReadWriter is the interface that groups the basic Read and Write methods.
+type ReadWriter interface {
+ Reader
+ Writer
+}
+```
+
+### 2.6 Variable Naming
+
+- Variable names must follow camel case, and the initial letter is uppercase or lowercase according to the access control decision.
+- In relatively simple environments (few objects, highly specific context), some names can be abbreviated from full words to single letters, for example:
+  - user can be abbreviated as u;
+  - userID can be abbreviated as uid.
+- When using proper nouns, the following rules apply:
+  - If the variable is private and the proper noun is the first word, use lowercase, such as apiClient.
+  - In other cases, keep the noun's original casing, such as APIClient, repoID, UserID.
+
+Some common nouns are listed below.
+
+```go
+// A GonicMapper that contains a list of common initialisms taken from golang/lint
+var LintGonicMapper = GonicMapper{
+ "API": true,
+ "ASCII": true,
+ "CPU": true,
+ "CSS": true,
+ "DNS": true,
+ "EOF": true,
+ "GUID": true,
+ "HTML": true,
+ "HTTP": true,
+ "HTTPS": true,
+ "ID": true,
+ "IP": true,
+ "JSON": true,
+ "LHS": true,
+ "QPS": true,
+ "RAM": true,
+ "RHS": true,
+ "RPC": true,
+ "SLA": true,
+ "SMTP": true,
+ "SSH": true,
+ "TLS": true,
+ "TTL": true,
+ "UI": true,
+ "UID": true,
+ "UUID": true,
+ "URI": true,
+ "URL": true,
+ "UTF8": true,
+ "VM": true,
+ "XML": true,
+ "XSRF": true,
+ "XSS": true,
+}
+```
+
+- If the variable type is bool, the name should start with Has, Is, Can or Allow, for example:
+
+```go
+var hasConflict bool
+var isExist bool
+var canManage bool
+var allowGitHook bool
+```
+
+- Local variables should be as short as possible, for example, use buf to refer to buffer, and use i to refer to index.
+- The code automatically generated by the code generation tool can exclude this rule (such as the Id in `xxx.pb.go`)
+
+### 2.7 Constant Naming
+
+In Go, constants play a critical role in defining values that do not change throughout the execution of a program. Adhering to best practices in naming constants can significantly improve the readability and maintainability of your code. Here are some guidelines for constant naming:
+
+- **Camel Case Naming:** The name of a constant must follow the camel case notation. The initial letter should be uppercase or lowercase based on the access control requirements. Uppercase indicates that the constant is exported (visible outside the package), while lowercase indicates package-private visibility (visible only within its own package).
+
+- **Enumeration Type Constants:** For constants that represent a set of enumerated values, it's recommended to define a corresponding type first. This approach not only enhances type safety but also improves code readability by clearly indicating the purpose of the enumeration.
+
+**Example:**
+
+```go
+// Code defines an error code type.
+type Code int
+
+// Internal errors.
+const (
+ // ErrUnknown - 0: An unknown error occurred.
+ ErrUnknown Code = iota
+ // ErrFatal - 1: A fatal error occurred.
+ ErrFatal
+)
+```
+
+In the example above, `Code` is defined as a new type based on `int`. The enumerated constants `ErrUnknown` and `ErrFatal` are then defined with explicit comments to indicate their purpose and values. This pattern is particularly useful for grouping related constants and providing additional context.
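+
+Building on this example, a hedged sketch of giving such an enum readable names (the String method is illustrative and not part of the original example):
+
+```go
+package main
+
+import "fmt"
+
+// Code defines an error code type.
+type Code int
+
+// Internal errors.
+const (
+	// ErrUnknown - 0: An unknown error occurred.
+	ErrUnknown Code = iota
+	// ErrFatal - 1: A fatal error occurred.
+	ErrFatal
+)
+
+// String makes Code satisfy fmt.Stringer, so codes print readably.
+func (c Code) String() string {
+	switch c {
+	case ErrUnknown:
+		return "unknown error"
+	case ErrFatal:
+		return "fatal error"
+	default:
+		return fmt.Sprintf("code(%d)", int(c))
+	}
+}
+
+func main() {
+	fmt.Println(ErrFatal) // prints the human-readable name
+}
+```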
+
+### Global Variables and Constants Across Packages
+
+- **Use Constants for Global Variables:** When defining variables that are intended to be accessed across packages, prefer using constants to ensure immutability. This practice avoids unintended modifications to the value, which can lead to unpredictable behavior or hard-to-track bugs.
+
+- **Lowercase for Package-Private Usage:** If a global variable or constant is intended for use only within its own package, it should start with a lowercase letter. This clearly signals its limited scope of visibility, adhering to Go's access control mechanism based on naming conventions.
+
+**Guideline:**
+
+- For global constants that need to be accessed across packages, declare them with an uppercase initial letter. This makes them exported, adhering to Go's visibility rules.
+- For constants used within the same package, start their names with a lowercase letter to limit their scope to the package.
+
+**Example:**
+
+```go
+package config
+
+// MaxConnections - the maximum number of allowed connections. Visible across packages.
+const MaxConnections int = 100
+
+// minIdleTime - the minimum idle time before a connection is considered stale. Only visible within the config package.
+const minIdleTime int = 30
+```
+
+In this example, `MaxConnections` is a global constant meant to be accessed across packages, hence it starts with an uppercase letter. On the other hand, `minIdleTime` is intended for use only within the `config` package, so it starts with a lowercase letter.
+
+Following these guidelines ensures that your Go code is more readable, maintainable, and consistent with Go's design philosophy and access control mechanisms.
+
+
+
+
+### 2.10 Using Context with IO or Inter-Process Communication (IPC)
+
+In Go, `context.Context` is a powerful construct for managing deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes. It is particularly important in I/O operations or inter-process communication (IPC), where operations might need to be cancelled or timed out.
+
+#### Guideline: Use Context for IO and IPC
+
+- **Mandatory Use of Context:** When performing I/O operations or inter-process communication, it's crucial to use `context.Context` to manage the lifecycle of these operations. This includes setting deadlines, handling cancellation signals, and passing request-scoped values.
+
+#### Incorrect Example: Ignoring Context in an HTTP Call
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "net/http"
+ "log"
+)
+
+// FetchData makes an HTTP GET request to the specified URL and returns the response body.
+// This function does not use context, making it impossible to cancel the request or set a deadline.
+func FetchData(url string) (string, error) {
+ resp, err := http.Get(url) // Incorrect: Ignoring context
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ return string(body), nil
+}
+
+func main() {
+ data, err := FetchData("http://example.com")
+ if err != nil {
+ log.Fatalf("Failed to fetch data: %v", err)
+ }
+ log.Println(data)
+}
+```
+
+In this incorrect example, the `FetchData` function makes an HTTP GET request without using a `context`. This approach does not allow the request to be cancelled or a timeout to be set, potentially leading to resources being wasted if the server takes too long to respond or if the operation needs to be aborted for any reason.
+
+#### Correct Example: Using Context in an HTTP Call
+
+```go
+package main
+
+import (
+ "context"
+ "io/ioutil"
+ "net/http"
+ "log"
+ "time"
+)
+
+// FetchDataWithContext makes an HTTP GET request to the specified URL using the provided context.
+// This allows the request to be cancelled or timed out according to the context's deadline.
+func FetchDataWithContext(ctx context.Context, url string) (string, error) {
+ req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
+ if err != nil {
+ return "", err
+ }
+
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ return string(body), nil
+}
+
+func main() {
+ // Create a context with a 5-second timeout
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
+
+ data, err := FetchDataWithContext(ctx, "http://example.com")
+ if err != nil {
+ log.Fatalf("Failed to fetch data: %v", err)
+ }
+ log.Println(data)
+}
+```
+
+In the correct example, `FetchDataWithContext` uses a context to make the HTTP GET request. This allows the operation to be cancelled or subjected to a timeout, as dictated by the context passed to it. The `context.WithTimeout` function is used in `main` to create a context that cancels the request if it takes longer than 5 seconds, demonstrating a practical use of context to manage operation lifecycle.
+
+### Best Practices for Using Context
+
+1. **Pass context as the first parameter of a function**, following the convention `func(ctx context.Context, ...)`.
+2. **Never ignore the context** provided to you in functions that support it. Always use it in your I/O or IPC operations.
+3. **Avoid storing context in a struct**. Contexts are meant to be passed around within the call stack, not stored.
+4. **Use context's cancellation and deadline features** to control the lifecycle of blocking operations, especially in network I/O and IPC scenarios.
+5. **Propagate context down the call stack** to any function that supports it, ensuring that your application can respond to cancellation signals and deadlines effectively.
+
+By adhering to these guidelines and examples, you can ensure that your Go applications handle I/O and IPC operations more reliably and efficiently, with proper support for cancellation, timeouts, and request-scoped values.
+
+
+
+## 3. Logging Specification
+
+At startup, print normal flow logs (for example, a successful MongoDB connection). Never log sensitive information such as passwords.
+
+For abnormal termination at startup or at runtime, call ExitWithError if the program needs to terminate.
+
+For runtime logging: error logs must go through the logging library and be printed only at the top-most caller; debug logs may be printed freely; key events should be logged at the info level.
+
+## 5. Exceptions and Error Handling
+
+Using panic is forbidden under all circumstances.
+
+Errors must be wrapped with a message and key-value pairs to help users troubleshoot. Wrap an error only once: either where the error originates in the function itself, or where a function outside the project is called.
+
+Use errs.New() instead of errors.New().
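+
+The wrap-once rule can be sketched with the standard library (errs is the project's error package; fmt.Errorf with %w stands in for it here):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"os"
+)
+
+// loadConfig wraps the error exactly once, at the point where it calls
+// a function outside the project (os.ReadFile), adding key-value context.
+func loadConfig(path string) ([]byte, error) {
+	data, err := os.ReadFile(path)
+	if err != nil {
+		return nil, fmt.Errorf("read config: path=%s: %w", path, err)
+	}
+	return data, nil
+}
+
+// startServer propagates the already-wrapped error without wrapping again.
+func startServer(path string) error {
+	if _, err := loadConfig(path); err != nil {
+		return err
+	}
+	return nil
+}
+
+func main() {
+	if err := startServer("/nonexistent/config.yaml"); err != nil {
+		// The original cause is still reachable via errors.Is.
+		fmt.Println("is not-exist:", errors.Is(err, os.ErrNotExist))
+	}
+}
+```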
+
+
+
+
+
+### 1.4 Panic Processing
+
+The use of `panic` should be carefully controlled in Go applications to ensure program stability and predictable error handling. Following are revised guidelines emphasizing the restriction on using `panic` and promoting alternative strategies for error handling and program termination.
+
+- **Prohibited in Business Logic:** Using `panic` within business logic processing is strictly prohibited. Business logic should handle errors gracefully and use error returns to propagate issues up the call stack.
+
+- **Restricted Use in Main Package:** In the main package, the use of `panic` should be reserved for situations where the program is entirely inoperable, such as failure to open essential files, inability to connect to the database, or other critical startup issues. Even in these scenarios, prefer using structured error handling to terminate the program.
+
+- **Prohibition on Exportable Interfaces:** Exportable interfaces must not invoke `panic`. They should handle errors gracefully and return errors as part of their contract.
+
+- **Prefer Errors Over Panic:** It is recommended to use error returns instead of panic to convey errors within a package. This approach promotes error handling that integrates smoothly with Go's error handling idioms.
+
+#### Alternative to Panic: Structured Program Termination
+
+To enforce these guidelines, consider implementing structured functions to terminate the program gracefully in the face of unrecoverable errors, while providing clear error messages. Here are two recommended functions:
+
+```go
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+)
+
+// ExitWithError logs an error message and exits the program with a non-zero status.
+func ExitWithError(err error) {
+	progName := filepath.Base(os.Args[0])
+	fmt.Fprintf(os.Stderr, "%s exit -1: %+v\n", progName, err)
+	os.Exit(-1)
+}
+
+// SIGTERMExit logs a warning when the program receives SIGTERM, just before the process exits with status 0.
+func SIGTERMExit() {
+	progName := filepath.Base(os.Args[0])
+	fmt.Fprintf(os.Stderr, "Warning %s receive process terminal SIGTERM exit 0\n", progName)
+}
+```
+
+#### Example Usage:
+
+```go
+import (
+	_ "net/http/pprof"
+
+	"git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
+	util "git.imall.cloud/openim/open-im-server-deploy/pkg/util/genutil"
+)
+
+func main() {
+ apiCmd := cmd.NewApiCmd()
+ apiCmd.AddPortFlag()
+ apiCmd.AddPrometheusPortFlag()
+ if err := apiCmd.Execute(); err != nil {
+ util.ExitWithError(err)
+ }
+}
+```
+
+In this example, `ExitWithError` is used to terminate the program when an unrecoverable error occurs, providing a clear error message to stderr and exiting with a non-zero status. This approach ensures that critical errors are logged and the program exits in a controlled manner, facilitating troubleshooting and maintaining the stability of the application.
+
+
+
+### 1.3 Error Handling
+
+- `error` is returned as a function value and must be handled, or explicitly assigned away to ignore it. `defer xx.Close()` does not require explicit handling.
+
+```go
+func load() error {
+	// normal code
+}
+
+// bad
+load()
+
+//good
+_ = load()
+```
+
+- When `error` is returned together with other values, it must be the last return value.
+
+```go
+// bad
+func load() (error, int) {
+// normal code
+}
+
+//good
+func load() (int, error) {
+// normal code
+}
+```
+
+- Perform error handling as early as possible and return as early as possible to reduce nesting.
+
+```go
+// bad
+if err != nil {
+	// error code
+} else {
+	// normal code
+}
+
+//good
+if err != nil {
+	// error handling
+	return err
+}
+// normal code
+```
+
+- If you need to use the result of the function call outside the if statement, use the following form.
+
+```go
+// bad
+if v, err := foo(); err != nil {
+// error handling
+}
+
+// good
+v, err := foo()
+if err != nil {
+// error handling
+}
+```
+
+- Errors should be checked independently, not combined with other logic.
+
+```go
+// bad
+v, err := foo()
+if err != nil || v == nil {
+ // error handling
+ return err
+}
+
+//good
+v, err := foo()
+if err != nil {
+ // error handling
+ return err
+}
+
+if v == nil {
+	// error handling
+	return errors.New("invalid value v")
+}
+```
+
+- If the return value needs to be initialized, use the following method.
+
+```go
+v, err := f()
+if err != nil {
+ // error handling
+ return // or continue.
+}
+```
+
+- Error message suggestions:
+- Error descriptions start with a lowercase letter and do not end with punctuation, for example:
+
+```go
+// bad
+errors.New("Redis connection failed")
+errors.New("redis connection failed.")
+
+// good
+errors.New("redis connection failed")
+```
+
+- Tell users what they can do, not what they can't.
+- When declaring a requirement, use must instead of should. For example, `must be greater than 0, must match regex '[a-z]+'`.
+- When declaring that a format is incorrect, use must not. For example, `must not contain`.
+- Use may not when declaring an action. For example, `may not be specified when otherField is empty, only name may be specified`.
+- When quoting a literal string value, put the literal in single quotes. For example, `must not contain '..'`.
+- When referencing another field name, specify that name in backticks. For example, must be greater than `request`.
+- When specifying unequal, use words instead of symbols. For example, `must be less than 256, must be greater than or equal to 0 (do not use larger than, bigger than, more than, higher than)`.
+- When specifying ranges of numbers, use inclusive ranges whenever possible.
+- Go 1.13 or above is recommended, and the error generation method is `fmt.Errorf("module xxx: %w", err)`.
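+
+A small sketch applying the wording rules above (the field name and limits are made up):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// validatePort returns messages that follow the "must ..." wording rules.
+func validatePort(port int) error {
+	if port <= 0 {
+		return errors.New("port must be greater than 0")
+	}
+	if port >= 65536 {
+		return errors.New("port must be less than 65536")
+	}
+	return nil
+}
+
+func main() {
+	if err := validatePort(-1); err != nil {
+		fmt.Println(err)
+	}
+}
+```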
+
+### 1.6 Type assertion failure handling
+
+- A single return value from a type assertion will panic for an incorrect type. Always use the "comma ok" idiom.
+
+```go
+// bad
+t := n.(int)
+
+//good
+t, ok := n.(int)
+if !ok {
+// error handling
+}
+```
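+
+When several types are possible, a type switch extends the comma-ok idiom without risking a panic; a minimal sketch:
+
+```go
+package main
+
+import "fmt"
+
+// describe classifies an arbitrary value without risking a panic.
+func describe(n interface{}) string {
+	switch v := n.(type) {
+	case int:
+		return fmt.Sprintf("int: %d", v)
+	case string:
+		return fmt.Sprintf("string: %q", v)
+	default:
+		return fmt.Sprintf("unexpected type %T", v)
+	}
+}
+
+func main() {
+	fmt.Println(describe(42))
+	fmt.Println(describe("hello"))
+	fmt.Println(describe(3.14))
+}
+```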
+
+
+
+### 2.8 Error naming
+
+- The Error type should be written in the form of FooError.
+
+```go
+type ExitError struct {
+// ....
+}
+```
+
+- The Error variable is written in the form of ErrFoo.
+
+```go
+var ErrFormat = errors.New("unknown format")
+```
+
+For non-standard Err naming, CI/CD will report an error.
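+
+A sketch tying the two conventions together (the Code field is hypothetical):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// ExitError follows the FooError naming convention for error types.
+type ExitError struct {
+	Code int
+}
+
+// Error makes ExitError satisfy the error interface.
+func (e *ExitError) Error() string {
+	return fmt.Sprintf("process exited with code %d", e.Code)
+}
+
+// ErrFormat follows the ErrFoo naming convention for error variables.
+var ErrFormat = errors.New("unknown format")
+
+func main() {
+	var err error = &ExitError{Code: 2}
+	fmt.Println(err)
+	fmt.Println(ErrFormat)
+}
+```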
+
+
+### 2.9 Handling Errors Properly
+
+In Go, proper error handling is crucial for creating reliable and maintainable applications. It's important to ensure that errors are not ignored or discarded, as this can lead to unpredictable behavior and difficult-to-debug issues. Here are the guidelines and examples regarding the proper handling of errors.
+
+#### Guideline: Do Not Discard Errors
+
+- **Mandatory Error Propagation:** When calling a function that returns an error, the calling function must handle or propagate the error, instead of ignoring it. This approach ensures that errors are not silently ignored, allowing higher-level logic to make informed decisions about error handling.
+
+#### Incorrect Example: Discarding an Error
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "log"
+)
+
+func ReadFileContent(filename string) string {
+ content, _ := ioutil.ReadFile(filename) // Incorrect: Error is ignored
+ return string(content)
+}
+
+func main() {
+ content := ReadFileContent("example.txt")
+ log.Println(content)
+}
+```
+
+In this incorrect example, the error returned by `ioutil.ReadFile` is ignored. This can lead to situations where the program continues execution even if the file doesn't exist or cannot be accessed, potentially causing more cryptic errors downstream.
+
+#### Correct Example: Propagating an Error
+
+```go
+package main
+
+import (
+ "io/ioutil"
+ "log"
+)
+
+// ReadFileContent attempts to read and return the content of the specified file.
+// It returns an error if reading fails.
+func ReadFileContent(filename string) (string, error) {
+ content, err := ioutil.ReadFile(filename)
+ if err != nil {
+ // Correct: Propagate the error
+ return "", err
+ }
+ return string(content), nil
+}
+
+func main() {
+ content, err := ReadFileContent("example.txt")
+ if err != nil {
+ log.Fatalf("Failed to read file: %v", err)
+ }
+ log.Println(content)
+}
+```
+
+In the correct example, the error returned by `ioutil.ReadFile` is propagated back to the caller. The `main` function then checks the error and terminates the program with an appropriate error message if an error occurred. This approach ensures that errors are handled appropriately, and the program does not proceed with invalid state.
+
+### Best Practices for Error Handling
+
+1. **Always check the error returned by a function.** Do not ignore it.
+2. **Propagate errors up the call stack unless they can be handled gracefully at the current level.**
+3. **Provide context for errors when propagating them, making it easier to trace the source of the error.** This can be achieved using `fmt.Errorf` with the `%w` verb or dedicated wrapping functions provided by some error handling packages.
+4. **Log the error at the point where it is handled or where it causes the program to terminate, to provide insight into the failure.**
+
+By following these guidelines, you ensure that your Go applications handle errors in a consistent and effective manner, improving their reliability and maintainability.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## 3. Comment specification
+
+- Each exportable name must have a comment, which briefly introduces the exported variables, functions, structures, interfaces, etc.
+- Use single-line comments (//) everywhere; multi-line (/* */) comments are prohibited.
+- As with the code specification, a single-line comment should not be too long: at most 120 characters. If it exceeds that, continue on a new line and try to keep the format elegant.
+- A comment must be a complete sentence that starts with the name being described and ends with a period; the format is `// name description.`. For example:
+
+```go
+// bad
+// logs the flags in the flagset.
+func PrintFlags(flags *pflag.FlagSet) {
+// normal code
+}
+
+//good
+// PrintFlags logs the flags in the flagset.
+func PrintFlags(flags *pflag.FlagSet) {
+// normal code
+}
+```
+
+- All commented out code should be deleted before submitting code review, otherwise, it should explain why it is not deleted, and give follow-up processing suggestions.
+
+- Multiple comments can be separated by blank lines, as follows:
+
+```go
+// Package superman implements methods for saving the world.
+//
+// Experience has shown that a small number of procedures can prove
+// helpful when attempting to save the world.
+package superman
+```
+
+### 3.1 Package Notes
+
+- Each package has one and only one package-level annotation.
+- Package comments are uniformly commented with // in the format of `// Package package description`, for example:
+
+```go
+// Package genericclioptions contains flags which can be added to you command, bound, completed, and produce
+// useful helper functions.
+package genericclioptions
+```
+
+### 3.2 Variable/Constant Comments
+
+- Each variable/constant that can be exported must have a comment description, `the format is // variable name variable description`, for example:
+
+```go
+// ErrSigningMethod defines invalid signing method error.
+var ErrSigningMethod = errors.New("invalid signing method")
+```
+
+- When there is a large block of constant or variable definition, you can comment a general description in front, and then comment the definition of the constant in detail before or at the end of each line of constant, for example:
+
+```go
+// Code must start with 1xxxxx.
+const (
+ // ErrSuccess - 200: OK.
+ ErrSuccess int = iota + 100001
+
+ // ErrUnknown - 500: Internal server error.
+ ErrUnknown
+
+ // ErrBind - 400: Error occurred while binding the request body to the struct.
+ ErrBind
+
+ // ErrValidation - 400: Validation failed.
+ ErrValidation
+)
+```
+
+### 3.3 Structure Annotation
+
+- Each structure or interface that needs to be exported must have a comment description, the format is `// structure name structure description.`.
+- The name of the exportable member variable in the structure, if the meaning is not clear, a comment must be given and placed before the member variable or at the end of the same line. For example:
+
+```go
+// User represents a user restful resource. It is also used as gorm model.
+type User struct {
+ // Standard object's metadata.
+ metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ Nickname string `json:"nickname" gorm:"column:nickname"`
+ Password string `json:"password" gorm:"column:password"`
+ Email string `json:"email" gorm:"column:email"`
+ Phone string `json:"phone" gorm:"column:phone"`
+ IsAdmin int `json:"isAdmin,omitempty" gorm:"column:isAdmin"`
+}
+```
+
+### 3.4 Method Notes
+
+Each function or method that needs to be exported must have a comment in the format `// function name function description.`, for example:
+
+```go
+// BeforeUpdate run before update database record.
+func (p *Policy) BeforeUpdate() (err error) {
+// normal code
+ return nil
+}
+```
+
+### 3.5 Type annotations
+
+- Each type definition and type alias that needs to be exported must have a comment description, the format is `// type name type description.`, for example:
+
+```go
+// Code defines an error code type.
+type Code int
+```
+
+## 4. Type
+
+### 4.1 Strings
+
+- Empty string judgment.
+
+```go
+// bad
+if s == "" {
+ // normal code
+}
+
+//good
+if len(s) == 0 {
+ // normal code
+}
+```
+
+- `[]byte`/`string` equality comparison.
+
+```go
+// bad
+var s1 []byte
+var s2 []byte
+...
+bytes.Compare(s1, s2) == 0
+bytes.Compare(s1, s2) != 0
+
+//good
+var s1 []byte
+var s2 []byte
+...
+bytes.Equal(s1, s2)
+!bytes.Equal(s1, s2)
+```
+
+- Complex strings use raw strings to avoid character escaping.
+
+```go
+// bad
+regexp.MustCompile("\\.")
+
+//good
+regexp.MustCompile(`\.`)
+```
+
+### 4.2 Slicing
+
+- Empty slice judgment.
+
+```go
+// bad
+if slice != nil && len(slice) == 0 {
+	// normal code
+}
+
+//good
+if len(slice) == 0 {
+	// normal code
+}
+```
+
+The above judgment also applies to map and channel.
+
+- Declare a slice.
+
+```go
+// bad
+s := []string{}
+s := make([]string, 0)
+
+//good
+var s []string
+```
+
+- slice copy.
+
+```go
+// bad
+var b1, b2 []byte
+for i, v := range b1 {
+ b2[i] = v
+}
+for i := range b1 {
+ b2[i] = b1[i]
+}
+
+//good
+copy(b2, b1)
+```
+
+- slice added.
+
+```go
+// bad
+var a, b []int
+for _, v := range a {
+ b = append(b, v)
+}
+
+//good
+var a, b []int
+b = append(b, a...)
+```
+
+### 4.3 Structure
+
+- struct initialization.
+
+The struct is initialized in multi-line format.
+
+```go
+type user struct {
+	ID   int64
+	Name string
+}
+
+u1 := user{100, "Colin"}
+
+u2 := user{
+	ID:   200,
+	Name: "Lex",
+}
+```
+
+
+
+
+## 5. Control Structure
+
+### 5.1 if
+
+- if accepts the initialization statement, the convention is to create local variables in the following way.
+
+```go
+if err := loadConfig(); err != nil {
+// error handling
+return err
+}
+```
+
+- if For variables of bool type, true and false judgments should be made directly.
+
+```go
+var isAllow bool
+if isAllow {
+// normal code
+}
+```
+
+### 5.2 for
+
+- Create local variables using short declarations.
+
+```go
+sum := 0
+for i := 0; i < 10; i++ {
+ sum += 1
+}
+```
+
+- Don't use defer inside a for loop; deferred calls only run when the surrounding function exits.
+
+```go
+// bad
+for _, file := range files {
+	fd, err := os.Open(file)
+	if err != nil {
+		return err
+	}
+	defer fd.Close()
+	// normal code
+}
+
+//good
+for _, file := range files {
+	if err := func() error {
+		fd, err := os.Open(file)
+		if err != nil {
+			return err
+		}
+		defer fd.Close()
+		// normal code
+		return nil
+	}(); err != nil {
+		return err
+	}
+}
+```
+
+### 5.3 range
+
+- If only the first item (key) is needed, discard the second.
+
+```go
+for keyIndex := range keys {
+// normal code
+}
+```
+
+- If only the second item is required, underline the first item.
+
+```go
+sum := 0
+for _, value := range array {
+ sum += value
+}
+```
+
+### 5.4 switch
+
+- A switch must have a default case.
+
+```go
+switch os := runtime.GOOS; os {
+ case "linux":
+ fmt.Println("Linux.")
+ case "darwin":
+ fmt.Println("OS X.")
+ default:
+ fmt.Printf("%s.\n", os)
+}
+```
+
+### 5.5 goto
+
+- Business code must not use goto.
+- Frameworks and other low-level source code should also avoid it whenever possible.
+
+## 6. Functions
+
+- Parameter and return variable names start with a lowercase letter.
+- The number of function parameters must not exceed 5.
+- Function grouping and ordering:
+  - Functions should be sorted in rough calling order.
+  - Functions in the same file should be grouped by receiver.
+- Prefer passing by value over passing by pointer.
+- When a parameter is a map, slice, chan, or interface, do not pass a pointer to it.
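+
+A sketch of the last rule (names are made up): maps and slices are already reference-like, so passing a pointer to them adds indirection without benefit.
+
+```go
+package main
+
+import "fmt"
+
+// addUser takes the map by value; the underlying data is still shared,
+// so mutations are visible to the caller without passing a pointer.
+func addUser(users map[string]int, name string, id int) {
+	users[name] = id
+}
+
+func main() {
+	users := map[string]int{}
+	addUser(users, "alice", 1)
+	fmt.Println(users["alice"])
+}
+```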
+
+### 6.1 Function parameters
+
+- If the function returns two or three arguments of the same type, or if the meaning of the result is not clear from the context, use named returns, otherwise it is not recommended to use named returns, for example:
+
+```go
+func coordinate() (x, y float64, err error) {
+// normal code
+}
+```
+
+- Both incoming and returned variables start with a lowercase letter.
+- Try to pass by value instead of pointer.
+- The number of parameters cannot exceed 5.
+- Return at most three values; if more are needed, wrap them in a struct.
+
+### 6.2 defer
+
+- Release resources with defer immediately after creating them (defer can be used freely: its performance was greatly improved in Go 1.14, so the overhead is negligible even in performance-sensitive code).
+- Check for errors first, then defer the resource release, for example:
+
+```go
+resp, err := http.Get(url)
+if err != nil {
+	return err
+}
+defer resp.Body.Close()
+```
+
+### 6.3 Method Receiver
+
+- It is recommended to use the lowercase first letter of the type name as the receiver name.
+- Do not use a single-character receiver name when the method exceeds 20 lines.
+- The receiver name must not be a confusing name such as me, this, or self.
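+
+A sketch of these receiver rules (the type and method are hypothetical):
+
+```go
+package main
+
+import "fmt"
+
+// Policy is an example type; its receiver is named p, the lowercase
+// first letter of the type name, never me, this, or self.
+type Policy struct {
+	Name string
+}
+
+// Describe returns a short description of the policy.
+func (p *Policy) Describe() string {
+	return fmt.Sprintf("policy %q", p.Name)
+}
+
+func main() {
+	p := &Policy{Name: "default"}
+	fmt.Println(p.Describe())
+}
+```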
+
+### 6.4 Nesting
+
+- The nesting depth cannot exceed 4 levels.
+
+### 6.5 Variable Naming
+
+- Declare variables as close as possible to their first use, following the principle of proximity.
+- If a magic number appears more than twice, its direct use is forbidden; replace it with a constant, for example:
+
+```go
+// Price is the unit price of fruit.
+const Price = 3.14
+
+func getAppleCost(n float64) float64 {
+	return Price * n
+}
+
+func getOrangeCost(n float64) float64 {
+	return Price * n
+}
+```
+
+## 7. GOPATH setting specification
+
+- Since Go 1.11, the GOPATH rule has been relaxed, but existing code (many libraries were created before 1.11) still conforms to it. Keeping the GOPATH convention makes such code easier to maintain.
+- Only one GOPATH is recommended, multiple GOPATHs are not recommended. If multiple GOPATHs are used, the bin directory where compilation takes effect is under the first GOPATH.
+
+
+
+## 8. Dependency Management
+
+- Go 1.11 and above must use Go Modules.
+- When using Go Modules as a dependency management project, it is not recommended to submit the vendor directory.
+- When using Go Modules as a dependency management project, the go.sum file must be submitted.
+
+### 9. Best Practices
+
+- Minimize the use of global variables; pass parameters instead so that each function is "stateless". This reduces coupling and simplifies division of labor and unit testing.
+- Verify interface compliance at compile time, for example:
+
+```go
+type LogHandler struct {
+	h   http.Handler
+	log *zap.Logger
+}
+
+var _ http.Handler = LogHandler{}
+```
+
+- When the server processes a request, it should create a context, save the relevant information of the request (such as requestID), and pass it in the function call chain.
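+
+A minimal sketch of carrying a request ID through the call chain via context (the key type and helper names are made up):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// ctxKey is an unexported key type, avoiding collisions with other packages.
+type ctxKey string
+
+const requestIDKey ctxKey = "requestID"
+
+// WithRequestID stores the request ID in the context.
+func WithRequestID(ctx context.Context, id string) context.Context {
+	return context.WithValue(ctx, requestIDKey, id)
+}
+
+// RequestID retrieves the request ID, or "" if absent.
+func RequestID(ctx context.Context) string {
+	id, _ := ctx.Value(requestIDKey).(string)
+	return id
+}
+
+// handle is a downstream function that can log the request ID.
+func handle(ctx context.Context) {
+	fmt.Println("handling request", RequestID(ctx))
+}
+
+func main() {
+	ctx := WithRequestID(context.Background(), "req-123")
+	handle(ctx)
+}
+```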
+
+### 9.1 Performance
+
+- string is an immutable type; modifying a string is a relatively heavy operation that usually requires allocating new memory. If the content needs to be modified, prefer []byte unless there is a special reason not to.
+- Prefer strconv over fmt.
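+
+The strconv preference can be sketched as follows; both conversions produce the same string, but strconv avoids fmt's reflection overhead:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	n := 42
+
+	// Preferred: strconv is faster for simple conversions.
+	s1 := strconv.Itoa(n)
+
+	// Works, but slower: fmt uses reflection internally.
+	s2 := fmt.Sprintf("%d", n)
+
+	fmt.Println(s1, s2)
+}
+```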
+
+### 9.2 Precautions
+
+- append may allocate and return a new underlying array; be careful with the returned value.
+- To modify map values in place, the value type must be a pointer; otherwise the whole value has to be overwritten.
+- A map must be locked when accessed concurrently.
+- interface{} conversions cannot be checked at compile time, only at runtime; be careful not to cause a panic.
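+
+Two of these pitfalls in one sketch: modifying a struct value stored in a map, and guarding an interface{} conversion with comma-ok:
+
+```go
+package main
+
+import "fmt"
+
+type counter struct {
+	n int
+}
+
+func main() {
+	// Map values of struct type are not addressable: with map[string]counter,
+	// m["a"].n++ would not compile. Store pointers for in-place modification.
+	m := map[string]*counter{"a": {n: 1}}
+	m["a"].n++
+	fmt.Println(m["a"].n)
+
+	// interface{} conversions are only checked at runtime; use comma-ok to
+	// avoid a panic on the wrong type.
+	var v interface{} = "hello"
+	if s, ok := v.(string); ok {
+		fmt.Println(s)
+	}
+}
+```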
+
+
+
+
+
+## 10 Golang CI Lint
+
+- Golang CI Lint is a fast Go linters runner. It runs linters in parallel, uses caching, and works well with all environments, including CI.
+
+**In local development, you can run Golang CI Lint with the following command:**
+
+```bash
+make lint
+```
+
+**In CI/CD, check the GitHub Actions status after you push your code:**
+
+[](https://github.com/openimsdk/open-im-server-deploy/actions/workflows/golangci-lint.yml)
+
+golangci-lint lets you choose which linters to run; refer to the official documentation: [https://golangci-lint.run/usage/linters/](https://golangci-lint.run/usage/linters/)
+
+The linters we currently enable are listed in the `linters.enable` field of [https://github.com/openimsdk/open-im-server-deploy/blob/main/.golangci.yml](https://github.com/openimsdk/open-im-server-deploy/blob/main/.golangci.yml).
+
+e.g:
+
+```yaml
+linters:
+ # please, do not use `enable-all`: it's deprecated and will be removed soon.
+ # inverted configuration with `enable-all` and `disable` is not scalable during updates of golangci-lint
+ # enable-all: true
+ disable-all: true
+ enable:
+ - typecheck # Basic type checking
+ - gofmt # Format check
+ - govet # Go's standard linting tool
+ - gosimple # Suggestions for simplifying code
+ - misspell # Spelling mistakes
+ - staticcheck # Static analysis
+ - unused # Checks for unused code
+ - goimports # Checks if imports are correctly sorted and formatted
+ - godot # Checks for comment punctuation
+ - bodyclose # Ensures HTTP response body is closed
+ - errcheck # Checks for missed error returns
+ fast: true
+```
+
+In addition, Chinese comments are not allowed in Go code; all comments must be written in English.
+
+
+### 10.1 Configuration Document
+
+This configuration document describes how to configure golangci-lint for OpenIM: runtime parameters, output formats, and detailed settings for individual linters. Below is a summary based on the project's configuration.
+
+#### 10.1.1 Runtime Options
+
+- **Concurrency** (`concurrency`): Default to use the available CPU count, can be manually set to 4 for parallel analysis.
+- **Timeout** (`timeout`): Timeout duration for analysis operations, default is 1 minute, set here to 5 minutes.
+- **Issue Exit Code** (`issues-exit-code`): Exit code defaults to 1 if at least one issue is found.
+- **Test Files** (`tests`): Whether to include test files, defaults to true.
+- **Build Tags** (`build-tags`): Specify build tags used by all linters, defaults to an empty list. Example adds `mytag`.
+- **Skip Directories** (`skip-dirs`): Configure which directories' issues are not reported, defaults to empty, but some default directories are independently skipped.
+- **Skip Files** (`skip-files`): Specify files where issues should not be reported, supports regular expressions.
+
+#### 10.1.2 Output Configuration
+
+- **Format** (`format`): Set output format, default is "colored-line-number".
+- **Print Issued Lines** (`print-issued-lines`): Whether to print the lines where issues occur, defaults to true.
+- **Print Linter Name** (`print-linter-name`): Whether to print the linter name at the end of issue text, defaults to true.
+- **Uniqueness Filter** (`uniq-by-line`): Whether to make issue outputs unique per line, defaults to true.
+- **Path Prefix** (`path-prefix`): Prefix to add to output file references, defaults to no prefix.
+- **Sort Results** (`sort-results`): Sort results by file path, line number, and column number.
+
+#### 10.1.3 Linters Settings
+
+In the configuration file, the `linters-settings` section allows detailed configuration of individual linters. Below are examples of specific linters settings and their purposes:
+
+- **bidichk**: Used to check bidirectional text characters, ensuring correct display direction of text, especially when dealing with mixed left-to-right (LTR) and right-to-left (RTL) text.
+
+- **dogsled**: Monitors excessive use of blank identifiers (`_`) in assignment operations, which may obscure data processing errors or unclear logic.
+
+- **dupl**: Identifies duplicate code blocks, helping developers avoid redundancy. The `threshold` setting adjusts the minimum token-sequence length that is reported as a duplicate.
+
+- **errcheck**: Checks for unhandled errors. In Go, error handling is achieved by checking function return values. This linter helps ensure all errors are properly handled.
+
+- **exhaustive**: Checks if `switch` statements include all possible values of an enum type, ensuring exhaustiveness of code. This helps avoid forgetting to handle certain cases.
+
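+In `.golangci.yml`, these linters are tuned under `linters-settings`. The values below are illustrative, not OpenIM's exact configuration:
+
+```yaml
+linters-settings:
+  dogsled:
+    max-blank-identifiers: 2   # warn when more than 2 blank identifiers appear in one assignment
+  dupl:
+    threshold: 150             # token-sequence length that counts as a duplicate
+  errcheck:
+    check-type-assertions: true
+  exhaustive:
+    default-signifies-exhaustive: false   # a default case does not silence missing enum members
+```
+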
+#### 10.4 Example: `errcheck`
+
+**Incorrect Code Example**:
+
+```go
+package main
+
+import "os"
+
+func main() {
+ f, _ := os.Open("filename.ext")
+ defer f.Close()
+}
+```
+
+**Issue**: In the above code, the error return value of the `os.Open` function is explicitly ignored. This is a common mistake, as it can lead to unhandled errors and hard-to-trace bugs.
+
+**Correct Form**:
+
+```go
+package main
+
+import (
+ "fmt"
+ "os"
+)
+
+func main() {
+ f, err := os.Open("filename.ext")
+ if err != nil {
+ fmt.Printf("error opening file: %v\n", err)
+ return
+ }
+ defer f.Close()
+}
+```
+
+In the correct form, by checking the error (`err`) returned by `os.Open`, we gracefully handle error cases rather than simply ignoring them.
+
+#### 10.5 Example: `gofmt`
+
+**Incorrect Code Example**:
+
+```go
+package main
+import "fmt"
+func main() {
+fmt.Println("Hello, world!")
+}
+```
+
+**Issue**: This code snippet doesn't follow Go's standard formatting rules: the `import` and `func` declarations are not separated by blank lines, and the `fmt.Println` call is not indented.
+
+**Correct Form**:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+ fmt.Println("Hello, world!")
+}
+```
+
+The `gofmt` tool can fix such formatting issues automatically, ensuring the code adheres to the coding standards of the Go community.
+
+#### 10.6 Example: `unused`
+
+**Incorrect Code Example**:
+
+```go
+package main
+
+func helper() {}
+
+func main() {}
+```
+
+**Issue**: The `helper` function is defined but not called anywhere, indicating potential redundant code or missing functionality implementation.
+
+**Correct Form**:
+
+```go
+package main
+
+// If the helper function is indeed needed, ensure it's used properly.
+func helper() {
+ // Implement the function's functionality or ensure it's called elsewhere
+}
+
+func main() {
+ helper()
+}
+```
+
+The following examples expand on individual linter settings and reinforce them through concrete code.
+
+#### 10.7 Example: `dogsled`
+
+**Incorrect Code Example**:
+
+```go
+package main
+
+import "fmt"
+
+func getValues() (int, int, int) {
+ return 1, 2, 3
+}
+
+func main() {
+ _, _, val := getValues()
+ fmt.Println(val) // Only interested in the third return value
+}
+```
+
+**Explanation**: In the above code, two blank identifiers discard the first two return values. Excessive use of blank identifiers makes code harder to read and can hide mistakes.
+
+**Improved Code**:
+Consider refactoring the function or the usage of return values to reduce the need for blank identifiers or explicitly comment why ignoring certain values is safe.
+
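+As suggested above, one refactoring that removes the stacked blank identifiers is to group the results in a named struct, so callers reference only the field they need (`Values` and `getValues` are illustrative names, not OpenIM APIs):
+
+```go
+package main
+
+import "fmt"
+
+// Values names the three results so callers can pick a field
+// instead of discarding positional returns with blank identifiers.
+type Values struct {
+	First, Second, Third int
+}
+
+func getValues() Values {
+	return Values{First: 1, Second: 2, Third: 3}
+}
+
+func main() {
+	v := getValues()
+	fmt.Println(v.Third)
+}
+```
+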
+#### 10.8 Example: `exhaustive`
+
+**Incorrect Code Example**:
+
+```go
+type Fruit int
+
+const (
+ Apple Fruit = iota
+ Banana
+ Orange
+)
+
+func getFruitName(f Fruit) string {
+ switch f {
+ case Apple:
+ return "Apple"
+ case Banana:
+ return "Banana"
+ // Missing handling for Orange
+ }
+ return "Unknown"
+}
+```
+
+**Explanation**: In this code, the `switch` statement doesn't cover all possible values of the `Fruit` type; the case for `Orange` is missing.
+
+**Improved Code**:
+
+```go
+func getFruitName(f Fruit) string {
+ switch f {
+ case Apple:
+ return "Apple"
+ case Banana:
+ return "Banana"
+ case Orange:
+ return "Orange"
+ }
+ return "Unknown"
+}
+```
+
+By adding the missing `case`, we ensure the `switch` statement is exhaustive, handling every possible enum value.
+
+#### 10.9 Optimization of Configuration Files and Application of Code Analysis Tools
+
+Through these examples, we demonstrate how to improve code quality by identifying and fixing common coding issues. OpenIM's configuration files allow developers to customize linters' behavior according to project requirements, ensuring code compliance with predefined quality standards and style guidelines.
+
+By employing these tools and configuration strategies, teams can reduce the number of bugs, enhance code maintainability, and facilitate efficient collaboration during code review processes.
+
+
+
+
+
+
+
diff --git a/docs/contrib/go-doc.md b/docs/contrib/go-doc.md
new file mode 100644
index 0000000..65b8851
--- /dev/null
+++ b/docs/contrib/go-doc.md
@@ -0,0 +1,50 @@
+# Go Language Documentation for OpenIM
+
+In the realm of software development, especially within Go language projects, documentation plays a crucial role in ensuring code maintainability and ease of use. Properly written and accurate documentation is not only essential for understanding and utilizing software effectively but also needs to be easy to write and maintain. This principle is at the heart of OpenIM's approach to supporting commands and generating documentation.
+
+## Supported Commands in OpenIM
+
+OpenIM leverages Go language's documentation standards to facilitate clear and maintainable code documentation. Below are some of the key commands used in OpenIM for documentation purposes:
+
+### `go doc` Command
+
+The `go doc` command is used to print documentation for Go language entities such as variables, constants, functions, structures, and interfaces. This command allows specifying the identifier of the program entity to tailor the output. Examples of `go doc` command usage include:
+
+- `go doc sync.WaitGroup.Add` prints the documentation for a specific method of a type in a package.
+- `go doc -u -all sync.WaitGroup` displays all program entities, including unexported ones, for a specified type.
+- `go doc -u sync` outputs the program entities of a specified package, including unexported ones, without their detailed comments.
+
+### `godoc` Command
+
+For environments lacking internet access, the `godoc` command serves to view documentation for the Go standard library and project dependencies in a web format. Notably, in versions after Go 1.12, `godoc` is no longer shipped with the Go distribution. It can be installed using:
+
+```shell
+go install golang.org/x/tools/cmd/godoc@latest
+```
+
+The `godoc` command, once running, hosts a local web server (by default on port 6060) to facilitate documentation browsing at http://127.0.0.1:6060. It generates documentation based on the GOROOT and GOPATH directories, showcasing both the project's own documentation and that of third-party packages installed via `go get`.
+
+### Custom Documentation Generation Commands in OpenIM
+
+OpenIM includes a suite of commands aimed at initializing, generating, and maintaining project documentation and associated files. Some notable commands are:
+
+- `gen.init`: Initializes the OpenIM server project.
+- `gen.docgo`: Generates missing `doc.go` files for Go packages, crucial for package-level documentation.
+- `gen.errcode.doc`: Generates markdown documentation for OpenIM error codes.
+- `gen.ca`: Generates CA files for all certificates, enhancing security documentation.
+
+These commands underscore the project's commitment to thorough and accessible documentation, supporting both developers and users alike.
+
+## Writing Your Own Documentation
+
+When creating documentation for Go projects, including OpenIM, it's important to follow certain practices:
+
+1. **Commenting**: Use single-line (`//`) and block (`/* */`) comments to provide detailed documentation within the code. Block comments are especially useful for package documentation, which should immediately precede the package statement without any intervening blank lines.
+
+2. **Overview Section**: To create an overview section in the documentation, place a block comment directly before the package statement. This section should succinctly describe the package's purpose and functionality.
+
+3. **Detailed Descriptions**: Comments placed before functions, structures, or variables will be used to generate detailed descriptions in the documentation. Follow the same commenting rules as for the overview section.
+
+4. **Examples**: Include example functions prefixed with `Example` to demonstrate usage. Output from these examples can be documented at the end of the function, starting with `// Output:` followed by the expected result.
+
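+Putting these practices together, here is a sketch of a documented package. The names are illustrative, and the `Example` function would normally live in a `_test.go` file so that `go test` can verify its `// Output:` comment:
+
+```go
+// Package greet demonstrates Go documentation conventions: this block
+// comment, placed directly above the package clause with no intervening
+// blank line, becomes the package overview shown by `go doc`.
+package greet
+
+import "fmt"
+
+// Hello returns a greeting for the given name. This comment becomes
+// the function's documentation.
+func Hello(name string) string {
+	return fmt.Sprintf("Hello, %s!", name)
+}
+
+// ExampleHello appears in the generated documentation; `go test`
+// compares the Output comment against the actual output.
+func ExampleHello() {
+	fmt.Println(Hello("OpenIM"))
+	// Output: Hello, OpenIM!
+}
+```
+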
+Through adherence to these documentation practices, OpenIM ensures that its codebase remains accessible, maintainable, and easy to use for developers and users alike.
\ No newline at end of file
diff --git a/docs/contrib/images.md b/docs/contrib/images.md
new file mode 100644
index 0000000..e50e730
--- /dev/null
+++ b/docs/contrib/images.md
@@ -0,0 +1,116 @@
+# OpenIM Image Management Strategy and Pulling Guide
+
+OpenIM is an efficient, stable, and scalable instant messaging framework that provides convenient deployment methods through Docker images. OpenIM manages multiple image sources, hosted respectively on GitHub (ghcr), Alibaba Cloud, and Docker Hub. This document is aimed at detailing the image management strategy of OpenIM and providing the steps for pulling these images.
+
+
+## Image Management Strategy
+
+OpenIM's versions correspond to GitHub's tag versions. Each time we release a new version and tag it on GitHub, an automated process is triggered that pushes the new Docker image version to the following three platforms:
+
+1. **GitHub (ghcr.io):** We use GitHub Container Registry (ghcr.io) to host OpenIM's Docker images. This allows us to better integrate with the GitHub source code repository, providing better version control and continuous integration/deployment (CI/CD) features. You can view all GitHub images [here](https://github.com/orgs/OpenIMSDK/packages).
+2. **Alibaba Cloud (registry.cn-hangzhou.aliyuncs.com):** For users in Mainland China, we also host OpenIM's Docker images on Alibaba Cloud to provide faster pull speeds. You can view all Alibaba Cloud images on this [page](https://cr.console.aliyun.com/cn-hangzhou/instances/repositories) of Alibaba Cloud Image Service (note that you need to log in to your Alibaba Cloud account first).
+3. **Docker Hub (docker.io):** Docker Hub is the most commonly used Docker image hosting platform, and we also host OpenIM's images there to facilitate developers worldwide. You can view all Docker Hub images on the [OpenIM's Docker Hub page](https://hub.docker.com/r/openim).
+
+## Base images design
+
++ [https://github.com/openim-sigs/openim-base-image](https://github.com/openim-sigs/openim-base-image)
+
+## OpenIM Image Design and Usage Guide
+
+OpenIM offers a comprehensive and flexible system of Docker images, available across multiple repositories. We actively maintain these images across different platforms, namely GitHub's ghcr.io, Alibaba Cloud, and Docker Hub; of these, we recommend ghcr.io for deployment.
+
+### Available Versions
+
+We provide multiple versions of our images to meet different project requirements. Here's a quick overview of what you can expect:
+
+1. `main`: This image corresponds to the latest version of the main branch in OpenIM. It is updated frequently, making it perfect for users who want to stay at the cutting edge of our features.
+2. `release-v3.*`: This is the image that corresponds to the latest version of OpenIM's stable release branch. It's ideal for users who prefer a balance between new features and stability.
+3. `v3.*.*`: These images are specific to each tag in OpenIM. They are preserved in their original state and are never overwritten. These are the go-to images for users who need a specific, unchanging version of OpenIM.
+4. The image versions adhere to the Semantic Versioning 2.0.0 strategy. Taking the `openim-server` image as an example, available at [openim-server container package](https://github.com/openimsdk/open-im-server-deploy/pkgs/container/openim-server): upon tagging v3.5.0, the CI automatically releases the tags `openim-server:3`, `openim-server:3.5`, `openim-server:3.5.0`, `openim-server:v3.5.0`, `openim-server:latest`, and `sha-e0244d9`. Note that only the commit-pinned `sha-e0244d9` tag is guaranteed never to change, whereas `openim-server:v3.5.0` and `openim-server:3.5.0` are stable in practice but offer a weaker uniqueness guarantee.
+
+### Multi-Architecture Images
+
+In order to cater to a wider range of needs, some of our images are provided with multiple architectures under `OS / Arch`. These images offer greater compatibility across different operating systems and hardware architectures, ensuring that OpenIM can be deployed virtually anywhere.
+
+**Example:**
+
++ [https://github.com/OpenIMSDK/chat/pkgs/container/openim-chat/113925695?tag=v1.1.0](https://github.com/OpenIMSDK/chat/pkgs/container/openim-chat/113925695?tag=v1.1.0)
+
+
+## Methods and Steps for Pulling Images
+
+When pulling OpenIM's Docker images, you can choose the most suitable source based on your geographic location and network conditions. Here are the steps to pull OpenIM images from each source:
+
+### Select image
+
+1. Choose the image repository platform you prefer. As previously mentioned, we recommend [OpenIM ghcr.io](https://github.com/orgs/OpenIMSDK/packages).
+
+2. Choose the image name and image version that suits your needs. Refer to the description above for more details.
+
+
+### Install image
+
+1. First, make sure Docker is installed on your machine. If not, you can refer to the [Docker official documentation](https://docs.docker.com/get-docker/) for installation.
+
+2. Open the terminal and run the following commands to pull the images:
+
+ For OpenIM Server:
+
+ - Pull from GitHub:
+
+ ```bash
+ docker pull ghcr.io/openimsdk/openim-server:latest
+ ```
+
+ - Pull from Alibaba Cloud:
+
+ ```bash
+ docker pull registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-server:latest
+ ```
+
+ - Pull from Docker Hub:
+
+ ```bash
+ docker pull docker.io/openim/openim-server:latest
+ ```
+
+ For OpenIM Chat:
+
+ - Pull from GitHub:
+
+ ```bash
+ docker pull ghcr.io/openimsdk/openim-chat:latest
+ ```
+
+ - Pull from Alibaba Cloud:
+
+ ```bash
+ docker pull registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-chat:latest
+ ```
+
+ - Pull from Docker Hub:
+
+ ```bash
+ docker pull docker.io/openim/openim-chat:latest
+ ```
+
+3. Run the `docker images` command to confirm that the image has been successfully pulled.
+
+### Accelerating Deployment for Users in China with Aliyun Mirror or Alternative Image Addresses
+
+For users in China looking to speed up the deployment process of OpenIM, leveraging a mirror image address is a highly recommended practice. After executing the `make init` command, a `.env` file is generated, which you'll need to edit to configure the image registry source. This configuration is crucial for optimizing download speeds and ensuring a smoother setup process.
+
+Within the generated `.env` file, you'll find a section dedicated to choosing the image address. It includes options for GitHub (`ghcr.io/openimsdk`), Docker Hub (`openim`), and Ali Cloud (`registry.cn-hangzhou.aliyuncs.com/openimsdk`). To achieve the best performance within China, it is advised to use the Aliyun image address.
+
+To do this, you need to comment out the current `IMAGE_REGISTRY` setting and uncomment the Aliyun option. Here is how you can adjust it for Aliyun:
+
+```bash
+# Choose the image address: GitHub (ghcr.io/openimsdk), Docker Hub (openim),
+# or Ali Cloud (registry.cn-hangzhou.aliyuncs.com/openimsdk).
+# Uncomment one of the following three options. Aliyun is recommended for users in China.
+# IMAGE_REGISTRY="ghcr.io/openimsdk"
+# IMAGE_REGISTRY="openim"
+IMAGE_REGISTRY="registry.cn-hangzhou.aliyuncs.com/openimsdk"
+```
+
+This change directs the deployment process to fetch the required images from the Aliyun registry, significantly improving download and installation speeds due to the geographical and network advantages within China. If, for any reason, you prefer not to use Aliyun or encounter issues, consider switching to another mirror address listed in the `.env` file by following the same uncommenting process. This flexibility ensures that users can select the most suitable image source for their specific situation, leading to a more efficient deployment of OpenIM.
diff --git a/docs/contrib/init-config.md b/docs/contrib/init-config.md
new file mode 100644
index 0000000..5e3139d
--- /dev/null
+++ b/docs/contrib/init-config.md
@@ -0,0 +1,74 @@
+# Init OpenIM Config
+
+- [Init OpenIM Config](#init-openim-config)
+ - [Start](#start)
+ - [Define Automated Configuration](#define-automated-configuration)
+ - [Define Configuration Variables](#define-configuration-variables)
+ - [Bash Parsing Features](#bash-parsing-features)
+ - [Reasons and Advantages of the Design](#reasons-and-advantages-of-the-design)
+
+
+## Start
+
+With the increasing complexity of software engineering, effective configuration management has become more and more important. Yaml and other configuration files provide the necessary parameters and guidance for systems, but they also impose additional management overhead for developers. This article explores how to automate and optimize configuration management, thereby improving efficiency and reducing the chances of errors.
+
+First, obtain the OpenIM code through the contributor documentation and initialize it following the steps below.
+
+## Define Automated Configuration
+
+We no longer strongly recommend modifying the same configuration file. If you have a new configuration file related to your business, we suggest generating and managing it through automation.
+
+In the `scripts/init_config.sh` file, we defined some template files. These templates will be automatically generated to the corresponding directories when executing `make init`.
+
+```bash
+# Defines an associative array where the keys are the template files and the values are the corresponding output files.
+declare -A TEMPLATES=(
+ ["${OPENIM_ROOT}/scripts/template/config-tmpl/env.template"]="${OPENIM_OUTPUT_SUBPATH}/bin/.env"
+ ["${OPENIM_ROOT}/scripts/template/config-tmpl/config.yaml"]="${OPENIM_OUTPUT_SUBPATH}/bin/config.yaml"
+)
+```
+
+If you have your new mapping files, you can implement them by appending them to the array.
+
+Lastly, run:
+
+```bash
+./scripts/init_config.sh
+```
+
+## Define Configuration Variables
+
+In the `scripts/install/environment.sh` file, we defined many reusable variables for automation convenience.
+
+In the example below, the `def` function is the core element. It not only provides a concise way to define variables but also gives each variable a default value, so that even when a variable is not explicitly set in a given environment or scenario, it still has an expected value.
+
+```bash
+function def() {
+ local var_name="$1"
+ local default_value="$2"
+ eval "readonly $var_name=\${$var_name:-$default_value}"
+}
+```
+
+### Bash Parsing Features
+
+Since bash is a parsing script language, it executes commands in the order they appear in the script. This characteristic means we can define commonly used or core variables at the beginning of the script and then reuse or modify them later on.
+
+For instance, we can initially set a universal password and reuse this password in subsequent variable definitions.
+
+```bash
+# Set a consistent password for easy memory
+def "PASSWORD" "openIM123"
+
+# Linux system user for openim
+def "LINUX_USERNAME" "openim"
+def "LINUX_PASSWORD" "${PASSWORD}"
+```
+
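+The default-value behavior can be verified in isolation: a value already set in the environment survives `def`, while an unset variable receives the default (the variable names below are illustrative):
+
+```bash
+#!/usr/bin/env bash
+# def assigns the default only when the variable is not already set,
+# then marks it readonly.
+function def() {
+  local var_name="$1"
+  local default_value="$2"
+  eval "readonly $var_name=\${$var_name:-$default_value}"
+}
+
+MONGO_PASSWORD="custom-secret"     # pre-set: def keeps this value
+def "MONGO_PASSWORD" "openIM123"
+def "REDIS_PASSWORD" "openIM123"   # unset: def applies the default
+
+echo "$MONGO_PASSWORD"  # custom-secret
+echo "$REDIS_PASSWORD"  # openIM123
+```
+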
+## Reasons and Advantages of the Design
+
+1. **Simplify Configuration Management**: Through automation scripts, we can avoid manual operations and configurations, thus reducing tedious repetitive tasks.
+2. **Reduce Errors**: Manually editing yaml or other configuration files can lead to formatting mistakes or typographical errors. Automating with scripts can lower the risk of such errors.
+3. **Enhanced Readability**: Using the `def` function and other bash scripting techniques, we can establish a clear, easy-to-read, and maintainable configuration system.
+4. **Improved Reusability**: As demonstrated above, we can reuse variables and functions in different parts of the script, reducing redundant code and increasing overall consistency.
+5. **Flexible Default Value Mechanism**: By providing default values for variables, we can ensure configurations are complete and consistent across various scenarios, while also offering customization options for advanced users.
\ No newline at end of file
diff --git a/docs/contrib/install-docker.md b/docs/contrib/install-docker.md
new file mode 100644
index 0000000..ce04f8a
--- /dev/null
+++ b/docs/contrib/install-docker.md
@@ -0,0 +1,53 @@
+# Install Docker
+
+
+## 2.1 Install Docker
+
+The installation command is as follows:
+
+```bash
+$ curl -fsSL https://get.docker.com | bash -s docker --mirror aliyun
+```
+
+## 2.2 Start Docker
+
+```bash
+$ systemctl start docker
+```
+
+## 2.3 Test Docker
+
+```bash
+$ docker run hello-world
+```
+
+## 2.4 Configure Docker Acceleration
+
+```bash
+$ mkdir -p /etc/docker
+$ tee /etc/docker/daemon.json <<-'EOF'
+{
+ "registry-mirrors": ["https://registry.docker-cn.com"]
+}
+EOF
+$ systemctl daemon-reload
+$ systemctl restart docker
+```
+
+## 2.5 Install Docker Compose
+
+```bash
+$ sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
+$ sudo chmod +x /usr/local/bin/docker-compose
+```
+
+## 2.6 Test Docker Compose
+
+```bash
+$ docker-compose --version
+```
diff --git a/docs/contrib/install-openim-linux-system.md b/docs/contrib/install-openim-linux-system.md
new file mode 100644
index 0000000..5d5d587
--- /dev/null
+++ b/docs/contrib/install-openim-linux-system.md
@@ -0,0 +1,353 @@
+# OpenIM System: Setup and Usage Guide
+
+
+* 1. [1. Introduction](#Introduction)
+* 2. [2. Prerequisites (Requires root permissions)](#PrerequisitesRequiresrootpermissions)
+* 3. [3. Create `openim-api` systemd unit template file](#Createopenim-apisystemdunittemplatefile)
+* 4. [4. Copy systemd unit template file to systemd config directory (Requires root permissions)](#CopysystemdunittemplatefiletosystemdconfigdirectoryRequiresrootpermissions)
+* 5. [5. Start systemd service](#Startsystemdservice)
+
+
+## 1. Introduction
+
+Systemd is the default service manager on current Linux distributions, replacing the original init system.
+
+The OpenIM system is a comprehensive suite of services tailored to address a wide variety of messaging needs. This guide will walk you through the steps of setting up the OpenIM system services and provide insights into its usage.
+
+**Prerequisites:**
+
++ A Linux server with necessary privileges.
++ Ensure you have `systemctl` installed and running.
+
+
+## 1.1 Deployment
+
+1. **Retrieve the Installation Script**:
+
+ Begin by obtaining the OpenIM installation script which will be utilized to deploy the entire OpenIM system.
+
+2. **Install OpenIM**:
+
+ To install all the components of OpenIM, run:
+
+ ```bash
+ ./scripts/install/install.sh -i
+ ```
+
+ or
+
+ ```bash
+ ./scripts/install/install.sh --install
+ ```
+
+ This will initiate the installation process for all OpenIM components.
+
+3. **Check the Status**:
+
+ Post installation, it is good practice to verify if all the services are running as expected:
+
+ ```bash
+ systemctl status openim.target
+ ```
+
+ This will list the status of all related services of OpenIM.
+
+**Maintenance & Management:**
+
+1. **Checking Individual Service Status**:
+
+ You can monitor the status of individual services with the following command:
+
+ ```bash
+   systemctl status <service-name>
+ ```
+
+ For instance:
+
+ ```bash
+ systemctl status openim-api.service
+   ```
+
+2. **Starting and Stopping Services**:
+
+ If you wish to start or stop any specific service, you can do so with `systemctl start` or `systemctl stop` followed by the service name:
+
+ ```bash
+ systemctl start openim-api.service
+ systemctl stop openim-api.service
+ ```
+
+3. **Uninstalling OpenIM**:
+
+ In case you wish to remove the OpenIM components from your server, utilize:
+
+ ```bash
+ ./scripts/install/install.sh -u
+ ```
+
+ or
+
+ ```bash
+ ./scripts/install/install.sh --uninstall
+ ```
+
+ Ensure you take a backup of any important data before executing the uninstall command.
+
+4. **Logs & Troubleshooting**:
+
+ Logs play a pivotal role in understanding the system's operation and troubleshooting any issues. OpenIM logs can typically be found in the directory specified during installation, usually `${OPENIM_LOG_DIR}`.
+
+ Always refer to the logs when troubleshooting. Look for any error messages or warnings that might give insights into the issue at hand.
+
+
+**Note:**
+
++ `openim-api.service`: Manages the main API gateways for OpenIM communication.
++ `openim-crontask.service`: Manages scheduled tasks and jobs.
++ `openim-msggateway.service`: Takes care of message gateway operations.
++ `openim-msgtransfer.service`: Handles message transfer functionalities.
++ `openim-push.service`: Responsible for push notification services.
++ `openim-rpc-auth.service`: Manages RPC (Remote Procedure Call) for authentication.
++ `openim-rpc-conversation.service`: Manages RPC for conversations.
++ `openim-rpc-friend.service`: Handles RPC for friend-related operations.
++ `openim-rpc-group.service`: Manages group-related RPC operations.
++ `openim-rpc-msg.service`: Takes care of message RPCs.
++ `openim-rpc-third.service`: Deals with third-party integrations using RPC.
++ `openim-rpc-user.service`: Manages user-related RPC operations.
++ `openim.target`: A target that bundles all the above services for collective operations.
+
+
+**Viewing Logs with `journalctl`:**
+
+`systemctl` services usually log their output to the systemd journal, which you can access using the `journalctl` command.
+
+1. **View Logs for a Specific Service**:
+
+ To view the logs for a particular service, you can use:
+
+ ```bash
+   journalctl -u <service-name>
+ ```
+
+ For example, to see the logs for the `openim-api.service`, you would use:
+
+ ```bash
+ journalctl -u openim-api.service
+ ```
+
+2. **Filtering Logs**:
+
+   + **By Time**: to see logs since a specific time:
+
+     ```bash
+     journalctl -u openim-api.service --since "2023-10-28 12:00:00"
+     ```
+
+   + **Most Recent Logs**: to view only the most recent entries (similar to `tail`):
+
+     ```bash
+     journalctl -u openim-api.service -n 100
+     ```
+
+3. **Continuous Monitoring of Logs**:
+
+ To see new log messages in real-time, you can use the `-f` flag, which mimics the behavior of `tail -f`:
+
+ ```bash
+ journalctl -u openim-api.service -f
+ ```
+
+### Continued Maintenance:
+
+1. **Regularly Check Service Status**:
+
+ It's good practice to routinely verify that all services are active and running. This can be done with:
+
+ ```bash
+ systemctl status openim-api.service openim-push.service openim-rpc-group.service openim-crontask.service openim-rpc-auth.service openim-rpc-msg.service openim-msggateway.service openim-rpc-conversation.service openim-rpc-third.service openim-msgtransfer.service openim-rpc-friend.service openim-rpc-user.service
+ ```
+
+2. **Update Services**:
+
+ Periodically, there might be updates or patches to the OpenIM system or its components. Make sure you keep the system updated. After updating any service, always reload the daemon and restart the service:
+
+ ```bash
+ systemctl daemon-reload
+ systemctl restart openim-api.service
+ ```
+
+3. **Backup Important Data**:
+
+ Regularly backup any configuration files, user data, and other essential data. This ensures that you can restore the system to a working state in case of failures.
+
+### Important `systemctl` and Logging Commands to Learn:
+
+1. **Start/Stop/Restart Services**:
+
+ ```bash
+   systemctl start <service-name>
+   systemctl stop <service-name>
+   systemctl restart <service-name>
+ ```
+
+2. **Enable/Disable Services**:
+
+ If you want a service to start automatically at boot:
+
+ ```bash
+   systemctl enable <service-name>
+ ```
+
+ To prevent it from starting at boot:
+
+ ```bash
+   systemctl disable <service-name>
+ ```
+
+3. **Check Failed Services**:
+
+ To quickly check if any service has failed:
+
+ ```bash
+ systemctl --failed
+ ```
+
+4. **Log Rotation**:
+
+   `journalctl` logs can grow large. To delete archived journal entries older than one day, use:
+
+ ```bash
+ journalctl --vacuum-time=1d
+ ```
+
+
+**Advanced requirements:**
+
+- Convenient service runtime log recording for problem analysis
+- Service management logs
+- Option to restart upon abnormal exit
+
+Running a service as a plain background daemon does not meet these advanced requirements, and `nohup` only captures the service's runtime output and errors.
+
+Only systemd can fulfill all of the above requirements.
+
+> The default logs are enhanced with timestamps, usernames, service names, PIDs, etc., making them user-friendly. You can view logs of abnormal service exits. Advanced customization is possible through the configuration files in `/lib/systemd/system/`.
+
+In short, systemd is the current mainstream way to manage backend services on Linux, so I've abandoned `nohup` in my new versions of bash scripts, opting instead for systemd.
+
+## 2. Prerequisites (Requires root permissions)
+
+1. Configure `environment.sh` based on the comments.
+2. Create a data directory:
+
+```bash
+mkdir -p ${OPENIM_DATA_DIR}/{openim-api,openim-crontask}
+```
+
+3. Create a bin directory and copy `openim-api` and `openim-crontask` executable files:
+
+```bash
+source ./environment.sh
+mkdir -p ${OPENIM_INSTALL_DIR}/bin
+cp openim-api openim-crontask ${OPENIM_INSTALL_DIR}/bin
+```
+
+4. Copy the configuration files of `openim-api` and `openim-crontask` to the `${OPENIM_CONFIG_DIR}` directory:
+
+```bash
+mkdir -p ${OPENIM_CONFIG_DIR}
+cp openim-api.yaml openim-crontask.yaml ${OPENIM_CONFIG_DIR}
+```
+
+## 3. Create `openim-api` systemd unit template file
+
+For each OpenIM service, we will create a systemd unit template. Follow the steps below for each service:
+
+Run the following shell script to generate a `.service.template` file for each service. The unit definition below is a minimal example; adjust paths and options for your environment:
+
+```bash
+source ./environment.sh
+for service in "${services[@]}"
+do
+  cat > $service.service.template <<EOF
+[Unit]
+Description=OpenIM Server - $service
+After=network.target
+
+[Service]
+ExecStart=${OPENIM_INSTALL_DIR}/bin/$service
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+done
+```
+
+## 4. Copy systemd unit template file to systemd config directory (Requires root permissions)
+
+Ensure you have root permissions to perform this operation:
+
+```bash
+for service in "${services[@]}"
+do
+ sudo cp $service.service.template /etc/systemd/system/$service.service
+done
+```
+
+## 5. Start systemd service
+
+To start the OpenIM services:
+
+```bash
+for service in "${services[@]}"
+do
+ sudo systemctl daemon-reload
+ sudo systemctl enable $service
+ sudo systemctl restart $service
+done
+```
diff --git a/docs/contrib/kafka.md b/docs/contrib/kafka.md
new file mode 100644
index 0000000..4547c94
--- /dev/null
+++ b/docs/contrib/kafka.md
@@ -0,0 +1,162 @@
+# OpenIM Kafka Guide
+
+This document aims to provide a set of concise guidelines to help you quickly install and use Kafka through Docker Compose.
+
+## Installing Kafka
+
+With the Docker Compose script provided by OpenIM, you can easily install Kafka. Use the following command to start Kafka:
+
+```bash
+docker compose up -d
+```
+
+After executing this command, Kafka will be installed and started. You can confirm the Kafka container is running with the following command:
+
+```bash
+docker ps | grep kafka
+```
+
+The output of this command, as shown below, displays the status information of the Kafka container:
+
+```
+be416b5a0851 bitnami/kafka:3.5.1 "/opt/bitnami/script…" 3 days ago Up 2 days 9092/tcp, 0.0.0.0:19094->9094/tcp, :::19094->9094/tcp kafka
+```
+
+### References
+
+- Docker-based Kafka installation walkthrough (in Chinese): [Click here](http://events.jianshu.io/p/b60afa35303a)
+- Detailed installation guide: [Tutorial on Towards Data Science](https://towardsdatascience.com/how-to-install-apache-kafka-using-docker-the-easy-way-4ceb00817d8b)
+
+## Using Kafka
+
+### Entering the Kafka Container
+
+To execute Kafka commands, you first need to enter the Kafka container. Use the following command:
+
+```bash
+docker exec -it kafka bash
+```
+
+### Kafka Command Tools
+
+Inside the Kafka container, you can use various command-line tools to manage Kafka. These tools include but are not limited to:
+
+- `kafka-topics.sh`: For creating, deleting, listing, or altering topics.
+- `kafka-console-producer.sh`: Allows sending messages to a specified topic from the command line.
+- `kafka-console-consumer.sh`: Allows reading messages from the command line, with the ability to specify topics.
+- `kafka-consumer-groups.sh`: For managing consumer group information.
+
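+As a quick example, `kafka-consumer-groups.sh` can list all groups and show per-partition offsets and lag for one group. Run this inside the container; `your_group_name` is a placeholder, not a group OpenIM creates:
+
+```bash
+group="your_group_name"  # placeholder: substitute a real consumer group
+if command -v kafka-consumer-groups.sh >/dev/null 2>&1; then
+  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
+  kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group "$group"
+else
+  echo "kafka-consumer-groups.sh not on PATH; run this inside the kafka container"
+fi
+```
+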
+### Kafka Client Tool Installation
+
+For easier Kafka management, you can install Kafka client tools. If you installed Kafka through OpenIM's Docker Compose, you can install the Kafka client tools with the following command:
+
+```bash
+make install.kafkactl
+```
+
+### Automatic Topic Creation
+
+When installing Kafka through OpenIM's Docker Compose method, OpenIM automatically creates the following topics:
+
+- `latestMsgToRedis`
+- `msgToPush`
+- `offlineMsgToMongoMysql`
+
+These topics are created using the `scripts/create-topic.sh` script. The script waits for Kafka to be ready before executing the commands to create topics:
+
+```bash
+# Wait for Kafka to be ready
+until /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092; do
+ echo "Waiting for Kafka to be ready..."
+ sleep 2
+done
+
+# Create topics
+/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic latestMsgToRedis
+/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic msgToPush
+/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic offlineMsgToMongoMysql
+
+echo "Topics created."
+```
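+
+Once the script has run, you can verify from the host that all three topics exist. This check is a hedged sketch: it assumes the container is named `kafka`, as in the `docker ps` output above, and degrades gracefully when Docker is unavailable:
+
+```bash
+expected_topics="latestMsgToRedis msgToPush offlineMsgToMongoMysql"
+if command -v docker >/dev/null 2>&1 && docker ps --format '{{.Names}}' 2>/dev/null | grep -qx kafka; then
+  existing="$(docker exec kafka /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092)"
+else
+  existing=""  # kafka container not reachable from this shell
+fi
+for t in $expected_topics; do
+  if printf '%s\n' "$existing" | grep -qx "$t"; then
+    echo "present: $t"
+  else
+    echo "missing: $t"
+  fi
+done
+```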
+
+The sections below detail basic commands for working inside the Kafka container, as well as commands for managing Kafka with `kafkactl`.
+
+
+## Basic Commands in the Kafka Container
+
+### Listing Topics
+
+To list all existing topics, you can use the following command:
+
+```bash
+kafka-topics.sh --list --bootstrap-server localhost:9092
+```
+
+### Creating a New Topic
+
+When creating a new topic, you can specify the number of partitions and the replication factor. Here is the command to create a new topic:
+
+```bash
+kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic your_topic_name
+```
+
+### Producing Messages
+
+To send messages to a specific topic, you can use the producer command. The following command prompts you to enter messages, which are sent to the specified topic with each press of the Enter key:
+
+```bash
+kafka-console-producer.sh --bootstrap-server localhost:9092 --topic your_topic_name
+```
+
+### Consuming Messages
+
+To read messages from a specific topic, you can use the consumer command. The following command reads new messages from the specified topic and outputs them on the console:
+
+```bash
+kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic your_topic_name --from-beginning
+```
+
+The `--from-beginning` parameter reads messages from the beginning of the topic. If this parameter is omitted, only new messages will be read.
+
+
+## Basic Commands Using `kafkactl`
+
+`kafkactl` is a command-line tool for managing and operating Kafka clusters. It offers a more modern way to interact with Kafka.
+
+### Listing Topics
+
+To list all topics, you can use:
+
+```bash
+kafkactl get topics
+```
+
+### Creating a New Topic
+
+To create a new topic with `kafkactl`, use:
+
+```bash
+kafkactl create topic your_topic_name --partitions 1 --replication-factor 1
+```
+
+### Producing Messages
+
+To send messages to a topic, you can use:
+
+```bash
+kafkactl produce your_topic_name --value "your message"
+```
+
+Here, `"your message"` is the content of the message you want to send.
+
+### Consuming Messages
+
+To consume messages from a topic, use:
+
+```bash
+kafkactl consume your_topic_name --from-beginning
+```
+
+Again, the `--from-beginning` parameter will start consuming messages from the beginning of the topic. If you do not wish to start from the beginning, you can omit this parameter.
\ No newline at end of file
diff --git a/docs/contrib/linux-development.md b/docs/contrib/linux-development.md
new file mode 100644
index 0000000..4a0cb9f
--- /dev/null
+++ b/docs/contrib/linux-development.md
@@ -0,0 +1,137 @@
+# Ubuntu 22.04 OpenIM Project Development Guide
+
+## TOC
+- [Ubuntu 22.04 OpenIM Project Development Guide](#ubuntu-2204-openim-project-development-guide)
+ - [TOC](#toc)
+ - [1. Setting Up Ubuntu Server](#1-setting-up-ubuntu-server)
+ - [1.1 Create `openim` Standard User](#11-create-openim-standard-user)
+ - [1.2 Setting up the `openim` User's Shell Environment](#12-setting-up-the-openim-users-shell-environment)
+ - [1.3 Installing Dependencies](#13-installing-dependencies)
+
+## 1. Setting Up Ubuntu Server
+
+You can use tools like PuTTY or other SSH clients to log in to your Ubuntu server. Once logged in, a few fundamental configurations are required, such as creating a standard user, adding to sudoers, and setting up the `$HOME/.bashrc` file. The steps are as follows:
+
+## 1.1 Create `openim` Standard User
+
+1. Log in to the Ubuntu system as the `root` user and create a standard user.
+
+Generally, a project will involve multiple developers. Instead of provisioning a server for every developer, many organizations share a single development machine among developers. To simulate this real-world scenario, we'll use a standard user for development. To create the `openim` user:
+
+```
+# adduser openim # Create the openim user, which developers will use for login and development.
+# passwd openim # Set the login password for openim.
+```
+
+Working as a non-root user is safer and is good practice; avoid using the root account for day-to-day development.
+
+2. Add to sudoers.
+
+Often, even standard users need root privileges. Instead of frequently asking the system administrator for the root password, you can add the standard user to the sudoers. This allows them to temporarily gain root access using the sudo command. To add the `openim` user to sudoers:
+
+```
+# sed -i '/^root.*ALL=(ALL:ALL).*ALL/a\openim\tALL=(ALL) \tALL' /etc/sudoers
+```
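+
+Since a malformed `/etc/sudoers` can lock you out, it is worth previewing the `sed` expression on a throwaway copy first (the temporary path below is illustrative):
+
+```bash
+# Build a one-line stand-in for /etc/sudoers and apply the same append expression to it.
+printf 'root\tALL=(ALL:ALL)\tALL\n' > /tmp/sudoers.preview
+sed -i '/^root.*ALL=(ALL:ALL).*ALL/a\openim\tALL=(ALL) \tALL' /tmp/sudoers.preview
+grep -n 'openim' /tmp/sudoers.preview   # the new rule should appear right after the root entry
+```
+
+After applying the change to the real file, `sudo -l` run as `openim` lists the granted rules.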
+
+## 1.2 Setting up the `openim` User's Shell Environment
+
+1. Log into the Ubuntu system.
+
+Assuming we're using the **openim** user, log in using PuTTY or other SSH clients.
+
+2. Configure the `$HOME/.bashrc` file.
+
+The first step after logging into a new server is to configure the `$HOME/.bashrc` file. It makes the Linux shell more user-friendly by setting environment variables like `LANG` and `PS1`. Here's how the configuration would look:
+
+```
+# .bashrc
+
+# User specific aliases and functions
+
+alias rm='rm -i'
+alias cp='cp -i'
+alias mv='mv -i'
+
+# Source global definitions
+if [ -f /etc/bashrc ]; then
+ . /etc/bashrc
+fi
+
+if [ ! -d $HOME/workspace ]; then
+ mkdir -p $HOME/workspace
+fi
+
+# User specific environment
+export LANG="en_US.UTF-8"
+export PS1='[\u@dev \W]\$ '
+export WORKSPACE="$HOME/workspace"
+export PATH=$HOME/bin:$PATH
+
+cd $WORKSPACE
+```
+
+After updating `$HOME/.bashrc`, run the `bash` command to reload the configurations into the current shell.
+
+## 1.3 Installing Dependencies
+
+The OpenIM project on Ubuntu may have various dependencies. Some are direct, and others are indirect. Installing these in advance prevents issues like missing packages or compile-time errors later on.
+
+1. Install dependencies.
+
+You can use the `apt` command to install the required tools on Ubuntu:
+
+```
+$ sudo apt-get update
+$ sudo apt-get install build-essential autoconf automake cmake perl libcurl4-gnutls-dev libtool gcc g++ glibc-doc-reference zlib1g-dev git-lfs telnet lrzsz jq libexpat1-dev libssl-dev
+$ sudo apt install libcurl4-openssl-dev
+```
+
+2. Install Git.
+
+A higher version of Git ensures compatibility with certain commands like `git fetch --unshallow`. To install a recent version:
+
+```
+$ cd /tmp
+$ wget --no-check-certificate https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.36.1.tar.gz
+$ tar -xvzf git-2.36.1.tar.gz
+$ cd git-2.36.1/
+$ ./configure
+$ make
+$ sudo make install
+$ git --version
+```
+
+Then, add Git's binary directory to the `PATH`:
+
+```
+$ echo 'export PATH=/usr/local/libexec/git-core:$PATH' >> $HOME/.bashrc
+```
+
+3. Configure Git.
+
+To set up Git:
+
+```
+$ git config --global user.name "Your Name"
+$ git config --global user.email "your_email@example.com"
+$ git config --global credential.helper store
+$ git config --global core.longpaths true
+```
+
+Other Git configurations include:
+
+```
+$ git config --global core.quotepath off
+```
+
+And for handling larger files:
+
+```
+$ git lfs install --skip-repo
+```
+
+By following the steps in this guide, your Ubuntu 22.04 server should now be set up and ready for OpenIM project development.
\ No newline at end of file
diff --git a/docs/contrib/local-actions.md b/docs/contrib/local-actions.md
new file mode 100644
index 0000000..f28bbe7
--- /dev/null
+++ b/docs/contrib/local-actions.md
@@ -0,0 +1,14 @@
+# act
+
+Run your [GitHub Actions](https://developer.github.com/actions/) locally! Why would you want to do this? Two reasons:
+
+- **Fast Feedback** - Rather than having to commit/push every time you want to test out the changes you are making to your `.github/workflows/` files (or for any changes to embedded GitHub actions), you can use `act` to run the actions locally. The [environment variables](https://help.github.com/en/actions/configuring-and-managing-workflows/using-environment-variables#default-environment-variables) and [filesystem](https://help.github.com/en/actions/reference/virtual-environments-for-github-hosted-runners#filesystems-on-github-hosted-runners) are all configured to match what GitHub provides.
+- **Local Task Runner** - I love [make](https://en.wikipedia.org/wiki/Make_(software)). However, I also hate repeating myself. With `act`, you can use the GitHub Actions defined in your `.github/workflows/` to replace your `Makefile`!
+
+## Install act
+
++ [https://github.com/nektos/act](https://github.com/nektos/act)
+
+```bash
+curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
+```
\ No newline at end of file
diff --git a/docs/contrib/logging.md b/docs/contrib/logging.md
new file mode 100644
index 0000000..fd5e9cb
--- /dev/null
+++ b/docs/contrib/logging.md
@@ -0,0 +1,507 @@
+# OpenIM Logging and Error Handling Documentation
+
+## Script Logging Documentation Link
+
+If you wish to view the script's logging documentation, you can click on this link: [Logging Documentation](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/bash-log.md).
+
+Below is the documentation for logging and error handling in the OpenIM Go project.
+
+To create a standard set of documentation that is quick to read and easy to understand, we will highlight key information about the `Logger` interface and the `CodeError` interface. This includes the purpose of each interface, key methods, and their use cases. This will help developers quickly grasp how to effectively use logging and error handling within the project.
+
+## Logging (`Logger` Interface)
+
+### Purpose
+The `Logger` interface aims to provide the OpenIM project with a unified and flexible logging mechanism, supporting structured logging formats for efficient log management and analysis.
+
+### Key Methods
+
+- **Debug, Info, Warn, Error**
+ Log messages of different levels to suit various logging needs and scenarios. These methods accept a context (`context.Context`), a message (`string`), and key-value pairs (`...interface{}`), allowing the log to carry rich context information.
+
+- **WithValues**
+ Append key-value pair information to log messages, returning a new `Logger` instance. This helps in adding consistent context information.
+
+- **WithName**
+ Set the name of the logger, which helps in identifying the source of the logs.
+
+- **WithCallDepth**
+ Adjust the call stack depth to accurately identify the source of the log message.
+
+### Use Cases
+
+- Developers should choose the appropriate logging level (such as `Debug`, `Info`, `Warn`, `Error`) based on the importance of the information when logging.
+- Use `WithValues` and `WithName` to add richer context information to logs, facilitating subsequent tracking and analysis.
+
+## Error Handling (`CodeError` Interface)
+
+### Purpose
+The `CodeError` interface is designed to provide a unified mechanism for error handling and wrapping, making error information more detailed and manageable.
+
+### Key Methods
+
+- **Code**
+ Return the error code to distinguish between different types of errors.
+
+- **Msg**
+ Return the error message description to display to the user.
+
+- **Detail**
+ Return detailed information about the error for further debugging by developers.
+
+- **WithDetail**
+ Add detailed information to the error, returning a new `CodeError` instance.
+
+- **Is**
+ Determine whether the current error matches a specified error, supporting a flexible error comparison mechanism.
+
+- **Wrap**
+ Wrap another error with additional message description, facilitating the tracing of the error's cause.
+
+### Use Cases
+
+- When defining errors with specific codes and messages, use error types that implement the `CodeError` interface.
+- Use `WithDetail` to add additional context information to errors for more accurate problem localization.
+- Use the `Is` method to judge the type of error for conditional branching.
+- Use the `Wrap` method to wrap underlying errors while adding more contextual descriptions.
+
+## Logging Standards and Code Examples
+
+In the OpenIM project, we use the unified logging package `github.com/openimsdk/tools/log` for logging to achieve efficient log management and analysis. This logging package supports structured logging formats, making it easier for developers to handle log information.
+
+### Logger Interface and Implementation
+
+The logger interface is defined as follows:
+
+```go
+type Logger interface {
+ Debug(ctx context.Context, msg string, keysAndValues ...interface{})
+ Info(ctx context.Context, msg string, keysAndValues ...interface{})
+ Warn(ctx context.Context, msg string, err error, keysAndValues ...interface{})
+ Error(ctx context.Context, msg string, err error, keysAndValues ...interface{})
+ WithValues(keysAndValues ...interface{}) Logger
+ WithName(name string) Logger
+ WithCallDepth(depth int) Logger
+}
+```
+
+Example code: Using the `Logger` interface to log at the info level.
+
+```go
+func main() {
+ logger := log.NewLogger().WithName("MyService")
+ ctx := context.Background()
+ logger.Info(ctx, "Service started", "port", "8080")
+}
+```
+
+## Error Handling and Code Examples
+
+We use the `github.com/openimsdk/tools/errs` package for unified error handling and wrapping.
+
+### CodeError Interface and Implementation
+
+The error interface is defined as follows:
+
+```go
+type CodeError interface {
+ Code() int
+ Msg() string
+ Detail() string
+ WithDetail(detail string) CodeError
+ Is(err error, loose ...bool) bool
+ Wrap(msg ...string) error
+ error
+}
+```
+
+Example code: Creating and using the `CodeError` interface to handle errors.
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/openimsdk/tools/errs"
+)
+
+func main() {
+ err := errs.New(404, "Resource not found")
+	err = err.WithDetail("More details")
+ if e, ok := err.(errs.CodeError); ok {
+ fmt.Println(e.Code(), e.Msg(), e.Detail())
+ }
+}
+```
+
+### Detailed Logging Standards and Code Examples
+
+1. **Print key information at startup**
+ It is crucial to print entry parameters and key process information at program startup. This helps understand the startup state and configuration of the program.
+
+ **Code Example**:
+ ```go
+ package main
+
+ import (
+ "fmt"
+ "os"
+ )
+
+ func main() {
+ fmt.Println("Program startup, version: 1.0.0")
+ fmt.Printf("Connecting to database: %s\n", os.Getenv("DATABASE_URL"))
+ }
+ ```
+
+2. **Use `tools/log` and `fmt` for logging**
+ Logging should be done using a specialized logging library for unified management and formatted log output.
+
+ **Code Example**: Logging an info level message with `tools/log`.
+ ```go
+ package main
+
+ import (
+ "context"
+ "github.com/openimsdk/tools/log"
+ )
+
+ func main() {
+ ctx := context.Background()
+ log.Info(ctx, "Application started successfully")
+ }
+ ```
+
+3. **Use standard error output for startup failures or critical information**
+ Critical error messages or program startup failures should be indicated to the user through standard error output.
+
+ **Code Example**:
+ ```go
+ package main
+
+ import (
+ "fmt"
+ "os"
+ )
+
+ func checkEnvironment() bool {
+ return os.Getenv("REQUIRED_ENV") != ""
+ }
+
+ func main() {
+ if !checkEnvironment() {
+ fmt.Fprintln(os.Stderr, "Missing required environment variable")
+ os.Exit(1)
+ }
+ }
+ ```
+
+   In OpenIM this is encapsulated in a small utility that prints the error and exits non-zero:
+
+ ```go
+ package main
+
+ import (
+ util "git.imall.cloud/openim/open-im-server-deploy/pkg/util/genutil"
+ )
+
+ func main() {
+ if err := apiCmd.Execute(); err != nil {
+ util.ExitWithError(err)
+ }
+ }
+ ```
+
+4. **Use `tools/log` package for runtime logging**
+ This ensures consistency and control over logging.
+
+ **Code Example**: Same as the above example using `tools/log`. When `tools/log` is not initialized, consider using `fmt` for standard output.
+
+5. **Error logs should be printed by the top-level caller**
+ This is to avoid duplicate logging of errors, typically errors are caught and logged at the application's outermost level.
+
+ **Code Example**:
+ ```go
+ package main
+
+   import (
+       "context"
+       "errors"
+
+       "github.com/openimsdk/tools/errs"
+       "github.com/openimsdk/tools/log"
+   )
+
+ func doSomething() error {
+ // An error occurs here
+ return errs.Wrap(errors.New("An error occurred"))
+ }
+
+ func controller() error {
+ err := doSomething()
+ if err != nil {
+ return err
+ }
+ return nil
+ }
+
+ func main() {
+ err := controller()
+ if err != nil {
+ log.Error(context.Background(), "Operation failed", err)
+ }
+ }
+ ```
+
+6. **Handling logs for API RPC calls and non-RPC applications**
+
+ For API RPC calls using gRPC, logs at the information level are printed by middleware on the gRPC server side, reducing the need to manually log in each RPC method. For non-RPC applications, it's recommended to manually log key execution paths to track the application's execution flow.
+
+ **gRPC Server-Side Logging Middleware:**
+
+ In gRPC, `UnaryInterceptor` and `StreamInterceptor` can intercept Unary and Stream type RPC calls, respectively. Here's an example of how to implement a simple Unary RPC logging middleware:
+
+ ```go
+ package main
+
+ import (
+ "context"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "log"
+ "time"
+ )
+
+ // unaryServerInterceptor returns a new unary server interceptor that logs each request.
+ func unaryServerInterceptor() grpc.UnaryServerInterceptor {
+ return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
+ // Record the start time of the request
+ start := time.Now()
+ // Call the actual RPC method
+ resp, err = handler(ctx, req)
+ // After the request ends, log the duration and other information
+ log.Printf("Request method: %s, duration: %s, error status: %v", info.FullMethod, time.Since(start), status.Code(err))
+ return resp, err
+ }
+ }
+
+ func main() {
+ // Create a gRPC server and add the middleware
+       s := grpc.NewServer(grpc.UnaryInterceptor(unaryServerInterceptor()))
+       // Register your service on s here.
+
+       // Start the gRPC server
+       log.Println("Starting gRPC server...")
+       _ = s // call s.Serve(lis) with your net.Listener
+ }
+ ```
+
+ **Logging for Non-RPC Applications:**
+
+ For non-RPC applications, the key is to log at appropriate places in the code to maintain an execution trace. Here's a simple example showing how to log when handling a task:
+
+ ```go
+ package main
+
+ import (
+ "log"
+ )
+
+ func processTask(taskID string) {
+ // Log the start of task processing
+ log.Printf("Starting task processing: %s", taskID)
+ // Suppose this is where the task is processed
+
+ // Log after the task is completed
+ log.Printf("Task processing completed: %s", taskID)
+ }
+
+ func main() {
+ // Example task ID
+ taskID := "task123"
+ processTask(taskID)
+ }
+ ```
+
+ In both scenarios, appropriate logging can help developers and operators monitor the health of the system, trace the source of issues, and quickly locate and resolve problems. For gRPC logging, using middleware can effectively centralize log management and control. For non-RPC applications, ensuring logs are placed at critical execution points can help understand the program's operational flow and state changes.
+
+### When to Wrap Errors?
+
+1. **Wrap errors generated within the function**
+ When an error occurs within a function, use `errs.Wrap` to add context information to the original error.
+
+ **Code Example**:
+ ```go
+   func doSomething() error {
+       // Suppose an error occurs here
+       _, err := someFunc()
+       if err != nil {
+           return errs.WrapMsg(err, "doSomething failed")
+       }
+       return nil
+   }
+ ```
+
+   If no additional message is needed, plain `errs.Wrap` is enough:
+
+ ```go
+   func doSomething() error {
+       // Suppose an error occurs here
+       _, err := someFunc()
+       if err != nil {
+           return errs.Wrap(err)
+       }
+       return nil
+   }
+ ```
+
+2. **Wrap errors from system calls or other packages**
+ When calling external libraries or system functions that return errors, also add context information to wrap the error.
+
+ **Code Example**:
+ ```go
+ func readConfig(file string) error {
+ _, err := os.ReadFile(file)
+ if err != nil {
+           return errs.WrapMsg(err, "Failed to read config file")
+ }
+ return nil
+ }
+ ```
+
+3. **No need to re-wrap errors for internal module calls**
+
+ If an error has been appropriately wrapped with sufficient context information in an internal module call, there's no need to wrap it again.
+
+ **Code Example**:
+ ```go
+ func doSomething() error {
+ err := doAnotherThing()
+ if err != nil {
+ return err
+ }
+ return nil
+ }
+ ```
+
+4. **Ensure comprehensive wrapping of errors with detailed messages**
+ When wrapping errors, ensure to provide ample context information to make the error more understandable and easier to debug.
+
+ **Code Example**:
+ ```go
+ func connectDatabase() error {
+ err := db.Connect(config.DatabaseURL)
+ if err != nil {
+           return errs.WrapMsg(err, "Failed to connect to database", "url", config.DatabaseURL)
+ }
+ return nil
+ }
+ ```
+
+
+### About WrapMsg Use
+
+```go
+// "github.com/openimsdk/tools/errs"
+func WrapMsg(err error, msg string, kv ...any) error {
+ if len(kv) == 0 {
+ if len(msg) == 0 {
+ return errors.WithStack(err)
+ } else {
+ return errors.WithMessage(err, msg)
+ }
+ }
+ var buf bytes.Buffer
+ if len(msg) > 0 {
+ buf.WriteString(msg)
+ buf.WriteString(" ")
+ }
+ for i := 0; i < len(kv); i += 2 {
+ if i > 0 {
+ buf.WriteString(", ")
+ }
+ buf.WriteString(toString(kv[i]))
+ buf.WriteString("=")
+ buf.WriteString(toString(kv[i+1]))
+ }
+ return errors.WithMessage(err, buf.String())
+}
+```
+
+1. **Function Signature**:
+ - `err error`: The original error object.
+ - `msg string`: The message text to append to the error.
+ - `kv ...any`: A variable number of parameters used to pass key-value pairs. `any` was introduced in Go 1.18 and is equivalent to `interface{}`, meaning any type.
+
+2. **Logic**:
+ - If there are no key-value pairs (`kv` is empty):
+ - If `msg` is also empty, use `errors.WithStack(err)` to return the original error with the call stack appended.
+ - If `msg` is not empty, use `errors.WithMessage(err, msg)` to append the message text to the original error.
+ - If there are key-value pairs, the function constructs a string containing the message text and all key-value pairs. The key-value pairs are added in the format `"key=value"`, separated by commas. If a message text is provided, it is added first, followed by a space.
+
+3. **Key-Value Pair Formatting**:
+ - A loop iterates over all the key-value pairs, processing one pair at a time.
+ - The `toString` function (although not provided in the code, we can assume it converts any type to a string) is used to convert both keys and values to strings, and they are added to a `bytes.Buffer` in the format `"key=value"`.
+
+4. **Result**:
+ - Use `errors.WithMessage(err, buf.String())` to append the constructed message text to the original error, and return this new error object.
+
+Next, let's demonstrate several ways to use the `WrapMsg` function:
+
+**Example 1: No Additional Information**
+
+```go
+// "github.com/openimsdk/tools/errs"
+err := errors.New("original error")
+wrappedErr := errs.WrapMsg(err, "")
+// wrappedErr will contain the original error and its call stack
+```
+
+**Example 2: Message Text Only**
+
+```go
+// "github.com/openimsdk/tools/errs"
+err := errors.New("original error")
+wrappedErr := errs.WrapMsg(err, "additional error information")
+// wrappedErr will contain the original error, call stack, and "additional error information"
+```
+
+**Example 3: Message Text and Key-Value Pairs**
+
+```go
+// "github.com/openimsdk/tools/errs"
+err := errors.New("original error")
+wrappedErr := errs.WrapMsg(err, "problem occurred", "code", 404, "url", "webhook://example.com")
+// wrappedErr will contain the original error, call stack, and "problem occurred code=404, url=webhook://example.com"
+```
+
+**Example 4: Key-Value Pairs Only**
+
+```go
+// "github.com/openimsdk/tools/errs"
+err := errors.New("original error")
+wrappedErr := errs.WrapMsg(err, "", "user", "john_doe", "action", "login")
+// wrappedErr will contain the original error, call stack, and "user=john_doe, action=login"
+```
+
+> [!TIP]
+> These examples demonstrate how the `errs.WrapMsg` function can flexibly handle error messages and context data, helping developers to more effectively track and debug their programs.
+
+
+### Example 5: Dynamic Key-Value Pairs from Context
+Suppose we have some runtime context variables, such as a user ID and the type of operation being performed, and we want to include these variables in the error message. This can help with later debugging and identifying the specific environment of the issue.
+
+```go
+// Define some context variables
+userID := "user123"
+operation := "update profile"
+errorCode := 500
+requestURL := "webhook://example.com/updateProfile"
+
+// Create a new error
+err := errors.New("original error")
+
+// Wrap the error, including dynamic key-value pairs from the context
+wrappedErr := errs.WrapMsg(err, "operation failed", "user", userID, "action", operation, "code", errorCode, "url", requestURL)
+// wrappedErr will contain the original error, call stack, and "operation failed user=user123, action=update profile, code=500, url=webhook://example.com/updateProfile"
+```
+
+> [!TIP]
+> In this example, the `WrapMsg` function accepts not just a static error message and additional information, but also dynamic key-value pairs generated from the code's execution context, such as the user ID, operation type, error code, and the URL of the request. Including this contextual information in the error message makes it easier for developers to understand and resolve the issue.
\ No newline at end of file
diff --git a/docs/contrib/mac-developer-deployment-guide.md b/docs/contrib/mac-developer-deployment-guide.md
new file mode 100644
index 0000000..2b7c65d
--- /dev/null
+++ b/docs/contrib/mac-developer-deployment-guide.md
@@ -0,0 +1,258 @@
+# Mac Developer Deployment Guide for OpenIM
+
+## Introduction
+
+This guide aims to assist Mac-based developers in contributing effectively to OpenIM. It covers the setup of a development environment tailored for Mac, including the use of GitHub for development workflow and `devcontainer` for a consistent development experience.
+
+Before contributing to OpenIM through issues and pull requests, make sure you are familiar with GitHub and the [pull request workflow](https://docs.github.com/en/get-started/quickstart/github-flow).
+
+## Prerequisites
+
+### System Requirements
+
+- macOS (latest stable version recommended)
+- Internet connection
+- Administrator access
+
+### Knowledge Requirements
+
+- Basic understanding of Git and GitHub
+- Familiarity with Docker and containerization
+- Experience with Go programming language
+
+## Setting up the Development Environment
+
+### Installing Homebrew
+
+Homebrew is an essential package manager for macOS. Install it using:
+
+```sh
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+```
+
+### Installing and Configuring Git
+
+1. Install Git:
+
+ ```sh
+ brew install git
+ ```
+
+2. Configure Git with your user details:
+
+ ```sh
+ git config --global user.name "Your Name"
+ git config --global user.email "your.email@example.com"
+ ```
+
+### Setting Up the Devcontainer
+
+`Devcontainers` provide a Docker-based isolated development environment.
+
+Read [README.md](https://github.com/openimsdk/open-im-server-deploy/tree/main/.devcontainer) in the `.devcontainer` directory of the project to learn more about the devcontainer.
+
+To set it up:
+
+1. Install Docker Desktop for Mac from [Docker Hub](https://docs.docker.com/desktop/install/mac-install/).
+2. Install Visual Studio Code and the Remote - Containers extension.
+3. Open the cloned OpenIM repository in VS Code.
+4. VS Code will prompt to reopen the project in a container. Accept this to set up the environment automatically.
+
+### Installing Go and Dependencies
+
+Use Homebrew to install Go:
+
+```sh
+brew install go
+```
+
+Ensure the version of Go is compatible with the version required by OpenIM (refer to the main documentation for version requirements).
+
+### Additional Tools
+
+Install other required tools like Docker, Vagrant, and necessary GNU utils as described in the main documentation.
+
+## Deploying openim-chat and openim-server on Mac
+
+### Ensure a Clean Environment
+
+- It's recommended to execute in a new directory.
+- Run `ps -ef | grep openim` to ensure no OpenIM processes are running.
+- Run `ps -ef | grep chat` to check for absence of chat-related processes.
+- Execute `docker ps` to verify there are no related containers running.
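+
+The three manual checks above can be rolled into one hedged pre-flight script (whether `pgrep` and `docker` are available varies by machine, so both are guarded):
+
+```bash
+leftover=0
+for pattern in openim chat; do
+  if pgrep -f "$pattern" >/dev/null 2>&1; then
+    echo "running processes match: $pattern"
+    leftover=1
+  fi
+done
+if command -v docker >/dev/null 2>&1 && [ -n "$(docker ps -q 2>/dev/null)" ]; then
+  echo "containers are still running"
+  leftover=1
+fi
+if [ "$leftover" -eq 0 ]; then
+  echo "environment looks clean"
+fi
+```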
+
+### Source Code Deployment
+
+#### Deploying openim-server
+
+Source code deployment is slightly more involved because Docker's networking on macOS differs from Linux.
+
+```bash
+git clone https://github.com/openimsdk/open-im-server-deploy
+cd open-im-server-deploy
+
+export OPENIM_IP="Your IP" # If it's a cloud server, setting might not be needed
+make init # Generates configuration files
+```
+
+Before deploying openim-server, modify the Kafka logic in the docker-compose.yml file. Replace:
+
+```yaml
+- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://${DOCKER_BRIDGE_GATEWAY:-172.28.0.1}:${KAFKA_PORT:-19094}
+```
+
+With:
+
+```yaml
+- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://127.0.0.1:${KAFKA_PORT:-19094}
+```
+
+Then start the service:
+
+```bash
+docker compose up -d
+```
+
+Before starting openim-server from source, update `config/config.yaml` by replacing all instances of `172.28.0.1` with `127.0.0.1`:
+
+```bash
+vim config/config.yaml -c "%s/172\.28\.0\.1/127.0.0.1/g" -c "wq"
+```
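+
+If you prefer a non-interactive `sed` over vim, note that the BSD sed shipped with macOS needs an empty backup suffix for in-place edits. A self-contained demonstration on a temporary stand-in file (in the real deployment the target is `config/config.yaml`; the sample content is illustrative):
+
+```bash
+# Throwaway file standing in for config/config.yaml
+cfg=$(mktemp)
+printf 'api:\n  address: 172.28.0.1:10002\n' > "$cfg"
+
+# Portable form using a temporary output file (works with GNU and BSD sed alike);
+# the macOS in-place equivalent is: sed -i '' 's/172\.28\.0\.1/127.0.0.1/g' <file>
+sed 's/172\.28\.0\.1/127.0.0.1/g' "$cfg" > "$cfg.new" && mv "$cfg.new" "$cfg"
+
+grep '127.0.0.1' "$cfg"
+```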
+
+Then start openim-server:
+
+```bash
+make start
+```
+
+To check the startup:
+
+```bash
+make check
+```
+
+
+🚧 To avoid mishaps, it's best to wait five minutes before running `make check` again.
+
+
+
+#### Deploying openim-chat
+
+There are several ways to deploy openim-chat, either by source code or using Docker.
+
+Navigate back to the parent directory:
+
+```bash
+cd ..
+```
+
+First, let's look at deploying chat from source:
+
+```bash
+git clone https://github.com/openimsdk/chat
+cd chat
+make init # Generates configuration files
+```
+
+If you do not already have a MySQL instance for openim-chat, you will need to deploy one. Note that the official MySQL image on Docker Hub does not cover all architectures (such as ARM), so you can use a recent release of the open-source MariaDB edition instead:
+
+```bash
+docker run -d \
+ --name mysql \
+ -p 13306:3306 \
+  -p 23306:33060 \
+ -v "$(pwd)/components/mysql/data:/var/lib/mysql" \
+ -v "/etc/localtime:/etc/localtime" \
+ -e MYSQL_ROOT_PASSWORD="openIM123" \
+ --restart always \
+ mariadb:10.6
+```
+
+Before starting openim-chat from source, update `config/config.yaml` by replacing all instances of `172.28.0.1` with `127.0.0.1`:
+
+```bash
+vim config/config.yaml -c "%s/172\.28\.0\.1/127.0.0.1/g" -c "wq"
+```
+
+Then start openim-chat from source:
+
+```bash
+make start
+```
+
+To check, ensure the following four processes start successfully:
+
+```bash
+make check
+```
+
+### Docker Deployment
+
+Refer to https://github.com/openimsdk/openim-docker for Docker deployment instructions, which can be followed similarly on Linux.
+
+```bash
+git clone https://github.com/openimsdk/openim-docker
+cd openim-docker
+export OPENIM_IP="Your IP"
+make init
+docker compose up -d
+docker compose logs -f openim-server
+docker compose logs -f openim-chat
+```
+
+## GitHub Development Workflow
+
+### Creating a New Branch
+
+For new features or fixes, create a new branch:
+
+```sh
+git checkout -b feat/your-feature-name
+```
+
+### Making Changes and Committing
+
+1. Make your changes in the code.
+2. Stage your changes:
+
+ ```sh
+ git add .
+ ```
+
+3. Commit with a meaningful message:
+
+ ```sh
+ git commit -m "Add a brief description of your changes"
+ ```
+
+### Pushing Changes and Creating Pull Requests
+
+1. Push your branch to GitHub:
+
+ ```sh
+ git push origin feat/your-feature-name
+ ```
+
+2. Go to your fork on GitHub and create a pull request to the main OpenIM repository.
+
+### Keeping Your Fork Updated
+
+Regularly sync your fork with the main repository:
+
+```sh
+git fetch upstream
+git checkout main
+git rebase upstream/main
+```
+
+Further reading: [https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md)
+
+## Testing and Quality Assurance
+
+Run tests as described in the OpenIM documentation to ensure your changes do not break existing functionality.
+
+## Conclusion
+
+This guide provides a comprehensive overview for Mac developers to set up and contribute to OpenIM. By following these steps, you can ensure a smooth and efficient development experience. Happy coding!
\ No newline at end of file
diff --git a/docs/contrib/offline-deployment.md b/docs/contrib/offline-deployment.md
new file mode 100644
index 0000000..db672ab
--- /dev/null
+++ b/docs/contrib/offline-deployment.md
@@ -0,0 +1,178 @@
+# OpenIM Offline Deployment Design
+
+## 1. Base Images
+
+Below are the base images and their versions you'll need:
+
+- [ ] bitnami/kafka:3.5.1
+- [ ] redis:7.0.0
+- [ ] mongo:6.0.2
+- [ ] bitnami/zookeeper:3.8
+- [ ] minio/minio:RELEASE.2024-01-11T07-46-16Z
+
+> [!IMPORTANT]
+> Note that OpenIM removed the MySQL component in v3.5.0 (release-v3.5) and later, so MySQL no longer needs to be deployed for those versions.
+
+**If you need to install more IM components or monitoring products:**
+
+> [!TIP]
+> For the full list of image versions, see [images.md](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md)
+
+OpenIM:
+
+- [ ] ghcr.io/openimsdk/openim-web:
+- [ ] ghcr.io/openimsdk/openim-admin:
+- [ ] ghcr.io/openimsdk/openim-chat:
+- [ ] ghcr.io/openimsdk/openim-server:
+
+
+Monitoring:
+
+- [ ] prom/prometheus:v2.48.1
+- [ ] prom/alertmanager:v0.23.0
+- [ ] grafana/grafana:10.2.2
+- [ ] bitnami/node-exporter:1.7.0
+
+
+Use the following commands to pull these base images:
+
+```bash
+docker pull bitnami/kafka:3.5.1
+docker pull redis:7.0.0
+docker pull mongo:6.0.2
+docker pull mariadb:10.6   # only needed by openim-chat or releases below v3.5.0
+docker pull bitnami/zookeeper:3.8
+docker pull minio/minio:RELEASE.2024-01-11T07-46-16Z
+```
+
+If you also need the monitoring components:
+
+```bash
+docker pull prom/prometheus:v2.48.1
+docker pull prom/alertmanager:v0.23.0
+docker pull grafana/grafana:10.2.2
+docker pull bitnami/node-exporter:1.7.0
+```
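+
+The individual `docker pull` commands above can be collected into one loop, which keeps the image list in a single place (a sketch; uncomment the final call on a machine with Docker and network access):
+
+```bash
+#!/usr/bin/env bash
+# Pull every base and monitoring image needed for offline packaging.
+pull_all() {
+  local images=(
+    bitnami/kafka:3.5.1
+    redis:7.0.0
+    mongo:6.0.2
+    bitnami/zookeeper:3.8
+    minio/minio:RELEASE.2024-01-11T07-46-16Z
+    prom/prometheus:v2.48.1
+    prom/alertmanager:v0.23.0
+    grafana/grafana:10.2.2
+    bitnami/node-exporter:1.7.0
+  )
+  local img
+  for img in "${images[@]}"; do
+    echo "pulling $img"
+    docker pull "$img" || return 1   # stop on the first failed pull
+  done
+}
+# pull_all
+```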
+
+## 2. OpenIM Images
+
+**For detailed understanding of version management and storage of OpenIM and Chat**: [version.md](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/version.md)
+
+### OpenIM Image
+
+- Get image version info: [images.md](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md)
+- Depending on the required version, execute the following command:
+
+```bash
+docker pull ghcr.io/openimsdk/openim-server:<version-name>
+```
+
+### Chat Image
+
+- Execute the following command to pull the image:
+
+```bash
+docker pull ghcr.io/openimsdk/openim-chat:<version-name>
+```
+
+### Web Image
+
+- Execute the following command to pull the image:
+
+```bash
+docker pull ghcr.io/openimsdk/openim-web:<version-name>
+```
+
+### Admin Image
+
+- Execute the following command to pull the image:
+
+```bash
+docker pull ghcr.io/openimsdk/openim-admin:<version-name>
+```
+
+
+## 3. Image Storage Selection
+
+**Repositories**:
+
+- Alibaba Cloud: `registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-server`
+- Docker Hub: `openim/openim-server`
+
+## 4. Version Selection
+
+You can select from the following versions:
+
+- Stable: e.g. release-v3.2 (or 3.1, 3.3)
+- Latest: latest
+- Latest from main branch: main
+
+## 5. Offline Deployment Steps
+
+1. **Pull images**: Execute the above `docker pull` commands to pull all required images locally.
+2. **Save images**:
+
+```bash
+docker save -o <image-name>.tar <image-name>
+```
+
+If you want to save all the images, use the following command:
+
+```bash
+docker save -o all-images.tar $(docker images -q)
+```
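+
+Saving by image ID as above drops the repository and tag names, so the images load back as `<none>`. A per-image loop that keeps the names is sketched below (uncomment the final call where Docker is available):
+
+```bash
+#!/usr/bin/env bash
+# Save every local image to its own tarball, named after repository:tag.
+save_all() {
+  local img
+  for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
+    # Replace '/' and ':' so the tarball name is a safe filename
+    docker save -o "$(echo "$img" | tr '/:' '__').tar" "$img"
+    echo "saved $img"
+  done
+}
+# save_all
+```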
+
+3. **Fetch code**: Clone the repository:
+
+```bash
+git clone https://github.com/openimsdk/openim-docker.git
+```
+
+Or download the code from [Releases](https://github.com/openimsdk/openim-docker/releases/).
+
+> Because Windows and Linux use different line endings, do not clone the repository on Windows and then copy it to Linux with scp; clone it directly on the Linux machine instead.
+
+4. **Transfer files**: Use `scp` to transfer all images and code to the intranet server.
+
+```bash
+scp <image-name>.tar user@remote-ip:/path/on/remote/server
+```
+
+Or choose other transfer methods such as a hard drive.
+
+5. **Import images**: On the intranet server:
+
+```bash
+docker load -i <image-name>.tar
+```
+
+Import directly with shortcut commands:
+
+```bash
+for i in ./*.tar; do docker load -i "$i"; done
+```
+
+6. **Deploy**: Navigate to the `openim-docker` repository directory and follow the [README guide](https://github.com/openimsdk/openim-docker) for deployment.
+
+7. **Deploy using docker compose**:
+
+```bash
+export OPENIM_IP="your ip" # Set Ip
+make init # Init config
+docker compose up -d # Deployment
+docker compose ps # Verify
+```
+
+> **Note**: If you're using a version of Docker prior to 20, make sure you've installed `docker-compose`.
+
+## 6. Reference Links
+
+- [openimsdk Issue #432](https://github.com/openimsdk/open-im-server-deploy/issues/432)
+- [Notion Link](https://nsddd.notion.site/435ee747c0bc44048da9300a2d745ad3?pvs=25)
+- [openimsdk Issue #474](https://github.com/openimsdk/open-im-server-deploy/issues/474)
diff --git a/docs/contrib/prometheus-grafana.md b/docs/contrib/prometheus-grafana.md
new file mode 100644
index 0000000..8e6c5b1
--- /dev/null
+++ b/docs/contrib/prometheus-grafana.md
@@ -0,0 +1,326 @@
+# Deployment and Design of OpenIM's Management Backend and Monitoring
+
+
+* 1. [Source Code & Docker](#SourceCodeDocker)
+ * 1.1. [Deployment](#Deployment)
+ * 1.2. [Configuration](#Configuration)
+ * 1.3. [Monitoring Running in Docker Guide](#MonitoringRunninginDockerGuide)
+ * 1.3.1. [Introduction](#Introduction)
+ * 1.3.2. [Prerequisites](#Prerequisites)
+ * 1.3.3. [Step 1: Clone the Repository](#Step1:ClonetheRepository)
+ * 1.3.4. [Step 2: Start Docker Compose](#Step2:StartDockerCompose)
+ * 1.3.5. [Step 3: Use the OpenIM Web Interface](#Step3:UsetheOpenIMWebInterface)
+ * 1.3.6. [Running Effect](#RunningEffect)
+ * 1.3.7. [Step 4: Access the Admin Panel](#Step4:AccesstheAdminPanel)
+ * 1.3.8. [Step 5: Access the Monitoring Interface](#Step5:AccesstheMonitoringInterface)
+ * 1.3.9. [Next Steps](#NextSteps)
+ * 1.3.10. [Troubleshooting](#Troubleshooting)
+* 2. [Kubernetes](#Kubernetes)
+ * 2.1. [Middleware Monitoring](#MiddlewareMonitoring)
+ * 2.2. [Custom OpenIM Metrics](#CustomOpenIMMetrics)
+ * 2.3. [Node Exporter](#NodeExporter)
+* 3. [Setting Up and Configuring AlertManager Using Environment Variables and `make init`](#SettingUpandConfiguringAlertManagerUsingEnvironmentVariablesandmakeinit)
+ * 3.1. [Introduction](#Introduction-1)
+ * 3.2. [Prerequisites](#Prerequisites-1)
+ * 3.3. [Configuration Steps](#ConfigurationSteps)
+ * 3.3.1. [Exporting Environment Variables](#ExportingEnvironmentVariables)
+ * 3.3.2. [Initializing AlertManager](#InitializingAlertManager)
+ * 3.3.3. [Key Configuration Fields](#KeyConfigurationFields)
+ * 3.3.4. [Configuring SMTP Authentication Password](#ConfiguringSMTPAuthenticationPassword)
+ * 3.3.5. [Useful Links for Common Email Servers](#UsefulLinksforCommonEmailServers)
+ * 3.4. [Conclusion](#Conclusion)
+
+
+
+
+OpenIM offers various flexible deployment options to suit different environments and requirements. Here is a simplified and optimized description of these deployment options:
+
+1. Source Code Deployment:
+ + **Regular Source Code Deployment**: Deployment using the `nohup` method. This is a basic deployment method suitable for development and testing environments. For details, refer to the [Regular Source Code Deployment Guide](https://docs.openim.io/).
+ + **Production-Level Deployment**: Deployment using the `system` method, more suitable for production environments. This method provides higher stability and reliability. For details, refer to the [Production-Level Deployment Guide](https://docs.openim.io/guides/gettingStarted/install-openim-linux-system).
+2. Cluster Deployment:
+ + **Kubernetes Deployment**: Provides two deployment methods, including deployment through Helm and sealos. This is suitable for environments that require high availability and scalability. Specific methods can be found in the [Kubernetes Deployment Guide](https://docs.openim.io/guides/gettingStarted/k8s-deployment).
+3. Docker Deployment:
+ + **Regular Docker Deployment**: Suitable for quick deployments and small projects. For detailed information, refer to the [Docker Deployment Guide](https://docs.openim.io/guides/gettingStarted/dockerCompose).
+ + **Docker Compose Deployment**: Provides more convenient service management and configuration, suitable for complex multi-container applications.
+
+Next, we will introduce the specific steps, monitoring, and management backend configuration for each of these deployment methods, as well as usage tips to help you choose the most suitable deployment option according to your needs.
+
+## 1. Source Code & Docker
+
+### 1.1. Deployment
+
+OpenIM deploys openim-server and openim-chat from source code, while other components are deployed via Docker.
+
+For Docker deployment, you can deploy all components with a single command using the [openimsdk/openim-docker](https://github.com/openimsdk/openim-docker) repository. The deployment configuration can be found in the [environment.sh](https://github.com/openimsdk/open-im-server-deploy/blob/main/scripts/install/environment.sh) document, which provides information on how to learn and familiarize yourself with various environment variables.
+
+For Prometheus, it is not enabled by default. To enable it, set the environment variable before executing `make init`:
+
+```bash
+export PROMETHEUS_ENABLE=true # Default is false
+```
+
+Then, execute:
+
+```bash
+make init
+docker compose up -d
+```
+
+### 1.2. Configuration
+
+To configure Prometheus data sources in Grafana, follow these steps:
+
+1. **Log in to Grafana**: First, open your web browser and access the Grafana URL. If you haven't changed the port, the address is typically [http://localhost:13000](http://localhost:13000/).
+
+2. **Log in with default credentials**: Grafana's default username and password are both `admin`. You will be prompted to change the password on your first login.
+
+3. **Access Data Sources Settings**:
+
+ + In the left menu of Grafana, look for and click the "gear" icon representing "Configuration."
+ + In the configuration menu, select "Data Sources."
+
+4. **Add a New Data Source**:
+
+ + On the Data Sources page, click the "Add data source" button.
+ + In the list, find and select "Prometheus."
+
+ 
+
+ Click `Add New connection` to add more data sources, such as Loki (responsible for log storage and query processing).
+
+5. **Configure the Prometheus Data Source**:
+
+ + On the configuration page, fill in the details of the Prometheus server. This typically includes the URL of the Prometheus service (e.g., if Prometheus is running on the same machine as OpenIM, the URL might be `http://172.28.0.1:19090`, with the address matching the `DOCKER_BRIDGE_GATEWAY` variable address). OpenIM and the components are linked via a gateway. The default port used by OpenIM is `19090`.
+ + Adjust other settings as needed, such as authentication and TLS settings.
+
+ 
+
+6. **Save and Test**:
+
+ + After completing the configuration, click the "Save & Test" button to ensure that Grafana can successfully connect to Prometheus.
+
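+Instead of clicking through the UI, the same data source can be declared with Grafana's file-based provisioning (a sketch; the URL assumes the default `DOCKER_BRIDGE_GATEWAY` address and port from above, and the file path is illustrative):
+
+```yaml
+# provisioning/datasources/prometheus.yaml (illustrative path under Grafana's config directory)
+apiVersion: 1
+datasources:
+  - name: Prometheus
+    type: prometheus
+    access: proxy
+    url: http://172.28.0.1:19090
+    isDefault: true
+```
+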
+**Importing Dashboards in Grafana**
+
+Importing Grafana Dashboards is a straightforward process and is applicable to OpenIM Server application services and Node Exporter. Here are detailed steps and necessary considerations:
+
+**Key Metrics Overview and Deployment Steps**
+
+To monitor OpenIM in Grafana, you need to focus on three categories of key metrics, each with its specific deployment and configuration steps:
+
+**OpenIM Metrics (`prometheus-dashboard.yaml`)**:
+
+- **Configuration File Path**: Find this at `config/prometheus-dashboard.yaml`.
+- **Enabling Monitoring**: Activate Prometheus monitoring by setting the environment variable: `export PROMETHEUS_ENABLE=true`.
+- **More Information**: For detailed instructions, see the [OpenIM Configuration Guide](https://docs.openim.io/configurations/prometheus-integration).
+
+**Node Exporter**:
+
+- **Container Deployment**: Use the container `quay.io/prometheus/node-exporter` for effective node monitoring.
+- **Access Dashboard**: Visit the [Node Exporter Full Feature Dashboard](https://grafana.com/grafana/dashboards/1860-node-exporter-full/) for dashboard integration either through YAML file download or ID.
+- **Deployment Guide**: For deployment steps, consult the [Node Exporter Deployment Documentation](https://prometheus.io/docs/guides/node-exporter/).
+
+**Middleware Metrics**: Different middlewares require unique steps and configurations for monitoring:
+
+- MySQL:
+ - **Configuration**: Make sure MySQL is set up for performance monitoring.
+ - **Guide**: See the [MySQL Monitoring Configuration Guide](https://grafana.com/docs/grafana/latest/datasources/mysql/).
+- Redis:
+ - **Configuration**: Adjust Redis settings to enable monitoring data export.
+ - **Guide**: Consult the [Redis Monitoring Guide](https://grafana.com/docs/grafana/latest/datasources/redis/).
+- MongoDB:
+ - **Configuration**: Configure MongoDB for monitoring metrics.
+ - **Guide**: Visit the [MongoDB Monitoring Guide](https://grafana.com/grafana/plugins/grafana-mongodb-datasource/).
+- Kafka:
+ - **Configuration**: Set up Kafka for Prometheus monitoring integration.
+ - **Guide**: Refer to the [Kafka Monitoring Guide](https://grafana.com/grafana/plugins/grafana-kafka-datasource/).
+- Zookeeper:
+ - **Configuration**: Ensure Prometheus can monitor Zookeeper.
+ - **Guide**: Check out the [Zookeeper Monitoring Configuration](https://grafana.com/docs/grafana/latest/datasources/zookeeper/).
+
+**Importing Steps**:
+
+1. Access the Dashboard Import Interface:
+
+ + Click the `+` icon on the left menu or in the top right corner of Grafana, then select "Create."
+ + Choose "Import" to access the dashboard import interface.
+
+2. **Perform Dashboard Import**:
+ + **Upload via File**: Directly upload your YAML file.
+ + **Paste Content**: Open the YAML file, copy its content, and paste it into the import interface.
+ + **Import via Grafana.com Dashboard**: Visit [Grafana Dashboards](https://grafana.com/grafana/dashboards/), search for the desired dashboard, and import it using its ID.
+3. **Configure the Dashboard**:
+ + Select the appropriate data source, such as the previously configured Prometheus.
+ + Adjust other settings, such as the dashboard name or folder.
+4. **Save and View the Dashboard**:
+ + After configuring, click "Import" to complete the process.
+ + Immediately view the new dashboard after successful import.
+
+**Graph Examples:**
+
+
+
+
+
+### 1.3. Monitoring Running in Docker Guide
+
+#### 1.3.1. Introduction
+
+This guide provides the steps to run OpenIM using Docker. OpenIM is an open-source instant messaging solution that can be quickly deployed using Docker. For more information, please refer to the [OpenIM Docker GitHub](https://github.com/openimsdk/openim-docker).
+
+#### 1.3.2. Prerequisites
+
++ Ensure that Docker and Docker Compose are installed.
++ Basic understanding of Docker and containerization technology.
+
+#### 1.3.3. Step 1: Clone the Repository
+
+First, clone the OpenIM Docker repository:
+
+```bash
+git clone https://github.com/openimsdk/openim-docker.git
+```
+
+Navigate to the repository directory and check the `README` file for more information and configuration options.
+
+#### 1.3.4. Step 2: Start Docker Compose
+
+In the repository directory, run the following command to start the service:
+
+```bash
+docker-compose up -d
+```
+
+This will download the required Docker images and start the OpenIM service.
+
+#### 1.3.5. Step 3: Use the OpenIM Web Interface
+
++ Open a browser in private mode and access [OpenIM Web](http://localhost:11001/).
++ Register two users and try adding friends.
++ Test sending messages and pictures.
+
+#### 1.3.6. Running Effect
+
+
+
+#### 1.3.7. Step 4: Access the Admin Panel
+
++ Access the [OpenIM Admin Panel](http://localhost:11002/).
++ Log in using the default username and password (`admin1:admin1`).
+
+Running Effect Image:
+
+
+
+#### 1.3.8. Step 5: Access the Monitoring Interface
+
++ Log in to the [Monitoring Interface](http://localhost:3000/login) using the credentials (`admin:admin`).
+
+#### 1.3.9. Next Steps
+
++ Configure and manage the services following the steps provided in the OpenIM source code.
++ Refer to the `README` file for advanced configuration and management.
+
+#### 1.3.10. Troubleshooting
+
++ If you encounter any issues, please check the documentation on [OpenIM Docker GitHub](https://github.com/openimsdk/openim-docker) or search for related issues in the Issues section.
++ If the problem persists, you can create an issue on the [openim-docker](https://github.com/openimsdk/openim-docker/issues/new/choose) repository or the [openim-server](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) repository.
+
+
+
+## 2. Kubernetes
+
+Refer to [openimsdk/helm-charts](https://github.com/openimsdk/helm-charts).
+
+When deploying and monitoring OpenIM in a Kubernetes environment, you will focus on three main metrics: middleware, custom OpenIM metrics, and Node Exporter. Here are detailed steps and guidelines:
+
+### 2.1. Middleware Monitoring
+
+Middleware monitoring is crucial to ensure the overall system's stability. Typically, this includes monitoring the following components:
+
++ **MySQL**: Monitor database performance, query latency, and more.
++ **Redis**: Track operation latency, memory usage, and more.
++ **MongoDB**: Observe database operations, resource usage, and more.
++ **Kafka**: Monitor message throughput, latency, and more.
++ **Zookeeper**: Keep an eye on cluster status, performance metrics, and more.
+
+For Kubernetes environments, you can use the corresponding Prometheus Exporters to collect monitoring data for these middleware components.
+
+### 2.2. Custom OpenIM Metrics
+
+Custom OpenIM metrics provide essential information about the OpenIM application itself, such as user activity, message traffic, system performance, and more. To monitor these metrics in Kubernetes:
+
++ Ensure OpenIM application configurations expose Prometheus metrics.
++ When deploying using Helm charts (refer to [OpenIM Helm Charts](https://github.com/openimsdk/helm-charts)), pay attention to configuring relevant monitoring settings.
+
+### 2.3. Node Exporter
+
+Node Exporter is used to collect hardware and operating system-level metrics for Kubernetes nodes, such as CPU, memory, disk usage, and more. To integrate Node Exporter in Kubernetes:
+
++ Deploy Node Exporter using the appropriate Helm chart. You can find information and guides on [Prometheus Community](https://prometheus.io/docs/guides/node-exporter/).
++ Ensure Node Exporter's data is collected by Prometheus instances within your cluster.
+
+
+
+## 3. Setting Up and Configuring AlertManager Using Environment Variables and `make init`
+
+### 3.1. Introduction
+
+AlertManager, a component of the Prometheus monitoring system, handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver. This document outlines how to set up and configure AlertManager using environment variables and the `make init` command. We will focus on configuring key fields like the sender's email, SMTP settings, and SMTP authentication password.
+
+### 3.2. Prerequisites
+
++ Basic knowledge of terminal and command-line operations.
++ AlertManager installed on your system.
++ Access to an SMTP server for sending emails.
+
+### 3.3. Configuration Steps
+
+#### 3.3.1. Exporting Environment Variables
+
+Before initializing AlertManager, you need to set environment variables. These variables are used to configure the AlertManager settings without altering the code. Use the `export` command in your terminal. Here are some key variables you might set:
+
++ `export ALERTMANAGER_RESOLVE_TIMEOUT='5m'`
++ `export ALERTMANAGER_SMTP_FROM='alert@example.com'`
++ `export ALERTMANAGER_SMTP_SMARTHOST='smtp.example.com:465'`
++ `export ALERTMANAGER_SMTP_AUTH_USERNAME='alert@example.com'`
++ `export ALERTMANAGER_SMTP_AUTH_PASSWORD='your_password'`
++ `export ALERTMANAGER_SMTP_REQUIRE_TLS='false'`
+
+#### 3.3.2. Initializing AlertManager
+
+After setting the necessary environment variables, you can initialize AlertManager by running the `make init` command. This command typically runs a script that prepares AlertManager with the provided configuration.
+
+#### 3.3.3. Key Configuration Fields
+
+##### a. Sender's Email (`ALERTMANAGER_SMTP_FROM`)
+
+This variable sets the email address that will appear as the sender in the notifications sent by AlertManager.
+
+##### b. SMTP Configuration
+
++ **SMTP Server (`ALERTMANAGER_SMTP_SMARTHOST`):** Specifies the address and port of the SMTP server used for sending emails.
++ **SMTP Authentication Username (`ALERTMANAGER_SMTP_AUTH_USERNAME`):** The username for authenticating with the SMTP server.
++ **SMTP Authentication Password (`ALERTMANAGER_SMTP_AUTH_PASSWORD`):** The password for SMTP server authentication. It's crucial to keep this value secure.
+
+#### 3.3.4. Configuring SMTP Authentication Password
+
+The SMTP authentication password can be set using the `ALERTMANAGER_SMTP_AUTH_PASSWORD` environment variable. It's recommended to use a secure method to set this variable to avoid exposing sensitive information. For instance, you might read the password from a secure file or a secret management tool.
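+
+For example, the password can be kept in a file readable only by the deploying user and exported from there (a demonstration on a temporary file; in practice, point it at your real secret file):
+
+```bash
+# Stand-in for a real secret file such as ~/.config/alertmanager/smtp_password
+secret_file=$(mktemp)
+chmod 600 "$secret_file"                # owner-only read/write
+printf '%s' 'your_password' > "$secret_file"
+
+export ALERTMANAGER_SMTP_AUTH_PASSWORD="$(cat "$secret_file")"
+echo "loaded ${#ALERTMANAGER_SMTP_AUTH_PASSWORD}-character password"
+```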
+
+#### 3.3.5. Useful Links for Common Email Servers
+
+For specific configurations related to common email servers, you may refer to their respective documentation:
+
++ Gmail SMTP Settings:
+ + [Gmail SMTP Configuration](https://support.google.com/mail/answer/7126229?hl=en)
++ Microsoft Outlook SMTP Settings:
+ + [Outlook Email Settings](https://support.microsoft.com/en-us/office/pop-imap-and-smtp-settings-8361e398-8af4-4e97-b147-6c6c4ac95353)
++ Yahoo Mail SMTP Settings:
+ + [Yahoo SMTP Configuration](https://help.yahoo.com/kb/SLN4724.html)
+
+### 3.4. Conclusion
+
+Setting up and configuring AlertManager with environment variables provides a flexible and secure way to manage alert settings. By following the above steps, you can easily configure AlertManager for your monitoring needs. Always ensure to secure sensitive information, especially when dealing with SMTP authentication credentials.
\ No newline at end of file
diff --git a/docs/contrib/protoc-tools.md b/docs/contrib/protoc-tools.md
new file mode 100644
index 0000000..b164728
--- /dev/null
+++ b/docs/contrib/protoc-tools.md
@@ -0,0 +1,60 @@
+# OpenIM Protoc Tool
+
+## Introduction
+
+OpenIM is passionate about ensuring that its suite of tools is custom-tailored to cater to the unique needs of its users. That commitment led us to develop and release our custom Protoc tool, version v1.0.0.
+
+### Why a Custom Version?
+
+There are several reasons to choose our custom Protoc tool over generic open-source versions:
+
+- **Specialized Features**: OpenIM's Protoc tool has been enriched with features and plugins that are optimized for the OpenIM ecosystem. This makes it more aligned with the needs of OpenIM users.
+- **Optimized Performance**: Built from the ground up with OpenIM's infrastructure in mind, our tool guarantees faster and more efficient operations.
+- **Enhanced Compatibility**: Our Protoc tool ensures full compatibility with OpenIM's offerings, minimizing potential conflicts and integration challenges.
+- **Rich Output Support**: Unlike generic tools, our custom tool provides a wide array of output options including C++, C#, Java, Kotlin, Objective-C, PHP, Python, Ruby, and more. This allows developers to generate code for their preferred platform with ease.
+
+## Download
+
++ https://github.com/OpenIMSDK/Open-IM-Protoc
+
+Access the official release of the Protoc tool on the OpenIM repository here: [OpenIM Protoc Tool v1.0.0 Release](https://github.com/OpenIMSDK/Open-IM-Protoc/releases/tag/v1.0.0)
+
+### Direct Download Links:
+
+- **Windows**: [Download for Windows](https://github.com/OpenIMSDK/Open-IM-Protoc/releases/download/v1.0.0/windows.zip)
+- **Linux**: [Download for Linux](https://github.com/OpenIMSDK/Open-IM-Protoc/releases/download/v1.0.0/linux.zip)
+
+## Installation
+
+For Windows:
+
+1. Navigate to the Windows download link provided above and download the version suitable for your system.
+2. Extract the contents of the zip file.
+3. Add the path of the extracted tool to your `PATH` environment variable to run the Protoc tool directly from the command line.
+
+For Linux:
+
+1. Navigate to the Linux download link provided above and download the version suitable for your system.
+2. Extract the contents of the zip file.
+3. Use `chmod +x ./*` to make the extracted files executable.
+4. Add the path of the extracted tool to your `PATH` environment variable to run the Protoc tool directly from the command line.
+
+## Usage
+
+The OpenIM Protoc tool provides a multitude of options for parsing `.proto` files and generating output:
+
+```
+./protoc [OPTION] PROTO_FILES
+```
+
+Some of the key options include:
+
+- `--proto_path=PATH`: Specify the directory to search for imports.
+- `--version`: Show version info.
+- `--encode=MESSAGE_TYPE`: Convert a text-format message of a given type from standard input to binary on standard output.
+- `--decode=MESSAGE_TYPE`: Convert a binary message of a given type from standard input to text format on standard output.
+- `--cpp_out=OUT_DIR`: Generate C++ header and source.
+- `--java_out=OUT_DIR`: Generate Java source file.
+
+... and many more. For a full list of options, run `./protoc --help` or refer to the official documentation.
\ No newline at end of file
diff --git a/docs/contrib/release.md b/docs/contrib/release.md
new file mode 100644
index 0000000..54429de
--- /dev/null
+++ b/docs/contrib/release.md
@@ -0,0 +1,251 @@
+# OpenIM Release Automation Design Document
+
+This document outlines the automation process for releasing OpenIM. You can use the `make release` command for automated publishing. We will discuss how to use the `make release` command and the GitHub Actions CI/CD pipeline separately, while also providing insight into the design principles involved.
+
+## Github Actions Automation
+
+In our CICD pipeline, we have implemented logic for automating the release process using the goreleaser tool. To achieve this, follow these steps on your local machine or server:
+
+```bash
+git clone https://github.com/openimsdk/open-im-server-deploy
+cd open-im-server-deploy
+git tag -a v3.6.0 -s -m "release: xxx"
+# For pre-release versions: git tag -a v3.6.0-rc.0 -s -m "pre-release: xxx"
+git push origin v3.6.0
+```
+
+The remaining tasks are handled by automated processes:
+
++ Automatically complete the release publication on Github
++ Automatically build the `v3.6.0` version image and push it to aliyun, dockerhub, and github
+
+Through these automated steps, we achieve rapid and efficient OpenIM version releases, simplifying the release process and enhancing productivity.
+
+
+## Local Make Release Design
+
+There are two primary scenarios for local usage:
+
++ Advanced compilation and release, manually executed locally
++ Quick compilation verification and version release, manually executed locally
+
+**These two scenarios can also be combined, for example, by tagging locally and then releasing:**
+
+```bash
+git add .
+git commit -a -s -m "release(v3.6.0): ......"
+git tag v3.6.0
+make release
+git push origin main
+```
+
+In a local environment, you can use the `make release` command to complete the release process. The main implementation logic can be found in `scripts/lib/release.sh`. First, let's explore its usage through the help information.
+
+### Help Information
+
+To view the help information, execute the following command:
+
+```bash
+$ ./scripts/release.sh --help
+Usage: release.sh [options]
+Options:
+ -h, --help Display this help message
+ -se, --setup-env Execute environment setup
+ -vp, --verify-prereqs Execute prerequisite verification
+ -bc, --build-command Execute build command
+ -bi, --build-image Execute build image (default is not executed)
+ -pt, --package-tarballs Execute tarball packaging
+ -ut, --upload-tarballs Execute tarball upload
+ -gr, --github-release Execute GitHub release
+ -gc, --generate-changelog Execute changelog generation
+```
+
+### Default Behavior
+
+If no options are provided, all operations are executed by default:
+
+```bash
+# If no options are provided, enable all operations by default
+if [ "$#" -eq 0 ]; then
+ perform_setup_env=true
+ perform_verify_prereqs=true
+ perform_build_command=true
+ perform_package_tarballs=true
+ perform_upload_tarballs=true
+ perform_github_release=true
+ perform_generate_changelog=true
+    # TODO: build_image is not enabled by default
+ # perform_build_image=true
+fi
+```
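+
+The flag handling above can be sketched as a small parser. This is a simplified illustration, not the actual `release.sh` implementation; only the option names are taken from the help output:
+
+```bash
+# Simplified sketch of mapping release.sh flags to phase toggles.
+# Not the real implementation; option names mirror the help output above.
+parse_release_flags() {
+  perform_build_image=false
+  perform_github_release=false
+  while [ "$#" -gt 0 ]; do
+    case "$1" in
+      -bi|--build-image)    perform_build_image=true ;;
+      -gr|--github-release) perform_github_release=true ;;
+      *) echo "unknown option: $1" >&2; return 1 ;;
+    esac
+    shift
+  done
+}
+```
+
+Calling `parse_release_flags --build-image` would then enable only the image-build phase while leaving the others at their defaults.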
+
+### Environment Variable Setup
+
+Before starting, you need to set environment variables:
+
+```bash
+export TENCENT_SECRET_KEY=OZZ****************************
+export TENCENT_SECRET_ID=AKI****************************
+```
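+
+A quick pre-flight check can fail fast when these variables are missing. The snippet below is illustrative only and not part of the release scripts:
+
+```bash
+# Fail early if the COS credentials are not exported (illustrative only).
+check_cos_env() {
+  local v
+  for v in TENCENT_SECRET_ID TENCENT_SECRET_KEY; do
+    if [ -z "${!v:-}" ]; then
+      echo "error: $v is not set" >&2
+      return 1
+    fi
+  done
+  echo "COS credentials present"
+}
+```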
+
+### Modifying COS Account and Password
+
+If you need to change the COS account, password, and bucket information, modify the following section in the `scripts/lib/release.sh` file:
+
+```bash
+readonly BUCKET="openim-1306374445"
+readonly REGION="ap-guangzhou"
+readonly COS_RELEASE_DIR="openim-release"
+```
+
+### GitHub Release Configuration
+
+If you intend to use the GitHub Release feature, you also need to set the environment variable:
+
+```bash
+export GITHUB_TOKEN="your_github_token"
+```
+
+### Modifying GitHub Release Basic Information
+
+If you need to modify the basic information of the GitHub Release, edit the following section in the `scripts/lib/release.sh` file:
+
+```bash
+# OpenIM GitHub account information
+readonly OPENIM_GITHUB_ORG=openimsdk
+readonly OPENIM_GITHUB_REPO=open-im-server-deploy
+```
+
+This setup allows you to configure and execute the local release process according to your specific needs.
+
+
+### GitHub Release Versioning Rules
+
+Firstly, it's important to note that GitHub Releases should primarily be used for pre-release versions. goreleaser provides a `prerelease: auto` option, which automatically marks versions carrying pre-release identifiers such as `-rc1` or `-beta` as pre-releases.
+
+So, if your most recent tag does not carry a pre-release identifier such as `-rc1` or `-beta`, goreleaser may treat the release as a formal one, even when you ran `make release` intending a pre-release.
+
+To avoid this issue, I have added the `--draft` flag to github-release. This way, all releases are created as drafts.
+
+## CICD Release Documentation Design
+
+Release notes for a GitHub Release still require manual composition; this differs from the automated `github-release` flow.
+
+This approach ensures that all releases are initially created as drafts, allowing you to manually review and edit the release documentation on GitHub. This manual step provides more control and allows you to curate release notes and other information before making them public.
+
+
+## Makefile Section
+
+This document aims to elaborate and explain key sections of the OpenIM Release automation design, including the Makefile section and functions within the code. Below, we will provide a detailed explanation of the logic and functions of each section.
+
+In the project's root directory, the Makefile imports a subdirectory:
+
+```makefile
+include scripts/make-rules/release.mk
+```
+
+And defines the `release` target as follows:
+
+```makefile
+## release: release the project ✨
+.PHONY: release
+release: release.verify release.ensure-tag
+	@scripts/release.sh
+```
+
+### Importing Subdirectory
+
+At the beginning of the Makefile, the `include scripts/make-rules/release.mk` statement imports the `release.mk` file from the subdirectory. This file contains rules and configurations related to releases to be used in subsequent operations.
+
+### The `release` Target
+
+The Makefile defines a target named `release`, which is used to execute the project's release operation. This target is marked as a phony target (`.PHONY`), meaning it doesn't represent an actual file or directory but serves as an identifier for executing a series of actions.
+
+In the `release` target, two dependency targets are executed first: `release.verify` and `release.ensure-tag`. Afterward, the `scripts/release.sh` script is called to perform the actual release operation.
+
+## Logic of `release.verify` and `release.ensure-tag`
+
+```makefile
+## release.verify: Check if a tool is installed and install it
+.PHONY: release.verify
+release.verify: tools.verify.git-chglog tools.verify.github-release tools.verify.coscmd tools.verify.coscli
+
+## release.ensure-tag: ensure tag
+.PHONY: release.ensure-tag
+release.ensure-tag: tools.verify.gsemver
+ @scripts/ensure-tag.sh
+```
+
+### `release.verify` Target
+
+The `release.verify` target is used to check and install tools. It depends on four sub-targets: `tools.verify.git-chglog`, `tools.verify.github-release`, `tools.verify.coscmd`, and `tools.verify.coscli`. These sub-targets aim to check if specific tools are installed and attempt to install them if they are not.
+
+The purpose of this target is to ensure that the necessary tools required for the release process are available so that subsequent operations can be executed smoothly.
+
+### `release.ensure-tag` Target
+
+The `release.ensure-tag` target is used to ensure that the project has a version tag. It depends on the sub-target `tools.verify.gsemver`, indicating that it should check if the `gsemver` tool is installed before executing.
+
+When the `release.ensure-tag` target is executed, it calls the `scripts/ensure-tag.sh` script to ensure that the project has a version tag. Version tags are typically used to identify specific versions of the project for management and release in version control systems.
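+
+The core idea of `ensure-tag.sh` can be sketched as follows. The real script derives the version with `gsemver`; this simplified stand-in takes the version as an argument instead:
+
+```bash
+# Create an annotated tag only if it does not already exist
+# (simplified stand-in for scripts/ensure-tag.sh).
+ensure_tag() {
+  local version="$1"
+  if git rev-parse -q --verify "refs/tags/${version}" >/dev/null; then
+    echo "tag ${version} already exists"
+  else
+    git tag -a "${version}" -m "release: ${version}"
+    echo "created tag ${version}"
+  fi
+}
+```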
+
+## Logic of `release.sh` Script
+
+```bash
+openim::golang::setup_env
+openim::build::verify_prereqs
+openim::release::verify_prereqs
+#openim::build::build_image
+openim::build::build_command
+openim::release::package_tarballs
+openim::release::upload_tarballs
+git push origin ${VERSION}
+#openim::release::github_release
+#openim::release::generate_changelog
+```
+
+The `release.sh` script is responsible for executing the actual release operations. Below is the logic of this script:
+
+1. `openim::golang::setup_env`: This function sets up some configurations for the Golang development environment.
+
+2. `openim::build::verify_prereqs`: This function is used to verify whether the prerequisites for building are met. This includes checking dependencies, environment variables, and more.
+
+3. `openim::release::verify_prereqs`: Similar to the previous function, this one is used to verify whether the prerequisites for the release are met. It focuses on conditions relevant to the release.
+
+4. `openim::build::build_command`: This function is responsible for building the project's command, which typically involves compiling the project or performing other build operations.
+
+5. `openim::release::package_tarballs`: This function is used to package tarball files required for the release. These tarballs are usually used for distribution packages during the release.
+
+6. `openim::release::upload_tarballs`: This function is used to upload the packaged tarball files, typically to a distribution platform or repository.
+
+7. `git push origin ${VERSION}`: This command pushes the version tag to the `origin` remote, recording this release in the version control system.
+
+In the comments, you can see that there are some operations that are commented out, such as `openim::build::build_image`, `openim::release::github_release`, and `openim::release::generate_changelog`. These operations are related to building images, releasing to GitHub, and generating changelogs, and they can be enabled in the release process as needed.
+
+Let's take a closer look at the function responsible for packaging the tarball files:
+
+```bash
+function openim::release::package_tarballs() {
+ # Clean out any old releases
+ rm -rf "${RELEASE_STAGE}" "${RELEASE_TARS}" "${RELEASE_IMAGES}"
+ mkdir -p "${RELEASE_TARS}"
+ openim::release::package_src_tarball &
+ openim::release::package_client_tarballs &
+ openim::release::package_openim_manifests_tarball &
+ openim::release::package_server_tarballs &
+ openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
+
+ openim::release::package_final_tarball & # _final depends on some of the previous phases
+ openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
+}
+```
+
+The `openim::release::package_tarballs()` function is responsible for packaging the tarball files required for the release. Here is the specific logic of this function:
+
+1. `rm -rf "${RELEASE_STAGE}" "${RELEASE_TARS}" "${RELEASE_IMAGES}"`: First, the function removes any old release directories and files to ensure a clean starting state.
+
+2. `mkdir -p "${RELEASE_TARS}"`: Next, it creates a directory `${RELEASE_TARS}` to store the packaged tarball files. If the directory doesn't exist, it will be created.
+
+3. `openim::release::package_src_tarball &` and its three sibling calls: these launch the source, client, manifest, and server tarball packaging steps as parallel background jobs.
+
+4. `openim::util::wait-for-jobs`: waits for all background jobs to complete. If any of them fails, an error is logged and the function returns 1.
+
+5. `openim::release::package_final_tarball &`: packages the final tarball, which depends on the output of the earlier phases, followed by a second `wait-for-jobs` to confirm it succeeded.
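+
+The background-job orchestration here follows a common bash pattern: start each phase with `&`, then wait and count failures. A minimal stand-in for `openim::util::wait-for-jobs` (the actual helper lives in the scripts library) looks like this:
+
+```bash
+# Wait for all background jobs; return the number that failed
+# (minimal stand-in for openim::util::wait-for-jobs).
+wait_for_jobs() {
+  local fail=0 job
+  for job in $(jobs -p); do
+    wait "$job" || fail=$((fail + 1))
+  done
+  return "$fail"
+}
+```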
diff --git a/docs/contrib/test.md b/docs/contrib/test.md
new file mode 100644
index 0000000..f847dbf
--- /dev/null
+++ b/docs/contrib/test.md
@@ -0,0 +1,263 @@
+# OpenIM RPC Service Test Control Script Documentation
+
+This document serves as a comprehensive guide to understanding and utilizing the `test.sh` script for testing OpenIM RPC services. The `test.sh` script is a collection of bash functions designed to test various aspects of the OpenIM RPC services, ensuring that each part of the API is functioning as expected.
+
++ Scripts: https://github.com/openimsdk/open-im-server-deploy/tree/main/scripts/install/test.sh
+
+Larger functional tests, performance tests, and the various e2e tests live in this repository under https://github.com/openimsdk/open-im-server-deploy/tree/main/test, or in the https://github.com/openim-sigs/test-infra repository.
+
++ About OpenIM Feature [Test Docs](https://docs.google.com/spreadsheets/d/1zELWkwxgOOZ7u5pmYCqqaFnvZy2SVajv/edit?usp=sharing&ouid=103266350914914783293&rtpof=true&sd=true)
+
+## Util Test
+
+### Comprehensive Testing Instructions
+
+#### Running Unit Tests
+
+- **Command**: To execute unit tests, input the following in your terminal:
+ ```
+ make test
+ ```
+
+#### Evaluating Test Coverage
+
+- **Overview**: It's crucial to assess how much of your code is covered by tests.
+- **Command**:
+ ```bash
+ make cover
+ ```
+ This command generates a report detailing the percentage of your code tested, ensuring adherence to quality standards.
+
+#### Conducting API Tests
+
+- **Purpose**: API tests validate the interaction and functionality of your application's interfaces.
+- **How to Run**:
+ ```
+ make test-api
+ ```
+ Use this to check the integrity and reliability of your API endpoints.
+
+#### End-to-End (E2E) Testing
+
+- **Scope**: E2E tests simulate real-user scenarios from start to finish.
+- **Execution**:
+ ```
+ make test-e2e
+ ```
+ This comprehensive testing ensures your application performs as expected in real-world situations.
+
+### Crafting Unit Test Cases
+
+#### Setup for Test Case Generation
+
+- **Installation**: Install the `gotests` tool to generate test cases automatically.
+ ```bash
+ make install.gotests
+ ```
+ This command installs the `gotests` tool for test case generation.
+
+- **Environment Preparation**: Define your test template environment variable and generate test cases as shown below:
+ ```bash
+ export GOTESTS_TEMPLATE=testify
+ gotests -i -w -only keyFunc .
+ ```
+ This prepares your environment for test case generation using the `testify` template.
+
+#### Isolating Function Tests
+
+- **Single Function Testing**: When you need to focus on testing a single function for detailed examination.
+- **Method**:
+ ```bash
+ go test -v -run TestKeyFunc
+ ```
+ This command specifically runs tests for `TestKeyFunc`, allowing targeted debugging and validation.
+
+### Important Note
+
+- **Quality Assurance**: Throughout your development process, it is imperative to ensure that the unit test coverage meets or surpasses the standards set by OpenIM.
+- **Maintaining Standards**: Regularly running your tests with
+ ```make test```
+ supports maintaining high code quality and adherence to OpenIM's rigorous testing benchmarks.
+
+## E2E Test
+
+TODO
+
+## Api Test
+
+The `test.sh` script is located within the `./scripts/install/` directory of the OpenIM service's codebase. To use the script, navigate to this directory from your terminal:
+
+```bash
+cd ./scripts/install/
+chmod +x test.sh
+```
+
+### Running the Entire Test Suite
+
+To execute all available tests, you can either call the script directly or use the `make` command:
+
+```
+./test.sh openim::test::test
+```
+
+Or, if you have a `Makefile` that defines the `test-api` target:
+
+```bash
+make test-api
+```
+
+Alternatively, you can invoke a specific test function by passing its name as an argument, for example:
+
+```bash
+./test.sh openim::test::msg
+```
+
+The `make test-api` target should be equivalent to running `./test.sh openim::test::test`, provided that the `Makefile` is configured accordingly.
+
+
+
+### Executing Individual Test Functions
+
+If you wish to run a specific set of tests, you can call the relevant function by passing it as an argument to the script. Here are some examples:
+
+**Message Tests:**
+
+```bash
+./test.sh openim::test::msg
+```
+
+**Authentication Tests:**
+
+```bash
+./test.sh openim::test::auth
+```
+
+**User Tests:**
+
+```bash
+./test.sh openim::test::user
+```
+
+**Friend Tests:**
+
+```bash
+./test.sh openim::test::friend
+```
+
+**Group Tests:**
+
+```bash
+./test.sh openim::test::group
+```
+
+Each of these commands will run the test suite associated with the specific functionality of the OpenIM service.
+
+
+
+### Detailed Function Test Examples
+
+**Testing Message Sending and Receiving:**
+
+To test message functionality, the `openim::test::msg` function is called. It will register a user, send a message, and clear messages to ensure that the messaging service is operational.
+
+```bash
+./test.sh openim::test::msg
+```
+
+**Testing User Registration and Account Checks:**
+
+The `openim::test::user` function will create new user accounts and perform a series of checks on these accounts to verify that user registration and account queries are functioning properly.
+
+```bash
+./test.sh openim::test::user
+```
+
+**Testing Friend Management:**
+
+By invoking `openim::test::friend`, the script will test adding friends, checking friendship status, managing friend requests, and handling blacklisting.
+
+```bash
+./test.sh openim::test::friend
+```
+
+**Testing Group Operations:**
+
+The `openim::test::group` function tests group creation, member addition, information retrieval, and member management within groups.
+
+```bash
+./test.sh openim::test::group
+```
+
+### Log Output
+
+Each test function will output logs to the terminal to confirm the success or failure of the tests. These logs are crucial for identifying issues and verifying that each part of the service is tested thoroughly.
+
+Each function logs its success upon completion, which aids in debugging and understanding the test flow. The success message is standardized across functions:
+
+```
+openim::log::success " completed successfully."
+```
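+
+The logger itself is defined in the scripts library. A hypothetical minimal version, shown only to illustrate the shape of such a helper (not the actual `openim::log` implementation):
+
+```bash
+# Hypothetical success logger: green text with a standardized prefix.
+log_success() {
+  printf '\033[32m==> %s\033[0m\n' "$*"
+}
+```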
+
+By following the guidelines and instructions outlined in this document, you can effectively utilize the `test.sh` script to test and verify the OpenIM RPC services' functionality.
+
+
+
+## Function Features
+
+| Function Name | Corresponding API/Action | Function Purpose |
+| ---------------------------------------------------- | --------------------------------------------- | ------------------------------------------------------------ |
+| `openim::test::msg` | Messaging Operations | Tests all aspects of messaging, including sending, receiving, and managing messages. |
+| `openim::test::auth` | Authentication Operations | Validates the authentication process and session management, including token handling and forced logout. |
+| `openim::test::user` | User Account Operations | Covers testing for user account creation, retrieval, updating, and overall management. |
+| `openim::test::friend` | Friend Relationship Operations | Ensures friend management functions correctly, including requests, listing, and blacklisting. |
+| `openim::test::group` | Group Management Operations | Checks group-related functionalities like creation, invitation, information retrieval, and member management. |
+| `openim::test::send_msg` | Send Message API | Simulates sending a message from one user to another or within a group. |
+| `openim::test::revoke_msg` | Revoke Message API (TODO) | (Planned) Will test the revocation of a previously sent message. |
+| `openim::test::user_register` | User Registration API | Registers a new user in the system to validate the registration process. |
+| `openim::test::check_account` | Account Check API | Checks if an account exists for a given user ID. |
+| `openim::test::user_clear_all_msg` | Clear All Messages API | Clears all messages for a given user to validate message history management. |
+| `openim::test::get_token` | Token Retrieval API | Retrieves an authentication token to validate token management. |
+| `openim::test::force_logout` | Force Logout API | Forces a logout for a test user to validate session control. |
+| `openim::test::check_user_account` | User Account Existence Check API | Confirms the existence of a test user's account. |
+| `openim::test::get_users` | Get Users API | Retrieves a list of users to validate user query functionality. |
+| `openim::test::get_users_info` | Get User Information API | Obtains detailed information for a given user. |
+| `openim::test::get_users_online_status` | Get User Online Status API | Checks the online status of a user to validate presence functionality. |
+| `openim::test::update_user_info` | Update User Information API | Updates a user's information to validate account update capabilities. |
+| `openim::test::get_subscribe_users_status` | Get Subscribed Users' Status API | Retrieves the status of users that a test user has subscribed to. |
+| `openim::test::subscribe_users_status` | Subscribe to Users' Status API | Subscribes a test user to a set of user statuses. |
+| `openim::test::set_global_msg_recv_opt` | Set Global Message Receiving Option API | Sets the message receiving option for a test user. |
+| `openim::test::is_friend` | Check Friendship Status API | Verifies if two users are friends within the system. |
+| `openim::test::add_friend` | Send Friend Request API | Sends a friend request from one user to another. |
+| `openim::test::get_friend_list` | Get Friend List API | Retrieves the friend list of a test user. |
+| `openim::test::get_friend_apply_list` | Get Friend Application List API | Retrieves friend applications for a test user. |
+| `openim::test::get_self_friend_apply_list` | Get Self-Friend Application List API | Retrieves the friend applications that the user has applied for. |
+| `openim::test::add_black` | Add User to Blacklist API | Adds a user to the test user's blacklist to validate blacklist functionality. |
+| `openim::test::remove_black` | Remove User from Blacklist API | Removes a user from the test user's blacklist. |
+| `openim::test::get_black_list` | Get Blacklist API | Retrieves the blacklist for a test user. |
+| `openim::test::create_group` | Group Creation API | Creates a new group with test users to validate group creation. |
+| `openim::test::invite_user_to_group` | Invite User to Group API | Invites a user to join a group to test invitation functionality. |
+| `openim::test::transfer_group` | Group Ownership Transfer API | Tests the transfer of group ownership from one member to another. |
+| `openim::test::get_groups_info` | Get Group Information API | Retrieves information for specified groups to validate group query functionality. |
+| `openim::test::kick_group` | Kick User from Group API | Simulates kicking a user from a group to test group membership management. |
+| `openim::test::get_group_members_info` | Get Group Members Information API | Obtains detailed information for members of a specified group. |
+| `openim::test::get_group_member_list` | Get Group Member List API | Retrieves a list of members for a given group to ensure member listing is functional. |
+| `openim::test::get_joined_group_list` | Get Joined Group List API | Retrieves a list of groups that a user has joined to validate user's group memberships. |
+| `openim::test::set_group_member_info` | Set Group Member Information API | Updates the information for a group member to test the update functionality. |
+| `openim::test::mute_group` | Mute Group API | Tests the ability to mute a group, disabling message notifications for its members. |
+| `openim::test::cancel_mute_group` | Cancel Mute Group API | Tests the ability to cancel the mute status of a group, re-enabling message notifications. |
+| `openim::test::dismiss_group` | Dismiss Group API | Tests the ability to dismiss and delete a group from the system. |
+| `openim::test::cancel_mute_group_member` | Cancel Mute Group Member API | Tests the ability to cancel mute status for a specific group member. |
+| `openim::test::join_group` | Join Group API (TODO) | (Planned) Will test the functionality for a user to join a specified group. |
+| `openim::test::set_group_info` | Set Group Information API | Tests the ability to update the group information, such as the name or description. |
+| `openim::test::quit_group` | Quit Group API | Tests the functionality for a user to leave a specified group. |
+| `openim::test::get_recv_group_applicationList` | Get Received Group Application List API | Retrieves the list of group applications received by a user to validate application management. |
+| `openim::test::group_application_response` | Group Application Response API (TODO) | (Planned) Will test the functionality to respond to a group join request. |
+| `openim::test::get_user_req_group_applicationList` | Get User Requested Group Application List API | Retrieves the list of group applications requested by a user to validate tracking of user's applications. |
+| `openim::test::mute_group_member` | Mute Group Member API | Tests the ability to mute a specific member within a group, disabling their ability to send messages. |
+| `openim::test::get_group_users_req_application_list` | Get Group Users Request Application List API | Retrieves a list of user requests for group applications to validate group request management. |
diff --git a/docs/contrib/util-go.md b/docs/contrib/util-go.md
new file mode 100644
index 0000000..2a1d2e9
--- /dev/null
+++ b/docs/contrib/util-go.md
@@ -0,0 +1,8 @@
+# utils go
+
++ [tools readme](https://github.com/openimsdk/open-im-server-deploy/tree/main/tools)
+
+The scripts locate prebuilt tool binaries at the following path (note the underscore: `OPENIM_ROOT`):
+
+```
+"${OPENIM_ROOT}/_output/bin/tools/${platform}/${lookfor}"
+```
diff --git a/docs/contrib/util-makefile.md b/docs/contrib/util-makefile.md
new file mode 100644
index 0000000..f14e705
--- /dev/null
+++ b/docs/contrib/util-makefile.md
@@ -0,0 +1,90 @@
+# Open-IM-Server Development Tools Guide
+
+- [Open-IM-Server Development Tools Guide](#open-im-server-development-tools-guide)
+ - [Introduction](#introduction)
+ - [Getting Started](#getting-started)
+ - [Toolset Categories](#toolset-categories)
+ - [Installation Commands](#installation-commands)
+ - [Basic Installation](#basic-installation)
+ - [Individual Tool Installation](#individual-tool-installation)
+ - [Tool Verification](#tool-verification)
+ - [Detailed Tool Descriptions](#detailed-tool-descriptions)
+ - [Best Practices](#best-practices)
+ - [Conclusion](#conclusion)
+
+
+## Introduction
+
+Open-IM-Server boasts a robust set of tools encapsulated within its Makefile system, designed to ease development, code formatting, and tool management. This guide aims to familiarize developers with the features and usage of the Makefile toolset provided within the Open-IM-Server project.
+
+## Getting Started
+
+Executing `make tools` ensures verification and installation of the default tools:
+
+- golangci-lint
+- goimports
+- addlicense
+- deepcopy-gen
+- conversion-gen
+- ginkgo
+- go-junit-report
+- go-gitlint
+
+The installation path is situated at `./_output/tools/`.
+
+## Toolset Categories
+
+The Makefile logically groups tools into different categories for better management:
+
+- **Development Tools**: `BUILD_TOOLS`
+- **Code Analysis Tools**: `ANALYSIS_TOOLS`
+- **Code Generation Tools**: `GENERATION_TOOLS`
+- **Testing Tools**: `TEST_TOOLS`
+- **Version Control Tools**: `VERSION_CONTROL_TOOLS`
+- **Utility Tools**: `UTILITY_TOOLS`
+- **Tencent Cloud Object Storage Tools**: `COS_TOOLS`
+
+## Installation Commands
+
+1. **golangci-lint**: A high-performance Go linter that integrates multiple inspection tools.
+2. **goimports**: Formats Go source files and automatically adds or removes imports.
+3. **addlicense**: Adds a license header to source files.
+4. **deepcopy-gen and conversion-gen**: Generate deepcopy and conversion functionality.
+5. **ginkgo**: A testing framework for Go.
+6. **go-junit-report**: Converts Go test output to JUnit XML format.
+7. **go-gitlint**: Checks git commit messages. The remaining tools follow the same pattern; see the `make tools.help` output for the full list.
+
+
+
+### Basic Installation
+
+- `tools.install`: Installs tools mentioned in the `BUILD_TOOLS` list.
+- `tools.install-all`: Installs all tools from the `ALL_TOOLS` list.
+
+### Individual Tool Installation
+
+- `tools.install.%`: Installs a single tool in the `$GOBIN/` directory.
+- `tools.install-all.%`: Installs an individual tool located in `./tools/*`, in parallel.
+
+### Tool Verification
+
+- `tools.verify.%`: Checks if a specific tool is installed, and if not, installs it.
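+
+The verify-then-install behavior can be illustrated with a shell equivalent. This is an assumption-based sketch; the real Makefile rule shells out to the corresponding `tools.install.%` target:
+
+```bash
+# Shell sketch of tools.verify.%: install a tool only when it is missing.
+verify_tool() {
+  if command -v "$1" >/dev/null 2>&1; then
+    echo "$1: already installed"
+  else
+    echo "$1: installing"   # the Makefile would invoke tools.install.$1 here
+  fi
+}
+```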
+
+## Detailed Tool Descriptions
+
+The following commands serve the purpose of installing particular development tools:
+
+- `install.golangci-lint`: Installs `golangci-lint`.
+- `install.addlicense`: Installs `addlicense`. An analogous `install.<tool>` target exists for every tool defined in the Makefile.
+
+The commands primarily leverage Go's `install` operation, fetching and installing tools from their respective repositories. This approach is convenient because it automatically handles dependencies and installation paths. For tools not written in Go (such as `coscli`), other installation methods like `wget` or `pip` are used.
+
+## Best Practices
+
+1. **Regular Updates**: To ensure tools are up-to-date, periodically run the `make tools` command.
+2. **Individual Tools**: If only specific tools are required, use the `make install.<tool>` command for individual installations.
+3. **Verification**: Before code submissions, use the `make tools.verify.%` command to guarantee that all necessary tools are present and up-to-date.
+
+## Conclusion
+
+The Makefile provided by Open-IM-Server presents a centralized approach to manage and install all necessary tools during the development process. It ensures that all developers employ consistent tool versions, reducing potential issues due to version disparities. Whether you're a maintainer or a contributor to the Open-IM-Server project, understanding the workings of this Makefile will significantly enhance your developmental efficiency.
diff --git a/docs/contrib/util-scripts.md b/docs/contrib/util-scripts.md
new file mode 100644
index 0000000..30da871
--- /dev/null
+++ b/docs/contrib/util-scripts.md
@@ -0,0 +1,248 @@
+# OpenIM Bash Utility Script
+
+This script offers a variety of utilities and helpers to enhance and simplify operations related to the OpenIM project.
+
+## Table of Contents
+
+- [OpenIM Bash Utility Script](#openim-bash-utility-script)
+ - [Table of Contents](#table-of-contents)
+  - [Brief Descriptions of Each Function](#brief-descriptions-of-each-function)
+ - [Introduction](#introduction)
+ - [Usage](#usage)
+ - [SSH Key Setup](#ssh-key-setup)
+ - [openim::util::ensure-gnu-sed](#openimutilensure-gnu-sed)
+ - [openim::util::ensure-gnu-date](#openimutilensure-gnu-date)
+ - [openim::util::check-file-in-alphabetical-order](#openimutilcheck-file-in-alphabetical-order)
+ - [openim::util::require-jq](#openimutilrequire-jq)
+ - [openim::util::md5](#openimutilmd5)
+ - [openim::util::read-array](#openimutilread-array)
+ - [Color Definitions](#color-definitions)
+ - [openim::util::desc and related functions](#openimutildesc-and-related-functions)
+ - [openim::util::onCtrlC](#openimutilonctrlc)
+ - [openim::util::list-to-string](#openimutillist-to-string)
+ - [openim::util::remove-space](#openimutilremove-space)
+ - [openim::util::gencpu](#openimutilgencpu)
+ - [openim::util::gen-os-arch](#openimutilgen-os-arch)
+ - [openim::util::download-file](#openimutildownload-file)
+ - [openim::util::get-public-ip](#openimutilget-public-ip)
+ - [openim::util::extract-tarball](#openimutilextract-tarball)
+ - [openim::util::check-port-open](#openimutilcheck-port-open)
+ - [openim::util::file-lines-count](#openimutilfile-lines-count)
+
+
+## Brief Descriptions of Each Function
+
+1. `openim::util::ensure-gnu-sed` - Determines if GNU version of `sed` exists on the system and sets its name.
+2. `openim::util::ensure-gnu-date` - Determines if GNU version of `date` exists on the system and sets its name.
+3. `openim::util::check-file-in-alphabetical-order` - Checks if a file is sorted in alphabetical order.
+4. `openim::util::require-jq` - Checks if `jq` is installed.
+5. `openim::util::md5` - Outputs the MD5 hash of a file.
+6. `openim::util::read-array` - Reads content from standard input into an array.
+7. `openim::util::desc` - Displays descriptive information.
+8. `openim::util::run::prompt` - Displays a prompt.
+9. `openim::util::run::maybe-first-prompt` - Possibly displays the first prompt based on whether it's started or not.
+10. `openim::util::run` - Executes a command and captures its output.
+11. `openim::util::run::relative` - Returns paths relative to the current script.
+12. `openim::util::onCtrlC` - Performs an action when Ctrl+C is pressed.
+13. `openim::util::list-to-string` - Converts a list into a string.
+14. `openim::util::remove-space` - Removes spaces from a string.
+15. `openim::util::gencpu` - Retrieves CPU information.
+16. `openim::util::gen-os-arch` - Generates a repository directory based on the operating system and architecture.
+17. `openim::util::download-file` - Downloads a file from a URL.
+18. `openim::util::get-public-ip` - Retrieves the public IP address of the machine.
+19. `openim::util::extract-tarball` - Extracts a tarball to a specified directory.
+20. `openim::util::check-port-open` - Checks if a given port is open on the machine.
+21. `openim::util::file-lines-count` - Counts the number of lines in a file.
+
+
+
+## Introduction
+
+Beyond validating that code is correctly formatted with `gofmt`, this script offers utilities such as SSH key setup, various wait conditions, host and platform detection, and documentation generation.
+
+## Usage
+
+### SSH Key Setup
+
+To set up an SSH key:
+
+```bash
+# 1. Write IPs in a file, one IP per line; let's name it hosts-file.
+# 2. Modify the default username and password in the script.
+hosts_file_path="path/to/your/hosts/file"
+openim::util::setup_ssh_key_copy "$hosts_file_path" "root" "123"
+```
+
+## openim::util::ensure-gnu-sed
+
+Ensures the presence of the GNU version of the `sed` command. Different operating systems may have variations of the `sed` command, and this utility function is used to make sure the script uses the GNU version. If it finds the GNU `sed`, it sets the `SED` variable accordingly. If not found, it checks for `gsed`, which is usually the name of GNU `sed` on macOS. If neither is found, an error message is displayed.
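A minimal sketch of how this check can work (the actual implementation in the repository may differ; the error message here is illustrative):

```shell
function openim::util::ensure-gnu-sed() {
  # GNU sed (and BusyBox sed) advertise themselves in --help output;
  # BSD sed on macOS does not.
  if LANG=C sed --help 2>&1 | grep -q "GNU\|BusyBox"; then
    SED="sed"
  elif command -v gsed &>/dev/null; then
    SED="gsed"  # common name of GNU sed installed via Homebrew on macOS
  else
    echo "Failed to find GNU sed. On macOS, try: brew install gnu-sed" >&2
    return 1
  fi
}
```

Scripts then call `"${SED}"` instead of `sed` so GNU-only flags like `-i` behave consistently across platforms.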
+
+
+
+## openim::util::ensure-gnu-date
+
+Similar to the function for `sed`, this function ensures the script uses the GNU version of the `date` command. If it identifies the GNU `date`, it sets the `DATE` variable. On macOS, it looks for `gdate` as an alternative. If neither is found, an error message is displayed.
+
+
+
+## openim::util::check-file-in-alphabetical-order
+
+This function checks if the contents of a given file are sorted in alphabetical order. If not, it provides a command suggestion for the user to sort the file correctly.
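A sketch of such a check built on `sort -c`, which exits non-zero when its input is out of order (the exact wording of the suggestion is illustrative):

```shell
function openim::util::check-file-in-alphabetical-order() {
  local failure_file="$1"
  # sort -c only checks ordering; it does not modify the file.
  if ! LC_ALL=C sort -c "${failure_file}" 2>/dev/null; then
    {
      echo "${failure_file} is not in alphabetical order. To fix it, run:"
      echo "  LC_ALL=C sort -o ${failure_file} ${failure_file}"
    } >&2
    return 1
  fi
}
```

Pinning `LC_ALL=C` makes the expected order byte-wise and independent of the user's locale.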
+
+
+
+## openim::util::require-jq
+
+Verifies the installation of `jq`, a popular command-line JSON parser. If it's not present, a prompt to install it is displayed.
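Such a guard is only a few lines (sketch; the exact message may differ from the repository's):

```shell
function openim::util::require-jq() {
  # Fail early with a clear message rather than deep inside a pipeline.
  if ! command -v jq &>/dev/null; then
    echo "jq not found. Please install jq before running this script." >&2
    return 1
  fi
}
```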
+
+
+
+## openim::util::md5
+
+A cross-platform function that computes the MD5 hash of its input. This function takes into account the differences in the `md5` command between macOS and Linux.
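A sketch of the cross-platform dispatch: macOS ships `md5`, while Linux ships `md5sum`, and their output formats differ:

```shell
function openim::util::md5() {
  if command -v md5 &>/dev/null; then
    md5 -q "$@"                       # macOS: -q prints only the digest
  else
    md5sum "$@" | awk '{ print $1 }'  # Linux: strip the trailing file name
  fi
}
```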
+
+
+
+## openim::util::read-array
+
+A function designed to read from stdin and populate an array, line by line. It's provided as an alternative to `mapfile -t` and is compatible with bash 3.
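A bash-3-compatible sketch of such a reader (this pattern is also used in Kubernetes' shell library; the exact implementation here may differ):

```shell
# Read stdin into the array named by $1, one element per line.
function openim::util::read-array() {
  local i=0
  unset -v "$1"
  # Assign each input line to the next slot of the named array.
  while IFS= read -r "$1[i++]"; do :; done
  eval "[[ \${$1[--i]} ]]" || unset "$1[i]"  # drop a trailing empty element
}
```

Call it with redirected input (e.g. `openim::util::read-array arr < file`); piping into it would run the reads in a subshell and lose the array.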
+
+
+
+## Color Definitions
+
+The script also defines a set of colors to enhance its console output. These include colors like red, yellow, green, blue, cyan, etc., which can be used for better user experience and clear logs.
+
+
+
+## openim::util::desc and related functions
+
+These functions aid in building interactive demonstrations or tutorials in the terminal. They use the `pv` utility to control the display rate of the output, emulating typing. There is also functionality to handle user prompts and to execute commands while capturing their output.
+
+
+
+## openim::util::onCtrlC
+
+Handles the `CTRL+C` command. It terminates background processes of the script when the user interrupts it using `CTRL+C`.
+
+
+
+## openim::util::list-to-string
+
+Transforms a list format (like `[10023, 2323, 3434]`) to a space-separated string (`10023 2323 3434`). Also removes unnecessary spaces and characters.
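The transformation amounts to deleting the bracket and comma characters (sketch; the repository's implementation may differ):

```shell
function openim::util::list-to-string() {
  # "[10023, 2323, 3434]" -> "10023 2323 3434"
  echo "$@" | tr -d '[],'
}
```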
+
+
+
+## openim::util::remove-space
+
+Removes spaces from a given string.
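A one-line sketch using bash parameter expansion:

```shell
function openim::util::remove-space() {
  local str="$1"
  echo "${str// /}"  # ${var//pattern/} deletes every occurrence of pattern
}
```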
+
+
+
+## openim::util::gencpu
+
+Fetches the number of CPUs using the `lscpu` command.
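A sketch of such a function; the `getconf` fallback is an assumption for systems without `lscpu`:

```shell
function openim::util::gencpu() {
  if command -v lscpu &>/dev/null; then
    # Pull the value from the "CPU(s):" line and strip padding spaces.
    lscpu | awk -F: '/^CPU\(s\):/ { gsub(/ /, "", $2); print $2 }'
  else
    getconf _NPROCESSORS_ONLN  # portable fallback when lscpu is absent
  fi
}
```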
+
+
+
+## openim::util::gen-os-arch
+
+Identifies the operating system and architecture of the system running the script. This is useful to determine directories or binaries specific to that OS and architecture.
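A sketch of the detection using `uname`; the `os/arch` output format is illustrative, not necessarily the repository's layout:

```shell
function openim::util::gen-os-arch() {
  local os arch
  os=$(uname -s | tr '[:upper:]' '[:lower:]')  # e.g. linux, darwin
  arch=$(uname -m)
  # Normalize machine names to the Go-style architecture labels.
  case "${arch}" in
    x86_64) arch="amd64" ;;
    aarch64 | arm64) arch="arm64" ;;
  esac
  echo "${os}/${arch}"
}
```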
+
+
+
+## openim::util::download-file
+
+This function can be used to download a file from a URL. If `curl` is available, it uses `curl`. If not, it falls back to `wget`.
+
+```bash
+function openim::util::download-file() {
+ local url="$1"
+ local dest="$2"
+
+ if command -v curl &>/dev/null; then
+ curl -L "${url}" -o "${dest}"
+ elif command -v wget &>/dev/null; then
+ wget "${url}" -O "${dest}"
+ else
+ openim::log::error "Neither curl nor wget available. Cannot download file."
+ return 1
+ fi
+}
+```
+
+
+
+## openim::util::get-public-ip
+
+Fetches the public IP address of the machine.
+
+```bash
+function openim::util::get-public-ip() {
+ if command -v curl &>/dev/null; then
+ curl -s https://ipinfo.io/ip
+ elif command -v wget &>/dev/null; then
+ wget -qO- https://ipinfo.io/ip
+ else
+ openim::log::error "Neither curl nor wget available. Cannot fetch public IP."
+ return 1
+ fi
+}
+```
+
+
+
+## openim::util::extract-tarball
+
+This function extracts a tarball to a specified directory.
+
+```bash
+function openim::util::extract-tarball() {
+ local tarball="$1"
+ local dest="$2"
+
+ mkdir -p "${dest}"
+ tar -xzf "${tarball}" -C "${dest}"
+}
+```
+
+
+
+## openim::util::check-port-open
+
+Checks if a given port is open on the local machine.
+
+```bash
+function openim::util::check-port-open() {
+  local port="$1"
+  if command -v nc &>/dev/null; then
+    nc -z 127.0.0.1 "${port}"
+  elif command -v telnet &>/dev/null; then
+    telnet 127.0.0.1 "${port}" 2>&1 | grep -q "Connected"
+  else
+    # Fall back to bash's built-in /dev/tcp pseudo-device.
+    (echo -n > "/dev/tcp/127.0.0.1/${port}") 2>/dev/null
+  fi
+}
+```
+
+
+
+## openim::util::file-lines-count
+
+Counts the number of lines in a file.
+
+```bash
+function openim::util::file-lines-count() {
+ local file="$1"
+ if [[ -f "${file}" ]]; then
+ wc -l < "${file}"
+ else
+ openim::log::error "File does not exist: ${file}"
+ return 1
+ fi
+}
+```
\ No newline at end of file
diff --git a/docs/contrib/version.md b/docs/contrib/version.md
new file mode 100644
index 0000000..cbcddad
--- /dev/null
+++ b/docs/contrib/version.md
@@ -0,0 +1,238 @@
+# OpenIM Branch Management and Versioning: A Blueprint for High-Grade Software Development
+
+[📚 **OpenIM TOC**](#openim-branch-management-and-versioning-a-blueprint-for-high-grade-software-development)
+- [OpenIM Branch Management and Versioning: A Blueprint for High-Grade Software Development](#openim-branch-management-and-versioning-a-blueprint-for-high-grade-software-development)
+ - [Unfolding the Mechanism of OpenIM Version Maintenance](#unfolding-the-mechanism-of-openim-version-maintenance)
+ - [Main Branch: The Heart of OpenIM Development](#main-branch-the-heart-of-openim-development)
+ - [Release Branch: The Beacon of Stability](#release-branch-the-beacon-of-stability)
+ - [Tag Management: The Cornerstone of Version Control](#tag-management-the-cornerstone-of-version-control)
+ - [Release Management: A Guided Tour](#release-management-a-guided-tour)
+ - [Milestones, Branching, and Addressing Major Bugs](#milestones-branching-and-addressing-major-bugs)
+ - [Version Skew Policy](#version-skew-policy)
+ - [Supported version skew](#supported-version-skew)
+ - [OpenIM Versioning, Branching, and Tag Strategy](#openim-versioning-branching-and-tag-strategy)
+ - [Supported Version Skew](#supported-version-skew-1)
+ - [openim-api](#openim-api)
+ - [openim-rpc-\* Components](#openim-rpc--components)
+ - [Other OpenIM Services](#other-openim-services)
+ - [Supported Component Upgrade Order](#supported-component-upgrade-order)
+ - [openim-api](#openim-api-1)
+ - [openim-rpc-\* Components](#openim-rpc--components-1)
+ - [Other OpenIM Services](#other-openim-services-1)
+ - [Conclusion](#conclusion)
+ - [Applying Principles: A Git Workflow Example](#applying-principles-a-git-workflow-example)
+ - [Release Process](#release-process)
+ - [Docker Images Version Management](#docker-images-version-management)
+ - [More](#more)
+
+
+At OpenIM, we acknowledge the profound impact of implementing a robust and efficient version management system, hence we abide by the established standards of [Semantic Versioning 2.0.0](https://semver.org/lang/zh-CN/).
+
+Our software blueprint orchestrates a tripartite version management system that integrates the `main` branch, the `release` branch, and `tag` management. These constituents operate in synchrony to preserve the reliability and traceability of our software across various stages of development.
+
+## Unfolding the Mechanism of OpenIM Version Maintenance
+
+Our version maintenance protocol revolves around two primary branches, namely: `main` and `release`. We resort to Semantic Versioning 2.0.0 for marking distinctive versions of our software, representing substantial milestones in its evolution.
+
+In the OpenIM repository, version identification strictly complies with the `MAJOR.MINOR.PATCH` protocol. Herein:
+
+- The `MAJOR` version indicates a shift arising from incompatible changes to the API.
+- The `MINOR` version suggests the addition of features in a backward-compatible manner.
+- The `PATCH` version flags backward-compatible bug fixes.
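As a concrete illustration, a version tag under this protocol can be split in a few lines of shell (`parse_semver` is a hypothetical helper, not part of the OpenIM tooling):

```shell
# Hypothetical helper: split MAJOR.MINOR.PATCH out of a tag such as v3.2.1-rc.1.
function parse_semver() {
  local ver="${1#v}"   # tolerate a leading "v", as in v3.2.1
  ver="${ver%%-*}"     # drop any pre-release modifier (-alpha.0, -rc.1, ...)
  IFS=. read -r MAJOR MINOR PATCH <<< "${ver}"
}
```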
+
+## Main Branch: The Heart of OpenIM Development
+
+The `main` branch is the operational heart of our development process. Housing the most recent and advanced features, this branch serves as the nerve center for all enhancements and updates. It encapsulates the freshest, though possibly unstable, facets of the software. Visit our `main` branch [here](https://github.com/openimsdk/open-im-server-deploy/tree/main).
+
+## Release Branch: The Beacon of Stability
+
+For every major release, we curate a corresponding `release` branch, e.g., `release-v3.1`. This branch symbolizes an embodiment of stability and ensures an updated version of the software, providing a dependable option for users favoring stability over nascent, yet possibly unstable, features. Visit the `release-v3.1` branch [here](https://github.com/openimsdk/open-im-server-deploy/tree/release-v3.1).
+
+## Tag Management: The Cornerstone of Version Control
+
+In OpenIM's version control system, the role of `tags` stands paramount. Owing to their immutable nature, tags can be effectively utilized to retrieve a specific version of the software. Explore our library of tags [here](https://github.com/openimsdk/open-im-server-deploy/tags).
+
+Our Docker image versions are intimately entwined with these tripartite components. For instance, a Docker image tag may correspond to `ghcr.io/openimsdk/openim-server:v3.1.0`, a release to `ghcr.io/openimsdk/openim-server:release-v3.0`, and the main branch to `ghcr.io/openimsdk/openim-server:main` or `ghcr.io/openimsdk/openim-server:latest`.
+
+To further clarify, the semantics of our version numbers are as follows:
+
+- **Revision version number**: This represents bug fixes or code optimizations. Typically, it entails no new feature additions and ensures backward compatibility.
+- **Build version number**: Auto-generated by the system; each commit automatically increments it by 1.
+- **Version modifiers**: These hint at the software's development stage and stability. Some commonly used modifiers are `alpha`, `beta`, `rc`, `ga`, `r/release/or nothing`, and `lts`.
+ - `alpha`: An internal testing version with numerous bugs, typically used for communication among developers.
+ - `beta`: A test version with numerous bugs, generally used for testing by eager community members, who provide feedback to the developers.
+ - `rc`: Release candidate, which is to be released as the official version. It's the last test version before the official version.
+ - `ga`: General Availability, the first stable release.
+ - `r/release/or nothing`: The final release version, intended for general users.
+  - `lts`: Long Term Support; the maintainers announce how long the version will be supported and fix all bugs discovered in it during that period.
+
+Whenever a project undergoes a partial functional addition, the minor version number increments by 1, resetting the revision version number to 0. In contrast, any major project overhaul results in an increment by 1 in the major version number. The build number, typically auto-generated during the compilation process, only requires format definition, thereby eliminating manual control.
+
+## Release Management: A Guided Tour
+
+Our GitHub repository at https://github.com/openimsdk/open-im-server-deploy/releases associates a release with each tag, with a distinction between Pre-release and Latest, determined by the branch source. Every significant feature launch prompts the issue of a `release` branch, such as `release-v3.2`, as a beacon of stability and Latest release.
+
+Pre-releases correspond to releases from the `main` branch, denoting tags with Version modifiers such as `v3.2.1-beta.0`, `v3.2.1-rc.1`, etc. If you are seeking the most recent, albeit possibly unstable, release with new features, these tags, originating from the latest `main` branch code, are your go-to.
+
+Conversely, if stability is your primary concern, you should opt for the release tagged Latest, denoted by tags without Version modifiers, such as `v3.2.1`, `v3.2.2` etc. These tags are linked to the latest stable maintenance branch, like `release-v3.2`.
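The distinction is mechanical: a pre-release tag carries a `-` modifier suffix, a stable tag does not (`is_prerelease_tag` is an illustrative helper, not part of the repository):

```shell
function is_prerelease_tag() {
  [[ "$1" == *-* ]]  # v3.2.1-rc.1 -> pre-release; v3.2.1 -> stable
}
```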
+
+## Milestones, Branching, and Addressing Major Bugs
+
+**About:**
+
++ [OpenIM Milestones](https://github.com/openimsdk/open-im-server-deploy/milestones)
++ [OpenIM Tags](https://github.com/openimsdk/open-im-server-deploy/tags)
++ [OpenIM Branches](https://github.com/openimsdk/open-im-server-deploy/branches)
+
+We create a new branch, such as `release-v3.1`, for each significant milestone (e.g., v3.1.0), housing all relevant code for that release. All enhancements and bug fixes targeting the subsequent version (e.g., v3.2.0) are integrated into this branch.
+
+`PATCH` versions (represented by Z in `X.Y.Z`) are primarily propelled by bug fixes, and their release may be either priority-driven or scheduled. In contrast, `MINOR` versions (represented by Y in `X.Y.Z`) are contingent upon the project's roadmap, milestone completion, or a pre-established timeline, always maintaining backward-compatible APIs.
+
+When dealing with major bugs, we selectively merge the fix into the affected version (e.g., v3.1 or the `release-v3.1` branch), as well as the `main` branch. This dual-pronged strategy ensures that users on older versions receive crucial bug fixes, while also keeping the `main` branch updated.
+
+We reinforce our approach to branch management and versioning with stringent testing protocols. Automated tests and code review sessions form vital components of maintaining a robust and reliable codebase.
+
+## Version Skew Policy
+
+This document describes the maximum version skew supported between various openim components. Specific cluster deployment tools may place additional restrictions on version skew.
+
+### Supported version skew
+
+In highly-available (HA) clusters, the newest and oldest `openim-api` instances must be within one minor version.
+
+### OpenIM Versioning, Branching, and Tag Strategy
+
+Similar to Kubernetes, OpenIM has a strict versioning, branching, and tag strategy to ensure compatibility among its various services and components. This document outlines the policies, especially focusing on the version skew supported between OpenIM's components. Given that the current version is v3.3, the policy references will be centered around this version.
+
+#### Supported Version Skew
+
+##### openim-api
+
+In highly-available (HA) clusters, the newest and oldest `openim-api` instances must be within one minor version.
+
+Example:
+
++ Newest `openim-api` is at v3.3
++ Other `openim-api` instances are supported at v3.3 and v3.2
+
+##### openim-rpc-* Components
+
+All `openim-rpc-*` components (e.g., `openim-rpc-auth`, `openim-rpc-conversation`, etc.) should adhere to the following rules:
+
+1. They must not be newer than `openim-api`.
+2. They may be up to one minor version older than `openim-api`.
+
+Example:
+
++ `openim-api` is at v3.3
++ All `openim-rpc-*` components are supported at v3.3 and v3.2
+
+Note: If version skew exists between `openim-api` instances in an HA cluster, this narrows the allowed `openim-rpc-*` components versions.
+
+##### Other OpenIM Services
+
+Other OpenIM services such as `openim-cmdutils`, `openim-crontask`, `openim-msggateway`, etc. should adhere to the following rules:
+
+1. These services must not be newer than `openim-api`.
+2. They are expected to match the `openim-api` minor version but may be up to one minor version older (to allow live upgrades).
+
+Example:
+
++ `openim-api` is at v3.3
++ `openim-msggateway`, `openim-cmdutils`, and other services are supported at v3.3 and v3.2
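The skew rule above amounts to comparing minor versions; a hypothetical check (not part of the OpenIM tooling) could look like:

```shell
# Succeeds when a component's minor version is allowed alongside the
# given openim-api minor version (same, or exactly one older).
function skew_ok() {
  local api_minor="$1" component_minor="$2"
  [ "${component_minor}" -le "${api_minor}" ] &&
    [ $((api_minor - component_minor)) -le 1 ]
}
```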
+
+#### Supported Component Upgrade Order
+
+The version skew policy has implications on the order in which components should be upgraded. Below is the recommended order to transition an existing cluster from version v3.2 to v3.3:
+
+##### openim-api
+
+Pre-requisites:
+
+1. In a single-instance cluster, the existing `openim-api` instance is v3.2.
+2. In an HA cluster, all `openim-api` instances are at v3.2 or v3.3.
+3. All `openim-rpc-*` and other OpenIM services communicating with this server are at version v3.2.
+
+Upgrade Procedure:
+
+1. Upgrade `openim-api` to v3.3.
+
+##### openim-rpc-* Components
+
+Pre-requisites:
+
+1. The `openim-api` instances these components communicate with are at v3.3.
+
+Upgrade Procedure:
+
+1. Upgrade all `openim-rpc-*` components to v3.3.
+
+##### Other OpenIM Services
+
+Pre-requisites:
+
+1. The `openim-api` instances these services communicate with are at v3.3.
+
+Upgrade Procedure:
+
+1. Upgrade other OpenIM services such as `openim-msggateway`, `openim-cmdutils`, etc., to v3.3.
+
+#### Conclusion
+
+Just like Kubernetes, it's essential for OpenIM to have a strict versioning and upgrade policy to ensure seamless operation and compatibility among its various services. Adhering to the policies outlined above will help in achieving this goal.
+
+
+## Applying Principles: A Git Workflow Example
+
+The workflow to address a bug fix might follow these steps:
+
+```bash
+# Checkout the branch for the version that needs the bug fix
+git checkout release-v3.1
+
+# Create a new branch for the bug fix
+git checkout -b bug/bug-name
+
+# ... Make changes, commit your work ...
+
+# Push the branch to your remote repository
+git push origin bug/bug-name
+
+# After the pull request is merged into the release-v3.1 branch,
+# checkout and update your main branch
+git checkout main
+git pull origin main
+
+# Merge or rebase the changes from release-v3.1 into main
+git merge release-v3.1
+
+# Push the updates to the main branch
+git push origin main
+```
+
+## Release Process
+
+**Publishing v3.2.0: A Step-by-Step Guide**
+
+1. Create the tag `v3.2.0-alpha.0` from the `main` branch.
+2. Fix bugs on the `main` branch. Once the bugs are resolved, tag the `main` branch as `v3.2.0-rc.0`.
+3. After further testing, if `v3.2.0-rc.0` is deemed stable, create a branch named `release-v3.2` from the tag `v3.2.0-rc.0`.
+4. From the `release-v3.2` branch, create the tag `v3.2.0`. At this point, the official release of v3.2.0 is complete.
+
+After the release of v3.2.0, if urgent bugs are discovered, fix them on the `release-v3.2` branch, then submit two pull requests (PRs): one to `main` and one to `release-v3.2`. Finally, tag the `release-v3.2` branch as `v3.2.1`.
+
+Throughout this process, active communication within the team is pivotal to maintaining transparency and consensus on changes.
+
+## Docker Images Version Management
+
+For more details on managing Docker image versions, visit [OpenIM Docker Images Administration](https://github.com/openimsdk/open-im-server-deploy/blob/main/docs/contrib/images.md).
+
+## More
+
+More on multi-branch version management and the version management design of the Helm charts:
+
+About Helm's version management strategy for multiple apps and services:
+
++ [中文版本管理文档](https://github.com/openimsdk/helm-charts/blob/main/docs/contrib/version-zh.md)
++ [English version management documents](https://github.com/openimsdk/helm-charts/blob/main/docs/contrib/version.md)
diff --git a/docs/contributing/CONTRIBUTING-JP.md b/docs/contributing/CONTRIBUTING-JP.md
new file mode 100644
index 0000000..1798d4e
--- /dev/null
+++ b/docs/contributing/CONTRIBUTING-JP.md
@@ -0,0 +1,33 @@
+# How do I contribute code to OpenIM
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/contributing/CONTRIBUTING-PL.md b/docs/contributing/CONTRIBUTING-PL.md
new file mode 100644
index 0000000..1798d4e
--- /dev/null
+++ b/docs/contributing/CONTRIBUTING-PL.md
@@ -0,0 +1,33 @@
+# How do I contribute code to OpenIM
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/images/architecture-layers.png b/docs/images/architecture-layers.png
new file mode 100644
index 0000000..d9e6e4d
Binary files /dev/null and b/docs/images/architecture-layers.png differ
diff --git a/docs/images/architecture.jpg b/docs/images/architecture.jpg
new file mode 100644
index 0000000..138a724
Binary files /dev/null and b/docs/images/architecture.jpg differ
diff --git a/docs/images/oepnim-design.png b/docs/images/oepnim-design.png
new file mode 100644
index 0000000..74cb79a
Binary files /dev/null and b/docs/images/oepnim-design.png differ
diff --git a/docs/images/open-im-logo.png b/docs/images/open-im-logo.png
new file mode 100644
index 0000000..4fb6a67
Binary files /dev/null and b/docs/images/open-im-logo.png differ
diff --git a/docs/images/open-im-server.png b/docs/images/open-im-server.png
new file mode 100644
index 0000000..35df36f
Binary files /dev/null and b/docs/images/open-im-server.png differ
diff --git a/docs/images/wechat.jpg b/docs/images/wechat.jpg
new file mode 100644
index 0000000..85b812a
Binary files /dev/null and b/docs/images/wechat.jpg differ
diff --git a/docs/meeting-api.md b/docs/meeting-api.md
new file mode 100644
index 0000000..f35ea81
--- /dev/null
+++ b/docs/meeting-api.md
@@ -0,0 +1,415 @@
+# Meeting Management API Documentation
+
+## API List
+
+### 1. Create Meeting
+
+**Endpoint:** `POST /meeting/create_meeting`
+
+**Request parameters:**
+
+```json
+{
+  "meetingID": "string",                 // Meeting ID (optional; auto-generated if omitted)
+  "subject": "string",                   // Meeting subject (required)
+  "coverURL": "string",                  // Cover image URL (optional)
+  "scheduledTime": 1234567890000,        // Scheduled timestamp (milliseconds, required)
+  "description": "string",               // Meeting description (optional)
+  "duration": 60,                        // Meeting duration in minutes (optional)
+  "estimatedCount": 100,                 // Estimated attendee count (optional)
+  "enableMic": true,                     // Whether the mic is enabled (optional, default false)
+  "enableComment": true,                 // Whether comments are enabled (optional, default false)
+  "anchorUserIDs": ["user1", "user2"],   // Anchor user ID list (multiple, optional)
+  "ex": "string"                         // Extension field (optional)
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "meetingInfo": {
+      "meetingID": "string",                 // Meeting ID
+      "subject": "string",                   // Meeting subject
+      "coverURL": "string",                  // Cover image URL
+      "scheduledTime": 1234567890000,        // Scheduled timestamp (milliseconds)
+      "status": 1,                           // Meeting status: 1 = scheduled, 2 = in progress, 3 = ended, 4 = cancelled
+      "creatorUserID": "string",             // Creator user ID
+      "description": "string",               // Meeting description
+      "duration": 60,                        // Meeting duration in minutes
+      "estimatedCount": 100,                 // Estimated attendee count
+      "enableMic": true,                     // Whether the mic is enabled
+      "enableComment": true,                 // Whether comments are enabled
+      "anchorUserIDs": ["user1", "user2"],   // Anchor user ID list (multiple)
+      "anchorUsers": [                       // Anchor user info list
+        {
+          "userID": "user1",
+          "nickname": "Anchor 1",
+          "faceURL": "https://example.com/avatar1.jpg"
+        },
+        {
+          "userID": "user2",
+          "nickname": "Anchor 2",
+          "faceURL": "https://example.com/avatar2.jpg"
+        }
+      ],
+      "createTime": 1234567890000,           // Creation timestamp (milliseconds)
+      "updateTime": 1234567890000,           // Update timestamp (milliseconds)
+      "ex": "string",                        // Extension field
+      "groupID": "string"                    // Associated group chat ID
+    },
+    "groupID": "string"                      // ID of the group chat that was created
+  }
+}
+```
+
+**Notes:**
+- Creating a meeting automatically creates a group chat named "群聊-[meeting subject]"
+- The meeting cover is used as the group chat avatar
+- If an anchor list is provided, the first anchor becomes the group owner and the other anchors become admins
+- If no anchor list is provided, the creator becomes the group owner
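A request sketch with curl; the base URL and token are placeholders for your deployment (the curl line is commented out so the sketch has no side effects):

```shell
# Hypothetical invocation; adjust BASE_URL and TOKEN for your deployment.
BASE_URL="http://localhost:10002"
TOKEN="your_token_here"

BODY='{"subject":"Weekly sync","scheduledTime":1234567890000,"duration":60,"enableMic":true,"enableComment":true}'

# curl -s -X POST "${BASE_URL}/meeting/create_meeting" \
#   -H "Content-Type: application/json" -H "token: ${TOKEN}" \
#   -d "${BODY}"
echo "${BODY}"
```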
+
+---
+
+### 2. Update Meeting
+
+**Endpoint:** `POST /meeting/update_meeting`
+
+**Request parameters:**
+
+```json
+{
+  "meetingID": "string",                 // Meeting ID (required)
+  "subject": "string",                   // Meeting subject (optional)
+  "coverURL": "string",                  // Cover image URL (optional)
+  "scheduledTime": 1234567890000,        // Scheduled timestamp (milliseconds, optional)
+  "status": 2,                           // Meeting status: 1 = scheduled, 2 = in progress, 3 = ended, 4 = cancelled (optional)
+  "description": "string",               // Meeting description (optional)
+  "duration": 60,                        // Meeting duration in minutes (optional)
+  "estimatedCount": 100,                 // Estimated attendee count (optional)
+  "enableMic": true,                     // Whether the mic is enabled (optional; a pointer distinguishes "unset" from false)
+  "enableComment": true,                 // Whether comments are enabled (optional; a pointer distinguishes "unset" from false)
+  "anchorUserIDs": ["user1", "user2"],   // Anchor user ID list (multiple, optional)
+  "ex": "string"                         // Extension field (optional)
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "meetingInfo": {
+      "meetingID": "string",                 // Meeting ID
+      "subject": "string",                   // Meeting subject
+      "coverURL": "string",                  // Cover image URL
+      "scheduledTime": 1234567890000,        // Scheduled timestamp (milliseconds)
+      "status": 2,                           // Meeting status: 1 = scheduled, 2 = in progress, 3 = ended, 4 = cancelled
+      "creatorUserID": "string",             // Creator user ID
+      "description": "string",               // Meeting description
+      "duration": 60,                        // Meeting duration in minutes
+      "estimatedCount": 100,                 // Estimated attendee count
+      "enableMic": true,                     // Whether the mic is enabled
+      "enableComment": true,                 // Whether comments are enabled
+      "anchorUserIDs": ["user1", "user2"],   // Anchor user ID list (multiple)
+      "anchorUsers": [                       // Anchor user info list
+        {
+          "userID": "user1",
+          "nickname": "Anchor 1",
+          "faceURL": "https://example.com/avatar1.jpg"
+        },
+        {
+          "userID": "user2",
+          "nickname": "Anchor 2",
+          "faceURL": "https://example.com/avatar2.jpg"
+        }
+      ],
+      "createTime": 1234567890000,           // Creation timestamp (milliseconds)
+      "updateTime": 1234567890000,           // Update timestamp (milliseconds)
+      "ex": "string",                        // Extension field
+      "groupID": "string"                    // Associated group chat ID
+    }
+  }
+}
+```
+
+**Notes:**
+- Any field can be updated individually; send only the fields that need to change
+- Updating the subject also updates the group chat name
+- Updating the cover also updates the group chat avatar
+- When the anchor list is updated, the first anchor becomes the group owner and the other anchors become admins
+- Only the creator can update a meeting
+
+---
+
+### 3. Get Meeting List
+
+**Endpoint:** `POST /meeting/get_meetings`
+
+**Request parameters:**
+
+```json
+{
+  "creatorUserID": "string",       // Creator user ID (optional)
+  "status": 1,                     // Meeting status (optional): 1 = scheduled, 2 = in progress, 3 = ended, 4 = cancelled
+  "keyword": "string",             // Search keyword (optional; matches subject and description)
+  "startTime": 1234567890000,      // Start timestamp (milliseconds, optional)
+  "endTime": 1234567890000,        // End timestamp (milliseconds, optional)
+  "pagination": {
+    "pageNumber": 1,               // Page number, starting at 1 (optional, default 1)
+    "showNumber": 20               // Page size (optional, default 20)
+  }
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "total": 100,                  // Total count
+    "meetings": [
+      {
+        "meetingID": "string",                 // Meeting ID
+        "subject": "string",                   // Meeting subject
+        "coverURL": "string",                  // Cover image URL
+        "scheduledTime": 1234567890000,        // Scheduled timestamp (milliseconds)
+        "status": 1,                           // Meeting status: 1 = scheduled, 2 = in progress, 3 = ended, 4 = cancelled
+        "creatorUserID": "string",             // Creator user ID
+        "description": "string",               // Meeting description
+        "duration": 60,                        // Meeting duration in minutes
+        "estimatedCount": 100,                 // Estimated attendee count
+        "enableMic": true,                     // Whether the mic is enabled
+        "enableComment": true,                 // Whether comments are enabled
+        "anchorUserIDs": ["user1", "user2"],   // Anchor user ID list (multiple)
+        "anchorUsers": [                       // Anchor user info list
+          {
+            "userID": "user1",
+            "nickname": "Anchor 1",
+            "faceURL": "https://example.com/avatar1.jpg"
+          },
+          {
+            "userID": "user2",
+            "nickname": "Anchor 2",
+            "faceURL": "https://example.com/avatar2.jpg"
+          }
+        ],
+        "createTime": 1234567890000,           // Creation timestamp (milliseconds)
+        "updateTime": 1234567890000,           // Update timestamp (milliseconds)
+        "ex": "string",                        // Extension field
+        "groupID": "string"                    // Associated group chat ID
+      }
+    ]
+  }
+}
+```
+
+**Notes:**
+- Multiple query conditions are supported and can be combined
+- Query precedence: creatorUserID > status > keyword > startTime/endTime > all
+- Pagination is supported
+- Returned meetings include the anchors' full user info (user ID, nickname, avatar, etc.)
+
+---
+
+### 4. Delete Meeting
+
+**Endpoint:** `POST /meeting/delete_meeting`
+
+**Request parameters:**
+
+```json
+{
+  "meetingID": "string"   // Meeting ID (required)
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {}
+}
+```
+
+**Notes:**
+- Only the creator can delete a meeting
+- Deleting a meeting does not delete the associated group chat
+
+---
+
+## Meeting Status Reference
+
+- `1` - Scheduled: the meeting has been created and is waiting to start
+- `2` - In progress: the meeting is underway
+- `3` - Ended: the meeting has finished (not visible to the client)
+- `4` - Cancelled: the meeting was cancelled (not visible to the client)
+
+**Note:** Client-facing endpoints can only view meetings with status 1 (scheduled) and 2 (in progress); meetings with status 3 (ended) or 4 (cancelled) are not visible to the client.
+
+---
+
+## Common Response Format
+
+All endpoints follow a unified response format:
+
+```json
+{
+  "errCode": 0,   // Error code; 0 means success
+  "errMsg": "",   // Error message
+  "errDlt": "",   // Error details
+  "data": {}      // Response payload
+}
+```
+
+---
+
+## Client-Facing Endpoints
+
+### 5. Get Meeting Info (Client)
+
+**Endpoint:** `POST /meeting/get_meeting`
+
+**Request parameters:**
+
+```json
+{
+  "meetingID": "string"   // Meeting ID (required)
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "meetingInfo": {
+      "meetingID": "string",            // Meeting ID
+      "subject": "string",              // Meeting subject
+      "coverURL": "string",             // Cover image URL
+      "scheduledTime": 1234567890000,   // Scheduled timestamp (milliseconds)
+      "status": 1,                      // Meeting status: 1 = scheduled, 2 = in progress
+      "description": "string",          // Meeting description
+      "duration": 60,                   // Meeting duration in minutes
+      "enableMic": true,                // Whether the mic is enabled
+      "enableComment": true,            // Whether comments are enabled
+      "anchorUsers": [                  // Anchor user info list
+        {
+          "userID": "user1",
+          "nickname": "Anchor 1",
+          "faceURL": "https://example.com/avatar1.jpg"
+        },
+        {
+          "userID": "user2",
+          "nickname": "Anchor 2",
+          "faceURL": "https://example.com/avatar2.jpg"
+        }
+      ],
+      "groupID": "string"               // Associated group chat ID
+    }
+  }
+}
+```
+
+**Notes:**
+- Client endpoints only show meetings with status 1 (scheduled) or 2 (in progress)
+- If the meeting status is 3 (ended) or 4 (cancelled), an error is returned
+- Client endpoints do not return administrative fields (creator ID, extension field, creation time, update time, etc.)
+- Returned anchor info includes the full user details (user ID, nickname, avatar, etc.)
+
+---
+
+### 6. Get Meeting List (Client)
+
+**Endpoint:** `POST /meeting/get_meetings_public`
+
+**Request parameters:**
+
+```json
+{
+  "status": 1,                     // Meeting status (optional): 1 = scheduled, 2 = in progress (only these two may be queried)
+  "keyword": "string",             // Search keyword (optional; matches subject and description)
+  "startTime": 1234567890000,      // Start timestamp (milliseconds, optional)
+  "endTime": 1234567890000,        // End timestamp (milliseconds, optional)
+  "pagination": {
+    "pageNumber": 1,               // Page number, starting at 1 (optional, default 1)
+    "showNumber": 20               // Page size (optional, default 20)
+  }
+}
+```
+
+**Response parameters:**
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "total": 100,                       // Total count (only meetings with status 1 and 2)
+    "meetings": [
+      {
+        "meetingID": "string",            // Meeting ID
+        "subject": "string",              // Meeting subject
+        "coverURL": "string",             // Cover image URL
+        "scheduledTime": 1234567890000,   // Scheduled timestamp (milliseconds)
+        "status": 1,                      // Meeting status: 1 = scheduled, 2 = in progress
+        "description": "string",          // Meeting description
+        "duration": 60,                   // Meeting duration in minutes
+        "enableMic": true,                // Whether the mic is enabled
+        "enableComment": true,            // Whether comments are enabled
+        "anchorUsers": [                  // Anchor user info list
+          {
+            "userID": "user1",
+            "nickname": "Anchor 1",
+            "faceURL": "https://example.com/avatar1.jpg"
+          },
+          {
+            "userID": "user2",
+            "nickname": "Anchor 2",
+            "faceURL": "https://example.com/avatar2.jpg"
+          }
+        ],
+        "groupID": "string"               // Associated group chat ID
+      }
+    ]
+  }
+}
+```
+
+**Notes:**
+- Client endpoints only show meetings with status 1 (scheduled) or 2 (in progress)
+- If the `status` parameter is specified, only 1 or 2 is accepted; any other value returns an error
+- If `status` is omitted, results are automatically filtered to status 1 and 2
+- Client endpoints do not support querying by creator
+- Client endpoints do not return administrative fields (creator ID, extension field, creation time, update time, etc.)
+- Returned anchor info includes the full user details (user ID, nickname, avatar, etc.)
+
+---
+
+## Notes
+
+1. All endpoints require a `token` request header for authentication
+2. When creating or updating a meeting, the scheduled time must not be earlier than the current time
+3. Creating a meeting automatically creates an associated group chat named "群聊-[meeting subject]"
+4. Updating the meeting subject or cover also updates the associated group chat's name or avatar
+5. Only the meeting creator can update or delete a meeting
+6. Client endpoints are read-only and do not include create, update, or delete operations
+7. Client endpoint responses omit administrative fields (creator ID, extension field, creation time, update time, etc.)
+8. Client endpoints only show meetings with status 1 (scheduled) and 2 (in progress); meetings with status 3 (ended) or 4 (cancelled) are not visible to the client
diff --git a/docs/meeting-client-api.md b/docs/meeting-client-api.md
new file mode 100644
index 0000000..9f81f05
--- /dev/null
+++ b/docs/meeting-client-api.md
@@ -0,0 +1,619 @@
+# OpenIM Meeting API Documentation (Client Side, for Front-End Integration)
+
+This document is intended for front-end developers integrating the client-facing OpenIM meeting APIs.
+
+## Table of Contents
+
+- [Basics](#basics)
+- [Meeting Statuses](#meeting-statuses)
+- [Client Endpoints](#client-endpoints)
+- [Business Rules](#business-rules)
+- [Error Codes](#error-codes)
+- [Notes](#notes)
+
+---
+
+## Basics
+
+### Request Format
+
+- **Method**: all endpoints use `POST`
+- **Content-Type**: `application/json`
+- **Headers**: a `token` header is required for authentication
+
+```http
+POST /meeting/get_meeting
+Content-Type: application/json
+token: your_token_here
+```
+
+### Response Format
+
+All endpoints return the following unified format:
+
+```json
+{
+  "errCode": 0,   // Error code; 0 means success
+  "errMsg": "",   // Error message
+  "errDlt": "",   // Error details
+  "data": {}      // Response payload; see each endpoint for its structure
+}
+```
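A client can treat `errCode == 0` as success; a crude shell-side check (illustrative only — a real client should use a proper JSON parser):

```shell
function is_success() {
  # Crude string match on the envelope; adequate for a quick smoke test only.
  echo "$1" | grep -q '"errCode": *0'
}
```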
+
+### Base URL
+
+Depends on the deployment environment, for example:
+- Development: `http://localhost:10002`
+- Production: `https://your-domain.com`
+
+---
+
+## Meeting Statuses
+
+| Value | Name | Description | Visible to Client |
+|-------|------|-------------|-------------------|
+| 1 | Scheduled | The meeting has been created and is waiting to start | ✅ Visible |
+| 2 | In progress | The meeting is underway | ✅ Visible |
+| 3 | Ended | The meeting has finished | ❌ Hidden |
+| 4 | Cancelled | The meeting was cancelled | ❌ Hidden |
+
+**Important**:
+- Client endpoints can only view meetings with status **1 (scheduled)** and **2 (in progress)**
+- Meetings with status **3 (ended)** or **4 (cancelled)** are not visible to the client
+
+---
+
+## Client Endpoints
+
+### 1. Get Meeting Info
+
+**Endpoint**: `POST /meeting/get_meeting`
+
+**Description**: Fetches a single meeting; only scheduled and in-progress meetings can be viewed
+
+**Request parameters**:
+```json
+{
+  "meetingID": "string"   // Meeting ID (required)
+}
+```
+
+**Response parameters**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "meetingInfo": {
+      "meetingID": "string",            // Meeting ID
+      "subject": "string",              // Meeting subject
+      "coverURL": "string",             // Cover image URL
+      "scheduledTime": 1234567890000,   // Scheduled timestamp (milliseconds)
+      "status": 1,                      // Meeting status: 1 = scheduled, 2 = in progress
+      "description": "string",          // Meeting description
+      "duration": 60,                   // Meeting duration in minutes
+      "enableMic": true,                // Whether the mic is enabled
+      "enableComment": true,            // Whether comments are enabled
+      "anchorUsers": [                  // Anchor user info list
+        {
+          "userID": "user1",
+          "nickname": "Anchor 1",
+          "faceURL": "https://example.com/avatar1.jpg"
+        },
+        {
+          "userID": "user2",
+          "nickname": "Anchor 2",
+          "faceURL": "https://example.com/avatar2.jpg"
+        }
+      ],
+      "groupID": "string",              // Associated group chat ID
+      "checkInCount": 50                // Check-in count
+    }
+  }
+}
+```
+
+**Business rules**:
+1. Client endpoints only show meetings with status **1 (scheduled)** or **2 (in progress)**
+2. If the meeting status is **3 (ended)** or **4 (cancelled)**, an error is returned
+3. Client endpoints do not return administrative fields:
+   - `creatorUserID` (creator ID)
+   - `ex` (extension field)
+   - `createTime` (creation time)
+   - `updateTime` (update time)
+   - `estimatedCount` (estimated attendee count)
+   - `anchorUserIDs` (anchor ID list; only `anchorUsers` is returned)
+4. Returned anchor info includes the full user details (user ID, nickname, avatar, etc.)
+5. The check-in count (`checkInCount`) is returned so the client can display check-in statistics
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameter or meeting unavailable (ended or cancelled)
+- `1004`: meeting not found
+- `500`: internal server error
+
+---
+
+### 2. Get Meeting List
+
+**Endpoint**: `POST /meeting/get_meetings_public`
+
+**Description**: Gets the meeting list; only scheduled and in-progress meetings can be viewed
+
+**Request**:
+```json
+{
+  "status": 1,                // meeting status (optional): 1 = scheduled, 2 = in progress (only these two may be queried)
+  "keyword": "string",        // search keyword (optional; matches subject and description)
+  "startTime": 1234567890000, // start timestamp in milliseconds (optional)
+  "endTime": 1234567890000,   // end timestamp in milliseconds (optional)
+  "pagination": {
+    "pageNumber": 1,          // page number, starting at 1 (optional, default 1)
+    "showNumber": 20          // page size (optional, default 20)
+  }
+}
+```
+
+**Parameter notes**:
+- `status`: if specified, only `1` or `2` is accepted; any other value returns an error
+- If `status` is omitted, results are automatically filtered to meetings in status 1 and 2
+- User-side endpoints do not support querying by creator (there is no `creatorUserID` parameter)
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "total": 100,  // total count (meetings in status 1 and 2 only)
+    "meetings": [
+      {
+        "meetingID": "string",
+        "subject": "string",
+        "coverURL": "string",
+        "scheduledTime": 1234567890000,
+        "status": 1,
+        "description": "string",
+        "duration": 60,
+        "enableMic": true,
+        "enableComment": true,
+        "anchorUsers": [
+          {
+            "userID": "user1",
+            "nickname": "Host 1",
+            "faceURL": "https://example.com/avatar1.jpg"
+          }
+        ],
+        "groupID": "string",  // ID of the associated group chat
+        "checkInCount": 50    // check-in count
+      }
+    ]
+  }
+}
+```
+
+**Business rules**:
+1. User-side endpoints only show meetings in status **1 (Scheduled)** and **2 (In progress)**
+2. If `status` is specified, only `1` or `2` is accepted; any other value returns an error
+3. If `status` is omitted, results are automatically filtered to meetings in status 1 and 2
+4. Querying by creator is not supported
+5. Management fields (creator ID, extension field, creation time, update time, estimated attendance, etc.) are not returned
+6. The returned host entries include full user info: user ID, nickname, avatar, etc.
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameters (e.g. status is not 1 or 2)
+- `500`: internal server error
+
+---
+
+## Check-in APIs
+
+### 1. Meeting Check-in
+
+**Endpoint**: `POST /meeting/check_in`
+
+**Description**: Checks the user in to a meeting; each user can check in to a given meeting only once
+
+**Request**:
+```json
+{
+  "meetingID": "string"  // meeting ID (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "checkInID": "string",        // check-in ID
+    "checkInTime": 1234567890000  // check-in timestamp (milliseconds)
+  }
+}
+```
+
+**Business rules**:
+1. Each user can check in to a given meeting only once
+2. Check-in is only allowed for meetings in status **1 (Scheduled)** and **2 (In progress)**
+3. A successful check-in automatically updates the meeting's check-in statistics (`checkInCount`)
+4. If the user has already checked in, an error is returned
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameters, meeting unavailable (ended or cancelled), or user already checked in
+- `1004`: meeting does not exist
+- `500`: internal server error
+
+---
+
+### 2. Check Whether the User Has Checked In
+
+**Endpoint**: `POST /meeting/check_user_check_in`
+
+**Description**: Checks whether the current user has checked in to the given meeting
+
+**Request**:
+```json
+{
+  "meetingID": "string"  // meeting ID (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "isCheckedIn": true,  // whether the user has checked in
+    "checkInInfo": {      // check-in details (present if checked in)
+      "checkInID": "string",
+      "meetingID": "string",
+      "userID": "string",
+      "checkInTime": 1234567890000,
+      "userInfo": {
+        "userID": "string",
+        "nickname": "string",
+        "faceURL": "string"
+      }
+    }
+  }
+}
+```
+
+**Business rules**:
+1. If the user has checked in, `isCheckedIn: true` is returned along with the check-in details
+2. If the user has not checked in, `isCheckedIn: false` is returned and `checkInInfo` is `null`
+3. When checked in, the user's basic info (user ID, nickname, avatar, etc.) is included
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameters
+- `500`: internal server error
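
A typical client flow is to query the check-in state first and only then call `/meeting/check_in`. A hedged TypeScript sketch (`fetchJson` is an assumed transport helper, not an official SDK function; the response shapes follow the docs above):

```typescript
// Assumed transport: POSTs a JSON body and returns the parsed envelope.
type FetchJson = (path: string, body: object) => Promise<{ errCode: number; data: any }>;

// Returns true only when a new check-in was recorded by this call.
async function checkInIfNeeded(meetingID: string, fetchJson: FetchJson): Promise<boolean> {
  const state = await fetchJson("/meeting/check_user_check_in", { meetingID });
  if (state.errCode !== 0 || state.data.isCheckedIn) {
    return false; // request failed, or the user already checked in
  }
  const res = await fetchJson("/meeting/check_in", { meetingID });
  return res.errCode === 0;
}
```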
+
+---
+
+### 3. Get Meeting Check-in List
+
+**Endpoint**: `POST /meeting/get_check_ins`
+
+**Description**: Gets all check-in records for the given meeting
+
+**Request**:
+```json
+{
+  "meetingID": "string",  // meeting ID (required)
+  "pagination": {
+    "pageNumber": 1,      // page number, starting at 1 (optional, default 1)
+    "showNumber": 20      // page size (optional, default 20)
+  }
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "total": 100,  // total count
+    "checkIns": [
+      {
+        "checkInID": "string",         // check-in ID
+        "meetingID": "string",         // meeting ID
+        "userID": "string",            // user ID
+        "checkInTime": 1234567890000,  // check-in timestamp (milliseconds)
+        "userInfo": {                  // user info
+          "userID": "string",
+          "nickname": "string",
+          "faceURL": "string"
+        }
+      }
+    ]
+  }
+}
+```
+
+**Business rules**:
+1. Supports pagination
+2. The list is sorted by check-in time in descending order (most recent first)
+3. Each record includes the user's full info (user ID, nickname, avatar, etc.)
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameters
+- `500`: internal server error
+
+---
+
+### 4. Get Meeting Check-in Statistics
+
+**Endpoint**: `POST /meeting/get_check_in_stats`
+
+**Description**: Gets the check-in count for the given meeting
+
+**Request**:
+```json
+{
+  "meetingID": "string"  // meeting ID (required)
+}
+```
+
+**Response**:
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "meetingID": "string",  // meeting ID
+    "checkInCount": 50      // check-in count
+  }
+}
+```
+
+**Business rules**:
+1. Returns the meeting's total check-in count
+2. The same statistic is also returned in the meeting info's `checkInCount` field
+
+**Error codes**:
+- `0`: success
+- `1001`: invalid parameters
+- `500`: internal server error
+
+---
+
+## Business Rules
+
+### 1. Meeting and Group Chat Association
+
+- Creating a meeting automatically creates a group chat
+- The group name is the literal string `"会议群-{meeting subject}"` ("Meeting Group-{meeting subject}")
+- The meeting cover image is used as the group avatar
+- The group's `ex` field carries a meeting marker:
+  ```json
+  {
+    "isMeetingGroup": true,
+    "meetingID": "meeting ID"
+  }
+  ```
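
Because `ex` is delivered as a string, clients listing group chats may want to detect meeting groups by parsing it. A tolerant TypeScript sketch (the function name is ours; the field names follow the documented payload):

```typescript
// Shape of the meeting marker stored in a group's `ex` field.
interface MeetingGroupEx {
  isMeetingGroup: boolean;
  meetingID: string;
}

// Returns the marker if `ex` identifies a meeting group, otherwise null.
function parseMeetingGroupEx(ex: string): MeetingGroupEx | null {
  try {
    const parsed = JSON.parse(ex);
    if (parsed && parsed.isMeetingGroup === true && typeof parsed.meetingID === "string") {
      return parsed as MeetingGroupEx;
    }
  } catch {
    // `ex` is free-form; non-JSON content simply means "not a meeting group"
  }
  return null;
}
```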
+
+### 2. Hosts and Group Roles
+
+- **When a host list is provided**:
+  - First host → group owner
+  - Other hosts → administrators
+  - Creator (if not in the host list) → ordinary member
+- **When no host list is provided**:
+  - Creator → group owner
+
+### 3. Comment Toggle and Group Muting
+
+- **Comments enabled** (`enableComment: true`):
+  - Group-wide mute is lifted automatically
+  - Group members can send messages
+- **Comments disabled** (`enableComment: false`):
+  - The group is muted automatically
+  - Group members cannot send messages
+
+### 4. Access Control
+
+- **Viewing meetings**: user-side endpoints can only view scheduled and in-progress meetings
+
+### 5. Time Validation
+
+- When creating a meeting, the scheduled time must not be earlier than the current time
+- When updating a meeting's scheduled time, the same rule applies
+
+---
+
+## Error Codes
+
+### Common Error Codes
+
+| Code | Description |
+|------|-------------|
+| 0 | Success |
+| 500 | Internal server error |
+| 1001 | Invalid parameters |
+| 1002 | Insufficient permissions |
+| 1004 | Record not found |
+
+### Meeting-Related Error Scenarios
+
+| Code | Scenario |
+|------|----------|
+| 1001 | Queried status is not 1 or 2 |
+| 1004 | Meeting does not exist |
+| 1004 | Querying an ended or cancelled meeting |
+
+---
+
+## Notes
+
+### 1. Authentication
+
+- All endpoints require a `token` request header for authentication
+- Tokens are obtained via the `/auth/get_user_token` endpoint
+
+### 2. Request Format
+
+- All endpoints use the `POST` method
+- Content-Type: `application/json`
+- The request body is JSON
+
+### 3. Response Format
+
+- All endpoints return the unified envelope `{errCode, errMsg, errDlt, data}`
+- `errCode` 0 means success; any non-zero value means failure
+- On failure, check `errMsg` and `errDlt` for details
+
+### 4. Timestamps
+
+- All timestamps are millisecond Unix timestamps
+- For example, `1234567890000` is 2009-02-13 23:31:30 (UTC)
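
Because the values are millisecond timestamps, they can be passed straight to JavaScript's `Date` constructor, for example:

```typescript
// Millisecond Unix timestamp → ISO 8601 string (UTC).
function formatTimestamp(ms: number): string {
  return new Date(ms).toISOString();
}
// formatTimestamp(1234567890000) → "2009-02-13T23:31:30.000Z"
```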
+
+### 5. Pagination Parameters
+
+- Pagination uses a unified format:
+  ```json
+  {
+    "pagination": {
+      "pageNumber": 1,  // page number, starting at 1
+      "showNumber": 20  // page size
+    }
+  }
+  ```
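
A small helper can build the documented pagination object and derive page counts from a list endpoint's `total`. A TypeScript sketch (helper names are illustrative; the defaults of page 1 and 20 per page follow the docs):

```typescript
// The unified pagination object used by all list endpoints.
interface Pagination {
  pageNumber: number;
  showNumber: number;
}

// Builds the request fragment, e.g. { pagination: { pageNumber: 2, showNumber: 20 } }.
function pageParam(pageNumber = 1, showNumber = 20): { pagination: Pagination } {
  return { pagination: { pageNumber, showNumber } };
}

// Number of pages needed for a `total` count returned by a list endpoint.
function totalPages(total: number, showNumber = 20): number {
  return Math.ceil(total / showNumber);
}
```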
+
+### 6. Characteristics of User-Side Endpoints
+
+- Only meetings in status **1 (Scheduled)** and **2 (In progress)** are visible
+- Management fields are not returned:
+  - `creatorUserID` (creator ID)
+  - `ex` (extension field)
+  - `createTime` (creation time)
+  - `updateTime` (update time)
+  - `estimatedCount` (estimated attendance)
+  - `anchorUserIDs` (host ID list; only `anchorUsers` is returned)
+- Querying by creator is not supported (there is no `creatorUserID` parameter)
+
+### 7. Host Info
+
+- User-side endpoints return only `anchorUsers` (detailed host info)
+- Host info includes `userID`, `nickname`, `faceURL`, etc.
+
+### 8. Group Chat Association
+
+- Every meeting is associated with a group chat
+- The associated group's ID is available via the `groupID` field
+- The group chat APIs (`/group/*`) can be used to operate on the associated group
+
+### 9. Meeting Status Transitions
+
+```
+1 (Scheduled) → 2 (In progress) → 3 (Ended)
+       ↓
+4 (Cancelled)
+```
+
+- Users can only view meetings in status **1 (Scheduled)** and **2 (In progress)**
+- Meetings in status **3 (Ended)** or **4 (Cancelled)** are invisible to user-side endpoints
+
+---
+
+## API Summary
+
+| Path | Method | Description | Auth |
+|------|--------|-------------|------|
+| `/meeting/get_meeting` | POST | Get meeting info | Logged-in user |
+| `/meeting/get_meetings_public` | POST | Get meeting list | Logged-in user |
+| `/meeting/check_in` | POST | Meeting check-in | Logged-in user |
+| `/meeting/check_user_check_in` | POST | Check whether the user has checked in | Logged-in user |
+| `/meeting/get_check_ins` | POST | Get meeting check-in list | Logged-in user |
+| `/meeting/get_check_in_stats` | POST | Get meeting check-in statistics | Logged-in user |
+
+---
+
+## Request Examples
+
+### Get meeting info
+
+```bash
+curl -X POST http://localhost:10002/meeting/get_meeting \
+  -H "Content-Type: application/json" \
+  -H "token: your_token" \
+  -d '{
+    "meetingID": "meeting123"
+  }'
+```
+
+### Get meeting list
+
+```bash
+curl -X POST http://localhost:10002/meeting/get_meetings_public \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "status": 1,
+ "keyword": "产品",
+ "pagination": {
+ "pageNumber": 1,
+ "showNumber": 20
+ }
+ }'
+```
+
+### Meeting check-in
+
+```bash
+curl -X POST http://localhost:10002/meeting/check_in \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "meetingID": "meeting123"
+ }'
+```
+
+### Check whether the user has checked in
+
+```bash
+curl -X POST http://localhost:10002/meeting/check_user_check_in \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "meetingID": "meeting123"
+ }'
+```
+
+### Get meeting check-in list
+
+```bash
+curl -X POST http://localhost:10002/meeting/get_check_ins \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "meetingID": "meeting123",
+ "pagination": {
+ "pageNumber": 1,
+ "showNumber": 20
+ }
+ }'
+```
+
+### Get meeting check-in statistics
+
+```bash
+curl -X POST http://localhost:10002/meeting/get_check_in_stats \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "meetingID": "meeting123"
+ }'
+```
+
+---
+
+**Last updated**: 2025-01-23
+
diff --git a/docs/readme/README_cs.md b/docs/readme/README_cs.md
new file mode 100644
index 0000000..145dc54
--- /dev/null
+++ b/docs/readme/README_cs.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ O OpenIM
+
+OpenIM je platforma služeb speciálně navržená pro integraci chatu, audio-video hovorů, upozornění a chatbotů AI do aplikací. Poskytuje řadu výkonných rozhraní API a webhooků, které vývojářům umožňují snadno začlenit tyto interaktivní funkce do svých aplikací. OpenIM není samostatná chatovací aplikace, ale spíše slouží jako platforma pro podporu jiných aplikací při dosahování bohatých komunikačních funkcí. Následující diagram ilustruje interakci mezi AppServer, AppClient, OpenIMServer a OpenIMSDK pro podrobné vysvětlení.
+
+
+
+## 🚀 O OpenIMSDK
+
+**OpenIMSDK** je IM SDK navržený pro **OpenIMServer**, vytvořený speciálně pro vkládání do klientských aplikací. Jeho hlavní vlastnosti a moduly jsou následující:
+
+- 🌟 Hlavní vlastnosti:
+
+ - 📦 Místní úložiště
+ - 🔔 Zpětná volání posluchačů
+ - 🛡️ API obalování
+ - 🌐 Správa připojení
+
+- 📚 Hlavní moduly:
+
+ 1. 🚀 Inicializace a přihlášení
+ 2. 👤 Správa uživatelů
+ 3. 👫 Správa přátel
+ 4. 🤖 Skupinové funkce
+ 5. 💬 Zpracování konverzace
+
+Je postaven pomocí Golang a podporuje nasazení napříč platformami, což zajišťuje konzistentní přístup na všech platformách.
+
+👉 **[Prozkoumat GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 O OpenIMServeru
+
+- **OpenIMServer** má následující vlastnosti:
+ - 🌐 Architektura mikroslužeb: Podporuje režim clusteru, včetně brány a více služeb RPC.
+ - 🚀 Různé metody nasazení: Podporuje nasazení prostřednictvím zdrojového kódu, Kubernetes nebo Docker.
+ - Podpora masivní uživatelské základny: Super velké skupiny se stovkami tisíc uživatelů, desítkami milionů uživatelů a miliardami zpráv.
+
+### Vylepšené obchodní funkce:
+
+- **REST API**: OpenIMServer nabízí REST API pro podnikové systémy, jejichž cílem je poskytnout podnikům více funkcí, jako je vytváření skupin a odesílání push zpráv přes backendová rozhraní.
+- **Webhooks**: OpenIMServer poskytuje možnosti zpětného volání pro rozšíření více obchodních formulářů. Zpětné volání znamená, že OpenIMServer odešle požadavek na obchodní server před nebo po určité události, jako jsou zpětná volání před nebo po odeslání zprávy.
+
+👉 **[Další informace](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Celková architektura
+
+Ponořte se do srdce funkčnosti Open-IM-Server s naším diagramem architektury.
+
+
+
+## :rocket: Rychlý start
+
+Podporujeme mnoho platforem. Zde jsou adresy pro rychlou práci na webové stránce:
+
+👉 **[Online webová ukázka OpenIM](https://web-enterprise.rentsoft.cn/)**
+
+🤲 Pro usnadnění uživatelské zkušenosti nabízíme různá řešení nasazení. Způsob nasazení si můžete vybrat ze seznamu níže:
+
+- **[Průvodce nasazením zdrojového kódu](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker Deployment Guide](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Průvodce nasazením Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Průvodce nasazením pro vývojáře Mac](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Chcete-li začít vyvíjet OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+OpenIM Naším cílem je vybudovat špičkovou open source komunitu. Máme soubor standardů v [komunitním repozitáři](https://github.com/OpenIMSDK/community).
+
+Pokud byste chtěli přispět do tohoto úložiště Open-IM-Server, přečtěte si naši [dokumentaci pro přispěvatele](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Než začnete, ujistěte se, že jsou vaše změny vyžadovány. Nejlepší pro to je vytvořit [nová diskuze](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) NEBO [Slack Communication](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), nebo pokud narazíte na problém, [nahlásit jej](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) jako první.
+
+- [OpenIM API Reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [Protokolování OpenIM Bash](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [Akce OpenIM CI/CD](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [Konvence kódu OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [Pokyny k zavázání OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [Průvodce vývojem OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [Struktura adresáře OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [Nastavení prostředí OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [Referenční kód chybového kódu OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [Pracovní postup OpenIM Git](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git Cherry Pick Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [Pracovní postup OpenIM GitHub](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [standardy kódu OpenIM Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [Pokyny pro obrázky OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [Počáteční konfigurace OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [Průvodce instalací OpenIM Docker](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [Instalace systému OpenIM Linux](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux Development Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [Průvodce místními akcemi OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [Konvence protokolování OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [Offline nasazení OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [Nástroje protokolu OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [Příručka testování OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Utility Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM Script Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM Versioning](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Spravovat backend a monitorovat nasazení](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Průvodce nasazením pro vývojáře Mac pro OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Společenství
+
+- 📚 [Komunita OpenIM](https://github.com/OpenIMSDK/community)
+- 💕 [Zájmová skupina OpenIM](https://github.com/Openim-sigs)
+- 🚀 [Připojte se k naší komunitě Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Připojte se k našemu wechatu](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Komunitní setkání
+
+Chceme, aby se do naší komunity a přispívání kódu zapojil kdokoli, nabízíme dárky a odměny a vítáme vás, abyste se k nám připojili každý čtvrtek večer.
+
+Naše konference je v [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, pak můžete vyhledat kanál Open-IM-Server a připojit se
+
+Zaznamenáváme si každou [dvoutýdenní schůzku](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) do [diskuzí na GitHubu](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting), naše historické poznámky ze schůzek a také záznamy schůzek jsou k dispozici na [Dokumenty Google :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Kdo používá OpenIM
+
+Podívejte se na naši stránku [případové studie uživatelů](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md), kde najdete seznam uživatelů projektu. Neváhejte zanechat [📝komentář](https://github.com/openimsdk/open-im-server-deploy/issues/379) a podělte se o svůj případ použití.
+
+## :page_facing_up: License
+
+OpenIM je licencován pod licencí Apache 2.0. Úplný text licence naleznete v [LICENCE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE).
+
+Logo OpenIM, včetně jeho variací a animovaných verzí, zobrazené v tomto úložišti [OpenIM](https://github.com/openimsdk/open-im-server-deploy)v adresářích [assets/logo](./assets/logo) a [assets/logo-gif](assets/logo-gif) je chráněno autorským právem.
+
+## 🔮 Děkujeme našim přispěvatelům!
+
+
+
+
diff --git a/docs/readme/README_da.md b/docs/readme/README_da.md
new file mode 100644
index 0000000..be5a1a3
--- /dev/null
+++ b/docs/readme/README_da.md
@@ -0,0 +1,185 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## :busts_in_silhouette: Fællesskab
+
+- 📚 [OpenIM-fællesskab](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM-interessegruppe](https://github.com/Openim-sigs)
+- 🚀 [Deltag i vores Slack-fællesskab](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Deltag i vores WeChat (微信群)](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+- 👫 [Deltag i vores Reddit](https://www.reddit.com/r/OpenIMessaging)
+- 💬 [Følg vores Twitter-konto](https://twitter.com/openimsdk)
+
+## Ⓜ️ Om OpenIM
+
+OpenIM er en serviceplatform designet specifikt til integration af chat, lyd-videoopkald, notifikationer og AI-chatbots i applikationer. Den tilbyder en række kraftfulde API'er og Webhooks, som gør det let for udviklere at integrere disse interaktive funktioner i deres applikationer. OpenIM er ikke en selvstændig chatapplikation, men fungerer snarere som en platform, der understøtter andre applikationer i at opnå omfattende kommunikationsfunktionaliteter. Følgende diagram illustrerer interaktionen mellem AppServer, AppClient, OpenIMServer og OpenIMSDK for at forklare detaljeret.
+
+
+
+## 🚀 Om OpenIMSDK
+
+**OpenIMSDK** er en IM SDK designet til **OpenIMServer**, skabt specifikt til indlejring i klientapplikationer. Dens vigtigste funktioner og moduler er som følger:
+
+- 🌟 Hovedfunktioner:
+
+ - 📦 Lokal lagring
+ - 🔔 Lytter-callbacks
+ - 🛡️ API-indkapsling
+ - 🌐 Forbindelsesstyring
+
+- 📚 Hovedmoduler:
+
+ 1. 🚀 Initialisering og login
+ 2. 👤 Brugerstyring
+ 3. 👫 Venstyring
+ 4. 🤖 Gruppefunktioner
+ 5. 💬 Håndtering af samtaler
+
+Det er bygget ved hjælp af Golang og understøtter tværplatformsudrulning, hvilket sikrer en konsekvent adgangsoplevelse på tværs af alle platforme.
+
+👉 **[Udforsk GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Om OpenIMServer
+
+- **OpenIMServer** har følgende karakteristika:
+ - 🌐 Mikroservicarkitektur: Understøtter klyngetilstand, inklusive en gateway og flere rpc-tjenester.
+ - 🚀 Forskellige udrulningsmetoder: Understøtter udrulning via kildekode, Kubernetes eller Docker.
+ - Støtte til massiv brugerbase: Super store grupper med hundredtusinder af brugere, titusinder af brugere og milliarder af beskeder.
+
+### Forbedret forretningsfunktionalitet:
+
+- **REST API**: OpenIMServer tilbyder REST API'er til forretningssystemer, med det formål at give virksomheder flere funktioner, såsom at oprette grupper og sende push-beskeder gennem backend-grænseflader.
+- **Webhooks**: OpenIMServer giver mulighed for callback-funktionalitet for at udvide flere forretningsformer. Et callback betyder, at OpenIMServer sender en anmodning til forretningsserveren før eller efter en bestemt begivenhed, som callbacks før eller efter at have sendt en besked.
+
+👉 **[Lær mere](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Samlet Arkitektur
+
+Dyk ned i hjertet af Open-IM-Servers funktionalitet med vores arkitekturdiagram.
+
+
+
+## :rocket: Hurtig start
+
+Vi understøtter mange platforme. Her er adresserne for hurtig oplevelse på websiden:
+
+👉 **[OpenIM online demo](https://www.openim.io/zh/commercial)**
+
+🤲 For at lette brugeroplevelsen tilbyder vi forskellige udrulningsløsninger. Du kan vælge din udrulningsmetode fra listen nedenfor:
+
+- **[Vejledning til udrulning af kildekode](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Vejledning til Docker-udrulning](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Vejledning til Kubernetes-udrulning](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Vejledning til Mac-udviklerudrulning](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: For at starte udviklingen af OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+OpenIM Vores mål er at bygge et topniveau åben kildekode-fællesskab. Vi har et sæt standarder i [Community-repositoriet](https://github.com/OpenIMSDK/community).
+
+Hvis du gerne vil bidrage til dette Open-IM-Server-repositorium, bedes du læse vores [dokumentation for bidragydere](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Før du starter, skal du sikre dig, at dine ændringer er efterspurgte. Det bedste for det er at oprette en [ny diskussion](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) ELLER [Slack-kommunikation](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), eller hvis du finder et problem, [rapportere det](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) først.
+
+- [OpenIM API-referencer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash-logging](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD-handlinger](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM kodekonventioner](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM commit-retningslinjer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM udviklingsguide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM mappestruktur](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM miljøopsætning](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM fejlkode-reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git-arbejdsgang](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git Cherry Pick-guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub-arbejdsgang](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go kode-standarder](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM billedretningslinjer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM initialkonfiguration](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM Docker installationsguide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux-systeminstallation](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux-udviklingsguide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM lokale handlingsguide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM logningskonventioner](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM offline-udrulning](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc-værktøjer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM testguide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Utility Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile-værktøjer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM skriptværktøjer](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM versionsstyring](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Administrer backend og overvåg udrulning](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Mac-udviklerudrulningsguide for OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :calendar: Fællesskabsmøder
+
+Vi ønsker, at alle involverer sig i vores fællesskab og bidrager med kode, vi tilbyder gaver og belønninger, og vi byder dig velkommen til at deltage hver torsdag aften.
+
+Vores konference er på [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, derefter kan du søge Open-IM-Server pipeline for at deltage.
+
+Vi tager [notater](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) af hvert fjortendages møde i [GitHub-diskussioner](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting), Vores historiske mødenotater samt genudsendelser af møderne er tilgængelige på [Google Docs](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing) 📑.
+
+## :eyes: Hvem Bruger OpenIM
+
+Tjek vores side med [brugercasestudier](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) for en liste over projektbrugerne. Tøv ikke med at efterlade en 📝[kommentar](https://github.com/openimsdk/open-im-server-deploy/issues/379) og dele dit brugstilfælde.
+
+## :page_facing_up: Licens
+
+OpenIM er licenseret under Apache 2.0-licensen. Se [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) for den fulde licens tekst.
+
+OpenIM-logoet, inklusive dets variationer og animerede versioner, vist i dette repositorium [OpenIM](https://github.com/openimsdk/open-im-server-deploy) under mapperne [assets/logo](../../assets/logo) og [assets/logo-gif](../../assets/logo-gif), er beskyttet af ophavsretslove.
+
+## 🔮 Tak til vores bidragydere!
+
+
+
+
diff --git a/docs/readme/README_el.md b/docs/readme/README_el.md
new file mode 100644
index 0000000..e9f6536
--- /dev/null
+++ b/docs/readme/README_el.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ Σχετικά με το OpenIM
+
+Το OpenIM είναι μια πλατφόρμα υπηρεσιών σχεδιασμένη ειδικά για την ενσωμάτωση συνομιλίας, κλήσεων ήχου-βίντεο, ειδοποιήσεων και chatbots AI σε εφαρμογές. Παρέχει μια σειρά από ισχυρά API και Webhooks, επιτρέποντας στους προγραμματιστές να ενσωματώσουν εύκολα αυτές τις αλληλεπιδραστικές λειτουργίες στις εφαρμογές τους. Το OpenIM δεν είναι μια αυτόνομη εφαρμογή συνομιλίας, αλλά λειτουργεί ως πλατφόρμα υποστήριξης άλλων εφαρμογών για την επίτευξη πλούσιων λειτουργιών επικοινωνίας. Το παρακάτω διάγραμμα απεικονίζει την αλληλεπίδραση μεταξύ AppServer, AppClient, OpenIMServer και OpenIMSDK για να εξηγήσει αναλυτικά.
+
+
+
+## 🚀 Σχετικά με το OpenIMSDK
+
+Το **OpenIMSDK** είναι ένα SDK για άμεση ανταλλαγή μηνυμάτων σχεδιασμένο για το **OpenIMServer**, δημιουργήθηκε ειδικά για ενσωμάτωση σε εφαρμογές πελατών. Οι κύριες δυνατότητες και μονάδες του είναι οι εξής:
+
+- 🌟 Κύριες Δυνατότητες:
+
+ - 📦 Τοπική αποθήκευση
+ - 🔔 Callbacks ακροατών
+ - 🛡️ Περιτύλιγμα API
+ - 🌐 Διαχείριση σύνδεσης
+
+- 📚 Κύριες Μονάδες:
+
+ 1. 🚀 Αρχικοποίηση και Σύνδεση
+ 2. 👤 Διαχείριση Χρηστών
+ 3. 👫 Διαχείριση Φίλων
+ 4. 🤖 Λειτουργίες Ομάδας
+ 5. 💬 Διαχείριση Συνομιλιών
+
+Είναι κατασκευασμένο με Golang και υποστηρίζει ανάπτυξη σε πολλαπλές πλατφόρμες, διασφαλίζοντας μια συνεπή εμπειρία πρόσβασης σε όλες τις πλατφόρμες.
+
+👉 **[Εξερευνήστε το GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Σχετικά με το OpenIMServer
+
+- Το **OpenIMServer** έχει τα ακόλουθα χαρακτηριστικά:
+  - 🌐 Αρχιτεκτονική μικροϋπηρεσιών: Υποστηρίζει λειτουργία σε σύμπλεγμα, περιλαμβάνοντας μια πύλη και πολλαπλές υπηρεσίες rpc.
+ - 🚀 Διάφοροι τρόποι ανάπτυξης: Υποστηρίζει ανάπτυξη μέσω πηγαίου κώδικα, Kubernetes, ή Docker.
+ - Υποστήριξη για τεράστια βάση χρηστών: Πολύ μεγάλες ομάδες με εκατοντάδες χιλιάδες χρήστες, δεκάδες εκατομμύρια χρήστες και δισεκατομμύρια μηνύματα.
+
+### Ενισχυμένη Επιχειρηματική Λειτουργικότητα:
+
+- **REST API**: Το OpenIMServer προσφέρει REST APIs για επιχειρηματικά συστήματα, με στόχο την ενδυνάμωση των επιχειρήσεων με περισσότερες λειτουργικότητες, όπως η δημιουργία ομάδων και η αποστολή μηνυμάτων push μέσω backend διεπαφών.
+- **Webhooks**: Το OpenIMServer παρέχει δυνατότητες επανάκλησης για την επέκταση περισσότερων επιχειρηματικών μορφών. Μια επανάκληση σημαίνει ότι το OpenIMServer στέλνει ένα αίτημα στον επιχειρηματικό διακομιστή πριν ή μετά από ένα συγκεκριμένο γεγονός, όπως επανακλήσεις πριν ή μετά την αποστολή ενός μηνύματος.
+
+👉 **[Μάθετε περισσότερα](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Συνολική Αρχιτεκτονική
+
+Εξερευνήστε σε βάθος τη λειτουργικότητα του Open-IM-Server με το διάγραμμα αρχιτεκτονικής μας.
+
+
+
+## :rocket: Γρήγορη Εκκίνηση
+
+Υποστηρίζουμε πολλές πλατφόρμες. Εδώ είναι οι διευθύνσεις για γρήγορη εμπειρία στην πλευρά του διαδικτύου:
+
+👉 **[Διαδικτυακή επίδειξη του OpenIM](https://web-enterprise.rentsoft.cn/)**
+
+🤲 Για να διευκολύνουμε την εμπειρία του χρήστη, προσφέρουμε διάφορες λύσεις ανάπτυξης. Μπορείτε να επιλέξετε τη μέθοδο ανάπτυξής σας από την παρακάτω λίστα:
+
+- **[Οδηγός Ανάπτυξης Κώδικα Πηγής](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Οδηγός Ανάπτυξης μέσω Docker](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Οδηγός Ανάπτυξης Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Οδηγός Ανάπτυξης για Προγραμματιστές Mac](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Για να Αρχίσετε την Ανάπτυξη του OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+Στόχος μας στο OpenIM είναι να δημιουργήσουμε μια κορυφαίου επιπέδου κοινότητα ανοιχτού κώδικα. Διαθέτουμε ένα σύνολο προτύπων στο [Αποθετήριο Κοινότητας](https://github.com/OpenIMSDK/community).
+
+Εάν θέλετε να συνεισφέρετε σε αυτό το αποθετήριο Open-IM-Server, παρακαλούμε διαβάστε την [τεκμηρίωση συνεισφέροντος](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Πριν ξεκινήσετε, παρακαλούμε βεβαιωθείτε ότι οι αλλαγές σας είναι ζητούμενες. Το καλύτερο για αυτό είναι να δημιουργήσετε μια [νέα συζήτηση](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) ή να επικοινωνήσετε στο [Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), ή αν βρείτε ένα ζήτημα, [αναφέρετέ το](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) πρώτα.
+
+- [Αναφορά API του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [Καταγραφή Bash του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [Ενέργειες CI/CD του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [Συμβάσεις Κώδικα του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [Οδηγίες Commit του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [Οδηγός Ανάπτυξης του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [Δομή Καταλόγου του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [Ρύθμιση Περιβάλλοντος του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [Αναφορά Κωδικών Σφάλματος του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [Ροή Εργασίας Git του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [Οδηγός Cherry Pick του Git του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [Ροή Εργασίας GitHub του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [Πρότυπα Κώδικα Go του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [Οδηγίες Εικόνας του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [Αρχική Διαμόρφωση του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [Οδηγός Εγκατάστασης Docker του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [Οδηγός Εγκατάστασης Συστήματος Linux του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [Οδηγός Ανάπτυξης Linux του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [Οδηγός Τοπικών Δράσεων του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [Συμβάσεις Καταγραφής του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [Αποστολή Εκτός Σύνδεσης του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [Εργαλεία Protoc του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [Οδηγός Δοκιμών του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [Χρησιμότητα Go του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [Χρησιμότητες Makefile του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [Χρησιμότητες Σεναρίου του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [Έκδοση του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Διαχείριση backend και παρακολούθηση ανάπτυξης](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Οδηγός Ανάπτυξης για Προγραμματιστές Mac του OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Κοινότητα
+
+- 📚 [Κοινότητα OpenIM](https://github.com/OpenIMSDK/community)
+- 💕 [Ομάδα Ενδιαφέροντος OpenIM](https://github.com/Openim-sigs)
+- 🚀 [Εγγραφείτε στην κοινότητα Slack μας](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Εγγραφείτε στην ομάδα μας στο WeChat (微信群)](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Συναντήσεις της κοινότητας
+
+Θέλουμε οποιονδήποτε να εμπλακεί στην κοινότητά μας και να συνεισφέρει κώδικα. Προσφέρουμε δώρα και ανταμοιβές και σας καλωσορίζουμε να έρθετε μαζί μας κάθε Πέμπτη βράδυ.
+
+Η διάσκεψή μας γίνεται στο [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, όπου μπορείτε να αναζητήσετε το Open-IM-Server pipeline για να συμμετάσχετε.
+
+Κρατάμε σημειώσεις για κάθε [δεκαπενθήμερη συνάντηση](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) στις [συζητήσεις του GitHub](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting). Οι ιστορικές μας σημειώσεις συναντήσεων, καθώς και οι επαναλήψεις των συναντήσεων, είναι διαθέσιμες στα [Έγγραφα της Google :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Ποιοί Χρησιμοποιούν το OpenIM
+
+Ελέγξτε τη σελίδα με τις [μελέτες περίπτωσης χρήσης](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) μας για μια λίστα των χρηστών του έργου. Μην διστάσετε να αφήσετε ένα [📝 σχόλιο](https://github.com/openimsdk/open-im-server-deploy/issues/379) και να μοιραστείτε την περίπτωση χρήσης σας.
+
+## :page_facing_up: Άδεια Χρήσης
+
+Το OpenIM διατίθεται υπό την άδεια Apache 2.0. Δείτε την [ΑΔΕΙΑ ΧΡΗΣΗΣ](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) για το πλήρες κείμενο της άδειας.
+
+Το λογότυπο του OpenIM, συμπεριλαμβανομένων των παραλλαγών και των κινούμενων εκδόσεών του, που εμφανίζεται σε αυτό το αποθετήριο [OpenIM](https://github.com/openimsdk/open-im-server-deploy) στους καταλόγους [assets/logo](../../assets/logo) και [assets/logo-gif](../../assets/logo-gif), προστατεύεται από τους νόμους περί πνευματικής ιδιοκτησίας.
+
+## 🔮 Ευχαριστούμε τους συνεισφέροντές μας!
+
+
+
+
diff --git a/docs/readme/README_es.md b/docs/readme/README_es.md
new file mode 100644
index 0000000..30cc4db
--- /dev/null
+++ b/docs/readme/README_es.md
@@ -0,0 +1,185 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ Acerca de OpenIM
+
+OpenIM es una plataforma de servicio diseñada específicamente para integrar chat, llamadas de audio y video, notificaciones y chatbots de IA en aplicaciones. Proporciona una gama de potentes API y Webhooks, lo que permite a los desarrolladores incorporar fácilmente estas características interactivas en sus aplicaciones. OpenIM no es una aplicación de chat independiente, sino que sirve como una plataforma para apoyar a otras aplicaciones en lograr funcionalidades de comunicación enriquecidas. El siguiente diagrama ilustra la interacción entre AppServer, AppClient, OpenIMServer y OpenIMSDK para explicar en detalle.
+
+
+
+## 🚀 Acerca de OpenIMSDK
+
+**OpenIMSDK** es un SDK de mensajería instantánea diseñado para **OpenIMServer**, creado específicamente para su incorporación en aplicaciones cliente. Sus principales características y módulos son los siguientes:
+
+- 🌟 Características Principales:
+
+ - 📦 Almacenamiento local
+ - 🔔 Callbacks de escuchas
+ - 🛡️ Envoltura de API
+ - 🌐 Gestión de conexiones
+
+- 📚 Módulos Principales:
+
+ 1. 🚀 Inicialización y acceso
+ 2. 👤 Gestión de usuarios
+ 3. 👫 Gestión de amigos
+ 4. 🤖 Funciones de grupo
+ 5. 💬 Manejo de conversaciones
+
+Está construido con Golang y soporta despliegue multiplataforma, asegurando una experiencia de acceso consistente en todas las plataformas.
+
+👉 **[Explora el SDK de GO](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Acerca de OpenIMServer
+
+- **OpenIMServer** tiene las siguientes características:
+ - 🌐 Arquitectura de microservicios: Soporta modo cluster, incluyendo un gateway y múltiples servicios rpc.
+ - 🚀 Métodos de despliegue diversos: Soporta el despliegue a través de código fuente, Kubernetes o Docker.
+ - Soporte para una base de usuarios masiva: Grupos super grandes con cientos de miles de usuarios, decenas de millones de usuarios y miles de millones de mensajes.
+
+### Funcionalidad Empresarial Mejorada:
+
+- **API REST**: OpenIMServer ofrece APIs REST para sistemas empresariales, destinadas a empoderar a las empresas con más funcionalidades, como la creación de grupos y el envío de mensajes push a través de interfaces de backend.
+- **Webhooks**: OpenIMServer proporciona capacidades de callback para extender más formas de negocio. Un callback significa que OpenIMServer envía una solicitud al servidor empresarial antes o después de un cierto evento, como callbacks antes o después de enviar un mensaje.
+
+👉 **[Aprende más](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Arquitectura General
+
+Adéntrate en el corazón de la funcionalidad de Open-IM-Server con nuestro diagrama de arquitectura.
+
+
+
+## :rocket: Inicio Rápido
+
+Apoyamos muchas plataformas. Aquí están las direcciones para una experiencia rápida en el lado web:
+
+👉 **[Demostración web en línea de OpenIM](https://web-enterprise.rentsoft.cn/)**
+
+🤲 Para facilitar la experiencia del usuario, ofrecemos varias soluciones de despliegue. Puedes elegir tu método de despliegue de la lista a continuación:
+
+- **[Guía de Despliegue de Código Fuente](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Guía de Despliegue con Docker](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Guía de Despliegue con Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Guía de Despliegue para Desarrolladores en Mac](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Para Comenzar a Desarrollar en OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+Nuestro objetivo en OpenIM es construir una comunidad de código abierto de nivel superior. Tenemos un conjunto de estándares en el [repositorio de la Comunidad](https://github.com/OpenIMSDK/community).
+
+Si te gustaría contribuir a este repositorio de Open-IM-Server, por favor lee nuestra [documentación para colaboradores](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Antes de comenzar, asegúrate de que tus cambios sean demandados. Lo mejor para eso es crear una [nueva discusión](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) O [Comunicación en Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), o si encuentras un problema, [repórtalo](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) primero.
+
+- [Referencia de API de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [Registro de Bash de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [Acciones de CI/CD de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [Convenciones de Código de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [Guías de Commit de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [Guía de Desarrollo de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [Estructura de Directorios de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [Configuración de Entorno de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [Referencia de Códigos de Error de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [Flujo de Trabajo de Git de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [Guía de Cherry Pick de Git de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [Flujo de Trabajo de GitHub de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [Estándares de Código Go de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [Guías de Imágenes de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [Configuración Inicial de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [Guía de Instalación de Docker de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [Instalación del Sistema Linux de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [Guía de Desarrollo Linux de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [Guía de Acciones Locales de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [Convenciones de Registro de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [Despliegue sin Conexión de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [Herramientas Protoc de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [Guía de Pruebas de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [Utilidades Go de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [Utilidades de Makefile de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [Utilidades de Script de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [Versionado de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Gestión de backend y despliegue de monitoreo](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Guía de Despliegue para Desarrolladores Mac de OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Comunidad
+
+- 📚 [Comunidad de OpenIM](https://github.com/OpenIMSDK/community)
+- 💕 [Grupo de Interés de OpenIM](https://github.com/Openim-sigs)
+- 🚀 [Únete a nuestra comunidad de Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Únete a nuestro wechat (微信群)](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Reuniones de la Comunidad
+
+Queremos que cualquiera se involucre en nuestra comunidad y contribuya con código. Ofrecemos regalos y recompensas, y te damos la bienvenida para que te unas a nosotros cada jueves por la noche.
+
+Nuestra conferencia está en [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, donde puedes buscar el pipeline de Open-IM-Server para unirte.
+
+Tomamos notas de cada [reunión quincenal](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) en las [discusiones de GitHub](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting). Nuestras notas de reuniones históricas, así como las grabaciones de las reuniones, están disponibles en [Google Docs :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Quiénes Están Usando OpenIM
+
+Consulta nuestra página de [estudios de caso de usuarios](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) para obtener una lista de los usuarios del proyecto. No dudes en dejar un [📝 comentario](https://github.com/openimsdk/open-im-server-deploy/issues/379) y compartir tu caso de uso.
+
+## :page_facing_up: Licencia
+
+OpenIM está bajo la licencia Apache 2.0. Consulta [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) para ver el texto completo de la licencia.
+
+El logotipo de OpenIM, incluyendo sus variaciones y versiones animadas, que se muestra en este repositorio [OpenIM](https://github.com/openimsdk/open-im-server-deploy) en los directorios [assets/logo](../../assets/logo) y [assets/logo-gif](../../assets/logo-gif), está protegido por las leyes de derechos de autor.
+
+## 🔮 ¡Gracias a nuestros colaboradores!
+
+
+
+
diff --git a/docs/readme/README_fa.md b/docs/readme/README_fa.md
new file mode 100644
index 0000000..418b85b
--- /dev/null
+++ b/docs/readme/README_fa.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## درباره OpenIM Ⓜ️
+
+OpenIM یک پلتفرم خدماتی است که به طور خاص برای ادغام چت، تماس های صوتی و تصویری، اعلان ها و چت ربات های هوش مصنوعی در برنامه ها طراحی شده است. این مجموعه ای از API ها و Webhook های قدرتمند را ارائه می دهد که به توسعه دهندگان این امکان را می دهد تا به راحتی این ویژگی های تعاملی را در برنامه های خود بگنجانند. OpenIM یک برنامه چت مستقل نیست، بلکه به عنوان یک پلتفرم برای پشتیبانی از برنامه های کاربردی دیگر در دستیابی به قابلیت های ارتباطی غنی عمل می کند. نمودار زیر تعامل بین AppServer، AppClient، OpenIMServer و OpenIMSDK را برای توضیح جزئیات نشان می دهد.
+
+
+
+## 🚀 درباره OpenIMSDK
+
+**OpenIMSDK** یک IM SDK است که برای **OpenIMServer** طراحی شده است که به طور خاص برای جاسازی در برنامه های مشتری ایجاد شده است. ویژگی ها و ماژول های اصلی آن به شرح زیر است:
+
+- 🌟 ویژگی های اصلی:
+
+ - 📦 ذخیره سازی محلی
+ - 🔔 پاسخ تماس شنونده
+ - 🛡️ بسته بندی API
+ - 🌐 مدیریت اتصال
+
+- 📚 ماژول های اصلی:
+
+ 1. 🚀 مقداردهی اولیه و ورود
+ 2. 👤 مدیریت کاربر
+ 3. 👫 مدیریت دوست
+ 4. 🤖 توابع گروه
+ 5. 💬 مدیریت مکالمه
+
+این برنامه با استفاده از Golang ساخته شده است و از استقرار چند پلت فرم پشتیبانی می کند و تجربه دسترسی ثابت را در تمام پلتفرم ها تضمین می کند.
+
+👉 **[کاوش GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 درباره OpenIMServer
+
+- **OpenIMServer** دارای ویژگی های زیر است:
+ - 🌐 معماری Microservice: از حالت کلاستر، از جمله یک دروازه و چندین سرویس rpc پشتیبانی می کند.
+ - 🚀 روشهای استقرار متنوع: از استقرار از طریق کد منبع، Kubernetes یا Docker پشتیبانی میکند.
+ - پشتیبانی از پایگاه عظیم کاربران: گروه های فوق العاده بزرگ با صدها هزار کاربر، ده ها میلیون کاربر و میلیاردها پیام.
+
+### عملکردهای تجاری پیشرفته:
+
+- **REST API**: OpenIMServer APIهای REST را برای سیستمهای تجاری ارائه میکند، با هدف توانمندسازی کسبوکارها با قابلیتهای بیشتر، مانند ایجاد گروهها و ارسال پیامهای فشار از طریق رابطهای باطنی.
+- **Webhooks**: OpenIMServer قابلیت های پاسخ به تماس را برای گسترش بیشتر فرم های تجاری ارائه می دهد. پاسخ به تماس به این معنی است که OpenIMServer درخواستی را قبل یا بعد از یک رویداد خاص به سرور تجاری ارسال می کند، مانند تماس های قبل یا بعد از ارسال یک پیام.
+
+👉 **[بیشتر بدانید](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: معماری کلی
+
+با نمودار معماری ما به قلب عملکرد Open-IM-Server بپردازید.
+
+
+
+## :rocket: شروع سریع
+
+ما از بسیاری از پلتفرم ها پشتیبانی می کنیم. در اینجا آدرس هایی برای تجربه سریع در سمت وب آمده است:
+
+👉 **[نسخه نمایشی وب آنلاین OpenIM](https://web-enterprise.rentsoft.cn/)**
+
+🤲 برای تسهیل تجربه کاربر، ما راه حل های مختلف استقرار را ارائه می دهیم. می توانید روش استقرار خود را از لیست زیر انتخاب کنید:
+
+- **[راهنمای استقرار کد منبع](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[راهنمای استقرار داکر](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[راهنمای استقرار Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[راهنمای استقرار توسعه دهنده مک](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: برای شروع توسعه OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+هدف ما در OpenIM ایجاد یک جامعه منبع باز سطح بالا است. ما مجموعه‌ای از استانداردها را در [مخزن انجمن](https://github.com/OpenIMSDK/community) داریم.
+
+اگر میخواهید در این مخزن Open-IM-Server مشارکت کنید، لطفاً [مستندات مشارکتکننده](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md) ما را بخوانید.
+
+قبل از شروع، لطفاً مطمئن شوید که تغییرات شما مورد تقاضا هستند. بهترین کار برای آن این است که یک [بحث جدید](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) یا [ارتباط اسلک](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) ایجاد کنید، یا اگر مشکلی پیدا کردید، ابتدا [آن را گزارش کنید](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose).
+
+- [مرجع OpenIM API](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash Logging](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD Actions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [کنوانسیون کد OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [دستورالعمل های تعهد OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [راهنمای توسعه OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [ساختار دایرکتوری OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [تنظیم محیط OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [مرجع کد خطا OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [راهنمای Cherry Pick گیت OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [استانداردهای کد OpenIM Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [دستورالعمل های تصویر OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [پیکربندی اولیه OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [راهنمای نصب OpenIM Docker](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [نصب سیستم لینوکس OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [راهنمای توسعه OpenIM Linux](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [راهنمای اقدامات محلی OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM Logging Conventions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [استقرار آفلاین OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc Tools](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [راهنمای تست OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Utility Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [ابزارهای OpenIM Script](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [نسخه OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [مدیریت استقرار باطن و نظارت](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [راهنمای استقرار توسعه دهنده مک برای OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: انجمن
+
+- 📚 [انجمن OpenIM](https://github.com/OpenIMSDK/community)
+- 💕 [گروه علاقه OpenIM](https://github.com/Openim-sigs)
+- 🚀 [به انجمن Slack ما بپیوندید](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [به وی چت ما بپیوندید](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: جلسات جامعه
+
+ما می‌خواهیم هر کسی در انجمن ما مشارکت کند و در کد سهیم شود. ما هدایا و جوایزی ارائه می‌کنیم و از شما استقبال می‌کنیم که هر پنجشنبه شب به ما بپیوندید.
+
+کنفرانس ما در [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯 است، سپس می توانید خط لوله Open-IM-Server را برای پیوستن جستجو کنید.
+
+ما از هر [جلسه دو هفته‌ای](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) در [بحث‌های GitHub](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting) یادداشت‌برداری می‌کنیم. یادداشت‌های جلسات گذشته ما و همچنین بازپخش جلسات در [Google Docs :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing) موجود است.
+
+## :eyes: چه کسانی از OpenIM استفاده می کنند
+
+صفحه [مطالعات موردی کاربر](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) ما را برای لیستی از کاربران پروژه بررسی کنید. از گذاشتن [نظر📝](https://github.com/openimsdk/open-im-server-deploy/issues/379) و به اشتراک گذاری مورد استفاده خود دریغ نکنید.
+
+## :page_facing_up: مجوز
+
+OpenIM تحت مجوز Apache 2.0 مجوز دارد. برای متن کامل مجوز به [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) مراجعه کنید.
+
+نشان‌واره OpenIM، شامل انواع و نسخه‌های متحرک آن، که در این مخزن [OpenIM](https://github.com/openimsdk/open-im-server-deploy) در فهرست‌های [assets/logo](../../assets/logo) و [assets/logo-gif](../../assets/logo-gif) نمایش داده می‌شود، توسط قوانین حق چاپ محافظت می‌شود.
+
+## 🔮 با تشکر از همکاران ما!
+
+
+
+
diff --git a/docs/readme/README_fr.md b/docs/readme/README_fr.md
new file mode 100644
index 0000000..0810b9d
--- /dev/null
+++ b/docs/readme/README_fr.md
@@ -0,0 +1,172 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ À propos de OpenIM
+
+OpenIM est une plateforme de services conçue spécifiquement pour intégrer des fonctionnalités de communication telles que le chat, les appels audio et vidéo, les notifications, ainsi que les robots de chat IA dans les applications. Elle offre une série d'API puissantes et de Webhooks, permettant aux développeurs d'incorporer facilement ces caractéristiques interactives dans leurs applications. OpenIM n'est pas en soi une application de chat autonome, mais sert de plateforme supportant d'autres applications pour réaliser des fonctionnalités de communication enrichies. L'image ci-dessous montre les relations d'interaction entre AppServer, AppClient, OpenIMServer et OpenIMSDK pour illustrer spécifiquement.
+
+
+
+## 🚀 À propos de OpenIMSDK
+
+**OpenIMSDK** est un SDK IM conçu pour **OpenIMServer** spécialement créé pour être intégré dans les applications clientes. Ses principales fonctionnalités et modules comprennent :
+
+- 🌟 Fonctionnalités clés :
+
+ - 📦 Stockage local
+ - 🔔 Rappels de l'écouteur
+ - 🛡️ Encapsulation d'API
+ - 🌐 Gestion de la connexion
+
+- 📚 Modules principaux :
+
+ 1. 🚀 Initialisation et connexion
+ 2. 👤 Gestion des utilisateurs
+ 3. 👫 Gestion des amis
+ 4. 🤖 Fonctionnalités de groupe
+ 5. 💬 Traitement des conversations
+
+Il est construit avec Golang et supporte le déploiement multiplateforme, assurant une expérience d'accès cohérente sur toutes les plateformes.
+
+👉 **[Explorer le SDK GO](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 À propos de OpenIMServer
+
+- **OpenIMServer** présente les caractéristiques suivantes :
+  - 🌐 Architecture microservices : prend en charge le mode cluster, incluant le gateway (passerelle) et plusieurs services rpc.
+  - 🚀 Divers modes de déploiement : supporte le déploiement via le code source, Kubernetes ou Docker.
+  - Support d'une base massive d'utilisateurs : super grands groupes de cent mille membres, dizaines de millions d'utilisateurs et milliards de messages.
+
+### Fonctionnalités commerciales améliorées :
+
+- **REST API** : OpenIMServer fournit une REST API pour les systèmes commerciaux, visant à accorder plus de fonctionnalités, telles que la création de groupes via l'interface backend, l'envoi de messages push, etc.
+- **Webhooks** : OpenIMServer offre des capacités de rappel pour étendre davantage les formes d'entreprise. Un rappel signifie que OpenIMServer enverra une requête au serveur d'entreprise avant ou après qu'un événement se soit produit, comme un rappel avant ou après l'envoi d'un message.
+
+👉 **[En savoir plus](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Architecture globale
+
+Plongez dans le cœur de la fonctionnalité d'Open-IM-Server avec notre diagramme d'architecture.
+
+
+
+## :rocket: Démarrage rapide
+
+Nous prenons en charge de nombreuses plateformes. Voici les adresses pour une expérience rapide du côté web :
+
+👉 **[Démo web en ligne OpenIM](https://www.openim.io/zh/commercial)**
+
+🤲 Pour faciliter l'expérience utilisateur, nous proposons plusieurs solutions de déploiement. Vous pouvez choisir votre méthode de déploiement selon la liste ci-dessous :
+
+- **[Guide de déploiement du code source](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Guide de déploiement Docker](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Guide de déploiement Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Guide de déploiement pour développeur Mac](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Commencer à développer avec OpenIM
+
+Chez OpenIM, notre objectif est de construire une communauté open source de premier plan. Nous avons un ensemble de standards, disponibles dans le [dépôt communautaire](https://github.com/OpenIMSDK/community).
+Si vous souhaitez contribuer à ce dépôt Open-IM-Server, veuillez lire notre [document pour les contributeurs](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Avant de commencer, assurez-vous que vos modifications sont nécessaires. La meilleure manière est de créer une [nouvelle discussion](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) ou une [communication Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), ou, si vous identifiez un problème, de le [signaler d'abord](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose).
+
+- [Référence de l'API OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [Journalisation Bash OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [Actions CI/CD OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [Conventions de code OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [Directives de commit OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [Guide de développement OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [Structure de répertoire OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [Configuration de l'environnement OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [Référence des codes d'erreur OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [Workflow Git OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [Guide Cherry Pick Git OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [Workflow GitHub OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [Normes de code Go OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [Directives d'image OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [Configuration initiale OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [Guide d'installation Docker OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [Installation du système Linux OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [Guide de développement Linux OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [Guide des actions locales OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [Conventions de journalisation OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [Déploiement hors ligne OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [Outils Protoc OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [Guide de test OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [Utilitaire Go OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [Utilitaires Makefile OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [Utilitaires de script OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [Versionnement OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Gérer le déploiement du backend et la surveillance](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Guide de déploiement pour développeur Mac pour OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :calendar: Réunions de la Communauté
+
+Nous voulons que tout le monde s'implique dans notre communauté et contribue au code ; nous offrons des cadeaux et des récompenses, et nous vous invitons à nous rejoindre chaque jeudi soir.
+
+Notre conférence a lieu sur le [Slack OpenIM](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯 ; vous pouvez y rechercher le canal Open-IM-Server pour nous rejoindre.
+
+Nous prenons des notes de chaque [réunion bihebdomadaire](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) dans les [discussions GitHub](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting). Nos notes de réunion historiques, ainsi que les rediffusions des réunions, sont disponibles sur [Google Docs :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Qui Utilise OpenIM
+
+Consultez notre page [études de cas d'utilisateurs](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) pour une liste des utilisateurs du projet. N'hésitez pas à laisser un [📝commentaire](https://github.com/openimsdk/open-im-server-deploy/issues/379) et à partager votre cas d'utilisation.
+
+## :page_facing_up: Licence
+
+OpenIM est sous licence Apache 2.0. Voir [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) pour le texte complet de la licence.
+
+Le logo OpenIM, y compris ses variations et versions animées, affiché dans ce dépôt [OpenIM](https://github.com/openimsdk/open-im-server-deploy) sous les répertoires [assets/logo](../../assets/logo) et [assets/logo-gif](../../assets/logo-gif), est protégé par les lois sur le droit d'auteur.
+
+## 🔮 Merci à nos contributeurs !
+
+
+
+
diff --git a/docs/readme/README_hu.md b/docs/readme/README_hu.md
new file mode 100644
index 0000000..2ea51e0
--- /dev/null
+++ b/docs/readme/README_hu.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ Az OpenIM-ről
+
+Az OpenIM egy szolgáltatási platform, amelyet kifejezetten a csevegés, az audio-video hívások, az értesítések és az AI chatbotok alkalmazásokba történő integrálására terveztek. Számos hatékony API-t és Webhookot kínál, lehetővé téve a fejlesztők számára, hogy ezeket az interaktív szolgáltatásokat könnyen beépítsék alkalmazásaikba. Az OpenIM nem egy önálló csevegőalkalmazás, hanem platformként szolgál más alkalmazások támogatására a gazdag kommunikációs funkciók elérésében. A következő diagram az AppServer, az AppClient, az OpenIMServer és az OpenIMSDK közötti interakciót szemlélteti részletesen.
+
+
+
+## 🚀 Az OpenIMSDK-ról
+
+Az **OpenIMSDK** egy **OpenIMServer** számára készült azonnali üzenetküldő SDK, amelyet kifejezetten ügyfélalkalmazásokba való beágyazáshoz hoztak létre. Fő jellemzői és moduljai a következők:
+
+- 🌟 Főbb jellemzők:
+
+  - 📦 Helyi tárolás
+ - 🔔 Hallgatói visszahívások
+ - 🛡️ API-csomagolás
+ - 🌐 Kapcsolatkezelés
+
+- 📚 Fő modulok:
+
+ 1. 🚀 Inicializálás és bejelentkezés
+ 2. 👤 Felhasználókezelés
+ 3. 👫 Barátkezelés
+ 4. 🤖 Csoportfunkciók
+ 5. 💬 Beszélgetéskezelés
+
+Golang használatával készült, és támogatja a többplatformos telepítést, biztosítva a konzisztens hozzáférési élményt minden platformon.
+
+👉 **[Fedezze fel a GO SDK-t](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Az OpenIMServerről
+
+- **OpenIMServer** a következő jellemzőkkel rendelkezik:
+ - 🌐 Mikroszolgáltatási architektúra: Támogatja a fürt módot, beleértve az átjárót és több rpc szolgáltatást.
+ - 🚀 Változatos telepítési módszerek: Támogatja a forráskódon, Kubernetesen vagy Dockeren keresztül történő telepítést.
+  - Hatalmas felhasználói bázis támogatása: szuper nagy csoportok akár százezres taglétszámmal, több tízmillió felhasználó és több milliárd üzenet.
+
+### Továbbfejlesztett üzleti funkcionalitás:
+
+- **REST API**: Az OpenIMServer REST API-kat kínál az üzleti rendszerek számára, amelyek célja, hogy a vállalkozásokat több funkcióval ruházza fel, mint például csoportok létrehozása és push üzenetek küldése háttérfelületeken keresztül.
+- **Webhooks**: Az OpenIMServer visszahívási lehetőségeket biztosít több üzleti forma kiterjesztéséhez. A visszahívás azt jelenti, hogy az OpenIMServer kérelmet küld az üzleti szervernek egy bizonyos esemény előtt vagy után, például visszahívásokat üzenet küldése előtt vagy után.
+
+👉 **[Tudj meg többet](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Általános architektúra
+
+Merüljön el az Open-IM-Server funkcióinak szívében az architektúra diagramunk segítségével.
+
+
+
+## :rocket: Gyors indítás
+
+Számos platformot támogatunk. Íme a címek a gyors weboldali használathoz:
+
+👉 **[OpenIM online webdemó](https://web-enterprise.rentsoft.cn/)**
+
+🤲 A felhasználói élmény megkönnyítése érdekében különféle telepítési megoldásokat kínálunk. Az alábbi listából választhatja ki a telepítési módot:
+
+- **[Forráskód-telepítési útmutató](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker telepítési útmutató](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Kubernetes telepítési útmutató](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Mac fejlesztői telepítési útmutató](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Az OpenIM fejlesztésének megkezdéséhez
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+Az OpenIM célja egy felső szintű nyílt forráskódú közösség felépítése. Van egy szabványkészletünk a [Közösségi adattárban](https://github.com/OpenIMSDK/community).
+
+Ha hozzá szeretne járulni ehhez az Open-IM-Server adattárhoz, kérjük, olvassa el [közreműködői dokumentációnkat](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Mielőtt elkezdené, győződjön meg arról, hogy a változtatásokra valóban szükség van. Erre a legjobb mód egy [új beszélgetés](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) vagy [Slack-beszélgetés](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) létrehozása, vagy ha problémát talál, először [jelentse](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) azt.
+
+- [OpenIM API referencia](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash naplózás](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD műveletek](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM Code-egyezmények](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM Commit Guidelines](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM fejlesztési útmutató](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM címtárszerkezet](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM környezet beállítása](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM hibakód hivatkozás](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git Cherry Pick Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub munkafolyamat](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go Code szabványok](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM képre vonatkozó irányelvek](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM kezdeti konfiguráció](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM Docker telepítési útmutató](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux rendszertelepítés](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux fejlesztési útmutató](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM helyi műveletek útmutatója](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM naplózási egyezmények](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM offline telepítés](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc Tools](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM tesztelési útmutató](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Utility Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM Script Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM verzió](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [A háttérrendszer kezelése és a telepítés figyelése](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Mac Developer Deployment Guide for OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Közösség
+
+- 📚 [OpenIM közösség](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM érdeklődési csoport](https://github.com/Openim-sigs)
+- 🚀 [Csatlakozz a Slack közösségünkhöz](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Csatlakozz a wechathez](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Közösségi Találkozók
+
+Szeretnénk, ha bárki bekapcsolódna közösségünkbe és hozzájárulna kódunkhoz, ajándékokat és jutalmakat kínálunk, és szeretettel várjuk, hogy csatlakozzon hozzánk minden csütörtök este.
+
+Konferenciánk az [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯 felületen zajlik; ott rákereshet az Open-IM-Server csatornára a csatlakozáshoz.
+
+Minden [kéthetente tartott megbeszélésről](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) jegyzeteket készítünk a [GitHub-beszélgetésekben](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting). A korábbi találkozók jegyzetei, valamint az értekezletek visszajátszásai a [Google Dokumentumok :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing) oldalon érhetők el.
+
+## :eyes: Kik használják az OpenIM-et
+
+Tekintse meg [felhasználói esettanulmányok](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) oldalunkat a projekt felhasználóinak listájáért. Ne habozzon, hagyjon [📝megjegyzést](https://github.com/openimsdk/open-im-server-deploy/issues/379), és ossza meg használati esetét.
+
+## :page_facing_up: Engedély
+
+Az OpenIM az Apache 2.0 licenc alatt áll. A teljes licencszövegért lásd: [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE).
+
+Az ebben az [OpenIM](https://github.com/openimsdk/open-im-server-deploy) tárolóban az [assets/logo](../../assets/logo) és [assets/logo-gif](../../assets/logo-gif) könyvtárak alatt megjelenő OpenIM logót, beleértve annak változatait és animált változatait, szerzői jogi törvények védik.
+
+## 🔮 Köszönjük közreműködőinknek!
+
+
+
+
diff --git a/docs/readme/README_ja.md b/docs/readme/README_ja.md
new file mode 100644
index 0000000..45e7aa3
--- /dev/null
+++ b/docs/readme/README_ja.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ OpenIM について
+
+OpenIM は、アプリケーション内でチャット、音声通話、通知、AI チャットボットなどの通信機能を統合するために特別に設計されたサービスプラットフォームです。一連の強力な API と Webhooks を提供することで、開発者はアプリケーションに簡単にこれらの通信機能を統合できます。OpenIM 自体は独立したチャットアプリではなく、アプリケーションにサポートを提供し、豊富な通信機能を実現するプラットフォームです。以下の図は、AppServer、AppClient、OpenIMServer、OpenIMSDK 間の相互作用を示しています。
+
+
+
+## 🚀 OpenIMSDK について
+
+**OpenIMSDK**は、**OpenIMServer**用に設計された IM SDK で、クライアントアプリケーションに組み込むためのものです。主な機能とモジュールは以下の通りです:
+
+- 🌟 主な機能:
+
+ - 📦 ローカルストレージ
+ - 🔔 リスナーコールバック
+ - 🛡️ API のラッピング
+ - 🌐 接続管理
+
+- 📚 主なモジュール:
+
+ 1. 🚀 初期化とログイン
+ 2. 👤 ユーザー管理
+ 3. 👫 友達管理
+ 4. 🤖 グループ機能
+ 5. 💬 会話処理
+
+Golang を使用して構築され、クロスプラットフォームの導入をサポートし、すべてのプラットフォームで一貫したアクセス体験を提供します。
+
+👉 **[GO SDK を探索する](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 OpenIMServer について
+
+- **OpenIMServer** には以下の特徴があります:
+ - 🌐 マイクロサービスアーキテクチャ:クラスターモードをサポートし、ゲートウェイ(gateway)と複数の rpc サービスを含みます。
+ - 🚀 多様なデプロイメント方法:ソースコード、kubernetes、または docker でのデプロイメントをサポートします。
+  - 大規模ユーザーのサポート:十万人規模の超大型グループ、千万人のユーザー、百億のメッセージに対応します。
+
+### 強化されたビジネス機能:
+
+- **REST API**:OpenIMServer は、ビジネスシステム用の REST API を提供しており、ビジネスにさらに多くの機能を提供することを目指しています。たとえば、バックエンドインターフェースを通じてグループを作成したり、プッシュメッセージを送信したりするなどです。
+- **Webhooks**:OpenIMServer は、より多くのビジネス形態を拡張するためのコールバック機能を提供しています。コールバックとは、特定のイベントが発生する前後に、OpenIMServer がビジネスサーバーにリクエストを送信することを意味します。例えば、メッセージ送信の前後のコールバックなどです。
+
+👉 **[もっと詳しく知る](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: 全体のアーキテクチャ
+
+Open-IM-Server の機能の核心に迫るために、アーキテクチャダイアグラムをご覧ください。
+
+
+
+## :rocket: クイックスタート
+
+iOS/Android/H5/PC/Web でのオンライン体験:
+
+👉 **[OpenIM online demo](https://www.openim.io/zh/commercial)**
+
+🤲 ユーザー体験を容易にするために、私たちは様々なデプロイメントソリューションを提供しています。以下のリストから、ご自身のデプロイメント方法を選択できます:
+
+- **[ソースコードデプロイメントガイド](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker デプロイメントガイド](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Kubernetes デプロイメントガイド](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Mac 開発者向けデプロイメントガイド](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: OpenIM の開発を始める
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+私たちの目標は、トップレベルのオープンソースコミュニティを構築することです。[コミュニティリポジトリ](https://github.com/OpenIMSDK/community)には一連の基準があります。
+
+この Open-IM-Server リポジトリに貢献したい場合は、[貢献者ドキュメントをお読みください](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md)。
+
+始める前に、変更が必要であることを確認してください。最良の方法は、[新しいディスカッション](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose)や[Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)での通信を作成すること、または問題を発見した場合は、まずそれを[報告](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose)することです。
+
+- [OpenIM API リファレンス](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash ロギング](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD アクション](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM コード規約](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM コミットガイドライン](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM 開発ガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM ディレクトリ構造](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM 環境設定](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM エラーコードリファレンス](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git ワークフロー](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git チェリーピックガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub ワークフロー](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go コード基準](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM 画像ガイドライン](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM 初期設定](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM Docker インストールガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux システムインストール](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux 開発ガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM ローカルアクションガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM ロギング規約](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM オフラインデプロイメント](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc ツール](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM テスティングガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM ユーティリティ Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile ユーティリティ](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM スクリプトユーティリティ](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM バージョニング](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [バックエンド管理とモニターデプロイメント](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [OpenIM 用 Mac 開発者デプロイメントガイド](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: コミュニティ
+
+- 📚 [OpenIM コミュニティ](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM 興味グループ](https://github.com/Openim-sigs)
+- 🚀 [私たちの Slack コミュニティに参加する](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [私たちの WeChat(微信群)に参加する](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: コミュニティミーティング
+
+私たちは、誰もがコミュニティに参加し、コードに貢献してもらいたいと考えています。私たちは、ギフトや報酬を提供し、毎週木曜日の夜に参加していただくことを歓迎します。
+
+私たちの会議は[OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)🎯 で行われます。そこで Open-IM-Server パイプラインを検索して参加できます。
+
+私たちは[隔週の会議](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting)のメモを[GitHub ディスカッション](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting)に記録しています。歴史的な会議のメモや会議のリプレイは[Google Docs📑](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing)で利用可能です。
+
+## :eyes: OpenIM を使用している人たち
+
+プロジェクトユーザーのリストについては、[ユーザーケーススタディ](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md)ページをご覧ください。[コメント 📝](https://github.com/openimsdk/open-im-server-deploy/issues/379)を残して、あなたの使用例を共有することを躊躇しないでください。
+
+## :page_facing_up: ライセンス
+
+OpenIM は Apache 2.0 ライセンスの下でライセンスされています。完全なライセンステキストについては、[LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE)を参照してください。
+
+このリポジトリに表示される[OpenIM](https://github.com/openimsdk/open-im-server-deploy)ロゴ、そのバリエーション、およびアニメーションバージョン([assets/logo](../../assets/logo)および[assets/logo-gif](../../assets/logo-gif)ディレクトリ内)は、著作権法によって保護されています。
+
+## 🔮 貢献者の皆様に感謝します!
+
+
+
+
diff --git a/docs/readme/README_ko.md b/docs/readme/README_ko.md
new file mode 100644
index 0000000..761d884
--- /dev/null
+++ b/docs/readme/README_ko.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ OpenIM에 대하여
+
+OpenIM은 채팅, 오디오-비디오 통화, 알림 및 AI 챗봇을 애플리케이션에 통합하기 위해 특별히 설계된 서비스 플랫폼입니다. 이 플랫폼은 강력한 API와 웹훅을 제공하여 개발자가 이러한 상호작용 기능을 애플리케이션에 쉽게 통합할 수 있게 합니다. OpenIM은 독립 실행형 채팅 애플리케이션이 아니라, 다른 애플리케이션들이 풍부한 커뮤니케이션 기능을 달성할 수 있도록 지원하는 플랫폼으로서의 역할을 합니다. 다음 다이어그램은 AppServer, AppClient, OpenIMServer, 및 OpenIMSDK 간의 상호작용을 자세히 설명하기 위해 제시되었습니다.
+
+
+
+## 🚀 OpenIMSDK에 대하여
+
+**OpenIMSDK**는 **OpenIMServer**를 위해 특별히 제작된 IM SDK로, 클라이언트 애플리케이션 내에 내장하기 위해 설계되었습니다. 그 주요 기능 및 모듈은 다음과 같습니다:
+
+- 🌟 주요 기능:
+
+ - 📦 로컬 스토리지
+ - 🔔 리스너 콜백
+ - 🛡️ API 래핑
+ - 🌐 연결 관리
+
+- 📚 주요 모듈:
+
+ 1. 🚀 초기화 및 로그인
+ 2. 👤 사용자 관리
+ 3. 👫 친구 관리
+ 4. 🤖 그룹 기능
+ 5. 💬 대화 처리
+
+이는 Golang을 사용하여 구축되었으며, 모든 플랫폼에서 일관된 접근 경험을 보장하는 크로스 플랫폼 배포를 지원합니다.
+
+👉 **[GO SDK 탐색하기](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 OpenIMServer에 대하여
+
+- **OpenIMServer** 는 다음과 같은 특성을 가지고 있습니다:
+ - 🌐 마이크로서비스 아키텍처: 게이트웨이 및 다수의 rpc 서비스를 포함하는 클러스터 모드를 지원합니다.
+ - 🚀 다양한 배포 방법: 소스 코드, 쿠버네티스 또는 도커를 통한 배포를 지원합니다.
+ - 대규모 사용자 기반 지원: 수십만 명의 사용자를 포함하는 초대형 그룹, 수천만 명의 사용자 및 수십억 건의 메시지를 지원합니다.
+
+### 강화된 비즈니스 기능:
+
+- **REST API**: OpenIMServer는 비즈니스 시스템을 위한 REST API를 제공하여, 백엔드 인터페이스를 통해 그룹 생성 및 푸시 메시지 전송과 같은 더 많은 기능을 비즈니스에 제공하기 위해 설계되었습니다.
+- **Webhooks**: OpenIMServer는 더 많은 비즈니스 형태를 확장할 수 있는 콜백 기능을 제공합니다. 콜백이란 메시지 전송 전후와 같은 특정 이벤트 전후에 OpenIMServer가 비즈니스 서버로 요청을 보내는 것을 의미합니다.
+
+👉 **[더 알아보기](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: 전체 아키텍처
+
+Open-IM-Server의 기능의 핵심으로 들어가 우리의 아키텍처 다이어그램을 자세히 살펴보세요.
+
+
+
+## :rocket: 빠른 시작
+
+우리는 많은 플랫폼을 지원합니다. 웹 측에서 빠른 체험을 위한 주소는 다음과 같습니다:
+
+👉 **[OpenIM online demo](https://www.openim.io/zh/commercial)**
+
+🤲 사용자 경험을 용이하게 하기 위해, 다양한 배포 솔루션을 제공합니다. 아래 목록에서 배포 방법을 선택할 수 있습니다:
+
+- **[소스 코드 배포 가이드](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[docker 배포 가이드](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Kubernetes 배포 가이드](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Mac 개발자 배포 가이드](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: OpenIM 개발 시작하기
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+OpenIM의 목표는 최상위 수준의 오픈 소스 커뮤니티를 구축하는 것입니다. 우리는 [커뮤니티 리포지토리](https://github.com/OpenIMSDK/community)에 일련의 표준을 가지고 있습니다.
+
+이 Open-IM-Server 리포지토리에 기여하고 싶다면, 우리의 [기여자 문서](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md)를 읽어주세요.
+
+시작하기 전에, 변경 사항이 필요한지 확인해 주세요. 가장 좋은 방법은 [새로운 토론](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose)을 생성하거나 [Slack 통신을](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 하거나, 문제를 발견했다면 먼저 [보고](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose)하는 것입니다.
+
+- [OpenIM API 참조](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash 로깅](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD 액션](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM 코드 규칙](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM 커밋 가이드라인](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM 개발 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM 디렉토리 구조](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM 환경 설정](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM 오류 코드 참조](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git 작업 흐름](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git 체리 픽 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub 작업 흐름](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go 코드 표준](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM 이미지 가이드라인](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM 초기 구성](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM docker 설치 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux 시스템 설치](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux 개발 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM 로컬 액션 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM 로깅 규칙](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM 오프라인 배포](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc 도구](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM 테스트 가이드](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM 유틸리티 Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM 메이크파일 유틸리티](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM 스크립트 유틸리티](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM 버전 관리](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [백엔드 관리 및 모니터 배포](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [맥 개발자 배포 가이드 for OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: 커뮤니티
+
+- 📚 [OpenIM 커뮤니티](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM 관심 그룹](https://github.com/Openim-sigs)
+- 🚀 [우리의 Slack 커뮤니티에 가입하기](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [우리의 위챗(微信群)에 가입하기](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: 커뮤니티 미팅
+
+우리는 누구나 커뮤니티에 참여하고 코드를 기여할 수 있도록 하며, 선물과 보상을 제공하며, 매주 목요일 밤에 여러분을 환영합니다.
+
+우리의 회의는 [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯에서 이루어지며, Open-IM-Server 파이프라인을 검색하여 참여할 수 있습니다.
+
+우리는 [격주 회의](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting)의 메모를 [GitHub 토론](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting)에 기록하며, 과거 회의 노트와 회의 녹화본은 [Google Docs 📑](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing)에서 확인할 수 있습니다.
+
+## :eyes: OpenIM을 사용하는 사람들
+
+프로젝트 사용자 목록을 위한 우리의 [사용자 사례 연구](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) 페이지를 확인하세요. 사용 사례를 공유하고 싶다면 주저하지 말고 [📝코멘트](https://github.com/openimsdk/open-im-server-deploy/issues/379)를 남겨주세요.
+
+## :page_facing_up: 라이선스
+
+OpenIM은 Apache 2.0 라이선스에 따라 라이선스가 부여됩니다. 전체 라이선스 텍스트는 [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE)에서 확인할 수 있습니다.
+
+이 리포지토리 [OpenIM](https://github.com/openimsdk/open-im-server-deploy)에 표시된 OpenIM 로고, 그 변형 및 애니메이션 버전은 [assets/logo](../../assets/logo) 및 [assets/logo-gif](../../assets/logo-gif) 디렉토리 아래에 있으며, 저작권법에 의해 보호됩니다.
+
+## 🔮 우리의 기여자들에게 감사합니다!
+
+
+
+
diff --git a/docs/readme/README_tr.md b/docs/readme/README_tr.md
new file mode 100644
index 0000000..c6019d7
--- /dev/null
+++ b/docs/readme/README_tr.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ OpenIM Hakkında
+
+OpenIM, uygulamalara sohbet, sesli-görüntülü aramalar, bildirimler ve AI sohbet robotları entegre etmek için özel olarak tasarlanmış bir hizmet platformudur. Güçlü API'ler ve Webhook'lar sunarak, geliştiricilerin bu etkileşimli özellikleri uygulamalarına kolayca dahil etmelerini sağlar. OpenIM bağımsız bir sohbet uygulaması değildir; zengin iletişim işlevselliği kazanmaları için diğer uygulamaları destekleyen bir platform olarak hizmet verir. Aşağıdaki diyagram, AppServer, AppClient, OpenIMServer ve OpenIMSDK arasındaki etkileşimi ayrıntılı olarak açıklamak için sunulmuştur.
+
+
+
+## 🚀 OpenIMSDK Hakkında
+
+**OpenIMSDK**, istemci uygulamalarına gömülmek üzere özel olarak oluşturulmuş, **OpenIMServer** için tasarlanmış bir IM SDK'sıdır. Ana özellikleri ve modülleri aşağıdaki gibidir:
+
+- 🌟 Ana Özellikler:
+
+ - 📦 Yerel depolama
+ - 🔔 Dinleyici geri çağırmaları
+ - 🛡️ API sarımı
+ - 🌐 Bağlantı yönetimi
+
+- 📚 Ana Modüller:
+
+ 1. 🚀 Başlatma ve Giriş
+ 2. 👤 Kullanıcı Yönetimi
+ 3. 👫 Arkadaş Yönetimi
+ 4. 🤖 Grup Fonksiyonları
+ 5. 💬 Konuşma Yönetimi
+
+Golang kullanılarak inşa edilmiş ve tüm platformlarda tutarlı bir erişim deneyimi sağlayacak şekilde çapraz platform dağıtımını destekler.
+
+👉 **[GO SDK Keşfet](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 OpenIMServer Hakkında
+
+- **OpenIMServer** aşağıdaki özelliklere sahiptir:
+  - 🌐 Mikroservis mimarisi: Bir ağ geçidi (gateway) ve birden çok rpc servisi içeren küme modunu destekler.
+ - 🚀 Çeşitli dağıtım yöntemleri: Kaynak kodu, Kubernetes veya Docker aracılığıyla dağıtımı destekler.
+ - Büyük kullanıcı tabanı desteği: Yüz binlerce kullanıcısı olan süper büyük gruplar, on milyonlarca kullanıcı ve milyarlarca mesaj.
+
+### Geliştirilmiş İşlevsellik:
+
+- **REST API**: OpenIMServer, işletmeleri gruplar oluşturma ve arka plan arayüzleri aracılığıyla itme mesajları gönderme gibi daha fazla işlevsellikle güçlendirmeyi amaçlayan iş sistemleri için REST API'leri sunar.
+- **Webhooks**: OpenIMServer, daha fazla iş formunu genişletme yetenekleri sağlayan geri çağırma özellikleri sunar. Geri çağırma, OpenIMServer'ın belirli bir olaydan önce veya sonra, örneğin bir mesaj göndermeden önce veya sonra iş sunucusuna bir istek göndermesi anlamına gelir.
+
+👉 **[Daha fazla bilgi edinin](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Genel Mimarisi
+
+Mimari diyagramımızla Open-IM-Server'ın işlevselliğinin kalbine dalın.
+
+
+
+## :rocket: Hızlı Başlangıç
+
+Birçok platformu destekliyoruz. Web tarafında hızlı deneyim için adresler şunlardır:
+
+👉 **[OpenIM online demo](https://www.openim.io/zh/commercial)**
+
+🤲 Kullanıcı deneyimini kolaylaştırmak için çeşitli dağıtım çözümleri sunuyoruz. Aşağıdaki listeden dağıtım yönteminizi seçebilirsiniz:
+
+- **[Kaynak Kodu Dağıtım Kılavuzu](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker Dağıtım Kılavuzu](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Kubernetes Dağıtım Kılavuzu](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Mac Geliştirici Dağıtım Kılavuzu](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: OpenIM Geliştirmeye Başlamak
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+OpenIM olarak amacımız, üst düzey bir açık kaynak topluluğu oluşturmaktır. [Topluluk deposunda](https://github.com/OpenIMSDK/community) bir dizi standardımız var.
+
+Bu Open-IM-Server deposuna katkıda bulunmak istiyorsanız, lütfen katkıda bulunanlar için [dokümantasyonumuzu](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md) okuyun.
+
+Başlamadan önce, lütfen değişikliklerinizin talep edildiğinden emin olun. Bunun için en iyi yol, [yeni bir tartışma oluşturmak](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) veya [Slack üzerinden iletişim kurmak](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), ya da bir sorun bulduysanız önce bunu [raporlamaktır](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose).
+
+- [OpenIM API Referansı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash Günlüğü](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD İşlemleri](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM Kod Kuralları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM Taahhüt Kuralları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM Geliştirme Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM Dizin Yapısı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM Ortam Kurulumu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM Hata Kodu Referansı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git İş Akışı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git Cherry Pick Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub İş Akışı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go Kod Standartları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM Görüntü Kuralları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM İlk Yapılandırma](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM Docker Kurulum Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux Sistem Kurulumu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux Geliştirme Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM Yerel İşlemler Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM Günlük Kuralları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM Çevrimdışı Dağıtım](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc Araçları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM Test Kılavuzu](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Yardımcı Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile Yardımcı Programları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM Betik Yardımcı Programları](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM Sürümleme](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Arka uç yönetimi ve izleme dağıtımı](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Mac Geliştirici Dağıtım Kılavuzu for OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Topluluk
+
+- 📚 [OpenIM Topluluğu](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM İlgi Grubu](https://github.com/Openim-sigs)
+- 🚀 [Slack topluluğumuza katılın](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Wechat grubumuza katılın (微信群)](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Topluluk Toplantıları
+
+Topluluğumuza herkesin katılmasını ve kod katkısında bulunmasını istiyoruz, hediyeler ve ödüller sunuyoruz ve sizi her Perşembe gecesi bize katılmaya davet ediyoruz.
+
+Toplantılarımız [OpenIM Slack'te](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯 yapılır; Open-IM-Server boru hattını arayarak katılabilirsiniz.
+
+İki haftada bir yapılan [toplantıların notlarını](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) [GitHub tartışmalarında](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting) tutuyoruz. Geçmiş toplantı notlarımız ve toplantı kayıtları [Google Docs'ta](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing) 📑 mevcuttur.
+
+## :eyes: Kimler OpenIM Kullanıyor
+
+Proje kullanıcılarının bir listesi için [kullanıcı vaka çalışmaları](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) sayfamıza göz atın. Bir 📝[yorum](https://github.com/openimsdk/open-im-server-deploy/issues/379) bırakmaktan ve kullanım durumunuzu paylaşmaktan çekinmeyin.
+
+## :page_facing_up: Lisans
+
+OpenIM, Apache 2.0 lisansı altında lisanslanmıştır. Tam lisans metni için [LICENSE'ı](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) görün.
+
+Bu depoda, [assets/logo](../../assets/logo) ve [assets/logo-gif](../../assets/logo-gif) dizinlerinde görüntülenen [OpenIM](https://github.com/openimsdk/open-im-server-deploy) logosu, çeşitleri ve animasyonlu versiyonları, telif hakkı yasaları tarafından korunmaktadır.
+
+## 🔮 Katkıda bulunanlarımıza teşekkürler!
+
+
+
+
diff --git a/docs/readme/README_uk.md b/docs/readme/README_uk.md
new file mode 100644
index 0000000..476a997
--- /dev/null
+++ b/docs/readme/README_uk.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ Про OpenIM
+
+OpenIM — це сервісна платформа, спеціально розроблена для інтеграції чату, аудіо-відеодзвінків, сповіщень і чат-ботів штучного інтелекту в програми. Він надає ряд потужних API і веб-хуків, що дозволяє розробникам легко включати ці інтерактивні функції у свої програми. OpenIM не є окремою програмою для чату, а скоріше служить платформою для підтримки інших програм у досягненні широких можливостей спілкування. На наступній діаграмі детально показано взаємодію між AppServer, AppClient, OpenIMServer і OpenIMSDK.
+
+
+
+## 🚀 Про OpenIMSDK
+
+**OpenIMSDK** – це пакет IM SDK, розроблений для **OpenIMServer**, створений спеціально для вбудовування в клієнтські програми. Його основні функції та модулі такі:
+
+- 🌟 Основні характеристики:
+
+ - 📦 Локальне сховище
+ - 🔔 Зворотні виклики слухача
+ - 🛡️ Обгортка API
+ - 🌐 Керування підключенням
+
+- 📚 Основні модулі:
+
+ 1. 🚀 Ініціалізація та вхід
+ 2. 👤 Керування користувачами
+ 3. 👫 Керування друзями
+ 4. 🤖 Групові функції
+ 5. 💬 Ведення розмови
+
+Він створений за допомогою Golang і підтримує кросплатформне розгортання, забезпечуючи послідовний доступ на всіх платформах.
+
+👉 **[Дослідити GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 Про OpenIMServer
+
+- **OpenIMServer** має такі характеристики:
+ - 🌐 Архітектура мікросервісу: підтримує режим кластера, включаючи шлюз і кілька служб rpc.
+ - 🚀 Різноманітні методи розгортання: підтримує розгортання через вихідний код, Kubernetes або Docker.
+ - Підтримка величезної бази користувачів: надвеликі групи із сотнями тисяч користувачів, десятками мільйонів користувачів і мільярдами повідомлень.
+
+### Розширена бізнес-функціональність:
+
+- **REST API**: OpenIMServer пропонує REST API для бізнес-систем, спрямованих на надання компаніям додаткових можливостей, таких як створення груп і надсилання push-повідомлень через серверні інтерфейси.
+- **Webhooks**: OpenIMServer надає можливості зворотного виклику, щоб розширити більше бізнес-форм. Зворотний виклик означає, що OpenIMServer надсилає запит на бізнес-сервер до або після певної події, наприклад до або після надсилання повідомлення.
+
+👉 **[Докладніше](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Загальна архітектура
+
+Пориньте в серце функціональності Open-IM-Server за допомогою нашої діаграми архітектури.
+
+
+
+## :rocket: Швидкий початок
+
+Ми підтримуємо багато платформ. Ось адреси для швидкого використання веб-сайту:
+
+👉 **[Онлайн-демонстрація OpenIM](https://web-enterprise.rentsoft.cn/)**
+
+🤲 Щоб полегшити роботу користувача, ми пропонуємо різні рішення для розгортання. Ви можете вибрати спосіб розгортання зі списку нижче:
+
+- **[Посібник із розгортання вихідного коду](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Посібник із розгортання Docker](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Посібник із розгортання Kubernetes](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Посібник із розгортання розробника Mac](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Щоб розпочати розробку OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+Наша мета в OpenIM — побудувати спільноту з відкритим кодом найвищого рівня. У нас є набір стандартів у [репозиторії спільноти](https://github.com/OpenIMSDK/community).
+
+Якщо ви хочете внести свій внесок у це сховище Open-IM-Server, прочитайте нашу [документацію для учасників](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Перш ніж почати, переконайтеся, що ваші зміни затребувані. Найкраще для цього створити [нове обговорення](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) або [зв'язатися в Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), або, якщо ви виявили проблему, спершу [повідомити про неї](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose).
+
+- [Довідка щодо API OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [Ведення журналу OpenIM Bash](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [Дії OpenIM CI/CD](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [Положення про код OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [Інструкції щодо фіксації OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [Посібник з розробки OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [Структура каталогу OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [Налаштування середовища OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [Довідка про код помилки OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [Робочий процес OpenIM Git](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [Посібник з OpenIM Git Cherry-Pick](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [Робочий процес OpenIM GitHub](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [Стандарти коду OpenIM Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [Інструкції щодо зображення OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [Початкова конфігурація OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [Посібник із встановлення OpenIM Docker](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [Встановлення OpenIM у системі Linux](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [Посібник із розробки OpenIM Linux](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [Локальний посібник із дій OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [Положення про протоколювання OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [Офлайн-розгортання OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [Інструменти OpenIM Protoc](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [Посібник з тестування OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [Утиліта OpenIM Go](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [Утиліти OpenIM Makefile](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [Утиліти сценарію OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [Версії OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Керування серверною частиною та моніторинг розгортання](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [Посібник із розгортання розробника Mac для OpenIM](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Спільнота
+
+- 📚 [Спільнота OpenIM](https://github.com/OpenIMSDK/community)
+- 💕 [Група інтересів OpenIM](https://github.com/Openim-sigs)
+- 🚀 [Приєднайтеся до нашої спільноти Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Приєднайтеся до нашого wechat](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Збори громади
+
+Ми хочемо, щоб будь-хто долучився до нашої спільноти та додав код, ми пропонуємо подарунки та нагороди, і ми запрошуємо вас приєднатися до нас щочетверга ввечері.
+
+Наша конференція знаходиться в [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, тоді ви можете шукати конвеєр Open-IM-Server, щоб приєднатися.
+
+Ми робимо нотатки про кожну [двотижневу зустріч](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) в [обговореннях GitHub](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting). Наші історичні нотатки зустрічей, а також повтори зустрічей доступні в [Google Docs :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Хто використовує OpenIM
+
+Перегляньте нашу сторінку [тематичні дослідження користувачів](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md), щоб отримати список користувачів проекту. Не соромтеся залишити [📝коментар](https://github.com/openimsdk/open-im-server-deploy/issues/379)і поділитися своїм випадком використання.
+
+## :page_facing_up: Ліцензія
+
+OpenIM ліцензовано за ліцензією Apache 2.0. Див. [ЛІЦЕНЗІЯ](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) для повного тексту ліцензії.
+
+Логотип [OpenIM](https://github.com/openimsdk/open-im-server-deploy), включно з його варіаціями та анімованими версіями, що відображаються в цьому сховищі в каталогах [assets/logo](../../assets/logo) і [assets/logo-gif](../../assets/logo-gif), захищені законами про авторське право.
+
+## 🔮 Дякуємо нашим дописувачам!
+
+
+
+
diff --git a/docs/readme/README_vi.md b/docs/readme/README_vi.md
new file mode 100644
index 0000000..dad0fb8
--- /dev/null
+++ b/docs/readme/README_vi.md
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+
+
+[](https://github.com/openimsdk/open-im-server-deploy/stargazers)
+[](https://github.com/openimsdk/open-im-server-deploy/network/members)
+[](https://app.codecov.io/gh/openimsdk/open-im-server-deploy)
+[](https://goreportcard.com/report/github.com/openimsdk/open-im-server-deploy)
+[](https://pkg.go.dev/git.imall.cloud/openim/open-im-server-deploy)
+[](https://github.com/openimsdk/open-im-server-deploy/blob/main/LICENSE)
+[](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+[](https://www.bestpractices.dev/projects/8045)
+[](https://github.com/openimsdk/open-im-server-deploy/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
+[](https://golang.org/)
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
+
+## Ⓜ️ Về OpenIM
+
+OpenIM là một nền tảng dịch vụ được thiết kế đặc biệt cho việc tích hợp chat, cuộc gọi âm thanh-video, thông báo và chatbot AI vào các ứng dụng. Nó cung cấp một loạt các API mạnh mẽ và Webhooks, giúp các nhà phát triển dễ dàng tích hợp các tính năng tương tác này vào ứng dụng của mình. OpenIM không phải là một ứng dụng chat độc lập, mà là một nền tảng hỗ trợ các ứng dụng khác để đạt được các chức năng giao tiếp phong phú. Sơ đồ sau đây minh họa sự tương tác giữa AppServer, AppClient, OpenIMServer và OpenIMSDK để giải thích chi tiết.
+
+
+
+## 🚀 About OpenIMSDK
+
+**OpenIMSDK** is an IM SDK designed for **OpenIMServer**, created specifically for embedding in client applications. Its main features and modules are as follows:
+
+- 🌟 Main Features:
+
+  - 📦 Local storage
+  - 🔔 Listener callbacks
+  - 🛡️ API wrapping
+  - 🌐 Connection management
+
+- 📚 Main Modules:
+
+  1. 🚀 Initialization and Login
+  2. 👤 User Management
+  3. 👫 Friend Management
+  4. 🤖 Group Functions
+  5. 💬 Conversation Handling
+
+It is built with Golang and supports cross-platform deployment, ensuring a consistent access experience across all platforms.
+
+👉 **[Explore the GO SDK](https://github.com/openimsdk/openim-sdk-core)**
+
+## 🌐 About OpenIMServer
+
+- **OpenIMServer** has the following characteristics:
+  - 🌐 Microservice architecture: supports cluster mode, including a gateway and multiple rpc services.
+  - 🚀 Diverse deployment methods: supports deployment from source code, Kubernetes, or Docker.
+  - Support for massive user bases: super-large groups with hundreds of thousands of users, tens of millions of users, and billions of messages.
+
+### Enhanced Business Functionality:
+
+- **REST API**: OpenIMServer provides REST APIs for business systems, empowering businesses with more functionality, such as creating groups and sending push messages through backend interfaces.
+- **Webhooks**: OpenIMServer provides callback capabilities to extend more business forms. A callback means OpenIMServer sends a request to the business server before or after a certain event, like callbacks before or after sending a message.
+
+👉 **[Learn more](https://docs.openim.io/guides/introduction/product)**
+
+## :building_construction: Overall Architecture
+
+Dive into the heart of Open-IM-Server's functionality with our architecture diagram.
+
+
+
+## :rocket: Quick Start
+
+We support many platforms. Here are the addresses for a quick web-based experience:
+
+👉 **[OpenIM online web demo](https://web-enterprise.rentsoft.cn/)**
+
+🤲 To facilitate the user experience, we offer various deployment solutions. You can choose a deployment method from the list below:
+
+- **[Source Code Deployment Guide](https://docs.openim.io/guides/gettingStarted/imSourceCodeDeployment)**
+- **[Docker Deployment Guide](https://docs.openim.io/guides/gettingStarted/dockerCompose)**
+- **[Kubernetes Deployment Guide](https://docs.openim.io/guides/gettingStarted/k8s-deployment)**
+- **[Mac Developer Deployment Guide](https://docs.openim.io/guides/gettingstarted/mac-deployment-guide)**
+
+## :hammer_and_wrench: Getting Started Developing OpenIM
+
+[](https://vscode.dev/github/openimsdk/open-im-server-deploy)
+
+OpenIM's goal is to build a top-level open source community. We have a set of standards in the [Community repository](https://github.com/OpenIMSDK/community).
+
+If you'd like to contribute to this Open-IM-Server repository, please read our [contributor documentation](https://github.com/openimsdk/open-im-server-deploy/blob/main/CONTRIBUTING.md).
+
+Before you start, please make sure your changes are in demand. The best way is to create a [new discussion](https://github.com/openimsdk/open-im-server-deploy/discussions/new/choose) or [Slack communication](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A), or, if you find an issue, [report it](https://github.com/openimsdk/open-im-server-deploy/issues/new/choose) first.
+
+- [OpenIM API Reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/api.md)
+- [OpenIM Bash Logging](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/bash-log.md)
+- [OpenIM CI/CD Actions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/cicd-actions.md)
+- [OpenIM Code Conventions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/code-conventions.md)
+- [OpenIM Commit Guidelines](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/commit.md)
+- [OpenIM Development Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/development.md)
+- [OpenIM Directory Structure](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/directory.md)
+- [OpenIM Environment Setup](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/environment.md)
+- [OpenIM Error Code Reference](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/error-code.md)
+- [OpenIM Git Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/git-workflow.md)
+- [OpenIM Git Cherry-Pick Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/gitcherry-pick.md)
+- [OpenIM GitHub Workflow](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/github-workflow.md)
+- [OpenIM Go Code Standards](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/go-code.md)
+- [OpenIM Image Guidelines](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/images.md)
+- [OpenIM Initial Configuration](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/init-config.md)
+- [OpenIM Docker Installation Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-docker.md)
+- [OpenIM Linux System Installation Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/install-openim-linux-system.md)
+- [OpenIM Linux Development Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/linux-development.md)
+- [OpenIM Local Actions Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/local-actions.md)
+- [OpenIM Logging Conventions](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/logging.md)
+- [OpenIM Offline Deployment](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/offline-deployment.md)
+- [OpenIM Protoc Tools](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/protoc-tools.md)
+- [OpenIM Testing Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/test.md)
+- [OpenIM Go Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-go.md)
+- [OpenIM Makefile Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-makefile.md)
+- [OpenIM Script Utilities](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/util-scripts.md)
+- [OpenIM Version Management](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/version.md)
+- [Backend deployment and monitoring management](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/prometheus-grafana.md)
+- [OpenIM Mac Developer Deployment Guide](https://github.com/openimsdk/open-im-server-deploy/tree/main/docs/contrib/mac-developer-deployment-guide.md)
+
+## :busts_in_silhouette: Community
+
+- 📚 [OpenIM Community](https://github.com/OpenIMSDK/community)
+- 💕 [OpenIM Interest Group](https://github.com/Openim-sigs)
+- 🚀 [Join our Slack community](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A)
+- :eyes: [Join our WeChat group (微信群)](https://openim-1253691595.cos.ap-nanjing.myqcloud.com/WechatIMG20.jpeg)
+
+## :calendar: Community Meetings
+
+We want anyone to get involved in our community and contribute code; we offer gifts and rewards, and we welcome you to join us every Thursday night.
+
+Our conference is held in the [OpenIM Slack](https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A) 🎯, where you can search for the Open-IM-Server pipeline to join.
+
+We take notes of each [biweekly meeting](https://github.com/orgs/OpenIMSDK/discussions/categories/meeting) in [GitHub discussions](https://github.com/openimsdk/open-im-server-deploy/discussions/categories/meeting); our historical meeting notes, as well as meeting replays, are available at [Google Docs :bookmark_tabs:](https://docs.google.com/document/d/1nx8MDpuG74NASx081JcCpxPgDITNTpIIos0DS6Vr9GU/edit?usp=sharing).
+
+## :eyes: Who Is Using OpenIM
+
+Check out our [user case studies](https://github.com/OpenIMSDK/community/blob/main/ADOPTERS.md) page for a list of the project users. Don't hesitate to leave a [📝comment](https://github.com/openimsdk/open-im-server-deploy/issues/379) and share your use case.
+
+## :page_facing_up: License
+
+OpenIM is licensed under the Apache 2.0 license. See [LICENSE](https://github.com/openimsdk/open-im-server-deploy/tree/main/LICENSE) for the full license text.
+
+The OpenIM logo, including its variations and animated versions, displayed in this repository [OpenIM](https://github.com/openimsdk/open-im-server-deploy) under the [assets/logo](../../assets/logo) and [assets/logo-gif](assets/logo-gif) directories, is protected by copyright laws.
+
+## 🔮 Thank you for your contributions!
+
+
+
+
diff --git a/docs/redpacket-api.md b/docs/redpacket-api.md
new file mode 100644
index 0000000..a3249d3
--- /dev/null
+++ b/docs/redpacket-api.md
@@ -0,0 +1,565 @@
+# Red Packet API Documentation
+
+## API List
+
+### 1. Send Red Packet
+
+**Endpoint**: `POST /redpacket/send_redpacket`
+
+**Description**: Sends a red packet in a group chat; the sending user defaults to the group owner
+
+**Request Parameters**:
+
+```json
+{
+  "groupID": "group123",   // Group ID (required, string)
+  "redPacketType": 1,      // Red packet type (required, int32): 1 = normal (evenly split), 2 = lucky (randomly split)
+  "totalAmount": 10000,    // Total amount (required, int64, in cents)
+  "totalCount": 10,        // Total number of packets (required, int32)
+  "blessing": "恭喜发财"    // Blessing text (optional, string)
+}
+```
+
+**Parameter Notes**:
+- `groupID`: group ID; must be an existing group
+- `redPacketType`: red packet type
+  - `1`: normal red packet, each packet amount = total amount / total count
+  - `2`: lucky red packet, amounts are randomly allocated
+- `totalAmount`: total amount, in cents (e.g. 10000 = 100 yuan)
+- `totalCount`: total number of packets, must be greater than 0
+- `blessing`: blessing text, optional
+
+**Response Parameters**:
+
+```json
+{
+  "errCode": 0,   // Error code; 0 means success
+  "errMsg": "",   // Error message
+  "errDlt": "",   // Error details
+  "data": {
+    "redPacketID": "rp_1234567890abcdef",   // Red packet ID (string)
+    "serverMsgID": "msg_1234567890abcdef",  // Server message ID (string)
+    "clientMsgID": "client_1234567890",     // Client message ID (string)
+    "sendTime": 1704067200000               // Send timestamp (int64, milliseconds)
+  }
+}
+```
+
+**Response Notes**:
+- `redPacketID`: unique red packet identifier, used for later receive and query operations
+- `serverMsgID`: server-generated message ID
+- `clientMsgID`: client message ID
+- `sendTime`: send timestamp (milliseconds)
+
+**Error Codes**:
+- `0`: success
+- `1001`: invalid parameters (amount, count, type, etc.)
+- `1002`: group does not exist
+- `1003`: group owner does not exist
+- `1004`: failed to create the red packet record
+- `1005`: failed to send the message
+
+**Request Example**:
+
+```bash
+curl -X POST http://localhost:10002/redpacket/send_redpacket \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "groupID": "group123",
+ "redPacketType": 1,
+ "totalAmount": 10000,
+ "totalCount": 10,
+ "blessing": "恭喜发财"
+ }'
+```
+
+**Response Example**:
+
+```json
+{
+ "errCode": 0,
+ "errMsg": "",
+ "errDlt": "",
+ "data": {
+ "redPacketID": "rp_1234567890abcdef",
+ "serverMsgID": "msg_1234567890abcdef",
+ "clientMsgID": "client_1234567890",
+ "sendTime": 1704067200000
+ }
+}
+```
+
+## Business Rules
+
+1. **Sending user**: the group owner is automatically used as the sending user; no need to specify one in the request
+2. **Red packet types**:
+   - Normal red packet (type=1): each packet amount = total amount / total count (floored; the last packet carries the remainder)
+   - Lucky red packet (type=2): amounts are randomly allocated, guaranteeing the last packet can claim whatever remains
+3. **Expiration**: red packets expire after 24 hours by default
+4. **Red packet status**:
+   - `0`: active (can be received)
+   - `1`: fully claimed
+   - `2`: expired
+5. **Message type**: red packet messages use custom message type `110` (`constant.Custom`), with the `description` field set to `"redpacket"` as the secondary type marker
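+
+The floor-division rule for normal red packets (rule 2 above) can be sketched in Go. `splitEven` is a hypothetical helper name used here for illustration; the server's actual function may differ, but the invariant is the same: each share is floored and the last packet absorbs the remainder.
+
+```go
+package main
+
+import "fmt"
+
+// splitEven divides totalAmount (in cents) into count equal packets,
+// flooring each share and putting the remainder into the last packet.
+func splitEven(totalAmount int64, count int32) []int64 {
+	n := int64(count)
+	avg := totalAmount / n
+	amounts := make([]int64, count)
+	for i := range amounts {
+		amounts[i] = avg
+	}
+	amounts[count-1] += totalAmount - avg*n // remainder goes to the last packet
+	return amounts
+}
+
+func main() {
+	fmt.Println(splitEven(10000, 3)) // [3333 3333 3334]
+}
+```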
+
+## Database Records
+
+Sending a red packet automatically creates the following database records:
+
+**red_packets table**:
+- stores basic red packet information (amount, count, type, etc.)
+- stores red packet status and remaining amount/count
+- stores the expiration time
+
+**Message record**:
+- a red packet message is sent to the group chat
+- the message content contains the red packet ID and basic information
+- group members can see the red packet message
+
+## Notes
+
+1. Only group chats are supported, not one-on-one chats
+2. The sending user is always the group owner; no other user can be specified
+3. Red packet amounts are in cents, not yuan
+4. The packet count must be greater than 0
+5. The total amount must be greater than 0
+6. The red packet type can only be 1 or 2
+
+### 2. Receive Red Packet
+
+**Endpoint**: `POST /redpacket/receive`
+
+**Description**: Receives a red packet; supports both normal and lucky red packets
+
+**Request Parameters**:
+
+```json
+{
+  "redPacketID": "rp_1234567890abcdef"  // Red packet ID (required, string)
+}
+```
+
+**Parameter Notes**:
+- `redPacketID`: red packet ID, obtained from the send red packet response
+
+**Response Parameters**:
+
+```json
+{
+  "errCode": 0,   // Error code; 0 means success
+  "errMsg": "",   // Error message
+  "errDlt": "",   // Error details
+  "data": {
+    "redPacketID": "rp_1234567890abcdef",  // Red packet ID (string)
+    "amount": 1000,                        // Received amount (int64, in cents)
+    "isLucky": false                       // Whether this is the luckiest draw (bool, lucky red packets only)
+  }
+}
+```
+
+**Response Notes**:
+- `redPacketID`: red packet ID
+- `amount`: amount received, in cents
+- `isLucky`: whether this is the luckiest draw; meaningful for lucky red packets only
+  - `true`: luckiest draw (largest amount received)
+  - `false`: not the luckiest draw
+
+**Error Codes**:
+- `0`: success
+- `1001`: invalid parameters (red packet ID is empty)
+- `1004`: red packet not found (RecordNotFoundError)
+- `1801`: red packet fully claimed (RedPacketFinishedError)
+- `1802`: red packet expired (RedPacketExpiredError)
+- `1803`: user has already received this red packet (RedPacketAlreadyReceivedError)
+- `500`: internal server error (Redis queue operation failure, etc.)
+
+**Request Example**:
+
+```bash
+curl -X POST http://localhost:10002/redpacket/receive \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "redPacketID": "rp_1234567890abcdef"
+ }'
+```
+
+**Response Example**:
+
+```json
+{
+ "errCode": 0,
+ "errMsg": "",
+ "errDlt": "",
+ "data": {
+ "redPacketID": "rp_1234567890abcdef",
+ "amount": 1000,
+ "isLucky": false
+ }
+}
+```
+
+**Business Rules**:
+
+1. **Receive restrictions**:
+   - each user may receive only once
+   - the red packet must be in the "active" state
+   - the red packet must not be expired
+   - the red packet must not be fully claimed
+
+2. **Amount allocation**:
+   - **Normal red packet (type=1)**: amount = remaining amount / remaining count (floored; the last packet carries the remainder)
+   - **Lucky red packet (type=2)**: a pre-allocated random amount is taken from the Redis queue
+
+3. **Luckiest draw determination**:
+   - meaningful for lucky red packets only
+   - the user who receives the largest amount is marked as the luckiest draw
+   - if several draws tie for the largest amount, the first one received is marked as the luckiest
+
+4. **Message updates**:
+   - after receiving, the client automatically sees the received state on its next message pull
+   - receive information (`IsReceived` and `ReceiveInfo`) is filled in dynamically during message sync
+
+5. **Atomic operations**:
+   - the receive operation is atomic, ensuring concurrency safety
+   - database transactions guarantee data consistency
+
+### 3. Query Red Packet List (Admin API)
+
+**Endpoint**: `POST /redpacket/get_redpackets_by_group`
+
+**Description**: Queries the red packet list by group ID; supports querying all red packets or only those of a specified group
+
+**Request Parameters**:
+
+```json
+{
+  "groupID": "group123",  // Group ID (optional, string); empty or omitted queries all red packets
+  "pagination": {
+    "pageNumber": 1,      // Page number (optional, int32), starting from 1, default 1
+    "showNumber": 20      // Page size (optional, int32), default 20
+  }
+}
+```
+
+**Parameter Notes**:
+- `groupID`: group ID, optional
+  - if empty or omitted, all red packets are queried
+  - if a group ID is specified, only that group's red packets are queried
+- `pagination`: pagination parameters, optional
+  - `pageNumber`: page number, starting from 1, default 1
+  - `showNumber`: page size, default 20
+
+**Response Parameters**:
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "total": 100,       // Total count (int64)
+    "redPackets": [     // Red packet list (array)
+      {
+        "redPacketID": "rp_1234567890abcdef",  // Red packet ID (string)
+        "sendUserID": "user123",               // Sender ID (string)
+        "groupID": "group123",                 // Group ID (string)
+        "groupName": "测试群",                  // Group name (string)
+        "redPacketType": 1,                    // Red packet type (int32): 1 = normal, 2 = lucky
+        "totalAmount": 10000,                  // Total amount (int64, in cents)
+        "totalCount": 10,                      // Total count (int32)
+        "remainAmount": 5000,                  // Remaining amount (int64, in cents)
+        "remainCount": 5,                      // Remaining count (int32)
+        "blessing": "恭喜发财",                 // Blessing text (string)
+        "status": 0,                           // Status (int32): 0 = active, 1 = fully claimed, 2 = expired
+        "expireTime": 1704153600000,           // Expiration timestamp (int64, milliseconds)
+        "createTime": 1704067200000            // Creation timestamp (int64, milliseconds)
+      }
+    ]
+  }
+}
+```
+
+**Response Notes**:
+- `total`: total number of matching red packets
+- `redPackets`: red packet list, ordered by creation time descending
+- each entry contains the full information: ID, sender, group ID, group name, type, amount, count, remaining amount/count, status, etc.
+- `groupName`: group name, fetched via a batch group-info query; empty string if the group does not exist or the lookup fails
+
+**Error Codes**:
+- `0`: success
+- `1001`: invalid parameters
+- `500`: internal server error
+
+**Request Example**:
+
+```bash
+# Query all red packets
+curl -X POST http://localhost:10002/redpacket/get_redpackets_by_group \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "pagination": {
+ "pageNumber": 1,
+ "showNumber": 20
+ }
+ }'
+
+# Query red packets of a specific group
+curl -X POST http://localhost:10002/redpacket/get_redpackets_by_group \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "groupID": "group123",
+ "pagination": {
+ "pageNumber": 1,
+ "showNumber": 20
+ }
+ }'
+```
+
+**Response Example**:
+
+```json
+{
+ "errCode": 0,
+ "errMsg": "",
+ "errDlt": "",
+ "data": {
+ "total": 100,
+ "redPackets": [
+ {
+ "redPacketID": "rp_1234567890abcdef",
+ "sendUserID": "user123",
+ "groupID": "group123",
+ "groupName": "测试群",
+ "redPacketType": 1,
+ "totalAmount": 10000,
+ "totalCount": 10,
+ "remainAmount": 5000,
+ "remainCount": 5,
+ "blessing": "恭喜发财",
+ "status": 0,
+ "expireTime": 1704153600000,
+ "createTime": 1704067200000
+ }
+ ]
+ }
+}
+```
+
+### 4. Query Red Packet Receive Info (Admin API)
+
+**Endpoint**: `POST /redpacket/get_receive_info`
+
+**Description**: Queries the receive status of a given red packet, including all receive records and their details
+
+**Request Parameters**:
+
+```json
+{
+  "redPacketID": "rp_1234567890abcdef"  // Red packet ID (required, string)
+}
+```
+
+**Parameter Notes**:
+- `redPacketID`: red packet ID, required
+
+**Response Parameters**:
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "redPacketID": "rp_1234567890abcdef",  // Red packet ID (string)
+    "totalAmount": 10000,                  // Total amount (int64, in cents)
+    "totalCount": 10,                      // Total count (int32)
+    "remainAmount": 5000,                  // Remaining amount (int64, in cents)
+    "remainCount": 5,                      // Remaining count (int32)
+    "status": 0,                           // Status (int32): 0 = active, 1 = fully claimed, 2 = expired
+    "receives": [                          // Receive record list (array)
+      {
+        "receiveID": "rec_1234567890",     // Receive record ID (string)
+        "receiveUserID": "user456",        // Receiver ID (string)
+        "amount": 1000,                    // Received amount (int64, in cents)
+        "receiveTime": 1704070800000,      // Receive timestamp (int64, milliseconds)
+        "isLucky": false                   // Whether luckiest draw (bool, lucky red packets only)
+      }
+    ]
+  }
+}
+```
+
+**Response Notes**:
+- `redPacketID`: red packet ID
+- `totalAmount`: total red packet amount
+- `totalCount`: total packet count
+- `remainAmount`: remaining amount
+- `remainCount`: remaining count
+- `status`: red packet status
+- `receives`: receive record list, ordered by receive time ascending
+  - `receiveID`: receive record ID
+  - `receiveUserID`: receiver's user ID
+  - `amount`: amount received
+  - `receiveTime`: receive timestamp
+  - `isLucky`: whether luckiest draw (lucky red packets only)
+
+**Error Codes**:
+- `0`: success
+- `1001`: invalid parameters (red packet ID is empty)
+- `1004`: red packet not found
+- `500`: internal server error
+
+**Request Example**:
+
+```bash
+curl -X POST http://localhost:10002/redpacket/get_receive_info \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "redPacketID": "rp_1234567890abcdef"
+ }'
+```
+
+**Response Example**:
+
+```json
+{
+ "errCode": 0,
+ "errMsg": "",
+ "errDlt": "",
+ "data": {
+ "redPacketID": "rp_1234567890abcdef",
+ "totalAmount": 10000,
+ "totalCount": 10,
+ "remainAmount": 5000,
+ "remainCount": 5,
+ "status": 0,
+ "receives": [
+ {
+ "receiveID": "rec_1234567890",
+ "receiveUserID": "user456",
+ "amount": 1000,
+ "receiveTime": 1704070800000,
+ "isLucky": false
+ },
+ {
+ "receiveID": "rec_1234567891",
+ "receiveUserID": "user789",
+ "amount": 2000,
+ "receiveTime": 1704071400000,
+ "isLucky": true
+ }
+ ]
+ }
+}
+```
+
+### 5. Pause Red Packet (Admin API)
+
+**Endpoint**: `POST /redpacket/pause`
+
+**Description**: Pauses a red packet by clearing its Redis queue so it can no longer be received (effective for lucky red packets only)
+
+**Request Parameters**:
+
+```json
+{
+  "redPacketID": "rp_1234567890abcdef"  // Red packet ID (required, string)
+}
+```
+
+**Parameter Notes**:
+- `redPacketID`: red packet ID, required
+
+**Response Parameters**:
+
+```json
+{
+  "errCode": 0,
+  "errMsg": "",
+  "errDlt": "",
+  "data": {
+    "redPacketID": "rp_1234567890abcdef"  // Red packet ID (string)
+  }
+}
+```
+
+**Response Notes**:
+- `redPacketID`: red packet ID
+
+**Error Codes**:
+- `0`: success
+- `1001`: invalid parameters (red packet ID is empty)
+- `1004`: red packet not found
+- `500`: internal server error (Redis operation failure, etc.)
+
+**Request Example**:
+
+```bash
+curl -X POST http://localhost:10002/redpacket/pause \
+ -H "Content-Type: application/json" \
+ -H "token: your_token" \
+ -d '{
+ "redPacketID": "rp_1234567890abcdef"
+ }'
+```
+
+**Response Example**:
+
+```json
+{
+ "errCode": 0,
+ "errMsg": "",
+ "errDlt": "",
+ "data": {
+ "redPacketID": "rp_1234567890abcdef"
+ }
+}
+```
+
+**Business Rules**:
+
+1. **Pause operation**:
+   - clears the red packet's queue in Redis (`redpacket:queue:{redPacketID}`)
+   - once cleared, subsequent receive requests fail because the queue is empty (returning `1801: red packet has been finished`)
+   - existing receive records are unaffected; the packet simply cannot be received any further
+
+2. **Red packet types**:
+   - **Lucky red packet (type=2)**: the Redis queue is cleared
+   - **Normal red packet (type=1)**: has no Redis queue, so the call returns success directly
+
+3. **Notes**:
+   - pausing is irreversible; a cleared Redis queue cannot be restored
+   - after pausing, the red packet's status is not updated automatically, but it can no longer actually be received
+   - it is recommended to manually set the status to "fully claimed" or "expired" after pausing
+
+## API Summary
+
+| Path | Method | Description | Type |
+|---------|------|------|------|
+| `/redpacket/send_redpacket` | POST | Send red packet | User API |
+| `/redpacket/receive` | POST | Receive red packet | User API |
+| `/redpacket/get_redpackets_by_group` | POST | Query red packet list | Admin API |
+| `/redpacket/get_receive_info` | POST | Query receive info | Admin API |
+| `/redpacket/pause` | POST | Pause red packet | Admin API |
+
+## Error Code Summary
+
+| Code | Meaning | Applies to |
+|--------|------|----------|
+| `0` | Success | All APIs |
+| `1001` | Invalid parameters | All APIs |
+| `1004` | Record not found | Receive, query, pause APIs |
+| `1801` | Red packet fully claimed | Receive API |
+| `1802` | Red packet expired | Receive API |
+| `1803` | User already received this red packet | Receive API |
+| `500` | Internal server error | All APIs |
+
+## Client Message Structure
+
+For the red packet message structure received by clients, see: [Red Packet Message Structure](./redpacket-message-structure.md)
+
diff --git a/docs/redpacket-flow.md b/docs/redpacket-flow.md
new file mode 100644
index 0000000..e607b51
--- /dev/null
+++ b/docs/redpacket-flow.md
@@ -0,0 +1,390 @@
+# Red Packet System: Complete Flow
+
+## 1. Overall Architecture
+
+### Core Principles
+- **Redis handles concurrency control and computation**: atomic operations via Lua scripts
+- **MongoDB handles final records and queries**: asynchronous writes, no involvement in concurrent contention
+- **Return immediately + write asynchronously**: improves response speed and lowers latency
+- **Failure compensation**: a Redis Stream guarantees eventual data consistency
+
+### Data Flow
+```
+Client request
+    ↓
+Go API (stateless)
+    ↓
+Redis Lua script (atomic grab) ← core of concurrency control
+    ↓
+Return result immediately ✅
+    ↓
+Asynchronous write to MongoDB (goroutine)
+    ├─ write receive record
+    ├─ update wallet balance
+    └─ write to compensation Stream
+```
+
+---
+
+## 2. Send Red Packet Flow (SendRedPacket)
+
+### 1. Parameter validation
+- validate `totalAmount > 0`
+- validate `totalCount > 0`
+- validate `redPacketType` is 1 (normal) or 2 (lucky)
+
+### 2. Group validation
+- fetch group info and verify the group exists
+- fetch the group owner ID (the sending user defaults to the group owner)
+- fetch the group owner's user info (nickname, avatar)
+
+### 3. Generate the red packet ID and basic info
+- generate the red packet ID: `MD5(sendUserID + groupID + timestamp)`
+- compute the conversation ID
+- set the expiration time: 24 hours
+
+### 4. Initialize Redis data structures (key step)
+
+#### Redis Key Design
+- `rp:{packetId}:list` - red packet amount queue (List)
+- `rp:{packetId}:users` - set of users who have already received (Set)
+- expiration: 24 hours
+
+#### Normal red packet (type=1)
+```go
+avgAmount := totalAmount / totalCount
+for i := 0; i < totalCount; i++ {
+    RPush("rp:{packetId}:list", avgAmount) // every element is the average amount
+}
+Expire("rp:{packetId}:list", 24h)
+Expire("rp:{packetId}:users", 24h)
+```
+
+#### Lucky red packet (type=2)
+```go
+amounts := allocateRandomAmounts(totalAmount, totalCount) // pre-allocate the random amounts
+for _, amount := range amounts {
+    RPush("rp:{packetId}:list", amount) // every element is a different random amount
+}
+Expire("rp:{packetId}:list", 24h)
+Expire("rp:{packetId}:users", 24h)
+```
+
+**Lucky red packet allocation algorithm**:
+- guarantee each packet gets at least 1 cent
+- first n-1 packets: draw randomly between a minimum and maximum amount
+- last packet: takes the remaining amount directly
+- shuffle the order for extra randomness
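+
+The allocation steps above can be sketched as a self-contained function. The name `allocateRandomAmounts` comes from this document, but the body below is an illustrative assumption (including the cap at twice the current average, a common fairness heuristic), not the server's exact code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"math/rand"
+)
+
+// allocateRandomAmounts splits totalAmount (cents) into count random packets:
+// every packet gets at least 1 cent, the first n-1 packets draw a random
+// share, the last packet takes whatever remains, and the order is shuffled.
+func allocateRandomAmounts(totalAmount int64, count int32) []int64 {
+	amounts := make([]int64, 0, count)
+	remain := totalAmount
+	for i := int32(0); i < count-1; i++ {
+		left := int64(count) - int64(i) // packets still to fill, including this one
+		// Reserve at least 1 cent for each remaining packet; cap a single
+		// draw at twice the current average (assumed heuristic).
+		max := remain - (left - 1)
+		if cap2 := remain / left * 2; cap2 < max {
+			max = cap2
+		}
+		if max < 1 {
+			max = 1
+		}
+		a := rand.Int63n(max) + 1
+		amounts = append(amounts, a)
+		remain -= a
+	}
+	amounts = append(amounts, remain) // last packet takes the remainder
+	rand.Shuffle(len(amounts), func(i, j int) { amounts[i], amounts[j] = amounts[j], amounts[i] })
+	return amounts
+}
+
+func main() {
+	fmt.Println(allocateRandomAmounts(10000, 10))
+}
+```
+
+Whatever the draw distribution, the invariants are the ones the document states: the amounts sum exactly to `totalAmount` and no packet is below 1 cent.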
+
+### 5. Create the MongoDB record
+```go
+redPacketRecord := &model.RedPacket{
+    RedPacketID:   redPacketID,
+    SendUserID:    sendUserID,
+    GroupID:       groupID,
+    RedPacketType: req.RedPacketType,
+    TotalAmount:   req.TotalAmount,
+    TotalCount:    req.TotalCount,
+    RemainAmount:  req.TotalAmount, // initially equals the total amount
+    RemainCount:   req.TotalCount,  // initially equals the total count
+    Status:        Active,
+    ExpireTime:    expireTime,
+    CreateTime:    time.Now(),
+}
+```
+
+### 6. Send the message
+- build the red packet message content (custom message format)
+- send the message to the group chat via RPC
+
+### 7. Return the response
+- return `redPacketID`, `serverMsgID`, `clientMsgID`, `sendTime`
+
+---
+
+## 3. Receive Red Packet Flow (ReceiveRedPacket)
+
+### 1. Parameter validation
+- get the user ID (from the token)
+- validate the request parameters
+
+### 2. Red packet status check
+- query MongoDB for the red packet's basic info
+- check whether the red packet is expired (`Status == Expired`)
+- **Note**: whether it is fully claimed is not checked here, because Redis handles that
+
+### 3. [Core] Grab via Redis Lua script
+
+#### Lua script logic
+```lua
+-- KEYS[1] = rp:{packetId}:list  (red packet amount queue)
+-- KEYS[2] = rp:{packetId}:users (set of users who have already received)
+-- ARGV[1] = userId
+
+-- Step 1: check whether the user has already received
+if redis.call('SISMEMBER', KEYS[2], ARGV[1]) == 1 then
+    return -1  -- user has already received
+end
+
+-- Step 2: grab a packet from the queue (atomic)
+local money = redis.call('LPOP', KEYS[1])
+if not money then
+    return -2  -- red packet fully claimed
+end
+
+-- Step 3: record the user as having received (atomic)
+redis.call('SADD', KEYS[2], ARGV[1])
+
+-- Step 4: return the amount
+return money
+```
+
+#### Return value handling
+- `-1` → user already received → return `ErrRedPacketAlreadyReceived`
+- `-2` → red packet fully claimed → return `ErrRedPacketFinished`
+- `>0` → grab succeeded → return the amount (cents)
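+
+The return-value mapping above can be expressed as a small pure function. `mapGrabResult` and the sentinel error variables are hypothetical names for illustration; in the real handler the value would come from running the Lua script through the Redis client:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Hypothetical sentinel errors mirroring the ones named above.
+var (
+	ErrRedPacketAlreadyReceived = errors.New("red packet already received")
+	ErrRedPacketFinished        = errors.New("red packet has been finished")
+)
+
+// mapGrabResult converts the Lua script's integer result into the grabbed
+// amount or a domain error, following the table above.
+func mapGrabResult(res int64) (int64, error) {
+	switch {
+	case res == -1:
+		return 0, ErrRedPacketAlreadyReceived
+	case res == -2:
+		return 0, ErrRedPacketFinished
+	case res > 0:
+		return res, nil // amount in cents
+	default:
+		return 0, fmt.Errorf("unexpected script result: %d", res)
+	}
+}
+
+func main() {
+	amount, err := mapGrabResult(1000)
+	fmt.Println(amount, err) // 1000 <nil>
+}
+```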
+
+### 4. Return the result immediately
+```go
+resp.RedPacketID = req.RedPacketID
+resp.Amount = amount
+resp.IsLucky = false
+apiresp.GinSuccess(c, resp) // return immediately; do not wait for the MongoDB write
+```
+
+### 5. [Async write] Background goroutine processing
+
+#### 5.1 Write to the compensation Stream
+```go
+streamKey := "rp:" + redPacketID + ":stream"
+XAdd(streamKey, {
+    "userID": userID,
+    "amount": amount,
+    "time": timestamp
+})
+```
+**Purpose**: used for failure compensation, ensuring eventual data consistency
+
+#### 5.2 Write the receive record to MongoDB
+```go
+receiveRecord := &model.RedPacketReceive{
+    ReceiveID:     receiveID,
+    RedPacketID:   redPacketID,
+    ReceiveUserID: userID,
+    Amount:        amount,
+    ReceiveTime:   time.Now(),
+    IsLucky:       false,
+}
+Create(receiveRecord)
+```
+**Idempotency protection**:
+- unique index: `(receive_user_id, red_packet_id)`
+- a unique-index conflict (E11000 error) means the record was already written, so the error is ignored
+
+#### 5.3 Update the wallet balance
+```go
+// look up or create the wallet
+wallet := Take(userID)
+if not exists {
+    Create(wallet) // initial balance is 0
+}
+
+// update the balance with a version-number optimistic lock
+UpdateBalanceWithVersion(userID, "add", amount, oldVersion)
+// on version conflict, retry automatically (up to 3 times)
+
+// create a balance record
+CreateBalanceRecord({
+    Operation: "add",
+    Amount:    amount,
+    Remark:    "received red packet: " + redPacketID
+})
+```
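+
+The check-version-then-retry control flow in step 5.3 can be sketched with an in-memory wallet. In production the compare step would be a MongoDB update filtered on `{user_id, version}`; all names below are hypothetical and the sketch only shows the retry logic:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// wallet models the version field used for optimistic locking.
+type wallet struct {
+	Balance int64
+	Version int64
+}
+
+var errVersionConflict = errors.New("version conflict")
+
+// updateWithVersion applies delta only if the stored version still matches
+// oldVersion, bumping the version on success.
+func updateWithVersion(w *wallet, delta, oldVersion int64) error {
+	if w.Version != oldVersion {
+		return errVersionConflict
+	}
+	w.Balance += delta
+	w.Version++
+	return nil
+}
+
+// addBalance retries up to three times on version conflicts, re-reading the
+// current version before each attempt, as described above.
+func addBalance(w *wallet, delta int64) error {
+	var err error
+	for attempt := 0; attempt < 3; attempt++ {
+		if err = updateWithVersion(w, delta, w.Version); err == nil {
+			return nil
+		}
+	}
+	return err
+}
+
+func main() {
+	w := &wallet{}
+	_ = addBalance(w, 1000)
+	fmt.Println(w.Balance, w.Version) // 1000 1
+}
+```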
+
+---
+
+## 4. Concurrency Control
+
+### 1. Redis level (core)
+- **Lua script atomicity**: the whole grab is one atomic operation
+- **LPOP atomicity**: popping an amount from the queue is atomic
+- **SADD atomicity**: recording the user is atomic
+- **SISMEMBER check**: prevents duplicate receives
+
+### 2. MongoDB level (backstop)
+- **Unique index**: unique index on `(receive_user_id, red_packet_id)`
+- **Idempotent writes**: duplicate-write errors are ignored if the record already exists
+
+### 3. Wallet balance updates
+- **Version-number optimistic lock**: a `version` field prevents concurrent overwrites
+- **Automatic retry**: retries on conflicts (up to 3 times)
+---
+
+## 5. Failure Compensation
+
+### 1. Redis Stream records
+- after every successful receive, an entry is written to a Redis Stream
+- the Stream entry contains: `userID`, `amount`, `time`
+
+### 2. Compensation worker (to be implemented)
+```go
+// background worker consuming the Stream
+for {
+    messages := XRead("rp:{packetId}:stream", lastID)
+    for _, msg := range messages {
+        // check whether MongoDB already has the record
+        if !ExistsInMongoDB(msg.userID, msg.redPacketID) {
+            // re-write it to MongoDB
+            WriteToMongoDB(msg)
+        }
+        // ACK the message
+        XAck(streamKey, msg.ID)
+    }
+}
+```
+
+### 3. Compensation scenarios
+- Redis succeeded but the MongoDB write failed
+- a service crash interrupted the asynchronous write
+- a transient network failure caused the write to fail
+
+---
+
+## 6. Redis Key Design Summary
+
+| Key | Type | Description | TTL |
+|-----|------|------|----------|
+| `rp:{packetId}:list` | List | red packet amount queue | 24 hours |
+| `rp:{packetId}:users` | Set | set of users who have already received | 24 hours |
+| `rp:{packetId}:stream` | Stream | receive log stream (for compensation) | 24 hours |
+
+---
+
+## 7. Data Consistency Guarantees
+
+### 1. Strong consistency (Redis)
+- the Redis Lua script makes the grab atomic
+- guarantees no over-issuing and no duplicate receives
+
+### 2. Eventual consistency (MongoDB)
+- MongoDB writes are asynchronous and may lag briefly
+- the unique index ensures idempotency
+- the compensation mechanism ensures eventual consistency
+
+### 3. Wallet balance
+- version-number optimistic locking
+- automatic retry mechanism
+- ensures the balance update succeeds
+
+---
+
+## 8. Performance Characteristics
+
+### 1. High concurrency
+- Redis Lua scripts are atomic and naturally support high concurrency
+- supports tens of thousands of users grabbing simultaneously
+
+### 2. Low latency
+- results are returned immediately without waiting for the MongoDB write
+- response time mainly depends on Redis performance
+
+### 3. High availability
+- MongoDB stays out of concurrent contention and cannot be overwhelmed
+- asynchronous writes do not affect the main flow
+- the compensation mechanism prevents data loss
+
+---
+
+## 9. Key Code Locations
+
+### Sending
+- `internal/api/redpacket.go:SendRedPacket()` - main flow
+- `internal/api/redpacket.go:allocateRandomAmounts()` - lucky red packet allocation algorithm
+
+### Receiving
+- `internal/api/redpacket.go:ReceiveRedPacket()` - main flow
+- `internal/api/redpacket.go:grabRedPacketFromRedis()` - Redis Lua script execution
+- `internal/api/redpacket.go:grabRedPacketLuaScript` - Lua script definition
+- `internal/api/redpacket.go:writeRedPacketRecordToMongoDB()` - asynchronous MongoDB write
+- `internal/api/redpacket.go:updateWalletBalanceAsync()` - asynchronous wallet balance update
+- `internal/api/redpacket.go:writeToCompensationStream()` - compensation Stream write
+
+### Redis key generation
+- `internal/api/redpacket.go:getRedPacketQueueKey()` - queue key
+- `internal/api/redpacket.go:getRedPacketUsersKey()` - user set key
+- `internal/api/redpacket.go:getRedPacketStreamKey()` - Stream key
+
+---
+
+## 10. Notes
+
+1. **Redis must be available**: both sending and receiving depend on Redis
+2. **MongoDB writes are asynchronous**: they may lag briefly, without affecting user experience
+3. **Compensation**: a background worker consuming the Stream still needs to be implemented
+4. **Unique index**: the unique index on MongoDB receive records is the key to idempotency
+5. **Wallet balance**: updated asynchronously, possibly with a short delay
+
+---
+
+## 11. Flow Diagrams
+
+### Send flow
+```
+Validate parameters
+    ↓
+Validate group info
+    ↓
+Generate red packet ID
+    ↓
+Initialize Redis (key step)
+    ├─ normal: push N average amounts
+    └─ lucky: push N random amounts
+    ↓
+Create MongoDB record
+    ↓
+Send message
+    ↓
+Return result
+```
+
+### Receive flow
+```
+Validate parameters
+    ↓
+Query red packet basic info
+    ↓
+Check expiration
+    ↓
+[Core] Redis Lua script grab
+    ├─ check already received (SISMEMBER)
+    ├─ grab packet (LPOP)
+    └─ record user (SADD)
+    ↓
+Return result immediately ✅
+    ↓
+Asynchronous write (goroutine)
+    ├─ write compensation Stream
+    ├─ write MongoDB receive record
+    └─ update wallet balance
+```
+
+---
+
+## 12. Error Handling
+
+### Send errors
+- Redis unavailable → return an error; sending is not allowed
+- MongoDB write failed → return an error; sending is not allowed
+
+### Receive errors
+- red packet does not exist → return `red packet not found`
+- red packet expired → return `red packet expired`
+- user already received → return `red packet already received` (Redis check)
+- red packet fully claimed → return `red packet finished` (Redis check)
+- Redis unavailable → return `redis client is not available`
+
+### Asynchronous write errors
+- MongoDB write failed → log it; handled by the compensation mechanism
+- wallet balance update failed → log it; can be compensated manually
+- Stream write failed → log it; does not affect the main flow
diff --git a/docs/redpacket-implementation-plan.md b/docs/redpacket-implementation-plan.md
new file mode 100644
index 0000000..ece42fe
--- /dev/null
+++ b/docs/redpacket-implementation-plan.md
@@ -0,0 +1,208 @@
+# Red Packet Feature Implementation Plan
+
+## Overview
+Implement a WeChat-style red packet feature, including:
+- sending red packets (normal and lucky)
+- receiving red packets
+- viewing red packet details and receive records
+- red packet expiration handling
+
+## Files to Modify
+
+### 1. Message type definition
+**File**: `../protocol/constant/constant.go`
+- add the `RedPacket = 123` message type constant
+- add red packet push content to the `ContentType2PushContent` map
+
+### 2. Database models
+**New file**: `pkg/common/storage/model/redpacket.go`
+- `RedPacket` - main red packet table
+- `RedPacketReceive` - red packet receive record table
+
+### 3. Database operation interfaces
+**New file**: `pkg/common/storage/database/redpacket.go`
+- define the red packet database operation interface
+
+### 4. MongoDB implementation
+**New file**: `pkg/common/storage/database/mgo/redpacket.go`
+- implement the red packet database operations
+
+### 5. RPC service
+**New directory**: `internal/rpc/redpacket/`
+- `redpacket.go` - red packet RPC service implementation
+- `server.go` - RPC service registration
+
+### 6. API endpoints
+**New file**: `internal/api/redpacket.go`
+- send red packet endpoint
+- receive red packet endpoint
+- query red packet detail endpoint
+- query red packet receive records endpoint
+
+### 7. Message type handling
+**Modified files**:
+- `internal/api/msg.go` - add red packet message parsing
+- `internal/rpc/msg/verify.go` - add red packet message wrapping
+- `pkg/apistruct/msg.go` - add the red packet message struct
+
+### 8. Push handling
+**Modified file**: `internal/push/offlinepush_handler.go`
+- add offline push content for red packet messages
+
+### 9. Route registration
+**Modified file**: `internal/api/router.go`
+- register the red packet routes
+
+## Database Table Design
+
+### red_packets table
+```go
+type RedPacket struct {
+	RedPacketID    string    `bson:"red_packet_id"`   // Red packet ID
+	SendUserID     string    `bson:"send_user_id"`    // Sender ID
+	GroupID        string    `bson:"group_id"`        // Group ID (group red packets)
+	ConversationID string    `bson:"conversation_id"` // Conversation ID
+	SessionType    int32     `bson:"session_type"`    // Session type
+	RedPacketType  int32     `bson:"red_packet_type"` // Red packet type: 1 = normal, 2 = lucky
+	TotalAmount    int64     `bson:"total_amount"`    // Total amount (cents)
+	TotalCount     int32     `bson:"total_count"`     // Total count
+	RemainAmount   int64     `bson:"remain_amount"`   // Remaining amount (cents)
+	RemainCount    int32     `bson:"remain_count"`    // Remaining count
+	Blessing       string    `bson:"blessing"`        // Blessing text
+	Status         int32     `bson:"status"`          // Status: 0 = active, 1 = fully claimed, 2 = expired
+	ExpireTime     time.Time `bson:"expire_time"`     // Expiration time
+	CreateTime     time.Time `bson:"create_time"`     // Creation time
+	Ex             string    `bson:"ex"`              // Extension field
+}
+```
+
+### red_packet_receives table
+```go
+type RedPacketReceive struct {
+	ReceiveID     string    `bson:"receive_id"`      // Receive record ID
+	RedPacketID   string    `bson:"red_packet_id"`   // Red packet ID
+	ReceiveUserID string    `bson:"receive_user_id"` // Receiver ID
+	Amount        int64     `bson:"amount"`          // Received amount (cents)
+	ReceiveTime   time.Time `bson:"receive_time"`    // Receive time
+	Ex            string    `bson:"ex"`              // Extension field
+}
+```
+
+## API Design
+
+### 1. Send Red Packet ✅ Implemented
+```
+POST /redpacket/send_redpacket
+Request:
+{
+  "groupID": "group123",      // Group ID (required)
+  "redPacketType": 1,         // Red packet type: 1 = normal, 2 = lucky (required)
+  "totalAmount": 10000,       // Total amount in cents (required)
+  "totalCount": 10,           // Total count (required)
+  "blessing": "恭喜发财"       // Blessing text (optional)
+}
+Response:
+{
+  "redPacketID": "rp123",     // Red packet ID
+  "serverMsgID": "msg123",    // Server message ID
+  "clientMsgID": "client123", // Client message ID
+  "sendTime": 1234567890      // Send timestamp (milliseconds)
+}
+```
+
+**Characteristics**:
+- group chats only
+- the sending user defaults to the group owner (fetched automatically)
+- database records are created automatically
+- the red packet message is sent to the group chat automatically
+
+### 2. Receive Red Packet
+```
+POST /redpacket/receive
+Request:
+{
+  "redPacketID": "rp123"
+}
+Response:
+{
+  "amount": 1000,    // Received amount (cents)
+  "remainAmount": 9000,
+  "remainCount": 9,
+  "status": 0        // 0 = active, 1 = fully claimed
+}
+```
+
+### 3. Query Red Packet Detail
+```
+GET /redpacket/detail?redPacketID=rp123
+Response:
+{
+ "redPacketID": "rp123",
+ "sendUserID": "user123",
+ "totalAmount": 10000,
+ "totalCount": 10,
+ "remainAmount": 9000,
+ "remainCount": 9,
+ "blessing": "恭喜发财",
+ "status": 0,
+ "receiveList": [...]
+}
+```
+
+### 4. Query Red Packet Receive Records
+```
+GET /redpacket/receives?redPacketID=rp123
+Response:
+{
+ "total": 1,
+ "receives": [
+ {
+ "receiveUserID": "user456",
+ "amount": 1000,
+ "receiveTime": "2024-01-01T10:00:00Z"
+ }
+ ]
+}
+```
+
+## Business Logic
+
+### Send flow
+1. validate the parameters (amount, count, type)
+2. create the red packet record
+3. send the red packet message (ContentType=RedPacket)
+4. return the red packet ID and message ID
+
+### Receive flow
+1. verify the red packet exists and can be received
+2. check whether the user has already received
+3. compute the receive amount (evenly for normal red packets, randomly for lucky ones)
+4. update the red packet record (remaining amount and count)
+5. create the receive record
+6. if fully claimed, update the red packet status
+7. send a receive notification message
+
+### Expiration handling
+- recommended: a scheduled task checks for expired red packets
+- expired red packets have their status updated to expired
+- remaining amounts are refunded (if needed)
+
+## Notes
+
+1. **Concurrency control**: receiving requires a distributed lock to prevent duplicate receives
+2. **Amount calculation**: lucky red packets must guarantee the last packet can claim the remaining amount
+3. **Message push**: red packet messages need special push content
+4. **Data consistency**: red packet amounts and receive records need transactional consistency
+5. **Performance**: red packet detail queries need caching
+
+## Implementation Steps
+
+1. ✅ Define the message type constant
+2. ✅ Create the database models
+3. ✅ Implement the database operations
+4. ✅ Implement the RPC service
+5. ✅ Implement the API endpoints
+6. ✅ Add message type handling
+7. ✅ Add push handling
+8. ✅ Test and optimize
+
diff --git a/docs/redpacket-message-structure.md b/docs/redpacket-message-structure.md
new file mode 100644
index 0000000..7344bd4
--- /dev/null
+++ b/docs/redpacket-message-structure.md
@@ -0,0 +1,289 @@
+# Red Packet Message Structure
+
+## Push Message Structure Received by Clients
+
+### Full Message Structure (MsgData)
+
+Red packet messages received by clients via WebSocket or HTTP pull have the following structure:
+
+```json
+{
+  "clientMsgID": "client_1234567890abcdef",  // Client message ID (string)
+  "serverMsgID": "msg_1234567890abcdef",     // Server message ID (string)
+  "sendID": "user_owner123",                 // Sender ID (the group owner's ID) (string)
+  "recvID": "",                              // Receiver ID (empty for group messages) (string)
+  "groupID": "group123",                     // Group ID (string)
+  "senderPlatformID": 0,                     // Sender platform ID (int32, 0 = system)
+  "senderNickname": "群主昵称",               // Sender nickname (string)
+  "senderFaceURL": "https://...",            // Sender avatar URL (string)
+  "sessionType": 3,                          // Session type (int32): 3 = group chat
+  "msgFrom": 200,                            // Message source (int32): 200 = system message
+  "contentType": 110,                        // Message type (int32): 110 = custom message
+  "content": "{\"data\":\"{\\\"redPacketID\\\":\\\"rp_123\\\",\\\"redPacketType\\\":1,\\\"blessing\\\":\\\"恭喜发财\\\"}\",\"description\":\"redpacket\",\"extension\":\"\"}", // Message content (JSON string, custom message format)
+  "seq": 123456,                             // Message sequence number (int64)
+  "sendTime": 1704067200000,                 // Send timestamp (int64, milliseconds)
+  "createTime": 1704067200000,               // Creation timestamp (int64, milliseconds)
+  "status": 2,                               // Message status (int32): 2 = sent successfully
+  "isRead": false,                           // Whether read (bool)
+  "options": {                               // Message options (map[string]bool)
+    "history": true,                         // save to history
+    "persistent": true,                      // persist
+    "offlinePush": true,                     // offline push
+    "unreadCount": true,                     // count toward unread
+    "conversationUpdate": true,              // update the conversation
+    "senderSync": true                       // sync to the sender
+  },
+  "offlinePushInfo": {                       // Offline push info (optional)
+    "title": "[HONGBAO]",                    // push title
+    "desc": "[HONGBAO]",                     // push description
+    "ex": "",                                // extension field
+    "iosPushSound": "",                      // iOS push sound
+    "iosBadgeCount": false                   // iOS badge count
+  },
+  "atUserIDList": [],                        // @user list (string[])
+  "attachedInfo": "",                        // attached info (string)
+  "ex": ""                                   // extension field (string)
+}
+```
+
+### Content字段解析
+
+`content` 字段是一个JSON字符串,需要先解析为 `CustomElem` 结构(自定义消息格式),然后从 `data` 字段中解析出 `RedPacketElem` 结构:
+
+#### CustomElem 结构(自定义消息外层结构)
+
+```json
+{
+ "data": "{\"redPacketID\":\"rp_123\",\"redPacketType\":1,\"blessing\":\"恭喜发财\",\"isReceived\":false,\"receiveInfo\":null}", // 红包数据的JSON字符串(string,必填)
+ "description": "redpacket", // 二级类型标识(string):"redpacket"表示红包消息
+ "extension": "" // 扩展字段(string,可选)
+}
+```
+
+#### RedPacketElem 结构(从data字段中解析)
+
+```json
+{
+ "redPacketID": "rp_1234567890abcdef", // 红包ID(string,必填)
+ "redPacketType": 1, // 红包类型(int32,必填):1-普通红包,2-拼手气红包
+ "blessing": "恭喜发财", // 祝福语(string,可选)
+ "isReceived": false, // 当前用户是否已领取(bool,服务器填充)
+ "receiveInfo": null // 领取信息(RedPacketReceiveInfo,仅拼手气红包且已领取时返回)
+}
+```
+
+#### RedPacketReceiveInfo 结构(领取信息)
+
+```json
+{
+ "amount": 1000, // 领取金额(int64,单位:分)
+ "receiveTime": 1704067200000, // 领取时间戳(int64,毫秒)
+ "isLucky": false // 是否为手气最佳(bool,仅拼手气红包有效)
+}
+```
+
+#### 字段说明
+
+| 字段名 | 类型 | 必填 | 说明 |
+|--------|------|------|------|
+| `redPacketID` | string | 是 | 红包唯一标识,用于后续领取、查询等操作(总金额、总个数等信息可通过红包ID查询获取) |
+| `redPacketType` | int32 | 是 | 红包类型:1-普通红包(平均分配),2-拼手气红包(随机分配) |
+| `blessing` | string | 否 | 祝福语 |
+| `isReceived` | bool | 是 | 当前用户是否已领取(服务器根据用户ID自动填充) |
+| `receiveInfo` | RedPacketReceiveInfo | 否 | 领取信息,仅当 `isReceived=true` 且 `redPacketType=2`(拼手气红包)时返回 |
+
+#### receiveInfo 字段说明
+
+| 字段名 | 类型 | 说明 |
+|--------|------|------|
+| `amount` | int64 | 领取金额,单位:分 |
+| `receiveTime` | int64 | 领取时间戳,毫秒 |
+| `isLucky` | bool | 是否为手气最佳(仅拼手气红包有效) |
+
+### 消息类型常量
+
+- **ContentType**: `110` (`constant.Custom` - 自定义消息)
+- **二级类型标识**: `"redpacket"` (存储在 `CustomElem.description` 字段中)
+- **SessionType**: `3` (`constant.ReadGroupChatType` - 群聊)
+- **MsgFrom**: `200` (`constant.SysMsgType` - 系统消息)
+
+### 客户端解析示例
+
+#### JavaScript/TypeScript
+
+```typescript
+interface RedPacketReceiveInfo {
+ amount: number; // 领取金额(分)
+ receiveTime: number; // 领取时间戳(毫秒)
+ isLucky: boolean; // 是否为手气最佳
+}
+
+interface RedPacketElem {
+ redPacketID: string;
+ redPacketType: number; // 1-普通红包,2-拼手气红包
+ blessing?: string;
+ isReceived: boolean; // 当前用户是否已领取
+ receiveInfo?: RedPacketReceiveInfo; // 领取信息(仅拼手气红包且已领取时返回)
+}
+
+interface CustomElem {
+ data: string; // 红包数据的JSON字符串
+ description: string; // 二级类型标识:"redpacket"
+ extension?: string; // 扩展字段
+}
+
+interface RedPacketMessage {
+ clientMsgID: string;
+ serverMsgID: string;
+ sendID: string;
+ groupID: string;
+ contentType: number; // 110 (自定义消息)
+ content: string; // CustomElem的JSON字符串
+ sendTime: number;
+ // ... 其他字段
+}
+
+// 解析消息
+function parseRedPacketMessage(msg: RedPacketMessage): RedPacketElem | null {
+ // 检查是否为自定义消息类型
+ if (msg.contentType !== 110) {
+ return null;
+ }
+
+ try {
+ // 先解析自定义消息结构
+ const customElem: CustomElem = JSON.parse(msg.content);
+
+ // 检查二级类型是否为红包
+ if (customElem.description !== "redpacket") {
+ return null;
+ }
+
+ // 从data字段中解析红包数据
+ const redPacketElem: RedPacketElem = JSON.parse(customElem.data);
+ return redPacketElem;
+ } catch (e) {
+ console.error('Failed to parse red packet content:', e);
+ return null;
+ }
+}
+```
+
+#### Go
+
+```go
+type RedPacketReceiveInfo struct {
+ Amount int64 `json:"amount"` // 领取金额(分)
+ ReceiveTime int64 `json:"receiveTime"` // 领取时间戳(毫秒)
+ IsLucky bool `json:"isLucky"` // 是否为手气最佳
+}
+
+type RedPacketElem struct {
+    RedPacketID   string                `json:"redPacketID"`
+    RedPacketType int32                 `json:"redPacketType"` // 1-普通红包,2-拼手气红包
+    Blessing      string                `json:"blessing,omitempty"`
+    IsReceived    bool                  `json:"isReceived"`              // 当前用户是否已领取
+    ReceiveInfo   *RedPacketReceiveInfo `json:"receiveInfo,omitempty"` // 领取信息(仅拼手气红包且已领取时返回)
+}
+
+type CustomElem struct {
+ Data string `json:"data"` // 红包数据的JSON字符串
+ Description string `json:"description"` // 二级类型标识:"redpacket"
+ Extension string `json:"extension"` // 扩展字段
+}
+
+func ParseRedPacketContent(content []byte) (*RedPacketElem, error) {
+ // 先解析自定义消息结构
+ var customElem CustomElem
+ if err := json.Unmarshal(content, &customElem); err != nil {
+ return nil, err
+ }
+
+ // 检查二级类型是否为红包
+ if customElem.Description != "redpacket" {
+ return nil, fmt.Errorf("not a red packet message")
+ }
+
+ // 从data字段中解析红包数据
+ var redPacketElem RedPacketElem
+ if err := json.Unmarshal([]byte(customElem.Data), &redPacketElem); err != nil {
+ return nil, err
+ }
+
+ return &redPacketElem, nil
+}
+```
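+
+与上面的解析过程对应,生成 `content` 字段时需要做两层 JSON 编码:先序列化红包数据放入 `data`,再整体序列化 `CustomElem`。以下为自包含的示意代码(结构体与上文一致,字段做了简化,函数名为假设):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+type redPacketElem struct {
+	RedPacketID   string `json:"redPacketID"`
+	RedPacketType int32  `json:"redPacketType"`
+	Blessing      string `json:"blessing,omitempty"`
+	IsReceived    bool   `json:"isReceived"`
+}
+
+type customElem struct {
+	Data        string `json:"data"`
+	Description string `json:"description"`
+	Extension   string `json:"extension"`
+}
+
+// buildRedPacketContent 生成 content 字段:先序列化红包数据,
+// 再作为字符串放入 customElem.Data,整体再序列化一次(两层 JSON 编码)。
+func buildRedPacketContent(elem *redPacketElem) (string, error) {
+	data, err := json.Marshal(elem)
+	if err != nil {
+		return "", err
+	}
+	b, err := json.Marshal(customElem{Data: string(data), Description: "redpacket"})
+	return string(b), err
+}
+
+func main() {
+	s, _ := buildRedPacketContent(&redPacketElem{
+		RedPacketID: "rp_123", RedPacketType: 1, Blessing: "恭喜发财",
+	})
+	fmt.Println(s) // data 字段内是再次转义过的 JSON 字符串
+}
+```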
+
+### 完整消息示例
+
+```json
+{
+ "clientMsgID": "client_1704067200000_abc123",
+ "serverMsgID": "msg_1704067200000_def456",
+ "sendID": "user_owner123",
+ "recvID": "",
+ "groupID": "group123",
+ "senderPlatformID": 0,
+ "senderNickname": "群主",
+ "senderFaceURL": "https://example.com/avatar.jpg",
+ "sessionType": 3,
+ "msgFrom": 200,
+ "contentType": 110,
+ "content": "{\"data\":\"{\\\"redPacketID\\\":\\\"rp_1234567890abcdef\\\",\\\"redPacketType\\\":1,\\\"blessing\\\":\\\"恭喜发财\\\",\\\"isReceived\\\":false,\\\"receiveInfo\\\":null}\",\"description\":\"redpacket\",\"extension\":\"\"}",
+ "seq": 123456,
+ "sendTime": 1704067200000,
+ "createTime": 1704067200000,
+ "status": 2,
+ "isRead": false,
+ "options": {
+ "history": true,
+ "persistent": true,
+ "offlinePush": true,
+ "unreadCount": true,
+ "conversationUpdate": true,
+ "senderSync": true
+ },
+ "offlinePushInfo": {
+ "title": "[HONGBAO]",
+ "desc": "[HONGBAO]"
+ },
+ "atUserIDList": [],
+ "attachedInfo": "",
+ "ex": ""
+}
+```
+
+### 消息选项说明
+
+| 选项名 | 说明 |
+|--------|------|
+| `history` | 是否保存到历史记录 |
+| `persistent` | 是否持久化存储 |
+| `offlinePush` | 是否离线推送 |
+| `unreadCount` | 是否计入未读数 |
+| `conversationUpdate` | 是否更新会话 |
+| `senderSync` | 是否同步给发送者 |
+
+### 注意事项
+
+1. **Content字段**: 是JSON字符串,需要先解析为 `CustomElem` 结构,然后从 `data` 字段中解析 `RedPacketElem`
+2. **消息类型**: 使用自定义消息类型(`contentType = 110`),通过 `description` 字段标识二级类型为 `"redpacket"`
+3. **二级类型扩展**: 未来可以扩展其他自定义消息类型,只需在 `description` 字段中使用不同的标识(如 `"wallet"`、`"coupon"` 等)
+4. **金额单位**: `receiveInfo.amount` 单位是"分",不是"元"
+5. **总金额和总个数**: 不在消息中传递,客户端可通过 `redPacketID` 调用查询接口获取详细信息
+6. **红包类型**:
+   - `1` = 普通红包(平均分配)
+   - `2` = 拼手气红包(随机分配)
+7. **发送者**: 固定为群主,`sendID` 为群主ID
+8. **消息来源**: `msgFrom = 200` 表示系统消息
+9. **会话类型**: `sessionType = 3` 表示群聊
+10. **领取状态**: `isReceived` 字段由服务器根据当前用户ID自动填充,客户端无需设置
+11. **领取信息**: `receiveInfo` 仅在以下情况返回:
+    - `isReceived = true`(用户已领取)
+    - `redPacketType = 2`(拼手气红包)
+    - 普通红包(`redPacketType = 1`)即使已领取也不会返回 `receiveInfo`
+12. **客户端存储**: 客户端收到消息后,应将 `isReceived` 和 `receiveInfo` 存储到本地,用于UI展示
+13. **兼容性**: 使用标准自定义消息类型,无需修改客户端SDK,只需在客户端添加红包消息的解析逻辑
+
diff --git a/docs/statistics-api.md b/docs/statistics-api.md
new file mode 100644
index 0000000..c163e73
--- /dev/null
+++ b/docs/statistics-api.md
@@ -0,0 +1,267 @@
+# 统计接口文档
+
+## 接口列表
+
+### 1. POST /statistics/user/register
+**用户注册统计**
+
+**请求参数** (`UserRegisterCountReq`):
+```json
+{
+ "start": 0, // int64 - 开始时间(毫秒时间戳)
+ "end": 0 // int64 - 结束时间(毫秒时间戳)
+}
+```
+
+**响应参数** (`UserRegisterCountResp`):
+```json
+{
+ "total": 0, // int64 - 总注册用户数
+ "before": 0, // int64 - 开始时间之前的注册用户数
+ "count": [] // []map[string]int64 - 时间段内每日注册用户数统计
+}
+```
+
+---
+
+### 2. POST /statistics/user/active
+**活跃用户统计**
+
+**请求参数** (`GetActiveUserReq`):
+```json
+{
+ "start": 0, // int64 - 开始时间(毫秒时间戳)
+ "end": 0, // int64 - 结束时间(毫秒时间戳)
+ "group": "", // string - 群ID(选填)
+ "ase": "", // string - 排序方式(选填)
+ "pagination": { // Pagination - 分页参数
+ "pageNumber": 1, // int32 - 页码,从1开始
+ "showNumber": 10 // int32 - 每页数量
+ }
+}
+```
+
+**响应参数** (`GetActiveUserResp`):
+```json
+{
+ "msgCount": 0, // int64 - 消息总数
+ "userCount": 0, // int64 - 活跃用户总数
+ "dateCount": [], // []map[string]int64 - 每日活跃用户数统计
+ "users": [ // []ActiveUser - 活跃用户列表
+ {
+ "user": { // UserInfo - 用户信息
+ "userID": "",
+ "nickname": "",
+ "faceURL": "",
+ // ... 其他用户字段
+ },
+ "count": 0 // int64 - 该用户发送的消息数
+ }
+ ]
+}
+```
+
+---
+
+### 3. POST /statistics/group/create
+**群创建统计**
+
+**请求参数** (`GroupCreateCountReq`):
+```json
+{
+ "start": 0, // int64 - 开始时间(毫秒时间戳)
+ "end": 0 // int64 - 结束时间(毫秒时间戳)
+}
+```
+
+**响应参数** (`GroupCreateCountResp`):
+```json
+{
+ "total": 0, // int64 - 总创建群数
+ "before": 0, // int64 - 开始时间之前创建的群数
+ "count": [] // []map[string]int64 - 时间段内每日创建群数统计
+}
+```
+
+---
+
+### 4. POST /statistics/group/active
+**活跃群统计**
+
+**请求参数** (`GetActiveGroupReq`):
+```json
+{
+ "start": 0, // int64 - 开始时间(毫秒时间戳)
+ "end": 0, // int64 - 结束时间(毫秒时间戳)
+ "ase": "", // string - 排序方式(选填)
+ "pagination": { // Pagination - 分页参数
+ "pageNumber": 1, // int32 - 页码,从1开始
+ "showNumber": 10 // int32 - 每页数量
+ }
+}
+```
+
+**响应参数** (`GetActiveGroupResp`):
+```json
+{
+ "msgCount": 0, // int64 - 消息总数
+ "groupCount": 0, // int64 - 活跃群总数
+ "dateCount": [], // []map[string]int64 - 每日活跃群数统计
+ "groups": [ // []ActiveGroup - 活跃群列表
+ {
+ "group": { // GroupInfo - 群信息
+ "groupID": "",
+ "groupName": "",
+ "faceURL": "",
+ // ... 其他群字段
+ },
+ "count": 0 // int64 - 该群发送的消息数
+ }
+ ]
+}
+```
+
+---
+
+### 5. POST /statistics/online_user_count
+**在线人数统计**
+
+**请求参数**: 无
+
+**响应参数** (`OnlineUserCountResp`):
+```json
+{
+ "onlineCount": 0 // int64 - 当前在线用户数
+}
+```
+
+---
+
+### 6. POST /statistics/online_user_count_trend
+**在线人数走势统计**
+
+**请求参数** (`OnlineUserCountTrendReq`):
+```json
+{
+ "startTime": 0, // int64 - 统计开始时间(毫秒时间戳),为空时默认最近24小时
+ "endTime": 0, // int64 - 统计结束时间(毫秒时间戳),为空时默认当前时间
+ "intervalMinutes": 15 // int32 - 统计间隔(分钟),仅支持15/30/60,必填
+}
+```
+
+**响应参数** (`OnlineUserCountTrendResp`):
+```json
+{
+ "intervalMinutes": 15, // int32 - 统计间隔(分钟)
+ "points": [ // []OnlineUserCountTrendItem - 走势数据点
+ {
+ "timestamp": 0, // int64 - 区间起始时间(毫秒时间戳)
+ "onlineCount": 0 // int64 - 区间内平均在线人数
+ }
+ ]
+}
+```
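+
+`intervalMinutes` 仅支持 15/30/60,调用方在发请求前可以先做一次本地校验(示意代码,非现有实现,函数名为假设):
+
+```go
+package main
+
+import "fmt"
+
+// validateInterval 校验走势统计的 intervalMinutes 参数,
+// 仅允许 15/30/60(与接口文档约定一致)。
+func validateInterval(intervalMinutes int32) error {
+	switch intervalMinutes {
+	case 15, 30, 60:
+		return nil
+	default:
+		return fmt.Errorf("intervalMinutes must be 15, 30 or 60, got %d", intervalMinutes)
+	}
+}
+
+func main() {
+	fmt.Println(validateInterval(15)) // <nil>
+	fmt.Println(validateInterval(20)) // 返回错误
+}
+```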
+
+---
+
+### 7. POST /statistics/user_send_msg_count
+**用户发送消息总数统计**
+
+**请求参数** (`UserSendMsgCountReq`): 无
+
+**响应参数** (`UserSendMsgCountResp`):
+```json
+{
+ "count24h": 0, // int64 - 最近24小时发送消息总数
+ "count7d": 0, // int64 - 最近7天发送消息总数
+ "count30d": 0 // int64 - 最近30天发送消息总数
+}
+```
+
+---
+
+### 8. POST /statistics/user_send_msg_count_trend
+**用户发送消息走势统计**
+
+**请求参数** (`UserSendMsgCountTrendReq`):
+```json
+{
+ "userID": "", // string - 发送者用户ID,必填
+ "chatType": 1, // int32 - 聊天类型:1-单聊,2-群聊,必填
+ "startTime": 0, // int64 - 统计开始时间(毫秒时间戳),为空时默认最近24小时
+ "endTime": 0, // int64 - 统计结束时间(毫秒时间戳),为空时默认当前时间
+ "intervalMinutes": 15 // int32 - 统计间隔(分钟),仅支持15/30/60,必填
+}
+```
+
+**响应参数** (`UserSendMsgCountTrendResp`):
+```json
+{
+ "userID": "", // string - 发送者用户ID
+ "chatType": 1, // int32 - 聊天类型:1-单聊,2-群聊
+ "intervalMinutes": 15, // int32 - 统计间隔(分钟)
+ "points": [ // []UserSendMsgCountTrendItem - 走势数据点
+ {
+ "timestamp": 0, // int64 - 区间起始时间(毫秒时间戳)
+ "count": 0 // int64 - 区间内发送消息数
+ }
+ ]
+}
+```
+
+---
+
+### 9. POST /statistics/user_send_msg_query
+**用户发送消息查询**
+
+**请求参数** (`UserSendMsgQueryReq`):
+```json
+{
+ "userID": "", // string - 用户ID(选填)
+ "startTime": 0, // int64 - 开始时间(毫秒时间戳,选填)
+ "endTime": 0, // int64 - 结束时间(毫秒时间戳,选填)
+ "content": "", // string - 内容搜索关键词(选填)
+ "pageNumber": 1, // int32 - 页码,从1开始,默认1
+ "showNumber": 50 // int32 - 每页条数,默认50,最大200
+}
+```
+
+**响应参数** (`UserSendMsgQueryResp`):
+```json
+{
+ "count": 0, // int64 - 总记录数
+ "pageNumber": 1, // int32 - 当前页码
+ "showNumber": 50, // int32 - 每页条数
+ "records": [ // []UserSendMsgQueryRecord - 消息记录列表
+ {
+ "msgID": "", // string - 消息ID(服务端消息ID)
+ "sendID": "", // string - 发送者ID
+ "senderName": "", // string - 发送者昵称或名称
+ "recvID": "", // string - 接收者ID(群聊为群ID)
+ "recvName": "", // string - 接收者昵称或名称(群聊为群名称)
+ "contentType": 0, // int32 - 消息类型编号
+ "contentTypeName": "", // string - 消息类型名称(如:文本消息、图片消息等)
+ "sessionType": 0, // int32 - 聊天类型编号
+ "chatTypeName": "", // string - 聊天类型名称(如:单聊、群聊、通知)
+ "content": "", // string - 消息内容
+ "sendTime": 0 // int64 - 消息发送时间(毫秒时间戳)
+ }
+ ]
+}
+```
+
+---
+
+## 注意事项
+
+1. **权限要求**: 所有统计接口都需要管理员权限(`authverify.CheckAdmin`)
+2. **时间格式**: 所有时间字段均为毫秒时间戳(int64)
+3. **统计间隔**: 走势统计接口的 `intervalMinutes` 仅支持 15、30、60 分钟
+4. **分页参数**:
+ - `pageNumber` 从 1 开始
+ - `showNumber` 默认值不同接口可能不同,最大值为 200
+5. **消息类型**:
+ - 文本消息、图片消息、语音消息、视频消息、文件消息、艾特消息、合并消息、名片消息、位置消息、自定义消息、撤回消息、Markdown消息等
+6. **聊天类型**:
+ - 单聊(1)、群聊(2)、通知
+
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
new file mode 100644
index 0000000..e892c03
--- /dev/null
+++ b/docs/troubleshooting.md
@@ -0,0 +1,92 @@
+# OpenIM ACK 排障流程(标准化指南)
+
+> **目标**:在 ACK/Kubernetes 环境下定位“消息不同步/推送缓慢”时,有一套可复用的排障步骤,避免因为主观猜测去改底层逻辑。
+
+---
+
+## 1. 明确现象与影响范围
+- **收集信息**:受影响的用户、群组、时间段、终端类型(iOS/Android/H5)。
+- **区分场景**:登录拉取历史 vs. 实时推送,是否所有人都不行、还是个别用户/群。
+- **保存日志与报错**:客户端/服务端的时间点,便于与服务端日志对应。
+
+> 不要在未复现/未收集信息的情况下就进行代码改动。
+
+---
+
+## 2. 逐层排查流程
+
+### 2.1 API 层(openim-api)
+1. 查看 `kubectl logs deploy/openim-api`,确认登录/发消息请求是否返回 200。
+2. 如出现 `token invalid`、`rpc timeout` 等明确错误,先排除配置、网络等问题。
+
+### 2.2 RPC 层(openim-rpc-xxx)
+1. 针对消息问题,重点查看 `openim-rpc-msg`、`openim-rpc-group` 等服务日志。
+2. 出现 `MsgToMongoMQ error`、`redis set failed` 等,说明生产端异常,需要先修复 RPC/存储配置。
+
+### 2.3 Kafka & 消费端
+1. 使用 `kubectl logs deploy/openim-msgtransfer`、`deploy/push-rpc-server`,查看是否有:
+ - `xxxConsumer err, will retry in 5 seconds`(真正的连不上 Kafka);
+ - `Subscribe returned normally...`(**正常现象**,新版 SDK 每条消息都会返回一次)。
+2. 如怀疑 Kafka 累计延迟,可登录 Kafka 集群或使用 `kafka-consumer-groups.sh --describe` 查看 Lag。
+
+### 2.4 持久化(Mongo / Redis)
+1. 抽取异常消息的 conversationId & seq,直接在 Mongo/Redis 中查询是否存在。
+2. 若 Redis 缺失最新 seq,说明 msgtransfer 未写入;继续回头查消费日志。
+
+### 2.5 推送服务(push)与网关(msggateway)
+1. `kubectl logs deploy/push-rpc-server`:关注推送 RPC 的失败/超时日志;
+2. `kubectl logs deploy/messagegateway-rpc-server`:观察是否大量 `write: broken pipe`(通常是客户端断线,非服务异常);
+3. 确认 `openim-push` 是否因为配置(厂商证书、推送速率限制)导致卡住。
+
+### 2.6 客户端侧验证
+- 使用不同终端(iOS/Android/H5)交叉验证;
+- 检查客户端本地缓存/seq 是否有延迟;必要时抓包或查看客户端日志。
+
+---
+
+## 3. 变更前的对照与验证
+- **先查证**:如果怀疑配置/逻辑问题,先在官方仓库(`upstream/main`)对比差异,确认是否确实存在 bug。
+- **最小化修改**:优先通过配置/运维手段解决(如重启 Pod、更新镜像、修复 Kafka 集群);代码层改动需:
+ 1. 写清楚“问题现象 → 复现步骤 → 根因分析 → 预期影响”;
+ 2. 在测试环境复盘,多人同行评审;
+ 3. 修改后 `mage build` + `kubectl rollout` 验证,确保上下游联调通过。
+- **禁止“先改再说”**:避免仅凭几行日志或猜测就改动核心组件(如 discovery、在线推送、MQ 消费),否则容易引入新的系统风险。
+
+---
+
+## 4. 常见误区与经验
+- **“Subscribe returned normally” ≠ 错误**:新版本 `NewMConsumerGroupV2` 每处理一条消息就会返回,外层循环需要继续 `Subscribe`。
+- **ACK Pod 重建是常态**:官方推荐使用 Service DNS/Headless Service + readiness/liveness,不要擅自关闭连接。
+- **推送失败不等于刷新连接**:先确认网关/客户端状态,只有在明确连接池崩溃时才考虑 `ForceRefresh`。
+- **go mod tidy 的影响**:在本地执行 `go mod tidy` 可能引入上游依赖,CI/build 失败时先恢复 `go.mod/go.sum`。
+
+---
+
+## 5. 工具清单
+- `kubectl logs/get/describe`:查看 Pod/Deployment 状态;
+  - `kubectl top pods -n <namespace>`:实时查看 Pod CPU / MEM;
+  - `kubectl top nodes`:评估集群整体资源是否紧张;
+  - `kubectl top node <node-name>`:关注某台节点是否 CPU / MEM / LOAD 异常;
+- `kubectl rollout status deployment/<deployment-name>`:确认升级完成;
+- `mage build`:本地编译检查;
+- Kafka CLI 或监控平台:定位消费滞后;
+- Prometheus/Grafana:查看服务 CPU、内存、QPS、错误率、延迟等指标;
+- 日志聚合(ELK / Loki 等):快速检索关键字;
+- SSH 到节点(需要权限):
+ - `top` / `htop`:查看 CPU、内存占用;
+ - `free -h`:系统整体内存状况;
+ - `df -h`:磁盘容量,防止磁盘打满影响容器;
+ - `du -sh /var/lib/docker`, `/var/log` 等目录,确认是否存在异常增长。
+
+---
+
+## 6. 复盘与持续改进
+1. 每次重大故障后,输出“故障记录 + 根因 + 修复方案 + 预防措施”。
+2. 同步更新此文档或内部 Wiki,确保新同学也能快速掌握流程。
+3. 定期与官方版本对齐,避免长期“漂移”导致的隐藏问题。
+
+---
+
+> 若有新的场景/问题,请在这里补充,保持流程可演进。
+
+
diff --git a/docs/webhook-config.md b/docs/webhook-config.md
new file mode 100644
index 0000000..8069a40
--- /dev/null
+++ b/docs/webhook-config.md
@@ -0,0 +1,587 @@
+# Webhook 配置说明文档
+
+## 概述
+
+Webhook 配置用于设置 OpenIM 服务器在特定事件发生时的回调通知。配置支持从数据库动态读取,并按固定间隔自动刷新(刷新间隔见下文「配置刷新」一节)。如果数据库中没有配置或配置无效,将使用 `config/webhooks.yml` 中的默认配置。
+
+## 配置结构
+
+### 基础字段
+
+- **url** (string, 必填): Webhook 回调服务器的地址,所有回调请求都会发送到这个地址
+
+### BeforeConfig(前置回调配置)
+
+前置回调在操作执行前触发,可以阻止或修改操作。
+
+| 字段 | 类型 | 必填 | 说明 |
+|------|------|------|------|
+| enable | boolean | 是 | 是否启用该回调 |
+| timeout | integer | 是 | 超时时间(秒) |
+| failedContinue | boolean | 是 | 回调失败时是否继续执行原操作(true=继续,false=停止) |
+| deniedTypes | array[int32] | 否 | 不触发回调的消息类型列表,空数组表示所有类型都触发 |
+
+### AfterConfig(后置回调配置)
+
+后置回调在操作执行后触发,用于通知和记录。
+
+| 字段 | 类型 | 必填 | 说明 |
+|------|------|------|------|
+| enable | boolean | 是 | 是否启用该回调 |
+| timeout | integer | 是 | 超时时间(秒) |
+| attentionIds | array[string] | 否 | 仅对指定用户ID或群组ID触发回调,空数组表示所有都触发 |
+| deniedTypes | array[int32] | 否 | 不触发回调的消息类型列表,空数组表示所有类型都触发 |
+
+## 支持的回调事件
+
+### 消息相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeSendSingleMsg | BeforeConfig | 发送单聊消息前 |
+| afterSendSingleMsg | AfterConfig | 发送单聊消息后 |
+| beforeSendGroupMsg | BeforeConfig | 发送群聊消息前 |
+| afterSendGroupMsg | AfterConfig | 发送群聊消息后 |
+| beforeMsgModify | BeforeConfig | 修改消息前(可用于消息过滤) |
+| afterMsgSaveDB | AfterConfig | 消息保存到数据库后 |
+| afterRevokeMsg | AfterConfig | 撤回消息后 |
+| afterGroupMsgRevoke | AfterConfig | 撤回群消息后 |
+| afterGroupMsgRead | AfterConfig | 群消息已读后 |
+| afterSingleMsgRead | AfterConfig | 单聊消息已读后 |
+
+### 用户相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeUserRegister | BeforeConfig | 用户注册前 |
+| afterUserRegister | AfterConfig | 用户注册后 |
+| beforeUpdateUserInfo | BeforeConfig | 更新用户信息前 |
+| afterUpdateUserInfo | AfterConfig | 更新用户信息后 |
+| beforeUpdateUserInfoEx | BeforeConfig | 更新用户扩展信息前 |
+| afterUpdateUserInfoEx | AfterConfig | 更新用户扩展信息后 |
+| afterUserOnline | AfterConfig | 用户上线后 |
+| afterUserOffline | AfterConfig | 用户下线后 |
+| afterUserKickOff | AfterConfig | 用户被踢下线后 |
+
+### 群组相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeCreateGroup | BeforeConfig | 创建群组前 |
+| afterCreateGroup | AfterConfig | 创建群组后 |
+| beforeSetGroupInfo | BeforeConfig | 设置群组信息前 |
+| afterSetGroupInfo | AfterConfig | 设置群组信息后 |
+| beforeSetGroupInfoEx | BeforeConfig | 设置群组扩展信息前 |
+| afterSetGroupInfoEx | AfterConfig | 设置群组扩展信息后 |
+| beforeMemberJoinGroup | BeforeConfig | 成员加入群组前 |
+| afterJoinGroup | AfterConfig | 成员加入群组后 |
+| beforeApplyJoinGroup | BeforeConfig | 申请加入群组前 |
+| beforeInviteUserToGroup | BeforeConfig | 邀请用户加入群组前 |
+| afterQuitGroup | AfterConfig | 退出群组后 |
+| afterKickGroupMember | AfterConfig | 踢出群成员后 |
+| afterDismissGroup | AfterConfig | 解散群组后 |
+| afterTransferGroupOwner | AfterConfig | 转移群主后 |
+| beforeSetGroupMemberInfo | BeforeConfig | 设置群成员信息前 |
+| afterSetGroupMemberInfo | AfterConfig | 设置群成员信息后 |
+
+### 好友相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeAddFriend | BeforeConfig | 添加好友前 |
+| afterAddFriend | AfterConfig | 添加好友后 |
+| beforeAddFriendAgree | BeforeConfig | 同意好友申请前 |
+| afterAddFriendAgree | AfterConfig | 同意好友申请后 |
+| afterDeleteFriend | AfterConfig | 删除好友后 |
+| beforeSetFriendRemark | BeforeConfig | 设置好友备注前 |
+| afterSetFriendRemark | AfterConfig | 设置好友备注后 |
+| beforeImportFriends | BeforeConfig | 导入好友前 |
+| afterImportFriends | AfterConfig | 导入好友后 |
+| beforeAddBlack | BeforeConfig | 加入黑名单前 |
+| afterRemoveBlack | AfterConfig | 移除黑名单后 |
+
+### 推送相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeOfflinePush | BeforeConfig | 离线推送前 |
+| beforeOnlinePush | BeforeConfig | 在线推送前 |
+| beforeGroupOnlinePush | BeforeConfig | 群组在线推送前 |
+
+### 会话相关
+
+| 事件名称 | 类型 | 说明 |
+|---------|------|------|
+| beforeCreateSingleChatConversations | BeforeConfig | 创建单聊会话前 |
+| afterCreateSingleChatConversations | AfterConfig | 创建单聊会话后 |
+| beforeCreateGroupChatConversations | BeforeConfig | 创建群聊会话前 |
+| afterCreateGroupChatConversations | AfterConfig | 创建群聊会话后 |
+
+## JSON 配置示例
+
+### 完整配置示例
+
+```json
+{
+ "url": "http://your-webhook-server:8080/callback",
+ "beforeSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeSendGroupMsg": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSendGroupMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeMsgModify": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterMsgSaveDB": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterUserOnline": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterUserOffline": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterUserKickOff": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeOfflinePush": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "beforeOnlinePush": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "beforeGroupOnlinePush": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "beforeAddFriend": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "beforeUpdateUserInfo": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterUpdateUserInfo": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeCreateGroup": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterCreateGroup": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeMemberJoinGroup": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "beforeSetGroupMemberInfo": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSetGroupMemberInfo": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterQuitGroup": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterKickGroupMember": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterDismissGroup": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeApplyJoinGroup": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterGroupMsgRead": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterSingleMsgRead": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeUserRegister": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterUserRegister": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterTransferGroupOwner": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeSetFriendRemark": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSetFriendRemark": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterGroupMsgRevoke": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterJoinGroup": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeInviteUserToGroup": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSetGroupInfo": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeSetGroupInfo": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSetGroupInfoEx": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeSetGroupInfoEx": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterRevokeMsg": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeAddBlack": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterAddFriend": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeAddFriendAgree": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterAddFriendAgree": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterDeleteFriend": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeImportFriends": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterImportFriends": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "afterRemoveBlack": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeCreateSingleChatConversations": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": false,
+ "deniedTypes": []
+ },
+ "afterCreateSingleChatConversations": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeCreateGroupChatConversations": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": false,
+ "deniedTypes": []
+ },
+ "afterCreateGroupChatConversations": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeUpdateUserInfoEx": {
+ "enable": false,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterUpdateUserInfoEx": {
+ "enable": false,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ }
+}
+```
+
+### 最小配置示例(仅启用消息回调)
+
+```json
+{
+ "url": "http://your-webhook-server:8080/callback",
+ "beforeSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ },
+ "beforeSendGroupMsg": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": []
+ },
+ "afterSendGroupMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": [],
+ "deniedTypes": []
+ }
+}
+```
+
+### 过滤特定消息类型示例
+
+```json
+{
+ "url": "http://your-webhook-server:8080/callback",
+ "beforeSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "failedContinue": true,
+ "deniedTypes": [101, 102]
+ }
+}
+```
+
+### 仅关注特定用户/群组示例
+
+```json
+{
+ "url": "http://your-webhook-server:8080/callback",
+ "afterSendSingleMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": ["user001", "user002"],
+ "deniedTypes": []
+ },
+ "afterSendGroupMsg": {
+ "enable": true,
+ "timeout": 5,
+ "attentionIds": ["group001", "group002"],
+ "deniedTypes": []
+ }
+}
+```
+
+## 配置说明
+
+### URL 配置
+
+- **格式**: `http://host:port/path` 或 `https://host:port/path`
+- **示例**:
+ - `http://127.0.0.1:8080/callback`
+ - `http://webhook.example.com:8080/callback`
+ - `https://webhook.example.com/callback`
+
+### timeout(超时时间)
+
+- **单位**: 秒
+- **建议值**: 5-30 秒
+- **说明**: 如果回调服务器在超时时间内没有响应,将根据 `failedContinue` 决定是否继续执行
+
+### failedContinue(失败是否继续)
+
+- **仅适用于**: BeforeConfig 类型的事件
+- **true**: 回调失败时继续执行原操作
+- **false**: 回调失败时停止执行原操作
+
+### deniedTypes(拒绝的消息类型)
+
+- **类型**: 整数数组
+- **说明**: 列表中的消息类型不会触发回调
+- **空数组**: 表示所有消息类型都触发回调
+- **常用消息类型**:
+ - 101: 文本消息
+ - 102: 图片消息
+ - 103: 语音消息
+ - 104: 视频消息
+ - 105: 文件消息
+
+### attentionIds(关注ID列表)
+
+- **仅适用于**: AfterConfig 类型的事件
+- **类型**: 字符串数组
+- **说明**:
+ - 对于消息回调:仅对指定的用户ID或群组ID触发回调
+ - 空数组表示所有用户/群组都触发回调
+- **示例**: `["user001", "user002", "group001"]`
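+
+`deniedTypes` 与 `attentionIds` 的过滤语义可以用一小段示意代码表达(假设的辅助函数,非服务端现有实现):
+
+```go
+package main
+
+import "fmt"
+
+// shouldCallback 判断一次事件是否需要触发回调:
+// 未启用则不触发;contentType 命中 deniedTypes 则不触发;
+// attentionIds 非空时,仅当 id 在列表中才触发(空列表表示全部触发)。
+func shouldCallback(enable bool, deniedTypes []int32, attentionIds []string, contentType int32, id string) bool {
+	if !enable {
+		return false
+	}
+	for _, t := range deniedTypes {
+		if t == contentType {
+			return false
+		}
+	}
+	if len(attentionIds) == 0 {
+		return true
+	}
+	for _, a := range attentionIds {
+		if a == id {
+			return true
+		}
+	}
+	return false
+}
+
+func main() {
+	// 文本消息(101)在 deniedTypes 中,不触发
+	fmt.Println(shouldCallback(true, []int32{101}, nil, 101, "user001")) // false
+	// attentionIds 非空且命中,触发
+	fmt.Println(shouldCallback(true, nil, []string{"user001"}, 102, "user001")) // true
+}
+```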
+
+## 配置存储
+
+### 数据库存储
+
+配置可以存储在 MongoDB 的 `system_configs` 表中:
+
+- **表名**: `system_configs`
+- **配置键**: `webhook_config`
+- **值类型**: JSON (ValueType = 4)
+
+### 配置优先级
+
+1. **数据库配置**(如果存在且有效)
+2. **文件配置**(`config/webhooks.yml`)
+
+如果数据库中没有配置或配置无效(为空、禁用、格式错误等),将自动使用文件配置。
+
+## 配置刷新
+
+- **刷新间隔**: 30 秒(调试模式,生产环境建议改为1小时)
+- **自动刷新**: 服务启动后自动开始定时刷新
+- **立即生效**: 配置更新后,最多30秒内自动生效(无需重启服务)
+
+## 注意事项
+
+1. **URL 必须有效**: 配置的 URL 必须可访问,否则回调会失败
+2. **超时设置**: 建议根据回调服务器的响应时间合理设置超时时间
+3. **失败处理**: BeforeConfig 类型的回调建议设置 `failedContinue: true`,避免因回调失败影响正常业务
+4. **性能考虑**: 启用过多回调可能影响系统性能,建议按需启用
+5. **安全性**: 确保回调服务器有适当的认证机制
+
diff --git a/docs/使用Ex字段区分群类型方案.md b/docs/使用Ex字段区分群类型方案.md
new file mode 100644
index 0000000..5e3ea6c
--- /dev/null
+++ b/docs/使用Ex字段区分群类型方案.md
@@ -0,0 +1,208 @@
+# 使用 Ex 字段区分群类型方案
+
+## 一、方案概述
+
+使用群组的 `Ex` 扩展字段来区分不同类型的群,在客户端根据 `Ex` 字段控制功能展示,服务端保持统一的验证逻辑。
+
+## 二、Ex 字段格式建议
+
+### 2.1 JSON 格式
+
+```json
+{
+ "groupCategory": "super", // 群类型:super(超级群)、normal(普通群)、work(工作群)
+ "features": {
+ "allowLink": true, // 是否允许发送链接
+ "allowQRCode": true, // 是否允许发送二维码
+ "showMemberList": true, // 是否显示成员列表
+ "showAdminPanel": false // 是否显示管理面板
+ },
+ "custom": {} // 其他自定义字段
+}
+```
+
+### 2.2 简化格式(如果只需要区分类型)
+
+```json
+{
+ "type": "super" // super, normal, work
+}
+```
+
+## 三、实现方式
+
+### 3.1 创建群组时设置 Ex
+
+**创建群组时**,在 `GroupInfo.Ex` 中设置群类型:
+
+```go
+// 创建超级群
+groupInfo := &sdkws.GroupInfo{
+ GroupName: "超级群",
+ Ex: `{"type":"super"}`,
+ // ... 其他字段
+}
+
+// 创建普通群
+groupInfo := &sdkws.GroupInfo{
+ GroupName: "普通群",
+ Ex: `{"type":"normal"}`,
+ // ... 其他字段
+}
+```
+
+### 3.2 客户端解析 Ex 字段
+
+**客户端获取群信息后**,解析 `Ex` 字段:
+
+```javascript
+// 示例:JavaScript/TypeScript
+function getGroupType(groupInfo) {
+ try {
+ const ex = JSON.parse(groupInfo.Ex || '{}');
+ return ex.type || 'normal'; // 默认为普通群
+ } catch (e) {
+ return 'normal';
+ }
+}
+
+// 根据群类型控制功能显示
+function shouldShowFeature(groupInfo, feature) {
+ const type = getGroupType(groupInfo);
+
+ switch (feature) {
+ case 'memberList':
+ // 超级群可能不显示成员列表
+ return type !== 'super';
+ case 'adminPanel':
+ // 只有工作群显示管理面板
+ return type === 'work';
+ case 'linkMessage':
+ // 根据类型决定是否允许发送链接
+ return type !== 'super'; // 超级群不允许发送链接
+ default:
+ return true;
+ }
+}
+```
+
+### 3.3 更新群信息时保持 Ex 字段
+
+**使用 `SetGroupInfoEx` 接口更新群信息时**,可以更新 `Ex` 字段:
+
+```go
+// 更新群扩展信息
+req := &pbgroup.SetGroupInfoExReq{
+ GroupID: groupID,
+ Ex: &wrapperspb.StringValue{
+ Value: `{"type":"super","features":{"allowLink":false}}`,
+ },
+}
+```
+
+## 四、优势
+
+### 4.1 服务端优势
+
+1. **统一验证逻辑**:所有群组都使用 `WorkingGroup` 的完整验证,安全可靠
+2. **无需修改代码**:不需要修改服务端的验证、推送等逻辑
+3. **向后兼容**:不影响现有群组,新群组通过 Ex 字段区分
+4. **灵活扩展**:Ex 字段可以存储任意 JSON 数据,便于后续扩展
+
+### 4.2 客户端优势
+
+1. **完全可控**:客户端可以根据业务需求灵活控制功能展示
+2. **易于实现**:只需要解析 JSON 并做条件判断
+3. **不影响现有功能**:对没有 Ex 字段或 Ex 字段格式不正确的群组,可以设置默认行为
+
+## 五、注意事项
+
+### 5.1 Ex 字段格式
+
+- **建议使用 JSON 格式**:便于解析和扩展
+- **字段命名规范**:使用小驼峰或下划线命名
+- **版本兼容**:如果后续需要修改格式,考虑版本号字段
+
+### 5.2 默认值处理
+
+- **客户端**:如果 Ex 字段为空或解析失败,应设置合理的默认值
+- **服务端**:创建群组时,如果没有设置 Ex,可以设置默认值
+
+### 5.3 数据一致性
+
+- **更新 Ex 字段时**:确保 JSON 格式正确
+- **验证**:客户端解析 Ex 字段时,应该处理 JSON 解析错误
+
+## 六、示例代码
+
+### 6.1 Go 服务端示例
+
+```go
+// 创建群组时设置 Ex
+func createSuperGroup(req *pbgroup.CreateGroupReq) {
+ req.GroupInfo.Ex = `{"type":"super","features":{"allowLink":false}}`
+ // ... 创建群组
+}
+
+// 解析 Ex 字段(如果需要)
+func parseGroupEx(ex string) map[string]interface{} {
+ var result map[string]interface{}
+ if err := json.Unmarshal([]byte(ex), &result); err != nil {
+ return make(map[string]interface{})
+ }
+ return result
+}
+```
+
+### 6.2 客户端示例(React/TypeScript)
+
+```typescript
+interface GroupEx {
+ type?: 'super' | 'normal' | 'work';
+ features?: {
+ allowLink?: boolean;
+ allowQRCode?: boolean;
+ showMemberList?: boolean;
+ };
+}
+
+function parseGroupEx(ex: string): GroupEx {
+ try {
+ return JSON.parse(ex || '{}') as GroupEx;
+ } catch {
+ return { type: 'normal' };
+ }
+}
+
+function useGroupFeatures(groupInfo: GroupInfo) {
+ const ex = parseGroupEx(groupInfo.Ex);
+
+ return {
+ canSendLink: ex.features?.allowLink !== false,
+ canSendQRCode: ex.features?.allowQRCode !== false,
+ showMemberList: ex.features?.showMemberList !== false,
+ isSuperGroup: ex.type === 'super',
+ };
+}
+```
+
+## 7. Summary
+
+Using the `Ex` field to distinguish group types is a **conservative, safe, and flexible** approach:
+
+- ✅ **Server**: logic stays unified; no changes to validation or push code
+- ✅ **Client**: fully controllable; feature exposure follows business needs
+- ✅ **Extensibility**: the Ex field can store arbitrary extension data
+- ✅ **Compatibility**: existing groups and features are unaffected
+
+This approach avoids the risk of modifying core server logic while remaining flexible enough to meet varied business requirements.
+
diff --git a/docs/统计接口调用example.md b/docs/统计接口调用example.md
new file mode 100644
index 0000000..720952a
--- /dev/null
+++ b/docs/统计接口调用example.md
@@ -0,0 +1,44 @@
+- open-im-server-deploy: online user count
+```shell
+curl -sS -X POST "http://127.0.0.1:10002/statistics/online_user_count" \
+  -H "Content-Type: application/json" \
+  -H "token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiJpbUFkbWluIiwiUGxhdGZvcm1JRCI6MTAsImV4cCI6MTc3NTI5NTQ1MCwiaWF0IjoxNzY3NTE5NDQ1fQ.IZxx7OoLbuFo6OHoE_W3MQ5TQkj4PKbyzkdL0n--9RE" \
+  -H "operationID: op_online_001" \
+  -d "{}"
+```
+
+- open-im-server-deploy: online user count trend
+```shell
+curl -sS -X POST "http://127.0.0.1:10002/statistics/online_user_count_trend" \
+  -H "Content-Type: application/json" \
+  -H "token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiJpbUFkbWluIiwiUGxhdGZvcm1JRCI6MTAsImV4cCI6MTc3NTI5NTQ1MCwiaWF0IjoxNzY3NTE5NDQ1fQ.IZxx7OoLbuFo6OHoE_W3MQ5TQkj4PKbyzkdL0n--9RE" \
+  -H "operationID: op_online_trend_001" \
+  -d '{"startTime":1700000000000,"endTime":1700086400000,"intervalMinutes":15}'
+```
+
+- open-im-server-deploy: total messages sent by a user
+```shell
+curl -sS -X POST "http://127.0.0.1:10002/statistics/user_send_msg_count" \
+  -H "Content-Type: application/json" \
+  -H "token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiJpbUFkbWluIiwiUGxhdGZvcm1JRCI6MTAsImV4cCI6MTc3NTI5NTQ1MCwiaWF0IjoxNzY3NTE5NDQ1fQ.IZxx7OoLbuFo6OHoE_W3MQ5TQkj4PKbyzkdL0n--9RE" \
+  -H "operationID: op_msg_count_001" \
+  -d "{}"
+```
+
+- open-im-server-deploy: trend of messages sent by a user
+```shell
+curl -sS -X POST "http://127.0.0.1:10002/statistics/user_send_msg_count_trend" \
+  -H "Content-Type: application/json" \
+  -H "token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiJpbUFkbWluIiwiUGxhdGZvcm1JRCI6MTAsImV4cCI6MTc3NTI5NTQ1MCwiaWF0IjoxNzY3NTE5NDQ1fQ.IZxx7OoLbuFo6OHoE_W3MQ5TQkj4PKbyzkdL0n--9RE" \
+  -H "operationID: op_msg_trend_001" \
+  -d '{"userID":"u001","chatType":1,"startTime":1700000000000,"endTime":1700086400000,"intervalMinutes":30}'
+```
+
+- open-im-server-deploy: filter and query messages sent by a user
+```shell
+curl -sS -X POST "http://127.0.0.1:10002/statistics/user_send_msg_query" \
+  -H "Content-Type: application/json" \
+  -H "token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiJpbUFkbWluIiwiUGxhdGZvcm1JRCI6MTAsImV4cCI6MTc3NTI5NTQ1MCwiaWF0IjoxNzY3NTE5NDQ1fQ.IZxx7OoLbuFo6OHoE_W3MQ5TQkj4PKbyzkdL0n--9RE" \
+  -H "operationID: op_msg_query_001" \
+  -d '{"userID":"u001","startTime":1700000000000,"endTime":1700086400000,"content":"keyword","pageNumber":1,"showNumber":50}'
+```
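The same calls can be made programmatically. A Go sketch that mirrors the headers used by the curl examples above; the helper name `newStatsRequest` and the token placeholder are our own, not an official SDK:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newStatsRequest builds a POST request in the shape the statistics
// endpoints expect: a JSON body plus token and operationID headers.
func newStatsRequest(baseURL, path, token, operationID string, body []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, baseURL+path, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("token", token)
	req.Header.Set("operationID", operationID)
	return req, nil
}

func main() {
	req, err := newStatsRequest("http://127.0.0.1:10002",
		"/statistics/online_user_count", "<admin-token>", "op_online_001", []byte("{}"))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// Send with http.DefaultClient.Do(req) against a running server.
}
```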
diff --git a/go.mod b/go.mod
new file mode 100644
index 0000000..12cbd85
--- /dev/null
+++ b/go.mod
@@ -0,0 +1,231 @@
+module git.imall.cloud/openim/open-im-server-deploy
+
+go 1.22.7
+
+require (
+ firebase.google.com/go/v4 v4.14.1
+ git.imall.cloud/openim/protocol v1.0.4
+ github.com/dtm-labs/rockscache v0.1.1
+ github.com/gin-gonic/gin v1.9.1
+ github.com/go-playground/validator/v10 v10.20.0
+ github.com/gogo/protobuf v1.3.2 // indirect
+ github.com/golang-jwt/jwt/v4 v4.5.1
+ github.com/gorilla/websocket v1.5.1
+ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
+ github.com/mitchellh/mapstructure v1.5.0
+ github.com/openimsdk/tools v0.0.50-alpha.105
+ github.com/pkg/errors v0.9.1 // indirect
+ github.com/prometheus/client_golang v1.18.0
+ github.com/stretchr/testify v1.10.0
+ go.mongodb.org/mongo-driver v1.14.0
+ google.golang.org/api v0.170.0
+ google.golang.org/grpc v1.71.0
+ google.golang.org/protobuf v1.36.4
+ gopkg.in/yaml.v3 v3.0.1
+)
+
+replace git.imall.cloud/openim/protocol => ../protocol
+
+require github.com/google/uuid v1.6.0
+
+require (
+ github.com/aws/aws-sdk-go-v2 v1.32.5
+ github.com/aws/aws-sdk-go-v2/credentials v1.17.46
+ github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1
+ github.com/fatih/color v1.14.1
+ github.com/gin-contrib/gzip v1.0.1
+ github.com/go-redis/redis v6.15.9+incompatible
+ github.com/go-redis/redismock/v9 v9.2.0
+ github.com/hashicorp/golang-lru/v2 v2.0.7
+ github.com/kelindar/bitmap v1.5.2
+ github.com/likexian/gokit v0.25.13
+ github.com/makiuchi-d/gozxing v0.1.1
+ github.com/openimsdk/gomake v0.0.15-alpha.11
+ github.com/redis/go-redis/v9 v9.4.0
+ github.com/robfig/cron/v3 v3.0.1
+ github.com/shirou/gopsutil v3.21.11+incompatible
+ github.com/spf13/viper v1.18.2
+ go.etcd.io/etcd/client/v3 v3.5.13
+ go.uber.org/automaxprocs v1.5.3
+ golang.org/x/sync v0.10.0
+ k8s.io/api v0.31.2
+ k8s.io/apimachinery v0.31.2
+ k8s.io/client-go v0.31.2
+)
+
+require (
+ cloud.google.com/go v0.112.1 // indirect
+ cloud.google.com/go/compute/metadata v0.6.0 // indirect
+ cloud.google.com/go/firestore v1.15.0 // indirect
+ cloud.google.com/go/iam v1.1.7 // indirect
+ cloud.google.com/go/longrunning v0.5.5 // indirect
+ cloud.google.com/go/storage v1.40.0 // indirect
+ github.com/IBM/sarama v1.43.0 // indirect
+ github.com/MicahParks/keyfunc v1.9.0 // indirect
+ github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible // indirect
+ github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 // indirect
+ github.com/aws/aws-sdk-go-v2/config v1.28.5 // indirect
+ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 // indirect
+ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 // indirect
+ github.com/aws/smithy-go v1.22.1 // indirect
+ github.com/beorn7/perks v1.0.1 // indirect
+ github.com/bytedance/sonic v1.11.6 // indirect
+ github.com/bytedance/sonic/loader v0.1.1 // indirect
+ github.com/cespare/xxhash/v2 v2.3.0 // indirect
+ github.com/clbanning/mxj v1.8.4 // indirect
+ github.com/cloudwego/base64x v0.1.4 // indirect
+ github.com/cloudwego/iasm v0.2.0 // indirect
+ github.com/coreos/go-semver v0.3.0 // indirect
+ github.com/coreos/go-systemd/v22 v22.3.2 // indirect
+ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
+ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
+ github.com/dustin/go-humanize v1.0.1 // indirect
+ github.com/eapache/go-resiliency v1.6.0 // indirect
+ github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 // indirect
+ github.com/eapache/queue v1.1.0 // indirect
+ github.com/emicklei/go-restful/v3 v3.11.0 // indirect
+ github.com/felixge/httpsnoop v1.0.4 // indirect
+ github.com/fsnotify/fsnotify v1.9.0 // indirect
+ github.com/fxamacker/cbor/v2 v2.7.0 // indirect
+ github.com/gabriel-vasile/mimetype v1.4.3 // indirect
+ github.com/gin-contrib/sse v0.1.0 // indirect
+ github.com/go-logr/logr v1.4.2 // indirect
+ github.com/go-logr/stdr v1.2.2 // indirect
+ github.com/go-ole/go-ole v1.2.6 // indirect
+ github.com/go-openapi/jsonpointer v0.19.6 // indirect
+ github.com/go-openapi/jsonreference v0.20.2 // indirect
+ github.com/go-openapi/swag v0.22.4 // indirect
+ github.com/go-playground/universal-translator v0.18.1 // indirect
+ github.com/go-zookeeper/zk v1.0.3 // indirect
+ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+ github.com/golang/protobuf v1.5.4 // indirect
+ github.com/golang/snappy v0.0.4 // indirect
+ github.com/google/gnostic-models v0.6.8 // indirect
+ github.com/google/go-cmp v0.6.0 // indirect
+ github.com/google/go-querystring v1.1.0 // indirect
+ github.com/google/gofuzz v1.2.0 // indirect
+ github.com/google/s2a-go v0.1.7 // indirect
+ github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
+ github.com/googleapis/gax-go/v2 v2.12.3 // indirect
+ github.com/hashicorp/errwrap v1.1.0 // indirect
+ github.com/hashicorp/go-multierror v1.1.1 // indirect
+ github.com/hashicorp/go-uuid v1.0.3 // indirect
+ github.com/hashicorp/hcl v1.0.0 // indirect
+ github.com/inconshreveable/mousetrap v1.1.0 // indirect
+ github.com/jcmturner/aescts/v2 v2.0.0 // indirect
+ github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
+ github.com/jcmturner/gofork v1.7.6 // indirect
+ github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect
+ github.com/jcmturner/rpc/v2 v2.0.3 // indirect
+ github.com/jinzhu/copier v0.4.0 // indirect
+ github.com/jinzhu/inflection v1.0.0 // indirect
+ github.com/jinzhu/now v1.1.5 // indirect
+ github.com/josharian/intern v1.0.0 // indirect
+ github.com/json-iterator/go v1.1.12 // indirect
+ github.com/kelindar/simd v1.1.2 // indirect
+ github.com/klauspost/compress v1.17.7 // indirect
+ github.com/klauspost/cpuid/v2 v2.2.7 // indirect
+ github.com/leodido/go-urn v1.4.0 // indirect
+ github.com/lestrrat-go/strftime v1.0.6 // indirect
+ github.com/lithammer/shortuuid v3.0.0+incompatible // indirect
+ github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
+ github.com/magefile/mage v1.15.0 // indirect
+ github.com/magiconair/properties v1.8.7 // indirect
+ github.com/mailru/easyjson v0.7.7 // indirect
+ github.com/mattn/go-colorable v0.1.13 // indirect
+ github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
+ github.com/minio/md5-simd v1.1.2 // indirect
+ github.com/minio/minio-go/v7 v7.0.69 // indirect
+ github.com/minio/sha256-simd v1.0.1 // indirect
+ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
+ github.com/modern-go/reflect2 v1.0.2 // indirect
+ github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe // indirect
+ github.com/mozillazg/go-httpheader v0.4.0 // indirect
+ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
+ github.com/openimsdk/protocol v0.0.72 // indirect
+ github.com/pelletier/go-toml/v2 v2.2.2 // indirect
+ github.com/pierrec/lz4/v4 v4.1.21 // indirect
+ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
+ github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
+ github.com/prometheus/client_model v0.5.0 // indirect
+ github.com/prometheus/common v0.45.0 // indirect
+ github.com/prometheus/procfs v0.12.0 // indirect
+ github.com/qiniu/go-sdk/v7 v7.18.2 // indirect
+ github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
+ github.com/rs/xid v1.5.0 // indirect
+ github.com/sagikazarmark/locafero v0.4.0 // indirect
+ github.com/sagikazarmark/slog-shim v0.1.0 // indirect
+ github.com/shirou/gopsutil/v3 v3.24.5 // indirect
+ github.com/shoenig/go-m1cpu v0.1.6 // indirect
+ github.com/sourcegraph/conc v0.3.0 // indirect
+ github.com/spf13/afero v1.11.0 // indirect
+ github.com/spf13/cast v1.6.0 // indirect
+ github.com/spf13/pflag v1.0.5 // indirect
+ github.com/stretchr/objx v0.5.2 // indirect
+ github.com/subosito/gotenv v1.6.0 // indirect
+ github.com/tencentyun/cos-go-sdk-v5 v0.7.47 // indirect
+ github.com/tklauser/go-sysconf v0.3.13 // indirect
+ github.com/tklauser/numcpus v0.7.0 // indirect
+ github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
+ github.com/x448/float16 v0.8.4 // indirect
+ github.com/xdg-go/pbkdf2 v1.0.0 // indirect
+ github.com/xdg-go/scram v1.1.2 // indirect
+ github.com/xdg-go/stringprep v1.0.4 // indirect
+ github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
+ github.com/yusufpapurcu/wmi v1.2.4 // indirect
+ go.etcd.io/etcd/api/v3 v3.5.13 // indirect
+ go.etcd.io/etcd/client/pkg/v3 v3.5.13 // indirect
+ go.opencensus.io v0.24.0 // indirect
+ go.opentelemetry.io/auto/sdk v1.1.0 // indirect
+ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect
+ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
+ go.opentelemetry.io/otel v1.34.0 // indirect
+ go.opentelemetry.io/otel/metric v1.34.0 // indirect
+ go.opentelemetry.io/otel/trace v1.34.0 // indirect
+ go.uber.org/atomic v1.9.0 // indirect
+ go.uber.org/multierr v1.11.0 // indirect
+ golang.org/x/arch v0.7.0 // indirect
+ golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
+ golang.org/x/image v0.15.0 // indirect
+ golang.org/x/net v0.34.0 // indirect
+ golang.org/x/oauth2 v0.25.0 // indirect
+ golang.org/x/sys v0.29.0 // indirect
+ golang.org/x/term v0.28.0 // indirect
+ golang.org/x/text v0.21.0 // indirect
+ golang.org/x/time v0.5.0 // indirect
+ golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect
+ google.golang.org/appengine/v2 v2.0.2 // indirect
+ google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 // indirect
+ google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 // indirect
+ google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
+ gopkg.in/inf.v0 v0.9.1 // indirect
+ gopkg.in/yaml.v2 v2.4.0 // indirect
+ gorm.io/gorm v1.25.8 // indirect
+ k8s.io/klog/v2 v2.130.1 // indirect
+ k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
+ k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect
+ sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
+ sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
+ sigs.k8s.io/yaml v1.4.0 // indirect
+)
+
+require (
+ github.com/go-playground/locales v0.14.1 // indirect
+ github.com/goccy/go-json v0.10.2 // indirect
+ github.com/mattn/go-isatty v0.0.20 // indirect
+ github.com/spf13/cobra v1.8.0
+ github.com/ugorji/go/codec v1.2.12 // indirect
+ go.uber.org/zap v1.24.0 // indirect
+ golang.org/x/crypto v0.32.0 // indirect
+ gopkg.in/ini.v1 v1.67.0 // indirect
+)
diff --git a/go.sum b/go.sum
new file mode 100644
index 0000000..1fa5807
--- /dev/null
+++ b/go.sum
@@ -0,0 +1,683 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.112.1 h1:uJSeirPke5UNZHIb4SxfZklVSiWWVqW4oXlETwZziwM=
+cloud.google.com/go v0.112.1/go.mod h1:+Vbu+Y1UU+I1rjmzeMOb/8RfkKJK2Gyxi1X6jJCZLo4=
+cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=
+cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
+cloud.google.com/go/firestore v1.15.0 h1:/k8ppuWOtNuDHt2tsRV42yI21uaGnKDEQnRFeBpbFF8=
+cloud.google.com/go/firestore v1.15.0/go.mod h1:GWOxFXcv8GZUtYpWHw/w6IuYNux/BtmeVTMmjrm4yhk=
+cloud.google.com/go/iam v1.1.7 h1:z4VHOhwKLF/+UYXAJDFwGtNF0b6gjsW1Pk9Ml0U/IoM=
+cloud.google.com/go/iam v1.1.7/go.mod h1:J4PMPg8TtyurAUvSmPj8FF3EDgY1SPRZxcUGrn7WXGA=
+cloud.google.com/go/longrunning v0.5.5 h1:GOE6pZFdSrTb4KAiKnXsJBtlE6mEyaW44oKyMILWnOg=
+cloud.google.com/go/longrunning v0.5.5/go.mod h1:WV2LAxD8/rg5Z1cNW6FJ/ZpX4E4VnDnoTk0yawPBB7s=
+cloud.google.com/go/storage v1.40.0 h1:VEpDQV5CJxFmJ6ueWNsKxcr1QAYOXEgxDa+sBbJahPw=
+cloud.google.com/go/storage v1.40.0/go.mod h1:Rrj7/hKlG87BLqDJYtwR0fbPld8uJPbQ2ucUMY7Ir0g=
+firebase.google.com/go/v4 v4.14.1 h1:4qiUETaFRWoFGE1XP5VbcEdtPX93Qs+8B/7KvP2825g=
+firebase.google.com/go/v4 v4.14.1/go.mod h1:fgk2XshgNDEKaioKco+AouiegSI9oTWVqRaBdTTGBoM=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/IBM/sarama v1.43.0 h1:YFFDn8mMI2QL0wOrG0J2sFoVIAFl7hS9JQi2YZsXtJc=
+github.com/IBM/sarama v1.43.0/go.mod h1:zlE6HEbC/SMQ9mhEYaF7nNLYOUyrs0obySKCckWP9BM=
+github.com/MicahParks/keyfunc v1.9.0 h1:lhKd5xrFHLNOWrDc4Tyb/Q1AJ4LCzQ48GVJyVIID3+o=
+github.com/MicahParks/keyfunc v1.9.0/go.mod h1:IdnCilugA0O/99dW+/MkvlyrsX8+L8+x95xuVNtM5jw=
+github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=
+github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
+github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
+github.com/aws/aws-sdk-go-v2 v1.32.5 h1:U8vdWJuY7ruAkzaOdD7guwJjD06YSKmnKCJs7s3IkIo=
+github.com/aws/aws-sdk-go-v2 v1.32.5/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 h1:ZY3108YtBNq96jNZTICHxN1gSBSbnvIdYwwqnvCV4Mc=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1/go.mod h1:t8PYl/6LzdAqsU4/9tz28V/kU+asFePvpOMkdul0gEQ=
+github.com/aws/aws-sdk-go-v2/config v1.28.5 h1:Za41twdCXbuyyWv9LndXxZZv3QhTG1DinqlFsSuvtI0=
+github.com/aws/aws-sdk-go-v2/config v1.28.5/go.mod h1:4VsPbHP8JdcdUDmbTVgNL/8w9SqOkM5jyY8ljIxLO3o=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.46 h1:AU7RcriIo2lXjUfHFnFKYsLCwgbz1E7Mm95ieIRDNUg=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.46/go.mod h1:1FmYyLGL08KQXQ6mcTlifyFXfJVCNJTVGuQP4m0d/UA=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 h1:sDSXIrlsFSFJtWKLQS4PUWRvrT580rrnuLydJrCQ/yA=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20/go.mod h1:WZ/c+w0ofps+/OUqMwWgnfrgzZH1DZO1RIkktICsqnY=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 h1:4usbeaes3yJnCFC7kfeyhkdkPtoRYPa/hTmCqMpKpLI=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24/go.mod h1:5CI1JemjVwde8m2WG3cz23qHKPOxbpkq0HaoreEgLIY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 h1:N1zsICrQglfzaBnrfM0Ys00860C+QFwu6u/5+LomP+o=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24/go.mod h1:dCn9HbJ8+K31i8IQ8EWmWj0EiIk0+vKiHNMxTTYveAg=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 h1:40Q4X5ebZruRtknEZH/bg91sT5pR853F7/1X9QRbI54=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4/go.mod h1:u77N7eEECzUv7F0xl2gcfK/vzc8wcjWobpy+DcrLJ5E=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 h1:6DRKQc+9cChgzL5gplRGusI5dBGeiEod4m/pmGbcX48=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4/go.mod h1:s8ORvrW4g4v7IvYKIAoBg17w3GQ+XuwXDXYrQ5SkzU0=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 h1:wtpJ4zcwrSbwhECWQoI/g6WM9zqCcSpHDJIWSbMLOu4=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5/go.mod h1:qu/W9HXQbbQ4+1+JcZp0ZNPV31ym537ZJN+fiS7Ti8E=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 h1:o3DcfCxGDIT20pTbVKVhp3vWXOj/VvgazNJvumWeYW0=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4/go.mod h1:Uy0KVOxuTK2ne+/PKQ+VvEeWmjMMksE17k/2RK/r5oM=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 h1:1w11lfXOa8HoHoSlNtt4mqv/N3HmDOa+OnUH3Y9DHm8=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1/go.mod h1:dqJ5JBL0clzgHriH35Amx3LRFY6wNIPUX7QO/BerSBo=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 h1:3zu537oLmsPfDMyjnUS2g+F2vITgy5pB74tHI+JBNoM=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.6/go.mod h1:WJSZH2ZvepM6t6jwu4w/Z45Eoi75lPN7DcydSRtJg6Y=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 h1:K0OQAsDywb0ltlFrZm0JHPY3yZp/S9OaoLU33S7vPS8=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5/go.mod h1:ORITg+fyuMoeiQFiVGoqB3OydVTLkClw/ljbblMq6Cc=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 h1:6SZUVRQNvExYlMLbHdlKB48x0fLbc2iVROyaNEwBHbU=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.1/go.mod h1:GqWyYCwLXnlUB1lOAXQyNSPqPLQJvmo8J0DWBzp9mtg=
+github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
+github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
+github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
+github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
+github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
+github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
+github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
+github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
+github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
+github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
+github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
+github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
+github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
+github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/clbanning/mxj v1.8.4 h1:HuhwZtbyvyOw+3Z1AowPkU87JkJUSv751ELWaiTpj8I=
+github.com/clbanning/mxj v1.8.4/go.mod h1:BVjHeAH+rl9rs6f+QIpeRl0tfu10SXn1pUSa5PVGJng=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
+github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
+github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
+github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
+github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
+github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
+github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
+github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
+github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
+github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
+github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
+github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
+github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
+github.com/dtm-labs/rockscache v0.1.1 h1:6S1vgaHvGqrLd8Ka4hRTKeKPV7v+tT0MSkTIX81LRyA=
+github.com/dtm-labs/rockscache v0.1.1/go.mod h1:c76WX0kyIibmQ2ACxUXvDvaLykoPakivMqIxt+UzE7A=
+github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
+github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
+github.com/eapache/go-resiliency v1.6.0 h1:CqGDTLtpwuWKn6Nj3uNUdflaq+/kIPsg0gfNzHton30=
+github.com/eapache/go-resiliency v1.6.0/go.mod h1:5yPzW0MIvSe0JDsv0v+DvcjEv2FyD6iZYSs1ZI+iQho=
+github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 h1:Oy0F4ALJ04o5Qqpdz8XLIpNA3WM/iSIXqxtqo7UGVws=
+github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3/go.mod h1:YvSRo5mw33fLEx1+DlK6L2VV43tJt5Eyel9n9XBcR+0=
+github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
+github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
+github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
+github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
+github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
+github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
+github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
+github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
+github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
+github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
+github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
+github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
+github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
+github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
+github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
+github.com/gin-contrib/gzip v1.0.1 h1:HQ8ENHODeLY7a4g1Au/46Z92bdGFl74OhxcZble9WJE=
+github.com/gin-contrib/gzip v1.0.1/go.mod h1:njt428fdUNRvjuJf16tZMYZ2Yl+WQB53X5wmhDwXvC4=
+github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
+github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
+github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
+github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
+github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
+github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
+github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
+github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
+github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
+github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
+github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
+github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
+github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
+github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
+github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
+github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
+github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
+github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8=
+github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
+github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
+github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
+github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA=
+github.com/go-playground/universal-translator v0.18.0/go.mod h1:UvRDBj+xPUEGrFYl+lu/H90nyDXpg0fqeB/AQUGNTVA=
+github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
+github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
+github.com/go-playground/validator/v10 v10.8.0/go.mod h1:9JhgTzTaE31GZDpH/HSvHiRJrJ3iKAgqqH0Bl/Ocjdk=
+github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
+github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
+github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
+github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
+github.com/go-redis/redismock/v9 v9.2.0 h1:ZrMYQeKPECZPjOj5u9eyOjg8Nnb0BS9lkVIZ6IpsKLw=
+github.com/go-redis/redismock/v9 v9.2.0/go.mod h1:18KHfGDK4Y6c2R0H38EUGWAdc7ZQS9gfYxc94k7rWT0=
+github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
+github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
+github.com/go-zookeeper/zk v1.0.3 h1:7M2kwOsc//9VeeFiPtf+uSJlVpU66x9Ba5+8XK7/TDg=
+github.com/go-zookeeper/zk v1.0.3/go.mod h1:nOB03cncLtlp4t+UAkGSV+9beXP/akpekBwL+UX1Qcw=
+github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
+github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
+github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
+github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
+github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
+github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
+github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
+github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
+github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
+github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
+github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
+github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/martian/v3 v3.3.2 h1:IqNFLAmvJOgVlpdEBiQbDc2EwKW77amAycfTuWKdfvw=
+github.com/google/martian/v3 v3.3.2/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
+github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af h1:kmjWCqn2qkEml422C2Rrd27c3VGxi6a/6HNq8QmHRKM=
+github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo=
+github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o=
+github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs=
+github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
+github.com/googleapis/gax-go/v2 v2.12.3 h1:5/zPPDvw8Q1SuXjrqrZslrqT7dL/uJT2CQii/cLCKqA=
+github.com/googleapis/gax-go/v2 v2.12.3/go.mod h1:AKloxT6GtNbaLm8QTNSidHUVsHYcBHwWRvkNFJUQcS4=
+github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
+github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
+github.com/gorilla/websocket v1.5.1 h1:gmztn0JnHVt9JZquRuzLw3g4wouNVzKL15iLr/zn/QY=
+github.com/gorilla/websocket v1.5.1/go.mod h1:x3kM2JMyaluk02fnUJpQuwD2dCS5NDG2ZHL0uE0tcaY=
+github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92BcuyuQ/YW4NSIpoGtfXNho=
+github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
+github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
+github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
+github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
+github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
+github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
+github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=
+github.com/jcmturner/dnsutils/v2 v2.0.0/go.mod h1:b0TnjGOvI/n42bZa+hmXL+kFJZsFT7G4t3HTlQ184QM=
+github.com/jcmturner/gofork v1.7.6 h1:QH0l3hzAU1tfT3rZCnW5zXl+orbkNMMRGJfdJjHVETg=
+github.com/jcmturner/gofork v1.7.6/go.mod h1:1622LH6i/EZqLloHfE7IeZ0uEJwMSUyQ/nDd82IeqRo=
+github.com/jcmturner/goidentity/v6 v6.0.1 h1:VKnZd2oEIMorCTsFBnJWbExfNN7yZr3EhJAxwOkZg6o=
+github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxyjn/mY9zx4tFonSg=
+github.com/jcmturner/gokrb5/v8 v8.4.4 h1:x1Sv4HaTpepFkXbt2IkL29DXRf8sOfZXo8eRKh687T8=
+github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP+F6aCACiMrs=
+github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
+github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
+github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
+github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
+github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
+github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
+github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
+github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
+github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4=
+github.com/jonboulle/clockwork v0.4.0/go.mod h1:xgRqUGwRcjKCO1vbZUEtSLrqKoPSsUpK7fnezOII0kc=
+github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
+github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
+github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
+github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
+github.com/kelindar/bitmap v1.5.2 h1:XwX7CTvJtetQZ64zrOkApoZZHBJRkjE23NfqUALA/HE=
+github.com/kelindar/bitmap v1.5.2/go.mod h1:j3qZjxH9s4OtvsnFTP2bmPkjqil9Y2xQlxPYHexasEA=
+github.com/kelindar/simd v1.1.2 h1:KduKb+M9cMY2HIH8S/cdJyD+5n5EGgq+Aeeleos55To=
+github.com/kelindar/simd v1.1.2/go.mod h1:inq4DFudC7W8L5fhxoeZflLRNpWSs0GNx6MlWFvuvr0=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.17.7 h1:ehO88t2UGzQK66LMdE8tibEd1ErmzZjNEqWkjLAKQQg=
+github.com/klauspost/compress v1.17.7/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
+github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
+github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
+github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
+github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
+github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
+github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
+github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
+github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
+github.com/lestrrat-go/envload v0.0.0-20180220234015-a3eb8ddeffcc h1:RKf14vYWi2ttpEmkA4aQ3j4u9dStX2t4M8UM6qqNsG8=
+github.com/lestrrat-go/envload v0.0.0-20180220234015-a3eb8ddeffcc/go.mod h1:kopuH9ugFRkIXf3YoqHKyrJ9YfUFsckUU9S7B+XP+is=
+github.com/lestrrat-go/file-rotatelogs v2.4.0+incompatible h1:Y6sqxHMyB1D2YSzWkLibYKgg+SwmyFU9dF2hn6MdTj4=
+github.com/lestrrat-go/file-rotatelogs v2.4.0+incompatible/go.mod h1:ZQnN8lSECaebrkQytbHj4xNgtg8CR7RYXnPok8e0EHA=
+github.com/lestrrat-go/strftime v1.0.6 h1:CFGsDEt1pOpFNU+TJB0nhz9jl+K0hZSLE205AhTIGQQ=
+github.com/lestrrat-go/strftime v1.0.6/go.mod h1:f7jQKgV5nnJpYgdEasS+/y7EsTb8ykN2z68n3TtcTaw=
+github.com/likexian/gokit v0.25.13 h1:p2Uw3+6fGG53CwdU2Dz0T6bOycdb2+bAFAa3ymwWVkM=
+github.com/likexian/gokit v0.25.13/go.mod h1:qQhEWFBEfqLCO3/vOEo2EDKd+EycekVtUK4tex+l2H4=
+github.com/lithammer/shortuuid v3.0.0+incompatible h1:NcD0xWW/MZYXEHa6ITy6kaXN5nwm/V115vj2YXfhS0w=
+github.com/lithammer/shortuuid v3.0.0+incompatible/go.mod h1:FR74pbAuElzOUuenUHTK2Tciko1/vKuIKS9dSkDrA4w=
+github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
+github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
+github.com/magefile/mage v1.15.0 h1:BvGheCMAsG3bWUDbZ8AyXXpCNwU9u5CB6sM+HNb9HYg=
+github.com/magefile/mage v1.15.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
+github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
+github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
+github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
+github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
+github.com/makiuchi-d/gozxing v0.1.1 h1:xxqijhoedi+/lZlhINteGbywIrewVdVv2wl9r5O9S1I=
+github.com/makiuchi-d/gozxing v0.1.1/go.mod h1:eRIHbOjX7QWxLIDJoQuMLhuXg9LAuw6znsUtRkNw9DU=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg=
+github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k=
+github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
+github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
+github.com/minio/minio-go/v7 v7.0.69 h1:l8AnsQFyY1xiwa/DaQskY4NXSLA2yrGsW5iD9nRPVS0=
+github.com/minio/minio-go/v7 v7.0.69/go.mod h1:XAvOPJQ5Xlzk5o3o/ArO2NMbhSGkimC+bpW/ngRKDmQ=
+github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
+github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
+github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
+github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
+github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe h1:iruDEfMl2E6fbMZ9s0scYfZQ84/6SPL6zC8ACM2oIL0=
+github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
+github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60=
+github.com/mozillazg/go-httpheader v0.4.0 h1:aBn6aRXtFzyDLZ4VIRLsZbbJloagQfMnCiYgOq6hK4w=
+github.com/mozillazg/go-httpheader v0.4.0/go.mod h1:PuT8h0pw6efvp8ZeUec1Rs7dwjK08bt6gKSReGMqtdA=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
+github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
+github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
+github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
+github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
+github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA=
+github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To=
+github.com/onsi/gomega v1.25.0 h1:Vw7br2PCDYijJHSfBOWhov+8cAnUf8MfMaIOV323l6Y=
+github.com/onsi/gomega v1.25.0/go.mod h1:r+zV744Re+DiYCIPRlYOTxn0YkOLcAnW8k1xXdMPGhM=
+github.com/openimsdk/gomake v0.0.15-alpha.11 h1:PQudYDRESYeYlUYrrLLJhYIlUPO5x7FAx+o5El9U/Bw=
+github.com/openimsdk/gomake v0.0.15-alpha.11/go.mod h1:PndCozNc2IsQIciyn9mvEblYWZwJmAI+06z94EY+csI=
+github.com/openimsdk/protocol v0.0.72 h1:K+vslwaR7lDXyBzb07UuEQITaqsgighz7NyXVIWsu6A=
+github.com/openimsdk/protocol v0.0.72/go.mod h1:OZQA9FR55lseYoN2Ql1XAHYKHJGu7OMNkUbuekrKCM8=
+github.com/openimsdk/tools v0.0.50-alpha.105 h1:axuCvKXhxY2RGLhpMMFNgBtE0B65T2Sr1JDW3UD9nBs=
+github.com/openimsdk/tools v0.0.50-alpha.105/go.mod h1:x9i/e+WJFW4tocy6RNJQ9NofQiP3KJ1Y576/06TqOG4=
+github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
+github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
+github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
+github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
+github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
+github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
+github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
+github.com/prometheus/client_golang v1.18.0 h1:HzFfmkOzH5Q8L8G+kSJKUx5dtG87sewO+FoDDqP5Tbk=
+github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
+github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
+github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM=
+github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
+github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
+github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
+github.com/qiniu/dyn v1.3.0/go.mod h1:E8oERcm8TtwJiZvkQPbcAh0RL8jO1G0VXJMW3FAWdkk=
+github.com/qiniu/go-sdk/v7 v7.18.2 h1:vk9eo5OO7aqgAOPF0Ytik/gt7CMKuNgzC/IPkhda6rk=
+github.com/qiniu/go-sdk/v7 v7.18.2/go.mod h1:nqoYCNo53ZlGA521RvRethvxUDvXKt4gtYXOwye868w=
+github.com/qiniu/x v1.10.5/go.mod h1:03Ni9tj+N2h2aKnAz+6N0Xfl8FwMEDRC2PAlxekASDs=
+github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
+github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
+github.com/redis/go-redis/v9 v9.4.0 h1:Yzoz33UZw9I/mFhx4MNrB6Fk+XHO1VukNcCa1+lwyKk=
+github.com/redis/go-redis/v9 v9.4.0/go.mod h1:hdY0cQFCN4fnSYT6TkisLufl/4W5UIXyv0b/CLO2V2M=
+github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
+github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
+github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
+github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
+github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
+github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
+github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
+github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
+github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
+github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
+github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
+github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
+github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
+github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
+github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
+github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
+github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
+github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
+github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
+github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
+github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
+github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
+github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
+github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
+github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
+github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0=
+github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/viper v1.18.2 h1:LUXCnvUvSM6FXAsj6nnfc8Q2tp1dIgUfY9Kc8GsSOiQ=
+github.com/spf13/viper v1.18.2/go.mod h1:EKmWIqdnk5lOcmR72yw6hS+8OPYcwD0jteitLMVB+yk=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
+github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
+github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.563/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/kms v1.0.563/go.mod h1:uom4Nvi9W+Qkom0exYiJ9VWJjXwyxtPYTkKkaLMlfE0=
+github.com/tencentyun/cos-go-sdk-v5 v0.7.47 h1:uoS4Sob16qEYoapkqJq1D1Vnsy9ira9BfNUMtoFYTI4=
+github.com/tencentyun/cos-go-sdk-v5 v0.7.47/go.mod h1:DH9US8nB+AJXqwu/AMOrCFN1COv3dpytXuJWHgdg7kE=
+github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4=
+github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0=
+github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4=
+github.com/tklauser/numcpus v0.7.0/go.mod h1:bb6dMVcj8A42tSE7i32fsIUCbQNllK5iDguyOZRUzAY=
+github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
+github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
+github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
+github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
+github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
+github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
+github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
+github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
+github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
+github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
+github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
+github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
+github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d h1:splanxYIlg+5LfHAM6xpdFEAYOk8iySO56hMFq6uLyA=
+github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
+github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
+go.etcd.io/etcd/api/v3 v3.5.13 h1:8WXU2/NBge6AUF1K1gOexB6e07NgsN1hXK0rSTtgSp4=
+go.etcd.io/etcd/api/v3 v3.5.13/go.mod h1:gBqlqkcMMZMVTMm4NDZloEVJzxQOQIls8splbqBDa0c=
+go.etcd.io/etcd/client/pkg/v3 v3.5.13 h1:RVZSAnWWWiI5IrYAXjQorajncORbS0zI48LQlE2kQWg=
+go.etcd.io/etcd/client/pkg/v3 v3.5.13/go.mod h1:XxHT4u1qU12E2+po+UVPrEeL94Um6zL58ppuJWXSAB8=
+go.etcd.io/etcd/client/v3 v3.5.13 h1:o0fHTNJLeO0MyVbc7I3fsCf6nrOqn5d+diSarKnB2js=
+go.etcd.io/etcd/client/v3 v3.5.13/go.mod h1:cqiAeY8b5DEEcpxvgWKsbLIWNM/8Wy2xJSDMtioMcoI=
+go.mongodb.org/mongo-driver v1.14.0 h1:P98w8egYRjYe3XDjxhYJagTokP/H6HzlsnojRgZRd80=
+go.mongodb.org/mongo-driver v1.14.0/go.mod h1:Vzb0Mk/pa7e6cWw85R4F/endUC3u0U9jGcNU603k65c=
+go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
+go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
+go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
+go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg=
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
+go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
+go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
+go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
+go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
+go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
+go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
+go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
+go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
+go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
+go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
+go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
+go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
+go.uber.org/automaxprocs v1.5.3 h1:kWazyxZUrS3Gs4qUpbwo5kEIMGe/DAvi5Z4tl2NW4j8=
+go.uber.org/automaxprocs v1.5.3/go.mod h1:eRbA25aqJrxAbsLO0xy5jVwPt7FQnRgjW+efnwa1WM0=
+go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
+go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
+go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
+go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
+golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
+golang.org/x/arch v0.7.0 h1:pskyeJh/3AmoQ8CPE95vxHLqp1G1GfGNXTmcl9NEKTc=
+golang.org/x/arch v0.7.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
+golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
+golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
+golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
+golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
+golang.org/x/image v0.15.0 h1:kOELfmgrmJlw4Cdb7g/QGuB3CvDrXbqEIww/pNtNBm8=
+golang.org/x/image v0.15.0/go.mod h1:HUYqC05R2ZcZ3ejNQsIHQDQiwWM4JBqmm6MKANTp4LE=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20220708220712-1185a9018129/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
+golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
+golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
+golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
+golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
+golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg=
+golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
+golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
+golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
+golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
+golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
+golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 h1:+cNy6SZtPcJQH3LJVLOSmiC7MMxXNOb3PU/VUEz+EhU=
+golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=
+google.golang.org/api v0.170.0 h1:zMaruDePM88zxZBG+NG8+reALO2rfLhe/JShitLyT48=
+google.golang.org/api v0.170.0/go.mod h1:/xql9M2btF85xac/VAm4PsLMTLVGUOpq4BE9R8jyNy8=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine/v2 v2.0.2 h1:MSqyWy2shDLwG7chbwBJ5uMyw6SNqJzhJHNDwYB0Akk=
+google.golang.org/appengine/v2 v2.0.2/go.mod h1:PkgRUWz4o1XOvbqtWTkBtCitEJ5Tp4HoVEdMMYQR/8E=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 h1:9+tzLLstTlPTRyJTh+ah5wIMsBW5c4tQwGTN3thOW9Y=
+google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:mqHbVIp48Muh7Ywss/AD6I5kNVKZMmAa/QEW58Gxp2s=
+google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 h1:GVIKPyP/kLIyVOgOnTwFOrvQaQUzOzGMCxgFUOEmm24=
+google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422/go.mod h1:b6h1vNKhxaSoEI+5jc3PJUCustfli/mRab7295pY7rw=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
+google.golang.org/grpc v1.71.0 h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=
+google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
+google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
+google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
+gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
+gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
+gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gorm.io/gorm v1.25.8 h1:WAGEZ/aEcznN4D03laj8DKnehe1e9gYQAjW8xyPRdeo=
+gorm.io/gorm v1.25.8/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+k8s.io/api v0.31.2 h1:3wLBbL5Uom/8Zy98GRPXpJ254nEFpl+hwndmk9RwmL0=
+k8s.io/api v0.31.2/go.mod h1:bWmGvrGPssSK1ljmLzd3pwCQ9MgoTsRCuK35u6SygUk=
+k8s.io/apimachinery v0.31.2 h1:i4vUt2hPK56W6mlT7Ry+AO8eEsyxMD1U44NR22CLTYw=
+k8s.io/apimachinery v0.31.2/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
+k8s.io/client-go v0.31.2 h1:Y2F4dxU5d3AQj+ybwSMqQnpZH9F30//1ObxOKlTI9yc=
+k8s.io/client-go v0.31.2/go.mod h1:NPa74jSVR/+eez2dFsEIHNa+3o09vtNaWwWwb1qSxSs=
+k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
+k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
+k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=
+k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
+k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
+k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
+rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
+sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
+sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
+sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
+sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
diff --git a/install.sh b/install.sh
new file mode 100755
index 0000000..38406f0
--- /dev/null
+++ b/install.sh
@@ -0,0 +1,555 @@
+#!/usr/bin/env bash
+
+# Copyright © 2023 OpenIM. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# https://gist.github.com/cubxxw/28f997f2c9aff408630b072f010c1d64
+#
+
+set -e
+
+
+############################## OpenIM Github ##############################
+# ... rest of the script ...
+
+# TODO
+# You can configure this script in three ways:
+# 1. Edit the variables below in this file (each is documented with a comment).
+# 2. Pass values via command-line flags (run with --help for the full list).
+# 3. Set the variables externally, or pass them in as environment variables.
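+#
+# For example (hypothetical values; note that flags must come before -i,
+# since -i triggers the installation as soon as it is parsed):
+#   ./install.sh -p openIM123 -g 1.20 -i
+#   PASSWORD=openIM123 GO_VERSION=1.20 ./install.sh -i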
+
+# Default configuration for OpenIM Repo
+# The OpenIM Repo settings can be customized according to your needs.
+
+# OpenIM Repo owner, by default it's set to "OpenIMSDK". If you're using a different owner, replace accordingly.
+OWNER="OpenIMSDK"
+
+# The repository name, by default it's "Open-IM-Server". If you're using a different repository, replace accordingly.
+REPO="Open-IM-Server"
+
+# Version of Go you want to use; make sure it is compatible with your OpenIM-Server requirements.
+# Default is 1.20. If you want to use a different version, replace accordingly.
+GO_VERSION="1.20"
+
+# Default HTTP_PORT is 80. If you want to use a different port, uncomment and replace the value.
+# HTTP_PORT=80
+
+# CPU core number for concurrent execution. By default it's determined automatically.
+# Uncomment the next line if you want to set it manually.
+# CPU=$(grep -c ^processor /proc/cpuinfo)
+
+# By default, the script uses the latest tag from OpenIM-Server releases.
+# If you want to use a specific tag, uncomment and replace "v3.0.0" with the desired tag.
+# LATEST_TAG=v3.0.0
+
+# Default OpenIM install directory is /tmp. If you want to use a different directory, uncomment and replace "/test".
+# DOWNLOAD_OPENIM_DIR="/test"
+
+# GitHub proxy settings. If you are using a proxy, set the PROXY variable below to your proxy URL.
+PROXY=
+
+# If you have a GitHub token, replace the empty field with your token.
+GITHUB_TOKEN=
+
+# Default user is "root". If you need to modify it, uncomment and replace accordingly.
+# OPENIM_USER=root
+
+# Default password for redis, mysql, mongo, as well as accessSecret in config/config.yaml.
+# Remember, it should be a combination of 8 or more numbers and letters. If you want to set a different password, uncomment and replace "openIM123".
+# PASSWORD=openIM123
+
+# Default endpoint for minio's external service IP and port. If you want to use a different endpoint, uncomment and replace.
+# ENDPOINT=http://127.0.0.1:10005
+
+# Default API_URL, replace if necessary.
+# API_URL=http://127.0.0.1:10002/object/
+
+# Default data directory. If you want to specify a different directory, uncomment and replace "./".
+# DATA_DIR=./
+
+############################## OpenIM Functions ##############################
+# Installation section of the script
+#
+# Pre-requisites:
+# - git
+# - make
+# - jq
+# - docker
+# - docker-compose
+# - go
+#
+
+# Check if the script is run as root
+function check_isroot() {
+ if [ "$EUID" -ne 0 ]; then
+ fatal "Please run the script as root or use sudo."
+ fi
+}
+
+# Check if the current directory is an OpenIM git repository
+function check_git_repo() {
+ if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+ # Inside a git repository
+ for remote in $(git remote); do
+ repo_url=$(git remote get-url $remote)
+ if [[ $repo_url == "https://github.com/openimsdk/open-im-server-deploy.git" || \
+ $repo_url == "https://github.com/openimsdk/open-im-server-deploy" || \
+ $repo_url == "git@github.com:openimsdk/open-im-server-deploy.git" ]]; then
+ # If it's OpenIMSDK repository
+ info "Current directory is OpenIMSDK git repository."
+ info "Executing installation directly."
+ install_openim
+ exit 0
+ fi
+ debug "Remote: $remote, URL: $repo_url"
+ done
+ # If it's not OpenIMSDK repository
+ debug "Current directory is not OpenIMSDK git repository."
+ fi
+ info "Current directory is not a git repository."
+}
+
+# Function to update and install necessary tools
+function install_tools() {
+ info "Checking and installing necessary tools, about git, make, jq, docker, docker-compose."
+ local tools=("git" "make" "jq" "docker" "docker-compose")
+ local install_cmd update_cmd os
+
+ if grep -qEi "debian|buntu|mint" /etc/os-release; then
+ os="Ubuntu"
+ install_cmd="sudo apt install -y"
+ update_cmd="sudo apt update"
+ elif grep -qEi "fedora|rhel" /etc/os-release; then
+ os="CentOS"
+ install_cmd="sudo yum install -y"
+ update_cmd="sudo yum update"
+ else
+ fatal "Unsupported OS, please use Ubuntu or CentOS."
+ fi
+
+ debug "Detected OS: $os"
+ info "Updating system package repositories..."
+ $update_cmd
+
+ for tool in "${tools[@]}"; do
+    if ! command -v "$tool" &> /dev/null; then
+      warn "$tool is not installed. Installing now..."
+      $install_cmd "$tool"
+ success "$tool has been installed successfully."
+ else
+ info "$tool is already installed."
+ fi
+ done
+}
+
+# Function to check if Docker and Docker Compose are installed
+function check_docker() {
+ if ! command -v docker &> /dev/null; then
+ fatal "Docker is not installed. Please install Docker first."
+ fi
+ if ! command -v docker-compose &> /dev/null; then
+ fatal "Docker Compose is not installed. Please install Docker Compose first."
+ fi
+}
+
+# Function to download and install Go if it's not already installed
+function install_go() {
+  # Determine whether GO_VERSION is defined
+  if [ -z "$GO_VERSION" ]; then
+    GO_VERSION="1.20"
+  fi
+
+  # Test `command -v go` directly; checking $? at this point would reflect
+  # the if-block above rather than the go lookup.
+  if ! command -v go >/dev/null 2>&1; then
+ warn "Go is not installed. Installing now..."
+    # Under `set -e` a failing curl would exit the script before a separate
+    # $? check could run, so report the error in the same command list.
+    curl -LO "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" ||
+      fatal "Download failed! Please check your network connectivity."
+ sudo tar -C /usr/local -xzf "go${GO_VERSION}.linux-amd64.tar.gz"
+    # Single quotes keep $PATH unexpanded until .bashrc is sourced;
+    # export in the current shell so the rest of this script can find go.
+    echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
+    export PATH="$PATH:/usr/local/go/bin"
+ success "Go has been installed successfully."
+ else
+ info "Go is already installed."
+ fi
+}
+
+function download_source_code() {
+
+  # If LATEST_TAG was not defined outside the function, fetch it here (example: v3.0.1-beta.1)
+ if [ -z "$LATEST_TAG" ]; then
+ LATEST_TAG=$(curl -s "https://api.github.com/repos/$OWNER/$REPO/tags" | jq -r '.[0].name')
+ fi
+
+ # If LATEST_TAG is still empty, set a default value
+ local DEFAULT_TAG="v3.0.0"
+
+ LATEST_TAG="${LATEST_TAG:-$DEFAULT_TAG}"
+
+ debug "DEFAULT_TAG: $DEFAULT_TAG"
+ info "Use OpenIM Version LATEST_TAG: $LATEST_TAG"
+
+  # If MODIFIED_TAG was not defined outside the function, derive it here by
+  # stripping the leading "v" from LATEST_TAG (example: 3.0.1-beta.1)
+  if [ -z "$MODIFIED_TAG" ]; then
+    MODIFIED_TAG="${LATEST_TAG#v}"
+  fi
+
+ # If MODIFIED_TAG is still empty, set a default value
+ local DEFAULT_MODIFIED_TAG="${DEFAULT_TAG#v}"
+ MODIFIED_TAG="${MODIFIED_TAG:-$DEFAULT_MODIFIED_TAG}"
+
+ debug "MODIFIED_TAG: $MODIFIED_TAG"
+
+ # Construct the tarball URL
+ TARBALL_URL="${PROXY}https://github.com/$OWNER/$REPO/archive/refs/tags/$LATEST_TAG.tar.gz"
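+  # With the defaults above (no PROXY, OWNER=OpenIMSDK, REPO=Open-IM-Server,
+  # LATEST_TAG=v3.0.0) this resolves to, e.g.:
+  #   https://github.com/OpenIMSDK/Open-IM-Server/archive/refs/tags/v3.0.0.tar.gz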
+
+ info "Downloaded OpenIM TARBALL_URL: $TARBALL_URL"
+
+ info "Starting the OpenIM automated one-click deployment script."
+
+ # Set the download and extract directory to /tmp
+ if [ -z "$DOWNLOAD_OPENIM_DIR" ]; then
+ DOWNLOAD_OPENIM_DIR="/tmp"
+ fi
+
+ # Check if /tmp directory exists
+ if [ ! -d "$DOWNLOAD_OPENIM_DIR" ]; then
+ warn "$DOWNLOAD_OPENIM_DIR does not exist. Creating it..."
+ mkdir -p "$DOWNLOAD_OPENIM_DIR"
+ fi
+
+ info "Downloading OpenIM source code from $TARBALL_URL to $DOWNLOAD_OPENIM_DIR"
+
+  curl -L -o "${DOWNLOAD_OPENIM_DIR}/${MODIFIED_TAG}.tar.gz" "$TARBALL_URL"
+
+ tar -xzvf "${DOWNLOAD_OPENIM_DIR}/${MODIFIED_TAG}.tar.gz" -C "$DOWNLOAD_OPENIM_DIR"
+ cd "$DOWNLOAD_OPENIM_DIR/$REPO-$MODIFIED_TAG"
+ git init && git add . && git commit -m "init" --no-verify
+
+ success "Source code downloaded and extracted to $REPO-$MODIFIED_TAG"
+}
+
+function set_openim_env() {
+  warn "This command can only be executed once. It modifies the component passwords in docker-compose based on the PASSWORD variable in .env, as well as the component passwords in config/config.yaml. If the password in .env changes, first run 'docker-compose down; rm -rf components' and then re-run this command."
+ # Set default values for user input
+ # If the OPENIM_USER environment variable is not set, it defaults to 'root'
+ if [ -z "$OPENIM_USER" ]; then
+ OPENIM_USER="root"
+ debug "OPENIM_USER is not set. Defaulting to 'root'."
+ fi
+
+ # If the PASSWORD environment variable is not set, it defaults to 'openIM123'
+ # This password applies to redis, mysql, mongo, as well as accessSecret in config/config.yaml
+ if [ -z "$PASSWORD" ]; then
+ PASSWORD="openIM123"
+ debug "PASSWORD is not set. Defaulting to 'openIM123'."
+ fi
+
+ # If the ENDPOINT environment variable is not set, it defaults to 'http://127.0.0.1:10005'
+ # This is minio's external service IP and port, or it could be a domain like storage.xx.xx
+ # The app must be able to access this IP and port or domain
+ if [ -z "$ENDPOINT" ]; then
+ ENDPOINT="http://127.0.0.1:10005"
+ debug "ENDPOINT is not set. Defaulting to 'http://127.0.0.1:10005'."
+ fi
+
+ # If the API_URL environment variable is not set, it defaults to 'http://127.0.0.1:10002/object/'
+ # The app must be able to access this IP and port or domain
+ if [ -z "$API_URL" ]; then
+ API_URL="http://127.0.0.1:10002/object/"
+ debug "API_URL is not set. Defaulting to 'http://127.0.0.1:10002/object/'."
+ fi
+
+ # If the DATA_DIR environment variable is not set, it defaults to the current directory './'
+ # This can be set to a directory with large disk space
+ if [ -z "$DATA_DIR" ]; then
+ DATA_DIR="./"
+ debug "DATA_DIR is not set. Defaulting to './'."
+ fi
+}
+
+function install_openim() {
+ info "Installing OpenIM"
+  # Default to all available cores if CPU was not set explicitly.
+  make -j"${CPU:-$(nproc)}" install V=1
+
+ info "Checking installation"
+ make check
+
+ success "OpenIM installation completed successfully. Happy chatting!"
+}
+
+############################## OpenIM Help ##############################
+
+# Function to display help message
+function cmd_help() {
+ openim_color
+ color_echo ${BRIGHT_GREEN_PREFIX} "Usage: $0 [options]"
+ color_echo ${BRIGHT_GREEN_PREFIX} "Options:"
+ echo
+ color_echo ${BLUE_PREFIX} "-i, --install ${CYAN_PREFIX}Execute the installation logic of the script${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-u, --user ${CYAN_PREFIX}set user (default: root)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-p, --password ${CYAN_PREFIX}set password (default: openIM123)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-e, --endpoint ${CYAN_PREFIX}set endpoint (default: http://127.0.0.1:10005)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-a, --api ${CYAN_PREFIX}set API URL (default: http://127.0.0.1:10002/object/)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-d, --directory ${CYAN_PREFIX}set directory for large disk space (default: ./)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-h, --help ${CYAN_PREFIX}display this help message and exit${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-cn, --china ${CYAN_PREFIX}set to use the Chinese domestic proxy${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-t, --tag ${CYAN_PREFIX}specify the tag (default option, set to latest if not specified)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-r, --release ${CYAN_PREFIX}specify the release branch (cannot be used with the tag option)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-gt, --github-token ${CYAN_PREFIX}set the GITHUB_TOKEN (default: not set)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "-g, --go-version ${CYAN_PREFIX}set the Go language version (default: GO_VERSION=\"1.20\")${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "--install-dir ${CYAN_PREFIX}set the OpenIM installation directory (default: /tmp)${COLOR_SUFFIX}"
+ color_echo ${BLUE_PREFIX} "--cpu ${CYAN_PREFIX}set the number of concurrent processes${COLOR_SUFFIX}"
+ echo
+ color_echo ${RED_PREFIX} "Note: Only one of the -t/--tag or -r/--release options can be used at a time.${COLOR_SUFFIX}"
+ color_echo ${RED_PREFIX} "If both are used or none of them are used, the -t/--tag option will be prioritized.${COLOR_SUFFIX}"
+ echo
+ exit 1
+}
+
+function parseinput() {
+ # set default values
+ # OPENIM_USER=root
+ # PASSWORD=openIM123
+ # ENDPOINT=http://127.0.0.1:10005
+ # API=http://127.0.0.1:10002/object/
+ # DIRECTORY=./
+ # CHINA=false
+ # TAG=latest
+ # RELEASE=""
+ # GO_VERSION=1.20
+ # INSTALL_DIR=/tmp
+ # GITHUB_TOKEN=""
+ # CPU=$(nproc)
+
+ if [ $# -eq 0 ]; then
+ cmd_help
+ exit 1
+ fi
+
+ while [ $# -gt 0 ]; do
+ case $1 in
+ -h|--help)
+ cmd_help
+ exit
+ ;;
+ -u|--user)
+ shift
+ OPENIM_USER=$1
+ ;;
+ -p|--password)
+ shift
+ PASSWORD=$1
+ ;;
+ -e|--endpoint)
+ shift
+ ENDPOINT=$1
+ ;;
+ -a|--api)
+ shift
+        API_URL=$1  # set_openim_env reads API_URL
+ ;;
+ -d|--directory)
+ shift
+        DATA_DIR=$1  # set_openim_env reads DATA_DIR
+ ;;
+ -cn|--china)
+ CHINA=true
+ ;;
+ -t|--tag)
+ shift
+        LATEST_TAG=$1  # download_source_code reads LATEST_TAG
+ ;;
+ -r|--release)
+ shift
+ RELEASE=$1
+ ;;
+ -g|--go-version)
+ shift
+ GO_VERSION=$1
+ ;;
+ --install-dir)
+ shift
+        DOWNLOAD_OPENIM_DIR=$1  # download_source_code reads DOWNLOAD_OPENIM_DIR
+ ;;
+ -gt|--github-token)
+ shift
+ GITHUB_TOKEN=$1
+ ;;
+ --cpu)
+ shift
+ CPU=$1
+ ;;
+ -i|--install)
+ openim_main
+ exit
+ ;;
+ *)
+ echo "Unknown option: $1"
+ cmd_help
+ exit 1
+ ;;
+ esac
+ shift
+ done
+}
+
+############################## OpenIM LOG ##############################
+# Set text color to cyan for header and URL
+print_with_delay() {
+ text="$1"
+ delay="$2"
+
+ for i in $(seq 0 $((${#text}-1))); do
+    printf '%s' "${text:$i:1}"
+ sleep $delay
+ done
+ printf "\n"
+}
+
+print_progress() {
+ total="$1"
+ delay="$2"
+
+ printf "["
+ for i in $(seq 1 $total); do
+ printf "#"
+ sleep $delay
+ done
+ printf "]\n"
+}
+
+# Function for colored echo
+color_echo() {
+ COLOR=$1
+ shift
+ echo -e "${COLOR} $* ${COLOR_SUFFIX}"
+}
+
+# Color definitions
+function openim_color() {
+ COLOR_SUFFIX="\033[0m" # End all colors and special effects
+
+ BLACK_PREFIX="\033[30m" # Black prefix
+ RED_PREFIX="\033[31m" # Red prefix
+ GREEN_PREFIX="\033[32m" # Green prefix
+ YELLOW_PREFIX="\033[33m" # Yellow prefix
+ BLUE_PREFIX="\033[34m" # Blue prefix
+ SKY_BLUE_PREFIX="\033[36m" # Sky blue prefix
+ WHITE_PREFIX="\033[37m" # White prefix
+ BOLD_PREFIX="\033[1m" # Bold prefix
+ UNDERLINE_PREFIX="\033[4m" # Underline prefix
+ ITALIC_PREFIX="\033[3m" # Italic prefix
+ BRIGHT_GREEN_PREFIX='\033[1;32m' # Bright green prefix
+
+ CYAN_PREFIX="\033[0;36m" # Cyan prefix
+}
+
+# --- helper functions for logs ---
+info() {
+ echo -e "[${GREEN_PREFIX}INFO${COLOR_SUFFIX}] " "$@"
+}
+warn() {
+ echo -e "[${YELLOW_PREFIX}WARN${COLOR_SUFFIX}] " "$@" >&2
+}
+fatal() {
+ echo -e "[${RED_PREFIX}ERROR${COLOR_SUFFIX}] " "$@" >&2
+ exit 1
+}
+debug() {
+ echo -e "[${BLUE_PREFIX}DEBUG${COLOR_SUFFIX}]===> " "$@"
+}
+success() {
+ echo -e "${BRIGHT_GREEN_PREFIX}=== [SUCCESS] ===${COLOR_SUFFIX}\n=> " "$@"
+}
+
+function openim_logo() {
+ # Set text color to cyan for header and URL
+ echo -e "\033[0;36m"
+
+ # Display fancy ASCII Art logo
+ # look http://patorjk.com/software/taag/#p=display&h=1&v=1&f=Doh&t=OpenIM
+ print_with_delay '
+
+
+ OOOOOOOOO IIIIIIIIIIMMMMMMMM MMMMMMMM
+ OO:::::::::OO I::::::::IM:::::::M M:::::::M
+ OO:::::::::::::OO I::::::::IM::::::::M M::::::::M
+O:::::::OOO:::::::O II::::::IIM:::::::::M M:::::::::M
+O::::::O O::::::Oppppp ppppppppp eeeeeeeeeeee nnnn nnnnnnnn I::::I M::::::::::M M::::::::::M
+O:::::O O:::::Op::::ppp:::::::::p ee::::::::::::ee n:::nn::::::::nn I::::I M:::::::::::M M:::::::::::M
+O:::::O O:::::Op:::::::::::::::::p e::::::eeeee:::::een::::::::::::::nn I::::I M:::::::M::::M M::::M:::::::M
+O:::::O O:::::Opp::::::ppppp::::::pe::::::e e:::::enn:::::::::::::::n I::::I M::::::M M::::M M::::M M::::::M
+O:::::O O:::::O p:::::p p:::::pe:::::::eeeee::::::e n:::::nnnn:::::n I::::I M::::::M M::::M::::M M::::::M
+O:::::O O:::::O p:::::p p:::::pe:::::::::::::::::e n::::n n::::n I::::I M::::::M M:::::::M M::::::M
+O:::::O O:::::O p:::::p p:::::pe::::::eeeeeeeeeee n::::n n::::n I::::I M::::::M M:::::M M::::::M
+O::::::O O::::::O p:::::p p::::::pe:::::::e n::::n n::::n I::::I M::::::M MMMMM M::::::M
+O:::::::OOO:::::::O p:::::ppppp:::::::pe::::::::e n::::n n::::nII::::::IIM::::::M M::::::M
+ OO:::::::::::::OO p::::::::::::::::p e::::::::eeeeeeee n::::n n::::nI::::::::IM::::::M M::::::M
+ OO:::::::::OO p::::::::::::::pp ee:::::::::::::e n::::n n::::nI::::::::IM::::::M M::::::M
+ OOOOOOOOO p::::::pppppppp eeeeeeeeeeeeee nnnnnn nnnnnnIIIIIIIIIIMMMMMMMM MMMMMMMM
+ p:::::p
+ p:::::p
+ p:::::::p
+ p:::::::p
+ p:::::::p
+ ppppppppp
+
+ ' 0.0001
+
+ # Display product URL
+ print_with_delay "Discover more and contribute at: https://github.com/openimsdk/open-im-server-deploy" 0.01
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
+
+ # Set text color to green for product description
+ echo -e "\033[1;32m"
+
+ print_with_delay "Open-IM-Server: Reinventing Instant Messaging" 0.01
+ print_progress 50 0.02
+
+ print_with_delay "Open-IM-Server is not just a product; it's a revolution. It's about bringing the power of seamless, real-time messaging to your fingertips. And it's about joining a global community of developers, dedicated to pushing the boundaries of what's possible." 0.01
+
+ print_progress 50 0.02
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
+
+ # Set text color to yellow for the Slack link
+ echo -e "\033[1;33m"
+
+ print_with_delay "Join our developer community on Slack: https://join.slack.com/t/openimsdk/shared_invite/zt-2ijy1ys1f-O0aEDCr7ExRZ7mwsHAVg9A" 0.01
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
+}
+
+# Main function to run the script
+function openim_main() {
+ check_git_repo
+ check_isroot
+ openim_color
+ install_tools
+ check_docker
+ install_go
+ download_source_code
+ set_openim_env
+ install_openim
+ openim_logo
+
+}
+
+parseinput "$@"
\ No newline at end of file
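
The logo routine above leans on two helpers, `print_with_delay` and `print_progress`, whose definitions live earlier in this installer. A hypothetical bash sketch of what they do (character-by-character printing and a fixed-width progress bar; the real definitions may differ in detail):

```shell
#!/usr/bin/env bash
# Hypothetical sketches of the helpers used by openim_logo; the actual
# definitions appear earlier in this script and may differ.

# Print a string one character at a time, sleeping `delay` seconds between
# characters, then emit a trailing newline.
print_with_delay() {
  local text="$1" delay="$2" i
  for ((i = 0; i < ${#text}; i++)); do
    printf '%s' "${text:i:1}"
    sleep "$delay"
  done
  printf '\n'
}

# Print a bar of `width` '#' characters, pausing `delay` seconds per step.
print_progress() {
  local width="$1" delay="$2" i
  for ((i = 0; i < width; i++)); do
    printf '#'
    sleep "$delay"
  done
  printf '\n'
}

print_with_delay "hello" 0
print_progress 5 0
```

Passing a delay of `0` makes the helpers print instantly, which is handy when scripting non-interactive runs.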
diff --git a/internal/api/auth.go b/internal/api/auth.go
new file mode 100644
index 0000000..a8e0797
--- /dev/null
+++ b/internal/api/auth.go
@@ -0,0 +1,45 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "git.imall.cloud/openim/protocol/auth"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+)
+
+type AuthApi struct {
+ Client auth.AuthClient
+}
+
+func NewAuthApi(client auth.AuthClient) AuthApi {
+ return AuthApi{client}
+}
+
+func (o *AuthApi) GetAdminToken(c *gin.Context) {
+ a2r.Call(c, auth.AuthClient.GetAdminToken, o.Client)
+}
+
+func (o *AuthApi) GetUserToken(c *gin.Context) {
+ a2r.Call(c, auth.AuthClient.GetUserToken, o.Client)
+}
+
+func (o *AuthApi) ParseToken(c *gin.Context) {
+ a2r.Call(c, auth.AuthClient.ParseToken, o.Client)
+}
+
+func (o *AuthApi) ForceLogout(c *gin.Context) {
+ a2r.Call(c, auth.AuthClient.ForceLogout, o.Client)
+}
diff --git a/internal/api/config_manager.go b/internal/api/config_manager.go
new file mode 100644
index 0000000..24901c4
--- /dev/null
+++ b/internal/api/config_manager.go
@@ -0,0 +1,413 @@
+package api
+
+import (
+ "encoding/json"
+ "reflect"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/etcd"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ clientv3 "go.etcd.io/etcd/client/v3"
+)
+
+const (
+	// waitHttp is how long to wait for the Restart HTTP call to return before acting.
+ waitHttp = time.Millisecond * 200
+)
+
+type ConfigManager struct {
+ imAdminUserID []string
+ config *config.AllConfig
+ client *clientv3.Client
+
+ configPath string
+ systemConfigDB database.SystemConfig
+}
+
+func NewConfigManager(IMAdminUserID []string, cfg *config.AllConfig, client *clientv3.Client, configPath string, systemConfigDB database.SystemConfig) *ConfigManager {
+ cm := &ConfigManager{
+ imAdminUserID: IMAdminUserID,
+ config: cfg,
+ client: client,
+ configPath: configPath,
+ systemConfigDB: systemConfigDB,
+ }
+ return cm
+}
+
+func (cm *ConfigManager) CheckAdmin(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ c.Abort()
+ }
+}
+
+func (cm *ConfigManager) GetConfig(c *gin.Context) {
+ var req apistruct.GetConfigReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ conf := cm.config.Name2Config(req.ConfigName)
+ if conf == nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail("config name not found").Wrap())
+ return
+ }
+ b, err := json.Marshal(conf)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ apiresp.GinSuccess(c, string(b))
+}
+
+func (cm *ConfigManager) GetConfigList(c *gin.Context) {
+ var resp apistruct.GetConfigListResp
+ resp.ConfigNames = cm.config.GetConfigNames()
+ resp.Environment = runtimeenv.RuntimeEnvironment()
+ resp.Version = version.Version
+
+ apiresp.GinSuccess(c, resp)
+}
+
+func (cm *ConfigManager) SetConfig(c *gin.Context) {
+ if cm.config.Discovery.Enable != config.ETCD {
+		apiresp.GinError(c, errs.New("only etcd supports setting config").Wrap())
+ return
+ }
+ var req apistruct.SetConfigReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ var err error
+ switch req.ConfigName {
+ case cm.config.Discovery.GetConfigFileName():
+ err = compareAndSave[config.Discovery](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Kafka.GetConfigFileName():
+ err = compareAndSave[config.Kafka](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.LocalCache.GetConfigFileName():
+ err = compareAndSave[config.LocalCache](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Log.GetConfigFileName():
+ err = compareAndSave[config.Log](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Minio.GetConfigFileName():
+ err = compareAndSave[config.Minio](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Mongo.GetConfigFileName():
+ err = compareAndSave[config.Mongo](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Notification.GetConfigFileName():
+ err = compareAndSave[config.Notification](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.API.GetConfigFileName():
+ err = compareAndSave[config.API](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.CronTask.GetConfigFileName():
+ err = compareAndSave[config.CronTask](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.MsgGateway.GetConfigFileName():
+ err = compareAndSave[config.MsgGateway](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.MsgTransfer.GetConfigFileName():
+ err = compareAndSave[config.MsgTransfer](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Push.GetConfigFileName():
+ err = compareAndSave[config.Push](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Auth.GetConfigFileName():
+ err = compareAndSave[config.Auth](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Conversation.GetConfigFileName():
+ err = compareAndSave[config.Conversation](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Friend.GetConfigFileName():
+ err = compareAndSave[config.Friend](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Group.GetConfigFileName():
+ err = compareAndSave[config.Group](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Msg.GetConfigFileName():
+ err = compareAndSave[config.Msg](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Third.GetConfigFileName():
+ err = compareAndSave[config.Third](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.User.GetConfigFileName():
+ err = compareAndSave[config.User](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Redis.GetConfigFileName():
+ err = compareAndSave[config.Redis](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Share.GetConfigFileName():
+ err = compareAndSave[config.Share](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ case cm.config.Webhooks.GetConfigFileName():
+ err = compareAndSave[config.Webhooks](c, cm.config.Name2Config(req.ConfigName), &req, cm)
+ default:
+		apiresp.GinError(c, errs.ErrArgs.WithDetail("config name not found").Wrap())
+ return
+ }
+ if err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ apiresp.GinSuccess(c, nil)
+}
+
+func (cm *ConfigManager) SetConfigs(c *gin.Context) {
+ if cm.config.Discovery.Enable != config.ETCD {
+		apiresp.GinError(c, errs.New("only etcd supports setting config").Wrap())
+ return
+ }
+ var req apistruct.SetConfigsReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ var (
+ err error
+ ops []*clientv3.Op
+ )
+
+ for _, cf := range req.Configs {
+ var op *clientv3.Op
+ switch cf.ConfigName {
+ case cm.config.Discovery.GetConfigFileName():
+ op, err = compareAndOp[config.Discovery](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Kafka.GetConfigFileName():
+ op, err = compareAndOp[config.Kafka](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.LocalCache.GetConfigFileName():
+ op, err = compareAndOp[config.LocalCache](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Log.GetConfigFileName():
+ op, err = compareAndOp[config.Log](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Minio.GetConfigFileName():
+ op, err = compareAndOp[config.Minio](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Mongo.GetConfigFileName():
+ op, err = compareAndOp[config.Mongo](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Notification.GetConfigFileName():
+ op, err = compareAndOp[config.Notification](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.API.GetConfigFileName():
+ op, err = compareAndOp[config.API](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.CronTask.GetConfigFileName():
+ op, err = compareAndOp[config.CronTask](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.MsgGateway.GetConfigFileName():
+ op, err = compareAndOp[config.MsgGateway](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.MsgTransfer.GetConfigFileName():
+ op, err = compareAndOp[config.MsgTransfer](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Push.GetConfigFileName():
+ op, err = compareAndOp[config.Push](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Auth.GetConfigFileName():
+ op, err = compareAndOp[config.Auth](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Conversation.GetConfigFileName():
+ op, err = compareAndOp[config.Conversation](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Friend.GetConfigFileName():
+ op, err = compareAndOp[config.Friend](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Group.GetConfigFileName():
+ op, err = compareAndOp[config.Group](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Msg.GetConfigFileName():
+ op, err = compareAndOp[config.Msg](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Third.GetConfigFileName():
+ op, err = compareAndOp[config.Third](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.User.GetConfigFileName():
+ op, err = compareAndOp[config.User](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Redis.GetConfigFileName():
+ op, err = compareAndOp[config.Redis](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Share.GetConfigFileName():
+ op, err = compareAndOp[config.Share](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ case cm.config.Webhooks.GetConfigFileName():
+ op, err = compareAndOp[config.Webhooks](c, cm.config.Name2Config(cf.ConfigName), &cf, cm)
+ default:
+			apiresp.GinError(c, errs.ErrArgs.WithDetail("config name not found").Wrap())
+ return
+ }
+ if err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ if op != nil {
+ ops = append(ops, op)
+ }
+ }
+ if len(ops) > 0 {
+ tx := cm.client.Txn(c)
+ if _, err = tx.Then(datautil.Batch(func(op *clientv3.Op) clientv3.Op { return *op }, ops)...).Commit(); err != nil {
+ apiresp.GinError(c, errs.WrapMsg(err, "save to etcd failed"))
+ return
+ }
+
+ }
+
+ apiresp.GinSuccess(c, nil)
+}
+
+func compareAndOp[T any](c *gin.Context, old any, req *apistruct.SetConfigReq, cm *ConfigManager) (*clientv3.Op, error) {
+ conf := new(T)
+ err := json.Unmarshal([]byte(req.Data), &conf)
+ if err != nil {
+ return nil, errs.ErrArgs.WithDetail(err.Error()).Wrap()
+ }
+ eq := reflect.DeepEqual(old, conf)
+ if eq {
+ return nil, nil
+ }
+ data, err := json.Marshal(conf)
+ if err != nil {
+ return nil, errs.ErrArgs.WithDetail(err.Error()).Wrap()
+ }
+ op := clientv3.OpPut(etcd.BuildKey(req.ConfigName), string(data))
+ return &op, nil
+}
+
+func compareAndSave[T any](c *gin.Context, old any, req *apistruct.SetConfigReq, cm *ConfigManager) error {
+ conf := new(T)
+ err := json.Unmarshal([]byte(req.Data), &conf)
+ if err != nil {
+ return errs.ErrArgs.WithDetail(err.Error()).Wrap()
+ }
+ eq := reflect.DeepEqual(old, conf)
+ if eq {
+ return nil
+ }
+ data, err := json.Marshal(conf)
+ if err != nil {
+ return errs.ErrArgs.WithDetail(err.Error()).Wrap()
+ }
+ _, err = cm.client.Put(c, etcd.BuildKey(req.ConfigName), string(data))
+ if err != nil {
+ return errs.WrapMsg(err, "save to etcd failed")
+ }
+ return nil
+}
+
+func (cm *ConfigManager) ResetConfig(c *gin.Context) {
+ go func() {
+ if err := cm.resetConfig(c, true); err != nil {
+ log.ZError(c, "reset config err", err)
+ }
+ }()
+ apiresp.GinSuccess(c, nil)
+}
+
+func (cm *ConfigManager) resetConfig(c *gin.Context, checkChange bool, ops ...clientv3.Op) error {
+ txn := cm.client.Txn(c)
+ type initConf struct {
+ old any
+ new any
+ }
+ configMap := map[string]*initConf{
+ cm.config.Discovery.GetConfigFileName(): {old: &cm.config.Discovery, new: new(config.Discovery)},
+ cm.config.Kafka.GetConfigFileName(): {old: &cm.config.Kafka, new: new(config.Kafka)},
+ cm.config.LocalCache.GetConfigFileName(): {old: &cm.config.LocalCache, new: new(config.LocalCache)},
+ cm.config.Log.GetConfigFileName(): {old: &cm.config.Log, new: new(config.Log)},
+ cm.config.Minio.GetConfigFileName(): {old: &cm.config.Minio, new: new(config.Minio)},
+ cm.config.Mongo.GetConfigFileName(): {old: &cm.config.Mongo, new: new(config.Mongo)},
+ cm.config.Notification.GetConfigFileName(): {old: &cm.config.Notification, new: new(config.Notification)},
+ cm.config.API.GetConfigFileName(): {old: &cm.config.API, new: new(config.API)},
+ cm.config.CronTask.GetConfigFileName(): {old: &cm.config.CronTask, new: new(config.CronTask)},
+ cm.config.MsgGateway.GetConfigFileName(): {old: &cm.config.MsgGateway, new: new(config.MsgGateway)},
+ cm.config.MsgTransfer.GetConfigFileName(): {old: &cm.config.MsgTransfer, new: new(config.MsgTransfer)},
+ cm.config.Push.GetConfigFileName(): {old: &cm.config.Push, new: new(config.Push)},
+ cm.config.Auth.GetConfigFileName(): {old: &cm.config.Auth, new: new(config.Auth)},
+ cm.config.Conversation.GetConfigFileName(): {old: &cm.config.Conversation, new: new(config.Conversation)},
+ cm.config.Friend.GetConfigFileName(): {old: &cm.config.Friend, new: new(config.Friend)},
+ cm.config.Group.GetConfigFileName(): {old: &cm.config.Group, new: new(config.Group)},
+ cm.config.Msg.GetConfigFileName(): {old: &cm.config.Msg, new: new(config.Msg)},
+ cm.config.Third.GetConfigFileName(): {old: &cm.config.Third, new: new(config.Third)},
+ cm.config.User.GetConfigFileName(): {old: &cm.config.User, new: new(config.User)},
+ cm.config.Redis.GetConfigFileName(): {old: &cm.config.Redis, new: new(config.Redis)},
+ cm.config.Share.GetConfigFileName(): {old: &cm.config.Share, new: new(config.Share)},
+ cm.config.Webhooks.GetConfigFileName(): {old: &cm.config.Webhooks, new: new(config.Webhooks)},
+ }
+
+ changedKeys := make([]string, 0, len(configMap))
+ for k, v := range configMap {
+ err := config.Load(cm.configPath, k, config.EnvPrefixMap[k], v.new)
+ if err != nil {
+ log.ZError(c, "load config failed", err)
+ continue
+ }
+ equal := reflect.DeepEqual(v.old, v.new)
+ if !checkChange || !equal {
+ changedKeys = append(changedKeys, k)
+ }
+ }
+
+ for _, k := range changedKeys {
+ data, err := json.Marshal(configMap[k].new)
+ if err != nil {
+ log.ZError(c, "marshal config failed", err)
+ continue
+ }
+ ops = append(ops, clientv3.OpPut(etcd.BuildKey(k), string(data)))
+ }
+ if len(ops) > 0 {
+ txn.Then(ops...)
+ _, err := txn.Commit()
+ if err != nil {
+ return errs.WrapMsg(err, "commit etcd txn failed")
+ }
+ }
+ return nil
+}
+
+func (cm *ConfigManager) Restart(c *gin.Context) {
+ go cm.restart(c)
+ apiresp.GinSuccess(c, nil)
+}
+
+func (cm *ConfigManager) restart(c *gin.Context) {
+	time.Sleep(waitHttp) // wait for the Restart HTTP call to return
+ t := time.Now().Unix()
+ _, err := cm.client.Put(c, etcd.BuildKey(etcd.RestartKey), strconv.Itoa(int(t)))
+ if err != nil {
+ log.ZError(c, "restart etcd put key failed", err)
+ }
+}
+
+func (cm *ConfigManager) SetEnableConfigManager(c *gin.Context) {
+ if cm.config.Discovery.Enable != config.ETCD {
+		apiresp.GinError(c, errs.New("only etcd supports the config manager").Wrap())
+ return
+ }
+ var req apistruct.SetEnableConfigManagerReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ var enableStr string
+ if req.Enable {
+ enableStr = etcd.Enable
+ } else {
+ enableStr = etcd.Disable
+ }
+ resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
+ if err != nil {
+ apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
+ return
+ }
+ if !(resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable) && req.Enable {
+ go func() {
+			time.Sleep(waitHttp) // wait for the Restart HTTP call to return
+ err := cm.resetConfig(c, false, clientv3.OpPut(etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr))
+ if err != nil {
+ log.ZError(c, "resetConfig failed", err)
+ }
+ }()
+ } else {
+ _, err = cm.client.Put(c, etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr)
+ if err != nil {
+ apiresp.GinError(c, errs.WrapMsg(err, "setEnableConfigManager failed"))
+ return
+ }
+ }
+
+ apiresp.GinSuccess(c, nil)
+}
+
+func (cm *ConfigManager) GetEnableConfigManager(c *gin.Context) {
+ resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
+ if err != nil {
+ apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
+ return
+ }
+ var enable bool
+ if resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable {
+ enable = true
+ }
+ apiresp.GinSuccess(c, &apistruct.GetEnableConfigManagerResp{Enable: enable})
+}
+
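
`SetConfig` and `compareAndSave` above implement a compare-then-write pattern: the posted JSON is decoded into a fresh copy of the typed config, compared against the currently loaded one with `reflect.DeepEqual`, and written to etcd only when it differs. A stdlib-only sketch of that core (the etcd put is omitted and `logConf` is a made-up stand-in for a real config struct):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// compareConfig mirrors the compareAndSave pattern: unmarshal the incoming
// JSON into a fresh T and only report a change when it differs from the
// currently loaded config. Instead of writing to etcd, this sketch returns
// the canonical JSON that would be stored.
func compareConfig[T any](old *T, incoming string) (changed bool, data string, err error) {
	conf := new(T)
	if err := json.Unmarshal([]byte(incoming), conf); err != nil {
		return false, "", err
	}
	if reflect.DeepEqual(old, conf) {
		return false, "", nil // no-op: skip the etcd put entirely
	}
	b, err := json.Marshal(conf)
	if err != nil {
		return false, "", err
	}
	return true, string(b), nil
}

// logConf is a hypothetical config struct for illustration only.
type logConf struct {
	Level int `json:"level"`
}

func main() {
	old := &logConf{Level: 3}
	changed, data, _ := compareConfig(old, `{"level":5}`)
	fmt.Println(changed, data)
	changed, _, _ = compareConfig(old, `{"level":3}`)
	fmt.Println(changed)
}
```

Skipping identical payloads keeps the etcd revision history clean and avoids waking watchers for writes that change nothing.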
diff --git a/internal/api/conversation.go b/internal/api/conversation.go
new file mode 100644
index 0000000..4956433
--- /dev/null
+++ b/internal/api/conversation.go
@@ -0,0 +1,82 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "github.com/gin-gonic/gin"
+
+ "git.imall.cloud/openim/protocol/conversation"
+ "github.com/openimsdk/tools/a2r"
+)
+
+type ConversationApi struct {
+ Client conversation.ConversationClient
+}
+
+func NewConversationApi(client conversation.ConversationClient) ConversationApi {
+ return ConversationApi{client}
+}
+
+func (o *ConversationApi) GetAllConversations(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetAllConversations, o.Client)
+}
+
+func (o *ConversationApi) GetSortedConversationList(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetSortedConversationList, o.Client)
+}
+
+func (o *ConversationApi) GetConversation(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetConversation, o.Client)
+}
+
+func (o *ConversationApi) GetConversations(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetConversations, o.Client)
+}
+
+func (o *ConversationApi) SetConversations(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.SetConversations, o.Client)
+}
+
+//func (o *ConversationApi) GetConversationOfflinePushUserIDs(c *gin.Context) {
+// a2r.Call(c, conversation.ConversationClient.GetConversationOfflinePushUserIDs, o.Client)
+//}
+
+func (o *ConversationApi) GetFullOwnerConversationIDs(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetFullOwnerConversationIDs, o.Client)
+}
+
+func (o *ConversationApi) GetIncrementalConversation(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetIncrementalConversation, o.Client)
+}
+
+func (o *ConversationApi) GetOwnerConversation(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetOwnerConversation, o.Client)
+}
+
+func (o *ConversationApi) GetNotNotifyConversationIDs(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetNotNotifyConversationIDs, o.Client)
+}
+
+func (o *ConversationApi) GetPinnedConversationIDs(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.GetPinnedConversationIDs, o.Client)
+}
+
+func (o *ConversationApi) UpdateConversationsByUser(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.UpdateConversationsByUser, o.Client)
+}
+
+func (o *ConversationApi) DeleteConversations(c *gin.Context) {
+ a2r.Call(c, conversation.ConversationClient.DeleteConversations, o.Client)
+}
diff --git a/internal/api/custom_validator.go b/internal/api/custom_validator.go
new file mode 100644
index 0000000..8d4ba55
--- /dev/null
+++ b/internal/api/custom_validator.go
@@ -0,0 +1,34 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/go-playground/validator/v10"
+)
+
+// RequiredIf reports whether a conditionally required field (RecvID or GroupID) is present for the message's SessionType; fields that are not required for that session type always pass.
+func RequiredIf(fl validator.FieldLevel) bool {
+ sessionType := fl.Parent().FieldByName("SessionType").Int()
+
+ switch sessionType {
+ case constant.SingleChatType, constant.NotificationChatType:
+ return fl.FieldName() != "RecvID" || fl.Field().String() != ""
+ case constant.WriteGroupChatType, constant.ReadGroupChatType:
+ return fl.FieldName() != "GroupID" || fl.Field().String() != ""
+ default:
+ return true
+ }
+}
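
The rule above makes `RecvID` mandatory for single and notification chats, and `GroupID` mandatory for group chats. A stdlib-only sketch of the same decision table (the constant values are placeholders, not the real `protocol/constant` values; in practice `RequiredIf` would be registered on gin's validator engine via `RegisterValidation`):

```go
package main

import "fmt"

// Placeholder session-type values for illustration; the real ones come
// from git.imall.cloud/openim/protocol/constant.
const (
	SingleChatType       = 1
	WriteGroupChatType   = 3
	ReadGroupChatType    = 4
	NotificationChatType = 5
)

// requiredIf reproduces the rule in RequiredIf without the validator
// machinery: RecvID must be non-empty for single/notification chats,
// GroupID for group chats; any other field always passes.
func requiredIf(sessionType int64, fieldName, fieldValue string) bool {
	switch sessionType {
	case SingleChatType, NotificationChatType:
		return fieldName != "RecvID" || fieldValue != ""
	case WriteGroupChatType, ReadGroupChatType:
		return fieldName != "GroupID" || fieldValue != ""
	default:
		return true
	}
}

func main() {
	fmt.Println(requiredIf(SingleChatType, "RecvID", ""))        // false: missing receiver
	fmt.Println(requiredIf(SingleChatType, "GroupID", ""))       // true: not required here
	fmt.Println(requiredIf(WriteGroupChatType, "GroupID", "g1")) // true: present
}
```

Encoding the requirement as a validator keeps the send-message request struct declarative: the same tag covers both chat shapes instead of two hand-written checks in every handler.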
diff --git a/internal/api/friend.go b/internal/api/friend.go
new file mode 100644
index 0000000..decd89d
--- /dev/null
+++ b/internal/api/friend.go
@@ -0,0 +1,120 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "github.com/gin-gonic/gin"
+
+ "git.imall.cloud/openim/protocol/relation"
+ "github.com/openimsdk/tools/a2r"
+)
+
+type FriendApi struct {
+ Client relation.FriendClient
+}
+
+func NewFriendApi(client relation.FriendClient) FriendApi {
+ return FriendApi{client}
+}
+
+func (o *FriendApi) ApplyToAddFriend(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.ApplyToAddFriend, o.Client)
+}
+
+func (o *FriendApi) RespondFriendApply(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.RespondFriendApply, o.Client)
+}
+
+func (o *FriendApi) DeleteFriend(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.DeleteFriend, o.Client)
+}
+
+func (o *FriendApi) GetFriendApplyList(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyTo, o.Client)
+}
+
+func (o *FriendApi) GetDesignatedFriendsApply(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetDesignatedFriendsApply, o.Client)
+}
+
+func (o *FriendApi) GetSelfApplyList(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyFrom, o.Client)
+}
+
+func (o *FriendApi) GetFriendList(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetPaginationFriends, o.Client)
+}
+
+func (o *FriendApi) GetDesignatedFriends(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetDesignatedFriends, o.Client)
+}
+
+func (o *FriendApi) SetFriendRemark(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.SetFriendRemark, o.Client)
+}
+
+func (o *FriendApi) AddBlack(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.AddBlack, o.Client)
+}
+
+func (o *FriendApi) GetPaginationBlacks(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetPaginationBlacks, o.Client)
+}
+
+func (o *FriendApi) GetSpecifiedBlacks(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetSpecifiedBlacks, o.Client)
+}
+
+func (o *FriendApi) RemoveBlack(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.RemoveBlack, o.Client)
+}
+
+func (o *FriendApi) ImportFriends(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.ImportFriends, o.Client)
+}
+
+func (o *FriendApi) IsFriend(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.IsFriend, o.Client)
+}
+
+func (o *FriendApi) GetFriendIDs(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetFriendIDs, o.Client)
+}
+
+func (o *FriendApi) GetSpecifiedFriendsInfo(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetSpecifiedFriendsInfo, o.Client)
+}
+
+func (o *FriendApi) UpdateFriends(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.UpdateFriends, o.Client)
+}
+
+func (o *FriendApi) GetIncrementalFriends(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetIncrementalFriends, o.Client)
+}
+
+// GetIncrementalBlacks is temporarily unused.
+// Deprecated: This function is currently unused and may be removed in future versions.
+func (o *FriendApi) GetIncrementalBlacks(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetIncrementalBlacks, o.Client)
+}
+
+func (o *FriendApi) GetFullFriendUserIDs(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetFullFriendUserIDs, o.Client)
+}
+
+func (o *FriendApi) GetSelfUnhandledApplyCount(c *gin.Context) {
+ a2r.Call(c, relation.FriendClient.GetSelfUnhandledApplyCount, o.Client)
+}
diff --git a/internal/api/group.go b/internal/api/group.go
new file mode 100644
index 0000000..9c8e7c3
--- /dev/null
+++ b/internal/api/group.go
@@ -0,0 +1,171 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "git.imall.cloud/openim/protocol/group"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+)
+
+type GroupApi struct {
+ Client group.GroupClient
+}
+
+func NewGroupApi(client group.GroupClient) GroupApi {
+ return GroupApi{client}
+}
+
+func (o *GroupApi) CreateGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.CreateGroup, o.Client)
+}
+
+func (o *GroupApi) SetGroupInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.SetGroupInfo, o.Client)
+}
+
+func (o *GroupApi) SetGroupInfoEx(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.SetGroupInfoEx, o.Client)
+}
+
+func (o *GroupApi) JoinGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.JoinGroup, o.Client)
+}
+
+func (o *GroupApi) QuitGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.QuitGroup, o.Client)
+}
+
+func (o *GroupApi) ApplicationGroupResponse(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GroupApplicationResponse, o.Client)
+}
+
+func (o *GroupApi) TransferGroupOwner(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.TransferGroupOwner, o.Client)
+}
+
+func (o *GroupApi) GetRecvGroupApplicationList(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupApplicationList, o.Client)
+}
+
+func (o *GroupApi) GetUserReqGroupApplicationList(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetUserReqApplicationList, o.Client)
+}
+
+func (o *GroupApi) GetGroupUsersReqApplicationList(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupUsersReqApplicationList, o.Client)
+}
+
+func (o *GroupApi) GetSpecifiedUserGroupRequestInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetSpecifiedUserGroupRequestInfo, o.Client)
+}
+
+func (o *GroupApi) GetGroupsInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client)
+ //a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupsInfo))
+}
+
+func (o *GroupApi) KickGroupMember(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.KickGroupMember, o.Client)
+}
+
+func (o *GroupApi) GetGroupMembersInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client)
+ //a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupMembersInfo))
+}
+
+func (o *GroupApi) GetGroupMemberList(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupMemberList, o.Client)
+}
+
+func (o *GroupApi) InviteUserToGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.InviteUserToGroup, o.Client)
+}
+
+func (o *GroupApi) GetJoinedGroupList(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetJoinedGroupList, o.Client)
+}
+
+func (o *GroupApi) DismissGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.DismissGroup, o.Client)
+}
+
+func (o *GroupApi) MuteGroupMember(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.MuteGroupMember, o.Client)
+}
+
+func (o *GroupApi) CancelMuteGroupMember(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.CancelMuteGroupMember, o.Client)
+}
+
+func (o *GroupApi) MuteGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.MuteGroup, o.Client)
+}
+
+func (o *GroupApi) CancelMuteGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.CancelMuteGroup, o.Client)
+}
+
+func (o *GroupApi) SetGroupMemberInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.SetGroupMemberInfo, o.Client)
+}
+
+func (o *GroupApi) GetGroupAbstractInfo(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupAbstractInfo, o.Client)
+}
+
+// func (g *Group) SetGroupMemberNickname(c *gin.Context) {
+// a2r.Call(c, group.GroupClient.SetGroupMemberNickname, g.userClient)
+//}
+//
+// func (g *Group) GetGroupAllMemberList(c *gin.Context) {
+// a2r.Call(c, group.GroupClient.GetGroupAllMember, g.userClient)
+//}
+
+func (o *GroupApi) GroupCreateCount(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GroupCreateCount, o.Client)
+}
+
+func (o *GroupApi) GetGroups(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroups, o.Client)
+}
+
+func (o *GroupApi) GetGroupMemberUserIDs(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupMemberUserIDs, o.Client)
+}
+
+func (o *GroupApi) GetIncrementalJoinGroup(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetIncrementalJoinGroup, o.Client)
+}
+
+func (o *GroupApi) GetIncrementalGroupMember(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetIncrementalGroupMember, o.Client)
+}
+
+func (o *GroupApi) GetIncrementalGroupMemberBatch(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.BatchGetIncrementalGroupMember, o.Client)
+}
+
+func (o *GroupApi) GetFullGroupMemberUserIDs(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetFullGroupMemberUserIDs, o.Client)
+}
+
+func (o *GroupApi) GetFullJoinGroupIDs(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetFullJoinGroupIDs, o.Client)
+}
+
+func (o *GroupApi) GetGroupApplicationUnhandledCount(c *gin.Context) {
+ a2r.Call(c, group.GroupClient.GetGroupApplicationUnhandledCount, o.Client)
+}
diff --git a/internal/api/init.go b/internal/api/init.go
new file mode 100644
index 0000000..82d6df6
--- /dev/null
+++ b/internal/api/init.go
@@ -0,0 +1,104 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "net"
+ "net/http"
+ "strconv"
+ "time"
+
+ conf "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/network"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "google.golang.org/grpc"
+)
+
+type Config struct {
+ conf.AllConfig
+
+ ConfigPath conf.Path
+ Index conf.Index
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, service grpc.ServiceRegistrar) error {
+ apiPort, err := datautil.GetElemByIndex(config.API.Api.Ports, int(config.Index))
+ if err != nil {
+ return err
+ }
+
+ router, err := newGinRouter(ctx, client, config)
+ if err != nil {
+ return err
+ }
+
+ apiCtx, apiCancel := context.WithCancelCause(context.Background())
+ done := make(chan struct{})
+ go func() {
+ httpServer := &http.Server{
+ Handler: router,
+ Addr: net.JoinHostPort(network.GetListenIP(config.API.Api.ListenIP), strconv.Itoa(apiPort)),
+ }
+ go func() {
+ defer close(done)
+ select {
+ case <-ctx.Done():
+ apiCancel(fmt.Errorf("recv ctx %w", context.Cause(ctx)))
+ case <-apiCtx.Done():
+ }
+ log.ZDebug(ctx, "api server is shutting down")
+ if err := httpServer.Shutdown(context.Background()); err != nil {
+ log.ZWarn(ctx, "api server shutdown err", err)
+ }
+ }()
+ log.CInfo(ctx, "api server is init", "runtimeEnv", runtimeenv.RuntimeEnvironment(), "address", httpServer.Addr, "apiPort", apiPort)
+ err := httpServer.ListenAndServe()
+ if err == nil {
+ err = errors.New("api done")
+ }
+ apiCancel(err)
+ }()
+
+ //if config.Discovery.Enable == conf.ETCD {
+ // cm := disetcd.NewConfigManager(client.(*etcd.SvcDiscoveryRegistryImpl).GetClient(), config.GetConfigNames())
+ // cm.Watch(ctx)
+ //}
+ //sigs := make(chan os.Signal, 1)
+ //signal.Notify(sigs, syscall.SIGTERM)
+ //select {
+ //case val := <-sigs:
+ // log.ZDebug(ctx, "recv exit", "signal", val.String())
+ // cancel(fmt.Errorf("signal %s", val.String()))
+ //case <-ctx.Done():
+ //}
+ <-apiCtx.Done()
+ exitCause := context.Cause(apiCtx)
+ log.ZWarn(ctx, "api server exit", exitCause)
+ timer := time.NewTimer(time.Second * 15)
+ defer timer.Stop()
+ select {
+ case <-timer.C:
+ log.ZWarn(ctx, "api server graceful stop timeout", nil)
+ case <-done:
+ log.ZDebug(ctx, "api server graceful stop done")
+ }
+ return exitCause
+}
diff --git a/internal/api/jssdk/jssdk.go b/internal/api/jssdk/jssdk.go
new file mode 100644
index 0000000..0a597f0
--- /dev/null
+++ b/internal/api/jssdk/jssdk.go
@@ -0,0 +1,287 @@
+package jssdk
+
+import (
+ "context"
+ "sort"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/log"
+
+ "github.com/gin-gonic/gin"
+
+ "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/jssdk"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+const (
+ maxGetActiveConversation = 500
+ defaultGetActiveConversation = 100
+)
+
+func NewJSSdkApi(userClient *rpcli.UserClient, relationClient *rpcli.RelationClient, groupClient *rpcli.GroupClient,
+ conversationClient *rpcli.ConversationClient, msgClient *rpcli.MsgClient) *JSSdk {
+ return &JSSdk{
+ userClient: userClient,
+ relationClient: relationClient,
+ groupClient: groupClient,
+ conversationClient: conversationClient,
+ msgClient: msgClient,
+ }
+}
+
+type JSSdk struct {
+ userClient *rpcli.UserClient
+ relationClient *rpcli.RelationClient
+ groupClient *rpcli.GroupClient
+ conversationClient *rpcli.ConversationClient
+ msgClient *rpcli.MsgClient
+}
+
+func (x *JSSdk) GetActiveConversations(c *gin.Context) {
+ call(c, x.getActiveConversations)
+}
+
+func (x *JSSdk) GetConversations(c *gin.Context) {
+ call(c, x.getConversations)
+}
+
+func (x *JSSdk) fillConversations(ctx context.Context, conversations []*jssdk.ConversationMsg) error {
+ if len(conversations) == 0 {
+ return nil
+ }
+ var (
+ userIDs []string
+ groupIDs []string
+ )
+ for _, c := range conversations {
+ if c.Conversation.GroupID == "" {
+ userIDs = append(userIDs, c.Conversation.UserID)
+ } else {
+ groupIDs = append(groupIDs, c.Conversation.GroupID)
+ }
+ }
+ var (
+ userMap map[string]*sdkws.UserInfo
+ friendMap map[string]*relation.FriendInfoOnly
+ groupMap map[string]*sdkws.GroupInfo
+ )
+ if len(userIDs) > 0 {
+ users, err := x.userClient.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return err
+ }
+ friends, err := x.relationClient.GetFriendsInfo(ctx, conversations[0].Conversation.OwnerUserID, userIDs)
+ if err != nil {
+ return err
+ }
+ userMap = datautil.SliceToMap(users, (*sdkws.UserInfo).GetUserID)
+ friendMap = datautil.SliceToMap(friends, (*relation.FriendInfoOnly).GetFriendUserID)
+ }
+ if len(groupIDs) > 0 {
+ groups, err := x.groupClient.GetGroupsInfo(ctx, groupIDs)
+ if err != nil {
+ return err
+ }
+ groupMap = datautil.SliceToMap(groups, (*sdkws.GroupInfo).GetGroupID)
+ }
+ for _, c := range conversations {
+ if c.Conversation.GroupID == "" {
+ c.User = userMap[c.Conversation.UserID]
+ c.Friend = friendMap[c.Conversation.UserID]
+ } else {
+ c.Group = groupMap[c.Conversation.GroupID]
+ }
+ }
+ return nil
+}
+
+func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActiveConversationsReq) (*jssdk.GetActiveConversationsResp, error) {
+ if req.Count <= 0 || req.Count > maxGetActiveConversation {
+ req.Count = defaultGetActiveConversation
+ }
+ req.OwnerUserID = mcontext.GetOpUserID(ctx)
+ conversationIDs, err := x.conversationClient.GetConversationIDs(ctx, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ if len(conversationIDs) == 0 {
+ return &jssdk.GetActiveConversationsResp{}, nil
+ }
+
+ activeConversation, err := x.msgClient.GetActiveConversation(ctx, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ if len(activeConversation) == 0 {
+ return &jssdk.GetActiveConversationsResp{}, nil
+ }
+ readSeq, err := x.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ sortConversations := sortActiveConversations{
+ Conversation: activeConversation,
+ }
+ if len(activeConversation) > 1 {
+ pinnedConversationIDs, err := x.conversationClient.GetPinnedConversationIDs(ctx, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ sortConversations.PinnedConversationIDs = datautil.SliceSet(pinnedConversationIDs)
+ }
+ sort.Sort(&sortConversations)
+ sortList := sortConversations.Top(int(req.Count))
+ conversations, err := x.conversationClient.GetConversations(ctx, datautil.Slice(sortList, func(c *msg.ActiveConversation) string {
+ return c.ConversationID
+ }), req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ msgs, err := x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, datautil.Slice(sortList, func(c *msg.ActiveConversation) *msg.ConversationSeqs {
+ return &msg.ConversationSeqs{
+ ConversationID: c.ConversationID,
+ Seqs: []int64{c.MaxSeq},
+ }
+ }))
+ if err != nil {
+ return nil, err
+ }
+ x.checkMessagesAndGetLastMessage(ctx, req.OwnerUserID, msgs)
+ conversationMap := datautil.SliceToMap(conversations, func(c *conversation.Conversation) string {
+ return c.ConversationID
+ })
+ resp := make([]*jssdk.ConversationMsg, 0, len(sortList))
+ for _, c := range sortList {
+ conv, ok := conversationMap[c.ConversationID]
+ if !ok {
+ continue
+ }
+ if msgList, ok := msgs[c.ConversationID]; ok && len(msgList.Msgs) > 0 {
+ resp = append(resp, &jssdk.ConversationMsg{
+ Conversation: conv,
+ LastMsg: msgList.Msgs[0],
+ MaxSeq: c.MaxSeq,
+ ReadSeq: readSeq[c.ConversationID],
+ })
+ }
+ }
+ if err := x.fillConversations(ctx, resp); err != nil {
+ return nil, err
+ }
+ var unreadCount int64
+ for _, c := range activeConversation {
+ count := c.MaxSeq - readSeq[c.ConversationID]
+ if count > 0 {
+ unreadCount += count
+ }
+ }
+ return &jssdk.GetActiveConversationsResp{
+ Conversations: resp,
+ UnreadCount: unreadCount,
+ }, nil
+}
+
+func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversationsReq) (*jssdk.GetConversationsResp, error) {
+ req.OwnerUserID = mcontext.GetOpUserID(ctx)
+ conversations, err := x.conversationClient.GetConversations(ctx, req.ConversationIDs, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ if len(conversations) == 0 {
+ return &jssdk.GetConversationsResp{}, nil
+ }
+ req.ConversationIDs = datautil.Slice(conversations, func(c *conversation.Conversation) string {
+ return c.ConversationID
+ })
+ maxSeqs, err := x.msgClient.GetMaxSeqs(ctx, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ readSeqs, err := x.msgClient.GetHasReadSeqs(ctx, req.ConversationIDs, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ conversationSeqs := make([]*msg.ConversationSeqs, 0, len(conversations))
+ for _, c := range conversations {
+ if seq := maxSeqs[c.ConversationID]; seq > 0 {
+ conversationSeqs = append(conversationSeqs, &msg.ConversationSeqs{
+ ConversationID: c.ConversationID,
+ Seqs: []int64{seq},
+ })
+ }
+ }
+ var msgs map[string]*sdkws.PullMsgs
+ if len(conversationSeqs) > 0 {
+ msgs, err = x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, conversationSeqs)
+ if err != nil {
+ return nil, err
+ }
+ }
+ x.checkMessagesAndGetLastMessage(ctx, req.OwnerUserID, msgs)
+ resp := make([]*jssdk.ConversationMsg, 0, len(conversations))
+ for _, c := range conversations {
+ if msgList, ok := msgs[c.ConversationID]; ok && len(msgList.Msgs) > 0 {
+ resp = append(resp, &jssdk.ConversationMsg{
+ Conversation: c,
+ LastMsg: msgList.Msgs[0],
+ MaxSeq: maxSeqs[c.ConversationID],
+ ReadSeq: readSeqs[c.ConversationID],
+ })
+ }
+ }
+ if err := x.fillConversations(ctx, resp); err != nil {
+ return nil, err
+ }
+ var unreadCount int64
+ for conversationID, maxSeq := range maxSeqs {
+ count := maxSeq - readSeqs[conversationID]
+ if count > 0 {
+ unreadCount += count
+ }
+ }
+ return &jssdk.GetConversationsResp{
+ Conversations: resp,
+ UnreadCount: unreadCount,
+ }, nil
+}
+
+// checkMessagesAndGetLastMessage checks whether each conversation's latest (MaxSeq) message is still valid.
+// If every fetched message has been deleted, it fetches the last valid message for that conversation instead.
+func (x *JSSdk) checkMessagesAndGetLastMessage(ctx context.Context, userID string, messages map[string]*sdkws.PullMsgs) {
+ var conversationIDs []string
+
+ for conversationID, message := range messages {
+ allInValid := true
+ for _, data := range message.Msgs {
+ if data.Status < constant.MsgStatusHasDeleted {
+ allInValid = false
+ break
+ }
+ }
+ if allInValid {
+ conversationIDs = append(conversationIDs, conversationID)
+ }
+ }
+ if len(conversationIDs) > 0 {
+ resp, err := x.msgClient.GetLastMessage(ctx, &msg.GetLastMessageReq{
+ UserID: userID,
+ ConversationIDs: conversationIDs,
+ })
+ if err != nil {
+ log.ZError(ctx, "fetchLatestValidMessages", err, "conversationIDs", conversationIDs)
+ return
+ }
+ for conversationID, message := range resp.Msgs {
+ messages[conversationID] = &sdkws.PullMsgs{Msgs: []*sdkws.MsgData{message}}
+ }
+ }
+}
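Both handlers above compute the unread total the same way: per conversation, the positive difference between the max sequence and the has-read sequence, summed. Isolated as a helper (the names here are illustrative, not the project's):

```go
package main

import "fmt"

// totalUnread sums max(0, maxSeq-readSeq) over all conversations.
// A conversation missing from readSeqs reads as zero, matching Go map semantics.
func totalUnread(maxSeqs, readSeqs map[string]int64) int64 {
	var unread int64
	for id, maxSeq := range maxSeqs {
		if d := maxSeq - readSeqs[id]; d > 0 {
			unread += d
		}
	}
	return unread
}

func main() {
	maxSeqs := map[string]int64{"c1": 10, "c2": 3}
	readSeqs := map[string]int64{"c1": 7, "c2": 5} // c2 is over-read; contributes 0
	fmt.Println(totalUnread(maxSeqs, readSeqs))    // 3
}
```

Clamping each per-conversation difference at zero (rather than clamping only the total) is what keeps an over-read conversation from cancelling out unread counts elsewhere.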
diff --git a/internal/api/jssdk/sort.go b/internal/api/jssdk/sort.go
new file mode 100644
index 0000000..4db274e
--- /dev/null
+++ b/internal/api/jssdk/sort.go
@@ -0,0 +1,33 @@
+package jssdk
+
+import "git.imall.cloud/openim/protocol/msg"
+
+type sortActiveConversations struct {
+ Conversation []*msg.ActiveConversation
+ PinnedConversationIDs map[string]struct{}
+}
+
+func (s sortActiveConversations) Top(limit int) []*msg.ActiveConversation {
+ if limit > 0 && len(s.Conversation) > limit {
+ return s.Conversation[:limit]
+ }
+ return s.Conversation
+}
+
+func (s sortActiveConversations) Len() int {
+ return len(s.Conversation)
+}
+
+func (s sortActiveConversations) Less(i, j int) bool {
+ iv, jv := s.Conversation[i], s.Conversation[j]
+ _, ip := s.PinnedConversationIDs[iv.ConversationID]
+ _, jp := s.PinnedConversationIDs[jv.ConversationID]
+ if ip != jp {
+ return ip
+ }
+ return iv.LastTime > jv.LastTime
+}
+
+func (s sortActiveConversations) Swap(i, j int) {
+ s.Conversation[i], s.Conversation[j] = s.Conversation[j], s.Conversation[i]
+}
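The Less method above puts pinned conversations first and orders each group by most recent activity. The same comparison rule on a toy type (conv and order are illustrative, not the project's types):

```go
package main

import (
	"fmt"
	"sort"
)

type conv struct {
	ID       string
	LastTime int64
}

// order sorts pinned conversations first, then by LastTime descending,
// mirroring sortActiveConversations.Less.
func order(items []conv, pinned map[string]struct{}) []string {
	sort.Slice(items, func(i, j int) bool {
		_, ip := pinned[items[i].ID]
		_, jp := pinned[items[j].ID]
		if ip != jp {
			return ip // a pinned conversation wins regardless of recency
		}
		return items[i].LastTime > items[j].LastTime // newer first
	})
	ids := make([]string, 0, len(items))
	for _, c := range items {
		ids = append(ids, c.ID)
	}
	return ids
}

func main() {
	items := []conv{{"a", 300}, {"b", 100}, {"c", 200}}
	pinned := map[string]struct{}{"b": {}}
	fmt.Println(order(items, pinned)) // [b a c]
}
```

Note that "b" sorts first despite having the oldest LastTime, because pinned status dominates the comparison.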
diff --git a/internal/api/jssdk/tools.go b/internal/api/jssdk/tools.go
new file mode 100644
index 0000000..c19d897
--- /dev/null
+++ b/internal/api/jssdk/tools.go
@@ -0,0 +1,77 @@
+package jssdk
+
+import (
+ "context"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/checker"
+ "github.com/openimsdk/tools/errs"
+ "google.golang.org/grpc"
+ "google.golang.org/protobuf/proto"
+ "io"
+ "strings"
+)
+
+func field[A, B, C any](ctx context.Context, fn func(ctx context.Context, req *A, opts ...grpc.CallOption) (*B, error), req *A, get func(*B) C) (C, error) {
+ resp, err := fn(ctx, req)
+ if err != nil {
+ var c C
+ return c, err
+ }
+ return get(resp), nil
+}
+
+func call[A, B any](c *gin.Context, fn func(ctx context.Context, req *A) (*B, error)) {
+ var isJSON bool
+ switch contentType := c.GetHeader("Content-Type"); {
+ case contentType == "":
+ isJSON = true
+ case strings.Contains(contentType, "application/json"):
+ isJSON = true
+ case strings.Contains(contentType, "application/protobuf"):
+ case strings.Contains(contentType, "application/x-protobuf"):
+ default:
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("unsupported content type"))
+ return
+ }
+ var req *A
+ if isJSON {
+ var err error
+ req, err = a2r.ParseRequest[A](c)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ } else {
+ body, err := io.ReadAll(c.Request.Body)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ req = new(A)
+ if err := proto.Unmarshal(body, any(req).(proto.Message)); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if err := checker.Validate(req); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ }
+ resp, err := fn(c, req)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if isJSON {
+ apiresp.GinSuccess(c, resp)
+ return
+ }
+ body, err := proto.Marshal(any(resp).(proto.Message))
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ apiresp.GinSuccess(c, body)
+}
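The call helper decides the wire format from the Content-Type header: absent or JSON means JSON, the two protobuf media types mean binary, anything else is rejected. That branch extracted into a pure function (a sketch, not the project's API):

```go
package main

import (
	"fmt"
	"strings"
)

// negotiate reports whether the request body should be parsed as JSON,
// and whether the content type is supported at all.
func negotiate(contentType string) (isJSON, supported bool) {
	switch {
	case contentType == "":
		return true, true // no header defaults to JSON, as call does
	case strings.Contains(contentType, "application/json"):
		return true, true
	case strings.Contains(contentType, "application/protobuf"),
		strings.Contains(contentType, "application/x-protobuf"):
		return false, true
	default:
		return false, false
	}
}

func main() {
	for _, ct := range []string{"", "application/json; charset=utf-8", "application/x-protobuf", "text/plain"} {
		isJSON, supported := negotiate(ct)
		fmt.Printf("%q -> json=%v supported=%v\n", ct, isJSON, supported)
	}
}
```

Using strings.Contains rather than an exact match lets parameterized headers such as `application/json; charset=utf-8` pass through.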
diff --git a/internal/api/meeting.go b/internal/api/meeting.go
new file mode 100644
index 0000000..7368233
--- /dev/null
+++ b/internal/api/meeting.go
@@ -0,0 +1,1371 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "context"
+ "crypto/md5"
+ "encoding/hex"
+ "encoding/json"
+ "fmt"
+ "math/big"
+ "math/rand"
+ "time"
+
+ "github.com/gin-gonic/gin"
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/idutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+type MeetingApi struct {
+ meetingDB database.Meeting
+ meetingCheckInDB database.MeetingCheckIn
+ groupClient *rpcli.GroupClient
+ userClient *rpcli.UserClient
+ conversationClient *rpcli.ConversationClient
+ systemConfigDB database.SystemConfig
+}
+
+func NewMeetingApi(meetingDB database.Meeting, meetingCheckInDB database.MeetingCheckIn, groupClient *rpcli.GroupClient, userClient *rpcli.UserClient, conversationClient *rpcli.ConversationClient, systemConfigDB database.SystemConfig) *MeetingApi {
+ return &MeetingApi{
+ meetingDB: meetingDB,
+ meetingCheckInDB: meetingCheckInDB,
+ groupClient: groupClient,
+ userClient: userClient,
+ conversationClient: conversationClient,
+ systemConfigDB: systemConfigDB,
+ }
+}
+
+// IsNotFound reports whether the error indicates that the record does not exist.
+func (m *MeetingApi) IsNotFound(err error) bool {
+ return errs.ErrRecordNotFound.Is(err) || errs.Unwrap(err) == mongo.ErrNoDocuments
+}
+
+// loadAnchorUsers loads user info for the anchor users; on failure it logs a warning and returns nil.
+func (m *MeetingApi) loadAnchorUsers(ctx context.Context, anchorUserIDs []string) []*sdkws.UserInfo {
+ if len(anchorUserIDs) == 0 {
+ return nil
+ }
+ users, err := m.userClient.GetUsersInfo(ctx, anchorUserIDs)
+ if err != nil {
+ log.ZWarn(ctx, "loadAnchorUsers: failed to get anchor users info", err, "anchorUserIDs", anchorUserIDs)
+ return nil
+ }
+ return users
+}
+
+// genMeetingPassword generates a random 6-digit numeric password.
+func (m *MeetingApi) genMeetingPassword() string {
+ return fmt.Sprintf("%06d", rand.Intn(1000000))
+}
+
+// genMeetingID returns a unique meeting ID, or verifies that a caller-supplied ID is not already in use.
+func (m *MeetingApi) genMeetingID(ctx context.Context, meetingID string) (string, error) {
+ if meetingID != "" {
+ _, err := m.meetingDB.Take(ctx, meetingID)
+ if err == nil {
+ return "", errs.ErrArgs.WrapMsg("meeting id already exists: " + meetingID)
+ }
+ // The record does not exist, so this ID is available.
+ if m.IsNotFound(err) {
+ return meetingID, nil
+ }
+ // Return any other error as-is.
+ return "", err
+ }
+ // Generate a unique ID.
+ for i := 0; i < 10; i++ {
+ opID := mcontext.GetOperationID(ctx)
+ timestamp := time.Now().UnixNano()
+ random := rand.Int()
+ data := fmt.Sprintf("%s,%d,%d", opID, timestamp, random)
+ hash := md5.Sum([]byte(data))
+ id := hex.EncodeToString(hash[:])[:16]
+ bi := big.NewInt(0)
+ bi.SetString(id, 16)
+ id = bi.String()
+ _, err := m.meetingDB.Take(ctx, id)
+ if err == nil {
+ // The ID already exists; try the next one.
+ continue
+ }
+ // The record does not exist, so this ID is available.
+ if m.IsNotFound(err) {
+ return id, nil
+ }
+ // Return any other error as-is.
+ return "", err
+ }
+ return "", errs.ErrInternalServer.WrapMsg("failed to generate meeting id")
+}
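genMeetingID builds each candidate by hashing opID/timestamp/random with MD5, keeping the first 16 hex characters (64 bits) of the digest, and re-encoding them as a decimal string. The derivation in isolation (the seed here is a fixed placeholder, not a real operation ID):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"math/big"
)

// numericID hashes the seed, keeps the first 16 hex chars of the digest,
// and converts them to a decimal string, as genMeetingID does per attempt.
func numericID(seed string) string {
	hash := md5.Sum([]byte(seed))
	hexPart := hex.EncodeToString(hash[:])[:16]
	bi := new(big.Int)
	bi.SetString(hexPart, 16)
	return bi.String()
}

func main() {
	// Deterministic for a fixed seed; the real code mixes in time and rand,
	// then checks the database for collisions before accepting the ID.
	fmt.Println(numericID("opID,1700000000,42"))
}
```

The result is an all-digit string of at most 20 characters (a 64-bit value in decimal), which is why the server can hand it out as a numeric-looking meeting ID.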
+
+// meetingPaginationWrapper implements the pagination.Pagination interface.
+type meetingPaginationWrapper struct {
+ pageNumber int32
+ showNumber int32
+}
+
+func (p *meetingPaginationWrapper) GetPageNumber() int32 {
+ if p.pageNumber <= 0 {
+ return 1
+ }
+ return p.pageNumber
+}
+
+func (p *meetingPaginationWrapper) GetShowNumber() int32 {
+ if p.showNumber <= 0 {
+ return 20
+ }
+ return p.showNumber
+}
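meetingPaginationWrapper clamps non-positive values to page 1 and 20 items per page. The same defaulting as a plain function (normalizePage is illustrative):

```go
package main

import "fmt"

// normalizePage applies the wrapper's defaults: page numbers start at 1
// and the page size falls back to 20 when unset or invalid.
func normalizePage(pageNumber, showNumber int32) (int32, int32) {
	if pageNumber <= 0 {
		pageNumber = 1
	}
	if showNumber <= 0 {
		showNumber = 20
	}
	return pageNumber, showNumber
}

func main() {
	p, s := normalizePage(0, -5)
	fmt.Println(p, s) // 1 20
	p, s = normalizePage(3, 50)
	fmt.Println(p, s) // 3 50
}
```

Doing the clamping in the getters means callers can pass request values through unchecked and still get sane pagination.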
+
+// CreateMeeting creates a meeting and its companion group chat.
+func (m *MeetingApi) CreateMeeting(c *gin.Context) {
+ var (
+ req apistruct.CreateMeetingReq
+ resp apistruct.CreateMeetingResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+ // Get the current user ID.
+ creatorUserID := mcontext.GetOpUserID(c)
+ if creatorUserID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("user id not found"))
+ return
+ }
+
+ // Generate the meeting ID.
+ meetingID, err := m.genMeetingID(c, req.MeetingID)
+ if err != nil {
+ log.ZError(c, "CreateMeeting: failed to generate meeting id", err)
+ apiresp.GinError(c, err)
+ return
+ }
+
+ // The scheduled time must not be earlier than the current time.
+ scheduledTime := time.UnixMilli(req.ScheduledTime)
+ if scheduledTime.Before(time.Now()) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("scheduled time cannot be earlier than current time"))
+ return
+ }
+
+ // Handle the password: if none is provided, generate a 6-digit numeric one.
+ password := req.Password
+ if password == "" {
+ password = m.genMeetingPassword()
+ } else {
+ // Validate the password format: it must be exactly 6 digits.
+ if len(password) != 6 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("password must be 6 digits"))
+ return
+ }
+ for _, char := range password {
+ if char < '0' || char > '9' {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("password must be 6 digits"))
+ return
+ }
+ }
+ }
+
+ // Create the group chat, named "会议群-<subject>" ("meeting group - <subject>").
+ groupName := fmt.Sprintf("会议群-%s", req.Subject)
+ // Mark the group as a meeting group via the Ex field.
+ exData := map[string]any{
+ "isMeetingGroup": true,
+ "meetingID": meetingID,
+ }
+ exJSON, err := json.Marshal(exData)
+ if err != nil {
+ log.ZError(c, "CreateMeeting: failed to marshal ex data", err)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to marshal ex data"))
+ return
+ }
+
+ groupInfo := &sdkws.GroupInfo{
+ GroupName: groupName,
+ GroupType: constant.WorkingGroup,
+ FaceURL: req.CoverURL, // use the meeting cover as the group avatar
+ Ex: string(exJSON), // meeting-group marker
+ NeedVerification: constant.Directly, // direct join: anyone can join without approval
+ }
+
+ // Assign owner and admins: the first anchor becomes the owner, the rest become admins.
+ var ownerUserID string
+ var adminUserIDs []string
+ var memberUserIDs []string
+ if len(req.AnchorUserIDs) > 0 {
+ // With anchors, the first anchor is the owner and the others are admins.
+ ownerUserID = req.AnchorUserIDs[0]
+ if len(req.AnchorUserIDs) > 1 {
+ adminUserIDs = req.AnchorUserIDs[1:]
+ }
+ // If the creator is not in the anchor list, add the creator as a regular member.
+ anchorMap := make(map[string]bool)
+ for _, anchorID := range req.AnchorUserIDs {
+ anchorMap[anchorID] = true
+ }
+ if !anchorMap[creatorUserID] {
+ memberUserIDs = []string{creatorUserID}
+ }
+ } else {
+ // Without anchors, the creator is the owner.
+ ownerUserID = creatorUserID
+ }
+
+ createGroupReq := &group.CreateGroupReq{
+ OwnerUserID: ownerUserID,
+ AdminUserIDs: adminUserIDs,
+ MemberUserIDs: memberUserIDs,
+ GroupInfo: groupInfo,
+ }
+
+ createGroupResp, err := m.groupClient.GroupClient.CreateGroup(c, createGroupReq)
+ if err != nil {
+ log.ZError(c, "CreateMeeting: failed to create group", err, "meetingID", meetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to create group"))
+ return
+ }
+
+ groupID := createGroupResp.GroupInfo.GroupID
+
+ // Safety net: re-sync roles for the anchor list to ensure anchors end up as owner/admins.
+ if len(req.AnchorUserIDs) > 0 {
+ if err := m.updateGroupAnchors(c, groupID, req.AnchorUserIDs); err != nil {
+ log.ZWarn(c, "CreateMeeting: failed to sync anchors to owner/admin", err, "groupID", groupID, "anchorUserIDs", req.AnchorUserIDs)
+ }
+ }
+
+ // Collect all member IDs (owner, admins, regular members) for conversation creation.
+ allMemberUserIDs := make([]string, 0)
+ allMemberUserIDs = append(allMemberUserIDs, ownerUserID)
+ allMemberUserIDs = append(allMemberUserIDs, adminUserIDs...)
+ allMemberUserIDs = append(allMemberUserIDs, memberUserIDs...)
+ // Deduplicate.
+ allMemberUserIDs = datautil.Distinct(allMemberUserIDs)
+
+ // Create conversation records for every group member.
+ if len(allMemberUserIDs) > 0 {
+ if err := m.conversationClient.CreateGroupChatConversations(c, groupID, allMemberUserIDs); err != nil {
+ log.ZWarn(c, "CreateMeeting: failed to create group chat conversations", err, "groupID", groupID, "userIDs", allMemberUserIDs)
+ // Do not block the flow; keep creating the meeting. The conversation may just not appear immediately.
+ }
+ }
+
+ // Set the group mute state from the comment switch:
+ // enabling comments unmutes the group, disabling them mutes it.
+ if req.EnableComment {
+ // Comments enabled: unmute the group.
+ _, err := m.groupClient.GroupClient.CancelMuteGroup(c, &group.CancelMuteGroupReq{
+ GroupID: groupID,
+ })
+ if err != nil {
+ log.ZWarn(c, "CreateMeeting: failed to cancel mute group", err, "groupID", groupID)
+ // Do not block the flow; keep creating the meeting.
+ }
+ } else {
+ // Comments disabled: mute the group.
+ _, err := m.groupClient.GroupClient.MuteGroup(c, &group.MuteGroupReq{
+ GroupID: groupID,
+ })
+ if err != nil {
+ log.ZWarn(c, "CreateMeeting: failed to mute group", err, "groupID", groupID)
+ // Do not block the flow; keep creating the meeting.
+ }
+ }
+
+ // Create the meeting record.
+ meeting := &model.Meeting{
+ MeetingID: meetingID,
+ Subject: req.Subject,
+ CoverURL: req.CoverURL,
+ ScheduledTime: scheduledTime,
+ Status: model.MeetingStatusScheduled,
+ CreatorUserID: creatorUserID,
+ Description: req.Description,
+ Duration: req.Duration,
+ EstimatedCount: req.EstimatedCount,
+ EnableMic: req.EnableMic,
+ EnableComment: req.EnableComment,
+ AnchorUserIDs: req.AnchorUserIDs,
+ GroupID: groupID,
+ CheckInCount: 0, // check-in counter starts at 0
+ Password: password, // meeting password
+ }
+
+ if err := m.meetingDB.Create(c, meeting); err != nil {
+ log.ZError(c, "CreateMeeting: failed to create meeting", err, "meetingID", meetingID)
+ // On failure, try to dismiss the group that was just created to avoid an orphaned group.
+ if groupID != "" {
+ if dismissErr := m.groupClient.DismissGroup(c, groupID, true); dismissErr != nil {
+ log.ZWarn(c, "CreateMeeting: failed to rollback created group", dismissErr, "meetingID", meetingID, "groupID", groupID)
+ }
+ }
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to create meeting"))
+ return
+ }
+
+ // Load anchor user info.
+ anchorUsers := m.loadAnchorUsers(c, meeting.AnchorUserIDs)
+
+ // Convert to the response format.
+ resp.MeetingInfo = &apistruct.MeetingInfo{
+ MeetingID: meeting.MeetingID,
+ Subject: meeting.Subject,
+ CoverURL: meeting.CoverURL,
+ ScheduledTime: meeting.ScheduledTime.UnixMilli(),
+ Status: meeting.Status,
+ CreatorUserID: meeting.CreatorUserID,
+ Description: meeting.Description,
+ Duration: meeting.Duration,
+ EstimatedCount: meeting.EstimatedCount,
+ EnableMic: meeting.EnableMic,
+ EnableComment: meeting.EnableComment,
+ AnchorUserIDs: meeting.AnchorUserIDs,
+ AnchorUsers: anchorUsers,
+ CreateTime: meeting.CreateTime.UnixMilli(),
+ UpdateTime: meeting.UpdateTime.UnixMilli(),
+ Ex: meeting.Ex,
+ GroupID: meeting.GroupID,
+ CheckInCount: meeting.CheckInCount,
+ Password: meeting.Password,
+ }
+ resp.GroupID = groupID
+
+ // Add the meeting group ID to the attentionIds of the webhook config.
+ if m.systemConfigDB != nil && groupID != "" {
+ if err := webhook.UpdateAttentionIds(c, m.systemConfigDB, groupID, true); err != nil {
+ log.ZWarn(c, "CreateMeeting: failed to add groupID to webhook attentionIds", err, "meetingID", meetingID, "groupID", groupID)
+ }
+ }
+
+ log.ZInfo(c, "CreateMeeting: success", "meetingID", meetingID, "groupID", groupID, "creatorUserID", creatorUserID)
+ apiresp.GinSuccess(c, resp)
+}
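CreateMeeting generates passwords with %06d and validates caller-supplied ones as exactly six ASCII digits; UpdateMeeting repeats the same check. The shared rule as one helper (validMeetingPassword is illustrative; the server inlines this loop):

```go
package main

import "fmt"

// validMeetingPassword reports whether p is exactly six ASCII digits,
// the format both CreateMeeting and UpdateMeeting enforce.
func validMeetingPassword(p string) bool {
	if len(p) != 6 {
		return false
	}
	for _, r := range p {
		if r < '0' || r > '9' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validMeetingPassword("123456")) // true
	fmt.Println(validMeetingPassword("12345"))  // false: too short
	fmt.Println(validMeetingPassword("12a456")) // false: non-digit
}
```

Checking rune ranges rather than unicode.IsDigit deliberately rejects non-ASCII digits, matching the `char < '0' || char > '9'` test in the handlers.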
+
+// UpdateMeeting updates a meeting; each field can be edited independently.
+func (m *MeetingApi) UpdateMeeting(c *gin.Context) {
+ var (
+ req apistruct.UpdateMeetingReq
+ resp apistruct.UpdateMeetingResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+ // Get the current user ID.
+ opUserID := mcontext.GetOpUserID(c)
+
+ // Check that the meeting exists.
+ meeting, err := m.meetingDB.Take(c, req.MeetingID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("meeting not found"))
+ return
+ }
+ log.ZError(c, "UpdateMeeting: failed to get meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to get meeting"))
+ return
+ }
+
+ // Check permission: only the creator may update the meeting.
+ if meeting.CreatorUserID != opUserID {
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("only creator can update meeting"))
+ return
+ }
+
+ // Finished or cancelled meetings cannot be updated.
+ if meeting.Status == model.MeetingStatusFinished || meeting.Status == model.MeetingStatusCancelled {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("cannot update finished or cancelled meeting"))
+ return
+ }
+
+ // Build the update payload (only the provided fields are updated).
+ updateData := make(map[string]any)
+ if req.Subject != "" {
+ updateData["subject"] = req.Subject
+ // If the subject changed, update the group chat name as well.
+ if meeting.GroupID != "" {
+ groupName := fmt.Sprintf("会议群-%s", req.Subject)
+ _, err := m.groupClient.GroupClient.SetGroupInfo(c, &group.SetGroupInfoReq{
+ GroupInfoForSet: &sdkws.GroupInfoForSet{
+ GroupID: meeting.GroupID,
+ GroupName: groupName,
+ },
+ })
+ if err != nil {
+ log.ZWarn(c, "UpdateMeeting: failed to update group name", err, "groupID", meeting.GroupID)
+ }
+ }
+ }
+ if req.CoverURL != "" {
+ updateData["cover_url"] = req.CoverURL
+ // If the cover changed, update the group avatar as well.
+ if meeting.GroupID != "" {
+ _, err := m.groupClient.GroupClient.SetGroupInfo(c, &group.SetGroupInfoReq{
+ GroupInfoForSet: &sdkws.GroupInfoForSet{
+ GroupID: meeting.GroupID,
+ FaceURL: req.CoverURL,
+ },
+ })
+ if err != nil {
+ log.ZWarn(c, "UpdateMeeting: failed to update group face", err, "groupID", meeting.GroupID)
+ }
+ }
+ }
+ if req.ScheduledTime > 0 {
+ scheduledTime := time.UnixMilli(req.ScheduledTime)
+ // The scheduled time must not be earlier than the current time.
+ if scheduledTime.Before(time.Now()) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("scheduled time cannot be earlier than current time"))
+ return
+ }
+ updateData["scheduled_time"] = scheduledTime
+ }
+ if req.Status > 0 {
+ // Validate the status value.
+ if req.Status < 1 || req.Status > 4 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("invalid status, must be 1-4"))
+ return
+ }
+ updateData["status"] = req.Status
+ }
+ if req.Description != "" {
+ updateData["description"] = req.Description
+ }
+ if req.Duration > 0 {
+ updateData["duration"] = req.Duration
+ }
+ if req.EstimatedCount > 0 {
+ updateData["estimated_count"] = req.EstimatedCount
+ }
+ if req.EnableMic != nil {
+ updateData["enable_mic"] = *req.EnableMic
+ }
+ if req.EnableComment != nil {
+ updateData["enable_comment"] = *req.EnableComment
+ // If the comment switch changed, sync the group mute state.
+ if meeting.GroupID != "" {
+ if *req.EnableComment {
+ // Comments enabled: unmute the group.
+ _, err := m.groupClient.GroupClient.CancelMuteGroup(c, &group.CancelMuteGroupReq{
+ GroupID: meeting.GroupID,
+ })
+ if err != nil {
+ log.ZWarn(c, "UpdateMeeting: failed to cancel mute group", err, "groupID", meeting.GroupID)
+ // Do not block the flow; keep updating the meeting.
+ }
+ } else {
+ // Comments disabled: mute the group.
+ _, err := m.groupClient.GroupClient.MuteGroup(c, &group.MuteGroupReq{
+ GroupID: meeting.GroupID,
+ })
+ if err != nil {
+ log.ZWarn(c, "UpdateMeeting: failed to mute group", err, "groupID", meeting.GroupID)
+ // Do not block the flow; keep updating the meeting.
+ }
+ }
+ }
+ }
+ if req.AnchorUserIDs != nil {
+ updateData["anchor_user_ids"] = req.AnchorUserIDs
+ // If the anchor list changed, sync the group owner and admins.
+ if meeting.GroupID != "" {
+ if err := m.updateGroupAnchors(c, meeting.GroupID, req.AnchorUserIDs); err != nil {
+ log.ZWarn(c, "UpdateMeeting: failed to update group anchors", err, "groupID", meeting.GroupID, "anchorUserIDs", req.AnchorUserIDs)
+ // Do not block the flow; keep updating the meeting.
+ }
+ }
+ }
+ if req.Password != nil {
+ password := *req.Password
+ // Validate the password format: it must be exactly 6 digits.
+ if password != "" {
+ if len(password) != 6 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("password must be 6 digits"))
+ return
+ }
+ for _, char := range password {
+ if char < '0' || char > '9' {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("password must be 6 digits"))
+ return
+ }
+ }
+ }
+ updateData["password"] = password
+ }
+ if req.Ex != "" {
+ updateData["ex"] = req.Ex
+ }
+
+ if len(updateData) == 0 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("no fields to update"))
+ return
+ }
+
+ // Update the meeting.
+ if err := m.meetingDB.Update(c, req.MeetingID, updateData); err != nil {
+ log.ZError(c, "UpdateMeeting: failed to update meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to update meeting"))
+ return
+ }
+
+	// Re-fetch the updated meeting.
+ meeting, err = m.meetingDB.Take(c, req.MeetingID)
+ if err != nil {
+ log.ZError(c, "UpdateMeeting: failed to get updated meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to get updated meeting"))
+ return
+ }
+
+	// Load anchor user info.
+ anchorUsers := m.loadAnchorUsers(c, meeting.AnchorUserIDs)
+
+	// Convert to the response format.
+ resp.MeetingInfo = &apistruct.MeetingInfo{
+ MeetingID: meeting.MeetingID,
+ Subject: meeting.Subject,
+ CoverURL: meeting.CoverURL,
+ ScheduledTime: meeting.ScheduledTime.UnixMilli(),
+ Status: meeting.Status,
+ CreatorUserID: meeting.CreatorUserID,
+ Description: meeting.Description,
+ Duration: meeting.Duration,
+ EstimatedCount: meeting.EstimatedCount,
+ EnableMic: meeting.EnableMic,
+ EnableComment: meeting.EnableComment,
+ AnchorUserIDs: meeting.AnchorUserIDs,
+ AnchorUsers: anchorUsers,
+ CreateTime: meeting.CreateTime.UnixMilli(),
+ UpdateTime: meeting.UpdateTime.UnixMilli(),
+ Ex: meeting.Ex,
+ GroupID: meeting.GroupID,
+ CheckInCount: meeting.CheckInCount,
+ Password: meeting.Password,
+ }
+
+ log.ZInfo(c, "UpdateMeeting: success", "meetingID", req.MeetingID)
+ apiresp.GinSuccess(c, resp)
+}
+
+// updateGroupAnchors syncs the group's owner and admins to match the anchor list.
+func (m *MeetingApi) updateGroupAnchors(ctx context.Context, groupID string, newAnchorUserIDs []string) error {
+	// Fetch all current group members.
+ currentMemberUserIDs, err := m.groupClient.GetGroupMemberUserIDs(ctx, groupID)
+ if err != nil {
+ return errs.WrapMsg(err, "failed to get group members")
+ }
+
+	// Build a set of current members.
+ currentMemberMap := make(map[string]bool)
+ for _, userID := range currentMemberUserIDs {
+ currentMemberMap[userID] = true
+ }
+
+	// Find anchors who are not yet group members.
+ var usersToInvite []string
+ for _, anchorID := range newAnchorUserIDs {
+ if !currentMemberMap[anchorID] {
+ usersToInvite = append(usersToInvite, anchorID)
+ }
+ }
+
+	// Invite the missing anchors into the group.
+ if len(usersToInvite) > 0 {
+ _, err := m.groupClient.GroupClient.InviteUserToGroup(ctx, &group.InviteUserToGroupReq{
+ GroupID: groupID,
+ InvitedUserIDs: usersToInvite,
+ })
+ if err != nil {
+ return errs.WrapMsg(err, "failed to invite anchors to group")
+ }
+
+		// Create conversation records for the newly joined members.
+ if err := m.conversationClient.CreateGroupChatConversations(ctx, groupID, usersToInvite); err != nil {
+ log.ZWarn(ctx, "updateGroupAnchors: failed to create conversations for new members", err, "groupID", groupID, "userIDs", usersToInvite)
+			// Non-fatal: continue.
+ }
+ }
+
+	// No anchors: the owner and admins stay unchanged.
+ if len(newAnchorUserIDs) == 0 {
+ return nil
+ }
+
+	// Fetch the current group owner.
+ groupInfo, err := m.groupClient.GetGroupInfo(ctx, groupID)
+ if err != nil {
+ return errs.WrapMsg(err, "failed to get group info")
+ }
+
+ currentOwnerUserID := groupInfo.OwnerUserID
+	newOwnerUserID := newAnchorUserIDs[0] // the first anchor becomes the group owner
+ var newAdminUserIDs []string
+ if len(newAnchorUserIDs) > 1 {
+		newAdminUserIDs = newAnchorUserIDs[1:] // remaining anchors become admins
+ }
+
+	// Fetch member info (including role levels) for the owner and all anchors.
+ allMemberUserIDs := append([]string{currentOwnerUserID}, newAnchorUserIDs...)
+ allMemberUserIDs = datautil.Distinct(allMemberUserIDs)
+ members, err := m.groupClient.GetGroupMembersInfo(ctx, groupID, allMemberUserIDs)
+ if err != nil {
+ return errs.WrapMsg(err, "failed to get group members info")
+ }
+
+	// Build a userID-to-member map.
+ memberMap := make(map[string]*sdkws.GroupMemberFullInfo)
+ for _, member := range members {
+ memberMap[member.UserID] = member
+ }
+
+	// Transfer ownership first if the owner needs to change.
+ if currentOwnerUserID != newOwnerUserID {
+		// Ensure the new owner is in the group.
+ if _, exists := memberMap[newOwnerUserID]; !exists {
+ return errs.ErrArgs.WrapMsg("new owner not in group")
+ }
+
+		// Determine the role the old owner should hold after the transfer.
+ currentOwnerMember := memberMap[currentOwnerUserID]
+ var oldOwnerNewRoleLevel int32 = constant.GroupOrdinaryUsers
+ if currentOwnerMember != nil {
+ oldOwnerNewRoleLevel = currentOwnerMember.RoleLevel
+ }
+
+		// Demote the old owner to an ordinary member if they are not in the new anchor list.
+ anchorMap := make(map[string]bool)
+ for _, anchorID := range newAnchorUserIDs {
+ anchorMap[anchorID] = true
+ }
+ if !anchorMap[currentOwnerUserID] {
+ oldOwnerNewRoleLevel = constant.GroupOrdinaryUsers
+		} else {
+			// The old owner remains an anchor but is no longer first: demote to admin
+			// (currentOwnerUserID != newOwnerUserID is guaranteed by the enclosing branch).
+			oldOwnerNewRoleLevel = constant.GroupAdmin
+		}
+
+		// Transfer group ownership.
+ _, err := m.groupClient.GroupClient.TransferGroupOwner(ctx, &group.TransferGroupOwnerReq{
+ GroupID: groupID,
+ OldOwnerUserID: currentOwnerUserID,
+ NewOwnerUserID: newOwnerUserID,
+ })
+ if err != nil {
+ return errs.WrapMsg(err, "failed to transfer group owner")
+ }
+
+		// Apply the old owner's new role if they were demoted.
+ if oldOwnerNewRoleLevel != constant.GroupOwner {
+ _, err := m.groupClient.GroupClient.SetGroupMemberInfo(ctx, &group.SetGroupMemberInfoReq{
+ Members: []*group.SetGroupMemberInfo{
+ {
+ GroupID: groupID,
+ UserID: currentOwnerUserID,
+ RoleLevel: &wrapperspb.Int32Value{Value: oldOwnerNewRoleLevel},
+ },
+ },
+ })
+ if err != nil {
+ log.ZWarn(ctx, "updateGroupAnchors: failed to set old owner role", err, "groupID", groupID, "userID", currentOwnerUserID, "roleLevel", oldOwnerNewRoleLevel)
+ }
+ }
+ }
+
+	// Promote the remaining anchors to admin.
+ if len(newAdminUserIDs) > 0 {
+ var membersToUpdate []*group.SetGroupMemberInfo
+ for _, adminID := range newAdminUserIDs {
+			// Skip members who are already admins.
+ member := memberMap[adminID]
+ if member == nil || member.RoleLevel != constant.GroupAdmin {
+ membersToUpdate = append(membersToUpdate, &group.SetGroupMemberInfo{
+ GroupID: groupID,
+ UserID: adminID,
+ RoleLevel: &wrapperspb.Int32Value{Value: constant.GroupAdmin},
+ })
+ }
+ }
+
+ if len(membersToUpdate) > 0 {
+ _, err := m.groupClient.GroupClient.SetGroupMemberInfo(ctx, &group.SetGroupMemberInfoReq{
+ Members: membersToUpdate,
+ })
+ if err != nil {
+ log.ZWarn(ctx, "updateGroupAnchors: failed to set admins", err, "groupID", groupID, "adminUserIDs", newAdminUserIDs)
+ }
+ }
+ }
+
+	// Demote existing admins who are no longer in the anchor list.
+ var currentAdminsToDemote []string
+ for userID, member := range memberMap {
+ if member.RoleLevel == constant.GroupAdmin {
+			// Check whether the admin is in the new anchor list.
+ isInNewAnchors := false
+ for _, anchorID := range newAnchorUserIDs {
+ if anchorID == userID {
+ isInNewAnchors = true
+ break
+ }
+ }
+			// No longer an anchor: schedule for demotion.
+ if !isInNewAnchors {
+ currentAdminsToDemote = append(currentAdminsToDemote, userID)
+ }
+ }
+ }
+
+	// Demote the collected admins to ordinary members.
+ if len(currentAdminsToDemote) > 0 {
+ var membersToDemote []*group.SetGroupMemberInfo
+ for _, userID := range currentAdminsToDemote {
+ membersToDemote = append(membersToDemote, &group.SetGroupMemberInfo{
+ GroupID: groupID,
+ UserID: userID,
+ RoleLevel: &wrapperspb.Int32Value{Value: constant.GroupOrdinaryUsers},
+ })
+ }
+
+ _, err := m.groupClient.GroupClient.SetGroupMemberInfo(ctx, &group.SetGroupMemberInfoReq{
+ Members: membersToDemote,
+ })
+ if err != nil {
+ log.ZWarn(ctx, "updateGroupAnchors: failed to demote admins", err, "groupID", groupID, "userIDs", currentAdminsToDemote)
+ }
+ }
+
+ return nil
+}
+
+// GetMeetings returns a paginated list of meetings (admin side).
+func (m *MeetingApi) GetMeetings(c *gin.Context) {
+ var (
+ req apistruct.GetMeetingsReq
+ resp apistruct.GetMeetingsResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Apply default pagination.
+ if req.Pagination.PageNumber <= 0 {
+ req.Pagination.PageNumber = 1
+ }
+ if req.Pagination.ShowNumber <= 0 {
+ req.Pagination.ShowNumber = 20
+ }
+
+	// Build the pagination object.
+ pagination := &meetingPaginationWrapper{
+ pageNumber: req.Pagination.PageNumber,
+ showNumber: req.Pagination.ShowNumber,
+ }
+
+ var total int64
+ var meetings []*model.Meeting
+ var err error
+
+	// Query meetings according to the request filters.
+ if req.CreatorUserID != "" {
+		// By creator.
+ total, meetings, err = m.meetingDB.FindByCreator(c, req.CreatorUserID, pagination)
+ } else if req.Status > 0 {
+		// By status.
+ total, meetings, err = m.meetingDB.FindByStatus(c, req.Status, pagination)
+ } else if req.Keyword != "" {
+		// By keyword.
+ total, meetings, err = m.meetingDB.Search(c, req.Keyword, pagination)
+ } else if req.StartTime > 0 && req.EndTime > 0 {
+		// By scheduled-time range.
+ total, meetings, err = m.meetingDB.FindByScheduledTimeRange(c, req.StartTime, req.EndTime, pagination)
+ } else {
+		// All meetings.
+ total, meetings, err = m.meetingDB.FindAll(c, pagination)
+ }
+
+ if err != nil {
+ log.ZError(c, "GetMeetings: failed to find meetings", err)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find meetings"))
+ return
+ }
+
+	// Collect all anchor IDs.
+ allAnchorUserIDs := make([]string, 0)
+ for _, meeting := range meetings {
+ allAnchorUserIDs = append(allAnchorUserIDs, meeting.AnchorUserIDs...)
+ }
+	// Deduplicate.
+ allAnchorUserIDs = datautil.Distinct(allAnchorUserIDs)
+
+	// Batch-fetch anchor user info.
+ anchorUserMap := make(map[string]*sdkws.UserInfo)
+ if len(allAnchorUserIDs) > 0 {
+ anchorUsers, err := m.userClient.GetUsersInfo(c, allAnchorUserIDs)
+ if err != nil {
+ log.ZWarn(c, "GetMeetings: failed to get anchor users info", err, "anchorUserIDs", allAnchorUserIDs)
+			// Non-fatal: return the meeting list without anchor info.
+ } else {
+ for _, user := range anchorUsers {
+ anchorUserMap[user.UserID] = user
+ }
+ }
+ }
+
+	// Convert to the response format.
+ meetingInfos := make([]*apistruct.MeetingInfo, 0, len(meetings))
+ for _, meeting := range meetings {
+		// Resolve this meeting's anchors.
+ anchorUsers := make([]*sdkws.UserInfo, 0, len(meeting.AnchorUserIDs))
+ for _, anchorID := range meeting.AnchorUserIDs {
+ if user, ok := anchorUserMap[anchorID]; ok {
+ anchorUsers = append(anchorUsers, user)
+ }
+ }
+
+ meetingInfos = append(meetingInfos, &apistruct.MeetingInfo{
+ MeetingID: meeting.MeetingID,
+ Subject: meeting.Subject,
+ CoverURL: meeting.CoverURL,
+ ScheduledTime: meeting.ScheduledTime.UnixMilli(),
+ Status: meeting.Status,
+ CreatorUserID: meeting.CreatorUserID,
+ Description: meeting.Description,
+ Duration: meeting.Duration,
+ EstimatedCount: meeting.EstimatedCount,
+ EnableMic: meeting.EnableMic,
+ EnableComment: meeting.EnableComment,
+ AnchorUserIDs: meeting.AnchorUserIDs,
+ AnchorUsers: anchorUsers,
+ CreateTime: meeting.CreateTime.UnixMilli(),
+ UpdateTime: meeting.UpdateTime.UnixMilli(),
+ Ex: meeting.Ex,
+ GroupID: meeting.GroupID,
+ CheckInCount: meeting.CheckInCount,
+ Password: meeting.Password,
+ })
+ }
+
+ resp.Total = total
+ resp.Meetings = meetingInfos
+
+ log.ZInfo(c, "GetMeetings: success", "total", total, "count", len(meetingInfos))
+ apiresp.GinSuccess(c, resp)
+}
+
+// DeleteMeeting deletes a meeting (creator only).
+func (m *MeetingApi) DeleteMeeting(c *gin.Context) {
+ var (
+ req apistruct.DeleteMeetingReq
+ resp apistruct.DeleteMeetingResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Get the operating user ID.
+ opUserID := mcontext.GetOpUserID(c)
+
+	// Ensure the meeting exists.
+ meeting, err := m.meetingDB.Take(c, req.MeetingID)
+ if err != nil {
+		if m.IsNotFound(err) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("meeting not found"))
+ return
+ }
+ log.ZError(c, "DeleteMeeting: failed to get meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to get meeting"))
+ return
+ }
+
+	// Permission check: only the creator may delete.
+ if meeting.CreatorUserID != opUserID {
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("only creator can delete meeting"))
+ return
+ }
+
+	// Dismiss the associated group chat first, if any.
+ if meeting.GroupID != "" {
+ err := m.groupClient.DismissGroup(c, meeting.GroupID, false)
+ if err != nil {
+ log.ZWarn(c, "DeleteMeeting: failed to dismiss group", err, "meetingID", req.MeetingID, "groupID", meeting.GroupID)
+			// Non-fatal: continue deleting the meeting.
+ } else {
+ log.ZInfo(c, "DeleteMeeting: successfully dismissed group", "meetingID", req.MeetingID, "groupID", meeting.GroupID)
+ }
+
+		// Remove the meeting's group ID from the webhook attentionIds config.
+ if m.systemConfigDB != nil {
+ if err := webhook.UpdateAttentionIds(c, m.systemConfigDB, meeting.GroupID, false); err != nil {
+ log.ZWarn(c, "DeleteMeeting: failed to remove groupID from webhook attentionIds", err, "meetingID", req.MeetingID, "groupID", meeting.GroupID)
+ }
+ }
+ }
+
+	// Delete the meeting record.
+ if err := m.meetingDB.Delete(c, req.MeetingID); err != nil {
+ log.ZError(c, "DeleteMeeting: failed to delete meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to delete meeting"))
+ return
+ }
+
+ log.ZInfo(c, "DeleteMeeting: success", "meetingID", req.MeetingID)
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetMeetingPublic returns meeting info for the user-facing side.
+func (m *MeetingApi) GetMeetingPublic(c *gin.Context) {
+ var (
+ req apistruct.GetMeetingReq
+ resp apistruct.GetMeetingResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Fetch the meeting.
+ meeting, err := m.meetingDB.Take(c, req.MeetingID)
+ if err != nil {
+ if m.IsNotFound(err) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("meeting not found"))
+ return
+ }
+ log.ZError(c, "GetMeetingPublic: failed to get meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to get meeting"))
+ return
+ }
+
+	// Users may only view scheduled (1) or ongoing (2) meetings.
+ if meeting.Status != model.MeetingStatusScheduled && meeting.Status != model.MeetingStatusOngoing {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("meeting not available"))
+ return
+ }
+
+	// Load anchor user info.
+ anchorUsers := m.loadAnchorUsers(c, meeting.AnchorUserIDs)
+
+	// Convert to the public response format (admin-only fields omitted).
+ resp.MeetingInfo = &apistruct.MeetingPublicInfo{
+ MeetingID: meeting.MeetingID,
+ Subject: meeting.Subject,
+ CoverURL: meeting.CoverURL,
+ ScheduledTime: meeting.ScheduledTime.UnixMilli(),
+ Status: meeting.Status,
+ Description: meeting.Description,
+ Duration: meeting.Duration,
+ EnableMic: meeting.EnableMic,
+ EnableComment: meeting.EnableComment,
+ AnchorUsers: anchorUsers,
+ GroupID: meeting.GroupID,
+ CheckInCount: meeting.CheckInCount,
+ Password: meeting.Password,
+ }
+
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetMeetingsPublic returns a paginated meeting list for the user-facing side.
+func (m *MeetingApi) GetMeetingsPublic(c *gin.Context) {
+ var (
+ req apistruct.GetMeetingsPublicReq
+ resp apistruct.GetMeetingsPublicResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Apply default pagination.
+ if req.Pagination.PageNumber <= 0 {
+ req.Pagination.PageNumber = 1
+ }
+ if req.Pagination.ShowNumber <= 0 {
+ req.Pagination.ShowNumber = 20
+ }
+
+	// Build the pagination object.
+ pagination := &meetingPaginationWrapper{
+ pageNumber: req.Pagination.PageNumber,
+ showNumber: req.Pagination.ShowNumber,
+ }
+
+ var total int64
+ var meetings []*model.Meeting
+ var err error
+
+	// The user side only shows scheduled (1) and ongoing (2) meetings.
+	// If a status filter is supplied, validate it against the allowed values.
+ if req.Status > 0 {
+ if req.Status != model.MeetingStatusScheduled && req.Status != model.MeetingStatusOngoing {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("user can only view scheduled (1) or ongoing (2) meetings"))
+ return
+ }
+		// By status.
+ total, meetings, err = m.meetingDB.FindByStatus(c, req.Status, pagination)
+ } else {
+		// For other filters, query first and then filter by status.
+ var allMeetings []*model.Meeting
+ if req.Keyword != "" {
+			// By keyword.
+ total, allMeetings, err = m.meetingDB.Search(c, req.Keyword, pagination)
+ } else if req.StartTime > 0 && req.EndTime > 0 {
+			// By scheduled-time range.
+ total, allMeetings, err = m.meetingDB.FindByScheduledTimeRange(c, req.StartTime, req.EndTime, pagination)
+ } else {
+			// All meetings.
+ total, allMeetings, err = m.meetingDB.FindAll(c, pagination)
+ }
+
+ if err == nil {
+			// Keep only scheduled and ongoing meetings.
+ meetings = make([]*model.Meeting, 0, len(allMeetings))
+ for _, meeting := range allMeetings {
+ if meeting.Status == model.MeetingStatusScheduled || meeting.Status == model.MeetingStatusOngoing {
+ meetings = append(meetings, meeting)
+ }
+ }
+			// NOTE: total here reflects only the filtered current page, not all
+			// matching meetings; exact totals would need status filtering in the query.
+			total = int64(len(meetings))
+ }
+ }
+
+ if err != nil {
+ log.ZError(c, "GetMeetingsPublic: failed to find meetings", err)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find meetings"))
+ return
+ }
+
+	// Collect all anchor IDs.
+ allAnchorUserIDs := make([]string, 0)
+ for _, meeting := range meetings {
+ allAnchorUserIDs = append(allAnchorUserIDs, meeting.AnchorUserIDs...)
+ }
+	// Deduplicate.
+ allAnchorUserIDs = datautil.Distinct(allAnchorUserIDs)
+
+	// Batch-fetch anchor user info.
+ anchorUserMap := make(map[string]*sdkws.UserInfo)
+ if len(allAnchorUserIDs) > 0 {
+ anchorUsers, err := m.userClient.GetUsersInfo(c, allAnchorUserIDs)
+ if err != nil {
+ log.ZWarn(c, "GetMeetingsPublic: failed to get anchor users info", err, "anchorUserIDs", allAnchorUserIDs)
+			// Non-fatal: return the meeting list without anchor info.
+ } else {
+ for _, user := range anchorUsers {
+ anchorUserMap[user.UserID] = user
+ }
+ }
+ }
+
+	// Convert to the public response format (admin-only fields omitted).
+ meetingInfos := make([]*apistruct.MeetingPublicInfo, 0, len(meetings))
+ for _, meeting := range meetings {
+		// Resolve this meeting's anchors.
+ anchorUsers := make([]*sdkws.UserInfo, 0, len(meeting.AnchorUserIDs))
+ for _, anchorID := range meeting.AnchorUserIDs {
+ if user, ok := anchorUserMap[anchorID]; ok {
+ anchorUsers = append(anchorUsers, user)
+ }
+ }
+
+ meetingInfos = append(meetingInfos, &apistruct.MeetingPublicInfo{
+ MeetingID: meeting.MeetingID,
+ Subject: meeting.Subject,
+ CoverURL: meeting.CoverURL,
+ ScheduledTime: meeting.ScheduledTime.UnixMilli(),
+ Status: meeting.Status,
+ Description: meeting.Description,
+ Duration: meeting.Duration,
+ EnableMic: meeting.EnableMic,
+ EnableComment: meeting.EnableComment,
+ AnchorUsers: anchorUsers,
+ GroupID: meeting.GroupID,
+ CheckInCount: meeting.CheckInCount,
+ Password: meeting.Password,
+ })
+ }
+
+ resp.Total = total
+ resp.Meetings = meetingInfos
+
+ log.ZInfo(c, "GetMeetingsPublic: success", "total", total, "count", len(meetingInfos))
+ apiresp.GinSuccess(c, resp)
+}
+
+// CheckInMeeting records a check-in for the current user.
+func (m *MeetingApi) CheckInMeeting(c *gin.Context) {
+ var (
+ req apistruct.CheckInMeetingReq
+ resp apistruct.CheckInMeetingResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Get the operating user ID.
+ userID := mcontext.GetOpUserID(c)
+ if userID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("user id not found"))
+ return
+ }
+
+	// Ensure the meeting exists.
+ meeting, err := m.meetingDB.Take(c, req.MeetingID)
+ if err != nil {
+ if m.IsNotFound(err) {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("meeting not found"))
+ return
+ }
+ log.ZError(c, "CheckInMeeting: failed to get meeting", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to get meeting"))
+ return
+ }
+
+	// Status check: only scheduled or ongoing meetings accept check-ins.
+ if meeting.Status != model.MeetingStatusScheduled && meeting.Status != model.MeetingStatusOngoing {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("can only check in for scheduled or ongoing meetings"))
+ return
+ }
+
+	// Reject duplicate check-ins.
+ existingCheckIn, err := m.meetingCheckInDB.FindByUserAndMeetingID(c, userID, req.MeetingID)
+ if err == nil && existingCheckIn != nil {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("already checked in"))
+ return
+ }
+ if err != nil && !m.IsNotFound(err) {
+ log.ZError(c, "CheckInMeeting: failed to check existing check-in", err, "userID", userID, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to check existing check-in"))
+ return
+ }
+
+	// Generate the check-in ID.
+ checkInID := idutil.GetMsgIDByMD5(userID + req.MeetingID + timeutil.GetCurrentTimeFormatted())
+
+	// Create the check-in record.
+ checkIn := &model.MeetingCheckIn{
+ CheckInID: checkInID,
+ MeetingID: req.MeetingID,
+ UserID: userID,
+ CheckInTime: time.Now(),
+ }
+
+ if err := m.meetingCheckInDB.Create(c, checkIn); err != nil {
+ log.ZError(c, "CheckInMeeting: failed to create check-in", err, "meetingID", req.MeetingID, "userID", userID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to create check-in"))
+ return
+ }
+
+	// Refresh the meeting's check-in count.
+ checkInCount, err := m.meetingCheckInDB.CountByMeetingID(c, req.MeetingID)
+ if err != nil {
+ log.ZWarn(c, "CheckInMeeting: failed to count check-ins", err, "meetingID", req.MeetingID)
+		// Non-fatal: still return success.
+ } else {
+ if err := m.meetingDB.Update(c, req.MeetingID, map[string]any{"check_in_count": int32(checkInCount)}); err != nil {
+ log.ZWarn(c, "CheckInMeeting: failed to update meeting check-in count", err, "meetingID", req.MeetingID)
+			// Non-fatal: still return success.
+ }
+ }
+
+ resp.CheckInID = checkInID
+ resp.CheckInTime = checkIn.CheckInTime.UnixMilli()
+
+ log.ZInfo(c, "CheckInMeeting: success", "meetingID", req.MeetingID, "userID", userID, "checkInID", checkInID)
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetMeetingCheckIns returns a paginated check-in list for a meeting.
+func (m *MeetingApi) GetMeetingCheckIns(c *gin.Context) {
+ var (
+ req apistruct.GetMeetingCheckInsReq
+ resp apistruct.GetMeetingCheckInsResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Apply default pagination.
+ if req.Pagination.PageNumber <= 0 {
+ req.Pagination.PageNumber = 1
+ }
+ if req.Pagination.ShowNumber <= 0 {
+ req.Pagination.ShowNumber = 20
+ }
+
+	// Build the pagination object.
+ pagination := &meetingPaginationWrapper{
+ pageNumber: req.Pagination.PageNumber,
+ showNumber: req.Pagination.ShowNumber,
+ }
+
+	// Query the check-in list.
+ total, checkIns, err := m.meetingCheckInDB.FindByMeetingID(c, req.MeetingID, pagination)
+ if err != nil {
+ log.ZError(c, "GetMeetingCheckIns: failed to find check-ins", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find check-ins"))
+ return
+ }
+
+	// Collect user IDs.
+ userIDs := make([]string, 0, len(checkIns))
+ for _, checkIn := range checkIns {
+ userIDs = append(userIDs, checkIn.UserID)
+ }
+
+	// Batch-fetch user info.
+ userMap := make(map[string]*sdkws.UserInfo)
+ if len(userIDs) > 0 {
+ users, err := m.userClient.GetUsersInfo(c, userIDs)
+ if err != nil {
+ log.ZWarn(c, "GetMeetingCheckIns: failed to get users info", err, "userIDs", userIDs)
+			// Non-fatal: return the check-in list without user info.
+ } else {
+ for _, user := range users {
+ userMap[user.UserID] = user
+ }
+ }
+ }
+
+	// Convert to the response format.
+ checkInInfos := make([]*apistruct.MeetingCheckInInfo, 0, len(checkIns))
+ for _, checkIn := range checkIns {
+ checkInInfo := &apistruct.MeetingCheckInInfo{
+ CheckInID: checkIn.CheckInID,
+ MeetingID: checkIn.MeetingID,
+ UserID: checkIn.UserID,
+ CheckInTime: checkIn.CheckInTime.UnixMilli(),
+ }
+ if user, ok := userMap[checkIn.UserID]; ok {
+ checkInInfo.UserInfo = user
+ }
+ checkInInfos = append(checkInInfos, checkInInfo)
+ }
+
+ resp.Total = total
+ resp.CheckIns = checkInInfos
+
+ log.ZInfo(c, "GetMeetingCheckIns: success", "meetingID", req.MeetingID, "total", total, "count", len(checkInInfos))
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetMeetingCheckInStats returns the check-in count for a meeting.
+func (m *MeetingApi) GetMeetingCheckInStats(c *gin.Context) {
+ var (
+ req apistruct.GetMeetingCheckInStatsReq
+ resp apistruct.GetMeetingCheckInStatsResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Count check-ins.
+ checkInCount, err := m.meetingCheckInDB.CountByMeetingID(c, req.MeetingID)
+ if err != nil {
+ log.ZError(c, "GetMeetingCheckInStats: failed to count check-ins", err, "meetingID", req.MeetingID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to count check-ins"))
+ return
+ }
+
+ resp.MeetingID = req.MeetingID
+ resp.CheckInCount = checkInCount
+
+ log.ZInfo(c, "GetMeetingCheckInStats: success", "meetingID", req.MeetingID, "checkInCount", checkInCount)
+ apiresp.GinSuccess(c, resp)
+}
+
+// CheckUserCheckIn reports whether the current user has checked in.
+func (m *MeetingApi) CheckUserCheckIn(c *gin.Context) {
+ var (
+ req apistruct.CheckUserCheckInReq
+ resp apistruct.CheckUserCheckInResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Get the operating user ID.
+ userID := mcontext.GetOpUserID(c)
+ if userID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("user id not found"))
+ return
+ }
+
+	// Look up the user's check-in record.
+ checkIn, err := m.meetingCheckInDB.FindByUserAndMeetingID(c, userID, req.MeetingID)
+ if err != nil {
+ if m.IsNotFound(err) {
+			// Not checked in.
+ resp.IsCheckedIn = false
+ log.ZInfo(c, "CheckUserCheckIn: user not checked in", "meetingID", req.MeetingID, "userID", userID)
+ apiresp.GinSuccess(c, resp)
+ return
+ }
+ log.ZError(c, "CheckUserCheckIn: failed to find check-in", err, "meetingID", req.MeetingID, "userID", userID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find check-in"))
+ return
+ }
+
+	// Checked in: fetch the user's info.
+ var userInfo *sdkws.UserInfo
+ users, err := m.userClient.GetUsersInfo(c, []string{userID})
+ if err != nil {
+ log.ZWarn(c, "CheckUserCheckIn: failed to get user info", err, "userID", userID)
+		// Non-fatal: return the check-in info without user details.
+ } else if len(users) > 0 {
+ userInfo = users[0]
+ }
+
+ resp.IsCheckedIn = true
+ resp.CheckInInfo = &apistruct.MeetingCheckInInfo{
+ CheckInID: checkIn.CheckInID,
+ MeetingID: checkIn.MeetingID,
+ UserID: checkIn.UserID,
+ CheckInTime: checkIn.CheckInTime.UnixMilli(),
+ UserInfo: userInfo,
+ }
+
+ log.ZInfo(c, "CheckUserCheckIn: success", "meetingID", req.MeetingID, "userID", userID, "isCheckedIn", true)
+ apiresp.GinSuccess(c, resp)
+}
diff --git a/internal/api/msg.go b/internal/api/msg.go
new file mode 100644
index 0000000..174bed7
--- /dev/null
+++ b/internal/api/msg.go
@@ -0,0 +1,575 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "encoding/base64"
+ "encoding/json"
+ "sync"
+
+ "github.com/gin-gonic/gin"
+ "github.com/go-playground/validator/v10"
+ "github.com/mitchellh/mapstructure"
+ "google.golang.org/protobuf/reflect/protoreflect"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/a2r"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/idutil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+var (
+ msgDataDescriptor []protoreflect.FieldDescriptor
+ msgDataDescriptorOnce sync.Once
+)
+
+func getMsgDataDescriptor() []protoreflect.FieldDescriptor {
+ msgDataDescriptorOnce.Do(func() {
+ skip := make(map[string]struct{})
+ respFields := new(msg.SendMsgResp).ProtoReflect().Descriptor().Fields()
+ for i := 0; i < respFields.Len(); i++ {
+ field := respFields.Get(i)
+ if !field.HasJSONName() {
+ continue
+ }
+ skip[field.JSONName()] = struct{}{}
+ }
+ fields := new(sdkws.MsgData).ProtoReflect().Descriptor().Fields()
+ num := fields.Len()
+ msgDataDescriptor = make([]protoreflect.FieldDescriptor, 0, num)
+ for i := 0; i < num; i++ {
+ field := fields.Get(i)
+ if !field.HasJSONName() {
+ continue
+ }
+ if _, ok := skip[field.JSONName()]; ok {
+ continue
+ }
+			msgDataDescriptor = append(msgDataDescriptor, field)
+ }
+ })
+ return msgDataDescriptor
+}
+
+type MessageApi struct {
+ Client msg.MsgClient
+ userClient *rpcli.UserClient
+ imAdminUserID []string
+ validate *validator.Validate
+}
+
+func NewMessageApi(client msg.MsgClient, userClient *rpcli.UserClient, imAdminUserID []string) MessageApi {
+ return MessageApi{Client: client, userClient: userClient, imAdminUserID: imAdminUserID, validate: validator.New()}
+}
+
+func (*MessageApi) SetOptions(options map[string]bool, value bool) {
+ datautil.SetSwitchFromOptions(options, constant.IsHistory, value)
+ datautil.SetSwitchFromOptions(options, constant.IsPersistent, value)
+ datautil.SetSwitchFromOptions(options, constant.IsSenderSync, value)
+ datautil.SetSwitchFromOptions(options, constant.IsConversationUpdate, value)
+}
+
+func (m *MessageApi) newUserSendMsgReq(_ *gin.Context, params *apistruct.SendMsg, data any) *msg.SendMsgReq {
+ msgData := &sdkws.MsgData{
+ SendID: params.SendID,
+ GroupID: params.GroupID,
+ ClientMsgID: idutil.GetMsgIDByMD5(params.SendID),
+ SenderPlatformID: params.SenderPlatformID,
+ SenderNickname: params.SenderNickname,
+ SenderFaceURL: params.SenderFaceURL,
+ SessionType: params.SessionType,
+ MsgFrom: constant.SysMsgType,
+ ContentType: params.ContentType,
+ CreateTime: timeutil.GetCurrentTimestampByMill(),
+ SendTime: params.SendTime,
+ OfflinePushInfo: params.OfflinePushInfo,
+ Ex: params.Ex,
+ }
+ var newContent string
+ options := make(map[string]bool, 5)
+ switch params.ContentType {
+ case constant.OANotification:
+ notification := sdkws.NotificationElem{}
+ notification.Detail = jsonutil.StructToJsonString(params.Content)
+ newContent = jsonutil.StructToJsonString(¬ification)
+ case constant.Text:
+ fallthrough
+ case constant.AtText:
+ if atElem, ok := data.(*apistruct.AtElem); ok {
+ msgData.AtUserIDList = atElem.AtUserList
+ }
+ fallthrough
+ case constant.Picture:
+ fallthrough
+ case constant.Custom:
+ fallthrough
+ case constant.Voice:
+ fallthrough
+ case constant.Video:
+ fallthrough
+ case constant.File:
+ fallthrough
+ default:
+ newContent = jsonutil.StructToJsonString(params.Content)
+ }
+ if params.IsOnlineOnly {
+ m.SetOptions(options, false)
+ }
+ if params.NotOfflinePush {
+ datautil.SetSwitchFromOptions(options, constant.IsOfflinePush, false)
+ }
+ msgData.Content = []byte(newContent)
+ msgData.Options = options
+ pbData := msg.SendMsgReq{
+ MsgData: msgData,
+ }
+ return &pbData
+}
+
+func (m *MessageApi) GetSeq(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetMaxSeq, m.Client)
+}
+
+func (m *MessageApi) PullMsgBySeqs(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.PullMessageBySeqs, m.Client)
+}
+
+func (m *MessageApi) RevokeMsg(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.RevokeMsg, m.Client)
+}
+
+func (m *MessageApi) MarkMsgsAsRead(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.MarkMsgsAsRead, m.Client)
+}
+
+func (m *MessageApi) MarkConversationAsRead(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.MarkConversationAsRead, m.Client)
+}
+
+func (m *MessageApi) GetConversationsHasReadAndMaxSeq(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetConversationsHasReadAndMaxSeq, m.Client)
+}
+
+func (m *MessageApi) SetConversationHasReadSeq(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.SetConversationHasReadSeq, m.Client)
+}
+
+func (m *MessageApi) ClearConversationsMsg(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.ClearConversationsMsg, m.Client)
+}
+
+func (m *MessageApi) UserClearAllMsg(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.UserClearAllMsg, m.Client)
+}
+
+func (m *MessageApi) DeleteMsgs(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.DeleteMsgs, m.Client)
+}
+
+func (m *MessageApi) DeleteMsgPhysicalBySeq(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.DeleteMsgPhysicalBySeq, m.Client)
+}
+
+func (m *MessageApi) DeleteMsgPhysical(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.DeleteMsgPhysical, m.Client)
+}
+
+func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendMsgReq *msg.SendMsgReq, err error) {
+ var data any
+ log.ZDebug(c, "getSendMsgReq", "req", req.Content)
+ switch req.ContentType {
+ case constant.Text:
+ data = &apistruct.TextElem{}
+ case constant.Picture:
+ data = &apistruct.PictureElem{}
+ case constant.Voice:
+ data = &apistruct.SoundElem{}
+ case constant.Video:
+ data = &apistruct.VideoElem{}
+ case constant.File:
+ data = &apistruct.FileElem{}
+ case constant.AtText:
+ data = &apistruct.AtElem{}
+ case constant.Custom:
+ data = &apistruct.CustomElem{}
+ case constant.MarkdownText:
+ data = &apistruct.MarkdownTextElem{}
+ case constant.Quote:
+ data = &apistruct.QuoteElem{}
+ case constant.OANotification:
+ data = &apistruct.OANotificationElem{}
+ req.SessionType = constant.NotificationChatType
+ if err = m.userClient.GetNotificationByID(c, req.SendID); err != nil {
+ return nil, err
+ }
+ default:
+ return nil, errs.WrapMsg(errs.ErrArgs, "unsupported content type", "contentType", req.ContentType)
+ }
+ if err := mapstructure.WeakDecode(req.Content, data); err != nil {
+ return nil, errs.WrapMsg(err, "failed to decode message content")
+ }
+ log.ZDebug(c, "getSendMsgReq", "decodedContent", data)
+ if err := m.validate.Struct(data); err != nil {
+ return nil, errs.WrapMsg(err, "validation error")
+ }
+ return m.newUserSendMsgReq(c, &req, data), nil
+}
+
+func (m *MessageApi) getModifyFields(req, respModify *sdkws.MsgData) map[string]any {
+ if req == nil || respModify == nil {
+ return nil
+ }
+ fields := make(map[string]any)
+ reqProtoReflect := req.ProtoReflect()
+ respProtoReflect := respModify.ProtoReflect()
+ for _, descriptor := range getMsgDataDescriptor() {
+ reqValue := reqProtoReflect.Get(descriptor)
+ respValue := respProtoReflect.Get(descriptor)
+ if !reqValue.Equal(respValue) {
+ val := respValue.Interface()
+ name := descriptor.JSONName()
+ if name == "content" {
+ if bs, ok := val.([]byte); ok {
+ val = string(bs)
+ }
+ }
+ fields[name] = val
+ }
+ }
+ if len(fields) == 0 {
+ fields = nil
+ }
+ return fields
+}
+
+func (m *MessageApi) ginRespSendMsg(c *gin.Context, req *msg.SendMsgReq, resp *msg.SendMsgResp) {
+ res := m.getModifyFields(req.MsgData, resp.Modify)
+ resp.Modify = nil
+ apiresp.GinSuccess(c, &apistruct.SendMsgResp{
+ SendMsgResp: resp,
+ Modify: res,
+ })
+}
+
+// SendMessage handles the sending of a message. It's an HTTP handler function to be used with Gin framework.
+func (m *MessageApi) SendMessage(c *gin.Context) {
+ // Initialize a request struct for sending a message.
+ req := apistruct.SendMsgReq{}
+
+ // Bind the JSON request body to the request struct.
+ if err := c.BindJSON(&req); err != nil {
+ // Respond with an error if request body binding fails.
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+ // Check if the user has the app manager role.
+ if !authverify.IsAdmin(c) {
+ // Respond with a permission error if the user is not an app manager.
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("only app manager can send message"))
+ return
+ }
+
+ // Prepare the message request with additional required data.
+ sendMsgReq, err := m.getSendMsgReq(c, req.SendMsg)
+ if err != nil {
+ // Log and respond with an error if preparation fails.
+ apiresp.GinError(c, err)
+ return
+ }
+
+ // Set the receiver ID in the message data.
+ sendMsgReq.MsgData.RecvID = req.RecvID
+
+ // Attempt to send the message using the client.
+ respPb, err := m.Client.SendMsg(c, sendMsgReq)
+ if err != nil {
+ // Set the status to failed and respond with an error if sending fails.
+ apiresp.GinError(c, err)
+ return
+ }
+
+ // Set the status to successful if the message is sent.
+ var status = constant.MsgSendSuccessed
+
+ // Attempt to update the message sending status in the system.
+ _, err = m.Client.SetSendMsgStatus(c, &msg.SetSendMsgStatusReq{
+ Status: int32(status),
+ })
+
+ if err != nil {
+ // Log the error if updating the status fails.
+ apiresp.GinError(c, err)
+ return
+ }
+
+ // Respond with a success message and the response payload.
+ m.ginRespSendMsg(c, sendMsgReq, respPb)
+}
+
+func (m *MessageApi) SendBusinessNotification(c *gin.Context) {
+ req := struct {
+ Key string `json:"key"`
+ Data string `json:"data"`
+ SendUserID string `json:"sendUserID" binding:"required"`
+ RecvUserID string `json:"recvUserID"`
+ RecvGroupID string `json:"recvGroupID"`
+ SendMsg bool `json:"sendMsg"`
+ ReliabilityLevel *int `json:"reliabilityLevel"`
+ }{}
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ if req.RecvUserID == "" && req.RecvGroupID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("recvUserID and recvGroupID cannot be empty at the same time"))
+ return
+ }
+ if req.RecvUserID != "" && req.RecvGroupID != "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("recvUserID and recvGroupID cannot be set at the same time"))
+ return
+ }
+ var sessionType int32
+ if req.RecvUserID != "" {
+ sessionType = constant.SingleChatType
+ } else {
+ sessionType = constant.ReadGroupChatType
+ }
+ if req.ReliabilityLevel == nil {
+ req.ReliabilityLevel = datautil.ToPtr(1)
+ }
+ if !authverify.IsAdmin(c) {
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("only app manager can send message"))
+ return
+ }
+ sendMsgReq := msg.SendMsgReq{
+ MsgData: &sdkws.MsgData{
+ SendID: req.SendUserID,
+ RecvID: req.RecvUserID,
+ GroupID: req.RecvGroupID,
+ Content: []byte(jsonutil.StructToJsonString(&sdkws.NotificationElem{
+ Detail: jsonutil.StructToJsonString(&struct {
+ Key string `json:"key"`
+ Data string `json:"data"`
+ }{Key: req.Key, Data: req.Data}),
+ })),
+ MsgFrom: constant.SysMsgType,
+ ContentType: constant.BusinessNotification,
+ SessionType: sessionType,
+ CreateTime: timeutil.GetCurrentTimestampByMill(),
+ ClientMsgID: idutil.GetMsgIDByMD5(mcontext.GetOpUserID(c)),
+ Options: config.GetOptionsByNotification(config.NotificationConfig{
+ IsSendMsg: req.SendMsg,
+ ReliabilityLevel: *req.ReliabilityLevel,
+ UnreadCount: false,
+ }, nil),
+ },
+ }
+ respPb, err := m.Client.SendMsg(c, &sendMsgReq)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ m.ginRespSendMsg(c, &sendMsgReq, respPb)
+}
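
For reference, a request body for this endpoint might look like the following (all values illustrative; `data` is an opaque string, so structured payloads must be serialized by the caller before sending):

```json
{
  "key": "order.paid",
  "data": "{\"orderID\":\"1001\",\"amount\":4200}",
  "sendUserID": "imAdmin",
  "recvUserID": "u_2935",
  "sendMsg": true,
  "reliabilityLevel": 1
}
```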
+
+func (m *MessageApi) BatchSendMsg(c *gin.Context) {
+ var (
+ req apistruct.BatchSendMsgReq
+ resp apistruct.BatchSendMsgResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("only app manager can send message"))
+ return
+ }
+
+ var recvIDs []string
+ if req.IsSendAll {
+ var pageNumber int32 = 1
+ const showNumber = 500
+ for {
+ recvIDsPart, err := m.userClient.GetAllUserIDs(c, pageNumber, showNumber)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ recvIDs = append(recvIDs, recvIDsPart...)
+ if len(recvIDsPart) < showNumber {
+ break
+ }
+ pageNumber++
+ }
+ } else {
+ recvIDs = req.RecvIDs
+ }
+	log.ZDebug(c, "BatchSendMsg recipient count", "count", len(recvIDs))
+ sendMsgReq, err := m.getSendMsgReq(c, req.SendMsg)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ for _, recvID := range recvIDs {
+ sendMsgReq.MsgData.RecvID = recvID
+ rpcResp, err := m.Client.SendMsg(c, sendMsgReq)
+ if err != nil {
+ resp.FailedIDs = append(resp.FailedIDs, recvID)
+ continue
+ }
+ resp.Results = append(resp.Results, &apistruct.SingleReturnResult{
+ ServerMsgID: rpcResp.ServerMsgID,
+ ClientMsgID: rpcResp.ClientMsgID,
+ SendTime: rpcResp.SendTime,
+ RecvID: recvID,
+ Modify: m.getModifyFields(sendMsgReq.MsgData, rpcResp.Modify),
+ })
+ }
+ apiresp.GinSuccess(c, resp)
+}
+
+func (m *MessageApi) SendSimpleMessage(c *gin.Context) {
+ encodedKey, ok := c.GetQuery(webhook.Key)
+ if !ok {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail("missing key in query").Wrap())
+ return
+ }
+
+ decodedData, err := base64.StdEncoding.DecodeString(encodedKey)
+ if err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ var (
+ req apistruct.SendSingleMsgReq
+ keyMsgData apistruct.KeyMsgData
+
+ sendID string
+ sessionType int32
+ recvID string
+ )
+ if err = c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ err = json.Unmarshal(decodedData, &keyMsgData)
+ if err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ if keyMsgData.GroupID != "" {
+ sessionType = constant.ReadGroupChatType
+ sendID = req.SendID
+ } else {
+ sessionType = constant.SingleChatType
+ sendID = keyMsgData.RecvID
+ recvID = keyMsgData.SendID
+ }
+ // check param
+ if keyMsgData.SendID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail("missing recvID or GroupID").Wrap())
+ return
+ }
+ if sendID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail("missing sendID").Wrap())
+ return
+ }
+
+ content, err := jsonutil.JsonMarshal(apistruct.MarkdownTextElem{Content: req.Content})
+ if err != nil {
+ apiresp.GinError(c, errs.Wrap(err))
+ return
+ }
+ msgData := &sdkws.MsgData{
+ SendID: sendID,
+ RecvID: recvID,
+ GroupID: keyMsgData.GroupID,
+ ClientMsgID: idutil.GetMsgIDByMD5(sendID),
+ SenderPlatformID: constant.AdminPlatformID,
+ SessionType: sessionType,
+ MsgFrom: constant.UserMsgType,
+ ContentType: constant.MarkdownText,
+ Content: content,
+ OfflinePushInfo: req.OfflinePushInfo,
+ Ex: req.Ex,
+ }
+
+ sendReq := &msg.SendSimpleMsgReq{
+ MsgData: msgData,
+ }
+
+ respPb, err := m.Client.SendSimpleMsg(c, sendReq)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+
+ var status = constant.MsgSendSuccessed
+
+ _, err = m.Client.SetSendMsgStatus(c, &msg.SetSendMsgStatusReq{
+ Status: int32(status),
+ })
+
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+
+ m.ginRespSendMsg(c, &msg.SendMsgReq{MsgData: sendReq.MsgData}, &msg.SendMsgResp{
+ ServerMsgID: respPb.ServerMsgID,
+ ClientMsgID: respPb.ClientMsgID,
+ SendTime: respPb.SendTime,
+ Modify: respPb.Modify,
+ })
+}
+
+func (m *MessageApi) CheckMsgIsSendSuccess(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
+}
+
+func (m *MessageApi) GetUsersOnlineStatus(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
+}
+
+func (m *MessageApi) GetActiveUser(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetActiveUser, m.Client)
+}
+
+func (m *MessageApi) GetActiveGroup(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetActiveGroup, m.Client)
+}
+
+func (m *MessageApi) SearchMsg(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.SearchMessage, m.Client)
+}
+
+func (m *MessageApi) GetServerTime(c *gin.Context) {
+ a2r.Call(c, msg.MsgClient.GetServerTime, m.Client)
+}
diff --git a/internal/api/prometheus_discovery.go b/internal/api/prometheus_discovery.go
new file mode 100644
index 0000000..5d27b2d
--- /dev/null
+++ b/internal/api/prometheus_discovery.go
@@ -0,0 +1,99 @@
+package api
+
+import (
+ "encoding/json"
+ "errors"
+ "net/http"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+)
+
+type PrometheusDiscoveryApi struct {
+ config *Config
+ kv discovery.KeyValue
+}
+
+func NewPrometheusDiscoveryApi(config *Config, client discovery.SvcDiscoveryRegistry) *PrometheusDiscoveryApi {
+ api := &PrometheusDiscoveryApi{
+ config: config,
+ kv: client,
+ }
+ return api
+}
+
+func (p *PrometheusDiscoveryApi) discovery(c *gin.Context, key string) {
+ value, err := p.kv.GetKeyWithPrefix(c, prommetrics.BuildDiscoveryKeyPrefix(key))
+ if err != nil {
+ if errors.Is(err, discovery.ErrNotSupported) {
+ c.JSON(http.StatusOK, []struct{}{})
+ return
+ }
+ apiresp.GinError(c, errs.WrapMsg(err, "get key value"))
+ return
+ }
+ if len(value) == 0 {
+ c.JSON(http.StatusOK, []*prommetrics.RespTarget{})
+ return
+ }
+ var resp prommetrics.RespTarget
+ for i := range value {
+ var tmp prommetrics.Target
+ if err = json.Unmarshal(value[i], &tmp); err != nil {
+ apiresp.GinError(c, errs.WrapMsg(err, "json unmarshal err"))
+ return
+ }
+
+ resp.Targets = append(resp.Targets, tmp.Target)
+ resp.Labels = tmp.Labels // default label is fixed. See prommetrics.BuildDefaultTarget
+ }
+
+ c.JSON(http.StatusOK, []*prommetrics.RespTarget{&resp})
+}
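
Each discovery handler replies in the shape Prometheus `http_sd_configs` expects: a JSON array of target groups, each with a `targets` list and a shared `labels` map. A response for two registered instances might look like this (addresses and label values are illustrative):

```json
[
  {
    "targets": ["10.0.0.12:12002", "10.0.0.13:12002"],
    "labels": {
      "job": "openim-rpc-user"
    }
  }
]
```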
+
+func (p *PrometheusDiscoveryApi) Api(c *gin.Context) {
+ p.discovery(c, prommetrics.APIKeyName)
+}
+
+func (p *PrometheusDiscoveryApi) User(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.User)
+}
+
+func (p *PrometheusDiscoveryApi) Group(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Group)
+}
+
+func (p *PrometheusDiscoveryApi) Msg(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Msg)
+}
+
+func (p *PrometheusDiscoveryApi) Friend(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Friend)
+}
+
+func (p *PrometheusDiscoveryApi) Conversation(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Conversation)
+}
+
+func (p *PrometheusDiscoveryApi) Third(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Third)
+}
+
+func (p *PrometheusDiscoveryApi) Auth(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Auth)
+}
+
+func (p *PrometheusDiscoveryApi) Push(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.Push)
+}
+
+func (p *PrometheusDiscoveryApi) MessageGateway(c *gin.Context) {
+ p.discovery(c, p.config.Discovery.RpcService.MessageGateway)
+}
+
+func (p *PrometheusDiscoveryApi) MessageTransfer(c *gin.Context) {
+ p.discovery(c, prommetrics.MessageTransferKeyName)
+}
diff --git a/internal/api/ratelimit.go b/internal/api/ratelimit.go
new file mode 100644
index 0000000..0bbf973
--- /dev/null
+++ b/internal/api/ratelimit.go
@@ -0,0 +1,83 @@
+package api
+
+import (
+	"math"
+	"net/http"
+	"strconv"
+	"time"
+
+	"github.com/gin-gonic/gin"
+
+	"github.com/openimsdk/tools/apiresp"
+	"github.com/openimsdk/tools/errs"
+	"github.com/openimsdk/tools/log"
+	"github.com/openimsdk/tools/stability/ratelimit"
+	"github.com/openimsdk/tools/stability/ratelimit/bbr"
+)
+
+type RateLimiter struct {
+	Enable       bool          `yaml:"enable"`
+	Window       time.Duration `yaml:"window"`       // time duration per window
+	Bucket       int           `yaml:"bucket"`       // bucket number for each window
+	CPUThreshold int64         `yaml:"cpuThreshold"` // CPU threshold; valid range 0–1000 (1000 = 100%)
+}
+
+func RateLimitMiddleware(config *RateLimiter) gin.HandlerFunc {
+	if !config.Enable {
+		return func(c *gin.Context) {
+			c.Next()
+		}
+	}
+
+	limiter := bbr.NewBBRLimiter(
+		bbr.WithWindow(config.Window),
+		bbr.WithBucket(config.Bucket),
+		bbr.WithCPUThreshold(config.CPUThreshold),
+	)
+
+	return func(c *gin.Context) {
+		status := limiter.Stat()
+
+		c.Header("X-BBR-CPU", strconv.FormatInt(status.CPU, 10))
+		c.Header("X-BBR-MinRT", strconv.FormatInt(status.MinRt, 10))
+		c.Header("X-BBR-MaxPass", strconv.FormatInt(status.MaxPass, 10))
+		c.Header("X-BBR-MaxInFlight", strconv.FormatInt(status.MaxInFlight, 10))
+		c.Header("X-BBR-InFlight", strconv.FormatInt(status.InFlight, 10))
+
+		done, err := limiter.Allow()
+		if err != nil {
+			c.Header("X-RateLimit-Policy", "BBR")
+			c.Header("Retry-After", calculateBBRRetryAfter(status))
+			c.Header("X-RateLimit-Limit", strconv.FormatInt(status.MaxInFlight, 10))
+			c.Header("X-RateLimit-Remaining", "0") // There is no concept of remaining quota in BBR.
+
+			log.ZWarn(c, "rate limited", err, "path", c.Request.URL.Path)
+			c.Abort()
+			apiresp.GinError(c, errs.NewCodeError(http.StatusTooManyRequests, "too many requests, please try again later"))
+ return
+ }
+
+ c.Next()
+ done(ratelimit.DoneInfo{})
+ }
+}
+
+func calculateBBRRetryAfter(status bbr.Stat) string {
+	// status.CPU is reported on a 0–1000 scale (1000 = 100%); normalize it to a load ratio.
+	loadRatio := float64(status.CPU) / 1000.0
+
+ if loadRatio < 0.8 {
+ return "1"
+ }
+ if loadRatio < 0.95 {
+ return "2"
+ }
+
+ backoff := 1 + int64(math.Pow(loadRatio-0.95, 2)*50)
+ if backoff > 5 {
+ backoff = 5
+ }
+ return strconv.FormatInt(backoff, 10)
+}
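
A standalone sketch of the Retry-After policy above, normalizing the BBR CPU sample against its 0–1000 full scale (an assumption based on the `CPUThreshold` comment): light load asks clients to wait 1s, moderate load 2s, and near saturation a quadratic backoff capped at 5s.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// retryAfterSeconds maps a BBR CPU sample (0–1000, 1000 = 100%) to a
// suggested Retry-After value in seconds.
func retryAfterSeconds(cpu int64) string {
	loadRatio := float64(cpu) / 1000.0
	if loadRatio < 0.8 {
		return "1"
	}
	if loadRatio < 0.95 {
		return "2"
	}
	backoff := 1 + int64(math.Pow(loadRatio-0.95, 2)*50)
	if backoff > 5 {
		backoff = 5
	}
	return strconv.FormatInt(backoff, 10)
}

func main() {
	for _, cpu := range []int64{400, 850, 990} {
		fmt.Println(cpu, retryAfterSeconds(cpu))
	}
}
```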
diff --git a/internal/api/redpacket.go b/internal/api/redpacket.go
new file mode 100644
index 0000000..eaefc01
--- /dev/null
+++ b/internal/api/redpacket.go
@@ -0,0 +1,1022 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "context"
+ "fmt"
+ "math/rand"
+ "strconv"
+ "time"
+
+ "github.com/gin-gonic/gin"
+ "github.com/redis/go-redis/v9"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/idutil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+type RedPacketApi struct {
+ groupClient *rpcli.GroupClient
+ userClient *rpcli.UserClient
+ msgClient msg.MsgClient
+ redPacketDB database.RedPacket
+ redPacketReceiveDB database.RedPacketReceive
+ walletDB database.Wallet
+ walletBalanceRecordDB database.WalletBalanceRecord
+ redisClient redis.UniversalClient
+}
+
+func NewRedPacketApi(groupClient *rpcli.GroupClient, userClient *rpcli.UserClient, msgClient msg.MsgClient, redPacketDB database.RedPacket, redPacketReceiveDB database.RedPacketReceive, walletDB database.Wallet, walletBalanceRecordDB database.WalletBalanceRecord, redisClient redis.UniversalClient) *RedPacketApi {
+ return &RedPacketApi{
+ groupClient: groupClient,
+ userClient: userClient,
+ msgClient: msgClient,
+ redPacketDB: redPacketDB,
+ redPacketReceiveDB: redPacketReceiveDB,
+ walletDB: walletDB,
+ walletBalanceRecordDB: walletBalanceRecordDB,
+ redisClient: redisClient,
+ }
+}
+
+// SendRedPacket sends a red packet (group chats only; the sender defaults to the group owner)
+func (r *RedPacketApi) SendRedPacket(c *gin.Context) {
+ var (
+ req apistruct.SendRedPacketReq
+ resp apistruct.SendRedPacketResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Validate the red packet parameters
+ if req.TotalAmount <= 0 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("totalAmount must be greater than 0"))
+ return
+ }
+ if req.TotalCount <= 0 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("totalCount must be greater than 0"))
+ return
+ }
+ if req.RedPacketType != 1 && req.RedPacketType != 2 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("redPacketType must be 1 (normal) or 2 (random)"))
+ return
+ }
+
+	// Fetch group info to verify the group exists
+ groupInfos, err := r.groupClient.GetGroupsInfo(c, []string{req.GroupID})
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if len(groupInfos) == 0 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("group not found"))
+ return
+ }
+ groupInfo := groupInfos[0]
+
+	// Resolve the group owner ID (the sender defaults to the group owner)
+ sendUserID := groupInfo.OwnerUserID
+ if sendUserID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("group owner not found"))
+ return
+ }
+
+	// Fetch the group owner's profile (used for the sender nickname and avatar)
+	userInfos, err := r.userClient.GetUsersInfo(c, []string{sendUserID})
+	if err != nil {
+		log.ZWarn(c, "SendRedPacket: failed to get user info", err, "sendUserID", sendUserID)
+		// If the lookup fails, keep sending but leave the nickname and avatar empty
+	}
+ }
+
+ var senderNickname, senderFaceURL string
+ if len(userInfos) > 0 && userInfos[0] != nil {
+ senderNickname = userInfos[0].Nickname
+ senderFaceURL = userInfos[0].FaceURL
+ }
+
+	// Generate the red packet ID
+ redPacketID := idutil.GetMsgIDByMD5(sendUserID + req.GroupID + timeutil.GetCurrentTimeFormatted())
+
+	// Compute the conversation ID
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupID)
+
+	// Compute the expiration time (24 hours by default)
+ expireTime := time.Now().Add(24 * time.Hour)
+
+	// Initialize the Redis data structures (required for every red packet type)
+ if r.redisClient == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("redis client is not available"))
+ return
+ }
+
+ ctx := context.Background()
+ queueKey := r.getRedPacketQueueKey(redPacketID)
+ usersKey := r.getRedPacketUsersKey(redPacketID)
+ expireDuration := 24 * time.Hour
+
+		// Lucky red packet: pre-allocate the amounts and push them onto the Redis queue
+ // 拼手气红包:预先分配金额并推送到Redis队列
+ amounts, err := r.allocateRandomAmounts(req.TotalAmount, req.TotalCount)
+ if err != nil {
+ log.ZError(c, "SendRedPacket: failed to allocate random amounts", err, "redPacketID", redPacketID, "groupID", req.GroupID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to allocate random amounts"))
+ return
+ }
+
+ pipe := r.redisClient.Pipeline()
+ for _, amount := range amounts {
+ pipe.RPush(ctx, queueKey, strconv.FormatInt(amount, 10))
+ }
+ pipe.Expire(ctx, queueKey, expireDuration)
+ pipe.Expire(ctx, usersKey, expireDuration)
+ if _, err := pipe.Exec(ctx); err != nil {
+ log.ZError(c, "SendRedPacket: failed to push amounts to redis queue", err, "redPacketID", redPacketID, "groupID", req.GroupID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to push amounts to redis queue"))
+ return
+ }
+ log.ZInfo(c, "SendRedPacket: pushed amounts to redis queue", "redPacketID", redPacketID, "count", len(amounts))
+ } else {
+		// Normal red packet: initialize the queue with the average amount per entry (the integer-division remainder is not distributed)
+ avgAmount := req.TotalAmount / int64(req.TotalCount)
+ pipe := r.redisClient.Pipeline()
+ for i := int32(0); i < req.TotalCount; i++ {
+ pipe.RPush(ctx, queueKey, strconv.FormatInt(avgAmount, 10))
+ }
+ pipe.Expire(ctx, queueKey, expireDuration)
+ pipe.Expire(ctx, usersKey, expireDuration)
+ if _, err := pipe.Exec(ctx); err != nil {
+ log.ZError(c, "SendRedPacket: failed to initialize redis queue", err, "redPacketID", redPacketID, "groupID", req.GroupID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to initialize redis queue"))
+ return
+ }
+ log.ZInfo(c, "SendRedPacket: initialized redis queue for normal red packet", "redPacketID", redPacketID, "count", req.TotalCount, "avgAmount", avgAmount)
+ }
+
+	// Build the red packet database record
+ redPacketRecord := &model.RedPacket{
+ RedPacketID: redPacketID,
+ SendUserID: sendUserID,
+ GroupID: req.GroupID,
+ ConversationID: conversationID,
+ SessionType: constant.ReadGroupChatType,
+ RedPacketType: req.RedPacketType,
+ TotalAmount: req.TotalAmount,
+ TotalCount: req.TotalCount,
+ RemainAmount: req.TotalAmount,
+ RemainCount: req.TotalCount,
+ Blessing: req.Blessing,
+ Status: model.RedPacketStatusActive,
+ ExpireTime: expireTime,
+ CreateTime: time.Now(),
+ }
+
+	// Persist the red packet record
+ if err := r.redPacketDB.Create(c, redPacketRecord); err != nil {
+ log.ZError(c, "SendRedPacket: failed to create red packet record", err, "redPacketID", redPacketID, "groupID", req.GroupID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to create red packet record"))
+ return
+ }
+
+	// Build the red packet message content (as a custom message)
+	redPacketElem := apistruct.RedPacketElem{
+		RedPacketID:   redPacketID,
+		RedPacketType: req.RedPacketType,
+		Blessing:      req.Blessing,
+	}
+	// Serialize the red packet payload to JSON and store it in the data field
+	redPacketData := jsonutil.StructToJsonString(redPacketElem)
+
+	// Build the custom message envelope
+	customElem := apistruct.CustomElem{
+		Data:        redPacketData,
+		Description: "redpacket", // secondary type marker: red packet message
+		Extension:   "",          // extension field, reserved for future use
+	}
+	content := jsonutil.StructToJsonString(customElem)
+
+	// Build the message request
+	msgData := &sdkws.MsgData{
+		SendID:           sendUserID,
+		GroupID:          req.GroupID,
+		ClientMsgID:      idutil.GetMsgIDByMD5(sendUserID),
+		SenderPlatformID: 0, // sent by the system
+		SenderNickname:   senderNickname,
+		SenderFaceURL:    senderFaceURL,
+		SessionType:      constant.ReadGroupChatType,
+		MsgFrom:          constant.SysMsgType,
+		ContentType:      constant.Custom, // custom message type (110)
+		CreateTime:       timeutil.GetCurrentTimestampByMill(),
+		SendTime:         timeutil.GetCurrentTimestampByMill(),
+		Content:          []byte(content),
+		Options:          make(map[string]bool),
+	}
+
+	// Send the message
+ sendMsgReq := &msg.SendMsgReq{
+ MsgData: msgData,
+ }
+ sendMsgResp, err := r.msgClient.SendMsg(c, sendMsgReq)
+ if err != nil {
+ log.ZError(c, "SendRedPacket: failed to send message", err, "redPacketID", redPacketID, "groupID", req.GroupID)
+ apiresp.GinError(c, err)
+ return
+ }
+
+	// Build the response
+ resp.RedPacketID = redPacketID
+ resp.ServerMsgID = sendMsgResp.ServerMsgID
+ resp.ClientMsgID = msgData.ClientMsgID
+ resp.SendTime = sendMsgResp.SendTime
+
+ log.ZInfo(c, "SendRedPacket: success", "redPacketID", redPacketID, "groupID", req.GroupID, "sendUserID", sendUserID)
+ apiresp.GinSuccess(c, resp)
+}
+
+// allocateRandomAmounts allocates "lucky" red packet amounts (modeled on the WeChat algorithm).
+// How it works:
+// 1. Guarantee every share at least the minimum amount (1 cent)
+// 2. For each of the first n-1 shares, recompute the maximum allocatable amount at draw time
+// 3. Draw uniformly between the minimum and that maximum
+// 4. Assign the remaining balance to the last share
+// 5. Shuffle the result to add randomness
+func (r *RedPacketApi) allocateRandomAmounts(totalAmount int64, totalCount int32) ([]int64, error) {
+ if totalCount <= 0 {
+ return nil, errs.ErrArgs.WrapMsg("totalCount must be greater than 0")
+ }
+ if totalAmount < int64(totalCount) {
+ return nil, errs.ErrArgs.WrapMsg("totalAmount must be at least totalCount (minimum 1 cent per packet)")
+ }
+
+	const minAmount = 1 // minimum amount: 1 cent
+ amounts := make([]int64, totalCount)
+ remainAmount := totalAmount
+ remainCount := totalCount
+
+	// Seed the pseudo-random generator (rand.Seed is deprecated as of Go 1.20)
+	rand.Seed(time.Now().UnixNano())
+
+	// Allocate the first n-1 shares
+	for i := int32(0); i < totalCount-1; i++ {
+		// Maximum allocatable amount: remaining amount / remaining count * 2.
+		// This keeps later shares fundable and avoids handing out nothing or everything,
+		// while still respecting the minimum amount.
+		maxAmount := remainAmount / int64(remainCount) * 2
+		if maxAmount < minAmount {
+			maxAmount = minAmount
+		}
+
+		// If the remainder cannot cover the minimum (1 cent) for every remaining share, hand out the minimum
+		if remainAmount <= int64(remainCount)*minAmount {
+			amounts[i] = minAmount
+			remainAmount -= minAmount
+			remainCount--
+			continue
+		}
+
+		// Cap the maximum at the remaining amount minus the minimum reserved for the other shares,
+		// so that every later share can still receive at least the minimum
+		maxAllowed := remainAmount - int64(remainCount-1)*minAmount
+		if maxAmount > maxAllowed {
+			maxAmount = maxAllowed
+		}
+
+		// Draw uniformly between the minimum and the maximum;
+		// if the maximum equals the minimum, just use the minimum
+ var amount int64
+ if maxAmount <= minAmount {
+ amount = minAmount
+ } else {
+ amount = rand.Int63n(maxAmount-minAmount+1) + minAmount
+ }
+
+ amounts[i] = amount
+ remainAmount -= amount
+ remainCount--
+ }
+
+	// The last share takes whatever remains
+ amounts[totalCount-1] = remainAmount
+
+	// Sanity check: the last share must be at least 1 cent
+ if amounts[totalCount-1] < minAmount {
+ return nil, errs.ErrInternalServer.WrapMsg("last packet amount is less than minimum")
+ }
+
+	// Shuffle to add randomness
+ rand.Shuffle(len(amounts), func(i, j int) {
+ amounts[i], amounts[j] = amounts[j], amounts[i]
+ })
+
+	// Final check: every share is at least 1 cent and the shares sum to the total
+ var sum int64
+ for i := int32(0); i < totalCount; i++ {
+ if amounts[i] < minAmount {
+ return nil, errs.ErrInternalServer.WrapMsg("allocated amount is less than minimum")
+ }
+ sum += amounts[i]
+ }
+ if sum != totalAmount {
+ return nil, errs.ErrInternalServer.WrapMsg("allocated amount sum mismatch")
+ }
+
+ return amounts, nil
+}
+
+// writeRedPacketRecordToMongoDB asynchronously persists a receive record to MongoDB
+func (r *RedPacketApi) writeRedPacketRecordToMongoDB(ctx context.Context, redPacketID, userID string, amount int64) {
+ receiveID := idutil.GetMsgIDByMD5(userID + redPacketID + timeutil.GetCurrentTimeFormatted())
+ receiveRecord := &model.RedPacketReceive{
+ ReceiveID: receiveID,
+ RedPacketID: redPacketID,
+ ReceiveUserID: userID,
+ Amount: amount,
+ ReceiveTime: time.Now(),
+ IsLucky: false,
+ }
+
+ if err := r.redPacketReceiveDB.Create(ctx, receiveRecord); err != nil {
+		// Check for a MongoDB unique-index violation (E11000)
+		errStr := err.Error()
+		if errs.ErrArgs.Is(err) || (len(errStr) >= 6 && errStr[:6] == "E11000") {
+			log.ZDebug(ctx, "ReceiveRedPacket: receive record already exists (duplicate key)", "redPacketID", redPacketID, "userID", userID)
+			return
+		}
+		// Log any other error; the compensation mechanism will handle it later
+ log.ZError(ctx, "ReceiveRedPacket: failed to write receive record to MongoDB", err,
+ "redPacketID", redPacketID,
+ "userID", userID,
+ "amount", amount)
+ }
+}
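
Duplicate-key detection above relies on matching the driver's error text. A minimal, testable version of that check (assuming the raw `E11000` prefix is visible on the unwrapped error string; wrapped errors may hide it):

```go
package main

import "fmt"

// isMongoDuplicateKey reports whether an error message looks like a
// MongoDB unique-index violation (error code E11000). It only inspects
// the string form of the error.
func isMongoDuplicateKey(errStr string) bool {
	return len(errStr) >= 6 && errStr[:6] == "E11000"
}

func main() {
	fmt.Println(isMongoDuplicateKey("E11000 duplicate key error collection: im.red_packet_receive"))
	fmt.Println(isMongoDuplicateKey("context deadline exceeded"))
}
```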
+
+// updateWalletBalanceAsync asynchronously credits the user's wallet
+func (r *RedPacketApi) updateWalletBalanceAsync(ctx context.Context, userID string, amount int64, redPacketID string) {
+ maxRetries := 3
+ var updateSuccess bool
+
+ for retry := 0; retry < maxRetries; retry++ {
+		// Look up the user's wallet, creating it if it does not exist yet
+		wallet, err := r.walletDB.Take(ctx, userID)
+		if err != nil {
+			// Wallet not found: create a new one
+			if errs.ErrRecordNotFound.Is(err) || mgo.IsNotFound(err) {
+ wallet = &model.Wallet{
+ UserID: userID,
+ Balance: 0,
+ Version: 1,
+ CreateTime: time.Now(),
+ UpdateTime: time.Now(),
+ }
+ if err := r.walletDB.Create(ctx, wallet); err != nil {
+ log.ZError(ctx, "ReceiveRedPacket: failed to create wallet", err, "userID", userID, "retry", retry)
+					// Creation may have lost a concurrent race; the next retry will find the wallet
+ if retry < maxRetries-1 {
+ time.Sleep(time.Millisecond * 50)
+ continue
+ }
+ return
+ }
+ } else {
+ log.ZError(ctx, "ReceiveRedPacket: failed to get wallet", err, "userID", userID, "retry", retry)
+ if retry < maxRetries-1 {
+ time.Sleep(time.Millisecond * 50)
+ continue
+ }
+ return
+ }
+ }
+
+		// Update the balance with a version check (guards against concurrent overwrites)
+ params := &database.WalletUpdateParams{
+ UserID: userID,
+ Operation: "add",
+ Amount: amount,
+ OldBalance: wallet.Balance,
+ OldVersion: wallet.Version,
+ }
+ result, err := r.walletDB.UpdateBalanceWithVersion(ctx, params)
+ if err != nil {
+			// Version mismatch means a concurrent modification; retry
+ log.ZWarn(ctx, "ReceiveRedPacket: concurrent modification detected, retrying", err,
+ "userID", userID,
+ "retry", retry+1,
+ "maxRetries", maxRetries)
+ if retry < maxRetries-1 {
+ time.Sleep(time.Millisecond * 100)
+ continue
+ }
+ log.ZError(ctx, "ReceiveRedPacket: failed to update wallet balance after retries", err, "userID", userID, "amount", amount)
+ return
+ }
+
+		// Update succeeded; record the balance change
+		updateSuccess = true
+
+		// Create the balance-change record
+ recordID := idutil.GetMsgIDByMD5(userID + timeutil.GetCurrentTimeFormatted() + "add" + strconv.FormatInt(amount, 10) + redPacketID)
+ balanceRecord := &model.WalletBalanceRecord{
+ ID: recordID,
+ UserID: userID,
+			Amount:        amount, // amount received from the red packet (positive)
+			Type:          8,      // 8 = red packet grab
+ BeforeBalance: params.OldBalance,
+ AfterBalance: result.NewBalance,
+ OrderID: "",
+ TransactionID: "",
+ RedPacketID: redPacketID,
+ Remark: "领取红包",
+ CreateTime: time.Now(),
+ }
+ if err := r.walletBalanceRecordDB.Create(ctx, balanceRecord); err != nil {
+ log.ZWarn(ctx, "ReceiveRedPacket: failed to create balance record", err,
+ "userID", userID,
+ "redPacketID", redPacketID,
+ "amount", amount)
+ }
+
+ log.ZInfo(ctx, "ReceiveRedPacket: wallet balance updated async",
+ "userID", userID,
+ "oldBalance", wallet.Balance,
+ "newBalance", result.NewBalance,
+ "amount", amount)
+ break
+ }
+
+ if !updateSuccess {
+ log.ZError(ctx, "ReceiveRedPacket: failed to update wallet balance after all retries", nil, "userID", userID, "maxRetries", maxRetries)
+ }
+}
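
The retry loop above is optimistic concurrency control: each attempt re-reads the wallet and only applies the delta if the stored version is unchanged. An in-memory sketch of that pattern (hypothetical types; the real path goes through `walletDB.UpdateBalanceWithVersion`):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errVersionConflict = errors.New("version conflict")

// versionedWallet stands in for the MongoDB document: an update succeeds
// only when the caller's version matches, as a version-filtered
// findOneAndUpdate would.
type versionedWallet struct {
	mu      sync.Mutex
	balance int64
	version int64
}

func (w *versionedWallet) read() (balance, version int64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.balance, w.version
}

func (w *versionedWallet) updateIfVersion(oldVersion, delta int64) (int64, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.version != oldVersion {
		return 0, errVersionConflict
	}
	w.balance += delta
	w.version++
	return w.balance, nil
}

// addWithRetry mirrors updateWalletBalanceAsync: re-read and retry on
// conflict, giving up after maxRetries attempts.
func addWithRetry(w *versionedWallet, delta int64, maxRetries int) (int64, error) {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		_, version := w.read()
		newBalance, err := w.updateIfVersion(version, delta)
		if err == nil {
			return newBalance, nil
		}
		lastErr = err
	}
	return 0, lastErr
}

func main() {
	w := &versionedWallet{}
	balance, err := addWithRetry(w, 250, 3)
	fmt.Println(balance, err) // 250 <nil>
}
```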
+
+// writeToCompensationStream records a claim in the failure-compensation stream (Redis Stream)
+func (r *RedPacketApi) writeToCompensationStream(ctx context.Context, redPacketID, userID string, amount int64) {
+ if r.redisClient == nil {
+ return
+ }
+
+ streamKey := r.getRedPacketStreamKey(redPacketID)
+	// XADD the entry to the stream with the fields needed for replay
+ values := map[string]interface{}{
+ "userID": userID,
+ "amount": amount,
+ "time": time.Now().Unix(),
+ }
+
+ if err := r.redisClient.XAdd(ctx, &redis.XAddArgs{
+ Stream: streamKey,
+ Values: values,
+ }).Err(); err != nil {
+ log.ZWarn(ctx, "ReceiveRedPacket: failed to write to compensation stream", err,
+ "redPacketID", redPacketID,
+ "userID", userID)
+ }
+}
+
+// ReceiveRedPacket claims a red packet (new scheme: Redis handles concurrency control, MongoDB is written asynchronously)
+func (r *RedPacketApi) ReceiveRedPacket(c *gin.Context) {
+ var (
+ req apistruct.ReceiveRedPacketReq
+ resp apistruct.ReceiveRedPacketResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Resolve the current user ID
+ opUserID := mcontext.GetOpUserID(c)
+ if opUserID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("userID is required"))
+ return
+ }
+
+	// Look up the red packet's basic info (for the status check)
+ redPacket, err := r.redPacketDB.Take(c, req.RedPacketID)
+ if err != nil {
+ log.ZError(c, "ReceiveRedPacket: failed to get red packet", err, "redPacketID", req.RedPacketID, "userID", opUserID)
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("red packet not found"))
+ return
+ }
+
+	// Check the red packet status (expiration check)
+ if redPacket.Status == model.RedPacketStatusExpired {
+ apiresp.GinError(c, servererrs.ErrRedPacketExpired)
+ return
+ }
+
+	// [Core] Atomically grab the red packet with a Redis Lua script
+ ctx := context.Background()
+ amount, err := r.grabRedPacketFromRedis(ctx, req.RedPacketID, opUserID)
+ if err != nil {
+		if servererrs.ErrRedPacketAlreadyReceived.Is(err) || servererrs.ErrRedPacketFinished.Is(err) {
+			apiresp.GinError(c, err)
+			return
+		}
+		// Any other error (the Lua script failed, or its return value could not be parsed)
+		log.ZError(c, "ReceiveRedPacket: failed to grab red packet from redis", err,
+			"redPacketID", req.RedPacketID,
+			"userID", opUserID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to grab red packet: "+err.Error()))
+ return
+ }
+
+	// Grab succeeded in Redis; return the result to the caller immediately
+ resp.RedPacketID = req.RedPacketID
+ resp.Amount = amount
+ resp.IsLucky = false
+
+ log.ZInfo(c, "ReceiveRedPacket: successfully grabbed from redis", "redPacketID", req.RedPacketID, "userID", opUserID, "amount", amount)
+ apiresp.GinSuccess(c, resp)
+
+	// [Async MongoDB write] After Redis succeeds, write MongoDB and the compensation stream asynchronously
+ go func() {
+		// Write to the failure-compensation stream (used for later reconciliation)
+ r.writeToCompensationStream(ctx, req.RedPacketID, opUserID, amount)
+
+		// Asynchronously write the claim record to MongoDB
+ r.writeRedPacketRecordToMongoDB(ctx, req.RedPacketID, opUserID, amount)
+
+		// Asynchronously update the wallet balance
+ r.updateWalletBalanceAsync(ctx, opUserID, amount, req.RedPacketID)
+ }()
+}
+
+// getRedPacketQueueKey returns the Redis key of the red packet amount queue
+func (r *RedPacketApi) getRedPacketQueueKey(redPacketID string) string {
+ return "rp:" + redPacketID + ":list"
+}
+
+// getRedPacketUsersKey returns the Redis key of the set of users who have already claimed
+func (r *RedPacketApi) getRedPacketUsersKey(redPacketID string) string {
+ return "rp:" + redPacketID + ":users"
+}
+
+// getRedPacketStreamKey returns the Redis key of the claim-log stream (used for failure compensation)
+func (r *RedPacketApi) getRedPacketStreamKey(redPacketID string) string {
+ return "rp:" + redPacketID + ":stream"
+}
+
+// grabRedPacketLuaScript is a Redis Lua script that grabs a red packet atomically.
+// Return values:
+//
+//	-1: the user has already claimed one
+//	-2: the red packet is used up
+//	-3: failed to parse the amount (corrupt data)
+//	>0: claim succeeded; returns the amount (in cents)
+var grabRedPacketLuaScript = redis.NewScript(`
+-- KEYS[1] list of red packet amounts (rp:{packetId}:list)
+-- KEYS[2] set of users who have claimed (rp:{packetId}:users)
+-- ARGV[1] userId
+
+-- Reject users who have already claimed
+if redis.call('SISMEMBER', KEYS[2], ARGV[1]) == 1 then
+ return -1
+end
+
+-- Pop one amount from the queue
+local money = redis.call('LPOP', KEYS[1])
+if not money then
+ return -2
+end
+
+-- Convert the string amount to a number
+local amount = tonumber(money)
+if not amount then
+ return -3
+end
+
+-- Record the user (before returning, so the whole script stays atomic)
+redis.call('SADD', KEYS[2], ARGV[1])
+
+-- Return the numeric amount
+return amount
+`)
+
+// grabRedPacketFromRedis grabs a red packet atomically via the Redis Lua script
+func (r *RedPacketApi) grabRedPacketFromRedis(ctx context.Context, redPacketID, userID string) (int64, error) {
+ if r.redisClient == nil {
+ return 0, errs.ErrInternalServer.WrapMsg("redis client is not available")
+ }
+
+ listKey := r.getRedPacketQueueKey(redPacketID)
+ usersKey := r.getRedPacketUsersKey(redPacketID)
+
+	// Run the Lua script
+ result, err := grabRedPacketLuaScript.Eval(ctx, r.redisClient, []string{listKey, usersKey}, userID).Result()
+ if err != nil {
+ log.ZError(ctx, "ReceiveRedPacket: lua script execution failed", err,
+ "redPacketID", redPacketID,
+ "userID", userID,
+ "listKey", listKey,
+ "usersKey", usersKey)
+ return 0, errs.Wrap(err)
+ }
+
+	// Log the raw return value for debugging
+ log.ZDebug(ctx, "ReceiveRedPacket: lua script result", "redPacketID", redPacketID, "userID", userID, "result", result, "resultType", fmt.Sprintf("%T", result))
+
+	// Parse the return value (the script returns a number, but the client may deliver it as an integer or a string)
+ var code int64
+ switch v := result.(type) {
+ case int64:
+ code = v
+ case int:
+ code = int64(v)
+ case string:
+		// A string came back; try to parse it as an integer
+ parsed, err := strconv.ParseInt(v, 10, 64)
+ if err != nil {
+ log.ZError(ctx, "ReceiveRedPacket: failed to parse lua script return value", err, "redPacketID", redPacketID, "userID", userID, "result", result)
+ return 0, errs.ErrInternalServer.WrapMsg("invalid lua script return value: " + v)
+ }
+ code = parsed
+ default:
+ log.ZError(ctx, "ReceiveRedPacket: unexpected lua script return type", nil, "redPacketID", redPacketID, "userID", userID, "result", result, "type", fmt.Sprintf("%T", result))
+ return 0, errs.ErrInternalServer.WrapMsg(fmt.Sprintf("invalid lua script return value type: %T", result))
+ }
+
+ switch code {
+ case -1:
+ return 0, servererrs.ErrRedPacketAlreadyReceived
+ case -2:
+ return 0, servererrs.ErrRedPacketFinished
+ case -3:
+ return 0, errs.ErrInternalServer.WrapMsg("invalid red packet amount data")
+ default:
+		// code > 0 means the claim succeeded; return the amount
+ return code, nil
+ }
+}
+
+// paginationWrapper implements the pagination.Pagination interface
+type paginationWrapper struct {
+ pageNumber int32
+ showNumber int32
+}
+
+func (p *paginationWrapper) GetPageNumber() int32 {
+ if p.pageNumber <= 0 {
+ return 1
+ }
+ return p.pageNumber
+}
+
+func (p *paginationWrapper) GetShowNumber() int32 {
+ if p.showNumber <= 0 {
+ return 20
+ }
+ return p.showNumber
+}
+
+// GetRedPacketsByGroup lists red packets by group ID (admin endpoint)
+func (r *RedPacketApi) GetRedPacketsByGroup(c *gin.Context) {
+ var (
+ req apistruct.GetRedPacketsByGroupReq
+ resp apistruct.GetRedPacketsByGroupResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Apply default pagination parameters
+ if req.Pagination.PageNumber <= 0 {
+ req.Pagination.PageNumber = 1
+ }
+ if req.Pagination.ShowNumber <= 0 {
+ req.Pagination.ShowNumber = 20
+ }
+
+	// Build the pagination object
+ pagination := &paginationWrapper{
+ pageNumber: req.Pagination.PageNumber,
+ showNumber: req.Pagination.ShowNumber,
+ }
+
+	// Query the red packet list
+ var total int64
+ var redPackets []*model.RedPacket
+ var err error
+ if req.GroupID == "" {
+		// An empty group ID means list all red packets
+ total, redPackets, err = r.redPacketDB.FindAllRedPackets(c, pagination)
+ } else {
+		// Otherwise list the red packets of the given group
+ total, redPackets, err = r.redPacketDB.FindRedPacketsByGroup(c, req.GroupID, pagination)
+ }
+ if err != nil {
+ log.ZError(c, "GetRedPacketsByGroup: failed to find red packets", err, "groupID", req.GroupID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find red packets"))
+ return
+ }
+
+	// Collect the unique group IDs
+ groupIDMap := make(map[string]bool)
+ for _, rp := range redPackets {
+ if rp.GroupID != "" {
+ groupIDMap[rp.GroupID] = true
+ }
+ }
+
+	// Batch-fetch group info
+ groupIDList := make([]string, 0, len(groupIDMap))
+ for groupID := range groupIDMap {
+ groupIDList = append(groupIDList, groupID)
+ }
+
+ groupInfoMap := make(map[string]string) // groupID -> groupName
+ if len(groupIDList) > 0 {
+ groupInfos, err := r.groupClient.GetGroupsInfo(c, groupIDList)
+ if err == nil && groupInfos != nil {
+ for _, groupInfo := range groupInfos {
+ groupInfoMap[groupInfo.GroupID] = groupInfo.GroupName
+ }
+ } else {
+ log.ZWarn(c, "GetRedPacketsByGroup: failed to get groups info", err, "groupIDs", groupIDList)
+ }
+ }
+
+	// Convert to the response format
+ resp.Total = total
+ resp.RedPackets = make([]*apistruct.RedPacketInfo, 0, len(redPackets))
+ for _, rp := range redPackets {
+ groupName := groupInfoMap[rp.GroupID]
+ resp.RedPackets = append(resp.RedPackets, &apistruct.RedPacketInfo{
+ RedPacketID: rp.RedPacketID,
+ SendUserID: rp.SendUserID,
+ GroupID: rp.GroupID,
+ GroupName: groupName,
+ RedPacketType: rp.RedPacketType,
+ TotalAmount: rp.TotalAmount,
+ TotalCount: rp.TotalCount,
+ RemainAmount: rp.RemainAmount,
+ RemainCount: rp.RemainCount,
+ Blessing: rp.Blessing,
+ Status: rp.Status,
+ ExpireTime: rp.ExpireTime.UnixMilli(),
+ CreateTime: rp.CreateTime.UnixMilli(),
+ })
+ }
+
+ log.ZInfo(c, "GetRedPacketsByGroup: success", "groupID", req.GroupID, "total", total, "count", len(resp.RedPackets))
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetRedPacketReceiveInfo reports a red packet's claim status (admin endpoint)
+func (r *RedPacketApi) GetRedPacketReceiveInfo(c *gin.Context) {
+ var (
+ req apistruct.GetRedPacketReceiveInfoReq
+ resp apistruct.GetRedPacketReceiveInfoResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Look up the red packet
+ redPacket, err := r.redPacketDB.Take(c, req.RedPacketID)
+ if err != nil {
+ log.ZError(c, "GetRedPacketReceiveInfo: failed to get red packet", err, "redPacketID", req.RedPacketID)
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("red packet not found"))
+ return
+ }
+
+	// Look up the claim records
+ receives, err := r.redPacketReceiveDB.FindByRedPacketID(c, req.RedPacketID)
+ if err != nil {
+ log.ZError(c, "GetRedPacketReceiveInfo: failed to find receives", err, "redPacketID", req.RedPacketID)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find receives"))
+ return
+ }
+
+	// Populate the response
+ resp.RedPacketID = redPacket.RedPacketID
+ resp.TotalAmount = redPacket.TotalAmount
+ resp.TotalCount = redPacket.TotalCount
+ resp.RemainAmount = redPacket.RemainAmount
+ resp.RemainCount = redPacket.RemainCount
+ resp.Status = redPacket.Status
+ resp.Receives = make([]*apistruct.RedPacketReceiveDetail, 0, len(receives))
+ for _, rec := range receives {
+ resp.Receives = append(resp.Receives, &apistruct.RedPacketReceiveDetail{
+ ReceiveID: rec.ReceiveID,
+ ReceiveUserID: rec.ReceiveUserID,
+ Amount: rec.Amount,
+ ReceiveTime: rec.ReceiveTime.UnixMilli(),
+			IsLucky:       false, // the luckiest-draw feature was removed; always false
+ })
+ }
+
+ log.ZInfo(c, "GetRedPacketReceiveInfo: success", "redPacketID", req.RedPacketID, "receiveCount", len(resp.Receives))
+ apiresp.GinSuccess(c, resp)
+}
+
+// PauseRedPacket pauses a red packet (admin endpoint) by clearing its Redis queue
+func (r *RedPacketApi) PauseRedPacket(c *gin.Context) {
+ var (
+ req apistruct.PauseRedPacketReq
+ resp apistruct.PauseRedPacketResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Look up the red packet to verify it exists
+ redPacket, err := r.redPacketDB.Take(c, req.RedPacketID)
+ if err != nil {
+ log.ZError(c, "PauseRedPacket: failed to get red packet", err, "redPacketID", req.RedPacketID)
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("red packet not found"))
+ return
+ }
+
+	// Only random (lucky-draw) red packets have a Redis queue to clear
+ if redPacket.RedPacketType == model.RedPacketTypeRandom {
+ if r.redisClient == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("redis client is not available"))
+ return
+ }
+
+		// Resolve the Redis queue key
+ queueKey := r.getRedPacketQueueKey(req.RedPacketID)
+ ctx := context.Background()
+
+		// Clear the Redis queue (delete the whole key)
+ err = r.redisClient.Del(ctx, queueKey).Err()
+ if err != nil {
+ log.ZError(c, "PauseRedPacket: failed to delete redis queue", err, "redPacketID", req.RedPacketID, "queueKey", queueKey)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to pause red packet"))
+ return
+ }
+
+ log.ZInfo(c, "PauseRedPacket: cleared redis queue", "redPacketID", req.RedPacketID, "queueKey", queueKey)
+ } else {
+		// Normal red packets have no Redis queue; succeed directly
+ log.ZInfo(c, "PauseRedPacket: normal red packet, no redis queue to clear", "redPacketID", req.RedPacketID)
+ }
+
+	// Return the response
+ resp.RedPacketID = req.RedPacketID
+
+ log.ZInfo(c, "PauseRedPacket: success", "redPacketID", req.RedPacketID)
+ apiresp.GinSuccess(c, resp)
+}
+
+// GetRedPacketDetail returns red packet details (client endpoint)
+func (r *RedPacketApi) GetRedPacketDetail(c *gin.Context) {
+ var (
+ req apistruct.GetRedPacketDetailReq
+ resp apistruct.GetRedPacketDetailResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Get the current user ID
+ opUserID := mcontext.GetOpUserID(c)
+ if opUserID == "" {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("userID is required"))
+ return
+ }
+
+	// Look up the red packet
+ redPacket, err := r.redPacketDB.Take(c, req.RedPacketID)
+ if err != nil {
+ log.ZError(c, "GetRedPacketDetail: failed to get red packet", err, "redPacketID", req.RedPacketID, "userID", opUserID)
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("red packet not found"))
+ return
+ }
+
+	// Verify the user is a member of the group
+ groupMember, err := r.groupClient.GetGroupMemberInfo(c, redPacket.GroupID, opUserID)
+ if err != nil || groupMember == nil {
+ log.ZWarn(c, "GetRedPacketDetail: user not in group", err, "redPacketID", req.RedPacketID, "groupID", redPacket.GroupID, "userID", opUserID)
+ apiresp.GinError(c, errs.ErrNoPermission.WrapMsg("user not in group"))
+ return
+ }
+
+	// Determine whether the packet has expired (older than one week)
+ oneWeekAgo := time.Now().Add(-7 * 24 * time.Hour)
+ isExpired := redPacket.CreateTime.Before(oneWeekAgo)
+
+	// Look up the current user's claim record
+ myReceive, err := r.redPacketReceiveDB.FindByUserAndRedPacketID(c, opUserID, req.RedPacketID)
+ var myReceiveDetail *apistruct.RedPacketMyReceiveDetail
+ if err == nil && myReceive != nil {
+		// The user's own record omits user ID, nickname, and avatar
+ myReceiveDetail = &apistruct.RedPacketMyReceiveDetail{
+ ReceiveID: myReceive.ReceiveID,
+ Amount: myReceive.Amount,
+ ReceiveTime: myReceive.ReceiveTime.UnixMilli(),
+			IsLucky:     false, // the luckiest-draw feature was removed; always false
+ }
+ }
+
+	// Check whether the user is the group owner or an admin
+ isOwnerOrAdmin := groupMember.RoleLevel == constant.GroupOwner || groupMember.RoleLevel == constant.GroupAdmin
+
+	// Build the response
+ resp.RedPacketID = redPacket.RedPacketID
+ resp.GroupID = redPacket.GroupID
+ resp.RedPacketType = redPacket.RedPacketType
+ resp.TotalAmount = redPacket.TotalAmount
+ resp.TotalCount = redPacket.TotalCount
+ resp.RemainAmount = redPacket.RemainAmount
+ resp.RemainCount = redPacket.RemainCount
+ resp.Blessing = redPacket.Blessing
+ resp.Status = redPacket.Status
+ resp.IsExpired = isExpired
+ resp.MyReceive = myReceiveDetail
+ resp.Receives = []*apistruct.RedPacketUserReceiveDetail{}
+
+	// Owners and admins receive the full claim list
+ if isOwnerOrAdmin {
+ receives, err := r.redPacketReceiveDB.FindByRedPacketID(c, req.RedPacketID)
+ if err != nil {
+ log.ZWarn(c, "GetRedPacketDetail: failed to find receives", err, "redPacketID", req.RedPacketID)
+ } else {
+			// Collect all claimer IDs
+ userIDs := make([]string, 0, len(receives))
+ for _, rec := range receives {
+ userIDs = append(userIDs, rec.ReceiveUserID)
+ }
+
+			// Batch-fetch user info
+ userInfos, err := r.userClient.GetUsersInfo(c, userIDs)
+ if err != nil {
+ log.ZWarn(c, "GetRedPacketDetail: failed to get users info", err, "userIDs", userIDs)
+ }
+
+			// Build the user info map
+ userInfoMap := make(map[string]*sdkws.UserInfo)
+ if userInfos != nil {
+ for _, userInfo := range userInfos {
+ if userInfo != nil {
+ userInfoMap[userInfo.UserID] = userInfo
+ }
+ }
+ }
+
+			// Build the claim record list
+ resp.Receives = make([]*apistruct.RedPacketUserReceiveDetail, 0, len(receives))
+ for _, rec := range receives {
+ userInfo := userInfoMap[rec.ReceiveUserID]
+ nickname := rec.ReceiveUserID
+ faceURL := ""
+ if userInfo != nil {
+ nickname = userInfo.Nickname
+ faceURL = userInfo.FaceURL
+ }
+ resp.Receives = append(resp.Receives, &apistruct.RedPacketUserReceiveDetail{
+ ReceiveID: rec.ReceiveID,
+ ReceiveUserID: rec.ReceiveUserID,
+ Nickname: nickname,
+ FaceURL: faceURL,
+ Amount: rec.Amount,
+ ReceiveTime: rec.ReceiveTime.UnixMilli(),
+					IsLucky:       false, // the luckiest-draw feature was removed; always false
+ })
+ }
+ }
+ }
+
+ log.ZInfo(c, "GetRedPacketDetail: success", "redPacketID", req.RedPacketID, "userID", opUserID, "isOwnerOrAdmin", isOwnerOrAdmin, "isExpired", isExpired)
+ apiresp.GinSuccess(c, resp)
+}
diff --git a/internal/api/router.go b/internal/api/router.go
new file mode 100644
index 0000000..1905636
--- /dev/null
+++ b/internal/api/router.go
@@ -0,0 +1,493 @@
+package api
+
+import (
+ "context"
+ "net/http"
+ "strings"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/api/jssdk"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ pbAuth "git.imall.cloud/openim/protocol/auth"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/third"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/gin-contrib/gzip"
+ "github.com/gin-gonic/gin"
+ "github.com/gin-gonic/gin/binding"
+ "github.com/go-playground/validator/v10"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/discovery/etcd"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mw"
+ "github.com/openimsdk/tools/mw/api"
+ clientv3 "go.etcd.io/etcd/client/v3"
+)
+
+const (
+ NoCompression = -1
+ DefaultCompression = 0
+ BestCompression = 1
+ BestSpeed = 2
+)
+
+func prommetricsGin() gin.HandlerFunc {
+ return func(c *gin.Context) {
+ c.Next()
+ path := c.FullPath()
+ if c.Writer.Status() == http.StatusNotFound {
+ prommetrics.HttpCall("<404>", c.Request.Method, c.Writer.Status())
+ } else {
+ prommetrics.HttpCall(path, c.Request.Method, c.Writer.Status())
+ }
+ if resp := apiresp.GetGinApiResponse(c); resp != nil {
+ prommetrics.APICall(path, c.Request.Method, resp.ErrCode)
+ }
+ }
+}
+
+func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, cfg *Config) (*gin.Engine, error) {
+ authConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Auth)
+ if err != nil {
+ return nil, err
+ }
+ userConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.User)
+ if err != nil {
+ return nil, err
+ }
+ groupConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Group)
+ if err != nil {
+ return nil, err
+ }
+ friendConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Friend)
+ if err != nil {
+ return nil, err
+ }
+ conversationConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Conversation)
+ if err != nil {
+ return nil, err
+ }
+ thirdConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Third)
+ if err != nil {
+ return nil, err
+ }
+ msgConn, err := client.GetConn(ctx, cfg.Discovery.RpcService.Msg)
+ if err != nil {
+ return nil, err
+ }
+
+	// Initialize database connections (used by the red packet feature)
+ dbb := dbbuild.NewBuilder(&cfg.AllConfig.Mongo, &cfg.AllConfig.Redis)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return nil, err
+ }
+ redisClient, err := dbb.Redis(ctx)
+ if err != nil {
+ return nil, err
+ }
+ redPacketDB, err := mgo.NewRedPacketMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ redPacketReceiveDB, err := mgo.NewRedPacketReceiveMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ walletDB, err := mgo.NewWalletMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ walletBalanceRecordDB, err := mgo.NewWalletBalanceRecordMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ userDB, err := mgo.NewUserMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ meetingDB, err := mgo.NewMeetingMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ meetingCheckInDB, err := mgo.NewMeetingCheckInMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ msgDocDatabase, err := mgo.NewMsgMongo(mgocli.GetDB())
+ if err != nil {
+ return nil, err
+ }
+ startOnlineCountRefresher(ctx, cfg, redisClient)
+
+ gin.SetMode(gin.ReleaseMode)
+ r := gin.New()
+ if v, ok := binding.Validator.Engine().(*validator.Validate); ok {
+ _ = v.RegisterValidation("required_if", RequiredIf)
+ }
+ switch cfg.API.Api.CompressionLevel {
+ case NoCompression:
+ case DefaultCompression:
+ r.Use(gzip.Gzip(gzip.DefaultCompression))
+ case BestCompression:
+ r.Use(gzip.Gzip(gzip.BestCompression))
+ case BestSpeed:
+ r.Use(gzip.Gzip(gzip.BestSpeed))
+ }
+
+ // Use rate limiter middleware
+ if cfg.API.RateLimiter.Enable {
+ rl := &RateLimiter{
+ Enable: cfg.API.RateLimiter.Enable,
+ Window: cfg.API.RateLimiter.Window,
+ Bucket: cfg.API.RateLimiter.Bucket,
+ CPUThreshold: cfg.API.RateLimiter.CPUThreshold,
+ }
+ r.Use(RateLimitMiddleware(rl))
+ }
+
+ if config.Standalone() {
+ r.Use(func(c *gin.Context) {
+ c.Set(authverify.CtxAdminUserIDsKey, cfg.Share.IMAdminUser.UserIDs)
+ })
+ }
+ r.Use(api.GinLogger(), prommetricsGin(), gin.RecoveryWithWriter(gin.DefaultErrorWriter, mw.GinPanicErr), mw.CorsHandler(),
+ mw.GinParseOperationID(), GinParseToken(rpcli.NewAuthClient(authConn)), setGinIsAdmin(cfg.Share.IMAdminUser.UserIDs))
+
+ u := NewUserApi(user.NewUserClient(userConn), client, cfg.Discovery.RpcService)
+ {
+ userRouterGroup := r.Group("/user")
+ userRouterGroup.POST("/user_register", u.UserRegister)
+ userRouterGroup.POST("/update_user_info", u.UpdateUserInfo)
+ userRouterGroup.POST("/update_user_info_ex", u.UpdateUserInfoEx)
+ userRouterGroup.POST("/set_global_msg_recv_opt", u.SetGlobalRecvMessageOpt)
+ userRouterGroup.POST("/get_users_info", u.GetUsersPublicInfo)
+ userRouterGroup.POST("/get_all_users_uid", u.GetAllUsersID)
+ userRouterGroup.POST("/account_check", u.AccountCheck)
+ userRouterGroup.POST("/get_users", u.GetUsers)
+ userRouterGroup.POST("/get_users_online_status", u.GetUsersOnlineStatus)
+ userRouterGroup.POST("/get_users_online_token_detail", u.GetUsersOnlineTokenDetail)
+ userRouterGroup.POST("/subscribe_users_status", u.SubscriberStatus)
+ userRouterGroup.POST("/get_users_status", u.GetUserStatus)
+ userRouterGroup.POST("/get_subscribe_users_status", u.GetSubscribeUsersStatus)
+
+ userRouterGroup.POST("/process_user_command_add", u.ProcessUserCommandAdd)
+ userRouterGroup.POST("/process_user_command_delete", u.ProcessUserCommandDelete)
+ userRouterGroup.POST("/process_user_command_update", u.ProcessUserCommandUpdate)
+ userRouterGroup.POST("/process_user_command_get", u.ProcessUserCommandGet)
+ userRouterGroup.POST("/process_user_command_get_all", u.ProcessUserCommandGetAll)
+
+ userRouterGroup.POST("/add_notification_account", u.AddNotificationAccount)
+ userRouterGroup.POST("/update_notification_account", u.UpdateNotificationAccountInfo)
+ userRouterGroup.POST("/search_notification_account", u.SearchNotificationAccount)
+
+ userRouterGroup.POST("/get_user_client_config", u.GetUserClientConfig)
+ userRouterGroup.POST("/set_user_client_config", u.SetUserClientConfig)
+ userRouterGroup.POST("/del_user_client_config", u.DelUserClientConfig)
+ userRouterGroup.POST("/page_user_client_config", u.PageUserClientConfig)
+ }
+ // friend routing group
+ {
+ f := NewFriendApi(relation.NewFriendClient(friendConn))
+ friendRouterGroup := r.Group("/friend")
+ friendRouterGroup.POST("/delete_friend", f.DeleteFriend)
+ friendRouterGroup.POST("/get_friend_apply_list", f.GetFriendApplyList)
+ friendRouterGroup.POST("/get_designated_friend_apply", f.GetDesignatedFriendsApply)
+ friendRouterGroup.POST("/get_self_friend_apply_list", f.GetSelfApplyList)
+ friendRouterGroup.POST("/get_friend_list", f.GetFriendList)
+ friendRouterGroup.POST("/get_designated_friends", f.GetDesignatedFriends)
+ friendRouterGroup.POST("/add_friend", f.ApplyToAddFriend)
+ friendRouterGroup.POST("/add_friend_response", f.RespondFriendApply)
+ friendRouterGroup.POST("/set_friend_remark", f.SetFriendRemark)
+ friendRouterGroup.POST("/add_black", f.AddBlack)
+ friendRouterGroup.POST("/get_black_list", f.GetPaginationBlacks)
+ friendRouterGroup.POST("/get_specified_blacks", f.GetSpecifiedBlacks)
+ friendRouterGroup.POST("/remove_black", f.RemoveBlack)
+ friendRouterGroup.POST("/get_incremental_blacks", f.GetIncrementalBlacks)
+ friendRouterGroup.POST("/import_friend", f.ImportFriends)
+ friendRouterGroup.POST("/is_friend", f.IsFriend)
+ friendRouterGroup.POST("/get_friend_id", f.GetFriendIDs)
+ friendRouterGroup.POST("/get_specified_friends_info", f.GetSpecifiedFriendsInfo)
+ friendRouterGroup.POST("/update_friends", f.UpdateFriends)
+ friendRouterGroup.POST("/get_incremental_friends", f.GetIncrementalFriends)
+ friendRouterGroup.POST("/get_full_friend_user_ids", f.GetFullFriendUserIDs)
+ friendRouterGroup.POST("/get_self_unhandled_apply_count", f.GetSelfUnhandledApplyCount)
+ }
+
+ g := NewGroupApi(group.NewGroupClient(groupConn))
+ {
+ groupRouterGroup := r.Group("/group")
+ groupRouterGroup.POST("/create_group", g.CreateGroup)
+ groupRouterGroup.POST("/set_group_info", g.SetGroupInfo)
+ groupRouterGroup.POST("/set_group_info_ex", g.SetGroupInfoEx)
+ groupRouterGroup.POST("/join_group", g.JoinGroup)
+ groupRouterGroup.POST("/quit_group", g.QuitGroup)
+ groupRouterGroup.POST("/group_application_response", g.ApplicationGroupResponse)
+ groupRouterGroup.POST("/transfer_group", g.TransferGroupOwner)
+ groupRouterGroup.POST("/get_recv_group_applicationList", g.GetRecvGroupApplicationList)
+ groupRouterGroup.POST("/get_user_req_group_applicationList", g.GetUserReqGroupApplicationList)
+ groupRouterGroup.POST("/get_group_users_req_application_list", g.GetGroupUsersReqApplicationList)
+ groupRouterGroup.POST("/get_specified_user_group_request_info", g.GetSpecifiedUserGroupRequestInfo)
+ groupRouterGroup.POST("/get_groups_info", g.GetGroupsInfo)
+ groupRouterGroup.POST("/kick_group", g.KickGroupMember)
+ groupRouterGroup.POST("/get_group_members_info", g.GetGroupMembersInfo)
+ groupRouterGroup.POST("/get_group_member_list", g.GetGroupMemberList)
+ groupRouterGroup.POST("/invite_user_to_group", g.InviteUserToGroup)
+ groupRouterGroup.POST("/get_joined_group_list", g.GetJoinedGroupList)
+		groupRouterGroup.POST("/dismiss_group", g.DismissGroup)
+ groupRouterGroup.POST("/mute_group_member", g.MuteGroupMember)
+ groupRouterGroup.POST("/cancel_mute_group_member", g.CancelMuteGroupMember)
+ groupRouterGroup.POST("/mute_group", g.MuteGroup)
+ groupRouterGroup.POST("/cancel_mute_group", g.CancelMuteGroup)
+ groupRouterGroup.POST("/set_group_member_info", g.SetGroupMemberInfo)
+ groupRouterGroup.POST("/get_group_abstract_info", g.GetGroupAbstractInfo)
+ groupRouterGroup.POST("/get_groups", g.GetGroups)
+ groupRouterGroup.POST("/get_group_member_user_id", g.GetGroupMemberUserIDs)
+ groupRouterGroup.POST("/get_incremental_join_groups", g.GetIncrementalJoinGroup)
+ groupRouterGroup.POST("/get_incremental_group_members", g.GetIncrementalGroupMember)
+ groupRouterGroup.POST("/get_incremental_group_members_batch", g.GetIncrementalGroupMemberBatch)
+ groupRouterGroup.POST("/get_full_group_member_user_ids", g.GetFullGroupMemberUserIDs)
+ groupRouterGroup.POST("/get_full_join_group_ids", g.GetFullJoinGroupIDs)
+ groupRouterGroup.POST("/get_group_application_unhandled_count", g.GetGroupApplicationUnhandledCount)
+ }
+ // certificate
+ {
+ a := NewAuthApi(pbAuth.NewAuthClient(authConn))
+ authRouterGroup := r.Group("/auth")
+ authRouterGroup.POST("/get_admin_token", a.GetAdminToken)
+ authRouterGroup.POST("/get_user_token", a.GetUserToken)
+ authRouterGroup.POST("/parse_token", a.ParseToken)
+ authRouterGroup.POST("/force_logout", a.ForceLogout)
+
+ }
+ // Third service
+ {
+ t := NewThirdApi(third.NewThirdClient(thirdConn), cfg.API.Prometheus.GrafanaURL)
+ thirdGroup := r.Group("/third")
+ thirdGroup.GET("/prometheus", t.GetPrometheus)
+ thirdGroup.POST("/fcm_update_token", t.FcmUpdateToken)
+ thirdGroup.POST("/set_app_badge", t.SetAppBadge)
+
+ logs := thirdGroup.Group("/logs")
+ logs.POST("/upload", t.UploadLogs)
+ logs.POST("/delete", t.DeleteLogs)
+ logs.POST("/search", t.SearchLogs)
+
+ objectGroup := r.Group("/object")
+
+ objectGroup.POST("/part_limit", t.PartLimit)
+ objectGroup.POST("/part_size", t.PartSize)
+ objectGroup.POST("/initiate_multipart_upload", t.InitiateMultipartUpload)
+ objectGroup.POST("/auth_sign", t.AuthSign)
+ objectGroup.POST("/complete_multipart_upload", t.CompleteMultipartUpload)
+ objectGroup.POST("/access_url", t.AccessURL)
+ objectGroup.POST("/initiate_form_data", t.InitiateFormData)
+ objectGroup.POST("/complete_form_data", t.CompleteFormData)
+ objectGroup.GET("/*name", t.ObjectRedirect)
+ }
+ // Message
+ m := NewMessageApi(msg.NewMsgClient(msgConn), rpcli.NewUserClient(userConn), cfg.Share.IMAdminUser.UserIDs)
+ {
+ msgGroup := r.Group("/msg")
+ msgGroup.POST("/newest_seq", m.GetSeq)
+ msgGroup.POST("/search_msg", m.SearchMsg)
+ msgGroup.POST("/send_msg", m.SendMessage)
+ msgGroup.POST("/send_business_notification", m.SendBusinessNotification)
+ msgGroup.POST("/pull_msg_by_seq", m.PullMsgBySeqs)
+ msgGroup.POST("/revoke_msg", m.RevokeMsg)
+ msgGroup.POST("/mark_msgs_as_read", m.MarkMsgsAsRead)
+ msgGroup.POST("/mark_conversation_as_read", m.MarkConversationAsRead)
+ msgGroup.POST("/get_conversations_has_read_and_max_seq", m.GetConversationsHasReadAndMaxSeq)
+ msgGroup.POST("/set_conversation_has_read_seq", m.SetConversationHasReadSeq)
+
+ msgGroup.POST("/clear_conversation_msg", m.ClearConversationsMsg)
+ msgGroup.POST("/user_clear_all_msg", m.UserClearAllMsg)
+ msgGroup.POST("/delete_msgs", m.DeleteMsgs)
+ msgGroup.POST("/delete_msg_phsical_by_seq", m.DeleteMsgPhysicalBySeq)
+ msgGroup.POST("/delete_msg_physical", m.DeleteMsgPhysical)
+
+ msgGroup.POST("/batch_send_msg", m.BatchSendMsg)
+ msgGroup.POST("/send_simple_msg", m.SendSimpleMessage)
+ msgGroup.POST("/check_msg_is_send_success", m.CheckMsgIsSendSuccess)
+ msgGroup.POST("/get_server_time", m.GetServerTime)
+ }
+ // RedPacket
+ {
+ rp := NewRedPacketApi(rpcli.NewGroupClient(groupConn), rpcli.NewUserClient(userConn), msg.NewMsgClient(msgConn), redPacketDB, redPacketReceiveDB, walletDB, walletBalanceRecordDB, redisClient)
+ redPacketGroup := r.Group("/redpacket")
+ redPacketGroup.POST("/send_redpacket", rp.SendRedPacket)
+ redPacketGroup.POST("/receive", rp.ReceiveRedPacket)
+		redPacketGroup.POST("/get_detail", rp.GetRedPacketDetail) // client endpoint: red packet details
+		// Admin endpoints
+ redPacketGroup.POST("/get_redpackets_by_group", rp.GetRedPacketsByGroup)
+ redPacketGroup.POST("/get_receive_info", rp.GetRedPacketReceiveInfo)
+ redPacketGroup.POST("/pause", rp.PauseRedPacket)
+ }
+ // Wallet
+ {
+ wa := NewWalletApi(walletDB, walletBalanceRecordDB, userDB, rpcli.NewUserClient(userConn))
+ walletGroup := r.Group("/wallet")
+		// Admin endpoints
+ walletGroup.POST("/get_wallets", wa.GetWallets)
+ walletGroup.POST("/batch_update_balance", wa.BatchUpdateWalletBalance)
+ }
+ // Meeting
+ {
+		// Initialize the system config DB for the meeting API (falls back to nil on failure)
+ meetingSystemConfigDB, initErr := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if initErr != nil {
+ log.ZWarn(ctx, "failed to init system config db for meeting api, webhook attentionIds update will be disabled", initErr)
+ meetingSystemConfigDB = nil
+ }
+ ma := NewMeetingApi(meetingDB, meetingCheckInDB, rpcli.NewGroupClient(groupConn), rpcli.NewUserClient(userConn), rpcli.NewConversationClient(conversationConn), meetingSystemConfigDB)
+ meetingGroup := r.Group("/meeting")
+		// Admin endpoints
+ meetingGroup.POST("/create_meeting", ma.CreateMeeting)
+ meetingGroup.POST("/update_meeting", ma.UpdateMeeting)
+ meetingGroup.POST("/get_meetings", ma.GetMeetings)
+ meetingGroup.POST("/delete_meeting", ma.DeleteMeeting)
+		// Client endpoints
+ meetingGroup.POST("/get_meeting", ma.GetMeetingPublic)
+ meetingGroup.POST("/get_meetings_public", ma.GetMeetingsPublic)
+		// Check-in endpoints
+ meetingGroup.POST("/check_in", ma.CheckInMeeting)
+ meetingGroup.POST("/check_user_check_in", ma.CheckUserCheckIn)
+ meetingGroup.POST("/get_check_ins", ma.GetMeetingCheckIns)
+ meetingGroup.POST("/get_check_in_stats", ma.GetMeetingCheckInStats)
+ }
+ // Conversation
+ {
+ c := NewConversationApi(conversation.NewConversationClient(conversationConn))
+ conversationGroup := r.Group("/conversation")
+ conversationGroup.POST("/get_sorted_conversation_list", c.GetSortedConversationList)
+ conversationGroup.POST("/get_all_conversations", c.GetAllConversations)
+ conversationGroup.POST("/get_conversation", c.GetConversation)
+ conversationGroup.POST("/get_conversations", c.GetConversations)
+ conversationGroup.POST("/set_conversations", c.SetConversations)
+ //conversationGroup.POST("/get_conversation_offline_push_user_ids", c.GetConversationOfflinePushUserIDs)
+ conversationGroup.POST("/get_full_conversation_ids", c.GetFullOwnerConversationIDs)
+ conversationGroup.POST("/get_incremental_conversations", c.GetIncrementalConversation)
+ conversationGroup.POST("/get_owner_conversation", c.GetOwnerConversation)
+ conversationGroup.POST("/get_not_notify_conversation_ids", c.GetNotNotifyConversationIDs)
+ conversationGroup.POST("/get_pinned_conversation_ids", c.GetPinnedConversationIDs)
+ conversationGroup.POST("/delete_conversations", c.DeleteConversations)
+ }
+
+ stats := NewStatisticsApi(redisClient, msgDocDatabase, rpcli.NewUserClient(userConn), rpcli.NewGroupClient(groupConn))
+ {
+ statisticsGroup := r.Group("/statistics")
+ statisticsGroup.POST("/user/register", u.UserRegisterCount)
+ statisticsGroup.POST("/user/active", m.GetActiveUser)
+ statisticsGroup.POST("/group/create", g.GroupCreateCount)
+ statisticsGroup.POST("/group/active", m.GetActiveGroup)
+ statisticsGroup.POST("/online_user_count", stats.OnlineUserCount)
+ statisticsGroup.POST("/online_user_count_trend", stats.OnlineUserCountTrend)
+ statisticsGroup.POST("/user_send_msg_count", stats.UserSendMsgCount)
+ statisticsGroup.POST("/user_send_msg_count_trend", stats.UserSendMsgCountTrend)
+ statisticsGroup.POST("/user_send_msg_query", stats.UserSendMsgQuery)
+ }
+
+ {
+ j := jssdk.NewJSSdkApi(rpcli.NewUserClient(userConn), rpcli.NewRelationClient(friendConn),
+ rpcli.NewGroupClient(groupConn), rpcli.NewConversationClient(conversationConn), rpcli.NewMsgClient(msgConn))
+ jssdk := r.Group("/jssdk")
+ jssdk.POST("/get_conversations", j.GetConversations)
+ jssdk.POST("/get_active_conversations", j.GetActiveConversations)
+ }
+ {
+ pd := NewPrometheusDiscoveryApi(cfg, client)
+ proDiscoveryGroup := r.Group("/prometheus_discovery")
+ proDiscoveryGroup.GET("/api", pd.Api)
+ proDiscoveryGroup.GET("/user", pd.User)
+ proDiscoveryGroup.GET("/group", pd.Group)
+ proDiscoveryGroup.GET("/msg", pd.Msg)
+ proDiscoveryGroup.GET("/friend", pd.Friend)
+ proDiscoveryGroup.GET("/conversation", pd.Conversation)
+ proDiscoveryGroup.GET("/third", pd.Third)
+ proDiscoveryGroup.GET("/auth", pd.Auth)
+ proDiscoveryGroup.GET("/push", pd.Push)
+ proDiscoveryGroup.GET("/msg_gateway", pd.MessageGateway)
+ proDiscoveryGroup.GET("/msg_transfer", pd.MessageTransfer)
+ }
+
+ var etcdClient *clientv3.Client
+ if cfg.Discovery.Enable == config.ETCD {
+ etcdClient = client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
+ }
+
+	// Initialize the SystemConfig database (used for webhook config management)
+ var systemConfigDB database.SystemConfig
+ systemConfigDB, err = mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err != nil {
+ log.ZWarn(ctx, "failed to init system config db, webhook config management will be disabled", err)
+ }
+
+ cm := NewConfigManager(cfg.Share.IMAdminUser.UserIDs, &cfg.AllConfig, etcdClient, string(cfg.ConfigPath), systemConfigDB)
+ {
+ configGroup := r.Group("/config", cm.CheckAdmin)
+ configGroup.POST("/get_config_list", cm.GetConfigList)
+ configGroup.POST("/get_config", cm.GetConfig)
+ configGroup.POST("/set_config", cm.SetConfig)
+ configGroup.POST("/reset_config", cm.ResetConfig)
+ configGroup.POST("/set_enable_config_manager", cm.SetEnableConfigManager)
+ configGroup.POST("/get_enable_config_manager", cm.GetEnableConfigManager)
+ }
+ {
+ r.POST("/restart", cm.CheckAdmin, cm.Restart)
+ }
+ return r, nil
+}
+
+func GinParseToken(authClient *rpcli.AuthClient) gin.HandlerFunc {
+ return func(c *gin.Context) {
+ switch c.Request.Method {
+ case http.MethodPost:
+ for _, wApi := range Whitelist {
+ if strings.HasPrefix(c.Request.URL.Path, wApi) {
+ c.Next()
+ return
+ }
+ }
+
+ token := c.Request.Header.Get(constant.Token)
+ if token == "" {
+ log.ZWarn(c, "header get token error", servererrs.ErrArgs.WrapMsg("header must have token"))
+ apiresp.GinError(c, servererrs.ErrArgs.WrapMsg("header must have token"))
+ c.Abort()
+ return
+ }
+ resp, err := authClient.ParseToken(c, token)
+ if err != nil {
+ apiresp.GinError(c, err)
+ c.Abort()
+ return
+ }
+ c.Set(constant.OpUserPlatform, constant.PlatformIDToName(int(resp.PlatformID)))
+ c.Set(constant.OpUserID, resp.UserID)
+ c.Next()
+ }
+ }
+}
+
+func setGinIsAdmin(imAdminUserID []string) gin.HandlerFunc {
+ return func(c *gin.Context) {
+ c.Set(authverify.CtxAdminUserIDsKey, imAdminUserID)
+ }
+}
+
+// Whitelist lists API paths that skip token parsing.
+var Whitelist = []string{
+ "/auth/get_admin_token",
+ "/auth/parse_token",
+}
diff --git a/internal/api/statistics.go b/internal/api/statistics.go
new file mode 100644
index 0000000..2e83e34
--- /dev/null
+++ b/internal/api/statistics.go
@@ -0,0 +1,555 @@
+package api
+
+import (
+ "context"
+ "errors"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ rediscache "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+type StatisticsApi struct {
+ rdb redis.UniversalClient
+ msgDatabase database.Msg
+ userClient *rpcli.UserClient
+ groupClient *rpcli.GroupClient
+}
+
+func NewStatisticsApi(rdb redis.UniversalClient, msgDatabase database.Msg, userClient *rpcli.UserClient, groupClient *rpcli.GroupClient) *StatisticsApi {
+ return &StatisticsApi{
+ rdb: rdb,
+ msgDatabase: msgDatabase,
+ userClient: userClient,
+ groupClient: groupClient,
+ }
+}
+
+const (
+ trendIntervalMinutes15 = 15
+ trendIntervalMinutes30 = 30
+ trendIntervalMinutes60 = 60
+ trendChatTypeSingle = 1
+ trendChatTypeGroup = 2
+ defaultTrendDuration = 24 * time.Hour
+)
+
+// refreshOnlineUserCountAndHistory refreshes the online user count and appends a history sample.
+func refreshOnlineUserCountAndHistory(ctx context.Context, rdb redis.UniversalClient) {
+ count, err := rediscache.RefreshOnlineUserCount(ctx, rdb)
+ if err != nil {
+ log.ZWarn(ctx, "refresh online user count failed", err)
+ return
+ }
+ if err := rediscache.AppendOnlineUserCountHistory(ctx, rdb, time.Now().UnixMilli(), count); err != nil {
+ log.ZWarn(ctx, "append online user count history failed", err)
+ }
+}
+
+// startOnlineCountRefresher periodically refreshes the online user count cache.
+func startOnlineCountRefresher(ctx context.Context, cfg *Config, rdb redis.UniversalClient) {
+ if cfg == nil || rdb == nil {
+ return
+ }
+ refreshCfg := cfg.API.OnlineCountRefresh
+ if !refreshCfg.Enable || refreshCfg.Interval <= 0 {
+ return
+ }
+ log.ZInfo(ctx, "online user count refresh enabled", "interval", refreshCfg.Interval)
+ go func() {
+ refreshOnlineUserCountAndHistory(ctx, rdb)
+ ticker := time.NewTicker(refreshCfg.Interval)
+ defer ticker.Stop()
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case <-ticker.C:
+ refreshOnlineUserCountAndHistory(ctx, rdb)
+ }
+ }
+ }()
+}
+
+// OnlineUserCount returns the current online user count.
+func (s *StatisticsApi) OnlineUserCount(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if s.rdb == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("redis client is nil"))
+ return
+ }
+ count, err := rediscache.GetOnlineUserCount(c, s.rdb)
+ if err != nil {
+ if errors.Is(err, redis.Nil) {
+ count, err = rediscache.RefreshOnlineUserCount(c, s.rdb)
+ if err == nil {
+ if appendErr := rediscache.AppendOnlineUserCountHistory(c, s.rdb, time.Now().UnixMilli(), count); appendErr != nil {
+ log.ZWarn(c, "append online user count history failed", appendErr)
+ }
+ }
+ }
+ }
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ apiresp.GinSuccess(c, &apistruct.OnlineUserCountResp{OnlineCount: count})
+}
+
+// OnlineUserCountTrend returns the online user count trend over time.
+func (s *StatisticsApi) OnlineUserCountTrend(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ req, err := a2r.ParseRequest[apistruct.OnlineUserCountTrendReq](c)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if s.rdb == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("redis client is nil"))
+ return
+ }
+ intervalMillis, err := parseTrendIntervalMillis(req.IntervalMinutes)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ startTime, endTime, err := normalizeTrendTimeRange(req.StartTime, req.EndTime)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ bucketStart, bucketEnd := alignTrendRange(startTime, endTime, intervalMillis)
+	// Fetch history over the aligned range so it matches the range used to build the trend points
+ samples, err := rediscache.GetOnlineUserCountHistory(c, s.rdb, bucketStart, bucketEnd)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+	// Append the current online count as the latest sample so the last bucket shows its in-bucket maximum
+ now := time.Now().UnixMilli()
+ currentBucket := now - (now % intervalMillis)
+ if now < 0 && now%intervalMillis != 0 {
+ currentBucket = now - ((now % intervalMillis) + intervalMillis)
+ }
+ if currentBucket >= bucketStart && currentBucket <= bucketEnd {
+ if currentCount, err := rediscache.GetOnlineUserCount(c, s.rdb); err == nil {
+ samples = append(samples, rediscache.OnlineUserCountSample{
+ Timestamp: now,
+ Count: currentCount,
+ })
+ }
+ }
+ points := buildOnlineUserCountTrendPoints(samples, bucketStart, bucketEnd, intervalMillis)
+ apiresp.GinSuccess(c, &apistruct.OnlineUserCountTrendResp{
+ IntervalMinutes: req.IntervalMinutes,
+ Points: points,
+ })
+}
+
+// UserSendMsgCount returns total counts of messages sent by users.
+func (s *StatisticsApi) UserSendMsgCount(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ _, err := a2r.ParseRequest[apistruct.UserSendMsgCountReq](c)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if s.msgDatabase == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("msg database is nil"))
+ return
+ }
+ now := time.Now()
+ endTime := now.UnixMilli()
+ start24h := now.Add(-24 * time.Hour).UnixMilli()
+ start7d := now.Add(-7 * 24 * time.Hour).UnixMilli()
+ start30d := now.Add(-30 * 24 * time.Hour).UnixMilli()
+
+ count24h, err := s.msgDatabase.CountUserSendMessages(c, "", start24h, endTime, "")
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ count7d, err := s.msgDatabase.CountUserSendMessages(c, "", start7d, endTime, "")
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ count30d, err := s.msgDatabase.CountUserSendMessages(c, "", start30d, endTime, "")
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ apiresp.GinSuccess(c, &apistruct.UserSendMsgCountResp{
+ Count24h: count24h,
+ Count7d: count7d,
+ Count30d: count30d,
+ })
+}
+
+// UserSendMsgCountTrend returns the trend of messages sent by users.
+func (s *StatisticsApi) UserSendMsgCountTrend(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ req, err := a2r.ParseRequest[apistruct.UserSendMsgCountTrendReq](c)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if s.msgDatabase == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("msg database is nil"))
+ return
+ }
+ intervalMillis, err := parseTrendIntervalMillis(req.IntervalMinutes)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ startTime, endTime, err := normalizeTrendTimeRange(req.StartTime, req.EndTime)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ sessionTypes, err := mapTrendChatType(req.ChatType)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ bucketStart, bucketEnd := alignTrendRange(startTime, endTime, intervalMillis)
+ countMap, err := s.msgDatabase.CountUserSendMessagesTrend(c, req.UserID, sessionTypes, startTime, endTime, intervalMillis)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ points := buildUserSendMsgCountTrendPoints(countMap, bucketStart, bucketEnd, intervalMillis)
+ apiresp.GinSuccess(c, &apistruct.UserSendMsgCountTrendResp{
+ UserID: req.UserID,
+ ChatType: req.ChatType,
+ IntervalMinutes: req.IntervalMinutes,
+ Points: points,
+ })
+}
+
+// UserSendMsgQuery queries messages sent by a user.
+func (s *StatisticsApi) UserSendMsgQuery(c *gin.Context) {
+ if err := authverify.CheckAdmin(c); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ req, err := a2r.ParseRequest[apistruct.UserSendMsgQueryReq](c)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ if req.StartTime > 0 && req.EndTime > 0 && req.EndTime <= req.StartTime {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("invalid time range"))
+ return
+ }
+ if s.msgDatabase == nil {
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("msg database is nil"))
+ return
+ }
+ pageNumber := req.PageNumber
+ if pageNumber <= 0 {
+ pageNumber = 1
+ }
+ showNumber := req.ShowNumber
+ if showNumber <= 0 {
+ showNumber = 50
+ }
+ const maxShowNumber int32 = 200
+ if showNumber > maxShowNumber {
+ showNumber = maxShowNumber
+ }
+ total, msgs, err := s.msgDatabase.SearchUserMessages(c, req.UserID, req.StartTime, req.EndTime, req.Content, pageNumber, showNumber)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ sendIDs := make([]string, 0, len(msgs))
+ recvIDs := make([]string, 0, len(msgs))
+ groupIDs := make([]string, 0, len(msgs))
+ for _, item := range msgs {
+ if item == nil || item.Msg == nil {
+ continue
+ }
+ msg := item.Msg
+ if msg.SendID != "" {
+ sendIDs = append(sendIDs, msg.SendID)
+ }
+ switch msg.SessionType {
+ case constant.ReadGroupChatType, constant.WriteGroupChatType:
+ if msg.GroupID != "" {
+ groupIDs = append(groupIDs, msg.GroupID)
+ }
+ default:
+ if msg.RecvID != "" {
+ recvIDs = append(recvIDs, msg.RecvID)
+ }
+ }
+ }
+ sendIDs = datautil.Distinct(sendIDs)
+ recvIDs = datautil.Distinct(recvIDs)
+ groupIDs = datautil.Distinct(groupIDs)
+
+ sendMap, recvMap, groupMap := map[string]*sdkws.UserInfo{}, map[string]*sdkws.UserInfo{}, map[string]*sdkws.GroupInfo{}
+ if s.userClient != nil {
+ if len(sendIDs) > 0 {
+ if users, err := s.userClient.GetUsersInfo(c, sendIDs); err == nil {
+ sendMap = datautil.SliceToMap(users, (*sdkws.UserInfo).GetUserID)
+ }
+ }
+ if len(recvIDs) > 0 {
+ if users, err := s.userClient.GetUsersInfo(c, recvIDs); err == nil {
+ recvMap = datautil.SliceToMap(users, (*sdkws.UserInfo).GetUserID)
+ }
+ }
+ }
+ if s.groupClient != nil && len(groupIDs) > 0 {
+ if groups, err := s.groupClient.GetGroupsInfo(c, groupIDs); err == nil {
+ groupMap = datautil.SliceToMap(groups, (*sdkws.GroupInfo).GetGroupID)
+ }
+ }
+
+ records := make([]*apistruct.UserSendMsgQueryRecord, 0, len(msgs))
+ for _, item := range msgs {
+ if item == nil || item.Msg == nil {
+ continue
+ }
+ msg := item.Msg
+ msgID := msg.ServerMsgID
+ if msgID == "" {
+ msgID = msg.ClientMsgID
+ }
+ senderName := msg.SenderNickname
+ if senderName == "" {
+ if u := sendMap[msg.SendID]; u != nil {
+ senderName = u.Nickname
+ } else {
+ senderName = msg.SendID
+ }
+ }
+ recvID := msg.RecvID
+ recvName := ""
+ if msg.SessionType == constant.ReadGroupChatType || msg.SessionType == constant.WriteGroupChatType {
+ if msg.GroupID != "" {
+ recvID = msg.GroupID
+ }
+ if g := groupMap[recvID]; g != nil {
+ recvName = g.GroupName
+ } else if recvID != "" {
+ recvName = recvID
+ }
+ } else {
+ if u := recvMap[msg.RecvID]; u != nil {
+ recvName = u.Nickname
+ } else if msg.RecvID != "" {
+ recvName = msg.RecvID
+ }
+ }
+ records = append(records, &apistruct.UserSendMsgQueryRecord{
+ MsgID: msgID,
+ SendID: msg.SendID,
+ SenderName: senderName,
+ RecvID: recvID,
+ RecvName: recvName,
+ ContentType: msg.ContentType,
+ ContentTypeName: contentTypeName(msg.ContentType),
+ SessionType: msg.SessionType,
+ ChatTypeName: chatTypeName(msg.SessionType),
+ Content: msg.Content,
+ SendTime: msg.SendTime,
+ })
+ }
+ apiresp.GinSuccess(c, &apistruct.UserSendMsgQueryResp{
+ Count: total,
+ PageNumber: pageNumber,
+ ShowNumber: showNumber,
+ Records: records,
+ })
+}
+
+// parseTrendIntervalMillis validates the trend interval and converts it to milliseconds.
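+// For example, an interval of 15 minutes becomes 900000 ms; any value other
+// than 15, 30, or 60 yields an ErrArgs error.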
+func parseTrendIntervalMillis(intervalMinutes int32) (int64, error) {
+ switch intervalMinutes {
+ case trendIntervalMinutes15, trendIntervalMinutes30, trendIntervalMinutes60:
+ return int64(intervalMinutes) * int64(time.Minute/time.Millisecond), nil
+ default:
+ return 0, errs.ErrArgs.WrapMsg("invalid intervalMinutes")
+ }
+}
+
+// normalizeTrendTimeRange normalizes the trend time range, defaulting to the last 24 hours.
+func normalizeTrendTimeRange(startTime int64, endTime int64) (int64, int64, error) {
+ now := time.Now().UnixMilli()
+ if endTime <= 0 {
+ endTime = now
+ }
+ if startTime <= 0 {
+ startTime = endTime - int64(defaultTrendDuration/time.Millisecond)
+ }
+ if startTime < 0 {
+ startTime = 0
+ }
+ if endTime <= startTime {
+ return 0, 0, errs.ErrArgs.WrapMsg("invalid time range")
+ }
+ return startTime, endTime, nil
+}
+
+// alignTrendRange aligns the trend range to interval boundaries.
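+// For example, alignTrendRange(10000, 95000, 30000) returns (0, 90000):
+// the start rounds down to 0 and the end rounds down to the start of its bucket.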
+func alignTrendRange(startTime int64, endTime int64, intervalMillis int64) (int64, int64) {
+ if intervalMillis <= 0 {
+ return startTime, endTime
+ }
+	// Align the start time down to an interval boundary
+	bucketStart := startTime - (startTime % intervalMillis)
+	if startTime < 0 && startTime%intervalMillis != 0 {
+		bucketStart = startTime - ((startTime % intervalMillis) + intervalMillis)
+	}
+	// Align the end time down to the start of its interval (include only intervals that have begun)
+ bucketEnd := endTime - (endTime % intervalMillis)
+ if endTime < 0 && endTime%intervalMillis != 0 {
+ bucketEnd = endTime - ((endTime % intervalMillis) + intervalMillis)
+ }
+	// Ensure the range covers at least one interval
+ if bucketEnd < bucketStart {
+ bucketEnd = bucketStart
+ }
+ return bucketStart, bucketEnd
+}
+
+// buildOnlineUserCountTrendPoints builds the online user count trend points.
+func buildOnlineUserCountTrendPoints(samples []rediscache.OnlineUserCountSample, startTime int64, endTime int64, intervalMillis int64) []*apistruct.OnlineUserCountTrendItem {
+ points := make([]*apistruct.OnlineUserCountTrendItem, 0)
+ if intervalMillis <= 0 || endTime <= startTime {
+ return points
+ }
+ maxMap := make(map[int64]int64)
+ for _, sample := range samples {
+		// Align the sample timestamp to an interval boundary
+ bucket := sample.Timestamp - (sample.Timestamp % intervalMillis)
+		// Handle negative timestamps (not expected in practice)
+ if sample.Timestamp < 0 && sample.Timestamp%intervalMillis != 0 {
+ bucket = sample.Timestamp - ((sample.Timestamp % intervalMillis) + intervalMillis)
+ }
+ if sample.Count > maxMap[bucket] {
+ maxMap[bucket] = sample.Count
+ }
+ }
+	// Compute how many points to generate.
+	// endTime is the start of the last aligned bucket, so it is included.
+ estimated := int((endTime-startTime)/intervalMillis) + 1
+ if estimated > 0 {
+ points = make([]*apistruct.OnlineUserCountTrendItem, 0, estimated)
+ }
+	// Emit a point for every bucket from startTime through endTime inclusive;
+	// endTime is already the start of the last aligned bucket.
+ for ts := startTime; ts <= endTime; ts += intervalMillis {
+ maxVal := maxMap[ts]
+ points = append(points, &apistruct.OnlineUserCountTrendItem{
+ Timestamp: ts,
+ OnlineCount: maxVal,
+ })
+ }
+ return points
+}
+
+// buildUserSendMsgCountTrendPoints builds the user message count trend points.
+func buildUserSendMsgCountTrendPoints(countMap map[int64]int64, startTime int64, endTime int64, intervalMillis int64) []*apistruct.UserSendMsgCountTrendItem {
+ points := make([]*apistruct.UserSendMsgCountTrendItem, 0)
+ if intervalMillis <= 0 || endTime <= startTime {
+ return points
+ }
+ estimated := int((endTime - startTime) / intervalMillis)
+ if estimated > 0 {
+ points = make([]*apistruct.UserSendMsgCountTrendItem, 0, estimated)
+ }
+ for ts := startTime; ts < endTime; ts += intervalMillis {
+ points = append(points, &apistruct.UserSendMsgCountTrendItem{
+ Timestamp: ts,
+ Count: countMap[ts],
+ })
+ }
+ return points
+}
+
+// mapTrendChatType maps a trend chat type to a list of sessionTypes.
+func mapTrendChatType(chatType int32) ([]int32, error) {
+ switch chatType {
+ case trendChatTypeSingle:
+ return []int32{constant.SingleChatType}, nil
+ case trendChatTypeGroup:
+ return []int32{constant.ReadGroupChatType, constant.WriteGroupChatType}, nil
+ default:
+ return nil, errs.ErrArgs.WrapMsg("invalid chatType")
+ }
+}
+
+// contentTypeName returns the display name for a content type.
+func contentTypeName(contentType int32) string {
+ switch contentType {
+ case constant.Text:
+ return "文本消息"
+ case constant.Picture:
+ return "图片消息"
+ case constant.Voice:
+ return "语音消息"
+ case constant.Video:
+ return "视频消息"
+ case constant.File:
+ return "文件消息"
+ case constant.AtText:
+ return "艾特消息"
+ case constant.Merger:
+ return "合并消息"
+ case constant.Card:
+ return "名片消息"
+ case constant.Location:
+ return "位置消息"
+ case constant.Custom:
+ return "自定义消息"
+ case constant.Revoke:
+ return "撤回消息"
+ case constant.MarkdownText:
+ return "Markdown消息"
+ default:
+ return "未知消息"
+ }
+}
+
+// chatTypeName returns the display name for a session type.
+func chatTypeName(sessionType int32) string {
+ switch sessionType {
+ case constant.SingleChatType:
+ return "单聊"
+ case constant.ReadGroupChatType, constant.WriteGroupChatType:
+ return "群聊"
+ case constant.NotificationChatType:
+ return "通知"
+ default:
+ return "未知"
+ }
+}
diff --git a/internal/api/third.go b/internal/api/third.go
new file mode 100644
index 0000000..1dc2d23
--- /dev/null
+++ b/internal/api/third.go
@@ -0,0 +1,175 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "context"
+ "math/rand"
+ "net/http"
+ "net/url"
+ "strconv"
+ "strings"
+
+ "google.golang.org/grpc"
+
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+type ThirdApi struct {
+ GrafanaUrl string
+ Client third.ThirdClient
+}
+
+func NewThirdApi(client third.ThirdClient, grafanaUrl string) ThirdApi {
+ return ThirdApi{Client: client, GrafanaUrl: grafanaUrl}
+}
+
+func (o *ThirdApi) FcmUpdateToken(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.FcmUpdateToken, o.Client)
+}
+
+func (o *ThirdApi) SetAppBadge(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.SetAppBadge, o.Client)
+}
+
+// #################### s3 ####################
+
+func setURLPrefixOption[A, B, C any](_ func(client C, ctx context.Context, req *A, options ...grpc.CallOption) (*B, error), fn func(*A) error) *a2r.Option[A, B] {
+ return &a2r.Option[A, B]{
+ BindAfter: fn,
+ }
+}
+
+func setURLPrefix(c *gin.Context, urlPrefix *string) error {
+ host := c.GetHeader("X-Request-Api")
+ if host != "" {
+ if strings.HasSuffix(host, "/") {
+ *urlPrefix = host + "object/"
+ return nil
+ } else {
+ *urlPrefix = host + "/object/"
+ return nil
+ }
+ }
+ u := url.URL{
+ Scheme: "http",
+ Host: c.Request.Host,
+ Path: "/object/",
+ }
+ if c.Request.TLS != nil {
+ u.Scheme = "https"
+ }
+ *urlPrefix = u.String()
+ return nil
+}
+
+func (o *ThirdApi) PartLimit(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.PartLimit, o.Client)
+}
+
+func (o *ThirdApi) PartSize(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.PartSize, o.Client)
+}
+
+func (o *ThirdApi) InitiateMultipartUpload(c *gin.Context) {
+ opt := setURLPrefixOption(third.ThirdClient.InitiateMultipartUpload, func(req *third.InitiateMultipartUploadReq) error {
+ return setURLPrefix(c, &req.UrlPrefix)
+ })
+ a2r.Call(c, third.ThirdClient.InitiateMultipartUpload, o.Client, opt)
+}
+
+func (o *ThirdApi) AuthSign(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.AuthSign, o.Client)
+}
+
+func (o *ThirdApi) CompleteMultipartUpload(c *gin.Context) {
+ opt := setURLPrefixOption(third.ThirdClient.CompleteMultipartUpload, func(req *third.CompleteMultipartUploadReq) error {
+ return setURLPrefix(c, &req.UrlPrefix)
+ })
+ a2r.Call(c, third.ThirdClient.CompleteMultipartUpload, o.Client, opt)
+}
+
+func (o *ThirdApi) AccessURL(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.AccessURL, o.Client)
+}
+
+func (o *ThirdApi) InitiateFormData(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.InitiateFormData, o.Client)
+}
+
+func (o *ThirdApi) CompleteFormData(c *gin.Context) {
+ opt := setURLPrefixOption(third.ThirdClient.CompleteFormData, func(req *third.CompleteFormDataReq) error {
+ return setURLPrefix(c, &req.UrlPrefix)
+ })
+ a2r.Call(c, third.ThirdClient.CompleteFormData, o.Client, opt)
+}
+
+func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
+ name := c.Param("name")
+ if name == "" {
+ c.String(http.StatusBadRequest, "name is empty")
+ return
+ }
+ if name[0] == '/' {
+ name = name[1:]
+ }
+ operationID := c.Query("operationID")
+ if operationID == "" {
+ operationID = strconv.Itoa(rand.Int())
+ }
+ ctx := mcontext.SetOperationID(c, operationID)
+ query := make(map[string]string)
+ for key, values := range c.Request.URL.Query() {
+ if len(values) == 0 {
+ continue
+ }
+ query[key] = values[0]
+ }
+ resp, err := o.Client.AccessURL(ctx, &third.AccessURLReq{Name: name, Query: query})
+ if err != nil {
+ if errs.ErrArgs.Is(err) {
+ c.String(http.StatusBadRequest, err.Error())
+ return
+ }
+ if errs.ErrRecordNotFound.Is(err) {
+ c.String(http.StatusNotFound, err.Error())
+ return
+ }
+ c.String(http.StatusInternalServerError, err.Error())
+ return
+ }
+ c.Redirect(http.StatusFound, resp.Url)
+}
+
+// #################### logs ####################
+func (o *ThirdApi) UploadLogs(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.UploadLogs, o.Client)
+}
+
+func (o *ThirdApi) DeleteLogs(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.DeleteLogs, o.Client)
+}
+
+func (o *ThirdApi) SearchLogs(c *gin.Context) {
+ a2r.Call(c, third.ThirdClient.SearchLogs, o.Client)
+}
+
+func (o *ThirdApi) GetPrometheus(c *gin.Context) {
+ c.Redirect(http.StatusFound, o.GrafanaUrl)
+}
diff --git a/internal/api/user.go b/internal/api/user.go
new file mode 100644
index 0000000..47b628a
--- /dev/null
+++ b/internal/api/user.go
@@ -0,0 +1,260 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msggateway"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/gin-gonic/gin"
+ "github.com/openimsdk/tools/a2r"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+)
+
+type UserApi struct {
+ Client user.UserClient
+ discov discovery.Conn
+ config config.RpcService
+}
+
+func NewUserApi(client user.UserClient, discov discovery.Conn, config config.RpcService) UserApi {
+ return UserApi{Client: client, discov: discov, config: config}
+}
+
+func (u *UserApi) UserRegister(c *gin.Context) {
+ a2r.Call(c, user.UserClient.UserRegister, u.Client)
+}
+
+// UpdateUserInfo is deprecated; use UpdateUserInfoEx instead.
+func (u *UserApi) UpdateUserInfo(c *gin.Context) {
+ a2r.Call(c, user.UserClient.UpdateUserInfo, u.Client)
+}
+
+func (u *UserApi) UpdateUserInfoEx(c *gin.Context) {
+ a2r.Call(c, user.UserClient.UpdateUserInfoEx, u.Client)
+}
+func (u *UserApi) SetGlobalRecvMessageOpt(c *gin.Context) {
+ a2r.Call(c, user.UserClient.SetGlobalRecvMessageOpt, u.Client)
+}
+
+func (u *UserApi) GetUsersPublicInfo(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetDesignateUsers, u.Client)
+}
+
+func (u *UserApi) GetAllUsersID(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetAllUserID, u.Client)
+}
+
+func (u *UserApi) AccountCheck(c *gin.Context) {
+ a2r.Call(c, user.UserClient.AccountCheck, u.Client)
+}
+
+func (u *UserApi) GetUsers(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetPaginationUsers, u.Client)
+}
+
+// GetUsersOnlineStatus Get user online status.
+func (u *UserApi) GetUsersOnlineStatus(c *gin.Context) {
+ var req msggateway.GetUsersOnlineStatusReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+ conns, err := u.discov.GetConns(c, u.config.MessageGateway)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+
+ var wsResult []*msggateway.GetUsersOnlineStatusResp_SuccessResult
+ var respResult []*msggateway.GetUsersOnlineStatusResp_SuccessResult
+ flag := false
+
+	// Query online status from each message gateway
+ for _, v := range conns {
+ msgClient := msggateway.NewMsgGatewayClient(v)
+ reply, err := msgClient.GetUsersOnlineStatus(c, &req)
+ if err != nil {
+ log.ZDebug(c, "GetUsersOnlineStatus rpc error", err)
+
+ parseError := apiresp.ParseError(err)
+ if parseError.ErrCode == errs.NoPermissionError {
+ apiresp.GinError(c, err)
+ return
+ }
+ } else {
+ wsResult = append(wsResult, reply.SuccessResult...)
+ }
+ }
+ // Traversing the userIDs in the api request body
+ for _, v1 := range req.UserIDs {
+ flag = false
+ res := new(msggateway.GetUsersOnlineStatusResp_SuccessResult)
+ // Iterate through the online results fetched from various gateways
+ for _, v2 := range wsResult {
+			// Mark the user online if any gateway reported a matching result
+ if v2.UserID == v1 {
+ flag = true
+ res.UserID = v1
+ res.Status = constant.Online
+ res.DetailPlatformStatus = append(res.DetailPlatformStatus, v2.DetailPlatformStatus...)
+ break
+ }
+ }
+ if !flag {
+ res.UserID = v1
+ res.Status = constant.Offline
+ }
+ respResult = append(respResult, res)
+ }
+ apiresp.GinSuccess(c, respResult)
+}
+
+func (u *UserApi) UserRegisterCount(c *gin.Context) {
+ a2r.Call(c, user.UserClient.UserRegisterCount, u.Client)
+}
+
+// GetUsersOnlineTokenDetail Get user online token details.
+func (u *UserApi) GetUsersOnlineTokenDetail(c *gin.Context) {
+ var wsResult []*msggateway.GetUsersOnlineStatusResp_SuccessResult
+ var respResult []*msggateway.SingleDetail
+ flag := false
+ var req msggateway.GetUsersOnlineStatusReq
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+ conns, err := u.discov.GetConns(c, u.config.MessageGateway)
+ if err != nil {
+ apiresp.GinError(c, err)
+ return
+ }
+	// Query online status from each message gateway
+ for _, v := range conns {
+ msgClient := msggateway.NewMsgGatewayClient(v)
+ reply, err := msgClient.GetUsersOnlineStatus(c, &req)
+ if err != nil {
+ log.ZWarn(c, "GetUsersOnlineStatus rpc err", err)
+ continue
+ } else {
+ wsResult = append(wsResult, reply.SuccessResult...)
+ }
+ }
+
+ for _, v1 := range req.UserIDs {
+ m := make(map[int32][]string, 10)
+ flag = false
+ temp := new(msggateway.SingleDetail)
+ for _, v2 := range wsResult {
+ if v2.UserID == v1 {
+ flag = true
+ temp.UserID = v1
+ temp.Status = constant.Online
+ for _, status := range v2.DetailPlatformStatus {
+ if v, ok := m[status.PlatformID]; ok {
+ m[status.PlatformID] = append(v, status.Token)
+ } else {
+ m[status.PlatformID] = []string{status.Token}
+ }
+ }
+ }
+ }
+ for p, tokens := range m {
+ t := new(msggateway.SinglePlatformToken)
+ t.PlatformID = p
+ t.Token = tokens
+ t.Total = int32(len(tokens))
+ temp.SinglePlatformToken = append(temp.SinglePlatformToken, t)
+ }
+
+ if flag {
+ respResult = append(respResult, temp)
+ }
+ }
+
+ apiresp.GinSuccess(c, respResult)
+}
+
+// SubscriberStatus Presence status of subscribed users.
+func (u *UserApi) SubscriberStatus(c *gin.Context) {
+ a2r.Call(c, user.UserClient.SubscribeOrCancelUsersStatus, u.Client)
+}
+
+// GetUserStatus Get the online status of the user.
+func (u *UserApi) GetUserStatus(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetUserStatus, u.Client)
+}
+
+// GetSubscribeUsersStatus Get the online status of subscribers.
+func (u *UserApi) GetSubscribeUsersStatus(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetSubscribeUsersStatus, u.Client)
+}
+
+// ProcessUserCommandAdd user general function add.
+func (u *UserApi) ProcessUserCommandAdd(c *gin.Context) {
+ a2r.Call(c, user.UserClient.ProcessUserCommandAdd, u.Client)
+}
+
+// ProcessUserCommandDelete user general function delete.
+func (u *UserApi) ProcessUserCommandDelete(c *gin.Context) {
+ a2r.Call(c, user.UserClient.ProcessUserCommandDelete, u.Client)
+}
+
+// ProcessUserCommandUpdate user general function update.
+func (u *UserApi) ProcessUserCommandUpdate(c *gin.Context) {
+ a2r.Call(c, user.UserClient.ProcessUserCommandUpdate, u.Client)
+}
+
+// ProcessUserCommandGet user general function get.
+func (u *UserApi) ProcessUserCommandGet(c *gin.Context) {
+ a2r.Call(c, user.UserClient.ProcessUserCommandGet, u.Client)
+}
+
+// ProcessUserCommandGetAll user general function get all.
+func (u *UserApi) ProcessUserCommandGetAll(c *gin.Context) {
+ a2r.Call(c, user.UserClient.ProcessUserCommandGetAll, u.Client)
+}
+
+func (u *UserApi) AddNotificationAccount(c *gin.Context) {
+ a2r.Call(c, user.UserClient.AddNotificationAccount, u.Client)
+}
+
+func (u *UserApi) UpdateNotificationAccountInfo(c *gin.Context) {
+ a2r.Call(c, user.UserClient.UpdateNotificationAccountInfo, u.Client)
+}
+
+func (u *UserApi) SearchNotificationAccount(c *gin.Context) {
+ a2r.Call(c, user.UserClient.SearchNotificationAccount, u.Client)
+}
+
+func (u *UserApi) GetUserClientConfig(c *gin.Context) {
+ a2r.Call(c, user.UserClient.GetUserClientConfig, u.Client)
+}
+
+func (u *UserApi) SetUserClientConfig(c *gin.Context) {
+ a2r.Call(c, user.UserClient.SetUserClientConfig, u.Client)
+}
+
+func (u *UserApi) DelUserClientConfig(c *gin.Context) {
+ a2r.Call(c, user.UserClient.DelUserClientConfig, u.Client)
+}
+
+func (u *UserApi) PageUserClientConfig(c *gin.Context) {
+ a2r.Call(c, user.UserClient.PageUserClientConfig, u.Client)
+}
diff --git a/internal/api/wallet.go b/internal/api/wallet.go
new file mode 100644
index 0000000..150b117
--- /dev/null
+++ b/internal/api/wallet.go
@@ -0,0 +1,523 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package api
+
+import (
+ "strconv"
+ "time"
+
+ "github.com/gin-gonic/gin"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/idutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+type WalletApi struct {
+ walletDB database.Wallet
+ walletBalanceRecordDB database.WalletBalanceRecord
+ userDB database.User
+ userClient *rpcli.UserClient
+}
+
+func NewWalletApi(walletDB database.Wallet, walletBalanceRecordDB database.WalletBalanceRecord, userDB database.User, userClient *rpcli.UserClient) *WalletApi {
+ return &WalletApi{
+ walletDB: walletDB,
+ walletBalanceRecordDB: walletBalanceRecordDB,
+ userDB: userDB,
+ userClient: userClient,
+ }
+}
+
+// updateBalanceWithRecord is the unified balance-update helper: it applies
+// optimistic concurrency control and creates a balance record.
+// operation: operation type (set/add/subtract)
+// amount: amount in cents (fen)
+// oldBalance: previous balance
+// oldVersion: previous version number
+// remark: remark text
+// Returns the new balance, the new version number, and an error.
+func (w *WalletApi) updateBalanceWithRecord(ctx *gin.Context, userID string, operation string, amount int64, oldBalance int64, oldVersion int64, remark string) (newBalance int64, newVersion int64, err error) {
+	// Update the balance guarded by the version number (prevents concurrent overwrites).
+ params := &database.WalletUpdateParams{
+ UserID: userID,
+ Operation: operation,
+ Amount: amount,
+ OldBalance: oldBalance,
+ OldVersion: oldVersion,
+ }
+
+ result, err := w.walletDB.UpdateBalanceWithVersion(ctx, params)
+ if err != nil {
+ return 0, 0, err
+ }
+
+	// Compute the change amount for the balance record.
+ var recordAmount int64
+ switch operation {
+ case "set":
+ recordAmount = result.NewBalance - oldBalance
+ case "add":
+ recordAmount = amount
+ case "subtract":
+ recordAmount = -amount
+ }
+
+	// Create the balance record (new fields).
+ recordID := idutil.GetMsgIDByMD5(userID + timeutil.GetCurrentTimeFormatted() + operation + strconv.FormatInt(amount, 10))
+	// All generic operations here share record type 99 ("other"); callers with
+	// a specific business type should pass it in from the upper layer.
+	var recordType int32 = 99
+ balanceRecord := &model.WalletBalanceRecord{
+ ID: recordID,
+ UserID: userID,
+ Amount: recordAmount,
+ Type: recordType,
+ BeforeBalance: oldBalance,
+ AfterBalance: result.NewBalance,
+ OrderID: "",
+ TransactionID: "",
+ RedPacketID: "",
+ Remark: remark,
+ CreateTime: time.Now(),
+ }
+ if err := w.walletBalanceRecordDB.Create(ctx, balanceRecord); err != nil {
+		// A failed balance-record write must not break the main flow; log a warning only.
+ log.ZWarn(ctx, "updateBalanceWithRecord: failed to create balance record", err,
+ "userID", userID,
+ "operation", operation,
+ "amount", amount,
+ "oldBalance", oldBalance,
+ "newBalance", result.NewBalance)
+ }
+
+ return result.NewBalance, result.NewVersion, nil
+}
+
+// walletPaginationWrapper implements the pagination.Pagination interface.
+type walletPaginationWrapper struct {
+ pageNumber int32
+ showNumber int32
+}
+
+func (p *walletPaginationWrapper) GetPageNumber() int32 {
+ if p.pageNumber <= 0 {
+ return 1
+ }
+ return p.pageNumber
+}
+
+func (p *walletPaginationWrapper) GetShowNumber() int32 {
+ if p.showNumber <= 0 {
+ return 20
+ }
+ return p.showNumber
+}
+
+// GetWallets lists user wallets (admin endpoint).
+func (w *WalletApi) GetWallets(c *gin.Context) {
+ var (
+ req apistruct.GetWalletsReq
+ resp apistruct.GetWalletsResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Apply default pagination parameters.
+ if req.Pagination.PageNumber <= 0 {
+ req.Pagination.PageNumber = 1
+ }
+ if req.Pagination.ShowNumber <= 0 {
+ req.Pagination.ShowNumber = 20
+ }
+
+	// Build the pagination object.
+ pagination := &walletPaginationWrapper{
+ pageNumber: req.Pagination.PageNumber,
+ showNumber: req.Pagination.ShowNumber,
+ }
+
+	// Query the wallet list.
+ var total int64
+ var wallets []*apistruct.WalletInfo
+ var err error
+
+	// Check whether any filter is present (user ID, phone number, or account).
+ hasQueryCondition := req.UserID != "" || req.PhoneNumber != "" || req.Account != ""
+
+ if hasQueryCondition {
+		// Filters present: resolve the matching user IDs first.
+ var userIDs []string
+ if req.UserID != "" {
+			// A user ID was supplied directly; use it as-is.
+ userIDs = []string{req.UserID}
+ } else {
+			// Resolve user IDs by phone number or account.
+ searchUserIDs, err := w.userDB.SearchUsersByFields(c, req.Account, req.PhoneNumber, "")
+ if err != nil {
+ log.ZError(c, "GetWallets: failed to search users", err, "account", req.Account, "phoneNumber", req.PhoneNumber)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to search users"))
+ return
+ }
+ if len(searchUserIDs) == 0 {
+				// No matching users; return an empty list.
+ resp.Total = 0
+ resp.Wallets = []*apistruct.WalletInfo{}
+ apiresp.GinSuccess(c, resp)
+ return
+ }
+ userIDs = searchUserIDs
+ }
+
+		// Query wallets for the resolved user IDs.
+ walletModels, err := w.walletDB.FindWalletsByUserIDs(c, userIDs)
+ if err != nil {
+ log.ZError(c, "GetWallets: failed to find wallets by userIDs", err, "userIDs", userIDs)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find wallets"))
+ return
+ }
+
+		// Convert to the response format.
+ wallets = make([]*apistruct.WalletInfo, 0, len(walletModels))
+ for _, wallet := range walletModels {
+ wallets = append(wallets, &apistruct.WalletInfo{
+ UserID: wallet.UserID,
+ Balance: wallet.Balance,
+ CreateTime: wallet.CreateTime.UnixMilli(),
+ UpdateTime: wallet.UpdateTime.UnixMilli(),
+ })
+ }
+ total = int64(len(wallets))
+ } else {
+		// No filters: list all wallets with pagination.
+ var walletModels []*model.Wallet
+ total, walletModels, err = w.walletDB.FindAllWallets(c, pagination)
+ if err != nil {
+ log.ZError(c, "GetWallets: failed to find wallets", err)
+ apiresp.GinError(c, errs.ErrInternalServer.WrapMsg("failed to find wallets"))
+ return
+ }
+		// Convert to the response format.
+ wallets = make([]*apistruct.WalletInfo, 0, len(walletModels))
+ for _, wallet := range walletModels {
+ wallets = append(wallets, &apistruct.WalletInfo{
+ UserID: wallet.UserID,
+ Balance: wallet.Balance,
+ CreateTime: wallet.CreateTime.UnixMilli(),
+ UpdateTime: wallet.UpdateTime.UnixMilli(),
+ })
+ }
+ }
+
+	// Collect all user IDs.
+ userIDMap := make(map[string]bool)
+ for _, wallet := range wallets {
+ if wallet.UserID != "" {
+ userIDMap[wallet.UserID] = true
+ }
+ }
+
+	// Batch-fetch user info.
+ userIDList := make([]string, 0, len(userIDMap))
+ for userID := range userIDMap {
+ userIDList = append(userIDList, userID)
+ }
+
+ userInfoMap := make(map[string]*apistruct.WalletInfo) // userID -> WalletInfo (with nickname and faceURL)
+ if len(userIDList) > 0 {
+		userInfos, err := w.userClient.GetUsersInfo(c, userIDList)
+		if err != nil {
+			log.ZWarn(c, "GetWallets: failed to get users info", err, "userIDs", userIDList)
+		}
+		for _, userInfo := range userInfos {
+			if userInfo != nil {
+				userInfoMap[userInfo.UserID] = &apistruct.WalletInfo{
+					Nickname: userInfo.Nickname,
+					FaceURL:  userInfo.FaceURL,
+				}
+			}
+		}
+ }
+
+	// Fill in the user info.
+ for _, wallet := range wallets {
+ if userInfo, ok := userInfoMap[wallet.UserID]; ok {
+ wallet.Nickname = userInfo.Nickname
+ wallet.FaceURL = userInfo.FaceURL
+ }
+ }
+
+	// Populate the response.
+ resp.Total = total
+ resp.Wallets = wallets
+
+ log.ZInfo(c, "GetWallets: success", "userID", req.UserID, "phoneNumber", req.PhoneNumber, "account", req.Account, "total", total, "count", len(resp.Wallets))
+ apiresp.GinSuccess(c, resp)
+}
+
+// BatchUpdateWalletBalance updates multiple users' balances in bulk (admin endpoint).
+func (w *WalletApi) BatchUpdateWalletBalance(c *gin.Context) {
+ var (
+ req apistruct.BatchUpdateWalletBalanceReq
+ resp apistruct.BatchUpdateWalletBalanceResp
+ )
+ if err := c.BindJSON(&req); err != nil {
+ apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
+ return
+ }
+
+	// Validate request parameters.
+ if len(req.Users) == 0 {
+ apiresp.GinError(c, errs.ErrArgs.WrapMsg("users list cannot be empty"))
+ return
+ }
+
+	// Apply the default operation type.
+	defaultOperation := req.Operation
+	if defaultOperation == "" {
+		defaultOperation = "add" // default to adding funds
+	}
+	if defaultOperation != "set" && defaultOperation != "add" && defaultOperation != "subtract" {
+		apiresp.GinError(c, errs.ErrArgs.WrapMsg("operation must be 'set', 'add', or 'subtract'"))
+		return
+	}
+
+	// Process each user.
+ resp.Total = int32(len(req.Users))
+ resp.Results = make([]apistruct.WalletUpdateResult, 0, len(req.Users))
+
+ for _, user := range req.Users {
+ result := apistruct.WalletUpdateResult{
+ UserID: user.UserID.String(),
+ PhoneNumber: user.PhoneNumber,
+ Account: user.Account,
+ Success: false,
+			Remark:      user.Remark, // carry the remark through
+ }
+
+		// Resolve the user ID.
+ var userID string
+ if user.UserID != "" {
+ userID = user.UserID.String()
+ } else if user.PhoneNumber != "" || user.Account != "" {
+			// Resolve the user ID by phone number or account.
+ searchUserIDs, err := w.userDB.SearchUsersByFields(c, user.Account, user.PhoneNumber, "")
+ if err != nil {
+ result.Message = "failed to search user: " + err.Error()
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ if len(searchUserIDs) == 0 {
+ result.Message = "user not found"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ if len(searchUserIDs) > 1 {
+ result.Message = "multiple users found, please use userID"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ userID = searchUserIDs[0]
+ } else {
+ result.Message = "user identifier is required (userID, phoneNumber, or account)"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+		// Verify the user exists first.
+ userExists, err := w.userDB.Exist(c, userID)
+ if err != nil {
+ result.Message = "failed to check user existence: " + err.Error()
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ if !userExists {
+ result.Message = "user not found"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+		// Fetch the current wallet.
+ wallet, err := w.walletDB.Take(c, userID)
+ if err != nil {
+			// The wallet does not exist but the user does; create a new wallet.
+ if errs.ErrRecordNotFound.Is(err) {
+ wallet = &model.Wallet{
+ UserID: userID,
+ Balance: 0,
+ CreateTime: time.Now(),
+ UpdateTime: time.Now(),
+ }
+ if err := w.walletDB.Create(c, wallet); err != nil {
+ result.Message = "failed to create wallet: " + err.Error()
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ } else {
+ result.Message = "failed to get wallet: " + err.Error()
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+ }
+
+		// Record the old balance and version (legacy rows may lack a version; 0 stays compatible).
+ result.OldBalance = wallet.Balance
+ oldVersion := wallet.Version
+
+		// Determine the amount and operation type for this user.
+ userAmount := user.Amount
+ if userAmount == 0 {
+			// No per-user amount given; fall back to the request-level amount.
+ userAmount = req.Amount
+ }
+ if userAmount == 0 {
+ result.Message = "amount is required (either in user object or in request)"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+ userOperation := user.Operation
+ if userOperation == "" {
+			// No per-user operation given; fall back to the default operation.
+ userOperation = defaultOperation
+ }
+ if userOperation != "set" && userOperation != "add" && userOperation != "subtract" {
+ result.Message = "operation must be 'set', 'add', or 'subtract'"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+		// Pre-check: compute the new balance and reject negative results.
+ var expectedNewBalance int64
+ switch userOperation {
+ case "set":
+ expectedNewBalance = userAmount
+ case "add":
+ expectedNewBalance = wallet.Balance + userAmount
+ case "subtract":
+ expectedNewBalance = wallet.Balance - userAmount
+ }
+ if expectedNewBalance < 0 {
+ result.Message = "balance cannot be negative"
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+		// Use the unified balance-update helper (concurrency control plus record creation).
+		// On a concurrency conflict, retry once with a freshly fetched balance and version.
+ var newBalance int64
+ maxRetries := 2
+ for retry := 0; retry < maxRetries; retry++ {
+ if retry > 0 {
+				// Re-fetch the wallet before retrying.
+ wallet, err = w.walletDB.Take(c, userID)
+ if err != nil {
+ result.Message = "failed to get wallet for retry: " + err.Error()
+ break
+ }
+ result.OldBalance = wallet.Balance
+ oldVersion = wallet.Version
+				// Recompute the expected new balance.
+ switch userOperation {
+ case "set":
+ expectedNewBalance = userAmount
+ case "add":
+ expectedNewBalance = wallet.Balance + userAmount
+ case "subtract":
+ expectedNewBalance = wallet.Balance - userAmount
+ }
+				if expectedNewBalance < 0 {
+					result.Message = "balance cannot be negative after retry"
+					err = errs.New("balance cannot be negative after retry")
+					break
+				}
+ }
+
+ newBalance, _, err = w.updateBalanceWithRecord(c, userID, userOperation, userAmount, result.OldBalance, oldVersion, user.Remark)
+ if err == nil {
+				// Update succeeded.
+ break
+ }
+
+			// On a concurrency conflict with retries remaining, try again.
+ if retry < maxRetries-1 && errs.ErrInternalServer.Is(err) {
+ log.ZWarn(c, "BatchUpdateWalletBalance: concurrent modification detected, retrying", err,
+ "userID", userID,
+ "retry", retry+1,
+ "maxRetries", maxRetries)
+ continue
+ }
+
+			// Any other error, or retries exhausted: stop retrying; the check
+			// after the loop records the failure exactly once.
+			result.Message = "failed to update balance: " + err.Error()
+			break
+ }
+
+ if err != nil {
+			// Update failed; record the failure.
+ resp.Results = append(resp.Results, result)
+ resp.Failed++
+ continue
+ }
+
+		// Update succeeded.
+ result.Success = true
+ result.NewBalance = newBalance
+ result.Message = "success"
+		// The remark was already set during result initialization.
+ resp.Results = append(resp.Results, result)
+ resp.Success++
+
+		// Log the update, including the remark.
+ if user.Remark != "" {
+ log.ZInfo(c, "BatchUpdateWalletBalance: user balance updated",
+ "userID", userID,
+ "operation", userOperation,
+ "amount", userAmount,
+ "oldBalance", result.OldBalance,
+ "newBalance", newBalance,
+ "remark", user.Remark)
+ }
+ }
+
+ log.ZInfo(c, "BatchUpdateWalletBalance: success", "total", resp.Total, "success", resp.Success, "failed", resp.Failed)
+ apiresp.GinSuccess(c, resp)
+}
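BatchUpdateWalletBalance guards every write with a version number: UpdateBalanceWithVersion only succeeds if the caller's version still matches the stored one, and the handler re-reads and retries on conflict. The compare-and-swap core of that scheme can be sketched against a hypothetical in-memory store (memStore, wallet, and updateWithVersion here are illustrative stand-ins, not the project's walletDB):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// wallet mirrors the fields the handler relies on: a balance plus a
// version counter that is bumped on every successful write.
type wallet struct {
	balance int64
	version int64
}

// memStore is a hypothetical in-memory stand-in for walletDB, used only
// to illustrate the versioned update the handler retries on.
type memStore struct {
	mu      sync.Mutex
	wallets map[string]*wallet
}

var errConflict = errors.New("version conflict")

// updateWithVersion applies delta only if the caller's oldVersion still
// matches the stored one, mimicking UpdateBalanceWithVersion.
func (s *memStore) updateWithVersion(userID string, delta, oldVersion int64) (int64, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	w := s.wallets[userID]
	if w.version != oldVersion {
		return 0, errConflict // another writer got in first
	}
	w.balance += delta
	w.version++
	return w.balance, nil
}

func main() {
	s := &memStore{wallets: map[string]*wallet{"u1": {balance: 100, version: 3}}}

	// A stale version is rejected instead of silently overwriting.
	if _, err := s.updateWithVersion("u1", 50, 2); err != nil {
		fmt.Println("stale write rejected:", err)
	}

	// Retrying with the current version succeeds, as in the handler's retry branch.
	newBal, err := s.updateWithVersion("u1", 50, 3)
	fmt.Println(newBal, err) // prints "150 <nil>"
}
```

Under concurrent writers, the loser of the race gets a conflict error and must re-read the balance and version before retrying, which is exactly what the retry branch in the handler does.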
diff --git a/internal/msggateway/callback.go b/internal/msggateway/callback.go
new file mode 100644
index 0000000..4098f83
--- /dev/null
+++ b/internal/msggateway/callback.go
@@ -0,0 +1,76 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "context"
+ "time"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func (ws *WsServer) webhookAfterUserOnline(ctx context.Context, after *config.AfterConfig, userID string, platformID int, isAppBackground bool, connID string) {
+ req := cbapi.CallbackUserOnlineReq{
+ UserStatusCallbackReq: cbapi.UserStatusCallbackReq{
+ UserStatusBaseCallback: cbapi.UserStatusBaseCallback{
+ CallbackCommand: cbapi.CallbackAfterUserOnlineCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: platformID,
+ Platform: constant.PlatformIDToName(platformID),
+ },
+ UserID: userID,
+ },
+ Seq: time.Now().UnixMilli(),
+ IsAppBackground: isAppBackground,
+ ConnID: connID,
+ }
+ ws.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &cbapi.CommonCallbackResp{}, after)
+}
+
+func (ws *WsServer) webhookAfterUserOffline(ctx context.Context, after *config.AfterConfig, userID string, platformID int, connID string) {
+ req := &cbapi.CallbackUserOfflineReq{
+ UserStatusCallbackReq: cbapi.UserStatusCallbackReq{
+ UserStatusBaseCallback: cbapi.UserStatusBaseCallback{
+ CallbackCommand: cbapi.CallbackAfterUserOfflineCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: platformID,
+ Platform: constant.PlatformIDToName(platformID),
+ },
+ UserID: userID,
+ },
+ Seq: time.Now().UnixMilli(),
+ ConnID: connID,
+ }
+ ws.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &cbapi.CallbackUserOfflineResp{}, after)
+}
+
+func (ws *WsServer) webhookAfterUserKickOff(ctx context.Context, after *config.AfterConfig, userID string, platformID int) {
+ req := &cbapi.CallbackUserKickOffReq{
+ UserStatusCallbackReq: cbapi.UserStatusCallbackReq{
+ UserStatusBaseCallback: cbapi.UserStatusBaseCallback{
+ CallbackCommand: cbapi.CallbackAfterUserKickOffCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: platformID,
+ Platform: constant.PlatformIDToName(platformID),
+ },
+ UserID: userID,
+ },
+ Seq: time.Now().UnixMilli(),
+ }
+ ws.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &cbapi.CommonCallbackResp{}, after)
+}
diff --git a/internal/msggateway/client.go b/internal/msggateway/client.go
new file mode 100644
index 0000000..493d383
--- /dev/null
+++ b/internal/msggateway/client.go
@@ -0,0 +1,476 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/stringutil"
+)
+
+var (
+ ErrConnClosed = errs.New("conn has closed")
+ ErrNotSupportMessageProtocol = errs.New("not support message protocol")
+ ErrClientClosed = errs.New("client actively close the connection")
+ ErrPanic = errs.New("panic error")
+)
+
+const (
+ // MessageText is for UTF-8 encoded text messages like JSON.
+ MessageText = iota + 1
+ // MessageBinary is for binary messages like protobufs.
+ MessageBinary
+ // CloseMessage denotes a close control message. The optional message
+ // payload contains a numeric code and text. Use the FormatCloseMessage
+ // function to format a close message payload.
+ CloseMessage = 8
+
+ // PingMessage denotes a ping control message. The optional message payload
+ // is UTF-8 encoded text.
+ PingMessage = 9
+
+ // PongMessage denotes a pong control message. The optional message payload
+ // is UTF-8 encoded text.
+ PongMessage = 10
+)
+
+type PingPongHandler func(string) error
+
+type Client struct {
+ w *sync.Mutex
+ conn LongConn
+ PlatformID int `json:"platformID"`
+ IsCompress bool `json:"isCompress"`
+ UserID string `json:"userID"`
+ IsBackground bool `json:"isBackground"`
+ SDKType string `json:"sdkType"`
+ SDKVersion string `json:"sdkVersion"`
+ Encoder Encoder
+ ctx *UserConnContext
+ longConnServer LongConnServer
+ closed atomic.Bool
+ closedErr error
+ token string
+ hbCtx context.Context
+ hbCancel context.CancelFunc
+ subLock *sync.Mutex
+ subUserIDs map[string]struct{} // client conn subscription list
+}
+
+// ResetClient updates the client's state with new connection and context information.
+func (c *Client) ResetClient(ctx *UserConnContext, conn LongConn, longConnServer LongConnServer) {
+ c.w = new(sync.Mutex)
+ c.conn = conn
+ c.PlatformID = stringutil.StringToInt(ctx.GetPlatformID())
+ c.IsCompress = ctx.GetCompression()
+	c.UserID = ctx.GetUserID()
+	c.ctx = ctx
+	c.longConnServer = longConnServer
+	c.IsBackground = false // reset; updated later via SetUserDeviceBackground
+ c.closed.Store(false)
+ c.closedErr = nil
+ c.token = ctx.GetToken()
+ c.SDKType = ctx.GetSDKType()
+ c.SDKVersion = ctx.GetSDKVersion()
+ c.hbCtx, c.hbCancel = context.WithCancel(c.ctx)
+ c.subLock = new(sync.Mutex)
+ if c.subUserIDs != nil {
+ clear(c.subUserIDs)
+ }
+ if c.SDKType == GoSDK {
+ c.Encoder = NewGobEncoder()
+ } else {
+ c.Encoder = NewJsonEncoder()
+ }
+ c.subUserIDs = make(map[string]struct{})
+}
+
+func (c *Client) pingHandler(appData string) error {
+ if err := c.conn.SetReadDeadline(pongWait); err != nil {
+ return err
+ }
+
+ log.ZDebug(c.ctx, "ping Handler Success.", "appData", appData)
+ return c.writePongMsg(appData)
+}
+
+func (c *Client) pongHandler(_ string) error {
+ if err := c.conn.SetReadDeadline(pongWait); err != nil {
+ return err
+ }
+ return nil
+}
+
+// readMessage continuously reads messages from the connection.
+func (c *Client) readMessage() {
+ defer func() {
+ if r := recover(); r != nil {
+ c.closedErr = ErrPanic
+			log.ZPanic(c.ctx, "socket panic", errs.ErrPanic(r))
+ }
+ c.close()
+ }()
+
+ c.conn.SetReadLimit(maxMessageSize)
+ _ = c.conn.SetReadDeadline(pongWait)
+ c.conn.SetPongHandler(c.pongHandler)
+ c.conn.SetPingHandler(c.pingHandler)
+ c.activeHeartbeat(c.hbCtx)
+
+ for {
+ log.ZDebug(c.ctx, "readMessage")
+ messageType, message, returnErr := c.conn.ReadMessage()
+ if returnErr != nil {
+ log.ZWarn(c.ctx, "readMessage", returnErr, "messageType", messageType)
+ c.closedErr = returnErr
+ return
+ }
+
+ log.ZDebug(c.ctx, "readMessage", "messageType", messageType)
+ if c.closed.Load() {
+ // The scenario where the connection has just been closed, but the coroutine has not exited
+ c.closedErr = ErrConnClosed
+ return
+ }
+
+ switch messageType {
+ case MessageBinary:
+ _ = c.conn.SetReadDeadline(pongWait)
+ parseDataErr := c.handleMessage(message)
+ if parseDataErr != nil {
+ c.closedErr = parseDataErr
+ return
+ }
+ case MessageText:
+ _ = c.conn.SetReadDeadline(pongWait)
+ parseDataErr := c.handlerTextMessage(message)
+ if parseDataErr != nil {
+ c.closedErr = parseDataErr
+ return
+ }
+		case PingMessage:
+			if err := c.writePongMsg(""); err != nil {
+				log.ZWarn(c.ctx, "writePongMsg", err)
+			}
+
+ case CloseMessage:
+ c.closedErr = ErrClientClosed
+ return
+
+ default:
+ }
+ }
+}
+
+// handleMessage processes a single message received by the client.
+func (c *Client) handleMessage(message []byte) error {
+ if c.IsCompress {
+ var err error
+ message, err = c.longConnServer.DecompressWithPool(message)
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ }
+
+ var binaryReq = getReq()
+ defer freeReq(binaryReq)
+
+ err := c.Encoder.Decode(message, binaryReq)
+ if err != nil {
+ return err
+ }
+
+ if err := c.longConnServer.Validate(binaryReq); err != nil {
+ return err
+ }
+
+ if binaryReq.SendID != c.UserID {
+ return errs.New("exception conn userID not same to req userID", "binaryReq", binaryReq.String())
+ }
+
+ ctx := mcontext.WithMustInfoCtx(
+ []string{binaryReq.OperationID, binaryReq.SendID, constant.PlatformIDToName(c.PlatformID), c.ctx.GetConnID()},
+ )
+
+ log.ZDebug(ctx, "gateway req message", "req", binaryReq.String())
+
+ var (
+ resp []byte
+ messageErr error
+ )
+
+ switch binaryReq.ReqIdentifier {
+ case WSGetNewestSeq:
+ resp, messageErr = c.longConnServer.GetSeq(ctx, binaryReq)
+ case WSSendMsg:
+ resp, messageErr = c.longConnServer.SendMessage(ctx, binaryReq)
+ case WSSendSignalMsg:
+ resp, messageErr = c.longConnServer.SendSignalMessage(ctx, binaryReq)
+ case WSPullMsgBySeqList:
+ resp, messageErr = c.longConnServer.PullMessageBySeqList(ctx, binaryReq)
+ case WSPullMsg:
+ resp, messageErr = c.longConnServer.GetSeqMessage(ctx, binaryReq)
+ case WSGetConvMaxReadSeq:
+ resp, messageErr = c.longConnServer.GetConversationsHasReadAndMaxSeq(ctx, binaryReq)
+ case WsPullConvLastMessage:
+ resp, messageErr = c.longConnServer.GetLastMessage(ctx, binaryReq)
+ case WsLogoutMsg:
+ resp, messageErr = c.longConnServer.UserLogout(ctx, binaryReq)
+ case WsSetBackgroundStatus:
+ resp, messageErr = c.setAppBackgroundStatus(ctx, binaryReq)
+ case WsSubUserOnlineStatus:
+ resp, messageErr = c.longConnServer.SubUserOnlineStatus(ctx, c, binaryReq)
+ default:
+ return fmt.Errorf(
+			"ReqIdentifier failed, sendID: %s, msgIncr: %s, reqIdentifier: %d",
+ binaryReq.SendID,
+ binaryReq.MsgIncr,
+ binaryReq.ReqIdentifier,
+ )
+ }
+
+ return c.replyMessage(ctx, binaryReq, messageErr, resp)
+}
+
+func (c *Client) setAppBackgroundStatus(ctx context.Context, req *Req) ([]byte, error) {
+ resp, isBackground, messageErr := c.longConnServer.SetUserDeviceBackground(ctx, req)
+ if messageErr != nil {
+ return nil, messageErr
+ }
+
+ c.IsBackground = isBackground
+ // TODO: callback
+ return resp, nil
+}
+
+func (c *Client) close() {
+ c.w.Lock()
+ defer c.w.Unlock()
+ if c.closed.Load() {
+ return
+ }
+ c.closed.Store(true)
+ c.conn.Close()
+ c.hbCancel() // Close server-initiated heartbeat.
+ c.longConnServer.UnRegister(c)
+}
+
+func (c *Client) replyMessage(ctx context.Context, binaryReq *Req, err error, resp []byte) error {
+ errResp := apiresp.ParseError(err)
+ mReply := Resp{
+ ReqIdentifier: binaryReq.ReqIdentifier,
+ MsgIncr: binaryReq.MsgIncr,
+ OperationID: binaryReq.OperationID,
+ ErrCode: errResp.ErrCode,
+ ErrMsg: errResp.ErrMsg,
+ Data: resp,
+ }
+ t := time.Now()
+ log.ZDebug(ctx, "gateway reply message", "resp", mReply.String())
+ err = c.writeBinaryMsg(mReply)
+ if err != nil {
+		log.ZWarn(ctx, "writeBinaryMsg replyMessage", err, "resp", mReply.String())
+	}
+	log.ZDebug(ctx, "writeBinaryMsg end", "time cost", time.Since(t))
+
+ if binaryReq.ReqIdentifier == WsLogoutMsg {
+ return errs.New("user logout", "operationID", binaryReq.OperationID).Wrap()
+ }
+ return nil
+}
+
+func (c *Client) PushMessage(ctx context.Context, msgData *sdkws.MsgData) error {
+ var msg sdkws.PushMessages
+ conversationID := msgprocessor.GetConversationIDByMsg(msgData)
+ m := map[string]*sdkws.PullMsgs{conversationID: {Msgs: []*sdkws.MsgData{msgData}}}
+ if msgprocessor.IsNotification(conversationID) {
+ msg.NotificationMsgs = m
+ } else {
+ msg.Msgs = m
+ }
+ log.ZDebug(ctx, "PushMessage", "msg", &msg)
+ data, err := proto.Marshal(&msg)
+ if err != nil {
+ return err
+ }
+ resp := Resp{
+ ReqIdentifier: WSPushMsg,
+ OperationID: mcontext.GetOperationID(ctx),
+ Data: data,
+ }
+ return c.writeBinaryMsg(resp)
+}
+
+func (c *Client) KickOnlineMessage() error {
+ resp := Resp{
+ ReqIdentifier: WSKickOnlineMsg,
+ }
+	log.ZDebug(c.ctx, "KickOnlineMessage")
+	// Send the kick-off notification first.
+	err := c.writeBinaryMsg(resp)
+	if err != nil {
+		log.ZWarn(c.ctx, "KickOnlineMessage writeBinaryMsg failed", err)
+		// Close the connection even if the send failed.
+		c.close()
+		return err
+	}
+	// Give the message a moment to flush before closing the connection; a
+	// short delay helps ensure the frame has been written to the buffer.
+	time.Sleep(10 * time.Millisecond)
+ c.close()
+ return nil
+}
+
+func (c *Client) PushUserOnlineStatus(data []byte) error {
+ resp := Resp{
+ ReqIdentifier: WsSubUserOnlineStatus,
+ Data: data,
+ }
+ return c.writeBinaryMsg(resp)
+}
+
+func (c *Client) writeBinaryMsg(resp Resp) error {
+ if c.closed.Load() {
+ return nil
+ }
+
+ encodedBuf, err := c.Encoder.Encode(resp)
+ if err != nil {
+ return err
+ }
+
+ c.w.Lock()
+ defer c.w.Unlock()
+
+ err = c.conn.SetWriteDeadline(writeWait)
+ if err != nil {
+ return err
+ }
+
+ if c.IsCompress {
+ resultBuf, compressErr := c.longConnServer.CompressWithPool(encodedBuf)
+ if compressErr != nil {
+ return compressErr
+ }
+ return c.conn.WriteMessage(MessageBinary, resultBuf)
+ }
+
+ return c.conn.WriteMessage(MessageBinary, encodedBuf)
+}
+
+// activeHeartbeat actively initiates heartbeats when the platform is Web.
+func (c *Client) activeHeartbeat(ctx context.Context) {
+ if c.PlatformID == constant.WebPlatformID {
+ go func() {
+ defer func() {
+ if r := recover(); r != nil {
+ log.ZPanic(ctx, "activeHeartbeat Panic", errs.ErrPanic(r))
+ }
+ }()
+ log.ZDebug(ctx, "server initiative send heartbeat start.")
+ ticker := time.NewTicker(pingPeriod)
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-ticker.C:
+ if err := c.writePingMsg(); err != nil {
+ log.ZWarn(c.ctx, "send Ping Message error.", err)
+ return
+ }
+ case <-c.hbCtx.Done():
+ return
+ }
+ }
+ }()
+ }
+}
+
+func (c *Client) writePingMsg() error {
+ if c.closed.Load() {
+ return nil
+ }
+
+ c.w.Lock()
+ defer c.w.Unlock()
+
+ err := c.conn.SetWriteDeadline(writeWait)
+ if err != nil {
+ return err
+ }
+
+ return c.conn.WriteMessage(PingMessage, nil)
+}
+
+func (c *Client) writePongMsg(appData string) error {
+ log.ZDebug(c.ctx, "write Pong Msg in Server", "appData", appData)
+ if c.closed.Load() {
+		log.ZWarn(c.ctx, "connection already closed in server", nil, "appData", appData, "closed err", c.closedErr)
+ return nil
+ }
+
+ c.w.Lock()
+ defer c.w.Unlock()
+
+ err := c.conn.SetWriteDeadline(writeWait)
+ if err != nil {
+		log.ZWarn(c.ctx, "SetWriteDeadline in server failed", errs.Wrap(err), "writeWait", writeWait, "appData", appData)
+ return errs.Wrap(err)
+ }
+ err = c.conn.WriteMessage(PongMessage, []byte(appData))
+ if err != nil {
+		log.ZWarn(c.ctx, "WriteMessage failed", errs.Wrap(err), "Pong msg", PongMessage)
+ }
+
+ return errs.Wrap(err)
+}
+
+func (c *Client) handlerTextMessage(b []byte) error {
+ var msg TextMessage
+ if err := json.Unmarshal(b, &msg); err != nil {
+ return err
+ }
+ switch msg.Type {
+ case TextPong:
+ return nil
+ case TextPing:
+ msg.Type = TextPong
+ msgData, err := json.Marshal(msg)
+ if err != nil {
+ return err
+ }
+ c.w.Lock()
+ defer c.w.Unlock()
+ if err := c.conn.SetWriteDeadline(writeWait); err != nil {
+ return err
+ }
+ return c.conn.WriteMessage(MessageText, msgData)
+ default:
+		return fmt.Errorf("unsupported message type %s", msg.Type)
+ }
+}
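
The text ping/pong handling above can be exercised with a short standalone sketch. The `TextMessage` shape and the `ping`/`pong` type strings here are assumptions mirroring the fields `handlerTextMessage` uses, not part of the patch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TextMessage mirrors the shape consumed by handlerTextMessage; the exact
// field names are illustrative assumptions.
type TextMessage struct {
	Type string          `json:"type"`
	Body json.RawMessage `json:"body,omitempty"`
}

const (
	TextPing = "ping"
	TextPong = "pong"
)

// answerPing flips an incoming ping into a pong, as the server does before
// writing the reply back on the text channel; any other type is rejected.
func answerPing(b []byte) ([]byte, error) {
	var msg TextMessage
	if err := json.Unmarshal(b, &msg); err != nil {
		return nil, err
	}
	if msg.Type != TextPing {
		return nil, fmt.Errorf("unsupported message type %s", msg.Type)
	}
	msg.Type = TextPong
	return json.Marshal(msg)
}

func main() {
	reply, err := answerPing([]byte(`{"type":"ping"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(reply)) // {"type":"pong"}
}
```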
diff --git a/internal/msggateway/compressor.go b/internal/msggateway/compressor.go
new file mode 100644
index 0000000..52d315b
--- /dev/null
+++ b/internal/msggateway/compressor.go
@@ -0,0 +1,113 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "bytes"
+ "compress/gzip"
+ "io"
+ "sync"
+
+ "github.com/openimsdk/tools/errs"
+)
+
+var (
+ gzipWriterPool = sync.Pool{New: func() any { return gzip.NewWriter(nil) }}
+ gzipReaderPool = sync.Pool{New: func() any { return new(gzip.Reader) }}
+)
+
+type Compressor interface {
+ Compress(rawData []byte) ([]byte, error)
+ CompressWithPool(rawData []byte) ([]byte, error)
+ DeCompress(compressedData []byte) ([]byte, error)
+ DecompressWithPool(compressedData []byte) ([]byte, error)
+}
+
+type GzipCompressor struct {
+ compressProtocol string
+}
+
+func NewGzipCompressor() *GzipCompressor {
+ return &GzipCompressor{compressProtocol: "gzip"}
+}
+
+func (g *GzipCompressor) Compress(rawData []byte) ([]byte, error) {
+ gzipBuffer := bytes.Buffer{}
+ gz := gzip.NewWriter(&gzipBuffer)
+
+ if _, err := gz.Write(rawData); err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.Compress: writing to gzip writer failed")
+ }
+
+ if err := gz.Close(); err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.Compress: closing gzip writer failed")
+ }
+
+ return gzipBuffer.Bytes(), nil
+}
+
+func (g *GzipCompressor) CompressWithPool(rawData []byte) ([]byte, error) {
+ gz := gzipWriterPool.Get().(*gzip.Writer)
+ defer gzipWriterPool.Put(gz)
+
+ gzipBuffer := bytes.Buffer{}
+ gz.Reset(&gzipBuffer)
+
+ if _, err := gz.Write(rawData); err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.CompressWithPool: error writing data")
+ }
+ if err := gz.Close(); err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.CompressWithPool: error closing gzip writer")
+ }
+ return gzipBuffer.Bytes(), nil
+}
+
+func (g *GzipCompressor) DeCompress(compressedData []byte) ([]byte, error) {
+ buff := bytes.NewBuffer(compressedData)
+ reader, err := gzip.NewReader(buff)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.DeCompress: NewReader creation failed")
+ }
+ decompressedData, err := io.ReadAll(reader)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.DeCompress: reading from gzip reader failed")
+ }
+ if err = reader.Close(); err != nil {
+ // Even if closing the reader fails, we've successfully read the data,
+ // so we return the decompressed data and an error indicating the close failure.
+ return decompressedData, errs.WrapMsg(err, "GzipCompressor.DeCompress: closing gzip reader failed")
+ }
+ return decompressedData, nil
+}
+
+func (g *GzipCompressor) DecompressWithPool(compressedData []byte) ([]byte, error) {
+ reader := gzipReaderPool.Get().(*gzip.Reader)
+ defer gzipReaderPool.Put(reader)
+
+ err := reader.Reset(bytes.NewReader(compressedData))
+ if err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.DecompressWithPool: resetting gzip reader failed")
+ }
+
+ decompressedData, err := io.ReadAll(reader)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "GzipCompressor.DecompressWithPool: reading from pooled gzip reader failed")
+ }
+ if err = reader.Close(); err != nil {
+ // Similar to DeCompress, return the data and error for close failure.
+ return decompressedData, errs.WrapMsg(err, "GzipCompressor.DecompressWithPool: closing pooled gzip reader failed")
+ }
+ return decompressedData, nil
+}
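
The pooled path in `CompressWithPool` hinges on `gzip.Writer.Reset` rebinding a reused writer to a fresh buffer before each use. A minimal standalone sketch of that round trip (names are illustrative, not part of the patch):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"sync"
)

// writerPool reuses gzip writers across calls, as gzipWriterPool does above.
var writerPool = sync.Pool{New: func() any { return gzip.NewWriter(nil) }}

func compress(raw []byte) ([]byte, error) {
	gz := writerPool.Get().(*gzip.Writer)
	defer writerPool.Put(gz)

	var buf bytes.Buffer
	gz.Reset(&buf) // rebind the pooled writer to a fresh output buffer
	if _, err := gz.Write(raw); err != nil {
		return nil, err
	}
	if err := gz.Close(); err != nil { // Close flushes the gzip footer
		return nil, err
	}
	return buf.Bytes(), nil
}

func decompress(data []byte) ([]byte, error) {
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	src := []byte("hello openim")
	packed, err := compress(src)
	if err != nil {
		panic(err)
	}
	out, err := decompress(packed)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // hello openim
}
```

Forgetting `Close` before reading the buffer is the classic bug here: the footer is never flushed and decompression fails with an unexpected EOF.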
diff --git a/internal/msggateway/compressor_test.go b/internal/msggateway/compressor_test.go
new file mode 100644
index 0000000..952bd4d
--- /dev/null
+++ b/internal/msggateway/compressor_test.go
@@ -0,0 +1,139 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+	"crypto/rand"
+	"sync"
+	"testing"
+	"unsafe"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func mockRandom() []byte {
+	bs := make([]byte, 50)
+	_, _ = rand.Read(bs) // crypto/rand.Read does not fail in practice
+	return bs
+}
+
+func TestCompressDecompress(t *testing.T) {
+
+ compressor := NewGzipCompressor()
+
+ for i := 0; i < 2000; i++ {
+ src := mockRandom()
+
+ // compress
+ dest, err := compressor.CompressWithPool(src)
+ if err != nil {
+ t.Log(err)
+ }
+ assert.Equal(t, nil, err)
+
+ // decompress
+ res, err := compressor.DecompressWithPool(dest)
+ if err != nil {
+ t.Log(err)
+ }
+ assert.Equal(t, nil, err)
+
+ // check
+ assert.EqualValues(t, src, res)
+ }
+}
+
+func TestCompressDecompressWithConcurrency(t *testing.T) {
+ wg := sync.WaitGroup{}
+ compressor := NewGzipCompressor()
+
+ for i := 0; i < 200; i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ src := mockRandom()
+
+ // compress
+ dest, err := compressor.CompressWithPool(src)
+ if err != nil {
+ t.Log(err)
+ }
+ assert.Equal(t, nil, err)
+
+ // decompress
+ res, err := compressor.DecompressWithPool(dest)
+ if err != nil {
+ t.Log(err)
+ }
+ assert.Equal(t, nil, err)
+
+ // check
+ assert.EqualValues(t, src, res)
+
+ }()
+ }
+ wg.Wait()
+}
+
+func BenchmarkCompress(b *testing.B) {
+ src := mockRandom()
+ compressor := NewGzipCompressor()
+
+ for i := 0; i < b.N; i++ {
+ _, err := compressor.Compress(src)
+ assert.Equal(b, nil, err)
+ }
+}
+
+func BenchmarkCompressWithSyncPool(b *testing.B) {
+ src := mockRandom()
+
+ compressor := NewGzipCompressor()
+ for i := 0; i < b.N; i++ {
+ _, err := compressor.CompressWithPool(src)
+ assert.Equal(b, nil, err)
+ }
+}
+
+func BenchmarkDecompress(b *testing.B) {
+ src := mockRandom()
+
+ compressor := NewGzipCompressor()
+ comdata, err := compressor.Compress(src)
+
+ assert.Equal(b, nil, err)
+
+ for i := 0; i < b.N; i++ {
+ _, err := compressor.DeCompress(comdata)
+ assert.Equal(b, nil, err)
+ }
+}
+
+func BenchmarkDecompressWithSyncPool(b *testing.B) {
+ src := mockRandom()
+
+ compressor := NewGzipCompressor()
+ comdata, err := compressor.Compress(src)
+ assert.Equal(b, nil, err)
+
+ for i := 0; i < b.N; i++ {
+ _, err := compressor.DecompressWithPool(comdata)
+ assert.Equal(b, nil, err)
+ }
+}
+
+func TestClientSize(t *testing.T) {
+	t.Log(unsafe.Sizeof(Client{}))
+}
diff --git a/internal/msggateway/constant.go b/internal/msggateway/constant.go
new file mode 100644
index 0000000..3959e11
--- /dev/null
+++ b/internal/msggateway/constant.go
@@ -0,0 +1,72 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import "time"
+
+const (
+ WsUserID = "sendID"
+ CommonUserID = "userID"
+ PlatformID = "platformID"
+ ConnID = "connID"
+ Token = "token"
+ OperationID = "operationID"
+ Compression = "compression"
+ GzipCompressionProtocol = "gzip"
+ BackgroundStatus = "isBackground"
+ SendResponse = "isMsgResp"
+ SDKType = "sdkType"
+ SDKVersion = "sdkVersion"
+)
+
+const (
+ GoSDK = "go"
+ JsSDK = "js"
+)
+
+const (
+ WebSocket = iota + 1
+)
+
+const (
+ // Websocket Protocol.
+ WSGetNewestSeq = 1001
+ WSPullMsgBySeqList = 1002
+ WSSendMsg = 1003
+ WSSendSignalMsg = 1004
+ WSPullMsg = 1005
+ WSGetConvMaxReadSeq = 1006
+ WsPullConvLastMessage = 1007
+ WSPushMsg = 2001
+ WSKickOnlineMsg = 2002
+ WsLogoutMsg = 2003
+ WsSetBackgroundStatus = 2004
+ WsSubUserOnlineStatus = 2005
+ WSDataError = 3001
+)
+
+const (
+ // Time allowed to write a message to the peer.
+ writeWait = 10 * time.Second
+
+ // Time allowed to read the next pong message from the peer.
+ pongWait = 30 * time.Second
+
+ // Send pings to peer with this period. Must be less than pongWait.
+ pingPeriod = (pongWait * 9) / 10
+
+ // Maximum message size allowed from peer.
+ maxMessageSize = 51200
+)
diff --git a/internal/msggateway/context.go b/internal/msggateway/context.go
new file mode 100644
index 0000000..8f79a93
--- /dev/null
+++ b/internal/msggateway/context.go
@@ -0,0 +1,216 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "net/http"
+ "net/url"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/utils/encrypt"
+ "github.com/openimsdk/tools/utils/stringutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+type UserConnContext struct {
+ RespWriter http.ResponseWriter
+ Req *http.Request
+ Path string
+ Method string
+ RemoteAddr string
+ ConnID string
+}
+
+func (c *UserConnContext) Deadline() (deadline time.Time, ok bool) {
+ return
+}
+
+func (c *UserConnContext) Done() <-chan struct{} {
+ return nil
+}
+
+func (c *UserConnContext) Err() error {
+ return nil
+}
+
+func (c *UserConnContext) Value(key any) any {
+ switch key {
+ case constant.OpUserID:
+ return c.GetUserID()
+ case constant.OperationID:
+ return c.GetOperationID()
+ case constant.ConnID:
+ return c.GetConnID()
+ case constant.OpUserPlatform:
+ return constant.PlatformIDToName(stringutil.StringToInt(c.GetPlatformID()))
+ case constant.RemoteAddr:
+ return c.RemoteAddr
+ default:
+ return ""
+ }
+}
+
+func newContext(respWriter http.ResponseWriter, req *http.Request) *UserConnContext {
+ remoteAddr := req.RemoteAddr
+ if forwarded := req.Header.Get("X-Forwarded-For"); forwarded != "" {
+ remoteAddr += "_" + forwarded
+ }
+ return &UserConnContext{
+ RespWriter: respWriter,
+ Req: req,
+ Path: req.URL.Path,
+ Method: req.Method,
+ RemoteAddr: remoteAddr,
+ ConnID: encrypt.Md5(req.RemoteAddr + "_" + strconv.Itoa(int(timeutil.GetCurrentTimestampByMill()))),
+ }
+}
+
+func newTempContext() *UserConnContext {
+ return &UserConnContext{
+ Req: &http.Request{URL: &url.URL{}},
+ }
+}
+
+func (c *UserConnContext) GetRemoteAddr() string {
+ return c.RemoteAddr
+}
+
+func (c *UserConnContext) Query(key string) (string, bool) {
+ var value string
+ if value = c.Req.URL.Query().Get(key); value == "" {
+ return value, false
+ }
+ return value, true
+}
+
+func (c *UserConnContext) GetHeader(key string) (string, bool) {
+ var value string
+ if value = c.Req.Header.Get(key); value == "" {
+ return value, false
+ }
+ return value, true
+}
+
+func (c *UserConnContext) SetHeader(key, value string) {
+ c.RespWriter.Header().Set(key, value)
+}
+
+func (c *UserConnContext) ErrReturn(error string, code int) {
+ http.Error(c.RespWriter, error, code)
+}
+
+func (c *UserConnContext) GetConnID() string {
+ return c.ConnID
+}
+
+func (c *UserConnContext) GetUserID() string {
+ return c.Req.URL.Query().Get(WsUserID)
+}
+
+func (c *UserConnContext) GetPlatformID() string {
+ return c.Req.URL.Query().Get(PlatformID)
+}
+
+func (c *UserConnContext) GetOperationID() string {
+ return c.Req.URL.Query().Get(OperationID)
+}
+
+func (c *UserConnContext) SetOperationID(operationID string) {
+ values := c.Req.URL.Query()
+ values.Set(OperationID, operationID)
+ c.Req.URL.RawQuery = values.Encode()
+}
+
+func (c *UserConnContext) GetToken() string {
+ return c.Req.URL.Query().Get(Token)
+}
+
+func (c *UserConnContext) GetSDKVersion() string {
+ return c.Req.URL.Query().Get(SDKVersion)
+}
+
+func (c *UserConnContext) GetCompression() bool {
+	if compression, ok := c.Query(Compression); ok && compression == GzipCompressionProtocol {
+		return true
+	}
+	if compression, ok := c.GetHeader(Compression); ok && compression == GzipCompressionProtocol {
+		return true
+	}
+	return false
+}
+
+func (c *UserConnContext) GetSDKType() string {
+ sdkType := c.Req.URL.Query().Get(SDKType)
+ if sdkType == "" {
+ sdkType = GoSDK
+ }
+ return sdkType
+}
+
+func (c *UserConnContext) ShouldSendResp() bool {
+	errResp, exists := c.Query(SendResponse)
+	if !exists {
+		return false
+	}
+	b, err := strconv.ParseBool(errResp)
+	if err != nil {
+		return false
+	}
+	return b
+}
+
+func (c *UserConnContext) SetToken(token string) {
+ c.Req.URL.RawQuery = Token + "=" + token
+}
+
+func (c *UserConnContext) GetBackground() bool {
+ b, err := strconv.ParseBool(c.Req.URL.Query().Get(BackgroundStatus))
+ if err != nil {
+ return false
+ }
+ return b
+}
+func (c *UserConnContext) ParseEssentialArgs() error {
+ _, exists := c.Query(Token)
+ if !exists {
+ return servererrs.ErrConnArgsErr.WrapMsg("token is empty")
+ }
+ _, exists = c.Query(WsUserID)
+ if !exists {
+ return servererrs.ErrConnArgsErr.WrapMsg("sendID is empty")
+ }
+ platformIDStr, exists := c.Query(PlatformID)
+ if !exists {
+ return servererrs.ErrConnArgsErr.WrapMsg("platformID is empty")
+ }
+ _, err := strconv.Atoi(platformIDStr)
+ if err != nil {
+ return servererrs.ErrConnArgsErr.WrapMsg("platformID is not int")
+ }
+ switch sdkType, _ := c.Query(SDKType); sdkType {
+ case "", GoSDK, JsSDK:
+ default:
+ return servererrs.ErrConnArgsErr.WrapMsg("sdkType is not go or js")
+ }
+ return nil
+}
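
`ParseEssentialArgs` rejects a handshake unless the query string carries `token`, `sendID`, and a numeric `platformID` (the key names come from the constants file). A client-side sketch of assembling such a URL; the host and port are placeholders:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// buildWsURL assembles the query string ParseEssentialArgs expects:
// token, sendID and a numeric platformID. Host and port are placeholders.
func buildWsURL(token, userID string, platformID int) string {
	v := url.Values{}
	v.Set("token", token)
	v.Set("sendID", userID)
	v.Set("platformID", strconv.Itoa(platformID))
	u := url.URL{Scheme: "ws", Host: "example.com:10001", Path: "/", RawQuery: v.Encode()}
	return u.String()
}

func main() {
	// url.Values.Encode sorts keys alphabetically.
	fmt.Println(buildWsURL("tok123", "u100", 5))
	// ws://example.com:10001/?platformID=5&sendID=u100&token=tok123
}
```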
diff --git a/internal/msggateway/encoder.go b/internal/msggateway/encoder.go
new file mode 100644
index 0000000..6a5936d
--- /dev/null
+++ b/internal/msggateway/encoder.go
@@ -0,0 +1,74 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "bytes"
+ "encoding/gob"
+ "encoding/json"
+
+ "github.com/openimsdk/tools/errs"
+)
+
+type Encoder interface {
+ Encode(data any) ([]byte, error)
+ Decode(encodeData []byte, decodeData any) error
+}
+
+type GobEncoder struct{}
+
+func NewGobEncoder() Encoder {
+ return GobEncoder{}
+}
+
+func (g GobEncoder) Encode(data any) ([]byte, error) {
+ var buff bytes.Buffer
+ enc := gob.NewEncoder(&buff)
+ if err := enc.Encode(data); err != nil {
+ return nil, errs.WrapMsg(err, "GobEncoder.Encode failed", "action", "encode")
+ }
+ return buff.Bytes(), nil
+}
+
+func (g GobEncoder) Decode(encodeData []byte, decodeData any) error {
+ buff := bytes.NewBuffer(encodeData)
+ dec := gob.NewDecoder(buff)
+ if err := dec.Decode(decodeData); err != nil {
+ return errs.WrapMsg(err, "GobEncoder.Decode failed", "action", "decode")
+ }
+ return nil
+}
+
+type JsonEncoder struct{}
+
+func NewJsonEncoder() Encoder {
+ return JsonEncoder{}
+}
+
+func (g JsonEncoder) Encode(data any) ([]byte, error) {
+ b, err := json.Marshal(data)
+ if err != nil {
+		return nil, errs.WrapMsg(err, "JsonEncoder.Encode failed", "action", "encode")
+ }
+ return b, nil
+}
+
+func (g JsonEncoder) Decode(encodeData []byte, decodeData any) error {
+ err := json.Unmarshal(encodeData, decodeData)
+ if err != nil {
+		return errs.WrapMsg(err, "JsonEncoder.Decode failed", "action", "decode")
+ }
+ return nil
+}
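
The gob path used by `GobEncoder` can be sketched as a standalone round trip; the `payload` struct is an illustrative stand-in for whatever the gateway encodes, and the note about the JS SDK is an inference from the `GoSDK`/`JsSDK` constants (gob is Go-specific, so non-Go clients presumably get the JSON encoder):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// payload is an illustrative stand-in for a gateway message body.
type payload struct {
	SeqBegin int64
	SeqEnd   int64
}

// roundTrip encodes then decodes a payload through gob, as GobEncoder's
// Encode/Decode pair does over a byte slice.
func roundTrip(in payload) (payload, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return payload{}, err
	}
	var out payload
	err := gob.NewDecoder(&buf).Decode(&out)
	return out, err
}

func main() {
	out, err := roundTrip(payload{SeqBegin: 1, SeqEnd: 42})
	if err != nil {
		panic(err)
	}
	fmt.Println(out.SeqBegin, out.SeqEnd) // 1 42
}
```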
diff --git a/internal/msggateway/http_error.go b/internal/msggateway/http_error.go
new file mode 100644
index 0000000..8d9d035
--- /dev/null
+++ b/internal/msggateway/http_error.go
@@ -0,0 +1,25 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/log"
+)
+
+func httpError(ctx *UserConnContext, err error) {
+ log.ZWarn(ctx, "ws connection error", err)
+ apiresp.HttpError(ctx.RespWriter, err)
+}
diff --git a/internal/msggateway/hub_server.go b/internal/msggateway/hub_server.go
new file mode 100644
index 0000000..dfeeb02
--- /dev/null
+++ b/internal/msggateway/hub_server.go
@@ -0,0 +1,264 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "context"
+ "sync/atomic"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msggateway"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/mq/memamq"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/grpc"
+)
+
+func (s *Server) InitServer(ctx context.Context, config *Config, disCov discovery.Conn, server grpc.ServiceRegistrar) error {
+ userConn, err := disCov.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ s.userClient = rpcli.NewUserClient(userConn)
+ if err := s.LongConnServer.SetDiscoveryRegistry(ctx, disCov, config); err != nil {
+ return err
+ }
+ msggateway.RegisterMsgGatewayServer(server, s)
+ if s.ready != nil {
+ return s.ready(s)
+ }
+ return nil
+}
+
+//func (s *Server) Start(ctx context.Context, index int, conf *Config) error {
+// return startrpc.Start(ctx, &conf.Discovery, &conf.MsgGateway.Prometheus, conf.MsgGateway.ListenIP,
+// conf.MsgGateway.RPC.RegisterIP,
+// conf.MsgGateway.RPC.AutoSetPorts, conf.MsgGateway.RPC.Ports, index,
+// conf.Discovery.RpcService.MessageGateway,
+// nil,
+// conf,
+// []string{
+// conf.Share.GetConfigFileName(),
+// conf.Discovery.GetConfigFileName(),
+// conf.MsgGateway.GetConfigFileName(),
+// conf.WebhooksConfig.GetConfigFileName(),
+// conf.RedisConfig.GetConfigFileName(),
+// },
+// []string{
+// conf.Discovery.RpcService.MessageGateway,
+// },
+// s.InitServer,
+// )
+//}
+
+type Server struct {
+ msggateway.UnimplementedMsgGatewayServer
+
+ LongConnServer LongConnServer
+ config *Config
+ pushTerminal map[int]struct{}
+ ready func(srv *Server) error
+ queue *memamq.MemoryQueue
+ userClient *rpcli.UserClient
+}
+
+func (s *Server) SetLongConnServer(LongConnServer LongConnServer) {
+ s.LongConnServer = LongConnServer
+}
+
+func NewServer(longConnServer LongConnServer, conf *Config, ready func(srv *Server) error) *Server {
+ s := &Server{
+ LongConnServer: longConnServer,
+ pushTerminal: make(map[int]struct{}),
+ config: conf,
+ ready: ready,
+ queue: memamq.NewMemoryQueue(512, 1024*16),
+ }
+ s.pushTerminal[constant.IOSPlatformID] = struct{}{}
+ s.pushTerminal[constant.AndroidPlatformID] = struct{}{}
+ s.pushTerminal[constant.WebPlatformID] = struct{}{}
+ return s
+}
+
+func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUsersOnlineStatusReq) (*msggateway.GetUsersOnlineStatusResp, error) {
+ if !authverify.IsAdmin(ctx) {
+ return nil, errs.ErrNoPermission.WrapMsg("only app manager")
+ }
+ var resp msggateway.GetUsersOnlineStatusResp
+ for _, userID := range req.UserIDs {
+ clients, ok := s.LongConnServer.GetUserAllCons(userID)
+ if !ok {
+ continue
+ }
+
+ uresp := new(msggateway.GetUsersOnlineStatusResp_SuccessResult)
+ uresp.UserID = userID
+ for _, client := range clients {
+ if client == nil {
+ continue
+ }
+
+ ps := new(msggateway.GetUsersOnlineStatusResp_SuccessDetail)
+ ps.PlatformID = int32(client.PlatformID)
+ ps.ConnID = client.ctx.GetConnID()
+ ps.Token = client.token
+ ps.IsBackground = client.IsBackground
+ uresp.Status = constant.Online
+ uresp.DetailPlatformStatus = append(uresp.DetailPlatformStatus, ps)
+ }
+ if uresp.Status == constant.Online {
+ resp.SuccessResult = append(resp.SuccessResult, uresp)
+ }
+ }
+ return &resp, nil
+}
+
+func (s *Server) pushToUser(ctx context.Context, userID string, msgData *sdkws.MsgData) *msggateway.SingleMsgToUserResults {
+ clients, ok := s.LongConnServer.GetUserAllCons(userID)
+ if !ok {
+ log.ZDebug(ctx, "push user not online", "userID", userID)
+ return &msggateway.SingleMsgToUserResults{
+ UserID: userID,
+ }
+ }
+ log.ZDebug(ctx, "push user online", "clients", clients, "userID", userID)
+ result := &msggateway.SingleMsgToUserResults{
+ UserID: userID,
+ Resp: make([]*msggateway.SingleMsgToUserPlatform, 0, len(clients)),
+ }
+ for _, client := range clients {
+ if client == nil {
+ continue
+ }
+ userPlatform := &msggateway.SingleMsgToUserPlatform{
+ RecvPlatFormID: int32(client.PlatformID),
+ }
+		if !client.IsBackground || client.PlatformID != constant.IOSPlatformID {
+ err := client.PushMessage(ctx, msgData)
+ if err != nil {
+ log.ZWarn(ctx, "online push msg failed", err, "userID", userID, "platformID", client.PlatformID)
+ userPlatform.ResultCode = int64(servererrs.ErrPushMsgErr.Code())
+ } else {
+ if _, ok := s.pushTerminal[client.PlatformID]; ok {
+ result.OnlinePush = true
+ }
+ }
+ } else {
+ userPlatform.ResultCode = int64(servererrs.ErrIOSBackgroundPushErr.Code())
+ }
+ result.Resp = append(result.Resp, userPlatform)
+ }
+ return result
+}
+
+func (s *Server) SuperGroupOnlineBatchPushOneMsg(ctx context.Context, req *msggateway.OnlineBatchPushOneMsgReq) (*msggateway.OnlineBatchPushOneMsgResp, error) {
+ if len(req.PushToUserIDs) == 0 {
+ return &msggateway.OnlineBatchPushOneMsgResp{}, nil
+ }
+ ch := make(chan *msggateway.SingleMsgToUserResults, len(req.PushToUserIDs))
+ var count atomic.Int64
+ count.Add(int64(len(req.PushToUserIDs)))
+ for i := range req.PushToUserIDs {
+ userID := req.PushToUserIDs[i]
+ err := s.queue.PushCtx(ctx, func() {
+ ch <- s.pushToUser(ctx, userID, req.MsgData)
+ if count.Add(-1) == 0 {
+ close(ch)
+ }
+ })
+		if err != nil {
+			log.ZError(ctx, "pushToUser MemoryQueue failed", err, "userID", userID)
+			// Send the placeholder result before decrementing: closing the
+			// channel first and then sending on it would panic on the last failure.
+			ch <- &msggateway.SingleMsgToUserResults{
+				UserID: userID,
+			}
+			if count.Add(-1) == 0 {
+				close(ch)
+			}
+		}
+ }
+ resp := &msggateway.OnlineBatchPushOneMsgResp{
+ SinglePushResult: make([]*msggateway.SingleMsgToUserResults, 0, len(req.PushToUserIDs)),
+ }
+ for {
+ select {
+ case <-ctx.Done():
+ log.ZError(ctx, "SuperGroupOnlineBatchPushOneMsg ctx done", context.Cause(ctx))
+ userIDSet := datautil.SliceSet(req.PushToUserIDs)
+ for _, results := range resp.SinglePushResult {
+ delete(userIDSet, results.UserID)
+ }
+ for userID := range userIDSet {
+ resp.SinglePushResult = append(resp.SinglePushResult, &msggateway.SingleMsgToUserResults{
+ UserID: userID,
+ })
+ }
+ return resp, nil
+ case res, ok := <-ch:
+ if !ok {
+ return resp, nil
+ }
+ resp.SinglePushResult = append(resp.SinglePushResult, res)
+ }
+ }
+}
+
+func (s *Server) KickUserOffline(ctx context.Context, req *msggateway.KickUserOfflineReq) (*msggateway.KickUserOfflineResp, error) {
+ for _, v := range req.KickUserIDList {
+ clients, _, ok := s.LongConnServer.GetUserPlatformCons(v, int(req.PlatformID))
+ if !ok {
+ log.ZDebug(ctx, "conn not exist", "userID", v, "platformID", req.PlatformID)
+ continue
+ }
+
+ for _, client := range clients {
+ log.ZDebug(ctx, "kick user offline", "userID", v, "platformID", req.PlatformID, "client", client)
+ if err := client.longConnServer.KickUserConn(client); err != nil {
+ log.ZWarn(ctx, "kick user offline failed", err, "userID", v, "platformID", req.PlatformID)
+ }
+ }
+ }
+
+ return &msggateway.KickUserOfflineResp{}, nil
+}
+
+func (s *Server) MultiTerminalLoginCheck(ctx context.Context, req *msggateway.MultiTerminalLoginCheckReq) (*msggateway.MultiTerminalLoginCheckResp, error) {
+ if oldClients, userOK, clientOK := s.LongConnServer.GetUserPlatformCons(req.UserID, int(req.PlatformID)); userOK {
+ tempUserCtx := newTempContext()
+ tempUserCtx.SetToken(req.Token)
+ tempUserCtx.SetOperationID(mcontext.GetOperationID(ctx))
+ client := &Client{}
+ client.ctx = tempUserCtx
+ client.token = req.Token
+ client.UserID = req.UserID
+ client.PlatformID = int(req.PlatformID)
+ i := &kickHandler{
+ clientOK: clientOK,
+ oldClients: oldClients,
+ newClient: client,
+ }
+ s.LongConnServer.SetKickHandlerInfo(i)
+ }
+ return &msggateway.MultiTerminalLoginCheckResp{}, nil
+}
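
The fan-in inside `SuperGroupOnlineBatchPushOneMsg` rests on two invariants: the channel is buffered to the number of users so sends never block, and whichever goroutine decrements the counter to zero closes the channel so the collector terminates. A standalone sketch of that pattern (names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"sync/atomic"
)

// fanIn launches one worker per user, collects one result per worker from a
// buffered channel, and lets the last finisher close the channel.
func fanIn(userIDs []string) []string {
	ch := make(chan string, len(userIDs)) // buffered: sends never block
	var count atomic.Int64
	count.Add(int64(len(userIDs)))

	for _, id := range userIDs {
		go func(id string) {
			ch <- "pushed:" + id
			if count.Add(-1) == 0 {
				close(ch) // last finisher closes; range below terminates
			}
		}(id)
	}

	var results []string
	for res := range ch {
		results = append(results, res)
	}
	sort.Strings(results) // deterministic order for display
	return results
}

func main() {
	fmt.Println(fanIn([]string{"u1", "u2", "u3"}))
}
```

Note the ordering constraint this sketch preserves: each worker sends before decrementing, so a close can never race ahead of a pending send.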
diff --git a/internal/msggateway/init.go b/internal/msggateway/init.go
new file mode 100644
index 0000000..f6af9c9
--- /dev/null
+++ b/internal/msggateway/init.go
@@ -0,0 +1,117 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "google.golang.org/grpc"
+
+ "github.com/openimsdk/tools/log"
+)
+
+type Config struct {
+ MsgGateway config.MsgGateway
+ Share config.Share
+ RedisConfig config.Redis
+ WebhooksConfig config.Webhooks
+ Discovery config.Discovery
+ Index config.Index
+}
+
+// Start runs the ws server.
+func Start(ctx context.Context, conf *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ log.CInfo(ctx, "MSG-GATEWAY server is initializing", "runtimeEnv", runtimeenv.RuntimeEnvironment(),
+ "rpcPorts", conf.MsgGateway.RPC.Ports,
+ "wsPort", conf.MsgGateway.LongConnSvr.Ports, "prometheusPorts", conf.MsgGateway.Prometheus.Ports)
+ wsPort, err := datautil.GetElemByIndex(conf.MsgGateway.LongConnSvr.Ports, int(conf.Index))
+ if err != nil {
+ return err
+ }
+
+ dbb := dbbuild.NewBuilder(nil, &conf.RedisConfig)
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+
+ longServer := NewWsServer(
+ conf,
+ WithPort(wsPort),
+ WithMaxConnNum(int64(conf.MsgGateway.LongConnSvr.WebsocketMaxConnNum)),
+ WithHandshakeTimeout(time.Duration(conf.MsgGateway.LongConnSvr.WebsocketTimeout)*time.Second),
+ WithMessageMaxMsgLength(conf.MsgGateway.LongConnSvr.WebsocketMaxMsgLen),
+ )
+
+ hubServer := NewServer(longServer, conf, func(srv *Server) error {
+ var err error
+ longServer.online, err = rpccache.NewOnlineCache(srv.userClient, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
+ return err
+ })
+
+ if err := hubServer.InitServer(ctx, conf, client, server); err != nil {
+ return err
+ }
+
+ go longServer.ChangeOnlineStatus(4)
+
+ return hubServer.LongConnServer.Run(ctx)
+}
+
+//
+//// Start run ws server.
+//func Start(ctx context.Context, index int, conf *Config) error {
+// log.CInfo(ctx, "MSG-GATEWAY server is initializing", "runtimeEnv", runtimeenv.RuntimeEnvironment(),
+// "rpcPorts", conf.MsgGateway.RPC.Ports,
+// "wsPort", conf.MsgGateway.LongConnSvr.Ports, "prometheusPorts", conf.MsgGateway.Prometheus.Ports)
+// wsPort, err := datautil.GetElemByIndex(conf.MsgGateway.LongConnSvr.Ports, index)
+// if err != nil {
+// return err
+// }
+//
+// rdb, err := redisutil.NewRedisClient(ctx, conf.RedisConfig.Build())
+// if err != nil {
+// return err
+// }
+// longServer := NewWsServer(
+// conf,
+// WithPort(wsPort),
+// WithMaxConnNum(int64(conf.MsgGateway.LongConnSvr.WebsocketMaxConnNum)),
+// WithHandshakeTimeout(time.Duration(conf.MsgGateway.LongConnSvr.WebsocketTimeout)*time.Second),
+// WithMessageMaxMsgLength(conf.MsgGateway.LongConnSvr.WebsocketMaxMsgLen),
+// )
+//
+// hubServer := NewServer(longServer, conf, func(srv *Server) error {
+// var err error
+// longServer.online, err = rpccache.NewOnlineCache(srv.userClient, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
+// return err
+// })
+//
+// go longServer.ChangeOnlineStatus(4)
+//
+// netDone := make(chan error)
+// go func() {
+// err = hubServer.Start(ctx, index, conf)
+// netDone <- err
+// }()
+// return hubServer.LongConnServer.Run(netDone)
+//}
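
`NewWsServer` is configured through functional options (`WithPort`, `WithMaxConnNum`, `WithHandshakeTimeout`, `WithMessageMaxMsgLength`). A minimal sketch of that pattern, with illustrative names and default values that are assumptions rather than the server's actual defaults:

```go
package main

import (
	"fmt"
	"time"
)

// serverConfig and its defaults are illustrative, not the real ws config.
type serverConfig struct {
	port             int
	maxConnNum       int64
	handshakeTimeout time.Duration
}

// Option mutates the config; the constructor applies options over defaults.
type Option func(*serverConfig)

func WithPort(p int) Option       { return func(c *serverConfig) { c.port = p } }
func WithMaxConnNum(n int64) Option { return func(c *serverConfig) { c.maxConnNum = n } }
func WithHandshakeTimeout(d time.Duration) Option {
	return func(c *serverConfig) { c.handshakeTimeout = d }
}

func newConfig(opts ...Option) serverConfig {
	cfg := serverConfig{port: 10001, maxConnNum: 100000, handshakeTimeout: 10 * time.Second}
	for _, o := range opts {
		o(&cfg) // later options override earlier ones and the defaults
	}
	return cfg
}

func main() {
	cfg := newConfig(WithPort(10002), WithHandshakeTimeout(5*time.Second))
	fmt.Println(cfg.port, cfg.maxConnNum, cfg.handshakeTimeout) // 10002 100000 5s
}
```

The payoff is that callers like `Start` only name the settings they care about, and new knobs can be added without breaking the constructor signature.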
diff --git a/internal/msggateway/long_conn.go b/internal/msggateway/long_conn.go
new file mode 100644
index 0000000..c1b3e27
--- /dev/null
+++ b/internal/msggateway/long_conn.go
@@ -0,0 +1,179 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "encoding/json"
+ "net/http"
+ "time"
+
+ "github.com/openimsdk/tools/apiresp"
+
+ "github.com/gorilla/websocket"
+ "github.com/openimsdk/tools/errs"
+)
+
+type LongConn interface {
+	// Close closes this connection.
+	Close() error
+	// WriteMessage writes a message to the connection; messageType is the data type and can be binary (2) or text (1).
+	WriteMessage(messageType int, message []byte) error
+	// ReadMessage reads a message from the connection.
+	ReadMessage() (int, []byte, error)
+	// SetReadDeadline sets the read deadline on the underlying network connection;
+	// reads that exceed the deadline return an error.
+	SetReadDeadline(timeout time.Duration) error
+	// SetWriteDeadline sets the write deadline for sending a message; writes that exceed the deadline return an error.
+	SetWriteDeadline(timeout time.Duration) error
+	// Dial tries to dial a connection; urlStr must carry the auth args, and requestHeader can control data compression.
+	Dial(urlStr string, requestHeader http.Header) (*http.Response, error)
+	// IsNil reports whether the underlying connection is nil.
+	IsNil() bool
+	// SetConnNil sets the underlying connection to nil.
+	SetConnNil()
+	// SetReadLimit sets the maximum size, in bytes, for a message read from the peer.
+	SetReadLimit(limit int64)
+	SetPongHandler(handler PingPongHandler)
+	SetPingHandler(handler PingPongHandler)
+	// GenerateLongConn upgrades the HTTP request to a WebSocket connection.
+	GenerateLongConn(w http.ResponseWriter, r *http.Request) error
+}
+
+type GWebSocket struct {
+ protocolType int
+ conn *websocket.Conn
+ handshakeTimeout time.Duration
+ writeBufferSize int
+}
+
+func newGWebSocket(protocolType int, handshakeTimeout time.Duration, wbs int) *GWebSocket {
+ return &GWebSocket{protocolType: protocolType, handshakeTimeout: handshakeTimeout, writeBufferSize: wbs}
+}
+
+func (d *GWebSocket) Close() error {
+ return d.conn.Close()
+}
+
+func (d *GWebSocket) GenerateLongConn(w http.ResponseWriter, r *http.Request) error {
+ upgrader := &websocket.Upgrader{
+ HandshakeTimeout: d.handshakeTimeout,
+ CheckOrigin: func(r *http.Request) bool { return true },
+ }
+ if d.writeBufferSize > 0 { // default is 4kb.
+ upgrader.WriteBufferSize = d.writeBufferSize
+ }
+
+ conn, err := upgrader.Upgrade(w, r, nil)
+ if err != nil {
+ // The upgrader.Upgrade method usually returns enough error messages to diagnose problems that may occur during the upgrade
+ return errs.WrapMsg(err, "GenerateLongConn: WebSocket upgrade failed")
+ }
+ d.conn = conn
+ return nil
+}
+
+func (d *GWebSocket) WriteMessage(messageType int, message []byte) error {
+ // d.setSendConn(d.conn)
+ return d.conn.WriteMessage(messageType, message)
+}
+
+// func (d *GWebSocket) setSendConn(sendConn *websocket.Conn) {
+// d.sendConn = sendConn
+//}
+
+func (d *GWebSocket) ReadMessage() (int, []byte, error) {
+ return d.conn.ReadMessage()
+}
+
+func (d *GWebSocket) SetReadDeadline(timeout time.Duration) error {
+ return d.conn.SetReadDeadline(time.Now().Add(timeout))
+}
+
+func (d *GWebSocket) SetWriteDeadline(timeout time.Duration) error {
+ if timeout <= 0 {
+ return errs.New("timeout must be greater than 0")
+ }
+
+ if err := d.conn.SetWriteDeadline(time.Now().Add(timeout)); err != nil {
+ return errs.WrapMsg(err, "GWebSocket.SetWriteDeadline failed")
+ }
+ return nil
+}
+
+func (d *GWebSocket) Dial(urlStr string, requestHeader http.Header) (*http.Response, error) {
+ conn, httpResp, err := websocket.DefaultDialer.Dial(urlStr, requestHeader)
+ if err != nil {
+ return httpResp, errs.WrapMsg(err, "GWebSocket.Dial failed", "url", urlStr)
+ }
+ d.conn = conn
+ return httpResp, nil
+}
+
+func (d *GWebSocket) IsNil() bool {
+ return d.conn == nil
+ //
+ // if d.conn != nil {
+ // return false
+ // }
+ // return true
+}
+
+func (d *GWebSocket) SetConnNil() {
+ d.conn = nil
+}
+
+func (d *GWebSocket) SetReadLimit(limit int64) {
+ d.conn.SetReadLimit(limit)
+}
+
+func (d *GWebSocket) SetPongHandler(handler PingPongHandler) {
+ d.conn.SetPongHandler(handler)
+}
+
+func (d *GWebSocket) SetPingHandler(handler PingPongHandler) {
+ d.conn.SetPingHandler(handler)
+}
+
+func (d *GWebSocket) RespondWithError(err error, w http.ResponseWriter, r *http.Request) error {
+ if err := d.GenerateLongConn(w, r); err != nil {
+ return err
+ }
+ data, err := json.Marshal(apiresp.ParseError(err))
+ if err != nil {
+ _ = d.Close()
+ return errs.WrapMsg(err, "json marshal failed")
+ }
+
+ if err := d.WriteMessage(MessageText, data); err != nil {
+ _ = d.Close()
+ return errs.WrapMsg(err, "WriteMessage failed")
+ }
+ _ = d.Close()
+ return nil
+}
+
+func (d *GWebSocket) RespondWithSuccess() error {
+ data, err := json.Marshal(apiresp.ParseError(nil))
+ if err != nil {
+ _ = d.Close()
+ return errs.WrapMsg(err, "json marshal failed")
+ }
+
+ if err := d.WriteMessage(MessageText, data); err != nil {
+ _ = d.Close()
+ return errs.WrapMsg(err, "WriteMessage failed")
+ }
+ return nil
+}
diff --git a/internal/msggateway/message_handler.go b/internal/msggateway/message_handler.go
new file mode 100644
index 0000000..1a4ef76
--- /dev/null
+++ b/internal/msggateway/message_handler.go
@@ -0,0 +1,282 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import (
+ "context"
+ "encoding/json"
+ "sync"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "github.com/go-playground/validator/v10"
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/push"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/jsonutil"
+)
+
+const (
+ TextPing = "ping"
+ TextPong = "pong"
+)
+
+type TextMessage struct {
+ Type string `json:"type"`
+ Body json.RawMessage `json:"body"`
+}
+
+type Req struct {
+ ReqIdentifier int32 `json:"reqIdentifier" validate:"required"`
+ Token string `json:"token"`
+ SendID string `json:"sendID" validate:"required"`
+ OperationID string `json:"operationID" validate:"required"`
+ MsgIncr string `json:"msgIncr" validate:"required"`
+ Data []byte `json:"data"`
+}
+
+func (r *Req) String() string {
+ var tReq Req
+ tReq.ReqIdentifier = r.ReqIdentifier
+ tReq.Token = r.Token
+ tReq.SendID = r.SendID
+ tReq.OperationID = r.OperationID
+ tReq.MsgIncr = r.MsgIncr
+ return jsonutil.StructToJsonString(tReq)
+}
+
+var reqPool = sync.Pool{
+ New: func() any {
+ return new(Req)
+ },
+}
+
+func getReq() *Req {
+ req := reqPool.Get().(*Req)
+ req.Data = nil
+ req.MsgIncr = ""
+ req.OperationID = ""
+ req.ReqIdentifier = 0
+ req.SendID = ""
+ req.Token = ""
+ return req
+}
+
+func freeReq(req *Req) {
+ reqPool.Put(req)
+}
+
+type Resp struct {
+ ReqIdentifier int32 `json:"reqIdentifier"`
+ MsgIncr string `json:"msgIncr"`
+ OperationID string `json:"operationID"`
+ ErrCode int `json:"errCode"`
+ ErrMsg string `json:"errMsg"`
+ Data []byte `json:"data"`
+}
+
+func (r *Resp) String() string {
+ var tResp Resp
+ tResp.ReqIdentifier = r.ReqIdentifier
+ tResp.MsgIncr = r.MsgIncr
+ tResp.OperationID = r.OperationID
+ tResp.ErrCode = r.ErrCode
+ tResp.ErrMsg = r.ErrMsg
+ return jsonutil.StructToJsonString(tResp)
+}
+
+type MessageHandler interface {
+ GetSeq(ctx context.Context, data *Req) ([]byte, error)
+ SendMessage(ctx context.Context, data *Req) ([]byte, error)
+ SendSignalMessage(ctx context.Context, data *Req) ([]byte, error)
+ PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error)
+ GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error)
+ GetSeqMessage(ctx context.Context, data *Req) ([]byte, error)
+ UserLogout(ctx context.Context, data *Req) ([]byte, error)
+ SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error)
+ GetLastMessage(ctx context.Context, data *Req) ([]byte, error)
+}
+
+var _ MessageHandler = (*GrpcHandler)(nil)
+
+type GrpcHandler struct {
+ validate *validator.Validate
+ msgClient *rpcli.MsgClient
+ pushClient *rpcli.PushMsgServiceClient
+}
+
+func NewGrpcHandler(validate *validator.Validate, msgClient *rpcli.MsgClient, pushClient *rpcli.PushMsgServiceClient) *GrpcHandler {
+ return &GrpcHandler{
+ validate: validate,
+ msgClient: msgClient,
+ pushClient: pushClient,
+ }
+}
+
+func (g *GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
+ req := sdkws.GetMaxSeqReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, errs.WrapMsg(err, "GetSeq: error unmarshaling request", "action", "unmarshal", "dataType", "GetMaxSeqReq")
+ }
+ if err := g.validate.Struct(&req); err != nil {
+ return nil, errs.WrapMsg(err, "GetSeq: validation failed", "action", "validate", "dataType", "GetMaxSeqReq")
+ }
+ resp, err := g.msgClient.MsgClient.GetMaxSeq(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "GetSeq: error marshaling response", "action", "marshal", "dataType", "GetMaxSeqResp")
+ }
+ return c, nil
+}
+
+// SendMessage handles the sending of messages through gRPC. It unmarshals the request data,
+// validates the message, and then sends it using the message RPC client.
+func (g *GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error) {
+ var msgData sdkws.MsgData
+ if err := proto.Unmarshal(data.Data, &msgData); err != nil {
+ return nil, errs.WrapMsg(err, "SendMessage: error unmarshaling message data", "action", "unmarshal", "dataType", "MsgData")
+ }
+
+ if err := g.validate.Struct(&msgData); err != nil {
+ return nil, errs.WrapMsg(err, "SendMessage: message data validation failed", "action", "validate", "dataType", "MsgData")
+ }
+
+ req := msg.SendMsgReq{MsgData: &msgData}
+ resp, err := g.msgClient.MsgClient.SendMsg(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "SendMessage: error marshaling response", "action", "marshal", "dataType", "SendMsgResp")
+ }
+
+ return c, nil
+}
+
+func (g *GrpcHandler) SendSignalMessage(ctx context.Context, data *Req) ([]byte, error) {
+ resp, err := g.msgClient.MsgClient.SendMsg(ctx, nil)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "error marshaling response", "action", "marshal", "dataType", "SendMsgResp")
+ }
+ return c, nil
+}
+
+func (g *GrpcHandler) PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error) {
+ req := sdkws.PullMessageBySeqsReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "PullMessageBySeqsReq")
+ }
+	if err := g.validate.Struct(&req); err != nil {
+ return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "PullMessageBySeqsReq")
+ }
+ resp, err := g.msgClient.MsgClient.PullMessageBySeqs(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "error marshaling response", "action", "marshal", "dataType", "PullMessageBySeqsResp")
+ }
+ return c, nil
+}
+
+func (g *GrpcHandler) GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error) {
+ req := msg.GetConversationsHasReadAndMaxSeqReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "GetConversationsHasReadAndMaxSeq")
+ }
+	if err := g.validate.Struct(&req); err != nil {
+ return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetConversationsHasReadAndMaxSeq")
+ }
+ resp, err := g.msgClient.MsgClient.GetConversationsHasReadAndMaxSeq(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "error marshaling response", "action", "marshal", "dataType", "GetConversationsHasReadAndMaxSeq")
+ }
+ return c, nil
+}
+
+func (g *GrpcHandler) GetSeqMessage(ctx context.Context, data *Req) ([]byte, error) {
+ req := msg.GetSeqMessageReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "GetSeqMessage")
+ }
+	if err := g.validate.Struct(&req); err != nil {
+ return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetSeqMessage")
+ }
+ resp, err := g.msgClient.MsgClient.GetSeqMessage(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "error marshaling response", "action", "marshal", "dataType", "GetSeqMessage")
+ }
+ return c, nil
+}
+
+func (g *GrpcHandler) UserLogout(ctx context.Context, data *Req) ([]byte, error) {
+ req := push.DelUserPushTokenReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "DelUserPushTokenReq")
+ }
+ resp, err := g.pushClient.PushMsgServiceClient.DelUserPushToken(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ c, err := proto.Marshal(resp)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "error marshaling response", "action", "marshal", "dataType", "DelUserPushTokenResp")
+ }
+ return c, nil
+}
+
+func (g *GrpcHandler) SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error) {
+ req := sdkws.SetAppBackgroundStatusReq{}
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, false, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "SetAppBackgroundStatusReq")
+ }
+	if err := g.validate.Struct(&req); err != nil {
+ return nil, false, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "SetAppBackgroundStatusReq")
+ }
+ return nil, req.IsBackground, nil
+}
+
+func (g *GrpcHandler) GetLastMessage(ctx context.Context, data *Req) ([]byte, error) {
+ var req msg.GetLastMessageReq
+ if err := proto.Unmarshal(data.Data, &req); err != nil {
+ return nil, err
+ }
+ resp, err := g.msgClient.GetLastMessage(ctx, &req)
+ if err != nil {
+ return nil, err
+ }
+ return proto.Marshal(resp)
+}
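The `getReq`/`freeReq` pair above shows a common `sync.Pool` idiom: reset every field on the way out of `Get` rather than trusting callers to clear objects before `Put`. A self-contained sketch of the same pattern (the `request` type is a stand-in, not the real `Req`):

```go
package main

import (
	"fmt"
	"sync"
)

// request mirrors the pooled Req above in miniature.
type request struct {
	ID   string
	Data []byte
}

var pool = sync.Pool{New: func() any { return new(request) }}

// get resets fields when handing an object out, so a recycled object can
// never leak state from a previous use even if a caller forgot to clear it.
func get() *request {
	r := pool.Get().(*request)
	r.ID = ""
	r.Data = nil
	return r
}

func put(r *request) { pool.Put(r) }

func main() {
	r := get()
	r.ID = "op-1"
	put(r)
	r2 := get() // may or may not reuse r; either way the fields are clean
	fmt.Println(r2.ID == "", r2.Data == nil)
}
```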
diff --git a/internal/msggateway/online.go b/internal/msggateway/online.go
new file mode 100644
index 0000000..2d7d7cb
--- /dev/null
+++ b/internal/msggateway/online.go
@@ -0,0 +1,131 @@
+package msggateway
+
+import (
+ "context"
+ "crypto/md5"
+ "encoding/binary"
+ "fmt"
+ "math/rand"
+ "os"
+ "strconv"
+ "sync/atomic"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (ws *WsServer) ChangeOnlineStatus(concurrent int) {
+ if concurrent < 1 {
+ concurrent = 1
+ }
+ const renewalTime = cachekey.OnlineExpire / 3
+ //const renewalTime = time.Second * 10
+ renewalTicker := time.NewTicker(renewalTime)
+
+ requestChs := make([]chan *pbuser.SetUserOnlineStatusReq, concurrent)
+ changeStatus := make([][]UserState, concurrent)
+
+ for i := 0; i < concurrent; i++ {
+ requestChs[i] = make(chan *pbuser.SetUserOnlineStatusReq, 64)
+ changeStatus[i] = make([]UserState, 0, 100)
+ }
+
+ mergeTicker := time.NewTicker(time.Second)
+
+ local2pb := func(u UserState) *pbuser.UserOnlineStatus {
+ return &pbuser.UserOnlineStatus{
+ UserID: u.UserID,
+ Online: u.Online,
+ Offline: u.Offline,
+ }
+ }
+
+ rNum := rand.Uint64()
+ pushUserState := func(us ...UserState) {
+ for _, u := range us {
+ sum := md5.Sum([]byte(u.UserID))
+ i := (binary.BigEndian.Uint64(sum[:]) + rNum) % uint64(concurrent)
+ changeStatus[i] = append(changeStatus[i], u)
+ status := changeStatus[i]
+ if len(status) == cap(status) {
+ req := &pbuser.SetUserOnlineStatusReq{
+ Status: datautil.Slice(status, local2pb),
+ }
+ changeStatus[i] = status[:0]
+ select {
+ case requestChs[i] <- req:
+ default:
+ log.ZError(context.Background(), "user online processing is too slow", nil)
+ }
+ }
+ }
+ }
+
+ pushAllUserState := func() {
+ for i, status := range changeStatus {
+ if len(status) == 0 {
+ continue
+ }
+ req := &pbuser.SetUserOnlineStatusReq{
+ Status: datautil.Slice(status, local2pb),
+ }
+ changeStatus[i] = status[:0]
+ select {
+ case requestChs[i] <- req:
+ default:
+ log.ZError(context.Background(), "user online processing is too slow", nil)
+ }
+ }
+ }
+
+ var count atomic.Int64
+ operationIDPrefix := fmt.Sprintf("p_%d_", os.Getpid())
+ doRequest := func(req *pbuser.SetUserOnlineStatusReq) {
+ opIdCtx := mcontext.SetOperationID(context.Background(), operationIDPrefix+strconv.FormatInt(count.Add(1), 10))
+ ctx, cancel := context.WithTimeout(opIdCtx, time.Second*5)
+ defer cancel()
+ if err := ws.userClient.SetUserOnlineStatus(ctx, req); err != nil {
+ log.ZError(ctx, "update user online status", err)
+ }
+ for _, ss := range req.Status {
+ for _, online := range ss.Online {
+ client, _, _ := ws.clients.Get(ss.UserID, int(online))
+ back := false
+ if len(client) > 0 {
+ back = client[0].IsBackground
+ }
+ ws.webhookAfterUserOnline(ctx, &ws.msgGatewayConfig.WebhooksConfig.AfterUserOnline, ss.UserID, int(online), back, ss.ConnID)
+ }
+ for _, offline := range ss.Offline {
+ ws.webhookAfterUserOffline(ctx, &ws.msgGatewayConfig.WebhooksConfig.AfterUserOffline, ss.UserID, int(offline), ss.ConnID)
+ }
+ }
+ }
+
+ for i := 0; i < concurrent; i++ {
+ go func(ch <-chan *pbuser.SetUserOnlineStatusReq) {
+ for req := range ch {
+ doRequest(req)
+ }
+ }(requestChs[i])
+ }
+
+ for {
+ select {
+ case <-mergeTicker.C:
+ pushAllUserState()
+ case now := <-renewalTicker.C:
+ deadline := now.Add(-cachekey.OnlineExpire / 3)
+ users := ws.clients.GetAllUserStatus(deadline, now)
+ log.ZDebug(context.Background(), "renewal ticker", "deadline", deadline, "nowtime", now, "num", len(users), "users", users)
+ pushUserState(users...)
+ case state := <-ws.clients.UserState():
+ log.ZDebug(context.Background(), "OnlineCache user online change", "userID", state.UserID, "online", state.Online, "offline", state.Offline)
+ pushUserState(state)
+ }
+ }
+}
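`ChangeOnlineStatus` above fans user-state updates out to `concurrent` worker queues by md5-hashing the userID, adding a per-process random offset, and taking the remainder, so each user's updates always land on the same queue and stay ordered. A deterministic sketch of just that sharding step (`shardFor` is a name introduced here for illustration):

```go
package main

import (
	"crypto/md5"
	"encoding/binary"
	"fmt"
)

// shardFor maps a userID to one of `concurrent` worker queues the way
// pushUserState does above: hash the ID, add an offset (random per process
// in the original, fixed here for determinism), and take the remainder.
func shardFor(userID string, concurrent int, offset uint64) uint64 {
	sum := md5.Sum([]byte(userID))
	return (binary.BigEndian.Uint64(sum[:]) + offset) % uint64(concurrent)
}

func main() {
	for _, id := range []string{"alice", "bob", "carol"} {
		fmt.Println(id, shardFor(id, 4, 0))
	}
}
```

The per-process random offset only reshuffles which queue a user maps to across restarts; within one process the mapping is stable, which is the property the ordering guarantee relies on.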
diff --git a/internal/msggateway/options.go b/internal/msggateway/options.go
new file mode 100644
index 0000000..b65f8e3
--- /dev/null
+++ b/internal/msggateway/options.go
@@ -0,0 +1,63 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msggateway
+
+import "time"
+
+type (
+ Option func(opt *configs)
+ configs struct {
+ // Long connection listening port
+ port int
+ // Maximum number of connections allowed for long connection
+ maxConnNum int64
+ // Connection handshake timeout
+ handshakeTimeout time.Duration
+ // Maximum length allowed for messages
+ messageMaxMsgLength int
+ // Websocket write buffer, default: 4096, 4kb.
+ writeBufferSize int
+ }
+)
+
+func WithPort(port int) Option {
+ return func(opt *configs) {
+ opt.port = port
+ }
+}
+
+func WithMaxConnNum(num int64) Option {
+ return func(opt *configs) {
+ opt.maxConnNum = num
+ }
+}
+
+func WithHandshakeTimeout(t time.Duration) Option {
+ return func(opt *configs) {
+ opt.handshakeTimeout = t
+ }
+}
+
+func WithMessageMaxMsgLength(length int) Option {
+ return func(opt *configs) {
+ opt.messageMaxMsgLength = length
+ }
+}
+
+func WithWriteBufferSize(size int) Option {
+ return func(opt *configs) {
+ opt.writeBufferSize = size
+ }
+}
diff --git a/internal/msggateway/subscription.go b/internal/msggateway/subscription.go
new file mode 100644
index 0000000..dd16dd5
--- /dev/null
+++ b/internal/msggateway/subscription.go
@@ -0,0 +1,167 @@
+package msggateway
+
+import (
+ "context"
+ "sync"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/protobuf/proto"
+)
+
+func (ws *WsServer) subscriberUserOnlineStatusChanges(ctx context.Context, userID string, platformIDs []int32) {
+ if ws.clients.RecvSubChange(userID, platformIDs) {
+ log.ZDebug(ctx, "gateway receive subscription message and go back online", "userID", userID, "platformIDs", platformIDs)
+ } else {
+ log.ZDebug(ctx, "gateway ignore user online status changes", "userID", userID, "platformIDs", platformIDs)
+ }
+ ws.pushUserIDOnlineStatus(ctx, userID, platformIDs)
+}
+
+func (ws *WsServer) SubUserOnlineStatus(ctx context.Context, client *Client, data *Req) ([]byte, error) {
+ var sub sdkws.SubUserOnlineStatus
+ if err := proto.Unmarshal(data.Data, &sub); err != nil {
+ return nil, err
+ }
+ ws.subscription.Sub(client, sub.SubscribeUserID, sub.UnsubscribeUserID)
+ var resp sdkws.SubUserOnlineStatusTips
+ if len(sub.SubscribeUserID) > 0 {
+ resp.Subscribers = make([]*sdkws.SubUserOnlineStatusElem, 0, len(sub.SubscribeUserID))
+ for _, userID := range sub.SubscribeUserID {
+ platformIDs, err := ws.online.GetUserOnlinePlatform(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ resp.Subscribers = append(resp.Subscribers, &sdkws.SubUserOnlineStatusElem{
+ UserID: userID,
+ OnlinePlatformIDs: platformIDs,
+ })
+ }
+ }
+ return proto.Marshal(&resp)
+}
+
+func newSubscription() *Subscription {
+ return &Subscription{
+ userIDs: make(map[string]*subClient),
+ }
+}
+
+type subClient struct {
+ clients map[string]*Client
+}
+
+type Subscription struct {
+ lock sync.RWMutex
+ userIDs map[string]*subClient // subscribe to the user's client connection
+}
+
+func (s *Subscription) DelClient(client *Client) {
+ client.subLock.Lock()
+ userIDs := datautil.Keys(client.subUserIDs)
+ for _, userID := range userIDs {
+ delete(client.subUserIDs, userID)
+ }
+ client.subLock.Unlock()
+ if len(userIDs) == 0 {
+ return
+ }
+ addr := client.ctx.GetRemoteAddr()
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ for _, userID := range userIDs {
+ sub, ok := s.userIDs[userID]
+ if !ok {
+ continue
+ }
+ delete(sub.clients, addr)
+ if len(sub.clients) == 0 {
+ delete(s.userIDs, userID)
+ }
+ }
+}
+
+func (s *Subscription) GetClient(userID string) []*Client {
+ s.lock.RLock()
+ defer s.lock.RUnlock()
+ cs, ok := s.userIDs[userID]
+ if !ok {
+ return nil
+ }
+ clients := make([]*Client, 0, len(cs.clients))
+ for _, client := range cs.clients {
+ clients = append(clients, client)
+ }
+ return clients
+}
+
+func (s *Subscription) Sub(client *Client, addUserIDs, delUserIDs []string) {
+ if len(addUserIDs)+len(delUserIDs) == 0 {
+ return
+ }
+ var (
+ del = make(map[string]struct{})
+ add = make(map[string]struct{})
+ )
+ client.subLock.Lock()
+ for _, userID := range delUserIDs {
+ if _, ok := client.subUserIDs[userID]; !ok {
+ continue
+ }
+ del[userID] = struct{}{}
+ delete(client.subUserIDs, userID)
+ }
+ for _, userID := range addUserIDs {
+ delete(del, userID)
+ if _, ok := client.subUserIDs[userID]; ok {
+ continue
+ }
+ client.subUserIDs[userID] = struct{}{}
+ add[userID] = struct{}{}
+ }
+ client.subLock.Unlock()
+ if len(del)+len(add) == 0 {
+ return
+ }
+ addr := client.ctx.GetRemoteAddr()
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ for userID := range del {
+ sub, ok := s.userIDs[userID]
+ if !ok {
+ continue
+ }
+ delete(sub.clients, addr)
+ if len(sub.clients) == 0 {
+ delete(s.userIDs, userID)
+ }
+ }
+ for userID := range add {
+ sub, ok := s.userIDs[userID]
+ if !ok {
+ sub = &subClient{clients: make(map[string]*Client)}
+ s.userIDs[userID] = sub
+ }
+ sub.clients[addr] = client
+ }
+}
+
+func (ws *WsServer) pushUserIDOnlineStatus(ctx context.Context, userID string, platformIDs []int32) {
+ clients := ws.subscription.GetClient(userID)
+ if len(clients) == 0 {
+ return
+ }
+ onlineStatus, err := proto.Marshal(&sdkws.SubUserOnlineStatusTips{
+ Subscribers: []*sdkws.SubUserOnlineStatusElem{{UserID: userID, OnlinePlatformIDs: platformIDs}},
+ })
+ if err != nil {
+		log.ZError(ctx, "pushUserIDOnlineStatus proto.Marshal failed", err)
+ return
+ }
+ for _, client := range clients {
+ if err := client.PushUserOnlineStatus(onlineStatus); err != nil {
+ log.ZError(ctx, "UserSubscribeOnlineStatusNotification push failed", err, "userID", client.UserID, "platformID", client.PlatformID, "changeUserID", userID, "changePlatformID", platformIDs)
+ }
+ }
+}
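The trickiest part of `Subscription.Sub` above is the set arithmetic: the delete set is built first, then `delete(del, userID)` in the add loop lets an add in the same request override a delete. A self-contained sketch of just that diffing logic (`diffSub` is a name introduced here, operating on a plain set instead of the client's locked map):

```go
package main

import "fmt"

// diffSub reproduces the set arithmetic in Subscription.Sub: build the
// delete set first, then let any ID also present in addIDs override it,
// so "unsubscribe then resubscribe in one request" nets out to subscribed.
func diffSub(current map[string]struct{}, addIDs, delIDs []string) (add, del []string) {
	delSet := make(map[string]struct{})
	for _, id := range delIDs {
		if _, ok := current[id]; !ok {
			continue // never subscribed: nothing to delete
		}
		delSet[id] = struct{}{}
		delete(current, id)
	}
	for _, id := range addIDs {
		delete(delSet, id) // an add wins over a delete in the same request
		if _, ok := current[id]; ok {
			continue // already subscribed: no change
		}
		current[id] = struct{}{}
		add = append(add, id)
	}
	for id := range delSet {
		del = append(del, id)
	}
	return add, del
}

func main() {
	cur := map[string]struct{}{"u1": {}}
	add, del := diffSub(cur, []string{"u1", "u2"}, []string{"u1"})
	fmt.Println(add, del)
}
```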
diff --git a/internal/msggateway/user_map.go b/internal/msggateway/user_map.go
new file mode 100644
index 0000000..5baa4f9
--- /dev/null
+++ b/internal/msggateway/user_map.go
@@ -0,0 +1,185 @@
+package msggateway
+
+import (
+	"sync"
+	"time"
+
+	"github.com/openimsdk/tools/utils/datautil"
+)
+
+type UserMap interface {
+ GetAll(userID string) ([]*Client, bool)
+ Get(userID string, platformID int) ([]*Client, bool, bool)
+ Set(userID string, v *Client)
+ DeleteClients(userID string, clients []*Client) (isDeleteUser bool)
+ UserState() <-chan UserState
+ GetAllUserStatus(deadline time.Time, nowtime time.Time) []UserState
+ RecvSubChange(userID string, platformIDs []int32) bool
+}
+
+type UserState struct {
+ UserID string
+ Online []int32
+ Offline []int32
+}
+
+type UserPlatform struct {
+ Time time.Time
+ Clients []*Client
+}
+
+func (u *UserPlatform) PlatformIDs() []int32 {
+ if len(u.Clients) == 0 {
+ return nil
+ }
+ platformIDs := make([]int32, 0, len(u.Clients))
+ for _, client := range u.Clients {
+ platformIDs = append(platformIDs, int32(client.PlatformID))
+ }
+ return platformIDs
+}
+
+func (u *UserPlatform) PlatformIDSet() map[int32]struct{} {
+ if len(u.Clients) == 0 {
+ return nil
+ }
+ platformIDs := make(map[int32]struct{})
+ for _, client := range u.Clients {
+ platformIDs[int32(client.PlatformID)] = struct{}{}
+ }
+ return platformIDs
+}
+
+func newUserMap() UserMap {
+ return &userMap{
+ data: make(map[string]*UserPlatform),
+ ch: make(chan UserState, 10000),
+ }
+}
+
+type userMap struct {
+ lock sync.RWMutex
+ data map[string]*UserPlatform
+ ch chan UserState
+}
+
+func (u *userMap) RecvSubChange(userID string, platformIDs []int32) bool {
+ u.lock.RLock()
+ defer u.lock.RUnlock()
+ result, ok := u.data[userID]
+ if !ok {
+ return false
+ }
+ localPlatformIDs := result.PlatformIDSet()
+ for _, platformID := range platformIDs {
+ delete(localPlatformIDs, platformID)
+ }
+ if len(localPlatformIDs) == 0 {
+ return false
+ }
+ u.push(userID, result, nil)
+ return true
+}
+
+func (u *userMap) push(userID string, userPlatform *UserPlatform, offline []int32) bool {
+ select {
+ case u.ch <- UserState{UserID: userID, Online: userPlatform.PlatformIDs(), Offline: offline}:
+ userPlatform.Time = time.Now()
+ return true
+ default:
+ return false
+ }
+}
+
+func (u *userMap) GetAll(userID string) ([]*Client, bool) {
+ u.lock.RLock()
+ defer u.lock.RUnlock()
+ result, ok := u.data[userID]
+ if !ok {
+ return nil, false
+ }
+ return result.Clients, true
+}
+
+func (u *userMap) Get(userID string, platformID int) ([]*Client, bool, bool) {
+ u.lock.RLock()
+ defer u.lock.RUnlock()
+ result, ok := u.data[userID]
+ if !ok {
+ return nil, false, false
+ }
+ var clients []*Client
+ for _, client := range result.Clients {
+ if client.PlatformID == platformID {
+ clients = append(clients, client)
+ }
+ }
+ return clients, true, len(clients) > 0
+}
+
+func (u *userMap) Set(userID string, client *Client) {
+ u.lock.Lock()
+ defer u.lock.Unlock()
+ result, ok := u.data[userID]
+ if ok {
+ result.Clients = append(result.Clients, client)
+ } else {
+ result = &UserPlatform{
+ Clients: []*Client{client},
+ }
+ u.data[userID] = result
+ }
+ u.push(client.UserID, result, nil)
+}
+
+func (u *userMap) DeleteClients(userID string, clients []*Client) (isDeleteUser bool) {
+ if len(clients) == 0 {
+ return false
+ }
+ u.lock.Lock()
+ defer u.lock.Unlock()
+ result, ok := u.data[userID]
+ if !ok {
+ return false
+ }
+ offline := make([]int32, 0, len(clients))
+ deleteAddr := datautil.SliceSetAny(clients, func(client *Client) string {
+ return client.ctx.GetRemoteAddr()
+ })
+ tmp := result.Clients
+ result.Clients = result.Clients[:0]
+ for _, client := range tmp {
+ if _, delCli := deleteAddr[client.ctx.GetRemoteAddr()]; delCli {
+ offline = append(offline, int32(client.PlatformID))
+ } else {
+ result.Clients = append(result.Clients, client)
+ }
+ }
+ defer u.push(userID, result, offline)
+ if len(result.Clients) > 0 {
+ return false
+ }
+ delete(u.data, userID)
+ return true
+}
+
+func (u *userMap) GetAllUserStatus(deadline time.Time, nowtime time.Time) (result []UserState) {
+ u.lock.RLock()
+ defer u.lock.RUnlock()
+ result = make([]UserState, 0, len(u.data))
+ for userID, userPlatform := range u.data {
+ if deadline.Before(userPlatform.Time) {
+ continue
+ }
+ userPlatform.Time = nowtime
+ online := make([]int32, 0, len(userPlatform.Clients))
+ for _, client := range userPlatform.Clients {
+ online = append(online, int32(client.PlatformID))
+ }
+ result = append(result, UserState{UserID: userID, Online: online})
+ }
+ return result
+}
+
+func (u *userMap) UserState() <-chan UserState {
+ return u.ch
+}
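`userMap.push` above relies on a non-blocking send: the `select` with a `default` branch drops the event when the channel buffer is full rather than blocking a caller that is holding the map lock. The idiom in isolation:

```go
package main

import "fmt"

// tryPush is the non-blocking send used by userMap.push: if the consumer is
// keeping up the event is queued; otherwise it is dropped immediately instead
// of blocking the caller (which in the original holds the user-map lock).
func tryPush(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(tryPush(ch, 1)) // buffer has room: queued
	fmt.Println(tryPush(ch, 2)) // buffer full: dropped, not blocked
}
```

Dropping under pressure is a deliberate trade-off here: the renewal ticker in online.go re-reads full user state periodically, so a lost incremental event is eventually repaired.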
diff --git a/internal/msggateway/ws_server.go b/internal/msggateway/ws_server.go
new file mode 100644
index 0000000..09f0aa9
--- /dev/null
+++ b/internal/msggateway/ws_server.go
@@ -0,0 +1,568 @@
+package msggateway
+
+import (
+ "context"
+ "fmt"
+ "net/http"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
+ pbAuth "git.imall.cloud/openim/protocol/auth"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/mcontext"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msggateway"
+ "github.com/go-playground/validator/v10"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/stringutil"
+ "golang.org/x/sync/errgroup"
+)
+
+type LongConnServer interface {
+ Run(ctx context.Context) error
+ wsHandler(w http.ResponseWriter, r *http.Request)
+ GetUserAllCons(userID string) ([]*Client, bool)
+ GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool)
+ Validate(s any) error
+ SetDiscoveryRegistry(ctx context.Context, client discovery.Conn, config *Config) error
+ KickUserConn(client *Client) error
+ UnRegister(c *Client)
+ SetKickHandlerInfo(i *kickHandler)
+ SubUserOnlineStatus(ctx context.Context, client *Client, data *Req) ([]byte, error)
+ Compressor
+ MessageHandler
+}
+
+type WsServer struct {
+ msgGatewayConfig *Config
+ port int
+ wsMaxConnNum int64
+ registerChan chan *Client
+ unregisterChan chan *Client
+ kickHandlerChan chan *kickHandler
+ clients UserMap
+ online rpccache.OnlineCache
+ subscription *Subscription
+ clientPool sync.Pool
+ onlineUserNum atomic.Int64
+ onlineUserConnNum atomic.Int64
+ handshakeTimeout time.Duration
+ writeBufferSize int
+ validate *validator.Validate
+ disCov discovery.Conn
+ Compressor
+ //Encoder
+ MessageHandler
+ webhookClient *webhook.Client
+ userClient *rpcli.UserClient
+ authClient *rpcli.AuthClient
+
+ ready atomic.Bool
+}
+
+type kickHandler struct {
+ clientOK bool
+ oldClients []*Client
+ newClient *Client
+}
+
+func (ws *WsServer) SetDiscoveryRegistry(ctx context.Context, disCov discovery.Conn, config *Config) error {
+ userConn, err := disCov.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ pushConn, err := disCov.GetConn(ctx, config.Discovery.RpcService.Push)
+ if err != nil {
+ return err
+ }
+ authConn, err := disCov.GetConn(ctx, config.Discovery.RpcService.Auth)
+ if err != nil {
+ return err
+ }
+ msgConn, err := disCov.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+ ws.userClient = rpcli.NewUserClient(userConn)
+ ws.authClient = rpcli.NewAuthClient(authConn)
+ ws.MessageHandler = NewGrpcHandler(ws.validate, rpcli.NewMsgClient(msgConn), rpcli.NewPushMsgServiceClient(pushConn))
+ ws.disCov = disCov
+
+ ws.ready.Store(true)
+ return nil
+}
+
+//func (ws *WsServer) SetUserOnlineStatus(ctx context.Context, client *Client, status int32) {
+// err := ws.userClient.SetUserStatus(ctx, client.UserID, status, client.PlatformID)
+// if err != nil {
+// log.ZWarn(ctx, "SetUserStatus err", err)
+// }
+// switch status {
+// case constant.Online:
+// ws.webhookAfterUserOnline(ctx, &ws.msgGatewayConfig.WebhooksConfig.AfterUserOnline, client.UserID, client.PlatformID, client.IsBackground, client.ctx.GetConnID())
+// case constant.Offline:
+// ws.webhookAfterUserOffline(ctx, &ws.msgGatewayConfig.WebhooksConfig.AfterUserOffline, client.UserID, client.PlatformID, client.ctx.GetConnID())
+// }
+//}
+
+func (ws *WsServer) UnRegister(c *Client) {
+ ws.unregisterChan <- c
+}
+
+func (ws *WsServer) Validate(_ any) error {
+ return nil
+}
+
+func (ws *WsServer) GetUserAllCons(userID string) ([]*Client, bool) {
+ return ws.clients.GetAll(userID)
+}
+
+func (ws *WsServer) GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool) {
+ return ws.clients.Get(userID, platform)
+}
+
+func NewWsServer(msgGatewayConfig *Config, opts ...Option) *WsServer {
+ var config configs
+ for _, o := range opts {
+ o(&config)
+ }
+ //userRpcClient := rpcclient.NewUserRpcClient(client, config.Discovery.RpcService.User, config.Share.IMAdminUser)
+
+ v := validator.New()
+ return &WsServer{
+ msgGatewayConfig: msgGatewayConfig,
+ port: config.port,
+ wsMaxConnNum: config.maxConnNum,
+ writeBufferSize: config.writeBufferSize,
+ handshakeTimeout: config.handshakeTimeout,
+ clientPool: sync.Pool{
+ New: func() any {
+ return new(Client)
+ },
+ },
+ registerChan: make(chan *Client, 1000),
+ unregisterChan: make(chan *Client, 1000),
+ kickHandlerChan: make(chan *kickHandler, 1000),
+ validate: v,
+ clients: newUserMap(),
+ subscription: newSubscription(),
+ Compressor: NewGzipCompressor(),
+ webhookClient: webhook.NewWebhookClient(msgGatewayConfig.WebhooksConfig.URL),
+ }
+}
+
+func (ws *WsServer) Run(ctx context.Context) error {
+ var client *Client
+
+ ctx, cancel := context.WithCancelCause(ctx)
+ go func() {
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case client = <-ws.registerChan:
+ ws.registerClient(client)
+ case client = <-ws.unregisterChan:
+ ws.unregisterClient(client)
+ case onlineInfo := <-ws.kickHandlerChan:
+ ws.multiTerminalLoginChecker(onlineInfo.clientOK, onlineInfo.oldClients, onlineInfo.newClient)
+ }
+ }
+ }()
+
+ done := make(chan struct{})
+ go func() {
+ wsServer := http.Server{Addr: fmt.Sprintf(":%d", ws.port), Handler: nil}
+ http.HandleFunc("/", ws.wsHandler)
+ go func() {
+ defer close(done)
+ <-ctx.Done()
+ _ = wsServer.Shutdown(context.Background())
+ }()
+ err := wsServer.ListenAndServe()
+ if err == nil {
+ err = fmt.Errorf("http server closed")
+ }
+ cancel(fmt.Errorf("msg gateway %w", err))
+ }()
+
+ <-ctx.Done()
+
+ timeout := time.NewTimer(time.Second * 15)
+ defer timeout.Stop()
+ select {
+ case <-timeout.C:
+ log.ZWarn(ctx, "msg gateway graceful stop timeout", nil)
+ case <-done:
+ log.ZDebug(ctx, "msg gateway graceful stop done")
+ }
+ return context.Cause(ctx)
+}
+
+const concurrentRequest = 3
+
+func (ws *WsServer) sendUserOnlineInfoToOtherNode(ctx context.Context, client *Client) error {
+ conns, err := ws.disCov.GetConns(ctx, ws.msgGatewayConfig.Discovery.RpcService.MessageGateway)
+ if err != nil {
+ return err
+ }
+ if len(conns) == 0 || (len(conns) == 1 && ws.disCov.IsSelfNode(conns[0])) {
+ return nil
+ }
+
+ wg := errgroup.Group{}
+ wg.SetLimit(concurrentRequest)
+
+	// Push the user's online notification to the other gateway nodes
+ for _, v := range conns {
+ v := v
+ log.ZDebug(ctx, "sendUserOnlineInfoToOtherNode conn")
+ if ws.disCov.IsSelfNode(v) {
+ log.ZDebug(ctx, "Filter out this node")
+ continue
+ }
+
+ wg.Go(func() error {
+ msgClient := msggateway.NewMsgGatewayClient(v)
+ _, err := msgClient.MultiTerminalLoginCheck(ctx, &msggateway.MultiTerminalLoginCheckReq{
+ UserID: client.UserID,
+ PlatformID: int32(client.PlatformID), Token: client.token,
+ })
+ if err != nil {
+ log.ZWarn(ctx, "MultiTerminalLoginCheck err", err)
+ }
+ return nil
+ })
+ }
+
+ _ = wg.Wait()
+ return nil
+}
+
+func (ws *WsServer) SetKickHandlerInfo(i *kickHandler) {
+ ws.kickHandlerChan <- i
+}
+
+func (ws *WsServer) registerClient(client *Client) {
+ var (
+ userOK bool
+ clientOK bool
+ oldClients []*Client
+ )
+ oldClients, userOK, clientOK = ws.clients.Get(client.UserID, client.PlatformID)
+
+ log.ZInfo(client.ctx, "registerClient", "userID", client.UserID, "platformID", client.PlatformID,
+ "sdkVersion", client.SDKVersion)
+
+ if !userOK {
+ ws.clients.Set(client.UserID, client)
+ log.ZDebug(client.ctx, "user not exist", "userID", client.UserID, "platformID", client.PlatformID)
+ prommetrics.OnlineUserGauge.Add(1)
+ ws.onlineUserNum.Add(1)
+ ws.onlineUserConnNum.Add(1)
+	} else {
+		ws.multiTerminalLoginChecker(clientOK, oldClients, client)
+		log.ZDebug(client.ctx, "user exist", "userID", client.UserID, "platformID", client.PlatformID)
+		if clientOK {
+			// There is already a connection on this platform
+			log.ZDebug(client.ctx, "repeat login", "userID", client.UserID, "platformID",
+				client.PlatformID, "old remote addr", getRemoteAdders(oldClients))
+		}
+		ws.clients.Set(client.UserID, client)
+		ws.onlineUserConnNum.Add(1)
+	}
+
+ wg := sync.WaitGroup{}
+ log.ZDebug(client.ctx, "ws.msgGatewayConfig.Discovery.Enable", "discoveryEnable", ws.msgGatewayConfig.Discovery.Enable)
+
+ if ws.msgGatewayConfig.Discovery.Enable != "k8s" {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ _ = ws.sendUserOnlineInfoToOtherNode(client.ctx, client)
+ }()
+ }
+
+ //wg.Add(1)
+ //go func() {
+ // defer wg.Done()
+ // ws.SetUserOnlineStatus(client.ctx, client, constant.Online)
+ //}()
+
+ wg.Wait()
+
+ log.ZDebug(client.ctx, "user online", "online user Num", ws.onlineUserNum.Load(), "online user conn Num", ws.onlineUserConnNum.Load())
+}
+
+func getRemoteAdders(clients []*Client) string {
+	var ret string
+	for i, c := range clients {
+		if i == 0 {
+			ret = c.ctx.GetRemoteAddr()
+		} else {
+			ret += "@" + c.ctx.GetRemoteAddr()
+		}
+	}
+	return ret
+}
+
+func (ws *WsServer) KickUserConn(client *Client) error {
+	// Send the kick notification before removing the client, so the notification is delivered.
+	// KickOnlineMessage internally calls close(), which calls UnRegister, which deletes the
+	// client, so there is no need to call DeleteClients here.
+	return client.KickOnlineMessage()
+}
+
+func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Client, newClient *Client) {
+ kickTokenFunc := func(kickClients []*Client) {
+ if len(kickClients) == 0 {
+ return
+ }
+ var kickTokens []string
+		// Send the kick notifications first; KickOnlineMessage internally closes the
+		// connection and removes the client. Notifications are sent concurrently in
+		// goroutines for efficiency.
+ var wg sync.WaitGroup
+ for _, c := range kickClients {
+ kickTokens = append(kickTokens, c.token)
+ wg.Add(1)
+ go func(client *Client) {
+ defer wg.Done()
+ err := client.KickOnlineMessage()
+ if err != nil {
+ log.ZWarn(client.ctx, "KickOnlineMessage", err)
+ }
+ }(c)
+ }
+		// Wait for all notifications to be sent.
+ wg.Wait()
+ ctx := mcontext.WithMustInfoCtx(
+ []string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
+ constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
+ )
+ if err := ws.authClient.KickTokens(ctx, kickTokens); err != nil {
+ log.ZWarn(newClient.ctx, "kickTokens err", err)
+ }
+ }
+
+ // If reconnect: When multiple msgGateway instances are deployed, a client may disconnect from instance A and reconnect to instance B.
+ // During this process, instance A might still be executing, resulting in two clients with the same token existing simultaneously.
+ // This situation needs to be filtered to prevent duplicate clients.
+ checkSameTokenFunc := func(oldClients []*Client) []*Client {
+ var clientsNeedToKick []*Client
+
+ for _, c := range oldClients {
+ if c.token == newClient.token {
+ log.ZDebug(newClient.ctx, "token is same, not kick",
+ "userID", newClient.UserID,
+ "platformID", newClient.PlatformID,
+ "token", newClient.token)
+ continue
+ }
+
+ clientsNeedToKick = append(clientsNeedToKick, c)
+ }
+
+ return clientsNeedToKick
+ }
+
+ switch ws.msgGatewayConfig.Share.MultiLogin.Policy {
+ case constant.DefalutNotKick:
+ case constant.PCAndOther:
+ if constant.PlatformIDToClass(newClient.PlatformID) == constant.TerminalPC {
+ return
+ }
+ clients, ok := ws.clients.GetAll(newClient.UserID)
+ clientOK = ok
+ oldClients = make([]*Client, 0, len(clients))
+ for _, c := range clients {
+ if constant.PlatformIDToClass(c.PlatformID) == constant.TerminalPC {
+ continue
+ }
+ oldClients = append(oldClients, c)
+ }
+
+ fallthrough
+ case constant.AllLoginButSameTermKick:
+ if !clientOK {
+ return
+ }
+
+ oldClients = checkSameTokenFunc(oldClients)
+ if len(oldClients) == 0 {
+ return
+ }
+
+		// Send the kick notifications first; KickOnlineMessage internally closes the
+		// connection and removes the client. Notifications are sent concurrently in
+		// goroutines for efficiency.
+ var wg sync.WaitGroup
+ for _, c := range oldClients {
+ wg.Add(1)
+ go func(client *Client) {
+ defer wg.Done()
+ err := client.KickOnlineMessage()
+ if err != nil {
+ log.ZWarn(client.ctx, "KickOnlineMessage", err)
+ }
+ }(c)
+ }
+		// Wait for all notifications to be sent.
+ wg.Wait()
+
+ ctx := mcontext.WithMustInfoCtx(
+ []string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
+ constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
+ )
+ req := &pbAuth.InvalidateTokenReq{
+ PreservedToken: newClient.token,
+ UserID: newClient.UserID,
+ PlatformID: int32(newClient.PlatformID),
+ }
+ if err := ws.authClient.InvalidateToken(ctx, req); err != nil {
+ log.ZWarn(newClient.ctx, "InvalidateToken err", err, "userID", newClient.UserID,
+ "platformID", newClient.PlatformID)
+ }
+ case constant.AllLoginButSameClassKick:
+ clients, ok := ws.clients.GetAll(newClient.UserID)
+ if !ok {
+ return
+ }
+
+ var kickClients []*Client
+		for _, client := range clients {
+			if constant.PlatformIDToClass(client.PlatformID) == constant.PlatformIDToClass(newClient.PlatformID) {
+				kickClients = append(kickClients, client)
+			}
+		}
+ kickClients = checkSameTokenFunc(kickClients)
+
+ kickTokenFunc(kickClients)
+ }
+}
+
+func (ws *WsServer) unregisterClient(client *Client) {
+ defer ws.clientPool.Put(client)
+ isDeleteUser := ws.clients.DeleteClients(client.UserID, []*Client{client})
+ if isDeleteUser {
+ ws.onlineUserNum.Add(-1)
+ prommetrics.OnlineUserGauge.Dec()
+ }
+ ws.onlineUserConnNum.Add(-1)
+ ws.subscription.DelClient(client)
+ //ws.SetUserOnlineStatus(client.ctx, client, constant.Offline)
+ log.ZDebug(client.ctx, "user offline", "close reason", client.closedErr, "online user Num",
+ ws.onlineUserNum.Load(), "online user conn Num",
+ ws.onlineUserConnNum.Load(),
+ )
+}
+
+// validateRespWithRequest checks if the response matches the expected userID and platformID.
+func (ws *WsServer) validateRespWithRequest(ctx *UserConnContext, resp *pbAuth.ParseTokenResp) error {
+ userID := ctx.GetUserID()
+ platformID := stringutil.StringToInt32(ctx.GetPlatformID())
+ if resp.UserID != userID {
+ return servererrs.ErrTokenInvalid.WrapMsg(fmt.Sprintf("token uid %s != userID %s", resp.UserID, userID))
+ }
+ if resp.PlatformID != platformID {
+ return servererrs.ErrTokenInvalid.WrapMsg(fmt.Sprintf("token platform %d != platformID %d", resp.PlatformID, platformID))
+ }
+ return nil
+}
+
+func (ws *WsServer) wsHandler(w http.ResponseWriter, r *http.Request) {
+ // Create a new connection context
+ connContext := newContext(w, r)
+
+ if !ws.ready.Load() {
+ httpError(connContext, errs.New("ws server not ready"))
+ return
+ }
+
+ // Check if the current number of online user connections exceeds the maximum limit
+ if ws.onlineUserConnNum.Load() >= ws.wsMaxConnNum {
+ // If it exceeds the maximum connection number, return an error via HTTP and stop processing
+ httpError(connContext, servererrs.ErrConnOverMaxNumLimit.WrapMsg("over max conn num limit"))
+ return
+ }
+
+ // Parse essential arguments (e.g., user ID, Token)
+ err := connContext.ParseEssentialArgs()
+ if err != nil {
+ // If there's an error during parsing, return an error via HTTP and stop processing
+ httpError(connContext, err)
+ return
+ }
+
+ if ws.authClient == nil {
+ httpError(connContext, errs.New("auth client is not initialized"))
+ return
+ }
+
+ // Call the authentication client to parse the Token obtained from the context
+ resp, err := ws.authClient.ParseToken(connContext, connContext.GetToken())
+ if err != nil {
+ // If there's an error parsing the Token, decide whether to send the error message via WebSocket based on the context flag
+ shouldSendError := connContext.ShouldSendResp()
+ if shouldSendError {
+ // Create a WebSocket connection object and attempt to send the error message via WebSocket
+ wsLongConn := newGWebSocket(WebSocket, ws.handshakeTimeout, ws.writeBufferSize)
+ if err := wsLongConn.RespondWithError(err, w, r); err == nil {
+ // If the error message is successfully sent via WebSocket, stop processing
+ return
+ }
+ }
+ // If sending via WebSocket is not required or fails, return the error via HTTP and stop processing
+ httpError(connContext, err)
+ return
+ }
+
+ // Validate the authentication response matches the request (e.g., user ID and platform ID)
+ err = ws.validateRespWithRequest(connContext, resp)
+ if err != nil {
+ // If validation fails, return an error via HTTP and stop processing
+ httpError(connContext, err)
+ return
+ }
+
+ log.ZDebug(connContext, "new conn", "token", connContext.GetToken())
+ // Create a WebSocket long connection object
+ wsLongConn := newGWebSocket(WebSocket, ws.handshakeTimeout, ws.writeBufferSize)
+ if err := wsLongConn.GenerateLongConn(w, r); err != nil {
+		// If creating the long connection fails, the error is already handled internally during the handshake.
+ log.ZWarn(connContext, "long connection fails", err)
+ return
+ } else {
+ // Check if a normal response should be sent via WebSocket
+ shouldSendSuccessResp := connContext.ShouldSendResp()
+ if shouldSendSuccessResp {
+ // Attempt to send a success message through WebSocket
+ if err := wsLongConn.RespondWithSuccess(); err != nil {
+				// If sending the success message fails, stop further processing.
+ return
+ }
+ }
+ }
+
+ // Retrieve a client object from the client pool, reset its state, and associate it with the current WebSocket long connection
+ client := ws.clientPool.Get().(*Client)
+ client.ResetClient(connContext, wsLongConn, ws)
+
+ // Register the client with the server and start message processing
+ ws.registerChan <- client
+ go client.readMessage()
+}
diff --git a/internal/msgtransfer/callback.go b/internal/msgtransfer/callback.go
new file mode 100644
index 0000000..a8b67e2
--- /dev/null
+++ b/internal/msgtransfer/callback.go
@@ -0,0 +1,112 @@
+package msgtransfer
+
+import (
+ "context"
+ "encoding/base64"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/stringutil"
+ "google.golang.org/protobuf/proto"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+)
+
+func toCommonCallback(ctx context.Context, msg *sdkws.MsgData, command string) cbapi.CommonCallbackReq {
+ return cbapi.CommonCallbackReq{
+ SendID: msg.SendID,
+ ServerMsgID: msg.ServerMsgID,
+ CallbackCommand: command,
+ ClientMsgID: msg.ClientMsgID,
+ OperationID: mcontext.GetOperationID(ctx),
+ SenderPlatformID: msg.SenderPlatformID,
+ SenderNickname: msg.SenderNickname,
+ SessionType: msg.SessionType,
+ MsgFrom: msg.MsgFrom,
+ ContentType: msg.ContentType,
+ Status: msg.Status,
+ SendTime: msg.SendTime,
+ CreateTime: msg.CreateTime,
+ AtUserIDList: msg.AtUserIDList,
+ SenderFaceURL: msg.SenderFaceURL,
+ Content: GetContent(msg),
+ Seq: uint32(msg.Seq),
+ Ex: msg.Ex,
+ }
+}
+
+func GetContent(msg *sdkws.MsgData) string {
+ if msg.ContentType >= constant.NotificationBegin && msg.ContentType <= constant.NotificationEnd {
+ var tips sdkws.TipsComm
+ _ = proto.Unmarshal(msg.Content, &tips)
+ content := tips.JsonDetail
+ return content
+ } else {
+ return string(msg.Content)
+ }
+}
+
+func (mc *OnlineHistoryMongoConsumerHandler) webhookAfterMsgSaveDB(ctx context.Context, after *config.AfterConfig, msg *sdkws.MsgData) {
+ if !filterAfterMsg(msg, after) {
+ return
+ }
+
+ cbReq := &cbapi.CallbackAfterMsgSaveDBReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackAfterMsgSaveDBCommand),
+ RecvID: msg.RecvID,
+ GroupID: msg.GroupID,
+ }
+ mc.webhookClient.AsyncPostWithQuery(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterMsgSaveDBResp{}, after, buildKeyMsgDataQuery(msg))
+}
+
+func buildKeyMsgDataQuery(msg *sdkws.MsgData) map[string]string {
+ keyMsgData := apistruct.KeyMsgData{
+ SendID: msg.SendID,
+ RecvID: msg.RecvID,
+ GroupID: msg.GroupID,
+ }
+
+ return map[string]string{
+ webhook.Key: base64.StdEncoding.EncodeToString(stringutil.StructToJsonBytes(keyMsgData)),
+ }
+}
+
+func filterAfterMsg(msg *sdkws.MsgData, after *config.AfterConfig) bool {
+ return filterMsg(msg, after.AttentionIds, after.DeniedTypes)
+}
+
+func filterMsg(msg *sdkws.MsgData, attentionIds []string, deniedTypes []int32) bool {
+	// Per the attentionIds configuration, deliver callbacks only for the configured
+	// users (single chats) and groups (group chats).
+	if len(attentionIds) != 0 && msg.SessionType == constant.SingleChatType && !datautil.Contain(msg.RecvID, attentionIds...) {
+		return false
+	}
+
+	if len(attentionIds) != 0 && msg.SessionType == constant.ReadGroupChatType && !datautil.Contain(msg.GroupID, attentionIds...) {
+		return false
+	}
+ }
+
+ if defaultDeniedTypes(msg.ContentType) {
+ return false
+ }
+
+ if len(deniedTypes) != 0 && datautil.Contain(msg.ContentType, deniedTypes...) {
+ return false
+ }
+
+ return true
+}
+
+func defaultDeniedTypes(contentType int32) bool {
+ if contentType >= constant.NotificationBegin && contentType <= constant.NotificationEnd {
+ return true
+ }
+ if contentType == constant.Typing {
+ return true
+ }
+ return false
+}
diff --git a/internal/msgtransfer/init.go b/internal/msgtransfer/init.go
new file mode 100644
index 0000000..3395aff
--- /dev/null
+++ b/internal/msgtransfer/init.go
@@ -0,0 +1,188 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgtransfer
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/mqbuild"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/mq"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+
+ conf "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "github.com/openimsdk/tools/log"
+ "google.golang.org/grpc"
+)
+
+type MsgTransfer struct {
+ historyConsumer mq.Consumer
+ historyMongoConsumer mq.Consumer
+	// This consumer aggregates messages from the toRedis topic: it stores them in
+	// Redis, increments the Redis seq, then sends them to the toPush topic for
+	// pushing and to the toMongo topic for persistence.
+	historyHandler *OnlineHistoryRedisConsumerHandler
+	// This consumer persists messages to MongoDB.
+	historyMongoHandler *OnlineHistoryMongoConsumerHandler
+ ctx context.Context
+ //cancel context.CancelFunc
+}
+
+type Config struct {
+ MsgTransfer conf.MsgTransfer
+ RedisConfig conf.Redis
+ MongodbConfig conf.Mongo
+ KafkaConfig conf.Kafka
+ Share conf.Share
+ WebhooksConfig conf.Webhooks
+ Discovery conf.Discovery
+ Index conf.Index
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ builder := mqbuild.NewBuilder(&config.KafkaConfig)
+
+ log.CInfo(ctx, "MSG-TRANSFER server is initializing", "runTimeEnv", runtimeenv.RuntimeEnvironment(), "prometheusPorts",
+ config.MsgTransfer.Prometheus.Ports, "index", config.Index)
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+
+ //if config.Discovery.Enable == conf.ETCD {
+ // cm := disetcd.NewConfigManager(client.(*etcd.SvcDiscoveryRegistryImpl).GetClient(), []string{
+ // config.MsgTransfer.GetConfigFileName(),
+ // config.RedisConfig.GetConfigFileName(),
+ // config.MongodbConfig.GetConfigFileName(),
+ // config.KafkaConfig.GetConfigFileName(),
+ // config.Share.GetConfigFileName(),
+ // config.WebhooksConfig.GetConfigFileName(),
+ // config.Discovery.GetConfigFileName(),
+ // conf.LogConfigFileName,
+ // })
+ // cm.Watch(ctx)
+ //}
+ mongoProducer, err := builder.GetTopicProducer(ctx, config.KafkaConfig.ToMongoTopic)
+ if err != nil {
+ return err
+ }
+ pushProducer, err := builder.GetTopicProducer(ctx, config.KafkaConfig.ToPushTopic)
+ if err != nil {
+ return err
+ }
+ msgDocModel, err := mgo.NewMsgMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ var msgModel cache.MsgCache
+ if rdb == nil {
+ cm, err := mgo.NewCacheMgo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ msgModel = mcache.NewMsgCache(cm, msgDocModel)
+ } else {
+ msgModel = redis.NewMsgCache(rdb, msgDocModel)
+ }
+ seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ seqConversationCache := redis.NewSeqConversationCacheRedis(rdb, seqConversation)
+ seqUser, err := mgo.NewSeqUserMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ seqUserCache := redis.NewSeqUserCacheRedis(rdb, seqUser)
+ msgTransferDatabase, err := controller.NewMsgTransferDatabase(msgDocModel, msgModel, seqUserCache, seqConversationCache, mongoProducer, pushProducer)
+ if err != nil {
+ return err
+ }
+ historyConsumer, err := builder.GetTopicConsumer(ctx, config.KafkaConfig.ToRedisTopic)
+ if err != nil {
+ return err
+ }
+ historyMongoConsumer, err := builder.GetTopicConsumer(ctx, config.KafkaConfig.ToMongoTopic)
+ if err != nil {
+ return err
+ }
+ historyHandler, err := NewOnlineHistoryRedisConsumerHandler(ctx, client, config, msgTransferDatabase)
+ if err != nil {
+ return err
+ }
+ historyMongoHandler := NewOnlineHistoryMongoConsumerHandler(msgTransferDatabase, config)
+
+ msgTransfer := &MsgTransfer{
+ historyConsumer: historyConsumer,
+ historyMongoConsumer: historyMongoConsumer,
+ historyHandler: historyHandler,
+ historyMongoHandler: historyMongoHandler,
+ }
+
+ return msgTransfer.Start(ctx)
+}
+
+func (m *MsgTransfer) Start(ctx context.Context) error {
+ m.ctx = ctx
+
+ go func() {
+		for {
+			if m.ctx.Err() != nil {
+				return
+			}
+			if err := m.historyConsumer.Subscribe(m.ctx, m.historyHandler.HandlerRedisMessage); err != nil {
+				log.ZError(m.ctx, "historyConsumer err, will retry in 5 seconds", err)
+				time.Sleep(5 * time.Second)
+				continue
+			}
+			// Subscribe returned without error; retry immediately (the ctx check above ends the loop on shutdown)
+			log.ZWarn(m.ctx, "historyConsumer Subscribe returned normally, will retry immediately", nil)
+		}
+ }()
+
+ go func() {
+ fn := func(msg mq.Message) error {
+ m.historyMongoHandler.HandleChatWs2Mongo(msg)
+ return nil
+ }
+		for {
+			if m.ctx.Err() != nil {
+				return
+			}
+			if err := m.historyMongoConsumer.Subscribe(m.ctx, fn); err != nil {
+				log.ZError(m.ctx, "historyMongoConsumer err, will retry in 5 seconds", err)
+				time.Sleep(5 * time.Second)
+				continue
+			}
+			// Subscribe returned without error; retry immediately (the ctx check above ends the loop on shutdown)
+			log.ZWarn(m.ctx, "historyMongoConsumer Subscribe returned normally, will retry immediately", nil)
+		}
+ }()
+
+ go m.historyHandler.HandleUserHasReadSeqMessages(m.ctx)
+
+ err := m.historyHandler.redisMessageBatches.Start()
+ if err != nil {
+ return err
+ }
+ <-m.ctx.Done()
+ return m.ctx.Err()
+}
diff --git a/internal/msgtransfer/online_history_msg_handler.go b/internal/msgtransfer/online_history_msg_handler.go
new file mode 100644
index 0000000..34d3b67
--- /dev/null
+++ b/internal/msgtransfer/online_history_msg_handler.go
@@ -0,0 +1,410 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgtransfer
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"sync"
+	"time"
+
+	"github.com/openimsdk/tools/mq"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "github.com/openimsdk/tools/discovery"
+
+ "github.com/go-redis/redis"
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/tools/batcher"
+ "git.imall.cloud/openim/protocol/constant"
+ pbconv "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/stringutil"
+)
+
+const (
+ size = 1000
+ mainDataBuffer = 2000
+ subChanBuffer = 200
+ worker = 100
+ interval = 100 * time.Millisecond
+ hasReadChanBuffer = 2000
+)
+
+type ContextMsg struct {
+ message *sdkws.MsgData
+ ctx context.Context
+}
+
+// This structure is used for asynchronously writing the sender’s read sequence (seq) regarding a message into MongoDB.
+// For example, if the sender sends a message with a seq of 10, then their own read seq for this conversation should be set to 10.
+type userHasReadSeq struct {
+ conversationID string
+ userHasReadMap map[string]int64
+}
+
+type OnlineHistoryRedisConsumerHandler struct {
+ redisMessageBatches *batcher.Batcher[ConsumerMessage]
+
+ msgTransferDatabase controller.MsgTransferDatabase
+ conversationUserHasReadChan chan *userHasReadSeq
+ wg sync.WaitGroup
+
+ groupClient *rpcli.GroupClient
+ conversationClient *rpcli.ConversationClient
+}
+
+type ConsumerMessage struct {
+ Ctx context.Context
+ Key string
+ Value []byte
+ Raw mq.Message
+}
+
+func NewOnlineHistoryRedisConsumerHandler(ctx context.Context, client discovery.Conn, config *Config, database controller.MsgTransferDatabase) (*OnlineHistoryRedisConsumerHandler, error) {
+ groupConn, err := client.GetConn(ctx, config.Discovery.RpcService.Group)
+ if err != nil {
+ return nil, err
+ }
+ conversationConn, err := client.GetConn(ctx, config.Discovery.RpcService.Conversation)
+ if err != nil {
+ return nil, err
+ }
+ var och OnlineHistoryRedisConsumerHandler
+ och.msgTransferDatabase = database
+ och.conversationUserHasReadChan = make(chan *userHasReadSeq, hasReadChanBuffer)
+ och.groupClient = rpcli.NewGroupClient(groupConn)
+ och.conversationClient = rpcli.NewConversationClient(conversationConn)
+ och.wg.Add(1)
+
+ b := batcher.New[ConsumerMessage](
+ batcher.WithSize(size),
+ batcher.WithWorker(worker),
+ batcher.WithInterval(interval),
+ batcher.WithDataBuffer(mainDataBuffer),
+ batcher.WithSyncWait(true),
+ batcher.WithBuffer(subChanBuffer),
+ )
+ b.Sharding = func(key string) int {
+ hashCode := stringutil.GetHashCode(key)
+ return int(hashCode) % och.redisMessageBatches.Worker()
+ }
+ b.Key = func(consumerMessage *ConsumerMessage) string {
+ return consumerMessage.Key
+ }
+ b.Do = och.do
+ och.redisMessageBatches = b
+
+ och.redisMessageBatches.OnComplete = func(lastMessage *ConsumerMessage, totalCount int) {
+ lastMessage.Raw.Mark()
+ lastMessage.Raw.Commit()
+ }
+
+ return &och, nil
+}
+
+func (och *OnlineHistoryRedisConsumerHandler) do(ctx context.Context, channelID int, val *batcher.Msg[ConsumerMessage]) {
+ ctx = mcontext.WithTriggerIDContext(ctx, val.TriggerID())
+	ctxMessages := och.parseConsumerMessages(ctx, val.Val())
+	if len(ctxMessages) == 0 {
+		// Every message in the batch failed to unmarshal; nothing to do, and
+		// indexing ctxMessages[0] below would panic.
+		return
+	}
+	ctx = withAggregationCtx(ctx, ctxMessages)
+ log.ZInfo(ctx, "msg arrived channel", "channel id", channelID, "msgList length", len(ctxMessages), "key", val.Key())
+ och.doSetReadSeq(ctx, ctxMessages)
+
+ storageMsgList, notStorageMsgList, storageNotificationList, notStorageNotificationList :=
+ och.categorizeMessageLists(ctxMessages)
+ log.ZDebug(ctx, "number of categorized messages", "storageMsgList", len(storageMsgList), "notStorageMsgList",
+ len(notStorageMsgList), "storageNotificationList", len(storageNotificationList), "notStorageNotificationList", len(notStorageNotificationList))
+
+ conversationIDMsg := msgprocessor.GetChatConversationIDByMsg(ctxMessages[0].message)
+ conversationIDNotification := msgprocessor.GetNotificationConversationIDByMsg(ctxMessages[0].message)
+ och.handleMsg(ctx, val.Key(), conversationIDMsg, storageMsgList, notStorageMsgList)
+ och.handleNotification(ctx, val.Key(), conversationIDNotification, storageNotificationList, notStorageNotificationList)
+}
+
+func (och *OnlineHistoryRedisConsumerHandler) doSetReadSeq(ctx context.Context, msgs []*ContextMsg) {
+ // Outer map: conversationID -> (userID -> maxHasReadSeq)
+ conversationUserSeq := make(map[string]map[string]int64)
+
+ for _, msg := range msgs {
+ if msg.message.ContentType != constant.HasReadReceipt {
+ continue
+ }
+ var elem sdkws.NotificationElem
+ if err := json.Unmarshal(msg.message.Content, &elem); err != nil {
+ log.ZWarn(ctx, "Unmarshal NotificationElem error", err, "msg", msg)
+ continue
+ }
+ var tips sdkws.MarkAsReadTips
+ if err := json.Unmarshal([]byte(elem.Detail), &tips); err != nil {
+ log.ZWarn(ctx, "Unmarshal MarkAsReadTips error", err, "msg", msg)
+ continue
+ }
+ if len(tips.ConversationID) == 0 || tips.HasReadSeq < 0 {
+ continue
+ }
+
+ // Calculate the max seq from tips.Seqs
+ for _, seq := range tips.Seqs {
+ if tips.HasReadSeq < seq {
+ tips.HasReadSeq = seq
+ }
+ }
+
+ if _, ok := conversationUserSeq[tips.ConversationID]; !ok {
+ conversationUserSeq[tips.ConversationID] = make(map[string]int64)
+ }
+ if conversationUserSeq[tips.ConversationID][tips.MarkAsReadUserID] < tips.HasReadSeq {
+ conversationUserSeq[tips.ConversationID][tips.MarkAsReadUserID] = tips.HasReadSeq
+ }
+ }
+ log.ZInfo(ctx, "doSetReadSeq", "conversationUserSeq", conversationUserSeq)
+
+ // persist to db
+ for convID, userSeqMap := range conversationUserSeq {
+ if err := och.msgTransferDatabase.SetHasReadSeqToDB(ctx, convID, userSeqMap); err != nil {
+ log.ZWarn(ctx, "SetHasReadSeqToDB error", err, "conversationID", convID, "userSeqMap", userSeqMap)
+ }
+ }
+
+}
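The aggregation above reduces a batch of read receipts to a single maximum seq per (conversation, user) pair before one DB write. The reduction in isolation, as a sketch with illustrative names (not the handler's real types):

```go
package main

import "fmt"

type readReceipt struct {
	ConversationID string
	UserID         string
	HasReadSeq     int64
}

// maxReadSeqs keeps, for every conversation and user, only the highest
// reported read seq, mirroring the batching done in doSetReadSeq.
func maxReadSeqs(receipts []readReceipt) map[string]map[string]int64 {
	out := make(map[string]map[string]int64)
	for _, r := range receipts {
		if r.ConversationID == "" || r.HasReadSeq < 0 {
			continue
		}
		if out[r.ConversationID] == nil {
			out[r.ConversationID] = make(map[string]int64)
		}
		if out[r.ConversationID][r.UserID] < r.HasReadSeq {
			out[r.ConversationID][r.UserID] = r.HasReadSeq
		}
	}
	return out
}

func main() {
	receipts := []readReceipt{
		{"conv1", "u1", 5},
		{"conv1", "u1", 9}, // supersedes seq 5
		{"conv1", "u2", 3},
	}
	fmt.Println(maxReadSeqs(receipts)["conv1"]["u1"])
}
```

Collapsing the batch this way turns N receipt writes into at most one write per conversation.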
+
+func (och *OnlineHistoryRedisConsumerHandler) parseConsumerMessages(ctx context.Context, consumerMessages []*ConsumerMessage) []*ContextMsg {
+ var ctxMessages []*ContextMsg
+ for i := 0; i < len(consumerMessages); i++ {
+ ctxMsg := &ContextMsg{}
+ msgFromMQ := &sdkws.MsgData{}
+ err := proto.Unmarshal(consumerMessages[i].Value, msgFromMQ)
+ if err != nil {
+			log.ZWarn(ctx, "msg_transfer Unmarshal msg err", err, "value", string(consumerMessages[i].Value))
+ continue
+ }
+ ctxMsg.ctx = consumerMessages[i].Ctx
+ ctxMsg.message = msgFromMQ
+ log.ZDebug(ctx, "message parse finish", "message", msgFromMQ, "key", consumerMessages[i].Key)
+ ctxMessages = append(ctxMessages, ctxMsg)
+ }
+ return ctxMessages
+}
+
+// categorizeMessageLists splits the batch four ways: messages to store, messages
+// only to push, notifications to store, and notifications only to push.
+func (och *OnlineHistoryRedisConsumerHandler) categorizeMessageLists(totalMsgs []*ContextMsg) (storageMsgList,
+ notStorageMsgList, storageNotificationList, notStorageNotificationList []*ContextMsg) {
+ for _, v := range totalMsgs {
+ options := msgprocessor.Options(v.message.Options)
+ if !options.IsNotNotification() {
+			// If the notification must also be delivered as a normal message,
+			// clone it: the clone keeps the caller's offline-push/unread options
+			// and is stored as a message, while the original notification has
+			// those options disabled below.
+			if options.IsSendMsg() {
+				msg := proto.Clone(v.message).(*sdkws.MsgData)
+				// start the clone from fresh options before re-applying the relevant ones
+				if v.message.Options != nil {
+					msg.Options = msgprocessor.NewMsgOptions()
+				}
+ msg.Options = msgprocessor.WithOptions(msg.Options,
+ msgprocessor.WithOfflinePush(options.IsOfflinePush()),
+ msgprocessor.WithUnreadCount(options.IsUnreadCount()),
+ )
+ v.message.Options = msgprocessor.WithOptions(
+ v.message.Options,
+ msgprocessor.WithOfflinePush(false),
+ msgprocessor.WithUnreadCount(false),
+ )
+ ctxMsg := &ContextMsg{
+ message: msg,
+ ctx: v.ctx,
+ }
+ storageMsgList = append(storageMsgList, ctxMsg)
+ }
+ if options.IsHistory() {
+ storageNotificationList = append(storageNotificationList, v)
+ } else {
+ notStorageNotificationList = append(notStorageNotificationList, v)
+ }
+ } else {
+ if options.IsHistory() {
+ storageMsgList = append(storageMsgList, v)
+ } else {
+ notStorageMsgList = append(notStorageMsgList, v)
+ }
+ }
+ }
+ return
+}
+
+func (och *OnlineHistoryRedisConsumerHandler) handleMsg(ctx context.Context, key, conversationID string, storageList, notStorageList []*ContextMsg) {
+	log.ZInfo(ctx, "handle storage msg", "count", len(storageList))
+ for _, storageMsg := range storageList {
+ log.ZDebug(ctx, "handle storage msg", "msg", storageMsg.message.String())
+ }
+
+ och.toPushTopic(ctx, key, conversationID, notStorageList)
+ var storageMessageList []*sdkws.MsgData
+ for _, msg := range storageList {
+ storageMessageList = append(storageMessageList, msg.message)
+ }
+ if len(storageMessageList) > 0 {
+ msg := storageMessageList[0]
+ lastSeq, isNewConversation, userSeqMap, err := och.msgTransferDatabase.BatchInsertChat2Cache(ctx, conversationID, storageMessageList)
+ if err != nil && !errors.Is(errs.Unwrap(err), redis.Nil) {
+ log.ZWarn(ctx, "batch data insert to redis err", err, "storageMsgList", storageMessageList)
+ return
+ }
+ log.ZInfo(ctx, "BatchInsertChat2Cache end")
+ err = och.msgTransferDatabase.SetHasReadSeqs(ctx, conversationID, userSeqMap)
+ if err != nil {
+ log.ZWarn(ctx, "SetHasReadSeqs error", err, "userSeqMap", userSeqMap, "conversationID", conversationID)
+ prommetrics.SeqSetFailedCounter.Inc()
+ }
+ select {
+ case och.conversationUserHasReadChan <- &userHasReadSeq{
+ conversationID: conversationID,
+ userHasReadMap: userSeqMap,
+ }:
+ default:
+ log.ZWarn(ctx, "conversationUserHasReadChan full, drop userHasReadSeq update", nil,
+ "conversationID", conversationID, "userSeqMapSize", len(userSeqMap))
+ }
+
+ if isNewConversation {
+ switch msg.SessionType {
+ case constant.ReadGroupChatType:
+ log.ZDebug(ctx, "group chat first create conversation", "conversationID",
+ conversationID)
+
+ userIDs, err := och.groupClient.GetGroupMemberUserIDs(ctx, msg.GroupID)
+ if err != nil {
+ log.ZWarn(ctx, "get group member ids error", err, "conversationID",
+ conversationID)
+ } else {
+ log.ZInfo(ctx, "GetGroupMemberIDs end")
+
+ if err := och.conversationClient.CreateGroupChatConversations(ctx, msg.GroupID, userIDs); err != nil {
+ log.ZWarn(ctx, "single chat first create conversation error", err,
+ "conversationID", conversationID)
+ }
+ }
+ case constant.SingleChatType, constant.NotificationChatType:
+ req := &pbconv.CreateSingleChatConversationsReq{
+ RecvID: msg.RecvID,
+ SendID: msg.SendID,
+ ConversationID: conversationID,
+ ConversationType: msg.SessionType,
+ }
+ if err := och.conversationClient.CreateSingleChatConversations(ctx, req); err != nil {
+ log.ZWarn(ctx, "single chat or notification first create conversation error", err,
+ "conversationID", conversationID, "sessionType", msg.SessionType)
+ }
+ default:
+ log.ZWarn(ctx, "unknown session type", nil, "sessionType",
+ msg.SessionType)
+ }
+ }
+
+		log.ZInfo(ctx, "sending storage messages to mongo MQ")
+ err = och.msgTransferDatabase.MsgToMongoMQ(ctx, key, conversationID, storageMessageList, lastSeq)
+ if err != nil {
+ log.ZError(ctx, "Msg To MongoDB MQ error", err, "conversationID",
+ conversationID, "storageList", storageMessageList, "lastSeq", lastSeq)
+ }
+ log.ZInfo(ctx, "MsgToMongoMQ end")
+
+ och.toPushTopic(ctx, key, conversationID, storageList)
+ log.ZInfo(ctx, "toPushTopic end")
+ }
+}
+
+func (och *OnlineHistoryRedisConsumerHandler) handleNotification(ctx context.Context, key, conversationID string,
+ storageList, notStorageList []*ContextMsg) {
+ och.toPushTopic(ctx, key, conversationID, notStorageList)
+ var storageMessageList []*sdkws.MsgData
+ for _, msg := range storageList {
+ storageMessageList = append(storageMessageList, msg.message)
+ }
+ if len(storageMessageList) > 0 {
+ lastSeq, _, _, err := och.msgTransferDatabase.BatchInsertChat2Cache(ctx, conversationID, storageMessageList)
+ if err != nil {
+ log.ZError(ctx, "notification batch insert to redis error", err, "conversationID", conversationID,
+ "storageList", storageMessageList)
+ return
+ }
+ log.ZDebug(ctx, "success to next topic", "conversationID", conversationID)
+ err = och.msgTransferDatabase.MsgToMongoMQ(ctx, key, conversationID, storageMessageList, lastSeq)
+ if err != nil {
+ log.ZError(ctx, "Msg To MongoDB MQ error", err, "conversationID",
+ conversationID, "storageList", storageMessageList, "lastSeq", lastSeq)
+ }
+ och.toPushTopic(ctx, key, conversationID, storageList)
+ }
+}
+func (och *OnlineHistoryRedisConsumerHandler) HandleUserHasReadSeqMessages(ctx context.Context) {
+ defer func() {
+ if r := recover(); r != nil {
+ log.ZPanic(ctx, "HandleUserHasReadSeqMessages Panic", errs.ErrPanic(r))
+ }
+ }()
+
+ defer och.wg.Done()
+
+ for msg := range och.conversationUserHasReadChan {
+ if err := och.msgTransferDatabase.SetHasReadSeqToDB(ctx, msg.conversationID, msg.userHasReadMap); err != nil {
+ log.ZWarn(ctx, "set read seq to db error", err, "conversationID", msg.conversationID, "userSeqMap", msg.userHasReadMap)
+ }
+ }
+
+	log.ZInfo(ctx, "channel closed, exiting HandleUserHasReadSeqMessages")
+}
+func (och *OnlineHistoryRedisConsumerHandler) Close() {
+ close(och.conversationUserHasReadChan)
+ och.wg.Wait()
+}
+
+func (och *OnlineHistoryRedisConsumerHandler) toPushTopic(ctx context.Context, key, conversationID string, msgs []*ContextMsg) {
+ for _, v := range msgs {
+ log.ZDebug(ctx, "push msg to topic", "msg", v.message.String())
+ if err := och.msgTransferDatabase.MsgToPushMQ(v.ctx, key, conversationID, v.message); err != nil {
+ log.ZError(ctx, "msg to push topic error", err, "msg", v.message.String())
+ }
+ }
+}
+
+func withAggregationCtx(ctx context.Context, values []*ContextMsg) context.Context {
+	var allMessageOperationID string
+	for _, v := range values {
+		if opid := mcontext.GetOperationID(v.ctx); opid != "" {
+			if allMessageOperationID == "" {
+				allMessageOperationID = opid
+			} else {
+				allMessageOperationID += "$" + opid
+			}
+		}
+	}
+	return mcontext.SetOperationID(ctx, allMessageOperationID)
+}
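withAggregationCtx collapses the batch's operation IDs into one "$"-separated ID on the aggregated context. The joining rule can be sketched standalone; `joinOperationIDs` is a hypothetical helper, not part of the handler:

```go
package main

import (
	"fmt"
	"strings"
)

// joinOperationIDs concatenates non-empty operation IDs with "$",
// the same separator used when aggregating a batch's contexts.
func joinOperationIDs(ids []string) string {
	var kept []string
	for _, id := range ids {
		if id != "" {
			kept = append(kept, id)
		}
	}
	return strings.Join(kept, "$")
}

func main() {
	// empty IDs are dropped without leaving a dangling separator
	fmt.Println(joinOperationIDs([]string{"op1", "", "op2", "op3"}))
}
```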
+
+func (och *OnlineHistoryRedisConsumerHandler) HandlerRedisMessage(msg mq.Message) error { // an instance in the consumer group
+	err := och.redisMessageBatches.Put(msg.Context(), &ConsumerMessage{Ctx: msg.Context(), Key: msg.Key(), Value: msg.Value(), Raw: msg})
+	if err != nil {
+		log.ZWarn(msg.Context(), "put msg to batcher error", err, "key", msg.Key(), "value", msg.Value())
+	}
+ return nil
+}
diff --git a/internal/msgtransfer/online_msg_to_mongo_handler.go b/internal/msgtransfer/online_msg_to_mongo_handler.go
new file mode 100644
index 0000000..4d2af76
--- /dev/null
+++ b/internal/msgtransfer/online_msg_to_mongo_handler.go
@@ -0,0 +1,78 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgtransfer
+
+import (
+ "github.com/openimsdk/tools/mq"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/log"
+ "google.golang.org/protobuf/proto"
+)
+
+type OnlineHistoryMongoConsumerHandler struct {
+ msgTransferDatabase controller.MsgTransferDatabase
+ config *Config
+ webhookClient *webhook.Client
+}
+
+func NewOnlineHistoryMongoConsumerHandler(database controller.MsgTransferDatabase, config *Config) *OnlineHistoryMongoConsumerHandler {
+ return &OnlineHistoryMongoConsumerHandler{
+ msgTransferDatabase: database,
+ config: config,
+ webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
+ }
+}
+
+func (mc *OnlineHistoryMongoConsumerHandler) HandleChatWs2Mongo(val mq.Message) {
+ ctx := val.Context()
+ key := val.Key()
+ msg := val.Value()
+ msgFromMQ := pbmsg.MsgDataToMongoByMQ{}
+ err := proto.Unmarshal(msg, &msgFromMQ)
+ if err != nil {
+		log.ZError(ctx, "unmarshal failed", err, "key", key, "len", len(msg))
+ return
+ }
+ if len(msgFromMQ.MsgData) == 0 {
+ log.ZError(ctx, "msgFromMQ.MsgData is empty", nil, "key", key, "msg", msg)
+ return
+ }
+ log.ZDebug(ctx, "mongo consumer recv msg", "msgs", msgFromMQ.String())
+ err = mc.msgTransferDatabase.BatchInsertChat2DB(ctx, msgFromMQ.ConversationID, msgFromMQ.MsgData, msgFromMQ.LastSeq)
+ if err != nil {
+ log.ZError(ctx, "single data insert to mongo err", err, "msg", msgFromMQ.MsgData, "conversationID", msgFromMQ.ConversationID)
+ prommetrics.MsgInsertMongoFailedCounter.Inc()
+ } else {
+ prommetrics.MsgInsertMongoSuccessCounter.Inc()
+ val.Mark()
+ }
+
+ for _, msgData := range msgFromMQ.MsgData {
+ mc.webhookAfterMsgSaveDB(ctx, &mc.config.WebhooksConfig.AfterMsgSaveDB, msgData)
+ }
+
+ //var seqs []int64
+ //for _, msg := range msgFromMQ.MsgData {
+ // seqs = append(seqs, msg.Seq)
+ //}
+ //if err := mc.msgTransferDatabase.DeleteMessagesFromCache(ctx, msgFromMQ.ConversationID, seqs); err != nil {
+ // log.ZError(ctx, "remove cache msg from redis err", err, "msg",
+ // msgFromMQ.MsgData, "conversationID", msgFromMQ.ConversationID)
+ //}
+}
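Note that the handler calls `val.Mark()` only after the Mongo insert succeeds, so a failed batch stays unacknowledged and is redelivered. That at-least-once pattern, sketched with a stand-in message type (not the real `mq.Message` interface):

```go
package main

import (
	"errors"
	"fmt"
)

// ackMessage is a stand-in for a consumer message that must be
// acknowledged only once its payload is durably stored.
type ackMessage struct {
	payload string
	acked   bool
}

func (m *ackMessage) Mark() { m.acked = true }

// persist simulates the DB insert; it fails for empty payloads.
func persist(payload string) error {
	if payload == "" {
		return errors.New("empty payload")
	}
	return nil
}

// handle mirrors the ack-on-success flow: failed messages stay
// unacked so the broker redelivers them.
func handle(m *ackMessage) {
	if err := persist(m.payload); err != nil {
		return
	}
	m.Mark()
}

func main() {
	good, bad := &ackMessage{payload: "msg"}, &ackMessage{payload: ""}
	handle(good)
	handle(bad)
	fmt.Println(good.acked, bad.acked)
}
```

The trade-off is possible duplicate inserts on retry, which the storage layer must tolerate.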
diff --git a/internal/push/callback.go b/internal/push/callback.go
new file mode 100644
index 0000000..a2e7647
--- /dev/null
+++ b/internal/push/callback.go
@@ -0,0 +1,150 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package push
+
+import (
+ "context"
+ "encoding/json"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func (c *ConsumerHandler) webhookBeforeOfflinePush(ctx context.Context, before *config.BeforeConfig, userIDs []string, msg *sdkws.MsgData, offlinePushUserIDs *[]string) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ if msg.ContentType == constant.Typing {
+ return nil
+ }
+ req := &callbackstruct.CallbackBeforePushReq{
+ UserStatusBatchCallbackReq: callbackstruct.UserStatusBatchCallbackReq{
+ UserStatusBaseCallback: callbackstruct.UserStatusBaseCallback{
+ CallbackCommand: callbackstruct.CallbackBeforeOfflinePushCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: int(msg.SenderPlatformID),
+ Platform: constant.PlatformIDToName(int(msg.SenderPlatformID)),
+ },
+ UserIDList: userIDs,
+ },
+ OfflinePushInfo: msg.OfflinePushInfo,
+ ClientMsgID: msg.ClientMsgID,
+ SendID: msg.SendID,
+ GroupID: msg.GroupID,
+ ContentType: msg.ContentType,
+ SessionType: msg.SessionType,
+ AtUserIDs: msg.AtUserIDList,
+ Content: GetContent(msg),
+ }
+
+ resp := &callbackstruct.CallbackBeforePushResp{}
+
+ if err := c.webhookClient.SyncPost(ctx, req.GetCallbackCommand(), req, resp, before); err != nil {
+ return err
+ }
+
+ if len(resp.UserIDs) != 0 {
+ *offlinePushUserIDs = resp.UserIDs
+ }
+ if resp.OfflinePushInfo != nil {
+ msg.OfflinePushInfo = resp.OfflinePushInfo
+ }
+ return nil
+ })
+}
+
+func (c *ConsumerHandler) webhookBeforeOnlinePush(ctx context.Context, before *config.BeforeConfig, userIDs []string, msg *sdkws.MsgData) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ if msg.ContentType == constant.Typing {
+ return nil
+ }
+ req := callbackstruct.CallbackBeforePushReq{
+ UserStatusBatchCallbackReq: callbackstruct.UserStatusBatchCallbackReq{
+ UserStatusBaseCallback: callbackstruct.UserStatusBaseCallback{
+ CallbackCommand: callbackstruct.CallbackBeforeOnlinePushCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: int(msg.SenderPlatformID),
+ Platform: constant.PlatformIDToName(int(msg.SenderPlatformID)),
+ },
+ UserIDList: userIDs,
+ },
+ ClientMsgID: msg.ClientMsgID,
+ SendID: msg.SendID,
+ GroupID: msg.GroupID,
+ ContentType: msg.ContentType,
+ SessionType: msg.SessionType,
+ AtUserIDs: msg.AtUserIDList,
+ Content: GetContent(msg),
+ }
+ resp := &callbackstruct.CallbackBeforePushResp{}
+ if err := c.webhookClient.SyncPost(ctx, req.GetCallbackCommand(), req, resp, before); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (c *ConsumerHandler) webhookBeforeGroupOnlinePush(
+ ctx context.Context,
+ before *config.BeforeConfig,
+ groupID string,
+ msg *sdkws.MsgData,
+ pushToUserIDs *[]string,
+) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ if msg.ContentType == constant.Typing {
+ return nil
+ }
+ req := callbackstruct.CallbackBeforeSuperGroupOnlinePushReq{
+ UserStatusBaseCallback: callbackstruct.UserStatusBaseCallback{
+ CallbackCommand: callbackstruct.CallbackBeforeGroupOnlinePushCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ PlatformID: int(msg.SenderPlatformID),
+ Platform: constant.PlatformIDToName(int(msg.SenderPlatformID)),
+ },
+ ClientMsgID: msg.ClientMsgID,
+ SendID: msg.SendID,
+ GroupID: groupID,
+ ContentType: msg.ContentType,
+ SessionType: msg.SessionType,
+ AtUserIDs: msg.AtUserIDList,
+ Content: GetContent(msg),
+ Seq: msg.Seq,
+ }
+ resp := &callbackstruct.CallbackBeforeSuperGroupOnlinePushResp{}
+ if err := c.webhookClient.SyncPost(ctx, req.GetCallbackCommand(), req, resp, before); err != nil {
+ return err
+ }
+ if len(resp.UserIDs) != 0 {
+ *pushToUserIDs = resp.UserIDs
+ }
+ return nil
+ })
+}
+
+func GetContent(msg *sdkws.MsgData) string {
+ if msg.ContentType >= constant.NotificationBegin && msg.ContentType <= constant.NotificationEnd {
+ var notification sdkws.NotificationElem
+		if err := json.Unmarshal(msg.Content, &notification); err != nil {
+ return ""
+ }
+ return notification.Detail
+ } else {
+ return string(msg.Content)
+ }
+}
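GetContent unwraps the notification's Detail for notification content types and falls back to the raw bytes otherwise. The same step with a simplified element struct (a stand-in for `sdkws.NotificationElem`; the JSON tag here is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// notificationElem is a simplified stand-in for the protocol's
// NotificationElem: the payload of interest lives in Detail.
type notificationElem struct {
	Detail string `json:"detail"`
}

// contentOf unwraps Detail for notification payloads and falls back
// to the raw bytes for plain messages, like GetContent above.
func contentOf(content []byte, isNotification bool) string {
	if isNotification {
		var elem notificationElem
		if err := json.Unmarshal(content, &elem); err != nil {
			return "" // unparsable notification: report empty content
		}
		return elem.Detail
	}
	return string(content)
}

func main() {
	fmt.Println(contentOf([]byte(`{"detail":"group name changed"}`), true))
	fmt.Println(contentOf([]byte("hello"), false))
}
```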
diff --git a/internal/push/offlinepush/dummy/push.go b/internal/push/offlinepush/dummy/push.go
new file mode 100644
index 0000000..f4fb460
--- /dev/null
+++ b/internal/push/offlinepush/dummy/push.go
@@ -0,0 +1,38 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dummy
+
+import (
+ "context"
+ "sync/atomic"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "github.com/openimsdk/tools/log"
+)
+
+func NewClient() *Dummy {
+ return &Dummy{}
+}
+
+type Dummy struct {
+ v atomic.Bool
+}
+
+func (d *Dummy) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
+ if d.v.CompareAndSwap(false, true) {
+		log.ZWarn(ctx, "dummy push", nil, "ps", "offline push is not configured; to configure it, edit config/openim-push.yml")
+ }
+ return nil
+}
diff --git a/internal/push/offlinepush/fcm/push.go b/internal/push/offlinepush/fcm/push.go
new file mode 100644
index 0000000..e902839
--- /dev/null
+++ b/internal/push/offlinepush/fcm/push.go
@@ -0,0 +1,172 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package fcm
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "path/filepath"
+ "strings"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "github.com/openimsdk/tools/utils/httputil"
+
+ firebase "firebase.google.com/go/v4"
+ "firebase.google.com/go/v4/messaging"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+ "google.golang.org/api/option"
+)
+
+const SinglePushCountLimit = 400
+
+var Terminal = []int{constant.IOSPlatformID, constant.AndroidPlatformID, constant.WebPlatformID}
+
+type Fcm struct {
+ fcmMsgCli *messaging.Client
+ cache cache.ThirdCache
+}
+
+// NewClient initializes a new FCM client using the Firebase Admin SDK.
+// It requires the FCM service account credentials file located within the project's configuration directory.
+func NewClient(pushConf *config.Push, cache cache.ThirdCache, fcmConfigPath string) (*Fcm, error) {
+ var opt option.ClientOption
+ switch {
+ case len(pushConf.FCM.FilePath) != 0:
+ // with file path
+ credentialsFilePath := filepath.Join(fcmConfigPath, pushConf.FCM.FilePath)
+ opt = option.WithCredentialsFile(credentialsFilePath)
+ case len(pushConf.FCM.AuthURL) != 0:
+ // with authentication URL
+ client := httputil.NewHTTPClient(httputil.NewClientConfig())
+ resp, err := client.Get(pushConf.FCM.AuthURL)
+ if err != nil {
+ return nil, err
+ }
+ opt = option.WithCredentialsJSON(resp)
+ default:
+ return nil, errs.New("no FCM config").Wrap()
+ }
+
+ fcmApp, err := firebase.NewApp(context.Background(), nil, opt)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ ctx := context.Background()
+ fcmMsgClient, err := fcmApp.Messaging(ctx)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &Fcm{fcmMsgCli: fcmMsgClient, cache: cache}, nil
+}
+
+func (f *Fcm) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
+ // accounts->registrationToken
+	allTokens := make(map[string][]string)
+	for _, account := range userIDs {
+		var personTokens []string
+		for _, platformID := range Terminal {
+			token, err := f.cache.GetFcmToken(ctx, account, platformID)
+			if err == nil {
+				personTokens = append(personTokens, token)
+			}
+		}
+		allTokens[account] = personTokens
+	}
+ Success := 0
+ Fail := 0
+ notification := &messaging.Notification{}
+ notification.Body = content
+ notification.Title = title
+ var messages []*messaging.Message
+ var sendErrBuilder strings.Builder
+ var msgErrBuilder strings.Builder
+ for userID, personTokens := range allTokens {
+ apns := &messaging.APNSConfig{Payload: &messaging.APNSPayload{Aps: &messaging.Aps{Sound: opts.IOSPushSound}}}
+ messageCount := len(messages)
+ if messageCount >= SinglePushCountLimit {
+ response, err := f.fcmMsgCli.SendEach(ctx, messages)
+ if err != nil {
+ Fail = Fail + messageCount
+ // Record push error
+ sendErrBuilder.WriteString(err.Error())
+ sendErrBuilder.WriteByte('.')
+ } else {
+ Success = Success + response.SuccessCount
+ Fail = Fail + response.FailureCount
+ if response.FailureCount != 0 {
+ // Record message error
+ for i := range response.Responses {
+ if !response.Responses[i].Success {
+ msgErrBuilder.WriteString(response.Responses[i].Error.Error())
+ msgErrBuilder.WriteByte('.')
+ }
+ }
+ }
+ }
+ messages = messages[0:0]
+ }
+ if opts.IOSBadgeCount {
+ unreadCountSum, err := f.cache.IncrUserBadgeUnreadCountSum(ctx, userID)
+ if err == nil {
+ apns.Payload.Aps.Badge = &unreadCountSum
+ } else {
+ // log.Error(operationID, "IncrUserBadgeUnreadCountSum redis err", err.Error(), uid)
+ Fail++
+ continue
+ }
+ } else {
+ unreadCountSum, err := f.cache.GetUserBadgeUnreadCountSum(ctx, userID)
+ if err == nil && unreadCountSum != 0 {
+ apns.Payload.Aps.Badge = &unreadCountSum
+			} else if errors.Is(err, redis.Nil) || unreadCountSum == 0 {
+				// no recorded unread count: fall back to a badge of 1
+				one := 1
+				apns.Payload.Aps.Badge = &one
+ } else {
+ // log.Error(operationID, "GetUserBadgeUnreadCountSum redis err", err.Error(), uid)
+ Fail++
+ continue
+ }
+ }
+ for _, token := range personTokens {
+ temp := &messaging.Message{
+ Data: map[string]string{"ex": opts.Ex},
+ Token: token,
+ Notification: notification,
+ APNS: apns,
+ }
+ messages = append(messages, temp)
+ }
+ }
+ messageCount := len(messages)
+ if messageCount > 0 {
+ response, err := f.fcmMsgCli.SendEach(ctx, messages)
+ if err != nil {
+ Fail = Fail + messageCount
+ } else {
+ Success = Success + response.SuccessCount
+ Fail = Fail + response.FailureCount
+ }
+ }
+ if Fail != 0 {
+		return errs.New(fmt.Sprintf("%d messages failed to send; send err: %s; message err: %s",
+			Fail, sendErrBuilder.String(), msgErrBuilder.String())).Wrap()
+ }
+ return nil
+}
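Push flushes the accumulated FCM messages whenever the pending batch reaches SinglePushCountLimit (400), since one SendEach call caps the batch size. The chunking itself, extracted into a sketch (`chunk` is an illustrative helper, not part of the package):

```go
package main

import "fmt"

// chunk splits items into slices of at most limit elements, the way
// the FCM sender flushes once SinglePushCountLimit is reached.
func chunk(items []string, limit int) [][]string {
	var batches [][]string
	for len(items) > limit {
		batches = append(batches, items[:limit])
		items = items[limit:]
	}
	if len(items) > 0 {
		batches = append(batches, items)
	}
	return batches
}

func main() {
	tokens := make([]string, 950)
	for i := range tokens {
		tokens[i] = fmt.Sprintf("token-%d", i)
	}
	for _, b := range chunk(tokens, 400) {
		fmt.Println(len(b)) // batch sizes: 400, 400, 150
	}
}
```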
diff --git a/internal/push/offlinepush/getui/body.go b/internal/push/offlinepush/getui/body.go
new file mode 100644
index 0000000..1053eaa
--- /dev/null
+++ b/internal/push/offlinepush/getui/body.go
@@ -0,0 +1,219 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package getui
+
+import (
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+var (
+ incOne = datautil.ToPtr("+1")
+ addNum = "1"
+ defaultStrategy = strategy{
+ Default: 1,
+ }
+ msgCategory = "CATEGORY_MESSAGE"
+)
+
+type Resp struct {
+ Code int `json:"code"`
+ Msg string `json:"msg"`
+ Data any `json:"data"`
+}
+
+func (r *Resp) parseError() (err error) {
+ switch r.Code {
+ case tokenExpireCode:
+ err = ErrTokenExpire
+ case 0:
+ err = nil
+ default:
+ err = fmt.Errorf("code %d, msg %s", r.Code, r.Msg)
+ }
+ return err
+}
+
+type RespI interface {
+ parseError() error
+}
+
+type AuthReq struct {
+ Sign string `json:"sign"`
+ Timestamp string `json:"timestamp"`
+ AppKey string `json:"appkey"`
+}
+
+type AuthResp struct {
+ ExpireTime string `json:"expire_time"`
+ Token string `json:"token"`
+}
+
+type TaskResp struct {
+ TaskID string `json:"taskID"`
+}
+
+type Settings struct {
+ TTL *int64 `json:"ttl"`
+ Strategy strategy `json:"strategy"`
+}
+
+type strategy struct {
+ Default int64 `json:"default"`
+ //IOS int64 `json:"ios"`
+ //St int64 `json:"st"`
+ //Hw int64 `json:"hw"`
+ //Ho int64 `json:"ho"`
+ //XM int64 `json:"xm"`
+ //XMG int64 `json:"xmg"`
+ //VV int64 `json:"vv"`
+ //Op int64 `json:"op"`
+ //OpG int64 `json:"opg"`
+ //MZ int64 `json:"mz"`
+ //HosHw int64 `json:"hoshw"`
+ //WX int64 `json:"wx"`
+}
+
+type Audience struct {
+ Alias []string `json:"alias"`
+}
+
+type PushMessage struct {
+ Notification *Notification `json:"notification,omitempty"`
+ Transmission *string `json:"transmission,omitempty"`
+}
+
+type PushChannel struct {
+ Ios *Ios `json:"ios"`
+ Android *Android `json:"android"`
+}
+
+type PushReq struct {
+ RequestID *string `json:"request_id"`
+ Settings *Settings `json:"settings"`
+ Audience *Audience `json:"audience"`
+ PushMessage *PushMessage `json:"push_message"`
+ PushChannel *PushChannel `json:"push_channel"`
+ IsAsync *bool `json:"is_async"`
+ TaskID *string `json:"taskid"`
+}
+
+type Ios struct {
+ NotificationType *string `json:"type"`
+ AutoBadge *string `json:"auto_badge"`
+ Aps struct {
+ Sound string `json:"sound"`
+ Alert Alert `json:"alert"`
+ } `json:"aps"`
+}
+
+type Alert struct {
+ Title string `json:"title"`
+ Body string `json:"body"`
+}
+
+type Android struct {
+ Ups struct {
+ Notification Notification `json:"notification"`
+ Options Options `json:"options"`
+ } `json:"ups"`
+}
+
+type Notification struct {
+ Title string `json:"title"`
+ Body string `json:"body"`
+ ChannelID string `json:"channelID"`
+ ChannelName string `json:"ChannelName"`
+ ClickType string `json:"click_type"`
+ BadgeAddNum string `json:"badge_add_num"`
+ Category string `json:"category"`
+}
+
+type Options struct {
+ HW struct {
+ DefaultSound bool `json:"/message/android/notification/default_sound"`
+ ChannelID string `json:"/message/android/notification/channel_id"`
+ Sound string `json:"/message/android/notification/sound"`
+ Importance string `json:"/message/android/notification/importance"`
+ Category string `json:"/message/android/category"`
+ } `json:"HW"`
+ XM struct {
+ ChannelID string `json:"/extra.channel_id"`
+ } `json:"XM"`
+ VV struct {
+ Classification int `json:"/classification"`
+ } `json:"VV"`
+}
+
+type Payload struct {
+ IsSignal bool `json:"isSignal"`
+}
+
+func newPushReq(pushConf *config.Push, title, content string) PushReq {
+ pushReq := PushReq{PushMessage: &PushMessage{Notification: &Notification{
+ Title: title,
+ Body: content,
+ ClickType: "startapp",
+ ChannelID: pushConf.GeTui.ChannelID,
+ ChannelName: pushConf.GeTui.ChannelName,
+ BadgeAddNum: addNum,
+ Category: msgCategory,
+ }}}
+ return pushReq
+}
+
+func newBatchPushReq(userIDs []string, taskID string) PushReq {
+ IsAsync := true
+ return PushReq{Audience: &Audience{Alias: userIDs}, IsAsync: &IsAsync, TaskID: &taskID}
+}
+
+func (pushReq *PushReq) setPushChannel(title string, body string) {
+ pushReq.PushChannel = &PushChannel{}
+ // autoBadge := "+1"
+ pushReq.PushChannel.Ios = &Ios{}
+ notify := "notify"
+	pushReq.PushChannel.Ios.NotificationType = &notify
+ pushReq.PushChannel.Ios.Aps.Sound = "default"
+ pushReq.PushChannel.Ios.AutoBadge = incOne
+ pushReq.PushChannel.Ios.Aps.Alert = Alert{
+ Title: title,
+ Body: body,
+ }
+ pushReq.PushChannel.Android = &Android{}
+ pushReq.PushChannel.Android.Ups.Notification = Notification{
+ Title: title,
+ Body: body,
+ ClickType: "startapp",
+ }
+ pushReq.PushChannel.Android.Ups.Options = Options{
+ HW: struct {
+ DefaultSound bool `json:"/message/android/notification/default_sound"`
+ ChannelID string `json:"/message/android/notification/channel_id"`
+ Sound string `json:"/message/android/notification/sound"`
+ Importance string `json:"/message/android/notification/importance"`
+ Category string `json:"/message/android/category"`
+ }{ChannelID: "RingRing4", Sound: "/raw/ring001", Importance: "NORMAL", Category: "IM"},
+ XM: struct {
+ ChannelID string `json:"/extra.channel_id"`
+ }{ChannelID: "high_system"},
+		VV: struct {
+			Classification int `json:"/classification"`
+		}{
+ Classification: 1,
+ },
+ }
+}
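The Options struct above encodes vendor-specific override paths (e.g. `/extra.channel_id`) directly as JSON tags, which Getui forwards as per-manufacturer settings. How such path-style keys serialize, with a trimmed struct (the field set here is illustrative, not the full payload):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// xiaomiOptions mirrors the trick used in the Getui payload: the JSON
// tag is a full override path that the push vendor interprets as-is.
type xiaomiOptions struct {
	ChannelID string `json:"/extra.channel_id"`
}

// marshalXM serializes the option block; the path appears verbatim as the key.
func marshalXM(channelID string) string {
	data, _ := json.Marshal(xiaomiOptions{ChannelID: channelID})
	return string(data)
}

func main() {
	fmt.Println(marshalXM("high_system"))
}
```

Because the tags are ordinary struct tags, `encoding/json` emits the slash-prefixed paths without any special handling.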
diff --git a/internal/push/offlinepush/getui/push.go b/internal/push/offlinepush/getui/push.go
new file mode 100644
index 0000000..762ccaf
--- /dev/null
+++ b/internal/push/offlinepush/getui/push.go
@@ -0,0 +1,219 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package getui
+
+import (
+ "context"
+ "crypto/sha256"
+ "encoding/hex"
+ "errors"
+ "strconv"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/httputil"
+ "github.com/openimsdk/tools/utils/splitter"
+ "github.com/redis/go-redis/v9"
+)
+
+var (
+ ErrTokenExpire = errs.New("token expire")
+ ErrUserIDEmpty = errs.New("userIDs is empty")
+)
+
+const (
+ pushURL = "/push/single/alias"
+ authURL = "/auth"
+ taskURL = "/push/list/message"
+ batchPushURL = "/push/list/alias"
+
+ // Codes.
+ tokenExpireCode = 10001
+ tokenExpireTime = 60 * 60 * 23
+ taskIDTTL = 1000 * 60 * 60 * 24
+)
+
+type Client struct {
+ cache cache.ThirdCache
+ tokenExpireTime int64
+ taskIDTTL int64
+ pushConf *config.Push
+ httpClient *httputil.HTTPClient
+}
+
+func NewClient(pushConf *config.Push, cache cache.ThirdCache) *Client {
+ return &Client{cache: cache,
+ tokenExpireTime: tokenExpireTime,
+ taskIDTTL: taskIDTTL,
+ pushConf: pushConf,
+ httpClient: httputil.NewHTTPClient(httputil.NewClientConfig()),
+ }
+}
+
+func (g *Client) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
+ token, err := g.cache.GetGetuiToken(ctx)
+ if err != nil {
+ if errors.Is(err, redis.Nil) {
+ log.ZDebug(ctx, "getui token not exist in redis")
+ token, err = g.getTokenAndSave2Redis(ctx)
+ if err != nil {
+ return err
+ }
+ } else {
+ return err
+ }
+ }
+ pushReq := newPushReq(g.pushConf, title, content)
+ pushReq.setPushChannel(title, content)
+ if len(userIDs) > 1 {
+ maxNum := 999
+ if len(userIDs) > maxNum {
+ s := splitter.NewSplitter(maxNum, userIDs)
+ wg := sync.WaitGroup{}
+ wg.Add(len(s.GetSplitResult()))
+ for i, v := range s.GetSplitResult() {
+			go func(index int, userIDs []string) {
+				defer wg.Done()
+				// Each chunk produced by the splitter already holds at most
+				// maxNum userIDs, so push it exactly once; re-chunking and
+				// then pushing the whole slice again would send duplicate
+				// notifications. A local err also avoids a data race on the
+				// outer err across goroutines.
+				if err := g.batchPush(ctx, token, userIDs, pushReq); err != nil {
+					log.ZError(ctx, "batchPush failed", err, "index", index, "token", token, "req", pushReq)
+				}
+			}(i, v.Item)
+ }
+ wg.Wait()
+ } else {
+ err = g.batchPush(ctx, token, userIDs, pushReq)
+ }
+ } else if len(userIDs) == 1 {
+ err = g.singlePush(ctx, token, userIDs[0], pushReq)
+ } else {
+ return ErrUserIDEmpty
+ }
+	if errors.Is(err, ErrTokenExpire) {
+		// The cached token has expired: refresh it so subsequent pushes
+		// succeed. Note that this push itself is not retried here.
+		_, err = g.getTokenAndSave2Redis(ctx)
+	}
+ return err
+}
+
+func (g *Client) Auth(ctx context.Context, timeStamp int64) (token string, expireTime int64, err error) {
+ h := sha256.New()
+ h.Write(
+ []byte(g.pushConf.GeTui.AppKey + strconv.Itoa(int(timeStamp)) + g.pushConf.GeTui.MasterSecret),
+ )
+ sign := hex.EncodeToString(h.Sum(nil))
+ reqAuth := AuthReq{
+ Sign: sign,
+ Timestamp: strconv.Itoa(int(timeStamp)),
+ AppKey: g.pushConf.GeTui.AppKey,
+ }
+ respAuth := AuthResp{}
+ err = g.request(ctx, authURL, reqAuth, "", &respAuth)
+ if err != nil {
+ return "", 0, err
+ }
+ expire, err := strconv.Atoi(respAuth.ExpireTime)
+ return respAuth.Token, int64(expire), err
+}
+
+func (g *Client) GetTaskID(ctx context.Context, token string, pushReq PushReq) (string, error) {
+ respTask := TaskResp{}
+ ttl := int64(1000 * 60 * 5)
+ pushReq.Settings = &Settings{TTL: &ttl, Strategy: defaultStrategy}
+ err := g.request(ctx, taskURL, pushReq, token, &respTask)
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return respTask.TaskID, nil
+}
+
+// max num is 999.
+func (g *Client) batchPush(ctx context.Context, token string, userIDs []string, pushReq PushReq) error {
+ taskID, err := g.GetTaskID(ctx, token, pushReq)
+ if err != nil {
+ return err
+ }
+ pushReq = newBatchPushReq(userIDs, taskID)
+ return g.request(ctx, batchPushURL, pushReq, token, nil)
+}
+
+func (g *Client) singlePush(ctx context.Context, token, userID string, pushReq PushReq) error {
+ operationID := mcontext.GetOperationID(ctx)
+ pushReq.RequestID = &operationID
+ pushReq.Audience = &Audience{Alias: []string{userID}}
+ return g.request(ctx, pushURL, pushReq, token, nil)
+}
+
+func (g *Client) request(ctx context.Context, url string, input any, token string, output any) error {
+ header := map[string]string{"token": token}
+ resp := &Resp{}
+ resp.Data = output
+ return g.postReturn(ctx, g.pushConf.GeTui.PushUrl+url, header, input, resp, 3)
+}
+
+func (g *Client) postReturn(
+ ctx context.Context,
+ url string,
+ header map[string]string,
+ input any,
+ output RespI,
+ timeout int,
+) error {
+ err := g.httpClient.PostReturn(ctx, url, header, input, output, timeout)
+ if err != nil {
+ return err
+ }
+ log.ZDebug(ctx, "postReturn", "url", url, "header", header, "input", input, "timeout", timeout, "output", output)
+ return output.parseError()
+}
+
+func (g *Client) getTokenAndSave2Redis(ctx context.Context) (token string, err error) {
+ token, _, err = g.Auth(ctx, time.Now().UnixNano()/1e6)
+ if err != nil {
+ return
+ }
+	err = g.cache.SetGetuiToken(ctx, token, g.tokenExpireTime)
+ if err != nil {
+ return
+ }
+ return token, nil
+}
+
+func (g *Client) GetTaskIDAndSave2Redis(ctx context.Context, token string, pushReq PushReq) (taskID string, err error) {
+ pushReq.Settings = &Settings{TTL: &g.taskIDTTL, Strategy: defaultStrategy}
+ taskID, err = g.GetTaskID(ctx, token, pushReq)
+ if err != nil {
+ return
+ }
+ err = g.cache.SetGetuiTaskID(ctx, taskID, g.tokenExpireTime)
+ if err != nil {
+ return
+ }
+	return taskID, nil
+}
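For reference, `Client.Auth` signs the GeTui auth request as `hex(sha256(appKey + millisecondTimestamp + masterSecret))`. A minimal standalone sketch of that signature scheme (the credentials below are hypothetical placeholders, not real keys):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
)

// geTuiSign reproduces the signature scheme used by Client.Auth:
// hex(sha256(appKey + millisecond timestamp + masterSecret)).
func geTuiSign(appKey, masterSecret string, timestampMs int64) string {
	h := sha256.New()
	h.Write([]byte(appKey + strconv.FormatInt(timestampMs, 10) + masterSecret))
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	// Hypothetical credentials for illustration only.
	fmt.Println(geTuiSign("demoAppKey", "demoSecret", 1700000000000))
}
```

The millisecond timestamp matches what `getTokenAndSave2Redis` computes with `time.Now().UnixNano()/1e6` (equivalently `time.Now().UnixMilli()`).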
diff --git a/internal/push/offlinepush/jpush/body/audience.go b/internal/push/offlinepush/jpush/body/audience.go
new file mode 100644
index 0000000..9db66ff
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/audience.go
@@ -0,0 +1,64 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+const (
+ TAG = "tag"
+ TAGAND = "tag_and"
+ TAGNOT = "tag_not"
+ ALIAS = "alias"
+ REGISTRATIONID = "registration_id"
+)
+
+type Audience struct {
+ Object any
+ audience map[string][]string
+}
+
+func (a *Audience) set(key string, v []string) {
+ if a.audience == nil {
+ a.audience = make(map[string][]string)
+ a.Object = a.audience
+ }
+ a.audience[key] = v
+}
+
+func (a *Audience) SetTag(tags []string) {
+ a.set(TAG, tags)
+}
+
+func (a *Audience) SetTagAnd(tags []string) {
+ a.set(TAGAND, tags)
+}
+
+func (a *Audience) SetTagNot(tags []string) {
+ a.set(TAGNOT, tags)
+}
+
+func (a *Audience) SetAlias(alias []string) {
+ a.set(ALIAS, alias)
+}
+
+func (a *Audience) SetRegistrationId(ids []string) {
+ a.set(REGISTRATIONID, ids)
+}
+
+func (a *Audience) SetAll() {
+ a.Object = "all"
+}
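`body.Audience` lazily allocates its targeting map on the first `set` call and exposes it through `Object`, which `SetAll` then replaces with the literal string `"all"`. A self-contained sketch of that pattern (re-declaring a minimal copy of the type here rather than importing the package):

```go
package main

import "fmt"

// audience mirrors body.Audience: Object is either the lazily
// allocated map of targeting keys or the literal string "all".
type audience struct {
	Object   any
	audience map[string][]string
}

func (a *audience) set(key string, v []string) {
	if a.audience == nil {
		a.audience = make(map[string][]string)
		a.Object = a.audience
	}
	a.audience[key] = v
}

func (a *audience) SetAlias(alias []string) { a.set("alias", alias) }
func (a *audience) SetAll()                 { a.Object = "all" }

func main() {
	var a audience
	a.SetAlias([]string{"user1", "user2"})
	fmt.Println(a.Object) // the targeting map
	a.SetAll()
	fmt.Println(a.Object) // "all" broadcasts to every device
}
```

Note that `SetAll` only swaps `Object`; the previously built map is kept alive internally, so calling `set` again would silently re-point `Object` away from `"all"` only if the map were still nil.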
diff --git a/internal/push/offlinepush/jpush/body/message.go b/internal/push/offlinepush/jpush/body/message.go
new file mode 100644
index 0000000..e885d1d
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/message.go
@@ -0,0 +1,41 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+type Message struct {
+ MsgContent string `json:"msg_content"`
+ Title string `json:"title,omitempty"`
+ ContentType string `json:"content_type,omitempty"`
+ Extras map[string]any `json:"extras,omitempty"`
+}
+
+func (m *Message) SetMsgContent(c string) {
+ m.MsgContent = c
+}
+
+func (m *Message) SetTitle(t string) {
+ m.Title = t
+}
+
+func (m *Message) SetContentType(c string) {
+ m.ContentType = c
+}
+
+func (m *Message) SetExtras(key string, value any) {
+ if m.Extras == nil {
+ m.Extras = make(map[string]any)
+ }
+ m.Extras[key] = value
+}
diff --git a/internal/push/offlinepush/jpush/body/notification.go b/internal/push/offlinepush/jpush/body/notification.go
new file mode 100644
index 0000000..d0901ad
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/notification.go
@@ -0,0 +1,72 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+)
+
+type Notification struct {
+ Alert string `json:"alert,omitempty"`
+ Android Android `json:"android,omitempty"`
+ IOS Ios `json:"ios,omitempty"`
+}
+
+type Android struct {
+ Alert string `json:"alert,omitempty"`
+ Title string `json:"title,omitempty"`
+ Intent struct {
+ URL string `json:"url,omitempty"`
+ } `json:"intent,omitempty"`
+ Extras map[string]string `json:"extras,omitempty"`
+}
+type Ios struct {
+ Alert IosAlert `json:"alert,omitempty"`
+ Sound string `json:"sound,omitempty"`
+ Badge string `json:"badge,omitempty"`
+ Extras map[string]string `json:"extras,omitempty"`
+ MutableContent bool `json:"mutable-content"`
+}
+
+type IosAlert struct {
+ Title string `json:"title,omitempty"`
+ Body string `json:"body,omitempty"`
+}
+
+func (n *Notification) SetAlert(alert string, title string, opts *options.Opts) {
+ n.Alert = alert
+ n.Android.Alert = alert
+ n.Android.Title = title
+ n.IOS.Alert.Body = alert
+ n.IOS.Alert.Title = title
+ n.IOS.Sound = opts.IOSPushSound
+ if opts.IOSBadgeCount {
+ n.IOS.Badge = "+1"
+ }
+}
+
+func (n *Notification) SetExtras(extras map[string]string) {
+ n.IOS.Extras = extras
+ n.Android.Extras = extras
+}
+
+func (n *Notification) SetAndroidIntent(pushConf *config.Push) {
+ n.Android.Intent.URL = pushConf.JPush.PushIntent
+}
+
+func (n *Notification) IOSEnableMutableContent() {
+ n.IOS.MutableContent = true
+}
diff --git a/internal/push/offlinepush/jpush/body/options.go b/internal/push/offlinepush/jpush/body/options.go
new file mode 100644
index 0000000..2edf80c
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/options.go
@@ -0,0 +1,23 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+type Options struct {
+ ApnsProduction bool `json:"apns_production"`
+}
+
+func (o *Options) SetApnsProduction(c bool) {
+ o.ApnsProduction = c
+}
diff --git a/internal/push/offlinepush/jpush/body/platform.go b/internal/push/offlinepush/jpush/body/platform.go
new file mode 100644
index 0000000..1c8b9b0
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/platform.go
@@ -0,0 +1,99 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+import (
+ "github.com/openimsdk/tools/errs"
+
+ "git.imall.cloud/openim/protocol/constant"
+)
+
+const (
+ ANDROID = "android"
+ IOS = "ios"
+ QUICKAPP = "quickapp"
+ WINDOWSPHONE = "winphone"
+ ALL = "all"
+)
+
+type Platform struct {
+	Os      any
+	osArray []string
+}
+
+func (p *Platform) Set(os string) error {
+	if p.Os == nil {
+		p.osArray = make([]string, 0, 4)
+	} else if _, ok := p.Os.(string); ok {
+		// SetAll stores the literal string "all"; individual platforms
+		// can no longer be added after that.
+		return errs.New("platform is all")
+	}
+
+	for _, value := range p.osArray {
+		if os == value {
+			return nil
+		}
+	}
+
+	switch os {
+	case IOS, ANDROID, QUICKAPP, WINDOWSPHONE:
+		p.osArray = append(p.osArray, os)
+		p.Os = p.osArray
+	default:
+		return errs.New("unknown platform")
+	}
+
+	return nil
+}
+
+func (p *Platform) SetPlatform(platform string) error {
+ switch platform {
+ case constant.AndroidPlatformStr:
+ return p.SetAndroid()
+ case constant.IOSPlatformStr:
+ return p.SetIOS()
+ default:
+ return errs.New("platform err")
+ }
+}
+
+func (p *Platform) SetIOS() error {
+ return p.Set(IOS)
+}
+
+func (p *Platform) SetAndroid() error {
+ return p.Set(ANDROID)
+}
+
+func (p *Platform) SetQuickApp() error {
+ return p.Set(QUICKAPP)
+}
+
+func (p *Platform) SetWindowsPhone() error {
+ return p.Set(WINDOWSPHONE)
+}
+
+func (p *Platform) SetAll() {
+ p.Os = ALL
+}
diff --git a/internal/push/offlinepush/jpush/body/pushobj.go b/internal/push/offlinepush/jpush/body/pushobj.go
new file mode 100644
index 0000000..3dc133d
--- /dev/null
+++ b/internal/push/offlinepush/jpush/body/pushobj.go
@@ -0,0 +1,43 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package body
+
+type PushObj struct {
+ Platform any `json:"platform"`
+ Audience any `json:"audience"`
+ Notification any `json:"notification,omitempty"`
+ Message any `json:"message,omitempty"`
+ Options any `json:"options,omitempty"`
+}
+
+func (p *PushObj) SetPlatform(pf *Platform) {
+ p.Platform = pf.Os
+}
+
+func (p *PushObj) SetAudience(ad *Audience) {
+ p.Audience = ad.Object
+}
+
+func (p *PushObj) SetNotification(no *Notification) {
+ p.Notification = no
+}
+
+func (p *PushObj) SetMessage(m *Message) {
+ p.Message = m
+}
+
+func (p *PushObj) SetOptions(o *Options) {
+ p.Options = o
+}
diff --git a/internal/push/offlinepush/jpush/push.go b/internal/push/offlinepush/jpush/push.go
new file mode 100644
index 0000000..07a4df5
--- /dev/null
+++ b/internal/push/offlinepush/jpush/push.go
@@ -0,0 +1,107 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package jpush
+
+import (
+ "context"
+ "encoding/base64"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/jpush/body"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/utils/httputil"
+)
+
+type JPush struct {
+ pushConf *config.Push
+ httpClient *httputil.HTTPClient
+}
+
+func NewClient(pushConf *config.Push) *JPush {
+ return &JPush{pushConf: pushConf,
+ httpClient: httputil.NewHTTPClient(httputil.NewClientConfig()),
+ }
+}
+
+// Auth is a no-op: JPush authenticates every request with HTTP Basic
+// auth (see getAuthorization) rather than a cached token.
+func (j *JPush) Auth(apiKey, secretKey string, timeStamp int64) (token string, err error) {
+	return token, nil
+}
+
+// SetAlias is a no-op placeholder kept for interface symmetry.
+func (j *JPush) SetAlias(cid, alias string) (resp string, err error) {
+	return resp, nil
+}
+
+func (j *JPush) getAuthorization(appKey string, masterSecret string) string {
+ str := fmt.Sprintf("%s:%s", appKey, masterSecret)
+ buf := []byte(str)
+ Authorization := fmt.Sprintf("Basic %s", base64.StdEncoding.EncodeToString(buf))
+ return Authorization
+}
+
+func (j *JPush) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
+ var pf body.Platform
+ pf.SetAll()
+ var au body.Audience
+ au.SetAlias(userIDs)
+ var no body.Notification
+ extras := make(map[string]string)
+ extras["ex"] = opts.Ex
+ if opts.Signal.ClientMsgID != "" {
+ extras["ClientMsgID"] = opts.Signal.ClientMsgID
+ }
+ no.IOSEnableMutableContent()
+ no.SetExtras(extras)
+ no.SetAlert(content, title, opts)
+ no.SetAndroidIntent(j.pushConf)
+
+ var msg body.Message
+ msg.SetMsgContent(content)
+ msg.SetTitle(title)
+ if opts.Signal.ClientMsgID != "" {
+ msg.SetExtras("ClientMsgID", opts.Signal.ClientMsgID)
+ }
+ msg.SetExtras("ex", opts.Ex)
+ var opt body.Options
+ opt.SetApnsProduction(j.pushConf.IOSPush.Production)
+ var pushObj body.PushObj
+ pushObj.SetPlatform(&pf)
+ pushObj.SetAudience(&au)
+ pushObj.SetNotification(&no)
+ pushObj.SetMessage(&msg)
+ pushObj.SetOptions(&opt)
+ var resp map[string]any
+ return j.request(ctx, pushObj, &resp, 5)
+}
+
+func (j *JPush) request(ctx context.Context, po body.PushObj, resp *map[string]any, timeout int) error {
+ err := j.httpClient.PostReturn(
+ ctx,
+ j.pushConf.JPush.PushURL,
+ map[string]string{
+ "Authorization": j.getAuthorization(j.pushConf.JPush.AppKey, j.pushConf.JPush.MasterSecret),
+ },
+ po,
+ resp,
+ timeout,
+ )
+ if err != nil {
+ return err
+ }
+ if (*resp)["sendno"] != "0" {
+ return fmt.Errorf("jpush push failed %v", resp)
+ }
+ return nil
+}
diff --git a/internal/push/offlinepush/offlinepusher.go b/internal/push/offlinepush/offlinepusher.go
new file mode 100644
index 0000000..2c9be90
--- /dev/null
+++ b/internal/push/offlinepush/offlinepusher.go
@@ -0,0 +1,55 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package offlinepush
+
+import (
+ "context"
+ "strings"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/dummy"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/fcm"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/getui"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/jpush"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+)
+
+const (
+ geTUI = "getui"
+ firebase = "fcm"
+ jPush = "jpush"
+)
+
+// OfflinePusher Offline Pusher.
+type OfflinePusher interface {
+ Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error
+}
+
+func NewOfflinePusher(pushConf *config.Push, cache cache.ThirdCache, fcmConfigPath string) (OfflinePusher, error) {
+	pushConf.Enable = strings.ToLower(pushConf.Enable)
+	switch pushConf.Enable {
+	case geTUI:
+		return getui.NewClient(pushConf, cache), nil
+	case firebase:
+		return fcm.NewClient(pushConf, cache, fcmConfigPath)
+	case jPush:
+		return jpush.NewClient(pushConf), nil
+	default:
+		return dummy.NewClient(), nil
+	}
+}
diff --git a/internal/push/offlinepush/options/options.go b/internal/push/offlinepush/options/options.go
new file mode 100644
index 0000000..056f6b7
--- /dev/null
+++ b/internal/push/offlinepush/options/options.go
@@ -0,0 +1,14 @@
+package options
+
+// Opts opts.
+type Opts struct {
+ Signal *Signal
+ IOSPushSound string
+ IOSBadgeCount bool
+ Ex string
+}
+
+// Signal message id.
+type Signal struct {
+ ClientMsgID string
+}
diff --git a/internal/push/offlinepush_handler.go b/internal/push/offlinepush_handler.go
new file mode 100644
index 0000000..8f188e0
--- /dev/null
+++ b/internal/push/offlinepush_handler.go
@@ -0,0 +1,105 @@
+package push
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/protocol/constant"
+ pbpush "git.imall.cloud/openim/protocol/push"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "google.golang.org/protobuf/proto"
+)
+
+type OfflinePushConsumerHandler struct {
+ offlinePusher offlinepush.OfflinePusher
+}
+
+func NewOfflinePushConsumerHandler(offlinePusher offlinepush.OfflinePusher) *OfflinePushConsumerHandler {
+ return &OfflinePushConsumerHandler{
+ offlinePusher: offlinePusher,
+ }
+}
+
+func (o *OfflinePushConsumerHandler) HandleMsg2OfflinePush(ctx context.Context, msg []byte) {
+ offlinePushMsg := pbpush.PushMsgReq{}
+ if err := proto.Unmarshal(msg, &offlinePushMsg); err != nil {
+ log.ZError(ctx, "offline push Unmarshal msg err", err, "msg", string(msg))
+ return
+ }
+ if offlinePushMsg.MsgData == nil || offlinePushMsg.UserIDs == nil {
+ log.ZError(ctx, "offline push msg is empty", errs.New("offlinePushMsg is empty"), "userIDs", offlinePushMsg.UserIDs, "msg", offlinePushMsg.MsgData)
+ return
+ }
+ if offlinePushMsg.MsgData.Status == constant.MsgStatusSending {
+ offlinePushMsg.MsgData.Status = constant.MsgStatusSendSuccess
+ }
+ log.ZInfo(ctx, "receive to OfflinePush MQ", "userIDs", offlinePushMsg.UserIDs, "msg", offlinePushMsg.MsgData)
+
+ err := o.offlinePushMsg(ctx, offlinePushMsg.MsgData, offlinePushMsg.UserIDs)
+ if err != nil {
+ log.ZWarn(ctx, "offline push failed", err, "msg", offlinePushMsg.String())
+ }
+}
+
+func (o *OfflinePushConsumerHandler) getOfflinePushInfos(msg *sdkws.MsgData) (title, content string, opts *options.Opts, err error) {
+ type AtTextElem struct {
+ Text string `json:"text,omitempty"`
+ AtUserList []string `json:"atUserList,omitempty"`
+ IsAtSelf bool `json:"isAtSelf"`
+ }
+
+	opts = &options.Opts{Signal: &options.Signal{ClientMsgID: msg.ClientMsgID}}
+	if msg.OfflinePushInfo != nil {
+		opts.IOSBadgeCount = msg.OfflinePushInfo.IOSBadgeCount
+		opts.IOSPushSound = msg.OfflinePushInfo.IOSPushSound
+		opts.Ex = msg.OfflinePushInfo.Ex
+		title = msg.OfflinePushInfo.Title
+		content = msg.OfflinePushInfo.Desc
+	}
+	if title == "" {
+		switch msg.ContentType {
+		case constant.Text, constant.Picture, constant.Voice, constant.Video, constant.File:
+			title = constant.ContentType2PushContent[int64(msg.ContentType)]
+		case constant.AtText:
+			// The at-element is parsed but no title is derived from it yet,
+			// so title (and therefore content) may remain empty for AtText.
+			ac := AtTextElem{}
+			_ = jsonutil.JsonStringToStruct(string(msg.Content), &ac)
+		case constant.SignalingNotification:
+			title = constant.ContentType2PushContent[constant.SignalMsg]
+		default:
+			title = constant.ContentType2PushContent[constant.Common]
+		}
+	}
+ if content == "" {
+ content = title
+ }
+ return
+}
+
+func (o *OfflinePushConsumerHandler) offlinePushMsg(ctx context.Context, msg *sdkws.MsgData, offlinePushUserIDs []string) error {
+ title, content, opts, err := o.getOfflinePushInfos(msg)
+ if err != nil {
+ return err
+ }
+ err = o.offlinePusher.Push(ctx, offlinePushUserIDs, title, content, opts)
+ if err != nil {
+ prommetrics.MsgOfflinePushFailedCounter.Inc()
+ return err
+ }
+ return nil
+}
diff --git a/internal/push/onlinepusher.go b/internal/push/onlinepusher.go
new file mode 100644
index 0000000..ddfceec
--- /dev/null
+++ b/internal/push/onlinepusher.go
@@ -0,0 +1,214 @@
+package push
+
+import (
+ "context"
+ "fmt"
+ "sync"
+
+ "git.imall.cloud/openim/protocol/msggateway"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "golang.org/x/sync/errgroup"
+ "google.golang.org/grpc"
+
+ conf "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+)
+
+type OnlinePusher interface {
+ GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData,
+ pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error)
+ GetOnlinePushFailedUserIDs(ctx context.Context, msg *sdkws.MsgData, wsResults []*msggateway.SingleMsgToUserResults,
+ pushToUserIDs *[]string) []string
+}
+
+type emptyOnlinePusher struct{}
+
+func newEmptyOnlinePusher() *emptyOnlinePusher {
+ return &emptyOnlinePusher{}
+}
+
+func (emptyOnlinePusher) GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData, pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error) {
+	log.ZInfo(ctx, "emptyOnlinePusher GetConnsAndOnlinePush")
+	return nil, nil
+}
+
+func (emptyOnlinePusher) GetOnlinePushFailedUserIDs(ctx context.Context, msg *sdkws.MsgData, wsResults []*msggateway.SingleMsgToUserResults, pushToUserIDs *[]string) []string {
+	log.ZInfo(ctx, "emptyOnlinePusher GetOnlinePushFailedUserIDs")
+	return nil
+}
+
+func NewOnlinePusher(disCov discovery.Conn, config *Config) (OnlinePusher, error) {
+ if conf.Standalone() {
+ return NewDefaultAllNode(disCov, config), nil
+ }
+ if runtimeenv.RuntimeEnvironment() == conf.KUBERNETES {
+ return NewDefaultAllNode(disCov, config), nil
+ }
+ switch config.Discovery.Enable {
+ case conf.ETCD:
+ return NewDefaultAllNode(disCov, config), nil
+ default:
+ return nil, errs.New(fmt.Sprintf("unsupported discovery type %s", config.Discovery.Enable))
+ }
+}
+
+type DefaultAllNode struct {
+ disCov discovery.Conn
+ config *Config
+}
+
+func NewDefaultAllNode(disCov discovery.Conn, config *Config) *DefaultAllNode {
+ return &DefaultAllNode{disCov: disCov, config: config}
+}
+
+func (d *DefaultAllNode) GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData,
+ pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error) {
+	conns, err := d.disCov.GetConns(ctx, d.config.Discovery.RpcService.MessageGateway)
+	if err != nil {
+		return nil, err
+	}
+	if len(conns) == 0 {
+		log.ZWarn(ctx, "get gateway conn 0", nil)
+	} else {
+		log.ZDebug(ctx, "get gateway conn", "conn length", len(conns))
+	}
+
+ var (
+ mu sync.Mutex
+ wg = errgroup.Group{}
+ input = &msggateway.OnlineBatchPushOneMsgReq{MsgData: msg, PushToUserIDs: pushToUserIDs}
+ maxWorkers = d.config.RpcConfig.MaxConcurrentWorkers
+ )
+
+ if maxWorkers < 3 {
+ maxWorkers = 3
+ }
+
+ wg.SetLimit(maxWorkers)
+
+ // Online push message
+ for _, conn := range conns {
+ conn := conn // loop var safe
+ ctx := ctx
+ wg.Go(func() error {
+ msgClient := msggateway.NewMsgGatewayClient(conn)
+ reply, err := msgClient.SuperGroupOnlineBatchPushOneMsg(ctx, input)
+ if err != nil {
+ log.ZError(ctx, "SuperGroupOnlineBatchPushOneMsg ", err, "req:", input.String())
+ return nil
+ }
+
+ log.ZDebug(ctx, "push result", "reply", reply)
+ if reply != nil && reply.SinglePushResult != nil {
+ mu.Lock()
+ wsResults = append(wsResults, reply.SinglePushResult...)
+ mu.Unlock()
+ }
+
+ return nil
+ })
+ }
+
+ _ = wg.Wait()
+
+ // always return nil
+ return wsResults, nil
+}
+
+func (d *DefaultAllNode) GetOnlinePushFailedUserIDs(_ context.Context, msg *sdkws.MsgData,
+ wsResults []*msggateway.SingleMsgToUserResults, pushToUserIDs *[]string) []string {
+
+	onlineSuccessUserIDs := []string{msg.SendID}
+	for _, v := range wsResults {
+		// The message sender never needs an offline push.
+		if msg.SendID == v.UserID {
+			continue
+		}
+		// Online push succeeded, so no offline fallback is required.
+		if v.OnlinePush {
+			onlineSuccessUserIDs = append(onlineSuccessUserIDs, v.UserID)
+		}
+	}
+
+ return datautil.SliceSub(*pushToUserIDs, onlineSuccessUserIDs)
+}
+
+type K8sStaticConsistentHash struct {
+ disCov discovery.SvcDiscoveryRegistry
+ config *Config
+}
+
+func NewK8sStaticConsistentHash(disCov discovery.SvcDiscoveryRegistry, config *Config) *K8sStaticConsistentHash {
+ return &K8sStaticConsistentHash{disCov: disCov, config: config}
+}
+
+func (k *K8sStaticConsistentHash) GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData,
+ pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error) {
+
+ var usersHost = make(map[string][]string)
+ for _, v := range pushToUserIDs {
+ tHost, err := k.disCov.GetUserIdHashGatewayHost(ctx, v)
+ if err != nil {
+ log.ZError(ctx, "get msg gateway hash error", err)
+ return nil, err
+ }
+		// append works on the zero-value nil slice, so no existence check is needed.
+		usersHost[tHost] = append(usersHost[tHost], v)
+ }
+ log.ZDebug(ctx, "genUsers send hosts struct:", "usersHost", usersHost)
+	var usersConns = make(map[grpc.ClientConnInterface][]string)
+	for host, userIds := range usersHost {
+		tconn, err := k.disCov.GetConn(ctx, host)
+		if err != nil {
+			log.ZError(ctx, "get gateway conn error", err, "host", host)
+			continue
+		}
+		usersConns[tconn] = userIds
+	}
+ var (
+ mu sync.Mutex
+ wg = errgroup.Group{}
+ maxWorkers = k.config.RpcConfig.MaxConcurrentWorkers
+ )
+ if maxWorkers < 3 {
+ maxWorkers = 3
+ }
+ wg.SetLimit(maxWorkers)
+ for conn, userIds := range usersConns {
+ tcon := conn
+ tuserIds := userIds
+ wg.Go(func() error {
+ input := &msggateway.OnlineBatchPushOneMsgReq{MsgData: msg, PushToUserIDs: tuserIds}
+ msgClient := msggateway.NewMsgGatewayClient(tcon)
+ reply, err := msgClient.SuperGroupOnlineBatchPushOneMsg(ctx, input)
+			if err != nil {
+				log.ZError(ctx, "SuperGroupOnlineBatchPushOneMsg", err, "req", input.String())
+				return nil
+			}
+ log.ZDebug(ctx, "push result", "reply", reply)
+ if reply != nil && reply.SinglePushResult != nil {
+ mu.Lock()
+ wsResults = append(wsResults, reply.SinglePushResult...)
+ mu.Unlock()
+ }
+ return nil
+ })
+ }
+ _ = wg.Wait()
+ return wsResults, nil
+}
+func (k *K8sStaticConsistentHash) GetOnlinePushFailedUserIDs(_ context.Context, _ *sdkws.MsgData,
+ wsResults []*msggateway.SingleMsgToUserResults, _ *[]string) []string {
+ var needOfflinePushUserIDs []string
+ for _, v := range wsResults {
+ if !v.OnlinePush {
+ needOfflinePushUserIDs = append(needOfflinePushUserIDs, v.UserID)
+ }
+ }
+ return needOfflinePushUserIDs
+}
diff --git a/internal/push/push.go b/internal/push/push.go
new file mode 100644
index 0000000..5524914
--- /dev/null
+++ b/internal/push/push.go
@@ -0,0 +1,163 @@
+package push
+
+import (
+ "context"
+ "math/rand"
+ "strconv"
+ "time"
+
+ "github.com/openimsdk/tools/mq"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/mqbuild"
+ pbpush "git.imall.cloud/openim/protocol/push"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "google.golang.org/grpc"
+)
+
+type pushServer struct {
+ pbpush.UnimplementedPushMsgServiceServer
+ database controller.PushDatabase
+ disCov discovery.Conn
+ offlinePusher offlinepush.OfflinePusher
+}
+
+type Config struct {
+ RpcConfig config.Push
+ RedisConfig config.Redis
+ MongoConfig config.Mongo
+ KafkaConfig config.Kafka
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+ FcmConfigPath config.Path
+}
+
+func (p pushServer) DelUserPushToken(ctx context.Context,
+ req *pbpush.DelUserPushTokenReq) (resp *pbpush.DelUserPushTokenResp, err error) {
+ if err = p.database.DelFcmToken(ctx, req.UserID, int(req.PlatformID)); err != nil {
+ return nil, err
+ }
+ return &pbpush.DelUserPushTokenResp{}, nil
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongoConfig, &config.RedisConfig)
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+ var cacheModel cache.ThirdCache
+ if rdb == nil {
+ mdb, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ mc, err := mgo.NewCacheMgo(mdb.GetDB())
+ if err != nil {
+ return err
+ }
+ cacheModel = mcache.NewThirdCache(mc)
+ } else {
+ cacheModel = redis.NewThirdCache(rdb)
+ }
+ offlinePusher, err := offlinepush.NewOfflinePusher(&config.RpcConfig, cacheModel, string(config.FcmConfigPath))
+ if err != nil {
+ return err
+ }
+ builder := mqbuild.NewBuilder(&config.KafkaConfig)
+
+ offlinePushProducer, err := builder.GetTopicProducer(ctx, config.KafkaConfig.ToOfflinePushTopic)
+ if err != nil {
+ return err
+ }
+ database := controller.NewPushDatabase(cacheModel, offlinePushProducer)
+
+ pushConsumer, err := builder.GetTopicConsumer(ctx, config.KafkaConfig.ToPushTopic)
+ if err != nil {
+ return err
+ }
+ offlinePushConsumer, err := builder.GetTopicConsumer(ctx, config.KafkaConfig.ToOfflinePushTopic)
+ if err != nil {
+ return err
+ }
+
+ pushHandler, err := NewConsumerHandler(ctx, config, database, offlinePusher, rdb, client)
+ if err != nil {
+ return err
+ }
+
+ offlineHandler := NewOfflinePushConsumerHandler(offlinePusher)
+
+ pbpush.RegisterPushMsgServiceServer(server, &pushServer{
+ database: database,
+ disCov: client,
+ offlinePusher: offlinePusher,
+ })
+
+ go func() {
+ consumerCtx := mcontext.SetOperationID(context.Background(), "push_"+strconv.Itoa(int(rand.Uint32())))
+ waitDone := make(chan struct{})
+ go func() {
+ defer func() {
+ if r := recover(); r != nil {
+ log.ZError(consumerCtx, "WaitCache panic", nil, "panic", r)
+ }
+ close(waitDone)
+ }()
+ pushHandler.WaitCache()
+ }()
+ select {
+ case <-waitDone:
+ log.ZInfo(consumerCtx, "WaitCache completed successfully")
+ case <-time.After(30 * time.Second):
+ log.ZWarn(consumerCtx, "WaitCache timeout after 30s, will start subscribe anyway", nil)
+ }
+ fn := func(msg mq.Message) error {
+ pushHandler.HandleMs2PsChat(authverify.WithTempAdmin(msg.Context()), msg.Value())
+ return nil
+ }
+ log.ZInfo(consumerCtx, "begin consume messages")
+ for {
+ if err := pushConsumer.Subscribe(consumerCtx, fn); err != nil {
+ log.ZError(consumerCtx, "subscribe err, will retry in 5 seconds", err)
+ time.Sleep(5 * time.Second)
+ continue
+ }
+ // Subscribe returned normally (possibly due to context cancellation), retry immediately
+ log.ZWarn(consumerCtx, "Subscribe returned normally, will retry immediately", nil)
+ }
+ }()
+
+ go func() {
+ fn := func(msg mq.Message) error {
+ offlineHandler.HandleMsg2OfflinePush(msg.Context(), msg.Value())
+ return nil
+ }
+ consumerCtx := mcontext.SetOperationID(context.Background(), "push_"+strconv.Itoa(int(rand.Uint32())))
+ log.ZInfo(consumerCtx, "begin consume messages")
+ for {
+ if err := offlinePushConsumer.Subscribe(consumerCtx, fn); err != nil {
+ log.ZError(consumerCtx, "subscribe err, will retry in 5 seconds", err)
+ time.Sleep(5 * time.Second)
+ continue
+ }
+ // Subscribe returned normally (possibly due to context cancellation), retry immediately
+ log.ZWarn(consumerCtx, "Subscribe returned normally, will retry immediately", nil)
+ }
+ }()
+
+ return nil
+}
diff --git a/internal/push/push_handler.go b/internal/push/push_handler.go
new file mode 100644
index 0000000..24872f1
--- /dev/null
+++ b/internal/push/push_handler.go
@@ -0,0 +1,615 @@
+package push
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush"
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push/offlinepush/options"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/conversationutil"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msggateway"
+ pbpush "git.imall.cloud/openim/protocol/push"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+ "github.com/redis/go-redis/v9"
+ "google.golang.org/protobuf/proto"
+)
+
+type ConsumerHandler struct {
+ offlinePusher offlinepush.OfflinePusher
+ onlinePusher OnlinePusher
+ pushDatabase controller.PushDatabase
+ onlineCache rpccache.OnlineCache
+ groupLocalCache *rpccache.GroupLocalCache
+ conversationLocalCache *rpccache.ConversationLocalCache
+ webhookClient *webhook.Client
+ config *Config
+ userClient *rpcli.UserClient
+ groupClient *rpcli.GroupClient
+ msgClient *rpcli.MsgClient
+ conversationClient *rpcli.ConversationClient
+}
+
+func NewConsumerHandler(ctx context.Context, config *Config, database controller.PushDatabase, offlinePusher offlinepush.OfflinePusher, rdb redis.UniversalClient, client discovery.Conn) (*ConsumerHandler, error) {
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return nil, err
+ }
+ groupConn, err := client.GetConn(ctx, config.Discovery.RpcService.Group)
+ if err != nil {
+ return nil, err
+ }
+ msgConn, err := client.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return nil, err
+ }
+ conversationConn, err := client.GetConn(ctx, config.Discovery.RpcService.Conversation)
+ if err != nil {
+ return nil, err
+ }
+ onlinePusher, err := NewOnlinePusher(client, config)
+ if err != nil {
+ return nil, err
+ }
+ var consumerHandler ConsumerHandler
+ consumerHandler.userClient = rpcli.NewUserClient(userConn)
+ consumerHandler.groupClient = rpcli.NewGroupClient(groupConn)
+ consumerHandler.msgClient = rpcli.NewMsgClient(msgConn)
+ consumerHandler.conversationClient = rpcli.NewConversationClient(conversationConn)
+
+ consumerHandler.offlinePusher = offlinePusher
+ consumerHandler.onlinePusher = onlinePusher
+ consumerHandler.groupLocalCache = rpccache.NewGroupLocalCache(consumerHandler.groupClient, &config.LocalCacheConfig, rdb)
+ consumerHandler.conversationLocalCache = rpccache.NewConversationLocalCache(consumerHandler.conversationClient, &config.LocalCacheConfig, rdb)
+ consumerHandler.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ consumerHandler.config = config
+ consumerHandler.pushDatabase = database
+ consumerHandler.onlineCache, err = rpccache.NewOnlineCache(consumerHandler.userClient, consumerHandler.groupLocalCache, rdb, config.RpcConfig.FullUserCache, nil)
+ if err != nil {
+ return nil, err
+ }
+ return &consumerHandler, nil
+}
+
+func (c *ConsumerHandler) HandleMs2PsChat(ctx context.Context, msg []byte) {
+ msgFromMQ := pbpush.PushMsgReq{}
+ if err := proto.Unmarshal(msg, &msgFromMQ); err != nil {
+ log.ZError(ctx, "push Unmarshal msg err", err, "msg", string(msg))
+ return
+ }
+
+ sec := msgFromMQ.MsgData.SendTime / 1000
+ nowSec := timeutil.GetCurrentTimestampBySecond()
+
+ if nowSec-sec > 10 {
+ prommetrics.MsgLoneTimePushCounter.Inc()
+ log.ZWarn(ctx, "it’s been a while since the message was sent", nil, "msg", msgFromMQ.String(), "sec", sec, "nowSec", nowSec, "nowSec-sec", nowSec-sec)
+ }
+ var err error
+
+ switch msgFromMQ.MsgData.SessionType {
+ case constant.ReadGroupChatType:
+ err = c.Push2Group(ctx, msgFromMQ.MsgData.GroupID, msgFromMQ.MsgData)
+ default:
+ var pushUserIDList []string
+ isSenderSync := datautil.GetSwitchFromOptions(msgFromMQ.MsgData.Options, constant.IsSenderSync)
+ if !isSenderSync || msgFromMQ.MsgData.SendID == msgFromMQ.MsgData.RecvID {
+ pushUserIDList = append(pushUserIDList, msgFromMQ.MsgData.RecvID)
+ } else {
+ pushUserIDList = append(pushUserIDList, msgFromMQ.MsgData.RecvID, msgFromMQ.MsgData.SendID)
+ }
+ err = c.Push2User(ctx, pushUserIDList, msgFromMQ.MsgData)
+ }
+ if err != nil {
+ log.ZWarn(ctx, "push failed", err, "msg", msgFromMQ.String())
+ }
+}
+
+func (c *ConsumerHandler) WaitCache() {
+ c.onlineCache.WaitCache()
+}
+
+// Push2User handles the two conversation types that push to explicit user IDs: SingleChatType and NotificationChatType.
+func (c *ConsumerHandler) Push2User(ctx context.Context, userIDs []string, msg *sdkws.MsgData) (err error) {
+ log.ZInfo(ctx, "Get msg from msg_transfer And push msg", "userIDs", userIDs, "msg", msg.String())
+ defer func(duration time.Time) {
+ t := time.Since(duration)
+ log.ZInfo(ctx, "Get msg from msg_transfer And push msg end", "msg", msg.String(), "time cost", t)
+ }(time.Now())
+ if err := c.webhookBeforeOnlinePush(ctx, &c.config.WebhooksConfig.BeforeOnlinePush, userIDs, msg); err != nil {
+ return err
+ }
+
+ wsResults, err := c.GetConnsAndOnlinePush(ctx, msg, userIDs)
+ if err != nil {
+ return err
+ }
+
+ log.ZDebug(ctx, "single and notification push result", "result", wsResults, "msg", msg, "push_to_userID", userIDs)
+ log.ZInfo(ctx, "single and notification push end")
+
+ if !c.shouldPushOffline(ctx, msg) {
+ return nil
+ }
+ log.ZInfo(ctx, "pushOffline start")
+
+	for _, v := range wsResults {
+		// the message sender does not need an offline push
+		if msg.SendID == v.UserID {
+			continue
+		}
+		// the receiver was pushed online successfully, so no offline push is needed
+		if v.OnlinePush {
+			return nil
+		}
+	}
+ needOfflinePushUserID := []string{msg.RecvID}
+ var offlinePushUserID []string
+
+	// offline push to the receiver
+ if err = c.webhookBeforeOfflinePush(ctx, &c.config.WebhooksConfig.BeforeOfflinePush, needOfflinePushUserID, msg, &offlinePushUserID); err != nil {
+ return err
+ }
+
+ if len(offlinePushUserID) > 0 {
+ needOfflinePushUserID = offlinePushUserID
+ }
+ err = c.offlinePushMsg(ctx, msg, needOfflinePushUserID)
+ if err != nil {
+		log.ZWarn(ctx, "offlinePushMsg failed", err, "needOfflinePushUserID", needOfflinePushUserID, "msg", msg)
+ return nil
+ }
+
+ return nil
+}
+
+func (c *ConsumerHandler) shouldPushOffline(_ context.Context, msg *sdkws.MsgData) bool {
+ isOfflinePush := datautil.GetSwitchFromOptions(msg.Options, constant.IsOfflinePush)
+ if !isOfflinePush {
+ return false
+ }
+ switch msg.ContentType {
+ case constant.RoomParticipantsConnectedNotification:
+ return false
+ case constant.RoomParticipantsDisconnectedNotification:
+ return false
+ }
+ return true
+}
+
+func (c *ConsumerHandler) GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData, pushToUserIDs []string) ([]*msggateway.SingleMsgToUserResults, error) {
+ if msg != nil && msg.Status == constant.MsgStatusSending {
+ msg.Status = constant.MsgStatusSendSuccess
+ }
+ onlineUserIDs, offlineUserIDs, err := c.onlineCache.GetUsersOnline(ctx, pushToUserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ log.ZDebug(ctx, "GetConnsAndOnlinePush online cache", "sendID", msg.SendID, "recvID", msg.RecvID, "groupID", msg.GroupID, "sessionType", msg.SessionType, "clientMsgID", msg.ClientMsgID, "serverMsgID", msg.ServerMsgID, "offlineUserIDs", offlineUserIDs, "onlineUserIDs", onlineUserIDs)
+ var result []*msggateway.SingleMsgToUserResults
+ if len(onlineUserIDs) > 0 {
+ var err error
+ result, err = c.onlinePusher.GetConnsAndOnlinePush(ctx, msg, onlineUserIDs)
+ if err != nil {
+ return nil, err
+ }
+ }
+ for _, userID := range offlineUserIDs {
+ result = append(result, &msggateway.SingleMsgToUserResults{
+ UserID: userID,
+ })
+ }
+ return result, nil
+}
+
+func (c *ConsumerHandler) Push2Group(ctx context.Context, groupID string, msg *sdkws.MsgData) (err error) {
+ log.ZInfo(ctx, "Get group msg from msg_transfer and push msg", "msg", msg.String(), "groupID", groupID, "contentType", msg.ContentType)
+ defer func(duration time.Time) {
+ t := time.Since(duration)
+ log.ZInfo(ctx, "Get group msg from msg_transfer and push msg end", "msg", msg.String(), "groupID", groupID, "time cost", t)
+ }(time.Now())
+ var pushToUserIDs []string
+ if err = c.webhookBeforeGroupOnlinePush(ctx, &c.config.WebhooksConfig.BeforeGroupOnlinePush, groupID, msg,
+ &pushToUserIDs); err != nil {
+ log.ZWarn(ctx, "Push2Group webhookBeforeGroupOnlinePush failed", err, "groupID", groupID)
+ return err
+ }
+ log.ZDebug(ctx, "Push2Group after webhook", "groupID", groupID, "pushToUserIDsFromWebhook", pushToUserIDs, "count", len(pushToUserIDs))
+
+ err = c.groupMessagesHandler(ctx, groupID, &pushToUserIDs, msg)
+ if err != nil {
+ log.ZWarn(ctx, "Push2Group groupMessagesHandler failed", err, "groupID", groupID)
+ return err
+ }
+ log.ZDebug(ctx, "Push2Group after groupMessagesHandler", "groupID", groupID, "pushToUserIDs", pushToUserIDs, "count", len(pushToUserIDs))
+
+ wsResults, err := c.GetConnsAndOnlinePush(ctx, msg, pushToUserIDs)
+ if err != nil {
+ log.ZWarn(ctx, "Push2Group GetConnsAndOnlinePush failed", err, "groupID", groupID)
+ return err
+ }
+ log.ZDebug(ctx, "Push2Group online push completed", "groupID", groupID, "wsResultsCount", len(wsResults))
+
+ log.ZDebug(ctx, "group push result", "result", wsResults, "msg", msg)
+ log.ZInfo(ctx, "online group push end")
+
+ if !c.shouldPushOffline(ctx, msg) {
+ return nil
+ }
+ needOfflinePushUserIDs := c.onlinePusher.GetOnlinePushFailedUserIDs(ctx, msg, wsResults, &pushToUserIDs)
+	// filter out users who should not receive an offline push (e.g. do-not-disturb or offline push disabled)
+ needOfflinePushUserIDs, err = c.filterGroupMessageOfflinePush(ctx, groupID, msg, needOfflinePushUserIDs)
+ if err != nil {
+ return err
+ }
+ log.ZInfo(ctx, "filterGroupMessageOfflinePush end")
+
+ // Use offline push messaging
+ if len(needOfflinePushUserIDs) > 0 {
+ c.asyncOfflinePush(ctx, needOfflinePushUserIDs, msg)
+ }
+
+ return nil
+}
+
+func (c *ConsumerHandler) asyncOfflinePush(ctx context.Context, needOfflinePushUserIDs []string, msg *sdkws.MsgData) {
+ var offlinePushUserIDs []string
+ err := c.webhookBeforeOfflinePush(ctx, &c.config.WebhooksConfig.BeforeOfflinePush, needOfflinePushUserIDs, msg, &offlinePushUserIDs)
+ if err != nil {
+ log.ZWarn(ctx, "webhookBeforeOfflinePush failed", err, "msg", msg)
+ return
+ }
+
+ if len(offlinePushUserIDs) > 0 {
+ needOfflinePushUserIDs = offlinePushUserIDs
+ }
+ if err := c.pushDatabase.MsgToOfflinePushMQ(ctx, conversationutil.GenConversationUniqueKeyForSingle(msg.SendID, msg.RecvID), needOfflinePushUserIDs, msg); err != nil {
+		log.ZWarn(ctx, "Msg To OfflinePush MQ error", err, "needOfflinePushUserIDs", needOfflinePushUserIDs, "msg", msg)
+ prommetrics.GroupChatMsgProcessFailedCounter.Inc()
+ return
+ }
+}
+
+func (c *ConsumerHandler) groupMessagesHandler(ctx context.Context, groupID string, pushToUserIDs *[]string, msg *sdkws.MsgData) (err error) {
+ if len(*pushToUserIDs) == 0 {
+ *pushToUserIDs, err = c.groupLocalCache.GetGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return err
+ }
+ switch msg.ContentType {
+ case constant.MemberQuitNotification:
+ var tips sdkws.MemberQuitTips
+			if err = unmarshalNotificationElem(msg.Content, &tips); err != nil {
+				return err
+			}
+ if err = c.DeleteMemberAndSetConversationSeq(ctx, groupID, []string{tips.QuitUser.UserID}); err != nil {
+ log.ZError(ctx, "MemberQuitNotification DeleteMemberAndSetConversationSeq", err, "groupID", groupID, "userID", tips.QuitUser.UserID)
+ }
+			// the quit notification goes only to the group owner and admins, not to the user who quit
+ case constant.MemberKickedNotification:
+ var tips sdkws.MemberKickedTips
+			if err = unmarshalNotificationElem(msg.Content, &tips); err != nil {
+				return err
+			}
+ kickedUsers := datautil.Slice(tips.KickedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ if err = c.DeleteMemberAndSetConversationSeq(ctx, groupID, kickedUsers); err != nil {
+ log.ZError(ctx, "MemberKickedNotification DeleteMemberAndSetConversationSeq", err, "groupID", groupID, "userIDs", kickedUsers)
+ }
+			// the kicked notification goes only to the group owner and admins, not to the kicked users themselves
+ case constant.GroupDismissedNotification:
+ if msgprocessor.IsNotification(msgprocessor.GetConversationIDByMsg(msg)) {
+ var tips sdkws.GroupDismissedTips
+				if err = unmarshalNotificationElem(msg.Content, &tips); err != nil {
+					return err
+				}
+ log.ZDebug(ctx, "GroupDismissedNotificationInfo****", "groupID", groupID, "num", len(*pushToUserIDs), "list", pushToUserIDs)
+ if len(c.config.Share.IMAdminUser.UserIDs) > 0 {
+ ctx = mcontext.WithOpUserIDContext(ctx, c.config.Share.IMAdminUser.UserIDs[0])
+ }
+ defer func(groupID string) {
+ if err := c.groupClient.DismissGroup(ctx, groupID, true); err != nil {
+ log.ZError(ctx, "DismissGroup Notification clear members", err, "groupID", groupID)
+ }
+ }(groupID)
+ }
+ }
+ }
+	// filter notification messages: only notify the group owner, admins, and optionally the affected member
+ switch msg.ContentType {
+ case constant.GroupMemberMutedNotification, constant.GroupMemberCancelMutedNotification:
+		// mute / cancel mute: notify the owner, admins, and the affected user
+ if err := c.filterNotificationWithUser(ctx, groupID, pushToUserIDs, msg, true); err != nil {
+ return err
+ }
+ case constant.MemberQuitNotification, constant.MemberEnterNotification, constant.GroupMemberInfoSetNotification:
+		// quit, enter, member info set: notify only the owner and admins
+ if err := c.filterNotificationWithUser(ctx, groupID, pushToUserIDs, msg, false); err != nil {
+ return err
+ }
+ case constant.MemberKickedNotification:
+		// kicked: notify the owner, admins, and the kicked user
+ if err := c.filterNotificationWithUser(ctx, groupID, pushToUserIDs, msg, true); err != nil {
+ return err
+ }
+ case constant.MemberInvitedNotification:
+		// invited: notify the owner, admins, and the invited users
+ if err := c.filterNotificationWithUser(ctx, groupID, pushToUserIDs, msg, true); err != nil {
+ return err
+ }
+ case constant.GroupMemberSetToAdminNotification, constant.GroupMemberSetToOrdinaryUserNotification:
+		// set to admin / set to ordinary user: notify the owner, admins, and the affected user
+ if err := c.filterNotificationWithUser(ctx, groupID, pushToUserIDs, msg, true); err != nil {
+ return err
+ }
+ }
+ return err
+}
+
+// filterNotificationWithUser filters a notification so that it is pushed only to the group owner and admins,
+// optionally including the affected user(s) themselves.
+// includeUser: true includes the affected user(s); false excludes them.
+func (c *ConsumerHandler) filterNotificationWithUser(ctx context.Context, groupID string, pushToUserIDs *[]string, msg *sdkws.MsgData, includeUser bool) error {
+ notificationName := getNotificationName(msg.ContentType)
+ log.ZInfo(ctx, notificationName+" push filter start", "groupID", groupID, "originalPushToUserIDs", *pushToUserIDs, "originalCount", len(*pushToUserIDs), "includeUser", includeUser)
+
+	// parse the notification content and extract the affected user IDs
+ var relatedUserIDs []string
+ var excludeUserIDs []string
+ switch msg.ContentType {
+ case constant.GroupMemberMutedNotification:
+ var tips sdkws.GroupMemberMutedTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ if includeUser {
+ relatedUserIDs = append(relatedUserIDs, tips.MutedUser.UserID)
+ }
+ log.ZDebug(ctx, notificationName+" parsed tips", "mutedUserID", tips.MutedUser.UserID, "opUserID", tips.OpUser.UserID)
+ case constant.GroupMemberCancelMutedNotification:
+ var tips sdkws.GroupMemberCancelMutedTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ if includeUser {
+ relatedUserIDs = append(relatedUserIDs, tips.MutedUser.UserID)
+ }
+ log.ZDebug(ctx, notificationName+" parsed tips", "cancelMutedUserID", tips.MutedUser.UserID, "opUserID", tips.OpUser.UserID)
+ case constant.MemberQuitNotification:
+ var tips sdkws.MemberQuitTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ excludeUserIDs = append(excludeUserIDs, tips.QuitUser.UserID)
+ log.ZDebug(ctx, notificationName+" parsed tips", "quitUserID", tips.QuitUser.UserID)
+ case constant.MemberInvitedNotification:
+ var tips sdkws.MemberInvitedTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ invitedUserIDs := datautil.Slice(tips.InvitedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ if includeUser {
+ relatedUserIDs = append(relatedUserIDs, invitedUserIDs...)
+ } else {
+ excludeUserIDs = append(excludeUserIDs, invitedUserIDs...)
+ }
+ log.ZDebug(ctx, notificationName+" parsed tips", "invitedUserIDs", invitedUserIDs, "opUserID", tips.OpUser.UserID)
+ case constant.MemberEnterNotification:
+ var tips sdkws.MemberEnterTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ excludeUserIDs = append(excludeUserIDs, tips.EntrantUser.UserID)
+ log.ZDebug(ctx, notificationName+" parsed tips", "entrantUserID", tips.EntrantUser.UserID)
+ case constant.MemberKickedNotification:
+ var tips sdkws.MemberKickedTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ kickedUserIDs := datautil.Slice(tips.KickedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ if includeUser {
+ relatedUserIDs = append(relatedUserIDs, kickedUserIDs...)
+ } else {
+ excludeUserIDs = append(excludeUserIDs, kickedUserIDs...)
+ }
+ log.ZDebug(ctx, notificationName+" parsed tips", "kickedUserIDs", kickedUserIDs, "opUserID", tips.OpUser.UserID)
+ case constant.GroupMemberInfoSetNotification, constant.GroupMemberSetToAdminNotification, constant.GroupMemberSetToOrdinaryUserNotification:
+ var tips sdkws.GroupMemberInfoSetTips
+ if err := unmarshalNotificationElem(msg.Content, &tips); err != nil {
+ log.ZWarn(ctx, notificationName+" unmarshalNotificationElem failed", err, "groupID", groupID)
+ return err
+ }
+ if includeUser {
+ relatedUserIDs = append(relatedUserIDs, tips.ChangedUser.UserID)
+ } else {
+ excludeUserIDs = append(excludeUserIDs, tips.ChangedUser.UserID)
+ }
+ log.ZDebug(ctx, notificationName+" parsed tips", "changedUserID", tips.ChangedUser.UserID, "opUserID", tips.OpUser.UserID)
+ default:
+ log.ZWarn(ctx, notificationName+" unsupported notification type", nil, "contentType", msg.ContentType)
+ return nil
+ }
+
+	// fetch all group member IDs if the caller has not provided them
+ allMemberIDs := *pushToUserIDs
+ if len(allMemberIDs) == 0 {
+ var err error
+ allMemberIDs, err = c.groupLocalCache.GetGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ log.ZWarn(ctx, notificationName+" GetGroupMemberIDs failed", err, "groupID", groupID)
+ return err
+ }
+ log.ZDebug(ctx, notificationName+" fetched all member IDs", "groupID", groupID, "memberCount", len(allMemberIDs))
+ }
+
+ members, err := c.groupLocalCache.GetGroupMembers(ctx, groupID, allMemberIDs)
+ if err != nil {
+ log.ZWarn(ctx, notificationName+" GetGroupMembers failed", err, "groupID", groupID)
+ return err
+ }
+ log.ZDebug(ctx, notificationName+" got members", "groupID", groupID, "memberCount", len(members))
+
+	// select the group owner and admins
+ var targetUserIDs []string
+ var ownerUserIDs []string
+ var adminUserIDs []string
+ for _, member := range members {
+		// skip excluded users
+ if len(excludeUserIDs) > 0 && datautil.Contain(member.UserID, excludeUserIDs...) {
+ continue
+ }
+ if member.RoleLevel == constant.GroupOwner {
+ targetUserIDs = append(targetUserIDs, member.UserID)
+ ownerUserIDs = append(ownerUserIDs, member.UserID)
+ } else if member.RoleLevel == constant.GroupAdmin {
+ targetUserIDs = append(targetUserIDs, member.UserID)
+ adminUserIDs = append(adminUserIDs, member.UserID)
+ }
+ }
+ log.ZDebug(ctx, notificationName+" filtered owners and admins", "ownerCount", len(ownerUserIDs), "ownerIDs", ownerUserIDs, "adminCount", len(adminUserIDs), "adminIDs", adminUserIDs)
+
+	// add the affected user(s) themselves when required
+ if includeUser && len(relatedUserIDs) > 0 {
+ targetUserIDs = append(targetUserIDs, relatedUserIDs...)
+ log.ZDebug(ctx, notificationName+" added related users", "relatedUserIDs", relatedUserIDs, "targetCountBeforeDistinct", len(targetUserIDs))
+ }
+
+	// de-duplicate and update the push list
+ *pushToUserIDs = datautil.Distinct(targetUserIDs)
+ log.ZInfo(ctx, notificationName+" push filter completed", "groupID", groupID, "finalPushToUserIDs", *pushToUserIDs, "finalCount", len(*pushToUserIDs), "filteredFrom", len(allMemberIDs))
+ return nil
+}
+
+// getNotificationName returns a human-readable name for the notification content type
+func getNotificationName(contentType int32) string {
+ switch contentType {
+ case constant.GroupMemberMutedNotification:
+ return "GroupMemberMutedNotification"
+ case constant.GroupMemberCancelMutedNotification:
+ return "GroupMemberCancelMutedNotification"
+ case constant.MemberQuitNotification:
+ return "MemberQuitNotification"
+ case constant.MemberInvitedNotification:
+ return "MemberInvitedNotification"
+ case constant.MemberEnterNotification:
+ return "MemberEnterNotification"
+ case constant.MemberKickedNotification:
+ return "MemberKickedNotification"
+ case constant.GroupMemberInfoSetNotification:
+ return "GroupMemberInfoSetNotification"
+ case constant.GroupMemberSetToAdminNotification:
+ return "GroupMemberSetToAdminNotification"
+ case constant.GroupMemberSetToOrdinaryUserNotification:
+ return "GroupMemberSetToOrdinaryUserNotification"
+ default:
+ return "UnknownNotification"
+ }
+}
+
+func (c *ConsumerHandler) offlinePushMsg(ctx context.Context, msg *sdkws.MsgData, offlinePushUserIDs []string) error {
+ title, content, opts, err := c.getOfflinePushInfos(msg)
+ if err != nil {
+ log.ZError(ctx, "getOfflinePushInfos failed", err, "msg", msg)
+ return err
+ }
+ err = c.offlinePusher.Push(ctx, offlinePushUserIDs, title, content, opts)
+ if err != nil {
+ prommetrics.MsgOfflinePushFailedCounter.Inc()
+ return err
+ }
+ return nil
+}
+
+func (c *ConsumerHandler) filterGroupMessageOfflinePush(ctx context.Context, groupID string, msg *sdkws.MsgData,
+ offlinePushUserIDs []string) (userIDs []string, err error) {
+ needOfflinePushUserIDs, err := c.conversationClient.GetConversationOfflinePushUserIDs(ctx, conversationutil.GenGroupConversationID(groupID), offlinePushUserIDs)
+ if err != nil {
+ return nil, err
+ }
+ return needOfflinePushUserIDs, nil
+}
+
+func (c *ConsumerHandler) getOfflinePushInfos(msg *sdkws.MsgData) (title, content string, opts *options.Opts, err error) {
+ type AtTextElem struct {
+ Text string `json:"text,omitempty"`
+ AtUserList []string `json:"atUserList,omitempty"`
+ IsAtSelf bool `json:"isAtSelf"`
+ }
+
+ opts = &options.Opts{Signal: &options.Signal{ClientMsgID: msg.ClientMsgID}}
+ if msg.OfflinePushInfo != nil {
+ opts.IOSBadgeCount = msg.OfflinePushInfo.IOSBadgeCount
+ opts.IOSPushSound = msg.OfflinePushInfo.IOSPushSound
+ opts.Ex = msg.OfflinePushInfo.Ex
+ }
+
+ if msg.OfflinePushInfo != nil {
+ title = msg.OfflinePushInfo.Title
+ content = msg.OfflinePushInfo.Desc
+ }
+ if title == "" {
+ switch msg.ContentType {
+		case constant.Text, constant.Picture, constant.Voice, constant.Video, constant.File:
+			title = constant.ContentType2PushContent[int64(msg.ContentType)]
+ case constant.AtText:
+ ac := AtTextElem{}
+ _ = jsonutil.JsonStringToStruct(string(msg.Content), &ac)
+ case constant.SignalingNotification:
+ title = constant.ContentType2PushContent[constant.SignalMsg]
+ default:
+ title = constant.ContentType2PushContent[constant.Common]
+ }
+ }
+ if content == "" {
+ content = title
+ }
+ return
+}
+
+func (c *ConsumerHandler) DeleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
+ maxSeq, err := c.msgClient.GetConversationMaxSeq(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ return c.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
+}
+
+func unmarshalNotificationElem(bytes []byte, t any) error {
+ var notification sdkws.NotificationElem
+ if err := json.Unmarshal(bytes, ¬ification); err != nil {
+ return err
+ }
+ return json.Unmarshal([]byte(notification.Detail), t)
+}
diff --git a/internal/rpc/auth/auth.go b/internal/rpc/auth/auth.go
new file mode 100644
index 0000000..90d49b3
--- /dev/null
+++ b/internal/rpc/auth/auth.go
@@ -0,0 +1,311 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package auth
+
+import (
+ "context"
+ "errors"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ redis2 "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ pbauth "git.imall.cloud/openim/protocol/auth"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msggateway"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/tokenverify"
+ "google.golang.org/grpc"
+)
+
+type authServer struct {
+ pbauth.UnimplementedAuthServer
+ authDatabase controller.AuthDatabase
+ AuthLocalCache *rpccache.AuthLocalCache
+ RegisterCenter discovery.Conn
+ config *Config
+ userClient *rpcli.UserClient
+ adminUserIDs []string
+}
+
+type Config struct {
+ RpcConfig config.Auth
+ RedisConfig config.Redis
+ MongoConfig config.Mongo
+ Share config.Share
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongoConfig, &config.RedisConfig)
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+ var token cache.TokenModel
+ if rdb == nil {
+ mdb, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ mc, err := mgo.NewCacheMgo(mdb.GetDB())
+ if err != nil {
+ return err
+ }
+ token = mcache.NewTokenCacheModel(mc, config.RpcConfig.TokenPolicy.Expire)
+ } else {
+ token = redis2.NewTokenCacheModel(rdb, &config.LocalCacheConfig, config.RpcConfig.TokenPolicy.Expire)
+ }
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ authConn, err := client.GetConn(ctx, config.Discovery.RpcService.Auth)
+ if err != nil {
+ return err
+ }
+
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+
+ pbauth.RegisterAuthServer(server, &authServer{
+ RegisterCenter: client,
+ authDatabase: controller.NewAuthDatabase(
+ token,
+ config.Share.Secret,
+ config.RpcConfig.TokenPolicy.Expire,
+ config.Share.MultiLogin,
+ config.Share.IMAdminUser.UserIDs,
+ ),
+ AuthLocalCache: rpccache.NewAuthLocalCache(rpcli.NewAuthClient(authConn), &config.LocalCacheConfig, rdb),
+ config: config,
+ userClient: rpcli.NewUserClient(userConn),
+ adminUserIDs: config.Share.IMAdminUser.UserIDs,
+ })
+ return nil
+}
+
+func (s *authServer) GetAdminToken(ctx context.Context, req *pbauth.GetAdminTokenReq) (*pbauth.GetAdminTokenResp, error) {
+ resp := pbauth.GetAdminTokenResp{}
+ if req.Secret != s.config.Share.Secret {
+ return nil, errs.ErrNoPermission.WrapMsg("secret invalid")
+ }
+
+	if !datautil.Contain(req.UserID, s.adminUserIDs...) {
+		return nil, errs.ErrArgs.WrapMsg("invalid userID", "userID", req.UserID, "adminUserID", s.adminUserIDs)
+	}
+
+ if err := s.userClient.CheckUser(ctx, []string{req.UserID}); err != nil {
+ return nil, err
+ }
+
+ token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(constant.AdminPlatformID))
+ if err != nil {
+ return nil, err
+ }
+
+ prommetrics.UserLoginCounter.Inc()
+
+ resp.Token = token
+ resp.ExpireTimeSeconds = s.config.RpcConfig.TokenPolicy.Expire * 24 * 60 * 60
+ return &resp, nil
+}
+
+func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenReq) (*pbauth.GetUserTokenResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ if req.PlatformID == constant.AdminPlatformID {
+ return nil, errs.ErrNoPermission.WrapMsg("platformID invalid. platformID must not be adminPlatformID")
+ }
+
+ resp := pbauth.GetUserTokenResp{}
+
+ if authverify.CheckUserIsAdmin(ctx, req.UserID) {
+		return nil, errs.ErrNoPermission.WrapMsg("cannot get token for admin userID")
+ }
+ user, err := s.userClient.GetUserInfo(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if user.AppMangerLevel >= constant.AppNotificationAdmin {
+		return nil, errs.ErrArgs.WrapMsg("app account can't get token")
+ }
+ token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(req.PlatformID))
+ if err != nil {
+ return nil, err
+ }
+
+ prommetrics.UserLoginCounter.Inc()
+
+ resp.Token = token
+ resp.ExpireTimeSeconds = s.config.RpcConfig.TokenPolicy.Expire * 24 * 60 * 60
+ return &resp, nil
+}
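Both `GetAdminToken` and `GetUserToken` return `ExpireTimeSeconds` as `TokenPolicy.Expire * 24 * 60 * 60`, which implies the configured expiry is expressed in days. A tiny sketch of that conversion (the day-based unit is an inference from the arithmetic, not stated in the config):

```go
package main

import "fmt"

// expireSeconds converts a token expiry configured in days into the
// seconds value returned to clients in ExpireTimeSeconds.
func expireSeconds(expireDays int64) int64 {
	return expireDays * 24 * 60 * 60
}

func main() {
	fmt.Println(expireSeconds(90)) // prints "7776000"
}
```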
+
+func (s *authServer) GetExistingToken(ctx context.Context, req *pbauth.GetExistingTokenReq) (*pbauth.GetExistingTokenResp, error) {
+ m, err := s.authDatabase.GetTokensWithoutError(ctx, req.UserID, int(req.PlatformID))
+ if err != nil {
+ return nil, err
+ }
+
+ return &pbauth.GetExistingTokenResp{
+ TokenStates: convert.TokenMapDB2Pb(m),
+ }, nil
+}
+
+func (s *authServer) parseToken(ctx context.Context, tokensString string) (claims *tokenverify.Claims, err error) {
+ claims, err = tokenverify.GetClaimFromToken(tokensString, authverify.Secret(s.config.Share.Secret))
+ if err != nil {
+ return nil, err
+ }
+
+ m, err := s.AuthLocalCache.GetExistingToken(ctx, claims.UserID, claims.PlatformID)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(m) == 0 {
+ isAdmin := authverify.CheckUserIsAdmin(ctx, claims.UserID)
+ if isAdmin {
+ if err = s.authDatabase.GetTemporaryTokensWithoutError(ctx, claims.UserID, claims.PlatformID, tokensString); err == nil {
+ return claims, nil
+ }
+ }
+ return nil, servererrs.ErrTokenNotExist.Wrap()
+ }
+ if v, ok := m[tokensString]; ok {
+ switch v {
+ case constant.NormalToken:
+ return claims, nil
+ case constant.KickedToken:
+ return nil, servererrs.ErrTokenKicked.Wrap()
+ default:
+ return nil, errs.Wrap(errs.ErrTokenUnknown)
+ }
+ } else {
+ isAdmin := authverify.CheckUserIsAdmin(ctx, claims.UserID)
+ if isAdmin {
+ if err = s.authDatabase.GetTemporaryTokensWithoutError(ctx, claims.UserID, claims.PlatformID, tokensString); err == nil {
+ return claims, nil
+ }
+ }
+ }
+ return nil, servererrs.ErrTokenNotExist.Wrap()
+}
+
+func (s *authServer) ParseToken(ctx context.Context, req *pbauth.ParseTokenReq) (resp *pbauth.ParseTokenResp, err error) {
+ resp = &pbauth.ParseTokenResp{}
+ claims, err := s.parseToken(ctx, req.Token)
+ if err != nil {
+ return nil, err
+ }
+ resp.UserID = claims.UserID
+ resp.PlatformID = int32(claims.PlatformID)
+ resp.ExpireTimeSeconds = claims.ExpiresAt.Unix()
+ return resp, nil
+}
+
+func (s *authServer) ForceLogout(ctx context.Context, req *pbauth.ForceLogoutReq) (*pbauth.ForceLogoutResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ if err := s.forceKickOff(ctx, req.UserID, req.PlatformID); err != nil {
+ return nil, err
+ }
+ return &pbauth.ForceLogoutResp{}, nil
+}
+
+func (s *authServer) forceKickOff(ctx context.Context, userID string, platformID int32) error {
+ conns, err := s.RegisterCenter.GetConns(ctx, s.config.Discovery.RpcService.MessageGateway)
+ if err != nil {
+ return err
+ }
+ for _, v := range conns {
+ log.ZDebug(ctx, "forceKickOff", "userID", userID, "platformID", platformID)
+ client := msggateway.NewMsgGatewayClient(v)
+ kickReq := &msggateway.KickUserOfflineReq{KickUserIDList: []string{userID}, PlatformID: platformID}
+ _, err := client.KickUserOffline(ctx, kickReq)
+ if err != nil {
+ log.ZError(ctx, "forceKickOff", err, "kickReq", kickReq)
+ }
+ }
+
+ m, err := s.authDatabase.GetTokensWithoutError(ctx, userID, int(platformID))
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return err
+ }
+	for k := range m {
+		m[k] = constant.KickedToken
+	}
+	if len(m) > 0 {
+		log.ZDebug(ctx, "set token map", "token map", m, "userID", userID)
+		if err := s.authDatabase.SetTokenMapByUidPid(ctx, userID, int(platformID), m); err != nil {
+			return err
+		}
+	}
+ return nil
+}
+
+func (s *authServer) InvalidateToken(ctx context.Context, req *pbauth.InvalidateTokenReq) (*pbauth.InvalidateTokenResp, error) {
+ m, err := s.authDatabase.GetTokensWithoutError(ctx, req.UserID, int(req.PlatformID))
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return nil, err
+ }
+ if m == nil {
+ return nil, errs.New("token map is empty").Wrap()
+ }
+ log.ZDebug(ctx, "get token from redis", "userID", req.UserID, "platformID",
+ req.PlatformID, "tokenMap", m)
+
+ for k := range m {
+ if k != req.GetPreservedToken() {
+ m[k] = constant.KickedToken
+ }
+ }
+	log.ZDebug(ctx, "set token map", "token map", m, "userID",
+		req.UserID, "token", req.GetPreservedToken())
+ err = s.authDatabase.SetTokenMapByUidPid(ctx, req.UserID, int(req.PlatformID), m)
+ if err != nil {
+ return nil, err
+ }
+ return &pbauth.InvalidateTokenResp{}, nil
+}
+
+func (s *authServer) KickTokens(ctx context.Context, req *pbauth.KickTokensReq) (*pbauth.KickTokensResp, error) {
+ if err := s.authDatabase.BatchSetTokenMapByUidPid(ctx, req.Tokens); err != nil {
+ return nil, err
+ }
+ return &pbauth.KickTokensResp{}, nil
+}
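`InvalidateToken` and `forceKickOff` both drive the same token-state machine: every token in the per-user/per-platform map is flipped to the kicked state, except (for invalidation) one preserved token. A standalone sketch of that merge, assuming stand-in state constants (the real values live in `constant.NormalToken` / `constant.KickedToken`; the numbers below are illustrative only):

```go
package main

import "fmt"

// Token states. These numeric values are assumptions for illustration;
// the server uses constant.NormalToken and constant.KickedToken.
const (
	normalToken int32 = 0
	kickedToken int32 = 1
)

// invalidateExcept marks every token except the preserved one as kicked,
// following the loop in (*authServer).InvalidateToken.
func invalidateExcept(tokens map[string]int32, preserved string) {
	for k := range tokens {
		if k != preserved {
			tokens[k] = kickedToken
		}
	}
}

func main() {
	tokens := map[string]int32{"tokA": normalToken, "tokB": normalToken}
	invalidateExcept(tokens, "tokA")
	fmt.Println(tokens["tokA"], tokens["tokB"]) // prints "0 1"
}
```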
diff --git a/internal/rpc/conversation/callback.go b/internal/rpc/conversation/callback.go
new file mode 100644
index 0000000..d36dd7b
--- /dev/null
+++ b/internal/rpc/conversation/callback.go
@@ -0,0 +1,117 @@
+package conversation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ dbModel "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (c *conversationServer) webhookBeforeCreateSingleChatConversations(ctx context.Context, before *config.BeforeConfig, req *dbModel.Conversation) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeCreateSingleChatConversationsReq{
+ CallbackCommand: callbackstruct.CallbackBeforeCreateSingleChatConversationsCommand,
+ OwnerUserID: req.OwnerUserID,
+ ConversationID: req.ConversationID,
+ ConversationType: req.ConversationType,
+ UserID: req.UserID,
+ RecvMsgOpt: req.RecvMsgOpt,
+ IsPinned: req.IsPinned,
+ IsPrivateChat: req.IsPrivateChat,
+ BurnDuration: req.BurnDuration,
+ GroupAtType: req.GroupAtType,
+ AttachedInfo: req.AttachedInfo,
+ Ex: req.Ex,
+ }
+
+ resp := &callbackstruct.CallbackBeforeCreateSingleChatConversationsResp{}
+
+ if err := c.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(&req.RecvMsgOpt, resp.RecvMsgOpt)
+ datautil.NotNilReplace(&req.IsPinned, resp.IsPinned)
+ datautil.NotNilReplace(&req.IsPrivateChat, resp.IsPrivateChat)
+ datautil.NotNilReplace(&req.BurnDuration, resp.BurnDuration)
+ datautil.NotNilReplace(&req.GroupAtType, resp.GroupAtType)
+ datautil.NotNilReplace(&req.AttachedInfo, resp.AttachedInfo)
+ datautil.NotNilReplace(&req.Ex, resp.Ex)
+ return nil
+ })
+}
+
+func (c *conversationServer) webhookAfterCreateSingleChatConversations(ctx context.Context, after *config.AfterConfig, req *dbModel.Conversation) error {
+ cbReq := &callbackstruct.CallbackAfterCreateSingleChatConversationsReq{
+ CallbackCommand: callbackstruct.CallbackAfterCreateSingleChatConversationsCommand,
+ OwnerUserID: req.OwnerUserID,
+ ConversationID: req.ConversationID,
+ ConversationType: req.ConversationType,
+ UserID: req.UserID,
+ RecvMsgOpt: req.RecvMsgOpt,
+ IsPinned: req.IsPinned,
+ IsPrivateChat: req.IsPrivateChat,
+ BurnDuration: req.BurnDuration,
+ GroupAtType: req.GroupAtType,
+ AttachedInfo: req.AttachedInfo,
+ Ex: req.Ex,
+ }
+
+ c.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterCreateSingleChatConversationsResp{}, after)
+ return nil
+}
+
+func (c *conversationServer) webhookBeforeCreateGroupChatConversations(ctx context.Context, before *config.BeforeConfig, req *dbModel.Conversation) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeCreateGroupChatConversationsReq{
+ CallbackCommand: callbackstruct.CallbackBeforeCreateGroupChatConversationsCommand,
+ ConversationID: req.ConversationID,
+ ConversationType: req.ConversationType,
+ GroupID: req.GroupID,
+ RecvMsgOpt: req.RecvMsgOpt,
+ IsPinned: req.IsPinned,
+ IsPrivateChat: req.IsPrivateChat,
+ BurnDuration: req.BurnDuration,
+ GroupAtType: req.GroupAtType,
+ AttachedInfo: req.AttachedInfo,
+ Ex: req.Ex,
+ }
+
+ resp := &callbackstruct.CallbackBeforeCreateGroupChatConversationsResp{}
+
+ if err := c.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(&req.RecvMsgOpt, resp.RecvMsgOpt)
+ datautil.NotNilReplace(&req.IsPinned, resp.IsPinned)
+ datautil.NotNilReplace(&req.IsPrivateChat, resp.IsPrivateChat)
+ datautil.NotNilReplace(&req.BurnDuration, resp.BurnDuration)
+ datautil.NotNilReplace(&req.GroupAtType, resp.GroupAtType)
+ datautil.NotNilReplace(&req.AttachedInfo, resp.AttachedInfo)
+ datautil.NotNilReplace(&req.Ex, resp.Ex)
+ return nil
+ })
+}
+
+func (c *conversationServer) webhookAfterCreateGroupChatConversations(ctx context.Context, after *config.AfterConfig, req *dbModel.Conversation) error {
+ cbReq := &callbackstruct.CallbackAfterCreateGroupChatConversationsReq{
+ CallbackCommand: callbackstruct.CallbackAfterCreateGroupChatConversationsCommand,
+ ConversationID: req.ConversationID,
+ ConversationType: req.ConversationType,
+ GroupID: req.GroupID,
+ RecvMsgOpt: req.RecvMsgOpt,
+ IsPinned: req.IsPinned,
+ IsPrivateChat: req.IsPrivateChat,
+ BurnDuration: req.BurnDuration,
+ GroupAtType: req.GroupAtType,
+ AttachedInfo: req.AttachedInfo,
+ Ex: req.Ex,
+ }
+
+ c.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterCreateGroupChatConversationsResp{}, after)
+ return nil
+}
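Each before-webhook above lets the external service override optional fields: the response fields are pointers, and only non-nil values replace the corresponding request field. A generic sketch of that merge rule (the production helper is `datautil.NotNilReplace`; this stand-in only illustrates the semantics):

```go
package main

import "fmt"

// notNilReplace copies *src into *dst only when src is non-nil, mirroring
// what datautil.NotNilReplace does for each optional webhook response field.
func notNilReplace[T any](dst *T, src *T) {
	if src != nil {
		*dst = *src
	}
}

func main() {
	recvMsgOpt := int32(0)
	isPinned := false

	// Simulated webhook response: only RecvMsgOpt was set by the callback.
	respRecvMsgOpt := int32(2)
	var respIsPinned *bool // nil: the webhook did not override this field

	notNilReplace(&recvMsgOpt, &respRecvMsgOpt)
	notNilReplace(&isPinned, respIsPinned)
	fmt.Println(recvMsgOpt, isPinned) // prints "2 false"
}
```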
diff --git a/internal/rpc/conversation/conversation.go b/internal/rpc/conversation/conversation.go
new file mode 100644
index 0000000..2465e1f
--- /dev/null
+++ b/internal/rpc/conversation/conversation.go
@@ -0,0 +1,865 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package conversation
+
+import (
+ "context"
+ "sort"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "google.golang.org/grpc"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ dbModel "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/protocol/constant"
+ pbconversation "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+type conversationServer struct {
+ pbconversation.UnimplementedConversationServer
+ conversationDatabase controller.ConversationDatabase
+
+ conversationNotificationSender *ConversationNotificationSender
+ config *Config
+
+ webhookClient *webhook.Client
+ userClient *rpcli.UserClient
+ msgClient *rpcli.MsgClient
+ groupClient *rpcli.GroupClient
+}
+
+type Config struct {
+ RpcConfig config.Conversation
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+ conversationDB, err := mgo.NewConversationMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ groupConn, err := client.GetConn(ctx, config.Discovery.RpcService.Group)
+ if err != nil {
+ return err
+ }
+ msgConn, err := client.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+
+ msgClient := rpcli.NewMsgClient(msgConn)
+
+	// Initialize the webhook config manager (supports reading config from the database)
+ var webhookClient *webhook.Client
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err == nil {
+		// SystemConfig storage initialized successfully: use the config manager
+ webhookConfigManager := webhook.NewConfigManager(systemConfigDB, &config.WebhooksConfig)
+ if err := webhookConfigManager.Start(ctx); err != nil {
+ log.ZWarn(ctx, "failed to start webhook config manager, using default config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ } else {
+ webhookClient = webhook.NewWebhookClientWithManager(webhookConfigManager)
+ }
+ } else {
+		// SystemConfig storage failed to initialize: fall back to the static config
+ log.ZWarn(ctx, "failed to init system config db, using default webhook config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ }
+
+ cs := conversationServer{
+ config: config,
+ webhookClient: webhookClient,
+ userClient: rpcli.NewUserClient(userConn),
+ groupClient: rpcli.NewGroupClient(groupConn),
+ msgClient: msgClient,
+ }
+
+ cs.conversationNotificationSender = NewConversationNotificationSender(&config.NotificationConfig, msgClient)
+ cs.conversationDatabase = controller.NewConversationDatabase(
+ conversationDB,
+ redis.NewConversationRedis(rdb, &config.LocalCacheConfig, conversationDB),
+ mgocli.GetTx())
+
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+ pbconversation.RegisterConversationServer(server, &cs)
+ return nil
+}
+
+func (c *conversationServer) GetConversation(ctx context.Context, req *pbconversation.GetConversationReq) (*pbconversation.GetConversationResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ conversations, err := c.conversationDatabase.FindConversations(ctx, req.OwnerUserID, []string{req.ConversationID})
+ if err != nil {
+ return nil, err
+ }
+ if len(conversations) < 1 {
+ return nil, errs.ErrRecordNotFound.WrapMsg("conversation not found")
+ }
+	resp := &pbconversation.GetConversationResp{}
+	resp.Conversation = convert.ConversationDB2Pb(conversations[0])
+	return resp, nil
+}
+
+// Deprecated: Use `GetConversations` instead.
+func (c *conversationServer) GetSortedConversationList(ctx context.Context, req *pbconversation.GetSortedConversationListReq) (resp *pbconversation.GetSortedConversationListResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ var conversationIDs []string
+ if len(req.ConversationIDs) == 0 {
+ conversationIDs, err = c.conversationDatabase.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ conversationIDs = req.ConversationIDs
+ }
+
+ conversations, err := c.conversationDatabase.FindConversations(ctx, req.UserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ if len(conversations) == 0 {
+ return nil, errs.ErrRecordNotFound.Wrap()
+ }
+ maxSeqs, err := c.msgClient.GetMaxSeqs(ctx, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ chatLogs, err := c.msgClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
+ if err != nil {
+ return nil, err
+ }
+
+ conversationMsg, err := c.getConversationInfo(ctx, chatLogs, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ hasReadSeqs, err := c.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+	var unreadTotal int64
+	unreadCounts := make(map[string]int64)
+	for conversationID, maxSeq := range maxSeqs {
+		unreadCount := maxSeq - hasReadSeqs[conversationID]
+		unreadCounts[conversationID] = unreadCount
+		unreadTotal += unreadCount
+	}
+
+	pinnedTimes := make(map[int64]string)
+	notPinnedTimes := make(map[int64]string)
+
+	for _, v := range conversations {
+		conversationID := v.ConversationID
+		var msgTime int64
+		if _, ok := conversationMsg[conversationID]; ok {
+			msgTime = conversationMsg[conversationID].MsgInfo.LatestMsgRecvTime
+		} else {
+			conversationMsg[conversationID] = &pbconversation.ConversationElem{
+				ConversationID: conversationID,
+				IsPinned:       v.IsPinned,
+				MsgInfo:        nil,
+			}
+			msgTime = v.CreateTime.UnixMilli()
+		}
+
+		conversationMsg[conversationID].RecvMsgOpt = v.RecvMsgOpt
+		if v.IsPinned {
+			conversationMsg[conversationID].IsPinned = v.IsPinned
+			pinnedTimes[msgTime] = conversationID
+			continue
+		}
+		notPinnedTimes[msgTime] = conversationID
+	}
+	resp = &pbconversation.GetSortedConversationListResp{
+		ConversationTotal: int64(len(chatLogs)),
+		ConversationElems: []*pbconversation.ConversationElem{},
+		UnreadTotal:       unreadTotal,
+	}
+
+	c.conversationSort(pinnedTimes, resp, unreadCounts, conversationMsg)
+	c.conversationSort(notPinnedTimes, resp, unreadCounts, conversationMsg)
+
+ resp.ConversationElems = datautil.Paginate(resp.ConversationElems, int(req.Pagination.GetPageNumber()), int(req.Pagination.GetShowNumber()))
+ return resp, nil
+}
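The unread bookkeeping in `GetSortedConversationList` reduces to per-conversation `maxSeq - hasReadSeq`, summed into a total. A self-contained sketch of that computation (map shapes match the RPC results; the conversation IDs are made up):

```go
package main

import "fmt"

// unreadCounts reproduces the per-conversation unread computation used by
// GetSortedConversationList: unread = maxSeq - hasReadSeq, totalled across
// all conversations.
func unreadCounts(maxSeqs, hasReadSeqs map[string]int64) (map[string]int64, int64) {
	perConv := make(map[string]int64, len(maxSeqs))
	var total int64
	for id, maxSeq := range maxSeqs {
		n := maxSeq - hasReadSeqs[id]
		perConv[id] = n
		total += n
	}
	return perConv, total
}

func main() {
	maxSeqs := map[string]int64{"si_a_b": 10, "sg_g1": 7}
	hasRead := map[string]int64{"si_a_b": 8, "sg_g1": 7}
	per, total := unreadCounts(maxSeqs, hasRead)
	fmt.Println(per["si_a_b"], per["sg_g1"], total) // prints "2 0 2"
}
```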
+
+func (c *conversationServer) GetAllConversations(ctx context.Context, req *pbconversation.GetAllConversationsReq) (*pbconversation.GetAllConversationsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ conversations, err := c.conversationDatabase.GetUserAllConversation(ctx, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+	resp := &pbconversation.GetAllConversationsResp{}
+	resp.Conversations = convert.ConversationsDB2Pb(conversations)
+	return resp, nil
+}
+
+func (c *conversationServer) GetConversations(ctx context.Context, req *pbconversation.GetConversationsReq) (*pbconversation.GetConversationsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ conversations, err := c.getConversations(ctx, req.OwnerUserID, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetConversationsResp{
+ Conversations: conversations,
+ }, nil
+}
+
+func (c *conversationServer) getConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*pbconversation.Conversation, error) {
+ conversations, err := c.conversationDatabase.FindConversations(ctx, ownerUserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+	return convert.ConversationsDB2Pb(conversations), nil
+}
+
+// Deprecated
+func (c *conversationServer) SetConversation(ctx context.Context, req *pbconversation.SetConversationReq) (*pbconversation.SetConversationResp, error) {
+ if err := authverify.CheckAccess(ctx, req.GetConversation().GetUserID()); err != nil {
+ return nil, err
+ }
+ var conversation dbModel.Conversation
+ conversation.CreateTime = time.Now()
+
+ if err := datautil.CopyStructFields(&conversation, req.Conversation); err != nil {
+ return nil, err
+ }
+ err := c.conversationDatabase.SetUserConversations(ctx, req.Conversation.OwnerUserID, []*dbModel.Conversation{&conversation})
+ if err != nil {
+ return nil, err
+ }
+ c.conversationNotificationSender.ConversationChangeNotification(ctx, req.Conversation.OwnerUserID, []string{req.Conversation.ConversationID})
+ resp := &pbconversation.SetConversationResp{}
+ return resp, nil
+}
+
+func (c *conversationServer) SetConversations(ctx context.Context, req *pbconversation.SetConversationsReq) (*pbconversation.SetConversationsResp, error) {
+ for _, userID := range req.UserIDs {
+ if err := authverify.CheckAccess(ctx, userID); err != nil {
+ return nil, err
+ }
+ }
+ if req.Conversation.ConversationType == constant.WriteGroupChatType {
+ groupInfo, err := c.groupClient.GetGroupInfo(ctx, req.Conversation.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if groupInfo == nil {
+ return nil, servererrs.ErrGroupIDNotFound.WrapMsg(req.Conversation.GroupID)
+ }
+ if groupInfo.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.WrapMsg("group dismissed")
+ }
+ }
+
+ conversationMap := make(map[string]*dbModel.Conversation)
+ var needUpdateUsersList []string
+
+ for _, userID := range req.UserIDs {
+ conversationList, err := c.conversationDatabase.FindConversations(ctx, userID, []string{req.Conversation.ConversationID})
+ if err != nil {
+ return nil, err
+ }
+ if len(conversationList) != 0 {
+ conversationMap[userID] = conversationList[0]
+ } else {
+ needUpdateUsersList = append(needUpdateUsersList, userID)
+ }
+ }
+
+ var conversation dbModel.Conversation
+ conversation.ConversationID = req.Conversation.ConversationID
+ conversation.ConversationType = req.Conversation.ConversationType
+ conversation.UserID = req.Conversation.UserID
+ conversation.GroupID = req.Conversation.GroupID
+ conversation.CreateTime = time.Now()
+
+ m, conversation, err := UpdateConversationsMap(ctx, req)
+ if err != nil {
+ return nil, err
+ }
+
+ for userID := range conversationMap {
+ unequal := UserUpdateCheckMap(ctx, userID, req.Conversation, conversationMap[userID])
+
+ if unequal {
+ needUpdateUsersList = append(needUpdateUsersList, userID)
+ }
+ }
+
+ if len(m) != 0 && len(needUpdateUsersList) != 0 {
+ if err := c.conversationDatabase.SetUsersConversationFieldTx(ctx, needUpdateUsersList, &conversation, m); err != nil {
+ return nil, err
+ }
+
+ for _, userID := range needUpdateUsersList {
+ c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.Conversation.ConversationID})
+ }
+ }
+
+ if req.Conversation.IsPrivateChat != nil && req.Conversation.ConversationType != constant.ReadGroupChatType {
+ var conversations []*dbModel.Conversation
+ for _, ownerUserID := range req.UserIDs {
+ transConversation := conversation
+ transConversation.OwnerUserID = ownerUserID
+ transConversation.IsPrivateChat = req.Conversation.IsPrivateChat.Value
+ conversations = append(conversations, &transConversation)
+ }
+
+ if err := c.conversationDatabase.SyncPeerUserPrivateConversationTx(ctx, conversations); err != nil {
+ return nil, err
+ }
+
+ for _, userID := range req.UserIDs {
+ c.conversationNotificationSender.ConversationSetPrivateNotification(ctx, userID, req.Conversation.UserID,
+ req.Conversation.IsPrivateChat.Value, req.Conversation.ConversationID)
+ }
+ }
+
+ return &pbconversation.SetConversationsResp{}, nil
+}
+
+func (c *conversationServer) UpdateConversationsByUser(ctx context.Context, req *pbconversation.UpdateConversationsByUserReq) (*pbconversation.UpdateConversationsByUserResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ m := make(map[string]any)
+ if req.Ex != nil {
+ m["ex"] = req.Ex.Value
+ }
+ if len(m) > 0 {
+ if err := c.conversationDatabase.UpdateUserConversations(ctx, req.UserID, m); err != nil {
+ return nil, err
+ }
+ }
+ return &pbconversation.UpdateConversationsByUserResp{}, nil
+}
+
+// create conversation without notification for msg redis transfer.
+func (c *conversationServer) CreateSingleChatConversations(ctx context.Context, req *pbconversation.CreateSingleChatConversationsReq) (*pbconversation.CreateSingleChatConversationsResp, error) {
+ var conversation dbModel.Conversation
+ conversation.CreateTime = time.Now()
+
+ switch req.ConversationType {
+ case constant.SingleChatType:
+ // sendUser create
+ conversation.ConversationID = req.ConversationID
+ conversation.ConversationType = req.ConversationType
+ conversation.OwnerUserID = req.SendID
+ conversation.UserID = req.RecvID
+
+ if err := c.webhookBeforeCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.BeforeCreateSingleChatConversations, &conversation); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ err := c.conversationDatabase.CreateConversation(ctx, []*dbModel.Conversation{&conversation})
+ if err != nil {
+ log.ZWarn(ctx, "create conversation failed", err, "conversation", conversation)
+ }
+
+ c.webhookAfterCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.AfterCreateSingleChatConversations, &conversation)
+
+ // recvUser create
+ conversation2 := conversation
+ conversation2.OwnerUserID = req.RecvID
+ conversation2.UserID = req.SendID
+
+		if err := c.webhookBeforeCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.BeforeCreateSingleChatConversations, &conversation2); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ err = c.conversationDatabase.CreateConversation(ctx, []*dbModel.Conversation{&conversation2})
+ if err != nil {
+			log.ZWarn(ctx, "create conversation failed", err, "conversation2", conversation2)
+ }
+
+ c.webhookAfterCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.AfterCreateSingleChatConversations, &conversation2)
+ case constant.NotificationChatType:
+ conversation.ConversationID = req.ConversationID
+ conversation.ConversationType = req.ConversationType
+ conversation.OwnerUserID = req.RecvID
+ conversation.UserID = req.SendID
+
+ if err := c.webhookBeforeCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.BeforeCreateSingleChatConversations, &conversation); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ err := c.conversationDatabase.CreateConversation(ctx, []*dbModel.Conversation{&conversation})
+ if err != nil {
+			log.ZWarn(ctx, "create conversation failed", err, "conversation", conversation)
+ }
+
+ c.webhookAfterCreateSingleChatConversations(ctx, &c.config.WebhooksConfig.AfterCreateSingleChatConversations, &conversation)
+ }
+
+ return &pbconversation.CreateSingleChatConversationsResp{}, nil
+}
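For the single-chat case above, the sender-side and receiver-side records share a conversation ID but swap owner and peer (`conversation2` is `conversation` with `OwnerUserID` and `UserID` exchanged). A trimmed sketch of that mirroring (the struct below is an illustrative stand-in for the storage model, not the real `dbModel.Conversation`):

```go
package main

import "fmt"

// conversation holds only the fields relevant to the pairing logic in
// CreateSingleChatConversations.
type conversation struct {
	ConversationID string
	OwnerUserID    string
	UserID         string
}

// mirror returns the receiver-side record: same conversation ID, with
// owner and peer swapped, as done for conversation2 in the RPC.
func mirror(c conversation) conversation {
	return conversation{
		ConversationID: c.ConversationID,
		OwnerUserID:    c.UserID,
		UserID:         c.OwnerUserID,
	}
}

func main() {
	sender := conversation{ConversationID: "si_alice_bob", OwnerUserID: "alice", UserID: "bob"}
	receiver := mirror(sender)
	fmt.Println(receiver.OwnerUserID, receiver.UserID) // prints "bob alice"
}
```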
+
+func (c *conversationServer) CreateGroupChatConversations(ctx context.Context, req *pbconversation.CreateGroupChatConversationsReq) (*pbconversation.CreateGroupChatConversationsResp, error) {
+ var conversation dbModel.Conversation
+
+ conversation.ConversationID = msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupID)
+ conversation.GroupID = req.GroupID
+ conversation.ConversationType = constant.ReadGroupChatType
+ conversation.CreateTime = time.Now()
+
+ if err := c.webhookBeforeCreateGroupChatConversations(ctx, &c.config.WebhooksConfig.BeforeCreateGroupChatConversations, &conversation); err != nil {
+ return nil, err
+ }
+
+ err := c.conversationDatabase.CreateGroupChatConversation(ctx, req.GroupID, req.UserIDs, &conversation)
+ if err != nil {
+ return nil, err
+ }
+ if err := c.msgClient.SetUserConversationMaxSeq(ctx, conversation.ConversationID, req.UserIDs, 0); err != nil {
+ return nil, err
+ }
+
+ c.webhookAfterCreateGroupChatConversations(ctx, &c.config.WebhooksConfig.AfterCreateGroupChatConversations, &conversation)
+ return &pbconversation.CreateGroupChatConversationsResp{}, nil
+}
+
+func (c *conversationServer) SetConversationMaxSeq(ctx context.Context, req *pbconversation.SetConversationMaxSeqReq) (*pbconversation.SetConversationMaxSeqResp, error) {
+ if err := c.msgClient.SetUserConversationMaxSeq(ctx, req.ConversationID, req.OwnerUserID, req.MaxSeq); err != nil {
+ return nil, err
+ }
+ if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
+ map[string]any{"max_seq": req.MaxSeq}); err != nil {
+ return nil, err
+ }
+ for _, userID := range req.OwnerUserID {
+ c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
+ }
+ return &pbconversation.SetConversationMaxSeqResp{}, nil
+}
+
+func (c *conversationServer) SetConversationMinSeq(ctx context.Context, req *pbconversation.SetConversationMinSeqReq) (*pbconversation.SetConversationMinSeqResp, error) {
+ if err := c.msgClient.SetUserConversationMin(ctx, req.ConversationID, req.OwnerUserID, req.MinSeq); err != nil {
+ return nil, err
+ }
+ if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
+ map[string]any{"min_seq": req.MinSeq}); err != nil {
+ return nil, err
+ }
+ for _, userID := range req.OwnerUserID {
+ c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
+ }
+ return &pbconversation.SetConversationMinSeqResp{}, nil
+}
+
+func (c *conversationServer) GetConversationIDs(ctx context.Context, req *pbconversation.GetConversationIDsReq) (*pbconversation.GetConversationIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversationIDs, err := c.conversationDatabase.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetConversationIDsResp{ConversationIDs: conversationIDs}, nil
+}
+
+func (c *conversationServer) GetUserConversationIDsHash(ctx context.Context, req *pbconversation.GetUserConversationIDsHashReq) (*pbconversation.GetUserConversationIDsHashResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ hash, err := c.conversationDatabase.GetUserConversationIDsHash(ctx, req.OwnerUserID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetUserConversationIDsHashResp{Hash: hash}, nil
+}
+
+func (c *conversationServer) GetConversationsByConversationID(ctx context.Context, req *pbconversation.GetConversationsByConversationIDReq) (*pbconversation.GetConversationsByConversationIDResp, error) {
+ conversations, err := c.conversationDatabase.GetConversationsByConversationID(ctx, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetConversationsByConversationIDResp{Conversations: convert.ConversationsDB2Pb(conversations)}, nil
+}
+
+func (c *conversationServer) GetConversationOfflinePushUserIDs(ctx context.Context, req *pbconversation.GetConversationOfflinePushUserIDsReq) (*pbconversation.GetConversationOfflinePushUserIDsResp, error) {
+ if req.ConversationID == "" {
+ return nil, errs.ErrArgs.WrapMsg("conversationID is empty")
+ }
+ if len(req.UserIDs) == 0 {
+ return &pbconversation.GetConversationOfflinePushUserIDsResp{}, nil
+ }
+ userIDs, err := c.conversationDatabase.GetConversationNotReceiveMessageUserIDs(ctx, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ if len(userIDs) == 0 {
+ return &pbconversation.GetConversationOfflinePushUserIDsResp{UserIDs: req.UserIDs}, nil
+ }
+ userIDSet := make(map[string]struct{})
+ for _, userID := range req.UserIDs {
+ userIDSet[userID] = struct{}{}
+ }
+ for _, userID := range userIDs {
+ delete(userIDSet, userID)
+ }
+ return &pbconversation.GetConversationOfflinePushUserIDsResp{UserIDs: datautil.Keys(userIDSet)}, nil
+}
+
+func (c *conversationServer) conversationSort(conversations map[int64]string, resp *pbconversation.GetSortedConversationListResp, conversationUnreadCount map[string]int64, conversationMsg map[string]*pbconversation.ConversationElem) {
+	keys := []int64{}
+	for key := range conversations {
+		keys = append(keys, key)
+	}
+
+	sort.Slice(keys, func(i, j int) bool {
+		return keys[i] > keys[j]
+	})
+	index := 0
+
+	cons := make([]*pbconversation.ConversationElem, len(conversations))
+	for _, v := range keys {
+		conversationID := conversations[v]
+		conversationElem := conversationMsg[conversationID]
+		conversationElem.UnreadCount = conversationUnreadCount[conversationID]
+ cons[index] = conversationElem
+ index++
+ }
+ resp.ConversationElems = append(resp.ConversationElems, cons...)
+}
+
+func (c *conversationServer) getConversationInfo(ctx context.Context, chatLogs map[string]*sdkws.MsgData, userID string) (map[string]*pbconversation.ConversationElem, error) {
+ var (
+ sendIDs []string
+ groupIDs []string
+ sendMap = make(map[string]*sdkws.UserInfo)
+ groupMap = make(map[string]*sdkws.GroupInfo)
+ conversationMsg = make(map[string]*pbconversation.ConversationElem)
+ )
+ for _, chatLog := range chatLogs {
+ switch chatLog.SessionType {
+ case constant.SingleChatType:
+ if chatLog.SendID == userID {
+ sendIDs = append(sendIDs, chatLog.RecvID)
+ }
+ sendIDs = append(sendIDs, chatLog.SendID)
+ case constant.WriteGroupChatType, constant.ReadGroupChatType:
+ groupIDs = append(groupIDs, chatLog.GroupID)
+ sendIDs = append(sendIDs, chatLog.SendID)
+ }
+ }
+ if len(sendIDs) != 0 {
+ sendInfos, err := c.userClient.GetUsersInfo(ctx, sendIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, sendInfo := range sendInfos {
+ sendMap[sendInfo.UserID] = sendInfo
+ }
+ }
+ if len(groupIDs) != 0 {
+ groupInfos, err := c.groupClient.GetGroupsInfo(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, groupInfo := range groupInfos {
+ groupMap[groupInfo.GroupID] = groupInfo
+ }
+ }
+ for conversationID, chatLog := range chatLogs {
+ pbchatLog := &pbconversation.ConversationElem{}
+ msgInfo := &pbconversation.MsgInfo{}
+ if err := datautil.CopyStructFields(msgInfo, chatLog); err != nil {
+ return nil, err
+ }
+ switch chatLog.SessionType {
+ case constant.SingleChatType:
+ if chatLog.SendID == userID {
+ if recv, ok := sendMap[chatLog.RecvID]; ok {
+ msgInfo.FaceURL = recv.FaceURL
+ msgInfo.SenderName = recv.Nickname
+ }
+ break
+ }
+ if send, ok := sendMap[chatLog.SendID]; ok {
+ msgInfo.FaceURL = send.FaceURL
+ msgInfo.SenderName = send.Nickname
+ }
+ case constant.WriteGroupChatType, constant.ReadGroupChatType:
+ msgInfo.GroupID = chatLog.GroupID
+ if group, ok := groupMap[chatLog.GroupID]; ok {
+ msgInfo.GroupName = group.GroupName
+ msgInfo.GroupFaceURL = group.FaceURL
+ msgInfo.GroupMemberCount = group.MemberCount
+ msgInfo.GroupType = group.GroupType
+ }
+ if send, ok := sendMap[chatLog.SendID]; ok {
+ msgInfo.SenderName = send.Nickname
+ }
+ }
+ pbchatLog.ConversationID = conversationID
+ msgInfo.LatestMsgRecvTime = chatLog.SendTime
+ pbchatLog.MsgInfo = msgInfo
+ conversationMsg[conversationID] = pbchatLog
+ }
+ return conversationMsg, nil
+}
+
+func (c *conversationServer) GetConversationNotReceiveMessageUserIDs(ctx context.Context, req *pbconversation.GetConversationNotReceiveMessageUserIDsReq) (*pbconversation.GetConversationNotReceiveMessageUserIDsResp, error) {
+ userIDs, err := c.conversationDatabase.GetConversationNotReceiveMessageUserIDs(ctx, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetConversationNotReceiveMessageUserIDsResp{UserIDs: userIDs}, nil
+}
+
+func (c *conversationServer) UpdateConversation(ctx context.Context, req *pbconversation.UpdateConversationReq) (*pbconversation.UpdateConversationResp, error) {
+ for _, userID := range req.UserIDs {
+ if err := authverify.CheckAccess(ctx, userID); err != nil {
+ return nil, err
+ }
+ }
+ m := make(map[string]any)
+ if req.RecvMsgOpt != nil {
+ m["recv_msg_opt"] = req.RecvMsgOpt.Value
+ }
+ if req.AttachedInfo != nil {
+ m["attached_info"] = req.AttachedInfo.Value
+ }
+ if req.Ex != nil {
+ m["ex"] = req.Ex.Value
+ }
+ if req.IsPinned != nil {
+ m["is_pinned"] = req.IsPinned.Value
+ }
+ if req.GroupAtType != nil {
+ m["group_at_type"] = req.GroupAtType.Value
+ }
+ if req.MsgDestructTime != nil {
+ m["msg_destruct_time"] = req.MsgDestructTime.Value
+ }
+ if req.IsMsgDestruct != nil {
+ m["is_msg_destruct"] = req.IsMsgDestruct.Value
+ }
+ if req.BurnDuration != nil {
+ m["burn_duration"] = req.BurnDuration.Value
+ }
+ if req.IsPrivateChat != nil {
+ m["is_private_chat"] = req.IsPrivateChat.Value
+ }
+ if req.MinSeq != nil {
+ m["min_seq"] = req.MinSeq.Value
+ }
+ if req.MaxSeq != nil {
+ m["max_seq"] = req.MaxSeq.Value
+ }
+ if req.LatestMsgDestructTime != nil {
+ m["latest_msg_destruct_time"] = time.UnixMilli(req.LatestMsgDestructTime.Value)
+ }
+ if len(m) > 0 {
+ if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.UserIDs, req.ConversationID, m); err != nil {
+ return nil, err
+ }
+ }
+ return &pbconversation.UpdateConversationResp{}, nil
+}
+
+func (c *conversationServer) GetOwnerConversation(ctx context.Context, req *pbconversation.GetOwnerConversationReq) (*pbconversation.GetOwnerConversationResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ total, conversations, err := c.conversationDatabase.GetOwnerConversation(ctx, req.UserID, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetOwnerConversationResp{
+ Total: total,
+ Conversations: convert.ConversationsDB2Pb(conversations),
+ }, nil
+}
+
+func (c *conversationServer) GetConversationsNeedClearMsg(ctx context.Context, _ *pbconversation.GetConversationsNeedClearMsgReq) (*pbconversation.GetConversationsNeedClearMsgResp, error) {
+ num, err := c.conversationDatabase.GetAllConversationIDsNumber(ctx)
+ if err != nil {
+ log.ZError(ctx, "GetAllConversationIDsNumber failed", err)
+ return nil, err
+ }
+ const batchNum = 100
+
+ if num == 0 {
+		return nil, errs.New("no conversations need message destruction").Wrap()
+ }
+
+ maxPage := (num + batchNum - 1) / batchNum
+
+ temp := make([]*dbModel.Conversation, 0, maxPage*batchNum)
+
+ for pageNumber := 0; pageNumber < int(maxPage); pageNumber++ {
+ pagination := &sdkws.RequestPagination{
+ PageNumber: int32(pageNumber),
+ ShowNumber: batchNum,
+ }
+
+ conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination)
+ if err != nil {
+ log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
+ continue
+ }
+
+ // log.ZDebug(ctx, "PageConversationIDs success", "pageNumber", pageNumber, "conversationIDsNum", len(conversationIDs), "conversationIDs", conversationIDs)
+ if len(conversationIDs) == 0 {
+ continue
+ }
+
+ conversations, err := c.conversationDatabase.GetConversationsByConversationID(ctx, conversationIDs)
+ if err != nil {
+ log.ZError(ctx, "GetConversationsByConversationID failed", err, "conversationIDs", conversationIDs)
+ continue
+ }
+
+ for _, conversation := range conversations {
+			// MsgDestructTime is stored in seconds (see ClearUserConversationMsg), so convert to milliseconds before comparing; 8*60*60*1000 adjusts for UTC+8.
+			if conversation.IsMsgDestruct && conversation.MsgDestructTime != 0 && ((time.Now().UnixMilli() > (conversation.MsgDestructTime*1000 + conversation.LatestMsgDestructTime.UnixMilli() + 8*60*60*1000)) ||
+				conversation.LatestMsgDestructTime.IsZero()) {
+ temp = append(temp, conversation)
+ }
+ }
+ }
+
+ return &pbconversation.GetConversationsNeedClearMsgResp{Conversations: convert.ConversationsDB2Pb(temp)}, nil
+}
+
+func (c *conversationServer) GetNotNotifyConversationIDs(ctx context.Context, req *pbconversation.GetNotNotifyConversationIDsReq) (*pbconversation.GetNotNotifyConversationIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversationIDs, err := c.conversationDatabase.GetNotNotifyConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetNotNotifyConversationIDsResp{ConversationIDs: conversationIDs}, nil
+}
+
+func (c *conversationServer) GetPinnedConversationIDs(ctx context.Context, req *pbconversation.GetPinnedConversationIDsReq) (*pbconversation.GetPinnedConversationIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversationIDs, err := c.conversationDatabase.GetPinnedConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbconversation.GetPinnedConversationIDsResp{ConversationIDs: conversationIDs}, nil
+}
+
+func (c *conversationServer) DeleteConversations(ctx context.Context, req *pbconversation.DeleteConversationsReq) (*pbconversation.DeleteConversationsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ if err := c.conversationDatabase.DeleteUsersConversations(ctx, req.OwnerUserID, req.ConversationIDs); err != nil {
+ return nil, err
+ }
+ return &pbconversation.DeleteConversationsResp{}, nil
+}
+
+func (c *conversationServer) ClearUserConversationMsg(ctx context.Context, req *pbconversation.ClearUserConversationMsgReq) (*pbconversation.ClearUserConversationMsgResp, error) {
+ conversations, err := c.conversationDatabase.FindRandConversation(ctx, req.Timestamp, int(req.Limit))
+ if err != nil {
+ return nil, err
+ }
+ latestMsgDestructTime := time.UnixMilli(req.Timestamp)
+ for i, conversation := range conversations {
+		if !conversation.IsMsgDestruct || conversation.MsgDestructTime == 0 {
+ continue
+ }
+ seq, err := c.msgClient.GetLastMessageSeqByTime(ctx, conversation.ConversationID, req.Timestamp-(conversation.MsgDestructTime*1000))
+ if err != nil {
+ return nil, err
+ }
+ if seq <= 0 {
+ log.ZDebug(ctx, "ClearUserConversationMsg GetLastMessageSeqByTime seq <= 0", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "msgDestructTime", conversation.MsgDestructTime, "seq", seq)
+ if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, -1, latestMsgDestructTime); err != nil {
+ return nil, err
+ }
+ continue
+ }
+ seq++
+ if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, seq, latestMsgDestructTime); err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "ClearUserConversationMsg set min seq", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "seq", seq, "msgDestructTime", conversation.MsgDestructTime)
+ }
+ return &pbconversation.ClearUserConversationMsgResp{Count: int32(len(conversations))}, nil
+}
+
+func (c *conversationServer) setConversationMinSeqAndLatestMsgDestructTime(ctx context.Context, conversationID string, ownerUserID string, minSeq int64, latestMsgDestructTime time.Time) error {
+ update := map[string]any{
+ "latest_msg_destruct_time": latestMsgDestructTime,
+ }
+ if minSeq >= 0 {
+ if err := c.msgClient.SetUserConversationMin(ctx, conversationID, []string{ownerUserID}, minSeq); err != nil {
+ return err
+ }
+ update["min_seq"] = minSeq
+ }
+
+ if err := c.conversationDatabase.UpdateUsersConversationField(ctx, []string{ownerUserID}, conversationID, update); err != nil {
+ return err
+ }
+ c.conversationNotificationSender.ConversationChangeNotification(ctx, ownerUserID, []string{conversationID})
+ return nil
+}
diff --git a/internal/rpc/conversation/db_map.go b/internal/rpc/conversation/db_map.go
new file mode 100644
index 0000000..29d4978
--- /dev/null
+++ b/internal/rpc/conversation/db_map.go
@@ -0,0 +1,85 @@
+package conversation
+
+import (
+ "context"
+
+ dbModel "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/conversation"
+)
+
+func UpdateConversationsMap(ctx context.Context, req *conversation.SetConversationsReq) (m map[string]any, conversation dbModel.Conversation, err error) {
+ m = make(map[string]any)
+
+ conversation.ConversationID = req.Conversation.ConversationID
+ conversation.ConversationType = req.Conversation.ConversationType
+ conversation.UserID = req.Conversation.UserID
+ conversation.GroupID = req.Conversation.GroupID
+
+ if req.Conversation.RecvMsgOpt != nil {
+ conversation.RecvMsgOpt = req.Conversation.RecvMsgOpt.Value
+ m["recv_msg_opt"] = req.Conversation.RecvMsgOpt.Value
+ }
+
+ if req.Conversation.AttachedInfo != nil {
+ conversation.AttachedInfo = req.Conversation.AttachedInfo.Value
+ m["attached_info"] = req.Conversation.AttachedInfo.Value
+ }
+
+ if req.Conversation.Ex != nil {
+ conversation.Ex = req.Conversation.Ex.Value
+ m["ex"] = req.Conversation.Ex.Value
+ }
+ if req.Conversation.IsPinned != nil {
+ conversation.IsPinned = req.Conversation.IsPinned.Value
+ m["is_pinned"] = req.Conversation.IsPinned.Value
+ }
+ if req.Conversation.GroupAtType != nil {
+ conversation.GroupAtType = req.Conversation.GroupAtType.Value
+ m["group_at_type"] = req.Conversation.GroupAtType.Value
+ }
+ if req.Conversation.MsgDestructTime != nil {
+ conversation.MsgDestructTime = req.Conversation.MsgDestructTime.Value
+ m["msg_destruct_time"] = req.Conversation.MsgDestructTime.Value
+ }
+ if req.Conversation.IsMsgDestruct != nil {
+ conversation.IsMsgDestruct = req.Conversation.IsMsgDestruct.Value
+ m["is_msg_destruct"] = req.Conversation.IsMsgDestruct.Value
+ }
+ if req.Conversation.BurnDuration != nil {
+ conversation.BurnDuration = req.Conversation.BurnDuration.Value
+ m["burn_duration"] = req.Conversation.BurnDuration.Value
+ }
+
+ return m, conversation, nil
+}
+
+func UserUpdateCheckMap(ctx context.Context, userID string, req *conversation.ConversationReq, conversation *dbModel.Conversation) (unequal bool) {
+ unequal = false
+
+ if req.RecvMsgOpt != nil && conversation.RecvMsgOpt != req.RecvMsgOpt.Value {
+ unequal = true
+ }
+ if req.AttachedInfo != nil && conversation.AttachedInfo != req.AttachedInfo.Value {
+ unequal = true
+ }
+ if req.Ex != nil && conversation.Ex != req.Ex.Value {
+ unequal = true
+ }
+ if req.IsPinned != nil && conversation.IsPinned != req.IsPinned.Value {
+ unequal = true
+ }
+ if req.GroupAtType != nil && conversation.GroupAtType != req.GroupAtType.Value {
+ unequal = true
+ }
+ if req.MsgDestructTime != nil && conversation.MsgDestructTime != req.MsgDestructTime.Value {
+ unequal = true
+ }
+ if req.IsMsgDestruct != nil && conversation.IsMsgDestruct != req.IsMsgDestruct.Value {
+ unequal = true
+ }
+ if req.BurnDuration != nil && conversation.BurnDuration != req.BurnDuration.Value {
+ unequal = true
+ }
+
+ return unequal
+}
diff --git a/internal/rpc/conversation/notification.go b/internal/rpc/conversation/notification.go
new file mode 100644
index 0000000..33ce06a
--- /dev/null
+++ b/internal/rpc/conversation/notification.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package conversation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/msg"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+type ConversationNotificationSender struct {
+ *notification.NotificationSender
+}
+
+func NewConversationNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient) *ConversationNotificationSender {
+ return &ConversationNotificationSender{notification.NewNotificationSender(conf, notification.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+ return msgClient.SendMsg(ctx, req)
+ }))}
+}
+
+// ConversationSetPrivateNotification notifies both parties when the private chat (burn-after-reading) setting of a conversation changes.
+func (c *ConversationNotificationSender) ConversationSetPrivateNotification(ctx context.Context, sendID, recvID string,
+ isPrivateChat bool, conversationID string,
+) {
+ tips := &sdkws.ConversationSetPrivateTips{
+ RecvID: recvID,
+ SendID: sendID,
+ IsPrivate: isPrivateChat,
+ ConversationID: conversationID,
+ }
+
+ c.Notification(ctx, sendID, recvID, constant.ConversationPrivateChatNotification, tips)
+}
+
+func (c *ConversationNotificationSender) ConversationChangeNotification(ctx context.Context, userID string, conversationIDs []string) {
+ tips := &sdkws.ConversationUpdateTips{
+ UserID: userID,
+ ConversationIDList: conversationIDs,
+ }
+
+ c.Notification(ctx, userID, userID, constant.ConversationChangeNotification, tips)
+}
+
+func (c *ConversationNotificationSender) ConversationUnreadChangeNotification(
+ ctx context.Context,
+ userID, conversationID string,
+ unreadCountTime, hasReadSeq int64,
+) {
+ tips := &sdkws.ConversationHasReadTips{
+ UserID: userID,
+ ConversationID: conversationID,
+ HasReadSeq: hasReadSeq,
+ UnreadCountTime: unreadCountTime,
+ }
+
+ c.Notification(ctx, userID, userID, constant.ConversationUnreadNotification, tips)
+}
diff --git a/internal/rpc/conversation/sync.go b/internal/rpc/conversation/sync.go
new file mode 100644
index 0000000..beb9138
--- /dev/null
+++ b/internal/rpc/conversation/sync.go
@@ -0,0 +1,63 @@
+package conversation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/incrversion"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/hashutil"
+ "git.imall.cloud/openim/protocol/conversation"
+)
+
+func (c *conversationServer) GetFullOwnerConversationIDs(ctx context.Context, req *conversation.GetFullOwnerConversationIDsReq) (*conversation.GetFullOwnerConversationIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ vl, err := c.conversationDatabase.FindMaxConversationUserVersionCache(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ conversationIDs, err := c.conversationDatabase.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ idHash := hashutil.IdHash(conversationIDs)
+ if req.IdHash == idHash {
+ conversationIDs = nil
+ }
+ return &conversation.GetFullOwnerConversationIDsResp{
+ Version: idHash,
+ VersionID: vl.ID.Hex(),
+ Equal: req.IdHash == idHash,
+ ConversationIDs: conversationIDs,
+ }, nil
+}
+
+func (c *conversationServer) GetIncrementalConversation(ctx context.Context, req *conversation.GetIncrementalConversationReq) (*conversation.GetIncrementalConversationResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ opt := incrversion.Option[*conversation.Conversation, conversation.GetIncrementalConversationResp]{
+ Ctx: ctx,
+ VersionKey: req.UserID,
+ VersionID: req.VersionID,
+ VersionNumber: req.Version,
+ Version: c.conversationDatabase.FindConversationUserVersion,
+ CacheMaxVersion: c.conversationDatabase.FindMaxConversationUserVersionCache,
+ Find: func(ctx context.Context, conversationIDs []string) ([]*conversation.Conversation, error) {
+ return c.getConversations(ctx, req.UserID, conversationIDs)
+ },
+ Resp: func(version *model.VersionLog, delIDs []string, insertList, updateList []*conversation.Conversation, full bool) *conversation.GetIncrementalConversationResp {
+ return &conversation.GetIncrementalConversationResp{
+ VersionID: version.ID.Hex(),
+ Version: uint64(version.Version),
+ Full: full,
+ Delete: delIDs,
+ Insert: insertList,
+ Update: updateList,
+ }
+ },
+ }
+ return opt.Build()
+}
diff --git a/internal/rpc/group/cache.go b/internal/rpc/group/cache.go
new file mode 100644
index 0000000..306e404
--- /dev/null
+++ b/internal/rpc/group/cache.go
@@ -0,0 +1,46 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ pbgroup "git.imall.cloud/openim/protocol/group"
+)
+
+// GetGroupInfoCache gets group info from the cache.
+func (g *groupServer) GetGroupInfoCache(ctx context.Context, req *pbgroup.GetGroupInfoCacheReq) (*pbgroup.GetGroupInfoCacheResp, error) {
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupInfoCacheResp{
+ GroupInfo: convert.Db2PbGroupInfo(group, "", 0),
+ }, nil
+}
+
+func (g *groupServer) GetGroupMemberCache(ctx context.Context, req *pbgroup.GetGroupMemberCacheReq) (*pbgroup.GetGroupMemberCacheResp, error) {
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ members, err := g.db.TakeGroupMember(ctx, req.GroupID, req.GroupMemberID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupMemberCacheResp{
+ Member: convert.Db2PbGroupMember(members),
+ }, nil
+}
diff --git a/internal/rpc/group/callback.go b/internal/rpc/group/callback.go
new file mode 100644
index 0000000..2df9081
--- /dev/null
+++ b/internal/rpc/group/callback.go
@@ -0,0 +1,476 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+// webhookBeforeCreateGroup runs the before-create-group webhook and applies any fields the callback overrides.
+func (g *groupServer) webhookBeforeCreateGroup(ctx context.Context, before *config.BeforeConfig, req *group.CreateGroupReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeCreateGroupReq{
+ CallbackCommand: callbackstruct.CallbackBeforeCreateGroupCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ GroupInfo: req.GroupInfo,
+ }
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: req.OwnerUserID,
+ RoleLevel: constant.GroupOwner,
+ })
+ for _, userID := range req.AdminUserIDs {
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: userID,
+ RoleLevel: constant.GroupAdmin,
+ })
+ }
+ for _, userID := range req.MemberUserIDs {
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: userID,
+ RoleLevel: constant.GroupOrdinaryUsers,
+ })
+ }
+ resp := &callbackstruct.CallbackBeforeCreateGroupResp{}
+
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(&req.GroupInfo.GroupID, resp.GroupID)
+ datautil.NotNilReplace(&req.GroupInfo.GroupName, resp.GroupName)
+ datautil.NotNilReplace(&req.GroupInfo.Notification, resp.Notification)
+ datautil.NotNilReplace(&req.GroupInfo.Introduction, resp.Introduction)
+ datautil.NotNilReplace(&req.GroupInfo.FaceURL, resp.FaceURL)
+ datautil.NotNilReplace(&req.GroupInfo.OwnerUserID, resp.OwnerUserID)
+ datautil.NotNilReplace(&req.GroupInfo.Ex, resp.Ex)
+ datautil.NotNilReplace(&req.GroupInfo.Status, resp.Status)
+ datautil.NotNilReplace(&req.GroupInfo.CreatorUserID, resp.CreatorUserID)
+ datautil.NotNilReplace(&req.GroupInfo.GroupType, resp.GroupType)
+ datautil.NotNilReplace(&req.GroupInfo.NeedVerification, resp.NeedVerification)
+ datautil.NotNilReplace(&req.GroupInfo.LookMemberInfo, resp.LookMemberInfo)
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterCreateGroup(ctx context.Context, after *config.AfterConfig, req *group.CreateGroupReq) {
+ cbReq := &callbackstruct.CallbackAfterCreateGroupReq{
+ CallbackCommand: callbackstruct.CallbackAfterCreateGroupCommand,
+ GroupInfo: req.GroupInfo,
+ }
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: req.OwnerUserID,
+ RoleLevel: constant.GroupOwner,
+ })
+ for _, userID := range req.AdminUserIDs {
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: userID,
+ RoleLevel: constant.GroupAdmin,
+ })
+ }
+ for _, userID := range req.MemberUserIDs {
+ cbReq.InitMemberList = append(cbReq.InitMemberList, &apistruct.GroupAddMemberInfo{
+ UserID: userID,
+ RoleLevel: constant.GroupOrdinaryUsers,
+ })
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterCreateGroupResp{}, after)
+}
+
+func (g *groupServer) webhookBeforeMembersJoinGroup(ctx context.Context, before *config.BeforeConfig, groupMembers []*model.GroupMember, groupID string, groupEx string) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ groupMembersMap := datautil.SliceToMap(groupMembers, func(e *model.GroupMember) string {
+ return e.UserID
+ })
+ var groupMembersCallback []*callbackstruct.CallbackGroupMember
+
+ for _, member := range groupMembers {
+ groupMembersCallback = append(groupMembersCallback, &callbackstruct.CallbackGroupMember{
+ UserID: member.UserID,
+ Ex: member.Ex,
+ })
+ }
+
+ cbReq := &callbackstruct.CallbackBeforeMembersJoinGroupReq{
+ CallbackCommand: callbackstruct.CallbackBeforeMembersJoinGroupCommand,
+ GroupID: groupID,
+ MembersList: groupMembersCallback,
+ GroupEx: groupEx,
+ }
+ resp := &callbackstruct.CallbackBeforeMembersJoinGroupResp{}
+
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+		// Log the webhook response to help diagnose unexpected auto-mute issues.
+ if len(resp.MemberCallbackList) > 0 {
+ log.ZInfo(ctx, "webhookBeforeMembersJoinGroup: webhook response received",
+ "groupID", groupID,
+ "memberCallbackListCount", len(resp.MemberCallbackList),
+ "memberCallbackList", resp.MemberCallbackList)
+ }
+
+ for _, memberCallbackResp := range resp.MemberCallbackList {
+ if _, ok := groupMembersMap[(*memberCallbackResp.UserID)]; ok {
+ if memberCallbackResp.MuteEndTime != nil {
+ muteEndTimeTimestamp := *memberCallbackResp.MuteEndTime
+ now := time.Now()
+ nowUnixMilli := now.UnixMilli()
+
+				// Sanity-check the timestamp (guards against unit mistakes that would silently auto-mute members).
+				// A value below 1000000000000 (2001-09-09 in milliseconds) is likely a second-level timestamp and is interpreted as seconds.
+				// A value more than ten years past the current time is likely a unit error and is ignored.
+ var muteEndTime time.Time
+ if muteEndTimeTimestamp < 1000000000000 {
+					// Likely a second-level timestamp; interpret it as seconds.
+					log.ZWarn(ctx, "webhookBeforeMembersJoinGroup: MuteEndTime appears to be in seconds, interpreting as a second-level timestamp", nil,
+ "groupID", groupID,
+ "userID", *memberCallbackResp.UserID,
+ "originalTimestamp", muteEndTimeTimestamp)
+ muteEndTime = time.Unix(muteEndTimeTimestamp, 0)
+ } else if muteEndTimeTimestamp > nowUnixMilli+10*365*24*3600*1000 {
+ // more than 10 years past the current time; likely a unit error, so ignore it
+ log.ZWarn(ctx, "webhookBeforeMembersJoinGroup: MuteEndTime is too far in the future, ignoring", nil,
+ "groupID", groupID,
+ "userID", *memberCallbackResp.UserID,
+ "muteEndTimeTimestamp", muteEndTimeTimestamp,
+ "nowUnixMilli", nowUnixMilli)
+ continue
+ } else {
+ muteEndTime = time.UnixMilli(muteEndTimeTimestamp)
+ }
+
+ // Log the mute end time returned by the webhook to help diagnose auto-mute issues.
+ log.ZInfo(ctx, "webhookBeforeMembersJoinGroup: webhook returned MuteEndTime",
+ "groupID", groupID,
+ "userID", *memberCallbackResp.UserID,
+ "muteEndTimeTimestamp", muteEndTimeTimestamp,
+ "muteEndTime", muteEndTime.Format(time.RFC3339),
+ "now", now.Format(time.RFC3339),
+ "isMuted", muteEndTime.After(now),
+ "mutedDurationSeconds", muteEndTime.Sub(now).Seconds())
+ groupMembersMap[(*memberCallbackResp.UserID)].MuteEndTime = muteEndTime
+ }
+
+ datautil.NotNilReplace(&groupMembersMap[(*memberCallbackResp.UserID)].FaceURL, memberCallbackResp.FaceURL)
+ datautil.NotNilReplace(&groupMembersMap[(*memberCallbackResp.UserID)].Ex, memberCallbackResp.Ex)
+ datautil.NotNilReplace(&groupMembersMap[(*memberCallbackResp.UserID)].Nickname, memberCallbackResp.Nickname)
+ datautil.NotNilReplace(&groupMembersMap[(*memberCallbackResp.UserID)].RoleLevel, memberCallbackResp.RoleLevel)
+ }
+ }
+
+ return nil
+ })
+}
+
+func (g *groupServer) webhookBeforeSetGroupMemberInfo(ctx context.Context, before *config.BeforeConfig, req *group.SetGroupMemberInfo) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := callbackstruct.CallbackBeforeSetGroupMemberInfoReq{
+ CallbackCommand: callbackstruct.CallbackBeforeSetGroupMemberInfoCommand,
+ GroupID: req.GroupID,
+ UserID: req.UserID,
+ }
+ if req.Nickname != nil {
+ cbReq.Nickname = &req.Nickname.Value
+ }
+ if req.FaceURL != nil {
+ cbReq.FaceURL = &req.FaceURL.Value
+ }
+ if req.RoleLevel != nil {
+ cbReq.RoleLevel = &req.RoleLevel.Value
+ }
+ if req.Ex != nil {
+ cbReq.Ex = &req.Ex.Value
+ }
+ resp := &callbackstruct.CallbackBeforeSetGroupMemberInfoResp{}
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ if resp.FaceURL != nil {
+ req.FaceURL = wrapperspb.String(*resp.FaceURL)
+ }
+ if resp.Nickname != nil {
+ req.Nickname = wrapperspb.String(*resp.Nickname)
+ }
+ if resp.RoleLevel != nil {
+ req.RoleLevel = wrapperspb.Int32(*resp.RoleLevel)
+ }
+ if resp.Ex != nil {
+ req.Ex = wrapperspb.String(*resp.Ex)
+ }
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterSetGroupMemberInfo(ctx context.Context, after *config.AfterConfig, req *group.SetGroupMemberInfo) {
+ cbReq := callbackstruct.CallbackAfterSetGroupMemberInfoReq{
+ CallbackCommand: callbackstruct.CallbackAfterSetGroupMemberInfoCommand,
+ GroupID: req.GroupID,
+ UserID: req.UserID,
+ }
+ if req.Nickname != nil {
+ cbReq.Nickname = &req.Nickname.Value
+ }
+ if req.FaceURL != nil {
+ cbReq.FaceURL = &req.FaceURL.Value
+ }
+ if req.RoleLevel != nil {
+ cbReq.RoleLevel = &req.RoleLevel.Value
+ }
+ if req.Ex != nil {
+ cbReq.Ex = &req.Ex.Value
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterSetGroupMemberInfoResp{}, after)
+}
+
+func (g *groupServer) webhookAfterQuitGroup(ctx context.Context, after *config.AfterConfig, req *group.QuitGroupReq) {
+ cbReq := &callbackstruct.CallbackQuitGroupReq{
+ CallbackCommand: callbackstruct.CallbackAfterQuitGroupCommand,
+ GroupID: req.GroupID,
+ UserID: req.UserID,
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackQuitGroupResp{}, after)
+}
+
+func (g *groupServer) webhookAfterKickGroupMember(ctx context.Context, after *config.AfterConfig, req *group.KickGroupMemberReq) {
+ cbReq := &callbackstruct.CallbackKillGroupMemberReq{
+ CallbackCommand: callbackstruct.CallbackAfterKickGroupCommand,
+ GroupID: req.GroupID,
+ KickedUserIDs: req.KickedUserIDs,
+ Reason: req.Reason,
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackKillGroupMemberResp{}, after)
+}
+
+func (g *groupServer) webhookAfterDismissGroup(ctx context.Context, after *config.AfterConfig, req *callbackstruct.CallbackDisMissGroupReq) {
+ req.CallbackCommand = callbackstruct.CallbackAfterDisMissGroupCommand
+ g.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &callbackstruct.CallbackDisMissGroupResp{}, after)
+}
+
+func (g *groupServer) webhookBeforeApplyJoinGroup(ctx context.Context, before *config.BeforeConfig, req *callbackstruct.CallbackJoinGroupReq) (err error) {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ req.CallbackCommand = callbackstruct.CallbackBeforeJoinGroupCommand
+ resp := &callbackstruct.CallbackJoinGroupResp{}
+ if err := g.webhookClient.SyncPost(ctx, req.GetCallbackCommand(), req, resp, before); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterTransferGroupOwner(ctx context.Context, after *config.AfterConfig, req *group.TransferGroupOwnerReq) {
+ cbReq := &callbackstruct.CallbackTransferGroupOwnerReq{
+ CallbackCommand: callbackstruct.CallbackAfterTransferGroupOwnerCommand,
+ GroupID: req.GroupID,
+ OldOwnerUserID: req.OldOwnerUserID,
+ NewOwnerUserID: req.NewOwnerUserID,
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackTransferGroupOwnerResp{}, after)
+}
+
+func (g *groupServer) webhookBeforeInviteUserToGroup(ctx context.Context, before *config.BeforeConfig, req *group.InviteUserToGroupReq) (err error) {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeInviteUserToGroupReq{
+ CallbackCommand: callbackstruct.CallbackBeforeInviteJoinGroupCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ GroupID: req.GroupID,
+ Reason: req.Reason,
+ InvitedUserIDs: req.InvitedUserIDs,
+ }
+
+ resp := &callbackstruct.CallbackBeforeInviteUserToGroupResp{}
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ // Handle the case where the webhook refuses some invitees, e.g. by
+ // filtering them out of req.InvitedUserIDs according to your business logic:
+ //
+ // if len(resp.RefusedMembersAccount) > 0 {
+ //     // remove refused users from req.InvitedUserIDs
+ // }
+
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterJoinGroup(ctx context.Context, after *config.AfterConfig, req *group.JoinGroupReq) {
+ cbReq := &callbackstruct.CallbackAfterJoinGroupReq{
+ CallbackCommand: callbackstruct.CallbackAfterJoinGroupCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ GroupID: req.GroupID,
+ ReqMessage: req.ReqMessage,
+ JoinSource: req.JoinSource,
+ InviterUserID: req.InviterUserID,
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterJoinGroupResp{}, after)
+}
+
+func (g *groupServer) webhookBeforeSetGroupInfo(ctx context.Context, before *config.BeforeConfig, req *group.SetGroupInfoReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeSetGroupInfoReq{
+ CallbackCommand: callbackstruct.CallbackBeforeSetGroupInfoCommand,
+ GroupID: req.GroupInfoForSet.GroupID,
+ Notification: req.GroupInfoForSet.Notification,
+ Introduction: req.GroupInfoForSet.Introduction,
+ FaceURL: req.GroupInfoForSet.FaceURL,
+ GroupName: req.GroupInfoForSet.GroupName,
+ }
+ if req.GroupInfoForSet.Ex != nil {
+ cbReq.Ex = req.GroupInfoForSet.Ex.Value
+ }
+ log.ZDebug(ctx, "debug CallbackBeforeSetGroupInfo", "ex", cbReq.Ex)
+ if req.GroupInfoForSet.NeedVerification != nil {
+ cbReq.NeedVerification = req.GroupInfoForSet.NeedVerification.Value
+ }
+ if req.GroupInfoForSet.LookMemberInfo != nil {
+ cbReq.LookMemberInfo = req.GroupInfoForSet.LookMemberInfo.Value
+ }
+ if req.GroupInfoForSet.ApplyMemberFriend != nil {
+ cbReq.ApplyMemberFriend = req.GroupInfoForSet.ApplyMemberFriend.Value
+ }
+ resp := &callbackstruct.CallbackBeforeSetGroupInfoResp{}
+
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ if resp.Ex != nil {
+ req.GroupInfoForSet.Ex = wrapperspb.String(*resp.Ex)
+ }
+ if resp.NeedVerification != nil {
+ req.GroupInfoForSet.NeedVerification = wrapperspb.Int32(*resp.NeedVerification)
+ }
+ if resp.LookMemberInfo != nil {
+ req.GroupInfoForSet.LookMemberInfo = wrapperspb.Int32(*resp.LookMemberInfo)
+ }
+ if resp.ApplyMemberFriend != nil {
+ req.GroupInfoForSet.ApplyMemberFriend = wrapperspb.Int32(*resp.ApplyMemberFriend)
+ }
+ datautil.NotNilReplace(&req.GroupInfoForSet.GroupID, &resp.GroupID)
+ datautil.NotNilReplace(&req.GroupInfoForSet.GroupName, &resp.GroupName)
+ datautil.NotNilReplace(&req.GroupInfoForSet.FaceURL, &resp.FaceURL)
+ datautil.NotNilReplace(&req.GroupInfoForSet.Introduction, &resp.Introduction)
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterSetGroupInfo(ctx context.Context, after *config.AfterConfig, req *group.SetGroupInfoReq) {
+ cbReq := &callbackstruct.CallbackAfterSetGroupInfoReq{
+ CallbackCommand: callbackstruct.CallbackAfterSetGroupInfoCommand,
+ GroupID: req.GroupInfoForSet.GroupID,
+ Notification: req.GroupInfoForSet.Notification,
+ Introduction: req.GroupInfoForSet.Introduction,
+ FaceURL: req.GroupInfoForSet.FaceURL,
+ GroupName: req.GroupInfoForSet.GroupName,
+ }
+ if req.GroupInfoForSet.Ex != nil {
+ cbReq.Ex = &req.GroupInfoForSet.Ex.Value
+ }
+ if req.GroupInfoForSet.NeedVerification != nil {
+ cbReq.NeedVerification = &req.GroupInfoForSet.NeedVerification.Value
+ }
+ if req.GroupInfoForSet.LookMemberInfo != nil {
+ cbReq.LookMemberInfo = &req.GroupInfoForSet.LookMemberInfo.Value
+ }
+ if req.GroupInfoForSet.ApplyMemberFriend != nil {
+ cbReq.ApplyMemberFriend = &req.GroupInfoForSet.ApplyMemberFriend.Value
+ }
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterSetGroupInfoResp{}, after)
+}
+
+func (g *groupServer) webhookBeforeSetGroupInfoEx(ctx context.Context, before *config.BeforeConfig, req *group.SetGroupInfoExReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &callbackstruct.CallbackBeforeSetGroupInfoExReq{
+ CallbackCommand: callbackstruct.CallbackBeforeSetGroupInfoExCommand,
+ GroupID: req.GroupID,
+ GroupName: req.GroupName,
+ Notification: req.Notification,
+ Introduction: req.Introduction,
+ FaceURL: req.FaceURL,
+ }
+
+ if req.Ex != nil {
+ cbReq.Ex = req.Ex
+ }
+ log.ZDebug(ctx, "debug CallbackBeforeSetGroupInfoEx", "ex", cbReq.Ex)
+
+ if req.NeedVerification != nil {
+ cbReq.NeedVerification = req.NeedVerification
+ }
+ if req.LookMemberInfo != nil {
+ cbReq.LookMemberInfo = req.LookMemberInfo
+ }
+ if req.ApplyMemberFriend != nil {
+ cbReq.ApplyMemberFriend = req.ApplyMemberFriend
+ }
+
+ resp := &callbackstruct.CallbackBeforeSetGroupInfoExResp{}
+
+ if err := g.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(&req.GroupID, &resp.GroupID)
+ datautil.NotNilReplace(&req.GroupName, &resp.GroupName)
+ datautil.NotNilReplace(&req.FaceURL, &resp.FaceURL)
+ datautil.NotNilReplace(&req.Introduction, &resp.Introduction)
+ datautil.NotNilReplace(&req.Ex, &resp.Ex)
+ datautil.NotNilReplace(&req.NeedVerification, &resp.NeedVerification)
+ datautil.NotNilReplace(&req.LookMemberInfo, &resp.LookMemberInfo)
+ datautil.NotNilReplace(&req.ApplyMemberFriend, &resp.ApplyMemberFriend)
+
+ return nil
+ })
+}
+
+func (g *groupServer) webhookAfterSetGroupInfoEx(ctx context.Context, after *config.AfterConfig, req *group.SetGroupInfoExReq) {
+ cbReq := &callbackstruct.CallbackAfterSetGroupInfoExReq{
+ CallbackCommand: callbackstruct.CallbackAfterSetGroupInfoExCommand,
+ GroupID: req.GroupID,
+ GroupName: req.GroupName,
+ Notification: req.Notification,
+ Introduction: req.Introduction,
+ FaceURL: req.FaceURL,
+ }
+
+ if req.Ex != nil {
+ cbReq.Ex = req.Ex
+ }
+ if req.NeedVerification != nil {
+ cbReq.NeedVerification = req.NeedVerification
+ }
+ if req.LookMemberInfo != nil {
+ cbReq.LookMemberInfo = req.LookMemberInfo
+ }
+ if req.ApplyMemberFriend != nil {
+ cbReq.ApplyMemberFriend = req.ApplyMemberFriend
+ }
+
+ g.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &callbackstruct.CallbackAfterSetGroupInfoExResp{}, after)
+}
diff --git a/internal/rpc/group/convert.go b/internal/rpc/group/convert.go
new file mode 100644
index 0000000..bcc4fb3
--- /dev/null
+++ b/internal/rpc/group/convert.go
@@ -0,0 +1,63 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+func (g *groupServer) groupDB2PB(group *model.Group, ownerUserID string, memberCount uint32) *sdkws.GroupInfo {
+ return &sdkws.GroupInfo{
+ GroupID: group.GroupID,
+ GroupName: group.GroupName,
+ Notification: group.Notification,
+ Introduction: group.Introduction,
+ FaceURL: group.FaceURL,
+ OwnerUserID: ownerUserID,
+ CreateTime: group.CreateTime.UnixMilli(),
+ MemberCount: memberCount,
+ Ex: group.Ex,
+ Status: group.Status,
+ CreatorUserID: group.CreatorUserID,
+ GroupType: group.GroupType,
+ NeedVerification: group.NeedVerification,
+ LookMemberInfo: group.LookMemberInfo,
+ ApplyMemberFriend: group.ApplyMemberFriend,
+ NotificationUpdateTime: group.NotificationUpdateTime.UnixMilli(),
+ NotificationUserID: group.NotificationUserID,
+ }
+}
+
+func (g *groupServer) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
+ return &sdkws.GroupMemberFullInfo{
+ GroupID: member.GroupID,
+ UserID: member.UserID,
+ RoleLevel: member.RoleLevel,
+ JoinTime: member.JoinTime.UnixMilli(),
+ Nickname: member.Nickname,
+ FaceURL: member.FaceURL,
+ AppMangerLevel: appMangerLevel,
+ JoinSource: member.JoinSource,
+ OperatorUserID: member.OperatorUserID,
+ Ex: member.Ex,
+ MuteEndTime: member.MuteEndTime.UnixMilli(),
+ InviterUserID: member.InviterUserID,
+ }
+}
+
+func (g *groupServer) groupMemberDB2PB2(member *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return g.groupMemberDB2PB(member, 0)
+}
diff --git a/internal/rpc/group/db_map.go b/internal/rpc/group/db_map.go
new file mode 100644
index 0000000..8556592
--- /dev/null
+++ b/internal/rpc/group/db_map.go
@@ -0,0 +1,134 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+ "strings"
+ "time"
+
+ pbgroup "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func UpdateGroupInfoMap(ctx context.Context, group *sdkws.GroupInfoForSet) map[string]any {
+ m := make(map[string]any)
+ if group.GroupName != "" {
+ m["group_name"] = group.GroupName
+ }
+ if group.Notification != "" {
+ m["notification"] = group.Notification
+ m["notification_update_time"] = time.Now()
+ m["notification_user_id"] = mcontext.GetOpUserID(ctx)
+ }
+ if group.Introduction != "" {
+ m["introduction"] = group.Introduction
+ }
+ if group.FaceURL != "" {
+ m["face_url"] = group.FaceURL
+ }
+ if group.NeedVerification != nil {
+ m["need_verification"] = group.NeedVerification.Value
+ }
+ if group.LookMemberInfo != nil {
+ m["look_member_info"] = group.LookMemberInfo.Value
+ }
+ if group.ApplyMemberFriend != nil {
+ m["apply_member_friend"] = group.ApplyMemberFriend.Value
+ }
+ if group.Ex != nil {
+ m["ex"] = group.Ex.Value
+ }
+ return m
+}
+
+func UpdateGroupInfoExMap(ctx context.Context, group *pbgroup.SetGroupInfoExReq) (m map[string]any, normalFlag, groupNameFlag, notificationFlag bool, err error) {
+ m = make(map[string]any)
+
+ if group.GroupName != nil {
+ if strings.TrimSpace(group.GroupName.Value) != "" {
+ m["group_name"] = group.GroupName.Value
+ groupNameFlag = true
+ } else {
+ return nil, normalFlag, groupNameFlag, notificationFlag, errs.ErrArgs.WrapMsg("group name is empty")
+ }
+ }
+
+ if group.Notification != nil {
+ notificationFlag = true
+ group.Notification.Value = strings.TrimSpace(group.Notification.Value) // if Notification only contains spaces, set it to empty string
+
+ m["notification"] = group.Notification.Value
+ m["notification_user_id"] = mcontext.GetOpUserID(ctx)
+ m["notification_update_time"] = time.Now()
+ }
+ if group.Introduction != nil {
+ m["introduction"] = group.Introduction.Value
+ normalFlag = true
+ }
+ if group.FaceURL != nil {
+ m["face_url"] = group.FaceURL.Value
+ normalFlag = true
+ }
+ if group.NeedVerification != nil {
+ m["need_verification"] = group.NeedVerification.Value
+ normalFlag = true
+ }
+ if group.LookMemberInfo != nil {
+ m["look_member_info"] = group.LookMemberInfo.Value
+ normalFlag = true
+ }
+ if group.ApplyMemberFriend != nil {
+ m["apply_member_friend"] = group.ApplyMemberFriend.Value
+ normalFlag = true
+ }
+ if group.Ex != nil {
+ m["ex"] = group.Ex.Value
+ normalFlag = true
+ }
+
+ return m, normalFlag, groupNameFlag, notificationFlag, nil
+}
+
+func UpdateGroupStatusMap(status int) map[string]any {
+ return map[string]any{
+ "status": status,
+ }
+}
+
+func UpdateGroupMemberMutedTimeMap(t time.Time) map[string]any {
+ return map[string]any{
+ "mute_end_time": t,
+ }
+}
+
+func UpdateGroupMemberMap(req *pbgroup.SetGroupMemberInfo) map[string]any {
+ m := make(map[string]any)
+ if req.Nickname != nil {
+ m["nickname"] = req.Nickname.Value
+ }
+ if req.FaceURL != nil {
+ m["face_url"] = req.FaceURL.Value
+ }
+ if req.RoleLevel != nil {
+ m["role_level"] = req.RoleLevel.Value
+ }
+ if req.Ex != nil {
+ m["ex"] = req.Ex.Value
+ }
+ return m
+}
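The update-map builders in this file all rely on the same convention: a nil wrapper pointer means "field not set", so only explicitly supplied fields reach the storage layer. A self-contained sketch of the pattern, with `StringValue` standing in for protobuf's `wrapperspb.StringValue` and `memberUpdateMap` as a hypothetical reduced version of `UpdateGroupMemberMap`:

```go
package main

import "fmt"

// StringValue stands in for wrapperspb.StringValue: a nil pointer means
// the caller did not set the field at all, as opposed to setting it to "".
type StringValue struct{ Value string }

// memberUpdateMap mirrors the pattern in UpdateGroupMemberMap: only fields
// the caller explicitly set end up in the partial-update map, so unset
// columns are left untouched by the storage layer.
func memberUpdateMap(nickname, faceURL *StringValue) map[string]any {
	m := make(map[string]any)
	if nickname != nil {
		m["nickname"] = nickname.Value
	}
	if faceURL != nil {
		m["face_url"] = faceURL.Value
	}
	return m
}

func main() {
	m := memberUpdateMap(&StringValue{Value: "alice"}, nil)
	fmt.Println(len(m), m["nickname"]) // → 1 alice
}
```

This is why the RPC requests use wrapper types rather than plain strings: without the pointer level, an omitted field and an intentionally emptied field would be indistinguishable.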
diff --git a/internal/rpc/group/fill.go b/internal/rpc/group/fill.go
new file mode 100644
index 0000000..3ebfacc
--- /dev/null
+++ b/internal/rpc/group/fill.go
@@ -0,0 +1,25 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+func (g *groupServer) PopulateGroupMember(ctx context.Context, members ...*relationtb.GroupMember) error {
+ return g.notification.PopulateGroupMember(ctx, members...)
+}
diff --git a/internal/rpc/group/group.go b/internal/rpc/group/group.go
new file mode 100644
index 0000000..10a8467
--- /dev/null
+++ b/internal/rpc/group/group.go
@@ -0,0 +1,2096 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+ "fmt"
+ "math/big"
+ "math/rand"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "google.golang.org/grpc"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ redis2 "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/common"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/grouphash"
+ "git.imall.cloud/openim/protocol/constant"
+ pbconv "git.imall.cloud/openim/protocol/conversation"
+ pbgroup "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/mw/specialerror"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/encrypt"
+)
+
+type groupServer struct {
+ pbgroup.UnimplementedGroupServer
+ db controller.GroupDatabase
+ userDB controller.UserDatabase
+ notification *NotificationSender
+ config *Config
+ webhookClient *webhook.Client
+ userClient *rpcli.UserClient
+ msgClient *rpcli.MsgClient
+ conversationClient *rpcli.ConversationClient
+ adminUserIDs []string
+}
+
+type Config struct {
+ RpcConfig config.Group
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+ groupDB, err := mgo.NewGroupMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ groupMemberDB, err := mgo.NewGroupMember(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ groupRequestDB, err := mgo.NewGroupRequestMgo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ msgConn, err := client.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+ conversationConn, err := client.GetConn(ctx, config.Discovery.RpcService.Conversation)
+ if err != nil {
+ return err
+ }
+
+ // Initialize the user database for cross-entity lookups.
+ userDB, err := mgo.NewUserMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ userCache := redis2.NewUserCacheRedis(rdb, &config.LocalCacheConfig, userDB, redis2.GetRocksCacheOptions())
+ userDatabase := controller.NewUserDatabase(userDB, userCache, mgocli.GetTx())
+
+ // Initialize the webhook config manager (supports loading config from the database).
+ var webhookClient *webhook.Client
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err == nil {
+ // SystemConfig database initialized successfully; use the config manager.
+ webhookConfigManager := webhook.NewConfigManager(systemConfigDB, &config.WebhooksConfig)
+ if err := webhookConfigManager.Start(ctx); err != nil {
+ log.ZWarn(ctx, "failed to start webhook config manager, using default config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ } else {
+ webhookClient = webhook.NewWebhookClientWithManager(webhookConfigManager)
+ }
+ } else {
+ // SystemConfig database failed to initialize; fall back to the default config.
+ log.ZWarn(ctx, "failed to init system config db, using default webhook config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ }
+
+ gs := groupServer{
+ config: config,
+ webhookClient: webhookClient,
+ userClient: rpcli.NewUserClient(userConn),
+ msgClient: rpcli.NewMsgClient(msgConn),
+ conversationClient: rpcli.NewConversationClient(conversationConn),
+ userDB: userDatabase,
+ adminUserIDs: config.Share.IMAdminUser.UserIDs,
+ }
+ gs.db = controller.NewGroupDatabase(rdb, &config.LocalCacheConfig, groupDB, groupMemberDB, groupRequestDB, mgocli.GetTx(), grouphash.NewGroupHashFromGroupServer(&gs))
+ gs.notification = NewNotificationSender(gs.db, config, gs.userClient, gs.msgClient, gs.conversationClient)
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+ pbgroup.RegisterGroupServer(server, &gs)
+ return nil
+}
+
+func (g *groupServer) NotificationUserInfoUpdate(ctx context.Context, req *pbgroup.NotificationUserInfoUpdateReq) (*pbgroup.NotificationUserInfoUpdateResp, error) {
+ members, err := g.db.FindGroupMemberUser(ctx, nil, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ groupIDs := make([]string, 0, len(members))
+ for _, member := range members {
+ if member.Nickname != "" && member.FaceURL != "" {
+ continue
+ }
+ groupIDs = append(groupIDs, member.GroupID)
+ }
+ for _, groupID := range groupIDs {
+ if err := g.db.MemberGroupIncrVersion(ctx, groupID, []string{req.UserID}, model.VersionStateUpdate); err != nil {
+ return nil, err
+ }
+ }
+ for _, groupID := range groupIDs {
+ g.notification.GroupMemberInfoSetNotification(ctx, groupID, req.UserID)
+ }
+ if err = g.db.DeleteGroupMemberHash(ctx, groupIDs); err != nil {
+ return nil, err
+ }
+ return &pbgroup.NotificationUserInfoUpdateResp{}, nil
+}
+
+func (g *groupServer) CheckGroupAdmin(ctx context.Context, groupID string) error {
+ if !authverify.IsAdmin(ctx) {
+ members, err := g.db.FindGroupMembers(ctx, groupID, []string{mcontext.GetOpUserID(ctx)})
+ if err != nil {
+ return err
+ }
+ if len(members) == 0 {
+ return errs.ErrNoPermission.WrapMsg("op user not in group")
+ }
+ groupMember := members[0]
+ if !(groupMember.RoleLevel == constant.GroupOwner || groupMember.RoleLevel == constant.GroupAdmin) {
+ return errs.ErrNoPermission.WrapMsg("no group owner or admin")
+ }
+ }
+ return nil
+}
+
+func (g *groupServer) IsNotFound(err error) bool {
+ return errs.ErrRecordNotFound.Is(specialerror.ErrCode(errs.Unwrap(err)))
+}
+
+func (g *groupServer) GenGroupID(ctx context.Context, groupID *string) error {
+ if *groupID != "" {
+ _, err := g.db.TakeGroup(ctx, *groupID)
+ if err == nil {
+ return servererrs.ErrGroupIDExisted.WrapMsg("group id already exists: " + *groupID)
+ } else if g.IsNotFound(err) {
+ return nil
+ } else {
+ return err
+ }
+ }
+ for i := 0; i < 10; i++ {
+ id := encrypt.Md5(strings.Join([]string{mcontext.GetOperationID(ctx), strconv.FormatInt(time.Now().UnixNano(), 10), strconv.Itoa(rand.Int())}, ",;,"))
+ bi := big.NewInt(0)
+ bi.SetString(id[0:8], 16)
+ id = bi.String()
+ _, err := g.db.TakeGroup(ctx, id)
+ if err == nil {
+ continue
+ } else if g.IsNotFound(err) {
+ *groupID = id
+ return nil
+ } else {
+ return err
+ }
+ }
+ return servererrs.ErrData.WrapMsg("group id gen error")
+}
+
+func (g *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupReq) (*pbgroup.CreateGroupResp, error) {
+ if req.GroupInfo.GroupType != constant.WorkingGroup {
+ return nil, errs.ErrArgs.WrapMsg(fmt.Sprintf("group type only supports %d", constant.WorkingGroup))
+ }
+ if req.OwnerUserID == "" {
+ return nil, errs.ErrArgs.WrapMsg("no group owner")
+ }
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ userIDs := append(append(req.MemberUserIDs, req.AdminUserIDs...), req.OwnerUserID)
+ opUserID := mcontext.GetOpUserID(ctx)
+ if !datautil.Contain(opUserID, userIDs...) {
+ userIDs = append(userIDs, opUserID)
+ }
+
+ if datautil.Duplicate(userIDs) {
+ return nil, errs.ErrArgs.WrapMsg("group member repeated")
+ }
+
+ userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(userMap) != len(userIDs) {
+ return nil, servererrs.ErrUserIDNotFound.WrapMsg("user not found")
+ }
+
+ if err := g.webhookBeforeCreateGroup(ctx, &g.config.WebhooksConfig.BeforeCreateGroup, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ var groupMembers []*model.GroupMember
+ group := convert.Pb2DBGroupInfo(req.GroupInfo)
+ if err := g.GenGroupID(ctx, &group.GroupID); err != nil {
+ return nil, err
+ }
+
+ joinGroupFunc := func(userID string, roleLevel int32) {
+ groupMember := &model.GroupMember{
+ GroupID: group.GroupID,
+ UserID: userID,
+ RoleLevel: roleLevel,
+ OperatorUserID: opUserID,
+ JoinSource: constant.JoinByInvitation,
+ InviterUserID: opUserID,
+ JoinTime: time.Now(),
+ MuteEndTime: time.UnixMilli(0),
+ }
+
+ groupMembers = append(groupMembers, groupMember)
+ }
+
+ joinGroupFunc(req.OwnerUserID, constant.GroupOwner)
+
+ for _, userID := range req.AdminUserIDs {
+ joinGroupFunc(userID, constant.GroupAdmin)
+ }
+
+ for _, userID := range req.MemberUserIDs {
+ joinGroupFunc(userID, constant.GroupOrdinaryUsers)
+ }
+
+ if err := g.webhookBeforeMembersJoinGroup(ctx, &g.config.WebhooksConfig.BeforeMemberJoinGroup, groupMembers, group.GroupID, group.Ex); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ if err := g.db.CreateGroup(ctx, []*model.Group{group}, groupMembers); err != nil {
+ return nil, err
+ }
+ resp := &pbgroup.CreateGroupResp{GroupInfo: &sdkws.GroupInfo{}}
+
+ resp.GroupInfo = convert.Db2PbGroupInfo(group, req.OwnerUserID, uint32(len(userIDs)))
+ resp.GroupInfo.MemberCount = uint32(len(userIDs))
+ tips := &sdkws.GroupCreatedTips{
+ Group: resp.GroupInfo,
+ OperationTime: group.CreateTime.UnixMilli(),
+ GroupOwnerUser: g.groupMemberDB2PB(groupMembers[0], userMap[groupMembers[0].UserID].AppMangerLevel),
+ }
+ for _, member := range groupMembers {
+ member.Nickname = userMap[member.UserID].Nickname
+ tips.MemberList = append(tips.MemberList, g.groupMemberDB2PB(member, userMap[member.UserID].AppMangerLevel))
+ if member.UserID == opUserID {
+ tips.OpUser = g.groupMemberDB2PB(member, userMap[member.UserID].AppMangerLevel)
+ break
+ }
+ }
+ g.notification.GroupCreatedNotification(ctx, tips, req.SendMessage)
+
+ if req.GroupInfo.Notification != "" {
+ notificationFlag := true
+ g.notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{
+ Group: tips.Group,
+ OpUser: tips.OpUser,
+ }, ¬ificationFlag)
+ }
+
+ reqCallBackAfter := &pbgroup.CreateGroupReq{
+ MemberUserIDs: userIDs,
+ GroupInfo: resp.GroupInfo,
+ OwnerUserID: req.OwnerUserID,
+ AdminUserIDs: req.AdminUserIDs,
+ }
+
+ g.webhookAfterCreateGroup(ctx, &g.config.WebhooksConfig.AfterCreateGroup, reqCallBackAfter)
+
+ return resp, nil
+}
+
+func (g *groupServer) GetJoinedGroupList(ctx context.Context, req *pbgroup.GetJoinedGroupListReq) (*pbgroup.GetJoinedGroupListResp, error) {
+ if err := authverify.CheckAccess(ctx, req.FromUserID); err != nil {
+ return nil, err
+ }
+ total, members, err := g.db.PageGetJoinGroup(ctx, req.FromUserID, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ var resp pbgroup.GetJoinedGroupListResp
+ resp.Total = uint32(total)
+ if len(members) == 0 {
+ return &resp, nil
+ }
+ groupIDs := datautil.Slice(members, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ groups, err := g.db.FindGroup(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ groupMemberNum, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ owners, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ ownerMap := datautil.SliceToMap(owners, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ resp.Groups = datautil.Slice(datautil.Order(groupIDs, groups, func(group *model.Group) string {
+ return group.GroupID
+ }), func(group *model.Group) *sdkws.GroupInfo {
+ var userID string
+ if user := ownerMap[group.GroupID]; user != nil {
+ userID = user.UserID
+ }
+ return convert.Db2PbGroupInfo(group, userID, groupMemberNum[group.GroupID])
+ })
+ return &resp, nil
+}
+
+func (g *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.InviteUserToGroupReq) (*pbgroup.InviteUserToGroupResp, error) {
+ if len(req.InvitedUserIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("user empty")
+ }
+ if datautil.Duplicate(req.InvitedUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("userID duplicate")
+ }
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.WrapMsg("group status is dismissed")
+ }
+
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+
+ userMap, err := g.userClient.GetUsersInfoMap(ctx, req.InvitedUserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(userMap) != len(req.InvitedUserIDs) {
+ return nil, errs.ErrRecordNotFound.WrapMsg("user not found")
+ }
+
+ var groupMember *model.GroupMember
+ opUserID := mcontext.GetOpUserID(ctx)
+
+ if !authverify.IsAdmin(ctx) {
+ var err error
+ groupMember, err = g.db.TakeGroupMember(ctx, req.GroupID, opUserID)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, groupMember); err != nil {
+ return nil, err
+ }
+ }
+
+ if err := g.webhookBeforeInviteUserToGroup(ctx, &g.config.WebhooksConfig.BeforeInviteUserToGroup, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ if group.NeedVerification == constant.AllNeedVerification {
+ if !authverify.IsAdmin(ctx) {
+ if !(groupMember.RoleLevel == constant.GroupOwner || groupMember.RoleLevel == constant.GroupAdmin) {
+ var requests []*model.GroupRequest
+ for _, userID := range req.InvitedUserIDs {
+ requests = append(requests, &model.GroupRequest{
+ UserID: userID,
+ GroupID: req.GroupID,
+ JoinSource: constant.JoinByInvitation,
+ InviterUserID: opUserID,
+ ReqTime: time.Now(),
+ HandledTime: time.Unix(0, 0),
+ })
+ }
+ if err := g.db.CreateGroupRequest(ctx, requests); err != nil {
+ return nil, err
+ }
+ for _, request := range requests {
+ g.notification.JoinGroupApplicationNotification(ctx, &pbgroup.JoinGroupReq{
+ GroupID: request.GroupID,
+ ReqMessage: request.ReqMsg,
+ JoinSource: request.JoinSource,
+ InviterUserID: request.InviterUserID,
+ }, request)
+ }
+ return &pbgroup.InviteUserToGroupResp{}, nil
+ }
+ }
+ }
+
+ // Look up existing group members to avoid adding duplicates
+ existingMembers, err := g.db.FindGroupMembers(ctx, req.GroupID, req.InvitedUserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ // Build a set of userIDs that are already members
+ existingUserIDMap := make(map[string]bool)
+ for _, member := range existingMembers {
+ existingUserIDMap[member.UserID] = true
+ }
+
+ // Filter out the users that still need to be added (i.e. not yet members)
+ var newUserIDs []string
+ for _, userID := range req.InvitedUserIDs {
+ if !existingUserIDMap[userID] {
+ newUserIDs = append(newUserIDs, userID)
+ }
+ }
+
+ // If every invited user is already a member, return success directly
+ if len(newUserIDs) == 0 {
+ return &pbgroup.InviteUserToGroupResp{}, nil
+ }
+
+ // Create GroupMember records only for the new members
+ var groupMembers []*model.GroupMember
+ for _, userID := range newUserIDs {
+ member := &model.GroupMember{
+ GroupID: req.GroupID,
+ UserID: userID,
+ RoleLevel: constant.GroupOrdinaryUsers,
+ OperatorUserID: opUserID,
+ InviterUserID: opUserID,
+ JoinSource: constant.JoinByInvitation,
+ JoinTime: time.Now(),
+ MuteEndTime: time.UnixMilli(0),
+ }
+
+ groupMembers = append(groupMembers, member)
+ }
+
+ if err := g.webhookBeforeMembersJoinGroup(ctx, &g.config.WebhooksConfig.BeforeMemberJoinGroup, groupMembers, group.GroupID, group.Ex); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ const singleQuantity = 50
+ for start := 0; start < len(groupMembers); start += singleQuantity {
+ end := min(start+singleQuantity, len(groupMembers))
+ currentMembers := groupMembers[start:end]
+
+ if err := g.db.CreateGroup(ctx, nil, currentMembers); err != nil {
+ return nil, err
+ }
+
+ userIDs := datautil.Slice(currentMembers, func(e *model.GroupMember) string {
+ return e.UserID
+ })
+
+ if len(userIDs) != 0 {
+ g.notification.GroupApplicationAgreeMemberEnterNotification(ctx, req.GroupID, req.SendMessage, opUserID, userIDs...)
+ }
+ }
+ return &pbgroup.InviteUserToGroupResp{}, nil
+}
+
+func (g *groupServer) GetGroupAllMember(ctx context.Context, req *pbgroup.GetGroupAllMemberReq) (*pbgroup.GetGroupAllMemberResp, error) {
+ members, err := g.db.FindGroupMemberAll(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if !authverify.IsAdmin(ctx) {
+ var inGroup bool
+ opUserID := mcontext.GetOpUserID(ctx)
+ for _, member := range members {
+ if member.UserID == opUserID {
+ inGroup = true
+ break
+ }
+ }
+ if !inGroup {
+ return nil, errs.ErrNoPermission.WrapMsg("opUser not in group")
+ }
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ var resp pbgroup.GetGroupAllMemberResp
+ resp.Members = datautil.Slice(members, func(e *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return convert.Db2PbGroupMember(e)
+ })
+ return &resp, nil
+}
+
+func (g *groupServer) checkAdminOrInGroup(ctx context.Context, groupID string) error {
+ if authverify.IsAdmin(ctx) {
+ return nil
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ members, err := g.db.FindGroupMembers(ctx, groupID, []string{opUserID})
+ if err != nil {
+ return err
+ }
+ if len(members) == 0 {
+ return errs.ErrNoPermission.WrapMsg("op user not in group")
+ }
+ return nil
+}
+
+func (g *groupServer) GetGroupMemberList(ctx context.Context, req *pbgroup.GetGroupMemberListReq) (*pbgroup.GetGroupMemberListResp, error) {
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ var (
+ total int64
+ members []*model.GroupMember
+ err error
+ )
+
+ // Support multiple independent search keywords: nickname, account, and phone number.
+ // These fields live in the user table, so the search requires a cross-table (join-style) query.
+ nickname := req.Nickname
+ account := req.Account
+ phone := req.Phone
+
+ // Check whether any of the multi-field search parameters are set
+ hasMultiSearch := nickname != "" || account != "" || phone != ""
+
+ if req.Keyword != "" && !hasMultiSearch {
+ // Use the legacy keyword search (kept for backward compatibility)
+ total, members, err = g.db.SearchGroupMember(ctx, req.GroupID, req.Keyword, req.Pagination)
+ } else if hasMultiSearch {
+ // Perform the cross-table query: search the user table first, then the group member table.
+ // 1. Search the user table for users matching account, phone, or nickname.
+ log.ZDebug(ctx, "SearchUsersByFields start", "account", account, "phone", phone, "nickname", nickname, "groupID", req.GroupID)
+ matchedUserIDs, err := g.userDB.SearchUsersByFields(ctx, account, phone, nickname)
+ if err != nil {
+ log.ZError(ctx, "SearchUsersByFields failed", err, "account", account, "phone", phone, "nickname", nickname)
+ return nil, err
+ }
+ log.ZDebug(ctx, "SearchUsersByFields result", "matchedUserIDs", matchedUserIDs, "count", len(matchedUserIDs))
+
+ // 2. If matching users were found, look them up in the group member table by user_id and group_id
+ if len(matchedUserIDs) > 0 {
+ log.ZDebug(ctx, "FindGroupMembers start", "groupID", req.GroupID, "matchedUserIDs", matchedUserIDs, "count", len(matchedUserIDs))
+ // Fetch these users' membership records in the target group
+ members, err = g.db.FindGroupMembers(ctx, req.GroupID, matchedUserIDs)
+ if err != nil {
+ log.ZError(ctx, "FindGroupMembers failed", err, "groupID", req.GroupID, "matchedUserIDs", matchedUserIDs)
+ return nil, err
+ }
+ log.ZDebug(ctx, "FindGroupMembers result", "memberCount", len(members))
+
+ // Compute the total count (used for pagination)
+ total = int64(len(members))
+
+ // Apply pagination in memory
+ pageNumber := int(req.Pagination.GetPageNumber())
+ showNumber := int(req.Pagination.GetShowNumber())
+ if pageNumber > 0 && showNumber > 0 {
+ start := (pageNumber - 1) * showNumber
+ end := start + showNumber
+ if start < len(members) {
+ if end > len(members) {
+ end = len(members)
+ }
+ members = members[start:end]
+ } else {
+ members = []*model.GroupMember{}
+ }
+ }
+ } else {
+ // No matching users were found; return an empty result
+ total = 0
+ members = []*model.GroupMember{}
+ }
+ } else {
+ // No search criteria were given; return all members (paginated)
+ total, members, err = g.db.PageGetGroupMember(ctx, req.GroupID, req.Pagination)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupMemberListResp{
+ Total: uint32(total),
+ Members: datautil.Batch(convert.Db2PbGroupMember, members),
+ }, nil
+}
+
+func (g *groupServer) KickGroupMember(ctx context.Context, req *pbgroup.KickGroupMemberReq) (*pbgroup.KickGroupMemberResp, error) {
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if len(req.KickedUserIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("KickedUserIDs empty")
+ }
+ if datautil.Duplicate(req.KickedUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("KickedUserIDs duplicate")
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ if datautil.Contain(opUserID, req.KickedUserIDs...) {
+ return nil, errs.ErrArgs.WrapMsg("opUserID in KickedUserIDs")
+ }
+ owner, err := g.db.TakeGroupOwner(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if datautil.Contain(owner.UserID, req.KickedUserIDs...) {
+ return nil, errs.ErrArgs.WrapMsg("group owner cannot be kicked")
+ }
+
+ members, err := g.db.FindGroupMembers(ctx, req.GroupID, append(req.KickedUserIDs, opUserID))
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ memberMap := make(map[string]*model.GroupMember)
+ for i, member := range members {
+ memberMap[member.UserID] = members[i]
+ }
+ isAppManagerUid := authverify.IsAdmin(ctx)
+ opMember := memberMap[opUserID]
+ for _, userID := range req.KickedUserIDs {
+ member, ok := memberMap[userID]
+ if !ok {
+ return nil, servererrs.ErrUserIDNotFound.WrapMsg(userID)
+ }
+ if !isAppManagerUid {
+ if opMember == nil {
+ return nil, errs.ErrNoPermission.WrapMsg("opUserID not in group")
+ }
+ switch opMember.RoleLevel {
+ case constant.GroupOwner:
+ case constant.GroupAdmin:
+ if member.RoleLevel == constant.GroupOwner || member.RoleLevel == constant.GroupAdmin {
+ return nil, errs.ErrNoPermission.WrapMsg("group admins cannot remove the group owner or other admins")
+ }
+ case constant.GroupOrdinaryUsers:
+ return nil, errs.ErrNoPermission.WrapMsg("opUserID has no permission")
+ default:
+ return nil, errs.ErrNoPermission.WrapMsg("opUserID roleLevel unknown")
+ }
+ }
+ }
+ num, err := g.db.FindGroupMemberNum(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ ownerUserIDs, err := g.db.GetGroupRoleLevelMemberIDs(ctx, req.GroupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ var ownerUserID string
+ if len(ownerUserIDs) > 0 {
+ ownerUserID = ownerUserIDs[0]
+ }
+ if err := g.db.DeleteGroupMember(ctx, group.GroupID, req.KickedUserIDs); err != nil {
+ return nil, err
+ }
+ tips := &sdkws.MemberKickedTips{
+ Group: &sdkws.GroupInfo{
+ GroupID: group.GroupID,
+ GroupName: group.GroupName,
+ Notification: group.Notification,
+ Introduction: group.Introduction,
+ FaceURL: group.FaceURL,
+ OwnerUserID: ownerUserID,
+ CreateTime: group.CreateTime.UnixMilli(),
+ MemberCount: num - uint32(len(req.KickedUserIDs)),
+ Ex: group.Ex,
+ Status: group.Status,
+ CreatorUserID: group.CreatorUserID,
+ GroupType: group.GroupType,
+ NeedVerification: group.NeedVerification,
+ LookMemberInfo: group.LookMemberInfo,
+ ApplyMemberFriend: group.ApplyMemberFriend,
+ NotificationUpdateTime: group.NotificationUpdateTime.UnixMilli(),
+ NotificationUserID: group.NotificationUserID,
+ },
+ KickedUserList: []*sdkws.GroupMemberFullInfo{},
+ }
+ if opMember, ok := memberMap[opUserID]; ok {
+ tips.OpUser = convert.Db2PbGroupMember(opMember)
+ }
+ for _, userID := range req.KickedUserIDs {
+ tips.KickedUserList = append(tips.KickedUserList, convert.Db2PbGroupMember(memberMap[userID]))
+ }
+ // Only send the notification when the operator is the group owner or an admin
+ var shouldSendMsg *bool
+ if isAppManagerUid || (opMember != nil && (opMember.RoleLevel == constant.GroupOwner || opMember.RoleLevel == constant.GroupAdmin)) {
+ shouldSendMsg = req.SendMessage
+ } else {
+ shouldSendFalse := false
+ shouldSendMsg = &shouldSendFalse
+ }
+ g.notification.MemberKickedNotification(ctx, tips, shouldSendMsg)
+ if err := g.deleteMemberAndSetConversationSeq(ctx, req.GroupID, req.KickedUserIDs); err != nil {
+ return nil, err
+ }
+ g.webhookAfterKickGroupMember(ctx, &g.config.WebhooksConfig.AfterKickGroupMember, req)
+
+ return &pbgroup.KickGroupMemberResp{}, nil
+}
+
+func (g *groupServer) GetGroupMembersInfo(ctx context.Context, req *pbgroup.GetGroupMembersInfoReq) (*pbgroup.GetGroupMembersInfoResp, error) {
+ if len(req.UserIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("userIDs empty")
+ }
+ if req.GroupID == "" {
+ return nil, errs.ErrArgs.WrapMsg("groupID empty")
+ }
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ members, err := g.getGroupMembersInfo(ctx, req.GroupID, req.UserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ return &pbgroup.GetGroupMembersInfoResp{
+ Members: members,
+ }, nil
+}
+
+func (g *groupServer) getGroupMembersInfo(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ if len(userIDs) == 0 {
+ return nil, nil
+ }
+ members, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ return datautil.Slice(members, func(e *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return convert.Db2PbGroupMember(e)
+ }), nil
+}
+
+ // GetGroupApplicationList returns the join requests of the groups managed by the requesting user.
+func (g *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.GetGroupApplicationListReq) (*pbgroup.GetGroupApplicationListResp, error) {
+ if err := authverify.CheckAccess(ctx, req.FromUserID); err != nil {
+ return nil, err
+ }
+ var (
+ groupIDs []string
+ err error
+ )
+ if len(req.GroupIDs) == 0 {
+ groupIDs, err = g.db.FindUserManagedGroupID(ctx, req.FromUserID)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ req.GroupIDs = datautil.Distinct(req.GroupIDs)
+ if !authverify.IsAdmin(ctx) {
+ for _, groupID := range req.GroupIDs {
+ if err := g.CheckGroupAdmin(ctx, groupID); err != nil {
+ return nil, err
+ }
+ }
+ }
+ groupIDs = req.GroupIDs
+ }
+ resp := &pbgroup.GetGroupApplicationListResp{}
+ if len(groupIDs) == 0 {
+ return resp, nil
+ }
+ handleResults := datautil.Slice(req.HandleResults, func(e int32) int {
+ return int(e)
+ })
+ total, groupRequests, err := g.db.PageGroupRequest(ctx, groupIDs, handleResults, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ resp.Total = uint32(total)
+ if len(groupRequests) == 0 {
+ return resp, nil
+ }
+ var userIDs []string
+
+ for _, gr := range groupRequests {
+ userIDs = append(userIDs, gr.UserID)
+ }
+ userIDs = datautil.Distinct(userIDs)
+ userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ groups, err := g.db.FindGroup(ctx, datautil.Distinct(groupIDs))
+ if err != nil {
+ return nil, err
+ }
+ groupMap := datautil.SliceToMap(groups, func(e *model.Group) string {
+ return e.GroupID
+ })
+ if ids := datautil.Single(datautil.Keys(groupMap), groupIDs); len(ids) > 0 {
+ return nil, servererrs.ErrGroupIDNotFound.WrapMsg(strings.Join(ids, ","))
+ }
+ groupMemberNumMap, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ owners, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
+ ownerMap := datautil.SliceToMap(owners, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ resp.GroupRequests = datautil.Slice(groupRequests, func(e *model.GroupRequest) *sdkws.GroupRequest {
+ var ownerUserID string
+ if owner, ok := ownerMap[e.GroupID]; ok {
+ ownerUserID = owner.UserID
+ }
+ return convert.Db2PbGroupRequest(e, userMap[e.UserID], convert.Db2PbGroupInfo(groupMap[e.GroupID], ownerUserID, groupMemberNumMap[e.GroupID]))
+ })
+ return resp, nil
+}
+
+func (g *groupServer) GetGroupsInfo(ctx context.Context, req *pbgroup.GetGroupsInfoReq) (*pbgroup.GetGroupsInfoResp, error) {
+ if len(req.GroupIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("groupID is empty")
+ }
+ groups, err := g.getGroupsInfo(ctx, req.GroupIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupsInfoResp{
+ GroupInfos: groups,
+ }, nil
+}
+
+func (g *groupServer) GetGroupApplicationUnhandledCount(ctx context.Context, req *pbgroup.GetGroupApplicationUnhandledCountReq) (*pbgroup.GetGroupApplicationUnhandledCountResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ groupIDs, err := g.db.FindUserManagedGroupID(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ count, err := g.db.GetGroupApplicationUnhandledCount(ctx, groupIDs, req.Time)
+ if err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupApplicationUnhandledCountResp{
+ Count: count,
+ }, nil
+}
+
+func (g *groupServer) getGroupsInfo(ctx context.Context, groupIDs []string) ([]*sdkws.GroupInfo, error) {
+ if len(groupIDs) == 0 {
+ return nil, nil
+ }
+ groups, err := g.db.FindGroup(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ groupMemberNumMap, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ owners, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
+ ownerMap := datautil.SliceToMap(owners, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ return datautil.Slice(groups, func(e *model.Group) *sdkws.GroupInfo {
+ var ownerUserID string
+ if owner, ok := ownerMap[e.GroupID]; ok {
+ ownerUserID = owner.UserID
+ }
+ return convert.Db2PbGroupInfo(e, ownerUserID, groupMemberNumMap[e.GroupID])
+ }), nil
+}
+
+func (g *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) (*pbgroup.GroupApplicationResponseResp, error) {
+ if !datautil.Contain(req.HandleResult, constant.GroupResponseAgree, constant.GroupResponseRefuse) {
+ return nil, errs.ErrArgs.WrapMsg("HandleResult unknown")
+ }
+ if !authverify.IsAdmin(ctx) {
+ groupMember, err := g.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ if err != nil {
+ return nil, err
+ }
+ if !(groupMember.RoleLevel == constant.GroupOwner || groupMember.RoleLevel == constant.GroupAdmin) {
+ return nil, errs.ErrNoPermission.WrapMsg("no group owner or admin")
+ }
+ }
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ groupRequest, err := g.db.TakeGroupRequest(ctx, req.GroupID, req.FromUserID)
+ if err != nil {
+ return nil, err
+ }
+ if groupRequest.HandleResult != 0 {
+ return nil, servererrs.ErrGroupRequestHandled.WrapMsg("group request already processed")
+ }
+ var inGroup bool
+ if _, err := g.db.TakeGroupMember(ctx, req.GroupID, req.FromUserID); err == nil {
+ inGroup = true // Already in group
+ } else if !g.IsNotFound(err) {
+ return nil, err
+ }
+ if err := g.userClient.CheckUser(ctx, []string{req.FromUserID}); err != nil {
+ return nil, err
+ }
+ var member *model.GroupMember
+ if !inGroup && req.HandleResult == constant.GroupResponseAgree {
+ member = &model.GroupMember{
+ GroupID: req.GroupID,
+ UserID: req.FromUserID,
+ Nickname: "",
+ FaceURL: "",
+ RoleLevel: constant.GroupOrdinaryUsers,
+ JoinTime: time.Now(),
+ JoinSource: groupRequest.JoinSource,
+ MuteEndTime: time.Unix(0, 0),
+ InviterUserID: groupRequest.InviterUserID,
+ OperatorUserID: mcontext.GetOpUserID(ctx),
+ }
+
+ if err := g.webhookBeforeMembersJoinGroup(ctx, &g.config.WebhooksConfig.BeforeMemberJoinGroup, []*model.GroupMember{member}, group.GroupID, group.Ex); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+ }
+ log.ZDebug(ctx, "GroupApplicationResponse", "inGroup", inGroup, "HandleResult", req.HandleResult, "member", member)
+ if err := g.db.HandlerGroupRequest(ctx, req.GroupID, req.FromUserID, req.HandledMsg, req.HandleResult, member); err != nil {
+ return nil, err
+ }
+ switch req.HandleResult {
+ case constant.GroupResponseAgree:
+ g.notification.GroupApplicationAcceptedNotification(ctx, req)
+ if member == nil {
+ log.ZDebug(ctx, "GroupApplicationResponse", "member is nil")
+ } else {
+ if groupRequest.InviterUserID == "" {
+ if err = g.notification.MemberEnterNotification(ctx, req.GroupID, req.FromUserID); err != nil {
+ return nil, err
+ }
+ } else {
+ if err = g.notification.GroupApplicationAgreeMemberEnterNotification(ctx, req.GroupID, nil, groupRequest.InviterUserID, req.FromUserID); err != nil {
+ return nil, err
+ }
+ }
+ }
+ case constant.GroupResponseRefuse:
+ g.notification.GroupApplicationRejectedNotification(ctx, req)
+ }
+
+ return &pbgroup.GroupApplicationResponseResp{}, nil
+}
+
+func (g *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq) (*pbgroup.JoinGroupResp, error) {
+ user, err := g.userClient.GetUserInfo(ctx, req.InviterUserID)
+ if err != nil {
+ return nil, err
+ }
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.Wrap()
+ }
+
+ reqCall := &callbackstruct.CallbackJoinGroupReq{
+ GroupID: req.GroupID,
+ GroupType: string(group.GroupType),
+ ApplyID: req.InviterUserID,
+ ReqMessage: req.ReqMessage,
+ Ex: req.Ex,
+ }
+
+ if err := g.webhookBeforeApplyJoinGroup(ctx, &g.config.WebhooksConfig.BeforeApplyJoinGroup, reqCall); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ _, err = g.db.TakeGroupMember(ctx, req.GroupID, req.InviterUserID)
+ if err == nil {
+ return nil, errs.ErrArgs.Wrap()
+ } else if !g.IsNotFound(err) && errs.Unwrap(err) != errs.ErrRecordNotFound {
+ return nil, err
+ }
+ log.ZDebug(ctx, "JoinGroup.groupInfo", "group", group, "eq", group.NeedVerification == constant.Directly)
+ if group.NeedVerification == constant.Directly {
+ groupMember := &model.GroupMember{
+ GroupID: group.GroupID,
+ UserID: user.UserID,
+ RoleLevel: constant.GroupOrdinaryUsers,
+ OperatorUserID: mcontext.GetOpUserID(ctx),
+ InviterUserID: req.InviterUserID,
+ JoinSource: req.JoinSource, // record the join source (mirrors the other join paths)
+ JoinTime: time.Now(),
+ MuteEndTime: time.UnixMilli(0),
+ }
+
+ if err := g.webhookBeforeMembersJoinGroup(ctx, &g.config.WebhooksConfig.BeforeMemberJoinGroup, []*model.GroupMember{groupMember}, group.GroupID, group.Ex); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ if err := g.db.CreateGroup(ctx, nil, []*model.GroupMember{groupMember}); err != nil {
+ return nil, err
+ }
+
+ // Populate the member info (Nickname, FaceURL, etc.) so the record is complete
+ if err := g.PopulateGroupMember(ctx, groupMember); err != nil {
+ log.ZWarn(ctx, "PopulateGroupMember failed", err, "groupID", req.GroupID, "userID", user.UserID)
+ // Non-fatal: continue without blocking the join flow
+ }
+
+ if err = g.notification.MemberEnterNotification(ctx, req.GroupID, req.InviterUserID); err != nil {
+ return nil, err
+ }
+ g.webhookAfterJoinGroup(ctx, &g.config.WebhooksConfig.AfterJoinGroup, req)
+
+ return &pbgroup.JoinGroupResp{}, nil
+ }
+
+ groupRequest := model.GroupRequest{
+ UserID: req.InviterUserID,
+ ReqMsg: req.ReqMessage,
+ GroupID: req.GroupID,
+ JoinSource: req.JoinSource,
+ ReqTime: time.Now(),
+ HandledTime: time.Unix(0, 0),
+ Ex: req.Ex,
+ }
+ if err = g.db.CreateGroupRequest(ctx, []*model.GroupRequest{&groupRequest}); err != nil {
+ return nil, err
+ }
+ g.notification.JoinGroupApplicationNotification(ctx, req, &groupRequest)
+ return &pbgroup.JoinGroupResp{}, nil
+}
+
+func (g *groupServer) QuitGroup(ctx context.Context, req *pbgroup.QuitGroupReq) (*pbgroup.QuitGroupResp, error) {
+ if req.UserID == "" {
+ req.UserID = mcontext.GetOpUserID(ctx)
+ } else {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ }
+ member, err := g.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if member.RoleLevel == constant.GroupOwner {
+ return nil, errs.ErrNoPermission.WrapMsg("group owner can't quit")
+ }
+ if err := g.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ // Check the operator's role: only an owner/admin operation triggers a notification
+ var shouldSendNotification bool
+ isAppManagerUid := authverify.IsAdmin(ctx)
+ if isAppManagerUid {
+ // App admins always trigger the notification
+ shouldSendNotification = true
+ } else if opUserID != req.UserID {
+ // If the operator is not the quitting user, check whether they are the group owner or an admin
+ opMember, err := g.db.TakeGroupMember(ctx, req.GroupID, opUserID)
+ if err == nil && (opMember.RoleLevel == constant.GroupOwner || opMember.RoleLevel == constant.GroupAdmin) {
+ shouldSendNotification = true
+ }
+ }
+ // An ordinary member quitting on their own does not trigger a notification
+ err = g.db.DeleteGroupMember(ctx, req.GroupID, []string{req.UserID})
+ if err != nil {
+ return nil, err
+ }
+ if shouldSendNotification {
+ g.notification.MemberQuitNotification(ctx, g.groupMemberDB2PB(member, 1))
+ }
+ if err := g.deleteMemberAndSetConversationSeq(ctx, req.GroupID, []string{req.UserID}); err != nil {
+ return nil, err
+ }
+ g.webhookAfterQuitGroup(ctx, &g.config.WebhooksConfig.AfterQuitGroup, req)
+
+ return &pbgroup.QuitGroupResp{}, nil
+}
+
+func (g *groupServer) deleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
+ maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ return g.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
+}
+
+func (g *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInfoReq) (*pbgroup.SetGroupInfoResp, error) {
+ var opMember *model.GroupMember
+ if !authverify.IsAdmin(ctx) {
+ var err error
+ opMember, err = g.db.TakeGroupMember(ctx, req.GroupInfoForSet.GroupID, mcontext.GetOpUserID(ctx))
+ if err != nil {
+ return nil, err
+ }
+ if !(opMember.RoleLevel == constant.GroupOwner || opMember.RoleLevel == constant.GroupAdmin) {
+ return nil, errs.ErrNoPermission.WrapMsg("no group owner or admin")
+ }
+ if err := g.PopulateGroupMember(ctx, opMember); err != nil {
+ return nil, err
+ }
+ }
+
+ if err := g.webhookBeforeSetGroupInfo(ctx, &g.config.WebhooksConfig.BeforeSetGroupInfo, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ group, err := g.db.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.Wrap()
+ }
+
+ count, err := g.db.FindGroupMemberNum(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ owner, err := g.db.TakeGroupOwner(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, owner); err != nil {
+ return nil, err
+ }
+ update := UpdateGroupInfoMap(ctx, req.GroupInfoForSet)
+ if len(update) == 0 {
+ return &pbgroup.SetGroupInfoResp{}, nil
+ }
+ if err := g.db.UpdateGroup(ctx, group.GroupID, update); err != nil {
+ return nil, err
+ }
+ group, err = g.db.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.GroupInfoSetTips{
+ Group: g.groupDB2PB(group, owner.UserID, count),
+ MuteTime: 0,
+ OpUser: &sdkws.GroupMemberFullInfo{},
+ }
+ if opMember != nil {
+ tips.OpUser = g.groupMemberDB2PB(opMember, 0)
+ }
+ num := len(update)
+ if req.GroupInfoForSet.Notification != "" {
+ num -= 3
+ func() {
+ conversation := &pbconv.ConversationReq{
+ ConversationID: msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupInfoForSet.GroupID),
+ ConversationType: constant.ReadGroupChatType,
+ GroupID: req.GroupInfoForSet.GroupID,
+ }
+ resp, err := g.GetGroupMemberUserIDs(ctx, &pbgroup.GetGroupMemberUserIDsReq{GroupID: req.GroupInfoForSet.GroupID})
+ if err != nil {
+ log.ZWarn(ctx, "GetGroupMemberUserIDs failed", err)
+ return
+ }
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
+ if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
+ log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
+ }
+ }()
+ notificationFlag := true
+ g.notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{Group: tips.Group, OpUser: tips.OpUser}, &notificationFlag)
+ }
+ if req.GroupInfoForSet.GroupName != "" {
+ num--
+ g.notification.GroupInfoSetNameNotification(ctx, &sdkws.GroupInfoSetNameTips{Group: tips.Group, OpUser: tips.OpUser})
+ }
+ if num > 0 {
+ g.notification.GroupInfoSetNotification(ctx, tips)
+ }
+
+ g.webhookAfterSetGroupInfo(ctx, &g.config.WebhooksConfig.AfterSetGroupInfo, req)
+
+ return &pbgroup.SetGroupInfoResp{}, nil
+}
+
+func (g *groupServer) SetGroupInfoEx(ctx context.Context, req *pbgroup.SetGroupInfoExReq) (*pbgroup.SetGroupInfoExResp, error) {
+ var opMember *model.GroupMember
+
+ if !authverify.IsAdmin(ctx) {
+ var err error
+
+ opMember, err = g.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ if err != nil {
+ return nil, err
+ }
+
+ if !(opMember.RoleLevel == constant.GroupOwner || opMember.RoleLevel == constant.GroupAdmin) {
+ return nil, errs.ErrNoPermission.WrapMsg("no group owner or admin")
+ }
+
+ if err := g.PopulateGroupMember(ctx, opMember); err != nil {
+ return nil, err
+ }
+ }
+
+ if err := g.webhookBeforeSetGroupInfoEx(ctx, &g.config.WebhooksConfig.BeforeSetGroupInfoEx, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.Wrap()
+ }
+
+ count, err := g.db.FindGroupMemberNum(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+
+ owner, err := g.db.TakeGroupOwner(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := g.PopulateGroupMember(ctx, owner); err != nil {
+ return nil, err
+ }
+
+ updatedData, normalFlag, groupNameFlag, notificationFlag, err := UpdateGroupInfoExMap(ctx, req)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(updatedData) == 0 {
+ return &pbgroup.SetGroupInfoExResp{}, nil
+ }
+
+ if err := g.db.UpdateGroup(ctx, group.GroupID, updatedData); err != nil {
+ return nil, err
+ }
+
+ group, err = g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+
+ tips := &sdkws.GroupInfoSetTips{
+ Group: g.groupDB2PB(group, owner.UserID, count),
+ MuteTime: 0,
+ OpUser: &sdkws.GroupMemberFullInfo{},
+ }
+
+ if opMember != nil {
+ tips.OpUser = g.groupMemberDB2PB(opMember, 0)
+ }
+
+ if notificationFlag {
+ if req.Notification.Value != "" {
+ conversation := &pbconv.ConversationReq{
+ ConversationID: msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupID),
+ ConversationType: constant.ReadGroupChatType,
+ GroupID: req.GroupID,
+ }
+
+ resp, err := g.GetGroupMemberUserIDs(ctx, &pbgroup.GetGroupMemberUserIDsReq{GroupID: req.GroupID})
+ if err != nil {
+ log.ZWarn(ctx, "GetGroupMemberUserIDs failed", err)
+ return nil, err
+ }
+
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
+ if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
+ log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
+ }
+
+ g.notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{Group: tips.Group, OpUser: tips.OpUser}, &notificationFlag)
+ } else {
+ notificationFlag = false
+ g.notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{Group: tips.Group, OpUser: tips.OpUser}, &notificationFlag)
+ }
+ }
+
+ if groupNameFlag {
+ g.notification.GroupInfoSetNameNotification(ctx, &sdkws.GroupInfoSetNameTips{Group: tips.Group, OpUser: tips.OpUser})
+ }
+
+ // If any other fields were updated, send the normal group info notification.
+ if normalFlag {
+ g.notification.GroupInfoSetNotification(ctx, tips)
+ }
+
+ g.webhookAfterSetGroupInfoEx(ctx, &g.config.WebhooksConfig.AfterSetGroupInfoEx, req)
+
+ return &pbgroup.SetGroupInfoExResp{}, nil
+}
+
+func (g *groupServer) TransferGroupOwner(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) (*pbgroup.TransferGroupOwnerResp, error) {
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.Wrap()
+ }
+
+ if req.OldOwnerUserID == req.NewOwnerUserID {
+ return nil, errs.ErrArgs.WrapMsg("OldOwnerUserID == NewOwnerUserID")
+ }
+
+ members, err := g.db.FindGroupMembers(ctx, req.GroupID, []string{req.OldOwnerUserID, req.NewOwnerUserID})
+ if err != nil {
+ return nil, err
+ }
+
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+
+ memberMap := datautil.SliceToMap(members, func(e *model.GroupMember) string { return e.UserID })
+ if ids := datautil.Single([]string{req.OldOwnerUserID, req.NewOwnerUserID}, datautil.Keys(memberMap)); len(ids) > 0 {
+ return nil, errs.ErrArgs.WrapMsg("user not in group " + strings.Join(ids, ","))
+ }
+
+ oldOwner := memberMap[req.OldOwnerUserID]
+ if oldOwner == nil {
+ return nil, errs.ErrArgs.WrapMsg("OldOwnerUserID not in group " + req.OldOwnerUserID)
+ }
+
+ newOwner := memberMap[req.NewOwnerUserID]
+ if newOwner == nil {
+ return nil, errs.ErrArgs.WrapMsg("NewOwnerUserID not in group " + req.NewOwnerUserID)
+ }
+
+ if !authverify.IsAdmin(ctx) {
+ if !(mcontext.GetOpUserID(ctx) == oldOwner.UserID && oldOwner.RoleLevel == constant.GroupOwner) {
+ return nil, errs.ErrNoPermission.WrapMsg("no permission transfer group owner")
+ }
+ }
+
+ if newOwner.MuteEndTime.After(time.Now()) {
+ if _, err := g.CancelMuteGroupMember(ctx, &pbgroup.CancelMuteGroupMemberReq{
+ GroupID: group.GroupID,
+ UserID: req.NewOwnerUserID}); err != nil {
+ return nil, err
+ }
+ }
+
+ if err := g.db.TransferGroupOwner(ctx, req.GroupID, req.OldOwnerUserID, req.NewOwnerUserID, newOwner.RoleLevel); err != nil {
+ return nil, err
+ }
+
+ g.webhookAfterTransferGroupOwner(ctx, &g.config.WebhooksConfig.AfterTransferGroupOwner, req)
+
+ g.notification.GroupOwnerTransferredNotification(ctx, req)
+
+ return &pbgroup.TransferGroupOwnerResp{}, nil
+}
+
+func (g *groupServer) GetGroups(ctx context.Context, req *pbgroup.GetGroupsReq) (*pbgroup.GetGroupsResp, error) {
+ var (
+ group []*model.Group
+ err error
+ )
+ var resp pbgroup.GetGroupsResp
+ if req.GroupID != "" {
+ group, err = g.db.FindGroup(ctx, []string{req.GroupID})
+ resp.Total = uint32(len(group))
+ } else {
+ var total int64
+ total, group, err = g.db.SearchGroup(ctx, req.GroupName, req.Pagination)
+ resp.Total = uint32(total)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+
+ groupIDs := datautil.Slice(group, func(e *model.Group) string {
+ return e.GroupID
+ })
+
+ ownerMembers, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ ownerMemberMap := datautil.SliceToMap(ownerMembers, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ groupMemberNumMap, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ resp.Groups = datautil.Slice(group, func(group *model.Group) *pbgroup.CMSGroup {
+ var (
+ userID string
+ username string
+ )
+ if member, ok := ownerMemberMap[group.GroupID]; ok {
+ userID = member.UserID
+ username = member.Nickname
+ }
+ return convert.Db2PbCMSGroup(group, userID, username, groupMemberNumMap[group.GroupID])
+ })
+ return &resp, nil
+}
+
+func (g *groupServer) GetGroupMembersCMS(ctx context.Context, req *pbgroup.GetGroupMembersCMSReq) (*pbgroup.GetGroupMembersCMSResp, error) {
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ total, members, err := g.db.SearchGroupMember(ctx, req.UserName, req.GroupID, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ var resp pbgroup.GetGroupMembersCMSResp
+ resp.Total = uint32(total)
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ resp.Members = datautil.Slice(members, func(e *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return convert.Db2PbGroupMember(e)
+ })
+ return &resp, nil
+}
+
+func (g *groupServer) GetUserReqApplicationList(ctx context.Context, req *pbgroup.GetUserReqApplicationListReq) (*pbgroup.GetUserReqApplicationListResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ user, err := g.userClient.GetUserInfo(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ handleResults := datautil.Slice(req.HandleResults, func(e int32) int {
+ return int(e)
+ })
+ total, requests, err := g.db.PageGroupRequestUser(ctx, req.UserID, req.GroupIDs, handleResults, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ if len(requests) == 0 {
+ return &pbgroup.GetUserReqApplicationListResp{Total: uint32(total)}, nil
+ }
+ groupIDs := datautil.Distinct(datautil.Slice(requests, func(e *model.GroupRequest) string {
+ return e.GroupID
+ }))
+ groups, err := g.db.FindGroup(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ groupMap := datautil.SliceToMap(groups, func(e *model.Group) string {
+ return e.GroupID
+ })
+ owners, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
+ ownerMap := datautil.SliceToMap(owners, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+ groupMemberNum, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetUserReqApplicationListResp{
+ Total: uint32(total),
+ GroupRequests: datautil.Slice(requests, func(e *model.GroupRequest) *sdkws.GroupRequest {
+ var ownerUserID string
+ if owner, ok := ownerMap[e.GroupID]; ok {
+ ownerUserID = owner.UserID
+ }
+ return convert.Db2PbGroupRequest(e, user, convert.Db2PbGroupInfo(groupMap[e.GroupID], ownerUserID, groupMemberNum[e.GroupID]))
+ }),
+ }, nil
+}
+
+func (g *groupServer) DismissGroup(ctx context.Context, req *pbgroup.DismissGroupReq) (*pbgroup.DismissGroupResp, error) {
+ owner, err := g.db.TakeGroupOwner(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if !authverify.IsAdmin(ctx) {
+ if owner.UserID != mcontext.GetOpUserID(ctx) {
+ return nil, errs.ErrNoPermission.WrapMsg("not group owner")
+ }
+ }
+ if err := g.PopulateGroupMember(ctx, owner); err != nil {
+ return nil, err
+ }
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if !req.DeleteMember && group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.WrapMsg("group status is dismissed")
+ }
+ if err := g.db.DismissGroup(ctx, req.GroupID, req.DeleteMember); err != nil {
+ return nil, err
+ }
+ if !req.DeleteMember {
+ num, err := g.db.FindGroupMemberNum(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ group.Status = constant.GroupStatusDismissed
+ tips := &sdkws.GroupDismissedTips{
+ Group: g.groupDB2PB(group, owner.UserID, num),
+ OpUser: &sdkws.GroupMemberFullInfo{},
+ }
+ if mcontext.GetOpUserID(ctx) == owner.UserID {
+ tips.OpUser = g.groupMemberDB2PB(owner, 0)
+ }
+ g.notification.GroupDismissedNotification(ctx, tips, req.SendMessage)
+ }
+ membersID, err := g.db.FindGroupMemberUserID(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ cbReq := &callbackstruct.CallbackDisMissGroupReq{
+ GroupID: req.GroupID,
+ OwnerID: owner.UserID,
+ MembersID: membersID,
+ GroupType: fmt.Sprintf("%d", group.GroupType), // string(int32) would yield a rune, not a decimal string
+ }
+
+ g.webhookAfterDismissGroup(ctx, &g.config.WebhooksConfig.AfterDismissGroup, cbReq)
+
+ return &pbgroup.DismissGroupResp{}, nil
+}
+
+func (g *groupServer) MuteGroupMember(ctx context.Context, req *pbgroup.MuteGroupMemberReq) (*pbgroup.MuteGroupMemberResp, error) {
+ member, err := g.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
+ }
+ if !authverify.IsAdmin(ctx) {
+ opMember, err := g.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ if err != nil {
+ return nil, err
+ }
+ switch member.RoleLevel {
+ case constant.GroupOwner:
+ return nil, errs.ErrNoPermission.WrapMsg("cannot mute the group owner")
+ case constant.GroupAdmin:
+ if opMember.RoleLevel != constant.GroupOwner {
+ return nil, errs.ErrNoPermission.WrapMsg("only the group owner can mute a group admin")
+ }
+ case constant.GroupOrdinaryUsers:
+ if !(opMember.RoleLevel == constant.GroupAdmin || opMember.RoleLevel == constant.GroupOwner) {
+ return nil, errs.ErrNoPermission.WrapMsg("only the group owner or an admin can mute an ordinary member")
+ }
+ }
+ }
+ data := UpdateGroupMemberMutedTimeMap(time.Now().Add(time.Second * time.Duration(req.MutedSeconds)))
+ if err := g.db.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
+ return nil, err
+ }
+ g.notification.GroupMemberMutedNotification(ctx, req.GroupID, req.UserID, req.MutedSeconds)
+ return &pbgroup.MuteGroupMemberResp{}, nil
+}
+
+func (g *groupServer) CancelMuteGroupMember(ctx context.Context, req *pbgroup.CancelMuteGroupMemberReq) (*pbgroup.CancelMuteGroupMemberResp, error) {
+ member, err := g.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := g.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
+ }
+
+ if !authverify.IsAdmin(ctx) {
+ opMember, err := g.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ if err != nil {
+ return nil, err
+ }
+
+ switch member.RoleLevel {
+ case constant.GroupOwner:
+ return nil, errs.ErrNoPermission.WrapMsg("cannot unmute the group owner")
+ case constant.GroupAdmin:
+ if opMember.RoleLevel != constant.GroupOwner {
+ return nil, errs.ErrNoPermission.WrapMsg("only the group owner can unmute a group admin")
+ }
+ case constant.GroupOrdinaryUsers:
+ if !(opMember.RoleLevel == constant.GroupAdmin || opMember.RoleLevel == constant.GroupOwner) {
+ return nil, errs.ErrNoPermission.WrapMsg("only the group owner or an admin can unmute an ordinary member")
+ }
+ }
+ }
+
+ data := UpdateGroupMemberMutedTimeMap(time.Unix(0, 0))
+ if err := g.db.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
+ return nil, err
+ }
+
+ g.notification.GroupMemberCancelMutedNotification(ctx, req.GroupID, req.UserID)
+
+ return &pbgroup.CancelMuteGroupMemberResp{}, nil
+}
+
+func (g *groupServer) MuteGroup(ctx context.Context, req *pbgroup.MuteGroupReq) (*pbgroup.MuteGroupResp, error) {
+ if err := g.CheckGroupAdmin(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ if err := g.db.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupStatusMuted)); err != nil {
+ return nil, err
+ }
+ g.notification.GroupMutedNotification(ctx, req.GroupID)
+ return &pbgroup.MuteGroupResp{}, nil
+}
+
+func (g *groupServer) CancelMuteGroup(ctx context.Context, req *pbgroup.CancelMuteGroupReq) (*pbgroup.CancelMuteGroupResp, error) {
+ if err := g.CheckGroupAdmin(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ if err := g.db.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupOk)); err != nil {
+ return nil, err
+ }
+ g.notification.GroupCancelMutedNotification(ctx, req.GroupID)
+ return &pbgroup.CancelMuteGroupResp{}, nil
+}
+
+func (g *groupServer) SetGroupMemberInfo(ctx context.Context, req *pbgroup.SetGroupMemberInfoReq) (*pbgroup.SetGroupMemberInfoResp, error) {
+ if len(req.Members) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("members empty")
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ if opUserID == "" {
+ return nil, errs.ErrNoPermission.WrapMsg("no op user id")
+ }
+ isAppManagerUid := authverify.IsAdmin(ctx)
+ groupMembers := make(map[string][]*pbgroup.SetGroupMemberInfo)
+ for i, member := range req.Members {
+ if member.RoleLevel != nil {
+ switch member.RoleLevel.Value {
+ case constant.GroupOwner:
+ return nil, errs.ErrNoPermission.WrapMsg("cannot set a member's role level to group owner")
+ case constant.GroupAdmin, constant.GroupOrdinaryUsers:
+ default:
+ return nil, errs.ErrArgs.WrapMsg("invalid role level")
+ }
+ }
+ groupMembers[member.GroupID] = append(groupMembers[member.GroupID], req.Members[i])
+ }
+ for groupID, members := range groupMembers {
+ temp := make(map[string]struct{})
+ userIDs := make([]string, 0, len(members)+1)
+ for _, member := range members {
+ if _, ok := temp[member.UserID]; ok {
+ return nil, errs.ErrArgs.WrapMsg(fmt.Sprintf("repeat group %s user %s", member.GroupID, member.UserID))
+ }
+ temp[member.UserID] = struct{}{}
+ userIDs = append(userIDs, member.UserID)
+ }
+ if _, ok := temp[opUserID]; !ok {
+ userIDs = append(userIDs, opUserID)
+ }
+ dbMembers, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ opUserIndex := -1
+ for i, member := range dbMembers {
+ if member.UserID == opUserID {
+ opUserIndex = i
+ break
+ }
+ }
+ switch len(userIDs) - len(dbMembers) {
+ case 0:
+ if !isAppManagerUid {
+ roleLevel := dbMembers[opUserIndex].RoleLevel
+ var (
+ dbSelf = &model.GroupMember{}
+ reqSelf *pbgroup.SetGroupMemberInfo
+ )
+ switch roleLevel {
+ case constant.GroupOwner:
+ for _, member := range dbMembers {
+ if member.UserID == opUserID {
+ dbSelf = member
+ break
+ }
+ }
+ case constant.GroupAdmin:
+ for _, member := range dbMembers {
+ if member.UserID == opUserID {
+ dbSelf = member
+ }
+ if member.RoleLevel == constant.GroupOwner {
+ return nil, errs.ErrNoPermission.WrapMsg("admin can not change group owner")
+ }
+ if member.RoleLevel == constant.GroupAdmin && member.UserID != opUserID {
+ return nil, errs.ErrNoPermission.WrapMsg("admin can not change other group admin")
+ }
+ }
+ case constant.GroupOrdinaryUsers:
+ for _, member := range dbMembers {
+ if member.UserID == opUserID {
+ dbSelf = member
+ }
+ if !(member.RoleLevel == constant.GroupOrdinaryUsers && member.UserID == opUserID) {
+ return nil, errs.ErrNoPermission.WrapMsg("ordinary users can not change other role level")
+ }
+ }
+ default:
+ for _, member := range dbMembers {
+ if member.UserID == opUserID {
+ dbSelf = member
+ }
+ if member.RoleLevel >= roleLevel {
+ return nil, errs.ErrNoPermission.WrapMsg("can not change higher role level")
+ }
+ }
+ }
+ for _, member := range req.Members {
+ if member.UserID == opUserID {
+ reqSelf = member
+ break
+ }
+ }
+ if reqSelf != nil && reqSelf.RoleLevel != nil {
+ if reqSelf.RoleLevel.GetValue() > dbSelf.RoleLevel {
+ return nil, errs.ErrNoPermission.WrapMsg("can not improve role level by self")
+ }
+ if roleLevel == constant.GroupOwner {
+ return nil, errs.ErrArgs.WrapMsg("group owner can not change own role level") // Prevent the absence of a group owner
+ }
+ }
+ }
+ case 1:
+ if opUserIndex >= 0 {
+ return nil, errs.ErrArgs.WrapMsg("user not in group")
+ }
+ if !isAppManagerUid {
+ return nil, errs.ErrNoPermission.WrapMsg("user not in group")
+ }
+ default:
+ return nil, errs.ErrArgs.WrapMsg("user not in group")
+ }
+ }
+
+ for i := 0; i < len(req.Members); i++ {
+ if err := g.webhookBeforeSetGroupMemberInfo(ctx, &g.config.WebhooksConfig.BeforeSetGroupMemberInfo, req.Members[i]); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+ }
+ if err := g.db.UpdateGroupMembers(ctx, datautil.Slice(req.Members, func(e *pbgroup.SetGroupMemberInfo) *common.BatchUpdateGroupMember {
+ return &common.BatchUpdateGroupMember{
+ GroupID: e.GroupID,
+ UserID: e.UserID,
+ Map: UpdateGroupMemberMap(e),
+ }
+ })); err != nil {
+ return nil, err
+ }
+ for _, member := range req.Members {
+ if member.RoleLevel != nil {
+ switch member.RoleLevel.Value {
+ case constant.GroupAdmin:
+ g.notification.GroupMemberSetToAdminNotification(ctx, member.GroupID, member.UserID)
+ case constant.GroupOrdinaryUsers:
+ g.notification.GroupMemberSetToOrdinaryUserNotification(ctx, member.GroupID, member.UserID)
+ }
+ }
+ if member.Nickname != nil || member.FaceURL != nil || member.Ex != nil {
+ g.notification.GroupMemberInfoSetNotification(ctx, member.GroupID, member.UserID)
+ }
+ }
+ for i := 0; i < len(req.Members); i++ {
+ g.webhookAfterSetGroupMemberInfo(ctx, &g.config.WebhooksConfig.AfterSetGroupMemberInfo, req.Members[i])
+ }
+
+ return &pbgroup.SetGroupMemberInfoResp{}, nil
+}
+
+func (g *groupServer) GetGroupAbstractInfo(ctx context.Context, req *pbgroup.GetGroupAbstractInfoReq) (*pbgroup.GetGroupAbstractInfoResp, error) {
+ if len(req.GroupIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("groupIDs empty")
+ }
+ if datautil.Duplicate(req.GroupIDs) {
+ return nil, errs.ErrArgs.WrapMsg("groupIDs duplicate")
+ }
+ for _, groupID := range req.GroupIDs {
+ if err := g.checkAdminOrInGroup(ctx, groupID); err != nil {
+ return nil, err
+ }
+ }
+ groups, err := g.db.FindGroup(ctx, req.GroupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if ids := datautil.Single(req.GroupIDs, datautil.Slice(groups, func(group *model.Group) string {
+ return group.GroupID
+ })); len(ids) > 0 {
+ return nil, servererrs.ErrGroupIDNotFound.WrapMsg("not found group " + strings.Join(ids, ","))
+ }
+ groupUserMap, err := g.db.MapGroupMemberUserID(ctx, req.GroupIDs)
+ if err != nil {
+ return nil, err
+ }
+ if ids := datautil.Single(req.GroupIDs, datautil.Keys(groupUserMap)); len(ids) > 0 {
+ return nil, servererrs.ErrGroupIDNotFound.WrapMsg(fmt.Sprintf("group %s not found member", strings.Join(ids, ",")))
+ }
+ return &pbgroup.GetGroupAbstractInfoResp{
+ GroupAbstractInfos: datautil.Slice(groups, func(group *model.Group) *pbgroup.GroupAbstractInfo {
+ users := groupUserMap[group.GroupID]
+ return convert.Db2PbGroupAbstractInfo(group.GroupID, users.MemberNum, users.Hash)
+ }),
+ }, nil
+}
+
+func (g *groupServer) GetUserInGroupMembers(ctx context.Context, req *pbgroup.GetUserInGroupMembersReq) (*pbgroup.GetUserInGroupMembersResp, error) {
+ if len(req.GroupIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("groupIDs empty")
+ }
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ members, err := g.db.FindGroupMemberUser(ctx, req.GroupIDs, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetUserInGroupMembersResp{
+ Members: datautil.Slice(members, func(e *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return convert.Db2PbGroupMember(e)
+ }),
+ }, nil
+}
+
+func (g *groupServer) GetGroupMemberUserIDs(ctx context.Context, req *pbgroup.GetGroupMemberUserIDsReq) (*pbgroup.GetGroupMemberUserIDsResp, error) {
+ userIDs, err := g.db.FindGroupMemberUserID(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if err := authverify.CheckAccessIn(ctx, userIDs...); err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupMemberUserIDsResp{
+ UserIDs: userIDs,
+ }, nil
+}
+
+func (g *groupServer) GetGroupMemberRoleLevel(ctx context.Context, req *pbgroup.GetGroupMemberRoleLevelReq) (*pbgroup.GetGroupMemberRoleLevelResp, error) {
+ if len(req.RoleLevels) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("RoleLevels empty")
+ }
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ members, err := g.db.FindGroupMemberRoleLevels(ctx, req.GroupID, req.RoleLevels)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ return &pbgroup.GetGroupMemberRoleLevelResp{
+ Members: datautil.Slice(members, func(e *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ return convert.Db2PbGroupMember(e)
+ }),
+ }, nil
+}
+
+func (g *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *pbgroup.GetGroupUsersReqApplicationListReq) (*pbgroup.GetGroupUsersReqApplicationListResp, error) {
+ if err := g.CheckGroupAdmin(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ requests, err := g.db.FindGroupRequests(ctx, req.GroupID, req.UserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(requests) == 0 {
+ return &pbgroup.GetGroupUsersReqApplicationListResp{}, nil
+ }
+
+ groupIDs := datautil.Distinct(datautil.Slice(requests, func(e *model.GroupRequest) string {
+ return e.GroupID
+ }))
+
+ groups, err := g.db.FindGroup(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ groupMap := datautil.SliceToMap(groups, func(e *model.Group) string {
+ return e.GroupID
+ })
+
+ if ids := datautil.Single(groupIDs, datautil.Keys(groupMap)); len(ids) > 0 {
+ return nil, servererrs.ErrGroupIDNotFound.WrapMsg(strings.Join(ids, ","))
+ }
+
+ userMap, err := g.userClient.GetUsersInfoMap(ctx, req.UserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ owners, err := g.db.FindGroupsOwner(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := g.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
+
+ ownerMap := datautil.SliceToMap(owners, func(e *model.GroupMember) string {
+ return e.GroupID
+ })
+
+ groupMemberNum, err := g.db.MapGroupMemberNum(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ return &pbgroup.GetGroupUsersReqApplicationListResp{
+ Total: int64(len(requests)),
+ GroupRequests: datautil.Slice(requests, func(e *model.GroupRequest) *sdkws.GroupRequest {
+ var ownerUserID string
+ if owner, ok := ownerMap[e.GroupID]; ok {
+ ownerUserID = owner.UserID
+ }
+
+ var userInfo *sdkws.UserInfo
+ if user, ok := userMap[e.UserID]; ok {
+ userInfo = user
+ }
+
+ return convert.Db2PbGroupRequest(e, userInfo, convert.Db2PbGroupInfo(groupMap[e.GroupID], ownerUserID, groupMemberNum[e.GroupID]))
+ }),
+ }, nil
+}
+
+func (g *groupServer) GetSpecifiedUserGroupRequestInfo(ctx context.Context, req *pbgroup.GetSpecifiedUserGroupRequestInfoReq) (*pbgroup.GetSpecifiedUserGroupRequestInfoResp, error) {
+ opUserID := mcontext.GetOpUserID(ctx)
+
+ owners, err := g.db.FindGroupsOwner(ctx, []string{req.GroupID})
+ if err != nil {
+ return nil, err
+ }
+ if len(owners) == 0 {
+ return nil, errs.ErrInternalServer.WrapMsg("group owner not found")
+ }
+
+ if req.UserID != opUserID {
+ adminIDs, err := g.db.GetGroupRoleLevelMemberIDs(ctx, req.GroupID, constant.GroupAdmin)
+ if err != nil {
+ return nil, err
+ }
+
+ adminIDs = append(adminIDs, owners[0].UserID)
+ adminIDs = append(adminIDs, g.adminUserIDs...)
+
+ if !datautil.Contain(opUserID, adminIDs...) {
+ return nil, errs.ErrNoPermission.WrapMsg("opUser no permission")
+ }
+ }
+
+ requests, err := g.db.FindGroupRequests(ctx, req.GroupID, []string{req.UserID})
+ if err != nil {
+ return nil, err
+ }
+
+ if len(requests) == 0 {
+ return &pbgroup.GetSpecifiedUserGroupRequestInfoResp{}, nil
+ }
+
+ groups, err := g.db.FindGroup(ctx, []string{req.GroupID})
+ if err != nil {
+ return nil, err
+ }
+
+ userInfos, err := g.userClient.GetUsersInfo(ctx, []string{req.UserID})
+ if err != nil {
+ return nil, err
+ }
+
+ groupMemberNum, err := g.db.MapGroupMemberNum(ctx, []string{req.GroupID})
+ if err != nil {
+ return nil, err
+ }
+
+ resp := &pbgroup.GetSpecifiedUserGroupRequestInfoResp{
+ GroupRequests: make([]*sdkws.GroupRequest, 0, len(requests)),
+ }
+
+ for _, request := range requests {
+ resp.GroupRequests = append(resp.GroupRequests, convert.Db2PbGroupRequest(request, userInfos[0], convert.Db2PbGroupInfo(groups[0], owners[0].UserID, groupMemberNum[groups[0].GroupID])))
+ }
+
+ resp.Total = uint32(len(requests))
+
+ return resp, nil
+}
diff --git a/internal/rpc/group/notification.go b/internal/rpc/group/notification.go
new file mode 100644
index 0000000..441b3ac
--- /dev/null
+++ b/internal/rpc/group/notification.go
@@ -0,0 +1,930 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "github.com/google/uuid"
+
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/versionctx"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/common_user"
+ "git.imall.cloud/openim/protocol/constant"
+ pbgroup "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/stringutil"
+)
+
+// GroupApplicationReceiver defines the receiver types for group application notifications.
+const (
+ applicantReceiver = iota
+ adminReceiver
+)
+
+func NewNotificationSender(db controller.GroupDatabase, config *Config, userClient *rpcli.UserClient, msgClient *rpcli.MsgClient, conversationClient *rpcli.ConversationClient) *NotificationSender {
+ return &NotificationSender{
+ NotificationSender: notification.NewNotificationSender(&config.NotificationConfig,
+ notification.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+ return msgClient.SendMsg(ctx, req)
+ }),
+ notification.WithUserRpcClient(userClient.GetUserInfo),
+ ),
+ getUsersInfo: func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error) {
+ users, err := userClient.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.Slice(users, func(e *sdkws.UserInfo) common_user.CommonUser { return e }), nil
+ },
+ db: db,
+ config: config,
+ msgClient: msgClient,
+ conversationClient: conversationClient,
+ }
+}
+
+type NotificationSender struct {
+ *notification.NotificationSender
+ getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
+ db controller.GroupDatabase
+ config *Config
+ msgClient *rpcli.MsgClient
+ conversationClient *rpcli.ConversationClient
+}
+
+func (g *NotificationSender) PopulateGroupMember(ctx context.Context, members ...*model.GroupMember) error {
+ if len(members) == 0 {
+ return nil
+ }
+
+ // Collect the user IDs whose info needs to be populated
+ userIDsMap := make(map[string]struct{})
+ for _, member := range members {
+ userIDsMap[member.UserID] = struct{}{}
+ }
+
+ // Fetch user info for all collected IDs
+ users, err := g.getUsersInfo(ctx, datautil.Keys(userIDsMap))
+ if err != nil {
+ return err
+ }
+
+ // Build a map from user ID to user info
+ userMap := make(map[string]common_user.CommonUser)
+ for i, user := range users {
+ userMap[user.GetUserID()] = users[i]
+ }
+
+ // Populate each group member's info
+ for i, member := range members {
+ user, ok := userMap[member.UserID]
+ if !ok {
+ continue
+ }
+
+ // Fill in nickname and face URL when missing
+ if member.Nickname == "" {
+ members[i].Nickname = user.GetNickname()
+ }
+ if member.FaceURL == "" {
+ members[i].FaceURL = user.GetFaceURL()
+ }
+
+ // Merge UserType and UserFlag into the Ex field,
+ // parsing the existing Ex JSON first (if any)
+ var exData map[string]interface{}
+ if members[i].Ex != "" {
+ _ = jsonutil.JsonUnmarshal([]byte(members[i].Ex), &exData)
+ }
+ if exData == nil {
+ exData = make(map[string]interface{})
+ }
+
+ // Add userType and userFlag to the Ex data
+ exData["userType"] = user.GetUserType()
+ exData["userFlag"] = user.GetUserFlag()
+
+ exJSON, _ := jsonutil.JsonMarshal(exData)
+ members[i].Ex = string(exJSON)
+ }
+ return nil
+}
+
+func (g *NotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
+ users, err := g.getUsersInfo(ctx, []string{userID})
+ if err != nil {
+ return nil, err
+ }
+ if len(users) == 0 {
+ return nil, servererrs.ErrUserIDNotFound.WrapMsg(fmt.Sprintf("user %s not found", userID))
+ }
+ return &sdkws.PublicUserInfo{
+ UserID: users[0].GetUserID(),
+ Nickname: users[0].GetNickname(),
+ FaceURL: users[0].GetFaceURL(),
+ Ex: users[0].GetEx(),
+ UserType: users[0].GetUserType(),
+ }, nil
+}
+
+func (g *NotificationSender) getGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
+ gm, err := g.db.TakeGroup(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ num, err := g.db.FindGroupMemberNum(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ ownerUserIDs, err := g.db.GetGroupRoleLevelMemberIDs(ctx, groupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ var ownerUserID string
+ if len(ownerUserIDs) > 0 {
+ ownerUserID = ownerUserIDs[0]
+ }
+
+ return convert.Db2PbGroupInfo(gm, ownerUserID, num), nil
+}
+
+func (g *NotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ members, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "getGroupMembers", "members", members)
+ res := make([]*sdkws.GroupMemberFullInfo, 0, len(members))
+ for _, member := range members {
+ res = append(res, g.groupMemberDB2PB(member, 0))
+ }
+ return res, nil
+}
+
+func (g *NotificationSender) getGroupMemberMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
+ members, err := g.getGroupMembers(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ m := make(map[string]*sdkws.GroupMemberFullInfo)
+ for i, member := range members {
+ m[member.UserID] = members[i]
+ }
+ return m, nil
+}
+
+func (g *NotificationSender) getGroupMember(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
+ members, err := g.getGroupMembers(ctx, groupID, []string{userID})
+ if err != nil {
+ return nil, err
+ }
+ if len(members) == 0 {
+ return nil, errs.ErrInternalServer.WrapMsg(fmt.Sprintf("group %s member %s not found", groupID, userID))
+ }
+ return members[0], nil
+}
+
+func (g *NotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
+ members, err := g.db.FindGroupMemberRoleLevels(ctx, groupID, []int32{constant.GroupOwner, constant.GroupAdmin})
+ if err != nil {
+ return nil, err
+ }
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ fn := func(e *model.GroupMember) string { return e.UserID }
+ return datautil.Slice(members, fn), nil
+}
+
+func (g *NotificationSender) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
+ return &sdkws.GroupMemberFullInfo{
+ GroupID: member.GroupID,
+ UserID: member.UserID,
+ RoleLevel: member.RoleLevel,
+ JoinTime: member.JoinTime.UnixMilli(),
+ Nickname: member.Nickname,
+ FaceURL: member.FaceURL,
+ AppMangerLevel: appMangerLevel,
+ JoinSource: member.JoinSource,
+ OperatorUserID: member.OperatorUserID,
+ Ex: member.Ex,
+ MuteEndTime: member.MuteEndTime.UnixMilli(),
+ InviterUserID: member.InviterUserID,
+ }
+}
+
+/* func (g *NotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
+ users, err := g.getUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ result := make(map[string]*sdkws.UserInfo)
+ for _, user := range users {
+ result[user.GetUserID()] = user.(*sdkws.UserInfo)
+ }
+ return result, nil
+} */
+
+func (g *NotificationSender) fillOpUser(ctx context.Context, targetUser **sdkws.GroupMemberFullInfo, groupID string) (err error) {
+ return g.fillUserByUserID(ctx, mcontext.GetOpUserID(ctx), targetUser, groupID)
+}
+
+func (g *NotificationSender) fillUserByUserID(ctx context.Context, userID string, targetUser **sdkws.GroupMemberFullInfo, groupID string) error {
+ if targetUser == nil {
+ return errs.ErrInternalServer.WrapMsg("**sdkws.GroupMemberFullInfo is nil")
+ }
+ if groupID != "" {
+ if authverify.CheckUserIsAdmin(ctx, userID) {
+ *targetUser = &sdkws.GroupMemberFullInfo{
+ GroupID: groupID,
+ UserID: userID,
+ RoleLevel: constant.GroupAdmin,
+ AppMangerLevel: constant.AppAdmin,
+ }
+ } else {
+ member, err := g.db.TakeGroupMember(ctx, groupID, userID)
+ if err == nil {
+ *targetUser = g.groupMemberDB2PB(member, 0)
+ } else if !(errors.Is(err, mongo.ErrNoDocuments) || errs.ErrRecordNotFound.Is(err)) {
+ return err
+ }
+ }
+ }
+ user, err := g.getUser(ctx, userID)
+ if err != nil {
+ return err
+ }
+ if *targetUser == nil {
+ *targetUser = &sdkws.GroupMemberFullInfo{
+ GroupID: groupID,
+ UserID: userID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ OperatorUserID: userID,
+ }
+ } else {
+ if (*targetUser).Nickname == "" {
+ (*targetUser).Nickname = user.Nickname
+ }
+ if (*targetUser).FaceURL == "" {
+ (*targetUser).FaceURL = user.FaceURL
+ }
+ }
+ return nil
+}
+
+func (g *NotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
+ versions := versionctx.GetVersionLog(ctx).Get()
+ for i := len(versions) - 1; i >= 0; i-- {
+ coll := versions[i]
+ if coll.Name == collName && coll.Doc.DID == id {
+ *version = uint64(coll.Doc.Version)
+ *versionID = coll.Doc.ID.Hex()
+ return
+ }
+ }
+}
+
+func (g *NotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
+ versions := versionctx.GetVersionLog(ctx).Get()
+ for _, coll := range versions {
+ if coll.Name == collName && coll.Doc.DID == id {
+ *version = uint64(coll.Doc.Version)
+ *versionID = coll.Doc.ID.Hex()
+ for _, elem := range coll.Doc.Logs {
+ if elem.EID == model.VersionSortChangeID {
+ *sortVersion = uint64(elem.Version)
+ }
+ }
+ }
+ }
+}
+
+func (g *NotificationSender) GroupCreatedNotification(ctx context.Context, tips *sdkws.GroupCreatedTips, SendMessage *bool) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupCreatedNotification, tips, notification.WithSendMessage(SendMessage))
+}
+
+func (g *NotificationSender) GroupInfoSetNotification(ctx context.Context, tips *sdkws.GroupInfoSetTips) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNotification, tips, notification.WithRpcGetUserName())
+}
+
+func (g *NotificationSender) GroupInfoSetNameNotification(ctx context.Context, tips *sdkws.GroupInfoSetNameTips) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNameNotification, tips)
+}
+
+func (g *NotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Context, tips *sdkws.GroupInfoSetAnnouncementTips, sendMessage *bool) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetAnnouncementNotification, tips, notification.WithRpcGetUserName(), notification.WithSendMessage(sendMessage))
+}
+
+func (g *NotificationSender) uuid() string {
+ return uuid.New().String()
+}
+
+func (g *NotificationSender) getGroupRequest(ctx context.Context, groupID string, userID string) (*sdkws.GroupRequest, error) {
+ request, err := g.db.TakeGroupRequest(ctx, groupID, userID)
+ if err != nil {
+ return nil, err
+ }
+ users, err := g.getUsersInfo(ctx, []string{userID})
+ if err != nil {
+ return nil, err
+ }
+ if len(users) == 0 {
+ return nil, servererrs.ErrUserIDNotFound.WrapMsg(fmt.Sprintf("user %s not found", userID))
+ }
+ info, ok := users[0].(*sdkws.UserInfo)
+ if !ok {
+ info = &sdkws.UserInfo{
+ UserID: users[0].GetUserID(),
+ Nickname: users[0].GetNickname(),
+ FaceURL: users[0].GetFaceURL(),
+ Ex: users[0].GetEx(),
+ }
+ }
+ return convert.Db2PbGroupRequest(request, info, nil), nil
+}
+
+func (g *NotificationSender) JoinGroupApplicationNotification(ctx context.Context, req *pbgroup.JoinGroupReq, dbReq *model.GroupRequest) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ request, err := g.getGroupRequest(ctx, dbReq.GroupID, dbReq.UserID)
+ if err != nil {
+ log.ZError(ctx, "JoinGroupApplicationNotification getGroupRequest", err, "dbReq", dbReq)
+ return
+ }
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+ var user *sdkws.PublicUserInfo
+ user, err = g.getUser(ctx, req.InviterUserID)
+ if err != nil {
+ return
+ }
+ userIDs, err := g.getGroupOwnerAndAdminUserID(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+ userIDs = append(userIDs, req.InviterUserID, mcontext.GetOpUserID(ctx))
+ tips := &sdkws.JoinGroupApplicationTips{
+ Group: group,
+ Applicant: user,
+ ReqMsg: req.ReqMessage,
+ Uuid: g.uuid(),
+ Request: request,
+ }
+ for _, userID := range datautil.Distinct(userIDs) {
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), userID, constant.JoinGroupApplicationNotification, tips)
+ }
+}
+
+func (g *NotificationSender) MemberQuitNotification(ctx context.Context, member *sdkws.GroupMemberFullInfo) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, member.GroupID)
+ if err != nil {
+ return
+ }
+ tips := &sdkws.MemberQuitTips{Group: group, QuitUser: member}
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, member.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), member.GroupID, constant.MemberQuitNotification, tips)
+}
+
+func (g *NotificationSender) GroupApplicationAcceptedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ request, err := g.getGroupRequest(ctx, req.GroupID, req.FromUserID)
+ if err != nil {
+ log.ZError(ctx, "GroupApplicationAcceptedNotification getGroupRequest", err, "req", req)
+ return
+ }
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+ var userIDs []string
+ userIDs, err = g.getGroupOwnerAndAdminUserID(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+
+ var opUser *sdkws.GroupMemberFullInfo
+ if err = g.fillOpUser(ctx, &opUser, group.GroupID); err != nil {
+ return
+ }
+ tips := &sdkws.GroupApplicationAcceptedTips{
+ Group: group,
+ OpUser: opUser,
+ HandleMsg: req.HandledMsg,
+ Uuid: g.uuid(),
+ Request: request,
+ }
+ for _, userID := range append(userIDs, req.FromUserID) {
+ if userID == req.FromUserID {
+ tips.ReceiverAs = applicantReceiver
+ } else {
+ tips.ReceiverAs = adminReceiver
+ }
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), userID, constant.GroupApplicationAcceptedNotification, tips)
+ }
+}
+
+func (g *NotificationSender) GroupApplicationRejectedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ request, err := g.getGroupRequest(ctx, req.GroupID, req.FromUserID)
+ if err != nil {
+		log.ZError(ctx, "GroupApplicationRejectedNotification getGroupRequest", err, "req", req)
+ return
+ }
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+ var userIDs []string
+ userIDs, err = g.getGroupOwnerAndAdminUserID(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+
+ var opUser *sdkws.GroupMemberFullInfo
+ if err = g.fillOpUser(ctx, &opUser, group.GroupID); err != nil {
+ return
+ }
+ tips := &sdkws.GroupApplicationRejectedTips{
+ Group: group,
+ OpUser: opUser,
+ HandleMsg: req.HandledMsg,
+ Uuid: g.uuid(),
+ Request: request,
+ }
+ for _, userID := range append(userIDs, req.FromUserID) {
+ if userID == req.FromUserID {
+ tips.ReceiverAs = applicantReceiver
+ } else {
+ tips.ReceiverAs = adminReceiver
+ }
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), userID, constant.GroupApplicationRejectedNotification, tips)
+ }
+}
+
+func (g *NotificationSender) GroupOwnerTransferredNotification(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, req.GroupID)
+ if err != nil {
+ return
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ var member map[string]*sdkws.GroupMemberFullInfo
+ member, err = g.getGroupMemberMap(ctx, req.GroupID, []string{opUserID, req.NewOwnerUserID, req.OldOwnerUserID})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupOwnerTransferredTips{
+ Group: group,
+ OpUser: member[opUserID],
+ NewGroupOwner: member[req.NewOwnerUserID],
+ OldGroupOwnerInfo: member[req.OldOwnerUserID],
+ }
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, req.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupOwnerTransferredNotification, tips)
+}
+
+func (g *NotificationSender) MemberKickedNotification(ctx context.Context, tips *sdkws.MemberKickedTips, SendMessage *bool) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.MemberKickedNotification, tips, notification.WithSendMessage(SendMessage))
+}
+
+func (g *NotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, SendMessage *bool, invitedOpUserID string, entrantUserID ...string) error {
+ return g.groupApplicationAgreeMemberEnterNotification(ctx, groupID, SendMessage, invitedOpUserID, entrantUserID...)
+}
+
+func (g *NotificationSender) groupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, SendMessage *bool, invitedOpUserID string, entrantUserID ...string) error {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+
+ if !g.config.RpcConfig.EnableHistoryForNewMembers {
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
+ maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, entrantUserID, maxSeq+1); err != nil {
+ return err
+ }
+ }
+ if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, entrantUserID); err != nil {
+ return err
+ }
+
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return err
+ }
+ users, err := g.getGroupMembers(ctx, groupID, entrantUserID)
+ if err != nil {
+ return err
+ }
+
+ tips := &sdkws.MemberInvitedTips{
+ Group: group,
+ InvitedUserList: users,
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ if err = g.fillUserByUserID(ctx, opUserID, &tips.OpUser, tips.Group.GroupID); err != nil {
+		return err
+ }
+ if invitedOpUserID == opUserID {
+ tips.InviterUser = tips.OpUser
+ } else {
+ if err = g.fillUserByUserID(ctx, invitedOpUserID, &tips.InviterUser, tips.Group.GroupID); err != nil {
+ return err
+ }
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.MemberInvitedNotification, tips, notification.WithSendMessage(SendMessage))
+ return nil
+}
+
+func (g *NotificationSender) MemberEnterNotification(ctx context.Context, groupID string, entrantUserID string) error {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+
+ if !g.config.RpcConfig.EnableHistoryForNewMembers {
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
+ maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, []string{entrantUserID}, maxSeq+1); err != nil {
+ return err
+ }
+ }
+ if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, []string{entrantUserID}); err != nil {
+ return err
+ }
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return err
+ }
+ user, err := g.getGroupMember(ctx, groupID, entrantUserID)
+ if err != nil {
+ return err
+ }
+
+ tips := &sdkws.MemberEnterTips{
+ Group: group,
+ EntrantUser: user,
+ OperationTime: time.Now().UnixMilli(),
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+	// Broadcast in the group: notify all group members that someone has joined
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.MemberEnterNotification, tips)
+	// Also send a system notification (single chat) to the entrant so the client can display "You have joined the group"
+ g.NotificationWithSessionType(ctx, mcontext.GetOpUserID(ctx), entrantUserID,
+ constant.MemberEnterNotification, constant.SingleChatType, tips)
+ return nil
+}
+
+func (g *NotificationSender) GroupDismissedNotification(ctx context.Context, tips *sdkws.GroupDismissedTips, SendMessage *bool) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupDismissedNotification, tips, notification.WithSendMessage(SendMessage))
+}
+
+func (g *NotificationSender) GroupMemberMutedNotification(ctx context.Context, groupID, groupMemberUserID string, mutedSeconds uint32) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ log.ZDebug(ctx, "GroupMemberMutedNotification start", "groupID", groupID, "groupMemberUserID", groupMemberUserID, "mutedSeconds", mutedSeconds, "opUserID", mcontext.GetOpUserID(ctx))
+
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ log.ZWarn(ctx, "GroupMemberMutedNotification getGroupInfo failed", err, "groupID", groupID)
+ return
+ }
+ log.ZDebug(ctx, "GroupMemberMutedNotification got group info", "groupID", groupID, "groupName", group.GroupName)
+
+ var user map[string]*sdkws.GroupMemberFullInfo
+ user, err = g.getGroupMemberMap(ctx, groupID, []string{mcontext.GetOpUserID(ctx), groupMemberUserID})
+ if err != nil {
+ log.ZWarn(ctx, "GroupMemberMutedNotification getGroupMemberMap failed", err, "groupID", groupID)
+ return
+ }
+ log.ZDebug(ctx, "GroupMemberMutedNotification got user map", "opUser", user[mcontext.GetOpUserID(ctx)], "mutedUser", user[groupMemberUserID])
+
+ tips := &sdkws.GroupMemberMutedTips{
+ Group: group, MutedSeconds: mutedSeconds,
+ OpUser: user[mcontext.GetOpUserID(ctx)], MutedUser: user[groupMemberUserID],
+ }
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ log.ZWarn(ctx, "GroupMemberMutedNotification fillOpUser failed", err)
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+	// Notify in the group chat; the push service filters delivery to only the owner, admins, and the muted member
+ log.ZInfo(ctx, "GroupMemberMutedNotification sending notification", "groupID", groupID, "recvID", group.GroupID, "contentType", constant.GroupMemberMutedNotification, "mutedUserID", groupMemberUserID, "mutedSeconds", mutedSeconds)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberMutedNotification, tips)
+ log.ZDebug(ctx, "GroupMemberMutedNotification notification sent", "groupID", groupID)
+}
+
+func (g *NotificationSender) GroupMemberCancelMutedNotification(ctx context.Context, groupID, groupMemberUserID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ log.ZDebug(ctx, "GroupMemberCancelMutedNotification start", "groupID", groupID, "groupMemberUserID", groupMemberUserID, "opUserID", mcontext.GetOpUserID(ctx))
+
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ log.ZWarn(ctx, "GroupMemberCancelMutedNotification getGroupInfo failed", err, "groupID", groupID)
+ return
+ }
+ log.ZDebug(ctx, "GroupMemberCancelMutedNotification got group info", "groupID", groupID, "groupName", group.GroupName)
+
+ var user map[string]*sdkws.GroupMemberFullInfo
+ user, err = g.getGroupMemberMap(ctx, groupID, []string{mcontext.GetOpUserID(ctx), groupMemberUserID})
+ if err != nil {
+ log.ZWarn(ctx, "GroupMemberCancelMutedNotification getGroupMemberMap failed", err, "groupID", groupID)
+ return
+ }
+ log.ZDebug(ctx, "GroupMemberCancelMutedNotification got user map", "opUser", user[mcontext.GetOpUserID(ctx)], "mutedUser", user[groupMemberUserID])
+
+ tips := &sdkws.GroupMemberCancelMutedTips{Group: group, OpUser: user[mcontext.GetOpUserID(ctx)], MutedUser: user[groupMemberUserID]}
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ log.ZWarn(ctx, "GroupMemberCancelMutedNotification fillOpUser failed", err)
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID)
+	// Notify in the group chat; the push service filters delivery to only the owner, admins, and the unmuted member
+ log.ZInfo(ctx, "GroupMemberCancelMutedNotification sending notification", "groupID", groupID, "recvID", group.GroupID, "contentType", constant.GroupMemberCancelMutedNotification, "cancelMutedUserID", groupMemberUserID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberCancelMutedNotification, tips)
+ log.ZDebug(ctx, "GroupMemberCancelMutedNotification notification sent", "groupID", groupID)
+}
+
+func (g *NotificationSender) GroupMutedNotification(ctx context.Context, groupID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return
+ }
+ var users []*sdkws.GroupMemberFullInfo
+ users, err = g.getGroupMembers(ctx, groupID, []string{mcontext.GetOpUserID(ctx)})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupMutedTips{Group: group}
+ if len(users) > 0 {
+ tips.OpUser = users[0]
+ }
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, groupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMutedNotification, tips)
+}
+
+func (g *NotificationSender) GroupCancelMutedNotification(ctx context.Context, groupID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return
+ }
+ var users []*sdkws.GroupMemberFullInfo
+ users, err = g.getGroupMembers(ctx, groupID, []string{mcontext.GetOpUserID(ctx)})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupCancelMutedTips{Group: group}
+ if len(users) > 0 {
+ tips.OpUser = users[0]
+ }
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, groupID)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupCancelMutedNotification, tips)
+}
+
+func (g *NotificationSender) GroupMemberInfoSetNotification(ctx context.Context, groupID, groupMemberUserID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return
+ }
+ var user map[string]*sdkws.GroupMemberFullInfo
+ user, err = g.getGroupMemberMap(ctx, groupID, []string{groupMemberUserID})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupMemberInfoSetTips{Group: group, OpUser: user[mcontext.GetOpUserID(ctx)], ChangedUser: user[groupMemberUserID]}
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setSortVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID, &tips.GroupSortVersion)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberInfoSetNotification, tips)
+}
+
+func (g *NotificationSender) GroupMemberSetToAdminNotification(ctx context.Context, groupID, groupMemberUserID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return
+ }
+ user, err := g.getGroupMemberMap(ctx, groupID, []string{mcontext.GetOpUserID(ctx), groupMemberUserID})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupMemberInfoSetTips{Group: group, OpUser: user[mcontext.GetOpUserID(ctx)], ChangedUser: user[groupMemberUserID]}
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setSortVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID, &tips.GroupSortVersion)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberSetToAdminNotification, tips)
+}
+
+func (g *NotificationSender) GroupMemberSetToOrdinaryUserNotification(ctx context.Context, groupID, groupMemberUserID string) {
+ var err error
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, stringutil.GetFuncName(1)+" failed", err)
+ }
+ }()
+ var group *sdkws.GroupInfo
+ group, err = g.getGroupInfo(ctx, groupID)
+ if err != nil {
+ return
+ }
+ var user map[string]*sdkws.GroupMemberFullInfo
+ user, err = g.getGroupMemberMap(ctx, groupID, []string{mcontext.GetOpUserID(ctx), groupMemberUserID})
+ if err != nil {
+ return
+ }
+ tips := &sdkws.GroupMemberInfoSetTips{Group: group, OpUser: user[mcontext.GetOpUserID(ctx)], ChangedUser: user[groupMemberUserID]}
+ if err = g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
+ return
+ }
+ g.setSortVersion(ctx, &tips.GroupMemberVersion, &tips.GroupMemberVersionID, database.GroupMemberVersionName, tips.Group.GroupID, &tips.GroupSortVersion)
+ g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberSetToOrdinaryUserNotification, tips)
+}
diff --git a/internal/rpc/group/statistics.go b/internal/rpc/group/statistics.go
new file mode 100644
index 0000000..25e9683
--- /dev/null
+++ b/internal/rpc/group/statistics.go
@@ -0,0 +1,47 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package group
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/protocol/group"
+ "github.com/openimsdk/tools/errs"
+)
+
+func (g *groupServer) GroupCreateCount(ctx context.Context, req *group.GroupCreateCountReq) (*group.GroupCreateCountResp, error) {
+ if req.Start > req.End {
+		return nil, errs.ErrArgs.WrapMsg("start > end", "start", req.Start, "end", req.End)
+ }
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ total, err := g.db.CountTotal(ctx, nil)
+ if err != nil {
+ return nil, err
+ }
+ start := time.UnixMilli(req.Start)
+ before, err := g.db.CountTotal(ctx, &start)
+ if err != nil {
+ return nil, err
+ }
+ count, err := g.db.CountRangeEverydayTotal(ctx, start, time.UnixMilli(req.End))
+ if err != nil {
+ return nil, err
+ }
+ return &group.GroupCreateCountResp{Total: total, Before: before, Count: count}, nil
+}
diff --git a/internal/rpc/group/sync.go b/internal/rpc/group/sync.go
new file mode 100644
index 0000000..d273ce7
--- /dev/null
+++ b/internal/rpc/group/sync.go
@@ -0,0 +1,197 @@
+package group
+
+import (
+ "context"
+ "errors"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/incrversion"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/hashutil"
+ "git.imall.cloud/openim/protocol/constant"
+ pbgroup "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+)
+
+const versionSyncLimit = 500
+
+func (g *groupServer) GetFullGroupMemberUserIDs(ctx context.Context, req *pbgroup.GetFullGroupMemberUserIDsReq) (*pbgroup.GetFullGroupMemberUserIDsResp, error) {
+ userIDs, err := g.db.FindGroupMemberUserID(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if err := authverify.CheckAccessIn(ctx, userIDs...); err != nil {
+ return nil, err
+ }
+ vl, err := g.db.FindMaxGroupMemberVersionCache(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ idHash := hashutil.IdHash(userIDs)
+ if req.IdHash == idHash {
+ userIDs = nil
+ }
+ return &pbgroup.GetFullGroupMemberUserIDsResp{
+ Version: idHash,
+ VersionID: vl.ID.Hex(),
+ Equal: req.IdHash == idHash,
+ UserIDs: userIDs,
+ }, nil
+}
+
+func (g *groupServer) GetFullJoinGroupIDs(ctx context.Context, req *pbgroup.GetFullJoinGroupIDsReq) (*pbgroup.GetFullJoinGroupIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ vl, err := g.db.FindMaxJoinGroupVersionCache(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ groupIDs, err := g.db.FindJoinGroupID(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ idHash := hashutil.IdHash(groupIDs)
+ if req.IdHash == idHash {
+ groupIDs = nil
+ }
+ return &pbgroup.GetFullJoinGroupIDsResp{
+ Version: idHash,
+ VersionID: vl.ID.Hex(),
+ Equal: req.IdHash == idHash,
+ GroupIDs: groupIDs,
+ }, nil
+}
+
+func (g *groupServer) GetIncrementalGroupMember(ctx context.Context, req *pbgroup.GetIncrementalGroupMemberReq) (*pbgroup.GetIncrementalGroupMemberResp, error) {
+ if err := g.checkAdminOrInGroup(ctx, req.GroupID); err != nil {
+ return nil, err
+ }
+ group, err := g.db.TakeGroup(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ if group.Status == constant.GroupStatusDismissed {
+ return nil, servererrs.ErrDismissedAlready.Wrap()
+ }
+ var (
+ hasGroupUpdate bool
+ sortVersion uint64
+ )
+ opt := incrversion.Option[*sdkws.GroupMemberFullInfo, pbgroup.GetIncrementalGroupMemberResp]{
+ Ctx: ctx,
+ VersionKey: req.GroupID,
+ VersionID: req.VersionID,
+ VersionNumber: req.Version,
+ Version: func(ctx context.Context, groupID string, version uint, limit int) (*model.VersionLog, error) {
+ vl, err := g.db.FindMemberIncrVersion(ctx, groupID, version, limit)
+ if err != nil {
+ return nil, err
+ }
+ logs := make([]model.VersionLogElem, 0, len(vl.Logs))
+ for i, log := range vl.Logs {
+ switch log.EID {
+ case model.VersionGroupChangeID:
+ vl.LogLen--
+ hasGroupUpdate = true
+ case model.VersionSortChangeID:
+ vl.LogLen--
+ sortVersion = uint64(log.Version)
+ default:
+ logs = append(logs, vl.Logs[i])
+ }
+ }
+ vl.Logs = logs
+ if vl.LogLen > 0 {
+ hasGroupUpdate = true
+ }
+ return vl, nil
+ },
+ CacheMaxVersion: g.db.FindMaxGroupMemberVersionCache,
+ Find: func(ctx context.Context, ids []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ return g.getGroupMembersInfo(ctx, req.GroupID, ids)
+ },
+ Resp: func(version *model.VersionLog, delIDs []string, insertList, updateList []*sdkws.GroupMemberFullInfo, full bool) *pbgroup.GetIncrementalGroupMemberResp {
+ return &pbgroup.GetIncrementalGroupMemberResp{
+ VersionID: version.ID.Hex(),
+ Version: uint64(version.Version),
+ Full: full,
+ Delete: delIDs,
+ Insert: insertList,
+ Update: updateList,
+ SortVersion: sortVersion,
+ }
+ },
+ }
+ resp, err := opt.Build()
+ if err != nil {
+ return nil, err
+ }
+ if resp.Full || hasGroupUpdate {
+ count, err := g.db.FindGroupMemberNum(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ owner, err := g.db.TakeGroupOwner(ctx, group.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ resp.Group = g.groupDB2PB(group, owner.UserID, count)
+ }
+ return resp, nil
+}
+
+func (g *groupServer) GetIncrementalJoinGroup(ctx context.Context, req *pbgroup.GetIncrementalJoinGroupReq) (*pbgroup.GetIncrementalJoinGroupResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ opt := incrversion.Option[*sdkws.GroupInfo, pbgroup.GetIncrementalJoinGroupResp]{
+ Ctx: ctx,
+ VersionKey: req.UserID,
+ VersionID: req.VersionID,
+ VersionNumber: req.Version,
+ Version: g.db.FindJoinIncrVersion,
+ CacheMaxVersion: g.db.FindMaxJoinGroupVersionCache,
+ Find: g.getGroupsInfo,
+ Resp: func(version *model.VersionLog, delIDs []string, insertList, updateList []*sdkws.GroupInfo, full bool) *pbgroup.GetIncrementalJoinGroupResp {
+ return &pbgroup.GetIncrementalJoinGroupResp{
+ VersionID: version.ID.Hex(),
+ Version: uint64(version.Version),
+ Full: full,
+ Delete: delIDs,
+ Insert: insertList,
+ Update: updateList,
+ }
+ },
+ }
+ return opt.Build()
+}
+
+func (g *groupServer) BatchGetIncrementalGroupMember(ctx context.Context, req *pbgroup.BatchGetIncrementalGroupMemberReq) (*pbgroup.BatchGetIncrementalGroupMemberResp, error) {
+ var num int
+ resp := make(map[string]*pbgroup.GetIncrementalGroupMemberResp)
+
+ for _, memberReq := range req.ReqList {
+ if _, ok := resp[memberReq.GroupID]; ok {
+ continue
+ }
+ memberResp, err := g.GetIncrementalGroupMember(ctx, memberReq)
+ if err != nil {
+ if errors.Is(err, servererrs.ErrDismissedAlready) {
+ log.ZWarn(ctx, "Failed to get incremental group member", err, "groupID", memberReq.GroupID, "request", memberReq)
+ continue
+ }
+ return nil, err
+ }
+
+ resp[memberReq.GroupID] = memberResp
+ num += len(memberResp.Insert) + len(memberResp.Update) + len(memberResp.Delete)
+ if num >= versionSyncLimit {
+ break
+ }
+ }
+
+ return &pbgroup.BatchGetIncrementalGroupMemberResp{RespList: resp}, nil
+}
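The loop in `BatchGetIncrementalGroupMember` above deduplicates per-group requests and stops once the accumulated change count reaches `versionSyncLimit`, deferring the remaining groups to the client's next sync round. A minimal standalone sketch of that budget pattern (the `changeSet` type and counts are illustrative, not the real protobuf messages):

```go
package main

import "fmt"

// changeSet stands in for one group's incremental response: how many
// member entries were inserted, updated, and deleted.
type changeSet struct {
	groupID       string
	ins, upd, del int
}

// collectWithBudget deduplicates requests by group ID and stops once the
// total number of synced entries reaches limit, mirroring the loop in
// BatchGetIncrementalGroupMember.
func collectWithBudget(reqs []changeSet, limit int) []string {
	seen := map[string]bool{}
	var picked []string
	num := 0
	for _, r := range reqs {
		if seen[r.groupID] {
			continue // duplicate group inside one batch request
		}
		seen[r.groupID] = true
		picked = append(picked, r.groupID)
		num += r.ins + r.upd + r.del
		if num >= limit {
			break // budget exhausted; the client syncs the rest next round
		}
	}
	return picked
}

func main() {
	reqs := []changeSet{{"g1", 50, 0, 0}, {"g1", 10, 0, 0}, {"g2", 100, 50, 0}, {"g3", 1, 0, 0}}
	fmt.Println(collectWithBudget(reqs, 200)) // g3 is cut off once g1+g2 hit the limit
}
```

Because the cap is checked after a group is added, a single oversized group can exceed the budget but is never split across rounds.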
diff --git a/internal/rpc/incrversion/batch_option.go b/internal/rpc/incrversion/batch_option.go
new file mode 100644
index 0000000..f7a88b9
--- /dev/null
+++ b/internal/rpc/incrversion/batch_option.go
@@ -0,0 +1,207 @@
+package incrversion
+
+import (
+ "context"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+)
+
+type BatchOption[A, B any] struct {
+ Ctx context.Context
+ TargetKeys []string
+ VersionIDs []string
+ VersionNumbers []uint64
+ //SyncLimit int
+ Versions func(ctx context.Context, dIds []string, versions []uint64, limits []int) (map[string]*model.VersionLog, error)
+ CacheMaxVersions func(ctx context.Context, dIds []string) (map[string]*model.VersionLog, error)
+ Find func(ctx context.Context, dId string, ids []string) (A, error)
+ Resp func(versionsMap map[string]*model.VersionLog, deleteIdsMap map[string][]string, insertListMap, updateListMap map[string]A, fullMap map[string]bool) *B
+}
+
+func (o *BatchOption[A, B]) newError(msg string) error {
+ return errs.ErrInternalServer.WrapMsg(msg)
+}
+
+func (o *BatchOption[A, B]) check() error {
+ if o.Ctx == nil {
+ return o.newError("opt ctx is nil")
+ }
+ if len(o.TargetKeys) == 0 {
+ return o.newError("targetKeys is empty")
+ }
+	if len(o.VersionIDs) != len(o.TargetKeys) || len(o.VersionNumbers) != len(o.TargetKeys) {
+		return o.newError("versionIDs and versionNumbers must be the same length as targetKeys")
+	}
+ if o.Versions == nil {
+ return o.newError("func versions is nil")
+ }
+ if o.Find == nil {
+ return o.newError("func find is nil")
+ }
+ if o.Resp == nil {
+ return o.newError("func resp is nil")
+ }
+ return nil
+}
+
+func (o *BatchOption[A, B]) validVersions() []bool {
+ valids := make([]bool, len(o.VersionIDs))
+ for i, versionID := range o.VersionIDs {
+ objID, err := primitive.ObjectIDFromHex(versionID)
+ valids[i] = (err == nil && (!objID.IsZero()) && o.VersionNumbers[i] > 0)
+ }
+ return valids
+}
+
+func (o *BatchOption[A, B]) equalIDs(objIDs []primitive.ObjectID) []bool {
+ equals := make([]bool, len(o.VersionIDs))
+ for i, versionID := range o.VersionIDs {
+ equals[i] = versionID == objIDs[i].Hex()
+ }
+ return equals
+}
+
+func (o *BatchOption[A, B]) getVersions(tags *[]int) (versions map[string]*model.VersionLog, err error) {
+ var dIDs []string
+ var versionNums []uint64
+ var limits []int
+
+ valids := o.validVersions()
+
+ if o.CacheMaxVersions == nil {
+ for i, valid := range valids {
+ if valid {
+ (*tags)[i] = tagQuery
+ dIDs = append(dIDs, o.TargetKeys[i])
+ versionNums = append(versionNums, o.VersionNumbers[i])
+ limits = append(limits, syncLimit)
+ } else {
+ (*tags)[i] = tagFull
+ dIDs = append(dIDs, o.TargetKeys[i])
+ versionNums = append(versionNums, 0)
+ limits = append(limits, 0)
+ }
+ }
+
+ versions, err = o.Versions(o.Ctx, dIDs, versionNums, limits)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return versions, nil
+
+ } else {
+ caches, err := o.CacheMaxVersions(o.Ctx, o.TargetKeys)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ objIDs := make([]primitive.ObjectID, len(o.VersionIDs))
+
+ for i, versionID := range o.VersionIDs {
+ objID, _ := primitive.ObjectIDFromHex(versionID)
+ objIDs[i] = objID
+ }
+
+ equals := o.equalIDs(objIDs)
+ for i, valid := range valids {
+ if !valid {
+ (*tags)[i] = tagFull
+ } else if !equals[i] {
+ (*tags)[i] = tagFull
+ } else if o.VersionNumbers[i] == uint64(caches[o.TargetKeys[i]].Version) {
+ (*tags)[i] = tagEqual
+ } else {
+ (*tags)[i] = tagQuery
+ dIDs = append(dIDs, o.TargetKeys[i])
+ versionNums = append(versionNums, o.VersionNumbers[i])
+ limits = append(limits, syncLimit)
+
+ delete(caches, o.TargetKeys[i])
+ }
+ }
+
+		if len(dIDs) > 0 {
+ versionMap, err := o.Versions(o.Ctx, dIDs, versionNums, limits)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ for k, v := range versionMap {
+ caches[k] = v
+ }
+ }
+
+ versions = caches
+ }
+ return versions, nil
+}
+
+func (o *BatchOption[A, B]) Build() (*B, error) {
+ if err := o.check(); err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ tags := make([]int, len(o.TargetKeys))
+ versions, err := o.getVersions(&tags)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ fullMap := make(map[string]bool)
+ for i, tag := range tags {
+ switch tag {
+ case tagQuery:
+ vLog := versions[o.TargetKeys[i]]
+ fullMap[o.TargetKeys[i]] = vLog.ID.Hex() != o.VersionIDs[i] || uint64(vLog.Version) < o.VersionNumbers[i] || len(vLog.Logs) != vLog.LogLen
+ case tagFull:
+ fullMap[o.TargetKeys[i]] = true
+ case tagEqual:
+ fullMap[o.TargetKeys[i]] = false
+ default:
+ panic(fmt.Errorf("undefined tag %d", tag))
+ }
+ }
+
+ var (
+ insertIdsMap = make(map[string][]string)
+ deleteIdsMap = make(map[string][]string)
+ updateIdsMap = make(map[string][]string)
+ )
+
+ for _, targetKey := range o.TargetKeys {
+ if !fullMap[targetKey] {
+ version := versions[targetKey]
+ insertIds, deleteIds, updateIds := version.DeleteAndChangeIDs()
+ insertIdsMap[targetKey] = insertIds
+ deleteIdsMap[targetKey] = deleteIds
+ updateIdsMap[targetKey] = updateIds
+ }
+ }
+
+ var (
+ insertListMap = make(map[string]A)
+ updateListMap = make(map[string]A)
+ )
+
+ for targetKey, insertIds := range insertIdsMap {
+ if len(insertIds) > 0 {
+ insertList, err := o.Find(o.Ctx, targetKey, insertIds)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ insertListMap[targetKey] = insertList
+ }
+ }
+
+ for targetKey, updateIds := range updateIdsMap {
+ if len(updateIds) > 0 {
+ updateList, err := o.Find(o.Ctx, targetKey, updateIds)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ updateListMap[targetKey] = updateList
+ }
+ }
+
+ return o.Resp(versions, deleteIdsMap, insertListMap, updateListMap, fullMap), nil
+}
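For keys tagged `tagQuery`, `Build` decides per key whether the returned change log can actually be applied incrementally; otherwise it forces a full resync. A sketch of that check, with field names that only mirror the parts of `model.VersionLog` the code consults:

```go
package main

import "fmt"

// versionLog mirrors the fields of model.VersionLog that Build reads when
// deciding whether a queried change log is usable (names are illustrative).
type versionLog struct {
	idHex   string // identity of the version-log document
	version uint64 // server's current version counter
	logs    int    // change entries actually returned
	logLen  int    // change entries that exist since the client's version
}

// needsFull reproduces the fullMap condition in BatchOption.Build: a full
// resync is required when the log document changed identity, the server's
// counter is behind the client's, or the change log was truncated (more
// changes exist than the sync limit returned).
func needsFull(v versionLog, clientID string, clientVer uint64) bool {
	return v.idHex != clientID || v.version < clientVer || v.logs != v.logLen
}

func main() {
	v := versionLog{idHex: "abc", version: 10, logs: 3, logLen: 3}
	fmt.Println(needsFull(v, "abc", 7)) // incremental apply is safe
	v.logLen = 250                      // log truncated by the sync limit
	fmt.Println(needsFull(v, "abc", 7)) // fall back to full resync
}
```

The `logs != logLen` clause is the subtle one: it is how a client that fell too far behind the `syncLimit` window gets pushed onto the full-sync path.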
diff --git a/internal/rpc/incrversion/option.go b/internal/rpc/incrversion/option.go
new file mode 100644
index 0000000..c908fe4
--- /dev/null
+++ b/internal/rpc/incrversion/option.go
@@ -0,0 +1,153 @@
+package incrversion
+
+import (
+ "context"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+)
+
+const syncLimit = 200
+
+const (
+ tagQuery = iota + 1
+ tagFull
+ tagEqual
+)
+
+type Option[A, B any] struct {
+ Ctx context.Context
+ VersionKey string
+ VersionID string
+ VersionNumber uint64
+ //SyncLimit int
+ CacheMaxVersion func(ctx context.Context, dId string) (*model.VersionLog, error)
+ Version func(ctx context.Context, dId string, version uint, limit int) (*model.VersionLog, error)
+ //SortID func(ctx context.Context, dId string) ([]string, error)
+ Find func(ctx context.Context, ids []string) ([]A, error)
+ Resp func(version *model.VersionLog, deleteIds []string, insertList, updateList []A, full bool) *B
+}
+
+func (o *Option[A, B]) newError(msg string) error {
+ return errs.ErrInternalServer.WrapMsg(msg)
+}
+
+func (o *Option[A, B]) check() error {
+ if o.Ctx == nil {
+ return o.newError("opt ctx is nil")
+ }
+ if o.VersionKey == "" {
+ return o.newError("versionKey is empty")
+ }
+ //if o.SyncLimit <= 0 {
+ // return o.newError("invalid synchronization quantity")
+ //}
+ if o.Version == nil {
+ return o.newError("func version is nil")
+ }
+ //if o.SortID == nil {
+ // return o.newError("func allID is nil")
+ //}
+ if o.Find == nil {
+ return o.newError("func find is nil")
+ }
+ if o.Resp == nil {
+ return o.newError("func resp is nil")
+ }
+ return nil
+}
+
+func (o *Option[A, B]) validVersion() bool {
+ objID, err := primitive.ObjectIDFromHex(o.VersionID)
+ return err == nil && (!objID.IsZero()) && o.VersionNumber > 0
+}
+
+func (o *Option[A, B]) equalID(objID primitive.ObjectID) bool {
+ return o.VersionID == objID.Hex()
+}
+
+func (o *Option[A, B]) getVersion(tag *int) (*model.VersionLog, error) {
+ if o.CacheMaxVersion == nil {
+ if o.validVersion() {
+ *tag = tagQuery
+ return o.Version(o.Ctx, o.VersionKey, uint(o.VersionNumber), syncLimit)
+ }
+ *tag = tagFull
+ return o.Version(o.Ctx, o.VersionKey, 0, 0)
+ } else {
+ cache, err := o.CacheMaxVersion(o.Ctx, o.VersionKey)
+ if err != nil {
+ return nil, err
+ }
+ if !o.validVersion() {
+ *tag = tagFull
+ return cache, nil
+ }
+ if !o.equalID(cache.ID) {
+ *tag = tagFull
+ return cache, nil
+ }
+ if o.VersionNumber == uint64(cache.Version) {
+ *tag = tagEqual
+ return cache, nil
+ }
+ *tag = tagQuery
+ return o.Version(o.Ctx, o.VersionKey, uint(o.VersionNumber), syncLimit)
+ }
+}
+
+func (o *Option[A, B]) Build() (*B, error) {
+ if err := o.check(); err != nil {
+ return nil, err
+ }
+ var tag int
+ version, err := o.getVersion(&tag)
+ if err != nil {
+ return nil, err
+ }
+ var full bool
+ switch tag {
+ case tagQuery:
+ full = version.ID.Hex() != o.VersionID || uint64(version.Version) < o.VersionNumber || len(version.Logs) != version.LogLen
+ case tagFull:
+ full = true
+ case tagEqual:
+ full = false
+ default:
+ panic(fmt.Errorf("undefined tag %d", tag))
+ }
+ var (
+ insertIds []string
+ deleteIds []string
+ updateIds []string
+ )
+ if !full {
+ insertIds, deleteIds, updateIds = version.DeleteAndChangeIDs()
+ }
+ var (
+ insertList []A
+ updateList []A
+ )
+ if len(insertIds) > 0 {
+ insertList, err = o.Find(o.Ctx, insertIds)
+ if err != nil {
+ return nil, err
+ }
+ }
+ if len(updateIds) > 0 {
+ updateList, err = o.Find(o.Ctx, updateIds)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return o.Resp(version, deleteIds, insertList, updateList, full), nil
+}
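The cached branch of `getVersion` classifies every request into one of the three tags before touching the database. A condensed sketch of that decision, with boolean inputs standing in for `validVersion()` and `equalID()`:

```go
package main

import "fmt"

const (
	tagQuery = iota + 1 // client is behind: fetch only the missing change log
	tagFull             // client state unusable: send everything
	tagEqual            // client already at the max version: nothing to send
)

// decide mirrors Option.getVersion's cached branch: validClient reports
// whether the request carried a usable version ID and number, sameID
// whether that ID matches the server's current version-log document, and
// the counters are the client's version and the cached maximum.
func decide(validClient, sameID bool, clientVer, cachedMax uint64) int {
	if !validClient || !sameID {
		return tagFull // nothing reusable on the client side
	}
	if clientVer == cachedMax {
		return tagEqual // up to date: no DB query at all
	}
	return tagQuery // behind: query the change-log entries since clientVer
}

func main() {
	fmt.Println(decide(false, false, 0, 9)) // tagFull
	fmt.Println(decide(true, true, 9, 9))   // tagEqual
	fmt.Println(decide(true, true, 4, 9))   // tagQuery
}
```

The point of the cache is that both `tagFull` and `tagEqual` are resolved from the cached max version alone; only `tagQuery` pays for a database read.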
diff --git a/internal/rpc/msg/as_read.go b/internal/rpc/msg/as_read.go
new file mode 100644
index 0000000..86ddb0e
--- /dev/null
+++ b/internal/rpc/msg/as_read.go
@@ -0,0 +1,231 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "errors"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+func (m *msgServer) GetConversationsHasReadAndMaxSeq(ctx context.Context, req *msg.GetConversationsHasReadAndMaxSeqReq) (*msg.GetConversationsHasReadAndMaxSeqResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ var conversationIDs []string
+ if len(req.ConversationIDs) == 0 {
+ var err error
+ conversationIDs, err = m.ConversationLocalCache.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ conversationIDs = req.ConversationIDs
+ }
+
+ hasReadSeqs, err := m.MsgDatabase.GetHasReadSeqs(ctx, req.UserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ conversations, err := m.ConversationLocalCache.GetConversations(ctx, req.UserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ conversationMaxSeqMap := make(map[string]int64)
+ for _, conversation := range conversations {
+ if conversation.MaxSeq != 0 {
+ conversationMaxSeqMap[conversation.ConversationID] = conversation.MaxSeq
+ }
+ }
+ maxSeqs, err := m.MsgDatabase.GetMaxSeqsWithTime(ctx, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ resp := &msg.GetConversationsHasReadAndMaxSeqResp{Seqs: make(map[string]*msg.Seqs)}
+ if req.ReturnPinned {
+ pinnedConversationIDs, err := m.ConversationLocalCache.GetPinnedConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ resp.PinnedConversationIDs = pinnedConversationIDs
+ }
+ for conversationID, maxSeq := range maxSeqs {
+ resp.Seqs[conversationID] = &msg.Seqs{
+ HasReadSeq: hasReadSeqs[conversationID],
+ MaxSeq: maxSeq.Seq,
+ MaxSeqTime: maxSeq.Time,
+ }
+ if v, ok := conversationMaxSeqMap[conversationID]; ok {
+ resp.Seqs[conversationID].MaxSeq = v
+ }
+ }
+ return resp, nil
+}
+
+func (m *msgServer) SetConversationHasReadSeq(ctx context.Context, req *msg.SetConversationHasReadSeqReq) (*msg.SetConversationHasReadSeqResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ maxSeq, err := m.MsgDatabase.GetMaxSeq(ctx, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ if req.HasReadSeq > maxSeq {
+ return nil, errs.ErrArgs.WrapMsg("hasReadSeq must not be bigger than maxSeq")
+ }
+ if err := m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq); err != nil {
+ return nil, err
+ }
+ m.sendMarkAsReadNotification(ctx, req.ConversationID, constant.SingleChatType, req.UserID, req.UserID, nil, req.HasReadSeq)
+ return &msg.SetConversationHasReadSeqResp{}, nil
+}
+
+func (m *msgServer) MarkMsgsAsRead(ctx context.Context, req *msg.MarkMsgsAsReadReq) (*msg.MarkMsgsAsReadResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ maxSeq, err := m.MsgDatabase.GetMaxSeq(ctx, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+	if len(req.Seqs) == 0 {
+		return nil, errs.ErrArgs.WrapMsg("seqs is empty")
+	}
+	// req.Seqs is expected to be ascending, so the last entry is the
+	// highest seq being marked as read.
+	hasReadSeq := req.Seqs[len(req.Seqs)-1]
+ if hasReadSeq > maxSeq {
+ return nil, errs.ErrArgs.WrapMsg("hasReadSeq must not be bigger than maxSeq")
+ }
+ conversation, err := m.ConversationLocalCache.GetConversation(ctx, req.UserID, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ webhookCfg := m.webhookConfig()
+ if err := m.MsgDatabase.MarkSingleChatMsgsAsRead(ctx, req.UserID, req.ConversationID, req.Seqs); err != nil {
+ return nil, err
+ }
+ currentHasReadSeq, err := m.MsgDatabase.GetHasReadSeq(ctx, req.UserID, req.ConversationID)
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return nil, err
+ }
+ if hasReadSeq > currentHasReadSeq {
+ err = m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, hasReadSeq)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ reqCallback := &cbapi.CallbackSingleMsgReadReq{
+ ConversationID: conversation.ConversationID,
+ UserID: req.UserID,
+ Seqs: req.Seqs,
+ ContentType: conversation.ConversationType,
+ }
+ m.webhookAfterSingleMsgRead(ctx, &webhookCfg.AfterSingleMsgRead, reqCallback)
+ m.sendMarkAsReadNotification(ctx, req.ConversationID, conversation.ConversationType, req.UserID,
+ m.conversationAndGetRecvID(conversation, req.UserID), req.Seqs, hasReadSeq)
+ return &msg.MarkMsgsAsReadResp{}, nil
+}
+
+func (m *msgServer) MarkConversationAsRead(ctx context.Context, req *msg.MarkConversationAsReadReq) (*msg.MarkConversationAsReadResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversation, err := m.ConversationLocalCache.GetConversation(ctx, req.UserID, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ webhookCfg := m.webhookConfig()
+ hasReadSeq, err := m.MsgDatabase.GetHasReadSeq(ctx, req.UserID, req.ConversationID)
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return nil, err
+ }
+ var seqs []int64
+
+ log.ZDebug(ctx, "MarkConversationAsRead", "hasReadSeq", hasReadSeq, "req.HasReadSeq", req.HasReadSeq)
+ if conversation.ConversationType == constant.SingleChatType {
+ for i := hasReadSeq + 1; i <= req.HasReadSeq; i++ {
+ seqs = append(seqs, i)
+ }
+		// Also apply any seqs the client reported explicitly, in case
+		// earlier mark-as-read calls were missed or arrived out of order.
+ for _, val := range req.Seqs {
+ if !datautil.Contain(val, seqs...) {
+ seqs = append(seqs, val)
+ }
+ }
+ if len(seqs) > 0 {
+ log.ZDebug(ctx, "MarkConversationAsRead", "seqs", seqs, "conversationID", req.ConversationID)
+ if err = m.MsgDatabase.MarkSingleChatMsgsAsRead(ctx, req.UserID, req.ConversationID, seqs); err != nil {
+ return nil, err
+ }
+ }
+ if req.HasReadSeq > hasReadSeq {
+ err = m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq)
+ if err != nil {
+ return nil, err
+ }
+ hasReadSeq = req.HasReadSeq
+ }
+ m.sendMarkAsReadNotification(ctx, req.ConversationID, conversation.ConversationType, req.UserID,
+ m.conversationAndGetRecvID(conversation, req.UserID), seqs, hasReadSeq)
+ } else if conversation.ConversationType == constant.ReadGroupChatType ||
+ conversation.ConversationType == constant.NotificationChatType {
+ if req.HasReadSeq > hasReadSeq {
+ err = m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq)
+ if err != nil {
+ return nil, err
+ }
+ hasReadSeq = req.HasReadSeq
+ }
+ m.sendMarkAsReadNotification(ctx, req.ConversationID, constant.SingleChatType, req.UserID,
+ req.UserID, seqs, hasReadSeq)
+ }
+
+ if conversation.ConversationType == constant.SingleChatType {
+ reqCall := &cbapi.CallbackSingleMsgReadReq{
+ ConversationID: conversation.ConversationID,
+ UserID: conversation.OwnerUserID,
+ Seqs: req.Seqs,
+ ContentType: conversation.ConversationType,
+ }
+ m.webhookAfterSingleMsgRead(ctx, &webhookCfg.AfterSingleMsgRead, reqCall)
+ } else if conversation.ConversationType == constant.ReadGroupChatType {
+ reqCall := &cbapi.CallbackGroupMsgReadReq{
+ SendID: conversation.OwnerUserID,
+ ReceiveID: req.UserID,
+ UnreadMsgNum: req.HasReadSeq,
+ ContentType: int64(conversation.ConversationType),
+ }
+ m.webhookAfterGroupMsgRead(ctx, &webhookCfg.AfterGroupMsgRead, reqCall)
+ }
+ return &msg.MarkConversationAsReadResp{}, nil
+}
+
+func (m *msgServer) sendMarkAsReadNotification(ctx context.Context, conversationID string, sessionType int32, sendID, recvID string, seqs []int64, hasReadSeq int64) {
+ tips := &sdkws.MarkAsReadTips{
+ MarkAsReadUserID: sendID,
+ ConversationID: conversationID,
+ Seqs: seqs,
+ HasReadSeq: hasReadSeq,
+ }
+ m.notificationSender.NotificationWithSessionType(ctx, sendID, recvID, constant.HasReadReceipt, sessionType, tips)
+}
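Every path in `as_read.go` enforces the same two invariants on the read cursor: it may never exceed the conversation's max seq, and it only ever moves forward (a stale or duplicate receipt is silently ignored). A small sketch of that rule, pulled out of the handlers above:

```go
package main

import "fmt"

// advanceHasRead applies the has-read-cursor rule used throughout
// as_read.go: reject a cursor past maxSeq, advance on a newer value,
// and ignore stale receipts. It returns the resulting cursor and
// whether a storage write is needed.
func advanceHasRead(current, requested, maxSeq int64) (int64, bool, error) {
	if requested > maxSeq {
		return current, false, fmt.Errorf("hasReadSeq %d exceeds maxSeq %d", requested, maxSeq)
	}
	if requested > current {
		return requested, true, nil // cursor moves forward; persist it
	}
	return current, false, nil // stale or duplicate read receipt: ignore
}

func main() {
	cur, write, _ := advanceHasRead(5, 8, 10)
	fmt.Println(cur, write) // advanced and persisted
	cur, write, _ = advanceHasRead(8, 6, 10)
	fmt.Println(cur, write) // stale receipt dropped
}
```

Keeping the cursor monotonic is what makes read receipts idempotent: clients can safely retry `MarkConversationAsRead` without ever moving the cursor backwards.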
diff --git a/internal/rpc/msg/callback.go b/internal/rpc/msg/callback.go
new file mode 100644
index 0000000..9b0b2df
--- /dev/null
+++ b/internal/rpc/msg/callback.go
@@ -0,0 +1,236 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "encoding/base64"
+ "encoding/json"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ pbchat "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/stringutil"
+ "google.golang.org/protobuf/proto"
+)
+
+func toCommonCallback(ctx context.Context, msg *pbchat.SendMsgReq, command string) cbapi.CommonCallbackReq {
+ return cbapi.CommonCallbackReq{
+ SendID: msg.MsgData.SendID,
+ ServerMsgID: msg.MsgData.ServerMsgID,
+ CallbackCommand: command,
+ ClientMsgID: msg.MsgData.ClientMsgID,
+ OperationID: mcontext.GetOperationID(ctx),
+ SenderPlatformID: msg.MsgData.SenderPlatformID,
+ SenderNickname: msg.MsgData.SenderNickname,
+ SessionType: msg.MsgData.SessionType,
+ MsgFrom: msg.MsgData.MsgFrom,
+ ContentType: msg.MsgData.ContentType,
+ Status: msg.MsgData.Status,
+ SendTime: msg.MsgData.SendTime,
+ CreateTime: msg.MsgData.CreateTime,
+ AtUserIDList: msg.MsgData.AtUserIDList,
+ SenderFaceURL: msg.MsgData.SenderFaceURL,
+ Content: GetContent(msg.MsgData),
+ Seq: uint32(msg.MsgData.Seq),
+ Ex: msg.MsgData.Ex,
+ }
+}
+
+func GetContent(msg *sdkws.MsgData) string {
+ if msg.ContentType >= constant.NotificationBegin && msg.ContentType <= constant.NotificationEnd {
+		var tips sdkws.TipsComm
+		// Best-effort decode: if unmarshalling fails, JsonDetail stays
+		// empty and the callback receives an empty content string.
+		_ = proto.Unmarshal(msg.Content, &tips)
+		return tips.JsonDetail
+ } else {
+ return string(msg.Content)
+ }
+}
+
+func (m *msgServer) webhookBeforeSendSingleMsg(ctx context.Context, before *config.BeforeConfig, msg *pbchat.SendMsgReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ if msg.MsgData.ContentType == constant.Typing {
+ return nil
+ }
+ if !filterBeforeMsg(msg, before) {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeSendSingleMsgReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackBeforeSendSingleMsgCommand),
+ RecvID: msg.MsgData.RecvID,
+ }
+ resp := &cbapi.CallbackBeforeSendSingleMsgResp{}
+ if err := m.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ return nil
+ })
+}
+
+func (m *msgServer) webhookAfterSendSingleMsg(ctx context.Context, after *config.AfterConfig, msg *pbchat.SendMsgReq) {
+ if msg.MsgData.ContentType == constant.Typing {
+ return
+ }
+ if !filterAfterMsg(msg, after) {
+ return
+ }
+ cbReq := &cbapi.CallbackAfterSendSingleMsgReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackAfterSendSingleMsgCommand),
+ RecvID: msg.MsgData.RecvID,
+ }
+ m.webhookClient.AsyncPostWithQuery(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterSendSingleMsgResp{}, after, buildKeyMsgDataQuery(msg.MsgData))
+}
+
+func (m *msgServer) webhookBeforeSendGroupMsg(ctx context.Context, before *config.BeforeConfig, msg *pbchat.SendMsgReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ if !filterBeforeMsg(msg, before) {
+ return nil
+ }
+ if msg.MsgData.ContentType == constant.Typing {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeSendGroupMsgReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackBeforeSendGroupMsgCommand),
+ GroupID: msg.MsgData.GroupID,
+ }
+ resp := &cbapi.CallbackBeforeSendGroupMsgResp{}
+ if err := m.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (m *msgServer) webhookAfterSendGroupMsg(ctx context.Context, after *config.AfterConfig, msg *pbchat.SendMsgReq) {
+ if after == nil {
+ return
+ }
+
+ if msg.MsgData.ContentType == constant.Typing {
+ log.ZDebug(ctx, "webhook skipped: typing message", "contentType", msg.MsgData.ContentType)
+ return
+ }
+
+ log.ZInfo(ctx, "webhook afterSendGroupMsg checking", "enable", after.Enable, "groupID", msg.MsgData.GroupID, "contentType", msg.MsgData.ContentType, "attentionIds", after.AttentionIds, "deniedTypes", after.DeniedTypes)
+
+ if !filterAfterMsg(msg, after) {
+ log.ZDebug(ctx, "webhook filtered out by filterAfterMsg", "groupID", msg.MsgData.GroupID)
+ return
+ }
+
+ if !after.Enable {
+ log.ZDebug(ctx, "webhook afterSendGroupMsg disabled, skipping", "enable", after.Enable)
+ return
+ }
+
+ log.ZInfo(ctx, "webhook afterSendGroupMsg sending", "groupID", msg.MsgData.GroupID, "sendID", msg.MsgData.SendID, "contentType", msg.MsgData.ContentType)
+
+ cbReq := &cbapi.CallbackAfterSendGroupMsgReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackAfterSendGroupMsgCommand),
+ GroupID: msg.MsgData.GroupID,
+ }
+
+ m.webhookClient.AsyncPostWithQuery(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterSendGroupMsgResp{}, after, buildKeyMsgDataQuery(msg.MsgData))
+}
+
+func (m *msgServer) webhookBeforeMsgModify(ctx context.Context, before *config.BeforeConfig, msg *pbchat.SendMsgReq, beforeMsgData **sdkws.MsgData) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ //if msg.MsgData.ContentType != constant.Text {
+ // return nil
+ //}
+ if !filterBeforeMsg(msg, before) {
+ return nil
+ }
+ cbReq := &cbapi.CallbackMsgModifyCommandReq{
+ CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackBeforeMsgModifyCommand),
+ }
+ resp := &cbapi.CallbackMsgModifyCommandResp{}
+ if err := m.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ if beforeMsgData != nil {
+ *beforeMsgData = proto.Clone(msg.MsgData).(*sdkws.MsgData)
+ }
+ if resp.Content != nil {
+ msg.MsgData.Content = []byte(*resp.Content)
+ if err := json.Unmarshal(msg.MsgData.Content, &struct{}{}); err != nil {
+ return errs.ErrArgs.WrapMsg("webhook msg modify content is not json", "content", string(msg.MsgData.Content))
+ }
+ }
+ datautil.NotNilReplace(msg.MsgData.OfflinePushInfo, resp.OfflinePushInfo)
+ datautil.NotNilReplace(&msg.MsgData.RecvID, resp.RecvID)
+ datautil.NotNilReplace(&msg.MsgData.GroupID, resp.GroupID)
+ datautil.NotNilReplace(&msg.MsgData.ClientMsgID, resp.ClientMsgID)
+ datautil.NotNilReplace(&msg.MsgData.ServerMsgID, resp.ServerMsgID)
+ datautil.NotNilReplace(&msg.MsgData.SenderPlatformID, resp.SenderPlatformID)
+ datautil.NotNilReplace(&msg.MsgData.SenderNickname, resp.SenderNickname)
+ datautil.NotNilReplace(&msg.MsgData.SenderFaceURL, resp.SenderFaceURL)
+ datautil.NotNilReplace(&msg.MsgData.SessionType, resp.SessionType)
+ datautil.NotNilReplace(&msg.MsgData.MsgFrom, resp.MsgFrom)
+ datautil.NotNilReplace(&msg.MsgData.ContentType, resp.ContentType)
+ datautil.NotNilReplace(&msg.MsgData.Status, resp.Status)
+ datautil.NotNilReplace(&msg.MsgData.Options, resp.Options)
+ datautil.NotNilReplace(&msg.MsgData.AtUserIDList, resp.AtUserIDList)
+ datautil.NotNilReplace(&msg.MsgData.AttachedInfo, resp.AttachedInfo)
+ datautil.NotNilReplace(&msg.MsgData.Ex, resp.Ex)
+ return nil
+ })
+}
+
+func (m *msgServer) webhookAfterGroupMsgRead(ctx context.Context, after *config.AfterConfig, req *cbapi.CallbackGroupMsgReadReq) {
+ req.CallbackCommand = cbapi.CallbackAfterGroupMsgReadCommand
+ m.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &cbapi.CallbackGroupMsgReadResp{}, after)
+}
+
+func (m *msgServer) webhookAfterSingleMsgRead(ctx context.Context, after *config.AfterConfig, req *cbapi.CallbackSingleMsgReadReq) {
+	req.CallbackCommand = cbapi.CallbackAfterSingleMsgReadCommand
+	m.webhookClient.AsyncPost(ctx, req.GetCallbackCommand(), req, &cbapi.CallbackSingleMsgReadResp{}, after)
+}
+
+func (m *msgServer) webhookAfterRevokeMsg(ctx context.Context, after *config.AfterConfig, req *pbchat.RevokeMsgReq) {
+ callbackReq := &cbapi.CallbackAfterRevokeMsgReq{
+ CallbackCommand: cbapi.CallbackAfterRevokeMsgCommand,
+ ConversationID: req.ConversationID,
+ Seq: req.Seq,
+ UserID: req.UserID,
+ }
+ m.webhookClient.AsyncPost(ctx, callbackReq.GetCallbackCommand(), callbackReq, &cbapi.CallbackAfterRevokeMsgResp{}, after)
+}
+
+func buildKeyMsgDataQuery(msg *sdkws.MsgData) map[string]string {
+ keyMsgData := apistruct.KeyMsgData{
+ SendID: msg.SendID,
+ RecvID: msg.RecvID,
+ GroupID: msg.GroupID,
+ }
+
+ return map[string]string{
+ webhook.Key: base64.StdEncoding.EncodeToString(stringutil.StructToJsonBytes(keyMsgData)),
+ }
+}
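`buildKeyMsgDataQuery` JSON-encodes the routing fields and then base64-encodes the result so the value survives inside a URL query string. A round-trip sketch showing what a webhook receiver would do with the parameter (the `keyMsgData` struct only mirrors `apistruct.KeyMsgData`; the JSON tags are assumptions):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// keyMsgData mirrors apistruct.KeyMsgData: the few routing fields the
// webhook receiver gets as a query parameter.
type keyMsgData struct {
	SendID  string `json:"sendID"`
	RecvID  string `json:"recvID"`
	GroupID string `json:"groupID"`
}

// encodeKey reproduces the value side of buildKeyMsgDataQuery:
// JSON first, then base64, so the struct is query-string safe.
func encodeKey(k keyMsgData) string {
	b, _ := json.Marshal(k) // a struct of plain strings cannot fail to marshal
	return base64.StdEncoding.EncodeToString(b)
}

// decodeKey is the receiver-side inverse.
func decodeKey(s string) (keyMsgData, error) {
	var k keyMsgData
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return k, err
	}
	return k, json.Unmarshal(b, &k)
}

func main() {
	in := keyMsgData{SendID: "u1", RecvID: "u2"}
	out, err := decodeKey(encodeKey(in))
	fmt.Println(err == nil, out == in)
}
```

Note the server uses `base64.StdEncoding`, whose `+` and `/` characters must be percent-escaped when placed in a URL; `URLEncoding` would avoid that, so receivers should decode whichever variant the query layer actually delivers.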
diff --git a/internal/rpc/msg/clear.go b/internal/rpc/msg/clear.go
new file mode 100644
index 0000000..b37f4bb
--- /dev/null
+++ b/internal/rpc/msg/clear.go
@@ -0,0 +1,61 @@
+package msg
+
+import (
+ "context"
+ "strings"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/log"
+)
+
+// DestructMsgs hard delete in Database.
+func (m *msgServer) DestructMsgs(ctx context.Context, req *msg.DestructMsgsReq) (*msg.DestructMsgsResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ docs, err := m.MsgDatabase.GetRandBeforeMsg(ctx, req.Timestamp, int(req.Limit))
+ if err != nil {
+ return nil, err
+ }
+ for i, doc := range docs {
+ if err := m.MsgDatabase.DeleteDoc(ctx, doc.DocID); err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "DestructMsgs delete doc", "index", i, "docID", doc.DocID)
+ index := strings.LastIndex(doc.DocID, ":")
+ if index < 0 {
+ continue
+ }
+		// Find the largest seq stored in this doc; since the whole doc was
+		// just deleted, the conversation's new minimum seq is that value + 1.
+		var maxSeqInDoc int64
+		for _, model := range doc.Msg {
+			if model.Msg == nil {
+				continue
+			}
+			if model.Msg.Seq > maxSeqInDoc {
+				maxSeqInDoc = model.Msg.Seq
+			}
+		}
+		if maxSeqInDoc <= 0 {
+			continue
+		}
+		conversationID := doc.DocID[:index]
+		if conversationID == "" {
+			continue
+		}
+		minSeq := maxSeqInDoc + 1
+		if err := m.MsgDatabase.SetMinSeq(ctx, conversationID, minSeq); err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "DestructMsgs delete doc set min seq", "index", i, "docID", doc.DocID, "conversationID", conversationID, "setMinSeq", minSeq)
+ }
+ return &msg.DestructMsgsResp{Count: int32(len(docs))}, nil
+}
+
+func (m *msgServer) GetLastMessageSeqByTime(ctx context.Context, req *msg.GetLastMessageSeqByTimeReq) (*msg.GetLastMessageSeqByTimeResp, error) {
+ seq, err := m.MsgDatabase.GetLastMessageSeqByTime(ctx, req.ConversationID, req.Time)
+ if err != nil {
+ return nil, err
+ }
+ return &msg.GetLastMessageSeqByTimeResp{Seq: seq}, nil
+}
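`DestructMsgs` relies on the convention that a message doc ID is `"<conversationID>:<docIndex>"`, splitting on the last colon, and bumps the conversation's minimum seq to one past the largest seq the deleted doc held. Both steps sketched in isolation (the doc-ID layout is taken from the code above; the sample IDs are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocID mirrors the parsing in DestructMsgs: everything before the
// last colon is the conversation ID. LastIndex is used so any colons
// inside the conversation ID itself are preserved.
func splitDocID(docID string) (conversationID string, ok bool) {
	i := strings.LastIndex(docID, ":")
	if i < 0 {
		return "", false // malformed doc ID: skip it
	}
	return docID[:i], docID[:i] != ""
}

// nextMinSeq mirrors the min-seq bump: after hard-deleting a doc, the
// conversation's minimum readable seq becomes the doc's largest seq + 1.
func nextMinSeq(seqs []int64) (int64, bool) {
	var maxSeq int64
	for _, s := range seqs {
		if s > maxSeq {
			maxSeq = s
		}
	}
	if maxSeq <= 0 {
		return 0, false // empty or corrupt doc: leave min seq untouched
	}
	return maxSeq + 1, true
}

func main() {
	c, ok := splitDocID("si_u1_u2:0")
	fmt.Println(c, ok)
	s, ok := nextMinSeq([]int64{3, 9, 5})
	fmt.Println(s, ok)
}
```

Raising the min seq is what hides the deleted range from clients: any pull below the new minimum is answered from nothing rather than from the dropped doc.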
diff --git a/internal/rpc/msg/delete.go b/internal/rpc/msg/delete.go
new file mode 100644
index 0000000..1e3ddbb
--- /dev/null
+++ b/internal/rpc/msg/delete.go
@@ -0,0 +1,251 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+func (m *msgServer) getMinSeqs(maxSeqs map[string]int64) map[string]int64 {
+ minSeqs := make(map[string]int64)
+ for k, v := range maxSeqs {
+ minSeqs[k] = v + 1
+ }
+ return minSeqs
+}
+
+func (m *msgServer) validateDeleteSyncOpt(opt *msg.DeleteSyncOpt) (isSyncSelf, isSyncOther bool) {
+ if opt == nil {
+ return
+ }
+ return opt.IsSyncSelf, opt.IsSyncOther
+}
+
+func (m *msgServer) ClearConversationsMsg(ctx context.Context, req *msg.ClearConversationsMsgReq) (*msg.ClearConversationsMsgResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ if err := m.clearConversation(ctx, req.ConversationIDs, req.UserID, req.DeleteSyncOpt); err != nil {
+ return nil, err
+ }
+ return &msg.ClearConversationsMsgResp{}, nil
+}
+
+func (m *msgServer) UserClearAllMsg(ctx context.Context, req *msg.UserClearAllMsgReq) (*msg.UserClearAllMsgResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversationIDs, err := m.ConversationLocalCache.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if err := m.clearConversation(ctx, conversationIDs, req.UserID, req.DeleteSyncOpt); err != nil {
+ return nil, err
+ }
+ return &msg.UserClearAllMsgResp{}, nil
+}
+
+func (m *msgServer) DeleteMsgs(ctx context.Context, req *msg.DeleteMsgsReq) (*msg.DeleteMsgsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+	// Fetch the messages to be deleted so their senders can be permission-checked.
+ _, _, msgs, err := m.MsgDatabase.GetMsgBySeqs(ctx, req.UserID, req.ConversationID, req.Seqs)
+ if err != nil {
+ return nil, err
+ }
+ if len(msgs) == 0 {
+ return nil, errs.ErrRecordNotFound.WrapMsg("messages not found")
+ }
+
+	// Permission check: non-admin callers must be authorized to delete each message.
+ if !authverify.IsAdmin(ctx) {
+		// Collect the sender IDs of all messages.
+ sendIDs := make([]string, 0, len(msgs))
+ for _, msg := range msgs {
+ if msg != nil && msg.SendID != "" {
+ sendIDs = append(sendIDs, msg.SendID)
+ }
+ }
+ sendIDs = datautil.Distinct(sendIDs)
+
+		// Use the first message's session type (all messages are assumed to
+		// belong to the same conversation).
+		if msgs[0] == nil {
+			return nil, errs.ErrArgs.WrapMsg("first message is nil")
+		}
+		sessionType := msgs[0].SessionType
+ switch sessionType {
+ case constant.SingleChatType:
+			// Single chat: users may only delete their own messages.
+ for _, msg := range msgs {
+ if msg != nil && msg.SendID != req.UserID {
+ return nil, errs.ErrNoPermission.WrapMsg("can only delete own messages in single chat")
+ }
+ }
+ case constant.ReadGroupChatType:
+			// Group chat: check role-based delete permissions.
+ groupID := msgs[0].GroupID
+ if groupID == "" {
+ return nil, errs.ErrArgs.WrapMsg("groupID is empty")
+ }
+
+			// Fetch group membership for the operator and all message senders.
+ allUserIDs := append([]string{req.UserID}, sendIDs...)
+ members, err := m.GroupLocalCache.GetGroupMemberInfoMap(ctx, groupID, datautil.Distinct(allUserIDs))
+ if err != nil {
+ return nil, err
+ }
+
+			// Look up the operator's role in the group.
+ opMember, ok := members[req.UserID]
+ if !ok {
+ return nil, errs.ErrNoPermission.WrapMsg("user not in group")
+ }
+
+			// Check delete permission for each message.
+ for _, msg := range msgs {
+ if msg == nil || msg.SendID == "" {
+ continue
+ }
+
+				// Users may always delete their own messages.
+ if msg.SendID == req.UserID {
+ continue
+ }
+
+				// Deleting someone else's message requires elevated privileges.
+ switch opMember.RoleLevel {
+ case constant.GroupOwner:
+					// The group owner may delete anyone's messages.
+ case constant.GroupAdmin:
+					// Admins may only delete messages sent by ordinary members.
+ sendMember, ok := members[msg.SendID]
+ if !ok {
+ return nil, errs.ErrNoPermission.WrapMsg("message sender not in group")
+ }
+ if sendMember.RoleLevel != constant.GroupOrdinaryUsers {
+ return nil, errs.ErrNoPermission.WrapMsg("group admin can only delete messages from ordinary members")
+ }
+ default:
+					// Ordinary members may only delete their own messages.
+ return nil, errs.ErrNoPermission.WrapMsg("can only delete own messages")
+ }
+ }
+ default:
+ return nil, errs.ErrInternalServer.WrapMsg("sessionType not supported", "sessionType", sessionType)
+ }
+ }
+
+ isSyncSelf, isSyncOther := m.validateDeleteSyncOpt(req.DeleteSyncOpt)
+ if isSyncOther {
+ if err := m.MsgDatabase.DeleteMsgsPhysicalBySeqs(ctx, req.ConversationID, req.Seqs); err != nil {
+ return nil, err
+ }
+ conv, err := m.conversationClient.GetConversationsByConversationID(ctx, req.ConversationID)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.DeleteMsgsTips{UserID: req.UserID, ConversationID: req.ConversationID, Seqs: req.Seqs}
+ m.notificationSender.NotificationWithSessionType(ctx, req.UserID, m.conversationAndGetRecvID(conv, req.UserID),
+ constant.DeleteMsgsNotification, conv.ConversationType, tips)
+ } else {
+ if err := m.MsgDatabase.DeleteUserMsgsBySeqs(ctx, req.UserID, req.ConversationID, req.Seqs); err != nil {
+ return nil, err
+ }
+ if isSyncSelf {
+ tips := &sdkws.DeleteMsgsTips{UserID: req.UserID, ConversationID: req.ConversationID, Seqs: req.Seqs}
+ m.notificationSender.NotificationWithSessionType(ctx, req.UserID, req.UserID, constant.DeleteMsgsNotification, constant.SingleChatType, tips)
+ }
+ }
+ return &msg.DeleteMsgsResp{}, nil
+}
+
+func (m *msgServer) DeleteMsgPhysicalBySeq(ctx context.Context, req *msg.DeleteMsgPhysicalBySeqReq) (*msg.DeleteMsgPhysicalBySeqResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ err := m.MsgDatabase.DeleteMsgsPhysicalBySeqs(ctx, req.ConversationID, req.Seqs)
+ if err != nil {
+ return nil, err
+ }
+ return &msg.DeleteMsgPhysicalBySeqResp{}, nil
+}
+
+func (m *msgServer) DeleteMsgPhysical(ctx context.Context, req *msg.DeleteMsgPhysicalReq) (*msg.DeleteMsgPhysicalResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ remainTime := timeutil.GetCurrentTimestampBySecond() - req.Timestamp
+ if _, err := m.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: remainTime, Limit: 9999}); err != nil {
+ return nil, err
+ }
+ return &msg.DeleteMsgPhysicalResp{}, nil
+}
+
+func (m *msgServer) clearConversation(ctx context.Context, conversationIDs []string, userID string, deleteSyncOpt *msg.DeleteSyncOpt) error {
+ conversations, err := m.conversationClient.GetConversationsByConversationIDs(ctx, conversationIDs)
+ if err != nil {
+ return err
+ }
+ var existConversations []*conversation.Conversation
+ var existConversationIDs []string
+ for _, conversation := range conversations {
+ existConversations = append(existConversations, conversation)
+ existConversationIDs = append(existConversationIDs, conversation.ConversationID)
+ }
+ log.ZDebug(ctx, "ClearConversationsMsg", "existConversationIDs", existConversationIDs)
+ maxSeqs, err := m.MsgDatabase.GetMaxSeqs(ctx, existConversationIDs)
+ if err != nil {
+ return err
+ }
+ isSyncSelf, isSyncOther := m.validateDeleteSyncOpt(deleteSyncOpt)
+ if !isSyncOther {
+ setSeqs := m.getMinSeqs(maxSeqs)
+ if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, setSeqs); err != nil {
+ return err
+ }
+ ownerUserIDs := []string{userID}
+ for conversationID, seq := range setSeqs {
+ if err := m.conversationClient.SetConversationMinSeq(ctx, conversationID, ownerUserIDs, seq); err != nil {
+ return err
+ }
+ }
+		// notification to self
+ if isSyncSelf {
+ tips := &sdkws.ClearConversationTips{UserID: userID, ConversationIDs: existConversationIDs}
+ m.notificationSender.NotificationWithSessionType(ctx, userID, userID, constant.ClearConversationNotification, constant.SingleChatType, tips)
+ }
+ } else {
+ if err := m.MsgDatabase.SetMinSeqs(ctx, m.getMinSeqs(maxSeqs)); err != nil {
+ return err
+ }
+ for _, conversation := range existConversations {
+ tips := &sdkws.ClearConversationTips{UserID: userID, ConversationIDs: []string{conversation.ConversationID}}
+ m.notificationSender.NotificationWithSessionType(ctx, userID, m.conversationAndGetRecvID(conversation, userID), constant.ClearConversationNotification, conversation.ConversationType, tips)
+ }
+ }
+ if err := m.MsgDatabase.UserSetHasReadSeqs(ctx, userID, maxSeqs); err != nil {
+ return err
+ }
+ return nil
+}
diff --git a/internal/rpc/msg/filter.go b/internal/rpc/msg/filter.go
new file mode 100644
index 0000000..62b5092
--- /dev/null
+++ b/internal/rpc/msg/filter.go
@@ -0,0 +1,106 @@
+package msg
+
+import (
+ "strconv"
+ "strings"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ pbchat "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+const (
+ separator = "-"
+)
+
+func filterAfterMsg(msg *pbchat.SendMsgReq, after *config.AfterConfig) bool {
+	return filterMsg(msg, after.AttentionIds, after.DeniedTypes)
+}
+
+func filterBeforeMsg(msg *pbchat.SendMsgReq, before *config.BeforeConfig) bool {
+ return filterMsg(msg, nil, before.DeniedTypes)
+}
+
+func filterMsg(msg *pbchat.SendMsgReq, attentionIds []string, deniedTypes []int32) bool {
+	// When attentionIds is configured, only messages involving those IDs are kept.
+	// Note: for group messages, match on GroupID rather than RecvID.
+	if len(attentionIds) != 0 {
+		// Single-chat messages are matched on RecvID, group messages on GroupID.
+ if msg.MsgData.SessionType == constant.SingleChatType {
+ if !datautil.Contain(msg.MsgData.RecvID, attentionIds...) {
+ return false
+ }
+ } else if msg.MsgData.SessionType == constant.ReadGroupChatType {
+ if !datautil.Contain(msg.MsgData.GroupID, attentionIds...) {
+ return false
+ }
+ }
+ }
+
+ if defaultDeniedTypes(msg.MsgData.ContentType) {
+ return false
+ }
+
+ if len(deniedTypes) != 0 && datautil.Contain(msg.MsgData.ContentType, deniedTypes...) {
+ return false
+ }
+ //if len(allowedTypes) != 0 && !isInInterval(msg.MsgData.ContentType, allowedTypes) {
+ // return false
+ //}
+ //if len(deniedTypes) != 0 && isInInterval(msg.MsgData.ContentType, deniedTypes) {
+ // return false
+ //}
+ return true
+}
+
+func defaultDeniedTypes(contentType int32) bool {
+ if contentType >= constant.NotificationBegin && contentType <= constant.NotificationEnd {
+ return true
+ }
+ if contentType == constant.Typing {
+ return true
+ }
+ return false
+}
+
+// isInInterval if data is in interval
+// Supports two formats: a single type or a range. The range is defined by the lower and upper bounds connected with a hyphen ("-")
+// e.g. [1, 100, 200-500, 600-700] means that only data within the range
+// {1, 100} ∪ [200, 500] ∪ [600, 700] will return true.
+func isInInterval(data int32, interval []string) bool {
+ for _, v := range interval {
+ if strings.Contains(v, separator) {
+ // is interval
+ bounds := strings.Split(v, separator)
+ if len(bounds) != 2 {
+ continue
+ }
+ bottom, err := strconv.Atoi(bounds[0])
+ if err != nil {
+ continue
+ }
+ top, err := strconv.Atoi(bounds[1])
+ if err != nil {
+ continue
+ }
+ if datautil.BetweenEq(int(data), bottom, top) {
+ return true
+ }
+ } else {
+ iv, err := strconv.Atoi(v)
+ if err != nil {
+ continue
+ }
+ if int(data) == iv {
+ return true
+ }
+ }
+ }
+ return false
+}
diff --git a/internal/rpc/msg/msg_status.go b/internal/rpc/msg/msg_status.go
new file mode 100644
index 0000000..e38e7cc
--- /dev/null
+++ b/internal/rpc/msg/msg_status.go
@@ -0,0 +1,44 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/constant"
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func (m *msgServer) SetSendMsgStatus(ctx context.Context, req *pbmsg.SetSendMsgStatusReq) (*pbmsg.SetSendMsgStatusResp, error) {
+ resp := &pbmsg.SetSendMsgStatusResp{}
+ if err := m.MsgDatabase.SetSendMsgStatus(ctx, mcontext.GetOperationID(ctx), req.Status); err != nil {
+ return nil, err
+ }
+ return resp, nil
+}
+
+func (m *msgServer) GetSendMsgStatus(ctx context.Context, req *pbmsg.GetSendMsgStatusReq) (*pbmsg.GetSendMsgStatusResp, error) {
+ resp := &pbmsg.GetSendMsgStatusResp{}
+ status, err := m.MsgDatabase.GetSendMsgStatus(ctx, mcontext.GetOperationID(ctx))
+ if IsNotFound(err) {
+ resp.Status = constant.MsgStatusNotExist
+ return resp, nil
+ } else if err != nil {
+ return nil, err
+ }
+ resp.Status = status
+ return resp, nil
+}
diff --git a/internal/rpc/msg/notification.go b/internal/rpc/msg/notification.go
new file mode 100644
index 0000000..feb6e58
--- /dev/null
+++ b/internal/rpc/msg/notification.go
@@ -0,0 +1,50 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+type MsgNotificationSender struct {
+ *notification.NotificationSender
+}
+
+func NewMsgNotificationSender(config *Config, opts ...notification.NotificationSenderOptions) *MsgNotificationSender {
+ return &MsgNotificationSender{notification.NewNotificationSender(&config.NotificationConfig, opts...)}
+}
+
+func (m *MsgNotificationSender) UserDeleteMsgsNotification(ctx context.Context, userID, conversationID string, seqs []int64) {
+ tips := sdkws.DeleteMsgsTips{
+ UserID: userID,
+ ConversationID: conversationID,
+ Seqs: seqs,
+ }
+ m.Notification(ctx, userID, userID, constant.DeleteMsgsNotification, &tips)
+}
+
+func (m *MsgNotificationSender) MarkAsReadNotification(ctx context.Context, conversationID string, sessionType int32, sendID, recvID string, seqs []int64, hasReadSeq int64) {
+ tips := &sdkws.MarkAsReadTips{
+ MarkAsReadUserID: sendID,
+ ConversationID: conversationID,
+ Seqs: seqs,
+ HasReadSeq: hasReadSeq,
+ }
+ m.NotificationWithSessionType(ctx, sendID, recvID, constant.HasReadReceipt, sessionType, tips)
+}
diff --git a/internal/rpc/msg/qrcode_decoder.go b/internal/rpc/msg/qrcode_decoder.go
new file mode 100644
index 0000000..d5f2dde
--- /dev/null
+++ b/internal/rpc/msg/qrcode_decoder.go
@@ -0,0 +1,971 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "fmt"
+ "image"
+ "image/color"
+ _ "image/jpeg"
+ _ "image/png"
+ "math"
+ "os"
+ "sync"
+
+ "github.com/makiuchi-d/gozxing"
+ "github.com/makiuchi-d/gozxing/qrcode"
+ "github.com/openimsdk/tools/log"
+)
+
+// openImageFile opens the image file at the given path.
+func openImageFile(imagePath string) (*os.File, error) {
+ file, err := os.Open(imagePath)
+ if err != nil {
+		return nil, fmt.Errorf("failed to open file: %w", err)
+ }
+ return file, nil
+}
+
+// QRDecoder is the interface implemented by QR code decoders.
+type QRDecoder interface {
+ Name() string
+	Decode(ctx context.Context, imagePath string, logPrefix string) (bool, error) // reports whether a QR code was detected
+}
+
+// ============================================================================
+// QuircDecoder - wrapper around the Quirc library
+// ============================================================================
+
+// QuircDecoder decodes using the Quirc library.
+type QuircDecoder struct {
+ detectFunc func([]uint8, int, int) (bool, error)
+}
+
+// NewQuircDecoder creates a Quirc decoder.
+func NewQuircDecoder(detectFunc func([]uint8, int, int) (bool, error)) *QuircDecoder {
+ return &QuircDecoder{detectFunc: detectFunc}
+}
+
+func (d *QuircDecoder) Name() string {
+ return "quirc"
+}
+
+func (d *QuircDecoder) Decode(ctx context.Context, imagePath string, logPrefix string) (bool, error) {
+ if d.detectFunc == nil {
+		return false, fmt.Errorf("quirc decoder is not enabled")
+ }
+
+	// Open and decode the image.
+ file, err := openImageFile(imagePath)
+ if err != nil {
+		log.ZError(ctx, "failed to open image file", err, "imagePath", imagePath)
+ return false, err
+ }
+ defer file.Close()
+
+ img, _, err := image.Decode(file)
+ if err != nil {
+		log.ZError(ctx, "failed to decode image", err, "imagePath", imagePath)
+		return false, fmt.Errorf("failed to decode image: %w", err)
+ }
+
+ bounds := img.Bounds()
+ width := bounds.Dx()
+ height := bounds.Dy()
+
+	// Convert to grayscale.
+ grayData := convertToGrayscale(img, width, height)
+
+	// Run Quirc detection.
+ hasQRCode, err := d.detectFunc(grayData, width, height)
+ if err != nil {
+		log.ZError(ctx, "quirc detection failed", err, "width", width, "height", height)
+ return false, err
+ }
+
+ return hasQRCode, nil
+}
+
+// ============================================================================
+// CustomQRDecoder - custom decoder (rounded-corner compatible)
+// ============================================================================
+
+// CustomQRDecoder is a custom QR decoder that tolerates special formats such as rounded finder-pattern corners.
+type CustomQRDecoder struct{}
+
+func (d *CustomQRDecoder) Name() string {
+	return "custom (rounded-corner compatible)"
+}
+
+// Decode reports whether a QR code was detected in the image.
+func (d *CustomQRDecoder) Decode(ctx context.Context, imagePath string, logPrefix string) (bool, error) {
+ file, err := openImageFile(imagePath)
+ if err != nil {
+		log.ZError(ctx, "failed to open image file", err, "imagePath", imagePath)
+		return false, fmt.Errorf("failed to open file: %w", err)
+ }
+ defer file.Close()
+
+ img, _, err := image.Decode(file)
+ if err != nil {
+		log.ZError(ctx, "failed to decode image", err, "imagePath", imagePath)
+		return false, fmt.Errorf("failed to decode image: %w", err)
+ }
+
+ bounds := img.Bounds()
+ width := bounds.Dx()
+ height := bounds.Dy()
+
+ reader := qrcode.NewQRCodeReader()
+ hints := make(map[gozxing.DecodeHintType]interface{})
+ hints[gozxing.DecodeHintType_TRY_HARDER] = true
+ hints[gozxing.DecodeHintType_PURE_BARCODE] = false
+ hints[gozxing.DecodeHintType_CHARACTER_SET] = "UTF-8"
+
+	// Try decoding directly.
+ bitmap, err := gozxing.NewBinaryBitmapFromImage(img)
+ if err == nil {
+ if _, err := reader.Decode(bitmap, hints); err == nil {
+ return true, nil
+ }
+
+		// Retry without the PURE_BARCODE hint.
+ delete(hints, gozxing.DecodeHintType_PURE_BARCODE)
+ if _, err := reader.Decode(bitmap, hints); err == nil {
+ return true, nil
+ }
+ hints[gozxing.DecodeHintType_PURE_BARCODE] = false
+ }
+
+	// Try multiple scale factors.
+ scales := []float64{1.0, 1.5, 2.0, 0.75, 0.5}
+ for _, scale := range scales {
+ scaledImg := scaleImage(img, width, height, scale)
+ if scaledImg == nil {
+ continue
+ }
+
+ scaledBitmap, err := gozxing.NewBinaryBitmapFromImage(scaledImg)
+ if err == nil {
+ if _, err := reader.Decode(scaledBitmap, hints); err == nil {
+ return true, nil
+ }
+ }
+ }
+
+	// Convert to grayscale for preprocessing.
+ grayData := convertToGrayscale(img, width, height)
+
+	// Try several preprocessing methods.
+ preprocessMethods := []struct {
+ name string
+ fn func([]byte, int, int) []byte
+ }{
+		{"Otsu binarization", enhanceImageOtsu},
+		{"standard enhancement", enhanceImage},
+		{"strong contrast", enhanceImageStrong},
+		{"rounded-corner processing", enhanceImageForRoundedCorners},
+		{"denoise + sharpen", enhanceImageDenoiseSharpen},
+		{"Gaussian blur + sharpen", enhanceImageGaussianSharpen},
+ }
+
+ scalesForPreprocessed := []float64{1.0, 2.0, 1.5, 1.2, 0.8}
+
+ for _, method := range preprocessMethods {
+ processed := method.fn(grayData, width, height)
+
+		// Quick finder-pattern detection. Decoding is attempted regardless of
+		// the result, since rounded corners can defeat the fast detector.
+		_ = detectCornersFast(processed, width, height)
+
+ processedImg := createImageFromGrayscale(processed, width, height)
+ bitmap2, err := gozxing.NewBinaryBitmapFromImage(processedImg)
+ if err == nil {
+ if _, err := reader.Decode(bitmap2, hints); err == nil {
+ return true, nil
+ }
+ delete(hints, gozxing.DecodeHintType_PURE_BARCODE)
+ if _, err := reader.Decode(bitmap2, hints); err == nil {
+ return true, nil
+ }
+ hints[gozxing.DecodeHintType_PURE_BARCODE] = false
+ }
+
+		// Scale the preprocessed image across multiple factors.
+ for _, scale := range scalesForPreprocessed {
+ scaledProcessed := scaleGrayscaleImage(processed, width, height, scale)
+ if scaledProcessed == nil {
+ continue
+ }
+ scaledImg := createImageFromGrayscale(scaledProcessed.data, scaledProcessed.width, scaledProcessed.height)
+ scaledBitmap, err := gozxing.NewBinaryBitmapFromImage(scaledImg)
+ if err == nil {
+ if _, err := reader.Decode(scaledBitmap, hints); err == nil {
+ return true, nil
+ }
+ delete(hints, gozxing.DecodeHintType_PURE_BARCODE)
+ if _, err := reader.Decode(scaledBitmap, hints); err == nil {
+ return true, nil
+ }
+ hints[gozxing.DecodeHintType_PURE_BARCODE] = false
+ }
+ }
+ }
+
+ return false, nil
+}
+
+// ============================================================================
+// ParallelQRDecoder - parallel decoder
+// ============================================================================
+
+// ParallelQRDecoder runs the quirc and custom decoders concurrently.
+type ParallelQRDecoder struct {
+ quircDecoder QRDecoder
+ customDecoder QRDecoder
+}
+
+// NewParallelQRDecoder creates a parallel decoder.
+func NewParallelQRDecoder(quircDecoder, customDecoder QRDecoder) *ParallelQRDecoder {
+ return &ParallelQRDecoder{
+ quircDecoder: quircDecoder,
+ customDecoder: customDecoder,
+ }
+}
+
+func (d *ParallelQRDecoder) Name() string {
+ return "parallel (quirc + custom)"
+}
+
+// Decode runs quirc and custom concurrently and returns as soon as either detects a QR code.
+func (d *ParallelQRDecoder) Decode(ctx context.Context, imagePath string, logPrefix string) (bool, error) {
+ type decodeResult struct {
+ hasQRCode bool
+ err error
+ name string
+ }
+
+ resultChan := make(chan decodeResult, 2)
+ var wg sync.WaitGroup
+ var mu sync.Mutex
+ var quircErr error
+ var customErr error
+
+	// Start quirc decoding.
+ if d.quircDecoder != nil {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ hasQRCode, err := d.quircDecoder.Decode(ctx, imagePath, logPrefix)
+ mu.Lock()
+ if err != nil {
+ quircErr = err
+ }
+ mu.Unlock()
+ resultChan <- decodeResult{
+ hasQRCode: hasQRCode,
+ err: err,
+ name: d.quircDecoder.Name(),
+ }
+ }()
+ }
+
+	// Start custom decoding.
+ if d.customDecoder != nil {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ hasQRCode, err := d.customDecoder.Decode(ctx, imagePath, logPrefix)
+ mu.Lock()
+ if err != nil {
+ customErr = err
+ }
+ mu.Unlock()
+ resultChan <- decodeResult{
+ hasQRCode: hasQRCode,
+ err: err,
+ name: d.customDecoder.Name(),
+ }
+ }()
+ }
+
+	// Wait for the first result.
+ var firstResult decodeResult
+ var secondResult decodeResult
+
+ firstResult = <-resultChan
+ if firstResult.hasQRCode {
+		// A QR code was detected: drain the remaining goroutine in the background and return immediately.
+ go func() {
+ <-resultChan
+ wg.Wait()
+ }()
+ return true, nil
+ }
+
+	// Wait for the second result.
+ if d.quircDecoder != nil && d.customDecoder != nil {
+ secondResult = <-resultChan
+ if secondResult.hasQRCode {
+ wg.Wait()
+ return true, nil
+ }
+ }
+
+ wg.Wait()
+
+	// If both decoders errored, return a combined error.
+ if firstResult.err != nil && secondResult.err != nil {
+		log.ZError(ctx, "parallel decode failed: both decoders failed", fmt.Errorf("quirc error=%v, custom error=%v", quircErr, customErr),
+			"quircError", quircErr,
+			"customError", customErr)
+		return false, fmt.Errorf("both quirc and custom failed to decode: quirc error=%v, custom error=%v", quircErr, customErr)
+ }
+
+ return false, nil
+}
+
+// ============================================================================
+// Helper functions
+// ============================================================================
+
+// convertToGrayscale converts the image into an 8-bit grayscale buffer.
+func convertToGrayscale(img image.Image, width, height int) []byte {
+ grayData := make([]byte, width*height)
+ bounds := img.Bounds()
+
+ if ycbcr, ok := img.(*image.YCbCr); ok {
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ yi := ycbcr.YOffset(x+bounds.Min.X, y+bounds.Min.Y)
+ grayData[y*width+x] = ycbcr.Y[yi]
+ }
+ }
+ return grayData
+ }
+
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ r, g, b, _ := img.At(x+bounds.Min.X, y+bounds.Min.Y).RGBA()
+ r8 := uint8(r >> 8)
+ g8 := uint8(g >> 8)
+ b8 := uint8(b >> 8)
+ gray := byte((uint16(r8)*299 + uint16(g8)*587 + uint16(b8)*114) / 1000)
+ grayData[y*width+x] = gray
+ }
+ }
+
+ return grayData
+}
+
+// enhanceImage applies standard contrast stretching when the dynamic range is low.
+func enhanceImage(data []byte, width, height int) []byte {
+ enhanced := make([]byte, len(data))
+ copy(enhanced, data)
+
+ minVal := uint8(255)
+ maxVal := uint8(0)
+ for _, v := range data {
+ if v < minVal {
+ minVal = v
+ }
+ if v > maxVal {
+ maxVal = v
+ }
+ }
+
+ if maxVal-minVal < 50 {
+ rangeVal := maxVal - minVal
+ if rangeVal == 0 {
+ rangeVal = 1
+ }
+ for i, v := range data {
+ stretched := uint8((uint16(v-minVal) * 255) / uint16(rangeVal))
+ enhanced[i] = stretched
+ }
+ }
+
+ return enhanced
+}
+
+// enhanceImageStrong applies histogram-equalization contrast enhancement.
+func enhanceImageStrong(data []byte, width, height int) []byte {
+ enhanced := make([]byte, len(data))
+
+ histogram := make([]int, 256)
+ for _, v := range data {
+ histogram[v]++
+ }
+
+ cdf := make([]int, 256)
+ cdf[0] = histogram[0]
+ for i := 1; i < 256; i++ {
+ cdf[i] = cdf[i-1] + histogram[i]
+ }
+
+ total := len(data)
+ for i, v := range data {
+ if total > 0 {
+ enhanced[i] = uint8((cdf[v] * 255) / total)
+ }
+ }
+
+ return enhanced
+}
+
+// enhanceImageForRoundedCorners applies special processing for rounded finder-pattern corners.
+func enhanceImageForRoundedCorners(data []byte, width, height int) []byte {
+ enhanced := make([]byte, len(data))
+ copy(enhanced, data)
+
+ minVal := uint8(255)
+ maxVal := uint8(0)
+ for _, v := range data {
+ if v < minVal {
+ minVal = v
+ }
+ if v > maxVal {
+ maxVal = v
+ }
+ }
+
+ if maxVal-minVal < 100 {
+ rangeVal := maxVal - minVal
+ if rangeVal == 0 {
+ rangeVal = 1
+ }
+ for i, v := range data {
+ stretched := uint8((uint16(v-minVal) * 255) / uint16(rangeVal))
+ enhanced[i] = stretched
+ }
+ }
+
+	// Morphological opening: erosion followed by dilation.
+ dilated := make([]byte, len(enhanced))
+ kernelSize := 3
+ halfKernel := kernelSize / 2
+
+	// Erosion (minimum filter).
+ for y := halfKernel; y < height-halfKernel; y++ {
+ for x := halfKernel; x < width-halfKernel; x++ {
+ minVal := uint8(255)
+ for ky := -halfKernel; ky <= halfKernel; ky++ {
+ for kx := -halfKernel; kx <= halfKernel; kx++ {
+ idx := (y+ky)*width + (x + kx)
+ if enhanced[idx] < minVal {
+ minVal = enhanced[idx]
+ }
+ }
+ }
+ dilated[y*width+x] = minVal
+ }
+ }
+
+	// Dilation (maximum filter).
+ for y := halfKernel; y < height-halfKernel; y++ {
+ for x := halfKernel; x < width-halfKernel; x++ {
+ maxVal := uint8(0)
+ for ky := -halfKernel; ky <= halfKernel; ky++ {
+ for kx := -halfKernel; kx <= halfKernel; kx++ {
+ idx := (y+ky)*width + (x + kx)
+ if dilated[idx] > maxVal {
+ maxVal = dilated[idx]
+ }
+ }
+ }
+ enhanced[y*width+x] = maxVal
+ }
+ }
+
+	// Keep original values at the image borders.
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ if y < halfKernel || y >= height-halfKernel || x < halfKernel || x >= width-halfKernel {
+ enhanced[y*width+x] = data[y*width+x]
+ }
+ }
+ }
+
+ return enhanced
+}
+
+// enhanceImageDenoiseSharpen denoises with a median filter, then sharpens.
+func enhanceImageDenoiseSharpen(data []byte, width, height int) []byte {
+ denoised := medianFilter(data, width, height, 3)
+ sharpened := sharpenImage(denoised, width, height)
+ return sharpened
+}
+
+// medianFilter applies median filtering for noise removal.
+func medianFilter(data []byte, width, height, kernelSize int) []byte {
+ filtered := make([]byte, len(data))
+ halfKernel := kernelSize / 2
+ kernelArea := kernelSize * kernelSize
+ values := make([]byte, kernelArea)
+
+ for y := halfKernel; y < height-halfKernel; y++ {
+ for x := halfKernel; x < width-halfKernel; x++ {
+ idx := 0
+ for ky := -halfKernel; ky <= halfKernel; ky++ {
+ for kx := -halfKernel; kx <= halfKernel; kx++ {
+ values[idx] = data[(y+ky)*width+(x+kx)]
+ idx++
+ }
+ }
+ filtered[y*width+x] = quickSelectMedian(values)
+ }
+ }
+
+	// Keep original values at the image borders.
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ if y < halfKernel || y >= height-halfKernel || x < halfKernel || x >= width-halfKernel {
+ filtered[y*width+x] = data[y*width+x]
+ }
+ }
+ }
+
+ return filtered
+}
+
+// quickSelectMedian returns the median of arr using quickselect.
+func quickSelectMedian(arr []byte) byte {
+ n := len(arr)
+ if n <= 7 {
+		// Use insertion sort for small arrays.
+ for i := 1; i < n; i++ {
+ key := arr[i]
+ j := i - 1
+ for j >= 0 && arr[j] > key {
+ arr[j+1] = arr[j]
+ j--
+ }
+ arr[j+1] = key
+ }
+ return arr[n/2]
+ }
+ return quickSelect(arr, 0, n-1, n/2)
+}
+
+// quickSelect returns the k-th smallest element of arr[left:right+1].
+func quickSelect(arr []byte, left, right, k int) byte {
+ if left == right {
+ return arr[left]
+ }
+ pivotIndex := partition(arr, left, right)
+ if k == pivotIndex {
+ return arr[k]
+ } else if k < pivotIndex {
+ return quickSelect(arr, left, pivotIndex-1, k)
+ }
+ return quickSelect(arr, pivotIndex+1, right, k)
+}
+
+func partition(arr []byte, left, right int) int {
+ pivot := arr[right]
+ i := left
+ for j := left; j < right; j++ {
+ if arr[j] <= pivot {
+ arr[i], arr[j] = arr[j], arr[i]
+ i++
+ }
+ }
+ arr[i], arr[right] = arr[right], arr[i]
+ return i
+}
+
+// sharpenImage applies a 3x3 Laplacian sharpening kernel.
+func sharpenImage(data []byte, width, height int) []byte {
+ sharpened := make([]byte, len(data))
+ kernel := []int{0, -1, 0, -1, 5, -1, 0, -1, 0}
+
+ for y := 1; y < height-1; y++ {
+ for x := 1; x < width-1; x++ {
+ sum := 0
+ idx := 0
+ for ky := -1; ky <= 1; ky++ {
+ for kx := -1; kx <= 1; kx++ {
+ pixelIdx := (y+ky)*width + (x + kx)
+ sum += int(data[pixelIdx]) * kernel[idx]
+ idx++
+ }
+ }
+ if sum < 0 {
+ sum = 0
+ }
+ if sum > 255 {
+ sum = 255
+ }
+ sharpened[y*width+x] = uint8(sum)
+ }
+ }
+
+	// Keep original values at the image borders.
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ if y == 0 || y == height-1 || x == 0 || x == width-1 {
+ sharpened[y*width+x] = data[y*width+x]
+ }
+ }
+ }
+
+ return sharpened
+}
+
+// enhanceImageOtsu binarizes the image using Otsu's adaptive threshold.
+func enhanceImageOtsu(data []byte, width, height int) []byte {
+ threshold := calculateOtsuThreshold(data)
+ binary := make([]byte, len(data))
+ for i := range data {
+ if data[i] > threshold {
+ binary[i] = 255
+ }
+ }
+ return binary
+}
+
+// calculateOtsuThreshold computes Otsu's threshold from the intensity histogram.
+func calculateOtsuThreshold(data []byte) uint8 {
+ histogram := make([]int, 256)
+ for _, v := range data {
+ histogram[v]++
+ }
+
+ total := len(data)
+ if total == 0 {
+ return 128
+ }
+
+ var threshold uint8
+ var maxVar float64
+ var sum int
+
+ for i := 0; i < 256; i++ {
+ sum += i * histogram[i]
+ }
+
+ var sum1 int
+ var wB int
+ for i := 0; i < 256; i++ {
+ wB += histogram[i]
+ if wB == 0 {
+ continue
+ }
+ wF := total - wB
+ if wF == 0 {
+ break
+ }
+
+ sum1 += i * histogram[i]
+ mB := float64(sum1) / float64(wB)
+ mF := float64(sum-sum1) / float64(wF)
+
+ varBetween := float64(wB) * float64(wF) * (mB - mF) * (mB - mF)
+
+ if varBetween > maxVar {
+ maxVar = varBetween
+ threshold = uint8(i)
+ }
+ }
+
+ return threshold
+}
+
+// enhanceImageGaussianSharpen blurs with a Gaussian, sharpens, then stretches contrast.
+func enhanceImageGaussianSharpen(data []byte, width, height int) []byte {
+ blurred := gaussianBlur(data, width, height, 1.0)
+ sharpened := sharpenImage(blurred, width, height)
+ enhanced := enhanceImage(sharpened, width, height)
+ return enhanced
+}
+
+// gaussianBlur applies a 5x5 Gaussian blur with the given sigma.
+func gaussianBlur(data []byte, width, height int, sigma float64) []byte {
+ blurred := make([]byte, len(data))
+ kernelSize := 5
+ halfKernel := kernelSize / 2
+
+ kernel := make([]float64, kernelSize*kernelSize)
+ sum := 0.0
+ for y := -halfKernel; y <= halfKernel; y++ {
+ for x := -halfKernel; x <= halfKernel; x++ {
+ idx := (y+halfKernel)*kernelSize + (x + halfKernel)
+ val := math.Exp(-(float64(x*x+y*y) / (2 * sigma * sigma)))
+ kernel[idx] = val
+ sum += val
+ }
+ }
+
+ for i := range kernel {
+ kernel[i] /= sum
+ }
+
+ for y := halfKernel; y < height-halfKernel; y++ {
+ for x := halfKernel; x < width-halfKernel; x++ {
+ var val float64
+ idx := 0
+ for ky := -halfKernel; ky <= halfKernel; ky++ {
+ for kx := -halfKernel; kx <= halfKernel; kx++ {
+ pixelIdx := (y+ky)*width + (x + kx)
+ val += float64(data[pixelIdx]) * kernel[idx]
+ idx++
+ }
+ }
+ blurred[y*width+x] = uint8(val)
+ }
+ }
+
+	// Keep the original values at the border
+ for y := 0; y < height; y++ {
+ for x := 0; x < width; x++ {
+ if y < halfKernel || y >= height-halfKernel || x < halfKernel || x >= width-halfKernel {
+ blurred[y*width+x] = data[y*width+x]
+ }
+ }
+ }
+
+ return blurred
+}
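The kernel built above weights each neighbor by exp(−(x²+y²)/(2σ²)) and then normalizes by the total, so the blur preserves overall brightness. A minimal standalone sketch of that construction (the helper name `gaussianKernel` is illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// gaussianKernel builds the same normalized kernel as gaussianBlur above:
// weights exp(-(x^2+y^2)/(2*sigma^2)), divided by their sum.
func gaussianKernel(size int, sigma float64) []float64 {
	half := size / 2
	k := make([]float64, size*size)
	sum := 0.0
	for y := -half; y <= half; y++ {
		for x := -half; x <= half; x++ {
			v := math.Exp(-(float64(x*x+y*y) / (2 * sigma * sigma)))
			k[(y+half)*size+(x+half)] = v
			sum += v
		}
	}
	for i := range k {
		k[i] /= sum // normalize so the weights sum to 1
	}
	return k
}

func main() {
	k := gaussianKernel(5, 1.0)
	total := 0.0
	for _, v := range k {
		total += v
	}
	// The center weight dominates; the total is 1, so convolving a constant
	// image leaves it unchanged (no brightening or darkening).
	fmt.Printf("center=%.4f sum=%.4f\n", k[12], total)
}
```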
+
+// createImageFromGrayscale builds an image.Gray from raw grayscale data.
+func createImageFromGrayscale(data []byte, width, height int) image.Image {
+ img := image.NewGray(image.Rect(0, 0, width, height))
+ for y := 0; y < height; y++ {
+ rowStart := y * width
+ rowEnd := rowStart + width
+ if rowEnd > len(data) {
+ rowEnd = len(data)
+ }
+ copy(img.Pix[y*img.Stride:], data[rowStart:rowEnd])
+ }
+ return img
+}
+
+// scaledImage holds scaled grayscale image data.
+type scaledImage struct {
+ data []byte
+ width int
+ height int
+}
+
+// scaleImage scales an image by the given factor using bilinear interpolation.
+func scaleImage(img image.Image, origWidth, origHeight int, scale float64) image.Image {
+ if scale == 1.0 {
+ return img
+ }
+
+ newWidth := int(float64(origWidth) * scale)
+ newHeight := int(float64(origHeight) * scale)
+
+ if newWidth < 50 || newHeight < 50 {
+ return nil
+ }
+ if newWidth > 1500 || newHeight > 1500 {
+ return nil
+ }
+
+ scaled := image.NewRGBA(image.Rect(0, 0, newWidth, newHeight))
+ bounds := img.Bounds()
+
+ for y := 0; y < newHeight; y++ {
+ for x := 0; x < newWidth; x++ {
+ srcX := float64(x) / scale
+ srcY := float64(y) / scale
+
+ x1 := int(srcX)
+ y1 := int(srcY)
+ x2 := x1 + 1
+ y2 := y1 + 1
+
+ if x2 >= bounds.Dx() {
+ x2 = bounds.Dx() - 1
+ }
+ if y2 >= bounds.Dy() {
+ y2 = bounds.Dy() - 1
+ }
+
+ fx := srcX - float64(x1)
+ fy := srcY - float64(y1)
+
+ c11 := getPixelColor(img, bounds.Min.X+x1, bounds.Min.Y+y1)
+ c12 := getPixelColor(img, bounds.Min.X+x1, bounds.Min.Y+y2)
+ c21 := getPixelColor(img, bounds.Min.X+x2, bounds.Min.Y+y1)
+ c22 := getPixelColor(img, bounds.Min.X+x2, bounds.Min.Y+y2)
+
+ r := uint8(float64(c11.R)*(1-fx)*(1-fy) + float64(c21.R)*fx*(1-fy) + float64(c12.R)*(1-fx)*fy + float64(c22.R)*fx*fy)
+ g := uint8(float64(c11.G)*(1-fx)*(1-fy) + float64(c21.G)*fx*(1-fy) + float64(c12.G)*(1-fx)*fy + float64(c22.G)*fx*fy)
+ b := uint8(float64(c11.B)*(1-fx)*(1-fy) + float64(c21.B)*fx*(1-fy) + float64(c12.B)*(1-fx)*fy + float64(c22.B)*fx*fy)
+
+ scaled.Set(x, y, color.RGBA{R: r, G: g, B: b, A: 255})
+ }
+ }
+
+ return scaled
+}
+
+// getPixelColor returns the 8-bit RGBA color of the pixel at (x, y).
+func getPixelColor(img image.Image, x, y int) color.RGBA {
+ r, g, b, _ := img.At(x, y).RGBA()
+ return color.RGBA{
+ R: uint8(r >> 8),
+ G: uint8(g >> 8),
+ B: uint8(b >> 8),
+ A: 255,
+ }
+}
+
+// scaleGrayscaleImage scales grayscale data by the given factor using bilinear interpolation.
+func scaleGrayscaleImage(data []byte, origWidth, origHeight int, scale float64) *scaledImage {
+ if scale == 1.0 {
+ return &scaledImage{data: data, width: origWidth, height: origHeight}
+ }
+
+ newWidth := int(float64(origWidth) * scale)
+ newHeight := int(float64(origHeight) * scale)
+
+ if newWidth < 21 || newHeight < 21 || newWidth > 2000 || newHeight > 2000 {
+ return nil
+ }
+
+ scaled := make([]byte, newWidth*newHeight)
+
+ for y := 0; y < newHeight; y++ {
+ for x := 0; x < newWidth; x++ {
+ srcX := float64(x) / scale
+ srcY := float64(y) / scale
+
+ x1 := int(srcX)
+ y1 := int(srcY)
+ x2 := x1 + 1
+ y2 := y1 + 1
+
+ if x2 >= origWidth {
+ x2 = origWidth - 1
+ }
+ if y2 >= origHeight {
+ y2 = origHeight - 1
+ }
+
+ fx := srcX - float64(x1)
+ fy := srcY - float64(y1)
+
+ v11 := float64(data[y1*origWidth+x1])
+ v12 := float64(data[y2*origWidth+x1])
+ v21 := float64(data[y1*origWidth+x2])
+ v22 := float64(data[y2*origWidth+x2])
+
+ val := v11*(1-fx)*(1-fy) + v21*fx*(1-fy) + v12*(1-fx)*fy + v22*fx*fy
+ scaled[y*newWidth+x] = uint8(val)
+ }
+ }
+
+ return &scaledImage{data: scaled, width: newWidth, height: newHeight}
+}
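Both scalers above use the same bilinear weighting: each output pixel blends its four surrounding source pixels by the fractional offsets fx and fy. A minimal sketch of just that sampling step (the helper name `bilinear` is ours; edge clamping of x2/y2 is omitted since the sample stays inside the 2x2 image):

```go
package main

import "fmt"

// bilinear samples grayscale data at fractional coordinates, weighting the
// four surrounding pixels exactly as scaleGrayscaleImage above does.
func bilinear(data []byte, width int, srcX, srcY float64) uint8 {
	x1, y1 := int(srcX), int(srcY)
	x2, y2 := x1+1, y1+1
	fx, fy := srcX-float64(x1), srcY-float64(y1)
	v11 := float64(data[y1*width+x1]) // top-left
	v12 := float64(data[y2*width+x1]) // bottom-left
	v21 := float64(data[y1*width+x2]) // top-right
	v22 := float64(data[y2*width+x2]) // bottom-right
	return uint8(v11*(1-fx)*(1-fy) + v21*fx*(1-fy) + v12*(1-fx)*fy + v22*fx*fy)
}

func main() {
	// 2x2 image laid out row-major: 0 100 / 200 40.
	img := []byte{0, 100, 200, 40}
	// The exact center (fx=fy=0.5) averages all four corners.
	fmt.Println(bilinear(img, 2, 0.5, 0.5)) // (0+100+200+40)/4 = 85
}
```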
+
+// Point represents a point.
+type Point struct {
+ X, Y int
+}
+
+// Corner represents a detected finder-pattern corner.
+type Corner struct {
+ Center Point
+ Size int
+ Type int
+}
+
+// detectCornersFast scans the image for QR finder patterns, returning at most three corners.
+func detectCornersFast(data []byte, width, height int) []Corner {
+ var corners []Corner
+
+	scanStep := min(width, height) / 80
+	if scanStep < 2 {
+		scanStep = 2
+	}
+
+ for y := scanStep * 3; y < height-scanStep*3; y += scanStep {
+ for x := scanStep * 3; x < width-scanStep*3; x += scanStep {
+ if isFinderPatternFast(data, width, height, x, y) {
+ corners = append(corners, Corner{
+ Center: Point{X: x, Y: y},
+ Size: 20,
+ })
+ if len(corners) >= 3 {
+ return corners
+ }
+ }
+ }
+ }
+
+ return corners
+}
+
+// isFinderPatternFast runs a cheap finder-pattern check around (x, y).
+func isFinderPatternFast(data []byte, width, height, x, y int) bool {
+ centerIdx := y*width + x
+ if centerIdx < 0 || centerIdx >= len(data) {
+ return false
+ }
+ if data[centerIdx] > 180 {
+ return false
+ }
+
+ radius := min(width, height) / 15
+ if radius < 3 {
+ radius = 3
+ }
+ if radius > 30 {
+ radius = 30
+ }
+
+ directions := []struct{ dx, dy int }{{radius, 0}, {-radius, 0}, {0, radius}, {0, -radius}}
+ blackCount := 0
+ whiteCount := 0
+
+ for _, dir := range directions {
+ nx := x + dir.dx
+ ny := y + dir.dy
+ if nx >= 0 && nx < width && ny >= 0 && ny < height {
+ idx := ny*width + nx
+ if idx >= 0 && idx < len(data) {
+ if data[idx] < 128 {
+ blackCount++
+ } else {
+ whiteCount++
+ }
+ }
+ }
+ }
+
+ return blackCount >= 2 && whiteCount >= 2
+}
+
+// Helper functions
+func max(a, b int) int {
+ if a > b {
+ return a
+ }
+ return b
+}
+
+func min(a, b int) int {
+ if a < b {
+ return a
+ }
+ return b
+}
diff --git a/internal/rpc/msg/qrcode_detect.go b/internal/rpc/msg/qrcode_detect.go
new file mode 100644
index 0000000..1bc7e1f
--- /dev/null
+++ b/internal/rpc/msg/qrcode_detect.go
@@ -0,0 +1,191 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ _ "image/jpeg"
+ _ "image/png"
+ "io"
+ "net/http"
+ "os"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+)
+
+// PictureElem is used to parse the content of a picture message.
+type PictureElem struct {
+ SourcePicture struct {
+ URL string `json:"url"`
+ } `json:"sourcePicture"`
+ BigPicture struct {
+ URL string `json:"url"`
+ } `json:"bigPicture"`
+ SnapshotPicture struct {
+ URL string `json:"url"`
+ } `json:"snapshotPicture"`
+}
+
+// checkImageContainsQRCode checks whether a picture message contains a QR code.
+// userType: 0 - regular user (may not send images containing QR codes); 1 - privileged user (may send them).
+func (m *msgServer) checkImageContainsQRCode(ctx context.Context, msgData *sdkws.MsgData, userType int32) error {
+	// Users with userType=1 may send images containing QR codes; skip detection.
+ if userType == 1 {
+ return nil
+ }
+
+	// Only picture messages are checked.
+ if msgData.ContentType != constant.Picture {
+ return nil
+ }
+
+	// Parse the picture message content.
+ var pictureElem PictureElem
+ if err := json.Unmarshal(msgData.Content, &pictureElem); err != nil {
+		// If parsing fails, log a warning but do not block the message.
+ log.ZWarn(ctx, "failed to parse picture message", err, "content", string(msgData.Content))
+ return nil
+ }
+
+	// Pick the image URL: prefer the source picture, then the big picture, then the snapshot.
+ imageURL := pictureElem.SourcePicture.URL
+ if imageURL == "" {
+ imageURL = pictureElem.BigPicture.URL
+ }
+ if imageURL == "" {
+ imageURL = pictureElem.SnapshotPicture.URL
+ }
+ if imageURL == "" {
+		// No valid image URL; detection is impossible.
+ log.ZWarn(ctx, "no valid image URL found in picture message", nil, "pictureElem", pictureElem)
+ return nil
+ }
+
+	// Download the image and run QR code detection.
+ hasQRCode, err := m.detectQRCodeInImage(ctx, imageURL, "")
+ if err != nil {
+		// On detection failure, log the error but do not block (avoids false interceptions).
+ log.ZWarn(ctx, "QR code detection failed", err, "imageURL", imageURL)
+ return nil
+ }
+
+ if hasQRCode {
+		log.ZWarn(ctx, "QR code detected, rejecting message", nil, "imageURL", imageURL, "userType", userType)
+		return servererrs.ErrImageContainsQRCode.WrapMsg("users with userType=0 may not send images containing QR codes")
+ }
+
+ return nil
+}
+
+// detectQRCodeInImage downloads the image and checks whether it contains a QR code.
+func (m *msgServer) detectQRCodeInImage(ctx context.Context, imageURL string, logPrefix string) (bool, error) {
+	// Create an HTTP client with a timeout.
+ client := &http.Client{
+ Timeout: 5 * time.Second,
+ }
+
+	// Download the image.
+ req, err := http.NewRequestWithContext(ctx, http.MethodGet, imageURL, nil)
+ if err != nil {
+		log.ZError(ctx, "failed to create HTTP request", err, "imageURL", imageURL)
+ return false, errs.WrapMsg(err, "failed to create request")
+ }
+
+ resp, err := client.Do(req)
+ if err != nil {
+		log.ZError(ctx, "failed to download image", err, "imageURL", imageURL)
+ return false, errs.WrapMsg(err, "failed to download image")
+ }
+ defer resp.Body.Close()
+
+ if resp.StatusCode != http.StatusOK {
+		log.ZError(ctx, "unexpected status code when downloading image", nil, "statusCode", resp.StatusCode, "imageURL", imageURL)
+ return false, errs.WrapMsg(fmt.Errorf("unexpected status code: %d", resp.StatusCode), "failed to download image")
+ }
+
+	// Limit the image size (10MB max).
+ const maxImageSize = 10 * 1024 * 1024
+ limitedReader := io.LimitReader(resp.Body, maxImageSize+1)
+
+	// Create a temporary file.
+ tmpFile, err := os.CreateTemp("", "qrcode_detect_*.tmp")
+ if err != nil {
+		log.ZError(ctx, "failed to create temp file", err)
+ return false, errs.WrapMsg(err, "failed to create temp file")
+ }
+ tmpFilePath := tmpFile.Name()
+
+	// Ensure the temp file is removed once detection finishes, whether it succeeds or fails.
+ defer func() {
+		// Make sure the file is closed before removing it.
+ if tmpFile != nil {
+ _ = tmpFile.Close()
+ }
+		// Remove the temp file, ignoring not-exist errors.
+ if err := os.Remove(tmpFilePath); err != nil && !os.IsNotExist(err) {
+			log.ZWarn(ctx, "failed to remove temp file", err, "tmpFile", tmpFilePath)
+ }
+ }()
+
+	// Save the image to the temp file.
+ written, err := io.Copy(tmpFile, limitedReader)
+ if err != nil {
+		log.ZError(ctx, "failed to save image to temp file", err, "tmpFile", tmpFilePath)
+ return false, errs.WrapMsg(err, "failed to save image")
+ }
+
+	// Close the file so it can be read back later.
+ if err := tmpFile.Close(); err != nil {
+		log.ZError(ctx, "failed to close temp file", err, "tmpFile", tmpFilePath)
+ return false, errs.WrapMsg(err, "failed to close temp file")
+ }
+
+	// Check the file size.
+ if written > maxImageSize {
+		log.ZWarn(ctx, "image too large", nil, "size", written, "maxSize", maxImageSize)
+ return false, errs.WrapMsg(fmt.Errorf("image too large: %d bytes", written), "image size exceeds limit")
+ }
+
+	// Detect the QR code using the optimized decoder.
+ hasQRCode, err := m.detectQRCodeWithDecoder(ctx, tmpFilePath, "")
+ if err != nil {
+		log.ZError(ctx, "QR code detection failed", err, "tmpFile", tmpFilePath)
+ return false, err
+ }
+
+ return hasQRCode, nil
+}
+
+// detectQRCodeWithDecoder detects QR codes using the optimized decoder.
+func (m *msgServer) detectQRCodeWithDecoder(ctx context.Context, imagePath string, logPrefix string) (bool, error) {
+	// Use the custom decoder (the Quirc decoder dependency has been removed).
+ customDecoder := &CustomQRDecoder{}
+
+	// Run the decode.
+ hasQRCode, err := customDecoder.Decode(ctx, imagePath, logPrefix)
+ if err != nil {
+		log.ZError(ctx, "decoder detection failed", err, "decoder", customDecoder.Name())
+ return false, err
+ }
+
+ return hasQRCode, nil
+}
diff --git a/internal/rpc/msg/revoke.go b/internal/rpc/msg/revoke.go
new file mode 100644
index 0000000..6dd438b
--- /dev/null
+++ b/internal/rpc/msg/revoke.go
@@ -0,0 +1,134 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.RevokeMsgResp, error) {
+ if req.UserID == "" {
+ return nil, errs.ErrArgs.WrapMsg("user_id is empty")
+ }
+ if req.ConversationID == "" {
+ return nil, errs.ErrArgs.WrapMsg("conversation_id is empty")
+ }
+ if req.Seq < 0 {
+ return nil, errs.ErrArgs.WrapMsg("seq is invalid")
+ }
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ user, err := m.UserLocalCache.GetUserInfo(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ _, _, msgs, err := m.MsgDatabase.GetMsgBySeqs(ctx, req.UserID, req.ConversationID, []int64{req.Seq})
+ if err != nil {
+ return nil, err
+ }
+ if len(msgs) == 0 || msgs[0] == nil {
+ return nil, errs.ErrRecordNotFound.WrapMsg("msg not found")
+ }
+ if msgs[0].ContentType == constant.MsgRevokeNotification {
+		return nil, servererrs.ErrMsgAlreadyRevoke.WrapMsg("msg already revoked")
+ }
+
+ data, _ := json.Marshal(msgs[0])
+ log.ZDebug(ctx, "GetMsgBySeqs", "conversationID", req.ConversationID, "seq", req.Seq, "msg", string(data))
+ var role int32
+ if !authverify.IsAdmin(ctx) {
+ sessionType := msgs[0].SessionType
+ switch sessionType {
+ case constant.SingleChatType:
+ if err := authverify.CheckAccess(ctx, msgs[0].SendID); err != nil {
+ return nil, err
+ }
+ role = user.AppMangerLevel
+ case constant.ReadGroupChatType:
+ members, err := m.GroupLocalCache.GetGroupMemberInfoMap(ctx, msgs[0].GroupID, datautil.Distinct([]string{req.UserID, msgs[0].SendID}))
+ if err != nil {
+ return nil, err
+ }
+ if req.UserID != msgs[0].SendID {
+ switch members[req.UserID].RoleLevel {
+ case constant.GroupOwner:
+ case constant.GroupAdmin:
+ if sendMember, ok := members[msgs[0].SendID]; ok {
+ if sendMember.RoleLevel != constant.GroupOrdinaryUsers {
+ return nil, errs.ErrNoPermission.WrapMsg("no permission")
+ }
+ }
+ default:
+ return nil, errs.ErrNoPermission.WrapMsg("no permission")
+ }
+ }
+ if member := members[req.UserID]; member != nil {
+ role = member.RoleLevel
+ }
+ default:
+ return nil, errs.ErrInternalServer.WrapMsg("msg sessionType not supported", "sessionType", sessionType)
+ }
+ }
+ now := time.Now().UnixMilli()
+ err = m.MsgDatabase.RevokeMsg(ctx, req.ConversationID, req.Seq, &model.RevokeModel{
+ Role: role,
+ UserID: req.UserID,
+ Nickname: user.Nickname,
+ Time: now,
+ })
+ if err != nil {
+ return nil, err
+ }
+ revokerUserID := mcontext.GetOpUserID(ctx)
+ var flag bool
+
+ if len(m.config.Share.IMAdminUser.UserIDs) > 0 {
+ flag = datautil.Contain(revokerUserID, m.adminUserIDs...)
+ }
+ tips := sdkws.RevokeMsgTips{
+ RevokerUserID: revokerUserID,
+ ClientMsgID: msgs[0].ClientMsgID,
+ RevokeTime: now,
+ Seq: req.Seq,
+ SesstionType: msgs[0].SessionType,
+ ConversationID: req.ConversationID,
+ IsAdminRevoke: flag,
+ }
+ var recvID string
+ if msgs[0].SessionType == constant.ReadGroupChatType {
+ recvID = msgs[0].GroupID
+ } else {
+ recvID = msgs[0].RecvID
+ }
+ m.notificationSender.NotificationWithSessionType(ctx, req.UserID, recvID, constant.MsgRevokeNotification, msgs[0].SessionType, &tips)
+ webhookCfg := m.webhookConfig()
+ m.webhookAfterRevokeMsg(ctx, &webhookCfg.AfterRevokeMsg, req)
+ return &msg.RevokeMsgResp{}, nil
+}
diff --git a/internal/rpc/msg/send.go b/internal/rpc/msg/send.go
new file mode 100644
index 0000000..b9cb7ff
--- /dev/null
+++ b/internal/rpc/msg/send.go
@@ -0,0 +1,231 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/conversationutil"
+ "git.imall.cloud/openim/protocol/constant"
+ pbconv "git.imall.cloud/openim/protocol/conversation"
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (m *msgServer) SendMsg(ctx context.Context, req *pbmsg.SendMsgReq) (*pbmsg.SendMsgResp, error) {
+ if req.MsgData == nil {
+ return nil, errs.ErrArgs.WrapMsg("msgData is nil")
+ }
+ if err := authverify.CheckAccess(ctx, req.MsgData.SendID); err != nil {
+ return nil, err
+ }
+	before := new(*sdkws.MsgData) // captures the message state prior to webhook modification
+ resp, err := m.sendMsg(ctx, req, before)
+ if err != nil {
+ return nil, err
+ }
+	if *before != nil && !proto.Equal(*before, req.MsgData) {
+ resp.Modify = req.MsgData
+ }
+ return resp, nil
+}
+
+func (m *msgServer) sendMsg(ctx context.Context, req *pbmsg.SendMsgReq, before **sdkws.MsgData) (*pbmsg.SendMsgResp, error) {
+ m.encapsulateMsgData(req.MsgData)
+ switch req.MsgData.SessionType {
+ case constant.SingleChatType:
+ return m.sendMsgSingleChat(ctx, req, before)
+ case constant.NotificationChatType:
+ return m.sendMsgNotification(ctx, req, before)
+ case constant.ReadGroupChatType:
+ return m.sendMsgGroupChat(ctx, req, before)
+ default:
+ return nil, errs.ErrArgs.WrapMsg("unknown sessionType")
+ }
+}
+
+func (m *msgServer) sendMsgGroupChat(ctx context.Context, req *pbmsg.SendMsgReq, before **sdkws.MsgData) (resp *pbmsg.SendMsgResp, err error) {
+ if err = m.messageVerification(ctx, req); err != nil {
+ prommetrics.GroupChatMsgProcessFailedCounter.Inc()
+ return nil, err
+ }
+
+ webhookCfg := m.webhookConfig()
+
+ if err = m.webhookBeforeSendGroupMsg(ctx, &webhookCfg.BeforeSendGroupMsg, req); err != nil {
+ return nil, err
+ }
+ if err := m.webhookBeforeMsgModify(ctx, &webhookCfg.BeforeMsgModify, req, before); err != nil {
+ return nil, err
+ }
+ err = m.MsgDatabase.MsgToMQ(ctx, conversationutil.GenConversationUniqueKeyForGroup(req.MsgData.GroupID), req.MsgData)
+ if err != nil {
+ return nil, err
+ }
+ if req.MsgData.ContentType == constant.AtText {
+ go m.setConversationAtInfo(ctx, req.MsgData)
+ }
+
+	// Fire the after-send webhook with the config fetched above (preferring the config manager).
+ m.webhookAfterSendGroupMsg(ctx, &webhookCfg.AfterSendGroupMsg, req)
+
+ prommetrics.GroupChatMsgProcessSuccessCounter.Inc()
+ resp = &pbmsg.SendMsgResp{}
+ resp.SendTime = req.MsgData.SendTime
+ resp.ServerMsgID = req.MsgData.ServerMsgID
+ resp.ClientMsgID = req.MsgData.ClientMsgID
+ return resp, nil
+}
+
+func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgData) {
+
+ log.ZDebug(nctx, "setConversationAtInfo", "msg", msg)
+
+ defer func() {
+ if r := recover(); r != nil {
+ log.ZPanic(nctx, "setConversationAtInfo Panic", errs.ErrPanic(r))
+ }
+ }()
+
+ ctx := mcontext.NewCtx("@@@" + mcontext.GetOperationID(nctx))
+
+ var atUserID []string
+
+ conversation := &pbconv.ConversationReq{
+ ConversationID: msgprocessor.GetConversationIDByMsg(msg),
+ ConversationType: msg.SessionType,
+ GroupID: msg.GroupID,
+ }
+ memberUserIDList, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, msg.GroupID)
+ if err != nil {
+ log.ZWarn(ctx, "GetGroupMemberIDs", err)
+ return
+ }
+
+ tagAll := datautil.Contain(constant.AtAllString, msg.AtUserIDList...)
+ if tagAll {
+
+ memberUserIDList = datautil.DeleteElems(memberUserIDList, msg.SendID)
+
+ atUserID = datautil.Single([]string{constant.AtAllString}, msg.AtUserIDList)
+
+ if len(atUserID) == 0 { // just @everyone
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
+ } else { // @Everyone and @other people
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAllAtMe}
+ atUserID = datautil.SliceIntersectFuncs(atUserID, memberUserIDList, func(a string) string { return a }, func(b string) string {
+ return b
+ })
+ if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
+ log.ZWarn(ctx, "SetConversations", err, "userID", atUserID, "conversation", conversation)
+ }
+ memberUserIDList = datautil.Single(atUserID, memberUserIDList)
+ }
+
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
+ if err := m.conversationClient.SetConversations(ctx, memberUserIDList, conversation); err != nil {
+ log.ZWarn(ctx, "SetConversations", err, "userID", memberUserIDList, "conversation", conversation)
+ }
+
+ return
+ }
+ atUserID = datautil.SliceIntersectFuncs(msg.AtUserIDList, memberUserIDList, func(a string) string { return a }, func(b string) string {
+ return b
+ })
+ conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtMe}
+
+ if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
+ log.ZWarn(ctx, "SetConversations", err, atUserID, conversation)
+ }
+}
+
+func (m *msgServer) sendMsgNotification(ctx context.Context, req *pbmsg.SendMsgReq, before **sdkws.MsgData) (resp *pbmsg.SendMsgResp, err error) {
+ if err := m.MsgDatabase.MsgToMQ(ctx, conversationutil.GenConversationUniqueKeyForSingle(req.MsgData.SendID, req.MsgData.RecvID), req.MsgData); err != nil {
+ return nil, err
+ }
+ resp = &pbmsg.SendMsgResp{
+ ServerMsgID: req.MsgData.ServerMsgID,
+ ClientMsgID: req.MsgData.ClientMsgID,
+ SendTime: req.MsgData.SendTime,
+ }
+ return resp, nil
+}
+
+func (m *msgServer) sendMsgSingleChat(ctx context.Context, req *pbmsg.SendMsgReq, before **sdkws.MsgData) (resp *pbmsg.SendMsgResp, err error) {
+ if err := m.messageVerification(ctx, req); err != nil {
+ return nil, err
+ }
+ webhookCfg := m.webhookConfig()
+ isSend := true
+ isNotification := msgprocessor.IsNotificationByMsg(req.MsgData)
+ if !isNotification {
+ isSend, err = m.modifyMessageByUserMessageReceiveOpt(authverify.WithTempAdmin(ctx), req.MsgData.RecvID, conversationutil.GenConversationIDForSingle(req.MsgData.SendID, req.MsgData.RecvID), constant.SingleChatType, req)
+ if err != nil {
+ return nil, err
+ }
+ }
+ if !isSend {
+ prommetrics.SingleChatMsgProcessFailedCounter.Inc()
+ return nil, errs.ErrArgs.WrapMsg("message is not sent")
+ } else {
+ if err := m.webhookBeforeMsgModify(ctx, &webhookCfg.BeforeMsgModify, req, before); err != nil {
+ return nil, err
+ }
+ if err := m.MsgDatabase.MsgToMQ(ctx, conversationutil.GenConversationUniqueKeyForSingle(req.MsgData.SendID, req.MsgData.RecvID), req.MsgData); err != nil {
+ prommetrics.SingleChatMsgProcessFailedCounter.Inc()
+ return nil, err
+ }
+
+ m.webhookAfterSendSingleMsg(ctx, &webhookCfg.AfterSendSingleMsg, req)
+ prommetrics.SingleChatMsgProcessSuccessCounter.Inc()
+ return &pbmsg.SendMsgResp{
+ ServerMsgID: req.MsgData.ServerMsgID,
+ ClientMsgID: req.MsgData.ClientMsgID,
+ SendTime: req.MsgData.SendTime,
+ }, nil
+ }
+}
+
+func (m *msgServer) SendSimpleMsg(ctx context.Context, req *pbmsg.SendSimpleMsgReq) (*pbmsg.SendSimpleMsgResp, error) {
+ if req.MsgData == nil {
+ return nil, errs.ErrArgs.WrapMsg("msg data is nil")
+ }
+ sender, err := m.UserLocalCache.GetUserInfo(ctx, req.MsgData.SendID)
+ if err != nil {
+ return nil, err
+ }
+ req.MsgData.SenderFaceURL = sender.FaceURL
+ req.MsgData.SenderNickname = sender.Nickname
+ resp, err := m.SendMsg(ctx, &pbmsg.SendMsgReq{MsgData: req.MsgData})
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.SendSimpleMsgResp{
+ ServerMsgID: resp.ServerMsgID,
+ ClientMsgID: resp.ClientMsgID,
+ SendTime: resp.SendTime,
+ Modify: resp.Modify,
+ }, nil
+}
diff --git a/internal/rpc/msg/seq.go b/internal/rpc/msg/seq.go
new file mode 100644
index 0000000..b63d306
--- /dev/null
+++ b/internal/rpc/msg/seq.go
@@ -0,0 +1,105 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "errors"
+ "sort"
+
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "github.com/redis/go-redis/v9"
+)
+
+func (m *msgServer) GetConversationMaxSeq(ctx context.Context, req *pbmsg.GetConversationMaxSeqReq) (*pbmsg.GetConversationMaxSeqResp, error) {
+ maxSeq, err := m.MsgDatabase.GetMaxSeq(ctx, req.ConversationID)
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return nil, err
+ }
+ return &pbmsg.GetConversationMaxSeqResp{MaxSeq: maxSeq}, nil
+}
+
+func (m *msgServer) GetMaxSeqs(ctx context.Context, req *pbmsg.GetMaxSeqsReq) (*pbmsg.SeqsInfoResp, error) {
+ maxSeqs, err := m.MsgDatabase.GetMaxSeqs(ctx, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.SeqsInfoResp{MaxSeqs: maxSeqs}, nil
+}
+
+func (m *msgServer) GetHasReadSeqs(ctx context.Context, req *pbmsg.GetHasReadSeqsReq) (*pbmsg.SeqsInfoResp, error) {
+ hasReadSeqs, err := m.MsgDatabase.GetHasReadSeqs(ctx, req.UserID, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.SeqsInfoResp{MaxSeqs: hasReadSeqs}, nil
+}
+
+func (m *msgServer) GetMsgByConversationIDs(ctx context.Context, req *pbmsg.GetMsgByConversationIDsReq) (*pbmsg.GetMsgByConversationIDsResp, error) {
+ Msgs, err := m.MsgDatabase.FindOneByDocIDs(ctx, req.ConversationIDs, req.MaxSeqs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.GetMsgByConversationIDsResp{MsgDatas: Msgs}, nil
+}
+
+func (m *msgServer) SetUserConversationsMinSeq(ctx context.Context, req *pbmsg.SetUserConversationsMinSeqReq) (*pbmsg.SetUserConversationsMinSeqResp, error) {
+ for _, userID := range req.UserIDs {
+ if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, map[string]int64{req.ConversationID: req.Seq}); err != nil {
+ return nil, err
+ }
+ }
+ return &pbmsg.SetUserConversationsMinSeqResp{}, nil
+}
+
+func (m *msgServer) GetActiveConversation(ctx context.Context, req *pbmsg.GetActiveConversationReq) (*pbmsg.GetActiveConversationResp, error) {
+ res, err := m.MsgDatabase.GetCacheMaxSeqWithTime(ctx, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ conversations := make([]*pbmsg.ActiveConversation, 0, len(res))
+ for conversationID, val := range res {
+ conversations = append(conversations, &pbmsg.ActiveConversation{
+ MaxSeq: val.Seq,
+ LastTime: val.Time,
+ ConversationID: conversationID,
+ })
+ }
+ if req.Limit > 0 {
+ sort.Sort(activeConversations(conversations))
+ if len(conversations) > int(req.Limit) {
+ conversations = conversations[:req.Limit]
+ }
+ }
+ return &pbmsg.GetActiveConversationResp{Conversations: conversations}, nil
+}
+
+func (m *msgServer) SetUserConversationMaxSeq(ctx context.Context, req *pbmsg.SetUserConversationMaxSeqReq) (*pbmsg.SetUserConversationMaxSeqResp, error) {
+ for _, userID := range req.OwnerUserID {
+ if err := m.MsgDatabase.SetUserConversationsMaxSeq(ctx, req.ConversationID, userID, req.MaxSeq); err != nil {
+ return nil, err
+ }
+ }
+ return &pbmsg.SetUserConversationMaxSeqResp{}, nil
+}
+
+func (m *msgServer) SetUserConversationMinSeq(ctx context.Context, req *pbmsg.SetUserConversationMinSeqReq) (*pbmsg.SetUserConversationMinSeqResp, error) {
+ for _, userID := range req.OwnerUserID {
+ if err := m.MsgDatabase.SetUserConversationsMinSeq(ctx, req.ConversationID, userID, req.MinSeq); err != nil {
+ return nil, err
+ }
+ }
+ return &pbmsg.SetUserConversationMinSeqResp{}, nil
+}
diff --git a/internal/rpc/msg/server.go b/internal/rpc/msg/server.go
new file mode 100644
index 0000000..cc51a74
--- /dev/null
+++ b/internal/rpc/msg/server.go
@@ -0,0 +1,218 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/mqbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "google.golang.org/grpc"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+)
+
+type MessageInterceptorFunc func(ctx context.Context, globalConfig *Config, req *msg.SendMsgReq) (*sdkws.MsgData, error)
+
+// MessageInterceptorChain defines a chain of message interceptor functions.
+type MessageInterceptorChain []MessageInterceptorFunc
+
+type Config struct {
+ RpcConfig config.Msg
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ KafkaConfig config.Kafka
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+// msgServer encapsulates dependencies required for message handling.
+type msgServer struct {
+ msg.UnimplementedMsgServer
+ RegisterCenter discovery.Conn // Service discovery registry for service registration.
+ MsgDatabase controller.CommonMsgDatabase // Interface for message database operations.
+ UserLocalCache *rpccache.UserLocalCache // Local cache for user data.
+ FriendLocalCache *rpccache.FriendLocalCache // Local cache for friend data.
+ GroupLocalCache *rpccache.GroupLocalCache // Local cache for group data.
+ ConversationLocalCache *rpccache.ConversationLocalCache // Local cache for conversation data.
+ Handlers MessageInterceptorChain // Chain of handlers for processing messages.
+ notificationSender *notification.NotificationSender // RPC client for sending notifications.
+ msgNotificationSender *MsgNotificationSender // RPC client for sending msg notifications.
+ config *Config // Global configuration settings.
+ webhookClient *webhook.Client
+	webhookConfigManager *webhook.ConfigManager // Webhook config manager (supports loading config from the database).
+ conversationClient *rpcli.ConversationClient
+ redPacketDB database.RedPacket // Database for red packet records.
+ redPacketReceiveDB database.RedPacketReceive // Database for red packet receive records.
+
+ adminUserIDs []string
+}
+
+func (m *msgServer) addInterceptorHandler(interceptorFunc ...MessageInterceptorFunc) {
+	m.Handlers = append(m.Handlers, interceptorFunc...)
+}
+
+// webhookConfig returns the latest webhook config from the client/manager with fallback.
+func (m *msgServer) webhookConfig() *config.Webhooks {
+ if m.webhookClient == nil {
+ return &m.config.WebhooksConfig
+ }
+ return m.webhookClient.GetConfig(&m.config.WebhooksConfig)
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ builder := mqbuild.NewBuilder(&config.KafkaConfig)
+ redisProducer, err := builder.GetTopicProducer(ctx, config.KafkaConfig.ToRedisTopic)
+ if err != nil {
+ return err
+ }
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+ msgDocModel, err := mgo.NewMsgMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ var msgModel cache.MsgCache
+ if rdb == nil {
+ cm, err := mgo.NewCacheMgo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ msgModel = mcache.NewMsgCache(cm, msgDocModel)
+ } else {
+ msgModel = redis.NewMsgCache(rdb, msgDocModel)
+ }
+ seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ seqConversationCache := redis.NewSeqConversationCacheRedis(rdb, seqConversation)
+ seqUser, err := mgo.NewSeqUserMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ seqUserCache := redis.NewSeqUserCacheRedis(rdb, seqUser)
+ redPacketDB, err := mgo.NewRedPacketMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ redPacketReceiveDB, err := mgo.NewRedPacketReceiveMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ groupConn, err := client.GetConn(ctx, config.Discovery.RpcService.Group)
+ if err != nil {
+ return err
+ }
+ friendConn, err := client.GetConn(ctx, config.Discovery.RpcService.Friend)
+ if err != nil {
+ return err
+ }
+ conversationConn, err := client.GetConn(ctx, config.Discovery.RpcService.Conversation)
+ if err != nil {
+ return err
+ }
+ conversationClient := rpcli.NewConversationClient(conversationConn)
+ msgDatabase := controller.NewCommonMsgDatabase(msgDocModel, msgModel, seqUserCache, seqConversationCache, redisProducer)
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+
+	// Initialize the webhook config manager (supports loading config from the database).
+ var webhookClient *webhook.Client
+ var webhookConfigManager *webhook.ConfigManager
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err == nil {
+		// The SystemConfig database initialized successfully; use the config manager.
+ webhookConfigManager = webhook.NewConfigManager(systemConfigDB, &config.WebhooksConfig)
+ if err := webhookConfigManager.Start(ctx); err != nil {
+ log.ZWarn(ctx, "failed to start webhook config manager, using default config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ } else {
+ webhookClient = webhook.NewWebhookClientWithManager(webhookConfigManager)
+ }
+ } else {
+		// The SystemConfig database failed to initialize; fall back to the default config.
+ log.ZWarn(ctx, "failed to init system config db, using default webhook config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ }
+
+ s := &msgServer{
+ MsgDatabase: msgDatabase,
+ RegisterCenter: client,
+ UserLocalCache: rpccache.NewUserLocalCache(rpcli.NewUserClient(userConn), &config.LocalCacheConfig, rdb),
+ GroupLocalCache: rpccache.NewGroupLocalCache(rpcli.NewGroupClient(groupConn), &config.LocalCacheConfig, rdb),
+ ConversationLocalCache: rpccache.NewConversationLocalCache(conversationClient, &config.LocalCacheConfig, rdb),
+ FriendLocalCache: rpccache.NewFriendLocalCache(rpcli.NewRelationClient(friendConn), &config.LocalCacheConfig, rdb),
+ config: config,
+ webhookClient: webhookClient,
+ webhookConfigManager: webhookConfigManager,
+ conversationClient: conversationClient,
+ redPacketDB: redPacketDB,
+ redPacketReceiveDB: redPacketReceiveDB,
+ adminUserIDs: config.Share.IMAdminUser.UserIDs,
+ }
+
+ s.notificationSender = notification.NewNotificationSender(&config.NotificationConfig, notification.WithLocalSendMsg(s.SendMsg))
+ s.msgNotificationSender = NewMsgNotificationSender(config, notification.WithLocalSendMsg(s.SendMsg))
+
+ msg.RegisterMsgServer(server, s)
+
+ return nil
+}
+
+func (m *msgServer) conversationAndGetRecvID(conversation *conversation.Conversation, userID string) string {
+ if conversation.ConversationType == constant.SingleChatType ||
+ conversation.ConversationType == constant.NotificationChatType {
+ if userID == conversation.OwnerUserID {
+ return conversation.UserID
+ } else {
+ return conversation.OwnerUserID
+ }
+ } else if conversation.ConversationType == constant.ReadGroupChatType {
+ return conversation.GroupID
+ }
+ return ""
+}
diff --git a/internal/rpc/msg/statistics.go b/internal/rpc/msg/statistics.go
new file mode 100644
index 0000000..54f53c0
--- /dev/null
+++ b/internal/rpc/msg/statistics.go
@@ -0,0 +1,107 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (m *msgServer) GetActiveUser(ctx context.Context, req *msg.GetActiveUserReq) (*msg.GetActiveUserResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ msgCount, userCount, users, dateCount, err := m.MsgDatabase.RangeUserSendCount(ctx, time.UnixMilli(req.Start), time.UnixMilli(req.End), req.Group, req.Ase, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ if err != nil {
+ return nil, err
+ }
+ var pbUsers []*msg.ActiveUser
+ if len(users) > 0 {
+ userIDs := datautil.Slice(users, func(e *model.UserCount) string { return e.UserID })
+ userMap, err := m.UserLocalCache.GetUsersInfoMap(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ pbUsers = make([]*msg.ActiveUser, 0, len(users))
+ for _, user := range users {
+ pbUser := userMap[user.UserID]
+ if pbUser == nil {
+ pbUser = &sdkws.UserInfo{
+ UserID: user.UserID,
+ Nickname: user.UserID,
+ }
+ }
+ pbUsers = append(pbUsers, &msg.ActiveUser{
+ User: pbUser,
+ Count: user.Count,
+ })
+ }
+ }
+ return &msg.GetActiveUserResp{
+ MsgCount: msgCount,
+ UserCount: userCount,
+ DateCount: dateCount,
+ Users: pbUsers,
+ }, nil
+}
+
+func (m *msgServer) GetActiveGroup(ctx context.Context, req *msg.GetActiveGroupReq) (*msg.GetActiveGroupResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ msgCount, groupCount, groups, dateCount, err := m.MsgDatabase.RangeGroupSendCount(ctx, time.UnixMilli(req.Start), time.UnixMilli(req.End), req.Ase, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ if err != nil {
+ return nil, err
+ }
+ var pbgroups []*msg.ActiveGroup
+ if len(groups) > 0 {
+ groupIDs := datautil.Slice(groups, func(e *model.GroupCount) string { return e.GroupID })
+ resp, err := m.GroupLocalCache.GetGroupInfos(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ groupMap := make(map[string]*sdkws.GroupInfo, len(groups))
+ for i, group := range groups {
+ groupMap[group.GroupID] = resp[i]
+ }
+ pbgroups = make([]*msg.ActiveGroup, 0, len(groups))
+ for _, group := range groups {
+ pbgroup := groupMap[group.GroupID]
+ if pbgroup == nil {
+ pbgroup = &sdkws.GroupInfo{
+ GroupID: group.GroupID,
+ GroupName: group.GroupID,
+ }
+ }
+ pbgroups = append(pbgroups, &msg.ActiveGroup{
+ Group: pbgroup,
+ Count: group.Count,
+ })
+ }
+ }
+ return &msg.GetActiveGroupResp{
+ MsgCount: msgCount,
+ GroupCount: groupCount,
+ DateCount: dateCount,
+ Groups: pbgroups,
+ }, nil
+}
diff --git a/internal/rpc/msg/sync_msg.go b/internal/rpc/msg/sync_msg.go
new file mode 100644
index 0000000..8cfe8eb
--- /dev/null
+++ b/internal/rpc/msg/sync_msg.go
@@ -0,0 +1,658 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/conversationutil"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+func (m *msgServer) PullMessageBySeqs(ctx context.Context, req *sdkws.PullMessageBySeqsReq) (*sdkws.PullMessageBySeqsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+	// Set a request timeout to prevent large pulls from destabilizing the pod.
+	// Users in many large groups need a longer timeout (60 seconds).
+ queryTimeout := 60 * time.Second
+ queryCtx, cancel := context.WithTimeout(ctx, queryTimeout)
+ defer cancel()
+
+	// Parameter validation: cap the number of SeqRanges and the size of each range to prevent OOM.
+ const maxSeqRanges = 100
+ const maxSeqRangeSize = 10000
+	// Cap the total response size to prevent pod OOM (estimate: ~1KB per message, at most ~50MB).
+ const maxTotalMessages = 50000
+ if len(req.SeqRanges) > maxSeqRanges {
+ log.ZWarn(ctx, "SeqRanges count exceeds limit", nil, "count", len(req.SeqRanges), "limit", maxSeqRanges)
+ return nil, errs.ErrArgs.WrapMsg("too many seq ranges", "count", len(req.SeqRanges), "limit", maxSeqRanges)
+ }
+ for _, seq := range req.SeqRanges {
+		// Sanity-check each seq range.
+ if seq.Begin < 0 || seq.End < 0 {
+ log.ZWarn(ctx, "invalid seq range: negative values", nil, "begin", seq.Begin, "end", seq.End)
+ continue
+ }
+ if seq.End < seq.Begin {
+ log.ZWarn(ctx, "invalid seq range: end < begin", nil, "begin", seq.Begin, "end", seq.End)
+ continue
+ }
+ seqRangeSize := seq.End - seq.Begin + 1
+ if seqRangeSize > maxSeqRangeSize {
+ log.ZWarn(ctx, "seq range size exceeds limit, will be limited", nil, "conversationID", seq.ConversationID, "begin", seq.Begin, "end", seq.End, "size", seqRangeSize, "limit", maxSeqRangeSize)
+ }
+ }
+ resp := &sdkws.PullMessageBySeqsResp{}
+ resp.Msgs = make(map[string]*sdkws.PullMsgs)
+ resp.NotificationMsgs = make(map[string]*sdkws.PullMsgs)
+
+ var totalMessages int
+ for _, seq := range req.SeqRanges {
+		// Check the running message total to prevent OOM.
+ if totalMessages >= maxTotalMessages {
+ log.ZWarn(ctx, "total messages count exceeds limit, stopping", nil, "totalMessages", totalMessages, "limit", maxTotalMessages)
+ break
+ }
+ if !msgprocessor.IsNotification(seq.ConversationID) {
+ conversation, err := m.ConversationLocalCache.GetConversation(queryCtx, req.UserID, seq.ConversationID)
+ if err != nil {
+ log.ZError(ctx, "GetConversation error", err, "conversationID", seq.ConversationID)
+ continue
+ }
+ minSeq, maxSeq, msgs, err := m.MsgDatabase.GetMsgBySeqsRange(queryCtx, req.UserID, seq.ConversationID,
+ seq.Begin, seq.End, seq.Num, conversation.MaxSeq)
+ if err != nil {
+				// On timeout, log with more detail.
+ if queryCtx.Err() == context.DeadlineExceeded {
+ log.ZWarn(ctx, "GetMsgBySeqsRange timeout", err, "conversationID", seq.ConversationID, "seq", seq, "timeout", queryTimeout)
+ return nil, errs.ErrInternalServer.WrapMsg("message pull timeout, data too large or query too slow")
+ }
+ log.ZWarn(ctx, "GetMsgBySeqsRange error", err, "conversationID", seq.ConversationID, "seq", seq)
+ continue
+ }
+ totalMessages += len(msgs)
+ var isEnd bool
+ switch req.Order {
+ case sdkws.PullOrder_PullOrderAsc:
+ isEnd = maxSeq <= seq.End
+ case sdkws.PullOrder_PullOrderDesc:
+ isEnd = seq.Begin <= minSeq
+ }
+ if len(msgs) == 0 {
+ log.ZWarn(ctx, "not have msgs", nil, "conversationID", seq.ConversationID, "seq", seq)
+ continue
+ }
+			// Filter mute notifications (visible only to the group owner, admins, and the muted member).
+			msgs = m.filterMuteNotificationMsgs(ctx, req.UserID, seq.ConversationID, msgs)
+			// Enrich red packet messages with claim info.
+ msgs = m.enrichRedPacketMessages(ctx, req.UserID, msgs)
+ resp.Msgs[seq.ConversationID] = &sdkws.PullMsgs{Msgs: msgs, IsEnd: isEnd}
+ } else {
+			// Cap the notification query range to prevent OOM.
+ const maxNotificationSeqRange = 5000
+ var seqs []int64
+ seqRange := seq.End - seq.Begin + 1
+ if seqRange > maxNotificationSeqRange {
+ log.ZWarn(ctx, "notification seq range too large, limiting", nil, "conversationID", seq.ConversationID, "begin", seq.Begin, "end", seq.End, "range", seqRange, "limit", maxNotificationSeqRange)
+				// Keep only the last maxNotificationSeqRange seqs.
+ for i := seq.End - maxNotificationSeqRange + 1; i <= seq.End; i++ {
+ seqs = append(seqs, i)
+ }
+ } else {
+ for i := seq.Begin; i <= seq.End; i++ {
+ seqs = append(seqs, i)
+ }
+ }
+ minSeq, maxSeq, notificationMsgs, err := m.MsgDatabase.GetMsgBySeqs(queryCtx, req.UserID, seq.ConversationID, seqs)
+ if err != nil {
+				// On timeout, log with more detail.
+ if queryCtx.Err() == context.DeadlineExceeded {
+ log.ZWarn(ctx, "GetMsgBySeqs timeout", err, "conversationID", seq.ConversationID, "seq", seq, "timeout", queryTimeout)
+ return nil, errs.ErrInternalServer.WrapMsg("notification message pull timeout, data too large or query too slow")
+ }
+ log.ZWarn(ctx, "GetMsgBySeqs error", err, "conversationID", seq.ConversationID, "seq", seq)
+ continue
+ }
+ totalMessages += len(notificationMsgs)
+ var isEnd bool
+ switch req.Order {
+ case sdkws.PullOrder_PullOrderAsc:
+ isEnd = maxSeq <= seq.End
+ case sdkws.PullOrder_PullOrderDesc:
+ isEnd = seq.Begin <= minSeq
+ }
+			if len(notificationMsgs) == 0 {
+				log.ZWarn(ctx, "not have notificationMsgs", nil, "conversationID", seq.ConversationID, "seq", seq)
+				continue
+			}
+			// Filter mute notifications (visible only to the group owner, admins, and the muted member).
+			notificationMsgs = m.filterMuteNotificationMsgs(ctx, req.UserID, seq.ConversationID, notificationMsgs)
+			// Enrich red packet messages with claim info.
+			notificationMsgs = m.enrichRedPacketMessages(ctx, req.UserID, notificationMsgs)
+			resp.NotificationMsgs[seq.ConversationID] = &sdkws.PullMsgs{Msgs: notificationMsgs, IsEnd: isEnd}
+ }
+ }
+ return resp, nil
+}
+
+func (m *msgServer) GetSeqMessage(ctx context.Context, req *msg.GetSeqMessageReq) (*msg.GetSeqMessageResp, error) {
+ resp := &msg.GetSeqMessageResp{
+ Msgs: make(map[string]*sdkws.PullMsgs),
+ NotificationMsgs: make(map[string]*sdkws.PullMsgs),
+ }
+ for _, conv := range req.Conversations {
+ isEnd, endSeq, msgs, err := m.MsgDatabase.GetMessagesBySeqWithBounds(ctx, req.UserID, conv.ConversationID, conv.Seqs, req.GetOrder())
+ if err != nil {
+ return nil, err
+ }
+		var pullMsgs *sdkws.PullMsgs
+		var ok bool
+		if conversationutil.IsNotificationConversationID(conv.ConversationID) {
+ pullMsgs, ok = resp.NotificationMsgs[conv.ConversationID]
+ if !ok {
+ pullMsgs = &sdkws.PullMsgs{}
+ resp.NotificationMsgs[conv.ConversationID] = pullMsgs
+ }
+ } else {
+ pullMsgs, ok = resp.Msgs[conv.ConversationID]
+ if !ok {
+ pullMsgs = &sdkws.PullMsgs{}
+ resp.Msgs[conv.ConversationID] = pullMsgs
+ }
+ }
+		// Filter mute notifications (visible only to the group owner, admins, and the muted member).
+		filteredMsgs := m.filterMuteNotificationMsgs(ctx, req.UserID, conv.ConversationID, msgs)
+		// Enrich red packet messages with claim info.
+ filteredMsgs = m.enrichRedPacketMessages(ctx, req.UserID, filteredMsgs)
+ pullMsgs.Msgs = append(pullMsgs.Msgs, filteredMsgs...)
+ pullMsgs.IsEnd = isEnd
+ pullMsgs.EndSeq = endSeq
+ }
+ return resp, nil
+}
+
+// filterMuteNotificationMsgs filters mute, unmute, kick, quit, invite, enter, member-info-set, and role-change notifications, keeping only the copies visible to the group owner, admins, and the affected member.
+func (m *msgServer) filterMuteNotificationMsgs(ctx context.Context, userID, conversationID string, msgs []*sdkws.MsgData) []*sdkws.MsgData {
+	// Not a group conversation; return as-is.
+ if !conversationutil.IsGroupConversationID(conversationID) {
+ return msgs
+ }
+
+	// Extract the group ID.
+ groupID := conversationutil.GetGroupIDFromConversationID(conversationID)
+ if groupID == "" {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: invalid group conversationID", nil, "conversationID", conversationID)
+ return msgs
+ }
+
+ var filteredMsgs []*sdkws.MsgData
+ var needCheckPermission bool
+
+	// First check whether any message needs filtering.
+ for _, msg := range msgs {
+ if msg.ContentType == constant.GroupMemberMutedNotification ||
+ msg.ContentType == constant.GroupMemberCancelMutedNotification ||
+ msg.ContentType == constant.MemberKickedNotification ||
+ msg.ContentType == constant.MemberQuitNotification ||
+ msg.ContentType == constant.MemberInvitedNotification ||
+ msg.ContentType == constant.MemberEnterNotification ||
+ msg.ContentType == constant.GroupMemberInfoSetNotification ||
+ msg.ContentType == constant.GroupMemberSetToAdminNotification ||
+ msg.ContentType == constant.GroupMemberSetToOrdinaryUserNotification {
+ needCheckPermission = true
+ break
+ }
+ }
+
+	// Nothing to filter; return as-is.
+ if !needCheckPermission {
+ return msgs
+ }
+
+	// A kicked user may no longer resolve as a group member, so handle that case specially:
+	// collect all kicked user IDs first so membership can be checked in the kick notifications below.
+ var allKickedUserIDs []string
+ if needCheckPermission {
+ for _, msg := range msgs {
+ if msg.ContentType == constant.MemberKickedNotification {
+ var tips sdkws.MemberKickedTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err == nil {
+ kickedUserIDs := datautil.Slice(tips.KickedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ allKickedUserIDs = append(allKickedUserIDs, kickedUserIDs...)
+ }
+ }
+ }
+ allKickedUserIDs = datautil.Distinct(allKickedUserIDs)
+ }
+
+	// Check whether the user is among the kicked users (even after removal, a user should see the notification about their own kick).
+ isKickedUserInMsgs := datautil.Contain(userID, allKickedUserIDs...)
+
+	// Get the user's role in the group (this errors if the user has already been kicked).
+	// Guard with a timeout so large-group lookups cannot block (3 seconds).
+ memberCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
+ defer cancel()
+ member, err := m.GroupLocalCache.GetGroupMember(memberCtx, groupID, userID)
+ isOwnerOrAdmin := false
+ if err != nil {
+ if memberCtx.Err() == context.DeadlineExceeded {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: GetGroupMember timeout", err, "groupID", groupID, "userID", userID)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GetGroupMember failed (user may be kicked)", err, "groupID", groupID, "userID", userID, "isKickedUserInMsgs", isKickedUserInMsgs)
+ }
+		// Lookup failed (possibly kicked or timed out); still check whether the user is the affected member.
+		// Continue processing with isOwnerOrAdmin left as false.
+		// A kicked user can still view the notification about their own removal.
+ } else {
+ isOwnerOrAdmin = member.RoleLevel == constant.GroupOwner || member.RoleLevel == constant.GroupAdmin
+ }
+
+	// Filter the messages.
+	for _, msg := range msgs {
+		if msg.ContentType == constant.GroupMemberMutedNotification {
+ var tips sdkws.GroupMemberMutedTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal GroupMemberMutedTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ mutedUserID := tips.MutedUser.UserID
+ if isOwnerOrAdmin || userID == mutedUserID {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberMutedNotification allowed", "userID", userID, "mutedUserID", mutedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberMutedNotification filtered", "userID", userID, "mutedUserID", mutedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ }
+ } else if msg.ContentType == constant.GroupMemberCancelMutedNotification {
+ var tips sdkws.GroupMemberCancelMutedTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal GroupMemberCancelMutedTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ cancelMutedUserID := tips.MutedUser.UserID
+ if isOwnerOrAdmin || userID == cancelMutedUserID {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberCancelMutedNotification allowed", "userID", userID, "cancelMutedUserID", cancelMutedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberCancelMutedNotification filtered", "userID", userID, "cancelMutedUserID", cancelMutedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ }
+ } else if msg.ContentType == constant.MemberQuitNotification {
+ var tips sdkws.MemberQuitTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal MemberQuitTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ quitUserID := tips.QuitUser.UserID
+			// Quit notifications are visible only to the group owner and admins; the quitter is not notified.
+ if isOwnerOrAdmin {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberQuitNotification allowed", "userID", userID, "quitUserID", quitUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberQuitNotification filtered", "userID", userID, "quitUserID", quitUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ }
+ } else if msg.ContentType == constant.MemberInvitedNotification {
+ var tips sdkws.MemberInvitedTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal MemberInvitedTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+			// Collect the invited user IDs.
+ invitedUserIDs := datautil.Slice(tips.InvitedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ isInvitedUser := datautil.Contain(userID, invitedUserIDs...)
+			// Invite notifications are visible to the group owner, admins, and the invited users themselves.
+ if isOwnerOrAdmin || isInvitedUser {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberInvitedNotification allowed", "userID", userID, "invitedUserIDs", invitedUserIDs, "isOwnerOrAdmin", isOwnerOrAdmin, "isInvitedUser", isInvitedUser)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberInvitedNotification filtered", "userID", userID, "invitedUserIDs", invitedUserIDs, "isOwnerOrAdmin", isOwnerOrAdmin, "isInvitedUser", isInvitedUser)
+ }
+ } else if msg.ContentType == constant.MemberEnterNotification {
+ var tips sdkws.MemberEnterTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal MemberEnterTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ entrantUserID := tips.EntrantUser.UserID
+			// Enter notifications are visible only to the group owner and admins.
+ if isOwnerOrAdmin {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberEnterNotification allowed", "userID", userID, "entrantUserID", entrantUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberEnterNotification filtered", "userID", userID, "entrantUserID", entrantUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ }
+ } else if msg.ContentType == constant.MemberKickedNotification {
+ var tips sdkws.MemberKickedTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal MemberKickedTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+			// Collect the kicked user IDs.
+ kickedUserIDs := datautil.Slice(tips.KickedUserList, func(e *sdkws.GroupMemberFullInfo) string { return e.UserID })
+ isKickedUser := datautil.Contain(userID, kickedUserIDs...)
+			// Kick notifications are visible to the group owner, admins, and the kicked users themselves.
+ if isOwnerOrAdmin || isKickedUser {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberKickedNotification allowed", "userID", userID, "kickedUserIDs", kickedUserIDs, "isOwnerOrAdmin", isOwnerOrAdmin, "isKickedUser", isKickedUser)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: MemberKickedNotification filtered", "userID", userID, "kickedUserIDs", kickedUserIDs, "isOwnerOrAdmin", isOwnerOrAdmin, "isKickedUser", isKickedUser)
+ }
+ } else if msg.ContentType == constant.GroupMemberInfoSetNotification {
+ var tips sdkws.GroupMemberInfoSetTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal GroupMemberInfoSetTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ changedUserID := tips.ChangedUser.UserID
+			// Member-info-set notifications (e.g. background sound) are visible only to the group owner and admins.
+ if isOwnerOrAdmin {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberInfoSetNotification allowed", "userID", userID, "changedUserID", changedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberInfoSetNotification filtered", "userID", userID, "changedUserID", changedUserID, "isOwnerOrAdmin", isOwnerOrAdmin)
+ }
+ } else if msg.ContentType == constant.GroupMemberSetToAdminNotification || msg.ContentType == constant.GroupMemberSetToOrdinaryUserNotification {
+ var tips sdkws.GroupMemberInfoSetTips
+ if err := unmarshalNotificationContent(string(msg.Content), &tips); err != nil {
+ log.ZWarn(ctx, "filterMuteNotificationMsgs: unmarshal GroupMemberInfoSetTips failed", err)
+ filteredMsgs = append(filteredMsgs, msg)
+ continue
+ }
+ changedUserID := tips.ChangedUser.UserID
+			// Set-to-admin / set-to-ordinary-user notifications are visible to the group owner, admins, and the member themselves.
+ if isOwnerOrAdmin || userID == changedUserID {
+ filteredMsgs = append(filteredMsgs, msg)
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberSetToAdmin/OrdinaryUserNotification allowed", "userID", userID, "changedUserID", changedUserID, "isOwnerOrAdmin", isOwnerOrAdmin, "contentType", msg.ContentType)
+ } else {
+ log.ZDebug(ctx, "filterMuteNotificationMsgs: GroupMemberSetToAdmin/OrdinaryUserNotification filtered", "userID", userID, "changedUserID", changedUserID, "isOwnerOrAdmin", isOwnerOrAdmin, "contentType", msg.ContentType)
+ }
+ } else {
+			// All other messages pass through.
+ filteredMsgs = append(filteredMsgs, msg)
+ }
+ }
+
+ return filteredMsgs
+}
+
+// unmarshalNotificationContent parses the detail payload of a notification message.
+func unmarshalNotificationContent(content string, v interface{}) error {
+ var notificationElem sdkws.NotificationElem
+	if err := json.Unmarshal([]byte(content), &notificationElem); err != nil {
+ return err
+ }
+ return jsonutil.JsonUnmarshal([]byte(notificationElem.Detail), v)
+}
+
+// enrichRedPacketMessages fills in claim info and status for red packet messages.
+func (m *msgServer) enrichRedPacketMessages(ctx context.Context, userID string, msgs []*sdkws.MsgData) []*sdkws.MsgData {
+ if m.redPacketReceiveDB == nil || m.redPacketDB == nil {
+ return msgs
+ }
+
+ for _, msg := range msgs {
+		// Only handle custom message types.
+ if msg.ContentType != constant.Custom {
+ continue
+ }
+
+		// Parse the custom message content.
+ var customElem apistruct.CustomElem
+ if err := json.Unmarshal(msg.Content, &customElem); err != nil {
+ continue
+ }
+
+		// Check whether this is a red packet message (the description field carries the sub-type).
+ if customElem.Description != "redpacket" {
+ continue
+ }
+
+		// Parse the red packet payload from the data field.
+ var redPacketElem apistruct.RedPacketElem
+ if err := json.Unmarshal([]byte(customElem.Data), &redPacketElem); err != nil {
+			log.ZWarn(ctx, "enrichRedPacketMessages: failed to unmarshal red packet data", err, "data", customElem.Data)
+ continue
+ }
+
+		// Look up the red packet record.
+ redPacket, err := m.redPacketDB.Take(ctx, redPacketElem.RedPacketID)
+ if err != nil {
+ log.ZWarn(ctx, "enrichRedPacketMessages: failed to get red packet", err, "redPacketID", redPacketElem.RedPacketID)
+			// Lookup failed; keep the original state and skip enrichment.
+ continue
+ }
+
+		// Fill in the red packet status.
+ redPacketElem.Status = redPacket.Status
+
+		// Determine expiry (check both the expire time and the status).
+ now := time.Now()
+ isExpired := redPacket.Status == model.RedPacketStatusExpired || (redPacket.ExpireTime.Before(now) && redPacket.Status == model.RedPacketStatusActive)
+ redPacketElem.IsExpired = isExpired
+
+		// Determine whether all packets have been claimed.
+ isFinished := redPacket.Status == model.RedPacketStatusFinished || redPacket.RemainCount <= 0
+ redPacketElem.IsFinished = isFinished
+
+		// If expired, update the status.
+ if isExpired && redPacket.Status == model.RedPacketStatusActive {
+ redPacket.Status = model.RedPacketStatusExpired
+ redPacketElem.Status = model.RedPacketStatusExpired
+ }
+
+		// Check whether this user has already claimed it.
+ receive, err := m.redPacketReceiveDB.FindByUserAndRedPacketID(ctx, userID, redPacketElem.RedPacketID)
+ if err != nil {
+			// Lookup failed or no record found: not claimed.
+ redPacketElem.IsReceived = false
+ redPacketElem.ReceiveInfo = nil
+ } else {
+			// Claimed; fill in the claim info (including the amount).
+ redPacketElem.IsReceived = true
+ redPacketElem.ReceiveInfo = &apistruct.RedPacketReceiveInfo{
+ Amount: receive.Amount,
+ ReceiveTime: receive.ReceiveTime.UnixMilli(),
+				IsLucky:     false, // the "luckiest draw" feature has been removed; always false
+ }
+ }
+
+		// Update the custom message's data field (now carrying claim info and status).
+ redPacketData := jsonutil.StructToJsonString(redPacketElem)
+ customElem.Data = redPacketData
+
+		// Re-serialize and update the message content.
+ newContent, err := json.Marshal(customElem)
+ if err != nil {
+ log.ZWarn(ctx, "enrichRedPacketMessages: failed to marshal custom elem", err, "redPacketID", redPacketElem.RedPacketID)
+ continue
+ }
+ msg.Content = newContent
+ }
+
+ return msgs
+}
+
+func (m *msgServer) GetMaxSeq(ctx context.Context, req *sdkws.GetMaxSeqReq) (*sdkws.GetMaxSeqResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ conversationIDs, err := m.ConversationLocalCache.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ for _, conversationID := range conversationIDs {
+ conversationIDs = append(conversationIDs, conversationutil.GetNotificationConversationIDByConversationID(conversationID))
+ }
+ conversationIDs = append(conversationIDs, conversationutil.GetSelfNotificationConversationID(req.UserID))
+ log.ZDebug(ctx, "GetMaxSeq", "conversationIDs", conversationIDs)
+ maxSeqs, err := m.MsgDatabase.GetMaxSeqs(ctx, conversationIDs)
+ if err != nil {
+ log.ZWarn(ctx, "GetMaxSeqs error", err, "conversationIDs", conversationIDs, "maxSeqs", maxSeqs)
+ return nil, err
+ }
+	// Drop conversations whose max seq is 0 to avoid pulling from large numbers of empty conversations.
+ for conversationID, seq := range maxSeqs {
+ if seq == 0 {
+ delete(maxSeqs, conversationID)
+ }
+ }
+ resp := new(sdkws.GetMaxSeqResp)
+ resp.MaxSeqs = maxSeqs
+ return resp, nil
+}
+
+func (m *msgServer) SearchMessage(ctx context.Context, req *msg.SearchMessageReq) (resp *msg.SearchMessageResp, err error) {
+ var chatLogs []*msg.SearchedMsgData
+ var total int64
+ resp = &msg.SearchMessageResp{}
+ if total, chatLogs, err = m.MsgDatabase.SearchMessage(ctx, req); err != nil {
+ return nil, err
+ }
+
+ var (
+ sendIDs []string
+ recvIDs []string
+ groupIDs []string
+ sendNameMap = make(map[string]string)
+ sendFaceMap = make(map[string]string)
+ recvMap = make(map[string]string)
+ groupMap = make(map[string]*sdkws.GroupInfo)
+ seenSendIDs = make(map[string]struct{})
+ seenRecvIDs = make(map[string]struct{})
+ seenGroupIDs = make(map[string]struct{})
+ )
+
+ for _, chatLog := range chatLogs {
+ if chatLog.MsgData.SenderNickname == "" || chatLog.MsgData.SenderFaceURL == "" {
+ if _, ok := seenSendIDs[chatLog.MsgData.SendID]; !ok {
+ seenSendIDs[chatLog.MsgData.SendID] = struct{}{}
+ sendIDs = append(sendIDs, chatLog.MsgData.SendID)
+ }
+ }
+ switch chatLog.MsgData.SessionType {
+ case constant.SingleChatType, constant.NotificationChatType:
+ if _, ok := seenRecvIDs[chatLog.MsgData.RecvID]; !ok {
+ seenRecvIDs[chatLog.MsgData.RecvID] = struct{}{}
+ recvIDs = append(recvIDs, chatLog.MsgData.RecvID)
+ }
+ case constant.WriteGroupChatType, constant.ReadGroupChatType:
+ if _, ok := seenGroupIDs[chatLog.MsgData.GroupID]; !ok {
+ seenGroupIDs[chatLog.MsgData.GroupID] = struct{}{}
+ groupIDs = append(groupIDs, chatLog.MsgData.GroupID)
+ }
+ }
+ }
+
+ // Retrieve sender and receiver information
+ if len(sendIDs) != 0 {
+ sendInfos, err := m.UserLocalCache.GetUsersInfo(ctx, sendIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, sendInfo := range sendInfos {
+ sendNameMap[sendInfo.UserID] = sendInfo.Nickname
+ sendFaceMap[sendInfo.UserID] = sendInfo.FaceURL
+ }
+ }
+
+ if len(recvIDs) != 0 {
+ recvInfos, err := m.UserLocalCache.GetUsersInfo(ctx, recvIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, recvInfo := range recvInfos {
+ recvMap[recvInfo.UserID] = recvInfo.Nickname
+ }
+ }
+
+ // Retrieve group information including member counts
+ if len(groupIDs) != 0 {
+ groupInfos, err := m.GroupLocalCache.GetGroupInfos(ctx, groupIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, groupInfo := range groupInfos {
+ groupMap[groupInfo.GroupID] = groupInfo
+ // Get actual member count
+ memberIDs, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, groupInfo.GroupID)
+ if err == nil {
+ groupInfo.MemberCount = uint32(len(memberIDs)) // Update the member count with actual number
+ }
+ }
+ }
+
+ // Construct response with updated information
+ for _, chatLog := range chatLogs {
+ pbchatLog := &msg.ChatLog{}
+ datautil.CopyStructFields(pbchatLog, chatLog.MsgData)
+ pbchatLog.SendTime = chatLog.MsgData.SendTime
+ pbchatLog.CreateTime = chatLog.MsgData.CreateTime
+ if chatLog.MsgData.SenderNickname == "" {
+ pbchatLog.SenderNickname = sendNameMap[chatLog.MsgData.SendID]
+ }
+ if chatLog.MsgData.SenderFaceURL == "" {
+ pbchatLog.SenderFaceURL = sendFaceMap[chatLog.MsgData.SendID]
+ }
+ switch chatLog.MsgData.SessionType {
+ case constant.SingleChatType, constant.NotificationChatType:
+ pbchatLog.RecvNickname = recvMap[chatLog.MsgData.RecvID]
+		case constant.WriteGroupChatType, constant.ReadGroupChatType:
+			if groupInfo, ok := groupMap[chatLog.MsgData.GroupID]; ok {
+				pbchatLog.GroupMemberCount = groupInfo.MemberCount // reflects the actual member count
+				pbchatLog.RecvID = groupInfo.GroupID
+				pbchatLog.GroupName = groupInfo.GroupName
+				pbchatLog.GroupOwner = groupInfo.OwnerUserID
+				pbchatLog.GroupType = groupInfo.GroupType
+			}
+ }
+ searchChatLog := &msg.SearchChatLog{ChatLog: pbchatLog, IsRevoked: chatLog.IsRevoked}
+
+ resp.ChatLogs = append(resp.ChatLogs, searchChatLog)
+ }
+ resp.ChatLogsNum = int32(total)
+ return resp, nil
+}
+
+func (m *msgServer) GetServerTime(ctx context.Context, _ *msg.GetServerTimeReq) (*msg.GetServerTimeResp, error) {
+ return &msg.GetServerTimeResp{ServerTime: timeutil.GetCurrentTimestampByMill()}, nil
+}
+
+func (m *msgServer) GetLastMessage(ctx context.Context, req *msg.GetLastMessageReq) (*msg.GetLastMessageResp, error) {
+ msgs, err := m.MsgDatabase.GetLastMessage(ctx, req.ConversationIDs, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ return &msg.GetLastMessageResp{Msgs: msgs}, nil
+}
diff --git a/internal/rpc/msg/utils.go b/internal/rpc/msg/utils.go
new file mode 100644
index 0000000..d80cfe8
--- /dev/null
+++ b/internal/rpc/msg/utils.go
@@ -0,0 +1,91 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+func IsNotFound(err error) bool {
+ switch errs.Unwrap(err) {
+ case redis.Nil, mongo.ErrNoDocuments:
+ return true
+ default:
+ return false
+ }
+}
+
+type activeConversations []*msg.ActiveConversation
+
+func (s activeConversations) Len() int {
+ return len(s)
+}
+
+func (s activeConversations) Less(i, j int) bool {
+ return s[i].LastTime > s[j].LastTime
+}
+
+func (s activeConversations) Swap(i, j int) {
+ s[i], s[j] = s[j], s[i]
+}
+
diff --git a/internal/rpc/msg/verify.go b/internal/rpc/msg/verify.go
new file mode 100644
index 0000000..dfd1afe
--- /dev/null
+++ b/internal/rpc/msg/verify.go
@@ -0,0 +1,405 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msg
+
+import (
+ "context"
+ "encoding/json"
+ "math/rand"
+ "regexp"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/encrypt"
+ "github.com/openimsdk/tools/utils/timeutil"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+)
+
+var ExcludeContentType = []int{constant.HasReadReceipt}
+
+// urlRegex matches common URL formats (case-insensitive):
+//  1. full URLs starting with http:// or https://
+//  2. protocol-relative URLs starting with // (e.g. //s.yam.com/MvQzr)
+//  3. links starting with www.
+//  4. bare domain links (e.g. s.yam.com/MvQzr, xxx.cc/csd, t.cn/AX4fYkFZ): a domain containing at least one dot, optionally followed by a path
+// Domain format: at least one dot plus a top-level domain of two or more letters, e.g. xxx.com, xxx.cn, s.yam.com, xxx.cc, t.cn.
+// Note: the text after // must itself look like a domain, to avoid false positives on other text starting with //.
+// Fix: single-character labels such as t.cn are supported; the former requirement of two characters before the dot was removed.
+var urlRegex = regexp.MustCompile(`(?i)(https?://[^\s<>"{}|\\^` + "`" + `\[\]]+|//[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.[a-zA-Z]{2,}(/[^\s<>"{}|\\^` + "`" + `\[\]]*)?|www\.[^\s<>"{}|\\^` + "`" + `\[\]]+|[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.[a-zA-Z]{2,}(/[^\s<>"{}|\\^` + "`" + `\[\]]*)?)`)
+
+type Validator interface {
+ validate(pb *msg.SendMsgReq) (bool, int32, string)
+}
+
+// TextElem is used to parse text message content
+type TextElem struct {
+ Content string `json:"content"`
+}
+
+// AtElem is used to parse @-message content
+type AtElem struct {
+ Text string `json:"text"`
+}
+
+// checkMessageContainsLink checks whether the message content contains a link.
+// userType: 0 = ordinary user (may not send links), 1 = privileged user (may send links)
+func (m *msgServer) checkMessageContainsLink(msgData *sdkws.MsgData, userType int32) error {
+	// users with userType=1 may send links; skip the check
+ if userType == 1 {
+ return nil
+ }
+
+	// only text-type messages are checked
+ if msgData.ContentType != constant.Text && msgData.ContentType != constant.AtText {
+ return nil
+ }
+
+ var textContent string
+ var err error
+
+	// parse the message content
+ if msgData.ContentType == constant.Text {
+ var textElem TextElem
+ if err = json.Unmarshal(msgData.Content, &textElem); err != nil {
+			// fall back to the raw string if parsing fails
+ textContent = string(msgData.Content)
+ } else {
+ textContent = textElem.Content
+ }
+ } else if msgData.ContentType == constant.AtText {
+ var atElem AtElem
+ if err = json.Unmarshal(msgData.Content, &atElem); err != nil {
+			// fall back to the raw string if parsing fails
+ textContent = string(msgData.Content)
+ } else {
+ textContent = atElem.Text
+ }
+ }
+
+	// check whether the content contains a link
+ if urlRegex.MatchString(textContent) {
+		return servererrs.ErrMessageContainsLink.WrapMsg("users with userType=0 cannot send messages containing links")
+ }
+
+ return nil
+}
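A quick way to sanity-check urlRegex is to run the same pattern against a few sample strings; the samples below are illustrative, not taken from the codebase:

```go
package main

import (
	"fmt"
	"regexp"
)

// same pattern as urlRegex above
var urlRegex = regexp.MustCompile(`(?i)(https?://[^\s<>"{}|\\^` + "`" + `\[\]]+|//[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.[a-zA-Z]{2,}(/[^\s<>"{}|\\^` + "`" + `\[\]]*)?|www\.[^\s<>"{}|\\^` + "`" + `\[\]]+|[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.[a-zA-Z]{2,}(/[^\s<>"{}|\\^` + "`" + `\[\]]*)?)`)

func main() {
	samples := []string{
		"check out https://example.com/page", // full URL with scheme
		"short link t.cn/AX4fYkFZ",           // bare single-character domain label
		"hello world",                        // plain text, no link
	}
	for _, s := range samples {
		fmt.Println(urlRegex.MatchString(s))
	}
	// true, true, false
}
```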
+
+type MessageRevoked struct {
+ RevokerID string `json:"revokerID"`
+ RevokerRole int32 `json:"revokerRole"`
+ ClientMsgID string `json:"clientMsgID"`
+ RevokerNickname string `json:"revokerNickname"`
+ RevokeTime int64 `json:"revokeTime"`
+ SourceMessageSendTime int64 `json:"sourceMessageSendTime"`
+ SourceMessageSendID string `json:"sourceMessageSendID"`
+ SourceMessageSenderNickname string `json:"sourceMessageSenderNickname"`
+ SessionType int32 `json:"sessionType"`
+ Seq uint32 `json:"seq"`
+}
+
+func (m *msgServer) messageVerification(ctx context.Context, data *msg.SendMsgReq) error {
+ webhookCfg := m.webhookConfig()
+
+ switch data.MsgData.SessionType {
+ case constant.SingleChatType:
+ if datautil.Contain(data.MsgData.SendID, m.adminUserIDs...) {
+ return nil
+ }
+ if data.MsgData.ContentType <= constant.NotificationEnd &&
+ data.MsgData.ContentType >= constant.NotificationBegin {
+ return nil
+ }
+ if err := m.webhookBeforeSendSingleMsg(ctx, &webhookCfg.BeforeSendSingleMsg, data); err != nil {
+ return err
+ }
+ u, err := m.UserLocalCache.GetUserInfo(ctx, data.MsgData.SendID)
+ if err != nil {
+ return err
+ }
+ if authverify.CheckSystemAccount(ctx, u.AppMangerLevel) {
+ return nil
+ }
+		// users with userType=1 may send links and QR codes; skip the checks
+ userType := u.GetUserType()
+
+ if userType != 1 {
+			// check whether a userType=0 user sent a message containing a link
+ if err := m.checkMessageContainsLink(data.MsgData, userType); err != nil {
+ return err
+ }
+			// check whether a userType=0 user sent an image containing a QR code
+ if err := m.checkImageContainsQRCode(ctx, data.MsgData, userType); err != nil {
+ return err
+ }
+ }
+
+		// in single chat, file sending is not restricted: any user may send files
+
+ black, err := m.FriendLocalCache.IsBlack(ctx, data.MsgData.SendID, data.MsgData.RecvID)
+ if err != nil {
+ return err
+ }
+ if black {
+ return servererrs.ErrBlockedByPeer.Wrap()
+ }
+ if m.config.RpcConfig.FriendVerify {
+ friend, err := m.FriendLocalCache.IsFriend(ctx, data.MsgData.SendID, data.MsgData.RecvID)
+ if err != nil {
+ return err
+ }
+ if !friend {
+ return servererrs.ErrNotPeersFriend.Wrap()
+ }
+ return nil
+ }
+ return nil
+ case constant.ReadGroupChatType:
+ groupInfo, err := m.GroupLocalCache.GetGroupInfo(ctx, data.MsgData.GroupID)
+ if err != nil {
+ return err
+ }
+ if groupInfo.Status == constant.GroupStatusDismissed &&
+ data.MsgData.ContentType != constant.GroupDismissedNotification {
+ return servererrs.ErrDismissedAlready.Wrap()
+ }
+		// check whether the sender is a system admin; system admins skip some of the checks below
+ isSystemAdmin := datautil.Contain(data.MsgData.SendID, m.adminUserIDs...)
+
+		// notification message types skip the checks
+ if data.MsgData.ContentType <= constant.NotificationEnd &&
+ data.MsgData.ContentType >= constant.NotificationBegin {
+ return nil
+ }
+
+		// SuperGroup skips some checks, but file-sending permission must still be verified
+ if groupInfo.GroupType == constant.SuperGroup {
+			// SuperGroup also requires the file-sending permission check
+ if data.MsgData.ContentType == constant.File {
+ if isSystemAdmin {
+					// system admins may send files
+ return nil
+ }
+ memberIDs, err := m.GroupLocalCache.GetGroupMemberIDMap(ctx, data.MsgData.GroupID)
+ if err != nil {
+ return err
+ }
+ if _, ok := memberIDs[data.MsgData.SendID]; !ok {
+ return servererrs.ErrNotInGroupYet.Wrap()
+ }
+ groupMemberInfo, err := m.GroupLocalCache.GetGroupMember(ctx, data.MsgData.GroupID, data.MsgData.SendID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ return servererrs.ErrNotInGroupYet.WrapMsg(err.Error())
+ }
+ return err
+ }
+ u, err := m.UserLocalCache.GetUserInfo(ctx, data.MsgData.SendID)
+ if err != nil {
+ return err
+ }
+ isGroupOwner := groupMemberInfo.RoleLevel == constant.GroupOwner
+ isGroupAdmin := groupMemberInfo.RoleLevel == constant.GroupAdmin
+ canSendFile := u.GetUserType() == 1 || isGroupOwner || isGroupAdmin
+ if !canSendFile {
+ return servererrs.ErrNoPermission.WrapMsg("only group owner, admin, or userType=1 can send files in group chat")
+ }
+ }
+ return nil
+ }
+
+		// fetch the user info first, used to check userType (system admins and userType=1 users may send files)
+ memberIDs, err := m.GroupLocalCache.GetGroupMemberIDMap(ctx, data.MsgData.GroupID)
+ if err != nil {
+ return err
+ }
+ if _, ok := memberIDs[data.MsgData.SendID]; !ok {
+ return servererrs.ErrNotInGroupYet.Wrap()
+ }
+
+ groupMemberInfo, err := m.GroupLocalCache.GetGroupMember(ctx, data.MsgData.GroupID, data.MsgData.SendID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ return servererrs.ErrNotInGroupYet.WrapMsg(err.Error())
+ }
+ return err
+ }
+		// fetch the user info to obtain userType
+ u, err := m.UserLocalCache.GetUserInfo(ctx, data.MsgData.SendID)
+ if err != nil {
+ return err
+ }
+
+		// group chat: check file-sending permission first (checked early so other logic cannot skip it)
+		// only the group owner, group admins, and userType=1 users may send files
+		// system admins may also send files
+ if data.MsgData.ContentType == constant.File {
+ isGroupOwner := groupMemberInfo.RoleLevel == constant.GroupOwner
+ isGroupAdmin := groupMemberInfo.RoleLevel == constant.GroupAdmin
+ userType := u.GetUserType()
+
+			// a file message from a userType=0 non-owner/non-admin may be due to a stale cache; clear the cache and refetch
+ if userType == 0 && !isGroupOwner && !isGroupAdmin && !isSystemAdmin {
+				// clear the local cache
+ m.UserLocalCache.DelUserInfo(ctx, data.MsgData.SendID)
+				// refetch the user info
+ u, err = m.UserLocalCache.GetUserInfo(ctx, data.MsgData.SendID)
+ if err != nil {
+ return err
+ }
+ userType = u.GetUserType()
+ }
+
+ canSendFile := isSystemAdmin || userType == 1 || isGroupOwner || isGroupAdmin
+ if !canSendFile {
+ return servererrs.ErrNoPermission.WrapMsg("only group owner, admin, or userType=1 can send files in group chat")
+ }
+ }
+
+ if isSystemAdmin {
+			// system admins skip most of the remaining checks
+ return nil
+ }
+
+		// group chat: userType=1 users, the group owner, and group admins may send links
+ isGroupOwner := groupMemberInfo.RoleLevel == constant.GroupOwner
+ isGroupAdmin := groupMemberInfo.RoleLevel == constant.GroupAdmin
+ canSendLink := u.GetUserType() == 1 || isGroupOwner || isGroupAdmin
+
+		// if the sender is not allowed to send links, run the link check
+ if !canSendLink {
+ if err := m.checkMessageContainsLink(data.MsgData, u.GetUserType()); err != nil {
+ return err
+ }
+ }
+		// group chat: check whether a userType=0 ordinary member sent an image containing a QR code
+		// userType=1 users, the group owner, and group admins may send QR-code images, so they skip the check
+ if !canSendLink {
+ if err := m.checkImageContainsQRCode(ctx, data.MsgData, u.GetUserType()); err != nil {
+ return err
+ }
+ }
+
+ if isGroupOwner {
+ return nil
+ } else {
+ nowUnixMilli := time.Now().UnixMilli()
+			// log mute-check details to help diagnose auto-mute issues
+ if groupMemberInfo.MuteEndTime > 0 {
+ muteEndTime := time.UnixMilli(groupMemberInfo.MuteEndTime)
+ isMuted := groupMemberInfo.MuteEndTime >= nowUnixMilli
+ log.ZInfo(ctx, "messageVerification: checking mute status",
+ "groupID", data.MsgData.GroupID,
+ "userID", data.MsgData.SendID,
+ "muteEndTimeTimestamp", groupMemberInfo.MuteEndTime,
+ "muteEndTime", muteEndTime.Format(time.RFC3339),
+ "now", time.UnixMilli(nowUnixMilli).Format(time.RFC3339),
+ "isMuted", isMuted,
+ "mutedDurationSeconds", (groupMemberInfo.MuteEndTime-nowUnixMilli)/1000,
+ "roleLevel", groupMemberInfo.RoleLevel)
+ }
+ if groupMemberInfo.MuteEndTime >= nowUnixMilli {
+ return servererrs.ErrMutedInGroup.Wrap()
+ }
+ if groupInfo.Status == constant.GroupStatusMuted && !isGroupAdmin {
+ return servererrs.ErrMutedGroup.Wrap()
+ }
+ }
+ return nil
+ default:
+ return nil
+ }
+}
+
+func (m *msgServer) encapsulateMsgData(msg *sdkws.MsgData) {
+ msg.ServerMsgID = GetMsgID(msg.SendID)
+ if msg.SendTime == 0 {
+ msg.SendTime = timeutil.GetCurrentTimestampByMill()
+ }
+ switch msg.ContentType {
+ case constant.Text, constant.Picture, constant.Voice, constant.Video,
+ constant.File, constant.AtText, constant.Merger, constant.Card,
+ constant.Location, constant.Custom, constant.Quote, constant.AdvancedText, constant.MarkdownText:
+ case constant.Revoke:
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsUnreadCount, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsOfflinePush, false)
+ case constant.HasReadReceipt:
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsConversationUpdate, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsSenderConversationUpdate, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsUnreadCount, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsOfflinePush, false)
+ case constant.Typing:
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsHistory, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsPersistent, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsSenderSync, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsConversationUpdate, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsSenderConversationUpdate, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsUnreadCount, false)
+ datautil.SetSwitchFromOptions(msg.Options, constant.IsOfflinePush, false)
+ }
+}
+
+func GetMsgID(sendID string) string {
+ t := timeutil.GetCurrentTimeFormatted()
+ return encrypt.Md5(t + "-" + sendID + "-" + strconv.Itoa(rand.Int()))
+}
+
+func (m *msgServer) modifyMessageByUserMessageReceiveOpt(ctx context.Context, userID, conversationID string, sessionType int, pb *msg.SendMsgReq) (bool, error) {
+ opt, err := m.UserLocalCache.GetUserGlobalMsgRecvOpt(ctx, userID)
+ if err != nil {
+ return false, err
+ }
+ switch opt {
+ case constant.ReceiveMessage:
+ case constant.NotReceiveMessage:
+ return false, nil
+ case constant.ReceiveNotNotifyMessage:
+ if pb.MsgData.Options == nil {
+ pb.MsgData.Options = make(map[string]bool, 10)
+ }
+ datautil.SetSwitchFromOptions(pb.MsgData.Options, constant.IsOfflinePush, false)
+ return true, nil
+ }
+ singleOpt, err := m.ConversationLocalCache.GetSingleConversationRecvMsgOpt(ctx, userID, conversationID)
+ if errs.ErrRecordNotFound.Is(err) {
+ return true, nil
+ } else if err != nil {
+ return false, err
+ }
+ switch singleOpt {
+ case constant.ReceiveMessage:
+ return true, nil
+ case constant.NotReceiveMessage:
+ if datautil.Contain(int(pb.MsgData.ContentType), ExcludeContentType...) {
+ return true, nil
+ }
+ return false, nil
+ case constant.ReceiveNotNotifyMessage:
+ if pb.MsgData.Options == nil {
+ pb.MsgData.Options = make(map[string]bool, 10)
+ }
+ datautil.SetSwitchFromOptions(pb.MsgData.Options, constant.IsOfflinePush, false)
+ return true, nil
+ }
+ return true, nil
+}
diff --git a/internal/rpc/relation/black.go b/internal/rpc/relation/black.go
new file mode 100644
index 0000000..a3506fd
--- /dev/null
+++ b/internal/rpc/relation/black.go
@@ -0,0 +1,165 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (s *friendServer) GetPaginationBlacks(ctx context.Context, req *relation.GetPaginationBlacksReq) (resp *relation.GetPaginationBlacksResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ total, blacks, err := s.blackDatabase.FindOwnerBlacks(ctx, req.UserID, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ resp = &relation.GetPaginationBlacksResp{}
+ resp.Blacks, err = convert.BlackDB2Pb(ctx, blacks, s.userClient.GetUsersInfoMap)
+ if err != nil {
+ return nil, err
+ }
+ resp.Total = int32(total)
+ return resp, nil
+}
+
+func (s *friendServer) IsBlack(ctx context.Context, req *relation.IsBlackReq) (*relation.IsBlackResp, error) {
+ if err := authverify.CheckAccessIn(ctx, req.UserID1, req.UserID2); err != nil {
+ return nil, err
+ }
+ in1, in2, err := s.blackDatabase.CheckIn(ctx, req.UserID1, req.UserID2)
+ if err != nil {
+ return nil, err
+ }
+ resp := &relation.IsBlackResp{}
+ resp.InUser1Blacks = in1
+ resp.InUser2Blacks = in2
+ return resp, nil
+}
+
+func (s *friendServer) RemoveBlack(ctx context.Context, req *relation.RemoveBlackReq) (*relation.RemoveBlackResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ if err := s.blackDatabase.Delete(ctx, []*model.Black{{OwnerUserID: req.OwnerUserID, BlockUserID: req.BlackUserID}}); err != nil {
+ return nil, err
+ }
+
+ s.notificationSender.BlackDeletedNotification(ctx, req)
+ s.webhookAfterRemoveBlack(ctx, &s.config.WebhooksConfig.AfterRemoveBlack, req)
+
+ return &relation.RemoveBlackResp{}, nil
+}
+
+func (s *friendServer) AddBlack(ctx context.Context, req *relation.AddBlackReq) (*relation.AddBlackResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ if err := s.webhookBeforeAddBlack(ctx, &s.config.WebhooksConfig.BeforeAddBlack, req); err != nil {
+ return nil, err
+ }
+ if err := s.userClient.CheckUser(ctx, []string{req.OwnerUserID, req.BlackUserID}); err != nil {
+ return nil, err
+ }
+ black := model.Black{
+ OwnerUserID: req.OwnerUserID,
+ BlockUserID: req.BlackUserID,
+ OperatorUserID: mcontext.GetOpUserID(ctx),
+ CreateTime: time.Now(),
+ Ex: req.Ex,
+ }
+
+ if err := s.blackDatabase.Create(ctx, []*model.Black{&black}); err != nil {
+ return nil, err
+ }
+ s.notificationSender.BlackAddedNotification(ctx, req)
+ return &relation.AddBlackResp{}, nil
+}
+
+func (s *friendServer) GetSpecifiedBlacks(ctx context.Context, req *relation.GetSpecifiedBlacksReq) (*relation.GetSpecifiedBlacksResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ if len(req.UserIDList) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("userIDList is empty")
+ }
+
+ if datautil.Duplicate(req.UserIDList) {
+ return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
+ }
+
+ userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
+ if err != nil {
+ return nil, err
+ }
+
+ blacks, err := s.blackDatabase.FindBlackInfos(ctx, req.OwnerUserID, req.UserIDList)
+ if err != nil {
+ return nil, err
+ }
+
+ blackMap := datautil.SliceToMap(blacks, func(e *model.Black) string {
+ return e.BlockUserID
+ })
+
+ resp := &relation.GetSpecifiedBlacksResp{
+ Blacks: make([]*sdkws.BlackInfo, 0, len(req.UserIDList)),
+ }
+
+	toPublicUser := func(userID string) *sdkws.PublicUserInfo {
+		v, ok := userMap[userID]
+		if !ok {
+			return nil
+		}
+		return &sdkws.PublicUserInfo{
+			UserID:   v.UserID,
+			Nickname: v.Nickname,
+			FaceURL:  v.FaceURL,
+			Ex:       v.Ex,
+			UserType: v.UserType,
+		}
+	}
+
+	for _, userID := range req.UserIDList {
+		if black := blackMap[userID]; black != nil {
+			resp.Blacks = append(resp.Blacks,
+				&sdkws.BlackInfo{
+					OwnerUserID:    black.OwnerUserID,
+					CreateTime:     black.CreateTime.UnixMilli(),
+					BlackUserInfo:  toPublicUser(userID),
+					AddSource:      black.AddSource,
+					OperatorUserID: black.OperatorUserID,
+					Ex:             black.Ex,
+				})
+		}
+	}
+
+ resp.Total = int32(len(resp.Blacks))
+
+ return resp, nil
+}
diff --git a/internal/rpc/relation/callback.go b/internal/rpc/relation/callback.go
new file mode 100644
index 0000000..af14d7c
--- /dev/null
+++ b/internal/rpc/relation/callback.go
@@ -0,0 +1,169 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/relation"
+)
+
+func (s *friendServer) webhookAfterDeleteFriend(ctx context.Context, after *config.AfterConfig, req *relation.DeleteFriendReq) {
+ cbReq := &cbapi.CallbackAfterDeleteFriendReq{
+ CallbackCommand: cbapi.CallbackAfterDeleteFriendCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserID: req.FriendUserID,
+ }
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterDeleteFriendResp{}, after)
+}
+
+func (s *friendServer) webhookBeforeAddFriend(ctx context.Context, before *config.BeforeConfig, req *relation.ApplyToAddFriendReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeAddFriendReq{
+ CallbackCommand: cbapi.CallbackBeforeAddFriendCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ ReqMsg: req.ReqMsg,
+ Ex: req.Ex,
+ }
+ resp := &cbapi.CallbackBeforeAddFriendResp{}
+
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func (s *friendServer) webhookAfterAddFriend(ctx context.Context, after *config.AfterConfig, req *relation.ApplyToAddFriendReq) {
+ cbReq := &cbapi.CallbackAfterAddFriendReq{
+ CallbackCommand: cbapi.CallbackAfterAddFriendCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ ReqMsg: req.ReqMsg,
+ }
+ resp := &cbapi.CallbackAfterAddFriendResp{}
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, after)
+}
+
+func (s *friendServer) webhookAfterSetFriendRemark(ctx context.Context, after *config.AfterConfig, req *relation.SetFriendRemarkReq) {
+ cbReq := &cbapi.CallbackAfterSetFriendRemarkReq{
+ CallbackCommand: cbapi.CallbackAfterSetFriendRemarkCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserID: req.FriendUserID,
+ Remark: req.Remark,
+ }
+ resp := &cbapi.CallbackAfterSetFriendRemarkResp{}
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, after)
+}
+
+func (s *friendServer) webhookAfterImportFriends(ctx context.Context, after *config.AfterConfig, req *relation.ImportFriendReq) {
+ cbReq := &cbapi.CallbackAfterImportFriendsReq{
+ CallbackCommand: cbapi.CallbackAfterImportFriendsCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserIDs: req.FriendUserIDs,
+ }
+ resp := &cbapi.CallbackAfterImportFriendsResp{}
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, after)
+}
+
+func (s *friendServer) webhookAfterRemoveBlack(ctx context.Context, after *config.AfterConfig, req *relation.RemoveBlackReq) {
+ cbReq := &cbapi.CallbackAfterRemoveBlackReq{
+ CallbackCommand: cbapi.CallbackAfterRemoveBlackCommand,
+ OwnerUserID: req.OwnerUserID,
+ BlackUserID: req.BlackUserID,
+ }
+ resp := &cbapi.CallbackAfterRemoveBlackResp{}
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, after)
+}
+
+func (s *friendServer) webhookBeforeSetFriendRemark(ctx context.Context, before *config.BeforeConfig, req *relation.SetFriendRemarkReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeSetFriendRemarkReq{
+ CallbackCommand: cbapi.CallbackBeforeSetFriendRemarkCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserID: req.FriendUserID,
+ Remark: req.Remark,
+ }
+ resp := &cbapi.CallbackBeforeSetFriendRemarkResp{}
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ if resp.Remark != "" {
+ req.Remark = resp.Remark
+ }
+ return nil
+ })
+}
+
+func (s *friendServer) webhookBeforeAddBlack(ctx context.Context, before *config.BeforeConfig, req *relation.AddBlackReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeAddBlackReq{
+ CallbackCommand: cbapi.CallbackBeforeAddBlackCommand,
+ OwnerUserID: req.OwnerUserID,
+ BlackUserID: req.BlackUserID,
+ }
+ resp := &cbapi.CallbackBeforeAddBlackResp{}
+ return s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before)
+ })
+}
+
+func (s *friendServer) webhookBeforeAddFriendAgree(ctx context.Context, before *config.BeforeConfig, req *relation.RespondFriendApplyReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeAddFriendAgreeReq{
+ CallbackCommand: cbapi.CallbackBeforeAddFriendAgreeCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ HandleMsg: req.HandleMsg,
+ HandleResult: req.HandleResult,
+ }
+ resp := &cbapi.CallbackBeforeAddFriendAgreeResp{}
+ return s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before)
+ })
+}
+
+func (s *friendServer) webhookAfterAddFriendAgree(ctx context.Context, after *config.AfterConfig, req *relation.RespondFriendApplyReq) {
+ cbReq := &cbapi.CallbackAfterAddFriendAgreeReq{
+ CallbackCommand: cbapi.CallbackAfterAddFriendAgreeCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ HandleMsg: req.HandleMsg,
+ HandleResult: req.HandleResult,
+ }
+ resp := &cbapi.CallbackAfterAddFriendAgreeResp{}
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, after)
+}
+
+func (s *friendServer) webhookBeforeImportFriends(ctx context.Context, before *config.BeforeConfig, req *relation.ImportFriendReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeImportFriendsReq{
+ CallbackCommand: cbapi.CallbackBeforeImportFriendsCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserIDs: req.FriendUserIDs,
+ }
+ resp := &cbapi.CallbackBeforeImportFriendsResp{}
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+ if len(resp.FriendUserIDs) > 0 {
+ req.FriendUserIDs = resp.FriendUserIDs
+ }
+ return nil
+ })
+}
diff --git a/internal/rpc/relation/friend.go b/internal/rpc/relation/friend.go
new file mode 100644
index 0000000..ab17812
--- /dev/null
+++ b/internal/rpc/relation/friend.go
@@ -0,0 +1,597 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/common_user"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "github.com/openimsdk/tools/mq/memamq"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/grpc"
+)
+
+type friendServer struct {
+ relation.UnimplementedFriendServer
+ db controller.FriendDatabase
+ blackDatabase controller.BlackDatabase
+ notificationSender *FriendNotificationSender
+ RegisterCenter discovery.Conn
+ config *Config
+ webhookClient *webhook.Client
+ queue *memamq.MemoryQueue
+ userClient *rpcli.UserClient
+}
+
+type Config struct {
+ RpcConfig config.Friend
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ // ZookeeperConfig config.ZooKeeper
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+
+ friendMongoDB, err := mgo.NewFriendMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ friendRequestMongoDB, err := mgo.NewFriendRequestMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ blackMongoDB, err := mgo.NewBlackMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ msgConn, err := client.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+ userClient := rpcli.NewUserClient(userConn)
+ database := controller.NewFriendDatabase(
+ friendMongoDB,
+ friendRequestMongoDB,
+ redis.NewFriendCacheRedis(rdb, &config.LocalCacheConfig, friendMongoDB),
+ mgocli.GetTx(),
+ )
+ // Initialize notification sender
+ notificationSender := NewFriendNotificationSender(
+ &config.NotificationConfig,
+ rpcli.NewMsgClient(msgConn),
+ WithRpcFunc(userClient.GetUsersInfo),
+ WithFriendDB(database),
+ )
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+
+	// Initialize the webhook config manager (supports loading configuration from the database)
+ var webhookClient *webhook.Client
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err == nil {
+		// The SystemConfig database initialized successfully; use the config manager
+ webhookConfigManager := webhook.NewConfigManager(systemConfigDB, &config.WebhooksConfig)
+ if err := webhookConfigManager.Start(ctx); err != nil {
+ log.ZWarn(ctx, "failed to start webhook config manager, using default config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ } else {
+ webhookClient = webhook.NewWebhookClientWithManager(webhookConfigManager)
+ }
+ } else {
+		// SystemConfig database initialization failed; fall back to the default webhook config
+ log.ZWarn(ctx, "failed to init system config db, using default webhook config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ }
+
+ // Register Friend server with refactored MongoDB and Redis integrations
+ relation.RegisterFriendServer(server, &friendServer{
+ db: database,
+ blackDatabase: controller.NewBlackDatabase(
+ blackMongoDB,
+ redis.NewBlackCacheRedis(rdb, &config.LocalCacheConfig, blackMongoDB),
+ ),
+ notificationSender: notificationSender,
+ RegisterCenter: client,
+ config: config,
+ webhookClient: webhookClient,
+ queue: memamq.NewMemoryQueue(16, 1024*1024),
+ userClient: userClient,
+ })
+ return nil
+}
+
+// ApplyToAddFriend handles a friend request from FromUserID to ToUserID.
+func (s *friendServer) ApplyToAddFriend(ctx context.Context, req *relation.ApplyToAddFriendReq) (resp *relation.ApplyToAddFriendResp, err error) {
+ resp = &relation.ApplyToAddFriendResp{}
+ if err := authverify.CheckAccess(ctx, req.FromUserID); err != nil {
+ return nil, err
+ }
+ if req.ToUserID == req.FromUserID {
+ return nil, servererrs.ErrCanNotAddYourself.WrapMsg("req.ToUserID", req.ToUserID)
+ }
+ if err = s.webhookBeforeAddFriend(ctx, &s.config.WebhooksConfig.BeforeAddFriend, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+ if err := s.userClient.CheckUser(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
+ return nil, err
+ }
+
+ in1, in2, err := s.db.CheckIn(ctx, req.FromUserID, req.ToUserID)
+ if err != nil {
+ return nil, err
+ }
+ if in1 && in2 {
+		return nil, servererrs.ErrRelationshipAlready.WrapMsg("already friends")
+ }
+ if err = s.db.AddFriendRequest(ctx, req.FromUserID, req.ToUserID, req.ReqMsg, req.Ex); err != nil {
+ return nil, err
+ }
+ s.notificationSender.FriendApplicationAddNotification(ctx, req)
+ s.webhookAfterAddFriend(ctx, &s.config.WebhooksConfig.AfterAddFriend, req)
+ return resp, nil
+}
+
+// ImportFriends lets an admin directly establish friendships for a user, bypassing the application flow.
+func (s *friendServer) ImportFriends(ctx context.Context, req *relation.ImportFriendReq) (resp *relation.ImportFriendResp, err error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ if err := s.userClient.CheckUser(ctx, append([]string{req.OwnerUserID}, req.FriendUserIDs...)); err != nil {
+ return nil, err
+ }
+ if datautil.Contain(req.OwnerUserID, req.FriendUserIDs...) {
+ return nil, servererrs.ErrCanNotAddYourself.WrapMsg("can not add yourself")
+ }
+ if datautil.Duplicate(req.FriendUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("friend userID repeated")
+ }
+
+ if err := s.webhookBeforeImportFriends(ctx, &s.config.WebhooksConfig.BeforeImportFriends, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ if err := s.db.BecomeFriends(ctx, req.OwnerUserID, req.FriendUserIDs, constant.BecomeFriendByImport); err != nil {
+ return nil, err
+ }
+ for _, userID := range req.FriendUserIDs {
+ s.notificationSender.FriendApplicationAgreedNotification(ctx, &relation.RespondFriendApplyReq{
+ FromUserID: req.OwnerUserID,
+ ToUserID: userID,
+ HandleResult: constant.FriendResponseAgree,
+ }, false)
+ }
+
+ s.webhookAfterImportFriends(ctx, &s.config.WebhooksConfig.AfterImportFriends, req)
+ return &relation.ImportFriendResp{}, nil
+}
+
+// RespondFriendApply agrees to or refuses a pending friend request.
+func (s *friendServer) RespondFriendApply(ctx context.Context, req *relation.RespondFriendApplyReq) (resp *relation.RespondFriendApplyResp, err error) {
+ resp = &relation.RespondFriendApplyResp{}
+ if err := authverify.CheckAccess(ctx, req.ToUserID); err != nil {
+ return nil, err
+ }
+
+ friendRequest := model.FriendRequest{
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ HandleMsg: req.HandleMsg,
+ HandleResult: req.HandleResult,
+ }
+ if req.HandleResult == constant.FriendResponseAgree {
+ if err := s.webhookBeforeAddFriendAgree(ctx, &s.config.WebhooksConfig.BeforeAddFriendAgree, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+ err := s.db.AgreeFriendRequest(ctx, &friendRequest)
+ if err != nil {
+ return nil, err
+ }
+ s.webhookAfterAddFriendAgree(ctx, &s.config.WebhooksConfig.AfterAddFriendAgree, req)
+ s.notificationSender.FriendApplicationAgreedNotification(ctx, req, true)
+ return resp, nil
+ }
+ if req.HandleResult == constant.FriendResponseRefuse {
+ err := s.db.RefuseFriendRequest(ctx, &friendRequest)
+ if err != nil {
+ return nil, err
+ }
+ s.notificationSender.FriendApplicationRefusedNotification(ctx, req)
+ return resp, nil
+ }
+	return nil, errs.ErrArgs.WrapMsg("HandleResult must be FriendResponseAgree or FriendResponseRefuse")
+}
+
+// DeleteFriend removes a friend from the owner's friend list.
+func (s *friendServer) DeleteFriend(ctx context.Context, req *relation.DeleteFriendReq) (resp *relation.DeleteFriendResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ _, err = s.db.FindFriendsWithError(ctx, req.OwnerUserID, []string{req.FriendUserID})
+ if err != nil {
+ return nil, err
+ }
+
+ if err := s.db.Delete(ctx, req.OwnerUserID, []string{req.FriendUserID}); err != nil {
+ return nil, err
+ }
+
+ s.notificationSender.FriendDeletedNotification(ctx, req)
+ s.webhookAfterDeleteFriend(ctx, &s.config.WebhooksConfig.AfterDeleteFriend, req)
+
+ return &relation.DeleteFriendResp{}, nil
+}
+
+// SetFriendRemark updates the remark the owner has set for a friend.
+func (s *friendServer) SetFriendRemark(ctx context.Context, req *relation.SetFriendRemarkReq) (resp *relation.SetFriendRemarkResp, err error) {
+ if err = s.webhookBeforeSetFriendRemark(ctx, &s.config.WebhooksConfig.BeforeSetFriendRemark, req); err != nil && err != servererrs.ErrCallbackContinue {
+ return nil, err
+ }
+
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ _, err = s.db.FindFriendsWithError(ctx, req.OwnerUserID, []string{req.FriendUserID})
+ if err != nil {
+ return nil, err
+ }
+
+ if err := s.db.UpdateRemark(ctx, req.OwnerUserID, req.FriendUserID, req.Remark); err != nil {
+ return nil, err
+ }
+
+ s.webhookAfterSetFriendRemark(ctx, &s.config.WebhooksConfig.AfterSetFriendRemark, req)
+ s.notificationSender.FriendRemarkSetNotification(ctx, req.OwnerUserID, req.FriendUserID)
+
+ return &relation.SetFriendRemarkResp{}, nil
+}
+
+func (s *friendServer) GetFriendInfo(ctx context.Context, req *relation.GetFriendInfoReq) (*relation.GetFriendInfoResp, error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ friends, err := s.db.FindFriendsWithError(ctx, req.OwnerUserID, req.FriendUserIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &relation.GetFriendInfoResp{FriendInfos: convert.FriendOnlyDB2PbOnly(friends)}, nil
+}
+
+func (s *friendServer) GetDesignatedFriends(ctx context.Context, req *relation.GetDesignatedFriendsReq) (resp *relation.GetDesignatedFriendsResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ resp = &relation.GetDesignatedFriendsResp{}
+ if datautil.Duplicate(req.FriendUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("friend userID repeated")
+ }
+ friends, err := s.getFriend(ctx, req.OwnerUserID, req.FriendUserIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &relation.GetDesignatedFriendsResp{
+ FriendsInfo: friends,
+ }, nil
+}
+
+func (s *friendServer) getFriend(ctx context.Context, ownerUserID string, friendUserIDs []string) ([]*sdkws.FriendInfo, error) {
+ if len(friendUserIDs) == 0 {
+ return nil, nil
+ }
+ friends, err := s.db.FindFriendsWithError(ctx, ownerUserID, friendUserIDs)
+ if err != nil {
+ return nil, err
+ }
+ return convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
+}
+
+// GetDesignatedFriendsApply returns the friend requests exchanged between the two specified users.
+func (s *friendServer) GetDesignatedFriendsApply(ctx context.Context, req *relation.GetDesignatedFriendsApplyReq) (resp *relation.GetDesignatedFriendsApplyResp, err error) {
+ if err := authverify.CheckAccessIn(ctx, req.FromUserID, req.ToUserID); err != nil {
+ return nil, err
+ }
+ friendRequests, err := s.db.FindBothFriendRequests(ctx, req.FromUserID, req.ToUserID)
+ if err != nil {
+ return nil, err
+ }
+ resp = &relation.GetDesignatedFriendsApplyResp{}
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.getCommonUserMap)
+ if err != nil {
+ return nil, err
+ }
+ return resp, nil
+}
+
+// GetPaginationFriendsApplyTo returns received friend requests (i.e., those initiated by others).
+func (s *friendServer) GetPaginationFriendsApplyTo(ctx context.Context, req *relation.GetPaginationFriendsApplyToReq) (resp *relation.GetPaginationFriendsApplyToResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+ handleResults := datautil.Slice(req.HandleResults, func(e int32) int {
+ return int(e)
+ })
+ total, friendRequests, err := s.db.PageFriendRequestToMe(ctx, req.UserID, handleResults, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+
+ resp = &relation.GetPaginationFriendsApplyToResp{}
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.getCommonUserMap)
+ if err != nil {
+ return nil, err
+ }
+
+ resp.Total = int32(total)
+
+ return resp, nil
+}
+
+func (s *friendServer) GetPaginationFriendsApplyFrom(ctx context.Context, req *relation.GetPaginationFriendsApplyFromReq) (resp *relation.GetPaginationFriendsApplyFromResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+ handleResults := datautil.Slice(req.HandleResults, func(e int32) int {
+ return int(e)
+ })
+ total, friendRequests, err := s.db.PageFriendRequestFromMe(ctx, req.UserID, handleResults, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+
+ resp = &relation.GetPaginationFriendsApplyFromResp{}
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.getCommonUserMap)
+ if err != nil {
+ return nil, err
+ }
+
+ resp.Total = int32(total)
+
+ return resp, nil
+}
+
+// IsFriend reports whether each user appears in the other's friend list.
+func (s *friendServer) IsFriend(ctx context.Context, req *relation.IsFriendReq) (resp *relation.IsFriendResp, err error) {
+ if err := authverify.CheckAccessIn(ctx, req.UserID1, req.UserID2); err != nil {
+ return nil, err
+ }
+ resp = &relation.IsFriendResp{}
+ resp.InUser1Friends, resp.InUser2Friends, err = s.db.CheckIn(ctx, req.UserID1, req.UserID2)
+ if err != nil {
+ return nil, err
+ }
+ return resp, nil
+}
+
+func (s *friendServer) GetPaginationFriends(ctx context.Context, req *relation.GetPaginationFriendsReq) (resp *relation.GetPaginationFriendsResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+ total, friends, err := s.db.PageOwnerFriends(ctx, req.UserID, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+
+ resp = &relation.GetPaginationFriendsResp{}
+ resp.FriendsInfo, err = convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
+ if err != nil {
+ return nil, err
+ }
+
+ resp.Total = int32(total)
+
+ return resp, nil
+}
+
+func (s *friendServer) GetFriendIDs(ctx context.Context, req *relation.GetFriendIDsReq) (resp *relation.GetFriendIDsResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+ resp = &relation.GetFriendIDsResp{}
+ resp.FriendIDs, err = s.db.FindFriendUserIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ return resp, nil
+}
+
+func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *relation.GetSpecifiedFriendsInfoReq) (*relation.GetSpecifiedFriendsInfoResp, error) {
+ if len(req.UserIDList) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("userIDList is empty")
+ }
+
+ if datautil.Duplicate(req.UserIDList) {
+ return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
+ }
+
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+ userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
+ if err != nil {
+ return nil, err
+ }
+
+ friends, err := s.db.FindFriendsWithError(ctx, req.OwnerUserID, req.UserIDList)
+ if err != nil {
+ return nil, err
+ }
+
+ blacks, err := s.blackDatabase.FindBlackInfos(ctx, req.OwnerUserID, req.UserIDList)
+ if err != nil {
+ return nil, err
+ }
+
+ friendMap := datautil.SliceToMap(friends, func(e *model.Friend) string {
+ return e.FriendUserID
+ })
+
+ blackMap := datautil.SliceToMap(blacks, func(e *model.Black) string {
+ return e.BlockUserID
+ })
+
+ resp := &relation.GetSpecifiedFriendsInfoResp{
+ Infos: make([]*relation.GetSpecifiedFriendsInfoInfo, 0, len(req.UserIDList)),
+ }
+
+ for _, userID := range req.UserIDList {
+ user := userMap[userID]
+
+ if user == nil {
+ continue
+ }
+
+ var friendInfo *sdkws.FriendInfo
+ if friend := friendMap[userID]; friend != nil {
+ friendInfo = &sdkws.FriendInfo{
+ OwnerUserID: friend.OwnerUserID,
+ Remark: friend.Remark,
+ CreateTime: friend.CreateTime.UnixMilli(),
+ AddSource: friend.AddSource,
+ OperatorUserID: friend.OperatorUserID,
+ Ex: friend.Ex,
+ IsPinned: friend.IsPinned,
+ }
+ }
+
+ var blackInfo *sdkws.BlackInfo
+ if black := blackMap[userID]; black != nil {
+ blackInfo = &sdkws.BlackInfo{
+ OwnerUserID: black.OwnerUserID,
+ CreateTime: black.CreateTime.UnixMilli(),
+ AddSource: black.AddSource,
+ OperatorUserID: black.OperatorUserID,
+ Ex: black.Ex,
+ }
+ }
+
+ resp.Infos = append(resp.Infos, &relation.GetSpecifiedFriendsInfoInfo{
+ UserInfo: user,
+ FriendInfo: friendInfo,
+ BlackInfo: blackInfo,
+ })
+ }
+
+ return resp, nil
+}
+
+func (s *friendServer) UpdateFriends(ctx context.Context, req *relation.UpdateFriendsReq) (*relation.UpdateFriendsResp, error) {
+ if len(req.FriendUserIDs) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("friendIDList is empty")
+ }
+ if datautil.Duplicate(req.FriendUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("friendIDList repeated")
+ }
+
+ if err := authverify.CheckAccess(ctx, req.OwnerUserID); err != nil {
+ return nil, err
+ }
+
+ _, err := s.db.FindFriendsWithError(ctx, req.OwnerUserID, req.FriendUserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ val := make(map[string]any)
+
+ if req.IsPinned != nil {
+ val["is_pinned"] = req.IsPinned.Value
+ }
+ if req.Remark != nil {
+ val["remark"] = req.Remark.Value
+ }
+ if req.Ex != nil {
+ val["ex"] = req.Ex.Value
+ }
+ if err = s.db.UpdateFriends(ctx, req.OwnerUserID, req.FriendUserIDs, val); err != nil {
+ return nil, err
+ }
+
+ resp := &relation.UpdateFriendsResp{}
+
+ s.notificationSender.FriendsInfoUpdateNotification(ctx, req.OwnerUserID, req.FriendUserIDs)
+ return resp, nil
+}
+
+func (s *friendServer) GetSelfUnhandledApplyCount(ctx context.Context, req *relation.GetSelfUnhandledApplyCountReq) (*relation.GetSelfUnhandledApplyCountResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+
+ count, err := s.db.GetUnhandledCount(ctx, req.UserID, req.Time)
+ if err != nil {
+ return nil, err
+ }
+
+ return &relation.GetSelfUnhandledApplyCountResp{
+ Count: count,
+ }, nil
+}
+
+func (s *friendServer) getCommonUserMap(ctx context.Context, userIDs []string) (map[string]common_user.CommonUser, error) {
+ users, err := s.userClient.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.SliceToMapAny(users, func(e *sdkws.UserInfo) (string, common_user.CommonUser) {
+ return e.UserID, e
+ }), nil
+}
diff --git a/internal/rpc/relation/notification.go b/internal/rpc/relation/notification.go
new file mode 100644
index 0000000..f981f2b
--- /dev/null
+++ b/internal/rpc/relation/notification.go
@@ -0,0 +1,303 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/versionctx"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/common_user"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+type FriendNotificationSender struct {
+ *notification.NotificationSender
+	// getUsersInfo resolves user info; injected via the WithDBFunc or WithRpcFunc option.
+ getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
+ // db controller
+ db controller.FriendDatabase
+}
+
+type friendNotificationSenderOptions func(*FriendNotificationSender)
+
+func WithFriendDB(db controller.FriendDatabase) friendNotificationSenderOptions {
+ return func(s *FriendNotificationSender) {
+ s.db = db
+ }
+}
+
+func WithDBFunc(fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error)) friendNotificationSenderOptions {
+ return func(s *FriendNotificationSender) {
+ f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ users, err := fn(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, user := range users {
+ result = append(result, user)
+ }
+ return result, nil
+ }
+ s.getUsersInfo = f
+ }
+}
+
+func WithRpcFunc(fn func(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error)) friendNotificationSenderOptions {
+ return func(s *FriendNotificationSender) {
+ f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ users, err := fn(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, user := range users {
+ result = append(result, user)
+ }
+ return result, err
+ }
+ s.getUsersInfo = f
+ }
+}
+
+func NewFriendNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient, opts ...friendNotificationSenderOptions) *FriendNotificationSender {
+ f := &FriendNotificationSender{
+ NotificationSender: notification.NewNotificationSender(conf, notification.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+ return msgClient.SendMsg(ctx, req)
+ })),
+ }
+ for _, opt := range opts {
+ opt(f)
+ }
+ return f
+}
+
+func (f *FriendNotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
+ users, err := f.getUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ result := make(map[string]*sdkws.UserInfo)
+ for _, user := range users {
+ result[user.GetUserID()] = user.(*sdkws.UserInfo)
+ }
+ return result, nil
+}
+
+//nolint:unused
+func (f *FriendNotificationSender) getFromToUserNickname(ctx context.Context, fromUserID, toUserID string) (string, string, error) {
+ users, err := f.getUsersInfoMap(ctx, []string{fromUserID, toUserID})
+ if err != nil {
+		return "", "", err
+ }
+ return users[fromUserID].Nickname, users[toUserID].Nickname, nil
+}
+
+func (f *FriendNotificationSender) UserInfoUpdatedNotification(ctx context.Context, changedUserID string) {
+ tips := sdkws.UserInfoUpdatedTips{UserID: changedUserID}
+ f.Notification(ctx, mcontext.GetOpUserID(ctx), changedUserID, constant.UserInfoUpdatedNotification, &tips)
+}
+
+func (f *FriendNotificationSender) getCommonUserMap(ctx context.Context, userIDs []string) (map[string]common_user.CommonUser, error) {
+ users, err := f.getUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.SliceToMap(users, func(e common_user.CommonUser) string {
+ return e.GetUserID()
+ }), nil
+}
+
+func (f *FriendNotificationSender) getFriendRequests(ctx context.Context, fromUserID, toUserID string) (*sdkws.FriendRequest, error) {
+ if f.db == nil {
+ return nil, errs.ErrInternalServer.WithDetail("db is nil")
+ }
+ friendRequests, err := f.db.FindBothFriendRequests(ctx, fromUserID, toUserID)
+ if err != nil {
+ return nil, err
+ }
+ requests, err := convert.FriendRequestDB2Pb(ctx, friendRequests, f.getCommonUserMap)
+ if err != nil {
+ return nil, err
+ }
+ for _, request := range requests {
+ if request.FromUserID == fromUserID && request.ToUserID == toUserID {
+ return request, nil
+ }
+ }
+ return nil, errs.ErrRecordNotFound.WrapMsg("friend request not found", "fromUserID", fromUserID, "toUserID", toUserID)
+}
+
+func (f *FriendNotificationSender) FriendApplicationAddNotification(ctx context.Context, req *relation.ApplyToAddFriendReq) {
+ request, err := f.getFriendRequests(ctx, req.FromUserID, req.ToUserID)
+ if err != nil {
+ log.ZError(ctx, "FriendApplicationAddNotification get friend request", err, "fromUserID", req.FromUserID, "toUserID", req.ToUserID)
+ return
+ }
+ tips := sdkws.FriendApplicationTips{
+ FromToUserID: &sdkws.FromToUserID{
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ },
+ Request: request,
+ }
+ f.Notification(ctx, req.FromUserID, req.ToUserID, constant.FriendApplicationNotification, &tips)
+}
+
+func (f *FriendNotificationSender) FriendApplicationAgreedNotification(ctx context.Context, req *relation.RespondFriendApplyReq, checkReq bool) {
+ var (
+ request *sdkws.FriendRequest
+ err error
+ )
+ if checkReq {
+ request, err = f.getFriendRequests(ctx, req.FromUserID, req.ToUserID)
+ if err != nil {
+ log.ZError(ctx, "FriendApplicationAgreedNotification get friend request", err, "fromUserID", req.FromUserID, "toUserID", req.ToUserID)
+ return
+ }
+ }
+ tips := sdkws.FriendApplicationApprovedTips{
+ FromToUserID: &sdkws.FromToUserID{
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ },
+ HandleMsg: req.HandleMsg,
+ Request: request,
+ }
+ f.Notification(ctx, req.ToUserID, req.FromUserID, constant.FriendApplicationApprovedNotification, &tips)
+}
+
+func (f *FriendNotificationSender) FriendApplicationRefusedNotification(ctx context.Context, req *relation.RespondFriendApplyReq) {
+ request, err := f.getFriendRequests(ctx, req.FromUserID, req.ToUserID)
+ if err != nil {
+ log.ZError(ctx, "FriendApplicationRefusedNotification get friend request", err, "fromUserID", req.FromUserID, "toUserID", req.ToUserID)
+ return
+ }
+ tips := sdkws.FriendApplicationRejectedTips{
+ FromToUserID: &sdkws.FromToUserID{
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ },
+ HandleMsg: req.HandleMsg,
+ Request: request,
+ }
+ f.Notification(ctx, req.ToUserID, req.FromUserID, constant.FriendApplicationRejectedNotification, &tips)
+}
+
+//func (f *FriendNotificationSender) FriendAddedNotification(ctx context.Context, operationID, opUserID, fromUserID, toUserID string) error {
+// tips := sdkws.FriendAddedTips{Friend: &sdkws.FriendInfo{}, OpUser: &sdkws.PublicUserInfo{}}
+// user, err := f.getUsersInfo(ctx, []string{opUserID})
+// if err != nil {
+// return err
+// }
+// tips.OpUser.UserID = user[0].GetUserID()
+// tips.OpUser.Ex = user[0].GetEx()
+// tips.OpUser.Nickname = user[0].GetNickname()
+// tips.OpUser.FaceURL = user[0].GetFaceURL()
+// friends, err := f.db.FindFriendsWithError(ctx, fromUserID, []string{toUserID})
+// if err != nil {
+// return err
+// }
+// tips.Friend, err = convert.FriendDB2Pb(ctx, friends[0], f.getUsersInfoMap)
+// if err != nil {
+// return err
+// }
+// f.Notification(ctx, fromUserID, toUserID, constant.FriendAddedNotification, &tips)
+// return nil
+//}
+
+func (f *FriendNotificationSender) FriendDeletedNotification(ctx context.Context, req *relation.DeleteFriendReq) {
+ tips := sdkws.FriendDeletedTips{FromToUserID: &sdkws.FromToUserID{
+ FromUserID: req.OwnerUserID,
+ ToUserID: req.FriendUserID,
+ }}
+ f.Notification(ctx, req.OwnerUserID, req.FriendUserID, constant.FriendDeletedNotification, &tips)
+}
+
+func (f *FriendNotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
+ versions := versionctx.GetVersionLog(ctx).Get()
+ for _, coll := range versions {
+ if coll.Name == collName && coll.Doc.DID == id {
+ *version = uint64(coll.Doc.Version)
+ *versionID = coll.Doc.ID.Hex()
+ return
+ }
+ }
+}
+
+func (f *FriendNotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
+ versions := versionctx.GetVersionLog(ctx).Get()
+ for _, coll := range versions {
+ if coll.Name == collName && coll.Doc.DID == id {
+ *version = uint64(coll.Doc.Version)
+ *versionID = coll.Doc.ID.Hex()
+ for _, elem := range coll.Doc.Logs {
+ if elem.EID == relationtb.VersionSortChangeID {
+ *sortVersion = uint64(elem.Version)
+ }
+ }
+ }
+ }
+}
+
+func (f *FriendNotificationSender) FriendRemarkSetNotification(ctx context.Context, fromUserID, toUserID string) {
+ tips := sdkws.FriendInfoChangedTips{FromToUserID: &sdkws.FromToUserID{}}
+ tips.FromToUserID.FromUserID = fromUserID
+ tips.FromToUserID.ToUserID = toUserID
+ f.setSortVersion(ctx, &tips.FriendVersion, &tips.FriendVersionID, database.FriendVersionName, toUserID, &tips.FriendSortVersion)
+ f.Notification(ctx, fromUserID, toUserID, constant.FriendRemarkSetNotification, &tips)
+}
+
+func (f *FriendNotificationSender) FriendsInfoUpdateNotification(ctx context.Context, toUserID string, friendIDs []string) {
+ tips := sdkws.FriendsInfoUpdateTips{FromToUserID: &sdkws.FromToUserID{}}
+ tips.FromToUserID.ToUserID = toUserID
+ tips.FriendIDs = friendIDs
+ f.Notification(ctx, toUserID, toUserID, constant.FriendsInfoUpdateNotification, &tips)
+}
+
+func (f *FriendNotificationSender) BlackAddedNotification(ctx context.Context, req *relation.AddBlackReq) {
+ tips := sdkws.BlackAddedTips{FromToUserID: &sdkws.FromToUserID{}}
+ tips.FromToUserID.FromUserID = req.OwnerUserID
+ tips.FromToUserID.ToUserID = req.BlackUserID
+ f.Notification(ctx, req.OwnerUserID, req.BlackUserID, constant.BlackAddedNotification, &tips)
+}
+
+func (f *FriendNotificationSender) BlackDeletedNotification(ctx context.Context, req *relation.RemoveBlackReq) {
+ blackDeletedTips := sdkws.BlackDeletedTips{FromToUserID: &sdkws.FromToUserID{
+ FromUserID: req.OwnerUserID,
+ ToUserID: req.BlackUserID,
+ }}
+ f.Notification(ctx, req.OwnerUserID, req.BlackUserID, constant.BlackDeletedNotification, &blackDeletedTips)
+}
+
+func (f *FriendNotificationSender) FriendInfoUpdatedNotification(ctx context.Context, changedUserID string, needNotifiedUserID string) {
+ tips := sdkws.UserInfoUpdatedTips{UserID: changedUserID}
+ f.Notification(ctx, mcontext.GetOpUserID(ctx), needNotifiedUserID, constant.FriendInfoUpdatedNotification, &tips)
+}
diff --git a/internal/rpc/relation/sync.go b/internal/rpc/relation/sync.go
new file mode 100644
index 0000000..489f734
--- /dev/null
+++ b/internal/rpc/relation/sync.go
@@ -0,0 +1,108 @@
+package relation
+
+import (
+ "context"
+ "slices"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/hashutil"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/incrversion"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/relation"
+)
+
+func (s *friendServer) NotificationUserInfoUpdate(ctx context.Context, req *relation.NotificationUserInfoUpdateReq) (*relation.NotificationUserInfoUpdateResp, error) {
+ userIDs, err := s.db.FindFriendUserIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if len(userIDs) > 0 {
+ friendUserIDs := []string{req.UserID}
+ noCancelCtx := context.WithoutCancel(ctx)
+ err := s.queue.PushCtx(ctx, func() {
+ for _, userID := range userIDs {
+ if err := s.db.OwnerIncrVersion(noCancelCtx, userID, friendUserIDs, model.VersionStateUpdate); err != nil {
+ log.ZError(ctx, "OwnerIncrVersion", err, "userID", userID, "friendUserIDs", friendUserIDs)
+ }
+ }
+ for _, userID := range userIDs {
+ s.notificationSender.FriendInfoUpdatedNotification(noCancelCtx, req.UserID, userID)
+ }
+ })
+ if err != nil {
+ log.ZError(ctx, "NotificationUserInfoUpdate timeout", err, "userID", req.UserID)
+ }
+ }
+ return &relation.NotificationUserInfoUpdateResp{}, nil
+}
+
+func (s *friendServer) GetFullFriendUserIDs(ctx context.Context, req *relation.GetFullFriendUserIDsReq) (*relation.GetFullFriendUserIDsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ vl, err := s.db.FindMaxFriendVersionCache(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ userIDs, err := s.db.FindFriendUserIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ idHash := hashutil.IdHash(userIDs)
+ if req.IdHash == idHash {
+ userIDs = nil
+ }
+ return &relation.GetFullFriendUserIDsResp{
+ Version: idHash,
+ VersionID: vl.ID.Hex(),
+ Equal: req.IdHash == idHash,
+ UserIDs: userIDs,
+ }, nil
+}
+
+func (s *friendServer) GetIncrementalFriends(ctx context.Context, req *relation.GetIncrementalFriendsReq) (*relation.GetIncrementalFriendsResp, error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ var sortVersion uint64
+ opt := incrversion.Option[*sdkws.FriendInfo, relation.GetIncrementalFriendsResp]{
+ Ctx: ctx,
+ VersionKey: req.UserID,
+ VersionID: req.VersionID,
+ VersionNumber: req.Version,
+ Version: func(ctx context.Context, ownerUserID string, version uint, limit int) (*model.VersionLog, error) {
+ vl, err := s.db.FindFriendIncrVersion(ctx, ownerUserID, version, limit)
+ if err != nil {
+ return nil, err
+ }
+ vl.Logs = slices.DeleteFunc(vl.Logs, func(elem model.VersionLogElem) bool {
+ if elem.EID == model.VersionSortChangeID {
+ vl.LogLen--
+ sortVersion = uint64(elem.Version)
+ return true
+ }
+ return false
+ })
+ return vl, nil
+ },
+ CacheMaxVersion: s.db.FindMaxFriendVersionCache,
+ Find: func(ctx context.Context, ids []string) ([]*sdkws.FriendInfo, error) {
+ return s.getFriend(ctx, req.UserID, ids)
+ },
+ Resp: func(version *model.VersionLog, deleteIds []string, insertList, updateList []*sdkws.FriendInfo, full bool) *relation.GetIncrementalFriendsResp {
+ return &relation.GetIncrementalFriendsResp{
+ VersionID: version.ID.Hex(),
+ Version: uint64(version.Version),
+ Full: full,
+ Delete: deleteIds,
+ Insert: insertList,
+ Update: updateList,
+ SortVersion: sortVersion,
+ }
+ },
+ }
+ return opt.Build()
+}
diff --git a/internal/rpc/third/log.go b/internal/rpc/third/log.go
new file mode 100644
index 0000000..47cc5ce
--- /dev/null
+++ b/internal/rpc/third/log.go
@@ -0,0 +1,233 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package third
+
+import (
+ "context"
+ "crypto/rand"
+ "net/url"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func genLogID() string {
+ const dataLen = 10
+ data := make([]byte, dataLen)
+	_, _ = rand.Read(data) // error deliberately ignored: crypto/rand.Read is documented to always fill the buffer
+ chars := []byte("0123456789")
+ for i := 0; i < len(data); i++ {
+ if i == 0 {
+			data[i] = chars[1:][data[i]%9] // first digit is non-zero so the ID keeps its full length
+ } else {
+ data[i] = chars[data[i]%10]
+ }
+ }
+ return string(data)
+}
+
+// extractKeyFromLogURL extracts the S3 object key from a log URL.
+// URL formats: https://s3.jizhying.com/images/openim/data/hash/{hash}?...
+// or: https://chatall.oss-ap-southeast-1.aliyuncs.com/openim%2Fdata%2Fhash%2F{hash}
+// Key format: openim/data/hash/{hash} (without the bucket name; the bucket
+// name is the first path segment, e.g. "images", and must be stripped).
+func extractKeyFromLogURL(logURL string, bucketName string) string {
+ if logURL == "" {
+ return ""
+ }
+ parsedURL, err := url.Parse(logURL)
+ if err != nil {
+ return ""
+ }
+	// Take the path portion and strip the leading '/'
+ path := strings.TrimPrefix(parsedURL.Path, "/")
+ if path == "" {
+ return ""
+ }
+
+	// If a bucket name is configured and the path starts with it, strip that prefix
+ if bucketName != "" && strings.HasPrefix(path, bucketName+"/") {
+ path = strings.TrimPrefix(path, bucketName+"/")
+ } else {
+		// No bucket-name match: fall back to stripping the first path segment,
+		// assuming it is the bucket name
+ parts := strings.SplitN(path, "/", 2)
+ if len(parts) > 1 {
+ path = parts[1]
+ }
+ }
+
+	// url.URL.Path is already percent-decoded, so return it directly
+ return path
+}
+
+func (t *thirdServer) UploadLogs(ctx context.Context, req *third.UploadLogsReq) (*third.UploadLogsResp, error) {
+ var dbLogs []*relationtb.Log
+ userID := mcontext.GetOpUserID(ctx)
+ platform := constant.PlatformID2Name[int(req.Platform)]
+ for _, fileURL := range req.FileURLs {
+ log := relationtb.Log{
+ Platform: platform,
+ UserID: userID,
+ CreateTime: time.Now(),
+ Url: fileURL.URL,
+ FileName: fileURL.Filename,
+ AppFramework: req.AppFramework,
+ Version: req.Version,
+ Ex: req.Ex,
+ }
+ for i := 0; i < 20; i++ {
+ id := genLogID()
+ logs, err := t.thirdDatabase.GetLogs(ctx, []string{id}, "")
+ if err != nil {
+ return nil, err
+ }
+ if len(logs) == 0 {
+ log.LogID = id
+ break
+ }
+ }
+ if log.LogID == "" {
+			return nil, servererrs.ErrData.WrapMsg("failed to generate a unique log ID after 20 attempts")
+ }
+ dbLogs = append(dbLogs, &log)
+ }
+ err := t.thirdDatabase.UploadLogs(ctx, dbLogs)
+ if err != nil {
+ return nil, err
+ }
+ return &third.UploadLogsResp{}, nil
+}
+
+func (t *thirdServer) DeleteLogs(ctx context.Context, req *third.DeleteLogsReq) (*third.DeleteLogsResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ userID := ""
+ logs, err := t.thirdDatabase.GetLogs(ctx, req.LogIDs, userID)
+ if err != nil {
+ return nil, err
+ }
+ var logIDs []string
+ for _, log := range logs {
+ logIDs = append(logIDs, log.LogID)
+ }
+ if ids := datautil.Single(req.LogIDs, logIDs); len(ids) > 0 {
+ return nil, errs.ErrRecordNotFound.WrapMsg("logIDs not found", "logIDs", ids)
+ }
+
+	// Before deleting the log records, delete the corresponding S3 files
+ engine := t.config.RpcConfig.Object.Enable
+ if engine != "" && t.s3 != nil {
+		// Resolve the bucket name (from the MinIO config)
+ bucketName := ""
+ if engine == "minio" {
+ bucketName = t.config.MinioConfig.Bucket
+ }
+
+ for _, logRecord := range logs {
+ if logRecord.Url == "" {
+ continue
+ }
+			// Extract the S3 key (without the bucket name) from the URL
+ key := extractKeyFromLogURL(logRecord.Url, bucketName)
+ if key == "" {
+ log.ZDebug(ctx, "DeleteLogs: cannot extract key from URL, skipping S3 deletion", "logID", logRecord.LogID, "url", logRecord.Url)
+ continue
+ }
+			// Delete the S3 object directly by key
+ log.ZInfo(ctx, "DeleteLogs: attempting to delete S3 file", "logID", logRecord.LogID, "url", logRecord.Url, "key", key, "bucket", bucketName, "engine", engine)
+ if err := t.s3.DeleteObject(ctx, key); err != nil {
+				// S3 deletion failed: return the error and keep the database record
+ log.ZError(ctx, "DeleteLogs: S3 file delete failed", err, "logID", logRecord.LogID, "url", logRecord.Url, "key", key, "bucket", bucketName, "engine", engine)
+ return nil, errs.WrapMsg(err, "failed to delete S3 file for log", "logID", logRecord.LogID, "url", logRecord.Url, "key", key)
+ }
+ log.ZInfo(ctx, "DeleteLogs: S3 file delete command executed successfully", "logID", logRecord.LogID, "url", logRecord.Url, "key", key, "bucket", bucketName, "engine", engine)
+ }
+ }
+
+ err = t.thirdDatabase.DeleteLogs(ctx, req.LogIDs, userID)
+ if err != nil {
+ return nil, err
+ }
+
+ return &third.DeleteLogsResp{}, nil
+}
+
+func dbToPbLogInfos(logs []*relationtb.Log) []*third.LogInfo {
+ db2pbForLogInfo := func(log *relationtb.Log) *third.LogInfo {
+ return &third.LogInfo{
+ Filename: log.FileName,
+ UserID: log.UserID,
+ Platform: log.Platform,
+ Url: log.Url,
+ CreateTime: log.CreateTime.UnixMilli(),
+ LogID: log.LogID,
+ SystemType: log.SystemType,
+ Version: log.Version,
+ Ex: log.Ex,
+ }
+ }
+ return datautil.Slice(logs, db2pbForLogInfo)
+}
+
+func (t *thirdServer) SearchLogs(ctx context.Context, req *third.SearchLogsReq) (*third.SearchLogsResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ var (
+ resp third.SearchLogsResp
+ userIDs []string
+ )
+ if req.StartTime > req.EndTime {
+ return nil, errs.ErrArgs.WrapMsg("startTime>endTime")
+ }
+ if req.StartTime == 0 && req.EndTime == 0 {
+		t := time.Date(2019, time.January, 1, 0, 0, 0, 0, time.UTC)
+		timestampMillis := t.UnixMilli()
+		req.StartTime = timestampMillis
+		req.EndTime = time.Now().UnixMilli()
+ }
+
+ total, logs, err := t.thirdDatabase.SearchLogs(ctx, req.Keyword, time.UnixMilli(req.StartTime), time.UnixMilli(req.EndTime), req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ pbLogs := dbToPbLogInfos(logs)
+ for _, log := range logs {
+ userIDs = append(userIDs, log.UserID)
+ }
+ userMap, err := t.userClient.GetUsersInfoMap(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, pbLog := range pbLogs {
+ if user, ok := userMap[pbLog.UserID]; ok {
+ pbLog.Nickname = user.Nickname
+ }
+ }
+ resp.LogsInfos = pbLogs
+ resp.Total = uint32(total)
+ return &resp, nil
+}
diff --git a/internal/rpc/third/r2.go b/internal/rpc/third/r2.go
new file mode 100644
index 0000000..63073df
--- /dev/null
+++ b/internal/rpc/third/r2.go
@@ -0,0 +1,446 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package third
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "net/http"
+ "net/url"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/aws/aws-sdk-go-v2/aws"
+ v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+ awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+ "github.com/aws/aws-sdk-go-v2/credentials"
+ aws3 "github.com/aws/aws-sdk-go-v2/service/s3"
+ "github.com/aws/aws-sdk-go-v2/service/s3/types"
+ "github.com/openimsdk/tools/s3"
+)
+
+const (
+ minPartSize int64 = 1024 * 1024 * 5 // 5MB
+ maxPartSize int64 = 1024 * 1024 * 1024 * 5 // 5GB
+ maxNumSize int64 = 10000
+)
+
+type R2Config struct {
+ Endpoint string
+ Region string
+ Bucket string
+ AccessKeyID string
+ SecretAccessKey string
+ SessionToken string
+}
+
+// NewR2 creates an S3-compatible client for Cloudflare R2.
+func NewR2(conf R2Config) (*R2, error) {
+ if conf.Endpoint == "" {
+ return nil, errors.New("endpoint is required for R2")
+ }
+
+	// Create an HTTP client with reasonable timeouts
+ httpClient := &http.Client{
+ Timeout: 30 * time.Second,
+ Transport: &http.Transport{
+ MaxIdleConns: 100,
+ MaxIdleConnsPerHost: 10,
+ IdleConnTimeout: 90 * time.Second,
+ TLSHandshakeTimeout: 10 * time.Second,
+ },
+ }
+
+ cfg := aws.Config{
+ Region: conf.Region,
+ Credentials: credentials.NewStaticCredentialsProvider(conf.AccessKeyID, conf.SecretAccessKey, conf.SessionToken),
+ HTTPClient: httpClient,
+ }
+
+	// Create the S3 client with path-style access (required by R2) and a custom endpoint
+ client := aws3.NewFromConfig(cfg, func(o *aws3.Options) {
+ o.BaseEndpoint = aws.String(conf.Endpoint)
+ o.UsePathStyle = true
+ })
+
+ r2 := &R2{
+ bucket: conf.Bucket,
+ client: client,
+ presign: aws3.NewPresignClient(client),
+ }
+
+	// Test the connection: list the bucket (verifies it exists and is accessible) with a 5-second timeout
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
+
+ fmt.Printf("[R2] Testing connection to bucket '%s' at endpoint '%s'...\n", conf.Bucket, conf.Endpoint)
+ _, err := client.ListObjectsV2(ctx, &aws3.ListObjectsV2Input{
+ Bucket: aws.String(conf.Bucket),
+ MaxKeys: aws.Int32(1),
+ })
+ if err != nil {
+		// Detailed error information
+ var respErr *awshttp.ResponseError
+ if errors.As(err, &respErr) {
+ fmt.Printf("[R2] Bucket verification HTTP error:\n")
+ fmt.Printf(" Status Code: %d\n", respErr.Response.StatusCode)
+ fmt.Printf(" Status: %s\n", respErr.Response.Status)
+ }
+ fmt.Printf("[R2] Warning: failed to verify R2 bucket '%s' at endpoint '%s': %v\n", conf.Bucket, conf.Endpoint, err)
+ fmt.Printf("[R2] Please ensure:\n")
+ fmt.Printf(" 1. Bucket '%s' exists in your R2 account\n", conf.Bucket)
+ fmt.Printf(" 2. API credentials have correct permissions (Object Read & Write)\n")
+ fmt.Printf(" 3. Account ID in endpoint matches your R2 account\n")
+ } else {
+ fmt.Printf("[R2] Successfully connected to bucket '%s'\n", conf.Bucket)
+ }
+
+ return r2, nil
+}
+
+type R2 struct {
+ bucket string
+ client *aws3.Client
+ presign *aws3.PresignClient
+}
+
+func (r *R2) Engine() string {
+ return "aws"
+}
+
+func (r *R2) PartLimit() (*s3.PartLimit, error) {
+ return &s3.PartLimit{
+ MinPartSize: minPartSize,
+ MaxPartSize: maxPartSize,
+ MaxNumSize: maxNumSize,
+ }, nil
+}
+
+func (r *R2) formatETag(etag string) string {
+ return strings.Trim(etag, `"`)
+}
+
+func (r *R2) PartSize(ctx context.Context, size int64) (int64, error) {
+ if size <= 0 {
+ return 0, errors.New("size must be greater than 0")
+ }
+ if size > maxPartSize*maxNumSize {
+		return 0, errors.New("size must be less than the maximum allowed limit")
+ }
+ if size <= minPartSize*maxNumSize {
+ return minPartSize, nil
+ }
+ partSize := size / maxNumSize
+ if size%maxNumSize != 0 {
+ partSize++
+ }
+ return partSize, nil
+}
+
+func (r *R2) IsNotFound(err error) bool {
+ var respErr *awshttp.ResponseError
+ if !errors.As(err, &respErr) {
+ return false
+ }
+ if respErr == nil || respErr.Response == nil {
+ return false
+ }
+ return respErr.Response.StatusCode == http.StatusNotFound
+}
+
+func (r *R2) PresignedPutObject(ctx context.Context, name string, expire time.Duration, opt *s3.PutOption) (*s3.PresignedPutResult, error) {
+ res, err := r.presign.PresignPutObject(ctx, &aws3.PutObjectInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ }, aws3.WithPresignExpires(expire), withDisableHTTPPresignerHeaderV4(nil))
+ if err != nil {
+ return nil, err
+ }
+ return &s3.PresignedPutResult{URL: res.URL}, nil
+}
+
+func (r *R2) DeleteObject(ctx context.Context, name string) error {
+ _, err := r.client.DeleteObject(ctx, &aws3.DeleteObjectInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ })
+ return err
+}
+
+func (r *R2) CopyObject(ctx context.Context, src string, dst string) (*s3.CopyObjectInfo, error) {
+ res, err := r.client.CopyObject(ctx, &aws3.CopyObjectInput{
+ Bucket: aws.String(r.bucket),
+ CopySource: aws.String(r.bucket + "/" + src),
+ Key: aws.String(dst),
+ })
+ if err != nil {
+ return nil, err
+ }
+ if res.CopyObjectResult == nil || res.CopyObjectResult.ETag == nil || *res.CopyObjectResult.ETag == "" {
+ return nil, errors.New("CopyObject etag is nil")
+ }
+ return &s3.CopyObjectInfo{
+ Key: dst,
+ ETag: r.formatETag(*res.CopyObjectResult.ETag),
+ }, nil
+}
+
+func (r *R2) StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error) {
+ res, err := r.client.HeadObject(ctx, &aws3.HeadObjectInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ })
+ if err != nil {
+ return nil, err
+ }
+ if res.ETag == nil || *res.ETag == "" {
+ return nil, errors.New("GetObjectAttributes etag is nil")
+ }
+ if res.ContentLength == nil {
+ return nil, errors.New("GetObjectAttributes object size is nil")
+ }
+ info := &s3.ObjectInfo{
+ ETag: r.formatETag(*res.ETag),
+ Key: name,
+ Size: *res.ContentLength,
+ }
+ if res.LastModified == nil {
+ info.LastModified = time.Unix(0, 0)
+ } else {
+ info.LastModified = *res.LastModified
+ }
+ return info, nil
+}
+
+func (r *R2) InitiateMultipartUpload(ctx context.Context, name string, opt *s3.PutOption) (*s3.InitiateMultipartUploadResult, error) {
+ startTime := time.Now()
+ fmt.Printf("[R2] InitiateMultipartUpload start: bucket=%s, key=%s\n", r.bucket, name)
+
+ input := &aws3.CreateMultipartUploadInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ }
+
+	// Add the ContentType to the request if provided
+ if opt != nil && opt.ContentType != "" {
+ input.ContentType = aws.String(opt.ContentType)
+ fmt.Printf("[R2] ContentType: %s\n", opt.ContentType)
+ }
+
+ res, err := r.client.CreateMultipartUpload(ctx, input)
+ duration := time.Since(startTime)
+
+ if err != nil {
+		// Detailed error information
+ var respErr *awshttp.ResponseError
+ if errors.As(err, &respErr) {
+ fmt.Printf("[R2] HTTP Response Error after %v:\n", duration)
+ fmt.Printf(" Status Code: %d\n", respErr.Response.StatusCode)
+ fmt.Printf(" Status: %s\n", respErr.Response.Status)
+ if respErr.Response.Body != nil {
+ body := make([]byte, 1024)
+ n, _ := respErr.Response.Body.Read(body)
+ fmt.Printf(" Body: %s\n", string(body[:n]))
+ }
+ }
+ fmt.Printf("[R2] InitiateMultipartUpload failed after %v: %v\n", duration, err)
+ return nil, fmt.Errorf("CreateMultipartUpload failed (bucket=%s, key=%s): %w", r.bucket, name, err)
+ }
+ if res.UploadId == nil || *res.UploadId == "" {
+ return nil, errors.New("CreateMultipartUpload upload id is nil")
+ }
+
+ fmt.Printf("[R2] InitiateMultipartUpload success after %v: uploadID=%s\n", duration, *res.UploadId)
+ return &s3.InitiateMultipartUploadResult{
+ Key: name,
+ Bucket: r.bucket,
+ UploadID: *res.UploadId,
+ }, nil
+}
+
+func (r *R2) CompleteMultipartUpload(ctx context.Context, uploadID string, name string, parts []s3.Part) (*s3.CompleteMultipartUploadResult, error) {
+ params := &aws3.CompleteMultipartUploadInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ UploadId: aws.String(uploadID),
+ MultipartUpload: &types.CompletedMultipartUpload{
+ Parts: make([]types.CompletedPart, 0, len(parts)),
+ },
+ }
+ for _, part := range parts {
+ params.MultipartUpload.Parts = append(params.MultipartUpload.Parts, types.CompletedPart{
+ ETag: aws.String(part.ETag),
+ PartNumber: aws.Int32(int32(part.PartNumber)),
+ })
+ }
+ res, err := r.client.CompleteMultipartUpload(ctx, params)
+ if err != nil {
+ return nil, err
+ }
+ if res.ETag == nil || *res.ETag == "" {
+ return nil, errors.New("CompleteMultipartUpload etag is nil")
+ }
+ info := &s3.CompleteMultipartUploadResult{
+ Key: name,
+ Bucket: r.bucket,
+ ETag: r.formatETag(*res.ETag),
+ }
+ if res.Location != nil {
+ info.Location = *res.Location
+ }
+ return info, nil
+}
+
+func (r *R2) AbortMultipartUpload(ctx context.Context, uploadID string, name string) error {
+ _, err := r.client.AbortMultipartUpload(ctx, &aws3.AbortMultipartUploadInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ UploadId: aws.String(uploadID),
+ })
+ return err
+}
+
+func (r *R2) ListUploadedParts(ctx context.Context, uploadID string, name string, partNumberMarker int, maxParts int) (*s3.ListUploadedPartsResult, error) {
+ params := &aws3.ListPartsInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ UploadId: aws.String(uploadID),
+ PartNumberMarker: aws.String(strconv.Itoa(partNumberMarker)),
+ MaxParts: aws.Int32(int32(maxParts)),
+ }
+ res, err := r.client.ListParts(ctx, params)
+ if err != nil {
+ return nil, err
+ }
+ info := &s3.ListUploadedPartsResult{
+ Key: name,
+ UploadID: uploadID,
+ UploadedParts: make([]s3.UploadedPart, 0, len(res.Parts)),
+ }
+ if res.MaxParts != nil {
+ info.MaxParts = int(*res.MaxParts)
+ }
+ if res.NextPartNumberMarker != nil {
+ info.NextPartNumberMarker, _ = strconv.Atoi(*res.NextPartNumberMarker)
+ }
+ for _, part := range res.Parts {
+ var val s3.UploadedPart
+ if part.PartNumber != nil {
+ val.PartNumber = int(*part.PartNumber)
+ }
+ if part.LastModified != nil {
+ val.LastModified = *part.LastModified
+ }
+ if part.Size != nil {
+ val.Size = *part.Size
+ }
+ info.UploadedParts = append(info.UploadedParts, val)
+ }
+ return info, nil
+}
+
+func (r *R2) AuthSign(ctx context.Context, uploadID string, name string, expire time.Duration, partNumbers []int) (*s3.AuthSignResult, error) {
+ res := &s3.AuthSignResult{
+ Parts: make([]s3.SignPart, 0, len(partNumbers)),
+ }
+ params := &aws3.UploadPartInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ UploadId: aws.String(uploadID),
+ }
+ opt := aws3.WithPresignExpires(expire)
+ for _, number := range partNumbers {
+ params.PartNumber = aws.Int32(int32(number))
+ val, err := r.presign.PresignUploadPart(ctx, params, opt)
+ if err != nil {
+ return nil, err
+ }
+ u, err := url.Parse(val.URL)
+ if err != nil {
+ return nil, err
+ }
+ query := u.Query()
+ u.RawQuery = ""
+ urlstr := u.String()
+ if res.URL == "" {
+ res.URL = urlstr
+ }
+ if res.URL == urlstr {
+ urlstr = ""
+ }
+ res.Parts = append(res.Parts, s3.SignPart{
+ PartNumber: number,
+ URL: urlstr,
+ Query: query,
+ Header: val.SignedHeader,
+ })
+ }
+ return res, nil
+}
+
+func (r *R2) AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (string, error) {
+ params := &aws3.GetObjectInput{
+ Bucket: aws.String(r.bucket),
+ Key: aws.String(name),
+ }
+ res, err := r.presign.PresignGetObject(ctx, params, aws3.WithPresignExpires(expire), withDisableHTTPPresignerHeaderV4(opt))
+ if err != nil {
+ return "", err
+ }
+ return res.URL, nil
+}
+
+func (r *R2) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ return nil, errors.New("R2 does not currently support form data file uploads")
+}
+
+func withDisableHTTPPresignerHeaderV4(opt *s3.AccessURLOption) func(options *aws3.PresignOptions) {
+ return func(options *aws3.PresignOptions) {
+ options.Presigner = &disableHTTPPresignerHeaderV4{
+ opt: opt,
+ presigner: options.Presigner,
+ }
+ }
+}
+
+type disableHTTPPresignerHeaderV4 struct {
+ opt *s3.AccessURLOption
+ presigner aws3.HTTPPresignerV4
+}
+
+func (d *disableHTTPPresignerHeaderV4) PresignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) (url string, signedHeader http.Header, err error) {
+ optFns = append(optFns, func(options *v4.SignerOptions) {
+ options.DisableHeaderHoisting = true
+ })
+ r.Header.Del("Amz-Sdk-Request")
+ d.setOption(r.URL)
+ return d.presigner.PresignHTTP(ctx, credentials, r, payloadHash, service, region, signingTime, optFns...)
+}
+
+func (d *disableHTTPPresignerHeaderV4) setOption(u *url.URL) {
+ if d.opt == nil {
+ return
+ }
+ query := u.Query()
+ if d.opt.ContentType != "" {
+ query.Set("response-content-type", d.opt.ContentType)
+ }
+ if d.opt.Filename != "" {
+ query.Set("response-content-disposition", `attachment; filename*=UTF-8''`+url.PathEscape(d.opt.Filename))
+ }
+ u.RawQuery = query.Encode()
+}
diff --git a/internal/rpc/third/s3.go b/internal/rpc/third/s3.go
new file mode 100644
index 0000000..cb2c99e
--- /dev/null
+++ b/internal/rpc/third/s3.go
@@ -0,0 +1,352 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package third
+
+import (
+ "context"
+ "encoding/base64"
+ "encoding/hex"
+ "encoding/json"
+ "path"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/google/uuid"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/cont"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (t *thirdServer) PartLimit(ctx context.Context, req *third.PartLimitReq) (*third.PartLimitResp, error) {
+ limit, err := t.s3dataBase.PartLimit()
+ if err != nil {
+ return nil, err
+ }
+ return &third.PartLimitResp{
+ MinPartSize: limit.MinPartSize,
+ MaxPartSize: limit.MaxPartSize,
+ MaxNumSize: int32(limit.MaxNumSize),
+ }, nil
+}
+
+func (t *thirdServer) PartSize(ctx context.Context, req *third.PartSizeReq) (*third.PartSizeResp, error) {
+ size, err := t.s3dataBase.PartSize(ctx, req.Size)
+ if err != nil {
+ return nil, err
+ }
+ return &third.PartSizeResp{Size: size}, nil
+}
+
+func (t *thirdServer) InitiateMultipartUpload(ctx context.Context, req *third.InitiateMultipartUploadReq) (*third.InitiateMultipartUploadResp, error) {
+ if err := t.checkUploadName(ctx, req.Name); err != nil {
+ return nil, err
+ }
+ expireTime := time.Now().Add(t.defaultExpire)
+ result, err := t.s3dataBase.InitiateMultipartUpload(ctx, req.Hash, req.Size, t.defaultExpire, int(req.MaxParts), req.ContentType)
+ if err != nil {
+ if haErr, ok := errs.Unwrap(err).(*cont.HashAlreadyExistsError); ok {
+ obj := &model.Object{
+ Name: req.Name,
+ UserID: mcontext.GetOpUserID(ctx),
+ Hash: req.Hash,
+ Key: haErr.Object.Key,
+ Size: haErr.Object.Size,
+ ContentType: req.ContentType,
+ Group: req.Cause,
+ CreateTime: time.Now(),
+ }
+ if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
+ return nil, err
+ }
+
+			// Fetch the real object-storage (OSS) URL
+ _, rawURL, err := t.s3dataBase.AccessURL(ctx, obj.Name, t.defaultExpire, nil)
+ if err != nil {
+				// Fall back to the configured URL prefix if fetching the OSS URL fails
+ rawURL = t.apiAddress(req.UrlPrefix, obj.Name)
+ }
+
+ return &third.InitiateMultipartUploadResp{
+ Url: rawURL,
+ }, nil
+ }
+ return nil, err
+ }
+ var sign *third.AuthSignParts
+ if result.Sign != nil && len(result.Sign.Parts) > 0 {
+ sign = &third.AuthSignParts{
+ Url: result.Sign.URL,
+ Query: toPbMapArray(result.Sign.Query),
+ Header: toPbMapArray(result.Sign.Header),
+ Parts: make([]*third.SignPart, len(result.Sign.Parts)),
+ }
+ for i, part := range result.Sign.Parts {
+ sign.Parts[i] = &third.SignPart{
+ PartNumber: int32(part.PartNumber),
+ Url: part.URL,
+ Query: toPbMapArray(part.Query),
+ Header: toPbMapArray(part.Header),
+ }
+ }
+ }
+ return &third.InitiateMultipartUploadResp{
+ Upload: &third.UploadInfo{
+ UploadID: result.UploadID,
+ PartSize: result.PartSize,
+ Sign: sign,
+ ExpireTime: expireTime.UnixMilli(),
+ },
+ }, nil
+}
+
+func (t *thirdServer) AuthSign(ctx context.Context, req *third.AuthSignReq) (*third.AuthSignResp, error) {
+ partNumbers := datautil.Slice(req.PartNumbers, func(partNumber int32) int { return int(partNumber) })
+ result, err := t.s3dataBase.AuthSign(ctx, req.UploadID, partNumbers)
+ if err != nil {
+ return nil, err
+ }
+ resp := &third.AuthSignResp{
+ Url: result.URL,
+ Query: toPbMapArray(result.Query),
+ Header: toPbMapArray(result.Header),
+ Parts: make([]*third.SignPart, len(result.Parts)),
+ }
+ for i, part := range result.Parts {
+ resp.Parts[i] = &third.SignPart{
+ PartNumber: int32(part.PartNumber),
+ Url: part.URL,
+ Query: toPbMapArray(part.Query),
+ Header: toPbMapArray(part.Header),
+ }
+ }
+ return resp, nil
+}
+
+func (t *thirdServer) CompleteMultipartUpload(ctx context.Context, req *third.CompleteMultipartUploadReq) (*third.CompleteMultipartUploadResp, error) {
+ if err := t.checkUploadName(ctx, req.Name); err != nil {
+ return nil, err
+ }
+ result, err := t.s3dataBase.CompleteMultipartUpload(ctx, req.UploadID, req.Parts)
+ if err != nil {
+ return nil, err
+ }
+ obj := &model.Object{
+ Name: req.Name,
+ UserID: mcontext.GetOpUserID(ctx),
+ Hash: result.Hash,
+ Key: result.Key,
+ Size: result.Size,
+ ContentType: req.ContentType,
+ Group: req.Cause,
+ CreateTime: time.Now(),
+ }
+ if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
+ return nil, err
+ }
+	// Fetch the real object-storage (OSS) URL
+ _, rawURL, err := t.s3dataBase.AccessURL(ctx, obj.Name, t.defaultExpire, nil)
+ if err != nil {
+		// Fall back to the configured URL prefix if fetching the OSS URL fails
+ rawURL = t.apiAddress(req.UrlPrefix, obj.Name)
+ }
+
+ return &third.CompleteMultipartUploadResp{
+ Url: rawURL,
+ }, nil
+}
+
+func (t *thirdServer) AccessURL(ctx context.Context, req *third.AccessURLReq) (*third.AccessURLResp, error) {
+ opt := &s3.AccessURLOption{}
+ if len(req.Query) > 0 {
+ switch req.Query["type"] {
+ case "":
+ case "image":
+ opt.Image = &s3.Image{}
+ opt.Image.Format = req.Query["format"]
+ opt.Image.Width, _ = strconv.Atoi(req.Query["width"])
+ opt.Image.Height, _ = strconv.Atoi(req.Query["height"])
+ log.ZDebug(ctx, "AccessURL image", "name", req.Name, "option", opt.Image)
+ default:
+ return nil, errs.ErrArgs.WrapMsg("invalid query type")
+ }
+ }
+ expireTime, rawURL, err := t.s3dataBase.AccessURL(ctx, req.Name, t.defaultExpire, opt)
+ if err != nil {
+ return nil, err
+ }
+ return &third.AccessURLResp{
+ Url: rawURL,
+ ExpireTime: expireTime.UnixMilli(),
+ }, nil
+}
+
+func (t *thirdServer) InitiateFormData(ctx context.Context, req *third.InitiateFormDataReq) (*third.InitiateFormDataResp, error) {
+ if req.Name == "" {
+ return nil, errs.ErrArgs.WrapMsg("name is empty")
+ }
+ if req.Size <= 0 {
+ return nil, errs.ErrArgs.WrapMsg("size must be greater than 0")
+ }
+ if err := t.checkUploadName(ctx, req.Name); err != nil {
+ return nil, err
+ }
+ var duration time.Duration
+ opUserID := mcontext.GetOpUserID(ctx)
+ var key string
+ if authverify.CheckUserIsAdmin(ctx, opUserID) {
+ if req.Millisecond <= 0 {
+ duration = time.Minute * 10
+ } else {
+ duration = time.Millisecond * time.Duration(req.Millisecond)
+ }
+ if req.Absolute {
+ key = req.Name
+ }
+ } else {
+ duration = time.Minute * 10
+ }
+ uid, err := uuid.NewRandom()
+ if err != nil {
+ return nil, errs.WrapMsg(err, "uuid NewRandom failed")
+ }
+ if key == "" {
+ date := time.Now().Format("20060102")
+ key = path.Join(cont.DirectPath, date, opUserID, hex.EncodeToString(uid[:])+path.Ext(req.Name))
+ }
+ mate := FormDataMate{
+ Name: req.Name,
+ Size: req.Size,
+ ContentType: req.ContentType,
+ Group: req.Group,
+ Key: key,
+ }
+ mateData, err := json.Marshal(&mate)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "marshal failed")
+ }
+ resp, err := t.s3dataBase.FormData(ctx, key, req.Size, req.ContentType, duration)
+ if err != nil {
+ return nil, err
+ }
+ return &third.InitiateFormDataResp{
+ Id: base64.RawStdEncoding.EncodeToString(mateData),
+ Url: resp.URL,
+ File: resp.File,
+ Header: toPbMapArray(resp.Header),
+ FormData: resp.FormData,
+ Expires: resp.Expires.UnixMilli(),
+ SuccessCodes: datautil.Slice(resp.SuccessCodes, func(code int) int32 {
+ return int32(code)
+ }),
+ }, nil
+}
+
+func (t *thirdServer) CompleteFormData(ctx context.Context, req *third.CompleteFormDataReq) (*third.CompleteFormDataResp, error) {
+ if req.Id == "" {
+ return nil, errs.ErrArgs.WrapMsg("id is empty")
+ }
+ data, err := base64.RawStdEncoding.DecodeString(req.Id)
+ if err != nil {
+ return nil, errs.ErrArgs.WrapMsg("invalid id " + err.Error())
+ }
+ var mate FormDataMate
+ if err := json.Unmarshal(data, &mate); err != nil {
+ return nil, errs.ErrArgs.WrapMsg("invalid id " + err.Error())
+ }
+ if err := t.checkUploadName(ctx, mate.Name); err != nil {
+ return nil, err
+ }
+ info, err := t.s3dataBase.StatObject(ctx, mate.Key)
+ if err != nil {
+ return nil, err
+ }
+ if info.Size > 0 && info.Size != mate.Size {
+ return nil, servererrs.ErrData.WrapMsg("file size mismatch")
+ }
+ obj := &model.Object{
+ Name: mate.Name,
+ UserID: mcontext.GetOpUserID(ctx),
+ Hash: "etag_" + info.ETag,
+ Key: info.Key,
+ Size: info.Size,
+ ContentType: mate.ContentType,
+ Group: mate.Group,
+ CreateTime: time.Now(),
+ }
+ if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
+ return nil, err
+ }
+
+	// Fetch the real OSS URL for the uploaded object
+ _, rawURL, err := t.s3dataBase.AccessURL(ctx, mate.Name, t.defaultExpire, nil)
+ if err != nil {
+		// Fall back to the configured URL prefix if the OSS URL cannot be generated
+ rawURL = t.apiAddress(req.UrlPrefix, mate.Name)
+ }
+
+ return &third.CompleteFormDataResp{Url: rawURL}, nil
+}
+
+func (t *thirdServer) apiAddress(prefix, name string) string {
+ return prefix + name
+}
+
+func (t *thirdServer) DeleteOutdatedData(ctx context.Context, req *third.DeleteOutdatedDataReq) (*third.DeleteOutdatedDataResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ engine := t.config.RpcConfig.Object.Enable
+ expireTime := time.UnixMilli(req.ExpireTime)
+ // Find all expired data in S3 database
+ models, err := t.s3dataBase.FindExpirationObject(ctx, engine, expireTime, req.ObjectGroup, int64(req.Limit))
+ if err != nil {
+ return nil, err
+ }
+ for i, obj := range models {
+ if err := t.s3dataBase.DeleteSpecifiedData(ctx, engine, []string{obj.Name}); err != nil {
+ return nil, errs.Wrap(err)
+ }
+ if err := t.s3dataBase.DelS3Key(ctx, engine, obj.Name); err != nil {
+ return nil, err
+ }
+ count, err := t.s3dataBase.GetKeyCount(ctx, engine, obj.Key)
+ if err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "delete s3 object record", "index", i, "s3", obj, "count", count)
+ if count == 0 {
+ if err := t.s3.DeleteObject(ctx, obj.Key); err != nil {
+ return nil, err
+ }
+ }
+ }
+ return &third.DeleteOutdatedDataResp{Count: int32(len(models))}, nil
+}
+
+type FormDataMate struct {
+ Name string `json:"name"`
+ Size int64 `json:"size"`
+ ContentType string `json:"contentType"`
+ Group string `json:"group"`
+ Key string `json:"key"`
+}
diff --git a/internal/rpc/third/third.go b/internal/rpc/third/third.go
new file mode 100644
index 0000000..62f2f62
--- /dev/null
+++ b/internal/rpc/third/third.go
@@ -0,0 +1,175 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package third
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "github.com/openimsdk/tools/s3/disable"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "github.com/openimsdk/tools/s3/aws"
+ "github.com/openimsdk/tools/s3/kodo"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/cos"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/openimsdk/tools/s3/oss"
+ "google.golang.org/grpc"
+)
+
+type thirdServer struct {
+ third.UnimplementedThirdServer
+ thirdDatabase controller.ThirdDatabase
+ s3dataBase controller.S3Database
+ defaultExpire time.Duration
+ config *Config
+ s3 s3.Interface
+ userClient *rpcli.UserClient
+}
+
+type Config struct {
+ RpcConfig config.Third
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ NotificationConfig config.Notification
+ Share config.Share
+ MinioConfig config.Minio
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+
+ logdb, err := mgo.NewLogMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ s3db, err := mgo.NewS3Mongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ var thirdCache cache.ThirdCache
+ if rdb == nil {
+ tc, err := mgo.NewCacheMgo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ thirdCache = mcache.NewThirdCache(tc)
+ } else {
+ thirdCache = redis.NewThirdCache(rdb)
+ }
+	// Select the object storage implementation based on the configured engine
+ var o s3.Interface
+ switch enable := config.RpcConfig.Object.Enable; enable {
+ case "minio":
+ var minioCache minio.Cache
+ if rdb == nil {
+ mc, err := mgo.NewCacheMgo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ minioCache = mcache.NewMinioCache(mc)
+ } else {
+ minioCache = redis.NewMinioCache(rdb)
+ }
+ o, err = minio.NewMinio(ctx, minioCache, *config.MinioConfig.Build())
+ case "cos":
+ o, err = cos.NewCos(*config.RpcConfig.Object.Cos.Build())
+ case "oss":
+ o, err = oss.NewOSS(*config.RpcConfig.Object.Oss.Build())
+ case "kodo":
+ o, err = kodo.NewKodo(*config.RpcConfig.Object.Kodo.Build())
+ case "aws":
+		// Use a custom R2 client to support Cloudflare R2 (requires a custom endpoint)
+ awsConf := config.RpcConfig.Object.Aws
+ if awsConf.Endpoint != "" {
+			// An endpoint is configured, so use the R2 client
+ o, err = NewR2(R2Config{
+ Endpoint: awsConf.Endpoint,
+ Region: awsConf.Region,
+ Bucket: awsConf.Bucket,
+ AccessKeyID: awsConf.AccessKeyID,
+ SecretAccessKey: awsConf.SecretAccessKey,
+ SessionToken: awsConf.SessionToken,
+ })
+ } else {
+			// Standard AWS S3
+ o, err = aws.NewAws(*awsConf.Build())
+ }
+ case "":
+ o = disable.NewDisable()
+ default:
+ err = fmt.Errorf("invalid object enable: %s", enable)
+ }
+ if err != nil {
+ return err
+ }
+ userConn, err := client.GetConn(ctx, config.Discovery.RpcService.User)
+ if err != nil {
+ return err
+ }
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+ third.RegisterThirdServer(server, &thirdServer{
+ thirdDatabase: controller.NewThirdDatabase(thirdCache, logdb),
+ s3dataBase: controller.NewS3Database(rdb, o, s3db),
+ defaultExpire: time.Hour * 24 * 7,
+ config: config,
+ s3: o,
+ userClient: rpcli.NewUserClient(userConn),
+ })
+ return nil
+}
+
+func (t *thirdServer) FcmUpdateToken(ctx context.Context, req *third.FcmUpdateTokenReq) (resp *third.FcmUpdateTokenResp, err error) {
+ err = t.thirdDatabase.FcmUpdateToken(ctx, req.Account, int(req.PlatformID), req.FcmToken, req.ExpireTime)
+ if err != nil {
+ return nil, err
+ }
+ return &third.FcmUpdateTokenResp{}, nil
+}
+
+func (t *thirdServer) SetAppBadge(ctx context.Context, req *third.SetAppBadgeReq) (resp *third.SetAppBadgeResp, err error) {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ err = t.thirdDatabase.SetAppBadge(ctx, req.UserID, int(req.AppUnreadCount))
+ if err != nil {
+ return nil, err
+ }
+ return &third.SetAppBadgeResp{}, nil
+}
diff --git a/internal/rpc/third/tool.go b/internal/rpc/third/tool.go
new file mode 100644
index 0000000..a5ef86e
--- /dev/null
+++ b/internal/rpc/third/tool.go
@@ -0,0 +1,88 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package third
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "unicode/utf8"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func toPbMapArray(m map[string][]string) []*third.KeyValues {
+ if len(m) == 0 {
+ return nil
+ }
+ res := make([]*third.KeyValues, 0, len(m))
+	for key, values := range m {
+		res = append(res, &third.KeyValues{
+			Key:    key,
+			Values: values,
+		})
+ }
+ return res
+}
+
+func (t *thirdServer) checkUploadName(ctx context.Context, name string) error {
+ if name == "" {
+ return errs.ErrArgs.WrapMsg("name is empty")
+ }
+ if name[0] == '/' {
+ return errs.ErrArgs.WrapMsg("name cannot start with `/`")
+ }
+ if err := checkValidObjectName(name); err != nil {
+ return errs.ErrArgs.WrapMsg(err.Error())
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ if opUserID == "" {
+ return errs.ErrNoPermission.WrapMsg("opUserID is empty")
+ }
+ if !authverify.CheckUserIsAdmin(ctx, opUserID) {
+ if !strings.HasPrefix(name, opUserID+"/") {
+ return errs.ErrNoPermission.WrapMsg(fmt.Sprintf("name must start with `%s/`", opUserID))
+ }
+ }
+ return nil
+}
+
+func checkValidObjectNamePrefix(objectName string) error {
+ if len(objectName) > 1024 {
+ return errs.New("object name cannot be longer than 1024 characters")
+ }
+ if !utf8.ValidString(objectName) {
+ return errs.New("object name with non UTF-8 strings are not supported")
+ }
+ return nil
+}
+
+func checkValidObjectName(objectName string) error {
+ if strings.TrimSpace(objectName) == "" {
+ return errs.New("object name cannot be empty")
+ }
+ return checkValidObjectNamePrefix(objectName)
+}
+
+func putUpdate[T any](update map[string]any, name string, val interface{ GetValuePtr() *T }) {
+ ptrVal := val.GetValuePtr()
+ if ptrVal == nil {
+ return
+ }
+ update[name] = *ptrVal
+}
diff --git a/internal/rpc/user/callback.go b/internal/rpc/user/callback.go
new file mode 100644
index 0000000..453b833
--- /dev/null
+++ b/internal/rpc/user/callback.go
@@ -0,0 +1,127 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ pbuser "git.imall.cloud/openim/protocol/user"
+)
+
+func (s *userServer) webhookBeforeUpdateUserInfo(ctx context.Context, before *config.BeforeConfig, req *pbuser.UpdateUserInfoReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeUpdateUserInfoReq{
+ CallbackCommand: cbapi.CallbackBeforeUpdateUserInfoCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: &req.UserInfo.FaceURL,
+ Nickname: &req.UserInfo.Nickname,
+ UserType: &req.UserInfo.UserType,
+ UserFlag: &req.UserInfo.UserFlag,
+ }
+ resp := &cbapi.CallbackBeforeUpdateUserInfoResp{}
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(&req.UserInfo.FaceURL, resp.FaceURL)
+ datautil.NotNilReplace(&req.UserInfo.Ex, resp.Ex)
+ datautil.NotNilReplace(&req.UserInfo.Nickname, resp.Nickname)
+ datautil.NotNilReplace(&req.UserInfo.UserType, resp.UserType)
+ datautil.NotNilReplace(&req.UserInfo.UserFlag, resp.UserFlag)
+ return nil
+ })
+}
+
+func (s *userServer) webhookAfterUpdateUserInfo(ctx context.Context, after *config.AfterConfig, req *pbuser.UpdateUserInfoReq) {
+ cbReq := &cbapi.CallbackAfterUpdateUserInfoReq{
+ CallbackCommand: cbapi.CallbackAfterUpdateUserInfoCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: req.UserInfo.FaceURL,
+ Nickname: req.UserInfo.Nickname,
+ UserType: req.UserInfo.UserType,
+ UserFlag: req.UserInfo.UserFlag,
+ }
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterUpdateUserInfoResp{}, after)
+}
+
+func (s *userServer) webhookBeforeUpdateUserInfoEx(ctx context.Context, before *config.BeforeConfig, req *pbuser.UpdateUserInfoExReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeUpdateUserInfoExReq{
+ CallbackCommand: cbapi.CallbackBeforeUpdateUserInfoExCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: req.UserInfo.FaceURL,
+ Nickname: req.UserInfo.Nickname,
+ UserType: req.UserInfo.UserType,
+ UserFlag: req.UserInfo.UserFlag,
+ }
+ resp := &cbapi.CallbackBeforeUpdateUserInfoExResp{}
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ datautil.NotNilReplace(req.UserInfo.FaceURL, resp.FaceURL)
+ datautil.NotNilReplace(req.UserInfo.Ex, resp.Ex)
+ datautil.NotNilReplace(req.UserInfo.Nickname, resp.Nickname)
+ datautil.NotNilReplace(req.UserInfo.UserType, resp.UserType)
+ datautil.NotNilReplace(req.UserInfo.UserFlag, resp.UserFlag)
+ return nil
+ })
+}
+
+func (s *userServer) webhookAfterUpdateUserInfoEx(ctx context.Context, after *config.AfterConfig, req *pbuser.UpdateUserInfoExReq) {
+ cbReq := &cbapi.CallbackAfterUpdateUserInfoExReq{
+ CallbackCommand: cbapi.CallbackAfterUpdateUserInfoExCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: req.UserInfo.FaceURL,
+ Nickname: req.UserInfo.Nickname,
+ UserType: req.UserInfo.UserType,
+ UserFlag: req.UserInfo.UserFlag,
+ }
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterUpdateUserInfoExResp{}, after)
+}
+
+func (s *userServer) webhookBeforeUserRegister(ctx context.Context, before *config.BeforeConfig, req *pbuser.UserRegisterReq) error {
+ return webhook.WithCondition(ctx, before, func(ctx context.Context) error {
+ cbReq := &cbapi.CallbackBeforeUserRegisterReq{
+ CallbackCommand: cbapi.CallbackBeforeUserRegisterCommand,
+ Users: req.Users,
+ }
+
+ resp := &cbapi.CallbackBeforeUserRegisterResp{}
+
+ if err := s.webhookClient.SyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, resp, before); err != nil {
+ return err
+ }
+
+ if len(resp.Users) != 0 {
+ req.Users = resp.Users
+ }
+ return nil
+ })
+}
+
+func (s *userServer) webhookAfterUserRegister(ctx context.Context, after *config.AfterConfig, req *pbuser.UserRegisterReq) {
+ cbReq := &cbapi.CallbackAfterUserRegisterReq{
+ CallbackCommand: cbapi.CallbackAfterUserRegisterCommand,
+ Users: req.Users,
+ }
+
+ s.webhookClient.AsyncPost(ctx, cbReq.GetCallbackCommand(), cbReq, &cbapi.CallbackAfterUserRegisterResp{}, after)
+}
diff --git a/internal/rpc/user/config.go b/internal/rpc/user/config.go
new file mode 100644
index 0000000..2da8d29
--- /dev/null
+++ b/internal/rpc/user/config.go
@@ -0,0 +1,71 @@
+package user
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func (s *userServer) GetUserClientConfig(ctx context.Context, req *pbuser.GetUserClientConfigReq) (*pbuser.GetUserClientConfigResp, error) {
+ if req.UserID != "" {
+ if err := authverify.CheckAccess(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ if _, err := s.db.GetUserByID(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ }
+ res, err := s.clientConfig.GetUserConfig(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetUserClientConfigResp{Configs: res}, nil
+}
+
+func (s *userServer) SetUserClientConfig(ctx context.Context, req *pbuser.SetUserClientConfigReq) (*pbuser.SetUserClientConfigResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ if req.UserID != "" {
+ if _, err := s.db.GetUserByID(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ }
+ if err := s.clientConfig.SetUserConfig(ctx, req.UserID, req.Configs); err != nil {
+ return nil, err
+ }
+ return &pbuser.SetUserClientConfigResp{}, nil
+}
+
+func (s *userServer) DelUserClientConfig(ctx context.Context, req *pbuser.DelUserClientConfigReq) (*pbuser.DelUserClientConfigResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ if err := s.clientConfig.DelUserConfig(ctx, req.UserID, req.Keys); err != nil {
+ return nil, err
+ }
+ return &pbuser.DelUserClientConfigResp{}, nil
+}
+
+func (s *userServer) PageUserClientConfig(ctx context.Context, req *pbuser.PageUserClientConfigReq) (*pbuser.PageUserClientConfigResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ total, res, err := s.clientConfig.GetUserConfigPage(ctx, req.UserID, req.Key, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.PageUserClientConfigResp{
+ Total: total,
+ Configs: datautil.Slice(res, func(e *model.ClientConfig) *pbuser.ClientConfig {
+ return &pbuser.ClientConfig{
+ UserID: e.UserID,
+ Key: e.Key,
+ Value: e.Value,
+ }
+ }),
+ }, nil
+}
diff --git a/internal/rpc/user/notification.go b/internal/rpc/user/notification.go
new file mode 100644
index 0000000..834e0b6
--- /dev/null
+++ b/internal/rpc/user/notification.go
@@ -0,0 +1,126 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/msg"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/common_user"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+type UserNotificationSender struct {
+ *notification.NotificationSender
+ getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
+ // db controller
+ db controller.UserDatabase
+}
+
+type userNotificationSenderOptions func(*UserNotificationSender)
+
+func WithUserDB(db controller.UserDatabase) userNotificationSenderOptions {
+ return func(u *UserNotificationSender) {
+ u.db = db
+ }
+}
+
+func WithUserFunc(
+ fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error),
+) userNotificationSenderOptions {
+ return func(u *UserNotificationSender) {
+ f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ users, err := fn(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, user := range users {
+ result = append(result, user)
+ }
+ return result, nil
+ }
+ u.getUsersInfo = f
+ }
+}
+
+func NewUserNotificationSender(config *Config, msgClient *rpcli.MsgClient, opts ...userNotificationSenderOptions) *UserNotificationSender {
+ f := &UserNotificationSender{
+ NotificationSender: notification.NewNotificationSender(&config.NotificationConfig, notification.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+ return msgClient.SendMsg(ctx, req)
+ })),
+ }
+ for _, opt := range opts {
+ opt(f)
+ }
+ return f
+}
+
+/* func (u *UserNotificationSender) getUsersInfoMap(
+ ctx context.Context,
+ userIDs []string,
+) (map[string]*sdkws.UserInfo, error) {
+ users, err := u.getUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ result := make(map[string]*sdkws.UserInfo)
+ for _, user := range users {
+ result[user.GetUserID()] = user.(*sdkws.UserInfo)
+ }
+ return result, nil
+} */
+
+/* func (u *UserNotificationSender) getFromToUserNickname(
+ ctx context.Context,
+ fromUserID, toUserID string,
+) (string, string, error) {
+ users, err := u.getUsersInfoMap(ctx, []string{fromUserID, toUserID})
+ if err != nil {
+		return "", "", err
+ }
+ return users[fromUserID].Nickname, users[toUserID].Nickname, nil
+} */
+
+func (u *UserNotificationSender) UserStatusChangeNotification(
+ ctx context.Context,
+ tips *sdkws.UserStatusChangeTips,
+) {
+ u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserStatusChangeNotification, tips)
+}
+func (u *UserNotificationSender) UserCommandUpdateNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandUpdateTips,
+) {
+ u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandUpdateNotification, tips)
+}
+func (u *UserNotificationSender) UserCommandAddNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandAddTips,
+) {
+ u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandAddNotification, tips)
+}
+func (u *UserNotificationSender) UserCommandDeleteNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandDeleteTips,
+) {
+ u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandDeleteNotification, tips)
+}
diff --git a/internal/rpc/user/online.go b/internal/rpc/user/online.go
new file mode 100644
index 0000000..e644f9f
--- /dev/null
+++ b/internal/rpc/user/online.go
@@ -0,0 +1,104 @@
+package user
+
+import (
+ "context"
+
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "git.imall.cloud/openim/protocol/constant"
+ pbuser "git.imall.cloud/openim/protocol/user"
+)
+
+func (s *userServer) getUserOnlineStatus(ctx context.Context, userID string) (*pbuser.OnlineStatus, error) {
+ platformIDs, err := s.online.GetOnline(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ status := pbuser.OnlineStatus{
+ UserID: userID,
+ PlatformIDs: platformIDs,
+ }
+ if len(platformIDs) > 0 {
+ status.Status = constant.Online
+ } else {
+ status.Status = constant.Offline
+ }
+ return &status, nil
+}
+
+func (s *userServer) getUsersOnlineStatus(ctx context.Context, userIDs []string) ([]*pbuser.OnlineStatus, error) {
+ res := make([]*pbuser.OnlineStatus, 0, len(userIDs))
+ for _, userID := range userIDs {
+ status, err := s.getUserOnlineStatus(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ res = append(res, status)
+ }
+ return res, nil
+}
+
+// SubscribeOrCancelUsersStatus subscribes to or unsubscribes from users' online status. It is currently a no-op.
+func (s *userServer) SubscribeOrCancelUsersStatus(ctx context.Context, req *pbuser.SubscribeOrCancelUsersStatusReq) (*pbuser.SubscribeOrCancelUsersStatusResp, error) {
+ return &pbuser.SubscribeOrCancelUsersStatusResp{}, nil
+}
+
+// GetUserStatus returns the online status of the specified users.
+func (s *userServer) GetUserStatus(ctx context.Context, req *pbuser.GetUserStatusReq) (*pbuser.GetUserStatusResp, error) {
+ res, err := s.getUsersOnlineStatus(ctx, req.UserIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetUserStatusResp{StatusList: res}, nil
+}
+
+// SetUserStatus synchronizes a user's online status.
+func (s *userServer) SetUserStatus(ctx context.Context, req *pbuser.SetUserStatusReq) (*pbuser.SetUserStatusResp, error) {
+ var (
+ online []int32
+ offline []int32
+ )
+ switch req.Status {
+ case constant.Online:
+ online = []int32{req.PlatformID}
+ case constant.Offline:
+ offline = []int32{req.PlatformID}
+ }
+ if err := s.online.SetUserOnline(ctx, req.UserID, online, offline); err != nil {
+ return nil, err
+ }
+ return &pbuser.SetUserStatusResp{}, nil
+}
+
+// GetSubscribeUsersStatus returns the online status of subscribed users. It is currently a no-op.
+func (s *userServer) GetSubscribeUsersStatus(ctx context.Context, req *pbuser.GetSubscribeUsersStatusReq) (*pbuser.GetSubscribeUsersStatusResp, error) {
+ return &pbuser.GetSubscribeUsersStatusResp{}, nil
+}
+
+func (s *userServer) SetUserOnlineStatus(ctx context.Context, req *pbuser.SetUserOnlineStatusReq) (*pbuser.SetUserOnlineStatusResp, error) {
+ for _, status := range req.Status {
+ if err := s.online.SetUserOnline(ctx, status.UserID, status.Online, status.Offline); err != nil {
+ return nil, err
+ }
+ }
+ return &pbuser.SetUserOnlineStatusResp{}, nil
+}
+
+func (s *userServer) GetAllOnlineUsers(ctx context.Context, req *pbuser.GetAllOnlineUsersReq) (*pbuser.GetAllOnlineUsersResp, error) {
+ resMap, nextCursor, err := s.online.GetAllOnlineUsers(ctx, req.Cursor)
+ if err != nil {
+ return nil, err
+ }
+ resp := &pbuser.GetAllOnlineUsersResp{
+ StatusList: make([]*pbuser.OnlineStatus, 0, len(resMap)),
+ NextCursor: nextCursor,
+ }
+ for userID, plats := range resMap {
+ resp.StatusList = append(resp.StatusList, &pbuser.OnlineStatus{
+ UserID: userID,
+ Status: int32(datautil.If(len(plats) > 0, constant.Online, constant.Offline)),
+ PlatformIDs: plats,
+ })
+ }
+ return resp, nil
+}
diff --git a/internal/rpc/user/statistics.go b/internal/rpc/user/statistics.go
new file mode 100644
index 0000000..7c1ed21
--- /dev/null
+++ b/internal/rpc/user/statistics.go
@@ -0,0 +1,43 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "context"
+ "time"
+
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/errs"
+)
+
+func (s *userServer) UserRegisterCount(ctx context.Context, req *pbuser.UserRegisterCountReq) (*pbuser.UserRegisterCountResp, error) {
+ if req.Start > req.End {
+ return nil, errs.ErrArgs.WrapMsg("start > end")
+ }
+ total, err := s.db.CountTotal(ctx, nil)
+ if err != nil {
+ return nil, err
+ }
+ start := time.UnixMilli(req.Start)
+ before, err := s.db.CountTotal(ctx, &start)
+ if err != nil {
+ return nil, err
+ }
+ count, err := s.db.CountRangeEverydayTotal(ctx, start, time.UnixMilli(req.End))
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.UserRegisterCountResp{Total: total, Before: before, Count: count}, nil
+}
diff --git a/internal/rpc/user/user.go b/internal/rpc/user/user.go
new file mode 100644
index 0000000..7ffce10
--- /dev/null
+++ b/internal/rpc/user/user.go
@@ -0,0 +1,730 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "context"
+ "errors"
+ "math/rand"
+ "strings"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/relation"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ tablerelation "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/group"
+ friendpb "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/grpc"
+)
+
+const (
+ defaultSecret = "openIM123"
+)
+
+type userServer struct {
+ pbuser.UnimplementedUserServer
+ online cache.OnlineCache
+ db controller.UserDatabase
+ friendNotificationSender *relation.FriendNotificationSender
+ userNotificationSender *UserNotificationSender
+ RegisterCenter discovery.Conn
+ config *Config
+ webhookClient *webhook.Client
+ groupClient *rpcli.GroupClient
+ relationClient *rpcli.RelationClient
+ clientConfig controller.ClientConfigDatabase
+
+ adminUserIDs []string
+}
+
+type Config struct {
+ RpcConfig config.User
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ KafkaConfig config.Kafka
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+}
+
+func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error {
+ dbb := dbbuild.NewBuilder(&config.MongodbConfig, &config.RedisConfig)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ return err
+ }
+
+ users := make([]*tablerelation.User, 0)
+
+ for i := range config.Share.IMAdminUser.UserIDs {
+ users = append(users, &tablerelation.User{
+ UserID: config.Share.IMAdminUser.UserIDs[i],
+ Nickname: config.Share.IMAdminUser.Nicknames[i],
+ AppMangerLevel: constant.AppAdmin,
+ })
+ }
+ userDB, err := mgo.NewUserMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ clientConfigDB, err := mgo.NewClientConfig(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ msgConn, err := client.GetConn(ctx, config.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+ groupConn, err := client.GetConn(ctx, config.Discovery.RpcService.Group)
+ if err != nil {
+ return err
+ }
+ friendConn, err := client.GetConn(ctx, config.Discovery.RpcService.Friend)
+ if err != nil {
+ return err
+ }
+ msgClient := rpcli.NewMsgClient(msgConn)
+ userCache := redis.NewUserCacheRedis(rdb, &config.LocalCacheConfig, userDB, redis.GetRocksCacheOptions())
+ database := controller.NewUserDatabase(userDB, userCache, mgocli.GetTx())
+ localcache.InitLocalCache(&config.LocalCacheConfig)
+
+	// Initialize the webhook config manager (supports reading config from the database).
+ var webhookClient *webhook.Client
+ log.ZInfo(ctx, "initializing webhook config manager...", "default_url", config.WebhooksConfig.URL)
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err == nil {
+		// SystemConfig database initialized successfully; use the config manager.
+ log.ZInfo(ctx, "system config db initialized successfully, creating webhook config manager")
+ webhookConfigManager := webhook.NewConfigManager(systemConfigDB, &config.WebhooksConfig)
+ if err := webhookConfigManager.Start(ctx); err != nil {
+ log.ZWarn(ctx, "failed to start webhook config manager, using default config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ } else {
+ log.ZInfo(ctx, "webhook config manager started, using dynamic config")
+ webhookClient = webhook.NewWebhookClientWithManager(webhookConfigManager)
+ }
+ } else {
+		// SystemConfig database failed to initialize; fall back to the default config.
+ log.ZWarn(ctx, "failed to init system config db, using default webhook config", err)
+ webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
+ }
+
+ u := &userServer{
+ online: redis.NewUserOnline(rdb),
+ db: database,
+ RegisterCenter: client,
+ friendNotificationSender: relation.NewFriendNotificationSender(&config.NotificationConfig, msgClient, relation.WithDBFunc(database.FindWithError)),
+ userNotificationSender: NewUserNotificationSender(config, msgClient, WithUserFunc(database.FindWithError)),
+ config: config,
+ webhookClient: webhookClient,
+ clientConfig: controller.NewClientConfigDatabase(clientConfigDB, redis.NewClientConfigCache(rdb, clientConfigDB), mgocli.GetTx()),
+ groupClient: rpcli.NewGroupClient(groupConn),
+ relationClient: rpcli.NewRelationClient(friendConn),
+ adminUserIDs: config.Share.IMAdminUser.UserIDs,
+ }
+ pbuser.RegisterUserServer(server, u)
+ return u.db.InitOnce(context.Background(), users)
+}
+
+func (s *userServer) GetDesignateUsers(ctx context.Context, req *pbuser.GetDesignateUsersReq) (resp *pbuser.GetDesignateUsersResp, err error) {
+ resp = &pbuser.GetDesignateUsersResp{}
+ users, err := s.db.Find(ctx, req.UserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ resp.UsersInfo = convert.UsersDB2Pb(users)
+ return resp, nil
+}
+
+// Deprecated: UpdateUserInfo is kept for compatibility; prefer UpdateUserInfoEx.
+func (s *userServer) UpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserInfoReq) (resp *pbuser.UpdateUserInfoResp, err error) {
+ resp = &pbuser.UpdateUserInfoResp{}
+ err = authverify.CheckAccess(ctx, req.UserInfo.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := s.webhookBeforeUpdateUserInfo(ctx, &s.config.WebhooksConfig.BeforeUpdateUserInfo, req); err != nil {
+ return nil, err
+ }
+ data := convert.UserPb2DBMap(req.UserInfo)
+ oldUser, err := s.db.GetUserByID(ctx, req.UserInfo.UserID)
+ if err != nil {
+ return nil, err
+ }
+ if err := s.db.UpdateByMap(ctx, req.UserInfo.UserID, data); err != nil {
+ return nil, err
+ }
+ s.friendNotificationSender.UserInfoUpdatedNotification(ctx, req.UserInfo.UserID)
+
+ s.webhookAfterUpdateUserInfo(ctx, &s.config.WebhooksConfig.AfterUpdateUserInfo, req)
+ if err = s.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID, oldUser); err != nil {
+ return nil, err
+ }
+ return resp, nil
+}
+
+func (s *userServer) UpdateUserInfoEx(ctx context.Context, req *pbuser.UpdateUserInfoExReq) (resp *pbuser.UpdateUserInfoExResp, err error) {
+ resp = &pbuser.UpdateUserInfoExResp{}
+ err = authverify.CheckAccess(ctx, req.UserInfo.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ if err = s.webhookBeforeUpdateUserInfoEx(ctx, &s.config.WebhooksConfig.BeforeUpdateUserInfoEx, req); err != nil {
+ return nil, err
+ }
+
+ oldUser, err := s.db.GetUserByID(ctx, req.UserInfo.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ data := convert.UserPb2DBMapEx(req.UserInfo)
+ if err = s.db.UpdateByMap(ctx, req.UserInfo.UserID, data); err != nil {
+ return nil, err
+ }
+
+ s.friendNotificationSender.UserInfoUpdatedNotification(ctx, req.UserInfo.UserID)
+
+ s.webhookAfterUpdateUserInfoEx(ctx, &s.config.WebhooksConfig.AfterUpdateUserInfoEx, req)
+ if err := s.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID, oldUser); err != nil {
+ return nil, err
+ }
+
+ return resp, nil
+}
+
+func (s *userServer) SetGlobalRecvMessageOpt(ctx context.Context, req *pbuser.SetGlobalRecvMessageOptReq) (resp *pbuser.SetGlobalRecvMessageOptResp, err error) {
+ resp = &pbuser.SetGlobalRecvMessageOptResp{}
+ if _, err := s.db.FindWithError(ctx, []string{req.UserID}); err != nil {
+ return nil, err
+ }
+ m := make(map[string]any, 1)
+ m["global_recv_msg_opt"] = req.GlobalRecvMsgOpt
+ if err := s.db.UpdateByMap(ctx, req.UserID, m); err != nil {
+ return nil, err
+ }
+ s.friendNotificationSender.UserInfoUpdatedNotification(ctx, req.UserID)
+ return resp, nil
+}
+
+func (s *userServer) AccountCheck(ctx context.Context, req *pbuser.AccountCheckReq) (resp *pbuser.AccountCheckResp, err error) {
+ resp = &pbuser.AccountCheckResp{}
+ if datautil.Duplicate(req.CheckUserIDs) {
+ return nil, errs.ErrArgs.WrapMsg("userID repeated")
+ }
+ if err = authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ users, err := s.db.Find(ctx, req.CheckUserIDs)
+ if err != nil {
+ return nil, err
+ }
+	userIDs := make(map[string]any, len(users))
+ for _, v := range users {
+ userIDs[v.UserID] = nil
+ }
+ for _, v := range req.CheckUserIDs {
+ temp := &pbuser.AccountCheckRespSingleUserStatus{UserID: v}
+ if _, ok := userIDs[v]; ok {
+ temp.AccountStatus = constant.Registered
+ } else {
+ temp.AccountStatus = constant.UnRegistered
+ }
+ resp.Results = append(resp.Results, temp)
+ }
+ return resp, nil
+}
+
+func (s *userServer) GetPaginationUsers(ctx context.Context, req *pbuser.GetPaginationUsersReq) (resp *pbuser.GetPaginationUsersResp, err error) {
+	if req.UserID == "" && req.NickName == "" {
+		total, users, err := s.db.PageFindUser(ctx, constant.IMOrdinaryUser, constant.AppOrdinaryUsers, req.Pagination)
+		if err != nil {
+			return nil, err
+		}
+		return &pbuser.GetPaginationUsersResp{Total: int32(total), Users: convert.UsersDB2Pb(users)}, nil
+	}
+	total, users, err := s.db.PageFindUserWithKeyword(ctx, constant.IMOrdinaryUser, constant.AppOrdinaryUsers, req.UserID, req.NickName, req.Pagination)
+	if err != nil {
+		return nil, err
+	}
+	return &pbuser.GetPaginationUsersResp{Total: int32(total), Users: convert.UsersDB2Pb(users)}, nil
+}
+
+func (s *userServer) UserRegister(ctx context.Context, req *pbuser.UserRegisterReq) (resp *pbuser.UserRegisterResp, err error) {
+ resp = &pbuser.UserRegisterResp{}
+ if len(req.Users) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("users is empty")
+ }
+ // check if secret is changed
+ //if s.config.Share.Secret == defaultSecret {
+ // return nil, servererrs.ErrSecretNotChanged.Wrap()
+ //}
+ if err = authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ if datautil.DuplicateAny(req.Users, func(e *sdkws.UserInfo) string { return e.UserID }) {
+ return nil, errs.ErrArgs.WrapMsg("userID repeated")
+ }
+ userIDs := make([]string, 0)
+ for _, user := range req.Users {
+ if user.UserID == "" {
+ return nil, errs.ErrArgs.WrapMsg("userID is empty")
+ }
+ if strings.Contains(user.UserID, ":") {
+ return nil, errs.ErrArgs.WrapMsg("userID contains ':' is invalid userID")
+ }
+ userIDs = append(userIDs, user.UserID)
+ }
+ exist, err := s.db.IsExist(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ if exist {
+ return nil, servererrs.ErrRegisteredAlready.WrapMsg("userID registered already")
+ }
+ if err := s.webhookBeforeUserRegister(ctx, &s.config.WebhooksConfig.BeforeUserRegister, req); err != nil {
+ return nil, err
+ }
+ now := time.Now()
+ users := make([]*tablerelation.User, 0, len(req.Users))
+ for _, user := range req.Users {
+ users = append(users, &tablerelation.User{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ CreateTime: now,
+ AppMangerLevel: user.AppMangerLevel,
+ GlobalRecvMsgOpt: user.GlobalRecvMsgOpt,
+ })
+ }
+ if err := s.db.Create(ctx, users); err != nil {
+ return nil, err
+ }
+
+ prommetrics.UserRegisterCounter.Add(float64(len(users)))
+
+ s.webhookAfterUserRegister(ctx, &s.config.WebhooksConfig.AfterUserRegister, req)
+ return resp, nil
+}
+
+func (s *userServer) GetGlobalRecvMessageOpt(ctx context.Context, req *pbuser.GetGlobalRecvMessageOptReq) (resp *pbuser.GetGlobalRecvMessageOptResp, err error) {
+ user, err := s.db.FindWithError(ctx, []string{req.UserID})
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetGlobalRecvMessageOptResp{GlobalRecvMsgOpt: user[0].GlobalRecvMsgOpt}, nil
+}
+
+// GetAllUserID Get user account by page.
+func (s *userServer) GetAllUserID(ctx context.Context, req *pbuser.GetAllUserIDReq) (resp *pbuser.GetAllUserIDResp, err error) {
+ total, userIDs, err := s.db.GetAllUserID(ctx, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetAllUserIDResp{Total: int32(total), UserIDs: userIDs}, nil
+}
+
+// ProcessUserCommandAdd user general function add.
+func (s *userServer) ProcessUserCommandAdd(ctx context.Context, req *pbuser.ProcessUserCommandAddReq) (*pbuser.ProcessUserCommandAddResp, error) {
+ err := authverify.CheckAccess(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ var value string
+ if req.Value != nil {
+ value = req.Value.Value
+ }
+ var ex string
+ if req.Ex != nil {
+		ex = req.Ex.Value
+ }
+	// Add the user command via the user database.
+ err = s.db.AddUserCommand(ctx, req.UserID, req.Type, req.Uuid, value, ex)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandAddTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ s.userNotificationSender.UserCommandAddNotification(ctx, tips)
+ return &pbuser.ProcessUserCommandAddResp{}, nil
+}
+
+// ProcessUserCommandDelete user general function delete.
+func (s *userServer) ProcessUserCommandDelete(ctx context.Context, req *pbuser.ProcessUserCommandDeleteReq) (*pbuser.ProcessUserCommandDeleteResp, error) {
+ err := authverify.CheckAccess(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ err = s.db.DeleteUserCommand(ctx, req.UserID, req.Type, req.Uuid)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandDeleteTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ s.userNotificationSender.UserCommandDeleteNotification(ctx, tips)
+ return &pbuser.ProcessUserCommandDeleteResp{}, nil
+}
+
+// ProcessUserCommandUpdate user general function update.
+func (s *userServer) ProcessUserCommandUpdate(ctx context.Context, req *pbuser.ProcessUserCommandUpdateReq) (*pbuser.ProcessUserCommandUpdateResp, error) {
+ err := authverify.CheckAccess(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ val := make(map[string]any)
+
+	// Map fields from req to val
+ if req.Value != nil {
+ val["value"] = req.Value.Value
+ }
+ if req.Ex != nil {
+ val["ex"] = req.Ex.Value
+ }
+
+	// Update the user command via the user database.
+ err = s.db.UpdateUserCommand(ctx, req.UserID, req.Type, req.Uuid, val)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandUpdateTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ s.userNotificationSender.UserCommandUpdateNotification(ctx, tips)
+ return &pbuser.ProcessUserCommandUpdateResp{}, nil
+}
+
+func (s *userServer) ProcessUserCommandGet(ctx context.Context, req *pbuser.ProcessUserCommandGetReq) (*pbuser.ProcessUserCommandGetResp, error) {
+ err := authverify.CheckAccess(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ // Fetch user commands from the database
+ commands, err := s.db.GetUserCommands(ctx, req.UserID, req.Type)
+ if err != nil {
+ return nil, err
+ }
+
+ // Initialize commandInfoSlice as an empty slice
+ commandInfoSlice := make([]*pbuser.CommandInfoResp, 0, len(commands))
+
+ for _, command := range commands {
+ // No need to use index since command is already a pointer
+ commandInfoSlice = append(commandInfoSlice, &pbuser.CommandInfoResp{
+ Type: command.Type,
+ Uuid: command.Uuid,
+ Value: command.Value,
+ CreateTime: command.CreateTime,
+ Ex: command.Ex,
+ })
+ }
+
+ // Return the response with the slice
+ return &pbuser.ProcessUserCommandGetResp{CommandResp: commandInfoSlice}, nil
+}
+
+func (s *userServer) ProcessUserCommandGetAll(ctx context.Context, req *pbuser.ProcessUserCommandGetAllReq) (*pbuser.ProcessUserCommandGetAllResp, error) {
+ err := authverify.CheckAccess(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ // Fetch user commands from the database
+ commands, err := s.db.GetAllUserCommands(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ // Initialize commandInfoSlice as an empty slice
+ commandInfoSlice := make([]*pbuser.AllCommandInfoResp, 0, len(commands))
+
+ for _, command := range commands {
+ // No need to use index since command is already a pointer
+ commandInfoSlice = append(commandInfoSlice, &pbuser.AllCommandInfoResp{
+ Type: command.Type,
+ Uuid: command.Uuid,
+ Value: command.Value,
+ CreateTime: command.CreateTime,
+ Ex: command.Ex,
+ })
+ }
+
+ // Return the response with the slice
+ return &pbuser.ProcessUserCommandGetAllResp{CommandResp: commandInfoSlice}, nil
+}
+
+func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.AddNotificationAccountReq) (*pbuser.AddNotificationAccountResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ if req.AppMangerLevel < constant.AppNotificationAdmin {
+ return nil, errs.ErrArgs.WithDetail("app level not supported")
+ }
+ if req.UserID == "" {
+ for i := 0; i < 20; i++ {
+			userID := s.genUserID()
+			_, err := s.db.FindWithError(ctx, []string{userID})
+			if err == nil {
+				continue
+			}
+			req.UserID = userID
+ break
+ }
+ if req.UserID == "" {
+ return nil, errs.ErrInternalServer.WrapMsg("gen user id failed")
+ }
+ } else {
+ _, err := s.db.FindWithError(ctx, []string{req.UserID})
+ if err == nil {
+ return nil, errs.ErrArgs.WrapMsg("userID is used")
+ }
+ }
+
+ user := &tablerelation.User{
+ UserID: req.UserID,
+ Nickname: req.NickName,
+ FaceURL: req.FaceURL,
+ CreateTime: time.Now(),
+ AppMangerLevel: req.AppMangerLevel,
+ }
+ if err := s.db.Create(ctx, []*tablerelation.User{user}); err != nil {
+ return nil, err
+ }
+
+ return &pbuser.AddNotificationAccountResp{
+ UserID: req.UserID,
+ NickName: req.NickName,
+ FaceURL: req.FaceURL,
+ AppMangerLevel: req.AppMangerLevel,
+ }, nil
+}
+
+func (s *userServer) UpdateNotificationAccountInfo(ctx context.Context, req *pbuser.UpdateNotificationAccountInfoReq) (*pbuser.UpdateNotificationAccountInfoResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ if _, err := s.db.FindWithError(ctx, []string{req.UserID}); err != nil {
+ return nil, errs.ErrArgs.Wrap()
+ }
+
+	user := map[string]any{}
+
+ if req.NickName != "" {
+ user["nickname"] = req.NickName
+ }
+
+ if req.FaceURL != "" {
+ user["face_url"] = req.FaceURL
+ }
+
+ if err := s.db.UpdateByMap(ctx, req.UserID, user); err != nil {
+ return nil, err
+ }
+
+ return &pbuser.UpdateNotificationAccountInfoResp{}, nil
+}
+
+func (s *userServer) SearchNotificationAccount(ctx context.Context, req *pbuser.SearchNotificationAccountReq) (*pbuser.SearchNotificationAccountResp, error) {
+ // Check if user is an admin
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ var users []*tablerelation.User
+ var err error
+
+ // If a keyword is provided in the request
+ if req.Keyword != "" {
+ // Find users by keyword
+ users, err = s.db.Find(ctx, []string{req.Keyword})
+ if err != nil {
+ return nil, err
+ }
+
+ // Convert users to response format
+ resp := s.userModelToResp(users, req.Pagination, req.AppManagerLevel)
+ if resp.Total != 0 {
+ return resp, nil
+ }
+
+ // Find users by nickname if no users found by keyword
+ users, err = s.db.FindByNickname(ctx, req.Keyword)
+ if err != nil {
+ return nil, err
+ }
+ resp = s.userModelToResp(users, req.Pagination, req.AppManagerLevel)
+ return resp, nil
+ }
+
+ // If no keyword, find users with notification settings
+ if req.AppManagerLevel != nil {
+ users, err = s.db.FindNotification(ctx, int64(*req.AppManagerLevel))
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ users, err = s.db.FindSystemAccount(ctx)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ resp := s.userModelToResp(users, req.Pagination, req.AppManagerLevel)
+ return resp, nil
+}
+
+func (s *userServer) GetNotificationAccount(ctx context.Context, req *pbuser.GetNotificationAccountReq) (*pbuser.GetNotificationAccountResp, error) {
+ if req.UserID == "" {
+ return nil, errs.ErrArgs.WrapMsg("userID is empty")
+ }
+ user, err := s.db.GetUserByID(ctx, req.UserID)
+ if err != nil {
+ return nil, servererrs.ErrUserIDNotFound.Wrap()
+ }
+ if user.AppMangerLevel >= constant.AppAdmin {
+ return &pbuser.GetNotificationAccountResp{Account: &pbuser.NotificationAccountInfo{
+ UserID: user.UserID,
+ FaceURL: user.FaceURL,
+ NickName: user.Nickname,
+ AppMangerLevel: user.AppMangerLevel,
+ }}, nil
+ }
+
+ return nil, errs.ErrNoPermission.WrapMsg("notification messages cannot be sent for this ID")
+}
+
+func (s *userServer) genUserID() string {
+ const l = 10
+ data := make([]byte, l)
+	_, _ = rand.Read(data)
+ chars := []byte("0123456789")
+ for i := 0; i < len(data); i++ {
+ if i == 0 {
+ data[i] = chars[1:][data[i]%9]
+ } else {
+ data[i] = chars[data[i]%10]
+ }
+ }
+ return string(data)
+}
+
+func (s *userServer) userModelToResp(users []*tablerelation.User, pagination pagination.Pagination, appManagerLevel *int32) *pbuser.SearchNotificationAccountResp {
+ accounts := make([]*pbuser.NotificationAccountInfo, 0)
+ var total int64
+ for _, v := range users {
+ if v.AppMangerLevel >= constant.AppNotificationAdmin && !datautil.Contain(v.UserID, s.adminUserIDs...) {
+ if appManagerLevel != nil {
+ if v.AppMangerLevel != *appManagerLevel {
+ continue
+ }
+ }
+ temp := &pbuser.NotificationAccountInfo{
+ UserID: v.UserID,
+ FaceURL: v.FaceURL,
+ NickName: v.Nickname,
+ AppMangerLevel: v.AppMangerLevel,
+ }
+ accounts = append(accounts, temp)
+ total += 1
+ }
+ }
+
+ notificationAccounts := datautil.Paginate(accounts, int(pagination.GetPageNumber()), int(pagination.GetShowNumber()))
+
+ return &pbuser.SearchNotificationAccountResp{Total: total, NotificationAccounts: notificationAccounts}
+}
+
+func (s *userServer) NotificationUserInfoUpdate(ctx context.Context, userID string, oldUser *tablerelation.User) error {
+ user, err := s.db.GetUserByID(ctx, userID)
+ if err != nil {
+ return err
+ }
+ if user.Nickname == oldUser.Nickname && user.FaceURL == oldUser.FaceURL && user.Ex == oldUser.Ex {
+ return nil
+ }
+ oldUserInfo := convert.UserDB2Pb(oldUser)
+ newUserInfo := convert.UserDB2Pb(user)
+ var wg sync.WaitGroup
+ var es [2]error
+ wg.Add(len(es))
+ go func() {
+ defer wg.Done()
+ _, es[0] = s.groupClient.NotificationUserInfoUpdate(ctx, &group.NotificationUserInfoUpdateReq{
+ UserID: userID,
+ OldUserInfo: oldUserInfo,
+ NewUserInfo: newUserInfo,
+ })
+ }()
+
+ go func() {
+ defer wg.Done()
+ _, es[1] = s.relationClient.NotificationUserInfoUpdate(ctx, &friendpb.NotificationUserInfoUpdateReq{
+ UserID: userID,
+ OldUserInfo: oldUserInfo,
+ NewUserInfo: newUserInfo,
+ })
+ }()
+ wg.Wait()
+ return errors.Join(es[:]...)
+}
+
+func (s *userServer) SortQuery(ctx context.Context, req *pbuser.SortQueryReq) (*pbuser.SortQueryResp, error) {
+ users, err := s.db.SortQuery(ctx, req.UserIDName, req.Asc)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.SortQueryResp{Users: convert.UsersDB2Pb(users)}, nil
+}
diff --git a/internal/tools/cron/clear_msg.go b/internal/tools/cron/clear_msg.go
new file mode 100644
index 0000000..42dd880
--- /dev/null
+++ b/internal/tools/cron/clear_msg.go
@@ -0,0 +1,851 @@
+package cron
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "net/url"
+ "os"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ mgo "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+// clearGroupMsg clears group chat messages.
+// Note: the config is re-read from the database on every run so that changes take effect immediately.
+func (c *cronServer) clearGroupMsg() {
+ now := time.Now()
+ operationID := fmt.Sprintf("cron_clear_group_msg_%d_%d", os.Getpid(), now.UnixMilli())
+ ctx := mcontext.SetOperationID(c.ctx, operationID)
+
+	log.ZDebug(ctx, "[CLEAR_MSG] cron triggered: checking group message clear config", "operationID", operationID, "time", now.Format("2006-01-02 15:04:05"))
+
+	// Re-read the config on every run to pick up the latest value.
+ config, err := c.systemConfigDB.FindByKey(ctx, "clear_group_msg")
+ if err != nil {
+		log.ZError(ctx, "[CLEAR_MSG] failed to read config", err, "key", "clear_group_msg")
+ return
+ }
+
+	// Skip if the config does not exist.
+	if config == nil {
+		log.ZDebug(ctx, "[CLEAR_MSG] config not found, skipping", "key", "clear_group_msg")
+ return
+ }
+
+	// Log the full config read from the database (for troubleshooting).
+	log.ZInfo(ctx, "[CLEAR_MSG] group message clear config loaded from database",
+		"key", config.Key,
+		"value", config.Value,
+		"enabled", config.Enabled,
+		"title", config.Title,
+		"valueType", config.ValueType,
+		"createTime", config.CreateTime.Format("2006-01-02 15:04:05"),
+		"updateTime", config.UpdateTime.Format("2006-01-02 15:04:05"))
+
+	// Skip if the config is disabled.
+	if !config.Enabled {
+		log.ZDebug(ctx, "[CLEAR_MSG] config disabled, skipping", "key", config.Key, "enabled", config.Enabled)
+ return
+ }
+
+	log.ZInfo(ctx, "[CLEAR_MSG] ====== starting group message clear task ======", "key", config.Key, "value", config.Value, "enabled", config.Enabled)
+
+	// Also skip an empty value to avoid parse errors.
+	if strings.TrimSpace(config.Value) == "" {
+		log.ZInfo(ctx, "[CLEAR_MSG] config value is empty, skipping", "key", config.Key)
+ return
+ }
+
+	log.ZInfo(ctx, "[CLEAR_MSG] config loaded and enabled", "key", config.Key, "value", config.Value, "enabled", config.Enabled)
+
+	// Parse the config value (unit: minutes).
+	minutes, err := strconv.ParseInt(config.Value, 10, 64)
+	if err != nil {
+		log.ZError(ctx, "[CLEAR_MSG] failed to parse config value", err, "value", config.Value)
+ return
+ }
+
+ if minutes <= 0 {
+		log.ZInfo(ctx, "[CLEAR_MSG] configured minutes invalid, skipping", "minutes", minutes)
+ return
+ }
+
+	// Compute the cutoff: messages sent before now minus the configured minutes are deleted.
+	// Example: with 30 minutes configured and the current time 09:35:00, all messages before 09:05:00 are queried.
+	deltime := now.Add(-time.Duration(minutes) * time.Minute)
+	log.ZInfo(ctx, "[CLEAR_MSG] config check passed, querying messages",
+		"minutes", minutes,
+		"now", now.Format("2006-01-02 15:04:05"),
+		"cutoff", deltime.Format("2006-01-02 15:04:05"),
+		"cutoffMillis", deltime.UnixMilli(),
+		"note", fmt.Sprintf("querying all messages with send_time <= %d (before %s)", deltime.UnixMilli(), deltime.Format("2006-01-02 15:04:05")))
+
+ const (
+ deleteCount = 10000
+ deleteLimit = 50
+ )
+
+ var totalCount int
+ var fileDeleteCount int
+ for i := 1; i <= deleteCount; i++ {
+ ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%d", operationID, i))
+
+		// Query messages first, then extract file info and delete the S3 files.
+		log.ZInfo(ctx, "[CLEAR_MSG] querying messages", "iteration", i, "timestamp", deltime.UnixMilli(), "limit", deleteLimit, "deltime", deltime.Format("2006-01-02 15:04:05"))
+ docs, err := c.msgDocDB.GetRandBeforeMsg(ctx, deltime.UnixMilli(), deleteLimit)
+ if err != nil {
+			log.ZError(ctx, "[CLEAR_MSG] failed to query messages", err, "iteration", i, "timestamp", deltime.UnixMilli())
+ break
+ }
+
+		log.ZInfo(ctx, "[CLEAR_MSG] message query result", "iteration", i, "docCount", len(docs), "timestamp", deltime.UnixMilli())
+
+ if len(docs) == 0 {
+			log.ZInfo(ctx, "[CLEAR_MSG] no more messages to delete", "iteration", i)
+ break
+ }
+
+		// Process the messages in each doc: extract file info and delete the files.
+		// Also collect the messages to delete (conversationID -> seqs) for notifications.
+		var processedDocs int
+		var deletedDocCount int
+		conversationSeqsMap := make(map[string][]int64)            // conversationID -> []seq
+		conversationDocsMap := make(map[string]*model.MsgDocModel) // conversationID -> doc
+		docIDsToDelete := make([]string, 0, len(docs))             // doc IDs scheduled for deletion
+
+		log.ZInfo(ctx, "[CLEAR_MSG] processing docs", "iteration", i, "totalDocs", len(docs))
+		for docIdx, doc := range docs {
+			log.ZInfo(ctx, "[CLEAR_MSG] processing doc", "iteration", i, "docIndex", docIdx+1, "totalDocs", len(docs), "docID", doc.DocID)
+
+			// Check whether this is a group chat message.
+			conversationID := extractConversationID(doc.DocID)
+			log.ZInfo(ctx, "[CLEAR_MSG] extracted conversation ID", "docID", doc.DocID, "conversationID", conversationID, "isGroup", isGroupConversationID(conversationID))
+			if !isGroupConversationID(conversationID) {
+				log.ZInfo(ctx, "[CLEAR_MSG] skipping non-group message", "docID", doc.DocID, "conversationID", conversationID)
+ continue
+ }
+
+			// Fetch the full message content.
+			log.ZInfo(ctx, "[CLEAR_MSG] fetching full message doc", "docID", doc.DocID)
+			fullDoc, err := c.msgDocDB.FindOneByDocID(ctx, doc.DocID)
+			if err != nil {
+				log.ZWarn(ctx, "[CLEAR_MSG] failed to fetch full message doc", err, "docID", doc.DocID)
+ continue
+ }
+
+			log.ZInfo(ctx, "[CLEAR_MSG] fetched full message doc", "docID", doc.DocID, "msgCount", len(fullDoc.Msg))
+
+			// Collect the seqs to delete (only messages with send_time <= the cutoff).
+			var seqs []int64
+			var beforeTimeCount int
+			var afterTimeCount int
+			log.ZInfo(ctx, "[CLEAR_MSG] collecting message seqs", "docID", doc.DocID, "msgCount", len(fullDoc.Msg), "cutoffMillis", deltime.UnixMilli())
+ for msgIdx, msgInfo := range fullDoc.Msg {
+ if msgInfo.Msg != nil {
+ isBeforeTime := msgInfo.Msg.SendTime <= deltime.UnixMilli()
+ if isBeforeTime {
+ beforeTimeCount++
+ } else {
+ afterTimeCount++
+ }
+					log.ZInfo(ctx, "[CLEAR_MSG] inspecting message",
+						"docID", doc.DocID,
+						"msgIndex", msgIdx+1,
+						"totalMsgs", len(fullDoc.Msg),
+						"seq", msgInfo.Msg.Seq,
+						"sendID", msgInfo.Msg.SendID,
+						"contentType", msgInfo.Msg.ContentType,
+						"sendTime", msgInfo.Msg.SendTime,
+						"sendTimeFormatted", time.Unix(msgInfo.Msg.SendTime/1000, 0).Format("2006-01-02 15:04:05"),
+						"cutoffMillis", deltime.UnixMilli(),
+						"beforeCutoff", isBeforeTime)
+ if msgInfo.Msg.Seq > 0 && isBeforeTime {
+ seqs = append(seqs, msgInfo.Msg.Seq)
+ }
+ } else {
+				log.ZWarn(ctx, "[CLEAR_MSG] message payload is empty", nil, "docID", doc.DocID, "msgIndex", msgIdx+1)
+ }
+ }
+			log.ZInfo(ctx, "[CLEAR_MSG] collected message seqs",
+				"docID", doc.DocID,
+				"seqCount", len(seqs),
+				"seqs", seqs,
+				"beforeCutoffCount", beforeTimeCount,
+				"afterCutoffCount", afterTimeCount,
+				"note", fmt.Sprintf("%d messages in this doc are before the cutoff, %d are after", beforeTimeCount, afterTimeCount))
+ if len(seqs) > 0 {
+ conversationSeqsMap[conversationID] = append(conversationSeqsMap[conversationID], seqs...)
+ conversationDocsMap[conversationID] = fullDoc
+				log.ZInfo(ctx, "[CLEAR_MSG] added to notification list", "conversationID", conversationID, "totalSeqs", len(conversationSeqsMap[conversationID]))
+ }
+
+			// Extract file info and delete the S3 files.
+			deletedFiles := c.extractAndDeleteFiles(ctx, fullDoc, true) // true: only process group chat messages
+ fileDeleteCount += deletedFiles
+
+			// If every message in the doc is before the cutoff, delete the whole doc.
+			// If only some are, delete just those messages via DeleteMsgPhysicalBySeq.
+			if afterTimeCount == 0 {
+				// All messages in this doc need deleting; drop the whole doc.
+				docIDsToDelete = append(docIDsToDelete, doc.DocID)
+				log.ZInfo(ctx, "[CLEAR_MSG] doc marked for deletion (all messages before cutoff)", "docID", doc.DocID, "beforeTimeCount", beforeTimeCount)
+ } else {
+				// Only some messages in the doc need deleting; remove them via the DeleteMsgPhysicalBySeq RPC.
+				if len(seqs) > 0 {
+					log.ZInfo(ctx, "[CLEAR_MSG] deleting part of the doc's messages", "docID", doc.DocID, "conversationID", conversationID, "seqs", seqs)
+					_, err := c.msgClient.DeleteMsgPhysicalBySeq(ctx, &msg.DeleteMsgPhysicalBySeqReq{
+						ConversationID: conversationID,
+						Seqs:           seqs,
+					})
+					if err != nil {
+						log.ZError(ctx, "[CLEAR_MSG] failed to delete part of the doc's messages", err, "docID", doc.DocID, "conversationID", conversationID, "seqs", seqs)
+					} else {
+						log.ZInfo(ctx, "[CLEAR_MSG] deleted part of the doc's messages", "docID", doc.DocID, "conversationID", conversationID, "seqCount", len(seqs))
+ totalCount += len(seqs)
+ }
+ }
+ }
+ processedDocs++
+ }
+ if processedDocs > 0 {
+			log.ZInfo(ctx, "[CLEAR_MSG] doc processing complete (group)", "processedDocs", processedDocs, "totalDocs", len(docs), "deletedFiles", fileDeleteCount, "docIDsToDelete", len(docIDsToDelete), "iteration", i)
+ }
+
+		// Delete whole docs (where every message was before the cutoff).
+		if len(docIDsToDelete) > 0 {
+			log.ZInfo(ctx, "[CLEAR_MSG] deleting whole docs", "iteration", i, "docCount", len(docIDsToDelete))
+			for _, docID := range docIDsToDelete {
+				if err := c.msgDocDB.DeleteDoc(ctx, docID); err != nil {
+					log.ZError(ctx, "[CLEAR_MSG] failed to delete doc", err, "docID", docID)
+				} else {
+					deletedDocCount++
+					totalCount++ // each doc counts as one deleted record
+					log.ZInfo(ctx, "[CLEAR_MSG] deleted doc", "docID", docID)
+				}
+			}
+			log.ZInfo(ctx, "[CLEAR_MSG] batch doc deletion complete", "deletedDocCount", deletedDocCount, "totalDocCount", len(docIDsToDelete), "totalCount", totalCount, "iteration", i)
+ }
+
+		// Send delete notifications.
+ if len(conversationSeqsMap) > 0 {
+ c.sendDeleteNotifications(ctx, conversationSeqsMap, conversationDocsMap, true)
+ }
+
+ if deletedDocCount < deleteLimit && len(docIDsToDelete) == 0 {
+			log.ZInfo(ctx, "[CLEAR_MSG] all messages processed", "lastBatchCount", deletedDocCount)
+ break
+ }
+ }
+
+	log.ZInfo(ctx, "[CLEAR_MSG] ====== group message clear task complete ======", "deltime", deltime.Format("2006-01-02 15:04:05"), "duration", time.Since(now), "totalCount", totalCount, "fileDeleteCount", fileDeleteCount, "operationID", operationID)
+}
+
+// clearUserMsg clears one-to-one (private) chat messages.
+// Note: the config is re-read from the database on every run so that changes take effect immediately.
+func (c *cronServer) clearUserMsg() {
+ now := time.Now()
+ operationID := fmt.Sprintf("cron_clear_user_msg_%d_%d", os.Getpid(), now.UnixMilli())
+ ctx := mcontext.SetOperationID(c.ctx, operationID)
+
+	log.ZDebug(ctx, "[CLEAR_MSG] cron triggered: checking private message clear config", "operationID", operationID, "time", now.Format("2006-01-02 15:04:05"))
+
+	// Re-read the config on every run to pick up the latest value.
+ config, err := c.systemConfigDB.FindByKey(ctx, "clear_user_msg")
+ if err != nil {
+		log.ZError(ctx, "[CLEAR_MSG] failed to read config", err, "key", "clear_user_msg")
+ return
+ }
+
+	// Skip if the config does not exist.
+	if config == nil {
+		log.ZDebug(ctx, "[CLEAR_MSG] config not found, skipping", "key", "clear_user_msg")
+ return
+ }
+
+	// Log the full config read from the database (for troubleshooting).
+	log.ZInfo(ctx, "[CLEAR_MSG] private message clear config loaded from database",
+		"key", config.Key,
+		"value", config.Value,
+		"enabled", config.Enabled,
+		"title", config.Title,
+		"valueType", config.ValueType,
+		"createTime", config.CreateTime.Format("2006-01-02 15:04:05"),
+		"updateTime", config.UpdateTime.Format("2006-01-02 15:04:05"))
+
+	// Skip if the config is disabled.
+	if !config.Enabled {
+		log.ZDebug(ctx, "[CLEAR_MSG] config disabled, skipping", "key", config.Key, "enabled", config.Enabled)
+ return
+ }
+
+	log.ZInfo(ctx, "[CLEAR_MSG] ====== starting private message clear task ======", "key", config.Key, "value", config.Value, "enabled", config.Enabled)
+
+	// Also skip an empty value to avoid parse errors.
+	if strings.TrimSpace(config.Value) == "" {
+		log.ZInfo(ctx, "[CLEAR_MSG] config value is empty, skipping", "key", config.Key)
+ return
+ }
+
+	log.ZInfo(ctx, "[CLEAR_MSG] config loaded and enabled", "key", config.Key, "value", config.Value, "enabled", config.Enabled)
+
+	// Parse the config value (unit: minutes).
+	minutes, err := strconv.ParseInt(config.Value, 10, 64)
+	if err != nil {
+		log.ZError(ctx, "[CLEAR_MSG] failed to parse config value", err, "value", config.Value)
+ return
+ }
+
+ if minutes <= 0 {
+		log.ZInfo(ctx, "[CLEAR_MSG] configured minutes invalid, skipping", "minutes", minutes)
+ return
+ }
+
+	// Compute the cutoff: messages sent before now minus the configured minutes are deleted.
+	// Example: with 30 minutes configured and the current time 09:35:00, all messages before 09:05:00 are queried.
+	deltime := now.Add(-time.Duration(minutes) * time.Minute)
+	log.ZInfo(ctx, "[CLEAR_MSG] config check passed, querying messages (private)",
+		"minutes", minutes,
+		"now", now.Format("2006-01-02 15:04:05"),
+		"cutoff", deltime.Format("2006-01-02 15:04:05"),
+		"cutoffMillis", deltime.UnixMilli(),
+		"note", fmt.Sprintf("querying all messages with send_time <= %d (before %s)", deltime.UnixMilli(), deltime.Format("2006-01-02 15:04:05")))
+
+ const (
+ deleteCount = 10000
+ deleteLimit = 50
+ )
+
+ var totalCount int
+ var fileDeleteCount int
+ for i := 1; i <= deleteCount; i++ {
+ ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%d", operationID, i))
+
+ // Query the messages first, then extract file info and delete the S3 files
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始查询消息(个人聊天)", "iteration", i, "timestamp", deltime.UnixMilli(), "limit", deleteLimit, "deltime", deltime.Format("2006-01-02 15:04:05"))
+ docs, err := c.msgDocDB.GetRandBeforeMsg(ctx, deltime.UnixMilli(), deleteLimit)
+ if err != nil {
+ log.ZError(ctx, "[CLEAR_MSG] 查询消息失败(个人聊天)", err, "iteration", i, "timestamp", deltime.UnixMilli())
+ break
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 查询消息结果(个人聊天)", "iteration", i, "docCount", len(docs), "timestamp", deltime.UnixMilli())
+
+ if len(docs) == 0 {
+ log.ZInfo(ctx, "[CLEAR_MSG] 没有更多消息需要删除(个人聊天)", "iteration", i)
+ break
+ }
+
+ // Process the messages in each document, extracting file info and deleting the files
+ // Also collect the messages to delete (conversationID -> seqs) for the deletion notifications
+ var processedDocs int
+ var deletedDocCount int
+ conversationSeqsMap := make(map[string][]int64) // conversationID -> []seq
+ conversationDocsMap := make(map[string]*model.MsgDocModel) // conversationID -> doc
+ docIDsToDelete := make([]string, 0, len(docs)) // document IDs scheduled for deletion
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始处理文档(个人聊天)", "iteration", i, "totalDocs", len(docs))
+ for docIdx, doc := range docs {
+ log.ZInfo(ctx, "[CLEAR_MSG] 处理文档(个人聊天)", "iteration", i, "docIndex", docIdx+1, "totalDocs", len(docs), "docID", doc.DocID)
+
+ // Check whether this is a single-chat message
+ conversationID := extractConversationID(doc.DocID)
+ log.ZInfo(ctx, "[CLEAR_MSG] 提取会话ID(个人聊天)", "docID", doc.DocID, "conversationID", conversationID, "isSingle", isSingleConversationID(conversationID))
+ if !isSingleConversationID(conversationID) {
+ log.ZInfo(ctx, "[CLEAR_MSG] 跳过非个人聊天消息", "docID", doc.DocID, "conversationID", conversationID)
+ continue
+ }
+
+ // Fetch the full message document
+ log.ZInfo(ctx, "[CLEAR_MSG] 获取完整消息文档(个人聊天)", "docID", doc.DocID)
+ fullDoc, err := c.msgDocDB.FindOneByDocID(ctx, doc.DocID)
+ if err != nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 获取完整消息文档失败(个人聊天)", err, "docID", doc.DocID)
+ continue
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 获取完整消息文档成功(个人聊天)", "docID", doc.DocID, "msgCount", len(fullDoc.Msg))
+
+ // Collect the seqs to delete (only messages with send_time <= deltime)
+ var seqs []int64
+ var beforeTimeCount int
+ var afterTimeCount int
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始收集消息seq(个人聊天)", "docID", doc.DocID, "msgCount", len(fullDoc.Msg), "查询时间戳", deltime.UnixMilli())
+ for msgIdx, msgInfo := range fullDoc.Msg {
+ if msgInfo.Msg != nil {
+ isBeforeTime := msgInfo.Msg.SendTime <= deltime.UnixMilli()
+ if isBeforeTime {
+ beforeTimeCount++
+ } else {
+ afterTimeCount++
+ }
+ log.ZInfo(ctx, "[CLEAR_MSG] 处理消息(个人聊天)",
+ "docID", doc.DocID,
+ "msgIndex", msgIdx+1,
+ "totalMsgs", len(fullDoc.Msg),
+ "seq", msgInfo.Msg.Seq,
+ "sendID", msgInfo.Msg.SendID,
+ "contentType", msgInfo.Msg.ContentType,
+ "sendTime", msgInfo.Msg.SendTime,
+ "sendTimeFormatted", time.Unix(msgInfo.Msg.SendTime/1000, 0).Format("2006-01-02 15:04:05"),
+ "查询时间戳", deltime.UnixMilli(),
+ "是否在查询时间点之前", isBeforeTime)
+ if msgInfo.Msg.Seq > 0 && isBeforeTime {
+ seqs = append(seqs, msgInfo.Msg.Seq)
+ }
+ } else {
+ log.ZWarn(ctx, "[CLEAR_MSG] 消息数据为空(个人聊天)", nil, "docID", doc.DocID, "msgIndex", msgIdx+1)
+ }
+ }
+ log.ZInfo(ctx, "[CLEAR_MSG] 收集消息seq完成(个人聊天)",
+ "docID", doc.DocID,
+ "seqCount", len(seqs),
+ "seqs", seqs,
+ "在查询时间点之前的消息数", beforeTimeCount,
+ "在查询时间点之后的消息数", afterTimeCount,
+ "说明", fmt.Sprintf("文档中有%d条消息在查询时间点之前,%d条消息在查询时间点之后", beforeTimeCount, afterTimeCount))
+ if len(seqs) > 0 {
+ conversationSeqsMap[conversationID] = append(conversationSeqsMap[conversationID], seqs...)
+ conversationDocsMap[conversationID] = fullDoc
+ log.ZInfo(ctx, "[CLEAR_MSG] 已添加到通知列表(个人聊天)", "conversationID", conversationID, "totalSeqs", len(conversationSeqsMap[conversationID]))
+ }
+
+ // Extract file info and delete the S3 files
+ deletedFiles := c.extractAndDeleteFiles(ctx, fullDoc, false) // false: process single-chat messages only
+ fileDeleteCount += deletedFiles
+
+ // If every message in the document is older than the cutoff, delete the whole document
+ // If only some messages are older, delete just those messages (via DeleteMsgPhysicalBySeq)
+ if afterTimeCount == 0 {
+ // All messages in this document qualify, so remove the whole document
+ docIDsToDelete = append(docIDsToDelete, doc.DocID)
+ log.ZInfo(ctx, "[CLEAR_MSG] 文档标记为删除(所有消息都在查询时间点之前)(个人聊天)", "docID", doc.DocID, "beforeTimeCount", beforeTimeCount)
+ } else {
+ // Only some messages qualify; delete them via the DeleteMsgPhysicalBySeq RPC
+ if len(seqs) > 0 {
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始删除文档中的部分消息(个人聊天)", "docID", doc.DocID, "conversationID", conversationID, "seqs", seqs)
+ _, err := c.msgClient.DeleteMsgPhysicalBySeq(ctx, &msg.DeleteMsgPhysicalBySeqReq{
+ ConversationID: conversationID,
+ Seqs: seqs,
+ })
+ if err != nil {
+ log.ZError(ctx, "[CLEAR_MSG] 删除文档中的部分消息失败(个人聊天)", err, "docID", doc.DocID, "conversationID", conversationID, "seqs", seqs)
+ } else {
+ log.ZInfo(ctx, "[CLEAR_MSG] 删除文档中的部分消息成功(个人聊天)", "docID", doc.DocID, "conversationID", conversationID, "seqCount", len(seqs))
+ totalCount += len(seqs)
+ }
+ }
+ }
+ processedDocs++
+ }
+ if processedDocs > 0 {
+ log.ZInfo(ctx, "[CLEAR_MSG] 文档处理完成(个人)", "processedDocs", processedDocs, "totalDocs", len(docs), "deletedFiles", fileDeleteCount, "docIDsToDelete", len(docIDsToDelete), "iteration", i)
+ }
+
+ // Delete whole documents (those whose messages are all older than the cutoff)
+ if len(docIDsToDelete) > 0 {
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始删除整个文档(个人聊天)", "iteration", i, "docCount", len(docIDsToDelete))
+ for _, docID := range docIDsToDelete {
+ if err := c.msgDocDB.DeleteDoc(ctx, docID); err != nil {
+ log.ZError(ctx, "[CLEAR_MSG] 删除文档失败(个人聊天)", err, "docID", docID)
+ } else {
+ deletedDocCount++
+ totalCount++ // count each deleted document as one deletion record
+ log.ZInfo(ctx, "[CLEAR_MSG] 删除文档成功(个人聊天)", "docID", docID)
+ }
+ }
+ log.ZInfo(ctx, "[CLEAR_MSG] 批次删除文档完成(个人聊天)", "deletedDocCount", deletedDocCount, "totalDocCount", len(docIDsToDelete), "totalCount", totalCount, "iteration", i)
+ }
+
+ // Send the deletion notifications
+ if len(conversationSeqsMap) > 0 {
+ c.sendDeleteNotifications(ctx, conversationSeqsMap, conversationDocsMap, false)
+ }
+
+ if deletedDocCount < deleteLimit && len(docIDsToDelete) == 0 {
+ log.ZInfo(ctx, "[CLEAR_MSG] 已处理完所有消息(个人聊天)", "lastBatchCount", deletedDocCount)
+ break
+ }
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] ====== 清理个人聊天消息任务完成 ======", "deltime", deltime.Format("2006-01-02 15:04:05"), "duration", time.Since(now), "totalCount", totalCount, "fileDeleteCount", fileDeleteCount, "operationID", operationID)
+}
+
+// isGroupConversationID reports whether the conversation ID belongs to a group chat
+func isGroupConversationID(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "g_") || strings.HasPrefix(conversationID, "sg_")
+}
+
+// isSingleConversationID reports whether the conversation ID belongs to a single chat
+func isSingleConversationID(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "si_")
+}
+
+// extractConversationID extracts the conversationID from a docID
+func extractConversationID(docID string) string {
+ index := strings.LastIndex(docID, ":")
+ if index < 0 {
+ return ""
+ }
+ return docID[:index]
+}
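The helpers above encode the docID convention `conversationID:index`; a self-contained sketch of the same parsing (the `si_` prefix follows the source, while the sample IDs are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// extractConversationID mirrors the helper above: a docID is
// "<conversationID>:<index>", so everything before the last ':' is the ID.
func extractConversationID(docID string) string {
	idx := strings.LastIndex(docID, ":")
	if idx < 0 {
		return ""
	}
	return docID[:idx]
}

// isSingleConversationID mirrors the single-chat prefix check above.
func isSingleConversationID(id string) bool { return strings.HasPrefix(id, "si_") }

func main() {
	id := extractConversationID("si_user1_user2:3")
	fmt.Println(id, isSingleConversationID(id)) // si_user1_user2 true
}
```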
+
+// extractAndDeleteFiles extracts file info from messages and deletes the corresponding S3 files
+// isGroupMsg: true processes group-chat messages only; false processes single-chat messages only
+func (c *cronServer) extractAndDeleteFiles(ctx context.Context, doc *model.MsgDocModel, isGroupMsg bool) int {
+ if doc == nil || len(doc.Msg) == 0 {
+ return 0
+ }
+
+ // Determine the conversation type from the conversationID
+ conversationID := extractConversationID(doc.DocID)
+ if isGroupMsg && !isGroupConversationID(conversationID) {
+ return 0
+ }
+ if !isGroupMsg && !isSingleConversationID(conversationID) {
+ return 0
+ }
+
+ var fileNames []string
+ fileNamesMap := make(map[string]bool) // for deduplication
+ var fileTypeStats = map[string]int{
+ "picture": 0,
+ "video": 0,
+ "file": 0,
+ "voice": 0,
+ }
+
+ // Walk the messages and extract file info
+ totalMsgs := len(doc.Msg)
+ var processedMsgs int
+ for _, msgInfo := range doc.Msg {
+ if msgInfo.Msg == nil {
+ continue
+ }
+
+ contentType := msgInfo.Msg.ContentType
+ content := msgInfo.Msg.Content
+ processedMsgs++
+
+ // Extract file URLs according to the message type
+ switch contentType {
+ case constant.Picture:
+ // Picture message
+ var pictureElem apistruct.PictureElem
+ if err := json.Unmarshal([]byte(content), &pictureElem); err == nil {
+ var extractedCount int
+ if pictureElem.SourcePicture.Url != "" {
+ if name := extractFileNameFromURL(pictureElem.SourcePicture.Url); name != "" {
+ fileNamesMap[name] = true
+ extractedCount++
+ }
+ }
+ if pictureElem.BigPicture.Url != "" {
+ if name := extractFileNameFromURL(pictureElem.BigPicture.Url); name != "" {
+ fileNamesMap[name] = true
+ extractedCount++
+ }
+ }
+ if pictureElem.SnapshotPicture.Url != "" {
+ if name := extractFileNameFromURL(pictureElem.SnapshotPicture.Url); name != "" {
+ fileNamesMap[name] = true
+ extractedCount++
+ }
+ }
+ if extractedCount > 0 {
+ fileTypeStats["picture"]++
+ }
+ } else {
+ log.ZDebug(ctx, "[CLEAR_MSG] 解析图片消息失败", "err", err, "seq", msgInfo.Msg.Seq)
+ }
+ case constant.Video:
+ // Video message
+ var videoElem apistruct.VideoElem
+ if err := json.Unmarshal([]byte(content), &videoElem); err == nil {
+ var extractedCount int
+ if videoElem.VideoURL != "" {
+ if name := extractFileNameFromURL(videoElem.VideoURL); name != "" {
+ fileNamesMap[name] = true
+ extractedCount++
+ }
+ }
+ if videoElem.SnapshotURL != "" {
+ if name := extractFileNameFromURL(videoElem.SnapshotURL); name != "" {
+ fileNamesMap[name] = true
+ extractedCount++
+ }
+ }
+ if extractedCount > 0 {
+ fileTypeStats["video"]++
+ }
+ } else {
+ log.ZDebug(ctx, "[CLEAR_MSG] 解析视频消息失败", "err", err, "seq", msgInfo.Msg.Seq)
+ }
+ case constant.File:
+ // File message
+ var fileElem apistruct.FileElem
+ if err := json.Unmarshal([]byte(content), &fileElem); err == nil {
+ if fileElem.SourceURL != "" {
+ if name := extractFileNameFromURL(fileElem.SourceURL); name != "" {
+ fileNamesMap[name] = true
+ fileTypeStats["file"]++
+ }
+ }
+ } else {
+ log.ZDebug(ctx, "[CLEAR_MSG] 解析文件消息失败", "err", err, "seq", msgInfo.Msg.Seq)
+ }
+ case constant.Voice:
+ // Voice message
+ var soundElem apistruct.SoundElem
+ if err := json.Unmarshal([]byte(content), &soundElem); err == nil {
+ if soundElem.SourceURL != "" {
+ if name := extractFileNameFromURL(soundElem.SourceURL); name != "" {
+ fileNamesMap[name] = true
+ fileTypeStats["voice"]++
+ }
+ }
+ } else {
+ log.ZDebug(ctx, "[CLEAR_MSG] 解析音频消息失败", "err", err, "seq", msgInfo.Msg.Seq)
+ }
+ }
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 消息处理统计", "docID", doc.DocID, "totalMsgs", totalMsgs, "processedMsgs", processedMsgs, "fileTypeStats", fileTypeStats)
+
+ // Convert the dedup map to a slice
+ for name := range fileNamesMap {
+ fileNames = append(fileNames, name)
+ }
+
+ if len(fileNames) == 0 {
+ log.ZDebug(ctx, "[CLEAR_MSG] 消息中未找到文件", "docID", doc.DocID)
+ return 0
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 提取到文件列表", "docID", doc.DocID, "conversationID", conversationID, "fileCount", len(fileNames), "fileNames", fileNames[:min(10, len(fileNames))])
+
+ // Delete the S3 files:
+ // look up each file in objectDB, then delete the database record
+ // query by file name directly (no engine specified), then use the record's engine/key
+ deletedCount := 0
+ notFoundCount := 0
+ failedCount := 0
+ var deletedFiles []string
+ var notFoundFiles []string
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始删除文件记录", "docID", doc.DocID, "totalFiles", len(fileNames))
+
+ for i, fileName := range fileNames {
+ obj, err := c.objectDB.Take(ctx, "", fileName)
+ if err != nil || obj == nil {
+ // Distinguish a normal "not found" from a real error
+ if err != nil && !mgo.IsNotFound(err) {
+ // A real error: log it as a warning
+ log.ZWarn(ctx, "[CLEAR_MSG] 查询文件记录出错", err, "fileName", fileName, "index", i+1, "total", len(fileNames))
+ } else {
+ // A missing file record is expected; log at debug level only
+ log.ZDebug(ctx, "[CLEAR_MSG] 文件记录不存在(正常)", "fileName", fileName, "index", i+1, "total", len(fileNames))
+ }
+ notFoundCount++
+ notFoundFiles = append(notFoundFiles, fileName)
+ continue
+ }
+
+ engine := obj.Engine
+ // Read the key's reference count before deleting, to decide whether the S3 file should go too
+ keyCountBeforeDelete, err := c.objectDB.GetKeyCount(ctx, engine, obj.Key)
+ if err != nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 获取key引用计数失败", err, "engine", engine, "key", obj.Key, "fileName", fileName)
+ keyCountBeforeDelete = 0 // if the lookup fails, assume 0 so deletion is attempted below
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 准备删除文件记录", "engine", engine, "fileName", fileName, "key", obj.Key, "index", i+1, "total", len(fileNames), "size", obj.Size, "contentType", obj.ContentType, "keyCountBeforeDelete", keyCountBeforeDelete)
+
+ // Delete the database record
+ if err := c.objectDB.Delete(ctx, engine, []string{fileName}); err != nil {
+ failedCount++
+ log.ZWarn(ctx, "[CLEAR_MSG] 删除文件记录失败", err, "engine", engine, "fileName", fileName, "key", obj.Key, "index", i+1, "total", len(fileNames))
+ continue
+ }
+
+ deletedCount++
+ deletedFiles = append(deletedFiles, fileName)
+
+ // After deleting the record, check the key reference count again
+ keyCountAfterDelete, err := c.objectDB.GetKeyCount(ctx, engine, obj.Key)
+ if err != nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 删除后获取key引用计数失败", err, "engine", engine, "key", obj.Key, "fileName", fileName)
+ }
+
+ // Delete the cache entry
+ if c.s3Cache != nil {
+ if err := c.s3Cache.DelS3Key(ctx, engine, fileName); err != nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 删除S3缓存失败", err, "engine", engine, "fileName", fileName)
+ } else {
+ log.ZInfo(ctx, "[CLEAR_MSG] S3缓存删除成功", "engine", engine, "fileName", fileName)
+ }
+ }
+
+ // If the pre-delete reference count was <= 1, it is now 0 and the S3 file should be removed
+ if keyCountBeforeDelete <= 1 {
+ // Delete the S3 file
+ if c.s3Client != nil {
+ if err := c.s3Client.DeleteObject(ctx, obj.Key); err != nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 删除S3文件失败", err, "engine", engine, "key", obj.Key, "fileName", fileName)
+ } else {
+ log.ZInfo(ctx, "[CLEAR_MSG] S3文件删除成功",
+ "engine", engine,
+ "key", obj.Key,
+ "fileName", fileName,
+ "keyCountBeforeDelete", keyCountBeforeDelete,
+ "keyCountAfterDelete", keyCountAfterDelete)
+ }
+ } else {
+ log.ZWarn(ctx, "[CLEAR_MSG] S3客户端未初始化,无法删除S3文件", nil,
+ "engine", engine,
+ "key", obj.Key,
+ "fileName", fileName,
+ "keyCountBeforeDelete", keyCountBeforeDelete,
+ "keyCountAfterDelete", keyCountAfterDelete)
+ }
+ } else {
+ log.ZInfo(ctx, "[CLEAR_MSG] 文件key仍有其他引用,S3文件保留",
+ "engine", engine,
+ "key", obj.Key,
+ "fileName", fileName,
+ "keyCountBeforeDelete", keyCountBeforeDelete,
+ "keyCountAfterDelete", keyCountAfterDelete)
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 文件记录删除成功", "engine", engine, "fileName", fileName, "key", obj.Key, "index", i+1, "total", len(fileNames), "size", obj.Size, "contentType", obj.ContentType)
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 文件删除汇总", "docID", doc.DocID, "conversationID", conversationID,
+ "totalFiles", len(fileNames),
+ "deletedCount", deletedCount,
+ "notFoundCount", notFoundCount,
+ "failedCount", failedCount,
+ "deletedFiles", deletedFiles[:min(5, len(deletedFiles))],
+ "notFoundFiles", notFoundFiles[:min(5, len(notFoundFiles))])
+
+ return deletedCount
+}
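The reference-counting rule above (delete the object record first, remove the S3 object only when keyCountBeforeDelete <= 1) can be modelled without a database. The store below is a hypothetical stand-in, not the real objectDB API:

```go
package main

import "fmt"

// refStore models the object table: several file names may point to the same
// storage key (deduplicated uploads), so the S3 object is removed only when
// the last name referencing its key is deleted.
type refStore struct {
	nameToKey map[string]string
	keyCount  map[string]int
	s3Deleted []string // keys whose backing S3 object was removed
}

func (s *refStore) deleteName(name string) {
	key, ok := s.nameToKey[name]
	if !ok {
		return
	}
	before := s.keyCount[key]
	delete(s.nameToKey, name)
	s.keyCount[key] = before - 1
	if before <= 1 { // same rule as keyCountBeforeDelete <= 1 above
		s.s3Deleted = append(s.s3Deleted, key)
	}
}

func main() {
	s := &refStore{
		nameToKey: map[string]string{"a.png": "hash1", "b.png": "hash1", "c.mp4": "hash2"},
		keyCount:  map[string]int{"hash1": 2, "hash2": 1},
	}
	s.deleteName("a.png") // hash1 still referenced by b.png -> object kept
	s.deleteName("c.mp4") // last reference to hash2 -> object removed
	fmt.Println(s.s3Deleted) // [hash2]
}
```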
+
+// extractFileNameFromURL extracts the file name from a URL
+func extractFileNameFromURL(fileURL string) string {
+ if fileURL == "" {
+ return ""
+ }
+
+ // Parse the URL
+ parsedURL, err := url.Parse(fileURL)
+ if err != nil {
+ // If parsing fails, fall back to splitting the raw URL
+ parts := strings.Split(fileURL, "/")
+ if len(parts) > 0 {
+ lastPart := parts[len(parts)-1]
+ // Strip query parameters
+ if idx := strings.Index(lastPart, "?"); idx >= 0 {
+ lastPart = lastPart[:idx]
+ }
+ return lastPart
+ }
+ return ""
+ }
+
+ // Extract the file name from the URL path
+ path := parsedURL.Path
+ parts := strings.Split(path, "/")
+ if len(parts) > 0 {
+ fileName := parts[len(parts)-1]
+ // Strip query parameters
+ if idx := strings.Index(fileName, "?"); idx >= 0 {
+ fileName = fileName[:idx]
+ }
+ return fileName
+ }
+
+ return ""
+}
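For well-formed URLs the same result can be obtained from the standard library alone, since url.Parse already separates the query string from the path and path.Base takes the last path element; a compact sketch (the sample URL is made up):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// fileNameFromURL is a compact variant of extractFileNameFromURL: url.Parse
// puts the query string in URL.RawQuery, so path.Base(u.Path) is enough for
// well-formed URLs and no manual '?' stripping is needed.
func fileNameFromURL(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return ""
	}
	name := path.Base(u.Path)
	if name == "." || name == "/" { // empty or root path
		return ""
	}
	return name
}

func main() {
	fmt.Println(fileNameFromURL("https://bucket.example.com/openim/avatar.png?x-oss-expires=3600")) // avatar.png
}
```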
+
+func min(a, b int) int {
+ if a < b {
+ return a
+ }
+ return b
+}
+
+// sendDeleteNotifications sends message-deletion notifications
+// conversationSeqsMap: conversationID -> []seq
+// conversationDocsMap: conversationID -> doc
+// isGroupMsg: true for group-chat messages, false for single-chat messages
+func (c *cronServer) sendDeleteNotifications(ctx context.Context, conversationSeqsMap map[string][]int64, conversationDocsMap map[string]*model.MsgDocModel, isGroupMsg bool) {
+ log.ZInfo(ctx, "[CLEAR_MSG] 开始发送删除通知", "conversationCount", len(conversationSeqsMap), "isGroupMsg", isGroupMsg)
+
+ adminUserID := c.config.Share.IMAdminUser.UserIDs[0]
+
+ for conversationID, seqs := range conversationSeqsMap {
+ if len(seqs) == 0 {
+ continue
+ }
+
+ // Fetch the original message document from conversationDocsMap (mirrors the message-revoke implementation)
+ doc, ok := conversationDocsMap[conversationID]
+ if !ok || doc == nil || len(doc.Msg) == 0 {
+ log.ZWarn(ctx, "[CLEAR_MSG] 无法获取原始消息", nil, "conversationID", conversationID)
+ continue
+ }
+
+ // Take the first message's metadata (as in message revoke: use the original SessionType, GroupID, RecvID)
+ var firstMsg *model.MsgDataModel
+ for _, msgInfo := range doc.Msg {
+ if msgInfo.Msg != nil {
+ firstMsg = msgInfo.Msg
+ break
+ }
+ }
+ if firstMsg == nil {
+ log.ZWarn(ctx, "[CLEAR_MSG] 无法获取原始消息数据", nil, "conversationID", conversationID)
+ continue
+ }
+
+ // Build the deletion notification
+ tips := &sdkws.DeleteMsgsTips{
+ UserID: adminUserID,
+ ConversationID: conversationID,
+ Seqs: seqs,
+ }
+
+ // As in message revoke, pick recvID based on the original message's SessionType
+ var recvID string
+ var sessionType int32
+ if firstMsg.SessionType == constant.ReadGroupChatType {
+ recvID = firstMsg.GroupID
+ sessionType = firstMsg.SessionType
+ } else {
+ recvID = firstMsg.RecvID
+ sessionType = firstMsg.SessionType
+ }
+
+ if recvID == "" {
+ log.ZWarn(ctx, "[CLEAR_MSG] 无法确定通知接收者", nil, "conversationID", conversationID, "sessionType", sessionType, "groupID", firstMsg.GroupID, "recvID", firstMsg.RecvID)
+ continue
+ }
+
+ // Send the notification via NotificationSender (mirrors the message-revoke implementation)
+ c.notificationSender.NotificationWithSessionType(ctx, adminUserID, recvID,
+ constant.DeleteMsgsNotification, sessionType, tips)
+ log.ZInfo(ctx, "[CLEAR_MSG] 发送删除通知", "conversationID", conversationID, "recvID", recvID, "sessionType", sessionType, "seqCount", len(seqs))
+ }
+
+ log.ZInfo(ctx, "[CLEAR_MSG] 删除通知发送完成", "conversationCount", len(conversationSeqsMap), "isGroupMsg", isGroupMsg)
+}
diff --git a/internal/tools/cron/cron_task.go b/internal/tools/cron/cron_task.go
new file mode 100644
index 0000000..3d9705d
--- /dev/null
+++ b/internal/tools/cron/cron_task.go
@@ -0,0 +1,307 @@
+package cron
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ disetcd "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/etcd"
+ mcache "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ redisCache "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/dbbuild"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ pbconversation "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/discovery/etcd"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/cont"
+ "github.com/openimsdk/tools/s3/disable"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "github.com/robfig/cron/v3"
+ "google.golang.org/grpc"
+)
+
+type Config struct {
+ CronTask config.CronTask
+ Share config.Share
+ Discovery config.Discovery
+ Mongo config.Mongo
+ Redis config.Redis
+ Minio config.Minio
+ Third config.Third
+ Notification config.Notification
+}
+
+func Start(ctx context.Context, conf *Config, client discovery.SvcDiscoveryRegistry, service grpc.ServiceRegistrar) error {
+ log.CInfo(ctx, "CRON-TASK server is initializing", "runTimeEnv", runtimeenv.RuntimeEnvironment(), "chatRecordsClearTime", conf.CronTask.CronExecuteTime, "msgDestructTime", conf.CronTask.RetainChatRecords)
+ if conf.CronTask.RetainChatRecords < 1 {
+ log.ZInfo(ctx, "disable cron")
+ <-ctx.Done()
+ return nil
+ }
+ ctx = mcontext.SetOpUserID(ctx, conf.Share.IMAdminUser.UserIDs[0])
+
+ msgConn, err := client.GetConn(ctx, conf.Discovery.RpcService.Msg)
+ if err != nil {
+ return err
+ }
+
+ thirdConn, err := client.GetConn(ctx, conf.Discovery.RpcService.Third)
+ if err != nil {
+ return err
+ }
+
+ conversationConn, err := client.GetConn(ctx, conf.Discovery.RpcService.Conversation)
+ if err != nil {
+ return err
+ }
+
+ groupConn, err := client.GetConn(ctx, conf.Discovery.RpcService.Group)
+ if err != nil {
+ return err
+ }
+
+ // Initialize the database connections (used for dismissing meeting group chats)
+ dbb := dbbuild.NewBuilder(&conf.Mongo, &conf.Redis)
+ mgocli, err := dbb.Mongo(ctx)
+ if err != nil {
+ return err
+ }
+ meetingDB, err := mgo.NewMeetingMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ systemConfigDB, err := mgo.NewSystemConfigMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ msgDocDB, err := mgo.NewMsgMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ objectDB, err := mgo.NewS3Mongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+
+ // Initialize the S3 client and cache (used for deleting S3 files)
+ rdb, err := dbb.Redis(ctx)
+ if err != nil {
+ log.ZWarn(ctx, "Redis连接失败,S3文件删除功能可能受限", err)
+ rdb = nil
+ }
+ var s3Client s3.Interface
+ var s3Cache cont.S3Cache
+ switch enable := conf.Third.Object.Enable; enable {
+ case "minio":
+ var minioCache minio.Cache
+ if rdb == nil {
+ mc, err := mgo.NewCacheMgo(mgocli.GetDB())
+ if err != nil {
+ log.ZWarn(ctx, "Mongo缓存初始化失败,S3文件删除功能可能受限", err)
+ s3Client = disable.NewDisable()
+ s3Cache = nil
+ } else {
+ minioCache = mcache.NewMinioCache(mc)
+ s3Client, err = minio.NewMinio(ctx, minioCache, *conf.Minio.Build())
+ if err != nil {
+ log.ZError(ctx, "Minio初始化失败", err)
+ return err
+ }
+ s3Cache = nil // with the MongoDB cache, S3Cache stays nil
+ }
+ } else {
+ minioCache = redisCache.NewMinioCache(rdb)
+ s3Client, err = minio.NewMinio(ctx, minioCache, *conf.Minio.Build())
+ if err != nil {
+ log.ZError(ctx, "Minio初始化失败", err)
+ return err
+ }
+ s3Cache = redisCache.NewS3Cache(rdb, s3Client)
+ }
+ case "":
+ s3Client = disable.NewDisable()
+ s3Cache = nil
+ default:
+ // Other S3 backends are not supported yet; fall back to disable mode
+ log.ZWarn(ctx, "S3类型不支持,使用disable模式", nil, "enable", enable)
+ s3Client = disable.NewDisable()
+ s3Cache = nil
+ }
+
+ var locker Locker
+ if conf.Discovery.Enable == config.ETCD {
+ cm := disetcd.NewConfigManager(client.(*etcd.SvcDiscoveryRegistryImpl).GetClient(), []string{
+ conf.CronTask.GetConfigFileName(),
+ conf.Share.GetConfigFileName(),
+ conf.Discovery.GetConfigFileName(),
+ })
+ cm.Watch(ctx)
+ locker, err = NewEtcdLocker(client.(*etcd.SvcDiscoveryRegistryImpl).GetClient())
+ if err != nil {
+ return err
+ }
+ }
+
+ if locker == nil {
+ locker = emptyLocker{}
+ }
+
+ // Initialize the NotificationSender (used for message-deletion notifications)
+ notificationSender := notification.NewNotificationSender(&conf.Notification,
+ notification.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+ return msg.NewMsgClient(msgConn).SendMsg(ctx, req)
+ }),
+ )
+
+ srv := &cronServer{
+ ctx: ctx,
+ config: conf,
+ cron: cron.New(),
+ msgClient: msg.NewMsgClient(msgConn),
+ conversationClient: pbconversation.NewConversationClient(conversationConn),
+ thirdClient: third.NewThirdClient(thirdConn),
+ groupClient: rpcli.NewGroupClient(groupConn),
+ meetingDB: meetingDB,
+ systemConfigDB: systemConfigDB,
+ msgDocDB: msgDocDB,
+ objectDB: objectDB,
+ s3Client: s3Client,
+ s3Cache: s3Cache,
+ notificationSender: notificationSender,
+ locker: locker,
+ }
+
+ if err := srv.registerClearS3(); err != nil {
+ return err
+ }
+ if err := srv.registerDeleteMsg(); err != nil {
+ return err
+ }
+ if err := srv.registerClearUserMsg(); err != nil {
+ return err
+ }
+ if err := srv.registerDismissMeetingGroups(); err != nil {
+ return err
+ }
+ if err := srv.registerClearGroupMsgByConfig(); err != nil {
+ return err
+ }
+ if err := srv.registerClearUserMsgByConfig(); err != nil {
+ return err
+ }
+ log.ZDebug(ctx, "start cron task", "CronExecuteTime", conf.CronTask.CronExecuteTime)
+ srv.cron.Start()
+ log.ZDebug(ctx, "cron task server is running")
+ <-ctx.Done()
+ log.ZDebug(ctx, "cron task server is shutting down")
+ srv.cron.Stop()
+
+ return nil
+}
+
+type Locker interface {
+ ExecuteWithLock(ctx context.Context, taskName string, task func())
+}
+
+type emptyLocker struct{}
+
+func (emptyLocker) ExecuteWithLock(ctx context.Context, taskName string, task func()) {
+ task()
+}
+
+type cronServer struct {
+ ctx context.Context
+ config *Config
+ cron *cron.Cron
+ msgClient msg.MsgClient
+ conversationClient pbconversation.ConversationClient
+ thirdClient third.ThirdClient
+ groupClient *rpcli.GroupClient
+ meetingDB database.Meeting
+ systemConfigDB database.SystemConfig
+ msgDocDB database.Msg
+ objectDB database.ObjectInfo
+ s3Client s3.Interface
+ s3Cache cont.S3Cache
+ notificationSender *notification.NotificationSender
+ locker Locker
+}
+
+func (c *cronServer) registerClearS3() error {
+ if c.config.CronTask.FileExpireTime <= 0 || len(c.config.CronTask.DeleteObjectType) == 0 {
+ log.ZInfo(c.ctx, "disable scheduled cleanup of s3", "fileExpireTime", c.config.CronTask.FileExpireTime, "deleteObjectType", c.config.CronTask.DeleteObjectType)
+ return nil
+ }
+ _, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, func() {
+ c.locker.ExecuteWithLock(c.ctx, "clearS3", c.clearS3)
+ })
+ return errs.WrapMsg(err, "failed to register clear s3 cron task")
+}
+
+func (c *cronServer) registerDeleteMsg() error {
+ if c.config.CronTask.RetainChatRecords <= 0 {
+ log.ZInfo(c.ctx, "disable scheduled cleanup of chat records", "retainChatRecords", c.config.CronTask.RetainChatRecords)
+ return nil
+ }
+ _, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, func() {
+ c.locker.ExecuteWithLock(c.ctx, "deleteMsg", c.deleteMsg)
+ })
+ return errs.WrapMsg(err, "failed to register delete msg cron task")
+}
+
+func (c *cronServer) registerClearUserMsg() error {
+ _, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, func() {
+ c.locker.ExecuteWithLock(c.ctx, "clearUserMsg", c.clearUserMsg)
+ })
+ return errs.WrapMsg(err, "failed to register clear user msg cron task")
+}
+
+func (c *cronServer) registerDismissMeetingGroups() error {
+ // Run every minute: dismiss group chats for meetings that ended more than 10 minutes ago
+ _, err := c.cron.AddFunc("*/1 * * * *", func() {
+ c.locker.ExecuteWithLock(c.ctx, "dismissMeetingGroups", c.dismissMeetingGroups)
+ })
+ return errs.WrapMsg(err, "failed to register dismiss meeting groups cron task")
+}
+
+func (c *cronServer) registerClearGroupMsgByConfig() error {
+ // Clear group-chat messages on the configured schedule (driven by the system config)
+ cronExpr := c.config.CronTask.CronExecuteTime
+ log.ZInfo(c.ctx, "[CLEAR_MSG] 注册清理群聊消息定时任务", "cron", cronExpr)
+ _, err := c.cron.AddFunc(cronExpr, func() {
+ c.locker.ExecuteWithLock(c.ctx, "clearGroupMsgByConfig", c.clearGroupMsg)
+ })
+ if err != nil {
+ log.ZError(c.ctx, "[CLEAR_MSG] 注册清理群聊消息定时任务失败", err)
+ } else {
+ log.ZInfo(c.ctx, "[CLEAR_MSG] 清理群聊消息定时任务注册成功", "cron", cronExpr)
+ }
+ return errs.WrapMsg(err, "failed to register clear group msg by config cron task")
+}
+
+func (c *cronServer) registerClearUserMsgByConfig() error {
+ // Clear single-chat messages on the configured schedule (driven by the system config)
+ cronExpr := c.config.CronTask.CronExecuteTime
+ log.ZInfo(c.ctx, "[CLEAR_MSG] 注册清理个人聊天消息定时任务", "cron", cronExpr)
+ _, err := c.cron.AddFunc(cronExpr, func() {
+ c.locker.ExecuteWithLock(c.ctx, "clearUserMsgByConfig", c.clearUserMsg)
+ })
+ if err != nil {
+ log.ZError(c.ctx, "[CLEAR_MSG] 注册清理个人聊天消息定时任务失败", err)
+ } else {
+ log.ZInfo(c.ctx, "[CLEAR_MSG] 清理个人聊天消息定时任务注册成功", "cron", cronExpr)
+ }
+ return errs.WrapMsg(err, "failed to register clear user msg by config cron task")
+}
diff --git a/internal/tools/cron/cron_test.go b/internal/tools/cron/cron_test.go
new file mode 100644
index 0000000..07854d8
--- /dev/null
+++ b/internal/tools/cron/cron_test.go
@@ -0,0 +1,64 @@
+package cron
+
+import (
+ "context"
+ "testing"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ kdisc "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery"
+ pbconversation "git.imall.cloud/openim/protocol/conversation"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/mw"
+ "github.com/robfig/cron/v3"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+)
+
+func TestName(t *testing.T) {
+ conf := &config.Discovery{
+ Enable: config.ETCD,
+ Etcd: config.Etcd{
+ RootDirectory: "openim",
+ Address: []string{"localhost:12379"},
+ },
+ }
+ client, err := kdisc.NewDiscoveryRegister(conf, nil)
+ if err != nil {
+ panic(err)
+ }
+ client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
+ ctx := mcontext.SetOpUserID(context.Background(), "imAdmin")
+ msgConn, err := client.GetConn(ctx, "msg-rpc-service")
+ if err != nil {
+ panic(err)
+ }
+ thirdConn, err := client.GetConn(ctx, "third-rpc-service")
+ if err != nil {
+ panic(err)
+ }
+
+ conversationConn, err := client.GetConn(ctx, "conversation-rpc-service")
+ if err != nil {
+ panic(err)
+ }
+
+ srv := &cronServer{
+ ctx: ctx,
+ config: &Config{
+ CronTask: config.CronTask{
+ RetainChatRecords: 1,
+ FileExpireTime: 1,
+ DeleteObjectType: []string{"msg-picture", "msg-file", "msg-voice", "msg-video", "msg-video-snapshot", "sdklog", ""},
+ },
+ },
+ cron: cron.New(),
+ msgClient: msg.NewMsgClient(msgConn),
+ conversationClient: pbconversation.NewConversationClient(conversationConn),
+ thirdClient: third.NewThirdClient(thirdConn),
+ }
+ srv.deleteMsg()
+ //srv.clearS3()
+ //srv.clearUserMsg()
+}
diff --git a/internal/tools/cron/dist_look.go b/internal/tools/cron/dist_look.go
new file mode 100644
index 0000000..e46b920
--- /dev/null
+++ b/internal/tools/cron/dist_look.go
@@ -0,0 +1,86 @@
+package cron
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "time"
+
+ "github.com/openimsdk/tools/log"
+ clientv3 "go.etcd.io/etcd/client/v3"
+ "go.etcd.io/etcd/client/v3/concurrency"
+)
+
+const (
+ lockLeaseTTL = 300
+)
+
+type EtcdLocker struct {
+ client *clientv3.Client
+ instanceID string
+}
+
+// NewEtcdLocker creates a new etcd distributed lock
+func NewEtcdLocker(client *clientv3.Client) (*EtcdLocker, error) {
+ hostname, _ := os.Hostname()
+ pid := os.Getpid()
+ instanceID := fmt.Sprintf("%s-pid-%d-%d", hostname, pid, time.Now().UnixNano())
+
+ locker := &EtcdLocker{
+ client: client,
+ instanceID: instanceID,
+ }
+
+ return locker, nil
+}
+
+func (e *EtcdLocker) ExecuteWithLock(ctx context.Context, taskName string, task func()) {
+ session, err := concurrency.NewSession(e.client, concurrency.WithTTL(lockLeaseTTL))
+ if err != nil {
+ log.ZWarn(ctx, "Failed to create etcd session", err,
+ "taskName", taskName,
+ "instanceID", e.instanceID)
+ return
+ }
+ defer session.Close()
+
+ lockKey := fmt.Sprintf("openim/crontask/%s", taskName)
+ mutex := concurrency.NewMutex(session, lockKey)
+
+ ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
+ defer cancel()
+
+ err = mutex.TryLock(ctxWithTimeout)
+ if err != nil {
+ // errors.Is(err, concurrency.ErrLocked)
+ log.ZDebug(ctx, "Task is being executed by another instance, skipping",
+ "taskName", taskName,
+ "instanceID", e.instanceID,
+ "error", err.Error())
+
+ return
+ }
+
+ defer func() {
+ if err := mutex.Unlock(ctx); err != nil {
+ log.ZWarn(ctx, "Failed to release task lock", err,
+ "taskName", taskName,
+ "instanceID", e.instanceID)
+ } else {
+ log.ZInfo(ctx, "Successfully released task lock",
+ "taskName", taskName,
+ "instanceID", e.instanceID)
+ }
+ }()
+
+ log.ZInfo(ctx, "Successfully acquired task lock, starting execution",
+ "taskName", taskName,
+ "instanceID", e.instanceID,
+ "sessionID", session.Lease())
+
+ task()
+
+ log.ZInfo(ctx, "Task execution completed",
+ "taskName", taskName,
+ "instanceID", e.instanceID)
+}
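ExecuteWithLock's skip-if-busy semantics (try the lock with a short timeout, run the task, release) can be illustrated in-process with sync.Mutex.TryLock (Go 1.18+), without an etcd cluster; the types here are illustrative stand-ins, not the etcd-backed ones above:

```go
package main

import (
	"fmt"
	"sync"
)

// localLocker mimics ExecuteWithLock's semantics with an in-process mutex
// instead of an etcd session: a second caller does not block, it skips.
type localLocker struct{ mu sync.Mutex }

// ExecuteWithLock runs task only if the lock is free, returning whether it ran.
func (l *localLocker) ExecuteWithLock(taskName string, task func()) bool {
	if !l.mu.TryLock() {
		fmt.Println("skip:", taskName) // another holder, like TryLock failing on the etcd mutex
		return false
	}
	defer l.mu.Unlock()
	task()
	return true
}

func main() {
	var l localLocker
	ran := l.ExecuteWithLock("clearUserMsg", func() {
		// While the lock is held, a concurrent attempt is skipped.
		if l.ExecuteWithLock("clearUserMsg", func() {}) {
			panic("nested attempt should have been skipped")
		}
	})
	fmt.Println("ran:", ran) // ran: true
}
```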
diff --git a/internal/tools/cron/meeting.go b/internal/tools/cron/meeting.go
new file mode 100644
index 0000000..44c255f
--- /dev/null
+++ b/internal/tools/cron/meeting.go
@@ -0,0 +1,152 @@
+package cron
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+// meetingPagination is a simple pagination implementation used by cron tasks for batch scanning
+type meetingPagination struct {
+ pageNumber int32
+ showNumber int32
+}
+
+func (p *meetingPagination) GetPageNumber() int32 {
+ if p.pageNumber <= 0 {
+ return 1
+ }
+ return p.pageNumber
+}
+
+func (p *meetingPagination) GetShowNumber() int32 {
+ if p.showNumber <= 0 {
+ return 200
+ }
+ return p.showNumber
+}
+
+// dismissMeetingGroups dismisses the group chats of meetings that ended more than 10 minutes ago
+func (c *cronServer) dismissMeetingGroups() {
+ now := time.Now()
+	// compute the cutoff: 10 minutes before now
+ beforeTime := now.Add(-10 * time.Minute)
+ operationID := fmt.Sprintf("cron_dismiss_meeting_groups_%d_%d", os.Getpid(), now.UnixMilli())
+ ctx := mcontext.SetOperationID(c.ctx, operationID)
+
+ log.ZDebug(ctx, "Start dismissing meeting groups", "beforeTime", beforeTime)
+
+	// first mark meetings that are past their end time but still scheduled/ongoing as finished
+ c.finishExpiredMeetings(ctx, now)
+
+	// find meetings that finished more than 10 minutes ago
+	// end time = scheduledTime + duration (minutes)
+ meetings, err := c.meetingDB.FindFinishedMeetingsBefore(ctx, beforeTime)
+ if err != nil {
+ log.ZError(ctx, "Failed to find finished meetings", err)
+ return
+ }
+
+ if len(meetings) == 0 {
+ log.ZDebug(ctx, "No finished meetings to dismiss groups")
+ return
+ }
+
+ log.ZInfo(ctx, "Found finished meetings to dismiss groups", "count", len(meetings))
+
+ dismissedCount := 0
+ failedCount := 0
+
+ for _, meeting := range meetings {
+ if meeting.GroupID == "" {
+ log.ZWarn(ctx, "Meeting has no group ID, skip", nil, "meetingID", meeting.MeetingID)
+ continue
+ }
+
+		// compute the meeting end time
+ endTime := meeting.ScheduledTime.Add(time.Duration(meeting.Duration) * time.Minute)
+		// skip meetings that ended less than 10 minutes ago
+ if now.Sub(endTime) < 10*time.Minute {
+ log.ZDebug(ctx, "Meeting ended less than 10 minutes ago, skip", "meetingID", meeting.MeetingID, "endTime", endTime)
+ continue
+ }
+
+		// dismiss the group chat; deleteMember=true removes all members
+ ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%s", operationID, meeting.MeetingID))
+ err := c.groupClient.DismissGroup(ctx, meeting.GroupID, true)
+ if err != nil {
+			// if the group or its owner cannot be found (RecordNotFoundError), the group was likely already dismissed or the data is inconsistent
+ if errs.ErrRecordNotFound.Is(err) {
+ log.ZWarn(ctx, "Group not found or owner not found, may already be dismissed, clear groupID", nil, "meetingID", meeting.MeetingID, "groupID", meeting.GroupID)
+				// clear the groupID to avoid reprocessing on the next run
+ if updateErr := c.meetingDB.Update(ctx, meeting.MeetingID, map[string]any{"group_id": ""}); updateErr != nil {
+ log.ZWarn(ctx, "Failed to clear groupID after group not found", updateErr, "meetingID", meeting.MeetingID)
+ }
+			// not counted as a failure, since this is not a real error
+ continue
+ }
+ log.ZError(ctx, "Failed to dismiss meeting group", err, "meetingID", meeting.MeetingID, "groupID", meeting.GroupID)
+ failedCount++
+ continue
+ }
+
+		// remove the meeting group ID from the attentionIds in the webhook config
+ if c.systemConfigDB != nil {
+ if err := webhook.UpdateAttentionIds(ctx, c.systemConfigDB, meeting.GroupID, false); err != nil {
+ log.ZWarn(ctx, "dismissMeetingGroups: failed to remove groupID from webhook attentionIds", err, "meetingID", meeting.MeetingID, "groupID", meeting.GroupID)
+ }
+ }
+
+		// after the group is dismissed, clear the meeting's groupID to avoid reprocessing
+ if updateErr := c.meetingDB.Update(ctx, meeting.MeetingID, map[string]any{"group_id": ""}); updateErr != nil {
+ log.ZWarn(ctx, "Failed to clear groupID after dismissing group", updateErr, "meetingID", meeting.MeetingID, "groupID", meeting.GroupID)
+ } else {
+ log.ZInfo(ctx, "Successfully dismissed meeting group and cleared groupID", "meetingID", meeting.MeetingID, "groupID", meeting.GroupID, "endTime", endTime)
+ }
+ dismissedCount++
+ }
+
+ log.ZInfo(ctx, "Finished dismissing meeting groups", "total", len(meetings), "dismissed", dismissedCount, "failed", failedCount, "duration", time.Since(now))
+}
+
+// finishExpiredMeetings marks meetings past their end time as finished
+func (c *cronServer) finishExpiredMeetings(ctx context.Context, now time.Time) {
+ statuses := []int32{model.MeetingStatusScheduled, model.MeetingStatusOngoing}
+ for _, status := range statuses {
+ page := int32(1)
+ for {
+ total, meetings, err := c.meetingDB.FindByStatus(ctx, status, &meetingPagination{pageNumber: page, showNumber: 200})
+ if err != nil {
+ log.ZWarn(ctx, "finishExpiredMeetings: failed to list meetings", err, "status", status, "page", page)
+ break
+ }
+ if len(meetings) == 0 {
+ break
+ }
+
+ for _, meeting := range meetings {
+ endTime := meeting.ScheduledTime.Add(time.Duration(meeting.Duration) * time.Minute)
+ if now.After(endTime) {
+ if err := c.meetingDB.UpdateStatus(ctx, meeting.MeetingID, model.MeetingStatusFinished); err != nil {
+ log.ZWarn(ctx, "finishExpiredMeetings: failed to update status", err, "meetingID", meeting.MeetingID)
+ continue
+ }
+ log.ZInfo(ctx, "finishExpiredMeetings: meeting marked finished", "meetingID", meeting.MeetingID, "endTime", endTime)
+ }
+ }
+
+			// pagination termination: all pages scanned (page size fixed at 200, matching showNumber above)
+ if int64(page*200) >= total {
+ break
+ }
+ page++
+ }
+ }
+}
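The paging loop in `finishExpiredMeetings` (fetch a page, process it, stop when `page*pageSize` covers the reported total or a page comes back empty) can be sketched generically. `scanPages` and the fetch closure are illustrative names; note the same coupling as above, where the hardcoded page size must match the size passed to the query:

```go
package main

import "fmt"

// scanPages fetches successive pages of size pageSize until the reported
// total is covered or a page comes back empty, mirroring the termination
// condition int64(page*200) >= total in finishExpiredMeetings.
func scanPages(fetch func(page, pageSize int32) (int64, []string), pageSize int32) []string {
	var out []string
	for page := int32(1); ; page++ {
		total, items := fetch(page, pageSize)
		if len(items) == 0 {
			return out
		}
		out = append(out, items...)
		if int64(page)*int64(pageSize) >= total {
			return out
		}
	}
}

func main() {
	data := []string{"m1", "m2", "m3", "m4", "m5"}
	fetch := func(page, pageSize int32) (int64, []string) {
		start := int(page-1) * int(pageSize)
		end := start + int(pageSize)
		if start > len(data) {
			start = len(data)
		}
		if end > len(data) {
			end = len(data)
		}
		return int64(len(data)), data[start:end]
	}
	fmt.Println(scanPages(fetch, 2)) // [m1 m2 m3 m4 m5]
}
```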
diff --git a/internal/tools/cron/msg.go b/internal/tools/cron/msg.go
new file mode 100644
index 0000000..4bdc3d3
--- /dev/null
+++ b/internal/tools/cron/msg.go
@@ -0,0 +1,37 @@
+package cron
+
+import (
+ "fmt"
+ "os"
+ "time"
+
+ "git.imall.cloud/openim/protocol/msg"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func (c *cronServer) deleteMsg() {
+ now := time.Now()
+ deltime := now.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.RetainChatRecords))
+ operationID := fmt.Sprintf("cron_msg_%d_%d", os.Getpid(), deltime.UnixMilli())
+ ctx := mcontext.SetOperationID(c.ctx, operationID)
+ log.ZDebug(ctx, "Destruct chat records", "deltime", deltime, "timestamp", deltime.UnixMilli())
+ const (
+ deleteCount = 10000
+ deleteLimit = 50
+ )
+ var count int
+ for i := 1; i <= deleteCount; i++ {
+ ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%d", operationID, i))
+ resp, err := c.msgClient.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: deltime.UnixMilli(), Limit: deleteLimit})
+ if err != nil {
+ log.ZError(ctx, "cron destruct chat records failed", err)
+ break
+ }
+ count += int(resp.Count)
+ if resp.Count < deleteLimit {
+ break
+ }
+ }
+	log.ZDebug(ctx, "cron destruct chat records end", "deltime", deltime, "cost", time.Since(now), "count", count)
+}
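The loop in `deleteMsg` (delete in capped batches, stop early on a short batch or after a bounded number of rounds) is a common purge pattern; a minimal sketch with an illustrative `purgeLoop` helper and a fake store:

```go
package main

import "fmt"

// purgeLoop calls deleteBatch up to maxRounds times, stopping early when a
// batch returns fewer rows than the limit (nothing left to delete) --
// the same termination logic as deleteMsg and clearS3 above.
func purgeLoop(deleteBatch func(limit int) int, limit, maxRounds int) int {
	total := 0
	for i := 0; i < maxRounds; i++ {
		n := deleteBatch(limit)
		total += n
		if n < limit {
			break
		}
	}
	return total
}

func main() {
	remaining := 250 // fake store with 250 expired rows
	del := func(limit int) int {
		n := limit
		if remaining < n {
			n = remaining
		}
		remaining -= n
		return n
	}
	fmt.Println(purgeLoop(del, 100, 10000)) // 250
}
```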
diff --git a/internal/tools/cron/s3.go b/internal/tools/cron/s3.go
new file mode 100644
index 0000000..c180432
--- /dev/null
+++ b/internal/tools/cron/s3.go
@@ -0,0 +1,80 @@
+package cron
+
+import (
+ "fmt"
+ "os"
+ "time"
+
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+)
+
+func (c *cronServer) clearS3() {
+ start := time.Now()
+ deleteTime := start.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.FileExpireTime))
+ operationID := fmt.Sprintf("cron_s3_%d_%d", os.Getpid(), deleteTime.UnixMilli())
+ ctx := mcontext.SetOperationID(c.ctx, operationID)
+	log.ZDebug(ctx, "deleteOutdatedData", "deleteTime", deleteTime, "timestamp", deleteTime.UnixMilli())
+ const (
+ deleteCount = 10000
+ deleteLimit = 100
+ )
+
+ var count int
+ for i := 1; i <= deleteCount; i++ {
+ resp, err := c.thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: deleteTime.UnixMilli(), ObjectGroup: c.config.CronTask.DeleteObjectType, Limit: deleteLimit})
+ if err != nil {
+			log.ZError(ctx, "cron deleteOutdatedData failed", err)
+ return
+ }
+ count += int(resp.Count)
+ if resp.Count < deleteLimit {
+ break
+ }
+ }
+	log.ZDebug(ctx, "cron deleteOutdatedData success", "deleteTime", deleteTime, "cost", time.Since(start), "count", count)
+}
diff --git a/magefile.go b/magefile.go
new file mode 100644
index 0000000..b9861bd
--- /dev/null
+++ b/magefile.go
@@ -0,0 +1,105 @@
+//go:build mage
+// +build mage
+
+package main
+
+import (
+ "flag"
+ "os"
+
+ "github.com/openimsdk/gomake/mageutil"
+)
+
+var Default = Build
+
+var Aliases = map[string]any{
+ "buildcc": BuildWithCustomConfig,
+ "startcc": StartWithCustomConfig,
+}
+
+var (
+	customRootDir   = "."       // working directory for mage, default is "." (the project root directory)
+ customSrcDir = "cmd" // source code directory, default is "cmd"
+ customOutputDir = "_output" // output directory, default is "_output"
+ customConfigDir = "config" // configuration directory, default is "config"
+ customToolsDir = "tools" // tools source code directory, default is "tools"
+)
+
+// Build builds the specified binaries (all binaries when none are given).
+//
+// Example: `mage build openim-api openim-rpc-user seq`
+func Build() {
+ flag.Parse()
+ bin := flag.Args()
+ if len(bin) != 0 {
+ bin = bin[1:]
+ }
+
+ mageutil.Build(bin, nil)
+}
+
+func BuildWithCustomConfig() {
+ flag.Parse()
+ bin := flag.Args()
+ if len(bin) != 0 {
+ bin = bin[1:]
+ }
+
+ config := &mageutil.PathOptions{
+ RootDir: &customRootDir,
+ OutputDir: &customOutputDir,
+ SrcDir: &customSrcDir,
+ ToolsDir: &customToolsDir,
+ }
+
+ mageutil.Build(bin, config)
+}
+
+func Start() {
+ mageutil.InitForSSC()
+ err := setMaxOpenFiles()
+ if err != nil {
+ mageutil.PrintRed("setMaxOpenFiles failed " + err.Error())
+ os.Exit(1)
+ }
+
+ flag.Parse()
+ bin := flag.Args()
+ if len(bin) != 0 {
+ bin = bin[1:]
+ }
+
+ mageutil.StartToolsAndServices(bin, nil)
+}
+
+func StartWithCustomConfig() {
+ mageutil.InitForSSC()
+ err := setMaxOpenFiles()
+ if err != nil {
+ mageutil.PrintRed("setMaxOpenFiles failed " + err.Error())
+ os.Exit(1)
+ }
+
+ flag.Parse()
+ bin := flag.Args()
+ if len(bin) != 0 {
+ bin = bin[1:]
+ }
+
+ config := &mageutil.PathOptions{
+ RootDir: &customRootDir,
+ OutputDir: &customOutputDir,
+ ConfigDir: &customConfigDir,
+ }
+
+ mageutil.StartToolsAndServices(bin, config)
+}
+
+func Stop() {
+ mageutil.StopAndCheckBinaries()
+}
+
+func Check() {
+ mageutil.CheckAndReportBinariesStatus()
+}
diff --git a/magefile_unix.go b/magefile_unix.go
new file mode 100644
index 0000000..4bb0cc1
--- /dev/null
+++ b/magefile_unix.go
@@ -0,0 +1,21 @@
+//go:build mage && !windows
+// +build mage,!windows
+
+package main
+
+import (
+ "syscall"
+
+ "github.com/openimsdk/gomake/mageutil"
+)
+
+func setMaxOpenFiles() error {
+ var rLimit syscall.Rlimit
+ err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit)
+ if err != nil {
+ return err
+ }
+ rLimit.Max = uint64(mageutil.MaxFileDescriptors)
+ rLimit.Cur = uint64(mageutil.MaxFileDescriptors)
+ return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit)
+}
diff --git a/magefile_windows.go b/magefile_windows.go
new file mode 100644
index 0000000..7441bfd
--- /dev/null
+++ b/magefile_windows.go
@@ -0,0 +1,8 @@
+//go:build mage
+// +build mage
+
+package main
+
+func setMaxOpenFiles() error {
+ return nil
+}
diff --git a/package-lock.json b/package-lock.json
new file mode 100644
index 0000000..29cfb6f
--- /dev/null
+++ b/package-lock.json
@@ -0,0 +1,6 @@
+{
+ "name": "open-im-server-deploy",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {}
+}
diff --git a/pkg/apistruct/config_manager.go b/pkg/apistruct/config_manager.go
new file mode 100644
index 0000000..ca5683a
--- /dev/null
+++ b/pkg/apistruct/config_manager.go
@@ -0,0 +1,28 @@
+package apistruct
+
+type GetConfigReq struct {
+ ConfigName string `json:"configName"`
+}
+
+type GetConfigListResp struct {
+ Environment string `json:"environment"`
+ Version string `json:"version"`
+ ConfigNames []string `json:"configNames"`
+}
+
+type SetConfigReq struct {
+ ConfigName string `json:"configName"`
+ Data string `json:"data"`
+}
+
+type SetConfigsReq struct {
+ Configs []SetConfigReq `json:"configs"`
+}
+
+type SetEnableConfigManagerReq struct {
+ Enable bool `json:"enable"`
+}
+
+type GetEnableConfigManagerResp struct {
+ Enable bool `json:"enable"`
+}
diff --git a/pkg/apistruct/doc.go b/pkg/apistruct/doc.go
new file mode 100644
index 0000000..31fcc77
--- /dev/null
+++ b/pkg/apistruct/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package apistruct // import "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
diff --git a/pkg/apistruct/manage.go b/pkg/apistruct/manage.go
new file mode 100644
index 0000000..1911542
--- /dev/null
+++ b/pkg/apistruct/manage.go
@@ -0,0 +1,584 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package apistruct
+
+import (
+ "encoding/json"
+ "fmt"
+ "strconv"
+
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+// SendMsg defines the structure for sending messages with various metadata.
+type SendMsg struct {
+ // SendID uniquely identifies the sender.
+ SendID string `json:"sendID" binding:"required"`
+
+ // GroupID is the identifier for the group, required if SessionType is 2 or 3.
+ GroupID string `json:"groupID" binding:"required_if=SessionType 2|required_if=SessionType 3"`
+
+ // SenderNickname is the nickname of the sender.
+ SenderNickname string `json:"senderNickname"`
+
+ // SenderFaceURL is the URL to the sender's avatar.
+ SenderFaceURL string `json:"senderFaceURL"`
+
+ // SenderPlatformID is an integer identifier for the sender's platform.
+ SenderPlatformID int32 `json:"senderPlatformID"`
+
+ // Content is the actual content of the message, required and excluded from Swagger documentation.
+ Content map[string]any `json:"content" binding:"required" swaggerignore:"true"`
+
+ // ContentType is an integer that represents the type of the content.
+ ContentType int32 `json:"contentType" binding:"required"`
+
+ // SessionType is an integer that represents the type of session for the message.
+ SessionType int32 `json:"sessionType" binding:"required"`
+
+ // IsOnlineOnly specifies if the message is only sent when the receiver is online.
+ IsOnlineOnly bool `json:"isOnlineOnly"`
+
+ // NotOfflinePush specifies if the message should not trigger offline push notifications.
+ NotOfflinePush bool `json:"notOfflinePush"`
+
+ // SendTime is a timestamp indicating when the message was sent.
+ SendTime int64 `json:"sendTime"`
+
+ // OfflinePushInfo contains information for offline push notifications.
+ OfflinePushInfo *sdkws.OfflinePushInfo `json:"offlinePushInfo"`
+
+ // Ex stores extended fields
+ Ex string `json:"ex"`
+}
+
+// SendMsgReq extends SendMsg with the requirement of RecvID when SessionType indicates a one-on-one or notification chat.
+type SendMsgReq struct {
+ // RecvID uniquely identifies the receiver and is required for one-on-one or notification chat types.
+ RecvID string `json:"recvID" binding:"required_if" message:"recvID is required if sessionType is SingleChatType or NotificationChatType"`
+ SendMsg
+}
+
+type GetConversationListReq struct {
+ // userID uniquely identifies the user.
+ UserID string `protobuf:"bytes,1,opt,name=userID,proto3" json:"userID,omitempty" binding:"required"`
+
+ // ConversationIDs contains a list of unique identifiers for conversations.
+ ConversationIDs []string `protobuf:"bytes,2,rep,name=conversationIDs,proto3" json:"conversationIDs,omitempty"`
+}
+
+type GetConversationListResp struct {
+ // ConversationElems is a map that associates conversation IDs with their respective details.
+ ConversationElems map[string]*ConversationElem `protobuf:"bytes,1,rep,name=conversationElems,proto3" json:"conversationElems,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+type ConversationElem struct {
+ // MaxSeq represents the maximum sequence number within the conversation.
+ MaxSeq int64 `protobuf:"varint,1,opt,name=maxSeq,proto3" json:"maxSeq,omitempty"`
+
+ // UnreadSeq represents the number of unread messages in the conversation.
+ UnreadSeq int64 `protobuf:"varint,2,opt,name=unreadSeq,proto3" json:"unreadSeq,omitempty"`
+
+ // LastSeqTime represents the timestamp of the last sequence in the conversation.
+ LastSeqTime int64 `protobuf:"varint,3,opt,name=LastSeqTime,proto3" json:"LastSeqTime,omitempty"`
+}
+
+// BatchSendMsgReq defines the structure for sending a message to multiple recipients.
+type BatchSendMsgReq struct {
+ SendMsg
+
+ // IsSendAll indicates whether the message should be sent to all users.
+ IsSendAll bool `json:"isSendAll"`
+
+ // RecvIDs is a slice of receiver identifiers to whom the message will be sent, required field.
+ RecvIDs []string `json:"recvIDs" binding:"required"`
+}
+
+// BatchSendMsgResp contains the results of a batch message send operation.
+type BatchSendMsgResp struct {
+ // Results is a slice of SingleReturnResult, representing the outcome of each message sent.
+ Results []*SingleReturnResult `json:"results"`
+
+ // FailedIDs is a slice of user IDs for whom the message send failed.
+ FailedIDs []string `json:"failedUserIDs"`
+}
+
+// SendRedPacketReq is the request for sending a red packet.
+type SendRedPacketReq struct {
+	GroupID       string `json:"groupID" binding:"required"`       // group ID
+	RedPacketType int32  `json:"redPacketType" binding:"required"` // red packet type: 1 - normal, 2 - lucky (random amount)
+	TotalAmount   int64  `json:"totalAmount" binding:"required"`   // total amount (in cents)
+	TotalCount    int32  `json:"totalCount" binding:"required"`    // total number of packets
+	Blessing      string `json:"blessing"`                         // greeting message
+}
+
+// SendRedPacketResp is the response for sending a red packet.
+type SendRedPacketResp struct {
+	RedPacketID string `json:"redPacketID"` // red packet ID
+	ServerMsgID string `json:"serverMsgID"` // server message ID
+	ClientMsgID string `json:"clientMsgID"` // client message ID
+	SendTime    int64  `json:"sendTime"`    // send time
+}
+
+// ReceiveRedPacketReq is the request for claiming a red packet.
+type ReceiveRedPacketReq struct {
+	RedPacketID string `json:"redPacketID" binding:"required"` // red packet ID
+}
+
+// ReceiveRedPacketResp is the response for claiming a red packet.
+type ReceiveRedPacketResp struct {
+	RedPacketID string `json:"redPacketID"` // red packet ID
+	Amount      int64  `json:"amount"`      // amount claimed (in cents)
+	IsLucky     bool   `json:"isLucky"`     // whether this is the luckiest claim (lucky packets only)
+}
+
+// GetRedPacketsByGroupReq is the request for listing red packets by group ID (groupID optional; empty queries all red packets).
+type GetRedPacketsByGroupReq struct {
+	GroupID    string     `json:"groupID"`    // group ID (optional; empty queries all red packets)
+	Pagination Pagination `json:"pagination"` // pagination parameters
+}
+
+// Pagination holds pagination parameters.
+type Pagination struct {
+	PageNumber int32 `json:"pageNumber"` // page number, starting from 1
+	ShowNumber int32 `json:"showNumber"` // items per page
+}
+
+// GetRedPacketsByGroupResp is the response for listing red packets by group ID.
+type GetRedPacketsByGroupResp struct {
+	Total      int64            `json:"total"`      // total count
+	RedPackets []*RedPacketInfo `json:"redPackets"` // red packet list
+}
+
+// RedPacketInfo describes a red packet.
+type RedPacketInfo struct {
+	RedPacketID   string `json:"redPacketID"`   // red packet ID
+	SendUserID    string `json:"sendUserID"`    // sender ID
+	GroupID       string `json:"groupID"`       // group ID
+	GroupName     string `json:"groupName"`     // group name
+	RedPacketType int32  `json:"redPacketType"` // red packet type: 1 - normal, 2 - lucky
+	TotalAmount   int64  `json:"totalAmount"`   // total amount (in cents)
+	TotalCount    int32  `json:"totalCount"`    // total number of packets
+	RemainAmount  int64  `json:"remainAmount"`  // remaining amount (in cents)
+	RemainCount   int32  `json:"remainCount"`   // remaining packets
+	Blessing      string `json:"blessing"`      // greeting message
+	Status        int32  `json:"status"`        // status: 0 - active, 1 - fully claimed, 2 - expired
+	ExpireTime    int64  `json:"expireTime"`    // expiry timestamp (milliseconds)
+	CreateTime    int64  `json:"createTime"`    // creation timestamp (milliseconds)
+}
+
+// GetRedPacketReceiveInfoReq is the request for querying red packet claims.
+type GetRedPacketReceiveInfoReq struct {
+	RedPacketID string `json:"redPacketID" binding:"required"` // red packet ID
+}
+
+// GetRedPacketReceiveInfoResp is the response for querying red packet claims.
+type GetRedPacketReceiveInfoResp struct {
+	RedPacketID  string                    `json:"redPacketID"`  // red packet ID
+	TotalAmount  int64                     `json:"totalAmount"`  // total amount (in cents)
+	TotalCount   int32                     `json:"totalCount"`   // total number of packets
+	RemainAmount int64                     `json:"remainAmount"` // remaining amount (in cents)
+	RemainCount  int32                     `json:"remainCount"`  // remaining packets
+	Status       int32                     `json:"status"`       // status: 0 - active, 1 - fully claimed, 2 - expired
+	Receives     []*RedPacketReceiveDetail `json:"receives"`     // claim records
+}
+
+// RedPacketReceiveDetail is a red packet claim record (admin API).
+type RedPacketReceiveDetail struct {
+	ReceiveID     string `json:"receiveID"`     // claim record ID
+	ReceiveUserID string `json:"receiveUserID"` // claimer ID
+	Amount        int64  `json:"amount"`        // amount claimed (in cents)
+	ReceiveTime   int64  `json:"receiveTime"`   // claim timestamp (milliseconds)
+	IsLucky       bool   `json:"isLucky"`       // whether this is the luckiest claim (lucky packets only)
+}
+
+// PauseRedPacketReq is the request for pausing a red packet.
+type PauseRedPacketReq struct {
+	RedPacketID string `json:"redPacketID" binding:"required"` // red packet ID
+}
+
+// PauseRedPacketResp is the response for pausing a red packet.
+type PauseRedPacketResp struct {
+	RedPacketID string `json:"redPacketID"` // red packet ID
+}
+
+// GetRedPacketDetailReq is the request for red packet details (client side).
+type GetRedPacketDetailReq struct {
+	RedPacketID string `json:"redPacketID" binding:"required"` // red packet ID (required)
+}
+
+// GetRedPacketDetailResp is the response for red packet details (client side).
+type GetRedPacketDetailResp struct {
+	RedPacketID   string                        `json:"redPacketID"`   // red packet ID
+	GroupID       string                        `json:"groupID"`       // group ID
+	RedPacketType int32                         `json:"redPacketType"` // red packet type: 1 - normal, 2 - lucky
+	TotalAmount   int64                         `json:"totalAmount"`   // total amount (in cents)
+	TotalCount    int32                         `json:"totalCount"`    // total number of packets
+	RemainAmount  int64                         `json:"remainAmount"`  // remaining amount (in cents)
+	RemainCount   int32                         `json:"remainCount"`   // remaining packets
+	Blessing      string                        `json:"blessing"`      // greeting message
+	Status        int32                         `json:"status"`        // status: 0 - active, 1 - fully claimed, 2 - expired
+	IsExpired     bool                          `json:"isExpired"`     // whether expired (older than one week)
+	MyReceive     *RedPacketMyReceiveDetail     `json:"myReceive"`     // current user's claim (if already claimed)
+	Receives      []*RedPacketUserReceiveDetail `json:"receives"`      // claim records (visible to group owner/admins only)
+}
+
+// RedPacketMyReceiveDetail is the current user's own claim (no user ID, nickname, or avatar).
+type RedPacketMyReceiveDetail struct {
+	ReceiveID   string `json:"receiveID"`   // claim record ID
+	Amount      int64  `json:"amount"`      // amount claimed (in cents)
+	ReceiveTime int64  `json:"receiveTime"` // claim timestamp (milliseconds)
+	IsLucky     bool   `json:"isLucky"`     // whether this is the luckiest claim (lucky packets only)
+}
+
+// RedPacketUserReceiveDetail is a claim record with user info (client side).
+type RedPacketUserReceiveDetail struct {
+	ReceiveID     string `json:"receiveID"`     // claim record ID
+	ReceiveUserID string `json:"receiveUserID"` // claimer ID
+	Nickname      string `json:"nickname"`      // claimer nickname
+	FaceURL       string `json:"faceURL"`       // claimer avatar
+	Amount        int64  `json:"amount"`        // amount claimed (in cents)
+	ReceiveTime   int64  `json:"receiveTime"`   // claim timestamp (milliseconds)
+	IsLucky       bool   `json:"isLucky"`       // whether this is the luckiest claim (lucky packets only)
+}
+
+// GetWalletsReq is the request for listing user wallets (admin API).
+type GetWalletsReq struct {
+	UserID      string     `json:"userID"`      // user ID (optional; lookup supports ID, phone number, or account)
+	PhoneNumber string     `json:"phoneNumber"` // phone number (optional)
+	Account     string     `json:"account"`     // account (optional)
+	Pagination  Pagination `json:"pagination"`  // pagination parameters
+}
+
+// GetWalletsResp is the response for listing user wallets.
+type GetWalletsResp struct {
+	Total   int64         `json:"total"`   // total count
+	Wallets []*WalletInfo `json:"wallets"` // wallet list
+}
+
+// WalletInfo describes a wallet.
+type WalletInfo struct {
+	UserID     string `json:"userID"`     // user ID
+	Nickname   string `json:"nickname"`   // user nickname
+	FaceURL    string `json:"faceURL"`    // user avatar
+	Balance    int64  `json:"balance"`    // balance (in cents)
+	CreateTime int64  `json:"createTime"` // creation timestamp (milliseconds)
+	UpdateTime int64  `json:"updateTime"` // update timestamp (milliseconds)
+}
+
+// BatchUpdateWalletBalanceReq is the request for batch-updating user balances (admin API).
+type BatchUpdateWalletBalanceReq struct {
+	Users     []WalletUserIdentifier `json:"users" binding:"required"` // user identifiers (user ID, phone number, or account)
+	Amount    int64                  `json:"amount"`                   // default amount (in cents), used when a user entry has no amount
+	Operation string                 `json:"operation"`                // default operation: set, add, or subtract; defaults to add
+}
+
+// WalletUserIdentifier identifies a wallet user (by user ID, phone number, or account).
+type WalletUserIdentifier struct {
+	UserID      FlexibleString `json:"userID"`      // user ID (optional; accepts number or string)
+	PhoneNumber string         `json:"phoneNumber"` // phone number (optional)
+	Account     string         `json:"account"`     // account (optional)
+	Amount      int64          `json:"amount"`      // amount (in cents); falls back to the request default when unset
+	Operation   string         `json:"operation"`   // operation: set, add, or subtract; falls back to the request default when unset
+	Remark      string         `json:"remark"`      // remark (optional) attached to this update record
+}
+
+// FlexibleString is a flexible string type that accepts both JSON numbers and strings.
+type FlexibleString string
+
+// UnmarshalJSON implements custom JSON decoding that accepts both numbers and strings.
+func (f *FlexibleString) UnmarshalJSON(data []byte) error {
+	// try to decode as a string
+	var s string
+	if err := json.Unmarshal(data, &s); err == nil {
+		*f = FlexibleString(s)
+		return nil
+	}
+	// try to decode as a number
+	var num json.Number
+	if err := json.Unmarshal(data, &num); err == nil {
+		*f = FlexibleString(num.String())
+		return nil
+	}
+	// try to decode as an integer
+	var i int64
+	if err := json.Unmarshal(data, &i); err == nil {
+		*f = FlexibleString(strconv.FormatInt(i, 10))
+		return nil
+	}
+	// try to decode as a float
+	var f64 float64
+	if err := json.Unmarshal(data, &f64); err == nil {
+		*f = FlexibleString(strconv.FormatFloat(f64, 'f', -1, 64))
+		return nil
+	}
+	return fmt.Errorf("cannot unmarshal %s into FlexibleString", string(data))
+}
+
+// String returns the underlying string value.
+func (f FlexibleString) String() string {
+	return string(f)
+}
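The decoding behavior of `FlexibleString` can be exercised standalone. This sketch re-declares the type with only the string and `json.Number` branches (which together cover every JSON string and number input), plus an illustrative `decodeFlexible` helper, to show what clients may send:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FlexibleString mirrors the type above: a string that also accepts
// JSON numbers, so clients may send "userID": "10086" or "userID": 10086.
type FlexibleString string

func (f *FlexibleString) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err == nil { // JSON string
		*f = FlexibleString(s)
		return nil
	}
	var num json.Number
	if err := json.Unmarshal(data, &num); err == nil { // JSON number, literal preserved
		*f = FlexibleString(num.String())
		return nil
	}
	return fmt.Errorf("cannot unmarshal %s into FlexibleString", string(data))
}

// decodeFlexible is a small helper for demonstration.
func decodeFlexible(data []byte) (FlexibleString, error) {
	var f FlexibleString
	err := json.Unmarshal(data, &f)
	return f, err
}

func main() {
	a, _ := decodeFlexible([]byte(`"u123"`))
	b, _ := decodeFlexible([]byte(`10086`))
	fmt.Println(a, b) // u123 10086
}
```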
+
+// BatchUpdateWalletBalanceResp is the response for batch-updating user balances.
+type BatchUpdateWalletBalanceResp struct {
+	Total   int32                `json:"total"`   // total processed
+	Success int32                `json:"success"` // succeeded
+	Failed  int32                `json:"failed"`  // failed
+	Results []WalletUpdateResult `json:"results"` // per-user results
+}
+
+// WalletUpdateResult is the result of one wallet update.
+type WalletUpdateResult struct {
+	UserID      string `json:"userID"`      // user ID
+	PhoneNumber string `json:"phoneNumber"` // phone number
+	Account     string `json:"account"`     // account
+	Success     bool   `json:"success"`     // whether the update succeeded
+	Message     string `json:"message"`     // result message
+	OldBalance  int64  `json:"oldBalance"`  // balance before the update
+	NewBalance  int64  `json:"newBalance"`  // balance after the update
+	Remark      string `json:"remark"`      // remark
+}
+
+// SendSingleMsgReq defines the structure for sending a message to a single recipient.
+type SendSingleMsgReq struct {
+	// SendID must be specified for group messages.
+	SendID          string                 `json:"sendID"`
+	Content         string                 `json:"content" binding:"required"`
+	OfflinePushInfo *sdkws.OfflinePushInfo `json:"offlinePushInfo"`
+	Ex              string                 `json:"ex"`
+}
+
+type KeyMsgData struct {
+ SendID string `json:"sendID"`
+ RecvID string `json:"recvID"`
+ GroupID string `json:"groupID"`
+}
+
+// SingleReturnResult encapsulates the result of a single message send attempt.
+type SingleReturnResult struct {
+ // ServerMsgID is the message identifier on the server-side.
+ ServerMsgID string `json:"serverMsgID"`
+
+ // ClientMsgID is the message identifier on the client-side.
+ ClientMsgID string `json:"clientMsgID"`
+
+ // SendTime is the timestamp of when the message was sent.
+ SendTime int64 `json:"sendTime"`
+
+ // RecvID uniquely identifies the receiver of the message.
+ RecvID string `json:"recvID"`
+
+ // Modify fields modified via webhook.
+ Modify map[string]any `json:"modify,omitempty"`
+}
+
+type SendMsgResp struct {
+ // SendMsgResp original response.
+ *pbmsg.SendMsgResp
+
+ // Modify fields modified via webhook.
+ Modify map[string]any `json:"modify,omitempty"`
+}
+
+// CreateMeetingReq is the request for creating a meeting.
+type CreateMeetingReq struct {
+	MeetingID      string   `json:"meetingID"`                        // meeting ID (optional; generated if empty)
+	Subject        string   `json:"subject" binding:"required"`       // meeting subject (required)
+	CoverURL       string   `json:"coverURL"`                         // cover image URL
+	ScheduledTime  int64    `json:"scheduledTime" binding:"required"` // scheduled timestamp (milliseconds, required)
+	Description    string   `json:"description"`                      // meeting description
+	Duration       int32    `json:"duration"`                         // meeting duration (minutes)
+	EstimatedCount int32    `json:"estimatedCount"`                   // estimated attendee count
+	EnableMic      bool     `json:"enableMic"`                        // whether mic co-hosting is enabled
+	EnableComment  bool     `json:"enableComment"`                    // whether comments are enabled
+	AnchorUserIDs  []string `json:"anchorUserIDs"`                    // anchor user IDs (multi-select)
+	Password       string   `json:"password"`                         // meeting password (6 digits, optional; generated if empty)
+	Ex             string   `json:"ex"`                               // extension field
+}
+
+// CreateMeetingResp is the response for creating a meeting.
+type CreateMeetingResp struct {
+	MeetingInfo *MeetingInfo `json:"meetingInfo"` // meeting info
+	GroupID     string       `json:"groupID"`     // ID of the group chat created for the meeting
+}
+
+// UpdateMeetingReq is the request for updating a meeting.
+type UpdateMeetingReq struct {
+	MeetingID      string   `json:"meetingID" binding:"required"` // meeting ID (required)
+	Subject        string   `json:"subject"`                      // meeting subject
+	CoverURL       string   `json:"coverURL"`                     // cover image URL
+	ScheduledTime  int64    `json:"scheduledTime"`                // scheduled timestamp (milliseconds)
+	Status         int32    `json:"status"`                       // meeting status: 1 - scheduled, 2 - ongoing, 3 - finished, 4 - cancelled
+	Description    string   `json:"description"`                  // meeting description
+	Duration       int32    `json:"duration"`                     // meeting duration (minutes)
+	EstimatedCount int32    `json:"estimatedCount"`               // estimated attendee count
+	EnableMic      *bool    `json:"enableMic"`                    // whether mic co-hosting is enabled (pointer to distinguish unset)
+	EnableComment  *bool    `json:"enableComment"`                // whether comments are enabled (pointer to distinguish unset)
+	AnchorUserIDs  []string `json:"anchorUserIDs"`                // anchor user IDs (multi-select)
+	Password       *string  `json:"password"`                     // meeting password (6 digits, pointer to distinguish unset)
+	Ex             string   `json:"ex"`                           // extension field
+}
+
+// UpdateMeetingResp is the response for updating a meeting.
+type UpdateMeetingResp struct {
+	MeetingInfo *MeetingInfo `json:"meetingInfo"` // meeting info
+}
+
+// GetMeetingsReq is the request for listing meetings.
+type GetMeetingsReq struct {
+	CreatorUserID string     `json:"creatorUserID"` // creator user ID (optional)
+	Status        int32      `json:"status"`        // meeting status (optional): 1 - scheduled, 2 - ongoing, 3 - finished, 4 - cancelled
+	Keyword       string     `json:"keyword"`       // search keyword (optional; matches subject and description)
+	StartTime     int64      `json:"startTime"`     // start timestamp (milliseconds, optional)
+	EndTime       int64      `json:"endTime"`       // end timestamp (milliseconds, optional)
+	Pagination    Pagination `json:"pagination"`    // pagination parameters
+}
+
+// GetMeetingsResp is the response for listing meetings.
+type GetMeetingsResp struct {
+	Total    int64          `json:"total"`    // total count
+	Meetings []*MeetingInfo `json:"meetings"` // meeting list
+}
+
+// DeleteMeetingReq is the request for deleting a meeting.
+type DeleteMeetingReq struct {
+	MeetingID string `json:"meetingID" binding:"required"` // meeting ID (required)
+}
+
+// DeleteMeetingResp is the response for deleting a meeting.
+type DeleteMeetingResp struct {
+}
+
+// MeetingInfo describes a meeting.
+type MeetingInfo struct {
+	MeetingID      string            `json:"meetingID"`      // Meeting ID
+	Subject        string            `json:"subject"`        // Meeting subject
+	CoverURL       string            `json:"coverURL"`       // Cover URL
+	ScheduledTime  int64             `json:"scheduledTime"`  // Scheduled timestamp (milliseconds)
+	Status         int32             `json:"status"`         // Meeting status: 1-scheduled, 2-in progress, 3-ended, 4-cancelled
+	CreatorUserID  string            `json:"creatorUserID"`  // Creator user ID
+	Description    string            `json:"description"`    // Meeting description
+	Duration       int32             `json:"duration"`       // Meeting duration in minutes
+	EstimatedCount int32             `json:"estimatedCount"` // Estimated number of attendees
+	EnableMic      bool              `json:"enableMic"`      // Whether mic linking is enabled
+	EnableComment  bool              `json:"enableComment"`  // Whether comments are enabled
+	AnchorUserIDs  []string          `json:"anchorUserIDs"`  // Anchor user ID list (multi-select)
+	AnchorUsers    []*sdkws.UserInfo `json:"anchorUsers"`    // Anchor user info list
+	CreateTime     int64             `json:"createTime"`     // Create timestamp (milliseconds)
+	UpdateTime     int64             `json:"updateTime"`     // Update timestamp (milliseconds)
+	Ex             string            `json:"ex"`             // Extension field
+	GroupID        string            `json:"groupID"`        // Associated group chat ID
+	CheckInCount   int32             `json:"checkInCount"`   // Check-in count
+	Password       string            `json:"password"`       // Meeting password (6 digits)
+}
+
+// GetMeetingReq is the request for getting a meeting (client side).
+type GetMeetingReq struct {
+	MeetingID string `json:"meetingID" binding:"required"` // Meeting ID (required)
+}
+
+// GetMeetingResp is the response for getting a meeting (client side).
+type GetMeetingResp struct {
+	MeetingInfo *MeetingPublicInfo `json:"meetingInfo"` // Meeting info
+}
+
+// GetMeetingsPublicReq is the request for listing meetings (client side).
+type GetMeetingsPublicReq struct {
+	Status     int32      `json:"status"`     // Meeting status (optional): 1-scheduled, 2-in progress, 3-ended, 4-cancelled
+	Keyword    string     `json:"keyword"`    // Search keyword (optional; matches subject and description)
+	StartTime  int64      `json:"startTime"`  // Start timestamp in milliseconds (optional)
+	EndTime    int64      `json:"endTime"`    // End timestamp in milliseconds (optional)
+	Pagination Pagination `json:"pagination"` // Pagination parameters
+}
+
+// GetMeetingsPublicResp is the response for listing meetings (client side).
+type GetMeetingsPublicResp struct {
+	Total    int64                `json:"total"`    // Total count
+	Meetings []*MeetingPublicInfo `json:"meetings"` // Meeting list
+}
+
+// MeetingPublicInfo is the public meeting info (client side; admin-only fields filtered out).
+type MeetingPublicInfo struct {
+	MeetingID      string            `json:"meetingID"`      // Meeting ID
+	Subject        string            `json:"subject"`        // Meeting subject
+	CoverURL       string            `json:"coverURL"`       // Cover URL
+	ScheduledTime  int64             `json:"scheduledTime"`  // Scheduled timestamp (milliseconds)
+	Status         int32             `json:"status"`         // Meeting status: 1-scheduled, 2-in progress, 3-ended, 4-cancelled
+	Description    string            `json:"description"`    // Meeting description
+	Duration       int32             `json:"duration"`       // Meeting duration in minutes
+	EstimatedCount int32             `json:"estimatedCount"` // Estimated number of attendees
+	EnableMic      bool              `json:"enableMic"`      // Whether mic linking is enabled
+	EnableComment  bool              `json:"enableComment"`  // Whether comments are enabled
+	AnchorUsers    []*sdkws.UserInfo `json:"anchorUsers"`    // Anchor user info list
+	GroupID        string            `json:"groupID"`        // Associated group chat ID
+	CheckInCount   int32             `json:"checkInCount"`   // Check-in count
+	Password       string            `json:"password"`       // Meeting password (6 digits)
+}
+
+// CheckInMeetingReq is the meeting check-in request.
+type CheckInMeetingReq struct {
+	MeetingID string `json:"meetingID" binding:"required"` // Meeting ID (required)
+}
+
+// CheckInMeetingResp is the meeting check-in response.
+type CheckInMeetingResp struct {
+	CheckInID   string `json:"checkInID"`   // Check-in ID
+	CheckInTime int64  `json:"checkInTime"` // Check-in timestamp (milliseconds)
+}
+
+// GetMeetingCheckInsReq is the request for listing meeting check-ins.
+type GetMeetingCheckInsReq struct {
+	MeetingID  string     `json:"meetingID" binding:"required"` // Meeting ID (required)
+	Pagination Pagination `json:"pagination"`                   // Pagination parameters
+}
+
+// GetMeetingCheckInsResp is the response for listing meeting check-ins.
+type GetMeetingCheckInsResp struct {
+	Total    int64                 `json:"total"`    // Total count
+	CheckIns []*MeetingCheckInInfo `json:"checkIns"` // Check-in list
+}
+
+// MeetingCheckInInfo describes a meeting check-in record.
+type MeetingCheckInInfo struct {
+	CheckInID   string          `json:"checkInID"`   // Check-in ID
+	MeetingID   string          `json:"meetingID"`   // Meeting ID
+	UserID      string          `json:"userID"`      // User ID
+	CheckInTime int64           `json:"checkInTime"` // Check-in timestamp (milliseconds)
+	UserInfo    *sdkws.UserInfo `json:"userInfo"`    // User info
+}
+
+// GetMeetingCheckInStatsReq is the request for meeting check-in statistics.
+type GetMeetingCheckInStatsReq struct {
+	MeetingID string `json:"meetingID" binding:"required"` // Meeting ID (required)
+}
+
+// GetMeetingCheckInStatsResp is the response for meeting check-in statistics.
+type GetMeetingCheckInStatsResp struct {
+	MeetingID    string `json:"meetingID"`    // Meeting ID
+	CheckInCount int64  `json:"checkInCount"` // Number of check-ins
+}
+
+// CheckUserCheckInReq is the request to check whether the current user has checked in.
+type CheckUserCheckInReq struct {
+	MeetingID string `json:"meetingID" binding:"required"` // Meeting ID (required)
+}
+
+// CheckUserCheckInResp is the response to the check-in status query.
+type CheckUserCheckInResp struct {
+	IsCheckedIn bool                `json:"isCheckedIn"`           // Whether the user has checked in
+	CheckInInfo *MeetingCheckInInfo `json:"checkInInfo,omitempty"` // Check-in info (present if checked in)
+}
diff --git a/pkg/apistruct/msg.go b/pkg/apistruct/msg.go
new file mode 100644
index 0000000..8c8f42a
--- /dev/null
+++ b/pkg/apistruct/msg.go
@@ -0,0 +1,186 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package apistruct
+
+import "git.imall.cloud/openim/protocol/sdkws"
+
+type PictureBaseInfo struct {
+ UUID string `mapstructure:"uuid"`
+ Type string `mapstructure:"type" validate:"required"`
+ Size int64 `mapstructure:"size"`
+ Width int32 `mapstructure:"width" validate:"required"`
+ Height int32 `mapstructure:"height" validate:"required"`
+ Url string `mapstructure:"url" validate:"required"`
+}
+
+type PictureElem struct {
+ SourcePath string `mapstructure:"sourcePath"`
+ SourcePicture PictureBaseInfo `mapstructure:"sourcePicture" validate:"required"`
+ BigPicture PictureBaseInfo `mapstructure:"bigPicture" validate:"required"`
+ SnapshotPicture PictureBaseInfo `mapstructure:"snapshotPicture" validate:"required"`
+}
+
+type SoundElem struct {
+ UUID string `mapstructure:"uuid"`
+ SoundPath string `mapstructure:"soundPath"`
+ SourceURL string `mapstructure:"sourceUrl" validate:"required"`
+ DataSize int64 `mapstructure:"dataSize"`
+ Duration int64 `mapstructure:"duration" validate:"required,min=1"`
+}
+
+type VideoElem struct {
+ VideoPath string `mapstructure:"videoPath"`
+ VideoUUID string `mapstructure:"videoUUID"`
+ VideoURL string `mapstructure:"videoUrl" validate:"required"`
+ VideoType string `mapstructure:"videoType" validate:"required"`
+ VideoSize int64 `mapstructure:"videoSize" validate:"required"`
+ Duration int64 `mapstructure:"duration" validate:"required"`
+ SnapshotPath string `mapstructure:"snapshotPath"`
+ SnapshotUUID string `mapstructure:"snapshotUUID"`
+ SnapshotSize int64 `mapstructure:"snapshotSize"`
+ SnapshotURL string `mapstructure:"snapshotUrl" validate:"required"`
+ SnapshotWidth int32 `mapstructure:"snapshotWidth" validate:"required"`
+ SnapshotHeight int32 `mapstructure:"snapshotHeight" validate:"required"`
+}
+
+type FileElem struct {
+ FilePath string `mapstructure:"filePath"`
+ UUID string `mapstructure:"uuid"`
+ SourceURL string `mapstructure:"sourceUrl" validate:"required"`
+ FileName string `mapstructure:"fileName" validate:"required"`
+ FileSize int64 `mapstructure:"fileSize" validate:"required"`
+}
+type AtElem struct {
+ Text string `mapstructure:"text"`
+ AtUserList []string `mapstructure:"atUserList" validate:"required,max=1000"`
+ AtUsersInfo []*AtInfo `json:"atUsersInfo"`
+ QuoteMessage *MsgStruct `json:"quoteMessage"`
+ IsAtSelf bool `mapstructure:"isAtSelf"`
+}
+type LocationElem struct {
+ Description string `mapstructure:"description"`
+ Longitude float64 `mapstructure:"longitude" validate:"required"`
+ Latitude float64 `mapstructure:"latitude" validate:"required"`
+}
+
+type CustomElem struct {
+ Data string `mapstructure:"data" validate:"required"`
+ Description string `mapstructure:"description"`
+ Extension string `mapstructure:"extension"`
+}
+
+type TextElem struct {
+ Content string `json:"content" validate:"required"`
+}
+
+type MarkdownTextElem struct {
+ Content string `mapstructure:"content" validate:"required"`
+}
+
+type StreamMsgElem struct {
+ Type string `mapstructure:"type" validate:"required"`
+ Content string `mapstructure:"content" validate:"required"`
+}
+
+type RevokeElem struct {
+ RevokeMsgClientID string `mapstructure:"revokeMsgClientID" validate:"required"`
+}
+
+type QuoteElem struct {
+ Text string `json:"text,omitempty"`
+ QuoteMessage *MsgStruct `json:"quoteMessage,omitempty"`
+}
+
+type OANotificationElem struct {
+ NotificationName string `mapstructure:"notificationName" json:"notificationName" validate:"required"`
+ NotificationFaceURL string `mapstructure:"notificationFaceURL" json:"notificationFaceURL"`
+ NotificationType int32 `mapstructure:"notificationType" json:"notificationType" validate:"required"`
+ Text string `mapstructure:"text" json:"text" validate:"required"`
+ Url string `mapstructure:"url" json:"url"`
+ MixType int32 `mapstructure:"mixType" json:"mixType" validate:"gte=0,lte=5"`
+ PictureElem *PictureElem `mapstructure:"pictureElem" json:"pictureElem"`
+ SoundElem *SoundElem `mapstructure:"soundElem" json:"soundElem"`
+ VideoElem *VideoElem `mapstructure:"videoElem" json:"videoElem"`
+ FileElem *FileElem `mapstructure:"fileElem" json:"fileElem"`
+ Ex string `mapstructure:"ex" json:"ex"`
+}
+
+type MessageRevoked struct {
+ RevokerID string `mapstructure:"revokerID" json:"revokerID" validate:"required"`
+ RevokerRole int32 `mapstructure:"revokerRole" json:"revokerRole" validate:"required"`
+ ClientMsgID string `mapstructure:"clientMsgID" json:"clientMsgID" validate:"required"`
+ RevokerNickname string `mapstructure:"revokerNickname" json:"revokerNickname"`
+ SessionType int32 `mapstructure:"sessionType" json:"sessionType" validate:"required"`
+ Seq uint32 `mapstructure:"seq" json:"seq" validate:"required"`
+}
+
+type MsgStruct struct {
+ ClientMsgID string `json:"clientMsgID,omitempty"`
+ ServerMsgID string `json:"serverMsgID,omitempty"`
+ CreateTime int64 `json:"createTime"`
+ SendTime int64 `json:"sendTime"`
+ SessionType int32 `json:"sessionType"`
+ SendID string `json:"sendID,omitempty"`
+ RecvID string `json:"recvID,omitempty"`
+ MsgFrom int32 `json:"msgFrom"`
+ ContentType int32 `json:"contentType"`
+ SenderPlatformID int32 `json:"senderPlatformID"`
+ SenderNickname string `json:"senderNickname,omitempty"`
+ SenderFaceURL string `json:"senderFaceUrl,omitempty"`
+ GroupID string `json:"groupID,omitempty"`
+ Content string `json:"content,omitempty"`
+ Seq int64 `json:"seq"`
+ IsRead bool `json:"isRead"`
+ Status int32 `json:"status"`
+ IsReact bool `json:"isReact,omitempty"`
+ IsExternalExtensions bool `json:"isExternalExtensions,omitempty"`
+ OfflinePush *sdkws.OfflinePushInfo `json:"offlinePush,omitempty"`
+ AttachedInfo string `json:"attachedInfo,omitempty"`
+ Ex string `json:"ex,omitempty"`
+ LocalEx string `json:"localEx,omitempty"`
+ TextElem *TextElem `json:"textElem,omitempty"`
+ PictureElem *PictureElem `json:"pictureElem,omitempty"`
+ SoundElem *SoundElem `json:"soundElem,omitempty"`
+ VideoElem *VideoElem `json:"videoElem,omitempty"`
+ FileElem *FileElem `json:"fileElem,omitempty"`
+ AtTextElem *AtElem `json:"atTextElem,omitempty"`
+ LocationElem *LocationElem `json:"locationElem,omitempty"`
+ CustomElem *CustomElem `json:"customElem,omitempty"`
+ QuoteElem *QuoteElem `json:"quoteElem,omitempty"`
+}
+
+type AtInfo struct {
+ AtUserID string `json:"atUserID,omitempty"`
+ GroupNickname string `json:"groupNickname,omitempty"`
+}
+
+// RedPacketReceiveInfo is the red packet claim info (returned once claimed).
+type RedPacketReceiveInfo struct {
+	Amount      int64 `json:"amount"`      // Claimed amount in cents
+	ReceiveTime int64 `json:"receiveTime"` // Claim timestamp (milliseconds)
+	IsLucky     bool  `json:"isLucky"`     // Whether this is the luckiest claim (lucky-draw red packets only)
+}
+
+// RedPacketElem is the red packet message element.
+type RedPacketElem struct {
+	RedPacketID   string                `json:"redPacketID" validate:"required"`   // Red packet ID
+	RedPacketType int32                 `json:"redPacketType" validate:"required"` // Red packet type: 1-normal, 2-lucky draw
+	Blessing      string                `json:"blessing"`                          // Greeting message
+	IsReceived    bool                  `json:"isReceived"`                        // Whether the current user has claimed it
+	ReceiveInfo   *RedPacketReceiveInfo `json:"receiveInfo,omitempty"`             // Claim info (returned once claimed; includes the amount)
+	Status        int32                 `json:"status"`                            // Red packet status: 0-active, 1-fully claimed, 2-expired
+	IsExpired     bool                  `json:"isExpired"`                         // Whether expired
+	IsFinished    bool                  `json:"isFinished"`                        // Whether fully claimed
+}
diff --git a/pkg/apistruct/public.go b/pkg/apistruct/public.go
new file mode 100644
index 0000000..7589b1d
--- /dev/null
+++ b/pkg/apistruct/public.go
@@ -0,0 +1,20 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package apistruct
+
+type GroupAddMemberInfo struct {
+ UserID string `json:"userID" binding:"required"`
+	RoleLevel int32  `json:"roleLevel" binding:"required,oneof=1 3"`
+}
diff --git a/pkg/apistruct/statistics.go b/pkg/apistruct/statistics.go
new file mode 100644
index 0000000..350d8a0
--- /dev/null
+++ b/pkg/apistruct/statistics.go
@@ -0,0 +1,124 @@
+package apistruct
+
+// OnlineUserCountResp is the response for the online user count statistic.
+type OnlineUserCountResp struct {
+	OnlineCount int64 `json:"onlineCount"`
+}
+
+// UserSendMsgCountReq is the request for the total sent-message statistics.
+type UserSendMsgCountReq struct{}
+
+// UserSendMsgCountResp is the response for the total sent-message statistics.
+type UserSendMsgCountResp struct {
+	// Count24h is the total number of messages sent in the last 24 hours.
+	Count24h int64 `json:"count24h"`
+	// Count7d is the total number of messages sent in the last 7 days.
+	Count7d int64 `json:"count7d"`
+	// Count30d is the total number of messages sent in the last 30 days.
+	Count30d int64 `json:"count30d"`
+}
+
+// UserSendMsgQueryReq is the query request for messages sent by a user.
+type UserSendMsgQueryReq struct {
+	UserID     string `json:"userID"`
+	StartTime  int64  `json:"startTime"`
+	EndTime    int64  `json:"endTime"`
+	Content    string `json:"content"`
+	PageNumber int32  `json:"pageNumber"`
+	// ShowNumber is the page size; defaults to 50, capped at 200.
+	ShowNumber int32 `json:"showNumber"`
+}
+
+// UserSendMsgQueryRecord is a single queried sent-message record.
+type UserSendMsgQueryRecord struct {
+	// MsgID is the server-side message ID.
+	MsgID string `json:"msgID"`
+	// SendID is the sender's user ID.
+	SendID string `json:"sendID"`
+	// SenderName is the sender's nickname or name.
+	SenderName string `json:"senderName"`
+	// RecvID is the receiver's ID; for group chats it is the group ID.
+	RecvID string `json:"recvID"`
+	// RecvName is the receiver's nickname or name; for group chats it is the group name.
+	RecvName string `json:"recvName"`
+	// ContentType is the message content type code.
+	ContentType int32 `json:"contentType"`
+	// ContentTypeName is the message content type name.
+	ContentTypeName string `json:"contentTypeName"`
+	// SessionType is the session type code.
+	SessionType int32 `json:"sessionType"`
+	// ChatTypeName is the chat type name.
+	ChatTypeName string `json:"chatTypeName"`
+	// Content is the message content.
+	Content string `json:"content"`
+	// SendTime is the message send time.
+	SendTime int64 `json:"sendTime"`
+}
+
+// UserSendMsgQueryResp is the query response for messages sent by a user.
+type UserSendMsgQueryResp struct {
+	Count      int64                     `json:"count"`
+	PageNumber int32                     `json:"pageNumber"`
+	ShowNumber int32                     `json:"showNumber"`
+	Records    []*UserSendMsgQueryRecord `json:"records"`
+}
+
+// OnlineUserCountTrendReq is the request for the online-user-count trend statistic.
+type OnlineUserCountTrendReq struct {
+	// StartTime is the start of the window (millisecond timestamp); defaults to the last 24 hours when empty.
+	StartTime int64 `json:"startTime"`
+	// EndTime is the end of the window (millisecond timestamp); defaults to now when empty.
+	EndTime int64 `json:"endTime"`
+	// IntervalMinutes is the sampling interval in minutes; only 15/30/60 are supported.
+	IntervalMinutes int32 `json:"intervalMinutes" binding:"required,oneof=15 30 60"`
+}
+
+// OnlineUserCountTrendItem is a single data point in the online-user-count trend.
+type OnlineUserCountTrendItem struct {
+	// Timestamp is the interval start time (millisecond timestamp).
+	Timestamp int64 `json:"timestamp"`
+	// OnlineCount is the average online user count within the interval.
+	OnlineCount int64 `json:"onlineCount"`
+}
+
+// OnlineUserCountTrendResp is the response for the online-user-count trend statistic.
+type OnlineUserCountTrendResp struct {
+	// IntervalMinutes is the sampling interval in minutes.
+	IntervalMinutes int32 `json:"intervalMinutes"`
+	// Points holds the trend data points.
+	Points []*OnlineUserCountTrendItem `json:"points"`
+}
+
+// UserSendMsgCountTrendReq is the request for a user's sent-message trend statistic.
+type UserSendMsgCountTrendReq struct {
+	// UserID is the sender's user ID.
+	UserID string `json:"userID" binding:"required"`
+	// ChatType is the chat type: 1-single chat, 2-group chat.
+	ChatType int32 `json:"chatType" binding:"required,oneof=1 2"`
+	// StartTime is the start of the window (millisecond timestamp); defaults to the last 24 hours when empty.
+	StartTime int64 `json:"startTime"`
+	// EndTime is the end of the window (millisecond timestamp); defaults to now when empty.
+	EndTime int64 `json:"endTime"`
+	// IntervalMinutes is the sampling interval in minutes; only 15/30/60 are supported.
+	IntervalMinutes int32 `json:"intervalMinutes" binding:"required,oneof=15 30 60"`
+}
+
+// UserSendMsgCountTrendItem is a single data point in the sent-message trend.
+type UserSendMsgCountTrendItem struct {
+	// Timestamp is the interval start time (millisecond timestamp).
+	Timestamp int64 `json:"timestamp"`
+	// Count is the number of messages sent within the interval.
+	Count int64 `json:"count"`
+}
+
+// UserSendMsgCountTrendResp is the response for a user's sent-message trend statistic.
+type UserSendMsgCountTrendResp struct {
+	// UserID is the sender's user ID.
+	UserID string `json:"userID"`
+	// ChatType is the chat type: 1-single chat, 2-group chat.
+	ChatType int32 `json:"chatType"`
+	// IntervalMinutes is the sampling interval in minutes.
+	IntervalMinutes int32 `json:"intervalMinutes"`
+	// Points holds the trend data points.
+	Points []*UserSendMsgCountTrendItem `json:"points"`
+}
diff --git a/pkg/authverify/doc.go b/pkg/authverify/doc.go
new file mode 100644
index 0000000..9647c65
--- /dev/null
+++ b/pkg/authverify/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package authverify // import "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
diff --git a/pkg/authverify/token.go b/pkg/authverify/token.go
new file mode 100644
index 0000000..1ce94e4
--- /dev/null
+++ b/pkg/authverify/token.go
@@ -0,0 +1,120 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package authverify
+
+import (
+ "context"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/golang-jwt/jwt/v4"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func Secret(secret string) jwt.Keyfunc {
+ return func(token *jwt.Token) (any, error) {
+ return []byte(secret), nil
+ }
+}
+
+func CheckAdmin(ctx context.Context) error {
+ if IsAdmin(ctx) {
+ return nil
+ }
+ return servererrs.ErrNoPermission.WrapMsg(fmt.Sprintf("user %s is not admin userID", mcontext.GetOpUserID(ctx)))
+}
+
+//func IsManagerUserID(opUserID string, imAdminUserID []string) bool {
+// return datautil.Contain(opUserID, imAdminUserID...)
+//}
+
+func CheckUserIsAdmin(ctx context.Context, userID string) bool {
+ return datautil.Contain(userID, GetIMAdminUserIDs(ctx)...)
+}
+
+func CheckSystemAccount(ctx context.Context, level int32) bool {
+ return level >= constant.AppAdmin
+}
+
+const (
+ CtxAdminUserIDsKey = "CtxAdminUserIDsKey"
+)
+
+func WithIMAdminUserIDs(ctx context.Context, imAdminUserID []string) context.Context {
+ return context.WithValue(ctx, CtxAdminUserIDsKey, imAdminUserID)
+}
+
+func GetIMAdminUserIDs(ctx context.Context) []string {
+ imAdminUserID, _ := ctx.Value(CtxAdminUserIDsKey).([]string)
+ return imAdminUserID
+}
+
+func IsAdmin(ctx context.Context) bool {
+ return IsTempAdmin(ctx) || IsSystemAdmin(ctx)
+}
+
+func CheckAccess(ctx context.Context, ownerUserID string) error {
+ if mcontext.GetOpUserID(ctx) == ownerUserID {
+ return nil
+ }
+ if IsAdmin(ctx) {
+ return nil
+ }
+ return servererrs.ErrNoPermission.WrapMsg("ownerUserID", ownerUserID)
+}
+
+func CheckAccessIn(ctx context.Context, ownerUserIDs ...string) error {
+ opUserID := mcontext.GetOpUserID(ctx)
+ for _, userID := range ownerUserIDs {
+ if opUserID == userID {
+ return nil
+ }
+ }
+ if IsAdmin(ctx) {
+ return nil
+ }
+ return servererrs.ErrNoPermission.WrapMsg("opUser in ownerUserIDs")
+}
+
+var tempAdminValue = []string{"1"}
+
+const ctxTempAdminKey = "ctxImTempAdminKey"
+
+func WithTempAdmin(ctx context.Context) context.Context {
+ keys, _ := ctx.Value(constant.RpcCustomHeader).([]string)
+ if datautil.Contain(ctxTempAdminKey, keys...) {
+ return ctx
+ }
+ if len(keys) > 0 {
+ temp := make([]string, 0, len(keys)+1)
+ temp = append(temp, keys...)
+ keys = append(temp, ctxTempAdminKey)
+ } else {
+ keys = []string{ctxTempAdminKey}
+ }
+ ctx = context.WithValue(ctx, constant.RpcCustomHeader, keys)
+ return context.WithValue(ctx, ctxTempAdminKey, tempAdminValue)
+}
+
+func IsTempAdmin(ctx context.Context) bool {
+ values, _ := ctx.Value(ctxTempAdminKey).([]string)
+ return datautil.Equal(tempAdminValue, values)
+}
+
+func IsSystemAdmin(ctx context.Context) bool {
+ return datautil.Contain(mcontext.GetOpUserID(ctx), GetIMAdminUserIDs(ctx)...)
+}
diff --git a/pkg/callbackstruct/common.go b/pkg/callbackstruct/common.go
new file mode 100644
index 0000000..7ad5522
--- /dev/null
+++ b/pkg/callbackstruct/common.go
@@ -0,0 +1,93 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "github.com/openimsdk/tools/errs"
+)
+
+const (
+ Next = 1
+)
+
+type CommonCallbackReq struct {
+ SendID string `json:"sendID"`
+ CallbackCommand string `json:"callbackCommand"`
+ ServerMsgID string `json:"serverMsgID"`
+ ClientMsgID string `json:"clientMsgID"`
+ OperationID string `json:"operationID"`
+ SenderPlatformID int32 `json:"senderPlatformID"`
+ SenderNickname string `json:"senderNickname"`
+ SessionType int32 `json:"sessionType"`
+ MsgFrom int32 `json:"msgFrom"`
+ ContentType int32 `json:"contentType"`
+ Status int32 `json:"status"`
+ SendTime int64 `json:"sendTime"`
+ CreateTime int64 `json:"createTime"`
+ Content string `json:"content"`
+ Seq uint32 `json:"seq"`
+ AtUserIDList []string `json:"atUserList"`
+ SenderFaceURL string `json:"faceURL"`
+ Ex string `json:"ex"`
+}
+
+func (c *CommonCallbackReq) GetCallbackCommand() string {
+ return c.CallbackCommand
+}
+
+type CallbackReq interface {
+ GetCallbackCommand() string
+}
+
+type CallbackResp interface {
+ Parse() (err error)
+}
+
+type CommonCallbackResp struct {
+ ActionCode int32 `json:"actionCode"`
+ ErrCode int32 `json:"errCode"`
+ ErrMsg string `json:"errMsg"`
+ ErrDlt string `json:"errDlt"`
+ NextCode int32 `json:"nextCode"`
+}
+
+func (c CommonCallbackResp) Parse() error {
+ if c.ActionCode == servererrs.NoError && c.NextCode == Next {
+ return errs.NewCodeError(int(c.ErrCode), c.ErrMsg).WithDetail(c.ErrDlt)
+ }
+ return nil
+}
+
+type UserStatusBaseCallback struct {
+ CallbackCommand string `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ PlatformID int `json:"platformID"`
+ Platform string `json:"platform"`
+}
+
+func (c UserStatusBaseCallback) GetCallbackCommand() string {
+ return c.CallbackCommand
+}
+
+type UserStatusCallbackReq struct {
+ UserStatusBaseCallback
+ UserID string `json:"userID"`
+}
+
+type UserStatusBatchCallbackReq struct {
+ UserStatusBaseCallback
+ UserIDList []string `json:"userIDList"`
+}
diff --git a/pkg/callbackstruct/constant.go b/pkg/callbackstruct/constant.go
new file mode 100644
index 0000000..ef7cb50
--- /dev/null
+++ b/pkg/callbackstruct/constant.go
@@ -0,0 +1,70 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+const (
+ CallbackBeforeInviteJoinGroupCommand = "callbackBeforeInviteJoinGroupCommand"
+ CallbackAfterJoinGroupCommand = "callbackAfterJoinGroupCommand"
+ CallbackAfterSetGroupInfoCommand = "callbackAfterSetGroupInfoCommand"
+ CallbackAfterSetGroupInfoExCommand = "callbackAfterSetGroupInfoExCommand"
+ CallbackBeforeSetGroupInfoCommand = "callbackBeforeSetGroupInfoCommand"
+ CallbackBeforeSetGroupInfoExCommand = "callbackBeforeSetGroupInfoExCommand"
+ CallbackAfterRevokeMsgCommand = "callbackBeforeAfterMsgCommand"
+ CallbackBeforeAddBlackCommand = "callbackBeforeAddBlackCommand"
+ CallbackAfterAddFriendCommand = "callbackAfterAddFriendCommand"
+ CallbackBeforeAddFriendAgreeCommand = "callbackBeforeAddFriendAgreeCommand"
+ CallbackAfterAddFriendAgreeCommand = "callbackAfterAddFriendAgreeCommand"
+ CallbackAfterDeleteFriendCommand = "callbackAfterDeleteFriendCommand"
+ CallbackBeforeImportFriendsCommand = "callbackBeforeImportFriendsCommand"
+ CallbackAfterImportFriendsCommand = "callbackAfterImportFriendsCommand"
+ CallbackAfterRemoveBlackCommand = "callbackAfterRemoveBlackCommand"
+ CallbackAfterQuitGroupCommand = "callbackAfterQuitGroupCommand"
+ CallbackAfterKickGroupCommand = "callbackAfterKickGroupCommand"
+ CallbackAfterDisMissGroupCommand = "callbackAfterDisMissGroupCommand"
+ CallbackBeforeJoinGroupCommand = "callbackBeforeJoinGroupCommand"
+ CallbackAfterGroupMsgReadCommand = "callbackAfterGroupMsgReadCommand"
+ CallbackBeforeMsgModifyCommand = "callbackBeforeMsgModifyCommand"
+ CallbackAfterUpdateUserInfoCommand = "callbackAfterUpdateUserInfoCommand"
+ CallbackAfterUpdateUserInfoExCommand = "callbackAfterUpdateUserInfoExCommand"
+ CallbackBeforeUpdateUserInfoExCommand = "callbackBeforeUpdateUserInfoExCommand"
+ CallbackBeforeUserRegisterCommand = "callbackBeforeUserRegisterCommand"
+ CallbackAfterUserRegisterCommand = "callbackAfterUserRegisterCommand"
+ CallbackAfterTransferGroupOwnerCommand = "callbackAfterTransferGroupOwnerCommand"
+ CallbackBeforeSetFriendRemarkCommand = "callbackBeforeSetFriendRemarkCommand"
+ CallbackAfterSetFriendRemarkCommand = "callbackAfterSetFriendRemarkCommand"
+ CallbackAfterSingleMsgReadCommand = "callbackAfterSingleMsgReadCommand"
+ CallbackBeforeSendSingleMsgCommand = "callbackBeforeSendSingleMsgCommand"
+ CallbackAfterSendSingleMsgCommand = "callbackAfterSendSingleMsgCommand"
+ CallbackBeforeSendGroupMsgCommand = "callbackBeforeSendGroupMsgCommand"
+ CallbackAfterSendGroupMsgCommand = "callbackAfterSendGroupMsgCommand"
+ CallbackAfterUserOnlineCommand = "callbackAfterUserOnlineCommand"
+ CallbackAfterUserOfflineCommand = "callbackAfterUserOfflineCommand"
+ CallbackAfterUserKickOffCommand = "callbackAfterUserKickOffCommand"
+ CallbackBeforeOfflinePushCommand = "callbackBeforeOfflinePushCommand"
+ CallbackBeforeOnlinePushCommand = "callbackBeforeOnlinePushCommand"
+ CallbackBeforeGroupOnlinePushCommand = "callbackBeforeGroupOnlinePushCommand"
+ CallbackBeforeAddFriendCommand = "callbackBeforeAddFriendCommand"
+ CallbackBeforeUpdateUserInfoCommand = "callbackBeforeUpdateUserInfoCommand"
+ CallbackBeforeCreateGroupCommand = "callbackBeforeCreateGroupCommand"
+ CallbackAfterCreateGroupCommand = "callbackAfterCreateGroupCommand"
+ CallbackBeforeMembersJoinGroupCommand = "callbackBeforeMembersJoinGroupCommand"
+ CallbackBeforeSetGroupMemberInfoCommand = "callbackBeforeSetGroupMemberInfoCommand"
+ CallbackAfterSetGroupMemberInfoCommand = "callbackAfterSetGroupMemberInfoCommand"
+ CallbackBeforeCreateSingleChatConversationsCommand = "callbackBeforeCreateSingleChatConversationsCommand"
+ CallbackAfterCreateSingleChatConversationsCommand = "callbackAfterCreateSingleChatConversationsCommand"
+ CallbackBeforeCreateGroupChatConversationsCommand = "callbackBeforeCreateGroupChatConversationsCommand"
+ CallbackAfterCreateGroupChatConversationsCommand = "callbackAfterCreateGroupChatConversationsCommand"
+ CallbackAfterMsgSaveDBCommand = "callbackAfterMsgSaveDBCommand"
+)
diff --git a/pkg/callbackstruct/conversation.go b/pkg/callbackstruct/conversation.go
new file mode 100644
index 0000000..14e7809
--- /dev/null
+++ b/pkg/callbackstruct/conversation.go
@@ -0,0 +1,91 @@
+package callbackstruct
+
+type CallbackBeforeCreateSingleChatConversationsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"owner_user_id"`
+ ConversationID string `json:"conversation_id"`
+ ConversationType int32 `json:"conversation_type"`
+ UserID string `json:"user_id"`
+ RecvMsgOpt int32 `json:"recv_msg_opt"`
+ IsPinned bool `json:"is_pinned"`
+ IsPrivateChat bool `json:"is_private_chat"`
+ BurnDuration int32 `json:"burn_duration"`
+ GroupAtType int32 `json:"group_at_type"`
+ AttachedInfo string `json:"attached_info"`
+ Ex string `json:"ex"`
+}
+
+type CallbackBeforeCreateSingleChatConversationsResp struct {
+ CommonCallbackResp
+ RecvMsgOpt *int32 `json:"recv_msg_opt"`
+ IsPinned *bool `json:"is_pinned"`
+ IsPrivateChat *bool `json:"is_private_chat"`
+ BurnDuration *int32 `json:"burn_duration"`
+ GroupAtType *int32 `json:"group_at_type"`
+ AttachedInfo *string `json:"attached_info"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackAfterCreateSingleChatConversationsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"owner_user_id"`
+ ConversationID string `json:"conversation_id"`
+ ConversationType int32 `json:"conversation_type"`
+ UserID string `json:"user_id"`
+ RecvMsgOpt int32 `json:"recv_msg_opt"`
+ IsPinned bool `json:"is_pinned"`
+ IsPrivateChat bool `json:"is_private_chat"`
+ BurnDuration int32 `json:"burn_duration"`
+ GroupAtType int32 `json:"group_at_type"`
+ AttachedInfo string `json:"attached_info"`
+ Ex string `json:"ex"`
+}
+
+type CallbackAfterCreateSingleChatConversationsResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeCreateGroupChatConversationsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"owner_user_id"`
+ ConversationID string `json:"conversation_id"`
+ ConversationType int32 `json:"conversation_type"`
+ GroupID string `json:"group_id"`
+ RecvMsgOpt int32 `json:"recv_msg_opt"`
+ IsPinned bool `json:"is_pinned"`
+ IsPrivateChat bool `json:"is_private_chat"`
+ BurnDuration int32 `json:"burn_duration"`
+ GroupAtType int32 `json:"group_at_type"`
+ AttachedInfo string `json:"attached_info"`
+ Ex string `json:"ex"`
+}
+
+type CallbackBeforeCreateGroupChatConversationsResp struct {
+ CommonCallbackResp
+ RecvMsgOpt *int32 `json:"recv_msg_opt"`
+ IsPinned *bool `json:"is_pinned"`
+ IsPrivateChat *bool `json:"is_private_chat"`
+ BurnDuration *int32 `json:"burn_duration"`
+ GroupAtType *int32 `json:"group_at_type"`
+ AttachedInfo *string `json:"attached_info"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackAfterCreateGroupChatConversationsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"owner_user_id"`
+ ConversationID string `json:"conversation_id"`
+ ConversationType int32 `json:"conversation_type"`
+ GroupID string `json:"group_id"`
+ RecvMsgOpt int32 `json:"recv_msg_opt"`
+ IsPinned bool `json:"is_pinned"`
+ IsPrivateChat bool `json:"is_private_chat"`
+ BurnDuration int32 `json:"burn_duration"`
+ GroupAtType int32 `json:"group_at_type"`
+ AttachedInfo string `json:"attached_info"`
+ Ex string `json:"ex"`
+}
+
+type CallbackAfterCreateGroupChatConversationsResp struct {
+ CommonCallbackResp
+}
diff --git a/pkg/callbackstruct/doc.go b/pkg/callbackstruct/doc.go
new file mode 100644
index 0000000..b6471b3
--- /dev/null
+++ b/pkg/callbackstruct/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct // import "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
diff --git a/pkg/callbackstruct/friend.go b/pkg/callbackstruct/friend.go
new file mode 100644
index 0000000..a81746b
--- /dev/null
+++ b/pkg/callbackstruct/friend.go
@@ -0,0 +1,139 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+type CallbackBeforeAddFriendReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ FromUserID string `json:"fromUserID"`
+ ToUserID string `json:"toUserID"`
+ ReqMsg string `json:"reqMsg"`
+ Ex string `json:"ex"`
+}
+
+type CallbackBeforeAddFriendResp struct {
+ CommonCallbackResp
+}
+
+type CallBackAddFriendReplyBeforeReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ FromUserID string `json:"fromUserID"`
+ ToUserID string `json:"toUserID"`
+}
+
+type CallBackAddFriendReplyBeforeResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeSetFriendRemarkReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserID string `json:"friendUserID"`
+ Remark string `json:"remark"`
+}
+
+type CallbackBeforeSetFriendRemarkResp struct {
+ CommonCallbackResp
+ Remark string `json:"remark"`
+}
+
+type CallbackAfterSetFriendRemarkReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserID string `json:"friendUserID"`
+ Remark string `json:"remark"`
+}
+
+type CallbackAfterSetFriendRemarkResp struct {
+ CommonCallbackResp
+}
+type CallbackAfterAddFriendReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ FromUserID string `json:"fromUserID"`
+ ToUserID string `json:"toUserID"`
+ ReqMsg string `json:"reqMsg"`
+}
+
+type CallbackAfterAddFriendResp struct {
+ CommonCallbackResp
+}
+type CallbackBeforeAddBlackReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ BlackUserID string `json:"blackUserID"`
+}
+
+type CallbackBeforeAddBlackResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeAddFriendAgreeReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ FromUserID string `json:"fromUserID"`
+ ToUserID string `json:"toUserID"`
+ HandleResult int32 `json:"handleResult"`
+ HandleMsg string `json:"handleMsg"`
+}
+
+type CallbackBeforeAddFriendAgreeResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterAddFriendAgreeReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ FromUserID string `json:"fromUserID"`
+ ToUserID string `json:"toUserID"`
+ HandleResult int32 `json:"handleResult"`
+ HandleMsg string `json:"handleMsg"`
+}
+
+type CallbackAfterAddFriendAgreeResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterDeleteFriendReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserID string `json:"friendUserID"`
+}
+type CallbackAfterDeleteFriendResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeImportFriendsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackBeforeImportFriendsResp struct {
+ CommonCallbackResp
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackAfterImportFriendsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackAfterImportFriendsResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterRemoveBlackReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ BlackUserID string `json:"blackUserID"`
+}
+type CallbackAfterRemoveBlackResp struct {
+ CommonCallbackResp
+}
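The friend callback request types above are plain `encoding/json` DTOs: a webhook receiver decodes the POST body directly into them. A minimal standalone sketch of decoding a before-add-friend payload (the types are re-declared locally here so the snippet compiles on its own; the command string is a hypothetical value, not taken from this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the callbackstruct types above, for a self-contained sketch.
type CallbackCommand string

type CallbackBeforeAddFriendReq struct {
	CallbackCommand `json:"callbackCommand"`
	FromUserID      string `json:"fromUserID"`
	ToUserID        string `json:"toUserID"`
	ReqMsg          string `json:"reqMsg"`
	Ex              string `json:"ex"`
}

// parseBeforeAddFriend decodes a webhook request body into the struct.
func parseBeforeAddFriend(body []byte) (CallbackBeforeAddFriendReq, error) {
	var req CallbackBeforeAddFriendReq
	err := json.Unmarshal(body, &req)
	return req, err
}

func main() {
	body := []byte(`{"callbackCommand":"callbackBeforeAddFriendCommand","fromUserID":"u1","toUserID":"u2","reqMsg":"hi"}`)
	req, err := parseBeforeAddFriend(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(req.CallbackCommand), req.FromUserID, req.ToUserID)
}
```

Note that the embedded `CallbackCommand` carries a JSON tag, so it binds to the `"callbackCommand"` key like an ordinary named field.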
diff --git a/pkg/callbackstruct/group.go b/pkg/callbackstruct/group.go
new file mode 100644
index 0000000..248d018
--- /dev/null
+++ b/pkg/callbackstruct/group.go
@@ -0,0 +1,290 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ common "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+)
+
+type CallbackCommand string
+
+func (c CallbackCommand) GetCallbackCommand() string {
+ return string(c)
+}
+
+type CallbackBeforeCreateGroupReq struct {
+ OperationID string `json:"operationID"`
+ CallbackCommand `json:"callbackCommand"`
+ *common.GroupInfo
+ InitMemberList []*apistruct.GroupAddMemberInfo `json:"initMemberList"`
+}
+
+type CallbackBeforeCreateGroupResp struct {
+ CommonCallbackResp
+ GroupID *string `json:"groupID"`
+ GroupName *string `json:"groupName"`
+ Notification *string `json:"notification"`
+ Introduction *string `json:"introduction"`
+ FaceURL *string `json:"faceURL"`
+ OwnerUserID *string `json:"ownerUserID"`
+ Ex *string `json:"ex"`
+ Status *int32 `json:"status"`
+ CreatorUserID *string `json:"creatorUserID"`
+ GroupType *int32 `json:"groupType"`
+ NeedVerification *int32 `json:"needVerification"`
+ LookMemberInfo *int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend *int32 `json:"applyMemberFriend"`
+}
+
+type CallbackAfterCreateGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ *common.GroupInfo
+ InitMemberList []*apistruct.GroupAddMemberInfo `json:"initMemberList"`
+}
+
+type CallbackAfterCreateGroupResp struct {
+ CommonCallbackResp
+}
+
+type CallbackGroupMember struct {
+ UserID string `json:"userID"`
+ Ex string `json:"ex"`
+}
+
+type CallbackBeforeMembersJoinGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ MembersList []*CallbackGroupMember `json:"memberList"`
+ GroupEx string `json:"groupEx"`
+}
+
+type MemberJoinGroupCallBack struct {
+ UserID *string `json:"userID"`
+ Nickname *string `json:"nickname"`
+ FaceURL *string `json:"faceURL"`
+ RoleLevel *int32 `json:"roleLevel"`
+ MuteEndTime *int64 `json:"muteEndTime"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackBeforeMembersJoinGroupResp struct {
+ CommonCallbackResp
+ MemberCallbackList []*MemberJoinGroupCallBack `json:"memberCallbackList"`
+}
+
+type CallbackBeforeSetGroupMemberInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ UserID string `json:"userID"`
+ Nickname *string `json:"nickName"`
+ FaceURL *string `json:"faceURL"`
+ RoleLevel *int32 `json:"roleLevel"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackBeforeSetGroupMemberInfoResp struct {
+ CommonCallbackResp
+ Ex *string `json:"ex"`
+ Nickname *string `json:"nickName"`
+ FaceURL *string `json:"faceURL"`
+ RoleLevel *int32 `json:"roleLevel"`
+}
+
+type CallbackAfterSetGroupMemberInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ UserID string `json:"userID"`
+ Nickname *string `json:"nickName"`
+ FaceURL *string `json:"faceURL"`
+ RoleLevel *int32 `json:"roleLevel"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackAfterSetGroupMemberInfoResp struct {
+ CommonCallbackResp
+}
+
+type CallbackQuitGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ UserID string `json:"userID"`
+}
+
+type CallbackQuitGroupResp struct {
+ CommonCallbackResp
+}
+
+type CallbackKillGroupMemberReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ KickedUserIDs []string `json:"kickedUserIDs"`
+ Reason string `json:"reason"`
+}
+
+type CallbackKillGroupMemberResp struct {
+ CommonCallbackResp
+}
+
+type CallbackDisMissGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ OwnerID string `json:"ownerID"`
+ GroupType string `json:"groupType"`
+ MembersID []string `json:"membersID"`
+}
+
+type CallbackDisMissGroupResp struct {
+ CommonCallbackResp
+}
+
+type CallbackJoinGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ GroupType string `json:"groupType"`
+ ApplyID string `json:"applyID"`
+ ReqMessage string `json:"reqMessage"`
+ Ex string `json:"ex"`
+}
+
+type CallbackJoinGroupResp struct {
+ CommonCallbackResp
+}
+
+type CallbackTransferGroupOwnerReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ GroupID string `json:"groupID"`
+ OldOwnerUserID string `json:"oldOwnerUserID"`
+ NewOwnerUserID string `json:"newOwnerUserID"`
+}
+
+type CallbackTransferGroupOwnerResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeInviteUserToGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ Reason string `json:"reason"`
+ InvitedUserIDs []string `json:"invitedUserIDs"`
+}
+type CallbackBeforeInviteUserToGroupResp struct {
+ CommonCallbackResp
+ RefusedMembersAccount []string `json:"refusedMembersAccount,omitempty"` // Optional field to list members whose invitation is refused.
+}
+
+type CallbackAfterJoinGroupReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ ReqMessage string `json:"reqMessage"`
+ JoinSource int32 `json:"joinSource"`
+ InviterUserID string `json:"inviterUserID"`
+}
+type CallbackAfterJoinGroupResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeSetGroupInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex string `json:"ex"`
+ NeedVerification int32 `json:"needVerification"`
+ LookMemberInfo int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend int32 `json:"applyMemberFriend"`
+}
+
+type CallbackBeforeSetGroupInfoResp struct {
+ CommonCallbackResp
+ GroupID string `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ NeedVerification *int32 `json:"needVerification"`
+ LookMemberInfo *int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend *int32 `json:"applyMemberFriend"`
+}
+
+type CallbackAfterSetGroupInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ NeedVerification *int32 `json:"needVerification"`
+ LookMemberInfo *int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend *int32 `json:"applyMemberFriend"`
+}
+
+type CallbackAfterSetGroupInfoResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeSetGroupInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName *wrapperspb.StringValue `json:"groupName"`
+ Notification *wrapperspb.StringValue `json:"notification"`
+ Introduction *wrapperspb.StringValue `json:"introduction"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ NeedVerification *wrapperspb.Int32Value `json:"needVerification"`
+ LookMemberInfo *wrapperspb.Int32Value `json:"lookMemberInfo"`
+ ApplyMemberFriend *wrapperspb.Int32Value `json:"applyMemberFriend"`
+}
+
+type CallbackBeforeSetGroupInfoExResp struct {
+ CommonCallbackResp
+ GroupID string `json:"groupID"`
+ GroupName *wrapperspb.StringValue `json:"groupName"`
+ Notification *wrapperspb.StringValue `json:"notification"`
+ Introduction *wrapperspb.StringValue `json:"introduction"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ NeedVerification *wrapperspb.Int32Value `json:"needVerification"`
+ LookMemberInfo *wrapperspb.Int32Value `json:"lookMemberInfo"`
+ ApplyMemberFriend *wrapperspb.Int32Value `json:"applyMemberFriend"`
+}
+
+type CallbackAfterSetGroupInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName *wrapperspb.StringValue `json:"groupName"`
+ Notification *wrapperspb.StringValue `json:"notification"`
+ Introduction *wrapperspb.StringValue `json:"introduction"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ NeedVerification *wrapperspb.Int32Value `json:"needVerification"`
+ LookMemberInfo *wrapperspb.Int32Value `json:"lookMemberInfo"`
+ ApplyMemberFriend *wrapperspb.Int32Value `json:"applyMemberFriend"`
+}
+
+type CallbackAfterSetGroupInfoExResp struct {
+ CommonCallbackResp
+}
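The `*string` / `*int32` fields in the `...Resp` structs above follow a common partial-override pattern: a non-nil pointer means the webhook wants to replace the server's value, while a nil pointer (key absent in the response JSON) means "leave it unchanged". A minimal sketch of that merge logic, under the assumption that this is how the server consumes these responses (the `applyOverride` helper is illustrative, not from this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// groupOverride mirrors the pointer-field style of CallbackBeforeCreateGroupResp.
type groupOverride struct {
	GroupName    *string `json:"groupName"`
	Notification *string `json:"notification"`
}

// applyOverride merges a webhook response into the current values:
// only keys the webhook actually sent are applied.
func applyOverride(current map[string]string, resp []byte) (map[string]string, error) {
	var o groupOverride
	if err := json.Unmarshal(resp, &o); err != nil {
		return nil, err
	}
	if o.GroupName != nil {
		current["groupName"] = *o.GroupName
	}
	if o.Notification != nil {
		current["notification"] = *o.Notification
	}
	return current, nil
}

func main() {
	got, err := applyOverride(
		map[string]string{"groupName": "old", "notification": "keep"},
		[]byte(`{"groupName":"new"}`), // notification omitted, so it is untouched
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(got["groupName"], got["notification"])
}
```

The `wrapperspb` variants in the `...ExReq`/`...ExResp` structs serve the same purpose for protobuf-typed fields.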
diff --git a/pkg/callbackstruct/message.go b/pkg/callbackstruct/message.go
new file mode 100644
index 0000000..6b3e517
--- /dev/null
+++ b/pkg/callbackstruct/message.go
@@ -0,0 +1,115 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+import (
+ sdkws "git.imall.cloud/openim/protocol/sdkws"
+)
+
+type CallbackBeforeSendSingleMsgReq struct {
+ CommonCallbackReq
+ RecvID string `json:"recvID"`
+}
+
+type CallbackBeforeSendSingleMsgResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterSendSingleMsgReq struct {
+ CommonCallbackReq
+ RecvID string `json:"recvID"`
+}
+
+type CallbackAfterSendSingleMsgResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeSendGroupMsgReq struct {
+ CommonCallbackReq
+ GroupID string `json:"groupID"`
+}
+
+type CallbackBeforeSendGroupMsgResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterSendGroupMsgReq struct {
+ CommonCallbackReq
+ GroupID string `json:"groupID"`
+}
+
+type CallbackAfterSendGroupMsgResp struct {
+ CommonCallbackResp
+}
+
+type CallbackMsgModifyCommandReq struct {
+ CommonCallbackReq
+}
+
+type CallbackMsgModifyCommandResp struct {
+ CommonCallbackResp
+ Content *string `json:"content"`
+ RecvID *string `json:"recvID"`
+ GroupID *string `json:"groupID"`
+ ClientMsgID *string `json:"clientMsgID"`
+ ServerMsgID *string `json:"serverMsgID"`
+ SenderPlatformID *int32 `json:"senderPlatformID"`
+ SenderNickname *string `json:"senderNickname"`
+ SenderFaceURL *string `json:"senderFaceURL"`
+ SessionType *int32 `json:"sessionType"`
+ MsgFrom *int32 `json:"msgFrom"`
+ ContentType *int32 `json:"contentType"`
+ Status *int32 `json:"status"`
+ Options *map[string]bool `json:"options"`
+ OfflinePushInfo *sdkws.OfflinePushInfo `json:"offlinePushInfo"`
+ AtUserIDList *[]string `json:"atUserIDList"`
+ MsgDataList *[]byte `json:"msgDataList"`
+ AttachedInfo *string `json:"attachedInfo"`
+ Ex *string `json:"ex"`
+}
+
+type CallbackGroupMsgReadReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ SendID string `json:"sendID"`
+ ReceiveID string `json:"receiveID"`
+ UnreadMsgNum int64 `json:"unreadMsgNum"`
+ ContentType int64 `json:"contentType"`
+}
+
+type CallbackGroupMsgReadResp struct {
+ CommonCallbackResp
+}
+
+type CallbackSingleMsgReadReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ ConversationID string `json:"conversationID"`
+ UserID string `json:"userID"`
+ Seqs []int64 `json:"seqs"`
+ ContentType int32 `json:"contentType"`
+}
+
+type CallbackSingleMsgReadResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterMsgSaveDBReq struct {
+ CommonCallbackReq
+ RecvID string `json:"recvID"`
+ GroupID string `json:"groupID"`
+}
+
+type CallbackAfterMsgSaveDBResp struct {
+ CommonCallbackResp
+}
diff --git a/pkg/callbackstruct/msg_gateway.go b/pkg/callbackstruct/msg_gateway.go
new file mode 100644
index 0000000..ef98c40
--- /dev/null
+++ b/pkg/callbackstruct/msg_gateway.go
@@ -0,0 +1,46 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+type CallbackUserOnlineReq struct {
+ UserStatusCallbackReq
+ // Token string `json:"token"`
+ Seq int64 `json:"seq"`
+ IsAppBackground bool `json:"isAppBackground"`
+ ConnID string `json:"connID"`
+}
+
+type CallbackUserOnlineResp struct {
+ CommonCallbackResp
+}
+
+type CallbackUserOfflineReq struct {
+ UserStatusCallbackReq
+ Seq int64 `json:"seq"`
+ ConnID string `json:"connID"`
+}
+
+type CallbackUserOfflineResp struct {
+ CommonCallbackResp
+}
+
+type CallbackUserKickOffReq struct {
+ UserStatusCallbackReq
+ Seq int64 `json:"seq"`
+}
+
+type CallbackUserKickOffResp struct {
+ CommonCallbackResp
+}
diff --git a/pkg/callbackstruct/push.go b/pkg/callbackstruct/push.go
new file mode 100644
index 0000000..8b8f448
--- /dev/null
+++ b/pkg/callbackstruct/push.go
@@ -0,0 +1,53 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+import common "git.imall.cloud/openim/protocol/sdkws"
+
+type CallbackBeforePushReq struct {
+ UserStatusBatchCallbackReq
+ *common.OfflinePushInfo
+ ClientMsgID string `json:"clientMsgID"`
+ SendID string `json:"sendID"`
+ GroupID string `json:"groupID"`
+ ContentType int32 `json:"contentType"`
+ SessionType int32 `json:"sessionType"`
+ AtUserIDs []string `json:"atUserIDList"`
+ Content string `json:"content"`
+}
+
+type CallbackBeforePushResp struct {
+ CommonCallbackResp
+ UserIDs []string `json:"userIDList"`
+ OfflinePushInfo *common.OfflinePushInfo `json:"offlinePushInfo"`
+}
+
+type CallbackBeforeSuperGroupOnlinePushReq struct {
+ UserStatusBaseCallback
+ ClientMsgID string `json:"clientMsgID"`
+ SendID string `json:"sendID"`
+ GroupID string `json:"groupID"`
+ ContentType int32 `json:"contentType"`
+ SessionType int32 `json:"sessionType"`
+ AtUserIDs []string `json:"atUserIDList"`
+ Content string `json:"content"`
+ Seq int64 `json:"seq"`
+}
+
+type CallbackBeforeSuperGroupOnlinePushResp struct {
+ CommonCallbackResp
+ UserIDs []string `json:"userIDList"`
+ OfflinePushInfo *common.OfflinePushInfo `json:"offlinePushInfo"`
+}
diff --git a/pkg/callbackstruct/revoke.go b/pkg/callbackstruct/revoke.go
new file mode 100644
index 0000000..b36985e
--- /dev/null
+++ b/pkg/callbackstruct/revoke.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+type CallbackAfterRevokeMsgReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ ConversationID string `json:"conversationID"`
+ Seq int64 `json:"seq"`
+ UserID string `json:"userID"`
+}
+
+type CallbackAfterRevokeMsgResp struct {
+ CommonCallbackResp
+}
diff --git a/pkg/callbackstruct/user.go b/pkg/callbackstruct/user.go
new file mode 100644
index 0000000..20517e9
--- /dev/null
+++ b/pkg/callbackstruct/user.go
@@ -0,0 +1,102 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
+import (
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/wrapperspb"
+)
+
+type CallbackBeforeUpdateUserInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname *string `json:"nickName"`
+ FaceURL *string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ UserType *int32 `json:"userType"`
+ UserFlag *string `json:"userFlag"`
+}
+
+type CallbackBeforeUpdateUserInfoResp struct {
+ CommonCallbackResp
+ Nickname *string `json:"nickName"`
+ FaceURL *string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ UserType *int32 `json:"userType"`
+ UserFlag *string `json:"userFlag"`
+}
+
+type CallbackAfterUpdateUserInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname string `json:"nickName"`
+ FaceURL string `json:"faceURL"`
+ Ex string `json:"ex"`
+ UserType int32 `json:"userType"`
+ UserFlag string `json:"userFlag"`
+}
+type CallbackAfterUpdateUserInfoResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeUpdateUserInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ UserType *wrapperspb.Int32Value `json:"userType"`
+ UserFlag *wrapperspb.StringValue `json:"userFlag"`
+}
+type CallbackBeforeUpdateUserInfoExResp struct {
+ CommonCallbackResp
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ UserType *wrapperspb.Int32Value `json:"userType"`
+ UserFlag *wrapperspb.StringValue `json:"userFlag"`
+}
+
+type CallbackAfterUpdateUserInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+ UserType *wrapperspb.Int32Value `json:"userType"`
+ UserFlag *wrapperspb.StringValue `json:"userFlag"`
+}
+type CallbackAfterUpdateUserInfoExResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeUserRegisterReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ Users []*sdkws.UserInfo `json:"users"`
+}
+
+type CallbackBeforeUserRegisterResp struct {
+ CommonCallbackResp
+ Users []*sdkws.UserInfo `json:"users"`
+}
+
+type CallbackAfterUserRegisterReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ Users []*sdkws.UserInfo `json:"users"`
+}
+
+type CallbackAfterUserRegisterResp struct {
+ CommonCallbackResp
+}
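All of the request structs above embed `CallbackCommand` (a named string type) with an explicit JSON tag. This relies on a specific `encoding/json` rule: a tagged embedded field is treated as an ordinary named field, so the command serializes under the `"callbackCommand"` key instead of being flattened away. A minimal sketch of that behavior (the command string is a placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type CallbackCommand string

// req mirrors the embedding pattern used throughout callbackstruct.
type req struct {
	CallbackCommand `json:"callbackCommand"`
	UserID          string `json:"userID"`
}

// encode marshals a request the way the webhook client would.
func encode(r req) (string, error) {
	b, err := json.Marshal(r)
	return string(b), err
}

func main() {
	s, err := encode(req{CallbackCommand: "callbackBeforeUserRegisterCommand", UserID: "u1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"callbackCommand":"callbackBeforeUserRegisterCommand","userID":"u1"}
}
```

Without the tag, the embedded string would be promoted and emitted under its type name (`"CallbackCommand"`), which is why the tag appears on every embedding site.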
diff --git a/pkg/common/cmd/api.go b/pkg/common/cmd/api.go
new file mode 100644
index 0000000..2596415
--- /dev/null
+++ b/pkg/common/cmd/api.go
@@ -0,0 +1,97 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/api"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type ApiCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ apiConfig *api.Config
+}
+
+func NewApiCmd() *ApiCmd {
+ var apiConfig api.Config
+ ret := &ApiCmd{apiConfig: &apiConfig}
+ ret.configMap = map[string]any{
+ config.DiscoveryConfigFilename: &apiConfig.Discovery,
+ config.KafkaConfigFileName: &apiConfig.Kafka,
+ config.LocalCacheConfigFileName: &apiConfig.LocalCache,
+ config.LogConfigFileName: &apiConfig.Log,
+ config.MinioConfigFileName: &apiConfig.Minio,
+ config.MongodbConfigFileName: &apiConfig.Mongo,
+ config.NotificationFileName: &apiConfig.Notification,
+ config.OpenIMAPICfgFileName: &apiConfig.API,
+ config.OpenIMCronTaskCfgFileName: &apiConfig.CronTask,
+ config.OpenIMMsgGatewayCfgFileName: &apiConfig.MsgGateway,
+ config.OpenIMMsgTransferCfgFileName: &apiConfig.MsgTransfer,
+ config.OpenIMPushCfgFileName: &apiConfig.Push,
+ config.OpenIMRPCAuthCfgFileName: &apiConfig.Auth,
+ config.OpenIMRPCConversationCfgFileName: &apiConfig.Conversation,
+ config.OpenIMRPCFriendCfgFileName: &apiConfig.Friend,
+ config.OpenIMRPCGroupCfgFileName: &apiConfig.Group,
+ config.OpenIMRPCMsgCfgFileName: &apiConfig.Msg,
+ config.OpenIMRPCThirdCfgFileName: &apiConfig.Third,
+ config.OpenIMRPCUserCfgFileName: &apiConfig.User,
+ config.RedisConfigFileName: &apiConfig.Redis,
+ config.ShareFileName: &apiConfig.Share,
+ config.WebhooksConfigFileName: &apiConfig.Webhooks,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ apiConfig.ConfigPath = config.Path(ret.configPath)
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *ApiCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *ApiCmd) runE() error {
+ a.apiConfig.Index = config.Index(a.Index())
+ prometheus := config.Prometheus{
+ Enable: a.apiConfig.API.Prometheus.Enable,
+ Ports: a.apiConfig.API.Prometheus.Ports,
+ }
+ return startrpc.Start(
+ a.ctx, &a.apiConfig.Discovery,
+ nil, // circuitBreakerConfig - API doesn't use circuit breaker
+ nil, // rateLimiterConfig - API uses its own rate limiter middleware
+ &prometheus,
+ a.apiConfig.API.Api.ListenIP, "",
+ a.apiConfig.API.Prometheus.AutoSetPorts,
+ nil, int(a.apiConfig.Index),
+ prommetrics.APIKeyName,
+ &a.apiConfig.Notification,
+ a.apiConfig,
+ []string{},
+ []string{},
+ api.Start,
+ )
+}
diff --git a/pkg/common/cmd/auth.go b/pkg/common/cmd/auth.go
new file mode 100644
index 0000000..614a542
--- /dev/null
+++ b/pkg/common/cmd/auth.go
@@ -0,0 +1,73 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/auth"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type AuthRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ authConfig *auth.Config
+}
+
+func NewAuthRpcCmd() *AuthRpcCmd {
+ var authConfig auth.Config
+ ret := &AuthRpcCmd{authConfig: &authConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCAuthCfgFileName: &authConfig.RpcConfig,
+ config.RedisConfigFileName: &authConfig.RedisConfig,
+ config.MongodbConfigFileName: &authConfig.MongoConfig,
+ config.ShareFileName: &authConfig.Share,
+ config.LocalCacheConfigFileName: &authConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &authConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+
+ return ret
+}
+
+func (a *AuthRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *AuthRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.authConfig.Discovery, &a.authConfig.RpcConfig.CircuitBreaker, &a.authConfig.RpcConfig.RateLimiter, &a.authConfig.RpcConfig.Prometheus, a.authConfig.RpcConfig.RPC.ListenIP,
+ a.authConfig.RpcConfig.RPC.RegisterIP, a.authConfig.RpcConfig.RPC.AutoSetPorts, a.authConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.authConfig.Discovery.RpcService.Auth, nil, a.authConfig,
+ []string{
+ a.authConfig.RpcConfig.GetConfigFileName(),
+ a.authConfig.Share.GetConfigFileName(),
+ a.authConfig.RedisConfig.GetConfigFileName(),
+ a.authConfig.Discovery.GetConfigFileName(),
+ },
+ []string{
+ a.authConfig.Discovery.RpcService.MessageGateway,
+ },
+ auth.Start)
+}
diff --git a/pkg/common/cmd/conversation.go b/pkg/common/cmd/conversation.go
new file mode 100644
index 0000000..0cd0221
--- /dev/null
+++ b/pkg/common/cmd/conversation.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/conversation"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type ConversationRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ conversationConfig *conversation.Config
+}
+
+func NewConversationRpcCmd() *ConversationRpcCmd {
+ var conversationConfig conversation.Config
+ ret := &ConversationRpcCmd{conversationConfig: &conversationConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCConversationCfgFileName: &conversationConfig.RpcConfig,
+ config.RedisConfigFileName: &conversationConfig.RedisConfig,
+ config.MongodbConfigFileName: &conversationConfig.MongodbConfig,
+ config.ShareFileName: &conversationConfig.Share,
+ config.NotificationFileName: &conversationConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &conversationConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &conversationConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &conversationConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *ConversationRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *ConversationRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.conversationConfig.Discovery, &a.conversationConfig.RpcConfig.CircuitBreaker, &a.conversationConfig.RpcConfig.RateLimiter, &a.conversationConfig.RpcConfig.Prometheus, a.conversationConfig.RpcConfig.RPC.ListenIP,
+ a.conversationConfig.RpcConfig.RPC.RegisterIP, a.conversationConfig.RpcConfig.RPC.AutoSetPorts, a.conversationConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.conversationConfig.Discovery.RpcService.Conversation, &a.conversationConfig.NotificationConfig, a.conversationConfig,
+ []string{
+ a.conversationConfig.RpcConfig.GetConfigFileName(),
+ a.conversationConfig.RedisConfig.GetConfigFileName(),
+ a.conversationConfig.MongodbConfig.GetConfigFileName(),
+ a.conversationConfig.NotificationConfig.GetConfigFileName(),
+ a.conversationConfig.Share.GetConfigFileName(),
+ a.conversationConfig.LocalCacheConfig.GetConfigFileName(),
+ a.conversationConfig.WebhooksConfig.GetConfigFileName(),
+ a.conversationConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ conversation.Start)
+}
diff --git a/pkg/common/cmd/cron_task.go b/pkg/common/cmd/cron_task.go
new file mode 100644
index 0000000..8c5ff80
--- /dev/null
+++ b/pkg/common/cmd/cron_task.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/tools/cron"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type CronTaskCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ cronTaskConfig *cron.Config
+}
+
+func NewCronTaskCmd() *CronTaskCmd {
+ var cronTaskConfig cron.Config
+ ret := &CronTaskCmd{cronTaskConfig: &cronTaskConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMCronTaskCfgFileName: &cronTaskConfig.CronTask,
+ config.ShareFileName: &cronTaskConfig.Share,
+ config.DiscoveryConfigFilename: &cronTaskConfig.Discovery,
+ config.MongodbConfigFileName: &cronTaskConfig.Mongo,
+ config.RedisConfigFileName: &cronTaskConfig.Redis,
+ config.NotificationFileName: &cronTaskConfig.Notification,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *CronTaskCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *CronTaskCmd) runE() error {
+ var prometheus config.Prometheus
+ return startrpc.Start(
+ a.ctx, &a.cronTaskConfig.Discovery,
+ nil,
+ nil,
+ &prometheus,
+ "", "",
+ true,
+ nil, 0,
+ "",
+ nil,
+ a.cronTaskConfig,
+ []string{},
+ []string{},
+ cron.Start,
+ )
+}
diff --git a/pkg/common/cmd/doc.go b/pkg/common/cmd/doc.go
new file mode 100644
index 0000000..f97bfb1
--- /dev/null
+++ b/pkg/common/cmd/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/cmd"
diff --git a/pkg/common/cmd/friend.go b/pkg/common/cmd/friend.go
new file mode 100644
index 0000000..8b7ea12
--- /dev/null
+++ b/pkg/common/cmd/friend.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/relation"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type FriendRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ relationConfig *relation.Config
+}
+
+func NewFriendRpcCmd() *FriendRpcCmd {
+ var relationConfig relation.Config
+ ret := &FriendRpcCmd{relationConfig: &relationConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCFriendCfgFileName: &relationConfig.RpcConfig,
+ config.RedisConfigFileName: &relationConfig.RedisConfig,
+ config.MongodbConfigFileName: &relationConfig.MongodbConfig,
+ config.ShareFileName: &relationConfig.Share,
+ config.NotificationFileName: &relationConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &relationConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &relationConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &relationConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *FriendRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *FriendRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.relationConfig.Discovery, &a.relationConfig.RpcConfig.CircuitBreaker, &a.relationConfig.RpcConfig.RateLimiter, &a.relationConfig.RpcConfig.Prometheus, a.relationConfig.RpcConfig.RPC.ListenIP,
+ a.relationConfig.RpcConfig.RPC.RegisterIP, a.relationConfig.RpcConfig.RPC.AutoSetPorts, a.relationConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.relationConfig.Discovery.RpcService.Friend, &a.relationConfig.NotificationConfig, a.relationConfig,
+ []string{
+ a.relationConfig.RpcConfig.GetConfigFileName(),
+ a.relationConfig.RedisConfig.GetConfigFileName(),
+ a.relationConfig.MongodbConfig.GetConfigFileName(),
+ a.relationConfig.NotificationConfig.GetConfigFileName(),
+ a.relationConfig.Share.GetConfigFileName(),
+ a.relationConfig.WebhooksConfig.GetConfigFileName(),
+ a.relationConfig.LocalCacheConfig.GetConfigFileName(),
+ a.relationConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ relation.Start)
+}
diff --git a/pkg/common/cmd/group.go b/pkg/common/cmd/group.go
new file mode 100644
index 0000000..b41231e
--- /dev/null
+++ b/pkg/common/cmd/group.go
@@ -0,0 +1,76 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/group"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/versionctx"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type GroupRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ groupConfig *group.Config
+}
+
+func NewGroupRpcCmd() *GroupRpcCmd {
+ var groupConfig group.Config
+ ret := &GroupRpcCmd{groupConfig: &groupConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCGroupCfgFileName: &groupConfig.RpcConfig,
+ config.RedisConfigFileName: &groupConfig.RedisConfig,
+ config.MongodbConfigFileName: &groupConfig.MongodbConfig,
+ config.ShareFileName: &groupConfig.Share,
+ config.NotificationFileName: &groupConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &groupConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &groupConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &groupConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *GroupRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *GroupRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.groupConfig.Discovery, &a.groupConfig.RpcConfig.CircuitBreaker, &a.groupConfig.RpcConfig.RateLimiter, &a.groupConfig.RpcConfig.Prometheus, a.groupConfig.RpcConfig.RPC.ListenIP,
+ a.groupConfig.RpcConfig.RPC.RegisterIP, a.groupConfig.RpcConfig.RPC.AutoSetPorts, a.groupConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.groupConfig.Discovery.RpcService.Group, &a.groupConfig.NotificationConfig, a.groupConfig,
+ []string{
+ a.groupConfig.RpcConfig.GetConfigFileName(),
+ a.groupConfig.RedisConfig.GetConfigFileName(),
+ a.groupConfig.MongodbConfig.GetConfigFileName(),
+ a.groupConfig.NotificationConfig.GetConfigFileName(),
+ a.groupConfig.Share.GetConfigFileName(),
+ a.groupConfig.WebhooksConfig.GetConfigFileName(),
+ a.groupConfig.LocalCacheConfig.GetConfigFileName(),
+ a.groupConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ group.Start, versionctx.EnableVersionCtx())
+}
diff --git a/pkg/common/cmd/msg.go b/pkg/common/cmd/msg.go
new file mode 100644
index 0000000..65e3977
--- /dev/null
+++ b/pkg/common/cmd/msg.go
@@ -0,0 +1,77 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/msg"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type MsgRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ msgConfig *msg.Config
+}
+
+func NewMsgRpcCmd() *MsgRpcCmd {
+ var msgConfig msg.Config
+ ret := &MsgRpcCmd{msgConfig: &msgConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCMsgCfgFileName: &msgConfig.RpcConfig,
+ config.RedisConfigFileName: &msgConfig.RedisConfig,
+ config.MongodbConfigFileName: &msgConfig.MongodbConfig,
+ config.KafkaConfigFileName: &msgConfig.KafkaConfig,
+ config.ShareFileName: &msgConfig.Share,
+ config.NotificationFileName: &msgConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &msgConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &msgConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &msgConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *MsgRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *MsgRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.msgConfig.Discovery, &a.msgConfig.RpcConfig.CircuitBreaker, &a.msgConfig.RpcConfig.RateLimiter, &a.msgConfig.RpcConfig.Prometheus, a.msgConfig.RpcConfig.RPC.ListenIP,
+ a.msgConfig.RpcConfig.RPC.RegisterIP, a.msgConfig.RpcConfig.RPC.AutoSetPorts, a.msgConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.msgConfig.Discovery.RpcService.Msg, &a.msgConfig.NotificationConfig, a.msgConfig,
+ []string{
+ a.msgConfig.RpcConfig.GetConfigFileName(),
+ a.msgConfig.RedisConfig.GetConfigFileName(),
+ a.msgConfig.MongodbConfig.GetConfigFileName(),
+ a.msgConfig.KafkaConfig.GetConfigFileName(),
+ a.msgConfig.NotificationConfig.GetConfigFileName(),
+ a.msgConfig.Share.GetConfigFileName(),
+ a.msgConfig.WebhooksConfig.GetConfigFileName(),
+ a.msgConfig.LocalCacheConfig.GetConfigFileName(),
+ a.msgConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ msg.Start)
+}
diff --git a/pkg/common/cmd/msg_gateway.go b/pkg/common/cmd/msg_gateway.go
new file mode 100644
index 0000000..a78a49c
--- /dev/null
+++ b/pkg/common/cmd/msg_gateway.go
@@ -0,0 +1,105 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/msggateway"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type MsgGatewayCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ msgGatewayConfig *msggateway.Config
+}
+
+func NewMsgGatewayCmd() *MsgGatewayCmd {
+ var msgGatewayConfig msggateway.Config
+ ret := &MsgGatewayCmd{msgGatewayConfig: &msgGatewayConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMMsgGatewayCfgFileName: &msgGatewayConfig.MsgGateway,
+ config.ShareFileName: &msgGatewayConfig.Share,
+ config.RedisConfigFileName: &msgGatewayConfig.RedisConfig,
+ config.WebhooksConfigFileName: &msgGatewayConfig.WebhooksConfig,
+ config.DiscoveryConfigFilename: &msgGatewayConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (m *MsgGatewayCmd) Exec() error {
+ return m.Execute()
+}
+
+func (m *MsgGatewayCmd) runE() error {
+ m.msgGatewayConfig.Index = config.Index(m.Index())
+ rpc := m.msgGatewayConfig.MsgGateway.RPC
+	// Read the Prometheus settings from config to avoid an index-out-of-range panic on an empty config.
+ prometheus := m.msgGatewayConfig.MsgGateway.Prometheus
+
+ b, err := json.Marshal(prometheus)
+ if err != nil {
+ return err
+ }
+ fmt.Println(string(b))
+ log.CInfo(m.ctx, "prometheus", "prometheus", string(b))
+	// Debug log: print the key startup parameters.
+ log.CInfo(m.ctx, "msg-gateway starting",
+ "autoSetPorts", rpc.AutoSetPorts,
+ "rpcPorts", rpc.Ports,
+ "prometheusEnable", prometheus.Enable,
+ "prometheusPorts", prometheus.Ports,
+ "index", int(m.msgGatewayConfig.Index),
+ "listenIP", rpc.ListenIP,
+ "registerIP", rpc.RegisterIP,
+ )
+
+ if !rpc.AutoSetPorts && (len(rpc.Ports) == 0) {
+ log.ZWarn(m.ctx, "rpc ports is empty while autoSetPorts=false", nil)
+ }
+ if prometheus.Enable && len(prometheus.Ports) == 0 {
+ log.ZWarn(m.ctx, "prometheus enabled but ports is empty", nil)
+ }
+ return startrpc.Start(
+ m.ctx, &m.msgGatewayConfig.Discovery,
+ &m.msgGatewayConfig.MsgGateway.CircuitBreaker,
+ &m.msgGatewayConfig.MsgGateway.RateLimiter,
+ &prometheus,
+ rpc.ListenIP, rpc.RegisterIP,
+ rpc.AutoSetPorts,
+ rpc.Ports, int(m.msgGatewayConfig.Index),
+ m.msgGatewayConfig.Discovery.RpcService.MessageGateway,
+ nil,
+ m.msgGatewayConfig,
+ []string{},
+ []string{},
+ msggateway.Start,
+ )
+}
diff --git a/pkg/common/cmd/msg_gateway_test.go b/pkg/common/cmd/msg_gateway_test.go
new file mode 100644
index 0000000..0ebe6f6
--- /dev/null
+++ b/pkg/common/cmd/msg_gateway_test.go
@@ -0,0 +1,69 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "math"
+ "testing"
+
+ "git.imall.cloud/openim/protocol/auth"
+ "github.com/openimsdk/tools/apiresp"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/stretchr/testify/mock"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+)
+
+// MockRootCmd is a mock type for the RootCmd type
+type MockRootCmd struct {
+ mock.Mock
+}
+
+func (m *MockRootCmd) Execute() error {
+ args := m.Called()
+ return args.Error(0)
+}
+
+func TestName(t *testing.T) {
+ resp := &apiresp.ApiResponse{
+ ErrCode: 1234,
+ ErrMsg: "test",
+ ErrDlt: "4567",
+ Data: &auth.GetUserTokenResp{
+ Token: "1234567",
+ ExpireTimeSeconds: math.MaxInt64,
+ },
+ }
+	data, err := resp.MarshalJSON()
+	if err != nil {
+		t.Fatal(err)
+	}
+ t.Log(string(data))
+
+	var rResp apiresp.ApiResponse
+	rResp.Data = &auth.GetUserTokenResp{}
+
+	if err := jsonutil.JsonUnmarshal(data, &rResp); err != nil {
+		t.Fatal(err)
+	}
+
+	t.Logf("%+v\n", rResp)
+
+}
+
+func TestName1(t *testing.T) {
+ t.Log(primitive.NewObjectID().String())
+ t.Log(primitive.NewObjectID().Hex())
+
+}
diff --git a/pkg/common/cmd/msg_transfer.go b/pkg/common/cmd/msg_transfer.go
new file mode 100644
index 0000000..78f7ba3
--- /dev/null
+++ b/pkg/common/cmd/msg_transfer.go
@@ -0,0 +1,78 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/msgtransfer"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type MsgTransferCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ msgTransferConfig *msgtransfer.Config
+}
+
+func NewMsgTransferCmd() *MsgTransferCmd {
+ var msgTransferConfig msgtransfer.Config
+ ret := &MsgTransferCmd{msgTransferConfig: &msgTransferConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMMsgTransferCfgFileName: &msgTransferConfig.MsgTransfer,
+ config.RedisConfigFileName: &msgTransferConfig.RedisConfig,
+ config.MongodbConfigFileName: &msgTransferConfig.MongodbConfig,
+ config.KafkaConfigFileName: &msgTransferConfig.KafkaConfig,
+ config.ShareFileName: &msgTransferConfig.Share,
+ config.WebhooksConfigFileName: &msgTransferConfig.WebhooksConfig,
+ config.DiscoveryConfigFilename: &msgTransferConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (m *MsgTransferCmd) Exec() error {
+ return m.Execute()
+}
+
+func (m *MsgTransferCmd) runE() error {
+ m.msgTransferConfig.Index = config.Index(m.Index())
+ var prometheus config.Prometheus
+ return startrpc.Start(
+ m.ctx, &m.msgTransferConfig.Discovery,
+ &m.msgTransferConfig.MsgTransfer.CircuitBreaker,
+ &m.msgTransferConfig.MsgTransfer.RateLimiter,
+ &prometheus,
+ "", "",
+ true,
+ nil, int(m.msgTransferConfig.Index),
+ prommetrics.MessageTransferKeyName,
+ nil,
+ m.msgTransferConfig,
+ []string{},
+ []string{},
+ msgtransfer.Start,
+ )
+}
diff --git a/pkg/common/cmd/msg_utils.go b/pkg/common/cmd/msg_utils.go
new file mode 100644
index 0000000..f807c03
--- /dev/null
+++ b/pkg/common/cmd/msg_utils.go
@@ -0,0 +1,171 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/spf13/cobra"
+)
+
+type MsgUtilsCmd struct {
+ cobra.Command
+}
+
+func (m *MsgUtilsCmd) AddUserIDFlag() {
+ m.Command.PersistentFlags().StringP("userID", "u", "", "openIM userID")
+}
+
+func (m *MsgUtilsCmd) AddIndexFlag() {
+	m.Command.PersistentFlags().IntP(config.FlagTransferIndex, "i", 0, "process startup sequence number")
+}
+
+func (m *MsgUtilsCmd) AddConfigDirFlag() {
+	m.Command.PersistentFlags().StringP(config.FlagConf, "c", "", "path of config directory")
+}
+
+func (m *MsgUtilsCmd) getUserIDFlag(cmdLines *cobra.Command) string {
+ userID, _ := cmdLines.Flags().GetString("userID")
+ return userID
+}
+
+func (m *MsgUtilsCmd) AddFixAllFlag() {
+ m.Command.PersistentFlags().BoolP("fixAll", "f", false, "openIM fix all seqs")
+}
+
+/* func (m *MsgUtilsCmd) getFixAllFlag(cmdLines *cobra.Command) bool {
+ fixAll, _ := cmdLines.Flags().GetBool("fixAll")
+ return fixAll
+} */
+
+func (m *MsgUtilsCmd) AddClearAllFlag() {
+ m.Command.PersistentFlags().BoolP("clearAll", "", false, "openIM clear all seqs")
+}
+
+/* func (m *MsgUtilsCmd) getClearAllFlag(cmdLines *cobra.Command) bool {
+ clearAll, _ := cmdLines.Flags().GetBool("clearAll")
+ return clearAll
+} */
+
+func (m *MsgUtilsCmd) AddSuperGroupIDFlag() {
+ m.Command.PersistentFlags().StringP("superGroupID", "g", "", "openIM superGroupID")
+}
+
+func (m *MsgUtilsCmd) getSuperGroupIDFlag(cmdLines *cobra.Command) string {
+ superGroupID, _ := cmdLines.Flags().GetString("superGroupID")
+ return superGroupID
+}
+
+func (m *MsgUtilsCmd) AddBeginSeqFlag() {
+ m.Command.PersistentFlags().Int64P("beginSeq", "b", 0, "openIM beginSeq")
+}
+
+/* func (m *MsgUtilsCmd) getBeginSeqFlag(cmdLines *cobra.Command) int64 {
+ beginSeq, _ := cmdLines.Flags().GetInt64("beginSeq")
+ return beginSeq
+} */
+
+func (m *MsgUtilsCmd) AddLimitFlag() {
+ m.Command.PersistentFlags().Int64P("limit", "l", 0, "openIM limit")
+}
+
+/* func (m *MsgUtilsCmd) getLimitFlag(cmdLines *cobra.Command) int64 {
+ limit, _ := cmdLines.Flags().GetInt64("limit")
+ return limit
+} */
+
+func (m *MsgUtilsCmd) Execute() error {
+ return m.Command.Execute()
+}
+
+func NewMsgUtilsCmd(use, short string, args cobra.PositionalArgs) *MsgUtilsCmd {
+ return &MsgUtilsCmd{
+ Command: cobra.Command{
+ Use: use,
+ Short: short,
+ Args: args,
+ },
+ }
+}
+
+type GetCmd struct {
+ *MsgUtilsCmd
+}
+
+func NewGetCmd() *GetCmd {
+ return &GetCmd{
+ NewMsgUtilsCmd("get [resource]", "get action", cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs)),
+ }
+}
+
+type FixCmd struct {
+ *MsgUtilsCmd
+}
+
+func NewFixCmd() *FixCmd {
+ return &FixCmd{
+ NewMsgUtilsCmd("fix [resource]", "fix action", cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs)),
+ }
+}
+
+type ClearCmd struct {
+ *MsgUtilsCmd
+}
+
+func NewClearCmd() *ClearCmd {
+ return &ClearCmd{
+ NewMsgUtilsCmd("clear [resource]", "clear action", cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs)),
+ }
+}
+
+type SeqCmd struct {
+ *MsgUtilsCmd
+}
+
+func NewSeqCmd() *SeqCmd {
+ seqCmd := &SeqCmd{
+ NewMsgUtilsCmd("seq", "seq", nil),
+ }
+ return seqCmd
+}
+
+func (s *SeqCmd) GetSeqCmd() *cobra.Command {
+ s.Command.Run = func(cmdLines *cobra.Command, args []string) {
+
+ }
+ return &s.Command
+}
+
+func (s *SeqCmd) FixSeqCmd() *cobra.Command {
+ return &s.Command
+}
+
+type MsgCmd struct {
+ *MsgUtilsCmd
+}
+
+func NewMsgCmd() *MsgCmd {
+ msgCmd := &MsgCmd{
+ NewMsgUtilsCmd("msg", "msg", nil),
+ }
+ return msgCmd
+}
+
+func (m *MsgCmd) GetMsgCmd() *cobra.Command {
+ return &m.Command
+}
+
+func (m *MsgCmd) ClearMsgCmd() *cobra.Command {
+ return &m.Command
+}
diff --git a/pkg/common/cmd/push.go b/pkg/common/cmd/push.go
new file mode 100644
index 0000000..f8a036e
--- /dev/null
+++ b/pkg/common/cmd/push.go
@@ -0,0 +1,80 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/push"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type PushRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ pushConfig *push.Config
+}
+
+func NewPushRpcCmd() *PushRpcCmd {
+ var pushConfig push.Config
+ ret := &PushRpcCmd{pushConfig: &pushConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMPushCfgFileName: &pushConfig.RpcConfig,
+ config.RedisConfigFileName: &pushConfig.RedisConfig,
+ config.MongodbConfigFileName: &pushConfig.MongoConfig,
+ config.KafkaConfigFileName: &pushConfig.KafkaConfig,
+ config.ShareFileName: &pushConfig.Share,
+ config.NotificationFileName: &pushConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &pushConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &pushConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &pushConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ ret.pushConfig.FcmConfigPath = config.Path(ret.ConfigPath())
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *PushRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *PushRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.pushConfig.Discovery, &a.pushConfig.RpcConfig.CircuitBreaker, &a.pushConfig.RpcConfig.RateLimiter, &a.pushConfig.RpcConfig.Prometheus, a.pushConfig.RpcConfig.RPC.ListenIP,
+ a.pushConfig.RpcConfig.RPC.RegisterIP, a.pushConfig.RpcConfig.RPC.AutoSetPorts, a.pushConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.pushConfig.Discovery.RpcService.Push, &a.pushConfig.NotificationConfig, a.pushConfig,
+ []string{
+ a.pushConfig.RpcConfig.GetConfigFileName(),
+ a.pushConfig.RedisConfig.GetConfigFileName(),
+ a.pushConfig.KafkaConfig.GetConfigFileName(),
+ a.pushConfig.NotificationConfig.GetConfigFileName(),
+ a.pushConfig.Share.GetConfigFileName(),
+ a.pushConfig.WebhooksConfig.GetConfigFileName(),
+ a.pushConfig.LocalCacheConfig.GetConfigFileName(),
+ a.pushConfig.Discovery.GetConfigFileName(),
+ },
+ []string{
+ a.pushConfig.Discovery.RpcService.MessageGateway,
+ },
+ push.Start)
+}
diff --git a/pkg/common/cmd/root.go b/pkg/common/cmd/root.go
new file mode 100644
index 0000000..b9af6ec
--- /dev/null
+++ b/pkg/common/cmd/root.go
@@ -0,0 +1,251 @@
+package cmd
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ kdisc "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery"
+ disetcd "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/etcd"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/discovery/etcd"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/spf13/cobra"
+ clientv3 "go.etcd.io/etcd/client/v3"
+)
+
+type RootCmd struct {
+ Command cobra.Command
+ processName string
+ port int
+ prometheusPort int
+ log config.Log
+ index int
+ configPath string
+ etcdClient *clientv3.Client
+}
+
+func (r *RootCmd) ConfigPath() string {
+ return r.configPath
+}
+
+func (r *RootCmd) Index() int {
+ return r.index
+}
+
+func (r *RootCmd) Port() int {
+ return r.port
+}
+
+type CmdOpts struct {
+ loggerPrefixName string
+ configMap map[string]any
+}
+
+func WithCronTaskLogName() func(*CmdOpts) {
+ return func(opts *CmdOpts) {
+ opts.loggerPrefixName = "openim-crontask"
+ }
+}
+
+func WithLogName(logName string) func(*CmdOpts) {
+ return func(opts *CmdOpts) {
+ opts.loggerPrefixName = logName
+ }
+}
+func WithConfigMap(configMap map[string]any) func(*CmdOpts) {
+ return func(opts *CmdOpts) {
+ opts.configMap = configMap
+ }
+}
+
+func NewRootCmd(processName string, opts ...func(*CmdOpts)) *RootCmd {
+ rootCmd := &RootCmd{processName: processName}
+ cmd := cobra.Command{
+		Use:  processName,
+		Long: fmt.Sprintf("Start the %s process", processName),
+ PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
+ return rootCmd.persistentPreRun(cmd, opts...)
+ },
+ SilenceUsage: true,
+ SilenceErrors: false,
+ }
+ cmd.Flags().StringP(config.FlagConf, "c", "", "path of config directory")
+ cmd.Flags().IntP(config.FlagTransferIndex, "i", 0, "process startup sequence number")
+
+ rootCmd.Command = cmd
+ return rootCmd
+}
+
+func (r *RootCmd) initEtcd() error {
+ configDirectory, _, err := r.getFlag(&r.Command)
+ if err != nil {
+ return err
+ }
+ disConfig := config.Discovery{}
+ err = config.Load(configDirectory, config.DiscoveryConfigFilename, config.EnvPrefixMap[config.DiscoveryConfigFilename], &disConfig)
+ if err != nil {
+ return err
+ }
+ if disConfig.Enable == config.ETCD {
+		discov, err := kdisc.NewDiscoveryRegister(&disConfig, nil)
+		if err != nil {
+			return err
+		}
+ if etcdDiscov, ok := discov.(*etcd.SvcDiscoveryRegistryImpl); ok {
+ r.etcdClient = etcdDiscov.GetClient()
+ }
+ }
+ return nil
+}
+
+func (r *RootCmd) persistentPreRun(cmd *cobra.Command, opts ...func(*CmdOpts)) error {
+ if err := r.initEtcd(); err != nil {
+ return err
+ }
+ cmdOpts := r.applyOptions(opts...)
+ if err := r.initializeConfiguration(cmd, cmdOpts); err != nil {
+ return err
+ }
+ if err := r.updateConfigFromEtcd(cmdOpts); err != nil {
+ return err
+ }
+ if err := r.initializeLogger(cmdOpts); err != nil {
+ return errs.WrapMsg(err, "failed to initialize logger")
+ }
+ if r.etcdClient != nil {
+ if err := r.etcdClient.Close(); err != nil {
+ return errs.WrapMsg(err, "failed to close etcd client")
+ }
+ }
+ return nil
+}
+
+func (r *RootCmd) initializeConfiguration(cmd *cobra.Command, opts *CmdOpts) error {
+ configDirectory, _, err := r.getFlag(cmd)
+ if err != nil {
+ return err
+ }
+
+	// Load each registered configuration file into its struct
+ for configFileName, configStruct := range opts.configMap {
+ err := config.Load(configDirectory, configFileName, config.EnvPrefixMap[configFileName], configStruct)
+ if err != nil {
+ return err
+ }
+ }
+ // Load common log configuration file
+ return config.Load(configDirectory, config.LogConfigFileName, config.EnvPrefixMap[config.LogConfigFileName], &r.log)
+}
+
+func (r *RootCmd) updateConfigFromEtcd(opts *CmdOpts) error {
+ if r.etcdClient == nil {
+ return nil
+ }
+ ctx := context.TODO()
+
+ res, err := r.etcdClient.Get(ctx, disetcd.BuildKey(disetcd.EnableConfigCenterKey))
+ if err != nil {
+		log.ZWarn(ctx, "root cmd updateConfigFromEtcd: etcd Get EnableConfigCenterKey failed", errs.Wrap(err))
+ return nil
+ }
+	if res.Count == 0 {
+		return nil
+	}
+	switch string(res.Kvs[0].Value) {
+	case disetcd.Disable:
+		return nil
+	case disetcd.Enable:
+		// config center enabled: continue with the sync below
+	default:
+		return errs.New("unknown EnableConfigCenter value").Wrap()
+	}
+
+ update := func(configFileName string, configStruct any) error {
+ key := disetcd.BuildKey(configFileName)
+ etcdRes, err := r.etcdClient.Get(ctx, key)
+ if err != nil {
+			log.ZWarn(ctx, "root cmd updateConfigFromEtcd: etcd Get failed", errs.Wrap(err))
+ return nil
+ }
+ if etcdRes.Count == 0 {
+ data, err := json.Marshal(configStruct)
+ if err != nil {
+ return errs.ErrArgs.WithDetail(err.Error()).Wrap()
+ }
+ _, err = r.etcdClient.Put(ctx, disetcd.BuildKey(configFileName), string(data))
+ if err != nil {
+				log.ZWarn(ctx, "root cmd updateConfigFromEtcd: etcd Put failed", errs.Wrap(err))
+ }
+ return nil
+ }
+ err = json.Unmarshal(etcdRes.Kvs[0].Value, configStruct)
+ if err != nil {
+ return errs.WrapMsg(err, "failed to unmarshal config from etcd")
+ }
+ return nil
+ }
+ for configFileName, configStruct := range opts.configMap {
+ if err := update(configFileName, configStruct); err != nil {
+ return err
+ }
+ }
+ if err := update(config.LogConfigFileName, &r.log); err != nil {
+ return err
+ }
+	return nil
+}
+
+func (r *RootCmd) applyOptions(opts ...func(*CmdOpts)) *CmdOpts {
+ cmdOpts := defaultCmdOpts()
+ for _, opt := range opts {
+ opt(cmdOpts)
+ }
+
+ return cmdOpts
+}
+
+func (r *RootCmd) initializeLogger(cmdOpts *CmdOpts) error {
+ err := log.InitLoggerFromConfig(
+ cmdOpts.loggerPrefixName,
+ r.processName,
+ "", "",
+ r.log.RemainLogLevel,
+ r.log.IsStdout,
+ r.log.IsJson,
+ r.log.StorageLocation,
+ r.log.RemainRotationCount,
+ r.log.RotationTime,
+ version.Version,
+ r.log.IsSimplify,
+ )
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ return errs.Wrap(log.InitConsoleLogger(r.processName, r.log.RemainLogLevel, r.log.IsJson, version.Version))
+}
+
+func defaultCmdOpts() *CmdOpts {
+ return &CmdOpts{
+ loggerPrefixName: "openim-service-log",
+ }
+}
+
+func (r *RootCmd) getFlag(cmd *cobra.Command) (string, int, error) {
+ configDirectory, err := cmd.Flags().GetString(config.FlagConf)
+ if err != nil {
+ return "", 0, errs.Wrap(err)
+ }
+ r.configPath = configDirectory
+ index, err := cmd.Flags().GetInt(config.FlagTransferIndex)
+ if err != nil {
+ return "", 0, errs.Wrap(err)
+ }
+ r.index = index
+ return configDirectory, index, nil
+}
+
+func (r *RootCmd) Execute() error {
+ return r.Command.Execute()
+}
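`updateConfigFromEtcd` applies a read-or-seed rule per config file: if the key is absent in etcd the local struct is marshaled and pushed up, and if it is present the stored value overwrites the local struct. A self-contained sketch of that control flow, with a plain map standing in for the etcd client and JSON standing in for YAML (both assumptions made so the example runs standalone):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fakeKV stands in for the clientv3 client: a lookup plays the role of
// Get, an assignment the role of Put.
type fakeKV map[string]string

type logConf struct {
	RemainLogLevel int  `json:"remainLogLevel"`
	IsStdout       bool `json:"isStdout"`
}

// syncConfig mirrors the per-file logic in updateConfigFromEtcd: seed
// the store from the local struct when the key is missing, otherwise
// overwrite the local struct with the stored value.
func syncConfig(kv fakeKV, key string, local any) error {
	raw, ok := kv[key]
	if !ok {
		data, err := json.Marshal(local)
		if err != nil {
			return err
		}
		kv[key] = string(data)
		return nil
	}
	return json.Unmarshal([]byte(raw), local)
}

func main() {
	kv := fakeKV{}
	c := logConf{RemainLogLevel: 6, IsStdout: true}

	// First run: store is empty, so local defaults are pushed up.
	_ = syncConfig(kv, "log.yml", &c)
	fmt.Println(kv["log.yml"]) // {"remainLogLevel":6,"isStdout":true}

	// Second run: the store wins over whatever the local file said.
	c2 := logConf{RemainLogLevel: 3}
	_ = syncConfig(kv, "log.yml", &c2)
	fmt.Println(c2.RemainLogLevel, c2.IsStdout) // 6 true
}
```

This ordering means the first process to start a given service effectively publishes its on-disk config as the cluster-wide value.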
diff --git a/pkg/common/cmd/third.go b/pkg/common/cmd/third.go
new file mode 100644
index 0000000..881bd30
--- /dev/null
+++ b/pkg/common/cmd/third.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/third"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type ThirdRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ thirdConfig *third.Config
+}
+
+func NewThirdRpcCmd() *ThirdRpcCmd {
+ var thirdConfig third.Config
+ ret := &ThirdRpcCmd{thirdConfig: &thirdConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCThirdCfgFileName: &thirdConfig.RpcConfig,
+ config.RedisConfigFileName: &thirdConfig.RedisConfig,
+ config.MongodbConfigFileName: &thirdConfig.MongodbConfig,
+ config.ShareFileName: &thirdConfig.Share,
+ config.NotificationFileName: &thirdConfig.NotificationConfig,
+ config.MinioConfigFileName: &thirdConfig.MinioConfig,
+ config.LocalCacheConfigFileName: &thirdConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &thirdConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *ThirdRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *ThirdRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.thirdConfig.Discovery, &a.thirdConfig.RpcConfig.CircuitBreaker, &a.thirdConfig.RpcConfig.RateLimiter, &a.thirdConfig.RpcConfig.Prometheus, a.thirdConfig.RpcConfig.RPC.ListenIP,
+ a.thirdConfig.RpcConfig.RPC.RegisterIP, a.thirdConfig.RpcConfig.RPC.AutoSetPorts, a.thirdConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.thirdConfig.Discovery.RpcService.Third, &a.thirdConfig.NotificationConfig, a.thirdConfig,
+ []string{
+ a.thirdConfig.RpcConfig.GetConfigFileName(),
+ a.thirdConfig.RedisConfig.GetConfigFileName(),
+ a.thirdConfig.MongodbConfig.GetConfigFileName(),
+ a.thirdConfig.NotificationConfig.GetConfigFileName(),
+ a.thirdConfig.Share.GetConfigFileName(),
+ a.thirdConfig.MinioConfig.GetConfigFileName(),
+ a.thirdConfig.LocalCacheConfig.GetConfigFileName(),
+ a.thirdConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ third.Start)
+}
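Each service command registers a `configMap` of config-file name → struct pointer, and `initializeConfiguration` fills them all in one loop. The shape of that pattern, sketched stdlib-only with JSON in place of the repo's YAML loader (`loadAll` and the two config types are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type redisConf struct {
	Address []string `json:"address"`
}

type mongoConf struct {
	Database string `json:"database"`
}

// loadAll walks the filename -> pointer map and decodes each payload
// into its registered struct, the same way initializeConfiguration
// iterates opts.configMap.
func loadAll(files map[string]string, configMap map[string]any) error {
	for name, dst := range configMap {
		if err := json.Unmarshal([]byte(files[name]), dst); err != nil {
			return fmt.Errorf("load %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	var r redisConf
	var m mongoConf
	files := map[string]string{
		"redis.yml":   `{"address":["127.0.0.1:6379"]}`,
		"mongodb.yml": `{"database":"openim_v3"}`,
	}
	if err := loadAll(files, map[string]any{
		"redis.yml":   &r,
		"mongodb.yml": &m,
	}); err != nil {
		panic(err)
	}
	fmt.Println(r.Address[0], m.Database) // 127.0.0.1:6379 openim_v3
}
```

Because the map values are pointers, each service keeps one typed `Config` struct and the generic loader mutates its fields in place, with no per-file loading code.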
diff --git a/pkg/common/cmd/user.go b/pkg/common/cmd/user.go
new file mode 100644
index 0000000..081451f
--- /dev/null
+++ b/pkg/common/cmd/user.go
@@ -0,0 +1,77 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cmd
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/internal/rpc/user"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/startrpc"
+ "git.imall.cloud/openim/open-im-server-deploy/version"
+ "github.com/openimsdk/tools/system/program"
+ "github.com/spf13/cobra"
+)
+
+type UserRpcCmd struct {
+ *RootCmd
+ ctx context.Context
+ configMap map[string]any
+ userConfig *user.Config
+}
+
+func NewUserRpcCmd() *UserRpcCmd {
+ var userConfig user.Config
+ ret := &UserRpcCmd{userConfig: &userConfig}
+ ret.configMap = map[string]any{
+ config.OpenIMRPCUserCfgFileName: &userConfig.RpcConfig,
+ config.RedisConfigFileName: &userConfig.RedisConfig,
+ config.MongodbConfigFileName: &userConfig.MongodbConfig,
+ config.KafkaConfigFileName: &userConfig.KafkaConfig,
+ config.ShareFileName: &userConfig.Share,
+ config.NotificationFileName: &userConfig.NotificationConfig,
+ config.WebhooksConfigFileName: &userConfig.WebhooksConfig,
+ config.LocalCacheConfigFileName: &userConfig.LocalCacheConfig,
+ config.DiscoveryConfigFilename: &userConfig.Discovery,
+ }
+ ret.RootCmd = NewRootCmd(program.GetProcessName(), WithConfigMap(ret.configMap))
+ ret.ctx = context.WithValue(context.Background(), "version", version.Version)
+ ret.Command.RunE = func(cmd *cobra.Command, args []string) error {
+ return ret.runE()
+ }
+ return ret
+}
+
+func (a *UserRpcCmd) Exec() error {
+ return a.Execute()
+}
+
+func (a *UserRpcCmd) runE() error {
+ return startrpc.Start(a.ctx, &a.userConfig.Discovery, &a.userConfig.RpcConfig.CircuitBreaker, &a.userConfig.RpcConfig.RateLimiter, &a.userConfig.RpcConfig.Prometheus, a.userConfig.RpcConfig.RPC.ListenIP,
+ a.userConfig.RpcConfig.RPC.RegisterIP, a.userConfig.RpcConfig.RPC.AutoSetPorts, a.userConfig.RpcConfig.RPC.Ports,
+ a.Index(), a.userConfig.Discovery.RpcService.User, &a.userConfig.NotificationConfig, a.userConfig,
+ []string{
+ a.userConfig.RpcConfig.GetConfigFileName(),
+ a.userConfig.RedisConfig.GetConfigFileName(),
+ a.userConfig.MongodbConfig.GetConfigFileName(),
+ a.userConfig.KafkaConfig.GetConfigFileName(),
+ a.userConfig.NotificationConfig.GetConfigFileName(),
+ a.userConfig.Share.GetConfigFileName(),
+ a.userConfig.WebhooksConfig.GetConfigFileName(),
+ a.userConfig.LocalCacheConfig.GetConfigFileName(),
+ a.userConfig.Discovery.GetConfigFileName(),
+ }, nil,
+ user.Start)
+}
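`NewUserRpcCmd` hands its config map to `NewRootCmd` through `WithConfigMap`, the functional-options pattern also used by `WithLogName` and `WithCronTaskLogName`. A minimal standalone sketch of that pattern (`opts`, `apply`, and the option names are illustrative):

```go
package main

import "fmt"

// opts mirrors CmdOpts; options are plain funcs that mutate it.
type opts struct {
	loggerPrefixName string
	configMap        map[string]any
}

func withLogName(name string) func(*opts) {
	return func(o *opts) { o.loggerPrefixName = name }
}

func withConfigMap(m map[string]any) func(*opts) {
	return func(o *opts) { o.configMap = m }
}

// apply mirrors RootCmd.applyOptions: start from defaults, then let
// each option override a field.
func apply(fns ...func(*opts)) *opts {
	o := &opts{loggerPrefixName: "openim-service-log"} // default
	for _, fn := range fns {
		fn(o)
	}
	return o
}

func main() {
	o := apply(withLogName("openim-crontask"))
	fmt.Println(o.loggerPrefixName) // openim-crontask

	o = apply()
	fmt.Println(o.loggerPrefixName) // openim-service-log
}
```

The payoff is that `NewRootCmd` keeps one signature while callers opt into only the settings they care about, and unset fields fall back to the defaults in one place.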
diff --git a/pkg/common/config/config.go b/pkg/common/config/config.go
new file mode 100644
index 0000000..767826b
--- /dev/null
+++ b/pkg/common/config/config.go
@@ -0,0 +1,979 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "strings"
+ "time"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/openimsdk/tools/mq/kafka"
+ "github.com/openimsdk/tools/s3/aws"
+ "github.com/openimsdk/tools/s3/cos"
+ "github.com/openimsdk/tools/s3/kodo"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/openimsdk/tools/s3/oss"
+)
+
+const StructTagName = "yaml"
+
+type Path string
+
+type Index int
+
+type CacheConfig struct {
+ Topic string `yaml:"topic"`
+ SlotNum int `yaml:"slotNum"`
+ SlotSize int `yaml:"slotSize"`
+ SuccessExpire int `yaml:"successExpire"`
+ FailedExpire int `yaml:"failedExpire"`
+}
+
+type LocalCache struct {
+ Auth CacheConfig `yaml:"auth"`
+ User CacheConfig `yaml:"user"`
+ Group CacheConfig `yaml:"group"`
+ Friend CacheConfig `yaml:"friend"`
+ Conversation CacheConfig `yaml:"conversation"`
+}
+
+type Log struct {
+ StorageLocation string `yaml:"storageLocation"`
+ RotationTime uint `yaml:"rotationTime"`
+ RemainRotationCount uint `yaml:"remainRotationCount"`
+ RemainLogLevel int `yaml:"remainLogLevel"`
+ IsStdout bool `yaml:"isStdout"`
+ IsJson bool `yaml:"isJson"`
+ IsSimplify bool `yaml:"isSimplify"`
+ WithStack bool `yaml:"withStack"`
+}
+
+type Minio struct {
+ Bucket string `yaml:"bucket"`
+ AccessKeyID string `yaml:"accessKeyID"`
+ SecretAccessKey string `yaml:"secretAccessKey"`
+ SessionToken string `yaml:"sessionToken"`
+ InternalAddress string `yaml:"internalAddress"`
+ ExternalAddress string `yaml:"externalAddress"`
+ PublicRead bool `yaml:"publicRead"`
+}
+
+type Mongo struct {
+ URI string `yaml:"uri"`
+ Address []string `yaml:"address"`
+ Database string `yaml:"database"`
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+ AuthSource string `yaml:"authSource"`
+ MaxPoolSize int `yaml:"maxPoolSize"`
+ MaxRetry int `yaml:"maxRetry"`
+ MongoMode string `yaml:"mongoMode"`
+	ReplicaSet     ReplicaSetConfig   `yaml:"replicaSet"`
+	ReadPreference ReadPrefConfig     `yaml:"readPreference"`
+	WriteConcern   WriteConcernConfig `yaml:"writeConcern"`
+}
+
+type ReplicaSetConfig struct {
+ Name string `yaml:"name"`
+ Hosts []string `yaml:"hosts"`
+ ReadConcern string `yaml:"readConcern"`
+ MaxStaleness time.Duration `yaml:"maxStaleness"`
+}
+
+type ReadPrefConfig struct {
+ Mode string `yaml:"mode"`
+ TagSets []map[string]string `yaml:"tagSets"`
+ MaxStaleness time.Duration `yaml:"maxStaleness"`
+}
+
+type WriteConcernConfig struct {
+ W any `yaml:"w"`
+ J bool `yaml:"j"`
+ WTimeout time.Duration `yaml:"wtimeout"`
+}
+
+type Kafka struct {
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+ ProducerAck string `yaml:"producerAck"`
+ CompressType string `yaml:"compressType"`
+ Address []string `yaml:"address"`
+ ToRedisTopic string `yaml:"toRedisTopic"`
+ ToMongoTopic string `yaml:"toMongoTopic"`
+ ToPushTopic string `yaml:"toPushTopic"`
+ ToOfflinePushTopic string `yaml:"toOfflinePushTopic"`
+ ToRedisGroupID string `yaml:"toRedisGroupID"`
+ ToMongoGroupID string `yaml:"toMongoGroupID"`
+ ToPushGroupID string `yaml:"toPushGroupID"`
+ ToOfflineGroupID string `yaml:"toOfflinePushGroupID"`
+
+ Tls TLSConfig `yaml:"tls"`
+}
+type TLSConfig struct {
+ EnableTLS bool `yaml:"enableTLS"`
+ CACrt string `yaml:"caCrt"`
+ ClientCrt string `yaml:"clientCrt"`
+ ClientKey string `yaml:"clientKey"`
+ ClientKeyPwd string `yaml:"clientKeyPwd"`
+ InsecureSkipVerify bool `yaml:"insecureSkipVerify"`
+}
+
+type API struct {
+ Api struct {
+ ListenIP string `yaml:"listenIP"`
+ Ports []int `yaml:"ports"`
+ CompressionLevel int `yaml:"compressionLevel"`
+ } `yaml:"api"`
+ Prometheus struct {
+ Enable bool `yaml:"enable"`
+ AutoSetPorts bool `yaml:"autoSetPorts"`
+ Ports []int `yaml:"ports"`
+ GrafanaURL string `yaml:"grafanaURL"`
+ } `yaml:"prometheus"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ OnlineCountRefresh struct {
+ Enable bool `yaml:"enable"`
+ Interval time.Duration `yaml:"interval"`
+ } `yaml:"onlineCountRefresh"`
+}
+
+type CronTask struct {
+ CronExecuteTime string `yaml:"cronExecuteTime"`
+ RetainChatRecords int `yaml:"retainChatRecords"`
+ FileExpireTime int `yaml:"fileExpireTime"`
+ DeleteObjectType []string `yaml:"deleteObjectType"`
+}
+
+type OfflinePushConfig struct {
+ Enable bool `yaml:"enable"`
+ Title string `yaml:"title"`
+ Desc string `yaml:"desc"`
+ Ext string `yaml:"ext"`
+}
+
+type NotificationConfig struct {
+ IsSendMsg bool `yaml:"isSendMsg"`
+ ReliabilityLevel int `yaml:"reliabilityLevel"`
+ UnreadCount bool `yaml:"unreadCount"`
+ OfflinePush OfflinePushConfig `yaml:"offlinePush"`
+}
+
+type Notification struct {
+ GroupCreated NotificationConfig `yaml:"groupCreated"`
+ GroupInfoSet NotificationConfig `yaml:"groupInfoSet"`
+ JoinGroupApplication NotificationConfig `yaml:"joinGroupApplication"`
+ MemberQuit NotificationConfig `yaml:"memberQuit"`
+ GroupApplicationAccepted NotificationConfig `yaml:"groupApplicationAccepted"`
+ GroupApplicationRejected NotificationConfig `yaml:"groupApplicationRejected"`
+ GroupOwnerTransferred NotificationConfig `yaml:"groupOwnerTransferred"`
+ MemberKicked NotificationConfig `yaml:"memberKicked"`
+ MemberInvited NotificationConfig `yaml:"memberInvited"`
+ MemberEnter NotificationConfig `yaml:"memberEnter"`
+ GroupDismissed NotificationConfig `yaml:"groupDismissed"`
+ GroupMuted NotificationConfig `yaml:"groupMuted"`
+ GroupCancelMuted NotificationConfig `yaml:"groupCancelMuted"`
+ GroupMemberMuted NotificationConfig `yaml:"groupMemberMuted"`
+ GroupMemberCancelMuted NotificationConfig `yaml:"groupMemberCancelMuted"`
+ GroupMemberInfoSet NotificationConfig `yaml:"groupMemberInfoSet"`
+ GroupMemberSetToAdmin NotificationConfig `yaml:"groupMemberSetToAdmin"`
+ GroupMemberSetToOrdinary NotificationConfig `yaml:"groupMemberSetToOrdinaryUser"`
+ GroupInfoSetAnnouncement NotificationConfig `yaml:"groupInfoSetAnnouncement"`
+ GroupInfoSetName NotificationConfig `yaml:"groupInfoSetName"`
+ FriendApplicationAdded NotificationConfig `yaml:"friendApplicationAdded"`
+ FriendApplicationApproved NotificationConfig `yaml:"friendApplicationApproved"`
+ FriendApplicationRejected NotificationConfig `yaml:"friendApplicationRejected"`
+ FriendAdded NotificationConfig `yaml:"friendAdded"`
+ FriendDeleted NotificationConfig `yaml:"friendDeleted"`
+ FriendRemarkSet NotificationConfig `yaml:"friendRemarkSet"`
+ BlackAdded NotificationConfig `yaml:"blackAdded"`
+ BlackDeleted NotificationConfig `yaml:"blackDeleted"`
+ FriendInfoUpdated NotificationConfig `yaml:"friendInfoUpdated"`
+ UserInfoUpdated NotificationConfig `yaml:"userInfoUpdated"`
+ UserStatusChanged NotificationConfig `yaml:"userStatusChanged"`
+ ConversationChanged NotificationConfig `yaml:"conversationChanged"`
+ ConversationSetPrivate NotificationConfig `yaml:"conversationSetPrivate"`
+}
+
+type Prometheus struct {
+ Enable bool `yaml:"enable"`
+ Ports []int `yaml:"ports"`
+}
+
+type MsgGateway struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ ListenIP string `yaml:"listenIP"`
+ LongConnSvr struct {
+ Ports []int `yaml:"ports"`
+ WebsocketMaxConnNum int `yaml:"websocketMaxConnNum"`
+ WebsocketMaxMsgLen int `yaml:"websocketMaxMsgLen"`
+ WebsocketTimeout int `yaml:"websocketTimeout"`
+ } `yaml:"longConnSvr"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type MsgTransfer struct {
+ Prometheus struct {
+ Enable bool `yaml:"enable"`
+ AutoSetPorts bool `yaml:"autoSetPorts"`
+ Ports []int `yaml:"ports"`
+ } `yaml:"prometheus"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Push struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ MaxConcurrentWorkers int `yaml:"maxConcurrentWorkers"`
+ Enable string `yaml:"enable"`
+ GeTui struct {
+ PushUrl string `yaml:"pushUrl"`
+ MasterSecret string `yaml:"masterSecret"`
+ AppKey string `yaml:"appKey"`
+ Intent string `yaml:"intent"`
+ ChannelID string `yaml:"channelID"`
+ ChannelName string `yaml:"channelName"`
+ } `yaml:"geTui"`
+ FCM struct {
+ FilePath string `yaml:"filePath"`
+ AuthURL string `yaml:"authURL"`
+ } `yaml:"fcm"`
+ JPush struct {
+ AppKey string `yaml:"appKey"`
+ MasterSecret string `yaml:"masterSecret"`
+ PushURL string `yaml:"pushURL"`
+ PushIntent string `yaml:"pushIntent"`
+ } `yaml:"jpush"`
+ IOSPush struct {
+ PushSound string `yaml:"pushSound"`
+ BadgeCount bool `yaml:"badgeCount"`
+ Production bool `yaml:"production"`
+ } `yaml:"iosPush"`
+ FullUserCache bool `yaml:"fullUserCache"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Auth struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ TokenPolicy struct {
+ Expire int64 `yaml:"expire"`
+ } `yaml:"tokenPolicy"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Conversation struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Friend struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Group struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ EnableHistoryForNewMembers bool `yaml:"enableHistoryForNewMembers"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Msg struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ FriendVerify bool `yaml:"friendVerify"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type Third struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ Object struct {
+ Enable string `yaml:"enable"`
+ Cos Cos `yaml:"cos"`
+ Oss Oss `yaml:"oss"`
+ Kodo Kodo `yaml:"kodo"`
+ Aws Aws `yaml:"aws"`
+ } `yaml:"object"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+type Cos struct {
+ BucketURL string `yaml:"bucketURL"`
+ SecretID string `yaml:"secretID"`
+ SecretKey string `yaml:"secretKey"`
+ SessionToken string `yaml:"sessionToken"`
+ PublicRead bool `yaml:"publicRead"`
+}
+type Oss struct {
+ Endpoint string `yaml:"endpoint"`
+ Bucket string `yaml:"bucket"`
+ BucketURL string `yaml:"bucketURL"`
+ AccessKeyID string `yaml:"accessKeyID"`
+ AccessKeySecret string `yaml:"accessKeySecret"`
+ SessionToken string `yaml:"sessionToken"`
+ PublicRead bool `yaml:"publicRead"`
+}
+
+type Kodo struct {
+ Endpoint string `yaml:"endpoint"`
+ Bucket string `yaml:"bucket"`
+ BucketURL string `yaml:"bucketURL"`
+ AccessKeyID string `yaml:"accessKeyID"`
+ AccessKeySecret string `yaml:"accessKeySecret"`
+ SessionToken string `yaml:"sessionToken"`
+ PublicRead bool `yaml:"publicRead"`
+}
+
+type Aws struct {
+ Region string `yaml:"region"`
+ Bucket string `yaml:"bucket"`
+ Endpoint string `yaml:"endpoint"`
+ AccessKeyID string `yaml:"accessKeyID"`
+ SecretAccessKey string `yaml:"secretAccessKey"`
+ SessionToken string `yaml:"sessionToken"`
+ PublicRead bool `yaml:"publicRead"`
+}
+
+type User struct {
+ RPC RPC `yaml:"rpc"`
+ Prometheus Prometheus `yaml:"prometheus"`
+ RateLimiter RateLimiter `yaml:"ratelimiter"`
+ CircuitBreaker CircuitBreaker `yaml:"circuitBreaker"`
+}
+
+type RPC struct {
+ RegisterIP string `yaml:"registerIP"`
+ ListenIP string `yaml:"listenIP"`
+ AutoSetPorts bool `yaml:"autoSetPorts"`
+ Ports []int `yaml:"ports"`
+}
+
+type Redis struct {
+ Disable bool `yaml:"-"`
+ Address []string `yaml:"address"`
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+ RedisMode string `yaml:"redisMode"`
+ DB int `yaml:"db"`
+ MaxRetry int `yaml:"maxRetry"`
+ PoolSize int `yaml:"poolSize"`
+ OnlineKeyPrefix string `yaml:"onlineKeyPrefix"`
+ OnlineKeyPrefixHashTag bool `yaml:"onlineKeyPrefixHashTag"`
+ SentinelMode Sentinel `yaml:"sentinelMode"`
+}
+
+type Sentinel struct {
+ MasterName string `yaml:"masterName"`
+ SentinelAddrs []string `yaml:"sentinelsAddrs"`
+ RouteByLatency bool `yaml:"routeByLatency"`
+ RouteRandomly bool `yaml:"routeRandomly"`
+}
+
+type BeforeConfig struct {
+ Enable bool `yaml:"enable"`
+ Timeout int `yaml:"timeout"`
+ FailedContinue bool `yaml:"failedContinue"`
+ DeniedTypes []int32 `yaml:"deniedTypes"`
+}
+
+type AfterConfig struct {
+ Enable bool `yaml:"enable"`
+ Timeout int `yaml:"timeout"`
+ AttentionIds []string `yaml:"attentionIds"`
+ DeniedTypes []int32 `yaml:"deniedTypes"`
+}
+
+type RateLimiter struct {
+ Enable bool `yaml:"enable"`
+ Window time.Duration `yaml:"window"`
+ Bucket int `yaml:"bucket"`
+ CPUThreshold int64 `yaml:"cpuThreshold"`
+}
+
+type CircuitBreaker struct {
+ Enable bool `yaml:"enable"`
+ Window time.Duration `yaml:"window"`
+ Bucket int `yaml:"bucket"`
+ Success float64 `yaml:"success"`
+ Request int64 `yaml:"request"`
+}
+
+type Share struct {
+ Secret string `yaml:"secret"`
+ IMAdminUser struct {
+ UserIDs []string `yaml:"userIDs"`
+ Nicknames []string `yaml:"nicknames"`
+ } `yaml:"imAdminUser"`
+ MultiLogin MultiLogin `yaml:"multiLogin"`
+ RPCMaxBodySize MaxRequestBody `yaml:"rpcMaxBodySize"`
+}
+
+type MaxRequestBody struct {
+ RequestMaxBodySize int `yaml:"requestMaxBodySize"`
+ ResponseMaxBodySize int `yaml:"responseMaxBodySize"`
+}
+
+type MultiLogin struct {
+ Policy int `yaml:"policy"`
+ MaxNumOneEnd int `yaml:"maxNumOneEnd"`
+}
+
+type RpcService struct {
+ User string `yaml:"user"`
+ Friend string `yaml:"friend"`
+ Msg string `yaml:"msg"`
+ Push string `yaml:"push"`
+ MessageGateway string `yaml:"messageGateway"`
+ Group string `yaml:"group"`
+ Auth string `yaml:"auth"`
+ Conversation string `yaml:"conversation"`
+ Third string `yaml:"third"`
+}
+
+func (r *RpcService) GetServiceNames() []string {
+ return []string{
+ r.User,
+ r.Friend,
+ r.Msg,
+ r.Push,
+ r.MessageGateway,
+ r.Group,
+ r.Auth,
+ r.Conversation,
+ r.Third,
+ }
+}
+
+// FullConfig stores all configurations for before and after events
+type Webhooks struct {
+ URL string `yaml:"url"`
+ BeforeSendSingleMsg BeforeConfig `yaml:"beforeSendSingleMsg"`
+ BeforeUpdateUserInfoEx BeforeConfig `yaml:"beforeUpdateUserInfoEx"`
+ AfterUpdateUserInfoEx AfterConfig `yaml:"afterUpdateUserInfoEx"`
+ AfterSendSingleMsg AfterConfig `yaml:"afterSendSingleMsg"`
+ BeforeSendGroupMsg BeforeConfig `yaml:"beforeSendGroupMsg"`
+ BeforeMsgModify BeforeConfig `yaml:"beforeMsgModify"`
+ AfterSendGroupMsg AfterConfig `yaml:"afterSendGroupMsg"`
+ AfterMsgSaveDB AfterConfig `yaml:"afterMsgSaveDB"`
+ AfterUserOnline AfterConfig `yaml:"afterUserOnline"`
+ AfterUserOffline AfterConfig `yaml:"afterUserOffline"`
+ AfterUserKickOff AfterConfig `yaml:"afterUserKickOff"`
+ BeforeOfflinePush BeforeConfig `yaml:"beforeOfflinePush"`
+ BeforeOnlinePush BeforeConfig `yaml:"beforeOnlinePush"`
+ BeforeGroupOnlinePush BeforeConfig `yaml:"beforeGroupOnlinePush"`
+ BeforeAddFriend BeforeConfig `yaml:"beforeAddFriend"`
+ BeforeUpdateUserInfo BeforeConfig `yaml:"beforeUpdateUserInfo"`
+ AfterUpdateUserInfo AfterConfig `yaml:"afterUpdateUserInfo"`
+ BeforeCreateGroup BeforeConfig `yaml:"beforeCreateGroup"`
+ AfterCreateGroup AfterConfig `yaml:"afterCreateGroup"`
+ BeforeMemberJoinGroup BeforeConfig `yaml:"beforeMemberJoinGroup"`
+ BeforeSetGroupMemberInfo BeforeConfig `yaml:"beforeSetGroupMemberInfo"`
+ AfterSetGroupMemberInfo AfterConfig `yaml:"afterSetGroupMemberInfo"`
+ AfterQuitGroup AfterConfig `yaml:"afterQuitGroup"`
+ AfterKickGroupMember AfterConfig `yaml:"afterKickGroupMember"`
+ AfterDismissGroup AfterConfig `yaml:"afterDismissGroup"`
+ BeforeApplyJoinGroup BeforeConfig `yaml:"beforeApplyJoinGroup"`
+ AfterGroupMsgRead AfterConfig `yaml:"afterGroupMsgRead"`
+ AfterSingleMsgRead AfterConfig `yaml:"afterSingleMsgRead"`
+ BeforeUserRegister BeforeConfig `yaml:"beforeUserRegister"`
+ AfterUserRegister AfterConfig `yaml:"afterUserRegister"`
+ AfterTransferGroupOwner AfterConfig `yaml:"afterTransferGroupOwner"`
+ BeforeSetFriendRemark BeforeConfig `yaml:"beforeSetFriendRemark"`
+ AfterSetFriendRemark AfterConfig `yaml:"afterSetFriendRemark"`
+ AfterGroupMsgRevoke AfterConfig `yaml:"afterGroupMsgRevoke"`
+ AfterJoinGroup AfterConfig `yaml:"afterJoinGroup"`
+ BeforeInviteUserToGroup BeforeConfig `yaml:"beforeInviteUserToGroup"`
+ AfterSetGroupInfo AfterConfig `yaml:"afterSetGroupInfo"`
+ BeforeSetGroupInfo BeforeConfig `yaml:"beforeSetGroupInfo"`
+ AfterSetGroupInfoEx AfterConfig `yaml:"afterSetGroupInfoEx"`
+ BeforeSetGroupInfoEx BeforeConfig `yaml:"beforeSetGroupInfoEx"`
+ AfterRevokeMsg AfterConfig `yaml:"afterRevokeMsg"`
+ BeforeAddBlack BeforeConfig `yaml:"beforeAddBlack"`
+ AfterAddFriend AfterConfig `yaml:"afterAddFriend"`
+ BeforeAddFriendAgree BeforeConfig `yaml:"beforeAddFriendAgree"`
+ AfterAddFriendAgree AfterConfig `yaml:"afterAddFriendAgree"`
+ AfterDeleteFriend AfterConfig `yaml:"afterDeleteFriend"`
+ BeforeImportFriends BeforeConfig `yaml:"beforeImportFriends"`
+ AfterImportFriends AfterConfig `yaml:"afterImportFriends"`
+ AfterRemoveBlack AfterConfig `yaml:"afterRemoveBlack"`
+ BeforeCreateSingleChatConversations BeforeConfig `yaml:"beforeCreateSingleChatConversations"`
+ AfterCreateSingleChatConversations AfterConfig `yaml:"afterCreateSingleChatConversations"`
+ BeforeCreateGroupChatConversations BeforeConfig `yaml:"beforeCreateGroupChatConversations"`
+ AfterCreateGroupChatConversations AfterConfig `yaml:"afterCreateGroupChatConversations"`
+}
+
+type ZooKeeper struct {
+ Schema string `yaml:"schema"`
+ Address []string `yaml:"address"`
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+}
+
+type Discovery struct {
+ Enable string `yaml:"enable"`
+ Etcd Etcd `yaml:"etcd"`
+ Kubernetes Kubernetes `yaml:"kubernetes"`
+ RpcService RpcService `yaml:"rpcService"`
+}
+
+type Kubernetes struct {
+ Namespace string `yaml:"namespace"`
+}
+
+type Etcd struct {
+ RootDirectory string `yaml:"rootDirectory"`
+ Address []string `yaml:"address"`
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+}
+
+func (m *Mongo) Build() *mongoutil.Config {
+ return &mongoutil.Config{
+ Uri: m.URI,
+ Address: m.Address,
+ Database: m.Database,
+ Username: m.Username,
+ Password: m.Password,
+ AuthSource: m.AuthSource,
+ MaxPoolSize: m.MaxPoolSize,
+ MaxRetry: m.MaxRetry,
+ MongoMode: m.MongoMode,
+ ReplicaSet: &mongoutil.ReplicaSetConfig{
+ Name: m.ReplicaSet.Name,
+ Hosts: m.ReplicaSet.Hosts,
+ ReadConcern: m.ReplicaSet.ReadConcern,
+ MaxStaleness: m.ReplicaSet.MaxStaleness,
+ },
+ ReadPreference: &mongoutil.ReadPrefConfig{
+ Mode: m.ReadPreference.Mode,
+ TagSets: m.ReadPreference.TagSets,
+ MaxStaleness: m.ReadPreference.MaxStaleness,
+ },
+ WriteConcern: &mongoutil.WriteConcernConfig{
+ W: m.WriteConcern.W,
+ J: m.WriteConcern.J,
+ WTimeout: m.WriteConcern.WTimeout,
+ },
+ }
+}
+
+func (r *Redis) Build() *redisutil.Config {
+ return &redisutil.Config{
+ RedisMode: r.RedisMode,
+ Address: r.Address,
+ Username: r.Username,
+ Password: r.Password,
+ DB: r.DB,
+ MaxRetry: r.MaxRetry,
+ PoolSize: r.PoolSize,
+ Sentinel: &redisutil.Sentinel{
+ MasterName: r.SentinelMode.MasterName,
+ SentinelAddrs: r.SentinelMode.SentinelAddrs,
+ RouteByLatency: r.SentinelMode.RouteByLatency,
+ RouteRandomly: r.SentinelMode.RouteRandomly,
+ },
+ }
+}
+
+func (k *Kafka) Build() *kafka.Config {
+ return &kafka.Config{
+ Username: k.Username,
+ Password: k.Password,
+ ProducerAck: k.ProducerAck,
+ CompressType: k.CompressType,
+ Addr: k.Address,
+ TLS: kafka.TLSConfig{
+ EnableTLS: k.Tls.EnableTLS,
+ CACrt: k.Tls.CACrt,
+ ClientCrt: k.Tls.ClientCrt,
+ ClientKey: k.Tls.ClientKey,
+ ClientKeyPwd: k.Tls.ClientKeyPwd,
+ InsecureSkipVerify: k.Tls.InsecureSkipVerify,
+ },
+ }
+}
+
+func (m *Minio) Build() *minio.Config {
+ formatEndpoint := func(address string) string {
+ if strings.HasPrefix(address, "http://") || strings.HasPrefix(address, "https://") {
+ return address
+ }
+ return "http://" + address
+ }
+ return &minio.Config{
+ Bucket: m.Bucket,
+ AccessKeyID: m.AccessKeyID,
+ SecretAccessKey: m.SecretAccessKey,
+ SessionToken: m.SessionToken,
+ PublicRead: m.PublicRead,
+ Endpoint: formatEndpoint(m.InternalAddress),
+ SignEndpoint: formatEndpoint(m.ExternalAddress),
+ }
+}
+
+func (c *Cos) Build() *cos.Config {
+ return &cos.Config{
+ BucketURL: c.BucketURL,
+ SecretID: c.SecretID,
+ SecretKey: c.SecretKey,
+ SessionToken: c.SessionToken,
+ PublicRead: c.PublicRead,
+ }
+}
+
+func (o *Oss) Build() *oss.Config {
+ return &oss.Config{
+ Endpoint: o.Endpoint,
+ Bucket: o.Bucket,
+ BucketURL: o.BucketURL,
+ AccessKeyID: o.AccessKeyID,
+ AccessKeySecret: o.AccessKeySecret,
+ SessionToken: o.SessionToken,
+ PublicRead: o.PublicRead,
+ }
+}
+
+func (o *Kodo) Build() *kodo.Config {
+ return &kodo.Config{
+ Endpoint: o.Endpoint,
+ Bucket: o.Bucket,
+ BucketURL: o.BucketURL,
+ AccessKeyID: o.AccessKeyID,
+ AccessKeySecret: o.AccessKeySecret,
+ SessionToken: o.SessionToken,
+ PublicRead: o.PublicRead,
+ }
+}
+
+func (o *Aws) Build() *aws.Config {
+ return &aws.Config{
+ Region: o.Region,
+ Bucket: o.Bucket,
+ AccessKeyID: o.AccessKeyID,
+ SecretAccessKey: o.SecretAccessKey,
+ SessionToken: o.SessionToken,
+ }
+}
+
+func (l *CacheConfig) Failed() time.Duration {
+ return time.Second * time.Duration(l.FailedExpire)
+}
+
+func (l *CacheConfig) Success() time.Duration {
+ return time.Second * time.Duration(l.SuccessExpire)
+}
+
+func (l *CacheConfig) Enable() bool {
+ return l.Topic != "" && l.SlotNum > 0 && l.SlotSize > 0
+}
+
+func InitNotification(notification *Notification) {
+ notification.GroupCreated.UnreadCount = false
+ notification.GroupCreated.ReliabilityLevel = 1
+ notification.GroupInfoSet.UnreadCount = false
+ notification.GroupInfoSet.ReliabilityLevel = 1
+ notification.JoinGroupApplication.UnreadCount = false
+ notification.JoinGroupApplication.ReliabilityLevel = 1
+ notification.MemberQuit.UnreadCount = false
+ notification.MemberQuit.ReliabilityLevel = 1
+ notification.GroupApplicationAccepted.UnreadCount = false
+ notification.GroupApplicationAccepted.ReliabilityLevel = 1
+ notification.GroupApplicationRejected.UnreadCount = false
+ notification.GroupApplicationRejected.ReliabilityLevel = 1
+ notification.GroupOwnerTransferred.UnreadCount = false
+ notification.GroupOwnerTransferred.ReliabilityLevel = 1
+ notification.MemberKicked.UnreadCount = false
+ notification.MemberKicked.ReliabilityLevel = 1
+ notification.MemberInvited.UnreadCount = false
+ notification.MemberInvited.ReliabilityLevel = 1
+ notification.MemberEnter.UnreadCount = false
+ notification.MemberEnter.ReliabilityLevel = 1
+ notification.GroupDismissed.UnreadCount = false
+ notification.GroupDismissed.ReliabilityLevel = 1
+ notification.GroupMuted.UnreadCount = false
+ notification.GroupMuted.ReliabilityLevel = 1
+ notification.GroupCancelMuted.UnreadCount = false
+ notification.GroupCancelMuted.ReliabilityLevel = 1
+ notification.GroupMemberMuted.UnreadCount = false
+ notification.GroupMemberMuted.ReliabilityLevel = 1
+ notification.GroupMemberCancelMuted.UnreadCount = false
+ notification.GroupMemberCancelMuted.ReliabilityLevel = 1
+ notification.GroupMemberInfoSet.UnreadCount = false
+ notification.GroupMemberInfoSet.ReliabilityLevel = 1
+ notification.GroupMemberSetToAdmin.UnreadCount = false
+ notification.GroupMemberSetToAdmin.ReliabilityLevel = 1
+ notification.GroupMemberSetToOrdinary.UnreadCount = false
+ notification.GroupMemberSetToOrdinary.ReliabilityLevel = 1
+ notification.GroupInfoSetAnnouncement.UnreadCount = false
+ notification.GroupInfoSetAnnouncement.ReliabilityLevel = 1
+ notification.GroupInfoSetName.UnreadCount = false
+ notification.GroupInfoSetName.ReliabilityLevel = 1
+ notification.FriendApplicationAdded.UnreadCount = false
+ notification.FriendApplicationAdded.ReliabilityLevel = 1
+ notification.FriendApplicationApproved.UnreadCount = false
+ notification.FriendApplicationApproved.ReliabilityLevel = 1
+ notification.FriendApplicationRejected.UnreadCount = false
+ notification.FriendApplicationRejected.ReliabilityLevel = 1
+ notification.FriendAdded.UnreadCount = false
+ notification.FriendAdded.ReliabilityLevel = 1
+ notification.FriendDeleted.UnreadCount = false
+ notification.FriendDeleted.ReliabilityLevel = 1
+ notification.FriendRemarkSet.UnreadCount = false
+ notification.FriendRemarkSet.ReliabilityLevel = 1
+ notification.BlackAdded.UnreadCount = false
+ notification.BlackAdded.ReliabilityLevel = 1
+ notification.BlackDeleted.UnreadCount = false
+ notification.BlackDeleted.ReliabilityLevel = 1
+ notification.FriendInfoUpdated.UnreadCount = false
+ notification.FriendInfoUpdated.ReliabilityLevel = 1
+ notification.UserInfoUpdated.UnreadCount = false
+ notification.UserInfoUpdated.ReliabilityLevel = 1
+ notification.UserStatusChanged.UnreadCount = false
+ notification.UserStatusChanged.ReliabilityLevel = 1
+ notification.ConversationChanged.UnreadCount = false
+ notification.ConversationChanged.ReliabilityLevel = 1
+ notification.ConversationSetPrivate.UnreadCount = false
+ notification.ConversationSetPrivate.ReliabilityLevel = 1
+}
+
+type AllConfig struct {
+ Discovery Discovery
+ Kafka Kafka
+ LocalCache LocalCache
+ Log Log
+ Minio Minio
+ Mongo Mongo
+ Notification Notification
+ API API
+ CronTask CronTask
+ MsgGateway MsgGateway
+ MsgTransfer MsgTransfer
+ Push Push
+ Auth Auth
+ Conversation Conversation
+ Friend Friend
+ Group Group
+ Msg Msg
+ Third Third
+ User User
+ Redis Redis
+ Share Share
+ Webhooks Webhooks
+}
+
+func (a *AllConfig) Name2Config(name string) any {
+ switch name {
+ case a.Discovery.GetConfigFileName():
+ return a.Discovery
+ case a.Kafka.GetConfigFileName():
+ return a.Kafka
+ case a.LocalCache.GetConfigFileName():
+ return a.LocalCache
+ case a.Log.GetConfigFileName():
+ return a.Log
+ case a.Minio.GetConfigFileName():
+ return a.Minio
+ case a.Mongo.GetConfigFileName():
+ return a.Mongo
+ case a.Notification.GetConfigFileName():
+ return a.Notification
+ case a.API.GetConfigFileName():
+ return a.API
+ case a.CronTask.GetConfigFileName():
+ return a.CronTask
+ case a.MsgGateway.GetConfigFileName():
+ return a.MsgGateway
+ case a.MsgTransfer.GetConfigFileName():
+ return a.MsgTransfer
+ case a.Push.GetConfigFileName():
+ return a.Push
+ case a.Auth.GetConfigFileName():
+ return a.Auth
+ case a.Conversation.GetConfigFileName():
+ return a.Conversation
+ case a.Friend.GetConfigFileName():
+ return a.Friend
+ case a.Group.GetConfigFileName():
+ return a.Group
+ case a.Msg.GetConfigFileName():
+ return a.Msg
+ case a.Third.GetConfigFileName():
+ return a.Third
+ case a.User.GetConfigFileName():
+ return a.User
+ case a.Redis.GetConfigFileName():
+ return a.Redis
+ case a.Share.GetConfigFileName():
+ return a.Share
+ case a.Webhooks.GetConfigFileName():
+ return a.Webhooks
+ default:
+ return nil
+ }
+}
+
+func (a *AllConfig) GetConfigNames() []string {
+ return []string{
+ a.Discovery.GetConfigFileName(),
+ a.Kafka.GetConfigFileName(),
+ a.LocalCache.GetConfigFileName(),
+ a.Log.GetConfigFileName(),
+ a.Minio.GetConfigFileName(),
+ a.Mongo.GetConfigFileName(),
+ a.Notification.GetConfigFileName(),
+ a.API.GetConfigFileName(),
+ a.CronTask.GetConfigFileName(),
+ a.MsgGateway.GetConfigFileName(),
+ a.MsgTransfer.GetConfigFileName(),
+ a.Push.GetConfigFileName(),
+ a.Auth.GetConfigFileName(),
+ a.Conversation.GetConfigFileName(),
+ a.Friend.GetConfigFileName(),
+ a.Group.GetConfigFileName(),
+ a.Msg.GetConfigFileName(),
+ a.Third.GetConfigFileName(),
+ a.User.GetConfigFileName(),
+ a.Redis.GetConfigFileName(),
+ a.Share.GetConfigFileName(),
+ a.Webhooks.GetConfigFileName(),
+ }
+}
+
+const (
+ FileName = "config.yaml"
+ DiscoveryConfigFilename = "discovery.yml"
+ KafkaConfigFileName = "kafka.yml"
+ LocalCacheConfigFileName = "local-cache.yml"
+ LogConfigFileName = "log.yml"
+ MinioConfigFileName = "minio.yml"
+ MongodbConfigFileName = "mongodb.yml"
+ NotificationFileName = "notification.yml"
+ OpenIMAPICfgFileName = "openim-api.yml"
+ OpenIMCronTaskCfgFileName = "openim-crontask.yml"
+ OpenIMMsgGatewayCfgFileName = "openim-msggateway.yml"
+ OpenIMMsgTransferCfgFileName = "openim-msgtransfer.yml"
+ OpenIMPushCfgFileName = "openim-push.yml"
+ OpenIMRPCAuthCfgFileName = "openim-rpc-auth.yml"
+ OpenIMRPCConversationCfgFileName = "openim-rpc-conversation.yml"
+ OpenIMRPCFriendCfgFileName = "openim-rpc-friend.yml"
+ OpenIMRPCGroupCfgFileName = "openim-rpc-group.yml"
+ OpenIMRPCMsgCfgFileName = "openim-rpc-msg.yml"
+ OpenIMRPCThirdCfgFileName = "openim-rpc-third.yml"
+ OpenIMRPCUserCfgFileName = "openim-rpc-user.yml"
+ RedisConfigFileName = "redis.yml"
+ ShareFileName = "share.yml"
+ WebhooksConfigFileName = "webhooks.yml"
+)
+
+func (d *Discovery) GetConfigFileName() string {
+ return DiscoveryConfigFilename
+}
+
+func (k *Kafka) GetConfigFileName() string {
+ return KafkaConfigFileName
+}
+
+func (lc *LocalCache) GetConfigFileName() string {
+ return LocalCacheConfigFileName
+}
+
+func (l *Log) GetConfigFileName() string {
+ return LogConfigFileName
+}
+
+func (m *Minio) GetConfigFileName() string {
+ return MinioConfigFileName
+}
+
+func (m *Mongo) GetConfigFileName() string {
+ return MongodbConfigFileName
+}
+
+func (n *Notification) GetConfigFileName() string {
+ return NotificationFileName
+}
+
+func (a *API) GetConfigFileName() string {
+ return OpenIMAPICfgFileName
+}
+
+func (ct *CronTask) GetConfigFileName() string {
+ return OpenIMCronTaskCfgFileName
+}
+
+func (mg *MsgGateway) GetConfigFileName() string {
+ return OpenIMMsgGatewayCfgFileName
+}
+
+func (mt *MsgTransfer) GetConfigFileName() string {
+ return OpenIMMsgTransferCfgFileName
+}
+
+func (p *Push) GetConfigFileName() string {
+ return OpenIMPushCfgFileName
+}
+
+func (a *Auth) GetConfigFileName() string {
+ return OpenIMRPCAuthCfgFileName
+}
+
+func (c *Conversation) GetConfigFileName() string {
+ return OpenIMRPCConversationCfgFileName
+}
+
+func (f *Friend) GetConfigFileName() string {
+ return OpenIMRPCFriendCfgFileName
+}
+
+func (g *Group) GetConfigFileName() string {
+ return OpenIMRPCGroupCfgFileName
+}
+
+func (m *Msg) GetConfigFileName() string {
+ return OpenIMRPCMsgCfgFileName
+}
+
+func (t *Third) GetConfigFileName() string {
+ return OpenIMRPCThirdCfgFileName
+}
+
+func (u *User) GetConfigFileName() string {
+ return OpenIMRPCUserCfgFileName
+}
+
+func (r *Redis) GetConfigFileName() string {
+ return RedisConfigFileName
+}
+
+func (s *Share) GetConfigFileName() string {
+ return ShareFileName
+}
+
+func (w *Webhooks) GetConfigFileName() string {
+ return WebhooksConfigFileName
+}
diff --git a/pkg/common/config/constant.go b/pkg/common/config/constant.go
new file mode 100644
index 0000000..fa3f0ca
--- /dev/null
+++ b/pkg/common/config/constant.go
@@ -0,0 +1,47 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import "github.com/openimsdk/tools/utils/runtimeenv"
+
+const ConfKey = "conf"
+
+const (
+ MountConfigFilePath = "CONFIG_PATH"
+ DeploymentType = "DEPLOYMENT_TYPE"
+ KUBERNETES = runtimeenv.Kubernetes
+ ETCD = "etcd"
+ //Standalone = "standalone"
+)
+
+const (
+ // DefaultDirPerm is used for creating general directories, allowing the owner to read, write, and execute,
+ // while the group and others can only read and execute.
+ DefaultDirPerm = 0755
+
+ // PrivateFilePerm is used for sensitive files, allowing only the owner to read and write.
+ PrivateFilePerm = 0600
+
+ // ExecFilePerm is used for executable files, allowing the owner to read, write, and execute,
+ // while the group and others can only read.
+ ExecFilePerm = 0754
+
+ // SharedDirPerm is used for shared directories, allowing the owner and group to read, write, and execute,
+ // with no permissions for others.
+ SharedDirPerm = 0770
+
+ // ReadOnlyDirPerm is used for read-only directories, allowing the owner, group, and others to only read.
+ ReadOnlyDirPerm = 0555
+)
diff --git a/pkg/common/config/doc.go b/pkg/common/config/doc.go
new file mode 100644
index 0000000..23ce494
--- /dev/null
+++ b/pkg/common/config/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
diff --git a/pkg/common/config/env.go b/pkg/common/config/env.go
new file mode 100644
index 0000000..99ccb3c
--- /dev/null
+++ b/pkg/common/config/env.go
@@ -0,0 +1,30 @@
+package config
+
+import "strings"
+
+var EnvPrefixMap map[string]string
+
+func init() {
+ EnvPrefixMap = make(map[string]string)
+ fileNames := []string{
+ FileName, NotificationFileName, ShareFileName, WebhooksConfigFileName,
+ KafkaConfigFileName, RedisConfigFileName,
+ MongodbConfigFileName, MinioConfigFileName, LogConfigFileName,
+ OpenIMAPICfgFileName, OpenIMCronTaskCfgFileName, OpenIMMsgGatewayCfgFileName,
+ OpenIMMsgTransferCfgFileName, OpenIMPushCfgFileName, OpenIMRPCAuthCfgFileName,
+ OpenIMRPCConversationCfgFileName, OpenIMRPCFriendCfgFileName, OpenIMRPCGroupCfgFileName,
+ OpenIMRPCMsgCfgFileName, OpenIMRPCThirdCfgFileName, OpenIMRPCUserCfgFileName, DiscoveryConfigFilename,
+ }
+
+ for _, fileName := range fileNames {
+ envKey := strings.TrimSuffix(strings.TrimSuffix(fileName, ".yml"), ".yaml")
+ envKey = "IMENV_" + envKey
+ envKey = strings.ToUpper(strings.ReplaceAll(envKey, "-", "_"))
+ EnvPrefixMap[fileName] = envKey
+ }
+}
+
+const (
+ FlagConf = "config_folder_path"
+ FlagTransferIndex = "index"
+)
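The `init()` in env.go above derives one environment-variable prefix per config file: strip the YAML extension, prepend `IMENV_`, replace `-` with `_`, and uppercase. The derivation can be sketched standalone (the helper name `envPrefix` is illustrative, not part of the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// envPrefix mirrors the key derivation in env.go's init():
// trim ".yml"/".yaml", prepend "IMENV_", replace "-" with "_", uppercase.
func envPrefix(fileName string) string {
	key := strings.TrimSuffix(strings.TrimSuffix(fileName, ".yml"), ".yaml")
	key = "IMENV_" + key
	return strings.ToUpper(strings.ReplaceAll(key, "-", "_"))
}

func main() {
	fmt.Println(envPrefix("openim-rpc-user.yml")) // IMENV_OPENIM_RPC_USER
	fmt.Println(envPrefix("mongodb.yml"))         // IMENV_MONGODB
}
```

Combined with viper's `AutomaticEnv` in load_config.go, this is why a variable such as `IMENV_OPENIM_RPC_USER_RPC_PORTS` overrides `rpc.ports` in `openim-rpc-user.yml`.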
diff --git a/pkg/common/config/global.go b/pkg/common/config/global.go
new file mode 100644
index 0000000..19f74b0
--- /dev/null
+++ b/pkg/common/config/global.go
@@ -0,0 +1,11 @@
+package config
+
+var standalone bool
+
+func SetStandalone() {
+ standalone = true
+}
+
+func Standalone() bool {
+ return standalone
+}
diff --git a/pkg/common/config/load_config.go b/pkg/common/config/load_config.go
new file mode 100644
index 0000000..142b704
--- /dev/null
+++ b/pkg/common/config/load_config.go
@@ -0,0 +1,44 @@
+package config
+
+import (
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/mitchellh/mapstructure"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "github.com/spf13/viper"
+)
+
+func Load(configDirectory string, configFileName string, envPrefix string, config any) error {
+ if runtimeenv.RuntimeEnvironment() == KUBERNETES {
+ mountPath := os.Getenv(MountConfigFilePath)
+ if mountPath == "" {
+ return errs.ErrArgs.WrapMsg(MountConfigFilePath + " env is empty")
+ }
+
+ return loadConfig(filepath.Join(mountPath, configFileName), envPrefix, config)
+ }
+
+ return loadConfig(filepath.Join(configDirectory, configFileName), envPrefix, config)
+}
+
+func loadConfig(path string, envPrefix string, config any) error {
+ v := viper.New()
+ v.SetConfigFile(path)
+ v.SetEnvPrefix(envPrefix)
+ v.AutomaticEnv()
+ v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
+
+ if err := v.ReadInConfig(); err != nil {
+ return errs.WrapMsg(err, "failed to read config file", "path", path, "envPrefix", envPrefix)
+ }
+
+ if err := v.Unmarshal(config, func(config *mapstructure.DecoderConfig) {
+ config.TagName = StructTagName
+ }); err != nil {
+ return errs.WrapMsg(err, "failed to unmarshal config", "path", path, "envPrefix", envPrefix)
+ }
+ return nil
+}
diff --git a/pkg/common/config/load_config_test.go b/pkg/common/config/load_config_test.go
new file mode 100644
index 0000000..f11d91d
--- /dev/null
+++ b/pkg/common/config/load_config_test.go
@@ -0,0 +1,93 @@
+package config
+
+import (
+ "os"
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+)
+
+func TestLoadLogConfig(t *testing.T) {
+ var log Log
+ os.Setenv("IMENV_LOG_REMAINLOGLEVEL", "5")
+ err := Load("../../../config/", "log.yml", "IMENV_LOG", &log)
+ assert.Nil(t, err)
+ t.Log(log.RemainLogLevel)
+ // assert.Equal(t, "../../../../logs/", log.StorageLocation)
+}
+
+func TestLoadMongoConfig(t *testing.T) {
+ var mongo Mongo
+ // os.Setenv("DEPLOYMENT_TYPE", "kubernetes")
+ os.Setenv("IMENV_MONGODB_PASSWORD", "openIM1231231")
+ // os.Setenv("IMENV_MONGODB_URI", "openIM123")
+ // os.Setenv("IMENV_MONGODB_USERNAME", "openIM123")
+ err := Load("../../../config/", "mongodb.yml", "IMENV_MONGODB", &mongo)
+ // err := LoadApiConfig("../../../config/mongodb.yml", "IMENV_MONGODB", &mongo)
+
+ assert.Nil(t, err)
+ t.Log(mongo.Password)
+ // assert.Equal(t, "openIM123", mongo.Password)
+ t.Log(os.Getenv("IMENV_MONGODB_PASSWORD"))
+ t.Log(mongo)
+}
+
+func TestLoadMinioConfig(t *testing.T) {
+ var storageConfig Minio
+	err := Load("../../../config/", "minio.yml", "IMENV_MINIO", &storageConfig)
+ assert.Nil(t, err)
+ assert.Equal(t, "openim", storageConfig.Bucket)
+}
+
+func TestLoadWebhooksConfig(t *testing.T) {
+ var webhooks Webhooks
+	err := Load("../../../config/", "webhooks.yml", "IMENV_WEBHOOKS", &webhooks)
+	assert.Nil(t, err)
+	assert.Equal(t, 5, webhooks.BeforeAddBlack.Timeout)
+}
+
+func TestLoadOpenIMRpcUserConfig(t *testing.T) {
+ var user User
+	err := Load("../../../config/", "openim-rpc-user.yml", "IMENV_OPENIM_RPC_USER", &user)
+ assert.Nil(t, err)
+ //export IMENV_OPENIM_RPC_USER_RPC_LISTENIP="0.0.0.0"
+ assert.Equal(t, "0.0.0.0", user.RPC.ListenIP)
+ //export IMENV_OPENIM_RPC_USER_RPC_PORTS="10110,10111,10112"
+ assert.Equal(t, []int{10110, 10111, 10112}, user.RPC.Ports)
+}
+
+func TestLoadNotificationConfig(t *testing.T) {
+ var noti Notification
+	err := Load("../../../config/", "notification.yml", "IMENV_NOTIFICATION", &noti)
+ assert.Nil(t, err)
+ assert.Equal(t, "Your friend's profile has been changed", noti.FriendRemarkSet.OfflinePush.Title)
+}
+
+func TestLoadOpenIMThirdConfig(t *testing.T) {
+ var third Third
+	err := Load("../../../config/", "openim-rpc-third.yml", "IMENV_OPENIM_RPC_THIRD", &third)
+ assert.Nil(t, err)
+ assert.Equal(t, "enabled", third.Object.Enable)
+ assert.Equal(t, "https://oss-cn-chengdu.aliyuncs.com", third.Object.Oss.Endpoint)
+ assert.Equal(t, "my_bucket_name", third.Object.Oss.Bucket)
+ assert.Equal(t, "https://my_bucket_name.oss-cn-chengdu.aliyuncs.com", third.Object.Oss.BucketURL)
+ assert.Equal(t, "AKID1234567890", third.Object.Oss.AccessKeyID)
+ assert.Equal(t, "abc123xyz789", third.Object.Oss.AccessKeySecret)
+	assert.Equal(t, "session_token_value", third.Object.Oss.SessionToken)
+ assert.Equal(t, true, third.Object.Oss.PublicRead)
+
+ // Environment: IMENV_OPENIM_RPC_THIRD_OBJECT_ENABLE=enabled;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_ENDPOINT=https://oss-cn-chengdu.aliyuncs.com;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_BUCKET=my_bucket_name;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_BUCKETURL=https://my_bucket_name.oss-cn-chengdu.aliyuncs.com;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_ACCESSKEYID=AKID1234567890;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_ACCESSKEYSECRET=abc123xyz789;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_SESSIONTOKEN=session_token_value;IMENV_OPENIM_RPC_THIRD_OBJECT_OSS_PUBLICREAD=true
+}
+
+func TestTransferConfig(t *testing.T) {
+ var tran MsgTransfer
+	err := Load("../../../config/", "openim-msgtransfer.yml", "IMENV_OPENIM_MSGTRANSFER", &tran)
+ assert.Nil(t, err)
+ assert.Equal(t, true, tran.Prometheus.Enable)
+ assert.Equal(t, true, tran.Prometheus.AutoSetPorts)
+}
diff --git a/pkg/common/config/parse.go b/pkg/common/config/parse.go
new file mode 100644
index 0000000..36fd0d1
--- /dev/null
+++ b/pkg/common/config/parse.go
@@ -0,0 +1,107 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "os"
+ "path/filepath"
+
+ "gopkg.in/yaml.v3"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/field"
+)
+
+const (
+ DefaultFolderPath = "../config/"
+)
+
+// GetDefaultConfigPath returns the absolute path of the "../config/" directory next to the executable; this is the config path inside a k8s container.
+func GetDefaultConfigPath() (string, error) {
+ executablePath, err := os.Executable()
+ if err != nil {
+ return "", errs.WrapMsg(err, "failed to get executable path")
+ }
+
+ configPath, err := field.OutDir(filepath.Join(filepath.Dir(executablePath), "../config/"))
+ if err != nil {
+ return "", errs.WrapMsg(err, "failed to get output directory", "outDir", filepath.Join(filepath.Dir(executablePath), "../config/"))
+ }
+ return configPath, nil
+}
+
+// GetProjectRoot returns the absolute path of the project root directory.
+func GetProjectRoot() (string, error) {
+ executablePath, err := os.Executable()
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ projectRoot, err := field.OutDir(filepath.Join(filepath.Dir(executablePath), "../../../../.."))
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return projectRoot, nil
+}
+
+func GetOptionsByNotification(cfg NotificationConfig, sendMessage *bool) msgprocessor.Options {
+ opts := msgprocessor.NewOptions()
+
+ if sendMessage != nil {
+ cfg.IsSendMsg = *sendMessage
+ }
+ if cfg.IsSendMsg {
+ opts = msgprocessor.WithOptions(opts, msgprocessor.WithUnreadCount(true))
+ }
+ if cfg.OfflinePush.Enable {
+ opts = msgprocessor.WithOptions(opts, msgprocessor.WithOfflinePush(true))
+ }
+ switch cfg.ReliabilityLevel {
+ case constant.UnreliableNotification:
+ case constant.ReliableNotificationNoMsg:
+ opts = msgprocessor.WithOptions(opts, msgprocessor.WithHistory(true), msgprocessor.WithPersistent())
+ }
+ opts = msgprocessor.WithOptions(opts, msgprocessor.WithSendMsg(cfg.IsSendMsg))
+
+ return opts
+}
+
+// initConfig loads configuration from the specified path into the provided config structure.
+// If the specified config file does not exist, it falls back to the project's default "config" directory.
+func initConfig(config any, configName, configFolderPath string) error {
+ configFolderPath = filepath.Join(configFolderPath, configName)
+ _, err := os.Stat(configFolderPath)
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return errs.WrapMsg(err, "stat config path error", "config Folder Path", configFolderPath)
+ }
+ path, err := GetProjectRoot()
+ if err != nil {
+ return err
+ }
+ configFolderPath = filepath.Join(path, "config", configName)
+ }
+ data, err := os.ReadFile(configFolderPath)
+ if err != nil {
+ return errs.WrapMsg(err, "read file error", "config Folder Path", configFolderPath)
+ }
+ if err = yaml.Unmarshal(data, config); err != nil {
+ return errs.WrapMsg(err, "unmarshal yaml error", "config Folder Path", configFolderPath)
+ }
+
+ return nil
+}
diff --git a/pkg/common/convert/auth.go b/pkg/common/convert/auth.go
new file mode 100644
index 0000000..a0f00c8
--- /dev/null
+++ b/pkg/common/convert/auth.go
@@ -0,0 +1,25 @@
+package convert
+
+func TokenMapDB2Pb(tokenMapDB map[string]int) map[string]int32 {
+ if tokenMapDB == nil {
+ return nil
+ }
+
+ tokenMapPB := make(map[string]int32, len(tokenMapDB))
+ for k, v := range tokenMapDB {
+ tokenMapPB[k] = int32(v)
+ }
+ return tokenMapPB
+}
+
+func TokenMapPb2DB(tokenMapPB map[string]int32) map[string]int {
+ if tokenMapPB == nil {
+ return nil
+ }
+
+ tokenMapDB := make(map[string]int, len(tokenMapPB))
+ for k, v := range tokenMapPB {
+ tokenMapDB[k] = int(v)
+ }
+ return tokenMapDB
+}
\ No newline at end of file
diff --git a/pkg/common/convert/black.go b/pkg/common/convert/black.go
new file mode 100644
index 0000000..9f46cb6
--- /dev/null
+++ b/pkg/common/convert/black.go
@@ -0,0 +1,56 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+	"context"
+
+	"git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+	"git.imall.cloud/openim/protocol/sdkws"
+)
+
+func BlackDB2Pb(ctx context.Context, blackDBs []*model.Black, f func(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error)) (blackPbs []*sdkws.BlackInfo, err error) {
+	if len(blackDBs) == 0 {
+		return nil, nil
+	}
+	userIDs := make([]string, 0, len(blackDBs))
+	for _, blackDB := range blackDBs {
+		userIDs = append(userIDs, blackDB.BlockUserID)
+	}
+	userInfos, err := f(ctx, userIDs)
+	if err != nil {
+		return nil, err
+	}
+	for _, blackDB := range blackDBs {
+		userInfo, ok := userInfos[blackDB.BlockUserID]
+		if !ok {
+			// Skip entries whose blocked user cannot be resolved to avoid a nil dereference.
+			continue
+		}
+		blackPbs = append(blackPbs, &sdkws.BlackInfo{
+			OwnerUserID:    blackDB.OwnerUserID,
+			CreateTime:     blackDB.CreateTime.Unix(),
+			AddSource:      blackDB.AddSource,
+			Ex:             blackDB.Ex,
+			OperatorUserID: blackDB.OperatorUserID,
+			BlackUserInfo: &sdkws.PublicUserInfo{
+				UserID:   userInfo.UserID,
+				Nickname: userInfo.Nickname,
+				FaceURL:  userInfo.FaceURL,
+				Ex:       userInfo.Ex,
+				UserType: userInfo.UserType,
+			},
+		})
+	}
+	return blackPbs, nil
+}
diff --git a/pkg/common/convert/conversation.go b/pkg/common/convert/conversation.go
new file mode 100644
index 0000000..14342c3
--- /dev/null
+++ b/pkg/common/convert/conversation.go
@@ -0,0 +1,61 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/conversation"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func ConversationDB2Pb(conversationDB *model.Conversation) *conversation.Conversation {
+	conversationPB := &conversation.Conversation{}
+	if err := datautil.CopyStructFields(conversationPB, conversationDB); err != nil {
+		return nil
+	}
+	conversationPB.LatestMsgDestructTime = conversationDB.LatestMsgDestructTime.UnixMilli()
+	return conversationPB
+}
+
+func ConversationsDB2Pb(conversationsDB []*model.Conversation) (conversationsPB []*conversation.Conversation) {
+ for _, conversationDB := range conversationsDB {
+ conversationPB := &conversation.Conversation{}
+ if err := datautil.CopyStructFields(conversationPB, conversationDB); err != nil {
+ continue
+ }
+ conversationPB.LatestMsgDestructTime = conversationDB.LatestMsgDestructTime.UnixMilli()
+ conversationsPB = append(conversationsPB, conversationPB)
+ }
+ return conversationsPB
+}
+
+func ConversationPb2DB(conversationPB *conversation.Conversation) *model.Conversation {
+ conversationDB := &model.Conversation{}
+ if err := datautil.CopyStructFields(conversationDB, conversationPB); err != nil {
+ return nil
+ }
+ return conversationDB
+}
+
+func ConversationsPb2DB(conversationsPB []*conversation.Conversation) (conversationsDB []*model.Conversation) {
+ for _, conversationPB := range conversationsPB {
+ conversationDB := &model.Conversation{}
+ if err := datautil.CopyStructFields(conversationDB, conversationPB); err != nil {
+ continue
+ }
+ conversationsDB = append(conversationsDB, conversationDB)
+ }
+ return conversationsDB
+}
diff --git a/pkg/common/convert/doc.go b/pkg/common/convert/doc.go
new file mode 100644
index 0000000..146bd9d
--- /dev/null
+++ b/pkg/common/convert/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
diff --git a/pkg/common/convert/friend.go b/pkg/common/convert/friend.go
new file mode 100644
index 0000000..8b190d4
--- /dev/null
+++ b/pkg/common/convert/friend.go
@@ -0,0 +1,171 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "context"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/notification/common_user"
+ "git.imall.cloud/openim/protocol/relation"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+func FriendPb2DB(friend *sdkws.FriendInfo) *model.Friend {
+	if friend == nil || friend.FriendUser == nil {
+		// Guard the FriendUser dereference below.
+		return nil
+	}
+	dbFriend := &model.Friend{}
+	if err := datautil.CopyStructFields(dbFriend, friend); err != nil {
+		return nil
+	}
+	dbFriend.FriendUserID = friend.FriendUser.UserID
+	dbFriend.CreateTime = timeutil.UnixSecondToTime(friend.CreateTime)
+	return dbFriend
+}
+
+func FriendDB2Pb(ctx context.Context, friendDB *model.Friend, getUsers func(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error)) (*sdkws.FriendInfo, error) {
+ users, err := getUsers(ctx, []string{friendDB.FriendUserID})
+ if err != nil {
+ return nil, err
+ }
+ user, ok := users[friendDB.FriendUserID]
+ if !ok {
+ return nil, fmt.Errorf("user not found: %s", friendDB.FriendUserID)
+ }
+
+ return &sdkws.FriendInfo{
+ FriendUser: user,
+ CreateTime: friendDB.CreateTime.Unix(),
+ }, nil
+}
+
+func FriendsDB2Pb(ctx context.Context, friendsDB []*model.Friend, getUsers func(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error)) (friendsPb []*sdkws.FriendInfo, err error) {
+ if len(friendsDB) == 0 {
+ return nil, nil
+ }
+ var userID []string
+ for _, friendDB := range friendsDB {
+ userID = append(userID, friendDB.FriendUserID)
+ }
+
+ users, err := getUsers(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+	for _, friend := range friendsDB {
+		user, ok := users[friend.FriendUserID]
+		if !ok {
+			// Skip entries whose user record was not returned to avoid a
+			// nil-pointer dereference on the map's zero value.
+			continue
+		}
+		friendPb := &sdkws.FriendInfo{FriendUser: &sdkws.UserInfo{}}
+		if err := datautil.CopyStructFields(friendPb, friend); err != nil {
+			return nil, err
+		}
+		friendPb.FriendUser.UserID = user.UserID
+		friendPb.FriendUser.Nickname = user.Nickname
+		friendPb.FriendUser.FaceURL = user.FaceURL
+		friendPb.FriendUser.Ex = user.Ex
+		friendPb.CreateTime = friend.CreateTime.Unix()
+		friendPb.IsPinned = friend.IsPinned
+		friendsPb = append(friendsPb, friendPb)
+	}
+ return friendsPb, nil
+}
+
+func FriendOnlyDB2PbOnly(friendsDB []*model.Friend) []*relation.FriendInfoOnly {
+ return datautil.Slice(friendsDB, func(f *model.Friend) *relation.FriendInfoOnly {
+ return &relation.FriendInfoOnly{
+ OwnerUserID: f.OwnerUserID,
+ FriendUserID: f.FriendUserID,
+ Remark: f.Remark,
+ CreateTime: f.CreateTime.UnixMilli(),
+ AddSource: f.AddSource,
+ OperatorUserID: f.OperatorUserID,
+ Ex: f.Ex,
+ IsPinned: f.IsPinned,
+ }
+ })
+}
+
+func FriendRequestDB2Pb(ctx context.Context, friendRequests []*model.FriendRequest, getUsers func(ctx context.Context, userIDs []string) (map[string]common_user.CommonUser, error)) ([]*sdkws.FriendRequest, error) {
+ if len(friendRequests) == 0 {
+ return nil, nil
+ }
+ userIDMap := make(map[string]struct{})
+ for _, friendRequest := range friendRequests {
+ userIDMap[friendRequest.ToUserID] = struct{}{}
+ userIDMap[friendRequest.FromUserID] = struct{}{}
+ }
+ users, err := getUsers(ctx, datautil.Keys(userIDMap))
+ if err != nil {
+ return nil, err
+ }
+ res := make([]*sdkws.FriendRequest, 0, len(friendRequests))
+ for _, friendRequest := range friendRequests {
+		// A missing key yields a nil interface value, on which the Get*
+		// accessors below would panic, so skip such requests.
+		toUser, okTo := users[friendRequest.ToUserID]
+		fromUser, okFrom := users[friendRequest.FromUserID]
+		if !okTo || !okFrom {
+			continue
+		}
+ res = append(res, &sdkws.FriendRequest{
+ FromUserID: friendRequest.FromUserID,
+ FromNickname: fromUser.GetNickname(),
+ FromFaceURL: fromUser.GetFaceURL(),
+ ToUserID: friendRequest.ToUserID,
+ ToNickname: toUser.GetNickname(),
+ ToFaceURL: toUser.GetFaceURL(),
+ HandleResult: friendRequest.HandleResult,
+ ReqMsg: friendRequest.ReqMsg,
+ CreateTime: friendRequest.CreateTime.UnixMilli(),
+ HandlerUserID: friendRequest.HandlerUserID,
+ HandleMsg: friendRequest.HandleMsg,
+ HandleTime: friendRequest.HandleTime.UnixMilli(),
+ Ex: friendRequest.Ex,
+ })
+ }
+ return res, nil
+}
+
+// FriendPb2DBMap converts a FriendInfo protobuf object to a map suitable for database operations.
+// It only includes non-zero or non-empty fields in the map.
+func FriendPb2DBMap(friend *sdkws.FriendInfo) map[string]any {
+ if friend == nil {
+ return nil
+ }
+
+ val := make(map[string]any)
+
+	// Only non-zero/non-empty fields are included so the map can drive a
+	// partial update; extend this mapping as FriendInfo and Friend evolve.
+ if friend.FriendUser != nil {
+ if friend.FriendUser.UserID != "" {
+ val["friend_user_id"] = friend.FriendUser.UserID
+ }
+ if friend.FriendUser.Nickname != "" {
+ val["nickname"] = friend.FriendUser.Nickname
+ }
+ if friend.FriendUser.FaceURL != "" {
+ val["face_url"] = friend.FriendUser.FaceURL
+ }
+ if friend.FriendUser.Ex != "" {
+ val["ex"] = friend.FriendUser.Ex
+ }
+ }
+ if friend.CreateTime != 0 {
+		val["create_time"] = friend.CreateTime // Raw Unix timestamp; convert to time.Time before persisting if the column expects one.
+ }
+
+ // Include other fields from FriendInfo as needed, similar to the above pattern.
+
+ return val
+}
diff --git a/pkg/common/convert/group.go b/pkg/common/convert/group.go
new file mode 100644
index 0000000..0d8049d
--- /dev/null
+++ b/pkg/common/convert/group.go
@@ -0,0 +1,174 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ pbgroup "git.imall.cloud/openim/protocol/group"
+ sdkws "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+)
+
+func Db2PbGroupInfo(m *model.Group, ownerUserID string, memberCount uint32) *sdkws.GroupInfo {
+ return &sdkws.GroupInfo{
+ GroupID: m.GroupID,
+ GroupName: m.GroupName,
+ Notification: m.Notification,
+ Introduction: m.Introduction,
+ FaceURL: m.FaceURL,
+ OwnerUserID: ownerUserID,
+ CreateTime: m.CreateTime.UnixMilli(),
+ MemberCount: memberCount,
+ Ex: m.Ex,
+ Status: m.Status,
+ CreatorUserID: m.CreatorUserID,
+ GroupType: m.GroupType,
+ NeedVerification: m.NeedVerification,
+ LookMemberInfo: m.LookMemberInfo,
+ ApplyMemberFriend: m.ApplyMemberFriend,
+ NotificationUpdateTime: m.NotificationUpdateTime.UnixMilli(),
+ NotificationUserID: m.NotificationUserID,
+ }
+}
+
+func Pb2DbGroupRequest(req *pbgroup.GroupApplicationResponseReq, handleUserID string) *model.GroupRequest {
+ return &model.GroupRequest{
+ UserID: req.FromUserID,
+ GroupID: req.GroupID,
+ HandleResult: req.HandleResult,
+ HandledMsg: req.HandledMsg,
+ HandleUserID: handleUserID,
+ HandledTime: time.Now(),
+ }
+}
+
+func Db2PbCMSGroup(m *model.Group, ownerUserID string, ownerUserName string, memberCount uint32) *pbgroup.CMSGroup {
+ return &pbgroup.CMSGroup{
+ GroupInfo: Db2PbGroupInfo(m, ownerUserID, memberCount),
+ GroupOwnerUserID: ownerUserID,
+ GroupOwnerUserName: ownerUserName,
+ }
+}
+
+// Db2PbGroupMember converts a database group-member model into a protobuf GroupMemberFullInfo.
+// The returned GroupMemberFullInfo carries the mute-related field:
+// - MuteEndTime: the mute end time as a millisecond timestamp
+// A member is currently muted when MuteEndTime >= the current timestamp (ms);
+// the remaining mute duration in seconds is max(0, MuteEndTime - now) / 1000.
+func Db2PbGroupMember(m *model.GroupMember) *sdkws.GroupMemberFullInfo {
+ muteEndTime := int64(0)
+ if !m.MuteEndTime.IsZero() && m.MuteEndTime.After(time.Unix(0, 0)) {
+ muteEndTime = m.MuteEndTime.UnixMilli()
+		// Log the mute end time read from the database to help diagnose auto-mute issues.
+ now := time.Now()
+ if muteEndTime >= now.UnixMilli() {
+			// Log only while the member is actually muted to keep log volume down.
+ log.ZInfo(context.Background(), "Db2PbGroupMember: found muted user in database",
+ "groupID", m.GroupID,
+ "userID", m.UserID,
+ "muteEndTimeFromDB", m.MuteEndTime.Format(time.RFC3339),
+ "muteEndTimeTimestamp", muteEndTime,
+ "now", now.Format(time.RFC3339),
+ "mutedDurationSeconds", (muteEndTime-now.UnixMilli())/1000,
+ "isZero", m.MuteEndTime.IsZero(),
+ "afterUnixZero", m.MuteEndTime.After(time.Unix(0, 0)))
+ }
+ }
+ return &sdkws.GroupMemberFullInfo{
+ GroupID: m.GroupID,
+ UserID: m.UserID,
+ RoleLevel: m.RoleLevel,
+ JoinTime: m.JoinTime.UnixMilli(),
+ Nickname: m.Nickname,
+ FaceURL: m.FaceURL,
+ // AppMangerLevel: m.AppMangerLevel,
+ JoinSource: m.JoinSource,
+ OperatorUserID: m.OperatorUserID,
+ Ex: m.Ex,
+ MuteEndTime: muteEndTime,
+ InviterUserID: m.InviterUserID,
+ }
+}
+
+func Db2PbGroupRequest(m *model.GroupRequest, user *sdkws.UserInfo, group *sdkws.GroupInfo) *sdkws.GroupRequest {
+ var pu *sdkws.PublicUserInfo
+ if user != nil {
+ pu = &sdkws.PublicUserInfo{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ UserType: user.UserType,
+ }
+ }
+ return &sdkws.GroupRequest{
+ UserInfo: pu,
+ GroupInfo: group,
+ HandleResult: m.HandleResult,
+ ReqMsg: m.ReqMsg,
+ HandleMsg: m.HandledMsg,
+ ReqTime: m.ReqTime.UnixMilli(),
+ HandleUserID: m.HandleUserID,
+ HandleTime: m.HandledTime.UnixMilli(),
+ Ex: m.Ex,
+ JoinSource: m.JoinSource,
+ InviterUserID: m.InviterUserID,
+ }
+}
+
+func Db2PbGroupAbstractInfo(
+ groupID string,
+ groupMemberNumber uint32,
+ groupMemberListHash uint64,
+) *pbgroup.GroupAbstractInfo {
+ return &pbgroup.GroupAbstractInfo{
+ GroupID: groupID,
+ GroupMemberNumber: groupMemberNumber,
+ GroupMemberListHash: groupMemberListHash,
+ }
+}
+
+func Pb2DBGroupInfo(m *sdkws.GroupInfo) *model.Group {
+ return &model.Group{
+ GroupID: m.GroupID,
+ GroupName: m.GroupName,
+ Notification: m.Notification,
+ Introduction: m.Introduction,
+ FaceURL: m.FaceURL,
+ CreateTime: time.Now(),
+ Ex: m.Ex,
+ Status: m.Status,
+ CreatorUserID: m.CreatorUserID,
+ GroupType: m.GroupType,
+ NeedVerification: m.NeedVerification,
+ LookMemberInfo: m.LookMemberInfo,
+ ApplyMemberFriend: m.ApplyMemberFriend,
+ NotificationUpdateTime: time.UnixMilli(m.NotificationUpdateTime),
+ NotificationUserID: m.NotificationUserID,
+ }
+}
+
+// func Pb2DbGroupMember(m *sdkws.UserInfo) *relation.GroupMember {
+// return &relation.GroupMember{
+// UserID: m.UserID,
+// Nickname: m.Nickname,
+// FaceURL: m.FaceURL,
+// Ex: m.Ex,
+// }
+//}
diff --git a/pkg/common/convert/msg.go b/pkg/common/convert/msg.go
new file mode 100644
index 0000000..b502a84
--- /dev/null
+++ b/pkg/common/convert/msg.go
@@ -0,0 +1,98 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+func MsgPb2DB(msg *sdkws.MsgData) *model.MsgDataModel {
+ if msg == nil {
+ return nil
+ }
+ var msgDataModel model.MsgDataModel
+ msgDataModel.SendID = msg.SendID
+ msgDataModel.RecvID = msg.RecvID
+ msgDataModel.GroupID = msg.GroupID
+ msgDataModel.ClientMsgID = msg.ClientMsgID
+ msgDataModel.ServerMsgID = msg.ServerMsgID
+ msgDataModel.SenderPlatformID = msg.SenderPlatformID
+ msgDataModel.SenderNickname = msg.SenderNickname
+ msgDataModel.SenderFaceURL = msg.SenderFaceURL
+ msgDataModel.SessionType = msg.SessionType
+ msgDataModel.MsgFrom = msg.MsgFrom
+ msgDataModel.ContentType = msg.ContentType
+ msgDataModel.Content = string(msg.Content)
+ msgDataModel.Seq = msg.Seq
+ msgDataModel.SendTime = msg.SendTime
+ msgDataModel.CreateTime = msg.CreateTime
+ msgDataModel.Status = msg.Status
+ msgDataModel.Options = msg.Options
+ if msg.OfflinePushInfo != nil {
+ msgDataModel.OfflinePush = &model.OfflinePushModel{
+ Title: msg.OfflinePushInfo.Title,
+ Desc: msg.OfflinePushInfo.Desc,
+ Ex: msg.OfflinePushInfo.Ex,
+ IOSPushSound: msg.OfflinePushInfo.IOSPushSound,
+ IOSBadgeCount: msg.OfflinePushInfo.IOSBadgeCount,
+ }
+ }
+ msgDataModel.AtUserIDList = msg.AtUserIDList
+ msgDataModel.AttachedInfo = msg.AttachedInfo
+ msgDataModel.Ex = msg.Ex
+ return &msgDataModel
+}
+
+func MsgDB2Pb(msgModel *model.MsgDataModel) *sdkws.MsgData {
+ if msgModel == nil {
+ return nil
+ }
+ var msg sdkws.MsgData
+ msg.SendID = msgModel.SendID
+ msg.RecvID = msgModel.RecvID
+ msg.GroupID = msgModel.GroupID
+ msg.ClientMsgID = msgModel.ClientMsgID
+ msg.ServerMsgID = msgModel.ServerMsgID
+ msg.SenderPlatformID = msgModel.SenderPlatformID
+ msg.SenderNickname = msgModel.SenderNickname
+ msg.SenderFaceURL = msgModel.SenderFaceURL
+ msg.SessionType = msgModel.SessionType
+ msg.MsgFrom = msgModel.MsgFrom
+ msg.ContentType = msgModel.ContentType
+ msg.Content = []byte(msgModel.Content)
+ msg.Seq = msgModel.Seq
+ msg.SendTime = msgModel.SendTime
+ msg.CreateTime = msgModel.CreateTime
+ msg.Status = msgModel.Status
+ if msgModel.SessionType == constant.SingleChatType {
+ msg.IsRead = msgModel.IsRead
+ }
+ msg.Options = msgModel.Options
+ if msgModel.OfflinePush != nil {
+ msg.OfflinePushInfo = &sdkws.OfflinePushInfo{
+ Title: msgModel.OfflinePush.Title,
+ Desc: msgModel.OfflinePush.Desc,
+ Ex: msgModel.OfflinePush.Ex,
+ IOSPushSound: msgModel.OfflinePush.IOSPushSound,
+ IOSBadgeCount: msgModel.OfflinePush.IOSBadgeCount,
+ }
+ }
+ msg.AtUserIDList = msgModel.AtUserIDList
+ msg.AttachedInfo = msgModel.AttachedInfo
+ msg.Ex = msgModel.Ex
+ return &msg
+}
diff --git a/pkg/common/convert/user.go b/pkg/common/convert/user.go
new file mode 100644
index 0000000..18dd1ee
--- /dev/null
+++ b/pkg/common/convert/user.go
@@ -0,0 +1,109 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "time"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+func UserDB2Pb(user *relationtb.User) *sdkws.UserInfo {
+ return &sdkws.UserInfo{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ CreateTime: user.CreateTime.UnixMilli(),
+ AppMangerLevel: user.AppMangerLevel,
+ GlobalRecvMsgOpt: user.GlobalRecvMsgOpt,
+ UserType: user.UserType,
+ UserFlag: user.UserFlag,
+ }
+}
+
+func UsersDB2Pb(users []*relationtb.User) []*sdkws.UserInfo {
+ return datautil.Slice(users, UserDB2Pb)
+}
+
+func UserPb2DB(user *sdkws.UserInfo) *relationtb.User {
+ return &relationtb.User{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ CreateTime: time.UnixMilli(user.CreateTime),
+ AppMangerLevel: user.AppMangerLevel,
+ GlobalRecvMsgOpt: user.GlobalRecvMsgOpt,
+ UserType: user.UserType,
+ UserFlag: user.UserFlag,
+ }
+}
+
+func UserPb2DBMap(user *sdkws.UserInfo) map[string]any {
+ if user == nil {
+ return nil
+ }
+ val := make(map[string]any)
+ fields := map[string]any{
+ "nickname": user.Nickname,
+ "face_url": user.FaceURL,
+ "ex": user.Ex,
+ "app_manager_level": user.AppMangerLevel,
+ "global_recv_msg_opt": user.GlobalRecvMsgOpt,
+ "user_type": user.UserType,
+ "user_flag": user.UserFlag,
+ }
+ for key, value := range fields {
+ if v, ok := value.(string); ok && v != "" {
+ val[key] = v
+ } else if v, ok := value.(int32); ok {
+			// For int32 fields, write the value even when it is 0, since 0 is meaningful.
+ val[key] = v
+ }
+ }
+ return val
+}
+
+func UserPb2DBMapEx(user *sdkws.UserInfoWithEx) map[string]any {
+ if user == nil {
+ return nil
+ }
+ val := make(map[string]any)
+
+ // Map fields from UserInfoWithEx to val
+ if user.Nickname != nil {
+ val["nickname"] = user.Nickname.Value
+ }
+ if user.FaceURL != nil {
+ val["face_url"] = user.FaceURL.Value
+ }
+ if user.Ex != nil {
+ val["ex"] = user.Ex.Value
+ }
+ if user.GlobalRecvMsgOpt != nil {
+ val["global_recv_msg_opt"] = user.GlobalRecvMsgOpt.Value
+ }
+ if user.UserType != nil {
+ val["user_type"] = user.UserType.Value
+ }
+ if user.UserFlag != nil {
+ val["user_flag"] = user.UserFlag.Value
+ }
+
+ return val
+}
diff --git a/pkg/common/convert/user_test.go b/pkg/common/convert/user_test.go
new file mode 100644
index 0000000..7dda72a
--- /dev/null
+++ b/pkg/common/convert/user_test.go
@@ -0,0 +1,87 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "reflect"
+ "testing"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+func TestUsersDB2Pb(t *testing.T) {
+ type args struct {
+ users []*relationtb.User
+ }
+ tests := []struct {
+ name string
+ args args
+ wantResult []*sdkws.UserInfo
+ }{
+ // TODO: Add test cases.
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if gotResult := UsersDB2Pb(tt.args.users); !reflect.DeepEqual(gotResult, tt.wantResult) {
+ t.Errorf("UsersDB2Pb() = %v, want %v", gotResult, tt.wantResult)
+ }
+ })
+ }
+}
+
+func TestUserPb2DB(t *testing.T) {
+ type args struct {
+ user *sdkws.UserInfo
+ }
+ tests := []struct {
+ name string
+ args args
+ want *relationtb.User
+ }{
+ // TODO: Add test cases.
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if got := UserPb2DB(tt.args.user); !reflect.DeepEqual(got, tt.want) {
+ t.Errorf("UserPb2DB() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+func TestUserPb2DBMap(t *testing.T) {
+ user := &sdkws.UserInfo{
+ Nickname: "TestUser",
+ FaceURL: "http://openim.io/logo.jpg",
+ Ex: "Extra Data",
+ AppMangerLevel: 1,
+ GlobalRecvMsgOpt: 2,
+ }
+
+ expected := map[string]any{
+ "nickname": "TestUser",
+ "face_url": "http://openim.io/logo.jpg",
+ "ex": "Extra Data",
+ "app_manager_level": int32(1),
+ "global_recv_msg_opt": int32(2),
+ }
+
+ result := UserPb2DBMap(user)
+ if !reflect.DeepEqual(result, expected) {
+ t.Errorf("UserPb2DBMap returned unexpected map. Got %v, want %v", result, expected)
+ }
+}
diff --git a/pkg/common/discovery/direct/direct_resolver.go b/pkg/common/discovery/direct/direct_resolver.go
new file mode 100644
index 0000000..8213782
--- /dev/null
+++ b/pkg/common/discovery/direct/direct_resolver.go
@@ -0,0 +1,96 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package direct
+
+import (
+ "context"
+ "math/rand"
+ "strings"
+
+ "github.com/openimsdk/tools/log"
+ "google.golang.org/grpc/resolver"
+)
+
+const (
+ slashSeparator = "/"
+ // EndpointSepChar is the separator char in endpoints.
+ EndpointSepChar = ','
+
+ subsetSize = 32
+ scheme = "direct"
+)
+
+type ResolverDirect struct {
+}
+
+func NewResolverDirect() *ResolverDirect {
+ return &ResolverDirect{}
+}
+
+func (rd *ResolverDirect) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (
+ resolver.Resolver, error) {
+ log.ZDebug(context.Background(), "Build", "target", target)
+ endpoints := strings.FieldsFunc(GetEndpoints(target), func(r rune) bool {
+ return r == EndpointSepChar
+ })
+ endpoints = subset(endpoints, subsetSize)
+ addrs := make([]resolver.Address, 0, len(endpoints))
+
+ for _, val := range endpoints {
+ addrs = append(addrs, resolver.Address{
+ Addr: val,
+ })
+ }
+ if err := cc.UpdateState(resolver.State{
+ Addresses: addrs,
+ }); err != nil {
+ return nil, err
+ }
+
+ return &nopResolver{cc: cc}, nil
+}
+func init() {
+	resolver.Register(&ResolverDirect{})
+}
+
+func (rd *ResolverDirect) Scheme() string {
+	return scheme
+}
+
+// GetEndpoints returns the endpoints from the given target.
+func GetEndpoints(target resolver.Target) string {
+ return strings.Trim(target.URL.Path, slashSeparator)
+}
+
+// subset shuffles the endpoints in place and returns at most sub of them.
+func subset(set []string, sub int) []string {
+ rand.Shuffle(len(set), func(i, j int) {
+ set[i], set[j] = set[j], set[i]
+ })
+ if len(set) <= sub {
+ return set
+ }
+
+ return set[:sub]
+}
+
+type nopResolver struct {
+	cc resolver.ClientConn
+}
+
+func (nopResolver) ResolveNow(resolver.ResolveNowOptions) {}
+
+func (nopResolver) Close() {}
diff --git a/pkg/common/discovery/direct/directconn.go b/pkg/common/discovery/direct/directconn.go
new file mode 100644
index 0000000..77b95c4
--- /dev/null
+++ b/pkg/common/discovery/direct/directconn.go
@@ -0,0 +1,174 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package direct
+
+//import (
+// "context"
+// "fmt"
+//
+// config2 "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+// "github.com/openimsdk/tools/errs"
+// "google.golang.org/grpc"
+// "google.golang.org/grpc/credentials/insecure"
+//)
+//
+//type ServiceAddresses map[string][]int
+//
+//func getServiceAddresses(rpcRegisterName *config2.RpcRegisterName,
+// rpcPort *config2.RpcPort, longConnSvrPort []int) ServiceAddresses {
+// return ServiceAddresses{
+// rpcRegisterName.OpenImUserName: rpcPort.OpenImUserPort,
+// rpcRegisterName.OpenImFriendName: rpcPort.OpenImFriendPort,
+// rpcRegisterName.OpenImMsgName: rpcPort.OpenImMessagePort,
+// rpcRegisterName.OpenImMessageGatewayName: longConnSvrPort,
+// rpcRegisterName.OpenImGroupName: rpcPort.OpenImGroupPort,
+// rpcRegisterName.OpenImAuthName: rpcPort.OpenImAuthPort,
+// rpcRegisterName.OpenImPushName: rpcPort.OpenImPushPort,
+// rpcRegisterName.OpenImConversationName: rpcPort.OpenImConversationPort,
+// rpcRegisterName.OpenImThirdName: rpcPort.OpenImThirdPort,
+// }
+//}
+//
+//type ConnDirect struct {
+// additionalOpts []grpc.DialOption
+// currentServiceAddress string
+// conns map[string][]*grpc.ClientConn
+// resolverDirect *ResolverDirect
+// config *config2.GlobalConfig
+//}
+//
+//func (cd *ConnDirect) GetClientLocalConns() map[string][]*grpc.ClientConn {
+// return nil
+//}
+//
+//func (cd *ConnDirect) GetUserIdHashGatewayHost(ctx context.Context, userId string) (string, error) {
+// return "", nil
+//}
+//
+//func (cd *ConnDirect) Register(serviceName, host string, port int, opts ...grpc.DialOption) error {
+// return nil
+//}
+//
+//func (cd *ConnDirect) UnRegister() error {
+// return nil
+//}
+//
+//func (cd *ConnDirect) CreateRpcRootNodes(serviceNames []string) error {
+// return nil
+//}
+//
+//func (cd *ConnDirect) RegisterConf2Registry(key string, conf []byte) error {
+// return nil
+//}
+//
+//func (cd *ConnDirect) GetConfFromRegistry(key string) ([]byte, error) {
+// return nil, nil
+//}
+//
+//func (cd *ConnDirect) Close() {
+//
+//}
+//
+//func NewConnDirect(config *config2.GlobalConfig) (*ConnDirect, error) {
+// return &ConnDirect{
+// conns: make(map[string][]*grpc.ClientConn),
+// resolverDirect: NewResolverDirect(),
+// config: config,
+// }, nil
+//}
+//
+//func (cd *ConnDirect) GetConns(ctx context.Context,
+// serviceName string, opts ...grpc.DialOption) ([]*grpc.ClientConn, error) {
+//
+// if conns, exists := cd.conns[serviceName]; exists {
+// return conns, nil
+// }
+// ports := getServiceAddresses(&cd.config.RpcRegisterName,
+// &cd.config.RpcPort, cd.config.LongConnSvr.OpenImMessageGatewayPort)[serviceName]
+// var connections []*grpc.ClientConn
+// for _, port := range ports {
+// conn, err := cd.dialServiceWithoutResolver(ctx, fmt.Sprintf(cd.config.Rpc.ListenIP+":%d", port), append(cd.additionalOpts, opts...)...)
+// if err != nil {
+// return nil, errs.Wrap(fmt.Errorf("connect to port %d failed,serviceName %s, IP %s", port, serviceName, cd.config.Rpc.ListenIP))
+// }
+// connections = append(connections, conn)
+// }
+//
+// if len(connections) == 0 {
+// return nil, errs.New("no connections found for service", "serviceName", serviceName).Wrap()
+// }
+// return connections, nil
+//}
+//
+//func (cd *ConnDirect) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+// // Get service addresses
+// addresses := getServiceAddresses(&cd.config.RpcRegisterName,
+// &cd.config.RpcPort, cd.config.LongConnSvr.OpenImMessageGatewayPort)
+// address, ok := addresses[serviceName]
+// if !ok {
+// return nil, errs.New("unknown service name", "serviceName", serviceName).Wrap()
+// }
+// var result string
+// for _, addr := range address {
+// if result != "" {
+// result = result + "," + fmt.Sprintf(cd.config.Rpc.ListenIP+":%d", addr)
+// } else {
+// result = fmt.Sprintf(cd.config.Rpc.ListenIP+":%d", addr)
+// }
+// }
+// // Try to dial a new connection
+// conn, err := cd.dialService(ctx, result, append(cd.additionalOpts, opts...)...)
+// if err != nil {
+// return nil, errs.WrapMsg(err, "address", result)
+// }
+//
+// // Store the new connection
+// cd.conns[serviceName] = append(cd.conns[serviceName], conn)
+// return conn, nil
+//}
+//
+//func (cd *ConnDirect) GetSelfConnTarget() string {
+// return cd.currentServiceAddress
+//}
+//
+//func (cd *ConnDirect) AddOption(opts ...grpc.DialOption) {
+// cd.additionalOpts = append(cd.additionalOpts, opts...)
+//}
+//
+//func (cd *ConnDirect) CloseConn(conn *grpc.ClientConn) {
+// if conn != nil {
+// conn.Close()
+// }
+//}
+//
+//func (cd *ConnDirect) dialService(ctx context.Context, address string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+// options := append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
+// conn, err := grpc.DialContext(ctx, cd.resolverDirect.Scheme()+":///"+address, options...)
+//
+// if err != nil {
+// return nil, errs.WrapMsg(err, "address", address)
+// }
+// return conn, nil
+//}
+//
+//func (cd *ConnDirect) dialServiceWithoutResolver(ctx context.Context, address string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+// options := append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
+// conn, err := grpc.DialContext(ctx, address, options...)
+//
+// if err != nil {
+// return nil, errs.Wrap(err)
+// }
+// return conn, nil
+//}
diff --git a/pkg/common/discovery/direct/doc.go b/pkg/common/discovery/direct/doc.go
new file mode 100644
index 0000000..76533d5
--- /dev/null
+++ b/pkg/common/discovery/direct/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package direct // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/direct"
diff --git a/pkg/common/discovery/discoveryregister.go b/pkg/common/discovery/discoveryregister.go
new file mode 100644
index 0000000..20e47d9
--- /dev/null
+++ b/pkg/common/discovery/discoveryregister.go
@@ -0,0 +1,55 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package discovery
+
+import (
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/kubernetes"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/discovery/etcd"
+ "github.com/openimsdk/tools/discovery/standalone"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "google.golang.org/grpc"
+)
+
+// NewDiscoveryRegister creates a new service discovery and registry client based on the provided environment type.
+func NewDiscoveryRegister(confDiscovery *config.Discovery, watchNames []string) (discovery.SvcDiscoveryRegistry, error) {
+ if config.Standalone() {
+ return standalone.GetSvcDiscoveryRegistry(), nil
+ }
+ if runtimeenv.RuntimeEnvironment() == config.KUBERNETES {
+ return kubernetes.NewKubernetesConnManager(confDiscovery.Kubernetes.Namespace, watchNames,
+ grpc.WithDefaultCallOptions(
+ grpc.MaxCallSendMsgSize(1024*1024*20),
+ ),
+ )
+ }
+
+ switch confDiscovery.Enable {
+ case config.ETCD:
+ return etcd.NewSvcDiscoveryRegistry(
+ confDiscovery.Etcd.RootDirectory,
+ confDiscovery.Etcd.Address,
+ watchNames,
+ etcd.WithDialTimeout(10*time.Second),
+ etcd.WithMaxCallSendMsgSize(20*1024*1024),
+ etcd.WithUsernameAndPassword(confDiscovery.Etcd.Username, confDiscovery.Etcd.Password))
+ default:
+ return nil, errs.New("unsupported discovery type", "type", confDiscovery.Enable).Wrap()
+ }
+}
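`NewDiscoveryRegister` above dispatches in a fixed order: standalone mode wins, then a Kubernetes runtime environment, then the configured backend (currently only etcd). A minimal, self-contained sketch of that dispatch order — the type and field names below are illustrative stand-ins, not the real `config` package:

```go
package main

import (
	"errors"
	"fmt"
)

// discoveryConf is a stand-in for config.Discovery plus the helpers
// NewDiscoveryRegister consults; the field names are illustrative only.
type discoveryConf struct {
	standalone bool   // stands in for config.Standalone()
	runtimeEnv string // stands in for runtimeenv.RuntimeEnvironment()
	enable     string // stands in for confDiscovery.Enable
}

// pickBackend mirrors the dispatch order of NewDiscoveryRegister:
// standalone first, then the Kubernetes runtime env, then the configured backend.
func pickBackend(c discoveryConf) (string, error) {
	if c.standalone {
		return "standalone", nil
	}
	if c.runtimeEnv == "kubernetes" {
		return "kubernetes", nil
	}
	switch c.enable {
	case "etcd":
		return "etcd", nil
	default:
		return "", errors.New("unsupported discovery type: " + c.enable)
	}
}

func main() {
	fmt.Println(pickBackend(discoveryConf{runtimeEnv: "kubernetes"}))
	fmt.Println(pickBackend(discoveryConf{enable: "etcd"}))
	fmt.Println(pickBackend(discoveryConf{enable: "zookeeper"}))
}
```

Note that the runtime-environment check happens before `confDiscovery.Enable` is even consulted, so an etcd setting is ignored inside a cluster.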
diff --git a/pkg/common/discovery/discoveryregister_test.go b/pkg/common/discovery/discoveryregister_test.go
new file mode 100644
index 0000000..63f7e94
--- /dev/null
+++ b/pkg/common/discovery/discoveryregister_test.go
@@ -0,0 +1,60 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package discovery
+
+import (
+ "os"
+)
+
+func setupTestEnvironment() {
+ os.Setenv("ZOOKEEPER_SCHEMA", "openim")
+ os.Setenv("ZOOKEEPER_ADDRESS", "172.28.0.1")
+ os.Setenv("ZOOKEEPER_PORT", "12181")
+ os.Setenv("ZOOKEEPER_USERNAME", "")
+ os.Setenv("ZOOKEEPER_PASSWORD", "")
+}
+
+//func TestNewDiscoveryRegister(t *testing.T) {
+// setupTestEnvironment()
+// conf := config.NewGlobalConfig()
+// tests := []struct {
+// envType string
+// gatewayName string
+// expectedError bool
+// expectedResult bool
+// }{
+// {"zookeeper", "MessageGateway", false, true},
+// {"k8s", "MessageGateway", false, true},
+// {"direct", "MessageGateway", false, true},
+// {"invalid", "MessageGateway", true, false},
+// }
+//
+// for _, test := range tests {
+// conf.Envs.Discovery = test.envType
+// conf.RpcRegisterName.OpenImMessageGatewayName = test.gatewayName
+// client, err := NewDiscoveryRegister(conf)
+//
+// if test.expectedError {
+// assert.Error(t, err)
+// } else {
+// assert.NoError(t, err)
+// if test.expectedResult {
+// assert.Implements(t, (*discovery.SvcDiscoveryRegistry)(nil), client)
+// } else {
+// assert.Nil(t, client)
+// }
+// }
+// }
+//}
diff --git a/pkg/common/discovery/doc.go b/pkg/common/discovery/doc.go
new file mode 100644
index 0000000..1899f95
--- /dev/null
+++ b/pkg/common/discovery/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package discovery // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery"
diff --git a/pkg/common/discovery/etcd/config_manager.go b/pkg/common/discovery/etcd/config_manager.go
new file mode 100644
index 0000000..70d37c3
--- /dev/null
+++ b/pkg/common/discovery/etcd/config_manager.go
@@ -0,0 +1,106 @@
+package etcd
+
+import (
+ "context"
+ "os"
+ "os/exec"
+ "runtime"
+ "sync"
+ "syscall"
+
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ clientv3 "go.etcd.io/etcd/client/v3"
+)
+
+var (
+ ShutDowns []func() error
+)
+
+func RegisterShutDown(shutDown ...func() error) {
+ ShutDowns = append(ShutDowns, shutDown...)
+}
+
+type ConfigManager struct {
+ client *clientv3.Client
+ watchConfigNames []string
+ lock sync.Mutex
+}
+
+func BuildKey(s string) string {
+ return ConfigKeyPrefix + s
+}
+
+func NewConfigManager(client *clientv3.Client, configNames []string) *ConfigManager {
+ return &ConfigManager{
+ client: client,
+ watchConfigNames: datautil.Batch(func(s string) string { return BuildKey(s) }, append(configNames, RestartKey))}
+}
+
+func (c *ConfigManager) Watch(ctx context.Context) {
+ chans := make([]clientv3.WatchChan, 0, len(c.watchConfigNames))
+ for _, name := range c.watchConfigNames {
+ chans = append(chans, c.client.Watch(ctx, name, clientv3.WithPrefix()))
+ }
+
+ doWatch := func(watchChan clientv3.WatchChan) {
+ for watchResp := range watchChan {
+ if watchResp.Err() != nil {
+ log.ZError(ctx, "watch err", errs.Wrap(watchResp.Err()))
+ continue
+ }
+ for _, event := range watchResp.Events {
+ if event.IsModify() {
+ if datautil.Contain(string(event.Kv.Key), c.watchConfigNames...) {
+ c.lock.Lock()
+ err := restartServer(ctx)
+ if err != nil {
+ log.ZError(ctx, "restart server err", err)
+ }
+ c.lock.Unlock()
+ }
+ }
+ }
+ }
+ }
+ for _, ch := range chans {
+ go doWatch(ch)
+ }
+}
+
+func restartServer(ctx context.Context) error {
+ exePath, err := os.Executable()
+ if err != nil {
+ return errs.New("get executable path fail").Wrap()
+ }
+
+ args := os.Args
+ env := os.Environ()
+
+ cmd := exec.Command(exePath, args[1:]...)
+ cmd.Env = env
+ cmd.Stdout = os.Stdout
+ cmd.Stderr = os.Stderr
+ cmd.Stdin = os.Stdin
+
+ if runtime.GOOS != "windows" {
+ cmd.SysProcAttr = &syscall.SysProcAttr{}
+ }
+ log.ZInfo(ctx, "shutdown server")
+ for _, f := range ShutDowns {
+ if err = f(); err != nil {
+ log.ZError(ctx, "shutdown fail", err)
+ }
+ }
+
+ log.ZInfo(ctx, "restart server")
+ err = cmd.Start()
+ if err != nil {
+ return errs.New("restart server fail").Wrap()
+ }
+ log.ZInfo(ctx, "cmd start over")
+
+ os.Exit(0)
+ return nil
+}
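`restartServer` above uses a self re-exec pattern: launch the current binary again with the same arguments, environment, and stdio, run the registered shutdown hooks, then exit the old process. A sketch of just the command construction (assumptions: we only build the command here and never start it, since starting it would spawn a second process):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// buildRestartCmd mirrors the setup in restartServer: re-exec the current
// executable with identical args, environment, and stdio. The command is
// only constructed, never started.
func buildRestartCmd() (*exec.Cmd, error) {
	exePath, err := os.Executable()
	if err != nil {
		return nil, err
	}
	cmd := exec.Command(exePath, os.Args[1:]...)
	cmd.Env = os.Environ()
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Stdin = os.Stdin
	return cmd, nil
}

func main() {
	cmd, err := buildRestartCmd()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// cmd.Path holds the resolved path to the current binary.
	fmt.Println(len(cmd.Path) > 0)
}
```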
diff --git a/pkg/common/discovery/etcd/const.go b/pkg/common/discovery/etcd/const.go
new file mode 100644
index 0000000..c9b00fc
--- /dev/null
+++ b/pkg/common/discovery/etcd/const.go
@@ -0,0 +1,9 @@
+package etcd
+
+const (
+ ConfigKeyPrefix = "/open-im/config/"
+ RestartKey = "restart"
+ EnableConfigCenterKey = "enable-config-center"
+ Enable = "enable"
+ Disable = "disable"
+)
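`NewConfigManager` above derives its watch list by appending the `RestartKey` sentinel to the config names and prefixing each with `ConfigKeyPrefix` via `BuildKey`. A standalone sketch of that key construction (constants mirrored from the etcd package above):

```go
package main

import "fmt"

// Mirrored from the etcd package constants above.
const (
	configKeyPrefix = "/open-im/config/"
	restartKey      = "restart"
)

// buildKey mirrors etcd.BuildKey.
func buildKey(name string) string { return configKeyPrefix + name }

// watchKeys mirrors how NewConfigManager assembles watchConfigNames:
// every config name plus the restart sentinel, each under the shared prefix.
func watchKeys(configNames []string) []string {
	names := append(append([]string{}, configNames...), restartKey)
	keys := make([]string, 0, len(names))
	for _, n := range names {
		keys = append(keys, buildKey(n))
	}
	return keys
}

func main() {
	fmt.Println(watchKeys([]string{"share.yml", "webhooks.yml"}))
}
```

Because the restart sentinel shares the prefix, a write to `/open-im/config/restart` trips the same watch path as a real config change.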
diff --git a/pkg/common/discovery/kubernetes/doc.go b/pkg/common/discovery/kubernetes/doc.go
new file mode 100644
index 0000000..a83fa40
--- /dev/null
+++ b/pkg/common/discovery/kubernetes/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package kubernetes // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery/kubernetes"
diff --git a/pkg/common/discovery/kubernetes/kubernetes.go b/pkg/common/discovery/kubernetes/kubernetes.go
new file mode 100644
index 0000000..d347eb8
--- /dev/null
+++ b/pkg/common/discovery/kubernetes/kubernetes.go
@@ -0,0 +1,640 @@
+package kubernetes
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/connectivity"
+ "google.golang.org/grpc/credentials/insecure"
+ "google.golang.org/grpc/keepalive"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/informers"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "k8s.io/client-go/tools/cache"
+)
+
+// addrConn holds a gRPC connection together with its target address, so existing connections can be reused.
+type addrConn struct {
+ conn *grpc.ClientConn
+ addr string
+	reused bool // marks whether this connection was carried over during the last refresh
+}
+
+type KubernetesConnManager struct {
+ clientset *kubernetes.Clientset
+ namespace string
+ dialOptions []grpc.DialOption
+
+ rpcTargets map[string]string
+ selfTarget string
+
+	// watchNames limits Endpoints watching to these services only
+ watchNames []string
+
+ mu sync.RWMutex
+ connMap map[string][]*addrConn
+}
+
+// NewKubernetesConnManager creates a new connection manager that uses Kubernetes services for service discovery.
+func NewKubernetesConnManager(namespace string, watchNames []string, options ...grpc.DialOption) (*KubernetesConnManager, error) {
+ ctx := context.Background()
+ log.ZInfo(ctx, "K8s Discovery: Initializing connection manager", "namespace", namespace, "watchNames", watchNames)
+
+	// Build the in-cluster Kubernetes configuration
+ config, err := rest.InClusterConfig()
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to create in-cluster config", err)
+ return nil, fmt.Errorf("failed to create in-cluster config: %v", err)
+ }
+ log.ZDebug(ctx, "K8s Discovery: Successfully created in-cluster config")
+
+	// Create the Kubernetes API client
+ clientset, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to create clientset", err)
+ return nil, fmt.Errorf("failed to create clientset: %v", err)
+ }
+ log.ZDebug(ctx, "K8s Discovery: Successfully created clientset")
+
+	// Initialize the connection manager
+ k := &KubernetesConnManager{
+ clientset: clientset,
+ namespace: namespace,
+ dialOptions: options,
+ connMap: make(map[string][]*addrConn),
+ rpcTargets: make(map[string]string),
+ watchNames: watchNames,
+ }
+
+	// Start a background goroutine that watches for Endpoints changes
+ log.ZInfo(ctx, "K8s Discovery: Starting Endpoints watcher")
+ go k.watchEndpoints()
+
+ log.ZInfo(ctx, "K8s Discovery: Connection manager initialized successfully")
+ return k, nil
+}
+
+// parseServiceName strips the port information from a service name.
+// For example: user-rpc-service:http-10320 -> user-rpc-service
+func parseServiceName(serviceName string) string {
+ if idx := strings.Index(serviceName, ":"); idx != -1 {
+ return serviceName[:idx]
+ }
+ return serviceName
+}
+
+// initializeConns initializes all gRPC connections for the given service, reusing existing connections where possible.
+func (k *KubernetesConnManager) initializeConns(serviceName string) error {
+ ctx := context.Background()
+ log.ZInfo(ctx, "K8s Discovery: Starting to initialize connections", "serviceName", serviceName)
+
+	// Step 1: look up the Service port
+ port, err := k.getServicePort(serviceName)
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get service port", err, "serviceName", serviceName)
+ return fmt.Errorf("failed to get service port: %w", err)
+ }
+ log.ZDebug(ctx, "K8s Discovery: Got service port", "serviceName", serviceName, "port", port)
+
+	// Step 2: snapshot the old connections and index them by address (for reuse)
+ k.mu.Lock()
+ oldList := k.connMap[serviceName]
+ addrMap := make(map[string]*addrConn, len(oldList))
+ for _, ac := range oldList {
+ addrMap[ac.addr] = ac
+		ac.reused = false // reset the reuse marker
+ }
+ k.mu.Unlock()
+
+ log.ZDebug(ctx, "K8s Discovery: Old connections snapshot", "serviceName", serviceName, "count", len(oldList))
+
+	// Step 3: fetch the Endpoints object for the Service
+ endpoints, err := k.clientset.CoreV1().Endpoints(k.namespace).Get(
+ ctx,
+ serviceName,
+ metav1.GetOptions{},
+ )
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get endpoints", err, "serviceName", serviceName)
+ return fmt.Errorf("failed to get endpoints for service %s: %w", serviceName, err)
+ }
+
+	// Count the endpoint addresses
+ var totalAddresses int
+ for _, subset := range endpoints.Subsets {
+ totalAddresses += len(subset.Addresses)
+ }
+ log.ZDebug(ctx, "K8s Discovery: Found endpoint addresses", "serviceName", serviceName, "count", totalAddresses)
+
+	// Step 4: create or reuse a gRPC connection for each Pod IP
+ var newList []*addrConn
+ var reusedCount, createdCount int
+
+ for _, subset := range endpoints.Subsets {
+ for _, address := range subset.Addresses {
+ target := fmt.Sprintf("%s:%d", address.IP, port)
+
+			// Check whether an old connection can be reused
+ if ac, ok := addrMap[target]; ok {
+				// Reuse the old connection
+ ac.reused = true
+ newList = append(newList, ac)
+ reusedCount++
+ log.ZDebug(ctx, "K8s Discovery: Reusing existing connection", "serviceName", serviceName, "target", target)
+ continue
+ }
+
+			// Create a new connection
+ log.ZDebug(ctx, "K8s Discovery: Creating new connection", "serviceName", serviceName, "target", target)
+ conn, err := grpc.Dial(
+ target,
+ append(k.dialOptions,
+ grpc.WithTransportCredentials(insecure.NewCredentials()),
+ grpc.WithKeepaliveParams(keepalive.ClientParameters{
+ Time: 10 * time.Second,
+ Timeout: 3 * time.Second,
+ PermitWithoutStream: true,
+ }),
+ )...,
+ )
+ if err != nil {
+ log.ZWarn(ctx, "K8s Discovery: Failed to dial endpoint, skipping", err, "serviceName", serviceName, "target", target)
+				// Skip endpoints we cannot reach instead of aborting the whole initialization
+ continue
+ }
+
+ state := conn.GetState()
+ log.ZDebug(ctx, "K8s Discovery: New connection created", "serviceName", serviceName, "target", target, "state", state.String())
+
+ newList = append(newList, &addrConn{conn: conn, addr: target, reused: false})
+ createdCount++
+ }
+ }
+
+ log.ZInfo(ctx, "K8s Discovery: Connection initialization summary", "serviceName", serviceName,
+ "total", len(newList), "reused", reusedCount, "created", createdCount)
+
+	// Step 5: collect the old connections that were not reused and must be closed
+ var connsToClose []*addrConn
+ for _, ac := range oldList {
+ if !ac.reused {
+ connsToClose = append(connsToClose, ac)
+ }
+ }
+
+	// Step 6: swap in the new connection list
+ k.mu.Lock()
+ k.connMap[serviceName] = newList
+ k.mu.Unlock()
+
+ log.ZDebug(ctx, "K8s Discovery: Connection map updated", "serviceName", serviceName,
+ "oldCount", len(oldList), "newCount", len(newList), "toClose", len(connsToClose))
+
+	// Step 7: close the unused old connections after a grace period
+ if len(connsToClose) > 0 {
+ log.ZInfo(ctx, "K8s Discovery: Scheduling delayed close for unused connections", "serviceName", serviceName, "count", len(connsToClose), "delaySeconds", 5)
+ go func() {
+ time.Sleep(5 * time.Second)
+ log.ZDebug(ctx, "K8s Discovery: Closing unused old connections", "serviceName", serviceName, "count", len(connsToClose))
+ closedCount := 0
+ for _, ac := range connsToClose {
+ if err := ac.conn.Close(); err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to close old connection", err, "serviceName", serviceName, "addr", ac.addr)
+ } else {
+ closedCount++
+ }
+ }
+ log.ZInfo(ctx, "K8s Discovery: Closed unused connections", "serviceName", serviceName, "closed", closedCount, "total", len(connsToClose))
+ }()
+ }
+
+ log.ZInfo(ctx, "K8s Discovery: Connection initialization completed", "serviceName", serviceName)
+ return nil
+}
+
+// GetConns returns gRPC client connections for a given Kubernetes service name.
+func (k *KubernetesConnManager) GetConns(ctx context.Context, serviceName string, opts ...grpc.DialOption) ([]grpc.ClientConnInterface, error) {
+	// Strip the port information from the service name
+ svcName := parseServiceName(serviceName)
+ log.ZDebug(ctx, "K8s Discovery: GetConns called", "serviceName", serviceName, "parsedName", svcName)
+
+	// Step 1: first cache lookup (read lock)
+ k.mu.RLock()
+ conns, exists := k.connMap[svcName]
+ k.mu.RUnlock()
+
+	// Step 2: if the cache has connections, check their health
+ if exists && len(conns) > 0 {
+ log.ZDebug(ctx, "K8s Discovery: Found connections in cache, checking health", "serviceName", svcName, "count", len(conns))
+
+		// Check connection health
+ validConns := k.filterValidConns(ctx, svcName, conns)
+
+		// If any connections are still valid, update the cache and return them
+ if len(validConns) > 0 {
+ if len(validConns) < len(conns) {
+ log.ZWarn(ctx, "K8s Discovery: Removed invalid connections", nil, "serviceName", svcName, "removed", len(conns)-len(validConns), "remaining", len(validConns))
+ k.mu.Lock()
+ k.connMap[svcName] = validConns
+ k.mu.Unlock()
+ } else {
+ log.ZDebug(ctx, "K8s Discovery: All connections are healthy", "serviceName", svcName, "count", len(validConns))
+ }
+			// Convert to the interface type
+ result := make([]grpc.ClientConnInterface, len(validConns))
+ for i, ac := range validConns {
+ result[i] = ac.conn
+ }
+ return result, nil
+ }
+
+		// All connections are invalid: drop the cache entry and reinitialize
+ log.ZWarn(ctx, "K8s Discovery: All connections are invalid, reinitializing", nil, "serviceName", svcName)
+ k.mu.Lock()
+ delete(k.connMap, svcName)
+ k.mu.Unlock()
+ } else {
+ log.ZDebug(ctx, "K8s Discovery: No connections in cache, initializing", "serviceName", svcName)
+ }
+
+	// Step 3: no cached connections, or all of them were invalid; reinitialize under the write lock
+ k.mu.Lock()
+ conns, exists = k.connMap[svcName]
+ if exists && len(conns) > 0 {
+ log.ZDebug(ctx, "K8s Discovery: Connections were initialized by another goroutine", "serviceName", svcName)
+ k.mu.Unlock()
+ result := make([]grpc.ClientConnInterface, len(conns))
+ for i, ac := range conns {
+ result[i] = ac.conn
+ }
+ return result, nil
+ }
+ k.mu.Unlock()
+
+	// Initialize new connections
+ log.ZDebug(ctx, "K8s Discovery: Initializing new connections", "serviceName", svcName)
+ if err := k.initializeConns(svcName); err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to initialize connections", err, "serviceName", svcName)
+ return nil, fmt.Errorf("failed to initialize connections for service %s: %w", svcName, err)
+ }
+
+	// Return the freshly initialized connections
+ k.mu.RLock()
+ conns = k.connMap[svcName]
+ k.mu.RUnlock()
+
+ log.ZDebug(ctx, "K8s Discovery: Returning connections", "serviceName", svcName, "count", len(conns))
+ result := make([]grpc.ClientConnInterface, len(conns))
+ for i, ac := range conns {
+ result[i] = ac.conn
+ }
+ return result, nil
+}
+
+// filterValidConns filters out connections that are no longer usable.
+func (k *KubernetesConnManager) filterValidConns(ctx context.Context, serviceName string, conns []*addrConn) []*addrConn {
+ validConns := make([]*addrConn, 0, len(conns))
+ invalidStates := make(map[string]int)
+
+ for _, ac := range conns {
+ state := ac.conn.GetState()
+
+		// Keep only connections in the Ready or Idle state
+ if state == connectivity.Ready || state == connectivity.Idle {
+ validConns = append(validConns, ac)
+ } else {
+ invalidStates[state.String()]++
+ log.ZDebug(ctx, "K8s Discovery: Connection is invalid, closing", "serviceName", serviceName, "addr", ac.addr, "state", state.String())
+ if err := ac.conn.Close(); err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to close invalid connection", err, "serviceName", serviceName, "addr", ac.addr)
+ }
+ }
+ }
+
+ if len(invalidStates) > 0 {
+ log.ZWarn(ctx, "K8s Discovery: Found invalid connections", nil, "serviceName", serviceName, "invalidStates", invalidStates)
+ }
+
+ return validConns
+}
+
+// GetConn returns a single gRPC client connection for a given Kubernetes service name.
+// Important: GetConn dials via DNS, which avoids the connection being forcibly closed by the manager.
+func (k *KubernetesConnManager) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (grpc.ClientConnInterface, error) {
+	// Strip the port information from the service name
+ svcName := parseServiceName(serviceName)
+ log.ZDebug(ctx, "K8s Discovery: GetConn called (using DNS)", "serviceName", serviceName, "parsedName", svcName)
+
+ var target string
+
+	// Check whether a custom target is configured
+ if k.rpcTargets[svcName] == "" {
+		// Look up the Service port
+ svcPort, err := k.getServicePort(svcName)
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get service port", err, "serviceName", svcName)
+ return nil, err
+ }
+
+		// Build the Kubernetes cluster DNS name
+ target = fmt.Sprintf("%s.%s.svc.cluster.local:%d", svcName, k.namespace, svcPort)
+ log.ZDebug(ctx, "K8s Discovery: Using DNS target", "serviceName", svcName, "target", target)
+ } else {
+ target = k.rpcTargets[svcName]
+ log.ZDebug(ctx, "K8s Discovery: Using custom target", "serviceName", svcName, "target", target)
+ }
+
+	// Create the gRPC connection
+ log.ZDebug(ctx, "K8s Discovery: Dialing DNS target", "serviceName", svcName, "target", target)
+ conn, err := grpc.DialContext(
+ ctx,
+ target,
+ append([]grpc.DialOption{
+ grpc.WithTransportCredentials(insecure.NewCredentials()),
+ grpc.WithDefaultCallOptions(
+ grpc.MaxCallRecvMsgSize(1024*1024*10),
+ grpc.MaxCallSendMsgSize(1024*1024*20),
+ ),
+ grpc.WithKeepaliveParams(keepalive.ClientParameters{
+ Time: 10 * time.Second,
+ Timeout: 3 * time.Second,
+ PermitWithoutStream: true,
+ }),
+ }, k.dialOptions...)...,
+ )
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to dial DNS target", err, "serviceName", svcName, "target", target)
+ return nil, err
+ }
+
+ log.ZDebug(ctx, "K8s Discovery: GetConn completed successfully", "serviceName", svcName)
+ return conn, nil
+}
+
+// IsSelfNode checks if the given connection is to the current node
+func (k *KubernetesConnManager) IsSelfNode(cc grpc.ClientConnInterface) bool {
+ return false
+}
+
+// watchEndpoints watches for changes to Endpoints resources.
+func (k *KubernetesConnManager) watchEndpoints() {
+ ctx := context.Background()
+ log.ZInfo(ctx, "K8s Discovery: Starting Endpoints watcher")
+
+ informerFactory := informers.NewSharedInformerFactoryWithOptions(k.clientset, time.Minute*10,
+ informers.WithNamespace(k.namespace))
+ informer := informerFactory.Core().V1().Endpoints().Informer()
+ log.ZDebug(ctx, "K8s Discovery: Endpoints Informer created")
+
+ informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: func(obj interface{}) {
+ k.handleEndpointChange(obj)
+ },
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ oldEndpoint := oldObj.(*v1.Endpoints)
+ newEndpoint := newObj.(*v1.Endpoints)
+
+ if k.endpointsChanged(oldEndpoint, newEndpoint) {
+ k.handleEndpointChange(newObj)
+ }
+ },
+ DeleteFunc: func(obj interface{}) {
+ k.handleEndpointChange(obj)
+ },
+ })
+
+ log.ZDebug(ctx, "K8s Discovery: Starting Informer factory")
+ informerFactory.Start(ctx.Done())
+
+ log.ZDebug(ctx, "K8s Discovery: Waiting for Informer cache to sync")
+ if !cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) {
+ log.ZError(ctx, "K8s Discovery: Failed to sync Informer cache", nil)
+ return
+ }
+ log.ZInfo(ctx, "K8s Discovery: Informer cache synced successfully")
+
+ log.ZInfo(ctx, "K8s Discovery: Endpoints watcher is running")
+ <-ctx.Done()
+ log.ZInfo(ctx, "K8s Discovery: Endpoints watcher stopped")
+}
+
+// endpointsChanged reports whether the Endpoints actually changed.
+func (k *KubernetesConnManager) endpointsChanged(old, new *v1.Endpoints) bool {
+ oldAddresses := make(map[string]bool)
+ for _, subset := range old.Subsets {
+ for _, address := range subset.Addresses {
+ oldAddresses[address.IP] = true
+ }
+ }
+
+ newAddresses := make(map[string]bool)
+ for _, subset := range new.Subsets {
+ for _, address := range subset.Addresses {
+ newAddresses[address.IP] = true
+ }
+ }
+
+ if len(oldAddresses) != len(newAddresses) {
+ return true
+ }
+
+ for ip := range oldAddresses {
+ if !newAddresses[ip] {
+ return true
+ }
+ }
+
+ return false
+}
+
+// handleEndpointChange handles a change to an Endpoints resource.
+func (k *KubernetesConnManager) handleEndpointChange(obj interface{}) {
+ ctx := context.Background()
+
+ endpoint, ok := obj.(*v1.Endpoints)
+ if !ok {
+ log.ZError(ctx, "K8s Discovery: Expected *v1.Endpoints", nil, "actualType", fmt.Sprintf("%T", obj))
+ return
+ }
+
+ serviceName := endpoint.Name
+
+	// Only handle services listed in watchNames
+ if len(k.watchNames) > 0 && !datautil.Contain(serviceName, k.watchNames...) {
+ log.ZDebug(ctx, "K8s Discovery: Ignoring Endpoints change (not in watchNames)", "serviceName", serviceName)
+ return
+ }
+
+ log.ZInfo(ctx, "K8s Discovery: Handling Endpoints change", "serviceName", serviceName)
+
+ var totalAddresses int
+ for _, subset := range endpoint.Subsets {
+ totalAddresses += len(subset.Addresses)
+ }
+ log.ZDebug(ctx, "K8s Discovery: Endpoint addresses count", "serviceName", serviceName, "count", totalAddresses)
+
+ if err := k.initializeConns(serviceName); err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to initialize connections", err, "serviceName", serviceName)
+ } else {
+ log.ZInfo(ctx, "K8s Discovery: Successfully updated connections", "serviceName", serviceName)
+ }
+}
+
+// getServicePort returns the RPC port of the Service.
+func (k *KubernetesConnManager) getServicePort(serviceName string) (int32, error) {
+ ctx := context.Background()
+ log.ZDebug(ctx, "K8s Discovery: Getting service port", "serviceName", serviceName)
+
+ svc, err := k.clientset.CoreV1().Services(k.namespace).Get(
+ ctx,
+ serviceName,
+ metav1.GetOptions{},
+ )
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get service", err, "serviceName", serviceName, "namespace", k.namespace)
+ return 0, fmt.Errorf("failed to get service %s: %w", serviceName, err)
+ }
+
+ if len(svc.Spec.Ports) == 0 {
+ log.ZError(ctx, "K8s Discovery: Service has no ports defined", nil, "serviceName", serviceName)
+ return 0, fmt.Errorf("service %s has no ports defined", serviceName)
+ }
+
+ var svcPort int32
+ for _, port := range svc.Spec.Ports {
+ if port.Port != 10001 {
+ svcPort = port.Port
+ break
+ }
+ }
+
+ if svcPort == 0 {
+ log.ZError(ctx, "K8s Discovery: Service has no RPC port", nil, "serviceName", serviceName)
+ return 0, fmt.Errorf("service %s has no RPC port (all ports are 10001)", serviceName)
+ }
+
+ log.ZDebug(ctx, "K8s Discovery: Got service port", "serviceName", serviceName, "port", svcPort)
+ return svcPort, nil
+}
+
+// Close closes all managed connections.
+func (k *KubernetesConnManager) Close() {
+ ctx := context.Background()
+ log.ZInfo(ctx, "K8s Discovery: Closing all connections")
+
+ k.mu.Lock()
+ defer k.mu.Unlock()
+
+ totalConns := 0
+ for serviceName, conns := range k.connMap {
+ log.ZDebug(ctx, "K8s Discovery: Closing connections for service", "serviceName", serviceName, "count", len(conns))
+ for _, ac := range conns {
+ if err := ac.conn.Close(); err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to close connection", err, "serviceName", serviceName, "addr", ac.addr)
+ }
+ }
+ totalConns += len(conns)
+ }
+
+ log.ZInfo(ctx, "K8s Discovery: Closed all connections", "totalCount", totalConns)
+ k.connMap = make(map[string][]*addrConn)
+}
+
+// GetSelfConnTarget returns the connection target for the current service.
+func (k *KubernetesConnManager) GetSelfConnTarget() string {
+ ctx := context.Background()
+
+ if k.selfTarget == "" {
+ hostName := os.Getenv("HOSTNAME")
+ log.ZDebug(ctx, "K8s Discovery: Getting self connection target", "hostname", hostName)
+
+ pod, err := k.clientset.CoreV1().Pods(k.namespace).Get(ctx, hostName, metav1.GetOptions{})
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get pod", err, "hostname", hostName)
+ }
+
+ for pod.Status.PodIP == "" {
+ log.ZDebug(ctx, "K8s Discovery: Waiting for pod IP to be assigned", "hostname", hostName)
+ pod, err = k.clientset.CoreV1().Pods(k.namespace).Get(context.TODO(), hostName, metav1.GetOptions{})
+ if err != nil {
+ log.ZError(ctx, "K8s Discovery: Failed to get pod", err)
+ }
+ time.Sleep(3 * time.Second)
+ }
+
+ var selfPort int32
+ for _, port := range pod.Spec.Containers[0].Ports {
+ if port.ContainerPort != 10001 {
+ selfPort = port.ContainerPort
+ break
+ }
+ }
+
+ k.selfTarget = fmt.Sprintf("%s:%d", pod.Status.PodIP, selfPort)
+ log.ZInfo(ctx, "K8s Discovery: Self connection target", "target", k.selfTarget)
+ }
+
+ return k.selfTarget
+}
+
+// AddOption appends gRPC dial options to the existing options.
+func (k *KubernetesConnManager) AddOption(opts ...grpc.DialOption) {
+ k.mu.Lock()
+ defer k.mu.Unlock()
+ k.dialOptions = append(k.dialOptions, opts...)
+ log.ZDebug(context.Background(), "K8s Discovery: Added dial options", "count", len(opts))
+}
+
+// CloseConn closes a given gRPC client connection.
+func (k *KubernetesConnManager) CloseConn(conn *grpc.ClientConn) {
+ log.ZDebug(context.Background(), "K8s Discovery: Closing single connection")
+ conn.Close()
+}
+
+func (k *KubernetesConnManager) Register(ctx context.Context, serviceName, host string, port int, opts ...grpc.DialOption) error {
+	// Registration is unnecessary in a Kubernetes environment; return nil
+ return nil
+}
+
+func (k *KubernetesConnManager) UnRegister() error {
+	// Unregistration is unnecessary in a Kubernetes environment; return nil
+ return nil
+}
+
+func (k *KubernetesConnManager) GetUserIdHashGatewayHost(ctx context.Context, userId string) (string, error) {
+	// Not supported in a Kubernetes environment; return an empty string
+ return "", nil
+}
+
+// KeyValue interface methods - not supported in a Kubernetes environment; they return discovery.ErrNotSupported
+// so callers can detect and ignore this via errors.Is(err, discovery.ErrNotSupported).
+
+func (k *KubernetesConnManager) SetKey(ctx context.Context, key string, value []byte) error {
+ return discovery.ErrNotSupported
+}
+
+func (k *KubernetesConnManager) SetWithLease(ctx context.Context, key string, val []byte, ttl int64) error {
+ return discovery.ErrNotSupported
+}
+
+func (k *KubernetesConnManager) GetKey(ctx context.Context, key string) ([]byte, error) {
+ return nil, discovery.ErrNotSupported
+}
+
+func (k *KubernetesConnManager) GetKeyWithPrefix(ctx context.Context, key string) ([][]byte, error) {
+ return nil, discovery.ErrNotSupported
+}
+
+func (k *KubernetesConnManager) WatchKey(ctx context.Context, key string, fn discovery.WatchKeyHandler) error {
+ return discovery.ErrNotSupported
+}
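Two small pure helpers above carry much of the watcher's decision logic: `parseServiceName` strips a port suffix, and `endpointsChanged` treats two Endpoints snapshots as changed iff their IP sets differ. Extracted as a standalone sketch (helper names here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseService mirrors parseServiceName: drop everything from the first colon.
func parseService(name string) string {
	if idx := strings.Index(name, ":"); idx != -1 {
		return name[:idx]
	}
	return name
}

// ipSetChanged mirrors endpointsChanged: two snapshots differ iff their
// IP sets differ (size mismatch, or an old IP missing from the new set).
func ipSetChanged(oldIPs, newIPs []string) bool {
	oldSet := make(map[string]bool, len(oldIPs))
	for _, ip := range oldIPs {
		oldSet[ip] = true
	}
	newSet := make(map[string]bool, len(newIPs))
	for _, ip := range newIPs {
		newSet[ip] = true
	}
	if len(oldSet) != len(newSet) {
		return true
	}
	for ip := range oldSet {
		if !newSet[ip] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(parseService("user-rpc-service:http-10320"))
	fmt.Println(ipSetChanged([]string{"10.0.0.1"}, []string{"10.0.0.1", "10.0.0.2"}))
}
```

This set comparison is what suppresses no-op Informer updates (e.g. resourceVersion bumps) so connections are only rebuilt when Pod IPs actually change.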
diff --git a/pkg/common/ginprometheus/doc.go b/pkg/common/ginprometheus/doc.go
new file mode 100644
index 0000000..62d865a
--- /dev/null
+++ b/pkg/common/ginprometheus/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package ginprometheus // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/ginprometheus"
diff --git a/pkg/common/ginprometheus/ginprometheus.go b/pkg/common/ginprometheus/ginprometheus.go
new file mode 100644
index 0000000..64f8a0d
--- /dev/null
+++ b/pkg/common/ginprometheus/ginprometheus.go
@@ -0,0 +1,444 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package ginprometheus
+
+//
+//import (
+// "bytes"
+// "fmt"
+// "io"
+// "net/http"
+// "os"
+// "strconv"
+// "time"
+//
+// "github.com/gin-gonic/gin"
+// "github.com/prometheus/client_golang/prometheus"
+// "github.com/prometheus/client_golang/prometheus/promhttp"
+//)
+//
+//var defaultMetricPath = "/metrics"
+//
+//// counter, counter_vec, gauge, gauge_vec,
+//// histogram, histogram_vec, summary, summary_vec.
+//var (
+// reqCounter = &Metric{
+// ID: "reqCnt",
+// Name: "requests_total",
+// Description: "How many HTTP requests processed, partitioned by status code and HTTP method.",
+// Type: "counter_vec",
+// Args: []string{"code", "method", "handler", "host", "url"}}
+//
+// reqDuration = &Metric{
+// ID: "reqDur",
+// Name: "request_duration_seconds",
+// Description: "The HTTP request latencies in seconds.",
+// Type: "histogram_vec",
+// Args: []string{"code", "method", "url"},
+// }
+//
+// resSize = &Metric{
+// ID: "resSz",
+// Name: "response_size_bytes",
+// Description: "The HTTP response sizes in bytes.",
+// Type: "summary"}
+//
+// reqSize = &Metric{
+// ID: "reqSz",
+// Name: "request_size_bytes",
+// Description: "The HTTP request sizes in bytes.",
+// Type: "summary"}
+//
+// standardMetrics = []*Metric{
+// reqCounter,
+// reqDuration,
+// resSize,
+// reqSize,
+// }
+//)
+//
+///*
+//RequestCounterURLLabelMappingFn is a function which can be supplied to the middleware to control
+//the cardinality of the request counter's "url" label, which might be required in some contexts.
+//For instance, if for a "/customer/:name" route you don't want to generate a time series for every
+//possible customer name, you could use this function:
+//
+// func(c *gin.Context) string {
+// url := c.Request.URL.Path
+// for _, p := range c.Params {
+// if p.Key == "name" {
+// url = strings.Replace(url, p.Value, ":name", 1)
+// break
+// }
+// }
+// return url
+// }
+//
+//which would map "/customer/alice" and "/customer/bob" to their template "/customer/:name".
+//*/
+//type RequestCounterURLLabelMappingFn func(c *gin.Context) string
+//
+//// Metric is a definition for the name, description, type, ID, and
+//// prometheus.Collector type (i.e. CounterVec, Summary, etc) of each metric.
+//type Metric struct {
+// MetricCollector prometheus.Collector
+// ID string
+// Name string
+// Description string
+// Type string
+// Args []string
+//}
+//
+//// Prometheus contains the metrics gathered by the instance and its path.
+//type Prometheus struct {
+// reqCnt *prometheus.CounterVec
+// reqDur *prometheus.HistogramVec
+// reqSz, resSz prometheus.Summary
+// router *gin.Engine
+// listenAddress string
+// Ppg PrometheusPushGateway
+//
+// MetricsList []*Metric
+// MetricsPath string
+//
+// ReqCntURLLabelMappingFn RequestCounterURLLabelMappingFn
+//
+// // gin.Context string to use as a prometheus URL label
+// URLLabelFromContext string
+//}
+//
+//// PrometheusPushGateway contains the configuration for pushing to a Prometheus pushgateway (optional).
+//type PrometheusPushGateway struct {
+//
+// // Push interval in seconds
+// PushIntervalSeconds time.Duration
+//
+// // Push Gateway URL in format http://domain:port
+// // where JOBNAME can be any string of your choice
+// PushGatewayURL string
+//
+// // Local metrics URL where metrics are fetched from, this could be omitted in the future
+// // if implemented using prometheus common/expfmt instead
+// MetricsURL string
+//
+// // pushgateway job name, defaults to "gin"
+// Job string
+//}
+//
+//// NewPrometheus generates a new set of metrics with a certain subsystem name.
+//func NewPrometheus(subsystem string, customMetricsList ...[]*Metric) *Prometheus {
+// if subsystem == "" {
+// subsystem = "app"
+// }
+//
+// var metricsList []*Metric
+//
+// if len(customMetricsList) > 1 {
+//		panic("Too many args. NewPrometheus( string, <optional []*Metric> ).")
+// } else if len(customMetricsList) == 1 {
+// metricsList = customMetricsList[0]
+// }
+// metricsList = append(metricsList, standardMetrics...)
+//
+// p := &Prometheus{
+// MetricsList: metricsList,
+// MetricsPath: defaultMetricPath,
+// ReqCntURLLabelMappingFn: func(c *gin.Context) string {
+// return c.FullPath() // e.g. /user/:id , /user/:id/info
+// },
+// }
+//
+// p.registerMetrics(subsystem)
+//
+// return p
+//}
+//
+//// SetPushGateway sends metrics to a remote pushgateway exposed on pushGatewayURL
+//// every pushIntervalSeconds. Metrics are fetched from metricsURL.
+//func (p *Prometheus) SetPushGateway(pushGatewayURL, metricsURL string, pushIntervalSeconds time.Duration) {
+// p.Ppg.PushGatewayURL = pushGatewayURL
+// p.Ppg.MetricsURL = metricsURL
+// p.Ppg.PushIntervalSeconds = pushIntervalSeconds
+// p.startPushTicker()
+//}
+//
+//// SetPushGatewayJob job name, defaults to "gin".
+//func (p *Prometheus) SetPushGatewayJob(j string) {
+// p.Ppg.Job = j
+//}
+//
+//// SetListenAddress for exposing metrics on address. If not set, it will be exposed at the
+//// same address of the gin engine that is being used.
+//func (p *Prometheus) SetListenAddress(address string) {
+// p.listenAddress = address
+// if p.listenAddress != "" {
+// p.router = gin.Default()
+// }
+//}
+//
+//// SetListenAddressWithRouter for using a separate router to expose metrics. (this keeps things like GET /metrics out of
+//// your content's access log).
+//func (p *Prometheus) SetListenAddressWithRouter(listenAddress string, r *gin.Engine) {
+// p.listenAddress = listenAddress
+// if len(p.listenAddress) > 0 {
+// p.router = r
+// }
+//}
+//
+//// SetMetricsPath set metrics paths.
+//func (p *Prometheus) SetMetricsPath(e *gin.Engine) error {
+//
+// if p.listenAddress != "" {
+// p.router.GET(p.MetricsPath, prometheusHandler())
+// return p.runServer()
+// } else {
+// e.GET(p.MetricsPath, prometheusHandler())
+// return nil
+// }
+//}
+//
+//// SetMetricsPathWithAuth set metrics paths with authentication.
+//func (p *Prometheus) SetMetricsPathWithAuth(e *gin.Engine, accounts gin.Accounts) error {
+//
+// if p.listenAddress != "" {
+// p.router.GET(p.MetricsPath, gin.BasicAuth(accounts), prometheusHandler())
+// return p.runServer()
+// } else {
+// e.GET(p.MetricsPath, gin.BasicAuth(accounts), prometheusHandler())
+// return nil
+// }
+//
+//}
+//
+//func (p *Prometheus) runServer() error {
+// return p.router.Run(p.listenAddress)
+//}
+//
+//func (p *Prometheus) getMetrics() []byte {
+// response, err := http.Get(p.Ppg.MetricsURL)
+// if err != nil {
+// return nil
+// }
+//
+// defer response.Body.Close()
+//
+// body, _ := io.ReadAll(response.Body)
+// return body
+//}
+//
+//var hostname, _ = os.Hostname()
+//
+//func (p *Prometheus) getPushGatewayURL() string {
+// if p.Ppg.Job == "" {
+// p.Ppg.Job = "gin"
+// }
+// return p.Ppg.PushGatewayURL + "/metrics/job/" + p.Ppg.Job + "/instance/" + hostname
+//}
+//
+//func (p *Prometheus) sendMetricsToPushGateway(metrics []byte) {
+// req, err := http.NewRequest("POST", p.getPushGatewayURL(), bytes.NewBuffer(metrics))
+// if err != nil {
+// return
+// }
+//
+// client := &http.Client{}
+// resp, err := client.Do(req)
+//	if err != nil {
+//		fmt.Println("Error sending to push gateway:", err.Error())
+//		return
+//	}
+//	defer resp.Body.Close()
+//}
+//
+//func (p *Prometheus) startPushTicker() {
+// ticker := time.NewTicker(time.Second * p.Ppg.PushIntervalSeconds)
+// go func() {
+// for range ticker.C {
+// p.sendMetricsToPushGateway(p.getMetrics())
+// }
+// }()
+//}
+//
+//// NewMetric associates prometheus.Collector based on Metric.Type.
+//func NewMetric(m *Metric, subsystem string) prometheus.Collector {
+// var metric prometheus.Collector
+// switch m.Type {
+// case "counter_vec":
+// metric = prometheus.NewCounterVec(
+// prometheus.CounterOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// m.Args,
+// )
+// case "counter":
+// metric = prometheus.NewCounter(
+// prometheus.CounterOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// )
+// case "gauge_vec":
+// metric = prometheus.NewGaugeVec(
+// prometheus.GaugeOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// m.Args,
+// )
+// case "gauge":
+// metric = prometheus.NewGauge(
+// prometheus.GaugeOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// )
+// case "histogram_vec":
+// metric = prometheus.NewHistogramVec(
+// prometheus.HistogramOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// m.Args,
+// )
+// case "histogram":
+// metric = prometheus.NewHistogram(
+// prometheus.HistogramOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// )
+// case "summary_vec":
+// metric = prometheus.NewSummaryVec(
+// prometheus.SummaryOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// m.Args,
+// )
+// case "summary":
+// metric = prometheus.NewSummary(
+// prometheus.SummaryOpts{
+// Subsystem: subsystem,
+// Name: m.Name,
+// Help: m.Description,
+// },
+// )
+// }
+// return metric
+//}
+//
+//func (p *Prometheus) registerMetrics(subsystem string) {
+// for _, metricDef := range p.MetricsList {
+// metric := NewMetric(metricDef, subsystem)
+// if err := prometheus.Register(metric); err != nil {
+//		fmt.Println("metric could not be registered in Prometheus, name:", metricDef.Name, " error:", err.Error())
+// }
+//
+// switch metricDef {
+// case reqCounter:
+// p.reqCnt = metric.(*prometheus.CounterVec)
+// case reqDuration:
+// p.reqDur = metric.(*prometheus.HistogramVec)
+// case resSize:
+// p.resSz = metric.(prometheus.Summary)
+// case reqSize:
+// p.reqSz = metric.(prometheus.Summary)
+// }
+// metricDef.MetricCollector = metric
+// }
+//}
+//
+//// Use adds the middleware to a gin engine.
+//func (p *Prometheus) Use(e *gin.Engine) error {
+// e.Use(p.HandlerFunc())
+// return p.SetMetricsPath(e)
+//}
+//
+//// UseWithAuth adds the middleware to a gin engine with BasicAuth.
+//func (p *Prometheus) UseWithAuth(e *gin.Engine, accounts gin.Accounts) error {
+// e.Use(p.HandlerFunc())
+// return p.SetMetricsPathWithAuth(e, accounts)
+//}
+//
+//// HandlerFunc defines handler function for middleware.
+//func (p *Prometheus) HandlerFunc() gin.HandlerFunc {
+// return func(c *gin.Context) {
+// if c.Request.URL.Path == p.MetricsPath {
+// c.Next()
+// return
+// }
+//
+// start := time.Now()
+// reqSz := computeApproximateRequestSize(c.Request)
+//
+// c.Next()
+//
+// status := strconv.Itoa(c.Writer.Status())
+// elapsed := float64(time.Since(start)) / float64(time.Second)
+// resSz := float64(c.Writer.Size())
+//
+// url := p.ReqCntURLLabelMappingFn(c)
+// if len(p.URLLabelFromContext) > 0 {
+// u, found := c.Get(p.URLLabelFromContext)
+// if !found {
+// u = "unknown"
+// }
+// url = u.(string)
+// }
+// p.reqDur.WithLabelValues(status, c.Request.Method, url).Observe(elapsed)
+// p.reqCnt.WithLabelValues(status, c.Request.Method, c.HandlerName(), c.Request.Host, url).Inc()
+// p.reqSz.Observe(float64(reqSz))
+// p.resSz.Observe(resSz)
+// }
+//}
+//
+//func prometheusHandler() gin.HandlerFunc {
+// h := promhttp.Handler()
+// return func(c *gin.Context) {
+// h.ServeHTTP(c.Writer, c.Request)
+// }
+//}
+//
+//func computeApproximateRequestSize(r *http.Request) int {
+// var s int
+// if r.URL != nil {
+// s = len(r.URL.Path)
+// }
+//
+// s += len(r.Method)
+// s += len(r.Proto)
+// for name, values := range r.Header {
+// s += len(name)
+// for _, value := range values {
+// s += len(value)
+// }
+// }
+// s += len(r.Host)
+//
+// // r.FormData and r.MultipartForm are assumed to be included in r.URL.
+//
+// if r.ContentLength != -1 {
+// s += int(r.ContentLength)
+// }
+// return s
+//}
diff --git a/pkg/common/prommetrics/api.go b/pkg/common/prommetrics/api.go
new file mode 100644
index 0000000..b1368f5
--- /dev/null
+++ b/pkg/common/prommetrics/api.go
@@ -0,0 +1,48 @@
+package prommetrics
+
+import (
+ "net"
+ "strconv"
+
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+var (
+ apiCounter = prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Name: "api_count",
+ Help: "Total number of API calls",
+ },
+ []string{"path", "method", "code"},
+ )
+ httpCounter = prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Name: "http_count",
+ Help: "Total number of HTTP calls",
+ },
+ []string{"path", "method", "status"},
+ )
+)
+
+func RegistryApi() {
+ registry.MustRegister(apiCounter, httpCounter)
+}
+
+func ApiInit(listener net.Listener) error {
+ apiRegistry := prometheus.NewRegistry()
+ cs := append(
+ baseCollector,
+ apiCounter,
+ httpCounter,
+ )
+ return Init(apiRegistry, listener, commonPath, promhttp.HandlerFor(apiRegistry, promhttp.HandlerOpts{}), cs...)
+}
+
+func APICall(path string, method string, apiCode int) {
+ apiCounter.With(prometheus.Labels{"path": path, "method": method, "code": strconv.Itoa(apiCode)}).Inc()
+}
+
+func HttpCall(path string, method string, status int) {
+ httpCounter.With(prometheus.Labels{"path": path, "method": method, "status": strconv.Itoa(status)}).Inc()
+}
diff --git a/pkg/common/prommetrics/grpc_auth.go b/pkg/common/prommetrics/grpc_auth.go
new file mode 100644
index 0000000..a102c5d
--- /dev/null
+++ b/pkg/common/prommetrics/grpc_auth.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+var (
+ UserLoginCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "user_login_total",
+		Help: "The total number of user logins",
+ })
+)
+
+func RegistryAuth() {
+ registry.MustRegister(UserLoginCounter)
+}
diff --git a/pkg/common/prommetrics/grpc_msg.go b/pkg/common/prommetrics/grpc_msg.go
new file mode 100644
index 0000000..909fddd
--- /dev/null
+++ b/pkg/common/prommetrics/grpc_msg.go
@@ -0,0 +1,47 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+var (
+ SingleChatMsgProcessSuccessCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "single_chat_msg_process_success_total",
+		Help: "The number of single chat messages processed successfully",
+ })
+ SingleChatMsgProcessFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "single_chat_msg_process_failed_total",
+		Help: "The number of single chat messages that failed processing",
+ })
+ GroupChatMsgProcessSuccessCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "group_chat_msg_process_success_total",
+		Help: "The number of group chat messages processed successfully",
+ })
+ GroupChatMsgProcessFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "group_chat_msg_process_failed_total",
+		Help: "The number of group chat messages that failed processing",
+ })
+)
+
+func RegistryMsg() {
+ registry.MustRegister(
+ SingleChatMsgProcessSuccessCounter,
+ SingleChatMsgProcessFailedCounter,
+ GroupChatMsgProcessSuccessCounter,
+ GroupChatMsgProcessFailedCounter,
+ )
+}
diff --git a/pkg/common/prommetrics/grpc_msggateway.go b/pkg/common/prommetrics/grpc_msggateway.go
new file mode 100644
index 0000000..0377b2f
--- /dev/null
+++ b/pkg/common/prommetrics/grpc_msggateway.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+var (
+ OnlineUserGauge = prometheus.NewGauge(prometheus.GaugeOpts{
+ Name: "online_user_num",
+		Help: "The current number of online users",
+ })
+)
+
+func RegistryMsgGateway() {
+ registry.MustRegister(OnlineUserGauge)
+}
diff --git a/pkg/common/prommetrics/grpc_push.go b/pkg/common/prommetrics/grpc_push.go
new file mode 100644
index 0000000..c6280ec
--- /dev/null
+++ b/pkg/common/prommetrics/grpc_push.go
@@ -0,0 +1,37 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+var (
+ MsgOfflinePushFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_offline_push_failed_total",
+		Help: "The number of messages that failed offline push",
+ })
+ MsgLoneTimePushCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_long_time_push_total",
+ Help: "The number of messages with a push time exceeding 10 seconds",
+ })
+)
+
+func RegistryPush() {
+ registry.MustRegister(
+ MsgOfflinePushFailedCounter,
+ MsgLoneTimePushCounter,
+ )
+}
diff --git a/pkg/common/prommetrics/grpc_user.go b/pkg/common/prommetrics/grpc_user.go
new file mode 100644
index 0000000..1c8c94c
--- /dev/null
+++ b/pkg/common/prommetrics/grpc_user.go
@@ -0,0 +1,14 @@
+package prommetrics
+
+import "github.com/prometheus/client_golang/prometheus"
+
+var (
+ UserRegisterCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "user_register_total",
+		Help: "The total number of user registrations",
+ })
+)
+
+func RegistryUser() {
+ registry.MustRegister(UserRegisterCounter)
+}
diff --git a/pkg/common/prommetrics/prommetrics.go b/pkg/common/prommetrics/prommetrics.go
new file mode 100644
index 0000000..3f683a5
--- /dev/null
+++ b/pkg/common/prommetrics/prommetrics.go
@@ -0,0 +1,117 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "errors"
+ "fmt"
+ "net"
+ "net/http"
+
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/collectors"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+const commonPath = "/metrics"
+
+var registry = &prometheusRegistry{prometheus.NewRegistry()}
+
+type prometheusRegistry struct {
+ *prometheus.Registry
+}
+
+func (x *prometheusRegistry) MustRegister(cs ...prometheus.Collector) {
+ for _, c := range cs {
+ if err := x.Registry.Register(c); err != nil {
+ if errors.As(err, &prometheus.AlreadyRegisteredError{}) {
+ continue
+ }
+ panic(err)
+ }
+ }
+}
+
+func init() {
+ registry.MustRegister(
+ collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
+ collectors.NewGoCollector(),
+ )
+}
+
+var (
+ baseCollector = []prometheus.Collector{
+ collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
+ collectors.NewGoCollector(),
+ }
+)
+
+func Init(reg *prometheus.Registry, listener net.Listener, path string, handler http.Handler, cs ...prometheus.Collector) error {
+	reg.MustRegister(cs...)
+	mux := http.NewServeMux()
+	mux.Handle(path, handler)
+	return http.Serve(listener, mux)
+}
+
+func RegistryAll() {
+ RegistryApi()
+ RegistryAuth()
+ RegistryMsg()
+ RegistryMsgGateway()
+ RegistryPush()
+ RegistryUser()
+ RegistryRpc()
+ RegistryTransfer()
+}
+
+func Start(listener net.Listener) error {
+	mux := http.NewServeMux()
+	mux.Handle(commonPath, promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
+	return http.Serve(listener, mux)
+}
+
+const (
+ APIKeyName = "api"
+ MessageTransferKeyName = "message-transfer"
+
+ TTL = 300
+)
+
+type Target struct {
+ Target string `json:"target"`
+ Labels map[string]string `json:"labels"`
+}
+
+type RespTarget struct {
+ Targets []string `json:"targets"`
+ Labels map[string]string `json:"labels"`
+}
+
+func BuildDiscoveryKeyPrefix(name string) string {
+ return fmt.Sprintf("%s/%s/%s", "openim", "prometheus_discovery", name)
+}
+
+func BuildDiscoveryKey(name string, index int) string {
+ return fmt.Sprintf("%s/%s/%s/%d", "openim", "prometheus_discovery", name, index)
+}
+
+func BuildDefaultTarget(host string, port int) Target {
+	return Target{
+		Target: fmt.Sprintf("%s:%d", host, port),
+ Labels: map[string]string{
+ "namespace": "default",
+ },
+ }
+}
diff --git a/pkg/common/prommetrics/prommetrics_test.go b/pkg/common/prommetrics/prommetrics_test.go
new file mode 100644
index 0000000..be2dff7
--- /dev/null
+++ b/pkg/common/prommetrics/prommetrics_test.go
@@ -0,0 +1,77 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import "testing"
+
+//func TestNewGrpcPromObj(t *testing.T) {
+// // Create a custom metric to pass into the NewGrpcPromObj function.
+// customMetric := prometheus.NewCounter(prometheus.CounterOpts{
+// Name: "test_metric",
+// Help: "This is a test metric.",
+// })
+// cusMetrics := []prometheus.Collector{customMetric}
+//
+// // Call NewGrpcPromObj with the custom metrics.
+// reg, grpcMetrics, err := NewGrpcPromObj(cusMetrics)
+//
+// // Assert no error was returned.
+// assert.NoError(t, err)
+//
+// // Assert the registry was correctly initialized.
+// assert.NotNil(t, reg)
+//
+// // Assert the grpcMetrics was correctly initialized.
+// assert.NotNil(t, grpcMetrics)
+//
+// // Assert that the custom metric is registered.
+// mfs, err := reg.Gather()
+// assert.NoError(t, err)
+// assert.NotEmpty(t, mfs) // Ensure some metrics are present.
+// found := false
+// for _, mf := range mfs {
+// if *mf.Name == "test_metric" {
+// found = true
+// break
+// }
+// }
+// assert.True(t, found, "Custom metric not found in registry")
+//}
+
+//func TestGetGrpcCusMetrics(t *testing.T) {
+// conf := config2.NewGlobalConfig()
+//
+// config2.InitConfig(conf, "../../config")
+// // Test various cases based on the switch statement in the GetGrpcCusMetrics function.
+// testCases := []struct {
+// name string
+// expected int // The expected number of metrics for each case.
+// }{
+// {conf.RpcRegisterName.OpenImMessageGatewayName, 1},
+// }
+//
+// for _, tc := range testCases {
+// t.Run(tc.name, func(t *testing.T) {
+// metrics := GetGrpcCusMetrics(tc.name, &conf.RpcRegisterName)
+// assert.Len(t, metrics, tc.expected)
+// })
+// }
+//}
+
+// TestRegistryApi verifies that registering the API metrics twice does not
+// panic: the custom registry skips AlreadyRegisteredError.
+func TestRegistryApi(t *testing.T) {
+	RegistryApi()
+	RegistryApi()
+}
diff --git a/pkg/common/prommetrics/rpc.go b/pkg/common/prommetrics/rpc.go
new file mode 100644
index 0000000..9e3af35
--- /dev/null
+++ b/pkg/common/prommetrics/rpc.go
@@ -0,0 +1,74 @@
+package prommetrics
+
+import (
+ "net"
+ "strconv"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ gp "github.com/grpc-ecosystem/go-grpc-prometheus"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+const rpcPath = commonPath
+
+var (
+ grpcMetrics *gp.ServerMetrics
+ rpcCounter = prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Name: "rpc_count",
+ Help: "Total number of RPC calls",
+ },
+ []string{"name", "path", "code"},
+ )
+)
+
+func RegistryRpc() {
+ registry.MustRegister(rpcCounter)
+}
+
+func RpcInit(cs []prometheus.Collector, listener net.Listener) error {
+ reg := prometheus.NewRegistry()
+ cs = append(append(
+ baseCollector,
+ rpcCounter,
+ ), cs...)
+ return Init(reg, listener, rpcPath, promhttp.HandlerFor(reg, promhttp.HandlerOpts{Registry: reg}), cs...)
+}
+
+func RPCCall(name string, path string, code int) {
+ rpcCounter.With(prometheus.Labels{"name": name, "path": path, "code": strconv.Itoa(code)}).Inc()
+}
+
+func GetGrpcServerMetrics() *gp.ServerMetrics {
+ if grpcMetrics == nil {
+ grpcMetrics = gp.NewServerMetrics()
+ grpcMetrics.EnableHandlingTimeHistogram()
+ }
+ return grpcMetrics
+}
+
+func GetGrpcCusMetrics(registerName string, discovery *config.Discovery) []prometheus.Collector {
+ switch registerName {
+ case discovery.RpcService.MessageGateway:
+ return []prometheus.Collector{OnlineUserGauge}
+ case discovery.RpcService.Msg:
+ return []prometheus.Collector{
+ SingleChatMsgProcessSuccessCounter,
+ SingleChatMsgProcessFailedCounter,
+ GroupChatMsgProcessSuccessCounter,
+ GroupChatMsgProcessFailedCounter,
+ }
+ case discovery.RpcService.Push:
+ return []prometheus.Collector{
+ MsgOfflinePushFailedCounter,
+ MsgLoneTimePushCounter,
+ }
+ case discovery.RpcService.Auth:
+ return []prometheus.Collector{UserLoginCounter}
+ case discovery.RpcService.User:
+ return []prometheus.Collector{UserRegisterCounter}
+ default:
+ return nil
+ }
+}
diff --git a/pkg/common/prommetrics/transfer.go b/pkg/common/prommetrics/transfer.go
new file mode 100644
index 0000000..51a4ca8
--- /dev/null
+++ b/pkg/common/prommetrics/transfer.go
@@ -0,0 +1,68 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package prommetrics
+
+import (
+ "net"
+
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+var (
+ MsgInsertRedisSuccessCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_insert_redis_success_total",
+		Help: "The number of messages successfully inserted into Redis",
+ })
+ MsgInsertRedisFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_insert_redis_failed_total",
+		Help: "The number of messages that failed to insert into Redis",
+ })
+ MsgInsertMongoSuccessCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_insert_mongo_success_total",
+		Help: "The number of messages successfully inserted into MongoDB",
+ })
+ MsgInsertMongoFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "msg_insert_mongo_failed_total",
+		Help: "The number of messages that failed to insert into MongoDB",
+ })
+ SeqSetFailedCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "seq_set_failed_total",
+		Help: "The number of failed seq set operations",
+ })
+)
+
+func RegistryTransfer() {
+ registry.MustRegister(
+ MsgInsertRedisSuccessCounter,
+ MsgInsertRedisFailedCounter,
+ MsgInsertMongoSuccessCounter,
+ MsgInsertMongoFailedCounter,
+ SeqSetFailedCounter,
+ )
+}
+
+func TransferInit(listener net.Listener) error {
+ reg := prometheus.NewRegistry()
+ cs := append(
+ baseCollector,
+ MsgInsertRedisSuccessCounter,
+ MsgInsertRedisFailedCounter,
+ MsgInsertMongoSuccessCounter,
+ MsgInsertMongoFailedCounter,
+ SeqSetFailedCounter,
+ )
+ return Init(reg, listener, commonPath, promhttp.HandlerFor(reg, promhttp.HandlerOpts{Registry: reg}), cs...)
+}
diff --git a/pkg/common/servererrs/code.go b/pkg/common/servererrs/code.go
new file mode 100644
index 0000000..d3f9d37
--- /dev/null
+++ b/pkg/common/servererrs/code.go
@@ -0,0 +1,104 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package servererrs
+
+// UnknownCode represents the error code when code is not parsed or parsed code equals 0.
+const UnknownCode = 1000
+
+// Error codes for various error scenarios.
+const (
+ FormattingError = 10001 // Error in formatting
+	HasRegistered        = 10002 // User has already registered
+	NotRegistered        = 10003 // User is not registered
+ PasswordErr = 10004 // Password error
+ GetIMTokenErr = 10005 // Error in getting IM token
+ RepeatSendCode = 10006 // Repeat sending code
+ MailSendCodeErr = 10007 // Error in sending code via email
+ SmsSendCodeErr = 10008 // Error in sending code via SMS
+ CodeInvalidOrExpired = 10009 // Code is invalid or expired
+ RegisterFailed = 10010 // Registration failed
+ ResetPasswordFailed = 10011 // Resetting password failed
+ RegisterLimit = 10012 // Registration limit exceeded
+ LoginLimit = 10013 // Login limit exceeded
+ InvitationError = 10014 // Error in invitation
+)
+
+// General error codes.
+const (
+ NoError = 0 // No error
+
+ DatabaseError = 90002 // Database error (redis/mysql, etc.)
+ NetworkError = 90004 // Network error
+ DataError = 90007 // Data error
+
+ CallbackError = 80000
+
+ // General error codes.
+ ServerInternalError = 500 // Server internal error
+ ArgsError = 1001 // Input parameter error
+ NoPermissionError = 1002 // Insufficient permission
+ DuplicateKeyError = 1003
+ RecordNotFoundError = 1004 // Record does not exist
+	SecretNotChangedError = 1050 // Secret not changed
+
+ // Account error codes.
+ UserIDNotFoundError = 1101 // UserID does not exist or is not registered
+	RegisteredAlreadyError = 1102 // User is already registered
+
+ // Group error codes.
+ GroupIDNotFoundError = 1201 // GroupID does not exist
+ GroupIDExisted = 1202 // GroupID already exists
+ NotInGroupYetError = 1203 // Not in the group yet
+ DismissedAlreadyError = 1204 // Group has already been dismissed
+ GroupTypeNotSupport = 1205
+ GroupRequestHandled = 1206
+
+ // Relationship error codes.
+ CanNotAddYourselfError = 1301 // Cannot add yourself as a friend
+ BlockedByPeer = 1302 // Blocked by the peer
+ NotPeersFriend = 1303 // Not the peer's friend
+ RelationshipAlreadyError = 1304 // Already in a friend relationship
+
+ // Message error codes.
+ MessageHasReadDisable = 1401
+ MutedInGroup = 1402 // Member muted in the group
+ MutedGroup = 1403 // Group is muted
+ MsgAlreadyRevoke = 1404 // Message already revoked
+ MessageContainsLink = 1405 // Message contains link (not allowed for userType=0)
+ ImageContainsQRCode = 1406 // Image contains QR code (not allowed for userType=0)
+
+ // Token error codes.
+ TokenExpiredError = 1501
+ TokenInvalidError = 1502
+ TokenMalformedError = 1503
+ TokenNotValidYetError = 1504
+ TokenUnknownError = 1505
+ TokenKickedError = 1506
+ TokenNotExistError = 1507
+
+ // Long connection gateway error codes.
+ ConnOverMaxNumLimit = 1601
+ ConnArgsErr = 1602
+ PushMsgErr = 1603
+ IOSBackgroundPushErr = 1604
+
+ // S3 error codes.
+ FileUploadedExpiredError = 1701 // Upload expired
+
+ // Red packet error codes.
+ RedPacketFinishedError = 1801 // Red packet has been finished
+ RedPacketExpiredError = 1802 // Red packet has expired
+ RedPacketAlreadyReceivedError = 1803 // User has already received this red packet
+)
diff --git a/pkg/common/servererrs/doc.go b/pkg/common/servererrs/doc.go
new file mode 100644
index 0000000..6bd0689
--- /dev/null
+++ b/pkg/common/servererrs/doc.go
@@ -0,0 +1 @@
+package servererrs // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
diff --git a/pkg/common/servererrs/predefine.go b/pkg/common/servererrs/predefine.go
new file mode 100644
index 0000000..fd9fea2
--- /dev/null
+++ b/pkg/common/servererrs/predefine.go
@@ -0,0 +1,77 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package servererrs
+
+import "github.com/openimsdk/tools/errs"
+
+var (
+ ErrSecretNotChanged = errs.NewCodeError(SecretNotChangedError, "secret not changed, please change secret in config/share.yml for security reasons")
+
+ ErrDatabase = errs.NewCodeError(DatabaseError, "DatabaseError")
+ ErrNetwork = errs.NewCodeError(NetworkError, "NetworkError")
+ ErrCallback = errs.NewCodeError(CallbackError, "CallbackError")
+ ErrCallbackContinue = errs.NewCodeError(CallbackError, "ErrCallbackContinue")
+
+ ErrInternalServer = errs.NewCodeError(ServerInternalError, "ServerInternalError")
+ ErrArgs = errs.NewCodeError(ArgsError, "ArgsError")
+ ErrNoPermission = errs.NewCodeError(NoPermissionError, "NoPermissionError")
+ ErrDuplicateKey = errs.NewCodeError(DuplicateKeyError, "DuplicateKeyError")
+ ErrRecordNotFound = errs.NewCodeError(RecordNotFoundError, "RecordNotFoundError")
+
+ ErrUserIDNotFound = errs.NewCodeError(UserIDNotFoundError, "UserIDNotFoundError")
+ ErrGroupIDNotFound = errs.NewCodeError(GroupIDNotFoundError, "GroupIDNotFoundError")
+ ErrGroupIDExisted = errs.NewCodeError(GroupIDExisted, "GroupIDExisted")
+
+ ErrNotInGroupYet = errs.NewCodeError(NotInGroupYetError, "NotInGroupYetError")
+ ErrDismissedAlready = errs.NewCodeError(DismissedAlreadyError, "DismissedAlreadyError")
+ ErrRegisteredAlready = errs.NewCodeError(RegisteredAlreadyError, "RegisteredAlreadyError")
+	ErrGroupTypeNotSupport   = errs.NewCodeError(GroupTypeNotSupport, "GroupTypeNotSupport")
+ ErrGroupRequestHandled = errs.NewCodeError(GroupRequestHandled, "GroupRequestHandled")
+
+	ErrData             = errs.NewCodeError(DataError, "DataError")
+	ErrTokenExpired     = errs.NewCodeError(TokenExpiredError, "TokenExpiredError")
+	ErrTokenInvalid     = errs.NewCodeError(TokenInvalidError, "TokenInvalidError")
+	ErrTokenMalformed   = errs.NewCodeError(TokenMalformedError, "TokenMalformedError")
+	ErrTokenNotValidYet = errs.NewCodeError(TokenNotValidYetError, "TokenNotValidYetError")
+	ErrTokenUnknown     = errs.NewCodeError(TokenUnknownError, "TokenUnknownError")
+	ErrTokenKicked      = errs.NewCodeError(TokenKickedError, "TokenKickedError")
+	ErrTokenNotExist    = errs.NewCodeError(TokenNotExistError, "TokenNotExistError")
+
+ ErrMessageHasReadDisable = errs.NewCodeError(MessageHasReadDisable, "MessageHasReadDisable")
+
+ ErrCanNotAddYourself = errs.NewCodeError(CanNotAddYourselfError, "CanNotAddYourselfError")
+ ErrBlockedByPeer = errs.NewCodeError(BlockedByPeer, "BlockedByPeer")
+ ErrNotPeersFriend = errs.NewCodeError(NotPeersFriend, "NotPeersFriend")
+ ErrRelationshipAlready = errs.NewCodeError(RelationshipAlreadyError, "RelationshipAlreadyError")
+
+ ErrMutedInGroup = errs.NewCodeError(MutedInGroup, "MutedInGroup")
+ ErrMutedGroup = errs.NewCodeError(MutedGroup, "MutedGroup")
+ ErrMsgAlreadyRevoke = errs.NewCodeError(MsgAlreadyRevoke, "MsgAlreadyRevoke")
+ ErrMessageContainsLink = errs.NewCodeError(MessageContainsLink, "MessageContainsLink")
+ ErrImageContainsQRCode = errs.NewCodeError(ImageContainsQRCode, "ImageContainsQRCode")
+
+ ErrConnOverMaxNumLimit = errs.NewCodeError(ConnOverMaxNumLimit, "ConnOverMaxNumLimit")
+
+ ErrConnArgsErr = errs.NewCodeError(ConnArgsErr, "args err, need token, sendID, platformID")
+ ErrPushMsgErr = errs.NewCodeError(PushMsgErr, "push msg err")
+ ErrIOSBackgroundPushErr = errs.NewCodeError(IOSBackgroundPushErr, "ios background push err")
+
+ ErrFileUploadedExpired = errs.NewCodeError(FileUploadedExpiredError, "FileUploadedExpiredError")
+
+ // Red packet errors.
+ ErrRedPacketFinished = errs.NewCodeError(RedPacketFinishedError, "red packet has been finished")
+ ErrRedPacketExpired = errs.NewCodeError(RedPacketExpiredError, "red packet has expired")
+ ErrRedPacketAlreadyReceived = errs.NewCodeError(RedPacketAlreadyReceivedError, "you have already received this red packet")
+)
diff --git a/pkg/common/servererrs/relation.go b/pkg/common/servererrs/relation.go
new file mode 100644
index 0000000..62b0561
--- /dev/null
+++ b/pkg/common/servererrs/relation.go
@@ -0,0 +1,58 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package servererrs
+
+import "github.com/openimsdk/tools/errs"
+
+var Relation = &relation{m: make(map[int]map[int]struct{})}
+
+func init() {
+ Relation.Add(errs.RecordNotFoundError, UserIDNotFoundError)
+ Relation.Add(errs.RecordNotFoundError, GroupIDNotFoundError)
+ Relation.Add(errs.DuplicateKeyError, GroupIDExisted)
+}
+
+type relation struct {
+ m map[int]map[int]struct{}
+}
+
+func (r *relation) Add(codes ...int) {
+ if len(codes) < 2 {
+		panic("codes length must be at least 2")
+ }
+ for i := 1; i < len(codes); i++ {
+ parent := codes[i-1]
+ s, ok := r.m[parent]
+ if !ok {
+ s = make(map[int]struct{})
+ r.m[parent] = s
+ }
+ for _, code := range codes[i:] {
+ s[code] = struct{}{}
+ }
+ }
+}
+
+func (r *relation) Is(parent, child int) bool {
+ if parent == child {
+ return true
+ }
+ s, ok := r.m[parent]
+ if !ok {
+ return false
+ }
+ _, ok = s[child]
+ return ok
+}
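The `relation` type above records which child error codes a parent code subsumes, so `Is(parent, child)` can answer "does this specific error fall under that general one?". A minimal standalone sketch of the same lookup logic (stdlib only; the numeric codes here are illustrative, not the real `servererrs` constants):

```go
package main

import "fmt"

// relation maps a parent error code to the set of child codes it subsumes.
type relation struct {
	m map[int]map[int]struct{}
}

// Add links each code in the list to every code that follows it.
func (r *relation) Add(codes ...int) {
	if len(codes) < 2 {
		panic("codes length must be at least 2")
	}
	for i := 1; i < len(codes); i++ {
		parent := codes[i-1]
		s, ok := r.m[parent]
		if !ok {
			s = make(map[int]struct{})
			r.m[parent] = s
		}
		for _, code := range codes[i:] {
			s[code] = struct{}{}
		}
	}
}

// Is reports whether child equals parent or was registered under it.
func (r *relation) Is(parent, child int) bool {
	if parent == child {
		return true
	}
	s, ok := r.m[parent]
	if !ok {
		return false
	}
	_, ok = s[child]
	return ok
}

func main() {
	// Illustrative: a generic "record not found" (1004) subsumes
	// the more specific user-not-found (1101) and group-not-found (1201).
	rel := &relation{m: make(map[int]map[int]struct{})}
	rel.Add(1004, 1101)
	rel.Add(1004, 1201)
	fmt.Println(rel.Is(1004, 1101)) // registered child
	fmt.Println(rel.Is(1004, 1004)) // identical codes
	fmt.Println(rel.Is(1004, 1302)) // never registered
}
```

This is why the `init` function in `relation.go` wires `errs.RecordNotFoundError` to `UserIDNotFoundError` and `GroupIDNotFoundError`: callers can match on the generic code and still catch the specific ones.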
diff --git a/pkg/common/startrpc/circuitbreaker.go b/pkg/common/startrpc/circuitbreaker.go
new file mode 100644
index 0000000..060a3aa
--- /dev/null
+++ b/pkg/common/startrpc/circuitbreaker.go
@@ -0,0 +1,107 @@
+package startrpc
+
+import (
+ "context"
+ "time"
+
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/stability/circuitbreaker"
+ "github.com/openimsdk/tools/stability/circuitbreaker/sre"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+)
+
+type CircuitBreaker struct {
+ Enable bool `yaml:"enable"`
+ Success float64 `yaml:"success"` // success rate threshold (0.0-1.0)
+ Request int64 `yaml:"request"` // request threshold
+ Bucket int `yaml:"bucket"` // number of buckets
+ Window time.Duration `yaml:"window"` // time window for statistics
+}
+
+func NewCircuitBreaker(config *CircuitBreaker) circuitbreaker.CircuitBreaker {
+ if !config.Enable {
+ return nil
+ }
+
+ return sre.NewSREBraker(
+ sre.WithWindow(config.Window),
+ sre.WithBucket(config.Bucket),
+ sre.WithSuccess(config.Success),
+ sre.WithRequest(config.Request),
+ )
+}
+
+func UnaryCircuitBreakerInterceptor(breaker circuitbreaker.CircuitBreaker) grpc.ServerOption {
+ if breaker == nil {
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) {
+ return handler(ctx, req)
+ })
+ }
+
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) {
+ if err := breaker.Allow(); err != nil {
+ log.ZWarn(ctx, "rpc circuit breaker open", err, "method", info.FullMethod)
+ return nil, status.Error(codes.Unavailable, "service unavailable due to circuit breaker")
+ }
+
+ resp, err = handler(ctx, req)
+
+ if err != nil {
+ if st, ok := status.FromError(err); ok {
+ switch st.Code() {
+ case codes.OK:
+ breaker.MarkSuccess()
+ case codes.InvalidArgument, codes.NotFound, codes.AlreadyExists, codes.PermissionDenied:
+ breaker.MarkSuccess()
+ default:
+ breaker.MarkFailed()
+ }
+ } else {
+ breaker.MarkFailed()
+ }
+ } else {
+ breaker.MarkSuccess()
+ }
+
+ return resp, err
+
+ })
+}
+
+func StreamCircuitBreakerInterceptor(breaker circuitbreaker.CircuitBreaker) grpc.ServerOption {
+ if breaker == nil {
+ return grpc.ChainStreamInterceptor(func(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
+ return handler(srv, ss)
+ })
+ }
+
+ return grpc.ChainStreamInterceptor(func(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
+ if err := breaker.Allow(); err != nil {
+ log.ZWarn(ss.Context(), "rpc circuit breaker open", err, "method", info.FullMethod)
+ return status.Error(codes.Unavailable, "service unavailable due to circuit breaker")
+ }
+
+ err := handler(srv, ss)
+
+ if err != nil {
+ if st, ok := status.FromError(err); ok {
+ switch st.Code() {
+ case codes.OK:
+ breaker.MarkSuccess()
+ case codes.InvalidArgument, codes.NotFound, codes.AlreadyExists, codes.PermissionDenied:
+ breaker.MarkSuccess()
+ default:
+ breaker.MarkFailed()
+ }
+ } else {
+ breaker.MarkFailed()
+ }
+ } else {
+ breaker.MarkSuccess()
+ }
+
+ return err
+ })
+}
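The interceptors above deliberately count caller-side errors (invalid argument, not found, already exists, permission denied) as successes, so client mistakes cannot trip the breaker; only server-side failures do. A stdlib-only sketch of that classification (the numeric constants mirror the values in gRPC's `codes` package; `shouldMarkSuccess` is an illustrative helper, not part of the diff):

```go
package main

import "fmt"

// Numeric values mirroring google.golang.org/grpc/codes.
const (
	codeOK               = 0
	codeInvalidArgument  = 3
	codeNotFound         = 5
	codeAlreadyExists    = 6
	codePermissionDenied = 7
	codeUnavailable      = 14
)

// shouldMarkSuccess reports whether a response with the given status code
// should count as a success in circuit-breaker statistics: OK and
// caller-side errors do, server-side errors do not.
func shouldMarkSuccess(code int) bool {
	switch code {
	case codeOK, codeInvalidArgument, codeNotFound, codeAlreadyExists, codePermissionDenied:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldMarkSuccess(codeNotFound))    // caller error: counts as success
	fmt.Println(shouldMarkSuccess(codeUnavailable)) // server error: counts as failure
}
```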
diff --git a/pkg/common/startrpc/mw.go b/pkg/common/startrpc/mw.go
new file mode 100644
index 0000000..2fcafb1
--- /dev/null
+++ b/pkg/common/startrpc/mw.go
@@ -0,0 +1,15 @@
+package startrpc
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "google.golang.org/grpc"
+)
+
+func grpcServerIMAdminUserID(imAdminUserID []string) grpc.ServerOption {
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) {
+ ctx = authverify.WithIMAdminUserIDs(ctx, imAdminUserID)
+ return handler(ctx, req)
+ })
+}
diff --git a/pkg/common/startrpc/ratelimit.go b/pkg/common/startrpc/ratelimit.go
new file mode 100644
index 0000000..1c2ac8e
--- /dev/null
+++ b/pkg/common/startrpc/ratelimit.go
@@ -0,0 +1,70 @@
+package startrpc
+
+import (
+ "context"
+ "time"
+
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/stability/ratelimit"
+ "github.com/openimsdk/tools/stability/ratelimit/bbr"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+)
+
+type RateLimiter struct {
+ Enable bool
+ Window time.Duration
+ Bucket int
+ CPUThreshold int64
+}
+
+func NewRateLimiter(config *RateLimiter) ratelimit.Limiter {
+ if !config.Enable {
+ return nil
+ }
+
+ return bbr.NewBBRLimiter(
+ bbr.WithWindow(config.Window),
+ bbr.WithBucket(config.Bucket),
+ bbr.WithCPUThreshold(config.CPUThreshold),
+ )
+}
+
+func UnaryRateLimitInterceptor(limiter ratelimit.Limiter) grpc.ServerOption {
+ if limiter == nil {
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) {
+ return handler(ctx, req)
+ })
+ }
+
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, err error) {
+ done, err := limiter.Allow()
+ if err != nil {
+ log.ZWarn(ctx, "rpc rate limited", err, "method", info.FullMethod)
+ return nil, status.Errorf(codes.ResourceExhausted, "rpc request rate limit exceeded: %v, please try again later", err)
+ }
+
+ defer done(ratelimit.DoneInfo{})
+ return handler(ctx, req)
+ })
+}
+
+func StreamRateLimitInterceptor(limiter ratelimit.Limiter) grpc.ServerOption {
+ if limiter == nil {
+ return grpc.ChainStreamInterceptor(func(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
+ return handler(srv, ss)
+ })
+ }
+
+ return grpc.ChainStreamInterceptor(func(srv any, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
+ done, err := limiter.Allow()
+ if err != nil {
+ log.ZWarn(ss.Context(), "rpc rate limited", err, "method", info.FullMethod)
+ return status.Errorf(codes.ResourceExhausted, "rpc request rate limit exceeded: %v, please try again later", err)
+ }
+ defer done(ratelimit.DoneInfo{})
+
+ return handler(srv, ss)
+ })
+}
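Both interceptors above rely on the limiter's `Allow` returning a `done` callback, which they defer so the limiter can observe when each request finishes. A stdlib-only sketch of that Allow/done contract (an illustrative max-inflight limiter, not the real BBR algorithm from `openimsdk/tools`):

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var errLimited = errors.New("rate limit exceeded")

// inflightLimiter caps concurrent requests, mimicking the
// Allow() (done, err) contract used by the interceptors.
type inflightLimiter struct {
	max      int64
	inflight atomic.Int64
}

// Allow admits the request if capacity remains; the returned done
// callback must be invoked (typically deferred) when the request ends.
func (l *inflightLimiter) Allow() (func(), error) {
	if l.inflight.Add(1) > l.max {
		l.inflight.Add(-1)
		return nil, errLimited
	}
	return func() { l.inflight.Add(-1) }, nil
}

func main() {
	l := &inflightLimiter{max: 1}
	done, err := l.Allow()
	fmt.Println(err == nil) // first request admitted
	_, err = l.Allow()
	fmt.Println(errors.Is(err, errLimited)) // second rejected while first runs
	done()                                  // first request finishes
	_, err = l.Allow()
	fmt.Println(err == nil) // capacity freed again
}
```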
diff --git a/pkg/common/startrpc/start.go b/pkg/common/startrpc/start.go
new file mode 100644
index 0000000..fd5250a
--- /dev/null
+++ b/pkg/common/startrpc/start.go
@@ -0,0 +1,321 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package startrpc
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "net"
+ "os"
+ "os/signal"
+ "reflect"
+ "strconv"
+ "syscall"
+ "time"
+
+ conf "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/network"
+ "google.golang.org/grpc/status"
+
+ kdisc "git.imall.cloud/openim/open-im-server-deploy/pkg/common/discovery"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/prommetrics"
+ "github.com/openimsdk/tools/discovery"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ grpccli "github.com/openimsdk/tools/mw/grpc/client"
+ grpcsrv "github.com/openimsdk/tools/mw/grpc/server"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+)
+
+func init() {
+ prommetrics.RegistryAll()
+}
+
+func Start[T any](ctx context.Context, disc *conf.Discovery, circuitBreakerConfig *conf.CircuitBreaker, rateLimiterConfig *conf.RateLimiter, prometheusConfig *conf.Prometheus, listenIP,
+ registerIP string, autoSetPorts bool, rpcPorts []int, index int, rpcRegisterName string, notification *conf.Notification, config T,
+ watchConfigNames []string, watchServiceNames []string,
+ rpcFn func(ctx context.Context, config T, client discovery.SvcDiscoveryRegistry, server grpc.ServiceRegistrar) error,
+ options ...grpc.ServerOption) error {
+
+ if notification != nil {
+ conf.InitNotification(notification)
+ }
+
+ maxRequestBody := getConfigRpcMaxRequestBody(reflect.ValueOf(config))
+ shareConfig := getConfigShare(reflect.ValueOf(config))
+
+ log.ZDebug(ctx, "rpc start", "rpcMaxRequestBody", maxRequestBody, "rpcRegisterName", rpcRegisterName, "registerIP", registerIP, "listenIP", listenIP)
+
+ options = append(options,
+ grpcsrv.GrpcServerMetadataContext(),
+ grpcsrv.GrpcServerErrorConvert(),
+ grpcsrv.GrpcServerLogger(),
+ grpcsrv.GrpcServerRequestValidate(),
+ grpcsrv.GrpcServerPanicCapture(),
+ )
+ if shareConfig != nil && len(shareConfig.IMAdminUser.UserIDs) > 0 {
+ options = append(options, grpcServerIMAdminUserID(shareConfig.IMAdminUser.UserIDs))
+ }
+ var clientOptions []grpc.DialOption
+ if maxRequestBody != nil {
+ if maxRequestBody.RequestMaxBodySize > 0 {
+ options = append(options, grpc.MaxRecvMsgSize(maxRequestBody.RequestMaxBodySize))
+ clientOptions = append(clientOptions, grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(maxRequestBody.RequestMaxBodySize)))
+ }
+ if maxRequestBody.ResponseMaxBodySize > 0 {
+ options = append(options, grpc.MaxSendMsgSize(maxRequestBody.ResponseMaxBodySize))
+ clientOptions = append(clientOptions, grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxRequestBody.ResponseMaxBodySize)))
+ }
+ }
+
+ if circuitBreakerConfig != nil && circuitBreakerConfig.Enable {
+ cb := &CircuitBreaker{
+ Enable: circuitBreakerConfig.Enable,
+ Success: circuitBreakerConfig.Success,
+ Request: circuitBreakerConfig.Request,
+ Bucket: circuitBreakerConfig.Bucket,
+ Window: circuitBreakerConfig.Window,
+ }
+
+ breaker := NewCircuitBreaker(cb)
+
+ options = append(options,
+ UnaryCircuitBreakerInterceptor(breaker),
+ StreamCircuitBreakerInterceptor(breaker),
+ )
+
+ log.ZInfo(ctx, "RPC circuit breaker enabled",
+ "service", rpcRegisterName,
+ "window", circuitBreakerConfig.Window,
+ "bucket", circuitBreakerConfig.Bucket,
+ "success", circuitBreakerConfig.Success,
+ "requestThreshold", circuitBreakerConfig.Request)
+ }
+
+ if rateLimiterConfig != nil && rateLimiterConfig.Enable {
+ rl := &RateLimiter{
+ Enable: rateLimiterConfig.Enable,
+ Window: rateLimiterConfig.Window,
+ Bucket: rateLimiterConfig.Bucket,
+ CPUThreshold: rateLimiterConfig.CPUThreshold,
+ }
+
+ limiter := NewRateLimiter(rl)
+
+ options = append(options,
+ UnaryRateLimitInterceptor(limiter),
+ StreamRateLimitInterceptor(limiter),
+ )
+
+ log.ZInfo(ctx, "RPC rate limiter enabled",
+ "service", rpcRegisterName,
+ "window", rateLimiterConfig.Window,
+ "bucket", rateLimiterConfig.Bucket,
+ "cpuThreshold", rateLimiterConfig.CPUThreshold)
+ }
+
+ registerIP, err := network.GetRpcRegisterIP(registerIP)
+ if err != nil {
+ return err
+ }
+ var prometheusListenAddr string
+ if autoSetPorts {
+ prometheusListenAddr = net.JoinHostPort(listenIP, "0")
+ } else {
+ prometheusPort, err := datautil.GetElemByIndex(prometheusConfig.Ports, index)
+ if err != nil {
+ return err
+ }
+ prometheusListenAddr = net.JoinHostPort(listenIP, strconv.Itoa(prometheusPort))
+ }
+
+ watchConfigNames = append(watchConfigNames, conf.LogConfigFileName)
+
+ client, err := kdisc.NewDiscoveryRegister(disc, watchServiceNames)
+ if err != nil {
+ return err
+ }
+
+ defer client.Close()
+ client.AddOption(
+ grpc.WithTransportCredentials(insecure.NewCredentials()),
+ grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")),
+
+ grpccli.GrpcClientLogger(),
+ grpccli.GrpcClientContext(),
+ grpccli.GrpcClientErrorConvert(),
+ )
+ if len(clientOptions) > 0 {
+ client.AddOption(clientOptions...)
+ }
+
+ ctx, cancel := context.WithCancelCause(ctx)
+
+ go func() {
+ sigs := make(chan os.Signal, 1)
+		signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT) // SIGKILL cannot be caught, so it is not registered
+ select {
+ case <-ctx.Done():
+ return
+ case val := <-sigs:
+ log.ZDebug(ctx, "recv signal", "signal", val.String())
+ cancel(fmt.Errorf("signal %s", val.String()))
+ }
+ }()
+
+ if prometheusListenAddr != "" {
+ options = append(
+ options,
+ prommetricsUnaryInterceptor(rpcRegisterName),
+ prommetricsStreamInterceptor(rpcRegisterName),
+ )
+ prometheusListener, prometheusPort, err := listenTCP(prometheusListenAddr)
+ if err != nil {
+ return err
+ }
+ log.ZDebug(ctx, "prometheus start", "addr", prometheusListener.Addr(), "rpcRegisterName", rpcRegisterName)
+ target, err := jsonutil.JsonMarshal(prommetrics.BuildDefaultTarget(registerIP, prometheusPort))
+ if err != nil {
+ return err
+ }
+ if autoSetPorts {
+ if err = client.SetWithLease(ctx, prommetrics.BuildDiscoveryKey(rpcRegisterName, index), target, prommetrics.TTL); err != nil {
+ if !errors.Is(err, discovery.ErrNotSupported) {
+ return err
+ }
+ }
+ }
+ go func() {
+ err := prommetrics.Start(prometheusListener)
+ if err == nil {
+ err = fmt.Errorf("listener done")
+ }
+ cancel(fmt.Errorf("prommetrics %s %w", rpcRegisterName, err))
+ }()
+ }
+
+ var (
+ rpcServer *grpc.Server
+ rpcGracefulStop chan struct{}
+ )
+
+ onGrpcServiceRegistrar := func(desc *grpc.ServiceDesc, impl any) {
+ if rpcServer != nil {
+ rpcServer.RegisterService(desc, impl)
+ return
+ }
+ var rpcListenAddr string
+ if autoSetPorts {
+ rpcListenAddr = net.JoinHostPort(listenIP, "0")
+ } else {
+ rpcPort, err := datautil.GetElemByIndex(rpcPorts, index)
+ if err != nil {
+ cancel(fmt.Errorf("rpcPorts index out of range %s %w", rpcRegisterName, err))
+ return
+ }
+ rpcListenAddr = net.JoinHostPort(listenIP, strconv.Itoa(rpcPort))
+ }
+ rpcListener, err := net.Listen("tcp", rpcListenAddr)
+ if err != nil {
+ cancel(fmt.Errorf("listen rpc %s %s %w", rpcRegisterName, rpcListenAddr, err))
+ return
+ }
+
+ rpcServer = grpc.NewServer(options...)
+ rpcServer.RegisterService(desc, impl)
+ rpcGracefulStop = make(chan struct{})
+ rpcPort := rpcListener.Addr().(*net.TCPAddr).Port
+ log.ZDebug(ctx, "rpc start register", "rpcRegisterName", rpcRegisterName, "registerIP", registerIP, "rpcPort", rpcPort)
+ grpcOpt := grpc.WithTransportCredentials(insecure.NewCredentials())
+ go func() {
+ <-ctx.Done()
+ rpcServer.GracefulStop()
+ close(rpcGracefulStop)
+ }()
+ if err := client.Register(ctx, rpcRegisterName, registerIP, rpcListener.Addr().(*net.TCPAddr).Port, grpcOpt); err != nil {
+ cancel(fmt.Errorf("rpc register %s %w", rpcRegisterName, err))
+ return
+ }
+
+ go func() {
+ err := rpcServer.Serve(rpcListener)
+ if err == nil {
+ err = fmt.Errorf("serve end")
+ }
+ cancel(fmt.Errorf("rpc %s %w", rpcRegisterName, err))
+ }()
+ }
+
+ err = rpcFn(ctx, config, client, &grpcServiceRegistrar{onRegisterService: onGrpcServiceRegistrar})
+ if err != nil {
+ return err
+ }
+ <-ctx.Done()
+ log.ZDebug(ctx, "cmd wait done", "err", context.Cause(ctx))
+ if rpcGracefulStop != nil {
+ timeout := time.NewTimer(time.Second * 15)
+ defer timeout.Stop()
+ select {
+ case <-timeout.C:
+			log.ZWarn(ctx, "rpc graceful stop timeout", nil)
+ case <-rpcGracefulStop:
+			log.ZDebug(ctx, "rpc graceful stop done")
+ }
+ }
+ return context.Cause(ctx)
+}
+
+func listenTCP(addr string) (net.Listener, int, error) {
+ listener, err := net.Listen("tcp", addr)
+ if err != nil {
+ return nil, 0, errs.WrapMsg(err, "listen err", "addr", addr)
+ }
+ return listener, listener.Addr().(*net.TCPAddr).Port, nil
+}
+
+func prommetricsUnaryInterceptor(rpcRegisterName string) grpc.ServerOption {
+ getCode := func(err error) int {
+ if err == nil {
+ return 0
+ }
+ rpcErr, ok := err.(interface{ GRPCStatus() *status.Status })
+ if !ok {
+ return -1
+ }
+ return int(rpcErr.GRPCStatus().Code())
+ }
+ return grpc.ChainUnaryInterceptor(func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
+ resp, err := handler(ctx, req)
+ prommetrics.RPCCall(rpcRegisterName, info.FullMethod, getCode(err))
+ return resp, err
+ })
+}
+
+func prommetricsStreamInterceptor(rpcRegisterName string) grpc.ServerOption {
+	// Stream call metrics are not collected yet; return an empty interceptor chain.
+	return grpc.ChainStreamInterceptor()
+}
+
+type grpcServiceRegistrar struct {
+ onRegisterService func(desc *grpc.ServiceDesc, impl any)
+}
+
+func (x *grpcServiceRegistrar) RegisterService(desc *grpc.ServiceDesc, impl any) {
+ x.onRegisterService(desc, impl)
+}
diff --git a/pkg/common/startrpc/tools.go b/pkg/common/startrpc/tools.go
new file mode 100644
index 0000000..c7e36d7
--- /dev/null
+++ b/pkg/common/startrpc/tools.go
@@ -0,0 +1,47 @@
+package startrpc
+
+import (
+ "reflect"
+
+ conf "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+)
+
+func getConfig[T any](value reflect.Value) *T {
+ for value.Kind() == reflect.Pointer {
+ value = value.Elem()
+ }
+ if value.Kind() == reflect.Struct {
+ num := value.NumField()
+ for i := 0; i < num; i++ {
+ field := value.Field(i)
+ for field.Kind() == reflect.Pointer {
+ field = field.Elem()
+ }
+ if field.Kind() == reflect.Struct {
+ if elem, ok := field.Interface().(T); ok {
+ return &elem
+ }
+ if elem := getConfig[T](field); elem != nil {
+ return elem
+ }
+ }
+ }
+ }
+ return nil
+}
+
+func getConfigRpcMaxRequestBody(value reflect.Value) *conf.MaxRequestBody {
+ return getConfig[conf.MaxRequestBody](value)
+}
+
+func getConfigShare(value reflect.Value) *conf.Share {
+ return getConfig[conf.Share](value)
+}
+
+func getConfigRateLimiter(value reflect.Value) *conf.RateLimiter {
+ return getConfig[conf.RateLimiter](value)
+}
+
+func getConfigCircuitBreaker(value reflect.Value) *conf.CircuitBreaker {
+ return getConfig[conf.CircuitBreaker](value)
+}
diff --git a/pkg/common/storage/cache/batch_handler.go b/pkg/common/storage/cache/batch_handler.go
new file mode 100644
index 0000000..8417425
--- /dev/null
+++ b/pkg/common/storage/cache/batch_handler.go
@@ -0,0 +1,17 @@
+package cache
+
+import (
+ "context"
+)
+
+// BatchDeleter interface defines a set of methods for batch deleting cache and publishing deletion information.
+type BatchDeleter interface {
+	// ChainExecDel is used for chained calls and must call Clone to prevent memory pollution.
+	ChainExecDel(ctx context.Context) error
+	// ExecDelWithKeys directly takes keys for deletion.
+	ExecDelWithKeys(ctx context.Context, keys []string) error
+	// Clone creates a copy of the BatchDeleter to avoid modifying the original object.
+	Clone() BatchDeleter
+	// AddKeys adds keys to be deleted.
+	AddKeys(keys ...string)
+}
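The interface above encodes a clone-before-chain pattern: each chained mutation operates on a copy, so a shared base deleter is never polluted by one request's keys. A stdlib-only sketch of that pattern (the `keyDeleter` type and map-backed store are illustrative, not the real redis-backed implementation):

```go
package main

import "fmt"

// keyDeleter accumulates keys and deletes them in one batch,
// cloning itself before each chained mutation.
type keyDeleter struct {
	keys []string
}

// Clone returns an independent copy so chained AddKeys calls
// cannot pollute the original deleter's key slice.
func (d *keyDeleter) Clone() *keyDeleter {
	return &keyDeleter{keys: append([]string(nil), d.keys...)}
}

// AddKeys records keys for the next batch delete.
func (d *keyDeleter) AddKeys(keys ...string) {
	d.keys = append(d.keys, keys...)
}

// ExecDelWithKeys deletes the given keys from store.
func (d *keyDeleter) ExecDelWithKeys(store map[string]string, keys []string) {
	for _, k := range keys {
		delete(store, k)
	}
}

func main() {
	store := map[string]string{"a": "1", "b": "2", "c": "3"}
	base := &keyDeleter{}
	c := base.Clone() // chain starts from a copy
	c.AddKeys("a", "b")
	c.ExecDelWithKeys(store, c.keys)
	fmt.Println(len(store), len(base.keys)) // base is untouched by the chained copy
}
```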
diff --git a/pkg/common/storage/cache/black.go b/pkg/common/storage/cache/black.go
new file mode 100644
index 0000000..515ce3c
--- /dev/null
+++ b/pkg/common/storage/cache/black.go
@@ -0,0 +1,27 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+)
+
+type BlackCache interface {
+ BatchDeleter
+ CloneBlackCache() BlackCache
+ GetBlackIDs(ctx context.Context, userID string) (blackIDs []string, err error)
+	// DelBlackIDs deletes a user's blackIDs from msgCache; call when the user's blacklist changes
+ DelBlackIDs(ctx context.Context, userID string) BlackCache
+}
diff --git a/pkg/common/storage/cache/client_config.go b/pkg/common/storage/cache/client_config.go
new file mode 100644
index 0000000..329f25c
--- /dev/null
+++ b/pkg/common/storage/cache/client_config.go
@@ -0,0 +1,8 @@
+package cache
+
+import "context"
+
+type ClientConfigCache interface {
+ DeleteUserCache(ctx context.Context, userIDs []string) error
+ GetUserConfig(ctx context.Context, userID string) (map[string]string, error)
+}
diff --git a/pkg/common/storage/cache/conversation.go b/pkg/common/storage/cache/conversation.go
new file mode 100644
index 0000000..8636a9e
--- /dev/null
+++ b/pkg/common/storage/cache/conversation.go
@@ -0,0 +1,65 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+// Fetch functions passed as arguments execute only when msgCache has no data.
+type ConversationCache interface {
+ BatchDeleter
+ CloneConversationCache() ConversationCache
+ // get user's conversationIDs from msgCache
+ GetUserConversationIDs(ctx context.Context, ownerUserID string) ([]string, error)
+ GetUserNotNotifyConversationIDs(ctx context.Context, userID string) ([]string, error)
+ GetPinnedConversationIDs(ctx context.Context, userID string) ([]string, error)
+ DelConversationIDs(userIDs ...string) ConversationCache
+
+ GetUserConversationIDsHash(ctx context.Context, ownerUserID string) (hash uint64, err error)
+ DelUserConversationIDsHash(ownerUserIDs ...string) ConversationCache
+
+ // get one conversation from msgCache
+ GetConversation(ctx context.Context, ownerUserID, conversationID string) (*relationtb.Conversation, error)
+ DelConversations(ownerUserID string, conversationIDs ...string) ConversationCache
+ DelUsersConversation(conversationID string, ownerUserIDs ...string) ConversationCache
+	// get multiple conversations from msgCache
+ GetConversations(ctx context.Context, ownerUserID string,
+ conversationIDs []string) ([]*relationtb.Conversation, error)
+ // get one user's all conversations from msgCache
+ GetUserAllConversations(ctx context.Context, ownerUserID string) ([]*relationtb.Conversation, error)
+ // get user conversation recv msg from msgCache
+ GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error)
+ DelUserRecvMsgOpt(ownerUserID, conversationID string) ConversationCache
+	// userID list of super-group members who do not receive message notifications
+ // GetSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) (userIDs []string, err error)
+ DelSuperGroupRecvMsgNotNotifyUserIDs(groupID string) ConversationCache
+	// hash of the userID list of super-group members who do not receive message notifications
+ // GetSuperGroupRecvMsgNotNotifyUserIDsHash(ctx context.Context, groupID string) (hash uint64, err error)
+ DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID string) ConversationCache
+
+ // GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (map[string]int64, error)
+ DelUserAllHasReadSeqs(ownerUserID string, conversationIDs ...string) ConversationCache
+
+ GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
+ DelConversationNotReceiveMessageUserIDs(conversationIDs ...string) ConversationCache
+ DelConversationNotNotifyMessageUserIDs(userIDs ...string) ConversationCache
+ DelUserPinnedConversations(userIDs ...string) ConversationCache
+ DelConversationVersionUserIDs(userIDs ...string) ConversationCache
+
+ FindMaxConversationUserVersion(ctx context.Context, userID string) (*relationtb.VersionLog, error)
+}
diff --git a/pkg/common/storage/cache/doc.go b/pkg/common/storage/cache/doc.go
new file mode 100644
index 0000000..21e67e1
--- /dev/null
+++ b/pkg/common/storage/cache/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
diff --git a/pkg/common/storage/cache/friend.go b/pkg/common/storage/cache/friend.go
new file mode 100644
index 0000000..15ec855
--- /dev/null
+++ b/pkg/common/storage/cache/friend.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+// FriendCache is an interface for caching friend-related data.
+type FriendCache interface {
+ BatchDeleter
+ CloneFriendCache() FriendCache
+ GetFriendIDs(ctx context.Context, ownerUserID string) (friendIDs []string, err error)
+	// DelFriendIDs should be called when a user's friend ID list changes.
+	DelFriendIDs(ownerUserID ...string) FriendCache
+	// GetFriend returns a single friend's info from the cache.
+	GetFriend(ctx context.Context, ownerUserID, friendUserID string) (friend *relationtb.Friend, err error)
+	// DelFriend should be called when a friend's info changes.
+	DelFriend(ownerUserID, friendUserID string) FriendCache
+	// DelFriends should be called when multiple friends' info changes.
+	DelFriends(ownerUserID string, friendUserIDs []string) FriendCache
+
+ DelOwner(friendUserID string, ownerUserIDs []string) FriendCache
+
+ DelMaxFriendVersion(ownerUserIDs ...string) FriendCache
+
+ //DelSortFriendUserIDs(ownerUserIDs ...string) FriendCache
+
+ //FindSortFriendUserIDs(ctx context.Context, ownerUserID string) ([]string, error)
+
+ //FindFriendIncrVersion(ctx context.Context, ownerUserID string, version uint, limit int) (*relationtb.VersionLog, error)
+
+ FindMaxFriendVersion(ctx context.Context, ownerUserID string) (*relationtb.VersionLog, error)
+}
diff --git a/pkg/common/storage/cache/group.go b/pkg/common/storage/cache/group.go
new file mode 100644
index 0000000..79452f6
--- /dev/null
+++ b/pkg/common/storage/cache/group.go
@@ -0,0 +1,70 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/common"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+type GroupHash interface {
+ GetGroupHash(ctx context.Context, groupID string) (uint64, error)
+}
+
+type GroupCache interface {
+ BatchDeleter
+ CloneGroupCache() GroupCache
+ GetGroupsInfo(ctx context.Context, groupIDs []string) (groups []*model.Group, err error)
+ GetGroupInfo(ctx context.Context, groupID string) (group *model.Group, err error)
+ DelGroupsInfo(groupIDs ...string) GroupCache
+
+ GetGroupMembersHash(ctx context.Context, groupID string) (hashCode uint64, err error)
+ GetGroupMemberHashMap(ctx context.Context, groupIDs []string) (map[string]*common.GroupSimpleUserID, error)
+ DelGroupMembersHash(groupID string) GroupCache
+
+ GetGroupMemberIDs(ctx context.Context, groupID string) (groupMemberIDs []string, err error)
+
+ DelGroupMemberIDs(groupID string) GroupCache
+
+ GetJoinedGroupIDs(ctx context.Context, userID string) (joinedGroupIDs []string, err error)
+ DelJoinedGroupID(userID ...string) GroupCache
+
+ GetGroupMemberInfo(ctx context.Context, groupID, userID string) (groupMember *model.GroupMember, err error)
+ GetGroupMembersInfo(ctx context.Context, groupID string, userID []string) (groupMembers []*model.GroupMember, err error)
+ GetAllGroupMembersInfo(ctx context.Context, groupID string) (groupMembers []*model.GroupMember, err error)
+ FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) ([]*model.GroupMember, error)
+
+ GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
+ GetGroupOwner(ctx context.Context, groupID string) (*model.GroupMember, error)
+ GetGroupsOwner(ctx context.Context, groupIDs []string) ([]*model.GroupMember, error)
+ DelGroupRoleLevel(groupID string, roleLevel []int32) GroupCache
+ DelGroupAllRoleLevel(groupID string) GroupCache
+ DelGroupMembersInfo(groupID string, userID ...string) GroupCache
+ GetGroupRoleLevelMemberInfo(ctx context.Context, groupID string, roleLevel int32) ([]*model.GroupMember, error)
+ GetGroupRolesLevelMemberInfo(ctx context.Context, groupID string, roleLevels []int32) ([]*model.GroupMember, error)
+ GetGroupMemberNum(ctx context.Context, groupID string) (memberNum int64, err error)
+ DelGroupsMemberNum(groupID ...string) GroupCache
+
+ //FindSortGroupMemberUserIDs(ctx context.Context, groupID string) ([]string, error)
+ //FindSortJoinGroupIDs(ctx context.Context, userID string) ([]string, error)
+
+ DelMaxGroupMemberVersion(groupIDs ...string) GroupCache
+ DelMaxJoinGroupVersion(userIDs ...string) GroupCache
+ FindMaxGroupMemberVersion(ctx context.Context, groupID string) (*model.VersionLog, error)
+ BatchFindMaxGroupMemberVersion(ctx context.Context, groupIDs []string) ([]*model.VersionLog, error)
+ FindMaxJoinGroupVersion(ctx context.Context, userID string) (*model.VersionLog, error)
+}
diff --git a/pkg/common/storage/cache/mcache/minio.go b/pkg/common/storage/cache/mcache/minio.go
new file mode 100644
index 0000000..af06c7d
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/minio.go
@@ -0,0 +1,50 @@
+package mcache
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/openimsdk/tools/s3/minio"
+)
+
+func NewMinioCache(cache database.Cache) minio.Cache {
+ return &minioCache{
+ cache: cache,
+ expireTime: time.Hour * 24 * 7,
+ }
+}
+
+type minioCache struct {
+ cache database.Cache
+ expireTime time.Duration
+}
+
+func (g *minioCache) getObjectImageInfoKey(key string) string {
+ return cachekey.GetObjectImageInfoKey(key)
+}
+
+func (g *minioCache) getMinioImageThumbnailKey(key string, format string, width int, height int) string {
+ return cachekey.GetMinioImageThumbnailKey(key, format, width, height)
+}
+
+func (g *minioCache) DelObjectImageInfoKey(ctx context.Context, keys ...string) error {
+ ks := make([]string, 0, len(keys))
+ for _, key := range keys {
+ ks = append(ks, g.getObjectImageInfoKey(key))
+ }
+ return g.cache.Del(ctx, ks)
+}
+
+func (g *minioCache) DelImageThumbnailKey(ctx context.Context, key string, format string, width int, height int) error {
+ return g.cache.Del(ctx, []string{g.getMinioImageThumbnailKey(key, format, width, height)})
+}
+
+func (g *minioCache) GetImageObjectKeyInfo(ctx context.Context, key string, fn func(ctx context.Context) (*minio.ImageInfo, error)) (*minio.ImageInfo, error) {
+ return getCache[*minio.ImageInfo](ctx, g.cache, g.getObjectImageInfoKey(key), g.expireTime, fn)
+}
+
+func (g *minioCache) GetThumbnailKey(ctx context.Context, key string, format string, width int, height int, minioCache func(ctx context.Context) (string, error)) (string, error) {
+ return getCache[string](ctx, g.cache, g.getMinioImageThumbnailKey(key, format, width, height), g.expireTime, minioCache)
+}
diff --git a/pkg/common/storage/cache/mcache/msg_cache.go b/pkg/common/storage/cache/mcache/msg_cache.go
new file mode 100644
index 0000000..7fdf1bf
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/msg_cache.go
@@ -0,0 +1,132 @@
+package mcache
+
+import (
+ "context"
+ "strconv"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/lru"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+var (
+ memMsgCache lru.LRU[string, *model.MsgInfoModel]
+ initMemMsgCache sync.Once
+)
+
+func NewMsgCache(cache database.Cache, msgDocDatabase database.Msg) cache.MsgCache {
+ initMemMsgCache.Do(func() {
+ memMsgCache = lru.NewLazyLRU[string, *model.MsgInfoModel](1024*8, time.Hour, time.Second*10, localcache.EmptyTarget{}, nil)
+ })
+ return &msgCache{
+ cache: cache,
+ msgDocDatabase: msgDocDatabase,
+ memMsgCache: memMsgCache,
+ }
+}
+
+type msgCache struct {
+ cache database.Cache
+ msgDocDatabase database.Msg
+ memMsgCache lru.LRU[string, *model.MsgInfoModel]
+}
+
+func (x *msgCache) getSendMsgKey(id string) string {
+ return cachekey.GetSendMsgKey(id)
+}
+
+func (x *msgCache) SetSendMsgStatus(ctx context.Context, id string, status int32) error {
+ return x.cache.Set(ctx, x.getSendMsgKey(id), strconv.Itoa(int(status)), time.Hour*24)
+}
+
+func (x *msgCache) GetSendMsgStatus(ctx context.Context, id string) (int32, error) {
+ key := x.getSendMsgKey(id)
+ res, err := x.cache.Get(ctx, []string{key})
+ if err != nil {
+ return 0, err
+ }
+ val, ok := res[key]
+ if !ok {
+ return 0, errs.Wrap(redis.Nil)
+ }
+ status, err := strconv.Atoi(val)
+ if err != nil {
+ return 0, errs.WrapMsg(err, "GetSendMsgStatus strconv.Atoi error", "val", val)
+ }
+ return int32(status), nil
+}
+
+func (x *msgCache) getMsgCacheKey(conversationID string, seq int64) string {
+	return cachekey.GetMsgCacheKey(conversationID, seq)
+}
+
+func (x *msgCache) GetMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) ([]*model.MsgInfoModel, error) {
+ if len(seqs) == 0 {
+ return nil, nil
+ }
+ keys := make([]string, 0, len(seqs))
+ keySeq := make(map[string]int64, len(seqs))
+ for _, seq := range seqs {
+ key := x.getMsgCacheKey(conversationID, seq)
+ keys = append(keys, key)
+ keySeq[key] = seq
+ }
+ res, err := x.memMsgCache.GetBatch(keys, func(keys []string) (map[string]*model.MsgInfoModel, error) {
+ findSeqs := make([]int64, 0, len(keys))
+ for _, key := range keys {
+ seq, ok := keySeq[key]
+ if !ok {
+ continue
+ }
+ findSeqs = append(findSeqs, seq)
+ }
+		res, err := x.msgDocDatabase.FindSeqs(ctx, conversationID, findSeqs)
+ if err != nil {
+ return nil, err
+ }
+ kv := make(map[string]*model.MsgInfoModel)
+ for i := range res {
+ msg := res[i]
+ if msg == nil || msg.Msg == nil || msg.Msg.Seq <= 0 {
+ continue
+ }
+ key := x.getMsgCacheKey(conversationID, msg.Msg.Seq)
+ kv[key] = msg
+ }
+ return kv, nil
+ })
+ if err != nil {
+ return nil, err
+ }
+ return datautil.Values(res), nil
+}
+
+func (x *msgCache) DelMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) error {
+ if len(seqs) == 0 {
+ return nil
+ }
+ for _, seq := range seqs {
+ x.memMsgCache.Del(x.getMsgCacheKey(conversationID, seq))
+ }
+ return nil
+}
+
+func (x *msgCache) SetMessageBySeqs(ctx context.Context, conversationID string, msgs []*model.MsgInfoModel) error {
+ for i := range msgs {
+ msg := msgs[i]
+ if msg == nil || msg.Msg == nil || msg.Msg.Seq <= 0 {
+ continue
+ }
+ x.memMsgCache.Set(x.getMsgCacheKey(conversationID, msg.Msg.Seq), msg)
+ }
+ return nil
+}
diff --git a/pkg/common/storage/cache/mcache/online.go b/pkg/common/storage/cache/mcache/online.go
new file mode 100644
index 0000000..a7c898e
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/online.go
@@ -0,0 +1,82 @@
+package mcache
+
+import (
+ "context"
+ "sync"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+)
+
+var (
+ globalOnlineCache cache.OnlineCache
+ globalOnlineOnce sync.Once
+)
+
+func NewOnlineCache() cache.OnlineCache {
+ globalOnlineOnce.Do(func() {
+ globalOnlineCache = &onlineCache{
+ user: make(map[string]map[int32]struct{}),
+ }
+ })
+ return globalOnlineCache
+}
+
+type onlineCache struct {
+ lock sync.RWMutex
+ user map[string]map[int32]struct{}
+}
+
+func (x *onlineCache) GetOnline(ctx context.Context, userID string) ([]int32, error) {
+ x.lock.RLock()
+ defer x.lock.RUnlock()
+ pSet, ok := x.user[userID]
+ if !ok {
+ return nil, nil
+ }
+ res := make([]int32, 0, len(pSet))
+ for k := range pSet {
+ res = append(res, k)
+ }
+ return res, nil
+}
+
+func (x *onlineCache) SetUserOnline(ctx context.Context, userID string, online, offline []int32) error {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ pSet, ok := x.user[userID]
+ if ok {
+ for _, p := range offline {
+ delete(pSet, p)
+ }
+ }
+ if len(online) > 0 {
+ if !ok {
+ pSet = make(map[int32]struct{})
+ x.user[userID] = pSet
+ }
+ for _, p := range online {
+ pSet[p] = struct{}{}
+ }
+ }
+ if len(pSet) == 0 {
+ delete(x.user, userID)
+ }
+ return nil
+}
+
+func (x *onlineCache) GetAllOnlineUsers(ctx context.Context, cursor uint64) (map[string][]int32, uint64, error) {
+ if cursor != 0 {
+ return nil, 0, nil
+ }
+ x.lock.RLock()
+ defer x.lock.RUnlock()
+ res := make(map[string][]int32)
+ for k, v := range x.user {
+ pSet := make([]int32, 0, len(v))
+ for p := range v {
+ pSet = append(pSet, p)
+ }
+ res[k] = pSet
+ }
+ return res, 0, nil
+}
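The in-memory presence cache above keeps one platform set per user and drops the map entry once the set empties. A standalone sketch of that update rule, using a hypothetical `applyOnline` helper in place of the `onlineCache` methods:

```go
package main

import (
	"fmt"
	"sort"
)

// applyOnline mirrors SetUserOnline's update rule: remove offline
// platforms, add online ones, and delete the user entry when no
// platform remains.
func applyOnline(user map[string]map[int32]struct{}, userID string, online, offline []int32) {
	pSet, ok := user[userID]
	if ok {
		for _, p := range offline {
			delete(pSet, p)
		}
	}
	if len(online) > 0 {
		if !ok {
			pSet = make(map[int32]struct{})
			user[userID] = pSet
		}
		for _, p := range online {
			pSet[p] = struct{}{}
		}
	}
	if len(pSet) == 0 {
		delete(user, userID)
	}
}

// platforms returns the sorted platform IDs currently online for a user.
func platforms(user map[string]map[int32]struct{}, userID string) []int32 {
	res := make([]int32, 0, len(user[userID]))
	for p := range user[userID] {
		res = append(res, p)
	}
	sort.Slice(res, func(i, j int) bool { return res[i] < res[j] })
	return res
}

func main() {
	user := make(map[string]map[int32]struct{})
	applyOnline(user, "u1", []int32{1, 2}, nil) // u1 online on platforms 1 and 2
	applyOnline(user, "u1", nil, []int32{1})    // platform 1 goes offline
	fmt.Println(platforms(user, "u1"))          // [2]
	applyOnline(user, "u1", nil, []int32{2})    // last platform offline: entry removed
	fmt.Println(len(user))                      // 0
}
```

Note that this mirrors only the map bookkeeping; the real type also takes `lock sync.RWMutex` around every access.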
diff --git a/pkg/common/storage/cache/mcache/seq_conversation.go b/pkg/common/storage/cache/mcache/seq_conversation.go
new file mode 100644
index 0000000..6953ebb
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/seq_conversation.go
@@ -0,0 +1,79 @@
+package mcache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+)
+
+func NewSeqConversationCache(sc database.SeqConversation) cache.SeqConversationCache {
+ return &seqConversationCache{
+ sc: sc,
+ }
+}
+
+type seqConversationCache struct {
+ sc database.SeqConversation
+}
+
+func (x *seqConversationCache) Malloc(ctx context.Context, conversationID string, size int64) (int64, error) {
+ return x.sc.Malloc(ctx, conversationID, size)
+}
+
+func (x *seqConversationCache) SetMinSeq(ctx context.Context, conversationID string, seq int64) error {
+ return x.sc.SetMinSeq(ctx, conversationID, seq)
+}
+
+func (x *seqConversationCache) GetMinSeq(ctx context.Context, conversationID string) (int64, error) {
+ return x.sc.GetMinSeq(ctx, conversationID)
+}
+
+func (x *seqConversationCache) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
+ res := make(map[string]int64)
+ for _, conversationID := range conversationIDs {
+		seq, err := x.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ res[conversationID] = seq
+ }
+ return res, nil
+}
+
+func (x *seqConversationCache) GetMaxSeqsWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+ res := make(map[string]database.SeqTime)
+ for _, conversationID := range conversationIDs {
+		seq, err := x.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ res[conversationID] = database.SeqTime{Seq: seq}
+ }
+ return res, nil
+}
+
+func (x *seqConversationCache) GetMaxSeq(ctx context.Context, conversationID string) (int64, error) {
+ return x.sc.GetMaxSeq(ctx, conversationID)
+}
+
+func (x *seqConversationCache) GetMaxSeqWithTime(ctx context.Context, conversationID string) (database.SeqTime, error) {
+	seq, err := x.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return database.SeqTime{}, err
+ }
+ return database.SeqTime{Seq: seq}, nil
+}
+
+func (x *seqConversationCache) SetMinSeqs(ctx context.Context, seqs map[string]int64) error {
+ for conversationID, seq := range seqs {
+ if err := x.sc.SetMinSeq(ctx, conversationID, seq); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (x *seqConversationCache) GetCacheMaxSeqWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+ return x.GetMaxSeqsWithTime(ctx, conversationIDs)
+}
diff --git a/pkg/common/storage/cache/mcache/third.go b/pkg/common/storage/cache/mcache/third.go
new file mode 100644
index 0000000..b73d6a7
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/third.go
@@ -0,0 +1,98 @@
+package mcache
+
+import (
+ "context"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewThirdCache(cache database.Cache) cache.ThirdCache {
+ return &thirdCache{
+ cache: cache,
+ }
+}
+
+type thirdCache struct {
+ cache database.Cache
+}
+
+func (c *thirdCache) getGetuiTokenKey() string {
+ return cachekey.GetGetuiTokenKey()
+}
+
+func (c *thirdCache) getGetuiTaskIDKey() string {
+ return cachekey.GetGetuiTaskIDKey()
+}
+
+func (c *thirdCache) getUserBadgeUnreadCountSumKey(userID string) string {
+ return cachekey.GetUserBadgeUnreadCountSumKey(userID)
+}
+
+func (c *thirdCache) getFcmAccountTokenKey(account string, platformID int) string {
+ return cachekey.GetFcmAccountTokenKey(account, platformID)
+}
+
+func (c *thirdCache) get(ctx context.Context, key string) (string, error) {
+ res, err := c.cache.Get(ctx, []string{key})
+ if err != nil {
+ return "", err
+ }
+ if val, ok := res[key]; ok {
+ return val, nil
+ }
+ return "", errs.Wrap(redis.Nil)
+}
+
+func (c *thirdCache) SetFcmToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) (err error) {
+ return errs.Wrap(c.cache.Set(ctx, c.getFcmAccountTokenKey(account, platformID), fcmToken, time.Duration(expireTime)*time.Second))
+}
+
+func (c *thirdCache) GetFcmToken(ctx context.Context, account string, platformID int) (string, error) {
+ return c.get(ctx, c.getFcmAccountTokenKey(account, platformID))
+}
+
+func (c *thirdCache) DelFcmToken(ctx context.Context, account string, platformID int) error {
+ return c.cache.Del(ctx, []string{c.getFcmAccountTokenKey(account, platformID)})
+}
+
+func (c *thirdCache) IncrUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error) {
+ return c.cache.Incr(ctx, c.getUserBadgeUnreadCountSumKey(userID), 1)
+}
+
+func (c *thirdCache) SetUserBadgeUnreadCountSum(ctx context.Context, userID string, value int) error {
+ return c.cache.Set(ctx, c.getUserBadgeUnreadCountSumKey(userID), strconv.Itoa(value), 0)
+}
+
+func (c *thirdCache) GetUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error) {
+ str, err := c.get(ctx, c.getUserBadgeUnreadCountSumKey(userID))
+ if err != nil {
+ return 0, err
+ }
+ val, err := strconv.Atoi(str)
+ if err != nil {
+ return 0, errs.WrapMsg(err, "strconv.Atoi", "str", str)
+ }
+ return val, nil
+}
+
+func (c *thirdCache) SetGetuiToken(ctx context.Context, token string, expireTime int64) error {
+ return c.cache.Set(ctx, c.getGetuiTokenKey(), token, time.Duration(expireTime)*time.Second)
+}
+
+func (c *thirdCache) GetGetuiToken(ctx context.Context) (string, error) {
+ return c.get(ctx, c.getGetuiTokenKey())
+}
+
+func (c *thirdCache) SetGetuiTaskID(ctx context.Context, taskID string, expireTime int64) error {
+ return c.cache.Set(ctx, c.getGetuiTaskIDKey(), taskID, time.Duration(expireTime)*time.Second)
+}
+
+func (c *thirdCache) GetGetuiTaskID(ctx context.Context) (string, error) {
+ return c.get(ctx, c.getGetuiTaskIDKey())
+}
diff --git a/pkg/common/storage/cache/mcache/token.go b/pkg/common/storage/cache/mcache/token.go
new file mode 100644
index 0000000..2b7188d
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/token.go
@@ -0,0 +1,166 @@
+package mcache
+
+import (
+ "context"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+)
+
+func NewTokenCacheModel(cache database.Cache, accessExpire int64) cache.TokenModel {
+ c := &tokenCache{cache: cache}
+ c.accessExpire = c.getExpireTime(accessExpire)
+ return c
+}
+
+type tokenCache struct {
+ cache database.Cache
+ accessExpire time.Duration
+}
+
+func (x *tokenCache) getTokenKey(userID string, platformID int, token string) string {
+ return cachekey.GetTokenKey(userID, platformID) + ":" + token
+}
+
+func (x *tokenCache) SetTokenFlag(ctx context.Context, userID string, platformID int, token string, flag int) error {
+ return x.cache.Set(ctx, x.getTokenKey(userID, platformID, token), strconv.Itoa(flag), x.accessExpire)
+}
+
+// SetTokenFlagEx sets a token's flag with the configured expiration; in this implementation it delegates to SetTokenFlag.
+func (x *tokenCache) SetTokenFlagEx(ctx context.Context, userID string, platformID int, token string, flag int) error {
+ return x.SetTokenFlag(ctx, userID, platformID, token, flag)
+}
+
+func (x *tokenCache) GetTokensWithoutError(ctx context.Context, userID string, platformID int) (map[string]int, error) {
+ prefix := x.getTokenKey(userID, platformID, "")
+ m, err := x.cache.Prefix(ctx, prefix)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ mm := make(map[string]int)
+ for k, v := range m {
+ state, err := strconv.Atoi(v)
+ if err != nil {
+ log.ZError(ctx, "token value is not int", err, "value", v, "userID", userID, "platformID", platformID)
+ continue
+ }
+ mm[strings.TrimPrefix(k, prefix)] = state
+ }
+ return mm, nil
+}
+
+func (x *tokenCache) HasTemporaryToken(ctx context.Context, userID string, platformID int, token string) error {
+	key := cachekey.GetTemporaryTokenKey(userID, platformID, token)
+	res, err := x.cache.Get(ctx, []string{key})
+	if err != nil {
+		return err
+	}
+	if _, ok := res[key]; !ok {
+		return errs.New("temporary token not found", "key", key)
+	}
+	return nil
+}
+
+func (x *tokenCache) GetAllTokensWithoutError(ctx context.Context, userID string) (map[int]map[string]int, error) {
+ prefix := cachekey.UidPidToken + userID + ":"
+ tokens, err := x.cache.Prefix(ctx, prefix)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[int]map[string]int)
+ for key, flagStr := range tokens {
+ flag, err := strconv.Atoi(flagStr)
+ if err != nil {
+ log.ZError(ctx, "token value is not int", err, "key", key, "value", flagStr, "userID", userID)
+ continue
+ }
+		arr := strings.SplitN(strings.TrimPrefix(key, prefix), ":", 2)
+		if len(arr) != 2 {
+			log.ZError(ctx, "token key format is invalid", nil, "key", key, "userID", userID)
+			continue
+		}
+		platformID, err := strconv.Atoi(arr[0])
+		if err != nil {
+			log.ZError(ctx, "platformID in token key is not int", err, "key", key, "userID", userID)
+			continue
+		}
+		token := arr[1]
+		if token == "" {
+			log.ZError(ctx, "token in token key is empty", nil, "key", key, "userID", userID)
+			continue
+		}
+ tk, ok := res[platformID]
+ if !ok {
+ tk = make(map[string]int)
+ res[platformID] = tk
+ }
+ tk[token] = flag
+ }
+ return res, nil
+}
+
+func (x *tokenCache) SetTokenMapByUidPid(ctx context.Context, userID string, platformID int, m map[string]int) error {
+ for token, flag := range m {
+ err := x.SetTokenFlag(ctx, userID, platformID, token, flag)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (x *tokenCache) BatchSetTokenMapByUidPid(ctx context.Context, tokens map[string]map[string]any) error {
+ for prefix, tokenFlag := range tokens {
+ for token, flag := range tokenFlag {
+ flagStr := fmt.Sprintf("%v", flag)
+ if err := x.cache.Set(ctx, prefix+":"+token, flagStr, x.accessExpire); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+func (x *tokenCache) DeleteTokenByUidPid(ctx context.Context, userID string, platformID int, fields []string) error {
+ keys := make([]string, 0, len(fields))
+ for _, token := range fields {
+ keys = append(keys, x.getTokenKey(userID, platformID, token))
+ }
+ return x.cache.Del(ctx, keys)
+}
+
+func (x *tokenCache) getExpireTime(t int64) time.Duration {
+ return time.Hour * 24 * time.Duration(t)
+}
+
+func (x *tokenCache) DeleteTokenByTokenMap(ctx context.Context, userID string, tokens map[int][]string) error {
+ keys := make([]string, 0, len(tokens))
+ for platformID, ts := range tokens {
+ for _, t := range ts {
+ keys = append(keys, x.getTokenKey(userID, platformID, t))
+ }
+ }
+ return x.cache.Del(ctx, keys)
+}
+
+func (x *tokenCache) DeleteAndSetTemporary(ctx context.Context, userID string, platformID int, fields []string) error {
+ keys := make([]string, 0, len(fields))
+ for _, f := range fields {
+ keys = append(keys, x.getTokenKey(userID, platformID, f))
+ }
+ if err := x.cache.Del(ctx, keys); err != nil {
+ return err
+ }
+
+ for _, f := range fields {
+ k := cachekey.GetTemporaryTokenKey(userID, platformID, f)
+ if err := x.cache.Set(ctx, k, "", x.accessExpire); err != nil {
+ return errs.Wrap(err)
+ }
+ }
+
+ return nil
+}
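Token entries are stored flat as `<prefix><userID>:<platformID>:<token>`, and `GetAllTokensWithoutError` recovers the platform and token by splitting the suffix. A standalone sketch of that parsing step (the prefix literal here is illustrative, not the package's real `cachekey` constant):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTokenKey splits the "<platformID>:<token>" suffix of a flat token
// key, as GetAllTokensWithoutError does with strings.SplitN.
func parseTokenKey(key, prefix string) (platformID int, token string, ok bool) {
	arr := strings.SplitN(strings.TrimPrefix(key, prefix), ":", 2)
	if len(arr) != 2 || arr[1] == "" {
		return 0, "", false
	}
	pid, err := strconv.Atoi(arr[0])
	if err != nil {
		return 0, "", false
	}
	return pid, arr[1], true
}

func main() {
	prefix := "UID_PID_TOKEN_STATUS:u100:" // illustrative prefix
	pid, token, ok := parseTokenKey(prefix+"9:abc.def", prefix)
	fmt.Println(pid, token, ok) // 9 abc.def true
}
```

SplitN with a limit of 2 is what keeps tokens containing further `:` characters intact.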
diff --git a/pkg/common/storage/cache/mcache/tools.go b/pkg/common/storage/cache/mcache/tools.go
new file mode 100644
index 0000000..7648edd
--- /dev/null
+++ b/pkg/common/storage/cache/mcache/tools.go
@@ -0,0 +1,63 @@
+package mcache
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/openimsdk/tools/log"
+)
+
+func getCache[V any](ctx context.Context, cache database.Cache, key string, expireTime time.Duration, fn func(ctx context.Context) (V, error)) (V, error) {
+ getDB := func() (V, bool, error) {
+ res, err := cache.Get(ctx, []string{key})
+ if err != nil {
+ var val V
+ return val, false, err
+ }
+ var val V
+		if str, ok := res[key]; ok {
+			if err := json.Unmarshal([]byte(str), &val); err != nil {
+				return val, false, err
+			}
+ return val, true, nil
+ }
+ return val, false, nil
+ }
+ dbVal, ok, err := getDB()
+ if err != nil {
+ return dbVal, err
+ }
+ if ok {
+ return dbVal, nil
+ }
+ lockValue, err := cache.Lock(ctx, key, time.Minute)
+ if err != nil {
+ return dbVal, err
+ }
+ defer func() {
+ if err := cache.Unlock(ctx, key, lockValue); err != nil {
+ log.ZError(ctx, "unlock cache key", err, "key", key, "value", lockValue)
+ }
+ }()
+ dbVal, ok, err = getDB()
+ if err != nil {
+ return dbVal, err
+ }
+ if ok {
+ return dbVal, nil
+ }
+ val, err := fn(ctx)
+ if err != nil {
+ return val, err
+ }
+ data, err := json.Marshal(val)
+ if err != nil {
+ return val, err
+ }
+ if err := cache.Set(ctx, key, string(data), expireTime); err != nil {
+ return val, err
+ }
+ return val, nil
+}
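`getCache` above is a read-through fill: check the store, take a lock, re-check (another writer may have filled it while we waited), then load from the source and cache the result. A minimal single-process sketch of the read-through part, with a toy `toyStore` in place of `database.Cache` (the real code adds a cross-process lock, a second check after acquiring it, and JSON encoding):

```go
package main

import (
	"fmt"
	"sync"
)

// toyStore stands in for database.Cache: a map plus an in-process lock.
type toyStore struct {
	mu   sync.Mutex
	data map[string]string
}

// getOrLoad returns the cached value if present, otherwise invokes the
// loader once and stores the result before returning it.
func (s *toyStore) getOrLoad(key string, load func() (string, error)) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if v, ok := s.data[key]; ok {
		return v, nil // cache hit: loader is not called
	}
	v, err := load()
	if err != nil {
		return "", err
	}
	s.data[key] = v
	return v, nil
}

func main() {
	s := &toyStore{data: map[string]string{}}
	calls := 0
	load := func() (string, error) { calls++; return "value", nil }
	v1, _ := s.getOrLoad("k", load)
	v2, _ := s.getOrLoad("k", load) // second call served from the store
	fmt.Println(v1, v2, calls)      // value value 1
}
```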
diff --git a/pkg/common/storage/cache/msg.go b/pkg/common/storage/cache/msg.go
new file mode 100644
index 0000000..1c6f061
--- /dev/null
+++ b/pkg/common/storage/cache/msg.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+type MsgCache interface {
+ SetSendMsgStatus(ctx context.Context, id string, status int32) error
+ GetSendMsgStatus(ctx context.Context, id string) (int32, error)
+
+ GetMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) ([]*model.MsgInfoModel, error)
+ DelMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) error
+ SetMessageBySeqs(ctx context.Context, conversationID string, msgs []*model.MsgInfoModel) error
+}
diff --git a/pkg/common/storage/cache/online.go b/pkg/common/storage/cache/online.go
new file mode 100644
index 0000000..d21ae61
--- /dev/null
+++ b/pkg/common/storage/cache/online.go
@@ -0,0 +1,9 @@
+package cache
+
+import "context"
+
+type OnlineCache interface {
+ GetOnline(ctx context.Context, userID string) ([]int32, error)
+ SetUserOnline(ctx context.Context, userID string, online, offline []int32) error
+ GetAllOnlineUsers(ctx context.Context, cursor uint64) (map[string][]int32, uint64, error)
+}
diff --git a/pkg/common/storage/cache/redis/batch.go b/pkg/common/storage/cache/redis/batch.go
new file mode 100644
index 0000000..12df0b8
--- /dev/null
+++ b/pkg/common/storage/cache/redis/batch.go
@@ -0,0 +1,135 @@
+package redis
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "github.com/dtm-labs/rockscache"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+// GetRocksCacheOptions returns the default configuration options for RocksCache.
+func GetRocksCacheOptions() *rockscache.Options {
+ opts := rockscache.NewDefaultOptions()
+ opts.LockExpire = rocksCacheTimeout
+ opts.WaitReplicasTimeout = rocksCacheTimeout
+ opts.StrongConsistency = true
+ opts.RandomExpireAdjustment = 0.2
+
+ return &opts
+}
+
+func newRocksCacheClient(rdb redis.UniversalClient) *rocksCacheClient {
+ if rdb == nil {
+ return &rocksCacheClient{}
+ }
+ rc := &rocksCacheClient{
+ rdb: rdb,
+ client: rockscache.NewClient(rdb, *GetRocksCacheOptions()),
+ }
+ return rc
+}
+
+type rocksCacheClient struct {
+ rdb redis.UniversalClient
+ client *rockscache.Client
+}
+
+func (x *rocksCacheClient) GetClient() *rockscache.Client {
+ return x.client
+}
+
+func (x *rocksCacheClient) Disable() bool {
+ return x.client == nil
+}
+
+func (x *rocksCacheClient) GetRedis() redis.UniversalClient {
+ return x.rdb
+}
+
+func (x *rocksCacheClient) GetBatchDeleter(topics ...string) cache.BatchDeleter {
+ return NewBatchDeleterRedis(x, topics)
+}
+
+func batchGetCache2[K comparable, V any](ctx context.Context, rcClient *rocksCacheClient, expire time.Duration, ids []K, idKey func(id K) string, vId func(v *V) K, fn func(ctx context.Context, ids []K) ([]*V, error)) ([]*V, error) {
+ if len(ids) == 0 {
+ return nil, nil
+ }
+ if rcClient.Disable() {
+ return fn(ctx, ids)
+ }
+ findKeys := make([]string, 0, len(ids))
+ keyId := make(map[string]K)
+ for _, id := range ids {
+ key := idKey(id)
+ if _, ok := keyId[key]; ok {
+ continue
+ }
+ keyId[key] = id
+ findKeys = append(findKeys, key)
+ }
+ slotKeys, err := groupKeysBySlot(ctx, rcClient.GetRedis(), findKeys)
+ if err != nil {
+ return nil, err
+ }
+ result := make([]*V, 0, len(findKeys))
+ for _, keys := range slotKeys {
+ indexCache, err := rcClient.GetClient().FetchBatch2(ctx, keys, expire, func(idx []int) (map[int]string, error) {
+ queryIds := make([]K, 0, len(idx))
+ idIndex := make(map[K]int)
+ for _, index := range idx {
+ id := keyId[keys[index]]
+ idIndex[id] = index
+ queryIds = append(queryIds, id)
+ }
+ values, err := fn(ctx, queryIds)
+ if err != nil {
+ log.ZError(ctx, "batchGetCache query database failed", err, "keys", keys, "queryIds", queryIds)
+ return nil, err
+ }
+ if len(values) == 0 {
+ return map[int]string{}, nil
+ }
+ cacheIndex := make(map[int]string)
+ for _, value := range values {
+ id := vId(value)
+ index, ok := idIndex[id]
+ if !ok {
+ continue
+ }
+ bs, err := json.Marshal(value)
+ if err != nil {
+ log.ZError(ctx, "marshal failed", err)
+ return nil, err
+ }
+ cacheIndex[index] = string(bs)
+ }
+ return cacheIndex, nil
+ })
+ if err != nil {
+ return nil, errs.WrapMsg(err, "FetchBatch2 failed")
+ }
+ for index, data := range indexCache {
+ if data == "" {
+ continue
+ }
+ var value V
+ if err := json.Unmarshal([]byte(data), &value); err != nil {
+ return nil, errs.WrapMsg(err, "Unmarshal failed")
+ }
+ if cb, ok := any(&value).(BatchCacheCallback[K]); ok {
+ cb.BatchCache(keyId[keys[index]])
+ }
+ result = append(result, &value)
+ }
+ }
+ return result, nil
+}
+
+type BatchCacheCallback[K comparable] interface {
+ BatchCache(id K)
+}
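The generic helper above dedupes the requested ids by cache key, batch-loads only the unique ids, and maps each loaded value back to its id. A minimal standalone sketch of that dedupe-and-map-back flow, with the Redis slot grouping, JSON serialization, and rockscache integration omitted (the `user` type and callbacks are hypothetical):

```go
package main

import "fmt"

type user struct {
	ID   string
	Name string
}

// batchGet mirrors the shape of batchGetCache2: dedupe ids by cache key,
// batch-load the unique ids once, and map each loaded value back via vID.
func batchGet(ids []string, idKey func(string) string, vID func(*user) string, fn func([]string) []*user) map[string]*user {
	keyID := make(map[string]string) // cache key -> id; also dedupes ids
	unique := make([]string, 0, len(ids))
	for _, id := range ids {
		key := idKey(id)
		if _, ok := keyID[key]; ok {
			continue
		}
		keyID[key] = id
		unique = append(unique, id)
	}
	out := make(map[string]*user, len(unique))
	for _, v := range fn(unique) { // one batched backend call
		out[vID(v)] = v
	}
	return out
}

func main() {
	res := batchGet(
		[]string{"u1", "u2", "u1"}, // the duplicate id is queried once
		func(id string) string { return "cache:user:" + id },
		func(u *user) string { return u.ID },
		func(ids []string) []*user {
			users := make([]*user, 0, len(ids))
			for _, id := range ids {
				users = append(users, &user{ID: id, Name: "name-" + id})
			}
			return users
		},
	)
	fmt.Println(len(res), res["u1"].Name)
}
```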
diff --git a/pkg/common/storage/cache/redis/batch_handler.go b/pkg/common/storage/cache/redis/batch_handler.go
new file mode 100644
index 0000000..3ced597
--- /dev/null
+++ b/pkg/common/storage/cache/redis/batch_handler.go
@@ -0,0 +1,149 @@
+package redis
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "github.com/dtm-labs/rockscache"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+ rocksCacheTimeout = 11 * time.Second
+)
+
+// BatchDeleterRedis is a concrete implementation of the BatchDeleter interface based on Redis and RocksCache.
+type BatchDeleterRedis struct {
+ redisClient redis.UniversalClient
+ keys []string
+ rocksClient *rockscache.Client
+ redisPubTopics []string
+}
+
+// NewBatchDeleterRedis creates a new BatchDeleterRedis instance.
+func NewBatchDeleterRedis(rcClient *rocksCacheClient, redisPubTopics []string) *BatchDeleterRedis {
+ return &BatchDeleterRedis{
+ redisClient: rcClient.GetRedis(),
+ rocksClient: rcClient.GetClient(),
+ redisPubTopics: redisPubTopics,
+ }
+}
+
+// ExecDelWithKeys directly takes keys for batch deletion and publishes deletion information.
+func (c *BatchDeleterRedis) ExecDelWithKeys(ctx context.Context, keys []string) error {
+ distinctKeys := datautil.Distinct(keys)
+ return c.execDel(ctx, distinctKeys)
+}
+
+// ChainExecDel executes batch deletion of the keys accumulated through chained calls. Callers must obtain a copy via Clone first to avoid polluting the shared key list.
+func (c *BatchDeleterRedis) ChainExecDel(ctx context.Context) error {
+ distinctKeys := datautil.Distinct(c.keys)
+ return c.execDel(ctx, distinctKeys)
+}
+
+// execDel performs batch deletion and publishes the keys that have been deleted to update the local cache information of other nodes.
+func (c *BatchDeleterRedis) execDel(ctx context.Context, keys []string) error {
+ if len(keys) > 0 {
+ log.ZDebug(ctx, "delete cache", "topic", c.redisPubTopics, "keys", keys)
+ // Batch delete keys
+ err := ProcessKeysBySlot(ctx, c.redisClient, keys, func(ctx context.Context, slot int64, keys []string) error {
+ return c.rocksClient.TagAsDeletedBatch2(ctx, keys)
+ })
+ if err != nil {
+ return err
+ }
+ // Publish the keys that have been deleted to Redis to update the local cache information of other nodes
+		if len(c.redisPubTopics) > 0 { // keys is already known to be non-empty here
+ keysByTopic := localcache.GetPublishKeysByTopic(c.redisPubTopics, keys)
+ for topic, keys := range keysByTopic {
+ if len(keys) > 0 {
+ data, err := json.Marshal(keys)
+ if err != nil {
+ log.ZWarn(ctx, "keys json marshal failed", err, "topic", topic, "keys", keys)
+ } else {
+ if err := c.redisClient.Publish(ctx, topic, string(data)).Err(); err != nil {
+ log.ZWarn(ctx, "redis publish cache delete error", err, "topic", topic, "keys", keys)
+ }
+ }
+ }
+ }
+ }
+ }
+ return nil
+}
+
+// Clone creates a copy of BatchDeleterRedis for chain calls to prevent memory pollution.
+func (c *BatchDeleterRedis) Clone() cache.BatchDeleter {
+ return &BatchDeleterRedis{
+ redisClient: c.redisClient,
+ keys: c.keys,
+ rocksClient: c.rocksClient,
+ redisPubTopics: c.redisPubTopics,
+ }
+}
+
+// AddKeys adds keys to be deleted.
+func (c *BatchDeleterRedis) AddKeys(keys ...string) {
+ c.keys = append(c.keys, keys...)
+}
+
+type disableBatchDeleter struct{}
+
+func (x disableBatchDeleter) ChainExecDel(ctx context.Context) error {
+ return nil
+}
+
+func (x disableBatchDeleter) ExecDelWithKeys(ctx context.Context, keys []string) error {
+ return nil
+}
+
+func (x disableBatchDeleter) Clone() cache.BatchDeleter {
+ return x
+}
+
+func (x disableBatchDeleter) AddKeys(keys ...string) {}
+
+func getCache[T any](ctx context.Context, rcClient *rocksCacheClient, key string, expire time.Duration, fn func(ctx context.Context) (T, error)) (T, error) {
+ if rcClient.Disable() {
+ return fn(ctx)
+ }
+ var t T
+ var write bool
+ v, err := rcClient.GetClient().Fetch2(ctx, key, expire, func() (s string, err error) {
+ t, err = fn(ctx)
+ if err != nil {
+ return "", err
+ }
+ bs, err := json.Marshal(t)
+ if err != nil {
+ return "", errs.WrapMsg(err, "marshal failed")
+ }
+ write = true
+
+ return string(bs), nil
+ })
+ if err != nil {
+ return t, errs.Wrap(err)
+ }
+ if write {
+ return t, nil
+ }
+ if v == "" {
+ return t, errs.ErrRecordNotFound.WrapMsg("cache is not found")
+ }
+ err = json.Unmarshal([]byte(v), &t)
+ if err != nil {
+ errInfo := fmt.Sprintf("cache json.Unmarshal failed, key:%s, value:%s, expire:%s", key, v, expire)
+ return t, errs.WrapMsg(err, errInfo)
+ }
+
+ return t, nil
+}
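The read-through shape of `getCache` can be sketched in isolation: serve from the cache when the key is present, otherwise load via `fn`, serialize the result into the cache, and return it. This is a simplified single-process stand-in using an in-memory map; the real function delegates to rockscache's `Fetch2`, which additionally handles distributed locking, strong consistency, and empty-placeholder entries:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// store is a stand-in for Redis: key -> serialized JSON.
var store = map[string]string{}

var errNotFound = errors.New("record not found")

// getCached mirrors the read-through flow of getCache: hit the cache first,
// fall back to fn on a miss, and write the marshaled result back.
func getCached[T any](key string, fn func() (T, error)) (T, error) {
	var t T
	if v, ok := store[key]; ok {
		if v == "" {
			return t, errNotFound // cached "missing" placeholder
		}
		err := json.Unmarshal([]byte(v), &t)
		return t, err
	}
	t, err := fn()
	if err != nil {
		return t, err
	}
	bs, err := json.Marshal(t)
	if err != nil {
		return t, err
	}
	store[key] = string(bs)
	return t, nil
}

func main() {
	calls := 0
	load := func() ([]string, error) { calls++; return []string{"x", "y"}, nil }
	a, _ := getCached("ids", load)
	b, _ := getCached("ids", load) // second read is served from the cache
	fmt.Println(calls, a, b)
}
```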
diff --git a/pkg/common/storage/cache/redis/batch_test.go b/pkg/common/storage/cache/redis/batch_test.go
new file mode 100644
index 0000000..878dc34
--- /dev/null
+++ b/pkg/common/storage/cache/redis/batch_test.go
@@ -0,0 +1,56 @@
+package redis
+
+import (
+ "context"
+ "testing"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+)
+
+func TestGetUserReadSeqs(t *testing.T) {
+ ctx := context.Background()
+ rdb, err := redisutil.NewRedisClient(ctx, (&config.Redis{
+ Address: []string{"172.16.8.48:16379"},
+ Password: "openIM123",
+ DB: 3,
+ }).Build())
+ if err != nil {
+ panic(err)
+ }
+ mgocli, err := mongoutil.NewMongoDB(ctx, (&config.Mongo{
+ Address: []string{"172.16.8.48:37017"},
+ Database: "openim_v3",
+ Username: "openIM",
+ Password: "openIM123",
+ MaxPoolSize: 100,
+ MaxRetry: 1,
+ }).Build())
+ if err != nil {
+ panic(err)
+ }
+ mgoSeqUser, err := mgo.NewSeqUserMongo(mgocli.GetDB())
+ if err != nil {
+ panic(err)
+ }
+ seqUser := NewSeqUserCacheRedis(rdb, mgoSeqUser)
+
+ res, err := seqUser.GetUserReadSeqs(ctx, "2110910952", []string{"sg_2920732023", "sg_345762580"})
+ if err != nil {
+ panic(err)
+ }
+
+ t.Log(res)
+
+}
diff --git a/pkg/common/storage/cache/redis/black.go b/pkg/common/storage/cache/redis/black.go
new file mode 100644
index 0000000..d28523d
--- /dev/null
+++ b/pkg/common/storage/cache/redis/black.go
@@ -0,0 +1,65 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+	blackExpireTime = 12 * time.Hour
+)
+
+type BlackCacheRedis struct {
+ cache.BatchDeleter
+ expireTime time.Duration
+ rcClient *rocksCacheClient
+ blackDB database.Black
+}
+
+func NewBlackCacheRedis(rdb redis.UniversalClient, localCache *config.LocalCache, blackDB database.Black) cache.BlackCache {
+ rc := newRocksCacheClient(rdb)
+ return &BlackCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(localCache.Friend.Topic),
+ expireTime: blackExpireTime,
+ rcClient: rc,
+ blackDB: blackDB,
+ }
+}
+
+func (b *BlackCacheRedis) CloneBlackCache() cache.BlackCache {
+ return &BlackCacheRedis{
+ BatchDeleter: b.BatchDeleter.Clone(),
+ expireTime: b.expireTime,
+ rcClient: b.rcClient,
+ blackDB: b.blackDB,
+ }
+}
+
+func (b *BlackCacheRedis) getBlackIDsKey(ownerUserID string) string {
+ return cachekey.GetBlackIDsKey(ownerUserID)
+}
+
+func (b *BlackCacheRedis) GetBlackIDs(ctx context.Context, userID string) (blackIDs []string, err error) {
+ return getCache(
+ ctx,
+ b.rcClient,
+ b.getBlackIDsKey(userID),
+ b.expireTime,
+ func(ctx context.Context) ([]string, error) {
+ return b.blackDB.FindBlackUserIDs(ctx, userID)
+ },
+ )
+}
+
+func (b *BlackCacheRedis) DelBlackIDs(_ context.Context, userID string) cache.BlackCache {
+ cache := b.CloneBlackCache()
+ cache.AddKeys(b.getBlackIDsKey(userID))
+
+ return cache
+}
diff --git a/pkg/common/storage/cache/redis/client_config.go b/pkg/common/storage/cache/redis/client_config.go
new file mode 100644
index 0000000..a153369
--- /dev/null
+++ b/pkg/common/storage/cache/redis/client_config.go
@@ -0,0 +1,69 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewClientConfigCache(rdb redis.UniversalClient, mgo database.ClientConfig) cache.ClientConfigCache {
+ rc := newRocksCacheClient(rdb)
+ return &ClientConfigCache{
+ mgo: mgo,
+ rcClient: rc,
+ delete: rc.GetBatchDeleter(),
+ }
+}
+
+type ClientConfigCache struct {
+ mgo database.ClientConfig
+ rcClient *rocksCacheClient
+ delete cache.BatchDeleter
+}
+
+func (x *ClientConfigCache) getExpireTime(userID string) time.Duration {
+	if userID == "" {
+		return 24 * time.Hour // the global config is cached longer
+	}
+	return time.Hour
+}
+
+func (x *ClientConfigCache) getClientConfigKey(userID string) string {
+ return cachekey.GetClientConfigKey(userID)
+}
+
+func (x *ClientConfigCache) GetConfig(ctx context.Context, userID string) (map[string]string, error) {
+ return getCache(ctx, x.rcClient, x.getClientConfigKey(userID), x.getExpireTime(userID), func(ctx context.Context) (map[string]string, error) {
+ return x.mgo.Get(ctx, userID)
+ })
+}
+
+func (x *ClientConfigCache) DeleteUserCache(ctx context.Context, userIDs []string) error {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, x.getClientConfigKey(userID))
+ }
+ return x.delete.ExecDelWithKeys(ctx, keys)
+}
+
+func (x *ClientConfigCache) GetUserConfig(ctx context.Context, userID string) (map[string]string, error) {
+ config, err := x.GetConfig(ctx, "")
+ if err != nil {
+ return nil, err
+ }
+ if userID != "" {
+ userConfig, err := x.GetConfig(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ for k, v := range userConfig {
+ config[k] = v
+ }
+ }
+ return config, nil
+}
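`GetUserConfig` above layers per-user settings on top of the global config (fetched with an empty userID), with user keys winning on conflict. The merge itself reduces to this sketch:

```go
package main

import "fmt"

// mergeConfig overlays per-user settings on top of the global defaults,
// matching the merge loop in GetUserConfig: user keys win on conflict.
func mergeConfig(global, user map[string]string) map[string]string {
	out := make(map[string]string, len(global)+len(user))
	for k, v := range global {
		out[k] = v
	}
	for k, v := range user {
		out[k] = v // per-user value overrides the global one
	}
	return out
}

func main() {
	global := map[string]string{"theme": "light", "lang": "en"}
	user := map[string]string{"theme": "dark"}
	fmt.Println(mergeConfig(global, user)["theme"], mergeConfig(global, user)["lang"])
}
```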
diff --git a/pkg/common/storage/cache/redis/conversation.go b/pkg/common/storage/cache/redis/conversation.go
new file mode 100644
index 0000000..2d42e33
--- /dev/null
+++ b/pkg/common/storage/cache/redis/conversation.go
@@ -0,0 +1,276 @@
+package redis
+
+import (
+ "context"
+ "math/big"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/encrypt"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+	conversationExpireTime = 12 * time.Hour
+)
+
+func NewConversationRedis(rdb redis.UniversalClient, localCache *config.LocalCache, db database.Conversation) cache.ConversationCache {
+ rc := newRocksCacheClient(rdb)
+ return &ConversationRedisCache{
+ BatchDeleter: rc.GetBatchDeleter(localCache.Conversation.Topic),
+ rcClient: rc,
+ conversationDB: db,
+ expireTime: conversationExpireTime,
+ }
+}
+
+type ConversationRedisCache struct {
+ cache.BatchDeleter
+ rcClient *rocksCacheClient
+ conversationDB database.Conversation
+ expireTime time.Duration
+}
+
+func (c *ConversationRedisCache) CloneConversationCache() cache.ConversationCache {
+ return &ConversationRedisCache{
+ BatchDeleter: c.BatchDeleter.Clone(),
+ rcClient: c.rcClient,
+ conversationDB: c.conversationDB,
+ expireTime: c.expireTime,
+ }
+}
+
+func (c *ConversationRedisCache) getConversationKey(ownerUserID, conversationID string) string {
+ return cachekey.GetConversationKey(ownerUserID, conversationID)
+}
+
+func (c *ConversationRedisCache) getConversationIDsKey(ownerUserID string) string {
+ return cachekey.GetConversationIDsKey(ownerUserID)
+}
+
+func (c *ConversationRedisCache) getNotNotifyConversationIDsKey(ownerUserID string) string {
+ return cachekey.GetNotNotifyConversationIDsKey(ownerUserID)
+}
+
+func (c *ConversationRedisCache) getPinnedConversationIDsKey(ownerUserID string) string {
+ return cachekey.GetPinnedConversationIDs(ownerUserID)
+}
+
+func (c *ConversationRedisCache) getSuperGroupRecvNotNotifyUserIDsKey(groupID string) string {
+ return cachekey.GetSuperGroupRecvNotNotifyUserIDsKey(groupID)
+}
+
+func (c *ConversationRedisCache) getRecvMsgOptKey(ownerUserID, conversationID string) string {
+ return cachekey.GetRecvMsgOptKey(ownerUserID, conversationID)
+}
+
+func (c *ConversationRedisCache) getSuperGroupRecvNotNotifyUserIDsHashKey(groupID string) string {
+ return cachekey.GetSuperGroupRecvNotNotifyUserIDsHashKey(groupID)
+}
+
+func (c *ConversationRedisCache) getConversationHasReadSeqKey(ownerUserID, conversationID string) string {
+ return cachekey.GetConversationHasReadSeqKey(ownerUserID, conversationID)
+}
+
+func (c *ConversationRedisCache) getConversationNotReceiveMessageUserIDsKey(conversationID string) string {
+ return cachekey.GetConversationNotReceiveMessageUserIDsKey(conversationID)
+}
+
+func (c *ConversationRedisCache) getUserConversationIDsHashKey(ownerUserID string) string {
+ return cachekey.GetUserConversationIDsHashKey(ownerUserID)
+}
+
+func (c *ConversationRedisCache) getConversationUserMaxVersionKey(ownerUserID string) string {
+ return cachekey.GetConversationUserMaxVersionKey(ownerUserID)
+}
+
+func (c *ConversationRedisCache) GetUserConversationIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+ return getCache(ctx, c.rcClient, c.getConversationIDsKey(ownerUserID), c.expireTime, func(ctx context.Context) ([]string, error) {
+ return c.conversationDB.FindUserIDAllConversationID(ctx, ownerUserID)
+ })
+}
+
+func (c *ConversationRedisCache) GetUserNotNotifyConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ return getCache(ctx, c.rcClient, c.getNotNotifyConversationIDsKey(userID), c.expireTime, func(ctx context.Context) ([]string, error) {
+ return c.conversationDB.FindUserIDAllNotNotifyConversationID(ctx, userID)
+ })
+}
+
+func (c *ConversationRedisCache) GetPinnedConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ return getCache(ctx, c.rcClient, c.getPinnedConversationIDsKey(userID), c.expireTime, func(ctx context.Context) ([]string, error) {
+ return c.conversationDB.FindUserIDAllPinnedConversationID(ctx, userID)
+ })
+}
+
+func (c *ConversationRedisCache) DelConversationIDs(userIDs ...string) cache.ConversationCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, c.getConversationIDsKey(userID))
+ }
+ cache := c.CloneConversationCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (c *ConversationRedisCache) GetUserConversationIDsHash(ctx context.Context, ownerUserID string) (hash uint64, err error) {
+ return getCache(
+ ctx,
+ c.rcClient,
+ c.getUserConversationIDsHashKey(ownerUserID),
+ c.expireTime,
+ func(ctx context.Context) (uint64, error) {
+ conversationIDs, err := c.GetUserConversationIDs(ctx, ownerUserID)
+ if err != nil {
+ return 0, err
+ }
+ datautil.Sort(conversationIDs, true)
+ bi := big.NewInt(0)
+ bi.SetString(encrypt.Md5(strings.Join(conversationIDs, ";"))[0:8], 16)
+ return bi.Uint64(), nil
+ },
+ )
+}
+
+func (c *ConversationRedisCache) DelUserConversationIDsHash(ownerUserIDs ...string) cache.ConversationCache {
+ keys := make([]string, 0, len(ownerUserIDs))
+ for _, ownerUserID := range ownerUserIDs {
+ keys = append(keys, c.getUserConversationIDsHashKey(ownerUserID))
+ }
+ cache := c.CloneConversationCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (c *ConversationRedisCache) GetConversation(ctx context.Context, ownerUserID, conversationID string) (*model.Conversation, error) {
+ return getCache(ctx, c.rcClient, c.getConversationKey(ownerUserID, conversationID), c.expireTime, func(ctx context.Context) (*model.Conversation, error) {
+ return c.conversationDB.Take(ctx, ownerUserID, conversationID)
+ })
+}
+
+func (c *ConversationRedisCache) DelConversations(ownerUserID string, conversationIDs ...string) cache.ConversationCache {
+ keys := make([]string, 0, len(conversationIDs))
+ for _, conversationID := range conversationIDs {
+ keys = append(keys, c.getConversationKey(ownerUserID, conversationID))
+ }
+ cache := c.CloneConversationCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (c *ConversationRedisCache) GetConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*model.Conversation, error) {
+ return batchGetCache2(ctx, c.rcClient, c.expireTime, conversationIDs, func(conversationID string) string {
+ return c.getConversationKey(ownerUserID, conversationID)
+ }, func(conversation *model.Conversation) string {
+ return conversation.ConversationID
+ }, func(ctx context.Context, conversationIDs []string) ([]*model.Conversation, error) {
+ return c.conversationDB.Find(ctx, ownerUserID, conversationIDs)
+ })
+}
+
+func (c *ConversationRedisCache) GetUserAllConversations(ctx context.Context, ownerUserID string) ([]*model.Conversation, error) {
+ conversationIDs, err := c.GetUserConversationIDs(ctx, ownerUserID)
+ if err != nil {
+ return nil, err
+ }
+ return c.GetConversations(ctx, ownerUserID, conversationIDs)
+}
+
+func (c *ConversationRedisCache) GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error) {
+ return getCache(ctx, c.rcClient, c.getRecvMsgOptKey(ownerUserID, conversationID), c.expireTime, func(ctx context.Context) (opt int, err error) {
+ return c.conversationDB.GetUserRecvMsgOpt(ctx, ownerUserID, conversationID)
+ })
+}
+
+func (c *ConversationRedisCache) DelUsersConversation(conversationID string, ownerUserIDs ...string) cache.ConversationCache {
+ keys := make([]string, 0, len(ownerUserIDs))
+ for _, ownerUserID := range ownerUserIDs {
+ keys = append(keys, c.getConversationKey(ownerUserID, conversationID))
+ }
+ cache := c.CloneConversationCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (c *ConversationRedisCache) DelUserRecvMsgOpt(ownerUserID, conversationID string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ cache.AddKeys(c.getRecvMsgOptKey(ownerUserID, conversationID))
+
+ return cache
+}
+
+func (c *ConversationRedisCache) DelSuperGroupRecvMsgNotNotifyUserIDs(groupID string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ cache.AddKeys(c.getSuperGroupRecvNotNotifyUserIDsKey(groupID))
+
+ return cache
+}
+
+func (c *ConversationRedisCache) DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ cache.AddKeys(c.getSuperGroupRecvNotNotifyUserIDsHashKey(groupID))
+
+ return cache
+}
+
+func (c *ConversationRedisCache) DelUserAllHasReadSeqs(ownerUserID string, conversationIDs ...string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ for _, conversationID := range conversationIDs {
+ cache.AddKeys(c.getConversationHasReadSeqKey(ownerUserID, conversationID))
+ }
+
+ return cache
+}
+
+func (c *ConversationRedisCache) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
+ return getCache(ctx, c.rcClient, c.getConversationNotReceiveMessageUserIDsKey(conversationID), c.expireTime, func(ctx context.Context) ([]string, error) {
+ return c.conversationDB.GetConversationNotReceiveMessageUserIDs(ctx, conversationID)
+ })
+}
+
+func (c *ConversationRedisCache) DelConversationNotReceiveMessageUserIDs(conversationIDs ...string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ for _, conversationID := range conversationIDs {
+ cache.AddKeys(c.getConversationNotReceiveMessageUserIDsKey(conversationID))
+ }
+ return cache
+}
+
+func (c *ConversationRedisCache) DelConversationNotNotifyMessageUserIDs(userIDs ...string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ for _, userID := range userIDs {
+ cache.AddKeys(c.getNotNotifyConversationIDsKey(userID))
+ }
+ return cache
+}
+
+func (c *ConversationRedisCache) DelUserPinnedConversations(userIDs ...string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ for _, userID := range userIDs {
+ cache.AddKeys(c.getPinnedConversationIDsKey(userID))
+ }
+ return cache
+}
+
+func (c *ConversationRedisCache) DelConversationVersionUserIDs(userIDs ...string) cache.ConversationCache {
+ cache := c.CloneConversationCache()
+ for _, userID := range userIDs {
+ cache.AddKeys(c.getConversationUserMaxVersionKey(userID))
+ }
+ return cache
+}
+
+func (c *ConversationRedisCache) FindMaxConversationUserVersion(ctx context.Context, userID string) (*model.VersionLog, error) {
+ return getCache(ctx, c.rcClient, c.getConversationUserMaxVersionKey(userID), c.expireTime, func(ctx context.Context) (*model.VersionLog, error) {
+ return c.conversationDB.FindConversationUserVersion(ctx, userID, 0, 0)
+ })
+}
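`GetUserConversationIDsHash` derives a compact fingerprint of a user's conversation list: sort the IDs, join with `";"`, MD5 the result, and interpret the first 8 hex digits as a uint64. A standalone sketch of that computation, assuming `datautil.Sort(ids, true)` sorts ascending and `encrypt.Md5` returns the lowercase hex digest:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// conversationIDsHash reproduces the hash: sort the IDs, join them with ";",
// MD5 the joined string, and parse the first 8 hex digits as a uint64.
func conversationIDsHash(ids []string) uint64 {
	sorted := append([]string(nil), ids...) // copy, so the caller's slice order is untouched
	sort.Strings(sorted)
	sum := md5.Sum([]byte(strings.Join(sorted, ";")))
	h := hex.EncodeToString(sum[:])[:8]
	n, _ := strconv.ParseUint(h, 16, 64) // always valid: h is 8 hex digits
	return n
}

func main() {
	a := conversationIDsHash([]string{"si_1_2", "sg_100"})
	b := conversationIDsHash([]string{"sg_100", "si_1_2"}) // order-independent
	fmt.Println(a == b)
}
```

Because the IDs are sorted first, clients and server compute the same hash regardless of the order the IDs were collected in.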
diff --git a/pkg/common/storage/cache/redis/friend.go b/pkg/common/storage/cache/redis/friend.go
new file mode 100644
index 0000000..05e794b
--- /dev/null
+++ b/pkg/common/storage/cache/redis/friend.go
@@ -0,0 +1,167 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+	friendExpireTime = 12 * time.Hour
+)
+
+// FriendCacheRedis is an implementation of the FriendCache interface using Redis.
+type FriendCacheRedis struct {
+ cache.BatchDeleter
+ friendDB database.Friend
+ expireTime time.Duration
+ rcClient *rocksCacheClient
+ syncCount int
+}
+
+// NewFriendCacheRedis creates a new instance of FriendCacheRedis.
+func NewFriendCacheRedis(rdb redis.UniversalClient, localCache *config.LocalCache, friendDB database.Friend) cache.FriendCache {
+ rc := newRocksCacheClient(rdb)
+ return &FriendCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(localCache.Friend.Topic),
+ friendDB: friendDB,
+ expireTime: friendExpireTime,
+ rcClient: rc,
+ }
+}
+
+func (f *FriendCacheRedis) CloneFriendCache() cache.FriendCache {
+ return &FriendCacheRedis{
+ BatchDeleter: f.BatchDeleter.Clone(),
+ friendDB: f.friendDB,
+ expireTime: f.expireTime,
+ rcClient: f.rcClient,
+ }
+}
+
+// getFriendIDsKey returns the key for storing friend IDs in the cache.
+func (f *FriendCacheRedis) getFriendIDsKey(ownerUserID string) string {
+ return cachekey.GetFriendIDsKey(ownerUserID)
+}
+
+func (f *FriendCacheRedis) getFriendMaxVersionKey(ownerUserID string) string {
+ return cachekey.GetFriendMaxVersionKey(ownerUserID)
+}
+
+// getTwoWayFriendsIDsKey returns the key for storing two-way friend IDs in the cache.
+func (f *FriendCacheRedis) getTwoWayFriendsIDsKey(ownerUserID string) string {
+ return cachekey.GetTwoWayFriendsIDsKey(ownerUserID)
+}
+
+// getFriendKey returns the key for storing friend info in the cache.
+func (f *FriendCacheRedis) getFriendKey(ownerUserID, friendUserID string) string {
+ return cachekey.GetFriendKey(ownerUserID, friendUserID)
+}
+
+// GetFriendIDs retrieves friend IDs from the cache or the database if not found.
+func (f *FriendCacheRedis) GetFriendIDs(ctx context.Context, ownerUserID string) (friendIDs []string, err error) {
+ return getCache(ctx, f.rcClient, f.getFriendIDsKey(ownerUserID), f.expireTime, func(ctx context.Context) ([]string, error) {
+ return f.friendDB.FindFriendUserIDs(ctx, ownerUserID)
+ })
+}
+
+// DelFriendIDs deletes friend IDs from the cache.
+func (f *FriendCacheRedis) DelFriendIDs(ownerUserIDs ...string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+ keys := make([]string, 0, len(ownerUserIDs))
+ for _, userID := range ownerUserIDs {
+ keys = append(keys, f.getFriendIDsKey(userID))
+ }
+ newFriendCache.AddKeys(keys...)
+
+ return newFriendCache
+}
+
+// GetTwoWayFriendIDs retrieves two-way friend IDs from the cache.
+func (f *FriendCacheRedis) GetTwoWayFriendIDs(ctx context.Context, ownerUserID string) (twoWayFriendIDs []string, err error) {
+ friendIDs, err := f.GetFriendIDs(ctx, ownerUserID)
+ if err != nil {
+ return nil, err
+ }
+ for _, friendID := range friendIDs {
+ friendFriendID, err := f.GetFriendIDs(ctx, friendID)
+ if err != nil {
+ return nil, err
+ }
+ if datautil.Contain(ownerUserID, friendFriendID...) {
+			twoWayFriendIDs = append(twoWayFriendIDs, friendID)
+ }
+ }
+
+ return twoWayFriendIDs, nil
+}
+
+// DelTwoWayFriendIDs deletes two-way friend IDs from the cache.
+func (f *FriendCacheRedis) DelTwoWayFriendIDs(ctx context.Context, ownerUserID string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+ newFriendCache.AddKeys(f.getTwoWayFriendsIDsKey(ownerUserID))
+
+ return newFriendCache
+}
+
+// GetFriend retrieves friend info from the cache or the database if not found.
+func (f *FriendCacheRedis) GetFriend(ctx context.Context, ownerUserID, friendUserID string) (friend *model.Friend, err error) {
+ return getCache(ctx, f.rcClient, f.getFriendKey(ownerUserID,
+ friendUserID), f.expireTime, func(ctx context.Context) (*model.Friend, error) {
+ return f.friendDB.Take(ctx, ownerUserID, friendUserID)
+ })
+}
+
+// DelFriend deletes friend info from the cache.
+func (f *FriendCacheRedis) DelFriend(ownerUserID, friendUserID string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+ newFriendCache.AddKeys(f.getFriendKey(ownerUserID, friendUserID))
+
+ return newFriendCache
+}
+
+// DelFriends deletes multiple friend infos from the cache.
+func (f *FriendCacheRedis) DelFriends(ownerUserID string, friendUserIDs []string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+
+ for _, friendUserID := range friendUserIDs {
+ key := f.getFriendKey(ownerUserID, friendUserID)
+		newFriendCache.AddKeys(key)
+ }
+
+ return newFriendCache
+}
+
+func (f *FriendCacheRedis) DelOwner(friendUserID string, ownerUserIDs []string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+
+ for _, ownerUserID := range ownerUserIDs {
+ key := f.getFriendKey(ownerUserID, friendUserID)
+		newFriendCache.AddKeys(key)
+ }
+
+ return newFriendCache
+}
+
+func (f *FriendCacheRedis) DelMaxFriendVersion(ownerUserIDs ...string) cache.FriendCache {
+ newFriendCache := f.CloneFriendCache()
+ for _, ownerUserID := range ownerUserIDs {
+ key := f.getFriendMaxVersionKey(ownerUserID)
+		newFriendCache.AddKeys(key)
+ }
+
+ return newFriendCache
+}
+
+func (f *FriendCacheRedis) FindMaxFriendVersion(ctx context.Context, ownerUserID string) (*model.VersionLog, error) {
+ return getCache(ctx, f.rcClient, f.getFriendMaxVersionKey(ownerUserID), f.expireTime, func(ctx context.Context) (*model.VersionLog, error) {
+ return f.friendDB.FindIncrVersion(ctx, ownerUserID, 0, 0)
+ })
+}
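`GetTwoWayFriendIDs` keeps a friend only when the friendship is reciprocated: for each friend of the owner, it fetches that friend's own friend list and checks that the owner appears in it. The mutual check reduces to this in-memory sketch (the map stands in for the two cache lookups):

```go
package main

import "fmt"

// mutualFriends returns the subset of owner's friends that also list the
// owner as a friend, i.e. two-way friendships.
func mutualFriends(owner string, friendLists map[string][]string) []string {
	var out []string
	for _, friendID := range friendLists[owner] {
		for _, back := range friendLists[friendID] {
			if back == owner {
				out = append(out, friendID) // friendship is reciprocated
				break
			}
		}
	}
	return out
}

func main() {
	lists := map[string][]string{
		"alice": {"bob", "carol"},
		"bob":   {"alice"},
		"carol": {"dave"},
	}
	fmt.Println(mutualFriends("alice", lists)) // bob reciprocates, carol does not
}
```

Note the real method issues one `GetFriendIDs` call per friend, so for large friend lists each call may fault through to the database independently.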
diff --git a/pkg/common/storage/cache/redis/group.go b/pkg/common/storage/cache/redis/group.go
new file mode 100644
index 0000000..5282161
--- /dev/null
+++ b/pkg/common/storage/cache/redis/group.go
@@ -0,0 +1,385 @@
+package redis
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/common"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+	groupExpireTime = 12 * time.Hour
+)
+
+type GroupCacheRedis struct {
+ cache.BatchDeleter
+ groupDB database.Group
+ groupMemberDB database.GroupMember
+ groupRequestDB database.GroupRequest
+ expireTime time.Duration
+ rcClient *rocksCacheClient
+ groupHash cache.GroupHash
+}
+
+func NewGroupCacheRedis(rdb redis.UniversalClient, localCache *config.LocalCache, groupDB database.Group, groupMemberDB database.GroupMember, groupRequestDB database.GroupRequest, hashCode cache.GroupHash) cache.GroupCache {
+ rc := newRocksCacheClient(rdb)
+ return &GroupCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(localCache.Group.Topic),
+ rcClient: rc,
+ expireTime: groupExpireTime,
+ groupDB: groupDB,
+ groupMemberDB: groupMemberDB,
+ groupRequestDB: groupRequestDB,
+ groupHash: hashCode,
+ }
+}
+
+func (g *GroupCacheRedis) CloneGroupCache() cache.GroupCache {
+ return &GroupCacheRedis{
+ BatchDeleter: g.BatchDeleter.Clone(),
+ rcClient: g.rcClient,
+ expireTime: g.expireTime,
+ groupDB: g.groupDB,
+ groupMemberDB: g.groupMemberDB,
+ groupRequestDB: g.groupRequestDB,
+		groupHash:      g.groupHash,
+ }
+}
+
+func (g *GroupCacheRedis) getGroupInfoKey(groupID string) string {
+ return cachekey.GetGroupInfoKey(groupID)
+}
+
+func (g *GroupCacheRedis) getJoinedGroupsKey(userID string) string {
+ return cachekey.GetJoinedGroupsKey(userID)
+}
+
+func (g *GroupCacheRedis) getGroupMembersHashKey(groupID string) string {
+ return cachekey.GetGroupMembersHashKey(groupID)
+}
+
+func (g *GroupCacheRedis) getGroupMemberIDsKey(groupID string) string {
+ return cachekey.GetGroupMemberIDsKey(groupID)
+}
+
+func (g *GroupCacheRedis) getGroupMemberInfoKey(groupID, userID string) string {
+ return cachekey.GetGroupMemberInfoKey(groupID, userID)
+}
+
+func (g *GroupCacheRedis) getGroupMemberNumKey(groupID string) string {
+ return cachekey.GetGroupMemberNumKey(groupID)
+}
+
+func (g *GroupCacheRedis) getGroupRoleLevelMemberIDsKey(groupID string, roleLevel int32) string {
+ return cachekey.GetGroupRoleLevelMemberIDsKey(groupID, roleLevel)
+}
+
+func (g *GroupCacheRedis) getGroupMemberMaxVersionKey(groupID string) string {
+ return cachekey.GetGroupMemberMaxVersionKey(groupID)
+}
+
+func (g *GroupCacheRedis) getJoinGroupMaxVersionKey(userID string) string {
+ return cachekey.GetJoinGroupMaxVersionKey(userID)
+}
+
+func (g *GroupCacheRedis) getGroupID(group *model.Group) string {
+ return group.GroupID
+}
+
+func (g *GroupCacheRedis) GetGroupsInfo(ctx context.Context, groupIDs []string) (groups []*model.Group, err error) {
+ return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs, g.getGroupInfoKey, g.getGroupID, g.groupDB.Find)
+}
+
+func (g *GroupCacheRedis) GetGroupInfo(ctx context.Context, groupID string) (group *model.Group, err error) {
+ return getCache(ctx, g.rcClient, g.getGroupInfoKey(groupID), g.expireTime, func(ctx context.Context) (*model.Group, error) {
+ return g.groupDB.Take(ctx, groupID)
+ })
+}
+
+func (g *GroupCacheRedis) DelGroupsInfo(groupIDs ...string) cache.GroupCache {
+ newGroupCache := g.CloneGroupCache()
+ keys := make([]string, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ keys = append(keys, g.getGroupInfoKey(groupID))
+ }
+ newGroupCache.AddKeys(keys...)
+
+ return newGroupCache
+}
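The Del* methods above never delete directly: they clone the cache, accumulate keys on the clone, and return it, so the caller decides when the batched deletion actually executes. A stdlib-only sketch of that builder pattern (`batchDeleter`, `delGroupsInfo`, and the key format are illustrative stand-ins, not the real cache interfaces):

```go
package main

import "fmt"

// batchDeleter is a toy stand-in for cache.BatchDeleter: it accumulates
// cache keys that a later ExecDel pass would remove in one batch.
type batchDeleter struct{ keys []string }

// Clone returns an independent copy so accumulated keys never leak
// back into the shared base deleter.
func (d *batchDeleter) Clone() *batchDeleter {
	return &batchDeleter{keys: append([]string(nil), d.keys...)}
}

func (d *batchDeleter) AddKeys(keys ...string) { d.keys = append(d.keys, keys...) }

func getGroupInfoKey(groupID string) string { return "GROUP_INFO:" + groupID }

// delGroupsInfo mirrors DelGroupsInfo above: clone, add the keys to
// invalidate, return the clone with the receiver untouched.
func delGroupsInfo(d *batchDeleter, groupIDs ...string) *batchDeleter {
	nd := d.Clone()
	for _, groupID := range groupIDs {
		nd.AddKeys(getGroupInfoKey(groupID))
	}
	return nd
}

func main() {
	base := &batchDeleter{}
	nd := delGroupsInfo(base, "g1", "g2")
	fmt.Println(nd.keys)        // [GROUP_INFO:g1 GROUP_INFO:g2]
	fmt.Println(len(base.keys)) // 0
}
```

The design choice is that invalidations compose: several Del* calls can chain on the same clone and be flushed once.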
+
+func (g *GroupCacheRedis) DelGroupsOwner(groupIDs ...string) cache.GroupCache {
+ newGroupCache := g.CloneGroupCache()
+ keys := make([]string, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ keys = append(keys, g.getGroupRoleLevelMemberIDsKey(groupID, constant.GroupOwner))
+ }
+ newGroupCache.AddKeys(keys...)
+
+ return newGroupCache
+}
+
+func (g *GroupCacheRedis) DelGroupRoleLevel(groupID string, roleLevels []int32) cache.GroupCache {
+ newGroupCache := g.CloneGroupCache()
+ keys := make([]string, 0, len(roleLevels))
+ for _, roleLevel := range roleLevels {
+ keys = append(keys, g.getGroupRoleLevelMemberIDsKey(groupID, roleLevel))
+ }
+ newGroupCache.AddKeys(keys...)
+ return newGroupCache
+}
+
+func (g *GroupCacheRedis) DelGroupAllRoleLevel(groupID string) cache.GroupCache {
+ return g.DelGroupRoleLevel(groupID, []int32{constant.GroupOwner, constant.GroupAdmin, constant.GroupOrdinaryUsers})
+}
+
+func (g *GroupCacheRedis) GetGroupMembersHash(ctx context.Context, groupID string) (hashCode uint64, err error) {
+ if g.groupHash == nil {
+ return 0, errs.ErrInternalServer.WrapMsg("group hash is nil")
+ }
+ return getCache(ctx, g.rcClient, g.getGroupMembersHashKey(groupID), g.expireTime, func(ctx context.Context) (uint64, error) {
+ return g.groupHash.GetGroupHash(ctx, groupID)
+ })
+}
+
+func (g *GroupCacheRedis) GetGroupMemberHashMap(ctx context.Context, groupIDs []string) (map[string]*common.GroupSimpleUserID, error) {
+ if g.groupHash == nil {
+ return nil, errs.ErrInternalServer.WrapMsg("group hash is nil")
+ }
+ res := make(map[string]*common.GroupSimpleUserID)
+ for _, groupID := range groupIDs {
+ hash, err := g.GetGroupMembersHash(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ log.ZDebug(ctx, "GetGroupMemberHashMap", "groupID", groupID, "hash", hash)
+ num, err := g.GetGroupMemberNum(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ res[groupID] = &common.GroupSimpleUserID{Hash: hash, MemberNum: uint32(num)}
+ }
+
+ return res, nil
+}
+
+func (g *GroupCacheRedis) DelGroupMembersHash(groupID string) cache.GroupCache {
+ cache := g.CloneGroupCache()
+ cache.AddKeys(g.getGroupMembersHashKey(groupID))
+
+ return cache
+}
+
+func (g *GroupCacheRedis) GetGroupMemberIDs(ctx context.Context, groupID string) (groupMemberIDs []string, err error) {
+ return getCache(ctx, g.rcClient, g.getGroupMemberIDsKey(groupID), g.expireTime, func(ctx context.Context) ([]string, error) {
+ return g.groupMemberDB.FindMemberUserID(ctx, groupID)
+ })
+}
+
+func (g *GroupCacheRedis) DelGroupMemberIDs(groupID string) cache.GroupCache {
+ cache := g.CloneGroupCache()
+ cache.AddKeys(g.getGroupMemberIDsKey(groupID))
+
+ return cache
+}
+
+func (g *GroupCacheRedis) findUserJoinedGroupID(ctx context.Context, userID string) ([]string, error) {
+ groupIDs, err := g.groupMemberDB.FindUserJoinedGroupID(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ return g.groupDB.FindJoinSortGroupID(ctx, groupIDs)
+}
+
+func (g *GroupCacheRedis) GetJoinedGroupIDs(ctx context.Context, userID string) (joinedGroupIDs []string, err error) {
+ return getCache(ctx, g.rcClient, g.getJoinedGroupsKey(userID), g.expireTime, func(ctx context.Context) ([]string, error) {
+ return g.findUserJoinedGroupID(ctx, userID)
+ })
+}
+
+func (g *GroupCacheRedis) DelJoinedGroupID(userIDs ...string) cache.GroupCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, g.getJoinedGroupsKey(userID))
+ }
+ cache := g.CloneGroupCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (g *GroupCacheRedis) GetGroupMemberInfo(ctx context.Context, groupID, userID string) (groupMember *model.GroupMember, err error) {
+ return getCache(ctx, g.rcClient, g.getGroupMemberInfoKey(groupID, userID), g.expireTime, func(ctx context.Context) (*model.GroupMember, error) {
+ return g.groupMemberDB.Take(ctx, groupID, userID)
+ })
+}
+
+func (g *GroupCacheRedis) GetGroupMembersInfo(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupMember, error) {
+ return batchGetCache2(ctx, g.rcClient, g.expireTime, userIDs, func(userID string) string {
+ return g.getGroupMemberInfoKey(groupID, userID)
+ }, func(member *model.GroupMember) string {
+ return member.UserID
+ }, func(ctx context.Context, userIDs []string) ([]*model.GroupMember, error) {
+ return g.groupMemberDB.Find(ctx, groupID, userIDs)
+ })
+}
+
+func (g *GroupCacheRedis) GetAllGroupMembersInfo(ctx context.Context, groupID string) (groupMembers []*model.GroupMember, err error) {
+ groupMemberIDs, err := g.GetGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+
+ return g.GetGroupMembersInfo(ctx, groupID, groupMemberIDs)
+}
+
+func (g *GroupCacheRedis) DelGroupMembersInfo(groupID string, userIDs ...string) cache.GroupCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, g.getGroupMemberInfoKey(groupID, userID))
+ }
+ cache := g.CloneGroupCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (g *GroupCacheRedis) GetGroupMemberNum(ctx context.Context, groupID string) (memberNum int64, err error) {
+ return getCache(ctx, g.rcClient, g.getGroupMemberNumKey(groupID), g.expireTime, func(ctx context.Context) (int64, error) {
+ return g.groupMemberDB.TakeGroupMemberNum(ctx, groupID)
+ })
+}
+
+func (g *GroupCacheRedis) DelGroupsMemberNum(groupIDs ...string) cache.GroupCache {
+	keys := make([]string, 0, len(groupIDs))
+	for _, groupID := range groupIDs {
+		keys = append(keys, g.getGroupMemberNumKey(groupID))
+	}
+ cache := g.CloneGroupCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (g *GroupCacheRedis) GetGroupOwner(ctx context.Context, groupID string) (*model.GroupMember, error) {
+ members, err := g.GetGroupRoleLevelMemberInfo(ctx, groupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ if len(members) == 0 {
+ return nil, errs.ErrRecordNotFound.WrapMsg(fmt.Sprintf("group %s owner not found", groupID))
+ }
+ return members[0], nil
+}
+
+func (g *GroupCacheRedis) GetGroupsOwner(ctx context.Context, groupIDs []string) ([]*model.GroupMember, error) {
+ members := make([]*model.GroupMember, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ items, err := g.GetGroupRoleLevelMemberInfo(ctx, groupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ if len(items) > 0 {
+ members = append(members, items[0])
+ }
+ }
+ return members, nil
+}
+
+func (g *GroupCacheRedis) GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return getCache(ctx, g.rcClient, g.getGroupRoleLevelMemberIDsKey(groupID, roleLevel), g.expireTime, func(ctx context.Context) ([]string, error) {
+ return g.groupMemberDB.FindRoleLevelUserIDs(ctx, groupID, roleLevel)
+ })
+}
+
+func (g *GroupCacheRedis) GetGroupRoleLevelMemberInfo(ctx context.Context, groupID string, roleLevel int32) ([]*model.GroupMember, error) {
+ userIDs, err := g.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+ if err != nil {
+ return nil, err
+ }
+ return g.GetGroupMembersInfo(ctx, groupID, userIDs)
+}
+
+func (g *GroupCacheRedis) GetGroupRolesLevelMemberInfo(ctx context.Context, groupID string, roleLevels []int32) ([]*model.GroupMember, error) {
+ var userIDs []string
+ for _, roleLevel := range roleLevels {
+ ids, err := g.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+ if err != nil {
+ return nil, err
+ }
+ userIDs = append(userIDs, ids...)
+ }
+ return g.GetGroupMembersInfo(ctx, groupID, userIDs)
+}
+
+func (g *GroupCacheRedis) FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) ([]*model.GroupMember, error) {
+ if len(groupIDs) == 0 {
+ var err error
+ groupIDs, err = g.GetJoinedGroupIDs(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs, func(groupID string) string {
+ return g.getGroupMemberInfoKey(groupID, userID)
+ }, func(member *model.GroupMember) string {
+ return member.GroupID
+ }, func(ctx context.Context, groupIDs []string) ([]*model.GroupMember, error) {
+ return g.groupMemberDB.FindInGroup(ctx, userID, groupIDs)
+ })
+}
+
+func (g *GroupCacheRedis) DelMaxGroupMemberVersion(groupIDs ...string) cache.GroupCache {
+ keys := make([]string, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ keys = append(keys, g.getGroupMemberMaxVersionKey(groupID))
+ }
+ cache := g.CloneGroupCache()
+ cache.AddKeys(keys...)
+ return cache
+}
+
+func (g *GroupCacheRedis) DelMaxJoinGroupVersion(userIDs ...string) cache.GroupCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, g.getJoinGroupMaxVersionKey(userID))
+ }
+ cache := g.CloneGroupCache()
+ cache.AddKeys(keys...)
+ return cache
+}
+
+func (g *GroupCacheRedis) FindMaxGroupMemberVersion(ctx context.Context, groupID string) (*model.VersionLog, error) {
+ return getCache(ctx, g.rcClient, g.getGroupMemberMaxVersionKey(groupID), g.expireTime, func(ctx context.Context) (*model.VersionLog, error) {
+ return g.groupMemberDB.FindMemberIncrVersion(ctx, groupID, 0, 0)
+ })
+}
+
+func (g *GroupCacheRedis) BatchFindMaxGroupMemberVersion(ctx context.Context, groupIDs []string) ([]*model.VersionLog, error) {
+ return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs,
+ func(groupID string) string {
+ return g.getGroupMemberMaxVersionKey(groupID)
+ }, func(versionLog *model.VersionLog) string {
+ return versionLog.DID
+ }, func(ctx context.Context, groupIDs []string) ([]*model.VersionLog, error) {
+			// zero-filled version and limit slices the same length as groupIDs: version 0 with limit 0 requests the full version log
+ versions := make([]uint, len(groupIDs))
+ limits := make([]int, len(groupIDs))
+
+ return g.groupMemberDB.BatchFindMemberIncrVersion(ctx, groupIDs, versions, limits)
+ })
+}
+
+func (g *GroupCacheRedis) FindMaxJoinGroupVersion(ctx context.Context, userID string) (*model.VersionLog, error) {
+ return getCache(ctx, g.rcClient, g.getJoinGroupMaxVersionKey(userID), g.expireTime, func(ctx context.Context) (*model.VersionLog, error) {
+ return g.groupMemberDB.FindJoinIncrVersion(ctx, userID, 0, 0)
+ })
+}
diff --git a/pkg/common/storage/cache/redis/lua_script.go b/pkg/common/storage/cache/redis/lua_script.go
new file mode 100644
index 0000000..0ef78f8
--- /dev/null
+++ b/pkg/common/storage/cache/redis/lua_script.go
@@ -0,0 +1,127 @@
+package redis
+
+import (
+ "context"
+ "errors"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+var (
+ setBatchWithCommonExpireScript = redis.NewScript(`
+local expire = tonumber(ARGV[1])
+for i, key in ipairs(KEYS) do
+ redis.call('SET', key, ARGV[i + 1])
+ redis.call('EXPIRE', key, expire)
+end
+return #KEYS
+`)
+
+ setBatchWithIndividualExpireScript = redis.NewScript(`
+local n = #KEYS
+for i = 1, n do
+ redis.call('SET', KEYS[i], ARGV[i])
+ redis.call('EXPIRE', KEYS[i], ARGV[i + n])
+end
+return n
+`)
+
+ deleteBatchScript = redis.NewScript(`
+for i, key in ipairs(KEYS) do
+ redis.call('DEL', key)
+end
+return #KEYS
+`)
+
+ getBatchScript = redis.NewScript(`
+local values = {}
+for i, key in ipairs(KEYS) do
+ local value = redis.call('GET', key)
+ table.insert(values, value)
+end
+return values
+`)
+)
+
+func callLua(ctx context.Context, rdb redis.Scripter, script *redis.Script, keys []string, args []any) (any, error) {
+ log.ZDebug(ctx, "callLua args", "scriptHash", script.Hash(), "keys", keys, "args", args)
+ r := script.EvalSha(ctx, rdb, keys, args)
+	if redis.HasErrorPrefix(r.Err(), "NOSCRIPT") {
+		// the script is not cached on the server: load it and retry EVALSHA,
+		// falling back to plain EVAL if the load itself fails
+ if err := script.Load(ctx, rdb).Err(); err != nil {
+ r = script.Eval(ctx, rdb, keys, args)
+ } else {
+ r = script.EvalSha(ctx, rdb, keys, args)
+ }
+ }
+ v, err := r.Result()
+ if errors.Is(err, redis.Nil) {
+ err = nil
+ }
+ return v, errs.WrapMsg(err, "call lua err", "scriptHash", script.Hash(), "keys", keys, "args", args)
+}
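callLua first tries EVALSHA and, on a NOSCRIPT error, loads the script and retries, dropping to plain EVAL only when the load fails. A self-contained sketch of that fallback flow against a fake script cache (`fakeServer` and the error text are illustrative, not real redis):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// fakeServer simulates Redis's server-side script cache: EVALSHA only
// succeeds for hashes that have been loaded.
type fakeServer struct{ loaded map[string]bool }

func (s *fakeServer) evalSha(hash string) (string, error) {
	if !s.loaded[hash] {
		return "", errors.New("NOSCRIPT No matching script")
	}
	return "ok", nil
}

func (s *fakeServer) load(hash string) { s.loaded[hash] = true }

// callScript mirrors callLua's flow: optimistic EVALSHA, then on
// NOSCRIPT load the script and retry.
func callScript(s *fakeServer, hash string) (string, error) {
	v, err := s.evalSha(hash)
	if err != nil && strings.HasPrefix(err.Error(), "NOSCRIPT") {
		s.load(hash)
		return s.evalSha(hash)
	}
	return v, err
}

func main() {
	srv := &fakeServer{loaded: map[string]bool{}}
	v, err := callScript(srv, "abc123")
	fmt.Println(v, err) // ok <nil>
}
```

The EVALSHA-first pattern saves bandwidth on every call after the first, since only the 40-byte hash is sent instead of the full script body.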
+
+func LuaSetBatchWithCommonExpire(ctx context.Context, rdb redis.Scripter, keys []string, values []string, expire int) error {
+ // Check if the lengths of keys and values match
+ if len(keys) != len(values) {
+ return errs.New("keys and values length mismatch").Wrap()
+ }
+
+	// Ensure the allocation size cannot overflow a 32-bit signed int
+	maxAllowedLen := (1 << 31) - 1
+
+	if len(values) > maxAllowedLen-1 {
+		return errs.New(fmt.Sprintf("values length %d exceeds the maximum allowed length %d", len(values), maxAllowedLen-1)).Wrap()
+	}
+ var vals = make([]any, 0, 1+len(values))
+ vals = append(vals, expire)
+ for _, v := range values {
+ vals = append(vals, v)
+ }
+ _, err := callLua(ctx, rdb, setBatchWithCommonExpireScript, keys, vals)
+ return err
+}
+
+func LuaSetBatchWithIndividualExpire(ctx context.Context, rdb redis.Scripter, keys []string, values []string, expires []int) error {
+ // Check if the lengths of keys, values, and expires match
+ if len(keys) != len(values) || len(keys) != len(expires) {
+		return errs.New("keys, values and expires length mismatch").Wrap()
+ }
+
+	// Ensure the allocation size cannot overflow a 32-bit signed int
+	maxAllowedLen := (1 << 31) - 1
+
+ if len(values) > maxAllowedLen-1 {
+ return errs.New(fmt.Sprintf("values length %d exceeds the maximum allowed length %d", len(values), maxAllowedLen-1)).Wrap()
+ }
+ var vals = make([]any, 0, len(values)+len(expires))
+ for _, v := range values {
+ vals = append(vals, v)
+ }
+ for _, ex := range expires {
+ vals = append(vals, ex)
+ }
+ _, err := callLua(ctx, rdb, setBatchWithIndividualExpireScript, keys, vals)
+ return err
+}
+
+func LuaDeleteBatch(ctx context.Context, rdb redis.Scripter, keys []string) error {
+ _, err := callLua(ctx, rdb, deleteBatchScript, keys, nil)
+ return err
+}
+
+func LuaGetBatch(ctx context.Context, rdb redis.Scripter, keys []string) ([]any, error) {
+ v, err := callLua(ctx, rdb, getBatchScript, keys, nil)
+ if err != nil {
+ return nil, err
+ }
+ values, ok := v.([]any)
+ if !ok {
+ return nil, servererrs.ErrArgs.WrapMsg("invalid lua get batch result")
+ }
+	return values, nil
+}
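LuaGetBatch returns a raw `[]any`: Lua's GET yields false for missing keys, which reaches Go as nil entries, so callers must type-check each element before use. A hypothetical consumer might look like this (`decodeBatch` is illustrative, not part of the package above):

```go
package main

import "fmt"

// decodeBatch splits a LuaGetBatch-style result into the string values
// that were present and a count of missing (nil) entries.
func decodeBatch(values []any) (found []string, missing int) {
	for _, v := range values {
		s, ok := v.(string)
		if !ok { // nil entry: the key did not exist
			missing++
			continue
		}
		found = append(found, s)
	}
	return found, missing
}

func main() {
	found, missing := decodeBatch([]any{"v1", nil, "v3"})
	fmt.Println(found, missing) // [v1 v3] 1
}
```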
diff --git a/pkg/common/storage/cache/redis/lua_script_test.go b/pkg/common/storage/cache/redis/lua_script_test.go
new file mode 100644
index 0000000..1566b59
--- /dev/null
+++ b/pkg/common/storage/cache/redis/lua_script_test.go
@@ -0,0 +1,75 @@
+package redis
+
+import (
+	"context"
+	"testing"
+
+	"github.com/go-redis/redismock/v9"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func TestLuaSetBatchWithCommonExpire(t *testing.T) {
+ rdb, mock := redismock.NewClientMock()
+ ctx := context.Background()
+
+ keys := []string{"key1", "key2"}
+ values := []string{"value1", "value2"}
+ expire := 10
+
+ mock.ExpectEvalSha(setBatchWithCommonExpireScript.Hash(), keys, []any{expire, "value1", "value2"}).SetVal(int64(len(keys)))
+
+ err := LuaSetBatchWithCommonExpire(ctx, rdb, keys, values, expire)
+ require.NoError(t, err)
+ assert.NoError(t, mock.ExpectationsWereMet())
+}
+
+func TestLuaSetBatchWithIndividualExpire(t *testing.T) {
+ rdb, mock := redismock.NewClientMock()
+ ctx := context.Background()
+
+ keys := []string{"key1", "key2"}
+ values := []string{"value1", "value2"}
+ expires := []int{10, 20}
+
+ args := make([]any, 0, len(values)+len(expires))
+ for _, v := range values {
+ args = append(args, v)
+ }
+ for _, ex := range expires {
+ args = append(args, ex)
+ }
+
+ mock.ExpectEvalSha(setBatchWithIndividualExpireScript.Hash(), keys, args).SetVal(int64(len(keys)))
+
+ err := LuaSetBatchWithIndividualExpire(ctx, rdb, keys, values, expires)
+ require.NoError(t, err)
+ assert.NoError(t, mock.ExpectationsWereMet())
+}
+
+func TestLuaDeleteBatch(t *testing.T) {
+ rdb, mock := redismock.NewClientMock()
+ ctx := context.Background()
+
+ keys := []string{"key1", "key2"}
+
+ mock.ExpectEvalSha(deleteBatchScript.Hash(), keys, []any{}).SetVal(int64(len(keys)))
+
+ err := LuaDeleteBatch(ctx, rdb, keys)
+ require.NoError(t, err)
+ assert.NoError(t, mock.ExpectationsWereMet())
+}
+
+func TestLuaGetBatch(t *testing.T) {
+ rdb, mock := redismock.NewClientMock()
+ ctx := context.Background()
+
+ keys := []string{"key1", "key2"}
+ expectedValues := []any{"value1", "value2"}
+
+ mock.ExpectEvalSha(getBatchScript.Hash(), keys, []any{}).SetVal(expectedValues)
+
+ values, err := LuaGetBatch(ctx, rdb, keys)
+ require.NoError(t, err)
+ assert.NoError(t, mock.ExpectationsWereMet())
+ assert.Equal(t, expectedValues, values)
+}
diff --git a/pkg/common/storage/cache/redis/minio.go b/pkg/common/storage/cache/redis/minio.go
new file mode 100644
index 0000000..52427ae
--- /dev/null
+++ b/pkg/common/storage/cache/redis/minio.go
@@ -0,0 +1,59 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewMinioCache(rdb redis.UniversalClient) minio.Cache {
+ rc := newRocksCacheClient(rdb)
+ return &minioCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(),
+ rcClient: rc,
+ expireTime: time.Hour * 24 * 7,
+ }
+}
+
+type minioCacheRedis struct {
+ cache.BatchDeleter
+ rcClient *rocksCacheClient
+ expireTime time.Duration
+}
+
+func (g *minioCacheRedis) getObjectImageInfoKey(key string) string {
+ return cachekey.GetObjectImageInfoKey(key)
+}
+
+func (g *minioCacheRedis) getMinioImageThumbnailKey(key string, format string, width int, height int) string {
+ return cachekey.GetMinioImageThumbnailKey(key, format, width, height)
+}
+
+func (g *minioCacheRedis) DelObjectImageInfoKey(ctx context.Context, keys ...string) error {
+ ks := make([]string, 0, len(keys))
+ for _, key := range keys {
+ ks = append(ks, g.getObjectImageInfoKey(key))
+ }
+ return g.BatchDeleter.ExecDelWithKeys(ctx, ks)
+}
+
+func (g *minioCacheRedis) DelImageThumbnailKey(ctx context.Context, key string, format string, width int, height int) error {
+	return g.BatchDeleter.ExecDelWithKeys(ctx, []string{g.getMinioImageThumbnailKey(key, format, width, height)})
+}
+
+func (g *minioCacheRedis) GetImageObjectKeyInfo(ctx context.Context, key string, fn func(ctx context.Context) (*minio.ImageInfo, error)) (*minio.ImageInfo, error) {
+	return getCache(ctx, g.rcClient, g.getObjectImageInfoKey(key), g.expireTime, fn)
+}
+
+func (g *minioCacheRedis) GetThumbnailKey(ctx context.Context, key string, format string, width int, height int, minioCache func(ctx context.Context) (string, error)) (string, error) {
+ return getCache(ctx, g.rcClient, g.getMinioImageThumbnailKey(key, format, width, height), g.expireTime, minioCache)
+}
diff --git a/pkg/common/storage/cache/redis/msg.go b/pkg/common/storage/cache/redis/msg.go
new file mode 100644
index 0000000..b4ec610
--- /dev/null
+++ b/pkg/common/storage/cache/redis/msg.go
@@ -0,0 +1,94 @@
+package redis
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+// msgCacheTimeout is the expiration time of the message cache (24 hours).
+const msgCacheTimeout = time.Hour * 24
+
+func NewMsgCache(client redis.UniversalClient, db database.Msg) cache.MsgCache {
+ return &msgCache{
+ rcClient: newRocksCacheClient(client),
+ msgDocDatabase: db,
+ }
+}
+
+type msgCache struct {
+ rcClient *rocksCacheClient
+ msgDocDatabase database.Msg
+}
+
+func (c *msgCache) getSendMsgKey(id string) string {
+ return cachekey.GetSendMsgKey(id)
+}
+
+func (c *msgCache) SetSendMsgStatus(ctx context.Context, id string, status int32) error {
+	return errs.Wrap(c.rcClient.GetRedis().Set(ctx, c.getSendMsgKey(id), status, msgCacheTimeout).Err())
+}
+
+func (c *msgCache) GetSendMsgStatus(ctx context.Context, id string) (int32, error) {
+ result, err := c.rcClient.GetRedis().Get(ctx, c.getSendMsgKey(id)).Int()
+ return int32(result), errs.Wrap(err)
+}
+
+func (c *msgCache) GetMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) ([]*model.MsgInfoModel, error) {
+ if len(seqs) == 0 {
+ return nil, nil
+ }
+ getKey := func(seq int64) string {
+ return cachekey.GetMsgCacheKey(conversationID, seq)
+ }
+ getMsgID := func(msg *model.MsgInfoModel) int64 {
+ return msg.Msg.Seq
+ }
+ find := func(ctx context.Context, seqs []int64) ([]*model.MsgInfoModel, error) {
+ return c.msgDocDatabase.FindSeqs(ctx, conversationID, seqs)
+ }
+ return batchGetCache2(ctx, c.rcClient, msgCacheTimeout, seqs, getKey, getMsgID, find)
+}
+
+func (c *msgCache) DelMessageBySeqs(ctx context.Context, conversationID string, seqs []int64) error {
+ if len(seqs) == 0 {
+ return nil
+ }
+ keys := datautil.Slice(seqs, func(seq int64) string {
+ return cachekey.GetMsgCacheKey(conversationID, seq)
+ })
+ slotKeys, err := groupKeysBySlot(ctx, c.rcClient.GetRedis(), keys)
+ if err != nil {
+ return err
+ }
+ for _, keys := range slotKeys {
+ if err := c.rcClient.GetClient().TagAsDeletedBatch2(ctx, keys); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (c *msgCache) SetMessageBySeqs(ctx context.Context, conversationID string, msgs []*model.MsgInfoModel) error {
+ for _, msg := range msgs {
+ if msg == nil || msg.Msg == nil || msg.Msg.Seq <= 0 {
+ continue
+ }
+ data, err := json.Marshal(msg)
+ if err != nil {
+ return err
+ }
+ if err := c.rcClient.GetClient().RawSet(ctx, cachekey.GetMsgCacheKey(conversationID, msg.Msg.Seq), string(data), msgCacheTimeout); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/common/storage/cache/redis/online.go b/pkg/common/storage/cache/redis/online.go
new file mode 100644
index 0000000..7496edd
--- /dev/null
+++ b/pkg/common/storage/cache/redis/online.go
@@ -0,0 +1,161 @@
+package redis
+
+import (
+ "context"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewUserOnline(rdb redis.UniversalClient) cache.OnlineCache {
+ if rdb == nil || config.Standalone() {
+ return mcache.NewOnlineCache()
+ }
+ return &userOnline{
+ rdb: rdb,
+ expire: cachekey.OnlineExpire,
+ channelName: cachekey.OnlineChannel,
+ }
+}
+
+type userOnline struct {
+ rdb redis.UniversalClient
+ expire time.Duration
+ channelName string
+}
+
+func (s *userOnline) getUserOnlineKey(userID string) string {
+ return cachekey.GetOnlineKey(userID)
+}
+
+func (s *userOnline) GetOnline(ctx context.Context, userID string) ([]int32, error) {
+ members, err := s.rdb.ZRangeByScore(ctx, s.getUserOnlineKey(userID), &redis.ZRangeBy{
+ Min: strconv.FormatInt(time.Now().Unix(), 10),
+ Max: "+inf",
+ }).Result()
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ platformIDs := make([]int32, 0, len(members))
+ for _, member := range members {
+ val, err := strconv.Atoi(member)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ platformIDs = append(platformIDs, int32(val))
+ }
+ return platformIDs, nil
+}
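GetOnline treats each sorted-set member's score as its expiry time: the ZRANGEBYSCORE from now to +inf keeps only platforms whose scores have not yet lapsed, so stale entries are filtered out without an explicit cleanup. The same filter expressed over a plain map (illustrative, stdlib only):

```go
package main

import (
	"fmt"
	"sort"
)

// onlinePlatforms filters the way GetOnline's ZRANGEBYSCORE [now, +inf)
// does: each platform's score is its expiry unix time, and anything
// scored before now is treated as offline.
func onlinePlatforms(scores map[int32]int64, now int64) []int32 {
	var out []int32
	for platform, expireAt := range scores {
		if expireAt >= now {
			out = append(out, platform)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

func main() {
	scores := map[int32]int64{1: 100, 2: 50, 3: 200}
	fmt.Println(onlinePlatforms(scores, 120)) // [3]
}
```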
+
+func (s *userOnline) GetAllOnlineUsers(ctx context.Context, cursor uint64) (map[string][]int32, uint64, error) {
+ result := make(map[string][]int32)
+
+ keys, nextCursor, err := s.rdb.Scan(ctx, cursor, fmt.Sprintf("%s*", cachekey.OnlineKey), constant.ParamMaxLength).Result()
+ if err != nil {
+ return nil, 0, err
+ }
+
+ for _, key := range keys {
+ userID := cachekey.GetOnlineKeyUserID(key)
+ strValues, err := s.rdb.ZRange(ctx, key, 0, -1).Result()
+ if err != nil {
+ return nil, 0, err
+ }
+
+ values := make([]int32, 0, len(strValues))
+ for _, value := range strValues {
+ intValue, err := strconv.Atoi(value)
+ if err != nil {
+ return nil, 0, errs.Wrap(err)
+ }
+ values = append(values, int32(intValue))
+ }
+
+ result[userID] = values
+ }
+
+ return result, nextCursor, nil
+}
+
+func (s *userOnline) SetUserOnline(ctx context.Context, userID string, online, offline []int32) error {
+	// atomically update the online status and the online user count cache with a Lua script
+ script := `
+ local key = KEYS[1]
+ local countKey = KEYS[2]
+ local expire = tonumber(ARGV[1])
+ local now = ARGV[2]
+ local score = ARGV[3]
+ local offlineLen = tonumber(ARGV[4])
+ redis.call("ZREMRANGEBYSCORE", key, "-inf", now)
+ for i = 5, offlineLen+4 do
+ redis.call("ZREM", key, ARGV[i])
+ end
+ local before = redis.call("ZCARD", key)
+ for i = 5+offlineLen, #ARGV do
+ redis.call("ZADD", key, score, ARGV[i])
+ end
+ redis.call("EXPIRE", key, expire)
+ local after = redis.call("ZCARD", key)
+ local current = redis.call("GET", countKey)
+ if not current then
+ current = 0
+ else
+ current = tonumber(current)
+ end
+ if before == 0 and after > 0 then
+ redis.call("SET", countKey, current + 1)
+ elseif before > 0 and after == 0 then
+ local next = current - 1
+ if next < 0 then
+ next = 0
+ end
+ redis.call("SET", countKey, next)
+ end
+ if before ~= after then
+ local members = redis.call("ZRANGE", key, 0, -1)
+ table.insert(members, "1")
+ return members
+ else
+ return {"0"}
+ end
+`
+ now := time.Now()
+	argv := make([]any, 0, 4+len(online)+len(offline))
+ argv = append(argv, int32(s.expire/time.Second), now.Unix(), now.Add(s.expire).Unix(), int32(len(offline)))
+ for _, platformID := range offline {
+ argv = append(argv, platformID)
+ }
+ for _, platformID := range online {
+ argv = append(argv, platformID)
+ }
+ keys := []string{s.getUserOnlineKey(userID), cachekey.OnlineUserCountKey}
+ platformIDs, err := s.rdb.Eval(ctx, script, keys, argv).StringSlice()
+ if err != nil {
+ log.ZError(ctx, "redis SetUserOnline", err, "userID", userID, "online", online, "offline", offline)
+ return err
+ }
+ if len(platformIDs) == 0 {
+ return errs.ErrInternalServer.WrapMsg("SetUserOnline redis lua invalid return value")
+ }
+ if platformIDs[len(platformIDs)-1] != "0" {
+ log.ZDebug(ctx, "redis SetUserOnline push", "userID", userID, "online", online, "offline", offline, "platformIDs", platformIDs[:len(platformIDs)-1])
+ platformIDs[len(platformIDs)-1] = userID
+ msg := strings.Join(platformIDs, ":")
+ if err := s.rdb.Publish(ctx, s.channelName, msg).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+ } else {
+ log.ZDebug(ctx, "redis SetUserOnline not push", "userID", userID, "online", online, "offline", offline)
+ }
+ return nil
+}
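When membership changes, SetUserOnline publishes the online platform IDs joined by ":" with the userID substituted in as the final field. A subscriber on the online channel could decode that payload like this (`parseOnlinePayload` is a hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseOnlinePayload splits a "p1:p2:...:userID" message into its
// userID and platform IDs, matching the format SetUserOnline publishes.
func parseOnlinePayload(msg string) (string, []int32, error) {
	parts := strings.Split(msg, ":")
	userID := parts[len(parts)-1]
	platformIDs := make([]int32, 0, len(parts)-1)
	for _, p := range parts[:len(parts)-1] {
		v, err := strconv.Atoi(p)
		if err != nil {
			return "", nil, err
		}
		platformIDs = append(platformIDs, int32(v))
	}
	return userID, platformIDs, nil
}

func main() {
	uid, pids, err := parseOnlinePayload("1:2:3:a123456")
	fmt.Println(uid, pids, err) // a123456 [1 2 3] <nil>
}
```

An empty platform list (just the userID) means the user went fully offline, which is why the publisher still sends the message when `before > 0 and after == 0`.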
diff --git a/pkg/common/storage/cache/redis/online_count.go b/pkg/common/storage/cache/redis/online_count.go
new file mode 100644
index 0000000..4ce957e
--- /dev/null
+++ b/pkg/common/storage/cache/redis/online_count.go
@@ -0,0 +1,149 @@
+package redis
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+)
+
+const onlineUserCountHistorySeparator = ":"
+
+// OnlineUserCountSample is one historical sample of the online user count
+type OnlineUserCountSample struct {
+	// Timestamp is the sampling time as a millisecond Unix timestamp
+	Timestamp int64
+	// Count is the sampled online user count
+	Count int64
+}
+
+// GetOnlineUserCount reads the cached online user count
+func GetOnlineUserCount(ctx context.Context, rdb redis.UniversalClient) (int64, error) {
+ if rdb == nil {
+ return 0, errs.ErrInternalServer.WrapMsg("redis client is nil")
+ }
+ val, err := rdb.Get(ctx, cachekey.OnlineUserCountKey).Result()
+ if err != nil {
+		if errors.Is(err, redis.Nil) {
+			// cache miss: return redis.Nil unwrapped so callers can detect it
+			return 0, err
+		}
+ return 0, errs.Wrap(err)
+ }
+ count, err := strconv.ParseInt(val, 10, 64)
+ if err != nil {
+ return 0, errs.WrapMsg(err, "parse online user count failed")
+ }
+ return count, nil
+}
+
+// RefreshOnlineUserCount recounts online users by scanning the online keys and refreshes the cache
+func RefreshOnlineUserCount(ctx context.Context, rdb redis.UniversalClient) (int64, error) {
+ if rdb == nil {
+ return 0, errs.ErrInternalServer.WrapMsg("redis client is nil")
+ }
+ var (
+ cursor uint64
+ total int64
+ )
+ now := strconv.FormatInt(time.Now().Unix(), 10)
+ for {
+ keys, nextCursor, err := rdb.Scan(ctx, cursor, fmt.Sprintf("%s*", cachekey.OnlineKey), constant.ParamMaxLength).Result()
+ if err != nil {
+ return 0, errs.Wrap(err)
+ }
+ for _, key := range keys {
+ count, err := rdb.ZCount(ctx, key, now, "+inf").Result()
+ if err != nil {
+ return 0, errs.Wrap(err)
+ }
+ if count > 0 {
+ total++
+ }
+ }
+ cursor = nextCursor
+ if cursor == 0 {
+ break
+ }
+ }
+ if err := rdb.Set(ctx, cachekey.OnlineUserCountKey, total, 0).Err(); err != nil {
+ return 0, errs.Wrap(err)
+ }
+ return total, nil
+}
+
+// AppendOnlineUserCountHistory appends one online user count sample to the history sorted set
+func AppendOnlineUserCountHistory(ctx context.Context, rdb redis.UniversalClient, timestamp int64, count int64) error {
+ if rdb == nil {
+ return errs.ErrInternalServer.WrapMsg("redis client is nil")
+ }
+ if timestamp <= 0 {
+ return errs.ErrArgs.WrapMsg("invalid timestamp")
+ }
+ member := fmt.Sprintf("%d%s%d", timestamp, onlineUserCountHistorySeparator, count)
+ if err := rdb.ZAdd(ctx, cachekey.OnlineUserCountHistoryKey, redis.Z{
+ Score: float64(timestamp),
+ Member: member,
+ }).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+	// trim old samples so the history does not grow without bound
+ retentionMs := int64(cachekey.OnlineUserCountHistoryRetention / time.Millisecond)
+ cutoff := timestamp - retentionMs
+ if cutoff > 0 {
+ if err := rdb.ZRemRangeByScore(ctx, cachekey.OnlineUserCountHistoryKey, "0", strconv.FormatInt(cutoff, 10)).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+ }
+ return nil
+}
+
+// GetOnlineUserCountHistory reads the online user count samples in [startTime, endTime]
+func GetOnlineUserCountHistory(ctx context.Context, rdb redis.UniversalClient, startTime int64, endTime int64) ([]OnlineUserCountSample, error) {
+ if rdb == nil {
+ return nil, errs.ErrInternalServer.WrapMsg("redis client is nil")
+ }
+ if startTime <= 0 || endTime <= 0 || endTime <= startTime {
+ return nil, nil
+ }
+	// use endTime as the max score so samples taken exactly at endTime are included
+ values, err := rdb.ZRangeByScore(ctx, cachekey.OnlineUserCountHistoryKey, &redis.ZRangeBy{
+ Min: strconv.FormatInt(startTime, 10),
+ Max: strconv.FormatInt(endTime, 10),
+ }).Result()
+ if err != nil {
+ if errors.Is(err, redis.Nil) {
+ return nil, nil
+ }
+ return nil, errs.Wrap(err)
+ }
+ if len(values) == 0 {
+ return nil, nil
+ }
+ samples := make([]OnlineUserCountSample, 0, len(values))
+ for _, val := range values {
+ parts := strings.SplitN(val, onlineUserCountHistorySeparator, 2)
+ if len(parts) != 2 {
+ continue
+ }
+ ts, err := strconv.ParseInt(parts[0], 10, 64)
+ if err != nil {
+ continue
+ }
+ cnt, err := strconv.ParseInt(parts[1], 10, 64)
+ if err != nil {
+ continue
+ }
+ samples = append(samples, OnlineUserCountSample{
+ Timestamp: ts,
+ Count: cnt,
+ })
+ }
+ return samples, nil
+}
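The two functions above store each sample as a ZSET member encoded `<timestamp><separator><count>`, with the timestamp doubling as the score. A standalone sketch of that encode/parse round trip (assuming the separator constant `onlineUserCountHistorySeparator`, defined elsewhere in the package, is `:`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const sep = ":" // assumed value of onlineUserCountHistorySeparator

// encodeSample mirrors the member format written by AppendOnlineUserCountHistory.
func encodeSample(timestamp, count int64) string {
	return fmt.Sprintf("%d%s%d", timestamp, sep, count)
}

// decodeSample mirrors the parsing loop in GetOnlineUserCountHistory:
// malformed members are skipped rather than treated as errors.
func decodeSample(member string) (ts, cnt int64, ok bool) {
	parts := strings.SplitN(member, sep, 2)
	if len(parts) != 2 {
		return 0, 0, false
	}
	ts, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, 0, false
	}
	cnt, err = strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, 0, false
	}
	return ts, cnt, true
}

func main() {
	m := encodeSample(1700000000000, 42)
	fmt.Println(m) // 1700000000000:42
	ts, cnt, ok := decodeSample(m)
	fmt.Println(ts, cnt, ok) // 1700000000000 42 true
}
```

Encoding the count into the member (rather than using it as the score) keeps one entry per sample even when the same count repeats, since ZSET members must be unique.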
diff --git a/pkg/common/storage/cache/redis/online_test.go b/pkg/common/storage/cache/redis/online_test.go
new file mode 100644
index 0000000..b7f6381
--- /dev/null
+++ b/pkg/common/storage/cache/redis/online_test.go
@@ -0,0 +1,52 @@
+package redis
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/db/redisutil"
+)
+
+/*
+address: [ 172.16.8.48:7001, 172.16.8.48:7002, 172.16.8.48:7003, 172.16.8.48:7004, 172.16.8.48:7005, 172.16.8.48:7006 ]
+username:
+password: passwd123
+clusterMode: true
+db: 0
+maxRetry: 10
+*/
+func TestUserOnline(t *testing.T) {
+ conf := config.Redis{
+ Address: []string{
+ "172.16.8.124:7001",
+ "172.16.8.124:7002",
+ "172.16.8.124:7003",
+ "172.16.8.124:7004",
+ "172.16.8.124:7005",
+ "172.16.8.124:7006",
+ },
+ RedisMode: "cluster",
+ Password: "passwd123",
+ //Address: []string{"localhost:16379"},
+ //Password: "openIM123",
+ }
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*1000)
+ defer cancel()
+ rdb, err := redisutil.NewRedisClient(ctx, conf.Build())
+ if err != nil {
+ panic(err)
+ }
+ online := NewUserOnline(rdb)
+
+ userID := "a123456"
+ t.Log(online.GetOnline(ctx, userID))
+ t.Log(online.SetUserOnline(ctx, userID, []int32{1, 2, 3, 4}, nil))
+ t.Log(online.GetOnline(ctx, userID))
+
+}
+
+func TestName111(t *testing.T) {
+
+}
diff --git a/pkg/common/storage/cache/redis/redis_shard_manager.go b/pkg/common/storage/cache/redis/redis_shard_manager.go
new file mode 100644
index 0000000..0a02638
--- /dev/null
+++ b/pkg/common/storage/cache/redis/redis_shard_manager.go
@@ -0,0 +1,211 @@
+package redis
+
+import (
+ "context"
+
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+ "golang.org/x/sync/errgroup"
+)
+
+const (
+ defaultBatchSize = 50
+ defaultConcurrentLimit = 3
+)
+
+// RedisShardManager is a class for sharding and processing keys
+type RedisShardManager struct {
+ redisClient redis.UniversalClient
+ config *Config
+}
+type Config struct {
+ batchSize int
+ continueOnError bool
+ concurrentLimit int
+}
+
+// Option is a function type for configuring Config
+type Option func(c *Config)
+
+//// NewRedisShardManager creates a new RedisShardManager instance
+//func NewRedisShardManager(redisClient redis.UniversalClient, opts ...Option) *RedisShardManager {
+// config := &Config{
+// batchSize: defaultBatchSize, // Default batch size is 50 keys
+// continueOnError: false,
+// concurrentLimit: defaultConcurrentLimit, // Default concurrent limit is 3
+// }
+// for _, opt := range opts {
+// opt(config)
+// }
+// rsm := &RedisShardManager{
+// redisClient: redisClient,
+// config: config,
+// }
+// return rsm
+//}
+//
+//// WithBatchSize sets the number of keys to process per batch
+//func WithBatchSize(size int) Option {
+// return func(c *Config) {
+// c.batchSize = size
+// }
+//}
+//
+//// WithContinueOnError sets whether to continue processing on error
+//func WithContinueOnError(continueOnError bool) Option {
+// return func(c *Config) {
+// c.continueOnError = continueOnError
+// }
+//}
+//
+//// WithConcurrentLimit sets the concurrency limit
+//func WithConcurrentLimit(limit int) Option {
+// return func(c *Config) {
+// c.concurrentLimit = limit
+// }
+//}
+//
+//// ProcessKeysBySlot groups keys by their Redis cluster hash slots and processes them using the provided function.
+//func (rsm *RedisShardManager) ProcessKeysBySlot(
+// ctx context.Context,
+// keys []string,
+// processFunc func(ctx context.Context, slot int64, keys []string) error,
+//) error {
+//
+// // Group keys by slot
+// slots, err := groupKeysBySlot(ctx, rsm.redisClient, keys)
+// if err != nil {
+// return err
+// }
+//
+// g, ctx := errgroup.WithContext(ctx)
+// g.SetLimit(rsm.config.concurrentLimit)
+//
+// // Process keys in each slot using the provided function
+// for slot, singleSlotKeys := range slots {
+// batches := splitIntoBatches(singleSlotKeys, rsm.config.batchSize)
+// for _, batch := range batches {
+// slot, batch := slot, batch // Avoid closure capture issue
+// g.Go(func() error {
+// err := processFunc(ctx, slot, batch)
+// if err != nil {
+// log.ZWarn(ctx, "Batch processFunc failed", err, "slot", slot, "keys", batch)
+// if !rsm.config.continueOnError {
+// return err
+// }
+// }
+// return nil
+// })
+// }
+// }
+//
+// if err := g.Wait(); err != nil {
+// return err
+// }
+// return nil
+//}
+
+// groupKeysBySlot groups keys by their Redis cluster hash slots.
+func groupKeysBySlot(ctx context.Context, redisClient redis.UniversalClient, keys []string) (map[int64][]string, error) {
+ slots := make(map[int64][]string)
+ clusterClient, isCluster := redisClient.(*redis.ClusterClient)
+ if isCluster && len(keys) > 1 {
+ pipe := clusterClient.Pipeline()
+ cmds := make([]*redis.IntCmd, len(keys))
+ for i, key := range keys {
+ cmds[i] = pipe.ClusterKeySlot(ctx, key)
+ }
+ _, err := pipe.Exec(ctx)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "get slot err")
+ }
+
+ for i, cmd := range cmds {
+ slot, err := cmd.Result()
+ if err != nil {
+ log.ZWarn(ctx, "some key get slot err", err, "key", keys[i])
+ return nil, errs.WrapMsg(err, "get slot err", "key", keys[i])
+ }
+ slots[slot] = append(slots[slot], keys[i])
+ }
+ } else {
+ // If not a cluster client, put all keys in the same slot (0)
+ slots[0] = keys
+ }
+
+ return slots, nil
+}
+
+// splitIntoBatches splits keys into batches of the specified size
+func splitIntoBatches(keys []string, batchSize int) [][]string {
+ var batches [][]string
+ for batchSize < len(keys) {
+ keys, batches = keys[batchSize:], append(batches, keys[0:batchSize:batchSize])
+ }
+ return append(batches, keys)
+}
+
+// ProcessKeysBySlot groups keys by their Redis cluster hash slots and processes them using the provided function.
+func ProcessKeysBySlot(
+ ctx context.Context,
+ redisClient redis.UniversalClient,
+ keys []string,
+ processFunc func(ctx context.Context, slot int64, keys []string) error,
+ opts ...Option,
+) error {
+
+ config := &Config{
+ batchSize: defaultBatchSize,
+ continueOnError: false,
+ concurrentLimit: defaultConcurrentLimit,
+ }
+ for _, opt := range opts {
+ opt(config)
+ }
+
+ // Group keys by slot
+ slots, err := groupKeysBySlot(ctx, redisClient, keys)
+ if err != nil {
+ return err
+ }
+
+ g, ctx := errgroup.WithContext(ctx)
+ g.SetLimit(config.concurrentLimit)
+
+ // Process keys in each slot using the provided function
+ for slot, singleSlotKeys := range slots {
+ batches := splitIntoBatches(singleSlotKeys, config.batchSize)
+ for _, batch := range batches {
+ slot, batch := slot, batch // Avoid closure capture issue
+ g.Go(func() error {
+ err := processFunc(ctx, slot, batch)
+ if err != nil {
+ log.ZWarn(ctx, "Batch processFunc failed", err, "slot", slot, "keys", batch)
+ if !config.continueOnError {
+ return err
+ }
+ }
+ return nil
+ })
+ }
+ }
+
+ if err := g.Wait(); err != nil {
+ return err
+ }
+ return nil
+}
+
+func DeleteCacheBySlot(ctx context.Context, rcClient *rocksCacheClient, keys []string) error {
+ switch len(keys) {
+ case 0:
+ return nil
+ case 1:
+ return rcClient.GetClient().TagAsDeletedBatch2(ctx, keys)
+ default:
+ return ProcessKeysBySlot(ctx, rcClient.GetRedis(), keys, func(ctx context.Context, slot int64, keys []string) error {
+ return rcClient.GetClient().TagAsDeletedBatch2(ctx, keys)
+ })
+ }
+}
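`splitIntoBatches` above relies on Go's full slice expression `keys[0:batchSize:batchSize]` to cap each batch's capacity, so a later `append` to one batch can never overwrite the backing array shared with the next batch. A self-contained copy demonstrating the behavior:

```go
package main

import "fmt"

// splitIntoBatches mirrors the helper in redis_shard_manager.go: the
// three-index slice caps each batch at batchSize elements of capacity,
// isolating batches that share the original backing array.
func splitIntoBatches(keys []string, batchSize int) [][]string {
	var batches [][]string
	for batchSize < len(keys) {
		keys, batches = keys[batchSize:], append(batches, keys[0:batchSize:batchSize])
	}
	return append(batches, keys)
}

func main() {
	keys := []string{"a", "b", "c", "d", "e"}
	for _, b := range splitIntoBatches(keys, 2) {
		fmt.Println(b)
	}
	// [a b]
	// [c d]
	// [e]
}
```

The final batch holds whatever remains (1 to batchSize keys), so every input key appears in exactly one batch.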
diff --git a/pkg/common/storage/cache/redis/s3.go b/pkg/common/storage/cache/redis/s3.go
new file mode 100644
index 0000000..19c9530
--- /dev/null
+++ b/pkg/common/storage/cache/redis/s3.go
@@ -0,0 +1,95 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/cont"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewObjectCacheRedis(rdb redis.UniversalClient, objDB database.ObjectInfo) cache.ObjectCache {
+ rc := newRocksCacheClient(rdb)
+ return &objectCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(),
+ rcClient: rc,
+ expireTime: time.Hour * 12,
+ objDB: objDB,
+ }
+}
+
+type objectCacheRedis struct {
+ cache.BatchDeleter
+ objDB database.ObjectInfo
+ rcClient *rocksCacheClient
+ expireTime time.Duration
+}
+
+func (g *objectCacheRedis) getObjectKey(engine string, name string) string {
+ return cachekey.GetObjectKey(engine, name)
+}
+
+func (g *objectCacheRedis) CloneObjectCache() cache.ObjectCache {
+ return &objectCacheRedis{
+ BatchDeleter: g.BatchDeleter.Clone(),
+ rcClient: g.rcClient,
+ expireTime: g.expireTime,
+ objDB: g.objDB,
+ }
+}
+
+func (g *objectCacheRedis) DelObjectName(engine string, names ...string) cache.ObjectCache {
+ objectCache := g.CloneObjectCache()
+ keys := make([]string, 0, len(names))
+ for _, name := range names {
+		keys = append(keys, g.getObjectKey(engine, name))
+ }
+ objectCache.AddKeys(keys...)
+ return objectCache
+}
+
+func (g *objectCacheRedis) GetName(ctx context.Context, engine string, name string) (*model.Object, error) {
+	return getCache(ctx, g.rcClient, g.getObjectKey(engine, name), g.expireTime, func(ctx context.Context) (*model.Object, error) {
+ return g.objDB.Take(ctx, engine, name)
+ })
+}
+
+func NewS3Cache(rdb redis.UniversalClient, s3 s3.Interface) cont.S3Cache {
+ rc := newRocksCacheClient(rdb)
+ return &s3CacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(),
+ rcClient: rc,
+ expireTime: time.Hour * 12,
+ s3: s3,
+ }
+}
+
+type s3CacheRedis struct {
+ cache.BatchDeleter
+ s3 s3.Interface
+ rcClient *rocksCacheClient
+ expireTime time.Duration
+}
+
+func (g *s3CacheRedis) getS3Key(engine string, name string) string {
+ return cachekey.GetS3Key(engine, name)
+}
+
+func (g *s3CacheRedis) DelS3Key(ctx context.Context, engine string, keys ...string) error {
+ ks := make([]string, 0, len(keys))
+ for _, key := range keys {
+ ks = append(ks, g.getS3Key(engine, key))
+ }
+ return g.BatchDeleter.ExecDelWithKeys(ctx, ks)
+}
+
+func (g *s3CacheRedis) GetKey(ctx context.Context, engine string, name string) (*s3.ObjectInfo, error) {
+ return getCache(ctx, g.rcClient, g.getS3Key(engine, name), g.expireTime, func(ctx context.Context) (*s3.ObjectInfo, error) {
+ return g.s3.StatObject(ctx, name)
+ })
+}
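`DelObjectName` above follows a collect-then-flush pattern: keys are accumulated on a cloned cache handle and only deleted when the caller later flushes the embedded `BatchDeleter`. A minimal sketch of the pattern; the `batchDeleter` type and its `Clone`/`AddKeys`/`ExecDel` methods here are simplified stand-ins for illustration, not the real `cache.BatchDeleter` API:

```go
package main

import "fmt"

// batchDeleter is a hypothetical stand-in for cache.BatchDeleter:
// keys are collected first and deleted in one round trip later.
type batchDeleter struct{ keys []string }

// Clone returns an independent handle so pending keys do not leak
// into the shared base deleter.
func (d *batchDeleter) Clone() *batchDeleter {
	return &batchDeleter{keys: append([]string(nil), d.keys...)}
}

func (d *batchDeleter) AddKeys(keys ...string) { d.keys = append(d.keys, keys...) }

// ExecDel flushes the collected keys; in the real code this issues
// the Redis deletions in a single batch.
func (d *batchDeleter) ExecDel() []string {
	out := d.keys
	d.keys = nil
	return out
}

func main() {
	base := &batchDeleter{}
	tx := base.Clone()
	tx.AddKeys("OBJECT:minio:a.png", "OBJECT:minio:b.png")
	fmt.Println(tx.ExecDel())
	fmt.Println(len(base.keys)) // 0: the base handle is untouched
}
```

Cloning before collecting lets several logical operations each build their own deletion set without racing on one shared deleter.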
diff --git a/pkg/common/storage/cache/redis/seq_conversation.go b/pkg/common/storage/cache/redis/seq_conversation.go
new file mode 100644
index 0000000..062a668
--- /dev/null
+++ b/pkg/common/storage/cache/redis/seq_conversation.go
@@ -0,0 +1,521 @@
+package redis
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/mcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewSeqConversationCacheRedis(rdb redis.UniversalClient, mgo database.SeqConversation) cache.SeqConversationCache {
+ if rdb == nil {
+ return mcache.NewSeqConversationCache(mgo)
+ }
+ return &seqConversationCacheRedis{
+ mgo: mgo,
+ lockTime: time.Second * 3,
+ dataTime: time.Hour * 24 * 365,
+ minSeqExpireTime: time.Hour,
+ rcClient: newRocksCacheClient(rdb),
+ }
+}
+
+type seqConversationCacheRedis struct {
+ mgo database.SeqConversation
+ rcClient *rocksCacheClient
+ lockTime time.Duration
+ dataTime time.Duration
+ minSeqExpireTime time.Duration
+}
+
+func (s *seqConversationCacheRedis) getMinSeqKey(conversationID string) string {
+ return cachekey.GetMallocMinSeqKey(conversationID)
+}
+
+func (s *seqConversationCacheRedis) SetMinSeq(ctx context.Context, conversationID string, seq int64) error {
+ return s.SetMinSeqs(ctx, map[string]int64{conversationID: seq})
+}
+
+func (s *seqConversationCacheRedis) GetMinSeq(ctx context.Context, conversationID string) (int64, error) {
+ return getCache(ctx, s.rcClient, s.getMinSeqKey(conversationID), s.minSeqExpireTime, func(ctx context.Context) (int64, error) {
+ return s.mgo.GetMinSeq(ctx, conversationID)
+ })
+}
+
+func (s *seqConversationCacheRedis) getSingleMaxSeq(ctx context.Context, conversationID string) (map[string]int64, error) {
+ seq, err := s.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ return map[string]int64{conversationID: seq}, nil
+}
+
+func (s *seqConversationCacheRedis) getSingleMaxSeqWithTime(ctx context.Context, conversationID string) (map[string]database.SeqTime, error) {
+ seq, err := s.GetMaxSeqWithTime(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ return map[string]database.SeqTime{conversationID: seq}, nil
+}
+
+func (s *seqConversationCacheRedis) batchGetMaxSeq(ctx context.Context, keys []string, keyConversationID map[string]string, seqs map[string]int64) error {
+ result := make([]*redis.StringCmd, len(keys))
+ pipe := s.rcClient.GetRedis().Pipeline()
+ for i, key := range keys {
+ result[i] = pipe.HGet(ctx, key, "CURR")
+ }
+ if _, err := pipe.Exec(ctx); err != nil && !errors.Is(err, redis.Nil) {
+ return errs.Wrap(err)
+ }
+ var notFoundKey []string
+ for i, r := range result {
+ req, err := r.Int64()
+ if err == nil {
+ seqs[keyConversationID[keys[i]]] = req
+ } else if errors.Is(err, redis.Nil) {
+ notFoundKey = append(notFoundKey, keys[i])
+ } else {
+ return errs.Wrap(err)
+ }
+ }
+ for _, key := range notFoundKey {
+ conversationID := keyConversationID[key]
+ seq, err := s.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ seqs[conversationID] = seq
+ }
+ return nil
+}
+
+func (s *seqConversationCacheRedis) batchGetMaxSeqWithTime(ctx context.Context, keys []string, keyConversationID map[string]string, seqs map[string]database.SeqTime) error {
+ result := make([]*redis.SliceCmd, len(keys))
+ pipe := s.rcClient.GetRedis().Pipeline()
+ for i, key := range keys {
+ result[i] = pipe.HMGet(ctx, key, "CURR", "TIME")
+ }
+ if _, err := pipe.Exec(ctx); err != nil && !errors.Is(err, redis.Nil) {
+ return errs.Wrap(err)
+ }
+ var notFoundKey []string
+ for i, r := range result {
+		val, err := r.Result()
+		if err != nil {
+			return errs.Wrap(err)
+		}
+		if len(val) != 2 {
+			return errs.New("batchGetMaxSeqWithTime invalid result", "key", keys[i], "res", val)
+		}
+ if val[0] == nil {
+ notFoundKey = append(notFoundKey, keys[i])
+ continue
+ }
+ seq, err := s.parseInt64(val[0])
+ if err != nil {
+ return err
+ }
+ mill, err := s.parseInt64(val[1])
+ if err != nil {
+ return err
+ }
+ seqs[keyConversationID[keys[i]]] = database.SeqTime{Seq: seq, Time: mill}
+ }
+ for _, key := range notFoundKey {
+ conversationID := keyConversationID[key]
+ seq, err := s.GetMaxSeqWithTime(ctx, conversationID)
+ if err != nil {
+ return err
+ }
+ seqs[conversationID] = seq
+ }
+ return nil
+}
+
+func (s *seqConversationCacheRedis) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
+ switch len(conversationIDs) {
+ case 0:
+ return map[string]int64{}, nil
+ case 1:
+ return s.getSingleMaxSeq(ctx, conversationIDs[0])
+ }
+ keys := make([]string, 0, len(conversationIDs))
+ keyConversationID := make(map[string]string, len(conversationIDs))
+ for _, conversationID := range conversationIDs {
+ key := s.getSeqMallocKey(conversationID)
+ if _, ok := keyConversationID[key]; ok {
+ continue
+ }
+ keys = append(keys, key)
+ keyConversationID[key] = conversationID
+ }
+ if len(keys) == 1 {
+ return s.getSingleMaxSeq(ctx, conversationIDs[0])
+ }
+ slotKeys, err := groupKeysBySlot(ctx, s.rcClient.GetRedis(), keys)
+ if err != nil {
+ return nil, err
+ }
+ seqs := make(map[string]int64, len(conversationIDs))
+ for _, keys := range slotKeys {
+ if err := s.batchGetMaxSeq(ctx, keys, keyConversationID, seqs); err != nil {
+ return nil, err
+ }
+ }
+ return seqs, nil
+}
+
+func (s *seqConversationCacheRedis) GetMaxSeqsWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+ switch len(conversationIDs) {
+ case 0:
+ return map[string]database.SeqTime{}, nil
+ case 1:
+ return s.getSingleMaxSeqWithTime(ctx, conversationIDs[0])
+ }
+ keys := make([]string, 0, len(conversationIDs))
+ keyConversationID := make(map[string]string, len(conversationIDs))
+ for _, conversationID := range conversationIDs {
+ key := s.getSeqMallocKey(conversationID)
+ if _, ok := keyConversationID[key]; ok {
+ continue
+ }
+ keys = append(keys, key)
+ keyConversationID[key] = conversationID
+ }
+ if len(keys) == 1 {
+ return s.getSingleMaxSeqWithTime(ctx, conversationIDs[0])
+ }
+ slotKeys, err := groupKeysBySlot(ctx, s.rcClient.GetRedis(), keys)
+ if err != nil {
+ return nil, err
+ }
+ seqs := make(map[string]database.SeqTime, len(conversationIDs))
+ for _, keys := range slotKeys {
+ if err := s.batchGetMaxSeqWithTime(ctx, keys, keyConversationID, seqs); err != nil {
+ return nil, err
+ }
+ }
+ return seqs, nil
+}
+
+func (s *seqConversationCacheRedis) getSeqMallocKey(conversationID string) string {
+ return cachekey.GetMallocSeqKey(conversationID)
+}
+
+func (s *seqConversationCacheRedis) setSeq(ctx context.Context, key string, owner int64, currSeq int64, lastSeq int64, mill int64) (int64, error) {
+ if lastSeq < currSeq {
+		return 0, errs.New("lastSeq must be greater than or equal to currSeq")
+ }
+	// 0: success
+	// 1: the lock had expired (key missing); the new state was still written
+	// 2: the key is locked by someone else; nothing was written
+ script := `
+local key = KEYS[1]
+local lockValue = ARGV[1]
+local dataSecond = ARGV[2]
+local curr_seq = tonumber(ARGV[3])
+local last_seq = tonumber(ARGV[4])
+local mallocTime = ARGV[5]
+if redis.call("EXISTS", key) == 0 then
+ redis.call("HSET", key, "CURR", curr_seq, "LAST", last_seq, "TIME", mallocTime)
+ redis.call("EXPIRE", key, dataSecond)
+ return 1
+end
+if redis.call("HGET", key, "LOCK") ~= lockValue then
+ return 2
+end
+redis.call("HDEL", key, "LOCK")
+redis.call("HSET", key, "CURR", curr_seq, "LAST", last_seq, "TIME", mallocTime)
+redis.call("EXPIRE", key, dataSecond)
+return 0
+`
+ result, err := s.rcClient.GetRedis().Eval(ctx, script, []string{key}, owner, int64(s.dataTime/time.Second), currSeq, lastSeq, mill).Int64()
+ if err != nil {
+ return 0, errs.Wrap(err)
+ }
+ return result, nil
+}
+
+// malloc allocates a block of seqs: size == 0 only reads the current seq, size > 0 allocates size new seqs.
+func (s *seqConversationCacheRedis) malloc(ctx context.Context, key string, size int64) ([]int64, error) {
+	// 0: success
+	// 1: key missing; the lock was acquired and the caller must allocate from the DB
+	// 2: already locked by another allocator; retry later
+	// 3: the cached window is exhausted; the lock was acquired and the caller must refill from the DB
+ script := `
+local key = KEYS[1]
+local size = tonumber(ARGV[1])
+local lockSecond = ARGV[2]
+local dataSecond = ARGV[3]
+local mallocTime = ARGV[4]
+local result = {}
+if redis.call("EXISTS", key) == 0 then
+ local lockValue = math.random(0, 999999999)
+ redis.call("HSET", key, "LOCK", lockValue)
+ redis.call("EXPIRE", key, lockSecond)
+ table.insert(result, 1)
+ table.insert(result, lockValue)
+ table.insert(result, mallocTime)
+ return result
+end
+if redis.call("HEXISTS", key, "LOCK") == 1 then
+ table.insert(result, 2)
+ return result
+end
+local curr_seq = tonumber(redis.call("HGET", key, "CURR"))
+local last_seq = tonumber(redis.call("HGET", key, "LAST"))
+if size == 0 then
+ redis.call("EXPIRE", key, dataSecond)
+ table.insert(result, 0)
+ table.insert(result, curr_seq)
+ table.insert(result, last_seq)
+ local setTime = redis.call("HGET", key, "TIME")
+ if setTime then
+ table.insert(result, setTime)
+ else
+ table.insert(result, 0)
+ end
+ return result
+end
+local max_seq = curr_seq + size
+if max_seq > last_seq then
+ local lockValue = math.random(0, 999999999)
+ redis.call("HSET", key, "LOCK", lockValue)
+ redis.call("HSET", key, "CURR", last_seq)
+ redis.call("HSET", key, "TIME", mallocTime)
+ redis.call("EXPIRE", key, lockSecond)
+ table.insert(result, 3)
+ table.insert(result, curr_seq)
+ table.insert(result, last_seq)
+ table.insert(result, lockValue)
+ table.insert(result, mallocTime)
+ return result
+end
+redis.call("HSET", key, "CURR", max_seq)
+redis.call("HSET", key, "TIME", ARGV[4])
+redis.call("EXPIRE", key, dataSecond)
+table.insert(result, 0)
+table.insert(result, curr_seq)
+table.insert(result, last_seq)
+table.insert(result, mallocTime)
+return result
+`
+ result, err := s.rcClient.GetRedis().Eval(ctx, script, []string{key}, size, int64(s.lockTime/time.Second), int64(s.dataTime/time.Second), time.Now().UnixMilli()).Int64Slice()
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return result, nil
+}
+
+func (s *seqConversationCacheRedis) wait(ctx context.Context) error {
+ timer := time.NewTimer(time.Second / 4)
+ defer timer.Stop()
+ select {
+ case <-timer.C:
+ return nil
+ case <-ctx.Done():
+ return ctx.Err()
+ }
+}
+
+func (s *seqConversationCacheRedis) setSeqRetry(ctx context.Context, key string, owner int64, currSeq int64, lastSeq int64, mill int64) {
+ for i := 0; i < 10; i++ {
+ state, err := s.setSeq(ctx, key, owner, currSeq, lastSeq, mill)
+ if err != nil {
+ log.ZError(ctx, "set seq cache failed", err, "key", key, "owner", owner, "currSeq", currSeq, "lastSeq", lastSeq, "count", i+1)
+ if err := s.wait(ctx); err != nil {
+ return
+ }
+ continue
+ }
+ switch state {
+ case 0: // ideal state
+ case 1:
+ log.ZWarn(ctx, "set seq cache lock not found", nil, "key", key, "owner", owner, "currSeq", currSeq, "lastSeq", lastSeq)
+ case 2:
+ log.ZWarn(ctx, "set seq cache lock to be held by someone else", nil, "key", key, "owner", owner, "currSeq", currSeq, "lastSeq", lastSeq)
+ default:
+ log.ZError(ctx, "set seq cache lock unknown state", nil, "key", key, "owner", owner, "currSeq", currSeq, "lastSeq", lastSeq)
+ }
+ return
+ }
+ log.ZError(ctx, "set seq cache retrying still failed", nil, "key", key, "owner", owner, "currSeq", currSeq, "lastSeq", lastSeq)
+}
+
+func (s *seqConversationCacheRedis) getMallocSize(conversationID string, size int64) int64 {
+ if size == 0 {
+ return 0
+ }
+ var basicSize int64
+ if msgprocessor.IsGroupConversationID(conversationID) {
+ basicSize = 100
+ } else {
+ basicSize = 50
+ }
+ basicSize += size
+ return basicSize
+}
+
+func (s *seqConversationCacheRedis) Malloc(ctx context.Context, conversationID string, size int64) (int64, error) {
+ seq, _, err := s.mallocTime(ctx, conversationID, size)
+ return seq, err
+}
+
+func (s *seqConversationCacheRedis) mallocTime(ctx context.Context, conversationID string, size int64) (int64, int64, error) {
+ if size < 0 {
+		return 0, 0, errs.New("size cannot be negative")
+ }
+ key := s.getSeqMallocKey(conversationID)
+ for i := 0; i < 10; i++ {
+ states, err := s.malloc(ctx, key, size)
+ if err != nil {
+ return 0, 0, err
+ }
+ switch states[0] {
+ case 0: // success
+ return states[1], states[3], nil
+ case 1: // not found
+ mallocSize := s.getMallocSize(conversationID, size)
+ seq, err := s.mgo.Malloc(ctx, conversationID, mallocSize)
+ if err != nil {
+ return 0, 0, err
+ }
+ s.setSeqRetry(ctx, key, states[1], seq+size, seq+mallocSize, states[2])
+ return seq, 0, nil
+ case 2: // locked
+ if err := s.wait(ctx); err != nil {
+ return 0, 0, err
+ }
+ continue
+ case 3: // exceeded cache max value
+ currSeq := states[1]
+ lastSeq := states[2]
+ mill := states[4]
+ mallocSize := s.getMallocSize(conversationID, size)
+ seq, err := s.mgo.Malloc(ctx, conversationID, mallocSize)
+ if err != nil {
+ return 0, 0, err
+ }
+ if lastSeq == seq {
+ s.setSeqRetry(ctx, key, states[3], currSeq+size, seq+mallocSize, mill)
+ return currSeq, states[4], nil
+ } else {
+ log.ZWarn(ctx, "malloc seq not equal cache last seq", nil, "conversationID", conversationID, "currSeq", currSeq, "lastSeq", lastSeq, "mallocSeq", seq)
+ s.setSeqRetry(ctx, key, states[3], seq+size, seq+mallocSize, mill)
+ return seq, mill, nil
+ }
+ default:
+ log.ZError(ctx, "malloc seq unknown state", nil, "state", states[0], "conversationID", conversationID, "size", size)
+ return 0, 0, errs.New(fmt.Sprintf("unknown state: %d", states[0]))
+ }
+ }
+ log.ZError(ctx, "malloc seq retrying still failed", nil, "conversationID", conversationID, "size", size)
+ return 0, 0, errs.New("malloc seq waiting for lock timeout", "conversationID", conversationID, "size", size)
+}
+
+func (s *seqConversationCacheRedis) GetMaxSeq(ctx context.Context, conversationID string) (int64, error) {
+ return s.Malloc(ctx, conversationID, 0)
+}
+
+func (s *seqConversationCacheRedis) GetMaxSeqWithTime(ctx context.Context, conversationID string) (database.SeqTime, error) {
+ seq, mill, err := s.mallocTime(ctx, conversationID, 0)
+ if err != nil {
+ return database.SeqTime{}, err
+ }
+ return database.SeqTime{Seq: seq, Time: mill}, nil
+}
+
+func (s *seqConversationCacheRedis) SetMinSeqs(ctx context.Context, seqs map[string]int64) error {
+ keys := make([]string, 0, len(seqs))
+ for conversationID, seq := range seqs {
+ keys = append(keys, s.getMinSeqKey(conversationID))
+ if err := s.mgo.SetMinSeq(ctx, conversationID, seq); err != nil {
+ return err
+ }
+ }
+ return DeleteCacheBySlot(ctx, s.rcClient, keys)
+}
+
+// GetCacheMaxSeqWithTime only get the existing cache, if there is no cache, no cache will be generated
+func (s *seqConversationCacheRedis) GetCacheMaxSeqWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+ if len(conversationIDs) == 0 {
+ return map[string]database.SeqTime{}, nil
+ }
+ key2conversationID := make(map[string]string)
+ keys := make([]string, 0, len(conversationIDs))
+ for _, conversationID := range conversationIDs {
+ key := s.getSeqMallocKey(conversationID)
+ if _, ok := key2conversationID[key]; ok {
+ continue
+ }
+ key2conversationID[key] = conversationID
+ keys = append(keys, key)
+ }
+ slotKeys, err := groupKeysBySlot(ctx, s.rcClient.GetRedis(), keys)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]database.SeqTime)
+ for _, keys := range slotKeys {
+ if len(keys) == 0 {
+ continue
+ }
+ pipe := s.rcClient.GetRedis().Pipeline()
+ cmds := make([]*redis.SliceCmd, 0, len(keys))
+ for _, key := range keys {
+ cmds = append(cmds, pipe.HMGet(ctx, key, "CURR", "TIME"))
+ }
+ if _, err := pipe.Exec(ctx); err != nil {
+ return nil, errs.Wrap(err)
+ }
+ for i, cmd := range cmds {
+ val, err := cmd.Result()
+ if err != nil {
+ return nil, err
+ }
+ if len(val) != 2 {
+				return nil, errs.New("GetCacheMaxSeqWithTime invalid result", "key", keys[i], "res", val)
+ }
+ if val[0] == nil {
+ continue
+ }
+ seq, err := s.parseInt64(val[0])
+ if err != nil {
+ return nil, err
+ }
+ mill, err := s.parseInt64(val[1])
+ if err != nil {
+ return nil, err
+ }
+ conversationID := key2conversationID[keys[i]]
+ res[conversationID] = database.SeqTime{Seq: seq, Time: mill}
+ }
+ }
+ return res, nil
+}
+
+func (s *seqConversationCacheRedis) parseInt64(val any) (int64, error) {
+ switch v := val.(type) {
+ case nil:
+ return 0, nil
+ case int:
+ return int64(v), nil
+ case int64:
+ return v, nil
+ case string:
+ res, err := strconv.ParseInt(v, 10, 64)
+ if err != nil {
+ return 0, errs.WrapMsg(err, "invalid string not int64", "value", v)
+ }
+ return res, nil
+ default:
+ return 0, errs.New("invalid result not int64", "resType", fmt.Sprintf("%T", v), "value", v)
+ }
+}
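The Lua scripts above implement batched sequence allocation: Redis caches a `CURR`/`LAST` window, and only when `curr + size` would exceed `LAST` does the caller lock the key and refill a larger window from MongoDB (via `mgo.Malloc`, padded by `getMallocSize`). The core arithmetic can be sketched without Redis; the `seqWindow` type and `dbMalloc` helper below are illustrative, not the real API:

```go
package main

import "fmt"

// seqWindow models the CURR/LAST pair the Lua script keeps in the Redis hash.
type seqWindow struct {
	curr, last int64
	db         *int64 // stands in for the durable MongoDB counter
}

// dbMalloc advances the durable counter by n and returns its prior
// value, mirroring what mgo.Malloc does.
func (w *seqWindow) dbMalloc(n int64) int64 {
	start := *w.db
	*w.db += n
	return start
}

// alloc hands out size seqs, refilling a larger window (size plus a
// basic padding) from the database only when the cache is exhausted.
func (w *seqWindow) alloc(size, basic int64) int64 {
	if w.curr+size > w.last {
		start := w.dbMalloc(size + basic)
		w.curr, w.last = start, start+size+basic
	}
	first := w.curr
	w.curr += size
	return first
}

func main() {
	var db int64
	w := &seqWindow{db: &db}
	fmt.Println(w.alloc(10, 50)) // 0: a window of 60 seqs was fetched from the DB
	fmt.Println(w.alloc(10, 50)) // 10: served from the cached window, no DB hit
	fmt.Println(db)              // 60: only one DB round trip so far
}
```

Over-allocating the window (100 extra seqs for group conversations, 50 for single chats, per `getMallocSize`) trades a small gap in the sequence space for far fewer MongoDB writes on hot conversations.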
diff --git a/pkg/common/storage/cache/redis/seq_conversation_test.go b/pkg/common/storage/cache/redis/seq_conversation_test.go
new file mode 100644
index 0000000..473d247
--- /dev/null
+++ b/pkg/common/storage/cache/redis/seq_conversation_test.go
@@ -0,0 +1,144 @@
+package redis
+
+import (
+ "context"
+ "strconv"
+ "sync"
+ "sync/atomic"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func newTestSeq() *seqConversationCacheRedis {
+ mgocli, err := mongo.Connect(context.Background(), options.Client().ApplyURI("mongodb://openIM:openIM123@127.0.0.1:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second))
+ if err != nil {
+ panic(err)
+ }
+ model, err := mgo.NewSeqConversationMongo(mgocli.Database("openim_v3"))
+ if err != nil {
+ panic(err)
+ }
+ opt := &redis.Options{
+ Addr: "127.0.0.1:16379",
+ Password: "openIM123",
+ DB: 1,
+ }
+ rdb := redis.NewClient(opt)
+ if err := rdb.Ping(context.Background()).Err(); err != nil {
+ panic(err)
+ }
+ return NewSeqConversationCacheRedis(rdb, model).(*seqConversationCacheRedis)
+}
+
+func TestSeq(t *testing.T) {
+ ts := newTestSeq()
+ var (
+ wg sync.WaitGroup
+ speed atomic.Int64
+ )
+
+ const count = 128
+ wg.Add(count)
+ for i := 0; i < count; i++ {
+ index := i + 1
+ go func() {
+ defer wg.Done()
+ var size int64 = 10
+			cID := strconv.Itoa(index)
+ for i := 1; ; i++ {
+ //first, err := ts.mgo.Malloc(context.Background(), cID, size) // mongo
+ first, err := ts.Malloc(context.Background(), cID, size) // redis
+ if err != nil {
+ t.Logf("[%d-%d] %s %s", index, i, cID, err)
+ return
+ }
+ speed.Add(size)
+ _ = first
+ //t.Logf("[%d] %d -> %d", i, first+1, first+size)
+ }
+ }()
+ }
+
+ done := make(chan struct{})
+
+ go func() {
+ wg.Wait()
+ close(done)
+ }()
+
+ ticker := time.NewTicker(time.Second)
+
+ for {
+ select {
+ case <-done:
+ ticker.Stop()
+ return
+ case <-ticker.C:
+ value := speed.Swap(0)
+ t.Logf("speed: %d/s", value)
+ }
+ }
+}
+
+func TestDel(t *testing.T) {
+ ts := newTestSeq()
+ for i := 1; i < 100; i++ {
+ var size int64 = 100
+ first, err := ts.Malloc(context.Background(), "100", size)
+ if err != nil {
+ t.Logf("[%d] %s", i, err)
+ return
+ }
+ t.Logf("[%d] %d -> %d", i, first+1, first+size)
+ time.Sleep(time.Second)
+ }
+}
+
+func TestSeqMalloc(t *testing.T) {
+ ts := newTestSeq()
+ t.Log(ts.GetMaxSeq(context.Background(), "100"))
+}
+
+func TestMinSeq(t *testing.T) {
+ ts := newTestSeq()
+ t.Log(ts.GetMinSeq(context.Background(), "10000000"))
+}
+
+func TestMalloc(t *testing.T) {
+ ts := newTestSeq()
+ t.Log(ts.mallocTime(context.Background(), "10000000", 100))
+}
+
+func TestHMGET(t *testing.T) {
+ ts := newTestSeq()
+ res, err := ts.GetCacheMaxSeqWithTime(context.Background(), []string{"10000000", "123456"})
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res)
+}
+
+func TestGetMaxSeqWithTime(t *testing.T) {
+ ts := newTestSeq()
+ t.Log(ts.GetMaxSeqWithTime(context.Background(), "10000000"))
+}
+
+func TestGetMaxSeqWithTime1(t *testing.T) {
+ ts := newTestSeq()
+ t.Log(ts.GetMaxSeqsWithTime(context.Background(), []string{"10000000", "12345", "111"}))
+}
+
+//
+//func TestHMGET(t *testing.T) {
+// ts := newTestSeq()
+// res, err := ts.rdb.HMGet(context.Background(), "MALLOC_SEQ:1", "CURR", "TIME1").Result()
+// if err != nil {
+// panic(err)
+// }
+// t.Log(res)
+//}
diff --git a/pkg/common/storage/cache/redis/seq_user.go b/pkg/common/storage/cache/redis/seq_user.go
new file mode 100644
index 0000000..9928887
--- /dev/null
+++ b/pkg/common/storage/cache/redis/seq_user.go
@@ -0,0 +1,184 @@
+package redis
+
+import (
+ "context"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewSeqUserCacheRedis(rdb redis.UniversalClient, mgo database.SeqUser) cache.SeqUser {
+ return &seqUserCacheRedis{
+ mgo: mgo,
+ readSeqWriteRatio: 100,
+ expireTime: time.Hour * 24 * 7,
+ readExpireTime: time.Hour * 24 * 30,
+ rocks: newRocksCacheClient(rdb),
+ }
+}
+
+type seqUserCacheRedis struct {
+ mgo database.SeqUser
+ rocks *rocksCacheClient
+ expireTime time.Duration
+ readExpireTime time.Duration
+ readSeqWriteRatio int64
+}
+
+func (s *seqUserCacheRedis) getSeqUserMaxSeqKey(conversationID string, userID string) string {
+ return cachekey.GetSeqUserMaxSeqKey(conversationID, userID)
+}
+
+func (s *seqUserCacheRedis) getSeqUserMinSeqKey(conversationID string, userID string) string {
+ return cachekey.GetSeqUserMinSeqKey(conversationID, userID)
+}
+
+func (s *seqUserCacheRedis) getSeqUserReadSeqKey(conversationID string, userID string) string {
+ return cachekey.GetSeqUserReadSeqKey(conversationID, userID)
+}
+
+func (s *seqUserCacheRedis) GetUserMaxSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return getCache(ctx, s.rocks, s.getSeqUserMaxSeqKey(conversationID, userID), s.expireTime, func(ctx context.Context) (int64, error) {
+ return s.mgo.GetUserMaxSeq(ctx, conversationID, userID)
+ })
+}
+
+func (s *seqUserCacheRedis) SetUserMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ if err := s.mgo.SetUserMaxSeq(ctx, conversationID, userID, seq); err != nil {
+ return err
+ }
+ return s.rocks.GetClient().TagAsDeleted2(ctx, s.getSeqUserMaxSeqKey(conversationID, userID))
+}
+
+func (s *seqUserCacheRedis) GetUserMinSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return getCache(ctx, s.rocks, s.getSeqUserMinSeqKey(conversationID, userID), s.expireTime, func(ctx context.Context) (int64, error) {
+ return s.mgo.GetUserMinSeq(ctx, conversationID, userID)
+ })
+}
+
+func (s *seqUserCacheRedis) SetUserMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return s.SetUserMinSeqs(ctx, userID, map[string]int64{conversationID: seq})
+}
+
+func (s *seqUserCacheRedis) GetUserReadSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return getCache(ctx, s.rocks, s.getSeqUserReadSeqKey(conversationID, userID), s.readExpireTime, func(ctx context.Context) (int64, error) {
+ return s.mgo.GetUserReadSeq(ctx, conversationID, userID)
+ })
+}
+
+func (s *seqUserCacheRedis) SetUserReadSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ if s.rocks.GetRedis() == nil {
+ return s.SetUserReadSeqToDB(ctx, conversationID, userID, seq)
+ }
+ dbSeq, err := s.GetUserReadSeq(ctx, conversationID, userID)
+ if err != nil {
+ return err
+ }
+ if dbSeq < seq {
+		if err := s.rocks.GetClient().RawSet(ctx, s.getSeqUserReadSeqKey(conversationID, userID), strconv.FormatInt(seq, 10), s.readExpireTime); err != nil {
+ return errs.Wrap(err)
+ }
+ }
+ return nil
+}
+
+func (s *seqUserCacheRedis) SetUserReadSeqToDB(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return s.mgo.SetUserReadSeq(ctx, conversationID, userID, seq)
+}
+
+func (s *seqUserCacheRedis) SetUserMinSeqs(ctx context.Context, userID string, seqs map[string]int64) error {
+ keys := make([]string, 0, len(seqs))
+ for conversationID, seq := range seqs {
+ if err := s.mgo.SetUserMinSeq(ctx, conversationID, userID, seq); err != nil {
+ return err
+ }
+ keys = append(keys, s.getSeqUserMinSeqKey(conversationID, userID))
+ }
+ return DeleteCacheBySlot(ctx, s.rocks, keys)
+}
+
+func (s *seqUserCacheRedis) setUserRedisReadSeqs(ctx context.Context, userID string, seqs map[string]int64) error {
+ keys := make([]string, 0, len(seqs))
+ keySeq := make(map[string]int64)
+ for conversationID, seq := range seqs {
+ key := s.getSeqUserReadSeqKey(conversationID, userID)
+ keys = append(keys, key)
+ keySeq[key] = seq
+ }
+ slotKeys, err := groupKeysBySlot(ctx, s.rocks.GetRedis(), keys)
+ if err != nil {
+ return err
+ }
+ for _, keys := range slotKeys {
+ pipe := s.rocks.GetRedis().Pipeline()
+ for _, key := range keys {
+ pipe.HSet(ctx, key, "value", strconv.FormatInt(keySeq[key], 10))
+ pipe.Expire(ctx, key, s.readExpireTime)
+ }
+ if _, err := pipe.Exec(ctx); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (s *seqUserCacheRedis) SetUserReadSeqs(ctx context.Context, userID string, seqs map[string]int64) error {
+	if len(seqs) == 0 {
+		return nil
+	}
+	return s.setUserRedisReadSeqs(ctx, userID, seqs)
+}
+
+func (s *seqUserCacheRedis) GetUserReadSeqs(ctx context.Context, userID string, conversationIDs []string) (map[string]int64, error) {
+ res, err := batchGetCache2(ctx, s.rocks, s.readExpireTime, conversationIDs, func(conversationID string) string {
+ return s.getSeqUserReadSeqKey(conversationID, userID)
+ }, func(v *readSeqModel) string {
+ return v.ConversationID
+ }, func(ctx context.Context, conversationIDs []string) ([]*readSeqModel, error) {
+ seqs, err := s.mgo.GetUserReadSeqs(ctx, userID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ res := make([]*readSeqModel, 0, len(seqs))
+ for conversationID, seq := range seqs {
+ res = append(res, &readSeqModel{ConversationID: conversationID, Seq: seq})
+ }
+ return res, nil
+ })
+ if err != nil {
+ return nil, err
+ }
+ data := make(map[string]int64)
+ for _, v := range res {
+ data[v.ConversationID] = v.Seq
+ }
+ return data, nil
+}
+
+var _ BatchCacheCallback[string] = (*readSeqModel)(nil)
+
+type readSeqModel struct {
+ ConversationID string
+ Seq int64
+}
+
+func (r *readSeqModel) BatchCache(conversationID string) {
+ r.ConversationID = conversationID
+}
+
+func (r *readSeqModel) UnmarshalJSON(bytes []byte) (err error) {
+ r.Seq, err = strconv.ParseInt(string(bytes), 10, 64)
+ return
+}
+
+func (r *readSeqModel) MarshalJSON() ([]byte, error) {
+ return []byte(strconv.FormatInt(r.Seq, 10)), nil
+}
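The `dbSeq < seq` guard in `SetUserReadSeq` keeps the cached read seq monotonic: a late or duplicated write can never move it backwards. The same guard in miniature, with a plain map standing in for Redis (names here are illustrative, not from the patch):

```go
package main

import "fmt"

// monotonicStore keeps only the largest seq ever written per key,
// mirroring the dbSeq < seq guard in SetUserReadSeq.
type monotonicStore struct{ m map[string]int64 }

func newMonotonicStore() *monotonicStore {
	return &monotonicStore{m: map[string]int64{}}
}

// Set stores seq only if it is larger than the current value, so
// out-of-order or duplicated writes never move the read seq backwards.
func (s *monotonicStore) Set(key string, seq int64) bool {
	if cur, ok := s.m[key]; ok && cur >= seq {
		return false // stale write, ignored
	}
	s.m[key] = seq
	return true
}

func (s *monotonicStore) Get(key string) int64 { return s.m[key] }

func main() {
	st := newMonotonicStore()
	st.Set("si_100_200", 5)
	st.Set("si_100_200", 3) // out-of-order write is dropped
	fmt.Println(st.Get("si_100_200")) // 5
}
```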
diff --git a/pkg/common/storage/cache/redis/seq_user_test.go b/pkg/common/storage/cache/redis/seq_user_test.go
new file mode 100644
index 0000000..93f11d5
--- /dev/null
+++ b/pkg/common/storage/cache/redis/seq_user_test.go
@@ -0,0 +1,112 @@
+package redis
+
+import (
+ "context"
+ "fmt"
+ "log"
+ "strconv"
+ "sync/atomic"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ mgo2 "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func newTestOnline() *userOnline {
+ opt := &redis.Options{
+ Addr: "172.16.8.48:16379",
+ Password: "openIM123",
+ DB: 0,
+ }
+ rdb := redis.NewClient(opt)
+ if err := rdb.Ping(context.Background()).Err(); err != nil {
+ panic(err)
+ }
+ return &userOnline{rdb: rdb, expire: time.Hour, channelName: "user_online"}
+}
+
+func TestOnline(t *testing.T) {
+ ts := newTestOnline()
+ var count atomic.Int64
+ for i := 0; i < 64; i++ {
+ go func(userID string) {
+ var err error
+ for i := 0; ; i++ {
+ if i%2 == 0 {
+ err = ts.SetUserOnline(context.Background(), userID, []int32{5, 6}, []int32{7, 8, 9})
+ } else {
+ err = ts.SetUserOnline(context.Background(), userID, []int32{1, 2, 3}, []int32{4, 5, 6})
+ }
+ if err != nil {
+ panic(err)
+ }
+ count.Add(1)
+ }
+ }(strconv.Itoa(10000 + i))
+ }
+
+ ticker := time.NewTicker(time.Second)
+ for range ticker.C {
+ t.Log(count.Swap(0))
+ }
+}
+
+func TestGetOnline(t *testing.T) {
+ ts := newTestOnline()
+ ctx := context.Background()
+ pIDs, err := ts.GetOnline(ctx, "10000")
+ if err != nil {
+ panic(err)
+ }
+ t.Log(pIDs)
+}
+
+func TestRecvOnline(t *testing.T) {
+ ts := newTestOnline()
+ ctx := context.Background()
+ pubsub := ts.rdb.Subscribe(ctx, cachekey.OnlineChannel)
+
+ _, err := pubsub.Receive(ctx)
+ if err != nil {
+ log.Fatalf("Could not subscribe: %v", err)
+ }
+
+ ch := pubsub.Channel()
+
+ for msg := range ch {
+ fmt.Printf("Received message from channel %s: %s\n", msg.Channel, msg.Payload)
+ }
+}
+
+func TestGetUserReadSeqs(t *testing.T) {
+ opt := &redis.Options{
+ Addr: "172.16.8.48:16379",
+ Password: "openIM123",
+ DB: 0,
+ }
+ rdb := redis.NewClient(opt)
+
+ mgo, err := mongo.Connect(context.Background(),
+ options.Client().
+ ApplyURI("mongodb://openIM:openIM123@172.16.8.48:37017/openim_v3?maxPoolSize=100").
+ SetConnectTimeout(5*time.Second))
+ if err != nil {
+ panic(err)
+ }
+ model, err := mgo2.NewSeqUserMongo(mgo.Database("openim_v3"))
+ if err != nil {
+ panic(err)
+ }
+ seq := NewSeqUserCacheRedis(rdb, model)
+
+ res, err := seq.GetUserReadSeqs(context.Background(), "2110910952", []string{"sg_345762580", "2000", "3000"})
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res)
+
+}
diff --git a/pkg/common/storage/cache/redis/third.go b/pkg/common/storage/cache/redis/third.go
new file mode 100644
index 0000000..f067bc4
--- /dev/null
+++ b/pkg/common/storage/cache/redis/third.go
@@ -0,0 +1,90 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "github.com/openimsdk/tools/errs"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewThirdCache(rdb redis.UniversalClient) cache.ThirdCache {
+ return &thirdCache{rdb: rdb}
+}
+
+type thirdCache struct {
+ rdb redis.UniversalClient
+}
+
+func (c *thirdCache) getGetuiTokenKey() string {
+ return cachekey.GetGetuiTokenKey()
+}
+
+func (c *thirdCache) getGetuiTaskIDKey() string {
+ return cachekey.GetGetuiTaskIDKey()
+}
+
+func (c *thirdCache) getUserBadgeUnreadCountSumKey(userID string) string {
+ return cachekey.GetUserBadgeUnreadCountSumKey(userID)
+}
+
+func (c *thirdCache) getFcmAccountTokenKey(account string, platformID int) string {
+ return cachekey.GetFcmAccountTokenKey(account, platformID)
+}
+
+func (c *thirdCache) SetFcmToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) (err error) {
+ return errs.Wrap(c.rdb.Set(ctx, c.getFcmAccountTokenKey(account, platformID), fcmToken, time.Duration(expireTime)*time.Second).Err())
+}
+
+func (c *thirdCache) GetFcmToken(ctx context.Context, account string, platformID int) (string, error) {
+ val, err := c.rdb.Get(ctx, c.getFcmAccountTokenKey(account, platformID)).Result()
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return val, nil
+}
+
+func (c *thirdCache) DelFcmToken(ctx context.Context, account string, platformID int) error {
+ return errs.Wrap(c.rdb.Del(ctx, c.getFcmAccountTokenKey(account, platformID)).Err())
+}
+
+func (c *thirdCache) IncrUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error) {
+ seq, err := c.rdb.Incr(ctx, c.getUserBadgeUnreadCountSumKey(userID)).Result()
+
+ return int(seq), errs.Wrap(err)
+}
+
+func (c *thirdCache) SetUserBadgeUnreadCountSum(ctx context.Context, userID string, value int) error {
+ return errs.Wrap(c.rdb.Set(ctx, c.getUserBadgeUnreadCountSumKey(userID), value, 0).Err())
+}
+
+func (c *thirdCache) GetUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error) {
+ val, err := c.rdb.Get(ctx, c.getUserBadgeUnreadCountSumKey(userID)).Int()
+ return val, errs.Wrap(err)
+}
+
+func (c *thirdCache) SetGetuiToken(ctx context.Context, token string, expireTime int64) error {
+ return errs.Wrap(c.rdb.Set(ctx, c.getGetuiTokenKey(), token, time.Duration(expireTime)*time.Second).Err())
+}
+
+func (c *thirdCache) GetGetuiToken(ctx context.Context) (string, error) {
+ val, err := c.rdb.Get(ctx, c.getGetuiTokenKey()).Result()
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return val, nil
+}
+
+func (c *thirdCache) SetGetuiTaskID(ctx context.Context, taskID string, expireTime int64) error {
+ return errs.Wrap(c.rdb.Set(ctx, c.getGetuiTaskIDKey(), taskID, time.Duration(expireTime)*time.Second).Err())
+}
+
+func (c *thirdCache) GetGetuiTaskID(ctx context.Context) (string, error) {
+ val, err := c.rdb.Get(ctx, c.getGetuiTaskIDKey()).Result()
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return val, nil
+}
diff --git a/pkg/common/storage/cache/redis/token.go b/pkg/common/storage/cache/redis/token.go
new file mode 100644
index 0000000..864d5f5
--- /dev/null
+++ b/pkg/common/storage/cache/redis/token.go
@@ -0,0 +1,248 @@
+package redis
+
+import (
+ "context"
+ "encoding/json"
+ "strconv"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+type tokenCache struct {
+ rdb redis.UniversalClient
+ accessExpire time.Duration
+ localCache *config.LocalCache
+}
+
+func NewTokenCacheModel(rdb redis.UniversalClient, localCache *config.LocalCache, accessExpire int64) cache.TokenModel {
+ c := &tokenCache{rdb: rdb, localCache: localCache}
+ c.accessExpire = c.getExpireTime(accessExpire)
+ return c
+}
+
+func (c *tokenCache) SetTokenFlag(ctx context.Context, userID string, platformID int, token string, flag int) error {
+ key := cachekey.GetTokenKey(userID, platformID)
+ if err := c.rdb.HSet(ctx, key, token, flag).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, key)
+ }
+
+ return nil
+}
+
+// SetTokenFlagEx sets the token and flag with an expiration time.
+func (c *tokenCache) SetTokenFlagEx(ctx context.Context, userID string, platformID int, token string, flag int) error {
+ key := cachekey.GetTokenKey(userID, platformID)
+ if err := c.rdb.HSet(ctx, key, token, flag).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+ if err := c.rdb.Expire(ctx, key, c.accessExpire).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, key)
+ }
+
+ return nil
+}
+
+func (c *tokenCache) GetTokensWithoutError(ctx context.Context, userID string, platformID int) (map[string]int, error) {
+ m, err := c.rdb.HGetAll(ctx, cachekey.GetTokenKey(userID, platformID)).Result()
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ mm := make(map[string]int)
+ for k, v := range m {
+ state, err := strconv.Atoi(v)
+ if err != nil {
+ return nil, errs.WrapMsg(err, "redis token value is not int", "value", v, "userID", userID, "platformID", platformID)
+ }
+ mm[k] = state
+ }
+ return mm, nil
+}
+
+func (c *tokenCache) HasTemporaryToken(ctx context.Context, userID string, platformID int, token string) error {
+ err := c.rdb.Get(ctx, cachekey.GetTemporaryTokenKey(userID, platformID, token)).Err()
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ return nil
+}
+
+func (c *tokenCache) GetAllTokensWithoutError(ctx context.Context, userID string) (map[int]map[string]int, error) {
+ var (
+ res = make(map[int]map[string]int)
+ resLock = sync.Mutex{}
+ )
+
+ keys := cachekey.GetAllPlatformTokenKey(userID)
+ if err := ProcessKeysBySlot(ctx, c.rdb, keys, func(ctx context.Context, slot int64, keys []string) error {
+ pipe := c.rdb.Pipeline()
+ mapRes := make([]*redis.MapStringStringCmd, len(keys))
+ for i, key := range keys {
+ mapRes[i] = pipe.HGetAll(ctx, key)
+ }
+ _, err := pipe.Exec(ctx)
+ if err != nil {
+ return err
+ }
+ for i, m := range mapRes {
+ mm := make(map[string]int)
+ for k, v := range m.Val() {
+ state, err := strconv.Atoi(v)
+ if err != nil {
+ return errs.WrapMsg(err, "redis token value is not int", "value", v, "userID", userID)
+ }
+ mm[k] = state
+ }
+ resLock.Lock()
+ res[cachekey.GetPlatformIDByTokenKey(keys[i])] = mm
+ resLock.Unlock()
+ }
+ return nil
+ }); err != nil {
+ return nil, err
+ }
+ return res, nil
+}
+
+func (c *tokenCache) SetTokenMapByUidPid(ctx context.Context, userID string, platformID int, m map[string]int) error {
+ mm := make(map[string]any)
+ for k, v := range m {
+ mm[k] = v
+ }
+
+ err := c.rdb.HSet(ctx, cachekey.GetTokenKey(userID, platformID), mm).Err()
+ if err != nil {
+ return errs.Wrap(err)
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, cachekey.GetTokenKey(userID, platformID))
+ }
+
+ return nil
+}
+
+func (c *tokenCache) BatchSetTokenMapByUidPid(ctx context.Context, tokens map[string]map[string]any) error {
+ keys := datautil.Keys(tokens)
+ if err := ProcessKeysBySlot(ctx, c.rdb, keys, func(ctx context.Context, slot int64, keys []string) error {
+ pipe := c.rdb.Pipeline()
+		// Only touch the keys that belong to this slot; a pipeline that
+		// mixes slots fails on Redis Cluster.
+		for _, k := range keys {
+			pipe.HSet(ctx, k, tokens[k])
+		}
+ _, err := pipe.Exec(ctx)
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ return nil
+ }); err != nil {
+ return err
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, keys...)
+ }
+ return nil
+}
+
+func (c *tokenCache) DeleteTokenByUidPid(ctx context.Context, userID string, platformID int, fields []string) error {
+ key := cachekey.GetTokenKey(userID, platformID)
+ if err := c.rdb.HDel(ctx, key, fields...).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, key)
+ }
+ return nil
+}
+
+func (c *tokenCache) getExpireTime(t int64) time.Duration {
+ return time.Hour * 24 * time.Duration(t)
+}
+
+// DeleteTokenByTokenMap deletes tokens; the map key is the platformID and the value is the slice of tokens to remove.
+func (c *tokenCache) DeleteTokenByTokenMap(ctx context.Context, userID string, tokens map[int][]string) error {
+ var (
+ keys = make([]string, 0, len(tokens))
+ keyMap = make(map[string][]string)
+ )
+ for k, v := range tokens {
+ k1 := cachekey.GetTokenKey(userID, k)
+ keys = append(keys, k1)
+ keyMap[k1] = v
+ }
+
+ if err := ProcessKeysBySlot(ctx, c.rdb, keys, func(ctx context.Context, slot int64, keys []string) error {
+ pipe := c.rdb.Pipeline()
+		// Use keyMap so each pipeline only deletes keys from its own slot.
+		for _, k := range keys {
+			pipe.HDel(ctx, k, keyMap[k]...)
+		}
+ _, err := pipe.Exec(ctx)
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ return nil
+ }); err != nil {
+ return err
+ }
+
+ // Remove local cache for the token
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, keys...)
+ }
+
+ return nil
+}
+
+func (c *tokenCache) DeleteAndSetTemporary(ctx context.Context, userID string, platformID int, fields []string) error {
+ for _, f := range fields {
+ k := cachekey.GetTemporaryTokenKey(userID, platformID, f)
+ if err := c.rdb.Set(ctx, k, "", c.accessExpire).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+ }
+ key := cachekey.GetTokenKey(userID, platformID)
+ if err := c.rdb.HDel(ctx, key, fields...).Err(); err != nil {
+ return errs.Wrap(err)
+ }
+
+ if c.localCache != nil {
+ c.removeLocalTokenCache(ctx, key)
+ }
+ return nil
+}
+
+func (c *tokenCache) removeLocalTokenCache(ctx context.Context, keys ...string) {
+ if len(keys) == 0 {
+ return
+ }
+
+ topic := c.localCache.Auth.Topic
+ if topic == "" {
+ return
+ }
+
+ data, err := json.Marshal(keys)
+ if err != nil {
+ log.ZWarn(ctx, "keys json marshal failed", err, "topic", topic, "keys", keys)
+ } else {
+ if err := c.rdb.Publish(ctx, topic, string(data)).Err(); err != nil {
+ log.ZWarn(ctx, "redis publish cache delete error", err, "topic", topic, "keys", keys)
+ }
+ }
+}
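Both `BatchSetTokenMapByUidPid` and `DeleteTokenByTokenMap` route their keys through `ProcessKeysBySlot` so that each pipeline touches a single Redis Cluster hash slot (a pipeline that mixes slots fails on a cluster). A self-contained sketch of that slot computation per the Redis Cluster spec — CRC16/XMODEM over the key, or over a non-empty `{hashtag}` if present, mod 16384 (helper names here are illustrative, not from the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// crc16 is CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis
// Cluster uses for key-to-slot mapping.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// hashSlot maps a key to one of 16384 slots. If the key contains a
// non-empty {hashtag}, only the tag is hashed, so related keys can be
// forced onto the same slot (and into the same pipeline).
func hashSlot(key string) uint16 {
	if i := strings.IndexByte(key, '{'); i >= 0 {
		if j := strings.IndexByte(key[i+1:], '}'); j > 0 {
			key = key[i+1 : i+1+j]
		}
	}
	return crc16([]byte(key)) % 16384
}

// groupBySlot buckets keys so each bucket can go into one pipeline.
func groupBySlot(keys []string) map[uint16][]string {
	m := make(map[uint16][]string)
	for _, k := range keys {
		m[hashSlot(k)] = append(m[hashSlot(k)], k)
	}
	return m
}

func main() {
	keys := []string{"UID_PID_TOKEN_STATUS:u1:1", "{u1}:a", "{u1}:b"}
	g := groupBySlot(keys)
	fmt.Println(hashSlot("{u1}:a") == hashSlot("{u1}:b")) // true: same hashtag, same slot
	fmt.Println(len(g) <= len(keys))
}
```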
diff --git a/pkg/common/storage/cache/redis/user.go b/pkg/common/storage/cache/redis/user.go
new file mode 100644
index 0000000..813d45a
--- /dev/null
+++ b/pkg/common/storage/cache/redis/user.go
@@ -0,0 +1,107 @@
+package redis
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/dtm-labs/rockscache"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+ userExpireTime = time.Second * 60 * 60 * 12
+ userOlineStatusExpireTime = time.Second * 60 * 60 * 24
+ statusMod = 501
+)
+
+type UserCacheRedis struct {
+ cache.BatchDeleter
+ rdb redis.UniversalClient
+ userDB database.User
+ expireTime time.Duration
+ rcClient *rocksCacheClient
+}
+
+func NewUserCacheRedis(rdb redis.UniversalClient, localCache *config.LocalCache, userDB database.User, options *rockscache.Options) cache.UserCache {
+ rc := newRocksCacheClient(rdb)
+ return &UserCacheRedis{
+ BatchDeleter: rc.GetBatchDeleter(localCache.User.Topic),
+ rdb: rdb,
+ userDB: userDB,
+ expireTime: userExpireTime,
+ rcClient: rc,
+ }
+}
+
+func (u *UserCacheRedis) getUserID(user *model.User) string {
+ return user.UserID
+}
+
+func (u *UserCacheRedis) CloneUserCache() cache.UserCache {
+ return &UserCacheRedis{
+ BatchDeleter: u.BatchDeleter.Clone(),
+ rdb: u.rdb,
+ userDB: u.userDB,
+ expireTime: u.expireTime,
+ rcClient: u.rcClient,
+ }
+}
+
+func (u *UserCacheRedis) getUserInfoKey(userID string) string {
+ return cachekey.GetUserInfoKey(userID)
+}
+
+func (u *UserCacheRedis) getUserGlobalRecvMsgOptKey(userID string) string {
+ return cachekey.GetUserGlobalRecvMsgOptKey(userID)
+}
+
+func (u *UserCacheRedis) GetUserInfo(ctx context.Context, userID string) (userInfo *model.User, err error) {
+ return getCache(ctx, u.rcClient, u.getUserInfoKey(userID), u.expireTime, func(ctx context.Context) (*model.User, error) {
+ return u.userDB.Take(ctx, userID)
+ })
+}
+
+func (u *UserCacheRedis) GetUsersInfo(ctx context.Context, userIDs []string) ([]*model.User, error) {
+ log.ZInfo(ctx, "GetUsersInfo start", "userIDs", userIDs)
+ return batchGetCache2(ctx, u.rcClient, u.expireTime, userIDs, u.getUserInfoKey, u.getUserID, u.userDB.Find)
+}
+
+func (u *UserCacheRedis) DelUsersInfo(userIDs ...string) cache.UserCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, u.getUserInfoKey(userID))
+ }
+ cache := u.CloneUserCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
+
+func (u *UserCacheRedis) GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error) {
+ return getCache(
+ ctx,
+ u.rcClient,
+ u.getUserGlobalRecvMsgOptKey(userID),
+ u.expireTime,
+ func(ctx context.Context) (int, error) {
+ return u.userDB.GetUserGlobalRecvMsgOpt(ctx, userID)
+ },
+ )
+}
+
+func (u *UserCacheRedis) DelUsersGlobalRecvMsgOpt(userIDs ...string) cache.UserCache {
+ keys := make([]string, 0, len(userIDs))
+ for _, userID := range userIDs {
+ keys = append(keys, u.getUserGlobalRecvMsgOptKey(userID))
+ }
+ cache := u.CloneUserCache()
+ cache.AddKeys(keys...)
+
+ return cache
+}
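`DelUsersInfo` and `DelUsersGlobalRecvMsgOpt` do not delete anything immediately: each call clones the cache and queues key names, so a single batch invalidation can run later (for example after the DB write commits). The clone-and-queue shape in miniature (illustrative names, not the real `BatchDeleter` API):

```go
package main

import "fmt"

// keyCollector mimics the clone-then-queue style of UserCacheRedis:
// every add call returns a copy with more keys queued, and nothing is
// touched until exec runs once at the end.
type keyCollector struct{ keys []string }

// add returns a new collector; the receiver is left untouched, just as
// DelUsersInfo clones the cache instead of mutating it.
func (c keyCollector) add(ks ...string) keyCollector {
	return keyCollector{keys: append(append([]string{}, c.keys...), ks...)}
}

// exec performs the batched invalidation; deleted stands in for Redis.
func (c keyCollector) exec(deleted map[string]bool) {
	for _, k := range c.keys {
		deleted[k] = true
	}
}

func main() {
	deleted := map[string]bool{}
	c := keyCollector{}.
		add("USER_INFO:u1", "USER_INFO:u2").
		add("USER_GLOBAL_RECV_MSG_OPT:u1")
	c.exec(deleted)
	fmt.Println(len(deleted)) // 3
}
```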
diff --git a/pkg/common/storage/cache/s3.go b/pkg/common/storage/cache/s3.go
new file mode 100644
index 0000000..6b427ce
--- /dev/null
+++ b/pkg/common/storage/cache/s3.go
@@ -0,0 +1,52 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/s3"
+)
+
+type ObjectCache interface {
+ BatchDeleter
+ CloneObjectCache() ObjectCache
+ GetName(ctx context.Context, engine string, name string) (*relationtb.Object, error)
+ DelObjectName(engine string, names ...string) ObjectCache
+}
+
+type S3Cache interface {
+ BatchDeleter
+ GetKey(ctx context.Context, engine string, key string) (*s3.ObjectInfo, error)
+ DelS3Key(engine string, keys ...string) S3Cache
+}
+
+// TODO: integrate the minio.Cache and MinioCache interfaces.
+type MinioCache interface {
+ BatchDeleter
+ GetImageObjectKeyInfo(ctx context.Context, key string, fn func(ctx context.Context) (*MinioImageInfo, error)) (*MinioImageInfo, error)
+ GetThumbnailKey(ctx context.Context, key string, format string, width int, height int, minioCache func(ctx context.Context) (string, error)) (string, error)
+ DelObjectImageInfoKey(keys ...string) MinioCache
+ DelImageThumbnailKey(key string, format string, width int, height int) MinioCache
+}
+
+type MinioImageInfo struct {
+ IsImg bool `json:"isImg"`
+ Width int `json:"width"`
+ Height int `json:"height"`
+ Format string `json:"format"`
+ Etag string `json:"etag"`
+}
diff --git a/pkg/common/storage/cache/seq_conversation.go b/pkg/common/storage/cache/seq_conversation.go
new file mode 100644
index 0000000..c294da4
--- /dev/null
+++ b/pkg/common/storage/cache/seq_conversation.go
@@ -0,0 +1,19 @@
+package cache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+)
+
+type SeqConversationCache interface {
+ Malloc(ctx context.Context, conversationID string, size int64) (int64, error)
+ GetMaxSeq(ctx context.Context, conversationID string) (int64, error)
+ SetMinSeq(ctx context.Context, conversationID string, seq int64) error
+ GetMinSeq(ctx context.Context, conversationID string) (int64, error)
+ GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error)
+ SetMinSeqs(ctx context.Context, seqs map[string]int64) error
+ GetCacheMaxSeqWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error)
+ GetMaxSeqsWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error)
+ GetMaxSeqWithTime(ctx context.Context, conversationID string) (database.SeqTime, error)
+}
diff --git a/pkg/common/storage/cache/seq_user.go b/pkg/common/storage/cache/seq_user.go
new file mode 100644
index 0000000..cef414e
--- /dev/null
+++ b/pkg/common/storage/cache/seq_user.go
@@ -0,0 +1,16 @@
+package cache
+
+import "context"
+
+type SeqUser interface {
+ GetUserMaxSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ GetUserMinSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ GetUserReadSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserReadSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ SetUserReadSeqToDB(ctx context.Context, conversationID string, userID string, seq int64) error
+ SetUserMinSeqs(ctx context.Context, userID string, seqs map[string]int64) error
+ SetUserReadSeqs(ctx context.Context, userID string, seqs map[string]int64) error
+ GetUserReadSeqs(ctx context.Context, userID string, conversationIDs []string) (map[string]int64, error)
+}
diff --git a/pkg/common/storage/cache/third.go b/pkg/common/storage/cache/third.go
new file mode 100644
index 0000000..ba6d040
--- /dev/null
+++ b/pkg/common/storage/cache/third.go
@@ -0,0 +1,18 @@
+package cache
+
+import (
+ "context"
+)
+
+type ThirdCache interface {
+ SetFcmToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) (err error)
+ GetFcmToken(ctx context.Context, account string, platformID int) (string, error)
+ DelFcmToken(ctx context.Context, account string, platformID int) error
+ IncrUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error)
+ SetUserBadgeUnreadCountSum(ctx context.Context, userID string, value int) error
+ GetUserBadgeUnreadCountSum(ctx context.Context, userID string) (int, error)
+ SetGetuiToken(ctx context.Context, token string, expireTime int64) error
+ GetGetuiToken(ctx context.Context) (string, error)
+ SetGetuiTaskID(ctx context.Context, taskID string, expireTime int64) error
+ GetGetuiTaskID(ctx context.Context) (string, error)
+}
diff --git a/pkg/common/storage/cache/token.go b/pkg/common/storage/cache/token.go
new file mode 100644
index 0000000..441c089
--- /dev/null
+++ b/pkg/common/storage/cache/token.go
@@ -0,0 +1,19 @@
+package cache
+
+import (
+ "context"
+)
+
+type TokenModel interface {
+ SetTokenFlag(ctx context.Context, userID string, platformID int, token string, flag int) error
+	// SetTokenFlagEx sets the token and flag with an expiration time.
+ SetTokenFlagEx(ctx context.Context, userID string, platformID int, token string, flag int) error
+ GetTokensWithoutError(ctx context.Context, userID string, platformID int) (map[string]int, error)
+ HasTemporaryToken(ctx context.Context, userID string, platformID int, token string) error
+ GetAllTokensWithoutError(ctx context.Context, userID string) (map[int]map[string]int, error)
+ SetTokenMapByUidPid(ctx context.Context, userID string, platformID int, m map[string]int) error
+ BatchSetTokenMapByUidPid(ctx context.Context, tokens map[string]map[string]any) error
+ DeleteTokenByUidPid(ctx context.Context, userID string, platformID int, fields []string) error
+ DeleteTokenByTokenMap(ctx context.Context, userID string, tokens map[int][]string) error
+ DeleteAndSetTemporary(ctx context.Context, userID string, platformID int, fields []string) error
+}
diff --git a/pkg/common/storage/cache/user.go b/pkg/common/storage/cache/user.go
new file mode 100644
index 0000000..4c60fb2
--- /dev/null
+++ b/pkg/common/storage/cache/user.go
@@ -0,0 +1,33 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+type UserCache interface {
+ BatchDeleter
+ CloneUserCache() UserCache
+ GetUserInfo(ctx context.Context, userID string) (userInfo *model.User, err error)
+ GetUsersInfo(ctx context.Context, userIDs []string) ([]*model.User, error)
+ DelUsersInfo(userIDs ...string) UserCache
+ GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error)
+ DelUsersGlobalRecvMsgOpt(userIDs ...string) UserCache
+ //GetUserStatus(ctx context.Context, userIDs []string) ([]*user.OnlineStatus, error)
+ //SetUserStatus(ctx context.Context, userID string, status, platformID int32) error
+}
diff --git a/pkg/common/storage/common/types.go b/pkg/common/storage/common/types.go
new file mode 100644
index 0000000..7591211
--- /dev/null
+++ b/pkg/common/storage/common/types.go
@@ -0,0 +1,26 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package common
+
+type BatchUpdateGroupMember struct {
+ GroupID string
+ UserID string
+ Map map[string]any
+}
+
+type GroupSimpleUserID struct {
+ Hash uint64
+ MemberNum uint32
+}
diff --git a/pkg/common/storage/controller/auth.go b/pkg/common/storage/controller/auth.go
new file mode 100644
index 0000000..44d29dc
--- /dev/null
+++ b/pkg/common/storage/controller/auth.go
@@ -0,0 +1,253 @@
+package controller
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/golang-jwt/jwt/v4"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/tokenverify"
+)
+
+type AuthDatabase interface {
+ // If the result is empty, no error is returned.
+ GetTokensWithoutError(ctx context.Context, userID string, platformID int) (map[string]int, error)
+
+ GetTemporaryTokensWithoutError(ctx context.Context, userID string, platformID int, token string) error
+ // Create token
+ CreateToken(ctx context.Context, userID string, platformID int) (string, error)
+
+ BatchSetTokenMapByUidPid(ctx context.Context, tokens []string) error
+
+ SetTokenMapByUidPid(ctx context.Context, userID string, platformID int, m map[string]int) error
+}
+
+type multiLoginConfig struct {
+ Policy int
+ MaxNumOneEnd int
+}
+
+type authDatabase struct {
+ cache cache.TokenModel
+ accessSecret string
+ accessExpire int64
+ multiLogin multiLoginConfig
+ adminUserIDs []string
+}
+
+func NewAuthDatabase(cache cache.TokenModel, accessSecret string, accessExpire int64, multiLogin config.MultiLogin, adminUserIDs []string) AuthDatabase {
+ return &authDatabase{cache: cache, accessSecret: accessSecret, accessExpire: accessExpire, multiLogin: multiLoginConfig{
+ Policy: multiLogin.Policy,
+ MaxNumOneEnd: multiLogin.MaxNumOneEnd,
+ },
+ adminUserIDs: adminUserIDs,
+ }
+}
+
+// GetTokensWithoutError returns the user's tokens on the given platform; an empty result is not an error.
+func (a *authDatabase) GetTokensWithoutError(ctx context.Context, userID string, platformID int) (map[string]int, error) {
+ return a.cache.GetTokensWithoutError(ctx, userID, platformID)
+}
+
+func (a *authDatabase) GetTemporaryTokensWithoutError(ctx context.Context, userID string, platformID int, token string) error {
+ return a.cache.HasTemporaryToken(ctx, userID, platformID, token)
+}
+
+func (a *authDatabase) SetTokenMapByUidPid(ctx context.Context, userID string, platformID int, m map[string]int) error {
+ return a.cache.SetTokenMapByUidPid(ctx, userID, platformID, m)
+}
+
+func (a *authDatabase) BatchSetTokenMapByUidPid(ctx context.Context, tokens []string) error {
+ setMap := make(map[string]map[string]any)
+ for _, token := range tokens {
+ claims, err := tokenverify.GetClaimFromToken(token, authverify.Secret(a.accessSecret))
+ if err != nil {
+ continue
+ }
+ key := cachekey.GetTokenKey(claims.UserID, claims.PlatformID)
+ if v, ok := setMap[key]; ok {
+ v[token] = constant.KickedToken
+ } else {
+ setMap[key] = map[string]any{
+ token: constant.KickedToken,
+ }
+ }
+ }
+ if err := a.cache.BatchSetTokenMapByUidPid(ctx, setMap); err != nil {
+ return err
+ }
+ return nil
+}
+
+// CreateToken issues a new token and, per the multi-login policy, deletes invalid tokens and kicks conflicting ones.
+func (a *authDatabase) CreateToken(ctx context.Context, userID string, platformID int) (string, error) {
+ tokens, err := a.cache.GetAllTokensWithoutError(ctx, userID)
+ if err != nil {
+ return "", err
+ }
+
+ deleteTokenKey, kickedTokenKey, adminTokens, err := a.checkToken(ctx, tokens, platformID)
+ if err != nil {
+ return "", err
+ }
+ if len(deleteTokenKey) != 0 {
+ err = a.cache.DeleteTokenByTokenMap(ctx, userID, deleteTokenKey)
+ if err != nil {
+ return "", err
+ }
+ }
+ if len(kickedTokenKey) != 0 {
+ for plt, ks := range kickedTokenKey {
+ for _, k := range ks {
+ err := a.cache.SetTokenFlagEx(ctx, userID, plt, k, constant.KickedToken)
+ if err != nil {
+ return "", err
+ }
+ log.ZDebug(ctx, "kicked token in create token", "token", k)
+ }
+ }
+ }
+ if len(adminTokens) != 0 {
+ if err = a.cache.DeleteAndSetTemporary(ctx, userID, constant.AdminPlatformID, adminTokens); err != nil {
+ return "", err
+ }
+ }
+
+ claims := tokenverify.BuildClaims(userID, platformID, a.accessExpire)
+ token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+ tokenString, err := token.SignedString([]byte(a.accessSecret))
+ if err != nil {
+ return "", errs.WrapMsg(err, "token.SignedString")
+ }
+
+ if err = a.cache.SetTokenFlagEx(ctx, userID, platformID, tokenString, constant.NormalToken); err != nil {
+ return "", err
+ }
+
+ return tokenString, nil
+}
+
+// checkToken applies the multi-login policy to the user's existing tokens, returning the tokens to delete, the tokens to kick, and the admin tokens to delete.
+func (a *authDatabase) checkToken(ctx context.Context, tokens map[int]map[string]int, platformID int) (map[int][]string, map[int][]string, []string, error) {
+ // todo: Asynchronous deletion of old data.
+ var (
+		loginTokenMap = make(map[int][]string) // every value slice in this map is non-empty
+ deleteToken = make(map[int][]string)
+ kickToken = make(map[int][]string)
+ adminToken = make([]string, 0)
+ unkickTerminal = ""
+ )
+
+ for plfID, tks := range tokens {
+ for k, v := range tks {
+ _, err := tokenverify.GetClaimFromToken(k, authverify.Secret(a.accessSecret))
+ if err != nil || v != constant.NormalToken {
+ deleteToken[plfID] = append(deleteToken[plfID], k)
+ } else {
+ if plfID != constant.AdminPlatformID {
+ loginTokenMap[plfID] = append(loginTokenMap[plfID], k)
+ } else {
+ adminToken = append(adminToken, k)
+ }
+ }
+ }
+ }
+
+ switch a.multiLogin.Policy {
+ case constant.DefalutNotKick:
+ for plt, ts := range loginTokenMap {
+ l := len(ts)
+ if platformID == plt {
+ l++
+ }
+ limit := a.multiLogin.MaxNumOneEnd
+ if l > limit {
+ kickToken[plt] = ts[:l-limit]
+ }
+ }
+ case constant.AllLoginButSameTermKick:
+ for plt, ts := range loginTokenMap {
+ kickToken[plt] = ts[:len(ts)-1]
+
+ if plt == platformID {
+ kickToken[plt] = append(kickToken[plt], ts[len(ts)-1])
+ }
+ }
+ case constant.PCAndOther:
+ unkickTerminal = constant.TerminalPC
+ if constant.PlatformIDToClass(platformID) != unkickTerminal {
+ for plt, ts := range loginTokenMap {
+ if constant.PlatformIDToClass(plt) != unkickTerminal {
+ kickToken[plt] = ts
+ }
+ }
+ } else {
+ var (
+ preKickToken string
+ preKickPlt int
+ reserveToken = false
+ )
+ for plt, ts := range loginTokenMap {
+ if constant.PlatformIDToClass(plt) != unkickTerminal {
+ // Keep a token from another end
+ if !reserveToken {
+ reserveToken = true
+ kickToken[plt] = ts[:len(ts)-1]
+ preKickToken = ts[len(ts)-1]
+ preKickPlt = plt
+ continue
+ } else {
+ // Prioritize keeping Android
+ if plt == constant.AndroidPlatformID {
+ if preKickToken != "" {
+ kickToken[preKickPlt] = append(kickToken[preKickPlt], preKickToken)
+ }
+ kickToken[plt] = ts[:len(ts)-1]
+ } else {
+ kickToken[plt] = ts
+ }
+ }
+ }
+ }
+ }
+ case constant.AllLoginButSameClassKick:
+ var (
+ reserved = make(map[string]struct{})
+ )
+
+ for plt, ts := range loginTokenMap {
+ if constant.PlatformIDToClass(plt) == constant.PlatformIDToClass(platformID) {
+ kickToken[plt] = ts
+ } else {
+ if _, ok := reserved[constant.PlatformIDToClass(plt)]; !ok {
+ reserved[constant.PlatformIDToClass(plt)] = struct{}{}
+ kickToken[plt] = ts[:len(ts)-1]
+ continue
+ } else {
+ kickToken[plt] = ts
+ }
+ }
+ }
+ default:
+ return nil, nil, nil, errs.New("unknown multiLogin policy").Wrap()
+ }
+
+ //var adminTokenMaxNum = a.multiLogin.MaxNumOneEnd
+ //l := len(adminToken)
+ //if platformID == constant.AdminPlatformID {
+ // l++
+ //}
+ //if l > adminTokenMaxNum {
+ // kickToken = append(kickToken, adminToken[:l-adminTokenMaxNum]...)
+ //}
+ var deleteAdminToken []string
+ if platformID == constant.AdminPlatformID {
+ deleteAdminToken = adminToken
+ }
+ return deleteToken, kickToken, deleteAdminToken, nil
+}
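The `DefalutNotKick` branch above caps tokens per end at `MaxNumOneEnd` and kicks the oldest overflow via `ts[:l-limit]`. The standalone sketch below mirrors that branch with illustrative names (`kickOverLimit` is not the server's API), to make the off-by-one around the about-to-be-created token explicit.

```go
package main

import "fmt"

// kickOverLimit mirrors the DefalutNotKick policy: given the valid tokens
// already held on one platform and the per-end cap, it returns the oldest
// tokens that must be kicked so the count stays within the cap after the
// new login. Illustrative only; not the server's actual function.
func kickOverLimit(valid []string, newLoginSamePlatform bool, maxPerEnd int) []string {
	n := len(valid)
	if newLoginSamePlatform {
		n++ // the token about to be created counts toward the cap
	}
	if n <= maxPerEnd {
		return nil
	}
	return valid[:n-maxPerEnd] // oldest first, as in ts[:l-limit]
}

func main() {
	kicked := kickOverLimit([]string{"t1", "t2", "t3"}, true, 3)
	fmt.Println(kicked) // [t1]
}
```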
diff --git a/pkg/common/storage/controller/black.go b/pkg/common/storage/controller/black.go
new file mode 100644
index 0000000..1fce189
--- /dev/null
+++ b/pkg/common/storage/controller/black.go
@@ -0,0 +1,101 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+type BlackDatabase interface {
+	// Create adds users to the blacklist.
+	Create(ctx context.Context, blacks []*model.Black) (err error)
+	// Delete removes users from the blacklist.
+	Delete(ctx context.Context, blacks []*model.Black) (err error)
+	// FindOwnerBlacks pages through the blacklist owned by ownerUserID.
+	FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error)
+	FindBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error)
+	// CheckIn checks whether userID2 is in userID1's blacklist (inUser1Blacks==true) and whether userID1 is in userID2's blacklist (inUser2Blacks==true).
+	CheckIn(ctx context.Context, userID1, userID2 string) (inUser1Blacks bool, inUser2Blacks bool, err error)
+}
+
+type blackDatabase struct {
+ black database.Black
+ cache cache.BlackCache
+}
+
+func NewBlackDatabase(black database.Black, cache cache.BlackCache) BlackDatabase {
+ return &blackDatabase{black, cache}
+}
+
+// Create adds users to the blacklist and invalidates the owners' cached black ID lists.
+func (b *blackDatabase) Create(ctx context.Context, blacks []*model.Black) (err error) {
+ if err := b.black.Create(ctx, blacks); err != nil {
+ return err
+ }
+ return b.deleteBlackIDsCache(ctx, blacks)
+}
+
+// Delete removes users from the blacklist and invalidates the owners' cached black ID lists.
+func (b *blackDatabase) Delete(ctx context.Context, blacks []*model.Black) (err error) {
+ if err := b.black.Delete(ctx, blacks); err != nil {
+ return err
+ }
+ return b.deleteBlackIDsCache(ctx, blacks)
+}
+
+// deleteBlackIDsCache drops the cached black ID list of every owner in blacks.
+func (b *blackDatabase) deleteBlackIDsCache(ctx context.Context, blacks []*model.Black) (err error) {
+ cache := b.cache.CloneBlackCache()
+ for _, black := range blacks {
+ cache = cache.DelBlackIDs(ctx, black.OwnerUserID)
+ }
+ return cache.ChainExecDel(ctx)
+}
+
+// FindOwnerBlacks returns a page of the blacklist owned by ownerUserID.
+func (b *blackDatabase) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error) {
+ return b.black.FindOwnerBlacks(ctx, ownerUserID, pagination)
+}
+
+// CheckIn reports whether userID2 is in userID1's blacklist and whether userID1 is in userID2's blacklist.
+func (b *blackDatabase) CheckIn(ctx context.Context, userID1, userID2 string) (inUser1Blacks bool, inUser2Blacks bool, err error) {
+ userID1BlackIDs, err := b.cache.GetBlackIDs(ctx, userID1)
+ if err != nil {
+ return
+ }
+ userID2BlackIDs, err := b.cache.GetBlackIDs(ctx, userID2)
+ if err != nil {
+ return
+ }
+ log.ZDebug(ctx, "blackIDs", "user1BlackIDs", userID1BlackIDs, "user2BlackIDs", userID2BlackIDs)
+ return datautil.Contain(userID2, userID1BlackIDs...), datautil.Contain(userID1, userID2BlackIDs...), nil
+}
+
+// FindBlackIDs returns the IDs of all users blacklisted by ownerUserID.
+func (b *blackDatabase) FindBlackIDs(ctx context.Context, ownerUserID string) (blackIDs []string, err error) {
+ return b.cache.GetBlackIDs(ctx, ownerUserID)
+}
+
+// FindBlackInfos returns the blacklist entries of ownerUserID for the given userIDs.
+func (b *blackDatabase) FindBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error) {
+ return b.black.FindOwnerBlackInfos(ctx, ownerUserID, userIDs)
+}
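`CheckIn` above reads both users' cached black ID lists and runs a symmetric `datautil.Contain` check. The sketch below reproduces that logic with in-memory slices standing in for the cache lookups; `checkIn` and `contain` are illustrative helpers, not the package's exported API.

```go
package main

import "fmt"

// contain mirrors the datautil.Contain call used by CheckIn.
func contain(target string, list ...string) bool {
	for _, v := range list {
		if v == target {
			return true
		}
	}
	return false
}

// checkIn reproduces the symmetric blacklist check: given each user's
// blacklist IDs (normally fetched from the cache), it reports whether
// user2 is blocked by user1 and vice versa.
func checkIn(userID1, userID2 string, user1Blacks, user2Blacks []string) (inUser1Blacks, inUser2Blacks bool) {
	return contain(userID2, user1Blacks...), contain(userID1, user2Blacks...)
}

func main() {
	in1, in2 := checkIn("alice", "bob", []string{"bob"}, nil)
	fmt.Println(in1, in2) // true false: alice blocks bob, bob does not block alice
}
```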
diff --git a/pkg/common/storage/controller/client_config.go b/pkg/common/storage/controller/client_config.go
new file mode 100644
index 0000000..7c7d0ea
--- /dev/null
+++ b/pkg/common/storage/controller/client_config.go
@@ -0,0 +1,58 @@
+package controller
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/db/tx"
+)
+
+type ClientConfigDatabase interface {
+ SetUserConfig(ctx context.Context, userID string, config map[string]string) error
+ GetUserConfig(ctx context.Context, userID string) (map[string]string, error)
+ DelUserConfig(ctx context.Context, userID string, keys []string) error
+ GetUserConfigPage(ctx context.Context, userID string, key string, pagination pagination.Pagination) (int64, []*model.ClientConfig, error)
+}
+
+func NewClientConfigDatabase(db database.ClientConfig, cache cache.ClientConfigCache, tx tx.Tx) ClientConfigDatabase {
+ return &clientConfigDatabase{
+ tx: tx,
+ db: db,
+ cache: cache,
+ }
+}
+
+type clientConfigDatabase struct {
+ tx tx.Tx
+ db database.ClientConfig
+ cache cache.ClientConfigCache
+}
+
+func (x *clientConfigDatabase) SetUserConfig(ctx context.Context, userID string, config map[string]string) error {
+ return x.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err := x.db.Set(ctx, userID, config); err != nil {
+ return err
+ }
+ return x.cache.DeleteUserCache(ctx, []string{userID})
+ })
+}
+
+func (x *clientConfigDatabase) GetUserConfig(ctx context.Context, userID string) (map[string]string, error) {
+ return x.cache.GetUserConfig(ctx, userID)
+}
+
+func (x *clientConfigDatabase) DelUserConfig(ctx context.Context, userID string, keys []string) error {
+ return x.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err := x.db.Del(ctx, userID, keys); err != nil {
+ return err
+ }
+ return x.cache.DeleteUserCache(ctx, []string{userID})
+ })
+}
+
+func (x *clientConfigDatabase) GetUserConfigPage(ctx context.Context, userID string, key string, pagination pagination.Pagination) (int64, []*model.ClientConfig, error) {
+ return x.db.GetPage(ctx, userID, key, pagination)
+}
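`clientConfigDatabase` follows a write-then-invalidate pattern: the store is updated inside a transaction, and only on success is the per-user cache entry dropped, so the next read repopulates the cache from fresh data. A minimal sketch of that ordering, with in-memory maps standing in for the real database, cache, and transaction:

```go
package main

import "fmt"

// store is a toy stand-in for the database plus cache pair; it is not the
// package's actual type, and real code would also wrap the write in a tx.
type store struct {
	db    map[string]map[string]string
	cache map[string]map[string]string
}

func (s *store) SetUserConfig(userID string, config map[string]string) {
	if s.db[userID] == nil {
		s.db[userID] = map[string]string{}
	}
	for k, v := range config {
		s.db[userID][k] = v
	}
	delete(s.cache, userID) // invalidate only after a successful write
}

func (s *store) GetUserConfig(userID string) map[string]string {
	if c, ok := s.cache[userID]; ok {
		return c
	}
	c := s.db[userID]
	s.cache[userID] = c // cache-aside fill on miss
	return c
}

func main() {
	s := &store{db: map[string]map[string]string{}, cache: map[string]map[string]string{}}
	s.SetUserConfig("u1", map[string]string{"theme": "dark"})
	fmt.Println(s.GetUserConfig("u1")["theme"]) // dark
}
```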
diff --git a/pkg/common/storage/controller/conversation.go b/pkg/common/storage/controller/conversation.go
new file mode 100644
index 0000000..2c52908
--- /dev/null
+++ b/pkg/common/storage/controller/conversation.go
@@ -0,0 +1,451 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ relationtb "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/db/tx"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/stringutil"
+)
+
+type ConversationDatabase interface {
+ // UpdateUsersConversationField updates the properties of a conversation for specified users.
+ UpdateUsersConversationField(ctx context.Context, userIDs []string, conversationID string, args map[string]any) error
+ // CreateConversation creates a batch of new conversations.
+ CreateConversation(ctx context.Context, conversations []*relationtb.Conversation) error
+ // SyncPeerUserPrivateConversationTx ensures transactional operation while syncing private conversations between peers.
+	SyncPeerUserPrivateConversationTx(ctx context.Context, conversations []*relationtb.Conversation) error
+ // FindConversations retrieves multiple conversations of a user by conversation IDs.
+ FindConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*relationtb.Conversation, error)
+ // GetUserAllConversation fetches all conversations of a user on the server.
+ GetUserAllConversation(ctx context.Context, ownerUserID string) ([]*relationtb.Conversation, error)
+	// SetUserConversations sets multiple conversations of a user, creating those that do not exist and updating those that do. This operation is atomic.
+ SetUserConversations(ctx context.Context, ownerUserID string, conversations []*relationtb.Conversation) error
+	// SetUsersConversationFieldTx updates specific fields of a conversation for multiple users, creating the conversation for users who do not yet have it. This operation is
+	// transactional.
+ SetUsersConversationFieldTx(ctx context.Context, userIDs []string, conversation *relationtb.Conversation, fieldMap map[string]any) error
+ // UpdateUserConversations updates all conversations related to a specified user.
+ // This function does NOT update the user's own conversations but rather the conversations where this user is involved (e.g., other users' conversations referencing this user).
+ UpdateUserConversations(ctx context.Context, userID string, args map[string]any) error
+ // CreateGroupChatConversation creates a group chat conversation for the specified group ID and user IDs.
+ CreateGroupChatConversation(ctx context.Context, groupID string, userIDs []string, conversations *relationtb.Conversation) error
+ // GetConversationIDs retrieves conversation IDs for a given user.
+ GetConversationIDs(ctx context.Context, userID string) ([]string, error)
+ // GetUserConversationIDsHash gets the hash of conversation IDs for a given user.
+ GetUserConversationIDsHash(ctx context.Context, ownerUserID string) (hash uint64, err error)
+ // GetAllConversationIDs fetches all conversation IDs.
+ GetAllConversationIDs(ctx context.Context) ([]string, error)
+ // GetAllConversationIDsNumber returns the number of all conversation IDs.
+ GetAllConversationIDsNumber(ctx context.Context) (int64, error)
+ // PageConversationIDs paginates through conversation IDs based on the specified pagination settings.
+ PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error)
+ // GetConversationsByConversationID retrieves conversations by their IDs.
+ GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*relationtb.Conversation, error)
+ // GetConversationIDsNeedDestruct fetches conversations that need to be destructed based on specific criteria.
+ GetConversationIDsNeedDestruct(ctx context.Context) ([]*relationtb.Conversation, error)
+ // GetConversationNotReceiveMessageUserIDs gets user IDs for users in a conversation who have not received messages.
+ GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
+ // GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (map[string]int64, error)
+ // FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
+ FindConversationUserVersion(ctx context.Context, userID string, version uint, limit int) (*relationtb.VersionLog, error)
+ FindMaxConversationUserVersionCache(ctx context.Context, userID string) (*relationtb.VersionLog, error)
+ GetOwnerConversation(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (int64, []*relationtb.Conversation, error)
+	// GetNotNotifyConversationIDs returns the IDs of conversations the user has muted notifications for.
+	GetNotNotifyConversationIDs(ctx context.Context, userID string) ([]string, error)
+	// GetPinnedConversationIDs returns the IDs of conversations the user has pinned.
+	GetPinnedConversationIDs(ctx context.Context, userID string) ([]string, error)
+ // FindRandConversation finds random conversations based on the specified timestamp and limit.
+ FindRandConversation(ctx context.Context, ts int64, limit int) ([]*relationtb.Conversation, error)
+ // DeleteUsersConversations deletes conversations for a user.
+ DeleteUsersConversations(ctx context.Context, userID string, conversationIDs []string) error
+}
+
+func NewConversationDatabase(conversation database.Conversation, cache cache.ConversationCache, tx tx.Tx) ConversationDatabase {
+ return &conversationDatabase{
+ conversationDB: conversation,
+ cache: cache,
+ tx: tx,
+ }
+}
+
+type conversationDatabase struct {
+ conversationDB database.Conversation
+ cache cache.ConversationCache
+ tx tx.Tx
+}
+
+func (c *conversationDatabase) SetUsersConversationFieldTx(ctx context.Context, userIDs []string, conversation *relationtb.Conversation, fieldMap map[string]any) (err error) {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.CloneConversationCache()
+ if conversation.GroupID != "" {
+ cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(conversation.GroupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(conversation.GroupID)
+ }
+ haveUserIDs, err := c.conversationDB.FindUserID(ctx, userIDs, []string{conversation.ConversationID})
+ if err != nil {
+ return err
+ }
+ if len(haveUserIDs) > 0 {
+ _, err = c.conversationDB.UpdateByMap(ctx, haveUserIDs, conversation.ConversationID, fieldMap)
+ if err != nil {
+ return err
+ }
+ cache = cache.DelUsersConversation(conversation.ConversationID, haveUserIDs...)
+ if _, ok := fieldMap["has_read_seq"]; ok {
+ for _, userID := range haveUserIDs {
+ cache = cache.DelUserAllHasReadSeqs(userID, conversation.ConversationID)
+ }
+ }
+ if _, ok := fieldMap["recv_msg_opt"]; ok {
+ cache = cache.DelConversationNotReceiveMessageUserIDs(conversation.ConversationID)
+ cache = cache.DelConversationNotNotifyMessageUserIDs(userIDs...)
+ }
+ if _, ok := fieldMap["is_pinned"]; ok {
+ cache = cache.DelUserPinnedConversations(userIDs...)
+ }
+ cache = cache.DelConversationVersionUserIDs(haveUserIDs...)
+ }
+		notUserIDs := stringutil.DifferenceString(haveUserIDs, userIDs)
+		log.ZDebug(ctx, "SetUsersConversationFieldTx", "notUserIDs", notUserIDs, "haveUserIDs", haveUserIDs, "userIDs", userIDs)
+		var conversations []*relationtb.Conversation
+		now := time.Now()
+		for _, v := range notUserIDs {
+			temp := new(relationtb.Conversation)
+			if err = datautil.CopyStructFields(temp, conversation); err != nil {
+				return err
+			}
+			temp.OwnerUserID = v
+			temp.CreateTime = now
+			conversations = append(conversations, temp)
+		}
+		if len(conversations) > 0 {
+			err = c.conversationDB.Create(ctx, conversations)
+			if err != nil {
+				return err
+			}
+			cache = cache.DelConversationIDs(notUserIDs...).DelUserConversationIDsHash(notUserIDs...).DelConversations(conversation.ConversationID, notUserIDs...)
+		}
+ }
+ return cache.ChainExecDel(ctx)
+ })
+}
+
+func (c *conversationDatabase) UpdateUserConversations(ctx context.Context, userID string, args map[string]any) error {
+ conversations, err := c.conversationDB.UpdateUserConversations(ctx, userID, args)
+ if err != nil {
+ return err
+ }
+ cache := c.cache.CloneConversationCache()
+ for _, conversation := range conversations {
+ cache = cache.DelUsersConversation(conversation.ConversationID, conversation.OwnerUserID).DelConversationVersionUserIDs(conversation.OwnerUserID)
+ }
+ return cache.ChainExecDel(ctx)
+}
+
+func (c *conversationDatabase) UpdateUsersConversationField(ctx context.Context, userIDs []string, conversationID string, args map[string]any) error {
+ _, err := c.conversationDB.UpdateByMap(ctx, userIDs, conversationID, args)
+ if err != nil {
+ return err
+ }
+ cache := c.cache.CloneConversationCache()
+ cache = cache.DelUsersConversation(conversationID, userIDs...).DelConversationVersionUserIDs(userIDs...)
+ if _, ok := args["recv_msg_opt"]; ok {
+ cache = cache.DelConversationNotReceiveMessageUserIDs(conversationID)
+ cache = cache.DelConversationNotNotifyMessageUserIDs(userIDs...)
+ }
+ if _, ok := args["is_pinned"]; ok {
+ cache = cache.DelUserPinnedConversations(userIDs...)
+ }
+ return cache.ChainExecDel(ctx)
+}
+
+func (c *conversationDatabase) CreateConversation(ctx context.Context, conversations []*relationtb.Conversation) error {
+ if err := c.conversationDB.Create(ctx, conversations); err != nil {
+ return err
+ }
+ var (
+ userIDs []string
+ notNotifyUserIDs []string
+ pinnedUserIDs []string
+ )
+
+ cache := c.cache.CloneConversationCache()
+ for _, conversation := range conversations {
+ cache = cache.DelConversations(conversation.OwnerUserID, conversation.ConversationID)
+ cache = cache.DelConversationNotReceiveMessageUserIDs(conversation.ConversationID)
+ userIDs = append(userIDs, conversation.OwnerUserID)
+ if conversation.RecvMsgOpt == constant.ReceiveNotNotifyMessage {
+ notNotifyUserIDs = append(notNotifyUserIDs, conversation.OwnerUserID)
+ }
+ if conversation.IsPinned {
+ pinnedUserIDs = append(pinnedUserIDs, conversation.OwnerUserID)
+ }
+ }
+ return cache.DelConversationIDs(userIDs...).
+ DelUserConversationIDsHash(userIDs...).
+ DelConversationVersionUserIDs(userIDs...).
+ DelConversationNotNotifyMessageUserIDs(notNotifyUserIDs...).
+ DelUserPinnedConversations(pinnedUserIDs...).
+ ChainExecDel(ctx)
+}
+
+func (c *conversationDatabase) SyncPeerUserPrivateConversationTx(ctx context.Context, conversations []*relationtb.Conversation) error {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.CloneConversationCache()
+ for _, conversation := range conversations {
+ cache = cache.DelConversationVersionUserIDs(conversation.OwnerUserID, conversation.UserID)
+ for _, v := range [][2]string{{conversation.OwnerUserID, conversation.UserID}, {conversation.UserID, conversation.OwnerUserID}} {
+ ownerUserID := v[0]
+ userID := v[1]
+ haveUserIDs, err := c.conversationDB.FindUserID(ctx, []string{ownerUserID}, []string{conversation.ConversationID})
+ if err != nil {
+ return err
+ }
+ if len(haveUserIDs) > 0 {
+ _, err := c.conversationDB.UpdateByMap(ctx, []string{ownerUserID}, conversation.ConversationID, map[string]any{"is_private_chat": conversation.IsPrivateChat})
+ if err != nil {
+ return err
+ }
+ cache = cache.DelUsersConversation(conversation.ConversationID, ownerUserID)
+ } else {
+ newConversation := *conversation
+ newConversation.OwnerUserID = ownerUserID
+ newConversation.UserID = userID
+ newConversation.ConversationID = conversation.ConversationID
+ newConversation.IsPrivateChat = conversation.IsPrivateChat
+ if err := c.conversationDB.Create(ctx, []*relationtb.Conversation{&newConversation}); err != nil {
+ return err
+ }
+ cache = cache.DelConversationIDs(ownerUserID).DelUserConversationIDsHash(ownerUserID)
+ }
+ }
+ }
+ return cache.ChainExecDel(ctx)
+ })
+}
+
+func (c *conversationDatabase) FindConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*relationtb.Conversation, error) {
+ return c.cache.GetConversations(ctx, ownerUserID, conversationIDs)
+}
+
+func (c *conversationDatabase) GetConversation(ctx context.Context, ownerUserID string, conversationID string) (*relationtb.Conversation, error) {
+ return c.cache.GetConversation(ctx, ownerUserID, conversationID)
+}
+
+func (c *conversationDatabase) GetUserAllConversation(ctx context.Context, ownerUserID string) ([]*relationtb.Conversation, error) {
+ return c.cache.GetUserAllConversations(ctx, ownerUserID)
+}
+
+func (c *conversationDatabase) SetUserConversations(ctx context.Context, ownerUserID string, conversations []*relationtb.Conversation) error {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.CloneConversationCache()
+ cache = cache.DelConversationVersionUserIDs(ownerUserID).
+ DelConversationNotNotifyMessageUserIDs(ownerUserID).
+ DelUserPinnedConversations(ownerUserID)
+
+ groupIDs := datautil.Distinct(datautil.Filter(conversations, func(e *relationtb.Conversation) (string, bool) {
+ return e.GroupID, e.GroupID != ""
+ }))
+ for _, groupID := range groupIDs {
+ cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(groupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID)
+ }
+ var conversationIDs []string
+ for _, conversation := range conversations {
+ conversationIDs = append(conversationIDs, conversation.ConversationID)
+ cache = cache.DelConversations(conversation.OwnerUserID, conversation.ConversationID)
+ }
+ existConversations, err := c.conversationDB.Find(ctx, ownerUserID, conversationIDs)
+ if err != nil {
+ return err
+ }
+ if len(existConversations) > 0 {
+ for _, conversation := range conversations {
+ err = c.conversationDB.Update(ctx, conversation)
+ if err != nil {
+ return err
+ }
+ }
+ }
+ var existConversationIDs []string
+ for _, conversation := range existConversations {
+ existConversationIDs = append(existConversationIDs, conversation.ConversationID)
+ }
+
+ var notExistConversations []*relationtb.Conversation
+ for _, conversation := range conversations {
+ if !datautil.Contain(conversation.ConversationID, existConversationIDs...) {
+ notExistConversations = append(notExistConversations, conversation)
+ }
+ }
+ if len(notExistConversations) > 0 {
+ err = c.conversationDB.Create(ctx, notExistConversations)
+ if err != nil {
+ return err
+ }
+ cache = cache.DelConversationIDs(ownerUserID).
+ DelUserConversationIDsHash(ownerUserID).
+ DelConversationNotReceiveMessageUserIDs(datautil.Slice(notExistConversations, func(e *relationtb.Conversation) string { return e.ConversationID })...)
+ }
+ return cache.ChainExecDel(ctx)
+ })
+}
+
+// func (c *conversationDatabase) FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error) {
+// return c.cache.GetSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
+//}
+
+func (c *conversationDatabase) CreateGroupChatConversation(ctx context.Context, groupID string, userIDs []string, conversation *relationtb.Conversation) error {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.CloneConversationCache()
+ conversationID := conversation.ConversationID
+ existConversationUserIDs, err := c.conversationDB.FindUserID(ctx, userIDs, []string{conversationID})
+ if err != nil {
+ return err
+ }
+ notExistUserIDs := stringutil.DifferenceString(userIDs, existConversationUserIDs)
+ var conversations []*relationtb.Conversation
+ for _, v := range notExistUserIDs {
+ conversation := relationtb.Conversation{
+ ConversationType: conversation.ConversationType, GroupID: groupID, OwnerUserID: v, ConversationID: conversationID,
+				// the remaining fields are copied from the template conversation
+ RecvMsgOpt: conversation.RecvMsgOpt, IsPinned: conversation.IsPinned, IsPrivateChat: conversation.IsPrivateChat,
+ BurnDuration: conversation.BurnDuration, GroupAtType: conversation.GroupAtType, AttachedInfo: conversation.AttachedInfo,
+ Ex: conversation.Ex, MaxSeq: conversation.MaxSeq, MinSeq: conversation.MinSeq, CreateTime: conversation.CreateTime,
+ MsgDestructTime: conversation.MsgDestructTime, IsMsgDestruct: conversation.IsMsgDestruct, LatestMsgDestructTime: conversation.LatestMsgDestructTime,
+ }
+
+ conversations = append(conversations, &conversation)
+ cache = cache.DelConversations(v, conversationID).DelConversationNotReceiveMessageUserIDs(conversationID)
+ }
+ cache = cache.DelConversationIDs(notExistUserIDs...).DelUserConversationIDsHash(notExistUserIDs...)
+ if len(conversations) > 0 {
+ err = c.conversationDB.Create(ctx, conversations)
+ if err != nil {
+ return err
+ }
+ }
+ _, err = c.conversationDB.UpdateByMap(ctx, existConversationUserIDs, conversationID, map[string]any{"max_seq": conversation.MaxSeq})
+ if err != nil {
+ return err
+ }
+ for _, v := range existConversationUserIDs {
+ cache = cache.DelConversations(v, conversationID)
+ }
+ return cache.ChainExecDel(ctx)
+ })
+}
+
+func (c *conversationDatabase) GetConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ return c.cache.GetUserConversationIDs(ctx, userID)
+}
+
+func (c *conversationDatabase) GetUserConversationIDsHash(ctx context.Context, ownerUserID string) (hash uint64, err error) {
+ return c.cache.GetUserConversationIDsHash(ctx, ownerUserID)
+}
+
+func (c *conversationDatabase) GetAllConversationIDs(ctx context.Context) ([]string, error) {
+ return c.conversationDB.GetAllConversationIDs(ctx)
+}
+
+func (c *conversationDatabase) GetAllConversationIDsNumber(ctx context.Context) (int64, error) {
+ return c.conversationDB.GetAllConversationIDsNumber(ctx)
+}
+
+func (c *conversationDatabase) PageConversationIDs(ctx context.Context, pagination pagination.Pagination) ([]string, error) {
+ return c.conversationDB.PageConversationIDs(ctx, pagination)
+}
+
+func (c *conversationDatabase) GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*relationtb.Conversation, error) {
+ return c.conversationDB.GetConversationsByConversationID(ctx, conversationIDs)
+}
+
+func (c *conversationDatabase) GetConversationIDsNeedDestruct(ctx context.Context) ([]*relationtb.Conversation, error) {
+ return c.conversationDB.GetConversationIDsNeedDestruct(ctx)
+}
+
+func (c *conversationDatabase) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
+ return c.cache.GetConversationNotReceiveMessageUserIDs(ctx, conversationID)
+}
+
+func (c *conversationDatabase) FindConversationUserVersion(ctx context.Context, userID string, version uint, limit int) (*relationtb.VersionLog, error) {
+ return c.conversationDB.FindConversationUserVersion(ctx, userID, version, limit)
+}
+
+func (c *conversationDatabase) FindMaxConversationUserVersionCache(ctx context.Context, userID string) (*relationtb.VersionLog, error) {
+ return c.cache.FindMaxConversationUserVersion(ctx, userID)
+}
+
+func (c *conversationDatabase) GetOwnerConversation(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (int64, []*relationtb.Conversation, error) {
+ conversationIDs, err := c.cache.GetUserConversationIDs(ctx, ownerUserID)
+ if err != nil {
+ return 0, nil, err
+ }
+ findConversationIDs := datautil.Paginate(conversationIDs, int(pagination.GetPageNumber()), int(pagination.GetShowNumber()))
+ conversations := make([]*relationtb.Conversation, 0, len(findConversationIDs))
+ for _, conversationID := range findConversationIDs {
+ conversation, err := c.cache.GetConversation(ctx, ownerUserID, conversationID)
+ if err != nil {
+ return 0, nil, err
+ }
+ conversations = append(conversations, conversation)
+ }
+ return int64(len(conversationIDs)), conversations, nil
+}
+
+func (c *conversationDatabase) GetNotNotifyConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ conversationIDs, err := c.cache.GetUserNotNotifyConversationIDs(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ return conversationIDs, nil
+}
+
+func (c *conversationDatabase) GetPinnedConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ conversationIDs, err := c.cache.GetPinnedConversationIDs(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ return conversationIDs, nil
+}
+
+func (c *conversationDatabase) FindRandConversation(ctx context.Context, ts int64, limit int) ([]*relationtb.Conversation, error) {
+ return c.conversationDB.FindRandConversation(ctx, ts, limit)
+}
+
+func (c *conversationDatabase) DeleteUsersConversations(ctx context.Context, userID string, conversationIDs []string) (err error) {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ err = c.conversationDB.DeleteUsersConversations(ctx, userID, conversationIDs)
+ if err != nil {
+ return err
+ }
+ cache := c.cache.CloneConversationCache()
+ cache = cache.DelConversations(userID, conversationIDs...).
+ DelConversationVersionUserIDs(userID).
+ DelConversationIDs(userID).
+ DelUserConversationIDsHash(userID).
+ DelConversationNotNotifyMessageUserIDs(userID).
+ DelUserPinnedConversations(userID)
+
+ return cache.ChainExecDel(ctx)
+ })
+}
diff --git a/pkg/common/storage/controller/doc.go b/pkg/common/storage/controller/doc.go
new file mode 100644
index 0000000..cda931e
--- /dev/null
+++ b/pkg/common/storage/controller/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/controller"
diff --git a/pkg/common/storage/controller/friend.go b/pkg/common/storage/controller/friend.go
new file mode 100644
index 0000000..05216f7
--- /dev/null
+++ b/pkg/common/storage/controller/friend.go
@@ -0,0 +1,406 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/db/tx"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+type FriendDatabase interface {
+ // CheckIn checks if user2 is in user1's friend list (inUser1Friends==true) and if user1 is in user2's friend list (inUser2Friends==true)
+ CheckIn(ctx context.Context, user1, user2 string) (inUser1Friends bool, inUser2Friends bool, err error)
+
+ // AddFriendRequest adds or updates a friend request
+ AddFriendRequest(ctx context.Context, fromUserID, toUserID string, reqMsg string, ex string) (err error)
+
+ // BecomeFriends first checks if the users are already in the friends model; if not, it inserts them as friends
+ BecomeFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, addSource int32) (err error)
+
+ // RefuseFriendRequest refuses a friend request
+ RefuseFriendRequest(ctx context.Context, friendRequest *model.FriendRequest) (err error)
+
+ // AgreeFriendRequest accepts a friend request
+ AgreeFriendRequest(ctx context.Context, friendRequest *model.FriendRequest) (err error)
+
+ // Delete removes a friend or friends from the owner's friend list
+ Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error)
+
+ // UpdateRemark updates the remark for a friend
+ UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error)
+
+ // PageOwnerFriends retrieves the friend list of ownerUserID with pagination
+ PageOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error)
+
+ // PageInWhoseFriends finds the users who have friendUserID in their friend list with pagination
+ PageInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error)
+
+ // PageFriendRequestFromMe retrieves the friend requests sent by the user with pagination
+ PageFriendRequestFromMe(ctx context.Context, userID string, handleResults []int, pagination pagination.Pagination) (total int64, friends []*model.FriendRequest, err error)
+
+ // PageFriendRequestToMe retrieves the friend requests received by the user with pagination
+ PageFriendRequestToMe(ctx context.Context, userID string, handleResults []int, pagination pagination.Pagination) (total int64, friends []*model.FriendRequest, err error)
+
+ // FindFriendsWithError fetches specified friends of a user and returns an error if any do not exist
+ FindFriendsWithError(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*model.Friend, err error)
+
+ // FindFriendUserIDs retrieves the friend IDs of a user
+ FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error)
+
+	// FindBothFriendRequests finds the friend requests in both directions between fromUserID and toUserID
+ FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*model.FriendRequest, err error)
+
+ // UpdateFriends updates fields for friends
+ UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error)
+
+ //FindSortFriendUserIDs(ctx context.Context, ownerUserID string) ([]string, error)
+
+ FindFriendIncrVersion(ctx context.Context, ownerUserID string, version uint, limit int) (*model.VersionLog, error)
+
+ FindMaxFriendVersionCache(ctx context.Context, ownerUserID string) (*model.VersionLog, error)
+
+ FindFriendUserID(ctx context.Context, friendUserID string) ([]string, error)
+
+ OwnerIncrVersion(ctx context.Context, ownerUserID string, friendUserIDs []string, state int32) error
+
+ GetUnhandledCount(ctx context.Context, userID string, ts int64) (int64, error)
+}
+
+type friendDatabase struct {
+ friend database.Friend
+ friendRequest database.FriendRequest
+ tx tx.Tx
+ cache cache.FriendCache
+}
+
+func NewFriendDatabase(friend database.Friend, friendRequest database.FriendRequest, cache cache.FriendCache, tx tx.Tx) FriendDatabase {
+ return &friendDatabase{friend: friend, friendRequest: friendRequest, cache: cache, tx: tx}
+}
+
+// CheckIn verifies if user2 is in user1's friend list (inUser1Friends returns true) and
+// if user1 is in user2's friend list (inUser2Friends returns true).
+func (f *friendDatabase) CheckIn(ctx context.Context, userID1, userID2 string) (inUser1Friends bool, inUser2Friends bool, err error) {
+ // Retrieve friend IDs of userID1 from the cache
+ userID1FriendIDs, err := f.cache.GetFriendIDs(ctx, userID1)
+ if err != nil {
+ err = fmt.Errorf("error retrieving friend IDs for user %s: %w", userID1, err)
+ return
+ }
+
+ // Retrieve friend IDs of userID2 from the cache
+ userID2FriendIDs, err := f.cache.GetFriendIDs(ctx, userID2)
+ if err != nil {
+ err = fmt.Errorf("error retrieving friend IDs for user %s: %w", userID2, err)
+ return
+ }
+
+ // Check if userID2 is in userID1's friend list and vice versa
+ inUser1Friends = datautil.Contain(userID2, userID1FriendIDs...)
+ inUser2Friends = datautil.Contain(userID1, userID2FriendIDs...)
+ return inUser1Friends, inUser2Friends, nil
+}
+
+// AddFriendRequest adds or updates a friend request.
+func (f *friendDatabase) AddFriendRequest(ctx context.Context, fromUserID, toUserID string, reqMsg string, ex string) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ _, err := f.friendRequest.Take(ctx, fromUserID, toUserID)
+ switch {
+ case err == nil:
+ m := make(map[string]any, 1)
+ m["handle_result"] = 0
+ m["handle_msg"] = ""
+ m["req_msg"] = reqMsg
+ m["ex"] = ex
+ m["create_time"] = time.Now()
+ return f.friendRequest.UpdateByMap(ctx, fromUserID, toUserID, m)
+ case mgo.IsNotFound(err):
+ return f.friendRequest.Create(
+ ctx,
+ []*model.FriendRequest{{FromUserID: fromUserID, ToUserID: toUserID, ReqMsg: reqMsg, Ex: ex, CreateTime: time.Now(), HandleTime: time.Unix(0, 0)}},
+ )
+ default:
+ return err
+ }
+ })
+}
+
+// BecomeFriends first checks whether each user is already in the friends list (presence or absence is not an error),
+// then inserts friend records for those not yet present.
+func (f *friendDatabase) BecomeFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, addSource int32) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := f.cache.CloneFriendCache()
+ // user find friends
+ myFriends, err := f.friend.FindFriends(ctx, ownerUserID, friendUserIDs)
+ if err != nil {
+ return err
+ }
+ addOwners, err := f.friend.FindReversalFriends(ctx, ownerUserID, friendUserIDs)
+ if err != nil {
+ return err
+ }
+ opUserID := mcontext.GetOpUserID(ctx)
+ friends := make([]*model.Friend, 0, len(friendUserIDs)*2)
+ myFriendsSet := datautil.SliceSetAny(myFriends, func(friend *model.Friend) string {
+ return friend.FriendUserID
+ })
+ addOwnersSet := datautil.SliceSetAny(addOwners, func(friend *model.Friend) string {
+ return friend.OwnerUserID
+ })
+ newMyFriendIDs := make([]string, 0, len(friendUserIDs))
+ newMyOwnerIDs := make([]string, 0, len(friendUserIDs))
+ for _, userID := range friendUserIDs {
+ if ownerUserID == userID {
+ continue
+ }
+ if _, ok := myFriendsSet[userID]; !ok {
+ myFriendsSet[userID] = struct{}{}
+ newMyFriendIDs = append(newMyFriendIDs, userID)
+ friends = append(friends, &model.Friend{OwnerUserID: ownerUserID, FriendUserID: userID, AddSource: addSource, OperatorUserID: opUserID})
+ }
+ if _, ok := addOwnersSet[userID]; !ok {
+ addOwnersSet[userID] = struct{}{}
+ newMyOwnerIDs = append(newMyOwnerIDs, userID)
+ friends = append(friends, &model.Friend{OwnerUserID: userID, FriendUserID: ownerUserID, AddSource: addSource, OperatorUserID: opUserID})
+ }
+ }
+ if len(friends) == 0 {
+ return nil
+ }
+ err = f.friend.Create(ctx, friends)
+ if err != nil {
+ return err
+ }
+ cache = cache.DelFriendIDs(ownerUserID).DelMaxFriendVersion(ownerUserID)
+ if len(newMyFriendIDs) > 0 {
+ cache = cache.DelFriendIDs(newMyFriendIDs...)
+ cache = cache.DelFriends(ownerUserID, newMyFriendIDs).DelMaxFriendVersion(newMyFriendIDs...)
+ }
+ if len(newMyOwnerIDs) > 0 {
+ cache = cache.DelFriendIDs(newMyOwnerIDs...)
+ cache = cache.DelOwner(ownerUserID, newMyOwnerIDs).DelMaxFriendVersion(newMyOwnerIDs...)
+ }
+ return cache.ChainExecDel(ctx)
+ })
+}
+
+// RefuseFriendRequest rejects a friend request. It first checks for an existing, unprocessed request.
+// If no such request exists, it returns an error. Otherwise, it marks the request as refused.
+func (f *friendDatabase) RefuseFriendRequest(ctx context.Context, friendRequest *model.FriendRequest) error {
+ // Attempt to retrieve the friend request from the database.
+ fr, err := f.friendRequest.Take(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
+ if err != nil {
+ return fmt.Errorf("failed to retrieve friend request from %s to %s: %w", friendRequest.FromUserID, friendRequest.ToUserID, err)
+ }
+
+ // Check if the friend request has already been handled.
+ if fr.HandleResult != 0 {
+ return fmt.Errorf("friend request from %s to %s has already been processed", friendRequest.FromUserID, friendRequest.ToUserID)
+ }
+
+ // Log the action of refusing the friend request for debugging and auditing purposes.
+	log.ZDebug(ctx, "Refusing friend request", "DB_FriendRequest", fr, "Arg_FriendRequest", friendRequest)
+
+ // Mark the friend request as refused and update the handle time.
+ friendRequest.HandleResult = constant.FriendResponseRefuse
+ friendRequest.HandleTime = time.Now()
+ if err := f.friendRequest.Update(ctx, friendRequest); err != nil {
+ return fmt.Errorf("failed to update friend request from %s to %s as refused: %w", friendRequest.FromUserID, friendRequest.ToUserID, err)
+ }
+
+ return nil
+}
+
+// AgreeFriendRequest accepts a friend request. It first checks for an existing, unprocessed request.
+func (f *friendDatabase) AgreeFriendRequest(ctx context.Context, friendRequest *model.FriendRequest) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ now := time.Now()
+ fr, err := f.friendRequest.Take(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
+ if err != nil {
+ return err
+ }
+ if fr.HandleResult != 0 {
+ return errs.ErrArgs.WrapMsg("the friend request has been processed")
+ }
+ friendRequest.HandlerUserID = mcontext.GetOpUserID(ctx)
+ friendRequest.HandleResult = constant.FriendResponseAgree
+ friendRequest.HandleTime = now
+ err = f.friendRequest.Update(ctx, friendRequest)
+ if err != nil {
+ return err
+ }
+
+ fr2, err := f.friendRequest.Take(ctx, friendRequest.ToUserID, friendRequest.FromUserID)
+ if err == nil && fr2.HandleResult == constant.FriendResponseNotHandle {
+ fr2.HandlerUserID = mcontext.GetOpUserID(ctx)
+ fr2.HandleResult = constant.FriendResponseAgree
+ fr2.HandleTime = now
+ err = f.friendRequest.Update(ctx, fr2)
+ if err != nil {
+ return err
+ }
+ } else if err != nil && (!mgo.IsNotFound(err)) {
+ return err
+ }
+
+ exists, err := f.friend.FindUserState(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
+ if err != nil {
+ return err
+ }
+ existsMap := datautil.SliceSet(datautil.Slice(exists, func(friend *model.Friend) [2]string {
+ return [...]string{friend.OwnerUserID, friend.FriendUserID} // My - Friend
+ }))
+ var adds []*model.Friend
+ if _, ok := existsMap[[...]string{friendRequest.ToUserID, friendRequest.FromUserID}]; !ok { // My - Friend
+ adds = append(
+ adds,
+ &model.Friend{
+ OwnerUserID: friendRequest.ToUserID,
+ FriendUserID: friendRequest.FromUserID,
+ AddSource: int32(constant.BecomeFriendByApply),
+ OperatorUserID: friendRequest.FromUserID,
+ },
+ )
+ }
+ if _, ok := existsMap[[...]string{friendRequest.FromUserID, friendRequest.ToUserID}]; !ok { // My - Friend
+ adds = append(
+ adds,
+ &model.Friend{
+ OwnerUserID: friendRequest.FromUserID,
+ FriendUserID: friendRequest.ToUserID,
+ AddSource: int32(constant.BecomeFriendByApply),
+ OperatorUserID: friendRequest.FromUserID,
+ },
+ )
+ }
+ if len(adds) > 0 {
+ if err := f.friend.Create(ctx, adds); err != nil {
+ return err
+ }
+ }
+ return f.cache.DelFriendIDs(friendRequest.ToUserID, friendRequest.FromUserID).DelMaxFriendVersion(friendRequest.ToUserID, friendRequest.FromUserID).ChainExecDel(ctx)
+ })
+}
+
+// Delete removes a friend relationship. It is assumed that the external caller has verified the friendship status.
+func (f *friendDatabase) Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error) {
+ if err := f.friend.Delete(ctx, ownerUserID, friendUserIDs); err != nil {
+ return err
+ }
+	userIds := append([]string{ownerUserID}, friendUserIDs...) // copy to avoid mutating the caller's slice
+ return f.cache.DelFriendIDs(userIds...).DelMaxFriendVersion(userIds...).ChainExecDel(ctx)
+}
+
+// UpdateRemark updates the remark for a friend. An empty remark is supported and clears the existing remark.
+func (f *friendDatabase) UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error) {
+ if err := f.friend.UpdateRemark(ctx, ownerUserID, friendUserID, remark); err != nil {
+ return err
+ }
+ return f.cache.DelFriend(ownerUserID, friendUserID).DelMaxFriendVersion(ownerUserID).ChainExecDel(ctx)
+}
+
+// PageOwnerFriends retrieves the list of friends for the ownerUserID. It does not return an error if the result is empty.
+func (f *friendDatabase) PageOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error) {
+ return f.friend.FindOwnerFriends(ctx, ownerUserID, pagination)
+}
+
+// PageInWhoseFriends identifies in whose friend lists the friendUserID appears.
+func (f *friendDatabase) PageInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error) {
+ return f.friend.FindInWhoseFriends(ctx, friendUserID, pagination)
+}
+
+// PageFriendRequestFromMe retrieves friend requests sent by me. It does not return an error if the result is empty.
+func (f *friendDatabase) PageFriendRequestFromMe(ctx context.Context, userID string, handleResults []int, pagination pagination.Pagination) (total int64, friends []*model.FriendRequest, err error) {
+ return f.friendRequest.FindFromUserID(ctx, userID, handleResults, pagination)
+}
+
+// PageFriendRequestToMe retrieves friend requests received by me. It does not return an error if the result is empty.
+func (f *friendDatabase) PageFriendRequestToMe(ctx context.Context, userID string, handleResults []int, pagination pagination.Pagination) (total int64, friends []*model.FriendRequest, err error) {
+ return f.friendRequest.FindToUserID(ctx, userID, handleResults, pagination)
+}
+
+// FindFriendsWithError retrieves specified friends' information for ownerUserID. Returns an error if any friend does not exist.
+func (f *friendDatabase) FindFriendsWithError(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*model.Friend, err error) {
+	friends, err = f.friend.FindFriends(ctx, ownerUserID, friendUserIDs)
+	if err != nil {
+		return
+	}
+	if len(friends) != len(friendUserIDs) {
+		err = errs.ErrRecordNotFound.WrapMsg("some requested friends do not exist")
+	}
+	return
+}
+
+func (f *friendDatabase) FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error) {
+ return f.cache.GetFriendIDs(ctx, ownerUserID)
+}
+
+func (f *friendDatabase) FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*model.FriendRequest, err error) {
+ return f.friendRequest.FindBothFriendRequests(ctx, fromUserID, toUserID)
+}
+
+func (f *friendDatabase) UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error) {
+ if len(val) == 0 {
+ return nil
+ }
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err := f.friend.UpdateFriends(ctx, ownerUserID, friendUserIDs, val); err != nil {
+ return err
+ }
+ return f.cache.DelFriends(ownerUserID, friendUserIDs).DelMaxFriendVersion(ownerUserID).ChainExecDel(ctx)
+ })
+}
+
+//func (f *friendDatabase) FindSortFriendUserIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+// return f.cache.FindSortFriendUserIDs(ctx, ownerUserID)
+//}
+
+func (f *friendDatabase) FindFriendIncrVersion(ctx context.Context, ownerUserID string, version uint, limit int) (*model.VersionLog, error) {
+ return f.friend.FindIncrVersion(ctx, ownerUserID, version, limit)
+}
+
+func (f *friendDatabase) FindMaxFriendVersionCache(ctx context.Context, ownerUserID string) (*model.VersionLog, error) {
+ return f.cache.FindMaxFriendVersion(ctx, ownerUserID)
+}
+
+func (f *friendDatabase) FindFriendUserID(ctx context.Context, friendUserID string) ([]string, error) {
+ return f.friend.FindFriendUserID(ctx, friendUserID)
+}
+
+//func (f *friendDatabase) SearchFriend(ctx context.Context, ownerUserID, keyword string, pagination pagination.Pagination) (int64, []*model.Friend, error) {
+// return f.friend.SearchFriend(ctx, ownerUserID, keyword, pagination)
+//}
+
+func (f *friendDatabase) OwnerIncrVersion(ctx context.Context, ownerUserID string, friendUserIDs []string, state int32) error {
+ if err := f.friend.IncrVersion(ctx, ownerUserID, friendUserIDs, state); err != nil {
+ return err
+ }
+ return f.cache.DelMaxFriendVersion(ownerUserID).ChainExecDel(ctx)
+}
+
+func (f *friendDatabase) GetUnhandledCount(ctx context.Context, userID string, ts int64) (int64, error) {
+ return f.friendRequest.GetUnhandledCount(ctx, userID, ts)
+}
diff --git a/pkg/common/storage/controller/group.go b/pkg/common/storage/controller/group.go
new file mode 100644
index 0000000..1c2a17f
--- /dev/null
+++ b/pkg/common/storage/controller/group.go
@@ -0,0 +1,574 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ redis2 "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/common"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/db/tx"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+)
+
+type GroupDatabase interface {
+ // CreateGroup creates new groups along with their members.
+ CreateGroup(ctx context.Context, groups []*model.Group, groupMembers []*model.GroupMember) error
+ // TakeGroup retrieves a single group by its ID.
+ TakeGroup(ctx context.Context, groupID string) (group *model.Group, err error)
+ // FindGroup retrieves multiple groups by their IDs.
+ FindGroup(ctx context.Context, groupIDs []string) (groups []*model.Group, err error)
+ // SearchGroup searches for groups based on a keyword and pagination settings, returns total count and groups.
+ SearchGroup(ctx context.Context, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error)
+ // UpdateGroup updates the properties of a group identified by its ID.
+ UpdateGroup(ctx context.Context, groupID string, data map[string]any) error
+ // DismissGroup disbands a group and optionally removes its members based on the deleteMember flag.
+ DismissGroup(ctx context.Context, groupID string, deleteMember bool) error
+
+ // TakeGroupMember retrieves a specific group member by group ID and user ID.
+ TakeGroupMember(ctx context.Context, groupID string, userID string) (groupMember *model.GroupMember, err error)
+ // TakeGroupOwner retrieves the owner of a group by group ID.
+ TakeGroupOwner(ctx context.Context, groupID string) (*model.GroupMember, error)
+ // FindGroupMembers retrieves members of a group filtered by user IDs.
+ FindGroupMembers(ctx context.Context, groupID string, userIDs []string) (groupMembers []*model.GroupMember, err error)
+ // FindGroupMemberUser retrieves groups that a user is a member of, filtered by group IDs.
+ FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) (groupMembers []*model.GroupMember, err error)
+ // FindGroupMemberRoleLevels retrieves group members filtered by their role levels within a group.
+ FindGroupMemberRoleLevels(ctx context.Context, groupID string, roleLevels []int32) (groupMembers []*model.GroupMember, err error)
+ // FindGroupMemberAll retrieves all members of a group.
+ FindGroupMemberAll(ctx context.Context, groupID string) (groupMembers []*model.GroupMember, err error)
+ // FindGroupsOwner retrieves the owners for multiple groups.
+ FindGroupsOwner(ctx context.Context, groupIDs []string) ([]*model.GroupMember, error)
+ // FindGroupMemberUserID retrieves the user IDs of all members in a group.
+ FindGroupMemberUserID(ctx context.Context, groupID string) ([]string, error)
+ // FindGroupMemberNum retrieves the number of members in a group.
+ FindGroupMemberNum(ctx context.Context, groupID string) (uint32, error)
+ // FindUserManagedGroupID retrieves group IDs managed by a user.
+ FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+ // PageGroupRequest paginates through group requests for specified groups.
+ PageGroupRequest(ctx context.Context, groupIDs []string, handleResults []int, pagination pagination.Pagination) (int64, []*model.GroupRequest, error)
+ // GetGroupRoleLevelMemberIDs retrieves user IDs of group members with a specific role level.
+ GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
+
+ // PageGetJoinGroup paginates through groups that a user has joined.
+ PageGetJoinGroup(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*model.GroupMember, err error)
+ // PageGetGroupMember paginates through members of a group.
+ PageGetGroupMember(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*model.GroupMember, err error)
+ // SearchGroupMember searches for group members based on a keyword, group ID, and pagination settings.
+ SearchGroupMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (int64, []*model.GroupMember, error)
+ // SearchGroupMemberByFields searches for group members by multiple independent fields: nickname, userID (account), and phone
+ SearchGroupMemberByFields(ctx context.Context, groupID string, nickname, userID, phone string, pagination pagination.Pagination) (int64, []*model.GroupMember, error)
+ // HandlerGroupRequest processes a group join request with a specified result.
+ HandlerGroupRequest(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32, member *model.GroupMember) error
+ // DeleteGroupMember removes specified users from a group.
+ DeleteGroupMember(ctx context.Context, groupID string, userIDs []string) error
+ // MapGroupMemberUserID maps group IDs to their members' simplified user IDs.
+ MapGroupMemberUserID(ctx context.Context, groupIDs []string) (map[string]*common.GroupSimpleUserID, error)
+ // MapGroupMemberNum maps group IDs to their member count.
+ MapGroupMemberNum(ctx context.Context, groupIDs []string) (map[string]uint32, error)
+ // TransferGroupOwner transfers the ownership of a group to another user.
+ TransferGroupOwner(ctx context.Context, groupID string, oldOwnerUserID, newOwnerUserID string, roleLevel int32) error
+ // UpdateGroupMember updates properties of a group member.
+ UpdateGroupMember(ctx context.Context, groupID string, userID string, data map[string]any) error
+ // UpdateGroupMembers batch updates properties of group members.
+ UpdateGroupMembers(ctx context.Context, data []*common.BatchUpdateGroupMember) error
+
+ // CreateGroupRequest creates new group join requests.
+ CreateGroupRequest(ctx context.Context, requests []*model.GroupRequest) error
+ // TakeGroupRequest retrieves a specific group join request.
+ TakeGroupRequest(ctx context.Context, groupID string, userID string) (*model.GroupRequest, error)
+ // FindGroupRequests retrieves multiple group join requests.
+ FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupRequest, error)
+ // PageGroupRequestUser paginates through group join requests made by a user.
+ PageGroupRequestUser(ctx context.Context, userID string, groupIDs []string, handleResults []int, pagination pagination.Pagination) (int64, []*model.GroupRequest, error)
+
+ // CountTotal counts the total number of groups as of a certain date.
+ CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
+ // CountRangeEverydayTotal counts the daily group creation total within a specified date range.
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+ // DeleteGroupMemberHash deletes the hash entries for group members in specified groups.
+ DeleteGroupMemberHash(ctx context.Context, groupIDs []string) error
+
+ FindMemberIncrVersion(ctx context.Context, groupID string, version uint, limit int) (*model.VersionLog, error)
+ BatchFindMemberIncrVersion(ctx context.Context, groupIDs []string, versions []uint64, limits []int) (map[string]*model.VersionLog, error)
+ FindJoinIncrVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error)
+ MemberGroupIncrVersion(ctx context.Context, groupID string, userIDs []string, state int32) error
+
+ //FindSortGroupMemberUserIDs(ctx context.Context, groupID string) ([]string, error)
+ //FindSortJoinGroupIDs(ctx context.Context, userID string) ([]string, error)
+
+ FindMaxGroupMemberVersionCache(ctx context.Context, groupID string) (*model.VersionLog, error)
+ BatchFindMaxGroupMemberVersionCache(ctx context.Context, groupIDs []string) (map[string]*model.VersionLog, error)
+ FindMaxJoinGroupVersionCache(ctx context.Context, userID string) (*model.VersionLog, error)
+
+ SearchJoinGroup(ctx context.Context, userID string, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error)
+
+ FindJoinGroupID(ctx context.Context, userID string) ([]string, error)
+
+ GetGroupApplicationUnhandledCount(ctx context.Context, groupIDs []string, ts int64) (int64, error)
+}
+
+func NewGroupDatabase(
+ rdb redis.UniversalClient,
+ localCache *config.LocalCache,
+ groupDB database.Group,
+ groupMemberDB database.GroupMember,
+ groupRequestDB database.GroupRequest,
+ ctxTx tx.Tx,
+ groupHash cache.GroupHash,
+) GroupDatabase {
+ return &groupDatabase{
+ groupDB: groupDB,
+ groupMemberDB: groupMemberDB,
+ groupRequestDB: groupRequestDB,
+ ctxTx: ctxTx,
+ cache: redis2.NewGroupCacheRedis(rdb, localCache, groupDB, groupMemberDB, groupRequestDB, groupHash),
+ }
+}
+
+type groupDatabase struct {
+ groupDB database.Group
+ groupMemberDB database.GroupMember
+ groupRequestDB database.GroupRequest
+ ctxTx tx.Tx
+ cache cache.GroupCache
+}
+
+func (g *groupDatabase) FindJoinGroupID(ctx context.Context, userID string) ([]string, error) {
+ return g.cache.GetJoinedGroupIDs(ctx, userID)
+}
+
+func (g *groupDatabase) FindGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupMember, error) {
+ return g.cache.GetGroupMembersInfo(ctx, groupID, userIDs)
+}
+
+func (g *groupDatabase) FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) ([]*model.GroupMember, error) {
+ return g.cache.FindGroupMemberUser(ctx, groupIDs, userID)
+}
+
+func (g *groupDatabase) FindGroupMemberRoleLevels(ctx context.Context, groupID string, roleLevels []int32) ([]*model.GroupMember, error) {
+ return g.cache.GetGroupRolesLevelMemberInfo(ctx, groupID, roleLevels)
+}
+
+func (g *groupDatabase) FindGroupMemberAll(ctx context.Context, groupID string) ([]*model.GroupMember, error) {
+ return g.cache.GetAllGroupMembersInfo(ctx, groupID)
+}
+
+func (g *groupDatabase) FindGroupsOwner(ctx context.Context, groupIDs []string) ([]*model.GroupMember, error) {
+ return g.cache.GetGroupsOwner(ctx, groupIDs)
+}
+
+func (g *groupDatabase) GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return g.cache.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+}
+
+func (g *groupDatabase) CreateGroup(ctx context.Context, groups []*model.Group, groupMembers []*model.GroupMember) error {
+ if len(groups)+len(groupMembers) == 0 {
+ return nil
+ }
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.CloneGroupCache()
+ if len(groups) > 0 {
+ if err := g.groupDB.Create(ctx, groups); err != nil {
+ return err
+ }
+ for _, group := range groups {
+ c = c.DelGroupsInfo(group.GroupID).
+ DelGroupMembersHash(group.GroupID).
+ DelGroupsMemberNum(group.GroupID).
+ DelGroupMemberIDs(group.GroupID).
+ DelGroupAllRoleLevel(group.GroupID).
+ DelMaxGroupMemberVersion(group.GroupID)
+ }
+ }
+ if len(groupMembers) > 0 {
+ if err := g.groupMemberDB.Create(ctx, groupMembers); err != nil {
+ return err
+ }
+ for _, groupMember := range groupMembers {
+ c = c.DelGroupMembersHash(groupMember.GroupID).
+ DelGroupsMemberNum(groupMember.GroupID).
+ DelGroupMemberIDs(groupMember.GroupID).
+ DelJoinedGroupID(groupMember.UserID).
+ DelGroupMembersInfo(groupMember.GroupID, groupMember.UserID).
+ DelGroupAllRoleLevel(groupMember.GroupID).
+ DelMaxJoinGroupVersion(groupMember.UserID).
+ DelMaxGroupMemberVersion(groupMember.GroupID)
+ }
+ }
+ return c.ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) FindGroupMemberUserID(ctx context.Context, groupID string) ([]string, error) {
+ return g.cache.GetGroupMemberIDs(ctx, groupID)
+}
+
+func (g *groupDatabase) FindGroupMemberNum(ctx context.Context, groupID string) (uint32, error) {
+ num, err := g.cache.GetGroupMemberNum(ctx, groupID)
+ if err != nil {
+ return 0, err
+ }
+ return uint32(num), nil
+}
+
+func (g *groupDatabase) TakeGroup(ctx context.Context, groupID string) (*model.Group, error) {
+ return g.cache.GetGroupInfo(ctx, groupID)
+}
+
+func (g *groupDatabase) FindGroup(ctx context.Context, groupIDs []string) ([]*model.Group, error) {
+ return g.cache.GetGroupsInfo(ctx, groupIDs)
+}
+
+func (g *groupDatabase) SearchGroup(ctx context.Context, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error) {
+ return g.groupDB.Search(ctx, keyword, pagination)
+}
+
+func (g *groupDatabase) UpdateGroup(ctx context.Context, groupID string, data map[string]any) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupDB.UpdateMap(ctx, groupID, data); err != nil {
+ return err
+ }
+ if err := g.groupMemberDB.MemberGroupIncrVersion(ctx, groupID, []string{""}, model.VersionStateUpdate); err != nil {
+ return err
+ }
+ return g.cache.CloneGroupCache().DelGroupsInfo(groupID).DelMaxGroupMemberVersion(groupID).ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) DismissGroup(ctx context.Context, groupID string, deleteMember bool) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.CloneGroupCache()
+ if err := g.groupDB.UpdateStatus(ctx, groupID, constant.GroupStatusDismissed); err != nil {
+ return err
+ }
+ if deleteMember {
+ userIDs, err := g.cache.GetGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return err
+ }
+ if err := g.groupMemberDB.Delete(ctx, groupID, nil); err != nil {
+ return err
+ }
+ c = c.DelJoinedGroupID(userIDs...).
+ DelGroupMemberIDs(groupID).
+ DelGroupsMemberNum(groupID).
+ DelGroupMembersHash(groupID).
+ DelGroupAllRoleLevel(groupID).
+ DelGroupMembersInfo(groupID, userIDs...).
+ DelMaxGroupMemberVersion(groupID).
+ DelMaxJoinGroupVersion(userIDs...)
+ for _, userID := range userIDs {
+ if err := g.groupMemberDB.JoinGroupIncrVersion(ctx, userID, []string{groupID}, model.VersionStateDelete); err != nil {
+ return err
+ }
+ }
+ } else {
+ if err := g.groupMemberDB.MemberGroupIncrVersion(ctx, groupID, []string{""}, model.VersionStateUpdate); err != nil {
+ return err
+ }
+ c = c.DelMaxGroupMemberVersion(groupID)
+ }
+ return c.DelGroupsInfo(groupID).ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) TakeGroupMember(ctx context.Context, groupID string, userID string) (*model.GroupMember, error) {
+ return g.cache.GetGroupMemberInfo(ctx, groupID, userID)
+}
+
+func (g *groupDatabase) TakeGroupOwner(ctx context.Context, groupID string) (*model.GroupMember, error) {
+ return g.cache.GetGroupOwner(ctx, groupID)
+}
+
+func (g *groupDatabase) FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
+ return g.groupMemberDB.FindUserManagedGroupID(ctx, userID)
+}
+
+func (g *groupDatabase) PageGroupRequest(ctx context.Context, groupIDs []string, handleResults []int, pagination pagination.Pagination) (int64, []*model.GroupRequest, error) {
+ return g.groupRequestDB.PageGroup(ctx, groupIDs, handleResults, pagination)
+}
+
+func (g *groupDatabase) PageGetJoinGroup(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*model.GroupMember, err error) {
+ groupIDs, err := g.cache.GetJoinedGroupIDs(ctx, userID)
+ if err != nil {
+ return 0, nil, err
+ }
+ for _, groupID := range datautil.Paginate(groupIDs, int(pagination.GetPageNumber()), int(pagination.GetShowNumber())) {
+ groupMembers, err := g.cache.GetGroupMembersInfo(ctx, groupID, []string{userID})
+ if err != nil {
+ return 0, nil, err
+ }
+ totalGroupMembers = append(totalGroupMembers, groupMembers...)
+ }
+ return int64(len(groupIDs)), totalGroupMembers, nil
+}
+
+func (g *groupDatabase) PageGetGroupMember(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*model.GroupMember, err error) {
+ groupMemberIDs, err := g.cache.GetGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return 0, nil, err
+ }
+ pageIDs := datautil.Paginate(groupMemberIDs, int(pagination.GetPageNumber()), int(pagination.GetShowNumber()))
+ if len(pageIDs) == 0 {
+ return int64(len(groupMemberIDs)), nil, nil
+ }
+ members, err := g.cache.GetGroupMembersInfo(ctx, groupID, pageIDs)
+ if err != nil {
+ return 0, nil, err
+ }
+ return int64(len(groupMemberIDs)), members, nil
+}
+
+func (g *groupDatabase) SearchGroupMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (int64, []*model.GroupMember, error) {
+ return g.groupMemberDB.SearchMember(ctx, keyword, groupID, pagination)
+}
+
+func (g *groupDatabase) SearchGroupMemberByFields(ctx context.Context, groupID string, nickname, userID, phone string, pagination pagination.Pagination) (int64, []*model.GroupMember, error) {
+ return g.groupMemberDB.SearchMemberByFields(ctx, groupID, nickname, userID, phone, pagination)
+}
+
+func (g *groupDatabase) HandlerGroupRequest(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32, member *model.GroupMember) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupRequestDB.UpdateHandler(ctx, groupID, userID, handledMsg, handleResult); err != nil {
+ return err
+ }
+ if member != nil {
+ c := g.cache.CloneGroupCache()
+ if err := g.groupMemberDB.Create(ctx, []*model.GroupMember{member}); err != nil {
+ return err
+ }
+ c = c.DelGroupMembersHash(groupID).
+ DelGroupMembersInfo(groupID, member.UserID).
+ DelGroupMemberIDs(groupID).
+ DelGroupsMemberNum(groupID).
+ DelJoinedGroupID(member.UserID).
+ DelGroupRoleLevel(groupID, []int32{member.RoleLevel}).
+ DelMaxJoinGroupVersion(userID).
+ DelMaxGroupMemberVersion(groupID)
+ if err := c.ChainExecDel(ctx); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+}
+
+func (g *groupDatabase) DeleteGroupMember(ctx context.Context, groupID string, userIDs []string) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupMemberDB.Delete(ctx, groupID, userIDs); err != nil {
+ return err
+ }
+ c := g.cache.CloneGroupCache()
+ return c.DelGroupMembersHash(groupID).
+ DelGroupMemberIDs(groupID).
+ DelGroupsMemberNum(groupID).
+ DelJoinedGroupID(userIDs...).
+ DelGroupMembersInfo(groupID, userIDs...).
+ DelGroupAllRoleLevel(groupID).
+ DelMaxGroupMemberVersion(groupID).
+ DelMaxJoinGroupVersion(userIDs...).
+ ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) MapGroupMemberUserID(ctx context.Context, groupIDs []string) (map[string]*common.GroupSimpleUserID, error) {
+ return g.cache.GetGroupMemberHashMap(ctx, groupIDs)
+}
+
+func (g *groupDatabase) MapGroupMemberNum(ctx context.Context, groupIDs []string) (m map[string]uint32, err error) {
+ m = make(map[string]uint32)
+ for _, groupID := range groupIDs {
+ num, err := g.cache.GetGroupMemberNum(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ m[groupID] = uint32(num)
+ }
+ return m, nil
+}
+
+func (g *groupDatabase) TransferGroupOwner(ctx context.Context, groupID string, oldOwnerUserID, newOwnerUserID string, roleLevel int32) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupMemberDB.UpdateUserRoleLevels(ctx, groupID, oldOwnerUserID, roleLevel, newOwnerUserID, constant.GroupOwner); err != nil {
+ return err
+ }
+ c := g.cache.CloneGroupCache()
+ return c.DelGroupMembersInfo(groupID, oldOwnerUserID, newOwnerUserID).
+ DelGroupAllRoleLevel(groupID).
+ DelGroupMembersHash(groupID).
+ DelMaxGroupMemberVersion(groupID).
+ DelGroupMemberIDs(groupID).
+ ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) UpdateGroupMember(ctx context.Context, groupID string, userID string, data map[string]any) error {
+ if len(data) == 0 {
+ return nil
+ }
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupMemberDB.Update(ctx, groupID, userID, data); err != nil {
+ return err
+ }
+ c := g.cache.CloneGroupCache()
+ c = c.DelGroupMembersInfo(groupID, userID)
+ if g.groupMemberDB.IsUpdateRoleLevel(data) {
+ c = c.DelGroupAllRoleLevel(groupID).DelGroupMemberIDs(groupID)
+ }
+ c = c.DelMaxGroupMemberVersion(groupID)
+ return c.ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) UpdateGroupMembers(ctx context.Context, data []*common.BatchUpdateGroupMember) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.CloneGroupCache()
+ for _, item := range data {
+ if err := g.groupMemberDB.Update(ctx, item.GroupID, item.UserID, item.Map); err != nil {
+ return err
+ }
+ if g.groupMemberDB.IsUpdateRoleLevel(item.Map) {
+ c = c.DelGroupAllRoleLevel(item.GroupID).DelGroupMemberIDs(item.GroupID)
+ }
+ c = c.DelGroupMembersInfo(item.GroupID, item.UserID).DelMaxGroupMemberVersion(item.GroupID).DelGroupMembersHash(item.GroupID)
+ }
+ return c.ChainExecDel(ctx)
+ })
+}
+
+func (g *groupDatabase) CreateGroupRequest(ctx context.Context, requests []*model.GroupRequest) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ for _, request := range requests {
+ if err := g.groupRequestDB.Delete(ctx, request.GroupID, request.UserID); err != nil {
+ return err
+ }
+ }
+ return g.groupRequestDB.Create(ctx, requests)
+ })
+}
+
+func (g *groupDatabase) TakeGroupRequest(ctx context.Context, groupID string, userID string) (*model.GroupRequest, error) {
+ return g.groupRequestDB.Take(ctx, groupID, userID)
+}
+
+func (g *groupDatabase) PageGroupRequestUser(ctx context.Context, userID string, groupIDs []string, handleResults []int, pagination pagination.Pagination) (int64, []*model.GroupRequest, error) {
+ return g.groupRequestDB.Page(ctx, userID, groupIDs, handleResults, pagination)
+}
+
+func (g *groupDatabase) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ return g.groupDB.CountTotal(ctx, before)
+}
+
+func (g *groupDatabase) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ return g.groupDB.CountRangeEverydayTotal(ctx, start, end)
+}
+
+func (g *groupDatabase) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupRequest, error) {
+ return g.groupRequestDB.FindGroupRequests(ctx, groupID, userIDs)
+}
+
+func (g *groupDatabase) DeleteGroupMemberHash(ctx context.Context, groupIDs []string) error {
+ if len(groupIDs) == 0 {
+ return nil
+ }
+ c := g.cache.CloneGroupCache()
+ for _, groupID := range groupIDs {
+ c = c.DelGroupMembersHash(groupID)
+ }
+ return c.ChainExecDel(ctx)
+}
+
+func (g *groupDatabase) FindMemberIncrVersion(ctx context.Context, groupID string, version uint, limit int) (*model.VersionLog, error) {
+ return g.groupMemberDB.FindMemberIncrVersion(ctx, groupID, version, limit)
+}
+
+func (g *groupDatabase) BatchFindMemberIncrVersion(ctx context.Context, groupIDs []string, versions []uint64, limits []int) (map[string]*model.VersionLog, error) {
+ if len(groupIDs) == 0 {
+ return nil, errs.Wrap(errs.New("groupIDs is empty"))
+ }
+
+ // convert []uint64 to []uint
+ uintVersions := make([]uint, 0, len(versions))
+ for _, version := range versions {
+ uintVersions = append(uintVersions, uint(version))
+ }
+
+ versionLogs, err := g.groupMemberDB.BatchFindMemberIncrVersion(ctx, groupIDs, uintVersions, limits)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ groupMemberIncrVersionsMap := datautil.SliceToMap(versionLogs, func(e *model.VersionLog) string {
+ return e.DID
+ })
+
+ return groupMemberIncrVersionsMap, nil
+}
+
+func (g *groupDatabase) FindJoinIncrVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error) {
+ return g.groupMemberDB.FindJoinIncrVersion(ctx, userID, version, limit)
+}
+
+func (g *groupDatabase) FindMaxGroupMemberVersionCache(ctx context.Context, groupID string) (*model.VersionLog, error) {
+ return g.cache.FindMaxGroupMemberVersion(ctx, groupID)
+}
+
+func (g *groupDatabase) BatchFindMaxGroupMemberVersionCache(ctx context.Context, groupIDs []string) (map[string]*model.VersionLog, error) {
+ if len(groupIDs) == 0 {
+ return nil, errs.Wrap(errs.New("groupIDs is empty in BatchFindMaxGroupMemberVersionCache"))
+ }
+ versionLogs, err := g.cache.BatchFindMaxGroupMemberVersion(ctx, groupIDs)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ maxGroupMemberVersionsMap := datautil.SliceToMap(versionLogs, func(e *model.VersionLog) string {
+ return e.DID
+ })
+ return maxGroupMemberVersionsMap, nil
+}
+
+func (g *groupDatabase) FindMaxJoinGroupVersionCache(ctx context.Context, userID string) (*model.VersionLog, error) {
+ return g.cache.FindMaxJoinGroupVersion(ctx, userID)
+}
+
+func (g *groupDatabase) SearchJoinGroup(ctx context.Context, userID string, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error) {
+ groupIDs, err := g.cache.GetJoinedGroupIDs(ctx, userID)
+ if err != nil {
+ return 0, nil, err
+ }
+ return g.groupDB.SearchJoin(ctx, groupIDs, keyword, pagination)
+}
+
+func (g *groupDatabase) MemberGroupIncrVersion(ctx context.Context, groupID string, userIDs []string, state int32) error {
+ if err := g.groupMemberDB.MemberGroupIncrVersion(ctx, groupID, userIDs, state); err != nil {
+ return err
+ }
+ return g.cache.DelMaxGroupMemberVersion(groupID).ChainExecDel(ctx)
+}
+
+func (g *groupDatabase) GetGroupApplicationUnhandledCount(ctx context.Context, groupIDs []string, ts int64) (int64, error) {
+ return g.groupRequestDB.GetUnhandledCount(ctx, groupIDs, ts)
+}
diff --git a/pkg/common/storage/controller/msg.go b/pkg/common/storage/controller/msg.go
new file mode 100644
index 0000000..9e448d6
--- /dev/null
+++ b/pkg/common/storage/controller/msg.go
@@ -0,0 +1,865 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "strconv"
+ "strings"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mq"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
+ "google.golang.org/protobuf/proto"
+)
+
+const (
+ updateKeyMsg = iota
+ updateKeyRevoke
+)
+
+// CommonMsgDatabase defines the interface for message database operations.
+type CommonMsgDatabase interface {
+ // RevokeMsg revokes a message in a conversation.
+ RevokeMsg(ctx context.Context, conversationID string, seq int64, revoke *model.RevokeModel) error
+ // MarkSingleChatMsgsAsRead marks messages as read for a single chat by sequence numbers.
+ MarkSingleChatMsgsAsRead(ctx context.Context, userID string, conversationID string, seqs []int64) error
+ // GetMsgBySeqsRange retrieves messages from MongoDB by a range of sequence numbers.
+ GetMsgBySeqsRange(ctx context.Context, userID string, conversationID string, begin, end, num, userMaxSeq int64) (minSeq int64, maxSeq int64, seqMsg []*sdkws.MsgData, err error)
+ // GetMsgBySeqs retrieves messages for large groups from MongoDB by sequence numbers.
+ GetMsgBySeqs(ctx context.Context, userID string, conversationID string, seqs []int64) (minSeq int64, maxSeq int64, seqMsg []*sdkws.MsgData, err error)
+
+ GetMessagesBySeqWithBounds(ctx context.Context, userID string, conversationID string, seqs []int64, pullOrder sdkws.PullOrder) (bool, int64, []*sdkws.MsgData, error)
+ // DeleteUserMsgsBySeqs allows a user to delete messages based on sequence numbers.
+ DeleteUserMsgsBySeqs(ctx context.Context, userID string, conversationID string, seqs []int64) error
+ // DeleteMsgsPhysicalBySeqs physically deletes messages by emptying them based on sequence numbers.
+ DeleteMsgsPhysicalBySeqs(ctx context.Context, conversationID string, seqs []int64) error
+ //SetMaxSeq(ctx context.Context, conversationID string, maxSeq int64) error
+ GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error)
+ GetMaxSeq(ctx context.Context, conversationID string) (int64, error)
+ SetMinSeqs(ctx context.Context, seqs map[string]int64) error
+ SetMinSeq(ctx context.Context, conversationID string, seq int64) error
+
+ SetUserConversationsMinSeqs(ctx context.Context, userID string, seqs map[string]int64) (err error)
+ SetHasReadSeq(ctx context.Context, userID string, conversationID string, hasReadSeq int64) error
+ GetHasReadSeqs(ctx context.Context, userID string, conversationIDs []string) (map[string]int64, error)
+ GetHasReadSeq(ctx context.Context, userID string, conversationID string) (int64, error)
+ UserSetHasReadSeqs(ctx context.Context, userID string, hasReadSeqs map[string]int64) error
+
+ GetMaxSeqsWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error)
+ GetMaxSeqWithTime(ctx context.Context, conversationID string) (database.SeqTime, error)
+ GetCacheMaxSeqWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error)
+
+ SetSendMsgStatus(ctx context.Context, id string, status int32) error
+ GetSendMsgStatus(ctx context.Context, id string) (int32, error)
+ SearchMessage(ctx context.Context, req *pbmsg.SearchMessageReq) (total int64, msgData []*pbmsg.SearchedMsgData, err error)
+ FindOneByDocIDs(ctx context.Context, docIDs []string, seqs map[string]int64) (map[string]*sdkws.MsgData, error)
+
+ // MsgToMQ sends a message to the message queue.
+ MsgToMQ(ctx context.Context, key string, msg2mq *sdkws.MsgData) error
+
+ RangeUserSendCount(ctx context.Context, start time.Time, end time.Time, group bool, ase bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, users []*model.UserCount, dateCount map[string]int64, err error)
+ RangeGroupSendCount(ctx context.Context, start time.Time, end time.Time, ase bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, groups []*model.GroupCount, dateCount map[string]int64, err error)
+
+ GetRandBeforeMsg(ctx context.Context, ts int64, limit int) ([]*model.MsgDocModel, error)
+
+ SetUserConversationsMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ SetUserConversationsMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+
+ DeleteDoc(ctx context.Context, docID string) error
+
+ GetLastMessageSeqByTime(ctx context.Context, conversationID string, time int64) (int64, error)
+
+ GetLastMessage(ctx context.Context, conversationIDS []string, userID string) (map[string]*sdkws.MsgData, error)
+}
+
+func NewCommonMsgDatabase(msgDocModel database.Msg, msg cache.MsgCache, seqUser cache.SeqUser, seqConversation cache.SeqConversationCache, producer mq.Producer) CommonMsgDatabase {
+ return &commonMsgDatabase{
+ msgDocDatabase: msgDocModel,
+ msgCache: msg,
+ seqUser: seqUser,
+ seqConversation: seqConversation,
+ producer: producer,
+ }
+}
+
+type commonMsgDatabase struct {
+ msgDocDatabase database.Msg
+ msgTable model.MsgDocModel
+ msgCache cache.MsgCache
+ seqConversation cache.SeqConversationCache
+ seqUser cache.SeqUser
+ producer mq.Producer
+}
+
+func (db *commonMsgDatabase) MsgToMQ(ctx context.Context, key string, msg2mq *sdkws.MsgData) error {
+ data, err := proto.Marshal(msg2mq)
+ if err != nil {
+ return err
+ }
+ return db.producer.SendMessage(ctx, key, data)
+}
+
+func (db *commonMsgDatabase) batchInsertBlock(ctx context.Context, conversationID string, fields []any, key int8, firstSeq int64) error {
+ if len(fields) == 0 {
+ return nil
+ }
+ num := db.msgTable.GetSingleGocMsgNum()
+ // num = 100
+ for i, field := range fields { // Check the type of the field
+ var ok bool
+ switch key {
+ case updateKeyMsg:
+ var msg *model.MsgDataModel
+ msg, ok = field.(*model.MsgDataModel)
+ if msg != nil && msg.Seq != firstSeq+int64(i) {
+ return errs.ErrInternalServer.WrapMsg("seq is invalid")
+ }
+ case updateKeyRevoke:
+ _, ok = field.(*model.RevokeModel)
+ default:
+ return errs.ErrInternalServer.WrapMsg("key is invalid")
+ }
+ if !ok {
+ return errs.ErrInternalServer.WrapMsg("field type is invalid")
+ }
+ }
+ // updateMsgModel returns true if the target document already exists in the database (the update matched), false otherwise.
+ updateMsgModel := func(seq int64, i int) (bool, error) {
+ var (
+ res *mongo.UpdateResult
+ err error
+ )
+ docID := db.msgTable.GetDocID(conversationID, seq)
+ index := db.msgTable.GetMsgIndex(seq)
+ field := fields[i]
+ switch key {
+ case updateKeyMsg:
+ res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "msg", field)
+ case updateKeyRevoke:
+ res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "revoke", field)
+ }
+ if err != nil {
+ return false, err
+ }
+ return res.MatchedCount > 0, nil
+ }
+ tryUpdate := true
+ for i := 0; i < len(fields); i++ {
+ seq := firstSeq + int64(i) // Current sequence number
+ if tryUpdate {
+ matched, err := updateMsgModel(seq, i)
+ if err != nil {
+ return err
+ }
+ if matched {
+ continue // This message was already updated in place; skip it
+ }
+ }
+ doc := model.MsgDocModel{
+ DocID: db.msgTable.GetDocID(conversationID, seq),
+ Msg: make([]*model.MsgInfoModel, num),
+ }
+ var insert int // number of messages inserted into this document
+ for j := i; j < len(fields); j++ {
+ seq = firstSeq + int64(j)
+ if db.msgTable.GetDocID(conversationID, seq) != doc.DocID {
+ break
+ }
+ insert++
+ switch key {
+ case updateKeyMsg:
+ doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
+ Msg: fields[j].(*model.MsgDataModel),
+ }
+ case updateKeyRevoke:
+ doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
+ Revoke: fields[j].(*model.RevokeModel),
+ }
+ }
+ }
+ for k, msgInfo := range doc.Msg { // use a distinct index to avoid shadowing the outer loop variable i
+ if msgInfo == nil {
+ msgInfo = &model.MsgInfoModel{}
+ doc.Msg[k] = msgInfo
+ }
+ if msgInfo.DelList == nil {
+ doc.Msg[k].DelList = []string{}
+ }
+ }
+ if err := db.msgDocDatabase.Create(ctx, &doc); err != nil {
+ if mongo.IsDuplicateKeyError(err) {
+ i-- // already inserted
+ tryUpdate = true // next block use update mode
+ continue
+ }
+ return err
+ }
+ tryUpdate = false // The current block was inserted successfully; insert the next block directly
+ i += insert - 1 // Skip the messages that were just inserted
+ }
+
+ return nil
+}
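+
+// Illustration of the update-then-insert strategy above (hypothetical values,
+// assuming GetSingleGocMsgNum() returns 100): writing seqs 98..102 touches two
+// documents, "conversationID:0" (indexes 98, 99) and "conversationID:1"
+// (indexes 0, 1, 2). Each seq is first attempted as an in-place update; once an
+// update misses because the document does not yet exist, the remaining seqs for
+// that document are assembled and written with a single Create call, falling
+// back to update mode if Create reports a duplicate key.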
+
+func (db *commonMsgDatabase) RevokeMsg(ctx context.Context, conversationID string, seq int64, revoke *model.RevokeModel) error {
+ if err := db.batchInsertBlock(ctx, conversationID, []any{revoke}, updateKeyRevoke, seq); err != nil {
+ return err
+ }
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, []int64{seq})
+}
+
+func (db *commonMsgDatabase) MarkSingleChatMsgsAsRead(ctx context.Context, userID string, conversationID string, totalSeqs []int64) error {
+ for docID, seqs := range db.msgTable.GetDocIDSeqsMap(conversationID, totalSeqs) {
+ var indexes []int64
+ for _, seq := range seqs {
+ indexes = append(indexes, db.msgTable.GetMsgIndex(seq))
+ }
+ log.ZDebug(ctx, "MarkSingleChatMsgsAsRead", "userID", userID, "docID", docID, "indexes", indexes)
+ if err := db.msgDocDatabase.MarkSingleChatMsgsAsRead(ctx, userID, docID, indexes); err != nil {
+ log.ZError(ctx, "MarkSingleChatMsgsAsRead", err, "userID", userID, "docID", docID, "indexes", indexes)
+ return err
+ }
+ }
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, totalSeqs)
+}
+
+func (db *commonMsgDatabase) getMsgBySeqs(ctx context.Context, userID, conversationID string, seqs []int64) (totalMsgs []*sdkws.MsgData, err error) {
+ return db.GetMessageBySeqs(ctx, conversationID, userID, seqs)
+}
+
+func (db *commonMsgDatabase) handlerDBMsg(ctx context.Context, cache map[int64][]*model.MsgInfoModel, userID, conversationID string, msg *model.MsgInfoModel) {
+ if msg == nil || msg.Msg == nil {
+ return
+ }
+ if msg.IsRead {
+ msg.Msg.IsRead = true
+ }
+ if msg.Msg.ContentType != constant.Quote {
+ return
+ }
+ if msg.Msg.Content == "" {
+ return
+ }
+ type MsgData struct {
+ SendID string `json:"sendID"`
+ RecvID string `json:"recvID"`
+ GroupID string `json:"groupID"`
+ ClientMsgID string `json:"clientMsgID"`
+ ServerMsgID string `json:"serverMsgID"`
+ SenderPlatformID int32 `json:"senderPlatformID"`
+ SenderNickname string `json:"senderNickname"`
+ SenderFaceURL string `json:"senderFaceURL"`
+ SessionType int32 `json:"sessionType"`
+ MsgFrom int32 `json:"msgFrom"`
+ ContentType int32 `json:"contentType"`
+ Content string `json:"content"`
+ Seq int64 `json:"seq"`
+ SendTime int64 `json:"sendTime"`
+ CreateTime int64 `json:"createTime"`
+ Status int32 `json:"status"`
+ IsRead bool `json:"isRead"`
+ Options map[string]bool `json:"options,omitempty"`
+ OfflinePushInfo *sdkws.OfflinePushInfo `json:"offlinePushInfo"`
+ AtUserIDList []string `json:"atUserIDList"`
+ AttachedInfo string `json:"attachedInfo"`
+ Ex string `json:"ex"`
+ KeyVersion int32 `json:"keyVersion"`
+ DstUserIDs []string `json:"dstUserIDs"`
+ }
+ var quoteMsg struct {
+ Text string `json:"text,omitempty"`
+ QuoteMessage *MsgData `json:"quoteMessage,omitempty"`
+ MessageEntityList json.RawMessage `json:"messageEntityList,omitempty"`
+ }
+ if err := json.Unmarshal([]byte(msg.Msg.Content), &quoteMsg); err != nil {
+ log.ZError(ctx, "json.Unmarshal", err)
+ return
+ }
+ if quoteMsg.QuoteMessage == nil {
+ return
+ }
+ if quoteMsg.QuoteMessage.Content == "e30=" { // "e30=" is the base64 encoding of "{}"
+ quoteMsg.QuoteMessage.Content = "{}"
+ data, err := json.Marshal(&quoteMsg)
+ if err != nil {
+ return
+ }
+ msg.Msg.Content = string(data)
+ }
+ if quoteMsg.QuoteMessage.Seq <= 0 && quoteMsg.QuoteMessage.ContentType == constant.MsgRevokeNotification {
+ return
+ }
+ var msgs []*model.MsgInfoModel
+ if v, ok := cache[quoteMsg.QuoteMessage.Seq]; ok {
+ msgs = v
+ } else {
+ if quoteMsg.QuoteMessage.Seq > 0 {
+ ms, err := db.msgDocDatabase.GetMsgBySeqIndexIn1Doc(ctx, userID, db.msgTable.GetDocID(conversationID, quoteMsg.QuoteMessage.Seq), []int64{quoteMsg.QuoteMessage.Seq})
+ if err != nil {
+ log.ZError(ctx, "GetMsgBySeqIndexIn1Doc", err, "conversationID", conversationID, "seq", quoteMsg.QuoteMessage.Seq)
+ return
+ }
+ msgs = ms
+ cache[quoteMsg.QuoteMessage.Seq] = ms
+ }
+ }
+ if len(msgs) != 0 && msgs[0].Msg.ContentType != constant.MsgRevokeNotification {
+ return
+ }
+ quoteMsg.QuoteMessage.ContentType = constant.MsgRevokeNotification
+ if len(msgs) > 0 {
+ quoteMsg.QuoteMessage.Content = msgs[0].Msg.Content
+ } else {
+ quoteMsg.QuoteMessage.Content = "{}"
+ }
+ data, err := json.Marshal(&quoteMsg)
+ if err != nil {
+ log.ZError(ctx, "json.Marshal", err)
+ return
+ }
+ msg.Msg.Content = string(data)
+}
+
+func (db *commonMsgDatabase) findMsgInfoBySeq(ctx context.Context, userID, docID string, conversationID string, seqs []int64) (totalMsgs []*model.MsgInfoModel, err error) {
+ msgs, err := db.msgDocDatabase.GetMsgBySeqIndexIn1Doc(ctx, userID, docID, seqs)
+ if err != nil {
+ return nil, err
+ }
+ tempCache := make(map[int64][]*model.MsgInfoModel)
+ for _, msg := range msgs {
+ db.handlerDBMsg(ctx, tempCache, userID, conversationID, msg)
+ }
+ return msgs, nil
+}
+
+// GetMsgBySeqsRange retrieves messages within a sequence range. In the context of group chat, we have the following parameters:
+//
+// "maxSeq" of a conversation: It represents the maximum value of messages in the group conversation.
+// "minSeq" of a conversation (default: 1): It represents the minimum value of messages in the group conversation.
+//
+// For a user's perspective regarding the group conversation, we have the following parameters:
+//
+// "userMaxSeq": It represents the user's upper limit for message retrieval in the group. If not set (default: 0),
+// it means the upper limit is the same as the conversation's "maxSeq".
+// "userMinSeq": It represents the user's starting point for message retrieval in the group. If not set (default: 0),
+// it means the starting point is the same as the conversation's "minSeq".
+//
+// The scenarios for these parameters are as follows:
+//
+// For users who have been kicked out of the group, "userMaxSeq" can be set as the maximum value they had before
+// being kicked out. This limits their ability to retrieve messages up to a certain point.
+// For new users joining the group, if they don't need to receive old messages,
+// "userMinSeq" can be set as the same value as the conversation's "maxSeq" at the moment they join the group.
+// This ensures that their message retrieval starts from the point they joined.
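+//
+// For example (a hypothetical scenario, not taken from real data): with a
+// conversation whose minSeq is 1 and maxSeq is 500, a user who joined late
+// with userMinSeq = 400 and userMaxSeq = 0 requesting begin=1, end=500 has the
+// range clamped to begin=400, end=500 before any messages are fetched.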
+func (db *commonMsgDatabase) GetMsgBySeqsRange(ctx context.Context, userID string, conversationID string, begin, end, num, userMaxSeq int64) (int64, int64, []*sdkws.MsgData, error) {
+ userMinSeq, err := db.seqUser.GetUserMinSeq(ctx, conversationID, userID)
+ if err != nil && !errors.Is(err, redis.Nil) {
+ return 0, 0, nil, err
+ }
+ minSeq, err := db.seqConversation.GetMinSeq(ctx, conversationID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ if userMinSeq > minSeq {
+ minSeq = userMinSeq
+ }
+ // "minSeq" represents the startSeq value that the user can retrieve.
+ if minSeq > end {
+ log.ZWarn(ctx, "minSeq > end", errs.New("minSeq>end"), "minSeq", minSeq, "end", end)
+ return 0, 0, nil, nil
+ }
+ maxSeq, err := db.seqConversation.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ log.ZDebug(ctx, "GetMsgBySeqsRange", "userMinSeq", userMinSeq, "conMinSeq", minSeq, "conMaxSeq", maxSeq, "userMaxSeq", userMaxSeq)
+ if userMaxSeq != 0 {
+ if userMaxSeq < maxSeq {
+ maxSeq = userMaxSeq
+ }
+ }
+ // "maxSeq" represents the endSeq value that the user can retrieve.
+
+ if begin < minSeq {
+ begin = minSeq
+ }
+ if end > maxSeq {
+ end = maxSeq
+ }
+ // "begin" and "end" represent the actual startSeq and endSeq values that the user can retrieve.
+ if end < begin {
+ log.ZWarn(ctx, "seq end < begin after adjustment", errs.New("seq end < begin"), "begin", begin, "end", end, "minSeq", minSeq, "maxSeq", maxSeq)
+ return 0, 0, nil, nil
+ }
+	// Cap the number of messages fetched in one query to avoid excessive memory use (at most 5000).
+	// If the requested seq range is still too large, it is narrowed further below.
+	const maxFetchLimit = 5000
+	const maxRangeSize = 10000 // maximum allowed seq range size
+ if num <= 0 {
+		num = 100 // default fetch count
+ }
+ if num > maxFetchLimit {
+ num = maxFetchLimit
+ }
+ var seqs []int64
+ rangeSize := end - begin + 1
+	// If the seq range is too large, clamp it.
+	if rangeSize > maxRangeSize {
+		log.ZWarn(ctx, "seq range too large, limiting", nil, "conversationID", conversationID, "begin", begin, "end", end, "rangeSize", rangeSize, "maxRangeSize", maxRangeSize)
+		// keep only the last maxRangeSize seqs
+		begin = end - maxRangeSize + 1
+		rangeSize = maxRangeSize
+ }
+ if rangeSize <= num {
+		// The range fits within num: generate every seq in [begin, end].
+ for i := begin; i <= end; i++ {
+ seqs = append(seqs, i)
+ }
+ } else {
+		// The range exceeds num: take only the last num seqs.
+ for i := end - num + 1; i <= end; i++ {
+ seqs = append(seqs, i)
+ }
+ }
+ successMsgs, err := db.GetMessageBySeqs(ctx, conversationID, userID, seqs)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ return minSeq, maxSeq, successMsgs, nil
+}
+
+func (db *commonMsgDatabase) GetMsgBySeqs(ctx context.Context, userID string, conversationID string, seqs []int64) (int64, int64, []*sdkws.MsgData, error) {
+ userMinSeq, err := db.seqUser.GetUserMinSeq(ctx, conversationID, userID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ minSeq, err := db.seqConversation.GetMinSeq(ctx, conversationID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ maxSeq, err := db.seqConversation.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ userMaxSeq, err := db.seqUser.GetUserMaxSeq(ctx, conversationID, userID)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ if userMinSeq > minSeq {
+ minSeq = userMinSeq
+ }
+ if userMaxSeq > 0 && userMaxSeq < maxSeq {
+ maxSeq = userMaxSeq
+ }
+ newSeqs := make([]int64, 0, len(seqs))
+ for _, seq := range seqs {
+ if seq <= 0 {
+ continue
+ }
+ if seq >= minSeq && seq <= maxSeq {
+ newSeqs = append(newSeqs, seq)
+ }
+ }
+ successMsgs, err := db.GetMessageBySeqs(ctx, conversationID, userID, newSeqs)
+ if err != nil {
+ return 0, 0, nil, err
+ }
+ return minSeq, maxSeq, successMsgs, nil
+}
+
+func (db *commonMsgDatabase) GetMessagesBySeqWithBounds(ctx context.Context, userID string, conversationID string, seqs []int64, pullOrder sdkws.PullOrder) (bool, int64, []*sdkws.MsgData, error) {
+ var endSeq int64
+ var isEnd bool
+ userMinSeq, err := db.seqUser.GetUserMinSeq(ctx, conversationID, userID)
+ if err != nil {
+ return false, 0, nil, err
+ }
+ minSeq, err := db.seqConversation.GetMinSeq(ctx, conversationID)
+ if err != nil {
+ return false, 0, nil, err
+ }
+ maxSeq, err := db.seqConversation.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return false, 0, nil, err
+ }
+ userMaxSeq, err := db.seqUser.GetUserMaxSeq(ctx, conversationID, userID)
+ if err != nil {
+ return false, 0, nil, err
+ }
+ if userMinSeq > minSeq {
+ minSeq = userMinSeq
+ }
+ if userMaxSeq > 0 && userMaxSeq < maxSeq {
+ maxSeq = userMaxSeq
+ }
+ newSeqs := make([]int64, 0, len(seqs))
+ for _, seq := range seqs {
+ if seq <= 0 {
+ continue
+ }
+		// Seq is within the valid range and the message can be fetched.
+ if seq >= minSeq && seq <= maxSeq {
+ newSeqs = append(newSeqs, seq)
+ continue
+ }
+ // If the requested seq is smaller than the minimum seq and the pull order is descending (pulling older messages)
+ if seq < minSeq && pullOrder == sdkws.PullOrder_PullOrderDesc {
+ isEnd = true
+ endSeq = minSeq
+ }
+ // If the requested seq is larger than the maximum seq and the pull order is ascending (pulling newer messages)
+ if seq > maxSeq && pullOrder == sdkws.PullOrder_PullOrderAsc {
+ isEnd = true
+ endSeq = maxSeq
+ }
+ }
+ if len(newSeqs) == 0 {
+ return isEnd, endSeq, nil, nil
+ }
+ successMsgs, err := db.GetMessageBySeqs(ctx, conversationID, userID, newSeqs)
+ if err != nil {
+ return false, 0, nil, err
+ }
+ return isEnd, endSeq, successMsgs, nil
+}
+
+func (db *commonMsgDatabase) DeleteMsgsPhysicalBySeqs(ctx context.Context, conversationID string, allSeqs []int64) error {
+ for docID, seqs := range db.msgTable.GetDocIDSeqsMap(conversationID, allSeqs) {
+ var indexes []int
+ for _, seq := range seqs {
+ indexes = append(indexes, int(db.msgTable.GetMsgIndex(seq)))
+ }
+ if err := db.msgDocDatabase.DeleteMsgsInOneDocByIndex(ctx, docID, indexes); err != nil {
+ return err
+ }
+ }
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, allSeqs)
+}
+
+func (db *commonMsgDatabase) DeleteUserMsgsBySeqs(ctx context.Context, userID string, conversationID string, seqs []int64) error {
+ for docID, seqs := range db.msgTable.GetDocIDSeqsMap(conversationID, seqs) {
+ for _, seq := range seqs {
+ if _, err := db.msgDocDatabase.PushUnique(ctx, docID, db.msgTable.GetMsgIndex(seq), "del_list", []string{userID}); err != nil {
+ return err
+ }
+ }
+ }
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, seqs)
+}
+
+func (db *commonMsgDatabase) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
+ return db.seqConversation.GetMaxSeqs(ctx, conversationIDs)
+}
+
+func (db *commonMsgDatabase) GetMaxSeq(ctx context.Context, conversationID string) (int64, error) {
+ return db.seqConversation.GetMaxSeq(ctx, conversationID)
+}
+
+func (db *commonMsgDatabase) SetMinSeqs(ctx context.Context, seqs map[string]int64) error {
+ return db.seqConversation.SetMinSeqs(ctx, seqs)
+}
+
+func (db *commonMsgDatabase) SetUserConversationsMinSeqs(ctx context.Context, userID string, seqs map[string]int64) error {
+ return db.seqUser.SetUserMinSeqs(ctx, userID, seqs)
+}
+
+func (db *commonMsgDatabase) SetUserConversationsMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return db.seqUser.SetUserMaxSeq(ctx, conversationID, userID, seq)
+}
+
+func (db *commonMsgDatabase) SetUserConversationsMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return db.seqUser.SetUserMinSeq(ctx, conversationID, userID, seq)
+}
+
+func (db *commonMsgDatabase) UserSetHasReadSeqs(ctx context.Context, userID string, hasReadSeqs map[string]int64) error {
+ return db.seqUser.SetUserReadSeqs(ctx, userID, hasReadSeqs)
+}
+
+func (db *commonMsgDatabase) SetHasReadSeq(ctx context.Context, userID string, conversationID string, hasReadSeq int64) error {
+ return db.seqUser.SetUserReadSeq(ctx, conversationID, userID, hasReadSeq)
+}
+
+func (db *commonMsgDatabase) GetHasReadSeqs(ctx context.Context, userID string, conversationIDs []string) (map[string]int64, error) {
+ return db.seqUser.GetUserReadSeqs(ctx, userID, conversationIDs)
+}
+
+func (db *commonMsgDatabase) GetHasReadSeq(ctx context.Context, userID string, conversationID string) (int64, error) {
+ return db.seqUser.GetUserReadSeq(ctx, conversationID, userID)
+}
+
+func (db *commonMsgDatabase) SetSendMsgStatus(ctx context.Context, id string, status int32) error {
+ return db.msgCache.SetSendMsgStatus(ctx, id, status)
+}
+
+func (db *commonMsgDatabase) GetSendMsgStatus(ctx context.Context, id string) (int32, error) {
+ return db.msgCache.GetSendMsgStatus(ctx, id)
+}
+
+func (db *commonMsgDatabase) GetConversationMinMaxSeqInMongoAndCache(ctx context.Context, conversationID string) (minSeqMongo, maxSeqMongo, minSeqCache, maxSeqCache int64, err error) {
+ minSeqMongo, maxSeqMongo, err = db.GetMinMaxSeqMongo(ctx, conversationID)
+ if err != nil {
+ return
+ }
+ minSeqCache, err = db.seqConversation.GetMinSeq(ctx, conversationID)
+ if err != nil {
+ return
+ }
+ maxSeqCache, err = db.seqConversation.GetMaxSeq(ctx, conversationID)
+ if err != nil {
+ return
+ }
+ return
+}
+
+func (db *commonMsgDatabase) GetMongoMaxAndMinSeq(ctx context.Context, conversationID string) (minSeqMongo, maxSeqMongo int64, err error) {
+ return db.GetMinMaxSeqMongo(ctx, conversationID)
+}
+
+func (db *commonMsgDatabase) GetMinMaxSeqMongo(ctx context.Context, conversationID string) (minSeqMongo, maxSeqMongo int64, err error) {
+ oldestMsgMongo, err := db.msgDocDatabase.GetOldestMsg(ctx, conversationID)
+ if err != nil {
+ return
+ }
+ minSeqMongo = oldestMsgMongo.Msg.Seq
+ newestMsgMongo, err := db.msgDocDatabase.GetNewestMsg(ctx, conversationID)
+ if err != nil {
+ return
+ }
+ maxSeqMongo = newestMsgMongo.Msg.Seq
+ return
+}
+
+func (db *commonMsgDatabase) RangeUserSendCount(ctx context.Context, start time.Time, end time.Time, group bool, asc bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, users []*model.UserCount, dateCount map[string]int64, err error) {
+	return db.msgDocDatabase.RangeUserSendCount(ctx, start, end, group, asc, pageNumber, showNumber)
+}
+
+func (db *commonMsgDatabase) RangeGroupSendCount(ctx context.Context, start time.Time, end time.Time, asc bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, groups []*model.GroupCount, dateCount map[string]int64, err error) {
+	return db.msgDocDatabase.RangeGroupSendCount(ctx, start, end, asc, pageNumber, showNumber)
+}
+
+func (db *commonMsgDatabase) SearchMessage(ctx context.Context, req *pbmsg.SearchMessageReq) (total int64, msgData []*pbmsg.SearchedMsgData, err error) {
+ var totalMsgs []*pbmsg.SearchedMsgData
+ total, msgs, err := db.msgDocDatabase.SearchMessage(ctx, req)
+ if err != nil {
+ return 0, nil, err
+ }
+ for _, msg := range msgs {
+ if msg.IsRead {
+ msg.Msg.IsRead = true
+ }
+ searchedMsgData := &pbmsg.SearchedMsgData{MsgData: convert.MsgDB2Pb(msg.Msg)}
+
+ if msg.Revoke != nil {
+ searchedMsgData.IsRevoked = true
+ }
+
+ totalMsgs = append(totalMsgs, searchedMsgData)
+ }
+ return total, totalMsgs, nil
+}
+
+func (db *commonMsgDatabase) FindOneByDocIDs(ctx context.Context, conversationIDs []string, seqs map[string]int64) (map[string]*sdkws.MsgData, error) {
+ totalMsgs := make(map[string]*sdkws.MsgData)
+ for _, conversationID := range conversationIDs {
+ seq, ok := seqs[conversationID]
+ if !ok {
+ log.ZWarn(ctx, "seq not found for conversationID", errs.New("seq not found for conversation"), "conversationID", conversationID)
+ continue
+ }
+ docID := db.msgTable.GetDocID(conversationID, seq)
+ msgs, err := db.msgDocDatabase.FindOneByDocID(ctx, docID)
+ if err != nil {
+ log.ZWarn(ctx, "FindOneByDocID failed", err, "conversationID", conversationID, "docID", docID, "seq", seq)
+ continue
+ }
+
+ index := db.msgTable.GetMsgIndex(seq)
+ totalMsgs[conversationID] = convert.MsgDB2Pb(msgs.Msg[index].Msg)
+ }
+ return totalMsgs, nil
+}
+
+func (db *commonMsgDatabase) GetRandBeforeMsg(ctx context.Context, ts int64, limit int) ([]*model.MsgDocModel, error) {
+ return db.msgDocDatabase.GetRandBeforeMsg(ctx, ts, limit)
+}
+
+func (db *commonMsgDatabase) SetMinSeq(ctx context.Context, conversationID string, seq int64) error {
+ dbSeq, err := db.seqConversation.GetMinSeq(ctx, conversationID)
+ if err != nil {
+ if errors.Is(errs.Unwrap(err), redis.Nil) {
+ return nil
+ }
+ return err
+ }
+ if dbSeq >= seq {
+ return nil
+ }
+ return db.seqConversation.SetMinSeq(ctx, conversationID, seq)
+}
+
+func (db *commonMsgDatabase) GetCacheMaxSeqWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+ return db.seqConversation.GetCacheMaxSeqWithTime(ctx, conversationIDs)
+}
+
+func (db *commonMsgDatabase) GetMaxSeqWithTime(ctx context.Context, conversationID string) (database.SeqTime, error) {
+ return db.seqConversation.GetMaxSeqWithTime(ctx, conversationID)
+}
+
+func (db *commonMsgDatabase) GetMaxSeqsWithTime(ctx context.Context, conversationIDs []string) (map[string]database.SeqTime, error) {
+	// TODO: only the time stored in the Redis cache is used, not the actual message time.
+ return db.seqConversation.GetMaxSeqsWithTime(ctx, conversationIDs)
+}
+
+func (db *commonMsgDatabase) DeleteDoc(ctx context.Context, docID string) error {
+ index := strings.LastIndex(docID, ":")
+ if index <= 0 {
+ return errs.ErrInternalServer.WrapMsg("docID is invalid", "docID", docID)
+ }
+ docIndex, err := strconv.Atoi(docID[index+1:])
+ if err != nil {
+ return errs.WrapMsg(err, "strconv.Atoi", "docID", docID)
+ }
+ conversationID := docID[:index]
+ seqs := make([]int64, db.msgTable.GetSingleGocMsgNum())
+ minSeq := db.msgTable.GetMinSeq(docIndex)
+ for i := range seqs {
+ seqs[i] = minSeq + int64(i)
+ }
+ if err := db.msgDocDatabase.DeleteDoc(ctx, docID); err != nil {
+ return err
+ }
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, seqs)
+}
+
+func (db *commonMsgDatabase) GetLastMessageSeqByTime(ctx context.Context, conversationID string, time int64) (int64, error) {
+ return db.msgDocDatabase.GetLastMessageSeqByTime(ctx, conversationID, time)
+}
+
+func (db *commonMsgDatabase) handlerDeleteAndRevoked(ctx context.Context, userID string, msgs []*model.MsgInfoModel) {
+ for i := range msgs {
+ msg := msgs[i]
+ if msg == nil || msg.Msg == nil {
+ continue
+ }
+ msg.Msg.IsRead = msg.IsRead
+ if datautil.Contain(userID, msg.DelList...) {
+ msg.Msg.Content = ""
+ msg.Msg.Status = constant.MsgDeleted
+ }
+ if msg.Revoke == nil {
+ continue
+ }
+ msg.Msg.ContentType = constant.MsgRevokeNotification
+ revokeContent := sdkws.MessageRevokedContent{
+ RevokerID: msg.Revoke.UserID,
+ RevokerRole: msg.Revoke.Role,
+ ClientMsgID: msg.Msg.ClientMsgID,
+ RevokerNickname: msg.Revoke.Nickname,
+ RevokeTime: msg.Revoke.Time,
+ SourceMessageSendTime: msg.Msg.SendTime,
+ SourceMessageSendID: msg.Msg.SendID,
+ SourceMessageSenderNickname: msg.Msg.SenderNickname,
+ SessionType: msg.Msg.SessionType,
+ Seq: msg.Msg.Seq,
+ Ex: msg.Msg.Ex,
+ }
+ data, err := jsonutil.JsonMarshal(&revokeContent)
+ if err != nil {
+ log.ZWarn(ctx, "handlerDeleteAndRevoked JsonMarshal MessageRevokedContent", err, "msg", msg)
+ continue
+ }
+ elem := sdkws.NotificationElem{
+ Detail: string(data),
+ }
+ content, err := jsonutil.JsonMarshal(&elem)
+ if err != nil {
+ log.ZWarn(ctx, "handlerDeleteAndRevoked JsonMarshal NotificationElem", err, "msg", msg)
+ continue
+ }
+ msg.Msg.Content = string(content)
+ }
+}
+
+func (db *commonMsgDatabase) handlerQuote(ctx context.Context, userID, conversationID string, msgs []*model.MsgInfoModel) {
+ temp := make(map[int64][]*model.MsgInfoModel)
+ for i := range msgs {
+ db.handlerDBMsg(ctx, temp, userID, conversationID, msgs[i])
+ }
+}
+
+func (db *commonMsgDatabase) GetMessageBySeqs(ctx context.Context, conversationID string, userID string, seqs []int64) ([]*sdkws.MsgData, error) {
+ msgs, err := db.msgCache.GetMessageBySeqs(ctx, conversationID, seqs)
+ if err != nil {
+ return nil, err
+ }
+ db.handlerDeleteAndRevoked(ctx, userID, msgs)
+ db.handlerQuote(ctx, userID, conversationID, msgs)
+ seqMsgs := make(map[int64]*model.MsgInfoModel)
+ for i, msg := range msgs {
+ if msg.Msg == nil {
+ continue
+ }
+ seqMsgs[msg.Msg.Seq] = msgs[i]
+ }
+ res := make([]*sdkws.MsgData, 0, len(seqs))
+ for _, seq := range seqs {
+ if v, ok := seqMsgs[seq]; ok {
+ res = append(res, convert.MsgDB2Pb(v.Msg))
+ } else {
+ res = append(res, &sdkws.MsgData{Seq: seq, Status: constant.MsgStatusHasDeleted})
+ }
+ }
+ return res, nil
+}
+
+func (db *commonMsgDatabase) GetLastMessage(ctx context.Context, conversationIDs []string, userID string) (map[string]*sdkws.MsgData, error) {
+ res := make(map[string]*sdkws.MsgData)
+ for _, conversationID := range conversationIDs {
+ if _, ok := res[conversationID]; ok {
+ continue
+ }
+ msg, err := db.msgDocDatabase.GetLastMessage(ctx, conversationID)
+ if err != nil {
+ if errs.Unwrap(err) == mongo.ErrNoDocuments {
+ continue
+ }
+ return nil, err
+ }
+ tmp := []*model.MsgInfoModel{msg}
+ db.handlerDeleteAndRevoked(ctx, userID, tmp)
+ db.handlerQuote(ctx, userID, conversationID, tmp)
+ res[conversationID] = convert.MsgDB2Pb(msg.Msg)
+ }
+ return res, nil
+}
diff --git a/pkg/common/storage/controller/msg_transfer.go b/pkg/common/storage/controller/msg_transfer.go
new file mode 100644
index 0000000..00b25d1
--- /dev/null
+++ b/pkg/common/storage/controller/msg_transfer.go
@@ -0,0 +1,277 @@
+package controller
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/mq"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ pbmsg "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+type MsgTransferDatabase interface {
+ // BatchInsertChat2DB inserts a batch of messages into the database for a specific conversation.
+ BatchInsertChat2DB(ctx context.Context, conversationID string, msgs []*sdkws.MsgData, currentMaxSeq int64) error
+ // DeleteMessagesFromCache deletes message caches from Redis by sequence numbers.
+ DeleteMessagesFromCache(ctx context.Context, conversationID string, seqs []int64) error
+
+ // BatchInsertChat2Cache increments the sequence number and then batch inserts messages into the cache.
+ BatchInsertChat2Cache(ctx context.Context, conversationID string, msgs []*sdkws.MsgData) (seq int64, isNewConversation bool, userHasReadMap map[string]int64, err error)
+
+ SetHasReadSeqs(ctx context.Context, conversationID string, userSeqMap map[string]int64) error
+
+ SetHasReadSeqToDB(ctx context.Context, conversationID string, userSeqMap map[string]int64) error
+
+ // to mq
+ MsgToPushMQ(ctx context.Context, key, conversationID string, msg2mq *sdkws.MsgData) error
+ MsgToMongoMQ(ctx context.Context, key, conversationID string, msgs []*sdkws.MsgData, lastSeq int64) error
+}
+
+func NewMsgTransferDatabase(msgDocModel database.Msg, msg cache.MsgCache, seqUser cache.SeqUser, seqConversation cache.SeqConversationCache, mongoProducer, pushProducer mq.Producer) (MsgTransferDatabase, error) {
+ //conf, err := kafka.BuildProducerConfig(*kafkaConf.Build())
+ //if err != nil {
+ // return nil, err
+ //}
+ //producerToMongo, err := kafka.NewKafkaProducerV2(conf, kafkaConf.Address, kafkaConf.ToMongoTopic)
+ //if err != nil {
+ // return nil, err
+ //}
+ //producerToPush, err := kafka.NewKafkaProducerV2(conf, kafkaConf.Address, kafkaConf.ToPushTopic)
+ //if err != nil {
+ // return nil, err
+ //}
+ return &msgTransferDatabase{
+ msgDocDatabase: msgDocModel,
+ msgCache: msg,
+ seqUser: seqUser,
+ seqConversation: seqConversation,
+ producerToMongo: mongoProducer,
+ producerToPush: pushProducer,
+ }, nil
+}
+
+type msgTransferDatabase struct {
+ msgDocDatabase database.Msg
+ msgTable model.MsgDocModel
+ msgCache cache.MsgCache
+ seqConversation cache.SeqConversationCache
+ seqUser cache.SeqUser
+ producerToMongo mq.Producer
+ producerToPush mq.Producer
+}
+
+func (db *msgTransferDatabase) BatchInsertChat2DB(ctx context.Context, conversationID string, msgList []*sdkws.MsgData, currentMaxSeq int64) error {
+ if len(msgList) == 0 {
+ return errs.ErrArgs.WrapMsg("msgList is empty")
+ }
+ msgs := make([]any, len(msgList))
+ seqs := make([]int64, len(msgList))
+ for i, msg := range msgList {
+ if msg == nil {
+ continue
+ }
+ seqs[i] = msg.Seq
+ if msg.Status == constant.MsgStatusSending {
+ msg.Status = constant.MsgStatusSendSuccess
+ }
+ msgs[i] = convert.MsgPb2DB(msg)
+ }
+ if err := db.BatchInsertBlock(ctx, conversationID, msgs, updateKeyMsg, msgList[0].Seq); err != nil {
+ return err
+ }
+ //return db.msgCache.DelMessageBySeqs(ctx, conversationID, seqs)
+ return nil
+}
+
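+// BatchInsertBlock writes fields (messages or revoke records) into fixed-size
+// MongoDB documents. For each seq it first tries an in-place update of the
+// existing document; when the document does not exist yet, it falls back to
+// creating a whole block of messages at once, and a duplicate-key error on
+// create switches it back to update mode for the same seq.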
+func (db *msgTransferDatabase) BatchInsertBlock(ctx context.Context, conversationID string, fields []any, key int8, firstSeq int64) error {
+ if len(fields) == 0 {
+ return nil
+ }
+ num := db.msgTable.GetSingleGocMsgNum()
+ // num = 100
+ for i, field := range fields { // Check the type of the field
+ var ok bool
+ switch key {
+ case updateKeyMsg:
+ var msg *model.MsgDataModel
+ msg, ok = field.(*model.MsgDataModel)
+ if msg != nil && msg.Seq != firstSeq+int64(i) {
+ return errs.ErrInternalServer.WrapMsg("seq is invalid")
+ }
+ case updateKeyRevoke:
+ _, ok = field.(*model.RevokeModel)
+ default:
+ return errs.ErrInternalServer.WrapMsg("key is invalid")
+ }
+ if !ok {
+ return errs.ErrInternalServer.WrapMsg("field type is invalid")
+ }
+ }
+	// updateMsgModel returns true if the document already exists in the database (the update matched), false otherwise.
+ updateMsgModel := func(seq int64, i int) (bool, error) {
+ var (
+ res *mongo.UpdateResult
+ err error
+ )
+ docID := db.msgTable.GetDocID(conversationID, seq)
+ index := db.msgTable.GetMsgIndex(seq)
+ field := fields[i]
+ switch key {
+ case updateKeyMsg:
+ res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "msg", field)
+ case updateKeyRevoke:
+ res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "revoke", field)
+ }
+ if err != nil {
+ return false, err
+ }
+ return res.MatchedCount > 0, nil
+ }
+ tryUpdate := true
+ for i := 0; i < len(fields); i++ {
+ seq := firstSeq + int64(i) // Current sequence number
+ if tryUpdate {
+ matched, err := updateMsgModel(seq, i)
+ if err != nil {
+ return err
+ }
+ if matched {
+				continue // This seq was updated in place; move on to the next one.
+ }
+ }
+ doc := model.MsgDocModel{
+ DocID: db.msgTable.GetDocID(conversationID, seq),
+ Msg: make([]*model.MsgInfoModel, num),
+ }
+ var insert int // Inserted data number
+ for j := i; j < len(fields); j++ {
+ seq = firstSeq + int64(j)
+ if db.msgTable.GetDocID(conversationID, seq) != doc.DocID {
+ break
+ }
+ insert++
+ switch key {
+ case updateKeyMsg:
+ doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
+ Msg: fields[j].(*model.MsgDataModel),
+ }
+ case updateKeyRevoke:
+ doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
+ Revoke: fields[j].(*model.RevokeModel),
+ }
+ }
+ }
+ for i, msgInfo := range doc.Msg {
+ if msgInfo == nil {
+ msgInfo = &model.MsgInfoModel{}
+ doc.Msg[i] = msgInfo
+ }
+ if msgInfo.DelList == nil {
+ doc.Msg[i].DelList = []string{}
+ }
+ }
+ if err := db.msgDocDatabase.Create(ctx, &doc); err != nil {
+ if mongo.IsDuplicateKeyError(err) {
+				i--              // the document already exists; step back
+				tryUpdate = true // retry this seq in update mode
+ continue
+ }
+ return err
+ }
+		tryUpdate = false // Insert succeeded; try inserting the next block directly.
+ i += insert - 1 // Skip the inserted data
+ }
+ return nil
+}
+
+func (db *msgTransferDatabase) DeleteMessagesFromCache(ctx context.Context, conversationID string, seqs []int64) error {
+ return db.msgCache.DelMessageBySeqs(ctx, conversationID, seqs)
+}
+
+func (db *msgTransferDatabase) BatchInsertChat2Cache(ctx context.Context, conversationID string, msgs []*sdkws.MsgData) (seq int64, isNew bool, userHasReadMap map[string]int64, err error) {
+ lenList := len(msgs)
+ if int64(lenList) > db.msgTable.GetSingleGocMsgNum() {
+ return 0, false, nil, errs.New("message count exceeds limit", "limit", db.msgTable.GetSingleGocMsgNum()).Wrap()
+ }
+ if lenList < 1 {
+ return 0, false, nil, errs.New("no messages to insert", "minCount", 1).Wrap()
+ }
+ currentMaxSeq, err := db.seqConversation.Malloc(ctx, conversationID, int64(len(msgs)))
+ if err != nil {
+ log.ZError(ctx, "storage.seq.Malloc", err)
+ return 0, false, nil, err
+ }
+ isNew = currentMaxSeq == 0
+ lastMaxSeq := currentMaxSeq
+ userSeqMap := make(map[string]int64)
+ seqs := make([]int64, 0, lenList)
+ for _, m := range msgs {
+ currentMaxSeq++
+ m.Seq = currentMaxSeq
+ userSeqMap[m.SendID] = m.Seq
+ seqs = append(seqs, m.Seq)
+ }
+ msgToDB := func(msg *sdkws.MsgData) *model.MsgInfoModel {
+ return &model.MsgInfoModel{
+ Msg: convert.MsgPb2DB(msg),
+ }
+ }
+ if err := db.msgCache.SetMessageBySeqs(ctx, conversationID, datautil.Slice(msgs, msgToDB)); err != nil {
+ return 0, false, nil, err
+ }
+ return lastMaxSeq, isNew, userSeqMap, nil
+}
+
+func (db *msgTransferDatabase) SetHasReadSeqs(ctx context.Context, conversationID string, userSeqMap map[string]int64) error {
+ for userID, seq := range userSeqMap {
+ if err := db.seqUser.SetUserReadSeq(ctx, conversationID, userID, seq); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (db *msgTransferDatabase) SetHasReadSeqToDB(ctx context.Context, conversationID string, userSeqMap map[string]int64) error {
+ for userID, seq := range userSeqMap {
+ if err := db.seqUser.SetUserReadSeqToDB(ctx, conversationID, userID, seq); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (db *msgTransferDatabase) MsgToPushMQ(ctx context.Context, key, conversationID string, msg2mq *sdkws.MsgData) error {
+ data, err := proto.Marshal(&pbmsg.PushMsgDataToMQ{MsgData: msg2mq, ConversationID: conversationID})
+ if err != nil {
+ return err
+ }
+ if err := db.producerToPush.SendMessage(ctx, key, data); err != nil {
+ log.ZError(ctx, "MsgToPushMQ", err, "key", key, "conversationID", conversationID)
+ return err
+ }
+ return nil
+}
+
+func (db *msgTransferDatabase) MsgToMongoMQ(ctx context.Context, key, conversationID string, messages []*sdkws.MsgData, lastSeq int64) error {
+ if len(messages) > 0 {
+ data, err := proto.Marshal(&pbmsg.MsgDataToMongoByMQ{LastSeq: lastSeq, ConversationID: conversationID, MsgData: messages})
+ if err != nil {
+ return err
+ }
+ if err := db.producerToMongo.SendMessage(ctx, key, data); err != nil {
+ log.ZError(ctx, "MsgToMongoMQ", err, "key", key, "conversationID", conversationID, "lastSeq", lastSeq)
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/common/storage/controller/push.go b/pkg/common/storage/controller/push.go
new file mode 100644
index 0000000..a761233
--- /dev/null
+++ b/pkg/common/storage/controller/push.go
@@ -0,0 +1,58 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "git.imall.cloud/openim/protocol/push"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mq"
+ "google.golang.org/protobuf/proto"
+)
+
+type PushDatabase interface {
+ DelFcmToken(ctx context.Context, userID string, platformID int) error
+ MsgToOfflinePushMQ(ctx context.Context, key string, userIDs []string, msg2mq *sdkws.MsgData) error
+}
+
+type pushDataBase struct {
+ cache cache.ThirdCache
+ producerToOfflinePush mq.Producer
+}
+
+func NewPushDatabase(cache cache.ThirdCache, offlinePushProducer mq.Producer) PushDatabase {
+ return &pushDataBase{
+ cache: cache,
+ producerToOfflinePush: offlinePushProducer,
+ }
+}
+
+func (p *pushDataBase) DelFcmToken(ctx context.Context, userID string, platformID int) error {
+ return p.cache.DelFcmToken(ctx, userID, platformID)
+}
+
+func (p *pushDataBase) MsgToOfflinePushMQ(ctx context.Context, key string, userIDs []string, msg2mq *sdkws.MsgData) error {
+ data, err := proto.Marshal(&push.PushMsgReq{MsgData: msg2mq, UserIDs: userIDs})
+ if err != nil {
+ return err
+ }
+ if err := p.producerToOfflinePush.SendMessage(ctx, key, data); err != nil {
+		log.ZError(ctx, "failed to send message to offlinePush topic", err, "key", key, "userIDs", userIDs, "msg", msg2mq.String())
+ }
+ return err
+}
diff --git a/pkg/common/storage/controller/s3.go b/pkg/common/storage/controller/s3.go
new file mode 100644
index 0000000..f3e5353
--- /dev/null
+++ b/pkg/common/storage/controller/s3.go
@@ -0,0 +1,136 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "path/filepath"
+ "time"
+
+ redisCache "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/cont"
+ "github.com/redis/go-redis/v9"
+)
+
+type S3Database interface {
+ PartLimit() (*s3.PartLimit, error)
+ PartSize(ctx context.Context, size int64) (int64, error)
+ AuthSign(ctx context.Context, uploadID string, partNumbers []int) (*s3.AuthSignResult, error)
+ InitiateMultipartUpload(ctx context.Context, hash string, size int64, expire time.Duration, maxParts int, contentType string) (*cont.InitiateUploadResult, error)
+ CompleteMultipartUpload(ctx context.Context, uploadID string, parts []string) (*cont.UploadResult, error)
+ AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (time.Time, string, error)
+ SetObject(ctx context.Context, info *model.Object) error
+ StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error)
+ FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error)
+ FindExpirationObject(ctx context.Context, engine string, expiration time.Time, needDelType []string, count int64) ([]*model.Object, error)
+ DeleteSpecifiedData(ctx context.Context, engine string, name []string) error
+ DelS3Key(ctx context.Context, engine string, keys ...string) error
+ GetKeyCount(ctx context.Context, engine string, key string) (int64, error)
+}
+
+func NewS3Database(rdb redis.UniversalClient, s3 s3.Interface, obj database.ObjectInfo) S3Database {
+ return &s3Database{
+ s3: cont.New(redisCache.NewS3Cache(rdb, s3), s3),
+ cache: redisCache.NewObjectCacheRedis(rdb, obj),
+ s3cache: redisCache.NewS3Cache(rdb, s3),
+ db: obj,
+ }
+}
+
+type s3Database struct {
+ s3 *cont.Controller
+ cache cache.ObjectCache
+ s3cache cont.S3Cache
+ db database.ObjectInfo
+}
+
+func (s *s3Database) PartSize(ctx context.Context, size int64) (int64, error) {
+ return s.s3.PartSize(ctx, size)
+}
+
+func (s *s3Database) PartLimit() (*s3.PartLimit, error) {
+ return s.s3.PartLimit()
+}
+
+func (s *s3Database) AuthSign(ctx context.Context, uploadID string, partNumbers []int) (*s3.AuthSignResult, error) {
+ return s.s3.AuthSign(ctx, uploadID, partNumbers)
+}
+
+func (s *s3Database) InitiateMultipartUpload(ctx context.Context, hash string, size int64, expire time.Duration, maxParts int, contentType string) (*cont.InitiateUploadResult, error) {
+ return s.s3.InitiateUploadContentType(ctx, hash, size, expire, maxParts, contentType)
+}
+
+func (s *s3Database) CompleteMultipartUpload(ctx context.Context, uploadID string, parts []string) (*cont.UploadResult, error) {
+ return s.s3.CompleteUpload(ctx, uploadID, parts)
+}
+
+func (s *s3Database) SetObject(ctx context.Context, info *model.Object) error {
+ info.Engine = s.s3.Engine()
+ if err := s.db.SetObject(ctx, info); err != nil {
+ return err
+ }
+ return s.cache.DelObjectName(info.Engine, info.Name).ChainExecDel(ctx)
+}
+
+func (s *s3Database) AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (time.Time, string, error) {
+ obj, err := s.cache.GetName(ctx, s.s3.Engine(), name)
+ if err != nil {
+ return time.Time{}, "", err
+ }
+ if opt == nil {
+ opt = &s3.AccessURLOption{}
+ }
+ if opt.ContentType == "" {
+ opt.ContentType = obj.ContentType
+ }
+ if opt.Filename == "" {
+ opt.Filename = filepath.Base(obj.Name)
+ }
+ expireTime := time.Now().Add(expire)
+ rawURL, err := s.s3.AccessURL(ctx, obj.Key, expire, opt)
+ if err != nil {
+ return time.Time{}, "", err
+ }
+ return expireTime, rawURL, nil
+}
+
+func (s *s3Database) StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error) {
+ return s.s3.StatObject(ctx, name)
+}
+
+func (s *s3Database) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ return s.s3.FormData(ctx, name, size, contentType, duration)
+}
+
+func (s *s3Database) FindExpirationObject(ctx context.Context, engine string, expiration time.Time, needDelType []string, count int64) ([]*model.Object, error) {
+ return s.db.FindExpirationObject(ctx, engine, expiration, needDelType, count)
+}
+
+func (s *s3Database) GetKeyCount(ctx context.Context, engine string, key string) (int64, error) {
+ return s.db.GetKeyCount(ctx, engine, key)
+}
+
+func (s *s3Database) DeleteSpecifiedData(ctx context.Context, engine string, name []string) error {
+ return s.db.Delete(ctx, engine, name)
+}
+
+func (s *s3Database) DelS3Key(ctx context.Context, engine string, keys ...string) error {
+ return s.s3cache.DelS3Key(ctx, engine, keys...)
+}
diff --git a/pkg/common/storage/controller/third.go b/pkg/common/storage/controller/third.go
new file mode 100644
index 0000000..7b89960
--- /dev/null
+++ b/pkg/common/storage/controller/third.go
@@ -0,0 +1,73 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type ThirdDatabase interface {
+ FcmUpdateToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) error
+ SetAppBadge(ctx context.Context, userID string, value int) error
+	// Log operations for debugging
+ UploadLogs(ctx context.Context, logs []*model.Log) error
+ DeleteLogs(ctx context.Context, logID []string, userID string) error
+ SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*model.Log, error)
+ GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*model.Log, error)
+}
+
+type thirdDatabase struct {
+ cache cache.ThirdCache
+ logdb database.Log
+}
+
+// DeleteLogs implements ThirdDatabase.
+func (t *thirdDatabase) DeleteLogs(ctx context.Context, logID []string, userID string) error {
+ return t.logdb.Delete(ctx, logID, userID)
+}
+
+// GetLogs implements ThirdDatabase.
+func (t *thirdDatabase) GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*model.Log, error) {
+ return t.logdb.Get(ctx, LogIDs, userID)
+}
+
+// SearchLogs implements ThirdDatabase.
+func (t *thirdDatabase) SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*model.Log, error) {
+ return t.logdb.Search(ctx, keyword, start, end, pagination)
+}
+
+// UploadLogs implements ThirdDatabase.
+func (t *thirdDatabase) UploadLogs(ctx context.Context, logs []*model.Log) error {
+ return t.logdb.Create(ctx, logs)
+}
+
+func NewThirdDatabase(cache cache.ThirdCache, logdb database.Log) ThirdDatabase {
+ return &thirdDatabase{cache: cache, logdb: logdb}
+}
+
+func (t *thirdDatabase) FcmUpdateToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) error {
+ return t.cache.SetFcmToken(ctx, account, platformID, fcmToken, expireTime)
+}
+
+func (t *thirdDatabase) SetAppBadge(ctx context.Context, userID string, value int) error {
+ return t.cache.SetUserBadgeUnreadCountSum(ctx, userID, value)
+}
diff --git a/pkg/common/storage/controller/user.go b/pkg/common/storage/controller/user.go
new file mode 100644
index 0000000..09de8b5
--- /dev/null
+++ b/pkg/common/storage/controller/user.go
@@ -0,0 +1,263 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package controller
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/db/tx"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache"
+)
+
+type UserDatabase interface {
+	// FindWithError gets the information of the specified users; returns an error if any userID is not found
+	FindWithError(ctx context.Context, userIDs []string) (users []*model.User, err error)
+	// Find gets the information of the specified users; userIDs that are not found do not cause an error
+	Find(ctx context.Context, userIDs []string) (users []*model.User, err error)
+	// FindByNickname finds user info by nickname
+	FindByNickname(ctx context.Context, nickname string) (users []*model.User, err error)
+	// FindNotification finds system accounts by notification level
+	FindNotification(ctx context.Context, level int64) (users []*model.User, err error)
+	// FindSystemAccount finds all system accounts
+	FindSystemAccount(ctx context.Context) (users []*model.User, err error)
+	// Create inserts multiple users; the caller guarantees that the userIDs are unique and do not already exist in storage
+	Create(ctx context.Context, users []*model.User) (err error)
+	// UpdateByMap updates fields by map (zero values allowed); the caller guarantees that the userID exists
+	UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error)
+	// PageFindUser finds users within an app-manager level range, with pagination
+	PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+	// PageFindUserWithKeyword finds users within a level range, filtered by userID or nickname keyword, with pagination
+	PageFindUserWithKeyword(ctx context.Context, level1 int64, level2 int64, userID string, nickName string, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+	// Page pages through all users; if none are found, no error is returned
+ Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+ // IsExist true as long as one exists
+ IsExist(ctx context.Context, userIDs []string) (exist bool, err error)
+ // GetAllUserID Get all user IDs
+ GetAllUserID(ctx context.Context, pagination pagination.Pagination) (int64, []string, error)
+ // Get user by userID
+ GetUserByID(ctx context.Context, userID string) (user *model.User, err error)
+ // SearchUsersByFields searches users by multiple fields: account (userID), phone, nickname
+ // Returns userIDs that match the search criteria
+ SearchUsersByFields(ctx context.Context, account, phone, nickname string) (userIDs []string, err error)
+	// InitOnce inserts users that do not yet exist in storage; existing users are left unchanged, except that a changed nickname is updated
+ InitOnce(ctx context.Context, users []*model.User) (err error)
+ // CountTotal Get the total number of users
+ CountTotal(ctx context.Context, before *time.Time) (int64, error)
+ // CountRangeEverydayTotal Get the user increment in the range
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+
+ SortQuery(ctx context.Context, userIDName map[string]string, asc bool) ([]*model.User, error)
+
+ // CRUD user command
+ AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error
+ DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error
+ UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error
+ GetUserCommands(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error)
+ GetAllUserCommands(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error)
+}
+
+type userDatabase struct {
+ tx tx.Tx
+ userDB database.User
+ cache cache.UserCache
+}
+
+func NewUserDatabase(userDB database.User, cache cache.UserCache, tx tx.Tx) UserDatabase {
+ return &userDatabase{userDB: userDB, cache: cache, tx: tx}
+}
+
+func (u *userDatabase) InitOnce(ctx context.Context, users []*model.User) error {
+ // Extract user IDs from the given user models.
+ userIDs := datautil.Slice(users, func(e *model.User) string {
+ return e.UserID
+ })
+
+ // Find existing users in the database.
+ existingUsers, err := u.userDB.Find(ctx, userIDs)
+ if err != nil {
+ return err
+ }
+
+	// Determine which users are missing and which need a nickname update.
+ var (
+ missing, update []*model.User
+ )
+ existMap := datautil.SliceToMap(existingUsers, func(e *model.User) string {
+ return e.UserID
+ })
+ orgMap := datautil.SliceToMap(users, func(e *model.User) string { return e.UserID })
+ for k, u1 := range orgMap {
+ if u2, ok := existMap[k]; !ok {
+ missing = append(missing, u1)
+ } else if u1.Nickname != u2.Nickname {
+ update = append(update, u1)
+ }
+ }
+
+ // Create records for missing users.
+ if len(missing) > 0 {
+ if err := u.userDB.Create(ctx, missing); err != nil {
+ return err
+ }
+ }
+ if len(update) > 0 {
+ for i := range update {
+ if err := u.userDB.UpdateByMap(ctx, update[i].UserID, map[string]any{"nickname": update[i].Nickname}); err != nil {
+ return err
+ }
+ }
+ }
+
+ return nil
+}
+
+// FindWithError gets the information of the specified users and returns an error if any userID is not found.
+func (u *userDatabase) FindWithError(ctx context.Context, userIDs []string) (users []*model.User, err error) {
+ userIDs = datautil.Distinct(userIDs)
+
+ // TODO: Add logic to identify which user IDs are distinct and which user IDs were not found.
+
+ users, err = u.cache.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return
+ }
+
+ if len(users) != len(userIDs) {
+ err = errs.ErrRecordNotFound.WrapMsg("userID not found")
+ }
+ return
+}
+
+// Find gets the information of the specified users; userIDs that are not found do not cause an error.
+func (u *userDatabase) Find(ctx context.Context, userIDs []string) (users []*model.User, err error) {
+ return u.cache.GetUsersInfo(ctx, userIDs)
+}
+
+func (u *userDatabase) FindByNickname(ctx context.Context, nickname string) (users []*model.User, err error) {
+ return u.userDB.TakeByNickname(ctx, nickname)
+}
+
+func (u *userDatabase) FindNotification(ctx context.Context, level int64) (users []*model.User, err error) {
+ return u.userDB.TakeNotification(ctx, level)
+}
+
+func (u *userDatabase) FindSystemAccount(ctx context.Context) (users []*model.User, err error) {
+ return u.userDB.TakeGTEAppManagerLevel(ctx, constant.AppNotificationAdmin)
+}
+
+// Create inserts multiple users; the caller guarantees that the userIDs are unique and do not already exist in storage.
+func (u *userDatabase) Create(ctx context.Context, users []*model.User) (err error) {
+ return u.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err = u.userDB.Create(ctx, users); err != nil {
+ return err
+ }
+ return u.cache.DelUsersInfo(datautil.Slice(users, func(e *model.User) string {
+ return e.UserID
+ })...).ChainExecDel(ctx)
+ })
+}
+
+// UpdateByMap updates fields by map (zero values allowed); the caller guarantees that userID exists.
+func (u *userDatabase) UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error) {
+ return u.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err := u.userDB.UpdateByMap(ctx, userID, args); err != nil {
+ return err
+ }
+ return u.cache.DelUsersInfo(userID).ChainExecDel(ctx)
+ })
+}
+
+// Page pages through all users; returns no error if none are found.
+func (u *userDatabase) Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*model.User, err error) {
+ return u.userDB.Page(ctx, pagination)
+}
+
+func (u *userDatabase) PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*model.User, err error) {
+ return u.userDB.PageFindUser(ctx, level1, level2, pagination)
+}
+
+func (u *userDatabase) PageFindUserWithKeyword(ctx context.Context, level1 int64, level2 int64, userID, nickName string, pagination pagination.Pagination) (count int64, users []*model.User, err error) {
+ return u.userDB.PageFindUserWithKeyword(ctx, level1, level2, userID, nickName, pagination)
+}
+
+// IsExist reports whether any of the given userIDs exist in storage; true if at least one exists.
+func (u *userDatabase) IsExist(ctx context.Context, userIDs []string) (exist bool, err error) {
+	users, err := u.userDB.Find(ctx, userIDs)
+	if err != nil {
+		return false, err
+	}
+	return len(users) > 0, nil
+}
+
+// GetAllUserID Get all user IDs.
+func (u *userDatabase) GetAllUserID(ctx context.Context, pagination pagination.Pagination) (total int64, userIDs []string, err error) {
+ return u.userDB.GetAllUserID(ctx, pagination)
+}
+
+func (u *userDatabase) GetUserByID(ctx context.Context, userID string) (user *model.User, err error) {
+ return u.cache.GetUserInfo(ctx, userID)
+}
+
+func (u *userDatabase) SearchUsersByFields(ctx context.Context, account, phone, nickname string) (userIDs []string, err error) {
+ return u.userDB.SearchUsersByFields(ctx, account, phone, nickname)
+}
+
+// CountTotal Get the total number of users.
+func (u *userDatabase) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ return u.userDB.CountTotal(ctx, before)
+}
+
+// CountRangeEverydayTotal Get the user increment in the range.
+func (u *userDatabase) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ return u.userDB.CountRangeEverydayTotal(ctx, start, end)
+}
+
+func (u *userDatabase) SortQuery(ctx context.Context, userIDName map[string]string, asc bool) ([]*model.User, error) {
+ return u.userDB.SortQuery(ctx, userIDName, asc)
+}
+
+func (u *userDatabase) AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error {
+ return u.userDB.AddUserCommand(ctx, userID, Type, UUID, value, ex)
+}
+
+func (u *userDatabase) DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error {
+ return u.userDB.DeleteUserCommand(ctx, userID, Type, UUID)
+}
+
+func (u *userDatabase) UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error {
+ return u.userDB.UpdateUserCommand(ctx, userID, Type, UUID, val)
+}
+
+func (u *userDatabase) GetUserCommands(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error) {
+	return u.userDB.GetUserCommand(ctx, userID, Type)
+}
+
+func (u *userDatabase) GetAllUserCommands(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error) {
+	return u.userDB.GetAllUserCommand(ctx, userID)
+}
diff --git a/pkg/common/storage/database/black.go b/pkg/common/storage/database/black.go
new file mode 100644
index 0000000..e0f5fb5
--- /dev/null
+++ b/pkg/common/storage/database/black.go
@@ -0,0 +1,114 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type Black interface {
+ Create(ctx context.Context, blacks []*model.Black) (err error)
+ Delete(ctx context.Context, blacks []*model.Black) (err error)
+ Find(ctx context.Context, blacks []*model.Black) (blackList []*model.Black, err error)
+ Take(ctx context.Context, ownerUserID, blockUserID string) (black *model.Black, err error)
+ FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error)
+ FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error)
+ FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error)
+}
+
+var (
+ _ Black = (*mgoImpl)(nil)
+ _ Black = (*redisImpl)(nil)
+)
+
+type mgoImpl struct{}
+
+func (m *mgoImpl) Create(ctx context.Context, blacks []*model.Black) (err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) Delete(ctx context.Context, blacks []*model.Black) (err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) Find(ctx context.Context, blacks []*model.Black) (blackList []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) Take(ctx context.Context, ownerUserID, blockUserID string) (black *model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (m *mgoImpl) FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+type redisImpl struct{}
+
+func (r *redisImpl) Create(ctx context.Context, blacks []*model.Black) (err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) Delete(ctx context.Context, blacks []*model.Black) (err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) Find(ctx context.Context, blacks []*model.Black) (blackList []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) Take(ctx context.Context, ownerUserID, blockUserID string) (black *model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (r *redisImpl) FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error) {
+ //TODO implement me
+ panic("implement me")
+}
diff --git a/pkg/common/storage/database/cache.go b/pkg/common/storage/database/cache.go
new file mode 100644
index 0000000..c57aea8
--- /dev/null
+++ b/pkg/common/storage/database/cache.go
@@ -0,0 +1,16 @@
+package database
+
+import (
+ "context"
+ "time"
+)
+
+type Cache interface {
+ Get(ctx context.Context, key []string) (map[string]string, error)
+ Prefix(ctx context.Context, prefix string) (map[string]string, error)
+ Set(ctx context.Context, key string, value string, expireAt time.Duration) error
+ Incr(ctx context.Context, key string, value int) (int, error)
+ Del(ctx context.Context, key []string) error
+ Lock(ctx context.Context, key string, duration time.Duration) (string, error)
+ Unlock(ctx context.Context, key string, value string) error
+}
diff --git a/pkg/common/storage/database/client_config.go b/pkg/common/storage/database/client_config.go
new file mode 100644
index 0000000..61bf8ab
--- /dev/null
+++ b/pkg/common/storage/database/client_config.go
@@ -0,0 +1,15 @@
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type ClientConfig interface {
+ Set(ctx context.Context, userID string, config map[string]string) error
+ Get(ctx context.Context, userID string) (map[string]string, error)
+ Del(ctx context.Context, userID string, keys []string) error
+ GetPage(ctx context.Context, userID string, key string, pagination pagination.Pagination) (int64, []*model.ClientConfig, error)
+}
diff --git a/pkg/common/storage/database/conversation.go b/pkg/common/storage/database/conversation.go
new file mode 100644
index 0000000..3739f2e
--- /dev/null
+++ b/pkg/common/storage/database/conversation.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type Conversation interface {
+ Create(ctx context.Context, conversations []*model.Conversation) (err error)
+ UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]any) (rows int64, err error)
+ UpdateUserConversations(ctx context.Context, userID string, args map[string]any) ([]*model.Conversation, error)
+ Update(ctx context.Context, conversation *model.Conversation) (err error)
+ Find(ctx context.Context, ownerUserID string, conversationIDs []string) (conversations []*model.Conversation, err error)
+ FindUserID(ctx context.Context, userIDs []string, conversationIDs []string) ([]string, error)
+ FindUserIDAllConversationID(ctx context.Context, userID string) ([]string, error)
+ FindUserIDAllNotNotifyConversationID(ctx context.Context, userID string) ([]string, error)
+ FindUserIDAllPinnedConversationID(ctx context.Context, userID string) ([]string, error)
+ Take(ctx context.Context, userID, conversationID string) (conversation *model.Conversation, err error)
+ FindConversationID(ctx context.Context, userID string, conversationIDs []string) (existConversationID []string, err error)
+ FindUserIDAllConversations(ctx context.Context, userID string) (conversations []*model.Conversation, err error)
+ FindRecvMsgUserIDs(ctx context.Context, conversationID string, recvOpts []int) ([]string, error)
+ GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error)
+ GetAllConversationIDs(ctx context.Context) ([]string, error)
+ GetAllConversationIDsNumber(ctx context.Context) (int64, error)
+ PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error)
+ GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*model.Conversation, error)
+ GetConversationIDsNeedDestruct(ctx context.Context) ([]*model.Conversation, error)
+ GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
+ FindConversationUserVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error)
+ FindRandConversation(ctx context.Context, ts int64, limit int) ([]*model.Conversation, error)
+ DeleteUsersConversations(ctx context.Context, userID string, conversationIDs []string) error
+}
diff --git a/pkg/common/storage/database/doc.go b/pkg/common/storage/database/doc.go
new file mode 100644
index 0000000..9ab2245
--- /dev/null
+++ b/pkg/common/storage/database/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
diff --git a/pkg/common/storage/database/friend.go b/pkg/common/storage/database/friend.go
new file mode 100644
index 0000000..6540693
--- /dev/null
+++ b/pkg/common/storage/database/friend.go
@@ -0,0 +1,60 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+// Friend defines the operations for managing friends in MongoDB.
+type Friend interface {
+ // Create inserts multiple friend records.
+ Create(ctx context.Context, friends []*model.Friend) (err error)
+ // Delete removes specified friends of the owner user.
+ Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error)
+ // UpdateByMap updates specific fields of a friend document using a map.
+ UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]any) (err error)
+ // UpdateRemark modify remarks.
+ UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error)
+ // Take retrieves a single friend document. Returns an error if not found.
+ Take(ctx context.Context, ownerUserID, friendUserID string) (friend *model.Friend, err error)
+ // FindUserState finds the friendship status between two users.
+ FindUserState(ctx context.Context, userID1, userID2 string) (friends []*model.Friend, err error)
+ // FindFriends retrieves a list of friends for a given owner. Missing friends do not cause an error.
+ FindFriends(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*model.Friend, err error)
+ // FindReversalFriends finds users who have added the specified user as a friend.
+ FindReversalFriends(ctx context.Context, friendUserID string, ownerUserIDs []string) (friends []*model.Friend, err error)
+ // FindOwnerFriends retrieves a paginated list of friends for a given owner.
+ FindOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error)
+ // FindInWhoseFriends finds users who have added the specified user as a friend, with pagination.
+ FindInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*model.Friend, err error)
+ // FindFriendUserIDs retrieves a list of friend user IDs for a given owner.
+ FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error)
+ // UpdateFriends update friends' fields
+ UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error)
+
+ // FindIncrVersion retrieves the incremental version log of the owner's friend list.
+ FindIncrVersion(ctx context.Context, ownerUserID string, version uint, limit int) (*model.VersionLog, error)
+ // FindFriendUserID retrieves the user IDs of everyone who has added friendUserID as a friend.
+ FindFriendUserID(ctx context.Context, friendUserID string) ([]string, error)
+
+ //SearchFriend(ctx context.Context, ownerUserID, keyword string, pagination pagination.Pagination) (int64, []*model.Friend, error)
+
+ // FindOwnerFriendUserIds retrieves up to limit friend user IDs for the given owner.
+ FindOwnerFriendUserIds(ctx context.Context, ownerUserID string, limit int) ([]string, error)
+ // IncrVersion increments the friend-list version for the owner, recording the given state for friendUserIDs.
+ IncrVersion(ctx context.Context, ownerUserID string, friendUserIDs []string, state int32) error
+}
diff --git a/pkg/common/storage/database/friend_request.go b/pkg/common/storage/database/friend_request.go
new file mode 100644
index 0000000..68c045f
--- /dev/null
+++ b/pkg/common/storage/database/friend_request.go
@@ -0,0 +1,42 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type FriendRequest interface {
+ // Create inserts multiple friend request records.
+ Create(ctx context.Context, friendRequests []*model.FriendRequest) (err error)
+ // Delete removes the friend request from fromUserID to toUserID.
+ Delete(ctx context.Context, fromUserID, toUserID string) (err error)
+ // UpdateByMap updates specific fields using a map; zero values are allowed.
+ UpdateByMap(ctx context.Context, fromUserID string, toUserID string, args map[string]any) (err error)
+ // Update updates a friend request record (non-zero fields only).
+ Update(ctx context.Context, friendRequest *model.FriendRequest) (err error)
+ // Find retrieves the friend request from fromUserID to toUserID; no error is returned if not found.
+ Find(ctx context.Context, fromUserID, toUserID string) (friendRequest *model.FriendRequest, err error)
+ // Take retrieves the friend request from fromUserID to toUserID; returns an error if not found.
+ Take(ctx context.Context, fromUserID, toUserID string) (friendRequest *model.FriendRequest, err error)
+ // FindToUserID retrieves the paginated list of friend requests received by toUserID.
+ FindToUserID(ctx context.Context, toUserID string, handleResults []int, pagination pagination.Pagination) (total int64, friendRequests []*model.FriendRequest, err error)
+ // FindFromUserID retrieves the paginated list of friend requests sent by fromUserID.
+ FindFromUserID(ctx context.Context, fromUserID string, handleResults []int, pagination pagination.Pagination) (total int64, friendRequests []*model.FriendRequest, err error)
+ // FindBothFriendRequests retrieves the requests in both directions between fromUserID and toUserID.
+ FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*model.FriendRequest, err error)
+ // GetUnhandledCount counts the unhandled friend requests for userID since the given timestamp.
+ GetUnhandledCount(ctx context.Context, userID string, ts int64) (int64, error)
+}
diff --git a/pkg/common/storage/database/group.go b/pkg/common/storage/database/group.go
new file mode 100644
index 0000000..22cb01b
--- /dev/null
+++ b/pkg/common/storage/database/group.go
@@ -0,0 +1,40 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type Group interface {
+ Create(ctx context.Context, groups []*model.Group) (err error)
+ UpdateMap(ctx context.Context, groupID string, args map[string]any) (err error)
+ UpdateStatus(ctx context.Context, groupID string, status int32) (err error)
+ Find(ctx context.Context, groupIDs []string) (groups []*model.Group, err error)
+ Take(ctx context.Context, groupID string) (group *model.Group, err error)
+ Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, groups []*model.Group, err error)
+ // CountTotal returns the total number of groups, optionally restricted to those created before the given time.
+ CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
+ // CountRangeEverydayTotal returns the number of groups created on each day between start and end.
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+
+ FindJoinSortGroupID(ctx context.Context, groupIDs []string) ([]string, error)
+
+ SearchJoin(ctx context.Context, groupIDs []string, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error)
+}
diff --git a/pkg/common/storage/database/group_member.go b/pkg/common/storage/database/group_member.go
new file mode 100644
index 0000000..54012c4
--- /dev/null
+++ b/pkg/common/storage/database/group_member.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type GroupMember interface {
+ Create(ctx context.Context, groupMembers []*model.GroupMember) (err error)
+ Delete(ctx context.Context, groupID string, userIDs []string) (err error)
+ Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error)
+ UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) error
+ UpdateUserRoleLevels(ctx context.Context, groupID string, firstUserID string, firstUserRoleLevel int32, secondUserID string, secondUserRoleLevel int32) error
+ FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error)
+ Take(ctx context.Context, groupID string, userID string) (groupMember *model.GroupMember, err error)
+ Find(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupMember, error)
+ FindInGroup(ctx context.Context, userID string, groupIDs []string) ([]*model.GroupMember, error)
+ TakeOwner(ctx context.Context, groupID string) (groupMember *model.GroupMember, err error)
+ SearchMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (total int64, groupList []*model.GroupMember, err error)
+ // SearchMemberByFields searches for group members by multiple fields: nickname, userID (account), and optionally phone
+ SearchMemberByFields(ctx context.Context, groupID string, nickname, userID, phone string, pagination pagination.Pagination) (total int64, groupList []*model.GroupMember, err error)
+ FindRoleLevelUserIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
+ FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+ TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error)
+ FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+ IsUpdateRoleLevel(data map[string]any) bool
+ JoinGroupIncrVersion(ctx context.Context, userID string, groupIDs []string, state int32) error
+ MemberGroupIncrVersion(ctx context.Context, groupID string, userIDs []string, state int32) error
+ FindMemberIncrVersion(ctx context.Context, groupID string, version uint, limit int) (*model.VersionLog, error)
+ BatchFindMemberIncrVersion(ctx context.Context, groupIDs []string, versions []uint, limits []int) ([]*model.VersionLog, error)
+ FindJoinIncrVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error)
+}
diff --git a/pkg/common/storage/database/group_request.go b/pkg/common/storage/database/group_request.go
new file mode 100644
index 0000000..f342107
--- /dev/null
+++ b/pkg/common/storage/database/group_request.go
@@ -0,0 +1,33 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type GroupRequest interface {
+ Create(ctx context.Context, groupRequests []*model.GroupRequest) (err error)
+ Delete(ctx context.Context, groupID string, userID string) (err error)
+ UpdateHandler(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32) (err error)
+ Take(ctx context.Context, groupID string, userID string) (groupRequest *model.GroupRequest, err error)
+ FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupRequest, error)
+ Page(ctx context.Context, userID string, groupIDs []string, handleResults []int, pagination pagination.Pagination) (total int64, groups []*model.GroupRequest, err error)
+ PageGroup(ctx context.Context, groupIDs []string, handleResults []int, pagination pagination.Pagination) (total int64, groups []*model.GroupRequest, err error)
+ GetUnhandledCount(ctx context.Context, groupIDs []string, ts int64) (int64, error)
+}
diff --git a/pkg/common/storage/database/log.go b/pkg/common/storage/database/log.go
new file mode 100644
index 0000000..b8caf5f
--- /dev/null
+++ b/pkg/common/storage/database/log.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type Log interface {
+ Create(ctx context.Context, log []*model.Log) error
+ Search(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*model.Log, error)
+ Delete(ctx context.Context, logID []string, userID string) error
+ Get(ctx context.Context, logIDs []string, userID string) ([]*model.Log, error)
+}
diff --git a/pkg/common/storage/database/meeting.go b/pkg/common/storage/database/meeting.go
new file mode 100644
index 0000000..a79303b
--- /dev/null
+++ b/pkg/common/storage/database/meeting.go
@@ -0,0 +1,52 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+// Meeting defines the operations for managing meetings in MongoDB.
+type Meeting interface {
+ // Create creates a new meeting record.
+ Create(ctx context.Context, meeting *model.Meeting) error
+ // Take retrieves a meeting by meeting ID. Returns an error if not found.
+ Take(ctx context.Context, meetingID string) (*model.Meeting, error)
+ // Update updates meeting information.
+ Update(ctx context.Context, meetingID string, data map[string]any) error
+ // UpdateStatus updates the status of a meeting.
+ UpdateStatus(ctx context.Context, meetingID string, status int32) error
+ // Find finds meetings by meeting IDs.
+ Find(ctx context.Context, meetingIDs []string) ([]*model.Meeting, error)
+ // FindByCreator finds meetings created by a specific user.
+ FindByCreator(ctx context.Context, creatorUserID string, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error)
+ // FindAll finds all meetings with pagination.
+ FindAll(ctx context.Context, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error)
+ // Search searches meetings by keyword (subject, description).
+ Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error)
+ // FindByStatus finds meetings by status.
+ FindByStatus(ctx context.Context, status int32, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error)
+ // FindByScheduledTimeRange finds meetings within a scheduled time range.
+ FindByScheduledTimeRange(ctx context.Context, startTime, endTime int64, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error)
+ // FindFinishedMeetingsBefore finds finished meetings that ended before the specified time.
+ // This is used for cleanup tasks like dismissing groups after meetings end.
+ FindFinishedMeetingsBefore(ctx context.Context, beforeTime time.Time) ([]*model.Meeting, error)
+ // Delete deletes a meeting by meeting ID.
+ Delete(ctx context.Context, meetingID string) error
+}
diff --git a/pkg/common/storage/database/meeting_checkin.go b/pkg/common/storage/database/meeting_checkin.go
new file mode 100644
index 0000000..3542319
--- /dev/null
+++ b/pkg/common/storage/database/meeting_checkin.go
@@ -0,0 +1,41 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+// MeetingCheckIn defines the operations for managing meeting check-ins in MongoDB.
+type MeetingCheckIn interface {
+ // Create creates a new meeting check-in record.
+ Create(ctx context.Context, checkIn *model.MeetingCheckIn) error
+ // Take retrieves a check-in by check-in ID. Returns an error if not found.
+ Take(ctx context.Context, checkInID string) (*model.MeetingCheckIn, error)
+ // FindByMeetingID finds all check-ins for a meeting with pagination.
+ FindByMeetingID(ctx context.Context, meetingID string, pagination pagination.Pagination) (total int64, checkIns []*model.MeetingCheckIn, err error)
+ // FindByUserAndMeetingID retrieves the check-in record of a user for a specific meeting, if any.
+ FindByUserAndMeetingID(ctx context.Context, userID, meetingID string) (*model.MeetingCheckIn, error)
+ // CountByMeetingID counts the number of check-ins for a meeting.
+ CountByMeetingID(ctx context.Context, meetingID string) (int64, error)
+ // FindByUser finds all check-ins by a user with pagination.
+ FindByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, checkIns []*model.MeetingCheckIn, err error)
+ // Delete deletes a check-in by check-in ID.
+ Delete(ctx context.Context, checkInID string) error
+}
diff --git a/pkg/common/storage/database/mgo/black.go b/pkg/common/storage/database/mgo/black.go
new file mode 100644
index 0000000..72405fc
--- /dev/null
+++ b/pkg/common/storage/database/mgo/black.go
@@ -0,0 +1,106 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewBlackMongo(db *mongo.Database) (database.Black, error) {
+ coll := db.Collection(database.BlackName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "block_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &BlackMgo{coll: coll}, nil
+}
+
+type BlackMgo struct {
+ coll *mongo.Collection
+}
+
+func (b *BlackMgo) blackFilter(ownerUserID, blockUserID string) bson.M {
+ return bson.M{
+ "owner_user_id": ownerUserID,
+ "block_user_id": blockUserID,
+ }
+}
+
+func (b *BlackMgo) blacksFilter(blacks []*model.Black) bson.M {
+ if len(blacks) == 0 {
+ return nil
+ }
+ or := make(bson.A, 0, len(blacks))
+ for _, black := range blacks {
+ or = append(or, b.blackFilter(black.OwnerUserID, black.BlockUserID))
+ }
+ return bson.M{"$or": or}
+}
+
+func (b *BlackMgo) Create(ctx context.Context, blacks []*model.Black) (err error) {
+ return mongoutil.InsertMany(ctx, b.coll, blacks)
+}
+
+func (b *BlackMgo) Delete(ctx context.Context, blacks []*model.Black) (err error) {
+ if len(blacks) == 0 {
+ return nil
+ }
+ return mongoutil.DeleteMany(ctx, b.coll, b.blacksFilter(blacks))
+}
+
+func (b *BlackMgo) UpdateByMap(ctx context.Context, ownerUserID, blockUserID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mongoutil.UpdateOne(ctx, b.coll, b.blackFilter(ownerUserID, blockUserID), bson.M{"$set": args}, false)
+}
+
+func (b *BlackMgo) Find(ctx context.Context, blacks []*model.Black) (blackList []*model.Black, err error) {
+ return mongoutil.Find[*model.Black](ctx, b.coll, b.blacksFilter(blacks))
+}
+
+func (b *BlackMgo) Take(ctx context.Context, ownerUserID, blockUserID string) (black *model.Black, err error) {
+ return mongoutil.FindOne[*model.Black](ctx, b.coll, b.blackFilter(ownerUserID, blockUserID))
+}
+
+func (b *BlackMgo) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*model.Black, err error) {
+ return mongoutil.FindPage[*model.Black](ctx, b.coll, bson.M{"owner_user_id": ownerUserID}, pagination)
+}
+
+func (b *BlackMgo) FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*model.Black, err error) {
+ if len(userIDs) == 0 {
+ return mongoutil.Find[*model.Black](ctx, b.coll, bson.M{"owner_user_id": ownerUserID})
+ }
+ return mongoutil.Find[*model.Black](ctx, b.coll, bson.M{"owner_user_id": ownerUserID, "block_user_id": bson.M{"$in": userIDs}})
+}
+
+func (b *BlackMgo) FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error) {
+ return mongoutil.Find[string](ctx, b.coll, bson.M{"owner_user_id": ownerUserID}, options.Find().SetProjection(bson.M{"_id": 0, "block_user_id": 1}))
+}
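The `blacksFilter` helper above turns a list of (owner, blocked) pairs into a single `$or` query. A minimal, dependency-free sketch of the same filter shape, using plain maps in place of `bson.M` (the names here are illustrative, not the actual driver types):

```go
package main

import "fmt"

// black mirrors the owner/blocked pair used by BlackMgo (illustrative only).
type black struct {
	OwnerUserID string
	BlockUserID string
}

// pairFilter mirrors BlackMgo.blackFilter: one exact-match document per pair.
func pairFilter(owner, blocked string) map[string]any {
	return map[string]any{
		"owner_user_id": owner,
		"block_user_id": blocked,
	}
}

// orFilter mirrors BlackMgo.blacksFilter: one $or branch per (owner, blocked)
// pair, and nil for an empty input.
func orFilter(blacks []black) map[string]any {
	if len(blacks) == 0 {
		return nil
	}
	or := make([]any, 0, len(blacks))
	for _, b := range blacks {
		or = append(or, pairFilter(b.OwnerUserID, b.BlockUserID))
	}
	return map[string]any{"$or": or}
}

func main() {
	f := orFilter([]black{{"u1", "u2"}, {"u1", "u3"}})
	fmt.Println(len(f["$or"].([]any))) // 2
}
```

Because each `$or` branch matches on both fields of the compound unique index, the query can be satisfied by the same index that `NewBlackMongo` creates.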
diff --git a/pkg/common/storage/database/mgo/cache.go b/pkg/common/storage/database/mgo/cache.go
new file mode 100644
index 0000000..a190011
--- /dev/null
+++ b/pkg/common/storage/database/mgo/cache.go
@@ -0,0 +1,183 @@
+package mgo
+
+import (
+ "context"
+ "strconv"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/google/uuid"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewCacheMgo(db *mongo.Database) (*CacheMgo, error) {
+ coll := db.Collection(database.CacheName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "key", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "expire_at", Value: 1},
+ },
+ Options: options.Index().SetExpireAfterSeconds(0),
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &CacheMgo{coll: coll}, nil
+}
+
+type CacheMgo struct {
+ coll *mongo.Collection
+}
+
+func (x *CacheMgo) findToMap(res []model.Cache, now time.Time) map[string]string {
+ kv := make(map[string]string)
+ for _, re := range res {
+ if re.ExpireAt != nil && re.ExpireAt.Before(now) {
+ continue
+ }
+ kv[re.Key] = re.Value
+ }
+ return kv
+}
+
+func (x *CacheMgo) Get(ctx context.Context, key []string) (map[string]string, error) {
+ if len(key) == 0 {
+ return nil, nil
+ }
+ now := time.Now()
+ res, err := mongoutil.Find[model.Cache](ctx, x.coll, bson.M{
+ "key": bson.M{"$in": key},
+ "$or": []bson.M{
+ {"expire_at": bson.M{"$gt": now}},
+ {"expire_at": nil},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return x.findToMap(res, now), nil
+}
+
+func (x *CacheMgo) Prefix(ctx context.Context, prefix string) (map[string]string, error) {
+ now := time.Now()
+ res, err := mongoutil.Find[model.Cache](ctx, x.coll, bson.M{
+ "key": bson.M{"$regex": "^" + prefix},
+ "$or": []bson.M{
+ {"expire_at": bson.M{"$gt": now}},
+ {"expire_at": nil},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return x.findToMap(res, now), nil
+}
+
+func (x *CacheMgo) Set(ctx context.Context, key string, value string, expireAt time.Duration) error {
+ cv := &model.Cache{
+ Key: key,
+ Value: value,
+ }
+ if expireAt > 0 {
+ now := time.Now().Add(expireAt)
+ cv.ExpireAt = &now
+ }
+ opt := options.Update().SetUpsert(true)
+ return mongoutil.UpdateOne(ctx, x.coll, bson.M{"key": key}, bson.M{"$set": cv}, false, opt)
+}
+
+func (x *CacheMgo) Incr(ctx context.Context, key string, value int) (int, error) {
+ pipeline := mongo.Pipeline{
+ {
+ {"$set", bson.M{
+ "value": bson.M{
+ "$toString": bson.M{
+ "$add": bson.A{
+ bson.M{"$toInt": "$value"},
+ value,
+ },
+ },
+ },
+ }},
+ },
+ }
+ opt := options.FindOneAndUpdate().SetReturnDocument(options.After)
+ res, err := mongoutil.FindOneAndUpdate[model.Cache](ctx, x.coll, bson.M{"key": key}, pipeline, opt)
+ if err != nil {
+ return 0, err
+ }
+ return strconv.Atoi(res.Value)
+}
+
+func (x *CacheMgo) Del(ctx context.Context, key []string) error {
+ if len(key) == 0 {
+ return nil
+ }
+ _, err := x.coll.DeleteMany(ctx, bson.M{"key": bson.M{"$in": key}})
+ return errs.Wrap(err)
+}
+
+func (x *CacheMgo) lockKey(key string) string {
+ return "LOCK_" + key
+}
+
+func (x *CacheMgo) Lock(ctx context.Context, key string, duration time.Duration) (string, error) {
+ tmp, err := uuid.NewUUID()
+ if err != nil {
+ return "", err
+ }
+ if duration <= 0 || duration > time.Minute*10 {
+ duration = time.Minute * 10
+ }
+ cv := &model.Cache{
+ Key: x.lockKey(key),
+ Value: tmp.String(),
+ ExpireAt: nil,
+ }
+ ctx, cancel := context.WithTimeout(ctx, time.Second*30)
+ defer cancel()
+ wait := func() error {
+ timeout := time.NewTimer(time.Millisecond * 100)
+ defer timeout.Stop()
+ select {
+ case <-ctx.Done():
+ return ctx.Err()
+ case <-timeout.C:
+ return nil
+ }
+ }
+ for {
+ // Purge a stale lock first; it is stored under the prefixed key (cv.Key), not the raw key.
+ if err := mongoutil.DeleteOne(ctx, x.coll, bson.M{"key": cv.Key, "expire_at": bson.M{"$lt": time.Now()}}); err != nil {
+ return "", err
+ }
+ expireAt := time.Now().Add(duration)
+ cv.ExpireAt = &expireAt
+ if err := mongoutil.InsertMany[*model.Cache](ctx, x.coll, []*model.Cache{cv}); err != nil {
+ if mongo.IsDuplicateKeyError(err) {
+ if err := wait(); err != nil {
+ return "", err
+ }
+ continue
+ }
+ return "", err
+ }
+ return cv.Value, nil
+ }
+}
+
+func (x *CacheMgo) Unlock(ctx context.Context, key string, value string) error {
+ return mongoutil.DeleteOne(ctx, x.coll, bson.M{"key": x.lockKey(key), "value": value})
+}
diff --git a/pkg/common/storage/database/mgo/cache_test.go b/pkg/common/storage/database/mgo/cache_test.go
new file mode 100644
index 0000000..5a316b3
--- /dev/null
+++ b/pkg/common/storage/database/mgo/cache_test.go
@@ -0,0 +1,133 @@
+package mgo
+
+import (
+ "context"
+ "strings"
+ "sync"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func TestName1111(t *testing.T) {
+ coll := Mongodb().Collection("temp")
+
+ //updatePipeline := mongo.Pipeline{
+ // {
+ // {"$set", bson.M{
+ // "age": bson.M{
+ // "$toString": bson.M{
+ // "$add": bson.A{
+ // bson.M{"$toInt": "$age"},
+ // 1,
+ // },
+ // },
+ // },
+ // }},
+ // },
+ //}
+
+ pipeline := mongo.Pipeline{
+ {
+ {"$set", bson.M{
+ "value": bson.M{
+ "$toString": bson.M{
+ "$add": bson.A{
+ bson.M{"$toInt": "$value"},
+ 1,
+ },
+ },
+ },
+ }},
+ },
+ }
+
+ opt := options.FindOneAndUpdate().SetUpsert(false).SetReturnDocument(options.After)
+ res, err := mongoutil.FindOneAndUpdate[model.Cache](context.Background(), coll, bson.M{"key": "123456"}, pipeline, opt)
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res)
+}
+
+func TestName33333(t *testing.T) {
+ c, err := NewCacheMgo(Mongodb())
+ if err != nil {
+ panic(err)
+ }
+ if err := c.Set(context.Background(), "123456", "123456", time.Hour); err != nil {
+ panic(err)
+ }
+
+ if err := c.Set(context.Background(), "123666", "123666", time.Hour); err != nil {
+ panic(err)
+ }
+
+ res1, err := c.Get(context.Background(), []string{"123456"})
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res1)
+
+ res2, err := c.Prefix(context.Background(), "123")
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res2)
+}
+
+func TestName1111aa(t *testing.T) {
+
+ c, err := NewCacheMgo(Mongodb())
+ if err != nil {
+ panic(err)
+ }
+ var count int
+
+ key := "123456"
+
+ doFunc := func() {
+ value, err := c.Lock(context.Background(), key, time.Second*30)
+ if err != nil {
+ t.Log("Lock error", err)
+ return
+ }
+ tmp := count
+ tmp++
+ count = tmp
+ t.Log("count", tmp)
+ if err := c.Unlock(context.Background(), key, value); err != nil {
+ t.Log("Unlock error", err)
+ return
+ }
+ }
+
+ if _, err := c.Lock(context.Background(), key, time.Second*10); err != nil {
+ t.Log(err)
+ return
+ }
+
+ var wg sync.WaitGroup
+ for i := 0; i < 32; i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for i := 0; i < 100; i++ {
+ doFunc()
+ }
+ }()
+ }
+
+ wg.Wait()
+
+}
+
+func TestName111111a(t *testing.T) {
+ arr := strings.SplitN("1:testkakskdask:1111", ":", 2)
+ t.Log(arr)
+}
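The pipeline exercised in `TestName1111` exists because cache values are stored as strings: `CacheMgo.Incr` must `$toInt` the value, `$add` the delta, and `$toString` the result inside a single server-side update. The arithmetic itself, sketched in plain Go (names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// incrString mirrors the $toInt -> $add -> $toString steps of the
// aggregation-pipeline update: parse the stored string, add the delta,
// and re-encode as a string for storage.
func incrString(value string, delta int) (newValue string, n int, err error) {
	n, err = strconv.Atoi(value) // $toInt
	if err != nil {
		return "", 0, err
	}
	n += delta                     // $add
	return strconv.Itoa(n), n, nil // $toString
}

func main() {
	s, n, _ := incrString("41", 1)
	fmt.Println(s, n) // 42 42
}
```

In MongoDB the whole pipeline runs server-side, which is what makes the increment atomic; doing this read-modify-write on the client, as above, would race under concurrency.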
diff --git a/pkg/common/storage/database/mgo/client_config.go b/pkg/common/storage/database/mgo/client_config.go
new file mode 100644
index 0000000..80098d5
--- /dev/null
+++ b/pkg/common/storage/database/mgo/client_config.go
@@ -0,0 +1,99 @@
+// Copyright © 2023 OpenIM open source community. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/tools/errs"
+)
+
+func NewClientConfig(db *mongo.Database) (database.ClientConfig, error) {
+ coll := db.Collection("config")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "key", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &ClientConfig{
+ coll: coll,
+ }, nil
+}
+
+type ClientConfig struct {
+ coll *mongo.Collection
+}
+
+func (x *ClientConfig) Set(ctx context.Context, userID string, config map[string]string) error {
+ if len(config) == 0 {
+ return nil
+ }
+ for key, value := range config {
+ filter := bson.M{"key": key, "user_id": userID}
+ update := bson.M{
+ "value": value,
+ }
+ err := mongoutil.UpdateOne(ctx, x.coll, filter, bson.M{"$set": update}, false, options.Update().SetUpsert(true))
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (x *ClientConfig) Get(ctx context.Context, userID string) (map[string]string, error) {
+ cs, err := mongoutil.Find[*model.ClientConfig](ctx, x.coll, bson.M{"user_id": userID})
+ if err != nil {
+ return nil, err
+ }
+ cm := make(map[string]string)
+ for _, config := range cs {
+ cm[config.Key] = config.Value
+ }
+ return cm, nil
+}
+
+func (x *ClientConfig) Del(ctx context.Context, userID string, keys []string) error {
+ if len(keys) == 0 {
+ return nil
+ }
+ return mongoutil.DeleteMany(ctx, x.coll, bson.M{"key": bson.M{"$in": keys}, "user_id": userID})
+}
+
+func (x *ClientConfig) GetPage(ctx context.Context, userID string, key string, pagination pagination.Pagination) (int64, []*model.ClientConfig, error) {
+ filter := bson.M{}
+ if userID != "" {
+ filter["user_id"] = userID
+ }
+ if key != "" {
+ filter["key"] = key
+ }
+ return mongoutil.FindPage[*model.ClientConfig](ctx, x.coll, filter, pagination)
+}
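`Set` above upserts each key independently, so a partial config map merges into, rather than replaces, a user's existing settings. The resulting merge semantics, sketched against an in-memory store (illustrative only):

```go
package main

import "fmt"

// setConfig mirrors ClientConfig.Set semantics with a map: every key in
// config is upserted on its own, and keys absent from config are untouched.
func setConfig(store, config map[string]string) {
	for k, v := range config {
		store[k] = v
	}
}

func main() {
	store := map[string]string{"theme": "dark", "lang": "en"}
	setConfig(store, map[string]string{"lang": "zh", "tz": "UTC+8"})
	fmt.Println(store) // map[lang:zh theme:dark tz:UTC+8]
}
```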
diff --git a/pkg/common/storage/database/mgo/conversation.go b/pkg/common/storage/database/mgo/conversation.go
new file mode 100644
index 0000000..5eb6ed3
--- /dev/null
+++ b/pkg/common/storage/database/mgo/conversation.go
@@ -0,0 +1,325 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+)
+
+func NewConversationMongo(db *mongo.Database) (*ConversationMgo, error) {
+ coll := db.Collection(database.ConversationName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "conversation_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index(),
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ version, err := NewVersionLog(db.Collection(database.ConversationVersionName))
+ if err != nil {
+ return nil, err
+ }
+ return &ConversationMgo{version: version, coll: coll}, nil
+}
+
+type ConversationMgo struct {
+ version database.VersionLog
+ coll *mongo.Collection
+}
+
+func (c *ConversationMgo) Create(ctx context.Context, conversations []*model.Conversation) (err error) {
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.InsertMany(ctx, c.coll, conversations)
+ }, func() error {
+ userConversation := make(map[string][]string)
+ for _, conversation := range conversations {
+ userConversation[conversation.OwnerUserID] = append(userConversation[conversation.OwnerUserID], conversation.ConversationID)
+ }
+ for userID, conversationIDs := range userConversation {
+ if err := c.version.IncrVersion(ctx, userID, conversationIDs, model.VersionStateInsert); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+}
+
+func (c *ConversationMgo) UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]any) (int64, error) {
+ if len(args) == 0 || len(userIDs) == 0 {
+ return 0, nil
+ }
+ filter := bson.M{
+ "conversation_id": conversationID,
+ "owner_user_id": bson.M{"$in": userIDs},
+ }
+ var rows int64
+ err := mongoutil.IncrVersion(func() error {
+ res, err := mongoutil.UpdateMany(ctx, c.coll, filter, bson.M{"$set": args})
+ if err != nil {
+ return err
+ }
+ rows = res.ModifiedCount
+ return nil
+ }, func() error {
+ for _, userID := range userIDs {
+ if err := c.version.IncrVersion(ctx, userID, []string{conversationID}, model.VersionStateUpdate); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+ if err != nil {
+ return 0, err
+ }
+ return rows, nil
+}
+
+func (c *ConversationMgo) UpdateUserConversations(ctx context.Context, userID string, args map[string]any) ([]*model.Conversation, error) {
+ if len(args) == 0 {
+ return nil, nil
+ }
+ filter := bson.M{
+ "user_id": userID,
+ }
+
+ conversations, err := mongoutil.Find[*model.Conversation](ctx, c.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1, "conversation_id": 1}))
+ if err != nil {
+ return nil, err
+ }
+ err = mongoutil.IncrVersion(func() error {
+ _, err := mongoutil.UpdateMany(ctx, c.coll, filter, bson.M{"$set": args})
+ if err != nil {
+ return err
+ }
+ return nil
+ }, func() error {
+ for _, conversation := range conversations {
+ if err := c.version.IncrVersion(ctx, conversation.OwnerUserID, []string{conversation.ConversationID}, model.VersionStateUpdate); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+ if err != nil {
+ return nil, err
+ }
+ return conversations, nil
+}
+
+func (c *ConversationMgo) Update(ctx context.Context, conversation *model.Conversation) (err error) {
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.UpdateOne(ctx, c.coll, bson.M{"owner_user_id": conversation.OwnerUserID, "conversation_id": conversation.ConversationID}, bson.M{"$set": conversation}, true)
+ }, func() error {
+ return c.version.IncrVersion(ctx, conversation.OwnerUserID, []string{conversation.ConversationID}, model.VersionStateUpdate)
+ })
+}
+
+func (c *ConversationMgo) Find(ctx context.Context, ownerUserID string, conversationIDs []string) (conversations []*model.Conversation, err error) {
+ return mongoutil.Find[*model.Conversation](ctx, c.coll, bson.M{"owner_user_id": ownerUserID, "conversation_id": bson.M{"$in": conversationIDs}})
+}
+
+func (c *ConversationMgo) FindUserID(ctx context.Context, userIDs []string, conversationIDs []string) ([]string, error) {
+ return mongoutil.Find[string](
+ ctx,
+ c.coll,
+ bson.M{"owner_user_id": bson.M{"$in": userIDs}, "conversation_id": bson.M{"$in": conversationIDs}},
+ options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}),
+ )
+}
+func (c *ConversationMgo) FindUserIDAllConversationID(ctx context.Context, userID string) ([]string, error) {
+ return mongoutil.Find[string](ctx, c.coll, bson.M{"owner_user_id": userID}, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) FindUserIDAllNotNotifyConversationID(ctx context.Context, userID string) ([]string, error) {
+ return mongoutil.Find[string](ctx, c.coll, bson.M{
+ "owner_user_id": userID,
+ "recv_msg_opt": constant.ReceiveNotNotifyMessage,
+ }, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) FindUserIDAllPinnedConversationID(ctx context.Context, userID string) ([]string, error) {
+ return mongoutil.Find[string](ctx, c.coll, bson.M{
+ "owner_user_id": userID,
+ "is_pinned": true,
+ }, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) Take(ctx context.Context, userID, conversationID string) (conversation *model.Conversation, err error) {
+ return mongoutil.FindOne[*model.Conversation](ctx, c.coll, bson.M{"owner_user_id": userID, "conversation_id": conversationID})
+}
+
+func (c *ConversationMgo) FindConversationID(ctx context.Context, userID string, conversationIDs []string) (existConversationID []string, err error) {
+ return mongoutil.Find[string](ctx, c.coll, bson.M{"owner_user_id": userID, "conversation_id": bson.M{"$in": conversationIDs}}, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) FindUserIDAllConversations(ctx context.Context, userID string) (conversations []*model.Conversation, err error) {
+ return mongoutil.Find[*model.Conversation](ctx, c.coll, bson.M{"owner_user_id": userID})
+}
+
+func (c *ConversationMgo) FindRecvMsgUserIDs(ctx context.Context, conversationID string, recvOpts []int) ([]string, error) {
+ var filter any
+ if len(recvOpts) == 0 {
+ filter = bson.M{"conversation_id": conversationID}
+ } else {
+ filter = bson.M{"conversation_id": conversationID, "recv_msg_opt": bson.M{"$in": recvOpts}}
+ }
+ return mongoutil.Find[string](ctx, c.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}))
+}
+
+func (c *ConversationMgo) GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error) {
+ return mongoutil.FindOne[int](ctx, c.coll, bson.M{"owner_user_id": ownerUserID, "conversation_id": conversationID}, options.FindOne().SetProjection(bson.M{"recv_msg_opt": 1}))
+}
+
+func (c *ConversationMgo) GetAllConversationIDs(ctx context.Context) ([]string, error) {
+ return mongoutil.Aggregate[string](ctx, c.coll, []bson.M{
+ {"$group": bson.M{"_id": "$conversation_id"}},
+ {"$project": bson.M{"_id": 0, "conversation_id": "$_id"}},
+ })
+}
+
+func (c *ConversationMgo) GetAllConversationIDsNumber(ctx context.Context) (int64, error) {
+ counts, err := mongoutil.Aggregate[int64](ctx, c.coll, []bson.M{
+ {"$group": bson.M{"_id": "$conversation_id"}},
+ {"$group": bson.M{"_id": nil, "count": bson.M{"$sum": 1}}},
+ {"$project": bson.M{"_id": 0}},
+ })
+ if err != nil {
+ return 0, err
+ }
+ if len(counts) == 0 {
+ return 0, nil
+ }
+ return counts[0], nil
+}
+
+func (c *ConversationMgo) PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error) {
+ return mongoutil.FindPageOnly[string](ctx, c.coll, bson.M{}, pagination, options.Find().SetProjection(bson.M{"conversation_id": 1}))
+}
+
+func (c *ConversationMgo) GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*model.Conversation, error) {
+ return mongoutil.Find[*model.Conversation](ctx, c.coll, bson.M{"conversation_id": bson.M{"$in": conversationIDs}})
+}
+
+func (c *ConversationMgo) GetConversationIDsNeedDestruct(ctx context.Context) ([]*model.Conversation, error) {
+ // "is_msg_destruct = 1 && msg_destruct_time != 0 && (UNIX_TIMESTAMP(NOW()) > (msg_destruct_time + UNIX_TIMESTAMP(latest_msg_destruct_time)) || latest_msg_destruct_time is NULL)"
+ return mongoutil.Find[*model.Conversation](ctx, c.coll, bson.M{
+		"is_msg_destruct":   true, // bool field; matches FindRandConversation's filter
+ "msg_destruct_time": bson.M{"$ne": 0},
+ "$or": []bson.M{
+ {
+ "$expr": bson.M{
+ "$gt": []any{
+ time.Now(),
+					bson.M{"$add": []any{bson.M{"$multiply": []any{"$msg_destruct_time", 1000}}, "$latest_msg_destruct_time"}}, // msg_destruct_time is in seconds; $add on a date treats numbers as milliseconds, so scale by 1000 as FindRandConversation does
+ },
+ },
+ },
+ {
+ "latest_msg_destruct_time": nil,
+ },
+ },
+ })
+}
+
+func (c *ConversationMgo) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
+ return mongoutil.Find[string](
+ ctx,
+ c.coll,
+ bson.M{"conversation_id": conversationID, "recv_msg_opt": bson.M{"$ne": constant.ReceiveMessage}},
+ options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}),
+ )
+}
+
+func (c *ConversationMgo) FindConversationUserVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error) {
+ return c.version.FindChangeLog(ctx, userID, version, limit)
+}
+
+func (c *ConversationMgo) FindRandConversation(ctx context.Context, ts int64, limit int) ([]*model.Conversation, error) {
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "is_msg_destruct": true,
+ "msg_destruct_time": bson.M{"$ne": 0},
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "next_msg_destruct_timestamp": bson.M{
+ "$add": []any{
+ bson.M{
+ "$toLong": "$latest_msg_destruct_time",
+ },
+ bson.M{
+ "$multiply": []any{
+ "$msg_destruct_time",
+ 1000, // convert to milliseconds
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ "$match": bson.M{
+ "next_msg_destruct_timestamp": bson.M{"$lt": ts},
+ },
+ },
+ {
+ "$sample": bson.M{
+ "size": limit,
+ },
+ },
+ }
+ return mongoutil.Aggregate[*model.Conversation](ctx, c.coll, pipeline)
+}
+
+func (c *ConversationMgo) DeleteUsersConversations(ctx context.Context, userID string, conversationIDs []string) error {
+ if len(conversationIDs) == 0 {
+ return nil
+ }
+ filter := bson.M{
+ "owner_user_id": userID,
+ "conversation_id": bson.M{"$in": conversationIDs},
+ }
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.DeleteMany(ctx, c.coll, filter)
+ }, func() error {
+ return c.version.IncrVersion(ctx, userID, conversationIDs, model.VersionStateDelete)
+ })
+}
diff --git a/pkg/common/storage/database/mgo/doc.go b/pkg/common/storage/database/mgo/doc.go
new file mode 100644
index 0000000..a71977c
--- /dev/null
+++ b/pkg/common/storage/database/mgo/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
diff --git a/pkg/common/storage/database/mgo/friend.go b/pkg/common/storage/database/mgo/friend.go
new file mode 100644
index 0000000..a29c1dc
--- /dev/null
+++ b/pkg/common/storage/database/mgo/friend.go
@@ -0,0 +1,271 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// FriendMgo implements Friend using MongoDB as the storage backend.
+type FriendMgo struct {
+ coll *mongo.Collection
+ owner database.VersionLog
+}
+
+// NewFriendMongo creates a new instance of FriendMgo with the provided MongoDB database.
+func NewFriendMongo(db *mongo.Database) (database.Friend, error) {
+ coll := db.Collection(database.FriendName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "friend_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ owner, err := NewVersionLog(db.Collection(database.FriendVersionName))
+ if err != nil {
+ return nil, err
+ }
+ return &FriendMgo{coll: coll, owner: owner}, nil
+}
+
+func (f *FriendMgo) friendSort() any {
+	return bson.D{{Key: "is_pinned", Value: -1}, {Key: "_id", Value: 1}}
+}
+
+// Create inserts multiple friend records.
+func (f *FriendMgo) Create(ctx context.Context, friends []*model.Friend) error {
+ for i, friend := range friends {
+ if friend.ID.IsZero() {
+ friends[i].ID = primitive.NewObjectID()
+ }
+ if friend.CreateTime.IsZero() {
+ friends[i].CreateTime = time.Now()
+ }
+ }
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.InsertMany(ctx, f.coll, friends)
+ }, func() error {
+ mp := make(map[string][]string)
+ for _, friend := range friends {
+ mp[friend.OwnerUserID] = append(mp[friend.OwnerUserID], friend.FriendUserID)
+ }
+ for ownerUserID, friendUserIDs := range mp {
+ if err := f.owner.IncrVersion(ctx, ownerUserID, friendUserIDs, model.VersionStateInsert); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+}
+
+// Delete removes specified friends of the owner user.
+func (f *FriendMgo) Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) error {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+ return mongoutil.IncrVersion(func() error {
+		return mongoutil.DeleteMany(ctx, f.coll, filter) // the $in filter may match multiple friend documents
+ }, func() error {
+ return f.owner.IncrVersion(ctx, ownerUserID, friendUserIDs, model.VersionStateDelete)
+ })
+}
+
+// UpdateByMap updates specific fields of a friend document using a map.
+func (f *FriendMgo) UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]any) error {
+ if len(args) == 0 {
+ return nil
+ }
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": friendUserID,
+ }
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.UpdateOne(ctx, f.coll, filter, bson.M{"$set": args}, true)
+ }, func() error {
+ var friendUserIDs []string
+ if f.IsUpdateIsPinned(args) {
+ friendUserIDs = []string{model.VersionSortChangeID, friendUserID}
+ } else {
+ friendUserIDs = []string{friendUserID}
+ }
+ return f.owner.IncrVersion(ctx, ownerUserID, friendUserIDs, model.VersionStateUpdate)
+ })
+}
+
+// UpdateRemark updates the remark for a specific friend.
+func (f *FriendMgo) UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) error {
+ return f.UpdateByMap(ctx, ownerUserID, friendUserID, map[string]any{"remark": remark})
+}
+
+func (f *FriendMgo) fillTime(friends ...*model.Friend) {
+ for i, friend := range friends {
+ if friend.CreateTime.IsZero() {
+ friends[i].CreateTime = friend.ID.Timestamp()
+ }
+ }
+}
+
+func (f *FriendMgo) findOne(ctx context.Context, filter any) (*model.Friend, error) {
+ friend, err := mongoutil.FindOne[*model.Friend](ctx, f.coll, filter)
+ if err != nil {
+ return nil, err
+ }
+ f.fillTime(friend)
+ return friend, nil
+}
+
+func (f *FriendMgo) find(ctx context.Context, filter any) ([]*model.Friend, error) {
+ friends, err := mongoutil.Find[*model.Friend](ctx, f.coll, filter)
+ if err != nil {
+ return nil, err
+ }
+ f.fillTime(friends...)
+ return friends, nil
+}
+
+func (f *FriendMgo) findPage(ctx context.Context, filter any, pagination pagination.Pagination, opts ...*options.FindOptions) (int64, []*model.Friend, error) {
+ return mongoutil.FindPage[*model.Friend](ctx, f.coll, filter, pagination, opts...)
+}
+
+// Take retrieves a single friend document. Returns an error if not found.
+func (f *FriendMgo) Take(ctx context.Context, ownerUserID, friendUserID string) (*model.Friend, error) {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": friendUserID,
+ }
+ return f.findOne(ctx, filter)
+}
+
+// FindUserState finds the friendship status between two users.
+func (f *FriendMgo) FindUserState(ctx context.Context, userID1, userID2 string) ([]*model.Friend, error) {
+ filter := bson.M{
+ "$or": []bson.M{
+ {"owner_user_id": userID1, "friend_user_id": userID2},
+ {"owner_user_id": userID2, "friend_user_id": userID1},
+ },
+ }
+ return f.find(ctx, filter)
+}
+
+// FindFriends retrieves a list of friends for a given owner. Missing friends do not cause an error.
+func (f *FriendMgo) FindFriends(ctx context.Context, ownerUserID string, friendUserIDs []string) ([]*model.Friend, error) {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+ return f.find(ctx, filter)
+}
+
+// FindReversalFriends finds users who have added the specified user as a friend.
+func (f *FriendMgo) FindReversalFriends(ctx context.Context, friendUserID string, ownerUserIDs []string) ([]*model.Friend, error) {
+ filter := bson.M{
+ "owner_user_id": bson.M{"$in": ownerUserIDs},
+ "friend_user_id": friendUserID,
+ }
+ return f.find(ctx, filter)
+}
+
+// FindOwnerFriends retrieves a paginated list of friends for a given owner.
+func (f *FriendMgo) FindOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (int64, []*model.Friend, error) {
+ filter := bson.M{"owner_user_id": ownerUserID}
+ opt := options.Find().SetSort(f.friendSort())
+ return f.findPage(ctx, filter, pagination, opt)
+}
+
+func (f *FriendMgo) FindOwnerFriendUserIds(ctx context.Context, ownerUserID string, limit int) ([]string, error) {
+ filter := bson.M{"owner_user_id": ownerUserID}
+ opt := options.Find().SetProjection(bson.M{"_id": 0, "friend_user_id": 1}).SetSort(f.friendSort()).SetLimit(int64(limit))
+ return mongoutil.Find[string](ctx, f.coll, filter, opt)
+}
+
+// FindInWhoseFriends finds users who have added the specified user as a friend, with pagination.
+func (f *FriendMgo) FindInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (int64, []*model.Friend, error) {
+ filter := bson.M{"friend_user_id": friendUserID}
+ opt := options.Find().SetSort(f.friendSort())
+ return f.findPage(ctx, filter, pagination, opt)
+}
+
+// FindFriendUserIDs retrieves a list of friend user IDs for a given owner.
+func (f *FriendMgo) FindFriendUserIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+ filter := bson.M{"owner_user_id": ownerUserID}
+ return mongoutil.Find[string](ctx, f.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "friend_user_id": 1}).SetSort(f.friendSort()))
+}
+
+func (f *FriendMgo) UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) error {
+ // Ensure there are IDs to update
+ if len(friendUserIDs) == 0 || len(val) == 0 {
+ return nil // Or return an error if you expect there to always be IDs
+ }
+
+ // Create a filter to match documents with the specified ownerUserID and any of the friendUserIDs
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+
+ // Create an update document
+ update := bson.M{"$set": val}
+
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.Ignore(mongoutil.UpdateMany(ctx, f.coll, filter, update))
+ }, func() error {
+ var userIDs []string
+ if f.IsUpdateIsPinned(val) {
+ userIDs = append([]string{model.VersionSortChangeID}, friendUserIDs...)
+ } else {
+ userIDs = friendUserIDs
+ }
+ return f.owner.IncrVersion(ctx, ownerUserID, userIDs, model.VersionStateUpdate)
+ })
+}
+
+func (f *FriendMgo) FindIncrVersion(ctx context.Context, ownerUserID string, version uint, limit int) (*model.VersionLog, error) {
+ return f.owner.FindChangeLog(ctx, ownerUserID, version, limit)
+}
+
+func (f *FriendMgo) FindFriendUserID(ctx context.Context, friendUserID string) ([]string, error) {
+ filter := bson.M{
+ "friend_user_id": friendUserID,
+ }
+ return mongoutil.Find[string](ctx, f.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}).SetSort(f.friendSort()))
+}
+
+func (f *FriendMgo) IncrVersion(ctx context.Context, ownerUserID string, friendUserIDs []string, state int32) error {
+ return f.owner.IncrVersion(ctx, ownerUserID, friendUserIDs, state)
+}
+
+func (f *FriendMgo) IsUpdateIsPinned(data map[string]any) bool {
+ if data == nil {
+ return false
+ }
+ _, ok := data["is_pinned"]
+ return ok
+}
diff --git a/pkg/common/storage/database/mgo/friend_request.go b/pkg/common/storage/database/mgo/friend_request.go
new file mode 100644
index 0000000..3d06b6b
--- /dev/null
+++ b/pkg/common/storage/database/mgo/friend_request.go
@@ -0,0 +1,143 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+func NewFriendRequestMongo(db *mongo.Database) (database.FriendRequest, error) {
+ coll := db.Collection(database.FriendRequestName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "from_user_id", Value: 1},
+ {Key: "to_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "create_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &FriendRequestMgo{coll: coll}, nil
+}
+
+type FriendRequestMgo struct {
+ coll *mongo.Collection
+}
+
+func (f *FriendRequestMgo) sort() any {
+ return bson.D{{Key: "create_time", Value: -1}}
+}
+
+func (f *FriendRequestMgo) FindToUserID(ctx context.Context, toUserID string, handleResults []int, pagination pagination.Pagination) (total int64, friendRequests []*model.FriendRequest, err error) {
+ filter := bson.M{"to_user_id": toUserID}
+ if len(handleResults) > 0 {
+ filter["handle_result"] = bson.M{"$in": handleResults}
+ }
+ return mongoutil.FindPage[*model.FriendRequest](ctx, f.coll, filter, pagination, options.Find().SetSort(f.sort()))
+}
+
+func (f *FriendRequestMgo) FindFromUserID(ctx context.Context, fromUserID string, handleResults []int, pagination pagination.Pagination) (total int64, friendRequests []*model.FriendRequest, err error) {
+ filter := bson.M{"from_user_id": fromUserID}
+ if len(handleResults) > 0 {
+ filter["handle_result"] = bson.M{"$in": handleResults}
+ }
+ return mongoutil.FindPage[*model.FriendRequest](ctx, f.coll, filter, pagination, options.Find().SetSort(f.sort()))
+}
+
+func (f *FriendRequestMgo) FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*model.FriendRequest, err error) {
+ filter := bson.M{"$or": []bson.M{
+ {"from_user_id": fromUserID, "to_user_id": toUserID},
+ {"from_user_id": toUserID, "to_user_id": fromUserID},
+ }}
+ return mongoutil.Find[*model.FriendRequest](ctx, f.coll, filter)
+}
+
+func (f *FriendRequestMgo) Create(ctx context.Context, friendRequests []*model.FriendRequest) error {
+ return mongoutil.InsertMany(ctx, f.coll, friendRequests)
+}
+
+func (f *FriendRequestMgo) Delete(ctx context.Context, fromUserID, toUserID string) (err error) {
+ return mongoutil.DeleteOne(ctx, f.coll, bson.M{"from_user_id": fromUserID, "to_user_id": toUserID})
+}
+
+func (f *FriendRequestMgo) UpdateByMap(ctx context.Context, fromUserID, toUserID string, args map[string]any) (err error) {
+	if len(args) == 0 {
+		return nil
+	}
+	return mongoutil.UpdateOne(ctx, f.coll, bson.M{"from_user_id": fromUserID, "to_user_id": toUserID}, bson.M{"$set": args}, true)
+}
+
+func (f *FriendRequestMgo) Update(ctx context.Context, friendRequest *model.FriendRequest) (err error) {
+ updater := bson.M{}
+ if friendRequest.HandleResult != 0 {
+ updater["handle_result"] = friendRequest.HandleResult
+ }
+ if friendRequest.ReqMsg != "" {
+ updater["req_msg"] = friendRequest.ReqMsg
+ }
+ if friendRequest.HandlerUserID != "" {
+ updater["handler_user_id"] = friendRequest.HandlerUserID
+ }
+ if friendRequest.HandleMsg != "" {
+ updater["handle_msg"] = friendRequest.HandleMsg
+ }
+ if !friendRequest.HandleTime.IsZero() {
+ updater["handle_time"] = friendRequest.HandleTime
+ }
+ if friendRequest.Ex != "" {
+ updater["ex"] = friendRequest.Ex
+ }
+ if len(updater) == 0 {
+ return nil
+ }
+ filter := bson.M{"from_user_id": friendRequest.FromUserID, "to_user_id": friendRequest.ToUserID}
+ return mongoutil.UpdateOne(ctx, f.coll, filter, bson.M{"$set": updater}, true)
+}
+
+func (f *FriendRequestMgo) Find(ctx context.Context, fromUserID, toUserID string) (friendRequest *model.FriendRequest, err error) {
+ return mongoutil.FindOne[*model.FriendRequest](ctx, f.coll, bson.M{"from_user_id": fromUserID, "to_user_id": toUserID})
+}
+
+func (f *FriendRequestMgo) Take(ctx context.Context, fromUserID, toUserID string) (friendRequest *model.FriendRequest, err error) {
+ return f.Find(ctx, fromUserID, toUserID)
+}
+
+func (f *FriendRequestMgo) GetUnhandledCount(ctx context.Context, userID string, ts int64) (int64, error) {
+ filter := bson.M{"to_user_id": userID, "handle_result": 0}
+ if ts != 0 {
+ filter["create_time"] = bson.M{"$gt": time.UnixMilli(ts)}
+ }
+ return mongoutil.Count(ctx, f.coll, filter)
+}
diff --git a/pkg/common/storage/database/mgo/group.go b/pkg/common/storage/database/mgo/group.go
new file mode 100644
index 0000000..781d9a4
--- /dev/null
+++ b/pkg/common/storage/database/mgo/group.go
@@ -0,0 +1,162 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewGroupMongo(db *mongo.Database) (database.Group, error) {
+ coll := db.Collection(database.GroupName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &GroupMgo{coll: coll}, nil
+}
+
+type GroupMgo struct {
+ coll *mongo.Collection
+}
+
+func (g *GroupMgo) sortGroup() any {
+	return bson.D{{Key: "group_name", Value: 1}, {Key: "create_time", Value: 1}}
+}
+
+func (g *GroupMgo) Create(ctx context.Context, groups []*model.Group) (err error) {
+ return mongoutil.InsertMany(ctx, g.coll, groups)
+}
+
+func (g *GroupMgo) UpdateStatus(ctx context.Context, groupID string, status int32) (err error) {
+ return g.UpdateMap(ctx, groupID, map[string]any{"status": status})
+}
+
+func (g *GroupMgo) UpdateMap(ctx context.Context, groupID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID}, bson.M{"$set": args}, true)
+}
+
+func (g *GroupMgo) Find(ctx context.Context, groupIDs []string) (groups []*model.Group, err error) {
+ return mongoutil.Find[*model.Group](ctx, g.coll, bson.M{"group_id": bson.M{"$in": groupIDs}})
+}
+
+func (g *GroupMgo) Take(ctx context.Context, groupID string) (group *model.Group, err error) {
+ return mongoutil.FindOne[*model.Group](ctx, g.coll, bson.M{"group_id": groupID})
+}
+
+func (g *GroupMgo) Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, groups []*model.Group, err error) {
+ // Define the sorting options
+ opts := options.Find().SetSort(bson.D{{Key: "create_time", Value: -1}})
+
+ // Perform the search with pagination and sorting
+ return mongoutil.FindPage[*model.Group](ctx, g.coll, bson.M{
+ "group_name": bson.M{"$regex": keyword},
+ "status": bson.M{"$ne": constant.GroupStatusDismissed},
+ }, pagination, opts)
+}
+
+func (g *GroupMgo) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ if before == nil {
+ return mongoutil.Count(ctx, g.coll, bson.M{})
+ }
+ return mongoutil.Count(ctx, g.coll, bson.M{"create_time": bson.M{"$lt": before}})
+}
+
+func (g *GroupMgo) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "create_time": bson.M{
+ "$gte": start,
+ "$lt": end,
+ },
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": "$create_time",
+ },
+ },
+ "count": bson.M{
+ "$sum": 1,
+ },
+ },
+ },
+ }
+ type Item struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ }
+ items, err := mongoutil.Aggregate[Item](ctx, g.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]int64, len(items))
+ for _, item := range items {
+ res[item.Date] = item.Count
+ }
+ return res, nil
+}
+
+func (g *GroupMgo) FindJoinSortGroupID(ctx context.Context, groupIDs []string) ([]string, error) {
+ if len(groupIDs) < 2 {
+ return groupIDs, nil
+ }
+ filter := bson.M{
+ "group_id": bson.M{"$in": groupIDs},
+ "status": bson.M{"$ne": constant.GroupStatusDismissed},
+ }
+ opt := options.Find().SetSort(g.sortGroup()).SetProjection(bson.M{"_id": 0, "group_id": 1})
+ return mongoutil.Find[string](ctx, g.coll, filter, opt)
+}
+
+func (g *GroupMgo) SearchJoin(ctx context.Context, groupIDs []string, keyword string, pagination pagination.Pagination) (int64, []*model.Group, error) {
+ if len(groupIDs) == 0 {
+ return 0, nil, nil
+ }
+ filter := bson.M{
+ "group_id": bson.M{"$in": groupIDs},
+ "status": bson.M{"$ne": constant.GroupStatusDismissed},
+ }
+ if keyword != "" {
+ filter["group_name"] = bson.M{"$regex": keyword}
+ }
+ // Define the sorting options
+ opts := options.Find().SetSort(g.sortGroup())
+ // Perform the search with pagination and sorting
+ return mongoutil.FindPage[*model.Group](ctx, g.coll, filter, pagination, opts)
+}
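Search and SearchJoin above interpolate the raw keyword into a `$regex` filter, so metacharacters in user input (`a.b*`, `(`) can change or break the query. A minimal caller-side sketch of escaping with the standard library; `escapeKeyword` is a hypothetical helper, not part of this file:

```go
package main

import (
	"fmt"
	"regexp"
)

// escapeKeyword quotes regex metacharacters so the keyword matches
// literally when embedded in a MongoDB $regex expression.
func escapeKeyword(keyword string) string {
	return regexp.QuoteMeta(keyword)
}

func main() {
	fmt.Println(escapeKeyword("a.b*")) // metacharacters come back escaped
	fmt.Println(escapeKeyword("team")) // plain keywords pass through unchanged
}
```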
diff --git a/pkg/common/storage/database/mgo/group_member.go b/pkg/common/storage/database/mgo/group_member.go
new file mode 100644
index 0000000..d4e03f5
--- /dev/null
+++ b/pkg/common/storage/database/mgo/group_member.go
@@ -0,0 +1,282 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/log"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewGroupMember(db *mongo.Database) (database.GroupMember, error) {
+ coll := db.Collection(database.GroupMemberName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ member, err := NewVersionLog(db.Collection(database.GroupMemberVersionName))
+ if err != nil {
+ return nil, err
+ }
+ join, err := NewVersionLog(db.Collection(database.GroupJoinVersionName))
+ if err != nil {
+ return nil, err
+ }
+ return &GroupMemberMgo{coll: coll, member: member, join: join}, nil
+}
+
+type GroupMemberMgo struct {
+ coll *mongo.Collection
+ member database.VersionLog
+ join database.VersionLog
+}
+
+func (g *GroupMemberMgo) memberSort() any {
+ return bson.D{{Key: "role_level", Value: -1}, {Key: "create_time", Value: 1}}
+}
+
+func (g *GroupMemberMgo) Create(ctx context.Context, groupMembers []*model.GroupMember) (err error) {
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.InsertMany(ctx, g.coll, groupMembers)
+ }, func() error {
+ gms := make(map[string][]string)
+ for _, member := range groupMembers {
+ gms[member.GroupID] = append(gms[member.GroupID], member.UserID)
+ }
+ for groupID, userIDs := range gms {
+ if err := g.member.IncrVersion(ctx, groupID, userIDs, model.VersionStateInsert); err != nil {
+ return err
+ }
+ }
+ return nil
+ }, func() error {
+ gms := make(map[string][]string)
+ for _, member := range groupMembers {
+ gms[member.UserID] = append(gms[member.UserID], member.GroupID)
+ }
+ for userID, groupIDs := range gms {
+ if err := g.join.IncrVersion(ctx, userID, groupIDs, model.VersionStateInsert); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+}
+
+func (g *GroupMemberMgo) Delete(ctx context.Context, groupID string, userIDs []string) (err error) {
+ filter := bson.M{"group_id": groupID}
+ if len(userIDs) > 0 {
+ filter["user_id"] = bson.M{"$in": userIDs}
+ }
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.DeleteMany(ctx, g.coll, filter)
+ }, func() error {
+ if len(userIDs) == 0 {
+ return g.member.Delete(ctx, groupID)
+ } else {
+ return g.member.IncrVersion(ctx, groupID, userIDs, model.VersionStateDelete)
+ }
+ }, func() error {
+ for _, userID := range userIDs {
+ if err := g.join.IncrVersion(ctx, userID, []string{groupID}, model.VersionStateDelete); err != nil {
+ return err
+ }
+ }
+ return nil
+ })
+}
+
+func (g *GroupMemberMgo) UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) error {
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID},
+ bson.M{"$set": bson.M{"role_level": roleLevel}}, true)
+ }, func() error {
+ return g.member.IncrVersion(ctx, groupID, []string{model.VersionSortChangeID, userID}, model.VersionStateUpdate)
+ })
+}
+
+func (g *GroupMemberMgo) UpdateUserRoleLevels(ctx context.Context, groupID string, firstUserID string, firstUserRoleLevel int32, secondUserID string, secondUserRoleLevel int32) error {
+ return mongoutil.IncrVersion(func() error {
+ if err := mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": firstUserID},
+ bson.M{"$set": bson.M{"role_level": firstUserRoleLevel}}, true); err != nil {
+ return err
+ }
+ if err := mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": secondUserID},
+ bson.M{"$set": bson.M{"role_level": secondUserRoleLevel}}, true); err != nil {
+ return err
+ }
+ return nil
+ }, func() error {
+ return g.member.IncrVersion(ctx, groupID, []string{model.VersionSortChangeID, firstUserID, secondUserID}, model.VersionStateUpdate)
+ })
+}
+
+func (g *GroupMemberMgo) Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error) {
+ if len(data) == 0 {
+ return nil
+ }
+ return mongoutil.IncrVersion(func() error {
+ return mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID}, bson.M{"$set": data}, true)
+ }, func() error {
+ var userIDs []string
+ if g.IsUpdateRoleLevel(data) {
+ userIDs = []string{model.VersionSortChangeID, userID}
+ } else {
+ userIDs = []string{userID}
+ }
+ return g.member.IncrVersion(ctx, groupID, userIDs, model.VersionStateUpdate)
+ })
+}
+
+func (g *GroupMemberMgo) FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error) {
+ return mongoutil.Find[string](ctx, g.coll, bson.M{"group_id": groupID}, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}).SetSort(g.memberSort()))
+}
+
+func (g *GroupMemberMgo) Find(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupMember, error) {
+ filter := bson.M{"group_id": groupID}
+ if len(userIDs) > 0 {
+ filter["user_id"] = bson.M{"$in": userIDs}
+ }
+ return mongoutil.Find[*model.GroupMember](ctx, g.coll, filter)
+}
+
+func (g *GroupMemberMgo) FindInGroup(ctx context.Context, userID string, groupIDs []string) ([]*model.GroupMember, error) {
+ filter := bson.M{"user_id": userID}
+ if len(groupIDs) > 0 {
+ filter["group_id"] = bson.M{"$in": groupIDs}
+ }
+ return mongoutil.Find[*model.GroupMember](ctx, g.coll, filter)
+}
+
+func (g *GroupMemberMgo) Take(ctx context.Context, groupID string, userID string) (groupMember *model.GroupMember, err error) {
+ return mongoutil.FindOne[*model.GroupMember](ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupMemberMgo) TakeOwner(ctx context.Context, groupID string) (groupMember *model.GroupMember, err error) {
+ return mongoutil.FindOne[*model.GroupMember](ctx, g.coll, bson.M{"group_id": groupID, "role_level": constant.GroupOwner})
+}
+
+func (g *GroupMemberMgo) FindRoleLevelUserIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return mongoutil.Find[string](ctx, g.coll, bson.M{"group_id": groupID, "role_level": roleLevel}, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (g *GroupMemberMgo) SearchMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (int64, []*model.GroupMember, error) {
+	// Search by nickname or user_id (account): $or matches either field.
+	filter := bson.M{
+		"group_id": groupID,
+		"$or": []bson.M{
+			{"nickname": bson.M{"$regex": keyword, "$options": "i"}}, // case-insensitive fuzzy match on nickname
+			{"user_id": bson.M{"$regex": keyword, "$options": "i"}},  // case-insensitive fuzzy match on user_id (account)
+ },
+ }
+ return mongoutil.FindPage[*model.GroupMember](ctx, g.coll, filter, pagination, options.Find().SetSort(g.memberSort()))
+}
+
+// SearchMemberByFields searches group members by independent fields: nickname, account (userID), and phone number.
+// nickname: the member's in-group nickname
+// userID: the member's account (user_id)
+// phone: phone number (if present on the member record, possibly stored in the Ex field)
+func (g *GroupMemberMgo) SearchMemberByFields(ctx context.Context, groupID string, nickname, userID, phone string, pagination pagination.Pagination) (int64, []*model.GroupMember, error) {
+ filter := bson.M{"group_id": groupID}
+
+	// Build the search conditions; $and requires every provided field to match.
+ conditions := []bson.M{}
+
+ if nickname != "" {
+ conditions = append(conditions, bson.M{"nickname": bson.M{"$regex": nickname, "$options": "i"}})
+ }
+
+ if userID != "" {
+ conditions = append(conditions, bson.M{"user_id": bson.M{"$regex": userID, "$options": "i"}})
+ }
+
+ if phone != "" {
+		// The phone number may live in the Ex field; match it with a regex.
+		// If Ex stores JSON, a more structured query may be needed.
+ conditions = append(conditions, bson.M{"ex": bson.M{"$regex": phone, "$options": "i"}})
+ }
+
+	// Attach the conditions to the filter, if any were provided.
+ if len(conditions) > 0 {
+ filter["$and"] = conditions
+ }
+
+ return mongoutil.FindPage[*model.GroupMember](ctx, g.coll, filter, pagination, options.Find().SetSort(g.memberSort()))
+}
+
+func (g *GroupMemberMgo) FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
+ return mongoutil.Find[string](ctx, g.coll, bson.M{"user_id": userID}, options.Find().SetProjection(bson.M{"_id": 0, "group_id": 1}).SetSort(g.memberSort()))
+}
+
+func (g *GroupMemberMgo) TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error) {
+ return mongoutil.Count(ctx, g.coll, bson.M{"group_id": groupID})
+}
+
+func (g *GroupMemberMgo) FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
+ filter := bson.M{
+ "user_id": userID,
+ "role_level": bson.M{
+ "$in": []int{constant.GroupOwner, constant.GroupAdmin},
+ },
+ }
+ return mongoutil.Find[string](ctx, g.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "group_id": 1}))
+}
+
+func (g *GroupMemberMgo) IsUpdateRoleLevel(data map[string]any) bool {
+ if len(data) == 0 {
+ return false
+ }
+ _, ok := data["role_level"]
+ return ok
+}
+
+func (g *GroupMemberMgo) JoinGroupIncrVersion(ctx context.Context, userID string, groupIDs []string, state int32) error {
+ return g.join.IncrVersion(ctx, userID, groupIDs, state)
+}
+
+func (g *GroupMemberMgo) MemberGroupIncrVersion(ctx context.Context, groupID string, userIDs []string, state int32) error {
+ return g.member.IncrVersion(ctx, groupID, userIDs, state)
+}
+
+func (g *GroupMemberMgo) FindMemberIncrVersion(ctx context.Context, groupID string, version uint, limit int) (*model.VersionLog, error) {
+ log.ZDebug(ctx, "find member incr version", "groupID", groupID, "version", version)
+ return g.member.FindChangeLog(ctx, groupID, version, limit)
+}
+
+func (g *GroupMemberMgo) BatchFindMemberIncrVersion(ctx context.Context, groupIDs []string, versions []uint, limits []int) ([]*model.VersionLog, error) {
+ log.ZDebug(ctx, "Batch find member incr version", "groupIDs", groupIDs, "versions", versions)
+ return g.member.BatchFindChangeLog(ctx, groupIDs, versions, limits)
+}
+
+func (g *GroupMemberMgo) FindJoinIncrVersion(ctx context.Context, userID string, version uint, limit int) (*model.VersionLog, error) {
+ log.ZDebug(ctx, "find join incr version", "userID", userID, "version", version)
+ return g.join.FindChangeLog(ctx, userID, version, limit)
+}
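Create in this file groups the inserted members two ways before bumping the version logs: by group (for the member log) and by user (for the join log), so each IncrVersion call covers one batch. The bucketing step, isolated as a self-contained sketch:

```go
package main

import "fmt"

type member struct{ GroupID, UserID string }

// bucket groups the second field under the first, mirroring how Create
// batches one IncrVersion call per group (or per user for the join log).
func bucket(members []member, byGroup bool) map[string][]string {
	out := make(map[string][]string)
	for _, m := range members {
		if byGroup {
			out[m.GroupID] = append(out[m.GroupID], m.UserID)
		} else {
			out[m.UserID] = append(out[m.UserID], m.GroupID)
		}
	}
	return out
}

func main() {
	ms := []member{{"g1", "u1"}, {"g1", "u2"}, {"g2", "u1"}}
	fmt.Println(len(bucket(ms, true)["g1"]))  // two members landed in g1
	fmt.Println(len(bucket(ms, false)["u1"])) // u1 joined two groups
}
```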
diff --git a/pkg/common/storage/database/mgo/group_request.go b/pkg/common/storage/database/mgo/group_request.go
new file mode 100644
index 0000000..8657230
--- /dev/null
+++ b/pkg/common/storage/database/mgo/group_request.go
@@ -0,0 +1,115 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+)
+
+func NewGroupRequestMgo(db *mongo.Database) (database.GroupRequest, error) {
+ coll := db.Collection(database.GroupRequestName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "req_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &GroupRequestMgo{coll: coll}, nil
+}
+
+type GroupRequestMgo struct {
+ coll *mongo.Collection
+}
+
+func (g *GroupRequestMgo) Create(ctx context.Context, groupRequests []*model.GroupRequest) (err error) {
+ return mongoutil.InsertMany(ctx, g.coll, groupRequests)
+}
+
+func (g *GroupRequestMgo) Delete(ctx context.Context, groupID string, userID string) (err error) {
+ return mongoutil.DeleteOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupRequestMgo) UpdateHandler(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32) (err error) {
+ return mongoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID}, bson.M{"$set": bson.M{"handle_msg": handledMsg, "handle_result": handleResult}}, true)
+}
+
+func (g *GroupRequestMgo) Take(ctx context.Context, groupID string, userID string) (groupRequest *model.GroupRequest, err error) {
+ return mongoutil.FindOne[*model.GroupRequest](ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupRequestMgo) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*model.GroupRequest, error) {
+ return mongoutil.Find[*model.GroupRequest](ctx, g.coll, bson.M{"group_id": groupID, "user_id": bson.M{"$in": userIDs}})
+}
+
+func (g *GroupRequestMgo) sort() any {
+ return bson.D{{Key: "req_time", Value: -1}}
+}
+
+func (g *GroupRequestMgo) Page(ctx context.Context, userID string, groupIDs []string, handleResults []int, pagination pagination.Pagination) (total int64, groups []*model.GroupRequest, err error) {
+ filter := bson.M{"user_id": userID}
+ if len(groupIDs) > 0 {
+ filter["group_id"] = bson.M{"$in": datautil.Distinct(groupIDs)}
+ }
+ if len(handleResults) > 0 {
+ filter["handle_result"] = bson.M{"$in": handleResults}
+ }
+ return mongoutil.FindPage[*model.GroupRequest](ctx, g.coll, filter, pagination, options.Find().SetSort(g.sort()))
+}
+
+func (g *GroupRequestMgo) PageGroup(ctx context.Context, groupIDs []string, handleResults []int, pagination pagination.Pagination) (total int64, groups []*model.GroupRequest, err error) {
+ if len(groupIDs) == 0 {
+ return 0, nil, nil
+ }
+ filter := bson.M{"group_id": bson.M{"$in": groupIDs}}
+ if len(handleResults) > 0 {
+ filter["handle_result"] = bson.M{"$in": handleResults}
+ }
+ return mongoutil.FindPage[*model.GroupRequest](ctx, g.coll, filter, pagination, options.Find().SetSort(g.sort()))
+}
+
+func (g *GroupRequestMgo) GetUnhandledCount(ctx context.Context, groupIDs []string, ts int64) (int64, error) {
+ if len(groupIDs) == 0 {
+ return 0, nil
+ }
+ filter := bson.M{"group_id": bson.M{"$in": groupIDs}, "handle_result": 0}
+ if ts != 0 {
+ filter["req_time"] = bson.M{"$gt": time.UnixMilli(ts)}
+ }
+ return mongoutil.Count(ctx, g.coll, filter)
+}
diff --git a/pkg/common/storage/database/mgo/helpers.go b/pkg/common/storage/database/mgo/helpers.go
new file mode 100644
index 0000000..23e6623
--- /dev/null
+++ b/pkg/common/storage/database/mgo/helpers.go
@@ -0,0 +1,24 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+func IsNotFound(err error) bool {
+ return errs.Unwrap(err) == mongo.ErrNoDocuments
+}
diff --git a/pkg/common/storage/database/mgo/log.go b/pkg/common/storage/database/mgo/log.go
new file mode 100644
index 0000000..e4c359c
--- /dev/null
+++ b/pkg/common/storage/database/mgo/log.go
@@ -0,0 +1,85 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewLogMongo(db *mongo.Database) (database.Log, error) {
+ coll := db.Collection(database.LogName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "log_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ },
+ },
+ {
+ Keys: bson.D{
+ {Key: "create_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &LogMgo{coll: coll}, nil
+}
+
+type LogMgo struct {
+ coll *mongo.Collection
+}
+
+func (l *LogMgo) Create(ctx context.Context, log []*model.Log) error {
+ return mongoutil.InsertMany(ctx, l.coll, log)
+}
+
+func (l *LogMgo) Search(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*model.Log, error) {
+ filter := bson.M{"create_time": bson.M{"$gte": start, "$lte": end}}
+ if keyword != "" {
+ filter["user_id"] = bson.M{"$regex": keyword}
+ }
+ return mongoutil.FindPage[*model.Log](ctx, l.coll, filter, pagination, options.Find().SetSort(bson.M{"create_time": -1}))
+}
+
+func (l *LogMgo) Delete(ctx context.Context, logIDs []string, userID string) error {
+	if userID == "" {
+		return mongoutil.DeleteMany(ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}})
+	}
+	return mongoutil.DeleteMany(ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}, "user_id": userID})
+}
+
+func (l *LogMgo) Get(ctx context.Context, logIDs []string, userID string) ([]*model.Log, error) {
+ if userID == "" {
+ return mongoutil.Find[*model.Log](ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}})
+ }
+ return mongoutil.Find[*model.Log](ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}, "user_id": userID})
+}
diff --git a/pkg/common/storage/database/mgo/meeting.go b/pkg/common/storage/database/mgo/meeting.go
new file mode 100644
index 0000000..e6be3de
--- /dev/null
+++ b/pkg/common/storage/database/mgo/meeting.go
@@ -0,0 +1,183 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// MeetingMgo implements Meeting using MongoDB as the storage backend.
+type MeetingMgo struct {
+ coll *mongo.Collection
+}
+
+// NewMeetingMongo creates a new instance of MeetingMgo with the provided MongoDB database.
+func NewMeetingMongo(db *mongo.Database) (database.Meeting, error) {
+ coll := db.Collection(database.MeetingName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "meeting_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "creator_user_id", Value: 1}},
+ },
+ {
+ Keys: bson.D{{Key: "status", Value: 1}},
+ },
+ {
+ Keys: bson.D{{Key: "scheduled_time", Value: 1}},
+ },
+ {
+ Keys: bson.D{{Key: "create_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "update_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "subject", Value: "text"}, {Key: "description", Value: "text"}},
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &MeetingMgo{coll: coll}, nil
+}
+
+// Create creates a new meeting record.
+func (m *MeetingMgo) Create(ctx context.Context, meeting *model.Meeting) error {
+ if meeting.CreateTime.IsZero() {
+ meeting.CreateTime = time.Now()
+ }
+ if meeting.UpdateTime.IsZero() {
+ meeting.UpdateTime = time.Now()
+ }
+ return mongoutil.InsertOne(ctx, m.coll, meeting)
+}
+
+// Take retrieves a meeting by meeting ID. Returns an error if not found.
+func (m *MeetingMgo) Take(ctx context.Context, meetingID string) (*model.Meeting, error) {
+ return mongoutil.FindOne[*model.Meeting](ctx, m.coll, bson.M{"meeting_id": meetingID})
+}
+
+// Update updates meeting information.
+func (m *MeetingMgo) Update(ctx context.Context, meetingID string, data map[string]any) error {
+ data["update_time"] = time.Now()
+ update := bson.M{"$set": data}
+ return mongoutil.UpdateOne(ctx, m.coll, bson.M{"meeting_id": meetingID}, update, false)
+}
+
+// UpdateStatus updates the status of a meeting.
+func (m *MeetingMgo) UpdateStatus(ctx context.Context, meetingID string, status int32) error {
+ return m.Update(ctx, meetingID, map[string]any{"status": status})
+}
+
+// Find finds meetings by meeting IDs.
+func (m *MeetingMgo) Find(ctx context.Context, meetingIDs []string) ([]*model.Meeting, error) {
+ if len(meetingIDs) == 0 {
+ return []*model.Meeting{}, nil
+ }
+ filter := bson.M{"meeting_id": bson.M{"$in": meetingIDs}}
+ return mongoutil.Find[*model.Meeting](ctx, m.coll, filter)
+}
+
+// FindByCreator finds meetings created by a specific user.
+func (m *MeetingMgo) FindByCreator(ctx context.Context, creatorUserID string, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error) {
+ filter := bson.M{"creator_user_id": creatorUserID}
+ return mongoutil.FindPage[*model.Meeting](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "scheduled_time", Value: -1}},
+ })
+}
+
+// FindAll finds all meetings with pagination.
+func (m *MeetingMgo) FindAll(ctx context.Context, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error) {
+ return mongoutil.FindPage[*model.Meeting](ctx, m.coll, bson.M{}, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "scheduled_time", Value: -1}},
+ })
+}
+
+// Search searches meetings by keyword (subject, description).
+func (m *MeetingMgo) Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error) {
+ filter := bson.M{
+ "$or": []bson.M{
+ {"subject": bson.M{"$regex": keyword, "$options": "i"}},
+ {"description": bson.M{"$regex": keyword, "$options": "i"}},
+ },
+ }
+ return mongoutil.FindPage[*model.Meeting](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "scheduled_time", Value: -1}},
+ })
+}
+
+// FindByStatus finds meetings by status.
+func (m *MeetingMgo) FindByStatus(ctx context.Context, status int32, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error) {
+ filter := bson.M{"status": status}
+ return mongoutil.FindPage[*model.Meeting](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "scheduled_time", Value: -1}},
+ })
+}
+
+// FindByScheduledTimeRange finds meetings within a scheduled time range.
+func (m *MeetingMgo) FindByScheduledTimeRange(ctx context.Context, startTime, endTime int64, pagination pagination.Pagination) (total int64, meetings []*model.Meeting, err error) {
+ filter := bson.M{
+ "scheduled_time": bson.M{
+ "$gte": time.UnixMilli(startTime),
+ "$lte": time.UnixMilli(endTime),
+ },
+ }
+ return mongoutil.FindPage[*model.Meeting](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "scheduled_time", Value: 1}},
+ })
+}
+
+// FindFinishedMeetingsBefore finds finished meetings that ended before the specified time.
+// A meeting is considered finished if its status is 3 (Finished) and its end time (scheduledTime + duration) is before beforeTime.
+// Only returns meetings with a non-empty group_id to avoid processing meetings that have already been handled.
+func (m *MeetingMgo) FindFinishedMeetingsBefore(ctx context.Context, beforeTime time.Time) ([]*model.Meeting, error) {
+	// Query meetings whose status is 3 (finished) and whose group_id is non-empty.
+	// End time = scheduled_time + duration (minutes); the $expr below checks
+	// that the computed end time is on or before beforeTime.
+ filter := bson.M{
+		"status":   3,                 // finished
+		"group_id": bson.M{"$ne": ""}, // only meetings whose group_id has not been cleared yet, to avoid reprocessing
+ "$expr": bson.M{
+ "$lte": []interface{}{
+ bson.M{
+ "$add": []interface{}{
+ "$scheduled_time",
+					bson.M{"$multiply": []interface{}{"$duration", int64(60000)}}, // duration is in minutes; $add on a BSON date treats the operand as milliseconds
+ },
+ },
+ beforeTime,
+ },
+ },
+ }
+ return mongoutil.Find[*model.Meeting](ctx, m.coll, filter)
+}
+
+// Delete deletes a meeting by meeting ID.
+func (m *MeetingMgo) Delete(ctx context.Context, meetingID string) error {
+ return mongoutil.DeleteOne(ctx, m.coll, bson.M{"meeting_id": meetingID})
+}
diff --git a/pkg/common/storage/database/mgo/meeting_checkin.go b/pkg/common/storage/database/mgo/meeting_checkin.go
new file mode 100644
index 0000000..eae33d9
--- /dev/null
+++ b/pkg/common/storage/database/mgo/meeting_checkin.go
@@ -0,0 +1,110 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// MeetingCheckInMgo implements MeetingCheckIn using MongoDB as the storage backend.
+type MeetingCheckInMgo struct {
+ coll *mongo.Collection
+}
+
+// NewMeetingCheckInMongo creates a new instance of MeetingCheckInMgo with the provided MongoDB database.
+func NewMeetingCheckInMongo(db *mongo.Database) (database.MeetingCheckIn, error) {
+ coll := db.Collection(database.MeetingCheckInName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "check_in_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "meeting_id", Value: 1}, {Key: "user_id", Value: 1}},
+			Options: options.Index().SetUnique(true), // a user can check in to a given meeting only once
+ },
+ {
+ Keys: bson.D{{Key: "meeting_id", Value: 1}, {Key: "check_in_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "user_id", Value: 1}, {Key: "check_in_time", Value: -1}},
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &MeetingCheckInMgo{coll: coll}, nil
+}
+
+// Create creates a new meeting check-in record.
+func (m *MeetingCheckInMgo) Create(ctx context.Context, checkIn *model.MeetingCheckIn) error {
+ if checkIn.CreateTime.IsZero() {
+ checkIn.CreateTime = time.Now()
+ }
+ if checkIn.CheckInTime.IsZero() {
+ checkIn.CheckInTime = time.Now()
+ }
+ return mongoutil.InsertOne(ctx, m.coll, checkIn)
+}
+
+// Take retrieves a check-in by check-in ID. Returns an error if not found.
+func (m *MeetingCheckInMgo) Take(ctx context.Context, checkInID string) (*model.MeetingCheckIn, error) {
+ return mongoutil.FindOne[*model.MeetingCheckIn](ctx, m.coll, bson.M{"check_in_id": checkInID})
+}
+
+// FindByMeetingID finds all check-ins for a meeting with pagination.
+func (m *MeetingCheckInMgo) FindByMeetingID(ctx context.Context, meetingID string, pagination pagination.Pagination) (total int64, checkIns []*model.MeetingCheckIn, err error) {
+ filter := bson.M{"meeting_id": meetingID}
+ return mongoutil.FindPage[*model.MeetingCheckIn](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "check_in_time", Value: -1}},
+ })
+}
+
+// FindByUserAndMeetingID finds if a user has checked in for a specific meeting.
+func (m *MeetingCheckInMgo) FindByUserAndMeetingID(ctx context.Context, userID, meetingID string) (*model.MeetingCheckIn, error) {
+ return mongoutil.FindOne[*model.MeetingCheckIn](ctx, m.coll, bson.M{
+ "user_id": userID,
+ "meeting_id": meetingID,
+ })
+}
+
+// CountByMeetingID counts the number of check-ins for a meeting.
+func (m *MeetingCheckInMgo) CountByMeetingID(ctx context.Context, meetingID string) (int64, error) {
+ return mongoutil.Count(ctx, m.coll, bson.M{"meeting_id": meetingID})
+}
+
+// FindByUser finds all check-ins by a user with pagination.
+func (m *MeetingCheckInMgo) FindByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, checkIns []*model.MeetingCheckIn, err error) {
+ filter := bson.M{"user_id": userID}
+ return mongoutil.FindPage[*model.MeetingCheckIn](ctx, m.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "check_in_time", Value: -1}},
+ })
+}
+
+// Delete deletes a check-in by check-in ID.
+func (m *MeetingCheckInMgo) Delete(ctx context.Context, checkInID string) error {
+ return mongoutil.DeleteOne(ctx, m.coll, bson.M{"check_in_id": checkInID})
+}
+
diff --git a/pkg/common/storage/database/mgo/msg.go b/pkg/common/storage/database/mgo/msg.go
new file mode 100644
index 0000000..be6a328
--- /dev/null
+++ b/pkg/common/storage/database/mgo/msg.go
@@ -0,0 +1,1501 @@
+package mgo
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+ "regexp"
+ "strings"
+ "time"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+)
+
+func NewMsgMongo(db *mongo.Database) (database.Msg, error) {
+ coll := db.Collection(new(model.MsgDocModel).TableName())
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "doc_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &MsgMgo{coll: coll}, nil
+}
+
+type MsgMgo struct {
+ coll *mongo.Collection
+ model model.MsgDocModel
+}
+
+func (m *MsgMgo) Create(ctx context.Context, msg *model.MsgDocModel) error {
+ return mongoutil.InsertMany(ctx, m.coll, []*model.MsgDocModel{msg})
+}
+
+func (m *MsgMgo) UpdateMsg(ctx context.Context, docID string, index int64, key string, value any) (*mongo.UpdateResult, error) {
+ var field string
+ if key == "" {
+ field = fmt.Sprintf("msgs.%d", index)
+ } else {
+ field = fmt.Sprintf("msgs.%d.%s", index, key)
+ }
+ filter := bson.M{"doc_id": docID}
+ update := bson.M{"$set": bson.M{field: value}}
+ return mongoutil.UpdateOneResult(ctx, m.coll, filter, update)
+}
+
+func (m *MsgMgo) PushUnique(ctx context.Context, docID string, index int64, key string, value any) (*mongo.UpdateResult, error) {
+ var field string
+ if key == "" {
+ field = fmt.Sprintf("msgs.%d", index)
+ } else {
+ field = fmt.Sprintf("msgs.%d.%s", index, key)
+ }
+ filter := bson.M{"doc_id": docID}
+ update := bson.M{
+ "$addToSet": bson.M{
+ field: bson.M{"$each": value},
+ },
+ }
+ return mongoutil.UpdateOneResult(ctx, m.coll, filter, update)
+}
+
+func (m *MsgMgo) FindOneByDocID(ctx context.Context, docID string) (*model.MsgDocModel, error) {
+ return mongoutil.FindOne[*model.MsgDocModel](ctx, m.coll, bson.M{"doc_id": docID})
+}
+
+func (m *MsgMgo) GetMsgBySeqIndexIn1Doc(ctx context.Context, userID, docID string, seqs []int64) ([]*model.MsgInfoModel, error) {
+ msgs, err := m.getMsgBySeqIndexIn1Doc(ctx, userID, docID, seqs)
+ if err != nil {
+ return nil, err
+ }
+ if len(msgs) == len(seqs) {
+ return msgs, nil
+ }
+ tmp := make(map[int64]*model.MsgInfoModel)
+ for i, val := range msgs {
+ tmp[val.Msg.Seq] = msgs[i]
+ }
+ res := make([]*model.MsgInfoModel, 0, len(seqs))
+ for _, seq := range seqs {
+ if val, ok := tmp[seq]; ok {
+ res = append(res, val)
+ } else {
+ res = append(res, &model.MsgInfoModel{Msg: &model.MsgDataModel{Seq: seq}})
+ }
+ }
+ return res, nil
+}
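
The back-fill step above returns one entry per requested seq, inserting a placeholder for seqs the document did not contain. A minimal standalone sketch of that logic (the `fillMissingSeqs` helper and its string values are illustrative only, not part of the storage layer):

```go
package main

import "fmt"

// fillMissingSeqs mirrors the back-fill in GetMsgBySeqIndexIn1Doc: every
// requested seq produces exactly one entry, and seqs absent from the fetched
// document are filled with a placeholder instead of being dropped.
func fillMissingSeqs(found map[int64]string, seqs []int64) []string {
	res := make([]string, 0, len(seqs))
	for _, seq := range seqs {
		if v, ok := found[seq]; ok {
			res = append(res, v)
		} else {
			res = append(res, fmt.Sprintf("placeholder(seq=%d)", seq))
		}
	}
	return res
}

func main() {
	fmt.Println(fillMissingSeqs(map[int64]string{1: "a", 3: "c"}, []int64{1, 2, 3}))
	// → [a placeholder(seq=2) c]
}
```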
+
+func (m *MsgMgo) getMsgBySeqIndexIn1Doc(ctx context.Context, userID, docID string, seqs []int64) ([]*model.MsgInfoModel, error) {
+ indexes := make([]int64, 0, len(seqs))
+ for _, seq := range seqs {
+ indexes = append(indexes, m.model.GetMsgIndex(seq))
+ }
+ pipeline := mongo.Pipeline{
+ bson.D{{Key: "$match", Value: bson.D{
+ {Key: "doc_id", Value: docID},
+ }}},
+ bson.D{{Key: "$project", Value: bson.D{
+ {Key: "_id", Value: 0},
+ {Key: "doc_id", Value: 1},
+ {Key: "msgs", Value: bson.D{
+ {Key: "$map", Value: bson.D{
+ {Key: "input", Value: indexes},
+ {Key: "as", Value: "index"},
+ {Key: "in", Value: bson.D{
+ {Key: "$arrayElemAt", Value: bson.A{"$msgs", "$$index"}},
+ }},
+ }},
+ }},
+ }}},
+ }
+ msgDocModel, err := mongoutil.Aggregate[*model.MsgDocModel](ctx, m.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ if len(msgDocModel) == 0 {
+ return nil, errs.Wrap(mongo.ErrNoDocuments)
+ }
+ msgs := make([]*model.MsgInfoModel, 0, len(msgDocModel[0].Msg))
+ for i := range msgDocModel[0].Msg {
+ msg := msgDocModel[0].Msg[i]
+ if msg == nil || msg.Msg == nil {
+ continue
+ }
+ if datautil.Contain(userID, msg.DelList...) {
+ msg.Msg.Content = ""
+ msg.Msg.Status = constant.MsgDeleted
+ }
+ if msg.Revoke != nil {
+ revokeContent := sdkws.MessageRevokedContent{
+ RevokerID: msg.Revoke.UserID,
+ RevokerRole: msg.Revoke.Role,
+ ClientMsgID: msg.Msg.ClientMsgID,
+ RevokerNickname: msg.Revoke.Nickname,
+ RevokeTime: msg.Revoke.Time,
+ SourceMessageSendTime: msg.Msg.SendTime,
+ SourceMessageSendID: msg.Msg.SendID,
+ SourceMessageSenderNickname: msg.Msg.SenderNickname,
+ SessionType: msg.Msg.SessionType,
+ Seq: msg.Msg.Seq,
+ Ex: msg.Msg.Ex,
+ }
+ data, err := jsonutil.JsonMarshal(&revokeContent)
+ if err != nil {
+ return nil, errs.WrapMsg(err, fmt.Sprintf("docID is %s, seqs is %v", docID, seqs))
+ }
+ elem := sdkws.NotificationElem{
+ Detail: string(data),
+ }
+ content, err := jsonutil.JsonMarshal(&elem)
+ if err != nil {
+ return nil, errs.WrapMsg(err, fmt.Sprintf("docID is %s, seqs is %v", docID, seqs))
+ }
+ msg.Msg.ContentType = constant.MsgRevokeNotification
+ msg.Msg.Content = string(content)
+ }
+ msgs = append(msgs, msg)
+ }
+ return msgs, nil
+}
+
+func (m *MsgMgo) GetNewestMsg(ctx context.Context, conversationID string) (*model.MsgInfoModel, error) {
+ for skip := int64(0); ; skip++ {
+ msgDocModel, err := m.GetMsgDocModelByIndex(ctx, conversationID, skip, -1)
+ if err != nil {
+ return nil, err
+ }
+ for i := len(msgDocModel.Msg) - 1; i >= 0; i-- {
+ if msgDocModel.Msg[i].Msg != nil {
+ return msgDocModel.Msg[i], nil
+ }
+ }
+ }
+}
+
+func (m *MsgMgo) GetOldestMsg(ctx context.Context, conversationID string) (*model.MsgInfoModel, error) {
+ for skip := int64(0); ; skip++ {
+ msgDocModel, err := m.GetMsgDocModelByIndex(ctx, conversationID, skip, 1)
+ if err != nil {
+ return nil, err
+ }
+ for i, v := range msgDocModel.Msg {
+ if v.Msg != nil {
+ return msgDocModel.Msg[i], nil
+ }
+ }
+ }
+}
+
+func (m *MsgMgo) GetMsgDocModelByIndex(ctx context.Context, conversationID string, index, sort int64) (*model.MsgDocModel, error) {
+ if sort != 1 && sort != -1 {
+ return nil, errs.ErrArgs.WrapMsg("mongo sort must be 1 or -1")
+ }
+ opt := options.Find().SetSkip(index).SetSort(bson.M{"_id": sort}).SetLimit(1)
+ filter := bson.M{"doc_id": primitive.Regex{Pattern: fmt.Sprintf("^%s:", conversationID)}}
+ msgs, err := mongoutil.Find[*model.MsgDocModel](ctx, m.coll, filter, opt)
+ if err != nil {
+ return nil, err
+ }
+ if len(msgs) > 0 {
+ return msgs[0], nil
+ }
+ return nil, errs.Wrap(model.ErrMsgListNotExist)
+}
+
+func (m *MsgMgo) DeleteMsgsInOneDocByIndex(ctx context.Context, docID string, indexes []int) error {
+ update := bson.M{
+ "$set": bson.M{},
+ }
+ for _, index := range indexes {
+ update["$set"].(bson.M)[fmt.Sprintf("msgs.%d", index)] = bson.M{
+ "msg": nil,
+ }
+ }
+ _, err := mongoutil.UpdateMany(ctx, m.coll, bson.M{"doc_id": docID}, update)
+ return err
+}
+
+func (m *MsgMgo) MarkSingleChatMsgsAsRead(ctx context.Context, userID string, docID string, indexes []int64) error {
+ var updates []mongo.WriteModel
+ for _, index := range indexes {
+ filter := bson.M{
+ "doc_id": docID,
+ fmt.Sprintf("msgs.%d.msg.send_id", index): bson.M{
+ "$ne": userID,
+ },
+ }
+ update := bson.M{
+ "$set": bson.M{
+ fmt.Sprintf("msgs.%d.is_read", index): true,
+ },
+ }
+ updateModel := mongo.NewUpdateManyModel().
+ SetFilter(filter).
+ SetUpdate(update)
+ updates = append(updates, updateModel)
+ }
+ if _, err := m.coll.BulkWrite(ctx, updates); err != nil {
+ return errs.WrapMsg(err, fmt.Sprintf("docID is %s, indexes is %v", docID, indexes))
+ }
+ return nil
+}
+
+type searchMessageIndex struct {
+ ID primitive.ObjectID `bson:"_id"`
+ Index []int64 `bson:"index"`
+}
+
+func (m *MsgMgo) searchMessageIndex(ctx context.Context, filter any, nextID primitive.ObjectID, limit int) ([]searchMessageIndex, error) {
+ var pipeline bson.A
+ if !nextID.IsZero() {
+ pipeline = append(pipeline, bson.M{"$match": bson.M{"_id": bson.M{"$gt": nextID}}})
+ }
+ coarseFilter := bson.M{
+ "$or": bson.A{
+ bson.M{
+ "doc_id": primitive.Regex{Pattern: "^sg_"},
+ },
+ bson.M{
+ "doc_id": primitive.Regex{Pattern: "^si_"},
+ },
+ },
+ }
+ pipeline = append(pipeline,
+ bson.M{"$sort": bson.M{"_id": 1}},
+ bson.M{"$match": coarseFilter},
+ bson.M{"$match": filter},
+ bson.M{"$limit": limit},
+ bson.M{
+ "$project": bson.M{
+ "_id": 1,
+ "msgs": bson.M{
+ "$map": bson.M{
+ "input": "$msgs",
+ "as": "msg",
+ "in": bson.M{
+ "$mergeObjects": bson.A{
+ "$$msg",
+ bson.M{
+ "_search_temp_index": bson.M{
+ "$indexOfArray": bson.A{
+ "$msgs", "$$msg",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{"$unwind": "$msgs"},
+ bson.M{"$match": filter},
+ bson.M{
+ "$project": bson.M{
+ "_id": 1,
+ "msgs._search_temp_index": 1,
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$_id",
+ "index": bson.M{"$push": "$msgs._search_temp_index"},
+ },
+ },
+ bson.M{"$sort": bson.M{"_id": 1}},
+ )
+ return mongoutil.Aggregate[searchMessageIndex](ctx, m.coll, pipeline)
+}
+
+func (m *MsgMgo) searchMessage(ctx context.Context, req *msg.SearchMessageReq) (int64, []searchMessageIndex, error) {
+ filter := bson.M{
+ "msgs.msg": bson.M{
+ "$exists": true,
+ "$type": "object",
+ },
+ }
+ if req.RecvID != "" {
+ filter["$or"] = bson.A{
+ bson.M{"msgs.msg.recv_id": req.RecvID},
+ bson.M{"msgs.msg.group_id": req.RecvID},
+ }
+ }
+ if req.SendID != "" {
+ filter["msgs.msg.send_id"] = req.SendID
+ }
+ if req.ContentType != 0 {
+ filter["msgs.msg.content_type"] = req.ContentType
+ }
+ if req.SessionType != 0 {
+ filter["msgs.msg.session_type"] = req.SessionType
+ }
+	// Date-range search: prefer StartTime/EndTime; otherwise check whether SendTime carries a range format
+ var timeFilter bson.M
+ reqValue := reflect.ValueOf(req).Elem()
+ reqType := reqValue.Type()
+
+	// Check whether the request has StartTime and EndTime fields
+ var startTimeField, endTimeField reflect.Value
+ var hasStartTime, hasEndTime bool
+
+ for i := 0; i < reqType.NumField(); i++ {
+ field := reqType.Field(i)
+ if field.Name == "StartTime" {
+ startTimeField = reqValue.Field(i)
+ hasStartTime = true
+ }
+ if field.Name == "EndTime" {
+ endTimeField = reqValue.Field(i)
+ hasEndTime = true
+ }
+ }
+
+	// If StartTime and EndTime fields exist, use the date-range search
+ if hasStartTime && hasEndTime {
+ startTimeStr := startTimeField.String()
+ endTimeStr := endTimeField.String()
+
+ if startTimeStr != "" || endTimeStr != "" {
+ timeFilter = bson.M{}
+
+ if startTimeStr != "" {
+ startTime, err := time.Parse(time.DateOnly, startTimeStr)
+ if err != nil {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid startTime", "req", startTimeStr, "format", time.DateOnly, "cause", err.Error())
+ }
+ timeFilter["$gte"] = startTime.UnixMilli()
+ }
+
+ if endTimeStr != "" {
+ endTime, err := time.Parse(time.DateOnly, endTimeStr)
+ if err != nil {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid endTime", "req", endTimeStr, "format", time.DateOnly, "cause", err.Error())
+ }
+			// The end date is inclusive, so add one day and compare with strict less-than
+ timeFilter["$lt"] = endTime.Add(time.Hour * 24).UnixMilli()
+ }
+
+ if len(timeFilter) > 0 {
+ filter["msgs.msg.send_time"] = timeFilter
+ }
+ }
+ } else if req.SendTime != "" {
+		// Range support: if SendTime contains the "~" separator, parse it as a date range.
+		// Format 1: "2025-12-17" (single date, backward compatible)
+		// Format 2: "2025-12-17~2025-12-20" (date range)
+ if strings.Contains(req.SendTime, "~") {
+ parts := strings.Split(req.SendTime, "~")
+ if len(parts) != 2 {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid sendTime format for date range", "req", req.SendTime, "expected", "YYYY-MM-DD~YYYY-MM-DD")
+ }
+
+ timeFilter = bson.M{}
+
+ startTimeStr := strings.TrimSpace(parts[0])
+ if startTimeStr != "" {
+ startTime, err := time.Parse(time.DateOnly, startTimeStr)
+ if err != nil {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid startTime in sendTime", "req", startTimeStr, "format", time.DateOnly, "cause", err.Error())
+ }
+ timeFilter["$gte"] = startTime.UnixMilli()
+ }
+
+ endTimeStr := strings.TrimSpace(parts[1])
+ if endTimeStr != "" {
+ endTime, err := time.Parse(time.DateOnly, endTimeStr)
+ if err != nil {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid endTime in sendTime", "req", endTimeStr, "format", time.DateOnly, "cause", err.Error())
+ }
+				// The end date is inclusive, so add one day and compare with strict less-than
+ timeFilter["$lt"] = endTime.Add(time.Hour * 24).UnixMilli()
+ }
+
+ if len(timeFilter) > 0 {
+ filter["msgs.msg.send_time"] = timeFilter
+ }
+ } else {
+			// Backward compatible: single-date search
+ sendTime, err := time.Parse(time.DateOnly, req.SendTime)
+ if err != nil {
+ return 0, nil, errs.ErrArgs.WrapMsg("invalid sendTime", "req", req.SendTime, "format", time.DateOnly, "cause", err.Error())
+ }
+ filter["msgs.msg.send_time"] = bson.M{
+ "$gte": sendTime.UnixMilli(),
+ "$lt": sendTime.Add(time.Hour * 24).UnixMilli(),
+ }
+ }
+ }
+
+	var (
+		nextID primitive.ObjectID
+		count  int
+		skip   = int((req.Pagination.GetPageNumber() - 1) * req.Pagination.GetShowNumber())
+	)
+ const maxDoc = 50
+ data := make([]searchMessageIndex, 0, req.Pagination.GetShowNumber())
+ push := cap(data)
+ for i := 0; ; i++ {
+ res, err := m.searchMessageIndex(ctx, filter, nextID, maxDoc)
+ if err != nil {
+ return 0, nil, err
+ }
+ if len(res) > 0 {
+ nextID = res[len(res)-1].ID
+ }
+ for _, r := range res {
+ var dataIndex []int64
+ for _, index := range r.Index {
+ if push > 0 && count >= skip {
+ dataIndex = append(dataIndex, index)
+ push--
+ }
+ count++
+ }
+ if len(dataIndex) > 0 {
+ data = append(data, searchMessageIndex{
+ ID: r.ID,
+ Index: dataIndex,
+ })
+ }
+ }
+ if push <= 0 {
+ push--
+ }
+ if len(res) < maxDoc || push < -10 {
+ return int64(count), data, nil
+ }
+ }
+}
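
The `"~"`-separated range above becomes a half-open `$gte`/`$lt` filter in milliseconds, with the end date made inclusive by adding one day. The same conversion can be checked in isolation; `parseDateRange` is an illustrative helper under that assumption, not an exported API of this package:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseDateRange converts "YYYY-MM-DD~YYYY-MM-DD" into the half-open
// [startMilli, endMilli) range that the $gte/$lt filter encodes.
// Either side may be empty; that bound then stays at zero (unbounded).
func parseDateRange(s string) (startMilli, endMilli int64, err error) {
	parts := strings.Split(s, "~")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("expected YYYY-MM-DD~YYYY-MM-DD, got %q", s)
	}
	if p := strings.TrimSpace(parts[0]); p != "" {
		t, err := time.Parse(time.DateOnly, p)
		if err != nil {
			return 0, 0, err
		}
		startMilli = t.UnixMilli()
	}
	if p := strings.TrimSpace(parts[1]); p != "" {
		t, err := time.Parse(time.DateOnly, p)
		if err != nil {
			return 0, 0, err
		}
		// The end date is inclusive, so the bound is the start of the next day.
		endMilli = t.Add(24 * time.Hour).UnixMilli()
	}
	return startMilli, endMilli, nil
}

func main() {
	start, end, err := parseDateRange("2025-12-17~2025-12-20")
	fmt.Println(start, end, err) // bounds span four full days in milliseconds
}
```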
+
+func (m *MsgMgo) SearchMessage(ctx context.Context, req *msg.SearchMessageReq) (int64, []*model.MsgInfoModel, error) {
+ count, data, err := m.searchMessage(ctx, req)
+ if err != nil {
+ return 0, nil, err
+ }
+ var msgs []*model.MsgInfoModel
+ if len(data) > 0 {
+ var n int
+ for _, d := range data {
+ n += len(d.Index)
+ }
+ msgs = make([]*model.MsgInfoModel, 0, n)
+ }
+ for _, val := range data {
+ res, err := mongoutil.FindOne[*model.MsgDocModel](ctx, m.coll, bson.M{"_id": val.ID})
+ if err != nil {
+ return 0, nil, err
+ }
+ for _, i := range val.Index {
+ if i >= int64(len(res.Msg)) {
+ continue
+ }
+ msgs = append(msgs, res.Msg[i])
+ }
+ }
+ return count, msgs, nil
+}
+
+// buildUserMsgFilter builds the filter shared by the message-count and message-search queries.
+func buildUserMsgFilter(sendID string, startTime int64, endTime int64, content string) bson.M {
+ filter := bson.M{
+ "msgs.msg": bson.M{
+ "$exists": true,
+ "$type": "object",
+ },
+ }
+ if sendID != "" {
+ filter["msgs.msg.send_id"] = sendID
+ }
+ if startTime > 0 || endTime > 0 {
+ rangeCond := bson.M{}
+ if startTime > 0 {
+ rangeCond["$gte"] = startTime
+ }
+ if endTime > 0 {
+ rangeCond["$lt"] = endTime
+ }
+ filter["msgs.msg.send_time"] = rangeCond
+ }
+ if content != "" {
+ pattern := fmt.Sprintf(".*%s.*", regexp.QuoteMeta(content))
+ filter["msgs.msg.content"] = bson.M{
+ "$regex": pattern,
+ "$options": "i",
+ }
+ }
+ return filter
+}
+
+// buildUserMsgTrendFilter builds the filter for user message-trend statistics.
+func buildUserMsgTrendFilter(sendID string, sessionTypes []int32, startTime int64, endTime int64) bson.M {
+ filter := bson.M{
+ "msgs.msg": bson.M{
+ "$exists": true,
+ "$type": "object",
+ },
+ }
+ if sendID != "" {
+ filter["msgs.msg.send_id"] = sendID
+ }
+ if len(sessionTypes) > 0 {
+ filter["msgs.msg.session_type"] = bson.M{"$in": sessionTypes}
+ }
+ if startTime > 0 || endTime > 0 {
+ rangeCond := bson.M{}
+ if startTime > 0 {
+ rangeCond["$gte"] = startTime
+ }
+ if endTime > 0 {
+ rangeCond["$lt"] = endTime
+ }
+ filter["msgs.msg.send_time"] = rangeCond
+ }
+ return filter
+}
+
+// CountUserSendMessages counts messages matching the given sender, time range, and content.
+func (m *MsgMgo) CountUserSendMessages(ctx context.Context, sendID string, startTime int64, endTime int64, content string) (int64, error) {
+ filter := buildUserMsgFilter(sendID, startTime, endTime, content)
+ pipeline := bson.A{
+ bson.M{"$unwind": "$msgs"},
+ bson.M{"$match": filter},
+ bson.M{"$count": "total"},
+ }
+ type countResult struct {
+ Total int64 `bson:"total"`
+ }
+ res, err := mongoutil.Aggregate[countResult](ctx, m.coll, pipeline)
+ if err != nil {
+ return 0, err
+ }
+ if len(res) == 0 {
+ return 0, nil
+ }
+ return res[0].Total, nil
+}
+
+// CountUserSendMessagesTrend buckets a user's sent messages by time interval and returns bucket-start → count.
+func (m *MsgMgo) CountUserSendMessagesTrend(ctx context.Context, sendID string, sessionTypes []int32, startTime int64, endTime int64, intervalMillis int64) (map[int64]int64, error) {
+ if intervalMillis <= 0 {
+ return nil, errs.ErrArgs.WrapMsg("invalid interval")
+ }
+ filter := buildUserMsgTrendFilter(sendID, sessionTypes, startTime, endTime)
+ pipeline := bson.A{
+ bson.M{"$unwind": "$msgs"},
+ bson.M{"$match": filter},
+ bson.M{
+ "$addFields": bson.M{
+ "bucket_start": bson.M{
+ "$subtract": bson.A{
+ "$msgs.msg.send_time",
+ bson.M{
+ "$mod": bson.A{"$msgs.msg.send_time", intervalMillis},
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$bucket_start",
+ "count": bson.M{"$sum": 1},
+ },
+ },
+ bson.M{"$sort": bson.M{"_id": 1}},
+ }
+ type trendResult struct {
+ BucketStart int64 `bson:"_id"`
+ Count int64 `bson:"count"`
+ }
+ res, err := mongoutil.Aggregate[trendResult](ctx, m.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ result := make(map[int64]int64, len(res))
+ for _, item := range res {
+ result[item.BucketStart] = item.Count
+ }
+ return result, nil
+}
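
The `$subtract`/`$mod` expression in the pipeline floors each `send_time` to the start of its interval. That arithmetic can be sketched on its own (the `bucketStart` helper is illustrative, not part of the pipeline code):

```go
package main

import "fmt"

// bucketStart floors a millisecond timestamp to the start of its interval,
// the same arithmetic as the $subtract/$mod expression in the trend pipeline:
// bucket = t - (t mod interval).
func bucketStart(sendTimeMilli, intervalMilli int64) int64 {
	return sendTimeMilli - sendTimeMilli%intervalMilli
}

func main() {
	const hour = int64(3600 * 1000)
	fmt.Println(bucketStart(1700000123456, hour)) // → 1699999200000
}
```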
+
+// SearchUserMessages pages through message records matching the given conditions.
+func (m *MsgMgo) SearchUserMessages(ctx context.Context, sendID string, startTime int64, endTime int64, content string, pageNumber int32, showNumber int32) (int64, []*model.MsgInfoModel, error) {
+ filter := buildUserMsgFilter(sendID, startTime, endTime, content)
+ if pageNumber <= 0 {
+ pageNumber = 1
+ }
+ if showNumber <= 0 {
+ showNumber = 50
+ }
+ skip := int64(pageNumber-1) * int64(showNumber)
+ countPipeline := bson.A{
+ bson.M{"$unwind": "$msgs"},
+ bson.M{"$match": filter},
+ bson.M{"$count": "total"},
+ }
+ type countResult struct {
+ Total int64 `bson:"total"`
+ }
+ countRes, err := mongoutil.Aggregate[countResult](ctx, m.coll, countPipeline)
+ if err != nil {
+ return 0, nil, err
+ }
+ var total int64
+ if len(countRes) > 0 {
+ total = countRes[0].Total
+ }
+ dataPipeline := bson.A{
+ bson.M{"$unwind": "$msgs"},
+ bson.M{"$match": filter},
+ bson.M{"$sort": bson.M{"msgs.msg.send_time": -1}},
+ bson.M{"$skip": skip},
+ bson.M{"$limit": int64(showNumber)},
+ bson.M{"$project": bson.M{"_id": 0, "msgs": 1}},
+ }
+ type msgWrap struct {
+ Msg *model.MsgInfoModel `bson:"msgs"`
+ }
+ msgRes, err := mongoutil.Aggregate[msgWrap](ctx, m.coll, dataPipeline)
+ if err != nil {
+ return 0, nil, err
+ }
+ msgs := make([]*model.MsgInfoModel, 0, len(msgRes))
+ for _, item := range msgRes {
+ if item.Msg == nil {
+ continue
+ }
+ msgs = append(msgs, item.Msg)
+ }
+ return total, msgs, nil
+}
+
+func (m *MsgMgo) RangeUserSendCount(ctx context.Context, start time.Time, end time.Time, group bool, asc bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, users []*model.UserCount, dateCount map[string]int64, err error) {
+	var sort int
+	if asc {
+		sort = 1
+	} else {
+		sort = -1
+	}
+ type Result struct {
+ MsgCount int64 `bson:"msg_count"`
+ UserCount int64 `bson:"user_count"`
+ Users []struct {
+ UserID string `bson:"_id"`
+ Count int64 `bson:"count"`
+ } `bson:"users"`
+ Dates []struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ } `bson:"dates"`
+ }
+ or := bson.A{
+ bson.M{
+ "doc_id": bson.M{
+ "$regex": "^si_",
+ "$options": "i",
+ },
+ },
+ }
+ if group {
+ or = append(or,
+ bson.M{
+ "doc_id": bson.M{
+ "$regex": "^g_",
+ "$options": "i",
+ },
+ },
+ bson.M{
+ "doc_id": bson.M{
+ "$regex": "^sg_",
+ "$options": "i",
+ },
+ },
+ )
+ }
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "$and": bson.A{
+ bson.M{
+ "msgs.msg.send_time": bson.M{
+ "$gte": start.UnixMilli(),
+ "$lt": end.UnixMilli(),
+ },
+ },
+ bson.M{
+ "$or": or,
+ },
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "msgs": bson.M{
+ "$filter": bson.M{
+ "input": "$msgs",
+ "as": "item",
+ "cond": bson.M{
+ "$and": bson.A{
+ bson.M{
+ "$gte": bson.A{
+ "$$item.msg.send_time", start.UnixMilli(),
+ },
+ },
+ bson.M{
+ "$lt": bson.A{
+ "$$item.msg.send_time", end.UnixMilli(),
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "result": bson.M{
+ "$map": bson.M{
+ "input": "$msgs",
+ "as": "item",
+ "in": bson.M{
+ "user_id": "$$item.msg.send_id",
+ "send_date": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": bson.M{
+ "$toDate": "$$item.msg.send_time", // Millisecond timestamp
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$unwind": "$result",
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$result.send_date",
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "original": bson.M{
+ "$push": "$$ROOT",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": "$$ROOT",
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "count": 0,
+ "dates.original": 0,
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": nil,
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "dates": bson.M{
+ "$push": "$dates",
+ },
+ "original": bson.M{
+ "$push": "$original",
+ },
+ },
+ },
+ bson.M{
+ "$unwind": "$original",
+ },
+ bson.M{
+ "$unwind": "$original",
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$original.result.user_id",
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "original": bson.M{
+ "$push": "$dates",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": bson.M{
+ "$arrayElemAt": bson.A{"$original", 0},
+ },
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "original": 0,
+ },
+ },
+ bson.M{
+ "$sort": bson.M{
+ "count": sort,
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": nil,
+ "user_count": bson.M{
+ "$sum": 1,
+ },
+ "users": bson.M{
+ "$push": "$$ROOT",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": bson.M{
+ "$arrayElemAt": bson.A{"$users", 0},
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": "$dates.dates",
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "users.dates": 0,
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "msg_count": bson.M{
+ "$sum": "$users.count",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "users": bson.M{
+ "$slice": bson.A{"$users", pageNumber - 1, showNumber},
+ },
+ },
+ },
+ }
+ result, err := mongoutil.Aggregate[*Result](ctx, m.coll, pipeline, options.Aggregate().SetAllowDiskUse(true))
+ if err != nil {
+ return 0, 0, nil, nil, err
+ }
+ if len(result) == 0 {
+		return 0, 0, nil, nil, errs.Wrap(mongo.ErrNoDocuments)
+ }
+ users = make([]*model.UserCount, len(result[0].Users))
+ for i, r := range result[0].Users {
+ users[i] = &model.UserCount{
+ UserID: r.UserID,
+ Count: r.Count,
+ }
+ }
+ dateCount = make(map[string]int64)
+ for _, r := range result[0].Dates {
+ dateCount[r.Date] = r.Count
+ }
+ return result[0].MsgCount, result[0].UserCount, users, dateCount, nil
+}
+
+func (m *MsgMgo) RangeGroupSendCount(ctx context.Context, start time.Time, end time.Time, asc bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, groups []*model.GroupCount, dateCount map[string]int64, err error) {
+	var sort int
+	if asc {
+		sort = 1
+	} else {
+		sort = -1
+	}
+ type Result struct {
+ MsgCount int64 `bson:"msg_count"`
+ UserCount int64 `bson:"user_count"`
+ Groups []struct {
+ GroupID string `bson:"_id"`
+ Count int64 `bson:"count"`
+ } `bson:"groups"`
+ Dates []struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ } `bson:"dates"`
+ }
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "$and": bson.A{
+ bson.M{
+ "msgs.msg.send_time": bson.M{
+ "$gte": start.UnixMilli(),
+ "$lt": end.UnixMilli(),
+ },
+ },
+ bson.M{
+ "$or": bson.A{
+ bson.M{
+ "doc_id": bson.M{
+ "$regex": "^g_",
+ "$options": "i",
+ },
+ },
+ bson.M{
+ "doc_id": bson.M{
+ "$regex": "^sg_",
+ "$options": "i",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "msgs": bson.M{
+ "$filter": bson.M{
+ "input": "$msgs",
+ "as": "item",
+ "cond": bson.M{
+ "$and": bson.A{
+ bson.M{
+ "$gte": bson.A{
+ "$$item.msg.send_time", start.UnixMilli(),
+ },
+ },
+ bson.M{
+ "$lt": bson.A{
+ "$$item.msg.send_time", end.UnixMilli(),
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "result": bson.M{
+ "$map": bson.M{
+ "input": "$msgs",
+ "as": "item",
+ "in": bson.M{
+ "group_id": "$$item.msg.group_id",
+ "send_date": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": bson.M{
+ "$toDate": "$$item.msg.send_time", // Millisecond timestamp
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ bson.M{
+ "$unwind": "$result",
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$result.send_date",
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "original": bson.M{
+ "$push": "$$ROOT",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": "$$ROOT",
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "count": 0,
+ "dates.original": 0,
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": nil,
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "dates": bson.M{
+ "$push": "$dates",
+ },
+ "original": bson.M{
+ "$push": "$original",
+ },
+ },
+ },
+ bson.M{
+ "$unwind": "$original",
+ },
+ bson.M{
+ "$unwind": "$original",
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": "$original.result.group_id",
+ "count": bson.M{
+ "$sum": 1,
+ },
+ "original": bson.M{
+ "$push": "$dates",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": bson.M{
+ "$arrayElemAt": bson.A{"$original", 0},
+ },
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "original": 0,
+ },
+ },
+ bson.M{
+ "$sort": bson.M{
+ "count": sort,
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": nil,
+ "user_count": bson.M{
+ "$sum": 1,
+ },
+ "groups": bson.M{
+ "$push": "$$ROOT",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": bson.M{
+ "$arrayElemAt": bson.A{"$groups", 0},
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "dates": "$dates.dates",
+ },
+ },
+ bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "groups.dates": 0,
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "msg_count": bson.M{
+ "$sum": "$groups.count",
+ },
+ },
+ },
+ bson.M{
+ "$addFields": bson.M{
+ "groups": bson.M{
+ "$slice": bson.A{"$groups", pageNumber - 1, showNumber},
+ },
+ },
+ },
+ }
+ result, err := mongoutil.Aggregate[*Result](ctx, m.coll, pipeline, options.Aggregate().SetAllowDiskUse(true))
+ if err != nil {
+ return 0, 0, nil, nil, err
+ }
+ if len(result) == 0 {
+		return 0, 0, nil, nil, errs.Wrap(mongo.ErrNoDocuments)
+ }
+ groups = make([]*model.GroupCount, len(result[0].Groups))
+ for i, r := range result[0].Groups {
+ groups[i] = &model.GroupCount{
+ GroupID: r.GroupID,
+ Count: r.Count,
+ }
+ }
+ dateCount = make(map[string]int64)
+ for _, r := range result[0].Dates {
+ dateCount[r.Date] = r.Count
+ }
+ return result[0].MsgCount, result[0].UserCount, groups, dateCount, nil
+}
+
+func (m *MsgMgo) GetRandBeforeMsg(ctx context.Context, ts int64, limit int) ([]*model.MsgDocModel, error) {
+	// Match documents in which at least one message has send_time <= ts, so that
+	// documents holding expired messages are found even when they also contain newer ones
+ return mongoutil.Aggregate[*model.MsgDocModel](ctx, m.coll, []bson.M{
+ {
+ "$match": bson.M{
+ "msgs": bson.M{
+ "$elemMatch": bson.M{
+ "msg.send_time": bson.M{
+ "$lte": ts,
+ },
+ },
+ },
+ },
+ },
+ {
+ "$project": bson.M{
+ "_id": 0,
+ "doc_id": 1,
+ "msgs.msg.send_time": 1,
+ "msgs.msg.seq": 1,
+ },
+ },
+ {
+ "$sample": bson.M{
+ "size": limit,
+ },
+ },
+ })
+}
+
+func (m *MsgMgo) DeleteDoc(ctx context.Context, docID string) error {
+ return mongoutil.DeleteOne(ctx, m.coll, bson.M{"doc_id": docID})
+}
+
+func (m *MsgMgo) GetLastMessageSeqByTime(ctx context.Context, conversationID string, time int64) (int64, error) {
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "doc_id": bson.M{
+ "$regex": fmt.Sprintf("^%s", conversationID),
+ },
+ },
+ },
+ {
+ "$match": bson.M{
+ "msgs.msg.send_time": bson.M{
+ "$lte": time,
+ },
+ },
+ },
+ {
+ "$sort": bson.M{
+ "_id": -1,
+ },
+ },
+ {
+ "$limit": 1,
+ },
+ {
+ "$project": bson.M{
+ "_id": 0,
+ "doc_id": 1,
+ "msgs.msg.send_time": 1,
+ "msgs.msg.seq": 1,
+ },
+ },
+ }
+ res, err := mongoutil.Aggregate[*model.MsgDocModel](ctx, m.coll, pipeline)
+ if err != nil {
+ return 0, err
+ }
+ if len(res) == 0 {
+ return 0, nil
+ }
+ var seq int64
+ for _, v := range res[0].Msg {
+ if v.Msg == nil {
+ continue
+ }
+ if v.Msg.SendTime <= time {
+ seq = v.Msg.Seq
+ }
+ }
+ return seq, nil
+}
+
+func (m *MsgMgo) GetLastMessage(ctx context.Context, conversationID string) (*model.MsgInfoModel, error) {
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "doc_id": bson.M{
+ "$regex": fmt.Sprintf("^%s", conversationID),
+ },
+ },
+ },
+ {
+ "$match": bson.M{
+ "msgs.msg.status": bson.M{
+ "$lt": constant.MsgStatusHasDeleted,
+ },
+ },
+ },
+ {
+ "$sort": bson.M{
+ "_id": -1,
+ },
+ },
+ {
+ "$limit": 1,
+ },
+ {
+ "$project": bson.M{
+ "_id": 0,
+ "doc_id": 0,
+ },
+ },
+ {
+ "$unwind": "$msgs",
+ },
+ {
+ "$match": bson.M{
+ "msgs.msg.status": bson.M{
+ "$lt": constant.MsgStatusHasDeleted,
+ },
+ },
+ },
+ {
+ "$sort": bson.M{
+ "msgs.msg.seq": -1,
+ },
+ },
+ {
+ "$limit": 1,
+ },
+ }
+ type Result struct {
+ Msgs *model.MsgInfoModel `bson:"msgs"`
+ }
+ res, err := mongoutil.Aggregate[*Result](ctx, m.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ if len(res) == 0 {
+ return nil, errs.Wrap(mongo.ErrNoDocuments)
+ }
+ return res[0].Msgs, nil
+}
+
+func (m *MsgMgo) onlyFindDocIndex(ctx context.Context, docID string, indexes []int64) ([]*model.MsgInfoModel, error) {
+ if len(indexes) == 0 {
+ return nil, nil
+ }
+ pipeline := mongo.Pipeline{
+ bson.D{{Key: "$match", Value: bson.D{
+ {Key: "doc_id", Value: docID},
+ }}},
+ bson.D{{Key: "$project", Value: bson.D{
+ {Key: "_id", Value: 0},
+ {Key: "doc_id", Value: 1},
+ {Key: "msgs", Value: bson.D{
+ {Key: "$map", Value: bson.D{
+ {Key: "input", Value: indexes},
+ {Key: "as", Value: "index"},
+ {Key: "in", Value: bson.D{
+ {Key: "$arrayElemAt", Value: bson.A{"$msgs", "$$index"}},
+ }},
+ }},
+ }},
+ }}},
+ }
+ msgDocModel, err := mongoutil.Aggregate[*model.MsgDocModel](ctx, m.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ if len(msgDocModel) == 0 {
+ return nil, nil
+ }
+ return msgDocModel[0].Msg, nil
+}
+
+// findBeforeDocSendTime scans up to limit messages in the doc (all when limit < 0) and returns the seq and send time of the last message that has a positive send time.
+func (m *MsgMgo) findBeforeDocSendTime(ctx context.Context, docID string, limit int64) (int64, int64, error) {
+ if limit == 0 {
+ return 0, 0, nil
+ }
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "doc_id": docID,
+ },
+ },
+ {
+ "$project": bson.M{
+ "_id": 0,
+ "doc_id": 0,
+ },
+ },
+ {
+ "$unwind": "$msgs",
+ },
+ {
+ "$project": bson.M{
+ "msgs.msg.send_time": 1,
+ "msgs.msg.seq": 1,
+ },
+ },
+ }
+ if limit > 0 {
+ pipeline = append(pipeline, bson.M{"$limit": limit})
+ }
+ type Result struct {
+ Msgs *model.MsgInfoModel `bson:"msgs"`
+ }
+ res, err := mongoutil.Aggregate[Result](ctx, m.coll, pipeline)
+ if err != nil {
+ return 0, 0, err
+ }
+ for i := len(res) - 1; i >= 0; i-- {
+ v := res[i]
+ if v.Msgs != nil && v.Msgs.Msg != nil && v.Msgs.Msg.SendTime > 0 {
+ return v.Msgs.Msg.Seq, v.Msgs.Msg.SendTime, nil
+ }
+ }
+ return 0, 0, nil
+}
+
+// findBeforeSendTime walks doc indexes backward from the doc containing seq until it finds a message with a positive send time, returning that message's seq and send time.
+func (m *MsgMgo) findBeforeSendTime(ctx context.Context, conversationID string, seq int64) (int64, int64, error) {
+ first := true
+ for i := m.model.GetDocIndex(seq); i >= 0; i-- {
+ limit := int64(-1)
+ if first {
+ first = false
+ limit = m.model.GetLimitForSingleDoc(seq)
+ }
+ docID := m.model.BuildDocIDByIndex(conversationID, i)
+ msgSeq, msgSendTime, err := m.findBeforeDocSendTime(ctx, docID, limit)
+ if err != nil {
+ return 0, 0, err
+ }
+ if msgSendTime > 0 {
+ return msgSeq, msgSendTime, nil
+ }
+ }
+ return 0, 0, nil
+}
+
+// FindSeqs returns the messages for the given seqs; seqs with no stored message are backfilled as deleted placeholders stamped with the send time of the nearest preceding message.
+func (m *MsgMgo) FindSeqs(ctx context.Context, conversationID string, seqs []int64) ([]*model.MsgInfoModel, error) {
+ if len(seqs) == 0 {
+ return nil, nil
+ }
+ var abnormalSeq []int64
+ result := make([]*model.MsgInfoModel, 0, len(seqs))
+ for docID, docSeqs := range m.model.GetDocIDSeqsMap(conversationID, seqs) {
+ res, err := m.onlyFindDocIndex(ctx, docID, datautil.Slice(docSeqs, m.model.GetMsgIndex))
+ if err != nil {
+ return nil, err
+ }
+ if len(res) == 0 {
+ abnormalSeq = append(abnormalSeq, docSeqs...)
+ continue
+ }
+ for i, re := range res {
+ if re == nil || re.Msg == nil || re.Msg.SendTime == 0 {
+ abnormalSeq = append(abnormalSeq, docSeqs[i])
+ continue
+ }
+ result = append(result, res[i])
+ }
+ }
+ if len(abnormalSeq) > 0 {
+ datautil.Sort(abnormalSeq, false)
+ sendTime := make(map[int64]int64)
+ var (
+ lastSeq int64
+ lastSendTime int64
+ )
+ for _, seq := range abnormalSeq {
+ if lastSendTime > 0 && lastSeq <= seq {
+ sendTime[seq] = lastSendTime
+ continue
+ }
+ msgSeq, msgSendTime, err := m.findBeforeSendTime(ctx, conversationID, seq)
+ if err != nil {
+ return nil, err
+ }
+ if msgSendTime <= 0 {
+ break
+ }
+ sendTime[seq] = msgSendTime
+ lastSeq = msgSeq
+ lastSendTime = msgSendTime
+ }
+ for _, seq := range abnormalSeq {
+ result = append(result, &model.MsgInfoModel{
+ Msg: &model.MsgDataModel{
+ Seq: seq,
+ Status: constant.MsgStatusHasDeleted,
+ SendTime: sendTime[seq],
+ },
+ })
+ }
+ }
+ return result, nil
+}
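The backfill strategy in `FindSeqs` above can be sketched in isolation. This is a minimal, in-memory illustration (all names here — `msgInfo`, `backfill` — are hypothetical, not part of the patch): seqs with no stored message become deleted placeholders that borrow the send time of the nearest preceding stored message.

```go
package main

import (
	"fmt"
	"sort"
)

// msgInfo mirrors the fields the backfill cares about (simplified).
type msgInfo struct {
	Seq      int64
	SendTime int64
	Deleted  bool
}

// backfill returns placeholder entries for seqs that have no stored message,
// stamping each with the send time of the closest earlier stored message.
func backfill(stored map[int64]int64, missing []int64) []msgInfo {
	// Process from high to low, mirroring the descending sort in FindSeqs.
	sort.Slice(missing, func(i, j int) bool { return missing[i] > missing[j] })
	out := make([]msgInfo, 0, len(missing))
	for _, seq := range missing {
		// Linear scan backward for the nearest earlier send time
		// (a sketch; the real code queries MongoDB per doc instead).
		var sendTime int64
		for s := seq; s >= 0; s-- {
			if t, ok := stored[s]; ok {
				sendTime = t
				break
			}
		}
		out = append(out, msgInfo{Seq: seq, SendTime: sendTime, Deleted: true})
	}
	return out
}

func main() {
	stored := map[int64]int64{3: 1700000000, 7: 1700000500}
	for _, m := range backfill(stored, []int64{9, 8, 4}) {
		fmt.Printf("seq=%d sendTime=%d deleted=%v\n", m.Seq, m.SendTime, m.Deleted)
	}
}
```

The placeholder keeps pagination by seq stable on the client even after hard deletion, since every requested seq gets an entry with a plausible timestamp.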
diff --git a/pkg/common/storage/database/mgo/msg_test.go b/pkg/common/storage/database/mgo/msg_test.go
new file mode 100644
index 0000000..ae715d4
--- /dev/null
+++ b/pkg/common/storage/database/mgo/msg_test.go
@@ -0,0 +1,178 @@
+package mgo
+
+import (
+ "context"
+ "math"
+ "math/rand"
+ "strconv"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func TestName1(t *testing.T) {
+ //ctx, cancel := context.WithTimeout(context.Background(), time.Second*300)
+ //defer cancel()
+ //cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.66:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+ //
+ //v := &MsgMgo{
+ // coll: cli.Database("openim_v3").Collection("msg3"),
+ //}
+ //
+ //req := &msg.SearchMessageReq{
+ // //RecvID: "3187706596",
+ // //SendID: "7009965934",
+ // ContentType: 101,
+ // //SendTime: "2024-05-06",
+ // //SessionType: 3,
+ // Pagination: &sdkws.RequestPagination{
+ // PageNumber: 1,
+ // ShowNumber: 10,
+ // },
+ //}
+ //total, res, err := v.SearchMessage(ctx, req)
+ //if err != nil {
+ // panic(err)
+ //}
+ //
+ //for i, re := range res {
+ // t.Logf("%d => %d | %+v", i+1, re.Msg.Seq, re.Msg.Content)
+ //}
+ //
+ //t.Log(total)
+ //
+ //msg, err := NewMsgMongo(cli.Database("openim_v3"))
+ //if err != nil {
+ // panic(err)
+ //}
+ //res, err := msg.GetBeforeMsg(ctx, time.Now().UnixMilli(), []string{"1:0"}, 1000)
+ //if err != nil {
+ // panic(err)
+ //}
+ //t.Log(len(res))
+}
+
+func TestName10(t *testing.T) {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
+ defer cancel()
+ cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.48:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+
+ v := &MsgMgo{
+ coll: cli.Database("openim_v3").Collection("msg3"),
+ }
+ opt := options.Find().SetLimit(1000)
+
+ res, err := mongoutil.Find[model.MsgDocModel](ctx, v.coll, bson.M{}, opt)
+ if err != nil {
+ panic(err)
+ }
+ ctx = context.Background()
+ for i := 0; i < 100000; i++ {
+ for j := range res {
+ res[j].DocID = strconv.FormatUint(rand.Uint64(), 10) + ":0"
+ }
+ if err := mongoutil.InsertMany(ctx, v.coll, res); err != nil {
+ panic(err)
+ }
+ t.Log("====>", time.Now(), i)
+ }
+
+}
+
+func TestName3(t *testing.T) {
+ t.Log(uint64(math.MaxUint64))
+ t.Log(int64(math.MaxInt64))
+
+ t.Log(int64(math.MinInt64))
+}
+
+func TestName4(t *testing.T) {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*300)
+ defer cancel()
+ cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.135:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+
+ msg, err := NewMsgMongo(cli.Database("openim_v3"))
+ if err != nil {
+ panic(err)
+ }
+ ts := time.Now().Add(-time.Hour * 24 * 5).UnixMilli()
+ t.Log(ts)
+ res, err := msg.GetLastMessageSeqByTime(ctx, "sg_1523453548", ts)
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res)
+}
+
+func TestName5(t *testing.T) {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*300)
+ defer cancel()
+ cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.135:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+
+ tmp, err := NewMsgMongo(cli.Database("openim_v3"))
+ if err != nil {
+ panic(err)
+ }
+ msg := tmp.(*MsgMgo)
+ ts := time.Now().Add(-time.Hour * 24 * 5).UnixMilli()
+ t.Log(ts)
+ var seqs []int64
+ for i := 1; i < 256; i++ {
+ seqs = append(seqs, int64(i))
+ }
+ res, err := msg.FindSeqs(ctx, "si_4924054191_9511766539", seqs)
+ if err != nil {
+ panic(err)
+ }
+ t.Log(res)
+}
+
+//func TestName6(t *testing.T) {
+// ctx, cancel := context.WithTimeout(context.Background(), time.Second*300)
+// defer cancel()
+// cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.135:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+//
+// tmp, err := NewMsgMongo(cli.Database("openim_v3"))
+// if err != nil {
+// panic(err)
+// }
+// msg := tmp.(*MsgMgo)
+// seq, sendTime, err := msg.findBeforeSendTime(ctx, "si_4924054191_9511766539", 1144)
+// if err != nil {
+// panic(err)
+// }
+// t.Log(seq, sendTime)
+//}
+
+func TestSearchMessage(t *testing.T) {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*300)
+ defer cancel()
+ cli := Result(mongo.Connect(ctx, options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.135:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+
+ msgMongo, err := NewMsgMongo(cli.Database("openim_v3"))
+ if err != nil {
+ panic(err)
+ }
+ ts := time.Now().Add(-time.Hour * 24 * 5).UnixMilli()
+ t.Log(ts)
+ req := &msg.SearchMessageReq{
+ //SendID: "yjz",
+ //RecvID: "aibot",
+ Pagination: &sdkws.RequestPagination{
+ PageNumber: 1,
+ ShowNumber: 20,
+ },
+ }
+ count, resp, err := msgMongo.SearchMessage(ctx, req)
+ if err != nil {
+ panic(err)
+ }
+ t.Log(resp, count)
+}
diff --git a/pkg/common/storage/database/mgo/object.go b/pkg/common/storage/database/mgo/object.go
new file mode 100644
index 0000000..30943b1
--- /dev/null
+++ b/pkg/common/storage/database/mgo/object.go
@@ -0,0 +1,126 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewS3Mongo(db *mongo.Database) (database.ObjectInfo, error) {
+ coll := db.Collection(database.ObjectName)
+
+ // Create index for name
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "name", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ // Create index for create_time
+ _, err = coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "create_time", Value: 1},
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ // Create index for key
+ _, err = coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "key", Value: 1},
+ },
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ return &S3Mongo{coll: coll}, nil
+}
+
+type S3Mongo struct {
+ coll *mongo.Collection
+}
+
+func (o *S3Mongo) SetObject(ctx context.Context, obj *model.Object) error {
+ filter := bson.M{"name": obj.Name, "engine": obj.Engine}
+ update := bson.M{
+ "name": obj.Name,
+ "engine": obj.Engine,
+ "key": obj.Key,
+ "size": obj.Size,
+ "content_type": obj.ContentType,
+ "group": obj.Group,
+ "create_time": obj.CreateTime,
+ }
+ return mongoutil.UpdateOne(ctx, o.coll, filter, bson.M{"$set": update}, false, options.Update().SetUpsert(true))
+}
+
+func (o *S3Mongo) Take(ctx context.Context, engine string, name string) (*model.Object, error) {
+ if engine == "" {
+ return mongoutil.FindOne[*model.Object](ctx, o.coll, bson.M{"name": name})
+ }
+ return mongoutil.FindOne[*model.Object](ctx, o.coll, bson.M{"name": name, "engine": engine})
+}
+
+func (o *S3Mongo) Delete(ctx context.Context, engine string, name []string) error {
+ if len(name) == 0 {
+ return nil
+ }
+	return mongoutil.DeleteMany(ctx, o.coll, bson.M{"engine": engine, "name": bson.M{"$in": name}})
+}
+
+func (o *S3Mongo) FindExpirationObject(ctx context.Context, engine string, expiration time.Time, needDelType []string, count int64) ([]*model.Object, error) {
+ opt := options.Find()
+ if count > 0 {
+ opt.SetLimit(count)
+ }
+ return mongoutil.Find[*model.Object](ctx, o.coll, bson.M{
+ "engine": engine,
+ "create_time": bson.M{"$lt": expiration},
+ "group": bson.M{"$in": needDelType},
+ }, opt)
+}
+
+func (o *S3Mongo) GetKeyCount(ctx context.Context, engine string, key string) (int64, error) {
+ return mongoutil.Count(ctx, o.coll, bson.M{"engine": engine, "key": key})
+}
+
+func (o *S3Mongo) GetEngineCount(ctx context.Context, engine string) (int64, error) {
+ return mongoutil.Count(ctx, o.coll, bson.M{"engine": engine})
+}
+
+func (o *S3Mongo) GetEngineInfo(ctx context.Context, engine string, limit int, skip int) ([]*model.Object, error) {
+ return mongoutil.Find[*model.Object](ctx, o.coll, bson.M{"engine": engine}, options.Find().SetLimit(int64(limit)).SetSkip(int64(skip)))
+}
+
+func (o *S3Mongo) UpdateEngine(ctx context.Context, oldEngine, oldName string, newEngine string) error {
+ return mongoutil.UpdateOne(ctx, o.coll, bson.M{"engine": oldEngine, "name": oldName}, bson.M{"$set": bson.M{"engine": newEngine}}, false)
+}
diff --git a/pkg/common/storage/database/mgo/redpacket.go b/pkg/common/storage/database/mgo/redpacket.go
new file mode 100644
index 0000000..fd47ee6
--- /dev/null
+++ b/pkg/common/storage/database/mgo/redpacket.go
@@ -0,0 +1,234 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// RedPacketMgo implements RedPacket using MongoDB as the storage backend.
+type RedPacketMgo struct {
+ coll *mongo.Collection
+}
+
+// NewRedPacketMongo creates a new instance of RedPacketMgo with the provided MongoDB database.
+func NewRedPacketMongo(db *mongo.Database) (database.RedPacket, error) {
+ coll := db.Collection(database.RedPacketName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "red_packet_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "send_user_id", Value: 1}, {Key: "create_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "group_id", Value: 1}, {Key: "create_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "expire_time", Value: 1}},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &RedPacketMgo{coll: coll}, nil
+}
+
+// Create creates a new red packet record.
+func (r *RedPacketMgo) Create(ctx context.Context, redPacket *model.RedPacket) error {
+ if redPacket.CreateTime.IsZero() {
+ redPacket.CreateTime = time.Now()
+ }
+ return mongoutil.InsertOne(ctx, r.coll, redPacket)
+}
+
+// Take retrieves a red packet by ID. Returns an error if not found.
+func (r *RedPacketMgo) Take(ctx context.Context, redPacketID string) (*model.RedPacket, error) {
+ return mongoutil.FindOne[*model.RedPacket](ctx, r.coll, bson.M{"red_packet_id": redPacketID})
+}
+
+// UpdateStatus updates the status of a red packet.
+func (r *RedPacketMgo) UpdateStatus(ctx context.Context, redPacketID string, status int32) error {
+ return mongoutil.UpdateOne(ctx, r.coll, bson.M{"red_packet_id": redPacketID}, bson.M{"$set": bson.M{"status": status}}, false)
+}
+
+// UpdateRemain updates the remain amount and count of a red packet.
+func (r *RedPacketMgo) UpdateRemain(ctx context.Context, redPacketID string, remainAmount int64, remainCount int32) error {
+ update := bson.M{
+ "$set": bson.M{
+ "remain_amount": remainAmount,
+ "remain_count": remainCount,
+ },
+ }
+ // If remain count is 0, update status to finished
+ if remainCount == 0 {
+ update["$set"].(bson.M)["status"] = model.RedPacketStatusFinished
+ }
+ return mongoutil.UpdateOne(ctx, r.coll, bson.M{"red_packet_id": redPacketID}, update, false)
+}
+
+// DecreaseRemainAtomic atomically decrements a red packet's remaining count and amount, guarding against concurrent grabs.
+// The update applies only while remain_count > 0 and the status is Active.
+func (r *RedPacketMgo) DecreaseRemainAtomic(ctx context.Context, redPacketID string, amount int64) (*model.RedPacket, error) {
+	// Filter: the red packet ID matches, the remaining count is > 0, and the status is Active
+ filter := bson.M{
+ "red_packet_id": redPacketID,
+ "remain_count": bson.M{"$gt": 0},
+ "status": model.RedPacketStatusActive,
+ }
+
+	// Atomically decrement the remaining count and amount with $inc
+ update := bson.M{
+ "$inc": bson.M{
+ "remain_amount": -amount,
+ "remain_count": -1,
+ },
+ }
+
+	// Use findOneAndUpdate to return the post-update document
+ opts := options.FindOneAndUpdate().SetReturnDocument(options.After)
+ var updatedRedPacket model.RedPacket
+ err := r.coll.FindOneAndUpdate(ctx, filter, update, opts).Decode(&updatedRedPacket)
+ if err != nil {
+ if err == mongo.ErrNoDocuments {
+			// The red packet does not exist, has been fully claimed, or is in the wrong status
+ return nil, errs.ErrArgs.WrapMsg("red packet not available (already finished or expired)")
+ }
+ return nil, err
+ }
+
+	// If nothing remains, mark the red packet as finished
+ if updatedRedPacket.RemainCount == 0 {
+ statusUpdate := bson.M{"$set": bson.M{"status": model.RedPacketStatusFinished}}
+ _ = mongoutil.UpdateOne(ctx, r.coll, bson.M{"red_packet_id": redPacketID}, statusUpdate, false)
+ updatedRedPacket.Status = model.RedPacketStatusFinished
+ }
+
+ return &updatedRedPacket, nil
+}
+
+// FindExpiredRedPackets finds red packets that have expired.
+func (r *RedPacketMgo) FindExpiredRedPackets(ctx context.Context, beforeTime time.Time) ([]*model.RedPacket, error) {
+ filter := bson.M{
+ "expire_time": bson.M{"$lt": beforeTime},
+ "status": model.RedPacketStatusActive,
+ }
+ return mongoutil.Find[*model.RedPacket](ctx, r.coll, filter)
+}
+
+// FindRedPacketsByUser finds red packets sent by a user with pagination.
+func (r *RedPacketMgo) FindRedPacketsByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error) {
+ filter := bson.M{"send_user_id": userID}
+ return mongoutil.FindPage[*model.RedPacket](ctx, r.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "create_time", Value: -1}},
+ })
+}
+
+// FindRedPacketsByGroup finds red packets in a group with pagination.
+func (r *RedPacketMgo) FindRedPacketsByGroup(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error) {
+ filter := bson.M{"group_id": groupID}
+ return mongoutil.FindPage[*model.RedPacket](ctx, r.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "create_time", Value: -1}},
+ })
+}
+
+// FindAllRedPackets finds all red packets with pagination.
+func (r *RedPacketMgo) FindAllRedPackets(ctx context.Context, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error) {
+ return mongoutil.FindPage[*model.RedPacket](ctx, r.coll, bson.M{}, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "create_time", Value: -1}},
+ })
+}
+
+// RedPacketReceiveMgo implements RedPacketReceive using MongoDB as the storage backend.
+type RedPacketReceiveMgo struct {
+ coll *mongo.Collection
+}
+
+// NewRedPacketReceiveMongo creates a new instance of RedPacketReceiveMgo with the provided MongoDB database.
+func NewRedPacketReceiveMongo(db *mongo.Database) (database.RedPacketReceive, error) {
+ coll := db.Collection(database.RedPacketReceiveName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "receive_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "red_packet_id", Value: 1}, {Key: "receive_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "receive_user_id", Value: 1}, {Key: "red_packet_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "receive_user_id", Value: 1}, {Key: "receive_time", Value: -1}},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &RedPacketReceiveMgo{coll: coll}, nil
+}
+
+// Create creates a new red packet receive record.
+func (r *RedPacketReceiveMgo) Create(ctx context.Context, receive *model.RedPacketReceive) error {
+ if receive.ReceiveTime.IsZero() {
+ receive.ReceiveTime = time.Now()
+ }
+ return mongoutil.InsertOne(ctx, r.coll, receive)
+}
+
+// Take retrieves a receive record by ID. Returns an error if not found.
+func (r *RedPacketReceiveMgo) Take(ctx context.Context, receiveID string) (*model.RedPacketReceive, error) {
+ return mongoutil.FindOne[*model.RedPacketReceive](ctx, r.coll, bson.M{"receive_id": receiveID})
+}
+
+// FindByRedPacketID finds all receive records for a red packet.
+func (r *RedPacketReceiveMgo) FindByRedPacketID(ctx context.Context, redPacketID string) ([]*model.RedPacketReceive, error) {
+ return mongoutil.Find[*model.RedPacketReceive](ctx, r.coll, bson.M{"red_packet_id": redPacketID}, &options.FindOptions{
+ Sort: bson.D{{Key: "receive_time", Value: 1}},
+ })
+}
+
+// FindByUserAndRedPacketID finds if a user has received a specific red packet.
+func (r *RedPacketReceiveMgo) FindByUserAndRedPacketID(ctx context.Context, userID, redPacketID string) (*model.RedPacketReceive, error) {
+ return mongoutil.FindOne[*model.RedPacketReceive](ctx, r.coll, bson.M{
+ "receive_user_id": userID,
+ "red_packet_id": redPacketID,
+ })
+}
+
+// FindByUser finds all red packets received by a user with pagination.
+func (r *RedPacketReceiveMgo) FindByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, receives []*model.RedPacketReceive, err error) {
+ filter := bson.M{"receive_user_id": userID}
+ return mongoutil.FindPage[*model.RedPacketReceive](ctx, r.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "receive_time", Value: -1}},
+ })
+}
+
+// DeleteByReceiveID deletes a receive record by receive ID (for cleanup on failure).
+func (r *RedPacketReceiveMgo) DeleteByReceiveID(ctx context.Context, receiveID string) error {
+ return mongoutil.DeleteOne(ctx, r.coll, bson.M{"receive_id": receiveID})
+}
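The check-and-decrement invariant behind `DecreaseRemainAtomic` can be sketched without MongoDB. In this hedged, in-memory stand-in (the `packet`/`grab` names are hypothetical), a mutex plays the role of `findOneAndUpdate`'s single-document atomicity: the availability check and the decrement happen as one indivisible step, so two concurrent grabbers can never both take the last share.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// packet mirrors the fields the atomic grab touches (simplified).
type packet struct {
	mu           sync.Mutex
	remainAmount int64
	remainCount  int32
	finished     bool
}

// grab mimics DecreaseRemainAtomic: check availability and decrement under
// one lock, the in-memory analogue of a filtered findOneAndUpdate with $inc.
func (p *packet) grab(amount int64) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.finished || p.remainCount <= 0 {
		return errors.New("red packet not available")
	}
	p.remainAmount -= amount
	p.remainCount--
	if p.remainCount == 0 {
		p.finished = true
	}
	return nil
}

func main() {
	p := &packet{remainAmount: 100, remainCount: 2}
	fmt.Println(p.grab(60), p.grab(40), p.grab(1))
}
```

A read-then-update done as two separate MongoDB operations would reopen the race; keeping the `remain_count > 0` predicate inside the `findOneAndUpdate` filter is what makes the real implementation safe.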
diff --git a/pkg/common/storage/database/mgo/seq_conversation.go b/pkg/common/storage/database/mgo/seq_conversation.go
new file mode 100644
index 0000000..75ffbbc
--- /dev/null
+++ b/pkg/common/storage/database/mgo/seq_conversation.go
@@ -0,0 +1,104 @@
+package mgo
+
+import (
+ "context"
+ "errors"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewSeqConversationMongo(db *mongo.Database) (database.SeqConversation, error) {
+ coll := db.Collection(database.SeqConversationName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "conversation_id", Value: 1},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &seqConversationMongo{coll: coll}, nil
+}
+
+type seqConversationMongo struct {
+ coll *mongo.Collection
+}
+
+func (s *seqConversationMongo) setSeq(ctx context.Context, conversationID string, seq int64, field string) error {
+ filter := map[string]any{
+ "conversation_id": conversationID,
+ }
+ insert := bson.M{
+ "conversation_id": conversationID,
+ "min_seq": 0,
+ "max_seq": 0,
+ }
+ delete(insert, field)
+ update := map[string]any{
+ "$set": bson.M{
+ field: seq,
+ },
+ "$setOnInsert": insert,
+ }
+ opt := options.Update().SetUpsert(true)
+ return mongoutil.UpdateOne(ctx, s.coll, filter, update, false, opt)
+}
+
+// Malloc atomically reserves a contiguous block of size seqs for the conversation and returns the first seq of the block.
+func (s *seqConversationMongo) Malloc(ctx context.Context, conversationID string, size int64) (int64, error) {
+	if size < 0 {
+		return 0, errors.New("size cannot be negative")
+ }
+ if size == 0 {
+ return s.GetMaxSeq(ctx, conversationID)
+ }
+ filter := map[string]any{"conversation_id": conversationID}
+ update := map[string]any{
+ "$inc": map[string]any{"max_seq": size},
+ "$set": map[string]any{"min_seq": int64(0)},
+ }
+ opt := options.FindOneAndUpdate().SetUpsert(true).SetReturnDocument(options.After).SetProjection(map[string]any{"_id": 0, "max_seq": 1})
+ lastSeq, err := mongoutil.FindOneAndUpdate[int64](ctx, s.coll, filter, update, opt)
+ if err != nil {
+ return 0, err
+ }
+ return lastSeq - size, nil
+}
+
+func (s *seqConversationMongo) SetMaxSeq(ctx context.Context, conversationID string, seq int64) error {
+ return s.setSeq(ctx, conversationID, seq, "max_seq")
+}
+
+func (s *seqConversationMongo) GetMaxSeq(ctx context.Context, conversationID string) (int64, error) {
+ seq, err := mongoutil.FindOne[int64](ctx, s.coll, bson.M{"conversation_id": conversationID}, options.FindOne().SetProjection(map[string]any{"_id": 0, "max_seq": 1}))
+ if err == nil {
+ return seq, nil
+ } else if IsNotFound(err) {
+ return 0, nil
+ } else {
+ return 0, err
+ }
+}
+
+func (s *seqConversationMongo) GetMinSeq(ctx context.Context, conversationID string) (int64, error) {
+ seq, err := mongoutil.FindOne[int64](ctx, s.coll, bson.M{"conversation_id": conversationID}, options.FindOne().SetProjection(map[string]any{"_id": 0, "min_seq": 1}))
+ if err == nil {
+ return seq, nil
+ } else if IsNotFound(err) {
+ return 0, nil
+ } else {
+ return 0, err
+ }
+}
+
+func (s *seqConversationMongo) SetMinSeq(ctx context.Context, conversationID string, seq int64) error {
+ return s.setSeq(ctx, conversationID, seq, "min_seq")
+}
+
+func (s *seqConversationMongo) GetConversation(ctx context.Context, conversationID string) (*model.SeqConversation, error) {
+ return mongoutil.FindOne[*model.SeqConversation](ctx, s.coll, bson.M{"conversation_id": conversationID})
+}
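The seq-allocation arithmetic in `Malloc` above is easy to miss: `$inc` bumps `max_seq` by `size` and the first seq of the freshly reserved block is the post-increment value minus `size`. A minimal in-memory sketch (the `allocator`/`malloc` names are hypothetical):

```go
package main

import "fmt"

// allocator stands in for the seq_conversation collection (simplified).
type allocator struct {
	maxSeq map[string]int64
}

// malloc reserves a contiguous block of size seqs for the conversation and
// returns the first seq of the block, mirroring Malloc's lastSeq - size.
func (a *allocator) malloc(conversationID string, size int64) int64 {
	a.maxSeq[conversationID] += size // the $inc step
	return a.maxSeq[conversationID] - size
}

func main() {
	a := &allocator{maxSeq: map[string]int64{}}
	fmt.Println(a.malloc("c1", 10)) // first block starts at 0
	fmt.Println(a.malloc("c1", 5))  // next block starts at 10
}
```

Because the increment happens server-side in one `findOneAndUpdate`, two callers asking for blocks concurrently always receive disjoint ranges.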
diff --git a/pkg/common/storage/database/mgo/seq_conversation_test.go b/pkg/common/storage/database/mgo/seq_conversation_test.go
new file mode 100644
index 0000000..dd30286
--- /dev/null
+++ b/pkg/common/storage/database/mgo/seq_conversation_test.go
@@ -0,0 +1,43 @@
+package mgo
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func Result[V any](val V, err error) V {
+ if err != nil {
+ panic(err)
+ }
+ return val
+}
+
+func Mongodb() *mongo.Database {
+ return Result(
+ mongo.Connect(context.Background(),
+ options.Client().
+ ApplyURI("mongodb://openIM:openIM123@172.16.8.135:37017/openim_v3?maxPoolSize=100").
+ SetConnectTimeout(5*time.Second)),
+ ).Database("openim_v3")
+}
+
+func TestUserSeq(t *testing.T) {
+ uSeq := Result(NewSeqUserMongo(Mongodb())).(*seqUserMongo)
+ t.Log(uSeq.SetUserMinSeq(context.Background(), "1000", "2000", 4))
+}
+
+func TestConversationSeq(t *testing.T) {
+ cSeq := Result(NewSeqConversationMongo(Mongodb())).(*seqConversationMongo)
+ t.Log(cSeq.SetMaxSeq(context.Background(), "2000", 10))
+ t.Log(cSeq.Malloc(context.Background(), "2000", 10))
+ t.Log(cSeq.GetMaxSeq(context.Background(), "2000"))
+}
+
+func TestUserGetUserReadSeqs(t *testing.T) {
+ uSeq := Result(NewSeqUserMongo(Mongodb())).(*seqUserMongo)
+ t.Log(uSeq.GetUserReadSeqs(context.Background(), "2110910952", []string{"sg_345762580", "2000", "3000"}))
+}
diff --git a/pkg/common/storage/database/mgo/seq_user.go b/pkg/common/storage/database/mgo/seq_user.go
new file mode 100644
index 0000000..ab68186
--- /dev/null
+++ b/pkg/common/storage/database/mgo/seq_user.go
@@ -0,0 +1,127 @@
+package mgo
+
+import (
+ "context"
+ "errors"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewSeqUserMongo(db *mongo.Database) (database.SeqUser, error) {
+ coll := db.Collection(database.SeqUserName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ {Key: "conversation_id", Value: 1},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &seqUserMongo{coll: coll}, nil
+}
+
+type seqUserMongo struct {
+ coll *mongo.Collection
+}
+
+func (s *seqUserMongo) setSeq(ctx context.Context, conversationID string, userID string, seq int64, field string) error {
+ filter := map[string]any{
+ "user_id": userID,
+ "conversation_id": conversationID,
+ }
+ insert := bson.M{
+ "user_id": userID,
+ "conversation_id": conversationID,
+ "min_seq": 0,
+ "max_seq": 0,
+ "read_seq": 0,
+ }
+ delete(insert, field)
+ update := map[string]any{
+ "$set": bson.M{
+ field: seq,
+ },
+ "$setOnInsert": insert,
+ }
+ opt := options.Update().SetUpsert(true)
+ return mongoutil.UpdateOne(ctx, s.coll, filter, update, false, opt)
+}
+
+func (s *seqUserMongo) getSeq(ctx context.Context, conversationID string, userID string, field string) (int64, error) {
+	filter := map[string]any{
+		"user_id":         userID,
+		"conversation_id": conversationID,
+	}
+	opt := options.FindOne().SetProjection(bson.M{"_id": 0, field: 1})
+ seq, err := mongoutil.FindOne[int64](ctx, s.coll, filter, opt)
+ if err == nil {
+ return seq, nil
+ } else if errors.Is(err, mongo.ErrNoDocuments) {
+ return 0, nil
+ } else {
+ return 0, err
+ }
+}
+
+func (s *seqUserMongo) GetUserMaxSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return s.getSeq(ctx, conversationID, userID, "max_seq")
+}
+
+func (s *seqUserMongo) SetUserMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return s.setSeq(ctx, conversationID, userID, seq, "max_seq")
+}
+
+func (s *seqUserMongo) GetUserMinSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return s.getSeq(ctx, conversationID, userID, "min_seq")
+}
+
+func (s *seqUserMongo) SetUserMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ return s.setSeq(ctx, conversationID, userID, seq, "min_seq")
+}
+
+func (s *seqUserMongo) GetUserReadSeq(ctx context.Context, conversationID string, userID string) (int64, error) {
+ return s.getSeq(ctx, conversationID, userID, "read_seq")
+}
+
+func (s *seqUserMongo) notFoundSet0(seq map[string]int64, conversationIDs []string) {
+ for _, conversationID := range conversationIDs {
+ if _, ok := seq[conversationID]; !ok {
+ seq[conversationID] = 0
+ }
+ }
+}
+
+func (s *seqUserMongo) GetUserReadSeqs(ctx context.Context, userID string, conversationID []string) (map[string]int64, error) {
+ if len(conversationID) == 0 {
+ return map[string]int64{}, nil
+ }
+ filter := bson.M{"user_id": userID, "conversation_id": bson.M{"$in": conversationID}}
+ opt := options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1, "read_seq": 1})
+ seqs, err := mongoutil.Find[*model.SeqUser](ctx, s.coll, filter, opt)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]int64)
+ for _, seq := range seqs {
+ res[seq.ConversationID] = seq.ReadSeq
+ }
+ s.notFoundSet0(res, conversationID)
+ return res, nil
+}
+
+// SetUserReadSeq advances the user's read seq for the conversation; it never moves the value backward.
+func (s *seqUserMongo) SetUserReadSeq(ctx context.Context, conversationID string, userID string, seq int64) error {
+ dbSeq, err := s.GetUserReadSeq(ctx, conversationID, userID)
+ if err != nil {
+ return err
+ }
+ if dbSeq > seq {
+ return nil
+ }
+ return s.setSeq(ctx, conversationID, userID, seq, "read_seq")
+}
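`SetUserReadSeq` above only ever advances the stored value, which makes stale or out-of-order read acks harmless. A small in-memory sketch of that monotonic-update rule (the `readSeqs`/`set` names are hypothetical):

```go
package main

import "fmt"

// readSeqs stands in for the per-user seq collection (simplified).
type readSeqs struct {
	seq map[string]int64
}

// set advances the stored read seq only when the new value is not smaller,
// mirroring SetUserReadSeq's "if dbSeq > seq return" guard.
func (r *readSeqs) set(conversationID string, seq int64) {
	if r.seq[conversationID] > seq {
		return // stale ack: keep the larger value
	}
	r.seq[conversationID] = seq
}

func main() {
	r := &readSeqs{seq: map[string]int64{}}
	r.set("c1", 5)
	r.set("c1", 3) // arrives late, ignored
	fmt.Println(r.seq["c1"])
}
```

Note that the real method does a read followed by a write rather than a single conditional update, so under concurrent acks for the same conversation the guard is best-effort rather than strictly atomic.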
diff --git a/pkg/common/storage/database/mgo/system_config.go b/pkg/common/storage/database/mgo/system_config.go
new file mode 100644
index 0000000..c1fbb13
--- /dev/null
+++ b/pkg/common/storage/database/mgo/system_config.go
@@ -0,0 +1,99 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// SystemConfigMgo implements SystemConfig using MongoDB as the storage backend.
+type SystemConfigMgo struct {
+ coll *mongo.Collection
+}
+
+// NewSystemConfigMongo creates a new instance of SystemConfigMgo with the provided MongoDB database.
+func NewSystemConfigMongo(db *mongo.Database) (database.SystemConfig, error) {
+ coll := db.Collection(database.SystemConfigName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "key", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "enabled", Value: 1}},
+ },
+ {
+ Keys: bson.D{{Key: "create_time", Value: -1}},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &SystemConfigMgo{coll: coll}, nil
+}
+
+// Create creates a new system config record.
+func (s *SystemConfigMgo) Create(ctx context.Context, config *model.SystemConfig) error {
+ config.CreateTime = time.Now()
+ config.UpdateTime = time.Now()
+ return mongoutil.InsertOne(ctx, s.coll, config)
+}
+
+// Take retrieves a system config by key. Returns an error if not found.
+func (s *SystemConfigMgo) Take(ctx context.Context, key string) (*model.SystemConfig, error) {
+ return mongoutil.FindOne[*model.SystemConfig](ctx, s.coll, bson.M{"key": key})
+}
+
+// Update updates system config information.
+func (s *SystemConfigMgo) Update(ctx context.Context, key string, data map[string]any) error {
+ data["update_time"] = time.Now()
+ return mongoutil.UpdateOne(ctx, s.coll, bson.M{"key": key}, bson.M{"$set": data}, true)
+}
+
+// Find finds system configs by keys.
+func (s *SystemConfigMgo) Find(ctx context.Context, keys []string) ([]*model.SystemConfig, error) {
+ return mongoutil.Find[*model.SystemConfig](ctx, s.coll, bson.M{"key": bson.M{"$in": keys}})
+}
+
+// FindEnabled finds all enabled system configs.
+func (s *SystemConfigMgo) FindEnabled(ctx context.Context) ([]*model.SystemConfig, error) {
+ return mongoutil.Find[*model.SystemConfig](ctx, s.coll, bson.M{"enabled": true})
+}
+
+// FindByKey finds a system config by key (returns nil if not found, no error).
+func (s *SystemConfigMgo) FindByKey(ctx context.Context, key string) (*model.SystemConfig, error) {
+ config, err := mongoutil.FindOne[*model.SystemConfig](ctx, s.coll, bson.M{"key": key})
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) || err == mongo.ErrNoDocuments {
+ return nil, nil
+ }
+ return nil, err
+ }
+ return config, nil
+}
+
+// Delete deletes a system config by key.
+func (s *SystemConfigMgo) Delete(ctx context.Context, key string) error {
+ return mongoutil.DeleteOne(ctx, s.coll, bson.M{"key": key})
+}
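Note the two lookup flavors above: `Take` treats a missing key as an error, while `FindByKey` maps not-found to `(nil, nil)` so callers can treat absence as a normal outcome. A minimal sketch of that pattern over an in-memory map, with a local `errNotFound` sentinel standing in for the storage layer's record-not-found error (names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the storage layer's record-not-found error.
var errNotFound = errors.New("record not found")

type config struct{ Key, Value string }

type memConfigStore struct{ data map[string]config }

// Take mirrors SystemConfigMgo.Take: a missing key is an error.
func (s *memConfigStore) Take(key string) (*config, error) {
	if c, ok := s.data[key]; ok {
		return &c, nil
	}
	return nil, errNotFound
}

// FindByKey mirrors SystemConfigMgo.FindByKey: a missing key yields (nil, nil),
// so callers can distinguish "absent" from a real lookup failure.
func (s *memConfigStore) FindByKey(key string) (*config, error) {
	c, err := s.Take(key)
	if err != nil {
		if errors.Is(err, errNotFound) {
			return nil, nil
		}
		return nil, err
	}
	return c, nil
}

func main() {
	s := &memConfigStore{data: map[string]config{"motd": {"motd", "hello"}}}
	if _, err := s.Take("missing"); err != nil {
		fmt.Println("Take: error on missing key")
	}
	c, err := s.FindByKey("missing")
	fmt.Println(c == nil, err == nil) // prints: true true
}
```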
diff --git a/pkg/common/storage/database/mgo/user.go b/pkg/common/storage/database/mgo/user.go
new file mode 100644
index 0000000..38e4c1f
--- /dev/null
+++ b/pkg/common/storage/database/mgo/user.go
@@ -0,0 +1,699 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewUserMongo(db *mongo.Database) (database.User, error) {
+ coll := db.Collection(database.UserName)
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &UserMgo{coll: coll}, nil
+}
+
+type UserMgo struct {
+ coll *mongo.Collection
+}
+
+func (u *UserMgo) Create(ctx context.Context, users []*model.User) error {
+ return mongoutil.InsertMany(ctx, u.coll, users)
+}
+
+func (u *UserMgo) UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mongoutil.UpdateOne(ctx, u.coll, bson.M{"user_id": userID}, bson.M{"$set": args}, true)
+}
+
+func (u *UserMgo) Find(ctx context.Context, userIDs []string) (users []*model.User, err error) {
+ query := bson.M{"user_id": bson.M{"$in": userIDs}}
+ log.ZInfo(ctx, "UserMongo Find query", "collection", u.coll.Name(), "query", query)
+ users, err = mongoutil.Find[*model.User](ctx, u.coll, query)
+ log.ZInfo(ctx, "UserMongo Find result", "userCount", len(users), "err", err)
+ return users, err
+}
+
+func (u *UserMgo) Take(ctx context.Context, userID string) (user *model.User, err error) {
+ return mongoutil.FindOne[*model.User](ctx, u.coll, bson.M{"user_id": userID})
+}
+
+func (u *UserMgo) TakeNotification(ctx context.Context, level int64) (user []*model.User, err error) {
+ return mongoutil.Find[*model.User](ctx, u.coll, bson.M{"app_manger_level": level})
+}
+
+func (u *UserMgo) TakeGTEAppManagerLevel(ctx context.Context, level int64) (user []*model.User, err error) {
+ return mongoutil.Find[*model.User](ctx, u.coll, bson.M{"app_manger_level": bson.M{"$gte": level}})
+}
+
+func (u *UserMgo) TakeByNickname(ctx context.Context, nickname string) (user []*model.User, err error) {
+ return mongoutil.Find[*model.User](ctx, u.coll, bson.M{"nickname": nickname})
+}
+
+func (u *UserMgo) Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*model.User, err error) {
+ return mongoutil.FindPage[*model.User](ctx, u.coll, bson.M{}, pagination)
+}
+
+func (u *UserMgo) PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*model.User, err error) {
+ query := bson.M{
+ "$or": []bson.M{
+ {"app_manger_level": level1},
+ {"app_manger_level": level2},
+ },
+ }
+
+ return mongoutil.FindPage[*model.User](ctx, u.coll, query, pagination)
+}
+
+func (u *UserMgo) PageFindUserWithKeyword(
+ ctx context.Context,
+ level1 int64,
+ level2 int64,
+ userID string,
+ nickName string,
+ pagination pagination.Pagination,
+) (count int64, users []*model.User, err error) {
+ // Initialize the base query with level conditions
+ query := bson.M{
+ "$and": []bson.M{
+ {"app_manger_level": bson.M{"$in": []int64{level1, level2}}},
+ },
+ }
+
+ // Add userID and userName conditions to the query if they are provided
+ if userID != "" || nickName != "" {
+ userConditions := []bson.M{}
+ if userID != "" {
+ // Use regex for userID
+ regexPattern := primitive.Regex{Pattern: userID, Options: "i"} // 'i' for case-insensitive matching
+ userConditions = append(userConditions, bson.M{"user_id": regexPattern})
+ }
+ if nickName != "" {
+ // Use regex for userName
+ regexPattern := primitive.Regex{Pattern: nickName, Options: "i"} // 'i' for case-insensitive matching
+ userConditions = append(userConditions, bson.M{"nickname": regexPattern})
+ }
+ query["$and"] = append(query["$and"].([]bson.M), bson.M{"$or": userConditions})
+ }
+
+ // Perform the paginated search
+ return mongoutil.FindPage[*model.User](ctx, u.coll, query, pagination)
+}
+
+func (u *UserMgo) GetAllUserID(ctx context.Context, pagination pagination.Pagination) (int64, []string, error) {
+ return mongoutil.FindPage[string](ctx, u.coll, bson.M{}, pagination, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (u *UserMgo) Exist(ctx context.Context, userID string) (exist bool, err error) {
+ return mongoutil.Exist(ctx, u.coll, bson.M{"user_id": userID})
+}
+
+func (u *UserMgo) GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error) {
+ return mongoutil.FindOne[int](ctx, u.coll, bson.M{"user_id": userID}, options.FindOne().SetProjection(bson.M{"_id": 0, "global_recv_msg_opt": 1}))
+}
+
+// SearchUsersByFields searches users by account (userID), phone, and nickname,
+// joining the attribute and user collections with a MongoDB $lookup aggregation.
+// It returns the list of matching user IDs.
+func (u *UserMgo) SearchUsersByFields(ctx context.Context, account, phone, nickname string) (userIDs []string, err error) {
+ log.ZInfo(ctx, "SearchUsersByFields START", "account", account, "phone", phone, "nickname", nickname)
+
+	// Get the attribute collection
+ attributeColl := u.coll.Database().Collection("attribute")
+ log.ZInfo(ctx, "SearchUsersByFields collections", "attributeCollection", "attribute", "userCollection", u.coll.Name(), "database", u.coll.Database().Name())
+
+	// Build an aggregation pipeline that joins the collections with $lookup
+ pipeline := bson.A{}
+
+	// Step 1: build the match conditions, starting from the attribute collection
+ attributeMatch := bson.M{}
+ attributeOrConditions := []bson.M{}
+
+ if account != "" {
+ attributeOrConditions = append(attributeOrConditions, bson.M{"account": bson.M{"$regex": account, "$options": "i"}})
+ log.ZInfo(ctx, "SearchUsersByFields add account condition", "account", account)
+ }
+
+ if phone != "" {
+		// phone is searched in the attribute collection (note: the field name is phone_number, not phone)
+ attributeOrConditions = append(attributeOrConditions, bson.M{"phone_number": bson.M{"$regex": phone, "$options": "i"}})
+ log.ZInfo(ctx, "SearchUsersByFields add phone condition", "phone", phone, "field", "phone_number")
+ }
+
+	// Choose the query strategy: start from attribute if account or phone is given; start from user if only nickname is given
+ hasAttributeSearch := len(attributeOrConditions) > 0
+ hasUserSearch := nickname != ""
+
+	var startCollection *mongo.Collection // the collection the pipeline starts from
+
+ if hasAttributeSearch {
+		// Start the query from the attribute collection
+ startCollection = attributeColl
+
+		// Diagnostic logging: sample the attribute collection to inspect its actual document structure.
+		type FullAttributeDoc map[string]interface{}
+		allDocs, err := mongoutil.Find[*FullAttributeDoc](ctx, attributeColl, bson.M{}, options.Find().SetLimit(5).SetProjection(bson.M{"_id": 0}))
+		if err == nil && len(allDocs) > 0 {
+			log.ZInfo(ctx, "SearchUsersByFields attribute sample documents (all fields)", "sampleCount", len(allDocs), "samples", allDocs)
+		}
+
+		if phone != "" {
+			// Try an exact match on the phone number (the field name is phone_number).
+			exactPhoneMatch := bson.M{"phone_number": phone}
+			exactDocs, err := mongoutil.Find[*FullAttributeDoc](ctx, attributeColl, exactPhoneMatch, options.Find().SetLimit(5).SetProjection(bson.M{"_id": 0}))
+			if err == nil {
+				log.ZInfo(ctx, "SearchUsersByFields exact phone_number match", "phone", phone, "matchCount", len(exactDocs), "docs", exactDocs)
+			}
+
+			// Try a case-insensitive regex match on phone_number.
+			regexPhoneMatch := bson.M{"phone_number": bson.M{"$regex": phone, "$options": "i"}}
+			regexDocs, err := mongoutil.Find[*FullAttributeDoc](ctx, attributeColl, regexPhoneMatch, options.Find().SetLimit(5).SetProjection(bson.M{"_id": 0}))
+			if err == nil {
+				log.ZInfo(ctx, "SearchUsersByFields regex phone_number match", "phone", phone, "matchCount", len(regexDocs), "docs", regexDocs)
+			}
+
+			// List a few documents that carry a non-empty phone_number field.
+			hasPhoneFieldMatch := bson.M{"phone_number": bson.M{"$exists": true, "$ne": ""}}
+			hasPhoneDocs, err := mongoutil.Find[*FullAttributeDoc](ctx, attributeColl, hasPhoneFieldMatch, options.Find().SetLimit(5).SetProjection(bson.M{"_id": 0}))
+			if err == nil {
+				log.ZInfo(ctx, "SearchUsersByFields documents with phone_number field", "matchCount", len(hasPhoneDocs), "docs", hasPhoneDocs)
+			}
+		}
+
+ attributeMatch["$or"] = attributeOrConditions
+ pipeline = append(pipeline, bson.M{"$match": attributeMatch})
+ log.ZInfo(ctx, "SearchUsersByFields attribute match stage", "match", attributeMatch)
+
+		// Join the user collection with $lookup
+		lookupStage := bson.M{
+			"$lookup": bson.M{
+				"from":         u.coll.Name(), // user collection name
+				"localField":   "user_id",     // field in the attribute collection
+				"foreignField": "user_id",     // field in the user collection
+				"as":           "userInfo",    // name of the joined field
+			},
+		}
+ pipeline = append(pipeline, lookupStage)
+ log.ZInfo(ctx, "SearchUsersByFields add lookup stage", "from", u.coll.Name(), "localField", "user_id", "foreignField", "user_id")
+
+		// Unwind the userInfo array
+ pipeline = append(pipeline, bson.M{"$unwind": bson.M{
+ "path": "$userInfo",
+ "preserveNullAndEmptyArrays": true,
+ }})
+
+		// If a nickname condition exists, match it against the user collection's nickname
+ if hasUserSearch {
+ userMatch := bson.M{
+ "$or": []bson.M{
+ {"userInfo.nickname": bson.M{"$regex": nickname, "$options": "i"}},
+					{"userInfo": bson.M{"$exists": false}}, // keep rows that did not join a user document
+ },
+ }
+ pipeline = append(pipeline, bson.M{"$match": userMatch})
+ log.ZInfo(ctx, "SearchUsersByFields add nickname match", "nickname", nickname)
+ }
+
+		// Starting from the attribute collection, user_id lives at the root level
+ pipeline = append(pipeline, bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "user_id": 1,
+ },
+ })
+ } else if hasUserSearch {
+		// Only a nickname condition: start the query from the user collection
+ startCollection = u.coll
+ userMatch := bson.M{"nickname": bson.M{"$regex": nickname, "$options": "i"}}
+ pipeline = append(pipeline, bson.M{"$match": userMatch})
+ log.ZInfo(ctx, "SearchUsersByFields user match stage", "match", userMatch)
+
+		// Starting from the user collection, user_id lives at the root level
+ pipeline = append(pipeline, bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "user_id": 1,
+ },
+ })
+ } else {
+		// No search conditions at all: return an empty result
+		log.ZInfo(ctx, "SearchUsersByFields no search conditions, returning empty result")
+ return []string{}, nil
+ }
+
+	// Deduplicate by user_id
+ pipeline = append(pipeline, bson.M{
+ "$group": bson.M{
+ "_id": "$user_id",
+ "user_id": bson.M{"$first": "$user_id"},
+ },
+ })
+
+	// Project only user_id
+ pipeline = append(pipeline, bson.M{
+ "$project": bson.M{
+ "_id": 0,
+ "user_id": 1,
+ },
+ })
+
+ log.ZInfo(ctx, "SearchUsersByFields pipeline", "pipeline", pipeline, "startCollection", startCollection.Name())
+
+	// Run the aggregation
+ type ResultDoc struct {
+ UserID string `bson:"user_id"`
+ }
+
+ results, err := mongoutil.Aggregate[*ResultDoc](ctx, startCollection, pipeline)
+ if err != nil {
+ log.ZError(ctx, "SearchUsersByFields Aggregate failed", err, "pipeline", pipeline)
+ return nil, err
+ }
+
+ log.ZInfo(ctx, "SearchUsersByFields Aggregate result", "resultCount", len(results), "results", results)
+
+	// Collect the user_id values
+ userIDs = make([]string, 0, len(results))
+ for _, result := range results {
+ if result.UserID != "" {
+ userIDs = append(userIDs, result.UserID)
+ }
+ }
+
+ log.ZInfo(ctx, "SearchUsersByFields FINAL result", "totalUserIDs", len(userIDs), "userIDs", userIDs)
+
+ return userIDs, nil
+}
+
+// SearchUsersByFields_old is the previous implementation, kept for reference.
+func (u *UserMgo) SearchUsersByFields_old(ctx context.Context, account, phone, nickname string) (userIDs []string, err error) {
+ log.ZInfo(ctx, "SearchUsersByFields START", "account", account, "phone", phone, "nickname", nickname)
+
+	userIDMap := make(map[string]bool) // used for deduplication
+
+	// Get the attribute collection
+ attributeColl := u.coll.Database().Collection("attribute")
+ log.ZInfo(ctx, "SearchUsersByFields attribute collection", "collectionName", "attribute", "database", u.coll.Database().Name())
+
+	// Query account and phone against the attribute collection
+ if account != "" || phone != "" {
+ attributeFilter := bson.M{}
+ attributeConditions := []bson.M{}
+
+ if account != "" {
+			// account is searched in the attribute collection
+ attributeConditions = append(attributeConditions, bson.M{"account": bson.M{"$regex": account, "$options": "i"}})
+ log.ZInfo(ctx, "SearchUsersByFields add account condition", "account", account)
+ }
+
+ if phone != "" {
+			// phone is searched in the attribute collection
+ attributeConditions = append(attributeConditions, bson.M{"phone": bson.M{"$regex": phone, "$options": "i"}})
+ log.ZInfo(ctx, "SearchUsersByFields add phone condition", "phone", phone)
+ }
+
+ if len(attributeConditions) > 0 {
+ attributeFilter["$or"] = attributeConditions
+
+ log.ZInfo(ctx, "SearchUsersByFields query attribute", "filter", attributeFilter, "account", account, "phone", phone, "conditionsCount", len(attributeConditions))
+
+			// attribute documents contain user_id, account, phone, and other fields
+ type AttributeDoc struct {
+ UserID string `bson:"user_id"`
+ }
+
+			// First check whether the collection holds any data at all
+ count, err := mongoutil.Count(ctx, attributeColl, bson.M{})
+ log.ZInfo(ctx, "SearchUsersByFields attribute collection total count", "count", count, "err", err)
+
+			// Sample a few records to inspect the structure, especially ones carrying phone data
+ type SampleDoc struct {
+ UserID string `bson:"user_id"`
+ Account string `bson:"account"`
+ Phone string `bson:"phone"`
+ }
+			// Sample records to see the actual document structure
+ samples, err := mongoutil.Find[*SampleDoc](ctx, attributeColl, bson.M{}, options.Find().SetLimit(10).SetProjection(bson.M{"_id": 0, "user_id": 1, "account": 1, "phone": 1}))
+ if err == nil && len(samples) > 0 {
+ log.ZInfo(ctx, "SearchUsersByFields attribute sample documents", "sampleCount", len(samples), "samples", samples)
+
+				// Look for records whose phone field is non-empty
+ phoneFilter := bson.M{"phone": bson.M{"$exists": true, "$ne": ""}}
+ phoneSamples, err := mongoutil.Find[*SampleDoc](ctx, attributeColl, phoneFilter, options.Find().SetLimit(5).SetProjection(bson.M{"_id": 0, "user_id": 1, "account": 1, "phone": 1}))
+ if err == nil {
+ log.ZInfo(ctx, "SearchUsersByFields attribute documents with phone", "phoneSampleCount", len(phoneSamples), "phoneSamples", phoneSamples)
+ } else {
+ log.ZWarn(ctx, "SearchUsersByFields cannot find documents with phone", err)
+ }
+
+				// Fetch one full document (all fields) to inspect the actual structure
+ type FullDoc map[string]interface{}
+ fullSample, err := mongoutil.FindOne[*FullDoc](ctx, attributeColl, bson.M{}, options.FindOne().SetProjection(bson.M{"_id": 0}))
+ if err == nil && fullSample != nil {
+ log.ZInfo(ctx, "SearchUsersByFields attribute full document structure", "fullSample", fullSample)
+ }
+ } else {
+ log.ZWarn(ctx, "SearchUsersByFields cannot get samples from attribute", err, "sampleCount", len(samples))
+ }
+
+			// Try an exact phone match to see whether any matching data exists
+ exactPhoneFilter := bson.M{"phone": phone}
+ exactCount, err := mongoutil.Count(ctx, attributeColl, exactPhoneFilter)
+ log.ZInfo(ctx, "SearchUsersByFields exact phone match count", "phone", phone, "count", exactCount, "err", err)
+
+ attributeDocs, err := mongoutil.Find[*AttributeDoc](ctx, attributeColl, attributeFilter, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+ if err != nil {
+ log.ZError(ctx, "SearchUsersByFields Find failed in attribute collection", err, "filter", attributeFilter)
+ return nil, err
+ }
+
+ log.ZInfo(ctx, "SearchUsersByFields Find result from attribute", "userCount", len(attributeDocs), "userIDs", attributeDocs)
+
+ for i, doc := range attributeDocs {
+ log.ZDebug(ctx, "SearchUsersByFields processing attribute doc", "index", i, "userID", doc.UserID)
+ if doc.UserID != "" && !userIDMap[doc.UserID] {
+ userIDMap[doc.UserID] = true
+ log.ZDebug(ctx, "SearchUsersByFields added userID from attribute", "userID", doc.UserID)
+ }
+ }
+ }
+ }
+
+	// Query nickname against the user collection
+ if nickname != "" {
+ userFilter := bson.M{"nickname": bson.M{"$regex": nickname, "$options": "i"}}
+ log.ZInfo(ctx, "SearchUsersByFields query user", "filter", userFilter, "nickname", nickname)
+
+ users, err := mongoutil.Find[*model.User](ctx, u.coll, userFilter, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+ if err != nil {
+ log.ZError(ctx, "SearchUsersByFields Find failed in user collection", err, "filter", userFilter)
+ return nil, err
+ }
+
+ log.ZInfo(ctx, "SearchUsersByFields Find result from user", "userCount", len(users))
+
+ for i, user := range users {
+ log.ZDebug(ctx, "SearchUsersByFields processing user doc", "index", i, "userID", user.UserID)
+ if user.UserID != "" && !userIDMap[user.UserID] {
+ userIDMap[user.UserID] = true
+ log.ZDebug(ctx, "SearchUsersByFields added userID from user", "userID", user.UserID)
+ }
+ }
+ }
+
+	// Convert the map into a slice
+ userIDs = make([]string, 0, len(userIDMap))
+ for userID := range userIDMap {
+ userIDs = append(userIDs, userID)
+ }
+
+ log.ZInfo(ctx, "SearchUsersByFields FINAL result", "totalUserIDs", len(userIDs), "userIDs", userIDs)
+
+ return userIDs, nil
+}
+
+func (u *UserMgo) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ if before == nil {
+ return mongoutil.Count(ctx, u.coll, bson.M{})
+ }
+ return mongoutil.Count(ctx, u.coll, bson.M{"create_time": bson.M{"$lt": before}})
+}
+
+func (u *UserMgo) AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error {
+ collection := u.coll.Database().Collection("userCommands")
+
+ // Create a new document instead of updating an existing one
+ doc := bson.M{
+ "userID": userID,
+ "type": Type,
+ "uuid": UUID,
+		"createTime": time.Now().Unix(), // creation time as a Unix timestamp
+ "value": value,
+ "ex": ex,
+ }
+
+ _, err := collection.InsertOne(ctx, doc)
+ return errs.Wrap(err)
+}
+
+func (u *UserMgo) DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error {
+ collection := u.coll.Database().Collection("userCommands")
+
+ filter := bson.M{"userID": userID, "type": Type, "uuid": UUID}
+
+	result, err := collection.DeleteOne(ctx, filter)
+	// when err is not nil, result might be nil
+	if err != nil {
+		return errs.Wrap(err)
+	}
+	if result.DeletedCount == 0 {
+		// No matching record found to delete
+		return errs.Wrap(errs.ErrRecordNotFound)
+	}
+	return nil
+}
+func (u *UserMgo) UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error {
+ if len(val) == 0 {
+ return nil
+ }
+
+ collection := u.coll.Database().Collection("userCommands")
+
+ filter := bson.M{"userID": userID, "type": Type, "uuid": UUID}
+ update := bson.M{"$set": val}
+
+ result, err := collection.UpdateOne(ctx, filter, update)
+ if err != nil {
+ return errs.Wrap(err)
+ }
+
+ if result.MatchedCount == 0 {
+ // No records found to update
+ return errs.Wrap(errs.ErrRecordNotFound)
+ }
+
+ return nil
+}
+
+func (u *UserMgo) GetUserCommand(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error) {
+ collection := u.coll.Database().Collection("userCommands")
+ filter := bson.M{"userID": userID, "type": Type}
+
+ cursor, err := collection.Find(ctx, filter)
+ if err != nil {
+ return nil, err
+ }
+ defer cursor.Close(ctx)
+
+ // Initialize commands as a slice of pointers
+ commands := []*user.CommandInfoResp{}
+
+ for cursor.Next(ctx) {
+ var document struct {
+ Type int32 `bson:"type"`
+ UUID string `bson:"uuid"`
+ Value string `bson:"value"`
+ CreateTime int64 `bson:"createTime"`
+ Ex string `bson:"ex"`
+ }
+
+ if err := cursor.Decode(&document); err != nil {
+ return nil, err
+ }
+
+ commandInfo := &user.CommandInfoResp{
+ Type: document.Type,
+ Uuid: document.UUID,
+ Value: document.Value,
+ CreateTime: document.CreateTime,
+ Ex: document.Ex,
+ }
+
+ commands = append(commands, commandInfo)
+ }
+
+ if err := cursor.Err(); err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ return commands, nil
+}
+func (u *UserMgo) GetAllUserCommand(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error) {
+ collection := u.coll.Database().Collection("userCommands")
+ filter := bson.M{"userID": userID}
+
+ cursor, err := collection.Find(ctx, filter)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ defer cursor.Close(ctx)
+
+ // Initialize commands as a slice of pointers
+ commands := []*user.AllCommandInfoResp{}
+
+ for cursor.Next(ctx) {
+ var document struct {
+ Type int32 `bson:"type"`
+ UUID string `bson:"uuid"`
+ Value string `bson:"value"`
+ CreateTime int64 `bson:"createTime"`
+ Ex string `bson:"ex"`
+ }
+
+ if err := cursor.Decode(&document); err != nil {
+ return nil, errs.Wrap(err)
+ }
+
+ commandInfo := &user.AllCommandInfoResp{
+ Type: document.Type,
+ Uuid: document.UUID,
+ Value: document.Value,
+ CreateTime: document.CreateTime,
+ Ex: document.Ex,
+ }
+
+ commands = append(commands, commandInfo)
+ }
+
+ if err := cursor.Err(); err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return commands, nil
+}
+func (u *UserMgo) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "create_time": bson.M{
+ "$gte": start,
+ "$lt": end,
+ },
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": "$create_time",
+ },
+ },
+ "count": bson.M{
+ "$sum": 1,
+ },
+ },
+ },
+ }
+ type Item struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ }
+ items, err := mongoutil.Aggregate[Item](ctx, u.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]int64, len(items))
+ for _, item := range items {
+ res[item.Date] = item.Count
+ }
+ return res, nil
+}
+
+func (u *UserMgo) SortQuery(ctx context.Context, userIDName map[string]string, asc bool) ([]*model.User, error) {
+ if len(userIDName) == 0 {
+ return nil, nil
+ }
+ userIDs := make([]string, 0, len(userIDName))
+ attached := make(map[string]string)
+ for userID, name := range userIDName {
+ userIDs = append(userIDs, userID)
+ if name == "" {
+ continue
+ }
+ attached[userID] = name
+ }
+ var sortValue int
+ if asc {
+ sortValue = 1
+ } else {
+ sortValue = -1
+ }
+ if len(attached) == 0 {
+ filter := bson.M{"user_id": bson.M{"$in": userIDs}}
+ opt := options.Find().SetSort(bson.M{"nickname": sortValue})
+ return mongoutil.Find[*model.User](ctx, u.coll, filter, opt)
+ }
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "user_id": bson.M{"$in": userIDs},
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "_query_sort_name": bson.M{
+ "$arrayElemAt": []any{
+ bson.M{
+ "$filter": bson.M{
+ "input": bson.M{
+ "$objectToArray": attached,
+ },
+ "as": "item",
+ "cond": bson.M{
+ "$eq": []any{"$$item.k", "$user_id"},
+ },
+ },
+ },
+ 0,
+ },
+ },
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "_query_sort_name": bson.M{
+ "$ifNull": []any{"$_query_sort_name.v", "$nickname"},
+ },
+ },
+ },
+ {
+ "$sort": bson.M{
+ "_query_sort_name": sortValue,
+ },
+ },
+ }
+ return mongoutil.Aggregate[*model.User](ctx, u.coll, pipeline)
+}
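The keyword searches in this file all rely on Mongo's `{"$regex": keyword, "$options": "i"}` filter: an unanchored, case-insensitive pattern match. A small Go-side sketch of the equivalent check, using `(?i)` (this is an approximation for illustration, not how the server evaluates the queries; note that, as in the queries above, the keyword is treated as a regex, so hostile input would need `regexp.QuoteMeta` in a hardened version):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesKeyword approximates Mongo's {"$regex": keyword, "$options": "i"}
// filter: an unanchored, case-insensitive match of keyword against field.
func matchesKeyword(field, keyword string) bool {
	re, err := regexp.Compile("(?i)" + keyword)
	if err != nil {
		return false // invalid pattern: treat as no match
	}
	return re.MatchString(field)
}

func main() {
	fmt.Println(matchesKeyword("Alice", "ali")) // prints true (case-insensitive hit)
	fmt.Println(matchesKeyword("Bob", "ali"))   // prints false
}
```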
diff --git a/pkg/common/storage/database/mgo/version_log.go b/pkg/common/storage/database/mgo/version_log.go
new file mode 100644
index 0000000..1a1705c
--- /dev/null
+++ b/pkg/common/storage/database/mgo/version_log.go
@@ -0,0 +1,304 @@
+package mgo
+
+import (
+ "context"
+ "errors"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/versionctx"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+func NewVersionLog(coll *mongo.Collection) (database.VersionLog, error) {
+ lm := &VersionLogMgo{coll: coll}
+ if err := lm.initIndex(context.Background()); err != nil {
+ return nil, errs.WrapMsg(err, "init version log index failed", "coll", coll.Name())
+ }
+ return lm, nil
+}
+
+type VersionLogMgo struct {
+ coll *mongo.Collection
+}
+
+func (l *VersionLogMgo) initIndex(ctx context.Context) error {
+ _, err := l.coll.Indexes().CreateOne(ctx, mongo.IndexModel{
+ Keys: bson.M{
+ "d_id": 1,
+ },
+ Options: options.Index().SetUnique(true),
+ })
+
+ return err
+}
+
+func (l *VersionLogMgo) IncrVersion(ctx context.Context, dId string, eIds []string, state int32) error {
+ _, err := l.IncrVersionResult(ctx, dId, eIds, state)
+ return err
+}
+
+func (l *VersionLogMgo) IncrVersionResult(ctx context.Context, dId string, eIds []string, state int32) (*model.VersionLog, error) {
+ vl, err := l.incrVersionResult(ctx, dId, eIds, state)
+ if err != nil {
+ return nil, err
+ }
+ versionctx.GetVersionLog(ctx).Append(versionctx.Collection{
+ Name: l.coll.Name(),
+ Doc: vl,
+ })
+ return vl, nil
+}
+
+func (l *VersionLogMgo) incrVersionResult(ctx context.Context, dId string, eIds []string, state int32) (*model.VersionLog, error) {
+ if len(eIds) == 0 {
+ return nil, errs.ErrArgs.WrapMsg("elem id is empty", "dId", dId)
+ }
+ now := time.Now()
+ if res, err := l.writeLogBatch2(ctx, dId, eIds, state, now); err == nil {
+ return res, nil
+ } else if !errors.Is(err, mongo.ErrNoDocuments) {
+ return nil, err
+ }
+ if res, err := l.initDoc(ctx, dId, eIds, state, now); err == nil {
+ return res, nil
+ } else if !mongo.IsDuplicateKeyError(err) {
+ return nil, err
+ }
+ return l.writeLogBatch2(ctx, dId, eIds, state, now)
+}
+
+func (l *VersionLogMgo) initDoc(ctx context.Context, dId string, eIds []string, state int32, now time.Time) (*model.VersionLog, error) {
+ wl := model.VersionLogTable{
+ ID: primitive.NewObjectID(),
+ DID: dId,
+ Logs: make([]model.VersionLogElem, 0, len(eIds)),
+ Version: database.FirstVersion,
+ Deleted: database.DefaultDeleteVersion,
+ LastUpdate: now,
+ }
+ for _, eId := range eIds {
+ wl.Logs = append(wl.Logs, model.VersionLogElem{
+ EID: eId,
+ State: state,
+ Version: database.FirstVersion,
+ LastUpdate: now,
+ })
+ }
+ if _, err := l.coll.InsertOne(ctx, &wl); err != nil {
+ return nil, err
+ }
+ return wl.VersionLog(), nil
+}
+
+func (l *VersionLogMgo) writeLogBatch2(ctx context.Context, dId string, eIds []string, state int32, now time.Time) (*model.VersionLog, error) {
+ if eIds == nil {
+ eIds = []string{}
+ }
+ filter := bson.M{
+ "d_id": dId,
+ }
+ elems := make([]bson.M, 0, len(eIds))
+ for _, eId := range eIds {
+ elems = append(elems, bson.M{
+ "e_id": eId,
+ "version": "$version",
+ "state": state,
+ "last_update": now,
+ })
+ }
+ pipeline := []bson.M{
+ {
+ "$addFields": bson.M{
+ "delete_e_ids": eIds,
+ },
+ },
+ {
+ "$set": bson.M{
+ "version": bson.M{"$add": []any{"$version", 1}},
+ "last_update": now,
+ },
+ },
+ {
+ "$set": bson.M{
+ "logs": bson.M{
+ "$filter": bson.M{
+ "input": "$logs",
+ "as": "log",
+ "cond": bson.M{
+ "$not": bson.M{
+ "$in": []any{"$$log.e_id", "$delete_e_ids"},
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ "$set": bson.M{
+ "logs": bson.M{
+ "$concatArrays": []any{
+ "$logs",
+ elems,
+ },
+ },
+ },
+ },
+ {
+ "$unset": "delete_e_ids",
+ },
+ }
+ projection := bson.M{
+ "logs": 0,
+ }
+ opt := options.FindOneAndUpdate().SetUpsert(false).SetReturnDocument(options.After).SetProjection(projection)
+ res, err := mongoutil.FindOneAndUpdate[*model.VersionLog](ctx, l.coll, filter, pipeline, opt)
+ if err != nil {
+ return nil, err
+ }
+ res.Logs = make([]model.VersionLogElem, 0, len(eIds))
+ for _, id := range eIds {
+ res.Logs = append(res.Logs, model.VersionLogElem{
+ EID: id,
+ State: state,
+ Version: res.Version,
+ LastUpdate: res.LastUpdate,
+ })
+ }
+ return res, nil
+}
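The update pipeline in `writeLogBatch2` does three things atomically: bump the document version (`$add`), drop any existing log entries for the touched element IDs (`$filter`), and append one fresh entry per ID at the new version (`$concatArrays`). An in-memory sketch of that same compaction step, with illustrative stand-in types (not the real `model` structs):

```go
package main

import "fmt"

// logElem is an illustrative stand-in for model.VersionLogElem.
type logElem struct {
	EID     string
	State   int32
	Version uint
}

// versionDoc is an illustrative stand-in for the version log document.
type versionDoc struct {
	Version uint
	Logs    []logElem
}

// incrVersion mirrors the update pipeline above: increment the version,
// filter out stale entries for the touched element IDs ($filter), then
// append one fresh entry per ID at the new version ($concatArrays).
func (d *versionDoc) incrVersion(eIds []string, state int32) {
	d.Version++
	touched := make(map[string]bool, len(eIds))
	for _, id := range eIds {
		touched[id] = true
	}
	kept := d.Logs[:0] // in-place filter of untouched entries
	for _, l := range d.Logs {
		if !touched[l.EID] {
			kept = append(kept, l)
		}
	}
	d.Logs = kept
	for _, id := range eIds {
		d.Logs = append(d.Logs, logElem{EID: id, State: state, Version: d.Version})
	}
}

func main() {
	d := &versionDoc{Version: 1, Logs: []logElem{{EID: "a", State: 0, Version: 1}}}
	d.incrVersion([]string{"a", "b"}, 1)
	fmt.Println(d.Version, len(d.Logs)) // prints: 2 2 (each touched ID once, at v2)
}
```

The point of the filter-then-append shape is that each element ID appears at most once in `logs`, always tagged with the version at which it last changed.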
+
+func (l *VersionLogMgo) findDoc(ctx context.Context, dId string) (*model.VersionLog, error) {
+ vl, err := mongoutil.FindOne[*model.VersionLogTable](ctx, l.coll, bson.M{"d_id": dId}, options.FindOne().SetProjection(bson.M{"logs": 0}))
+ if err != nil {
+ return nil, err
+ }
+ return vl.VersionLog(), nil
+}
+
+func (l *VersionLogMgo) FindChangeLog(ctx context.Context, dId string, version uint, limit int) (*model.VersionLog, error) {
+ if wl, err := l.findChangeLog(ctx, dId, version, limit); err == nil {
+ return wl, nil
+ } else if !errors.Is(err, mongo.ErrNoDocuments) {
+ return nil, err
+ }
+ log.ZDebug(ctx, "init doc", "dId", dId)
+ if res, err := l.initDoc(ctx, dId, nil, 0, time.Now()); err == nil {
+ log.ZDebug(ctx, "init doc success", "dId", dId)
+ return res, nil
+ } else if mongo.IsDuplicateKeyError(err) {
+ return l.findChangeLog(ctx, dId, version, limit)
+ } else {
+ return nil, err
+ }
+}
+
+func (l *VersionLogMgo) BatchFindChangeLog(ctx context.Context, dIds []string, versions []uint, limits []int) (vLogs []*model.VersionLog, err error) {
+	for i := 0; i < len(dIds); i++ {
+		if vLog, err := l.findChangeLog(ctx, dIds[i], versions[i], limits[i]); err == nil {
+			vLogs = append(vLogs, vLog)
+			continue
+		} else if !errors.Is(err, mongo.ErrNoDocuments) {
+			log.ZError(ctx, "findChangeLog error", errs.Wrap(err))
+			continue
+		}
+		log.ZDebug(ctx, "init doc", "dId", dIds[i])
+		if res, err := l.initDoc(ctx, dIds[i], nil, 0, time.Now()); err == nil {
+			log.ZDebug(ctx, "init doc success", "dId", dIds[i])
+			vLogs = append(vLogs, res)
+		} else if mongo.IsDuplicateKeyError(err) {
+			// Another writer initialized the doc first; retry the read and keep the result.
+			if vLog, err := l.findChangeLog(ctx, dIds[i], versions[i], limits[i]); err == nil {
+				vLogs = append(vLogs, vLog)
+			} else {
+				log.ZError(ctx, "findChangeLog after duplicate key error", errs.Wrap(err))
+			}
+		} else {
+			log.ZError(ctx, "init doc error", errs.Wrap(err))
+		}
+	}
+	return vLogs, nil
+}
+
+func (l *VersionLogMgo) findChangeLog(ctx context.Context, dId string, version uint, limit int) (*model.VersionLog, error) {
+ if version == 0 && limit == 0 {
+ return l.findDoc(ctx, dId)
+ }
+ pipeline := []bson.M{
+ {
+ "$match": bson.M{
+ "d_id": dId,
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "logs": bson.M{
+ "$cond": bson.M{
+ "if": bson.M{
+ "$or": []bson.M{
+ {"$lt": []any{"$version", version}},
+ {"$gte": []any{"$deleted", version}},
+ },
+ },
+ "then": []any{},
+ "else": "$logs",
+ },
+ },
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "logs": bson.M{
+ "$filter": bson.M{
+ "input": "$logs",
+ "as": "l",
+ "cond": bson.M{
+ "$gt": []any{"$$l.version", version},
+ },
+ },
+ },
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "log_len": bson.M{"$size": "$logs"},
+ },
+ },
+ {
+ "$addFields": bson.M{
+ "logs": bson.M{
+ "$cond": bson.M{
+ "if": bson.M{
+ "$gt": []any{"$log_len", limit},
+ },
+ "then": []any{},
+ "else": "$logs",
+ },
+ },
+ },
+ },
+ }
+ if limit <= 0 {
+ pipeline = pipeline[:len(pipeline)-1]
+ }
+ vl, err := mongoutil.Aggregate[*model.VersionLog](ctx, l.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ if len(vl) == 0 {
+ return nil, mongo.ErrNoDocuments
+ }
+ return vl[0], nil
+}
+
+func (l *VersionLogMgo) DeleteAfterUnchangedLog(ctx context.Context, deadline time.Time) error {
+ return mongoutil.DeleteMany(ctx, l.coll, bson.M{
+ "last_update": bson.M{
+ "$lt": deadline,
+ },
+ })
+}
+
+func (l *VersionLogMgo) Delete(ctx context.Context, dId string) error {
+ return mongoutil.DeleteOne(ctx, l.coll, bson.M{"d_id": dId})
+}
diff --git a/pkg/common/storage/database/mgo/version_test.go b/pkg/common/storage/database/mgo/version_test.go
new file mode 100644
index 0000000..6500790
--- /dev/null
+++ b/pkg/common/storage/database/mgo/version_test.go
@@ -0,0 +1,40 @@
+package mgo
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// Result unwraps a (value, error) pair, panicking on error; TestName relies on it.
+func Result[V any](val V, err error) V {
+	if err != nil {
+		panic(err)
+	}
+	return val
+}
+
+func Check(err error) {
+ if err != nil {
+ panic(err)
+ }
+}
+
+func TestName(t *testing.T) {
+ cli := Result(mongo.Connect(context.Background(), options.Client().ApplyURI("mongodb://openIM:openIM123@172.16.8.48:37017/openim_v3?maxPoolSize=100").SetConnectTimeout(5*time.Second)))
+ coll := cli.Database("openim_v3").Collection("version_test")
+ tmp, err := NewVersionLog(coll)
+ if err != nil {
+ panic(err)
+ }
+ vl := tmp.(*VersionLogMgo)
+ res, err := vl.incrVersionResult(context.Background(), "100", []string{"1000", "1001", "1003"}, model.VersionStateInsert)
+ if err != nil {
+ t.Log(err)
+ return
+ }
+ t.Logf("%+v", res)
+}
diff --git a/pkg/common/storage/database/mgo/wallet.go b/pkg/common/storage/database/mgo/wallet.go
new file mode 100644
index 0000000..e239c91
--- /dev/null
+++ b/pkg/common/storage/database/mgo/wallet.go
@@ -0,0 +1,231 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/pagination"
+ "github.com/openimsdk/tools/errs"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+// WalletMgo implements Wallet using MongoDB as the storage backend.
+type WalletMgo struct {
+ coll *mongo.Collection
+}
+
+// NewWalletMongo creates a new instance of WalletMgo with the provided MongoDB database.
+func NewWalletMongo(db *mongo.Database) (database.Wallet, error) {
+ coll := db.Collection(database.WalletName)
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "user_id", Value: 1}},
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{{Key: "create_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "update_time", Value: -1}},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &WalletMgo{coll: coll}, nil
+}
+
+// Create creates a new wallet record.
+func (w *WalletMgo) Create(ctx context.Context, wallet *model.Wallet) error {
+ if wallet.CreateTime.IsZero() {
+ wallet.CreateTime = time.Now()
+ }
+ if wallet.UpdateTime.IsZero() {
+ wallet.UpdateTime = time.Now()
+ }
+ if wallet.Version == 0 {
+ wallet.Version = 1
+ }
+ return mongoutil.InsertOne(ctx, w.coll, wallet)
+}
+
+// Take retrieves a wallet by user ID. Returns an error if not found.
+func (w *WalletMgo) Take(ctx context.Context, userID string) (*model.Wallet, error) {
+ return mongoutil.FindOne[*model.Wallet](ctx, w.coll, bson.M{"user_id": userID})
+}
+
+// UpdateBalance updates the balance of a wallet.
+func (w *WalletMgo) UpdateBalance(ctx context.Context, userID string, balance int64) error {
+ update := bson.M{
+ "$set": bson.M{
+ "balance": balance,
+ "update_time": time.Now(),
+ },
+ }
+ return mongoutil.UpdateOne(ctx, w.coll, bson.M{"user_id": userID}, update, false)
+}
+
+// UpdateBalanceByAmount updates the balance by adding/subtracting an amount.
+func (w *WalletMgo) UpdateBalanceByAmount(ctx context.Context, userID string, amount int64) error {
+ update := bson.M{
+ "$inc": bson.M{
+ "balance": amount,
+ },
+ "$set": bson.M{
+ "update_time": time.Now(),
+ },
+ }
+ return mongoutil.UpdateOne(ctx, w.coll, bson.M{"user_id": userID}, update, false)
+}
+
+// UpdateBalanceWithVersion updates the balance guarded by a version number (optimistic locking against concurrent overwrites)
+func (w *WalletMgo) UpdateBalanceWithVersion(ctx context.Context, params *database.WalletUpdateParams) (*database.WalletUpdateResult, error) {
+	// A single-document atomic operation avoids concurrent overwrites
+ if params.Amount < 0 {
+ return nil, errs.ErrArgs.WrapMsg("amount cannot be negative")
+ }
+
+	// Stay compatible with legacy documents (no version field, or version = 0)
+ filter := bson.M{
+ "user_id": params.UserID,
+ "$or": []bson.M{
+ {"version": params.OldVersion},
+ {"version": bson.M{"$exists": false}},
+ {"version": 0},
+ },
+ }
+
+	// Base update: refresh update_time and increment the version
+ update := bson.M{
+ "$set": bson.M{
+ "update_time": time.Now(),
+ },
+ "$inc": bson.M{
+ "version": 1,
+ },
+ }
+
+	switch params.Operation {
+	case "add":
+		update["$inc"].(bson.M)["balance"] = params.Amount
+	case "subtract":
+		update["$inc"].(bson.M)["balance"] = -params.Amount
+		// Prevent a negative balance: the filter requires the current balance >= amount
+		filter["balance"] = bson.M{"$gte": params.Amount}
+	case "set":
+		// Set the balance directly; keep the version bump in $inc, because putting
+		// "version" in both $set and $inc would make MongoDB reject the update
+		// with an update-path conflict
+		update["$set"].(bson.M)["balance"] = params.Amount
+	default:
+		return nil, errs.ErrArgs.WrapMsg("invalid operation: " + params.Operation)
+	}
+
+	// findOneAndUpdate returns the post-update document
+ opts := options.FindOneAndUpdate().SetReturnDocument(options.After)
+ var updatedWallet model.Wallet
+ err := w.coll.FindOneAndUpdate(ctx, filter, update, opts).Decode(&updatedWallet)
+ if err != nil {
+ if err == mongo.ErrNoDocuments {
+			// Version or balance mismatch: a concurrent modification won the race
+ return nil, errs.ErrInternalServer.WrapMsg("concurrent modification detected: version or balance mismatch")
+ }
+ return nil, err
+ }
+
+ return &database.WalletUpdateResult{
+ NewBalance: updatedWallet.Balance,
+ NewVersion: updatedWallet.Version,
+ Success: true,
+ }, nil
+}
+
+// FindAllWallets finds all wallets with pagination.
+func (w *WalletMgo) FindAllWallets(ctx context.Context, pagination pagination.Pagination) (total int64, wallets []*model.Wallet, err error) {
+ return mongoutil.FindPage[*model.Wallet](ctx, w.coll, bson.M{}, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "create_time", Value: -1}},
+ })
+}
+
+// FindWalletsByUserIDs finds wallets by user IDs.
+func (w *WalletMgo) FindWalletsByUserIDs(ctx context.Context, userIDs []string) ([]*model.Wallet, error) {
+ if len(userIDs) == 0 {
+ return []*model.Wallet{}, nil
+ }
+ filter := bson.M{"user_id": bson.M{"$in": userIDs}}
+ return mongoutil.Find[*model.Wallet](ctx, w.coll, filter)
+}
+
+// WalletBalanceRecordMgo implements WalletBalanceRecord using MongoDB as the storage backend.
+type WalletBalanceRecordMgo struct {
+ coll *mongo.Collection
+}
+
+// NewWalletBalanceRecordMongo creates a new instance of WalletBalanceRecordMgo with the provided MongoDB database.
+func NewWalletBalanceRecordMongo(db *mongo.Database) (database.WalletBalanceRecord, error) {
+ coll := db.Collection(database.WalletBalanceRecordName)
+
+	// Best-effort: drop any pre-existing non-sparse record_id index.
+	// Drop errors are ignored (the index may simply not exist).
+ _, _ = coll.Indexes().DropOne(context.Background(), "record_id_1")
+
+	// Create indexes; the sparse unique index allows record_id to be null or absent
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{{Key: "user_id", Value: 1}, {Key: "create_time", Value: -1}},
+ },
+ {
+ Keys: bson.D{{Key: "record_id", Value: 1}},
+ Options: options.Index().SetUnique(true).SetSparse(true),
+ },
+ {
+ Keys: bson.D{{Key: "create_time", Value: -1}},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &WalletBalanceRecordMgo{coll: coll}, nil
+}
+
+// Create creates a new wallet balance record.
+func (w *WalletBalanceRecordMgo) Create(ctx context.Context, record *model.WalletBalanceRecord) error {
+ if record.CreateTime.IsZero() {
+ record.CreateTime = time.Now()
+ }
+ return mongoutil.InsertOne(ctx, w.coll, record)
+}
+
+// FindByUserID finds all balance records for a user with pagination.
+func (w *WalletBalanceRecordMgo) FindByUserID(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, records []*model.WalletBalanceRecord, err error) {
+ filter := bson.M{"user_id": userID}
+ return mongoutil.FindPage[*model.WalletBalanceRecord](ctx, w.coll, filter, pagination, &options.FindOptions{
+ Sort: bson.D{{Key: "create_time", Value: -1}},
+ })
+}
diff --git a/pkg/common/storage/database/msg.go b/pkg/common/storage/database/msg.go
new file mode 100644
index 0000000..dd6573b
--- /dev/null
+++ b/pkg/common/storage/database/msg.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/msg"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+type Msg interface {
+ Create(ctx context.Context, model *model.MsgDocModel) error
+ UpdateMsg(ctx context.Context, docID string, index int64, key string, value any) (*mongo.UpdateResult, error)
+ PushUnique(ctx context.Context, docID string, index int64, key string, value any) (*mongo.UpdateResult, error)
+ FindOneByDocID(ctx context.Context, docID string) (*model.MsgDocModel, error)
+ GetMsgBySeqIndexIn1Doc(ctx context.Context, userID, docID string, seqs []int64) ([]*model.MsgInfoModel, error)
+ GetNewestMsg(ctx context.Context, conversationID string) (*model.MsgInfoModel, error)
+ GetOldestMsg(ctx context.Context, conversationID string) (*model.MsgInfoModel, error)
+ DeleteMsgsInOneDocByIndex(ctx context.Context, docID string, indexes []int) error
+ MarkSingleChatMsgsAsRead(ctx context.Context, userID string, docID string, indexes []int64) error
+ SearchMessage(ctx context.Context, req *msg.SearchMessageReq) (int64, []*model.MsgInfoModel, error)
+ CountUserSendMessages(ctx context.Context, sendID string, startTime int64, endTime int64, content string) (int64, error)
+ SearchUserMessages(ctx context.Context, sendID string, startTime int64, endTime int64, content string, pageNumber int32, showNumber int32) (int64, []*model.MsgInfoModel, error)
+	// CountUserSendMessagesTrend returns per-interval counts of messages the user sent within [startTime, endTime]
+ CountUserSendMessagesTrend(ctx context.Context, sendID string, sessionTypes []int32, startTime int64, endTime int64, intervalMillis int64) (map[int64]int64, error)
+ RangeUserSendCount(ctx context.Context, start time.Time, end time.Time, group bool, ase bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, users []*model.UserCount, dateCount map[string]int64, err error)
+ RangeGroupSendCount(ctx context.Context, start time.Time, end time.Time, ase bool, pageNumber int32, showNumber int32) (msgCount int64, userCount int64, groups []*model.GroupCount, dateCount map[string]int64, err error)
+ DeleteDoc(ctx context.Context, docID string) error
+ GetRandBeforeMsg(ctx context.Context, ts int64, limit int) ([]*model.MsgDocModel, error)
+ GetLastMessageSeqByTime(ctx context.Context, conversationID string, time int64) (int64, error)
+ GetLastMessage(ctx context.Context, conversationID string) (*model.MsgInfoModel, error)
+ FindSeqs(ctx context.Context, conversationID string, seqs []int64) ([]*model.MsgInfoModel, error)
+}
diff --git a/pkg/common/storage/database/name.go b/pkg/common/storage/database/name.go
new file mode 100644
index 0000000..2394cc7
--- /dev/null
+++ b/pkg/common/storage/database/name.go
@@ -0,0 +1,29 @@
+package database
+
+const (
+ BlackName = "black"
+ ConversationName = "conversation"
+ FriendName = "friend"
+ FriendVersionName = "friend_version"
+ FriendRequestName = "friend_request"
+ GroupName = "group"
+ GroupMemberName = "group_member"
+ GroupMemberVersionName = "group_member_version"
+ GroupJoinVersionName = "group_join_version"
+ ConversationVersionName = "conversation_version"
+ GroupRequestName = "group_request"
+ LogName = "log"
+ ObjectName = "s3"
+ UserName = "user"
+ SeqConversationName = "seq"
+ SeqUserName = "seq_user"
+ StreamMsgName = "stream_msg"
+ CacheName = "cache"
+ RedPacketName = "red_packet"
+ RedPacketReceiveName = "red_packet_receive"
+ WalletName = "wallets"
+ WalletBalanceRecordName = "wallet_balance_records"
+ MeetingName = "meetings"
+ MeetingCheckInName = "meeting_checkins"
+ SystemConfigName = "system_configs"
+)
diff --git a/pkg/common/storage/database/object.go b/pkg/common/storage/database/object.go
new file mode 100644
index 0000000..453c1f4
--- /dev/null
+++ b/pkg/common/storage/database/object.go
@@ -0,0 +1,34 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+type ObjectInfo interface {
+ SetObject(ctx context.Context, obj *model.Object) error
+ Take(ctx context.Context, engine string, name string) (*model.Object, error)
+ Delete(ctx context.Context, engine string, name []string) error
+ FindExpirationObject(ctx context.Context, engine string, expiration time.Time, needDelType []string, count int64) ([]*model.Object, error)
+ GetKeyCount(ctx context.Context, engine string, key string) (int64, error)
+
+ GetEngineCount(ctx context.Context, engine string) (int64, error)
+ GetEngineInfo(ctx context.Context, engine string, limit int, skip int) ([]*model.Object, error)
+ UpdateEngine(ctx context.Context, oldEngine, oldName string, newEngine string) error
+}
diff --git a/pkg/common/storage/database/redpacket.go b/pkg/common/storage/database/redpacket.go
new file mode 100644
index 0000000..f201a96
--- /dev/null
+++ b/pkg/common/storage/database/redpacket.go
@@ -0,0 +1,62 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+// RedPacket defines the operations for managing red packets in MongoDB.
+type RedPacket interface {
+ // Create creates a new red packet record.
+ Create(ctx context.Context, redPacket *model.RedPacket) error
+ // Take retrieves a red packet by ID. Returns an error if not found.
+ Take(ctx context.Context, redPacketID string) (*model.RedPacket, error)
+ // UpdateStatus updates the status of a red packet.
+ UpdateStatus(ctx context.Context, redPacketID string, status int32) error
+ // UpdateRemain updates the remain amount and count of a red packet.
+ UpdateRemain(ctx context.Context, redPacketID string, remainAmount int64, remainCount int32) error
+	// DecreaseRemainAtomic atomically decrements the red packet's remaining count and amount (guards against concurrent over-claiming).
+	// The update only applies while remain_count > 0; it returns the red packet after the update.
+ DecreaseRemainAtomic(ctx context.Context, redPacketID string, amount int64) (*model.RedPacket, error)
+ // FindExpiredRedPackets finds red packets that have expired.
+ FindExpiredRedPackets(ctx context.Context, beforeTime time.Time) ([]*model.RedPacket, error)
+ // FindRedPacketsByUser finds red packets sent by a user with pagination.
+ FindRedPacketsByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error)
+ // FindRedPacketsByGroup finds red packets in a group with pagination.
+ FindRedPacketsByGroup(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error)
+ // FindAllRedPackets finds all red packets with pagination.
+ FindAllRedPackets(ctx context.Context, pagination pagination.Pagination) (total int64, redPackets []*model.RedPacket, err error)
+}
+
+// RedPacketReceive defines the operations for managing red packet receives in MongoDB.
+type RedPacketReceive interface {
+ // Create creates a new red packet receive record.
+ Create(ctx context.Context, receive *model.RedPacketReceive) error
+ // Take retrieves a receive record by ID. Returns an error if not found.
+ Take(ctx context.Context, receiveID string) (*model.RedPacketReceive, error)
+ // FindByRedPacketID finds all receive records for a red packet.
+ FindByRedPacketID(ctx context.Context, redPacketID string) ([]*model.RedPacketReceive, error)
+ // FindByUserAndRedPacketID finds if a user has received a specific red packet.
+ FindByUserAndRedPacketID(ctx context.Context, userID, redPacketID string) (*model.RedPacketReceive, error)
+ // FindByUser finds all red packets received by a user with pagination.
+ FindByUser(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, receives []*model.RedPacketReceive, err error)
+ // DeleteByReceiveID deletes a receive record by receive ID (for cleanup on failure).
+ DeleteByReceiveID(ctx context.Context, receiveID string) error
+}
diff --git a/pkg/common/storage/database/seq.go b/pkg/common/storage/database/seq.go
new file mode 100644
index 0000000..a97ca2d
--- /dev/null
+++ b/pkg/common/storage/database/seq.go
@@ -0,0 +1,16 @@
+package database
+
+import "context"
+
+type SeqTime struct {
+ Seq int64
+ Time int64
+}
+
+type SeqConversation interface {
+ Malloc(ctx context.Context, conversationID string, size int64) (int64, error)
+ GetMaxSeq(ctx context.Context, conversationID string) (int64, error)
+ SetMaxSeq(ctx context.Context, conversationID string, seq int64) error
+ GetMinSeq(ctx context.Context, conversationID string) (int64, error)
+ SetMinSeq(ctx context.Context, conversationID string, seq int64) error
+}
diff --git a/pkg/common/storage/database/seq_user.go b/pkg/common/storage/database/seq_user.go
new file mode 100644
index 0000000..9f75c71
--- /dev/null
+++ b/pkg/common/storage/database/seq_user.go
@@ -0,0 +1,13 @@
+package database
+
+import "context"
+
+type SeqUser interface {
+ GetUserMaxSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserMaxSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ GetUserMinSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserMinSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ GetUserReadSeq(ctx context.Context, conversationID string, userID string) (int64, error)
+ SetUserReadSeq(ctx context.Context, conversationID string, userID string, seq int64) error
+ GetUserReadSeqs(ctx context.Context, userID string, conversationID []string) (map[string]int64, error)
+}
diff --git a/pkg/common/storage/database/system_config.go b/pkg/common/storage/database/system_config.go
new file mode 100644
index 0000000..813bea0
--- /dev/null
+++ b/pkg/common/storage/database/system_config.go
@@ -0,0 +1,39 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+// SystemConfig defines the operations for managing system configurations in MongoDB.
+type SystemConfig interface {
+ // Create creates a new system config record.
+ Create(ctx context.Context, config *model.SystemConfig) error
+ // Take retrieves a system config by key. Returns an error if not found.
+ Take(ctx context.Context, key string) (*model.SystemConfig, error)
+ // Update updates system config information.
+ Update(ctx context.Context, key string, data map[string]any) error
+ // Find finds system configs by keys.
+ Find(ctx context.Context, keys []string) ([]*model.SystemConfig, error)
+ // FindEnabled finds all enabled system configs.
+ FindEnabled(ctx context.Context) ([]*model.SystemConfig, error)
+ // FindByKey finds a system config by key (returns nil if not found, no error).
+ FindByKey(ctx context.Context, key string) (*model.SystemConfig, error)
+ // Delete deletes a system config by key.
+ Delete(ctx context.Context, key string) error
+}
diff --git a/pkg/common/storage/database/user.go b/pkg/common/storage/database/user.go
new file mode 100644
index 0000000..3c6c091
--- /dev/null
+++ b/pkg/common/storage/database/user.go
@@ -0,0 +1,56 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+type User interface {
+ Create(ctx context.Context, users []*model.User) (err error)
+ UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error)
+ Find(ctx context.Context, userIDs []string) (users []*model.User, err error)
+ Take(ctx context.Context, userID string) (user *model.User, err error)
+ TakeNotification(ctx context.Context, level int64) (user []*model.User, err error)
+ TakeGTEAppManagerLevel(ctx context.Context, level int64) (user []*model.User, err error)
+ TakeByNickname(ctx context.Context, nickname string) (user []*model.User, err error)
+ Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+ PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+ PageFindUserWithKeyword(ctx context.Context, level1 int64, level2 int64, userID, nickName string, pagination pagination.Pagination) (count int64, users []*model.User, err error)
+ // SearchUsersByFields searches users by multiple fields: account (userID), phone, nickname
+ // Returns userIDs that match the search criteria
+ SearchUsersByFields(ctx context.Context, account, phone, nickname string) (userIDs []string, err error)
+ Exist(ctx context.Context, userID string) (exist bool, err error)
+ GetAllUserID(ctx context.Context, pagination pagination.Pagination) (count int64, userIDs []string, err error)
+ GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error)
+	// CountTotal returns the total number of users, optionally only those created before the given time
+ CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
+	// CountRangeEverydayTotal returns per-day user creation counts within [start, end]
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+
+ SortQuery(ctx context.Context, userIDName map[string]string, asc bool) ([]*model.User, error)
+
+ // CRUD user command
+ AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error
+ DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error
+ UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error
+ GetUserCommand(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error)
+ GetAllUserCommand(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error)
+}
diff --git a/pkg/common/storage/database/version_log.go b/pkg/common/storage/database/version_log.go
new file mode 100644
index 0000000..7c2f0c4
--- /dev/null
+++ b/pkg/common/storage/database/version_log.go
@@ -0,0 +1,21 @@
+package database
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+const (
+ FirstVersion = 1
+ DefaultDeleteVersion = 0
+)
+
+type VersionLog interface {
+ IncrVersion(ctx context.Context, dId string, eIds []string, state int32) error
+ FindChangeLog(ctx context.Context, dId string, version uint, limit int) (*model.VersionLog, error)
+ BatchFindChangeLog(ctx context.Context, dIds []string, versions []uint, limits []int) ([]*model.VersionLog, error)
+ DeleteAfterUnchangedLog(ctx context.Context, deadline time.Time) error
+ Delete(ctx context.Context, dId string) error
+}
diff --git a/pkg/common/storage/database/wallet.go b/pkg/common/storage/database/wallet.go
new file mode 100644
index 0000000..4fcbf6f
--- /dev/null
+++ b/pkg/common/storage/database/wallet.go
@@ -0,0 +1,65 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package database
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/db/pagination"
+)
+
+// WalletUpdateParams holds the parameters for a wallet update.
+type WalletUpdateParams struct {
+	UserID     string // user ID
+	Operation  string // operation type: "set" (overwrite balance), "add" (credit), "subtract" (debit)
+	Amount     int64  // amount in cents
+	OldBalance int64  // previous balance (optimistic-lock check)
+	OldVersion int64  // previous version (optimistic-lock check)
+}
+
+// WalletUpdateResult is the outcome of a wallet update.
+type WalletUpdateResult struct {
+	NewBalance int64 // new balance
+	NewVersion int64 // new version
+	Success    bool  // whether the update succeeded
+}
+
+// Wallet defines the operations for managing user wallets in MongoDB.
+type Wallet interface {
+ // Create creates a new wallet record.
+ Create(ctx context.Context, wallet *model.Wallet) error
+ // Take retrieves a wallet by user ID. Returns an error if not found.
+ Take(ctx context.Context, userID string) (*model.Wallet, error)
+ // UpdateBalance updates the balance of a wallet.
+ UpdateBalance(ctx context.Context, userID string, balance int64) error
+ // UpdateBalanceByAmount updates the balance by adding/subtracting an amount.
+ UpdateBalanceByAmount(ctx context.Context, userID string, amount int64) error
+	// UpdateBalanceWithVersion updates the balance guarded by a version number (optimistic locking against concurrent overwrites).
+	// Returns an error if oldVersion does not match the wallet's current version.
+ UpdateBalanceWithVersion(ctx context.Context, params *WalletUpdateParams) (*WalletUpdateResult, error)
+ // FindAllWallets finds all wallets with pagination.
+ FindAllWallets(ctx context.Context, pagination pagination.Pagination) (total int64, wallets []*model.Wallet, err error)
+ // FindWalletsByUserIDs finds wallets by user IDs.
+ FindWalletsByUserIDs(ctx context.Context, userIDs []string) ([]*model.Wallet, error)
+}
+
+// WalletBalanceRecord defines the operations for managing wallet balance records in MongoDB.
+type WalletBalanceRecord interface {
+ // Create creates a new wallet balance record.
+ Create(ctx context.Context, record *model.WalletBalanceRecord) error
+ // FindByUserID finds all balance records for a user with pagination.
+ FindByUserID(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, records []*model.WalletBalanceRecord, err error)
+}
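The version fields on `WalletUpdateParams` and `WalletUpdateResult` imply a compare-and-set protocol: read the wallet, compute the new balance, and commit only if the stored version still matches, bumping the version on success. A minimal in-memory sketch of that protocol (the `memStore` type and its method names are hypothetical illustrations, not part of this diff):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// memWallet is a hypothetical in-memory stand-in for the MongoDB wallet document.
type memWallet struct {
	Balance int64
	Version int64
}

type memStore struct {
	mu      sync.Mutex
	wallets map[string]*memWallet
}

var errVersionConflict = errors.New("wallet version conflict")

// updateWithVersion applies the optimistic-lock rule implied by
// UpdateBalanceWithVersion: the write succeeds only when oldVersion
// still matches, and each successful write increments the version.
func (s *memStore) updateWithVersion(userID string, amount, oldVersion int64) (int64, int64, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	w := s.wallets[userID]
	if w.Version != oldVersion {
		return 0, 0, errVersionConflict
	}
	w.Balance += amount
	w.Version++
	return w.Balance, w.Version, nil
}

func main() {
	s := &memStore{wallets: map[string]*memWallet{"u1": {Balance: 100, Version: 1}}}
	bal, ver, err := s.updateWithVersion("u1", 50, 1)
	fmt.Println(bal, ver, err) // 150 2 <nil>
	_, _, err = s.updateWithVersion("u1", 50, 1) // stale version: rejected
	fmt.Println(err != nil)
}
```

A caller that sees a conflict would typically re-read the wallet and retry; in MongoDB the same check can be expressed as a filter on both `user_id` and the version field in a single conditional update.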
diff --git a/pkg/common/storage/model/application.go b/pkg/common/storage/model/application.go
new file mode 100644
index 0000000..b09b0e8
--- /dev/null
+++ b/pkg/common/storage/model/application.go
@@ -0,0 +1,18 @@
+package model
+
+import (
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "time"
+)
+
+type Application struct {
+ ID primitive.ObjectID `bson:"_id"`
+ Platform string `bson:"platform"`
+ Hot bool `bson:"hot"`
+ Version string `bson:"version"`
+ Url string `bson:"url"`
+ Text string `bson:"text"`
+ Force bool `bson:"force"`
+ Latest bool `bson:"latest"`
+ CreateTime time.Time `bson:"create_time"`
+}
diff --git a/pkg/common/storage/model/black.go b/pkg/common/storage/model/black.go
new file mode 100644
index 0000000..5e60a2f
--- /dev/null
+++ b/pkg/common/storage/model/black.go
@@ -0,0 +1,28 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type Black struct {
+ OwnerUserID string `bson:"owner_user_id"`
+ BlockUserID string `bson:"block_user_id"`
+ CreateTime time.Time `bson:"create_time"`
+ AddSource int32 `bson:"add_source"`
+ OperatorUserID string `bson:"operator_user_id"`
+ Ex string `bson:"ex"`
+}
diff --git a/pkg/common/storage/model/cache.go b/pkg/common/storage/model/cache.go
new file mode 100644
index 0000000..4bbc55e
--- /dev/null
+++ b/pkg/common/storage/model/cache.go
@@ -0,0 +1,9 @@
+package model
+
+import "time"
+
+type Cache struct {
+ Key string `bson:"key"`
+ Value string `bson:"value"`
+ ExpireAt *time.Time `bson:"expire_at"`
+}
diff --git a/pkg/common/storage/model/client_config.go b/pkg/common/storage/model/client_config.go
new file mode 100644
index 0000000..f06e291
--- /dev/null
+++ b/pkg/common/storage/model/client_config.go
@@ -0,0 +1,7 @@
+package model
+
+type ClientConfig struct {
+ Key string `bson:"key"`
+ UserID string `bson:"user_id"`
+ Value string `bson:"value"`
+}
diff --git a/pkg/common/storage/model/conversation.go b/pkg/common/storage/model/conversation.go
new file mode 100644
index 0000000..590899b
--- /dev/null
+++ b/pkg/common/storage/model/conversation.go
@@ -0,0 +1,40 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type Conversation struct {
+ OwnerUserID string `bson:"owner_user_id"`
+ ConversationID string `bson:"conversation_id"`
+ ConversationType int32 `bson:"conversation_type"`
+ UserID string `bson:"user_id"`
+ GroupID string `bson:"group_id"`
+ RecvMsgOpt int32 `bson:"recv_msg_opt"`
+ IsPinned bool `bson:"is_pinned"`
+ IsPrivateChat bool `bson:"is_private_chat"`
+ BurnDuration int32 `bson:"burn_duration"`
+ GroupAtType int32 `bson:"group_at_type"`
+ AttachedInfo string `bson:"attached_info"`
+ Ex string `bson:"ex"`
+ MaxSeq int64 `bson:"max_seq"`
+ MinSeq int64 `bson:"min_seq"`
+ CreateTime time.Time `bson:"create_time"`
+ IsMsgDestruct bool `bson:"is_msg_destruct"`
+ MsgDestructTime int64 `bson:"msg_destruct_time"`
+ LatestMsgDestructTime time.Time `bson:"latest_msg_destruct_time"`
+}
diff --git a/pkg/common/storage/model/doc.go b/pkg/common/storage/model/doc.go
new file mode 100644
index 0000000..b0f988f
--- /dev/null
+++ b/pkg/common/storage/model/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
diff --git a/pkg/common/storage/model/friend.go b/pkg/common/storage/model/friend.go
new file mode 100644
index 0000000..abcca2f
--- /dev/null
+++ b/pkg/common/storage/model/friend.go
@@ -0,0 +1,33 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "time"
+)
+
+// Friend represents the data structure for a friend relationship in MongoDB.
+type Friend struct {
+ ID primitive.ObjectID `bson:"_id"`
+ OwnerUserID string `bson:"owner_user_id"`
+ FriendUserID string `bson:"friend_user_id"`
+ Remark string `bson:"remark"`
+ CreateTime time.Time `bson:"create_time"`
+ AddSource int32 `bson:"add_source"`
+ OperatorUserID string `bson:"operator_user_id"`
+ Ex string `bson:"ex"`
+ IsPinned bool `bson:"is_pinned"`
+}
diff --git a/pkg/common/storage/model/friend_request.go b/pkg/common/storage/model/friend_request.go
new file mode 100644
index 0000000..7835690
--- /dev/null
+++ b/pkg/common/storage/model/friend_request.go
@@ -0,0 +1,31 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type FriendRequest struct {
+ FromUserID string `bson:"from_user_id"`
+ ToUserID string `bson:"to_user_id"`
+ HandleResult int32 `bson:"handle_result"`
+ ReqMsg string `bson:"req_msg"`
+ CreateTime time.Time `bson:"create_time"`
+ HandlerUserID string `bson:"handler_user_id"`
+ HandleMsg string `bson:"handle_msg"`
+ HandleTime time.Time `bson:"handle_time"`
+ Ex string `bson:"ex"`
+}
diff --git a/pkg/common/storage/model/group.go b/pkg/common/storage/model/group.go
new file mode 100644
index 0000000..714fcc7
--- /dev/null
+++ b/pkg/common/storage/model/group.go
@@ -0,0 +1,37 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type Group struct {
+ GroupID string `bson:"group_id"`
+ GroupName string `bson:"group_name"`
+ Notification string `bson:"notification"`
+ Introduction string `bson:"introduction"`
+ FaceURL string `bson:"face_url"`
+ CreateTime time.Time `bson:"create_time"`
+ Ex string `bson:"ex"`
+ Status int32 `bson:"status"`
+ CreatorUserID string `bson:"creator_user_id"`
+ GroupType int32 `bson:"group_type"`
+ NeedVerification int32 `bson:"need_verification"`
+ LookMemberInfo int32 `bson:"look_member_info"`
+ ApplyMemberFriend int32 `bson:"apply_member_friend"`
+ NotificationUpdateTime time.Time `bson:"notification_update_time"`
+ NotificationUserID string `bson:"notification_user_id"`
+}
diff --git a/pkg/common/storage/model/group_member.go b/pkg/common/storage/model/group_member.go
new file mode 100644
index 0000000..adf0ec0
--- /dev/null
+++ b/pkg/common/storage/model/group_member.go
@@ -0,0 +1,35 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type GroupMember struct {
+ GroupID string `bson:"group_id"`
+ UserID string `bson:"user_id"`
+ Nickname string `bson:"nickname"`
+ FaceURL string `bson:"face_url"`
+ RoleLevel int32 `bson:"role_level"`
+ JoinTime time.Time `bson:"join_time"`
+ JoinSource int32 `bson:"join_source"`
+ InviterUserID string `bson:"inviter_user_id"`
+ OperatorUserID string `bson:"operator_user_id"`
+ MuteEndTime time.Time `bson:"mute_end_time"`
+ Ex string `bson:"ex"`
+ UserType int32 `bson:"user_type"`
+ UserFlag string `bson:"user_flag"`
+}
diff --git a/pkg/common/storage/model/group_request.go b/pkg/common/storage/model/group_request.go
new file mode 100644
index 0000000..d075699
--- /dev/null
+++ b/pkg/common/storage/model/group_request.go
@@ -0,0 +1,33 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type GroupRequest struct {
+ UserID string `bson:"user_id"`
+ GroupID string `bson:"group_id"`
+ HandleResult int32 `bson:"handle_result"`
+ ReqMsg string `bson:"req_msg"`
+ HandledMsg string `bson:"handled_msg"`
+ ReqTime time.Time `bson:"req_time"`
+ HandleUserID string `bson:"handle_user_id"`
+ HandledTime time.Time `bson:"handled_time"`
+ JoinSource int32 `bson:"join_source"`
+ InviterUserID string `bson:"inviter_user_id"`
+ Ex string `bson:"ex"`
+}
diff --git a/pkg/common/storage/model/log.go b/pkg/common/storage/model/log.go
new file mode 100644
index 0000000..9dc3921
--- /dev/null
+++ b/pkg/common/storage/model/log.go
@@ -0,0 +1,32 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type Log struct {
+ LogID string `bson:"log_id"`
+ Platform string `bson:"platform"`
+ UserID string `bson:"user_id"`
+ CreateTime time.Time `bson:"create_time"`
+ Url string `bson:"url"`
+ FileName string `bson:"file_name"`
+ SystemType string `bson:"system_type"`
+ AppFramework string `bson:"app_framework"`
+ Version string `bson:"version"`
+ Ex string `bson:"ex"`
+}
diff --git a/pkg/common/storage/model/meeting.go b/pkg/common/storage/model/meeting.go
new file mode 100644
index 0000000..cd2051d
--- /dev/null
+++ b/pkg/common/storage/model/meeting.go
@@ -0,0 +1,57 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+const (
+ MeetingTableName = "meetings"
+)
+
+// Meeting status values.
+const (
+	MeetingStatusScheduled = 1 // scheduled
+	MeetingStatusOngoing   = 2 // in progress
+	MeetingStatusFinished  = 3 // finished
+	MeetingStatusCancelled = 4 // cancelled
+)
+
+// Meeting is the meeting collection.
+type Meeting struct {
+	MeetingID      string    `bson:"meeting_id"`      // meeting ID (unique)
+	Subject        string    `bson:"subject"`         // meeting subject
+	CoverURL       string    `bson:"cover_url"`       // cover image URL
+	ScheduledTime  time.Time `bson:"scheduled_time"`  // scheduled start time
+	Status         int32     `bson:"status"`          // meeting status: 1-scheduled, 2-in progress, 3-finished, 4-cancelled
+	CreatorUserID  string    `bson:"creator_user_id"` // creator user ID
+	Description    string    `bson:"description"`     // meeting description
+	Duration       int32     `bson:"duration"`        // duration in minutes
+	EstimatedCount int32     `bson:"estimated_count"` // estimated attendee count
+	EnableMic      bool      `bson:"enable_mic"`      // whether co-streaming (mic) is enabled
+	EnableComment  bool      `bson:"enable_comment"`  // whether comments are enabled
+	AnchorUserIDs  []string  `bson:"anchor_user_ids"` // anchor user ID list (multi-select)
+	GroupID        string    `bson:"group_id"`        // associated group chat ID
+	CheckInCount   int32     `bson:"check_in_count"`  // check-in count
+	Password       string    `bson:"password"`        // meeting password (6 digits)
+	CreateTime     time.Time `bson:"create_time"`     // creation time
+	UpdateTime     time.Time `bson:"update_time"`     // update time
+	Ex             string    `bson:"ex"`              // extension field
+}
+
+func (*Meeting) TableName() string {
+ return MeetingTableName
+}
diff --git a/pkg/common/storage/model/meeting_checkin.go b/pkg/common/storage/model/meeting_checkin.go
new file mode 100644
index 0000000..685c1df
--- /dev/null
+++ b/pkg/common/storage/model/meeting_checkin.go
@@ -0,0 +1,38 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+const (
+ MeetingCheckInTableName = "meeting_checkins"
+)
+
+// MeetingCheckIn is the meeting check-in collection.
+type MeetingCheckIn struct {
+	CheckInID   string    `bson:"check_in_id"`   // check-in ID (unique)
+	MeetingID   string    `bson:"meeting_id"`    // meeting ID
+	UserID      string    `bson:"user_id"`       // user ID
+	CheckInTime time.Time `bson:"check_in_time"` // check-in time
+	CreateTime  time.Time `bson:"create_time"`   // creation time
+	Ex          string    `bson:"ex"`            // extension field
+}
+
+func (*MeetingCheckIn) TableName() string {
+ return MeetingCheckInTableName
+}
+
diff --git a/pkg/common/storage/model/msg.go b/pkg/common/storage/model/msg.go
new file mode 100644
index 0000000..d71beb3
--- /dev/null
+++ b/pkg/common/storage/model/msg.go
@@ -0,0 +1,158 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "strconv"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+)
+
+const (
+ singleGocMsgNum = 100
+ singleGocMsgNum5000 = 5000
+ MsgTableName = "msg"
+ OldestList = 0
+ NewestList = -1
+)
+
+var ErrMsgListNotExist = errs.New("user not have msg in mongoDB")
+
+type MsgDocModel struct {
+ DocID string `bson:"doc_id"`
+ Msg []*MsgInfoModel `bson:"msgs"`
+}
+
+type RevokeModel struct {
+ Role int32 `bson:"role"`
+ UserID string `bson:"user_id"`
+ Nickname string `bson:"nickname"`
+ Time int64 `bson:"time"`
+}
+
+type OfflinePushModel struct {
+ Title string `bson:"title"`
+ Desc string `bson:"desc"`
+ Ex string `bson:"ex"`
+ IOSPushSound string `bson:"ios_push_sound"`
+ IOSBadgeCount bool `bson:"ios_badge_count"`
+}
+
+type MsgDataModel struct {
+ SendID string `bson:"send_id"`
+ RecvID string `bson:"recv_id"`
+ GroupID string `bson:"group_id"`
+ ClientMsgID string `bson:"client_msg_id"`
+ ServerMsgID string `bson:"server_msg_id"`
+ SenderPlatformID int32 `bson:"sender_platform_id"`
+ SenderNickname string `bson:"sender_nickname"`
+ SenderFaceURL string `bson:"sender_face_url"`
+ SessionType int32 `bson:"session_type"`
+ MsgFrom int32 `bson:"msg_from"`
+ ContentType int32 `bson:"content_type"`
+ Content string `bson:"content"`
+ Seq int64 `bson:"seq"`
+ SendTime int64 `bson:"send_time"`
+ CreateTime int64 `bson:"create_time"`
+ Status int32 `bson:"status"`
+ IsRead bool `bson:"is_read"`
+ Options map[string]bool `bson:"options"`
+ OfflinePush *OfflinePushModel `bson:"offline_push"`
+ AtUserIDList []string `bson:"at_user_id_list"`
+ AttachedInfo string `bson:"attached_info"`
+ Ex string `bson:"ex"`
+}
+
+type MsgInfoModel struct {
+ Msg *MsgDataModel `bson:"msg"`
+ Revoke *RevokeModel `bson:"revoke"`
+ DelList []string `bson:"del_list"`
+ IsRead bool `bson:"is_read"`
+}
+
+type UserCount struct {
+ UserID string `bson:"user_id"`
+ Count int64 `bson:"count"`
+}
+
+type GroupCount struct {
+ GroupID string `bson:"group_id"`
+ Count int64 `bson:"count"`
+}
+
+func (*MsgDocModel) TableName() string {
+ return MsgTableName
+}
+
+func (*MsgDocModel) GetSingleGocMsgNum() int64 {
+ return singleGocMsgNum
+}
+
+func (*MsgDocModel) GetSingleGocMsgNum5000() int64 {
+ return singleGocMsgNum5000
+}
+
+func (m *MsgDocModel) IsFull() bool {
+ return m.Msg[len(m.Msg)-1].Msg != nil
+}
+
+func (m *MsgDocModel) GetDocIndex(seq int64) int64 {
+ return (seq - 1) / singleGocMsgNum
+}
+
+func (m *MsgDocModel) GetDocID(conversationID string, seq int64) string {
+ seqSuffix := (seq - 1) / singleGocMsgNum
+ return m.indexGen(conversationID, seqSuffix)
+}
+
+func (m *MsgDocModel) GetDocIDSeqsMap(conversationID string, seqs []int64) map[string][]int64 {
+ t := make(map[string][]int64)
+ for _, seq := range seqs {
+ docID := m.GetDocID(conversationID, seq)
+ t[docID] = append(t[docID], seq)
+ }
+
+ return t
+}
+
+func (*MsgDocModel) GetMsgIndex(seq int64) int64 {
+ return (seq - 1) % singleGocMsgNum
+}
+
+func (*MsgDocModel) GetLimitForSingleDoc(seq int64) int64 {
+ return seq % singleGocMsgNum
+}
+
+func (*MsgDocModel) indexGen(conversationID string, seqSuffix int64) string {
+ return conversationID + ":" + strconv.FormatInt(seqSuffix, 10)
+}
+
+func (*MsgDocModel) BuildDocIDByIndex(conversationID string, index int64) string {
+ return conversationID + ":" + strconv.FormatInt(index, 10)
+}
+
+func (*MsgDocModel) GenExceptionMessageBySeqs(seqs []int64) (exceptionMsg []*sdkws.MsgData) {
+ for _, v := range seqs {
+ msgModel := new(sdkws.MsgData)
+ msgModel.Seq = v
+ exceptionMsg = append(exceptionMsg, msgModel)
+ }
+ return exceptionMsg
+}
+
+func (*MsgDocModel) GetMinSeq(index int) int64 {
+ return int64(index*singleGocMsgNum) + 1
+}
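The helpers in `MsgDocModel` shard a conversation's messages into fixed-size Mongo documents of `singleGocMsgNum` (100) entries: seq N lives in doc `(N-1)/100` at slot `(N-1)%100`, and the doc ID is the conversation ID joined to the doc index with a colon. The arithmetic can be checked in isolation:

```go
package main

import (
	"fmt"
	"strconv"
)

const singleDocMsgNum = 100 // mirrors singleGocMsgNum in msg.go

// docIndex mirrors MsgDocModel.GetDocIndex.
func docIndex(seq int64) int64 { return (seq - 1) / singleDocMsgNum }

// msgIndex mirrors MsgDocModel.GetMsgIndex.
func msgIndex(seq int64) int64 { return (seq - 1) % singleDocMsgNum }

// docID mirrors MsgDocModel.GetDocID / indexGen.
func docID(conversationID string, seq int64) string {
	return conversationID + ":" + strconv.FormatInt(docIndex(seq), 10)
}

func main() {
	// seq 1..100 share doc 0; seq 101 starts doc 1.
	fmt.Println(docID("si_a_b", 100), msgIndex(100)) // si_a_b:0 99
	fmt.Println(docID("si_a_b", 101), msgIndex(101)) // si_a_b:1 0
}
```

This is also why `GetMinSeq(index)` returns `index*100 + 1`: it is the first seq stored in that doc.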
diff --git a/pkg/common/storage/model/object.go b/pkg/common/storage/model/object.go
new file mode 100644
index 0000000..e08a55d
--- /dev/null
+++ b/pkg/common/storage/model/object.go
@@ -0,0 +1,31 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type Object struct {
+ Name string `bson:"name"`
+ UserID string `bson:"user_id"`
+ Hash string `bson:"hash"`
+ Engine string `bson:"engine"`
+ Key string `bson:"key"`
+ Size int64 `bson:"size"`
+ ContentType string `bson:"content_type"`
+ Group string `bson:"group"`
+ CreateTime time.Time `bson:"create_time"`
+}
diff --git a/pkg/common/storage/model/redpacket.go b/pkg/common/storage/model/redpacket.go
new file mode 100644
index 0000000..1f631f2
--- /dev/null
+++ b/pkg/common/storage/model/redpacket.go
@@ -0,0 +1,75 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+const (
+ RedPacketTableName = "red_packets"
+ RedPacketReceiveTableName = "red_packet_receives"
+)
+
+// Red packet type values.
+const (
+	RedPacketTypeNormal = 1 // normal red packet (evenly split)
+	RedPacketTypeRandom = 2 // lucky red packet (randomly split)
+)
+
+// Red packet status values.
+const (
+	RedPacketStatusActive   = 0 // active
+	RedPacketStatusFinished = 1 // fully claimed
+	RedPacketStatusExpired  = 2 // expired
+)
+
+// RedPacket is the red packet collection.
+type RedPacket struct {
+	RedPacketID    string    `bson:"red_packet_id"`   // red packet ID
+	SendUserID     string    `bson:"send_user_id"`    // sender user ID
+	GroupID        string    `bson:"group_id"`        // group ID (group red packets)
+	ConversationID string    `bson:"conversation_id"` // conversation ID
+	SessionType    int32     `bson:"session_type"`    // session type: 1-single chat, 3-group chat
+	RedPacketType  int32     `bson:"red_packet_type"` // red packet type: 1-normal, 2-lucky (random)
+	TotalAmount    int64     `bson:"total_amount"`    // total amount in fen
+	TotalCount     int32     `bson:"total_count"`     // total number of shares
+	RemainAmount   int64     `bson:"remain_amount"`   // remaining amount in fen
+	RemainCount    int32     `bson:"remain_count"`    // remaining number of shares
+	Blessing       string    `bson:"blessing"`        // greeting message
+	Status         int32     `bson:"status"`          // status: 0-active, 1-fully claimed, 2-expired
+	ExpireTime     time.Time `bson:"expire_time"`     // expiration time (24 hours by default)
+	CreateTime     time.Time `bson:"create_time"`     // creation time
+	Ex             string    `bson:"ex"`              // extension field
+}
+
+// RedPacketReceive is the red packet claim record collection.
+type RedPacketReceive struct {
+	ReceiveID     string    `bson:"receive_id"`      // claim record ID
+	RedPacketID   string    `bson:"red_packet_id"`   // red packet ID
+	ReceiveUserID string    `bson:"receive_user_id"` // claimer user ID
+	Amount        int64     `bson:"amount"`          // claimed amount in fen
+	ReceiveTime   time.Time `bson:"receive_time"`    // claim time
+	IsLucky       bool      `bson:"is_lucky"`        // whether this is the luckiest claim (lucky red packets only)
+	Ex            string    `bson:"ex"`              // extension field
+}
+
+func (*RedPacket) TableName() string {
+ return RedPacketTableName
+}
+
+func (*RedPacketReceive) TableName() string {
+ return RedPacketReceiveTableName
+}
diff --git a/pkg/common/storage/model/seq.go b/pkg/common/storage/model/seq.go
new file mode 100644
index 0000000..1dc75ef
--- /dev/null
+++ b/pkg/common/storage/model/seq.go
@@ -0,0 +1,7 @@
+package model
+
+type SeqConversation struct {
+ ConversationID string `bson:"conversation_id"`
+ MaxSeq int64 `bson:"max_seq"`
+ MinSeq int64 `bson:"min_seq"`
+}
diff --git a/pkg/common/storage/model/seq_user.go b/pkg/common/storage/model/seq_user.go
new file mode 100644
index 0000000..845996b
--- /dev/null
+++ b/pkg/common/storage/model/seq_user.go
@@ -0,0 +1,9 @@
+package model
+
+type SeqUser struct {
+ UserID string `bson:"user_id"`
+ ConversationID string `bson:"conversation_id"`
+ MinSeq int64 `bson:"min_seq"`
+ MaxSeq int64 `bson:"max_seq"`
+ ReadSeq int64 `bson:"read_seq"`
+}
diff --git a/pkg/common/storage/model/subscribe.go b/pkg/common/storage/model/subscribe.go
new file mode 100644
index 0000000..e71fef3
--- /dev/null
+++ b/pkg/common/storage/model/subscribe.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+// SubscribeUserTableName collection constant.
+const (
+ SubscribeUserTableName = "subscribe_user"
+)
+
+// SubscribeUser collection structure.
+type SubscribeUser struct {
+ UserID string `bson:"user_id" json:"userID"`
+ UserIDList []string `bson:"user_id_list" json:"userIDList"`
+}
+
+func (SubscribeUser) TableName() string {
+ return SubscribeUserTableName
+}
diff --git a/pkg/common/storage/model/system_config.go b/pkg/common/storage/model/system_config.go
new file mode 100644
index 0000000..74c8bea
--- /dev/null
+++ b/pkg/common/storage/model/system_config.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+const (
+ SystemConfigTableName = "system_configs"
+)
+
+// System config value types.
+const (
+	SystemConfigValueTypeString = 1 // string
+	SystemConfigValueTypeNumber = 2 // number
+	SystemConfigValueTypeBool   = 3 // boolean
+	SystemConfigValueTypeJSON   = 4 // JSON
+)
+
+// SystemConfig is the system configuration collection.
+type SystemConfig struct {
+	Key         string    `bson:"key"`         // config key (unique identifier)
+	Title       string    `bson:"title"`       // config title
+	Value       string    `bson:"value"`       // config value, stored as a string and parsed according to ValueType
+	ValueType   int32     `bson:"value_type"`  // value type: 1-string, 2-number, 3-boolean, 4-JSON
+	Description string    `bson:"description"` // config description
+	Enabled     bool      `bson:"enabled"`     // whether enabled (for toggle-style configs)
+	ShowInApp   bool      `bson:"show_in_app"` // whether shown in the app
+	CreateTime  time.Time `bson:"create_time"` // creation time
+	UpdateTime  time.Time `bson:"update_time"` // update time
+}
+
+func (*SystemConfig) TableName() string {
+ return SystemConfigTableName
+}
diff --git a/pkg/common/storage/model/user.go b/pkg/common/storage/model/user.go
new file mode 100644
index 0000000..3e22651
--- /dev/null
+++ b/pkg/common/storage/model/user.go
@@ -0,0 +1,55 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+type User struct {
+ UserID string `bson:"user_id"`
+ Nickname string `bson:"nickname"`
+ FaceURL string `bson:"face_url"`
+ Ex string `bson:"ex"`
+ AppMangerLevel int32 `bson:"app_manger_level"`
+ GlobalRecvMsgOpt int32 `bson:"global_recv_msg_opt"`
+ UserType int32 `bson:"user_type"`
+ UserFlag string `bson:"user_flag"`
+ CreateTime time.Time `bson:"create_time"`
+}
+
+func (u *User) GetNickname() string {
+ return u.Nickname
+}
+
+func (u *User) GetFaceURL() string {
+ return u.FaceURL
+}
+
+func (u *User) GetUserID() string {
+ return u.UserID
+}
+
+func (u *User) GetEx() string {
+ return u.Ex
+}
+
+func (u *User) GetUserType() int32 {
+ return u.UserType
+}
+
+func (u *User) GetUserFlag() string {
+ return u.UserFlag
+}
diff --git a/pkg/common/storage/model/version_log.go b/pkg/common/storage/model/version_log.go
new file mode 100644
index 0000000..6ed8d30
--- /dev/null
+++ b/pkg/common/storage/model/version_log.go
@@ -0,0 +1,74 @@
+package model
+
+import (
+ "context"
+ "errors"
+ "github.com/openimsdk/tools/log"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+ "time"
+)
+
+const (
+ VersionStateInsert = iota + 1
+ VersionStateDelete
+ VersionStateUpdate
+)
+
+const (
+ VersionGroupChangeID = ""
+ VersionSortChangeID = "____S_O_R_T_I_D____"
+)
+
+type VersionLogElem struct {
+ EID string `bson:"e_id"`
+ State int32 `bson:"state"`
+ Version uint `bson:"version"`
+ LastUpdate time.Time `bson:"last_update"`
+}
+
+type VersionLogTable struct {
+ ID primitive.ObjectID `bson:"_id"`
+ DID string `bson:"d_id"`
+ Logs []VersionLogElem `bson:"logs"`
+ Version uint `bson:"version"`
+ Deleted uint `bson:"deleted"`
+ LastUpdate time.Time `bson:"last_update"`
+}
+
+func (v *VersionLogTable) VersionLog() *VersionLog {
+ return &VersionLog{
+ ID: v.ID,
+ DID: v.DID,
+ Logs: v.Logs,
+ Version: v.Version,
+ Deleted: v.Deleted,
+ LastUpdate: v.LastUpdate,
+ LogLen: len(v.Logs),
+ }
+}
+
+type VersionLog struct {
+ ID primitive.ObjectID `bson:"_id"`
+ DID string `bson:"d_id"`
+ Logs []VersionLogElem `bson:"logs"`
+ Version uint `bson:"version"`
+ Deleted uint `bson:"deleted"`
+ LastUpdate time.Time `bson:"last_update"`
+ LogLen int `bson:"log_len"`
+}
+
+func (v *VersionLog) DeleteAndChangeIDs() (insertIds, deleteIds, updateIds []string) {
+ for _, l := range v.Logs {
+ switch l.State {
+ case VersionStateInsert:
+ insertIds = append(insertIds, l.EID)
+ case VersionStateDelete:
+ deleteIds = append(deleteIds, l.EID)
+ case VersionStateUpdate:
+ updateIds = append(updateIds, l.EID)
+ default:
+ log.ZError(context.Background(), "invalid version status found", errors.New("dirty database data"), "objID", v.ID.Hex(), "did", v.DID, "elem", l)
+ }
+ }
+ return
+}
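The `DeleteAndChangeIDs` partition above can be exercised in isolation. Below is a minimal standalone sketch under stated assumptions: it mirrors the state constants and the switch loop, but uses a local `elem` type in place of the real `VersionLogElem`, so it is illustrative rather than the package's actual API.

```go
package main

import "fmt"

// Mirror of the version-state constants above.
const (
	VersionStateInsert = iota + 1
	VersionStateDelete
	VersionStateUpdate
)

// elem is a stand-in for model.VersionLogElem.
type elem struct {
	EID   string
	State int32
}

// deleteAndChangeIDs partitions log elements by state, as in
// (*VersionLog).DeleteAndChangeIDs; unknown states are simply skipped here.
func deleteAndChangeIDs(logs []elem) (insertIds, deleteIds, updateIds []string) {
	for _, l := range logs {
		switch l.State {
		case VersionStateInsert:
			insertIds = append(insertIds, l.EID)
		case VersionStateDelete:
			deleteIds = append(deleteIds, l.EID)
		case VersionStateUpdate:
			updateIds = append(updateIds, l.EID)
		}
	}
	return
}

func main() {
	ins, del, upd := deleteAndChangeIDs([]elem{
		{EID: "a", State: VersionStateInsert},
		{EID: "b", State: VersionStateDelete},
		{EID: "c", State: VersionStateUpdate},
	})
	fmt.Println(ins, del, upd)
}
```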
diff --git a/pkg/common/storage/model/wallet.go b/pkg/common/storage/model/wallet.go
new file mode 100644
index 0000000..27d4cef
--- /dev/null
+++ b/pkg/common/storage/model/wallet.go
@@ -0,0 +1,56 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package model
+
+import (
+ "time"
+)
+
+const (
+ WalletTableName = "wallets"
+ WalletBalanceRecordTableName = "wallet_balance_records"
+)
+
+// Wallet is the per-user wallet table.
+type Wallet struct {
+	UserID     string    `bson:"user_id"`     // user ID (unique)
+	Balance    int64     `bson:"balance"`     // balance in cents (fen)
+	Version    int64     `bson:"version"`     // version number for optimistic locking, guarding against concurrent overwrites
+	CreateTime time.Time `bson:"create_time"` // creation time
+	UpdateTime time.Time `bson:"update_time"` // last update time
+	Ex         string    `bson:"ex"`          // extension field
+}
+
+type WalletBalanceRecord struct {
+	ID            string    `bson:"_id"`            // record ID
+	UserID        string    `bson:"user_id"`        // user ID
+	Amount        int64     `bson:"amount"`         // change amount in cents: positive for an increase, negative for a decrease
+	Type          int32     `bson:"type"`           // change type: 1-recharge, 2-withdrawal, 3-consumption, 4-refund, 5-reward, 6-admin recharge, 7-send red packet, 8-grab red packet, 99-other
+	BeforeBalance int64     `bson:"before_balance"` // balance before the change, in cents
+	AfterBalance  int64     `bson:"after_balance"`  // balance after the change, in cents
+	OrderID       string    `bson:"order_id"`       // associated order ID (optional)
+	TransactionID string    `bson:"transaction_id"` // transaction ID (optional)
+	RedPacketID   string    `bson:"red_packet_id"`  // red packet ID linking send/grab red packet records (optional)
+	Remark        string    `bson:"remark"`         // remark
+	CreateTime    time.Time `bson:"create_time"`    // creation time
+}
+
+func (*Wallet) TableName() string {
+ return WalletTableName
+}
+
+func (*WalletBalanceRecord) TableName() string {
+ return WalletBalanceRecordTableName
+}
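The `Version` field on `Wallet` is documented as an optimistic lock. Against MongoDB that would typically be a conditional update (filter on `{user_id, version}` and `$inc` balance and version together); the in-memory sketch below only illustrates the compare-and-bump idea. `applyDelta` and `errConflict` are illustrative names, not part of this codebase.

```go
package main

import (
	"errors"
	"fmt"
)

// wallet is a stand-in for the Wallet model above.
type wallet struct {
	Balance int64
	Version int64
}

var errConflict = errors.New("version conflict, retry")

// applyDelta succeeds only when the caller still holds the current version,
// i.e. no other writer committed in between read and write.
func applyDelta(w *wallet, delta int64, readVersion int64) error {
	if w.Version != readVersion {
		return errConflict // stale read: another writer got in first
	}
	w.Balance += delta
	w.Version++ // bump so concurrent writers holding the old version fail
	return nil
}

func main() {
	w := &wallet{Balance: 100, Version: 1}
	fmt.Println(applyDelta(w, 50, 1), w.Balance, w.Version) // first write wins
	fmt.Println(applyDelta(w, -30, 1))                      // stale version rejected
}
```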
diff --git a/pkg/common/storage/versionctx/rpc.go b/pkg/common/storage/versionctx/rpc.go
new file mode 100644
index 0000000..67b95ae
--- /dev/null
+++ b/pkg/common/storage/versionctx/rpc.go
@@ -0,0 +1,14 @@
+package versionctx
+
+import (
+ "context"
+ "google.golang.org/grpc"
+)
+
+func EnableVersionCtx() grpc.ServerOption {
+ return grpc.ChainUnaryInterceptor(enableVersionCtxInterceptor)
+}
+
+func enableVersionCtxInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
+ return handler(WithVersionLog(ctx), req)
+}
diff --git a/pkg/common/storage/versionctx/version.go b/pkg/common/storage/versionctx/version.go
new file mode 100644
index 0000000..478c30e
--- /dev/null
+++ b/pkg/common/storage/versionctx/version.go
@@ -0,0 +1,49 @@
+package versionctx
+
+import (
+ "context"
+ "sync"
+
+ tablerelation "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+)
+
+type Collection struct {
+ Name string
+ Doc *tablerelation.VersionLog
+}
+
+type versionKey struct{}
+
+func WithVersionLog(ctx context.Context) context.Context {
+ return context.WithValue(ctx, versionKey{}, &VersionLog{})
+}
+
+func GetVersionLog(ctx context.Context) *VersionLog {
+ if v, ok := ctx.Value(versionKey{}).(*VersionLog); ok {
+ return v
+ }
+ return nil
+}
+
+type VersionLog struct {
+ lock sync.Mutex
+ data []Collection
+}
+
+func (v *VersionLog) Append(data ...Collection) {
+ if v == nil || len(data) == 0 {
+ return
+ }
+ v.lock.Lock()
+ defer v.lock.Unlock()
+ v.data = append(v.data, data...)
+}
+
+func (v *VersionLog) Get() []Collection {
+ if v == nil {
+ return nil
+ }
+ v.lock.Lock()
+ defer v.lock.Unlock()
+ return v.data
+}
diff --git a/pkg/common/webhook/condition.go b/pkg/common/webhook/condition.go
new file mode 100644
index 0000000..5c253d3
--- /dev/null
+++ b/pkg/common/webhook/condition.go
@@ -0,0 +1,14 @@
+package webhook
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+)
+
+func WithCondition(ctx context.Context, before *config.BeforeConfig, callback func(context.Context) error) error {
+ if !before.Enable {
+ return nil
+ }
+ return callback(ctx)
+}
diff --git a/pkg/common/webhook/config_manager.go b/pkg/common/webhook/config_manager.go
new file mode 100644
index 0000000..97b5313
--- /dev/null
+++ b/pkg/common/webhook/config_manager.go
@@ -0,0 +1,433 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package webhook
+
+import (
+ "context"
+ "encoding/json"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+	"git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+	"git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/model"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+const (
+	// WebhookConfigKey is the key under which the webhook configuration is stored in the database.
+	WebhookConfigKey = "webhook_config"
+	// DefaultRefreshInterval is the default refresh interval (30 seconds, convenient for debugging).
+	DefaultRefreshInterval = 30 * time.Second
+)
+
+// ConfigManager manages the webhook configuration, supporting database reads and periodic refresh.
+type ConfigManager struct {
+ db database.SystemConfig
+ defaultConfig *config.Webhooks
+ mu sync.RWMutex
+ cachedConfig *config.Webhooks
+ lastUpdate time.Time
+ refreshInterval time.Duration
+ stopCh chan struct{}
+}
+
+// NewConfigManager creates a webhook configuration manager.
+func NewConfigManager(db database.SystemConfig, defaultConfig *config.Webhooks) *ConfigManager {
+ cm := &ConfigManager{
+ db: db,
+ defaultConfig: defaultConfig,
+ cachedConfig: defaultConfig,
+ refreshInterval: DefaultRefreshInterval,
+ stopCh: make(chan struct{}),
+ }
+ return cm
+}
+
+// Start starts the config manager and kicks off periodic refresh.
+func (cm *ConfigManager) Start(ctx context.Context) error {
+	// Load the configuration once immediately.
+ log.ZInfo(ctx, "webhook config manager starting, initial refresh...")
+ if err := cm.Refresh(ctx); err != nil {
+ log.ZWarn(ctx, "initial webhook config refresh failed, using default config", err)
+ } else {
+ currentConfig := cm.GetConfig()
+ log.ZInfo(ctx, "webhook config manager started successfully", "url", currentConfig.URL, "refresh_interval", cm.refreshInterval)
+ }
+
+	// Start the periodic refresh goroutine.
+ go cm.refreshLoop(ctx)
+ return nil
+}
+
+// Stop stops the config manager.
+func (cm *ConfigManager) Stop() {
+ close(cm.stopCh)
+}
+
+// refreshLoop runs the periodic refresh loop.
+func (cm *ConfigManager) refreshLoop(ctx context.Context) {
+ ticker := time.NewTicker(cm.refreshInterval)
+ defer ticker.Stop()
+
+ log.ZInfo(ctx, "webhook config refresh loop started", "interval", cm.refreshInterval)
+
+ for {
+ select {
+ case <-ticker.C:
+ log.ZDebug(ctx, "webhook config scheduled refresh triggered")
+ if err := cm.Refresh(ctx); err != nil {
+ log.ZWarn(ctx, "webhook config refresh failed", err)
+ }
+ case <-cm.stopCh:
+ log.ZInfo(ctx, "webhook config refresh loop stopped")
+ return
+ }
+ }
+}
+
+// Refresh reloads the configuration from the database.
+func (cm *ConfigManager) Refresh(ctx context.Context) error {
+	// Read the configuration from the database.
+ sysConfig, err := cm.db.FindByKey(ctx, WebhookConfigKey)
+ if err != nil {
+		// Fall back to the default configuration when the query fails.
+ log.ZWarn(ctx, "failed to get webhook config from database, using default config", err)
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ return nil
+ }
+
+	// Fall back to the default configuration when none exists in the database.
+ if sysConfig == nil {
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ log.ZInfo(ctx, "webhook config not found in database, using default config", "default_url", cm.defaultConfig.URL)
+ return nil
+ }
+
+	// Check the value type.
+ if sysConfig.ValueType != model.SystemConfigValueTypeJSON {
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ log.ZWarn(ctx, "webhook config valueType is not json, using default config", nil, "value_type", sysConfig.ValueType)
+ return nil
+ }
+
+	// Fall back to the default configuration when the config is disabled.
+ if !sysConfig.Enabled {
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ log.ZInfo(ctx, "webhook config is disabled, using default config", "default_url", cm.defaultConfig.URL)
+ return nil
+ }
+
+	// Fall back to the default configuration when the config value is empty.
+ if sysConfig.Value == "" {
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ log.ZInfo(ctx, "webhook config value is empty, using default config", "default_url", cm.defaultConfig.URL)
+ return nil
+ }
+
+ valuePreview := sysConfig.Value
+ if len(valuePreview) > 100 {
+ valuePreview = valuePreview[:100] + "..."
+ }
+ log.ZDebug(ctx, "webhook config value found", "value_length", len(sysConfig.Value), "value_preview", valuePreview)
+
+	// Parse the configuration.
+ var webhookConfig config.Webhooks
+ if err := json.Unmarshal([]byte(sysConfig.Value), &webhookConfig); err != nil {
+		// Fall back to the default configuration when parsing fails.
+ log.ZWarn(ctx, "failed to unmarshal webhook config, using default config", err)
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ return nil
+ }
+
+	// Validate the parsed configuration: ensure AttentionIds is not nil.
+ if webhookConfig.AfterSendGroupMsg.AttentionIds == nil {
+ webhookConfig.AfterSendGroupMsg.AttentionIds = []string{}
+ }
+
+ normalized, ok := normalizeWebhookConfig(cm.defaultConfig, &webhookConfig)
+ if !ok {
+ log.ZWarn(ctx, "webhook config URL is empty, using default config", nil)
+ cm.mu.Lock()
+ cm.cachedConfig = cm.defaultConfig
+ cm.lastUpdate = time.Now()
+ cm.mu.Unlock()
+ return nil
+ }
+
+	// Update the cache.
+ cm.mu.Lock()
+ oldURL := cm.cachedConfig.URL
+ oldAttentionIdsCount := len(cm.cachedConfig.AfterSendGroupMsg.AttentionIds)
+ cm.cachedConfig = normalized
+ cm.lastUpdate = time.Now()
+ newAttentionIdsCount := len(normalized.AfterSendGroupMsg.AttentionIds)
+ cm.mu.Unlock()
+
+	// Log when the URL or AttentionIds changed.
+ urlChanged := oldURL != webhookConfig.URL
+ attentionIdsChanged := oldAttentionIdsCount != newAttentionIdsCount
+ if urlChanged || attentionIdsChanged {
+ log.ZInfo(ctx, "webhook config updated from database",
+ "old_url", oldURL, "new_url", webhookConfig.URL,
+ "old_attention_ids_count", oldAttentionIdsCount,
+ "new_attention_ids_count", newAttentionIdsCount,
+ "url_changed", urlChanged,
+ "attention_ids_changed", attentionIdsChanged)
+ } else {
+ log.ZDebug(ctx, "webhook config refreshed (no change)", "url", webhookConfig.URL, "attention_ids_count", newAttentionIdsCount)
+ }
+
+ return nil
+}
+
+// GetConfig returns the currently cached configuration.
+func (cm *ConfigManager) GetConfig() *config.Webhooks {
+ cm.mu.RLock()
+ defer cm.mu.RUnlock()
+ return cm.cachedConfig
+}
+
+// normalizeWebhookConfig validates the config, fills in defaults, and reports whether it is valid.
+func normalizeWebhookConfig(defaultCfg *config.Webhooks, cfg *config.Webhooks) (*config.Webhooks, bool) {
+ if cfg == nil {
+ return nil, false
+ }
+ if cfg.URL == "" {
+ return nil, false
+ }
+
+	normalized := *cfg // shallow copy
+
+	// Fill in defaults so a bad database entry cannot leave callbacks unusable.
+ if normalized.AfterSendGroupMsg.AttentionIds == nil {
+ normalized.AfterSendGroupMsg.AttentionIds = []string{}
+ }
+
+ applyBeforeDefaults := func(dst *config.BeforeConfig, fallback config.BeforeConfig) {
+ if dst.Timeout <= 0 {
+ dst.Timeout = fallback.Timeout
+ }
+ }
+ applyAfterDefaults := func(dst *config.AfterConfig, fallback config.AfterConfig) {
+ if dst.Timeout <= 0 {
+ dst.Timeout = fallback.Timeout
+ }
+ if dst.AttentionIds == nil {
+ dst.AttentionIds = []string{}
+ }
+ }
+
+ if defaultCfg != nil {
+ applyBeforeDefaults(&normalized.BeforeSendSingleMsg, defaultCfg.BeforeSendSingleMsg)
+ applyBeforeDefaults(&normalized.BeforeUpdateUserInfoEx, defaultCfg.BeforeUpdateUserInfoEx)
+ applyAfterDefaults(&normalized.AfterUpdateUserInfoEx, defaultCfg.AfterUpdateUserInfoEx)
+ applyAfterDefaults(&normalized.AfterSendSingleMsg, defaultCfg.AfterSendSingleMsg)
+ applyBeforeDefaults(&normalized.BeforeSendGroupMsg, defaultCfg.BeforeSendGroupMsg)
+ applyBeforeDefaults(&normalized.BeforeMsgModify, defaultCfg.BeforeMsgModify)
+ applyAfterDefaults(&normalized.AfterSendGroupMsg, defaultCfg.AfterSendGroupMsg)
+ applyAfterDefaults(&normalized.AfterMsgSaveDB, defaultCfg.AfterMsgSaveDB)
+ applyAfterDefaults(&normalized.AfterUserOnline, defaultCfg.AfterUserOnline)
+ applyAfterDefaults(&normalized.AfterUserOffline, defaultCfg.AfterUserOffline)
+ applyAfterDefaults(&normalized.AfterUserKickOff, defaultCfg.AfterUserKickOff)
+ applyBeforeDefaults(&normalized.BeforeOfflinePush, defaultCfg.BeforeOfflinePush)
+ applyBeforeDefaults(&normalized.BeforeOnlinePush, defaultCfg.BeforeOnlinePush)
+ applyBeforeDefaults(&normalized.BeforeGroupOnlinePush, defaultCfg.BeforeGroupOnlinePush)
+ applyBeforeDefaults(&normalized.BeforeAddFriend, defaultCfg.BeforeAddFriend)
+ applyBeforeDefaults(&normalized.BeforeUpdateUserInfo, defaultCfg.BeforeUpdateUserInfo)
+ applyAfterDefaults(&normalized.AfterUpdateUserInfo, defaultCfg.AfterUpdateUserInfo)
+ applyBeforeDefaults(&normalized.BeforeCreateGroup, defaultCfg.BeforeCreateGroup)
+ applyAfterDefaults(&normalized.AfterCreateGroup, defaultCfg.AfterCreateGroup)
+ applyBeforeDefaults(&normalized.BeforeMemberJoinGroup, defaultCfg.BeforeMemberJoinGroup)
+ applyBeforeDefaults(&normalized.BeforeSetGroupMemberInfo, defaultCfg.BeforeSetGroupMemberInfo)
+ applyAfterDefaults(&normalized.AfterSetGroupMemberInfo, defaultCfg.AfterSetGroupMemberInfo)
+ applyAfterDefaults(&normalized.AfterQuitGroup, defaultCfg.AfterQuitGroup)
+ applyAfterDefaults(&normalized.AfterKickGroupMember, defaultCfg.AfterKickGroupMember)
+ applyAfterDefaults(&normalized.AfterDismissGroup, defaultCfg.AfterDismissGroup)
+ }
+
+ return &normalized, true
+}
+
+// GetURL returns the current webhook URL.
+func (cm *ConfigManager) GetURL() string {
+ cm.mu.RLock()
+ defer cm.mu.RUnlock()
+ return cm.cachedConfig.URL
+}
+
+// SetRefreshInterval sets the refresh interval.
+func (cm *ConfigManager) SetRefreshInterval(interval time.Duration) {
+ cm.mu.Lock()
+ defer cm.mu.Unlock()
+ cm.refreshInterval = interval
+}
+
+// GetLastUpdate returns the time of the last update.
+func (cm *ConfigManager) GetLastUpdate() time.Time {
+ cm.mu.RLock()
+ defer cm.mu.RUnlock()
+ return cm.lastUpdate
+}
+
+// UpdateAttentionIds adds or removes a group ID in the webhook config's attentionIds.
+// add: true adds groupID, false removes it.
+func UpdateAttentionIds(ctx context.Context, db database.SystemConfig, groupID string, add bool) error {
+ if groupID == "" {
+ return nil
+ }
+
+ if db == nil {
+ return nil
+ }
+
+	// Fetch the current configuration.
+ sysConfig, err := db.FindByKey(ctx, WebhookConfigKey)
+ if err != nil {
+ log.ZWarn(ctx, "UpdateAttentionIds: failed to get webhook config from database", err, "groupID", groupID, "add", add)
+ return errs.WrapMsg(err, "failed to get webhook config from database")
+ }
+
+ if sysConfig == nil {
+ log.ZDebug(ctx, "UpdateAttentionIds: webhook config not found in database, skipping", "groupID", groupID, "add", add)
+ return nil
+ }
+
+ if !sysConfig.Enabled {
+ log.ZDebug(ctx, "UpdateAttentionIds: webhook config is disabled, skipping", "groupID", groupID, "add", add)
+ return nil
+ }
+
+ if sysConfig.Value == "" {
+ log.ZDebug(ctx, "UpdateAttentionIds: webhook config value is empty, skipping", "groupID", groupID, "add", add)
+ return nil
+ }
+
+	// Check the value type.
+ if sysConfig.ValueType != model.SystemConfigValueTypeJSON {
+ log.ZWarn(ctx, "UpdateAttentionIds: webhook config valueType is not json, skipping", nil, "groupID", groupID, "add", add, "value_type", sysConfig.ValueType)
+ return nil
+ }
+
+	// Parse the current configuration.
+ var webhookConfig config.Webhooks
+ if err := json.Unmarshal([]byte(sysConfig.Value), &webhookConfig); err != nil {
+ log.ZWarn(ctx, "UpdateAttentionIds: failed to unmarshal webhook config", err, "groupID", groupID, "add", add)
+ return errs.WrapMsg(err, "failed to unmarshal webhook config")
+ }
+
+	// Record the Enable state before the update.
+ enableBefore := webhookConfig.AfterSendGroupMsg.Enable
+
+	// Validate the parsed configuration.
+ if webhookConfig.AfterSendGroupMsg.AttentionIds == nil {
+ webhookConfig.AfterSendGroupMsg.AttentionIds = []string{}
+ }
+
+	// Update afterSendGroupMsg's attentionIds.
+ attentionIds := webhookConfig.AfterSendGroupMsg.AttentionIds
+ oldCount := len(attentionIds)
+ var updated bool
+
+ if add {
+		// Add groupID if it is not already present.
+ if !datautil.Contain(groupID, attentionIds...) {
+ attentionIds = append(attentionIds, groupID)
+ webhookConfig.AfterSendGroupMsg.AttentionIds = attentionIds
+ updated = true
+ log.ZInfo(ctx, "UpdateAttentionIds: adding groupID to attentionIds", "groupID", groupID, "old_count", oldCount, "new_count", len(attentionIds), "enable", enableBefore)
+ } else {
+ log.ZDebug(ctx, "UpdateAttentionIds: groupID already exists in attentionIds, skipping", "groupID", groupID, "count", oldCount)
+ return nil
+ }
+ } else {
+		// Remove groupID.
+ newAttentionIds := make([]string, 0, len(attentionIds))
+ for _, id := range attentionIds {
+ if id != groupID {
+ newAttentionIds = append(newAttentionIds, id)
+ }
+ }
+ if len(newAttentionIds) != len(attentionIds) {
+ webhookConfig.AfterSendGroupMsg.AttentionIds = newAttentionIds
+ updated = true
+ log.ZInfo(ctx, "UpdateAttentionIds: removing groupID from attentionIds", "groupID", groupID, "old_count", oldCount, "new_count", len(newAttentionIds), "enable", enableBefore)
+ } else {
+ log.ZDebug(ctx, "UpdateAttentionIds: groupID not found in attentionIds, skipping", "groupID", groupID, "count", oldCount)
+ return nil
+ }
+ }
+
+ if !updated {
+ return nil
+ }
+
+	// Ensure the config URL is present (required validation).
+ if webhookConfig.URL == "" {
+ log.ZWarn(ctx, "UpdateAttentionIds: webhook config URL is empty, skipping update", nil, "groupID", groupID, "add", add)
+ return errs.ErrArgs.WrapMsg("webhook config URL is empty")
+ }
+
+	// Check the Enable state after the update (it must be unchanged).
+ enableAfter := webhookConfig.AfterSendGroupMsg.Enable
+ if enableBefore != enableAfter {
+ log.ZWarn(ctx, "UpdateAttentionIds: Enable field changed unexpectedly", nil,
+ "groupID", groupID, "add", add,
+ "enable_before", enableBefore, "enable_after", enableAfter)
+		// Restore the original value so the Enable field is never modified here.
+ webhookConfig.AfterSendGroupMsg.Enable = enableBefore
+ }
+
+	// Serialize the updated configuration (only attentionIds changes; other settings are preserved).
+ updatedValue, err := json.Marshal(webhookConfig)
+ if err != nil {
+ log.ZWarn(ctx, "UpdateAttentionIds: failed to marshal updated webhook config", err, "groupID", groupID, "add", add)
+ return errs.WrapMsg(err, "failed to marshal updated webhook config")
+ }
+
+	// Update the database.
+ if err := db.Update(ctx, WebhookConfigKey, map[string]any{
+ "value": string(updatedValue),
+ }); err != nil {
+ log.ZWarn(ctx, "UpdateAttentionIds: failed to update webhook config in database", err, "groupID", groupID, "add", add)
+ return errs.WrapMsg(err, "failed to update webhook config in database")
+ }
+
+ log.ZInfo(ctx, "UpdateAttentionIds: successfully updated attentionIds in database",
+ "groupID", groupID, "add", add,
+ "attention_ids_count", len(webhookConfig.AfterSendGroupMsg.AttentionIds),
+ "enable", webhookConfig.AfterSendGroupMsg.Enable)
+ return nil
+}
diff --git a/pkg/common/webhook/doc.go b/pkg/common/webhook/doc.go
new file mode 100644
index 0000000..f9f5817
--- /dev/null
+++ b/pkg/common/webhook/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package webhook // import "git.imall.cloud/openim/open-im-server-deploy/pkg/common/webhook"
diff --git a/pkg/common/webhook/http_client.go b/pkg/common/webhook/http_client.go
new file mode 100644
index 0000000..63e9941
--- /dev/null
+++ b/pkg/common/webhook/http_client.go
@@ -0,0 +1,191 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package webhook
+
+import (
+ "context"
+ "encoding/json"
+ "net/http"
+ "net/url"
+ "sync"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/servererrs"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/openimsdk/tools/mq/memamq"
+ "github.com/openimsdk/tools/utils/httputil"
+)
+
+type Client struct {
+ client *httputil.HTTPClient
+ url string
+ queue *memamq.MemoryQueue
+ configManager *ConfigManager
+ mu sync.RWMutex
+}
+
+const (
+ webhookWorkerCount = 2
+ webhookBufferSize = 100
+
+ Key = "key"
+)
+
+func NewWebhookClient(url string, options ...*memamq.MemoryQueue) *Client {
+ var queue *memamq.MemoryQueue
+ if len(options) > 0 && options[0] != nil {
+ queue = options[0]
+ } else {
+ queue = memamq.NewMemoryQueue(webhookWorkerCount, webhookBufferSize)
+ }
+
+	http.DefaultTransport.(*http.Transport).MaxConnsPerHost = 100 // Raise the default limit on connections per host
+
+ return &Client{
+ client: httputil.NewHTTPClient(httputil.NewClientConfig()),
+ url: url,
+ queue: queue,
+ }
+}
+
+// NewWebhookClientWithManager creates a webhook client that supports dynamic configuration.
+func NewWebhookClientWithManager(configManager *ConfigManager, options ...*memamq.MemoryQueue) *Client {
+ var queue *memamq.MemoryQueue
+ if len(options) > 0 && options[0] != nil {
+ queue = options[0]
+ } else {
+ queue = memamq.NewMemoryQueue(webhookWorkerCount, webhookBufferSize)
+ }
+
+	http.DefaultTransport.(*http.Transport).MaxConnsPerHost = 100 // Raise the default limit on connections per host
+
+ return &Client{
+ client: httputil.NewHTTPClient(httputil.NewClientConfig()),
+ url: configManager.GetURL(),
+ queue: queue,
+ configManager: configManager,
+ }
+}
+
+// getURL returns the current webhook URL (supports dynamic configuration).
+func (c *Client) getURL() string {
+ if c.configManager != nil {
+ url := c.configManager.GetURL()
+ log.ZDebug(context.Background(), "webhook getURL from config manager", "url", url)
+ return url
+ }
+ c.mu.RLock()
+ defer c.mu.RUnlock()
+ log.ZDebug(context.Background(), "webhook getURL from static config", "url", c.url)
+ return c.url
+}
+
+// GetConfig returns the latest webhook config from the manager when available,
+// falling back to the provided default configuration.
+func (c *Client) GetConfig(defaultConfig *config.Webhooks) *config.Webhooks {
+ if c == nil {
+ return defaultConfig
+ }
+ if c.configManager != nil {
+ if cfg := c.configManager.GetConfig(); cfg != nil {
+ return cfg
+ }
+ }
+ return defaultConfig
+}
+
+func (c *Client) SyncPost(ctx context.Context, command string, req callbackstruct.CallbackReq, resp callbackstruct.CallbackResp, before *config.BeforeConfig) error {
+ return c.post(ctx, command, req, resp, before.Timeout)
+}
+
+func (c *Client) AsyncPost(ctx context.Context, command string, req callbackstruct.CallbackReq, resp callbackstruct.CallbackResp, after *config.AfterConfig) {
+ log.ZDebug(ctx, "webhook AsyncPost called", "command", command, "enable", after.Enable)
+ if after.Enable {
+ log.ZInfo(ctx, "webhook AsyncPost queued", "command", command, "timeout", after.Timeout)
+ c.queue.Push(func() { c.post(ctx, command, req, resp, after.Timeout) })
+ } else {
+ log.ZDebug(ctx, "webhook AsyncPost skipped (disabled)", "command", command)
+ }
+}
+
+func (c *Client) AsyncPostWithQuery(ctx context.Context, command string, req callbackstruct.CallbackReq, resp callbackstruct.CallbackResp, after *config.AfterConfig, queryParams map[string]string) {
+ log.ZDebug(ctx, "webhook AsyncPostWithQuery called", "command", command, "enable", after.Enable)
+ if after.Enable {
+ log.ZInfo(ctx, "webhook AsyncPostWithQuery queued", "command", command, "timeout", after.Timeout)
+ c.queue.Push(func() { c.postWithQuery(ctx, command, req, resp, after.Timeout, queryParams) })
+ } else {
+ log.ZDebug(ctx, "webhook AsyncPostWithQuery skipped (disabled)", "command", command)
+ }
+}
+
+func (c *Client) post(ctx context.Context, command string, input interface{}, output callbackstruct.CallbackResp, timeout int) error {
+ ctx = mcontext.WithMustInfoCtx([]string{mcontext.GetOperationID(ctx), mcontext.GetOpUserID(ctx), mcontext.GetOpUserPlatform(ctx), mcontext.GetConnID(ctx)})
+ fullURL := c.getURL() + "/" + command
+ log.ZInfo(ctx, "webhook", "url", fullURL, "input", input, "config", timeout)
+ operationID, _ := ctx.Value(constant.OperationID).(string)
+ b, err := c.client.Post(ctx, fullURL, map[string]string{constant.OperationID: operationID}, input, timeout)
+ if err != nil {
+ return servererrs.ErrNetwork.WrapMsg(err.Error(), "post url", fullURL)
+ }
+ if err = json.Unmarshal(b, output); err != nil {
+ return servererrs.ErrData.WithDetail(err.Error() + " response format error")
+ }
+ if err := output.Parse(); err != nil {
+ return err
+ }
+ log.ZInfo(ctx, "webhook success", "url", fullURL, "input", input, "response", string(b))
+ return nil
+}
+
+func (c *Client) postWithQuery(ctx context.Context, command string, input interface{}, output callbackstruct.CallbackResp, timeout int, queryParams map[string]string) error {
+ ctx = mcontext.WithMustInfoCtx([]string{mcontext.GetOperationID(ctx), mcontext.GetOpUserID(ctx), mcontext.GetOpUserPlatform(ctx), mcontext.GetConnID(ctx)})
+ fullURL := c.getURL() + "/" + command
+
+ parsedURL, err := url.Parse(fullURL)
+ if err != nil {
+ return servererrs.ErrNetwork.WrapMsg(err.Error(), "failed to parse URL", fullURL)
+ }
+
+ query := parsedURL.Query()
+
+ operationID, _ := ctx.Value(constant.OperationID).(string)
+
+ for key, value := range queryParams {
+ query.Set(key, value)
+ }
+
+ parsedURL.RawQuery = query.Encode()
+
+ fullURL = parsedURL.String()
+ log.ZInfo(ctx, "webhook", "url", fullURL, "input", input, "config", timeout)
+
+ b, err := c.client.Post(ctx, fullURL, map[string]string{constant.OperationID: operationID}, input, timeout)
+ if err != nil {
+ return servererrs.ErrNetwork.WrapMsg(err.Error(), "post url", fullURL)
+ }
+
+ if err = json.Unmarshal(b, output); err != nil {
+ return servererrs.ErrData.WithDetail(err.Error() + " response format error")
+ }
+ if err := output.Parse(); err != nil {
+ return err
+ }
+
+ log.ZInfo(ctx, "webhook success", "url", fullURL, "input", input, "response", string(b))
+ return nil
+}
diff --git a/pkg/common/webhook/http_client_test.go b/pkg/common/webhook/http_client_test.go
new file mode 100644
index 0000000..3c3aeb8
--- /dev/null
+++ b/pkg/common/webhook/http_client_test.go
@@ -0,0 +1,15 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package webhook
diff --git a/pkg/dbbuild/builder.go b/pkg/dbbuild/builder.go
new file mode 100644
index 0000000..c74ee14
--- /dev/null
+++ b/pkg/dbbuild/builder.go
@@ -0,0 +1,29 @@
+package dbbuild
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/redis/go-redis/v9"
+)
+
+type Builder interface {
+ Mongo(ctx context.Context) (*mongoutil.Client, error)
+ Redis(ctx context.Context) (redis.UniversalClient, error)
+}
+
+func NewBuilder(mongoConf *config.Mongo, redisConf *config.Redis) Builder {
+ if redisConf != nil {
+ cachekey.SetOnlinePrefix(redisConf.OnlineKeyPrefix, redisConf.OnlineKeyPrefixHashTag, redisConf.RedisMode)
+ }
+ if config.Standalone() {
+ globalStandalone.setConfig(mongoConf, redisConf)
+ return globalStandalone
+ }
+ return µservices{
+ mongo: mongoConf,
+ redis: redisConf,
+ }
+}
diff --git a/pkg/dbbuild/microservices.go b/pkg/dbbuild/microservices.go
new file mode 100644
index 0000000..d8cb616
--- /dev/null
+++ b/pkg/dbbuild/microservices.go
@@ -0,0 +1,26 @@
+package dbbuild
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/redis/go-redis/v9"
+)
+
+type microservices struct {
+ mongo *config.Mongo
+ redis *config.Redis
+}
+
+func (x *microservices) Mongo(ctx context.Context) (*mongoutil.Client, error) {
+ return mongoutil.NewMongoDB(ctx, x.mongo.Build())
+}
+
+func (x *microservices) Redis(ctx context.Context) (redis.UniversalClient, error) {
+ if x.redis.Disable {
+ return nil, nil
+ }
+ return redisutil.NewRedisClient(ctx, x.redis.Build())
+}
diff --git a/pkg/dbbuild/standalone.go b/pkg/dbbuild/standalone.go
new file mode 100644
index 0000000..e213f6e
--- /dev/null
+++ b/pkg/dbbuild/standalone.go
@@ -0,0 +1,76 @@
+package dbbuild
+
+import (
+ "context"
+ "sync"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+ standaloneMongo = "mongo"
+ standaloneRedis = "redis"
+)
+
+var globalStandalone = &standalone{}
+
+type standaloneConn[C any] struct {
+ Conn C
+ Err error
+}
+
+func (x *standaloneConn[C]) result() (C, error) {
+ return x.Conn, x.Err
+}
+
+type standalone struct {
+ lock sync.Mutex
+ mongo *config.Mongo
+ redis *config.Redis
+ conn map[string]any
+}
+
+func (x *standalone) setConfig(mongoConf *config.Mongo, redisConf *config.Redis) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ x.mongo = mongoConf
+ x.redis = redisConf
+}
+
+func (x *standalone) Mongo(ctx context.Context) (*mongoutil.Client, error) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.conn == nil {
+ x.conn = make(map[string]any)
+ }
+ v, ok := x.conn[standaloneMongo]
+ if !ok {
+ var val standaloneConn[*mongoutil.Client]
+ val.Conn, val.Err = mongoutil.NewMongoDB(ctx, x.mongo.Build())
+ v = &val
+ x.conn[standaloneMongo] = v
+ }
+ return v.(*standaloneConn[*mongoutil.Client]).result()
+}
+
+func (x *standalone) Redis(ctx context.Context) (redis.UniversalClient, error) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.redis.Disable {
+ return nil, nil
+ }
+ if x.conn == nil {
+ x.conn = make(map[string]any)
+ }
+ v, ok := x.conn[standaloneRedis]
+ if !ok {
+ var val standaloneConn[redis.UniversalClient]
+ val.Conn, val.Err = redisutil.NewRedisClient(ctx, x.redis.Build())
+ v = &val
+ x.conn[standaloneRedis] = v
+ }
+ return v.(*standaloneConn[redis.UniversalClient]).result()
+}
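The standalone builder above memoizes each connection (and its error) in a mutex-guarded map so every caller shares one client per resource. A stdlib-only sketch of that memoization pattern, with a hypothetical `pool` type and `dial` function standing in for the Mongo/Redis constructors:

```go
package main

import (
	"fmt"
	"sync"
)

// pool caches the result of the first dial per resource name, so concurrent
// callers share a single connection (mirroring standalone.Mongo/Redis).
type pool struct {
	mu   sync.Mutex
	conn map[string]string
}

func (p *pool) get(name string, dial func() string) string {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.conn == nil {
		p.conn = map[string]string{}
	}
	if v, ok := p.conn[name]; ok {
		return v // already dialed: reuse
	}
	v := dial()
	p.conn[name] = v
	return v
}

func main() {
	dials := 0
	p := &pool{}
	dial := func() string { dials++; return "conn#1" }
	a := p.get("mongo", dial)
	b := p.get("mongo", dial)
	fmt.Println(a, b, dials) // conn#1 conn#1 1
}
```

Note that the real code also caches a failed dial's error, so a broken config fails fast on every subsequent call instead of re-dialing.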
diff --git a/pkg/localcache/cache.go b/pkg/localcache/cache.go
new file mode 100644
index 0000000..a2d7982
--- /dev/null
+++ b/pkg/localcache/cache.go
@@ -0,0 +1,131 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache
+
+import (
+ "context"
+ "hash/fnv"
+ "unsafe"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/link"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/lru"
+)
+
+type Cache[V any] interface {
+ Get(ctx context.Context, key string, fetch func(ctx context.Context) (V, error)) (V, error)
+ GetLink(ctx context.Context, key string, fetch func(ctx context.Context) (V, error), link ...string) (V, error)
+ Del(ctx context.Context, key ...string)
+ DelLocal(ctx context.Context, key ...string)
+ Stop()
+}
+
+func LRUStringHash(key string) uint64 {
+ h := fnv.New64a()
+	h.Write(unsafe.Slice(unsafe.StringData(key), len(key)))
+ return h.Sum64()
+}
+
+func New[V any](opts ...Option) Cache[V] {
+ opt := defaultOption()
+ for _, o := range opts {
+ o(opt)
+ }
+
+ c := cache[V]{opt: opt}
+ if opt.localSlotNum > 0 && opt.localSlotSize > 0 {
+ createSimpleLRU := func() lru.LRU[string, V] {
+ if opt.expirationEvict {
+ return lru.NewExpirationLRU(opt.localSlotSize, opt.localSuccessTTL, opt.localFailedTTL, opt.target, c.onEvict)
+ } else {
+ return lru.NewLazyLRU(opt.localSlotSize, opt.localSuccessTTL, opt.localFailedTTL, opt.target, c.onEvict)
+ }
+ }
+ if opt.localSlotNum == 1 {
+ c.local = createSimpleLRU()
+ } else {
+ c.local = lru.NewSlotLRU(opt.localSlotNum, LRUStringHash, createSimpleLRU)
+ }
+ if opt.linkSlotNum > 0 {
+ c.link = link.New(opt.linkSlotNum)
+ }
+ }
+ return &c
+}
+
+type cache[V any] struct {
+ opt *option
+ link link.Link
+ local lru.LRU[string, V]
+}
+
+func (c *cache[V]) onEvict(key string, value V) {
+ _ = value
+
+ if c.link != nil {
+ lks := c.link.Del(key)
+ for k := range lks {
+ if key != k { // prevent deadlock
+ c.local.Del(k)
+ }
+ }
+ }
+}
+
+func (c *cache[V]) del(key ...string) {
+ if c.local == nil {
+ return
+ }
+ for _, k := range key {
+ c.local.Del(k)
+ if c.link != nil {
+ lks := c.link.Del(k)
+ for k := range lks {
+ c.local.Del(k)
+ }
+ }
+ }
+}
+
+func (c *cache[V]) Get(ctx context.Context, key string, fetch func(ctx context.Context) (V, error)) (V, error) {
+ return c.GetLink(ctx, key, fetch)
+}
+
+func (c *cache[V]) GetLink(ctx context.Context, key string, fetch func(ctx context.Context) (V, error), link ...string) (V, error) {
+ if c.local != nil {
+ return c.local.Get(key, func() (V, error) {
+ if len(link) > 0 {
+ c.link.Link(key, link...)
+ }
+ return fetch(ctx)
+ })
+ } else {
+ return fetch(ctx)
+ }
+}
+
+func (c *cache[V]) Del(ctx context.Context, key ...string) {
+ for _, fn := range c.opt.delFn {
+ fn(ctx, key...)
+ }
+ c.del(key...)
+}
+
+func (c *cache[V]) DelLocal(ctx context.Context, key ...string) {
+ c.del(key...)
+}
+
+func (c *cache[V]) Stop() {
+	if c.local != nil { c.local.Stop() }
+}
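`LRUStringHash` hashes the string's bytes with FNV-1a without allocating a copy. A standalone sketch (not from the repo) showing that the zero-copy form yields the same digest as the ordinary allocating conversion; `unsafe.Slice`/`unsafe.StringData` require Go 1.20+:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"unsafe"
)

// hashUnsafe feeds the string's backing bytes to FNV-1a without copying,
// by building a byte slice over the string's data pointer.
func hashUnsafe(s string) uint64 {
	h := fnv.New64a()
	h.Write(unsafe.Slice(unsafe.StringData(s), len(s)))
	return h.Sum64()
}

// hashSafe is the allocating equivalent.
func hashSafe(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

func main() {
	fmt.Println(hashUnsafe("conversation:123") == hashSafe("conversation:123")) // true
}
```

Since strings are immutable and `Write` only reads the bytes, the aliasing is safe here; the payoff is avoiding one allocation per cache lookup on a hot path.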
diff --git a/pkg/localcache/cache_test.go b/pkg/localcache/cache_test.go
new file mode 100644
index 0000000..c206e67
--- /dev/null
+++ b/pkg/localcache/cache_test.go
@@ -0,0 +1,93 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache
+
+import (
+ "context"
+ "fmt"
+ "math/rand"
+ "sync"
+ "sync/atomic"
+ "testing"
+ "time"
+)
+
+func TestName(t *testing.T) {
+ c := New[string](WithExpirationEvict())
+ //c := New[string]()
+ ctx := context.Background()
+
+ const (
+ num = 10000
+ tNum = 10000
+ kNum = 100000
+ pNum = 100
+ )
+
+ getKey := func(v uint64) string {
+ return fmt.Sprintf("key_%d", v%kNum)
+ }
+
+ start := time.Now()
+ t.Log("start", start)
+
+ var (
+ get atomic.Int64
+ del atomic.Int64
+ )
+
+ incrGet := func() {
+ if v := get.Add(1); v%pNum == 0 {
+ //t.Log("#get count", v/pNum)
+ }
+ }
+ incrDel := func() {
+ if v := del.Add(1); v%pNum == 0 {
+ //t.Log("@del count", v/pNum)
+ }
+ }
+
+ var wg sync.WaitGroup
+
+ for i := 0; i < tNum; i++ {
+ wg.Add(2)
+ go func() {
+ defer wg.Done()
+ for i := 0; i < num; i++ {
+ c.Get(ctx, getKey(rand.Uint64()), func(ctx context.Context) (string, error) {
+ return fmt.Sprintf("index_%d", i), nil
+ })
+ incrGet()
+ }
+ }()
+
+ go func() {
+ defer wg.Done()
+ time.Sleep(time.Second / 10)
+ for i := 0; i < num; i++ {
+ c.Del(ctx, getKey(rand.Uint64()))
+ incrDel()
+ }
+ }()
+ }
+
+ wg.Wait()
+ end := time.Now()
+ t.Log("end", end)
+ t.Log("time", end.Sub(start))
+ t.Log("get", get.Load())
+ t.Log("del", del.Load())
+ // 137.35s
+}
diff --git a/pkg/localcache/doc.go b/pkg/localcache/doc.go
new file mode 100644
index 0000000..6000441
--- /dev/null
+++ b/pkg/localcache/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache // import "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
diff --git a/pkg/localcache/init.go b/pkg/localcache/init.go
new file mode 100644
index 0000000..2d6df8f
--- /dev/null
+++ b/pkg/localcache/init.go
@@ -0,0 +1,88 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache
+
+import (
+ "strings"
+ "sync"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+)
+
+var (
+ once sync.Once
+ subscribe map[string][]string
+)
+
+func InitLocalCache(localCache *config.LocalCache) {
+ once.Do(func() {
+ list := []struct {
+ Local config.CacheConfig
+ Keys []string
+ }{
+ {
+ Local: localCache.Auth,
+ Keys: []string{cachekey.UidPidToken},
+ },
+ {
+ Local: localCache.User,
+ Keys: []string{cachekey.UserInfoKey, cachekey.UserGlobalRecvMsgOptKey},
+ },
+ {
+ Local: localCache.Group,
+ Keys: []string{cachekey.GroupMemberIDsKey, cachekey.GroupInfoKey, cachekey.GroupMemberInfoKey},
+ },
+ {
+ Local: localCache.Friend,
+ Keys: []string{cachekey.FriendIDsKey, cachekey.BlackIDsKey},
+ },
+ {
+ Local: localCache.Conversation,
+ Keys: []string{cachekey.ConversationKey, cachekey.ConversationIDsKey, cachekey.ConversationNotReceiveMessageUserIDsKey},
+ },
+ }
+ subscribe = make(map[string][]string)
+ for _, v := range list {
+ if v.Local.Enable() {
+ subscribe[v.Local.Topic] = v.Keys
+ }
+ }
+ })
+}
+
+func GetPublishKeysByTopic(topics []string, keys []string) map[string][]string {
+ keysByTopic := make(map[string][]string)
+ for _, topic := range topics {
+ keysByTopic[topic] = []string{}
+ }
+
+ for _, key := range keys {
+ for _, topic := range topics {
+ prefixes, ok := subscribe[topic]
+ if !ok {
+ continue
+ }
+ for _, prefix := range prefixes {
+ if strings.HasPrefix(key, prefix) {
+ keysByTopic[topic] = append(keysByTopic[topic], key)
+ break
+ }
+ }
+ }
+ }
+
+ return keysByTopic
+}
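`GetPublishKeysByTopic` fans changed keys out to every subscribed topic whose key prefix matches. A self-contained sketch of the same routing, with the subscribe table passed in explicitly (illustrative values, not the real cachekey prefixes):

```go
package main

import (
	"fmt"
	"strings"
)

// publishKeys groups keys by topic: a key is routed to a topic when it
// starts with any of that topic's subscribed prefixes.
func publishKeys(subscribe map[string][]string, topics, keys []string) map[string][]string {
	out := make(map[string][]string)
	for _, t := range topics {
		out[t] = []string{}
	}
	for _, key := range keys {
		for _, t := range topics {
			for _, prefix := range subscribe[t] {
				if strings.HasPrefix(key, prefix) {
					out[t] = append(out[t], key)
					break // one match per topic is enough
				}
			}
		}
	}
	return out
}

func main() {
	subscribe := map[string][]string{
		"user":  {"USER_INFO:"},
		"group": {"GROUP_INFO:"},
	}
	got := publishKeys(subscribe, []string{"user", "group"},
		[]string{"USER_INFO:u1", "GROUP_INFO:g1", "OTHER:x"})
	fmt.Println(got["user"], got["group"]) // [USER_INFO:u1] [GROUP_INFO:g1]
}
```

Keys matching no prefix (like `OTHER:x`) are simply dropped, so invalidation messages only reach topics that actually cache that data.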
diff --git a/pkg/localcache/link/doc.go b/pkg/localcache/link/doc.go
new file mode 100644
index 0000000..90d4544
--- /dev/null
+++ b/pkg/localcache/link/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package link // import "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/link"
diff --git a/pkg/localcache/link/link.go b/pkg/localcache/link/link.go
new file mode 100644
index 0000000..8c77015
--- /dev/null
+++ b/pkg/localcache/link/link.go
@@ -0,0 +1,123 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package link
+
+import (
+ "hash/fnv"
+ "sync"
+ "unsafe"
+)
+
+type Link interface {
+ Link(key string, link ...string)
+ Del(key string) map[string]struct{}
+}
+
+func newLinkKey() *linkKey {
+ return &linkKey{
+ data: make(map[string]map[string]struct{}),
+ }
+}
+
+type linkKey struct {
+ lock sync.Mutex
+ data map[string]map[string]struct{}
+}
+
+func (x *linkKey) link(key string, link ...string) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ v, ok := x.data[key]
+ if !ok {
+ v = make(map[string]struct{})
+ x.data[key] = v
+ }
+ for _, k := range link {
+ v[k] = struct{}{}
+ }
+}
+
+func (x *linkKey) del(key string) map[string]struct{} {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ ks, ok := x.data[key]
+ if !ok {
+ return nil
+ }
+ delete(x.data, key)
+ return ks
+}
+
+func New(n int) Link {
+ if n <= 0 {
+		panic("slot number must be greater than 0")
+ }
+ slots := make([]*linkKey, n)
+ for i := 0; i < len(slots); i++ {
+ slots[i] = newLinkKey()
+ }
+ return &slot{
+ n: uint64(n),
+ slots: slots,
+ }
+}
+
+type slot struct {
+ n uint64
+ slots []*linkKey
+}
+
+func (x *slot) index(s string) uint64 {
+ h := fnv.New64a()
+	_, _ = h.Write(unsafe.Slice(unsafe.StringData(s), len(s)))
+ return h.Sum64() % x.n
+}
+
+func (x *slot) Link(key string, link ...string) {
+ if len(link) == 0 {
+ return
+ }
+ mk := key
+ lks := make([]string, len(link))
+ for i, k := range link {
+ lks[i] = k
+ }
+ x.slots[x.index(mk)].link(mk, lks...)
+ for _, lk := range lks {
+ x.slots[x.index(lk)].link(lk, mk)
+ }
+}
+
+func (x *slot) Del(key string) map[string]struct{} {
+ return x.delKey(key)
+}
+
+func (x *slot) delKey(k string) map[string]struct{} {
+ del := make(map[string]struct{})
+ stack := []string{k}
+ for len(stack) > 0 {
+ curr := stack[len(stack)-1]
+ stack = stack[:len(stack)-1]
+ if _, ok := del[curr]; ok {
+ continue
+ }
+ del[curr] = struct{}{}
+ childKeys := x.slots[x.index(curr)].del(curr)
+ for ck := range childKeys {
+ stack = append(stack, ck)
+ }
+ }
+ return del
+}
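The link package records edges in both directions (`Link(a, b)` also links `b` back to `a`), and `Del` walks those edges with an explicit stack so deleting one key cascades through everything transitively linked. A minimal single-slot sketch of those semantics (hypothetical `links` type; the real package shards the map across mutex-guarded slots):

```go
package main

import "fmt"

// links stores bidirectional edges between cache keys.
type links struct{ data map[string]map[string]struct{} }

func newLinks() *links { return &links{data: map[string]map[string]struct{}{}} }

// Link records key<->related edges in both directions.
func (l *links) Link(key string, related ...string) {
	add := func(a, b string) {
		if l.data[a] == nil {
			l.data[a] = map[string]struct{}{}
		}
		l.data[a][b] = struct{}{}
	}
	for _, r := range related {
		add(key, r)
		add(r, key)
	}
}

// Del removes key and everything transitively linked to it,
// returning the full set of deleted keys.
func (l *links) Del(key string) map[string]struct{} {
	deleted := map[string]struct{}{}
	stack := []string{key}
	for len(stack) > 0 {
		cur := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if _, seen := deleted[cur]; seen {
			continue
		}
		deleted[cur] = struct{}{}
		for next := range l.data[cur] {
			stack = append(stack, next)
		}
		delete(l.data, cur)
	}
	return deleted
}

func main() {
	l := newLinks()
	l.Link("a:1", "b:1", "c:1")
	l.Link("z:1", "b:1")
	// Deleting z:1 cascades through b:1 to a:1 and c:1.
	fmt.Println(len(l.Del("z:1"))) // 4
}
```

The iterative stack (rather than recursion) matches `slot.delKey` above and keeps arbitrarily long link chains from overflowing the goroutine stack.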
diff --git a/pkg/localcache/link/link_test.go b/pkg/localcache/link/link_test.go
new file mode 100644
index 0000000..bb9fee6
--- /dev/null
+++ b/pkg/localcache/link/link_test.go
@@ -0,0 +1,34 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package link
+
+import (
+ "testing"
+)
+
+func TestName(t *testing.T) {
+
+ v := New(1)
+
+ //v.Link("a:1", "b:1", "c:1", "d:1")
+ v.Link("a:1", "b:1", "c:1")
+ v.Link("z:1", "b:1")
+
+ //v.DelKey("a:1")
+ v.Del("z:1")
+
+ t.Log(v)
+
+}
diff --git a/pkg/localcache/lru/doc.go b/pkg/localcache/lru/doc.go
new file mode 100644
index 0000000..1e65426
--- /dev/null
+++ b/pkg/localcache/lru/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru // import "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/lru"
diff --git a/pkg/localcache/lru/lru.go b/pkg/localcache/lru/lru.go
new file mode 100644
index 0000000..726535c
--- /dev/null
+++ b/pkg/localcache/lru/lru.go
@@ -0,0 +1,37 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru
+
+import "github.com/hashicorp/golang-lru/v2/simplelru"
+
+type EvictCallback[K comparable, V any] simplelru.EvictCallback[K, V]
+
+type LRU[K comparable, V any] interface {
+ Get(key K, fetch func() (V, error)) (V, error)
+ Set(key K, value V)
+ SetHas(key K, value V) bool
+ GetBatch(keys []K, fetch func(keys []K) (map[K]V, error)) (map[K]V, error)
+ Del(key K) bool
+ Stop()
+}
+
+type Target interface {
+ IncrGetHit()
+ IncrGetSuccess()
+ IncrGetFailed()
+
+ IncrDelHit()
+ IncrDelNotFound()
+}
diff --git a/pkg/localcache/lru/lru_expiration.go b/pkg/localcache/lru/lru_expiration.go
new file mode 100644
index 0000000..df6bacb
--- /dev/null
+++ b/pkg/localcache/lru/lru_expiration.go
@@ -0,0 +1,114 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru
+
+import (
+ "sync"
+ "time"
+
+ "github.com/hashicorp/golang-lru/v2/expirable"
+)
+
+func NewExpirationLRU[K comparable, V any](size int, successTTL, failedTTL time.Duration, target Target, onEvict EvictCallback[K, V]) LRU[K, V] {
+ var cb expirable.EvictCallback[K, *expirationLruItem[V]]
+ if onEvict != nil {
+ cb = func(key K, value *expirationLruItem[V]) {
+ onEvict(key, value.value)
+ }
+ }
+ core := expirable.NewLRU[K, *expirationLruItem[V]](size, cb, successTTL)
+ return &ExpirationLRU[K, V]{
+ core: core,
+ successTTL: successTTL,
+ failedTTL: failedTTL,
+ target: target,
+ }
+}
+
+type expirationLruItem[V any] struct {
+ lock sync.RWMutex
+ err error
+ value V
+}
+
+type ExpirationLRU[K comparable, V any] struct {
+ lock sync.Mutex
+ core *expirable.LRU[K, *expirationLruItem[V]]
+ successTTL time.Duration
+ failedTTL time.Duration
+ target Target
+}
+
+func (x *ExpirationLRU[K, V]) GetBatch(keys []K, fetch func(keys []K) (map[K]V, error)) (map[K]V, error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (x *ExpirationLRU[K, V]) Get(key K, fetch func() (V, error)) (V, error) {
+ x.lock.Lock()
+ v, ok := x.core.Get(key)
+ if ok {
+ x.lock.Unlock()
+		x.target.IncrGetHit()
+ v.lock.RLock()
+ defer v.lock.RUnlock()
+ return v.value, v.err
+ } else {
+ v = &expirationLruItem[V]{}
+ x.core.Add(key, v)
+ v.lock.Lock()
+ x.lock.Unlock()
+ defer v.lock.Unlock()
+ v.value, v.err = fetch()
+ if v.err == nil {
+ x.target.IncrGetSuccess()
+ } else {
+ x.target.IncrGetFailed()
+ x.core.Remove(key)
+ }
+ return v.value, v.err
+ }
+}
+
+func (x *ExpirationLRU[K, V]) Del(key K) bool {
+ x.lock.Lock()
+ ok := x.core.Remove(key)
+ x.lock.Unlock()
+ if ok {
+ x.target.IncrDelHit()
+ } else {
+ x.target.IncrDelNotFound()
+ }
+ return ok
+}
+
+func (x *ExpirationLRU[K, V]) SetHas(key K, value V) bool {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.core.Contains(key) {
+ x.core.Add(key, &expirationLruItem[V]{value: value})
+ return true
+ }
+ return false
+}
+
+func (x *ExpirationLRU[K, V]) Set(key K, value V) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ x.core.Add(key, &expirationLruItem[V]{value: value})
+}
+
+func (x *ExpirationLRU[K, V]) Stop() {
+}
diff --git a/pkg/localcache/lru/lru_lazy.go b/pkg/localcache/lru/lru_lazy.go
new file mode 100644
index 0000000..4a3db46
--- /dev/null
+++ b/pkg/localcache/lru/lru_lazy.go
@@ -0,0 +1,190 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru
+
+import (
+ "sync"
+ "time"
+
+ "github.com/hashicorp/golang-lru/v2/simplelru"
+)
+
+type lazyLruItem[V any] struct {
+ lock sync.Mutex
+ expires int64
+ err error
+ value V
+}
+
+func NewLazyLRU[K comparable, V any](size int, successTTL, failedTTL time.Duration, target Target, onEvict EvictCallback[K, V]) *LazyLRU[K, V] {
+ var cb simplelru.EvictCallback[K, *lazyLruItem[V]]
+ if onEvict != nil {
+ cb = func(key K, value *lazyLruItem[V]) {
+ onEvict(key, value.value)
+ }
+ }
+ core, err := simplelru.NewLRU[K, *lazyLruItem[V]](size, cb)
+ if err != nil {
+ panic(err)
+ }
+ return &LazyLRU[K, V]{
+ core: core,
+ successTTL: successTTL,
+ failedTTL: failedTTL,
+ target: target,
+ }
+}
+
+type LazyLRU[K comparable, V any] struct {
+ lock sync.Mutex
+ core *simplelru.LRU[K, *lazyLruItem[V]]
+ successTTL time.Duration
+ failedTTL time.Duration
+ target Target
+}
+
+func (x *LazyLRU[K, V]) Get(key K, fetch func() (V, error)) (V, error) {
+ x.lock.Lock()
+ v, ok := x.core.Get(key)
+ if ok {
+ x.lock.Unlock()
+ v.lock.Lock()
+ expires, value, err := v.expires, v.value, v.err
+ if expires != 0 && expires > time.Now().UnixMilli() {
+ v.lock.Unlock()
+ x.target.IncrGetHit()
+ return value, err
+ }
+ } else {
+ v = &lazyLruItem[V]{}
+ x.core.Add(key, v)
+ v.lock.Lock()
+ x.lock.Unlock()
+ }
+ defer v.lock.Unlock()
+ if v.expires > time.Now().UnixMilli() {
+ return v.value, v.err
+ }
+ v.value, v.err = fetch()
+ if v.err == nil {
+ v.expires = time.Now().Add(x.successTTL).UnixMilli()
+ x.target.IncrGetSuccess()
+ } else {
+ v.expires = time.Now().Add(x.failedTTL).UnixMilli()
+ x.target.IncrGetFailed()
+ }
+ return v.value, v.err
+}
+
+func (x *LazyLRU[K, V]) GetBatch(keys []K, fetch func(keys []K) (map[K]V, error)) (map[K]V, error) {
+ var (
+ err error
+ once sync.Once
+ )
+
+ res := make(map[K]V)
+ queries := make([]K, 0, len(keys))
+
+ for _, key := range keys {
+ x.lock.Lock()
+ v, ok := x.core.Get(key)
+ x.lock.Unlock()
+ if ok {
+ v.lock.Lock()
+ expires, value, err1 := v.expires, v.value, v.err
+ v.lock.Unlock()
+ if expires != 0 && expires > time.Now().UnixMilli() {
+ x.target.IncrGetHit()
+ res[key] = value
+ if err1 != nil {
+ once.Do(func() {
+ err = err1
+ })
+ }
+ continue
+ }
+ }
+ queries = append(queries, key)
+ }
+
+ if len(queries) == 0 {
+ return res, err
+ }
+
+ values, fetchErr := fetch(queries)
+ if fetchErr != nil {
+ once.Do(func() {
+ err = fetchErr
+ })
+ }
+
+ for key, val := range values {
+ v := &lazyLruItem[V]{}
+ v.value = val
+
+ if err == nil {
+ v.expires = time.Now().Add(x.successTTL).UnixMilli()
+ x.target.IncrGetSuccess()
+ } else {
+ v.expires = time.Now().Add(x.failedTTL).UnixMilli()
+ x.target.IncrGetFailed()
+ }
+
+ x.lock.Lock()
+ x.core.Add(key, v)
+ x.lock.Unlock()
+ res[key] = val
+ }
+
+ return res, err
+}
+
+//func (x *LazyLRU[K, V]) Has(key K) bool {
+// x.lock.Lock()
+// defer x.lock.Unlock()
+// return x.core.Contains(key)
+//}
+
+func (x *LazyLRU[K, V]) Set(key K, value V) {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ x.core.Add(key, &lazyLruItem[V]{value: value, expires: time.Now().Add(x.successTTL).UnixMilli()})
+}
+
+func (x *LazyLRU[K, V]) SetHas(key K, value V) bool {
+ x.lock.Lock()
+ defer x.lock.Unlock()
+ if x.core.Contains(key) {
+ x.core.Add(key, &lazyLruItem[V]{value: value, expires: time.Now().Add(x.successTTL).UnixMilli()})
+ return true
+ }
+ return false
+}
+
+func (x *LazyLRU[K, V]) Del(key K) bool {
+ x.lock.Lock()
+ ok := x.core.Remove(key)
+ x.lock.Unlock()
+ if ok {
+ x.target.IncrDelHit()
+ } else {
+ x.target.IncrDelNotFound()
+ }
+ return ok
+}
+
+func (x *LazyLRU[K, V]) Stop() {
+
+}
diff --git a/pkg/localcache/lru/lru_lazy_test.go b/pkg/localcache/lru/lru_lazy_test.go
new file mode 100644
index 0000000..ab0fa50
--- /dev/null
+++ b/pkg/localcache/lru/lru_lazy_test.go
@@ -0,0 +1,113 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru
+
+import (
+ "fmt"
+ "hash/fnv"
+ "sync"
+ "sync/atomic"
+ "testing"
+ "time"
+ "unsafe"
+)
+
+type cacheTarget struct {
+ getHit int64
+ getSuccess int64
+ getFailed int64
+ delHit int64
+ delNotFound int64
+}
+
+func (r *cacheTarget) IncrGetHit() {
+ atomic.AddInt64(&r.getHit, 1)
+}
+
+func (r *cacheTarget) IncrGetSuccess() {
+ atomic.AddInt64(&r.getSuccess, 1)
+}
+
+func (r *cacheTarget) IncrGetFailed() {
+ atomic.AddInt64(&r.getFailed, 1)
+}
+
+func (r *cacheTarget) IncrDelHit() {
+ atomic.AddInt64(&r.delHit, 1)
+}
+
+func (r *cacheTarget) IncrDelNotFound() {
+ atomic.AddInt64(&r.delNotFound, 1)
+}
+
+func (r *cacheTarget) String() string {
+ return fmt.Sprintf("getHit: %d, getSuccess: %d, getFailed: %d, delHit: %d, delNotFound: %d", r.getHit, r.getSuccess, r.getFailed, r.delHit, r.delNotFound)
+}
+
+func TestName(t *testing.T) {
+ target := &cacheTarget{}
+ l := NewSlotLRU[string, string](100, func(k string) uint64 {
+ h := fnv.New64a()
+		h.Write(unsafe.Slice(unsafe.StringData(k), len(k)))
+ return h.Sum64()
+ }, func() LRU[string, string] {
+ return NewExpirationLRU[string, string](100, time.Second*60, time.Second, target, nil)
+ })
+ //l := NewInertiaLRU[string, string](1000, time.Second*20, time.Second*5, target)
+
+ fn := func(key string, n int, fetch func() (string, error)) {
+ for i := 0; i < n; i++ {
+ //v, err := l.Get(key, fetch)
+ //if err == nil {
+ // t.Log("key", key, "value", v)
+ //} else {
+ // t.Error("key", key, err)
+ //}
+ v, err := l.Get(key, fetch)
+ //time.Sleep(time.Second / 100)
+ func(v ...any) {}(v, err)
+ }
+ }
+
+ tmp := make(map[string]struct{})
+
+ var wg sync.WaitGroup
+ for i := 0; i < 10000; i++ {
+ wg.Add(1)
+ key := fmt.Sprintf("key_%d", i%200)
+ tmp[key] = struct{}{}
+ go func() {
+ defer wg.Done()
+ //t.Log(key)
+ fn(key, 10000, func() (string, error) {
+
+ return "value_" + key, nil
+ })
+ }()
+
+ //wg.Add(1)
+ //go func() {
+ // defer wg.Done()
+ // for i := 0; i < 10; i++ {
+ // l.Del(key)
+ // time.Sleep(time.Second / 3)
+ // }
+ //}()
+ }
+ wg.Wait()
+ t.Log(len(tmp))
+ t.Log(target.String())
+
+}
diff --git a/pkg/localcache/lru/lru_slot.go b/pkg/localcache/lru/lru_slot.go
new file mode 100644
index 0000000..14ee3b5
--- /dev/null
+++ b/pkg/localcache/lru/lru_slot.go
@@ -0,0 +1,82 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package lru
+
+func NewSlotLRU[K comparable, V any](slotNum int, hash func(K) uint64, create func() LRU[K, V]) LRU[K, V] {
+ x := &slotLRU[K, V]{
+ n: uint64(slotNum),
+ slots: make([]LRU[K, V], slotNum),
+ hash: hash,
+ }
+ for i := 0; i < slotNum; i++ {
+ x.slots[i] = create()
+ }
+ return x
+}
+
+type slotLRU[K comparable, V any] struct {
+ n uint64
+ slots []LRU[K, V]
+ hash func(k K) uint64
+}
+
+func (x *slotLRU[K, V]) GetBatch(keys []K, fetch func(keys []K) (map[K]V, error)) (map[K]V, error) {
+ var (
+ slotKeys = make(map[uint64][]K)
+ kVs = make(map[K]V)
+ )
+
+ for _, k := range keys {
+ index := x.getIndex(k)
+ slotKeys[index] = append(slotKeys[index], k)
+ }
+
+ for k, v := range slotKeys {
+ batches, err := x.slots[k].GetBatch(v, fetch)
+ if err != nil {
+ return nil, err
+ }
+ for key, value := range batches {
+ kVs[key] = value
+ }
+ }
+ return kVs, nil
+}
+
+func (x *slotLRU[K, V]) getIndex(k K) uint64 {
+ return x.hash(k) % x.n
+}
+
+func (x *slotLRU[K, V]) Get(key K, fetch func() (V, error)) (V, error) {
+ return x.slots[x.getIndex(key)].Get(key, fetch)
+}
+
+func (x *slotLRU[K, V]) Set(key K, value V) {
+ x.slots[x.getIndex(key)].Set(key, value)
+}
+
+func (x *slotLRU[K, V]) SetHas(key K, value V) bool {
+ return x.slots[x.getIndex(key)].SetHas(key, value)
+}
+
+func (x *slotLRU[K, V]) Del(key K) bool {
+ return x.slots[x.getIndex(key)].Del(key)
+}
+
+func (x *slotLRU[K, V]) Stop() {
+ for _, slot := range x.slots {
+ slot.Stop()
+ }
+}
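slotLRU routes each key to a fixed shard via `hash(key) % n`, so lock contention is spread across slot-level mutexes instead of one global lock. A tiny standalone sketch of that routing (hash and modulus only, no LRU):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// slotIndex maps a key deterministically onto one of n slots,
// mirroring slotLRU.getIndex.
func slotIndex(key string, n uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64() % n
}

func main() {
	const slots = 100
	a := slotIndex("conversation:42", slots)
	b := slotIndex("conversation:42", slots)
	fmt.Println(a == b, a < slots) // true true
}
```

Determinism is what makes `GetBatch` above correct: grouping keys by slot index before fetching guarantees every key is looked up in the same shard that stores it.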
diff --git a/pkg/localcache/option.go b/pkg/localcache/option.go
new file mode 100644
index 0000000..7f234d8
--- /dev/null
+++ b/pkg/localcache/option.go
@@ -0,0 +1,136 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/lru"
+)
+
+func defaultOption() *option {
+ return &option{
+ localSlotNum: 500,
+ localSlotSize: 20000,
+ linkSlotNum: 500,
+ expirationEvict: false,
+ localSuccessTTL: time.Minute,
+ localFailedTTL: time.Second * 5,
+ delFn: make([]func(ctx context.Context, key ...string), 0, 2),
+ target: EmptyTarget{},
+ }
+}
+
+type option struct {
+ localSlotNum int
+ localSlotSize int
+ linkSlotNum int
+ // expirationEvict: true means that the cache will be actively cleared when the timer expires,
+ // false means that the cache will be lazily deleted.
+ expirationEvict bool
+ localSuccessTTL time.Duration
+ localFailedTTL time.Duration
+ delFn []func(ctx context.Context, key ...string)
+ target lru.Target
+}
+
+type Option func(o *option)
+
+func WithExpirationEvict() Option {
+ return func(o *option) {
+ o.expirationEvict = true
+ }
+}
+
+func WithLazy() Option {
+ return func(o *option) {
+ o.expirationEvict = false
+ }
+}
+
+func WithLocalDisable() Option {
+	return WithLocalSlotNum(0)
+}
+
+func WithLinkDisable() Option {
+ return WithLinkSlotNum(0)
+}
+
+func WithLinkSlotNum(linkSlotNum int) Option {
+ return func(o *option) {
+ o.linkSlotNum = linkSlotNum
+ }
+}
+
+func WithLocalSlotNum(localSlotNum int) Option {
+ return func(o *option) {
+ o.localSlotNum = localSlotNum
+ }
+}
+
+func WithLocalSlotSize(localSlotSize int) Option {
+ return func(o *option) {
+ o.localSlotSize = localSlotSize
+ }
+}
+
+func WithLocalSuccessTTL(localSuccessTTL time.Duration) Option {
+ if localSuccessTTL < 0 {
+		panic("localSuccessTTL must not be negative")
+ }
+ return func(o *option) {
+ o.localSuccessTTL = localSuccessTTL
+ }
+}
+
+func WithLocalFailedTTL(localFailedTTL time.Duration) Option {
+ if localFailedTTL < 0 {
+ panic("localFailedTTL should be greater than 0")
+ }
+ return func(o *option) {
+ o.localFailedTTL = localFailedTTL
+ }
+}
+
+func WithTarget(target lru.Target) Option {
+ if target == nil {
+ panic("target should not be nil")
+ }
+ return func(o *option) {
+ o.target = target
+ }
+}
+
+func WithDeleteKeyBefore(fn func(ctx context.Context, key ...string)) Option {
+ if fn == nil {
+ panic("fn should not be nil")
+ }
+ return func(o *option) {
+ o.delFn = append(o.delFn, fn)
+ }
+}
+
+type EmptyTarget struct{}
+
+func (e EmptyTarget) IncrGetHit() {}
+
+func (e EmptyTarget) IncrGetSuccess() {}
+
+func (e EmptyTarget) IncrGetFailed() {}
+
+func (e EmptyTarget) IncrDelHit() {}
+
+func (e EmptyTarget) IncrDelNotFound() {}
diff --git a/pkg/localcache/tool.go b/pkg/localcache/tool.go
new file mode 100644
index 0000000..ec04ea9
--- /dev/null
+++ b/pkg/localcache/tool.go
@@ -0,0 +1,23 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package localcache
+
+func AnyValue[V any](v any, err error) (V, error) {
+ if err != nil {
+ var zero V
+ return zero, err
+ }
+ return v.(V), nil
+}
diff --git a/pkg/mqbuild/builder.go b/pkg/mqbuild/builder.go
new file mode 100644
index 0000000..9986aba
--- /dev/null
+++ b/pkg/mqbuild/builder.go
@@ -0,0 +1,60 @@
+package mqbuild
+
+import (
+ "context"
+ "fmt"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/mq"
+ "github.com/openimsdk/tools/mq/kafka"
+ "github.com/openimsdk/tools/mq/simmq"
+)
+
+type Builder interface {
+ GetTopicProducer(ctx context.Context, topic string) (mq.Producer, error)
+ GetTopicConsumer(ctx context.Context, topic string) (mq.Consumer, error)
+}
+
+func NewBuilder(kafka *config.Kafka) Builder {
+ if config.Standalone() {
+ return standaloneBuilder{}
+ }
+ return &kafkaBuilder{
+ addr: kafka.Address,
+ config: kafka.Build(),
+ topicGroupID: map[string]string{
+ kafka.ToRedisTopic: kafka.ToRedisGroupID,
+ kafka.ToMongoTopic: kafka.ToMongoGroupID,
+ kafka.ToPushTopic: kafka.ToPushGroupID,
+ kafka.ToOfflinePushTopic: kafka.ToOfflineGroupID,
+ },
+ }
+}
+
+type standaloneBuilder struct{}
+
+func (standaloneBuilder) GetTopicProducer(ctx context.Context, topic string) (mq.Producer, error) {
+ return simmq.GetTopicProducer(topic), nil
+}
+
+func (standaloneBuilder) GetTopicConsumer(ctx context.Context, topic string) (mq.Consumer, error) {
+ return simmq.GetTopicConsumer(topic), nil
+}
+
+type kafkaBuilder struct {
+ addr []string
+ config *kafka.Config
+ topicGroupID map[string]string
+}
+
+func (x *kafkaBuilder) GetTopicProducer(ctx context.Context, topic string) (mq.Producer, error) {
+ return kafka.NewKafkaProducerV2(x.config, x.addr, topic)
+}
+
+func (x *kafkaBuilder) GetTopicConsumer(ctx context.Context, topic string) (mq.Consumer, error) {
+ groupID, ok := x.topicGroupID[topic]
+ if !ok {
+ return nil, fmt.Errorf("topic %s groupID not found", topic)
+ }
+ return kafka.NewMConsumerGroupV2(ctx, x.config, groupID, []string{topic}, true)
+}
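The `kafkaBuilder` above maps each topic to a registered consumer-group ID and rejects unknown topics. A standalone sketch of just that lookup step (the topic and group names here are illustrative, not the real config values):

```go
package main

import "fmt"

// builder mirrors the topic→groupID lookup in kafkaBuilder.GetTopicConsumer.
type builder struct {
	topicGroupID map[string]string
}

// groupID returns the consumer-group ID registered for topic, or an error
// for topics the builder was not configured with.
func (b *builder) groupID(topic string) (string, error) {
	id, ok := b.topicGroupID[topic]
	if !ok {
		return "", fmt.Errorf("topic %s groupID not found", topic)
	}
	return id, nil
}

func main() {
	b := &builder{topicGroupID: map[string]string{"toRedis": "redisGroup"}}

	id, err := b.groupID("toRedis")
	fmt.Println(id, err) // redisGroup <nil>

	_, err = b.groupID("unknown")
	fmt.Println(err) // topic unknown groupID not found
}
```

Failing fast on an unregistered topic surfaces configuration mistakes at consumer construction time rather than as a silently empty group ID.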
diff --git a/pkg/msgprocessor/conversation.go b/pkg/msgprocessor/conversation.go
new file mode 100644
index 0000000..aca416a
--- /dev/null
+++ b/pkg/msgprocessor/conversation.go
@@ -0,0 +1,149 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgprocessor
+
+import (
+ "sort"
+ "strings"
+
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "google.golang.org/protobuf/proto"
+)
+
+func IsGroupConversationID(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "g_") || strings.HasPrefix(conversationID, "sg_")
+}
+
+func GetNotificationConversationIDByMsg(msg *sdkws.MsgData) string {
+ switch msg.SessionType {
+ case constant.SingleChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ return "n_" + strings.Join(l, "_")
+ case constant.WriteGroupChatType:
+ return "n_" + msg.GroupID
+ case constant.ReadGroupChatType:
+ return "n_" + msg.GroupID
+ case constant.NotificationChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ return "n_" + strings.Join(l, "_")
+ }
+ return ""
+}
+
+func GetChatConversationIDByMsg(msg *sdkws.MsgData) string {
+ switch msg.SessionType {
+ case constant.SingleChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ return "si_" + strings.Join(l, "_")
+ case constant.WriteGroupChatType:
+ return "g_" + msg.GroupID
+ case constant.ReadGroupChatType:
+ return "sg_" + msg.GroupID
+ case constant.NotificationChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ return "sn_" + strings.Join(l, "_")
+ }
+
+ return ""
+}
+
+func GetConversationIDByMsg(msg *sdkws.MsgData) string {
+ options := Options(msg.Options)
+ switch msg.SessionType {
+ case constant.SingleChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ if !options.IsNotNotification() {
+ return "n_" + strings.Join(l, "_")
+ }
+ return "si_" + strings.Join(l, "_") // single chat
+ case constant.WriteGroupChatType:
+ if !options.IsNotNotification() {
+ return "n_" + msg.GroupID // group chat
+ }
+ return "g_" + msg.GroupID // group chat
+ case constant.ReadGroupChatType:
+ if !options.IsNotNotification() {
+ return "n_" + msg.GroupID // super group chat
+ }
+ return "sg_" + msg.GroupID // super group chat
+ case constant.NotificationChatType:
+ l := []string{msg.SendID, msg.RecvID}
+ sort.Strings(l)
+ if !options.IsNotNotification() {
+ return "n_" + strings.Join(l, "_")
+ }
+ return "sn_" + strings.Join(l, "_")
+ }
+ return ""
+}
+
+func GetConversationIDBySessionType(sessionType int, ids ...string) string {
+ sort.Strings(ids)
+ if len(ids) > 2 || len(ids) < 1 {
+ return ""
+ }
+ switch sessionType {
+ case constant.SingleChatType:
+ return "si_" + strings.Join(ids, "_") // single chat
+ case constant.WriteGroupChatType:
+ return "g_" + ids[0] // group chat
+ case constant.ReadGroupChatType:
+ return "sg_" + ids[0] // super group chat
+ case constant.NotificationChatType:
+ return "sn_" + strings.Join(ids, "_") // server notification chat
+ }
+ return ""
+}
+
+func IsNotification(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "n_")
+}
+
+func IsNotificationByMsg(msg *sdkws.MsgData) bool {
+ return !Options(msg.Options).IsNotNotification()
+}
+
+type MsgBySeq []*sdkws.MsgData
+
+func (s MsgBySeq) Len() int {
+ return len(s)
+}
+
+func (s MsgBySeq) Less(i, j int) bool {
+ return s[i].Seq < s[j].Seq
+}
+
+func (s MsgBySeq) Swap(i, j int) {
+ s[i], s[j] = s[j], s[i]
+}
+
+func Pb2String(pb proto.Message) (string, error) {
+ s, err := proto.Marshal(pb)
+ if err != nil {
+ return "", errs.Wrap(err)
+ }
+ return string(s), nil
+}
+
+func String2Pb(s string, pb proto.Message) error {
+ return proto.Unmarshal([]byte(s), pb)
+}
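The conversation-ID functions above sort the two participant IDs before joining them, so sender and receiver always derive the same ID regardless of who computes it. A minimal standalone sketch of the single-chat rule:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// singleChatConversationID reproduces the "si_" rule above: sort the two
// participant IDs, then join, so the ID is order-independent.
func singleChatConversationID(sendID, recvID string) string {
	l := []string{sendID, recvID}
	sort.Strings(l)
	return "si_" + strings.Join(l, "_")
}

func main() {
	fmt.Println(singleChatConversationID("u2", "u1")) // si_u1_u2
	fmt.Println(singleChatConversationID("u1", "u2")) // si_u1_u2, same either way
}
```

The same trick backs the `n_` and `sn_` prefixes; group conversations skip it because the group ID alone identifies the conversation.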
diff --git a/pkg/msgprocessor/doc.go b/pkg/msgprocessor/doc.go
new file mode 100644
index 0000000..a8698f7
--- /dev/null
+++ b/pkg/msgprocessor/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgprocessor // import "git.imall.cloud/openim/open-im-server-deploy/pkg/msgprocessor"
diff --git a/pkg/msgprocessor/options.go b/pkg/msgprocessor/options.go
new file mode 100644
index 0000000..2150343
--- /dev/null
+++ b/pkg/msgprocessor/options.go
@@ -0,0 +1,173 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package msgprocessor
+
+import "git.imall.cloud/openim/protocol/constant"
+
+type (
+ Options map[string]bool
+ OptionsOpt func(Options)
+)
+
+func NewOptions(opts ...OptionsOpt) Options {
+ options := make(map[string]bool, 11)
+ options[constant.IsNotNotification] = false
+ options[constant.IsSendMsg] = false
+ options[constant.IsHistory] = false
+ options[constant.IsPersistent] = false
+ options[constant.IsOfflinePush] = false
+ options[constant.IsUnreadCount] = false
+ options[constant.IsConversationUpdate] = false
+ options[constant.IsSenderSync] = true
+ options[constant.IsNotPrivate] = false
+ options[constant.IsSenderConversationUpdate] = false
+ options[constant.IsReactionFromCache] = false
+ for _, opt := range opts {
+ opt(options)
+ }
+
+ return options
+}
+
+func NewMsgOptions() Options {
+	options := make(map[string]bool, 1)
+	options[constant.IsOfflinePush] = false
+	return options
+}
+
+func WithOptions(options Options, opts ...OptionsOpt) Options {
+ for _, opt := range opts {
+ opt(options)
+ }
+ return options
+}
+
+func WithNotNotification(b bool) OptionsOpt {
+ return func(options Options) {
+ options[constant.IsNotNotification] = b
+ }
+}
+
+func WithSendMsg(b bool) OptionsOpt {
+ return func(options Options) {
+ options[constant.IsSendMsg] = b
+ }
+}
+
+func WithHistory(b bool) OptionsOpt {
+ return func(options Options) {
+ options[constant.IsHistory] = b
+ }
+}
+
+func WithPersistent() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsPersistent] = true
+ }
+}
+
+func WithOfflinePush(b bool) OptionsOpt {
+ return func(options Options) {
+ options[constant.IsOfflinePush] = b
+ }
+}
+
+func WithUnreadCount(b bool) OptionsOpt {
+ return func(options Options) {
+ options[constant.IsUnreadCount] = b
+ }
+}
+
+func WithConversationUpdate() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsConversationUpdate] = true
+ }
+}
+
+func WithSenderSync() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsSenderSync] = true
+ }
+}
+
+func WithNotPrivate() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsNotPrivate] = true
+ }
+}
+
+func WithSenderConversationUpdate() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsSenderConversationUpdate] = true
+ }
+}
+
+func WithReactionFromCache() OptionsOpt {
+ return func(options Options) {
+ options[constant.IsReactionFromCache] = true
+ }
+}
+
+// Is reports whether the given option is enabled. Options absent from the
+// map default to enabled, so only an explicit false disables a behavior.
+func (o Options) Is(notification string) bool {
+	v, ok := o[notification]
+	return !ok || v
+}
+
+func (o Options) IsNotNotification() bool {
+ return o.Is(constant.IsNotNotification)
+}
+
+func (o Options) IsSendMsg() bool {
+ return o.Is(constant.IsSendMsg)
+}
+
+func (o Options) IsHistory() bool {
+ return o.Is(constant.IsHistory)
+}
+
+func (o Options) IsPersistent() bool {
+ return o.Is(constant.IsPersistent)
+}
+
+func (o Options) IsOfflinePush() bool {
+ return o.Is(constant.IsOfflinePush)
+}
+
+func (o Options) IsUnreadCount() bool {
+ return o.Is(constant.IsUnreadCount)
+}
+
+func (o Options) IsConversationUpdate() bool {
+ return o.Is(constant.IsConversationUpdate)
+}
+
+func (o Options) IsSenderSync() bool {
+ return o.Is(constant.IsSenderSync)
+}
+
+func (o Options) IsNotPrivate() bool {
+ return o.Is(constant.IsNotPrivate)
+}
+
+func (o Options) IsSenderConversationUpdate() bool {
+ return o.Is(constant.IsSenderConversationUpdate)
+}
+
+func (o Options) IsReactionFromCache() bool {
+ return o.Is(constant.IsReactionFromCache)
+}
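The key property of `Options` above is its default-true semantics: a key that is missing from the map counts as enabled, so only an explicit `false` turns a behavior off. A standalone sketch (the key names are illustrative):

```go
package main

import "fmt"

// Options maps option names to flags; absent keys default to enabled.
type Options map[string]bool

// Is mirrors the accessor above: true unless the key is present and false.
func (o Options) Is(key string) bool {
	v, ok := o[key]
	return !ok || v
}

func main() {
	o := Options{"history": false}
	fmt.Println(o.Is("history"))     // false: explicitly disabled
	fmt.Println(o.Is("offlinePush")) // true: absent keys default to enabled
}
```

This is why `NewOptions` pre-seeds every known key with `false`: starting from an empty map would leave everything implicitly enabled.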
diff --git a/pkg/notification/common_user/common.go b/pkg/notification/common_user/common.go
new file mode 100644
index 0000000..7bc775b
--- /dev/null
+++ b/pkg/notification/common_user/common.go
@@ -0,0 +1,31 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package common_user
+
+type CommonUser interface {
+ GetNickname() string
+ GetFaceURL() string
+ GetUserID() string
+ GetEx() string
+ GetUserType() int32
+ GetUserFlag() string
+}
+
+type CommonGroup interface {
+ GetNickname() string
+ GetFaceURL() string
+ GetGroupID() string
+ GetEx() string
+}
diff --git a/pkg/notification/grouphash/grouphash.go b/pkg/notification/grouphash/grouphash.go
new file mode 100644
index 0000000..e87e30f
--- /dev/null
+++ b/pkg/notification/grouphash/grouphash.go
@@ -0,0 +1,102 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package grouphash
+
+import (
+ "context"
+ "crypto/md5"
+ "encoding/binary"
+ "encoding/json"
+
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/utils/datautil"
+)
+
+func NewGroupHashFromGroupClient(x group.GroupClient) *GroupHash {
+ return &GroupHash{
+ getGroupAllUserIDs: func(ctx context.Context, groupID string) ([]string, error) {
+ resp, err := x.GetGroupMemberUserIDs(ctx, &group.GetGroupMemberUserIDsReq{GroupID: groupID})
+ if err != nil {
+ return nil, err
+ }
+ return resp.UserIDs, nil
+ },
+ getGroupMemberInfo: func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ resp, err := x.GetGroupMembersInfo(ctx, &group.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs})
+ if err != nil {
+ return nil, err
+ }
+ return resp.Members, nil
+ },
+ }
+}
+
+func NewGroupHashFromGroupServer(x group.GroupServer) *GroupHash {
+ return &GroupHash{
+ getGroupAllUserIDs: func(ctx context.Context, groupID string) ([]string, error) {
+ resp, err := x.GetGroupMemberUserIDs(ctx, &group.GetGroupMemberUserIDsReq{GroupID: groupID})
+ if err != nil {
+ return nil, err
+ }
+ return resp.UserIDs, nil
+ },
+ getGroupMemberInfo: func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ resp, err := x.GetGroupMembersInfo(ctx, &group.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs})
+ if err != nil {
+ return nil, err
+ }
+ return resp.Members, nil
+ },
+ }
+}
+
+type GroupHash struct {
+ getGroupAllUserIDs func(ctx context.Context, groupID string) ([]string, error)
+ getGroupMemberInfo func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error)
+}
+
+func (gh *GroupHash) GetGroupHash(ctx context.Context, groupID string) (uint64, error) {
+ userIDs, err := gh.getGroupAllUserIDs(ctx, groupID)
+ if err != nil {
+ return 0, err
+ }
+ var members []*sdkws.GroupMemberFullInfo
+ if len(userIDs) > 0 {
+ members, err = gh.getGroupMemberInfo(ctx, groupID, userIDs)
+ if err != nil {
+ return 0, err
+ }
+ datautil.Sort(userIDs, true)
+ }
+ memberMap := datautil.SliceToMap(members, func(e *sdkws.GroupMemberFullInfo) string {
+ return e.UserID
+ })
+ res := make([]*sdkws.GroupMemberFullInfo, 0, len(members))
+ for _, userID := range userIDs {
+ member, ok := memberMap[userID]
+ if !ok {
+ continue
+ }
+ member.AppMangerLevel = 0
+ res = append(res, member)
+ }
+ data, err := json.Marshal(res)
+ if err != nil {
+ return 0, err
+ }
+ sum := md5.Sum(data)
+ return binary.BigEndian.Uint64(sum[:]), nil
+}
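`GetGroupHash` fingerprints a group's membership by JSON-marshalling the sorted member list, MD5-hashing it, and taking the first 8 bytes as a big-endian uint64. A standalone sketch of just that hashing step, using plain user-ID slices in place of `GroupMemberFullInfo`:

```go
package main

import (
	"crypto/md5"
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// memberHash reproduces the hash step above on a pre-sorted ID slice:
// JSON-encode, MD5, then interpret the first 8 bytes as a big-endian uint64.
func memberHash(userIDs []string) uint64 {
	data, err := json.Marshal(userIDs)
	if err != nil {
		panic(err) // cannot happen for a []string
	}
	sum := md5.Sum(data)
	return binary.BigEndian.Uint64(sum[:])
}

func main() {
	a := memberHash([]string{"u1", "u2"})
	b := memberHash([]string{"u1", "u2"})
	c := memberHash([]string{"u1", "u2", "u3"})
	fmt.Println(a == b, a != c) // deterministic, and sensitive to membership
}
```

Sorting the IDs first (as `datautil.Sort` does above) is what makes the hash deterministic across nodes; zeroing `AppMangerLevel` similarly strips a field that may differ per viewer.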
diff --git a/pkg/notification/msg.go b/pkg/notification/msg.go
new file mode 100644
index 0000000..d704ea2
--- /dev/null
+++ b/pkg/notification/msg.go
@@ -0,0 +1,272 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package notification
+
+import (
+ "context"
+ "encoding/json"
+ "time"
+
+ "google.golang.org/protobuf/proto"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mq/memamq"
+ "github.com/openimsdk/tools/utils/idutil"
+ "github.com/openimsdk/tools/utils/jsonutil"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+func newContentTypeConf(conf *config.Notification) map[int32]config.NotificationConfig {
+ return map[int32]config.NotificationConfig{
+ // group
+ constant.GroupCreatedNotification: conf.GroupCreated,
+ constant.GroupInfoSetNotification: conf.GroupInfoSet,
+ constant.JoinGroupApplicationNotification: conf.JoinGroupApplication,
+ constant.MemberQuitNotification: conf.MemberQuit,
+ constant.GroupApplicationAcceptedNotification: conf.GroupApplicationAccepted,
+ constant.GroupApplicationRejectedNotification: conf.GroupApplicationRejected,
+ constant.GroupOwnerTransferredNotification: conf.GroupOwnerTransferred,
+ constant.MemberKickedNotification: conf.MemberKicked,
+ constant.MemberInvitedNotification: conf.MemberInvited,
+ constant.MemberEnterNotification: conf.MemberEnter,
+ constant.GroupDismissedNotification: conf.GroupDismissed,
+ constant.GroupMutedNotification: conf.GroupMuted,
+ constant.GroupCancelMutedNotification: conf.GroupCancelMuted,
+ constant.GroupMemberMutedNotification: conf.GroupMemberMuted,
+ constant.GroupMemberCancelMutedNotification: conf.GroupMemberCancelMuted,
+ constant.GroupMemberInfoSetNotification: conf.GroupMemberInfoSet,
+ constant.GroupMemberSetToAdminNotification: conf.GroupMemberSetToAdmin,
+ constant.GroupMemberSetToOrdinaryUserNotification: conf.GroupMemberSetToOrdinary,
+ constant.GroupInfoSetAnnouncementNotification: conf.GroupInfoSetAnnouncement,
+ constant.GroupInfoSetNameNotification: conf.GroupInfoSetName,
+ // user
+ constant.UserInfoUpdatedNotification: conf.UserInfoUpdated,
+ constant.UserStatusChangeNotification: conf.UserStatusChanged,
+ // friend
+ constant.FriendApplicationNotification: conf.FriendApplicationAdded,
+ constant.FriendApplicationApprovedNotification: conf.FriendApplicationApproved,
+ constant.FriendApplicationRejectedNotification: conf.FriendApplicationRejected,
+ constant.FriendAddedNotification: conf.FriendAdded,
+ constant.FriendDeletedNotification: conf.FriendDeleted,
+ constant.FriendRemarkSetNotification: conf.FriendRemarkSet,
+ constant.BlackAddedNotification: conf.BlackAdded,
+ constant.BlackDeletedNotification: conf.BlackDeleted,
+ constant.FriendInfoUpdatedNotification: conf.FriendInfoUpdated,
+ constant.FriendsInfoUpdateNotification: conf.FriendInfoUpdated, // use the same FriendInfoUpdated
+ // conversation
+ constant.ConversationChangeNotification: conf.ConversationChanged,
+ constant.ConversationUnreadNotification: conf.ConversationChanged,
+ constant.ConversationPrivateChatNotification: conf.ConversationSetPrivate,
+ // msg
+ constant.MsgRevokeNotification: {IsSendMsg: false, ReliabilityLevel: constant.ReliableNotificationNoMsg},
+ constant.HasReadReceipt: {IsSendMsg: false, ReliabilityLevel: constant.ReliableNotificationNoMsg},
+ constant.DeleteMsgsNotification: {IsSendMsg: false, ReliabilityLevel: constant.ReliableNotificationNoMsg},
+ }
+}
+
+func newSessionTypeConf() map[int32]int32 {
+ return map[int32]int32{
+ // group
+ constant.GroupCreatedNotification: constant.ReadGroupChatType,
+ constant.GroupInfoSetNotification: constant.ReadGroupChatType,
+ constant.JoinGroupApplicationNotification: constant.SingleChatType,
+ constant.MemberQuitNotification: constant.ReadGroupChatType,
+ constant.GroupApplicationAcceptedNotification: constant.SingleChatType,
+ constant.GroupApplicationRejectedNotification: constant.SingleChatType,
+ constant.GroupOwnerTransferredNotification: constant.ReadGroupChatType,
+ constant.MemberKickedNotification: constant.ReadGroupChatType,
+ constant.MemberInvitedNotification: constant.ReadGroupChatType,
+ constant.MemberEnterNotification: constant.ReadGroupChatType,
+ constant.GroupDismissedNotification: constant.ReadGroupChatType,
+ constant.GroupMutedNotification: constant.ReadGroupChatType,
+ constant.GroupCancelMutedNotification: constant.ReadGroupChatType,
+ constant.GroupMemberMutedNotification: constant.ReadGroupChatType,
+ constant.GroupMemberCancelMutedNotification: constant.ReadGroupChatType,
+ constant.GroupMemberInfoSetNotification: constant.ReadGroupChatType,
+ constant.GroupMemberSetToAdminNotification: constant.ReadGroupChatType,
+ constant.GroupMemberSetToOrdinaryUserNotification: constant.ReadGroupChatType,
+ constant.GroupInfoSetAnnouncementNotification: constant.ReadGroupChatType,
+ constant.GroupInfoSetNameNotification: constant.ReadGroupChatType,
+ // user
+ constant.UserInfoUpdatedNotification: constant.SingleChatType,
+ constant.UserStatusChangeNotification: constant.SingleChatType,
+ // friend
+ constant.FriendApplicationNotification: constant.SingleChatType,
+ constant.FriendApplicationApprovedNotification: constant.SingleChatType,
+ constant.FriendApplicationRejectedNotification: constant.SingleChatType,
+ constant.FriendAddedNotification: constant.SingleChatType,
+ constant.FriendDeletedNotification: constant.SingleChatType,
+ constant.FriendRemarkSetNotification: constant.SingleChatType,
+ constant.BlackAddedNotification: constant.SingleChatType,
+ constant.BlackDeletedNotification: constant.SingleChatType,
+ constant.FriendInfoUpdatedNotification: constant.SingleChatType,
+ constant.FriendsInfoUpdateNotification: constant.SingleChatType,
+ // conversation
+ constant.ConversationChangeNotification: constant.SingleChatType,
+ constant.ConversationUnreadNotification: constant.SingleChatType,
+ constant.ConversationPrivateChatNotification: constant.SingleChatType,
+ // delete
+ constant.DeleteMsgsNotification: constant.SingleChatType,
+ }
+}
+
+type NotificationSender struct {
+ contentTypeConf map[int32]config.NotificationConfig
+ sessionTypeConf map[int32]int32
+ sendMsg func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error)
+ getUserInfo func(ctx context.Context, userID string) (*sdkws.UserInfo, error)
+ queue *memamq.MemoryQueue
+}
+
+func WithQueue(queue *memamq.MemoryQueue) NotificationSenderOptions {
+ return func(s *NotificationSender) {
+ s.queue = queue
+ }
+}
+
+type NotificationSenderOptions func(*NotificationSender)
+
+func WithLocalSendMsg(sendMsg func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error)) NotificationSenderOptions {
+ return func(s *NotificationSender) {
+ s.sendMsg = sendMsg
+ }
+}
+
+func WithRpcClient(sendMsg func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error)) NotificationSenderOptions {
+	return func(s *NotificationSender) {
+		s.sendMsg = sendMsg
+	}
+}
+
+func WithUserRpcClient(getUserInfo func(ctx context.Context, userID string) (*sdkws.UserInfo, error)) NotificationSenderOptions {
+ return func(s *NotificationSender) {
+ s.getUserInfo = getUserInfo
+ }
+}
+
+const (
+ notificationWorkerCount = 16
+ notificationBufferSize = 1024 * 1024 * 2
+)
+
+func NewNotificationSender(conf *config.Notification, opts ...NotificationSenderOptions) *NotificationSender {
+ notificationSender := &NotificationSender{contentTypeConf: newContentTypeConf(conf), sessionTypeConf: newSessionTypeConf()}
+ for _, opt := range opts {
+ opt(notificationSender)
+ }
+ if notificationSender.queue == nil {
+ notificationSender.queue = memamq.NewMemoryQueue(notificationWorkerCount, notificationBufferSize)
+ }
+ return notificationSender
+}
+
+type notificationOpt struct {
+ RpcGetUsername bool
+ SendMessage *bool
+}
+
+type NotificationOptions func(*notificationOpt)
+
+func WithRpcGetUserName() NotificationOptions {
+ return func(opt *notificationOpt) {
+ opt.RpcGetUsername = true
+ }
+}
+func WithSendMessage(sendMessage *bool) NotificationOptions {
+ return func(opt *notificationOpt) {
+ opt.SendMessage = sendMessage
+ }
+}
+
+func (s *NotificationSender) send(ctx context.Context, sendID, recvID string, contentType, sessionType int32, m proto.Message, opts ...NotificationOptions) {
+ ctx = context.WithoutCancel(ctx)
+ ctx, cancel := context.WithTimeout(ctx, time.Second*time.Duration(5))
+ defer cancel()
+ n := sdkws.NotificationElem{Detail: jsonutil.StructToJsonString(m)}
+ content, err := json.Marshal(&n)
+ if err != nil {
+ log.ZWarn(ctx, "json.Marshal failed", err, "sendID", sendID, "recvID", recvID, "contentType", contentType, "msg", jsonutil.StructToJsonString(m))
+ return
+ }
+	notificationOpt := &notificationOpt{}
+ for _, opt := range opts {
+ opt(notificationOpt)
+ }
+ var req msg.SendMsgReq
+ var msg sdkws.MsgData
+ var userInfo *sdkws.UserInfo
+ if notificationOpt.RpcGetUsername && s.getUserInfo != nil {
+ userInfo, err = s.getUserInfo(ctx, sendID)
+ if err != nil {
+ log.ZWarn(ctx, "getUserInfo failed", err, "sendID", sendID)
+ return
+ }
+ msg.SenderNickname = userInfo.Nickname
+ msg.SenderFaceURL = userInfo.FaceURL
+ }
+ var offlineInfo sdkws.OfflinePushInfo
+ msg.SendID = sendID
+ msg.RecvID = recvID
+ msg.Content = content
+ msg.MsgFrom = constant.SysMsgType
+ msg.ContentType = contentType
+ msg.SessionType = sessionType
+ if msg.SessionType == constant.ReadGroupChatType {
+ msg.GroupID = recvID
+ }
+ msg.CreateTime = timeutil.GetCurrentTimestampByMill()
+ msg.ClientMsgID = idutil.GetMsgIDByMD5(sendID)
+ optionsConfig := s.contentTypeConf[contentType]
+ if sendID == recvID && contentType == constant.HasReadReceipt {
+ optionsConfig.ReliabilityLevel = constant.UnreliableNotification
+ }
+ options := config.GetOptionsByNotification(optionsConfig, notificationOpt.SendMessage)
+ s.SetOptionsByContentType(ctx, options, contentType)
+ msg.Options = options
+ // fill Notification OfflinePush by config
+ offlineInfo.Title = optionsConfig.OfflinePush.Title
+ offlineInfo.Desc = optionsConfig.OfflinePush.Desc
+ offlineInfo.Ex = optionsConfig.OfflinePush.Ext
+ msg.OfflinePushInfo = &offlineInfo
+ req.MsgData = &msg
+ _, err = s.sendMsg(ctx, &req)
+ if err != nil {
+ log.ZWarn(ctx, "SendMsg failed", err, "req", req.String())
+ }
+}
+
+func (s *NotificationSender) NotificationWithSessionType(ctx context.Context, sendID, recvID string, contentType, sessionType int32, m proto.Message, opts ...NotificationOptions) {
+ if err := s.queue.Push(func() { s.send(ctx, sendID, recvID, contentType, sessionType, m, opts...) }); err != nil {
+ log.ZWarn(ctx, "Push to queue failed", err, "sendID", sendID, "recvID", recvID, "msg", jsonutil.StructToJsonString(m))
+ }
+}
+
+func (s *NotificationSender) Notification(ctx context.Context, sendID, recvID string, contentType int32, m proto.Message, opts ...NotificationOptions) {
+ s.NotificationWithSessionType(ctx, sendID, recvID, contentType, s.sessionTypeConf[contentType], m, opts...)
+}
+
+func (s *NotificationSender) SetOptionsByContentType(_ context.Context, options map[string]bool, contentType int32) {
+	switch contentType {
+	case constant.UserStatusChangeNotification:
+		options[constant.IsSenderSync] = false
+	}
+}
diff --git a/pkg/rpccache/auth.go b/pkg/rpccache/auth.go
new file mode 100644
index 0000000..4accc4f
--- /dev/null
+++ b/pkg/rpccache/auth.go
@@ -0,0 +1,69 @@
+package rpccache
+
+import (
+ "context"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/convert"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/auth"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewAuthLocalCache(client *rpcli.AuthClient, localCache *config.LocalCache, cli redis.UniversalClient) *AuthLocalCache {
+ lc := localCache.Auth
+ log.ZDebug(context.Background(), "AuthLocalCache", "topic", lc.Topic, "slotNum", lc.SlotNum, "slotSize", lc.SlotSize, "enable", lc.Enable())
+ x := &AuthLocalCache{
+ client: client,
+ local: localcache.New[[]byte](
+ localcache.WithLocalSlotNum(lc.SlotNum),
+ localcache.WithLocalSlotSize(lc.SlotSize),
+ localcache.WithLinkSlotNum(lc.SlotNum),
+ localcache.WithLocalSuccessTTL(lc.Success()),
+ localcache.WithLocalFailedTTL(lc.Failed()),
+ ),
+ }
+ if lc.Enable() {
+ go subscriberRedisDeleteCache(context.Background(), cli, lc.Topic, x.local.DelLocal)
+ }
+ return x
+}
+
+type AuthLocalCache struct {
+ client *rpcli.AuthClient
+ local localcache.Cache[[]byte]
+}
+
+func (a *AuthLocalCache) GetExistingToken(ctx context.Context, userID string, platformID int) (val map[string]int, err error) {
+ resp, err := a.getExistingToken(ctx, userID, platformID)
+ if err != nil {
+ return nil, err
+ }
+
+ res := convert.TokenMapPb2DB(resp.TokenStates)
+
+ return res, nil
+}
+
+func (a *AuthLocalCache) getExistingToken(ctx context.Context, userID string, platformID int) (val *auth.GetExistingTokenResp, err error) {
+ start := time.Now()
+ log.ZDebug(ctx, "AuthLocalCache GetExistingToken req", "userID", userID, "platformID", platformID)
+ defer func() {
+ if err != nil {
+ log.ZError(ctx, "AuthLocalCache GetExistingToken error", err, "cost", time.Since(start), "userID", userID, "platformID", platformID)
+ } else {
+ log.ZDebug(ctx, "AuthLocalCache GetExistingToken resp", "cost", time.Since(start), "userID", userID, "platformID", platformID, "val", val)
+ }
+ }()
+
+ var cache cacheProto[auth.GetExistingTokenResp]
+
+ return cache.Unmarshal(a.local.Get(ctx, cachekey.GetTokenKey(userID, platformID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "AuthLocalCache GetExistingToken call rpc", "userID", userID, "platformID", platformID)
+ return cache.Marshal(a.client.AuthClient.GetExistingToken(ctx, &auth.GetExistingTokenReq{UserID: userID, PlatformID: int32(platformID)}))
+ }))
+}
diff --git a/pkg/rpccache/common.go b/pkg/rpccache/common.go
new file mode 100644
index 0000000..15b3a8e
--- /dev/null
+++ b/pkg/rpccache/common.go
@@ -0,0 +1,77 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "github.com/openimsdk/tools/errs"
+ "google.golang.org/protobuf/proto"
+)
+
+func newListMap[V comparable](values []V, err error) (*listMap[V], error) {
+ if err != nil {
+ return nil, err
+ }
+ lm := &listMap[V]{
+ List: values,
+ Map: make(map[V]struct{}, len(values)),
+ }
+ for _, value := range values {
+ lm.Map[value] = struct{}{}
+ }
+ return lm, nil
+}
+
+type listMap[V comparable] struct {
+ List []V
+ Map map[V]struct{}
+}
+
+func respProtoMarshal(resp proto.Message, err error) ([]byte, error) {
+ if err != nil {
+ return nil, err
+ }
+ return proto.Marshal(resp)
+}
+
+func cacheUnmarshal[V any](resp []byte, err error) (*V, error) {
+ if err != nil {
+ return nil, err
+ }
+ var val V
+ if err := proto.Unmarshal(resp, any(&val).(proto.Message)); err != nil {
+ return nil, errs.WrapMsg(err, "local cache proto.Unmarshal error")
+ }
+ return &val, nil
+}
+
+type cacheProto[V any] struct{}
+
+func (cacheProto[V]) Marshal(resp *V, err error) ([]byte, error) {
+ if err != nil {
+ return nil, err
+ }
+ return proto.Marshal(any(resp).(proto.Message))
+}
+
+func (cacheProto[V]) Unmarshal(resp []byte, err error) (*V, error) {
+ if err != nil {
+ return nil, err
+ }
+ var val V
+ if err := proto.Unmarshal(resp, any(&val).(proto.Message)); err != nil {
+ return nil, errs.WrapMsg(err, "local cache proto.Unmarshal error")
+ }
+ return &val, nil
+}
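The `cacheProto` helper above threads a trailing error through `Marshal`/`Unmarshal`, which is what lets the callers chain an RPC call, a cache lookup, and a decode in a single expression. A minimal, self-contained sketch of that same pattern, using `encoding/json` in place of protobuf so it runs standalone (`cacheCodec`, `resp`, and `roundTrip` are illustrative names, not part of this package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cacheCodec mirrors cacheProto: both methods accept a trailing error so
// upstream failures short-circuit without intermediate error checks.
type cacheCodec[V any] struct{}

func (cacheCodec[V]) Marshal(v *V, err error) ([]byte, error) {
	if err != nil {
		return nil, err // propagate the upstream (e.g. RPC) error unchanged
	}
	return json.Marshal(v)
}

func (cacheCodec[V]) Unmarshal(data []byte, err error) (*V, error) {
	if err != nil {
		return nil, err // propagate the cache-lookup error unchanged
	}
	var val V
	if err := json.Unmarshal(data, &val); err != nil {
		return nil, err
	}
	return &val, nil
}

type resp struct{ UserIDs []string }

// roundTrip chains Marshal into Unmarshal the way the callers above chain
// cache.Marshal(rpc(...)) inside cache.Unmarshal(local.Get(...)).
func roundTrip() (*resp, error) {
	var codec cacheCodec[resp]
	return codec.Unmarshal(codec.Marshal(&resp{UserIDs: []string{"u1", "u2"}}, nil))
}

func main() {
	out, err := roundTrip()
	fmt.Println(out.UserIDs, err) // prints: [u1 u2] <nil>
}
```

The payoff of the `(value, error)` signature is that a failed RPC never reaches the marshal step, and a failed cache read never reaches the unmarshal step, so the hot path stays a one-liner.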
diff --git a/pkg/rpccache/conversation.go b/pkg/rpccache/conversation.go
new file mode 100644
index 0000000..588f02b
--- /dev/null
+++ b/pkg/rpccache/conversation.go
@@ -0,0 +1,195 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ pbconversation "git.imall.cloud/openim/protocol/conversation"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/utils/datautil"
+ "github.com/redis/go-redis/v9"
+ "golang.org/x/sync/errgroup"
+)
+
+const (
+ conversationWorkerCount = 20
+)
+
+func NewConversationLocalCache(client *rpcli.ConversationClient, localCache *config.LocalCache, cli redis.UniversalClient) *ConversationLocalCache {
+ lc := localCache.Conversation
+ log.ZDebug(context.Background(), "ConversationLocalCache", "topic", lc.Topic, "slotNum", lc.SlotNum, "slotSize", lc.SlotSize, "enable", lc.Enable())
+ x := &ConversationLocalCache{
+ client: client,
+ local: localcache.New[[]byte](
+ localcache.WithLocalSlotNum(lc.SlotNum),
+ localcache.WithLocalSlotSize(lc.SlotSize),
+ localcache.WithLinkSlotNum(lc.SlotNum),
+ localcache.WithLocalSuccessTTL(lc.Success()),
+ localcache.WithLocalFailedTTL(lc.Failed()),
+ ),
+ }
+ if lc.Enable() {
+ go subscriberRedisDeleteCache(context.Background(), cli, lc.Topic, x.local.DelLocal)
+ }
+ return x
+}
+
+type ConversationLocalCache struct {
+ client *rpcli.ConversationClient
+ local localcache.Cache[[]byte]
+}
+
+func (c *ConversationLocalCache) GetConversationIDs(ctx context.Context, ownerUserID string) (val []string, err error) {
+ resp, err := c.getConversationIDs(ctx, ownerUserID)
+ if err != nil {
+ return nil, err
+ }
+ return resp.ConversationIDs, nil
+}
+
+func (c *ConversationLocalCache) getConversationIDs(ctx context.Context, ownerUserID string) (val *pbconversation.GetConversationIDsResp, err error) {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationIDs req", "ownerUserID", ownerUserID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationIDs return", "ownerUserID", ownerUserID, "value", val)
+ } else {
+ log.ZError(ctx, "ConversationLocalCache getConversationIDs return", err, "ownerUserID", ownerUserID)
+ }
+ }()
+ var cache cacheProto[pbconversation.GetConversationIDsResp]
+ return cache.Unmarshal(c.local.Get(ctx, cachekey.GetConversationIDsKey(ownerUserID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationIDs rpc", "ownerUserID", ownerUserID)
+ return cache.Marshal(c.client.ConversationClient.GetConversationIDs(ctx, &pbconversation.GetConversationIDsReq{UserID: ownerUserID}))
+ }))
+}
+
+func (c *ConversationLocalCache) GetConversation(ctx context.Context, userID, conversationID string) (val *pbconversation.Conversation, err error) {
+ log.ZDebug(ctx, "ConversationLocalCache GetConversation req", "userID", userID, "conversationID", conversationID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "ConversationLocalCache GetConversation return", "userID", userID, "conversationID", conversationID, "value", val)
+ } else {
+ log.ZWarn(ctx, "ConversationLocalCache GetConversation return", err, "userID", userID, "conversationID", conversationID)
+ }
+ }()
+ var cache cacheProto[pbconversation.Conversation]
+ return cache.Unmarshal(c.local.Get(ctx, cachekey.GetConversationKey(userID, conversationID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "ConversationLocalCache GetConversation rpc", "userID", userID, "conversationID", conversationID)
+ return cache.Marshal(c.client.GetConversation(ctx, conversationID, userID))
+ }))
+}
+
+func (c *ConversationLocalCache) GetSingleConversationRecvMsgOpt(ctx context.Context, userID, conversationID string) (int32, error) {
+ conv, err := c.GetConversation(ctx, userID, conversationID)
+ if err != nil {
+ return 0, err
+ }
+ return conv.RecvMsgOpt, nil
+}
+
+func (c *ConversationLocalCache) GetConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*pbconversation.Conversation, error) {
+ var (
+ conversations = make([]*pbconversation.Conversation, 0, len(conversationIDs))
+ conversationsChan = make(chan *pbconversation.Conversation, len(conversationIDs))
+ )
+
+ g, ctx := errgroup.WithContext(ctx)
+ g.SetLimit(conversationWorkerCount)
+
+ for _, conversationID := range conversationIDs {
+ conversationID := conversationID
+ g.Go(func() error {
+ conversation, err := c.GetConversation(ctx, ownerUserID, conversationID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ return nil
+ }
+ return err
+ }
+ conversationsChan <- conversation
+ return nil
+ })
+ }
+ if err := g.Wait(); err != nil {
+ return nil, err
+ }
+ close(conversationsChan)
+ for conversation := range conversationsChan {
+ conversations = append(conversations, conversation)
+ }
+ return conversations, nil
+}
+
+func (c *ConversationLocalCache) getConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) (val *pbconversation.GetConversationNotReceiveMessageUserIDsResp, err error) {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationNotReceiveMessageUserIDs req", "conversationID", conversationID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationNotReceiveMessageUserIDs return", "conversationID", conversationID, "value", val)
+ } else {
+ log.ZError(ctx, "ConversationLocalCache getConversationNotReceiveMessageUserIDs return", err, "conversationID", conversationID)
+ }
+ }()
+ var cache cacheProto[pbconversation.GetConversationNotReceiveMessageUserIDsResp]
+ return cache.Unmarshal(c.local.Get(ctx, cachekey.GetConversationNotReceiveMessageUserIDsKey(conversationID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "ConversationLocalCache getConversationNotReceiveMessageUserIDs rpc", "conversationID", conversationID)
+ return cache.Marshal(c.client.ConversationClient.GetConversationNotReceiveMessageUserIDs(ctx, &pbconversation.GetConversationNotReceiveMessageUserIDsReq{ConversationID: conversationID}))
+ }))
+}
+
+func (c *ConversationLocalCache) getPinnedConversationIDs(ctx context.Context, userID string) (val []string, err error) {
+	log.ZDebug(ctx, "ConversationLocalCache getPinnedConversationIDs req", "userID", userID)
+	defer func() {
+		if err == nil {
+			log.ZDebug(ctx, "ConversationLocalCache getPinnedConversationIDs return", "userID", userID, "value", val)
+		} else {
+			log.ZError(ctx, "ConversationLocalCache getPinnedConversationIDs return", err, "userID", userID)
+		}
+	}()
+	var cache cacheProto[pbconversation.GetPinnedConversationIDsResp]
+	resp, err := cache.Unmarshal(c.local.Get(ctx, cachekey.GetPinnedConversationIDs(userID), func(ctx context.Context) ([]byte, error) {
+		log.ZDebug(ctx, "ConversationLocalCache getPinnedConversationIDs rpc", "userID", userID)
+		return cache.Marshal(c.client.ConversationClient.GetPinnedConversationIDs(ctx, &pbconversation.GetPinnedConversationIDsReq{UserID: userID}))
+	}))
+	if err != nil {
+		return nil, err
+	}
+	return resp.ConversationIDs, nil
+}
+
+func (c *ConversationLocalCache) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
+ res, err := c.getConversationNotReceiveMessageUserIDs(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ return res.UserIDs, nil
+}
+
+func (c *ConversationLocalCache) GetConversationNotReceiveMessageUserIDMap(ctx context.Context, conversationID string) (map[string]struct{}, error) {
+ res, err := c.getConversationNotReceiveMessageUserIDs(ctx, conversationID)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.SliceSet(res.UserIDs), nil
+}
+
+func (c *ConversationLocalCache) GetPinnedConversationIDs(ctx context.Context, userID string) ([]string, error) {
+ return c.getPinnedConversationIDs(ctx, userID)
+}
diff --git a/pkg/rpccache/doc.go b/pkg/rpccache/doc.go
new file mode 100644
index 0000000..c2e818e
--- /dev/null
+++ b/pkg/rpccache/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache // import "git.imall.cloud/openim/open-im-server-deploy/pkg/rpccache"
diff --git a/pkg/rpccache/friend.go b/pkg/rpccache/friend.go
new file mode 100644
index 0000000..2a97f87
--- /dev/null
+++ b/pkg/rpccache/friend.go
@@ -0,0 +1,102 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/relation"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewFriendLocalCache(client *rpcli.RelationClient, localCache *config.LocalCache, cli redis.UniversalClient) *FriendLocalCache {
+ lc := localCache.Friend
+ log.ZDebug(context.Background(), "FriendLocalCache", "topic", lc.Topic, "slotNum", lc.SlotNum, "slotSize", lc.SlotSize, "enable", lc.Enable())
+ x := &FriendLocalCache{
+ client: client,
+ local: localcache.New[[]byte](
+ localcache.WithLocalSlotNum(lc.SlotNum),
+ localcache.WithLocalSlotSize(lc.SlotSize),
+ localcache.WithLinkSlotNum(lc.SlotNum),
+ localcache.WithLocalSuccessTTL(lc.Success()),
+ localcache.WithLocalFailedTTL(lc.Failed()),
+ ),
+ }
+ if lc.Enable() {
+ go subscriberRedisDeleteCache(context.Background(), cli, lc.Topic, x.local.DelLocal)
+ }
+ return x
+}
+
+type FriendLocalCache struct {
+ client *rpcli.RelationClient
+ local localcache.Cache[[]byte]
+}
+
+func (f *FriendLocalCache) IsFriend(ctx context.Context, possibleFriendUserID, userID string) (val bool, err error) {
+ res, err := f.isFriend(ctx, possibleFriendUserID, userID)
+ if err != nil {
+ return false, err
+ }
+ return res.InUser1Friends, nil
+}
+
+func (f *FriendLocalCache) isFriend(ctx context.Context, possibleFriendUserID, userID string) (val *relation.IsFriendResp, err error) {
+ log.ZDebug(ctx, "FriendLocalCache isFriend req", "possibleFriendUserID", possibleFriendUserID, "userID", userID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "FriendLocalCache isFriend return", "possibleFriendUserID", possibleFriendUserID, "userID", userID, "value", val)
+ } else {
+ log.ZError(ctx, "FriendLocalCache isFriend return", err, "possibleFriendUserID", possibleFriendUserID, "userID", userID)
+ }
+ }()
+ var cache cacheProto[relation.IsFriendResp]
+ return cache.Unmarshal(f.local.GetLink(ctx, cachekey.GetIsFriendKey(possibleFriendUserID, userID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "FriendLocalCache isFriend rpc", "possibleFriendUserID", possibleFriendUserID, "userID", userID)
+ return cache.Marshal(f.client.FriendClient.IsFriend(ctx, &relation.IsFriendReq{UserID1: userID, UserID2: possibleFriendUserID}))
+ }, cachekey.GetFriendIDsKey(possibleFriendUserID)))
+}
+
+// IsBlack reports whether possibleBlackUserID is in userID's blacklist.
+func (f *FriendLocalCache) IsBlack(ctx context.Context, possibleBlackUserID, userID string) (val bool, err error) {
+ res, err := f.isBlack(ctx, possibleBlackUserID, userID)
+ if err != nil {
+ return false, err
+ }
+ return res.InUser2Blacks, nil
+}
+
+// isBlack fetches and caches the raw IsBlackResp for possibleBlackUserID and userID.
+func (f *FriendLocalCache) isBlack(ctx context.Context, possibleBlackUserID, userID string) (val *relation.IsBlackResp, err error) {
+ log.ZDebug(ctx, "FriendLocalCache isBlack req", "possibleBlackUserID", possibleBlackUserID, "userID", userID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "FriendLocalCache isBlack return", "possibleBlackUserID", possibleBlackUserID, "userID", userID, "value", val)
+ } else {
+ log.ZError(ctx, "FriendLocalCache isBlack return", err, "possibleBlackUserID", possibleBlackUserID, "userID", userID)
+ }
+ }()
+ var cache cacheProto[relation.IsBlackResp]
+ return cache.Unmarshal(f.local.GetLink(ctx, cachekey.GetIsBlackIDsKey(possibleBlackUserID, userID), func(ctx context.Context) ([]byte, error) {
+		log.ZDebug(ctx, "FriendLocalCache isBlack rpc", "possibleBlackUserID", possibleBlackUserID, "userID", userID)
+ return cache.Marshal(f.client.FriendClient.IsBlack(ctx, &relation.IsBlackReq{UserID1: possibleBlackUserID, UserID2: userID}))
+ }, cachekey.GetBlackIDsKey(userID)))
+}
diff --git a/pkg/rpccache/group.go b/pkg/rpccache/group.go
new file mode 100644
index 0000000..133dadc
--- /dev/null
+++ b/pkg/rpccache/group.go
@@ -0,0 +1,190 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/group"
+ "github.com/openimsdk/tools/utils/datautil"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+ "golang.org/x/sync/errgroup"
+)
+
+const (
+ groupWorkerCount = 20
+)
+
+func NewGroupLocalCache(client *rpcli.GroupClient, localCache *config.LocalCache, cli redis.UniversalClient) *GroupLocalCache {
+ lc := localCache.Group
+ log.ZDebug(context.Background(), "GroupLocalCache", "topic", lc.Topic, "slotNum", lc.SlotNum, "slotSize", lc.SlotSize, "enable", lc.Enable())
+ x := &GroupLocalCache{
+ client: client,
+ local: localcache.New[[]byte](
+ localcache.WithLocalSlotNum(lc.SlotNum),
+ localcache.WithLocalSlotSize(lc.SlotSize),
+ localcache.WithLinkSlotNum(lc.SlotNum),
+ localcache.WithLocalSuccessTTL(lc.Success()),
+ localcache.WithLocalFailedTTL(lc.Failed()),
+ ),
+ }
+ if lc.Enable() {
+ go subscriberRedisDeleteCache(context.Background(), cli, lc.Topic, x.local.DelLocal)
+ }
+ return x
+}
+
+type GroupLocalCache struct {
+ client *rpcli.GroupClient
+ local localcache.Cache[[]byte]
+}
+
+func (g *GroupLocalCache) getGroupMemberIDs(ctx context.Context, groupID string) (val *group.GetGroupMemberUserIDsResp, err error) {
+ log.ZDebug(ctx, "GroupLocalCache getGroupMemberIDs req", "groupID", groupID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "GroupLocalCache getGroupMemberIDs return", "groupID", groupID, "value", val)
+ } else {
+ log.ZError(ctx, "GroupLocalCache getGroupMemberIDs return", err, "groupID", groupID)
+ }
+ }()
+ var cache cacheProto[group.GetGroupMemberUserIDsResp]
+ return cache.Unmarshal(g.local.Get(ctx, cachekey.GetGroupMemberIDsKey(groupID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "GroupLocalCache getGroupMemberIDs rpc", "groupID", groupID)
+ return cache.Marshal(g.client.GroupClient.GetGroupMemberUserIDs(ctx, &group.GetGroupMemberUserIDsReq{GroupID: groupID}))
+ }))
+}
+
+func (g *GroupLocalCache) GetGroupMember(ctx context.Context, groupID, userID string) (val *sdkws.GroupMemberFullInfo, err error) {
+	log.ZDebug(ctx, "GroupLocalCache GetGroupMember req", "groupID", groupID, "userID", userID)
+	defer func() {
+		if err == nil {
+			log.ZDebug(ctx, "GroupLocalCache GetGroupMember return", "groupID", groupID, "userID", userID, "value", val)
+		} else {
+			log.ZError(ctx, "GroupLocalCache GetGroupMember return", err, "groupID", groupID, "userID", userID)
+		}
+	}()
+	var cache cacheProto[sdkws.GroupMemberFullInfo]
+	return cache.Unmarshal(g.local.Get(ctx, cachekey.GetGroupMemberInfoKey(groupID, userID), func(ctx context.Context) ([]byte, error) {
+		log.ZDebug(ctx, "GroupLocalCache GetGroupMember rpc", "groupID", groupID, "userID", userID)
+		return cache.Marshal(g.client.GetGroupMemberCache(ctx, groupID, userID))
+	}))
+}
+
+func (g *GroupLocalCache) GetGroupInfo(ctx context.Context, groupID string) (val *sdkws.GroupInfo, err error) {
+ log.ZDebug(ctx, "GroupLocalCache GetGroupInfo req", "groupID", groupID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "GroupLocalCache GetGroupInfo return", "groupID", groupID, "value", val)
+ } else {
+ log.ZError(ctx, "GroupLocalCache GetGroupInfo return", err, "groupID", groupID)
+ }
+ }()
+ var cache cacheProto[sdkws.GroupInfo]
+ return cache.Unmarshal(g.local.Get(ctx, cachekey.GetGroupInfoKey(groupID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "GroupLocalCache GetGroupInfo rpc", "groupID", groupID)
+ return cache.Marshal(g.client.GetGroupInfoCache(ctx, groupID))
+ }))
+}
+
+func (g *GroupLocalCache) GetGroupMemberIDs(ctx context.Context, groupID string) ([]string, error) {
+ res, err := g.getGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ return res.UserIDs, nil
+}
+
+func (g *GroupLocalCache) GetGroupMemberIDMap(ctx context.Context, groupID string) (map[string]struct{}, error) {
+ res, err := g.getGroupMemberIDs(ctx, groupID)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.SliceSet(res.UserIDs), nil
+}
+
+func (g *GroupLocalCache) GetGroupInfos(ctx context.Context, groupIDs []string) ([]*sdkws.GroupInfo, error) {
+ if len(groupIDs) == 0 {
+ return nil, nil
+ }
+ var (
+ groupInfos = make([]*sdkws.GroupInfo, 0, len(groupIDs))
+ groupInfosChan = make(chan *sdkws.GroupInfo, len(groupIDs))
+ )
+
+ eg, ctx := errgroup.WithContext(ctx)
+ eg.SetLimit(groupWorkerCount)
+
+ for _, groupID := range groupIDs {
+ groupID := groupID
+ eg.Go(func() error {
+ groupInfo, err := g.GetGroupInfo(ctx, groupID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ return nil
+ }
+ return err
+ }
+ groupInfosChan <- groupInfo
+ return nil
+ })
+ }
+ if err := eg.Wait(); err != nil {
+ return nil, err
+ }
+ close(groupInfosChan)
+ for groupInfo := range groupInfosChan {
+ groupInfos = append(groupInfos, groupInfo)
+ }
+ return groupInfos, nil
+}
+
+func (g *GroupLocalCache) GetGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ members := make([]*sdkws.GroupMemberFullInfo, 0, len(userIDs))
+ for _, userID := range userIDs {
+ member, err := g.GetGroupMember(ctx, groupID, userID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ continue
+ }
+ return nil, err
+ }
+ members = append(members, member)
+ }
+ return members, nil
+}
+
+func (g *GroupLocalCache) GetGroupMemberInfoMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
+ members := make(map[string]*sdkws.GroupMemberFullInfo)
+ for _, userID := range userIDs {
+ member, err := g.GetGroupMember(ctx, groupID, userID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ continue
+ }
+ return nil, err
+ }
+ members[userID] = member
+ }
+ return members, nil
+}
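`GetGroupInfos` above (like `GetConversations` in conversation.go) fans lookups out with a fixed worker limit and collects hits over a channel buffered to the input size, so sends never block, while `ErrRecordNotFound` entries are silently skipped. The shape of that pattern, sketched stdlib-only with a semaphore channel standing in for errgroup's `SetLimit` (`fanOut` and `fetch` are illustrative names):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs at most `workers` concurrent fetches and gathers successful
// results over a channel sized to the input, mirroring GetGroupInfos.
// fetch returning ok=false models the skipped not-found case.
func fanOut(ids []string, workers int, fetch func(string) (string, bool)) []string {
	var (
		wg      sync.WaitGroup
		sem     = make(chan struct{}, workers)      // concurrency limiter
		results = make(chan string, len(ids))       // buffered: senders never block
	)
	for _, id := range ids {
		wg.Add(1)
		sem <- struct{}{} // acquire a worker slot
		go func(id string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if v, ok := fetch(id); ok {
				results <- v
			}
		}(id)
	}
	wg.Wait()
	close(results) // safe: all senders have finished
	out := make([]string, 0, len(ids))
	for v := range results {
		out = append(out, v)
	}
	return out
}

func main() {
	got := fanOut([]string{"a", "b", "missing"}, 2, func(id string) (string, bool) {
		if id == "missing" {
			return "", false // skipped, like ErrRecordNotFound
		}
		return "info:" + id, true
	})
	fmt.Println(len(got)) // prints: 2 (result order is nondeterministic)
}
```

Closing the channel only after `Wait` returns is what makes the drain loop at the end safe; the buffered capacity guarantees no worker is still blocked on a send at that point.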
diff --git a/pkg/rpccache/online.go b/pkg/rpccache/online.go
new file mode 100644
index 0000000..887d8ee
--- /dev/null
+++ b/pkg/rpccache/online.go
@@ -0,0 +1,346 @@
+package rpccache
+
+import (
+ "context"
+ "fmt"
+ "math/rand"
+ "strconv"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/user"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache/lru"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/util/useronline"
+ "github.com/openimsdk/tools/db/cacheutil"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/mcontext"
+ "github.com/redis/go-redis/v9"
+)
+
+const (
+ Begin uint32 = iota
+ DoOnlineStatusOver
+ DoSubscribeOver
+)
+
+type OnlineCache interface {
+ GetUserOnlinePlatform(ctx context.Context, userID string) ([]int32, error)
+ GetUserOnline(ctx context.Context, userID string) (bool, error)
+ GetUsersOnline(ctx context.Context, userIDs []string) ([]string, []string, error)
+ WaitCache()
+}
+
+func NewOnlineCache(client *rpcli.UserClient, group *GroupLocalCache, rdb redis.UniversalClient, fullUserCache bool, fn func(ctx context.Context, userID string, platformIDs []int32)) (OnlineCache, error) {
+ if config.Standalone() {
+ return disableOnlineCache{client: client}, nil
+ }
+ l := &sync.Mutex{}
+ x := &defaultOnlineCache{
+ client: client,
+ group: group,
+ fullUserCache: fullUserCache,
+ Lock: l,
+ Cond: sync.NewCond(l),
+ }
+
+ ctx := mcontext.SetOperationID(context.TODO(), strconv.FormatInt(time.Now().UnixNano()+int64(rand.Uint32()), 10))
+
+ switch x.fullUserCache {
+ case true:
+ log.ZDebug(ctx, "fullUserCache is true")
+ x.mapCache = cacheutil.NewCache[string, []int32]()
+ go func() {
+ if err := x.initUsersOnlineStatus(ctx); err != nil {
+ log.ZError(ctx, "initUsersOnlineStatus failed", err)
+ }
+ }()
+ case false:
+ log.ZDebug(ctx, "fullUserCache is false")
+ x.lruCache = lru.NewSlotLRU(1024, localcache.LRUStringHash, func() lru.LRU[string, []int32] {
+ return lru.NewLazyLRU[string, []int32](2048, cachekey.OnlineExpire/2, time.Second*3, localcache.EmptyTarget{}, func(key string, value []int32) {})
+ })
+ x.CurrentPhase.Store(DoSubscribeOver)
+ x.Cond.Broadcast()
+ }
+ if rdb != nil {
+ go func() {
+ x.doSubscribe(ctx, rdb, fn)
+ }()
+ }
+ return x, nil
+}
+
+type defaultOnlineCache struct {
+ client *rpcli.UserClient
+ group *GroupLocalCache
+
+	// fullUserCache, when enabled, caches the online status of every user in mapCache;
+	// otherwise only the users that have been queried (whether online or not) are cached in lruCache.
+ fullUserCache bool
+
+ lruCache lru.LRU[string, []int32]
+ mapCache *cacheutil.Cache[string, []int32]
+
+ Lock *sync.Mutex
+ Cond *sync.Cond
+ CurrentPhase atomic.Uint32
+}
+
+func (o *defaultOnlineCache) initUsersOnlineStatus(ctx context.Context) (err error) {
+ log.ZDebug(ctx, "init users online status begin")
+
+ var (
+ totalSet atomic.Int64
+ maxTries = 5
+ retryInterval = time.Second * 5
+
+ resp *user.GetAllOnlineUsersResp
+ )
+
+ defer func(t time.Time) {
+ log.ZInfo(ctx, "init users online status end", "cost", time.Since(t), "totalSet", totalSet.Load())
+ o.CurrentPhase.Store(DoOnlineStatusOver)
+ o.Cond.Broadcast()
+ }(time.Now())
+
+ retryOperation := func(operation func() error, operationName string) error {
+ for i := 0; i < maxTries; i++ {
+ if err = operation(); err != nil {
+ log.ZWarn(ctx, fmt.Sprintf("initUsersOnlineStatus: %s failed", operationName), err)
+ time.Sleep(retryInterval)
+ } else {
+ return nil
+ }
+ }
+ return err
+ }
+
+ cursor := uint64(0)
+ for resp == nil || resp.NextCursor != 0 {
+ if err = retryOperation(func() error {
+ resp, err = o.client.GetAllOnlineUsers(ctx, cursor)
+ if err != nil {
+ return err
+ }
+
+ for _, u := range resp.StatusList {
+ if u.Status == constant.Online {
+ o.setUserOnline(u.UserID, u.PlatformIDs)
+ }
+ totalSet.Add(1)
+ }
+ cursor = resp.NextCursor
+ return nil
+ }, "getAllOnlineUsers"); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (o *defaultOnlineCache) doSubscribe(ctx context.Context, rdb redis.UniversalClient, fn func(ctx context.Context, userID string, platformIDs []int32)) {
+ o.Lock.Lock()
+ ch := rdb.Subscribe(ctx, cachekey.OnlineChannel).Channel()
+ for o.CurrentPhase.Load() < DoOnlineStatusOver {
+ o.Cond.Wait()
+ }
+ o.Lock.Unlock()
+ log.ZInfo(ctx, "begin doSubscribe")
+
+ doMessage := func(message *redis.Message) {
+ userID, platformIDs, err := useronline.ParseUserOnlineStatus(message.Payload)
+ if err != nil {
+ log.ZError(ctx, "OnlineCache setHasUserOnline redis subscribe parseUserOnlineStatus", err, "payload", message.Payload, "channel", message.Channel)
+ return
+ }
+		log.ZDebug(ctx, fmt.Sprintf("get subscribe %s message", cachekey.OnlineChannel), "userID", userID, "platformIDs", platformIDs)
+ switch o.fullUserCache {
+ case true:
+ if len(platformIDs) == 0 {
+ // offline
+ o.mapCache.Delete(userID)
+ } else {
+ o.mapCache.Store(userID, platformIDs)
+ }
+ case false:
+ storageCache := o.setHasUserOnline(userID, platformIDs)
+ log.ZDebug(ctx, "OnlineCache setHasUserOnline", "userID", userID, "platformIDs", platformIDs, "payload", message.Payload, "storageCache", storageCache)
+ if fn != nil {
+ fn(ctx, userID, platformIDs)
+ }
+ }
+ }
+
+ if o.CurrentPhase.Load() == DoOnlineStatusOver {
+ for done := false; !done; {
+ select {
+ case message := <-ch:
+ doMessage(message)
+ default:
+ o.CurrentPhase.Store(DoSubscribeOver)
+ o.Cond.Broadcast()
+ done = true
+ }
+ }
+ }
+
+ for message := range ch {
+ doMessage(message)
+ }
+}
+
+func (o *defaultOnlineCache) getUserOnlinePlatform(ctx context.Context, userID string) ([]int32, error) {
+ platformIDs, err := o.lruCache.Get(userID, func() ([]int32, error) {
+ return o.client.GetUserOnlinePlatform(ctx, userID)
+ })
+ if err != nil {
+ log.ZError(ctx, "OnlineCache GetUserOnlinePlatform", err, "userID", userID)
+ return nil, err
+ }
+ //log.ZDebug(ctx, "OnlineCache GetUserOnlinePlatform", "userID", userID, "platformIDs", platformIDs)
+ return platformIDs, nil
+}
+
+func (o *defaultOnlineCache) GetUserOnlinePlatform(ctx context.Context, userID string) ([]int32, error) {
+	platformIDs, err := o.getUserOnlinePlatform(ctx, userID)
+	if err != nil {
+		return nil, err
+	}
+	// Return a copy so callers cannot mutate the slice held by the cache.
+	tmp := make([]int32, len(platformIDs))
+	copy(tmp, platformIDs)
+	return tmp, nil
+}
+
+func (o *defaultOnlineCache) GetUserOnline(ctx context.Context, userID string) (bool, error) {
+ platformIDs, err := o.getUserOnlinePlatform(ctx, userID)
+ if err != nil {
+ return false, err
+ }
+ return len(platformIDs) > 0, nil
+}
+
+func (o *defaultOnlineCache) getUserOnlinePlatformBatch(ctx context.Context, userIDs []string) (map[string][]int32, error) {
+ if len(userIDs) == 0 {
+ return nil, nil
+ }
+ platformIDsMap, err := o.lruCache.GetBatch(userIDs, func(missingUsers []string) (map[string][]int32, error) {
+ platformIDsMap := make(map[string][]int32)
+ usersStatus, err := o.client.GetUsersOnlinePlatform(ctx, missingUsers)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, u := range usersStatus {
+ platformIDsMap[u.UserID] = u.PlatformIDs
+ }
+
+ return platformIDsMap, nil
+ })
+ if err != nil {
+		log.ZError(ctx, "OnlineCache getUserOnlinePlatformBatch", err, "userIDs", userIDs)
+ return nil, err
+ }
+ return platformIDsMap, nil
+}
+
+func (o *defaultOnlineCache) GetUsersOnline(ctx context.Context, userIDs []string) ([]string, []string, error) {
+ t := time.Now()
+
+ var (
+ onlineUserIDs = make([]string, 0, len(userIDs))
+ offlineUserIDs = make([]string, 0, len(userIDs))
+ )
+
+ switch o.fullUserCache {
+ case true:
+ for _, userID := range userIDs {
+ if _, ok := o.mapCache.Load(userID); ok {
+ onlineUserIDs = append(onlineUserIDs, userID)
+ } else {
+ offlineUserIDs = append(offlineUserIDs, userID)
+ }
+ }
+ case false:
+ userOnlineMap, err := o.getUserOnlinePlatformBatch(ctx, userIDs)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ for key, value := range userOnlineMap {
+ if len(value) > 0 {
+ onlineUserIDs = append(onlineUserIDs, key)
+ } else {
+ offlineUserIDs = append(offlineUserIDs, key)
+ }
+ }
+ }
+
+ log.ZInfo(ctx, "get users online", "online users length", len(onlineUserIDs), "offline users length", len(offlineUserIDs), "cost", time.Since(t))
+ return onlineUserIDs, offlineUserIDs, nil
+}
+
+func (o *defaultOnlineCache) setUserOnline(userID string, platformIDs []int32) {
+ switch o.fullUserCache {
+ case true:
+ o.mapCache.Store(userID, platformIDs)
+ case false:
+ o.lruCache.Set(userID, platformIDs)
+ }
+}
+
+func (o *defaultOnlineCache) setHasUserOnline(userID string, platformIDs []int32) bool {
+ return o.lruCache.SetHas(userID, platformIDs)
+}
+
+func (o *defaultOnlineCache) WaitCache() {
+ o.Lock.Lock()
+ for o.CurrentPhase.Load() < DoSubscribeOver {
+ o.Cond.Wait()
+ }
+ o.Lock.Unlock()
+}
+
+type disableOnlineCache struct {
+ client *rpcli.UserClient
+}
+
+func (o disableOnlineCache) GetUserOnlinePlatform(ctx context.Context, userID string) ([]int32, error) {
+ return o.client.GetUserOnlinePlatform(ctx, userID)
+}
+
+func (o disableOnlineCache) GetUserOnline(ctx context.Context, userID string) (bool, error) {
+ onlinePlatform, err := o.client.GetUserOnlinePlatform(ctx, userID)
+ if err != nil {
+ return false, err
+ }
+	return len(onlinePlatform) > 0, nil
+}
+
+func (o disableOnlineCache) GetUsersOnline(ctx context.Context, userIDs []string) ([]string, []string, error) {
+ var (
+ onlineUserIDs = make([]string, 0, len(userIDs))
+ offlineUserIDs = make([]string, 0, len(userIDs))
+ )
+ for _, userID := range userIDs {
+ online, err := o.GetUserOnline(ctx, userID)
+ if err != nil {
+ return nil, nil, err
+ }
+ if online {
+ onlineUserIDs = append(onlineUserIDs, userID)
+ } else {
+ offlineUserIDs = append(offlineUserIDs, userID)
+ }
+ }
+ return onlineUserIDs, offlineUserIDs, nil
+}
+
+func (o disableOnlineCache) WaitCache() {}
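The online/offline split used by both cache implementations reduces to one rule: a user is online when at least one platform ID is recorded for them. A minimal standalone sketch of that partition logic (illustrative names, not the project's actual types):

```go
package main

import (
	"fmt"
	"sort"
)

// partitionOnline mirrors the GetUsersOnline branching: a user with a
// non-empty platform list is online, everyone else is offline.
func partitionOnline(statuses map[string][]int32) (online, offline []string) {
	for userID, platformIDs := range statuses {
		if len(platformIDs) > 0 {
			online = append(online, userID)
		} else {
			offline = append(offline, userID)
		}
	}
	// Sort for deterministic output; map iteration order is random.
	sort.Strings(online)
	sort.Strings(offline)
	return online, offline
}

func main() {
	statuses := map[string][]int32{
		"alice": {1, 3}, // online on two platforms
		"bob":   {},     // no platforms recorded: offline
	}
	online, offline := partitionOnline(statuses)
	fmt.Println(online, offline)
}
```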
diff --git a/pkg/rpccache/subscriber.go b/pkg/rpccache/subscriber.go
new file mode 100644
index 0000000..3d29e93
--- /dev/null
+++ b/pkg/rpccache/subscriber.go
@@ -0,0 +1,44 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func subscriberRedisDeleteCache(ctx context.Context, client redis.UniversalClient, channel string, del func(ctx context.Context, key ...string)) {
+ defer func() {
+ if r := recover(); r != nil {
+ log.ZPanic(ctx, "subscriberRedisDeleteCache Panic", errs.ErrPanic(r))
+ }
+ }()
+ for message := range client.Subscribe(ctx, channel).Channel() {
+ log.ZDebug(ctx, "subscriberRedisDeleteCache", "channel", channel, "payload", message.Payload)
+ var keys []string
+ if err := json.Unmarshal([]byte(message.Payload), &keys); err != nil {
+ log.ZError(ctx, "subscriberRedisDeleteCache json.Unmarshal error", err)
+ continue
+ }
+ if len(keys) == 0 {
+ continue
+ }
+ del(ctx, keys...)
+ }
+}
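The subscriber above expects each Redis message payload to be a JSON array of cache keys. A small sketch of the matching encode/decode round trip (key names are illustrative; publishing itself would go through the Redis client):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encodeKeys produces the payload the publisher side would PUBLISH:
// a JSON array of cache keys.
func encodeKeys(keys []string) (string, error) {
	b, err := json.Marshal(keys)
	return string(b), err
}

// decodeKeys is the subscriber-side decoding, as in
// subscriberRedisDeleteCache's json.Unmarshal step.
func decodeKeys(payload string) ([]string, error) {
	var keys []string
	if err := json.Unmarshal([]byte(payload), &keys); err != nil {
		return nil, err
	}
	return keys, nil
}

func main() {
	payload, _ := encodeKeys([]string{"USER_INFO:u1", "USER_INFO:u2"})
	keys, _ := decodeKeys(payload)
	fmt.Println(payload, keys)
}
```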
diff --git a/pkg/rpccache/user.go b/pkg/rpccache/user.go
new file mode 100644
index 0000000..cc4e8f6
--- /dev/null
+++ b/pkg/rpccache/user.go
@@ -0,0 +1,130 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package rpccache
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/rpcli"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/cachekey"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/localcache"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/log"
+ "github.com/redis/go-redis/v9"
+)
+
+func NewUserLocalCache(client *rpcli.UserClient, localCache *config.LocalCache, cli redis.UniversalClient) *UserLocalCache {
+ lc := localCache.User
+ log.ZDebug(context.Background(), "UserLocalCache", "topic", lc.Topic, "slotNum", lc.SlotNum, "slotSize", lc.SlotSize, "enable", lc.Enable())
+ x := &UserLocalCache{
+ client: client,
+ local: localcache.New[[]byte](
+ localcache.WithLocalSlotNum(lc.SlotNum),
+ localcache.WithLocalSlotSize(lc.SlotSize),
+ localcache.WithLinkSlotNum(lc.SlotNum),
+ localcache.WithLocalSuccessTTL(lc.Success()),
+ localcache.WithLocalFailedTTL(lc.Failed()),
+ ),
+ }
+ if lc.Enable() {
+ go subscriberRedisDeleteCache(context.Background(), cli, lc.Topic, x.local.DelLocal)
+ }
+ return x
+}
+
+type UserLocalCache struct {
+ client *rpcli.UserClient
+ local localcache.Cache[[]byte]
+}
+
+func (u *UserLocalCache) GetUserInfo(ctx context.Context, userID string) (val *sdkws.UserInfo, err error) {
+ log.ZDebug(ctx, "UserLocalCache GetUserInfo req", "userID", userID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "UserLocalCache GetUserInfo return", "value", val)
+ } else {
+ log.ZError(ctx, "UserLocalCache GetUserInfo return", err)
+ }
+ }()
+ var cache cacheProto[sdkws.UserInfo]
+ return cache.Unmarshal(u.local.Get(ctx, cachekey.GetUserInfoKey(userID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "UserLocalCache GetUserInfo rpc", "userID", userID)
+ return cache.Marshal(u.client.GetUserInfo(ctx, userID))
+ }))
+}
+
+func (u *UserLocalCache) GetUserGlobalMsgRecvOpt(ctx context.Context, userID string) (val int32, err error) {
+ resp, err := u.getUserGlobalMsgRecvOpt(ctx, userID)
+ if err != nil {
+ return 0, err
+ }
+ return resp.GlobalRecvMsgOpt, nil
+}
+
+func (u *UserLocalCache) getUserGlobalMsgRecvOpt(ctx context.Context, userID string) (val *user.GetGlobalRecvMessageOptResp, err error) {
+ log.ZDebug(ctx, "UserLocalCache getUserGlobalMsgRecvOpt req", "userID", userID)
+ defer func() {
+ if err == nil {
+ log.ZDebug(ctx, "UserLocalCache getUserGlobalMsgRecvOpt return", "value", val)
+ } else {
+ log.ZError(ctx, "UserLocalCache getUserGlobalMsgRecvOpt return", err)
+ }
+ }()
+ var cache cacheProto[user.GetGlobalRecvMessageOptResp]
+ return cache.Unmarshal(u.local.Get(ctx, cachekey.GetUserGlobalRecvMsgOptKey(userID), func(ctx context.Context) ([]byte, error) {
+ log.ZDebug(ctx, "UserLocalCache GetUserGlobalMsgRecvOpt rpc", "userID", userID)
+ return cache.Marshal(u.client.UserClient.GetGlobalRecvMessageOpt(ctx, &user.GetGlobalRecvMessageOptReq{UserID: userID}))
+ }))
+}
+
+func (u *UserLocalCache) GetUsersInfo(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error) {
+ users := make([]*sdkws.UserInfo, 0, len(userIDs))
+ for _, userID := range userIDs {
+ user, err := u.GetUserInfo(ctx, userID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ log.ZWarn(ctx, "User info notFound", err, "userID", userID)
+ continue
+ }
+ return nil, err
+ }
+ users = append(users, user)
+ }
+ return users, nil
+}
+
+func (u *UserLocalCache) GetUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
+ users := make(map[string]*sdkws.UserInfo, len(userIDs))
+ for _, userID := range userIDs {
+ user, err := u.GetUserInfo(ctx, userID)
+ if err != nil {
+ if errs.ErrRecordNotFound.Is(err) {
+ continue
+ }
+ return nil, err
+ }
+ users[userID] = user
+ }
+ return users, nil
+}
+
+// DelUserInfo clears the local cache entry for the given user.
+func (u *UserLocalCache) DelUserInfo(ctx context.Context, userID string) {
+ u.local.DelLocal(ctx, cachekey.GetUserInfoKey(userID))
+}
diff --git a/pkg/rpcli/auth.go b/pkg/rpcli/auth.go
new file mode 100644
index 0000000..5aa1533
--- /dev/null
+++ b/pkg/rpcli/auth.go
@@ -0,0 +1,31 @@
+package rpcli
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/auth"
+ "google.golang.org/grpc"
+)
+
+func NewAuthClient(cc grpc.ClientConnInterface) *AuthClient {
+ return &AuthClient{auth.NewAuthClient(cc)}
+}
+
+type AuthClient struct {
+ auth.AuthClient
+}
+
+func (x *AuthClient) KickTokens(ctx context.Context, tokens []string) error {
+ if len(tokens) == 0 {
+ return nil
+ }
+ return ignoreResp(x.AuthClient.KickTokens(ctx, &auth.KickTokensReq{Tokens: tokens}))
+}
+
+func (x *AuthClient) InvalidateToken(ctx context.Context, req *auth.InvalidateTokenReq) error {
+ return ignoreResp(x.AuthClient.InvalidateToken(ctx, req))
+}
+
+func (x *AuthClient) ParseToken(ctx context.Context, token string) (*auth.ParseTokenResp, error) {
+ return x.AuthClient.ParseToken(ctx, &auth.ParseTokenReq{Token: token})
+}
diff --git a/pkg/rpcli/conversation.go b/pkg/rpcli/conversation.go
new file mode 100644
index 0000000..ab24577
--- /dev/null
+++ b/pkg/rpcli/conversation.go
@@ -0,0 +1,95 @@
+package rpcli
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/conversation"
+ "google.golang.org/grpc"
+)
+
+func NewConversationClient(cc grpc.ClientConnInterface) *ConversationClient {
+ return &ConversationClient{conversation.NewConversationClient(cc)}
+}
+
+type ConversationClient struct {
+ conversation.ConversationClient
+}
+
+func (x *ConversationClient) SetConversationMaxSeq(ctx context.Context, conversationID string, ownerUserIDs []string, maxSeq int64) error {
+ if len(ownerUserIDs) == 0 {
+ return nil
+ }
+ req := &conversation.SetConversationMaxSeqReq{ConversationID: conversationID, OwnerUserID: ownerUserIDs, MaxSeq: maxSeq}
+ return ignoreResp(x.ConversationClient.SetConversationMaxSeq(ctx, req))
+}
+
+func (x *ConversationClient) SetConversations(ctx context.Context, ownerUserIDs []string, info *conversation.ConversationReq) error {
+ if len(ownerUserIDs) == 0 {
+ return nil
+ }
+ req := &conversation.SetConversationsReq{UserIDs: ownerUserIDs, Conversation: info}
+ return ignoreResp(x.ConversationClient.SetConversations(ctx, req))
+}
+
+func (x *ConversationClient) GetConversationsByConversationIDs(ctx context.Context, conversationIDs []string) ([]*conversation.Conversation, error) {
+ if len(conversationIDs) == 0 {
+ return nil, nil
+ }
+ req := &conversation.GetConversationsByConversationIDReq{ConversationIDs: conversationIDs}
+ return extractField(ctx, x.ConversationClient.GetConversationsByConversationID, req, (*conversation.GetConversationsByConversationIDResp).GetConversations)
+}
+
+func (x *ConversationClient) GetConversationsByConversationID(ctx context.Context, conversationID string) (*conversation.Conversation, error) {
+ return firstValue(x.GetConversationsByConversationIDs(ctx, []string{conversationID}))
+}
+
+func (x *ConversationClient) SetConversationMinSeq(ctx context.Context, conversationID string, ownerUserIDs []string, minSeq int64) error {
+ if len(ownerUserIDs) == 0 {
+ return nil
+ }
+ req := &conversation.SetConversationMinSeqReq{ConversationID: conversationID, OwnerUserID: ownerUserIDs, MinSeq: minSeq}
+ return ignoreResp(x.ConversationClient.SetConversationMinSeq(ctx, req))
+}
+
+func (x *ConversationClient) GetConversation(ctx context.Context, conversationID string, ownerUserID string) (*conversation.Conversation, error) {
+ req := &conversation.GetConversationReq{ConversationID: conversationID, OwnerUserID: ownerUserID}
+ return extractField(ctx, x.ConversationClient.GetConversation, req, (*conversation.GetConversationResp).GetConversation)
+}
+
+func (x *ConversationClient) GetConversations(ctx context.Context, conversationIDs []string, ownerUserID string) ([]*conversation.Conversation, error) {
+ if len(conversationIDs) == 0 {
+ return nil, nil
+ }
+ req := &conversation.GetConversationsReq{ConversationIDs: conversationIDs, OwnerUserID: ownerUserID}
+ return extractField(ctx, x.ConversationClient.GetConversations, req, (*conversation.GetConversationsResp).GetConversations)
+}
+
+func (x *ConversationClient) GetConversationIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+ req := &conversation.GetConversationIDsReq{UserID: ownerUserID}
+ return extractField(ctx, x.ConversationClient.GetConversationIDs, req, (*conversation.GetConversationIDsResp).GetConversationIDs)
+}
+
+func (x *ConversationClient) GetPinnedConversationIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+ req := &conversation.GetPinnedConversationIDsReq{UserID: ownerUserID}
+ return extractField(ctx, x.ConversationClient.GetPinnedConversationIDs, req, (*conversation.GetPinnedConversationIDsResp).GetConversationIDs)
+}
+
+func (x *ConversationClient) CreateGroupChatConversations(ctx context.Context, groupID string, userIDs []string) error {
+ if len(userIDs) == 0 {
+ return nil
+ }
+ req := &conversation.CreateGroupChatConversationsReq{GroupID: groupID, UserIDs: userIDs}
+ return ignoreResp(x.ConversationClient.CreateGroupChatConversations(ctx, req))
+}
+
+func (x *ConversationClient) CreateSingleChatConversations(ctx context.Context, req *conversation.CreateSingleChatConversationsReq) error {
+ return ignoreResp(x.ConversationClient.CreateSingleChatConversations(ctx, req))
+}
+
+func (x *ConversationClient) GetConversationOfflinePushUserIDs(ctx context.Context, conversationID string, userIDs []string) ([]string, error) {
+ if len(userIDs) == 0 {
+ return nil, nil
+ }
+ req := &conversation.GetConversationOfflinePushUserIDsReq{ConversationID: conversationID, UserIDs: userIDs}
+ return extractField(ctx, x.ConversationClient.GetConversationOfflinePushUserIDs, req, (*conversation.GetConversationOfflinePushUserIDsResp).GetUserIDs)
+}
diff --git a/pkg/rpcli/group.go b/pkg/rpcli/group.go
new file mode 100644
index 0000000..6a782f6
--- /dev/null
+++ b/pkg/rpcli/group.go
@@ -0,0 +1,73 @@
+package rpcli
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ "google.golang.org/grpc"
+)
+
+func NewGroupClient(cc grpc.ClientConnInterface) *GroupClient {
+ return &GroupClient{group.NewGroupClient(cc)}
+}
+
+type GroupClient struct {
+ group.GroupClient
+}
+
+func (x *GroupClient) GetGroupsInfo(ctx context.Context, groupIDs []string) ([]*sdkws.GroupInfo, error) {
+ if len(groupIDs) == 0 {
+ return nil, nil
+ }
+ req := &group.GetGroupsInfoReq{GroupIDs: groupIDs}
+ return extractField(ctx, x.GroupClient.GetGroupsInfo, req, (*group.GetGroupsInfoResp).GetGroupInfos)
+}
+
+func (x *GroupClient) GetGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
+ return firstValue(x.GetGroupsInfo(ctx, []string{groupID}))
+}
+
+func (x *GroupClient) GetGroupInfoCache(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
+ req := &group.GetGroupInfoCacheReq{GroupID: groupID}
+ return extractField(ctx, x.GroupClient.GetGroupInfoCache, req, (*group.GetGroupInfoCacheResp).GetGroupInfo)
+}
+
+func (x *GroupClient) GetGroupMemberCache(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
+ req := &group.GetGroupMemberCacheReq{GroupID: groupID, GroupMemberID: userID}
+ return extractField(ctx, x.GroupClient.GetGroupMemberCache, req, (*group.GetGroupMemberCacheResp).GetMember)
+}
+
+func (x *GroupClient) DismissGroup(ctx context.Context, groupID string, deleteMember bool) error {
+ req := &group.DismissGroupReq{GroupID: groupID, DeleteMember: deleteMember}
+ return ignoreResp(x.GroupClient.DismissGroup(ctx, req))
+}
+
+func (x *GroupClient) GetGroupMemberUserIDs(ctx context.Context, groupID string) ([]string, error) {
+ req := &group.GetGroupMemberUserIDsReq{GroupID: groupID}
+ return extractField(ctx, x.GroupClient.GetGroupMemberUserIDs, req, (*group.GetGroupMemberUserIDsResp).GetUserIDs)
+}
+
+func (x *GroupClient) GetGroupMembersInfo(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ if len(userIDs) == 0 {
+ return nil, nil
+ }
+ req := &group.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs}
+ return extractField(ctx, x.GroupClient.GetGroupMembersInfo, req, (*group.GetGroupMembersInfoResp).GetMembers)
+}
+
+func (x *GroupClient) GetGroupMemberInfo(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
+ return firstValue(x.GetGroupMembersInfo(ctx, groupID, []string{userID}))
+}
+
+func (x *GroupClient) GetGroupMemberMapInfo(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
+ members, err := x.GetGroupMembersInfo(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ memberMap := make(map[string]*sdkws.GroupMemberFullInfo)
+ for _, member := range members {
+ memberMap[member.UserID] = member
+ }
+ return memberMap, nil
+}
diff --git a/pkg/rpcli/msg.go b/pkg/rpcli/msg.go
new file mode 100644
index 0000000..4e6fe64
--- /dev/null
+++ b/pkg/rpcli/msg.go
@@ -0,0 +1,92 @@
+package rpcli
+
+import (
+ "context"
+
+ "google.golang.org/grpc"
+
+ "git.imall.cloud/openim/protocol/msg"
+ "git.imall.cloud/openim/protocol/sdkws"
+)
+
+func NewMsgClient(cc grpc.ClientConnInterface) *MsgClient {
+ return &MsgClient{msg.NewMsgClient(cc)}
+}
+
+type MsgClient struct {
+ msg.MsgClient
+}
+
+func (x *MsgClient) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
+ if len(conversationIDs) == 0 {
+ return nil, nil
+ }
+ req := &msg.GetMaxSeqsReq{ConversationIDs: conversationIDs}
+ return extractField(ctx, x.MsgClient.GetMaxSeqs, req, (*msg.SeqsInfoResp).GetMaxSeqs)
+}
+
+func (x *MsgClient) GetMsgByConversationIDs(ctx context.Context, conversationIDs []string, maxSeqs map[string]int64) (map[string]*sdkws.MsgData, error) {
+ if len(conversationIDs) == 0 || len(maxSeqs) == 0 {
+ return nil, nil
+ }
+ req := &msg.GetMsgByConversationIDsReq{ConversationIDs: conversationIDs, MaxSeqs: maxSeqs}
+ return extractField(ctx, x.MsgClient.GetMsgByConversationIDs, req, (*msg.GetMsgByConversationIDsResp).GetMsgDatas)
+}
+
+func (x *MsgClient) GetHasReadSeqs(ctx context.Context, conversationIDs []string, userID string) (map[string]int64, error) {
+ if len(conversationIDs) == 0 {
+ return nil, nil
+ }
+ req := &msg.GetHasReadSeqsReq{ConversationIDs: conversationIDs, UserID: userID}
+ return extractField(ctx, x.MsgClient.GetHasReadSeqs, req, (*msg.SeqsInfoResp).GetMaxSeqs)
+}
+
+func (x *MsgClient) SetUserConversationMaxSeq(ctx context.Context, conversationID string, ownerUserIDs []string, maxSeq int64) error {
+ if len(ownerUserIDs) == 0 {
+ return nil
+ }
+ req := &msg.SetUserConversationMaxSeqReq{ConversationID: conversationID, OwnerUserID: ownerUserIDs, MaxSeq: maxSeq}
+ return ignoreResp(x.MsgClient.SetUserConversationMaxSeq(ctx, req))
+}
+
+func (x *MsgClient) SetUserConversationMin(ctx context.Context, conversationID string, ownerUserIDs []string, minSeq int64) error {
+ if len(ownerUserIDs) == 0 {
+ return nil
+ }
+ req := &msg.SetUserConversationsMinSeqReq{ConversationID: conversationID, UserIDs: ownerUserIDs, Seq: minSeq}
+ return ignoreResp(x.MsgClient.SetUserConversationsMinSeq(ctx, req))
+}
+
+func (x *MsgClient) GetLastMessageSeqByTime(ctx context.Context, conversationID string, lastTime int64) (int64, error) {
+ req := &msg.GetLastMessageSeqByTimeReq{ConversationID: conversationID, Time: lastTime}
+ return extractField(ctx, x.MsgClient.GetLastMessageSeqByTime, req, (*msg.GetLastMessageSeqByTimeResp).GetSeq)
+}
+
+func (x *MsgClient) GetConversationMaxSeq(ctx context.Context, conversationID string) (int64, error) {
+ req := &msg.GetConversationMaxSeqReq{ConversationID: conversationID}
+ return extractField(ctx, x.MsgClient.GetConversationMaxSeq, req, (*msg.GetConversationMaxSeqResp).GetMaxSeq)
+}
+
+func (x *MsgClient) GetActiveConversation(ctx context.Context, conversationIDs []string) ([]*msg.ActiveConversation, error) {
+ if len(conversationIDs) == 0 {
+ return nil, nil
+ }
+ req := &msg.GetActiveConversationReq{ConversationIDs: conversationIDs}
+ return extractField(ctx, x.MsgClient.GetActiveConversation, req, (*msg.GetActiveConversationResp).GetConversations)
+}
+
+func (x *MsgClient) GetSeqMessage(ctx context.Context, userID string, conversations []*msg.ConversationSeqs) (map[string]*sdkws.PullMsgs, error) {
+ if len(conversations) == 0 {
+ return nil, nil
+ }
+ req := &msg.GetSeqMessageReq{UserID: userID, Conversations: conversations}
+ return extractField(ctx, x.MsgClient.GetSeqMessage, req, (*msg.GetSeqMessageResp).GetMsgs)
+}
+
+func (x *MsgClient) SetUserConversationsMinSeq(ctx context.Context, conversationID string, userIDs []string, seq int64) error {
+ if len(userIDs) == 0 {
+ return nil
+ }
+ req := &msg.SetUserConversationsMinSeqReq{ConversationID: conversationID, UserIDs: userIDs, Seq: seq}
+ return ignoreResp(x.MsgClient.SetUserConversationsMinSeq(ctx, req))
+}
diff --git a/pkg/rpcli/msggateway.go b/pkg/rpcli/msggateway.go
new file mode 100644
index 0000000..bc68a9b
--- /dev/null
+++ b/pkg/rpcli/msggateway.go
@@ -0,0 +1,14 @@
+package rpcli
+
+import (
+ "git.imall.cloud/openim/protocol/msggateway"
+ "google.golang.org/grpc"
+)
+
+func NewMsgGatewayClient(cc grpc.ClientConnInterface) *MsgGatewayClient {
+ return &MsgGatewayClient{msggateway.NewMsgGatewayClient(cc)}
+}
+
+type MsgGatewayClient struct {
+ msggateway.MsgGatewayClient
+}
diff --git a/pkg/rpcli/push.go b/pkg/rpcli/push.go
new file mode 100644
index 0000000..a2221d5
--- /dev/null
+++ b/pkg/rpcli/push.go
@@ -0,0 +1,14 @@
+package rpcli
+
+import (
+ "git.imall.cloud/openim/protocol/push"
+ "google.golang.org/grpc"
+)
+
+func NewPushMsgServiceClient(cc grpc.ClientConnInterface) *PushMsgServiceClient {
+ return &PushMsgServiceClient{push.NewPushMsgServiceClient(cc)}
+}
+
+type PushMsgServiceClient struct {
+ push.PushMsgServiceClient
+}
diff --git a/pkg/rpcli/relation.go b/pkg/rpcli/relation.go
new file mode 100644
index 0000000..2001187
--- /dev/null
+++ b/pkg/rpcli/relation.go
@@ -0,0 +1,24 @@
+package rpcli
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/relation"
+ "google.golang.org/grpc"
+)
+
+func NewRelationClient(cc grpc.ClientConnInterface) *RelationClient {
+ return &RelationClient{relation.NewFriendClient(cc)}
+}
+
+type RelationClient struct {
+ relation.FriendClient
+}
+
+func (x *RelationClient) GetFriendsInfo(ctx context.Context, ownerUserID string, friendUserIDs []string) ([]*relation.FriendInfoOnly, error) {
+ if len(friendUserIDs) == 0 {
+ return nil, nil
+ }
+ req := &relation.GetFriendInfoReq{OwnerUserID: ownerUserID, FriendUserIDs: friendUserIDs}
+ return extractField(ctx, x.FriendClient.GetFriendInfo, req, (*relation.GetFriendInfoResp).GetFriendInfos)
+}
diff --git a/pkg/rpcli/rtc.go b/pkg/rpcli/rtc.go
new file mode 100644
index 0000000..7c8ef91
--- /dev/null
+++ b/pkg/rpcli/rtc.go
@@ -0,0 +1,14 @@
+package rpcli
+
+import (
+ "git.imall.cloud/openim/protocol/rtc"
+ "google.golang.org/grpc"
+)
+
+func NewRtcServiceClient(cc grpc.ClientConnInterface) *RtcServiceClient {
+ return &RtcServiceClient{rtc.NewRtcServiceClient(cc)}
+}
+
+type RtcServiceClient struct {
+ rtc.RtcServiceClient
+}
diff --git a/pkg/rpcli/third.go b/pkg/rpcli/third.go
new file mode 100644
index 0000000..5023a2e
--- /dev/null
+++ b/pkg/rpcli/third.go
@@ -0,0 +1,14 @@
+package rpcli
+
+import (
+ "git.imall.cloud/openim/protocol/third"
+ "google.golang.org/grpc"
+)
+
+func NewThirdClient(cc grpc.ClientConnInterface) *ThirdClient {
+ return &ThirdClient{third.NewThirdClient(cc)}
+}
+
+type ThirdClient struct {
+ third.ThirdClient
+}
diff --git a/pkg/rpcli/tool.go b/pkg/rpcli/tool.go
new file mode 100644
index 0000000..2bd50bd
--- /dev/null
+++ b/pkg/rpcli/tool.go
@@ -0,0 +1,32 @@
+package rpcli
+
+import (
+ "context"
+ "github.com/openimsdk/tools/errs"
+ "google.golang.org/grpc"
+)
+
+func extractField[A, B, C any](ctx context.Context, fn func(ctx context.Context, req *A, opts ...grpc.CallOption) (*B, error), req *A, get func(*B) C) (C, error) {
+ resp, err := fn(ctx, req)
+ if err != nil {
+ var c C
+ return c, err
+ }
+ return get(resp), nil
+}
+
+func firstValue[A any](val []A, err error) (A, error) {
+ if err != nil {
+ var a A
+ return a, err
+ }
+ if len(val) == 0 {
+ var a A
+ return a, errs.ErrRecordNotFound.WrapMsg("record not found")
+ }
+ return val[0], nil
+}
+
+func ignoreResp(_ any, err error) error {
+ return err
+}
diff --git a/pkg/rpcli/user.go b/pkg/rpcli/user.go
new file mode 100644
index 0000000..9e83a01
--- /dev/null
+++ b/pkg/rpcli/user.go
@@ -0,0 +1,96 @@
+package rpcli
+
+import (
+ "context"
+
+ "git.imall.cloud/openim/protocol/sdkws"
+ "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/datautil"
+ "google.golang.org/grpc"
+)
+
+func NewUserClient(cc grpc.ClientConnInterface) *UserClient {
+ return &UserClient{user.NewUserClient(cc)}
+}
+
+type UserClient struct {
+ user.UserClient
+}
+
+func (x *UserClient) GetUsersInfo(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error) {
+ if len(userIDs) == 0 {
+ return nil, nil
+ }
+ req := &user.GetDesignateUsersReq{UserIDs: userIDs}
+ return extractField(ctx, x.UserClient.GetDesignateUsers, req, (*user.GetDesignateUsersResp).GetUsersInfo)
+}
+
+func (x *UserClient) GetUserInfo(ctx context.Context, userID string) (*sdkws.UserInfo, error) {
+ return firstValue(x.GetUsersInfo(ctx, []string{userID}))
+}
+
+func (x *UserClient) CheckUser(ctx context.Context, userIDs []string) error {
+ if len(userIDs) == 0 {
+ return nil
+ }
+ users, err := x.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return err
+ }
+ if len(users) != len(userIDs) {
+ return errs.ErrRecordNotFound.WrapMsg("user not found")
+ }
+ return nil
+}
+
+func (x *UserClient) GetUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
+ users, err := x.GetUsersInfo(ctx, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ return datautil.SliceToMap(users, func(e *sdkws.UserInfo) string {
+ return e.UserID
+ }), nil
+}
+
+func (x *UserClient) GetAllOnlineUsers(ctx context.Context, cursor uint64) (*user.GetAllOnlineUsersResp, error) {
+ req := &user.GetAllOnlineUsersReq{Cursor: cursor}
+ return x.UserClient.GetAllOnlineUsers(ctx, req)
+}
+
+func (x *UserClient) GetUsersOnlinePlatform(ctx context.Context, userIDs []string) ([]*user.OnlineStatus, error) {
+	if len(userIDs) == 0 {
+		return nil, nil
+	}
+	req := &user.GetUserStatusReq{UserIDs: userIDs}
+	return extractField(ctx, x.UserClient.GetUserStatus, req, (*user.GetUserStatusResp).GetStatusList)
+}
+
+func (x *UserClient) GetUserOnlinePlatform(ctx context.Context, userID string) ([]int32, error) {
+ status, err := x.GetUsersOnlinePlatform(ctx, []string{userID})
+ if err != nil {
+ return nil, err
+ }
+ if len(status) == 0 {
+ return nil, nil
+ }
+ return status[0].PlatformIDs, nil
+}
+
+func (x *UserClient) SetUserOnlineStatus(ctx context.Context, req *user.SetUserOnlineStatusReq) error {
+ if len(req.Status) == 0 {
+ return nil
+ }
+ return ignoreResp(x.UserClient.SetUserOnlineStatus(ctx, req))
+}
+
+func (x *UserClient) GetNotificationByID(ctx context.Context, userID string) error {
+ return ignoreResp(x.UserClient.GetNotificationAccount(ctx, &user.GetNotificationAccountReq{UserID: userID}))
+}
+
+func (x *UserClient) GetAllUserIDs(ctx context.Context, pageNumber, showNumber int32) ([]string, error) {
+ req := &user.GetAllUserIDReq{Pagination: &sdkws.RequestPagination{PageNumber: pageNumber, ShowNumber: showNumber}}
+ return extractField(ctx, x.UserClient.GetAllUserID, req, (*user.GetAllUserIDResp).GetUserIDs)
+}
diff --git a/pkg/statistics/doc.go b/pkg/statistics/doc.go
new file mode 100644
index 0000000..2959d1e
--- /dev/null
+++ b/pkg/statistics/doc.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package statistics // import "git.imall.cloud/openim/open-im-server-deploy/pkg/statistics"
diff --git a/pkg/statistics/statistics.go b/pkg/statistics/statistics.go
new file mode 100644
index 0000000..222fd5c
--- /dev/null
+++ b/pkg/statistics/statistics.go
@@ -0,0 +1,68 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package statistics
+
+import (
+ "context"
+ "time"
+
+ "github.com/openimsdk/tools/log"
+)
+
+type Statistics struct {
+ AllCount *uint64
+ ModuleName string
+ PrintArgs string
+ SleepTime uint64
+}
+
+func (s *Statistics) output() {
+	t := time.NewTicker(time.Duration(s.SleepTime) * time.Second)
+	defer t.Stop()
+	var (
+		intervalCount   uint64
+		sum             uint64
+		timeIntervalNum uint64
+	)
+	for {
+		sum = *s.AllCount
+		<-t.C
+		current := *s.AllCount
+		// Guard against uint64 wraparound if the counter ever decreases.
+		if current < sum {
+			intervalCount = 0
+		} else {
+			intervalCount = current - sum
+		}
+		timeIntervalNum++
+		log.ZWarn(context.Background(), "system stat", nil,
+			"args", s.PrintArgs,
+			"intervalCount", intervalCount,
+			"total", current,
+			"intervalNum", timeIntervalNum,
+			"avg", current/timeIntervalNum/s.SleepTime,
+		)
+	}
+}
+
+func NewStatistics(allCount *uint64, moduleName, printArgs string, sleepTime int) *Statistics {
+ p := &Statistics{AllCount: allCount, ModuleName: moduleName, SleepTime: uint64(sleepTime), PrintArgs: printArgs}
+ go p.output()
+ return p
+}
diff --git a/pkg/tools/batcher/batcher.go b/pkg/tools/batcher/batcher.go
new file mode 100644
index 0000000..a180776
--- /dev/null
+++ b/pkg/tools/batcher/batcher.go
@@ -0,0 +1,277 @@
+package batcher
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "sync"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/authverify"
+ "github.com/openimsdk/tools/errs"
+ "github.com/openimsdk/tools/utils/idutil"
+)
+
+var (
+ DefaultDataChanSize = 1000
+ DefaultSize = 100
+ DefaultBuffer = 100
+ DefaultWorker = 5
+ DefaultInterval = time.Second
+)
+
+type Config struct {
+ size int // Number of message aggregations
+ buffer int // The number of caches running in a single coroutine
+ dataBuffer int // The size of the main data channel
+ worker int // Number of coroutines processed in parallel
+ interval time.Duration // Time of message aggregations
+ syncWait bool // Whether to wait synchronously after distributing messages have been consumed
+}
+
+type Option func(c *Config)
+
+func WithSize(s int) Option {
+ return func(c *Config) {
+ c.size = s
+ }
+}
+
+func WithBuffer(b int) Option {
+ return func(c *Config) {
+ c.buffer = b
+ }
+}
+
+func WithWorker(w int) Option {
+ return func(c *Config) {
+ c.worker = w
+ }
+}
+
+func WithInterval(i time.Duration) Option {
+ return func(c *Config) {
+ c.interval = i
+ }
+}
+
+func WithSyncWait(wait bool) Option {
+ return func(c *Config) {
+ c.syncWait = wait
+ }
+}
+
+func WithDataBuffer(size int) Option {
+ return func(c *Config) {
+ c.dataBuffer = size
+ }
+}
+
+type Batcher[T any] struct {
+ config *Config
+
+ globalCtx context.Context
+ cancel context.CancelFunc
+ Do func(ctx context.Context, channelID int, val *Msg[T])
+ OnComplete func(lastMessage *T, totalCount int)
+ Sharding func(key string) int
+ Key func(data *T) string
+ HookFunc func(triggerID string, messages map[string][]*T, totalCount int, lastMessage *T)
+ data chan *T
+ chArrays []chan *Msg[T]
+ wait sync.WaitGroup
+ counter sync.WaitGroup
+}
+
+func emptyOnComplete[T any](*T, int) {}
+func emptyHookFunc[T any](string, map[string][]*T, int, *T) {}
+
+func New[T any](opts ...Option) *Batcher[T] {
+ b := &Batcher[T]{
+ OnComplete: emptyOnComplete[T],
+ HookFunc: emptyHookFunc[T],
+ }
+	config := &Config{
+		size:       DefaultSize,
+		buffer:     DefaultBuffer,
+		dataBuffer: DefaultDataChanSize,
+		worker:     DefaultWorker,
+		interval:   DefaultInterval,
+	}
+	for _, opt := range opts {
+		opt(config)
+	}
+	b.config = config
+	// Honor WithDataBuffer; previously the option was set but never used.
+	b.data = make(chan *T, b.config.dataBuffer)
+ b.globalCtx, b.cancel = context.WithCancel(context.Background())
+
+ b.chArrays = make([]chan *Msg[T], b.config.worker)
+ for i := 0; i < b.config.worker; i++ {
+ b.chArrays[i] = make(chan *Msg[T], b.config.buffer)
+ }
+ return b
+}
+
+func (b *Batcher[T]) Worker() int {
+ return b.config.worker
+}
+
+func (b *Batcher[T]) Start() error {
+ if b.Sharding == nil {
+ return errs.New("Sharding function is required").Wrap()
+ }
+ if b.Do == nil {
+ return errs.New("Do function is required").Wrap()
+ }
+ if b.Key == nil {
+ return errs.New("Key function is required").Wrap()
+ }
+ b.wait.Add(b.config.worker)
+ for i := 0; i < b.config.worker; i++ {
+ go b.run(i, b.chArrays[i])
+ }
+ b.wait.Add(1)
+ go b.scheduler()
+ return nil
+}
+
+func (b *Batcher[T]) Put(ctx context.Context, data *T) error {
+ if data == nil {
+ return errs.New("data can not be nil").Wrap()
+ }
+ select {
+ case <-b.globalCtx.Done():
+ return errs.New("data channel is closed").Wrap()
+ case <-ctx.Done():
+ return ctx.Err()
+ case b.data <- data:
+ return nil
+ }
+}
+
+func (b *Batcher[T]) scheduler() {
+ ticker := time.NewTicker(b.config.interval)
+ defer func() {
+ ticker.Stop()
+ for _, ch := range b.chArrays {
+ close(ch)
+ }
+ close(b.data)
+ b.wait.Done()
+ }()
+
+ vals := make(map[string][]*T)
+ count := 0
+ var lastAny *T
+
+ for {
+ select {
+ case data, ok := <-b.data:
+ if !ok {
+ // If the data channel is closed unexpectedly
+ return
+ }
+ if data == nil {
+ if count > 0 {
+ b.distributeMessage(vals, count, lastAny)
+ }
+ return
+ }
+
+ key := b.Key(data)
+ vals[key] = append(vals[key], data)
+ lastAny = data
+
+ count++
+			if count >= b.config.size {
+				b.distributeMessage(vals, count, lastAny)
+				vals = make(map[string][]*T)
+				count = 0
+			}
+
+		case <-ticker.C:
+			if count > 0 {
+				b.distributeMessage(vals, count, lastAny)
+				vals = make(map[string][]*T)
+				count = 0
+			}
+ }
+ }
+}
+
+type Msg[T any] struct {
+ key string
+ triggerID string
+ val []*T
+}
+
+func (m Msg[T]) Key() string {
+ return m.key
+}
+
+func (m Msg[T]) TriggerID() string {
+ return m.triggerID
+}
+
+func (m Msg[T]) Val() []*T {
+ return m.val
+}
+
+func (m Msg[T]) String() string {
+ var sb strings.Builder
+ sb.WriteString("Key: ")
+ sb.WriteString(m.key)
+ sb.WriteString(", Values: [")
+ for i, v := range m.val {
+ if i > 0 {
+ sb.WriteString(", ")
+ }
+ sb.WriteString(fmt.Sprintf("%v", *v))
+ }
+ sb.WriteString("]")
+ return sb.String()
+}
+
+func (b *Batcher[T]) distributeMessage(messages map[string][]*T, totalCount int, lastMessage *T) {
+ triggerID := idutil.OperationIDGenerator()
+ b.HookFunc(triggerID, messages, totalCount, lastMessage)
+ for key, data := range messages {
+ if b.config.syncWait {
+ b.counter.Add(1)
+ }
+ channelID := b.Sharding(key)
+ b.chArrays[channelID] <- &Msg[T]{key: key, triggerID: triggerID, val: data}
+ }
+ if b.config.syncWait {
+ b.counter.Wait()
+ }
+ if b.OnComplete != nil {
+ b.OnComplete(lastMessage, totalCount)
+ }
+}
+
+func (b *Batcher[T]) run(channelID int, ch <-chan *Msg[T]) {
+ defer b.wait.Done()
+ ctx := authverify.WithTempAdmin(context.Background())
+	for messages := range ch {
+		b.Do(ctx, channelID, messages)
+		if b.config.syncWait {
+			b.counter.Done()
+		}
+	}
+}
+
+func (b *Batcher[T]) Close() {
+	b.cancel() // Signal producers to stop putting data.
+	b.data <- nil
+	// Wait for all goroutines to exit.
+	b.wait.Wait()
+}
diff --git a/pkg/tools/batcher/batcher_test.go b/pkg/tools/batcher/batcher_test.go
new file mode 100644
index 0000000..90e0284
--- /dev/null
+++ b/pkg/tools/batcher/batcher_test.go
@@ -0,0 +1,66 @@
+package batcher
+
+import (
+	"context"
+	"fmt"
+	"testing"
+	"time"
+
+	"github.com/openimsdk/tools/utils/stringutil"
+)
+
+func TestBatcher(t *testing.T) {
+ config := Config{
+ size: 1000,
+ buffer: 10,
+ worker: 10,
+ interval: 5 * time.Millisecond,
+ }
+
+ b := New[string](
+ WithSize(config.size),
+ WithBuffer(config.buffer),
+ WithWorker(config.worker),
+ WithInterval(config.interval),
+ WithSyncWait(true),
+ )
+
+ // Mock Do function to simply print values for demonstration
+ b.Do = func(ctx context.Context, channelID int, vals *Msg[string]) {
+ t.Logf("Channel %d Processed batch: %v", channelID, vals)
+ }
+ b.OnComplete = func(lastMessage *string, totalCount int) {
+ t.Logf("Completed processing with last message: %v, total count: %d", *lastMessage, totalCount)
+ }
+ b.Sharding = func(key string) int {
+ hashCode := stringutil.GetHashCode(key)
+ return int(hashCode) % config.worker
+ }
+ b.Key = func(data *string) string {
+ return *data
+ }
+
+ err := b.Start()
+ if err != nil {
+ t.Fatal(err)
+ }
+
+	// Test normal data processing
+	for i := 0; i < 10000; i++ {
+		data := fmt.Sprintf("data%d", i)
+		if err := b.Put(context.Background(), &data); err != nil {
+			t.Fatal(err)
+		}
+	}
+
+	time.Sleep(time.Second)
+ start := time.Now()
+ // Wait for all processing to finish
+ b.Close()
+
+ elapsed := time.Since(start)
+ t.Logf("Close took %s", elapsed)
+
+ if len(b.data) != 0 {
+ t.Error("Data channel should be empty after closing")
+ }
+}
diff --git a/pkg/util/conversationutil/conversationutil.go b/pkg/util/conversationutil/conversationutil.go
new file mode 100644
index 0000000..fcce202
--- /dev/null
+++ b/pkg/util/conversationutil/conversationutil.go
@@ -0,0 +1,61 @@
+package conversationutil
+
+import (
+ "sort"
+ "strings"
+)
+
+func GenConversationIDForSingle(sendID, recvID string) string {
+ l := []string{sendID, recvID}
+ sort.Strings(l)
+ return "si_" + strings.Join(l, "_")
+}
+
+func GenConversationUniqueKeyForGroup(groupID string) string {
+ return groupID
+}
+
+func GenGroupConversationID(groupID string) string {
+ return "sg_" + groupID
+}
+
+func IsGroupConversationID(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "sg_")
+}
+
+func GetGroupIDFromConversationID(conversationID string) string {
+ if strings.HasPrefix(conversationID, "sg_") {
+ return strings.TrimPrefix(conversationID, "sg_")
+ }
+ return ""
+}
+
+func IsNotificationConversationID(conversationID string) bool {
+ return strings.HasPrefix(conversationID, "n_")
+}
+
+func GenConversationUniqueKeyForSingle(sendID, recvID string) string {
+ l := []string{sendID, recvID}
+ sort.Strings(l)
+ return strings.Join(l, "_")
+}
+
+func GetNotificationConversationIDByConversationID(conversationID string) string {
+ l := strings.Split(conversationID, "_")
+ if len(l) > 1 {
+ l[0] = "n"
+ return strings.Join(l, "_")
+ }
+ return ""
+}
+
+func GetSelfNotificationConversationID(userID string) string {
+ return "n_" + userID + "_" + userID
+}
+
+func GetSeqsBeginEnd(seqs []int64) (int64, int64) {
+ if len(seqs) == 0 {
+ return 0, 0
+ }
+ return seqs[0], seqs[len(seqs)-1]
+}
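Because the two user IDs are sorted before joining, the single-chat conversation ID is symmetric in its arguments: both peers derive the same ID regardless of who initiates. A minimal self-contained sketch (mirroring `GenConversationIDForSingle` above) illustrates this:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// genConversationIDForSingle mirrors conversationutil.GenConversationIDForSingle:
// sorting the two user IDs makes the result independent of argument order.
func genConversationIDForSingle(sendID, recvID string) string {
	l := []string{sendID, recvID}
	sort.Strings(l)
	return "si_" + strings.Join(l, "_")
}

func main() {
	a := genConversationIDForSingle("alice", "bob")
	b := genConversationIDForSingle("bob", "alice")
	fmt.Println(a, a == b) // si_alice_bob true
}
```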
diff --git a/pkg/util/conversationutil/doc.go b/pkg/util/conversationutil/doc.go
new file mode 100644
index 0000000..64ad18c
--- /dev/null
+++ b/pkg/util/conversationutil/doc.go
@@ -0,0 +1 @@
+package conversationutil // import "git.imall.cloud/openim/open-im-server-deploy/pkg/util/conversationutil"
diff --git a/pkg/util/hashutil/id.go b/pkg/util/hashutil/id.go
new file mode 100644
index 0000000..52e7f4c
--- /dev/null
+++ b/pkg/util/hashutil/id.go
@@ -0,0 +1,16 @@
+package hashutil
+
+import (
+ "crypto/md5"
+ "encoding/binary"
+ "encoding/json"
+)
+
+func IdHash(ids []string) uint64 {
+ if len(ids) == 0 {
+ return 0
+ }
+ data, _ := json.Marshal(ids)
+ sum := md5.Sum(data)
+ return binary.BigEndian.Uint64(sum[:])
+}
diff --git a/pkg/util/useronline/split.go b/pkg/util/useronline/split.go
new file mode 100644
index 0000000..c39d31d
--- /dev/null
+++ b/pkg/util/useronline/split.go
@@ -0,0 +1,27 @@
+package useronline
+
+import (
+ "errors"
+ "strconv"
+ "strings"
+)
+
+func ParseUserOnlineStatus(payload string) (string, []int32, error) {
+ arr := strings.Split(payload, ":")
+ if len(arr) == 0 {
+ return "", nil, errors.New("invalid data")
+ }
+ userID := arr[len(arr)-1]
+ if userID == "" {
+ return "", nil, errors.New("userID is empty")
+ }
+ platformIDs := make([]int32, len(arr)-1)
+ for i := range platformIDs {
+ platformID, err := strconv.Atoi(arr[i])
+ if err != nil {
+ return "", nil, err
+ }
+ platformIDs[i] = int32(platformID)
+ }
+ return userID, platformIDs, nil
+}
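The payload layout implied by `ParseUserOnlineStatus` is colon-separated with the platform IDs first and the user ID last (e.g. `"1:5:user123"`). A self-contained sketch of the same parsing logic (the sample payload is illustrative, not taken from the source):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// parseUserOnlineStatus mirrors useronline.ParseUserOnlineStatus:
// the payload is colon-separated, platform IDs first, user ID last.
func parseUserOnlineStatus(payload string) (string, []int32, error) {
	arr := strings.Split(payload, ":")
	userID := arr[len(arr)-1]
	if userID == "" {
		return "", nil, errors.New("userID is empty")
	}
	platformIDs := make([]int32, len(arr)-1)
	for i := range platformIDs {
		id, err := strconv.Atoi(arr[i])
		if err != nil {
			return "", nil, err
		}
		platformIDs[i] = int32(id)
	}
	return userID, platformIDs, nil
}

func main() {
	userID, platforms, err := parseUserOnlineStatus("1:5:user123")
	if err != nil {
		panic(err)
	}
	fmt.Println(userID, platforms) // user123 [1 5]
}
```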
diff --git a/scripts/template/LICENSE b/scripts/template/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/scripts/template/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/scripts/template/LICENSE_TEMPLATES b/scripts/template/LICENSE_TEMPLATES
new file mode 100644
index 0000000..dbc5ce2
--- /dev/null
+++ b/scripts/template/LICENSE_TEMPLATES
@@ -0,0 +1,13 @@
+Copyright © {{.Year}} {{.Holder}} All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/scripts/template/boilerplate.txt b/scripts/template/boilerplate.txt
new file mode 100644
index 0000000..2f12334
--- /dev/null
+++ b/scripts/template/boilerplate.txt
@@ -0,0 +1,3 @@
+Copyright © {{.Year}} {{.Holder}} All rights reserved.
+Use of this source code is governed by a MIT style
+license that can be found in the LICENSE file.
diff --git a/scripts/template/footer.md.tmpl b/scripts/template/footer.md.tmpl
new file mode 100644
index 0000000..6bf9116
--- /dev/null
+++ b/scripts/template/footer.md.tmpl
@@ -0,0 +1,19 @@
+**Full Changelog**: https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/compare/{{ .PreviousTag }}...{{ .Tag }}
+
+## Get Involved with OpenIM!
+
+Your support of OpenIM is greatly appreciated 🎉🎉.
+
+If you encounter any problems during its usage, please create an issue in the [GitHub repository](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/), we're committed to resolving your problem as soon as possible.
+
+**Here are some ways to get involved with the OpenIM community:**
+
+📢 **Slack Channel**: Join our Slack channels for discussions, communication, and support. Click [here](https://openimsdk.slack.com) to join the Open-IM-Server Slack team channel.
+
+📧 **Gmail Contact**: If you have any questions, suggestions, or feedback for our open-source projects, please feel free to [contact us via email](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=info@openim.io).
+
+📖 **Blog**: Stay up-to-date with OpenIM-Server projects and trends by reading our [blog](https://openim.io/). We share the latest developments, tech trends, and other interesting information related to OpenIM.
+
+📱 **WeChat**: Add us on WeChat (QR Code) and indicate that you are a user or developer of Open-IM-Server. We'll process your request as soon as possible.
+
+Remember, your contributions play a vital role in making OpenIM successful, and we look forward to your active participation in our community! 🙌
\ No newline at end of file
diff --git a/scripts/template/head.md.tmpl b/scripts/template/head.md.tmpl
new file mode 100644
index 0000000..e024ab4
--- /dev/null
+++ b/scripts/template/head.md.tmpl
@@ -0,0 +1,31 @@
+## Welcome to the {{ .Tag }} release of [OpenIM](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }})!🎉🎉!
+
+We are excited to release {{.Tag}}, Branch: https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/tree/{{ .Tag }} , Git hash [{{ .ShortCommit }}], Install Address: [{{ .ReleaseURL }}]({{ .ReleaseURL }})
+
+Learn more about versions of OpenIM:
+
++ Our release logs are recorded in [✨CHANGELOG](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/blob/main/CHANGELOG/CHANGELOG.md)
+
++ For information on versions of OpenIM and how to maintain branches, read [📚this article](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/blob/main/docs/contrib/version.md)
+
++ If you wish to use mirroring, read OpenIM's [🤲image management policy](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/blob/main/docs/contrib/images.md)
+
+**Want to be one of them 😘?**
+
+
+
+
+
+
+
+
+
+
+
+
+
+> **Note**
+> @openimbot and @kubbot have made great contributions to the community as community 🤖robots(@openimsdk/bot), respectively.
+> Thanks to the @openimsdk/openim team for all their hard work on this release.
+> Thank you to all the [💕developers and contributors](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/graphs/contributors), people from all over the world, OpenIM brings us together
+> Contributions to this project are welcome! Please see [CONTRIBUTING.md](https://github.com/{{ .Env.USERNAME }}/{{ .ProjectName }}/blob/main/CONTRIBUTING.md) for details.
\ No newline at end of file
diff --git a/scripts/template/project_README.md b/scripts/template/project_README.md
new file mode 100644
index 0000000..96575e6
--- /dev/null
+++ b/scripts/template/project_README.md
@@ -0,0 +1,41 @@
+# Project myproject
+
+
+
+## Features
+
+
+
+## Getting Started
+
+### Prerequisites
+
+
+
+### Building
+
+
+
+### Running
+
+
+
+## Using
+
+
+
+## Contributing
+
+
+
+## Community(optional)
+
+
+
+## Authors
+
+
+
+## License
+
+
diff --git a/start-config.yml b/start-config.yml
new file mode 100644
index 0000000..da95904
--- /dev/null
+++ b/start-config.yml
@@ -0,0 +1,18 @@
+serviceBinaries:
+ openim-api: 1
+ openim-crontask: 4
+ openim-rpc-user: 1
+ openim-msggateway: 1
+ openim-push: 8
+ openim-msgtransfer: 8
+ openim-rpc-conversation: 1
+ openim-rpc-auth: 1
+ openim-rpc-group: 1
+ openim-rpc-friend: 1
+ openim-rpc-msg: 1
+ openim-rpc-third: 1
+toolBinaries:
+ - check-free-memory
+ - check-component
+ - seq
+maxFileDescriptors: 10000
diff --git a/test/e2e/README.md b/test/e2e/README.md
new file mode 100644
index 0000000..2910f36
--- /dev/null
+++ b/test/e2e/README.md
@@ -0,0 +1,136 @@
+# OpenIM End-to-End (E2E) Testing Module
+
+## Overview
+
+This directory contains the End-to-End (E2E) testing suite for OpenIM, a comprehensive instant messaging platform. The E2E tests simulate real-world usage scenarios to ensure that all components of the OpenIM system function correctly in an integrated environment.
+
+The tests cover various aspects of the system, including API endpoints, chat services, web interfaces, and RPC components, as well as performance and scalability under different load conditions.
+
+## Directory Structure
+
+```bash
+❯ tree test/e2e
+test/e2e/
+├── conformance/ # Contains tests for verifying OpenIM API conformance
+├── framework/ # Provides auxiliary code and libraries for building and running E2E tests
+│ ├── config/ # Test configuration files and management
+│ ├── ginkgowrapper/ # Functions wrapping the testing library for handling test failures and skips
+│ └── helpers/ # Helper functions such as user creation, message sending, etc.
+├── api/ # End-to-end tests for OpenIM API
+├── chat/ # Tests for the business server (including login, registration, and other logic)
+├── web/ # Tests for the web frontend (login, registration, message sending and receiving)
+├── rpc/ # End-to-end tests for various RPC components
+│ ├── auth/ # Tests for the authentication service
+│ ├── conversation/ # Tests for conversation management
+│ ├── friend/ # Tests for friend relationship management
+│ ├── group/ # Tests for group management
+│ └── message/ # Tests for message handling
+├── scalability/ # Tests for the scalability of the OpenIM system
+├── performance/ # Performance tests such as load testing and stress testing
+└── upgrade/ # Tests for compatibility and stability during OpenIM upgrades
+```
+
+The E2E tests are organized into the following directory structure:
+
+- `conformance/`: Contains tests to verify the conformance of OpenIM API implementations.
+- `framework/`: Provides helper code for constructing and running E2E tests using the Ginkgo framework.
+ - `config/`: Manages test configurations and options.
+ - `ginkgowrapper/`: Wrappers for Ginkgo's `Fail` and `Skip` functions to handle structured data panics.
+ - `helpers/`: Utility functions for common test actions like user creation, message dispatching, etc.
+- `api/`: E2E tests for the OpenIM API endpoints.
+- `chat/`: Tests for the chat service, including authentication, session management, and messaging logic.
+- `web/`: Tests for the web interface, including user interactions and information exchange.
+- `rpc/`: E2E tests for each of the RPC components.
+ - `auth/`: Tests for the authentication service.
+ - `conversation/`: Tests for conversation management.
+ - `friend/`: Tests for friend relationship management.
+ - `group/`: Tests for group management.
+ - `message/`: Tests for message handling.
+- `scalability/`: Tests for the scalability of the OpenIM system.
+- `performance/`: Performance tests, including load and stress tests.
+- `upgrade/`: Tests for the upgrade process of OpenIM, ensuring compatibility and stability.
+
+## Prerequisites
+
+Deploying OpenIM requires supporting components such as MongoDB and Kafka, so make sure these dependencies are running before executing the E2E tests. You can start them with Docker Compose:
+
+```bash
+docker compose up -d
+```
+
+Or use the [Kubernetes deployment](https://github.com/openimsdk/helm-charts).
+
+Before running the E2E tests, ensure that you have the following prerequisites installed:
+
+- Docker
+- Kubernetes
+- Ginkgo test framework
+- Go (version 1.19 or higher)
+
+## Configuration
+
+Test configurations can be customized via the `config/` directory. The configuration files are in YAML format and allow you to set parameters such as API endpoints, user credentials, and test data.
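+
+The exact schema depends on the test suite, but a configuration file under `config/` might look like the following sketch (all keys are illustrative, not the authoritative schema; the endpoint and secret match the defaults used elsewhere in this repository):
+
+```yaml
+# Illustrative E2E test configuration (hypothetical keys).
+apiHost: "http://127.0.0.1:10002"  # OpenIM API endpoint under test
+secret: "openIM123"                # shared secret used to mint tokens
+testData:
+  userCount: 10                    # number of throwaway users to create
+  messageCount: 100                # messages sent per conversation
+```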
+
+## Running the Tests
+
+To run a single test or set of tests, you'll need the [Ginkgo](https://github.com/onsi/ginkgo) tool installed on your machine:
+
+```
+ginkgo --help
+ --focus value
+ If set, ginkgo will only run specs that match this regular expression. Can be specified multiple times, values are ORed.
+```
+
+To run the entire suite of E2E tests, use the following command:
+
+```sh
+ginkgo -v --randomizeAllSpecs --randomizeSuites --failOnPending --cover --trace --race --progress
+```
+
+You can also run a specific test or group of tests by specifying the path to the test directory:
+
+```bash
+ginkgo -v ./test/e2e/chat
+```
+
+Or you can use the Makefile to run the tests:
+
+```bash
+make test-e2e
+```
+
+## Test Development
+
+To contribute to the E2E tests:
+
+1. Clone the repository and navigate to the `test/e2e/` directory.
+2. Create a new test file or modify an existing test to cover a new scenario.
+3. Write test cases using the Ginkgo BDD style, ensuring that they are clear and descriptive.
+4. Run the tests locally to ensure they pass.
+5. Submit a pull request with your changes.
+
+Please refer to the `CONTRIBUTING.md` file for more detailed instructions on contributing to the test suite.
+
+
+## Reporting Issues
+
+If you encounter any issues while running the E2E tests, please open an issue on the GitHub repository (https://github.com/openimsdk/open-im-server-deploy/issues/new/choose, using the "Failing Test" template) and include the following information:
+
++ A clear and concise description of the issue.
++ Steps to reproduce the behavior.
++ Relevant logs and test output.
++ Any other context that could be helpful in troubleshooting.
+
+
+## Continuous Integration (CI)
+
+The E2E test suite is integrated with CI, which runs the tests automatically on each code commit. The results are reported back to the pull request or commit to provide immediate feedback on the impact of the changes.
+
+[](https://github.com/openimsdk/open-im-server-deploy/actions/workflows/e2e-test.yml)
+
+
+## Contact
+
+For any queries or assistance, please reach out to the OpenIM development team at [support@openim.com](mailto:support@openim.com).
\ No newline at end of file
diff --git a/test/e2e/api/token/token.go b/test/e2e/api/token/token.go
new file mode 100644
index 0000000..c862dc6
--- /dev/null
+++ b/test/e2e/api/token/token.go
@@ -0,0 +1,149 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package token
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+)
+
+// API endpoints and other constants.
+const (
+ APIHost = "http://127.0.0.1:10002"
+ UserTokenURL = APIHost + "/auth/user_token"
+ UserRegisterURL = APIHost + "/user/user_register"
+ SecretKey = "openIM123"
+ OperationID = "1646445464564"
+)
+
+// UserTokenRequest represents a request to get a user token.
+type UserTokenRequest struct {
+ Secret string `json:"secret"`
+ PlatformID int `json:"platformID"`
+ UserID string `json:"userID"`
+}
+
+// UserTokenResponse represents a response containing a user token.
+type UserTokenResponse struct {
+ Token string `json:"token"`
+ ErrCode int `json:"errCode"`
+}
+
+// User represents user data for registration.
+type User struct {
+ UserID string `json:"userID"`
+ Nickname string `json:"nickname"`
+ FaceURL string `json:"faceURL"`
+}
+
+// UserRegisterRequest represents a request to register a user.
+type UserRegisterRequest struct {
+ Users []User `json:"users"`
+}
+
+/* func main() {
+ // Example usage of functions
+ token, err := GetUserToken("openIM123456")
+ if err != nil {
+ log.Fatalf("Error getting user token: %v", err)
+ }
+ fmt.Println("Token:", token)
+
+ err = RegisterUser(token, "testUserID", "TestNickname", "https://example.com/image.jpg")
+ if err != nil {
+ log.Fatalf("Error registering user: %v", err)
+ }
+} */
+
+// GetUserToken requests a user token from the API.
+func GetUserToken(userID string) (string, error) {
+ reqBody := UserTokenRequest{
+ Secret: SecretKey,
+ PlatformID: 1,
+ UserID: userID,
+ }
+ reqBytes, err := json.Marshal(reqBody)
+ if err != nil {
+ return "", err
+ }
+
+ resp, err := http.Post(UserTokenURL, "application/json", bytes.NewBuffer(reqBytes))
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ var tokenResp UserTokenResponse
+ if err := json.NewDecoder(resp.Body).Decode(&tokenResp); err != nil {
+ return "", err
+ }
+
+ if tokenResp.ErrCode != 0 {
+ return "", fmt.Errorf("error in token response: %v", tokenResp.ErrCode)
+ }
+
+ return tokenResp.Token, nil
+}
+
+// RegisterUser registers a new user using the API.
+func RegisterUser(token, userID, nickname, faceURL string) error {
+ user := User{
+ UserID: userID,
+ Nickname: nickname,
+ FaceURL: faceURL,
+ }
+ reqBody := UserRegisterRequest{
+ Users: []User{user},
+ }
+ reqBytes, err := json.Marshal(reqBody)
+ if err != nil {
+ return err
+ }
+
+ client := &http.Client{}
+ req, err := http.NewRequest("POST", UserRegisterURL, bytes.NewBuffer(reqBytes))
+ if err != nil {
+ return err
+ }
+
+ req.Header.Add("Content-Type", "application/json")
+ req.Header.Add("operationID", OperationID)
+ req.Header.Add("token", token)
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+
+ respBody, err := io.ReadAll(resp.Body)
+ if err != nil {
+ return err
+ }
+
+ var respData map[string]any
+ if err := json.Unmarshal(respBody, &respData); err != nil {
+ return err
+ }
+
+ if errCode, ok := respData["errCode"].(float64); ok && errCode != 0 {
+ return fmt.Errorf("error in user registration response: %v", respData)
+ }
+
+ return nil
+}
diff --git a/test/e2e/api/user/curd.go b/test/e2e/api/user/curd.go
new file mode 100644
index 0000000..da193f0
--- /dev/null
+++ b/test/e2e/api/user/curd.go
@@ -0,0 +1,71 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "fmt"
+
+ gettoken "git.imall.cloud/openim/open-im-server-deploy/test/e2e/api/token"
+ "git.imall.cloud/openim/open-im-server-deploy/test/e2e/framework/config"
+)
+
+// UserInfoRequest represents a request to get or update user information.
+type UserInfoRequest struct {
+ UserIDs []string `json:"userIDs,omitempty"`
+ UserInfo *gettoken.User `json:"userInfo,omitempty"`
+}
+
+// GetUsersOnlineStatusRequest represents a request to get users' online status.
+type GetUsersOnlineStatusRequest struct {
+ UserIDs []string `json:"userIDs"`
+}
+
+// GetUsersInfo retrieves detailed information for a list of user IDs.
+func GetUsersInfo(token string, userIDs []string) error {
+
+ url := fmt.Sprintf("http://%s:%s/user/get_users_info", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := UserInfoRequest{
+ UserIDs: userIDs,
+ }
+ return sendPostRequestWithToken(url, token, requestBody)
+}
+
+// UpdateUserInfo updates the information for a user.
+func UpdateUserInfo(token, userID, nickname, faceURL string) error {
+
+ url := fmt.Sprintf("http://%s:%s/user/update_user_info", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := UserInfoRequest{
+ UserInfo: &gettoken.User{
+ UserID: userID,
+ Nickname: nickname,
+ FaceURL: faceURL,
+ },
+ }
+ return sendPostRequestWithToken(url, token, requestBody)
+}
+
+// GetUsersOnlineStatus retrieves the online status for a list of user IDs.
+func GetUsersOnlineStatus(token string, userIDs []string) error {
+
+ url := fmt.Sprintf("http://%s:%s/user/get_users_online_status", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := GetUsersOnlineStatusRequest{
+ UserIDs: userIDs,
+ }
+
+ return sendPostRequestWithToken(url, token, requestBody)
+}
diff --git a/test/e2e/api/user/user.go b/test/e2e/api/user/user.go
new file mode 100644
index 0000000..8c0c1f9
--- /dev/null
+++ b/test/e2e/api/user/user.go
@@ -0,0 +1,125 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package user
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+
+ gettoken "git.imall.cloud/openim/open-im-server-deploy/test/e2e/api/token"
+ "git.imall.cloud/openim/open-im-server-deploy/test/e2e/framework/config"
+)
+
+// ForceLogoutRequest represents a request to force a user logout.
+type ForceLogoutRequest struct {
+ PlatformID int `json:"platformID"`
+ UserID string `json:"userID"`
+}
+
+// CheckUserAccountRequest represents a request to check a user account.
+type CheckUserAccountRequest struct {
+ CheckUserIDs []string `json:"checkUserIDs"`
+}
+
+// GetUsersRequest represents a request to get a list of users.
+type GetUsersRequest struct {
+ Pagination Pagination `json:"pagination"`
+}
+
+// Pagination specifies the page number and number of items per page.
+type Pagination struct {
+ PageNumber int `json:"pageNumber"`
+ ShowNumber int `json:"showNumber"`
+}
+
+// ForceLogout forces a user to log out.
+func ForceLogout(token, userID string, platformID int) error {
+
+ url := fmt.Sprintf("http://%s:%s/auth/force_logout", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := ForceLogoutRequest{
+ PlatformID: platformID,
+ UserID: userID,
+ }
+ return sendPostRequestWithToken(url, token, requestBody)
+}
+
+// CheckUserAccount checks if the user accounts exist.
+func CheckUserAccount(token string, userIDs []string) error {
+
+ url := fmt.Sprintf("http://%s:%s/user/account_check", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := CheckUserAccountRequest{
+ CheckUserIDs: userIDs,
+ }
+ return sendPostRequestWithToken(url, token, requestBody)
+}
+
+// GetUsers retrieves a list of users with pagination.
+func GetUsers(token string, pageNumber, showNumber int) error {
+
+	url := fmt.Sprintf("http://%s:%s/user/get_users", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
+
+ requestBody := GetUsersRequest{
+ Pagination: Pagination{
+ PageNumber: pageNumber,
+ ShowNumber: showNumber,
+ },
+ }
+ return sendPostRequestWithToken(url, token, requestBody)
+}
+
+// sendPostRequestWithToken sends a POST request with a token in the header.
+func sendPostRequestWithToken(url, token string, body any) error {
+ reqBytes, err := json.Marshal(body)
+ if err != nil {
+ return err
+ }
+
+ client := &http.Client{}
+ req, err := http.NewRequest("POST", url, bytes.NewBuffer(reqBytes))
+ if err != nil {
+ return err
+ }
+
+ req.Header.Add("Content-Type", "application/json")
+ req.Header.Add("operationID", gettoken.OperationID)
+ req.Header.Add("token", token)
+
+ resp, err := client.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+
+ respBody, err := io.ReadAll(resp.Body)
+ if err != nil {
+ return err
+ }
+
+ var respData map[string]any
+ if err := json.Unmarshal(respBody, &respData); err != nil {
+ return err
+ }
+
+ if errCode, ok := respData["errCode"].(float64); ok && errCode != 0 {
+ return fmt.Errorf("error in response: %v", respData)
+ }
+
+ return nil
+}
diff --git a/test/e2e/conformance/.keep b/test/e2e/conformance/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/conformance/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/e2e.go b/test/e2e/e2e.go
new file mode 100644
index 0000000..72bcd98
--- /dev/null
+++ b/test/e2e/e2e.go
@@ -0,0 +1,51 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package e2e
+
+import (
+ "testing"
+
+ gettoken "git.imall.cloud/openim/open-im-server-deploy/test/e2e/api/token"
+ "git.imall.cloud/openim/open-im-server-deploy/test/e2e/api/user"
+)
+
+// RunE2ETests checks configuration parameters (specified through flags) and then runs
+// E2E tests using the Ginkgo runner.
+// If a "report directory" is specified, one or more JUnit test reports will be
+// generated in this directory, and cluster logs will also be saved.
+// This function is called on each Ginkgo node in parallel mode.
+func RunE2ETests(t *testing.T) {
+	token, err := gettoken.GetUserToken("openIM123456")
+	if err != nil {
+		t.Fatalf("failed to get user token: %v", err)
+	}
+
+	// Get user info
+	if err := user.GetUsersInfo(token, []string{"user1", "user2"}); err != nil {
+		t.Errorf("GetUsersInfo failed: %v", err)
+	}
+
+	// Update user info
+	if err := user.UpdateUserInfo(token, "user1", "NewNickname", "https://github.com/openimsdk/open-im-server-deploy/blob/main/assets/logo/openim-logo.png"); err != nil {
+		t.Errorf("UpdateUserInfo failed: %v", err)
+	}
+
+	// Get users' online status
+	if err := user.GetUsersOnlineStatus(token, []string{"user1", "user2"}); err != nil {
+		t.Errorf("GetUsersOnlineStatus failed: %v", err)
+	}
+
+	// Force a logout
+	if err := user.ForceLogout(token, "4950983283", 2); err != nil {
+		t.Errorf("ForceLogout failed: %v", err)
+	}
+
+	// Check user accounts
+	if err := user.CheckUserAccount(token, []string{"openIM123456", "anotherUserID"}); err != nil {
+		t.Errorf("CheckUserAccount failed: %v", err)
+	}
+
+	// Get users with pagination
+	if err := user.GetUsers(token, 1, 100); err != nil {
+		t.Errorf("GetUsers failed: %v", err)
+	}
+}
diff --git a/test/e2e/e2e_test.go b/test/e2e/e2e_test.go
new file mode 100644
index 0000000..7db3596
--- /dev/null
+++ b/test/e2e/e2e_test.go
@@ -0,0 +1,37 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package e2e
+
+import (
+	"flag"
+	"os"
+	"testing"
+
+	"git.imall.cloud/openim/open-im-server-deploy/test/e2e/framework/config"
+)
+
+// handleFlags sets up all flags and parses the command line.
+func handleFlags() {
+	config.CopyFlags(config.Flags, flag.CommandLine)
+	flag.Parse()
+}
+
+func TestMain(m *testing.M) {
+	handleFlags()
+	os.Exit(m.Run())
+}
+
+func TestE2E(t *testing.T) {
+ RunE2ETests(t)
+}
diff --git a/test/e2e/framework/config/config.go b/test/e2e/framework/config/config.go
new file mode 100644
index 0000000..14074fe
--- /dev/null
+++ b/test/e2e/framework/config/config.go
@@ -0,0 +1,84 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "flag"
+ "os"
+)
+
+// Flags is the flag set that AddOptions adds to. Test authors should
+// also use it instead of directly adding to the global command line.
+var Flags = flag.NewFlagSet("", flag.ContinueOnError)
+
+// CopyFlags ensures that all flags that are defined in the source flag
+// set appear in the target flag set as if they had been defined there
+// directly. From the flag package it inherits the behavior that there
+// is a panic if the target already contains a flag from the source.
+func CopyFlags(source *flag.FlagSet, target *flag.FlagSet) {
+ source.VisitAll(func(flag *flag.Flag) {
+ // We don't need to copy flag.DefValue. The original
+ // default (from, say, flag.String) was stored in
+ // the value and gets extracted by Var for the help
+ // message.
+ target.Var(flag.Value, flag.Name, flag.Usage)
+ })
+}
+
+// Config defines the configuration structure for the OpenIM components.
+type Config struct {
+ APIHost string
+ APIPort string
+ MsgGatewayHost string
+ MsgTransferHost string
+ PushHost string
+ RPCAuthHost string
+ RPCConversationHost string
+ RPCFriendHost string
+ RPCGroupHost string
+ RPCMsgHost string
+ RPCThirdHost string
+ RPCUserHost string
+ // Add other configuration fields as needed
+}
+
+// LoadConfig loads the configurations from environment variables or default values.
+func LoadConfig() *Config {
+ return &Config{
+ APIHost: getEnv("OPENIM_API_HOST", "127.0.0.1"),
+ APIPort: getEnv("API_OPENIM_PORT", "10002"),
+
+ // TODO: Set default variable
+ MsgGatewayHost: getEnv("OPENIM_MSGGATEWAY_HOST", "default-msggateway-host"),
+ MsgTransferHost: getEnv("OPENIM_MSGTRANSFER_HOST", "default-msgtransfer-host"),
+ PushHost: getEnv("OPENIM_PUSH_HOST", "default-push-host"),
+ RPCAuthHost: getEnv("OPENIM_RPC_AUTH_HOST", "default-rpc-auth-host"),
+ RPCConversationHost: getEnv("OPENIM_RPC_CONVERSATION_HOST", "default-rpc-conversation-host"),
+ RPCFriendHost: getEnv("OPENIM_RPC_FRIEND_HOST", "default-rpc-friend-host"),
+ RPCGroupHost: getEnv("OPENIM_RPC_GROUP_HOST", "default-rpc-group-host"),
+ RPCMsgHost: getEnv("OPENIM_RPC_MSG_HOST", "default-rpc-msg-host"),
+ RPCThirdHost: getEnv("OPENIM_RPC_THIRD_HOST", "default-rpc-third-host"),
+ RPCUserHost: getEnv("OPENIM_RPC_USER_HOST", "default-rpc-user-host"),
+ }
+}
+
+// getEnv is a helper function to read an environment variable or return a default value.
+func getEnv(key, defaultValue string) string {
+ value, exists := os.LookupEnv(key)
+ if !exists {
+ return defaultValue
+ }
+ return value
+}
diff --git a/test/e2e/framework/config/config_test.go b/test/e2e/framework/config/config_test.go
new file mode 100644
index 0000000..66f845d
--- /dev/null
+++ b/test/e2e/framework/config/config_test.go
@@ -0,0 +1,89 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "flag"
+ "reflect"
+ "testing"
+)
+
+func TestCopyFlags(t *testing.T) {
+ type args struct {
+ source *flag.FlagSet
+ target *flag.FlagSet
+ }
+ tests := []struct {
+ name string
+ args args
+ wantErr bool
+ }{
+ {
+ name: "Copy empty source to empty target",
+ args: args{
+ source: flag.NewFlagSet("source", flag.ContinueOnError),
+ target: flag.NewFlagSet("target", flag.ContinueOnError),
+ },
+ wantErr: false,
+ },
+ {
+ name: "Copy non-empty source to empty target",
+ args: args{
+ source: func() *flag.FlagSet {
+ fs := flag.NewFlagSet("source", flag.ContinueOnError)
+ fs.String("test-flag", "default", "test usage")
+ return fs
+ }(),
+ target: flag.NewFlagSet("target", flag.ContinueOnError),
+ },
+ wantErr: false,
+ },
+ {
+ name: "Copy source to target with existing flag",
+ args: args{
+ source: func() *flag.FlagSet {
+ fs := flag.NewFlagSet("source", flag.ContinueOnError)
+ fs.String("test-flag", "default", "test usage")
+ return fs
+ }(),
+ target: func() *flag.FlagSet {
+ fs := flag.NewFlagSet("target", flag.ContinueOnError)
+ fs.String("test-flag", "default", "test usage")
+ return fs
+ }(),
+ },
+ wantErr: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ defer func() {
+ if r := recover(); (r != nil) != tt.wantErr {
+ t.Errorf("CopyFlags() panic = %v, wantErr %v", r, tt.wantErr)
+ }
+ }()
+ CopyFlags(tt.args.source, tt.args.target)
+
+ // Verify the replicated tag
+ if !tt.wantErr {
+ tt.args.source.VisitAll(func(f *flag.Flag) {
+ if gotFlag := tt.args.target.Lookup(f.Name); gotFlag == nil || !reflect.DeepEqual(gotFlag, f) {
+ t.Errorf("CopyFlags() failed to copy flag %s", f.Name)
+ }
+ })
+ }
+ })
+ }
+}
diff --git a/test/e2e/framework/ginkgowrapper/.keep b/test/e2e/framework/ginkgowrapper/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/framework/ginkgowrapper/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/framework/ginkgowrapper/ginkgowrapper.go b/test/e2e/framework/ginkgowrapper/ginkgowrapper.go
new file mode 100644
index 0000000..814d393
--- /dev/null
+++ b/test/e2e/framework/ginkgowrapper/ginkgowrapper.go
@@ -0,0 +1,15 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package ginkgowrapper
diff --git a/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go b/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go
new file mode 100644
index 0000000..814d393
--- /dev/null
+++ b/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go
@@ -0,0 +1,15 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package ginkgowrapper
diff --git a/test/e2e/framework/helpers/.keep b/test/e2e/framework/helpers/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/framework/helpers/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/framework/helpers/chat/chat.go b/test/e2e/framework/helpers/chat/chat.go
new file mode 100644
index 0000000..0613ff5
--- /dev/null
+++ b/test/e2e/framework/helpers/chat/chat.go
@@ -0,0 +1,167 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "os/exec"
+ "path/filepath"
+)
+
+var (
+ // The default template version.
+ defaultTemplateVersion = "v1.3.0"
+)
+
+func main() {
+ // Define the URL to get the latest version
+ // latestVersionURL := "https://github.com/openimsdk/chat/releases/latest"
+ // latestVersion, err := getLatestVersion(latestVersionURL)
+ // if err != nil {
+ // fmt.Printf("Failed to get the latest version: %v\n", err)
+ // return
+ // }
+ latestVersion := defaultTemplateVersion
+
+ // Construct the download URL
+ downloadURL := fmt.Sprintf("https://github.com/openimsdk/chat/releases/download/%s/chat_Linux_x86_64.tar.gz", latestVersion)
+
+ // Set the installation directory
+ installDir := "/tmp/chat"
+
+ // Clear the installation directory before proceeding
+ err := os.RemoveAll(installDir)
+ if err != nil {
+ fmt.Printf("Failed to clear installation directory: %v\n", err)
+ return
+ }
+
+ // Create the installation directory
+ err = os.MkdirAll(installDir, 0755)
+ if err != nil {
+ fmt.Printf("Failed to create installation directory: %v\n", err)
+ return
+ }
+
+ // Download and extract OpenIM Chat to the installation directory
+ err = downloadAndExtract(downloadURL, installDir)
+ if err != nil {
+ fmt.Printf("Failed to download and extract OpenIM Chat: %v\n", err)
+ return
+ }
+
+ // Create configuration file directory
+ configDir := filepath.Join(installDir, "config")
+ err = os.MkdirAll(configDir, 0755)
+ if err != nil {
+ fmt.Printf("Failed to create configuration directory: %v\n", err)
+ return
+ }
+
+	// Download the configuration file (plain YAML; it must not be tar-extracted)
+	configURL := "https://raw.githubusercontent.com/openimsdk/chat/main/config/config.yaml"
+	resp, err := http.Get(configURL)
+	if err != nil {
+		fmt.Printf("Failed to download configuration file: %v\n", err)
+		return
+	}
+	configData, err := io.ReadAll(resp.Body)
+	resp.Body.Close()
+	if err != nil {
+		fmt.Printf("Failed to read configuration file: %v\n", err)
+		return
+	}
+	if err = os.WriteFile(filepath.Join(configDir, "config.yaml"), configData, 0644); err != nil {
+		fmt.Printf("Failed to write configuration file: %v\n", err)
+		return
+	}
+
+ // Define the processes to be started
+ cmds := []string{
+ "admin-api",
+ "admin-rpc",
+ "chat-api",
+ "chat-rpc",
+ }
+
+ // Start each process in a new goroutine
+ for _, cmd := range cmds {
+ go startProcess(filepath.Join(installDir, cmd))
+ }
+
+ // Block the main thread indefinitely
+ select {}
+}
+
+/* func getLatestVersion(url string) (string, error) {
+	resp, err := http.Get(url)
+	if err != nil {
+		return "", err
+	}
+	defer resp.Body.Close()
+
+	location := resp.Header.Get("Location")
+	if location == "" {
+		return defaultTemplateVersion, nil
+	}
+
+	// Extract the version number from the redirect URL
+	return filepath.Base(location), nil
+} */
+
+// downloadAndExtract downloads a file from a URL and extracts it to a destination directory.
+func downloadAndExtract(url, destDir string) error {
+ resp, err := http.Get(url)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+
+ if resp.StatusCode != http.StatusOK {
+ return fmt.Errorf("error downloading file, HTTP status code: %d", resp.StatusCode)
+ }
+
+ // Create the destination directory
+ err = os.MkdirAll(destDir, 0755)
+ if err != nil {
+ return err
+ }
+
+ // Define the path for the downloaded file
+ filePath := filepath.Join(destDir, "downloaded_file.tar.gz")
+ file, err := os.Create(filePath)
+ if err != nil {
+ return err
+ }
+ defer file.Close()
+
+ // Copy the downloaded file
+ _, err = io.Copy(file, resp.Body)
+ if err != nil {
+ return err
+ }
+
+ // Extract the file
+ cmd := exec.Command("tar", "xzvf", filePath, "-C", destDir)
+ cmd.Stdout = os.Stdout
+ cmd.Stderr = os.Stderr
+ return cmd.Run()
+}
+
+// startProcess starts a process and prints any errors encountered.
+func startProcess(cmdPath string) {
+ cmd := exec.Command(cmdPath)
+ cmd.Stdout = os.Stdout
+ cmd.Stderr = os.Stderr
+ if err := cmd.Run(); err != nil {
+ fmt.Printf("Failed to start process %s: %v\n", cmdPath, err)
+ }
+}
diff --git a/test/e2e/page/chat_page.go b/test/e2e/page/chat_page.go
new file mode 100644
index 0000000..d92c7ec
--- /dev/null
+++ b/test/e2e/page/chat_page.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package page
diff --git a/test/e2e/page/login_page.go b/test/e2e/page/login_page.go
new file mode 100644
index 0000000..d92c7ec
--- /dev/null
+++ b/test/e2e/page/login_page.go
@@ -0,0 +1,15 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package page
diff --git a/test/e2e/performance/.keep b/test/e2e/performance/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/performance/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/rpc/auth/.keep b/test/e2e/rpc/auth/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/rpc/auth/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/rpc/conversation/.keep b/test/e2e/rpc/conversation/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/rpc/conversation/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/rpc/friend/.keep b/test/e2e/rpc/friend/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/rpc/friend/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/rpc/group/.keep b/test/e2e/rpc/group/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/rpc/group/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/rpc/message/.keep b/test/e2e/rpc/message/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/rpc/message/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/scalability/.keep b/test/e2e/scalability/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/scalability/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/upgrade/.keep b/test/e2e/upgrade/.keep
new file mode 100644
index 0000000..4f07f1c
--- /dev/null
+++ b/test/e2e/upgrade/.keep
@@ -0,0 +1 @@
+.keep
\ No newline at end of file
diff --git a/test/e2e/web/Readme.md b/test/e2e/web/Readme.md
new file mode 100644
index 0000000..741ca51
--- /dev/null
+++ b/test/e2e/web/Readme.md
@@ -0,0 +1,2 @@
+# OpenIM Web E2E
+
diff --git a/test/jwt/main.go b/test/jwt/main.go
new file mode 100644
index 0000000..0ef8452
--- /dev/null
+++ b/test/jwt/main.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "fmt"
+
+ "github.com/golang-jwt/jwt/v4"
+)
+
+func main() {
+ rawJWT := `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJVc2VySUQiOiI4MjkzODEzMTgzIiwiUGxhdGZvcm1JRCI6NSwiZXhwIjoxNzA2NTk0MTU0LCJuYmYiOjE2OTg4MTc4NTQsImlhdCI6MTY5ODgxODE1NH0.QCJHzU07SC6iYBoFO6Zsm61TNDor2D89I4E3zg8HHHU`
+
+ // Verify the token
+ claims := &jwt.MapClaims{}
+ parsedT, err := jwt.ParseWithClaims(rawJWT, claims, func(token *jwt.Token) (any, error) {
+ // Validate the alg is HMAC signature
+ if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
+ return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
+ }
+
+ if kid, ok := token.Header["kid"].(string); ok {
+ fmt.Println("kid", kid)
+ }
+
+ return []byte("key1"), nil
+ })
+
+ if err != nil || !parsedT.Valid {
+		fmt.Println("token validation failed:", err)
+
+ return
+ }
+
+ fmt.Println("ok")
+}
diff --git a/test/readme b/test/readme
new file mode 100644
index 0000000..8c865ce
--- /dev/null
+++ b/test/readme
@@ -0,0 +1,17 @@
+## Run the Tests
+
+Read the [test docs](./docs/contrib/test.md) first.
+
+To run a single test or set of tests, you'll need the [Ginkgo](https://github.com/onsi/ginkgo) tool installed on your
+machine:
+
+```console
+go install github.com/onsi/ginkgo/ginkgo@latest
+```
+
+```shell
+ginkgo --help
+ --focus value
+ If set, ginkgo will only run specs that match this regular expression. Can be specified multiple times, values are ORed.
+
+```
diff --git a/test/stress-test-v2/README.md b/test/stress-test-v2/README.md
new file mode 100644
index 0000000..cbd4bdb
--- /dev/null
+++ b/test/stress-test-v2/README.md
@@ -0,0 +1,19 @@
+# Stress Test V2
+
+## Usage
+
+You need to set the `TestTargetUserList` variable before building.
+
+### Build
+
+```bash
+
+go build -o test/stress-test-v2/stress-test-v2 test/stress-test-v2/main.go
+```
+
+### Execute
+
+```bash
+
+test/stress-test-v2/stress-test-v2 -c config/
+```
diff --git a/test/stress-test-v2/main.go b/test/stress-test-v2/main.go
new file mode 100644
index 0000000..15e6082
--- /dev/null
+++ b/test/stress-test-v2/main.go
@@ -0,0 +1,759 @@
+package main
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "flag"
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "os/signal"
+ "sync"
+ "syscall"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/auth"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/sdkws"
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/system/program"
+)
+
+// 1. Create 100K new users
+// 2. Create 100 groups with 100K members each
+// 3. Create 1000 groups with 999 members each
+// 4. Send a message to each 100K-member group every second
+// 5. Send a message to each 999-member group every minute
+
+var (
+	// Default userID list for testing; these users must already exist.
+ TestTargetUserList = []string{
+ // "",
+ }
+ // DefaultGroupID = "" // Use default group ID for testing, need to be created.
+)
+
+var (
+ ApiAddress string
+
+ // API method
+ GetAdminToken = "/auth/get_admin_token"
+ UserCheck = "/user/account_check"
+ CreateUser = "/user/user_register"
+ ImportFriend = "/friend/import_friend"
+ InviteToGroup = "/group/invite_user_to_group"
+ GetGroupMemberInfo = "/group/get_group_members_info"
+ SendMsg = "/msg/send_msg"
+ CreateGroup = "/group/create_group"
+ GetUserToken = "/auth/user_token"
+)
+
+const (
+ MaxUser = 100000
+ Max1kUser = 1000
+ Max100KGroup = 100
+ Max999Group = 1000
+ MaxInviteUserLimit = 999
+
+ CreateUserTicker = 1 * time.Second
+ CreateGroupTicker = 1 * time.Second
+ Create100KGroupTicker = 1 * time.Second
+ Create999GroupTicker = 1 * time.Second
+ SendMsgTo100KGroupTicker = 1 * time.Second
+ SendMsgTo999GroupTicker = 1 * time.Minute
+)
+
+type BaseResp struct {
+ ErrCode int `json:"errCode"`
+ ErrMsg string `json:"errMsg"`
+ Data json.RawMessage `json:"data"`
+}
+
+type StressTest struct {
+ Conf *conf
+ AdminUserID string
+ AdminToken string
+ DefaultGroupID string
+ DefaultUserID string
+ UserCounter int
+ CreateUserCounter int
+ Create100kGroupCounter int
+ Create999GroupCounter int
+ MsgCounter int
+ CreatedUsers []string
+ CreatedGroups []string
+ Mutex sync.Mutex
+ Ctx context.Context
+ Cancel context.CancelFunc
+ HttpClient *http.Client
+ Wg sync.WaitGroup
+ Once sync.Once
+}
+
+type conf struct {
+ Share config.Share
+ Api config.API
+}
+
+func initConfig(configDir string) (*config.Share, *config.API, error) {
+ var (
+ share = &config.Share{}
+ apiConfig = &config.API{}
+ )
+
+ err := config.Load(configDir, config.ShareFileName, config.EnvPrefixMap[config.ShareFileName], share)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ err = config.Load(configDir, config.OpenIMAPICfgFileName, config.EnvPrefixMap[config.OpenIMAPICfgFileName], apiConfig)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ return share, apiConfig, nil
+}
+
+// Post Request
+func (st *StressTest) PostRequest(ctx context.Context, url string, reqbody any) ([]byte, error) {
+ // Marshal body
+ jsonBody, err := json.Marshal(reqbody)
+ if err != nil {
+ log.ZError(ctx, "Failed to marshal request body", err, "url", url, "reqbody", reqbody)
+ return nil, err
+ }
+
+ req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(jsonBody))
+ if err != nil {
+ return nil, err
+ }
+ req.Header.Set("Content-Type", "application/json")
+ req.Header.Set("operationID", st.AdminUserID)
+ if st.AdminToken != "" {
+ req.Header.Set("token", st.AdminToken)
+ }
+
+ // log.ZInfo(ctx, "Header info is ", "Content-Type", "application/json", "operationID", st.AdminUserID, "token", st.AdminToken)
+
+ resp, err := st.HttpClient.Do(req)
+ if err != nil {
+ log.ZError(ctx, "Failed to send request", err, "url", url, "reqbody", reqbody)
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ respBody, err := io.ReadAll(resp.Body)
+ if err != nil {
+ log.ZError(ctx, "Failed to read response body", err, "url", url)
+ return nil, err
+ }
+
+ var baseResp BaseResp
+ if err := json.Unmarshal(respBody, &baseResp); err != nil {
+ log.ZError(ctx, "Failed to unmarshal response body", err, "url", url, "respBody", string(respBody))
+ return nil, err
+ }
+
+ if baseResp.ErrCode != 0 {
+		err = fmt.Errorf("%s", baseResp.ErrMsg)
+ // log.ZError(ctx, "Failed to send request", err, "url", url, "reqbody", reqbody, "resp", baseResp)
+ return nil, err
+ }
+
+ return baseResp.Data, nil
+}
+
+func (st *StressTest) GetAdminToken(ctx context.Context) (string, error) {
+ req := auth.GetAdminTokenReq{
+ Secret: st.Conf.Share.Secret,
+ UserID: st.AdminUserID,
+ }
+
+ resp, err := st.PostRequest(ctx, ApiAddress+GetAdminToken, &req)
+ if err != nil {
+ return "", err
+ }
+
+ data := &auth.GetAdminTokenResp{}
+ if err := json.Unmarshal(resp, &data); err != nil {
+ return "", err
+ }
+
+ return data.Token, nil
+}
+
+func (st *StressTest) CheckUser(ctx context.Context, userIDs []string) ([]string, error) {
+ req := pbuser.AccountCheckReq{
+ CheckUserIDs: userIDs,
+ }
+
+ resp, err := st.PostRequest(ctx, ApiAddress+UserCheck, &req)
+ if err != nil {
+ return nil, err
+ }
+
+ data := &pbuser.AccountCheckResp{}
+ if err := json.Unmarshal(resp, &data); err != nil {
+ return nil, err
+ }
+
+ unRegisteredUserIDs := make([]string, 0)
+
+ for _, res := range data.Results {
+ if res.AccountStatus == constant.UnRegistered {
+ unRegisteredUserIDs = append(unRegisteredUserIDs, res.UserID)
+ }
+ }
+
+ return unRegisteredUserIDs, nil
+}
+
+func (st *StressTest) CreateUser(ctx context.Context, userID string) (string, error) {
+ user := &sdkws.UserInfo{
+ UserID: userID,
+ Nickname: userID,
+ }
+
+ req := pbuser.UserRegisterReq{
+ Users: []*sdkws.UserInfo{user},
+ }
+
+ _, err := st.PostRequest(ctx, ApiAddress+CreateUser, &req)
+ if err != nil {
+ return "", err
+ }
+
+ st.UserCounter++
+ return userID, nil
+}
+
+func (st *StressTest) CreateUserBatch(ctx context.Context, userIDs []string) error {
+ // The method can import a large number of users at once.
+ var userList []*sdkws.UserInfo
+
+ defer st.Once.Do(
+ func() {
+ st.DefaultUserID = userIDs[0]
+ fmt.Println("Default Send User Created ID:", st.DefaultUserID)
+ })
+
+ needUserIDs, err := st.CheckUser(ctx, userIDs)
+ if err != nil {
+ return err
+ }
+
+ for _, userID := range needUserIDs {
+ user := &sdkws.UserInfo{
+ UserID: userID,
+ Nickname: userID,
+ }
+ userList = append(userList, user)
+ }
+
+ req := pbuser.UserRegisterReq{
+ Users: userList,
+ }
+
+ _, err = st.PostRequest(ctx, ApiAddress+CreateUser, &req)
+ if err != nil {
+ return err
+ }
+
+ st.UserCounter += len(userList)
+ return nil
+}
+
+func (st *StressTest) GetGroupMembersInfo(ctx context.Context, groupID string, userIDs []string) ([]string, error) {
+ needInviteUserIDs := make([]string, 0)
+
+ const maxBatchSize = 500
+ if len(userIDs) > maxBatchSize {
+ for i := 0; i < len(userIDs); i += maxBatchSize {
+ end := min(i+maxBatchSize, len(userIDs))
+ batchUserIDs := userIDs[i:end]
+
+ // log.ZInfo(ctx, "Processing group members batch", "groupID", groupID, "batch", i/maxBatchSize+1,
+ // "batchUserCount", len(batchUserIDs))
+
+ // Process a single batch
+ batchReq := group.GetGroupMembersInfoReq{
+ GroupID: groupID,
+ UserIDs: batchUserIDs,
+ }
+
+ resp, err := st.PostRequest(ctx, ApiAddress+GetGroupMemberInfo, &batchReq)
+ if err != nil {
+ log.ZError(ctx, "Batch query failed", err, "batch", i/maxBatchSize+1)
+ continue
+ }
+
+ data := &group.GetGroupMembersInfoResp{}
+ if err := json.Unmarshal(resp, &data); err != nil {
+ log.ZError(ctx, "Failed to parse batch response", err, "batch", i/maxBatchSize+1)
+ continue
+ }
+
+ // Process the batch results
+ existingMembers := make(map[string]bool)
+ for _, member := range data.Members {
+ existingMembers[member.UserID] = true
+ }
+
+ for _, userID := range batchUserIDs {
+ if !existingMembers[userID] {
+ needInviteUserIDs = append(needInviteUserIDs, userID)
+ }
+ }
+ }
+
+ return needInviteUserIDs, nil
+ }
+
+ req := group.GetGroupMembersInfoReq{
+ GroupID: groupID,
+ UserIDs: userIDs,
+ }
+
+ resp, err := st.PostRequest(ctx, ApiAddress+GetGroupMemberInfo, &req)
+ if err != nil {
+ return nil, err
+ }
+
+ data := &group.GetGroupMembersInfoResp{}
+ if err := json.Unmarshal(resp, &data); err != nil {
+ return nil, err
+ }
+
+ existingMembers := make(map[string]bool)
+ for _, member := range data.Members {
+ existingMembers[member.UserID] = true
+ }
+
+ for _, userID := range userIDs {
+ if !existingMembers[userID] {
+ needInviteUserIDs = append(needInviteUserIDs, userID)
+ }
+ }
+
+ return needInviteUserIDs, nil
+}
+
+func (st *StressTest) InviteToGroup(ctx context.Context, groupID string, userIDs []string) error {
+ req := group.InviteUserToGroupReq{
+ GroupID: groupID,
+ InvitedUserIDs: userIDs,
+ }
+ _, err := st.PostRequest(ctx, ApiAddress+InviteToGroup, &req)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (st *StressTest) SendMsg(ctx context.Context, userID string, groupID string) error {
+ contentObj := map[string]any{
+ // "content": fmt.Sprintf("index %d. The current time is %s", st.MsgCounter, time.Now().Format("2006-01-02 15:04:05.000")),
+ "content": fmt.Sprintf("The current time is %s", time.Now().Format("2006-01-02 15:04:05.000")),
+ }
+
+ req := &apistruct.SendMsgReq{
+ SendMsg: apistruct.SendMsg{
+ SendID: userID,
+ SenderNickname: userID,
+ GroupID: groupID,
+ ContentType: constant.Text,
+ SessionType: constant.ReadGroupChatType,
+ Content: contentObj,
+ },
+ }
+
+ _, err := st.PostRequest(ctx, ApiAddress+SendMsg, &req)
+ if err != nil {
+ log.ZError(ctx, "Failed to send message", err, "userID", userID, "req", &req)
+ return err
+ }
+
+ st.MsgCounter++
+
+ return nil
+}
+
+// Max userIDs number is 1000
+func (st *StressTest) CreateGroup(ctx context.Context, groupID string, userID string, userIDsList []string) (string, error) {
+ groupInfo := &sdkws.GroupInfo{
+ GroupID: groupID,
+ GroupName: groupID,
+ GroupType: constant.WorkingGroup,
+ }
+
+ req := group.CreateGroupReq{
+ OwnerUserID: userID,
+ MemberUserIDs: userIDsList,
+ GroupInfo: groupInfo,
+ }
+
+ resp := group.CreateGroupResp{}
+
+ response, err := st.PostRequest(ctx, ApiAddress+CreateGroup, &req)
+ if err != nil {
+ return "", err
+ }
+
+ if err := json.Unmarshal(response, &resp); err != nil {
+ return "", err
+ }
+
+ // st.GroupCounter++
+
+ return resp.GroupInfo.GroupID, nil
+}
+
+func main() {
+ var configPath string
+ // defaultConfigDir := filepath.Join("..", "..", "..", "..", "..", "config")
+ // flag.StringVar(&configPath, "c", defaultConfigDir, "config path")
+ flag.StringVar(&configPath, "c", "", "config path")
+ flag.Parse()
+
+ if configPath == "" {
+ _, _ = fmt.Fprintln(os.Stderr, "config path is empty")
+ os.Exit(1)
+ return
+ }
+
+ fmt.Printf(" Config Path: %s\n", configPath)
+
+ share, apiConfig, err := initConfig(configPath)
+ if err != nil {
+ program.ExitWithError(err)
+ return
+ }
+
+	ApiAddress = fmt.Sprintf("http://127.0.0.1:%s", fmt.Sprint(apiConfig.Api.Ports[0]))
+
+ ctx, cancel := context.WithCancel(context.Background())
+ // ch := make(chan struct{})
+
+ st := &StressTest{
+ Conf: &conf{
+ Share: *share,
+ Api: *apiConfig,
+ },
+ AdminUserID: share.IMAdminUser.UserIDs[0],
+ Ctx: ctx,
+ Cancel: cancel,
+ HttpClient: &http.Client{
+ Timeout: 50 * time.Second,
+ },
+ }
+
+ c := make(chan os.Signal, 1)
+ signal.Notify(c, os.Interrupt, syscall.SIGTERM)
+ go func() {
+ <-c
+ fmt.Println("\nReceived stop signal, stopping...")
+
+ go func() {
+ // time.Sleep(5 * time.Second)
+ fmt.Println("Force exit")
+ os.Exit(0)
+ }()
+
+ st.Cancel()
+ }()
+
+ token, err := st.GetAdminToken(st.Ctx)
+ if err != nil {
+ log.ZError(ctx, "Get Admin Token failed.", err, "AdminUserID", st.AdminUserID)
+ }
+
+ st.AdminToken = token
+ fmt.Println("Admin Token:", st.AdminToken)
+ fmt.Println("ApiAddress:", ApiAddress)
+
+ for i := range MaxUser {
+ userID := fmt.Sprintf("v2_StressTest_User_%d", i)
+ st.CreatedUsers = append(st.CreatedUsers, userID)
+ st.CreateUserCounter++
+ }
+
+ // err = st.CreateUserBatch(st.Ctx, st.CreatedUsers)
+ // if err != nil {
+ // log.ZError(ctx, "Create user failed.", err)
+ // }
+
+ const batchSize = 1000
+ totalUsers := len(st.CreatedUsers)
+ successCount := 0
+
+ if st.DefaultUserID == "" && len(st.CreatedUsers) > 0 {
+ st.DefaultUserID = st.CreatedUsers[0]
+ }
+
+ for i := 0; i < totalUsers; i += batchSize {
+ end := min(i+batchSize, totalUsers)
+
+ userBatch := st.CreatedUsers[i:end]
+ log.ZInfo(st.Ctx, "Creating user batch", "batch", i/batchSize+1, "count", len(userBatch))
+
+ err = st.CreateUserBatch(st.Ctx, userBatch)
+ if err != nil {
+ log.ZError(st.Ctx, "Batch user creation failed", err, "batch", i/batchSize+1)
+ } else {
+ successCount += len(userBatch)
+ log.ZInfo(st.Ctx, "Batch user creation succeeded", "batch", i/batchSize+1,
+ "progress", fmt.Sprintf("%d/%d", successCount, totalUsers))
+ }
+ }
+
+ // Execute create 100k group
+ st.Wg.Add(1)
+ go func() {
+ defer st.Wg.Done()
+
+ create100kGroupTicker := time.NewTicker(Create100KGroupTicker)
+ defer create100kGroupTicker.Stop()
+
+ for i := range Max100KGroup {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Create 100K Group")
+ return
+
+ case <-create100kGroupTicker.C:
+ // Create 100K groups
+ st.Wg.Add(1)
+ go func(idx int) {
+ startTime := time.Now()
+ defer func() {
+ elapsedTime := time.Since(startTime)
+ log.ZInfo(st.Ctx, "100K group creation completed",
+ "groupID", fmt.Sprintf("v2_StressTest_Group_100K_%d", idx),
+ "index", idx,
+ "duration", elapsedTime.String())
+ }()
+
+ defer st.Wg.Done()
+ defer func() {
+ st.Mutex.Lock()
+ st.Create100kGroupCounter++
+ st.Mutex.Unlock()
+ }()
+
+ groupID := fmt.Sprintf("v2_StressTest_Group_100K_%d", idx)
+
+				if _, err := st.CreateGroup(st.Ctx, groupID, st.DefaultUserID, TestTargetUserList); err != nil { // local err avoids racing on main's err
+ log.ZError(st.Ctx, "Create group failed.", err)
+ // continue
+ }
+
+ for i := 0; i <= MaxUser/MaxInviteUserLimit; i++ {
+ InviteUserIDs := make([]string, 0)
+ // ensure TargetUserList is in group
+ InviteUserIDs = append(InviteUserIDs, TestTargetUserList...)
+
+ startIdx := max(i*MaxInviteUserLimit, 1)
+ endIdx := min((i+1)*MaxInviteUserLimit, MaxUser)
+
+ for j := startIdx; j < endIdx; j++ {
+ userCreatedID := fmt.Sprintf("v2_StressTest_User_%d", j)
+ InviteUserIDs = append(InviteUserIDs, userCreatedID)
+ }
+
+ if len(InviteUserIDs) == 0 {
+ // log.ZWarn(st.Ctx, "InviteUserIDs is empty", nil, "groupID", groupID)
+ continue
+ }
+
+ InviteUserIDs, err := st.GetGroupMembersInfo(ctx, groupID, InviteUserIDs)
+ if err != nil {
+ log.ZError(st.Ctx, "GetGroupMembersInfo failed.", err, "groupID", groupID)
+ continue
+ }
+
+ if len(InviteUserIDs) == 0 {
+ // log.ZWarn(st.Ctx, "InviteUserIDs is empty", nil, "groupID", groupID)
+ continue
+ }
+
+ // Invite To Group
+ if err = st.InviteToGroup(st.Ctx, groupID, InviteUserIDs); err != nil {
+ log.ZError(st.Ctx, "Invite To Group failed.", err, "UserID", InviteUserIDs)
+ continue
+ // os.Exit(1)
+ // return
+ }
+ }
+ }(i)
+ }
+ }
+ }()
+
+ // create 999 groups
+ st.Wg.Add(1)
+ go func() {
+ defer st.Wg.Done()
+
+ create999GroupTicker := time.NewTicker(Create999GroupTicker)
+ defer create999GroupTicker.Stop()
+
+ for i := range Max999Group {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Create 999 Group")
+ return
+
+ case <-create999GroupTicker.C:
+ // Create 999 groups
+ st.Wg.Add(1)
+ go func(idx int) {
+ startTime := time.Now()
+ defer func() {
+ elapsedTime := time.Since(startTime)
+ log.ZInfo(st.Ctx, "999 group creation completed",
+ "groupID", fmt.Sprintf("v2_StressTest_Group_1K_%d", idx),
+ "index", idx,
+ "duration", elapsedTime.String())
+ }()
+
+ defer st.Wg.Done()
+ defer func() {
+ st.Mutex.Lock()
+ st.Create999GroupCounter++
+ st.Mutex.Unlock()
+ }()
+
+ groupID := fmt.Sprintf("v2_StressTest_Group_1K_%d", idx)
+
+				if _, err := st.CreateGroup(st.Ctx, groupID, st.DefaultUserID, TestTargetUserList); err != nil { // local err avoids racing on main's err
+ log.ZError(st.Ctx, "Create group failed.", err)
+ // continue
+ }
+ for i := 0; i <= Max1kUser/MaxInviteUserLimit; i++ {
+ InviteUserIDs := make([]string, 0)
+ // ensure TargetUserList is in group
+ InviteUserIDs = append(InviteUserIDs, TestTargetUserList...)
+
+ startIdx := max(i*MaxInviteUserLimit, 1)
+ endIdx := min((i+1)*MaxInviteUserLimit, Max1kUser)
+
+ for j := startIdx; j < endIdx; j++ {
+ userCreatedID := fmt.Sprintf("v2_StressTest_User_%d", j)
+ InviteUserIDs = append(InviteUserIDs, userCreatedID)
+ }
+
+ if len(InviteUserIDs) == 0 {
+ // log.ZWarn(st.Ctx, "InviteUserIDs is empty", nil, "groupID", groupID)
+ continue
+ }
+
+ InviteUserIDs, err := st.GetGroupMembersInfo(ctx, groupID, InviteUserIDs)
+ if err != nil {
+ log.ZError(st.Ctx, "GetGroupMembersInfo failed.", err, "groupID", groupID)
+ continue
+ }
+
+ if len(InviteUserIDs) == 0 {
+ // log.ZWarn(st.Ctx, "InviteUserIDs is empty", nil, "groupID", groupID)
+ continue
+ }
+
+ // Invite To Group
+ if err = st.InviteToGroup(st.Ctx, groupID, InviteUserIDs); err != nil {
+ log.ZError(st.Ctx, "Invite To Group failed.", err, "UserID", InviteUserIDs)
+ continue
+ // os.Exit(1)
+ // return
+ }
+ }
+ }(i)
+ }
+ }
+ }()
+
+ // Send message to 100K groups
+ st.Wg.Wait()
+ fmt.Println("All groups created successfully, starting to send messages...")
+ log.ZInfo(ctx, "All groups created successfully, starting to send messages...")
+
+ var groups100K []string
+ var groups999 []string
+
+ for i := range Max100KGroup {
+ groupID := fmt.Sprintf("v2_StressTest_Group_100K_%d", i)
+ groups100K = append(groups100K, groupID)
+ }
+
+ for i := range Max999Group {
+ groupID := fmt.Sprintf("v2_StressTest_Group_1K_%d", i)
+ groups999 = append(groups999, groupID)
+ }
+
+ send100kGroupLimiter := make(chan struct{}, 20)
+ send999GroupLimiter := make(chan struct{}, 100)
+
+ // execute Send message to 100K groups
+ go func() {
+ ticker := time.NewTicker(SendMsgTo100KGroupTicker)
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Send Message to 100K Group")
+ return
+
+ case <-ticker.C:
+ // Send message to 100K groups
+ for _, groupID := range groups100K {
+ send100kGroupLimiter <- struct{}{}
+ go func(groupID string) {
+ defer func() { <-send100kGroupLimiter }()
+ if err := st.SendMsg(st.Ctx, st.DefaultUserID, groupID); err != nil {
+ log.ZError(st.Ctx, "Send message to 100K group failed.", err)
+ }
+ }(groupID)
+ }
+ // log.ZInfo(st.Ctx, "Send message to 100K groups successfully.")
+ }
+ }
+ }()
+
+ // execute Send message to 999 groups
+ go func() {
+ ticker := time.NewTicker(SendMsgTo999GroupTicker)
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Send Message to 999 Group")
+ return
+
+ case <-ticker.C:
+ // Send message to 999 groups
+ for _, groupID := range groups999 {
+ send999GroupLimiter <- struct{}{}
+ go func(groupID string) {
+ defer func() { <-send999GroupLimiter }()
+
+ if err := st.SendMsg(st.Ctx, st.DefaultUserID, groupID); err != nil {
+ log.ZError(st.Ctx, "Send message to 999 group failed.", err)
+ }
+ }(groupID)
+ }
+ // log.ZInfo(st.Ctx, "Send message to 999 groups successfully.")
+ }
+ }
+ }()
+
+ <-st.Ctx.Done()
+ fmt.Println("Received signal to exit, shutting down...")
+}
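The user-creation loop in `main` above drives `CreateUserBatch` in slices of 1000. That slicing can be sketched as a small stdlib-only helper; the `chunk` name is illustrative, not part of the tool:

```go
package main

import "fmt"

// chunk splits ids into consecutive batches of at most size elements,
// mirroring how the 100K user list is fed to CreateUserBatch 1000 at a time.
func chunk(ids []string, size int) [][]string {
	var batches [][]string
	for i := 0; i < len(ids); i += size {
		end := i + size
		if end > len(ids) {
			end = len(ids)
		}
		batches = append(batches, ids[i:end])
	}
	return batches
}

func main() {
	ids := make([]string, 2500)
	for i := range ids {
		ids[i] = fmt.Sprintf("v2_StressTest_User_%d", i)
	}
	for i, b := range chunk(ids, 1000) {
		fmt.Printf("batch %d: %d users\n", i+1, len(b)) // batches of 1000, 1000, 500
	}
}
```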
diff --git a/test/stress-test/README.md b/test/stress-test/README.md
new file mode 100644
index 0000000..cba93e2
--- /dev/null
+++ b/test/stress-test/README.md
@@ -0,0 +1,19 @@
+# Stress Test
+
+## Usage
+
+You need to set the `TestTargetUserList` and `DefaultGroupID` variables before building.
+
+### Build
+
+```bash
+
+go build -o test/stress-test/stress-test test/stress-test/main.go
+```
+
+### Execute
+
+```bash
+
+test/stress-test/stress-test -c config/
+```
diff --git a/test/stress-test/main.go b/test/stress-test/main.go
new file mode 100755
index 0000000..fb126db
--- /dev/null
+++ b/test/stress-test/main.go
@@ -0,0 +1,458 @@
+package main
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "flag"
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "os/signal"
+ "sync"
+ "syscall"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/apistruct"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/protocol/auth"
+ "git.imall.cloud/openim/protocol/constant"
+ "git.imall.cloud/openim/protocol/group"
+ "git.imall.cloud/openim/protocol/relation"
+ "git.imall.cloud/openim/protocol/sdkws"
+ pbuser "git.imall.cloud/openim/protocol/user"
+ "github.com/openimsdk/tools/log"
+ "github.com/openimsdk/tools/system/program"
+)
+
+/*
+ 1. Create one user every minute
+ 2. Import target users as friends
+ 3. Add users to the default group
+ 4. Send a message to the default group every second, containing index and current timestamp
+ 5. Create a new group every minute and invite target users to join
+*/
+
+// !!! ATTENTION: These variables must be set before running!
+var (
+	// Default userID list for testing; these users must already exist.
+ TestTargetUserList = []string{
+ "",
+ }
+ DefaultGroupID = "" // Use default group ID for testing, need to be created.
+)
+
+var (
+ ApiAddress string
+
+ // API method
+ GetAdminToken = "/auth/get_admin_token"
+ CreateUser = "/user/user_register"
+ ImportFriend = "/friend/import_friend"
+ InviteToGroup = "/group/invite_user_to_group"
+ SendMsg = "/msg/send_msg"
+ CreateGroup = "/group/create_group"
+ GetUserToken = "/auth/user_token"
+)
+
+const (
+ MaxUser = 10000
+ MaxGroup = 1000
+
+ CreateUserTicker = 1 * time.Minute // Ticker is 1min in create user
+ SendMessageTicker = 1 * time.Second // Ticker is 1s in send message
+ CreateGroupTicker = 1 * time.Minute
+)
+
+type BaseResp struct {
+ ErrCode int `json:"errCode"`
+ ErrMsg string `json:"errMsg"`
+ Data json.RawMessage `json:"data"`
+}
+
+type StressTest struct {
+ Conf *conf
+ AdminUserID string
+ AdminToken string
+ DefaultGroupID string
+ DefaultUserID string
+ UserCounter int
+ GroupCounter int
+ MsgCounter int
+ CreatedUsers []string
+ CreatedGroups []string
+ Mutex sync.Mutex
+ Ctx context.Context
+ Cancel context.CancelFunc
+ HttpClient *http.Client
+ Wg sync.WaitGroup
+ Once sync.Once
+}
+
+type conf struct {
+ Share config.Share
+ Api config.API
+}
+
+func initConfig(configDir string) (*config.Share, *config.API, error) {
+ var (
+ share = &config.Share{}
+ apiConfig = &config.API{}
+ )
+
+ err := config.Load(configDir, config.ShareFileName, config.EnvPrefixMap[config.ShareFileName], share)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ err = config.Load(configDir, config.OpenIMAPICfgFileName, config.EnvPrefixMap[config.OpenIMAPICfgFileName], apiConfig)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ return share, apiConfig, nil
+}
+
+// Post Request
+func (st *StressTest) PostRequest(ctx context.Context, url string, reqbody any) ([]byte, error) {
+ // Marshal body
+ jsonBody, err := json.Marshal(reqbody)
+ if err != nil {
+ log.ZError(ctx, "Failed to marshal request body", err, "url", url, "reqbody", reqbody)
+ return nil, err
+ }
+
+ req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(jsonBody))
+ if err != nil {
+ return nil, err
+ }
+ req.Header.Set("Content-Type", "application/json")
+ req.Header.Set("operationID", st.AdminUserID)
+ if st.AdminToken != "" {
+ req.Header.Set("token", st.AdminToken)
+ }
+
+ // log.ZInfo(ctx, "Header info is ", "Content-Type", "application/json", "operationID", st.AdminUserID, "token", st.AdminToken)
+
+ resp, err := st.HttpClient.Do(req)
+ if err != nil {
+ log.ZError(ctx, "Failed to send request", err, "url", url, "reqbody", reqbody)
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ respBody, err := io.ReadAll(resp.Body)
+ if err != nil {
+ log.ZError(ctx, "Failed to read response body", err, "url", url)
+ return nil, err
+ }
+
+ var baseResp BaseResp
+ if err := json.Unmarshal(respBody, &baseResp); err != nil {
+ log.ZError(ctx, "Failed to unmarshal response body", err, "url", url, "respBody", string(respBody))
+ return nil, err
+ }
+
+ if baseResp.ErrCode != 0 {
+		err = fmt.Errorf("%s", baseResp.ErrMsg)
+ log.ZError(ctx, "Failed to send request", err, "url", url, "reqbody", reqbody, "resp", baseResp)
+ return nil, err
+ }
+
+ return baseResp.Data, nil
+}
+
+func (st *StressTest) GetAdminToken(ctx context.Context) (string, error) {
+ req := auth.GetAdminTokenReq{
+ Secret: st.Conf.Share.Secret,
+ UserID: st.AdminUserID,
+ }
+
+ resp, err := st.PostRequest(ctx, ApiAddress+GetAdminToken, &req)
+ if err != nil {
+ return "", err
+ }
+
+ data := &auth.GetAdminTokenResp{}
+ if err := json.Unmarshal(resp, &data); err != nil {
+ return "", err
+ }
+
+ return data.Token, nil
+}
+
+func (st *StressTest) CreateUser(ctx context.Context, userID string) (string, error) {
+ user := &sdkws.UserInfo{
+ UserID: userID,
+ Nickname: userID,
+ }
+
+ req := pbuser.UserRegisterReq{
+ Users: []*sdkws.UserInfo{user},
+ }
+
+ _, err := st.PostRequest(ctx, ApiAddress+CreateUser, &req)
+ if err != nil {
+ return "", err
+ }
+
+ st.UserCounter++
+ return userID, nil
+}
+
+func (st *StressTest) ImportFriend(ctx context.Context, userID string) error {
+ req := relation.ImportFriendReq{
+ OwnerUserID: userID,
+ FriendUserIDs: TestTargetUserList,
+ }
+
+ _, err := st.PostRequest(ctx, ApiAddress+ImportFriend, &req)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (st *StressTest) InviteToGroup(ctx context.Context, userID string) error {
+ req := group.InviteUserToGroupReq{
+ GroupID: st.DefaultGroupID,
+ InvitedUserIDs: []string{userID},
+ }
+ _, err := st.PostRequest(ctx, ApiAddress+InviteToGroup, &req)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (st *StressTest) SendMsg(ctx context.Context, userID string) error {
+ contentObj := map[string]any{
+ "content": fmt.Sprintf("index %d. The current time is %s", st.MsgCounter, time.Now().Format("2006-01-02 15:04:05.000")),
+ }
+
+ req := &apistruct.SendMsgReq{
+ SendMsg: apistruct.SendMsg{
+ SendID: userID,
+ SenderNickname: userID,
+ GroupID: st.DefaultGroupID,
+ ContentType: constant.Text,
+ SessionType: constant.ReadGroupChatType,
+ Content: contentObj,
+ },
+ }
+
+ _, err := st.PostRequest(ctx, ApiAddress+SendMsg, &req)
+ if err != nil {
+ log.ZError(ctx, "Failed to send message", err, "userID", userID, "req", &req)
+ return err
+ }
+
+ st.MsgCounter++
+
+ return nil
+}
+
+func (st *StressTest) CreateGroup(ctx context.Context, userID string) (string, error) {
+ groupID := fmt.Sprintf("StressTestGroup_%d_%s", st.GroupCounter, time.Now().Format("20060102150405"))
+
+ groupInfo := &sdkws.GroupInfo{
+ GroupID: groupID,
+ GroupName: groupID,
+ GroupType: constant.WorkingGroup,
+ }
+
+ req := group.CreateGroupReq{
+ OwnerUserID: userID,
+ MemberUserIDs: TestTargetUserList,
+ GroupInfo: groupInfo,
+ }
+
+ resp := group.CreateGroupResp{}
+
+ response, err := st.PostRequest(ctx, ApiAddress+CreateGroup, &req)
+ if err != nil {
+ return "", err
+ }
+
+ if err := json.Unmarshal(response, &resp); err != nil {
+ return "", err
+ }
+
+ st.GroupCounter++
+
+ return resp.GroupInfo.GroupID, nil
+}
+
+func main() {
+ var configPath string
+ // defaultConfigDir := filepath.Join("..", "..", "..", "..", "..", "config")
+ // flag.StringVar(&configPath, "c", defaultConfigDir, "config path")
+ flag.StringVar(&configPath, "c", "", "config path")
+ flag.Parse()
+
+ if configPath == "" {
+ _, _ = fmt.Fprintln(os.Stderr, "config path is empty")
+ os.Exit(1)
+ return
+ }
+
+ fmt.Printf(" Config Path: %s\n", configPath)
+
+ share, apiConfig, err := initConfig(configPath)
+ if err != nil {
+ program.ExitWithError(err)
+ return
+ }
+
+	ApiAddress = fmt.Sprintf("http://127.0.0.1:%s", fmt.Sprint(apiConfig.Api.Ports[0]))
+
+ ctx, cancel := context.WithCancel(context.Background())
+ ch := make(chan struct{})
+
+ defer cancel()
+
+ st := &StressTest{
+ Conf: &conf{
+ Share: *share,
+ Api: *apiConfig,
+ },
+ AdminUserID: share.IMAdminUser.UserIDs[0],
+ Ctx: ctx,
+ Cancel: cancel,
+ HttpClient: &http.Client{
+ Timeout: 50 * time.Second,
+ },
+ }
+
+ c := make(chan os.Signal, 1)
+ signal.Notify(c, os.Interrupt, syscall.SIGTERM)
+ go func() {
+ <-c
+ fmt.Println("\nReceived stop signal, stopping...")
+
+ select {
+ case <-ch:
+ default:
+ close(ch)
+ }
+
+ st.Cancel()
+ }()
+
+ token, err := st.GetAdminToken(st.Ctx)
+ if err != nil {
+ log.ZError(ctx, "Get Admin Token failed.", err, "AdminUserID", st.AdminUserID)
+ }
+
+ st.AdminToken = token
+ fmt.Println("Admin Token:", st.AdminToken)
+ fmt.Println("ApiAddress:", ApiAddress)
+
+ st.DefaultGroupID = DefaultGroupID
+
+ st.Wg.Add(1)
+ go func() {
+ defer st.Wg.Done()
+
+ ticker := time.NewTicker(CreateUserTicker)
+ defer ticker.Stop()
+
+ for st.UserCounter < MaxUser {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Create user", "reason", "context done")
+ return
+
+ case <-ticker.C:
+ // Create User
+ userID := fmt.Sprintf("%d_Stresstest_%s", st.UserCounter, time.Now().Format("0102150405"))
+
+ userCreatedID, err := st.CreateUser(st.Ctx, userID)
+ if err != nil {
+ log.ZError(st.Ctx, "Create User failed.", err, "UserID", userID)
+ os.Exit(1)
+ return
+ }
+ // fmt.Println("User Created ID:", userCreatedID)
+
+ // Import Friend
+ if err = st.ImportFriend(st.Ctx, userCreatedID); err != nil {
+ log.ZError(st.Ctx, "Import Friend failed.", err, "UserID", userCreatedID)
+ os.Exit(1)
+ return
+ }
+ // Invite To Group
+ if err = st.InviteToGroup(st.Ctx, userCreatedID); err != nil {
+ log.ZError(st.Ctx, "Invite To Group failed.", err, "UserID", userCreatedID)
+ os.Exit(1)
+ return
+ }
+
+ st.Once.Do(func() {
+ st.DefaultUserID = userCreatedID
+ fmt.Println("Default Send User Created ID:", userCreatedID)
+ close(ch)
+ })
+ }
+ }
+ }()
+
+ st.Wg.Add(1)
+ go func() {
+ defer st.Wg.Done()
+
+ ticker := time.NewTicker(SendMessageTicker)
+ defer ticker.Stop()
+ <-ch
+
+ for {
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Send message", "reason", "context done")
+ return
+
+ case <-ticker.C:
+ // Send Message
+				if err := st.SendMsg(st.Ctx, st.DefaultUserID); err != nil { // local err avoids racing on main's err
+ log.ZError(st.Ctx, "Send Message failed.", err, "UserID", st.DefaultUserID)
+ continue
+ }
+ }
+ }
+ }()
+
+ st.Wg.Add(1)
+ go func() {
+ defer st.Wg.Done()
+
+ ticker := time.NewTicker(CreateGroupTicker)
+ defer ticker.Stop()
+ <-ch
+
+ for st.GroupCounter < MaxGroup {
+
+ select {
+ case <-st.Ctx.Done():
+ log.ZInfo(st.Ctx, "Stop Create Group", "reason", "context done")
+ return
+
+ case <-ticker.C:
+
+ // Create Group
+ _, err := st.CreateGroup(st.Ctx, st.DefaultUserID)
+ if err != nil {
+ log.ZError(st.Ctx, "Create Group failed.", err, "UserID", st.DefaultUserID)
+ os.Exit(1)
+ return
+ }
+
+ // fmt.Println("Group Created ID:", groupID)
+ }
+ }
+ }()
+
+ st.Wg.Wait()
+}
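The sender and group-creation goroutines above block on `<-ch` until the first user exists, and `sync.Once` guarantees `close(ch)` runs exactly once. That gate pattern can be sketched in isolation; `runGated` is a hypothetical name for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// runGated mirrors the pattern in main above: worker goroutines block on a
// channel that the producer closes exactly once (via sync.Once) after the
// first item is produced. It returns how many workers got past the gate.
func runGated(workers, items int) int {
	ready := make(chan struct{})
	var once sync.Once
	var wg sync.WaitGroup
	passed := make(chan struct{}, workers)

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ready // blocked until the first item is produced
			passed <- struct{}{}
		}()
	}

	for i := 0; i < items; i++ {
		once.Do(func() { close(ready) }) // safe: Once guarantees a single close
	}

	wg.Wait()
	close(passed)
	return len(passed)
}

func main() {
	fmt.Println("workers released:", runGated(3, 10)) // → workers released: 3
}
```

Closing the channel (rather than sending on it) is what lets every waiting goroutine proceed at once.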
diff --git a/test/testdata/README.md b/test/testdata/README.md
new file mode 100644
index 0000000..b9dfea6
--- /dev/null
+++ b/test/testdata/README.md
@@ -0,0 +1,64 @@
+
+# Test Data for OpenIM Server
+
+This directory (`testdata`) contains various JSON formatted data files that are used for testing the OpenIM Server.
+
+## Structure
+
+```bash
+testdata/
+│
+├── README.md             # Describes the purpose of each subdirectory and file
+│
+├── db/                   # Mock database contents
+│   ├── users.json        # Mock user data
+│   └── messages.json     # Mock message data
+│
+├── requests/             # Mock client request payloads
+│   ├── login.json        # Mock login request
+│   ├── register.json     # Mock registration request
+│   └── sendMessage.json  # Mock send-message request
+│
+└── responses/            # Expected server responses
+    ├── login.json        # Mock login response
+    ├── register.json     # Mock registration response
+    └── sendMessage.json  # Mock send-message response
+```
+
+Here is an overview of what each subdirectory or file represents:
+
+- `db/` - This directory contains mock data mimicking the actual database contents.
+ - `users.json` - Represents a list of users in the system. Each entry contains user-specific information such as user ID, username, password hash, etc.
+ - `messages.json` - Contains a list of messages exchanged between users. Each message entry includes the sender's and receiver's user IDs, message content, timestamp, etc.
+- `requests/` - This directory contains mock requests that a client might send to the server.
+ - `login.json` - Represents a user login request. It includes fields such as username and password.
+ - `register.json` - Mimics a user registration request. Contains details such as username, password, email, etc.
+  - `send-message.json` - Simulates a request from one user to send a message to another.
+- `responses/` - This directory holds the expected server responses for the respective requests.
+ - `login.json` - Represents a successful login response from the server. It typically includes a session token and user-specific information.
+ - `register.json` - Simulates a successful registration response from the server, usually containing the new user's ID, username, etc.
+ - `sendMessage.json` - Depicts a successful message sending response from the server, confirming the delivery of the message.
+
+## JSON Format
+
+All the data files in this directory are in JSON format. JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write and easy for machines to parse and generate.
+
+Here is a simple example of what a JSON file might look like:
+
+```json
+{
+  "users": [
+    {
+      "id": 1,
+      "username": "user1",
+      "password": "password1"
+    },
+    {
+      "id": 2,
+      "username": "user2",
+      "password": "password2"
+    }
+  ]
+}
+```
+
+In this example, "users" is an array of user objects. Each user object has an "id", "username", and "password".
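These fixtures are typically read and decoded inside tests. Below is a minimal sketch of such a helper; the `LoginRequest` struct and its `username`/`password` fields are illustrative assumptions, not the server's actual request types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LoginRequest mirrors the illustrative fields a requests/login.json
// fixture might contain; the real request type may differ.
type LoginRequest struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// loadFixture decodes raw JSON fixture bytes into out.
func loadFixture(data []byte, out any) error {
	return json.Unmarshal(data, out)
}

func main() {
	// In a real test this would come from os.ReadFile("testdata/requests/login.json").
	raw := []byte(`{"username": "user1", "password": "password1"}`)
	var req LoginRequest
	if err := loadFixture(raw, &req); err != nil {
		panic(err)
	}
	fmt.Println(req.Username) // prints "user1"
}
```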
diff --git a/test/testdata/db/messages.json b/test/testdata/db/messages.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/db/users.json b/test/testdata/db/users.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/requests/login.json b/test/testdata/requests/login.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/requests/register.json b/test/testdata/requests/register.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/requests/send-message.json b/test/testdata/requests/send-message.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/responses/login.json b/test/testdata/responses/login.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/responses/register.json b/test/testdata/responses/register.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/testdata/responses/sendMessage.json b/test/testdata/responses/sendMessage.json
new file mode 100644
index 0000000..e69de29
diff --git a/test/webhook/msgmodify/main.go b/test/webhook/msgmodify/main.go
new file mode 100644
index 0000000..c73b6f3
--- /dev/null
+++ b/test/webhook/msgmodify/main.go
@@ -0,0 +1,65 @@
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+ "strings"
+
+ cbapi "git.imall.cloud/openim/open-im-server-deploy/pkg/callbackstruct"
+ "git.imall.cloud/openim/protocol/constant"
+ "github.com/gin-gonic/gin"
+)
+
+func main() {
+ g := gin.Default()
+ g.POST("/callbackExample/callbackBeforeMsgModifyCommand", toGin(handlerMsg))
+ if err := g.Run(":10006"); err != nil {
+ panic(err)
+ }
+}
+
+func toGin[R any](fn func(c *gin.Context, req *R)) gin.HandlerFunc {
+ return func(c *gin.Context) {
+ body, err := io.ReadAll(c.Request.Body)
+ if err != nil {
+ c.String(http.StatusInternalServerError, err.Error())
+ return
+ }
+ fmt.Printf("HTTP %s %s %s\n", c.Request.Method, c.Request.URL, body)
+ var req R
+ if err := json.Unmarshal(body, &req); err != nil {
+ c.String(http.StatusInternalServerError, err.Error())
+ return
+ }
+ fn(c, &req)
+ }
+}
+
+func handlerMsg(c *gin.Context, req *cbapi.CallbackMsgModifyCommandReq) {
+ var resp cbapi.CallbackMsgModifyCommandResp
+ if req.ContentType != constant.Text {
+ c.JSON(http.StatusOK, &resp)
+ return
+ }
+ var textElem struct {
+ Content string `json:"content"`
+ }
+ if err := json.Unmarshal([]byte(req.Content), &textElem); err != nil {
+ c.String(http.StatusInternalServerError, err.Error())
+ return
+ }
+ const word = "xxx"
+ if strings.Contains(textElem.Content, word) {
+ textElem.Content = strings.ReplaceAll(textElem.Content, word, strings.Repeat("*", len(word)))
+ content, err := json.Marshal(&textElem)
+ if err != nil {
+ c.String(http.StatusInternalServerError, err.Error())
+ return
+ }
+ tmp := string(content)
+ resp.Content = &tmp
+ }
+ c.JSON(http.StatusOK, &resp)
+}
diff --git a/tools/README.md b/tools/README.md
new file mode 100644
index 0000000..bf1e289
--- /dev/null
+++ b/tools/README.md
@@ -0,0 +1,25 @@
+# Notes about the Go workspace
+
+As OpenIM uses Go 1.18's [workspace feature](https://go.dev/doc/tutorial/workspaces), whenever you add a new module you need to run `go work use -r .` in the repository root to keep the workspace in sync.
+
+### Create a new tool
+
+1. Create a directory for your tool under `/tools` and `cd` into it.
+2. Initialize the Go module.
+3. Run `go work use -r .` to add the new module to the workspace.
+4. Implement your tool.
+
+You can run the following commands to do all of the above:
+
+```bash
+# set OPENIM_TOOLS_NAME to your tool's name
+export OPENIM_TOOLS_NAME=
+
+# copy and paste to scaffold a new tool module
+mkdir tools/${OPENIM_TOOLS_NAME}
+cd tools/${OPENIM_TOOLS_NAME}
+go mod init github.com/openimsdk/open-im-server-deploy/tools/${OPENIM_TOOLS_NAME}
+go mod tidy
+go work use -r .
+cd ../..
+```
\ No newline at end of file
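For reference, after `go work use -r .` completes, the root `go.work` file gains a `use` entry for the new module. A sketch of what it might then contain (the module paths below are illustrative, not the repository's actual entries):

```go
go 1.18

use (
	.
	./tools/your-tool-name
)
```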
diff --git a/tools/changelog/changelog.go b/tools/changelog/changelog.go
new file mode 100644
index 0000000..c9e2be2
--- /dev/null
+++ b/tools/changelog/changelog.go
@@ -0,0 +1,198 @@
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "regexp"
+ "strings"
+)
+
+// You can specify a tag as a command line argument to generate the changelog for a specific version.
+// Example: go run tools/changelog/changelog.go v0.0.33
+// If no tag is provided, the latest release will be used.
+
+// Repository owner and name used when generating the changelog.
+const (
+ repoOwner = "openimsdk"
+ repoName = "open-im-server-deploy"
+)
+
+// GitHubRepo struct represents the repo details.
+type GitHubRepo struct {
+ Owner string
+ Repo string
+ FullChangelog string
+}
+
+// ReleaseData represents the JSON structure for release data.
+type ReleaseData struct {
+ TagName string `json:"tag_name"`
+ Body string `json:"body"`
+ HtmlUrl string `json:"html_url"`
+ Published string `json:"published_at"`
+}
+
+// Method to classify and format release notes.
+func (g *GitHubRepo) classifyReleaseNotes(body string) map[string][]string {
+ result := map[string][]string{
+ "feat": {},
+ "fix": {},
+ "chore": {},
+ "refactor": {},
+ "build": {},
+ "other": {},
+ }
+
+ // Regular expression to extract PR number and URL (case insensitive)
+ rePR := regexp.MustCompile(`(?i)in (https://github\.com/[^\s]+/pull/(\d+))`)
+
+ // Split the body into individual lines.
+ lines := strings.Split(body, "\n")
+
+ for _, line := range lines {
+ // Skip lines that contain "deps: Merge"
+ if strings.Contains(strings.ToLower(line), "deps: merge #") {
+ continue
+ }
+
+ // Use a regular expression to extract Full Changelog link and its title (case insensitive).
+ if strings.Contains(strings.ToLower(line), "**full changelog**") {
+ matches := regexp.MustCompile(`(?i)\*\*full changelog\*\*: (https://github\.com/[^\s]+/compare/([^\s]+))`).FindStringSubmatch(line)
+ if len(matches) > 2 {
+ // Format the Full Changelog link with title
+ g.FullChangelog = fmt.Sprintf("[%s](%s)", matches[2], matches[1])
+ }
+ continue // Skip further processing for this line.
+ }
+
+ if strings.HasPrefix(line, "*") {
+ var category string
+
+ // Use strings.ToLower to make the matching case insensitive
+ lowerLine := strings.ToLower(line)
+
+ // Determine the category based on the prefix (case insensitive).
+ if strings.HasPrefix(lowerLine, "* feat") {
+ category = "feat"
+ } else if strings.HasPrefix(lowerLine, "* fix") {
+ category = "fix"
+ } else if strings.HasPrefix(lowerLine, "* chore") {
+ category = "chore"
+ } else if strings.HasPrefix(lowerLine, "* refactor") {
+ category = "refactor"
+ } else if strings.HasPrefix(lowerLine, "* build") {
+ category = "build"
+ } else {
+ category = "other"
+ }
+
+ // Extract PR number and URL (case insensitive)
+ matches := rePR.FindStringSubmatch(line)
+ if len(matches) == 3 {
+ prURL := matches[1]
+ prNumber := matches[2]
+ // Format the line with the PR link and use original content for the final result
+ formattedLine := fmt.Sprintf("* %s [#%s](%s)", strings.Split(line, " by ")[0][2:], prNumber, prURL)
+ result[category] = append(result[category], formattedLine)
+ } else {
+ // If no PR link is found, just add the line as is
+ result[category] = append(result[category], line)
+ }
+ }
+ }
+
+ return result
+}
+
+// Method to generate the final changelog.
+func (g *GitHubRepo) generateChangelog(tag, date, htmlURL, body string) string {
+ sections := g.classifyReleaseNotes(body)
+
+ // Convert ISO 8601 date to simpler format (YYYY-MM-DD)
+ formattedDate := date[:10]
+
+ // Changelog header with tag, date, and links.
+ changelog := fmt.Sprintf("## [%s](%s) \t(%s)\n\n", tag, htmlURL, formattedDate)
+
+ if len(sections["feat"]) > 0 {
+ changelog += "### New Features\n" + strings.Join(sections["feat"], "\n") + "\n\n"
+ }
+ if len(sections["fix"]) > 0 {
+ changelog += "### Bug Fixes\n" + strings.Join(sections["fix"], "\n") + "\n\n"
+ }
+ if len(sections["chore"]) > 0 {
+ changelog += "### Chores\n" + strings.Join(sections["chore"], "\n") + "\n\n"
+ }
+ if len(sections["refactor"]) > 0 {
+ changelog += "### Refactors\n" + strings.Join(sections["refactor"], "\n") + "\n\n"
+ }
+ if len(sections["build"]) > 0 {
+ changelog += "### Builds\n" + strings.Join(sections["build"], "\n") + "\n\n"
+ }
+ if len(sections["other"]) > 0 {
+ changelog += "### Others\n" + strings.Join(sections["other"], "\n") + "\n\n"
+ }
+
+ if g.FullChangelog != "" {
+ changelog += fmt.Sprintf("**Full Changelog**: %s\n", g.FullChangelog)
+ }
+
+ return changelog
+}
+
+// Method to fetch release data from GitHub API.
+func (g *GitHubRepo) fetchReleaseData(version string) (*ReleaseData, error) {
+ var apiURL string
+
+ if version == "" {
+ // Fetch the latest release.
+ apiURL = fmt.Sprintf("https://api.github.com/repos/%s/%s/releases/latest", g.Owner, g.Repo)
+ } else {
+ // Fetch a specific version.
+ apiURL = fmt.Sprintf("https://api.github.com/repos/%s/%s/releases/tags/%s", g.Owner, g.Repo, version)
+ }
+
+ resp, err := http.Get(apiURL)
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ body, err := io.ReadAll(resp.Body)
+ if err != nil {
+ return nil, err
+ }
+
+ var releaseData ReleaseData
+ err = json.Unmarshal(body, &releaseData)
+ if err != nil {
+ return nil, err
+ }
+
+ return &releaseData, nil
+}
+
+func main() {
+ repo := &GitHubRepo{Owner: repoOwner, Repo: repoName}
+
+ // Get the version from command line arguments, if provided
+	var version string // Defaults to the latest release
+
+ if len(os.Args) > 1 {
+ version = os.Args[1] // Use the provided version
+ }
+
+ // Fetch release data (either for latest or specific version)
+ releaseData, err := repo.fetchReleaseData(version)
+ if err != nil {
+ fmt.Println("Error fetching release data:", err)
+ return
+ }
+
+ // Generate and print the formatted changelog
+ changelog := repo.generateChangelog(releaseData.TagName, releaseData.Published, releaseData.HtmlUrl, releaseData.Body)
+ fmt.Println(changelog)
+}
diff --git a/tools/check-component/main.go b/tools/check-component/main.go
new file mode 100644
index 0000000..e9b5574
--- /dev/null
+++ b/tools/check-component/main.go
@@ -0,0 +1,194 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "io"
+ "log"
+ "os"
+ "path/filepath"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/openimsdk/tools/discovery/etcd"
+ "github.com/openimsdk/tools/discovery/zookeeper"
+ "github.com/openimsdk/tools/mq/kafka"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/openimsdk/tools/system/program"
+)
+
+const maxRetry = 180
+
+const (
+ MountConfigFilePath = "CONFIG_PATH"
+ DeploymentType = "DEPLOYMENT_TYPE"
+ KUBERNETES = "kubernetes"
+)
+
+func CheckZookeeper(ctx context.Context, config *config.ZooKeeper) error {
+	// Temporarily disable logging
+ originalLogger := log.Default().Writer()
+ log.SetOutput(io.Discard)
+ defer log.SetOutput(originalLogger) // Ensure logging is restored
+ return zookeeper.Check(ctx, config.Address, config.Schema, zookeeper.WithUserNameAndPassword(config.Username, config.Password))
+}
+
+func CheckEtcd(ctx context.Context, config *config.Etcd) error {
+ return etcd.Check(ctx, config.Address, "/check_openim_component",
+ true,
+ etcd.WithDialTimeout(10*time.Second),
+ etcd.WithMaxCallSendMsgSize(20*1024*1024),
+ etcd.WithUsernameAndPassword(config.Username, config.Password))
+}
+
+func CheckMongo(ctx context.Context, config *config.Mongo) error {
+ return mongoutil.Check(ctx, config.Build())
+}
+
+func CheckRedis(ctx context.Context, config *config.Redis) error {
+ return redisutil.Check(ctx, config.Build())
+}
+
+func CheckMinIO(ctx context.Context, config *config.Minio) error {
+ return minio.Check(ctx, config.Build())
+}
+
+func CheckKafka(ctx context.Context, conf *config.Kafka) error {
+ return kafka.CheckHealth(ctx, conf.Build())
+}
+
+func initConfig(configDir string) (*config.Mongo, *config.Redis, *config.Kafka, *config.Minio, *config.Discovery, error) {
+ var (
+ mongoConfig = &config.Mongo{}
+ redisConfig = &config.Redis{}
+ kafkaConfig = &config.Kafka{}
+ minioConfig = &config.Minio{}
+ discovery = &config.Discovery{}
+ thirdConfig = &config.Third{}
+ )
+
+ err := config.Load(configDir, config.MongodbConfigFileName, config.EnvPrefixMap[config.MongodbConfigFileName], mongoConfig)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+
+ err = config.Load(configDir, config.RedisConfigFileName, config.EnvPrefixMap[config.RedisConfigFileName], redisConfig)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+
+ err = config.Load(configDir, config.KafkaConfigFileName, config.EnvPrefixMap[config.KafkaConfigFileName], kafkaConfig)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+
+ err = config.Load(configDir, config.OpenIMRPCThirdCfgFileName, config.EnvPrefixMap[config.OpenIMRPCThirdCfgFileName], thirdConfig)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+
+ if thirdConfig.Object.Enable == "minio" {
+ err = config.Load(configDir, config.MinioConfigFileName, config.EnvPrefixMap[config.MinioConfigFileName], minioConfig)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+ } else {
+ minioConfig = nil
+ }
+ err = config.Load(configDir, config.DiscoveryConfigFilename, config.EnvPrefixMap[config.DiscoveryConfigFilename], discovery)
+ if err != nil {
+ return nil, nil, nil, nil, nil, err
+ }
+ return mongoConfig, redisConfig, kafkaConfig, minioConfig, discovery, nil
+}
+
+func main() {
+ var index int
+ var configDir string
+ flag.IntVar(&index, "i", 0, "Index number")
+ defaultConfigDir := filepath.Join("..", "..", "..", "..", "..", "config")
+ flag.StringVar(&configDir, "c", defaultConfigDir, "Configuration dir")
+ flag.Parse()
+
+ fmt.Printf("%s Index: %d, Config Path: %s\n", filepath.Base(os.Args[0]), index, configDir)
+
+	mongoConfig, redisConfig, kafkaConfig, minioConfig, discoveryConfig, err := initConfig(configDir)
+	if err != nil {
+		program.ExitWithError(err)
+	}
+
+	ctx := context.Background()
+	err = performChecks(ctx, mongoConfig, redisConfig, kafkaConfig, minioConfig, discoveryConfig, maxRetry)
+	if err != nil {
+		program.ExitWithError(err)
+ }
+}
+
+func performChecks(ctx context.Context, mongoConfig *config.Mongo, redisConfig *config.Redis, kafkaConfig *config.Kafka, minioConfig *config.Minio, discovery *config.Discovery, maxRetry int) error {
+ checksDone := make(map[string]bool)
+
+ checks := map[string]func(ctx context.Context) error{
+ "Mongo": func(ctx context.Context) error {
+ return CheckMongo(ctx, mongoConfig)
+ },
+ "Redis": func(ctx context.Context) error {
+ return CheckRedis(ctx, redisConfig)
+ },
+ "Kafka": func(ctx context.Context) error {
+ return CheckKafka(ctx, kafkaConfig)
+ },
+ }
+ if minioConfig != nil {
+ checks["MinIO"] = func(ctx context.Context) error {
+ return CheckMinIO(ctx, minioConfig)
+ }
+ }
+ if discovery.Enable == "etcd" {
+ checks["Etcd"] = func(ctx context.Context) error {
+ return CheckEtcd(ctx, &discovery.Etcd)
+ }
+ }
+
+ for i := 0; i < maxRetry; i++ {
+ allSuccess := true
+ for name, check := range checks {
+ if !checksDone[name] {
+ if err := check(ctx); err != nil {
+ fmt.Printf("%s check failed: %v\n", name, err)
+ allSuccess = false
+ } else {
+ fmt.Printf("%s check succeeded.\n", name)
+ checksDone[name] = true
+ }
+ }
+ }
+
+ if allSuccess {
+			fmt.Println("All component checks passed successfully.")
+ return nil
+ }
+
+ time.Sleep(1 * time.Second)
+ }
+
+	return fmt.Errorf("not all component checks passed after %d attempts", maxRetry)
+}
diff --git a/tools/check-free-memory/main.go b/tools/check-free-memory/main.go
new file mode 100644
index 0000000..e182a15
--- /dev/null
+++ b/tools/check-free-memory/main.go
@@ -0,0 +1,26 @@
+package main
+
+import (
+ "fmt"
+ "os"
+
+ "github.com/shirou/gopsutil/mem"
+)
+
+func main() {
+ vMem, err := mem.VirtualMemory()
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "Failed to get virtual memory info: %v\n", err)
+ os.Exit(1)
+ }
+
+ // Use the Available field to get the available memory
+ availableMemoryGB := float64(vMem.Available) / float64(1024*1024*1024)
+
+ if availableMemoryGB < 1.0 {
+ fmt.Fprintf(os.Stderr, "System available memory is less than 1GB: %.2fGB\n", availableMemoryGB)
+ os.Exit(1)
+ } else {
+ fmt.Printf("System available memory is sufficient: %.2fGB\n", availableMemoryGB)
+ }
+}
diff --git a/tools/imctl/.gitignore b/tools/imctl/.gitignore
new file mode 100644
index 0000000..72ff17c
--- /dev/null
+++ b/tools/imctl/.gitignore
@@ -0,0 +1,50 @@
+# Copyright © 2023 OpenIMSDK.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# ==============================================================================
+# Overall design of this .gitignore: files and directories excluded from git
+#===============================================================================
+#
+
+### OpenIM developer supplement ###
+logs
+.devcontainer
+components
+out-test
+Dockerfile.cross
+
+### Makefile ###
+tmp/
+bin/
+output/
+_output/
+
+### OpenIM Config ###
+config/config.yaml
+./config/config.yaml
+.env
+./.env
+
+# files used by the developer
+.idea.md
+.todo.md
+.note.md
+
+# ==============================================================================
+# Created by https://www.toptal.com/developers/gitignore/api/go,git,vim,tags,test,emacs,backup,jetbrains
+# Edit at https://www.toptal.com/developers/gitignore?templates=go,git,vim,tags,test,emacs,backup,jetbrains
+
+cmd/
+internal/
+pkg/
diff --git a/tools/imctl/README.md b/tools/imctl/README.md
new file mode 100644
index 0000000..4f4ff4a
--- /dev/null
+++ b/tools/imctl/README.md
@@ -0,0 +1,89 @@
+# [RFC #0005] OpenIM CTL Module Proposal
+
+## Meta
+
+- Name: OpenIM CTL Module Enhancement
+- Start Date: 2023-08-23
+- Author(s): @cubxxw
+- Status: Draft
+- RFC Pull Request: (leave blank)
+- OpenIMSDK Pull Request: (leave blank)
+- OpenIMSDK Issue: https://github.com/openimsdk/open-im-server-deploy/issues/924
+- Supersedes: N/A
+
+## 📇Topics
+
+- RFC #0005 OpenIMSDK CTL Module Proposal
+ - [Meta](#meta)
+ - [Summary](#summary)
+ - [Definitions](#definitions)
+ - [Motivation](#motivation)
+ - [What it is](#what-it-is)
+ - [How it Works](#how-it-works)
+ - [Migration](#migration)
+ - [Drawbacks](#drawbacks)
+ - [Alternatives](#alternatives)
+ - [Prior Art](#prior-art)
+ - [Unresolved Questions](#unresolved-questions)
+ - [Spec. Changes (OPTIONAL)](#spec-changes-optional)
+ - [History](#history)
+
+## Summary
+
+The OpenIM CTL module proposal aims to provide an integrated tool for the OpenIM system, offering utilities for user management, system monitoring, debugging, configuration, and more. This tool will enhance the extensibility of the OpenIM system and reduce dependencies on individual modules.
+
+## Definitions
+
+- **OpenIM**: An Instant Messaging system.
+- **`imctl`**: The control command-line tool for OpenIM.
+- **E2E Testing**: End-to-End Testing.
+- **API**: Application Programming Interface.
+
+## Motivation
+
+- Improve the OpenIM system's extensibility and reduce dependencies on individual modules.
+- Simplify the process for testers to perform automated tests.
+- Enhance interaction with scripts and reduce the system's coupling.
+- Implement a consistent tool similar to kubectl for a streamlined user experience.
+
+## What it is
+
+`imctl` is a command-line utility designed for OpenIM to provide functionalities including:
+
+- User Management: Add, delete, or disable user accounts.
+- System Monitoring: View metrics like online users, message transfer rate.
+- Debugging: View logs, adjust log levels, check system states.
+- Configuration Management: Update system settings, manage plugins/modules.
+- Data Management: Backup, restore, import, or export data.
+- System Maintenance: Update, restart services, or maintenance mode.
+
+## How it Works
+
+`imctl`, inspired by kubectl, will have sub-commands and options for the functionalities mentioned. Developers, operations, and testers can invoke these commands to manage and monitor the OpenIM system.
+
+## Migration
+
+Currently, the `imctl` will be housed in `tools/imctl`, and later on, the plan is to move it to `cmd/imctl`. Migration guidelines will be provided to ensure smooth transitions.
+
+## Drawbacks
+
+- Overhead in learning and adapting to a new tool for existing users.
+- Potential complexities in implementing some of the advanced functionalities.
+
+## Alternatives
+
+- Continue using individual modules for OpenIM management.
+- Utilize third-party tools or platforms with similar functionalities, customizing them for OpenIM.
+
+## Prior Art
+
+Kubectl from Kubernetes is a significant inspiration for `imctl`, offering a comprehensive command-line tool for managing clusters.
+
+## Unresolved Questions
+
+- What other functionalities might be required in future versions of `imctl`?
+- What's the expected timeline for transitioning from `tools/imctl` to `cmd/imctl`?
+
+## Spec. Changes (OPTIONAL)
+
+As of now, there are no proposed changes to the core specifications or extensions. Future changes based on community feedback might necessitate spec changes, which will be documented accordingly.
\ No newline at end of file
diff --git a/tools/imctl/main.go b/tools/imctl/main.go
new file mode 100644
index 0000000..9116132
--- /dev/null
+++ b/tools/imctl/main.go
@@ -0,0 +1,22 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import "fmt"
+
+func main() {
+
+ fmt.Println("imctl")
+}
diff --git a/tools/infra/main.go b/tools/infra/main.go
new file mode 100644
index 0000000..f6225a3
--- /dev/null
+++ b/tools/infra/main.go
@@ -0,0 +1,52 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "fmt"
+
+ "github.com/fatih/color"
+)
+
+// Define a function to print important link information
+func printLinks() {
+ blue := color.New(color.FgBlue).SprintFunc()
+ fmt.Printf("OpenIM Github: %s\n", blue("https://github.com/OpenIMSDK/Open-IM-Server"))
+ fmt.Printf("Slack Invitation: %s\n", blue("https://openimsdk.slack.com"))
+ fmt.Printf("Follow Twitter: %s\n", blue("https://twitter.com/founder_im63606"))
+}
+
+func main() {
+ yellow := color.New(color.FgYellow)
+ blue := color.New(color.FgBlue, color.Bold)
+
+ yellow.Println("Please use the release branch or tag for production environments!")
+
+ message := `
+  ____                       _____  __  __ 
+ / __ \                     |_   _||  \/  |
+| |  | | _ __    ___  _ __    | |  | \  / |
+| |  | || '_ \  / _ \| '_ \   | |  | |\/| |
+| |__| || |_) ||  __/| | | | _| |_ | |  | |
+ \____/ | .__/  \___||_| |_||_____||_|  |_|
+        | |
+        |_|
+
+Keep checking for updates!
+`
+
+ blue.Println(message)
+ printLinks() // Call the function to print the link information
+}
diff --git a/tools/ncpu/README.md b/tools/ncpu/README.md
new file mode 100644
index 0000000..f7c05d5
--- /dev/null
+++ b/tools/ncpu/README.md
@@ -0,0 +1,39 @@
+# ncpu
+
+**ncpu** is a simple utility to fetch the number of CPU cores across different operating systems.
+
+## Introduction
+
+In various scenarios, especially while compiling code, it's beneficial to know the number of available CPU cores to optimize the build process. However, the command to fetch the CPU core count differs between operating systems. For example, on Linux, we use `nproc`, while on macOS, it's `sysctl -n hw.ncpu`. The `ncpu` utility provides a unified way to obtain this number, regardless of the platform.
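Under the hood, a tool like this can be just a few lines of Go. A minimal sketch is shown below; note that the actual `ncpu` binary in this repository additionally routes through `automaxprocs` and prints `GOMAXPROCS`, so container CPU quotas are respected, whereas this sketch reports raw logical cores:

```go
package main

import (
	"fmt"
	"runtime"
)

// coreCount returns the number of logical CPU cores visible to the process.
func coreCount() int {
	return runtime.NumCPU()
}

func main() {
	fmt.Println(coreCount())
}
```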
+
+## Usage
+
+To retrieve the number of CPU cores, simply use the `ncpu` command:
+
+```bash
+$ ncpu
+```
+
+This will return an integer representing the number of available CPU cores.
+
+### Example:
+
+Let's say you're compiling a project using `make`. To utilize all the CPU cores for the compilation process, you can use:
+
+```bash
+$ make -j $(ncpu) build # or any other build command
+```
+
+The above command will ensure the build process takes advantage of all the available CPU cores, thereby potentially speeding up the compilation.
+
+## Why use `ncpu`?
+
+- **Cross-platform compatibility**: No need to remember or detect which OS-specific command to use. Just use `ncpu`!
+
+- **Ease of use**: A simple and intuitive command that's easy to incorporate into scripts or command-line operations.
+
+- **Consistency**: Ensures consistent behavior and output across different systems and environments.
+
+## Installation
+
+(Include installation steps here, e.g., how to clone the repo, build the tool, or install via package manager.)
diff --git a/tools/ncpu/main.go b/tools/ncpu/main.go
new file mode 100644
index 0000000..062618b
--- /dev/null
+++ b/tools/ncpu/main.go
@@ -0,0 +1,32 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "fmt"
+ "runtime"
+
+ "go.uber.org/automaxprocs/maxprocs"
+)
+
+func main() {
+ // Set maxprocs with a custom logger that does nothing to ignore logs.
+ maxprocs.Set(maxprocs.Logger(func(string, ...interface{}) {
+ // Intentionally left blank to suppress all log output from automaxprocs.
+ }))
+
+ // Now this will print the GOMAXPROCS value without printing the automaxprocs log message.
+ fmt.Println(runtime.GOMAXPROCS(0))
+}
diff --git a/tools/ncpu/main_test.go b/tools/ncpu/main_test.go
new file mode 100644
index 0000000..f242032
--- /dev/null
+++ b/tools/ncpu/main_test.go
@@ -0,0 +1,35 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import "testing"
+
+func Test_main(t *testing.T) {
+ tests := []struct {
+ name string
+ }{
+ {
+ name: "Test_main",
+ },
+ {
+ name: "Test_main2",
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ main()
+ })
+ }
+}
diff --git a/tools/s3/README.md b/tools/s3/README.md
new file mode 100644
index 0000000..ac30347
--- /dev/null
+++ b/tools/s3/README.md
@@ -0,0 +1,12 @@
+# Convert stored object data after switching the S3 storage engine
+
+- build
+```shell
+go build -o s3convert main.go
+```
+
+- start
+```shell
+./s3convert -config <config dir> -name <storage engine>
+# ./s3convert -config ./../../config -name minio
+```
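The two flags map naturally onto Go's standard `flag` package. A sketch of how the converter's argument handling might be wired is shown below; `parseArgs` is an illustrative helper, not the tool's actual code:

```go
package main

import (
	"flag"
	"fmt"
)

// parseArgs mirrors how the converter's -config and -name flags might be
// wired; the real flag handling in main.go may differ.
func parseArgs(args []string) (configDir, name string, err error) {
	fs := flag.NewFlagSet("s3convert", flag.ContinueOnError)
	fs.StringVar(&configDir, "config", "", "directory containing the OpenIM config files")
	fs.StringVar(&name, "name", "", "target storage engine (minio, cos, oss, kodo, aws)")
	if err = fs.Parse(args); err != nil {
		return "", "", err
	}
	return configDir, name, nil
}

func main() {
	dir, name, err := parseArgs([]string{"-config", "./../../config", "-name", "minio"})
	if err != nil {
		panic(err)
	}
	fmt.Println(dir, name)
}
```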
diff --git a/tools/s3/internal/conversion.go b/tools/s3/internal/conversion.go
new file mode 100644
index 0000000..318cc24
--- /dev/null
+++ b/tools/s3/internal/conversion.go
@@ -0,0 +1,203 @@
+package internal
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "log"
+ "net/http"
+ "path/filepath"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/cache/redis"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "github.com/mitchellh/mapstructure"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/openimsdk/tools/s3"
+ "github.com/openimsdk/tools/s3/aws"
+ "github.com/openimsdk/tools/s3/cos"
+ "github.com/openimsdk/tools/s3/kodo"
+ "github.com/openimsdk/tools/s3/minio"
+ "github.com/openimsdk/tools/s3/oss"
+ "github.com/spf13/viper"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+const defaultTimeout = time.Second * 10
+
+func readConf(path string, val any) error {
+ v := viper.New()
+ v.SetConfigFile(path)
+ if err := v.ReadInConfig(); err != nil {
+ return err
+ }
+ fn := func(config *mapstructure.DecoderConfig) {
+ config.TagName = "mapstructure"
+ }
+ return v.Unmarshal(val, fn)
+}
+
+func getS3(path string, name string, thirdConf *config.Third) (s3.Interface, error) {
+ switch name {
+ case "minio":
+ ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
+ defer cancel()
+ var minioConf config.Minio
+ if err := readConf(filepath.Join(path, minioConf.GetConfigFileName()), &minioConf); err != nil {
+ return nil, err
+ }
+ var redisConf config.Redis
+ if err := readConf(filepath.Join(path, redisConf.GetConfigFileName()), &redisConf); err != nil {
+ return nil, err
+ }
+ rdb, err := redisutil.NewRedisClient(ctx, redisConf.Build())
+ if err != nil {
+ return nil, err
+ }
+ return minio.NewMinio(ctx, redis.NewMinioCache(rdb), *minioConf.Build())
+ case "cos":
+ return cos.NewCos(*thirdConf.Object.Cos.Build())
+ case "oss":
+ return oss.NewOSS(*thirdConf.Object.Oss.Build())
+ case "kodo":
+ return kodo.NewKodo(*thirdConf.Object.Kodo.Build())
+ case "aws":
+ return aws.NewAws(*thirdConf.Object.Aws.Build())
+ default:
+		return nil, fmt.Errorf("unsupported s3 storage engine: %s", name)
+ }
+}
+
+func getMongo(path string) (database.ObjectInfo, error) {
+ var mongoConf config.Mongo
+ if err := readConf(filepath.Join(path, mongoConf.GetConfigFileName()), &mongoConf); err != nil {
+ return nil, err
+ }
+ ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
+ defer cancel()
+ mgocli, err := mongoutil.NewMongoDB(ctx, mongoConf.Build())
+ if err != nil {
+ return nil, err
+ }
+ return mgo.NewS3Mongo(mgocli.GetDB())
+}
+
+func Main(path string, engine string) error {
+ var thirdConf config.Third
+ if err := readConf(filepath.Join(path, thirdConf.GetConfigFileName()), &thirdConf); err != nil {
+ return err
+ }
+ if thirdConf.Object.Enable == engine {
+		return errors.New("the previous s3 engine is the same as the currently enabled one")
+ }
+ s3db, err := getMongo(path)
+ if err != nil {
+ return err
+ }
+ oldS3, err := getS3(path, engine, &thirdConf)
+ if err != nil {
+ return err
+ }
+ newS3, err := getS3(path, thirdConf.Object.Enable, &thirdConf)
+ if err != nil {
+ return err
+ }
+ count, err := getEngineCount(s3db, oldS3.Engine())
+ if err != nil {
+ return err
+ }
+ log.Printf("engine %s count: %d", oldS3.Engine(), count)
+ var skip int
+ for i := 1; i <= count+1; i++ {
+ log.Printf("start %d/%d", i, count)
+ start := time.Now()
+ res, err := doObject(s3db, newS3, oldS3, skip)
+ if err != nil {
+ log.Printf("end [%s] %d/%d error %s", time.Since(start), i, count, err)
+ return err
+ }
+ log.Printf("end [%s] %d/%d result %+v", time.Since(start), i, count, *res)
+ if res.Skip {
+ skip++
+ }
+ if res.End {
+ break
+ }
+ }
+ return nil
+}
+
+func getEngineCount(db database.ObjectInfo, name string) (int, error) {
+ ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
+ defer cancel()
+ count, err := db.GetEngineCount(ctx, name)
+ if err != nil {
+ return 0, err
+ }
+ return int(count), nil
+}
+
+func doObject(db database.ObjectInfo, newS3, oldS3 s3.Interface, skip int) (*Result, error) {
+ ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
+ defer cancel()
+ infos, err := db.GetEngineInfo(ctx, oldS3.Engine(), 1, skip)
+ if err != nil {
+ return nil, err
+ }
+ if len(infos) == 0 {
+ return &Result{End: true}, nil
+ }
+ obj := infos[0]
+ if _, err := db.Take(ctx, newS3.Engine(), obj.Name); err == nil {
+ return &Result{Skip: true}, nil
+ } else if !errors.Is(err, mongo.ErrNoDocuments) {
+ return nil, err
+ }
+ downloadURL, err := oldS3.AccessURL(ctx, obj.Key, time.Hour, &s3.AccessURLOption{})
+ if err != nil {
+ return nil, err
+ }
+ putURL, err := newS3.PresignedPutObject(ctx, obj.Key, time.Hour, &s3.PutOption{ContentType: obj.ContentType})
+ if err != nil {
+ return nil, err
+ }
+ downloadResp, err := http.Get(downloadURL)
+ if err != nil {
+ return nil, err
+ }
+ defer downloadResp.Body.Close()
+ switch downloadResp.StatusCode {
+ case http.StatusNotFound:
+ return &Result{Skip: true}, nil
+ case http.StatusOK:
+ default:
+ return nil, fmt.Errorf("download object failed %s", downloadResp.Status)
+ }
+ log.Printf("file size %d", obj.Size)
+ request, err := http.NewRequest(http.MethodPut, putURL.URL, downloadResp.Body)
+ if err != nil {
+ return nil, err
+ }
+ putResp, err := http.DefaultClient.Do(request)
+ if err != nil {
+ return nil, err
+ }
+ defer putResp.Body.Close()
+ if putResp.StatusCode != http.StatusOK {
+ return nil, fmt.Errorf("put object failed %s", putResp.Status)
+ }
+ ctx, cancel = context.WithTimeout(context.Background(), defaultTimeout)
+ defer cancel()
+ if err := db.UpdateEngine(ctx, obj.Engine, obj.Name, newS3.Engine()); err != nil {
+ return nil, err
+ }
+ return &Result{}, nil
+}
+
+type Result struct {
+ Skip bool
+ End bool
+}
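
The core of `doObject` above is a stream copy between two presigned URLs: an HTTP GET from the old engine is piped directly into an HTTP PUT against the new one, so the object body is never fully buffered by the tool. A minimal, self-contained sketch of that pattern (the `httptest` servers are stand-ins for the two object stores, not part of the tool):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Stand-in for the old engine: serves the object at a "presigned" GET URL.
	oldStore := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "object-bytes")
	}))
	defer oldStore.Close()

	// Stand-in for the new engine: accepts the object at a "presigned" PUT URL.
	var received string
	newStore := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		b, _ := io.ReadAll(r.Body)
		received = string(b)
	}))
	defer newStore.Close()

	// GET from the old engine; resp.Body is a stream, not a byte slice.
	resp, err := http.Get(oldStore.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// PUT the response body straight into the new engine, as doObject does.
	req, err := http.NewRequest(http.MethodPut, newStore.URL, resp.Body)
	if err != nil {
		panic(err)
	}
	putResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	putResp.Body.Close()

	if received != "object-bytes" {
		panic("copy failed")
	}
	fmt.Println("ok")
}
```

The real tool additionally checks for `404` on the download (skipping missing objects) and updates the Mongo record only after the PUT succeeds.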
diff --git a/tools/s3/main.go b/tools/s3/main.go
new file mode 100644
index 0000000..5f54795
--- /dev/null
+++ b/tools/s3/main.go
@@ -0,0 +1,24 @@
+package main
+
+import (
+ "flag"
+ "fmt"
+ "os"
+
+ "git.imall.cloud/openim/open-im-server-deploy/tools/s3/internal"
+)
+
+func main() {
+ var (
+ name string
+ config string
+ )
+	flag.StringVar(&name, "name", "", "previous s3 storage engine name")
+ flag.StringVar(&config, "config", "", "config directory")
+ flag.Parse()
+ if err := internal.Main(config, name); err != nil {
+ fmt.Fprintln(os.Stderr, err)
+ os.Exit(1)
+ }
+ fmt.Fprintln(os.Stdout, "success")
+}
diff --git a/tools/seq/internal/seq.go b/tools/seq/internal/seq.go
new file mode 100644
index 0000000..aeaec86
--- /dev/null
+++ b/tools/seq/internal/seq.go
@@ -0,0 +1,347 @@
+package internal
+
+import (
+ "bytes"
+ "context"
+ "errors"
+ "fmt"
+ "os"
+ "os/signal"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "syscall"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/config"
+ "git.imall.cloud/openim/open-im-server-deploy/pkg/common/storage/database/mgo"
+ "github.com/mitchellh/mapstructure"
+ "github.com/openimsdk/tools/db/mongoutil"
+ "github.com/openimsdk/tools/db/redisutil"
+ "github.com/openimsdk/tools/utils/runtimeenv"
+ "github.com/redis/go-redis/v9"
+ "github.com/spf13/viper"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+)
+
+const StructTagName = "yaml"
+
+const (
+ MaxSeq = "MAX_SEQ:"
+ MinSeq = "MIN_SEQ:"
+ ConversationUserMinSeq = "CON_USER_MIN_SEQ:"
+ HasReadSeq = "HAS_READ_SEQ:"
+)
+
+const (
+ batchSize = 100
+ dataVersionCollection = "data_version"
+ seqKey = "seq"
+ seqVersion = 38
+)
+
+func readConfig[T any](dir string, name string) (*T, error) {
+ if runtimeenv.RuntimeEnvironment() == config.KUBERNETES {
+ dir = os.Getenv(config.MountConfigFilePath)
+ }
+ v := viper.New()
+ v.SetEnvPrefix(config.EnvPrefixMap[name])
+ v.AutomaticEnv()
+ v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
+ v.SetConfigFile(filepath.Join(dir, name))
+ if err := v.ReadInConfig(); err != nil {
+ return nil, err
+ }
+
+ var conf T
+ if err := v.Unmarshal(&conf, func(config *mapstructure.DecoderConfig) {
+ config.TagName = StructTagName
+ }); err != nil {
+ return nil, err
+ }
+
+ return &conf, nil
+}
+
+func Main(conf string, del time.Duration) error {
+ redisConfig, err := readConfig[config.Redis](conf, config.RedisConfigFileName)
+ if err != nil {
+ return err
+ }
+
+ mongodbConfig, err := readConfig[config.Mongo](conf, config.MongodbConfigFileName)
+ if err != nil {
+ return err
+ }
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
+ defer cancel()
+ rdb, err := redisutil.NewRedisClient(ctx, redisConfig.Build())
+ if err != nil {
+ return err
+ }
+ mgocli, err := mongoutil.NewMongoDB(ctx, mongodbConfig.Build())
+ if err != nil {
+ return err
+ }
+ versionColl := mgocli.GetDB().Collection(dataVersionCollection)
+ converted, err := CheckVersion(versionColl, seqKey, seqVersion)
+ if err != nil {
+ return err
+ }
+ if converted {
+ fmt.Println("[seq] seq data has been converted")
+ return nil
+ }
+ cSeq, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ uSeq, err := mgo.NewSeqUserMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
+ uSpitHasReadSeq := func(id string) (conversationID string, userID string, err error) {
+ // HasReadSeq + userID + ":" + conversationID
+ arr := strings.Split(id, ":")
+ if len(arr) != 2 || arr[0] == "" || arr[1] == "" {
+ return "", "", fmt.Errorf("invalid has read seq id %s", id)
+ }
+ userID = arr[0]
+ conversationID = arr[1]
+ return
+ }
+ uSpitConversationUserMinSeq := func(id string) (conversationID string, userID string, err error) {
+ // ConversationUserMinSeq + conversationID + "u:" + userID
+ arr := strings.Split(id, "u:")
+ if len(arr) != 2 || arr[0] == "" || arr[1] == "" {
+			return "", "", fmt.Errorf("invalid conversation user min seq id %s", id)
+ }
+ conversationID = arr[0]
+ userID = arr[1]
+ return
+ }
+
+ ts := []*taskSeq{
+ {
+ Prefix: MaxSeq,
+ GetSeq: cSeq.GetMaxSeq,
+ SetSeq: cSeq.SetMaxSeq,
+ },
+ {
+ Prefix: MinSeq,
+ GetSeq: cSeq.GetMinSeq,
+ SetSeq: cSeq.SetMinSeq,
+ },
+ {
+ Prefix: HasReadSeq,
+ GetSeq: func(ctx context.Context, id string) (int64, error) {
+ conversationID, userID, err := uSpitHasReadSeq(id)
+ if err != nil {
+ return 0, err
+ }
+ return uSeq.GetUserReadSeq(ctx, conversationID, userID)
+ },
+ SetSeq: func(ctx context.Context, id string, seq int64) error {
+ conversationID, userID, err := uSpitHasReadSeq(id)
+ if err != nil {
+ return err
+ }
+ return uSeq.SetUserReadSeq(ctx, conversationID, userID, seq)
+ },
+ },
+ {
+ Prefix: ConversationUserMinSeq,
+ GetSeq: func(ctx context.Context, id string) (int64, error) {
+ conversationID, userID, err := uSpitConversationUserMinSeq(id)
+ if err != nil {
+ return 0, err
+ }
+ return uSeq.GetUserMinSeq(ctx, conversationID, userID)
+ },
+ SetSeq: func(ctx context.Context, id string, seq int64) error {
+ conversationID, userID, err := uSpitConversationUserMinSeq(id)
+ if err != nil {
+ return err
+ }
+ return uSeq.SetUserMinSeq(ctx, conversationID, userID, seq)
+ },
+ },
+ }
+
+ cancel()
+ ctx = context.Background()
+
+ var wg sync.WaitGroup
+ wg.Add(len(ts))
+
+ for i := range ts {
+ go func(task *taskSeq) {
+ defer wg.Done()
+ err := seqRedisToMongo(ctx, rdb, task.GetSeq, task.SetSeq, task.Prefix, del, &task.Count)
+ task.End = time.Now()
+ task.Error = err
+ }(ts[i])
+ }
+ start := time.Now()
+ done := make(chan struct{})
+ go func() {
+ wg.Wait()
+ close(done)
+ }()
+
+ sigs := make(chan os.Signal, 1)
+ signal.Notify(sigs, syscall.SIGTERM)
+
+ ticker := time.NewTicker(time.Second)
+ defer ticker.Stop()
+ var buf bytes.Buffer
+
+ printTaskInfo := func(now time.Time) {
+ buf.Reset()
+ buf.WriteString(now.Format(time.DateTime))
+ buf.WriteString(" \n")
+ for i := range ts {
+ task := ts[i]
+ if task.Error == nil {
+ if task.End.IsZero() {
+ buf.WriteString(fmt.Sprintf("[%s] converting %s* count %d", now.Sub(start), task.Prefix, atomic.LoadInt64(&task.Count)))
+ } else {
+ buf.WriteString(fmt.Sprintf("[%s] success %s* count %d", task.End.Sub(start), task.Prefix, atomic.LoadInt64(&task.Count)))
+ }
+ } else {
+ buf.WriteString(fmt.Sprintf("[%s] failed %s* count %d error %s", task.End.Sub(start), task.Prefix, atomic.LoadInt64(&task.Count), task.Error))
+ }
+ buf.WriteString("\n")
+ }
+ fmt.Println(buf.String())
+ }
+
+ for {
+ select {
+ case <-ctx.Done():
+ return ctx.Err()
+ case s := <-sigs:
+ return fmt.Errorf("exit by signal %s", s)
+ case <-done:
+ errs := make([]error, 0, len(ts))
+ for i := range ts {
+ task := ts[i]
+ if task.Error != nil {
+ errs = append(errs, fmt.Errorf("seq %s failed %w", task.Prefix, task.Error))
+ }
+ }
+ if len(errs) > 0 {
+ return errors.Join(errs...)
+ }
+ printTaskInfo(time.Now())
+ if err := SetVersion(versionColl, seqKey, seqVersion); err != nil {
+ return fmt.Errorf("set mongodb seq version %w", err)
+ }
+ return nil
+ case now := <-ticker.C:
+ printTaskInfo(now)
+ }
+ }
+}
+
+type taskSeq struct {
+ Prefix string
+ Count int64
+ Error error
+ End time.Time
+ GetSeq func(ctx context.Context, id string) (int64, error)
+ SetSeq func(ctx context.Context, id string, seq int64) error
+}
+
+func seqRedisToMongo(ctx context.Context, rdb redis.UniversalClient, getSeq func(ctx context.Context, id string) (int64, error), setSeq func(ctx context.Context, id string, seq int64) error, prefix string, delAfter time.Duration, count *int64) error {
+ var (
+ cursor uint64
+ keys []string
+ err error
+ )
+ for {
+ keys, cursor, err = rdb.Scan(ctx, cursor, prefix+"*", batchSize).Result()
+ if err != nil {
+ return err
+ }
+ if len(keys) > 0 {
+ for _, key := range keys {
+ seqStr, err := rdb.Get(ctx, key).Result()
+ if err != nil {
+ return fmt.Errorf("redis get %s failed %w", key, err)
+ }
+ seq, err := strconv.Atoi(seqStr)
+ if err != nil {
+ return fmt.Errorf("invalid %s seq %s", key, seqStr)
+ }
+ if seq < 0 {
+ return fmt.Errorf("invalid %s seq %s", key, seqStr)
+ }
+ id := strings.TrimPrefix(key, prefix)
+ redisSeq := int64(seq)
+ mongoSeq, err := getSeq(ctx, id)
+ if err != nil {
+ return fmt.Errorf("get mongo seq %s failed %w", key, err)
+ }
+ if mongoSeq < redisSeq {
+ if err := setSeq(ctx, id, redisSeq); err != nil {
+ return fmt.Errorf("set mongo seq %s failed %w", key, err)
+ }
+ }
+ if delAfter > 0 {
+ if err := rdb.Expire(ctx, key, delAfter).Err(); err != nil {
+ return fmt.Errorf("redis expire key %s failed %w", key, err)
+ }
+ } else {
+ if err := rdb.Del(ctx, key).Err(); err != nil {
+ return fmt.Errorf("redis del key %s failed %w", key, err)
+ }
+ }
+ atomic.AddInt64(count, 1)
+ }
+ }
+ if cursor == 0 {
+ return nil
+ }
+ }
+}
+
+func CheckVersion(coll *mongo.Collection, key string, currentVersion int) (converted bool, err error) {
+ type VersionTable struct {
+ Key string `bson:"key"`
+ Value string `bson:"value"`
+ }
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
+ defer cancel()
+ res, err := mongoutil.FindOne[VersionTable](ctx, coll, bson.M{"key": key})
+ if err == nil {
+ ver, err := strconv.Atoi(res.Value)
+ if err != nil {
+ return false, fmt.Errorf("version %s parse error %w", res.Value, err)
+ }
+ if ver >= currentVersion {
+ return true, nil
+ }
+ return false, nil
+ } else if errors.Is(err, mongo.ErrNoDocuments) {
+ return false, nil
+ } else {
+ return false, err
+ }
+}
+
+func SetVersion(coll *mongo.Collection, key string, version int) error {
+ ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
+ defer cancel()
+ option := options.Update().SetUpsert(true)
+ filter := bson.M{"key": key}
+ update := bson.M{"$set": bson.M{"key": key, "value": strconv.Itoa(version)}}
+ return mongoutil.UpdateOne(ctx, coll, filter, update, false, option)
+}
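
The two closures in `Main` above recover IDs from the legacy Redis key layouts: `HAS_READ_SEQ:<userID>:<conversationID>` and `CON_USER_MIN_SEQ:<conversationID>u:<userID>`. A standalone sketch of that parsing, with renamed helper functions and made-up example IDs (the originals are local closures over the Mongo accessors):

```go
package main

import (
	"fmt"
	"strings"
)

// splitHasReadSeq parses the suffix of a HAS_READ_SEQ key: "<userID>:<conversationID>".
func splitHasReadSeq(id string) (conversationID, userID string, err error) {
	arr := strings.Split(id, ":")
	if len(arr) != 2 || arr[0] == "" || arr[1] == "" {
		return "", "", fmt.Errorf("invalid has read seq id %s", id)
	}
	return arr[1], arr[0], nil
}

// splitConversationUserMinSeq parses the suffix of a CON_USER_MIN_SEQ key:
// "<conversationID>u:<userID>" (the "u:" separator comes from the legacy key format).
func splitConversationUserMinSeq(id string) (conversationID, userID string, err error) {
	arr := strings.Split(id, "u:")
	if len(arr) != 2 || arr[0] == "" || arr[1] == "" {
		return "", "", fmt.Errorf("invalid conversation user min seq id %s", id)
	}
	return arr[0], arr[1], nil
}

func main() {
	conv, user, err := splitHasReadSeq("u100:si_100_200")
	if err != nil || user != "u100" || conv != "si_100_200" {
		panic("splitHasReadSeq failed")
	}
	conv, user, err = splitConversationUserMinSeq("si_100_200u:u100")
	if err != nil || conv != "si_100_200" || user != "u100" {
		panic("splitConversationUserMinSeq failed")
	}
	fmt.Println("ok")
}
```

Note the exact-two-parts check: an ID whose components themselves contain the separator is rejected rather than silently mis-split.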
diff --git a/tools/seq/main.go b/tools/seq/main.go
new file mode 100644
index 0000000..bdff249
--- /dev/null
+++ b/tools/seq/main.go
@@ -0,0 +1,26 @@
+package main
+
+import (
+ "flag"
+ "fmt"
+ "os"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/tools/seq/internal"
+)
+
+func main() {
+ var (
+ config string
+ second int
+ )
+ flag.StringVar(&config, "c", "", "config directory")
+	flag.IntVar(&second, "sec", 3600*24, "seconds to wait before deleting the original Redis seq keys after conversion")
+ flag.Parse()
+ if err := internal.Main(config, time.Duration(second)*time.Second); err != nil {
+ fmt.Println("seq task", err)
+ os.Exit(1)
+ return
+ }
+ fmt.Println("seq task success!")
+}
diff --git a/tools/url2im/main.go b/tools/url2im/main.go
new file mode 100644
index 0000000..1631738
--- /dev/null
+++ b/tools/url2im/main.go
@@ -0,0 +1,119 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "flag"
+ "log"
+ "os"
+ "path/filepath"
+ "time"
+
+ "git.imall.cloud/openim/open-im-server-deploy/tools/url2im/pkg"
+)
+
+/*take.txt
+{"url":"http://xxx/xxxx","name":"xxxx","contentType":"image/jpeg"}
+{"url":"http://xxx/xxxx","name":"xxxx","contentType":"image/jpeg"}
+{"url":"http://xxx/xxxx","name":"xxxx","contentType":"image/jpeg"}
+*/
+
+func main() {
+ var conf pkg.Config // Configuration object, '*' denotes required fields
+
+	// *Required*: Path to the task file (one JSON object per line, see take.txt above)
+	flag.StringVar(&conf.TaskPath, "task", "take.txt", "Path to the task file")
+
+ // Optional: Path for the progress log file
+ flag.StringVar(&conf.ProgressPath, "progress", "", "Path for the progress log file")
+
+ // Number of concurrent operations
+ flag.IntVar(&conf.Concurrency, "concurrency", 1, "Number of concurrent operations")
+
+ // Number of retry attempts
+ flag.IntVar(&conf.Retry, "retry", 1, "Number of retry attempts")
+
+ // Optional: Path for the temporary directory
+ flag.StringVar(&conf.TempDir, "temp", "", "Path for the temporary directory")
+
+ // Cache size in bytes (downloads move to disk when exceeded)
+ flag.Int64Var(&conf.CacheSize, "cache", 1024*1024*100, "Cache size in bytes")
+
+ // Request timeout in milliseconds
+ flag.Int64Var((*int64)(&conf.Timeout), "timeout", 5000, "Request timeout in milliseconds")
+
+ // *Required*: API endpoint for the IM service
+ flag.StringVar(&conf.Api, "api", "http://127.0.0.1:10002", "API endpoint for the IM service")
+
+ // IM administrator's user ID
+ flag.StringVar(&conf.UserID, "userID", "openIM123456", "IM administrator's user ID")
+
+ // Secret for the IM configuration
+ flag.StringVar(&conf.Secret, "secret", "openIM123", "Secret for the IM configuration")
+
+ flag.Parse()
+ if !filepath.IsAbs(conf.TaskPath) {
+ var err error
+ conf.TaskPath, err = filepath.Abs(conf.TaskPath)
+ if err != nil {
+ log.Println("get abs path err:", err)
+ return
+ }
+ }
+ if conf.ProgressPath == "" {
+ conf.ProgressPath = conf.TaskPath + ".progress.txt"
+ } else if !filepath.IsAbs(conf.ProgressPath) {
+ var err error
+ conf.ProgressPath, err = filepath.Abs(conf.ProgressPath)
+ if err != nil {
+ log.Println("get abs path err:", err)
+ return
+ }
+ }
+ if conf.TempDir == "" {
+ conf.TempDir = conf.TaskPath + ".temp"
+ }
+ if info, err := os.Stat(conf.TempDir); err == nil {
+ if !info.IsDir() {
+			log.Printf("temp dir %s is not a directory\n", conf.TempDir)
+ return
+ }
+ } else if os.IsNotExist(err) {
+ if err := os.MkdirAll(conf.TempDir, os.ModePerm); err != nil {
+ log.Printf("mkdir temp dir %s err %+v\n", conf.TempDir, err)
+ return
+ }
+ defer os.RemoveAll(conf.TempDir)
+ } else {
+ log.Println("get temp dir err:", err)
+ return
+ }
+ if conf.Concurrency <= 0 {
+ conf.Concurrency = 1
+ }
+ if conf.Retry <= 0 {
+ conf.Retry = 1
+ }
+ if conf.CacheSize <= 0 {
+ conf.CacheSize = 1024 * 1024 * 100 // 100M
+ }
+ if conf.Timeout <= 0 {
+ conf.Timeout = 5000
+ }
+ conf.Timeout = conf.Timeout * time.Millisecond
+ if err := pkg.Run(conf); err != nil {
+ log.Println("main err:", err)
+ }
+}
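
One subtlety in `main` above: the timeout flag is registered directly on `conf.Timeout` by casting `*time.Duration` to `*int64`, which is legal because `time.Duration`'s underlying type is `int64`; the parsed value is then scaled from a bare millisecond count into a real `Duration`. A minimal sketch of the same trick:

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	var timeout time.Duration
	fs := flag.NewFlagSet("demo", flag.PanicOnError)
	// time.Duration's underlying type is int64, so this pointer cast is valid.
	fs.Int64Var((*int64)(&timeout), "timeout", 5000, "timeout in milliseconds")
	fs.Parse([]string{"-timeout", "2500"})
	// timeout now holds a raw count of 2500; multiply by time.Millisecond
	// to reinterpret it as 2500ms, exactly as url2im's main does.
	timeout = timeout * time.Millisecond
	if timeout != 2500*time.Millisecond {
		panic("unexpected timeout")
	}
	fmt.Println(timeout)
}
```

The scaling step is essential: without it, a flag value of `5000` would mean 5000 nanoseconds, not 5 seconds.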
diff --git a/tools/url2im/pkg/api.go b/tools/url2im/pkg/api.go
new file mode 100644
index 0000000..25eb76c
--- /dev/null
+++ b/tools/url2im/pkg/api.go
@@ -0,0 +1,124 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+
+ "git.imall.cloud/openim/protocol/auth"
+ "git.imall.cloud/openim/protocol/third"
+ "github.com/openimsdk/tools/errs"
+)
+
+type Api struct {
+ Api string
+ UserID string
+ Secret string
+ Token string
+ Client *http.Client
+}
+
+func (a *Api) apiPost(ctx context.Context, path string, req any, resp any) error {
+ operationID, _ := ctx.Value("operationID").(string)
+ if operationID == "" {
+ return errs.New("call api operationID is empty")
+ }
+ reqBody, err := json.Marshal(req)
+ if err != nil {
+ return err
+ }
+ request, err := http.NewRequestWithContext(ctx, http.MethodPost, a.Api+path, bytes.NewReader(reqBody))
+ if err != nil {
+ return err
+ }
+ DefaultRequestHeader(request.Header)
+ request.ContentLength = int64(len(reqBody))
+ request.Header.Set("Content-Type", "application/json")
+ request.Header.Set("operationID", operationID)
+ if a.Token != "" {
+ request.Header.Set("token", a.Token)
+ }
+ response, err := a.Client.Do(request)
+ if err != nil {
+ return err
+ }
+ defer response.Body.Close()
+ body, err := io.ReadAll(response.Body)
+ if err != nil {
+ return err
+ }
+ if response.StatusCode != http.StatusOK {
+ return fmt.Errorf("api %s status %s body %s", path, response.Status, body)
+ }
+ var baseResponse struct {
+ ErrCode int `json:"errCode"`
+ ErrMsg string `json:"errMsg"`
+ ErrDlt string `json:"errDlt"`
+ Data json.RawMessage `json:"data"`
+ }
+ if err := json.Unmarshal(body, &baseResponse); err != nil {
+ return err
+ }
+ if baseResponse.ErrCode != 0 {
+ return fmt.Errorf("api %s errCode %d errMsg %s errDlt %s", path, baseResponse.ErrCode, baseResponse.ErrMsg, baseResponse.ErrDlt)
+ }
+ if resp != nil {
+ if err := json.Unmarshal(baseResponse.Data, resp); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (a *Api) GetAdminToken(ctx context.Context) (string, error) {
+ req := auth.GetAdminTokenReq{
+ UserID: a.UserID,
+ Secret: a.Secret,
+ }
+ var resp auth.GetAdminTokenResp
+ if err := a.apiPost(ctx, "/auth/get_admin_token", &req, &resp); err != nil {
+ return "", err
+ }
+ return resp.Token, nil
+}
+
+func (a *Api) GetPartLimit(ctx context.Context) (*third.PartLimitResp, error) {
+ var resp third.PartLimitResp
+ if err := a.apiPost(ctx, "/object/part_limit", &third.PartLimitReq{}, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (a *Api) InitiateMultipartUpload(ctx context.Context, req *third.InitiateMultipartUploadReq) (*third.InitiateMultipartUploadResp, error) {
+ var resp third.InitiateMultipartUploadResp
+ if err := a.apiPost(ctx, "/object/initiate_multipart_upload", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (a *Api) CompleteMultipartUpload(ctx context.Context, req *third.CompleteMultipartUploadReq) (string, error) {
+ var resp third.CompleteMultipartUploadResp
+ if err := a.apiPost(ctx, "/object/complete_multipart_upload", req, &resp); err != nil {
+ return "", err
+ }
+ return resp.Url, nil
+}
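
Every response handled by `apiPost` above is wrapped in a common envelope (`errCode`/`errMsg`/`errDlt` plus a raw `data` payload) and decoded in two stages with `json.RawMessage`, so the outer error fields are checked before the caller's concrete type is involved. A standalone sketch of that decode (the `envelope`/`decode` names and the sample JSON are illustrative, not the tool's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope mirrors the anonymous baseResponse struct in apiPost: outer fields
// are decoded first while data stays raw until the target type is known.
type envelope struct {
	ErrCode int             `json:"errCode"`
	ErrMsg  string          `json:"errMsg"`
	ErrDlt  string          `json:"errDlt"`
	Data    json.RawMessage `json:"data"`
}

func decode(body []byte, out any) error {
	var e envelope
	if err := json.Unmarshal(body, &e); err != nil {
		return err
	}
	if e.ErrCode != 0 {
		return fmt.Errorf("errCode %d errMsg %s errDlt %s", e.ErrCode, e.ErrMsg, e.ErrDlt)
	}
	if out != nil {
		// Second stage: decode the raw payload into the caller's type.
		return json.Unmarshal(e.Data, out)
	}
	return nil
}

func main() {
	body := []byte(`{"errCode":0,"errMsg":"","errDlt":"","data":{"token":"abc"}}`)
	var resp struct {
		Token string `json:"token"`
	}
	if err := decode(body, &resp); err != nil {
		panic(err)
	}
	fmt.Println(resp.Token)
}
```

Keeping `data` as `json.RawMessage` means a non-zero `errCode` short-circuits before any payload parsing, and callers that need no payload can simply pass `nil`.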
diff --git a/tools/url2im/pkg/buffer.go b/tools/url2im/pkg/buffer.go
new file mode 100644
index 0000000..b4c1046
--- /dev/null
+++ b/tools/url2im/pkg/buffer.go
@@ -0,0 +1,110 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "bytes"
+ "io"
+ "os"
+)
+
+type ReadSeekSizeCloser interface {
+ io.ReadSeekCloser
+ Size() int64
+}
+
+func NewReader(r io.Reader, max int64, path string) (ReadSeekSizeCloser, error) {
+ buf := make([]byte, max+1)
+ n, err := io.ReadFull(r, buf)
+ if err == nil {
+ f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o666)
+ if err != nil {
+ return nil, err
+ }
+ var ok bool
+ defer func() {
+ if !ok {
+ _ = f.Close()
+ _ = os.Remove(path)
+ }
+ }()
+ if _, err := f.Write(buf[:n]); err != nil {
+ return nil, err
+ }
+ cn, err := io.Copy(f, r)
+ if err != nil {
+ return nil, err
+ }
+ if _, err := f.Seek(0, io.SeekStart); err != nil {
+ return nil, err
+ }
+ ok = true
+ return &fileBuffer{
+ f: f,
+ n: cn + int64(n),
+ }, nil
+ } else if err == io.EOF || err == io.ErrUnexpectedEOF {
+ return &memoryBuffer{
+ r: bytes.NewReader(buf[:n]),
+ }, nil
+ } else {
+ return nil, err
+ }
+}
+
+type fileBuffer struct {
+ n int64
+ f *os.File
+}
+
+func (r *fileBuffer) Read(p []byte) (n int, err error) {
+ return r.f.Read(p)
+}
+
+func (r *fileBuffer) Seek(offset int64, whence int) (int64, error) {
+ return r.f.Seek(offset, whence)
+}
+
+func (r *fileBuffer) Size() int64 {
+ return r.n
+}
+
+func (r *fileBuffer) Close() error {
+ name := r.f.Name()
+ if err := r.f.Close(); err != nil {
+ return err
+ }
+ return os.Remove(name)
+}
+
+type memoryBuffer struct {
+ r *bytes.Reader
+}
+
+func (r *memoryBuffer) Read(p []byte) (n int, err error) {
+ return r.r.Read(p)
+}
+
+func (r *memoryBuffer) Seek(offset int64, whence int) (int64, error) {
+ return r.r.Seek(offset, whence)
+}
+
+func (r *memoryBuffer) Close() error {
+ return nil
+}
+
+func (r *memoryBuffer) Size() int64 {
+ return r.r.Size()
+}
diff --git a/tools/url2im/pkg/config.go b/tools/url2im/pkg/config.go
new file mode 100644
index 0000000..740e748
--- /dev/null
+++ b/tools/url2im/pkg/config.go
@@ -0,0 +1,30 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import "time"
+
+type Config struct {
+ TaskPath string
+ ProgressPath string
+ Concurrency int
+ Retry int
+ Timeout time.Duration
+ Api string
+ UserID string
+ Secret string
+ TempDir string
+ CacheSize int64
+}
diff --git a/tools/url2im/pkg/http.go b/tools/url2im/pkg/http.go
new file mode 100644
index 0000000..50fb30c
--- /dev/null
+++ b/tools/url2im/pkg/http.go
@@ -0,0 +1,21 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import "net/http"
+
+func DefaultRequestHeader(header http.Header) {
+ header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36")
+}
diff --git a/tools/url2im/pkg/manage.go b/tools/url2im/pkg/manage.go
new file mode 100644
index 0000000..4e46002
--- /dev/null
+++ b/tools/url2im/pkg/manage.go
@@ -0,0 +1,400 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "bufio"
+ "context"
+ "crypto/md5"
+ "encoding/hex"
+ "encoding/json"
+ "fmt"
+ "io"
+ "log"
+ "net/http"
+ "net/url"
+ "os"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/openimsdk/tools/errs"
+
+ "git.imall.cloud/openim/protocol/third"
+)
+
+type Upload struct {
+ URL string `json:"url"`
+ Name string `json:"name"`
+ ContentType string `json:"contentType"`
+}
+
+type Task struct {
+ Index int
+ Upload Upload
+}
+
+type PartInfo struct {
+ ContentType string
+ PartSize int64
+ PartNum int
+ FileMd5 string
+ PartMd5 string
+ PartSizes []int64
+ PartMd5s []string
+}
+
+func Run(conf Config) error {
+ m := &Manage{
+ prefix: time.Now().Format("20060102150405"),
+ conf: &conf,
+ ctx: context.Background(),
+ }
+ return m.Run()
+}
+
+type Manage struct {
+ conf *Config
+ ctx context.Context
+ api *Api
+ partLimit *third.PartLimitResp
+ prefix string
+ tasks chan Task
+ id uint64
+ success int64
+ failed int64
+}
+
+func (m *Manage) tempFilePath() string {
+ return filepath.Join(m.conf.TempDir, fmt.Sprintf("%s_%d", m.prefix, atomic.AddUint64(&m.id, 1)))
+}
+
+func (m *Manage) Run() error {
+ defer func(start time.Time) {
+ log.Printf("run time %s\n", time.Since(start))
+ }(time.Now())
+ m.api = &Api{
+ Api: m.conf.Api,
+ UserID: m.conf.UserID,
+ Secret: m.conf.Secret,
+ Client: &http.Client{Timeout: m.conf.Timeout},
+ }
+ var err error
+ ctx := context.WithValue(m.ctx, "operationID", fmt.Sprintf("%s_init", m.prefix))
+ m.api.Token, err = m.api.GetAdminToken(ctx)
+ if err != nil {
+ return err
+ }
+ m.partLimit, err = m.api.GetPartLimit(ctx)
+ if err != nil {
+ return err
+ }
+ progress, err := ReadProgress(m.conf.ProgressPath)
+ if err != nil {
+ return err
+ }
+ progressFile, err := os.OpenFile(m.conf.ProgressPath, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0666)
+ if err != nil {
+ return err
+ }
+ var mutex sync.Mutex
+ writeSuccessIndex := func(index int) {
+ mutex.Lock()
+ defer mutex.Unlock()
+ if _, err := progressFile.Write([]byte(strconv.Itoa(index) + "\n")); err != nil {
+ log.Printf("write progress err: %v\n", err)
+ }
+ }
+ file, err := os.Open(m.conf.TaskPath)
+ if err != nil {
+ return err
+ }
+ m.tasks = make(chan Task, m.conf.Concurrency*2)
+ go func() {
+ defer file.Close()
+ defer close(m.tasks)
+ scanner := bufio.NewScanner(file)
+ var (
+ index int
+ num int
+ )
+ for scanner.Scan() {
+ line := strings.TrimSpace(scanner.Text())
+ if line == "" {
+ continue
+ }
+ index++
+ if progress.IsUploaded(index) {
+ log.Printf("index: %d already uploaded %s\n", index, line)
+ continue
+ }
+ var upload Upload
+ if err := json.Unmarshal([]byte(line), &upload); err != nil {
+ log.Printf("index: %d json.Unmarshal(%s) err: %v", index, line, err)
+ continue
+ }
+ num++
+ m.tasks <- Task{
+ Index: index,
+ Upload: upload,
+ }
+ }
+ if num == 0 {
+ log.Println("mark all completed")
+ }
+ }()
+ var wg sync.WaitGroup
+ wg.Add(m.conf.Concurrency)
+ for i := 0; i < m.conf.Concurrency; i++ {
+ go func(tid int) {
+ defer wg.Done()
+ for task := range m.tasks {
+ var success bool
+ for n := 0; n < m.conf.Retry; n++ {
+ ctx := context.WithValue(m.ctx, "operationID", fmt.Sprintf("%s_%d_%d_%d", m.prefix, tid, task.Index, n+1))
+ if urlRaw, err := m.RunTask(ctx, task); err == nil {
+ writeSuccessIndex(task.Index)
+ log.Println("index:", task.Index, "upload success", "urlRaw", urlRaw)
+ success = true
+ break
+ } else {
+ log.Printf("index: %d upload: %+v err: %v", task.Index, task.Upload, err)
+ }
+ }
+ if success {
+ atomic.AddInt64(&m.success, 1)
+ } else {
+ atomic.AddInt64(&m.failed, 1)
+ log.Printf("index: %d upload: %+v failed", task.Index, task.Upload)
+ }
+ }
+ }(i + 1)
+ }
+ wg.Wait()
+ log.Printf("execution completed success %d failed %d\n", m.success, m.failed)
+ return nil
+}
+
+func (m *Manage) RunTask(ctx context.Context, task Task) (string, error) {
+ resp, err := m.HttpGet(ctx, task.Upload.URL)
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+ reader, err := NewReader(resp.Body, m.conf.CacheSize, m.tempFilePath())
+ if err != nil {
+ return "", err
+ }
+ defer reader.Close()
+ part, err := m.getPartInfo(ctx, reader, reader.Size())
+ if err != nil {
+ return "", err
+ }
+ var contentType string
+ if task.Upload.ContentType == "" {
+ contentType = part.ContentType
+ } else {
+ contentType = task.Upload.ContentType
+ }
+ initiateMultipartUploadResp, err := m.api.InitiateMultipartUpload(ctx, &third.InitiateMultipartUploadReq{
+ Hash: part.PartMd5,
+ Size: reader.Size(),
+ PartSize: part.PartSize,
+ MaxParts: -1,
+ Cause: "batch-import",
+ Name: task.Upload.Name,
+ ContentType: contentType,
+ })
+ if err != nil {
+ return "", err
+ }
+ if initiateMultipartUploadResp.Upload == nil {
+ return initiateMultipartUploadResp.Url, nil
+ }
+ if _, err := reader.Seek(0, io.SeekStart); err != nil {
+ return "", err
+ }
+ uploadParts := make([]*third.SignPart, part.PartNum)
+ for _, part := range initiateMultipartUploadResp.Upload.Sign.Parts {
+ uploadParts[part.PartNumber-1] = part
+ }
+ for i, currentPartSize := range part.PartSizes {
+ md5Reader := NewMd5Reader(io.LimitReader(reader, currentPartSize))
+ if err := m.doPut(ctx, m.api.Client, initiateMultipartUploadResp.Upload.Sign, uploadParts[i], md5Reader, currentPartSize); err != nil {
+ return "", err
+ }
+ if md5val := md5Reader.Md5(); md5val != part.PartMd5s[i] {
+ return "", fmt.Errorf("upload part %d failed, md5 not match, expect %s, got %s", i, part.PartMd5s[i], md5val)
+ }
+ }
+ urlRaw, err := m.api.CompleteMultipartUpload(ctx, &third.CompleteMultipartUploadReq{
+ UploadID: initiateMultipartUploadResp.Upload.UploadID,
+ Parts: part.PartMd5s,
+ Name: task.Upload.Name,
+ ContentType: contentType,
+ Cause: "batch-import",
+ })
+ if err != nil {
+ return "", err
+ }
+ return urlRaw, nil
+}
+
+func (m *Manage) partSize(size int64) (int64, error) {
+ if size <= 0 {
+ return 0, errs.New("size must be greater than 0")
+ }
+ if size > m.partLimit.MaxPartSize*int64(m.partLimit.MaxNumSize) {
+ return 0, errs.New("size must be less than", "size", m.partLimit.MaxPartSize*int64(m.partLimit.MaxNumSize))
+ }
+ if size <= m.partLimit.MinPartSize*int64(m.partLimit.MaxNumSize) {
+ return m.partLimit.MinPartSize, nil
+ }
+ partSize := size / int64(m.partLimit.MaxNumSize)
+ if size%int64(m.partLimit.MaxNumSize) != 0 {
+ partSize++
+ }
+ return partSize, nil
+}
+
+func (m *Manage) partMD5(parts []string) string {
+ s := strings.Join(parts, ",")
+ md5Sum := md5.Sum([]byte(s))
+ return hex.EncodeToString(md5Sum[:])
+}
+
+func (m *Manage) getPartInfo(ctx context.Context, r io.Reader, fileSize int64) (*PartInfo, error) {
+ partSize, err := m.partSize(fileSize)
+ if err != nil {
+ return nil, err
+ }
+ partNum := int(fileSize / partSize)
+ if fileSize%partSize != 0 {
+ partNum++
+ }
+ partSizes := make([]int64, partNum)
+ for i := 0; i < partNum; i++ {
+ partSizes[i] = partSize
+ }
+ partSizes[partNum-1] = fileSize - partSize*(int64(partNum)-1)
+ partMd5s := make([]string, partNum)
+ buf := make([]byte, 1024*8)
+ fileMd5 := md5.New()
+ var contentType string
+ for i := 0; i < partNum; i++ {
+ h := md5.New()
+ r := io.LimitReader(r, partSize)
+		for {
+			// Per the io.Reader contract, process any returned bytes
+			// before inspecting err: n > 0 may arrive together with io.EOF.
+			n, err := r.Read(buf)
+			if n > 0 {
+				if contentType == "" {
+					contentType = http.DetectContentType(buf[:n])
+				}
+				h.Write(buf[:n])
+				fileMd5.Write(buf[:n])
+			}
+			if err == io.EOF {
+				break
+			}
+			if err != nil {
+				return nil, err
+			}
+		}
+ partMd5s[i] = hex.EncodeToString(h.Sum(nil))
+ }
+ partMd5Val := m.partMD5(partMd5s)
+ fileMd5val := hex.EncodeToString(fileMd5.Sum(nil))
+ return &PartInfo{
+ ContentType: contentType,
+ PartSize: partSize,
+ PartNum: partNum,
+ FileMd5: fileMd5val,
+ PartMd5: partMd5Val,
+ PartSizes: partSizes,
+ PartMd5s: partMd5s,
+ }, nil
+}
+
+func (m *Manage) doPut(ctx context.Context, client *http.Client, sign *third.AuthSignParts, part *third.SignPart, reader io.Reader, size int64) error {
+ rawURL := part.Url
+ if rawURL == "" {
+ rawURL = sign.Url
+ }
+ if len(sign.Query)+len(part.Query) > 0 {
+ u, err := url.Parse(rawURL)
+ if err != nil {
+ return err
+ }
+ query := u.Query()
+ for i := range sign.Query {
+ v := sign.Query[i]
+ query[v.Key] = v.Values
+ }
+ for i := range part.Query {
+ v := part.Query[i]
+ query[v.Key] = v.Values
+ }
+ u.RawQuery = query.Encode()
+ rawURL = u.String()
+ }
+ req, err := http.NewRequestWithContext(ctx, http.MethodPut, rawURL, reader)
+ if err != nil {
+ return err
+ }
+ for i := range sign.Header {
+ v := sign.Header[i]
+ req.Header[v.Key] = v.Values
+ }
+ for i := range part.Header {
+ v := part.Header[i]
+ req.Header[v.Key] = v.Values
+ }
+ req.ContentLength = size
+ resp, err := client.Do(req)
+ if err != nil {
+ return err
+ }
+ defer func() {
+ _ = resp.Body.Close()
+ }()
+ body, err := io.ReadAll(resp.Body)
+ if err != nil {
+ return err
+ }
+	if resp.StatusCode/100 != 2 {
+ return fmt.Errorf("PUT %s part %d failed, status code %d, body %s", rawURL, part.PartNumber, resp.StatusCode, string(body))
+ }
+ return nil
+}
+
+func (m *Manage) HttpGet(ctx context.Context, url string) (*http.Response, error) {
+	request, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
+	if err != nil {
+		return nil, err
+	}
+	DefaultRequestHeader(request.Header)
+	response, err := m.api.Client.Do(request)
+	if err != nil {
+		return nil, err
+	}
+	if response.StatusCode != http.StatusOK {
+		_ = response.Body.Close()
+		return nil, fmt.Errorf("http get %s status %s", url, response.Status)
+	}
+	return response, nil
+}
diff --git a/tools/url2im/pkg/md5.go b/tools/url2im/pkg/md5.go
new file mode 100644
index 0000000..26b8d47
--- /dev/null
+++ b/tools/url2im/pkg/md5.go
@@ -0,0 +1,43 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "crypto/md5"
+ "encoding/hex"
+ "hash"
+ "io"
+)
+
+func NewMd5Reader(r io.Reader) *Md5Reader {
+ return &Md5Reader{h: md5.New(), r: r}
+}
+
+type Md5Reader struct {
+ h hash.Hash
+ r io.Reader
+}
+
+func (r *Md5Reader) Read(p []byte) (n int, err error) {
+	n, err = r.r.Read(p)
+	if n > 0 { // per the io.Reader contract, hash bytes even when err is io.EOF
+		r.h.Write(p[:n])
+	}
+	return
+}
+
+func (r *Md5Reader) Md5() string {
+ return hex.EncodeToString(r.h.Sum(nil))
+}
diff --git a/tools/url2im/pkg/progress.go b/tools/url2im/pkg/progress.go
new file mode 100644
index 0000000..5f30495
--- /dev/null
+++ b/tools/url2im/pkg/progress.go
@@ -0,0 +1,55 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "bufio"
+ "os"
+ "strconv"
+
+ "github.com/kelindar/bitmap"
+)
+
+func ReadProgress(path string) (*Progress, error) {
+ file, err := os.Open(path)
+ if err != nil {
+ if os.IsNotExist(err) {
+ return &Progress{}, nil
+ }
+ return nil, err
+ }
+ defer file.Close()
+ scanner := bufio.NewScanner(file)
+ var upload bitmap.Bitmap
+ for scanner.Scan() {
+ index, err := strconv.Atoi(scanner.Text())
+ if err != nil || index < 0 {
+ continue
+ }
+ upload.Set(uint32(index))
+ }
+ return &Progress{upload: upload}, nil
+}
+
+type Progress struct {
+ upload bitmap.Bitmap
+}
+
+func (p *Progress) IsUploaded(index int) bool {
+ if p == nil {
+ return false
+ }
+ return p.upload.Contains(uint32(index))
+}
diff --git a/tools/versionchecker/main.go b/tools/versionchecker/main.go
new file mode 100644
index 0000000..bec7daa
--- /dev/null
+++ b/tools/versionchecker/main.go
@@ -0,0 +1,113 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "os/exec"
+ "runtime"
+
+ "github.com/fatih/color"
+ "github.com/openimsdk/tools/utils/timeutil"
+)
+
+func ExecuteCommand(cmdName string, args ...string) (string, error) {
+ cmd := exec.Command(cmdName, args...)
+ var out bytes.Buffer
+ var stderr bytes.Buffer
+ cmd.Stdout = &out
+ cmd.Stderr = &stderr
+
+ err := cmd.Run()
+ if err != nil {
+ return "", fmt.Errorf("error executing %s: %v, stderr: %s", cmdName, err, stderr.String())
+ }
+ return out.String(), nil
+}
+
+func printTime() string {
+ formattedTime := timeutil.GetCurrentTimeFormatted()
+ return fmt.Sprintf("Current Date & Time: %s", formattedTime)
+}
+
+func getGoVersion() string {
+ version := runtime.Version()
+ goos := runtime.GOOS
+ goarch := runtime.GOARCH
+ return fmt.Sprintf("Go Version: %s\nOS: %s\nArchitecture: %s", version, goos, goarch)
+}
+
+func getDockerVersion() string {
+ version, err := ExecuteCommand("docker", "--version")
+ if err != nil {
+ return "Docker is not installed. Please install it to get the version."
+ }
+ return version
+}
+
+func getKubernetesVersion() string {
+	version, err := ExecuteCommand("kubectl", "version", "--client", "--short")
+	if err != nil {
+		return "kubectl is not installed. Please install it to get the version."
+	}
+	return version
+}
+
+func getGitVersion() string {
+	branch, err := ExecuteCommand("git", "branch", "--show-current")
+	if err != nil {
+		return "Git is not installed or this is not a git repository."
+	}
+	return branch
+}
+
+// // NOTE: You'll need to provide appropriate commands for OpenIM versions.
+// func getOpenIMServerVersion() string {
+// // Placeholder
+// openimVersion := version.GetSingleVersion()
+// return "OpenIM Server: " + openimVersion + "\n"
+// }
+
+// func getOpenIMClientVersion() (string, error) {
+// openIMClientVersion, err := version.GetClientVersion()
+// if err != nil {
+// return "", err
+// }
+// return "OpenIM Client: " + openIMClientVersion.ClientVersion + "\n", nil
+// }
+
+func main() {
+ // red := color.New(color.FgRed).SprintFunc()
+ // green := color.New(color.FgGreen).SprintFunc()
+ blue := color.New(color.FgBlue).SprintFunc()
+ // yellow := color.New(color.FgYellow).SprintFunc()
+ fmt.Println(blue("## Go Version"))
+ fmt.Println(getGoVersion())
+ fmt.Println(blue("## Branch Type"))
+ fmt.Println(getGitVersion())
+ fmt.Println(blue("## Docker Version"))
+ fmt.Println(getDockerVersion())
+ fmt.Println(blue("## Kubernetes Version"))
+ fmt.Println(getKubernetesVersion())
+ // fmt.Println(blue("## OpenIM Versions"))
+ // fmt.Println(getOpenIMServerVersion())
+ // clientVersion, err := getOpenIMClientVersion()
+ // if err != nil {
+ // fmt.Println(red("Error getting OpenIM Client Version: "), err)
+ // } else {
+ // fmt.Println(clientVersion)
+ // }
+}
diff --git a/tools/yamlfmt/main.go b/tools/yamlfmt/main.go
new file mode 100644
index 0000000..a8d3a76
--- /dev/null
+++ b/tools/yamlfmt/main.go
@@ -0,0 +1,72 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Command yamlfmt reformats YAML files in place (adapted from Prow tooling).
+package main
+
+import (
+ "flag"
+ "fmt"
+ "io"
+ "os"
+
+ "gopkg.in/yaml.v3"
+)
+
+func main() {
+	// Prow OWNERS files define the default indent as 2 spaces.
+ indent := flag.Int("indent", 2, "default indent")
+ flag.Parse()
+ for _, path := range flag.Args() {
+ sourceYaml, err := os.ReadFile(path)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
+ continue
+ }
+ rootNode, err := fetchYaml(sourceYaml)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
+ continue
+ }
+ writer, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o666)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
+ continue
+ }
+		err = streamYaml(writer, indent, rootNode)
+		if closeErr := writer.Close(); err == nil {
+			err = closeErr
+		}
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
+			continue
+		}
+ }
+}
+
+func fetchYaml(sourceYaml []byte) (*yaml.Node, error) {
+ rootNode := yaml.Node{}
+ err := yaml.Unmarshal(sourceYaml, &rootNode)
+ if err != nil {
+ return nil, err
+ }
+ return &rootNode, nil
+}
+
+func streamYaml(writer io.Writer, indent *int, in *yaml.Node) error {
+ encoder := yaml.NewEncoder(writer)
+ encoder.SetIndent(*indent)
+ err := encoder.Encode(in)
+ if err != nil {
+ return err
+ }
+ return encoder.Close()
+}
diff --git a/tools/yamlfmt/main_test.go b/tools/yamlfmt/main_test.go
new file mode 100644
index 0000000..0a72e49
--- /dev/null
+++ b/tools/yamlfmt/main_test.go
@@ -0,0 +1,158 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "reflect"
+ "testing"
+
+ "github.com/likexian/gokit/assert"
+ "gopkg.in/yaml.v3"
+)
+
+func Test_main(t *testing.T) {
+ sourceYaml := ` # See the OWNERS docs at https://go.k8s.io/owners
+approvers:
+- dep-approvers
+- thockin # Network
+- liggitt
+
+labels:
+- sig/architecture
+`
+
+ outputYaml := `# See the OWNERS docs at https://go.k8s.io/owners
+approvers:
+ - dep-approvers
+ - thockin # Network
+ - liggitt
+labels:
+ - sig/architecture
+`
+ node, _ := fetchYaml([]byte(sourceYaml))
+ var output bytes.Buffer
+ indent := 2
+ writer := bufio.NewWriter(&output)
+ _ = streamYaml(writer, &indent, node)
+ _ = writer.Flush()
+	assert.Equal(t, outputYaml, output.String(), "yaml was not formatted correctly")
+}
+
+func Test_fetchYaml(t *testing.T) {
+ type args struct {
+ sourceYaml []byte
+ }
+ tests := []struct {
+ name string
+ args args
+ want *yaml.Node
+ wantErr bool
+ }{
+ {
+ name: "Valid YAML",
+ args: args{sourceYaml: []byte("key: value")},
+ want: &yaml.Node{
+ Kind: yaml.MappingNode,
+ Tag: "!!map",
+ Value: "",
+ Content: []*yaml.Node{
+ {
+ Kind: yaml.ScalarNode,
+ Tag: "!!str",
+ Value: "key",
+ },
+ {
+ Kind: yaml.ScalarNode,
+ Tag: "!!str",
+ Value: "value",
+ },
+ },
+ },
+ wantErr: false,
+ },
+ {
+ name: "Invalid YAML",
+			args:    args{sourceYaml: []byte("key: [unclosed")},
+ want: nil,
+ wantErr: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ got, err := fetchYaml(tt.args.sourceYaml)
+ if (err != nil) != tt.wantErr {
+ t.Errorf("fetchYaml() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ if !reflect.DeepEqual(got, tt.want) {
+ t.Errorf("fetchYaml() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+func Test_streamYaml(t *testing.T) {
+ type args struct {
+ indent *int
+ in *yaml.Node
+ }
+ defaultIndent := 2
+ tests := []struct {
+ name string
+ args args
+ wantWriter string
+ wantErr bool
+ }{
+ {
+ name: "Valid YAML node with default indent",
+ args: args{
+ indent: &defaultIndent,
+ in: &yaml.Node{
+ Kind: yaml.MappingNode,
+ Tag: "!!map",
+ Value: "",
+ Content: []*yaml.Node{
+ {
+ Kind: yaml.ScalarNode,
+ Tag: "!!str",
+ Value: "key",
+ },
+ {
+ Kind: yaml.ScalarNode,
+ Tag: "!!str",
+ Value: "value",
+ },
+ },
+ },
+ },
+ wantWriter: "key: value\n",
+ wantErr: false,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ writer := &bytes.Buffer{}
+ if err := streamYaml(writer, tt.args.indent, tt.args.in); (err != nil) != tt.wantErr {
+ t.Errorf("streamYaml() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ if gotWriter := writer.String(); gotWriter != tt.wantWriter {
+ t.Errorf("streamYaml() = %v, want %v", gotWriter, tt.wantWriter)
+ }
+ })
+ }
+}
diff --git a/version/version b/version/version
new file mode 100644
index 0000000..88d050b
--- /dev/null
+++ b/version/version
@@ -0,0 +1 @@
+main
\ No newline at end of file
diff --git a/version/version.go b/version/version.go
new file mode 100644
index 0000000..32ad278
--- /dev/null
+++ b/version/version.go
@@ -0,0 +1,14 @@
+package version
+
+import (
+ _ "embed"
+ "strings"
+)
+
+//go:embed version
+var Version string
+
+func init() {
+	Version = strings.TrimSpace(Version)
+}