Deploying a TiDB 5.2 Online HTAP Platform on CentOS 7

Bulut0907, published 2021-12-07 10:33:29

Contents
  • 1. Cluster planning
  • 2. Environment and system configuration (all 4 nodes)
    • 2.1 Disable system swap
    • 2.2 Synchronize time with NTP
    • 2.3 Disable Transparent Huge Pages (THP)
    • 2.4 Modify sysctl parameters
    • 2.5 Modify limits.conf
    • 2.6 Install the numactl and sshpass tools
    • 2.7 Configure passwordless SSH for the root user
    • 2.8 Disable SELinux
    • 2.9 Enable irqbalance NIC interrupt binding
  • 3. Deploy the cluster with TiUP (run on tidb1)
    • 3.1 Deploy the TiUP component
    • 3.2 Configure the cluster topology file
    • 3.3 Deploy the cluster with deploy
    • 3.4 Check the cluster status
    • 3.5 Start the cluster
  • 4. Verify the cluster

1. Cluster planning

2. Environment and system configuration (all 4 nodes)

2.1 Disable system swap

Using swap as a buffer when memory is insufficient degrades performance, so swap should be disabled.

Temporarily disable swap:

[root@tidb1 ~]# 
[root@tidb1 ~]# swapoff -a
[root@tidb1 ~]# 
[root@tidb1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:            972         142         731           7          98         707
Swap:             0           0           0
[root@tidb1 ~]#
[root@tidb3 ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf
[root@tidb3 ~]#
[root@tidb3 ~]# sysctl -p
vm.swappiness = 0
[root@tidb3 ~]#

To disable swap permanently, comment out the following line in /etc/fstab:

/dev/mapper/centos_centos-swap swap                    swap    defaults        0 0

The change takes effect after the server is rebooted.
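
If you prefer not to edit /etc/fstab by hand, here is a minimal sketch of commenting out the swap entry and verifying the result (assuming the swap line contains the pattern " swap "):

# Comment out any swap entries in /etc/fstab (a .bak backup is kept)
sed -i.bak '/ swap / s/^/#/' /etc/fstab
# Confirm the swap line is now commented out
grep swap /etc/fstab
# After swapoff -a (or a reboot), Swap should show 0 everywhere
free -m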

2.2 Synchronize time with NTP

TiDB requires the clocks of all servers to be synchronized in order to guarantee linearizable consistency for transactions under the ACID model.

For time synchronization, refer to the article on setting up NTP time synchronization on a CentOS 7 server.
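
As a quick sanity check (not part of the original steps), you can confirm that each node's clock is actually synchronized; which command applies depends on whether the node runs ntpd or chronyd:

# If ntpd is used
ntpstat
# If chronyd is used
chronyc tracking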

2.3 Disable Transparent Huge Pages (THP)

TiDB's memory access pattern is sparse, so enabling Transparent Huge Pages introduces extra latency.

Check the THP status; [always] means THP is enabled:

[root@tidb1 ~]# 
[root@tidb1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@tidb1 ~]# 

Use tuned to configure system optimization parameters. First check the currently active tuned profile:

[root@tidb1 ~]# 
[root@tidb1 ~]# tuned-adm list
Available profiles:
- balanced                    - General non-specialized tuned profile
- desktop                     - Optimize for the desktop use-case
- hpc-compute                 - Optimize for HPC compute workloads
- latency-performance         - Optimize for deterministic performance at the cost of increased power consumption
- network-latency             - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput          - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave                   - Optimize for low power consumption
- throughput-performance      - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest               - Optimize for running inside a virtual guest
- virtual-host                - Optimize for running KVM guests
Current active profile: virtual-guest
[root@tidb1 ~]# 

Current active profile: virtual-guest indicates that the operating system's current profile is virtual-guest.

Create a new profile that extends the current one and adds the optimization settings:

[root@tidb1 ~]#
[root@tidb1 ~]# mkdir /etc/tuned/virtual-guest-tidb-optimal
[root@tidb1 ~]#
[root@tidb1 ~]# touch /etc/tuned/virtual-guest-tidb-optimal/tuned.conf
[root@tidb1 ~]#
[root@tidb1 ~]# cat /etc/tuned/virtual-guest-tidb-optimal/tuned.conf
[main]
include=virtual-guest


[vm]
transparent_hugepages=never
[root@tidb1 ~]# 

Apply the new tuned profile:

[root@tidb1 ~]#
[root@tidb1 ~]# tuned-adm profile virtual-guest-tidb-optimal
[root@tidb1 ~]#
[root@tidb1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@tidb1 ~]# 

Next, disable transparent_hugepage defrag.

Check the defrag status:

[root@tidb1 ~]#
[root@tidb1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
[root@tidb1 ~]#

Temporarily disable defrag:

[root@tidb1 ~]# 
[root@tidb1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@tidb1 ~]# 
[root@tidb1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
[root@tidb1 ~]#

Permanently disable defrag by creating a script and calling it from rc.local at boot:

[root@tidb1 opt]# 
[root@tidb1 opt]# touch disable_transparent_hugepage_defrag.sh
[root@tidb1 opt]# 
[root@tidb1 opt]# cat disable_transparent_hugepage_defrag.sh 
echo never > /sys/kernel/mm/transparent_hugepage/defrag

[root@tidb1 opt]# 
[root@tidb1 opt]# chmod +x disable_transparent_hugepage_defrag.sh 
[root@tidb1 opt]# 
[root@tidb1 opt]# cat /etc/rc.d/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/opt/disable_transparent_hugepage_defrag.sh

[root@tidb1 opt]# 
[root@tidb1 opt]# chmod +x /etc/rc.d/rc.local
[root@tidb1 opt]# 

The change takes effect after the server is rebooted.
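
Since these THP settings must hold on all 4 nodes, here is a small sketch for checking them in one pass (assuming passwordless root SSH from section 2.7 is already configured):

# Verify THP and defrag are both [never] on every node
for host in tidb1 tidb2 tidb3 tidb4; do
  echo "== $host =="
  ssh root@$host 'cat /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag'
done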

2.4 Modify sysctl parameters
[root@tidb1 ~]# 
[root@tidb1 ~]# echo "fs.file-max = 1000000">> /etc/sysctl.conf
[root@tidb1 ~]# echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
[root@tidb1 ~]# echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
[root@tidb1 ~]# echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
[root@tidb1 ~]# echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
[root@tidb1 ~]# 
[root@tidb1 ~]# sysctl -p
vm.swappiness = 0
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
[root@tidb1 ~]# 
2.5 Modify limits.conf

Add the following to /etc/security/limits.conf:

tidb           soft    nofile          1000000
tidb           hard    nofile          1000000
tidb           soft    stack          32768
tidb           hard    stack          32768
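
After TiUP later creates the tidb user (see section 3.3), the limits can be verified with a quick sketch like the following; the numbers should match the values above:

# Print soft/hard nofile and soft/hard stack limits for the tidb user
su - tidb -c 'ulimit -n; ulimit -Hn; ulimit -s; ulimit -Hs'
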
2.6 Install the numactl and sshpass tools

Because the hardware of a machine is often more powerful than a single instance needs, deploying multiple TiDB or TiKV instances on one machine is sometimes considered. The NUMA core-binding tool is mainly used to prevent contention for CPU resources in that case.

[root@tidb1 ~]# 
[root@tidb1 ~]# yum -y install numactl
[root@tidb1 ~]#
[root@tidb1 ~]# yum -y install sshpass
[root@tidb1 ~]#
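
For reference, a minimal sketch of how numactl is typically used when binding an instance to one NUMA node (the command name below is just a placeholder):

# Show the machine's NUMA topology (nodes, CPUs, memory)
numactl --hardware
# Example: run a process bound to node 0's CPUs and memory
numactl --cpunodebind=0 --membind=0 some_command
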
2.7 Configure passwordless SSH for the root user

TiUP needs to log in to each node as root during check and deploy. Passwordless login can be set up, for example, by generating a key pair on tidb1 with ssh-keygen and copying it to every node with ssh-copy-id.

2.8 Disable SELinux

Temporarily disable SELinux:

[root@tidb3 ~]#
[root@tidb3 ~]# getenforce
Enforcing
[root@tidb3 ~]#
[root@tidb3 ~]# setenforce 0
[root@tidb3 ~]#
[root@tidb3 ~]# getenforce
Permissive
[root@tidb3 ~]#
[root@tidb3 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
[root@tidb3 ~]# 

Permanently disable SELinux by editing /etc/selinux/config and changing the following line:

SELINUX=disabled

The change takes effect after the server is rebooted.
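
A small sketch for making this change non-interactively and confirming it (assuming the file currently contains SELINUX=enforcing):

# Switch SELINUX from enforcing to disabled in the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Verify the change
grep '^SELINUX=' /etc/selinux/config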

2.9 Enable irqbalance NIC interrupt binding
[root@tidb1 ~]# 
[root@tidb1 ~]# systemctl restart irqbalance
[root@tidb1 ~]# 
[root@tidb1 ~]# systemctl status irqbalance
● irqbalance.service - irqbalance daemon
   Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2021-11-23 17:29:35 CST; 11s ago
 Main PID: 1091 (irqbalance)
   CGroup: /system.slice/irqbalance.service
           └─1091 /usr/sbin/irqbalance --foreground

Nov 23 17:29:35 tidb1 systemd[1]: Started irqbalance daemon.
[root@tidb1 ~]# 
  • The server must have at least 2 CPUs to enable this feature
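
The output above already shows the service as enabled; if it were not, a minimal sketch for starting irqbalance and making sure it also starts on boot would be:

# Enable irqbalance at boot, start it now, and confirm
systemctl enable irqbalance
systemctl start irqbalance
systemctl is-enabled irqbalance
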
3. Deploy the cluster with TiUP (run on tidb1)

The TiUP cluster component can deploy, start, stop, destroy, scale out/in, and upgrade a TiDB cluster, as well as manage TiDB cluster parameters.

TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system.

3.1 Deploy the TiUP component

Download the TiDB server offline package, which includes TiUP:

[root@tidb1 opt]#
[root@tidb1 opt]# wget https://download.pingcap.org/tidb-community-server-v5.2.2-linux-amd64.tar.gz
[root@tidb1 opt]#

Extract and install:

[root@tidb1 opt]# 
[root@tidb1 opt]# tar -zxvf tidb-community-server-v5.2.2-linux-amd64.tar.gz
[root@tidb1 opt]# 
[root@tidb1 opt]# cd tidb-community-server-v5.2.2-linux-amd64
[root@tidb1 tidb-community-server-v5.2.2-linux-amd64]# 
[root@tidb1 tidb-community-server-v5.2.2-linux-amd64]# sh local_install.sh 
Disable telemetry success
Successfully set mirror to /opt/tidb-community-server-v5.2.2-linux-amd64
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try:   tiup playground
===============================================
[root@tidb1 tidb-community-server-v5.2.2-linux-amd64]#
[root@tidb1 tidb-community-server-v5.2.2-linux-amd64]# source /root/.bash_profile
[root@tidb1 tidb-community-server-v5.2.2-linux-amd64]# 
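
A quick check that TiUP is now on the PATH and that the cluster component is available:

# Show the TiUP version installed by local_install.sh
tiup --version
# Show the version of the cluster component
tiup cluster --version
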
3.2 Configure the cluster topology file
  1. Generate the cluster topology file
[root@tidb1 opt]# 
[root@tidb1 opt]# mkdir tidb-v5.2.2-install
[root@tidb1 opt]#
[root@tidb1 opt]# tiup cluster template > /opt/tidb-v5.2.2-install/topology.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster template
[root@tidb1 opt]# 
[root@tidb1 opt]# ll tidb-v5.2.2-install/
total 12
-rw-r--r--. 1 root root 10601 Nov 23 13:22 topology.yaml
[root@tidb1 opt]#

Comment out everything in topology.yaml, then, using the generated template as a reference, add the following content to topology.yaml:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/opt/tidb-v5.2.2-install/tidb-deploy"
  data_dir: "/opt/tidb-v5.2.2-install/tidb-data"
  arch: "amd64"
  
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  
server_configs:
  pd:
    replication.enable-placement-rules: true

pd_servers:
  - host: tidb1
  - host: tidb2
  - host: tidb3
  - host: tidb4

tidb_servers:
  - host: tidb1
  - host: tidb2
  - host: tidb3
  - host: tidb4

tikv_servers:
  - host: tidb1
  - host: tidb2
  
tiflash_servers:
  - host: tidb3	
    data_dir: /opt/tidb-v5.2.2-install/tidb-data/tiflash-9000
    deploy_dir: /opt/tidb-v5.2.2-install/tidb-deploy/tiflash-9000
  - host: tidb4
    data_dir: /opt/tidb-v5.2.2-install/tidb-data/tiflash-9000
    deploy_dir: /opt/tidb-v5.2.2-install/tidb-deploy/tiflash-9000

monitoring_servers:
  - host: tidb1

grafana_servers:
  - host: tidb1

alertmanager_servers:
  - host: tidb1

Notes on the parameters:

  • TiKV and TiFlash must be deployed on different servers, or at least on different disks (not just different directories)
  • Using TiFlash requires PD's Placement Rules feature, so replication.enable-placement-rules: true must be set
  2. Check the deployment for potential problems
[root@tidb1 ~]# 
[root@tidb1 ~]# tiup cluster check /opt/tidb-v5.2.2-install/topology.yaml --user root
[root@tidb1 ~]# 
  • If any check result is Fail, resolve the error according to the hint in the output
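
If some items fail, TiUP can also attempt to fix them automatically before you re-run the check; a minimal sketch using the check command's --apply option:

# Attempt to automatically repair failed check items, then re-check
tiup cluster check /opt/tidb-v5.2.2-install/topology.yaml --apply --user root
tiup cluster check /opt/tidb-v5.2.2-install/topology.yaml --user root
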
3.3 Deploy the cluster with deploy
[root@tidb1 ~]#
[root@tidb1 ~]# tiup cluster deploy tidb-cluster v5.2.2 /opt/tidb-v5.2.2-install/topology.yaml --user root
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster deploy tidb-cluster v5.2.2 /opt/tidb-v5.2.2-install/topology.yaml --user root
...... (output omitted) ......
Cluster `tidb-cluster` deployed successfully, you can start it with command: `tiup cluster start tidb-cluster`
[root@tidb1 ~]#
[root@tidb1 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
tidb-cluster  tidb  v5.2.2   /root/.tiup/storage/cluster/clusters/tidb-cluster  /root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
[root@tidb1 ~]# 
  • The tidb user is created automatically on the target servers
  • A cluster can be destroyed with tiup cluster destroy tidb-cluster
3.4 Check the cluster status
[root@tidb1 ~]#
[root@tidb1 ~]# tiup cluster display tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v5.2.2
Deploy user:        tidb
SSH type:           builtin
ID           Role          Host   Ports                            OS/Arch       Status  Data Dir                                              Deploy Dir
--           ----          ----   -----                            -------       ------  --------                                              ----------
tidb1:9093   alertmanager  tidb1  9093/9094                        linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/alertmanager-9093  /opt/tidb-v5.2.2-install/tidb-deploy/alertmanager-9093
tidb1:3000   grafana       tidb1  3000                             linux/x86_64  Down    -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/grafana-3000
tidb1:2379   pd            tidb1  2379/2380                        linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb2:2379   pd            tidb2  2379/2380                        linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb3:2379   pd            tidb3  2379/2380                        linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb4:2379   pd            tidb4  2379/2380                        linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb1:9090   prometheus    tidb1  9090                             linux/x86_64  Down    /opt/tidb-v5.2.2-install/tidb-data/prometheus-9090    /opt/tidb-v5.2.2-install/tidb-deploy/prometheus-9090
tidb1:4000   tidb          tidb1  4000/10080                       linux/x86_64  Down    -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/tidb-4000
tidb2:4000   tidb          tidb2  4000/10080                       linux/x86_64  Down    -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/tidb-4000
tidb3:4000   tidb          tidb3  4000/10080                       linux/x86_64  Down    -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/tidb-4000
tidb4:4000   tidb          tidb4  4000/10080                       linux/x86_64  Down    -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/tidb-4000
tidb3:9000   tiflash       tidb3  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /opt/tidb-v5.2.2-install/tidb-data/tiflash-9000       /opt/tidb-v5.2.2-install/tidb-deploy/tiflash-9000
tidb4:9000   tiflash       tidb4  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /opt/tidb-v5.2.2-install/tidb-data/tiflash-9000       /opt/tidb-v5.2.2-install/tidb-deploy/tiflash-9000
tidb1:20160  tikv          tidb1  20160/20180                      linux/x86_64  N/A     /opt/tidb-v5.2.2-install/tidb-data/tikv-20160         /opt/tidb-v5.2.2-install/tidb-deploy/tikv-20160
tidb2:20160  tikv          tidb2  20160/20180                      linux/x86_64  N/A     /opt/tidb-v5.2.2-install/tidb-data/tikv-20160         /opt/tidb-v5.2.2-install/tidb-deploy/tikv-20160
Total nodes: 15
[root@tidb1 ~]# 

3.5 Start the cluster
[root@tidb1 ~]#
[root@tidb1 ~]# tiup cluster start tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster start tidb-cluster
...... (output omitted) ......
Started cluster `tidb-cluster` successfully
[root@tidb1 ~]#
4. Verify the cluster

Method 1: check with the tiup cluster command

[root@tidb1 ~]# 
[root@tidb1 ~]# tiup cluster display tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v5.2.2
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://tidb3:2379/dashboard
ID           Role          Host   Ports                            OS/Arch       Status  Data Dir                                              Deploy Dir
--           ----          ----   -----                            -------       ------  --------                                              ----------
tidb1:9093   alertmanager  tidb1  9093/9094                        linux/x86_64  Up      /opt/tidb-v5.2.2-install/tidb-data/alertmanager-9093  /opt/tidb-v5.2.2-install/tidb-deploy/alertmanager-9093
tidb1:3000   grafana       tidb1  3000                             linux/x86_64  Up      -                                                     /opt/tidb-v5.2.2-install/tidb-deploy/grafana-3000
tidb1:2379   pd            tidb1  2379/2380                        linux/x86_64  Up      /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb2:2379   pd            tidb2  2379/2380                        linux/x86_64  Up      /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb3:2379   pd            tidb3  2379/2380                        linux/x86_64  Up|UI   /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb4:2379   pd            tidb4  2379/2380                        linux/x86_64  Up|L    /opt/tidb-v5.2.2-install/tidb-data/pd-2379            /opt/tidb-v5.2.2-install/tidb-deploy/pd-2379
tidb1:9090   prometheus    tidb1  9090                             linux/x86_64  Up      /opt/tidb-v5.2.2-install/tidb-data/prometheus-9090    /opt/tidb-v5.2.2-install/tidb-deploy/prometheus-9090
...... (output omitted) ......
tidb2:20160  tikv          tidb2  20160/20180                      linux/x86_64  Up      /opt/tidb-v5.2.2-install/tidb-data/tikv-20160         /opt/tidb-v5.2.2-install/tidb-deploy/tikv-20160
Total nodes: 15
  • Note that only the PD on tidb3 hosts the UI, which is why http://tidb3:2379/dashboard is the address that can be opened

Method 2: check with TiDB Dashboard

Visit http://tidb3:2379/dashboard. The login username and password are those of the TiDB database's root user; the password is empty by default. (Screenshot: TiDB Dashboard)

Method 3: check with Grafana

Visit http://tidb1:3000. The default login username and password are admin / admin. (Screenshots: Home Dashboard, TiDB)

Method 4: log in to the database and run DDL and DML operations and SQL queries

Connect to any address defined under tidb_servers (tidb1, tidb2, tidb3, tidb4). The default port is 4000, the default username is root, and the default password is empty.

[root@tidb1 ~]#
[root@tidb1 ~]# mysql -h 192.168.23.61 -P 4000 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 27
Server version: 5.7.25-TiDB-v5.2.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v5.2.2
Edition: Community
Git Commit Hash: da1c21fd45a4ea5900ac16d2f4a248143f378d18
Git Branch: heads/refs/tags/v5.2.2
UTC Build Time: 2021-10-20 06:08:33
GoVersion: go1.16.4
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
1 row in set (0.00 sec)

mysql> 
mysql> select store_id, address, store_state, store_state_name, capacity, available, uptime from information_schema.tikv_store_status;
+----------+-------------+-------------+------------------+----------+-----------+--------------------+
| store_id | address     | store_state | store_state_name | capacity | available | uptime             |
+----------+-------------+-------------+------------------+----------+-----------+--------------------+
|        4 | tidb2:20160 |           0 | Up               | 49.98GiB | 42.77GiB  | 2h16m41.592563087s |
|       78 | tidb3:3930  |           0 | Up               | 49.98GiB | 49.98GiB  | 2h16m31.660965528s |
|       79 | tidb4:3930  |           0 | Up               | 49.98GiB | 49.98GiB  | 2h16m31.061873769s |
|        1 | tidb1:20160 |           0 | Up               | 49.98GiB | 37.63GiB  | 2h16m42.137683829s |
+----------+-------------+-------------+------------------+----------+-----------+--------------------+
4 rows in set (0.00 sec)

mysql> 
mysql> create database test_db;
Query OK, 0 rows affected (0.12 sec)

mysql>
mysql> use test_db;
Database changed
mysql>
mysql> create table test_tb(id int, name varchar(64));
Query OK, 0 rows affected (0.12 sec)

mysql>
mysql> insert into test_tb(id, name) values(1, 'yi'), (2, 'er');
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql>
mysql> select * from test_tb;
+------+------+
| id   | name |
+------+------+
|    1 | yi   |
|    2 | er   |
+------+------+
2 rows in set (0.01 sec)

mysql> 
mysql> exit;
Bye
[root@tidb1 ~]# 
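
Since TiFlash nodes were deployed but no table has a TiFlash replica yet, one hedged way to exercise the HTAP side is to add a replica for the test table and watch it become available (run from any node with the mysql client; the hostname tidb1 is assumed to resolve):

# Ask for the TiKV data of test_db.test_tb to be replicated to TiFlash
mysql -h tidb1 -P 4000 -u root -e "ALTER TABLE test_db.test_tb SET TIFLASH REPLICA 1"
# AVAILABLE turns to 1 once the TiFlash replica is ready
mysql -h tidb1 -P 4000 -u root -e "SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE FROM information_schema.tiflash_replica"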