Background
TiDB is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. Please refer to the official documentation for details.
Server preparation
We use several ECS instances to build a truly distributed cluster, instead of running all the distributed services on a single server. If you want to simulate a production deployment on a single machine, refer to the official quick start guide instead.
Ensure the following:
- All ECS instances can communicate with each other through their firewalls (see the sketch after this list)
- You can log in to all servers as root
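A minimal sketch of one way to allow this on each ECS, assuming firewalld is the active firewall; the host list is simply the three IPs used in this article, and your cloud security-group rules may also need to allow the same traffic:

# Run on every ECS: trust traffic coming from the other cluster nodes (assumes firewalld).
for peer in 10.2.103.43 10.2.103.81 10.2.103.149; do
  firewall-cmd --permanent --zone=trusted --add-source="${peer}/32"
done
firewall-cmd --reload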
In this article, we use 3 ECS instances as an example. Their IP addresses are as follows:
10.2.103.149
10.2.103.81
10.2.103.43
Log in to ECS
We log in to all ECS instances with public keys; the key pair is stored as ~/.ssh/jinshan and ~/.ssh/jinshan.pub.
ssh -i ~/.ssh/jinshan root@10.2.103.43
SSH mutual trust
Log in to each target machine using the root user account, create the tidb user, and set its login password.
useradd tidb && \
passwd tidb
To configure passwordless sudo for this user, run the following command and add tidb ALL=(ALL) NOPASSWD: ALL to the end of the file:
visudo
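Optionally, a quick way to verify that passwordless sudo works for the new user (nothing TiDB-specific here; sudo -n fails instead of prompting if a password would still be required):

su - tidb
sudo -n true && echo "passwordless sudo is working"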
Use the tidb user to log in to the control machine and run the following commands: generate an RSA key, then copy it to the target. Replace 10.2.103.43 with the IP of your target machine, and enter the tidb user password of the target machine as prompted. After the commands are executed, SSH mutual trust is created; repeat this for the other machines as well. A newly created tidb user does not have a .ssh directory; it is created when the RSA key is generated. To deploy TiDB components on the control machine itself, also configure mutual trust from the control machine to the control machine.
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.2.103.43
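Because mutual trust is needed toward every machine, including the control machine itself, the copy step can be repeated for all three hosts; a small sketch using the IPs from this article:

for host in 10.2.103.43 10.2.103.81 10.2.103.149; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done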
Log in to the control machine using the tidb user account, and log in to the target machine's IP using ssh. If you can log in successfully without entering a password, the SSH mutual trust is configured correctly.
ssh 10.2.103.43
Note: if you have problems copying the key remotely, log in to the target machine and append the public key to ~/.ssh/authorized_keys manually.
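A minimal sketch of that manual fallback, run as the tidb user on the target machine; the key string below is a placeholder for the content of the control machine's ~/.ssh/id_rsa.pub:

mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Replace the placeholder with the real public key from the control machine.
echo "ssh-rsa AAAA...placeholder... tidb@control" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys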
Others
For further environment and system tuning, please refer to the TiDB Environment and System Configuration Check.
Install TiUP
Starting with TiDB 4.0, TiUP, the package manager for the TiDB ecosystem, makes it far easier to manage the different cluster components. Now you can run any component with a single TiUP command. You can refer to the TiUP documentation for details.
Install the package
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Reload the shell profile
The above command installs TiUP in the $HOME/.tiup folder. The installed components and the data generated by their operation are also placed in this folder. The command also automatically adds $HOME/.tiup/bin to the PATH environment variable in the shell .profile file, so you can use TiUP directly.
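For the current shell session you still need to reload the profile before tiup is on the PATH; a minimal example, assuming bash (the installer prints the exact profile file it modified):

source ~/.profile    # or ~/.bash_profile, depending on which file the installer updated
which tiup
tiup --version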
Deploy clusters
Write the configuration files
Referring to complex-multi-instance.yaml and the TiUP documentation, we write the YAML file as follows:
## Global variables are applied to all deployments and used as the default value of
## the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data-monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.unified.max-thread-count: 1
    readpool.storage.use-unified-pool: true
    readpool.coprocessor.use-unified-pool: true
    storage.block-cache.capacity: 8GB
    raftstore.capacity: 250GB
  pd:
    replication.location-labels: ["resource_pool", "host"]
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64

pd_servers:
  - host: 10.2.103.43
  - host: 10.2.103.81
  - host: 10.2.103.149

tidb_servers:
  - host: 10.2.103.43
    port: 4000
    status_port: 10080
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0"
  - host: 10.2.103.43
    port: 4001
    status_port: 10081
    deploy_dir: "/tidb-deploy/tidb-4001"
    log_dir: "/tidb-deploy/tidb-4001/log"
    # numa_node: "1"
  - host: 10.2.103.81
    port: 4000
    status_port: 10080
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0"
  - host: 10.2.103.81
    port: 4001
    status_port: 10081
    deploy_dir: "/tidb-deploy/tidb-4001"
    log_dir: "/tidb-deploy/tidb-4001/log"
    # numa_node: "1"
  - host: 10.2.103.149
    port: 4000
    status_port: 10080
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0"
  - host: 10.2.103.149
    port: 4001
    status_port: 10081
    deploy_dir: "/tidb-deploy/tidb-4001"
    log_dir: "/tidb-deploy/tidb-4001/log"
    # numa_node: "1"

tikv_servers:
  - host: 10.2.103.43
    port: 20160
    status_port: 20180
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"
    # numa_node: "0"
    config:
      server.labels: { host: "tikv1", resource_pool: "pool1" }
  - host: 10.2.103.43
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    # numa_node: "1"
    config:
      server.labels: { host: "tikv1", resource_pool: "pool2" }
  - host: 10.2.103.81
    port: 20160
    status_port: 20180
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"
    # numa_node: "0"
    config:
      server.labels: { host: "tikv2", resource_pool: "pool1" }
  - host: 10.2.103.81
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    # numa_node: "1"
    config:
      server.labels: { host: "tikv2", resource_pool: "pool2" }
  - host: 10.2.103.149
    port: 20160
    status_port: 20180
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"
    # numa_node: "0"
    config:
      server.labels: { host: "tikv3", resource_pool: "pool1" }
  - host: 10.2.103.149
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    # numa_node: "1"
    config:
      server.labels: { host: "tikv3", resource_pool: "pool2" }

monitoring_servers:
  - host: 10.2.103.43
    # ssh_port: 22
    # port: 9090
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # data_dir: "/tidb-data/prometheus-8249"
    # log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 10.2.103.43
    # port: 3000
    # deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 10.2.103.43
    # ssh_port: 22
    # web_port: 9093
    # cluster_port: 9094
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # data_dir: "/tidb-data/alertmanager-9093"
    # log_dir: "/tidb-deploy/alertmanager-9093/log"
Check and deploy
tiup cluster check ./complex-multi-instance.yaml --apply --user tidb -i /home/tidb/.ssh/id_rsa
Refer to the deployment document for details.
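After the check passes, the deploy and first start look roughly like the following; the cluster name tidb-test and version v6.1.0 are placeholders of our own choosing, so substitute the name and TiDB version you actually want:

tiup cluster deploy tidb-test v6.1.0 ./complex-multi-instance.yaml --user tidb -i /home/tidb/.ssh/id_rsa
tiup cluster start tidb-test --init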
After initialization, you will see the generated password of the root user; make sure to record it.
Display clusters
tiup cluster list
Then check the TiDB Dashboard to see the topology.
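To see the same topology from the command line, and to find the Dashboard and Grafana URLs, tiup cluster display can be used; tidb-test is again the placeholder cluster name assumed above, and any MySQL client can connect with the root password recorded earlier:

tiup cluster display tidb-test
# Connect through one of the TiDB servers (port 4000 from the topology above).
mysql -h 10.2.103.43 -P 4000 -u root -p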
Reference
https://docs.pingcap.com/tidb/stable/overview
https://docs.pingcap.com/tidb/stable/check-before-deployment
https://docs.pingcap.com/tidb/stable/tiup-overview
https://docs.pingcap.com/tidb/stable/production-deployment-using-tiup