How do you quickly set up a PXC cluster and MyCat sharding in MySQL? Many people without hands-on experience are at a loss here, so this article walks through the whole procedure step by step; hopefully it helps you solve the problem.
Refer to the official Percona documentation:
https://www.percona.com/doc/percona-xtradb-cluster/5.7/index.html
https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/
Also download one extra package separately: qpress-11-1.el7.x86_64.rpm
Remove any preinstalled MariaDB packages:
yum -y remove mari*
Open the ports the cluster needs in the firewall:
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --reload
Disable SELinux and reboot:
vi /etc/selinux/config   # set the SELINUX property to disabled
reboot
$ sudo yum install Percona-XtraDB-Cluster-57
Extract the downloaded bundle, put the qpress package in the same directory, then cd into that directory and run:
$ sudo yum localinstall *.rpm
Note: You need to have root access on the node where you will be installing Percona XtraDB Cluster (either logged in as a user with root privileges or be able to run commands with sudo).
Keep mysqld from starting automatically at boot:
chkconfig mysqld off
Start the service:
$sudo service mysql start
Look up the initial MySQL password:
cat /var/log/mysqld.log | grep "A temporary password"
Log in with the root account:
mysql -u root -p
Change the password (and harden the installation):
mysql_secure_installation
Create a remote administrator account:
CREATE USER 'YXC_admin'@'%' IDENTIFIED BY 'Yxc_123456';
GRANT ALL PRIVILEGES ON *.* TO 'YXC_admin'@'%';
FLUSH PRIVILEGES;
-- or alternatively:
CREATE USER 'YXC'@'localhost' IDENTIFIED BY 'passw0rd';
GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'YXC'@'localhost';
FLUSH PRIVILEGES;
Stop the service:
$sudo service mysql stop
Configure all nodes. Configuration file contents:
# vim /etc/my.cnf
[client]
socket=/var/lib/mysql/mysql.sock

[mysqld]
server-id=31   # unique ID of this MySQL instance in the PXC cluster; must be a number and must not repeat
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
character_set_server = utf8
bind-address = 0.0.0.0
# skip DNS resolution
skip-name-resolve
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

## cluster section
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc3-cluster   # name of the PXC cluster
wsrep_cluster_address=gcomm://192.168.56.31,192.168.56.32,192.168.56.33
wsrep_node_name=pxc31             # name of the current node
wsrep_node_address=192.168.56.31  # IP of the current node
wsrep_sst_method=xtrabackup-v2    # sync method (mysqldump, rsync, xtrabackup)
wsrep_sst_auth=YXC_admin:Yxc_123456   # account used for sync
# SST means State Snapshot Transfer
pxc_strict_mode=ENFORCING   # strict replication mode
binlog_format=ROW           # row-based replication (safe and reliable)
default_storage_engine=InnoDB   # default engine
innodb_autoinc_lock_mode=2      # auto-increment keys without table locks
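The file above is for the first node (192.168.56.31). On the other nodes only the node-specific values change; for example, on the second node you would use the following (values extrapolated from the naming pattern above, so adjust them to your own hosts):
server-id=32
wsrep_node_name=pxc32
wsrep_node_address=192.168.56.32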
Starting and stopping the bootstrap node (bootstrap the first node):
systemctl start mysql@bootstrap.service
systemctl stop mysql@bootstrap.service
systemctl restart mysql@bootstrap.service
Starting and stopping non-bootstrap nodes (add other nodes):
service mysql start
service mysql stop
service mysql restart
Determining the bootstrap node:
## cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 4580a102-bd96-11e9-a653-2a1e935fbf67
seqno: -1
safe_to_bootstrap: 0   # if safe_to_bootstrap is 1, this node is the one to bootstrap from
If the last PXC node to shut down exited cleanly, it must be the first node started next time, and it must be started as the bootstrap node.
If the last PXC node to shut down did not exit cleanly, first edit the /var/lib/mysql/grastate.dat file, set the safe_to_bootstrap property to 1, and then start that node as the bootstrap node, as sketched below.
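A minimal sketch of that recovery, assuming the default data directory used above (hand-editing the file works just as well as the sed one-liner):
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
systemctl start mysql@bootstrap.service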
Check replication status:
show status like '%wsrep%';
wsrep stands for write-set replication. Percona XtraDB Cluster is based on Percona Server running with the XtraDB storage engine. It uses the Galera library, which is an implementation of the write set replication (wsrep) API developed by Codership Oy. The default and recommended data transfer method is via Percona XtraBackup.
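A few of the returned variables make the quickest health check; with all three nodes joined you would expect values along these lines (the expected values in the comments are typical for a healthy cluster, not captured from this setup):
show status like 'wsrep_cluster_size';         -- 3 when all nodes have joined
show status like 'wsrep_local_state_comment';  -- 'Synced' on a healthy node
show status like 'wsrep_ready';                -- ON when the node accepts queries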
(1) Install the JDK
# search available JDK versions
yum search jdk
# install the JDK 1.8 development package
yum install java-1.8.0-openjdk-devel.x86_64
(2) Configure environment variables
# find the JDK installation path
ls -lrt /etc/alternatives/java
vi /etc/profile
# append the JDK path at the end of the file, e.g.:
# export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/
source /etc/profile
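To confirm the variable took effect (the exact version string will depend on the build installed on your machine):
echo $JAVA_HOME
java -version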
(3) Download MyCat
http://dl.mycat.io/1.6.5/Mycat-server-1.6.5-release-20180122220033-linux.tar.gz
(4) Upload the MyCat archive to the virtual machine
(5) Extract the MyCat archive; the Linux release downloaded above is a .tar.gz, so use tar:
tar -zxvf Mycat-server-1.6.5-release-20180122220033-linux.tar.gz
(6) Open firewall ports 8066 and 9066, and disable SELinux (the same way as in the PXC setup above)
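A sketch of the firewall half, following the same firewall-cmd pattern used for the PXC ports earlier (8066 is MyCat's client port and 9066 its management port):
firewall-cmd --zone=public --add-port=8066/tcp --permanent
firewall-cmd --zone=public --add-port=9066/tcp --permanent
firewall-cmd --reload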
(7) Change the permissions in MyCat's bin directory
chmod -R 777 ./*.sh
(8) Starting and stopping MyCat
# cd into MyCat's bin directory
./startup_nowrap.sh   # start MyCat
ps -aux               # view system processes
kill -9 <MyCat process ID>
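One way to confirm MyCat came up cleanly is to tail its log; logs/mycat.log relative to the MyCat home directory is the usual location, though the exact path is an assumption here:
tail -f ../logs/mycat.log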
Edit MyCat's conf/schema.xml:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- table definitions -->
    <schema name="ecs" checkSQLschema="false" sqlMaxLimit="100">
        <table name="t_user" dataNode="dn1,dn2" rule="mod-long" />
        <table name="t_customer" dataNode="dn1,dn2" rule="sharding-customer">
            <childTable name="t_orders" primaryKey="ID" joinKey="customer_id" parentKey="id">
            </childTable>
        </table>
    </schema>
    <!-- shard definitions -->
    <dataNode name="dn1" dataHost="cluster1" database="ecs" />
    <dataNode name="dn2" dataHost="cluster2" database="ecs" />
    <!-- connection definitions -->
    <dataHost name="cluster1" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="w1" url="192.168.56.21:3306" user="admin" password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w1r1" url="192.168.56.22:3306" user="admin" password="Abc_123456" />
            <readHost host="w1r2" url="192.168.56.23:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="w2" url="192.168.56.22:3306" user="admin" password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w2r1" url="192.168.56.21:3306" user="admin" password="Abc_123456" />
            <readHost host="w2r2" url="192.168.56.23:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
    <dataHost name="cluster2" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="w1" url="192.168.56.31:3306" user="admin" password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w1r1" url="192.168.56.32:3306" user="admin" password="Abc_123456" />
            <readHost host="w1r2" url="192.168.56.33:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="w2" url="192.168.56.32:3306" user="admin" password="Abc_123456">
            <!-- can have multi read hosts -->
            <readHost host="w2r1" url="192.168.56.31:3306" user="admin" password="Abc_123456" />
            <readHost host="w2r2" url="192.168.56.33:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
</mycat:schema>
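Both dataNode entries point at a physical database named ecs, so it must exist on each PXC cluster before MyCat can route queries. A minimal sketch to run once per cluster; the t_customer definition is an assumption for illustration, and only its sharding_id column is actually required by the rule defined below:
CREATE DATABASE ecs CHARACTER SET utf8;
USE ecs;
CREATE TABLE t_customer (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    sharding_id INT NOT NULL   -- column the sharding-customer rule partitions on
) ENGINE=InnoDB;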
Configure the MyCat login account in conf/server.xml:
<user name="YXC_admin" defaultAccount="true">
    <property name="password">Yxc_123456</property>
    <property name="schemas">ecs</property>
    <!-- table-level DML privilege settings -->
    <!--
    <privileges check="false">
        <schema name="TESTDB" dml="0110">
            <table name="tb01" dml="0000"></table>
            <table name="tb02" dml="1111"></table>
        </schema>
    </privileges>
    -->
</user>
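Once MyCat is restarted with this account, you can connect to MyCat itself over the MySQL protocol; 8066 is MyCat's default client port, and the host below assumes MyCat runs on 192.168.99.131 as in the HAProxy configuration further down:
mysql -uYXC_admin -pYxc_123456 -h192.168.99.131 -P8066 ecs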
Define the sharding rule in conf/rule.xml:
<tableRule name="sharding-customer">
    <rule>
        <columns>sharding_id</columns>
        <algorithm>customer-hash-int</algorithm>
    </rule>
</tableRule>
<function name="customer-hash-int" class="io.mycat.route.function.PartitionByFileMap">
    <property name="mapFile">customer-hash-int.txt</property>
</function>
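PartitionByFileMap resolves the shard from a value-to-node-index map in conf/customer-hash-int.txt, one value=index pair per line. A sketch with made-up sharding_id values:
101=0
102=1
With this map, rows whose sharding_id is 101 go to dn1 (index 0) and rows whose sharding_id is 102 go to dn2 (index 1).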
Deploy HAProxy on one virtual machine instance.
Install HAProxy:
yum install -y haproxy
Edit the configuration file:
vi /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen admin_stats
    bind    0.0.0.0:4001
    mode    http
    stats uri   /dbs
    stats realm Global\ statistics
    stats auth  admin:abc123456

listen proxy-mysql
    bind    0.0.0.0:3306
    mode    tcp
    balance roundrobin
    option  tcplog   # log format
    server  mycat_1 192.168.99.131:3306 check port 8066 maxconn 2000
    server  mycat_2 192.168.99.132:3306 check port 8066 maxconn 2000
    option  tcpka    # use keepalive to detect dead connections
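Before starting the service, HAProxy can validate the file itself; the -c flag runs a configuration check only and exits:
haproxy -c -f /etc/haproxy/haproxy.cfg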
Start HAProxy:
service haproxy start
Open the HAProxy monitoring page (log in with the stats auth credentials from the config, admin:abc123456):
http://192.168.99.131:4001/dbs
Install HAProxy on a second virtual machine by repeating the steps above.
Deploy Keepalived on one of the HAProxy virtual machine instances.
Allow the VRRP protocol through the firewall:
# allow VRRP
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT
# apply the settings
firewall-cmd --reload
Install Keepalived:
yum install -y keepalived
Edit the configuration file:
vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.133
    }
}
Start Keepalived:
service keepalived start
Verify the virtual IP responds:
ping 192.168.99.133
On the other HAProxy virtual machine, deploy Keepalived by the same method, as sketched below.
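On that second node the usual practice is to join the same VRRP instance as a backup with a lower priority, so it only claims the virtual IP when the master fails. A sketch under that assumption (the priority value 90 is arbitrary; the interface name and password must match your environment):
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.133
    }
}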
Use a MySQL client to connect to 192.168.99.133, as shown below.
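For example, using the MyCat account created in server.xml; the connection goes to the Keepalived virtual IP, through whichever HAProxy instance currently holds it, and on to MyCat as configured in haproxy.cfg above:
mysql -uYXC_admin -pYxc_123456 -h192.168.99.133 -P3306 ecs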
Having read the above, have you got the hang of quickly setting up a PXC cluster and MyCat sharding in MySQL? If you would like to pick up more skills or learn more related content, follow the Yisu Cloud industry news channel. Thank you for reading!