Cleaning Up and Recovering Table Data in a Cassandra Cluster
This article walks through cleaning up and recovering table data in a Cassandra cluster. The steps are straightforward; follow along and verify each one in your own environment.
Goal: the project team needs to clean up the data of a table in a production Cassandra cluster. This experiment verifies whether TRUNCATE is a viable way to do it.
1. Environment preparation
Set up a three-node cluster on Alibaba Cloud, with a replication factor of 2:
172.26.99.152
172.26.99.153
172.26.99.154
Install the Java JDK:
If an older version is left over, remove it first.
(1) Check whether the system's bundled JDK is installed:
yum list installed |grep java
If a bundled JDK is present, uninstall the system Java environment:
yum -y remove java-1.7.0-openjdk*
yum -y remove tzdata-java.noarch
(2) List the Java packages available in the yum repositories:
yum -y list java*
(3) Install the Java environment with yum (JDK 1.8.0 here; if you install 1.7, Cassandra will fail to start later):
yum install java-1.8.0
(4) Check the version of the newly installed Java:
java -version
Install Cassandra from the binary tarball:
mkdir /CAS
cd /CAS
tar xzvf apache-cassandra-3.11.1-bin.tar.gz
mv apache-cassandra-3.11.1 cassandra
useradd cassandra
passwd cassandra
chown -R cassandra.cassandra /CAS
chmod 755 -R /CAS/cassandra
su - cassandra
cd /CAS/cassandra/conf
$ vi cassandra.yaml
- seeds: "172.26.99.152" --change this line from 127.0.0.1 to the IP of one or more cluster nodes; listing every node is not recommended, because repairing a broken seed node is relatively complicated
listen_address: 172.26.99.152 --change to the current node's own IP (the values shown here are for node 172.26.99.152)
rpc_address: 172.26.99.152 --change to the current node's IP
$ vi cassandra-env.sh
Parameter to modify in cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=172.26.99.152" --this line is commented out by default; uncomment it and set the hostname to the current node's IP
Configure $JAVA_HOME (the Java environment variable) and $CASSANDRA_HOME (the Cassandra environment variable).
A JDK installed through yum usually lives under /usr/lib/jvm/ (for example, /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-0.b15.el6_8.x86_64 on my machine).
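If you are not sure of the exact JDK directory on your machine, a quick way to resolve it (a sketch, not part of the original steps) is to follow the java binary on the PATH:
readlink -f $(which java)
# prints something like /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/bin/java;
# drop the trailing /bin/java to get the JAVA_HOME value used below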
(1) Open the environment configuration file and append the following:
cat >> /etc/profile <<'EOF'
#java path
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
export JRE_HOME=$JAVA_HOME
export CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#cassandra path
CASSANDRA_HOME=/CAS/cassandra
export CASSANDRA_HOME
EOF
(Quote the EOF delimiter as above so the variables are written literally instead of being expanded at write time.)
(2) Apply the configuration:
source /etc/profile
The steps above follow https://blog.csdn.net/dengjiexian123/article/details/53033119, although the JAVA_HOME setting there is slightly different.
Start Cassandra:
su - cassandra
cd /CAS/cassandra/bin
./cassandra
If the Java version is too old or the installation failed, this step reports: Cassandra 3.0 and later require Java 8u40 or later.
If it reports "Unable to find java executable. Check JAVA_HOME and PATH environment variables.", check JAVA_HOME first by running $JAVA_HOME/bin/java -version.
[cassandra@node2 bin]$ ./cqlsh --request-timeout=9000 $HOSTNAME
Connected to Test Cluster at node2:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> desc keyspaces;
system_traces  system_schema  system_auth  system  system_distributed
cqlsh> SELECT * FROM system_schema.keyspaces;
 keyspace_name      | durable_writes | replication
--------------------+----------------+--------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_distributed |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
             system |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
      system_traces |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
(5 rows)
cqlsh> create keyspace dbrsk WITH replication = {'class':'NetworkTopologyStrategy','datacenter1':2};
cqlsh> SELECT * FROM system_schema.keyspaces;
 keyspace_name      | durable_writes | replication
--------------------+----------------+---------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_distributed |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
             system |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
              dbrsk |           True | {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'datacenter1': '2'}
      system_traces |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}
(6 rows)
Check the structure of the test table on the source cluster, then export its data:
cqlsh:dbrsk> desc t_card_info;
CREATE TABLE dbrsk.t_card_info (
    bankcard text PRIMARY KEY,
    bankname text,
    cardname text,
    cardtype text,
    city text,
    province text,
    updatetime bigint
) WITH bloom_filter_fp_chance = 0.00075
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = '卡信息'
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 0.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 86400
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
cqlsh:dbrsk> copy t_card_info to '/tmp/t_card_info.csv';
Using 16 child processes
Starting copy of dbrsk.t_card_info with columns [bankcard, bankname, cardname, cardtype, city, province, updatetime].
Processed: 2726962 rows; Rate: 5524 rows/s; Avg. rate: 57918 rows/s
2726962 rows exported to 1 files in 47.165 seconds.
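As an aside, cqlsh COPY throughput can be tuned through its WITH options. A hedged sketch using standard cqlsh COPY options (the values are illustrative, not taken from this run):
cqlsh:dbrsk> copy t_card_info to '/tmp/t_card_info.csv' WITH NUMPROCESSES=8 AND PAGESIZE=1000;
cqlsh:dbrsk> copy t_card_info from '/tmp/t_card_info.csv' WITH NUMPROCESSES=8 AND CHUNKSIZE=5000;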
Create the table on the current cluster and import the data:
cqlsh> use dbrsk;
cqlsh:dbrsk> copy t_card_info from '/tmp/t_card_info.csv';
Using 1 child processes
Starting copy of dbrsk.t_card_info with columns [bankcard, bankname, cardname, cardtype, city, province, updatetime].
Processed: 690000 rows; Rate: 10883 rows/s; Avg. rate: 11617 rows/s
Processed: 1410000 rows; Rate: 13012 rows/s; Avg. rate: 11813 rows/s
Processed: 2115000 rows; Rate: 10324 rows/s; Avg. rate: 11783 rows/s
Processed: 2726962 rows; Rate: 5305 rows/s; Avg. rate: 11893 rows/s
2726962 rows imported from 1 files in 3 minutes and 49.299 seconds (0 skipped).
Before the import:
[root@node2 data]# du -sh *
408K	commitlog
1.4M	data
4.0K	hints
4.0K	saved_caches
After the import:
[root@node2 data]# du -sh *
155M	commitlog
98M	data
4.0K	hints
4.0K	saved_caches
Run TRUNCATE and inspect the result:
cqlsh:dbrsk> truncate table t_card_info;
[root@node2 dbrsk]# cd t_card_info-9e129520c31c11eab89c515b68839f7c/
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# ls
backups  snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# du -sh *
4.0K	backups
103M	snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# cd snapshots/
[root@node2 snapshots]# ls
truncated-1594434747140-t_card_info
[root@node2 snapshots]# cd truncated-1594434747140-t_card_info/
[root@node2 truncated-1594434747140-t_card_info]# ls
manifest.json  schema.cql
mc-9-big-CompressionInfo.db   mc-9-big-Data.db   mc-9-big-Digest.crc32   mc-9-big-Filter.db   mc-9-big-Index.db   mc-9-big-Statistics.db   mc-9-big-Summary.db   mc-9-big-TOC.txt
mc-10-big-CompressionInfo.db  mc-10-big-Data.db  mc-10-big-Digest.crc32  mc-10-big-Filter.db  mc-10-big-Index.db  mc-10-big-Statistics.db  mc-10-big-Summary.db  mc-10-big-TOC.txt
mc-11-big-CompressionInfo.db  mc-11-big-Data.db  mc-11-big-Digest.crc32  mc-11-big-Filter.db  mc-11-big-Index.db  mc-11-big-Statistics.db  mc-11-big-Summary.db  mc-11-big-TOC.txt
mc-12-big-CompressionInfo.db  mc-12-big-Data.db  mc-12-big-Digest.crc32  mc-12-big-Filter.db  mc-12-big-Index.db  mc-12-big-Statistics.db  mc-12-big-Summary.db  mc-12-big-TOC.txt
On the other nodes, the space usage is the same:
[cassandra@node3 t_card_info-9e129520c31c11eab89c515b68839f7c]$ du -sh *
4.0K	backups
101M	snapshots
The data has been moved into the snapshots directory.
Running a repair does not clean up the data in snapshots either:
./nodetool repair dbrsk
We also tried deleting the snapshots directory at the OS level; the database keeps working normally after the deletion.
Re-import the data, then drop the table:
cqlsh:dbrsk> drop table t_card_info ;
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# du -sh *
4.0K	backups
103M	snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# ls
backups  snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# cd snapshots/
[root@node2 snapshots]# ls
dropped-1594435864327-t_card_info
As you can see, after DROP and TRUNCATE the data is moved into snapshots/dropped-xxxxxx and snapshots/truncated-xxxxxx under the table's directory, respectively.
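Note that instead of deleting snapshot directories at the OS level, nodetool provides a supported way to list and clear them. A sketch, using the snapshot name produced by the truncate above:
./nodetool listsnapshots
./nodetool clearsnapshot -t truncated-1594434747140-t_card_info -- dbrsk
clearsnapshot only removes the named snapshot on the node it runs against, so it needs to be run on every node (or pointed at each host with -h).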
So how can the data be recovered?
[cassandra@node2 bin]$ ./sstableloader -d 172.26.99.152 /tmp/dbrsk/t_card_info
WARN 11:04:46,472 Only 31.813GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Skipping file mc-21-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-22-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-23-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-24-big-Data.db: table dbrsk.t_card_info doesn't exist
Summary statistics:
   Connections per host    : 1
   Total files transferred : 0
   Total bytes transferred : 0.000KiB
   Total duration          : 2934 ms
   Average transfer rate   : 0.000KiB/s
   Peak transfer rate      : 0.000KiB/s
If the table does not exist, sstableloader skips every file, so the table has to be created by hand first. (Also note that sstableloader expects the SSTables staged under a <keyspace>/<table> directory layout, hence the /tmp/dbrsk/t_card_info path.)
[cassandra@node2 bin]$ ./cqlsh --request-timeout=90000 $HOSTNAME
Connected to Test Cluster at node2:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> use dbrsk;
cqlsh:dbrsk> CREATE TABLE dbrsk.t_card_info (
         ...     bankcard text PRIMARY KEY,
         ...     bankname text,
         ...     cardname text,
         ...     cardtype text,
         ...     city text,
         ...     province text,
         ...     updatetime bigint
         ... ) WITH bloom_filter_fp_chance = 0.00075
         ...     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
         ...     AND comment = '银行卡信息数据'
         ...     AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
         ...     AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
         ...     AND crc_check_chance = 0.0
         ...     AND dclocal_read_repair_chance = 0.0
         ...     AND default_time_to_live = 0
         ...     AND gc_grace_seconds = 86400
         ...     AND max_index_interval = 2048
         ...     AND memtable_flush_period_in_ms = 0
         ...     AND min_index_interval = 128
         ...     AND read_repair_chance = 0.0
         ...     AND speculative_retry = '99PERCENTILE';
cqlsh:dbrsk> exit
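Since every snapshot directory contains a schema.cql with the table's DDL (visible in the snapshot listing above), the definition can also be restored from there instead of being retyped. A sketch, assuming the default tarball data path /CAS/cassandra/data/data and that the keyspace already exists:
cqlsh> use dbrsk;
cqlsh:dbrsk> SOURCE '/CAS/cassandra/data/data/dbrsk/t_card_info-9e129520c31c11eab89c515b68839f7c/snapshots/truncated-1594434747140-t_card_info/schema.cql'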
Run sstableloader again; this time the data streams successfully:
[cassandra@node2 bin]$ ./sstableloader -d 172.26.99.152 /tmp/dbrsk/t_card_info
WARN 11:05:57,753 Only 31.813GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /tmp/dbrsk/t_card_info/mc-21-big-Data.db /tmp/dbrsk/t_card_info/mc-22-big-Data.db /tmp/dbrsk/t_card_info/mc-23-big-Data.db /tmp/dbrsk/t_card_info/mc-24-big-Data.db to [/172.26.99.154, /172.26.99.152, /172.26.99.153]
progress: [/172.26.99.154]0:0/4 0  % [/172.26.99.152]0:1/4 6  % total: 4% 1.172MiB/s (avg: 1.172MiB/s)
progress: [/172.26.99.154]0:0/4 0  % [/172.26.99.152]0:1/4 6  % [/172.26.99.153]0:0/4 0  % total: 3% 65.484MiB/s (avg: 1.257MiB/s)
progress: [/172.26.99.154]0:0/4 0  % [/172.26.99.152]0:1/4 6  % [/172.26.99.153]0:0/4 0  % total: 3% 2.578MiB/s (avg: 1.260MiB/s)
……
progress: [/172.26.99.154]0:0/4 48 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 24% 245.959MiB/s (avg: 8.540MiB/s)
progress: [/172.26.99.154]0:0/4 49 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 24% 1012.976MiB/s (avg: 8.651MiB/s)
progress: [/172.26.99.154]0:0/4 51 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 25% 1.454GiB/s (avg: 8.803MiB/s)
progress: [/172.26.99.154]0:0/4 54 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 25% 161.665MiB/s (avg: 9.091MiB/s)
progress: [/172.26.99.154]0:0/4 56 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 26% 1.643GiB/s (avg: 9.249MiB/s)
progress: [/172.26.99.154]0:0/4 56 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 26% 134.745MiB/s (avg: 9.254MiB/s)
progress: [/172.26.99.154]0:0/4 58 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 26% 1.702GiB/s (avg: 9.371MiB/s)
progress: [/172.26.99.154]0:0/4 58 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 26% 23.592MiB/s (avg: 9.406MiB/s)
……
progress: [/172.26.99.154]0:4/4 100% [/172.26.99.152]0:4/4 100% [/172.26.99.153]0:4/4 100% total: 100% 0.000KiB/s (avg: 10.816MiB/s)
Summary statistics:
   Connections per host    : 1
   Total files transferred : 8
   Total bytes transferred : 133.156MiB
   Total duration          : 12314 ms
   Average transfer rate   : 10.813MiB/s
   Peak transfer rate      : 17.530MiB/s
Conclusion: the TRUNCATE operation is viable, and the data can be recovered if something goes wrong, although recovery takes considerable time.
Neither TRUNCATE TABLE nor DROP TABLE in Cassandra releases disk space; both move the data into the snapshots directory instead.
Thanks for reading. That covers cleaning up and recovering table data in a Cassandra cluster; how it behaves in your own environment is still worth verifying in practice.