
transfer.sh lets you share files quickly from the command line. Uploads can be up to 10 GB, links stay valid for up to 14 days, and both the download count and the validity period can be limited.

Upload a file

$ curl --upload-file ./go.sh https://transfer.sh/go.sh
https://transfer.sh/Pyg2a/go.sh # the generated download link

Set a download limit and maximum number of days

$ curl -H "Max-Downloads: 1" -H "Max-Days: 5" --upload-file ./hello.txt https://transfer.sh/hello.txt 
https://transfer.sh/66nb8/hello.txt

Add a command-line alias in .bashrc

transfer() {
    curl --progress-bar --upload-file "$1" https://transfer.sh/$(basename "$1") | tee /dev/null
    echo
}

Then sharing a file is as simple as:

$ transfer hello.txt

Simple and handy, isn't it? Still, don't use it for sensitive files: for those, use OnionShare, or encrypt the files before uploading.
One more caveat: the transfer.sh project on GitHub is not affiliated with the http://transfer.sh website.
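
For the encrypt-first route, a minimal sketch using gpg symmetric encryption (secret.txt is a placeholder file name; gpg prompts for a passphrase and writes secret.txt.gpg):

$ gpg --symmetric --cipher-algo AES256 secret.txt
$ curl --upload-file ./secret.txt.gpg https://transfer.sh/secret.txt.gpg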


OnionShare is a secure, anonymous, open-source file-sharing tool. It publishes a share from your local machine over the Tor network and generates an unguessable onion address; others can then fetch the shared files through Tor Browser. As long as the generated onion address does not leak, the files stay private, and the transfer cannot be snooped on by any third party.

Install

$ sudo apt install onionshare

Share a file

$ onionshare go.sh 
Onionshare 1.3.2 https://onionshare.org/
Connecting to the Tor network: 100% - Done
Configuring onion service on port 17620.
Starting ephemeral Tor onion service and awaiting publication
Settings saved to /home/xxx/.config/onionshare/onionshare.json
Preparing files to share.
* Serving Flask app "onionshare.web" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:17620/ (Press CTRL+C to quit)
Give this address to the person you're sending the file to:
http://huq64ocyu666ecom.onion/negligee-easing

Press Ctrl-C to stop server

Fetch the file

Send the generated link over a secure channel, then open it in Tor Browser. Simple, reliable, and secure; the one drawback is that it is slow. Very, very slow.
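
If you prefer the command line to Tor Browser, fetching through a locally running Tor daemon should also work; a sketch, assuming torsocks is installed, the tor service is up, and the URL is the one OnionShare printed (the page it returns links to the actual download):

$ torsocks curl http://huq64ocyu666ecom.onion/negligee-easing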

TLS-SNI-01 has been deprecated, but certbot does not yet support the tls-alpn-01 validation method, so dehydrated or acme.sh can be used instead to obtain Let's Encrypt certificates over https.

TLS-ALPN validation has to happen on port 443. With nginx's ngx_stream_ssl_preread_module, requests arriving on 443 can be routed to different backends.

Make sure nginx supports the ssl_preread feature

$ nginx -V 2>&1 | grep -o ssl_preread
ssl_preread

Configure nginx
Append to the end of /etc/nginx/nginx.conf:

stream {
    map $ssl_preread_alpn_protocols $tls_port {
        ~\bacme-tls/1\b 10443;
        default 8443;
    }
    server {
        listen 443;
        listen [::]:443;
        proxy_pass 127.0.0.1:$tls_port;
        ssl_preread on;
    }
}

With this in place, validation requests made during certificate issuance are routed to port 10443, while all other traffic hitting port 443 goes to port 8443, so the virtual hosts should listen for ssl connections on port 8443.

Reload nginx to apply the new configuration

$ sudo systemctl reload nginx
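
To check that the ALPN routing works, offer the acme-tls/1 protocol in a handshake and see where it lands; a sketch (while no validation is running, nothing listens on 10443, so a refused connection from the backend is expected):

$ openssl s_client -connect openwares.net:443 -alpn acme-tls/1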

Issue a certificate

# acme.sh --issue --alpn --tlsport 10443 -d openwares.net

Install the certificate

# acme.sh --installcert -d openwares.net \
    --key-file /etc/nginx/ssl/openwares.net.key \
    --fullchain-file /etc/nginx/ssl/fullchain.cer \
    --reloadcmd "systemctl force-reload nginx"

Renew the certificate

# acme.sh --renew -d openwares.net --force
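
Renewal is normally automatic: at install time acme.sh adds a crontab entry along these lines (the exact path depends on where acme.sh was installed):

0 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null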

Note (updated 02/29/2020):
If proxy_protocol is enabled in order to obtain clients' real addresses, issuing or renewing a certificate fails with:

Verify error:Error getting validation data

So proxy_protocol has to be disabled temporarily while issuing or renewing certificates.


WordPress here is deployed in Docker and speaks plain http. To serve it over https, an extra nginx server is added in front as a reverse proxy to the http WordPress backend.

The nginx reverse proxy needs to pass an extra protocol header:

proxy_set_header X-Forwarded-Proto $scheme; 

The complete nginx reverse-proxy configuration:

server {
    server_name openwares.net;
    listen 8443 ssl http2;

    ssl_certificate /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key /etc/nginx/ssl/openwares.net.key;
    ssl_protocols TLSv1.3;

    location / {
        proxy_pass http://localhost/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Accept-Encoding "gzip";
    }
}

Edit WordPress's wp-config.php and add near the top of the file:

define('FORCE_SSL_ADMIN', true);
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS'] = 'on';

Finally, following [3], change the links stored in the database from http to https.
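
The core of [3] boils down to a few REPLACE updates; a sketch, assuming the default wp_ table prefix, a database named wordpress, and this site's domain:

$ mysql -u root -p wordpress -e "
    UPDATE wp_options SET option_value = REPLACE(option_value, 'http://openwares.net', 'https://openwares.net') WHERE option_name IN ('home', 'siteurl');
    UPDATE wp_posts SET guid = REPLACE(guid, 'http://openwares.net', 'https://openwares.net');
    UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://openwares.net', 'https://openwares.net');"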

References:
[1] Administration Over SSL
[2] WordPress使用Nginx做反向代理的SSL设置 (SSL settings for WordPress behind an nginx reverse proxy)
[3] MySQL Queries To Change WordPress From HTTP to HTTPS In The Database

Install

$ sudo apt install multipath-tools

Check the multipath configuration

$ sudo multipath -ll

No output: multipath was not configured automatically and needs to be set up by hand.

Inspect path information

$ sudo multipath -v3
Jul 03 15:22:38 set open fds limit to 1048576/1048576
Jul 03 15:22:38 loading //lib/multipath/libchecktur.so checker
Jul 03 15:22:38 checker tur: message table size = 3
Jul 03 15:22:38 loading //lib/multipath/libprioconst.so prioritizer
Jul 03 15:22:38 foreign library "nvme" loaded successfully
Jul 03 15:22:38 sda: udev property ID_WWN whitelisted
Jul 03 15:22:38 sda: mask = 0x1f
Jul 03 15:22:38 sda: dev_t = 8:0
Jul 03 15:22:38 sda: size = 1167966208
Jul 03 15:22:38 sda: vendor = IBM
Jul 03 15:22:38 sda: product = ServeRAID M5015
Jul 03 15:22:38 sda: rev = 2.12
Jul 03 15:22:38 sda: h:b:t:l = 0:2:0:0
Jul 03 15:22:38 sda: tgt_node_name =
Jul 03 15:22:38 sda: path state = running
Jul 03 15:22:38 sda: 7166 cyl, 255 heads, 63 sectors/track, start at 0
Jul 03 15:22:38 sda: serial = 00f8ec7f2abe47c318d0451a04b00506
Jul 03 15:22:38 sda: get_state
Jul 03 15:22:38 sda: detect_checker = yes (setting: multipath internal)
Jul 03 15:22:38 failed to issue vpd inquiry for pgc9
Jul 03 15:22:38 sda: path_checker = tur (setting: multipath internal)
Jul 03 15:22:38 sda: checker timeout = 90 s (setting: kernel sysfs)
Jul 03 15:22:38 sda: tur state = up
Jul 03 15:22:38 sda: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 03 15:22:38 sda: uid = 3600605b0041a45d018c347be2a7fecf8 (udev)
Jul 03 15:22:38 sda: detect_prio = yes (setting: multipath internal)
Jul 03 15:22:38 sda: prio = const (setting: multipath internal)
Jul 03 15:22:38 sda: prio args = "" (setting: multipath internal)
Jul 03 15:22:38 sda: const prio = 1
Jul 03 15:22:38 sr0: blacklisted, udev property missing
Jul 03 15:22:38 sdb: udev property ID_WWN whitelisted
Jul 03 15:22:38 sdb: mask = 0x1f
Jul 03 15:22:38 sdb: dev_t = 8:16
Jul 03 15:22:38 sdb: size = 10538188800
Jul 03 15:22:38 sdb: vendor = IBM
Jul 03 15:22:38 sdb: product = 1814 FAStT
Jul 03 15:22:38 sdb: rev = 1060
Jul 03 15:22:38 sdb: h:b:t:l = 1:0:0:0
Jul 03 15:22:38 SCSI target 1:0:0 -> FC rport 1:0-0
Jul 03 15:22:38 sdb: tgt_node_name = 0x20040080e52c8d92
Jul 03 15:22:38 sdb: path state = running
Jul 03 15:22:38 sdb: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jul 03 15:22:38 sdb: serial = SQ22101119
Jul 03 15:22:38 sdb: get_state
Jul 03 15:22:38 sdb: detect_checker = yes (setting: multipath internal)
Jul 03 15:22:38 loading //lib/multipath/libcheckrdac.so checker
Jul 03 15:22:38 checker rdac: message table size = 9
Jul 03 15:22:38 sdb: path_checker = rdac (setting: storage device autodetected)
Jul 03 15:22:38 sdb: checker timeout = 30 s (setting: kernel sysfs)
Jul 03 15:22:38 sdb: rdac state = up
Jul 03 15:22:38 sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 03 15:22:38 sdb: uid = 360080e50002c8d920000146c5d1bca10 (udev)
Jul 03 15:22:38 sdb: detect_prio = yes (setting: multipath internal)
Jul 03 15:22:38 loading //lib/multipath/libpriordac.so prioritizer
Jul 03 15:22:38 sdb: prio = rdac (setting: storage device configuration)
Jul 03 15:22:38 sdb: prio args = "" (setting: storage device configuration)
Jul 03 15:22:38 sdb: rdac prio = 14
Jul 03 15:22:38 sdc: udev property ID_WWN whitelisted
Jul 03 15:22:38 sdc: mask = 0x1f
Jul 03 15:22:38 sdc: dev_t = 8:32
Jul 03 15:22:38 sdc: size = 10538188800
Jul 03 15:22:38 sdc: vendor = IBM
Jul 03 15:22:38 sdc: product = 1814 FAStT
Jul 03 15:22:38 sdc: rev = 1060
Jul 03 15:22:38 sdc: h:b:t:l = 4:0:0:0
Jul 03 15:22:38 SCSI target 4:0:0 -> FC rport 4:0-0
Jul 03 15:22:38 sdc: tgt_node_name = 0x20040080e52c8d92
Jul 03 15:22:38 sdc: path state = running
Jul 03 15:22:38 sdc: 65535 cyl, 255 heads, 63 sectors/track, start at 0
Jul 03 15:22:38 sdc: serial = SQ22101611
Jul 03 15:22:38 sdc: get_state
Jul 03 15:22:38 sdc: detect_checker = yes (setting: multipath internal)
Jul 03 15:22:38 sdc: path_checker = rdac (setting: storage device autodetected)
Jul 03 15:22:38 sdc: checker timeout = 30 s (setting: kernel sysfs)
Jul 03 15:22:38 sdc: rdac state = up
Jul 03 15:22:38 sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
Jul 03 15:22:38 sdc: uid = 360080e50002c8d920000146c5d1bca10 (udev)
Jul 03 15:22:38 sdc: detect_prio = yes (setting: multipath internal)
Jul 03 15:22:38 sdc: prio = rdac (setting: storage device configuration)
Jul 03 15:22:38 sdc: prio args = "" (setting: storage device configuration)
Jul 03 15:22:38 sdc: rdac prio = 9
Jul 03 15:22:38 libdevmapper version 1.02.155 (2018-12-18)
Jul 03 15:22:38 DM multipath kernel driver v1.13.0
Jul 03 15:22:38 sda: udev property ID_WWN whitelisted
Jul 03 15:22:38 wwid 3600605b0041a45d018c347be2a7fecf8 not in wwids file, skipping sda
Jul 03 15:22:38 sda: orphan path, only one path
Jul 03 15:22:38 const prioritizer refcount 1
Jul 03 15:22:38 sdb: udev property ID_WWN whitelisted
Jul 03 15:22:38 wwid 360080e50002c8d920000146c5d1bca10 not in wwids file, skipping sdb
Jul 03 15:22:38 sdb: orphan path, only one path
Jul 03 15:22:38 rdac prioritizer refcount 2
Jul 03 15:22:38 sdc: udev property ID_WWN whitelisted
Jul 03 15:22:38 wwid 360080e50002c8d920000146c5d1bca10 not in wwids file, skipping sdc
Jul 03 15:22:38 sdc: orphan path, only one path
Jul 03 15:22:38 rdac prioritizer refcount 1
Jul 03 15:22:38 unloading rdac prioritizer
Jul 03 15:22:38 unloading const prioritizer
Jul 03 15:22:38 unloading rdac checker
Jul 03 15:22:38 unloading tur checker
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod
3600605b0041a45d018c347be2a7fecf8 0:2:0:0 sda 8:0 1 undef undef IBM,Serve
360080e50002c8d920000146c5d1bca10 1:0:0:0 sdb 8:16 14 undef undef IBM,1814
360080e50002c8d920000146c5d1bca10 4:0:0:0 sdc 8:32 9 undef undef IBM,1814

Alternatively, just list the existing paths directly:

$ sudo multipath -d -v3 2>/dev/null
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod
3600605b0041a45d018c347be2a7fecf8 0:2:0:0 sda 8:0 1 undef undef IBM,Serve
360080e50002c8d920000146c5d1bca10 1:0:0:0 sdb 8:16 14 undef undef IBM,1814
360080e50002c8d920000146c5d1bca10 4:0:0:0 sdc 8:32 9 undef undef IBM,1814

Two paths share the same wwid; add this wwid to the system's /etc/multipath/wwids file:

$ sudo multipath -a 360080e50002c8d920000146c5d1bca10
wwid '360080e50002c8d920000146c5d1bca10' added

Restart the multipathd service

$ sudo systemctl restart multipathd.service

Now the multipath device dm-0 shows up:

$ sudo multipath -l
360080e50002c8d920000146c5d1bca10 dm-0 IBM,1814 FAStT
size=4.9T features='5 queue_if_no_path pg_init_retries 50 queue_mode mq' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| `- 1:0:0:0 sdb 8:16 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 4:0:0:0 sdc 8:32 active undef running

Not quite there yet: the device has no alias, which makes it awkward to refer to. Add a multipath configuration file /etc/multipath/conf.d/multipath.conf with the following content:

multipaths {
    multipath {
        wwid 360080e50002c8d920000146c5d1bca10
        alias data
    }
}

Then restart the service and look again:

$ sudo systemctl restart multipathd.service

$ sudo multipath -l
data (360080e50002c8d920000146c5d1bca10) dm-0 IBM,1814 FAStT
size=4.9T features='5 queue_if_no_path pg_init_retries 50 queue_mode mq' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| `- 1:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  `- 4:0:0:0 sdc 8:32 active ready running

$ ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Jul 3 14:55 control
lrwxrwxrwx 1 root root 7 Jul 3 15:45 data -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 3 15:45 data-part1 -> ../dm-1

/dev/mapper/data is the mapped multipath block device; treat it just like /dev/sda and friends. Here it already carries one partition, /dev/mapper/data-part1; if there is none, create one with fdisk.
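
If the device is still blank, one way to partition it; a sketch (destructive, so double-check the device name; kpartx ships with multipath-tools):

$ sudo parted /dev/mapper/data -- mklabel gpt mkpart primary ext4 0% 100%
$ sudo kpartx -a /dev/mapper/data   # refresh partition mappings so data-part1 appears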

Format it as ext4. Careful: this destroys all data on the partition.

$ sudo mkfs.ext4 /dev/mapper/data-part1 
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 1317273339 4k blocks and 164659200 inodes
Filesystem UUID: b60f85e2-5813-48d7-856b-2f8dc7c1aad0
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information:
done

$ blkid /dev/mapper/data-part1
/dev/mapper/data-part1: UUID="b60f85e2-5813-48d7-856b-2f8dc7c1aad0" TYPE="ext4" PARTUUID="ba6bcf54-89e3-4852-a993-4f44d8839531"

For automatic mounting, add to /etc/fstab:

/dev/mapper/data-part1 /mnt/data ext4 defaults 0 0
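
The fstab entry only takes effect at boot; to mount it right away:

$ sudo mkdir -p /mnt/data
$ sudo mount /mnt/data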

After mounting:

$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 9.3M 3.2G 1% /run
/dev/sda2 516G 1.4G 488G 1% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda1 511M 5.1M 506M 1% /boot/efi
tmpfs 3.2G 0 3.2G 0% /run/user/1000
/dev/mapper/data-part1 4.9T 89M 4.7T 1% /mnt/data

A backup split into 7z volumes threw an error when extracted with 7z:

$ 7z x db_2019_06_25.7z.001 

7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,4 CPUs Intel(R) Core(TM) i5-6400 CPU @ 2.70GHz (506E3),ASM,AES-NI)

Scanning the drive for archives:
1 file, 4697620480 bytes (4480 MiB)

Extracting archive: db_2019_06_25.7z.001
ERROR: db_2019_06_25.7z.001
db_2019_06_25.7z
Open ERROR: Can not open the file as [7z] archive



Can't open as archive: 1
Files: 0
Size: 0
Compressed: 0
$ file db_2019_06_25.7z.001
db_2019_06_25.7z.001: 7-zip archive data, version 0.3

The backup was created on a Debian jessie server with 7-Zip [64] 9.20, which writes 7z archives of version 0.3,
while the machine doing the extraction runs 7-Zip [64] 16.02, whose 7z archive version is 0.4.
Extracting the archive with 7z 9.20 works fine, so this is a backward-compatibility problem in 7z.

After the backup server was upgraded to Debian stretch, bringing 7z up to 7-Zip [64] 16.02, the problem disappeared.
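
For reference, a volume-split archive like the one above is produced with the -v switch (the input path here is a placeholder):

$ 7z a -v4g db_2019_06_25.7z /path/to/db_dump   # splits into 4 GB volumes: .001, .002, ...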

Here the interactive JMX command-line client jmxterm is used to talk to MBeans.

Download the self-contained uber jar:

$ wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar

Usage is straightforward:

$ java -jar jmxterm-1.0.1-uber.jar -h
[USAGE]
jmxterm <OPTIONS>
[DESCRIPTION]
Main executable of JMX terminal CLI tool
[OPTIONS]
-a --appendtooutput With this flag, the outputfile is preserved and content is appended to it
-e --exitonfailure With this flag, terminal exits for any Exception
-h --help Show usage of this command line
-i --input <value> Input script file. There can only be one input file. "stdin" is the default value which means console input
-n --noninteract Non interactive mode. Use this mode if input doesn't come from human or jmxterm is embedded
-o --output <value> Output file, stdout or stderr. Default value is stdout
-p --password <value> Password for user/password authentication
-s --sslrmiregistry Whether the server's RMI registry is protected with SSL/TLS
-l --url <value> Location of MBean service. It can be <host>:<port> or full service URL.
-u --user <value> User name for user/password authentication
-v --verbose <value> Verbose level, could be silent|brief|verbose. Default value is brief
[NOTE]
Without any option, this command opens an interactive command line based console. With a given input file, commands in file will be executed and process ends after file is processed

$ java -jar jmxterm-1.0.1-uber.jar
Welcome to JMX terminal. Type "help" for available commands.
$>help
#following commands are available to use:
about - Display about page
bean - Display or set current selected MBean.
beans - List available beans under a domain or all domains
bye - Terminate console and exit
close - Close current JMX connection
domain - Display or set current selected domain.
domains - List all available domain names
exit - Terminate console and exit
get - Get value of MBean attribute(s)
help - Display available commands or usage of a command
info - Display detail information about an MBean
jvms - List all running local JVM processes
open - Open JMX session or display current connection
option - Set options for command session
quit - Terminate console and exit
run - Invoke an MBean operation
set - Set value of an MBean attribute
subscribe - Subscribe to the notifications of a bean
unsubscribe - Unsubscribe the notifications of an earlier subscribed bean
watch - Watch the value of one MBean attribute constantly
$>help get
[USAGE]
get <OPTIONS> <ARGS>
[DESCRIPTION]
Get value of MBean attribute(s)
[OPTIONS]
-b --bean <value> MBean name where the attribute is. Optional if bean has been set
-l --delimiter <value> Sets an optional delimiter to be printed after the value
-d --domain <value> Domain of bean, optional
-h --help Display usage
-i --info Show detail information of each attribute
-q --quots Quotation marks around value
-s --simple Print simple expression of value without full expression
-n --singleLine Prints result without a newline - default is false
[ARGS]
<attr>... Name of attributes to select
[NOTE]
* stands for all attributes. eg. get Attribute1 Attribute2 or get *
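
jmxterm also runs non-interactively, which is handy for scripting: commands can be piped to stdin together with the -n flag. A sketch, assuming a JVM exposing JMX on localhost:7199:

$ echo "get -b java.lang:type=Memory HeapMemoryUsage" | java -jar jmxterm-1.0.1-uber.jar --url localhost:7199 -n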

Set the compaction strategy to LCS

$ java -jar jmxterm-1.0.1-uber.jar --url localhost:7199
Welcome to JMX terminal. Type "help" for available commands.
$>domain org.apache.cassandra.db # set the current domain
#domain is set to org.apache.cassandra.db
$>bean org.apache.cassandra.db:columnfamily=image,keyspace=reis,type=ColumnFamilies # set the current mbean
#bean is set to org.apache.cassandra.db:columnfamily=image,keyspace=reis,type=ColumnFamilies
$>info # show information about the current mbean
#mbean = org.apache.cassandra.db:columnfamily=image,keyspace=reis,type=ColumnFamilies
#class name = org.apache.cassandra.db.ColumnFamilyStore
# attributes
%0 - AutoCompactionDisabled (boolean, r)
%1 - BuiltIndexes (java.util.List, r)
%2 - ColumnFamilyName (java.lang.String, r)
%3 - CompactionParameters (java.util.Map, rw)
%4 - CompactionParametersJson (java.lang.String, rw)
%5 - CompactionStrategyClass (java.lang.String, rw)
%6 - CompressionParameters (java.util.Map, rw)
%7 - CrcCheckChance (double, w)
%8 - DroppableTombstoneRatio (double, r)
%9 - MaximumCompactionThreshold (int, rw)
%10 - MinimumCompactionThreshold (int, rw)
%11 - SSTableCountPerLevel ([I, r)
%12 - UnleveledSSTables (int, r)
# operations
%0 - void beginLocalSampling(java.lang.String p1,int p2)
%1 - long estimateKeys()
%2 - javax.management.openmbean.CompositeData finishLocalSampling(java.lang.String p1,int p2)
%3 - void forceMajorCompaction(boolean p1)
%4 - java.util.List getSSTablesForKey(java.lang.String p1)
%5 - void loadNewSSTables()
%6 - void setCompactionThresholds(int p1,int p2)
%7 - long trueSnapshotsSize()
#there's no notifications
$>get CompactionStrategyClass # query the current compaction strategy class
#mbean = org.apache.cassandra.db:columnfamily=image,keyspace=reis,type=ColumnFamilies:
CompactionStrategyClass = org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy;
$>set CompactionStrategyClass "org.apache.cassandra.db.compaction.LeveledCompactionStrategy" # set the compaction strategy class to LCS
#Value of attribute CompactionStrategyClass is set to "org.apache.cassandra.db.compaction.LeveledCompactionStrategy"
$>get CompactionParametersJson # query the CompactionParametersJson parameter of LCS
#mbean = org.apache.cassandra.db:columnfamily=image,keyspace=reis,type=ColumnFamilies:
CompactionParametersJson = {"class":"LeveledCompactionStrategy","sstable_size_in_mb":"160"};

$>set CompactionParametersJson {"class":"LeveledCompactionStrategy","sstable_size_in_mb":"200"} # set the CompactionParametersJson parameter of LCS
#Value of attribute CompactionParametersJson is set to {"class":"LeveledCompactionStrategy","sstable_size_in_mb":"200"}

While converting the compaction strategy node by node, do not run ALTER TABLE: ALTER TABLE would propagate one node's JMX setting to all other nodes.

Once every node has been converted, make the change permanent with ALTER TABLE; otherwise, after a restart, a node's schema definition would overwrite the change made over JMX.


When Cassandra updates or deletes rows, it does not modify or remove the original rows in place; it writes new rows and marks the old ones with tombstones. Cassandra therefore has to run compaction periodically to tidy up the database.

There are three compaction strategies: SizeTieredCompactionStrategy (STCS), LeveledCompactionStrategy (LCS), and DateTieredCompactionStrategy (DTCS). The default is STCS.

The cluster in production started failing during compaction:

ERROR [CompactionExecutor:41367] 2019-06-22 11:22:02,063 CassandraDaemon.java:185 - Exception in thread Thread[CompactionExecutor:41367,1,main]
java.lang.RuntimeException: Not enough space for compaction, estimated sstables = 1, expected write size = 678107716200
    at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace(CompactionTask.java:275) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:118) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:256) ~[apache-cassandra-2.2.6.jar:2.2.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_66]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_66]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_66]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_66]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]

It wants more than 600 GB of free space to compact. Ouch!

Indeed, one data file has grown past 600 GB. The default SizeTieredCompactionStrategy needs somewhat more free space than the size of the data being compacted, but less than 400 GB of disk is left, so...

“SizeTieredCompactionStrategy Compaction requires a lot of temporary space: In worst case, we need to merge all existing SSTables into one, so we need half the disk to be empty to write the output file and only later can delete the old SSTables”

With the STCS compaction strategy, a Cassandra node generally has to keep more than 50% of its disk free. For large datasets that is frightening: several TB of data need several extra TB of free space just to keep running. WTF!

Under STCS, once min_threshold (default 4) similar-sized SSTables accumulate, they are merged into one big SSTable with no cap on its size. Early on, with little data, that is not a problem, but this 2 TB node already holds three 600 GB SSTables, and the next compaction would produce a single SSTable of over 2 TB. The default STCS is clearly a poor fit for large datasets.

The LeveledCompactionStrategy needs very little space to compact: just 10 * sstable_size_in_mb. With the current default sstable_size_in_mb of 160 MB that is roughly 2 GB, though the official advice is still to keep at least 10 GB free.

sstable_size_in_mb is a RW attribute on the LeveledCompactionStrategy bean that controls the size of the SSTables produced by compaction; the current default of 160 MB, or a setting of 200 MB, are both suitable. For large datasets LCS produces a very large number of sstables.
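
Once a table runs LCS, the per-level SSTable distribution can be checked with nodetool (cfstats is the Cassandra 2.x name; later versions call it tablestats):

$ nodetool cfstats reis.image | grep 'SSTables in each level'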

The table's current compaction setting:

cqlsh> desc TABLE image;
CREATE TABLE reis.image (
    id text PRIMARY KEY,
    content blob,
    name text
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

Sure enough: SizeTieredCompactionStrategy.

Change the compaction strategy

cqlsh> ALTER TABLE image WITH compaction = { 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 'sstable_size_in_mb': '200' };

Although this switches the compaction strategy from STCS to LCS, a full conversion still needs a huge amount of free disk space: the big sstables (currently three over 600 GB each) stay on disk until the conversion completes and only then can be deleted, leaving just the many small sstables. Compactions after that no longer need anywhere near as much free space.
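
The progress of the ongoing conversion can be watched with:

$ nodetool compactionstats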

The official documentation puts it this way: “While a merge of several SSTables is ongoing, the request path continues to read the old SSTables. Ideally, the old SSTables would be deleted as soon as the merge is done, but we must not delete an SSTable that still has in-progress reads.”

One more problem: changing the compaction strategy directly with ALTER TABLE makes every node in the cluster start converting to LCS at the same time, pinning the cluster load high. That is another reason to migrate to LCS node by node over JMX.

You can also turn off a node's automatic compaction:

$ nodetool disableautocompaction -- keyspacename tablename

After changing the table's compaction strategy, run compaction manually, one node at a time:

$ nodetool compact -- keyspacename tablename

Finally, turn autocompaction back on:

$ nodetool enableautocompaction
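
Put together, a node-by-node rollout could look like the following sketch (node1..node3 are hypothetical hostnames; the JMX strategy change from above is assumed to have been applied on each node first, and reis.image is this post's table):

for host in node1 node2 node3; do
    nodetool -h "$host" disableautocompaction reis image
    nodetool -h "$host" compact reis image
    nodetool -h "$host" enableautocompaction reis image
done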

When creating a new table, the compaction strategy can be specified directly:

CREATE TABLE reis.image (
    id text PRIMARY KEY,
    content blob,
    name text
) WITH compaction = { 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy' };


The official buster repositories no longer ship openjdk-8-jdk; default-jdk is now openjdk-11-jdk.

You can install Oracle JDK 8 with the java-package tool, or build openjdk8u yourself.

1. Oracle JDK 8

Install

Install java-package and the other required dependencies:

$ sudo apt install java-package java-common libgtk-3-dev libcairo-gobject2

Download

Download jdk-8u211-linux-x64.tar.gz from Oracle's official site.

Package

Change into the download directory and run:

$ make-jpkg jdk-8u211-linux-x64.tar.gz

This generates oracle-java8-jdk_8u211_amd64.deb in the current directory.

Install

$ sudo dpkg -i oracle-java8-jdk_8u211_amd64.deb
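
Afterwards, verify the installation, and pick the default if several JDKs are installed:

$ java -version
$ sudo update-alternatives --config java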

2. openjdk8u

openjdk8u is the updated line of openjdk8; its official repository is http://hg.openjdk.java.net/jdk8u/jdk8u/

Following its official build README, clone the source, then build and install it.
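
Roughly, per the README; a sketch that assumes mercurial, a boot JDK, and the usual build dependencies are already installed:

$ hg clone http://hg.openjdk.java.net/jdk8u/jdk8u
$ cd jdk8u
$ bash get_source.sh    # fetch the nested forest repositories
$ bash configure
$ make all              # the resulting images land under build/*/images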


The qemu monitor lets you inspect a guest's running state and also control its execution.

Access via telnet

Add to the qemu command line:

-monitor telnet:192.168.0.86:1234,server,nowait

qemu then listens for telnet on host interface 192.168.0.86, port 1234, and the monitor can be reached with telnet:

$ telnet 192.168.0.86 1234
Trying 192.168.0.86...
Connected to 192.168.0.86.
Escape character is '^]'.
QEMU 3.1.0 monitor - type 'help' for more information
(qemu)

Careful: typing quit at the (qemu) prompt terminates the guest. To leave only the telnet session, press '^]' and then type quit at the telnet prompt; you can reconnect to the qemu monitor again later.

Access via raw socket
Command line:

-monitor tcp:192.168.0.86:1234,server,nowait

Then connect like this:

$ nc 192.168.0.86 1234
QEMU 3.1.0 monitor - type 'help' for more information
(qemu)

Again: quit at the (qemu) prompt terminates the guest; to just leave nc, Ctrl+C is enough.
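
A few standard monitor commands worth trying once connected:

(qemu) info status          # running or paused
(qemu) info block           # attached block devices
(qemu) stop                 # pause the guest
(qemu) cont                 # resume the guest
(qemu) system_powerdown     # send an ACPI power-button event for a clean shutdown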
