Memory is managed in pages. Traditionally each page is 4K, so on systems with a lot of memory the number of page table entries explodes and page table lookups become a burden on the system; huge pages were introduced to address this. The amd64 architecture currently supports 2M and 1G memory pages, and the default huge page size is 2M.
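
A quick way to check whether the CPU supports these page sizes is to look for the corresponding CPU flags (pse indicates 2M page support, pdpe1gb indicates 1G page support); on a CPU that supports both, both flags are printed:

$ grep -m1 -wo -E 'pse|pdpe1gb' /proc/cpuinfo
pse
pdpe1gb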

Create a huge page memory group

# groupadd hugetlbfs
# getent group hugetlbfs
hugetlbfs:x:1001:
# adduser postgres hugetlbfs

Adding the postgres user to the huge page group allows PostgreSQL to allocate huge page memory.
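
As an aside, on the PostgreSQL side (9.4 and later) the huge_pages setting in postgresql.conf controls whether shared memory is allocated from huge pages; the default try silently falls back to ordinary pages, while on makes startup fail if huge pages cannot be allocated:

# postgresql.conf
huge_pages = try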

Edit /etc/sysctl.conf:

# huge page pool size: how many huge pages the system reserves; 13824 * 2M = 27G here
vm.nr_hugepages = 13824
# group that may allocate huge page shared memory
vm.hugetlb_shm_group = 1001

Create a mount point for huge page memory

# mkdir /hugepages

Edit /etc/fstab so the hugetlbfs filesystem is mounted automatically:

hugetlbfs /hugepages hugetlbfs mode=1770,gid=1001 0 0
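
The new entry can be activated right away, without waiting for a reboot:

# mount /hugepages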

Apply the settings:

# sysctl -p

If memory is too fragmented for the kernel to reserve enough pages for the huge page pool, either reboot the system, or try freeing the system caches first and re-run sysctl:

# sync ; echo 3 > /proc/sys/vm/drop_caches
# sysctl -p

Check huge page usage:

$ grep 'Huge' /proc/meminfo 
AnonHugePages: 0 kB
HugePages_Total: 13824
HugePages_Free: 13710
HugePages_Rsvd: 4096
HugePages_Surp: 0
Hugepagesize: 2048 kB

References:
[1] Hugepages

===
[erq]

The value N in the DEST_ID column of the V$ARCHIVED_LOG view is the same N as in the LOG_ARCHIVE_DEST_N parameter; in other words, DEST_ID tells you which archive destination an archived log record was written to.
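
For example, a quick way to see which destination each archived log went to (a minimal sketch; the column list is illustrative):

$ sqlplus / as sysdba
SQL> SELECT dest_id, name, completion_time FROM v$archived_log ORDER BY completion_time;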

===
[erq]

If you use NetworkManager this is trivial and needs no further explanation.

Using gsettings
(assuming GNOME is in use, of course)

$ gsettings set org.gnome.system.proxy autoconfig-url http://myserver/myconfig.pac
$ gsettings set org.gnome.system.proxy mode auto

A browser configured to use the system proxy settings will then go through the proxy automatically.
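
The same schema can be used to inspect the current state or to switch the proxy off again (mode accepts 'none', 'manual' and 'auto'):

$ gsettings get org.gnome.system.proxy mode
'auto'
$ gsettings set org.gnome.system.proxy mode none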

===
[erq]

Running a COPY command fails with a Batch too large error:

cqlsh:reis> COPY image FROM 'image.csv';
Using 7 child processes

Starting copy of reis.image with columns ['id', 'content', 'name'].
Failed to import 11 rows: InvalidRequest - code=2200 [Invalid query] message="Batch too large", will retry later, attempt 1 of 5
Failed to import 16 rows: InvalidRequest - code=2200 [Invalid query] message="Batch too large", will retry later, attempt 1 of 5
Failed to import 18 rows: InvalidRequest - code=2200 [Invalid query] message="Batch too large", will retry later, attempt 1 of 5
...

/var/log/cassandra/system.log shows:

ERROR [SharedPool-Worker-1] 2016-07-15 15:07:20,725 BatchStatement.java:267 - Batch of prepared statements for [reis.image] is of size 2732525, exceeding specified threshold of 614400 by 2118125. (see batch_size_fail_threshold_in_kb)

A batch is simply a bulk execution of DML statements. My image table contains a large column that stores pictures of up to 500K each, hence the Batch too large error.

In /etc/cassandra/cassandra.yaml, the batch_size_fail_threshold_in_kb parameter defaults to only 50, so a single one of these DML statements already exceeds the threshold.
Set it to 600:

batch_size_fail_threshold_in_kb: 600
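
cassandra.yaml is only read at startup, so restart the node for the new threshold to take effect (service name as on a typical packaged install):

# service cassandra restart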

Then limit COPY to one statement per batch:

cqlsh:reis> COPY image FROM 'image.csv' WITH MAXBATCHSIZE = 1 and MINBATCHSIZE = 1;
Using 7 child processes

Starting copy of reis.image with columns ['id', 'content', 'name'].
Processed: 826 rows; Rate: 30 rows/s; Avg. rate: 60 rows/s
826 rows imported from 1 files in 13.682 seconds (0 skipped).

All data imported successfully.

===
[erq]

Running this cqlsh query:

cqlsh:xxx> select count(*) from image;

fails with:

OperationTimedOut: errors={}, last_host=x.x.x.x

Setting the client-side timeout in ~/.cassandra/cqlshrc solves it:

[connection]
# client request timeout in seconds
request_timeout = 120
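
Recent cqlsh versions also accept the timeout as a command-line option, which avoids editing cqlshrc:

$ cqlsh --request-timeout=120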

===
[erq]

ActiveMQ officially ships an init script.

Add a user to run ActiveMQ:

# useradd -m -d /srv/activemq activemq

Install ActiveMQ:

$ cd /srv/activemq
$ sudo -u activemq tar zxvf apache-activemq-<version>-bin.tar.gz
# ln -snf apache-activemq-<version> current
# chown -R activemq:users apache-activemq-<version>

Change the default configuration so that ActiveMQ runs as the activemq user:

# cp apache-activemq-<version>/bin/env /etc/default/activemq
# sed -i 's/^ACTIVEMQ_USER=""/ACTIVEMQ_USER="activemq"/' /etc/default/activemq

Install the init script:

# ln -snf /srv/activemq/current/bin/activemq /etc/init.d/activemq
# update-rc.d activemq defaults
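
The broker can then be started and checked through the init script:

# /etc/init.d/activemq start
# /etc/init.d/activemq status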

===
[erq]

After installation, the default instance first has to be enabled:

# ln -sf /etc/activemq/instances-available/main /etc/activemq/instances-enabled/main

Then start ActiveMQ's main instance in console (debug) mode:

# /etc/init.d/activemq console main

This fails with:

ERROR Temporary Store limit is 50000 mb, whilst the temporary data directory: /var/lib/activemq/main/data/localhost/tmp_storage only has 3346 mb of usable space

This happens because the disk does not have enough free space; change the configuration file /etc/activemq/instances-enabled/main/activemq.xml by adding the following lines inside the broker element:

<systemUsage>
    <systemUsage>
        <storeUsage>
            <storeUsage limit="1 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="500 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
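
Before picking the limits, it is worth checking how much space is actually available under the data directory named in the error message:

$ df -h /var/lib/activemq/main/data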

Without a storeUsage limit configured, you get an error like this:

ERROR Store limit is 0 mb, whilst the max journal file size for the store is: 32 mb, the store will not accept any data when used.

References:
[1] Temporary Store Limit Error When Starting the Broker

===
[erq]

If cqlsh's COPY command fails with the error "get_num_processes() takes no keyword arguments", deleting the file /usr/lib/pymodules/python2.7/cqlshlib/copyutil.so (and copyutil.c, if present) fixes it.
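
That is (paths as on this Debian-style pymodules layout):

# rm /usr/lib/pymodules/python2.7/cqlshlib/copyutil.so
# rm -f /usr/lib/pymodules/python2.7/cqlshlib/copyutil.c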

===
[erq]

To prevent accidents, Cassandra does not allow a node's data center or rack name to be changed. The attempt fails with an error like:

ERROR [main] 2016-06-18 11:01:40,730 CassandraDaemon.java:638 - Cannot start node if snitch's
data center (DC1) differs from previous data center (datacenter1). Please fix the snitch configuration,
decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.

If you know what you are doing, two JVM flags, cassandra.ignore_dc and cassandra.ignore_rack, let you change the data center and rack names anyway.

Edit /etc/cassandra/cassandra-env.sh and add the JVM flags:

JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true -Dcassandra.ignore_rack=true"
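
Restart the node for the flags to take effect; once it has come up with the new names, it is advisable to remove the flags again so that future mismatches are not silently ignored (service name as on a packaged install):

# service cassandra restart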

References:
[1] failed to start dse solr node

===
[erq]