Reposted from: http://www.cnblogs.com/gordonchao/archive/2010/12/13/1904606.html
Symptom: checking the pages, the data looked abnormal; today's generated volume was far below the usual level, which was clearly not right.
Finding the cause: the log files contained several warnings like this: ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
Searching around, I found that many people abroad have run into this problem. I have to say they are very good at describing problems, so I'm copying one such description here, since it matches my situation closely (in the end he never got the answer he wanted, but that's another story and not the point):
we encountered the following mnesia warning report in our system log:

Mnesia is overloaded: {dump_log, write_threshold}

The log contains several such reports within one second and then

* The core is one mnesia table of type disc_copies that contains persistent state of all entities (concurrent processes) in our system (one table row for one entity).
* The system consists of 20 such entities.
* Each entity is responsible for updating its state in the table
* We use mnesia:dirty_write/2, because we have no dependency among tables and each entity updates its state only.

In the worst case, there are 20 processes that want to write to the table but each to a different row.

* What precisely does the report mean?
* Can we do something about it?
* We plan to scale from units to thousands of entities. Will this be a problem? If so, how can we overcome it? If not, why not?
Source:
[Q] Mnesia is overloaded
(Very thorough; most of us could learn from how carefully he describes the problem!) I should still briefly describe our own system here (the quote above says it better than I could, but here it is anyway):
We have a module that waits for incoming data; for each item it receives, it spawns a process to handle it and write it into the data table. This leads to exactly the problem above, and it is the cause of the warning: frequent asynchronous writes.
Fixing it: the cause is found, so how do we solve it? This problem has come up many times on the mailing lists; someone proposed adding it to the FAQ, but who knows when that will happen. Until then a workaround is needed, and I found the following:
If you're using mnesia disc_copies tables and doing a lot of writes all at once, you've probably run into the following message

=ERROR REPORT==== 10-Dec-2008::18:07:19 ===
Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

This warning event can get really annoying, especially when they start happening every second. But you can eliminate them, or at least drastically reduce their occurrence.
Synchronous Writes
The first thing to do is make sure to use sync_transaction or sync_dirty. Doing synchronous writes will slow down your writes in a good way, since the functions won't return until your record(s) have been written to the transaction log. The alternative, which is the default, is to do asynchronous writes, which can fill the transaction log far faster than it gets dumped, causing the above error report.
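The switch the author recommends can be sketched as follows. This is a minimal sketch; the table name `entity_state` and the record shape are assumptions for illustration, not taken from the original posts:

```erlang
-record(entity_state, {id, data}).

%% Default, asynchronous: returns before the record reaches the
%% transaction log, so fast writers can flood the log.
write_async(Id, Data) ->
    mnesia:dirty_write(entity_state, #entity_state{id = Id, data = Data}).

%% Synchronous dirty write: the call does not return until the write
%% has been logged, which naturally throttles producers.
write_sync_dirty(Id, Data) ->
    mnesia:sync_dirty(
        fun() ->
            mnesia:write(entity_state, #entity_state{id = Id, data = Data}, write)
        end).

%% Full transaction semantics, committed synchronously on all replicas.
write_sync_tx(Id, Data) ->
    mnesia:sync_transaction(
        fun() ->
            mnesia:write(#entity_state{id = Id, data = Data})
        end).
```

In our setup, where each spawned process writes only its own row, sync_dirty is the lighter-weight choice; sync_transaction adds locking and replication guarantees on top.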
Mnesia Application Configuration
If synchronous writes aren't enough, the next trick is to modify two obscure configuration parameters. The mnesia_overload event generally occurs when the transaction log needs to be dumped, but the previous transaction log dump hasn't finished yet. Tweaking these parameters will make the transaction log dump less often, and the disc_copies tables dump to disk more often. NOTE: these parameters must be set before mnesia is started; changing them at runtime has no effect. You can set them through the command line or in a config file.
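For the config-file route, a minimal sys.config sketch could look like this (the values 40 and 50000 are the ones the author settles on for dc_dump_limit and dump_log_write_threshold):

```erlang
%% sys.config -- must be in effect before mnesia starts,
%% e.g. started with: erl -config sys
[
 {mnesia, [
     {dc_dump_limit, 40},
     {dump_log_write_threshold, 50000}
 ]}
].
```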
dc_dump_limit
This variable controls how often disc_copies tables are dumped from memory. The default value is 4, which means if the size of the log is greater than the size of the table / 4, then a dump occurs. To make table dumps happen more often, increase the value. I've found setting this to 40 works well for me.
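The rule above can be written out as a small worked example (assuming log and table sizes measured in the same unit, here MB):

```erlang
%% A disc_copies table dump is due once the log outgrows
%% TableSize / Limit, where Limit is dc_dump_limit.
dump_due(LogSize, TableSize, Limit) ->
    LogSize > TableSize / Limit.

%% For a 100 MB table and a 3 MB log:
%%   dump_due(3, 100, 4)  -> false  (threshold is 25 MB)
%%   dump_due(3, 100, 40) -> true   (threshold is 2.5 MB)
```

This shows why a larger dc_dump_limit means more frequent table dumps: the trigger threshold shrinks.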
dump_log_write_threshold
This variable defines the maximum number of writes to the transaction log before a new dump is performed. The default value is 100, so a new transaction log dump is performed after every 100 writes. If you're doing hundreds or thousands of writes in a short period of time, then there's no way mnesia can keep up. I set this value to 50000, which is a huge increase, but I have enough RAM to handle it. If you're worried that this high value means the transaction log will rarely get dumped when there are very few writes occurring, there's also a dump_log_time_threshold configuration variable, which by default dumps the log every 3 minutes.
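Since changes at runtime have no effect, it's worth confirming that the parameters actually took hold. Mnesia reads them from its application environment, so plain OTP calls can inspect them (no mnesia-specific API is assumed here):

```erlang
%% Returns {ok, Value} for each parameter that was set via -mnesia
%% flags or a config file, and 'undefined' when the built-in
%% default is in effect.
check_overload_params() ->
    {application:get_env(mnesia, dump_log_write_threshold),
     application:get_env(mnesia, dc_dump_limit)}.
```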
How it Works
I might be wrong on the theory since I didn't actually write or design mnesia, but here's my understanding of what's happening. Each mnesia activity is recorded to a single transaction log. This transaction log then gets dumped to table logs, which in turn are dumped to the table file on disk. By increasing the dump_log_write_threshold, transaction log dumps happen much less often, giving each dump more time to complete before the next dump is triggered. And increasing dc_dump_limit helps ensure that the table log is also dumped to disk before the next transaction dump occurs.
Source: How to Eliminate Mnesia Overload Events
Two fixes are described here: one is to avoid frequent asynchronous writes, the other is to loosen the relevant limits in mnesia's configuration.
1. The author recommends using sync_transaction or sync_dirty for write operations, since asynchronous writes are what trigger this warning.
2. The configuration changes are applied when starting Erlang: he recommends raising dc_dump_limit from its default of 4 to 40, and dump_log_write_threshold from its default of 100 to 50000. To set them when starting erl, run:
erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40
OK, here is what these two parameters mean:
dc_dump_limit: controls how often disc_copies tables are dumped from memory; a dump is triggered when the log grows larger than table size / dc_dump_limit, so a larger value means more frequent table dumps.
dump_log_write_threshold: the maximum number of writes to the transaction log before a new log dump is performed.