Paper Writing 14: Keep the Conclusion Concise, but the Further Work Can Be Longer

By force of reading habit, people always look at the last section of a paper. Reviewers and readers will usually read this part sentence by sentence.

  1. As the saying goes, a paper should have a tiger's head and a leopard's tail: open strong and finish crisply. The conclusion should generally not be long; five sentences are enough. If there is more to discuss, add a separate Discussions section before this one.
  2. Avoid reusing sentences from the abstract. The abstract states what we did, whereas the conclusion should state what observations and conclusions we obtained. In other words, the conclusion is more specific than the abstract: it can point to a particular algorithm, property, theorem, or experimental result in the paper, which naturally distinguishes it from the abstract.
  3. If you discuss further work, list 3 to 5 items; they do not count toward the Conclusion's length. Readers are likely to pay close attention to this part, because it suggests lines along which they can continue the research. For a piece of research, opening a door matters more than completely solving a single problem. If this part is written well, the paper will attract many citations, and the number of citations matters more than the number of publications. A minimal LaTeX sketch of this structure is given right after this list.
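To make the structure concrete, here is a minimal LaTeX sketch of a conclusion section with a separate further-work list. The section title, the filler sentences, the algorithm name XYZ, and the citation key are all placeholders of mine, not taken from any of the papers quoted below:

```latex
\documentclass{article}
\begin{document}

\section{Conclusions and further work}
% Keep the conclusion itself to about five sentences, stating
% concrete observations instead of repeating the abstract.
This study has proposed the XYZ algorithm for ...
Experimental results indicate that ...

% The further-work list does not count toward the conclusion's
% length; 3 to 5 items are appropriate.
The following research topics deserve further investigation:
\begin{enumerate}
  \item Enhancing XYZ to ...            % one concrete direction per item
  \item Combining XYZ with ... \cite{placeholder-key}.
  \item Applying the framework to larger datasets such as ...
\end{enumerate}

\end{document}
```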

Here are a few examples:

  • The first paper was published 10 years ago and already has 300+ citations

Title: Test-cost-sensitive attribute reduction
This study has posited a new research theme in regard to attribute reduction. We formally defined the minimal test-cost reduct problem, which is more general than the traditional reduct problem. The new problem has practical areas of application. In fact, when tests must be undertaken in parallel, test-cost-sensitive attribute reduction is mandatory in dealing independently with cost issues. Algorithm 1 is a framework and one can design different attribute significance functions to obtain a substantive algorithm. We also proposed an information gain based k-weighted function. Experimental results indicate that the competition approach is a good choice even if the optimal setting of k is known.
The following research topics deserve further investigation:
(1) Algorithm 1 can be enhanced to provide better performance. To improve the quality of results, one could design approaches other than the simple weighting indicated by Eq. (23). To improve the speed of the algorithm such that it can be employed in very large databases, one could use the accelerator proposed by Qian et al. [30], or efficient genetic algorithms [18,35,53].
(2) The minimal test cost reduct should be considered again in more complicated models such as the simple common-test-cost decision systems and the complex common-test-cost decision systems [25]. In these models, the algorithm may also be more complicated.
(3) Since this paper only considers test costs, one may also consider the misclassification cost under the framework of the decision-theoretic rough set model [46].
We note that the major contribution of the paper is in the definition of the problem rather than the algorithm. As we know, the problem formulation is usually more important than the problem solving. We hope that this study opens a new door for rough sets research.

  • The second example is more standard in form

Title: Three-way active learning through clustering selection
This study has proposed the TACS algorithm to dynamically select the appropriate clustering in the active learning process. A number of techniques were discussed, including cluster selection, query balancing, and tree pruning. Experimental results verify the effectiveness of the algorithm.
The following research topics deserve further investigation:
(1) More clustering techniques in the algorithm framework. Currently, only a few clustering techniques have been incorporated into TACS. Other techniques can be incorporated to accommodate data with different shapes. In addition, these techniques should be fine-tuned or modified to fit the framework of the algorithm.
(2) Better evaluation measures to select clustering techniques. TACS uses weighted entropy to evaluate the quality of clustering. New measures based on the Gini indicator might be good alternatives. More sophisticated measures can be designed to consider other information such as block size ratios and data distribution.
(3) Clustering ensemble techniques for active learning. TACS only selects the currently “best” clustering technique. By designing cluster ensemble techniques, new blocks can be obtained from these different techniques. Hence it is possible to obtain more stable blocks of better quality. Moreover, different classification strategies can be employed for instances inside or outside stable blocks.
In summary, TACS is a comprehensive algorithm framework that can be enriched in the future.

  • The third example is formulaic to the extreme

Title: Multi-label active learning through serial-parallel neural networks
In this paper, we proposed the MASP algorithm with a serial-parallel neural network for MAL.
This simple network not only supports missing labels, but also provides label uncertainty computation.
MASP also considers instance representativeness and label sparsity when querying labels.
Its effectiveness is verified by comparison with three sets of deliberately selected algorithms.

There are still some topics that deserve further investigation.

  1. Fine-tuning the network. We have not fine-tuned the network with different numbers of layers/nodes and various activation functions. More layers/nodes and diverse activation functions may help improve network performance.
  2. Combining the serial-parallel neural network with matrix factorization for MLL. Currently, the network of MASP only considers label correlation through feature extraction. We can learn the relationship between original features and latent labels using the network, and the relationship between latent and actual labels using GLOCAL \cite{Zhu-2018-GLOCAL}.
  3. Cost-sensitive multi-label active learning. In MLL, the labels are usually highly imbalanced, with a small number of positive ones. The cost of misclassifying a positive label is usually higher than that of misclassifying a negative one. Also, each query has a teacher cost. Following the idea of CADU \cite{Wu-2019-IJAR-CADU}, we can adjust the query strategy of MASP to handle this issue.
  4. Extreme multi-label learning. XML \cite{BhatiaB-2015-NIPS-Sparse,Liu-2017-SIGIR-Extreme} is much more challenging than MLL due to the huge number of instances, extremely high dimensional label spaces, severe missing values, very high label sparsity, etc. Important resources are available online\footnote{http://manikvarma.org/downloads/XC/XMLRepository.html}. We need to design deep learning models that are completely different from serial-parallel networks to handle this problem.
