
An Outline for Writing Paper Summaries -- 中國教育人博客

 woailife 2010-07-06

An Outline for Writing Paper Summaries

 (2010-04-18 09:21)

I came across a good outline for writing paper summaries, so I am listing it here to share.

It started as homework for a course: the instructor assigned a few papers and asked for a one-page summary of each, a paper summary. It would be easy to fudge this: just copy a few paragraphs and you have your page. But I felt that would be pointless, and that it would be better to work through the ideas carefully. So I searched online and found this sample outline.

What I like about this outline is that it pays real attention to reading the introduction: the first three questions are all answered there, and understanding them is what lets you grasp the paper's background. Then again, a good paper must address the first eight points clearly; if you cannot extract this information from a paper, the paper can hardly be called successful, since its key issues were never made clear.

The condensed template is as follows:

1. What is the problem the authors are trying to solve? (the research objective)

2. What other approaches or solutions existed at the time that this work was done?

3. What was wrong with the other approaches or solutions?

4. What is the authors' approach or solution?

5. Why is it better than the other approaches or solutions?

6. How did they test their solution?

7. How does it perform?

8. Why is this work important? (the significance of the research)

More comments/questions: additional remarks or questions. Depending on the reader's background and level of understanding, these can be raised flexibly.


Below is my own humble attempt, i.e., the actual homework. Criticism is welcome.

Summary of “A. Boucher (2009): Considering complex training images with search tree partitioning, Computers & Geosciences, 35, 1151-1158.”

1. What is the problem the authors are trying to solve?
To address the difficulty of using a large, complex training image (TI) in SNESIM.

2. What other approaches or solutions existed at the time that this work was done?
To decrease the size of the search tree, the general approaches are to use a smaller TI or a smaller template. In addition, two alternative approaches exist in current practice: 1) the region approach and 2) the probability field approach.

3. What was wrong with the other approaches or solutions?
A major issue with the region approach is that there is no guarantee that the different TIs used are compatible with one another. The problem with the probability field approach is that the integration changes the conditional probability derived from the search tree and affects pattern reproduction in an unknown manner.

4. What is the authors' approach or solution?
  The authors propose a search tree partitioning approach (a toy sketch of this workflow follows the list):
  - Apply a set of filters to the TI to obtain filter scores that are indicative of the underlying patterns.
  - Define partition classes from the filter scores with a clustering algorithm.
  - Build a search tree for each partition class.
  - At each pixel along the simulation path, retrieve the partition class first and use it to select the appropriate search tree.
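
Purely as an illustration of this workflow, here is my own toy sketch, not the code from the paper: the two filters, the 3x3 template, the k-means clustering, and the dict-based pattern store are all assumptions made for the example. In SNESIM proper, the per-class store would be an actual search tree built with the simulation template.

import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans

def partition_training_image(ti, filters, n_classes=4):
    """Label each TI pixel with a partition class and build one pattern store per class."""
    # 1) Apply each filter to the TI to get per-pixel filter scores.
    scores = np.stack([convolve(ti.astype(float), f, mode="nearest") for f in filters],
                      axis=-1)                                    # shape (ny, nx, n_filters)
    # 2) Cluster the score vectors into partition classes.
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        scores.reshape(-1, scores.shape[-1])).reshape(ti.shape)
    # 3) Build one pattern database per class (a dict stands in for a search tree here).
    stores = {c: {} for c in range(n_classes)}
    ny, nx = ti.shape
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            patt = tuple(ti[j - 1:j + 2, i - 1:i + 2].ravel())    # 3x3 template
            stores[labels[j, i]][patt] = stores[labels[j, i]].get(patt, 0) + 1
    return labels, stores

# 4) During simulation, the partition class at the current node selects which
#    pattern store (search tree) the conditional probabilities are read from.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ti = (rng.random((64, 64)) > 0.5).astype(int)                 # toy binary TI
    filters = [np.ones((3, 3)) / 9.0,                             # local mean filter
               np.array([[-1.0, 0.0, 1.0]] * 3)]                  # horizontal gradient filter
    labels, stores = partition_training_image(ti, filters)
    print(labels.shape, [len(s) for s in stores.values()])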

5. Why is it better than the other approaches or solutions?
The proposed algorithm adds local information without using a probability field, which would distort the conditional distribution obtained from the TI. It improves on the region approach by requiring only a single complete TI and by implicitly modeling the transitions between regions. It also facilitates a hierarchical framework by identifying the hierarchical structures through partition classes.

6. How did they test their solution?
They gave two examples of the implementation of their approach. The first simulates a series of fractures; the second simulates a shallow-water tidal system. They also compared the resulting patterns with those obtained from the global search tree approach.

7. How does it perform?
Performance is very good. In the fracture case the trend in orientation is well reproduced, and in the tidal case the geological consistency of the TI is reproduced. These features are not exhibited in the realizations produced by the global search tree approach. In addition, the simulation speed is improved.

8. Comments & other questions
* The simulation grid can be obtained in many ways. Do different simulation grids influence the realization?
* FILTERSIM, as a similar approach, would be interesting to compare against the authors' approach.
* A. Boucher's effort to integrate the new algorithm into SGeMS is interesting and commendable. Open geostatistics software allows more researchers to test, apply, and extend the algorithm more easily.


Sample Paper Summary
Name: Scott Brandt


Paper: Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long,
       and Carlos Maltzahn, ``Ceph: A Scalable, High-Performance, Distributed
       Object-based Storage System,'' Symposium on Operating Systems Design
       and Implementation (OSDI '06), Seattle, Washington, November 6-8, 2006,
       to appear.

1. What is the problem the authors are trying to solve?

  Existing storage systems do not scale well to petabytes of data and
  terabytes/second throughput.

2. What other approaches or solutions existed at the time that this work was done?

  Lots of other file systems existed. NFS is a standard for distributed
  file systems. Lustre is a distributed object-based file system, as
  is the Panasas file system.

3. What was wrong with the other approaches or solutions?

  All have limitations that prevent them from scaling to the desired level.
  Block-based file systems have problems dealing with the large number of
  blocks in such a system. Other object-based file systems fail to take full
  advantage of the object-based paradigm and still maintain object lists.

4. What is the authors' approach or solution?

  The authors' solution includes:
  - Object-based storage devices
  - A globally known mapping function for locating file data
    (instead of object lists; see the toy sketch after this list)
  - A scalable metadata manager that dynamically redelegates authority
    for directory subtrees based on load
  - A distributed autonomous system for managing the object stores
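
  As a toy illustration of the second bullet (my own sketch, not Ceph's CRUSH
  algorithm): the point is that any client can compute an object's location
  from its name and the device list alone, so no central object list needs to
  be stored or queried. The rendezvous-hash placement below is an assumption
  chosen for the example.

import hashlib

def locate(object_name: str, osds: list[str], replicas: int = 2) -> list[str]:
    """Deterministically map an object name to `replicas` storage devices."""
    def score(osd: str) -> int:
        # Rank every device by a hash of (object, device); the highest ranks win.
        return int.from_bytes(
            hashlib.sha256(f"{object_name}:{osd}".encode()).digest()[:8], "big")
    return sorted(osds, key=score, reverse=True)[:replicas]

if __name__ == "__main__":
    cluster = [f"osd.{i}" for i in range(8)]
    print(locate("inode123.chunk0", cluster))   # same answer on every client
    # Adding or removing a device only remaps the objects whose top-ranked
    # devices change, which is what lets placement scale without a lookup table.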

5. Why is it better than the other approaches or solutions?

  It scales to petabytes, provides nearly linear performance improvements
  as storage devices are added, degrades gracefully as storage devices are
  removed, and provides very high performance.

6. How did they test their solution?

  They ran parts of the storage system and observed their performance
  under various workloads. Data performance was tested on a single object
  store and on several object stores. Metadata performance was tested on
  a large cluster.

7. How does it perform?

  Performance is very good. The system appears to achieve its goals,
  although scalability could be improved in certain scenarios where a lot
  of sharing occurs.

8. Why is this work important?

  This work is important because storage systems continue to grow in size
  and data is becoming increasingly important.

3+ comments/questions

  * Why didn't they directly compare the performance of their system against
    that of any other storage systems?

  * What happens if you scale to exabytes? Will the system still work? What
    factors will limit its ability to scale further?

  * How much of the improvement is due to CRUSH, and how much to the design
    of the other parts of the system? Why didn't they do any tests to isolate
    the benefits of the individual design decisions?
 
