• Volume 11, Issue 3, 2022 Table of Contents
    • Special Issue: New Storage Devices and Systems
    • Preface: New Storage Devices and Systems

      2022, 11(3):1-2. DOI: 10.12146/j.issn.2095-3135.20220501001


    • A Survey on the Secure Non-Volatile Memory Technology

      2022, 11(3):3-22. DOI: 10.12146/j.issn.2095-3135.20211001002


      Abstract: Big data applications place an ever-growing demand on memory capacity, and the limitations of traditional DRAM-based memory have become increasingly serious under such workloads. Computer designers have therefore begun to consider Non-Volatile Memory (NVM) as a replacement for traditional DRAM. As a non-volatile storage medium, NVM needs no dynamic refresh and thus avoids the associated energy consumption; at the same time, its read performance is close to that of DRAM, and the capacity of a single NVM storage cell scales well. However, integrating NVM into an existing computer system as main memory requires solving its security problems. Traditional DRAM loses its contents after a power failure, so data do not linger in the medium, whereas data written to NVM can be retained for a relatively long time. If attackers gain access to an NVM module and scan its contents, they can recover the data held in memory; this security issue is known as the "recovery vulnerability" of the data. Therefore, in data center environments built on NVM modules, making full and effective use of NVM while guaranteeing its security has become an urgent problem. Starting from the security perspective, this article surveys the research hotspots and progress on NVM security in recent years. It first summarizes the main security issues NVM faces, such as data theft, integrity damage, data consistency and crash recovery, and the system performance degradation introduced by encryption, decryption and integrity protection. It then introduces in detail counter-mode encryption, the Bonsai Merkle Tree integrity protection technique, data consistency and crash recovery techniques, and related optimization schemes. Finally, the article concludes and discusses the NVM issues that deserve further attention in the future.
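
      To make the surveyed counter-mode encryption concrete, the sketch below shows the core idea in plain Python: each memory line carries a write counter, the encryption pad is derived from the key, the line address and that counter, and a fresh counter on every write prevents pad reuse. Real designs derive the pad with AES and use split major/minor counters; SHA-256 stands in here only so the example runs with the standard library, and the class and function names are illustrative.

      import hashlib

      LINE_SIZE = 32  # bytes per (simplified) memory line

      def pad(key: bytes, line_addr: int, counter: int) -> bytes:
          """One-time pad derived from (key, line address, per-line write counter)."""
          seed = key + line_addr.to_bytes(8, "little") + counter.to_bytes(8, "little")
          return hashlib.sha256(seed).digest()  # 32 bytes, matches LINE_SIZE

      class EncryptedNVM:
          def __init__(self, key: bytes, lines: int):
              self.key = key
              self.counters = [0] * lines              # per-line write counters
              self.cells = [bytes(LINE_SIZE)] * lines  # "non-volatile" ciphertext

          def write(self, line: int, plaintext: bytes):
              self.counters[line] += 1                 # fresh counter -> fresh pad
              p = pad(self.key, line, self.counters[line])
              data = plaintext.ljust(LINE_SIZE, b"\0")
              self.cells[line] = bytes(a ^ b for a, b in zip(data, p))

          def read(self, line: int) -> bytes:
              # The pad depends only on metadata, so it can be computed in parallel
              # with the slow NVM read, hiding most of the decryption latency.
              p = pad(self.key, line, self.counters[line])
              return bytes(a ^ b for a, b in zip(self.cells[line], p))

      nvm = EncryptedNVM(b"secret-key", lines=16)
      nvm.write(3, b"persistent data")
      assert nvm.read(3).rstrip(b"\0") == b"persistent data"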

    • A Survey of Flash Memory Based Near-Data Processing Technology

      2022, 11(3):23-41. DOI: 10.12146/j.issn.2095-3135.20211019001


      Abstract: The separation of storage and compute units in the von Neumann architecture leads to the “storage wall” problem, which makes existing system architectures hard pressed to cope with the data explosion brought about by the wide application of big data and artificial intelligence technologies. The continuous growth of data has driven an evolution of the computing paradigm: researchers are moving compute units into the storage system, that is, Near-Data Processing (NDP) technology. NDP uses the computing power of the storage controller to perform I/O-intensive computing tasks, which reduces data movement and brings advantages such as low latency, high scalability, and low power consumption, giving it broad application prospects. This article first introduces near-data processing architectures, then outlines research results on NDP systems for specific applications and for more general scenarios, summarizes NDP hardware and software platforms and industry progress, and finally looks at future development trends of NDP technology.
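
      The core NDP idea above can be illustrated with a toy interface (not any specific product's API): the storage side exposes a scan operation that runs a filter inside the device controller, so only matching records, rather than every block, cross the I/O bus to the host. The NDPDevice class and its methods are hypothetical.

      from typing import Callable, Iterable, List

      class NDPDevice:
          """Toy 'smart' storage device whose controller can run a scan itself."""
          def __init__(self, records: List[bytes]):
              self._records = records                  # data resident on the device

          def read_all(self) -> Iterable[bytes]:       # conventional path: ship everything
              return list(self._records)

          def scan(self, predicate: Callable[[bytes], bool]) -> List[bytes]:
              # Runs "inside" the device controller; only hits return to the host.
              return [r for r in self._records if predicate(r)]

      dev = NDPDevice([b"error: disk", b"ok", b"error: net", b"ok"])

      # Host-side filtering moves all 4 records; the pushed-down scan moves only 2.
      host_side = [r for r in dev.read_all() if r.startswith(b"error")]
      near_data = dev.scan(lambda r: r.startswith(b"error"))
      assert host_side == near_data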

    • Performance, Reliability and Application of Non-Volatile Memory Devices

      2022, 11(3):42-55. DOI: 10.12146/j.issn.2095-3135.20211017001


      Abstract: With the development of big data and artificial intelligence applications, data are growing explosively and the demand for data storage is increasing day by day, while traditional memory technologies are approaching the limits of their physical storage density. Non-volatile memory is expected to replace traditional dynamic random access memory or disk technology thanks to excellent characteristics such as byte addressability, low energy consumption, and fast read and write speeds. However, the storage media themselves have shortcomings, such as limited lifetime, asymmetric read and write speeds, uneven wear, and various sources of errors. This paper explains the storage principles of common non-volatile memories and surveys and summarizes existing improvement techniques.
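
      To make the uneven-wear problem and its mitigation concrete, below is a toy table-based wear-leveling sketch; it is an illustration only, not one of the specific device-level schemes surveyed in the paper. A logical-to-physical remapping table tracks per-line write counts and swaps a write-hot line with the coldest line once their counts diverge beyond a threshold.

      class WearLeveledNVM:
          def __init__(self, lines: int, threshold: int = 64):
              self.map = list(range(lines))        # logical -> physical mapping
              self.writes = [0] * lines            # per-physical-line write counts
              self.data = [b""] * lines
              self.threshold = threshold

          def write(self, logical: int, value: bytes):
              hot = self.map[logical]
              self.data[hot] = value
              self.writes[hot] += 1
              cold = min(range(len(self.writes)), key=self.writes.__getitem__)
              if self.writes[hot] - self.writes[cold] > self.threshold:
                  # Swap the hot and cold physical lines (two extra writes, amortized).
                  cold_logical = self.map.index(cold)
                  self.data[hot], self.data[cold] = self.data[cold], self.data[hot]
                  self.map[logical], self.map[cold_logical] = cold, hot
                  self.writes[hot] += 1
                  self.writes[cold] += 1

          def read(self, logical: int) -> bytes:
              return self.data[self.map[logical]]

      nvm = WearLeveledNVM(lines=8)
      for _ in range(1000):                        # a write-hot logical line
          nvm.write(0, b"hot")
      assert nvm.read(0) == b"hot"
      assert max(nvm.writes) < 200                 # 1000 writes did not land on one line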

    • Performance Optimization of Storage Engine Based on Non-Volatile Memory

      2022, 11(3):56-70. DOI: 10.12146/j.issn.2095-3135.20210913001


      Abstract: Non-volatile memory offers read/write speeds comparable to dynamic random access memory and can replace traditional storage devices to improve the performance of storage engines. However, existing storage engines typically access devices through generic block interfaces, resulting in a long I/O software stack that adds read/write latency in the software layers and limits the performance benefits of non-volatile memory. To solve this problem, this paper proposes a new storage engine, named NVMStore, built on non-volatile memory for the Ceph big-data storage system platform. NVMStore accesses the storage device through memory mapping and optimizes its data read/write paths around the byte addressability and data persistence characteristics of non-volatile memory, thereby reducing write amplification and software stack overhead. Experimental results on real non-volatile memory devices show that, compared with traditional storage engines running on non-volatile memory, NVMStore significantly improves Ceph's performance under small-block read/write workloads.
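
      The memory-mapped access path described above can be approximated in a few lines: the engine maps the device into its address space, updates bytes in place, and persists the changes, instead of issuing read()/write() calls through the block I/O stack for every small update. Real persistent-memory engines map DAX devices and flush individual cache lines (e.g. via PMDK); the ordinary file and the msync-style flush below are stand-ins, and the file name and record layout are made up for the example.

      import mmap, os, struct

      PATH = "toy_store.img"           # stand-in for a DAX-mapped NVM region
      SIZE = 4096

      with open(PATH, "wb") as f:      # pre-size the "device"
          f.truncate(SIZE)

      fd = os.open(PATH, os.O_RDWR)
      mm = mmap.mmap(fd, SIZE)

      def put(offset: int, value: bytes):
          """Byte-addressable update: write a length header and payload in place."""
          mm[offset:offset + 4] = struct.pack("<I", len(value))
          mm[offset + 4:offset + 4 + len(value)] = value
          # Persist the mapping; on real NVM this would be cache-line flushes
          # plus a store fence rather than an msync-style call.
          mm.flush()

      put(0, b"small value")
      length = struct.unpack("<I", mm[0:4])[0]
      assert mm[4:4 + length] == b"small value"
      mm.close(); os.close(fd)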

    • Performance Analysis and Study for Hybrid NAND Flash Memory

      2022, 11(3):71-84. DOI: 10.12146/j.issn.2095-3135.20220225001


      Abstract: Hybrid flash storage has become the mainstream storage device in the consumer device market, yet academic study of hybrid flash storage is still insufficient. Based on our research activities, practical experience with hybrid storage devices, and the state-of-the-art research, this paper introduces the architecture of hybrid flash memory, the pain points that need to be solved, and the relevant research progress. The paper first introduces and analyzes the hybrid flash memory architecture and its characteristics. Then experimental results on real hybrid flash memory are presented, exposing the problems that remain to be solved; these problems fall into four categories: write characteristics, read characteristics, read/write interference, and volume characteristics. Finally, the latest research progress on each problem is introduced, the advantages and disadvantages of each technique are summarized, and future development directions are discussed.
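
      A simplified model of the hybrid architecture discussed above is sketched below: writes land in a small, fast SLC cache region and are later migrated ("folded") into the dense TLC/QLC region when the cache fills. The capacities and the oldest-first migration policy are illustrative assumptions, not a description of any vendor's actual flash translation layer.

      from collections import OrderedDict

      class HybridFlash:
          def __init__(self, slc_pages: int = 4):
              self.slc = OrderedDict()     # small, fast region (insertion order = age)
              self.qlc = {}                # large, slow, dense region
              self.slc_pages = slc_pages
              self.migrations = 0

          def write(self, lba: int, data: bytes):
              self.slc[lba] = data
              self.slc.move_to_end(lba)
              while len(self.slc) > self.slc_pages:
                  old_lba, old_data = self.slc.popitem(last=False)  # evict oldest
                  self.qlc[old_lba] = old_data                      # slow fold step
                  self.migrations += 1

          def read(self, lba: int) -> bytes:
              if lba in self.slc:          # SLC hit is the fast path
                  return self.slc[lba]
              return self.qlc[lba]

      ssd = HybridFlash(slc_pages=4)
      for lba in range(10):
          ssd.write(lba, b"page%d" % lba)
      assert ssd.read(9) == b"page9"       # recent write served from the SLC cache
      assert ssd.migrations == 6           # older pages were folded into QLC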

    • Performance Optimization of Offline Batch Jobs in Erasure-Coded Storage Systems

      2022, 11(3):85-97. DOI: 10.12146/j.issn.2095-3135.20211026001


      Abstract: With the explosive growth of Internet data, many distributed storage systems have integrated erasure-coding mechanisms to ensure data reliability while further reducing storage overhead. However, erasure coding changes the data placement scheme and thus affects data access by other services in the cluster. This paper proposes a new data placement scheme and a task scheduling strategy for heterogeneous Hadoop clusters that better fit the “one-to-many” data access pattern of a typical class of offline batch jobs: MapReduce applications. By analyzing the hardware parameters and historical load of each node in a heterogeneous cluster, the data blocks of the same erasure-coded stripe are placed, as far as possible, on nodes with similar performance. This keeps the data access pressure on the cluster's nodes relatively balanced during the execution of a MapReduce job. In addition, when the system schedules tasks, the task concurrency of each node is determined according to its current load and computing power, so as to avoid straggler tasks caused by heavy load on some nodes and to optimize the progress of the MapReduce job. Experimental results show that, compared with Hadoop's default random data placement and task allocation strategy, the proposed data layout strategy, the Heterogeneous-aware Data Placement Algorithm (HDPA), and the proposed task allocation strategy, the Dynamic Task Allocation Algorithm (DTAA), effectively reduce the long-tail effect of tasks across different types of MapReduce applications, reducing running time by 10.5% to 42%.
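
      The paper's HDPA algorithm is only described at a high level above; the sketch below captures the stated idea under illustrative assumptions: nodes are scored from hardware parameters and historical load, ranked, and partitioned into groups of stripe width, so that every block of an erasure-coded stripe lands on nodes of similar performance and no slow node throttles stripe-wide reads. The scoring values and group construction are assumptions, not the paper's exact procedure.

      from typing import Dict, List

      def place_stripes(node_scores: Dict[str, float],
                        num_stripes: int,
                        blocks_per_stripe: int) -> List[List[str]]:
          # Rank nodes by a combined performance/load score.
          ranked = sorted(node_scores, key=node_scores.get, reverse=True)
          # Partition the ranked nodes into performance groups of stripe width.
          groups = [ranked[i:i + blocks_per_stripe]
                    for i in range(0, len(ranked) - blocks_per_stripe + 1,
                                   blocks_per_stripe)]
          # Place each stripe entirely inside one group (round-robin over groups).
          return [groups[s % len(groups)] for s in range(num_stripes)]

      scores = {"n1": 9.5, "n2": 9.1, "n3": 8.8, "n4": 4.2, "n5": 4.0, "n6": 3.9}
      layout = place_stripes(scores, num_stripes=4, blocks_per_stripe=3)
      assert layout[0] == ["n1", "n2", "n3"]   # stripe on the fast group
      assert layout[1] == ["n4", "n5", "n6"]   # stripe on the slow group, never mixed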

    • Biomedicine and Biomedical Engineering
    • A Two-Stage Coronary Artery Segmentation Method Based on the Combination of 2D and 3D Convolutional Neural Networks

      2022, 11(3):98-107. DOI: 10.12146/j.issn.2095-3135.20211025001


      Abstract: Cardiovascular disease is a major disease that seriously endangers public health, and coronary heart disease is the leading cause of death among cardiovascular diseases. Accurate coronary artery segmentation is therefore of great significance for the treatment of coronary heart disease. Deep learning has been widely applied in medical imaging, but segmenting small objects such as the coronary arteries remains a challenge. Aiming at accurate coronary artery segmentation, this research proposes a two-stage method that combines 2D and 3D convolutional neural networks. Specifically, the proposed scheme uses the vessel skeleton as a bridge between the two kinds of convolutional networks, expanding the receptive field of the network. Compared with other deep-learning-based methods, the proposed method shows improvements in sensitivity, Dice coefficient, AUC, and Hausdorff distance, and can detect coronary arteries that competing methods miss, alleviating the problems of vessel disconnection and missing vessels to a certain extent.
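
      The exact network is not reproduced here; the PyTorch sketch below only illustrates the stated idea of using an intermediate coarse/skeleton prediction as a bridge: a 2D network processes individual slices, its per-slice output is stacked back into a volume, and a 3D network refines the segmentation from the image volume concatenated with that coarse prediction. Layer widths and module names are arbitrary choices for the sketch.

      import torch
      import torch.nn as nn

      class Slice2DNet(nn.Module):
          """Tiny per-slice 2D CNN producing a coarse vessel probability map."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

          def forward(self, x):            # x: (B*D, 1, H, W)
              return self.net(x)

      class Refine3DNet(nn.Module):
          """Tiny 3D CNN refining stacked 2D predictions with volumetric context."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid())

          def forward(self, x):            # x: (B, 2, D, H, W) = image + coarse mask
              return self.net(x)

      def segment(volume, net2d, net3d):
          """volume: (B, 1, D, H, W) CT volume -> (B, 1, D, H, W) vessel mask."""
          b, _, d, h, w = volume.shape
          slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w)
          coarse = net2d(slices).reshape(b, d, 1, h, w).permute(0, 2, 1, 3, 4)
          return net3d(torch.cat([volume, coarse], dim=1))

      mask = segment(torch.rand(1, 1, 8, 32, 32), Slice2DNet(), Refine3DNet())
      print(mask.shape)                    # torch.Size([1, 1, 8, 32, 32])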

    • New Energy and New Materials
    • Fabrication and Properties of Ultra-Fine BaTiO3 Particles with High Tetragonality

      2022, 11(3):108-120. DOI: 10.12146/j.issn.2095-3135.20210830001


      Abstract: Ultra-fine BaTiO3 powder with high tetragonality is the key material for the next generation of multilayer ceramic capacitors. In this paper, the effects of the sand-milling medium size and of the phase of the raw TiO2 on the reaction activity and the dielectric properties of the product were investigated, and ultra-fine BaTiO3 powder with high tetragonality was synthesized via a sand-milling process. Field emission scanning electron microscope images and X-ray photoelectron spectroscopy showed that the finer sand-milling media were more effective at crushing the raw materials and at mechanical activation. Raman spectroscopy and X-ray diffraction patterns proved that the anatase phase of TiO2 was transformed successively into the TiO2-II phase and the rutile phase during sand-milling. Derivative thermogravimetry and X-ray diffraction analysis proved that the finer sand-milling media were better at lowering the reaction temperature and at inhibiting the formation of Ba2TiO4. High-resolution transmission electron microscopy images revealed that BaTiO3 forms through the diffusion of Ba2+ into the TiO2 lattice. An anatase TiO2/BaCO3 mixture was sand-milled for 4 h with ZrO2 balls 0.1 mm in diameter and calcined at 1 100 ℃ for 3 h, yielding a well-dispersed BaTiO3 powder with an average particle size of 186 nm and a tetragonality of 1.009 2. Sintered at 1 250 ℃, the density of the ceramic derived from the as-prepared powder was 96.11%, and the peak dielectric constant at the Curie point (137.8 ℃) was 8 677.
