2012, 1(4):14-18. DOI: 10.12146/j.issn.2095-3135.201211002
Abstract:The data center is the core infrastructure of cloud computing, and the data center network is in turn the foundation of the data center, determining its performance, scalability and manageability. This paper points out the disadvantages of current mainstream data center networks, analyzes and compares the state-of-the-art research work, and forecasts the future of data center networking.
2012, 1(4):19-24. DOI: 10.12146/j.issn.2095-3135.201211003
Abstract:Cloud computing brings the potential to deliver flexibility, consolidation, and high resource utilization to data centers, and virtualization is its key layer. The high resource utilization and high performance promised by virtualization largely depend on an effective and efficient resource management scheme. There is great demand for a multilayered resource management mechanism built on virtualization: use the characteristics of the virtual machines to build a predictive resource model, derive an allocation strategy from it, and finally allocate resources dynamically on demand. We should focus not only on resource utilization on the physical machines, but also on how the resource demand of applications on the virtual machines changes at run time. The final target is a suite of virtual resource management technologies, applicable to static deployment, dynamic prediction, and resource management across virtual and physical machines, to strongly support cloud computing.
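The predict-then-allocate loop described in this abstract can be sketched as follows. This is a minimal illustration under assumed details: the class name, the exponential-moving-average predictor, and the headroom margin are illustrative choices, not the authors' actual scheme.

```python
# Sketch of a demand-driven allocator: observe a VM's measured demand,
# update a predictive model, then allocate the prediction plus headroom.
# (Names, the smoothing model, and the parameters are assumptions.)

class PredictiveAllocator:
    """Predicts a VM's next resource demand with an exponential
    moving average and allocates that prediction plus a safety margin."""

    def __init__(self, alpha=0.5, headroom=0.2, capacity=100.0):
        self.alpha = alpha          # weight given to the newest observation
        self.headroom = headroom    # safety margin over the prediction
        self.capacity = capacity    # physical machine capacity (units of share)
        self.prediction = None

    def observe(self, demand):
        # Update the predictive model with the demand measured on the
        # running virtual machine.
        if self.prediction is None:
            self.prediction = demand
        else:
            self.prediction = (self.alpha * demand
                               + (1 - self.alpha) * self.prediction)

    def allocate(self):
        # Allocate on demand: predicted usage plus headroom, capped at
        # the physical machine's capacity.
        if self.prediction is None:
            return self.capacity    # no history yet: be conservative
        return min(self.capacity, self.prediction * (1 + self.headroom))

alloc = PredictiveAllocator()
for demand in [40, 50, 60]:     # rising demand measured over three intervals
    alloc.observe(demand)
print(alloc.allocate())          # 63.0
```

In a real system the same loop would run per VM on each physical host, with the capped allocations also driving VM placement and migration decisions.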
2012, 1(4):25-29. DOI: 10.12146/j.issn.2095-3135.201211004
Abstract:A desktop cloud is an implementation of hosted virtual desktops using cloud computing technology, and desktop clouds are among the most popular applications of cloud computing. In this work, mechanisms for securely accessing and sharing virtual desktops are investigated in detail. A public key infrastructure (PKI) is utilized to create virtual organizations (VOs); within a VO, virtual machines are created and remote desktops are accessed and shared. The PKI provides the security mechanism, and multiple users can share a virtual machine via VO trust management. To secure the remote channels, OpenVPN is adopted to build a private network, authenticating users and encrypting communications.
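An OpenVPN client configuration of the kind this abstract describes, certificate-authenticated and encrypted, might look like the sketch below. The hostname, port, and file names are placeholders, not values from the paper; the certificates would be issued by the VO's PKI.

```
# Hypothetical OpenVPN client config for reaching virtual desktops
# over an encrypted private network (hostname and key files are
# placeholders).
client
dev tun
proto udp
remote vpn.example.org 1194

# PKI credentials: the CA anchors the virtual organization's trust,
# and the per-user cert/key pair authenticates this member.
ca ca.crt
cert client.crt
key client.key

cipher AES-256-CBC
verb 3
```

The server side would hold a matching server certificate signed by the same CA, so that both endpoints authenticate each other before the tunnel carries any remote-desktop traffic.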
2012, 1(4):30-33. DOI: 10.12146/j.issn.2095-3135.201211005
Abstract:The security of cloud computing has received wide attention in recent years. Due to the sharing, outsourcing and openness of the cloud computing model, end-users do not retain complete control over their computation and data; instead, a malicious system operator can tamper with or steal a user's critical data without the user's awareness. Some researchers have tried to protect the privacy and integrity of data in the cloud by extending the architecture and reducing the hardware/software stack that cloud security relies on. This paper presents the technologies in this line of research, including enforced memory isolation, encryption by a secure processor, etc.
2012, 1(4):34-40. DOI: 10.12146/j.issn.2095-3135.201211006
Abstract:With the development of information technology and growing attention to energy conservation, more and more people and organizations are interested in working virtually. However, security is one of the major challenges for virtual working, since sensitive data is transmitted and stored online. In this paper, we propose a novel solution, delivered as a security service on a cloud computing platform, that protects virtual working with an on-demand virtual private network (VPN) as well as transparent encryption. The solution is on-demand, easy to use, cost-effective and simple to manage. It can also save expenditure for small and medium-sized enterprises, which need not invest in IT systems to build their own security solution for virtual working.
2012, 1(4):41-45. DOI: 10.12146/j.issn.2095-3135.201211007
Abstract:This paper presents a virtualization system evaluation framework oriented to the needs of cloud computing. The framework contains six test types: functional testing, performance testing, scalability testing, stress testing, fault-tolerance testing, and power testing. For each type, the framework gives specific test objectives, methods and procedures. Finally, we give the results and analysis of multiple performance tests.
2012, 1(4):46-51. DOI: 10.12146/j.issn.2095-3135.201211008
Abstract:Cloud computing is growing rapidly and brings virtualization technology to traditional datacenters in order to offer computing resources as on-demand services, such as Amazon's Elastic Compute Cloud (EC2). Hadoop is an open-source implementation of Google's MapReduce, a distributed parallel computing model for large-scale datasets, and it is gaining increasing attention in both academia and industry. It remains an open question how to combine cloud computing infrastructures with Hadoop efficiently, i.e., how to make full use of the former's elastic resources and the latter's scalability, fault tolerance and ability to run on commodity hardware. In this paper, we carry out a series of experiments to evaluate and analyze the performance of Hadoop on our heterogeneous cloud computing testbed. We demonstrate that the performance of Hadoop degrades under scenarios with high I/O overheads, compared with the traditional scenario where each node in a cluster is a physical machine. Our work can serve as a basis for improving the performance of Hadoop in cloud computing environments.
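The MapReduce model that Hadoop implements can be illustrated with a toy single-process word count; this sketch only shows the map/shuffle/reduce structure, whereas real Hadoop distributes these phases across a cluster, where the shuffle phase is exactly the I/O-heavy step the abstract's experiments stress.

```python
from collections import defaultdict

# Toy single-process illustration of the MapReduce programming model.
# Real Hadoop runs many map and reduce tasks in parallel on different
# nodes; the shuffle between them is where heavy disk and network I/O occurs.

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every input record.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values; here, sum the counts per word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["cloud computing", "cloud storage", "cloud computing testbed"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["cloud"])  # 3
```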
2012, 1(4):52-57. DOI: 10.12146/j.issn.2095-3135.201211009
Abstract:Cognitive Radio (CR) and Cognitive Networks (CN) have been proposed to meet the increasing requirements for network dependability, availability and adaptability. Aiming at current network problems, the background and technical traits of CR are first introduced and the concepts of CN are explained. Then, relevant research work on CN is summarized. Based on the above discussion, a general architecture of CN is given and its resource management model is elaborated. Finally, conclusions are drawn and future work is outlined.
2012, 1(4):58-63. DOI: 10.12146/j.issn.2095-3135.201211010
Abstract:Recently, there has been explosive growth in cloud computing, greatly increasing the importance of storage in such systems. A wide range of applications already run in the cloud, and ever more varied applications are rushing onto this platform. Different applications may have different storage requirements, such as file size, the number of files, and I/O performance. This suggests that a single unified file system in a cloud would leave overall system performance suboptimal, or even fail to satisfy the needs of all applications. However, it is unclear whether it is beneficial to optimize overall I/O performance by employing several file systems within a single cloud computing platform. In this paper, we address this problem by characterizing several popular distributed file systems used in cloud computing: Ceph, MooseFS, GlusterFS and HDFS. Through the characterization, we find that the performance of the same operation, such as read or write, may differ dramatically across file systems. When the file size is less than 256 MB, MooseFS has the best write performance, outperforming the others by 22.3% on average. As for read performance, GlusterFS is the best when the file size is larger than 256 KB, with read performance 21.0% higher than the other file systems. These findings lead us to design a hybrid file system for cloud computing platforms, attempting to significantly improve the overall performance.
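A hybrid file system of the kind this abstract proposes could route each request by operation and file size. The thresholds and percentages below restate the abstract's measurements; the routing function itself, and HDFS as the fallback backend, are illustrative assumptions rather than the authors' actual design.

```python
# Hypothetical routing policy for a hybrid cloud file system, built on
# the abstract's measurements: writes under 256 MB go to MooseFS (best
# write performance, ~22.3% faster on average) and reads over 256 KB go
# to GlusterFS (~21.0% faster reads). The fallback choice is an assumption.

KB = 1024
MB = 1024 * KB

def pick_backend(operation, file_size):
    """Choose a distributed file system backend for one I/O request."""
    if operation == "write" and file_size < 256 * MB:
        return "moosefs"       # fastest writes in this size range
    if operation == "read" and file_size > 256 * KB:
        return "glusterfs"     # fastest reads in this size range
    return "hdfs"              # fallback backend (an assumption)

print(pick_backend("write", 10 * MB))   # moosefs
print(pick_backend("read", 1 * MB))     # glusterfs
print(pick_backend("read", 4 * KB))     # hdfs
```

A real implementation would also have to keep file-to-backend metadata consistent, since a file written to one backend must later be read from the same one.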