Thursday, May 12, 2016

CLOUD COMPUTING

ATS_CC16_001: Dynamic and Public Auditing with Fair Arbitration for Cloud Data
          Cloud users no longer physically possess their data, so ensuring the integrity of their outsourced data becomes a challenging task. Recently proposed schemes such as "provable data possession" and "proofs of retrievability" are designed to address this problem, but they audit static archive data and therefore lack support for data dynamics. Moreover, threat models in these schemes usually assume an honest data owner and focus on detecting a dishonest cloud service provider, despite the fact that clients may also misbehave. This paper proposes a public auditing scheme with support for data dynamics and fair arbitration of potential disputes. In particular, we design an index switcher to eliminate the limitation of index usage in tag computation in current schemes and to handle data dynamics efficiently. To address the fairness problem, so that no party can misbehave without being detected, we further extend existing threat models and adopt the signature-exchange idea to design fair arbitration protocols, so that any possible dispute can be settled fairly. The security analysis shows our scheme is provably secure, and the performance evaluation demonstrates that the overheads of data dynamics and dispute arbitration are reasonable.
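
A minimal sketch of the index-switcher idea (illustrative only; class and method names are hypothetical, not the paper's): a mapping layer keeps logical block positions separate from the fixed tag indices used at tag-computation time, so inserting or deleting a block never forces re-tagging of the blocks that follow.

```python
class IndexSwitcher:
    """Maps logical block positions to the fixed tag indices used when
    tags were computed, so data dynamics never invalidate old tags."""

    def __init__(self):
        self.tag_indices = []   # tag_indices[i] = tag index of logical block i
        self.next_tag = 0       # monotonically increasing tag-index counter

    def append(self):
        self.tag_indices.append(self.next_tag)
        self.next_tag += 1
        return self.tag_indices[-1]

    def insert(self, pos):
        # A fresh tag index is issued; blocks after `pos` keep their
        # original tag indices, so their tags need no recomputation.
        self.tag_indices.insert(pos, self.next_tag)
        self.next_tag += 1
        return self.tag_indices[pos]

    def delete(self, pos):
        # Removing a block leaves every other (logical -> tag) pair intact.
        return self.tag_indices.pop(pos)

    def tag_index(self, pos):
        return self.tag_indices[pos]


switcher = IndexSwitcher()
for _ in range(4):
    switcher.append()                 # blocks 0..3 get tag indices 0..3
switcher.insert(1)                    # new block gets fresh tag index 4
assert switcher.tag_index(2) == 1     # the old block kept its tag index
```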

ATS_CC16_002: Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key Updates
          Key-exposure resistance has always been an important issue for in-depth cyber defence in many security applications. Recently, the key-exposure problem in the setting of cloud storage auditing has been proposed and studied. To address the challenge, existing solutions all require the client to update his secret keys in every time period, which inevitably introduces new local burdens for the client, especially one with limited computation resources, such as a mobile phone. In this paper, we focus on how to make key updates as transparent as possible for the client and propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, so that the key-update burden on the client is kept minimal. In particular, we leverage the third-party auditor (TPA) present in many existing public auditing designs, let it play the role of the authorized party in our case, and put it in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our design, the TPA only needs to hold an encrypted version of the client's secret key while performing all these burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key from the TPA when uploading new files to the cloud. Besides, our design equips the client with the capability to further verify the validity of the encrypted secret keys provided by the TPA. All these salient features are carefully designed to make the whole auditing procedure with key-exposure resistance as transparent as possible for the client. We formalize the definition and the security model of this paradigm. The security proof and the performance simulation show that our detailed design instantiations are secure and efficient.
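
The paper's pairing-based construction is not reproduced here; the toy sketch below only illustrates how a TPA could update a blinded secret key each period while the client verifies the unblinded result against a publicly computable key chain. The group parameters and the update rule are assumptions chosen for clarity, not the paper's scheme.

```python
import hashlib

# Toy discrete-log group (illustration only): g generates the
# order-q subgroup of Z_p^*, with p = 2q + 1.
p, q, g = 467, 233, 4

def h(t):
    # Public per-period update factor derived from the period number.
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % q or 1

# Setup: client's secret key sk0, public key pk, and a blinding factor r.
sk0, r = 55, 97            # hypothetical values; r is kept by the client
pk = pow(g, sk0, p)
c = (sk0 * r) % q          # the TPA stores only this blinded key

for t in range(1, 4):      # the TPA evolves the *blinded* key each period
    c = (c * h(t)) % q
    pk = pow(pk, h(t), p)  # matching public-key chain, publicly computable

# Client downloads c, unblinds it, and verifies against the public chain.
sk = (c * pow(r, -1, q)) % q
assert pow(g, sk, p) == pk
print("period-3 key recovered and verified")
```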

ATS_CC16_003: Providing User Security Guarantees in Public Infrastructure Clouds
          The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants, insulated from the minutiae of hardware maintenance, rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs of the protocols' resistance to attacks in the defined threat model. The protocols allow trust to be established by remotely attesting the host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. The presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.

ATS_CC16_004: Attribute-Based Data Sharing Scheme Revisited in Cloud Computing
          Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption technique for secure data sharing in the context of cloud computing. The data owner is allowed to fully control the access policy associated with the data to be shared. However, CP-ABE suffers from a potential security risk known as the key escrow problem, whereby the secret keys of users have to be issued by a trusted key authority. Besides, most of the existing CP-ABE schemes cannot support attributes with arbitrary state. In this paper, we revisit the attribute-based data sharing scheme in order not only to solve the key escrow issue but also to improve the expressiveness of attributes, so that the resulting scheme is friendlier to cloud computing applications. We propose an improved two-party key issuing protocol that guarantees that neither the key authority nor the cloud service provider can individually compromise the whole secret key of a user. Moreover, we introduce the concept of weighted attributes to enhance attribute expression, which not only extends the expression from binary to arbitrary state but also reduces the complexity of the access policy. Therefore, both the storage cost and the encryption complexity of a ciphertext are reduced. The performance analysis and the security proof show that the proposed scheme achieves efficient and secure data sharing in cloud computing.
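
As a hedged illustration of the escrow-free property (not the paper's actual two-party protocol, which runs over ABE key components), the sketch below uses simple additive secret sharing: the key authority and the cloud service provider each contribute one share of the user's key exponent, so neither party alone holds the whole key.

```python
import secrets

# Toy discrete-log group, for illustration only.
p, q, g = 467, 233, 4

# Each party independently samples a share of the user's key exponent;
# neither the key authority (KA) nor the cloud service provider (CSP)
# ever sees the other party's share.
x_ka = secrets.randbelow(q)    # chosen by the KA
x_csp = secrets.randbelow(q)   # chosen by the CSP

# The user combines the shares locally to obtain the full key exponent.
x_user = (x_ka + x_csp) % q

# Consistency check: the user's public key equals the product of the
# per-party public contributions, so correctness is publicly verifiable.
pk_ka = pow(g, x_ka, p)
pk_csp = pow(g, x_csp, p)
assert pow(g, x_user, p) == (pk_ka * pk_csp) % p
```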

ATS_CC16_005: An Efficient File Hierarchy Attribute-Based Encryption Scheme in Cloud Computing
          Ciphertext-policy attribute-based encryption (CP-ABE) has been a preferred encryption technology for solving the challenging problem of secure data sharing in cloud computing. Shared data files generally have the characteristic of multilevel hierarchy, particularly in the areas of healthcare and the military. However, the hierarchical structure of shared files has not been explored in CP-ABE. In this paper, an efficient file-hierarchy attribute-based encryption scheme for cloud computing is proposed. The layered access structures are integrated into a single access structure, and the hierarchical files are then encrypted with the integrated access structure. The ciphertext components related to attributes can be shared across the files, so both ciphertext storage and encryption time are saved. Moreover, the proposed scheme is proved secure under the standard assumption. Experimental simulation shows that the proposed scheme is highly efficient in terms of encryption and decryption, and its advantages become more conspicuous as the number of files increases.

ATS_CC16_006: Identity-Based Proxy-Oriented Data Uploading and Remote Data Integrity Checking in Public Cloud
          With the rapid development of cloud computing, more and more clients would like to store their data on public cloud servers (PCSs). New security problems have to be solved to help more clients process their data in the public cloud. When a client is restricted from accessing the PCS, he delegates a proxy to process his data and upload it. On the other hand, remote data integrity checking is also an important security problem in public cloud storage: it lets clients check whether their outsourced data are kept intact without downloading the whole data. Motivated by these security problems, we propose a novel proxy-oriented data uploading and remote data integrity checking model in identity-based public key cryptography: identity-based proxy-oriented data uploading and remote data integrity checking in public cloud (ID-PUIC). We give the formal definition, system model, and security model. Then, a concrete ID-PUIC protocol is designed using bilinear pairings. The proposed ID-PUIC protocol is provably secure based on the hardness of the computational Diffie-Hellman problem. Our ID-PUIC protocol is also efficient and flexible. Based on the original client's authorization, the proposed ID-PUIC protocol can realize private remote data integrity checking, delegated remote data integrity checking, and public remote data integrity checking.
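
The real ID-PUIC protocol relies on bilinear pairings; the hash-based sketch below (all names hypothetical) only shows the shape of the interaction: per-block authenticators, delegation to a verifier, a random-sample challenge, and response verification. A real scheme would aggregate the response homomorphically so that whole blocks need not be transmitted.

```python
import hashlib
import hmac
import secrets

def tag(k, i, block):
    # Per-block authenticator (a stand-in for pairing-based tags).
    return hmac.new(k, i.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Client side: tag blocks, upload (blocks, tags), keep only the key.
k = secrets.token_bytes(32)
blocks = [f"block-{i}".encode() for i in range(100)]
cloud = {i: (b, tag(k, i, b)) for i, b in enumerate(blocks)}

# Delegation: the original client authorizes a proxy/verifier; here we
# simply hand over the MAC key (the real scheme avoids this via a warrant).
def challenge(n=5):
    return [secrets.randbelow(len(blocks)) for _ in range(n)]

def prove(indices):                    # run by the cloud server
    return [(i, *cloud[i]) for i in indices]

def verify(kv, proof):                 # run by the delegated verifier
    return all(hmac.compare_digest(t, tag(kv, i, b)) for i, b, t in proof)

assert verify(k, prove(challenge()))
```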

ATS_CC16_008: Outsourcing Eigen-Decomposition and Singular Value Decomposition of Large Matrix to a Public Cloud
          Cloud computing enables customers with limited computational resources to outsource their huge computation workloads to the cloud and its massive computational power. However, utilizing this computing paradigm presents various challenges that need to be addressed, especially regarding security. As eigen-decomposition (ED) and singular value decomposition (SVD) of a matrix are widely applied in engineering tasks, in this paper we design secure, correct, and efficient protocols for outsourcing the ED and SVD of a matrix to a malicious cloud. To achieve security, we employ efficient privacy-preserving transformations to protect both input and output privacy. To check the correctness of the result returned from the cloud, an efficient verification algorithm is employed. A computational complexity analysis shows that our protocols are highly efficient. We also introduce outsourced principal component analysis as an application of our two proposed protocols.
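
One standard masking trick for this kind of outsourcing, shown below as an assumption rather than the paper's exact transformation, is to conjugate the input matrix by a random invertible matrix: a similarity transform preserves eigenvalues, the client recovers eigenvectors by undoing the mask, and each returned pair can be verified in O(n^2) by checking Mv = λv, versus O(n^3) for computing the decomposition itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
M = M + M.T                           # symmetric, so eigenvalues are real

# Mask: P = D @ Pi, a random positive diagonal times a permutation.
# M_masked = P M P^{-1} has the same eigenvalues as M.
D = np.diag(rng.uniform(1.0, 2.0, n))
Pi = np.eye(n)[rng.permutation(n)]
P = D @ Pi
M_masked = P @ M @ np.linalg.inv(P)   # this is what gets sent to the cloud

# "Cloud" computes the eigendecomposition of the masked matrix.
vals, vecs = np.linalg.eig(M_masked)

# Client unmasks the eigenvectors and cheaply verifies every result.
V = np.linalg.inv(P) @ vecs
for lam, v in zip(vals, V.T):
    assert np.allclose(M @ v, lam * v, atol=1e-6)
```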

ATS_CC16_009: Performance limitations of a text search application running in cloud instances
          This article analyzes the performance of MySQL in clouds based on commodity hardware in order to identify the bottlenecks in the execution of a series of scripts developed on the SQL standard. The scripts were designed to perform text search over a considerable number of records. Two types of platforms were employed: a physical machine serving as host and an instance within a cloud infrastructure. The results show that intensive use of a relational database suffers a greater loss of performance in a cloud instance, due to limitations in the primary storage system employed in the cloud infrastructure.

ATS_CC16_0010: A dynamic load balancing method of cloud-center based on SDN
          In order to achieve dynamic load balancing at the data-flow level, in this paper we apply SDN technology to the cloud data center and propose a dynamic load balancing method for the cloud center based on SDN. The approach exploits the flexibility SDN brings to task scheduling and accomplishes real-time monitoring of service-node traffic and load conditions through the OpenFlow protocol. When the system load is imbalanced, the controller can allocate network resources globally. What's more, through dynamic correction, the system load does not tilt noticeably in the long run. Simulation results show that this approach ensures the load will not tilt over a long period of time and improves system throughput.
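
A hedged controller-side sketch of the idea (the load figures are made up; a real controller would obtain them from OpenFlow flow and port statistics): new flows are dispatched to the least-loaded node, and a dynamic-correction step shifts load when the spread exceeds a threshold.

```python
# Hypothetical per-node load readings, normalized to [0, 1].
loads = {"node1": 0.82, "node2": 0.35, "node3": 0.47}
IMBALANCE = 0.3                      # assumed tilt threshold

def route_new_flow():
    # Dispatch each new flow to the least-loaded service node.
    return min(loads, key=loads.get)

def dynamic_correction():
    # If the max-min load spread exceeds the threshold, migrate half of
    # the difference from the hottest node to the coolest one.
    hot = max(loads, key=loads.get)
    cool = min(loads, key=loads.get)
    if loads[hot] - loads[cool] > IMBALANCE:
        shift = (loads[hot] - loads[cool]) / 2
        loads[hot] -= shift
        loads[cool] += shift
        return f"moved {shift:.2f} load {hot} -> {cool}"
    return "balanced"

print(route_new_flow())      # node2
print(dynamic_correction())  # e.g. moved 0.23 load node1 -> node2
```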

ATS_CC16_0011: Encryption-Based Solution for Data Sovereignty in Federated Clouds
          The rapidly growing demand for cloud services in current business practice has favored the success of hybrid clouds and the advent of cloud federation. The available literature on this topic has focused on middleware abstraction to interoperate heterogeneous cloud platforms and to orchestrate different management and business models. However, cloud federation implies serious security and privacy issues with respect to data sovereignty when data is outsourced across different judicial and legal systems. This column describes a solution that applies encryption to protect data sovereignty in federated clouds rather than restricting the elasticity and migration of data across them.

ATS_CC16_0012: Attribute-based access control for multi-authority systems with constant size ciphertext in cloud computing
          In most existing CP-ABE schemes there is only one authority in the system, and all public and private keys are issued by this authority, which incurs ciphertext sizes and computation costs in the encryption and decryption operations that depend at least linearly on the number of attributes involved in the access policy. We propose an efficient multi-authority CP-ABE scheme in which the authorities need not interact to generate public information during the system initialization phase. Our scheme has constant ciphertext length and a constant number of pairing computations, and it can be proven CPA-secure in the random oracle model under the decisional q-BDHE assumption. When a user's attribute revocation occurs, the scheme transfers most of the re-encryption work to the cloud service provider, reducing the data owner's computational cost without sacrificing security. Finally, the analysis and simulation results show that the proposed schemes ensure the privacy and secure access of sensitive data stored in the cloud server and can cope with dynamic changes of users' access privileges in large-scale systems. Besides, the multi-authority ABE eliminates the key escrow problem, optimizes the ciphertext length, and improves the efficiency of the encryption and decryption operations.

ATS_CC16_0013: AMTS: Adaptive multi-objective task scheduling strategy in cloud computing
          Task scheduling in cloud computing environments is a multi-objective optimization problem that is NP-hard. It is also challenging to find an appropriate trade-off among resource utilization, energy consumption, and Quality of Service (QoS) requirements under a changing environment and diverse tasks. Considering both processing time and transmission time, a PSO-based Adaptive Multi-objective Task Scheduling (AMTS) strategy is proposed in this paper. First, the task scheduling problem is formulated. Then, a task scheduling policy is advanced to obtain optimal resource utilization, task completion time, average cost, and average energy consumption. To maintain particle diversity, adaptive acceleration coefficients are adopted. Experimental results show that the improved PSO algorithm can obtain quasi-optimal solutions for the cloud task scheduling problem.
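
A minimal PSO sketch with time-varying acceleration coefficients on a toy task-to-VM assignment objective; the parameter schedule and problem data are illustrative assumptions, not the paper's AMTS settings. Ramping c1 down and c2 up over the iterations favors exploration early and convergence late, which is how adaptive coefficients preserve particle diversity.

```python
import random

tasks = [4, 7, 2, 9, 5, 3]            # task lengths (toy data)
speeds = [1.0, 1.5, 2.0]              # VM processing speeds
n_vm, dim, swarm, iters = len(speeds), len(tasks), 20, 100

def makespan(pos):
    # Decode a continuous position into task -> VM assignments, then score.
    finish = [0.0] * n_vm
    for t, x in zip(tasks, pos):
        vm = int(abs(x)) % n_vm
        finish[vm] += t / speeds[vm]
    return max(finish)

X = [[random.uniform(0, n_vm) for _ in range(dim)] for _ in range(swarm)]
V = [[0.0] * dim for _ in range(swarm)]
pbest = [x[:] for x in X]
gbest = min(X, key=makespan)[:]

for it in range(iters):
    w = 0.9 - 0.5 * it / iters        # decaying inertia
    c1 = 2.5 - 2.0 * it / iters       # adaptive acceleration coefficients:
    c2 = 0.5 + 2.0 * it / iters       # explore early, converge late
    for i in range(swarm):
        for d in range(dim):
            V[i][d] = (w * V[i][d]
                       + c1 * random.random() * (pbest[i][d] - X[i][d])
                       + c2 * random.random() * (gbest[d] - X[i][d]))
            X[i][d] += V[i][d]
        if makespan(X[i]) < makespan(pbest[i]):
            pbest[i] = X[i][:]
            if makespan(X[i]) < makespan(gbest):
                gbest = X[i][:]

print("best makespan:", round(makespan(gbest), 3))
```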

ATS_CC16_0014: Efficient R-Tree Based Indexing Scheme for Server-Centric Cloud Storage System
          Cloud storage systems pose new challenges to the community in supporting efficient concurrent querying for various data-intensive applications, where indexes always hold important positions. In this paper, we explore a practical method to construct a two-layer indexing scheme for multi-dimensional data in diverse server-centric cloud storage systems. We first propose RT-HCN, an indexing scheme integrating an R-tree based indexing structure and an HCN-based routing protocol. RT-HCN organizes storage and compute nodes into an HCN overlay, one of the newly proposed server-centric data center topologies. Based on the properties of HCN, we design a specific index mapping technique to maintain layered global indexes and corresponding query processing algorithms to support efficient query tasks. Then, we extend the idea of RT-HCN to another server-centric data center topology, DCell, uncovering a potentially general and feasible way of deploying two-layer indexing schemes on other server-centric networks. Furthermore, we prove theoretically that RT-HCN is both space-efficient and query-efficient: each node maintains a tolerable number of global indexes while highly concurrent queries are processed within acceptable overhead. We finally conduct targeted experiments on Amazon's EC2 platform, comparing our design with RT-CAN, a similar indexing scheme for traditional P2P networks. The results validate the query efficiency, especially the point-query speedup, of RT-HCN, demonstrating its potential applicability in future data centers.

ATS_CC16_0015: Dynamic Certification of Cloud Services: Trust, but Verify!
          Although intended to ensure cloud service providers' security, reliability, and legal compliance, current cloud service certifications are quickly outdated. Dynamic certification, on the other hand, provides automated monitoring and auditing to verify cloud service providers' ongoing adherence to certification requirements.

ATS_CC16_0016: Privacy preserving and delegated access control for cloud applications
          In cloud computing applications, users' data and applications are hosted by cloud providers. This paper proposes an access control scheme that uses a combination of discretionary access control and cryptographic techniques to secure users' data and applications hosted by cloud providers. Many cloud applications require users to share their data and applications hosted by cloud providers. To facilitate resource sharing, the proposed scheme allows cloud users to delegate their access permissions to other users easily. Using the access control policies that guard access to resources and the credentials submitted by users, a third party could otherwise infer information about the cloud users; the proposed scheme therefore uses cryptographic techniques to obscure the access control policies and users' credentials, ensuring the privacy of the cloud users. Data encryption is used to guarantee the confidentiality of data. Compared with existing schemes, the proposed scheme is more flexible and easier to use. Experiments show that the proposed scheme is also efficient.

ATS_CC16_0017: Auditing a Cloud Provider’s Compliance With Data Backup Requirements: A Game Theoretical Analysis
          The new developments in cloud computing have introduced significant security challenges in guaranteeing the confidentiality, integrity, and availability of outsourced data. A service level agreement (SLA) is usually signed between the cloud provider (CP) and the customer. For redundancy purposes, it is important to verify the CP's compliance with the data backup requirements in the SLA. A number of security mechanisms exist to check the integrity and availability of outsourced data. This task can be performed by the customer or be delegated to an independent entity that we refer to as the verifier. However, checking the availability of data introduces extra costs, which can discourage the customer from performing data verification too often. The interaction between the verifier and the CP can be captured using game theory in order to find an optimal data verification strategy. In this paper, we formulate this problem as a two-player non-cooperative game. We consider the case in which each type of data is replicated a number of times that can depend on a set of parameters including, among others, its size and sensitivity. We analyze the strategies of the CP and the verifier at the Nash equilibrium and derive the expected behavior of both players. Finally, we validate our model numerically on a case study and explain how we evaluate the parameters of the model.
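
To make the equilibrium logic concrete, here is a hedged 2x2 inspection-game sketch with invented payoffs (not the paper's parameterization): at the mixed Nash equilibrium, each side randomizes exactly so that the other is indifferent between its two pure strategies.

```python
# Illustrative inspection game between a verifier and a cloud provider (CP).
# All parameter values are assumptions chosen for the example.
b = 2.0   # CP's cost of actually maintaining the agreed backups
f = 10.0  # fine the CP pays if cheating is caught by a verification
c = 1.0   # verifier's cost of running one verification
L = 8.0   # verifier's loss if cheating goes unverified

# Mixed-strategy Nash equilibrium:
# - CP is indifferent between comply (-b) and cheat (-f * p_verify)
# - verifier is indifferent between verify (-c) and not (-L * q_cheat)
p_verify = b / f
q_cheat = c / L

print(f"verify with prob {p_verify:.2f}, CP cheats with prob {q_cheat:.3f}")

# Sanity check of both indifference conditions.
assert abs(-b - (-f * p_verify)) < 1e-12
assert abs(-c - (-L * q_cheat)) < 1e-12
```

Intuitively, a larger fine f lets the verifier verify less often, while a cheaper verification cost c deters cheating more strongly.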

ATS_CC16_0018: An Efficient Privacy-Preserving Ranked Keyword Search Method
          Cloud data owners prefer to outsource documents in encrypted form for privacy preservation, so it is essential to develop efficient and reliable ciphertext search techniques. One challenge is that the relationships between documents are normally concealed in the process of encryption, which leads to significant degradation of search accuracy. Also, the volume of data in data centers has experienced dramatic growth, making it even more challenging to design ciphertext search schemes that provide efficient and reliable online information retrieval over large volumes of encrypted data. In this paper, a hierarchical clustering method is proposed to support richer search semantics and to meet the demand for fast ciphertext search in a big data environment. The proposed hierarchical approach clusters the documents based on a minimum relevance threshold and then partitions the resulting clusters into sub-clusters until the constraint on the maximum cluster size is met. In the search phase, this approach achieves linear computational complexity even as the document collection grows exponentially. In order to verify the authenticity of search results, a structure called the minimum hash sub-tree is designed. Experiments have been conducted using a collection set built from IEEE Xplore. The results show that with a sharp increase of documents in the dataset, the search time of the proposed method increases linearly whereas the search time of the traditional method increases exponentially. Furthermore, the proposed method has an advantage over the traditional method in rank privacy and the relevance of retrieved documents.
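
A sketch of the clustering rule as described, under assumed vector representations and a cosine relevance measure (the paper's exact relevance function is not reproduced): documents join a cluster when their relevance to its seed meets the minimum threshold, and oversize clusters are recursively split with a stricter threshold.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold, max_size):
    # Greedy pass: join the first cluster whose seed is relevant enough.
    groups = []
    for d in docs:
        for g in groups:
            if cosine(d, g[0]) >= threshold:
                g.append(d)
                break
        else:
            groups.append([d])
    # Recursive pass: split oversize clusters with a stricter threshold.
    out = []
    for g in groups:
        if len(g) > max_size and threshold < 0.95:
            out.extend(cluster(g, threshold + 0.1, max_size))
        else:
            out.append(g)
    return out

docs = [(1, 0, 0), (0.9, 0.1, 0), (0, 1, 0), (0, 0.9, 0.2), (0, 0, 1)]
for i, g in enumerate(cluster(docs, threshold=0.6, max_size=2)):
    print("cluster", i, g)
```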

ATS_CC16_0019: Towards Building Forensics Enabled Cloud Through Secure Logging-as-a-Service
          Collection and analysis of various logs (e.g., process logs, network logs) are fundamental activities in computer forensics, so ensuring the security of activity logs is crucial to reliable forensic investigations. However, because of the black-box nature of clouds and the volatility and co-mingling of cloud data, providing cloud logs to investigators while preserving users' privacy and the integrity of the logs is challenging. Current secure logging schemes, which consider the logger trusted, cannot be applied in clouds, since cloud providers (the loggers) may collude with malicious users or investigators to alter the logs. In this paper, we analyze the threats to cloud users' activity logs considering collusion between cloud users, providers, and investigators. Based on this threat model, we propose Secure-Logging-as-a-Service (SecLaaS), which preserves the various logs generated by the activity of virtual machines running in clouds and ensures the confidentiality and integrity of such logs. Investigators or the court authority can access these logs only through RESTful APIs provided by SecLaaS, which ensures the confidentiality of the logs. The integrity of the logs is ensured by a hash-chain scheme and by proofs of past logs published periodically by the cloud providers. In prior research, we used two accumulator schemes, Bloom filter and RSA accumulator, to build the proofs of past logs. In this paper, we propose a new accumulator scheme, Bloom-Tree, which performs better than the other two accumulators in terms of time and space requirements.
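
The paper's Bloom-Tree is not reproduced here, but the two building blocks it improves on can be sketched briefly: a hash chain that makes past log entries tamper-evident, and a Bloom-filter accumulator published as the proof of past logs. All log contents below are invented examples.

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

# Hash chain: each entry commits to the previous head, so reordering,
# deleting, or altering past entries breaks every later link.
chain = [H(b"genesis")]
logs = [b"vm42: login alice", b"vm42: sudo su", b"vm42: scp out.tgz host"]
for entry in logs:
    chain.append(H(chain[-1] + entry))

# Bloom-filter accumulator: the provider periodically publishes this as a
# proof of past logs; afterwards it cannot deny that an inserted entry
# existed. False positives are possible, false negatives are not.
M, K = 1024, 3
bloom = bytearray(M // 8)

def positions(item):
    return [int.from_bytes(H(bytes([i]) + item)[:4], "big") % M
            for i in range(K)]

def add(item):
    for pos in positions(item):
        bloom[pos // 8] |= 1 << (pos % 8)

def member(item):
    return all(bloom[pos // 8] & (1 << (pos % 8)) for pos in positions(item))

for entry in logs:
    add(entry)

assert member(logs[1])                  # the court can check a produced log
assert not member(b"vm42: fabricated")  # a made-up entry is rejected
```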

ATS_CC16_0020: A Genetic Algorithm for Virtual Machine Migration in Heterogeneous Mobile Cloud Computing
          Mobile Cloud Computing (MCC) improves the performance of a mobile application by executing it on a resourceful cloud server, which can minimize execution time compared to a resource-constrained mobile device. Virtual Machine (VM) migration in MCC brings cloud resources closer to a user so as to further minimize the response time of an offloaded application. Such resource migration is very effective for interactive and real-time applications. However, the key challenge is to find the optimal cloud server for migration, i.e., the one offering the maximum reduction in computation time. In this paper, we propose a Genetic Algorithm (GA) based VM migration model, namely GAVMM, for heterogeneous MCC systems. In GAVMM, we take user mobility and the load of the cloud servers into consideration to optimize the effectiveness of VM migration. The goal of GAVMM is to select the optimal cloud server for a mobile VM and to minimize the total number of VM migrations, resulting in reduced task execution time. Additionally, we present a thorough numerical evaluation of the effectiveness of our proposed model compared to state-of-the-art VM migration policies.
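
A hedged GA sketch for this kind of placement problem (not GAVMM itself): a chromosome assigns each VM to a server, and fitness (lower is better) trades off latency, server overload, and the number of migrations away from the current placement. All parameters and fitness weights are assumptions.

```python
import random

servers = 4                  # cloud servers 0..3
vms = 6                      # one VM per mobile user
latency = [[random.uniform(1, 10) for _ in range(servers)]
           for _ in range(vms)]                           # user-to-server latency
current = [random.randrange(servers) for _ in range(vms)]  # current placement
CAP, MIG_COST = 2, 3.0       # per-server capacity; penalty per migration

def fitness(plan):           # lower is better
    lat = sum(latency[v][plan[v]] for v in range(vms))
    over = sum(max(0, plan.count(s) - CAP) for s in range(servers))
    migs = sum(a != b for a, b in zip(plan, current))
    return lat + 10 * over + MIG_COST * migs

pop = [[random.randrange(servers) for _ in range(vms)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness)
    nxt = pop[:6]                             # elitism
    while len(nxt) < len(pop):
        a, b = random.sample(pop[:15], 2)     # parents from the fitter half
        cut = random.randrange(1, vms)
        child = a[:cut] + b[cut:]             # one-point crossover
        if random.random() < 0.2:             # mutation
            child[random.randrange(vms)] = random.randrange(servers)
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
print("best plan:", best, "cost:", round(fitness(best), 2))
```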

ATS_CC16_0021: Online Resource Scheduling Under Concave Pricing for Cloud Computing
          With the booming growth of the cloud computing industry, computational resources are readily and elastically available to customers. In order to attract customers with various demands, most Infrastructure-as-a-Service (IaaS) cloud service providers offer several pricing strategies, such as pay-as-you-go, pay less per unit when you use more (the so-called volume discount), and pay even less when you reserve. The diverse pricing schemes among different IaaS service providers, or even within the same provider, form a complex economic landscape that nurtures the market of cloud brokers. By strategically scheduling multiple customers' resource requests, a cloud broker can fully take advantage of the discounts offered by cloud service providers. In this paper, we focus on how a broker may help a group of customers to fully utilize the volume discount pricing strategy offered by cloud service providers through cost-efficient online resource scheduling. We present a randomized online stack-centric scheduling algorithm (ROSA) and theoretically prove the lower bound of its competitive ratio. Our simulation shows that ROSA achieves a competitive ratio close to the theoretical lower bound under a special-case cost function. Trace-driven simulation using Google cluster data demonstrates that ROSA is superior to conventional online scheduling algorithms in terms of cost saving.
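
To see why a broker helps under volume discounts, here is a toy concave (tiered) price function, an assumption standing in for a provider's real tariff: bundling two customers' demand pushes more units into the cheaper tiers, so the aggregate bill is lower than the sum of separate bills.

```python
def cost(units):
    # Hypothetical tiered volume-discount pricing, concave in usage:
    # first 100 units at 1.00, next 300 at 0.80, everything beyond at 0.60.
    tiers = [(100, 1.00), (400, 0.80), (float("inf"), 0.60)]
    total, remaining, prev = 0.0, units, 0
    for limit, price in tiers:
        take = min(remaining, limit - prev)
        total += take * price
        remaining -= take
        prev = limit
        if remaining <= 0:
            break
    return total

# Broker effect: bundling two customers' demand exploits the discount.
a, b = 300, 300
print(cost(a) + cost(b), ">", cost(a + b))   # 520.0 > 460.0
```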

ATS_CC16_0022: Dynamic Bin Packing for On-Demand Cloud Resource Allocation
          Dynamic Bin Packing (DBP) is a variant of classical bin packing which assumes that items may arrive and depart at arbitrary times. Existing work on DBP generally aims to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, the MinTotal DBP problem, which targets minimizing the total cost of the bins used over time. It is motivated by the request dispatching problem arising in cloud gaming systems. We analyze the competitive ratios of modified versions of the commonly used First Fit, Best Fit, and Any Fit packing algorithms (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than μ + 1, where μ is the ratio of the maximum item duration to the minimum item duration. The competitive ratio of Best Fit packing is not bounded for any given μ. For First Fit packing, if all item sizes are smaller than 1/β of the bin capacity (where β > 1 is a constant), the competitive ratio has an upper bound of (β/(β−1))μ + 3β/(β−1) + 1. For the general case, the competitive ratio of First Fit packing has an upper bound of 2μ + 7. We also propose a Hybrid First Fit packing algorithm that achieves a competitive ratio no larger than (5/4)μ + 19/4 when μ is not known, and no larger than μ + 5 when μ is known.
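
A small simulation of First Fit for the MinTotal objective, with invented items: cost accrues for every time unit a bin holds at least one live item, which is exactly the quantity the competitive ratios above bound.

```python
# Each item is (arrival, departure, size); bin capacity is 1.0.
items = [
    (0, 4, 0.5), (0, 2, 0.4), (1, 5, 0.6), (2, 6, 0.3), (3, 7, 0.5),
]
bins = []                    # each bin: list of item ids ever placed in it

def load(b, t):
    # Capacity used during [t, t+1): departed items no longer count.
    return sum(items[i][2] for i in b if items[i][1] > t)

total_cost = 0
horizon = max(dep for _, dep, _ in items)
for t in range(horizon):
    for i, (arr, dep, size) in enumerate(items):
        if arr == t:         # First Fit: first open bin with enough room
            for b in bins:
                if load(b, t) + size <= 1.0:
                    b.append(i)
                    break
            else:
                bins.append([i])
    # Every bin that is non-empty during [t, t+1) accrues one cost unit.
    total_cost += sum(1 for b in bins if load(b, t) > 0)

print("bins opened:", len(bins), "total bin-time cost:", total_cost)
```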

ATS_CC16_0023: A Scalable Data Chunk Similarity based Compression Approach for Efficient Big Sensing Data Processing on Cloud
          Big sensing data is prevalent in both industry and scientific research applications where data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage, as it offers a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on the cloud has adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for such processing. Based on specific on-cloud data compression requirements, we propose a novel scalable data compression approach based on calculating the similarity among partitioned data chunks. Instead of compressing basic data units, compression is conducted over partitioned data chunks. To restore the original data sets, restoration functions and predictions are designed. MapReduce is used for the algorithm implementation to achieve extra scalability on the cloud. With experiments on real-world meteorological big sensing data on the U-Cloud platform, we demonstrate that the proposed compression approach based on data chunk similarity significantly improves data compression efficiency with affordable data accuracy loss.
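
A hedged, lossless sketch of the chunk-similarity idea (the paper additionally uses prediction-based restoration with bounded accuracy loss, which is not reproduced here): chunks sufficiently similar to an earlier reference chunk are stored as a reference id plus only the positions that differ.

```python
import zlib

def similarity(a, b):
    # Fraction of positions where two equal-length chunks agree.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def compress_chunks(chunks, threshold=0.9):
    refs, out = [], []
    for c in chunks:
        match = next((i for i, r in enumerate(refs)
                      if len(r) == len(c) and similarity(c, r) >= threshold),
                     None)
        if match is None:
            refs.append(c)   # new reference chunk, stored compressed
            out.append(("ref", len(refs) - 1, zlib.compress(c)))
        else:                # similar chunk: store only the differences
            delta = [(i, c[i]) for i in range(len(c)) if c[i] != refs[match][i]]
            out.append(("delta", match, delta))
    return refs, out

def restore(refs, out):
    result = []
    for kind, idx, payload in out:
        if kind == "ref":
            result.append(zlib.decompress(payload))
        else:
            buf = bytearray(refs[idx])
            for i, v in payload:
                buf[i] = v
            result.append(bytes(buf))
    return result

# Sensor readings that drift slowly chunk-to-chunk compress to tiny deltas.
chunks = [bytes([20] * 64), bytes([20] * 60 + [21] * 4), bytes([90] * 64)]
refs, packed = compress_chunks(chunks)
assert restore(refs, packed) == chunks
```

In the paper's setting, the per-chunk matching step is the natural map task and the reference bookkeeping the reduce task, which is where MapReduce supplies the extra scalability.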


