
Extended Functional Minimum-Storage Cooperative Regenerating Codes for the Cloud Bandwidth Problem


Abstract
Cloud storage systems protect data from corruption by storing redundant data that can tolerate storage failures, and lost data must be repaired when a storage node fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes, thereby reducing bandwidth consumption. Previous research implemented a practical Data Integrity Protection (DIP) scheme for regenerating-coding-based cloud storage: it builds on Functional Minimum-Storage Regenerating (FMSR) codes to construct FMSR-DIP codes, which allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The remaining problem is to optimize bandwidth consumption when repairing multiple failures. Cooperative repair of multiple failures can further reduce bandwidth consumption when several failures are repaired together.
Key Terms: Cloud computing, Minimum storage, Bandwidth consumption.
I. Introduction
Several trends are opening up the era of Cloud Computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the “Software as a Service” (SaaS) computing architecture, are transforming data centers into pools of computing services on a huge scale. Meanwhile, increasing network bandwidth and reliable yet flexible network connections make it possible for clients to subscribe to high-quality services from data and software that reside solely on remote data centers. Although envisioned as a promising service platform for the Internet, this new data storage paradigm in the “Cloud” brings about many challenging design issues that have a profound influence on the security and performance of the overall system. One of the biggest problems in the existing method is that it consumes more bandwidth when repairing multiple failures.
To overcome this problem, we use a functional minimum-bandwidth cooperative regenerating method that minimizes bandwidth consumption. Considering the large size of the outsourced electronic data and the client’s constrained resource capability, the core of the problem can be generalized as: how can the client perform periodic integrity verifications efficiently without keeping a local copy of the data files?
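To make the bandwidth comparison concrete, the following is a minimal sketch using standard regenerating-code formulas: here M is the file size, each node stores α, k nodes suffice to decode, d surviving nodes assist a repair, and t nodes are repaired together. The minimum-storage (MSR) expressions follow Dimakis et al. (listed in the references), while the cooperative (MSCR) expression is the form reported in the cooperative regenerating literature; the parameter values below are illustrative assumptions rather than results of this work.

\[
\alpha_{\mathrm{MSR}} = \frac{M}{k}, \qquad
\gamma_{\mathrm{MSR}} = \frac{d\,M}{k\,(d - k + 1)}, \qquad
\gamma_{\mathrm{MSCR}} = \frac{(d + t - 1)\,M}{k\,(d + t - k)}
\]

With M = 1, k = 4, d = 6 and t = 2 failures, two independent MSR repairs transfer 2 x 6/(4 x 3) = 1.0 M in total, whereas one cooperative repair of both failures transfers 2 x 7/(4 x 4) = 0.875 M, a saving of about 12.5% in repair traffic for this setting.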
In order to solve the problem of data integrity checking, many schemes are proposed under different systems and security models. In all these works, great efforts are made to design solutions that meet various requirements: high scheme efficiency, stateless verification, unbounded use of queries and irretrievability of data, etc. Considering the role of the verifier in the model, all the schemes presented before fall into two categories: private auditability and public auditability. Although schemes with private auditability can achieve higher scheme efficiency, public auditability allows any one, not just the client (data owner), to challenge the cloud server for correctness of data storage while keeping no private information. Then, clients are able to delegate the evaluation of the service performance to an independent Third Party Auditor (TPA), without devotion of their computation resources. In the cloud, the clients themselves are unreliable or may not be able to afford the overhead of performing frequent integrity checks. Thus, for practical use, it seems more rational to equip the verification protocol with public auditability, which is expected to play a more important role in achieving economies of scale for Cloud Computing. Moreover, for efficiency consideration, the outsourced data themselves should not be required by the verifier for the verification purpose.
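To illustrate the spot-checking style of integrity verification described above, here is a minimal sketch in Python. It is not the FMSR-DIP construction or a full proof-of-retrievability protocol: the verifier here simply keeps one keyed MAC per block (real schemes keep far less state), and all function names and parameters (split_blocks, tag_blocks, BLOCK_SIZE, and so on) are hypothetical choices for this example.

import hmac
import hashlib
import os
import random

BLOCK_SIZE = 4096  # hypothetical block size chosen for illustration

def split_blocks(data, block_size=BLOCK_SIZE):
    # Split the outsourced file into fixed-size blocks (last block may be shorter).
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def tag_blocks(blocks, key):
    # Client-side preprocessing before outsourcing: one keyed MAC per block,
    # bound to the block index so blocks cannot be swapped or reordered.
    return [hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def make_challenge(num_blocks, sample_size):
    # Verifier picks a random subset of block indices to audit.
    return random.sample(range(num_blocks), sample_size)

def server_respond(stored_blocks, indices):
    # Server returns the challenged blocks (an honest server returns them intact).
    return [stored_blocks[i] for i in indices]

def verify(key, indices, returned_blocks, tags):
    # Verifier recomputes each MAC and compares it with the stored tag;
    # any mismatch indicates corruption of the audited block.
    for i, blk in zip(indices, returned_blocks):
        expected = hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

if __name__ == "__main__":
    key = os.urandom(32)                       # client's secret MAC key
    data = os.urandom(16 * BLOCK_SIZE)         # stand-in for an archived file
    blocks = split_blocks(data)
    tags = tag_blocks(blocks, key)             # kept by the client (or a trusted party)
    indices = make_challenge(len(blocks), 4)   # audit 4 randomly chosen blocks
    answer = server_respond(blocks, indices)
    print("audit passed:", verify(key, indices, answer, tags))

In a public-auditability setting, the per-block MACs would be replaced by publicly verifiable homomorphic authenticators so that a TPA can check correctness without the secret key or the data; that extension is beyond this sketch.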

References:

  1. G. Ateniese, R. Burns, R. Curtmola, J. Herring, O. Khan, L. Kissner, Z. Peterson, and D. Song, “Remote Data Checking Using Provable Data Possession,” ACM Trans. Information and System Security, vol. 14, article 12, May 2011.
  2. K. Bowers, A. Juels, and A. Oprea, “Proofs of Retrievability: Theory and Implementation,” Proc. ACM Workshop Cloud Computing Security (CCSW ’09), 2009.
  3. K. Bowers, A. Juels, and A. Oprea, “HAIL: A High-Availability and Integrity Layer for Cloud Storage,” Proc. 16th ACM Conf. Computer and Comm. Security (CCS ’09), 2009.
  4. A. Dimakis, P. Godfrey, Y. Wu, M. Wainwright, and K. Ramchandran, “Network Coding for Distributed Storage Systems,” IEEE Trans. Information Theory, vol. 56, no. 9, pp. 4539-4551, Sept. 2010.
  5. H. Krawczyk, “Cryptographic Extraction and Key Derivation: The HKDF Scheme,” Proc. 30th Ann. Conf. Advances in Cryptology (CRYPTO ’10), 2010.
  6. H. Shacham and B. Waters, “Compact Proofs of Retrievability,” Proc. 14th Int’l Conf. Theory and Application of Cryptology and Information Security: Advances in Cryptology, J. Pieprzyk, ed., pp. 90-107, 2008.
  7. B. Schroeder, S. Damouras, and P. Gill, “Understanding Latent Sector Errors and How to Protect against Them,” Proc. USENIX Conf. File and Storage Technologies (FAST ’10), Feb. 2010.
  8. M. Vrable, S. Savage, and G. Voelker, “Cumulus: Filesystem Backup to the Cloud,” Proc. USENIX Conf. File and Storage Technologies (FAST ’09), 2009.
  9. A. Wildani, T.J.E. Schwarz, E.L. Miller, and D.D. Long, “Protecting Against Rare Event Failures in Archival Systems,” Proc. IEEE Int’l Symp. Modeling, Analysis and Simulation of Computer and Telecomm. Systems (MASCOTS ’09), 2009.
  10. E. Naone, “Are We Safeguarding Social Data?” http://www.technologyreview.com/blog/editors/22924/, Feb. 2009.
  11. J.S. Plank, “A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-Like Systems,” Software - Practice & Experience, vol. 27, no. 9, pp. 995-1012, Sept. 1997.
  12. M.O. Rabin, “Efficient Dispersal of Information for Security, Load Balancing, and Fault Tolerance,” J. ACM, vol. 36, no. 2, pp. 335-348, Apr. 1989.
  13. I. Reed and G. Solomon, “Polynomial Codes over Certain Finite Fields,” J. Soc. Industrial and Applied Math., vol. 8, no. 2, pp. 300-304, 1960.
  14. B. Schroeder and G.A. Gibson, “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” Proc. Fifth USENIX Conf. File and Storage Technologies (FAST ’07), Feb. 2007.