Fine-Grained Updates with Consignment Auditing of Dynamic Big Data Storage on Cloud

IJCSEC Front Page

Users may need to split large-scale datasets into smaller chunks before uploading them to the cloud in order to preserve privacy; consequently, efficient processing of small updates is essential in big data applications. Data integrity verification is delegated to a trusted third party, a process known as Third-Party Auditing (TPA). This work investigates the problem of integrity verification for big data storage in the cloud, with better support for small dynamic updates. It provides fine-grained data updates that fully support authorized auditing and fine-grained update requests, utilizing a flexible data segmentation strategy and a Ranked Merkle Hash Tree (RMHT). An additional authorization process is introduced among the three participating parties: the client, the Cloud Storage Server (CSS), and the Third-Party Auditor (TPA). Auditing can also be performed for a consignment of files, where the authorized auditor verifies a set of files at a time.
Index Terms: Third-Party Auditing, Fine-Grained Updates, Consignment Auditing.
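The RMHT named in the abstract can be pictured as an ordinary Merkle hash tree whose internal nodes additionally store a rank (here, the number of leaf blocks beneath them), so that a verifier can check both a block's hash and its position from a short authentication path. The following is a minimal sketch, assuming SHA-256 as the hash function and whole blocks as leaves; the actual scheme works over variable-sized segments and combines the tree with BLS signatures:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    def __init__(self, digest, rank, left=None, right=None):
        self.digest = digest   # hash value stored at this node
        self.rank = rank       # number of leaf blocks under this node
        self.left = left
        self.right = right

def build_rmht(blocks):
    """Build a Ranked Merkle Hash Tree over a list of blocks."""
    nodes = [Node(h(b), 1) for b in blocks]
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes), 2):
            if i + 1 < len(nodes):
                l, r = nodes[i], nodes[i + 1]
                nxt.append(Node(h(l.digest + r.digest), l.rank + r.rank, l, r))
            else:
                nxt.append(nodes[i])   # odd node is carried up unchanged
        nodes = nxt
    return nodes[0]

def auth_path(root, index):
    """Collect sibling (digest, rank, side) entries from root down to leaf `index`."""
    path, node = [], root
    while node.left is not None:
        if index < node.left.rank:
            path.append((node.right.digest, node.right.rank, 'R'))
            node = node.left
        else:
            path.append((node.left.digest, node.left.rank, 'L'))
            index -= node.left.rank
            node = node.right
    return list(reversed(path))

def verify(root_digest, total_rank, block, path):
    """Recompute the root from a block and its path; check hash and rank."""
    digest, rank = h(block), 1
    for sib_digest, sib_rank, side in path:
        if side == 'R':
            digest = h(digest + sib_digest)
        else:
            digest = h(sib_digest + digest)
        rank += sib_rank
    return digest == root_digest and rank == total_rank
```

Because each node carries its rank, the verifier can detect not only a tampered block but also a proof constructed for the wrong leaf position, which is what makes the structure suitable for verifiable dynamic updates.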
As the number of internet users grows every year and more work is digitalized, the volume of data on the internet keeps increasing. Big Data is a term that refers to large datasets consisting of both structured and unstructured data. Unstructured data is information that does not fit into a row-column database. Social networks and media such as Facebook, Twitter, LinkedIn, Wikipedia, and YouTube are some of the sources producing large amounts of unstructured data. Big Data cannot be handled by existing database applications; NoSQL databases such as MongoDB, MarkLogic, Apache Hadoop, Apache Cassandra, and IBM pureXML are used to analyze unstructured data.

Existing public auditing schemes can support full data dynamics, but their models discuss only insertions, deletions, and modifications of fixed-size blocks. In particular, in BLS-signature-based schemes with 80-bit security, the size of each data block is restricted by the 160-bit prime group order, as each block is segmented into a fixed number of 160-bit sectors. Despite their remarkable advantage of shorter integrity proofs, such designs are inherently unsuitable for variable-sized blocks. They provide insertion, deletion, or modification of one or multiple fixed-size blocks, which we call 'coarse-grained' updates. Although support for coarse-grained updates gives an integrity verification scheme basic scalability, data updating operations in practice can be far more complicated.

This work presents a modification that dramatically reduces communication overheads for the verification of small updates. Theoretical analysis and experimental results demonstrate that the scheme offers not only enhanced security and flexibility but also significantly lower overheads for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
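The communication saving behind fine-grained updates can be seen with a toy contrast: a coarse-grained update re-uploads an entire block, while a fine-grained update transmits only the modified byte range plus a small locator. The function names and the 8-byte (index, offset) header cost below are illustrative assumptions, not details of the scheme itself:

```python
# Toy contrast between coarse-grained updates (re-upload a whole block)
# and fine-grained updates (send only the modified byte range).
# apply_coarse/apply_fine and the 8-byte header are illustrative
# assumptions, not taken from the actual auditing scheme.

def apply_coarse(blocks, idx, new_block):
    """Coarse-grained: replace the whole block; the entire block travels."""
    blocks[idx] = new_block
    return len(new_block)          # bytes sent to the cloud server

def apply_fine(blocks, idx, offset, patch):
    """Fine-grained: splice a small patch into one block in place."""
    b = blocks[idx]
    blocks[idx] = b[:offset] + patch + b[offset + len(patch):]
    return len(patch) + 8          # patch plus a small (index, offset) header

file_blocks = [b'x' * 4096, b'z' * 4096]          # two 4 KiB blocks
cost_coarse = apply_coarse(file_blocks, 0, b'y' * 4096)
cost_fine = apply_fine(file_blocks, 1, 100, b'EDIT')
```

For a 4-byte edit inside a 4 KiB block, the coarse-grained path transmits 4096 bytes while the fine-grained path transmits on the order of a dozen; with many frequent small updates, as in social media or transaction logs, this gap dominates the total communication cost.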

