In HDFS, data is stored across multiple racks in a data center in a cloud environment. Each rack is recorded in the main assignment table in rows and columns: the rows consist of multiple blocks, and the columns contain multiple nodes. However, large datasets pose a problem. The solution is data partitioning, i.e., breaking larger datasets into blocks. The data items stored in each block are linked by a block pointer, and each data block stores its replicas. One of the biggest challenges of keeping multiple replicas is data inconsistency. These issues, together with data integrity, can be addressed by using a data structure called the Merkle Hash Tree (MHT), which takes little storage space.
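As a rough illustration of this idea, the following sketch breaks a dataset into fixed-size blocks and keeps several copies of every block. It is a simplified model written in plain Python, not HDFS itself; the names `partition` and `replicate` and the toy block size are illustrative assumptions.

```python
# Toy values for illustration only; real HDFS blocks are far larger
# (e.g., 128 MB by default) and the default replication factor is 3.
BLOCK_SIZE = 8
REPLICATION_FACTOR = 3

def partition(dataset: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
    """Break a large dataset into fixed-size blocks."""
    return [dataset[i:i + block_size] for i in range(0, len(dataset), block_size)]

def replicate(blocks: list[bytes], factor: int = REPLICATION_FACTOR) -> list[list[bytes]]:
    """Keep `factor` copies of every block (one copy per node/rack in a real cluster)."""
    return [[block] * factor for block in blocks]

blocks = partition(b"a dataset too large to store as a single unit")
replicas = replicate(blocks)
print(len(blocks), len(replicas[0]))  # number of blocks, copies kept per block
```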
If a new key is found, an update is performed. In the MHT, every replica is indexed by the index key stored at the root node, and the TPA (third-party auditor) verifies the datasets with multiple replicas of a particular domain's attributes.
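A minimal sketch of such a check is shown below (Python, using SHA-256 as the hash; the name `merkle_root` is my own). It builds an MHT over the data blocks, keeps only the root as the index key, and lets an auditor such as the TPA detect an inconsistent replica by recomputing and comparing roots. This is a simplified assumption about how an MHT-based audit could work, not the exact protocol described here.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash primitive used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Build a Merkle Hash Tree bottom-up and return its root (the index key)."""
    level = [h(b) for b in blocks]              # leaf hashes, one per data block
    while len(level) > 1:
        if len(level) % 2 == 1:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The auditor stores only the root key of the original dataset and compares it
# against the root recomputed from each replica held in the cloud.
original = [b"block-0", b"block-1", b"block-2", b"block-3"]
root_key = merkle_root(original)

consistent_replica = list(original)
corrupted_replica = [b"block-0", b"block-1", b"CORRUPTED", b"block-3"]

print(merkle_root(consistent_replica) == root_key)  # True  -> replica matches
print(merkle_root(corrupted_replica) == root_key)   # False -> inconsistency detected
```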
The security model is implemented in a healthcare domain to keep the patient health record (PHR) of each patient secure by encrypting it with the ciphertext-policy attribute-based encryption (CP-ABE) scheme in the cloud. The PHR of each patient is maintained, and the patient has full access to monitor his/her record online after proper authentication. For example, a physician can access only a patient's sensitive information, such as the personal record or the medical record. A physician can also perform update and verification of the medical record at multiple replicas by constructing a layered model of an MHT that holds replicas of the original record at every sub-tree.
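To make the access-control side concrete, here is a minimal sketch that evaluates a CP-ABE-style access policy over a requester's attribute set. It uses plain boolean checks rather than real cryptography, and the attribute names and the `satisfies_policy` helper are illustrative assumptions only; in an actual CP-ABE scheme, the policy would be embedded in the PHR ciphertext and enforced by decryption. The policy shown mirrors the file-hierarchy relationship discussed next.

```python
def satisfies_policy(attributes: set[str]) -> bool:
    """Evaluate the access structure
    (patient 1 OR patient 2) AND (physician 1 AND physician 2)
    over the requester's attribute set."""
    patient_clause = "patient 1" in attributes or "patient 2" in attributes
    physician_clause = "physician 1" in attributes and "physician 2" in attributes
    return patient_clause and physician_clause

# Only a requester whose attributes satisfy the policy could decrypt the PHR.
print(satisfies_policy({"patient 1", "physician 1", "physician 2"}))  # True
print(satisfies_policy({"patient 2", "physician 1"}))                 # False
```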
Therefore, a relationship exists in the database as a file hierarchy, such as (patient 1 OR patient 2) diagnosed by any physician (physician 1 AND physician 2). Through public data auditing, the integrity of the datasets is checked by an external party, namely the TPA. Motivated by