Thank you for your interest in this project!
This project is a secure storage system built on integrity auditing and ciphertext deduplication, providing a safe and reliable solution for file storage and management. The system adopts a lightweight integrity-audit method for deduplicated ciphertext to guarantee the security and integrity of user data.
- Secure Storage: all files are encrypted, keeping data secure in transit and at rest.
- Ciphertext Deduplication: the system deduplicates encrypted files, saving storage space and improving storage efficiency.
- Integrity Auditing: ciphertext integrity audits ensure that stored files remain intact and detect any tampering.
- Private Information Retrieval: through oblivious access, even the user's access pattern remains confidential, a guarantee stronger than the commonly discussed semantic security.
- User-Friendly Interface: an intuitive interface makes it easy to upload, download, and manage files.
The design is based on the OSU protocol from the following paper:
Enabling_Efficient_Secure_and_Privacy-Preserving_Mobile_Cloud_Storage.pdf
Ciphertext deduplication removes duplicate copies of encrypted data, reducing storage consumption while keeping the data secure. It works as follows:
- Encryption: before a file is uploaded, the system encrypts it to produce a ciphertext.
- Hash calculation: the system hashes the ciphertext to obtain a unique fingerprint. Files with identical content produce identical ciphertexts and therefore identical hashes.
- Deduplicated storage: before storing a new file, the system checks whether its hash already exists. If it does, the system keeps only a reference to the existing copy instead of storing the file again. This significantly reduces storage requirements while the data stays encrypted.
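As a minimal sketch of this flow: the snippet below derives the encryption key from the file content itself (convergent encryption, which makes identical plaintexts encrypt to identical ciphertexts) and uses the ciphertext hash for the duplicate check. The class and field names are illustrative, not the project's actual API, and an in-memory map stands in for the server-side index:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DedupSketch {

    // In-memory stand-in for the server-side index of stored ciphertexts.
    private final Map<String, byte[]> ciphertextIndex = new ConcurrentHashMap<>();

    // Derive the key from the plaintext itself (convergent encryption), so
    // identical files always encrypt to identical ciphertexts.
    static byte[] convergentKey(byte[] plaintext) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(plaintext);
    }

    // Deterministic encryption for this sketch (AES-ECB); a real system would
    // use a deterministic AEAD or a synthetic IV derived from the content.
    static byte[] encrypt(byte[] key, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, 0, 16, "AES"));
        return cipher.doFinal(plaintext);
    }

    // Upload: store the ciphertext only if its fingerprint is not yet known;
    // returns true when a new copy was actually stored.
    boolean upload(byte[] plaintext) throws Exception {
        byte[] ct = encrypt(convergentKey(plaintext), plaintext);
        String fingerprint = HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(ct));
        return ciphertextIndex.putIfAbsent(fingerprint, ct) == null;
    }
}
```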
Integrity auditing ensures that data is not tampered with during storage and transmission. Its main steps are:
- Data encryption: before upload, the system encrypts the file to protect it in transit.
- Hash recording: when a file is uploaded, the system computes its hash and stores it in the database for later integrity checks.
- Periodic audit: the system regularly audits stored files by recomputing each file's hash and comparing it with the stored value, which reveals whether the file has been tampered with.
- Alert mechanism: if a recomputed hash does not match the stored one, the system raises an alert so an administrator can investigate.
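A minimal sketch of the periodic check, assuming the hashes recorded at upload time are available as a map (in the project this would presumably run as a scheduled task; all names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Map;

public class IntegrityAuditSketch {

    // storedHashes maps each file to the hex SHA-256 recorded at upload time.
    static void auditAll(Map<Path, String> storedHashes)
            throws IOException, NoSuchAlgorithmException {
        for (Map.Entry<Path, String> entry : storedHashes.entrySet()) {
            String current = sha256Hex(entry.getKey());
            if (!current.equals(entry.getValue())) {
                // Alert hook: the real system would notify an administrator here.
                System.err.println("INTEGRITY ALERT: " + entry.getKey() + " has changed");
            }
        }
    }

    static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        return HexFormat.of().formatHex(digest);
    }
}
```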
```
cjml_server
├── bigdata
│   ├── gdbiddate-ldcia-server-v2
│   ├── gdbigdata-access-middle-server-v2
│   ├── gdbigdata-access-real-server-v2
│   ├── gdbigdata-audit-csp-server
│   ├── gdbigdata-audit-tpa-server
│   ├── gdbigdata-dupless-csp-server
│   ├── gdbigdata-dupless-ks-server
│   ├── gdbigdata-eureka-17000
│   ├── gdbigdata-gateway-17001
│   ├── gdbigdata-tempserver
│   └── gdbigdate-user-auth
└── desktop
```
The project mainly consists of the following modules:
- gdbiddate-ldcia-server-v2: data auditing and integrity verification.
- gdbigdata-access-middle-server-v2: intermediate server connecting the client to the actual data storage server.
- gdbigdata-access-real-server-v2: real data storage and retrieval.
- gdbigdata-audit-csp-server: integrity proofs and auditing on the storage-provider side.
- gdbigdata-audit-tpa-server: TPA (third-party auditor) functions.
- gdbigdata-dupless-csp-server: data deduplication and storage tasks.
- gdbigdata-dupless-ks-server: key management and secure key storage.
- gdbigdata-eureka-17000: service registration and discovery.
- gdbigdata-gateway-17001: API gateway.
- gdbigdata-tempserver: temporary server used for testing and development.
- gdbigdate-user-auth: user authentication.
- desktop: desktop client application.
Each module has its own configuration files, usually located in its src/main/resources directory:
- application.yml: global configuration.
- application-dev.yml: development environment configuration.
- application-prod.yml: production environment configuration.
Modify the appropriate configuration files as needed to suit your local or production environment.
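For illustration, the snippet below shows how a value defined in the active profile's configuration file reaches the code; the property name gdbigdata.storage.root and its default are hypothetical, not taken from the project:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StorageConfig {

    // Resolved from application.yml or the active application-{profile}.yml;
    // "gdbigdata.storage.root" is a hypothetical property with a default fallback.
    @Value("${gdbigdata.storage.root:/tmp/gdbigdata}")
    private String storageRoot;

    public String getStorageRoot() {
        return storageRoot;
    }
}
```

The profile is selected at startup, e.g. `java -jar app.jar --spring.profiles.active=prod`.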
| Server list to be deployed |
|---|
| gdbigdate-ldcia-server-v2 |
| gdbigdata-access-middle-server-v2 |
| gdbigdata-access-real-server-v2 |
| gdbigdata-user-auth |
| MySQL 8.0.27 |
| Redis |
https://github.com/826148267/cjml_server/tree/master/bigdata
```bash
docker run --name gdbd-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root \
  --mount type=bind,src=/home/docker/mysql/conf/my.cnf,dst=/etc/mysql/my.cnf \
  --mount type=bind,src=/media/Yang/DATA/mysql/datadir,dst=/var/lib/mysql \
  --restart=on-failure:3 -d mysql:8.0.27

docker run -p 30060:30060 --name myredis \
  -v /home/Yang/桌面/docker-redis/redis.conf:/etc/redis/redis.conf \
  -d redis redis-server /etc/redis/redis.conf
```
Dockerfile for gdbigdate-user-auth:

```dockerfile
# Base image
FROM openjdk:17-oracle
# Maintainer: name + email
MAINTAINER gzf<[email protected]>
# Build-time environment variables
#ENV
# Copy the jar into the image (first argument is the source, second the destination)
ADD gdbigdate-user-auth-1.0-SNAPSHOT.jar /gdbigdata/userauth/gdbigdate-user-auth-1.0-SNAPSHOT.jar
# Commands considered for container startup
#ENTRYPOINT ["mkdir","/gdbigdata/userauth/log"]
#ENTRYPOINT ["cd","/gdbigdata/userauth"]
# Working directory
#WORKDIR /gdbigdata/userauth
# Declare a volume: even if the operator forgets -v, the directory is mounted
# anonymously instead of being written into the container's storage layer
VOLUME ["/gdbigdata/userauth"]
# Port to publish (the counterpart of -p at run time)
EXPOSE 10003
# Command to run when the container starts (exec form needs no trailing "&";
# it would be passed to the application as a literal argument)
CMD ["java","-jar","/gdbigdata/userauth/gdbigdate-user-auth-1.0-SNAPSHOT.jar"]
```
Dockerfile for gdbiddate-ldcia-server-v2:

```dockerfile
# Base image
FROM openjdk:17-oracle
# Maintainer: name + email
MAINTAINER gzf<[email protected]>
# Build-time environment variables
#ENV
# Copy the jar and its properties file into the image
ADD target/ldcia-server-v2.jar /gdbigdata/ldcia/ldcia-server-v2.jar
ADD src/main/resources/a.properties /gdbigdata/ldcia/a.properties
# Commands considered for container startup
#ENTRYPOINT ["mkdir","/gdbigdata/ldcia/log"]
#ENTRYPOINT ["cd","/gdbigdata/ldcia"]
# Working directory
#WORKDIR /gdbigdata/ldcia
# Declare a volume: even if the operator forgets -v, the directory is mounted
# anonymously instead of being written into the container's storage layer
VOLUME ["/gdbigdata/ldcia"]
# Port to publish (the counterpart of -p at run time)
EXPOSE 10004
# Command to run when the container starts
CMD ["java","-jar","/gdbigdata/ldcia/ldcia-server-v2.jar"]
```
Dockerfile for gdbigdata-access-real-server-v2:

```dockerfile
# Base image
FROM openjdk:17-oracle
# Maintainer: name + email
MAINTAINER gzf<[email protected]>
# Build-time environment variables
#ENV
# Copy the jar into the image
ADD target/gdbigdata-access-real-server-v2-1.0-SNAPSHOT.jar /gdbigdata/accessrealserver/gdbigdata-access-real-server-v2-1.0-SNAPSHOT.jar
# Commands considered for container startup
#ENTRYPOINT ["mkdir","/gdbigdata/accessrealserver/log"]
#ENTRYPOINT ["cd","/gdbigdata/accessrealserver"]
# Working directory
#WORKDIR /gdbigdata/accessrealserver
# Declare a volume: even if the operator forgets -v, the directory is mounted
# anonymously instead of being written into the container's storage layer
VOLUME ["/gdbigdata/accessrealserver"]
# Port to publish (the counterpart of -p at run time)
EXPOSE 10001
# Command to run when the container starts
CMD ["java","--add-opens=java.base/java.lang=ALL-UNNAMED","-jar","/gdbigdata/accessrealserver/gdbigdata-access-real-server-v2-1.0-SNAPSHOT.jar"]
```
Dockerfile for gdbigdata-access-middle-server-v2:

```dockerfile
# Base image
FROM openjdk:17-oracle
# Maintainer: name + email
MAINTAINER gzf<[email protected]>
# Build-time environment variables
#ENV
# Copy the jar into the image
ADD target/gdbigdata-access-middle-server-v2-1.0-SNAPSHOT.jar /gdbigdata/accessmiddleserver/gdbigdata-access-middle-server-v2-1.0-SNAPSHOT.jar
# Commands considered for container startup
#ENTRYPOINT ["mkdir","/gdbigdata/accessmiddleserver/log"]
#ENTRYPOINT ["cd","/gdbigdata/accessmiddleserver"]
# Working directory
#WORKDIR /gdbigdata/accessmiddleserver
# Declare a volume: even if the operator forgets -v, the directory is mounted
# anonymously instead of being written into the container's storage layer
VOLUME ["/gdbigdata/accessmiddleserver"]
# Port to publish (the counterpart of -p at run time)
EXPOSE 10002
# Command to run when the container starts
CMD ["java","-jar","/gdbigdata/accessmiddleserver/gdbigdata-access-middle-server-v2-1.0-SNAPSHOT.jar"]
```
- Clone the project locally:
```bash
git clone https://github.com/826148267/cjml_server.git
```
- Enter the project directory:
```bash
cd gdbd-desktop
```
- Install dependencies: a Java 9 or later environment (required for modular programming; Java 17 is recommended).
- Start the system: after compiling, simply run the application.
- Register an account to start using the system.
- After logging in, users can upload files; the system encrypts and deduplicates them automatically.
- Users can download their own files at any time, and the system guarantees file integrity.
The method is roughly divided into three phases:
- Initialization Phase:
  - After initialization, the data owner sends the corresponding parameters to the storage service provider and the auditor.
- File Upload Phase:
  - The user preprocesses the original data and sends the preprocessed data to the storage service provider.
  - The storage service provider responds, and the data owner decides from the response whether tags need to be computed.
  - If so, the owner generates the tags; otherwise the tag-generation step is skipped and the owner directly produces the tag-conversion auxiliary material and the audit material, sending them to the storage service provider.
- Audit Phase:
  - The auditor sends the storage service provider a challenge on the target file.
  - The storage service provider answers the challenge, and the auditor verifies the response.
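The skeleton below shows only the message flow of the audit phase; plain SHA-256 hashing over the challenged blocks stands in for the scheme's actual tag-based proof, and all names are illustrative:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.List;
import java.util.Map;

public class AuditProtocolSketch {

    // Auditor (TPA): challenge a random sample of block indices of the target file.
    static List<Integer> challenge(int totalBlocks, int sampleSize) {
        return new SecureRandom().ints(sampleSize, 0, totalBlocks).boxed().toList();
    }

    // Storage provider (CSP): answer by aggregating the challenged blocks.
    // The real scheme aggregates integrity tags instead of hashing raw blocks.
    static byte[] respond(Map<Integer, byte[]> blocks, List<Integer> challenged) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (int i : challenged) {
            md.update(blocks.get(i));
        }
        return md.digest();
    }

    // Auditor: accept iff the response matches the expected value. In the real
    // scheme the auditor verifies via the tags without holding the data itself.
    static boolean verify(byte[] expected, byte[] response) {
        return MessageDigest.isEqual(expected, response);
    }
}
```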
In more detail, the method consists of the following steps:
- Initialization Phase:
  - Initialize the user keys, the public/private key pair used for auditing, and so on.
- File Upload Phase:
  - Generate a deduplication key and encrypt the original file with it.
  - Split and encode the encrypted file.
  - Encrypt the deduplication key, then upload the encrypted, split, and encoded file together with the encrypted deduplication key.
  - For the block data, compute and upload the integrity tags (if necessary), the tag-conversion auxiliary material, and the audit material to the storage service provider.
- Audit Phase:
  - The auditor sends the challenge and verifies the correctness of the response.
  - The storage service provider computes the response to the challenge from the data it holds.
Lightweight deduplication ciphertext integrity audit method.docx
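A sketch of the upload-phase block processing under stated assumptions: the ciphertext is split into fixed-size blocks and each block gets a position-bound tag. HMAC-SHA256 stands in for the scheme's actual integrity tags, and all names are illustrative:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UploadPhaseSketch {

    record TaggedBlock(int index, byte[] data, byte[] tag) {}

    // Split the ciphertext into fixed-size blocks and tag each one, binding
    // the tag to the block's position so blocks cannot be reordered unnoticed.
    static List<TaggedBlock> splitAndTag(byte[] ciphertext, int blockSize, byte[] tagKey)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(tagKey, "HmacSHA256"));
        List<TaggedBlock> blocks = new ArrayList<>();
        for (int i = 0, idx = 0; i < ciphertext.length; i += blockSize, idx++) {
            byte[] block = Arrays.copyOfRange(ciphertext, i,
                    Math.min(i + blockSize, ciphertext.length));
            mac.update(intToBytes(idx)); // bind the tag to the block position
            byte[] tag = mac.doFinal(block);
            blocks.add(new TaggedBlock(idx, block, tag));
        }
        return blocks;
    }

    static byte[] intToBytes(int v) {
        return new byte[] {(byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v};
    }
}
```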
The project is built with:
- Spring Boot
- Spring Data JPA
- Spring Cloud
- MySQL Connector
- Redis
- Swagger
- Fastjson
- JUnit
- Lombok
Contributions of any kind are welcome! Please submit issues, suggestions, or pull requests.
This project is licensed under the Apache License 2.0; see the LICENSE file for details.
If you have any questions, please contact [[email protected]] or visit [https://github.com/826148267].