|
|
|
|
|
# Video Manipulation Detection Using AI |
|
|
|
This project uses an AI-powered video analysis pipeline to detect potential video manipulations such as deepfakes and other visual tampering. It leverages the **DEtection TRansformer (DETR)** model for object detection and optical flow analysis to identify motion inconsistencies. The application is built with **Streamlit** and processes uploaded videos to generate a detailed report on their authenticity.
|
|
|
## Features |
|
- **Metadata Extraction**: Extracts and displays metadata from uploaded videos. |
|
- **Frame Extraction**: Extracts frames from videos at specified intervals for analysis. |
|
- **Object Detection**: Analyzes each frame to detect objects using the DETR model. |
|
- **Optical Flow Calculation**: Computes motion patterns to detect inconsistencies. |
|
- **Report Generation**: Compiles visual and motion analysis results into a comprehensive JSON report. |
|
- **Manipulation Detection**: Uses the report to assess the likelihood of video manipulation. |
|
|
|
## Technologies Used |
|
- **Streamlit**: Front-end web framework for creating the user interface. |
|
- **PyTorch**: Machine learning framework used for object detection (DETR). |
|
- **OpenCV**: Library for video and image processing (frame extraction, optical flow). |
|
- **Transformers**: Pre-trained DETR model for object detection. |
|
- **FFmpeg**: Tool for extracting metadata from videos and processing video files. |
|
|
|
## Installation |
|
|
|
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/video-manipulation-detection.git
   cd video-manipulation-detection
   ```

2. Install the Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Install [FFmpeg](https://ffmpeg.org/) and make sure it is available on your `PATH` (pip does not install the FFmpeg binary).

4. Run the Streamlit app:

   ```bash
   streamlit run app.py
   ```
|
|
|
## How to Use |
|
|
|
1. **Upload a Video**: Go to the "Analyze Video" section and upload a `.mp4` or `.mov` video file. |
|
2. **Start Analysis**: Click the **"Analyze Video"** button to start processing. |
|
3. **View Results**: After analysis, the app will display a score and details about the potential manipulation of the video. |
|
|
|
## Report Details |
|
The report includes the following information: |
|
- **Frame Analysis**: Per-frame detection counts, plus the maximum and average confidence reported by the object detection model.
|
- **Motion Analysis**: Calculated optical flow values to detect motion inconsistencies. |
|
- **Summary Stats**: Overall detection confidence, optical flow statistics, and manipulation detection score. |
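
A generated report might look roughly like the following; all field names and values here are illustrative, not the repo's exact schema:

```python
import json

# Hypothetical report shape combining the three sections described above.
report = {
    "frames": [
        {"index": 0, "detections": 3, "max_conf": 0.98, "avg_conf": 0.91},
        {"index": 30, "detections": 2, "max_conf": 0.95, "avg_conf": 0.88},
    ],
    "motion": [
        {"pair": [0, 30], "mean_flow": 0.42},
    ],
    "summary": {
        "avg_detection_conf": 0.895,
        "mean_flow": 0.42,
        "manipulation_score": 0.12,
    },
}
print(json.dumps(report, indent=2))
```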
|
|
|
## License |
|
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. |
|
|
|
## Acknowledgments |
|
- [Streamlit](https://streamlit.io/) |
|
- [PyTorch](https://pytorch.org/) |
|
- [Hugging Face Transformers](https://huggingface.co/) |
|
- [OpenCV](https://opencv.org/) |
|
- [FFmpeg](https://ffmpeg.org/) |
|
- This project is built as part of a research and development effort to analyze and detect video manipulations. |
|
|
|
## Contact |
|
- **Ayush Agarwal** |
|
AIML Enthusiast, VIT Bhopal University |
|
📧 [[email protected]](mailto:[email protected])

🔗 [LinkedIn](https://www.linkedin.com/in/ayush20039939)

🔗 [GitHub](https://github.com/your-github)
|
|
|
|
|
|