---
title: Baseer Self-Driving API
emoji: π
colorFrom: blue
colorTo: red
sdk: docker
app_port: 7860
pinned: true
license: mit
short_description: A RESTful API for an InterFuser-based self-driving model.
tags:
- computer-vision
- autonomous-driving
- deep-learning
- fastapi
- pytorch
- interfuser
- graduation-project
- carla
- self-driving
---
# Baseer Self-Driving API
| Service | Status |
|---|---|
| **API Status** | [adam-it-baseer-server.hf.space](https://adam-it-baseer-server.hf.space) |
| **Model** | [Adam-IT/Interfuser-Baseer-v1](https://huggingface.co/Adam-IT/Interfuser-Baseer-v1) |
| **Frameworks** | [FastAPI](https://fastapi.tiangolo.com/) · [PyTorch](https://pytorch.org/) |
## Project Description
**Baseer** is an advanced self-driving system that provides a robust, real-time API for autonomous vehicle control. This Space hosts the FastAPI server that acts as an interface to the fine-tuned **[Interfuser-Baseer-v1](https://huggingface.co/Adam-IT/Interfuser-Baseer-v1)** model.
The system is designed to take a live camera feed and vehicle measurements, process them through the deep learning model, and return actionable control commands and a comprehensive scene analysis.
---
## 🏗️ Architecture
This project follows a decoupled client-server architecture, where the model and the application are managed separately for better modularity and scalability.
```
+-----------+ +------------------------+ +--------------------------+
| | | | | |
| Client | -> | Baseer API (Space) | -> | Interfuser Model (Hub) |
|(e.g.CARLA)| | (FastAPI Server) | | (Private/Gated Weights) |
| | | | | |
+-----------+ +------------------------+ +--------------------------+
HTTP Loads Model Model Repository
Request
```
## ✨ Key Features
### 🧠 **Advanced Perception Engine**
- **Powered by:** The [Interfuser-Baseer-v1](https://huggingface.co/Adam-IT/Interfuser-Baseer-v1) model.
- **Focus:** High-accuracy traffic object detection and safe waypoint prediction.
- **Scene Analysis:** Real-time assessment of junctions, traffic lights, and stop signs.
### ⚡ **High-Performance API**
- **Framework:** Built with **FastAPI** for high throughput and low latency.
- **Stateful Sessions:** Manages multiple, independent driving sessions, each with its own tracker and controller state.
- **RESTful Interface:** Intuitive and easy-to-use API design.
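The stateful-session design can be sketched as a server-side registry keyed by session id. The class and variable names below are hypothetical; the real server's tracker and controller types differ:

```python
import uuid

class Session:
    """Per-session state; stands in for the real tracker/controller objects."""
    def __init__(self):
        self.tracker = {}     # e.g. object tracks carried across frames
        self.controller = {}  # e.g. controller internal state

sessions = {}  # session_id -> Session

def start_session() -> str:
    """Create an isolated session and return its id (mirrors POST /start_session)."""
    session_id = str(uuid.uuid4())
    sessions[session_id] = Session()
    return session_id

def end_session(session_id: str) -> None:
    """Drop a session's state (mirrors POST /end_session)."""
    sessions.pop(session_id, None)
```

Because each session owns its own tracker and controller, concurrent clients never share mutable state.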
### **Comprehensive Outputs**
- **Control Commands:** `steer`, `throttle`, `brake`.
- **Scene Analysis:** Probabilities for junctions, traffic lights, and stop signs.
- **Predicted Waypoints:** The model's intended path for the next 10 steps.
- **Visual Dashboard:** A generated image that provides a complete, human-readable overview of the current state.
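For illustration only, a `/run_step` response might be shaped like the following. The field names here are hypothetical; consult the `/docs` endpoint on the Space for the actual schema:

```json
{
  "control": {"steer": 0.02, "throttle": 0.45, "brake": 0.0},
  "scene": {"junction_prob": 0.08, "traffic_light_prob": 0.91, "stop_sign_prob": 0.03},
  "waypoints": [[0.4, 0.0], [0.9, 0.1], [1.3, 0.1]],
  "dashboard_b64": "<base64-encoded dashboard image>"
}
```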
---
## How to Use
Interact with the API by making HTTP requests to its endpoints.
### 1. Start a New Session
This will initialize a new set of tracker and controller instances on the server.
```bash
curl -X POST "https://adam-it-baseer-server.hf.space/start_session"
```
Response: `{"session_id": "your-new-session-id"}`
### 2. Run a Simulation Step
Send the current camera view and vehicle measurements to be processed.
```bash
curl -X POST "https://adam-it-baseer-server.hf.space/run_step" \
-H "Content-Type: application/json" \
-d '{
"session_id": "your-new-session-id",
"image_b64": "your-base64-encoded-bgr-image-string",
"measurements": {
"pos_global": [105.0, -20.0],
"theta": 1.57,
"speed": 5.5,
"target_point": [10.0, 0.0]
}
}'
```
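In Python, the request body above can be assembled as follows (standard library only). Whether the server expects a raw BGR pixel buffer or an encoded JPEG/PNG in `image_b64` is an assumption to verify against the server code, and the field comments are inferred from the example values, not documented:

```python
import base64
import json

def encode_image(image_bytes: bytes) -> str:
    """Base64-encode image bytes for the `image_b64` field."""
    return base64.b64encode(image_bytes).decode("ascii")

payload = {
    "session_id": "your-new-session-id",
    "image_b64": encode_image(b"\x00\x01\x02"),  # placeholder bytes, not a real frame
    "measurements": {
        "pos_global": [105.0, -20.0],  # global x, y position
        "theta": 1.57,                 # heading (1.57 ~ pi/2 suggests radians)
        "speed": 5.5,                  # current speed
        "target_point": [10.0, 0.0],   # goal point, presumably in vehicle frame
    },
}
body = json.dumps(payload)  # send as the POST body with Content-Type: application/json
```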
### 3. End the Session
This will clean up the session data from the server.
```bash
curl -X POST "https://adam-it-baseer-server.hf.space/end_session?session_id=your-new-session-id"
```
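Putting the three steps together, a minimal client sketch in Python (standard library only; the endpoint paths and field names are taken from the examples above, everything else is an assumption):

```python
import json
from urllib import request

BASE = "https://adam-it-baseer-server.hf.space"

def step_body(session_id: str, image_b64: str, measurements: dict) -> dict:
    """Build the JSON body for POST /run_step."""
    return {"session_id": session_id,
            "image_b64": image_b64,
            "measurements": measurements}

def post(path: str, body=None) -> dict:
    """POST an optional JSON body to the API and decode the JSON response."""
    data = json.dumps(body).encode("utf-8") if body is not None else b""
    req = request.Request(BASE + path, data=data,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)

def run_episode(frames):
    """Drive one session over an iterable of (image_b64, measurements) pairs."""
    session_id = post("/start_session")["session_id"]
    try:
        for image_b64, measurements in frames:
            yield post("/run_step", step_body(session_id, image_b64, measurements))
    finally:
        post(f"/end_session?session_id={session_id}")
```

Since `run_episode` is a generator, the `finally` clause ends the session when iteration finishes or the generator is closed, so server-side state is not leaked.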
---
## 📡 API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Landing page with API status. |
| `/docs` | GET | Interactive API documentation (Swagger UI). |
| `/start_session` | POST | Initializes a new driving session. |
| `/run_step` | POST | Processes a single frame and returns control commands. |
| `/end_session` | POST | Terminates a specific session. |
| `/sessions` | GET | Lists all currently active sessions. |
---
## 🎯 Intended Use Cases & Limitations
### ✅ Optimal Use Cases
- Simulating driving in CARLA environments.
- Research in end-to-end autonomous driving.
- Testing perception and control modules in a closed-loop system.
- Real-time object detection and trajectory planning.
### ⚠️ Limitations
- **Simulation-Only:** Trained exclusively on CARLA data. Not suitable for real-world driving.
- **Vision-Based:** Relies on a single front-facing camera and has inherent blind spots.
- **No LiDAR:** Lacks the robustness of sensor fusion in adverse conditions.
---
## 🛠️ Development
This project is part of a graduation thesis in Artificial Intelligence.
- **Deep Learning:** PyTorch
- **API Server:** FastAPI
- **Image Processing:** OpenCV
- **Scientific Computing:** NumPy
## Contact
For inquiries or support, please use the **Community** tab in this Space or open an issue in the project's GitHub repository (if available).
---
**Developed by:** Adam-IT
**License:** MIT