Smart Inventory Advisor for Retail Sales (Invense)
This repository contains a machine learning model trained to predict daily sales for retail products and provide actionable inventory recommendations. The project was developed by the Nexus Crew team as a datathon entry.
Model Details
Model Description
The core of this project is a Random Forest Regressor (scikit-learn) trained to forecast the number of units sold (Sales) for a given product based on a variety of features. The primary output is a "Store Owner's Action Plan" that identifies products at risk of stocking out and recommends a reorder quantity.
- Developed by: jerewy (NexusCrew)
- Model type: RandomForestRegressor
- Language(s) (NLP): en
- License: MIT
- Finetuned from model: Not applicable; this model was trained from scratch.
Model Sources
- Repository: https://github.com/jerewy/datathon_nexus_crew
Uses
Direct Use
This model is intended to be used as a decision-support tool for small to medium-sized retail business owners. It helps answer two key questions:
- Which products should I focus on restocking right now?
- How many units of each product should I order?
The primary function in the notebook, get_smart_inventory_recommendations, automates this process.
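For illustration, a minimal sketch of what such a recommendation step might look like is given below. The function name matches the notebook, but the column names, the feature-list argument, and the reorder heuristic (cover a target number of days of predicted demand minus stock on hand) are assumptions, not the notebook's exact implementation.

import pandas as pd

def get_smart_inventory_recommendations(df, model, feature_cols, days_to_cover=7):
    """Flag products at risk of stocking out and suggest a reorder quantity.

    Assumes `df` already contains the encoded and scaled feature columns used
    during training (`feature_cols`) plus a raw 'Inventory' column; the reorder
    rule below is an illustrative heuristic, not the notebook's exact logic.
    """
    out = df.copy()
    out['Predicted Daily Sales'] = model.predict(out[feature_cols])
    # Guard against division by zero for items predicted to sell nothing
    daily = out['Predicted Daily Sales'].clip(lower=1)
    out['Days of Stock Left'] = out['Inventory'] / daily
    out['Recommended Reorder Qty'] = (daily * days_to_cover - out['Inventory']).clip(lower=0).round()
    # Keep only products predicted to run out within the coverage window, most urgent first
    at_risk = out[out['Days of Stock Left'] < days_to_cover]
    return at_risk.sort_values('Days of Stock Left')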
Out-of-Scope Use
This model is not suitable for real-time stock market prediction or long-term (multi-year) financial forecasting. Its predictions are based on the patterns in the provided dataset and may not be accurate if underlying market conditions change drastically. The model should not be used for making automated financial decisions without human oversight.
Bias, Risks, and Limitations
The main limitation of this model is the "predictability ceiling" of the source data. The final model achieves a Testing R² score of 0.34, which means that while it successfully captures the predictable patterns, about 66% of the variance in daily sales is due to unpredictable, random factors not present in the data.
- Data Dependency: The model's performance is entirely dependent on the quality and patterns of the training data. It assumes future trends will resemble historical ones.
- Synthetic Data: The model was trained on a synthetic dataset. Performance on real-world, noisy retail data will likely differ and may require further tuning.
Recommendations
Users should be aware that the model provides a forecast, not a guarantee. The "Recommended Reorder Qty" should be treated as a strong, data-driven suggestion, but business owners should still apply their own domain knowledge, especially when considering external factors not included in the dataset (e.g., upcoming local events, new competitors).
How to Get Started with the Model
Use the code below to load the saved model and preprocessors to make predictions on new data.
import pickle
import pandas as pd
# Load the trained model, scaler, and encoders
with open('sales_model.pkl', 'rb') as f:
    model = pickle.load(f)
with open('scaler.pkl', 'rb') as f:
    scaler = pickle.load(f)
with open('label_encoders.pkl', 'rb') as f:
    label_encoders = pickle.load(f)
# --- Example: Prepare a single row of new data ---
# NOTE: This must have the same structure as the training data
new_data = pd.DataFrame([{
    'Inventory': 200, 'Orders': 50, 'Price': 35.0, 'Discount': 10,
    'Competitor Price': 33.0, 'Promotion': 1, 'Category': 'Groceries',
    'Region': 'North', 'Weather': 'Sunny', 'Seasonality': 'Spring',
    'DayOfWeek': 2, 'Month': 4, 'Day': 15
}])
# Apply the same preprocessing
for col in ['Category', 'Region', 'Weather', 'Seasonality']:
    new_data[col] = label_encoders[col].transform(new_data[col])
# Scale the features
new_data_scaled = scaler.transform(new_data)
# Make a prediction
predicted_sales = model.predict(new_data_scaled)
print(f"Predicted Sales: {predicted_sales[0]:.2f} units")
Training Details
Training Data
The model was trained on the "Retail Store Inventory" dataset, which contains over 73,000 daily records across multiple stores and products. The data is synthetic but realistically models retail sales patterns.
Training Procedure
Preprocessing
The training data was preprocessed as follows:
- Feature Engineering: DayOfWeek, Month, and Day were extracted from the Date column.
- Label Encoding: Categorical features (Category, Region, Weather, Seasonality) were converted to numerical values.
- Standard Scaling: All numerical features were scaled to have a mean of 0 and a standard deviation of 1.
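A minimal sketch of these three steps, assuming the raw column names shown in the "How to Get Started" example and a CSV file name chosen for illustration, could look like this:

import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

df = pd.read_csv('retail_store_inventory.csv')  # file name assumed for illustration

# Feature engineering: calendar features derived from the Date column
df['Date'] = pd.to_datetime(df['Date'])
df['DayOfWeek'] = df['Date'].dt.dayofweek
df['Month'] = df['Date'].dt.month
df['Day'] = df['Date'].dt.day
df = df.drop(columns=['Date'])

# Label encoding: one encoder per categorical column, kept for use at inference time
label_encoders = {}
for col in ['Category', 'Region', 'Weather', 'Seasonality']:
    le = LabelEncoder()
    df[col] = le.fit_transform(df[col])
    label_encoders[col] = le

# Standard scaling: zero mean, unit variance for the feature matrix (target excluded)
X = df.drop(columns=['Sales'])
y = df['Sales']
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)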
Training Hyperparameters
The model was tuned using RandomizedSearchCV to find the optimal settings. The best-performing hyperparameters were:
- n_estimators: 200
- min_samples_split: 10
- min_samples_leaf: 4
- max_features: 'sqrt'
- max_depth: 20
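A sketch of how this search could be set up is shown below, continuing from the preprocessing sketch above. The candidate grid, number of iterations, and cross-validation settings are illustrative assumptions; only the best values listed above come from the notebook.

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# 80/20 split, matching the evaluation setup described below
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42
)

# Candidate grid is an assumption; only the best values above are from the notebook
param_distributions = {
    'n_estimators': [100, 200, 300],
    'max_depth': [10, 20, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
    'max_features': ['sqrt', 'log2'],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions=param_distributions,
    n_iter=20,        # number of sampled configurations (assumed)
    cv=3,
    scoring='r2',
    random_state=42,
    n_jobs=-1,
)
search.fit(X_train, y_train)
model = search.best_estimator_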
Evaluation
Testing Data, Factors & Metrics
- Testing Data: The model was evaluated on a 20% holdout set from the original data, created using a standard train_test_split with random_state=42.
- Metrics: The primary evaluation metric was R-squared (R²), to measure the proportion of variance in sales that the model could predict. Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were also used to measure prediction error in terms of units sold.
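Assuming the holdout split and fitted model from the sketches above, these metrics can be reproduced along the following lines:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_pred = model.predict(X_test)

r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_pred)

print(f"Testing R²: {r2:.4f}")
print(f"MSE: {mse:.2f}, RMSE: {rmse:.2f} units, MAE: {mae:.2f} units")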
Results
| Metric | Value |
|---|---|
| Training R² | 0.6324 |
| Testing R² | 0.3402 |
| MSE | 7811.96 |
| RMSE | 88.39 |
| MAE | 69.27 |
Environmental Impact
- Hardware Type: Not tracked (likely trained on Google Colab standard CPU instances).
- Hours used: Not tracked.
- Cloud Provider: Not tracked.
- Compute Region: Not tracked.
- Carbon Emitted: Not tracked.
Model Card Authors
Hernicksen Satria, Jeremy Wijaya, Lawryan Andrew Darisang (NexusCrew)
Model Card Contact
Please open an issue in the repository for questions or feedback.