Question about the columns
Hi, thanks for this great dataset!
I am trying to understand it better, and I can't seem to find how these columns are calculated:
- MN_Upper: Minopy Bands Upper.
- MN_Lower: Minopy Bands Upper.
- Trendline: Calculated trendline value.
Thanks🙏
Hi! Thanks for reaching out and for your kind words about the dataset. I'm glad you're finding it useful! I'd be happy to explain how the `MN_Upper`, `MN_Lower`, and `Trendline` columns are calculated, based on the code used to generate them.
### Trendline Calculation
The `Trendline` represents a linear regression (best-fit line) over the `close` prices, showing the overall direction of the trend.

**Algorithm:**
- X-axis: the index of the DataFrame (`np.arange(len(data))`), which represents time steps.
- Y-axis: the `close` price values.
- Linear regression: we fit a 1st-degree polynomial (a straight line) using `np.polyfit(x, y, 1)`, which finds the best-fitting slope and intercept. Then we evaluate the polynomial with `np.polyval(coefficients, x)` to generate the trendline.

The result is a straight best-fit line tracking the overall market trend.
Example code:
```python
import numpy as np

column = "close"  # price column used for the fit

x = np.arange(len(data))  # time-step index
y = data[column].values   # close prices

# Fit a straight line (1st-degree polynomial) and evaluate it at every index
coefficients = np.polyfit(x, y, 1)
trendline = np.polyval(coefficients, x)
data['Trendline'] = trendline
```
### Minopy Bands (MN_Upper & MN_Lower)
Minopy Bands work similarly to Bollinger Bands but use a different smoothing method.

**Algorithm:**
1. Smooth the data
   - Apply a Gaussian kernel to smooth the `close` prices.
   - The bandwidth (`scale=bandwidth`) controls the smoothness.
   - Formula:
     $$\hat{y}_i = \frac{\sum_j w_{ij}\, y_j}{\sum_j w_{ij}}, \qquad w_{ij} = \exp\!\left(-\frac{(x_i - x_j)^2}{2 \cdot \text{bandwidth}^2}\right)$$
     where $\hat{y}_i$ is the smoothed value, $w_{ij}$ are Gaussian weights, and $y_j$ are price values. (The Gaussian PDF's normalization constant cancels out in the weighted average.)
2. Mean Absolute Deviation (MAD)
   - Compute the average absolute deviation of the prices from the smoothed prices.
   - Formula:
     $$\text{MAD} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$$
3. Calculate the bands
   - `MN_Upper` = smoothed price + (multiplier × MAD)
   - `MN_Lower` = smoothed price - (multiplier × MAD)
Example code:
```python
import numpy as np
from scipy.stats import norm

column = "close"     # price column used for the bands
bandwidth = 8        # Gaussian kernel scale (example value)
mad_multiplier = 3   # band width multiplier (example value)

x = np.arange(len(data))
y = data[column].values

# Gaussian kernel weights: one row of weights per data point
weights = norm.pdf(x[:, None] - x, scale=bandwidth)

# Weighted average of prices = kernel-smoothed values
smoothed_values = np.dot(weights, y) / weights.sum(axis=1)

# Mean absolute deviation of prices from the smoothed curve
mad = np.mean(np.abs(y - smoothed_values))

data['MN_Upper'] = smoothed_values + mad_multiplier * mad
data['MN_Lower'] = smoothed_values - mad_multiplier * mad
```
I hope this clarifies how these columns are calculated! Let me know if you have more questions or need further details. 🙏
To put it simply:
- Trendline is a trend indicator. If the price goes above the Trendline, it's an uptrend; if it goes below, it's a downtrend.
- MN_Upper & MN_Lower (Minopy Bands) are similar to Bollinger Bands but use a smoother calculation based on Gaussian smoothing and Mean Absolute Deviation (MAD).
(And MN_Lower stands for Minopy Bands Lower, fixing the typo above ^^)
Let me know if you need more details! 📈📈
Thank you so much for the info, it really helps me understand.
I have just two follow-up questions:
1. Do you use all the data (since 2016) for the trendline and Minopy calculations, or a rolling window? If a rolling window, which size? In other words, in `x = np.arange(len(data))`, is `data` all the data from several years, or a window of fixed length?
2. In the Minopy calculations, what bandwidth and MAD multiplier do you use?
Thanks again 🙏
Hi shaiber,
Thanks for your follow-up! In our current implementation:
**Data Window:**
We use the entire dataset that's passed to the functions (i.e., all available data, not a rolling window). In the example code, `np.arange(len(data))` generates indices for the whole dataset. So if you're fetching data since 2016, both the Trendline and Minopy Bands are calculated over that entire span (but in the ETH-USDT dataset the oldest data is from 2017 ^^).

**Minopy Parameters:**
For the Minopy Bands, we use a default bandwidth of 8 and a MAD multiplier of 3. These parameters control the Gaussian smoothing and how wide the bands are, respectively.
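Putting the two calculations together with those defaults, here is a minimal self-contained sketch (the function name and structure are mine, not necessarily the production code):
```python
import numpy as np
import pandas as pd
from scipy.stats import norm

def add_indicators(data: pd.DataFrame, column: str = "close",
                   bandwidth: float = 8, mad_multiplier: float = 3) -> pd.DataFrame:
    """Add Trendline and Minopy Band columns, computed over the full span of `data`."""
    x = np.arange(len(data))
    y = data[column].values

    # Trendline: least-squares straight line over the whole series
    coefficients = np.polyfit(x, y, 1)
    data["Trendline"] = np.polyval(coefficients, x)

    # Minopy Bands: Gaussian-kernel smoothing plus a MAD envelope.
    # Note: this builds an N x N weight matrix, so memory grows quadratically with N.
    weights = norm.pdf(x[:, None] - x, scale=bandwidth)
    smoothed = weights.dot(y) / weights.sum(axis=1)
    mad = np.mean(np.abs(y - smoothed))
    data["MN_Upper"] = smoothed + mad_multiplier * mad
    data["MN_Lower"] = smoothed - mad_multiplier * mad
    return data

# Usage: data = add_indicators(data), where `data` is the hourly OHLC DataFrame
```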
Let me know if you have any more questions (^◡^)
Thanks for the quick reply!
However, I'm still failing to replicate your results for some reason.
I loaded the first 2403 rows of data from 2017 and tried to replicate the results in your calculated columns, and I see a few things:
1. For the trendline, the results are off from the start:
```
Original values:
0       304.172028
1       304.175079
2       304.178162
3       304.181213
4       304.184296
...
2399    295.380920
2400    295.386963
2401    295.393005
2402    295.399017

Calculated values:
0       304.565948
1       304.559082
2       304.552216
3       304.545319
4       304.538452
...
2399    288.075989
2400    288.069122
2401    288.062256
2402    288.055359

Difference:
0       -0.393921
1       -0.384003
2       -0.374054
3       -0.364105
4       -0.354156
...
2399     7.304932
2400     7.317841
2401     7.330750
2402     7.343658
```
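For reference, this is roughly the comparison code I'm running (the file name is mine; the calculation follows the snippet you posted):
```python
import numpy as np
import pandas as pd

df = pd.read_csv("eth_usdt_1h.csv").head(2403)  # hypothetical file name

# Replicate the Trendline calculation from your example
x = np.arange(len(df))
y = df["close"].values
coefficients = np.polyfit(x, y, 1)
calculated = np.polyval(coefficients, x)

comparison = pd.DataFrame({
    "original": df["Trendline"],
    "calculated": calculated,
})
comparison["difference"] = comparison["original"] - comparison["calculated"]
print(comparison)
```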
Is there something that pops to mind that could explain this?
2. I get a similar situation for the MN_Upper and MN_Lower columns.
3. For other columns, the mismatches seem to cluster around the gaps in the data (there are a few hours every day without trading in the data until Jan 9, 2025). So maybe you have special-case handling for these times?
4. After Jan 9, 2025, we suddenly have data for all hours of the day. Does it come from a different source?
Thanks!
Hi, thanks for taking a deep dive into the dataset and for your detailed observations!
A few things to note that might explain the differences:

**Data Gaps and Time Indexing:**
The trendline and Minopy Bands are calculated using the entire dataset, with `np.arange(len(data))` as the x-axis. If there are missing data points, this affects the regression and smoothing calculations. In our dataset there are indeed gaps: prior to making the data open-source, our server did not collect data between roughly 3 PM and 3 AM UTC. We primarily traded on 1-minute and 5-minute candlesticks using local time, so the indicator differences were negligible back then. However, due to funding constraints, we put the server to sleep during non-trading hours, which resulted in missing trading data until Jan 2025.

**Different Data Source After Jan 9, 2025:**
After Jan 9, 2025, we were given a $300 grant for the server under the condition of making the project open source, so we now run the server 24/7 and update prices and indicators on HF every 3 minutes. This change might lead to discrepancies in the calculated columns if the data structure or continuity differs from the earlier records.
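If you want to test whether the gaps alone explain the mismatch, one thing you could try (a hypothetical sketch, not how the dataset was generated) is reindexing to a continuous hourly grid before recomputing the indicators:
```python
import pandas as pd

# Hypothetical loading step; adjust file/column names to the dataset
df = pd.read_csv("eth_usdt_1h.csv", parse_dates=["timestamp"], index_col="timestamp")

# Build a continuous hourly index over the full span and fill the gaps
full_index = pd.date_range(df.index.min(), df.index.max(), freq="h")
df_continuous = df.reindex(full_index)
df_continuous["close"] = df_continuous["close"].interpolate()

# Then recompute Trendline / MN_Upper / MN_Lower on df_continuous and compare
```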
I hope this helps explain the differences you're seeing. Let me know if you need any more details or if there's anything else I can help with ^^ I'm also a little curious about your project with this dataset, so I can help you in more detail. 🫣
It is very helpful, thanks!
To answer your question about the project - I am getting into data science and machine learning, and I wanted to see if I could build a model that is able to predict ETH price movements. The model worked pretty well on data up to Jan 9, but very badly on later dates. So I wanted to understand how the data was gathered and how the indicators were calculated to see if there was some change that explains the difference in model performance before Jan 9 and after.
From the info you provided and the verifications I've made, I understand that there was no change in how the indicators are calculated, but there was a change in how the data is gathered (no gaps). I guess this change affects the model somehow, but I don't understand why.
If anything, I would expect a more complete dataset to help the model do better, not worse, so I'm still investigating.
Hi Shaiber, thanks for the detailed update!
It sounds like you’ve uncovered an interesting challenge. The performance drop after January 9 might stem from the switch to continuous data (vs. gapped data before), affecting indicators like Trendline and Minopy Bands, or a shift in market conditions.
I understand your confusion about why your model performs well on data up to January 9 but struggles with data after that date, especially since the indicator calculations haven’t changed. Let’s break this down and explore some possible reasons for this behavior, along with steps you can take to investigate further.
**Data Distribution Shift**
The data before January 9 had gaps (e.g., missing hours due to the server being offline), while after January 9 it's continuous. Even though the indicators are calculated the same way, this change in data continuity alters the underlying distribution. Your model, likely trained on the gapped data, might not generalize well to the continuous data because it's adapted to patterns that include those gaps.
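One quick way to check for such a shift (a sketch; the cutoff date and column names come from this thread, the loading step is a placeholder) is a two-sample Kolmogorov-Smirnov test on each column across the two eras:
```python
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical loading step; adjust file/column names to the dataset
df = pd.read_csv("eth_usdt_1h.csv", parse_dates=["timestamp"], index_col="timestamp")

pre = df.loc[:"2025-01-08"]   # gapped era
post = df.loc["2025-01-09":]  # continuous era

for col in ["close", "Trendline", "MN_Upper", "MN_Lower"]:
    stat, p = ks_2samp(pre[col].dropna(), post[col].dropna())
    print(f"{col}: KS statistic = {stat:.3f}, p-value = {p:.3g}")
```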
**Overfitting to Gapped Data**
If your model was trained primarily on pre-January 9 data, it might have overfitted to patterns specific to the gapped dataset (e.g., trends or volatility tied to the missing periods). When the data becomes continuous, those patterns might disappear or change, causing the model to perform poorly.
## Suggestions to Investigate and Address the Issue
Here are some practical steps you can take to pinpoint the cause and potentially improve your model’s performance:
**Train and Test on Post-January 9 Data**
- Split your dataset into two parts: pre-January 9 and post-January 9 (see the sketch after this list).
- Train a new model using only the continuous data (post-January 9) and test it on a later portion of that same data. If performance improves, it suggests the model struggles with the transition from gapped to continuous data rather than with the continuous data itself.
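For example, the split itself is straightforward (continuing with the `df` loaded in the earlier snippet):
```python
# Keep only the continuous era
post = df.loc["2025-01-09":]

# Chronological 80/20 train/test split within that era
cut = int(len(post) * 0.8)
train, test = post.iloc[:cut], post.iloc[cut:]
```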
**Cross-Validation for Time Series**
Use time-series-specific cross-validation (e.g., walk-forward validation) to evaluate your model's robustness across the transition period. This ensures it's tested on unseen future data, mimicking real-world prediction scenarios; a sketch follows below.
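A minimal walk-forward sketch with scikit-learn's `TimeSeriesSplit` (the feature matrix, target, and model here are toy stand-ins for whatever you've built):
```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

# Toy stand-ins; replace with your real features and target
X = np.random.rand(500, 8)
y = np.random.rand(500)
model = Ridge()

# Each fold trains on the past and tests on the immediate future
tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    model.fit(X[train_idx], y[train_idx])
    score = model.score(X[test_idx], y[test_idx])
    print(f"fold {fold}: test R^2 = {score:.3f}")
```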
That said, in my experience financial time series, especially cryptocurrencies, are notoriously nonstationary, meaning their statistical properties change over time. This makes it common for models to perform well on historical data but falter on future data.
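A rough way to see this nonstationarity in your own data (continuing with the same `df`; the window size is an arbitrary choice):
```python
# Rolling mean/std of hourly returns; drifting values hint at nonstationarity
returns = df["close"].pct_change()
window = 24 * 30  # ~30 days of hourly bars
print(returns.rolling(window).mean().describe())  # drifting mean
print(returns.rolling(window).std().describe())   # drifting volatility
```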
Predicting the price of ETH, or any other crypto, is quite complicated; you might consider incorporating price action into your model.
In any case, this is a project we hadn't updated for a few weeks; we just let our server run and push updates to Hugging Face, so we will review and improve this dataset 😸
Thanks for the detailed response and great suggestions!
When I said that the model performs badly on data after Jan 9, I meant that it also fails to train on that data, which is why it is so weird.
Of course it could be that after that date the patterns change and become unpredictable, but it is strange that this coincides exactly with the date when the data becomes continuous.
So I will continue to investigate, and I'll update if I find anything.
Thank you so much for your time and for publishing the dataset.
😸😸